{"name": "1", "title": "2-Source Dispersers for Sub-Polynomial Entropy and Ramsey Graphs Beating the Frankl-Wilson Construction", "abstract": "The main result of this paper is an explicit disperser for two independent sources on n bits, each of entropy k = n o(1). Put differently, setting N = 2n and K = 2k , we construct explicit N N Boolean matrices for which no K K sub-matrix is monochromatic. Viewed as adjacency matrices of bipartite graphs, this gives an explicit construction of K-Ramsey bipartite graphs of size N . This greatly improves the previous bound of k = o(n) of Barak, Kindler, Shaltiel, Sudakov and Wigderson [4]. It also significantly improves the 25-year record of k = ~ O(n) on the special case of Ramsey graphs, due to Frankl and Wilson [9]. The construction uses (besides \"classical\" extractor ideas) almost all of the machinery developed in the last couple of years for extraction from independent sources, including: Bourgain's extractor for 2 independent sources of some entropy rate < 1/2 [5] Raz's extractor for 2 independent sources, one of which has any entropy rate > 1/2 [18] Rao's extractor for 2 independent block-sources of entropy n (1) [17] The \"Challenge-Response\" mechanism for detecting \"entropy concentration\" of [4]. The main novelty comes in a bootstrap procedure which allows the Challenge-Response mechanism of [4] to be used with sources of less and less entropy, using recursive calls to itself. Subtleties arise since the success of this mechanism depends on restricting the given sources, and so recursion constantly changes the original sources. These are resolved via a new construct, in between a disperser and an extractor, which behaves like an extractor on sufficiently large subsources of the given ones. This version is only an extended abstract, please see the full version, available on the authors' homepages, for more details.", "fulltext": "INTRODUCTION\nThis paper deals with randomness extraction from weak\nrandom sources. Here a weak random source is a distribution\nwhich contains some entropy. The extraction task is to\ndesign efficient algorithms (called extractors) to convert this\nentropy into useful form, namely a sequence of independent\nunbiased bits. Beyond the obvious motivations (potential\nuse of physical sources in pseudorandom generators and in\nderandomization), extractors have found applications in a\n\nvariety of areas in theoretical computer science where randomness\ndoes not seem an issue, such as in efficient constructions\nof communication networks [24, 7], error correcting\ncodes [22, 12], data structures [14] and more.\nMost work in this subject over the last 20 years has focused\non what is now called seeded extraction, in which the\nextractor is given as input not only the (sample from the)\ndefective random source, but also a few truly random bits\n(called the seed). A comprehensive survey of much of this\nbody of work is [21].\nAnother direction, which has been mostly dormant till\nabout two years ago, is (seedless, deterministic) extraction\nfrom a few independent weak sources. This kind of extraction\nis important in several applications where it is unrealis-tic\nto have a short random seed or deterministically enumerate\nover its possible values. However, it is easily shown to be\nimpossible when only one weak source is available. When at\nleast 2 independent sources are available extraction becomes\npossible in principle. 
The 2-source case is the one we will\nfocus on in this work.\nThe rest of the introduction is structured as follows. We'll\nstart by describing our main result in the context of Ramsey\ngraphs. We then move to the context of extractors and disperser\n, describing the relevant background and stating our\nresult in this language. Then we give an overview of the\nconstruction of our dispersers, describing the main building\nblocks we construct along the way. As the construction is\nquite complex and its analysis quite subtle, in this proceedings\nversion we try to abstract away many of the technical\ndifficulties so that the main ideas, structure and tools used\nare highlighted. For that reason we also often state definitions\nand theorems somewhat informally.\n1.1\nRamsey Graphs\nDefinition 1.1. A graph on N vertices is called a K-Ramsey\nGraph if it contains no clique or independent set of\nsize K.\nIn 1947 Erd\nos published his paper inaugurating the Prob-abilistic\nMethod with a few examples, including a proof that\nmost graphs on N = 2\nn\nvertices are 2n-Ramsey. The quest\nfor constructing such graphs explicitly has existed ever since\nand lead to some beautiful mathematics.\nThe best record to date was obtained in 1981 by Frankl\nand Wilson [9], who used intersection theorems for set systems\nto construct N -vertex graphs which are 2\nn log n\n-Ramsey.\nThis bound was matched by Alon [1] using the Polynomial\nMethod, by Grolmusz [11] using low rank matrices over rings,\nand also by Barak [2] boosting Abbot's method with almost\nk-wise independent random variables (a construction that\nwas independently discovered by others as well). Remark-ably\nall of these different approaches got stuck at essentially\nthe same bound. In recent work, Gopalan [10] showed that\nother than the last construction, all of these can be viewed\nas coming from low-degree symmetric representations of the\nOR function. He also shows that any such symmetric representation\ncannot be used to give a better Ramsey graph,\nwhich gives a good indication of why these constructions\nhad similar performance. Indeed, as we will discuss in a\nlater section, the n entropy bound initially looked like a\nnatural obstacle even for our techniques, though eventually\nwe were able to surpass it.\nThe analogous question for bipartite graphs seemed much\nharder.\nDefinition 1.2. A bipartite graph on two sets of N vertices\nis a K-Ramsey Bipartite Graph if it has no K K\ncomplete or empty bipartite subgraph.\nWhile Erd\nos' result on the abundance of 2n-Ramsey graphs\nholds as is for bipartite graphs, until recently the best explicit\nconstruction of bipartite Ramsey graphs was 2\nn/2\nRamsey\n, using the Hadamard matrix. This was improved\nlast year, first to o(2\nn/2\n) by Pudlak and R\nodl [16] and then\nto 2\no(n)\nby Barak, Kindler, Shaltiel, Sudakov and Wigderson\n[4].\nIt is convenient to view such graphs as functions f :\n({0, 1}\nn\n)\n2\n{0, 1}. This then gives exactly the definition\nof a disperser.\nDefinition 1.3. A function f : ({0, 1}\nn\n)\n2\n{0, 1} is\ncalled a 2-source disperser for entropy k if for any two sets\nX, Y {0, 1}\nn\nwith |X| = |Y | = 2\nk\n, we have that the image\nf (X, Y ) is {0, 1}.\nThis allows for a more formal definition of explicitness: we\nsimply demand that the function f is computable in polynomial\ntime. 
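To make Definition 1.3 concrete, the following sketch checks the disperser property by brute force for toy parameters, using the inner-product function (which reappears below as the Hadamard extractor) merely as an example f. The check enumerates all sets of size 2^k and is therefore only feasible for tiny n; the point of the paper is to obtain the property explicitly at scales where such verification is hopeless.

```python
from itertools import combinations, product

def inner_product(x, y):
    # Hadamard/inner-product function <x, y> mod 2, used here only as a toy example f.
    return sum(a * b for a, b in zip(x, y)) % 2

def is_two_source_disperser(f, n, k):
    """Brute-force check of Definition 1.3: for every X, Y of size 2^k,
    f(X, Y) must hit both 0 and 1, i.e. the 2^n x 2^n matrix of f has
    no 2^k x 2^k monochromatic sub-matrix."""
    universe = list(product([0, 1], repeat=n))
    K = 2 ** k
    for X in combinations(universe, K):
        for Y in combinations(universe, K):
            values = {f(x, y) for x in X for y in Y}
            if len(values) == 1:          # monochromatic sub-matrix found
                return False
    return True

# Only feasible for tiny parameters.
print(is_two_source_disperser(inner_product, n=3, k=2))
```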
Most of the constructions mentioned above are\nexplicit in this sense.\n1\nOur main result (stated informally) significantly improves\nthe bounds in both the bipartite and non-bipartite settings:\nTheorem 1.4. For every N we construct polynomial time\ncomputable bipartite graphs which are 2\nn\no\n(1)\n-Ramsey. A standard\ntransformation of these graphs also yields polynomial\ntime computable ordinary Ramsey Graphs with the same parameters\n.\n1.2\nExtractors and Dispersers from independent\nsources\nNow we give a brief review of past relevant work (with the\ngoal of putting this paper in proper context) and describe\nsome of the tools from these past works that we will use.\nWe start with the basic definitions of k-sources by Nisan\nand Zuckerman [15] and of extractors and dispersers for independent\nsources by Santha and Vazirani [20].\nDefinition 1.5\n([15], see also [8]). The min-entropy\nof a distribution X is the maximum k such that for every\nelement x in its support, Pr[X = x] 2\n-k\n. If X is a distribution\non strings with min-entropy at least k, we will call\nX a k-source\n2\n.\nTo simplify the presentation, in this version of the paper\nwe will assume that we are working with entropy as opposed\nto min-entropy.\nDefinition 1.6\n([20]). A function f : ({0, 1}\nn\n)\nc\n\n{0, 1}\nm\nis a c-source (k, ) extractor if for every family of c\nindependent k-sources X\n1\n,\n, X\nc\n, the output f (X\n1\n,\n, X\nc\n)\n1\nThe Abbot's product based Ramsey-graph construction of\n[3] and the bipartite Ramsey construction of [16] only satisfy\na weaker notion of explicitness.\n2\nIt is no loss of generality to imagine that X is uniformly\ndistributed over some (unknown) set of size 2\nk\n.\n672\nis a -close\n3\nto uniformly distributed on m bits. f is a disperser\nfor the same parameters if the output is simply required\nto have a support of relative size (1 - ).\nTo simplify the presentation, in this version of the paper,\nwe will assume that = 0 for all of our constructions.\nIn this language, Erd\nos' theorem says that most functions\nf : ({0, 1}\nn\n)\n2\n{0,1} are dispersers for entropy 1 + log n\n(treating f as the characteristic function for the set of edges\nof the graph). The proof easily extends to show that indeed\nmost such functions are in fact extractors. This naturally\nchallenges us to find explicit functions f that are 2-source\nextractors.\nUntil one year ago, essentially the only known explicit\nconstruction was the Hadamard extractor Had defined by\nHad\n(x, y) = x, y ( mod 2). It is an extractor for entropy\nk > n/2 as observed by Chor and Goldreich [8] and can\nbe extended to give m = (n) output bits as observed by\nVazirani [23]. Over 20 years later, a recent breakthrough\nof Bourgain [5] broke this \"1/2 barrier\" and can handle 2\nsources of entropy .4999n, again with linear output length\nm = (n). This seemingly minor improvement will be crucial\nfor our work!\nTheorem 1.7\n([5]). There is a polynomial time computable\n2-source extractor f : ({0, 1}\nn\n)\n2\n{0, 1}\nm\nfor entropy\n.4999n and m = (n).\nNo better bounds are known for 2-source extractors. Now\nwe turn our attention to 2-source dispersers. It turned out\nthat progress for building good 2-source dispersers came via\nprogress on extractors for more than 2 sources, all happening\nin fast pace in the last 2 years. The seminal paper of Bourgain\n, Katz and Tao [6] proved the so-called \"sum-product\ntheorem\" in prime fields, a result in arithmetic combinatorics\n. 
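As a quick empirical illustration of the Chor-Goldreich observation above (a sanity check on randomly chosen flat sources, not a proof and not a construction from this paper), one can sample two independent sources, each uniform on a support of size 2^k with k > n/2, and measure the bias of the inner-product bit.

```python
import random

def had(x, y):
    # <x, y> mod 2, with n-bit strings encoded as integers
    return bin(x & y).count("1") % 2

n, k = 16, 10                            # entropy rate above 1/2
random.seed(1)

# Flat k-sources: uniform on random supports of size 2^k.
X_support = random.sample(range(2 ** n), 2 ** k)
Y_support = random.sample(range(2 ** n), 2 ** k)

samples = 200_000
ones = sum(had(random.choice(X_support), random.choice(Y_support)) for _ in range(samples))
print(f"empirical Pr[Had(X,Y) = 1] = {ones / samples:.3f}  (ideal: 0.500)")
```

We return now to the sum-product theorem of [6].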
This result has already found applications in diverse\nareas of mathematics, including analysis, number theory,\ngroup theory and ... extractor theory. Their work implic-itly\ncontained dispersers for c = O(log(n/k)) independent\nsources of entropy k (with output m = (k)). The use of\nthe \"sum-product\" theorem was then extended by Barak et\nal. [3] to give extractors with similar parameters. Note that\nfor linear entropy k = (n), the number of sources needed\nfor extraction c is a constant!\nRelaxing the independence assumptions via the idea of\nrepeated condensing, allowed the reduction of the number\nof independent sources to c = 3, for extraction from sources\nof any linear entropy k = (n), by Barak et al. [4] and\nindependently by Raz [18].\nFor 2 sources Barak et al. [4] were able to construct dispersers\nfor sources of entropy o(n). To do this, they first\nshowed that if the sources have extra structure (block-source\nstructure, defined below), even extraction is possible from 2\nsources. The notion of block-sources, capturing \"semi inde-pendence\"\nof parts of the source, was introduced by Chor\nand Goldreich [8]. It has been fundamental in the development\nof seeded extractors and as we shall see, is essential\nfor us as well.\nDefinition 1.8\n([8]). A distribution X = X\n1\n, . . . , X\nc\nis a c-block-source of (block) entropy k if every block X\ni\nhas entropy k even conditioned on fixing the previous blocks\nX\n1\n,\n, X\ni-1\nto arbitrary constants.\n3\nThe error is usually measured in terms of\n1\ndistance or\nvariation distance.\nThis definition allowed Barak et al. [4] to show that their\nextractor for 4 independent sources, actually performs as\nwell with only 2 independent sources, as long as both are\n2-block-sources.\nTheorem 1.9\n([4]). There exists a polynomial time computable\nextractor f : ({0, 1}\nn\n)\n2\n{0, 1} for 2 independent\n2-block-sources with entropy o(n).\nThere is no reason to assume that the given sources are\nblock-sources, but it is natural to try and reduce to this\ncase. This approach has been one of the most successful in\nthe extractor literature. Namely try to partition a source\nX into two blocks X = X\n1\n, X\n2\nsuch that X\n1\n, X\n2\nform a\n2-block-source. Barak et al. introduced a new technique to\ndo this reduction called the Challenge-Response mechanism,\nwhich is crucial for this paper. This method gives a way to\n\"find\" how entropy is distributed in a source X, guiding the\nchoice of such a partition. This method succeeds only with\nsmall probability, dashing the hope for an extractor, but still\nyielding a disperser.\nTheorem 1.10\n([4]). There exists a polynomial time\ncomputable 2-source disperser f : ({0, 1}\nn\n)\n2\n{0, 1} for\nentropy o(n).\nReducing the entropy requirement of the above 2-source\ndisperser, which is what we achieve in this paper, again\nneeded progress on achieving a similar reduction for extractors\nwith more independent sources. A few months ago Rao\n[?] was able to significantly improve all the above results\nfor c 3 sources. Interestingly, his techniques do not use\narithmetic combinatorics, which seemed essential to all the\npapers above. He improves the results of Barak et al. 
[3] to\ngive c = O((log n)/(log k))-source extractors for entropy k.\nNote that now the number c of sources needed for extraction\nis constant, even when the entropy is as low as n\n\nfor any\nconstant !\nAgain, when the input sources are block-sources with sufficiently\nmany blocks, Rao proves that 2 independent sources\nsuffice (though this result does rely on arithmetic combinatorics\n, in particular, on Bourgain's extractor).\nTheorem 1.11\n([?]). There is a polynomial time computable\nextractor f : ({0, 1}\nn\n)\n2\n{0, 1}\nm\nfor 2 independent\nc-block-sources with block entropy k and m = (k), as long\nas c = O((log n)/(log k)).\nIn this paper (see Theorem 2.7 below) we improve this\nresult to hold even when only one of the 2 sources is a c-block\n-source. The other source can be an arbitrary source\nwith sufficient entropy. This is a central building block in\nour construction. This extractor, like Rao's above, critically\nuses Bourgain's extractor mentioned above. In addition it\nuses a theorem of Raz [18] allowing seeded extractors to have\n\"weak\" seeds, namely instead of being completely random\nthey work as long as the seed has entropy rate > 1/2.\nMAIN NOTIONS AND NEW RESULTS\nThe main result of this paper is a polynomial time computable\ndisperser for 2 sources of entropy n\no(1)\n, significantly\nimproving both the results of Barak et al. [4] (o(n) entropy).\nIt also improves on Frankl and Wilson [9], who only built\nRamsey Graphs and only for entropy ~\nO(n).\n673\nTheorem 2.1\n(Main theorem, restated). There exists\na polynomial time computable 2-source disperser D :\n({0, 1}\nn\n)\n2\n{0, 1} for entropy n\no(1)\n.\nThe construction of this disperser will involve the construction\nof an object which in some sense is stronger and\nin another weaker than a disperser: a subsource somewhere\nextractor. We first define a related object: a somewhere extractor\n, which is a function producing several outputs, one of\nwhich must be uniform. Again we will ignore many technical\nissues such as error, min-entropy vs. entropy and more, in\ndefinitions and results, which are deferred to the full version\nof this paper.\nDefinition 2.2. A function f : ({0, 1}\nn\n)\n2\n({0, 1}\nm\n)\n\nis a 2-source somewhere extractor with outputs, for entropy\nk, if for every 2 independent k-sources X, Y there exists an\ni [] such the ith output f(X,Y )\ni\nis a uniformly distributed\nstring of m bits.\nHere is a simple construction of such a somewhere extractor\nwith as large as poly(n) (and the p in its name will\nstress the fact that indeed the number of outputs is that\nlarge). It will nevertheless be useful to us (though its description\nin the next sentence may be safely skipped). Define\npSE\n(x, y)\ni\n= V(E(x, i), E(y, i)) where E is a \"strong\" logarithmic\nseed extractor, and V is the Hadamard/Vazirani 2-source\nextractor. Using this construction, it is easy to see\nthat:\nProposition 2.3. For every n, k there is a polynomial\ntime computable somewhere extractor pSE : ({0, 1}\nn\n)\n2\n\n({0, 1}\nm\n)\n\nwith = poly(n) outputs, for entropy k, and m =\n(k).\nBefore we define subsource somewhere extractor, we must\nfirst define a subsource.\nDefinition 2.4\n(Subsources). 
Given random variables\nZ and ^\nZ on {0, 1}\nn\nwe say that ^\nZ is a deficiency d subsource\nof Z and write ^\nZ Z if there exists a set A {0,1}\nn\nsuch\nthat (Z|Z A) = ^Z and Pr[Z A] 2\n-d\n.\nA subsource somewhere extractor guarantees the \"some-where\nextractor\" property only on subsources X\n\n, Y\n\nof the\noriginal input distributions X, Y (respectively). It will be\nextremely important for us to make these subsources as large\nas possible (i.e. we have to lose as little entropy as possible).\nControlling these entropy deficiencies is a major technical\ncomplication we have to deal with. However we will be informal\nwith it here, mentioning it only qualitatively when\nneeded. We discuss this issue a little more in Section 6.\nDefinition 2.5. A function f : ({0, 1}\nn\n)\n2\n({0, 1}\nm\n)\n\nis a 2-source subsource somewhere extractor with outputs\nfor entropy k, if for every 2 independent k-sources X, Y there\nexists a subsource ^\nX of X, a subsource ^\nY of Y and an i []\nsuch the i\nth\noutput f ( ^\nX, ^\nY )\ni\nis a uniformly distributed string\nof m bits.\nA central technical result for us is that with this \"sub-source\"\nrelaxation, we can have much fewer outputs indeed\nwe'll replace poly(n) outputs in our first construction\nabove with n\no(1)\noutputs.\nTheorem 2.6\n(Subsource somewhere extractor).\nFor every > 0 there is a polynomial time computable subsource\nsomewhere extractor SSE : ({0, 1}\nn\n)\n2\n({0,1}\nm\n)\n\nwith = n\no(1)\noutputs, for entropy k = n\n\n, with output\nm = k.\nWe will describe the ideas used for constructing this important\nobject and analyzing it in the next section, where\nwe will also indicate how it is used in the construction of\nthe final disperser. Here we state a central building block,\nmentioned in the previous section (as an improvement of the\nwork of Rao [?]). We construct an extractor for 2 independent\nsources one of which is a block-sources with sufficient\nnumber of blocks.\nTheorem 2.7\n(Block Source Extractor). There is\na polynomial time computable extractor B : ({0, 1}\nn\n)\n2\n\n{0, 1}\nm\nfor 2 independent sources, one of which is a c-block-sources\nwith block entropy k and the other a source of entropy\nk, with m = (k), and c = O((log n)/(log k)).\nA simple corollary of this block-source extractor B, is the\nfollowing weaker (though useful) somewhere block-source\nextractor SB. A source Z = Z\n1\n, Z\n2\n,\n, Z\nt\nis a somewhere\nc-block-source of block entropy k if for some c indices i\n1\n<\ni\n2\n<\n< i\nc\nthe source Z\ni\n1\n, Z\ni\n2\n,\n, Z\ni\nc\nis a c-block-source.\nCollecting the outputs of B on every c-subset of blocks results\nin that somewhere extractor.\nCorollary 2.8. There is a polynomial time computable\nsomewhere extractor SB : ({0, 1}\nn\n)\n2\n({0, 1}\nm\n)\n\nfor 2 independent\nsources, one of which is a somewhere c-block-sources\nwith block entropy k and t blocks total and the other a source\nof entropy k, with m = (k), c = O((log n)/(log k)), and\nt\nc\n.\nIn both the theorem and corollary above, the values of\nentropy k we will be interested in are k = n\n(1)\n. It follows\nthat a block-source with a constant c = O(1) suffices.\nTHE CHALLENGE-RESPONSE MECHANISM\nWe now describe abstractly a mechanism which will be\nused in the construction of the disperser as well as the subsource\nsomewhere extractor. Intuitively, this mechanism allows\nus to identify parts of a source which contain large\namounts of entropy. 
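Returning briefly to Corollary 2.8: the somewhere extractor SB is assembled from the block-source extractor B in a purely combinatorial way, by running B on every c-subset of the t blocks (taken in order) against the second source, which gives one candidate output row per c-subset, i.e. (t choose c) <= t^c rows. A minimal sketch, treating B as a black box:

```python
from itertools import combinations

def somewhere_block_extractor(z_blocks, y, B, c):
    """Sketch of Corollary 2.8: run the block-source extractor B (a black box
    here; the paper builds it in Theorem 2.7) on every c-subset of the t blocks
    of the first source, against the second source y. If some c of the blocks
    form a c-block-source, the corresponding row is uniform."""
    rows = []
    for subset in combinations(range(len(z_blocks)), c):   # indices i1 < i2 < ... < ic
        candidate_block_source = [z_blocks[i] for i in subset]
        rows.append(B(candidate_block_source, y))
    return rows            # (t choose c) rows, one of which is uniform
```

We now return to the Challenge-Response mechanism.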
One can hope that using such a mechanism\none can partition a given source into blocks in a way\nwhich make it a block-source, or alternatively focus on a part\nof the source which is unusually condensed with entropy two\ncases which may simplify the extraction problem.\nThe reader may decide, now or in the middle of this\nsection, to skip ahead to the next section which describes\nthe construction of the subsource somewhere extractor SSE,\nwhich extensively uses this mechanism. Then this section\nmay seem less abstract, as it will be clearer where this mechanism\nis used.\nThis mechanism was introduced by Barak et al. [4], and\nwas essential in their 2-source disperser. Its use in this paper\nis far more involved (in particular it calls itself recursively,\na fact which creates many subtleties). However, at a high\nlevel, the basic idea behind the mechanism is the same:\nLet Z be a source and Z\n\na part of Z (Z projected on a\nsubset of the coordinates). We know that Z has entropy k,\n674\nand want to distinguish two possibilities: Z\n\nhas no entropy\n(it is fixed) or it has at least k\n\nentropy. Z\n\nwill get a pass\nor fail grade, hopefully corresponding to the cases of high or\nno entropy in Z\n\n.\nAnticipating the use of this mechanism, it is a good idea\nto think of Z as a \"parent\" of Z\n\n, which wants to check if\nthis \"child\" has sufficient entropy. Moreover, in the context\nof the initial 2 sources X, Y we will operate on, think of Z\nas a part of X, and thus that Y is independent of Z and Z\n\n.\nTo execute this \"test\" we will compute two sets of strings\n(all of length m, say): the Challenge C = C(Z\n\n, Y ) and\nthe Response R = R(Z, Y ). Z\n\nfails if C R and passes\notherwise.\nThe key to the usefulness of this mechanism is the following\nlemma, which states that what \"should\" happen, indeed\nhappens after some restriction of the 2 sources Z and Y .\nWe state it and then explain how the functions C and R are\ndefined to accommodate its proof.\nLemma 3.1. Assume Z, Y are sources of entropy k.\n1. If Z\n\nhas entropy k\n\n+ O(m), then there are subsources\n^\nZ of Z and ^\nY of Y , such that\nPr[ ^\nZ\n\npasses] = Pr[C( ^\nZ\n\n, ^\nY )\nR\n( ^\nZ, ^\nY )] 1-n\nO(1)\n2\n-m\n2. If Z\n\nis fixed (namely, has zero entropy), then for some\nsubsources ^\nZ of Z and ^\nY of Y , we have\nPr[Z\n\nfails] = Pr[C( ^\nZ\n\n, ^\nY ) R( ^Z, ^Y)] = 1\nOnce we have such a mechanism, we will design our disperser\nalgorithm assuming that the challenge response mechanism\ncorrectly identifies parts of the source with high or\nlow levels of entropy. Then in the analysis, we will ensure\nthat our algorithm succeeds in making the right decisions,\nat least on subsources of the original input sources.\nNow let us explain how to compute the sets C and R. We\nwill use some of the constructs above with parameters which\ndon't quite fit.\nThe response set R(Z, Y ) = pSE(Z, Y ) is chosen to be the\noutput of the somewhere extractor of Proposition 2.3. The\nchallenge set C(Z\n\n, Y ) = SSE(Z\n\n, Y ) is chosen to be the output\nof the subsource somewhere extractor of Theorem 2.6.\nWhy does it work? We explain each of the two claims\nin the lemma in turn (and after each comment on the important\nparameters and how they differ from Barak et al.\n[4]).\n1. Z\n\nhas entropy. We need to show that Z\n\npasses the\ntest with high probability. We will point to the output\nstring in C( ^\nZ\n\n, ^\nY\n\n) which avoids R( ^\nZ, ^\nY ) with high\nprobability as follows. 
In the analysis we will use the\nunion bound on several events, one associated with\neach (poly(n) many) string in pSE( ^\nZ, ^\nY ). We note\nthat by the definition of the response function, if we\nwant to fix a particular element in the response set to\na particular value, we can do this by fixing E(Z, i) and\nE\n(Y, i). This fixing keeps the restricted sources independent\nand loses only O(m) entropy. In the subsource\nof Z\n\nguaranteed to exist by Theorem 2.6 we can afford\nto lose this entropy in Z\n\n. Thus we conclude that one\nof its outputs is uniform. The probability that this\noutput will equal any fixed value is thus 2\n-m\n, completing\nthe argument. We note that we can handle\nthe polynomial output size of pSE, since the uniform\nstring has length m = n\n(1)\n(something which could\nnot be done with the technology available to Barak et\nal. [4]).\n2. Z\n\nhas no entropy. We now need to guarantee that\nin the chosen subsources (which we choose) ^\nZ, ^\nY , all\nstrings in C = C( ^\nZ\n\n, ^\nY ) are in R( ^\nZ, ^\nY ). First notice\nthat as Z\n\nis fixed, C is only a function of Y . We\nset ~\nY to be the subsource of Y that fixes all strings\nin C = C(Y ) to their most popular values (losing\nonly m entropy from Y ). We take care of including\nthese fixed strings in R(Z, ~\nY ) one at a time, by\nrestricting to subsources assuring that. Let be any\nm-bit string we want to appear in R(Z, ~\nY ). Recall that\nR\n(z, y) = V(E(z, i), E(y, i)). We pick a \"good\" seed i,\nand restrict Z, ~\nY to subsources with only O(m) less\nentropy by fixing E(Z, i) = a and E( ~\nY , i) = b to values\n(a, b) for which V(a, b) = . This is repeated suc-cessively\ntimes, and results in the final subsources\n^\nZ, ^\nY on which ^\nZ\n\nfails with probability 1. Note that\nwe keep reducing the entropy of our sources times,\nwhich necessitates that this be tiny (here we could\nnot tolerate poly(n), and indeed can guarantee n\no(1)\n,\nat least on a subsource this is one aspect of how crucial\nthe subsource somewhere extractor SSE is to the\nconstruction.\nWe note that initially it seemed like the Challenge-Response\nmechanism as used in [4] could not be used to handle entropy\nthat is significantly less than n (which is approxi-mately\nthe bound that many of the previous constructions\ngot stuck at). The techniques of [4] involved partitioning\nthe sources into t pieces of length n/t each, with the hope\nthat one of those parts would have a significant amount of\nentropy, yet there'd be enough entropy left over in the rest\nof the source (so that the source can be partitioned into a\nblock source).\nHowever it is not clear how to do this when the total\nentropy is less than n. On the one hand we will have\nto partition our sources into blocks of length significantly\nmore than n (or the adversary could distribute a negligible\nfraction of entropy in all blocks). On the other hand, if\nour blocks are so large, a single block could contain all the\nentropy. Thus it was not clear how to use the challenge\nresponse mechanism to find a block source.\nTHE SUBSOURCE SOMEWHERE EXTRACTOR\nSSE\nWe now explain some of the ideas behind the construction\nof the subsource somewhere extractor SSE of Theorem 2.6.\nConsider the source X. We are seeking to find in it a somewhere\nc-block-source, so that we can use it (together with Y )\nin the block-source extractor of Theorem 2.8. Like in previous\nworks in the extractor literature (e.g. 
[19, 13]) we use a\n\"win-win\" analysis which shows that either X is already a\nsomewhere c-block-source, or it has a condensed part which\ncontains a lot of the entropy of the source. In this case we\nproceed recursively on that part. Continuing this way we\neventually reach a source so condensed that it must be a\nsomewhere block source. Note that in [4], the challenge response\nmechanism was used to find a block source also, but\nthere the entropy was so high that they could afford to use\n675\nt blocks\nlow\nhigh\nmed\nn bits total\nt blocks\nmed\nmed\nlow\nhigh\nresponded\nChallenge\nChallenge\nresponded\nChallenge Unresponded\nmed\nmed\nn/t bits total\nSB\nSB\nOutputs\nSomewhere Block Source!\nNot Somewhere block source\nX\nRandom Row\n< k'\n0< low < k'/t\nk'/c < high < k'\nk'/t < med < k'/c\nFigure 1: Analysis of the subsource somewhere extractor.\na tree of depth 1. They did not need to recurse or condense\nthe sources.\nConsider the tree of parts of the source X evolved by\nsuch recursion. Each node in the tree corresponds to some\ninterval of bit locations of the source, with the root node\ncorresponding to the entire source. A node is a child of another\nif its interval is a subinterval of the parent. It can be\nshown that some node in the tree is \"good\"; it corresponds\nto a somewhere c-source, but we don't know which node is\ngood. Since we only want a somewhere extractor, we can\napply to each node the somewhere block-source extractor of\nCorollary 2.8 this will give us a random output in every\n\"good\" node of the tree. The usual idea is output all these\nvalues (and in seeded extractors, merge them using the ex-ternally\ngiven random seed). However, we cannot afford to\ndo that here as there is no external seed and the number of\nthese outputs (the size of the tree) is far too large.\nOur aim then will be to significantly prune this number\nof candidates and in fact output only the candidates on one\npath to a canonical \"good\" node. First we will give a very informal\ndescription of how to do this (Figure 1). Before calling\nSSE recursively on a subpart of a current part of X, we'll\nuse the \"Challenge-Response\" mechanism described above\nto check if \"it has entropy\".\n4\nWe will recurse only with the\nfirst (in left-to-right order) part which passes the \"entropy\ntest\". Thus note that we will follow a single path on this\ntree. The algorithm SSE will output only the sets of strings\nproduced by applying the somewhere c-block-extractor SB\non the parts visited along this path.\nNow let us describe the algorithm for SSE. SSE will be\ninitially invoked as SSE(x, y), but will recursively call itself\nwith different inputs z which will always be substrings of x.\n4\nWe note that we ignore the additional complication that\nSSE\nwill actually use recursion also to compute the challenge\nin the challenge-response mechanism.\nAlgorithm: SSE\n(z, y)\nLet pSE(., .) be the somewhere extractor with a polynomial\nnumber of outputs of Proposition 2.3.\nLet SB be the somewhere block source extractor of Corollary\n2.8.\nGlobal Parameters: t, the branching factor of the tree. k\nthe original entropy of the sources.\nOutput will be a set of strings.\n1. If z is shorter than k, return the empty set, else\ncontinue.\n2. Partition z into t equal parts z = z\n1\n, z\n2\n, . . . , z\nt\n.\n3. Compute the response set R(z, y) which is the set of\nstrings output by pSE(z, y).\n4. For i [t], compute the challenge set C(z\ni\n, y), which\nis the set of outputs of SSE(z\ni\n, y).\n5. 
Let h be the smallest index for which the challenge set\nC\n(z\nh\n, y) is not contained in the response set (set h = t\nif no such index exists).\n6. Output SB(z, y) concatenated with SSE(z\nh\n, y).\nProving that indeed there are subsources on which SSE\nwill follow a path to a \"good\" (for these subsources) node,\nis the heart of the analysis. It is especially complex due\nto the fact that the recursive call to SSE on subparts of\nthe current part is used to generate the Challenges for the\nChallenge-Response mechanism. Since SSE works only on\na subsources we have to guarantee that restriction to these\ndoes not hamper the behavior of SSE in past and future calls\nto it.\nLet us turn to the highlights of the analysis, for the proof\nof Theorem 2.6. Let k\n\nbe the entropy of the source Z at\nsome place in this recursion. Either one of its blocks Z\ni\nhas\n676\nentropy k\n\n/c, in which case it is very condensed, since its\nsize is n/t for t c), or it must be that c of its blocks form\na c-block source with block entropy k\n\n/t (which is sufficient\nfor the extractor B used by SB). In the 2nd case the fact\nthat SB(z, y) is part of the output of of our SSE guarantees\nthat we are somewhere random. If the 2nd case doesn't hold,\nlet Z\ni\nbe the leftmost condensed block. We want to ensure\nthat (on appropriate subsources) SSE calls itself on that ith\nsubpart. To do so, we fix all Z\nj\nfor j < i to constants z\nj\n. We\nare now in the position described in the Challenge-Response\nmechanism section, that (in each of the first i parts) there\nis either no entropy or lots of entropy. We further restrict\nto subsources as explained there which make all first i - 1\nblocks fail the \"entropy test\", and the fact that Z\ni\nstill has\nlots of entropy after these restrictions (which we need to\nprove) ensures that indeed SSE will be recursively applied\nto it.\nWe note that while the procedure SSE can be described recursively\n, the formal analysis of fixing subsources is actually\ndone globally, to ensure that indeed all entropy requirements\nare met along the various recursive calls.\nLet us remark on the choice of the branching parameter t.\nOn the one hand, we'd like to keep it small, as it dominates\nthe number of outputs t\nc\nof SB, and thus the total number of\noutputs (which is t\nc\nlog\nt\nn). For this purpose, any t = n\no(1)\nwill do. On the other hand, t should be large enough so that\ncondensing is faster than losing entropy. Here note that if\nZ is of length n, its child has length n/t, while the entropy\nshrinks only from k\n\nto k\n\n/c. A simple calculation shows that\nif k\n(log t)/ log c)\n> n\n2\nthen a c block-source must exist along\nsuch a path before the length shrinks to k. Note that for\nk = n\n(1)\na (large enough) constant t suffices (resulting in\nonly logarithmic number of outputs of SSE). This analysis\nis depicted pictorially in Figure 1.\nTHE FINAL DISPERSER\nD\nFollowing is a rough description of our disperser D proving\nTheorem 2.1. The high level structure of D will resemble the\nstructure of SSE - we will recursively split the source X and\nlook for entropy in the parts. However now we must output\na single value (rather than a set) which can take both values\n0 and 1. This was problematic in SSE, even knowing where\nthe \"good\" part (containing a c-block-source) was! How can\nwe do so now?\nWe now have at our disposal a much more powerful tool\nfor generating challenges (and thus detecting entropy), namely\nthe subsource somewhere disperser SSE. 
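To make the recursive structure of Algorithm SSE above explicit, here is a structural sketch in which pSE and SB are black boxes (Proposition 2.3 and Corollary 2.8) and all of the length, entropy and error bookkeeping of the real construction is omitted; it only mirrors steps 1 to 6.

```python
def SSE(z, y, pSE, SB, t, k):
    """Structural sketch of Algorithm SSE; not the full construction."""
    if len(z) < k:                                     # step 1: z shorter than k
        return []
    block_len = len(z) // t                            # step 2: t (roughly) equal parts
    blocks = [z[i * block_len:(i + 1) * block_len] for i in range(t)]
    responses = set(pSE(z, y))                         # step 3: response set R(z, y)
    challenges = [SSE(zi, y, pSE, SB, t, k) for zi in blocks]   # step 4: C(z_i, y)
    h = next((i for i, c in enumerate(challenges)      # step 5: first unresponded challenge,
              if not set(c) <= responses), t - 1)      #         last block if none
    return SB(z, y) + challenges[h]                    # step 6: SB(z, y) then SSE(z_h, y)
```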
Note that in constructing\nSSE we only had essentially the somewhere c-block-source\nextractor SB to (recursively) generate the challenges,\nbut it depended on a structural property of the block it was\napplied on. Now SSE does not assume any structure on its\ninput sources except sufficient entropy\n5\n.\nLet us now give a high level description of the disperser\nD\n. It too will be a recursive procedure. If when processing\nsome part Z of X it \"realizes\" that a subpart Z\ni\nof Z has\nentropy, but not all the entropy of Z (namely Z\ni\n, Z is a\n2-block-source) then we will halt and produce the output\nof D. Intuitively, thinking about the Challenge-Response\nmechanism described above, the analysis implies that we\n5\nThere is a catch it only works on subsources of them!\nThis will cause us a lot of head ache; we will elaborate on it\nlater.\ncan either pass or fail Z\ni\n(on appropriate subsources). But\nthis means that the outcome of this \"entropy test\" is a 1-bit\ndisperser!\nTo capitalize on this idea, we want to use SSE to identify\nsuch a block-source in the recursion tree. As before, we scan\nthe blocks from left to right, and want to distinguish three\npossibilities.\nlow\nZ\ni\nhas low entropy. In this case we proceed to i + 1.\nmedium\nZ\ni\nhas \"medium\" entropy (Z\ni\n, Z is a block-source).\nIn which case we halt and produce an output (zero or\none).\nhigh\nZ\ni\nhas essentially all entropy of Z. In this case we\nrecurse on the condensed block Z\ni\n.\nAs before, we use the Challenge-Response mechanism (with\na twist). We will compute challenges C(Z\ni\n, Y ) and responses\nR\n(Z, Y ), all strings of length m. The responses are computed\nexactly as before, using the somewhere extractor pSE. The\nChallenges are computed using our subsource somewhere\nextractor SSE.\nWe really have 4 possibilities to distinguish, since when we\nhalt we also need to decide which output bit we give. We will\ndo so by deriving three tests from the above challenges and\nresponses: (C\nH\n, R\nH\n), (C\nM\n, R\nM\n), (C\nL\n, R\nL\n) for high, medium\nand low respectively, as follows. Let m m\nH\n>> m\nM\n>>\nm\nL\nbe appropriate integers: then in each of the tests above\nwe restrict ourselves to prefixes of all strings of the appropriate\nlengths only. So every string in C\nM\nwill be a prefix\nof length m\nM\nof some string in C\nH\n. Similarly, every string\nin R\nL\nis the length m\nL\nprefix of some string in R\nH\n. Now\nit is immediately clear that if C\nM\nis contained in R\nM\n, then\nC\nL\nis contained in R\nL\n. Thus these tests are monotone, if\nour sample fails the high test, it will definitely fail all tests.\nAlgorithm: D\n(z, y)\nLet pSE(., .) be the somewhere extractor with a polynomial\nnumber of outputs of Proposition 2.3.\nLet SSE(., .) be the subsource somewhere extractor of Theorem\n2.6.\nGlobal Parameters: t, the branching factor of the tree. k\nthe original entropy of the sources.\nLocal Parameters for recursive level: m\nL\nm\nM\nm\nH\n.\nOutput will be an element of {0, 1}.\n1. If z is shorter than k, return 0.\n2. Partition z into t equal parts z = z\n1\n, z\n2\n, . . . , z\nt\n.\n3. Compute three response sets R\nL\n, R\nM\n, R\nH\nusing pSE(z, y).\nR\nj\nwill be the prefixes of length m\nj\nof the strings in\npSE\n(z, y).\n4. For each i [t], compute three challenge sets C\ni\nL\n, C\ni\nM\n, C\ni\nH\nusing SSE(z\ni\n, y). C\ni\nj\nwill be the prefixes of length m\nj\nof the strings in SSE(z\ni\n, y).\n5. 
Let h be the smallest index for which the challenge set\nC\nL\nis not contained in the response set R\nL\n, if there is\nno such index, output 0 and halt.\n6. If C\nh\nH\nis contained in R\nH\nand C\nh\nH\nis contained in R\nM\n,\noutput 0 and halt. If C\nh\nH\nis contained in R\nH\nbut C\nh\nH\nis not contained in R\nM\n, output 1 and halt.\n677\nt blocks\nt blocks\nt blocks\nfail\nfail\nfail\npass\npass\npass\nfail\nfail\nfail\nfail\nfail\nfail\nfail\nfail\nfail\nfail\nfail\nfail\npass\npass\nfail\npass\nfail\nfail\nlow\nlow\nhigh\nlow\nlow\nlow\nhigh\nlow\nmed\nn bits total\nn/t bits total\nX\nlow\nlow\nOutput 0\nOutput 1\nn/t^2 bits total\nX_3\n(X_3)_4\nFigure 2: Analysis of the disperser.\n7. Output D(z\nh\n, y),\nFirst note the obvious monotonicity of the tests. If Z\ni\nfails\none of the tests it will certainly fail for shorter strings. Thus\nthere are only four outcomes to the three tests, written in the\norder (low, medium, high): (pass, pass, pass), (pass, pass, fail),\n(pass, fail, fail) and (fail, fail, fail).\nConceptually, the algorithm\nis making the following decisions using the four tests:\n1. (fail, fail, fail): Assume Z\ni\nhas low entropy and proceed\nto block i + 1.\n2. (pass, fail, fail): Assume Z\ni\nis medium, halt and output\n0.\n3. (pass, pass, fail): Assume Z\ni\nis medium, halt and output\n1.\n4. (pass, pass, pass): Assume Z\ni\nis high and recurse on Z\ni\n.\nThe analysis of this idea (depicted in Figure 2).turns out\nto be more complex than it seems. There are two reasons for\nthat. Now we briefly explain them and the way to overcome\nthem in the construction and analysis.\nThe first reason is the fact mentioned above, that SSE\nwhich generates the challenges, works only on a subsources\nof the original sources. Restricting to these subsources at\nsome level of the recursion (as required by the analysis of of\nthe test) causes entropy loss which affects both definitions\n(such as these entropy thresholds for decisions) and correct-ness\nof SSE in higher levels of recursion. Controlling this entropy\nloss is achieved by calling SSE recursively with smaller\nand smaller entropy requirements, which in turn limits the\nentropy which will be lost by these restrictions. In order not\nto lose all the entropy for this reason alone, we must work\nwith special parameters of SSE, essentially requiring that at\ntermination it has almost all the entropy it started with.\nThe second reason is the analysis of the test when we are\nin a medium block. In contrast with the above situation, we\ncannot consider the value of Z\ni\nfixed when we need it to fail\non the Medium and Low tests. We need to show that for\nthese two tests (given a pass for High), they come up both\n(pass, fail) and (fail, fail) each with positive probability.\nSince the length of Medium challenges and responses is\nm\nM\n, the probability of failure is at least exp(-(m\nM\n)) (this\nfollows relatively easily from the fact that the responses are\nsomewhere random). If the Medium test fails so does the\nLow test, and thus (fail, fail) has a positive probability and\nour disperser D outputs 0 with positive probability.\nTo bound (pass, fail) we first observe (with a similar\nreasoning) that the low test fails with probability at least\nexp(-(m\nL\n)). But we want the medium test to pass at the\nsame time. This probability is at least the probability that\nlow\nfails minus the probability that medium fails. 
We already\nhave a bound on the latter: it is at most poly(n)exp(-m\nM\n).\nHere comes our control of the different length into play - we\ncan make the m\nL\nsufficiently smaller than m\nM\nto yield this\ndifference positive. We conclude that our disperser D outputs\n1 with positive probability as well.\nFinally, we need to take care of termination: we have to\nensure that the recurrence always arrives at a medium subpart\n, but it is easy to chose entropy thresholds for low, medium\nand high to ensure that this happens.\n678\nRESILIENCY AND DEFICIENCY\nIn this section we will breifly discuss an issue which arises\nin our construction that we glossed over in the previous sections\n. Recall our definition of subsources:\nDefinition 6.1\n(Subsources). Given random variables\nZ and ^\nZ on {0, 1}\nn\nwe say that ^\nZ is a deficiency d subsource\nof Z and write ^\nZ Z if there exists a set A {0,1}\nn\nsuch\nthat (Z|A) = ^Z and Pr[Z A] 2\n-d\n.\nRecall that we were able to guarantee that our algorithms\nmade the right decisions only on subsources of the original\nsource. For example, in the construction of our final disperser\n, to ensure that our algorithms correctly identify the\nright high block to recurse on, we were only able to guarantee\nthat there are subsources of the original sources in\nwhich our algorithm makes the correct decision with high\nprobability. Then, later in the analysis we had to further\nrestrict the source to even smaller subsources. This leads to\ncomplications, since the original event of picking the correct\nhigh\nblock, which occurred with high probability, may become\nan event which does not occur with high probability\nin the current subsource. To handle these kinds of issues,\nwe will need to be very careful in measuring how small our\nsubsources are.\nIn the formal analysis we introduce the concept of resiliency\nto deal with this. To give an idea of how this works,\nhere is the actual definition of somewhere subsource extractor\nthat we use in the formal analysis.\nDefinition 6.2\n(subsource somewhere extractor).\nA function SSE : {0, 1}\nn\n{0, 1}\nn\n({0, 1}\nm\n)\n\nis a subsource\nsomewhere extractor with nrows output rows, entropy\nthreshold k, deficiency def, resiliency res and error if for\nevery (n, k)-sources X, Y there exist a deficiency def subsource\nX\ngood\nof X and a deficiency def subsource Y\ngood\nof\nY such that for every deficiency res subsource X\n\nof X\ngood\nand deficiency res subsource Y\n\nof Y\ngood\n, the random variable\nSSE(X\n\n, Y\n\n) is -close to a m somewhere random\ndistribution.\nIt turns out that our subsource somewhere extractor does\nsatisfy this stronger definition. The advantage of this definition\nis that it says that once we restrict our attention to\nthe good subsources X\ngood\n, Y\ngood\n, we have the freedom to further\nrestrict these subsources to smaller subsources, as long\nas our final subsources do not lose more entropy than the\nresiliency permits.\nThis issue of managing the resiliency for the various objects\nthat we construct is one of the major technical challenges\nthat we had to overcome in our construction.\nOPEN PROBLEMS\nBetter Independent Source Extractors\nA bottleneck to\nimproving our disperser is the block versus general\nsource extractor of Theorem 2.7. 
A good next step\nwould be to try to build an extractor for one block\nsource (with only a constant number of blocks) and\none other independent source which works for polylog-arithmic\nentropy, or even an extractor for a constant\nnumber of sources that works for sub-polynomial entropy\n.\nSimple Dispersers\nWhile our disperser is polynomial time\ncomputable, it is not as explicit as one might have\nhoped. For instance the Ramsey Graph construction\nof Frankl-Wilson is extremely simple: For a prime p,\nlet the vertices of the graph be all subsets of [p\n3\n] of\nsize p\n2\n- 1. Two vertices S,T are adjacent if and only\nif |S T| -1 mod p. It would be nice to find a good\ndisperser that beats the Frankl-Wilson construction,\nyet is comparable in simplicity.\nREFERENCES\n[1] N. Alon. The shannon capacity of a union.\nCombinatorica, 18, 1998.\n[2] B. Barak. A simple explicit construction of an\nn\n~\no(log n)\n-ramsey graph. Technical report, Arxiv, 2006.\nhttp://arxiv.org/abs/math.CO/0601651\n.\n[3] B. Barak, R. Impagliazzo, and A. Wigderson.\nExtracting randomness using few independent sources.\nIn Proceedings of the 45th Annual IEEE Symposium\non Foundations of Computer Science, pages 384393,\n2004.\n[4] B. Barak, G. Kindler, R. Shaltiel, B. Sudakov, and\nA. Wigderson. Simulating independence: New\nconstructions of condensers, Ramsey graphs,\ndispersers, and extractors. In Proceedings of the 37th\nAnnual ACM Symposium on Theory of Computing,\npages 110, 2005.\n[5] J. Bourgain. More on the sum-product phenomenon in\nprime fields and its applications. International Journal\nof Number Theory, 1:132, 2005.\n[6] J. Bourgain, N. Katz, and T. Tao. A sum-product\nestimate in finite fields, and applications. Geometric\nand Functional Analysis, 14:2757, 2004.\n[7] M. Capalbo, O. Reingold, S. Vadhan, and\nA. Wigderson. Randomness conductors and\nconstant-degree lossless expanders. In Proceedings of\nthe 34th Annual ACM Symposium on Theory of\nComputing, pages 659668, 2002.\n[8] B. Chor and O. Goldreich. Unbiased bits from sources\nof weak randomness and probabilistic communication\ncomplexity. SIAM Journal on Computing,\n17(2):230261, 1988.\n[9] P. Frankl and R. M. Wilson. Intersection theorems\nwith geometric consequences. Combinatorica,\n1(4):357368, 1981.\n[10] P. Gopalan. Constructing ramsey graphs from boolean\nfunction representations. In Proceedings of the 21th\nAnnual IEEE Conference on Computational\nComplexity, 2006.\n[11] V. Grolmusz. Low rank co-diagonal matrices and\nramsey graphs. Electr. J. Comb, 7, 2000.\n[12] V. Guruswami. Better extractors for better codes?\nElectronic Colloquium on Computational Complexity\n(ECCC), (080), 2003.\n[13] C. J. Lu, O. Reingold, S. Vadhan, and A. Wigderson.\nExtractors: Optimal up to constant factors. In\nProceedings of the 35th Annual ACM Symposium on\nTheory of Computing, pages 602611, 2003.\n[14] P. Miltersen, N. Nisan, S. Safra, and A. Wigderson.\nOn data structures and asymmetric communication\ncomplexity. Journal of Computer and System\nSciences, 57:3749, 1 1998.\n679\n[15] N. Nisan and D. Zuckerman. More deterministic\nsimulation in logspace. In Proceedings of the 25th\nAnnual ACM Symposium on Theory of Computing,\npages 235244, 1993.\n[16] P. Pudlak and V. Rodl. Pseudorandom sets and\nexplicit constructions of ramsey graphs. Submitted for\npublication, 2004.\n[17] A. Rao. 
Extractors for a constant number of\npolynomially small min-entropy independent sources.\nIn Proceedings of the 38th Annual ACM Symposium\non Theory of Computing, 2006.\n[18] R. Raz. Extractors with weak random seeds. In\nProceedings of the 37th Annual ACM Symposium on\nTheory of Computing, pages 1120, 2005.\n[19] O. Reingold, R. Shaltiel, and A. Wigderson.\nExtracting randomness via repeated condensing. In\nProceedings of the 41st Annual IEEE Symposium on\nFoundations of Computer Science, pages 2231, 2000.\n[20] M. Santha and U. V. Vazirani. Generating\nquasi-random sequences from semi-random sources.\nJournal of Computer and System Sciences, 33:7587,\n1986.\n[21] R. Shaltiel. Recent developments in explicit\nconstructions of extractors. Bulletin of the European\nAssociation for Theoretical Computer Science,\n77:6795, 2002.\n[22] A. Ta-Shma and D. Zuckerman. Extractor codes.\nIEEE Transactions on Information Theory, 50, 2004.\n[23] U. Vazirani. Towards a strong communication\ncomplexity theory or generating quasi-random\nsequences from two communicating slightly-random\nsources (extended abstract). In Proceedings of the 17th\nAnnual ACM Symposium on Theory of Computing,\npages 366378, 1985.\n[24] A. Wigderson and D. Zuckerman. Expanders that\nbeat the eigenvalue bound: Explicit construction and\napplications. Combinatorica, 19(1):125138, 1999.\n680\n", "keywords": "sum-product theorem;distribution;explicit disperser;construction of disperser;Extractors;recursion;subsource somewhere extractor;structure;bipartite graph;extractors;independent sources;extractor;tools;Ramsey Graphs;disperser;polynomial time computable disperser;resiliency;Theorem;Ramsey graphs;block-sources;deficiency;termination;entropy;Ramsey graph;Independent Sources;algorithms;independent source;subsource;Dispersers;randomness extraction"} {"name": "10", "title": "A Frequency-based and a Poisson-based Definition of the Probability of Being Informative", "abstract": "This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf ). We show that an intuitive idf -based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.", "fulltext": "INTRODUCTION AND BACKGROUND\nThe inverse document frequency (idf ) is one of the most\nsuccessful parameters for a relevance-based ranking of retrieved\nobjects. With N being the total number of documents\n, and n(t) being the number of documents in which\nterm t occurs, the idf is defined as follows:\nidf(t) := - log n(t)\nN , 0 <= idf(t) <\nRanking based on the sum of the idf -values of the query\nterms that occur in the retrieved documents works well, this\nhas been shown in numerous applications. Also, it is well\nknown that the combination of a document-specific term\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. 
To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SIGIR'03, July 28-August 1, 2003, Toronto, Canada.
Copyright 2003 ACM 1-58113-646-3/03/0007 ...$5.00.
weight and idf works better than idf alone. This approach is known as tf-idf, where tf(t, d) (0 <= tf(t, d) <= 1) is the so-called term frequency of term t in document d. The idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term.
The idf alone works better than the tf alone does. An explanation might be the problem of tf with terms that occur in many documents; let us refer to those terms as "noisy" terms. We use the notion of "noisy" terms rather than "frequent" terms, since "frequent" leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as within-document frequency) of a term in a document. We associate "noise" with the document frequency of a term in a collection, and we associate "occurrence" with the within-document frequency of a term. The tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document. Therefore, the removal of noisy terms (known as "stopword removal") is essential when applying tf. In a tf-idf approach, the removal of stopwords is conceptually obsolete if stopwords are just words with a low idf.
From a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation, whereas idf has an "informative" rather than a probabilistic interpretation. The missing probabilistic interpretation of idf is a problem in probabilistic retrieval models where we combine uncertain knowledge of different dimensions (e.g. informativeness of terms, structure of documents, quality of documents, age of documents, etc.) such that a good estimate of the probability of relevance is achieved. An intuitive solution is a normalisation of idf such that we obtain values in the interval [0; 1]. For example, consider a normalisation based on the maximal idf-value. Let T be the set of terms occurring in a collection.

P_freq(t is informative) := idf(t) / maxidf
maxidf := max({idf(t) | t in T}), maxidf <= -log(1/N)
minidf := min({idf(t) | t in T}), minidf >= 0
minidf / maxidf <= P_freq(t is informative) <= 1.0

This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret P_freq, the normalised idf, as the probability that the term is informative?
When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n(t)/N used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1 - e^-1 ≈ 1 - 0.37 as the upper bound of the noise probability of a term. The value e^-1 is related to the logarithm, and we investigate in section 3.3 the link to information theory. 
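The independence-based definition itself is developed in section 3.2. Purely as a numerical illustration of the bound just quoted, assume binary occurrence, constant containment 1/N and independent document events; under these assumptions a term occurring in n(t) of the N documents is noisy with probability 1 - (1 - 1/N)^n(t), and the limiting value for n(t) = N is the quoted 1 - e^-1.

```python
import math

# Numerical illustration only: contrast the frequency-based noise n(t)/N with the
# noise probability under independent document events, 1 - (1 - 1/N)^n(t).
# Even a term occurring in all N documents only approaches 1 - e^(-1) ~ 0.63.
N = 1000
for n_t in (1, 10, 100, 500, 1000):
    freq_noise = n_t / N
    indep_noise = 1 - (1 - 1 / N) ** n_t
    print(f"n(t)={n_t:5d}  frequency-based: {freq_noise:.3f}  independent: {indep_noise:.3f}")
print("limit for n(t) = N, N -> infinity:", 1 - math.exp(-1))
```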
In section 4, we link the results of the previous\nsections to probability theory. We show the steps from possible\nworlds to binomial distribution and Poisson distribution.\nIn section 5, we emphasise that the theoretical framework\nof this paper is applicable for both idf and tf . Finally, in\nsection 6, we base the definition of the probability of being\ninformative on the results of the previous sections and\ncompare frequency-based and Poisson-based definitions.\nBACKGROUND\nThe relationship between frequencies, probabilities and\ninformation theory (entropy) has been the focus of many\nresearchers. In this background section, we focus on work\nthat investigates the application of the Poisson distribution\nin IR since a main part of the work presented in this paper\naddresses the underlying assumptions of Poisson.\n[4] proposes a 2-Poisson model that takes into account\nthe different nature of relevant and non-relevant documents,\nrare terms (content words) and frequent terms (noisy terms,\nfunction words, stopwords). [9] shows experimentally that\nmost of the terms (words) in a collection are distributed\naccording to a low dimension n-Poisson model. [10] uses a\n2-Poisson model for including term frequency-based probabilities\nin the probabilistic retrieval model. The non-linear\nscaling of the Poisson function showed significant improvement\ncompared to a linear frequency-based probability. The\nPoisson model was here applied to the term frequency of a\nterm in a document. We will generalise the discussion by\npointing out that document frequency and term frequency\nare dual parameters in the collection space and the document\nspace, respectively. Our discussion of the Poisson distribution\nfocuses on the document frequency in a collection\nrather than on the term frequency in a document.\n[7] and [6] address the deviation of idf and Poisson, and\napply Poisson mixtures to achieve better Poisson-based estimates\n. The results proved again experimentally that a one-dimensional\nPoisson does not work for rare terms, therefore\nPoisson mixtures and additional parameters are proposed.\n[3], section 3.3, illustrates and summarises comprehen-sively\nthe relationships between frequencies, probabilities\nand Poisson. Different definitions of idf are put into context\nand a notion of \"noise\" is defined, where noise is viewed\nas the complement of idf . We use in our paper a different\nnotion of noise: we consider a frequency-based noise that\ncorresponds to the document frequency, and we consider a\nterm noise that is based on the independence of document\nevents.\n[11], [12], [8] and [1] link frequencies and probability estimation\nto information theory. [12] establishes a framework\nin which information retrieval models are formalised based\non probabilistic inference. A key component is the use of a\nspace of disjoint events, where the framework mainly uses\nterms as disjoint events. The probability of being informative\ndefined in our paper can be viewed as the probability\nof the disjoint terms in the term space of [12].\n[8] address entropy and bibliometric distributions. Entropy\nis maximal if all events are equiprobable and the frequency\n-based Lotka law (N/i\n\nis the number of scientists\nthat have written i publications, where N and are distribution\nparameters), Zipf and the Pareto distribution are related\n. The Pareto distribution is the continuous case of the\nLotka and Lotka and Zipf show equivalences. 
The Pareto distribution is used by [2] for term frequency normalisation. The Pareto distribution compares to the Poisson distribution in the sense that Pareto is "fat-tailed", i.e. Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting, since Poisson is felt to be too radical on frequent events. We restrict the discussion in this paper to Poisson; however, our results show that a smoother distribution than Poisson indeed promises to be a good candidate for improving the estimation of probabilities in information retrieval.
[1] establishes a theoretical link between tf-idf and information theory, and the theoretical research on the meaning of tf-idf "clarifies the statistical model on which the different measures are commonly based". This motivation matches the motivation of our paper: we investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination.
FROM DISJOINT TO INDEPENDENT
We define and discuss in this section three probabilities: the frequency-based noise probability (definition 1), the total noise probability for disjoint documents (definition 2), and the noise probability for independent documents (definition 3).
3.1 Binary occurrence, constant containment and disjointness of documents
We show in this section that the frequency-based noise probability n(t)/N in the idf definition can be explained as a total probability with binary term occurrence, constant document containment and disjointness of document containments.
We refer to a probability function as binary if for all events the probability is either 1.0 or 0.0. The occurrence probability P(t|d) is binary if P(t|d) is equal to 1.0 if t ∈ d, and P(t|d) is equal to 0.0 otherwise.
P(t|d) is binary : P(t|d) = 1.0 ∨ P(t|d) = 0.0
We refer to a probability function as constant if for all events the probability is equal. The document containment probability reflects the chance that a document occurs in a collection. This containment probability is constant if we have no information about the document containment or we ignore that documents differ in containment. Containment could be derived, for example, from the size, quality, age, links, etc. of a document. For a constant containment in a collection with N documents, 1/N is often assumed as the containment probability. We generalise this definition and introduce the constant λ, where 0 <= λ <= N. The containment of a document d depends on the collection c; this is reflected by the notation P(d|c) used for the containment of a document.
P(d|c) is constant : ∀d: P(d|c) = λ/N
For disjoint documents that cover the whole event space, we set λ = 1 and obtain Σ_d P(d|c) = 1.0. Next, we define the frequency-based noise probability and the total noise probability for disjoint documents. We introduce the event notation "t is noisy" and "t occurs" for making the difference between the noise probability P(t is noisy|c) in a collection and the occurrence probability P(t occurs|d) in a document more explicit, thereby keeping in mind that the noise probability corresponds to the occurrence probability of a term in a collection.
Definition 1. The frequency-based term noise probability:
P_freq(t is noisy|c) := n(t)/N
Definition 2.
The total term noise probability for disjoint documents:
P_dis(t is noisy|c) := Σ_d P(t occurs|d) · P(d|c)
Now, we can formulate a theorem that makes the assumptions explicit that explain the classical idf.
Theorem 1. IDF assumptions: If the occurrence probability P(t|d) of term t over documents d is binary, and the containment probability P(d|c) of documents d is constant, and document containments are disjoint events, then the noise probability for disjoint documents is equal to the frequency-based noise probability:
P_dis(t is noisy|c) = P_freq(t is noisy|c)
Proof. The assumptions are:
∀d: (P(t occurs|d) = 1 ∨ P(t occurs|d) = 0)
∀d: P(d|c) = λ/N, Σ_d P(d|c) = 1.0
We obtain:
P_dis(t is noisy|c) = Σ_{d | t ∈ d} 1/N = n(t)/N = P_freq(t is noisy|c)
The above result is not a surprise, but it is a mathematical formulation of assumptions that can be used to explain the classical idf. The assumptions make explicit that the different types of term occurrence in documents (frequency of a term, importance of a term, position of a term, document part where the term occurs, etc.) and the different types of document containment (size, quality, age, etc.) are ignored, and document containments are considered as disjoint events.
From the assumptions, we can conclude that idf (frequency-based noise, respectively) is a relatively simple but strict estimate. Still, idf works well. This could be explained by a leverage effect that justifies the binary occurrence and constant containment: the term occurrence for small documents tends to be larger than for large documents, whereas the containment for small documents tends to be smaller than for large documents. From that point of view, idf means that P(t ∧ d|c) is constant for all d in which t occurs, and P(t ∧ d|c) is zero otherwise. The occurrence and containment can be term specific. For example, set P(t ∧ d|c) = 1/N_D(c) if t occurs in d, where N_D(c) is the number of documents in collection c (we used before just N). We choose a document-dependent occurrence P(t|d) := 1/N_T(d), i.e. the occurrence probability is equal to the inverse of N_T(d), which is the total number of terms in document d. Next, we choose the containment P(d|c) := N_T(d)/N_T(c) · N_T(c)/N_D(c), where N_T(d)/N_T(c) is a document length normalisation (number of terms in document d divided by the number of terms in collection c), and N_T(c)/N_D(c) is a constant factor of the collection (number of terms in collection c divided by the number of documents in collection c). We obtain P(t ∧ d|c) = 1/N_D(c).
In a tf-idf retrieval function, the tf-component reflects the occurrence probability of a term in a document. This is a further explanation why we can estimate the idf with a simple P(t|d), since the combined tf-idf contains the occurrence probability. The containment probability corresponds to a document normalisation (document length normalisation, pivoted document length) and is normally attached to the tf-component or the tf-idf product.
The disjointness assumption is typical for frequency-based probabilities. From a probability theory point of view, we can consider documents as disjoint events in order to achieve a sound theoretical model for explaining the classical idf. But does disjointness reflect the real world, where the containment of a document appears to be independent of the containment of another document?
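Before moving from disjointness to independence, a small numerical check of theorem 1 may be helpful; the four toy documents below are invented for illustration and are not data from the paper.

# With binary occurrence, constant containment 1/N and disjoint documents,
# the total noise probability collapses to the document frequency n(t)/N.
docs = [
    {"information", "retrieval", "the"},
    {"poisson", "the"},
    {"information", "the"},
    {"entropy"},
]
N = len(docs)

def p_dis(term):
    # sum_d P(t occurs | d) * P(d | c), with P(t occurs|d) binary and P(d|c) = 1/N
    return sum((1.0 if term in d else 0.0) * (1.0 / N) for d in docs)

def p_freq(term):
    n_t = sum(1 for d in docs if term in d)
    return n_t / N

for t in ["information", "the", "entropy"]:
    assert abs(p_dis(t) - p_freq(t)) < 1e-12
    print(t, p_dis(t), p_freq(t))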
In the next section, we replace the disjointness assumption by the independence assumption.
3.2 The upper bound of the noise probability for independent documents
For independent documents, we compute the probability of a disjunction as usual, namely as the complement of the probability of the conjunction of the negated events:
P(d_1 ∨ ... ∨ d_N) = 1 - P(¬d_1 ∧ ... ∧ ¬d_N) = 1 - Π_d (1 - P(d))
The noise probability can be considered as the conjunction of the term occurrence and the document containment:
P(t is noisy|c) := P(t occurs ∧ (d_1 ∨ ... ∨ d_N) | c)
For disjoint documents, this view of the noise probability led to definition 2. For independent documents, we use now the conjunction of negated events.
Definition 3. The term noise probability for independent documents:
P_in(t is noisy|c) := 1 - Π_d (1 - P(t occurs|d) · P(d|c))
With binary occurrence and a constant containment P(d|c) := λ/N, we obtain the term noise of a term t that occurs in n(t) documents:
P_in(t is noisy|c) = 1 - (1 - λ/N)^n(t)
For binary occurrence and disjoint documents, the containment probability was 1/N. Now, with independent documents, we can use λ as a collection parameter that controls the average containment probability. We show through the next theorem that the upper bound of the noise probability depends on λ.
Theorem 2. The upper bound of being noisy: If the occurrence P(t|d) is binary, and the containment P(d|c) is constant, and document containments are independent events, then 1 - e^-λ is the upper bound of the noise probability:
∀t: P_in(t is noisy|c) < 1 - e^-λ
Proof. The upper bound of the independent noise probability follows from the limit lim_{N→∞} (1 + x/N)^N = e^x (see any comprehensive math book, for example [5], for the convergence equation of the Euler function). With x = -λ, we obtain:
lim_{N→∞} (1 - λ/N)^N = e^-λ
For the term noise, we have:
P_in(t is noisy|c) = 1 - (1 - λ/N)^n(t)
P_in(t is noisy|c) is strictly monotonous: the noise of a term t_n is less than the noise of a term t_{n+1}, where t_n occurs in n documents and t_{n+1} occurs in n+1 documents. Therefore, a term with n = N has the largest noise probability. For a collection with infinitely many documents, the upper bound of the noise probability for terms t_N that occur in all documents becomes:
lim_{N→∞} P_in(t_N is noisy|c) = lim_{N→∞} (1 - (1 - λ/N)^N) = 1 - e^-λ
By applying an independence rather than a disjointness assumption, we obtain the probability e^-1 that a term is not noisy even if the term does occur in all documents. In the disjoint case, the noise probability is one for a term that occurs in all documents.
If we view P(d|c) := λ/N as the average containment, then λ is large for a term that occurs mostly in large documents, and λ is small for a term that occurs mostly in small documents. Thus, the noise of a term t is large if t occurs in n(t) large documents, and the noise is smaller if t occurs in small documents. Alternatively, we can assume a constant containment and a term-dependent occurrence. If we assume P(d|c) := 1, then P(t|d) := λ/N can be interpreted as the average probability that t represents a document. The common assumption is that the average containment or occurrence probability is proportional to n(t). However, here is additional potential: the statistical laws (see [3] on Luhn and Zipf) indicate that the average probability could follow a normal distribution, i.e.
small probabilities for small n(t) and large n(t), and larger probabilities for medium n(t).
For the monotonous case we investigate here, the noise of a term with n(t) = 1 is equal to 1 - (1 - λ/N) = λ/N, and the noise of a term with n(t) = N is close to 1 - e^-λ. In the next section, we relate the value e^-λ to information theory.
3.3 The probability of a maximal informative signal
The probability e^-1 is special in the sense that a signal with that probability is a signal with maximal information, as derived from the entropy definition. Consider the definition of the entropy contribution H(t) of a signal t:
H(t) := P(t) · (-ln P(t))
We form the first derivative for computing the optimum:
∂H(t)/∂P(t) = -ln P(t) + (-1/P(t)) · P(t) = -(1 + ln P(t))
For obtaining optima, we use:
0 = -(1 + ln P(t))
The entropy contribution H(t) is maximal for P(t) = e^-1. This result does not depend on the base of the logarithm, as we see next:
∂H(t)/∂P(t) = -log_b P(t) + (-1/(P(t) · ln b)) · P(t) = -(1/ln b + log_b P(t)) = -(1 + ln P(t))/ln b
We summarise this result in the following theorem:
Theorem 3. The probability of a maximal informative signal: The probability P_max = e^-1 ≈ 0.37 is the probability of a maximal informative signal. The entropy of a maximal informative signal is H_max = e^-1.
Proof. The probability and entropy follow from the derivation above.
The complement of the maximal noise probability is e^-λ, and we are looking now for a generalisation of the entropy definition such that e^-λ is the probability of a maximal informative signal. We can generalise the entropy definition by computing the integral of λ + ln P(t), i.e. this derivative is zero for P(t) = e^-λ. We obtain a generalised entropy:
-∫ (λ + ln P(t)) d(P(t)) = P(t) · (1 - λ - ln P(t))
The generalised entropy corresponds for λ = 1 to the classical entropy. By moving from disjoint to independent documents, we have established a link between the complement of the noise probability of a term that occurs in all documents and information theory.
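As a quick numerical check of theorem 3 and of the generalised entropy, here is a Python sketch; the value λ = 2 is an arbitrary choice of ours for illustration.

import math

def entropy_contribution(p):
    # H(t) = -P(t) * ln P(t)
    return -p * math.log(p)

def generalised_entropy(p, lam):
    # P(t) * (1 - lam - ln P(t)), the antiderivative of -(lam + ln P(t))
    return p * (1.0 - lam - math.log(p))

# The classical contribution is maximal at P(t) = e^-1 ...
grid = [i / 100000 for i in range(1, 100000)]
p_star = max(grid, key=entropy_contribution)
print(p_star, math.exp(-1))            # both approx. 0.3679

# ... and the generalised contribution is maximal at P(t) = e^-lam.
lam = 2.0
p_star_gen = max(grid, key=lambda p: generalised_entropy(p, lam))
print(p_star_gen, math.exp(-lam))      # both approx. 0.1353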
Next, we link independent documents to probability theory.
THE LINK TO PROBABILITY THEORY
We review for independent documents three concepts of probability theory: possible worlds, binomial distribution and Poisson distribution.
4.1 Possible Worlds
Each conjunction of document events (for each document, we consider two document events: the document can be true or false) is associated with a so-called possible world. For example, consider the eight possible worlds for three documents (N = 3).

world w   conjunction
w_7       d_1 ∧ d_2 ∧ d_3
w_6       d_1 ∧ d_2 ∧ ¬d_3
w_5       d_1 ∧ ¬d_2 ∧ d_3
w_4       d_1 ∧ ¬d_2 ∧ ¬d_3
w_3       ¬d_1 ∧ d_2 ∧ d_3
w_2       ¬d_1 ∧ d_2 ∧ ¬d_3
w_1       ¬d_1 ∧ ¬d_2 ∧ d_3
w_0       ¬d_1 ∧ ¬d_2 ∧ ¬d_3

With each world w, we associate a probability μ(w), which is equal to the product of the single probabilities of the document events.

world w   probability μ(w)
w_7       (λ/N)^3 · (1 - λ/N)^0
w_6       (λ/N)^2 · (1 - λ/N)^1
w_5       (λ/N)^2 · (1 - λ/N)^1
w_4       (λ/N)^1 · (1 - λ/N)^2
w_3       (λ/N)^2 · (1 - λ/N)^1
w_2       (λ/N)^1 · (1 - λ/N)^2
w_1       (λ/N)^1 · (1 - λ/N)^2
w_0       (λ/N)^0 · (1 - λ/N)^3

The sum over the possible worlds in which k documents are true and N - k documents are false is equal to the probability function of the binomial distribution, since the binomial coefficient yields the number of possible worlds in which k documents are true.
4.2 Binomial distribution
The binomial probability function yields the probability that k of N events are true, where each event is true with the single event probability p:
P(k) := binom(N, k, p) := C(N, k) · p^k · (1 - p)^(N-k)
The single event probability is usually defined as p := λ/N, i.e. p is inversely proportional to N, the total number of events. With this definition of p, we obtain for an infinite number of documents the following limit for the product of the binomial coefficient and p^k:
lim_{N→∞} C(N, k) · p^k = lim_{N→∞} [N·(N-1)·...·(N-k+1)/k!] · (λ/N)^k = λ^k/k!
The limit is close to the actual value for k << N. For large k, the actual value is smaller than the limit.
The limit of (1 - p)^(N-k) follows from the limit lim_{N→∞} (1 + x/N)^N = e^x:
lim_{N→∞} (1 - p)^(N-k) = lim_{N→∞} (1 - λ/N)^(N-k) = e^-λ
Again, the limit is close to the actual value for k << N. For large k, the actual value is larger than the limit.
4.3 Poisson distribution
For an infinite number of events, the Poisson probability function is the limit of the binomial probability function:
lim_{N→∞} binom(N, k, p) = (λ^k/k!) · e^-λ
P(k) = poisson(k, λ) := (λ^k/k!) · e^-λ
The probability poisson(0, 1) is equal to e^-1, which is the probability of a maximal informative signal. This shows the relationship of the Poisson distribution and information theory.
After seeing the convergence of the binomial distribution, we can choose the Poisson distribution as an approximation of the independent term noise probability. First, we define the Poisson noise probability:
Definition 4.
The Poisson term noise probability:
P_poi(t is noisy|c) := e^-λ · Σ_{k=1}^{n(t)} λ^k/k!
For independent documents, the Poisson distribution approximates the probability of the disjunction for large n(t), since the independent term noise probability is equal to the sum over the binomial probabilities where at least one of the n(t) document containment events is true:
P_in(t is noisy|c) = Σ_{k=1}^{n(t)} C(n(t), k) · p^k · (1 - p)^(n(t)-k)
P_in(t is noisy|c) ≈ P_poi(t is noisy|c)
We have defined a frequency-based and a Poisson-based probability of being noisy, where the latter is the limit of the independence-based probability of being noisy. Before we present in the final section the usage of the noise probability for defining the probability of being informative, we emphasise in the next section that the results apply to the collection space as well as to the document space.
THE COLLECTION SPACE AND THE DOCUMENT SPACE
Consider the dual definitions of retrieval parameters in table 1. We associate a collection space D × T with a collection c, where D is the set of documents and T is the set of terms in the collection. Let N_D := |D| and N_T := |T| be the number of documents and terms, respectively. We consider a document as a subset of T and a term as a subset of D. Let n_T(d) := |{t | t ∈ d}| be the number of terms that occur in the document d, and let n_D(t) := |{d | t ∈ d}| be the number of documents that contain the term t.
In a dual way, we associate a document space L × T with a document d, where L is the set of locations (also referred to as positions; however, we use the letters L and l and not P and p for avoiding confusion with probabilities) and T is the set of terms in the document. The document dimension in a collection space corresponds to the location (position) dimension in a document space.
The definition makes explicit that the classical notion of term frequency of a term in a document (also referred to as the within-document term frequency) actually corresponds to the location frequency of a term in a document.
For the actual term frequency value, it is common to use the maximal occurrence (number of locations; let lf be the location frequency):
tf(t, d) := lf(t, d) := P_freq(t occurs|d) / P_freq(t_max occurs|d) = n_L(t, d) / n_L(t_max, d)

Table 1: Retrieval parameters (collection space; document space)
dimensions: documents and terms; locations and terms
document/location frequency: n_D(t, c), the number of documents in which term t occurs in collection c, and N_D(c), the number of documents in collection c; n_L(t, d), the number of locations (positions) at which term t occurs in document d, and N_L(d), the number of locations (positions) in document d
term frequency: n_T(d, c), the number of terms that document d contains in collection c, and N_T(c), the number of terms in collection c; n_T(l, d), the number of terms that location l contains in document d, and N_T(d), the number of terms in document d
noise/occurrence: P(t|c) (term noise); P(t|d) (term occurrence)
containment: P(d|c) (document); P(l|d) (location)
informativeness: -ln P(t|c); -ln P(t|d)
conciseness: -ln P(d|c); -ln P(l|d)
P(informative): ln(P(t|c))/ln(P(t_min|c)); ln(P(t|d))/ln(P(t_min|d))
P(concise): ln(P(d|c))/ln(P(d_min|c)); ln(P(l|d))/ln(P(l_min|d))

A further duality is between informativeness and conciseness (shortness of documents or locations): informativeness is based on occurrence (noise), conciseness is based on containment.
We have highlighted in this section the duality between the collection space and the document space. We concentrate in this paper on the probability of a term to be noisy and informative. Those probabilities are defined in the collection space. However, the results regarding the term noise and informativeness apply to their dual counterparts: term occurrence and informativeness in a document. Also, the results can be applied to the containment of documents and locations.
THE PROBABILITY OF BEING INFORMATIVE
We showed in the previous sections that the disjointness assumption leads to frequency-based probabilities and that the independence assumption leads to Poisson probabilities. In this section, we formulate a frequency-based definition and a Poisson-based definition of the probability of being informative and then we compare the two definitions.
Definition 5. The frequency-based probability of being informative:
P_freq(t is informative|c) := -ln(n(t)/N) / -ln(1/N) = -log_N(n(t)/N) = 1 - log_N n(t) = 1 - ln n(t)/ln N
We define the Poisson-based probability of being informative analogously to the frequency-based probability of being informative (see definition 5).
Definition 6. The Poisson-based probability of being informative:
P_poi(t is informative|c) := ln(e^-λ · Σ_{k=1}^{n(t)} λ^k/k!) / ln(e^-λ · λ) = (λ - ln Σ_{k=1}^{n(t)} λ^k/k!) / (λ - ln λ)
For the sum expression, the following limit holds:
lim_{n(t)→∞} Σ_{k=1}^{n(t)} λ^k/k! = e^λ - 1
For λ >> 1, we can alter the noise and informativeness Poisson by starting the sum from 0, since e^λ >> 1. Then, the minimal Poisson informativeness is poisson(0, λ) = e^-λ. We obtain a simplified Poisson probability of being informative:
P_poi(t is informative|c) ≈ (λ - ln Σ_{k=0}^{n(t)} λ^k/k!) / λ = 1 - ln(Σ_{k=0}^{n(t)} λ^k/k!) / λ
The computation of the Poisson sum requires an optimisation for large n(t).
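The sum Σ_{k=1}^{n(t)} λ^k/k! overflows ordinary floating-point arithmetic already for λ of a few hundred, so some care is needed. One possible optimisation is sketched below in Python; it works in log space with a log-sum-exp and is not the paper's implementation, which instead restricts the summation to the interval around λ where the density is non-negligible, as described next. The value λ = 1000 matches one of the curves in figure 1; the choice of n(t) values is ours.

import math

def log_poisson_sum(lam, n_t):
    # ln( sum_{k=1}^{n(t)} lam^k / k! ), computed in log space so that
    # large lam (e.g. 1000) does not overflow ordinary floats.
    log_terms = [k * math.log(lam) - math.lgamma(k + 1) for k in range(1, n_t + 1)]
    m = max(log_terms)                       # log-sum-exp trick
    return m + math.log(sum(math.exp(x - m) for x in log_terms))

def p_poi_informative(n_t, lam):
    # (lam - ln sum_{k=1}^{n(t)} lam^k/k!) / (lam - ln lam), as in definition 6
    return (lam - log_poisson_sum(lam, n_t)) / (lam - math.log(lam))

lam = 1000.0
for n_t in (1, 500, 1000, 1500, 5000):
    print(n_t, round(p_poi_informative(n_t, lam), 4))
# Informativeness falls from 1.0 at n(t) = 1 towards 0 as n(t) approaches
# and exceeds lam, illustrating the "radical" behaviour discussed below.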
The implementation for this paper exploits the nature of the Poisson density: the Poisson density yields only values significantly greater than zero in an interval around λ.
Consider the illustration of the noise and informativeness definitions in figure 1. The probability functions displayed are summarised in figure 2, where the simplified Poisson is used in the noise and informativeness graphs. The frequency-based noise corresponds to the linear solid curve in the noise figure. With an independence assumption, we obtain the curve in the lower triangle of the noise figure. By changing the parameter p := λ/N of the independence probability, we can lift or lower the independence curve. The noise figure shows the lifting for the value λ := ln N ≈ 9.2. The setting λ = ln N is special in the sense that the frequency-based and the Poisson-based informativeness have the same denominator, namely ln N, and the Poisson sum converges to λ. Whether we can draw more conclusions from this setting is an open question.
We can conclude that the lifting is desirable if we know for a collection that terms that occur in relatively few documents are no guarantee for finding relevant documents, i.e. we assume that rare terms are still relatively noisy. On the opposite, we could lower the curve when assuming that frequent terms are not too noisy, i.e. they are considered as being still significantly discriminative.

Figure 1: Noise and Informativeness. Two plots, "Probability of being noisy" and "Probability of being informative", over n(t), the number of documents with term t (0 to 10000), each showing the curves: frequency, independence with p = 1/N, independence with p = ln(N)/N, Poisson with λ = 1000, Poisson with λ = 2000, and the two-dimensional Poisson with λ = 1000, 2000.

Figure 2: Probability functions
Frequency P_freq. Noise: n(t)/N, with 1/N <= P_freq <= 1.0. Informativeness: ln(n(t)/N)/ln(1/N), with 0.0 <= P_freq <= 1.0.
Independence P_in. Noise: 1 - (1 - p)^n(t), with p <= P_in < 1 - e^-λ. Informativeness: ln(1 - (1 - p)^n(t))/ln(p), with ln(1 - e^-λ)/ln(p) < P_in <= 1.0.
Poisson P_poi. Noise: e^-λ Σ_{k=1}^{n(t)} λ^k/k!, with e^-λ·λ <= P_poi < 1 - e^-λ. Informativeness: (λ - ln Σ_{k=1}^{n(t)} λ^k/k!)/(λ - ln λ), with (λ - ln(e^λ - 1))/(λ - ln λ) < P_poi <= 1.0.
Poisson P_poi, simplified. Noise: e^-λ Σ_{k=0}^{n(t)} λ^k/k!, with e^-λ <= P_poi < 1.0. Informativeness: (λ - ln Σ_{k=0}^{n(t)} λ^k/k!)/λ, with 0.0 < P_poi <= 1.0.

The Poisson probabilities approximate the independence probabilities for large n(t); the approximation is better for larger λ. For n(t) < λ, the noise is zero, whereas for n(t) > λ the noise is one. This radical behaviour can be smoothened by using a multi-dimensional Poisson distribution. Figure 1 shows a Poisson noise based on a two-dimensional Poisson:
poisson(k, λ_1, λ_2) := π · e^-λ_1 · λ_1^k/k! + (1 - π) · e^-λ_2 · λ_2^k/k!
The two-dimensional Poisson shows a plateau between λ_1 = 1000 and λ_2 = 2000; we used here π = 0.5. The idea behind this setting is that terms that occur in less than 1000 documents are considered to be not noisy (i.e. they are informative), that terms between 1000 and 2000 are half noisy, and that terms with more than 2000 are definitely noisy.
For the informativeness, we observe that the radical behaviour of Poisson is preserved.
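A sketch of this two-dimensional Poisson noise, using the values λ_1 = 1000, λ_2 = 2000 and π = 0.5 from the text, is given below; the log-space evaluation is our addition, used only to keep the computation numerically stable.

import math

def log_poisson_sum(lam, n_t):
    # ln( sum_{k=1}^{n(t)} lam^k / k! ), via log-sum-exp for numerical stability
    logs = [k * math.log(lam) - math.lgamma(k + 1) for k in range(1, n_t + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(x - m) for x in logs))

def two_dim_poisson_noise(n_t, lam1, lam2, pi):
    # pi * e^-lam1 * sum_k lam1^k/k!  +  (1 - pi) * e^-lam2 * sum_k lam2^k/k!
    part1 = pi * math.exp(log_poisson_sum(lam1, n_t) - lam1)
    part2 = (1.0 - pi) * math.exp(log_poisson_sum(lam2, n_t) - lam2)
    return part1 + part2

for n_t in (500, 1000, 1500, 2000, 2500):
    print(n_t, round(two_dim_poisson_noise(n_t, 1000.0, 2000.0, 0.5), 3))
# The noise is close to 0 well below 1000 documents, close to 0.5 on the
# plateau between the two parameters, and close to 1 well above 2000; the
# transitions around 1000 and 2000 are smoothed.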
The plateau here is approximately at 1/6, and it is important to realise that this plateau is not obtained with the multi-dimensional Poisson noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e^-1000 + 0.5 · e^-2000. That is why the informativeness will be only close to one for very little noise, whereas for a bit of noise, informativeness will drop to zero. This effect can be controlled by using small values for π such that the noise in the interval [λ_1; λ_2] is still very little. The setting π = e^-2000/6 leads to noise values of approximately e^-2000/6 in the interval [λ_1; λ_2]; the logarithms lead then to 1/6 for the informativeness.
The independence-based and frequency-based informativeness functions do not differ as much as the noise functions do. However, for the independence-based probability of being informative, we can control the average informativeness by the definition p := λ/N, whereas the control on the frequency-based probability is limited, as we address next.
For the frequency-based idf, the gradient is monotonously decreasing, and we obtain for different collections the same distances of idf-values, i.e. the parameter N does not affect the distance. For an illustration, consider the distance between the value idf(t_{n+1}) of a term t_{n+1} that occurs in n+1 documents, and the value idf(t_n) of a term t_n that occurs in n documents:
idf(t_{n+1}) - idf(t_n) = ln(n/(n + 1))
The first three values of the distance function are:
idf(t_2) - idf(t_1) = ln(1/(1 + 1)) ≈ -0.69
idf(t_3) - idf(t_2) = ln(2/(2 + 1)) ≈ -0.41
idf(t_4) - idf(t_3) = ln(3/(3 + 1)) ≈ -0.29
For the Poisson-based informativeness, the gradient decreases first slowly for small n(t), then rapidly near n(t) ≈ λ, and then it grows again slowly for large n(t).
In conclusion, we have seen that the Poisson-based definition provides more control and parameter possibilities than the frequency-based definition does. Whereas more control and more parameters promise to be positive for the personalisation of retrieval systems, they bear at the same time the danger of just too many parameters. The framework presented in this paper raises the awareness about the probabilistic and information-theoretic meanings of the parameters. The parallel definitions of the frequency-based probability and the Poisson-based probability of being informative made the underlying assumptions explicit. The frequency-based probability can be explained by binary occurrence, constant containment and disjointness of documents. Independence of documents leads to Poisson, where we have to be aware that Poisson approximates the probability of a disjunction for a large number of events, but not for a small number. This theoretical result explains why experimental investigations on Poisson (see [7]) show that a Poisson estimation does work better for frequent (bad, noisy) terms than for rare (good, informative) terms.
In addition to the collection-wide parameter setting, the framework presented here allows for document-dependent settings, as explained for the independence probability.
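As a small aside on this distance property, the following sketch (with invented collection sizes and an arbitrary example term) shows that the idf gaps are indeed independent of N, while the independence-based informativeness can be tuned via p = λ/N.

import math

def idf(n_t, N):
    return -math.log(n_t / N)

# The idf distance between neighbouring document frequencies does not depend on N ...
for N in (1000, 1000000):
    gaps = [idf(n + 1, N) - idf(n, N) for n in (1, 2, 3)]
    print(N, [round(g, 2) for g in gaps])        # always about -0.69, -0.41, -0.29

# ... whereas the independence-based informativeness changes with lam via p = lam/N.
def p_in_informative(n_t, N, lam):
    p = lam / N
    return math.log(1.0 - (1.0 - p) ** n_t) / math.log(p)

for lam in (1.0, math.log(1000)):
    print(round(lam, 2), round(p_in_informative(10, 1000, lam), 3))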
This\nis in particular interesting for heterogeneous and structured\ncollections, since documents are different in nature (size,\nquality, root document, sub document), and therefore, binary\noccurrence and constant containment are less appropriate\nthan in relatively homogeneous collections.\nSUMMARY\nThe definition of the probability of being informative transforms\nthe informative interpretation of the idf into a probabilistic\ninterpretation, and we can use the idf -based probability\nin probabilistic retrieval approaches. We showed that\nthe classical definition of the noise (document frequency) in\nthe inverse document frequency can be explained by three\nassumptions: the term within-document occurrence probability\nis binary, the document containment probability is\nconstant, and the document containment events are disjoint.\nBy explicitly and mathematically formulating the assumptions\n, we showed that the classical definition of idf does not\ntake into account parameters such as the different nature\n(size, quality, structure, etc.) of documents in a collection,\nor the different nature of terms (coverage, importance, position\n, etc.) in a document. We discussed that the absence\nof those parameters is compensated by a leverage effect of\nthe within-document term occurrence probability and the\ndocument containment probability.\nBy applying an independence rather a disjointness assumption\nfor the document containment, we could establish\na link between the noise probability (term occurrence\nin a collection), information theory and Poisson. From the\nfrequency-based and the Poisson-based probabilities of being\nnoisy, we derived the frequency-based and Poisson-based\nprobabilities of being informative. The frequency-based probability\nis relatively smooth whereas the Poisson probability\nis radical in distinguishing between noisy or not noisy, and\ninformative or not informative, respectively. We showed how\nto smoothen the radical behaviour of Poisson with a multi-dimensional\nPoisson.\nThe explicit and mathematical formulation of idf - and\nPoisson-assumptions is the main result of this paper. Also,\nthe paper emphasises the duality of idf and tf , collection\nspace and document space, respectively. Thus, the result\napplies to term occurrence and document containment in a\ncollection, and it applies to term occurrence and position\ncontainment in a document. This theoretical framework is\nuseful for understanding and deciding the parameter estimation\nand combination in probabilistic retrieval models. The\nlinks between indepence-based noise as document frequency,\nprobabilistic interpretation of idf , information theory and\nPoisson described in this paper may lead to variable probabilistic\nidf and tf definitions and combinations as required\nin advanced and personalised information retrieval systems.\nAcknowledgment: I would like to thank Mounia Lalmas,\nGabriella Kazai and Theodora Tsikrika for their comments\non the as they said \"heavy\" pieces. My thanks also go to the\nmeta-reviewer who advised me to improve the presentation\nto make it less \"formidable\" and more accessible for those\n\"without a theoretic bent\".\nThis work was funded by a\nresearch fellowship from Queen Mary University of London.\nREFERENCES\n[1] A. Aizawa. An information-theoretic perspective of\ntf-idf measures. Information Processing and\nManagement, 39:4565, January 2003.\n[2] G. Amati and C. J. Rijsbergen. Term frequency\nnormalization via Pareto distributions. 
In 24th\nBCS-IRSG European Colloquium on IR Research,\nGlasgow, Scotland, 2002.\n[3] R. K. Belew. Finding out about. Cambridge University\nPress, 2000.\n[4] A. Bookstein and D. Swanson. Probabilistic models\nfor automatic indexing. Journal of the American\nSociety for Information Science, 25:312318, 1974.\n[5] I. N. Bronstein. Taschenbuch der Mathematik. Harri\nDeutsch, Thun, Frankfurt am Main, 1987.\n[6] K. Church and W. Gale. Poisson mixtures. Natural\nLanguage Engineering, 1(2):163190, 1995.\n[7] K. W. Church and W. A. Gale. Inverse document\nfrequency: A measure of deviations from poisson. In\nThird Workshop on Very Large Corpora, ACL\nAnthology, 1995.\n[8] T. Lafouge and C. Michel. Links between information\nconstruction and information gain: Entropy and\nbibliometric distribution. Journal of Information\nScience, 27(1):3949, 2001.\n[9] E. Margulis. N-poisson document modelling. In\nProceedings of the 15th Annual International ACM\nSIGIR Conference on Research and Development in\nInformation Retrieval, pages 177189, 1992.\n[10] S. E. Robertson and S. Walker. Some simple effective\napproximations to the 2-poisson model for\nprobabilistic weighted retrieval. In Proceedings of the\n17th Annual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval,\npages 232241, London, et al., 1994. Springer-Verlag.\n[11] S. Wong and Y. Yao. An information-theoric measure\nof term specificity. Journal of the American Society\nfor Information Science, 43(1):5461, 1992.\n[12] S. Wong and Y. Yao. On modeling information\nretrieval with probabilistic inference. ACM\nTransactions on Information Systems, 13(1):3868,\n1995.\n234\n", "keywords": "inverse document frequency (idf);independent and disjoint documents;computer science;information search;probability theories;Poisson based probability;Term frequency;probabilistic retrieval models;Probability of being informative;Independent documents;Disjoint documents;Normalisation;relevance-based ranking of retrieved objects;information theory;Noise probability;frequency-based term noise probability;Poisson-based probability of being informative;Assumptions;Collection space;Poisson distribution;Probabilistic information retrieval;Document space;document retrieval;entropy;Frequency-based probability;Document frequency;Inverse document frequency;Information theory;independence assumption;inverse document frequency;maximal informative signal"} {"name": "100", "title": "High Performance Crawling System", "abstract": "In the present paper, we will describe the design and implementation of a real-time distributed system of Web crawling running on a cluster of machines. The system crawls several thousands of pages every second, includes a high-performance fault manager, is platform independent and is able to adapt transparently to a wide range of configurations without incurring additional hardware expenditure. We will then provide details of the system architecture and describe the technical choices for very high performance crawling. 
Finally, we will discuss the experimental results obtained, comparing them with other documented systems.", "fulltext": "INTRODUCTION\nWith the World Wide Web containing the vast amount\nof information (several thousands in 1993, 3 billion today)\nthat it does and the fact that it is ever expanding, we\nneed a way to find the right information (multimedia of\ntextual).\nWe need a way to access the information on\nspecific subjects that we require.\nTo solve the problems\nabove several programs and algorithms were designed that\nindex the web, these various designs are known as search\nengines, spiders, crawlers, worms or knowledge robots graph\nin its simplest terms. The pages are the nodes on the graph\nand the links are the arcs on the graph. What makes this so\ndifficult is the vast amount of data that we have to handle,\nand then we must also take into account the fact that the\nWorld Wide Web is constantly growing and the fact that\npeople are constantly updating the content of their web\npages.\nAny High performance crawling system should offer at\nleast the following two features.\nFirstly, it needs to\nbe equipped with an intelligent navigation strategy, i.e.\nenabling it to make decisions regarding the choice of subsequent\nactions to be taken (pages to be downloaded etc).\nSecondly, its supporting hardware and software architecture\nshould be optimized to crawl large quantities of documents\nper unit of time (generally per second). To this we may add\nfault tolerance (machine crash, network failure etc.) and\nconsiderations of Web server resources.\nRecently we have seen a small interest in these two\nfield. Studies on the first point include crawling strategies\nfor important pages [9, 17], topic-specific document downloading\n[5, 6, 18, 10], page recrawling to optimize overall\nrefresh frequency of a Web archive [8, 7] or scheduling the\ndownloading activity according to time [22]. However, little\nresearch has been devoted to the second point, being very\ndifficult to implement [20, 13]. We will focus on this latter\npoint in the rest of this paper.\nIndeed, only a few crawlers are equipped with an optimized\nscalable crawling system, yet details of their internal\nworkings often remain obscure (the majority being proprietary\nsolutions).\nThe only system to have been given a\nfairly in-depth description in existing literature is Mercator\nby Heydon and Najork of DEC/Compaq [13] used in the\nAltaVista search engine (some details also exist on the first\nversion of the Google [3] and Internet Archive [4] robots).\nMost recent studies on crawling strategy fail to deal with\nthese features, contenting themselves with the solution of\nminor issues such as the calculation of the number of pages\nto be downloaded in order to maximize/minimize some\nfunctional objective. This may be acceptable in the case\nof small applications, but for real time\n1\napplications the\nsystem must deal with a much larger number of constraints.\nWe should also point out that little academic research\nis concerned with high performance search engines, as\ncompared with their commercial counterparts (with the\nexception of the WebBase project [14] at Stanford).\nIn the present paper, we will describe a very high\navailability, optimized and distributed crawling system.\nWe will use the system on what is known as breadth-first\ncrawling, though this may be easily adapted to other\nnavigation strategies. 
We will first focus on input/output,\non management of network traffic and robustness when\nchanging scale. We will also discuss download policies in\n1\n\"Soft\" real time\n299\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are not\nmade or distributed for profit or commercial advantage and that copies bear\nthis notice and the full citation on the first page. To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nMIR'04, October 1516, 2004, New York, New York, USA.\nCopyright 2004 ACM 1-58113-940-3/04/0010...$5.00.\nterms of speed regulation, fault management by supervisors\nand the introduction/suppression of machine nodes without\nsystem restart during a crawl.\nOur system was designed within the experimental framework\nof the D\nep^\not L\negal du Web Fran\ncais (French Web\nLegal Deposit). This consists of archiving only multimedia\ndocuments in French available on line, indexing them and\nproviding ways for these archives to be consulted. Legal\ndeposit requires a real crawling strategy in order to ensure\nsite continuity over time.\nThe notion of registration is\nclosely linked to that of archiving, which requires a suitable\nstrategy to be useful. In the course of our discussion, we\nwill therefore analyze the implication and impact of this\nexperimentation for system construction.\nSTATE OF THE ART\nIn order to set our work in this field in context, listed\nbelow are definitions of services that should be considered\nthe minimum requirements for any large-scale crawling\nsystem.\nFlexibility: as mentioned above, with some minor\nadjustments our system should be suitable for various\nscenarios. However, it is important to remember that\ncrawling is established within a specific framework:\nnamely, Web legal deposit.\nHigh Performance: the system needs to be scalable\nwith a minimum of one thousand pages/second and\nextending up to millions of pages for each run on\nlow cost hardware. Note that here, the quality and\nefficiency of disk access are crucial to maintaining high\nperformance.\nFault Tolerance: this may cover various aspects. As\nthe system interacts with several servers at once,\nspecific problems emerge.\nFirst, it should at least\nbe able to process invalid HTML code, deal with\nunexpected Web server behavior, and select good\ncommunication protocols etc. The goal here is to avoid\nthis type of problem and, by force of circumstance, to\nbe able to ignore such problems completely. Second,\ncrawling processes may take days or weeks, and it is\nimperative that the system can handle failure, stopped\nprocesses or interruptions in network services, keeping\ndata loss to a minimum. Finally, the system should\nbe persistent, which means periodically switching large\ndata structures from memory to the disk (e.g. restart\nafter failure).\nMaintainability and Configurability: an appropriate\ninterface is necessary for monitoring the crawling\nprocess, including download speed, statistics on the\npages and amounts of data stored. In online mode, the\nadministrator may adjust the speed of a given crawler,\nadd or delete processes, stop the system, add or delete\nsystem nodes and supply the black list of domains not\nto be visited, etc.\n2.2\nGeneral Crawling Strategies\nThere are many highly accomplished techniques in terms\nof Web crawling strategy. 
We will describe the most relevant\nof these here.\nBreadth-first Crawling: in order to build a wide Web\narchive like that of the Internet Archive [15], a crawl\nis carried out from a set of Web pages (initial URLs\nor seeds).\nA breadth-first exploration is launched\nby following hypertext links leading to those pages\ndirectly connected with this initial set. In fact, Web\nsites are not really browsed breadth-first and various\nrestrictions may apply, e.g. limiting crawling processes\nto within a site, or downloading the pages deemed\nmost interesting first\n2\nRepetitive Crawling: once pages have been crawled,\nsome systems require the process to be repeated\nperiodically so that indexes are kept updated. In the\nmost basic case, this may be achieved by launching\na second crawl in parallel.\nA variety of heuristics\nexist to overcome this problem:\nfor example, by\nfrequently relaunching the crawling process of pages,\nsites or domains considered important to the detriment\nof others.\nA good crawling strategy is crucial for\nmaintaining a constantly updated index list. Recent\nstudies by Cho and Garcia-Molina [8, 7] have focused\non optimizing the update frequency of crawls by using\nthe history of changes recorded on each site.\nTargeted Crawling: more specialized search engines\nuse crawling process heuristics in order to target a\ncertain type of page, e.g. pages on a specific topic or\nin a particular language, images, mp3 files or scientific\npapers. In addition to these heuristics, more generic\napproaches have been suggested. They are based on\nthe analysis of the structures of hypertext links [6,\n5] and techniques of learning [9, 18]: the objective\nhere being to retrieve the greatest number of pages\nrelating to a particular subject by using the minimum\nbandwidth. Most of the studies cited in this category\ndo not use high performance crawlers, yet succeed in\nproducing acceptable results.\nRandom Walks and Sampling: some studies have\nfocused on the effect of random walks on Web graphs\nor modified versions of these graphs via sampling in\norder to estimate the size of documents on line [1, 12,\n11].\nDeep Web Crawling: a lot of data accessible via\nthe Web are currently contained in databases and\nmay only be downloaded through the medium of\nappropriate requests or forms. Recently, this often-neglected\nbut fascinating problem has been the focus\nof new interest. The Deep Web is the name given to\nthe Web containing this category of data [9].\nLastly, we should point out the acknowledged differences\nthat exist between these scenarios. For example,\na breadth-first search needs to keep track of all pages\nalready crawled.\nAn analysis of links should use\nstructures of additional data to represent the graph\nof the sites in question, and a system of classifiers in\norder to assess the pages' relevancy [6, 5]. However,\nsome tasks are common to all scenarios, such as\n2\nSee [9] for the heuristics that tend to find the most\nimportant pages first and [17] for experimental results\nproving that breadth-first crawling allows the swift retrieval\nof pages with a high PageRank.\n300\nrespecting robot exclusion files (robots.txt), crawling\nspeed, resolution of domain names . . 
.\nIn the early 1990s, several companies claimed that their\nsearch engines were able to provide complete Web coverage.\nIt is now clear that only partial coverage is possible at\npresent.\nLawrence and Giles [16] carried out two experiments\nin order to measure coverage performance of data\nestablished by crawlers and of their updates. They adopted\nan approach known as overlap analysis to estimate the size\nof the Web that may be indexed (See also Bharat and Broder\n1998 on the same subject). Let W be the total set of Web\npages and W\na\nW and W\nb\nW the pages downloaded\nby two different crawlers a and b. What is the size of W\na\nand W\nb\nas compared with W ? Let us assume that uniform\nsamples of Web pages may be taken and their membership of\nboth sets tested. Let P (W\na\n) and P (W\nb\n) be the probability\nthat a page is downloaded by a or b respectively. We know\nthat:\nP (W\na\nW\nb\n|W\nb\n) = W\na\nW\nb\n|W\nb\n|\n(1)\nNow, if these two crawling processes are assumed to be\nindependent, the left side of equation 1may be reduced to\nP (W\na\n), that is data coverage by crawler a. This may be\neasily obtained by the intersection size of the two crawling\nprocesses. However, an exact calculation of this quantity\nis only possible if we do not really know the documents\ncrawled. Lawrence and Giles used a set of controlled data of\n575 requests to provide page samples and count the number\nof times that the two crawlers retrieved the same pages. By\ntaking the hypothesis that the result P (W\na\n) is correct, we\nmay estimate the size of the Web as\n|W\na\n|/P (W\na\n).\nThis\napproach has shown that the Web contained at least 320\nmillion pages in 1997 and that only 60% was covered by the\nsix major search engines of that time. It is also interesting\nto note that a single search engine would have covered only\n1/3 of the Web. As this approach is based on observation, it\nmay reflect a visible Web estimation, excluding for instance\npages behind forms, databases etc. More recent experiments\nassert that the Web contains several billion pages.\n2.2.1\nSelective Crawling\nAs demonstrated above, a single crawler cannot archive\nthe whole Web. The fact is that the time required to carry\nout the complete crawling process is very long, and impossible\ngiven the technology currently available. Furthermore,\ncrawling and indexing very large amounts of data implies\ngreat problems of scalability, and consequently entails not\ninconsiderable costs of hardware and maintenance.\nFor\nmaximum optimization, a crawling system should be able\nto recognize relevant sites and pages, and restrict itself to\ndownloading within a limited time.\nA document or Web page's relevancy may be officially\nrecognized in various ways. The idea of selective crawling\nmay be introduced intuitively by associating each URL u\nwith a score calculation function s\n()\nrespecting relevancy\ncriterion and parameters . In the most basic case, we\nmay assume a Boolean relevancy function, i.e. s(u) = 1 if\nthe document designated by u is relevant and s(u) = 0 if not.\nMore generally, we may think of s(d) as a function with real\nvalues, such as a conditional probability that a document\nbelongs to a certain category according to its content. 
In all\ncases, we should point out that the score calculation function\ndepends only on the URL and and not on the time or state\nof the crawler.\nA general approach for the construction of a selective\ncrawler consists of changing the URL insertion and extraction\npolicy in the queue Q of the crawler. Let us assume\nthat the URLs are sorted in the order corresponding to the\nvalue retrieved by s(u). In this case, we obtain the best-first\nstrategy (see [19]) which consists of downloading URLs\nwith the best scores first). If s(u) provides a good relevancy\nmodel, we may hope that the search process will be guided\ntowards the best areas of the Web.\nVarious studies have been carried out in this direction: for\nexample, limiting the search depth in a site by specifying\nthat pages are no longer relevant after a certain depth. This\namounts to the following equation:\ns\n(depth)\n(u) =\n1, if\n|root(u) u| <\n0, else\n(2)\nwhere root(u) is the root of the site containing u.\nThe\ninterest of this approach lies in the fact that maximizing\nthe search breadth may make it easier for the end-user to\nretrieve the information. Nevertheless, pages that are too\ndeep may be accessed by the user, even if the robot fails to\ntake them into account.\nA second possibility is the estimation of a page's popularity\n. One method of calculating a document's relevancy\nwould relate to the number of backlinks.\ns\n(backlinks)\n(u) =\n1, if indegree(u) >\n0, else\n(3)\nwhere is a threshold.\nIt is clear that s\n(backlinks)\n(u) may only be calculated if\nwe have a complete site graph (site already downloaded\nbeforehand).\nIn practice, we make take an approximate\nvalue and update it incrementally during the crawling\nprocess. A derivative of this technique is used in Google's\nfamous PageRank calculation.\nOUR APPROACH THE DOMINOS SYSTEM\nAs mentioned above, we have divided the system into two\nparts: workers and supervisors. All of these processes may\nbe run on various operating systems (Windows, MacOS X,\nLinux, FreeBSD) and may be replicated if need be. The\nworkers are responsible for processing the URL flow coming\nfrom their supervisors and for executing crawling process\ntasks in the strict sense. They also handle the resolution of\ndomain names by means of their integrated DNS resolver,\nand adjust download speed in accordance with node policy.\nA worker is a light process in the Erlang sense, acting as\na fault tolerant and highly available HTTP client.\nThe\nprocess-handling mode in Erlang makes it possible to create\nseveral thousands of workers in parallel.\nIn our system, communication takes place mainly by sending\nasynchronous messages as described in the specifications\nfor Erlang language. The type of message varies according to\nneed: character string for short messages and binary format\nfor long messages (large data structures or files). Disk access\nis reduced to a minimum as far as possible and structures\nare stored in the real-time Mnesia\n3\ndatabase that forms\n3\nhttp://www.erlang.org/doc/r9c/lib/mnesia-4\n.1.4/doc/html/\n301\na standard part of the Erlang development kit. Mnesia's\nfeatures give it a high level of homogeneity during the base's\naccess, replication and deployment. It is supported by two\ntable management modules ETS and DETS. ETS allows\ntables of values to be managed by random access memory,\nwhile DETS provides a persistent form of management on\nthe disk. Mnesia's distribution faculty provides an efficient\naccess solution for distributed data. 
When a worker moves\nfrom one node to another (code migration), it no longer need\nbe concerned with the location of the base or data. It simply\nhas to read and write the information transparently.\n1\nloop(InternalState) ->\n% Supervisor main\n2\n% loop\n3\nreceive {From,{migrate,Worker,Src,Dest}} ->\n4\n% Migrate the Worker process from\n5\n% Src node to Dest node\n6\nspawn(supervisor,migrate,\n7\n[Worker,Src,Dest]),\n8\n% Infinite loop\n9\nloop(InternalState);\n10\n11\n{From,{replace,OldPid,NewPid,State}} ->\n12\n% Add the new worker to\n13\n% the supervisor state storage\n14\nNewInternalState =\n15\nreplace(OldPid,NewPid,InternalState),\n16\n% Infinite loop\n17\nloop(NewInternalState);\n18\n...\n19\nend.\n20\n21\nmigrate(Pid,Src,Dest) ->\n% Migration\n22\n% process\n23\nreceive\n24\nPid ! {self(), stop},\n25\nreceive\n26\n{Pid,{stopped,LastState}} ->\n27\nNewPid = spawn{Dest,worker,proc,\n28\n[LastState]},\n29\nself() ! {self(), {replace,Pid,\n30\nNewPid,LastState}};\n31\n{Pid,Error} -> ...\n32\nend.\nListing 1: Process Migration\nCode 1describes the migration of a worker process from\none node Src to another Dest.\n4\nThe supervisor receives\nthe migration order for process P id (line 4). The migration\naction is not blocking and is performed in a different Erlang\nprocess (line 7). The supervisor stops the worker with the\nidentifier P id (line 25) and awaits the operation result (line\n26). It then creates a remote worker in the node Dest with\nthe latest state of the stopped worker (line 28) and updates\nits internal state (lines 30 and 12).\n3.1\nDominos Process\nThe Dominos system is different from all the other crawling\nsystems cited above. Like these, the Dominos offering is\non distributed architecture, but with the difference of being\ntotally dynamic. The system's dynamic nature allows its\narchitecture to be changed as required. If, for instance, one\nof the cluster's nodes requires particular maintenance, all of\nthe processes on it will migrate from this node to another.\nWhen servicing is over, the processes revert automatically\n4\nThe character % indicates the beginning of a comment in\nErlang.\nto their original node. Crawl processes may change pool\nso as to reinforce one another if necessary. The addition or\ndeletion of a node in the cluster is completely transparent in\nits execution. Indeed, each new node is created containing a\ncompletely blank system. The first action to be undertaken\nis to search for the generic server in order to obtain the\nparameters of the part of the system that it is to belong\nto. These parameters correspond to a limited view of the\nwhole system. This enables Dominos to be deployed more\neasily, the number of messages exchanged between processes\nto be reduced and allows better management of exceptions.\nOnce the generic server has been identified, binaries are sent\nto it and its identity is communicated to the other nodes\nconcerned.\nDominos Generic Server (GenServer): Erlang process\nresponsible for managing the process identifiers on the\nwhole cluster. To ensure easy deployment of Dominos,\nit was essential to mask the denominations of the\nprocess identifiers. 
Otherwise, a minor change in the\nnames of machines or their IP would have required\ncomplete reorganization of the system.\nGenServer\nstores globally the identifiers of all processes existing\nat a given time.\nDominos RPC Concurrent (cRPC): as its name suggests\n, this process is responsible for delegating the\nexecution of certain remote functions to other processes\n. Unlike conventional RPCs where it is necessary\nto know the node and the object providing these\nfunctions (services), our RPCC completely masks the\ninformation.\nOne need only call the function, with\nno concern for where it is located in the cluster or\nfor the name of the process offering this function.\nMoreover, each RPCC process is concurrent, and\ntherefore manages all its service requests in parallel.\nThe results of remote functions are governed by two\nmodes: blocking or non-blocking. The calling process\nmay therefore await the reply of the remote function\nor continue its execution.\nIn the latter case, the\nreply is sent to its mailbox. For example, no worker\nknows the process identifier of its own supervisor. In\norder to identify it, a worker sends a message to the\nprocess called supervisor. The RPCC deals with the\nmessage and searches the whole cluster for a supervisor\nprocess identifier, starting with the local node.\nThe address is therefore resolved without additional\nnetwork overhead, except where the supervisor does\nnot exist locally.\nDominos Distributed Database (DDB): Erlang process\nresponsible for Mnesia real-time database management\n. It handles the updating of crawled information,\ncrawling progress and the assignment of URLs to be\ndownloaded to workers.\nIt is also responsible for\nreplicating the base onto the nodes concerned and for\nthe persistency of data on disk.\nDominos Nodes: a node is the physical representation\nof a machine connected (or disconnected as the case\nmay be) to the cluster. This connection is considered\nin the most basic sense of the term, namely a simple\nplugging-in (or unplugging) of the network outlet.\nEach node clearly reflects the dynamic character of\nthe Dominos system.\n302\nDominos Group Manager: Erlang process responsible\nfor controlling the smooth running of its child processes\n(supervisor and workers).\nDominos Master-Supervisor Processes: each group\nmanager has a single master process dealing with the\nmanagement of crawling states of progress. It therefore\ncontrols all the slave processes (workers) contained\nwithin it.\nDominos Slave-Worker Processes: workers are the\nlowest-level elements in the crawling process.\nThis\nis the very heart of the Web client wrapping the\nlibCURL.\nWith Dominos architecture being completely dynamic and\ndistributed, we may however note the hierarchical character\nof processes within a Dominos node. This is the only way to\nensure very high fault tolerance. A group manager that fails\nis regenerated by the node on which it depends. A master\nprocess (supervisor) that fails is regenerated by its group\nmanager. Finally, a worker is regenerated by its supervisor.\nAs for the node itself, it is controlled by the Dominos kernel\n(generally on another remote machine). The following code\ndescribes the regeneration of a worker process in case of\nfailure.\n1\n% Activate error handling\n2\nprocess_flag(trap_exit,\ntrue\n),\n3\n...\n4\nloop(InternalState) ->\n% Supervisor main loop\n5\nreceive\n6\n{From,{job,\nName\n,finish}, State} ->\n7\n% Informe the GenServer that the download is ok\n8\n?ServerGen ! 
9
10        % Save the new worker state
11        NewInternalState = save_state(From,State,InternalState),
12
13        % Infinite loop
14        loop(NewInternalState);
15      ...
16      {From,Error} ->                     % Worker crash
17        % Get the last operational state before the crash
18        WorkerState = last_state(From,InternalState),
19
20        % Free all allocated resources
21        free_resources(From,InternalState),
22
23        % Create a new worker with the last operational
24        % state of the crashed worker
25        Pid = spawn(worker,proc,[WorkerState]),
26
27        % Add the new worker to the supervisor state
28        % storage
29        NewInternalState = replace(From,Pid,InternalState),
30
31        % Infinite loop
32        loop(NewInternalState)
33    end.

Listing 2: Regeneration of a Worker Process in Case of Failure

This is the part of the supervisor's main loop that deals with the failure of a worker. As soon as a worker error is received (line 16), the supervisor retrieves the last operational state of the worker that has stopped (line 18), releases all of its allocated resources (line 21) and creates a new worker process with the operational state of the stopped process (line 25), which is then recorded in the supervisor's state storage (line 29). The supervisor then loops, awaiting new messages (line 32). The loop calls (lines 14 and 32) are tail recursive, thereby guaranteeing that the supervision process runs in constant memory space.

3.2 DNS Resolution
Before contacting a Web server, the worker process needs to resolve the server's domain name into a valid IP address through the Domain Name System (DNS). Whereas other systems (Mercator, Internet Archive) are forced to perform DNS resolution each time a new link is identified, this is not necessary with Dominos. Indeed, in the framework of French Web legal deposit, the sites to be archived have been identified beforehand, thus requiring only one DNS resolution per domain name. This considerably increases crawl speed. The sites concerned include all online newspapers, such as LeMonde (http://www.lemonde.fr/), LeFigaro (http://www.lefigaro.fr/), etc., and online television/radio such as TF1 (http://www.tf1.fr/), M6 (http://www.m6.fr/), etc.

DETAILS OF IMPLEMENTATION
The workers are the components responsible for physically crawling on-line contents. They provide a specialized wrapper around the libCURL library (available at http://curl.haxx.se/libcurl/), which represents the heart of the HTTP client. Each worker is interfaced to libCURL by a C driver (shared library). As the system seeks maximum network accessibility (communication protocol support), libCURL appeared to be the most judicious choice when compared with the other available libraries (see http://curl.haxx.se/libcurl/competitors.html). The protocols supported include FTP, FTPS, HTTP, HTTPS and LDAP, as well as certificates, proxies, tunneling, etc. Erlang's portability was a further factor favoring the choice of libCURL. Indeed, libCURL is available for various architectures: Solaris, BSD, Linux, HPUX, IRIX, AIX, Windows, Mac OS X, OpenVMS, etc. Furthermore, it is fast, thread-safe and IPv6 compatible. This choice also opens up a wide variety of functions: redirections are handled, and powerful filtering is possible according to the type of content downloaded, the headers, and the size (partial storage in RAM or on disk depending on the document's size).

4.2 Document Fingerprint
For each download, the worker extracts the hypertext links included in the HTML document and computes a fingerprint (signature) of it. A fast fingerprint (HAVAL on 256 bits) is calculated over the document's content itself so as to differentiate documents with similar contents (e.g. mirror sites). This technique is not new and has already been used in Mercator [13]. It allows redundancies to be eliminated from the archive.
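To make the fingerprinting step concrete, the sketch below shows one way a worker could compute and check such a content signature. It is illustrative only and is not the Dominos implementation: SHA-256 from the standard crypto module stands in for HAVAL (which Erlang/OTP does not ship), and seen_digests is a hypothetical ETS table holding the signatures already archived.

%% Illustrative sketch: content fingerprinting for duplicate elimination.
%% SHA-256 replaces HAVAL-256 here; seen_digests is a hypothetical ETS set.
-module(fingerprint).
-export([init/0, is_duplicate/1]).

init() ->
    ets:new(seen_digests, [set, public, named_table]).

is_duplicate(Content) when is_binary(Content) ->
    Digest = crypto:hash(sha256, Content),            % 256-bit signature
    case ets:insert_new(seen_digests, {Digest}) of
        true  -> false;    % first occurrence of this content: archive it
        false -> true      % signature already known (e.g. a mirror site)
    end.

Checking the signature before archiving is what allows mirror sites to be discarded without comparing documents byte by byte.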
4.3 URL Extraction and Normalization
Unlike other systems that use regular expression libraries such as PCRE (http://www.pcre.org/) for URL extraction, we have opted for the Flex tool, which generates a much faster parser. Flex was compiled with a 256 KB buffer and with all of the table options activated during parsing ("-8 -f -Cf -Ca -Cr -i"). Our current parser analyzes around 3,000 pages/second for a single worker, for an average of 49 KB per page.

According to [20], a URL extraction speed of 300 pages/second may generate a list of more than 2,000 URLs on average. A naive representation of these structures in memory may soon saturate the system. Various solutions have been proposed to alleviate this problem. The Internet Archive [4] crawler uses Bloom filters in random access memory. This makes it possible to have a compact representation of the links retrieved, but it also generates errors (false positives), i.e. certain pages are never downloaded because they collide with other pages in the Bloom filter. Lossless compression may reduce the size of URLs to below 10Kb [2, 21], but this remains insufficient in the case of large-scale crawls. A more ingenious approach is to use persistent structures on disk coupled with a cache, as in Mercator [13].

4.4 URL Caching
In order to speed up processing, we have developed a scalable cache structure for looking up and storing the URLs already archived. Figure 1 describes how such a cache works.

Figure 1: Scalable Cache (a local, per-worker cache: a JudyL-Array indexed by the CRC of the URL, whose values point to JudySL-Arrays keyed by the URL itself; links already encountered are rejected)

The cache is available at the level of each worker. It acts as a filter on the URLs found and blocks those already encountered. The cache needs to be scalable in order to deal with increasing loads. A quick implementation using a non-reversible hash function such as HAVAL, TIGER, SHA-1, GOST, MD5, RIPEMD, etc. would be fatal to the system's scalability: although these functions ensure some degree of uniqueness in the fingerprints they construct, they are too slow to be acceptable here. We cannot afford latency on lookup or insertion of a URL in the cache if the cache may exceed a certain size (over 10^7 key-value pairs on average). This is why we have focused on the construction of a generic cache that allows key-value insertion and lookup in a scalable manner. The Judy-Array API (http://judy.sourceforge.net/) enabled us to achieve this objective. Without going into detail about Judy-Arrays (see their site for more information), our cache is a coherent coupling between one JudyL-Array and N JudySL-Arrays. The JudyL-Array represents a hash table of N = 2^8 or N = 2^16 buckets able to fit into the internal cache of the CPU. It is used to store "key-numeric value" pairs, where the key is a CRC of the URL and the value is a pointer to a JudySL-Array. The JudySL-Array, in turn, is a "key-compressed character string, value" type of hash, in which the key is the URL itself and the value is the number of times that the URL has been seen. This cache construction is completely scalable and makes it possible to have sub-linear response times, or linear in the worst case (see the Judy-Array site for an in-depth analysis of its performance).
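As a rough illustration of this two-level layout (and not of the actual Judy-based implementation, which relies on the C Judy library), the sketch below reproduces the same structure with ordinary Erlang maps: a first level keyed by the CRC of the URL and a second level keyed by the URL itself, holding the number of times it has been seen.

%% Illustrative sketch: the JudyL/JudySL coupling emulated with Erlang maps.
%% The cache has the shape #{Crc => #{Url => Count}}.
-module(url_cache).
-export([new/0, seen/2]).

new() -> #{}.

%% Returns {AlreadySeen, UpdatedCache}.
seen(Url, Cache) when is_binary(Url) ->
    Crc    = erlang:crc32(Url),             % first-level key (CRC of the URL)
    Bucket = maps:get(Crc, Cache, #{}),     % second-level table for this CRC
    Count  = maps:get(Url, Bucket, 0),      % occurrences of this exact URL
    {Count > 0, Cache#{Crc => Bucket#{Url => Count + 1}}}.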
In the section on experimentation (section 5) we will see the results of this type of construction.

4.5 Limiting Disk Access
Our aim here is to eliminate random disk access completely. One simple idea, used in [20], is to periodically switch the structures requiring a lot of memory over onto disk. For example, random access memory can be used to keep only those URLs found most recently or most frequently, in order to speed up comparisons. This requires no additional development and is what we have decided to use. The persistence of data on disk depends on their size in memory (DS) and on their age (DA). The data in memory are distributed transparently via Mnesia, which is specially designed for this kind of situation. Data may be replicated ({ram_copies, [Nodes]}, {disc_copies, [Nodes]}) or fragmented ({frag_properties, ...}) on the nodes in question.

According to [20], there are on average 8 non-duplicated hypertext links per page downloaded. This means that the number of pages retrieved and not yet archived increases considerably: after archiving 20 million pages, over 100 million URLs would still be waiting. This has various repercussions, as newly discovered URLs will be crawled only several days, or even weeks, later. At this rate, the freshness of the data in the base is directly affected.

4.6 High Availability
In order to understand the notion of High Availability, we first need to address the difference between a system's reliability and its availability. Reliability is an attribute that measures service continuity when no failure occurs. Manufacturers generally provide a statistical estimation of this value for their equipment, the MTBF (Mean Time Between Failures). A high MTBF provides a valuable indication of a component's ability to avoid overly frequent failure. In the case of a complex system (one that can be broken down into hardware or software parts), we talk about the MTTF (Mean Time To Failure). This denotes the average time elapsed until service stops as the result of a failure in a component or in software.

The attribute of availability is more difficult to calculate, as it includes a system's ability to react correctly in case of failure in order to restart service as quickly as possible. It is therefore necessary to quantify the time interval during which service is unavailable before being re-established; the acronym MTTR (Mean Time To Repair) is used to represent this value. The formula used to calculate the rate of a system's availability is as follows:

availability = MTTF / (MTTF + MTTR)    (4)

A system with a high level of availability should therefore have either a large MTTF or a small MTTR. Another, more practical, approach consists of measuring the time period during which service is down in order to evaluate the level of availability. This is the method most frequently adopted, even if it fails to take account of the frequency of failures, focusing rather on their duration.
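As a small worked example of equation (4), with invented figures rather than measurements, the helpers below turn an MTTF and an MTTR into an availability rate and into the number of leading "nines" used in the next paragraph: an MTTF of 30 days (2,592,000 s) combined with an MTTR of 9 s gives an availability of about 0.9999965, i.e. roughly five nines.

%% Illustrative sketch: equation (4) and the corresponding number of "nines".
%% availability(2592000, 9) ~= 0.9999965 (about five nines); figures are invented.
-module(ha).
-export([availability/2, nines/1]).

availability(MTTF, MTTR) when MTTF > 0, MTTR >= 0 ->
    MTTF / (MTTF + MTTR).

nines(A) when A > 0, A < 1 ->
    trunc(-math:log10(1 - A)).      % e.g. nines(0.9999965) =:= 5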
Calculation is usually based on a calendar year: the higher the percentage of service availability, the nearer it comes to High Availability. It is fairly easy to qualify the level of High Availability of a service from the cumulated downtime, by using the normalized principle of "nines" (below three nines, we are no longer talking about High Availability, but merely availability). In order to provide an estimate of Dominos' High Availability, we carried out performance tests by fault injection. It is clear that a more accurate way of measuring this criterion would be to let the system run for a whole year, as explained above; however, time constraints led us to adopt this solution. Our injector consists of placing faulty code in each part of the system and then measuring the time required for the system to make the service available again. Once again, Erlang proved to be an excellent choice for setting up these regression tests. Table 1 shows the average time required by Dominos to respond to these cases of service unavailability.

Service      Error             MTTR (microsec)
GenServer    10^3 bad match    320
cRPC         10^3 bad match    70
DDB          10^7 tuples       9 x 10^6
Node         10^3 bad match    250
Supervisor   10^3 bad match    60
Worker       10^3 bad match    115

Table 1: MTTR Dominos

Table 1 clearly shows Dominos' High Availability. We see that for 10^3 injected "bad match" errors, the system resumes service virtually instantaneously. The DDB was tested on 10^7 tuples in random access memory and resumed service after approximately 9 seconds. This corresponds to an excellent MTTR, given that the injections were made on a PIII 966 MHz with 512 MB of RAM. From these results, we may label our system as High Availability, as opposed to other architectures that consider High Availability only in the sense that a failure does not affect the other components of the system, but in which restarting the service of a failed component unfortunately requires manual intervention every time.

EXPERIMENTATION
This section describes Dominos' experimental results, obtained on 5 DELL machines:

nico: Intel Pentium 4, 1.6 GHz, 256 MB RAM. Crawl node (supervisor, workers). Activates a local cRPC.
zico: Intel Pentium 4, 1.6 GHz, 256 MB RAM. Crawl node (supervisor, workers). Activates a local cRPC.
chopin: Intel Pentium 3, 966 MHz, 512 MB RAM. Main node hosting the GenServer and the DB; also handles crawling (supervisor, workers). Activates a local cRPC.
gao: Intel Pentium 3, 500 MHz, 256 MB RAM. Node for DB fragmentation. Activates a local cRPC.
margo: Intel Pentium 2, 333 MHz, 256 MB RAM. Node for DB fragmentation. Activates a local cRPC.

Machines chopin, gao and margo are not dedicated solely to crawling and are used as everyday workstations.
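As an indication of how these machines are tied together (a sketch only; the actual Dominos deployment scripts are not shown in this paper), a distributed Erlang cluster over the five hosts can be assembled as follows, assuming each host starts its node with, for example, "erl -sname dominos -setcookie dominos":

%% Illustrative sketch: joining the five test machines into one Erlang cluster.
%% The node names are assumptions derived from the host names listed above.
-module(cluster).
-export([connect/0]).

connect() ->
    Nodes = [dominos@nico, dominos@zico, dominos@chopin,
             dominos@gao, dominos@margo],
    lists:foreach(fun net_adm:ping/1, Nodes),   % pong | pang per node
    nodes().                                    % nodes now visible from this one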
Disk size is not taken into account, as no data were actually stored during these tests; everything was therefore carried out in random access memory, over a 100 Mb/s network. Dominos performed 25,116,487 HTTP requests after 9 hours of crawling, an average of 816 documents/second at 49 KB per document. Three nodes (nico, zico and chopin) were used for crawling, each running 400 workers. We restricted ourselves to a total of 1,200 workers because of problems generated by Dominos at the intranet level: the firewall set up to filter access is considerably detrimental to performance because of its inability to keep up with the load imposed by Dominos. Third-party tests have shown that peaks of only 4,000 HTTP requests/second cause the immediate collapse of the firewall. The firewall is not the only limiting factor, as the same tests have shown the inability of Web servers such as Apache2, Caudium or Jigsaw to withstand such loads (see http://www.sics.se/~joe/apachevsyaws.html).

Figure 2 (left part) shows the average URL extraction time per document crawled using a single worker. The abscissa (x) axis represents the number of documents treated, and the ordinate (y) axis gives the corresponding extraction time in microseconds. In the right-hand part, the abscissa represents the same quantity, though this time in terms of data volume (Mb). We can see a high level of parsing, reaching an average of 3,000 pages/second at a speed of 70 Mb/second.

Figure 2: Link Extraction (left: average parsing time, in microseconds, versus the number of documents treated; right: the same versus document volume in Mb)

In Figure 3 we see that URL normalization is as efficient as extraction in terms of speed. The abscissa axis at the top (respectively at the bottom) represents the number of documents processed per normalization phase (respectively the quantity of documents in terms of volume). Each worker normalizes on average 1,000 documents/second, which is equivalent to 37,000 URLs/second at a speed of 40 Mb/second (a sketch of a typical normalization step is given below). Finally, the URL cache structure ensures a high degree of scalability (Figure 3). The abscissa axis in this figure represents the number of key-values inserted or retrieved. The cache behaviour is very close to a step function, due to key compression in the Judy-Array: after an increase in insertion/retrieval time, the curve plateaus in bands of 100,000 key-values.
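The normalization step referred to above can be illustrated as follows (a sketch only; the exact rules applied by Dominos are not detailed in this paper). It applies a few typical transformations, lower-casing the scheme and the host, dropping the default HTTP port and defaulting an empty path to "/", using the standard uri_string module:

%% Illustrative sketch: typical URL normalization steps (not the Dominos rules).
-module(url_norm).
-export([normalize/1]).

normalize(Url) when is_list(Url) ->
    Parts  = uri_string:parse(Url),
    Scheme = string:lowercase(maps:get(scheme, Parts, "http")),
    Host   = string:lowercase(maps:get(host, Parts, "")),
    Path   = case maps:get(path, Parts, "") of "" -> "/"; P -> P end,
    Parts1 = Parts#{scheme => Scheme, host => Host, path => Path},
    Parts2 = case {Scheme, maps:get(port, Parts1, undefined)} of
                 {"http", 80} -> maps:remove(port, Parts1);  % drop default port
                 _            -> Parts1
             end,
    uri_string:recompose(Parts2).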
We should however point out that URL extraction and normalization also make use of this type of cache, so as to avoid processing a URL that has already been encountered.

Figure 3: URL Normalization and Cache Performance (average number and size of normalized documents and of normalized URLs versus processing time; scalable cache insertion versus retrieval times)

CONCLUSION
In the present paper we have introduced a high-availability crawling system called Dominos. This system has been created in the framework of the experimentation for French Web legal deposit carried out at the Institut National de l'Audiovisuel (INA). Dominos is a dynamic system, whereby the processes making up its kernel are mobile. 90% of this system was developed using the Erlang programming language, which accounts for its highly flexible deployment, its maintainability and its enhanced fault tolerance. Despite having different objectives, we have been able to compare it with other documented Web crawling systems (Mercator, Internet Archive, etc.) and have shown it to be superior in terms of crawl speed, document parsing and process management without system restart.

Dominos is more complex than its description here. We have not touched upon archival storage and indexing; we have preferred to concentrate on the details of the implementation of the Dominos kernel itself, a strategic component that is often overlooked by other systems (in particular those that are proprietary, others being inefficient). However, there is still room for improvement. At present, crawled archives are managed by NFS, a file system that is only moderately efficient for this type of problem. A switchover to Lustre (http://www.lustre.org/), a distributed file system with a radically higher level of performance, is underway.

REFERENCES
[1] Z. BarYossef, A. Berg, S. Chien, J. Fakcharoenphol, and D. Weitz. Approximating aggregate queries about web pages via random walks. In Proc. of 26th Int. Conf. on Very Large Data Bases, 2000.
[2] K. Bharat, A. Broder, M. Henzinger, P. Kumar, and S. Venkatasubramanian. The connectivity server: Fast access to linkage information on the Web, 1998.
[3] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proc. of the Seventh World-Wide Web Conference, 1998.
[4] M. Burner. Crawling towards eternity: Building an archive of the world wide web. http://www.webtechniques.com/archives/1997/05/burner/, 1997.
[5] S. Chakrabarti, M. V. D. Berg, and B. Dom. Distributed hypertext resource discovery through example. In Proc. of 25th Int. Conf. on Very Large Data Bases, pages 375-386, 1997.
[6] S. Chakrabarti, M. V. D. Berg, and B. Dom. Focused crawling: A new approach to topic-specific web resource discovery. In Proc. of the 8th Int. World Wide Web Conference, 1999.
[7] J. Cho and H. Garcia-Molina. The evolution of the web and implications for an incremental crawler. In Proc. of 26th Int. Conf.
on Very Large Data Bases,\npages 117128, 2000.\n[8] J. Cho and H. Garcia-Molina. Synchronizing a\ndatabase to improve freshness. In Proc. of the ACM\nSIGMOD Int. Conf. on Management of Data, 2000.\n[9] J. Cho, H. Garcia-Molina, and L. Page. Efficient\ncrawling through url ordering. In 7th Int. World Wide\nWeb Conference, 1998.\n[10] M. Diligenti, F. Coetzee, S. Lawrence, C. Giles, and\nM. Gori. Focused crawling using context graphs. In\nProc. of 26th Int. Conf. on Very Large Data Bases,\n2000.\n[11] M. Henzinger, A. Heydon, M. Mitzenmacher, and\nM. Najork. Measuring index quality using random\nwalks on the web. In Proc. of the 8th Int. World Wide\nWeb Conference (WWW8), pages 213225, 1999.\n[12] M. Henzinger, A. Heydon, M. Mitzenmacher, and\nM. Najork. On near-uniform url sampling. In Proc. of\nthe 9th Int. World Wide Web Conference, 2000.\n[13] A. Heydon and M. Najork. Mercator: A scalable,\nextensible web crawler. World Wide Web Conference,\npages 219229, 1999.\n[14] J. Hirai, S. Raghavan, H. Garcia-Molina, and\nA. Paepcke. Webbase : : A repository of web pages. In\nProc. of the 9th Int. World Wide Web Conference,\n2000.\n[15] B. Kahle. Archiving the internet. Scientific American,\n1997.\n[16] S. Lawrence and C. L. Giles. Searching the world wide\nweb. Science 280, pages 98100, 1998.\n[17] M. Najork and J. Wiener. Breadth-first search\ncrawling yields high-quality pages. In 10th Int. World\nWide Web Conference, 2001.\n[18] J. Rennie and A. McCallum. Using reinforcement\nlearning to spider the web efficiently. In Proc. of the\nInt. Conf. on Machine Learning, 1999.\n[19] S. Russel and P. Norvig. Artificial Intelligence: A\nmodern Approach. Prentice Hall, 1995.\n[20] V. Shkapenyuk and T. Suel. Design and\nimplementation of a high-performance distributed web\ncrawler. Polytechnic University: Brooklyn, Mars 2001.\n[21] T. Suel and J. Yuan. Compressing the graph structure\nof the web. In Proc. of the IEEE Data Compression\nConference, 2001.\n[22] J. Talim, Z. Liu, P. Nain, and E. Coffman. Controlling\nrobots of web search engines. In SIGMETRICS\nConference, 2001.\n306\n", "keywords": "Breadth first crawling;Hierarchical Cooperation;limiting disk access;fault tolerance;Dominos nodes;dominos process;Dominos distributed database;breadth-first crawling;repetitive crawling;URL caching;Dominos Generic server;Document fingerprint;Deep web crawling;Dominos RPC concurrent;Random walks and sampling;Web Crawler;maintaiability and configurability;deep web crawling;High Availability System;real-time distributed system;crawling system;high performance crawling system;high availability;Erlang development kit;targeted crawling"} {"name": "101", "title": "Hiperlan/2 Public Access Interworking with 3G Cellular Systems", "abstract": "This paper presents a technical overview of the Hiperlan/2 3G interworking concept. It does not attempt to provide any business justification or plan for Public Access operation. After a brief resume of public access operation below, section 2 then introduces an overview of the technologies concerned. Section 3 describes the system approach and presents the current reference architecture used within the BRAN standardisation activity. Section 4 then goes on to cover in more detail the primary functions of the system such as authentication, mobility, quality of service (QoS) and subscription. 
It is worth noting that since the Japanese WLAN standard HiSWANa is very similar to Hiperlan/2, much of the technical information within this paper is directly applicable to this system, albeit with some minor changes to the authentication scheme. Additionally the high level 3G and external network interworking reference architecture is also applicable to IEEE 802.11. Finally, section 5 briefly introduces the standardisation relationships between ETSI BRAN, WIG, 3GPP, IETF, IEEE 802.11 and MMAC HSWA.", "fulltext": "1.1. Public access operation\nRecently, mobile business professionals have been looking\nfor a more efficient way to access corporate information systems\nand databases remotely through the Internet backbone.\nHowever, the high bandwidth demand of the typical office applications\n, such as large email attachment downloading, often\ncalls for very fast transmission capacity. Indeed certain hot\nspots, like hotels, airports and railway stations are a natural\nplace to use such services. However, in these places the time\navailable for information download typically is fairly limited.\nIn light of this, there clearly is a need for a public wireless\naccess solution that could cover the demand for data intensive\napplications and enable smooth on-line access to corporate\ndata services in hot spots and would allow a user to roam from\na private, micro cell network (e.g., a Hiperlan/2 Network) to a\nwide area cellular network or more specifically a 3G network.\nTogether with high data rate cellular access, Hiperlan/2 has\nthe potential to fulfil end user demands in hot spot environments\n. Hiperlan/2 offers a possibility for cellular operators\nto offer additional capacity and higher bandwidths for end\nusers without sacrificing the capacity of the cellular users, as\nHiperlans operate on unlicensed or licensed exempt frequency\nbands. Also, Hiperlan/2 has the QoS mechanisms that are capable\nto meet the mechanisms that are available in the 3G\nsystems. Furthermore, interworking solutions enable operators\nto utilise the existing cellular infrastructure investments\nand well established roaming agreements for Hiperlan/2 network\nsubscriber management and billing.\nTechnology overview\nThis section briefly introduces the technologies that are addressed\nwithin this paper.\n2.1. Hiperlan/2 summary\nHiperlan/2 is intended to provide local wireless access to IP,\nEthernet, IEEE 1394, ATM and 3G infrastructure by both stationary\nand moving terminals that interact with access points.\nThe intention is that access points are connected to an IP, Ethernet\n, IEEE 1394, ATM or 3G backbone network. A number\nof these access points are required to service all but the small-44\nMCCANN AND FLYGARE\nest networks of this kind, and therefore the wireless network\nas a whole supports handovers of connections between access\npoints.\n2.2. Similar WLAN interworking schemes\nIt should be noted that the interworking model presented in\nthis paper is also applicable to the other WLAN systems, i.e.\nIEEE 802.11a/b and MMAC HiSWANa (High Speed Wireless\nAccess Network), albeit with some minor modifications\nto the authentications schemes. It has been the intention of\nBRAN to produce a model which not only fits the requirements\nof Hiperlan/23G interworking, but also to try and\nmeet those of the sister WLAN systems operating in the same\nmarket. 
A working agreement has been underway between\nETSI BRAN and MMAC HSWA for over 1 year, and with\nthe recent creation of WIG (see section 5), IEEE 802.11 is\nalso working on a similar model.\n2.3. 3G summary\nWithin the framework of International Mobile Telecommunications\n2000 (IMT-2000), defined by the International\nTelecommunications Union (ITU), the 3rd Generation Partnership\nProject (3GPP) are developing the Universal Mobile\nTelecommunications System (UMTS) which is one of the major\nthird generation mobile systems. Additionally the 3rd\nGeneration Partnership Project 2 (3GPP2) is also developing\nanother 3G system, Code Division Multiple Access 2000\n(CDMA-2000). Most of the work within BRAN has concentrated\non UMTS, although most of the architectural aspects\nare equally applicable to Hiperlan/2 interworking with\nCDMA-2000 and indeed pre-3G systems such as General\nPacket Radio Services (GPRS).\nThe current working UMTS standard, Release 4, of UMTS\nwas finalised in December 2000 with ongoing development\nwork contributing to Release 5, due to be completed by the\nend of 2002. A future release 6 is currently planned for the autumn\nof 2003, with worldwide deployment expected by 2005.\nSystem approach\nThis section describes the current interworking models being\nworked upon within BRAN at the current time. The BRAN\nNetwork Reference Architecture, shown in figure 1, identifies\nthe functions and interfaces that are required within a Hiperlan/2\nnetwork in order to support inter-operation with 3G systems\n.\nThe focus of current work is the interface between the\nAccess Point (AP) and the Service provider network (SPN)\nwhich is encapsulated by the Lx interface. The aim of the\nHiperlan/23G interworking work item is to standardise these\ninterfaces, initially focusing on AAA (Authentication, Authorisation\nand Accounting) functionality.\nA secondary aim is to create a model suitable for all the\n5 GHz WLAN systems (e.g., Hiperlan/2, HiSWANa, IEEE\nFigure 1. Reference architecture.\n802.11a) and all 3G systems (e.g., CDMA-2000, UMTS),\nthus creating a world wide standard for interworking as mentioned\nin section 5.\nOther interfaces between the AP and external networks and\ninterfaces within the AP are outside the scope of this current\nwork.\nFigure 1 shows the reference architecture of the interworking\nmodel. It presents logical entities within which the following\nfunctions are supported:\nAuthentication: supports both SIM-based and SIM-less\nauthentication. The mobile terminal (MT) communicates\nvia the Attendant with an authentication server in the visited\nnetwork, for example a local AAA server, across the\nLs interface.\nAuthorisation and User Policy: the SPN retrieves authorisation\nand user subscription information from the home\nnetwork when the user attaches to it. Authorisation information\nis stored within a policy decision function in the\nSPN. Interfaces used for this are Lp and Ls.\nAccounting: the resources used by a MT and the QoS provided\nto a user are both monitored by the Resource Monitor\n. Accounting records are sent to accounting functions\nin the visited network via the La interface.\nNetwork Management: the Management Agent provides\nbasic network and performance monitoring, and allows the\nconfiguration of the AP via the Lm interface.\nAdmission Control and QoS: a policy decision function in\nthe SPN decides whether a new session with a requested\nQoS can be admitted based on network load and user subscription\ninformation. 
The decision is passed to the Policy\nEnforcement function via the Lp interface.\nInter-AP Context Transfer: the Handover Support function\nallows the transfer of context information concerning\na user/mobile node, e.g., QoS state, across the Lh interface\nfrom the old to the new AP between which the mobile is\nhanding over.\nHIPERLAN/2 PUBLIC ACCESS INTERWORKING WITH 3G CELLULAR SYSTEMS\n45\nMobility: mobility is a user plane function that performs\nre-routing of data across the network. The re-routing may\nsimply be satisfied by layer 2 switching or may require\nsupport for a mobility protocol such as Mobile IP depending\non the technology used within the SPN. Mobility is an\nattribute of the Lr interface.\nLocation Services: the Location Server function provides\npositioning information to support location services. Information\nis passed to SPN location functions via the Ll\ninterface.\nPrimary functions\nThis section describes the primary functions of this model (refer\nto figure 1) in further detail, specifically: authentication\nand accounting, mobility and QoS.\n4.1. Authentication and authorisation\nA key element to the integration of disparate systems is the\nability of the SPN to extract both authentication and subscription\ninformation from the mobile users' home networks when\nan initial association is requested. Many users want to make\nuse of their existing data devices (e.g., Laptop, Palmtop) without\nadditional hardware/software requirements. Conversely\nfor both users and mobile operators it is beneficial to be able\nto base the user authentication and accounting on existing cellular\naccounts, as well as to be able to have Hiperlan/2-only\noperators and users; in any case, for reasons of commonality\nin MT and network (indeed SPN) development it is important\nto be able to have a single set of AAA protocols which\nsupports all the cases.\n4.1.1. Loose coupling\nThe rest of this paper concentrates on loose coupling solutions\n. \"Loose coupling\", is generally defined as the utilisation\nof Hiperlan/2 as a packet based access network complementary\nto current 3G networks, utilising the 3G subscriber databases\nbut without any user plane Iu type interface, as shown\nin figure 1. Within the UMTS context, this scheme avoids\nany impact on the SGSN and GGSN nodes. Security, mobility\nand QoS issues are addressed using Internet Engineering\nTask Force (IETF) schemes.\nOther schemes which essentially replace the User Terminal\nRadio Access Network (UTRAN) of UMTS with a HIRAN\n(Hiperlan Radio Access Network) are referred to as \"Tight\nCoupling\", but are not currently being considered within the\nwork of BRAN.\n4.1.2. Authentication flavours\nThis section describes the principle functions of the loose\ncoupling interworking system and explains the different authentication\nflavours that are under investigation. The focus\nof current work is the interface between the AP and the SPN.\nOther interfaces between the AP and external networks and\ninterfaces within the AP are initially considered to be implementation\nor profile specific.\nThe primary difference between these flavours is in the\nauthentication server itself, and these are referred to as the\n\"IETF flavour\" and the \"UMTS-HSS flavour\", where the\nHome Subscriber Server (HSS) is a specific UMTS term for a\ncombined AAA home server (AAAH)/Home Location Register\n(HLR) unit. The motivation for network operators to build\nup Hiperlan/2 networks based on each flavour may be different\nfor each operator. 
However, both flavours offer a maximum\nof flexibility through the use of separate Interworking\nUnits (IWU) and allow loose coupling to existing and future\ncellular mobile networks. These alternatives are presented in\nfigure 2.\nIETF flavour.\nThe IETF flavour outlined in figure 2 is driven\nby the requirement to add only minimal software functionality\nto the terminals (e.g., by downloading java applets), so\nthat the use of a Hiperlan/2 mobile access network does not\nrequire a radical change in the functionality (hardware or software\n) compared to that required by broadband wireless data\naccess in the corporate or home scenarios. Within a multiprovider\nnetwork, the WLAN operator (who also could be a\nnormal ISP) does not necessary need to be the 3G operator\nas well, but there could still be an interworking between the\nnetworks.\nWithin this approach Hiperlan/2 users may be either existing\n3G subscribers or just Hiperlan/2 network subscribers.\nThese users want to make use of their existing data devices\n(e.g., Laptop, Palmtop) without additional hardware/software\nrequirements. For both users and mobile operators it is beneficial\nto be able to base the user authentication and accounting\non existing cellular accounts, as well as to be able to have\nHiperlan/2-only operators and users; in any case, for reasons\nof commonality in MT and AP development it is important to\nbe able to have a single set of AAA protocols which supports\nall the cases.\nUMTS-HSS flavour.\nAlternatively the UMTS flavour (also\ndescribed within figure 1) allows a mobile subscriber using\na Hiperlan/2 mobile access network for broadband wireless\ndata access to appear as a normal cellular user employing\nstandard procedures and interfaces for authentication purposes\n. It is important to notice that for this scenario functionality\nnormally provided through a user services identity\nmodule (USIM) is required in the user equipment. The USIM\nprovides new and enhanced security features in addition to\nthose provided by 2nd Generation (2G) SIM (e.g., mutual authentication\n) as defined by 3GPP (3G Partnership Program).\nThe UMTS-HSS definitely requires that a user is a native\ncellular subscriber while in addition and distinctly from the\nIETF flavoured approach standard cellular procedures and\nparameters for authentication are used (e.g., USIM quintets).\nIn this way a mobile subscriber using a Hiperlan/2 mobile access\nnetwork for broadband wireless data access will appear\nas a normal cellular user employing standard procedures and\ninterfaces for authentication purposes. It is important to notice\nthat for this scenario USIM functionality is required in\nthe user equipment.\n46\nMCCANN AND FLYGARE\nFigure 2. Loose coupling authentication flavours.\nFor the IETF flavoured approach there is no need to integrate\nthe Hiperlan/2 security architecture with the UMTS\nsecurity architecture [2]. It might not even be necessary to\nimplement all of the Hiperlan/2 security features if security is\napplied at a higher level, such as using IPsec at the IP level.\nAn additional situation that must be considered is the use of\npre-paid SIM cards. This scenario will introduce additional\nrequirements for hot billing and associated functions.\n4.1.3. EAPOH\nFor either flavour authentication is carried out using a mechanism\nbased on EAP (Extensible Authentication Protocol) [3].\nThis mechanism is called EAPOH (EAP over Hiperlan/2) and\nis analogous to the EAPOL (EAP over LANs) mechanism as\ndefined in IEEE 802.1X. 
On the network side, Diameter [4]\nis used to relay EAP packets between the AP and AAAH.\nBetween the AP and MT, EAP packets and additional Hiperlan/2\nspecific control packets (termed pseudo-EAP packets)\nare transferred over the radio interface. This scheme directly\nsupports IETF flavour authentication, and by use of the pro-posed\nEAP AKA (Authentication and Key Agreement) mechanism\nwould also directly support the UMTS flavour authentication\n.\nOnce an association has been established, authorisation information\n(based on authentication and subscription) stored\nwithin a Policy Decision Function within the SPN itself can\nbe transmitted to the AP. This unit is then able to regulate services\nsuch as time-based billing and allocation of network and\nradio resources to the required user service. Mobile users with\ndifferent levels of subscription (e.g., \"bronze, silver, gold\")\ncan be supported via this mechanism, with different services\nbeing configured via the policy interface. A change in authentication\ncredentials can also be managed at this point.\n4.1.4. Key exchange\nKey agreement for confidentiality and integrity protection is\nan integral part of the UMTS authentication procedure, and\nhence the UTRAN confidentiality and integrity mechanisms\nshould be reused within the Hiperlan/2 when interworking\nwith a 3G SPN (i.e. core network). This will also increase\nthe applied level of security.\nThe Diffie-Hellman encryption key agreement procedure,\nas used by the Hiperlan/2 air interface, could be used to improve\nuser identity confidentiality. By initiating encryption\nbefore UMTS AKA is performed, the user identity will not\nhave to be transmitted in clear over the radio interface, as\nis the case in UMTS when the user enters a network for the\nfirst time. Thus, this constitutes an improvement compared to\nUMTS security.\nIt is also important to have a secure connection between\nAPs within the same network if session keys or other sensitive\ninformation are to be transferred between them. A secure\nconnection can either be that they for some reason trust\neach other and that no one else can intercept the communication\nbetween them or that authentication is performed and\nintegrity and confidentiality protection are present.\n4.1.5. Subscriber data\nThere are three basic ways in which the subscriber management\nfor Hiperlan/2 and 3G users can be co-ordinated:\nHave the interworking between the Hiperlan/2 subscriber\ndatabase and HLR/HSS. This is for the case where the in-HIPERLAN/2\nPUBLIC ACCESS INTERWORKING WITH 3G CELLULAR SYSTEMS\n47\nterworking is managed through a partnership or roaming\nagreement.\nThe administrative domains' AAA servers share security\nassociation or use an AAA broker.\nThe Hiperlan/2 authentication could be done on the basis\nof a (U)SIM token. The 3G authentication and accounting\ncapabilities could be extended to support access authentication\nbased on IETF protocols. This means either integrating\nHLR and AAA functions within one unit (e.g.,\na HSS unit), or by merging native HLR functions of the\n3G network with AAA functions required to support IP\naccess.\nBased on these different ways for subscriber management,\nthe user authentication identifier can be on three different formats\n:\nNetwork Address Identifier (NAI),\nInternational Mobile Subscriber Identity (IMSI) (requires\na (U)SIM card), and\nIMSI encapsulated within a NAI (requires a (U)SIM card).\n4.1.6. 
Pre-paid SIM cards\nAs far as the HLR within the SPN is concerned, it cannot\ntell the difference between a customer who is pre-paid or not.\nHence, this prevents a non-subscriber to this specific 3G network\nfrom using the system, if the operator wishes to impose\nthis restriction.\nAs an example, pre-paid calls within a 2G network are\nhandled via an Intelligent Network (IN) probably co-located\nwith the HLR. When a call is initiated, the switch can be pro-grammed\nwith a time limit, or if credit runs out the IN can\nsignal termination of the call. This then requires that the SPN\nknows the remaining time available for any given customer.\nCurrently the only signals that originate from the IN are to\nterminate the call from the network side.\nThis may be undesirable in a Hiperlan/23G network, so\nthat a more graceful solution is required. A suitable solution\nis to add pre-paid SIM operation to our system together with\nhot billing (i.e. bill upon demand) or triggered session termination\n. This could be achieved either by the AAAL polling\nthe SPN utilising RADIUS [5] to determine whether the customer\nis still in credit, or by using a more feature rich protocol\nsuch as Diameter [4] which allows network signalling directly\nto the MT.\nThe benefit of the AAA approach is to allow the operator\nto present the mobile user with a web page (for example), as\nthe pre-paid time period is about to expire, allowing them to\npurchase more airtime.\nAll these solutions would require an increased integration\neffort with the SPN subscriber management system. Further\nadditional services such as Customized Applications for\nMobile Network Enhanced Logic (CAMEL) may also allow\nroaming with pre-paid SIM cards.\n4.2. Accounting\nIn the reference architecture of figure 2, the accounting function\nmonitors the resource usage of a user in order to allow\ncost allocation, auditing and billing. The charging/accounting\nis carried out according to a series of accounting and resource\nmonitoring metrics, which are derived from the policy function\nand network management information.\nThe types of information needed in order to monitor each\nuser's resource consumption could include parameters such\nas, for example, volume of traffic, bandwidth consumption,\netc. Each of these metrics could have AP specific aspects\nconcerning the resources consumed over the air interface and\nthose consumed across the SPN, respectively. As well as providing\ndata for billing and auditing purposes, this information\nis exchanged with the Policy Enforcement/Decision functions\nin order to provide better information on which to base policy\ndecisions.\nThe accounting function processes the usage related information\nincluding summarisation of results and the generation\nof session records. This information may then be forwarded\nto other accounting functions within and outside the network,\nfor example a billing function. This information may also be\npassed to the Policy Decision function in order to improve\nthe quality of policy decisions; vice versa the Policy Decision\nfunction can give information about the QoS, which may affect\nthe session record. There are also a number of extensions\nand enhancements that can be made to the basic interworking\nfunctionality such as those for the provision of support for\nQoS and mobility.\nIn a multiprovider network, different sorts of inter-relationships\nbetween the providers can be established. 
The inter-relationship\nwill depend upon commercial conditions, which\nmay change over time. Network Operators have exclusive\nagreements with their customers, including charging and\nbilling, and also for services provided by other Network Op-erators/Service\nProviders. Consequently, it must be possible\nto form different charging and accounting models and this requires\ncorrespondent capabilities from the networks.\nCharging of user service access is a different issue from the\nissue of accounting between Network Operators and Service\nProviders. Although the issues are related, charging and accounting\nshould be considered separately. For the accounting\nissue it is important for the individual Network Operator or\nService Provider to monitor and register access use provided\nto his customers.\nNetwork operators and service providers that regularly\nprovide services to the same customers could either charge\nand bill them individually or arrange a common activity. For\njoint provider charging/billing, the providers need revenue accounting\nin accordance with the service from each provider.\nFor joint provider charging of users, it becomes necessary\nto transfer access/session related data from the providers to\nthe charging entity. Mechanisms for revenue accounting are\nneeded, such as technical configuration for revenue accounting\n. This leads to transfer of related data from the Network\n48\nMCCANN AND FLYGARE\nOperator and/or Service Providers to the revenue accounting\nentity.\nThe following parameters may be used for charging and\nrevenue accounting:\nbasic access/session (pay by subscription),\ntoll free (like a 0800 call),\npremium rate access/session,\naccess/session duration,\ncredit card access/session,\npre-paid,\ncalendar and time related charging,\npriority,\nQuality of Service,\nduration dependent charging,\nflat rate,\nvolume of transferred packet traffic,\nrate of transferred packet traffic (Volume/sec),\nmultiple rate charge.\n4.3. Mobility\nMobility can be handled by a number of different approaches.\nIndeed many mobility schemes have been developed in the\nIETF that could well be considered along with the work of the\nMIND (Mobile IP based Network Developments) project that\nhas considered mobility in evolved IP networks with WLAN\ntechnologies. Mobility support is desirable as this functionality\nwould be able to provide support for roaming with an\nactive connection between the interworked networks, for example\n, to support roaming from UMTS to WLAN in a hotspot\nfor the downloading of large data.\nIn the loose coupling approach, the mobility within the\nHiperlan/2 network is provided by native Hiperlan/2 (i.e.\nRLC layer) facilities, possibly extended by the Convergence\nLayer (CL) in use (e.g., the current Ethernet CL [6], or a future\nIP CL). This functionality should be taken unchanged\nin the loose coupling approach, i.e. handover between access\npoints of the same Hiperlan/2 network does not need\nto be considered especially here as network handover capabilities\nof Hiperlan/2 RLC are supported by both MTs and\nAPs.\nGiven that Hiperlan/2 network handover is supported, further\ndetails for completing the mobility between access points\nare provided by CL dependent functionality.\nCompletion of this functionality to cover interactions between\nthe APs and other parts of the network (excluding the\nterminal and therefore independent of the air interface) are\ncurrently under development outside BRAN. 
In the special\ncase where the infrastructure of a single Hiperlan/2 network\nspans more than one IP sub-network, some of the above approaches\nassume an additional level of mobility support that\nmay involve the terminal.\n4.3.1. Roaming between Hiperlan/2 and 3G\nFor the case of mobility between Hiperlan/2 and 3G access\nnetworks, recall that we have the following basic scenario:\nA MT attaches to a Hiperlan/2 network, authenticates and acquires\nan IP address. At that stage, it can access IP services\nusing that address while it remains within that Hiperlan/2 network\n. If the MT moves to a network of a different technology\n(i.e. UMTS), it can re-authenticate and acquire an IP address\nin the packet domain of that network, and continue to use IP\nservices there.\nWe have referred to this basic case as AAA roaming. Note\nthat while it provides mobility for the user between networks,\nany active sessions (e.g., multimedia calls or TCP connections\n) will be dropped on the handover between the networks\nbecause of the IP address change (e.g., use Dynamic Host\nConfiguration Protocol DHCP).\nIt is possible to provide enhanced mobility support, including\nhandover between Hiperlan/2 access networks and 3G access\nnetworks in this scenario by using servers located outside\nthe access network. Two such examples are:\nThe MT can register the locally acquired IP address with\na Mobile IP (MIP) home agent as a co-located care-of address\n, in which case handover between networks is handled\nby mobile IP. This applies to MIPv4 and MIPv6 (and\nis the only mode of operation allowed for MIPv6).\nThe MT can register the locally acquired IP address with\nan application layer server such as a Session Initiation Protocol\n(SIP) proxy. Handover between two networks can\nthen be handled using SIP (re-invite message).\nNote that in both these cases, the fact that upper layer mobility\nis in use is visible only to the terminal and SPN server,\nand in particular is invisible to the access network. Therefore,\nit is automatically possible, and can be implemented according\nto existing standards, without impact on the Hiperlan/2\nnetwork itself. We therefore consider this as the basic case\nfor the loose coupling approach.\nAnother alternative is the use of a Foreign Agent care-of\naddress (MIPv4 only). This requires the integration of Foreign\nAgent functionality with the Hiperlan/2 network, but has\nthe advantage of decreasing the number of IPv4 addresses that\nhave to be allocated. On the other hand, for MTs that do not\nwish to invoke global mobility support in this case, a locally\nassigned IP address is still required, and the access network\ntherefore has to be able to operate in two modes.\nTwo options for further study are:\nThe option to integrate access authentication (the purpose\nof this loose coupling standard) with Mobile IP\nhome agent registration (If Diameter is used, it is already\npresent). This would allow faster attach to the network in\nthe case of a MT using MIP, since it only requires one set\nof authentication exchanges; however, it also requires integration\non the control plane between the AAAH and the\nMobile IP home agent itself. 
It is our current assumption\nthat this integration should be carried out in a way that is\nindependent of the particular access network being used,\nand is therefore out of scope of this activity.\nHIPERLAN/2 PUBLIC ACCESS INTERWORKING WITH 3G CELLULAR SYSTEMS\n49\nThe implications of using services (e.g., SIP call control\n) from the UMTS IMS (Internet Multimedia Subsys-tem\n), which would provide some global mobility capability\n. This requires analysis of how the IMS would interface\nto the Hiperlan/2 access network (if at all).\n4.3.2. Handover\nFor handovers within the Hiperlan/2 network, the terminal\nmust have enough information to be able to make a handover\ndecision for itself, or be able to react to a network decision to\nhandover. Indeed these decision driven events are referred to\nas triggers, resulting in Network centric triggers or Terminal\ncentric triggers.\nSimple triggers include the following:\nNetwork Centric: Poor network resources or low bandwidth\n, resulting in poor or changing QoS. Change of policy\nbased on charging (i.e. end of pre-paid time).\nTerminal Centric: Poor signal strength. Change of QoS.\n4.4. QoS\nQoS support is available within the Hiperlan/2 specification\nbut requires additional functionality in the interworking specifications\nfor the provision of QoS through the CN rather than\nsimply over the air. QoS is a key concept, within UMTS, and\ntogether with the additional QoS functionality in Hiperlan/2,\na consistent QoS approach can therefore be provided. A number\nof approaches to QoS currently exist which still need to\nbe considered at this stage.\nQoS within the Hiperlan/2 network must be supported between\nthe MT and external networks, such as the Internet. In\nthe loose coupling scenario, the data path is not constrained\nto travelling across the 3G SPN, e.g., via the SGSN/GGSNs.\nTherefore no interworking is required between QoS mechanisms\nused within the 3G and Hiperlan/2 network. There is a\npossible interaction regarding the interpretation and mapping\nof UMTS QoS parameters onto the QoS mechanisms used\nin the Hiperlan/2 network. The actual provisioning of QoS\nacross the Hiperlan/2 network is dependent on the type of the\ninfrastructure technology used, and therefore the capabilities\nof the CL.\n4.4.1. HiperLAN2/Ethernet QoS mapping\nWithin the Hiperlan/2 specification, radio bearers are referred\nto as DLC connections. A DLC connection is characterised\nby offering a specific support for QoS, for instance in terms of\nbandwidth, delay, jitter and bit error rate. The characteristics\nof supported QoS classes are implementation specific. A user\nmight request for multiple DLC connections, each transferring\na specific traffic type, which indicates that the traffic division\nis traffic type based and not application based. The DLC\nconnection set-up does not necessarily result in immediate assignment\nof resources though. If the MT has not negotiated a\nfixed capacity agreement with the AP, it must request capacity\nby sending a resource request (RR) to the AP whenever it has\ndata to transmit. The allocation of resources may thereby be\nvery dynamic. 
The scheduling of the resources is vendor specific\nand is therefore not included in the Hiperlan/2 standard,\nwhich also means that QoS parameters from higher layers are\nnot either.\nHiperlan/2 specific QoS support for the DLC connection\ncomprises centralised resource scheduling through the\nTDMA-based MAC structure, appropriate error control (acknowledged\n, unacknowledged or repetition) with associated\nprotocol settings (ARQ window size, number of retransmis-sions\nand discarding), and the physical layer QoS support.\nAnother QoS feature included in the Hiperlan/2 specification\nis a polling mechanism that enables the AP to regularly poll\nthe MT for its traffic status, thus providing rapid access for\nreal-time services. The CL acts as an integrator of Hiperlan/2\ninto different existing service provider networks, i.e. it\nconnects the SPNs to the Hiperlan/2 data link control (DLC)\nlayer.\nIEEE 802.1D specifies an architecture and protocol for\nMAC bridges interconnecting IEEE 802 LANs by relaying\nand filtering frames between the separate MACs of the\nBridged LAN. The priority mechanism within IEEE 802.1D\nis handled by IEEE 802.1p, which is incorporated into IEEE\n802.1D. All traffic types and their mappings presented in the\ntables of this section only corresponds to default values specified\nin the IEEE 802.1p standard, since these parameters are\nvendor specific.\nIEEE 802.1p defines eight different priority levels and describes\nthe traffic expected to be carried within each priority\nlevel. Each IEEE 802 LAN frame is marked with a user priority\n(07) corresponding to the traffic type [8].\nIn order to support appropriate QoS in Hiperlan/2 the\nqueues are mapped to the different QoS specific DLC connections\n(maximum of eight). The use of only one DLC connection\nbetween the AP and the MT results in best effort traffic\nonly, while two to eight DLC connections indicates that the\nMT wants to apply IEEE 802.1p. A DLC connection ID is\nonly MT unique, not cell unique.\nThe AP may take the QoS parameters into account in the\nallocation of radio resources (which is out of the Hiperlan/2\nscope). This means that each DLC connection, possibly operating\nin both directions, can be assigned a specific QoS,\nfor instance in terms of bandwidth, delay, jitter and bit error\nrate, as well as being assigned a priority level relative to\nother DLC connections. In other words, parameters provided\nby the application, including UMTS QoS parameters if desired\n, are used to determine the most appropriate QoS level\nto be provided by the network, and the traffic flow is treated\naccordingly.\nThe support for IEEE 802.1p is optional for both the MT\nand AP.\n4.4.2. End-to-end based QoS\nAdding QoS, especially end-to-end QoS, to IP based connections\nraises significant alterations and concerns since it represents\na digression from the \"best-effort\" model, which constitutes\nthe foundation of the great success of Internet. However,\nthe need for IP QoS is increasing and essential work is cur-50\nMCCANN AND FLYGARE\nrently in progress. End-to-end IP QoS requires substantial\nconsideration and further development.\nSince the Hiperlan/2 network supports the IEEE 802.1p\npriority mechanism and since Differentiated Services (DiffServ\n) is priority based, the natural solution to the end-to-end\nQoS problem would be the end-to-end implementation\nof DiffServ. 
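Purely as an illustration of such a mapping (the figures below are an assumption made for this sketch; as noted, neither IEEE 802.1p nor DiffServ mandates any particular correspondence), an access point or edge router could translate the eight user priorities into DiffServ code points along the following lines:

%% Illustrative sketch: one possible, non-normative IEEE 802.1p user-priority
%% to DiffServ code point (DSCP) mapping. Deployments may choose other values.
-module(prio_map).
-export([dscp_for_priority/1]).

dscp_for_priority(0) -> 0;    % best effort      -> BE  (DSCP 0)
dscp_for_priority(1) -> 8;    % background       -> CS1
dscp_for_priority(2) -> 16;   % spare            -> CS2
dscp_for_priority(3) -> 24;   % excellent effort -> CS3
dscp_for_priority(4) -> 32;   % controlled load  -> CS4
dscp_for_priority(5) -> 40;   % video            -> CS5
dscp_for_priority(6) -> 46;   % voice            -> EF  (DSCP 46)
dscp_for_priority(7) -> 48.   % network control  -> CS6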
The QoS model would then appear as follows.\nQoS from the MT to the AP is supported by the Hiperlan/2\nspecific QoS mechanisms, where the required QoS for each\nconnection is identified by a unique Data Link Control (DLC)\nconnection ID. In the AP the DLC connection IDs may be\nmapped onto the IEEE 802.1p priority queues. Using the\nIEEE 802.1p priority mechanisms in the Ethernet, the transition\nto a DiffServ network is easily realised by mapping the\nIEEE 802.1p user priorities into DiffServ based priorities.\nNeither the DiffServ nor the IEEE 802.1p specification\nelaborates how a particular packet stream will be treated\nbased on the Differentiated Services (DS) field and the layer\n2 priority level. The mappings between the IEEE 802.1p priority\nclasses and the DiffServ service classes are also unspec-ified\n. There is however an Integrated Services over Specific\nLink Layers (ISSLL) draft mapping for Guaranteed and Controlled\nLoad services to IEEE 802.1p user priority, and a mapping\nfor Guaranteed and Controlled Load services, to DiffServ\nwhich together would imply a DiffServ to IEEE 802.1p\nuser priority mapping.\nDiffServ provides inferior support of QoS than IntServ, but\nthe mobility of a Hiperlan/2 MT indicates a need to keep the\nQoS signalling low. IntServ as opposed to DiffServ involves\nsignificant QoS signalling.\nThe DiffServ model provides less stringent support of QoS\nthan the IntServ/RSVP model but it has the advantage over\nIntServ/RSVP of requiring less protocol signalling, which\nmight be a crucial factor since the mobility of a Hiperlan/2\nMT indicates a need to keep the QoS signalling low. Furthermore\n, the implementation of an end-to-end IntServ/RSVP\nbased QoS architecture is much more complex than the implementation\nof a DiffServ based one.\nDiscussions around end-to-end QoS support raise some\ncritical questions that need to be considered and answered before\na proper solution can be developed; which performance\ncan we expect from the different end-to-end QoS models,\nwhat level of QoS support do we actually need, how much\nbandwidth and other resources are we willing to sacrifice on\nQoS, and how much effort do we want to spend on the process\nof developing well-supported QoS?\nRelationships with other standardisation bodies\nBRAN is continuing to have a close working relationship with\nthe following bodies:\nWLAN Interworking Group (WIG)\nThis group met for the first time in September 2002. Its broad\naim is to provide a single point of contact for the three main\nWLAN standardisation bodies (ETSI BRAN, IEEE 802.11\nand MMAC HSWA) and to produce a generic approach to\nboth Cellular and external network interworking of WLAN\ntechnology. It has been also decided to work upon, complete\nand then share a common standard for WLAN Public Access\nand Cellular networks.\n3rd Generation Partnership Project (3GPP)\nThe System Architecture working group 1 (SA1) is currently\ndeveloping a technical report detailed the requirements for\na UMTSWLAN interworking system. They have defined\n6 scenarios detailing aspects of differently coupled models,\nranging from no coupling, through loose coupling to tight\ncoupling. Group 2 (SA2) is currently investigating reference\narchitecture models, concentrating on the network interfaces\ntowards the WLAN. Group 3 (SA3) has now started work on\nsecurity and authentication issues with regard to WLAN interworking\n. 
ETSI BRAN is currently liasing with the SA2\nand SA3 groups.\nInternet Engineering Task Force (IETF)\nWithin the recently created `eap' working group, extensions\nare being considered to EAP (mentioned in section 4), which\nwill assist in system interworking.\nInstitute of Electrical and Electronics Engineers (IEEE)\nUSA\nThe 802.11 WLAN technical groups are continuing to progress\ntheir family of standards. Many similarities exist between\nthe current 802.11a standard and Hiperlan2/HiSWANa\nwith regard to 3G interworking. ETSI BRAN is currently liasing\nwith the Wireless Next Generation (WNG) group of the\nIEEE 802.11 project.\nMultimedia Mobile Access Communication (MMAC) Japan\nThe High Speed Wireless Access (HSWA) group's HiSWANa\n(High Speed Wireless Access Network system A) is essentially\nidentical to Hiperlan/2, except that it mandates the use\nof an Ethernet convergence layer within the access point. An\nagreement between ETSI BRAN and MMAC HSWA has now\nbeen in place for some time to share the output of the ETSI\nBRAN 3G interworking group.\nConclusions\nThis paper has addressed some of the current thinking within\nETSI BRAN (and indeed WIG) regarding the interworking of\nthe Hiperlan2 and HiSWANa wireless LAN systems into a 3G\nCellular System. Much of this information is now appearing\nin the technical specification being jointly produced by ETSI\nand MMAC, expected to be published in the first half of 2003.\nOf the two initial solutions investigated (tight and loose\ncoupling), current work has concentrated on the loose variant,\nproducing viable solutions for security, mobility and QoS.\nThe authentication schemes chosen will assume that EAP is\ncarried over the air interface, thus being compatible, at the\ninterworking level, with IEEE 802.11 and 3GPP.\nHIPERLAN/2 PUBLIC ACCESS INTERWORKING WITH 3G CELLULAR SYSTEMS\n51\nThis standardisation activity thus hopes to ensure that all\nWLAN technologies can provide a value added service within\nhotspot environments for both customers and operators of 3G\nsystems.\n\nAcknowledgements\nThe authors wish to thank Maximilian Riegel (Siemens AG,\nGermany), Dr. Robert Hancock and Eleanor Hepworth (Roke\nManor, UK) together with se Jevinger (Telia Research AB,\nSweden) for their invaluable help and assistance with this\nwork.\n\nReferences\n[1] ETSI TR 101 957 (V1.1.1):\nBroadband Radio Access Networks\n(BRAN); HIPERLAN Type 2; Requirements and Architectures for Interworking\nbetween Hiperlan/2 and 3rd Generation Cellular Systems (August\n2001).\n[2] 3GPP TS 33.102: 3rd Generation Partnership Project; Technical Specification\nGroup Services and System Aspects; 3G Security; Security Architecture\n.\n[3] L. Blunk, J. Vollbrecht and B. Aboba, PPP Extensible Authentication\nProtocol (EAP), RFC 2284bis, draft-ietf-pppext-rfc2284bis\n-04.txt\n(April 2002).\n[4] P. Calhoun et al., Diameter base protocol, draft-ietf-aaa-diameter\n-10\n(April 2002).\n[5] C. 
Rigney et al., Remote Authentication Dial In User Service (RADIUS),\nRFC 2058 (January 1997).\n[6] HIPERLAN Type 2; Packet Based Convergence Layer; Part 2: Ethernet\nService Specific Convergence Sublayer (SSCS), ETSI TS 101 493-2,\nBRAN.\n[7] HIPERLAN Type 2; System overview, ETSI TR 101 683, BRAN.\n[8] Information Technology Telecommunications and Information Exchange\nbetween Systems Local and Metropolitan Area Networks\nCommon Specifications Part 3: Media Access Control (MAC) Bridges\n(Revision and redesignation of ISO/IEC 10038: 1993 [ANSI/IEEE\nStd 802.1D, 1993 Edition], incorporating IEEE supplements P802.1p,\n802.1j-1996, 802.6k-1992, 802.11c-1998, and P802.12e)\", ISO/IEC\n15802-3: 1998.\nStephen McCann holds a B.Sc. (Hons) degree from\nthe University of Birmingham, England. He is currently\neditor of the ETSI BRAN \"WLAN3G\" interworking\nspecification, having been involved in\nETSI Hiperlan/2 standardisation for 3 years. He is\nalso involved with both 802.11 work and that of the\nJapanese HiSWANa wireless LAN system. In the\nautumn of 2002, Stephen co-organised and attended\nthe first WLAN Interworking Group (WIG) between\nETSI BRAN, MMAC HSWA and IEEE 802.11. He\nis currently researching multimode WLAN/3G future terminals and WLAN\nsystems for trains and ships, together with various satellite communications\nprojects. In parallel to his Wireless LAN activities, Stephen has also been\nactively involved in the `rohc' working group of the IETF, looking at various\nRobust Header Compression schemes. Previously Stephen has been involved\nwith avionics and was chief software integrator for the new Maastricht air\ntraffic control system from 1995 to 1998. He is a chartered engineer and a\nmember of the Institute of Electrical Engineers.\nE-mail: stephen.mccann@roke.co.uk\nHelena Flygare holds a M.Sc. degree in electrical\nengineering from Lund Institute of Technology,\nSweden, where she also served as a teacher in Automatic\nControl for the Master Degree program. Before\nher present job she worked in various roles with\nsystem design for hardware and software development\n. In 1999 she joined Radio System Malm at\nTelia Research AB. She works with specification, design\nand integration between systems with different\naccess technologies, e.g. WLANs, 2.5/3G, etc. from\na technical, as well as from a business perspective. Since the year 2000, she\nhas been active with WLAN interworking with 3G and other public access\nnetworks in HiperLAN/2 Global Forum, ETSI/BRAN, and 3GPP.\nE-mail: helena.flygare@telia.se", "keywords": "Hiperlan/2;interworking;3G;ETSI;BRAN;WIG;public access"} {"name": "102", "title": "2D Information Displays", "abstract": "Many exploration and manipulation tasks benefit from a coherent integration of multiple views onto complex information spaces. This paper proposes the concept of Illustrative Shadows for a tight integration of interactive 3D graphics and schematic depictions using the shadow metaphor. The shadow metaphor provides an intuitive visual link between 3D and 2D visualizations integrating the different displays into one combined information display. Users interactively explore spatial relations in realistic shaded virtual models while functional correlations and additional textual information are presented on additional projection layers using a semantic network approach. 
Manipulations of one visualization immediately influence the others, resulting in an in-formationally and perceptibly coherent presentation.", "fulltext": "INTRODUCTION\nIn many areas knowledge about structures and their meaning\nas well as their spatial and functional relations are required\nto comprehend possible effects of an intervention.\nFor example, engineers must understand the construction of\nmachines as a prerequisite for maintenance whereas the\nspatial composition of molecules and hence possible reactions\nare of importance for the discovering of new drugs in\nchemistry. Medical students need to imagine the wealth of\nspatial and functional correlations within the human body\nto master anatomy.\nTo date, novices as well as domain experts are required to\nconsult several, often voluminous documents in parallel to\nextract information for a certain intervention. Spatial relations\n, characteristics of structures inherently three-dimensional\n, such as the shape and location of structures, however\n, are difficult to convey on paper. Besides requiring a\nsignificant amount of images to illustrate spatial relations\nbetween only a few structures, the mental integration of\nmultiple views to form a three-dimensional picture in mind\nis demanding. Spatial relations can be conveyed more ef-fectively\nby means of 3D models [18]. Using interactive 3D\ngraphics, available to more and more people due to recent\nadvances in consumer graphics hardware, the user may actively\nexplore the spatial correlations of structures within a\nphotorealistic virtual model (see upper left of Figure 1).\nHere, the visual realism of the model facilitates recognition\non real encounters.\nInformation about functional correlations, such as the interplay\nof muscles causing an upward motion of the human\nfoot, has been traditionally provided by means of text and\nillustrations as found in textbooks. Simple, non-photorealistic\ndrawings enriched with annotations and metagraphical\nsymbols can be extremely powerful in conveying complex\nrelationships and procedures (see upper right of Figure 1).\nAbstraction techniques reduce the complexity of the depicted\nstructures to illustrate the important aspects thereby\nguiding the attention of the viewer to relevant details. In\ncontrast to the visualization of spatial relations, 3D graphics\nadd no significant value to the illustration of functional\ncorrelations.\nFigure 1: Illustrative Shadows provide an intuitive, visual link\nbetween spatial (3d) and non-spatial (2d) information displays\nintegrating them into one combined information display.\nM. tibialis anterior\nM. extensor hallucis longus\nM. extensor digitorum longus\nM. tibialis posterior\nM. flexor digitorum longus\nM. flexor hallucis longus\nM. fibularis brevis\nM. fibularis longus\ninteractive 3d-graphic\n2d-information display\nshadow\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. 
To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior\nspecific permission and/or a fee.\nIUI'03, January 1215, 2003, Miami, Florida, USA.\nCopyright 2003 ACM 1-58113-586-6/03/0001...$5.00.\n166\nThe integration of both aspects in one visualization is difficult\nsince each serves to fulfill a different goal with, partly\nmutually exclusive, visualization techniques. It becomes\neven more complicated if the 3D model is frequently manipulated\n, such as in construction or recent interactive\nlearning environments. Here, occludings by annotations\nand metagraphical symbols are annoying and may even interrupt\nthe current manipulation for the user. Additional\nviews, whether as insets [20], separate objects like mirrors\n[8], or in form of lenses and volumetric cursors changing\nthe rendition of embedded structures [7, 21, 23], are either\nnot close enough to the manipulated structures to be fully\nrecognized without dividing the users attention between\ndifferent views [13] or require sometimes tedious manipulations\nto be placed or moved within the scene. Nonetheless\n, additional information as to restrictions or functional\ncorrelations pertaining the current manipulated structures is\nhighly desired, even necessary.\nIn this paper we present an approach called Illustrative\nShadows that provides an intuitive, visual link between an\nactively manipulated 3D visualization and a supplemental\n2D information display integrating them into one combined\ninformation display (see Figure 1). One of the main ideas\nbehind Illustrative Shadows is the integration of secondary\ninformation, or in other words, background information,\ninto an interactive 3D scene. By analyzing the users' manipulation\nof 3D structures and finding correlations, graphical\nand textual information about the current interaction\ncontext, such as graphical object-details and textual labels,\nare displayed in the ``background''--the shadow--to give\nguidance as well as to further enhance the users' understanding\nof the context.\nThe paper is structured as follows: After reviewing related\napproaches to combine multiple visual and textual information\ndisplays, we present the design of Illustrative Shadows.\nFurthermore, an architecture realizing these concepts is discussed\nin this section. Thereafter, the major components of\nthis architecture are described. Realization issues are subject\nof the subsequent section, whereas application examples\nand the summary conclude the paper.\nRELATED WORK\nRecently proposed tools for the exploration of virtual\nscenes extend possibilities to display covered structures or\nhidden details. A complementary view called Magic Mirror\nthat mimics a hand mirror has been introduced in [8]. In addition\nto providing the optical effects of a real mirror, it also\nallows to explore the insight of objects by clipping against\nthe mirror front-frustum. Magic lens filters as presented in\n[7, 21] go further by combining an arbitrarily-shaped region\nwith an operator that changes the view of objects viewed\nthrough that region thereby displaying different aspects of\nthe visualized information space. In [5, 23] the 2D lens approach\nis extended to 3D using volumetric lenses. All\naforementioned techniques require the user to actively manipulate\na tool within the scene. These techniques assume\nthe user already knows which parts of the presented visualization\noffer additional information or is willing to explore\nthe model. 
While this might be feasible in explorative environments\nwhere navigation is the main interaction task, it\ncertainly hinders manipulation.\nSeveral approaches to combine 3D and 2D visualizations\nhave been made using a corner cube environment (\n). The\nthree orthogonal sides show image slices that provide a visual\ncontext for a 3D model or structures displayed in the\ncenter. In [11] the images have been integrated as back-planes\nto ground the 3D representation of anatomic structures\nvisually. By outlining the 3D structures in the images,\nthe spatial correspondence between the 3D renditions of activated\nfoci in the context of human brain slices is emphasized\nin [16]. The images, however are precomputed and do\nnot change according to the users' interaction nor is there\nany visualization of functional correlations. An interesting\ninteractive approach has been proposed by [10]. The projection\nof the 3D model onto the sides of the corner cube\ncan be manipulated by the user in order to change the position\nand orientation of the model. Fully rendered shadows\nof certain objects resemble real-world mirrors and may be\nused to stress importance. There is, however, no discussion\non how to use this feature to provide, for instance, additional\ncontext information for the user.\nTo establish hypotheses on the interaction context in order\nto be able to display additional context information and to\nprovide meaningful descriptions of relationships knowledge\nmodeling is required. Promising approaches to connect\nthose knowledge with 3D graphics have been developed\nin the area of medical applications. The Digital\nAnatomist [4] incorporates a logic-based description comprising\nclass and subclass relationships (is-a) as well as partitive\nand qualitative spatial relationships (has-parts, is-superior\n-to). The information is presented in tree-like textual\nform that can be explored by folding and unfolding. Corresponding\nstructures are displayed in a 3D visualization\naside. There is no visual integration of both information\ndisplays. The semantic network described in [14] is used to\ncreate various `views' in which correlating structures are\ndisplayed to communicate specific aspects with a voxel\nmodel of the human anatomy. The highly detailed visualization\n, however, cannot be interactively explored, nor is\nthere any kind of abstraction to focus the users' attention.\nInteraction is only possible by tree-like menus.\nSYSTEM DESIGN USING ILLUSTRATIVE SHADOWS\nWith the term Illustrative Shadows we refer to a coherent\nintegration of photorealistic depictions of a virtual model\nwith abstract illustrations and textual explanations. Both\nFigure 2: Architecture of a system incorporating the Illustrative\nShadows approach.\n3D\nmodel\n3D\nvisualization\n2D\nvisualization\nVisualization\nannotations\nKnowledge\n-based\nserver\nClient interface\nSystem\nEvent control\n167\nkinds of depictions serve to fulfill different and somehow\ncontradicting goals: on the one hand to enable navigation\nand manipulation of complex spatial models and on the\nother hand to provide adjusted visualizations that guide the\nuser's attention to additional information about the most\nrelevant objects in the current interaction context. 
Both visualizations\nare achieved by applying photorealistic and\nspecific non-photorealistic rendering techniques [22] to\ngeometric models.\nFurthermore, textual information describing the most relevant\nstructures and functional correlations between them\nmust be integrated. The estimation of the relevance with respect\nto the current interaction context as well as the selection\nor generation of textual explanations heavily rely on\nnon-geometric formal and informal representations and are\ntherefore determined by external inference mechanisms.\nMoreover, co-referential relations between the entities\nwithin the geometric model and the formal and informal\nrepresentations have to be established in order to link the\ndifferent representations.\nBased on these requirements we designed a system architecture\nwhich comprises three basic components (see\nFigure 2):\nThe visual component renders a photorealistic 3D\nmodel with a standard camera model as well as a non-photorealistic\nillustration that is projected onto a\nground plane. A client interface enables external control\nof the non-photorealistic rendering techniques. Finally\n, the visual component also renders text and\nmetagraphical annotations, such as labels, hypertext\nand arrows.\nThe event control allows the user to modify the parameters\nof a virtual camera and to select and manipulate\ngeometric objects within the scene. Interactions are\ntracked and ranked within an interaction history to\ncommunicate the current interaction context via a client\ninterface to an external knowledge-based component.\nThe knowledge-based server receives notification of\nuser manipulations and establishes hypotheses on the\ndegree of interest values (DOI) for geometric objects.\nThese DOI values guide the selection of appropriate\ntext fragments presented in text annotations as well as\nthe modification of parameters of the non-photorealistic\nrendering techniques for emphasizing in the illustration\n.\nThe following sections discuss important aspects of these\nsystem components.\nVISUALIZATION\nBesides displaying a photorealistic rendition of a 3D model\nthat the user can manipulate, the illustration of functional\ncorrelations between structures of the model in the \"background\"\nhas to be accomplished. To focus the user's attention\non relevant structures and to facilitate perception, important\nobjects must be emphasized and surroundings\nabstracted. Furthermore, both visualizations must be integrated\nin a coherent manner, so that a visual connection between\nthe 3D and 2D renditions of the relevant objects is\nestablished by the user. 
Several crucial aspects have to be\nconsidered:\nHow can objects be emphasized such that they attract\nthe user's attention while still being in the background?\nAre additional graphical elements required to establish\na visual correlation between the two model representations\n?\nWhat illustration techniques can be applied to differentiate\nbetween important and less important objects?\nIs a continuous synchronization between the photorealistic\nand the schematic representation of the model necessary\nduring user interactions?\nIntegrating Different Model Representations\nThe question coming up at this point is how a secondary,\nschematic model representation can be integrated such that\nthe following requirements are fulfilled:\nThe second representation must be placed near the original\n3D representation in order to perceive structures in\nboth representations [13].\nThe secondary representation may never occlude the\ncentral, realistic 3D visualization, which a user manipulates\ndirectly. The relevant information, however, must\nbe visible in order to be recognized but should not distract\nfrom the interaction with the 3D model.\nAn exact copy of the 3D model representation is not appropriate\nfor the task of depicting functional correlations, because\ntheir illustration in the \"background\" requires abstraction\n. Without abstraction, a lot of the users' attention\nwould be required to extract the relevant information. An\nexact copy, however, can be used to provide a mirrored\nview below the 3D model to visualize structures otherwise\nnot visible for the user (see Figure 9 at the end). The integration\nof a secondary view as inset [20] has some disadvantages\ntoo. Firstly, the inset must not occlude the 3D\nmodel, thus it must be placed in distance which in turn\nmeans visual separation. Secondly, the inset framing complicates\nthe visualization of object correlations, e.g. by\nlines.\nAn ideal solution in many respects is to project the 3D\nmodel onto a plane below the model, just like casting a\nshadow. This 2D representation may then be modified in\nvarious ways to illustrate associated concepts and relations\nand therefore is called Illustrative Shadows. Besides, this\napproach satisfies the requirements specified above.\nIllustrative Shadows\nCast shadows [3] have proven to be beneficial for perceiving\nspatial relationships and to facilitate object positioning\nFigure 3: Different types of model projections onto the illustration\nplane. Beside simple, monochrome projections (shadows),\nthe color of individual structures can be preserved. The mirrored\nprojection shows details otherwise hidden in the current view.\n;;\n;;;\n;;;\n;;\n;;\n;;\n;;\nMonochrome projection\nColored projection\nMirrored projection\n168\n[24]. Thus, their use, if already present, occupies no additional\nspace for the display of additional information, or in\nthe case of prior absence, also add valuable depth cues to\nthe 3D visualization. Furthermore, the shadows can be used\nto interact with the underlying information context or, as\nproposed in [10], with the shadowing 3D objects. 
To sum\nup:\nThe shadow projection results in an abstraction which\nis very important for illustrations.\nAdditional depth cues facilitate the perception of spatial\nrelations and accelerate the relative positioning of objects\nwhile manipulating the 3D model.\nThe projection establishes a link between a 3D object\nand its 2D shadow providing additional information.\nDisplaying and Focusing in the Illustration Plane\nBesides simple, monochrome shadow projection further\npossibilities to project the 3D model representation onto the\nillustration plane come to mind (see Figure 3). Preserving\nthe colors of the different structures of the model, for instance\n, enables distinct renditions and perception of the objects\nin the shadow. As an extension, the objects can be mirrored\nbefore projecting them. Hence, objects or hidden\ndetails of objects otherwise not visible become visible in\nthe illustration plane.\nTo illustrate correlations or to annotate structures, the relevant\nobjects must be emphasized to be easily distinguished\nfrom the remaining 2D visualization. Also, the viewer must\nbe able to differentiate between relevant and less relevant\nobjects. For monochrome shadow projections, the object\ncolor must contrast with the shadow color. The selection of\nemphasizing colors should be based on a perception oriented\ncolor model, such as the HSV. Moreover, an outline\ncan be used to attract the viewer's interest. A colored projection\nmakes it somewhat more difficult since the variation\nof the color won't always result in a noticeable distinct representation\n. By using a conspicuous texture or shaded representations\nof the relevant object whereas the remaining\nobjects are flat-shaded, the viewer's attention can be directed\nto those relevant objects.\nSignificance of Accentuations\nIn addition to objects being significant to the current interaction\ncontext supplementary objects have to be included in\nthe illustration. These objects are not of primary importance\nwithin the concepts to be illustrated but guide the\nviewer and maintain context. It is important that such objects\nare recognized by the viewer as objects of minor significance\n. The relevance of objects that have been emphasized\nby outlining, for instance, can be judged by line width\nor line style (e.g. contrast, waviness). Also preserving the\nobjects color as well as the use of texture indicates a higher\nimportance than an interpolation of the object's color and\nthe background color.\nRecognition of Correlations between both Representations\nAn important aspect in using two different but coherent representations\nof the same model is the identification of correlations\nbetween those visualizations by the viewer. If an object\nis being emphasized in the illustration plane, it must be\npossible for the viewer to find its counterpart in the detailed\nphotorealistic representation too. Often shape and color\ngive enough hints. However, if the projection of the relevant\nobjects results in uniform shapes, the viewer may have difficulties\nto recognize individual objects in the 3D visualization\n(see Figure 4). Besides accentuating those objects in\nthe 3D representation, the integration of additional elements\ncan be beneficial. 
Semi-transparent shadow volumes\noriginally developed to facilitate object positioning in 3D\n[19] indicate direct correspondence (see Figure 5).\nIntegration of Annotations\nThe conveyance of important related facts by means of\ngraphical abstraction, accentuation, or modification of relevant\nstructures alone is difficult. Therefore conventional\nbook illustrations often contain annotations with short descriptions\n. Those annotations must be placed close to the\ndescribed objects and should not occlude relevant parts of\nthe presentation. The latter, however, cannot always be\nguaranteed. A simple but effective solution places the annotation\non a semi-transparent background face that increases\nthe contrast between text annotation and illustration and\nstill does not block vision (see Figure 5). To further facilitate\nabsorption of shown concepts, single words or groups\nof words can be emphasized and graphically linked to relating\nstructures in the illustration. Hypertext functionality reduces\nthe amount to which textual annotations must be displayed\nat once. The user can request more detailed\ninformation by activating links.\nFigure 4: Recognition of individual objects in different scenes.\nTo the left, unequivocal correspondence of shape and color\nfacilitates identification. To the right, no clear correspondence.\nFigure 5: A direct connection to an object's shadow is established\nby displaying its semi-transparent shadow volume. The\nintegration of annotations gives meaning to unknown objects and\nrelationships.\n169\nINTERACTION\nA human illustrator is required to identify important aspects\nand characteristic features of the subject or concept that is\nto be conveyed in order to draw a focussed visualization.\nOne way to identify those features for the computer is to\nwatch the user interacting with the information space. Since\nour goal has been to enhance the users' understanding by\nproviding background information in the current interaction\ncontext, an Illustrative Shadow depicting correlations in\nthat context must be generated.\nBy navigating within the 3D model on the one hand, the\nuser is free to explore spatial relations by changing the\nview. Here, single structures may be tracked by the computer\nregarding their visibility hence obtaining information\nabout the users' current focus. On the other hand, the user\nmay interact with the structures, thus expressing specific interest\n. As a result of the integrated 3D/2D visualization, interaction\nis possible within the 3D visualization as well as\nin the projection layers, a technique inspired by [10]. Using\nthe shadow for interaction facilitates certain tasks, such as\nselection, since structures hidden in the 3D visualization\nmay be visible in the projection. Furthermore, 2D input device\ncoordinates can be mapped directly onto the plane\nthereby enabling the use of 2D interaction techniques.\nThe provided manipulation tasks highly depend on the application\nthat employs the concept of Illustrative Shadows.\nIn our application, the user is able to compose and to decompose\na given 3D model like a 3D jigsaw. Thus, translation\nand rotation of individual structures are main interaction\ntasks.\nUser-interactions are tracked within an interaction history.\nBy assigning relevance values to each interaction task, accumulations\nof these values show a distribution of interest\nwithin the model over time. 
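A sketch of how such an interaction history could be accumulated into per-structure interest values is given below; the relevance weights follow Table 1 below, while the exponential ageing of older events is an assumption added for illustration, not taken from the system description.

```python
# Accumulating an interaction history into normalized degree-of-interest (DOI) values.
# Relevance weights as in Table 1; the time decay is an illustrative assumption.
import time

RELEVANCE = {("mouseOver", "short"): 1, ("mouseOver", "long"): 2,
             ("mouseButtonPressed", "3D object"): 4,
             ("mouseButtonPressed", "2D object"): 4,
             ("mouseButtonPressed", "annotation"): 6}

class InteractionHistory:
    def __init__(self, half_life_s: float = 60.0):
        self.events = []                          # (timestamp, object_id, weight)
        self.half_life_s = half_life_s

    def record(self, object_id: str, action: str, detail: str) -> None:
        self.events.append((time.time(), object_id, RELEVANCE[(action, detail)]))

    def doi(self) -> dict:
        now, scores = time.time(), {}
        for ts, oid, w in self.events:            # older events count less
            decay = 0.5 ** ((now - ts) / self.half_life_s)
            scores[oid] = scores.get(oid, 0.0) + w * decay
        total = sum(scores.values()) or 1.0
        return {oid: s / total for oid, s in scores.items()}   # normalized DOI

h = InteractionHistory()
h.record("M_tibialis_anterior", "mouseButtonPressed", "3D object")
h.record("Tibia", "mouseOver", "short")
print(h.doi())
```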
Thus each single structure's degree of interest (DOI, a normalized value) is a measure for its importance to the user at a certain time. As shown in Table 1, touching a structure with the mouse pointer has a much lower relevance than actually selecting it. The degree of interest is communicated to the knowledge server, which in turn may modify the 2D visualization of the shadow. To give an example, suppose the user is interested in a certain structure of an anatomic 3D visualization, such as a ligament, that is part of a functional relation between a bone and a muscle. Only one of those objects, that is the bone or the muscle, should be highlighted and annotated, because of space restrictions in the shadow layer. At this point, the DOI is used to decide. If the interaction history shows more user interest in muscles, information about the functional relation between the ligament and the muscle is displayed.
Table 1. Relevance of certain user interactions
action | parameter | value | relevance
mouseOver | time | short | 1
mouseOver | time | long | 2
mouseButtonPressed | location | 3D object | 4
mouseButtonPressed | location | 2D object | 4
mouseButtonPressed | location | annotation | 6
KNOWLEDGE MODELING
While segmentation of the 3D model into individual structures (objects) provides a spatial description, the presentation of correlations also requires a linked symbolic, textual description. Moreover, in order to establish hypotheses on the current interaction context, formal knowledge is required. Thus, the system presented in this paper comprises a knowledge base, i.e. a media-independent formal representation, media-specific realization statements of entities within the formal representation, as well as a large multilingual text corpus. Realization statements establish co-reference relations between independent formal representations describing different aspects of the underlying information space. They also guide the generation or selection of texts used to annotate structures in the 2D visualization.
The medical education application presented later in this paper is based on a knowledge base describing the objects and functional correlations of the musculo-skeletal system. It covers the area of the lower limb and the pelvic girdle. The knowledge base was created by manually analyzing several anatomy textbooks, anatomy atlases, medical dictionaries, and lexica. This analysis reveals important concepts, their hierarchical classification, and the instance attribute values forming a complex semantic network. Our system contains a hierarchical representation of basic anatomic concepts such as bones, muscles, articulations, and tendons, as well as their parts and regions. The corpus contains fragments of several anatomic textbooks describing global concepts of the osteology, syndesmology, and myology as well as descriptions of all the entities of these anatomic systems within the lower leg and the pelvic girdle.
In order to present appropriate system reactions, the event control informs the knowledge server of user interactions. First, exploiting the visual annotations, the knowledge server extracts co-referring formal entities and assigns relevance values according to Table 1. Subsequently, the knowledge server searches for associations between the most relevant entities. Our system pursues two alternative strategies: retrievals and suggestions.
Figure 6: Visualization of intermediate steps within a retrieval which discovers the association between the distal phalanx of the big toe and the extensor hallucis longus. (The figure shows the concepts Os-Longum, Bone-Basis, Bone-Area, and Musculus, the instances Phalanx distalis pedis, Basis phalangis distalis pedis, the insertion area of the M. extensor hallucis longus, and the M. extensor hallucis longus itself, connected by the relations has-Basis and has-Area; steps 1-3 distinguish instances, relations, and concepts.)
Retrievals discover relations between entities by tracking predefined paths within the knowledge base. Figure 6 illustrates the intermediate steps in order to extract relations between bones and muscles. From a functional point of view (i.e. the muscle mechanics), bones are insertions or origins of muscles. A muscle's contraction produces force, which in turn changes the orientation of these bones. These retrievals also need to consider substructures (e.g. bone volumes and bone areas). The following retrieval extracts those muscles which originate in a given bone (in first-order logic):
{ muscle | bone: Bone(bone) ∧ Musculus(muscle) ∧ (is-Origin-of(bone, muscle) ∨ ∃ part: has-Part*(bone, part) ∧ is-Origin-of(part, muscle)) }
The has-Part* relation represents the transitive hull over several spatial part-of relations [1] (e.g. the has-Basis relation between Bones and Bone-Volumes and the has-Area relation between Areas or Volumes and Areas).
These retrievals rely on knowledge about the structure of the knowledge base. Moreover, they refer to a small number of relevant objects. In many situations, however, the event control comes up with a huge number of potentially relevant objects, which cannot easily be mapped to a predefined query. Hence, we adopt a bottom-up search approach within a complex semantic network developed within cognitive psychology.
In his model of human comprehension [15], Quillian assumed that spatially and temporally independent aspects of human long-term memory are organized in a semantic network. Furthermore, he assumed that cognitive processes that access a node of the semantic network activate all connected nodes in parallel. The term spreading activation refers to a recursive propagation of initial stimuli. Nowadays, this term subsumes breadth-first search algorithms for paths connecting the nodes of a start and a destination set in directed graphs satisfying an evaluation criterion. Collins and Loftus [6] modify the propagation algorithm to consider activation strength.
In our system, the knowledge server uses the objects' DOI from the event control as an initial activation, which spreads through the semantic network. These initial activations also take into account the content presented on textual labels and inspected by the user. The spreading activation approach generates a focus structure which contains information on how dominantly graphical objects must be presented in the schematic illustration [9]. Figure 7 illustrates how visual dominance values control the render parameters.
REALIZATION DETAILS
The visual component of the prototypical implementation extends the Open Inventor graphical library with powerful scenegraph nodes to display hypertext on overlay regions and to render semi-transparent shadow volumes. Other nodes encapsulate the mirror and shading projection onto the ground plane as well as the user interaction and emphasize techniques (e.g. computation of silhouette lines). The layer management (see Figure 8) employs the OpenGL polygon offset feature to allow graphics to overlap specifically, whereas visibility tests of individual structures are accomplished by offscreen rendering and analyzing OpenGL p-buffers.
Additional OSF Motif widgets enable the user to add personalized annotations, which are inserted into the knowledge base.
The knowledge base encodes both the media-independent formal knowledge representation and the media-specific realization statements using XML topic maps [2]. To process this information, XML statements are transformed into LISP code. The authoring system contains export filters for the NeoClassic and the LOOM [12] description logic inference machines. In the current version it covers about 50 basic anatomic concepts, 70 relations, and over 1500 instances, with linguistic realization statements in Latin, German and English. Furthermore, visual annotations refer to a small number of geometric models and 2D illustrations.
Figure 7: Application of different emphasize techniques to the 2D information representation according to decreasing dominance values.
Figure 8: Multiple layers are used to place the different visual information in order. Thereby, occluded details may be visible in the shadow (3). (Labels appearing in the figures: Outline, Highlight, Ordinary, Detail, Annotation; layers 1-3.)
The interface between the knowledge server and the visual component is described using CORBA's interface definition language (IDL). The CORBA-based interface implementation enables us to experiment with several knowledge servers implemented in Common-LISP (LOOM) and C++ (NeoClassic).
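As a complement to the knowledge-server behaviour outlined above, the following is a compact sketch of a spreading-activation step: DOI values serve as initial activation, which is attenuated as it propagates along relations of the semantic network. The graph excerpt, decay factor and iteration count are illustrative assumptions, not the system's actual parameters.

```python
# Spreading activation over a semantic network (illustrative sketch).
from collections import defaultdict

def spread_activation(edges, initial, decay=0.5, iterations=3):
    """edges: dict node -> list of neighbour nodes; initial: node -> activation (e.g. DOI)."""
    activation = defaultdict(float, initial)
    for _ in range(iterations):
        delta = defaultdict(float)
        for node, value in list(activation.items()):
            for neighbour in edges.get(node, []):
                # Attenuate and fan out the activation of each visited node.
                delta[neighbour] += value * decay / max(len(edges[node]), 1)
        for node, d in delta.items():
            activation[node] += d
    return dict(activation)

semantic_net = {                           # tiny excerpt-style network (hypothetical)
    "Phalanx distalis pedis": ["Basis phalangis distalis pedis"],
    "Basis phalangis distalis pedis": ["M. extensor hallucis longus"],
    "M. extensor hallucis longus": [],
}
focus = spread_activation(semantic_net, {"Phalanx distalis pedis": 1.0})
print(sorted(focus.items(), key=lambda kv: -kv[1]))   # dominance ordering for rendering
```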
In this paper\nwe developed a new metaphor-based approach to coher-ently\nintegrate different views onto such a complex information\nspace within an interactive system. Illustrative\nShadows provide an intuitive visual link and a level of integration\nbetween interactive 3D graphics and supplemental\n2D information displays that is hard to achieve with other\nconcepts.\nShadow projections have proven to be beneficial for perceiving\nspatial relationships and to facilitate object positioning\n. Thus, their use, if already present, occupies no additional\nspace for the display of additional information, or\nin the case of prior absence, also add valuable depth cues to\nthe 3D visualization. The shadow projection onto a flat\nplane enables schematic illustrations which are focused on\nspecific information extraction tasks and facilitates the integration\nof generated textual information that leads to further\nmeaning. Thus, Illustrative Shadows promote the comprehension\nof complex spatial relations and functional\ncorrelations. Furthermore, the secondary information display\ndoes not hinder manipulations of the 3D model. Our\napproach is well suited for compact 3D models, and has\nbeen successfully applied to an application of medical education\nREFERENCES\n1. Bernauer, J. Analysis of Part-Whole Relation and Subsumption\nin the Medical Domain. Data & Knowledge\nEngineering, 20(3):405415, October 1996.\n2. Biezunski, M., Bryan, M., and Newcomb, S., editors.\nISO/IEC 13250:2000 Topic Maps: Information Technology\nDocument Description and Markup Language\n. International Organization for Standarization\nFigure 9: Alternative visualization. The Mirror facilitates manipulation\nin 3D, by providing a better view of the structures. The\ncentral representation is never occluded by annotations.\nFigure 10: German annotation of objects previously selected.\nRealization statements of the semantic network provide alternative\nGerman, English, and Latin phrases referring to formal entities.\n172\n(ISO) and International Electrotechnical Commission\n(IEC), December 1999. 1. Draft.\n3. Blinn, J.F. Me and my (fake) shadow. IEEE Computer\nGraphics & Applications, 8(1):8286, January/Febru-ary\n1988.\n4. Brinkley, J.F., Wong, B.A., Hinshaw, K.P., and Rosse,\nC. Design of an Anatomy Information System. IEEE\nComputer\nGraphics &\nApplications,\n19(3):3848,\nMay/June 1999.\n5. Cignoni, P., Montani, C., and Scopigno, R. Magic-sphere\n: An insight tool for 3d data visualization. IEEE\nComputer Graphics Forum, 13(3):317328, 1994.\n6. Collins, A. and Loftus, E. A Spreading-Activation Theory\nof Semantic Processing. Psychological Review,\n82(6):407428, 1975.\n7. Fishkin, K. and Stone, M.C. Enhanced dynamic queries\nvia moveable filters. In Katz, I.R., Mack, R., Marks, L.,\nRosson, M.B., and Nielsen, J., editors, Proc. of ACM\nCHI Conference on Human Factors in Computing Systems\n(Denver, May 1995), pages 415420. ACM Press,\nNew York, 1995.\n8. Grosjean, J. and Coquillart, S. The magic mirror: A\nmetaphor for assisting the exploration of virtual worlds.\nIn Zara, J., editor, Proc. of Spring Conference on Computer\nGraphics (Budmerice, Slovakia, April 1999),\npages 125129, 1999.\n9. Hartmann, K., Schlechtweg, S., Helbing, R., and\nStrothotte, T. Knowledge-Supported Graphical Illustration\nof Texts. In De Marsico, M., Levialdi, S., and\nPanizzi, E., editors, Proc. of the Working Conference on\nAdvanced Visual Interfaces (AVI 2002), pages 300307,\nTrento, Italy, May 2002. ACM Press, New York.\n10. 
Herndon, K.P., Zeleznik, R.C., Robbins, D.C., Conner,\nD.B., Snibbe, S.S., and van Dam, A. Interactive shadows\n. In Proc. of ACM Symposium on User Interface\nand Software Technology (Monterey, November 1992),\npages 16. ACM Press, New York, 1992.\n11. Hhne, K.H., Pflesser, B., Pommert, A., Riemer, M.,\nSchiemann, T., Schubert, R., and Tiede, U. A virtual\nbody model for surgical education and rehearsal. IEEE\nComputer, 29(1):2531, January 1996.\n12. MacGregor, R. A Description Classifier for the Predicate\nCalculus. In Hayes-Roth, B. and Korf, R., editors,\nProc. of the Twelfth Annual National Conference on\nArtificial Intelligence (AAAI-94), pages 213220, Seattle\n, Washington, August 1994. AAAI Press, Menlo\nPark.\n13. Moreno, R. and Mayer, R.E. Cognitive principles of\nmultimedia learning. Journal of Educational Psychology\n, 91:358368, 1999.\n14. Pommert, A., Hhne, K.H., Pflesser, B., Richter, E.,\nRiemer, M., Schiemann, T., Schubert, R., Schumacher,\nU., and Tiede, U. Creating a high-resolution spatial/\nsymbolic model of the inner organs based on the visible\nhuman. Medical Image Analysis, 5(3):221228, 2001.\n15. Quillian, M. Semantic Memory. In Minsky, M., editor,\nSemantic Information Processing, chapter 4, pages\n227270. MIT Press, Cambridge., 1968.\n16. Rehm, K., Lakshminaryan, K., Frutiger, S., Schaper,\nK.A., Sumners, D.W., Strother, S.C., Anderson, J.R.,\nand Rottenberg, D.A. A symbolic environment for\nvisualizing activated foci in functional neuroimaging\ndatasets. Medical Image Analysis, 2(3):215226, ???\n1998.\n17. Ritter, F., Berendt, B., Fischer, B., Richter, R., and\nPreim, B. Virtual 3d jigsaw puzzles: Studying the effect\nof exploring spatial relations with implicit guidance. In\nHerczeg, M., Prinz, W., and Oberquelle, H., editors,\nProc. of Mensch & Computer (Hamburg, September\n2002), pages 363372, Stuttgart Leipzig Wiesbaden,\n2002. B.G.Teubner.\n18. Ritter, F., Deussen, O., Preim, B., and Strothotte, T.\nVirtual 3d puzzles: A new method for exploring geometric\nmodels in vr. IEEE Computer Graphics &\nApplications, 21(5):1113, September/October 2001.\n19. Ritter, F., Preim, B., Deussen, O., and Strothotte, T.\nUsing a 3d puzzle as a metaphor for learning spatial\nrelations. In Fels, S.S. and Poulin, P., editors, Proc. of\nGraphics Interface (Montral, May 2000), pages 171\n178. Morgan Kaufmann Publishers, San Francisco,\n2000.\n20. Seligmann, D.D. and Feiner, S. Automated generation\nof intent-based 3d illustrations. In Proc. of ACM SIG-GRAPH\nConference on Computer Graphics (Las\nVegas, July 1991), pages 123132. ACM Press, New\nYork, 1991.\n21. Stone, M.C., Fishkin, K., and Bier, E.A. The moveable\nfilter as a user interface tool. In Plaisant, C., editor,\nProc. of ACM CHI Conference on Human Factors in\nComputing Systems (Boston, April 1994), pages 306\n312. ACM Press, New York, 1994.\n22. Strothotte, T. and Schlechtweg, S. Non-Photorealistic\nComputer Graphics: Modeling, Rendering, and Animation\n. Morgan Kaufmann Publishers, San Francisco,\n2002.\n23. Viega, J., Conway, M.J., Williams, G., and Pausch, R.\n3d magic lenses. In Proc. of ACM Symposium on User\nInterface and Software Technology (Seattle, November\n1996), pages 5158. ACM Press, New York, 1996.\n24. Wanger, L.R. The effect of shadow quality on the perception\nof spatial relationships in computer generated\nimagery. In Proc. 
of Symposium on Interactive 3D\nGraphics (Cambridge, March 1992), pages 3942.\nACM Press, New York, 1992.\nSee also: http://isgwww.cs.uni-magdeburg.de/research/is/\n173", "keywords": "Information visualization;Spreading activation"} {"name": "103", "title": "Impedance Coupling in Content-targeted Advertising", "abstract": "The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important . In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising , from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for \"ads and keywords\") can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results . They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms.", "fulltext": "INTRODUCTION\nThe emergence of the Internet has opened up new marketing\nopportunities. In fact, a company has now the possibility\nof showing its advertisements (ads) to millions of people at a\nlow cost. During the 90's, many companies invested heavily\non advertising in the Internet with apparently no concerns\nabout their investment return [16]. This situation radically\nchanged in the following decade when the failure of many\nWeb companies led to a dropping in supply of cheap venture\ncapital and a considerable reduction in on-line advertising\ninvestments [15, 16].\nIt was clear then that more effective strategies for on-line\nadvertising were required. For that, it was necessary to take\ninto account short-term and long-term interests of the users\nrelated to their information needs [9, 14]. As a consequence,\nmany companies intensified the adoption of intrusive techniques\nfor gathering information of users mostly without\ntheir consent [8]. This raised privacy issues which stimu-lated\nthe research for less invasive measures [16].\nMore recently, Internet information gatekeepers as, for example\n, search engines, recommender systems, and comparison\nshopping services, have employed what is called paid\nplacement strategies [3].\nIn such methods, an advertiser\ncompany is given prominent positioning in advertisement\nlists in return for a placement fee. Amongst these methods,\nthe most popular one is a non-intrusive technique called keyword\ntargeted marketing [16]. In this technique, keywords\nextracted from the user's search query are matched against\nkeywords associated with ads provided by advertisers. 
A\nranking of the ads, which also takes into consideration the\namount that each advertiser is willing to pay, is computed.\nThe top ranked ads are displayed in the search result page\ntogether with the answers for the user query.\nThe success of keyword targeted marketing has motivated\ninformation gatekeepers to offer their advertisement services\nin different contexts. For example, as shown in Figure 1,\nrelevant ads could be shown to users directly in the pages of\ninformation portals. The motivation is to take advantage of\n496\nthe users immediate information interests at browsing time.\nThe problem of matching ads to a Web page that is browsed,\nwhich we also refer to as content-targeted advertising [1],\nis different from that of keyword marketing. In this case,\ninstead of dealing with users' keywords, we have to use the\ncontents of a Web page to decide which ads to display.\nFigure 1: Example of content-based advertising in\nthe page of a newspaper. The middle slice of the\npage shows the beginning of an article about the\nlaunch of a DVD movie. At the bottom slice, we can\nsee advertisements picked for this page by Google's\ncontent-based advertising system, AdSense.\nIt is important to notice that paid placement advertising\nstrategies imply some risks to information gatekeepers.\nFor instance, there is the possibility of a negative impact\non their credibility which, at long term, can demise their\nmarket share [3]. This makes investments in the quality of\nad recommendation systems even more important to minimize\nthe possibility of exhibiting ads unrelated to the user's\ninterests. By investing in their ad systems, information gatekeepers\nare investing in the maintenance of their credibility\nand in the reinforcement of a positive user attitude towards\nthe advertisers and their ads [14]. Further, that can translate\ninto higher clickthrough rates that lead to an increase in\nrevenues for information gatekeepers and advertisers, with\ngains to all parts [3].\nIn this work, we focus on the problem of content-targeted\nadvertising. We propose new strategies for associating ads\nwith a Web page. Five of these strategies are referred to as\nmatching strategies. They are based on the idea of matching\nthe text of the Web page directly to the text of the ads and\nits associated keywords. Five other strategies, which we here\nintroduce, are referred to as impedance coupling strategies.\nThey are based on the idea of expanding the Web page with\nnew terms to facilitate the task of matching ads and Web\npages. This is motivated by the observation that there is frequently\na mismatch between the vocabulary of a Web page\nand the vocabulary of an advertisement. We say that there\nis a vocabulary impedance problem and that our technique\nprovides a positive effect of impedance coupling by reducing\nthe vocabulary impedance. Further, all our strategies rely\non information that is already available to information gatekeepers\nthat operate keyword targeted advertising systems.\nThus, no other data from the advertiser is required.\nUsing a sample of a real case database with over 93,000\nads and 100 Web pages selected for testing, we evaluate our\nad recommendation strategies. First, we evaluate the five\nmatching strategies. They match ads to a Web page using\na standard vector model and provide what we may call\ntrivial solutions. 
Our results indicate that a strategy that matches the ad plus its keywords to a Web page, requiring the keywords to appear in the Web page, provides improvements in average precision figures of roughly 60% relative to a strategy that simply matches the ads to the Web page. Such a strategy, which we call AAK (for "ads and keywords"), is then taken as our baseline.
Next, we evaluate the five impedance coupling strategies. They are based on the idea of expanding the ad and the Web page with new terms to reduce the vocabulary impedance between their texts. Our results indicate that it is possible to generate extra improvements in average precision figures of roughly 50% relative to the AAK strategy.
The paper is organized as follows. In section 2, we introduce five matching strategies to solve content-targeted advertising. In section 3, we present our impedance coupling strategies. In section 4, we describe our experimental methodology and datasets and discuss our results. In section 5 we discuss related work. In section 6 we present our conclusions.
MATCHING STRATEGIES
Keyword advertising relies on matching search queries to ads and their associated keywords. Context-based advertising, which we address here, relies on matching ads and their associated keywords to the text of a Web page.
Given a certain Web page p, which we call the triggering page, our task is to select advertisements related to the contents of p. Without loss of generality, we consider that an advertisement a_i is composed of a title, a textual description, and a hyperlink. To illustrate, for the first ad by Google shown in Figure 1, the title is "Star Wars Trilogy Full", the description is "Get this popular DVD free. Free w/ free shipping. Sign up now", and the hyperlink points to the site "www.freegiftworld.com". Advertisements can be grouped by advertisers in groups called campaigns, such that a campaign can have one or more advertisements.
Given our triggering page p and a set A of ads, a simple way of ranking a_i ∈ A with regard to p is by matching the contents of p to the contents of a_i. For this, we use the vector space model [2], as discussed in the following.
In the vector space model, queries and documents are represented as weighted vectors in an n-dimensional space. Let w_{iq} be the weight associated with term t_i in the query q and w_{ij} be the weight associated with term t_i in the document d_j. Then, q = (w_{1q}, w_{2q}, ..., w_{iq}, ..., w_{nq}) and d_j = (w_{1j}, w_{2j}, ..., w_{ij}, ..., w_{nj}) are the weighted vectors used to represent the query q and the document d_j. These weights can be computed using classic tf-idf schemes. In such schemes, weights are taken as the product between factors that quantify the importance of a term in a document (given by the term frequency, or tf, factor) and its rarity in the whole collection (given by the inverse document frequency, or idf, factor), see [2] for details. The ranking of the query q with regard to the document d_j is computed by the cosine similarity formula, that is, the cosine of the angle between the two corresponding vectors:
sim(q, d_j) = \frac{q \cdot d_j}{|q| \, |d_j|} = \frac{\sum_{i=1}^{n} w_{iq} \, w_{ij}}{\sqrt{\sum_{i=1}^{n} w_{iq}^2} \; \sqrt{\sum_{i=1}^{n} w_{ij}^2}}    (1)
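A minimal, self-contained sketch of Eq. (1) with tf-idf weights is given below; the tokenization and the toy document-frequency table are simplifying assumptions used only to show the computation.

```python
# tf-idf weighting and cosine similarity as in Eq. (1) (simplified sketch).
import math
from collections import Counter

def tfidf_vector(text: str, df: dict, n_docs: int) -> dict:
    tf = Counter(text.lower().split())
    return {t: f * math.log(n_docs / (1 + df.get(t, 0))) for t, f in tf.items()}

def cosine(u: dict, v: dict) -> float:
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy collection statistics (document frequencies) -- illustrative only
df, n_docs = {"dvd": 10, "star": 5, "wars": 5, "free": 50}, 100
page = tfidf_vector("star wars dvd launch dvd review", df, n_docs)
ad = tfidf_vector("star wars trilogy dvd free shipping", df, n_docs)
print(round(cosine(page, ad), 3))
```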
It is represented by the function AD\ngiven by:\nAD(p, a\ni\n) = sim(p, a\ni\n)\nwhere AD stands for \"direct match of the ad, composed by\ntitle and description\" and sim(p, a\ni\n) is computed according\nto Eq. (1).\nIn our second method, we use other source of evidence\nprovided by the advertisers: the keywords. With each advertisement\na\ni\nan advertiser associates a keyword k\ni\n, which\nmay be composed of one or more terms. We denote the\nassociation between an advertisement a\ni\nand a keyword k\ni\nas the pair (a\ni\n, k\ni\n) K, where K is the set of associations\nmade by the advertisers. In the case of keyword targeted\nadvertising, such keywords are used to match the ads to the\nuser queries. In here, we use them to match ads to the Web\npage p. This provides our second method for ad matching\ngiven by:\nKW(p, a\ni\n) = sim(p, k\ni\n)\nwhere (a\ni\n, k\ni\n) K and KW stands for \"match the ad keywords\"\n.\nWe notice that most of the keywords selected by advertisers\nare also present in the ads associated with those keywords\n. For instance, in our advertisement test collection,\nthis is true for 90% of the ads. Thus, instead of using the\nkeywords as matching devices, we can use them to emphasize\nthe main concepts in an ad, in an attempt to improve our\nAD strategy. This leads to our third method of ad matching\ngiven by:\nAD KW(p, a\ni\n) = sim(p, a\ni\nk\ni\n)\nwhere (a\ni\n, k\ni\n) K and AD KW stands for \"match the ad and\nits keywords\".\nFinally, it is important to notice that the keyword k\ni\nassociated\nwith a\ni\ncould not appear at all in the triggering page\np, even when a\ni\nis highly ranked. However, if we assume that\nk\ni\nsummarizes the main topic of a\ni\naccording to an advertiser\nviewpoint, it can be interesting to assure its presence\nin p. This reasoning suggests that requiring the occurrence\nof the keyword k\ni\nin the triggering page p as a condition\nto associate a\ni\nwith p might lead to improved results. This\nleads to two extra matching strategies as follows:\nANDKW(p, a\ni\n) =\nsim(p, a\ni\n)\nif k\ni\np\n0\nif otherwise\nAD ANDKW(p, a\ni\n) = AAK(p, a\ni\n) =\nsim(p, a\ni\nk\ni\n)\nif k\ni\np\n0\nif otherwise\nwhere (a\ni\n, k\ni\n) K, ANDKW stands for \"match the ad keywords\nand force their appearance\", and AD ANDKW (or AAK for \"ads\nand keywords\") stands for \"match the ad, its keywords, and\nforce their appearance\".\nAs we will see in our results, the best among these simple\nmethods is AAK. Thus, it will be used as baseline for our\nimpedance coupling strategies which we now discuss.\nIMPEDANCE COUPLING STRATEGIES\nTwo key issues become clear as one plays with the content-targeted\nadvertising problem. First, the triggering page normally\nbelongs to a broader contextual scope than that of the\nadvertisements. Second, the association between a good advertisement\nand the triggering page might depend on a topic\nthat is not mentioned explicitly in the triggering page.\nThe first issue is due to the fact that Web pages can be\nabout any subject and that advertisements are concise in\nnature. That is, ads tend to be more topic restricted than\nWeb pages. The second issue is related to the fact that, as\nwe later discuss, most advertisers place a small number of\nadvertisements. As a result, we have few terms describing\ntheir interest areas. Consequently, these terms tend to be\nof a more general nature. 
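The five matching strategies differ only in which texts are fed to the similarity function and in whether the keyword is required to occur in the page. A hedged sketch of that composition follows (our names; sim(p_terms, d_terms) is assumed to implement Eq. (1), for instance by wrapping the tf-idf helpers sketched above, and "the keyword occurs in p" is read here as all of the keyword's terms appearing in the page).

def AD(p_terms, ad_terms, kw_terms, sim):
    # Direct match of the ad (title plus description) to the triggering page.
    return sim(p_terms, ad_terms)

def KW(p_terms, ad_terms, kw_terms, sim):
    # Match only the advertiser-chosen keyword k_i to the page.
    return sim(p_terms, kw_terms)

def AD_KW(p_terms, ad_terms, kw_terms, sim):
    # Match the ad together with its keyword.
    return sim(p_terms, ad_terms + kw_terms)

def ANDKW(p_terms, ad_terms, kw_terms, sim):
    # Match the ad, but only if the keyword occurs in the triggering page.
    return sim(p_terms, ad_terms) if set(kw_terms) <= set(p_terms) else 0.0

def AAK(p_terms, ad_terms, kw_terms, sim):
    # AD_ANDKW, the AAK baseline: ad plus keyword, with the keyword required in p.
    return sim(p_terms, ad_terms + kw_terms) if set(kw_terms) <= set(p_terms) else 0.0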
For instance, a car shop probably\nwould prefer to use \"car\" instead of \"super sport\" to describe\nits core business topic.\nAs a consequence, many specific\nterms that appear in the triggering page find no match in\nthe advertisements. To make matters worst, a page might\nrefer to an entity or subject of the world through a label\nthat is distinct from the label selected by an advertiser to\nrefer to the same entity.\nA consequence of these two issues is that vocabularies of\npages and ads have low intersection, even when an ad is\nrelated to a page. We cite this problem from now on as\nthe vocabulary impedance problem. In our experiments, we\nrealized that this problem limits the final quality of direct\nmatching strategies. Therefore, we studied alternatives to\nreduce the referred vocabulary impedance.\nFor this, we propose to expand the triggering pages with\nnew terms. Figure 2 illustrates our intuition. We already\nknow that the addition of keywords (selected by the advertiser\n) to the ads leads to improved results. We say that a\nkeyword reduces the vocabulary impedance by providing an\nalternative matching path. Our idea is to add new terms\n(words) to the Web page p to also reduce the vocabulary\nimpedance by providing a second alternative matching path.\nWe refer to our expansion technique as impedance coupling.\nFor this, we proceed as follows.\nexpansion\nterms\nkeyword\nvocabulary impedance\ntriggering\npage p\nad\nFigure 2: Addition of new terms to a Web page to\nreduce the vocabulary impedance.\nAn advertiser trying to describe a certain topic in a concise\nway probably will choose general terms to characterize that\ntopic. To facilitate the matching between this ad and our\ntriggering page p, we need to associate new general terms\nwith p. For this, we assume that Web documents similar\nto the triggering page p share common topics. Therefore,\n498\nby inspecting the vocabulary of these similar documents we\nmight find good terms for better characterizing the main\ntopics in the page p. We now describe this idea using a\nBayesian network model [10, 11, 13] depicted in Figure 3.\nR\nD\n0\nD\n1\nD\nj\nD\nk\nT\n1\nT\n2\nT\n3\nT\ni\nT\nm\n...\n...\n...\n...\nFigure 3: Bayesian network model for our impedance\ncoupling technique.\nIn our model, which is based on the belief network in [11],\nthe nodes represent pieces of information in the domain.\nWith each node is associated a binary random variable,\nwhich takes the value 1 to mean that the corresponding entity\n(a page or terms) is observed and, thus, relevant in our\ncomputations. In this case, we say that the information was\nobserved. Node R represents the page r, a new representation\nfor the triggering page p. Let N be the set of the k\nmost similar documents to the triggering page, including the\ntriggering page p itself, in a large enough Web collection C.\nRoot nodes D\n0\nthrough D\nk\nrepresent the documents in N ,\nthat is, the triggering page D\n0\nand its k nearest neighbors,\nD\n1\nthrough D\nk\n, among all pages in C. There is an edge\nfrom node D\nj\nto node R if document d\nj\nis in N . Nodes\nT\n1\nthrough T\nm\nrepresent the terms in the vocabulary of C.\nThere is an edge from node D\nj\nto a node T\ni\nif term t\ni\noccurs\nin document d\nj\n. 
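The root nodes D_1 through D_k are simply the k documents of the collection C most similar to the triggering page. A small sketch of that neighbor-selection step, under the assumption that cosine similarity over tf-idf vectors (Eq. 1) is the similarity used (function and variable names are ours):

def neighborhood(page_id, page_vec, collection_vecs, k, cosine):
    # D_0 is the triggering page itself; D_1 .. D_k are its k nearest neighbors in C,
    # ranked by the similarity of their tf-idf vectors to the page's vector.
    scored = sorted(((cosine(page_vec, vec), doc_id)
                     for doc_id, vec in collection_vecs.items() if doc_id != page_id),
                    reverse=True)
    return [page_id] + [doc_id for _, doc_id in scored[:k]]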
In our model, the observation of the pages\nin N leads to the observation of a new representation of the\ntriggering page p and to a set of terms describing the main\ntopics associated with p and its neighbors.\nGiven these definitions, we can now use the network to\ndetermine the probability that a term t\ni\nis a good term for\nrepresenting a topic of the triggering page p. In other words,\nwe are interested in the probability of observing the final\nevidence regarding a term t\ni\n, given that the new representation\nof the page p has been observed, P (T\ni\n= 1|R = 1).\nThis translates into the following equation\n1\n:\nP (T\ni\n|R) =\n1\nP (R)\nX\nd\nP (T\ni\n|d)P (R|d)P (d)\n(2)\nwhere d represents the set of states of the document nodes.\nSince we are interested just in the states in which only a\nsingle document d\nj\nis observed and P (d) can be regarded as\na constant, we can rewrite Eq. (2) as:\nP (T\ni\n|R) =\n\nP (R)\nk\nX\nj=0\nP (T\ni\n|d\nj\n)P (R|d\nj\n)\n(3)\nwhere d\nj\nrepresents the state of the document nodes in\nwhich only document d\nj\nis observed and is a constant\n1\nTo simplify our notation we represent the probabilities\nP (X = 1) as P (X) and P (X = 0) as P (X).\nassociated with P (d\nj\n). Eq. (3) is the general equation to\ncompute the probability that a term t\ni\nis related to the triggering\npage. We now define the probabilities P (T\ni\n|d\nj\n) and\nP (R|d\nj\n) as follows:\nP (T\ni\n|d\nj\n) = w\nij\n(4)\nP (R|d\nj\n)\n=\n(1 - )\nj = 0\nsim(r, d\nj\n)\n1 j k\n(5)\nwhere is a normalizing constant, w\nij\nis the weight associated\nwith term t\ni\nin the document d\nj\n, and sim(p, d\nj\n) is\ngiven by Eq. (1), i.e., is the cosine similarity between p and\nd\nj\n. The weight w\nij\nis computed using a classic tf-idf scheme\nand is zero if term t\ni\ndoes not occur in document d\nj\n. Notice\nthat P (T\ni\n|d\nj\n) = 1 - P (T\ni\n|d\nj\n) and P (R|d\nj\n) = 1 - P (R|d\nj\n).\nBy defining the constant , it is possible to determine how\nimportant should be the influence of the triggering page p\nto its new representation r. By substituting Eq. (4) and\nEq. (5) into Eq. (3), we obtain:\nP (T\ni\n|R) = ((1 - ) w\ni0\n+\nk\nX\nj=1\nw\nij\nsim(r, d\nj\n))\n(6)\nwhere = is a normalizing constant.\nWe use Eq. (6) to determine the set of terms that will\ncompose r, as illustrated in Figure 2. Let t\ntop\nbe the top\nranked term according to Eq. (6). The set r is composed\nof the terms t\ni\nsuch that\nP (T\ni\n|R)\nP (T\ntop\n|R)\n, where is a given\nthreshold. In our experiments, we have used = 0.05. Notice\nthat the set r might contain terms that already occur\nin p. That is, while we will refer to the set r as expansion\nterms, it should be clear that p r = .\nBy using = 0, we simply consider the terms originally\nin page p. By increasing , we relax the context of the page\np, adding terms from neighbor pages, turning page p into its\nnew representation r. This is important because, sometimes,\na topic apparently not important in the triggering page offers\na good opportunity for advertising. For example, consider\na triggering page that describes a congress in London about\ndigital photography. Although London is probably not an\nimportant topic in this page, advertisements about hotels\nin London would be appropriate. Thus, adding \"hotels\" to\npage p is important. This suggests using > 0, that is,\npreserving the contents of p and using the terms in r to\nexpand p.\nIn this paper, we examine both approaches. 
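A minimal sketch of the expansion-term selection follows. The Greek symbols of Eqs. (5) and (6) were lost in extraction, so we write alpha for the constant that balances the triggering page against its neighbors and beta for the selection threshold (0.05 in the experiments), and we assume alpha also scales the neighbor sum, which matches the statement that alpha = 0 keeps only the terms originally in p. The normalizing constant is dropped because it cancels in the ratio P(T_i|R)/P(T_top|R). All names are ours.

def expansion_terms(w, sims, alpha, beta=0.05):
    # w:    w[j][t] is the tf-idf weight of term t in document d_j (j = 0 is the page p).
    # sims: sims[j] is sim(p, d_j) from Eq. (1), for j = 1 .. k.
    vocab = set()
    for weights in w.values():
        vocab.update(weights)
    score = {}
    for t in vocab:
        own = (1 - alpha) * w[0].get(t, 0.0)
        neighbors = alpha * sum(w[j].get(t, 0.0) * sims[j] for j in sims)
        score[t] = own + neighbors          # Eq. (6) up to its normalizing constant
    top = max(score.values()) if score else 0.0
    # Threshold rule: keep terms whose score is at least beta times the top score.
    return {t for t, s in score.items() if top > 0 and s / top >= beta}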
Thus, in our\nsixth method we match r, the set of new expansion terms,\ndirectly to the ads, as follows:\nAAK T(p, a\ni\n) = AAK(r, a\ni\n)\nwhere AAK T stands for \"match the ad and keywords to the\nset r of expansion terms\".\nIn our seventh method, we match an expanded page p to\nthe ads as follows:\nAAK EXP(p, a\ni\n) = AAK(p r, a\ni\n)\nwhere AAK EXP stands for \"match the ad and keywords to\nthe expanded triggering page\".\n499\nTo improve our ad placement methods, other external\nsource that we can use is the content of the page h pointed to\nby the advertisement's hyperlink, that is, its landing page.\nAfter all, this page comprises the real target of the ad and\nperhaps could present a more detailed description of the\nproduct or service being advertised. Given that the advertisement\na\ni\npoints to the landing page h\ni\n, we denote this\nassociation as the pair (a\ni\n, h\ni\n) H, where H is the set of\nassociations between the ads and the pages they point to.\nOur eighth method consists of matching the triggering page\np to the landing pages pointed to by the advertisements, as\nfollows:\nH(p, a\ni\n) = sim(p, h\ni\n)\nwhere (a\ni\n, h\ni\n) H and H stands for \"match the hyperlink\npointed to by the ad\".\nWe can also combine this information with the more promising\nmethods previously described, AAK and AAK EXP as follows\n. Given that (a\ni\n, h\ni\n) H and (a\ni\n, k\ni\n) K, we have our\nlast two methods:\nAAK H(p, a\ni\n) =\nsim(p, a\ni\nh\ni\nk\ni\n)\nif k\ni\np\n0\nif otherwise\nAAK EXP H(p, a\ni\n) =\nsim(p r, a\ni\nh\ni\nk\ni\n)\nif k\ni\n(p r)\n0\nif otherwise\nwhere AAK H stands for \"match ads and keywords also considering\nthe page pointed by the ad\" and AAH EXP H stands\nfor \"match ads and keywords with expanded triggering page,\nalso considering the page pointed by the ad\".\nNotice that other combinations were not considered in this\nstudy due to space restrictions. These other combinations\nled to poor results in our experimentation and for this reason\nwere discarded.\nEXPERIMENTS\nTo evaluate our ad placement strategies, we performed\na series of experiments using a sample of a real case ad\ncollection with 93,972 advertisements, 1,744 advertisers, and\n68,238 keywords\n2\n. The advertisements are grouped in 2,029\ncampaigns with an average of 1.16 campaigns per advertiser.\nFor the strategies AAK T and AAK EXP, we had to generate\na set of expansion terms. For that, we used a database\nof Web pages crawled by the TodoBR search engine [12]\n(http://www.todobr.com.br/). This database is composed\nof 5,939,061 pages of the Brazilian Web, under the domain\n\".br\". For the strategies H, AAK H, and AAK EXP H, we also\ncrawled the pages pointed to by the advertisers. No other\nfiltering method was applied to these pages besides the removal\nof HTML tags.\nSince we are initially interested in the placement of advertisements\nin the pages of information portals, our test collection\nwas composed of 100 pages extracted from a Brazilian\nnewspaper. These are our triggering pages. They were\ncrawled in such a way that only the contents of their articles\nwas preserved. As we have no preferences for particular\n2\nData in portuguese provided by an on-line advertisement\ncompany that operates in Brazil.\ntopics, the crawled pages cover topics as diverse as politics,\neconomy, sports, and culture.\nFor each of our 100 triggering pages, we selected the top\nthree ranked ads provided by each of our 10 ad placement\nstrategies. 
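Put together, the five impedance coupling strategies are thin compositions of the pieces sketched earlier (the AAK and sim helpers from the matching-strategy sketch, and the expansion terms r from Eq. (6)). The following hedged sketch uses our own names, drops duplicate terms when forming the expanded page for simplicity, and keeps the same "all keyword terms must occur" reading of the condition on k_i.

def AAK_T(r_terms, ad_terms, kw_terms, sim):
    # Sixth method: match ad plus keyword against the expansion terms r alone.
    return AAK(r_terms, ad_terms, kw_terms, sim)

def AAK_EXP(p_terms, r_terms, ad_terms, kw_terms, sim):
    # Seventh method: match against the triggering page expanded with r.
    return AAK(list(set(p_terms) | set(r_terms)), ad_terms, kw_terms, sim)

def H(p_terms, h_terms, sim):
    # Eighth method: match the page to the ad's landing page h_i.
    return sim(p_terms, h_terms)

def AAK_H(p_terms, ad_terms, kw_terms, h_terms, sim):
    # Ninth method: ad, keyword, and landing page, with the keyword required in p.
    return (sim(p_terms, ad_terms + h_terms + kw_terms)
            if set(kw_terms) <= set(p_terms) else 0.0)

def AAK_EXP_H(p_terms, r_terms, ad_terms, kw_terms, h_terms, sim):
    # Tenth method: same as AAK_H but over the expanded page p union r.
    expanded = list(set(p_terms) | set(r_terms))
    return (sim(expanded, ad_terms + h_terms + kw_terms)
            if set(kw_terms) <= set(expanded) else 0.0)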
Thus, for each triggering page we select no more\nthan 30 ads. These top ads were then inserted in a pool\nfor that triggering page. Each pool contained an average of\n15.81 advertisements. All advertisements in each pool were\nsubmitted to a manual evaluation by a group of 15 users.\nThe average number of relevant advertisements per page\npool was 5.15. Notice that we adopted the same pooling\nmethod used to evaluate the TREC Web-based collection [6].\nTo quantify the precision of our results, we used 11-point\naverage figures [2]. Since we are not able to evaluate the\nentire ad collection, recall values are relative to the set of\nevaluated advertisements.\n4.2\nTuning Idf factors\nWe start by analyzing the impact of different idf factors\nin our advertisement collection. Idf factors are important\nbecause they quantify how discriminative is a term in the\ncollection. In our ad collection, idf factors can be computed\nby taking ads, advertisers or campaigns as documents. To\nexemplify, consider the computation of \"ad idf\" for a term\nt\ni\nthat occurs 9 times in a collection of 100 ads. Then, the\ninverse document frequency of t\ni\nis given by:\nidf\ni\n= log 100\n9\nHence, we can compute ad, advertiser or campaign idf factors\n. As we observe in Figure 4, for the AD strategy, the best\nranking is obtained by the use of campaign idf, that is, by\ncalculating our idf factor so that it discriminates campaigns.\nSimilar results were obtained for all the other methods.\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0\n0.2\n0.4\n0.6\n0.8\n1\nprecision\nrecall\nCampaign idf\nAdvertiser idf\nAd idf\nFigure 4: Precision-recall curves obtained for the\nAD strategy using ad, advertiser, and campaign idf\nfactors.\nThis reflects the fact that terms might be better discriminators\nfor a business topic than for an specific ad. This\neffect can be accomplished by calculating the factor relative\nto idf advertisers or campaigns instead of ads. In fact, campaign\nidf factors yielded the best results. Thus, they will be\nused in all the experiments reported from now on.\n500\n4.3\nResults\nMatching Strategies\nFigure 5 displays the results for the matching strategies presented\nin Section 2. As shown, directly matching the contents\nof the ad to the triggering page (AD strategy) is not so\neffective. The reason is that the ad contents are very noisy.\nIt may contain messages that do not properly describe the\nad topics such as requisitions for user actions (e.g, \"visit our\nsite\") and general sentences that could be applied to any\nproduct or service (e.g, \"we delivery for the whole country\"\n). On the other hand, an advertiser provided keyword\nsummarizes well the topic of the ad. As a consequence, the\nKW strategy is superior to the AD and AD KW strategies. This\nsituation changes when we require the keywords to appear\nin the target Web page. By filtering out ads whose keywords\ndo not occur in the triggering page, much noise is discarded.\nThis makes ANDKW a better alternative than KW. Further, in\nthis new situation, the contents of the ad becomes useful\nto rank the most relevant ads making AD ANDKW (or AAK for\n\"ads and keywords\") the best among all described methods.\nFor this reason, we adopt AAK as our baseline in the next set\nof experiments.\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0\n0.2\n0.4\n0.6\n0.8\n1\nprecision\nrecall\nAAK\nANDKW\nKW\nAD_KW\nAD\nFigure 5:\nComparison among our five matching\nstrategies. 
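The only difference between ad, advertiser, and campaign idf is which unit plays the role of "document" when document frequencies are counted. A small sketch of that computation (our names; the logarithm base is a convention left unspecified in the text, as in the idf_i = log 100/9 example, and pooling all of a campaign's ads into one unit is our assumption about how campaign idf is obtained):

import math
from collections import defaultdict

def idf_factors(units):
    # units: one term collection per "document" -- an ad, an advertiser, or a campaign,
    # depending on which granularity the idf factor should discriminate.
    n = len(units)
    df = defaultdict(int)
    for terms in units:
        for t in set(terms):
            df[t] += 1
    # idf_i = log(N / df_i), as in the worked example log(100 / 9).
    return {t: math.log(n / c) for t, c in df.items()}

def campaign_units(ads, campaign_of):
    # Pool the terms of all ads belonging to the same campaign into a single unit.
    pooled = defaultdict(set)
    for ad_id, terms in ads.items():
        pooled[campaign_of[ad_id]].update(terms)
    return list(pooled.values())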
AAK (\"ads and keywords\") is superior.\nTable 1 illustrates average precision figures for Figure 5.\nWe also present actual hits per advertisement slot. We call\n\"hit\" an assignment of an ad (to the triggering page) that\nwas considered relevant by the evaluators. We notice that\nour AAK strategy provides a gain in average precision of 60%\nrelative to the trivial AD strategy. This shows that careful\nconsideration of the evidence related to the problem does\npay off.\nImpedance Coupling Strategies\nTable 2 shows top ranked terms that occur in a page covering\nArgentinean wines produced using grapes derived from\nthe Bordeaux region of France. The p column includes the\ntop terms for this page ranked according to our tf-idf weighting\nscheme. The r column includes the top ranked expansion\nterms generated according to Eq. (6). Notice that the\nexpansion terms not only emphasize important terms of the\ntarget page (by increasing their weights) such as \"wines\" and\nMethods\nHits\n11-pt average\n#1\n#2\n#3\ntotal\nscore\ngain(%)\nAD\n41\n32\n13\n86\n0.104\nAD KW\n51\n28\n17\n96\n0.106\n+1.9\nKW\n46\n34\n28\n108\n0.125\n+20.2\nANDKW\n49\n37\n35\n121\n0.153\n+47.1\nAD ANDKW (AAK)\n51\n48\n39\n138\n0.168\n+61.5\nTable 1: Average precision figures, corresponding to\nFigure 5, for our five matching strategies. Columns\nlabelled #1, #2, and #3 indicate total of hits in\nfirst, second, and third advertisement slots, respectively\n. The AAK strategy provides improvements of\n60% relative to the AD strategy.\nRank\np\nr\nterm\nscore\nterm\nscore\n1\nargentina\n0.090\nwines\n0.251\n2\nobtained*\n0.047\nwine*\n0.140\n3\nclass*\n0.036\nwhites\n0.091\n4\nwhites\n0.035\nred*\n0.057\n5\nfrench*\n0.031\ngrape\n0.051\n6\norigin*\n0.029\nbordeaux\n0.045\n7\nfrance*\n0.029\nacideness*\n0.038\n8\ngrape\n0.017\nargentina\n0.037\n9\nsweet*\n0.016\naroma*\n0.037\n10\ncountry*\n0.013\nblanc*\n0.036\n...\n35\nwines\n0.010\n\n\n...\nTable 2: Top ranked terms for the triggering page\np according to our tf-idf weighting scheme and top\nranked terms for r, the expansion terms for p, generated\naccording to Eq. (6).\nRanking scores were\nnormalized in order to sum up to 1. Terms marked\nwith `*' are not shared by the sets p and r.\n\"whites\", but also reveal new terms related to the main topic\nof the page such as \"aroma\" and \"red\". Further, they avoid\nsome uninteresting terms such as \"obtained\" and \"country\".\nFigure 6 illustrates our results when the set r of expansion\nterms is used. They show that matching the ads to\nthe terms in the set r instead of to the triggering page p\n(AAK T strategy) leads to a considerable improvement over\nour baseline, AAK. The gain is even larger when we use the\nterms in r to expand the triggering page (AAK EXP method).\nThis confirms our hypothesis that the triggering page could\nhave some interesting terms that should not be completely\ndiscarded.\nFinally, we analyze the impact on the ranking of using the\ncontents of pages pointed by the ads. Figure 7 displays our\nresults. It is clear that using only the contents of the pages\npointed by the ads (H strategy) yields very poor results.\nHowever, combining evidence from the pages pointed by the\nads with our baseline yields improved results.\nMost important\n, combining our best strategy so far (AAK EXP) with\npages pointed by ads (AAK EXP H strategy) leads to superior\nresults. 
This happens because the two additional sources\nof evidence, expansion terms and pages pointed by the ads,\nare distinct and complementary, providing extra and valuable\ninformation for matching ads to a Web page.\n501\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0\n0.2\n0.4\n0.6\n0.8\n1\nprecision\nrecall\nAAK_EXP\nAAK_T\nAAK\nFigure 6: Impact of using a new representation for\nthe triggering page, one that includes expansion\nterms.\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0\n0.2\n0.4\n0.6\n0.8\n1\nprecision\nrecall\nAAK_EXP_H\nAAK_H\nAAK\nH\nFigure 7: Impact of using the contents of the page\npointed by the ad (the hyperlink).\nFigure 8 and Table 3 summarize all results described in\nthis section.\nIn Figure 8 we show precision-recall curves\nand in Table 3 we show 11-point average figures. We also\npresent actual hits per advertisement slot and gains in average\nprecision relative to our baseline, AAK. We notice that\nthe highest number of hits in the first slot was generated by\nthe method AAK EXP. However, the method with best overall\nretrieval performance was AAK EXP H, yielding a gain in\naverage precision figures of roughly 50% over the baseline\n(AAK).\n4.4\nPerformance Issues\nIn a keyword targeted advertising system, ads are assigned\nat query time, thus the performance of the system is a very\nimportant issue. In content-targeted advertising systems,\nwe can associate ads with a page at publishing (or updating\n) time. Also, if a new ad comes in we might consider\nassigning this ad to already published pages in offline mode.\nThat is, we might design the system such that its performance\ndepends fundamentally on the rate that new pages\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0\n0.2\n0.4\n0.6\n0.8\n1\nprecision\nrecall\nAAK_EXP_H\nAAK_EXP\nAAK_T\nAAK_H\nAAK\nH\nFigure 8:\nComparison among our ad placement\nstrategies.\nMethods\nHits\n11-pt average\n#1\n#2\n#3\ntotal\nscore\ngain(%)\nH\n28\n5\n6\n39\n0.026\n-84.3\nAAK\n51\n48\n39\n138\n0.168\nAAK H\n52\n50\n46\n148\n0.191\n+13.5\nAAK T\n65\n49\n43\n157\n0.226\n+34.6\nAAK EXP\n70\n52\n53\n175\n0.242\n+43.8\nAAK EXP H\n64\n61\n51\n176\n0.253\n+50.3\nTable 3: Results for our impedance coupling strategies\n.\nare published and the rate that ads are added or modified.\nFurther, the data needed by our strategies (page crawling,\npage expansion, and ad link crawling) can be gathered and\nprocessed offline, not affecting the user experience. Thus,\nfrom this point of view, the performance is not critical and\nwill not be addressed in this work.\nRELATED WORK\nSeveral works have stressed the importance of relevance\nin advertising. For example, in [14] it was shown that advertisements\nthat are presented to users when they are not\ninterested on them are viewed just as annoyance.\nThus,\nin order to be effective, the authors conclude that advertisements\nshould be relevant to consumer concerns at the\ntime of exposure. The results in [9] enforce this conclusion\nby pointing out that the more targeted the advertising, the\nmore effective it is.\nTherefore it is not surprising that other works have addressed\nthe relevance issue. For instance, in [8] it is proposed\na system called ADWIZ that is able to adapt online advertisement\nto a user's short-term interests in a non-intrusive\nway. Contrary to our work, ADWIZ does not directly use\nthe content of the page viewed by the user. It relies on search\nkeywords supplied by the user to search engines and on the\nURL of the page requested by the user. 
On the other hand,\nin [7] the authors presented an intrusive approach in which\nan agent sits between advertisers and the user's browser allowing\na banner to be placed into the currently viewed page.\nIn spite of having the opportunity to use the page's content,\n502\nthe agent infers relevance based on category information and\nuser's private information collected along the time.\nIn [5] the authors provide a comparison between the ranking\nstrategies used by Google and Overture for their keyword\nadvertising systems. Both systems select advertisements by\nmatching them to the keywords provided by the user in a\nsearch query and rank the resulting advertisement list according\nto the advertisers' willingness to pay. In particular\n, Google approach also considers the clickthrough rate\nof each advertisement as an additional evidence for its relevance\n. The authors conclude that Google's strategy is better\nthan that used by Overture. As mentioned before, the ranking\nproblem in keyword advertising is different from that of\ncontent-targeted advertising. Instead of dealing with keywords\nprovided by users in search queries, we have to deal\nwith the contents of a page which can be very diffuse.\nFinally, the work in [4] focuses on improving search engine\nresults in a TREC collection by means of an automatic\nquery expansion method based on kNN [17]. Such method\nresembles our expansion approach presented in section 3.\nOur method is different from that presented by [4]. They\nexpand user queries applied to a document collection with\nterms extracted from the top k documents returned as answer\nto the query in the same collection. In our case, we\nuse two collections: an advertisement and a Web collection.\nWe expand triggering pages with terms extracted from the\nWeb collection and then we match these expanded pages to\nthe ads from the advertisement collection. By doing this, we\nemphasize the main topics of the triggering pages, increasing\nthe possibility of associating relevant ads with them.\nCONCLUSIONS\nIn this work we investigated ten distinct strategies for associating\nads with a Web page that is browsed (content-targeted\nadvertising).\nFive of our strategies attempt to\nmatch the ads directly to the Web page. Because of that,\nthey are called matching strategies. The other five strategies\nrecognize that there is a vocabulary impedance problem\namong ads and Web pages and attempt to solve the problem\nby expanding the Web pages and the ads with new terms.\nBecause of that they are called impedance coupling strategies\n.\nUsing a sample of a real case database with over 93 thousand\nads, we evaluated our strategies. For the five matching\nstrategies, our results indicated that planned consideration\nof additional evidence (such as the keywords provided by the\nadvertisers) yielded gains in average precision figures (for\nour test collection) of 60%. This was obtained by a strategy\ncalled AAK (for \"ads and keywords\"), which is taken as\nthe baseline for evaluating our more advanced impedance\ncoupling strategies.\nFor our five impedance coupling strategies, the results indicate\nthat additional gains in average precision of 50% (now\nrelative to the AAK strategy) are possible. 
These were generated\nby expanding the Web page with new terms (obtained\nusing a sample Web collection containing over five million\npages) and the ads with the contents of the page they point\nto (a hyperlink provided by the advertisers).\nThese are first time results that indicate that high quality\ncontent-targeted advertising is feasible and practical.\nACKNOWLEDGEMENTS\nThis work was supported in part by the GERINDO project\n, grant MCT/CNPq/CT-INFO 552.087/02-5, by CNPq\ngrant 300.188/95-1 (Berthier Ribeiro-Neto), and by CNPq\ngrant 303.576/04-9 (Edleno Silva de Moura). Marco Cristo\nis supported by Fucapi, Manaus, AM, Brazil.\nREFERENCES\n[1] The Google adwords. Google content-targeted advertising.\nhttp://adwords.google.com/select/ct_faq.html, November\n2004.\n[2] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information\nRetrieval. Addison-Wesley-Longman, 1st edition, 1999.\n[3] H. K. Bhargava and J. Feng. Paid placement strategies for\ninternet search engines. In Proceedings of the eleventh\ninternational conference on World Wide Web, pages 117123.\nACM Press, 2002.\n[4] E. P. Chan, S. Garcia, and S. Roukos. Trec-5 ad hoc retrieval\nusing k nearest-neighbors re-scoring. In The Fifth Text\nREtrieval Conference (TREC-5). National Institute of\nStandards and Technology (NIST), November 1996.\n[5] J. Feng, H. K. Bhargava, and D. Pennock. Comparison of\nallocation rules for paid placement advertising in search\nengines. In Proceedings of the 5th international conference on\nElectronic commerce, pages 294299. ACM Press, 2003.\n[6] D. Hawking, N. Craswell, and P. B. Thistlewaite. Overview of\nTREC-7 very large collection track. In The Seventh Text\nREtrieval Conference (TREC-7), pages 91104, Gaithersburg,\nMaryland, USA, November 1998.\n[7] Y. Kohda and S. Endo. Ubiquitous advertising on the www:\nmerging advertisement on the browser. Comput. Netw. ISDN\nSyst., 28(7-11):14931499, 1996.\n[8] M. Langheinrich, A. Nakamura, N. Abe, T. Kamba, and\nY. Koseki. Unintrusive customization techniques for web\nadvertising. Comput. Networks, 31(11-16):12591272, 1999.\n[9] T. P. Novak and D. L. Hoffman. New metrics for new media:\ntoward the development of web measurement standards. World\nWide Web J., 2(1):213246, 1997.\n[10] J. Pearl. Probabilistic Reasoning in Intelligent Systems:\nNetworks of plausible inference. Morgan Kaufmann Publishers,\n2nd edition, 1988.\n[11] B. Ribeiro-Neto and R. Muntz. A belief network model for IR.\nIn Proceedings of the 19th Annual International ACM SIGIR\nConference on Research and Development in Information\nRetrieval, pages 253260, Zurich, Switzerland, August 1996.\n[12] A. Silva, E. Veloso, P. Golgher, B. Ribeiro-Neto, A. Laender,\nand N. Ziviani. CobWeb - a crawler for the brazilian web. In\nProceedings of the String Processing and Information\nRetrieval Symposium (SPIRE'99), pages 184191, Cancun,\nMexico, September 1999.\n[13] H. Turtle and W. B. Croft. Evaluation of an inference\nnetwork-based retrieval model. ACM Transactions on\nInformation Systems, 9(3):187222, July 1991.\n[14] C. Wang, P. Zhang, R. Choi, and M. Daeredita. Understanding\nconsumers attitude toward advertising. In Eighth Americas\nConference on Information Systems, pages 11431148, August\n2002.\n[15] M. Weideman. Ethical issues on content distribution to digital\nconsumers via paid placement as opposed to website visibility\nin search engine results. In The Seventh ETHICOMP\nInternational Conference on the Social and Ethical Impacts\nof Information and Communication Technologies, pages\n904915. 
Troubador Publishing Ltd, April 2004.\n[16] M. Weideman and T. Haig-Smith. An investigation into search\nengines as a form of targeted advert delivery. In Proceedings of\nthe 2002 annual research conference of the South African\ninstitute of computer scientists and information technologists\non Enablement through technology, pages 258258. South\nAfrican Institute for Computer Scientists and Information\nTechnologists, 2002.\n[17] Y. Yang. Expert network: Effective and efficient learning from\nhuman decisions in text categorization and retrieval. In W. B.\nCroft and e. C. J. van Rijsbergen, editors, Proceedings of the\n17rd annual international ACM SIGIR conference on\nResearch and development in information retrieval, pages\n1322. Springer-Verlag, 1994.\n503\n", "keywords": ";advertisements;triggering page;Bayesian networks;Advertising;matching;kNN;Web;content-targeted advertising;impedance coupling"} {"name": "104", "title": "Implementing the IT Fundamentals Knowledge Area", "abstract": "The recently promulgated IT model curriculum contains IT fundamentals as one of its knowledge areas. It is intended to give students a broad understanding of (1) the IT profession and the skills that students must develop to become successful IT professionals and (2) the academic discipline of IT and its relationship to other disciplines. As currently defined, the IT fundamentals knowledge area requires 33 lecture hours to complete. The model curriculum recommends that the material relevant to the IT fundamentals knowledge area be offered early in the curriculum, for example in an introduction to IT course; however, many institutions will have to include additional material in an introductory IT course. For example, the Introduction of IT course at Georgia Southern University is used to introduce students to the available second disciplines (an important part of the Georgia Southern IT curriculum aimed at providing students with in-depth knowledge of an IT application domain), some productivity tools, and SQL. For many programs there may be too much material in an introductory IT course. This paper describes how Georgia Southern University resolved this dilemma.", "fulltext": "INTRODUCTION\nThe recently promulgated IT Model Curriculum, available at\nhttp://sigite.acm.org/activities/curriculum/, consists of 12 knowledge\nareas including IT fundamentals (ITF). ITF is intended to\nprovide students with a set of foundation skills and provide an\noverview of the discipline of IT and its relationship to other\ncomputing disciplines. It is also intended to help students\nunderstand the diverse contexts in which IT is used and the\nchallenges inherent in the diffusion of innovative technology.\nGiven its foundational nature, it will not come as a surprise that\nthe model curriculum recommends that ITF is covered early in a\nstudent's program of study, and it seems most logical that this\nknowledge area be covered in an introductory course in a\nbaccalaureate program in IT.\nThe IT Model curriculum recommends a minimum coverage of 33\nlecture hours for the ITF knowledge area; however, a typical 3-credit\nsemester course gives an instructor, at most, 45 lecture\nhours, and many programs will have to include additional material\nin an introductory course. 
For example, an important element of\nthe IT program at Georgia Southern University is the inclusion of\nsecond disciplines, coherent sets of 7 courses in an IT application\narea, such as electronic broadcasting, law enforcement, music\ntechnology, and supply chain management ([5], [6]). Since\nstudents must begin introductory courses in their second\ndiscipline relatively early in their academic program, it is\nimportant that they be exposed to the range of second disciplines\navailable to them early, and the most appropriate place to do this\nis in the introductory IT course. Also, students enrolling in the\nintroductory IT course at Georgia Southern are not expected to\nhave taken a computer literacy course beforehand, and it has\nbecome clear that many are weak in the use of spreadsheets. Since\nthe program strongly believes that IT graduates can be expected to\nbe conversant with basic productivity tools, including\nspreadsheets, the course must cover the basics of spreadsheet\napplication. Finally, the introductory IT course must also provide\na basic coverage of SQL, because the web design course, which\ncovers n-tier architectures and requires a basic knowledge of SQL,\nis taught before the data management course in which SQL is\nnormally presented.\nWhile the additional material that has to be covered in an\nintroductory IT course is likely to differ between institutions, it is\nlikely that many, if not all, IT programs will have to cover some\nadditional material. Given that ITF already requires 33 lecture\nhours, considerable pressure is placed upon instructors in\nintroductory IT courses to cover both the ITF material and\nwhatever additional material needs to be included.\nThe intent of this paper is to describe how this particular dilemma\nwas resolved at Georgia Southern University. Section 2 provides\nmore details about the IT fundamentals knowledge area, while\nsection 3 discusses the introduction to IT course offered at\nGeorgia Southern University. Section 4 concludes.\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nSIGITE 05, October 2022, 2005, Newark, NJ, USA.\nCopyright 2005 ACM 1-59593-252-6/05/0010...$5.00.\n\n1\nTHE IT FUNDAMENTALS KNOWLEDGE AREA\nThe IT Model Curriculum follows the example set by the\nComputer Science model curriculum (http://www.acm.org/\neducation/curricula.html) and distinguishes between a number of\nknowledge areas, each consisting of a number of knowledge units.\nKnowledge units are themselves composed of topics and learning\noutcomes. For reasons explained in ([4]), the IT model curriculum\ndiffers from the computer science model curriculum in that it\ndistinguishes between core learning outcomes, which every\ngraduate from an IT program is expected to achieve, and elective\nlearning outcomes, which only graduates specializing in this area\nare expected to achieve. Given the foundational nature of ITF, it\nshould come as no surprise that ITF only has core learning\noutcomes associated with it.\nBelow are listed the knowledge units and the core learning\noutcomes associated with each. 
The number behind each\nknowledge unit is the minimum recommended coverage expressed\nin lecture hours.\nITF1. Pervasive themes in IT (17)\n1. Describe the components of IT systems and their\ninterrelationships.\n2. Describe how complexity occurs in IT.\n3. Recognize that an IT professional must know how to\nmanage complexity.\n4. List examples of tools and methods used in IT for\nmanaging complexity.\n5. Describe the role of the IT professional as the user\nadvocate.\n6. Explain why life-long learning and continued\nprofessional development is critical for an IT\nprofessional.\n7. Explain why adaptability and interpersonal skills are\nimportant to an IT professional.\n8. Distinguish between data and information, and describe\nthe interrelationship.\n9. Describe the importance of data and information in IT.\n10. Explain why the mastery of information and\ncommunication technologies is important to an IT\nprofessional.\n11. Explain why the IAS perspective needs to pervade all\naspects of IT.\nITF2. Organizational Issues (6)\n1. Describe the elements of a feasible IT application.\n2. Identify the extent and activities involved in an IT\napplication.\n3. Understand the requirements of the business processes.\n4. Outline the project management processes.\n5. List the integration processes.\nITF3. History of IT (3)\n1. Outline the history of computing technology.\n2. Describe significant impacts of computing on society.\n3. Describe significant changes in human-computer\ninteraction.\n4. 4. Outline the history of the Internet.\nITF4. IT and its related and informing disciplines (3)\n1. Define \"Information Technology.\"\n2. Describe the relationship between IT and other\ncomputing disciplines.\n3. Describe the relationship between IT and non-computing\ndisciplines.\n4. Explain why mathematics and statistics are important in\nIT.\nITF5. Application domains (2)\n1. Describe the application of IT in non-computing\ndisciplines.\n2. Describe how IT has impacted almost all aspects of\nmodern living.\n3. Describe ways and extents in which IT has changed the\ninteraction and communication in our society.\n4.\nDescribe how IT has impacted the globalization of\nworld economy, culture, political systems, health,\nsecurity, warfare, etc\n.\nITF6. Application of math and statistics to IT (2)\n1. Recognize the foundation of IT is built upon the various\naspects of mathematics.\n2. Understand the number systems used in computation.\n3. Explain data representation and encoding systems.\n4. Describe the current encryption methods and their\nlimitations.\n5. Describe the pervasive usage of mathematical concepts,\nsuch as functions, relations, sets as well as basic logic\nused in programming.\n6. Recognize the value of probability and statistics.\n7. Describe the basic data analysis concepts and methods\nused in IT applications.\nThe total minimum recommended coverage thus is 33 lecture\nhours.\nTHE INTRODUCTION TO IT COURSE AT GEORGIA SOUTHERN UNIVERSITY\nThe introduction to IT course (IT 1130) offered in the Department\nof IT at Georgia Southern University is designed to introduce\nstudents to IT as a discipline and cover some productivity tools,\nnamely Excel and Access. In line with all other IT courses at\nGeorgia Southern University, IT 1130 was formulated through a\nset of explicit learning outcomes. The learning outcomes for IT\n1130 are\n1. Demonstrate a basic understanding of the field of IT,\nincluding the ability to\ni.\nDefine the term \"Information Technology\";\nii. 
Recognize the disciplines that have contributed to the\nemergence of IT, namely computer science, information\nsystems, and computer engineering;\niii. Identify areas in which IT has significantly impacted\nindividuals, organizations and/or societies.\n2. Demonstrate an understanding of basic information\ntechnology software applications, including the ability to\ni.\nUsing a given specification, create a simple database;\nii. Use SQL for simple queries;\niii. Use an office productivity suite.\nThe overlap between Objective 1 and the ITF Knowledge Area is\nsignificant; however, due to Objective 2, the introductory IT\ncourse at Georgia Southern must cover significant additional\nmaterial not specified in the IT fundamentals knowledge area.\n2\n3.2 Course Outline and its Mapping to the IT\nFundamentals Knowledge Area\nThe Introduction to IT course at Georgia Southern consists of 45\nlecture hours. Teaching productivity tools, Learning Outcome 2\nlisted in Section 3.1, accounts for roughly 9 hours of instruction.\nExams conducted during the semester account for 3 hours of\ninstruction. This leaves 33 lecture hours to cover the remaining\ntopics for IT 1130 relating to Learning Outcome 1 listed in\nSection 3.1. Table 1 provides a breakdown of the topics covered\nin the remaining 33 hours of instruction, the number of lecture\nhours spent on that topic, as well as the learning outcome in the\nIT fundamentals knowledge area of the model curriculum to\nwhich the topic corresponds.\nTABLE 1: IT 1130 Topics and ITF Learning Outcomes\nIT 1130 Topic\nObjective # Hours\n1 Define IT\nITF4.1\n1\n2 Data and Information\nITF1.8\nITF1.9\n1\n3 Components of IT Systems\n\nHardware\n\nSoftware\n\nNetworks\n\nUser\nITF1.1 8.5\n4 Core Technologies\n\nData Management\n\nNetworking\n\nWeb Systems\n\nSAD\n\nProgramming\n\nHCI\n\nSpecializations in\nBSIT\nITF1.10\nITF2.1\nITF2.2\nITF2.3\nITF2.4\nITF2.5\n8.5\n5 Related Disciplines\nITF4.2\nITF4.3\nITF4.4\n2\n6 Application Domains\n(Second Disciplines in BSIT)\nITF5.1\nITF5.2\nITF5.3\nITF 5.4\nITF 3.2\n7\n7 History of IT\nITF3.1\nITF3.4\n1\n8 Viruses, Crime, Law,\nEthics, Privacy & Security\nITF1.11\nITF 3.2\n3\n9 IT as a Profession\nITF1.5\nITF1.6\nITF1.7\nITF1.10\n1\nTOTAL 33\nTable 2 compares the number of hours of instruction in the IT\n1130 course for each of the knowledge units in the IT\nfundamentals area to the minimum recommended number of\nlecture hours listed in the model curriculum. The next section,\nSection 3.3, discusses the discrepancies between the\nrecommended number of hours and the actual number of hours\ntaught.\nTABLE 2: Comparison of IT 1130 to ITF Knowledge Area\nITF\nKnowledge\nUnits\nITF\nRecommended\nIT 1130\nKnowledge\nUnits Not\nCovered\nITF1\n17\n14\n1.2, 1.3, 1.4\nITF2 6 7.5\n\nITF3 3 2\n3.3\nITF4 3 3\nITF5 2 6.5\n\nITF6 2 Not\nCovered\n6.1 6.7\nTOTAL 33 33\n\n3.3 Some Observations\nTable 2 illustrates several noteworthy differences between the IT\n1130 course at Georgia Southern University and the knowledge\nunits in the ITF knowledge area.\n1. A discrepancy exists between the minimum number of hours\nrecommended for ITF1 (pervasive themes in IT) and the\nnumber of hours taught in IT 1130. The 3 hour discrepancy\ncan be attributed to the lack of coverage in IT 1130 of\noutcomes ITF1.2 4. Thus, IT 1130 provides no explicit\ncoverage of the reasons for the emergence of complexity in\nIT, the need for IT professionals to handle complexity, and\nthe tools and techniques available to an IT professional in\nIT1130. 
Instead, the IT program at Georgia Southern covers\ncomplexity-related issues in a number of courses throughout\nthe curriculum. For example, some complexity-related issues\nare discussed in a two-course sequence of Java programming\ncourses. Standards are discussed in a number of courses\nthroughout the curriculum, including a data communication\ncourse and a web design course in which students learns how\nto implement n-tier architectures. Finally, complexity related\nissues are also covered in a capstone course on IT issues and\nmanagement. Since the need to manage complexity is\nidentified in the IT model curriculum as a pervasive theme,\nthis is a reasonable alternative to cover this issue.\n2. The IT 1130 course devotes more lectures hours than the\nminimum recommendation to ITF2 (organizational issues)\nand ITF5 (application domains). As the recommendation is a\nminimum, this is not problematic; however, it is worth noting\nthat the explanation for these discrepancies relates directly to\nthe structure of the IT major at Georgia Southern University.\nIT majors are expected to take a number of core courses,\nincluding courses in programming; web design; software\nacquisition, implementation and integration; networking;\n3\nsystems analysis and design; data management; and project\nmanagement. In addition, IT majors specialize in either\nknowledge management and it integration, systems development\nand support, telecommunications and network\nadministration, or web and multimedia foundations. It is\nuseful to students starting out on their academic program in\nIT to receive information on the structure of the core of the\nprogram, the courses that it consists of and how they relate to\neach other, and on the different specializations available to\nthem. Since, for most IT majors, IT 1130 is the first course in\nthe program, it is the logical place to meet this aim. Clearly, a\nfull discussion of the structure of the program covers more\nthan just data management (ITF1.10), a broad overview of IT\napplications (ITF2.1) and their development (ITF2.2),\nsystems analysis (ITF2.3), project management (ITF2.4), and\nIT integration (ITF2.5). This explains why IT 1130 devotes\n1.5 more hours than the recommended minimum 6.\nAnother important element of the IT program at Georgia\nSouthern is the inclusion of second disciplines. One of the\nexplicit program outcomes of the BS in IT program at\nGeorgia Southern is that, on graduation, graduates will be\nable \"to demonstrate sufficient understanding of an\napplication domain to be able to develop IT applications\nsuitable for that application domain.\" This outcome was\nincluded at the recommendation of industry representatives\nwho were consulted when the IT program was designed ([5]).\nFor students to develop this ability, they must be exposed to\nan IT application domain, and the BS IT program at Georgia\nSouthern therefore contains so-called second disciplines.\nSecond disciplines are coherent sets of 7 3-credit courses in\npotential IT application domains, such as electronic\nbroadcasting, law enforcement, music technology, or supply\nchain management. Students typically start taking courses in\ntheir second discipline early in their program of study (the\nstandard program of study suggests that students take their\nfirst second discipline course in the first semester of their\nsophomore year). 
It is therefore important that students be\nexposed to the different second disciplines available to them\nearly, and IT 1130 is the logical place to do so. One fortunate\nside effect of the need to introduce a second discipline is that\nit gives the program an excellent opportunity to make\nstudents aware of the broad range of areas in which IT can be\napplied and, hence, cover ITF5 (application domains);\nhowever, since the number of second disciplines is large\n(currently, 26), adequate coverage requires 4.5 hours more\nthan the minimum recommend coverage for ITF 5\n(application domains)\n3. One lecture hour is missing in ITF3 (history of IT) due to\nlack of coverage in the IT 1130 course of significant changes\nin HCI (ITF3.3). Some material relevant to this topic is\nintroduced in other courses that students tend to take early in\ntheir program of study, such as the Introductory Java course\nand the introductory web design course. For example, the\nintroductory web design course includes among its course\nobjectives that students develop the ability to design Web\npages in accordance with good design principles using\nappropriate styles and formats and the ability to design Web\npages that are ADA compliant. Material relevant to both\nobjectives allows us to expand on HCI design principles and\nplace these in a historical context. Moreover, students are\nadvised to take the introductory web design course in the\nsemester following the one in which they take IT 1130, and\nthey are therefore likely to be exposed to material relevant to\nITF3.3 early in their program of study.\n4. The final discrepancy lies in the coverage of the learning\noutcomes corresponding to the ITF6 (application of math and\nstatistics to IT) in the IT 1130 course; however, the material\nrelated to this knowledge unit is covered in two courses that\nstudents are again advised to take early in their program of\nstudy. One course is a course in discrete mathematics,\ndesigned specifically for IT majors. It includes among its\ncourse objectives the ability to explain the importance of\ndiscrete mathematics in computer science and information\ntechnology and provides in-depth coverage of functions, sets,\nbasic propositional logic, and algorithm design. Finally, all\nstudents enrolled in the IT major take a statistics course,\nwhich covers probability.\n3.4 Support Material\nSince the ITF knowledge area is relatively new, no single\ntextbook covers all relevant material. We therefore use a variety of\nsources to support the course.\nFirst, we use Excel 2003 ([8]) and Access 2003 ([7]) to support\nthe teaching of spreadsheets and SQL (IT 1130 course outcomes\n2i-2iii identified in section 3.1).\nSecond, to support the teaching of Topics 3 (components of IT\nsystems), 4 (core technologies) and 7 (history of IT), we use\nDiscovering Computers 2005 ([9]). While the textbook provides a\nreasonable coverage of some of the subtopics discussed, it does\nnot sufficiently stress the importance of the users and the\nimportance of HCI in systems development, and we, therefore,\nemphasize this issue throughout the course. We discussed the way\nin which we cover these topics in Points 2 and 3 in section 3.3.\nThird, for topics 6 (Application Domains), 8 (Viruses, Crime,\nLaw, Ethics, Privacy and Security) and 9 (IT as a profession), we\nuse Computers in Our World ([3]); however, we do not rely solely\non the textbook for our coverage of topic 6. 
Again, we discussed\nthis in Point 2 in section 3.3.\nFinally, to support Topics 1 (define IT), 2 (data and information),\nand 5 (IT and its related disciplines), students are given material\nwritten specifically for the course. Also, we invite representatives\nfrom computer science and information systems to lecture on their\nspecific disciplines and follow this up with a lecture on computer\nengineering and a discussion on the relationship between all four\ndisciplines.\nTable 3 lists the core learning outcomes for each of the ITF\nknowledge units and maps them to the material in the IT 1130\ncourse used to achieve that outcome. The material comes either\nfrom Discovering Computers 2005 ([9]) (DC), Computers in Our\nWorld ([3]) (CIOW), or material written specifically for the\ncourse (supplemental material) and/or lectures/discussions led by\nfaculty members from other related departments.\n\n4\nTABLE 3: Course Materials Used in IT 1130 to Achieve ITF\nLearning Outcomes\nITF\nKnowledge\nUnits\nLearning\nOutcomes\nMaterial\n1\nDC Chapters 3-9\n2-4 Not\ncovered\n5-7\nDC Chapters 12 & 15, CIOW\nChapters 8 & 9,\nSupplemental Materials\n8.9\nDC Chapter 10,\nSupplemental Materials\n10\nDC Chapters 2, 9, 10,12, 13,\nSupplemental Materials\nITF 1\n11\nCIOW Chapters 7-9\nITF2\n1-5\nDC Chapters 2, 9, 10, 12, 13.\nSupplemental Materials\n1\nDC Timeline between\nChapters 1 and 2, Chapter 2\n2\nCIOW Chapters 1 - 9\n3 Not\ncovered\nITF3\n4\nDC Timeline between\nChapters 1 and 2, Chapter 2\nITF 4\n1-4\nSupplemental Materials,\nLecture and Class Discussion\nled by CS, IS and IT\nrepresentatives\nITF5\n1-4\nCIOW Chapters 1 9\nITF6 1-7\nNot\ncovered\n*Discovering Computers = DC, Computers in Our World =\nCIOW\nCONCLUSIONS\nThe IT Fundamentals knowledge area in the IT model curriculum\nis of central importance to the design of an introductory IT course;\nhowever, since institutions will have to include additional\nmaterials in their introductory IT courses, depending on the nature\nof their program, the minimum requirement of 33 lecture hours to\ncover this material is likely to lead to problems. This paper\npresents the experience with an introductory IT course at Georgia\nSouthern University, IT1130. In general, we believe that, despite\nthe need to include additional material in IT1130, we are able to\ncover most of the knowledge units in the IT fundamentals\nknowledge area. We are confident that those knowledge units not\ncovered in IT1130 are covered in other courses that students are\nadvised to take early in their programs of study. Finally, despite\nthe fact that the IT fundamentals knowledge area is new and that\nno textbooks cover all the knowledge units within the area, we\nhave been able to identify a set of textbooks that, jointly, cover\nmost of the material; however, we provide a relatively small\namount of additional material, and the textbooks we identified do\nnot always cover the material at the appropriate level. Therefore,\nsupport materials specifically for the IT fundamentals knowledge\narea need to be developed. Whether this is best provided in the\nform of a textbook, or, more dynamically, as a set of online\nlearning objects ([1], [2]) is a question open to debate.\n\nREFERENCES\n[1] Abernethy, K., Treu, K, Piegari, G, Reichgelt. H. \"An\nimplementation model for a learning object repository\",\nOctober 2005, E-learn 2005 World Conference on E-learning\nin corporate, government, healthcare and higher\neducation. 
Vancouver, Canada.\n[2] Abernethy, K., Treu, K, Piegari, G, Reichgelt. H. \"A learning\nobject repository in support of introduction to information\ntechnology\", August 2005, 6\nth\nAnnual Conference for the\nHigher Education Academy Subject Network for Information\nand Computer Science, York, England.\n[3] Jedlicka, L. Computers in Our World. Thompson Course\nTechnologies, 2003.\n[4] Lawson, E, Reichgelt, H, Lunt, B. Ekstrom, J, Kamali, R.\nMiller, J and Gorka, S, The Information Technology Model\nCurriculum. Paper submitted to ISECON 2005.\n[5] Reichgelt, H., Price, B. and Zhang, A., \"Designing an\nInformation Technology curriculum: The Georgia Southern\nexperience\", Journal of Information Technology Education\n2002, Vol. 1, No. 4, 213-221\n[6] Reichgelt, H., Price, B. and Zhang, A., The Inclusion of\nApplication Areas in IT Curricula, SIGITE--3, Rochester,\nNY, ACM-SIGITE (formerly SITE), September 2002\n[7] Shelley, G., Cashman, T., Pratt, P. and Last, M. Microsoft\nOffice Access 2003. Thompson Course Technologies, 2004.\n[8] Shelley, G., Cashman, T., Quasney, J. Microsoft Office Excel\n2003. Thompson Course Technologies, 2004.\n[9] Shelley, G., Vermaat, M. and Cashman, T. Discovering\nComputers 2005: A Gateway to Information. Thompson\nCourse Technologies, 2005.\n\n\n5", "keywords": "IT Fundamentals Knowledge Area;IT Model Curriculum"} {"name": "105", "title": "Implicit User Modeling for Personalized Search", "abstract": "Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word \"java\" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search . We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine.", "fulltext": "INTRODUCTION\nAlthough many information retrieval systems (e.g., web search\nengines and digital library systems) have been successfully deployed,\nthe current retrieval systems are far from optimal. A major deficiency\nof existing retrieval systems is that they generally lack user\nmodeling and are not adaptive to individual users [17]. This inherent\nnon-optimality is seen clearly in the following two cases:\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. 
To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nCIKM'05, October 31November 5, 2005, Bremen, Germany.\nCopyright 2005 ACM 1-59593-140-6/05/0010 ...\n$\n5.00.\n(1) Different users may use exactly the same query (e.g., \"Java\") to\nsearch for different information (e.g., the Java island in Indonesia or\nthe Java programming language), but existing IR systems return the\nsame results for these users. Without considering the actual user, it\nis impossible to know which sense \"Java\" refers to in a query. (2)\nA user's information needs may change over time. The same user\nmay use \"Java\" sometimes to mean the Java island in Indonesia\nand some other times to mean the programming language. Without\nrecognizing the search context, it would be again impossible to\nrecognize the correct sense.\nIn order to optimize retrieval accuracy, we clearly need to model\nthe user appropriately and personalize search according to each individual\nuser. The major goal of user modeling for information\nretrieval is to accurately model a user's information need, which is,\nunfortunately, a very difficult task. Indeed, it is even hard for a user\nto precisely describe what his/her information need is.\nWhat information is available for a system to infer a user's information\nneed? Obviously, the user's query provides the most direct\nevidence. Indeed, most existing retrieval systems rely solely on\nthe query to model a user's information need. However, since a\nquery is often extremely short, the user model constructed based\non a keyword query is inevitably impoverished . An effective way\nto improve user modeling in information retrieval is to ask the user\nto explicitly specify which documents are relevant (i.e., useful for\nsatisfying his/her information need), and then to improve user modeling\nbased on such examples of relevant documents. This is called\nrelevance feedback, which has been proved to be quite effective for\nimproving retrieval accuracy [19, 20]. Unfortunately, in real world\napplications, users are usually reluctant to make the extra effort to\nprovide relevant examples for feedback [11].\nIt is thus very interesting to study how to infer a user's information\nneed based on any implicit feedback information, which\nnaturally exists through user interactions and thus does not require\nany extra user effort. Indeed, several previous studies have shown\nthat implicit user modeling can improve retrieval accuracy. In [3],\na web browser (Curious Browser) is developed to record a user's\nexplicit relevance ratings of web pages (relevance feedback) and\nbrowsing behavior when viewing a page, such as dwelling time,\nmouse click, mouse movement and scrolling (implicit feedback).\nIt is shown that the dwelling time on a page, amount of scrolling\non a page and the combination of time and scrolling have a strong\ncorrelation with explicit relevance ratings, which suggests that implicit\nfeedback may be helpful for inferring user information need.\nIn [10], user clickthrough data is collected as training data to learn\na retrieval function, which is used to produce a customized ranking\nof search results that suits a group of users' preferences. 
In [25],\nthe clickthrough data collected over a long time period is exploited\nthrough query expansion to improve retrieval accuracy.\n824\nWhile a user may have general long term interests and preferences\nfor information, often he/she is searching for documents to\nsatisfy an \"ad hoc\" information need, which only lasts for a short\nperiod of time; once the information need is satisfied, the user\nwould generally no longer be interested in such information. For\nexample, a user may be looking for information about used cars\nin order to buy one, but once the user has bought a car, he/she is\ngenerally no longer interested in such information. In such cases,\nimplicit feedback information collected over a long period of time\nis unlikely to be very useful, but the immediate search context and\nfeedback information, such as which of the search results for the\ncurrent information need are viewed, can be expected to be much\nmore useful. Consider the query \"Java\" again. Any of the following\nimmediate feedback information about the user could potentially\nhelp determine the intended meaning of \"Java\" in the query:\n(1) The previous query submitted by the user is \"hashtable\" (as opposed\nto, e.g., \"travel Indonesia\"). (2) In the search results, the user\nviewed a page where words such as \"programming\", \"software\",\nand \"applet\" occur many times.\nTo the best of our knowledge, how to exploit such immediate\nand short-term search context to improve search has so far not been\nwell addressed in the previous work. In this paper, we study how to\nconstruct and update a user model based on the immediate search\ncontext and implicit feedback information and use the model to improve\nthe accuracy of ad hoc retrieval. In order to maximally benefit\nthe user of a retrieval system through implicit user modeling,\nwe propose to perform \"eager implicit feedback\". That is, as soon\nas we observe any new piece of evidence from the user, we would\nupdate the system's belief about the user's information need and\nrespond with improved retrieval results based on the updated user\nmodel. We present a decision-theoretic framework for optimizing\ninteractive information retrieval based on eager user model updating\n, in which the system responds to every action of the user by\nchoosing a system action to optimize a utility function. In a traditional\nretrieval paradigm, the retrieval problem is to match a query\nwith documents and rank documents according to their relevance\nvalues. As a result, the retrieval process is a simple independent\ncycle of \"query\" and \"result display\". In the proposed new retrieval\nparadigm, the user's search context plays an important role and the\ninferred implicit user model is exploited immediately to benefit the\nuser. The new retrieval paradigm is thus fundamentally different\nfrom the traditional paradigm, and is inherently more general.\nWe further propose specific techniques to capture and exploit two\ntypes of implicit feedback information: (1) identifying related immediately\npreceding query and using the query and the corresponding\nsearch results to select appropriate terms to expand the current\nquery, and (2) exploiting the viewed document summaries to immediately\nrerank any documents that have not yet been seen by the\nuser. Using these techniques, we develop a client-side web search\nagent UCAIR (User-Centered Adaptive Information Retrieval) on\ntop of a popular search engine (Google). 
Experiments on web\nsearch show that our search agent can improve search accuracy over\nGoogle. Since the implicit information we exploit already naturally\nexists through user interactions, the user does not need to make any\nextra effort. Thus the developed search agent can improve existing\nweb search performance without additional effort from the user.\nThe remaining sections are organized as follows. In Section 2,\nwe discuss the related work. In Section 3, we present a decision-theoretic\ninteractive retrieval framework for implicit user modeling.\nIn Section 4, we present the design and implementation of an intelligent\nclient-side web search agent (UCAIR) that performs eager\nimplicit feedback. In Section 5, we report our experiment results\nusing the search agent. Section 6 concludes our work.\nRELATED WORK\nImplicit user modeling for personalized search has been studied\nin previous work, but our work differs from all previous work\nin several aspects: (1) We emphasize the exploitation of immediate\nsearch context such as the related immediately preceding query\nand the viewed documents in the same session, while most previous\nwork relies on long-term collection of implicit feedback information\n[25]. (2) We perform eager feedback and bring the benefit of\nimplicit user modeling as soon as any new implicit feedback information\nis available, while the previous work mostly exploits long-term\nimplicit feedback [10]. (3) We propose a retrieval framework\nto integrate implicit user modeling with the interactive retrieval process\n, while the previous work either studies implicit user modeling\nseparately from retrieval [3] or only studies specific retrieval models\nfor exploiting implicit feedback to better match a query with\ndocuments [23, 27, 22]. (4) We develop and evaluate a personalized\nWeb search agent with online user studies, while most existing\nwork evaluates algorithms offline without real user interactions.\nCurrently some search engines provide rudimentary personalization\n, such as Google Personalized web search [6], which allows\nusers to explicitly describe their interests by selecting from predefined\ntopics, so that those results that match their interests are\nbrought to the top, and My Yahoo! search [16], which gives users\nthe option to save web sites they like and block those they dislike\n. In contrast, UCAIR personalizes web search through implicit\nuser modeling without any additional user efforts. Furthermore, the\npersonalization of UCAIR is provided on the client side. There are\ntwo remarkable advantages on this. First, the user does not need to\nworry about the privacy infringement, which is a big concern for\npersonalized search [26]. Second, both the computation of personalization\nand the storage of the user profile are done at the client\nside so that the server load is reduced dramatically [9].\nThere have been many works studying user query logs [1] or\nquery dynamics [13]. UCAIR makes direct use of a user's query\nhistory to benefit the same user immediately in the same search\nsession. UCAIR first judges whether two neighboring queries belong\nto the same information session and if so, it selects terms from\nthe previous query to perform query expansion.\nOur query expansion approach is similar to automatic query expansion\n[28, 15, 5], but instead of using pseudo feedback to expand\nthe query, we use user's implicit feedback information to expand\nthe current query. 
These two techniques may be combined.\nOPTIMIZATION IN INTERACTIVE IR\nIn interactive IR, a user interacts with the retrieval system through\nan \"action dialogue\", in which the system responds to each user action\nwith some system action. For example, the user's action may\nbe submitting a query and the system's response may be returning\na list of 10 document summaries. In general, the space of user actions\nand system responses and their granularities would depend on\nthe interface of a particular retrieval system.\nIn principle, every action of the user can potentially provide new\nevidence to help the system better infer the user's information need.\nThus in order to respond optimally, the system should use all the\nevidence collected so far about the user when choosing a response.\nWhen viewed in this way, most existing search engines are clearly\nnon-optimal. For example, if a user has viewed some documents on\nthe first page of search results, when the user clicks on the \"Next\"\nlink to fetch more results, an existing retrieval system would still\nreturn the next page of results retrieved based on the original query\nwithout considering the new evidence that a particular result has\nbeen viewed by the user.\n825\nWe propose to optimize retrieval performance by adapting system\nresponses based on every action that a user has taken, and cast\nthe optimization problem as a decision task. Specifically, at any\ntime, the system would attempt to do two tasks: (1) User model\nupdating: Monitor any useful evidence from the user regarding\nhis/her information need and update the user model as soon as such\nevidence is available; (2) Improving search results: Rerank immediately\nall the documents that the user has not yet seen, as soon\nas the user model is updated. We emphasize eager updating and\nreranking, which makes our work quite different from any existing\nwork. Below we present a formal decision theoretic framework for\noptimizing retrieval performance through implicit user modeling in\ninteractive information retrieval.\n3.1\nA decision-theoretic framework\nLet A be the set of all user actions and R(a) be the set of all\npossible system responses to a user action a A. At any time, let\nA\nt\n= (a\n1\n, ..., a\nt\n) be the observed sequence of user actions so far\n(up to time point t) and R\nt-1\n= (r\n1\n, ..., r\nt-1\n) be the responses that\nthe system has made responding to the user actions. The system's\ngoal is to choose an optimal response r\nt\nR(a\nt\n) for the current\nuser action a\nt\n.\nLet M be the space of all possible user models. We further define\na loss function L(a, r, m)\n, where a A is a user action,\nr R(a) is a system response, and m M is a user model.\nL(a, r, m) encodes our decision preferences and assesses the optimality\nof responding with r when the current user model is m\nand the current user action is a. 
According to Bayesian decision theory, the optimal decision at time t is to choose a response that minimizes the Bayes risk, i.e.,

r^*_t = \arg\min_{r \in R(a_t)} \int_M L(a_t, r, m_t) \, P(m_t \mid U, D, A_t, R_{t-1}) \, dm_t    (1)

where P(m_t | U, D, A_t, R_{t-1}) is the posterior probability of the user model m_t given all the observations about the user U we have made up to time t.

To simplify the computation of Equation 1, let us assume that the posterior probability mass P(m_t | U, D, A_t, R_{t-1}) is mostly concentrated on the mode m^*_t = \arg\max_{m_t} P(m_t | U, D, A_t, R_{t-1}). We can then approximate the integral with the value of the loss function at m^*_t. That is,

r^*_t \approx \arg\min_{r \in R(a_t)} L(a_t, r, m^*_t)    (2)

where m^*_t = \arg\max_{m_t} P(m_t | U, D, A_t, R_{t-1}).

Leaving aside how to define and estimate these probabilistic models and the loss function, we can see that such a decision-theoretic formulation suggests that, in order to choose the optimal response to a_t, the system should perform two tasks: (1) compute the current user model and obtain m^*_t based on all the useful information; (2) choose a response r_t that minimizes the loss function value L(a_t, r_t, m^*_t). When a_t does not affect our belief about m^*_t, the first step can be omitted and we may reuse m^*_{t-1} for m^*_t.

Note that our framework is quite general since we can potentially model any kind of user actions and system responses. In most cases, as we may expect, the system's response is some ranking of documents, i.e., for most actions a, R(a) consists of all the possible rankings of the unseen documents, and the decision problem boils down to choosing the best ranking of unseen documents based on the most current user model. When a is the action of submitting a keyword query, such a response is exactly what a current retrieval system would do. However, we can easily imagine that a more intelligent web search engine would respond to a user's clicking of the "Next" link (to fetch more unseen results) with a more optimized ranking of documents based on any viewed documents in the current page of results. In fact, according to our eager updating strategy, we may even allow a system to respond in the same way to a user's clicking of the browser's "Back" button after viewing a document, so that the user can maximally benefit from implicit feedback. These are precisely the behaviors our UCAIR system implements.

3.2 User models

A user model m ∈ M represents what we know about the user U, so in principle it can contain any information about the user that we wish to model. We now discuss two important components of a user model.

The first component is a model of the user's information need. Presumably, the most important factor affecting the optimality of the system's response is how well the response addresses the user's information need. Indeed, at any time, we may assume that the system has some "belief" about what the user is interested in, which we model through a term vector x = (x_1, ..., x_{|V|}), where V = {w_1, ..., w_{|V|}} is the set of all terms (i.e., the vocabulary) and x_i is the weight of term w_i. Such a term vector is commonly used in information retrieval to represent both queries and documents.
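To make Equation 2 concrete, the following minimal sketch separates the two tasks just described: estimate the most likely user model m^*_t from the interaction history, then pick the response that minimizes the loss under that model. The function and argument names here are hypothetical placeholders, not part of the paper.

```python
def choose_response(action, history, candidate_responses, estimate_user_model, loss):
    """Mode-approximated Bayes decision (Equation 2).

    action              -- the current user action a_t
    history             -- the interaction history (A_t, R_{t-1})
    candidate_responses -- the possible system responses R(a_t)
    estimate_user_model -- callable returning m*_t, the most likely user model
    loss                -- callable implementing L(a, r, m)
    """
    # Task 1: update the belief about the user's information need.
    m_star = estimate_user_model(action, history)
    # Task 2: respond so as to minimize the loss under the estimated model.
    return min(candidate_responses, key=lambda r: loss(action, r, m_star))
```

Since for most actions the response space R(a_t) consists of rankings of the unseen documents, in practice the arg min is realized by scoring and sorting documents rather than by enumerating candidate rankings explicitly.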
For example, the vector-space model, assumes that both\nthe query and the documents are represented as term vectors and\nthe score of a document with respect to a query is computed based\non the similarity between the query vector and the document vector\n[21]. In a language modeling approach, we may also regard\nthe query unigram language model [12, 29] or the relevance model\n[14] as a term vector representation of the user's information need.\nIntuitively, x would assign high weights to terms that characterize\nthe topics which the user is interested in.\nThe second component we may include in our user model is the\ndocuments that the user has already viewed. Obviously, even if a\ndocument is relevant, if the user has already seen the document, it\nwould not be useful to present the same document again. We thus\nintroduce another variable S D (D is the whole set of documents\nin the collection) to denote the subset of documents in the search\nresults that the user has already seen/viewed.\nIn general, at time t, we may represent a user model as m\nt\n=\n(S, x, A\nt\n, R\nt-1\n), where S is the seen documents, x is the system's\n\"understanding\" of the user's information need, and (A\nt\n, R\nt-1\n)\nrepresents the user's interaction history. Note that an even more\ngeneral user model may also include other factors such as the user's\nreading level and occupation.\nIf we assume that the uncertainty of a user model m\nt\nis solely\ndue to the uncertainty of x, the computation of our current estimate\nof user model m\n\nt\nwill mainly involve computing our best estimate\nof x. That is, the system would choose a response according to\nr\n\nt\n= argmin\nrR(a\nt\n)\nL(a\nt\n, r, S, x\n\n, A\nt\n, R\nt-1\n)\n(3)\nwhere x\n\n= argmax\nx\nP (x|U, D, A\nt\n, R\nt-1\n). This is the decision\nmechanism implemented in the UCAIR system to be described\nlater. In this system, we avoided specifying the probabilistic model\nP (x|U, D, A\nt\n, R\nt-1\n) by computing x\n\ndirectly with some existing\nfeedback method.\n3.3\nLoss functions\nThe exact definition of loss function L depends on the responses,\nthus it is inevitably application-specific. We now briefly discuss\nsome possibilities when the response is to rank all the unseen documents\nand present the top k of them. Let r = (d\n1\n, ..., d\nk\n) be the\ntop k documents, S be the set of seen documents by the user, and\nx\n\nbe the system's best guess of the user's information need. We\n826\nmay simply define the loss associated with r as the negative sum\nof the probability that each of the d\ni\nis relevant, i.e., L(a, r, m) =\nk\ni=1\nP (relevant|d\ni\n, m). Clearly, in order to minimize this\nloss function, the optimal response r would contain the k documents\nwith the highest probability of relevance, which is intuitively\nreasonable.\nOne deficiency of this \"top-k loss function\" is that it is not sensitive\nto the internal order of the selected top k documents, so switching\nthe ranking order of a non-relevant document and a relevant one\nwould not affect the loss, which is unreasonable. 
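A two-line check makes this order-insensitivity concrete; the relevance probabilities below are made-up values used purely for illustration.

```python
def top_k_loss(relevance_probs):
    # Negative sum of P(relevant | d_i, m) over the k returned documents.
    return -sum(relevance_probs)

ranking_a = [0.9, 0.1, 0.8]   # hypothetical P(relevant | d_i, m): relevant document ranked first
ranking_b = [0.1, 0.9, 0.8]   # the same documents with the first two positions swapped
assert top_k_loss(ranking_a) == top_k_loss(ranking_b)   # the loss ignores the internal order
```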
To model ranking\n, we can introduce a factor of the user model the probability\nof each of the k documents being viewed by the user, P (view|d\ni\n),\nand define the following \"ranking loss function\":\nL(a, r, m) = k\ni=1\nP (view|d\ni\n)P (relevant|d\ni\n, m)\nSince in general, if d\ni\nis ranked above d\nj\n(i.e., i < j), P (view|d\ni\n) >\nP (view|d\nj\n), this loss function would favor a decision to rank relevant\ndocuments above non-relevant ones, as otherwise, we could\nalways switch d\ni\nwith d\nj\nto reduce the loss value. Thus the system\nshould simply perform a regular retrieval and rank documents\naccording to the probability of relevance [18].\nDepending on the user's retrieval preferences, there can be many\nother possibilities. For example, if the user does not want to see\nredundant documents, the loss function should include some redundancy\nmeasure on r based on the already seen documents S.\nOf course, when the response is not to choose a ranked list of\ndocuments, we would need a different loss function. We discuss\none such example that is relevant to the search agent that we implement\n. When a user enters a query q\nt\n(current action), our search\nagent relies on some existing search engine to actually carry out\nsearch. In such a case, even though the search agent does not have\ncontrol of the retrieval algorithm, it can still attempt to optimize the\nsearch results through refining the query sent to the search engine\nand/or reranking the results obtained from the search engine. The\nloss functions for reranking are already discussed above; we now\ntake a look at the loss functions for query refinement.\nLet f be the retrieval function of the search engine that our agent\nuses so that f (q) would give us the search results using query q.\nGiven that the current action of the user is entering a query q\nt\n(i.e.,\na\nt\n= q\nt\n), our response would be f (q) for some q. Since we have\nno choice of f , our decision is to choose a good q. Formally,\nr\n\nt\n= argmin\nr\nt\nL(a, r\nt\n, m)\n= argmin\nf (q)\nL(a, f (q), m)\n= f (argmin\nq\nL(q\nt\n, f (q), m))\nwhich shows that our goal is to find q\n\n= argmin\nq\nL(q\nt\n, f (q), m),\ni.e., an optimal query that would give us the best f (q). A different\nchoice of loss function L(q\nt\n, f (q), m) would lead to a different\nquery refinement strategy. In UCAIR, we heuristically compute q\n\nby expanding q\nt\nwith terms extracted from r\nt-1\nwhenever q\nt-1\nand\nq\nt\nhave high similarity. Note that r\nt-1\nand q\nt-1\nare contained in\nm as part of the user's interaction history.\n3.4\nImplicit user modeling\nImplicit user modeling is captured in our framework through\nthe computation of x\n\n= argmax\nx\nP (x|U, D, A\nt\n, R\nt-1\n), i.e., the\nsystem's current belief of what the user's information need is. Here\nagain there may be many possibilities, leading to different algorithms\nfor implicit user modeling. We now discuss a few of them.\nFirst, when two consecutive queries are related, the previous\nquery can be exploited to enrich the current query and provide more\nsearch context to help disambiguation. For this purpose, instead of\nperforming query expansion as we did in the previous section, we\ncould also compute an updated x\n\nbased on the previous query and\nretrieval results. The computed new user model can then be used to\nrank the documents with a standard information retrieval model.\nSecond, we can also infer a user's interest based on the summaries\nof the viewed documents. 
When a user is presented with a\nlist of summaries of top ranked documents, if the user chooses to\nskip the first n documents and to view the (n + 1)-th document, we\nmay infer that the user is not interested in the displayed summaries\nfor the first n documents, but is attracted by the displayed summary\nof the (n + 1)-th document. We can thus use these summaries as\nnegative and positive examples to learn a more accurate user model\nx\n\n. Here many standard relevance feedback techniques can be exploited\n[19, 20]. Note that we should use the displayed summaries,\nas opposed to the actual contents of those documents, since it is\npossible that the displayed summary of the viewed document is\nrelevant, but the document content is actually not. Similarly, a displayed\nsummary may mislead a user to skip a relevant document.\nInferring user models based on such displayed information, rather\nthan the actual content of a document is an important difference\nbetween UCAIR and some other similar systems.\nIn UCAIR, both of these strategies for inferring an implicit user\nmodel are implemented.\nUCAIR A PERSONALIZED SEARCH AGENT\nIn this section, we present a client-side web search agent called\nUCAIR, in which we implement some of the methods discussed\nin the previous section for performing personalized search through\nimplicit user modeling. UCAIR is a web browser plug-in\n1\nthat\nacts as a proxy for web search engines. Currently, it is only implemented\nfor Internet Explorer and Google, but it is a matter of\nengineering to make it run on other web browsers and interact with\nother search engines.\nThe issue of privacy is a primary obstacle for deploying any real\nworld applications involving serious user modeling, such as personalized\nsearch. For this reason, UCAIR is strictly running as\na client-side search agent, as opposed to a server-side application.\nThis way, the captured user information always resides on the computer\nthat the user is using, thus the user does not need to release\nany information to the outside. Client-side personalization also allows\nthe system to easily observe a lot of user information that may\nnot be easily available to a server. Furthermore, performing personalized\nsearch on the client-side is more scalable than on the serverside\n, since the overhead of computation and storage is distributed\namong clients.\nAs shown in Figure 1, the UCAIR toolbar has 3 major components\n: (1) The (implicit) user modeling module captures a user's\nsearch context and history information, including the submitted\nqueries and any clicked search results and infers search session\nboundaries. (2) The query modification module selectively improves\nthe query formulation according to the current user model.\n(3) The result re-ranking module immediately re-ranks any unseen\nsearch results whenever the user model is updated.\nIn UCAIR, we consider four basic user actions: (1) submitting a\nkeyword query; (2) viewing a document; (3) clicking the \"Back\"\nbutton; (4) clicking the \"Next\" link on a result page. 
For each\nof these four actions, the system responds with, respectively, (1)\n1\nUCAIR is available at: http://sifaka.cs.uiuc.edu/ir/ucair/download.html\n827\n\nSearch\nEngine\n(e.g.,\nGoogle)\nSearch History Log\n(e.g.,past queries,\nclicked results)\nQuery\nModification\nResult\nRe-Ranking\nUser\nModeling\nResult Buffer\nUCAIR\nUser\nquery\nresults\nclickthrough...\nFigure 1: UCAIR architecture\ngenerating a ranked list of results by sending a possibly expanded\nquery to a search engine; (2) updating the information need model\nx; (3) reranking the unseen results on the current result page based\non the current model x; and (4) reranking the unseen pages and\ngenerating the next page of results based on the current model x.\nBehind these responses, there are three basic tasks: (1) Decide\nwhether the previous query is related to the current query and if so\nexpand the current query with useful terms from the previous query\nor the results of the previous query. (2) Update the information\nneed model x based on a newly clicked document summary. (3)\nRerank a set of unseen documents based on the current model x.\nBelow we describe our algorithms for each of them.\n4.2\nSession boundary detection and query expansion\nTo effectively exploit previous queries and their corresponding\nclickthrough information, UCAIR needs to judge whether two adjacent\nqueries belong to the same search session (i.e., detect session\nboundaries). Existing work on session boundary detection is\nmostly in the context of web log analysis (e.g., [8]), and uses statistical\ninformation rather than textual features. Since our client-side\nagent does not have access to server query logs, we make session\nboundary decisions based on textual similarity between two\nqueries. Because related queries do not necessarily share the same\nwords (e.g., \"java island\" and \"travel Indonesia\"), it is insufficient\nto use only query text. Therefore we use the search results of the\ntwo queries to help decide whether they are topically related. For\nexample, for the above queries \"java island\" and \"travel Indone-sia\"'\n, the words \"java\", \"bali\", \"island\", \"indonesia\" and \"travel\"\nmay occur frequently in both queries' search results, yielding a high\nsimilarity score.\nWe only use the titles and summaries of the search results to calculate\nthe similarity since they are available in the retrieved search\nresult page and fetching the full text of every result page would sig-nificantly\nslow down the process. To compensate for the terseness\nof titles and summaries, we retrieve more results than a user would\nnormally view for the purpose of detecting session boundaries (typ-ically\n50 results).\nThe similarity between the previous query q and the current\nquery q is computed as follows. Let {s\n1\n, s\n2\n, . . . , s\nn\n} and\n{s\n1\n, s\n2\n, . . . , s\nn\n} be the result sets for the two queries.\nWe use\nthe pivoted normalization TF-IDF weighting formula [24] to compute\na term weight vector s\ni\nfor each result s\ni\n. We define the average\nresult s\navg\nto be the centroid of all the result vectors, i.e.,\n(s\n1\n+ s\n2\n+ . . . + s\nn\n)/n. 
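Concretely, the session-boundary test of this subsection can be sketched as below. This is only a sketch: the whitespace tokenizer, the plain TF-IDF weights standing in for the pivoted-normalization formula of [24], and the default threshold value are assumptions rather than UCAIR's exact settings.

```python
import math
from collections import Counter

def tfidf_vectors(result_texts):
    """One term-weight vector per result (title plus summary text)."""
    docs = [Counter(text.lower().split()) for text in result_texts]
    n = len(docs)
    df = Counter(term for doc in docs for term in doc)
    return [{t: tf * math.log(1.0 + n / df[t]) for t, tf in doc.items()} for doc in docs]

def centroid(vectors):
    """Average result vector s_avg."""
    total = {}
    for vec in vectors:
        for term, w in vec.items():
            total[term] = total.get(term, 0.0) + w / len(vectors)
    return total

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def same_session(prev_results, curr_results, threshold=0.5):
    """Treat two queries as one session if their average results are similar;
    the 0.5 threshold is an assumed value, not the one used in the paper."""
    return cosine(centroid(tfidf_vectors(prev_results)),
                  centroid(tfidf_vectors(curr_results))) >= threshold
```

The outcome of this test gates the query-expansion step described next.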
The cosine similarity between the two average results is calculated as

\cos(s_{avg}, s'_{avg}) = \frac{s_{avg} \cdot s'_{avg}}{\|s_{avg}\| \, \|s'_{avg}\|}

If the similarity value exceeds a predefined threshold, the two queries will be considered to be in the same information session.

If the previous query and the current query are found to belong to the same search session, UCAIR would attempt to expand the current query with terms from the previous query and its search results. Specifically, for each term in the previous query or the corresponding search results, if its frequency in the results of the current query is greater than a preset threshold (e.g., 5 results out of 50), the term would be added to the current query to form an expanded query. In this case, UCAIR would send this expanded query rather than the original one to the search engine and return the results corresponding to the expanded query. Currently, UCAIR only uses the immediately preceding query for query expansion; in principle, we could exploit all related past queries.

4.3 Information need model updating

Suppose at time t, we have observed that the user has viewed k documents whose summaries are s_1, ..., s_k. We update our user model by computing a new information need vector with a standard feedback method in information retrieval (i.e., Rocchio [19]). According to the vector space retrieval model, each clicked summary s_i can be represented by a term weight vector s_i with each term weighted by a TF-IDF weighting formula [21]. Rocchio computes the centroid vector of all the summaries and interpolates it with the original query vector to obtain an updated term vector. That is,

x = \alpha q + (1 - \alpha) \frac{1}{k} \sum_{i=1}^{k} s_i

where q is the query vector, k is the number of summaries the user clicks immediately following the current query, and α is a parameter that controls the influence of the clicked summaries on the inferred information need model. In our experiments, α is set to 0.5. Note that we update the information need model whenever the user views a document.

4.4 Result reranking

In general, we want to rerank all the unseen results as soon as the user model is updated. Currently, UCAIR implements reranking in two cases, corresponding to the user clicking the "Back" button and the "Next" link in Internet Explorer. In both cases, the current (updated) user model is used to rerank the unseen results so that the user sees improved search results immediately.

To rerank any unseen document summaries, UCAIR uses the standard vector space retrieval model and scores each summary based on the similarity of the result and the current user information need vector x [21]. Since implicit feedback is not completely reliable, we bring up only a small number (e.g.
5) of highest reranked\nresults to be followed by any originally high ranked results.\n828\nGoogle result (user query = \"java map\")\nUCAIR result (user query =\"java map\")\nprevious query = \"travel Indonesia\"\nprevious query = \"hashtable\"\nexpanded user query = \"java map Indonesia\"\nexpanded user query = \"java map class\"\n1\nJava map projections of the world ...\nLonely Planet - Indonesia Map\nMap (Java 2 Platform SE v1.4.2)\nwww.btinternet.com/ se16/js/mapproj.htm\nwww.lonelyplanet.com/mapshells/...\njava.sun.com/j2se/1.4.2/docs/...\n2\nJava map projections of the world ...\nINDONESIA TOURISM : CENTRAL JAVA - MAP\nJava 2 Platform SE v1.3.1: Interface Map\nwww.btinternet.com/ se16/js/oldmapproj.htm\nwww.indonesia-tourism.com/...\njava.sun.com/j2se/1.3/docs/api/java/...\n3\nJava Map\nINDONESIA TOURISM : WEST JAVA - MAP\nAn Introduction to Java Map Collection Classes\njava.sun.com/developer/...\nwww.indonesia-tourism.com/ ...\nwww.oracle.com/technology/...\n4\nJava Technology Concept Map\nIndoStreets - Java Map\nAn Introduction to Java Map Collection Classes\njava.sun.com/developer/onlineTraining/...\nwww.indostreets.com/maps/java/\nwww.theserverside.com/news/...\n5\nScience@NASA Home\nIndonesia Regions and Islands Maps, Bali, Java, ...\nKoders - Mappings.java\nscience.nasa.gov/Realtime/...\nwww.maps2anywhere.com/Maps/...\nwww.koders.com/java/\n6\nAn Introduction to Java Map Collection Classes\nIndonesia City Street Map,...\nHibernate simplifies inheritance mapping\nwww.oracle.com/technology/...\nwww.maps2anywhere.com/Maps/...\nwww.ibm.com/developerworks/java/...\n7\nLonely Planet - Java Map\nMaps Of Indonesia\ntmap 30.map Class Hierarchy\nwww.lonelyplanet.com/mapshells/\nwww.embassyworld.com/maps/...\ntmap.pmel.noaa.gov/...\n8\nONJava.com: Java API Map\nMaps of Indonesia by Peter Loud\nClass Scope\nwww.onjava.com/pub/a/onjava/api map/\nusers.powernet.co.uk/...\njalbum.net/api/se/datadosen/util/Scope.html\n9\nGTA San Andreas : Sam\nMaps of Indonesia by Peter Loud\nClass PrintSafeHashMap\nwww.gtasanandreas.net/sam/\nusers.powernet.co.uk/mkmarina/indonesia/\njalbum.net/api/se/datadosen/...\n10\nINDONESIA TOURISM : WEST JAVA - MAP\nindonesiaphoto.com\nJava Pro - Union and Vertical Mapping of Classes\nwww.indonesia-tourism.com/...\nwww.indonesiaphoto.com/...\nwww.fawcette.com/javapro/...\nTable 1: Sample results of query expansion\nEVALUATION OF UCAIR\nWe now present some results on evaluating the two major UCAIR\nfunctions: selective query expansion and result reranking based on\nuser clickthrough data.\n5.1\nSample results\nThe query expansion strategy implemented in UCAIR is inten-tionally\nconservative to avoid misinterpretation of implicit user models\n. In practice, whenever it chooses to expand the query, the expansion\nusually makes sense. In Table 1, we show how UCAIR can\nsuccessfully distinguish two different search contexts for the query\n\"java map\", corresponding to two different previous queries (i.e.,\n\"travel Indonesia\" vs. \"hashtable\"). Due to implicit user modeling,\nUCAIR intelligently figures out to add \"Indonesia\" and \"class\",\nrespectively, to the user's query \"java map\", which would otherwise\nbe ambiguous as shown in the original results from Google\non March 21, 2005. UCAIR's results are much more accurate than\nGoogle's results and reflect personalization in search.\nThe eager implicit feedback component is designed to immediately\nrespond to a user's activity such as viewing a document. 
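The Rocchio update of Section 4.3 and the conservative reranking of Section 4.4 can be rendered in a few lines of Python. This is a sketch rather than UCAIR's actual code: term vectors are plain dictionaries, `vector_of` is an assumed helper that turns a displayed summary into such a vector, and the `keep_top=5` promotion policy mirrors the "small number (e.g. 5)" mentioned above.

```python
import math

def rocchio_update(query_vec, clicked_summary_vecs, alpha=0.5):
    """x = alpha * q + (1 - alpha) * centroid of the clicked summary vectors."""
    k = len(clicked_summary_vecs)
    cent = {}
    for vec in clicked_summary_vecs:
        for term, w in vec.items():
            cent[term] = cent.get(term, 0.0) + w / k
    return {t: alpha * query_vec.get(t, 0.0) + (1.0 - alpha) * cent.get(t, 0.0)
            for t in set(query_vec) | set(cent)}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank_unseen(unseen_results, model_vec, vector_of, keep_top=5):
    """Promote the few unseen summaries most similar to the current model x;
    all remaining results keep their original order."""
    by_score = sorted(unseen_results,
                      key=lambda r: cosine(vector_of(r), model_vec),
                      reverse=True)
    promoted = by_score[:keep_top]
    rest = [r for r in unseen_results if r not in promoted]
    return promoted + rest
```

In this sketch rocchio_update runs each time a summary is clicked and rerank_unseen each time the user presses "Back" or "Next", matching the eager behaviour illustrated in Figure 2.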
In\nFigure 2, we show how UCAIR can successfully disambiguate an\nambiguous query \"jaguar\" by exploiting a viewed document summary\n. In this case, the initial retrieval results using \"jaguar\" (shown\non the left side) contain two results about the Jaguar cars followed\nby two results about the Jaguar software. However, after the user\nviews the web page content of the second result (about \"Jaguar\ncar\") and returns to the search result page by clicking \"Back\" button\n, UCAIR automatically nominates two new search results about\nJaguar cars (shown on the right side), while the original two results\nabout Jaguar software are pushed down on the list (unseen from the\npicture).\n5.2\nQuantitative evaluation\nTo further evaluate UCAIR quantitatively, we conduct a user\nstudy on the effectiveness of the eager implicit feedback component\n. It is a challenge to quantitatively evaluate the potential performance\nimprovement of our proposed model and UCAIR over\nGoogle in an unbiased way [7]. Here, we design a user study,\nin which participants would do normal web search and judge a\nrandomly and anonymously mixed set of results from Google and\nUCAIR at the end of the search session; participants do not know\nwhether a result comes from Google or UCAIR.\nWe recruited 6 graduate students for this user study, who have\ndifferent backgrounds (3 computer science, 2 biology, and 1 chem-<\n;top>\n<num> Number: 716\n<title> Spammer arrest sue\n<desc> Description: Have any spammers\nbeen arrested or sued for sending unsolicited\ne-mail?\n<narr> Narrative: Instances of arrests,\nprosecutions, convictions, and punishments\nof spammers, and lawsuits against them are\nrelevant. Documents which describe laws to\nlimit spam without giving details of lawsuits\nor criminal trials are not relevant.\n</top>\nFigure 3: An example of TREC query topic, expressed in a\nform which might be given to a human assistant or librarian\nistry). We use query topics from TREC\n2\n2004 Terabyte track [2]\nand TREC 2003 Web track [4] topic distillation task in the way to\nbe described below.\nAn example topic from TREC 2004 Terabyte track appears in\nFigure 3. The title is a short phrase and may be used as a query\nto the retrieval system. The description field provides a slightly\nlonger statement of the topic requirement, usually expressed as a\nsingle complete sentence or question. Finally the narrative supplies\nadditional information necessary to fully specify the requirement,\nexpressed in the form of a short paragraph.\nInitially, each participant would browse 50 topics either from\nTerabyte track or Web track and pick 5 or 7 most interesting topics.\nFor each picked topic, the participant would essentially do the normal\nweb search using UCAIR to find many relevant web pages by\nusing the title of the query topic as the initial keyword query. During\nthis process, the participant may view the search results and\npossibly click on some interesting ones to view the web pages, just\nas in a normal web search. There is no requirement or restriction\non how many queries the participant must submit or when the participant\nshould stop the search for one topic. 
When the participant\nplans to change the search topic, he/she will simply press a button\n2\nText REtrieval Conference: http://trec.nist.gov/\n829\nFigure 2: Screen shots for result reranking\nto evaluate the search results before actually switching to the next\ntopic.\nAt the time of evaluation, 30 top ranked results from Google and\nUCAIR (some are overlapping) are randomly mixed together so\nthat the participant would not know whether a result comes from\nGoogle or UCAIR. The participant would then judge the relevance\nof these results. We measure precision at top n (n = 5, 10, 20, 30)\ndocuments of Google and UCAIR. We also evaluate precisions at\ndifferent recall levels.\nAltogether, 368 documents judged as relevant from Google search\nresults and 429 documents judged as relevant from UCAIR by participants\n. Scatter plots of precision at top 10 and top 20 documents\nare shown in Figure 4 and Figure 5 respectively (The scatter plot\nof precision at top 30 documents is very similar to precision at top\n20 documents). Each point of the scatter plots represents the precisions\nof Google and UCAIR on one query topic.\nTable 2 shows the average precision at top n documents among\n32 topics. From Figure 4, Figure 5 and Table 2, we see that the\nsearch results from UCAIR are consistently better than those from\nGoogle by all the measures. Moreover, the performance improvement\nis more dramatic for precision at top 20 documents than that\nat precision at top 10 documents. One explanation for this is that\nthe more interaction the user has with the system, the more clickthrough\ndata UCAIR can be expected to collect. Thus the retrieval\nsystem can build more precise implicit user models, which lead to\nbetter retrieval accuracy.\nRanking Method\nprec@5\nprec@10\nprec@20\nprec@30\nGoogle\n0.538\n0.472\n0.377\n0.308\nUCAIR\n0.581\n0.556\n0.453\n0.375\nImprovement\n8.0%\n17.8%\n20.2%\n21.8%\nTable 2: Table of average precision at top n documents for 32\nquery topics\nThe plot in Figure 6 shows the precision-recall curves for UCAIR\nand Google, where it is clearly seen that the performance of UCAIR\n0\n0.2\n0.4\n0.6\n0.8\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nUCAIR prec@10\nGoogle prec@10\nScatterplot of Precision at Top 10 Documents\nFigure 4: Precision at top 10 documents of UCAIR and Google\nis consistently and considerably better than that of Google at all\nlevels of recall.\nCONCLUSIONS\nIn this paper, we studied how to exploit implicit user modeling to\nintelligently personalize information retrieval and improve search\naccuracy. Unlike most previous work, we emphasize the use of immediate\nsearch context and implicit feedback information as well\nas eager updating of search results to maximally benefit a user. We\npresented a decision-theoretic framework for optimizing interactive\ninformation retrieval based on eager user model updating, in\nwhich the system responds to every action of the user by choosing\na system action to optimize a utility function. We further propose\nspecific techniques to capture and exploit two types of implicit\nfeedback information: (1) identifying related immediately preceding\nquery and using the query and the corresponding search results\nto select appropriate terms to expand the current query, and (2)\nexploiting the viewed document summaries to immediately rerank\nany documents that have not yet been seen by the user. Using these\ntechniques, we develop a client-side web search agent (UCAIR)\non top of a popular search engine (Google). 
Experiments on web\nsearch show that our search agent can improve search accuracy over\n830\n0\n0.2\n0.4\n0.6\n0.8\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nUCAIR prec@20\nGoogle prec@20\nScatterplot of Precision at Top 20 documents\nFigure 5: Precision at top 20 documents of UCAIR and Google\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n0.4\n0.45\n0.5\n0.55\n0.6\n0.65\n0.7\n0.75\n0.8\n0.85\n0.9\nrecall\nprecision\nPrecision-Recall curves\nGoogle Result\nUCAIR Result\nFigure 6: Precision at top 20 result of UCAIR and Google\nGoogle. Since the implicit information we exploit already naturally\nexists through user interactions, the user does not need to make any\nextra effort. The developed search agent thus can improve existing\nweb search performance without any additional effort from the\nuser.\nACKNOWLEDGEMENT\nWe thank the six participants of our evaluation experiments. This\nwork was supported in part by the National Science Foundation\ngrants IIS-0347933 and IIS-0428472.\n\nREFERENCES\n[1] S. M. Beitzel, E. C. Jensen, A. Chowdhury, D. Grossman,\nand O. Frieder. Hourly analysis of a very large topically\ncategorized web query log. In Proceedings of SIGIR 2004,\npages 321328, 2004.\n[2] C. Clarke, N. Craswell, and I. Soboroff. Overview of the\nTREC 2004 terabyte track. In Proceedings of TREC 2004,\n2004.\n[3] M. Claypool, P. Le, M. Waseda, and D. Brown. Implicit\ninterest indicators. In Proceedings of Intelligent User\nInterfaces 2001, pages 3340, 2001.\n[4] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu.\nOverview of the TREC 2003 web track. In Proceedings of\nTREC 2003, 2003.\n[5] W. B. Croft, S. Cronen-Townsend, and V. Larvrenko.\nRelevance feedback and personalization: A language\nmodeling perspective. In Proeedings of Second DELOS\nWorkshop: Personalisation and Recommender Systems in\nDigital Libraries, 2001.\n[6] Google Personalized. http://labs.google.com/personalized.\n[7] D. Hawking, N. Craswell, P. B. Thistlewaite, and D. Harman.\nResults and challenges in web search evaluation. Computer\nNetworks, 31(11-16):13211330, 1999.\n[8] X. Huang, F. Peng, A. An, and D. Schuurmans. Dynamic\nweb log session identification with statistical language\nmodels. Journal of the American Society for Information\nScience and Technology, 55(14):12901303, 2004.\n[9] G. Jeh and J. Widom. Scaling personalized web search. In\nProceedings of WWW 2003, pages 271279, 2003.\n[10] T. Joachims. Optimizing search engines using clickthrough\ndata. In Proceedings of SIGKDD 2002, pages 133142,\n2002.\n[11] D. Kelly and J. Teevan. Implicit feedback for inferring user\npreference: A bibliography. SIGIR Forum, 37(2):1828,\n2003.\n[12] J. Lafferty and C. Zhai. Document language models, query\nmodels, and risk minimization for information retrieval. In\nProceedings of SIGIR'01, pages 111119, 2001.\n[13] T. Lau and E. Horvitz. Patterns of search: Analyzing and\nmodeling web query refinement. In Proceedings of the\nSeventh International Conference on User Modeling (UM),\npages 145 152, 1999.\n[14] V. Lavrenko and B. Croft. Relevance-based language\nmodels. In Proceedings of SIGIR'01, pages 120127, 2001.\n[15] M. Mitra, A. Singhal, and C. Buckley. Improving automatic\nquery expansion. In Proceedings of SIGIR 1998, pages\n206214, 1998.\n[16] My Yahoo! http://mysearch.yahoo.com.\n[17] G. Nunberg. As google goes, so goes the nation. New York\nTimes, May 2003.\n[18] S. E. Robertson. The probability ranking principle in i.\nJournal of Documentation, 33(4):294304, 1977.\n[19] J. J. Rocchio. 
Relevance feedback in information retrieval. In\nThe SMART Retrieval System: Experiments in Automatic\nDocument Processing, pages 313323. Prentice-Hall Inc.,\n1971.\n[20] G. Salton and C. Buckley. Improving retrieval performance\nby retrieval feedback. Journal of the American Society for\nInformation Science, 41(4):288297, 1990.\n[21] G. Salton and M. J. McGill. Introduction to Modern\nInformation Retrieval. McGraw-Hill, 1983.\n[22] X. Shen, B. Tan, and C. Zhai. Context-sensitive information\nretrieval using implicit feedback. In Proceedings of SIGIR\n2005, pages 4350, 2005.\n[23] X. Shen and C. Zhai. Exploiting query history for document\nranking in interactive information retrieval (Poster). In\nProceedings of SIGIR 2003, pages 377378, 2003.\n[24] A. Singhal. Modern information retrieval: A brief overview.\nBulletin of the IEEE Computer Society Technical Committee\non Data Engineering, 24(4):3543, 2001.\n[25] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web\nsearch based on user profile constructed without any effort\nfrom users. In Proceedings of WWW 2004, pages 675684,\n2004.\n[26] E. Volokh. Personalization and privacy. Communications of\nthe ACM, 43(8):8488, 2000.\n[27] R. W. White, J. M. Jose, C. J. van Rijsbergen, and\nI. Ruthven. A simulated study of implicit feedback models.\nIn Proceedings of ECIR 2004, pages 311326, 2004.\n[28] J. Xu and W. B. Croft. Query expansion using local and\nglobal document analysis. In Proceedings of SIGIR 1996,\npages 411, 1996.\n[29] C. Zhai and J. Lafferty. Model-based feedback in KL\ndivergence retrieval model. In Proceedings of the CIKM\n2001, pages 403410, 2001.\n831\n", "keywords": "user model;interactive retrieval;personalized search;information retrieval systems;user modelling;implicit feedback;retrieval accuracy;clickthrough information;UCAIR"} {"name": "106", "title": "Improvements of TLAESA Nearest Neighbour Search Algorithm and Extension to Approximation Search", "abstract": "Nearest neighbour (NN) searches and k nearest neighbour (k-NN) searches are widely used in pattern recognition and image retrieval. An NN (k-NN) search finds the closest object (closest k objects) to a query object. Although the definition of the distance between objects depends on applications, its computation is generally complicated and time-consuming. It is therefore important to reduce the number of distance computations. TLAESA (Tree Linear Approximating and Eliminating Search Algorithm) is one of the fastest algorithms for NN searches. This method reduces distance computations by using a branch and bound algorithm. In this paper we improve both the data structure and the search algorithm of TLAESA. The proposed method greatly reduces the number of distance computations. Moreover, we extend the improved method to an approximation search algorithm which ensures the quality of solutions. Experimental results show that the proposed method is efficient and finds an approximate solution with a very low error rate.", "fulltext": "Introduction\nNN and k-NN searches are techniques which find the\nclosest object (closest k objects) to a query object\nfrom a database. These are widely used in pattern\nrecognition and image retrieval. We can see examples\nof their applications to handwritten character\nrecognition in (Rico-Juan & Mico 2003) and (Mico\n& Oncina 1998), and so on. In this paper we consider\nNN (k-NN) algorithms that can work in any metric\nspace. 
For any x, y, z in a metric space, the distance function d(·, ·) satisfies the following properties:

d(x, y) = 0 \iff x = y,
d(x, y) = d(y, x),
d(x, z) \le d(x, y) + d(y, z).

Although the definition of the distance depends on applications, its calculation is generally complicated and time-consuming. We particularly call the calculation of d(·, ·) a distance computation.

For the NN and k-NN searches in metric spaces, some methods that can manage a large set of objects efficiently have been introduced (Hjaltason & Samet 2003). They are categorized into two groups. The methods in the first group manage objects with a tree structure such as the vp-tree (Yianilos 1993), M-tree (Ciaccia, Patella & Zezula 1997), sa-tree (Navarro 2002) and so forth. The methods in the second group manage objects with a distance matrix, which stores the distances between objects. The difference between the two groups lies in their approaches to fast searching. The former aims at reducing the computational tasks in the search process by managing objects effectively. The latter works toward reducing the number of distance computations, because their costs are generally higher than the costs of other calculations. In this paper we consider the latter approach.

AESA (Approximating and Eliminating Search Algorithm) (Vidal 1986) is one of the fastest algorithms for NN searches in the distance matrix group. The number of distance computations is bounded by a constant, but the space complexity is quadratic. LAESA (Linear AESA) (Mico, Oncina & Vidal 1994) was introduced in order to reduce this large space complexity. Its space complexity is linear and its search performance is almost the same as that of AESA. Although LAESA is more practical than AESA, it is impractical for a large database because calculations other than distance computations increase. TLAESA (Tree LAESA) (Mico, Oncina & Carrasco 1996) is an improvement of LAESA and reduces the time complexity to sublinear. It uses two kinds of data structures: a distance matrix and a binary tree, called a search tree.

In this paper, we propose some improvements of the search algorithm and the data structures of TLAESA in order to reduce the number of distance computations. The search algorithm follows the best first strategy. The search tree is transformed from a binary tree into a multiway tree. We also improve the selection method of the root object in the search tree. These improvements are simple but very effective. We then describe how to perform a k-NN search in the improved TLAESA. Moreover, we propose an extension to an approximation search algorithm that can ensure the quality of solutions.

This paper is organized as follows. In section 2, we describe the details of the search algorithm and the data structures of TLAESA. In section 3, we propose some improvements of TLAESA. In section 4, we present an extension to an approximation search algorithm. In section 5, we show some experimental results.
Finally, in section 6, we conclude this paper.\nFigure 1: An example of the data structures in\nTLAESA.\nTLAESA\nTLAESA uses two kinds of data structures: the distance\nmatrix and the search tree. The distance matrix\nstores the distances from each object to some selected\nobjects. The search tree manages hierarchically all\nobjects. During the execution of the search algorithm,\nthe search tree is traversed and the distance matrix\nis used to avoid exploring some branches.\n2.1\nData Structures\nWe explain the data structures in TLAESA. Let P\nbe the set of all objects and B be a subset consisting\nof selected objects called base prototypes. The distance\nmatrix M is a two-dimensional array that stores\nthe distances between all objects and base prototypes.\nThe search tree T is a binary tree such that each node\nt corresponds to a subset S\nt\nP . Each node t has\na pointer to the representative object p\nt\nS\nt\nwhich\nis called a pivot, a pointer to a left child node l, a\npointer to a right child node r and a covering radius\nr\nt\n. The covering radius is defined as\nr\nt\n= max\npS\nt\nd(p, p\nt\n).\n(1)\nThe pivot p\nr\nof r is defined as p\nr\n= p\nt\n. On the other\nhand, the pivot p\nl\nof l is determined so that\np\nl\n= argmax\npS\nt\nd(p, p\nt\n).\n(2)\nHence, we have the following equality:\nr\nt\n= d(p\nt\n, p\nl\n).\n(3)\nS\nt\nis partitioned into two disjoint subsets S\nr\nand S\nl\nas follows:\nS\nr\n= {p S\nt\n|d(p, p\nr\n) < d(p, p\nl\n)},\nS\nl\n= S\nt\n- S\nr\n.\n(4)\nNote that if t is a leaf node, S\nt\n= {p\nt\n} and r\nt\n= 0.\nFig. 1 shows an example of the data structures.\n2.2\nConstruction of the Data Structures\nWe first explain the construction process of the search\ntree T . The pivot p\nt\nof the root node t is randomly\nselected and S\nt\nis set to P . The pivot p\nl\nof the left\nchild node and the covering radius r\nt\nare defined by\nEqs. (2) and (3). The pivot p\nr\nof the right child node\nis set to p\nt\n. S\nt\nis partitioned into S\nr\nand S\nl\nby Eq.\n(4). These operations are recursively repeated until\n|S\nt\n| = 1.\nThe distance matrix M is constructed by selecting\nbase prototypes. This selection is important because\nFigure 2: Lower bound.\nbase prototypes are representative objects which are\nused to avoid some explorations of the tree.\nThe ideal selection of them is that each object is\nas far away as possible from other objects. In (Mico\net al. 1994), a greedy algorithm is proposed for this\nselection. This algorithm chooses an object that maximizes\nthe sum of distances from the other base prototypes\nwhich have already been selected. In (Mico &\nOncina 1998), another algorithm is proposed, which\nchooses an object that maximizes the minimum distance\nto the preselected base prototypes. (Mico &\nOncina 1998) shows that the latter algorithm is more\neffective than the former one. Thus, we use the later\nalgorithm for the selection of base prototypes.\nThe search efficiency depends not only on the selection\nof base prototypes but also on the number\nof them. There is a trade-off between the search\nefficiency and the size of distance matrix, i.e. the\nmemory capacity. The experimental results in (Mico\net al. 1994) show that the optimal number of base\nprototypes depends on the dimensionality dm of the\nspace. For example, the optimal numbers are 3, 16\nand 24 if dm = 2, 4 and 8, respectively. 
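The greedy max-min selection of base prototypes described above, together with the construction of the distance matrix M, can be sketched as follows. This is a sketch only: the generic `dist` callable and the random choice of the first prototype are assumptions, not details fixed in this paper.

```python
import random

def select_base_prototypes(objects, dist, n_base):
    """Each new base prototype maximizes its minimum distance
    to the prototypes already selected (the max-min criterion)."""
    base = [random.choice(objects)]                     # assumed: first prototype chosen at random
    min_dist = {p: dist(p, base[0]) for p in objects}   # distance to the nearest selected prototype
    while len(base) < n_base:
        nxt = max(objects, key=lambda p: min_dist[p])
        base.append(nxt)
        for p in objects:
            min_dist[p] = min(min_dist[p], dist(p, nxt))
    return base

def build_distance_matrix(objects, base, dist):
    """M[b][p] = d(b, p) for every base prototype b and every object p."""
    return {b: {p: dist(b, p) for p in objects} for b in base}
```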
The experimental\nresults also show that the optimal number\ndoes not depend on the number of objects.\n2.3\nSearch Algorithm\nThe search algorithm follows the branch and bound\nstrategy. It traverses the search tree T in the depth\nfirst order. The distance matrix M is referred whenever\neach node is visited in order to avoid unnecessary\ntraverse of the tree T . The distance are computed\nonly when a leaf node is reached.\nGiven a query object q, the distance between q and\nthe base prototypes are computed. These results are\nstored in an array D. The object which is the closest\nto q in B is selected as the nearest neighbour candidate\np\nmin\n, and the distance d(q, p\nmin\n) is recorded as\nd\nmin\n. Then, the traversal of the search tree T starts\nat the root node. The lower bound for the left child\nnode l is calculated whenever each node t is reached if\nit is not a leaf node. The lower bound of the distance\nbetween q and an object x is defined as\ng\nx\n= max\nbB\n|d(q, b) - d(b, x)|.\n(5)\nSee Fig. 2. Recall that d(q, b) was precomputed before\nthe traversals and was stored in D. In addition,\nthe value d(b, x) was also computed during the construction\nprocess and stored in the distance matrix\nM . Therefore, g\nx\nis calculated without any actual\ndistance computations. The lower bound g\nx\nis not actual\ndistance d(q, x). Thus, it does not ensure that the\nnumber of visited nodes in the search becomes minimum\n. Though, this evaluation hardly costs, hence it\nis possible to search fast. The search process accesses\nthe left child node l if g\np\nl\ng\np\nr\n, or the right child\nnode r if g\np\nl\n> g\np\nr\n. When a leaf node is reached,\nthe distance is computed and both p\nmin\nand d\nmin\nare\nupdated if the distance is less than d\nmin\n.\nq\np\nmin\np\nt\nr\nt\nS\nt\nFigure 3: Pruning Process.\nprocedure NN search(q)\n1:\nt root of T\n2:\nd\nmin\n= , g\np\nt\n= 0\n3:\nfor b B do\n4:\nD[b] = d(q, b)\n5:\nif D[b] < d\nmin\nthen\n6:\np\nmin\n= b, d\nmin\n= D[b]\n7:\nend if\n8:\nend for\n9:\ng\np\nt\n= max\nbB\n|(D[b] - M [b, p\nt\n])|\n10:\nsearch(t, g\np\nt\n, q, p\nmin\n, d\nmin\n)\n11:\nreturn p\nmin\nFigure 4: Algorithm for an NN search in TLAESA.\nWe explain the pruning process. Fig. 3 shows the\npruning situation. Let t be the current node. If the\ninequality\nd\nmin\n+ r\nt\n< d(q, p\nt\n)\n(6)\nis satisfied, we can see that no object exists in S\nt\nwhich is closer to q than p\nmin\nand the traversal to\nnode t is not necessary. Since g\np\nt\nd(q, p\nt\n), Eq. (6)\ncan be replaced with\nd\nmin\n+ r\nt\n< g\np\nt\n.\n(7)\nFigs.\n4 and 5 show the details of the search\nalgorithm(Mico et al. 1996).\nImprovements of TLAESA\nIn this section, we propose some improvements of\nTLAESA in order to reduce the number of distance\ncomputations.\n3.1\nTree Structure and Search Algorithm\nIf we can evaluate the lower bounds g in the ascending\norder of their values, the search algorithm runs very\nfast. However, this is not guaranteed in TLAESA\nsince the evaluation order is decided according to the\ntree structure. We show such an example in Fig. 6.\nIn this figure, u, v and w are nodes. If g\np\nv\n< g\np\nw\n,\nit is desirable that v is evaluated before w. But, if\ng\np\nv\n> g\np\nu\n, w might be evaluated before v.\nWe propose the use of a multiway tree and the\nbest first order search instead of a binary tree and\nthe depth first search. During the best first search\nprocess, we can traverse preferentially a node whose\nsubset may contain the closest object. 
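Both the original depth-first traversal and the proposed best-first traversal rely on the same two constant-time tests: the matrix-based lower bound of Equation 5 and the pruning rule of Equation 7. A minimal sketch, where D holds the per-query distances to the base prototypes and M is the distance matrix of Section 2.1:

```python
def lower_bound(D, M, x):
    """g_x = max over base prototypes b of |d(q, b) - d(b, x)| (Equation 5).
    D[b] = d(q, b) is computed once per query and M[b][x] is precomputed,
    so evaluating g_x requires no new distance computation."""
    return max(abs(D[b] - M[b][x]) for b in D)

def can_prune(g_pivot, covering_radius, d_min):
    """Equation 7: the subtree rooted at t cannot contain an object closer
    than the current candidate when d_min + r_t < g_{p_t}."""
    return d_min + covering_radius < g_pivot
```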
Moreover, we\ncan evaluate more nodes at one time by using of the\nmultiway tree. The search tree in TLAESA has many\nnodes which have a pointer to the same object. In the\nproposed structure, we treat such nodes as one node.\nEach node t corresponds to a subset S\nt\nP and has\na pivot p\nt\n, a covering radius r\nt\n= max\npS\nt\nd(p, p\nt\n) and\npointers to its children nodes.\nprocedure search(t, g\np\nt\n, q, p\nmin\n, d\nmin\n)\n1:\nif t is a leaf then\n2:\nif g\np\nt\n< d\nmin\nthen\n3:\nd = d(q, p\nt\n) {distance computation}\n4:\nif d < d\nmin\nthen\n5:\np\nmin\n= p\nt\n, d\nmin\n= d\n6:\nend if\n7:\nend if\n8:\nelse\n9:\nr is a right child of t\n10:\nl is a left child of t\n11:\ng\np\nr\n= g\np\nt\n12:\ng\np\nl\n= max\nbB\n|(D[b] - M [b, p\nt\n])|\n13:\nif g\np\nl\n< g\np\nr\nthen\n14:\nif d\nmin\n+ r\nl\n> g\np\nl\nthen\n15:\nsearch(l, g\np\nl\n, p\nmin\n, d\nmin\n)\n16:\nend if\n17:\nif d\nmin\n+ r\nr\n> g\np\nr\nthen\n18:\nsearch(r, g\np\nr\n, p\nmin\n, d\nmin\n)\n19:\nend if\n20:\nelse\n21:\nif d\nmin\n+ r\nr\n> g\np\nr\nthen\n22:\nsearch(r, g\np\nr\n, p\nmin\n, d\nmin\n)\n23:\nend if\n24:\nif d\nmin\n+ r\nl\n> g\np\nl\nthen\n25:\nsearch(l, g\np\nl\n, p\nmin\n, d\nmin\n)\n26:\nend if\n27:\nend if\n28:\nend if\nFigure 5: A recursive procedure for an NN search in\nTLAESA.\nFigure 6: A case in which the search algorithm in\nTLAESA does not work well.\nWe show a method to construct the tree structure\nin Fig. 7. We first select randomly the pivot p\nt\nof\nthe root node t and set S\nt\nto P . Then we execute the\nprocedure makeTree(t, p\nt\n, S\nt\n) in Fig. 7.\nWe explain the search process in the proposed\nstructure. The proposed method maintains a priority\nqueue Q that stores triples (node t, lower bound g\np\nt\n,\ncovering radius r\nt\n) in the increasing order of g\np\nt\n- r\nt\n.\nGiven a query object q, we calculate the distances between\nq and base prototypes and store their values in\nD. Then the search process starts at the root of T .\nThe following steps are recursively repeated until Q\nbecomes empty. When t is a leaf node, the distance\nd(q, p\nt\n) is computed if g\np\nt\n< d\nmin\n. If t is not a leaf\nnode and its each child node t satisfies the inequality\ng\np\nt\n< r\nt\n+ d\nmin\n,\n(8)\nthe lower bound g\np\nt\nis calculated and a triple\n(t , g\np\nt\n, r\nt\n) is added to Q. Figs. 8 and 9 show the\ndetails of the algorithm.\nprocedure makeTree(t, p\nt\n, S\nt\n)\n1:\nt new child node of t\n2:\nif |S\nt\n| = 1 then\n3:\np\nt\n= p\nt\nand S\nt\n= {p\nt\n}\n4:\nelse\n5:\np\nt\n= argmax\npS\nt\nd(p, p\nt\n)\n6:\nS\nt\n= {p S\nt\n|d(p, p\nt\n) < d(p, p\nt\n)}\n7:\nS\nt\n= S\nt\n- S\nt\n8:\nmakeTree(t , p\nt\n, S\nt\n)\n9:\nmakeTree(t, p\nt\n, S\nt\n)\n10:\nend if\nFigure 7: Method to construct the proposed tree\nstructure.\nprocedure NN search(q)\n1:\nt root of T\n2:\nd\nmin\n= , g\np\nt\n= 0\n3:\nfor b B do\n4:\nD[b] = d(q, b)\n5:\nif D[b] < d\nmin\nthen\n6:\np\nmin\n= b, d\nmin\n= D[b]\n7:\nend if\n8:\nend for\n9:\ng\nt\n= max\nbB\n|(D[b] - M [b, p\nt\n])|\n10:\nQ {(t, g\np\nt\n, r\nt\n)}\n11:\nwhile Q is not empty do do\n12:\n(t, g\np\nt\n, r\nt\n) element in Q\n13:\nsearch(t, g\np\nt\n, q, p\nmin\n, d\nmin\n)\n14:\nend while\n15:\nreturn p\nmin\nFigure 8: Proposed algorithm for an NN search.\n3.2\nSelection of Root Object\nWe focus on base prototypes in order to reduce node\naccesses. 
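Before turning to the choice of the root object, the control flow of the best-first traversal of Section 3.1 can be summarized in the following sketch. The priority-queue helpers (make-queue, queue-insert!, queue-remove-min!, queue-empty?) and the node accessors (node-leaf?, node-pivot, node-radius, node-children) are assumed names, and g-of stands for the lower-bound computation of Eq. (5):

(define (best-first-nn q root g-of d)
  (let ([Q (make-queue)]   ; stores triples (node g r), ordered by increasing g - r
        [p-min #f]         ; in the full algorithm, initialized from the base prototypes
        [d-min +inf.0])
    (queue-insert! Q root (g-of (node-pivot root)) (node-radius root))
    (let loop ()
      (unless (queue-empty? Q)
        (let-values ([(t g r) (queue-remove-min! Q)])
          (if (node-leaf? t)
              ;; a real distance is computed only at a leaf, and only if it can help
              (when (< g d-min)
                (let ([dist (d q (node-pivot t))])
                  (when (< dist d-min)
                    (set! p-min (node-pivot t))
                    (set! d-min dist))))
              ;; otherwise enqueue every child that satisfies inequality (8)
              (for-each
               (lambda (child)
                 (let ([gc (g-of (node-pivot child))])
                   (when (< gc (+ (node-radius child) d-min))
                     (queue-insert! Q child gc (node-radius child)))))
               (node-children t)))
          (loop))))
    p-min))

This mirrors the procedures of Figs. 8 and 9, with the recursion replaced by an explicit loop over the queue.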
The lower bound of the distance between a\nquery q and a base prototype b is\ng\nb\n= max\nbB\n|d(q, b) - d(b, b)|\n= d(q, b).\nThis value is not an estimated distance but an actual\ndistance.\nIf we can use an actual distance in the search process\n, we can evaluate more effectively which nodes\nare close to q. This fact means that the search is efficiently\nperformed if many base prototypes are visited\nin the early stage. In other words, it is desirable that\nmore base prototypes are arranged in the upper part\nof the search tree. Thus, in the proposed algorithm,\nwe choose the first base prototype b\n1\nas the root object\n.\n3.3\nExtension to a k-NN Search\nLAESA was developed to perform NN searches and\n(Moreno-Seco, Mico & Oncina 2002) extended it so\nthat k-NN searches can be executed. In this section,\nwe extend the improved TLAESA to a k-NN search\nalgorithm. The extension is simple modifications of\nthe algorithm described above. We use a priority\nqueue V for storing k nearest neighbour candidates\nand modify the definition of d\nmin\n. V stores pairs\n(object p, distance d(q, p)) in the increasing order of\nprocedure search(t, g\np\nt\n, q, p\nmin\n, d\nmin\n)\n1:\nif t is a leaf then\n2:\nif g\np\nt\n< d\nmin\nthen\n3:\nd = d(q, p\nt\n) {distance computation}\n4:\nif d < d\nmin\nthen\n5:\np\nmin\n= p\nt\n, d\nmin\n= d\n6:\nend if\n7:\nend if\n8:\nelse\n9:\nfor each child t of t do\n10:\nif g\np\nt\n< r\nt\n+ d\nmin\nthen\n11:\ng\np\nt\n= max\nbB\n|(D[b] - M [b, p\nt\n])|\n12:\nQ Q {(t , g\np\nt\n, r\nt\n)}\n13:\nend if\n14:\nend for\n15:\nend if\nFigure 9: A procedure used in the proposed algorithm\nfor an NN search.\nprocedure k-NN search(q, k)\n1:\nt root of T\n2:\nd\nmin\n= , g\np\nt\n= 0\n3:\nfor b B do\n4:\nD[b] = d(q, b)\n5:\nif D[b] < d\nmin\nthen\n6:\nV V {(b, D[b])}\n7:\nif |V | = k + 1 then\n8:\nremove (k + 1)th pair from V\n9:\nend if\n10:\nif |V | = k then\n11:\n(c, d(q, c)) kth pair of V\n12:\nd\nmin\n= d(q, c)\n13:\nend if\n14:\nend if\n15:\nend for\n16:\ng\np\nt\n= max\nbB\n|(D[b] - M [b, p\nt\n])|\n17:\nQ {(t, g\np\nt\n, r\nt\n)}\n18:\nwhile Q is not empty do\n19:\n(t, g\np\nt\n, r\nt\n) element in Q\n20:\nsearch(t, g\np\nt\n, q, V, d\nmin\n, k)\n21:\nend while\n22:\nreturn k objects V\nFigure 10: Proposed algorithm for a k-NN search.\nd(q, p). d\nmin\nis defined as\nd\nmin\n=\n(|V | < k)\nd(q, c) (|V | = k)\n(9)\nwhere c is the object of the kth pair in V .\nWe show in Figs. 10 and 11 the details of the k-NN\nsearch algorithm. The search strategy essentially\nfollows the algorithm in Figs. 8 and 9, but the k-NN\nsearch algorithm uses V instead of p\nmin\n.\n(Moreno-Seco et al. 
2002) shows that the optimal\nnumber of base prototypes depends on not only the\ndimensionality of the space but also the value of k and\nthat the number of distance computations increases\nas k increases.\nExtension to an Approximation Search\nIn this section, we propose an extension to an approximation\nk-NN search algorithm which ensures the\nprocedure search(t, g\np\nt\n, q, V, d\nmin\n, k)\n1:\nif t is a leaf then\n2:\nif g\np\nt\n< d\nmin\nthen\n3:\nd = d(q, p\nt\n) {distance computation}\n4:\nif d < d\nmin\nthen\n5:\nV V {(p\nt\n, d(q, p\nt\n))}\n6:\nif |V | = k + 1 then\n7:\nremove (k + 1)th pair from V\n8:\nend if\n9:\nif |V | = k then\n10:\n(c, d(q, c)) kth pair of V\n11:\nd\nmin\n= d(q, c)\n12:\nend if\n13:\nend if\n14:\nend if\n15:\nelse\n16:\nfor each child t of t do\n17:\nif g\np\nt\n< r\nt\n+ d\nmin\nthen\n18:\ng\np\nt\n= max\nbB\n|(D[b] - M [b, p\nt\n])|\n19:\nQ Q {(t , g\np\nt\n, r\nt\n)}\n20:\nend if\n21:\nend for\n22:\nend if\nFigure 11: A procedure used in the proposed algorithm\nfor a k-NN search.\nquality of solutions. Consider the procedure in Fig.\n11. We replace the 4th line with\nif d < d\nmin\nthen\nand the 17th line with\nif g\nt\n< r\nt\n+ d\nmin\nthen\nwhere is real number such that 0 < 1. The\npruning process gets more efficient as these conditions\nbecome tighter.\nThe proposed method ensures the quality of solutions\n. We can show the approximation ratio to an\noptimal solution using . Let a be the nearest neighbour\nobject and a be the nearest neighbour candidate\nobject. If our method misses a and give a as\nthe answer, the equation\ng(q, a) d(q, a )\n(10)\nis satisfied. Then a will be eliminated from targeted\nobjects. Since g(q, a) d(q, a), we can obtain the\nfollowing equation:\nd(q, a ) 1\nd(q, a).\n(11)\nThus, the approximate solution are suppressed by\n1\n\ntimes of the optimal solution.\nExperiments\nIn this section we show some experimental results and\ndiscuss them. We tested on an artificial set of random\npoints in the 8-dimensional euclidean space. We also\nused the euclidean distance as the distance function.\nWe evaluated the number of distance computations\nand the number of accesses to the distance matrix in\n1-NN and 10-NN searches.\n0\n50\n100\n150\n200\n250\n300\n0 10 20 30 40 50 60 70 80 90 100 110 120\nNumber of Distance Computations\nNumber of Base Prototypes\nTLAESA(1-NN)\nTLAESA(10-NN)\nProposed(1-NN)\nProposed(10-NN)\nFigure 12: Relation of the number of distance computations\nto the number of base prototypes.\n1-NN\n10-NN\nTLAESA\n40\n80\nProposed\n25\n60\nTable 1: The optimal number of base prototypes.\n5.1\nThe Optimal Number of Base Prototypes\nWe first determined experimentally the optimal number\nof base prototypes.\nThe number of objects\nwas fixed to 10000. We executed 1-NN and 10-NN\nsearches for various numbers of base prototypes, and\ncounted the number of distance computations. Fig.\n12 shows the results. From this figure, we chose the\nnumber of base prototypes as shown in Table. 1.\nWe can see that the values in the proposed method\nare fewer than those in TLAESA. This means that\nthe proposed method can achieve better performance\nwith smaller size of distance matrix. We used the\nvalues in Table. 1 in the following experiments.\n5.2\nEvaluation of Improvements\nWe tested the effects of our improvements described\nin 3.1 and 3.2. We counted the numbers of distance\ncomputations in 1-NN and 10-NN searches for various\nnumbers of objects. The results are shown in Figs.\n13 and 14. 
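As an aside, the symbol α was lost from Section 4 in the source text; the reasoning behind Eqs. (10) and (11), written out in full, is the following. If the method eliminates the true nearest neighbour a and reports a' instead, then

\[ g(q,a) \ge \alpha\, d(q,a') \qquad (10) \]

and combining this with the lower-bound property g(q,a) \le d(q,a) gives

\[ \alpha\, d(q,a') \le d(q,a) \;\Longrightarrow\; d(q,a') \le \tfrac{1}{\alpha}\, d(q,a) \qquad (11) \]

so the reported answer is at most 1/α times farther from q than the optimal one.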
Similar to TLAESA, the number of distance computations in the proposed method does not depend on the number of objects. In both 1-NN and 10-NN searches, it is about 60% of the number of distance computations in TLAESA. Thus we can see that our improvements are very effective.
In the search algorithms of TLAESA and the proposed method, various calculations are performed other than distance computations. The cost of the major part of these calculations is proportional to the number of accesses to the distance matrix. We therefore counted the number of accesses to the distance matrix in the following two cases:
(i) TLAESA vs. TLAESA with the improved selection of the root object.
(ii) The proposed method with only the improvements of the tree structure and the search algorithm vs. the proposed method with, in addition, the improved selection of the root object.
In case (i), the number of accesses to the distance matrix is reduced by 12% in 1-NN searches and 4.5% in 10-NN searches. In case (ii), it is reduced by 6.8% in 1-NN searches and 2.7% in 10-NN searches.
Figure 13: The number of distance computations in 1-NN searches.
Figure 14: The number of distance computations in 10-NN searches.
Thus we can see that the improved selection of the root object is effective.
5.3 Evaluation of Approximation Search
We tested the performance of the approximation search algorithm. We compared the proposed method to Ak-LAESA, the approximation search algorithm proposed in (Moreno-Seco, Mico & Oncina 2003). Each time a distance is computed in Ak-LAESA, the nearest neighbour candidate is updated and its value is stored. When the nearest neighbour object is found, the best k objects are chosen from the stored values. In Ak-LAESA, the number of distance computations of a k-NN search is exactly the same as that of an NN search.
To compare the proposed method with Ak-LAESA, we examined how many objects of the approximate solution appear in the optimal solution. Thus, we define the error rate E as
E[%] = |{x_i | x_i ∉ Opt, i = 1, 2, ..., k}| / k × 100,    (12)
where {x_1, x_2, ..., x_k} is the set of k objects obtained by an approximation algorithm and Opt is the set of the k closest objects to the query object.
Fig. 15 shows the error rate as the value of α is changed in 10-NN searches. Fig. 16 shows the relation of the number of distance computations to the value of α in 10-NN searches.
Figure 15: Error rate in 10-NN searches.
Figure 16: Relation of the number of distance computations to the value of α in 10-NN searches.
In the range α ≥ 0.5, the proposed method shows a lower error rate than Ak-LAESA. In particular, the error rate of the proposed method is almost 0 in the range α ≥ 0.9.
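The error rate E of Eq. (12) is straightforward to compute. A minimal sketch, where approx and opt are the lists of k approximate and k optimal neighbours and objects are compared with equal?, is:

(define (error-rate approx opt k)
  ;; percentage of approximate answers that do not appear in the optimal answer set
  (* 100.0
     (/ (length (filter (lambda (x) (not (member x opt))) approx))
        k)))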
From these two figures, we can see that the error rate and the number of distance computations can be controlled by changing the value of α. For example, the proposed method with α = 0.9 reduces the number of distance computations by about 28.6% and its error rate is almost 0.
We then examined the accuracy of the approximate solutions. We used α = 0.5 for the proposed method because the error rate of the proposed method with α = 0.5 is equal to that of Ak-LAESA. We performed 10-NN searches 10000 times for each method and examined the distribution of the kth approximate solution against the kth optimal solution. We show the results in Figs. 17 and 18. In each figure, the x axis represents the distance between a query object q and the kth object in the optimal solution, and the y axis shows the distance between q and the kth object in the approximate solution. A point near the line y = x indicates that the kth approximate solution is very close to the kth optimal solution. In Fig. 17, many points are widely scattered; in the worst case, some approximate solutions reach about 3 times the optimal solution. From these figures, we can see that the accuracy of the solutions found by the proposed method is superior to that of Ak-LAESA. We also show the result with α = 0.9 in Fig. 19: most points lie near the line y = x.
Though Ak-LAESA can drastically reduce the number of distance computations, its approximate solutions are often far from the optimal ones. On the other hand, the proposed method can reduce the number of distance computations to some extent with a very low error rate. Moreover, the accuracy of its approximate solutions is superior to that of Ak-LAESA.
Figure 17: The distribution of the approximate solutions found by Ak-LAESA against the optimal solutions.
Figure 18: The distribution of the approximate solutions found by the proposed method with α = 0.5 against the optimal solutions.
Conclusions
In this paper, we proposed some improvements of TLAESA. In order to reduce the number of distance computations in TLAESA, we changed the search order from depth-first to best-first and the tree structure from a binary tree to a multiway tree. In 1-NN and 10-NN searches in an 8-dimensional space, the proposed method reduced the number of distance computations by about 40%. We then proposed a method for selecting the root object of the search tree. This improvement is very simple but effective in reducing the number of accesses to the distance matrix. Finally, we extended our method to an approximation k-NN search algorithm that can guarantee the quality of its solutions: the approximate solutions it returns are at most 1/α times the optimal ones.
Experimental results show that the proposed method can reduce the number of distance computations with a very low error rate by selecting an appropriate value of α, and that the accuracy of its solutions is superior to that of Ak-LAESA. From these viewpoints, the method presented in this paper is very effective when distance computations are time-consuming.
Figure 19: The distribution of the approximate solutions found by the proposed method with α = 0.9 against the optimal solutions.
References
Ciaccia, P., Patella, M. & Zezula, P. (1997), M-tree: An efficient access method for similarity search in metric spaces, in `Proceedings of the 23rd International Conference on Very Large Data Bases (VLDB'97)', pp. 426-435.
Hjaltason, G. R. & Samet, H. (2003), `Index-driven similarity search in metric spaces', ACM Transactions on Database Systems 28(4), 517-580.
Mico, L. & Oncina, J. (1998), `Comparison of fast nearest neighbour classifiers for handwritten character recognition', Pattern Recognition Letters 19(3-4), 351-356.
Mico, L., Oncina, J. & Carrasco, R. C. (1996), `A fast branch & bound nearest neighbour classifier in metric spaces', Pattern Recognition Letters 17(7), 731-739.
Mico, M. L., Oncina, J. & Vidal, E. (1994), `A new version of the nearest-neighbour approximating and eliminating search algorithm (AESA) with linear preprocessing time and memory requirements', Pattern Recognition Letters 15(1), 9-17.
Moreno-Seco, F., Mico, L. & Oncina, J. (2002), `Extending LAESA fast nearest neighbour algorithm to find the k-nearest neighbours', Lecture Notes in Computer Science - Lecture Notes in Artificial Intelligence 2396, 691-699.
Moreno-Seco, F., Mico, L. & Oncina, J. (2003), `A modification of the LAESA algorithm for approximated k-NN classification', Pattern Recognition Letters 24(1-3), 47-53.
Navarro, G. (2002), `Searching in metric spaces by spatial approximation', The VLDB Journal 11(1), 28-46.
Rico-Juan, J. R. & Mico, L. (2003), `Comparison of AESA and LAESA search algorithms using string and tree-edit-distances', Pattern Recognition Letters 24(9-10), 1417-1426.
Vidal, E. (1986), `An algorithm for finding nearest neighbours in (approximately) constant average time', Pattern Recognition Letters 4(3), 145-157.
Yianilos, P. N. (1993), Data structures and algorithms for nearest neighbor search in general metric spaces, in `SODA '93: Proceedings of the fourth annual ACM-SIAM Symposium on Discrete algorithms', pp. 311-321.
Keywords: Approximation Search; TLAESA; Distance Computation; k Nearest Neighbour Search; Nearest Neighbour Search

Improving the Static Analysis of Embedded Languages via Partial Evaluation
Abstract: Programs in embedded languages contain invariants that are not automatically detected or enforced by their host language. We show how to use macros to easily implement partial evaluation of embedded interpreters in order to capture invariants encoded in embedded programs and render them explicit in the terms of their host language. We demonstrate the effectiveness of this technique in improving the results of a value flow analysis.
1. One Language, Many Languages
Every practical programming language contains small programming languages.
For example, C's\nprintf\n[18] supports a string-based\noutput formatting language, and Java [3] supports a declarative\nsub-language for laying out GUI elements in a window. PLT\nScheme [9] offers at least five such languages: one for formatting\nconsole output; two for regular expression matching; one for sending\nqueries to a SQL server; and one for laying out HTML pages.\nIn many cases, though not always, programs in these embedded\nspecial-purpose programming languages are encoded as strings. Library\nfunctions consume these strings and interpret them. Often the\ninterpreters consume additional arguments, which they use as inputs\nto the little programs.\nTake a look at this expression in PLT Scheme:\n(regexp-match "http://([a-z.]*)/([a-z]*)/" line)\nThe function\nregexp-match\nis an interpreter for the regular expression\nlanguage. It consumes two arguments: a string in the regular\nexpression language, which we consider a program, and another\nstring, which is that program's input. A typical use looks like the\nexample above. The first string is actually specified at the call site,\nwhile the second string is often given by a variable or an expression\nthat reads from an input port. The interpreter attempts to match the\nregular expression and the second string.\nIn PLT Scheme, the regular expression language allows programmers\nto specify subpatterns via parentheses. Our running example\ncontains two such subexpressions:\n([a-z.]*)\nand\n([a-z]*)\n. If\nthe regular expression interpreter fails to match the regular expression\nand the string, it produces false (\n#f\n); otherwise it produces a\nlist with n\n+ 1 elements: the first one for the overall match plus one\nper subexpression. Say\nline\nstands for\n"http://aaa.bbb.edu/zzz/"\nIn this case, the regular expression matches the string, and\nregexp-match\nproduces the list\n(list "http://aaa.bbb.edu/zzz/"\n"aaa.bbb.edu"\n"zzz")\nThe rest of the Scheme program extracts the pieces from this list\nand computes with them.\nThe\nregexp-match\nexpression above is a simplified excerpt from\nthe PLT Web Server [12]. Here is a slightly larger fragment:\n(let ([r (regexp-match\n"http://([a-z.]*)/([a-z]*)/" line)])\n(if r\n(process-url (third r) (dispatch (second r)))\n(log-error line)))\nNotice how the then-clause of the\nif\n-expression extracts the second\n16\nand third elements from\nr\nwithout any checks to confirm the length\nof the list. After all, the programmer knows that if\nr\nis not false,\nthen it is a list of three elements. The embedded program says so;\nit is a regular expression and contains two subexpressions.\nUnfortunately, the static analysis tools for PLT Scheme cannot\nreason on both levels.\nMrFlow [20], a static debugger, uses\na constraint-based analysis [22], a version of set-based analysis\n[2, 13, 10], to analyze the program and discover potential errors\n. If it finds one it can draw a flow graph from the source of the\nbad value to the faulty primitive operation. For the\nlet\n-expression\nabove, MrFlow finds that both\n(second r)\nand\n(third r)\nmay\nraise runtime errors because\nr\nmay not contain enough elements.\nIn this paper, we show how using Scheme macros to partially evaluate\ncalls to embedded interpreters such as\nregexp-match\ngreatly\nincreases the precision of the static analysis. 
Since we use macros,\nlibrary designers can easily implement the partial evaluation, rather\nthan relying on the host language implementor as they must for ad-hoc\nsolutions.\nIn Section 2 we give a brief overview of set-based analysis and MrFlow\n. In the next section we explain three examples of embedded\nlanguages and the problems they cause for MrFlow's static analysis\n. We then present in Section 4 our general approach to solving\nthose problems, based on macros. An overview of the macro\nsystem we use is given in Section 5. Section 6 then presents a general\ntechnique for translating embedded interpreters into macros. In\nSection 7, we explain the properties of the static analysis that enable\nit to find more results in partially evaluated code. Finally, in\nSection 8, we show how partially evaluating Scheme programs that\ncontain embedded programs helps MrFlow in our three examples.\nSection 9 presents related work and we conclude in Section 10.\nSet-Based Analysis\nTo explain how the results of a static analysis can be improved by\nusing partial evaluation of embedded languages, we first need to\ndescribe such an analysis. MrFlow, a static analyzer for DrScheme,\nuses a set-based value flow analysis to compute an approximation\nof the values that each subexpression of a program might evaluate to\nat runtime [22]. The approximation computed for each expression\nis a set of abstract values that can be displayed on demand. The\ndebugger can also draw arrows showing the flow of values through\nthe program.\nFigure 1 displays an example of analyzing a simple program.\nIn the box next to the term\n3\nis the abstract value for that term,\nmeaning that at runtime the term\n3\nmight evaluate to the value 3.\nThe arrow starting from the term\n3\nshows that at runtime the value\n3 might flow into the argument\nx\nof the function\nf\nand from there\nflow into the reference to the variable\nx\nin the body of\nf\n. There\nis a second reference to\nx\nin\nf\n--the corresponding arrow is not\nshown in this example. In the box next to the call to the Scheme\nprimitive\ngcd\nis the abstract value for the result of that call. Since\nthe analysis never tries to evaluate expressions, it uses the abstract\nvalue integer to represent the result of the primitive call, if any,\nwhich is a conservative approximation of the actual value that that\ncall might compute at runtime.\nThe biggest box displays the type of the adjacent\nif\n-expression,\nwhich is the union of the integer abstract value computed by the\ngcd\nprimitive and of the string \"hello\". Arrows show that the result of\nthe\nif\n-expression can come from both the then- and else-branches:\nthe analysis does not attempt to apply the\nnumber?\npredicate to the\nvariable\nx\n, so it conservatively assumes that both branches of the\nif\n-expression may be evaluated at runtime.\n\nThree Embedded Languages\nWe now turn to embedded languages, which are a useful technique\nfor establishing abstraction layers for a particular design space.\nFunctional languages are well-suited to writing interpreters for embedded\nlanguages, in which the higher-level embedded language is\nimplemented as a set of functions in the general purpose host language\nand has access to all of its features [15, 16, 24]. But these\nabstractions come at a cost for program analysis. 
In particular, tools\nbuilt to examine programs of the host language cannot derive information\nfor the programs in the embedded languages because they\ndo not understand the semantics of those languages.\nIn this section we demonstrate three examples of practical embedded\nlanguages for Scheme and show their negative effects on static\nanalysis. In the first example, properties of the embedded language\ncreate the possibility of errors that can go undetected by the analysis\n. In the next two examples, undetected properties lead to analyses\nthat are too conservative, resulting in many false positives; that is,\nthe analysis reports errors that can never actually occur.\n3.1\nFormat Strings\nThe PLT Scheme library provides a\nformat\nfunction, similar to C's\nsprintf\n, which generates a string given a format specifier and a\nvariable number of additional arguments. The format specifier is\na string containing some combination of literal text and formatting\ntags. These tags are interpreted along with the remaining arguments\nto construct a formatted string. The\nformat\nfunction is thus an\ninterpreter for the format specifier language. The format specifier\nis a program in this language and the additional arguments are its\ninputs.\nTo construct its output, the\nformat\nfunction requires the number\nof extra arguments to match the number of format tags, and these\narguments must be of the appropriate type. Consider the example\nof displaying an ASCII character and its encoding in hexadecimal:\n(format "~c = 0x~x" c n)\nIn this example, the format specifier, which contains the format tags\n"~c"\nand\n"~x"\nand some literal text, expects to consume exactly\ntwo arguments. These arguments must be a character and an integer\n, respectively. An incorrect number of arguments or a type\nmismatch results in a runtime error.\nUnfortunately analysis tools for Scheme such as MrFlow have no\na priori knowledge of the semantics of embedded languages. The\nanalysis cannot infer any information about the dependencies between\nthe contents of the format string and the rest of the arguments\nwithout knowledge of the syntax and semantics of the\nformat\nlanguage\n. As a result the analysis cannot predict certain categories of\nruntime errors, as shown in Figure 2. The application of\nformat\nis\nnot underlined as an error, even though its arguments appear in the\nwrong order and the analysis correctly computes the types of both\nc\nand\nn\n.\n17\nFigure 1. Analyzing a simple program with MrFlow.\n3.2\nRegular Expressions\nRegular expressions are used in all kinds of Scheme programs. The\nlanguage of regular expression patterns is embedded in Scheme as\nstrings. A library of functions interpret these strings as programs\nthat consume additional arguments as input strings and return either\na list of matched subpatterns or\n#f\nto indicate failure.\nConsider again the excerpt from the PLT Web Server from Section\n1. Programmers know that if the match succeeds, then the\nresult list contains exactly three elements: the result of the entire\nmatch, and the results of the two subpattern matches. Again the\nanalysis is unable to discover this invariant on its own. Figure 3\nshows the results of analyzing the sample code with MrFlow. 
The\nlist accessors\nsecond\nand\nthird\nare underlined in red because the\nanalysis cannot prove that their arguments are sufficiently long lists.\nProgrammers then must either go through each of these false positives\nand prove for themselves that the errors can never occur, or\nelse learn to ignore some results of MrFlow. Neither option is desirable\n. The former creates more work for the programmer, rather\nthan less; the latter is unsafe and easily leads to overlooked errors.\n3.3\nSchemeQL\nSchemeQL [28] is an embedded language for manipulating relational\ndatabases in Scheme. Unlike the string-based\nformat\nlanguage\n, SchemeQL programs consist of special forms directly embedded\ninside Scheme. The SchemeQL implementation provides\na set of macros that recognize these forms and expand them into\nScheme code. A typical database query in SchemeQL might look\nlike this:\n(direct-query (name age phone) directory)\ncorresponding to the SQL statement\nSELECT name, age, phone FROM directory\nThe result of executing a query is a lazy stream representing a cursor\nover the result set from the database server. Each element in the\nstream is a list of values representing a single row of the result set.\nThe cursor computes the rows by need when a program selects the\nnext sub-stream.\nProgrammers know that the number of elements in each row of a\ncursor is equal to the number of columns in the original request.\nOur analysis, however, cannot discover this fact automatically. Figure\n4 shows the results of an analysis of a SchemeQL query in the\ncontext of a trivial Scheme program. The example query consists of\nexactly three columns, and the code references the third element of\nthe first row. This operation can never fail, but the analysis is unable\nto prove this. Instead, it conservatively computes that\nrow\nis a list\nof unknown length: rec-type describes a recursive abstract value,\nwhich in the present case is the union of\nnull\nand a pair consisting\nof any value (top) and the abstract value itself, creating a loop in the\nabstract value that simulates all possible list lengths. MrFlow therefore\nmistakenly reports an error by underlining the primitive\nthird\nin red, since, according to the analysis,\nrow\nmight have fewer than\nthree elements at runtime.\nMacros for Partial Evaluation\nAll the embedded languages presented in the previous section have\none thing in common: they can encode invariants that are not visible\nto any analysis of the general purpose language in which they\nare embedded. These invariants can be exposed to analyses in two\nways:\nby extending the analyses in an ad-hoc manner for each embedded\nlanguage so that they understand its semantics, or\nby partially evaluating the embedded interpreters with regard\nto the embedded programs to make the invariants in the embedded\nprograms explicit as invariants in the host language,\nwhenever possible.\nThe first solution requires modifying each analysis to support each\nembedded language. The second solution can simply be implemented\nfrom within the host language through the old Lisp trick of\nusing \"compiler macros\" [25] as a light-weight partial evaluation\nmechanism. 
In the present case, instead of using partial evaluation\nto optimize programs for speed, we use it to increase the precision\nof program analyses.\nWhile Lisp's compiler macros are different from regular Lisp\nmacros, Scheme's macro system is powerful enough that the equivalent\nof Lisp's compiler macros can be implemented as regular\nScheme macros. The partial evaluation of embedded interpreters\nthen simply involves replacing the libraries of functions imple-18\nFigure 2. Imprecise analysis of the\nformat\nprimitive.\nFigure 3. Imprecise analysis of\nregexp-match\n.\nFigure 4. Imprecise analysis of a SchemeQL query.\n19\nmenting the interpreters with libraries of semantically equivalent\nmacros\n1\n. This has the additional advantage that it can be done by\nthe author of the library of functions, as opposed to the compiler's\nor analyzer's implementor in the case of ad-hoc extensions.\nOf course, the partial evaluation of embedded interpreters is only\npossible when their input programs are known statically. For example\n, it is not possible to expand a call to\nformat\nif the formatting\nstring given as its first argument is computed at runtime. The\nprogrammer therefore makes a trade-off between the precision of\nanalyses and how dynamic the code can be. In practice, though,\nthe embedded programs are often specified statically in user code.\nCombined with the simplicity of implementing partial evaluation\nwith macros, this makes for a useful technique for improving the\nprecision of analyses at a low cost.\nIn the next two sections, we describe some of the important features\nof the Scheme macro system and then explain how we make use of\nthis system to partially evaluate the interpreters of these embedded\nlanguages to improve the results of static analysis.\nMacros in Scheme\nScheme has a powerful macro system for extending the language\nwith derived expression forms that can be rewritten as expressions\nin the core language. Macros serve as a means of syntactic abstraction\n. Programmers can generalize syntactic patterns in ways that\nare not possible with functional abstraction. This technology also\nprovides a hook into the standard compiler tool chain by allowing\nprogrammers to implement additional program transformations before\ncompilation. In this section we describe the basics of standard\nScheme macros and introduce identifier macros, a generalization of\nthe contexts in which macros can be matched.\n5.1\nRule-Based Macros\nThe\ndefine-syntax\nspecial form allows the programmer to extend\nScheme with derived expression forms. Before compilation or execution\nof a Scheme program, all occurrences of these derived forms\nare replaced with their specified expansions.\nThe\nsyntax-rules\nform specifies macro expansions as rewrite\nrules. Consider the following simple macro, which defines a short-circuit\nlogical\nor\nas a derived form:\n(define-syntax or\n(syntax-rules ()\n[(or e1 e2)\n(let ([tmp e1])\n(if tmp tmp e2))]))\nThe macro defines a single rewrite rule, consisting of a pattern and\na template. The pattern matches the\nor\nkeyword in operator position\nfollowed by two pattern variables\ne1\nand\ne2\n, each matching\nan arbitrary subexpression in argument position. 
The template directs\nthe macro expansion to replace occurrences of the matched\npattern with a\nlet\n-expression constructed from the matched subexpressions\n.\n1\nThe transformation is not strictly speaking partial evaluation:\nthe reductions performed by the macros are not exactly the ones performed\nby the embedded interpreters. However, the macros share\nthe techniques and issues of partial evaluation since they simulate\nparts of the interpreters, and it is therefore useful to describe them\nas such.\nNotice that this\nor\nform cannot be defined as a regular function\nin Scheme. The second argument is only evaluated if the first argument\nevaluates to false. Since Scheme has a strict evaluation\nsemantics, a functional\nor\nwould necessarily evaluate both of its\narguments before computing a result. Controlling the evaluation of\nexpressions is an important use of Scheme macros. Macros can also\nabstract over other syntactic forms in ways that functions cannot by\nexpanding into second-class language constructs such as\ndefine\n.\n5.2\nLexical Scope\nMacros written with the standard Scheme\nsyntax-rules\nmechanism\nare both hygienic and referentially transparent. Hygienic\nmacro expansion guarantees that binding forms inside the definition\nof the macro template do not capture free variables in macro\narguments. Consider the following use of our\nor\nmacro:\n2\n(or other tmp)\n\n(let ([tmp\n1\nother])\n(if tmp\n1\ntmp\n1\ntmp))\nHygienic expansion automatically renames the variable bound inside\nthe expanded macro template to avoid capturing the free variable\nin the macro argument.\nReferential transparency complements hygiene by ensuring that\nfree variables inside the macro template cannot be captured by the\ncontext of the macro call site. For example, if the context that invokes\nor\nrebinds the\nif\nname, the expansion algorithm renames the\nbinding in the caller's context to avoid capturing the variable used\nin the template body:\n(let ([if 3])\n(or if #f))\n\n(let ([if\n1\n3])\n(let ([tmp if\n1\n])\n(if tmp tmp #f)))\nThe combination of hygiene and referential transparency produces\nmacros that are consistent with Scheme's rules of lexical scope and\ncan be invoked anywhere in a program without the danger of unexpected\nvariable capture.\n3\n5.3\nIdentifier Macros\nThe\nsyntax-rules\nform only matches expressions in which the\nmacro name occurs in \"application position,\" i.e., as the operator in\nan application expression. References to a\nsyntax-rules\nmacro in\nother contexts result in syntax errors:\n(fold or #f ls)\nsyntax error\nPLT\nScheme's\nsyntax-id-rules\nform\nis\nsimilar\nto\nsyntax-rules\nbut matches occurrences of the macro keyword\nin arbitrary expression contexts: in operator position, operand\nposition, or as the target of an assignment.\n2\nWe use the convention of representing macro expansion with a\ndouble-arrow (\n) and ordinary (runtime) evaluation with a single-arrow\n(\n).\n3\nMacros can also be defined in and exported from modules in\nPLT Scheme [11].\n20\nThe following macro demonstrates a hypothetical use of\nsyntax-id-rules\n:\n(define-syntax clock\n(syntax-id-rules (set!)\n[(set! clock e) (set-clock! e)]\n[(clock e) (make-time-stamp (get-clock) e)]\n[clock (get-clock)]))\nThe list of identifiers following\nsyntax-id-rules\n, which was\nempty in our previous examples, now includes the\nset!\nidentifier\n, indicating that\nset!\nis to be treated as a keyword rather than a\npattern variable. 
The first rewrite rule matches expressions in which\nthe\nclock\nname occurs as the target of an assignment. The second\nrule is familiar, matching the macro in application position. The final\nrule matches the identifier\nclock\nin any context not matched by\nthe previous two rules. In addition to the usual application context,\nwe can use the\nclock\nmacro in an argument position:\n(+ clock 10)\n\n(+ (get-clock) 10)\nor as a\nset!\ntarget:\n(set! clock 5)\n\n(set-clock! 5)\n5.4\nProgrammatic Macros\nThe\nlanguage\nof\npatterns\nand\ntemplates\nrecognized\nby\nsyntax-rules\nand\nsyntax-id-rules\nis actually a special\ncase of Scheme macros.\nIn general, the\ndefine-syntax\nform\nbinds a transformer procedure\n(define-syntax\nname\n(lambda (stx)\netc\n.\nThe argument to the transformer procedure is a syntax object, which\nis similar to an S-expression representing quoted code, but which\nalso encapsulates information about the lexical context of the code,\nsuch as source file location and variable bindings. This context\ninformation is essential in allowing DrScheme's language tools to\ntrace errors and binding relationships back to the original source location\nin the user's code where a macro is invoked. Because syntax\nobjects are so similar to quoted data, the standard library includes\nthe\nsyntax-object->datum\nprocedure, which strips the lexical information\nfrom a syntax object and returns its corresponding datum.\nFor example, the datum corresponding to a syntax object representing\na literal number is its numeric value, the datum corresponding\nto an identifier is a symbol representing the identifier's name, and\nso on.\nA syntax transformer procedure accepts as its argument a syntax\nobject representing the expression that invoked the macro,\nand produces a new syntax object, which the macro expansion\nalgorithm uses to replace the original expression.\nAll Scheme\nmacros are syntax transformers; although the\nsyntax-rules\nand\nsyntax-id-rules\nforms do not use the\nlambda\nnotation, they are\nthemselves implemented as macros that expand to syntax transformer\nprocedures.\nThe\nsyntax-case\nfacility allows the construction of macros with\npattern matching, as with\nsyntax-rules\nand\nsyntax-id-rules\n,\nbut with arbitrary expressions in place of templates for the result\nexpressions. For example, the above\nor\nmacro would be defined as:\n(define-syntax or\n(lambda (stx)\n(syntax-case stx ()\n[(or e1 e2)\n#'(let ([tmp e1])\n(if tmp tmp e2))])))\nThe macro is almost the same as before, but for two refinements.\nFirst, the\nsyntax-case\nform takes the argument\nstx\nexplicitly,\nwhereas\nsyntax-rules\nimplicitly defines a transformer procedure\nand operates on the procedure argument. Second, the result expression\nis prefixed by the syntax-quoting\n#'\noperator, which is\nanalogous to Scheme's\nquote\noperator\n'\n. Whereas an expression\nprefixed with\n'\nevaluates to a quoted S-expression, a\n#'\nexpression\nbecomes a quoted syntax object that also includes lexical information\n. Similarly, the quasisyntax operator\n#`\nand unsyntax operator\n#,\nbehave for syntax objects like the quasiquote and unquote operators\nfor S-expressions, respectively.\nThe use of arbitrary computations in the result expression allows\nmacros to expand differently based on the results of actual computations\n:\n(define-syntax swap\n(lambda (stx)\n(syntax-case stx ()\n[(swap a b)\n(if (and (identifier? #'a)\n(identifier? #'b))\n#'(let ([tmp b])\n(set! b a)\n(set! 
a tmp))\n(raise-syntax-error\n'swap "expects identifiers"\nstx))])))\nIn this example, if\nswap\nis not given identifiers as arguments, the\nraise-syntax-error\nfunction uses the lexical information in the\nstx\nsyntax object to highlight the original\nswap\nexpression in the\nuser's code.\nConditional matching can also be achieved using pattern guards,\nwhich can inspect a matched expression and determine whether to\naccept a match:\n(define-syntax swap\n(lambda (stx)\n(syntax-case stx ()\n[(swap a b)\n(and (identifier? #'a)\n(identifier? #'b))\n#'(let ([tmp b])\n(set! b a)\n(set! a tmp))])))\nThe pattern guard is a new expression, inserted between the pattern\nand the result expressions. A guarded match only succeeds if its\nguard does not evaluate to false; when a guard fails, the pattern\nmatcher falls through to attempt the next pattern in the list.\nMacros for Interpreters\nIn this section, we present a general technique for specializing embedded\ninterpreters with macros, and explain how we apply this\ntechnique to the three embedded languages described in Section 3.\n21\nThe technique can be summarized in the following steps:\n1. Write the interpreter compositionally as a module of library\nfunctions.\n2. Replace the interpreter's main function with a macro that unfolds\nthe case dispatch on the input (the embedded program)\nwhen it is known statically.\n3. Default to the original function when the input is not known\nat compile time.\nWriting the interpreters compositionally serves two purposes. First,\nby delegating the interpretation of the program constructs that make\nup an embedded program to separate functions, it becomes possible\nto share code between the original interpreter and the macro\nthat replaces it. This effectively limits the macro's responsibility\nto a simple dispatch. Second, compositionality makes it easier to\nguarantee that unfolding terminates, since the recursive macro calls\nalways operate on smaller terms.\n6.1\nFormat Strings\nThe implementation of a string formatter involves a number of simple\nlibrary functions to convert each possible type of argument to\nstrings. Each formatting tag corresponds to one of these combinators\n. For example, the\n"~c"\ntag corresponds to a combinator,\nformat/char\n, which accepts a character and converts it to a string,\nthe\n"~x"\ntag corresponds to\nformat/hex\n, which converts integers\nto their hexadecimal representation, and so forth. The string formatter\nthen simply dispatches to these combinators based on the\ncontent of the formatting string:\n(define (format s . args)\n(cond\n[(string=? s "") ""]\n[(string=? (substring s 0 2) "~c")\n(string-append (format/char (car args))\n(apply format\n(substring s 2)\n(cdr args)))]\netc\n.\n))\nThe interpreter accepts the formatting string\ns\nand, based on formatting\ntags like\n"~c"\nthat it finds, decomposes the string into a\nseries of applications of the corresponding combinators to successive\narguments of\nformat\n(represented by\nargs\n). It reassembles\nthe transformed pieces with the standard\nstring-append\nfunction.\nIn order to specialize the\nformat\ninterpreter, we replace it with a\nmacro that re-uses its associated combinators:\n(define (format/dynamic s . args)\nas before\n)\n(define-syntax format\n(lambda (stx)\n(syntax-case stx ()\n[(format s-exp a1 a2 ...)\n(string? (syntax-object->datum #'s-exp))\n(let ([s (syntax-object->datum #'s-exp)])\n(cond\n[(string=? s "") #'""]\n[(string=? 
(substring s 0 2) "~c")\n#`(string-append\n(format/char a1)\n(format #,(substring s 2) a2 ...))]\netc\n.\n))]\n[(format s-exp a1 a2 ...)\n#'(format/dynamic s-exp a1 a2 ...)]\n[format\n(identifier? #'format)\n#'format/dynamic])))\nThe partial evaluation works by unfolding the interpreter's top-level\ncase dispatch on the program text. Rather than delaying the inspection\nof the string to runtime, the macro precomputes the result of the\ndecomposition statically whenever the string is given as a literal.\nWe can identify literal strings through the use of a pattern guard.\nMore precisely, the macro can inspect the syntax object\ns-exp\n,\ncorresponding to\nformat\n's first argument, and determine whether\nit can be converted to a string via\nsyntax-object->datum\n. When\nthe conversion succeeds, the pattern guard allows the match to succeed\n, and partial evaluation proceeds.\nAfter the macro expansion, the resulting program text consists of\nthe application of\nstring-append\nto the calls to the library functions\n, with no references to the interpreter:\n(format "~c = 0x~x" c n)\n\n(string-append (format/char c)\n" = 0x"\n(format/hex n))\nIn order for the replacement of the original function with a macro\nto be unobservable, the macro must behave exactly like the original\nfunction in all contexts. When\nformat\nis applied to a dynamic\nformatting string, the macro defaults to the original functional implementation\n. Similarly, when\nformat\nis passed as an argument to\na higher-order function, we use the technique of identifier macros\nto refer to the original function.\n4\n6.2\nRegular Expressions\nOne of PLT Scheme's regular expression engines uses the two-continuation\nmodel of backtracking [1].\nA regular expression\n\"matcher\" is represented as a function that accepts a success continuation\nand a failure continuation. When a matcher succeeds in\nmatching its input, it applies its success continuation to the accepted\ninput, and when it fails to match, it invokes its failure continuation.\nThis allows the interpretation of the alternation operator \"\n|\n\" to try\neach alternate pattern sequentially: an alternation matcher tries to\nmatch its first pattern with a failure continuation to try the second\npattern. Thus if the first pattern fails, the matcher invokes the failure\ncontinuation, which tries the second pattern. Otherwise, the failure\ncontinuation is disregarded and the matcher applies its success continuation\n, which skips the second pattern and returns the result of\nthe first match.\nEach of the regular expression constructions corresponds to a functional\ncombinator that produces a matcher.\nThese combinators\ncan express the standard operators of regular expressions: success\n, failure, alternation, concatenation, and repetition (i.e., Kleene\nstar).\nThere is also a\nsubmatch\ncombinator for the parenthesized\nsubpatterns in the original regular expression. A successful\nregexp-match\nreturns a list with the entire matched string followed\nby each submatch corresponding to a parenthesized subpattern\n. Any subpattern that does not match corresponds to an entry of\nfalse (\n#f\n) in the result list. 
For example, the following successful\n4\nThe case of\nset!\nis not critical since, in PLT Scheme, imported\nmodule references cannot be the target of an assignment.\n22\nmatch contains a failed submatch:\n(regexp-match "a((b)|(c))" "ac")\n\n(list "ac" "c" #f "c")\nRegardless of the contents of the second argument, there is always\nexactly one element in the result list for each parenthesized subpattern\nin the regular expression. The\nsubmatch\noperator accomplishes\nthis by wrapping a given matcher with continuations that\nadd either the result of a successful match or false to a list of indexed\nsubmatches accumulated during the match. The initial (success\n) continuation for\nregexp-match\nsorts the accumulated list of\nindexed submatches, adding false entries for all submatches that\nwere never reached because of backtracking.\nPartial evaluation of the regular expression library works by unfolding\nthe definitions of the combinators as well as the contents of\nthe initial continuation. Each application of a combinator gets replaced\nby an application of a copy of the body of the combinator's\ndefinition.\n5\nThe recursive code that constructs the result list in the\nsuccess continuation gets expanded into an explicit chain of\ncons\nexpressions:\n(regexp-match "a((b)|(c))" input)\n\n((build-matcher input)\n(lambda (subs)\n(cons (lookup subs 0)\n(cons (lookup subs 1)\n(cons (lookup subs 2)\n(cons (lookup subs 3) null)))))\n(lambda () #f))\nSince the size of the result list is known, it is possible to unfold\nrecursive definitions, such as the initial continuation that constructs\nthe match result, to make the structure of the result explicit.\nFinally, in the cases where the embedded program is not known statically\n, or when\nregexp-match\nis used in non-application contexts,\nthe macro expands to the original functional definition.\n6.3\nSchemeQL\nThe SchemeQL language differs from the other examples in that its\nprograms are not embedded as strings but rather as special forms\nrecognized by a library of macros. This means that for queries\nthat select from a fixed set of columns, the length of cursor rows\nis always known statically; the column names are specified as a\nsequence of identifiers in the syntax of the query form.\nJust as the interpreters for the string-based embedded programs\nperform a case dispatch on the contents of program strings, the\nSchemeQL macros dispatch on the shape of the query expressions.\nThe cases where partial evaluation is possible can be captured by\ninserting additional rules into the original library's macros.\nPartial evaluation of SchemeQL queries uses the same technique as\nfor the regular expression library: the recursive function that constructs\na cursor row is unfolded into an explicit chain of\ncons\nexpressions\n. Since we know the length of the cursor row statically,\nthe unfolding is guaranteed to terminate.\n5\nIt is convenient to define the Kleene star operator recursively\nby p\n\n= (pp\n\n)|\n\n. However, this non-compositional definition leads\nto an infinite macro expansion, so the macro must carefully avoid\nunfolding such a definition.\nSince the SchemeQL library is implemented as macros, there is no\nneed to capture the cases where the query forms are used in non-application\ncontexts. Adding special cases to the existing macro\ndoes not affect its set of allowable contexts. 
Similarly, the cases\nwhere the row length is not known statically are already handled by\nthe existing SchemeQL macros.\nStatic Analysis for Scheme\nMrFlow's value flow analysis is an extension of an ordinary set-based\nclosure analysis like Palsberg's [22]. For every expression in\na program, MrFlow statically computes a conservative approximation\nof the set of values to which the expression might evaluate at\nruntime. From a given expression it creates a graph that simulates\nthe flow of values inside the expression. The analysis simulates\nevaluation by propagating abstract values in this graph until reaching\na fixed point. From the set of abstract values that propagate to\na given node, the analysis reconstructs a type that is then displayed\nto the user through DrScheme's graphical interface.\nExtensions to the basic analysis include, among other things: analyzing\nfunctions that can take any number of arguments, analyzing\nassignments to variables (\nset!\n), and analyzing generative data\nstructure definitions. MrFlow also supports all the primitives defined\nin R\n5\nRS [17]. The vast majority of these primitives are defined\nusing a special, type-like language embedded inside the analyzer\n. For a given primitive, the corresponding type translates to\na graph that simulates the primitive's internal flows. The analysis\nthen proceeds just like for any other expression. The few remaining\nprimitives need special handling because of their imperative nature\n(\nset-car!\nor\nvector-fill!\n) and are analyzed in an ad-hoc manner\n.\nBy default, MrFlow analyzes the\nformat\nprimitive based on the\nfollowing pseudo-type description:\n(string top *-> string)\nThe\n*\nin the\n*->\nconstructor means that the primitive is a function\nthat can take any number of arguments as input beyond the ones\nexplicitly specified. In the present case, the function must receive\na string as its first argument, followed by any number of arguments\nof any type (represented by the pseudo-type\ntop\n), and returns a\nstring. Given such a description, the only errors MrFlow detects are\nwhen the primitive is given something other than a string as first\nargument, or if it is given no argument at all.\nAfter partial evaluation, the application of\nformat\nis replaced by\ncalls to its individual library functions such as\nformat/char\nand\nformat/hex\n. These functions have respectively the pseudo-types\n(char -> string)\nand\n(integer -> string)\nUsing this more precise information, MrFlow can detect arguments\nto the original\nformat\ncall that have the wrong type. Checking that\nthe\nformat\nprimitive receives the right number of arguments for\na given formatting string happens during partial evaluation, so the\nanalyzer never sees arity errors in the expanded code.\nSince DrScheme's syntax object system keeps track of program\nterms through the macro expansions [11], MrFlow is then able to\ntrace detected errors back to the original guilty terms in the user's\n23\nprogram and flag them graphically. Arrows representing the flow\nof values can also be displayed interactively in terms of the original\nprogram, allowing the user to track in the program the sources of\nthe values that triggered the errors. 
In essence, the only requirement\nfor MrFlow to analyze the partially evaluated code of\nformat\nis to\nspecify the pseudo-types for the library functions introduced by the\ntransformations, like\nformat/char\n6\n.\nSimilarly, it is enough to define pseudo-types for the functions\nused in the partially evaluated form of SchemeQL's\nquery\nto have\nMrFlow automatically compute precise results without any further\nmodifications.\nThe partial evaluation for regular expressions is more challenging.\nConsider the example from Section 1:\n(let ([r (regexp-match\n"http://([a-z.]*)/([a-z]*)/" line)])\n(if r\n(process-url (third r) (dispatch (second r)))\n(log-error)))\nAfter the call to\nregexp-match\n, the variable\nr\ncan be either a list\nof three elements or false. Based on its conservative pseudo-type\nspecification for\nregexp-match\n, MrFlow computes that\nr\ncan be\neither a list of unknown length or false. This in turn triggers two\nerrors for each of the\nsecond\nand\nthird\nprimitives: one error because\nthe primitive might be applied to false when it expected a list,\nand one error because it might be applied to a list that is too short.\nThe second kind of false positives can be removed by partially evaluating\nregexp-match\nto make the structure of the result more explicit\nto MrFlow, as described in Section 6.2. The analysis then\ndetermines that the primitive returns either a list of three elements\nor false and in turn checks that\nsecond\nand\nthird\nare applied to a\nlist with enough elements.\nStill, the possible return values of\nregexp-match\nmay contain\nfalse. Indeed, false will be the value returned at runtime if the line\ngiven to\nregexp-match\ndoes not match the pattern. The programmer\nhas to test for such a condition explicitly before processing\nthe result any further. The only way for MrFlow not to show a false\npositive for\nsecond\nand\nthird\n, because of the presence of this false\nvalue, is to make the analysis aware of the dependency between the\ntest of\nr\nand the two branches of the\nif\n-expression. This form of\nflow-sensitive analysis for\nif\n-expressions is difficult to implement\nin general since there is no bound to the complexity of the tested expression\n. In practice, however, an appreciable proportion of these\ntests are simple enough that an ad-hoc solution is sufficient.\nIn the case where the test is simply a variable reference it is enough\nto create two corresponding ghost variables, one for each branch\nof the\nif\n, establish filtering flows between the variable\nr\nand the\ntwo ghost variables, and make sure each ghost variable binds the\nr\nvariable references in its respective branch of the\nif\n-expression.\nThe filtering flows prevent the false abstract value from flowing into\nthe then branch of the\nif\n-expression and prevent everything but the\nfalse value from flowing into the else branch. Only the combination\nof this flow sensitivity for\nif\n-expressions with the partial evaluation\nof\nregexp-match\ngives analysis results with no false positives.\n6\nSpecifying such pseudo-types will not even be necessary once\nMrFlow knows how to analyze PLT Scheme contracts. 
This is the\nsubject of a forthcoming paper.\nOnce flow-sensitive analysis of\nif\n-expressions is added and\npseudo-type descriptions of the necessary primitives are provided\nto the analysis, partial evaluation makes all the false positives described\nin Section 3 disappear, as we illustrate in the next section.\nImprovement of Static Analysis\nPartially evaluating\nformat\neliminates the possibility of runtime\narity errors, since the macro transformations can statically check\nsuch invariants. It also allows MrFlow to detect type errors that\nit could not detect before, since the corresponding invariants were\ndescribed only in the embedded formatting language. These invariants\nare now explicit at the Scheme level in the transformed\nprogram through the use of simpler primitives like\nformat/char\nor\nformat/integer\n. Figure 5 shows the same program as in Figure\n2, but after applying partial evaluation. The\nformat\nprimitive\nis now blamed for two type errors that before could be found only\nat runtime. The error messages show that the user simply gave the\narguments\nn\nand\nc\nin the wrong order.\nSimilarly, specializing the regular expression engine with respect\nto a pattern eliminates false positives. The length of the list returned\nby\nregexp-match\ncannot be directly computed by the analysis\nsince that information is hidden inside the regular expression\npattern. As a result, the applications of\nsecond\nand\nthird\nin Figure\n3 are flagged as potential runtime errors (we have omitted the\nfairly large error messages from the figure). After specialization,\nthe structure of the value returned by\nregexp-match\nis exposed to\nthe analysis and MrFlow can then prove that if\nregexp-match\nreturns\na list, it must contain three elements. The false positives for\nsecond\nand\nthird\ndisappear in Figure 6.\nOf course,\nregexp-match\ncan also return false at runtime, and the\nanalysis correctly predicts this regardless of whether partial evaluation\nis used or not. Adding flow sensitivity for\nif\n-expressions\nas described in Section 7 removes these last spurious errors in Figure\n6.\nPartial evaluation now allows the precise analysis of SchemeQL\nqueries as well. Figure 7 shows the precise analysis of the same\nprogram as in Figure 4, this time after partial evaluation. As with\nregexp-match\n, the analysis previously computed that\ncursor-car\ncould return a list of any length, and therefore flagged the call to\nthird\nas a potential runtime error. This call is now free of spurious\nerrors since the partial evaluation exposes enough structure of the\nlist returned by\ncursor-car\nthat MrFlow can compute its exact\nlength and verify that\nthird\ncannot fail at runtime.\nWhile the results computed by the analysis become more precise,\npartially evaluating the interpreters for any of the three embedded\nlanguages we use in this paper results in code that is bigger than the\noriginal program. Bigger code in turn means that analyses will take\nmore time to complete. There is therefore a trade-off between precision\nand efficiency of the analyses. We intend to turn that trade-off\ninto a user option in MrFlow. The user might also exercise full\ncontrol over which embedded languages are partially evaluated and\nwhere by using either the functional or macro versions of the embedded\nlanguages' interpreters, switching between the two through\nthe judicious use of a module system, for example [11].\nNote that partial evaluation does not always benefit all analyses. 
In the regexp-match example from Figure 6, spurious errors disappear because MrFlow has been able to prove that the list r is of length three and therefore that applying the primitives second or third to r cannot fail.
Figure 5. Precise analysis of the format primitive.
Figure 6. Precise analysis of regexp-match.
Figure 7. Precise analysis of a SchemeQL query.
If the analysis were a Hindley-Milner-like type system, though, no difference would be seen whether partial evaluation were used or not. Indeed, while such a type system could statically prove that the arguments given to second or third are lists, it would not attempt to prove that they are lists of the required length, and a runtime test would still be required. Using partial evaluation to expose such a property to the analysis would therefore be useless. Simply put, making invariants from embedded programs explicit in the host language only matters if the system analyzing the host language cares about those invariants.
This does not mean partial evaluation is always useless when used in conjunction with a Hindley-Milner type system, though. Partially evaluating format, for example, would allow the type system to verify that the formatting string agrees with the types of the remaining arguments. This is in contrast to the ad-hoc solution used in OCaml [19] to type check the printf primitive, or the use of dependent types in the case of Cayenne [4].
Related Work
Our work is analogous to designing type-safe embedded languages such as the one for printf [21, 4]. Both problems involve determining static information about programs based on the values of embedded programs. In some cases, designers of typed languages simply extend the host language to include specific embedded languages. The OCaml language, for example, contains a special library for printf [19] and uses of printf are type-checked in an ad-hoc manner. Similarly, the GCC compiler for the C language uses ad-hoc checking to find errors in printf format strings. Danvy [7] and Hinze [14] suggest implementations of printf in ML and Haskell, respectively, that obviate the need for dependent types by recasting the library in terms of individual combinators. In our system, those individual combinators are automatically introduced during macro expansion. The C++ language [26] likewise avoids the problem of checking invariants for printf by breaking its functionality into smaller operations that do not require the use of an embedded formatting language.
A work more closely related to ours is the Cayenne language [4]. Augustsson uses a form of partial evaluation to specialize dependent types into regular Haskell-like types that can then be used by the type system to check the user's program. Our macro system uses macro-expansion-time computation to specialize expressions so that the subsequent flow analysis can compute precise value flow results. Augustsson's dependent type system uses computation performed at type-checking time to specialize dependent types so that the rest of the type checking can compute precise type information. The specialization is done in his system through the use of type-computing functions that are specified by the user and evaluated by the type system.
The main difference is that his system is used to compute specialized types and verify that the program is safe. Once the original program has been typed it is just compiled as-is with type checking turned off.
This means that in the case of\nformat\n, for example,\nthe formatting string is processed twice: once at type checking time\nto prove the safety of the program, and once again at run time to\ncompute the actual result. Our system is used to compute specialized\nexpressions. This means that the evaluation of the\nformat\n's\nstring needs to be done only once. Once specialized, the same program\ncan either be run or analyzed to prove its safety. In both cases\nthe format string will not have to be reprocessed since it has been\ncompletely replaced by more specialized code.\nAnother difference is that in our system, non-specialized programs\nare still valid programs that can be analyzed, proved safe, and run\n(though the result of the analysis will probably be more conservative\nthan when analyzing the corresponding partially evaluated\nprogram, so proving safety might be more difficult). This is not\npossible in Cayenne since programs with dependent types cannot\nbe run without going through the partial evaluation phase first.\nMuch work has gone into optimization of embedded languages.\nHudak [15], Elliott et al [8], Backhouse [5], Christensen [6], and\nVeldhuizen [27] all discuss the use of partial evaluation to improve\nthe efficiency of embedded languages, although none makes the\nconnection between partial evaluation and static analysis. In Back-house's\nthesis he discusses the need to improve error checking for\nembedded languages, but he erroneously concludes that \"syntactic\nanalyses cannot be used due to the embedded nature of domain-specific\nembedded languages.\"\nThe Lisp programming language ([25], Section 8.4) provides for\n\"compiler macros\" that programmers can use to create optimized\nversions of existing functions. The compiler is not required to use\nthem, though. To our knowledge, there is no literature showing\nhow to use these compiler macros to improve the results of static\nanalyses. Lisp also has support for inlining functions, which might\nhelp monovariant analyses by duplicating the code of a function at\nall its call sites, thereby simulating polyvariant analyses.\nBigloo [23] is a Scheme compiler that routinely implements embedded\nlanguages via macros and thus probably provides some of\nthe benefits presented in this paper to the compiler's internal analyses\n. The compiler has a switch to \"enable optimization by macro\nexpansion,\" though there does not seem to be any documentation or\nliterature describing the exact effect of using that switch.\nConclusion\nPrograms in embedded languages contain invariants that are not automatically\nenforced by their host language. We have shown that\nusing macros to partially evaluate interpreters of little languages\nembedded in Scheme with respect to their input programs can recapture\nthese invariants and convey them to a flow analysis. Because\nit is based on macros, this technique does not require any\nad-hoc modification of either interpreters or analyses and is thus\nreadily available to programmers. This makes it a sweet spot in\nthe programming complexity versus precision landscape of program\nanalysis. We intend to investigate the relationship between\nmacros and other program analyses in a similar manner.\nAcknowledgments\nWe thank Matthias Felleisen, Mitchell Wand, and Kenichi Asai for\nthe discussions that led to this work and for their helpful feedback\n. Thanks to Matthew Flatt for his help with the presentation\nof Scheme macros. 
Thanks to Dale Vaillancourt for proofreading\nthe paper and to Ryan Culpepper for his macrological wizardry.\nReferences\n[1] H. Abelson and G. J. Sussman. The Structure and Interpretation\nof Computer Programs. MIT Press, Cambridge, MA,\n1985.\n[2] A. Aiken. Introduction to set constraint-based program analysis\n. Science of Computer Programming, 35:79111, 1999.\n26\n[3] K. Arnold, J. Gosling, and D. Holmes. The Java Programming\nLanguage. Addison-Wesley, 3d edition, 2000.\n[4] L. Augustsson. Cayenne--a language with dependent types.\nIn Proceedings of the third ACM SIGPLAN international conference\non Functional programming, pages 239250. ACM\nPress, 1998.\n[5] K. Backhouse. Abstract Interpretation of Domain-Specific\nEmbedded Languages. PhD thesis, Oxford University, 2002.\n[6] N. H. Christensen. Domain-specific languages in software development\nand the relation to partial evaluation. PhD thesis\n, DIKU, Dept. of Computer Science, University of Copenhagen\n, Universitetsparken 1, DK-2100 Copenhagen East,\nDenmark, July 2003.\n[7] O. Danvy. Functional unparsing. Journal of Functional Programming\n, 8(6):621625, 1998.\n[8] C. Elliott, S. Finne, and O. de Moor. Compiling embedded\nlanguages. In SAIG, pages 927, 2000.\n[9] R. B. Findler, J. Clements, M. F. Cormac Flanagan, S. Krishnamurthi\n, P. Steckler, and M. Felleisen. DrScheme: A\nprogamming environment for scheme. Journal of Functional\nProgramming, 12(2):159182, March 2002.\n[10] C. Flanagan and M. Felleisen. Componential set-based analysis\n. ACM Trans. on Programming Languages and Systems,\n21(2):369415, Feb. 1999.\n[11] M. Flatt. Composable and compilable macros: you want it\nwhen? In Proceedings of the seventh ACM SIGPLAN international\nconference on Functional programming, pages 7283.\nACM Press, 2002.\n[12] P. Graunke, S. Krishnamurthi, S. V. D. Hoeven, and\nM. Felleisen.\nProgramming the web with high-level programming\nlanguages. In Programming Languages and Systems\n, 10th European Symposium on Programming, ESOP\n2001, Proceedings, volume 2028 of Lecture Notes in Computer\nScience, pages 122136, Berlin, Heidelberg, and New\nYork, 2001. Springer-Verlag.\n[13] N. Heintze.\nSet Based Program Analysis.\nPhD thesis,\nCarnegie-Mellon Univ., Pittsburgh, PA, Oct. 1992.\n[14] R. Hinze. Formatting: a class act. Journal of Functional\nProgramming, 13(5):935944, 2003.\n[15] P. Hudak. Modular domain specific languages and tools. In\nProceedings of Fifth International Conference on Software\nReuse, pages 134142, June 1998.\n[16] S. N. Kamin. Research on domain-specific embedded languages\nand program generators. In R. Cleaveland, M. Mis-love\n, and P. Mulry, editors, Electronic Notes in Theoretical\nComputer Science, volume 14. Elsevier, 2000.\n[17] R. Kelsey, W. Clinger, and J. R. [editors]. Revised\n5\nreport\non the algorithmic language Scheme. Higher-Order and Symbolic\nComputation, 11(1):7104, August 1998. Also appeared\nin SIGPLAN Notices 33:9, September 1998.\n[18] B. W. Kernighan and D. M. Ritchie. The C programming language\n. Prentice Hall Press, 1988.\n[19] X. Leroy. The Objective Caml System, release 3.07, 2003.\nhttp://caml.inria.fr/ocaml/htmlman\n.\n[20] P. Meunier.\nhttp://www.plt-scheme.org/software/\nmrflow\n.\n[21] M. Neubauer, P. Thiemann, M. Gasbichler, and M. Sperber.\nFunctional logic overloading. In Proceedings of the 29th ACM\nSIGPLAN-SIGACT symposium on Principles of programming\nlanguages, pages 233244. ACM Press, 2002.\n[22] J. Palsberg. Closure analysis in constraint form. Proc. ACM\nTrans. 
on Programming Languages and Systems, 17(1):47\n62, Jan. 1995.\n[23] M. Serrano and P. Weis. Bigloo: A portable and optimizing\ncompiler for strict functional languages. In Static Analysis\nSymposium, pages 366381, 1995.\n[24] O. Shivers. A universal scripting framework, or Lambda: the\nultimate \"little language\". In Proceedings of the Second Asian\nComputing Science Conference on Concurrency and Parallelism\n, Programming, Networking, and Security, pages 254\n265. Springer-Verlag, 1996.\n[25] G. L. Steele. COMMON LISP: the language. Digital Press, 12\nCrosby Drive, Bedford, MA 01730, USA, 1984. With contributions\nby Scott E. Fahlman and Richard P. Gabriel and David\nA. Moon and Daniel L. Weinreb.\n[26] B. Stroustrup. The C++ Programming Language, Third Edition\n. Addison-Wesley Longman Publishing Co., Inc., 1997.\n[27] T. L. Veldhuizen. C++ templates as partial evaluation. In Partial\nEvaluation and Semantic-Based Program Manipulation,\npages 1318, 1999.\n[28] N. Welsh, F. Solsona, and I. Glover.\nSchemeUnit and\nSchemeQL: Two little languages. In Proceedings of the Third\nWorkshop on Scheme and Functional Programming, 2002.\n27", "keywords": "macros;interpreter;value flow analysis;flow analysis;set-based analysis;partial evaluation;embedded language;Partial evaluation;regular expression;embedded languages;Scheme"} {"name": "108", "title": "IncSpan: Incremental Mining of Sequential Patterns in Large Database", "abstract": "Many real life sequence databases grow incrementally. It is undesirable to mine sequential patterns from scratch each time when a small set of sequences grow, or when some new sequences are added into the database. Incremental algorithm should be developed for sequential pattern mining so that mining can be adapted to incremental database updates . However, it is nontrivial to mine sequential patterns incrementally, especially when the existing sequences grow incrementally because such growth may lead to the generation of many new patterns due to the interactions of the growing subsequences with the original ones. In this study, we develop an efficient algorithm, IncSpan, for incremental mining of sequential patterns, by exploring some interesting properties. Our performance study shows that IncSpan outperforms some previously proposed incremental algorithms as well as a non-incremental one with a wide margin.", "fulltext": "INTRODUCTION\nSequential pattern mining is an important and active research\ntopic in data mining [1, 5, 4, 8, 13, 2], with broad\napplications, such as customer shopping transaction analysis\n, mining web logs, mining DNA sequences, etc.\nThere have been quite a few sequential pattern or closed\nsequential pattern mining algorithms proposed in the previous\nwork, such as [10, 8, 13, 2, 12, 11], that mine frequent\nsubsequences from a large sequence database efficiently. These\nalgorithms work in a one-time fashion: mine the entire\ndatabase and obtain the set of results. However, in many\napplications, databases are updated incrementally. For example\n, customer shopping transaction database is growing\ndaily due to the appending of newly purchased items for existing\ncustomers for their subsequent purchases and/or insertion\nof new shopping sequences for new customers. Other\nexamples include Weather sequences and patient treatment\nsequences which grow incrementally with time. 
The existing sequential mining algorithms are not suitable for handling this situation because the result mined from the old database is no longer valid on the updated database, and it is intolerably inefficient to mine the updated database from scratch.
There are two kinds of database updates in applications: (1) inserting new sequences (denoted as INSERT), and (2) appending new itemsets/items to the existing sequences (denoted as APPEND). A real application may contain both.
It is easier to handle the first case, INSERT. An important property of INSERT is that a frequent sequence in DB' = DB ∪ db must be frequent in either DB or db (or both). If a sequence is infrequent in both DB and db, it cannot be frequent in DB', as shown in Figure 1. This property is similar to that of frequent patterns, which has been used in incremental frequent pattern mining [3, 9, 14]. Such incremental frequent pattern mining algorithms can be easily extended to handle sequential pattern mining in the case of INSERT.
It is far trickier to handle the second case, APPEND, than the first one. This is because not only may the appended items generate new locally frequent sequences in db, but locally infrequent sequences may also contribute their occurrence count to the same infrequent sequences in the original database and produce frequent ones. For example, in the appended database in Figure 1, suppose |DB| = 1000 and |db| = 20, min_sup = 10%. Suppose a sequence s is infrequent in DB with 99 occurrences (sup = 9.9%). In addition, it is also infrequent in db with only 1 occurrence (sup = 5%). Although s is infrequent in both DB and db, it becomes frequent in DB' with 100 occurrences. This problem complicates the incremental mining since one cannot ignore the infrequent sequences in db, but there are an exponential number of infrequent sequences even in a small db, and checking them against the set of infrequent sequences in DB would be very costly.
Figure 1: Examples in INSERT and APPEND database.
When the database is updated with a combination of INSERT and APPEND, we can treat INSERT as a special case of APPEND by treating the inserted sequences as transactions appended to an empty sequence in the original database. Then the problem is reduced to APPEND. Therefore, we focus on the APPEND case in the following discussion.
In this paper, an efficient algorithm, called IncSpan, is developed for incremental mining over multiple database increments. Several novel ideas are introduced in the algorithm development: (1) maintaining a set of "almost frequent" sequences as the candidates in the updated database, which has several nice properties and leads to efficient techniques, and (2) two optimization techniques, reverse pattern matching and shared projection, designed to improve the performance. Reverse pattern matching is used for matching a sequential pattern against a sequence and pruning part of the search space. Shared projection is designed to reduce the number of database projections for sequences which share a common prefix. Our performance study shows that IncSpan is efficient and scalable.
The remainder of the paper is organized as follows. Section 2 introduces the basic concepts related to incremental sequential pattern mining.
Section 3 presents the idea of buffering patterns, several properties of this technique and the associated method. Section 4 formulates the IncSpan algorithm with two optimization techniques. We report and analyze the performance study in Section 5 and introduce related work in Section 6. We conclude our study in Section 7.
PRELIMINARY CONCEPTS
Let I = {i_1, i_2, ..., i_k} be the set of all items. A subset of I is called an itemset. A sequence s = ⟨t_1, t_2, ..., t_m⟩ (t_i ⊆ I) is an ordered list of itemsets. The size, |s|, of a sequence is the number of itemsets in the sequence. The length, l(s), is the total number of items in the sequence, i.e., l(s) = Σ_{i=1}^{m} |t_i|. A sequence α = ⟨a_1, a_2, ..., a_m⟩ is a subsequence of another sequence β = ⟨b_1, b_2, ..., b_n⟩, denoted α ⊑ β (if α ≠ β, written α ⊏ β), if and only if there exist integers i_1, i_2, ..., i_m such that 1 ≤ i_1 < i_2 < ... < i_m ≤ n and a_1 ⊆ b_{i_1}, a_2 ⊆ b_{i_2}, ..., and a_m ⊆ b_{i_m}.
A sequence database D = {s_1, s_2, ..., s_n} is a set of sequences. The support of a sequence α in D is the number of sequences in D which contain α: support(α) = |{s | s ∈ D and α ⊑ s}|. Given a minimum support threshold min_sup, a sequence is frequent if its support is no less than min_sup; given a buffer ratio μ ≤ 1, a sequence is semi-frequent if its support is less than min_sup but no less than μ·min_sup; a sequence is infrequent if its support is less than μ·min_sup. The set of frequent sequential patterns, FS, includes all the frequent sequences; the set of semi-frequent sequential patterns, SFS, includes all the semi-frequent sequences.
EXAMPLE 1. The second column of Table 1 is a sample sequence database D. If min_sup = 3, FS = { ⟨(a)⟩ : 4, ⟨(b)⟩ : 3, ⟨(d)⟩ : 4, ⟨(b)(d)⟩ : 3 }.
Seq ID.  Original Part    Appended Part
0        (a)(h)           (c)
1        (eg)             (a)(bce)
2        (a)(b)(d)        (ck)(l)
3        (b)(df)(a)(b)
4        (a)(d)
5        (be)(d)
Table 1: A Sample Sequence Database D and the Appended Part
Given a sequence s = ⟨t_1, ..., t_m⟩ and another sequence s_a = ⟨t'_1, ..., t'_n⟩, let s' denote the concatenation of s and s_a. s' is called an appended sequence of s; if s_a is empty, s' = s. An appended sequence database D' of a sequence database D is one in which (1) every sequence in D' is an appended sequence (possibly unchanged) of some sequence in D, and (2) every sequence in D has such a counterpart in D'. We denote by LDB the set of sequences in D' which have actually been appended with items/itemsets, and by ODB the corresponding set of sequences in D which get appended with items/itemsets in D'. We denote the set of frequent sequences in D' as FS'.
EXAMPLE 2. The third column of Table 1 is the appended part of the original database. If min_sup = 3, FS' = { ⟨(a)⟩ : 5, ⟨(b)⟩ : 4, ⟨(d)⟩ : 4, ⟨(b)(d)⟩ : 3, ⟨(c)⟩ : 3, ⟨(a)(b)⟩ : 3, ⟨(a)(c)⟩ : 3 }.
A sequential pattern tree T is a tree that represents the set of frequent subsequences in a database. Each node p in T has a tag labelled s or i: s means the node is a starting item in an itemset; i means the node is an intermediate item in an itemset.
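To make the preceding definitions concrete, the following short Python sketch (not from the paper) checks subsequence containment and computes support over the original part of Table 1, reproducing Example 1:

def is_subsequence(alpha, s):
    # Greedy left-to-right matching: each pattern itemset must be a
    # subset of a later itemset of s, respecting the order.
    j = 0
    for itemset in alpha:
        while j < len(s) and not itemset <= s[j]:
            j += 1
        if j == len(s):
            return False
        j += 1
    return True

def support(alpha, database):
    return sum(1 for s in database if is_subsequence(alpha, s))

D = [  # original part of Table 1
    [{"a"}, {"h"}],
    [{"e", "g"}],
    [{"a"}, {"b"}, {"d"}],
    [{"b"}, {"d", "f"}, {"a"}, {"b"}],
    [{"a"}, {"d"}],
    [{"b", "e"}, {"d"}],
]

for pattern in ([{"a"}], [{"b"}], [{"d"}], [{"b"}, {"d"}]):
    print(pattern, support(pattern, D))
# prints supports 4, 3, 4 and 3 -- exactly the members of FS for min_sup = 3.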
Each node p has a support value\nwhich represents the support of the subsequence starting\nfrom the root of T and ending at the node p.\nProblem Statement.\nGiven a sequence database D,\na min sup threshold, the set of frequent subsequences F S\nin D, and an appended sequence database D of D, the\nproblem of incremental sequential pattern mining is\nto mine the set of frequent subsequences F S in D based\non F S instead of mining on D from scratch.\nBUFFER SEMI-FREQUENT PATTERNS\nIn this section, we present the idea of buffering semi-frequent\npatterns, study its properties, and design solutions\nof how to incrementally mine and maintain F S and SF S.\n528\nResearch Track Poster\n<>\n<d>s:4\n<b>s:3\n<a>s:4\n<e>s:2\n<d>s:3\n<d>s:2\n<b>s:2\nFigure 2: The Sequential Pattern Tree of F S and\nSF S in D\n3.1\nBuffering Semi-frequent Patterns\nWe buffer semi-frequent patterns, which can be considered\nas a statistics-based approach.\nThe technique is to\nlower the min sup by a buffer ratio\n1 and keep a set\nSF S in the original database D. This is because since the\nsequences in SF S are \"almost frequent \", most of the frequent\nsubsequences in the appended database will either\ncome from SF S or they are already frequent in the original\ndatabase. With a minor update to the original database,\nit is expected that only a small fraction of subsequences\nwhich were infrequent previously would become frequent.\nThis is based on the assumption that updates to the original\ndatabase have a uniform probability distribution on items.\nIt is expected that most of the frequent subsequences introduced\nby the updated part of the database would come from\nthe SF S. The SF S forms a kind of boundary (or \"buffer\nzone\") between the frequent subsequences and infrequent\nsubsequences.\nEXAMPLE 3. Given a database D in Example 1, min sup\n= 3, = 0.6. The sequential pattern tree T representing\nF S and SF S in D is shown in Figure 2. F S are shown in\nsolid line and SF S in dashed line.\nWhen the database D is updated to D , we have to check\nLDB to update support of every sequence in F S and SF S.\nThere are several possibilities:\n1. A pattern which is frequent in D is still frequent in D ;\n2. A pattern which is semi-frequent in D becomes frequent\nin D ;\n3. A pattern which is semi-frequent in D is still semi-frequent\nin D ;\n4. Appended database db brings new items.\n5. A pattern which is infrequent in D becomes frequent in\nD\n;\n6. A pattern which is infrequent in D becomes semi-frequent\nin D ;\nCase (1)(3) are trivial cases since we already keep the\ninformation. We will consider case (4)(6) now.\nCase (4): Appended database db brings new items. For\nexample, in the database D , (c) is a new item brought by\ndb. It does not appear in D.\nProperty: An item which does not appear in D and is\nbrought by db has no information in F S or SF S.\nSolution: Scan the database LDB for single items.\nFor a new item or an originally infrequent item in D, if\nit becomes frequent or semi-frequent, insert it into F S or\nSF S. Then use the new frequent item as prefix to construct\nprojected database and discover frequent and semi-frequent\nsequences recursively. For a frequent or semi-frequent item\nin D, update its support.\nCase (5): A pattern which is infrequent in D becomes\nfrequent in D . For example, in the database D , (a)(c) is\nan example of case (5). It is infrequent in D and becomes\nfrequent in D . 
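The buffer zone introduced above (Example 3: min_sup = 3, μ = 0.6) can be made concrete with a short Python sketch; the supports below are the single-item counts from the original part of Table 1, and the split matches the solid/dashed parts of Figure 2:

MIN_SUP, MU = 3, 0.6

def classify(sup, min_sup=MIN_SUP, mu=MU):
    # frequent: sup >= min_sup; semi-frequent: mu*min_sup <= sup < min_sup
    if sup >= min_sup:
        return "frequent (FS)"
    if sup >= mu * min_sup:
        return "semi-frequent (SFS)"
    return "infrequent (not kept)"

for item, sup in {"a": 4, "b": 3, "d": 4, "e": 2, "h": 1}.items():
    print(item, sup, classify(sup))
# With mu*min_sup = 1.8, item e (support 2) lands in SFS, while h is dropped.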
We do not keep ⟨(a)(c)⟩ in FS or SFS, but we have the information of its prefix ⟨(a)⟩.
Property: If an infrequent sequence p' in D becomes frequent in D', all of its prefix subsequences must also be frequent in D'. Then at least one of its prefix subsequences p is in FS.
Solution: Starting from its frequent prefix p in FS and constructing the p-projected database, we will discover p'.
Formally stated, given a frequent pattern p in D', we want to discover whether there is any pattern p' with p as prefix where p' was infrequent in D but is frequent in D'. A sequence p' which changes from infrequent to frequent must have Δsup(p') > (1 − μ)·min_sup, where Δsup denotes the support increase contributed by the appended part.
We claim that if a frequent pattern p has support sup_LDB(p) ≥ (1 − μ)·min_sup in LDB, it is possible that some sequences with p as prefix change from infrequent to frequent. If sup_LDB(p) < (1 − μ)·min_sup, we can safely prune the search with prefix p.
Theorem 1. For a frequent pattern p, if its support in LDB satisfies sup_LDB(p) < (1 − μ)·min_sup, then there is no sequence p' having p as prefix that changes from infrequent in D to frequent in D'.
Proof: p' was infrequent in D, so
sup_D(p') < μ·min_sup.   (1)
If sup_LDB(p) < (1 − μ)·min_sup, then
sup_LDB(p') ≤ sup_LDB(p) < (1 − μ)·min_sup.
Since sup_LDB(p') = sup_ODB(p') + Δsup(p'), we have
Δsup(p') ≤ sup_LDB(p') < (1 − μ)·min_sup.   (2)
Since sup_D'(p') = sup_D(p') + Δsup(p'), combining (1) and (2) we have sup_D'(p') < min_sup. So p' cannot be frequent in D'.
Therefore, if a pattern p has sup_LDB(p) < (1 − μ)·min_sup, we can prune the search with prefix p. Otherwise, if sup_LDB(p) ≥ (1 − μ)·min_sup, it is possible that some sequences with p as prefix change from infrequent to frequent. In this case, we have to project the whole database D' using p as prefix. If |LDB| is small or μ is small, there are very few patterns that have sup_LDB(p) ≥ (1 − μ)·min_sup, making the number of projections small.
In our example, sup_LDB(⟨(a)⟩) = 3 > (1 − 0.6) × 3, so we have to do the projection with ⟨(a)⟩ as prefix, and we discover "⟨(a)(c)⟩ : 3", which was infrequent in D. For another example, sup_LDB(⟨(d)⟩) = 1 < (1 − 0.6) × 3, so there is no sequence with ⟨(d)⟩ as prefix which changes from infrequent to frequent, and we can prune the search on it.
Theorem 1 provides an effective bound for deciding whether it is necessary to project a database, and it is essential to guarantee that the result is complete.
We can see from the projection condition sup_LDB(p) ≥ (1 − μ)·min_sup that the smaller μ is, the larger the buffer we keep and the fewer database projections the algorithm needs. The choice of μ is heuristic. If μ is too high, the buffer is small and we have to do many database projections to discover sequences outside of the buffer. If μ is set very low, we will keep many subsequences in the buffer, but mining the buffering patterns with threshold μ·min_sup is much more inefficient than mining with min_sup. We will show this
Then at least one of its prefix\nsubsequences, p, is in F S or SF S.\nSolution: Start from its prefix p in F S or SF S and construct\np-projected database, we will discover p .\nFormally stated, given a pattern p, we want to discover\nwhether there is any pattern p with p as prefix where p was\ninfrequent but is semi-frequent in D .\nIf the prefix p is in F S or SF S, construct p-projected\ndatabase and we will discover p in p-projected database.\nTherefore, for any pattern p from infrequent to semi-frequent,\nif its prefix is in F S or SF S, p can be discovered.\nIn our example, for the frequent pattern (b) , we do the\nprojection on (b) and get a semi-frequent pattern (be) : 2\nwhich was infrequent in D.\nWe show in Figure 3 the sequential pattern tree T including\nF S and SF S after the database updates to D . We can\ncompare it with Figure 2to see how the database update\naffects F S and SF S.\nINCSPAN DESIGN AND IMPLEMENTATION\nIn this section, we formulate the\nIncSpan algorithm which\nexploits the technique of buffering semi-frequent patterns.\nWe first present the algorithm outline and then introduce\ntwo optimization techniques.\n4.1\nIncSpan: Algorithm Outline\nGiven an original database D, an appended database D ,\na threshold min sup, a buffer ratio , a set of frequent sequences\nF S and a set of semi-frequent sequences SF S, we\nwant to discover the set of frequent sequences F S in D .\nStep 1: Scan LDB for single items, as shown in case (4).\nStep 2: Check every pattern in F S and SF S in LDB to\nadjust the support of those patterns.\nStep 2.1: If a pattern becomes frequent, add it to F S .\nThen check whether it meets the projection condition. If so,\nuse it as prefix to project database, as shown in case (5).\nStep 2.2: If a pattern is semi-frequent, add it to SF S .\nThe algorithm is given in Figure 4.\n4.2\nReverse Pattern Matching\nReverse pattern matching is a novel optimization technique\n. It matches a sequential pattern against a sequence\nfrom the end towards the front. This is used to check sup-Algorithm\n. IncSpan(D , min sup, , F S, SF S)\nInput: An appended database D , min sup, , frequent\nsequences F S in D, semi-frequent sequences SF S\nin D.\nOutput: F S and SF S .\n1: F S = , SF S =\n2 : Scan LDB for single items;\n3: Add new frequent item into F S ;\n4: Add new semi-frequent item into SF S ;\n5: for each new item i in F S do\n6:\nPrefixSpan(i, D\n|i, min sup, F S , SF S );\n7: for every pattern p in F S or SF S do\n8:\ncheck sup(p);\n9:\nif sup(p) = sup\nD\n(p) + sup(p)\nmin sup\n10:\ninsert(F S , p);\n11:\nif sup\nLDB\n(p)\n(1 - )min sup\n12:\nPrefixSpan(p, D\n|p, min sup, F S , SF S );\n13:\nelse\n14:\ninsert(SF S , p);\n15: return;\nFigure 4: IncSpan algorithm\ns s\na\ns'\nFigure 5: Reverse Pattern Matching\nport increase of a sequential pattern in LDB. Since the\nappended items are always at the end part of the original\nsequence, reverse pattern matching would be more efficient\nthan projection from the front.\nGiven an original sequence s, an appended sequence s =\ns s\na\n, and a sequential pattern p, we want to check whether\nthe support of p will be increased by appending s\na\nto s.\nThere are two possibilities:\n1. If the last item of p is not supported by s\na\n, whether p\nis supported by s or not, sup(p) is not increased when\ns grows to s . Therefore, as long as we do not find the\nlast item of p in s\na\n, we can prune searching.\n2. If the last item of p is supported by s\na\n, we have to check\nwhether s supports p. 
We check this by continuing in\nthe reverse direction. If p is not supported by s , we can\nprune searching and keep sup(p) unchanged. Otherwise\nwe have to check whether s supports p. If s supports p,\nkeep sup(p) unchanged; otherwise, increase sup(p) by 1.\nFigure 5 shows the reverse pattern matching.\n4.3\nShared Projection\nShared Projection is another optimization technique we\nexploit. Suppose we have two sequences (a)(b)(c)(d) and\n(a)(b)(c)(e) , and we need to project database using each\nas prefix. If we make two database projections individually\n, we do not take advantage of the similarity between the\ntwo subsequences. Actually the two projected databases up\nto subsequence (a)(b)(c) , i.e., D\n| (a)(b)(c) are the same.\n530\nResearch Track Poster\nFrom D\n| (a)(b)(c) , we do one more step projection for item\nd and e respectively. Then we can share the projection for\n(a)(b)(c) .\nTo use shared projection, when we detect some subsequence\nthat needs projecting database, we do not do the\nprojection immediately. Instead we label it. After finishing\nchecking and labelling all the sequences, we do the projection\nby traversing the sequential pattern tree. Tree is natural\nfor this task because the same subsequences are represented\nusing shared branches.\nPERFORMANCE STUDY\nA comprehensive performance study has been conducted\nin our experiments. We use a synthetic data generator provided\nby IBM. The synthetic dataset generator can be re-trieved\nfrom an IBM website, http://www.almaden.ibm.com\n/cs/quest. The details about parameter settings can be re-ferred\nin [1].\nAll experiments are done on a PowerEdge 6600 server\nwith Xeon 2.8 , 4G memory.\nThe algorithms are written\nin C++ and compiled using g++ with -O3 optimization\n. We compare three algorithms:\nIncSpan, an incremental\nmining algorithm\nISM [7], and a non-incremental algorithm\nPrefixSpan[8].\nFigure 6 (a) shows the running time of three algorithms\nwhen min sup changes on the dataset D10C10T2.5N10, 0.5%\nof which has been appended with transactions.\nIncSpan is\nthe fastest, outperforming\nPrefixSpan by a factor of 5 or\nmore, and outperforming\nISM even more. ISM even cannot\nfinish within a time limit when the support is low.\nFigure 6 (b) shows how the three algorithms can be affected\nwhen we vary the percentage of sequences in the\ndatabase that have been updated. The dataset we use is\nD10C10T2.5N10, min sup=1%. The buffer ratio = 0.8.\nThe curves show that the time increases as the incremental\nportion of the database increases. When the incremental\npart exceeds 5% of the database,\nPrefixSpan outperforms\nIncSpan. This is because if the incremental part is not very\nsmall, the number of patterns brought by it increases, making\na lot overhead for\nIncSpan to handle. In this case, mining\nfrom scratch is better. But\nIncSpan still outperforms ISM by\na wide margin no matter what the parameter is.\nFigure 6 (c) shows the memory usage of\nIncSpan and ISM.\nThe database is D10C10T2.5N10, min sup varies from 0.4%\nto 1.5%, buffer ratio = 0.8. Memory usage of\nIncSpan increases\nlinearly as min sup decreases while memory used by\nISM increases dramatically. This is because the number of\nsequences in negative border increases sharply as min sup\ndecreases.\nThis figure verifies that negative border is a\nmemory-consuming approach.\nFigure 7 (a) shows how the\nIncSpan algorithm can be affected\nby varying buffer ratio . Dataset is D10C10T2.5N10,\n5% of which is appended with new transactions. 
We use\nPrefixSpan as a baseline. As we have discussed before, if we\nset very high, we will have fewer pattern in SF S, then the\nsupport update for sequences in SF S on LDB will be more\nefficient. However, since we keep less information in SF S,\nwe may need to spend more time on projecting databases.\nIn the extreme case = 1, SF S becomes empty. On the\nother hand, if we set the very low, we will have a large\nnumber of sequences in SF S, which makes the support update\nstage very slow. Experiment shows, when = 0.8, it\nachieves the best performance.\nFigure 7 (b) shows the performance of\nIncSpan to handle\nmultiple (5 updates in this case) database updates. Each\ntime the database is updated, we run\nPrefixSpan to mine\nfrom scratch. We can see from the figure, as the increments\naccumulate, the time for incremental mining increases, but\nincrease is very small and the incremental mining still outperforms\nmining from scratch by a factor of 4 or 5. This\nexperiment shows that\nIncSpan can really handle multiple\ndatabase updates without significant performance degrading\n.\nFigure 7 (c) shows the scalability of the three algorithms\nby varying the size of database. The number of sequences in\ndatabases vary from 10,000 to 100,000. 5% of each database\nis updated. min sup=0.8%. It shows that all three algorithms\nscale well with the database size.\nRELATED WORK\nIn sequential pattern mining, efficient algorithms like\nGSP\n[10],\nSPADE [13], PrefixSpan [8], and SPAM [2] were developed\n.\nPartition [9] and FUP [3] are two algorithms which promote\npartitioning the database, mining local frequent itemsets\n, and then consolidating the global frequent itemsets by\ncross check. This is based on that a frequent itemset must\nbe frequent in at least one local database. If a database is\nupdated with INSERT, we can use this idea to do the incremental\nmining. Zhang et al. [14] developed two algorithms\nfor incremental mining sequential patterns when sequences\nare inserted into or deleted from the original database.\nParthasarathy et al. [7] developed an incremental mining\nalgorithm\nISM by maintaining a sequence lattice of an old\ndatabase. The sequence lattice includes all the frequent sequences\nand all the sequences in the negative border. However\n, there are some disadvantages for using negative border:\n(1) The combined number of sequences in the frequent set\nand the negative border is huge; (2) The negative border\nis generated based on the structural relation between sequences\n. However, these sequences do not necessarily have\nhigh support. Therefore, using negative border is very time\nand memory consuming.\nMasseglia et al. [6] developed another incremental mining\nalgorithm ISE using candidate generate-and-test approach.\nThe problem of this algorithm is (1) the candidate set can\nbe very huge, which makes the test-phase very slow; and\n(2) its level-wise working manner requires multiple scans of\nthe whole database. This is very costly, especially when the\nsequences are long.\nCONCLUSIONS\nIn this paper, we investigated the issues for incremental\nmining of sequential patterns in large databases and\naddressed the inefficiency problem of mining the appended\ndatabase from scratch. We proposed an algorithm\nIncSpan\nby exploring several novel techniques to balance efficiency\nand reusability.\nIncSpan outperforms the non-incremental\nmethod (using\nPrefixSpan) and a previously proposed incremental\nmining algorithm\nISM by a wide margin. 
It is a\npromising algorithm to solve practical problems with many\nreal applications.\nThere are many interesting research problems related to\nIncSpan that should be pursued further. For example, incremental\nmining of closed sequential patterns, structured\n531\nResearch Track Poster\n0.01\n0.1\n1\n10\n100\n1000\n0.03 0.06\n0.1\n0.4\n0.6\n0.8\n1\n1.5\nminsup (%)\nTi\nm\ne\n(s\n)\nIncSpan\nPrefixSpan\nISM\n(a) varying min sup\n0.01\n0.1\n1\n10\n100\n0.5\n1\n2\n3\n4\n5\nPercent of growing seq (%)\nTi\nm\ne\n(s\n)\nIncSpan\nPrefixSpan\nISM\n(b) varying percentage of updated\nsequences\n1\n10\n100\n1000\n10000\n0.4\n0.6\n0.8\n1\n1.5\nminsup (%)\nMem\no\nry\nU\ns\na\ng\ne (\nM\nB\n)\nISM\nIncSpan\n(c) Memory Usage under varied\nmin sup\nFigure 6: Performance study\n0\n8\n16\n24\n32\n40\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1 PrefixSpan\nvarying buffer ratio u\nTi\nm\ne\n(\ns\n)\nT ime\n(a) varying buffer ratio\n0\n30\n60\n90\n120\n1\n2\n3\n4\n5\nIncrement of database\nTi\nm\ne\n(s\n)\nIncSpan\nPrefixSpan\n(b)\nmultiple\nincrements\nof\ndatabase\n0.01\n0.1\n1\n10\n100\n1000\n10\n20\n50\n80\n100\nNo. of S equences in 1000\nTi\nm\ne\n(\ns\n)\nIncSpan\nPrefixSpan\nISM\n(c) varying # of sequences (in\n1000) in DB\nFigure 7: Performance study\npatterns in databases and/or data streams are interesting\nproblems for future research.\nREFERENCES\n[1] R. Agrawal and R. Srikant. Mining sequential\npatterns. In Proc. 1995 Int. Conf. Data Engineering\n(ICDE'95), pages 314, March 1995.\n[2 ] J. Ayres, J. E. Gehrke, T. Yiu, and J. Flannick.\nSequential pattern mining using bitmaps. In Proc.\n2002 ACM SIGKDD Int. Conf. Knowledge Discovery\nin Databases (KDD'02), July 2 002 .\n[3] D. Cheung, J. Han, V. Ng, and C. Wong. Maintenance\nof discovered association rules in large databases: An\nincremental update technique. In Proc. of the 12th Int.\nConf. on Data Engineering (ICDE'96), March 1996.\n[4] M. Garofalakis, R. Rastogi, and K. Shim. SPIRIT:\nSequential pattern mining with regular expression\nconstraints. In Proc. 1999 Int. Conf. Very Large Data\nBases (VLDB'99), pages 223234, Sept 1999.\n[5] H. Mannila, H. Toivonen, and A. I. Verkamo.\nDiscovering frequent episodes in sequences. In Proc.\n1995 Int. Conf. Knowledge Discovery and Data\nMining (KDD'95), pages 210215, Aug 1995.\n[6] F. Masseglia, P. Poncelet, and M. Teisseire.\nIncremental mining of sequential patterns in large\ndatabases. Data Knowl. Eng., 46(1):97121, 2003.\n[7] S. Parthasarathy, M. Zaki, M. Ogihara, and\nS. Dwarkadas. Incremental and interactive sequence\nmining. In Proc. of the 8th Int. Conf. on Information\nand Knowledge Management (CIKM'99), Nov 1999.\n[8] J. Pei, J. Han, B. Mortazavi-Asl, H. Pinto, Q. Chen,\nU. Dayal, and M.-C. Hsu. PrefixSpan: Mining\nsequential patterns efficiently by prefix-projected\npattern growth. In Proc. 2001 Int. Conf. Data\nEngineering (ICDE'01), pages 215224, April 2001.\n[9] A. Savasere, E. Omiecinski, and S. Navathe. An\nefficient algorithm for mining association rules in large\ndatabases. In Proc. 1995 Int. Conf. Very Large Data\nBases (VLDB'95), Sept 1995.\n[10] R. Srikant and R. Agrawal. Mining sequential\npatterns: Generalizations and performance\nimprovements. In Proc. of the 5th Int. Conf. on\nExtending Database Technology (EDBT'96), Mar 1996.\n[11] J. Wang and J. Han. Bide: Efficient mining of\nfrequent closed sequences. In Proc. of 2004 Int. Conf.\non Data Engineering (ICDE'04), March 2004.\n[12] X. Yan, J. Han, and R. Afshar. CloSpan: Mining\nclosed sequential patterns in large datasets. 
In Proc.\n2003 SIAM Int.Conf. on Data Mining (SDM'03), May\n2003.\n[13] M. Zaki. SPADE: An efficient algorithm for mining\nfrequent sequences. Machine Learning, 40:3160, 2001.\n[14] M. Zhang, B. Kao, D. Cheung, and C. Yip. Efficient\nalgorithms for incremental updates of frequent\nsequences. In Proc. of Pacific-Asia Conf. on\nKnowledge Discovery and Data Mining (PAKDD'02),\nMay 2002.\n532\nResearch Track Poster\n", "keywords": "database updates;sequence database;shared projection;frequent itemsets;optimization;buffering pattern;sequential pattern;buffering patterns;reverse pattern matching;incremental mining"} {"name": "109", "title": "Index Structures and Algorithms for Querying Distributed RDF Repositories", "abstract": "A technical infrastructure for storing, querying and managing RDF data is a key element in the current semantic web development. Systems like Jena, Sesame or the ICS-FORTH RDF Suite are widely used for building semantic web applications. Currently, none of these systems supports the integrated querying of distributed RDF repositories. We consider this a major shortcoming since the semantic web is distributed by nature. In this paper we present an architecture for querying distributed RDF repositories by extending the existing Sesame system. We discuss the implications of our architecture and propose an index structure as well as algorithms for query processing and optimization in such a distributed context.", "fulltext": "MOTIVATION\nThe need for handling multiple sources of knowledge and information\nis quite obvious in the context of semantic web applications.\nFirst of all we have the duality of schema and information content\nwhere multiple information sources can adhere to the same schema.\nFurther, the re-use, extension and combination of multiple schema\nfiles is considered to be common practice on the semantic web [7].\nDespite the inherently distributed nature of the semantic web, most\ncurrent RDF infrastructures (for example [4]) store information locally\nas a single knowledge repository, i.e., RDF models from remote\nsources are replicated locally and merged into a single model.\nDistribution is virtually retained through the use of namespaces to\ndistinguish between different models. We argue that many interesting\napplications on the semantic web would benefit from or even\nrequire an RDF infrastructure that supports real distribution of information\nsources that can be accessed from a single point. Beyond\nCopyright is held by the author/owner(s).\nWWW2004\n, May 1722, 2004, New York, New York, USA.\nACM 1-58113-844-X/04/0005.\nthe argument of conceptual adequacy, there are a number of technical\nreasons for real distribution in the spirit of distributed databases:\nFreshness:\nThe commonly used approach of using a local copy\nof a remote source suffers from the problem of changing information\n. Directly using the remote source frees us from the need of\nmanaging change as we are always working with the original.\nFlexibility:\nKeeping different sources separate from each other\nprovides us with a greater flexibility concerning the addition and\nremoval of sources. In the distributed setting, we only have to adjust\nthe corresponding system parameters.\nIn many cases, it will even be unavoidable to adopt a distributed\narchitecture, for example in scenarios in which the data is not owned\nby the person querying it. In this case, it will often not be permitted\nto copy the data. 
More and more information providers, however,\ncreate interfaces that can be used to query the information. The\nsame holds for cases where the information sources are too large to\njust create a single model containing all the information, but they\nstill can be queried using a special interface (Musicbrainz is an example\nof this case). Further, we might want to include sources that\nare not available in RDF, but that can be wrapped to produce query\nresults in RDF format. A typical example is the use of a free-text\nindex as one source of information. Sometimes there is not even\na fixed model that could be stored in RDF, because the result of a\nquery is only calculated at runtime (Google, for instance, provides a\nprogramming interface that could be wrapped into an RDF source).\nIn all these scenarios, we are forced to access external information\nsources from an RDF infrastructure without being able to create a\nlocal copy of the information we want to query. On the semantic\nweb, we almost always want to combine such external sources with\neach other and with additional schema knowledge. This confirms\nthe need to consider an RDF infrastructure that deals with information\nsources that are actually distributed across different locations.\nIn this paper, we address the problem of integrated access to distributed\nRDF repositories from a practical point of view. In particular\n, starting from a real-life use case where we are considering\na number of distributed sources that contain research results in the\nform of publications, we take the existing RDF storage and retrieval\nsystem Sesame and describe how the architecture and the query\nprocessing methods of the system have to be extended in order to\nmove to a distributed setting.\n631\nThe paper is structured as follows. In Section 2 we present an\nextension of the Sesame architecture to multiple, distributed repositories\nand discuss basic assumptions and implications of the architecture\n. Section 3 presents source index hierarchies as suitable\nmechanisms to support the localization of relevant data during\nquery processing. In Section 4 we introduce a cost model for processing\nqueries in the distributed architecture, and show its use in\noptimizing query execution as a basis for the two-phase optimization\nheuristics for join ordering. Section 5 reviews previous work\non index structures for object-oriented data bases. It also summarizes\nrelated work on query optimization particularly focusing on\nthe join ordering problem. We conclude with a discussion of open\nproblems and future work.\nINTEGRATION ARCHITECTURE\nBefore discussing the technical aspects of distributed data and\nknowledge access, we need to put our work in context by introducing\nthe specific integration architecture we have to deal with. This\narchitecture limits the possible ways of accessing and processing\ndata, and thereby provides a basis for defining some requirements\nfor our approach. It is important to note that our work is based on\nan existing RDF storage and retrieval system, which more or less\npredefines the architectural choices we made. In this section, we\ndescribe an extension of the Sesame system [4] to distributed data\nsources.\nThe Sesame architecture is flexible enough to allow a straightforward\nextension to a setting where we have to deal with multiple\ndistributed RDF repositories. 
In the current setting, queries, expressed\nin Sesame's query language SeRQL, are directly passed\nfrom the query engine to an RDF API (SAIL) that abstracts from\nthe specific implementation of the repository. In the distributed setting\n, we have several repositories that can be implemented in different\nways. In order to abstract from this technical heterogeneity,\nit is useful to introduce RDF API implementations on top of each\nrepository, making them accessible in the same way.\nThe specific problem of a distributed architecture is now that information\nrelevant to a query might be distributed over the different\nsources. This requires to locate relevant information, retrieve it, and\ncombine the individual answers. For this purpose, we introduce a\nnew component between the query parser and the actual SAILs the\nmediator SAIL (see Figure 1).\nIn this work, we assume that local repositories are implemented\nusing database systems that translate queries posed to the RDF API\ninto SQL queries and use the database functionality to evaluate\nthem (compare [5]). This assumption has an important influence on\nthe design of the distributed query processing: the database engines\nunderlying the individual repositories have the opportunity to perform\nlocal optimization on the SQL queries they pose to the data.\nTherefore we do not have to perform optimizations on sub-queries\nthat are to be forwarded to a single source, because the repository\nwill deal with it. Our task is rather to determine which part of the\noverall query has to be sent to which repository.\nIn the remainder of this paper, we describe an approach for querying\ndistributed RDF sources that addresses these requirements implied\nby the adopted architecture. We focus our attention on index\nstructures and algorithms implemented in the mediator SAIL.\nFigure 1: Distribution Architecture.\nINDEX STRUCTURES\nAs discussed above, in order to be able to make use of the optimization\nmechanisms of the database engines underlying the different\nrepositories, we have to forward entire queries to the different\nrepositories. In the case of multiple external models, we can further\nspeed up the process by only pushing down queries to information\nsources we can expect to contain an answer. The ultimate goal\nis to push down to a repository exactly that part of a more complex\nquery for which a repository contains an answer. This part\ncan range from a single statement template to the entire query. We\ncan have a situation where a subset of the query result can directly\nbe extracted from one source, and the rest has to be extracted and\ncombined from different sources. This situation is illustrated in the\nfollowing example.\nE\nXAMPLE\n1. Consider the case where we want to extract information\nabout research results. This information is scattered across\na variety of data sources containing information about publications\n, projects, patents, etc. In order to access these sources in\na uniform way, we use the OntoWeb research ontology. Figure 2\nshows parts of this ontology.\nFigure 2: Part of the OntoWeb Ontology.\nSuppose we now want to ask for the titles of articles by employees\nof organizations that have projects in the area \"RDF\". 
The\npath expression of a corresponding SeRQL query would be the following\n1\n:\n1\nFor the sake of readability we omit namespaces whenever they do\nnot play a technical role.\n632\n{A} title {T};\nauthor {W} affiliation {O}\ncarriesOut {P} topic {'RDF'}\nNow, let's assume that we have three information sources\n\nI\n,\n\nP\n,\nand\n\nQ\n.\n\nI\nis a publication data base that contains information\nabout articles, titles, authors and their affiliations.\n\nP\nis a project\ndata base with information about industrial projects, topics, and\norganizations. Finally,\n\nQ\nis a research portal that contains all of\nthe above information for academic research.\nIf we want to answer the query above completely we need all\nthree information sources. By pushing down the entire query to\n\nQ\nwe get results for academic research. In order to also retrieve the\ninformation for industrial research, we need to split up the query,\npush the fragment\n{A} title {T};\nauthor {W} affiliation {O}\nto\n\nI\n, the fragment\n{O} carriesOut {P} topic {'RDF'}\nto\n\nP\n, and join the result based on the identity of the organization\n.\nThe example illustrates the need for sophisticated indexing structures\nfor deciding which part of a query to direct to which information\nsource. On the one hand we need to index complex query\npatterns in order to be able to push down larger queries to a source;\non the other hand we also need to be able to identify sub-queries\nneeded for retrieving partial results from individual sources.\nIn order to solve this problem we build upon existing work on\nindexing complex object models using join indices [14]. The idea\nof join indices is to create additional database tables that explic-itly\ncontain the result of a join over a specific property. At runtime,\nrather than computing a join, the system just accesses the join index\nrelation which is less computationally expensive. The idea of join\nindices has been adapted to deal with complex object models. The\nresulting index structure is a join index hierarchy [21]. The most\ngeneral element in the hierarchy is an index table for elements connected\nby a certain path\np\nHXXn I\nof length\nn. Every following level\ncontains all the paths of a particular length from 2 paths of length\nn I at the second level of the hierarchy to n paths of length 1 at the\nbottom of the hierarchy. In the following, we show how the notion\nof join index hierarchies can be adapted to deal with the problem of\ndetermining information sources that contain results for a particular\nsub-query.\n3.1\nSource Index Hierarchies\nThe majority of work in the area of object oriented databases is\nfocused on indexing schema-based paths in complex object models.\nWe can make use of this work by relating it to the graph-based interpretation\nof RDF models. More specifically, every RDF model\ncan be seen as a graph where nodes correspond to resources and\nedges to properties linking these resources. The result of a query to\nsuch a model is a set of subgraphs corresponding to a path expression\n. 
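As a concrete illustration of the decomposition in Example 1, the following Python sketch joins the partial answers of the publication source and the project source on the shared organization; the tuples are invented sample data, not taken from the paper:

publications = [        # answers to {A} title {T}; author {W} affiliation {O}
    {"title": "Querying the Web", "author": "w1", "org": "orgA"},
    {"title": "RDF Storage",      "author": "w2", "org": "orgB"},
]
projects = [            # answers to {O} carriesOut {P} topic {'RDF'}
    {"org": "orgA", "project": "p1", "topic": "RDF"},
]

def join_on_org(pubs, projs):
    # The mediator combines the two partial results on the organization.
    return [p["title"] for p in pubs
            for q in projs
            if p["org"] == q["org"] and q["topic"] == "RDF"]

print(join_on_org(publications, projects))   # ['Querying the Web']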
While a path expression does not necessarily describe a single path, it describes a tree that can be created by joining a set of paths. Making use of this fact, we first decompose the path expression into a set of expressions describing simple paths, then forward the simpler path expressions to sources that contain the corresponding information using a path-based index structure, and join the retrieved answers to create the result.

The problem with using path indices to select information sources is the fact that the information that makes up a path might be distributed across different information sources (compare Example 1). We therefore have to use an index structure that also contains information about sub-paths without losing the advantage of indexing complete paths. An index structure that combines these two characteristics is the join index hierarchy proposed in [21]. We therefore take their approach as a basis for defining a source index hierarchy.

DEFINITION 1 (SCHEMA PATH). Let G = <N, E, L, s, t, l> be the labelled graph of an RDF model, where N is a set of nodes, E a set of edges, L a set of labels, s, t : E -> N, and l : E -> L. For every e in E we have s(e) = r1, t(e) = r2, and l(e) = l_e if and only if the model contains the triple (r1, l_e, r2). A path in G is a list of edges e_0, ..., e_{n-1} such that t(e_i) = s(e_{i+1}) for all i = 0, ..., n-2. Let p = e_0, ..., e_{n-1} be a path; the corresponding schema path is the list of labels l_0, ..., l_{n-1} such that l_i = l(e_i).

The definition establishes the notion of a path for RDF models. We can now use path-based index structures and adapt them to the task of locating path instances in different RDF models. The basic structure we use for this purpose is an index table of sources that contain instances of a certain path.

DEFINITION 2 (SOURCE INDEX). Let p be a schema path; a source index for p is a set of pairs (s_k, n_k) where s_k is an information source (in particular, an RDF model), the graph of s_k contains exactly n_k paths with schema path p, and n_k > 0.

A source index can be used to determine information sources that contain instances of a particular schema path. If our query contains the path p, the corresponding source index provides us with a list of information sources we have to forward the query to in order to get results. The information about the number of instance paths can be used to estimate communication costs and will be used for join ordering (see Section 4). So far the index satisfies the requirement of being able to list complete paths and push down the corresponding queries to external sources. In order to be able to retrieve information that is distributed across different sources, we have to extend the structure based on the idea of a hierarchy of indices for arbitrary sub-paths. The corresponding structure is defined as follows.

DEFINITION 3 (SOURCE INDEX HIERARCHY). Let p = l_0, ..., l_{n-1} be a schema path. A source index hierarchy for p is an n-tuple <Phi_n, ..., Phi_1> where
- Phi_n is a source index for p,
- Phi_i is the set of all source indices for sub-paths of p with length i that have at least one entry.

The most suitable way to represent such an index structure is a hierarchy where the source index of the indexed path is the root element.
The hierarchy is formed in such a way that the subpart rooted at the source index for a path p always contains source indices for all sub-paths of p. This property will later be used in the query answering algorithm. Forming a lattice of source indices, a source index hierarchy contains information about every possible schema sub-path. Therefore we can locate all fragments of paths that might be combined into a query result. At the same time, we can first concentrate on complete path instances and successively investigate smaller fragments using the knowledge about the existence of longer paths. We illustrate this principle in the following example.

EXAMPLE 2. Let us reconsider the situation in Example 1. The schema path we want to index is given by the list (author, affiliation, carriesOut, topic). The source index hierarchy for this path therefore contains source indices for the paths

p_{0..3}: (author, affiliation, carriesOut, topic)
p_{0..2}: (author, affiliation, carriesOut), p_{1..3}: (affiliation, carriesOut, topic)
p_{0..1}: (author, affiliation), p_{1..2}: (affiliation, carriesOut), p_{2..3}: (carriesOut, topic)
p_0: (author), p_1: (affiliation), p_2: (carriesOut), p_3: (topic)

Starting from the longest path, we compare our query expression with the index (see Figure 3 for an example of index contents). We immediately get the information that S3 contains results. Turning to sub-paths, we also find out that S1 contains results for the sub-path (author, affiliation) and S2 for the sub-path (carriesOut, topic), which we can join in order to compute results, because together both sub-paths make up the path we are looking for.

The source indices also contain the information that S3 has results for all sub-paths of our target path. We still have to take this information into account, because in combination with fragments from other sources we might get additional results. However, we do not have to consider joining sub-paths from the same source, because these results are already covered by longer paths. In the example we see that S2 will return far fewer results than S1 (because there are fewer projects than publications). We can use this information to optimize the process of joining results.

A key issue connected with indexing information sources is the trade-off between required storage space and the computational properties of index-based query processing. Compared to index structures used to speed up query processing within an information source, a source index is relatively small, as it does not encode information about individual elements in a source. Therefore, the size of the index is independent of the size of the indexed information sources. The relevant parameters in our case are the number of sources s and the length of the schema path n. More specifically, in the worst case (namely when all sources contain results for the complete schema path) a source index hierarchy contains source indices for every sub-path of the indexed schema path. As the number of all sub-paths of a path is 1 + 2 + ... + n = n(n+1)/2, the worst-case space complexity of a source index hierarchy is O(s * n^2). We conclude that the length of the indexed path is the significant parameter here.
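To make Definitions 2 and 3 concrete, the following is a minimal sketch of a source index hierarchy; it is not part of the system described in this paper and uses Python purely for illustration. The per-source path counts (the n_k of Definition 2) and the source names S1, S2, S3 are hypothetical; in practice the counts would be obtained from the repositories themselves.

def sub_paths(path):
    # All contiguous sub-paths of `path` (as tuples of labels), longest first.
    n = len(path)
    for length in range(n, 0, -1):
        for start in range(0, n - length + 1):
            yield path[start:start + length]

def build_hierarchy(path, counts):
    # counts[source][sub_path] -> number of instance paths in that source.
    # Returns {sub_path: {source: count}}, omitting empty source indices (Def. 3).
    hierarchy = {}
    for p in sub_paths(path):
        index = {src: cnt[p] for src, cnt in counts.items() if cnt.get(p, 0) > 0}
        if index:
            hierarchy[p] = index
    return hierarchy

# Hypothetical statistics for the sources of Example 1.
path = ('author', 'affiliation', 'carriesOut', 'topic')
counts = {
    'S1': {('author',): 5000, ('affiliation',): 4800,
           ('author', 'affiliation'): 4500},
    'S2': {('carriesOut',): 300, ('topic',): 450,
           ('carriesOut', 'topic'): 200},
    'S3': {p: 100 for p in sub_paths(path)},   # the portal covers every sub-path
}

hierarchy = build_hierarchy(path, counts)
print(hierarchy[('author', 'affiliation')])   # {'S1': 4500, 'S3': 100}
print(hierarchy[path])                        # {'S3': 100}

The counts stored alongside each source are exactly the statistics referred to above for estimating communication costs and ordering joins.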
3.2 Query Answering Algorithm

Using the notion of a source index hierarchy, we can now define a basic algorithm for answering queries using multiple sources of information. The task of this algorithm is to determine all possible combinations of sub-paths of the given query path. For each of these combinations, it then has to determine the sources containing results for the path fragments, retrieve these results, and join them into a result for the complete path. The main task is to guarantee that we indeed check all possible combinations of sub-paths for the query path. The easiest way of guaranteeing this is to use a simple tree-recursion algorithm that retrieves results for the complete path, then splits the original path, and joins the results of recursive calls for the sub-paths. In order to capture all possible splits, this has to be done for every possible split point in the original path. The corresponding semi-formal algorithm is given below (Algorithm 1).

Algorithm 1 Compute Answers.
Require: A schema path p = l_0, ..., l_{n-1}
Require: A source index hierarchy h = (Phi_n, ..., Phi_1) for p
for all sources s_k in source index Phi_n do
    ANSWERS := instances of schema path p in source s_k
    RESULT := RESULT ∪ ANSWERS
end for
if n >= 2 then
    for all i = 1, ..., n-1 do
        p_{0..i-1} := l_0, ..., l_{i-1}
        p_{i..n-1} := l_i, ..., l_{n-1}
        h_{0..i-1} := sub-hierarchy of h rooted at the source index for p_{0..i-1}
        h_{i..n-1} := sub-hierarchy of h rooted at the source index for p_{i..n-1}
        RES_1 := ComputeAnswers(p_{0..i-1}, h_{0..i-1})
        RES_2 := ComputeAnswers(p_{i..n-1}, h_{i..n-1})
        RESULT := RESULT ∪ join(RES_1, RES_2)
    end for
end if
return RESULT

Note that Algorithm 1 is far from optimal with respect to runtime performance. The straightforward recursion scheme does not take specific actions to prevent unnecessary work, and it does not select an optimal order for joining sub-paths either. We can improve this situation by using knowledge about the information in the different sources and performing query optimization.
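Before turning to optimization, here is a minimal executable sketch of Algorithm 1 (Python, illustrative only). It assumes a hypothetical fetch(source, path) call that forwards the path fragment to a source and returns its instance paths; to keep the sketch short, an instance path is represented only by its (start node, end node) pair, and the hierarchy is the dictionary built in the previous sketch.

def compute_answers(path, hierarchy, fetch):
    # Semi-formal Algorithm 1: take the answers every indexed source has for
    # the complete path, then split the path at every position, recurse, and
    # join the partial answers on the shared node (cf. Definition 4).
    result = set()
    for source in hierarchy.get(path, {}):
        result |= fetch(source, path)            # instances as (start, end) pairs
    if len(path) >= 2:
        for i in range(1, len(path)):
            left = compute_answers(path[:i], hierarchy, fetch)
            right = compute_answers(path[i:], hierarchy, fetch)
            result |= {(a, d) for (a, b) in left for (c, d) in right if b == c}
    return result

# A toy fetch() over hand-written data reproduces Example 1: the
# (author, affiliation) pairs from S1 joined with the (carriesOut, topic)
# pairs from S2 are added to the complete paths found in S3.

As the text notes, this naive recursion recomputes sub-results and joins them in a fixed order; it only serves to make the split-and-join structure of Algorithm 1 explicit.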
QUERY OPTIMIZATION

In the previous section we described a light-weight index structure for distributed RDF querying. Its main task is to index schema paths w.r.t. the underlying sources that contain them. Compared to instance-level indexing, our approach does not require creating and maintaining oversized indices, since there are far fewer sources than there are instances. Instance indexing would not scale in the web environment and, as mentioned above, in many cases it would not even be applicable, e.g., when sources do not allow replication of their data (which is what instance indices essentially do). The downside of our approach, however, is that query answering without index support at the instance level is much more computationally intensive. Moreover, in the context of semantic web portal applications the queries are no longer man-entered but rather generated by a portal's front-end (triggered by the user), and they often exceed the size (especially the length of the path expression) which can easily be handled by brute force. Therefore we focus in this section on query optimization as an important part of a distributed RDF query system. We try to avoid re-inventing the wheel and once again seek inspiration in the database field, making it applicable by "relationizing" the RDF model.

Figure 3: Source index hierarchy for the given query path.

Each single schema path p_i of length 1 (also called a 1-path) can be perceived as a relation with two attributes: the source vertex s(p_i) and the target vertex t(p_i). A schema path of length more than 1 is modelled as a set of relations joined together by the identity of the adjacent vertices, essentially representing a chain query of joins as defined in Definition 4. This relational view over an RDF graph offers the possibility to re-use the extensive research on join optimization in databases, e.g. [1, 8, 9, 17, 20].

Taking into account the (distributed) RDF context of the join ordering problem, there are several specifics to note when devising a good query plan. As in distributed databases, communication costs contribute significantly to the overall cost of a query plan. Since in our case the distribution is assumed to be realized via an IP network with variable bandwidth, the communication costs are likely to contribute substantially to the overall processing costs, which makes the minimization of data transmission across the network very important. Unless the underlying sources provide join capabilities, the data transmission cannot be reduced much: all (selected) bits of data from the sources are joined by the mediator and hence must be transmitted via the network.

There may exist different dependencies (both structural and extensional) in the way the data is distributed. If information about such dependencies is available, it essentially enables the optimizer to prune join combinations which cannot yield any results. The existence of such dependencies can be (to some extent) computed or discovered prior to querying, during the initial integration phase. Human insight is, however, often needed in order to avoid false dependency conclusions, which could potentially influence the completeness of query answering.

Performance and data statistics are both necessary for the optimizer to make the right decision. In general, the more the optimizer knows about the underlying sources and data, the better optimized the query plan is. However, taking into account the autonomy of the sources, the necessary statistics are not always available. We design our mediator to cope with incomplete statistical information in such a way that the missing parameters are estimated as being worse than those that are known (a pessimistic approach). Naturally, the performance of the optimizer is then lower, but it increases steadily as the estimations are made more realistic based on the actual responses from the underlying sources; this is also known as optimizer calibration.

As indicated above, the computational capabilities of the underlying sources may vary considerably. We distinguish between those sources that can only retrieve the selected local data (pull-up strategy) and those that can perform joins of their local and incoming external data (push-down strategy), thus offering computational services that could be used to achieve both a higher degree of parallelism and smaller data transmission over the network, e.g., by applying semi-join reductions [1]. At present, however, most sources are capable only of selecting the desired data within their extent, i.e., they do not offer the join capability. Therefore, in the following we focus mainly on local optimization at the mediator's side.

For this purpose we need to perceive an RDF model as a set of relations on which we can apply optimization results from the area of relational databases. In this context the problem of join ordering arises when we want to compute the results for schema paths from partial results obtained from different sources.
Creating the result for a schema path corresponds to the problem of computing the result of a chain query, as defined below:

DEFINITION 4 (CHAIN QUERY). Let p be a schema path composed of the 1-paths p_1, ..., p_n. The chain query of p is the n-join p_1 ⋈_{t(p_1)=s(p_2)} p_2 ⋈_{t(p_2)=s(p_3)} p_3 ⋈ ... ⋈ p_n, where s(p_i) and t(p_i) return the identity of the source and target node, respectively. As the join condition and attributes follow the same pattern for all joins in the chain query, we omit them whenever they are clear from the context.

In other words, to follow a path p of length 2 means performing a join between the two paths of length 1 which p is composed of. The problem of join optimization is to determine the right order in which the joins should be computed, such that the overall response time for computing the path instances is minimized. (In case the sources also offer join capabilities, the problem is not only in which order but also where the joins should take place.) Note that a chain query in Definition 4 does not include explicit joins, i.e., those specified in the WHERE clause or by assigning the same variable names along the path expression. When we append these explicit joins, the shape of the query usually changes from a linear chain to a query graph containing a circle or a star, making the join ordering problem NP-hard [15].

4.1 Space Complexity

Disregarding the solutions obtained by the commutativity of joins, each query execution plan can be associated with a sequence of numbers that represents the order in which the relations are joined. We refer to this sequence as the footprint of the execution plan.

EXAMPLE 3. For brevity, assume the following name substitutions in the model introduced in Example 1: the concept names Article, Employee, Organization, Project, ResearchTopic become a, b, c, d, e, respectively; the property names author, affiliation, carriesOut, topic are substituted with 1, 2, 3, 4, respectively. Figure 4 presents two possible execution plans and their footprints.

Figure 4: Two possible query executions and their footprints.

If the order of the join operands also matters, i.e., the commutativity law is considered, the sequence of the operands of each join is recorded in the footprint as well. The solution space consists of the query plans (their footprints) which can be generated. We distinguish two cases: first the larger solution space of bushy trees, and then its subset consisting of right-deep trees.

If we allow an arbitrary order of joins, the resulting query plans are so-called bushy trees, where the operands of a join can be either a base relation (that part of the path which can be retrieved directly from one source) or the result of a previous join. For a query with n joins there are n! different query execution plans if we disregard the commutativity of joins and cross products. Note that in the case of bushy trees, there might be several footprints associated with one query tree. For instance, the bushy tree in Example 3 can be evaluated in different orders, yielding two more footprints: (2, 4, 1, 3) or (4, 2, 1, 3). In our current approach, these footprints would be equivalent w.r.t.
the cost they represent. However, treating them independently allows us to also consider, in the future, semi-join optimization [1], where their costs might differ considerably.

If the commutativity of joins is taken into account, there are C(2n, n) * n!/2^n different possibilities of ordering the joins and their individual constituents [22]. However, in the case of memory-resident databases, where all data fits in main memory, the possibilities generated by the commutativity law can be neglected for some join methods, as they mainly play a role in a cost model minimizing disk-memory operations; we discuss this issue further in Subsection 4.2. We adopt the memory-only strategy, as in our context there are always only two attributes per relation, both of them being URI references which, when the namespace prefix is stored separately, yield a very small size. Of course, the assumption we make here is that the Sesame server is equipped with a sufficient amount of memory to accommodate all intermediate tuples of the relations appearing in the query.

A special case of a general execution plan is a so-called right-deep tree, which has left-hand join operands consisting only of base relations. For a footprint that starts with the r-th join there are C(n-1, r) possibilities of finishing the joining sequence. Thus there are in total Sum_{i=0..n-1} C(n-1, i) = 2^(n-1) different query execution plans (the number corresponds to the sum of the (n-1)-th line of Pascal's triangle). In this specially shaped query tree there exists an execution pipeline of length n-1 that allows both for easier parallelization and for shortening the response time [8]. This property is very useful in the context of the WWW, where many applications are built in a producer-consumer paradigm.
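For small n, the sizes of the two solution spaces can be checked directly by evaluating the formulas above; the following lines (Python, purely illustrative and not part of the system) do so.

from math import comb, factorial

for n in range(2, 7):                                    # n = number of joins
    bushy = factorial(n)                                 # bushy trees, commutativity ignored
    bushy_comm = comb(2 * n, n) * factorial(n) // 2 ** n # bushy trees with commutativity
    right_deep = sum(comb(n - 1, i) for i in range(n))   # = 2 ** (n - 1)
    print(f"n={n}: bushy {bushy}, with commutativity {bushy_comm}, right-deep {right_deep}")

The gap between the bushy and the right-deep counts is what makes the restriction to right-deep trees attractive when optimization time is limited.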
4.2 Cost Model

The main goal of query optimization is to reduce the computational cost of processing the query, both in terms of the transmission cost and the cost of performing join operations on the retrieved result fragments. In order to determine a good strategy for processing a query, we have to be able to determine the exact cost of a query execution plan and to compare it to the costs of alternative plans. For this purpose, we capture the computational costs of alternative query plans in a cost model that provides the basis for the optimization algorithm discussed later.

As mentioned earlier, we adopt the memory-resident paradigm, and the cost we are trying to minimize is equivalent to minimizing the total execution time. There are two main factors that influence the resulting cost in our model. The first is the cost of data transmission to the mediator, and the second is the data processing cost.

DEFINITION 5 (TRANSMISSION COST). The transmission cost of the path instances of a schema path p from a source Σ to the mediator is modelled as C_p^Σ = C_init^Σ + |p| * length_p * ||s||_Σ * c_Σ, where C_init^Σ represents the cost of initiating the data transmission, |p| denotes the cardinality, length_p stands for the length of the schema path p, ||s||_Σ is the size of a URI at the source Σ (different sources may model URIs differently; we assume, however, that at the mediator all URIs are represented in the same way), and c_Σ represents the transmission cost per data unit from Σ to the mediator.

Since we apply all reducing operations (e.g., selections and projections) prior to the data transmission phase, the data processing mainly consists of join costs. The cost of a join operation is influenced by the cardinality of the two operands and the join method which is utilized. As we already pointed out, there are no instance indices at the mediator side that would allow us to use some join "shortcuts". In the following we consider two join methods: a nested loop join and a hash join, both without additional indexing support.

DEFINITION 6 (NESTED LOOP JOIN COST). The processing cost of a nested loop join of two relations p, r is defined as NLC_{p,r} = |p| * |r| * u(p, r), where |x| denotes the cardinality of the relation x and u(p, r) represents the cost of the identity comparison.

Note that the nested loop join allows for a more sophisticated definition of object equality than a common URI comparison. In particular, if necessary, the basic URI comparison can be complemented by (recursive) comparisons of property values or mapping look-ups. This offers room to address the issue of URI diversity, also known as the designation problem, where two different URIs refer to the same real-life object.

DEFINITION 7 (HASH JOIN COST). The processing cost of a hash join of two relations p, r is defined as HJC_{p,r} = ι * |p| + |r| * ρ * f, where |x| denotes the cardinality of the relation x, ι represents the cost of inserting a path instance in the hash table (the building factor), ρ models the cost of retrieving a bucket from the hash table, and f stands for the average number of path instances in the bucket.

Unlike the previous join method, the hash join algorithm assumes that object equality can be determined by a simple URI comparison, in other words that the URI references are consistent across the sources. Another difference is that in the case of the nested loop join for in-memory relations the join commutativity can be neglected, as the query plan produced from another query plan by the commutativity law will have exactly the same cost. However, in the case of the hash join method the order of the operands influences the cost, and thus the solution space must also include those solutions produced by the commutativity law.

DEFINITION 8 (QUERY PLAN COST). The overall cost of a query plan T consists of the sum of all communication costs and all join processing costs of the query tree: C_T = Sum_{i=1..n} C_{p_i} + C_join(T), where C_join(T) represents the join processing cost of the query tree T; it is computed as a sum of recurrent applications of the formula in Definition 6 or 7, depending on which join method is utilized. To compute the cardinality of non-base join arguments, a join selectivity is used. The join selectivity σ is defined as the ratio between the tuples retained by the join and those created by the Cartesian product: σ = |p ⋈ r| / |p × r|.

As it is not possible to determine the precise join selectivity before the query is evaluated, σ for each sub-path join is assumed to be estimated and available in the source index hierarchy. After the evaluation of each query, the initial σ estimates are improved and made more realistic.
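Definitions 5-8 translate directly into code. The sketch below (Python, with made-up parameter values) prices a hypothetical plan that ships two base relations to the mediator and hash-joins them there; it illustrates the shape of the cost model rather than the system's actual implementation.

def transmission_cost(card, path_len, uri_size, c_init, c_per_unit):
    # Definition 5: C_p = C_init + |p| * length_p * ||s|| * c
    return c_init + card * path_len * uri_size * c_per_unit

def nested_loop_cost(card_p, card_r, compare_cost):
    # Definition 6: NLC = |p| * |r| * u(p, r)
    return card_p * card_r * compare_cost

def hash_join_cost(card_p, card_r, insert_cost, bucket_cost, bucket_size):
    # Definition 7: HJC = iota * |p| + |r| * rho * f
    return insert_cost * card_p + card_r * bucket_cost * bucket_size

def estimated_cardinality(card_p, card_r, selectivity):
    # Used when an operand is itself a join result (Definition 8): sigma * |p| * |r|
    return selectivity * card_p * card_r

# Hypothetical plan: S1 delivers 4500 (author, affiliation) paths, S2 delivers
# 200 (carriesOut, topic) paths; both are transmitted and hash-joined at the mediator.
cost = (transmission_cost(4500, 2, 40, 10, 0.01)
        + transmission_cost(200, 2, 40, 10, 0.01)
        + hash_join_cost(4500, 200, 0.002, 0.001, 1.2))
print(round(cost, 2))

Definition 8 then sums such terms over the whole query tree, using estimated_cardinality (with the σ estimates kept in the source index hierarchy) whenever a join operand is not a base relation.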
4.3 Heuristics for join ordering

While the join ordering problem in the context of a linear/chain query can be solved in polynomial time [12], we have to take into account the more complex problem where the explicit joins are also involved, which is proven to be NP-hard [15]. It is apparent that evaluating all possible join strategies to achieve the global optimum quickly becomes infeasible for larger n. In these cases we have to rely on heuristics that compute a "good enough" solution given the constraints. In fact, this is a common approach for optimizers in interactive systems. There, optimization is often about avoiding bad query plans in very short time, rather than devoting a lot of precious CPU time to finding the optimal plan, especially since it is not uncommon that the optimal plan improves the heuristically obtained solutions only marginally.

Heuristics for the join ordering problem have been studied extensively in the database community. In this work we adopt the results of comparing different join ordering heuristics from [17]. Inspired by this survey, we chose to apply the two-phase optimization consisting of the iterative improvement (II) algorithm followed by the simulated annealing (SA) algorithm [20]. This combination performs very well on the class of queries we are interested in, both in the bushy and the right-deep tree solution space, and degrades gracefully under time constraints.

The II algorithm is a simple greedy heuristic which accepts any improvement on the cost function. II randomly generates several initial solutions, taking them as starting points for a walk in the chosen solution space. The actual traversal is performed by applying a series of random moves from a predefined set. The cost function is evaluated for every such move, remembering the best solution so far. The main idea of this phase is to descend rapidly into several local minima, assuring the aforementioned graceful degradation. For each of the sub-optimal solutions, the second phase of the SA algorithm is applied. The task of the SA phase is to explore the "neighborhood" of a promising solution more thoroughly, hopefully lowering the cost.

Algorithm 2 Simulated annealing algorithm
Require: a start solution startSolution
Require: a start temperature startTempr
solution := startSolution
bestSolution := startSolution
tempr := startTempr
cost := Cost(bestSolution)
minCost := cost
repeat
    repeat
        newSolution := NEW(solution)
        newCost := Cost(newSolution)
        if newCost <= cost then
            solution := newSolution
            cost := newCost
        else if e^(-(newCost - cost)/tempr) >= RAND(0..1) then
            solution := newSolution
            cost := newCost
        end if
        if cost < minCost then
            bestSolution := solution
            minCost := cost
        end if
    until equilibrium reached
    DECREASE(tempr)
until frozen
return bestSolution

The pseudo-code of the SA phase is presented in Algorithm 2. It takes a starting solution from the II phase and, similarly to II, performs random moves from a predefined set, accepting all cost improvements. However, unlike II, the SA algorithm can also accept, with a certain probability, moves that result in a solution with a higher cost than the current best solution. The probability of such acceptance depends on the temperature of the system and the cost difference. The idea is that at the beginning the system is hot and more easily accepts moves yielding solutions with higher costs. However, as the temperature decreases the system becomes more stable, strongly preferring solutions with lower costs.
The SA algorithm improves on the II heuristics by making\nthe stop condition less prone to get trapped in a local minimum; SA\nstops when the temperature drops below a certain threshold or if the\nbest solution so far was not improved in a number of consecutive\n637\ntemperature decrements, the system is considered frozen. There\nare two sets of moves: one for the bushy solution space and one for\nthe right-deep solution space; for details we refer the reader to [20].\nFigure 5: Acceptance probability with respect to the temperature\nand the cost difference.\nFigure 5 shows the acceptance probability dependency in the SA\nphase computed for the range of parameters that we used in our experiments\n. As we adopted the two-phase algorithm our simulations\nwere able to reproduce the trends in results presented in [17]; due\nto the lack of space we omit the detail performance analysis and the\ninterested reader is referred to the aforementioned survey.\nRELATED WORK\nIn this paper we focused mainly on basic techniques such as indexing\nand join ordering. Relevant related work is described in the\nremainder of this section. More advanced techniques such as site\nselection and dynamic data placement are not considered, because\nthey are not supported by the current architecture of the system.\nWe also do not consider techniques that involve view-based query\nanswering techniques [6] because we are currently not considering\nthe problem of integrating heterogeneous data.\n5.1\nIndex Structures for Object Models\nThere has been quite a lot of research on indexing object oriented\ndatabases. The aim of this work was to speed up querying and navigation\nin large object databases. The underlying idea of many existing\napproaches is to regard an object base as a directed graph,\nwhere objects correspond to nodes, and object properties to links\n[16]. This view directly corresponds to RDF data, that is often also\nregarded as a directed graph. Indices over such graph structures\nnow describe paths in the graph based on a certain pattern normally\nprovided by the schema. Different indexing techniques vary on the\nkind of path patterns they describe and on the structure of the index.\nSimple index structures only refer to a single property and organize\nobjects according to the value of that property. Nested indices and\npath indices cover a complete path in the model that might contain\na number of objects and properties [2]. In RDF as well as in object\noriented databases, the inheritance relation plays a special role as it\nis connected with a predefined semantics. Special index structures\nhave been developed to speed up queries about such hierarchies and\nhave recently been rediscovered for indexing RDF data [5]. In the\narea of object-oriented database systems, these two kinds of indexing\nstructures have been combined resulting in the so-called nested\ninheritance indices [3] and generalized nested inheritance indices\n[16]. These index structures directly represent implications of inheritance\nreasoning, an approach that is equivalent to indexing the\ndeductive closure of the model.\n5.2\nQuery Optimization\nThere is a long tradition of work on distributed databases in general\n[13] and distributed query processing in particular [10]. The\ndominant problem is the generation of an optimal query plan that\nreduces execution costs as much as possible while guaranteeing\ncompleteness of the result. 
As described by Kossmann in [10],\nthe choice of techniques for query plan generation depends on the\narchitecture of the distributed system. He discusses basic techniques\nas well as methods for client-server architectures and for\nheterogeneous databases. Due to our architectural limitations (e.g.,\nlimited source capabilities) we focused on join-ordering optimization\nwhich can be performed in a centralized manner by the mediator\n. While some restricted cases of this problem can be solved in\na polynomial time [12, 11], the general problem of finding an optimal\nplan for evaluating join queries has been proven to be NP-hard\n[15]. The approaches to tackle this problem can be split into several\ncategories [17]: deterministic algorithms, randomized algorithms,\nand genetic algorithms. Deterministic algorithms often use techniques\nof dynamic programming (e.g. [12]), however, due to the\ncomplexity of the problem they introduce simplifications, which\nrender them as heuristics. Randomized algorithms (e.g. [20, 19]),\nperform a random walk in the solution space according to certain\nrules. After the stop-condition is fulfilled, the best solution found\nso far is declared as the result. Genetic algorithms (e.g. [18]) perceive\nthe problem as biological evolution; they usually start with a\nrandom population (set of solutions) and generate offspring by applying\na crossover and mutation. Subsequently, the selection phase\neliminates weak members of the new population.\nLIMITATIONS AND FUTURE WORK\nThe work reported in this paper can be seen as a very first step\ntowards a solution for the problem of distributed processing of RDF\nqueries. We motivated the overall problem and proposed some data\nstructures and algorithms that deal with the most fundamental problems\nof distributed querying in a predefined setting. We identified\na number of limitations of the current proposal with respect to the\ngenerality of the approach and assumptions made. These limitations\nalso set the agenda for future work to be done on distributed\nRDF querying and its support in Sesame.\nImplementation\nCurrently, our work on distributed query processing\nis of a purely theoretical nature. The design and evaluation\nof the methods described are based on previous work reported in\nthe literature and on worst-case complexity estimations. The next\nstep is to come up with a test implementation of a distributed RDF\nstorage system. The implementation will follow the architecture\nintroduced in the beginning of the paper and will be built on top of\nthe Sesame storage and retrieval engine. The implementation will\nprovide the basis for a more practical evaluation of our approach\nand will allow us to make assertions about the real system behavior\nin the presence of different data sets and different ways they are\ndistributed. Such a practical evaluation will be the basis for further\noptimization of the methods.\nSchema-Awareness\nOne of the limitations of the approach described\nin this paper concerns schema aware querying in a distributed\nsetting. Even if every single repository is capable of computing\nthe deductive closure of the model it contains, the overall\n638\nresult is not necessarily complete, as schema information in one\nrepository can have an influence on information in other repositories\n. This information could lead to additional conclusions if taken\ninto account during query processing. 
In order to be able to deal\nwith this situation, we need to do some additional reasoning within\nthe mediator in order to detect and process dependencies between\nthe different models.\nObject Identity\nOne of the basic operations of query processing\nis the computation of joins of relations that correspond to individual\nproperties. The basic assumption we make at this point is that we\nare able to uniquely determine object identity. Identity is essential\nbecause it is the main criterion that determines whether to connect\ntwo paths or not. From a pragmatic point of view, the URI of an\nRDF resource provides us with an identity criterion. While this\nmay be the case in a single repository, it is not clear at all whether\nwe can make this assumption in a distributed setting as different\nrepositories can contain information about the same real world object\n(e.g., a paper) and assign different URIs to it. To deal with this\nsituation we have to develop heuristics capable of deciding whether\ntwo resources describe the same real world object.\nQuery Model\nIn order to be able to design efficient index structures\nwe restricted ourselves to path queries as a query model that\nis directly supported. We argued above that tree-shaped queries\ncan be easily split into a number of path queries that have to be\njoined afterwards. Nevertheless, this simplification does not apply\nto the optimization part which is capable of processing also different\nquery shapes. An important aspect of future work is to extend\nour indexing approach to more expressive query models that also\ninclude tree and graph shaped queries which can be found in existing\nRDF query languages. It remains to be seen whether the same\nkind of structures and algorithms can be used for more complex\nqueries or whether we have to find alternatives.\nArchitecture\nThe starting point of our investigation was a particular\narchitecture, namely a distributed repository where the data\nis accessed at a single point but stored in different repositories. We\nfurther made the assumption that these repositories are read-only,\ni.e., they only provide answers to path queries that they are known\nto contain some information about.\nAn interesting question is how more flexible architectures can\nbe supported. We think of architectures where information is accessed\nfrom multiple points and repositories are able to forward\nqueries. Further we can imagine grid-based architectures where\ncomponents can perform local query processing on data received\nfrom other repositories. A prominent example of such more flexible\narchitectures are peer-to-peer systems. This would also bring\na new potential for optimization as peers may collaborate on query\nevaluation which in turn may help in reducing both the communication\nand processing costs.\nREFERENCES\n[1] P. Bernstein and D. Chiu. Using semi-joins to solve\nrelational queries. Journal of the ACM, 28:2540, 1981.\n[2] E. Bertino. An indexing technique for object-oriented\ndatabases. In Proceedings of the Seventh International\nConference on Data Engineering, April 8-12, 1991, Kobe,\nJapan, pages 160170. IEEE Computer Society, 1991.\n[3] E. Bertino and P. Foscoli. Index organizations for\nobject-oriented database systems. TKDE, 7(2):193209,\n1995.\n[4] J. Broekstra, A. Kampman, and F. van Harmelen. Sesame: A\ngeneric architecture for storing and querying rdf and rdf\nschema. In The Semantic Web - ISWC 2002, volume 2342 of\nLNCS, pages 5468. Springer, 2002.\n[5] V. Christophides, D. Plexousakisa, M. 
Scholl, and\nS. Tourtounis. On labeling schemes for the semantic web. In\nProceedings of the 13th World Wide Web Conference, pages\n544555, 2003.\n[6] A. Halevy. Answering queries using views - a survey. The\nVLDB Journal, 10(4):270294, 2001.\n[7] J. Hendler. Agents and the semantic web. IEEE Intelligent\nSystems, (2), 2001.\n[8] H. Hsiao, M. Chen, and P. Yu. Parallel execution of hash\njoins in parallel databases. IEEE Transactions on Parallel\nand Distributed Systems, 8:872883, 1997.\n[9] Y. Ioannidis and E. Wong. Query optimization by simulated\nannealing. In ACM SIGMOD International Conference on\nManagement of Data, pages 922. ACM:Press, 1987.\n[10] D. Kossmann. The state of the art in distributed query\nprocessing. ACM Computing Surveys, 32(4):422469, 2000.\n[11] G. Moerkotte. Constructing optimal bushy trees possibly\ncontaining cross products for order preserving joins is in p,\ntr-03-012. Technical report, University of Mannheim, 2003.\n[12] K. Ono and G. M. Lohman. Measuring the complexity of\njoin enumeration in query optimization. In 16th International\nConference on Very Large Data Bases, pages 314325.\nMorgan Kaufmann, 1990.\n[13] M. Ozsu and P. Valduriez. Principles of Distributed\nDatabase Systems. Prentice Hall, 1991.\n[14] D. Rotem. Spatial join indices. In Proceedings of\nInternational Conference on Data Engineering, 1991.\n[15] W. Scheufele and G. Moerkotte. Constructing optimal bushy\nprocessing trees for join queries is np-hard, tr-96-011.\nTechnical report, University of Mannheim, 1996.\n[16] B. Shidlovsky and E. Bertino. A graph-theoretic approach to\nindexing in object-oriented databases. In S. Y. W. Su, editor,\nProceedings of the Twelfth International Conference on Data\nEngineering, February 26 - March 1, 1996, New Orleans,\nLouisiana, pages 230237. IEEE Computer Society, 1996.\n[17] M. Steinbrunn, G. Moerkotte, and A. Kemper. Heuristic and\nrandomized optimization for join ordering problem. The\nVLDB Journal, 6:191208, 1997.\n[18] M. Stillger and M. Spiliopoulou. Genetic programming in\ndatabase query optimization. In J. R. Koza, D. E. Goldberg,\nD. B. Fogel, and R. L. Riolo, editors, Genetic Programming\n1996: Proceedings of the First Annual Conference, pages\n388393. MIT Press, 1996.\n[19] A. Swami. Optimization of large join queries: combining\nheuristics and combinatorial techniques. In ACM SIGMOD\nInternational Conference on Management of Data, pages\n367376. ACM:Press, 1989.\n[20] A. Swami and A. Gupta. Optimization of large join queries.\nIn ACM SIGMOD International Conference on Management\nof Data, pages 817. ACM:Press, 1988.\n[21] Z. Xie and J. Han. Join index hierarchies for supporting\nefficient navigations in object-oriented databases. In\nProceedings of the International Conference on Very Large\nData Bases, pages 522533, 1994.\n[22] C. Yu and W. Meng. Principles of Database Query\nProcessing for Advanced Applications. Morgan Kaufmann\nPublishers, 1998.\n639\n", "keywords": "index structure;external sources;query optimization;distributed architecture;repositories;RDF;infrastructure;RDF Querying;Optimization;Index Structures;semantic web;join ordering problem"} {"name": "11", "title": "A Functional Correspondence between Evaluators and Abstract Machines", "abstract": "We bridge the gap between functional evaluators and abstract machines for the \u03bb-calculus, using closure conversion, transformation into continuation-passing style, and defunctionalization. 
We illustrate this approach by deriving Krivine's abstract machine from an ordinary call-by-name evaluator and by deriving an ordinary call-by-value evaluator from Felleisen et al.'s CEK machine. The first derivation is strikingly simpler than what can be found in the literature. The second one is new. Together, they show that Krivine's abstract machine and the CEK machine correspond to the call-by-name and call-by-value facets of an ordinary evaluator for the \u03bb-calculus. We then reveal the denotational content of Hannan and Miller's CLS machine and of Landin's SECD machine. We formally compare the corresponding evaluators and we illustrate some degrees of freedom in the design spaces of evaluators and of abstract machines for the \u03bb-calculus with computational effects. Finally, we consider the Categorical Abstract Machine and the extent to which it is more of a virtual machine than an abstract machine", "fulltext": "Introduction and related work\nIn Hannan and Miller's words [23, Section 7], there are fundamental\ndifferences between denotational definitions and definitions of\nabstract machines. While a functional programmer tends to be\nfamiliar with denotational definitions [36], he typically wonders\nabout the following issues:\nDesign:\nHow does one design an abstract machine? How were\nexisting abstract machines, starting with Landin's SECD machine\n, designed? How does one make variants of an existing\nabstract machine? How does one extend an existing abstract\nmachine to a bigger source language? How does one go about\ndesigning a new abstract machine? How does one relate two\nabstract machines?\nCorrectness:\nHow does one prove the correctness of an abstract\nmachine?\nAssuming it implements a reduction strategy,\nshould one prove that each of its transitions implements a part\nof this strategy? Or should one characterize it in reference to\na given evaluator, or to another abstract machine?\nA variety of answers to these questions can be found in the literature\n. Landin invented the SECD machine as an implementation\nmodel for functional languages [26], and Plotkin proved its\ncorrectness in connection with an evaluation function [30, Section\n2]. Krivine discovered an abstract machine from a logical\nstandpoint [25], and Cregut proved its correctness in reference to\na reduction strategy; he also generalized it from weak to strong\nnormalization [7]. Curien discovered the Categorical Abstract Machine\nfrom a categorical standpoint [6, 8]. Felleisen et al. invented\nthe CEK machine from an operational standpoint [16, 17, 19].\nHannan and Miller discovered the CLS machine from a proof-theoretical\nstandpoint [23].\nMany people derived, invented, or\n(re-)discovered Krivine's machine. Many others proposed modifications\nof existing machines. And recently, Rose presented a\nmethod to construct abstract machines from reduction rules [32],\nwhile Hardin, Maranget, and Pagano presented a method to extract\nthe reduction strategy of a machine by extracting axioms from its\ntransitions and structural rules from its architecture [24].\nIn this article, we propose one constructive answer to all the questions\nabove.\nWe present a correspondence between functional\nevaluators and abstract machines based on a two-way derivation\nconsisting of closure conversion, transformation into continuation-passing\nstyle (CPS), and defunctionalization. 
This two-way derivation\nlets us connect each of the machines above with an evaluator,\nand makes it possible to echo variations in the evaluator into variations\nin the abstract machine, and vice versa. The evaluator clarifies\nthe reduction strategy of the corresponding machine. The abstract\nmachine makes the evaluation steps explicit in a transition system.\n8\nSome machines operate on\n\n-terms directly whereas others operate\non compiled\n\n-terms expressed with an instruction set. Accordingly\n, we distinguish between abstract machines and virtual machines\nin the sense that virtual machines have an instruction set and\nabstract machines do not; instead, abstract machines directly operate\non source terms and do not need a compiler from source terms to\ninstructions. (Gregoire and Leroy make the same point when they\ntalk about a compiled implementation of strong reduction [21].)\nPrerequisites: ML, observational equivalence, abstract\nmachines,\n\n-interpreters, CPS transformation, defunctionalization\n, and closure conversion.\nWe use ML as a meta-language, and we assume a basic familiarity\nwith Standard ML and reasoning about ML programs. In particular\n, given two pure ML expressions\ne\nand\ne'\nwe write\ne\ne'\nto\nexpress that\ne\nand\ne'\nare observationally equivalent. Most of our\nimplementations of the abstract machines raise compiler warnings\nabout non-exhaustive matches. These are inherent to programming\nabstract machines in an ML-like language. The warnings could be\navoided with an option type or with an explicit exception, at the\nprice of readability and direct relation to the usual mathematical\nspecifications of abstract machines.\nIt would be helpful to the reader to know at least one of the machines\nconsidered in the rest of this article, be it Krivine's machine\n, the CEK machine, the CLS machine, the SECD machine,\nor the Categorical Abstract Machine.\nIt would also be helpful\nto have already seen a\n\n-interpreter written in a functional language\n[20, 31, 35, 39]. In particular, we make use of Strachey's\nnotions of expressible values, i.e., the values obtained by evaluating\nan expression, and denotable values, i.e., the values denoted by\nidentifiers [38].\nWe make use of the CPS transformation [12, 33]: a term is CPS-transformed\nby naming all its intermediate results, sequentializing\ntheir computation, and introducing continuations. Plotkin was the\nfirst to establish the correctness of the CPS transformation [30].\nWe also make use of Reynolds's defunctionalization [31]: defunctionalizing\na program amounts to replacing each of its function\nspaces by a data type and an apply function; the data type enumerates\nall the function abstractions that may give rise to inhabitants\nof this function space in this program [15]. Nielsen, Banerjee,\nHeintze, and Riecke have established the correctness of defunctionalization\n[3, 29].\nA particular case of defunctionalization is closure conversion: in\nan evaluator, closure conversion amounts to replacing each of the\nfunction spaces in expressible and denotable values by a tuple, and\ninlining the corresponding apply function.\nWe would like to stress that all the concepts used here are elementary\nones, and that the significance of this article is the one-fits-all\nderivation between evaluators and abstract machines.\nOverview:\nThe rest of this article is organized as follows. 
We first consider\na call-by-name and a call-by-value evaluator, and we present the\ncorresponding machines, which are Krivine's machine and the CEK\nmachine. We then turn to the CLS machine and the SECD machine,\nand we present the corresponding evaluators. We finally consider\nthe Categorical Abstract Machine. For simplicity, we do not cover\nlaziness and sharing, but they come for free by threading a heap of\nupdateable thunks in a call-by-name evaluator [2].\nCall-by-name, call-by-value, and the calculus\nWe first go from a call-by-name evaluator to Krivine's abstract machine\n(Section 2.1) and then from the CEK machine to a call-by-value\nevaluator (Section 2.2). Krivine's abstract machine operates\non de Bruijn-encoded\n\n-terms, and the CEK machine operates on\n\n-terms with names. Starting from the corresponding evaluators, it\nis simple to construct a version of Krivine's abstract machine that\noperates on\n\n-terms with names, and a version of the CEK machine\nthat operates on de Bruijn-encoded\n\n-terms (Section 2.3).\nThe derivation steps consist of closure conversion, transformation\ninto continuation-passing style, and defunctionalization of continuations\n. Closure converting expressible and denotable values makes\nthe evaluator first order. CPS transforming the evaluator makes its\ncontrol flow manifest as a continuation. Defunctionalizing the continuation\nmaterializes the control flow as a first-order data structure.\nThe result is a transition function, i.e., an abstract machine.\n2.1\nFrom a call-by-name evaluator to Krivine's\nmachine\nKrivine's abstract machine [7] operates on de Bruijn-encoded\n\nterms\n. In this representation, identifiers are represented by their\nlexical offset, as traditional since Algol 60 [40].\ndatatype term = IND of int\n(* de Bruijn index *)\n| ABS of term\n| APP of term * term\nPrograms are closed terms.\n2.1.1\nA higher-order and compositional call-by-name\nevaluator\nOur starting point is the canonical call-by-name evaluator for the\n\n-calculus [35, 37]. This evaluator is compositional in the sense of\ndenotational semantics [34, 37, 41] and higher order (\nEval0.eval\n).\nIt is compositional because it solely defines the meaning of each\nterm as a composition of the meaning of its parts. It is higher order\nbecause the data types\nEval0.denval\nand\nEval0.expval\ncontain\nfunctions: denotable values (\ndenval\n) are thunks and expressible values\n(\nexpval\n) are functions. An environment is represented as a list\nof denotable values. A program is evaluated in an empty environment\n(\nEval0.main\n).\nstructure Eval0\n= struct\ndatatype denval = THUNK of unit -> expval\nand expval = FUNCT of denval -> expval\n(*\neval : term * denval list -> expval\n*)\nfun eval (IND n, e)\n= let val (THUNK thunk) = List.nth (e, n)\nin thunk ()\nend\n| eval (ABS t, e)\n= FUNCT (fn v => eval (t, v :: e))\n| eval (APP (t0, t1), e)\n= let val (FUNCT f) = eval (t0, e)\nin f (THUNK (fn () => eval (t1, e)))\nend\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, nil)\nend\n9\nAn identifier denotes a thunk. Evaluating an identifier amounts\nto forcing this thunk.\nEvaluating an abstraction yields a function\n. 
Evaluating an application requires the evaluation of the sub-expression\nin position of function; the intermediate result is a function\n, which is applied to a thunk.\n2.1.2\nFrom higher-order functions to closures\nWe now closure-convert the evaluator of Section 2.1.1.\nIn\nEval0\n, the function spaces in the data types of denotable\nand expressible values are only inhabited by instances of the\n\nabstractions\nfn v => eval (t, v :: e)\nin the meaning of abstractions\n, and\nfn () => eval (t1, e)\nin the meaning of applications.\nEach of these\n\n-abstractions has two free variables: a term and an\nenvironment. We defunctionalize these function spaces into closures\n[15, 26, 31], and we inline the corresponding apply functions.\nstructure Eval1\n= struct\ndatatype denval = THUNK of term * denval list\nand expval = FUNCT of term * denval list\n(*\neval : term * denval list -> expval\n*)\nfun eval (IND n, e)\n= let val (THUNK (t, e')) = List.nth (e, n)\nin eval (t, e')\nend\n| eval (ABS t, e)\n= FUNCT (t, e)\n| eval (APP (t0, t1), e)\n= let val (FUNCT (t, e')) = eval (t0, e)\nin eval (t, (THUNK (t1, e)) :: e')\nend\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, nil)\nend\nThe definition of an abstraction is now\nEval1.FUNCT (t, e)\ninstead\nof\nfn v => Eval0.eval (t, v :: e)\n, and its use is now\nEval1.eval\n(t, (Eval1.THUNK (t1, e)) :: e')\ninstead of\nf (Eval0.THUNK\n(fn () => Eval0.eval (t1, e)))\n. Similarly, the definition of a\nthunk is now\nEval1.THUNK (t1, e)\ninstead of\nEval0.THUNK (fn ()\n=> Eval0.eval (t1, e))\nand its use is\nEval1.eval (t, e')\ninstead\nof\nthunk ()\n.\nThe following proposition is a corollary of the correctness of defunctionalization\n.\nP\nROPOSITION\n1\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program, evaluating\nEval0.main p\nyields a value\nFUNCT f\nand evaluating\nEval1.main p\nyields a value\nFUNCT (t, e)\nsuch that\nf\nfn v => Eval1.eval (t, v :: e)\n2.1.3\nCPS transformation\nWe transform\nEval1.eval\ninto continuation-passing style.\n1\nDoing\nso makes it tail recursive.\n1\nSince programs are closed, applying\nList.nth\ncannot fail and\ntherefore it denotes a total function. We thus keep it in direct\nstyle [14].\nstructure Eval2\n= struct\ndatatype denval = THUNK of term * denval list\nand expval = FUNCT of term * denval list\n(*\neval : term * denval list * (expval -> 'a)\n*)\n(*\n-> 'a\n*)\nfun eval (IND n, e, k)\n= let val (THUNK (t, e')) = List.nth (e, n)\nin eval (t, e', k)\nend\n| eval (ABS t, e, k)\n= k (FUNCT (t, e))\n| eval (APP (t0, t1), e, k)\n= eval (t0, e, fn (FUNCT (t, e'))\n=> eval (t,\n(THUNK (t1, e)) :: e',\nk))\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, nil, fn v => v)\nend\nThe following proposition is a corollary of the correctness of the\nCPS transformation. (Here observational equivalence reduces to\nstructural equality over ML values of type\nexpval\n.)\nP\nROPOSITION\n2\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval1.main p\nEval2.main p\n2.1.4\nDefunctionalizing the continuations\nThe function space of the continuation is inhabited by instances of\ntwo\n\n-abstractions: the initial one in the definition of\nEval2.main\n,\nwith no free variables, and one in the meaning of an application,\nwith three free variables. 
To defunctionalize the continuation, we\nthus define a data type\ncont\nwith two summands and the corresponding\napply cont\nfunction to interpret these summands.\nstructure Eval3\n= struct\ndatatype denval = THUNK of term * denval list\nand expval = FUNCT of term * denval list\nand\ncont = CONT0\n| CONT1 of term * denval list * cont\n(*\neval : term * denval list * cont -> expval\n*)\nfun eval (IND n, e, k)\n= let val (THUNK (t, e')) = List.nth (e, n)\nin eval (t, e', k)\nend\n| eval (ABS t, e, k)\n= apply_cont (k, FUNCT (t, e))\n| eval (APP (t0, t1), e, k)\n= eval (t0, e, CONT1 (t1, e, k))\nand apply_cont (CONT0, v)\n= v\n| apply_cont (CONT1 (t1, e, k), FUNCT (t, e'))\n= eval (t, (THUNK (t1, e)) :: e', k)\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, nil, CONT0)\nend\nThe following proposition is a corollary of the correctness of defunctionalization\n. (Again, observational equivalence reduces here\nto structural equality over ML values of type\nexpval\n.)\n10\nP\nROPOSITION\n3\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval2.main p\nEval3.main p\nWe identify that\ncont\nis a stack of thunks, and that the transitions\nare those of Krivine's abstract machine.\n2.1.5\nKrivine's abstract machine\nTo obtain the canonical definition of Krivine's abstract machine, we\nabandon the distinction between denotable and expressible values\nand we use thunks instead, we represent the defunctionalized continuation\nas a list of thunks instead of a data type, and we inline\napply cont\n.\nstructure Eval4\n= struct\ndatatype thunk = THUNK of term * thunk list\n(*\neval : term * thunk list * thunk list\n*)\n(*\n-> term * thunk list\n*)\nfun eval (IND n, e, s)\n= let val (THUNK (t, e')) = List.nth (e, n)\nin eval (t, e', s)\nend\n| eval (ABS t, e, nil)\n= (ABS t, e)\n| eval (ABS t, e, (t', e') :: s)\n= eval (t, (THUNK (t', e')) :: e, s)\n| eval (APP (t0, t1), e, s)\n= eval (t0, e, (t1, e) :: s)\n(*\nmain : term -> term * thunk list\n*)\nfun main t\n= eval (t, nil, nil)\nend\nThe following proposition is straightforward to prove.\nP\nROPOSITION\n4\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval3.main p\nEval4.main p\nFor comparison with\nEval4\n, the canonical definition of Krivine's\nabstract machine is as follows [7, 22, 25], where t denotes terms, v\ndenotes expressible values, e denotes environments, and s denotes\nstacks of expressible values:\n\nSource syntax:\nt\n::\nn\n\nt\nt\n0\nt\n1\n\nExpressible values (closures):\nv\n::\nt e\n\nInitial transition, transition rules, and final transition:\nt\n\nt nil nil\nn e s\n\nt e\n\ns\nwhere t e\n\nnth\n\ne n\n\n\nt e t\n\ne\n\n:: s\n\nt t\n\ne\n\n:: e s\nt\n0\nt\n1\ne s\n\nt\n0\ne t\n1\ne :: s\n\nt e nil\n\nt e\nVariables n are represented by their de Bruijn index, and the abstract\nmachine operates on triples consisting of a term, an environment,\nand a stack of expressible values.\nEach line in the canonical definition matches a clause in\nEval4\n. We\nconclude that Krivine's abstract machine can be seen as a defunctionalized\n, CPS-transformed, and closure-converted version of the\nstandard call-by-name evaluator for the\n\n-calculus. 
This evaluator\nevidently implements Hardin, Maranget, and Pagano's K strategy\n[24, Section 3].\n2.2\nFrom the CEK machine to a call-by-value\nevaluator\nThe CEK machine [16, 17, 19] operates on\n\n-terms with names and\ndistinguishes between values and computations in their syntax (i.e.,\nit distinguishes trivial and serious terms, in Reynolds's words [31]).\ndatatype term = VALUE of value\n| COMP of comp\nand value = VAR of string\n(* name *)\n| LAM of string * term\nand\ncomp = APP of term * term\nPrograms are closed terms.\n2.2.1\nThe CEK abstract machine\nOur starting point reads as follows [19, Figure 2, page 239], where\nt denotes terms, w denotes values, v denotes expressible values, k\ndenotes evaluation contexts, and e denotes environments:\n\nSource syntax:\nt\n::\nw\nt\n0\nt\n1\nw\n::\nx\n\nx t\n\nExpressible values (closures) and evaluation contexts:\nv\n::\nx t e\nk\n::\nstop\nfun\n\nv k\n\narg\n\nt e k\n\n\nInitial transition, transition rules (two kinds), and final transition\n:\nt\n\ninit\nt mt\nstop\nw e k\n\neval\nk\n\n\nw e\n\nt\n0\nt\n1\ne k\n\neval\nt\n0\ne\narg\n\nt\n1\ne k\n\narg\n\nt\n1\ne k\n\nv\n\ncont\nt\n1\ne\nfun\n\nv k\n\nfun\n\nx t e k\n\nv\n\ncont\nt e x\nv k\nstop\nv\n\nfinal\nv\nwhere\n\n\nx e\n\ne\n\nx\n\n\n\n\nx t e\n\nx t e\nVariables x are represented by their name, and the abstract machine\nconsists of two mutually recursive transition functions. The first\ntransition function operates on triples consisting of a term, an environment\n, and an evaluation context. The second operates on pairs\nconsisting of an evaluation context and an expressible value. Environments\nare extended in the\nfun\n-transition, and consulted in\n\n. The\nempty environment is denoted by mt.\nThis specification is straightforward to program in ML:\n11\nsignature ENV\n= sig\ntype 'a env\nval mt : 'a env\nval lookup : 'a env * string -> 'a\nval extend : string * 'a * 'a env -> 'a env\nend\nEnvironments are represented as a structure\nEnv : ENV\ncontaining\na representation of the empty environment\nmt\n, an operation\nlookup\nto retrieve the value bound to a name in an environment, and an\noperation\nextend\nto extend an environment with a binding.\nstructure Eval0\n= struct\ndatatype expval\n= CLOSURE of string * term * expval Env.env\ndatatype ev_context\n= STOP\n| ARG of term * expval Env.env * ev_context\n| FUN of expval * ev_context\n(*\neval : term * expval Env.env * ev_context\n*)\n(*\n-> expval\n*)\nfun eval (VALUE v, e, k)\n= continue (k, eval_value (v, e))\n| eval (COMP (APP (t0, t1)), e, k)\n= eval (t0, e, ARG (t1, e, k))\nand eval_value (VAR x, e)\n= Env.lookup (e, x)\n| eval_value (LAM (x, t), e)\n= CLOSURE (x, t, e)\nand continue (STOP, w)\n= w\n| continue (ARG (t1, e, k), w)\n= eval (t1, e, FUN (w, k))\n| continue (FUN (CLOSURE (x, t, e), k), w)\n= eval (t, Env.extend (x, w, e), k)\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, Env.mt, STOP)\nend\n2.2.2\nRefunctionalizing the evaluation contexts into\ncontinuations\nWe identify that the data type\nev context\nand the function\ncontinue\nare a defunctionalized representation. 
The corresponding higher-order\nevaluator reads as follows.\nAs can be observed, it is in\ncontinuation-passing style.\nstructure Eval1\n= struct\ndatatype expval\n= CLOSURE of string * term * expval Env.env\n(*\neval : term * expval Env.env * (expval -> 'a)\n*)\n(*\n-> 'a\n*)\nfun eval (VALUE v, e, k)\n= k (eval_value (v, e))\n| eval (COMP (APP (t0, t1)), e, k)\n= eval (t0, e,\nfn (CLOSURE (x, t, e'))\n=> eval (t1, e,\nfn w\n=> eval (t, Env.extend (x, w, e'),\nk)))\nand eval_value (VAR x, e)\n= Env.lookup (e, x)\n| eval_value (LAM (x, t), e)\n= CLOSURE (x, t, e)\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, Env.mt, fn w => w)\nend\nThe following proposition is a corollary of the correctness of defunctionalization\n. (Observational equivalence reduces here to structural\nequality over ML values of type\nexpval\n.)\nP\nROPOSITION\n5\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval0.main p\nEval1.main p\n2.2.3\nBack to direct style\nCPS-transforming the following direct-style evaluator yields the\nevaluator of Section 2.2.2 [10].\nstructure Eval2\n= struct\ndatatype expval\n= CLOSURE of string * term * expval Env.env\n(*\neval : term * expval Env.env -> expval\n*)\nfun eval (VALUE v, e)\n= eval_value (v, e)\n| eval (COMP (APP (t0, t1)), e)\n= let val (CLOSURE (x, t, e')) = eval (t0, e)\nval w = eval (t1, e)\nin eval (t, Env.extend (x, w, e'))\nend\nand eval_value (VAR x, e)\n= Env.lookup (e, x)\n| eval_value (LAM (x, t), e)\n= CLOSURE (x, t, e)\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, Env.mt)\nend\nThe following proposition is a corollary of the correctness of the\ndirect-style transformation. (Again, observational equivalence reduces\nhere to structural equality over ML values of type\nexpval\n.)\nP\nROPOSITION\n6\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval1.main p\nEval2.main p\n2.2.4\nFrom closures to higher-order functions\nWe observe that the closures, in\nEval2\n, are defunctionalized representations\nwith an apply function inlined. The corresponding\nhigher-order evaluator reads as follows.\nstructure Eval3\n= struct\ndatatype expval = CLOSURE of expval -> expval\n(*\neval : term * expval Env.env -> expval\n*)\nfun eval (VALUE v, e)\n= eval_value (v, e)\n12\n| eval (COMP (APP (t0, t1)), e)\n= let val (CLOSURE f) = eval (t0, e)\nval w = eval (t1, e)\nin f w\nend\nand eval_value (VAR x, e)\n= Env.lookup (e, x)\n| eval_value (LAM (x, t), e)\n= CLOSURE (fn w\n=> eval (t, Env.extend (x, w, e)))\n(*\nmain : term -> expval\n*)\nfun main t\n= eval (t, Env.mt)\nend\nThe following proposition is a corollary of the correctness of defunctionalization\n.\nP\nROPOSITION\n7\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program, evaluating\nEval2.main p\nyields a value\nCLOSURE (x, t, e)\nand evaluating\nEval3.main p\nyields a value\nCLOSURE f\nsuch that\nfn w => Eval2.eval (t, Env.extend (x, w, e))\nf\n2.2.5\nA higher-order and compositional call-by-value\nevaluator\nThe result in\nEval3\nis a call-by-value evaluator that is compositional\nand higher-order. This call-by-value evaluator is the canonical\none for the\n\n-calculus [31, 35, 37]. 
We conclude that the CEK\nmachine can be seen as a defunctionalized, CPS-transformed, and\nclosure-converted version of the standard call-by-value evaluator\nfor\n\n-terms.\n2.3\nVariants of Krivine's machine and of the\nCEK machine\nIt is easy to construct a variant of Krivine's abstract machine for\n\nterms\nwith names, by starting from a call-by-name evaluator for\n\n-terms with names. Similarly, it is easy to construct a variant\nof the CEK machine for\n\n-terms with de Bruijn indices, by starting\nfrom a call-by-value evaluator for\n\n-terms with indices. It is\nequally easy to start from a call-by-value evaluator for\n\n-terms with\nde Bruijn indices and no distinction between values and computations\n; the resulting abstract machine coincides with Hankin's eager\nmachine [22, Section 8.1.2].\nAbstract machines processing\n\n-terms with de Bruijn indices often\nresolve indices with transitions:\n0 v :: e s\n\nv :: s\nn\n\n1 v :: e s\n\nn e s\nCompared to the evaluator of Section 2.1.1, the evaluator corresponding\nto this machine has\nList.nth\ninlined and is not compositional\n:\nfun eval (IND 0, denval :: e, s)\n= ... denval ...\n| eval (IND n, denval :: e, s)\n= eval (IND (n - 1), e, s)\n| ...\n2.4\nConclusion\nWe have shown that Krivine's abstract machine and the CEK abstract\nmachine are counterparts of canonical evaluators for call-by-name\nand for call-by-value\n\n-terms, respectively. The derivation of\nKrivine's machine is strikingly simpler than what can be found in\nthe literature. That the CEK machine can be derived is, to the best\nof our knowledge, new. That these two machines are two sides of\nthe same coin is also new. We have not explored any other aspect\nof this call-by-name/call-by-value duality [9].\nUsing substitutions instead of environments or inlining one of the\nstandard computational monads (state, continuations, etc. [39]) in\nthe call-by-value evaluator yields variants of the CEK machine that\nhave been documented in the literature [16, Chapter 8]. For example\n, inlining the state monad in a monadic evaluator yields a\nstate-passing evaluator. The corresponding abstract machine has\none more component to represent the state. In general, inlining\nmonads provides a generic recipe to construct arbitrarily many new\nabstract machines. It does not seem as straightforward, however, to\nconstruct a \"monadic abstract machine\" and then to inline a monad;\nwe are currently studying the issue.\nOn another note, one can consider an evaluator for strictness-annotated\n\n-terms--represented either with names or with indices,\nand with or without distinction between values and computations.\nOne is then led to an abstract machine that generalizes Krivine's\nmachine and the CEK machine [13].\nFinally, it is straightforward to extend Krivine's machine and the\nCEK machine to bigger source languages (with literals, primitive\noperations, conditional expressions, block structure, recursion,\netc.), by starting from evaluators for these bigger languages. For\nexample, all the abstract machines in \"The essence of compiling\nwith continuations\" [19] are defunctionalized continuation-passing\nevaluators, i.e., interpreters.\nIn the rest of this article, we illustrate further the correspondence\nbetween evaluators and abstract machines.\nThe CLS abstract machine\nThe CLS abstract machine is due to Hannan and Miller [23]. 
In the\nfollowing, t denotes terms, v denotes expressible values, c denotes\nlists of directives (a term or the special tag\nap\n), e denotes environments\n, l denotes stacks of environments, and s denotes stacks of\nexpressible values.\n\nSource syntax:\nt\n::\nn\n\nt\nt\n0\nt\n1\n\nExpressible values (closures):\nv\n::\nt e\n\nInitial transition, transition rules, and final transition:\nt\n\nt :: nil nil :: nil nil\n\nt :: c e :: l s\n\nc l t e :: s\n\nt\n0\nt\n1\n\n:: c e :: l s\n\nt\n0\n:: t\n1\n::\nap\n:: c e :: e :: l s\n0 :: c\n\nv :: e\n\n:: l s\n\nc l v :: s\nn\n\n1 :: c\n\nv :: e\n\n:: l s\n\nn :: c e :: l s\nap\n:: c l v :: t e :: s\n\nt :: c\n\nv :: e\n\n:: l s\nnil nil v :: s\n\nv\n13\nVariables n are represented by their de Bruijn index, and the abstract\nmachine operates on triples consisting of a list of directives, a stack\nof environments, and a stack of expressible values.\n3.1\nThe CLS machine\nHannan and Miller's specification is straightforward to program in\nML:\ndatatype term = IND of int\n(* de Bruijn index *)\n| ABS of term\n| APP of term * term\nPrograms are closed terms.\nstructure Eval0\n= struct\ndatatype directive = TERM of term\n| AP\ndatatype env = ENV of expval list\nand expval = CLOSURE of term * env\n(*\nrun : directive list * env list * expval list\n*)\n(*\n-> expval\n*)\nfun run (nil, nil, v :: s)\n= v\n| run ((TERM (IND 0)) :: c, (ENV (v :: e)) :: l, s)\n= run (c, l, v :: s)\n| run ((TERM (IND n)) :: c, (ENV (v :: e)) :: l, s)\n= run ((TERM (IND (n - 1))) :: c,\n(ENV e) :: l,\ns)\n| run ((TERM (ABS t)) :: c, e :: l, s)\n= run (c, l, (CLOSURE (t, e)) :: s)\n| run ((TERM (APP (t0, t1))) :: c, e :: l, s)\n= run ((TERM t0) :: (TERM t1) :: AP :: c,\ne :: e :: l,\ns)\n| run (AP :: c, l, v :: (CLOSURE (t, ENV e)) :: s)\n= run ((TERM t) :: c, (ENV (v :: e)) :: l, s)\n(*\nmain : term -> expval *)\nfun main t\n= run ((TERM t) :: nil, (ENV nil) :: nil, nil)\nend\n3.2\nA disentangled definition of the CLS machine\nIn the definition of Section 3.1, all the possible transitions are\nmeshed together in one recursive function,\nrun\n. Instead, let us factor\nrun\ninto several mutually recursive functions, each of them with\none induction variable.\nIn this disentangled definition,\n\nrun c\ninterprets the list of control directives, i.e., it specifies\nwhich transition to take if the list is empty, starts with a term,\nor starts with an apply directive. If the list is empty, the computation\nterminates. If the list starts with a term,\nrun t\nis\ncalled, caching the term in the first parameter. 
If the list starts\nwith an apply directive,\nrun a\nis called.\n\nrun t\ninterprets the top term in the list of control directives.\n\nrun a\ninterprets the top value in the current stack.\nThe disentangled definition reads as follows:\nstructure Eval1\n= struct\ndatatype directive = TERM of term\n| AP\ndatatype env = ENV of expval list\nand expval = CLOSURE of term * env\n(*\nrun_c : directive list * env list * expval list\n*)\n(*\n-> expval\n*)\nfun run_c (nil, nil, v :: s)\n= v\n| run_c ((TERM t) :: c, l, s)\n= run_t (t, c, l, s)\n| run_c (AP :: c, l, s)\n= run_a (c, l, s)\nand run_t (IND 0, c, (ENV (v :: e)) :: l, s)\n= run_c (c, l, v :: s)\n| run_t (IND n, c, (ENV (v :: e)) :: l, s)\n= run_t (IND (n - 1), c, (ENV e) :: l, s)\n| run_t (ABS t, c, e :: l, s)\n= run_c (c, l, (CLOSURE (t, e)) :: s)\n| run_t (APP (t0, t1), c, e :: l, s)\n= run_t (t0,\n(TERM t1) :: AP :: c,\ne :: e :: l,\ns)\nand run_a (c, l, v :: (CLOSURE (t, ENV e)) :: s)\n= run_t (t, c, (ENV (v :: e)) :: l, s)\n(*\nmain : term -> expval\n*)\nfun main t\n= run_t (t, nil, (ENV nil) :: nil, nil)\nend\nP\nROPOSITION\n8\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval0.main p\nEval1.main p\nP\nROOF\n. By fold-unfold [5]. The invariants are as follows. For any\nML values\nt : term\n,\ne : expval list\n, and\ns : expval list\n,\nEval1.run c (c, l, s)\nEval0.run (c, l, s)\nEval1.run t (t, c, l, s)\nEval0.run ((TERM t) :: c, l, s)\nEval1.run a (c, l, s)\nEval0.run (AP :: c, l, s)\n3.3\nThe evaluator corresponding to the CLS\nmachine\nIn the disentangled definition of Section 3.2, there are three possible\nways to construct a list of control directives (nil, cons'ing a term,\nand cons'ing an apply directive). We could specify these constructions\nas a data type rather than as a list. Such a data type, together\nwith\nrun c\n, is in the image of defunctionalization (\nrun c\nis the apply\nfunctions of the data type). The corresponding higher-order evaluator\nis in continuation-passing style. Transforming it back to direct\nstyle yields the following evaluator:\nstructure Eval3\n= struct\ndatatype env = ENV of expval list\nand expval = CLOSURE of term * env\n(*\nrun_t : term * env list * expval list\n*)\n(*\n-> env list * expval list\n*)\nfun run_t (IND 0, (ENV (v :: e)) :: l, s)\n= (l, v :: s)\n| run_t (IND n, (ENV (v :: e)) :: l, s)\n= run_t (IND (n - 1), (ENV e) :: l, s)\n| run_t (ABS t, e :: l, s)\n= (l, (CLOSURE (t, e)) :: s)\n14\n| run_t (APP (t0, t1), e :: l, s)\n= let val (l, s) = run_t (t0, e :: e :: l, s)\nval (l, s) = run_t (t1, l, s)\nin run_a (l, s)\nend\nand run_a (l, v :: (CLOSURE (t, ENV e)) :: s)\n= run_t (t, (ENV (v :: e)) :: l, s)\n(*\nmain : term -> expval *)\nfun main t\n= let val (nil, v :: s)\n= run_t (t, (ENV nil) :: nil, nil)\nin v\nend\nend\nThe following proposition is a corollary of the correctness of defunctionalization\nand of the CPS transformation. 
(Here observational\nequivalence reduces to structural equality over ML values of\ntype\nexpval\n.)\nP\nROPOSITION\n9\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval1.main p\nEval3.main p\nAs in Section 2, this evaluator can be made compositional by refunctionalizing\nthe closures into higher-order functions and by factoring\nthe resolution of de Bruijn indices into an auxiliary lookup\nfunction.\nWe conclude that the evaluation model embodied in the CLS machine\nis a call-by-value interpreter threading a stack of environments\nand a stack of intermediate results with a caller-save strategy\n(witness the duplication of environments on the stack in the meaning\nof applications) and with a left-to-right evaluation of sub-terms.\nIn particular, the meaning of a term is a partial endofunction over a\nstack of environments and a stack of intermediate results.\nThe SECD abstract machine\nThe SECD abstract machine is due to Landin [26]. In the following\n, t denotes terms, v denotes expressible values, c denotes lists of\ndirectives (a term or the special tag\nap\n), e denotes environments, s\ndenotes stacks of expressible values, and d denotes dumps (list of\ntriples consisting of a stack, an environment and a list of directives).\n\nSource syntax:\nt\n::\nx\n\nx t\nt\n0\nt\n1\n\nExpressible values (closures):\nv\n::\nx t e\n\nInitial transition, transition rules, and final transition:\nt\n\nnil mt t :: nil nil\ns e x :: c d\n\ne\n\nx\n\n:: s e c d\ns e\n\n\nx t\n\n:: c d\n\nx t e :: s e c d\ns e\n\nt\n0\nt\n1\n\n:: c d\n\ns e t\n1\n:: t\n0\n::\nap\n:: c d\nx t e\n\n:: v :: s e\nap\n:: c d\n\nnil e\n\nx\nv t :: nil d\n\nwhere d\n\n\ns e c\n\n:: d\nv :: s e nil\n\ns\n\ne\n\nd\n\n\n:: d\n\nv :: s\n\ne\n\nc\n\nd\nv :: s e nil nil\n\nv\nVariables x are represented by their name, and the abstract machine\noperates on quadruples consisting of a stack of expressible values,\nan environment, a list of directives, and a dump. Environments are\nconsulted in the first transition rule, and extended in the fourth. The\nempty environment is denoted by mt.\n4.1\nThe SECD machine\nLandin's specification is straightforward to program in ML. Programs\nare closed terms. Environments are as in Section 2.2.\ndatatype term = VAR of string\n(* name *)\n| LAM of string * term\n| APP of term * term\nstructure Eval0\n= struct\ndatatype directive = TERM of term\n| AP\ndatatype value\n= CLOSURE of string * term * value Env.env\nfun run (v :: nil, e', nil, nil)\n= v\n| run (s, e, (TERM (VAR x)) :: c, d)\n= run ((Env.lookup (e, x)) :: s, e, c, d)\n| run (s, e, (TERM (LAM (x, t))) :: c, d)\n= run ((CLOSURE (x, t, e)) :: s, e, c, d)\n| run (s, e, (TERM (APP (t0, t1))) :: c, d)\n= run (s,\ne,\n(TERM t1) :: (TERM t0) :: AP :: c,\nd)\n| run ((CLOSURE (x, t, e')) :: v :: s,\ne,\nAP :: c,\nd)\n= run (nil,\nEnv.extend (x, v, e'),\n(TERM t) :: nil,\n(s, e, c) :: d)\n| run (v :: nil, e', nil, (s, e, c) :: d)\n= run (v :: s, e, c, d)\n(*\nmain : term -> value\n*)\nfun main t\n= run (nil, Env.mt, (TERM t) :: nil, nil)\nend\n4.2\nA disentangled definition of the SECD machine\nAs in the CLS machine, in the definition of Section 4.1, all the possible\ntransitions are meshed together in one recursive function,\nrun\n.\nInstead, we can factor\nrun\ninto several mutually recursive functions,\neach of them with one induction variable. 
These mutually recursive\nfunctions are in defunctionalized form: the one processing the\ndump is an apply function for the data type representing the dump\n(a list of stacks, environments, and lists of directives), and the one\nprocessing the control is an apply function for the data type representing\nthe control (a list of directives). The corresponding higher-order\nevaluator is in continuation-passing style with two nested continuations\nand one control delimiter,\nreset\n[12, 18]. The delimiter\nresets the control continuation when evaluating the body of a\n\nabstraction\n. (More detail is available in a technical report [11].)\n15\n4.3\nThe evaluator corresponding to the SECD\nmachine\nThe direct-style version of the evaluator from Section 4.2 reads as\nfollows:\nstructure Eval4\n= struct\ndatatype value\n= CLOSURE of string * term * value Env.env\n(*\neval : term * value list * value Env.env\n*)\n(*\n-> value list * value Env.env\n*)\nfun eval (VAR x, s, e)\n= ((Env.lookup (x, e)) :: s, e)\n| eval (LAM (x, t), s, e)\n= ((CLOSURE (x, t, e)) :: s, e)\n| eval (APP (t0, t1), s, e)\n= let val (s, e) = eval (t1, s, e)\nval (s, e) = eval (t0, s, e)\nin apply (s, e)\nend\nand apply ((CLOSURE (x, t, e')) :: v :: s, e)\n= let val (v :: nil, _)\n= reset (fn ()\n=> eval (t,\nnil,\nEnv.extend (x,\nv,\ne')))\nin (v :: s, e)\nend\n(*\nmain : term -> value\n*)\nfun main t\n= let val (v :: nil, _)\n= reset (fn ()\n=> eval (t, nil, Env.mt))\nin v\nend\nend\nThe following proposition is a corollary of the correctness of defunctionalization\nand of the CPS transformation. (Here observational\nequivalence reduces to structural equality over ML values of\ntype\nvalue\n.)\nP\nROPOSITION\n10\n(\nFULL CORRECTNESS\n).\nFor any ML value\np : term\ndenoting a program,\nEval0.main p\nEval4.main p\nAs in Sections 2 and 3, this evaluator can be made compositional\nby refunctionalizing the closures into higher-order functions.\nWe conclude that the evaluation model embodied in the SECD machine\nis a call-by-value interpreter threading a stack of intermediate\nresults and an environment with a callee-save strategy (witness the\ndynamic passage of environments in the meaning of applications),\na right-to-left evaluation of sub-terms, and a control delimiter. In\nparticular, the meaning of a term is a partial endofunction over a\nstack of intermediate results and an environment. Furthermore, this\nevaluator evidently implements Hardin, Maranget, and Pagano's L\nstrategy, i.e., right-to-left call by value, without us having to \"guess\"\nits inference rules [24, Section 4].\nThe denotational content of the SECD machine puts a new light\non it. For example, its separation between a control register and a\ndump register is explained by the control delimiter in the evaluator\n(\nreset\nin\nEval4.eval\n).\n2\nRemoving this control delimiter gives rise\nto an abstract machine with a single stack component for control-not\nby a clever change in the machine itself, but by a straightforward\nsimplification in the corresponding evaluator.\nVariants of the CLS machine and of the SECD machine\nIt is straightforward to construct a variant of the CLS machine for\n\n-terms with names, by starting from an evaluator for\n\n-term with\nnames. Similarly, it is straightforward to construct a variant of the\nSECD machine for\n\n-terms with de Bruijn indices, by starting from\nan evaluator for\n\n-term with indices. 
In the same vein, it is simple\nto construct call-by-name versions of the CLS machine and of the\nSECD machine, by starting from call-by-name evaluators. It is also\nsimple to construct a properly tail recursive version of the SECD\nmachine, and to extend the CLS machine and the SECD machine to\nbigger source languages, by extending the corresponding evaluator.\n\nThe Categorical Abstract Machine\nWhat is the difference between an abstract machine and a virtual\nmachine? Elsewhere [1], we propose to distinguish them based on\nthe notion of instruction set: A virtual machine has an instruction\nset whereas an abstract machine does not. Therefore, an abstract\nmachine directly operates on a\n\n-term, but a virtual machine operates\non a compiled representation of a\n\n-term, expressed using\nan instruction set. (This distinction can be found elsewhere in the\nliterature [21].)\nThe Categorical Abstract Machine [6], for example, has an instruction\nset--categorical combinators--and therefore (despite its\nname) it is a virtual machine, not an abstract machine. In contrast\n, Krivine's machine, the CEK machine, the CLS machine, and\nthe SECD machine are all abstract machines, not virtual machines,\nsince they directly operate on\n\n-terms. In this section, we present\nthe abstract machine corresponding to the Categorical Abstract Machine\n(CAM). We start from the evaluation model embodied in the\nCAM [1].\n6.1\nThe evaluator corresponding to the CAM\nThe evaluation model embodied in the CAM is an interpreter\nthreading a stack with its top element cached in a register, representing\nenvironments as expressible values (namely nested pairs linked\nas lists), with a caller-save strategy (witness the duplication of the\nregister on the stack in the meaning of applications below), and with\na left-to-right evaluation of sub-terms. In particular, the meaning of\na term is a partial endofunction over the register and the stack. 
This\nevaluator reads as follows:\ndatatype term = IND of int (* de Bruijn index *)\n| ABS of term\n| APP of term * term\n| NIL\n| CONS of term * term\n| CAR of term\n| CDR of term\nPrograms are closed terms.\n2\nA rough definition of\nreset\nis\nfun reset t = t ()\n.\nA more accurate definition, however, falls out of the scope of this\narticle [12, 18].\n16\nstructure Eval0\n= struct\ndatatype expval\n= NULL\n| PAIR of expval * expval\n| CLOSURE of expval * (expval * expval list\n-> expval * expval list)\n(*\naccess : int * expval * expval list\n*)\n(*\n-> expval * expval list\n*)\nfun access (0, PAIR (v1, v2), s)\n= (v2, s)\n| access (n, PAIR (v1, v2), s)\n= access (n - 1, v1, s)\n(*\neval : term * expval * expval list\n*)\n(*\n-> expval * expval list\n*)\nfun eval (IND n, v, s)\n= access (n, v, s)\n| eval (ABS t, v, s)\n= (CLOSURE (v, fn (v, s) => eval (t, v, s)), s)\n| eval (APP (t0, t1), v, s)\n= let val (v, v' :: s)\n= eval (t0, v, v :: s)\nval (v', (CLOSURE (v, f)) :: s)\n= eval (t1, v', v :: s)\nin f (PAIR (v, v'), s)\nend\n| eval (NIL, v, s)\n= (NULL, s)\n| eval (CONS (t1, t2), v, s)\n= let val (v, v' :: s) = eval (t1, v, v :: s)\nval (v, v' :: s) = eval (t2, v', v :: s)\nin (PAIR (v', v), s)\nend\n| eval (CAR t, v, s)\n= let val (PAIR (v1, v2), s) = eval (t, v, s)\nin (v1, s)\nend\n| eval (CDR t, v, s)\n= let val (PAIR (v1, v2), s) = eval (t, v, s)\nin (v2, s)\nend\n(*\nmain : term -> expval\n*)\nfun main t\n= let val (v, nil) = eval (t, NULL, nil)\nin v\nend\nend\nThis evaluator evidently implements Hardin, Maranget, and\nPagano's X strategy [24, Section 6].\n6.2\nThe abstract machine corresponding to\nthe CAM\nAs in Sections 2, 3, and 4, we can closure-convert the evaluator of\nSection 6.1 by defunctionalizing its expressible values, transform\nit into continuation-passing style, and defunctionalize its continuations\n. The resulting abstract machine reads as follows, where t\ndenotes terms, v denotes expressible values, k denotes evaluation\ncontexts, and s denotes stacks of expressible values.\n\nSource syntax:\nt\n::\nn\n\nt\nt\n0\nt\n1\nnil\n\ncons\nt\n1\nt\n2\n\n\ncar\nt\n\n\ncdr\nt\n\n\nExpressible values (unit value, pairs, and closures) and evaluation\ncontexts:\nv\n::\nnull\n\nv\n1\nv\n2\n\nv t\nk\n::\nCONT0\nCONT1\n\nt k\n\nCONT2\n\nk\n\nCONT3\n\nt k\n\nCONT4\n\nk\n\nCONT5\n\nk\n\nCONT6\n\nk\n\n\nInitial transition, transition rules (two kinds), and final transition\n:\nt\n\ninit\nt\nnull\nnil\nCONT0\nn v s k\n\neval\nk\n\n\nn v\n\ns\n\nt v s k\n\neval\nk v t s\nnil\nv s k\n\neval\nk\nnull\ns\nt\n0\nt\n1\nv s k\n\neval\nt\n0\nv v :: s\nCONT1\n\nt\n1\nk\n\n\ncons\nt\n1\nt\n2\n\nv s k\n\neval\nt\n1\nv v :: s\nCONT3\n\nt\n2\nk\n\n\ncar\nt\n\nv s k\n\neval\nt v s\nCONT5\n\nk\n\n\ncdr\nt\n\nv s k\n\neval\nt v s\nCONT6\n\nk\n\nCONT1\n\nt k\n\nv v\n\n:: s\n\ncont\nt v\n\nv :: s\nCONT2\n\nk\n\nCONT2\n\nk\n\nv\n\nv t :: s\n\ncont\nt\n\nv v\n\n\ns k\nCONT3\n\nt\n1\nk\n\nv v\n\n:: s\n\ncont\nt\n1\nv\n\nv :: s\nCONT4\n\nk\n\nCONT4\n\nk\n\nv v\n\n:: s\n\ncont\nk\n\nv\n\nv\n\ns\nCONT5\n\nk\n\n\nv\n1\nv\n2\n\ns\n\ncont\nk v\n1\ns\nCONT6\n\nk\n\n\nv\n1\nv\n2\n\ns\n\ncont\nk v\n2\ns\nCONT0\nv nil\n\nfinal\nv\nwhere\n\n\n0\n\nv\n1\nv\n2\n\nv\n2\n\n\nn\n\nv\n1\nv\n2\n\n\n\nn\n\n1 v\n1\n\nVariables n are represented by their de Bruijn index, and the abstract\nmachine consists of two mutually recursive transition functions\n. The first transition function operates on quadruples consisting\nof a term, an expressible value, a stack of expressible values,\nand an evaluation context. 
The second transition function operates\non triples consisting of an evaluation context, an expressible value,\nand a stack of expressible values.\nThis abstract machine embodies the evaluation model of the CAM.\nNaturally, more intuitive names could be chosen instead of\nCONT0\n,\nCONT1\n, etc.\nConclusion and issues\nWe have presented a constructive correspondence between functional\nevaluators and abstract machines.\nThis correspondence\nbuilds on off-the-shelf program transformations: closure conversion\n, CPS transformation, defunctionalization, and inlining.\n3\nWe\nhave shown how to reconstruct known machines (Krivine's machine\n, the CEK machine, the CLS machine, and the SECD machine\n) and how to construct new ones. Conversely, we have revealed\nthe denotational content of known abstract machines. We\nhave shown that Krivine's abstract machine and the CEK machine\ncorrespond to canonical evaluators for the\n\n-calculus. We have also\nshown that they are dual of each other since they correspond to\ncall-by-name and call-by-value evaluators in the same direct style.\nIn terms of denotational semantics [27, 34], Krivine's machine and\nthe CEK machine correspond to a standard semantics, whereas the\nCLS machine and the SECD machine correspond to a stack semantics\nof the\n\n-calculus. Finally, we have exhibited the abstract machine\ncorresponding to the CAM, which puts the reader in a new\nposition to answer the recurrent question as to whether the CLS\nmachine is closer to the CAM or to the SECD machine.\n3\nIndeed the push-enter twist of Krivine's machine is obtained\nby inlining\napply cont\nin Section 2.1.5.\n17\nSince this article was written, we have studied the correspondence\nbetween functional evaluators and abstract machines for call by\nneed [2] and for Propositional Prolog [4]. In both cases, we derived\nsensible machines out of canonical evaluators.\nIt seems to us that this correspondence between functional evaluators\nand abstract machines builds a reliable bridge between denotational\ndefinitions and definitions of abstract machines. On the one\nhand, it allows one to identify the denotational content of an abstract\nmachine in the form of a functional interpreter. On the other\nhand, it gives one a precise and generic recipe to construct arbitrarily\nmany new variants of abstract machines (e.g., with substitutions\nor environments, or with stacks) or of arbitrarily many new abstract\nmachines, starting from an evaluator with any given computational\nmonad [28].\nAcknowledgments:\nWe are grateful to Malgorzata Biernacka, Julia Lawall, and Henning\nKorsholm Rohde for timely comments. Thanks are also due to\nthe anonymous reviewers.\nThis work is supported by the ESPRIT Working Group APPSEM II\n(\nhttp://www.appsem.org\n).\nReferences\n[1] Mads Sig Ager, Dariusz Biernacki, Olivier Danvy, and Jan\nMidtgaard.\nFrom interpreter to compiler and virtual machine\n: a functional derivation. Technical Report BRICS RS-03\n-14, DAIMI, Department of Computer Science, University\nof Aarhus, Aarhus, Denmark, March 2003.\n[2] Mads Sig Ager, Olivier Danvy, and Jan Midtgaard. A functional\ncorrespondence between call-by-need evaluators and\nlazy abstract machines.\nTechnical Report BRICS RS-03-24\n, DAIMI, Department of Computer Science, University of\nAarhus, Aarhus, Denmark, June 2003.\n[3] Anindya Banerjee, Nevin Heintze, and Jon G. Riecke. Design\nand correctness of program transformations based on control-flow\nanalysis. In Naoki Kobayashi and Benjamin C. 
Pierce,\neditors, Theoretical Aspects of Computer Software, 4th International\nSymposium, TACS 2001, number 2215 in Lecture\nNotes in Computer Science, Sendai, Japan, October 2001.\nSpringer-Verlag.\n[4] Dariusz Biernacki and Olivier Danvy.\nFrom interpreter to\nlogic engine: A functional derivation.\nTechnical Report\nBRICS RS-03-25, DAIMI, Department of Computer Science,\nUniversity of Aarhus, Aarhus, Denmark, June 2003. Accepted\nfor presentation at LOPSTR 2003.\n[5] Rod M. Burstall and John Darlington. A transformational\nsystem for developing recursive programs. Journal of ACM,\n24(1):4467, 1977.\n[6] Guy Cousineau, Pierre-Louis Curien, and Michel Mauny. The\ncategorical abstract machine. Science of Computer Programming\n, 8(2):173202, 1987.\n[7] Pierre Cregut. An abstract machine for lambda-terms normalization\n.\nIn Mitchell Wand, editor, Proceedings of the\n1990 ACM Conference on Lisp and Functional Programming,\npages 333340, Nice, France, June 1990. ACM Press.\n[8] Pierre-Louis Curien. Categorical Combinators, Sequential\nAlgorithms and Functional Programming. Progress in Theoretical\nComputer Science. Birkhauser, 1993.\n[9] Pierre-Louis Curien and Hugo Herbelin. The duality of computation\n. In Philip Wadler, editor, Proceedings of the 2000\nACM SIGPLAN International Conference on Functional Programming\n, SIGPLAN Notices, Vol. 35, No. 9, pages 233\n243, Montreal, Canada, September 2000. ACM Press.\n[10] Olivier Danvy. Back to direct style. Science of Computer\nProgramming, 22(3):183195, 1994.\n[11] Olivier Danvy. A lambda-revelation of the SECD machine.\nTechnical Report BRICS RS-02-53, DAIMI, Department of\nComputer Science, University of Aarhus, Aarhus, Denmark,\nDecember 2002.\n[12] Olivier Danvy and Andrzej Filinski. Representing control, a\nstudy of the CPS transformation. Mathematical Structures in\nComputer Science, 2(4):361391, 1992.\n[13] Olivier Danvy and John Hatcliff. CPS transformation after\nstrictness analysis. ACM Letters on Programming Languages\nand Systems, 1(3):195212, 1993.\n[14] Olivier Danvy and John Hatcliff. On the transformation between\ndirect and continuation semantics. In Stephen Brookes,\nMichael Main, Austin Melton, Michael Mislove, and David\nSchmidt, editors, Proceedings of the 9th Conference on Mathematical\nFoundations of Programming Semantics, number\n802 in Lecture Notes in Computer Science, pages 627648,\nNew Orleans, Louisiana, April 1993. Springer-Verlag.\n[15] Olivier Danvy and Lasse R. Nielsen.\nDefunctionalization\nat work. In Harald Sndergaard, editor, Proceedings of the\nThird International ACM SIGPLAN Conference on Principles\nand Practice of Declarative Programming (PPDP'01), pages\n162174, Firenze, Italy, September 2001. ACM Press. Extended\nversion available as the technical report BRICS RS-01\n-23.\n[16] Matthias Felleisen and Matthew Flatt.\nProgramming languages\nand lambda calculi.\nUnpublished lecture notes.\nhttp://www.ccs.neu.edu/home/matthias/3810-w02/\nreadings.html\n, 1989-2003.\n[17] Matthias Felleisen and Daniel P. Friedman. Control operators,\nthe SECD machine, and the\n\n-calculus. In Martin Wirsing, editor\n, Formal Description of Programming Concepts III, pages\n193217. Elsevier Science Publishers B.V. (North-Holland),\nAmsterdam, 1986.\n[18] Andrzej Filinski. Representing monads. In Hans-J. Boehm,\neditor, Proceedings of the Twenty-First Annual ACM Symposium\non Principles of Programming Languages, pages 446\n457, Portland, Oregon, January 1994. ACM Press.\n[19] Cormac Flanagan, Amr Sabry, Bruce F. 
Duba, and Matthias\nFelleisen. The essence of compiling with continuations. In\nDavid W. Wall, editor, Proceedings of the ACM SIGPLAN'93\nConference on Programming Languages Design and Imple-mentation\n, SIGPLAN Notices, Vol. 28, No 6, pages 237247,\nAlbuquerque, New Mexico, June 1993. ACM Press.\n[20] Daniel P. Friedman, Mitchell Wand, and Christopher T.\nHaynes. Essentials of Programming Languages, second edition\n. The MIT Press, 2001.\n[21] Benjamin Gregoire and Xavier Leroy. A compiled implementation\nof strong reduction. In Simon Peyton Jones, editor\n, Proceedings of the 2002 ACM SIGPLAN International\nConference on Functional Programming, SIGPLAN Notices,\nVol. 37, No. 9, pages 235246, Pittsburgh, Pennsylvania,\nSeptember 2002. ACM Press.\n18\n[22] Chris Hankin. Lambda Calculi, a guide for computer scientists\n, volume 1 of Graduate Texts in Computer Science. Oxford\nUniversity Press, 1994.\n[23] John Hannan and Dale Miller. From operational semantics\nto abstract machines. Mathematical Structures in Computer\nScience, 2(4):415459, 1992.\n[24] Ther`ese Hardin, Luc Maranget, and Bruno Pagano. Functional\nruntime systems within the lambda-sigma calculus.\nJournal of Functional Programming, 8(2):131172, 1998.\n[25] Jean-Louis Krivine.\nUn interpr`ete du\n\n-calcul.\nBrouil-lon\n. Available online at\nhttp://www.logique.jussieu.fr/\n~krivine\n, 1985.\n[26] Peter J. Landin. The mechanical evaluation of expressions.\nThe Computer Journal, 6(4):308320, 1964.\n[27] Robert E. Milne and Christopher Strachey. A Theory of Programming\nLanguage Semantics. Chapman and Hall, London,\nand John Wiley, New York, 1976.\n[28] Eugenio Moggi. Notions of computation and monads. Information\nand Computation, 93:5592, 1991.\n[29] Lasse R. Nielsen. A denotational investigation of defunctionalization\n. Technical Report BRICS RS-00-47, DAIMI, Department\nof Computer Science, University of Aarhus, Aarhus,\nDenmark, December 2000.\n[30] Gordon D. Plotkin. Call-by-name, call-by-value and the\n\ncalculus\n. Theoretical Computer Science, 1:125159, 1975.\n[31] John C. Reynolds. Definitional interpreters for higher-order\nprogramming languages. Higher-Order and Symbolic Computation\n, 11(4):363397, 1998. Reprinted from the proceedings\nof the 25th ACM National Conference (1972).\n[32] Kristoffer H. Rose.\nExplicit substitution tutorial & survey\n. BRICS Lecture Series LS-96-3, DAIMI, Department of\nComputer Science, University of Aarhus, Aarhus, Denmark,\nSeptember 1996.\n[33] Amr Sabry and Matthias Felleisen.\nReasoning about programs\nin continuation-passing style. Lisp and Symbolic Computation\n, 6(3/4):289360, 1993.\n[34] David A. Schmidt. Denotational Semantics: A Methodology\nfor Language Development. Allyn and Bacon, Inc., 1986.\n[35] Guy L. Steele Jr. and Gerald J. Sussman.\nThe art of\nthe interpreter or, the modularity complex (parts zero, one,\nand two).\nAI Memo 453, Artificial Intelligence Laboratory\n, Massachusetts Institute of Technology, Cambridge, Massachusetts\n, May 1978.\n[36] Joseph Stoy.\nSome mathematical aspects of functional\nprogramming.\nIn John Darlington, Peter Henderson, and\nDavid A. Turner, editors, Functional Programming and its\nApplications. Cambridge University Press, 1982.\n[37] Joseph E. Stoy. Denotational Semantics: The Scott-Strachey\nApproach to Programming Language Theory. The MIT Press,\n1977.\n[38] Christopher Strachey.\nFundamental concepts in programming\nlanguages. Higher-Order and Symbolic Computation,\n13(1/2):149, 2000.\n[39] Philip Wadler. 
The essence of functional programming (in-vited\ntalk). In Andrew W. Appel, editor, Proceedings of the\nNineteenth Annual ACM Symposium on Principles of Programming\nLanguages, pages 114, Albuquerque, New Mexico\n, January 1992. ACM Press.\n[40] Mitchell Wand. A short proof of the lexical addressing algorithm\n. Information Processing Letters, 35:15, 1990.\n[41] Glynn Winskel. The Formal Semantics of Programming Languages\n. Foundation of Computing Series. The MIT Press,\n1993.\n19", "keywords": "call-by-name;Interpreters;closure conversion;evaluator;defunctionalization;call-by-value;transformation into continuation-passing style (CPS);abstract machines;abstract machine"} {"name": "110", "title": "Indexing Multi-Dimensional Time-Series with Support for Multiple Distance Measures", "abstract": "Although most time-series data mining research has concentrated on providing solutions for a single distance function, in this work we motivate the need for a single index structure that can support multiple distance measures. Our specific area of interest is the efficient retrieval and analysis of trajectory similarities. Trajectory datasets are very common in environmental applications, mobility experiments, video surveillance and are especially important for the discovery of certain biological patterns. Our primary similarity measure is based on the Longest Common Subsequence (LCSS) model, that offers enhanced robustness, particularly for noisy data, which are encountered very often in real world applications . However, our index is able to accommodate other distance measures as well, including the ubiquitous Euclidean distance, and the increasingly popular Dynamic Time Warping (DTW). While other researchers have advocated one or other of these similarity measures, a major contribution of our work is the ability to support all these measures without the need to restructure the index. Our framework guarantees no false dismissals and can also be tailored to provide much faster response time at the expense of slightly reduced precision/recall. The experimental results demonstrate that our index can help speed-up the computation of expensive similarity measures such as the LCSS and the DTW.", "fulltext": "INTRODUCTION\nIn this work we present an efficient and compact, external\nmemory index for fast detection of similar trajectories. Trajectory\ndata are prevalent in diverse fields of interest such\nas meteorology, GPS tracking, wireless applications, video\ntracking [5] and motion capture [18]. Recent advances in\nmobile computing, sensor and GPS technology have made it\npossible to collect large amounts of spatiotemporal data and\n\nThe research of this author was supported by NSF ITR 0220148, NSF\nCAREER 9907477, NSF IIS 9984729, and NRDRP\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee. SIGKDD '03, August 24-27, 2003, Washington,\nDC, USA.\nCopyright 2003 ACM 1-58113-737-0/03/0008...\n$\n5.00.\nthere is increasing interest in performing data analysis tasks\nover such data [17]. 
In mobile computing, users equipped with mobile devices move in space and register their location at different time instances to spatiotemporal databases via wireless links. In environmental information systems, tracking animals and weather conditions is very common, and large datasets can be created by storing the locations of observed objects over time. Human motion data, generated by tracking several body joints simultaneously, are also multidimensional trajectories. In this field of computer graphics, fundamental operations include the clustering of similar movements, leading to a multitude of applications such as the interactive generation of motions [2]. Spatiotemporal data are also produced by migrating particles in the biological sciences, where the focus can be on the discovery of subtle patterns during cellular mitoses [19]. In general, any dataset that involves the storage of multiple streams (attributes) of data can be considered and treated as a multidimensional trajectory.
One very common task for such data is the discovery of objects that follow a certain motion pattern, for purposes of clustering or classification. The objective here is to efficiently organize trajectories on disk, so that we can quickly answer k-Nearest-Neighbors (kNN) queries. A frequent obstacle in the analysis of spatiotemporal data is the presence of noise, which can be introduced by electromagnetic anomalies, transceiver problems, etc. Another impediment is that objects may move in a similar way but at different speeds. So, we would like our similarity model to be robust to noise and to support elastic and imprecise matches.
Choosing the Euclidean distance as the similarity model is unrealistic, since its performance degrades rapidly in the presence of noise and the measure is also sensitive to small variations in the time axis. We concentrate on two similarity models: the first is an extension of Dynamic Time Warping to higher dimensions. We note that DTW has been used so far for one-dimensional time series; here we present a formulation for sequences of arbitrary dimensions. The second distance measure is a modification of the Longest Common Subsequence (LCSS), specially adapted for continuous values. Both measures represent a significant improvement over the Euclidean distance. However, LCSS is more robust than DTW under noisy conditions [20], as figure 1 shows. Euclidean matching completely disregards the variations in the time axis, while DTW performs excessive matchings, therefore distorting the true distance between sequences. The LCSS produces the most robust and intuitive correspondence between points.
Figure 1: A lucid example about the quality matching of the LCSS compared to other distance functions. The Euclidean distance performs an inflexible matching, while the DTW gives many superfluous and spurious matchings, in the presence of noise. (Panels: Euclidean Matching, Time Warping, Longest Common Subsequence.)
By incorporating warping in time as a requirement to our model, our algorithms are automatically challenged with quadratic execution time. Moreover, these flexible functions are typically non-metric, which makes the design of indexing structures difficult. To speed up the execution of a similarity function, one can devise a low-cost, upper bounding function (since the LCSS model captures the similarity, which is inversely analogous to the distance).
We utilize a fast\nprefiltering scheme that will return upper bound estimates\nfor the LCSS similarity between the query and the indexed\ntrajectories. In addition to providing similarity measures\nthat guarantee no false dismissals, we also propose approximate\nsimilarity estimates that significantly reduce the index\nresponse time. Finally, we show that the same index can\nsupport other distance measures as well.\nOur technique works by splitting the trajectories in multidimensional\nMBRs and storing them in an R-tree. For a\ngiven query, we construct a Minimum Bounding Envelope\n(MBE) that covers all the possible matching areas of the\nquery under warping conditions. This MBE is decomposed\ninto MBRs and then probed in the R-tree index. Using the\nindex we can discover which trajectories could potentially\nbe similar to the query. The index size is compact and its\nconstruction time scales well with the trajectory length and\nthe database size, therefore our method can be utilized for\nmassive datamining tasks.\nThe main contributions of the paper are:\nWe present the first external memory index for multidimensional\ntrajectories, that supports multiple distance\nfunctions (such as LCSS, DTW and Euclidean), without the\nneed to rebuild the index.\nWe give efficient techniques for upper(lower) bounding\nand for approximating the LCSS(DTW) for a set of trajectories\n. We incorporate these techniques in the design of an\nefficient indexing structure for the LCSS and the DTW.\nWe provide a flexible method that allows the user to\nspecify queries of variable warping length, and the technique\ncan be tuned to optimize the retrieval time or the accuracy\nof the solution.\nRELATED WORK\nThere has been a wealth of papers that use an L\np\ndistance\nfamily function to perform similarity matching for\n1D time-series. Work on multidimensional sequences can\nbe found in [14, 9]. However, they support only Euclidean\ndistance, which, as mentioned in the introduction, cannot\ncapture flexible similarities.\nAlthough the vast majority of database/data mining research\non time series data mining has focused on Euclidean\ndistance, virtually all real world systems that use time series\nmatching as a subroutine, use a similarity measure which allows\nwarping. In retrospect, this is not very surprising, since\nmost real world processes, particularly biological processes,\ncan evolve at varying rates. For example, in bioinformat-ics\n, it is well understood that functionally related genes will\nexpress themselves in similar ways, but possibly at different\nrates. Because of this, DTW is used for gene expression data\nmining [1, 3]. Dynamic Time Warping is a ubiquitous tool\nin the biometric/surveillance community. It has been used\nfor tracking time series extracted from video [7], classifying\nhandwritten text [16] and even fingerprint indexing [13].\nWhile the above examples testify to the utility of a time\nwarped distance measure, they all echo the same complaint;\nDTW has serious scalability issues. Work that attempted to\nmitigate the large computational cost has appeared in [12]\nand [21], where the authors use lower bounding measures to\nspeed up the execution of DTW. However, the lower bounds\ncan be loose approximations of the original distance, when\nthe data are normalized. In [15] a different approach is used\nfor indexing Time Warping, by using suffix trees. 
Nonetheless, the index requires excessive disk space (about 10 times the size of the original data).
The flexibility provided by DTW is very important; however, its efficiency deteriorates for noisy data, since by matching all the points it also matches the outliers, distorting the true distance between the sequences. An alternative approach is the use of the Longest Common Subsequence (LCSS), which is a variation of the edit distance. The basic idea is to match two sequences by allowing them to stretch, without rearranging the order of the elements but allowing some elements to be unmatched. Using the LCSS of two sequences, one can define the distance using the length of this subsequence [6]. In [20] an internal memory index for the LCSS has been proposed. That work also demonstrated that while the LCSS presents similar advantages to DTW, it does not share its volatile performance in the presence of outliers.
Closest in spirit to our approach is the work of [10], which, however, only addresses 1D time-series. The author uses constrained DTW as the distance function, and surrounds the possible matching regions by a modified version of a Piecewise Approximation, which is later stored as equi-length MBRs in an R-tree. However, by using DTW, such an approach is susceptible to a high bias from outliers. Also, the fixed MBR size (although it simplifies the index operations) can lead to degenerate approximations of the original sequence. Moreover, the embedding of the envelope in the indexed sequences can slow the index construction time and limit the user's query capabilities to a predefined warping length. The use of LCSS as our primary similarity measure lends itself to a more natural use of the R-tree, where the similarity estimates are simply computed by calculating the MBR intersection areas. Since the index is not constructed for a specific warping window, the user can pose queries with variable warping length.
The purpose of this paper is to reconcile the best of both worlds. We provide a framework that can support, in the same index, the LCSS, DTW and Euclidean distance functions. The only aspect that changes is the representation of the query for each distance measure.
DISTANCE MEASURES
In this section we present details of how the Dynamic Time Warping and the LCSS model can be extended to describe the similarity between trajectories.
3.1 Dynamic Time Warping for 2D trajectories
We describe a 2D extension of the original DTW function as described by Berndt and Clifford [4]. Let A and B be two trajectories of moving objects with sizes n and m respectively, where A = ((a_{x,1}, a_{y,1}), ..., (a_{x,n}, a_{y,n})) and B = ((b_{x,1}, b_{y,1}), ..., (b_{x,m}, b_{y,m})). For a trajectory A, let Head(A) = ((a_{x,1}, a_{y,1}), ..., (a_{x,n-1}, a_{y,n-1})).
Definition 1. The Time Warping between 2-dimensional sequences A and B is:
DTW(A, B) = L_p((a_{x,n}, a_{y,n}), (b_{x,m}, b_{y,m})) + min{DTW(Head(A), Head(B)), DTW(Head(A), B), DTW(A, Head(B))}    (1)
where L_p is any p-norm. Using dynamic programming and constraining the matching region within δ, the time required to compute DTW is O((n + m)δ). In order to represent an accurate relationship of distances between sequences of different lengths, the quantity in equation 1 is normalized by the length of the warping path. The extension to n dimensions is similar.
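For concreteness, Definition 1 can be transcribed into a short dynamic-programming routine. The following Python sketch is ours (the paper itself gives no code for this step): it assumes the L2 norm as the p-norm, restricts matching to a band of width δ around the diagonal as described above, and normalizes the accumulated cost by the length of the warping path; the function and variable names are our own.

from math import inf, hypot

def dtw_2d(A, B, delta):
    # Constrained DTW between 2-D trajectories A and B (lists of (x, y) pairs),
    # allowing only matchings with |i - j| <= delta; assumes |len(A) - len(B)| <= delta.
    n, m = len(A), len(B)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]    # accumulated distance
    steps = [[0] * (m + 1) for _ in range(n + 1)]     # warping-path length so far
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - delta), min(m, i + delta) + 1):
            d = hypot(A[i - 1][0] - B[j - 1][0], A[i - 1][1] - B[j - 1][1])   # L2 norm
            # pick the cheapest of the three predecessors of equation (1)
            prev = min((cost[i - 1][j - 1], steps[i - 1][j - 1]),
                       (cost[i - 1][j],     steps[i - 1][j]),
                       (cost[i][j - 1],     steps[i][j - 1]))
            cost[i][j] = d + prev[0]
            steps[i][j] = prev[1] + 1
    return cost[n][m] / max(steps[n][m], 1)           # normalized by the warping-path length

Only the cells inside the band are ever touched, which is where the O((n + m)δ) behaviour mentioned above comes from.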
In figure 2 we show an example of time warping for two trajectories.
Figure 2: The support of flexible matching in spatiotemporal queries is very important. However, we can observe that Dynamic Time Warping matches all points (so the outliers as well), therefore distorting the true distance. In contrast, the LCSS model can efficiently ignore the noisy parts. (Axes: Time, X movement, Y movement.)
3.2 LCSS model for 2D trajectories
The original LCSS model refers to 1D sequences, so we must extend it to the 2D case. In addition, the LCSS paradigm matches discrete values; in our model, however, we want to allow a matching when the values are within a certain range in space and time (note that in this way we also avoid distant and degenerate matchings).
Definition 2. Given an integer δ and a real number 0 < ε < 1, we define LCSS_{δ,ε}(A, B) as follows:
LCSS_{δ,ε}(A, B) =
  0, if A or B is empty;
  1 + LCSS_{δ,ε}(Head(A), Head(B)), if |a_{x,n} - b_{x,m}| < ε and |a_{y,n} - b_{y,m}| < ε and |n - m| <= δ;
  max(LCSS_{δ,ε}(Head(A), B), LCSS_{δ,ε}(A, Head(B))), otherwise
where sequences A and Head(A) are defined similarly as before. The constant δ controls the flexibility of matching in time and the constant ε is the matching threshold in space. The aforementioned LCSS model has the same O((n + m)δ) computational complexity as the DTW, when we only allow a matching window δ in time [6].
The value of the LCSS is unbounded and depends on the length of the compared sequences. We need to normalize it in order to support sequences of variable length. The distance derived from the LCSS similarity can be defined as follows:
Definition 3. The distance D_{δ,ε}, expressed in terms of the LCSS similarity between two trajectories A and B, is given by:
D_{δ,ε}(A, B) = 1 - LCSS_{δ,ε}(A, B) / min(n, m)    (2)
INDEX CONSTRUCTION
Even though imposing a δ matching window can help speed up the execution, the computation can still be quadratic when δ is a significant portion of the sequence's length. Therefore, comparing a query to all the trajectories becomes intractable for large databases. We are seeking ways to avoid examining the trajectories that are very distant to our query. This can be accomplished by discovering a close match to our query as early as possible. A fast pre-filtering step is employed that eliminates the majority of distant matches. Only for some qualified sequences will we execute the costly (but accurate) quadratic-time algorithm. This philosophy has also been successfully used in [21, 10]. There are certain preprocessing steps that we follow:
1. The trajectories are segmented into MBRs, which are stored in an R-tree T.
2. Given a query Q, we discover the areas of possible matching by constructing its Minimum Bounding Envelope (MBE_Q).
3. MBE_Q is decomposed into MBRs that are probed in the index T.
4. Based on the MBR intersections, similarity estimates are computed and the exact LCSS (or DTW) is performed only on the qualified trajectories.
The above notions are illustrated in figure 3 and we explain in detail how they can be applied for the LCSS case in the sections that follow.
Figure 3: An example of our approach (in 1D for clarity); a query is extended into a bounding envelope, which in turn is also split into the resulting MBRs. Overlap between the query and the index MBRs suggests areas of possible matching. (Panels: A. Query Q; B. Query Envelope; C. Envelope Splitting; D. Sequence MBRs; E. LCSS Upper Bound Estimate = L1 + L2 + L3.)
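Definitions 2 and 3 admit an equally direct bottom-up transcription. The Python sketch below is our own illustration (not code from the paper): the recursion of Definition 2 is filled row by row inside the |i - j| <= δ window, and the distance of Definition 3 is derived from the resulting similarity.

def lcss_2d(A, B, delta, eps):
    # LCSS between 2-D trajectories A and B (lists of (x, y) pairs); two points
    # match when both coordinates differ by less than eps and |i - j| <= delta.
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(max(1, i - delta), min(m, i + delta) + 1):
            if (abs(A[i - 1][0] - B[j - 1][0]) < eps and
                abs(A[i - 1][1] - B[j - 1][1]) < eps):
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # the constrained LCSS is attained at an in-band cell, so the table maximum equals it
    return max(max(row) for row in L)

def lcss_distance(A, B, delta, eps):
    # Distance of Definition 3: D = 1 - LCSS / min(n, m).
    return 1.0 - lcss_2d(A, B, delta, eps) / min(len(A), len(B))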
4.1 Bounding the Matching Regions
Let us first consider a 1D time-series and let a sequence A be (a_{x,1}, ..., a_{x,n}). Ignoring for now the parameter ε, we would like to perform a very fast LCSS_δ match between sequence A and some query Q. Suppose that we replicate each point Q_i for δ time instances before and after time i. The envelope that includes all these points defines the areas of possible matching. Everything outside this envelope can never be matched.
Figure 4: The Minimum Bounding Envelope (MBE) within δ in time and ε in space of a sequence. Everything that lies outside this envelope can never be matched. (Figure annotations: Q, A, δ, 2ε, "40 pts", "6 pts".)
We call this envelope the Minimum Bounding Envelope (MBE) of a sequence. Also, once we incorporate the matching within ε in space, this envelope should extend ε above and below the original envelope (figure 4). The notion of the bounding envelope can be trivially extended to more dimensions, where the MBE(δ, ε) for a 2D trajectory Q = ((q_{x,1}, q_{y,1}), ..., (q_{x,n}, q_{y,n})) covers the area between the following time-series:
EnvLow <= MBE(δ, ε) <= EnvHigh, where:
EnvHigh[i] = max_{|i-j|<=δ} (Q[j] + ε)
EnvLow[i] = min_{|i-j|<=δ} (Q[j] - ε)
The LCSS similarity between the envelope of Q and a sequence A is defined as:
LCSS(MBE_Q, A) = sum_{i=1}^{n} { 1 if A[i] is within the envelope; 0 otherwise }
For example, in figure 4 the LCSS similarity between MBE_Q and sequence A is 46, as indicated in the figure. This value represents an upper bound for the similarity of Q and A. We can use the MBE_Q to compute a lower bound on the distance between trajectories:
Lemma 1. For any two trajectories Q and A the following holds: D_{δ,ε}(MBE_Q, A) <= D_{δ,ε}(Q, A).
Proof (Sketch): D_{δ,ε}(MBE_Q, A) = 1 - LCSS_{δ,ε}(MBE_Q, A) / min(|Q|, |A|), therefore it is sufficient to show that LCSS_{δ,ε}(MBE_Q, A) >= LCSS_{δ,ε}(Q, A). This is true since MBE_Q, by construction, contains all possible areas within δ and ε of the query Q. Therefore, no possible matching points will be missed.
The previous lemma provides us with the power to create an index that guarantees no false dismissals. However, this lower bound refers to the raw data. In the sections that follow, we will 'split' the MBE of a trajectory into a number of Minimum Bounding Rectangles (MBRs), to accommodate their storage in a multidimensional R-tree. We will show that the above inequality still holds between trajectory MBRs.
The MBR generation procedure is orthogonal to our approach, since any segmentation methodology can be applied in our framework. Therefore, the description of the potential MBR generation methods (and of our implementation choice) is delayed until later.
QUICK PRUNING OF DISSIMILAR TRAJECTORIES
Suppose that we have an index with the segmented trajectories and the user provides a query Q. Our goal is the discovery of the k closest trajectories to the given query, according to the LCSS similarity. A prefiltering step will aid the quick discovery of a close match to the query, helping us discard the distant trajectories without using the costly quadratic algorithm.
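The envelope of the preceding section and the upper bound of Lemma 1 are also easy to compute directly. The sketch below is ours (the names and the per-dimension treatment are our choices, not the paper's): it builds EnvLow and EnvHigh for each coordinate of a 2-D query with a sliding window of width δ and a spatial tolerance ε, and counts how many points of a candidate sequence fall inside the envelope in both coordinates, which is the quantity LCSS(MBE_Q, A) used above.

def envelope(q, delta, eps):
    # EnvHigh[i] = max(q[j]) + eps and EnvLow[i] = min(q[j]) - eps over |i - j| <= delta.
    n = len(q)
    env_low, env_high = [0.0] * n, [0.0] * n
    for i in range(n):
        window = q[max(0, i - delta): min(n, i + delta + 1)]
        env_low[i] = min(window) - eps
        env_high[i] = max(window) + eps
    return env_low, env_high

def envelope_similarity(Q, A, delta, eps):
    # Upper bound on LCSS_{delta,eps}(Q, A): count the points of A lying inside
    # the envelope of Q in both the x and the y dimension.
    lx, hx = envelope([p[0] for p in Q], delta, eps)
    ly, hy = envelope([p[1] for p in Q], delta, eps)
    hits = 0
    for i in range(min(len(Q), len(A))):        # compare on the common time span
        if lx[i] <= A[i][0] <= hx[i] and ly[i] <= A[i][1] <= hy[i]:
            hits += 1
    return hits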
Therefore, in this phase, we compute\nupper bound estimates of the similarity between the query\nand the indexed sequences using their MBRs.\nBelow we describe the algorithm to find the closest trajectory\nto a given query:\nInput: Query Q, Index I with trajectory MBRs, Method\nOutput: Most similar trajectory to Q.\n219\nBox Env = constructM BE\n,\n(Q);\nVector V\nQ\n= CreateM BRs(Env);\n// V\nQ\ncontains a number of boxes.\nPriority queue P Q\n;\n// P Q keeps one entry per trajectory sorted\n// according to the similarity estimate\nfor each box B in V\nQ\n:\nV = I.intersectionQuery(B);\n// V contains all trajectory MBRs that intersect with B.\nif Method == Exact: // upper bound\nP Q\ncomputeL-SimilarityEstimates(V, B);\nelse: // approximate\nP Q\ncomputeV-SimilarityEstimates(V, B);\nBestSoF ar = 0; Best\n;\nwhile P Q not empty:\nE\nPQ.top;\nif E.estimate < BestSoF ar: break;\nelse:\nD = computeLCCS\n,\n(Q, E); // exact\nif D > BestSoF ar:\nBestSoF ar = D; Best\nE;\nReport Best;\nThe above algorithm can be adjusted to return the kNN\nsequences, simply by comparing with the k\nth\nbestSoF ar\nmatch. Next, we examine the possible similarity estimates.\nSome of them guarantee that will find the best match (they\nlower bound the original distance or upper bound the original\nsimilarity), while other estimates provide faster but approximate\nresults.\n5.1 Similarity Estimates\nHere we will show how to compute estimates of the LCSS\nsimilarity, based on the geometric properties of the trajectory\nMBRs and their intersection. An upper bound estimate\nis provided by the length of the MBR intersection and an\napproximate estimate is given as a parameter of the intersecting\nvolume. To formalize these notions, first we present\nseveral operators. Then we will use these operators to derive\nthe estimates.\n5.1.1 Estimates for the LCSS\nEach trajectory T can be decomposed into a number of\nMBRs. The i\nth\n3D MBR of T consists of six numbers:\nM\nT,i\n=\n{t\nl\n, t\nh\n, x\nl\n, x\nh\n, y\nl\n, y\nh\n}. Now, let us define the operators\n(c)\nt\n,\n(p)\nt\nand\nV\nbetween two 3D MBRs M\nP,i\nand\nM\nR,j\n, belonging to objects P and R, respectively:\n1.\n(c)\nt\n(M\nP,i\n, M\nR,j\n) =\n||Intersection||\nt\n,\nwhere M\nR,j\n.x\nl\nM\nP,i\n.x\nl\nM\nR,j\n.x\nh\nand\nM\nR,j\n.x\nl\nM\nP,i\n.x\nh\nM\nR,j\n.x\nh\nand\nM\nR,j\n.y\nl\nM\nP,i\n.y\nl\nM\nR,j\n.y\nh\nand\nM\nR,j\n.y\nl\nM\nP,i\n.y\nh\nM\nR,j\n.y\nh\nor similarly by rotating M\nR,j\nM\nP,i\nTherefore, this operator computes the time intersection\nof two MBR when one fully contains the other in\nthe x,y dimensions.\n2.\n(p)\nt\n(M\nP,i\n, M\nR,j\n) =\n||Intersection||\nt\n, otherwise\n3.\nV\n(M\nP,i\n, M\nR,j\n) =\n||Intersection||\nt\n||Intersection||\nx\n\n||Intersection||\ny\nWe can use upper bound or approximate estimates for the\nsimilarity:\nCommon Volume Intersection\nThe Intersection of MBRs is fully\ncontained within one MBR\nIntersection between two MBRs\ntime\ny\nx\nFigure 5:\nTop left:\nIntersection recorded in list\nL\nt,partial\n.\nTop right: Intersection recorded in list\nL\nt,complete\n. Bottom left: Percentage of Volume Intersection\nkept in L\nV\n.\n1. Upper bound estimates (L-similarity estimate).\nSuch estimates are computed using the following data-structures:\nThe list L\nt,complete\n, an element L(P ) of which is defined\nas:\nL(P ) =\nm\nn\nM\nQ,m\n(c)\nt\nM\nP,n\nwhere Q is a query and P is a trajectory in the index. So the\nlist stores for each trajectory the total time that its MBRs\nintersected with the query's MBRs. 
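Concretely, the two time-intersection operators defined above might be implemented as in the following sketch; this is illustrative only, with each 3D MBR assumed to carry the six numbers t_l, t_h, x_l, x_h, y_l, y_h of Section 5.1.1 as fields.

from collections import namedtuple

MBR = namedtuple("MBR", "tl th xl xh yl yh")

def spatially_contains(outer, inner):
    # True when 'outer' fully contains 'inner' in both the x and y dimensions
    return (outer.xl <= inner.xl <= outer.xh and outer.xl <= inner.xh <= outer.xh and
            outer.yl <= inner.yl <= outer.yh and outer.yl <= inner.yh <= outer.yh)

def time_overlap(a, b):
    # Length of the intersection of two MBRs along the time axis
    return max(0, min(a.th, b.th) - max(a.tl, b.tl))

def intersect_t(q_mbr, t_mbr):
    # Distinguishes the first operator (one MBR spatially contains the other)
    # from the second (any other overlap); returns None when the MBRs do not
    # intersect in time.
    overlap = time_overlap(q_mbr, t_mbr)
    if overlap == 0:
        return None
    if spatially_contains(t_mbr, q_mbr) or spatially_contains(q_mbr, t_mbr):
        return ("complete", overlap)
    return ("partial", overlap)

Summing the "complete" overlaps per trajectory gives the entries of L_t,complete, and summing the "partial" ones gives L_t,partial, as detailed next.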
We record into this list\nonly the intersections, where a query MBR is fully contained\nin all spatial dimensions by a trajectory MBR (or vice versa\n-it is equivalent. See figure 5, top right).\nThe list L\nt,partial\n, an element L(P ) of which is defined as:\nL(P ) =\nm\nn\nM\nQ,m\n(p)\nt\nM\nP,n\nThis list records for each sequence the total intersection\nin time for those query MBRs that are not fully contained\nwithin the x,y dimensions by the trajectory MBRs (or vice\nversa. Figure 5, top left).\nRegarding a query Q, for any trajectory P the sum of\nL\nt,complete\n(P ) + L\nt,partial\n(P ) will provide an upper bound\non the similarity of P and Q.\nThe reason for the distinction of the L-similarity estimate\nin two separate lists derives from the fact that the estimates\nstored in list L\nt,partial\ncan significantly overestimate\nthe LCSS similarity. If one wishes to relax the accuracy,\nin favor of enhanced performance, it is instructive to give a\nweight 0 < w\np\n< 1 to all estimates in list L\nt,partial\n. Even\nthough now we may miss the best match to our query, we\nare going to find a close match in less time. This weighted\napproach is used when we are seeking for approximate, but\nvery good quality answers, however it will not be explained\nfurther due to space limitations.\n2. Approximate estimates (V-similarity estimate).\nThis second estimate is based on the intersecting volume of\nthe MBRs. This type of estimates are stored in list L\nV\n:\nAny element L\nV\n(P ) of list L\nV\nrecords similarity estimates\nbetween trajectory P and query Q, based on the total volume\nintersection between the MBRs of P and Q.\nL(P ) =\n1\nlength(P )\nm\nn\nM\nQ,m\nV\nM\nP,n\n||M\nQ,m\n||\nV\n||M\nQ,m\n||\nt\n220\nwhere\n||M||\nV\ndenotes the volume of MBR M and\n||M||\nt\nits\nlength on the time axis.\nThe L-similarity overestimates the LCSS\n,\nbetween two\nsequences A and B and so it can be deployed for the design\nof an index structure.\nLemma 2. The use of the L-similarity estimate upper bounds\nthe LCSS\n,\nsimilarity between two sequences A and B and\ntherefore does not introduce any false dismissals.\nThe V-similarity estimate can be used for approximate\nquery answering. Even though it does not guarantee the\nabsence of false dismissals, the results will be close to the\noptimal ones with high probability. Also, because this estimate\nprovides a tighter approximation to the original distance\n, we expect faster response time. Indeed, as we show in\nthe experimental section, the index performance is boosted,\nwhile the error in similarity is frequently less then 5%.\n5.2 Estimates for the DTW\nWhen the distance function used is the Time Warping,\nusing the index we obtain a lower bound of the actual distance\n. In this case we have the inverse situation from the\nLCSS; instead of calculating the degree of overlap between\nthe MBRs of the indexed trajectories and the query, we evaluate\nthe distance between the MBRs. The overall distance\nbetween the MBRs underestimates the true distance of the\ntrajectories, and no false dismissals are introduced. Using\nthe MBRs we can also calculate upper bound estimates on\nthe distance, which hadn't been exploited in previous work\n[10, 22]. Sequences with lower bound larger than the smallest\nupper bound can be pruned. 
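A minimal sketch of this pruning rule follows; here mindist and maxdist stand for hypothetical helpers that return the MBR-based lower and upper bounds on the DTW distance between the query envelope and a candidate trajectory.

def prefilter_dtw(query_mbrs, candidates, mindist, maxdist):
    # Keep only candidates whose lower bound does not exceed the smallest upper
    # bound seen; the discarded ones cannot be the nearest neighbor.
    bounds = [(mindist(query_mbrs, c), maxdist(query_mbrs, c), c) for c in candidates]
    best_upper = min(u for _, u, _ in bounds)
    return [c for lo, _, c in bounds if lo <= best_upper]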
With this additional prefiltering\nstep we can gain on average an additional 10-15%\nspeedup in the total execution time.\nDue to space limitations only a visual representation of\nthis approach is provided in figure 6.\nMBR GENERATION\nGiven a multidimensional time-series (or an MBE) our\nobjective is to minimize the volume of the sequence using\nk MBRs. Clearly, the best approximation of a trajectory\n(or an MBE) using a fixed number of MBRs is the set of\nMBRs that completely contain the sequence and minimize\nthe volume consumption. We can show the following lemma:\nLemma 3. Minimizing the volume of the Minimum Bounding\nEnvelope, minimizes the expected similarity approximation\nerror.\nThree different approaches are considered:\n1. k-Optimal. We can discover the k MBRs of a sequence\nthat take up the least volume, using a dynamic programming\nalgorithm that requires O(n\n2\nk) time ([8]), where n is the\nlength of the given sequence. Since this approach is not\nreasonable for large databases, we are motivated to consider\napproximate and faster solutions.\n2. Equi-Split. This technique produces MBRs of fixed\nlength l. It is a simple approach with cost linear in the\nlength of a sequence. However, in pathological cases increasing\nthe number of splits can result to larger space utilization\n,therefore the choice of the MBR length becomes a\ncritical parameter (see figure 7 for an example).\nA. Query Q\nB. Query Envelope\nC. Envelope Splitting\nD. Sequence MBRs\nE. MINDIST(Q,R)\nF. MAXDIST(Q,R)\nFigure 6: A visual intuition of the DTW indexing\ntechnique (the one-dimensional case is shown for\nclarity).\nThe original query (A) is enclosed in a\nminimum-bounding envelope (B) like the LCSS approach\n. The MBE is split into its MBRs using equi\nor greedy split (fig. (C)). The candidate sequences\nin the database have their MBRs stored in the index\n(D). Between the query and any sequence in\nthe index, the minimum and maximum distance can\nbe quickly determined by examining the distance\nbetween the MBRs and the query's bounding envelope\n, as represented by the arrows in (E) and (F).\n3. Greedy-Split. The Greedy approach is our implementation\nchoice in this paper. Initially we assign an MBR to\neach of the n sequence points and at each subsequent step\nwe merge the consecutive MBRs that will introduce the least\nvolume consumption. The algorithm has a running time of\nO(nlogn). We can see a sketch of the method in fig. 8. Al-ternatively\n, instead of assigning the same number of splits\nto all objects, according to our space requirements we can\nassign a total of K splits to be distributed among all objects.\nThis method can provide better results, since we can assign\nmore splits for the objects that will yield more space gain.\nAlso, this approach is more appropriate when one is dealing\nwith sequences of different lengths. The complexity of this\napproach is O(K + N logN ), for a total of N objects ([8]).\nInput: A spatiotemporal trajectory T and an integer k denoting\nthe number of final MBRs.\nFor 0\ni < n compute the volume of the MBR produced by\nmerging T\ni\nand T\ni+1\n. 
The results are stored in a priority queue.\nWhile #M BRs < k: Using the priority queue, merge the pair\nof consecutive MBRs that yield the smallest increase in volume.\nDelete the two merged MBRs and insert the new one in the priority\nqueue.\nOutput: A set of MBRs that cover T .\nFigure 8: The greedy algorithm for producing k\nMBRs that cover the trajectory T .\nAfter a trajectory is segmented the MBRs can be stored\nin a 3D-Rtree. Using the greedy split each additional split\nwill always lead to smaller (or equal) volume (figure 7). A\nsimilar greedy split algorithm is also used for splitting the\nMBE of the query trajectory Q.\n221\n(a)\nEqui-Split, 8 MBRs, Gain = 5.992\n(b)\nEqui-Split, 9 MBRs, Gain = 5.004\n(c)\nGreedy-Split, 8MBRs, Gain = 9.157\n(d)\nGreedy-Split, 9MBRs, Gain = 10.595\nFigure 7: (a): 8 MBRs produced using equi-Split. The volume gain over having 1 MBR is 5.992. (b):\nSegmenting into 9 MBRs decreases the volume gain to 5.004. So, disk space is wasted without providing\na better approximation of the trajectory. (c): 8 MBRs using greedy-Split. The volume gain over having 1\nMBR is 9.157. (d): Every additional split will yield better space utilization. Segmentation into 9 MBRs\nincreases volume gain to 10.595.\nSUPPORTING MULTIPLE MEASURES\nThe application of the Minimum Bounding Envelope only\non the query suggests that user queries are not confined to\na predefined and rigid matching window . The user can\npose queries of variable warping in time. In some datasets,\nthere is no need to perform warping, since the Euclidean\ndistance performs acceptably [11]. In other datasets, by\nusing the Euclidean distance we can find quickly some very\nclose matches, while using warping we can distinguish more\nflexible similarities. So, we can start by using a query with\n= 0 (no bounding envelope), and increase it progressively\nin order to find more flexible matches (figure 9).\nTherefore, our framework offers the unique advantage that\nmultiple distance functions can be supported in a single index\n. The index sequences have been segmented without any\nenvelope applied on them and never have to be adjusted\nagain. For different measures, the aspects that change are,\nthe creation of the query envelope and the type of operation\nbetween MBRs. In order to pose queries based on Euclidean\ndistance we follow the steps:\nThe query is segmented with no envelope applied on it.\nThe minDist and maxDist estimators for the Euclidean\ndistance are derived by calculating the distance between the\nquery and index MBRs, just like in the DTW case.\n0\n50\n100\n150\n200\n250\n300\n350\n200\n150\n100\n50\n0\n50\n100\nIndex Trajectory\nQuery\nFigure 9: By incorporating the bounding envelope\non the query, our approach can support Euclidean\ndistance, constrained or full warping. This is accomplished\nby progressively expanding the MBE.\nEXPERIMENTAL EVALUATION\nIn this section we compare the effectiveness of various\nsplitting methods and we demonstrate the superiority of our\nlower bounding technique (for the DTW) compared to other\nproposed lower bounds. We describe the datasets we used\nand present comprehensive experiments regarding the index\nperformance for the two similarity estimates. In addition,\nwe evaluate the accuracy of the approximate estimates. All\nexperiments conducted were run on an AMD Athlon 1.4 Ghz\nwith 1GB RAM and 60GB of hard drive.\n1. ASL\n2. Buoy Sensor\n3. Video Track 1\n4. Flutter\n5. Marine Mammals\n6. Word Tracking\n7. Random Walk\n8. 
Video Track 2\nFigure 10: Datasets used for testing the efficiency\nof various MBR generation methods.\n8.1 MBR Generation Comparison\nThe purpose of our first experiment is to test the space\nconsumption of the presented MBR generation methods. We\nhave used eight datasets with diverse characteristics, in order\nto provide objective results.\nWe evaluate the space consumption, by calculating the\n\"Average Volume Gain\" (AvgV olGain), which is defined\nas the percentage of volume when using i MBRs, over the\nvolume when using only 1 MBR, normalized by the maximum\ngain provided over all methods (for various number of\nsplits).\nRandom\nEqui\nGreedy\n1\n2\n3\n4\n5\n6\n7\n8\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nDataset\nAverage Volume Gain\nFigure 11: The greedy-split MBR generation algorithm\npresents the highest volume gain, by producing\nMBRs that consume consistently less space, over\na number of datasets and for diverse number of generated\nMBRs\n222\nDATASET\nEQ\ns20,d5\nGR\ns20,d5\nEQ\ns40,d5\nGR\ns40,d5\nEQ\ns20,d5\nGR\ns20,d5\nEQ\ns40,d5\nGR\ns40,d5\nLB-Kim\nLB-Yi\nLCSS\nDTW\nASL\n0.732\n0.799\n0.825\n0.856\n0.449\n0.632\n0.588\n0.756\n0.1873\n0.2530\nVT1\n0.260\n0.339\n0.453\n0.511\n0.087\n0.136\n0.230\n0.266\n0.0838\n0.1692\nMarine\n0.719\n0.750\n0.804\n0.814\n0.226\n0.506\n0.308\n0.608\n0.2587\n0.4251\nWord\n0.627\n0.666\n0.761\n0.774\n0.311\n0.361\n0.466\n0.499\n0.0316\n0.2116\nRandom\n0.596\n0.652\n0.701\n0.741\n0.322\n0.384\n0.440\n0.491\n0.1389\n0.2067\nVT2\n0.341\n0.431\n0.498\n0.569\n0.210\n0.296\n0.363\n0.437\n0.2100\n0.5321\nTable 1: Some indicative results of how close our similarity estimates are to the exact value (for 20 and 40\nsplits, & = 5%). For all datasets the greedy-split approach provides the closest similarity estimates to the\nactual similarity.\nAvgV olGain is a number between 0 and 1, where higher\nnumbers indicate increased volume gain (or less space consumption\n) against the competitive methods.\nIn figure 11\nwe observe the average volume gain for the eight datasets.\nThe greedy-split algorithm produced MBRs that took at\nleast half the space, compared to equi-split. The equi-split\noffers slightly better results, than producing MBRs at random\npositions. The volume gain of greedy-split was less,\nonly for the buoy sensor, which is a very busy and unstruc-tured\nsignal. This experiment validates that our choice to\nuse the greedy-split method was correct. Since, the indexed\nMBR trajectories will take less space, we also expect tighter\nsimilarity estimates, therefore fewer false positives.\n8.2 Tightness of Bounds\nIn table 1 we show how close our similarity estimates are\n(for LCSS and DTW) to the actual similarity between sequences\n. Numbers closer to 1, indicate higher similarity\nto the value returned by the exact algorithm. To our best\nknowledge, this paper introduces the first upper bounding\ntechnique for the LCSS. For DTW there have been a few\napproaches to provide a lower bound of the distance; we refer\nto them as LB-Kim [12] and LB-Yi [21]. These lower\nbounds originally referred to 1D time-series; here we extend\nthem in more dimensions, in order to provide unambiguous\nresults about the tightness of our estimates. Note that the\npreviously proposed methods operate on the raw data. Our\napproach can still provide tighter estimates, while operating\nonly on the trajectory MBRs. Using the raw data our\nexperiments indicate that we are consistently 2-3 times better\nthan the best alternative approach. 
However, since our\nindex operates on the segmented time-series we only report\nthe results on the MBRs.\nThe greedy-split method approximates the similarity consistently\ntighter than the equi-split. In table 1 only the\nresults for = 5% of the query's length are reported, but\nsimilar results are observed for increasing values of . It is\nevident from the table that using our method we can provide\nvery tight lower bounds of the actual distance.\n8.3 Matching Quality\nWe demonstrate the usefulness of our similarity measures\nin a real world dataset. The Library of Congress maintains\nthousands of handwritten manuscripts, and there is an increasing\ninterest to perform automatic transcribing of these\ndocuments. Given the multiple variations of each word and\ndue to the manuscript degradations, this is a particularly\nchallenging task and the need for a flexible and robust distance\nfunction is essential.\nWe have applied the LCSS and DTW measures on word\nFigure 12:\nResults for a real world application.\n3NN reported for each query, using Dynamic Time\nWarping to match features extracted from scanned\nmanuscript words.\nimages extracted from a 10 page scanned manuscript. 4-dimensional\ntime-series features have originally been extracted\nfor each word. Here we maintain the 2 least correlated time-series\nfeatures and treat each word as a trajectory. In figure\n12 we observe the 3-KNN results using DT W for various\nword queries. The results are very good, showing high accuracy\neven for similarly looking words. Analogous results\nhave been obtained using the LCSS.\n8.4 Index performance\nWe tested the performance of our index using the upper\nbound and the approximate similarity estimates, and compared\nit to the sequential scan. Because of limited space,\nthe majority of the figures record the index performance using\nthe LCSS as a similarity measure. The performance\nmeasure used is the total computation time required for the\nindex and the sequential scan to return the nearest neighbor\nfor the same one hundred queries. For the linear scan,\none can also perform early termination of the LCSS (or the\nDTW) computation. Therefore, the LCSS execution can\nbe stopped at the point where one is sure that the current\nsequence will not be more similar to the query than the\nbestSoFar. We call this optimistic linear scan. Pessimistic\nlinear scan, is the one than does not reuse the previously\ncomputed similarity values and can be an accurate time\nestimate, when the query match resides at the end of the\ndataset. We demonstrate the index performance relative to\n223\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\nDataset size\nTime Ratio Compared to Linear Scan\n=5%\nOptimistic\nPessimistic\nLinear Scan\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\nDataset size\nTime Ratio Compared to Linear Scan\n=10%\nOptimistic\nPessimistic\nLinear Scan\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\nDataset size\nTime Ratio Compared to Linear Scan\n=20%\nOptimistic\nPessimistic\nLinear Scan\nFigure 13: Index performance. For small warping windows the index can be up to 5 times faster than\nsequential scan without compromising accuracy. 
The gray regions indicate the range of potential speedup.\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\nDataset size\nTime Ratio Compared to Linear Scan\n=5%\nOptimistic\nPessimistic\nLinear Scan\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\nDataset size\nTime Ratio Compared to Linear Scan\n=10%\nOptimistic\nPessimistic\nLinear Scan\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\nDataset size\nTime Ratio Compared to Linear Scan\n=20%\nOptimistic\nPessimistic\nLinear Scan\nFigure 14: Using the approximate similarity estimates the response time can be more than 7 times faster.\nboth types of linear scan, because this provides a realistic\nupper or lower bound on the index speedup.\nThe dataset we used contained 2\n10\n. . . 2\n16\ntrajectories. Taking\nunder consideration that the average trajectory size is\naround 500 points, this resulted to a database with more\nthan 16 million 2D points. The trajectories have been normalized\nby subtracting the average value in each direction of\nmovement. All data and queries can be obtained by emailing\nthe first author.\nMixed two-dimensional Time-series (2D-Mixed). This\nsecond dataset consists of time-series of variable length, ranging\nfrom less than 100 points to over 1000 points. The\ndataset is comprised by the aggregation of the eight datasets\nwe used for comparing the MBR generation methods. Since\nthe total number of these trajectories is less than 200, we\nhave used them as seeds to generate increasingly larger datasets.\nWe create multiple copies of the original trajectories by incorporating\nthe following features:\nAddition of small variations in the original trajectory\npattern\nAddition of random compression and decompression in\ntime\nThe small variations in the pattern were added by interpolating\npeaks of Gaussian noise using splines. In this manner\nwe are able to create the smooth variations that existed in\nthe original datasets.\n8.4.1 Results on the upper bound Estimates\nThe index performance is influenced be three parameters:\nthe size of the dataset, the warping length (as a percentage\nof the query's length) and the number of splits. For all\nexperiments the parameter\n(matching in space) was set\nto std/2 of the query, which provided good and intuitive\nresults.\nDataset size: In figure 13 we can observe how the\nperformance of the index scales with the database size (for\nvarious lengths of matching window). We record the index\nresponse time relative to both optimistic and pessimistic linear\nscan. Therefore, the gray region in the figures indicates\nthe range of possible speedup. It is evident that the early\ntermination feature of the sequential scan can significantly\nassist its performance. The usefulness of an index becomes\nobvious for large dataset sizes, where the quadratic computational\ncost dominates the I/O cost of the index. For these\ncases our approach can be up to 5 times faster than linear\nscan. In figure 15 we also demonstrate the pruning power of\nthe index, as a true indicator (not biased by any implementation\ndetails) about the efficacy of our index. Using the\nindex we perform 2-5 times fewer LCSS computations than\nthe linear scan. We observe similar speedup when using the\nDTW as the distance function in figure 17.\nParameter : The index performance is better for\nsmaller warping lengths (parameter ). The experiments\nrecord the performance for warping from 5% to 20% of the\nquery's length. 
Increasing values signify larger bounding\nenvelopes around the query, therefore larger space of search\nand less accurate similarity estimates. The graphs suggest\nthat an index cannot not be useful under full warping (when\nthe data are normalized).\nNumber of Splits: Although greater number of MBRs\nfor each trajectory implies better volume utilization, nonetheless\nmore MBRs also lead to increased I/O cost. When we\nare referring to x% splits, it means that we have assigned a\ntotal of 100/x(\nn\ni=1\n(\n||T\ni\n||)) splits, for all sequences T\ni\n. In\nour figures we provide the 5% splits scenario for the MBRs,\nwhich offers better performance than 10% and 20% splits,\nsince for the last two cases the I/O cost negates the effect of\nthe better query approximation. The index space requirements\nfor 5% splits is less than a quarter of the dataset size.\n8.4.2 Results on the approximate Estimates\nHere we present the index performance when the volume\nintersections of the MBRs are used as estimates of the sim-224\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\n1.6\nDataset size\nRatio of LCSS performed by the index\nPruning Power compared to Linear Scan\n5% splits\n10% splits\n20% splits\nLinear Scan\n=5%\n=10%\n=20%\nFigure 15: Each gray band indicates\n(for a certain warping window\n) the percentage of LCSS\ncomputations conducted by the index\ncompared to linear scan.\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nDataset size\nAverage similarity Error\nSimilarity Error, 5% splits\n=5%\n=10%\n=20%\nFigure 16: Using the V-similarity\nestimate, we can retrieve answers\nfaster with very high accuracy.\nThe LCSS similarity is very close\n(2-10%) to the exact answer returned\nby the sequential scan.\n1024\n2048\n4096\n8192\n16384\n32768\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1.4\nDataset size\nTime Ratio Compared to Linear Scan\n=5%\nOptimistic\nPessimistic\nLinear Scan\nFigure 17: Index Performance using\nDTW as the distance measure.\n( = 5%). We can observe up to 5\ntimes speedup.\nilarity and the results are shown in figure 14. We observe\nthat using this approximate similarity estimate, our index\nperformance is boosted up. The use of the V-similarity estimate\nleads to more tight approximations of the original\nsimilarity compared to the L-similarity estimate, however\nnow we may miss finding the best match.\nNaturally, comes the question of the quality of the results\n. We capture this by calculating the absolute difference\nbetween the similarity of the best match returned by the\nindex, and the best match found by the sequential scan for\neach query. Then we average the results over a number of\nqueries\n|q|. Therefore, the Average Similarity Error (ASE)\nis:\nASE =\n1\n|q|\n|q|\ni=1\n(\n|BestMatch\nindex\nBestM atch\nexhaustive\n|)\nThe results are shown in figure 16. We can see that the\nsimilarity returned by the V-similarity estimate is approxi-mately\nwithin 5% of the actual similarity (5% splits used).\nTherefore, by providing two similarity estimates the user\ncan decide for the trade-off between the expedited execution\ntime and the quality of results. Since by using the latter estimator\nwe can significantly increase the performance of the\nindex, this is the approach we recommend for mining large\ndatasets.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we have presented an external memory indexing\nmethod for discovering similar multidimensional time-series\n. 
The unique advantage of our approach is that it\ncan accommodate multiple distance measures. The method\nguarantees no false dismissals and depicts a significant execution\nspeed up for the LCSS and DTW compared to sequential\nscan. We have shown the tightness of our similarity\nestimates and demonstrated the usefulness of our measures\nfor challenging real world applications. We hope that our\neffort can act as a bridge between metric and non-metric\nfunctions, as well as a tool for understanding better their\nstrengths and weaknesses. In the future we plan to investigate\nthe combination of several heuristics, in order to provide\neven tighter estimates.\nAcknowledgements: We would like to thank Margrit Betke\nfor providing us the Video Track I and II datasets. We also\nfeel obliged to T. Rath and R. Manmatha for kindly providing\nthe manuscript words dataset.\nREFERENCES\n[1] J. Aach and G. Church. Aligning gene expression time series\nwith time warping algorithms. In Bioinformatics, Volume 17,\npages 495508, 2001.\n[2] O. Arikan and D. Forsyth. Interactive motion generation from\nexamples. In Proc. of ACM SIGGRAPH, 2002.\n[3] Z. Bar-Joseph, G. Gerber, D. Gifford, T. Jaakkola, and\nI. Simon. A new approach to analyzing gene expression time\nseries data. In Proc. of 6th RECOMB, pages 3948, 2002.\n[4] D. Berndt and J. Clifford. Using Dynamic Time Warping to\nFind Patterns in Time Series. In Proc. of KDD Workshop,\n1994.\n[5] M. Betke, J. Gips, and P. Fleming. The camera mouse: Visual\ntracking of body features to provide computer access for people\nwith severe disabilities. In IEEE Transactions on Neural\nSystems and Rehabilitation Engineering, Vol. 10, No. 1, 2002.\n[6] G. Das, D. Gunopulos, and H. Mannila. Finding Similar Time\nSeries. In Proc. of the First PKDD Symp., pages 88100, 1997.\n[7] D. Gavrila and L. Davis. Towards 3-d model-based tracking\nand recognition of human movement: a multi-view approach. In\nInt. Workshop on Face and Gesture Recognition.\n[8] M. Hadjieleftheriou, G. Kollios, V. Tsotras, and D. Gunopulos.\nEfficient indexing of spatiotemporal objects. In Proc. of 8th\nEDBT, 2002.\n[9] T. Kahveci, A. Singh, and A. Gurel. Similarity searching for\nmulti-attribute sequences. In Proc. of SSDBM, 2002.\n[10] E. Keogh. Exact indexing of dynamic time warping. In Proc. of\nVLDB, 2002.\n[11] E. Keogh and S. Kasetty. On the need for time series data\nmining benchmarks: A survey and empirical demonstration. In\nProc. of SIGKDD, 2002.\n[12] S. Kim, S. Park, and W. Chu. An index-based approach for\nsimilarity search supporting time warping in large sequence\ndatabases. In In Proc. of 17th ICDE, 2001.\n[13] Z. Kov\nacs-Vajna. A fingerprint verification system based on\ntriangular matching and dynamic time warping. In IEEE\nTransactions on PAMI, Vol. 22, No. 11.\n[14] S.-L. Lee, S.-J. Chun, D.-H. Kim, J.-H. Lee, and C.-W. Chung.\nSimilarity Search for Multidimensional Data Sequences. Proc.\nof ICDE, pages 599608, 2000.\n[15] S. Park, W. Chu, J. Yoon, and C. Hsu. Efficient Searches for\nSimilar Subsequences of Different Lengths in Sequence\nDatabases. In Proc. of ICDE, pages 2332, 2000.\n[16] T. Rath and R. Manmatha. Word image matching using\ndynamic time warping. In Tec Report MM-38. Center for\nIntelligent Information Retrieval, University of\nMassachusetts Amherst, 2002.\n[17] J. F. Roddick and K. Hornsby. Temporal, Spatial and\nSpatio-Temporal Data Mining. 2000.\n[18] M. Shimada and K. Uehara. Discovery of correlation from\nmulti-stream of human motion. 
In Discovery Science 2000.\n[19] R. E. Valdes-Perez and C. A. Stone. Systematic detection of\nsubtle spatio-temporal patterns in time-lapse imaging ii.\nparticle migrations. In Bioimaging 6(2), pages 7178, 1998.\n[20] M. Vlachos, G. Kollios, and D. Gunopulos. Discovering similar\nmultidimensional trajectories. In Proc. of ICDE, 2002.\n[21] B.-K. Yi, H. V. Jagadish, and C. Faloutsos. Efficient retrieval\nof similar time sequences under time warping. In Proc. of\nICDE, pages 201208, 1998.\n[22] Y. Zhu and D. Shasha. Query by humming: a time series\ndatabase approach. In Proc. of SIGMOD, 2003.\n225\n", "keywords": "Dynamic Time Warping;indexing;trajectory;distance function;Dynamic Time Warping (DTW);similarity;Longest Common Subsequence;Trajectories;Longest Common Subsequence (LCSS);measure"} {"name": "111", "title": "Information Retrieval for Language Tutoring: An Overview of the REAP Project", "abstract": "INTRODUCTION Typical Web search engines are designed to run short queries against a huge collection of hyperlinked documents quickly and cheaply, and are often tuned for the types of queries people submit most often [2]. Many other types of applications exist for which large, open collections like the Web would be a valuable resource. However, these applications may require much more advanced support from information retrieval technology than is currently available. In particular, an application may have to describe more complex information needs, with a varied set of properties and data models, including aspects of the user's context and goals. In this paper we present an overview of one such application, the REAP project, whose main purpose is to provide reader-specific practice for improved reading comprehension. (REAP stands for REAder-specific Practice.) A key component of REAP is an advanced search model that can find documents satisfying a set of diverse and possibly complex lexical constraints, including a passage's topic, reading level (e.g. 3rd grade), use of syntax (simple vs. complex sentence structures), and vocabulary that is known or unknown to the student. Searching is performed on a database of documents automatically gathered from the Web which have been analyzed and annotated with a rich set of linguistic metadata. The Web is a potentially valuable resource for providing reading material of interest to the student because of its extent, variety, and currency for popular topics.", "fulltext": "SYSTEM DESCRIPTION\nHere we describe the high-level design of the REAP\ninformation retrieval system, including document database\nrequirements and construction, annotations, and a brief\ndescription of the retrieval model.\n2.1 Database Construction\nOur goal is to present passages that are interesting to students,\nwhether they are on current topics such as pop stars or sports\nevents, or related to particular classroom projects. To this end,\nwe use the Web as our source of reading practice materials\nbecause of its extent, variety, and currency of information.\nWe want coverage of topics in the database to be deeper in\nareas that are more likely to be of interest to students.\nCoverage of other areas is intended to be broad, but more\nshallow. We therefore gather documents for the database\nusing focused crawling [3]. The current prototype uses a\npage's reading difficulty to set priority for all links from that\npage equally, based on the distance from the target reading\nlevel range. 
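A rough sketch of this prioritization follows; it is illustrative only, the grade-level estimate is assumed to come from the reading difficulty measure just mentioned, and the scoring and default target range are our own assumptions.

def link_priority(page_grade_level, target_lo=1, target_hi=8):
    # All out-links of a page receive the same priority, set by how far the
    # page's estimated reading level falls outside the target range.
    if target_lo <= page_grade_level <= target_hi:
        distance = 0
    else:
        distance = min(abs(page_grade_level - target_lo),
                       abs(page_grade_level - target_hi))
    return -distance   # higher value means the page's links are crawled earlier

Under these assumptions, links found on a grade-12 page would be enqueued behind links found on a grade-5 page, since the latter already lies inside the target range.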
We plan to explore more refined use of\nannotations to direct the crawl on a link-by-link basis. In our\nprototype, we collected 5 million pages based on an initial set\nof 20,000 seed pages acquired from the Google Kids Directory\n[7]. Our goal is to have at least 20 million pages that focus on\nmaterial for grades 1 through 8. The document database must\nbe large enough that the most important lexical constraints are\nsatisfied by at least a small number of pages. Data annotation\nis currently performed off-line at indexing time. The specific\nannotations for REAP are described in Section 2.2.\nOnce the documents are acquired, they are indexed using an\nextended version of the Lemur IR Toolkit [9]. We chose\nLemur because of its support for language model-based\nretrieval, its extensibility, and its support for incremental\nindexing, which is important for efficient updates to the\ndatabase. Annotations are currently stored as Lemur\nproperties, but later versions will take advantage of the\nenhancements planned for support of rich document structure,\ndescribed in Section 2.3.\n2.2 Linguistic Annotations\nIn addition to the underlying text, the following linguistic\nannotations are specified as features to be indexed:\n\nBasic text difficulty within a document section or region.\nThis is calculated using a new method based on a mixture\nof language models [4] that is more reliable for Web\npages and other non-traditional documents than typical\nreading difficulty measures.\n\nGrammatical structure. This includes part-of-speech tags\nfor individual words as well as higher-level parse\nstructures, up to sentence level.\n\nDocument-level attributes such as title, metadata\nkeywords, and ratings.\n544\n\nTopic category. This would involve broad categories such\nas fiction/non-fiction [5] or specific topics, perhaps based\non Open Directory.\n\nNamed entity tags. We use BBN's Identifinder [1] for\nhigh-precision tagging of proper names.\nWe may also look at more advanced attributes such as text\ncoherence and cohesion [6].\n2.3 Query and Retrieval Models\nA typical information need for the REAP system might be\ndescribed as follows:\nFind a Web page about soccer, in American English,\nwith reading difficulty around the Grade 3 level. The\ntext should use both passive- and active-voice\nsentence constructions and should introduce about\n10% new vocabulary relative to the student's\nknown-vocabulary model. The page's topic is less\nimportant than finding pages that practice the words:\nfor example, an article on another sport that satisfies\nthe other constraints would also be acceptable.\nInformation needs in REAP will be modeled as mixtures of\nmultiple word histograms, representing different sources of\nevidence, as well as document-level or passage-level\nconstraints on attributes such as reading difficulty. There is\nprecedent for using word histograms to specify information\nneeds: indeed, query expansion is one example of this. More\nspecifically, related work includes language model-based\ntechniques such as relevance models [8].\nNo current Web-based search engine is able to make use of\ncombinations of lexical constraints and language models in\nthis way, on such a large scale. 
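The first of the annotations above, text difficulty, relies on the mixture-of-language-models method of [4]; the details are in that paper. Purely as an illustration of the general idea, and not of the method in [4], a smoothed unigram model built per grade level can score a passage, with the best-fitting grade serving as a crude difficulty estimate.

import math

def grade_log_likelihood(tokens, grade_counts, vocab_size, alpha=1.0):
    # grade_counts maps word -> frequency in training text for one grade level;
    # add-alpha smoothing keeps unseen words from zeroing out the score.
    total = sum(grade_counts.values())
    return sum(math.log((grade_counts.get(t, 0) + alpha) / (total + alpha * vocab_size))
               for t in tokens)

def predict_grade(tokens, models_by_grade, vocab_size):
    # Pick the grade whose unigram model assigns the passage the highest likelihood.
    return max(models_by_grade,
               key=lambda g: grade_log_likelihood(tokens, models_by_grade[g], vocab_size))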
To support this, we are making\nextensions to Lemur that include:\n1.\nRetrieval models for rich document structure, which\nincludes nested fields of different datatypes where each\nfield may be associated with its own language model.\n2.\nMore detailed retrieval models in which we skew\nlanguage models towards the appropriate grade level,\ntopic, or style.\n3.\nThe use of user model descriptions as context for a query.\n2.4 User Profiles\nIn the current prototype, we model a reader's topic interests,\nreading level, and vocabulary acquisition goals using simple\nlanguage models. For example, we model the curriculum as a\nword histogram. Although crude, this captures word-frequency\ninformation associated with general reading\ndifficulty, as well as capturing topics that are the focus of the\ncurriculum at each grade level. We plan to add more complex\naspects to user profiles, including more specific lexical\nconstraints such as grammar constructs and text novelty. The\nmodels can be updated incrementally as the student's interests\nevolve and they make progress through the curriculum.\nEVALUATION METHODS\nEvaluation of the end-to-end REAP system will be via a series\nof three year-long studies with both adults and children. The\nadult studies will provide feedback on vocabulary matching\nand comprehension, and the child studies will test the\nhypothesis that children will read adaptively to texts that vary\nin vocabulary demands, where those texts that closely reflect\nthe reader's interests and comprehension can be used to\nsupport improved comprehension and vocabulary growth.\n\nCONCLUSION\nThe REAP project is intended to advance the state of the art in\ninformation retrieval, as well as research in reading\ncomprehension, by bringing together practical user models of\nstudent interests, vocabulary knowledge and growth, and other\naspects of reading, with interesting material from large, open\ncollections like the World Wide Web. This type of system is a\nvaluable new research tool for educational psychologists and\nlearning scientists, because it gives much greater control over\nhow instructional materials are selected. This in turn allows\ntesting of instructional hypotheses, such as the effect of 10%\nvocabulary stretch, which have been impractical to test in the\npast. The work also has direct application to other areas of\nlanguage learning, such as English as a Second Language\ntraining. More broadly, however, we believe the REAP project\nis a important first step toward enabling richer user and task\nmodels than currently available with ad-hoc search systems.\n\nACKNOWLEDGMENTS\nWe thank our collaborators Maxine Eskenazi, Charles Perfetti\nand Jonathan Brown; John Cho and Alexandros Ntoulas of\nUCLA for their crawler code; and the anonymous reviewers.\nThis work was supported by U.S. Dept. of Education grant\nR305G03123. Any opinions, findings, conclusions, or\nrecommendations expressed in this material are the authors'\nand do not necessarily reflect those of the sponsors.\n\nREFERENCES\n[1] Bikel, D. M., Miller, S., Schwartz, R., Weischedel, R. M.,\nNymbol: A high-performance learning name-finder. In\nProceedings of the 5th Conference on Applied Natural\nLanguage Processing, 194 - 201, 1997.\n[2] Broder, A. A taxonomy of Web search. In SIGIR Forum,\n36(2). 3 - 10, 2002.\n[3] Chakrabarti, S., van der Berg, M., & Dom, B. Focused\ncrawling: a new approach to topic-specific web resource\ndiscovery. In Proc. 
of the 8th International World-Wide Web\nConference (WWW8), 1999.\n[4] Collins-Thompson, K., & Callan, J. A language modeling\napproach to predicting reading difficulty. Proceedings of\nHLT/NAACL 2004, Boston, USA, 2004.\n[5] Finn, A., Kushmerick, N. & Smyth, B. Fact or fiction:\nContent classification for digital libraries. Joint DELOS-NSF\nWorkshop on Personalisation and Recommender Systems in\nDigital Libraries (Dublin), 2001.\n[6] Foltz, P. W., Kintsch, W., Landauer, T. K. Analysis of text\ncoherence using Latent Semantic Analysis. Discourse\nProcesses 25(2-3), 285 - 307, 1998.\n[7] Google Kids Directory.\nhttp://directory.google.com/Top/Kids_and_Teens/\n[8] Lavrenko, V., and Croft, B. Relevance-based language\nmodels. In Proc. of the 24th Annual International ACM\nSIGIR Conference, New Orleans, 120 - 127, 2001.\n[9] Ogilvie, P. and Callan, J. Experiments using the Lemur\nToolkit. In Proc.of the 10th Text Retrieval Conference, TREC\n2001. NIST Special Publication 500-250, 103-108, 2001.\n545\n", "keywords": "computer-assisted learning;user model;searching;reading comprehension;Information retrieval;information retrieval"} {"name": "112", "title": "Categorizing Web Queries According to Geographical Locality", "abstract": "Web pages (and resources, in general) can be characterized according to their geographical locality. For example, a web page with general information about wildflowers could be considered a global page, likely to be of interest to a ge-ographically broad audience. In contrast, a web page with listings on houses for sale in a specific city could be regarded as a local page, likely to be of interest only to an audience in a relatively narrow region. Similarly, some search engine queries (implicitly) target global pages, while other queries are after local pages. For example, the best results for query [wildflowers] are probably global pages about wildflowers such as the one discussed above. However, local pages that are relevant to, say, San Francisco are likely to be good matches for a query [houses for sale] that was issued by a San Francisco resident or by somebody moving to that city. Unfortunately, search engines do not analyze the geographical locality of queries and users, and hence often produce sub-optimal results. Thus query [wildflowers ] might return pages that discuss wildflowers in specific U.S. states (and not general information about wildflowers), while query [houses for sale] might return pages with real estate listings for locations other than that of interest to the person who issued the query. Deciding whether an unseen query should produce mostly local or global pages--without placing this burden on the search engine users--is an important and challenging problem, because queries are often ambiguous or underspecify the information they are after. In this paper, we address this problem by first defining how to categorize queries according to their (often implicit) geographical locality. We then introduce several alternatives for automatically and efficiently categorizing queries in our scheme, using a variety of state-of-the-art machine learning tools. We report a thorough evaluation of our classifiers using a large sample of queries from a real web search engine, and conclude by discussing how our query categorization approach can help improve query result quality.", "fulltext": "INTRODUCTION\nWeb pages (and resources, in general) can be characterized\naccording to their geographical locality. 
For example, a\nweb page with general information about wildflowers could\nbe considered a global page, likely to be of interest to a ge-ographically\nbroad audience. In contrast, a web page with\nlistings on houses for sale in a specific city could be regarded\nas a local page, likely to be of interest only to an audience in\na relatively narrow region. Earlier research [9] has addressed\nthe problem of automatically computing the \"geographical\nscope\" of web resources.\nOften search engine queries (implicitly) target global web\npages, while other queries are after local pages. For example,\nthe best results for query [wildflowers] are probably global\npages about wildflowers discussing what types of climates\nwildflowers grow in, where wildflowers can be purchased,\nor what types of wildflower species exist. In contrast, local\npages that are relevant to, say, San Francisco are likely to\nbe good matches for a query [houses for sale] that was issued\nby a San Francisco resident, or by somebody moving\nto San Francisco, even if \"San Francisco\" is not mentioned\nin the query. The user's intent when submitting a query\nmay not be always easy to determine, but if underspecified\nqueries such as [houses for sale] can be detected, they can\nbe subsequently modified by adding the most likely target\ngeographical location or by getting further user input to cus-tomize\nthe results.\nUnfortunately, search engines do not analyze the geographical\nlocality of queries and users, and hence often produce\nsub-optimal results, even if these results are on-topic\nand reasonably \"popular\" or \"authoritative.\" Thus query\n[wildflowers] might return pages that discuss wildflowers in\nspecific U.S. states (and not general information about wildflowers\n). In fact, as of the writing of this paper, the first\n10 results that Google provides for this query include 5\npages each of which discusses wildflowers in only one U.S.\n325\nstate (e.g., \"Texas Wildflowers\"). Similarly, the top 10 results\nthat Google returns for query [houses for sale] include\nreal estate pages for Tuscany, United Kingdom, and New\nZealand.\nThese pages are likely to be irrelevant to, say,\nsomebody interested in San Francisco real estate who types\nsuch an underspecified query.\nDeciding whether a query posed by a regular search engine\nuser should produce mostly local or global pages is an important\nand challenging problem, because queries are often\nambiguous or underspecify the information they are after,\nas in the examples above. By identifying that, say, query\n[wildflowers] is likely after \"global\" information, a search\nengine could rank the results for this query so that state-specific\npages do not appear among the top matches. By\nidentifying that, say, query [houses for sale] is likely after\n\"local\" information, a search engine could filter out pages\nwhose geographical locality is not appropriate for the user\nwho issued the query. Note that deciding which location\nis of interest to a user who wrote an underspecified query\nsuch as [houses for sale] is an orthogonal, important issue\nthat we do not address in this paper. Our focus is on identifying\nthat such a query is after \"local\" pages in nature,\nand should therefore be treated differently by a search engine\nthan queries that are after \"global\" pages. 
By knowing\nthat a user query is after local information, a search engine\nmight choose to privilege pages whose geographical locality\ncoincides with that of the user's or, alternatively, attempt\nto obtain further input from the user on what location is of\ninterest.\nIn this paper, we first define how to categorize user queries\naccording to their (often implicit) geographical locality. We\nthen introduce several alternatives for automatically and efficiently\nclassifying queries according to their locality, using\na variety of state-of-the-art machine learning tools. We report\na thorough evaluation of our classifiers using a large\nsample of queries from a real web search engine query log.\nFinally, we discuss how our query categorization approach\ncan help improve query result quality. The specific contributions\nof this paper are as follows:\nA discussion on how to categorize user queries according\nto their geographical locality, based on a careful\nanalysis of a large query log from the Excite web site\n(Section 3).\nA feature representation for queries; we derive the feature\nrepresentation of a query from the results produced\nfor the query by a web search engine such as\nGoogle (Section 4.1).\nA variety of automatic query classification strategies\nthat use our feature representation for queries (Section\n4.2).\nA large-scale experimental evaluation of our strategies\nover real search engine queries (Section 5).\nPreliminary query reformulation and page re-ranking\nstrategies that exploit our query classification techniques\nto improve query result quality (Section 6).\nRELATED WORK\nTraditional information-retrieval research has studied how\nto best answer keyword-based queries over collections of text\ndocuments [18].\nThese collections are typically assumed\nto be relatively uniform in terms of, say, their quality and\nscope. With the advent of the web, researchers are studying\nother \"dimensions\" to the data that help separate useful resources\nfrom less-useful ones in an extremely heterogeneous\nenvironment like the web. Notably, the Google search engine\n[4] and the HITS algorithm [7, 13] estimate the \"impor-tance\"\nof web pages by analyzing the hyperlinks that point\nto them, thus capturing an additional dimension to the web\ndata, namely how important or authoritative the pages are.\nDing et al. [9] extract yet another crucial dimension of the\nweb data, namely the geographical scope of web pages. For\nexample, the Stanford Daily newspaper has a geographical\nscope that consists of the city of Palo Alto (where Stanford\nUniversity is located), while the New York Times newspaper\nhas a geographical scope that includes the entire U.S.\nTo compute the geographical scope of a web page, Ding et\nal. propose two complementary strategies: a technique based\non the geographical distribution of HTML links to the page,\nand a technique based on the distribution of geographical\nreferences in the text of the page. Ding et al. report on\na search-engine prototype that simply filters out from the\nresults for a user query any pages not in the geographical\nscope of the user. This technique does not attempt to determine\nwhether a query is best answered with \"global\" or\n\"local\" pages, which is the focus of our paper. Ding et al.\nbuilt on the work by Buyukkokten et al. 
[6], who discussed\nhow to map a web site (e.g., http://www-db.stanford.edu)\nto a geographical location (e.g., Palo Alto) and presented a\ntool to display the geographical origin of the HTML links\nto a given web page. This tool then helps visualize the geographical\nscope of web pages [6].\nA few commercial web sites manually classify web resources\nby their location, or keep directory information that\nlists where each company or web site is located. The North-ernLight\nsearch engine\n1\nextracts addresses from web pages,\nletting users narrow their searches to specific geographical\nregions (e.g., to pages \"originated\" within a five-mile radius\nof a given zip code). Users benefit from this information\nbecause they can further filter their query results. McCurley\n[14] presented a variety of approaches for recognizing\ngeographical references on web pages, together with a nav-igational\ntool to browse pages by geographical proximity\nand their spatial context. (Please refer to [16] for additional\nreferences.) None of these techniques addresses our focus\nproblem in this paper: automatically determining the geographical\nlocality associated with a given, unmodified search\nengine query.\nDEFINING GEOGRAPHICAL LOCALITY\nAs discussed above, queries posed to a web search engine\ncan be regarded as local, if their best matches are likely to\nbe \"local\" pages, or as global, if their best matches are likely\nto be \"global\" pages. In an attempt to make this distinction\nmore concrete, we now discuss several examples of local and\nglobal queries.\nGlobal queries often do not include a location name, as\nis the case for query [Perl scripting]. A user issuing this\nquery is probably after tutorials about the Perl language,\nand hence pages on the topic with a restricted geographi-1\nhttp://www.northernlight.com/\n326\ncal scope are less desirable than global pages. Other global\nqueries do not mention a location explicitly either, but are\ntopically associated with one particular location. An example\nof such a query is [Elgin marbles], which is topically associated\nwith the city of Athens. We consider these queries\nas global, since their best matches are broad, global pages,\nnot localized pages with a limited geographical scope. In-terestingly\n, global queries sometimes do include a location\nname. For example, a query might be just a location name\n(e.g., [Galapagos Islands]) or a request for concrete information\nabout a location (e.g., [Boston area codes]). General\nresources about the location (e.g., tourist guides) are\narguably to be preferred for such queries, which are hence\nregarded as global. Other global queries include locations\nthat are strongly associated topic-wise with the rest of the\nquery. Query [Woody Allen NYC] is an example of such a\nquery. The location mentioned in this query (i.e., \"NYC,\"\nfor \"New York City\") is not used to restrict query results to\npages of interest to New York residents, but rather expresses\na topic specification. Query [Ansel Adams Yosemite] is another\nexample: photographer Ansel Adams took a famous\nseries of photographs in Yosemite.\nLocal queries often include a location name, as is the case\nfor query [Wisconsin Christmas tree producers association].\nThe location mentioned in this query (i.e., \"Wisconsin\") is\nused to \"localize\" the query results. Query [houses for sale\nNew York City] is a related example. Other local queries\ndo not include a location name, but still implicitly request\n\"localized\" results. 
Query [houses for sale] is an example of\nsuch a query. These queries tend to be underspecified, but\nare still asked by (presumably naive) search engine users.\nWe conducted a thorough examination of a large number\n(over 1,200) of real search engine queries. Most queries\nthat we encountered can be cleanly categorized as being either\nglobal or local. However, other queries are inherently\nambiguous, and their correct category is impossible to determine\nwithout further information on the user intent behind\nthem. For example, query [New York pizza] could be con-strued\nas a local query if it is, say, after pizza delivery web\nsites for the New York area. In contrast, the same query\ncould be regarded as a global query if the user who issues it\nwants to learn about the characteristics of New York-style\npizza.\nUSING CLASSIFIERS TO DETERMINE LOCALITY\nWe earlier established that queries are associated with local\nor global status, which influences the kind of results that\nare desirable. Since current search engines do not directly\ntake into account geographical information, for certain types\nof queries they produce a large number of on-topic but un-wanted\nresults, as in the [houses for sale] example discussed\nearlier. In this section, we discuss automatic methods that\ncan determine, given a query, whether the query is a local\nor global one. To build the two-class classifier, we experimented\nwith several state-of-the-art classification techniques\n, using widely available implementations for each. We\ndescribe below the features used in the classification, how we\nextract them from web pages, and the classifiers with which\nwe experimented.\n4.1\nClassification Features\nWeb queries, which we treat in this paper as ordered bags\nof words with no other structure, are typically fairly short.\nIn the collection of 2,477,283 real queries that we used in\nour experiments (Section 5.1), 84.9% were five words long\nor shorter. Because few words are available per query, basing\nthe classification directly on the words in the query may\nlead to severe sparse data problems. Even more importantly,\nsome of the characteristics that make a query local or global\nare not directly observable in the query itself, but rather in\nthe results returned. For example, a query that returns results\nthat contain few references to geographical locations is\nlikely to be global, while a query that returns results spread\nuniformly over many locations without including a significant\npercentage of results with no locations is likely to be\nlocal.\nFor these reasons, we base our classification on a sample\nof results actually returned for a given query rather than the\nwords in the query itself. By observing distributional characteristics\nin the unmodified results, the classifier can infer\nthe type of the query (global or local) so that the results can\nbe appropriately filtered or re-ordered, or the query modified\n. In a way the approach is similar in spirit to query\nexpansion techniques that rely on pseudo-relevance feedback\n[5].\nIn our experiments, we use Google (via the Google\nAPI\n2\n) to obtain the top 50 web pages that match the query.\nFor simplicity, we limited our search to HTML pages, skipping\nover non-HTML documents. We chose Google because\nit represents state-of-the-art web search technology and offers\na published interface for submitting large numbers of\nqueries.\nWe represent the web pages returned by Google as text\ndocuments. 
This conversion is achieved by using the\nlynx\nHTML browser with the -dump option. We base our classification\nfeatures on measures of frequency and dispersion of\nlocation names in these text files. For this purpose, we have\nconstructed a database of 1,605 location names by concatenating\nlists of all country names\n3\n, of the capitals of these\ncountries\n4\n, of the fifty U.S. states, and of all cities in the\nUnited States with more than 25,000 people\n5\n. We then compare\nthe words in each text document with the database of\nlocation names, and output any matching entries and their\ncount per document. This matching is case insensitive, because\nwe found capitalization of location names in web pages\nto be erratic. Note that we do not attempt to disambiguate\nwords that match location names but also have other senses\n(e.g., \"China\"), as this is a hard problem in natural language\nanalysis; instead, we count such words as locations. An alternative\napproach that would detect and disambiguate location\nnames would be to use a named-entity tagger. We\nexperimented with a well-known third-party named-entity\ntagger, but we encountered a very high error rate because\nof the noise often introduced in web pages.\nOur classification features combine these location counts\nin various ways. For each query, we measure the average\n2\nhttp://www.google.com/apis\n3\nObtained from the United Nations, http://www.un.org/\nOverview/unmember.html.\n4\nObtained from the CIA World Factbook, http://www.\ncapitals.com/.\n5\nObtained from the U.S. Census Bureau (2000 census\nfigures), http://www.census.gov/prod/2002pubs/00ccdb/\ncc00_tabC1.pdf.\n327\n(per returned web page) number of location words in the retrieved\nresults. We count the average frequency of location\nwords at different levels of detail (country, state, city), as\nwell as the average of the aggregate total for all locations.\nWe obtain these frequencies for both the total count (tokens)\nand the unique location words in each page (types), as it is\npossible that a few locations would be repeated many times\nacross the results, indicating a global query, or that many locations\nwould be repeated few times each, indicating a local\nquery. We also consider the total number of unique locations\nacross all the returned documents taken together, divided by\nthe number of retrieved documents. For the average token\nfrequencies of city, state, and country locations we also calculate\nthe minimum and maximum across the set of returned\nweb pages. To account for the hierarchical nature of location\ninformation, we calculate an alternative frequency for states\nwhere we include in the count for each state the counts for\nall cities from that state that were found in that text; this\nallows us to group together location information for cities\nin the same state. We also include some distributional measures\n, namely the fraction of the pages that include at least\none location of any kind, the percentage of words that match\nlocations across all pages, and the standard deviation of the\ntotal per page location count. Finally, we add to our list\nof features the total number of words in all of the returned\ndocuments, to explore any effect the local/global distinction\nmay have on the size of the returned documents. 
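The authors do not provide code for this feature-extraction step; the following is a minimal Python sketch under stated assumptions. Here `location_db` is a hypothetical dictionary mapping lowercase location names to their level of detail (city, state, or country), and `page_texts` is the list of lynx-dumped texts of the retrieved pages; only a representative subset of the 20 features is computed.

```python
import re
import statistics

# Assumed inputs (hypothetical, not from the paper's implementation):
#   location_db: dict mapping a lowercase location name -> "city" | "state" | "country"
#   page_texts:  list of strings, one per retrieved result page (lynx -dump output)

def page_location_counts(text, location_db):
    """Count location-name tokens and distinct types in one page, per level of detail."""
    words = re.findall(r"[a-z]+", text.lower())   # case-insensitive, single words only
    tokens = {"city": 0, "state": 0, "country": 0}
    types = {"city": set(), "state": set(), "country": set()}
    for w in words:
        level = location_db.get(w)
        if level is not None:
            tokens[level] += 1
            types[level].add(w)
    return tokens, types

def query_features(page_texts, location_db):
    """Aggregate per-query features from the per-page counts (a subset of the 20)."""
    if not page_texts:
        return {}
    n = len(page_texts)
    per_page_totals, all_types = [], set()
    token_counts = {"city": [], "state": [], "country": []}
    for text in page_texts:
        tokens, types = page_location_counts(text, location_db)
        for level in token_counts:
            token_counts[level].append(tokens[level])
            all_types |= types[level]
        per_page_totals.append(sum(tokens.values()))
    feats = {}
    for level, counts in token_counts.items():
        feats[f"avg_{level}_tokens"] = sum(counts) / n
        feats[f"min_{level}_tokens"] = min(counts)
        feats[f"max_{level}_tokens"] = max(counts)
    feats["avg_total_tokens"] = sum(per_page_totals) / n
    feats["unique_locations_per_page"] = len(all_types) / n
    feats["frac_pages_with_location"] = sum(1 for t in per_page_totals if t > 0) / n
    feats["stdev_total_tokens"] = statistics.pstdev(per_page_totals)
    feats["total_words"] = sum(len(t.split()) for t in page_texts)
    return feats
```

Multi-word names (e.g., "New York") and the state-level roll-up of city counts described above would need additional handling beyond this sketch.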
These calculations provide 20 distinct features that are passed on to the classifier. (Studying the effect on classification accuracy of a richer feature set, e.g., one that also includes all words on the result pages, is the subject of interesting future work.) The core data needed to produce these 20 query features (i.e., the locations mentioned in each web page) could be efficiently computed by a search engine such as Google at page-indexing time. The final feature computation could then be performed quickly at query time using this core data.

4.2 Classification Methods

We initially trained a classifier using Ripper [8], which constructs a rule-based classifier in an incremental manner. The algorithm creates an initial set of very specific rules based on the data, similar to the way in which decision trees are generated. The rules are then pruned iteratively to eliminate those that do not seem to be valid for a large enough subset of the training data, so as to prevent overfitting.

Although Ripper provides a robust classifier with high accuracy and transparency (a human can easily examine the produced rules), it outputs binary "local"/"global" decisions. In many cases, it is preferable to obtain a measure of confidence in the result or an estimate of the probability that the assigned class is the correct one. To add this capability to our classifier, we experimented with logistic regression [19]. Logistic, or log-linear, regression models a binary output variable (the class) as a function of a weighted sum of several input variables (the classification features). Conceptually, a linear predictor η is first fitted over the training data in a manner identical to regular regression analysis, i.e.,

η = w_0 + Σ_{i=1}^{k} w_i · F_i,

where F_i is the i-th feature and w_i is the weight assigned to that feature during training. Subsequently, η is transformed into the final response C via the logistic transformation

C = e^η / (1 + e^η),

which guarantees that C lies between 0 and 1. Each endpoint of the interval (0, 1) is associated with one of the classes, and C gives the probability that the correct class is the one associated with "1". In practice, the calculations are not performed as a separate regression and transformation, but rather as a series of successive regressions of transformed variables via the iteratively reweighted least squares algorithm [1]; this is because the modeled distribution is binomial rather than normal, and hence the variance depends on the mean (see [19] for the technical details). We used the implementation of log-linear regression provided in the R statistical package (http://www.r-project.org/).

Another desideratum for our classifier is its ability to support different costs for the two possible kinds of errors (misclassifying local queries versus misclassifying global queries). Which kind of error is more important may vary across settings; for our search modification application, we consider the misclassification of global queries as local ones the more serious error. This is because during our subsequent modification of the returned results (Section 6), we reorder the results for some of the queries that we consider global, but we modify the original queries for some of the queries classified as local, returning potentially very different results. Consequently, the results can change more significantly for a query classified as local, and the potential for error is higher when a global query is labeled local than the other way around. Both Ripper and log-linear regression can incorporate different costs for each type of error.
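For illustration only (the authors' implementation used the R package, not Python), a minimal sketch of the linear predictor and logistic transformation above; the weight and feature values are hypothetical:

```python
import math

def linear_predictor(weights, bias, features):
    """eta = w_0 + sum_i w_i * F_i"""
    return bias + sum(w * f for w, f in zip(weights, features))

def logistic(eta):
    """C = e^eta / (1 + e^eta): probability of the class coded as 1 (here: 'local')."""
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical fitted weights for three of the features (illustrative values only):
weights = [0.9, 0.4, -0.2]   # e.g., avg unique cities, states, countries per page
bias = -2.0                  # w_0
features = [3.1, 1.2, 0.5]   # feature values for one query

p_local = logistic(linear_predictor(weights, bias, features))
print(f"P(local) = {p_local:.3f}")  # classify as local if above a chosen threshold
```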
We experimented with a third classification approach that also supports this feature: Support Vector Machines (SVMs) [2], which have been found quite effective for text matching problems [11]. SVM classifiers conceptually map the original feature measurements to points in a high-dimensional space that facilitates the separation between the two classes better than the original representation. While the transformation between the original and the high-dimensional space may be complex, it need not be carried out explicitly. Instead, it is sufficient to calculate a kernel function that involves only dot products between the transformed data points and can be computed directly in the original feature space. We report experiments with two of the most common kernel functions: a linear kernel,

K(x, y) = x · y + 1,

and a Gaussian (radial basis function) kernel,

K(x, y) = e^(−‖x − y‖² / (2σ²)),

where σ is a parameter (representing the standard deviation of the underlying distribution). This latter kernel has been recommended for text matching tasks [10]. Regardless of the choice of kernel, determining the optimal classifier is equivalent to determining the hyperplane that maximizes the total distance between itself and representative transformed data points (the support vectors). Finding the optimal classifier therefore becomes a constrained quadratic optimization problem. In our experiments, we use the SVM-Light implementation [12] (see footnote 9).

Table 1: Distribution of global and local queries in our training, development, and test sets.
Set           Original number of queries   Number of appropriate queries   Global        Local
Training      595                          439                             368 (83.8%)   71 (16.2%)
Development   199                          148                             125 (84.5%)   23 (15.5%)
Test          497                          379                             334 (88.1%)   45 (11.9%)

In many binary classification tasks, one of the two classes predominates, and thus trained classifiers tend to favor that class in the absence of strong evidence to the contrary. This certainly applies to our task; as we show in Section 5.1, 83–89% of web queries are global. Weiss and Provost [21] showed that this imbalance can lead to inferior classifier performance on the test set, and that the problem can be addressed through oversampling of the rarer class in the training data. Their method examines different oversampling rates by constructing artificial training sets in which the smaller class is randomly oversampled to achieve a specific ratio between samples from the two classes. For each such sampling ratio, a classifier is trained, which assigns a score to each object indicating the strength of evidence for one of the classes. By fixing a specific strength threshold, we divide the classifier output into the two classes. Further, by varying this threshold (see footnote 10) we can obtain an error-rate curve for each class as a function of the threshold. The entire process results in a Receiver Operating Characteristic (ROC) curve [3] for each sampling ratio. Specific points on the curve that optimize the desired combination of error rates can then be selected, and the performance of the classification method across the different thresholds can be measured from the area between the curve and the x-axis.
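The authors used SVM-Light; as a rough, non-authoritative stand-in, the two kernels above and the unequal error costs could be sketched with scikit-learn as follows (the toy data, σ value, and class weights are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC

def linear_kernel(X, Y):
    """K(x, y) = x . y + 1, the linear kernel quoted above."""
    return X @ Y.T + 1.0

def gaussian_kernel(X, Y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2)), the Gaussian kernel quoted above."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

# Hypothetical toy data: rows are query feature vectors, labels 1 = local, 0 = global.
X_train = np.random.rand(40, 20)                     # 20 features, as in the paper
y_train = np.array([1] * 6 + [0] * 34)               # imbalanced toward the global class

# class_weight mimics unequal error costs: false "local" assignments are discouraged
# by up-weighting the global class (a rough stand-in for SVM-Light's cost option).
clf_lin = SVC(kernel=linear_kernel, class_weight={0: 2.0, 1: 1.0})
clf_rbf = SVC(kernel=lambda X, Y: gaussian_kernel(X, Y, sigma=2.0),
              class_weight={0: 2.0, 1: 1.0})
clf_lin.fit(X_train, y_train)
clf_rbf.fit(X_train, y_train)
```

Note that scikit-learn's class_weight penalizes training errors on the up-weighted class, which only approximates the cost-sensitive training offered by SVM-Light.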
Weiss and Provost\nuse the C4.5 classifier [17], a decision tree classifier with\nadditional pruning of nodes to avoid overfitting. We use a\nsoftware package provided by them (and consequently also\nthe C4.5 algorithm) to explore the effect that different ratios\nof global to local queries during training have on classifier\nperformance.\nEXPERIMENTAL RESULTS\nWe now describe the data (Section 5.1) and metrics (Section\n5.2) that we use for the experimental evaluation of the\nquery classifiers (Section 5.3).\n5.1\nData\nFor the experiments reported in this paper, we used a sample\nof real queries submitted to the Excite search engine.\n11\nWe had access to a portion of the December 1999 query log of\nExcite, containing 2,477,283 queries. We randomly selected\ninitial sets of queries for training, development (tuning the\nparameters of the classifiers we train), and testing purposes\nby selecting each of these queries for inclusion in each set\nwith a constant (very small) probability. These probabilities\nwere set to 400/2,477,283, 400/2,477,283, and 500/2,477,283\n9\nAvailable from http://svmlight.joachims.org/.\n10\nSetting the threshold to each extreme assigns all or none\nof the data points to that category.\n11\nhttp://www.excite.com/\nfor the three sets, respectively. Subsequently we combined\nthe training and development set, and reassigned the queries\nin the combined set so that three-fourths were placed in the\ntraining set and one-fourth in the development; we kept the\ntest set separate. This process generated 595, 199, and 497\nqueries in the initial versions of the training, development,\nand test sets. We further eliminated queries that passed any\nof the following tests:\nUpon examination, they appeared likely to produce\nresults with explicit sexual content.\nWhen supplied to Google--and after filtering out any\nnon-HTML results and any broken links--the queries\nproduced fewer than 40 files. This constraint is meant\nto ensure that we are not including in our experimental\ndata queries that contain misspellings or deal with\nextremely esoteric subjects, for which not enough material\nfor determining locality would be available.\nThey had already been included in an earlier set (we\nconstructed first the training set, then the development\nset, and finally the test set). Since multiple people\nmay issue the same query, duplicates can be found\nin the log. Although our algorithms take no special advantage\nof duplicates, we eliminated them to avoid any\nbias. Taking into account variations of upper/lower\ncase and spacing between queries (but not word order\n), this constraint removed 6 queries from the test\nset.\nThese filtering steps left us with 439 queries in the training\nset, 148 queries in the development set, and 379 queries in\nthe test set.\nWe then classified the queries using the criteria laid out\nin Section 3. Table 1 shows the size of the three sets before\nand after filtering, and the distribution of local and global\nqueries in each set. We observe that, in general, most queries\n(8389%) tend to be global.\n5.2\nEvaluation Metrics\nWe consider a number of evaluation metrics to rate the\nperformance of the various classifiers and their configurations\n. Since a large majority of the queries are global (85.6%\nin the training, development, and test sets combined), overall\nclassification accuracy (i.e., the percentage of correct\nclassification decisions) may not be the most appropriate\nmeasure. 
This is because a baseline method that always suggests the most populous class ("global") will have an accuracy equal to the proportion of global queries in the evaluated set. Yet such a classifier offers no improvement during search, since it provides no new information. The situation is analogous to applications in information retrieval or medicine where very few of the samples should be labeled positive (e.g., a test for a disease that affects only 0.1% of patients). While we do not want overall accuracy to decrease from the baseline (at least not significantly), we will utilize measures that capture the classifier's improved ability to detect the rarer class relative to the baseline method.

Two standard such metrics are precision and recall for the local queries. Precision is the number of items correctly assigned to the class divided by the total number of items assigned to the class. Recall is the number of items correctly assigned to the class divided by the total number of items in the class. Note that the baseline method achieves precision of 100% but recall of 0%. For a given classifier with adjustable parameters, precision can often be increased at the expense of recall, and vice versa; therefore we also compute the F-measure [20] (with equal weights) to combine precision and recall into a single number:

F-measure = (2 · Precision · Recall) / (Precision + Recall).

Finally, we argued earlier that one kind of misclassification error may be assigned a higher cost. We can then calculate the average cost [15],

Average cost = Σ_{X ∈ {Global, Local}} Cost(X) · Rate(X),

where Cost(X) is the cost of wrong X classifications and Rate(X) is the rate of wrong X classifications. Average cost is the measure to minimize from a decision-theory perspective. The rate of wrong classifications for a class is the number of data points misclassified into that class divided by the total number of classification decisions, and the costs for each misclassification error are predetermined parameters. If both costs are set to 1, the average cost becomes equal to the total error rate, i.e., one minus accuracy. In our experiments, we report the average cost considering the mislabeling of global queries as local twice as important as the mislabeling of local queries, for the reasons explained in the previous section.

5.3 Results

We trained the classifiers of Section 4.2 on the 439 queries in our training set. Ripper and the regression model were trained on that training set without modification. For C4.5 and SVMs, we explored the effect that different proportions of local queries in the training set have on overall performance.
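Before turning to the results, here is a minimal sketch of the Section 5.2 metrics (our illustration, not the authors' code); labels are the strings 'local' and 'global', and the default costs encode the 2:1 weighting described above:

```python
def evaluation_metrics(y_true, y_pred, cost_false_local=2.0, cost_false_global=1.0):
    """y_true / y_pred are equal-length lists of 'local' / 'global' labels."""
    n = len(y_true)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == "local" and p == "local")
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == "global" and p == "local")
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == "local" and p == "global")
    tn = n - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 1.0   # baseline convention from the text
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    # Rate(X) = wrong X classifications / total decisions; false "local" weighted more.
    average_cost = cost_false_local * (fp / n) + cost_false_global * (fn / n)
    accuracy = (tp + tn) / n
    return {"precision": precision, "recall": recall, "f_measure": f_measure,
            "average_cost": average_cost, "accuracy": accuracy}
```

Under these conventions the always-global baseline gets precision 1.0, recall 0.0, and an average cost equal to the fraction of local queries in the evaluated set, consistent with the baseline row of Table 2 below.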
For that purpose, we used our development set to\nevaluate the performance effects of different local query proportions\n, and select the optimal classifier within each family.\nFor the C4.5-based classifier, we used the C4.4 software\nprovided by Foster Provost and Claudia Perlich to explore\nthe effect of different proportions of local and global queries.\nWe created training sets by randomly oversampling or un-dersampling\nthe minority (local) class as needed, in increments\nof 10%.\nFor any given proportion of local queries\nbetween 10% and 90%, we started from our training set,\nmodified it according to the above sampling method to have\nthe desired proportion of local queries, trained the corresponding\nC4.5 classifier, and evaluated its performance on\nour development set. The natural proportion of the local\nclass in the unmodified training data is also included as one\nof the proportions used to build and evaluate a classifier. In\nthis manner, we obtain curves for the various metrics as the\nproportion of local queries varies (Figure 1). We observe\nFigure 1:\nEvaluation metrics for C4.5 classifiers\ntrained on different proportions of local queries.\nthat the highest value for precision and F-measure, and the\nlowest value for the average cost, are obtained when the\nclassifier is trained with a significantly amplified proportion\nof local queries (80%). Further, running C4.5 with 80% local\nqueries also produced the largest area under the ROC\ncurve obtained when different precision/recall tradeoffs in\nthe development set are explored. On the basis of this information\n, we selected the proportion of 80% local queries\nas the optimal configuration for C4.5. We refer to that configuration\nas C4.5(80), and this is the version of C4.5 that\nwe evaluated on the test set.\nUsing our own implementation for constructing extended\ntraining sets with a given proportion of local queries, we\nperformed similar experiments for Support Vector Machines\nwith linear and Gaussian kernels. For these classifiers, we\nalso experimented with versions trained with equal error\ncosts for the two kinds of classification errors, and with versions\nwhere, during training, a false local assignment counts\nfor twice as much as a false global assignment. We found\nthat the optimal proportion of local queries is closer to the\nnatural proportion with SVMs compared to C4.5 classifiers;\nthe proportions chosen from our development set were 50%\nfor the linear SVM classifier with equal error costs, 30% for\nthe linear SVM classifier with unequal error costs, 30% for\nthe Gaussian SVM classifier with equal error costs, and 20%\nfor the Gaussian SVM classifier with unequal error costs.\nWe denote the optimal classifiers from these four families\nas SVM-Linear-E(50), SVM-Linear-U(30), SVM-Gaussian-E\n(30), and SVM-Gaussian-U(20), respectively.\nFigure 2\nshows the curve obtained for the SVM-Gaussian-U family\nof classifiers.\nHaving determined the best value for the proportion of\nlocal queries for C4.5 and SVM-based classifiers, we evaluate\nthese classifiers, as well as the classifiers obtained from\nRipper and log-linear regression, on our test set.\n12\nTable 2\nshows the values of the evaluation metrics obtained on the\nunseen test set. 
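For concreteness, the proportion-selection loop just described might look like the following sketch; `train_classifier` and `evaluate` are hypothetical stand-ins for the C4.5/SVM training packages and the development-set evaluation, and are not the authors' code:

```python
import random

def resample_to_proportion(examples, target_local_fraction):
    """Randomly over/undersample the minority ('local') class to a target proportion.
    Each example is a (feature_vector, label) pair with label 'local' or 'global'."""
    local = [e for e in examples if e[1] == "local"]
    glob = [e for e in examples if e[1] == "global"]
    # keep all global examples; resample local ones to hit the target proportion
    n_local = round(target_local_fraction * len(glob) / (1 - target_local_fraction))
    if n_local <= len(local):
        resampled_local = random.sample(local, n_local)     # undersample
    else:
        resampled_local = random.choices(local, k=n_local)  # oversample with replacement
    return glob + resampled_local

def select_best_proportion(train_set, dev_set, train_classifier, evaluate):
    """Sweep 10%..90% local proportions and keep the best-scoring configuration."""
    best = None
    for pct in range(10, 100, 10):
        model = train_classifier(resample_to_proportion(train_set, pct / 100))
        score = evaluate(model, dev_set)      # e.g., F-measure, or negated average cost
        if best is None or score > best[1]:
            best = (pct, score, model)
    return best
```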
The classifier using a linear kernel SVM with unequal error costs achieves the highest F-measure, while the log-linear classifier achieves the lowest average classification cost. (Footnote 12: We also experimented with variable error costs for the Ripper classifier, using the same 2:1 error cost correspondence, but the resulting classifier was identical to the Ripper classifier obtained with equal error costs.)

Table 2: Evaluation metrics on the test set of selected classifiers optimized over the development set.
Classifier                 Recall    Precision   F-Measure   Average Cost   Accuracy
Ripper                     53.33%    47.06%      50.00%      0.1979         87.34%
Log-linear Regression      37.78%    58.62%      45.95%      0.1372         89.45%
C4.5(80)                   40.00%    32.73%      36.00%      0.2665         83.11%
SVM-Linear-E(50)           48.89%    48.89%      48.89%      0.1821         87.86%
SVM-Linear-U(30)           48.89%    53.66%      51.16%      0.1609         88.92%
SVM-Gaussian-E(30)         37.78%    53.13%      44.16%      0.1530         88.65%
SVM-Gaussian-U(20)         37.78%    53.13%      44.16%      0.1530         88.65%
Baseline (always global)   0.00%     100.00%     0.00%       0.1187         88.13%

[Figure 2: Evaluation metrics for Support Vector Machines with Gaussian kernel and false local assignments weighted twice as much as false global assignments, trained on different proportions of local queries.]

As expected, the SVM classifiers that were trained with unequal error costs achieve the same or lower average cost (which also utilizes the same unequal error costs) compared to their counterparts trained with equal error costs. Overall, Ripper, log-linear regression, and the two SVM classifiers with linear kernels achieve the highest performance, with small differences between them. They are followed by the two SVM classifiers with a Gaussian kernel function, while C4.5 trails significantly behind the other classifiers.

The features used for classification vary considerably from classifier to classifier (most classifiers automatically ignore some of the provided features, to avoid overfitting). Ripper achieves one of the best classification performances using only one simple rule, based only on the average number of city locations per returned web page: if that number exceeds a threshold, the query is classified as local, otherwise as global. On the other hand, the C4.5 and SVM classifiers utilize all or almost all the features. The log-linear regression classifier falls in between these two extremes, and primarily utilizes the average numbers of unique city, state, and country names per retrieved page, as well as the total number of unique locations per page (4 features).

For concreteness, and to conclude our discussion, Table 3 shows the performance of our classifiers on a few representative examples of local and global queries.

IMPROVING SEARCH RESULTS

The core of this paper is on classifying queries as either local or global. In this section, we present preliminary ideas on how to exploit this classification to improve the quality of the query results. Further exploration of these and other directions is the subject of interesting future work.

Consider a query that has been classified as local using the techniques of Section 4. By definition, this query is best answered with "localized" pages. We can easily determine if the query includes any location name by using the dictionary-based approach of Section 4.1.
If no locations are\npresent in the query (e.g., as in query [houses for sale]), in\nthe absence of further information we can attempt to \"localize\"\nthe query results to the geographical area of the user\nissuing the query, for which we can rely on registration information\nprovided by the user, for example. Consequently,\nwe can simply expand the query by appending the user's\nlocation to it, to turn, say, the query [houses for sale] into\n[houses for sale San Francisco] for a San Francisco resident.\nAlternatively, a search engine might attempt to obtain additional\ninformation from the user to further localize the query\nas appropriate. For example, the query [houses for sale] can\nthen be transformed into [houses for sale New York City] for\na San Francisco resident who is moving to New York City. In\neither case, the expanded query will tend to produce much\nmore focused and localized results than the original query\ndoes. As of the writing of this paper, all of the top-10 results\nreturned by Google for query [houses for sale San Francisco]\nare results of relevance to a person interested in Bay Area\nreal estate. In contrast, most of the results for the original\nquery, [houses for sale], are irrelevant to such a person, as\ndiscussed in the Introduction. An alternative, more expensive\nstrategy for handling these queries is to compute and\nexploit the geographical scope of web pages as defined in [9].\nThen, pages with a geographical scope that includes the location\nof the user issuing the query would be preferred over\nother pages. In contrast, a local query in which locations are\nmentioned is likely to return pages with the right locality,\nmaking any further modification of the query or reranking\nof the results unnecessary.\nConsider now a query that has been classified as global\nusing the techniques of Section 4. By definition, this query\nis best answered with \"broad\" pages. Rather than attempting\nto modify a global query so that it returns \"broad\"\npages, we can follow a result reranking strategy to privilege\nthese pages over more localized ones. One possible reranking\nstrategy is to reorder the results from, say, Google for\n331\nClass\nQuery\nClassifier\nRipper Regression C4.5(80) SVM-LE SVM-LU SVM-GE SVM-GU\nGlobal\n[Perl scripting]\nGlobal\n-0.9381\nGlobal\n-1.9163\n-1.7882\n-1.0627\n-1.0590\n[world news]\nGlobal\n-0.8306\nLocal\n-0.5183\n-0.3114\n-0.4166\n-0.1440\n[wildflowers]\nGlobal\n-0.5421\nGlobal\n-0.7267\n-0.8082\n-0.8931\n-0.8144\n[Elgin marbles]\nLocal\n0.4690\nLocal\n0.6426\n0.6654\n0.0378\n0.1016\n[Galapagos Islands]\nGlobal\n-0.7941\nGlobal\n-1.2834\n-1.1998\n-0.9826\n-0.8575\n[Boston zip code]\nLocal\n-0.0243\nLocal\n0.6874\n0.6152\n0.0408\n0.0797\n[Woody Allen\nNYC]\nGlobal\n-0.2226\nGlobal\n-0.3253\n-0.3541\n-0.6182\n-0.5272\nLocal\n[houses for sale]\nGlobal\n-0.6759\nGlobal\n-1.0769\n-1.0962\n-0.9242\n-0.8516\n[Volkswagen clubs]\nLocal\n-0.0933\nGlobal\n1.0844\n0.7917\n0.0562\n0.0837\n[Wisconsin\nChristmas tree\nproducers\nassociation]\nGlobal\n0.1927\nLocal\n-0.1667\n-0.4421\n-0.4461\n-0.3582\n[New York style\npizza delivery]\nGlobal\n-0.0938\nGlobal\n-0.5945\n-0.6809\n-0.5857\n-0.4824\nTable 3: Classification assignments made by different classifiers on several example queries. SVM-LE, SVM-LU\n, SVM-GE, and SVM-GU stand for classifiers SVM-Linear-E(50), SVM-Linear-U(30), SVM-Gaussian-E\n(30), and SVM-Gaussian-U(20), respectively. 
For regression and SVM classifiers, positive numbers indicate\nassignment to the local class, and negative numbers indicate assignment to the global class; the absolute\nmagnitude of the numbers increases as the classifier's confidence in its decision increases.\n(We linearly\ntransformed the regression output from the (0, 1) to the (\n-1, 1) range.) The scale of the numbers is\nconsistent across queries and between all SVM classifiers, but not directly comparable between regression\nclassifiers (bound between\n-1 and 1) and SVM classifiers (unbounded).\nthe unmodified query based on the geographical scope of the\npages as defined in [9]. Thus pages with a broad geographical\nscope (e.g., covering the entire United States) would\nprevail over other pages with a narrower scope. A less expensive\nalternative is to classify the result pages as local\nor global following a procedure similar to that of Section 4\nfor queries.\nSpecifically, we implemented this alternative\nby training C4.5\nRules, a rule-based version of the C4.5\ndecision-tree classifier, with a collection of 140 web pages\ncategorized in the Yahoo! directory. Pages classified under\nindividual states in the \"Regional\" portion of the directory\nwere regarded as local, while pages under general categories\nwere regarded as global. The feature representation for the\npages was analogous to that for the queries in Section 4.1\nbut restricted to features that are meaningful over individual\npages (e.g., total number of locations on a page), as opposed\nto over a collection of pages (e.g., minimum number of locations\nper page in the top-50 result pages for a query). At\nquery time, we reorder the results so as to privilege global\npages over local ones. This is based on the locality classification\nof the pages, which can be precomputed off-line since\nit is query-independent or performed on the fly as we do in\nour prototype implementation. This procedure is efficient,\nand produced promising initial results for a handful of global\nqueries (e.g., [wildflowers]) that we tried.\nOur preliminary approach to query modification is therefore\nas follows: Given a query specified by the user, we supply\nfirst the unmodified query to the search engine and collect\nthe top 50 results.\nWe extract location names from\nthese results\n14\n, and calculate the features of Section 4.1.\nUsing one of the best performing classifiers of Section 4, we\ndetermine if the query is global or local. If it is local and\n14\nAs noted earlier, these names could be cached along with\neach web page at the time of indexing, to increase efficiency.\ncontains at least one location name, nothing is done--the\nresults returned from the unmodified query are presented\nto the user. If the query is local and contains no location,\nwe add the user's location (or, alternatively, request further\ninformation from the user, as discussed), reissue the query\nand present the results. Finally, if the query is global, we\ncalculate the scope of each retrieved web page using part of\nthe location features computed earlier and the C4.5\nRules\nclassifier, and rerank the results so that more global pages\nare higher in the list shown to the user. 
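The preliminary procedure just described might be sketched as follows; the search-engine client, classifier objects, feature extractor, and `user_location` are hypothetical placeholders (the feature extractor could be the `query_features` sketch from Section 4.1), not the authors' prototype:

```python
def handle_query(query, search_engine, query_classifier, page_classifier,
                 feature_extractor, location_db, user_location=None):
    """Preliminary query-modification / reranking strategy described above (a sketch)."""
    pages = search_engine.top_results(query, n=50)            # unmodified query
    features = feature_extractor([p.text for p in pages], location_db)
    label = query_classifier.predict(features)                # 'local' or 'global'

    if label == "local":
        has_location = any(word.lower() in location_db for word in query.split())
        if has_location:
            return pages                                      # results already localized
        if user_location:                                     # e.g., from registration info
            return search_engine.top_results(f"{query} {user_location}", n=50)
        return pages                                          # or prompt the user instead

    # Global query: privilege pages classified as 'global' over 'local' ones.
    scored = [(0 if page_classifier.predict(p) == "global" else 1, i, p)
              for i, p in enumerate(pages)]
    return [p for _, _, p in sorted(scored)]                  # stable: global pages first
```

The reranking step keeps the original order within each group, which mirrors the idea of privileging broad pages without discarding the search engine's own relevance ranking.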
We have built a prototype\nimplementation of this algorithm, using the classifier\nobtained from\nRipper (because of the relative simplicity of\nits rules) for query classification, and Google as the search\nengine.\nCONCLUSION\nWe have described an attribute of queries, locality, that-to\nthe best of our knowledge--has not been explored before\neither in theoretical work or in practical search engines but\ncan significantly affect the appropriateness of the results\nreturned to the user. We defined a categorization scheme\nfor queries based on their geographical locality, and showed\nhow queries can be represented for purposes of determining\nlocality by features based on location names found in the\nresults they produce. Using these features, automatic classifiers\nfor determining locality can be built. We explored\nseveral state-of-the-art classification approaches, and evaluated\ntheir performance on a large set of actual queries. The\nempirical results indicated that for many queries locality can\nbe determined effectively.\nThe bulk of the paper discussed methods for classifying\nqueries according to locality, and empirically established\nthat this is desirable and feasible for many queries.\nWe\nalso presented some first thoughts on possible query refor-332\nmulation and result reranking strategies that utilize locality\ninformation to actually improve the results the user sees.\nAlthough our strategies for query modification and result\nreranking are preliminary, they illustrate a promising family\nof approaches that we plan to investigate in the future\nso that we can exploit the classification of queries based on\ntheir geographical locality in order to improve search result\nquality.\nAcknowledgments\nThis material is based upon work supported in part by the\nNational Science Foundation under Grants No. IIS-97-33880\nand IIS-98-17434. We are grateful to Claudia Perlich and\nFoster Provost for providing us with their adaptation of the\nC4.5 classifier that we used in our experiments. Also, we\nwould like to thank Thorsten Joachims for answering our\nquestions on SVM-Light, and David Parkes for his helpful\ncomments and insight.\n\nREFERENCES\n[1] D. M. Bates and D. G. Watts. Nonlinear Regression\nAnalysis and its Applications. Wiley, New York, 1988.\n[2] B. E. Boser, I. M. Guyon, and V. Vapnik. A training\nalgorithm for optimal margin classifiers. In\nProceedings of the Fifth Annual Workshop on\nComputational Learning Theory, Pittsburgh, 1992.\n[3] A. Bradley. The use of the area under the ROC curve\nin the evaluation of machine learning algorithms.\nPattern Recognition, 30(7):11451159, 1998.\n[4] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual web search engine. In Proceedings of the\nSeventh International World Wide Web Conference\n(WWW7), Apr. 1998.\n[5] C. Buckley, J. Allan, G. Salton, and A. Singhal.\nAutomatic query expansion using SMART: TREC 3.\nIn Proceedings of the Third Text REtrieval Conference\n(TREC-3), pages 6980, April 1995. NIST Special\nPublication 500-225.\n[6] O. Buyukkokten, J. Cho, H. Garcia-Molina,\nL. Gravano, and N. Shivakumar. Exploiting\ngeographical location information of web pages. In\nProceedings of the ACM SIGMOD Workshop on the\nWeb and Databases (WebDB'99), June 1999.\n[7] S. Chakrabarti, B. Dom, P. Raghavan,\nS. Rajagopalan, D. Gibson, and J. Kleinberg.\nAutomatic resource compilation by analyzing\nhyperlink structure and associated text. In\nProceedings of the Seventh International World Wide\nWeb Conference (WWW7), Apr. 
1998.\n[8] W. W. Cohen. Learning trees and rules with\nset-valued functions. In Proceedings of the Thirteenth\nInternational Joint Conference on Artificial\nIntelligence, 1996.\n[9] J. Ding, L. Gravano, and N. Shivakumar. Computing\ngeographical scopes of web resources. In Proceedings of\nthe Twenty-sixth International Conference on Very\nLarge Databases (VLDB'00), 2000.\n[10] G. W. Flake, E. J. Glover, S. Lawrence, and C. L.\nGiles. Extracting query modifications from nonlinear\nSVMs. In Proceedings of the Eleventh International\nWorld-Wide Web Conference, Dec. 2002.\n[11] M. A. Hearst. Trends and controversies: Support\nvector machines. IEEE Intelligent Systems,\n13(4):1828, July 1998.\n[12] T. Joachims. Estimating the generalization of\nperformance of an SVM efficiently. In Proceedings of\nthe Fourteenth International Conference on Machine\nLearning, 2000.\n[13] J. Kleinberg. Authoritative sources in a hyperlinked\nenvironment. In Proceedings of the Ninth Annual\nACM-SIAM Symposium on Discrete Algorithms, pages\n668677, Jan. 1998.\n[14] K. S. McCurley. Geospatial mapping and navigation\nof the web. In Proceedings of the Tenth International\nWorld Wide Web Conference (WWW10), May 2001.\n[15] M. Pazzani, C. Merz, P. Murphy, K. Ali, T. Hume,\nand C. Brunk. Reducing misclassification costs. In\nProceedings of the Eleventh International Conference\non Machine Learning, Sept. 1997.\n[16] R. Purves, A. Ruas, M. Sanderson, M. Sester, M. van\nKreveld, and R. Weibel. Spatial information retrieval\nand geographical ontologies: An overview of the\nSPIRIT project. In Proceedings of the 25th ACM\nInternational Conference on Research and Development\nin Information Retrieval (SIGIR'02), 2002.\n[17] R. J. Quinlan. C4.5: Programs for Machine Learning.\nMorgan Kaufman, 1993.\n[18] G. Salton. Automatic Text Processing: The\ntransformation, analysis, and retrieval of information\nby computer. Addison-Wesley, 1989.\n[19] T. J. Santner and D. E. Duffy. The Statistical Analysis\nof Discrete Data. Springer-Verlag, New York, 1989.\n[20] C. J. van Rijsbergen. Information Retrieval.\nButterworths, London, 2nd edition, 1979.\n[21] G. M. Weiss and F. Provost. The effect of class\ndistribution on classifier learning: An empirical study.\nTechnical Report ML-TR-44, Computer Science\nDepartment, Rutgers University, Aug. 2001.\n333", "keywords": "geographical locality;categorization scheme;query modification;web search;query categorization / query classification;web queries;search engines;global page;local page;information retrieval;search engine;query classification"} {"name": "113", "title": "Information Revelation and Privacy in Online Social Networks", "abstract": "Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. 
We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences.", "fulltext": "EVOLUTION OF ONLINE NETWORKING\nIn recent years online social networking has moved from\nniche phenomenon to mass adoption. Although the concept\ndates back to the 1960s (with University of Illinois Plato\ncomputer-based education tool, see [16]), viral growth and\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nWPES'05,\nNovember 7, 2005, Alexandria, Virginia, USA.\nCopyright 2005 ACM 1-59593-228-3/05/0011 ...\n$\n5.00.\ncommercial interest only arose well after the advent of the\nInternet.\n1\nThe rapid increase in participation in very recent\nyears has been accompanied by a progressive diversification\nand sophistication of purposes and usage patterns across a\nmultitude of different sites. The Social Software Weblog\n2\nnow groups hundreds of social networking sites in nine categories\n, including business, common interests, dating, face-to\n-face facilitation, friends, pets, and photos.\nWhile boundaries are blurred, most online networking\nsites share a core of features: through the site an individual\noffers a \"profile\" - a representation of their sel[ves] (and,\noften, of their own social networks) - to others to peruse,\nwith the intention of contacting or being contacted by others\n, to meet new friends or dates (Friendster,\n3\nOrkut\n4\n), find\nnew jobs (LinkedIn\n5\n), receive or provide recommendations\n(Tribe\n6\n), and much more.\nIt is not unusual for successful social networking sites to\nexperience periods of viral growth with participation expanding\nat rates topping 20% a month. Liu and Maes estimate\nin [18] that \"well over a million self-descriptive personal\nprofiles are available across different web-based social\nnetworks\" in the United States, and Leonard, already in\n2004, reported in [16] that world-wide \"[s]even million people\nhave accounts on Friendster. [...] Two million are registered\nto MySpace. A whopping 16 million are supposed to\nhave registered on Tickle for a chance to take a personality\ntest.\"\nThe success of these sites has attracted the attention of\nthe media (e.g., [23], [3], [16], [4], [26]) and researchers. The\nlatter have often built upon the existing literature on social\nnetwork theory (e.g., [20], [21], [11], [12], [32]) to discuss\nits online incarnations. In particular, [7] discusses issues of\ntrust and intimacy in online networking; [9] and [8] focus\non participants' strategic representation of their selves to\nothers; and [18] focus on harvesting online social network\nprofiles to obtain a distributed recommender system.\nIn this paper, we focus on patterns of personal information\nrevelation and privacy implications associated with online\nnetworking. 
Not only are the participation rates to online\n1\nOne of the first networking sites, SixDegrees.com, was\nlaunched in 1997 but shut down in 2000 after \"struggling\nto find a purpose for [its] concept\" [5].\n2\nHttp://www.socialsoftware.weblogsinc.com/ .\n3\nHttp://www.friendster.com/ .\n4\nHttp://www.orkut.com/ .\n5\nHttp://www.linkedin.com/ .\n6\nHttp://www.tribe.net/ .\n71\nsocial networking staggering among certain demographics;\nso, also, are the amount and type of information participants\nfreely reveal. Category-based representations of a person's\nbroad interests are a recurrent feature across most networking\nsites [18]. Such categories may include indications of a\nperson's literary or entertainment interests, as well as political\nand sexual ones. In addition, personally identified\nor identifiable data (as well as contact information) are often\nprovided, together with intimate portraits of a person's\nsocial or inner life.\nSuch apparent openness to reveal personal information to\nvast networks of loosely defined acquaintances and complete\nstrangers calls for attention. We investigate information revelation\nbehavior in online networking using actual field data\nabout the usage and the inferred privacy preferences of more\nthan 4,000 users of a site catered to college students, the\nFacebook.\n7\nOur results provide a preliminary but detailed\npicture of personal information revelation and privacy concerns\n(or lack thereof) in the wild, rather than as discerned\nthrough surveys and laboratory experiments.\nThe remainder of this paper is organized as follows. We\nfirst elaborate on information revelation issues in online social\nnetworking in Section 2. Next, we present the results\nof our data gathering in Section 3. Then, we discuss their\nimplications in terms of users attitudes and privacy risks in\nSection 4. Finally, we summarize our findings and conclude\nin Section 5.\nINFORMATION REVELATION AND ONLINE SOCIAL NETWORKING\nWhile social networking sites share the basic purpose of\nonline interaction and communication, specific goals and\npatterns of usage vary significantly across different services.\nThe most common model is based on the presentation of the\nparticipant's profile and the visualization of her network of\nrelations to others - such is the case of Friendster. This\nmodel can stretch towards different directions. In match-making\nsites, like Match.com\n8\nor Nerve\n9\nand Salon\n10\nPersonals\n, the profile is critical and the network of relations\nis absent. In diary/online journal sites like LiveJournal,\n11\nprofiles become secondary, networks may or may not be visible\n, while participants' online journal entries take a central\nrole. Online social networking thus can morph into online\nclassified in one direction and blogging in another.\nPatterns of personal information revelation are, therefore,\nquite variable.\nFirst, the pretense of identifiability changes across different\ntypes of sites.\nThe use of real names to (re)present\nan account profile to the rest of the online community may\nbe encouraged (through technical specifications, registration\nrequirements, or social norms) in college websites like the\nFacebook, that aspire to connect participants' profiles to\ntheir public identities. 
The use of real names may be toler-ated\nbut filtered in dating/connecting sites like Friendster,\nthat create a thin shield of weak pseudonymity between the\npublic identity of a person and her online persona by making\nonly the first name of a participant visible to others,\n7\nHttp://www.facebook.com/ .\n8\nHttp://www.match.com/ .\n9\nHttp://personals.nerve.com/ .\n10\nHttp://personals.salon.com/ .\n11\nHttp://www.livejournal.com/ .\nand not her last name. Or, the use of real names and personal\ncontact information could be openly discouraged, as in\npseudonymous-based dating websites like Match.com, that\nattempt to protect the public identity of a person by making\nits linkage to the online persona more difficult. However,\nnotwithstanding the different approaches to identifiability,\nmost sites encourage the publication of personal and identifiable\npersonal photos (such as clear shots of a person's\nface).\nSecond, the type of information revealed or elicited often\norbits around hobbies and interests, but can stride from\nthere in different directions. These include: semi-public information\nsuch as current and previous schools and employers\n(as in Friendster); private information such as drinking\nand drug habits and sexual preferences and orientation (as\nin Nerve Personals); and open-ended entries (as in LiveJournal\n).\nThird, visibility of information is highly variable. In certain\nsites (especially the ostensibly pseudonymous ones) any\nmember may view any other member's profile. On weaker-pseudonym\nsites, access to personal information may be limited\nto participants that are part of the direct or extended\nnetwork of the profile owner. Such visibility tuning controls\nbecome even more refined on sites which make no pretense\nof pseudonymity, like the Facebook.\nAnd yet, across different sites, anecdotal evidence suggests\nthat participants are happy to disclose as much information\nas possible to as many people as possible. It is not unusual\nto find profiles on sites like Friendster or Salon Personals\nthat list their owners' personal email addresses (or link to\ntheir personal websites), in violation of the recommendation\nor requirements of the hosting service itself. In the next sub-section\n, we resort to the theory of social networks to frame\nthe analysis of such behavior, which we then investigate em-pirically\nin Section 3.\n2.1\nSocial Network Theory and Privacy\nThe relation between privacy and a person's social network\nis multi-faceted. In certain occasions we want information\nabout ourselves to be known only by a small circle\nof close friends, and not by strangers. In other instances,\nwe are willing to reveal personal information to anonymous\nstrangers, but not to those who know us better.\nSocial network theorists have discussed the relevance of\nrelations of different depth and strength in a person's social\nnetwork (see [11], [12]) and the importance of so-called\nweak ties in the flow of information across different nodes\nin a network. Network theory has also been used to explore\nhow distant nodes can get interconnected through relatively\nfew random ties (e.g., [20], [21], [32]).\nThe privacy relevance\nof these arguments has recently been highlighted by\nStrahilevitz in [27].\nStrahilevitz has proposed applying formal social network\ntheory as a tool for aiding interpretation of privacy in legal\ncases. 
He suggests basing conclusions regarding privacy \"on\nwhat the parties should have expected to follow the initial\ndisclosure of information by someone other than the defen-dant\"\n(op cit, p. 57). In other words, the consideration\nof how information is expected to flow from node to node\nin somebody's social network should also inform that person's\nexpectations for privacy of information revealed in the\nnetwork.\nHowever, the application of social network theory to the\n72\nstudy of information revelation (and, implicitly, privacy choices)\nin online social networks highlights significant differences between\nthe offline and the online scenarios.\nFirst, offline social networks are made of ties that can only\nbe loosely categorized as weak or strong ties, but in reality\nare extremely diverse in terms of how close and intimate a\nsubject perceives a relation to be. Online social networks,\non the other side, often reduce these nuanced connections\nto simplistic binary relations: \"Friend or not\" [8]. Observing\nonline social networks, Danah Boyd notes that \"there\nis no way to determine what metric was used or what the\nrole or weight of the relationship is. While some people are\nwilling to indicate anyone as Friends, and others stick to a\nconservative definition, most users tend to list anyone who\nthey know and do not actively dislike. This often means\nthat people are indicated as Friends even though the user\ndoes not particularly know or trust the person\" [8] (p. 2).\nSecond, while the number of strong ties that a person\nmay maintain on a social networking site may not be significantly\nincreased by online networking technology, Donath\nand Boyd note that \"the number of weak ties one can\nform and maintain may be able to increase substantially,\nbecause the type of communication that can be done more\ncheaply and easily with new technology is well suited for\nthese ties\" [9] (p. 80).\nThird, while an offline social network may include up to\na dozen of intimate or significant ties and 1000 to 1700 \"ac-quaintances\"\nor \"interactions\" (see [9] and [27]), an online\nsocial networks can list hundreds of direct \"friends\" and include\nhundreds of thousands of additional friends within just\nthree degrees of separation from a subject.\nThis implies online social networks are both vaster and\nhave more weaker ties, on average, than offline social networks\n. In other words, thousands of users may be classified\nas friends of friends of an individual and become able to\naccess her personal information, while, at the same time,\nthe threshold to qualify as friend on somebody's network\nis low. This may make the online social network only an\nimaginary (or, to borrow Anderson's terminology, an imagined\n) community (see [2]). Hence, trust in and within online\nsocial networks may be assigned differently and have a different\nmeaning than in their offline counterparts. Online\nsocial networks are also more levelled, in that the same information\nis provided to larger amounts of friends connected\nto the subject through ties of different strength. And here\nlies a paradox. While privacy may be considered conducive\nto and necessary for intimacy (for [10], intimacy resides in\nselectively revealing private information to certain individuals\n, but not to others), trust may decrease within an online\nsocial network. 
At the same time, a new form of intimacy\nbecomes widespread: the sharing of personal information\nwith large and potential unknown numbers of friends and\nstrangers altogether. The ability to meaningfully interact\nwith others is mildly augmented, while the ability of others\nto access the person is significantly enlarged. It remains to\nbe investigated how similar or different are the mental models\npeople apply to personal information revelation within\na traditional network of friends compared to those that are\napplied in an online network.\n2.2\nPrivacy Implications\nPrivacy implications associated with online social networking\ndepend on the level of identifiability of the information\nprovided, its possible recipients, and its possible uses. Even\nsocial networking websites that do not openly expose their\nusers' identities may provide enough information to identify\nthe profile's owner. This may happen, for example, through\nface re-identification [13]. Liu and Maes estimate in [18] a\n15% overlap in 2 of the major social networking sites they\nstudied. Since users often re-use the same or similar photos\nacross different sites, an identified face can be used to identify\na pseudonym profile with the same or similar face on\nanother site. Similar re-identifications are possible through\ndemographic data, but also through category-based representations\nof interests that reveal unique or rare overlaps of\nhobbies or tastes. We note that information revelation can\nwork in two ways: by allowing another party to identify a\npseudonymous profile through previous knowledge of a sub-ject's\ncharacteristics or traits; or by allowing another party\nto infer previously unknown characteristics or traits about a\nsubject identified on a certain site. We present evaluations\nof the probabilities of success of these attacks on users of a\nspecific networking site in Section 4.\nTo whom may identifiable information be made available?\nFirst of all, of course, the hosting site, that may use and\nextend the information (both knowingly and unknowingly\nrevealed by the participant) in different ways (below we discuss\nextracts from the privacy policy of a social networking\nsite that are relevant to this discussion).\nObviously, the\ninformation is available within the network itself, whose extension\nin time (that is, data durability) and space (that is,\nmembership extension) may not be fully known or knowable\nby the participant. Finally, the easiness of joining and\nextending one's network, and the lack of basic security measures\n(such as SSL logins) at most networking sites make it\neasy for third parties (from hackers to government agencies)\nto access participants data without the site's direct collaboration\n(already in 2003, LiveJournal used to receive at least\nfive reports of ID hijacking per day, [23]).\nHow can that information be used? It depends on the\ninformation actually provided - which may, in certain cases,\nbe very extensive and intimate. Risks range from identity\ntheft to online and physical stalking; from embarrassment\nto price discrimination and blackmailing.\nYet, there are\nsome who believe that social networking sites can also offer\nthe solution to online privacy problems. In an interview,\nTribe.net CEO Mark Pincus noted that \"[s]ocial networking\nhas the potential to create an intelligent order in the current\nchaos by letting you manage how public you make yourself\nand why and who can contact you.\" [4]. 
We test this position\nin Section 4.\nWhile privacy may be at risk in social networking sites,\ninformation is willingly provided. Different factors are likely\nto drive information revelation in online social networks.\nThe list includes signalling (as discussed in [9]), because the\nperceived benefit of selectively revealing data to strangers\nmay appear larger than the perceived costs of possible privacy\ninvasions; peer pressure and herding behavior; relaxed\nattitudes towards (or lack of interest in) personal privacy;\nincomplete information (about the possible privacy implications\nof information revelation); faith in the networking\nservice or trust in its members; myopic evaluation of privacy\nrisks (see [1]); or also the service's own user interface,\nthat may drive the unchallenged acceptance of permeable\ndefault privacy settings.\nWe do not attempt to ascertain the relative impact of\n73\ndifferent drivers in this paper. However, in the following\nsections we present data on actual behavioral patterns of\ninformation revelation and inferred privacy attitudes in a\ncollege-targeted networking site. This investigation offers\na starting point for subsequent analysis of the motivations\nbehind observed behaviors.\nTHE FACEBOOK.COM\nMany users of social networking sites are of college age [8],\nand recent ventures have started explicitly catering to the\ncollege crowd and, in some cases, to specific colleges (e.g.,\nthe Facebook.com, but also Universitysingles.ca, quad5.com,\nCampusNetwork.com, iVentster.com, and others).\nCollege-oriented social networking sites provide opportunities\nto combine online and face-to-face interactions within\nan ostensibly bounded domain.\nThis makes them different\nfrom traditional networking sites: they are communities\nbased \"on a shared real space\" [26]. This combination may\nexplain the explosive growth of some of these services (according\nto [26], the Facebook has spread \"to 573 campuses\nand 2.4 million users. [...] [I]t typically attracts 80 percent\nof a school's undergraduate population as well as a smattering\nof graduate students, faculty members, and recent\nalumni.\") Also because of this, college-oriented networks\noffer a wealth of personal data of potentially great value\nto external observers (as reported by [6], for example, the\nPentagon manages a database of 16-to-25-year-old US youth\ndata, containing around 30 million records, and continuously\nmerged with other data for focused marketing).\nSince many of these sites require a college's email account\nfor a participant to be admitted to the online social network\nof that college, expectations of validity of certain personal\ninformation provided by others on the network may\nincrease. Together with the apparent sharing of a physical\nenvironment with other members of the network, that\nexpectation may increase the sense of trust and intimacy\nacross the online community. And yet, since these services\ncan be easily accessed by outsiders (see Section 4) and since\nmembers can hardly control the expansion of their own network\n(often, a member's network increases also through the\nactivity of other members), such communities turn out to\nbe more imagined than real, and privacy expectations may\nnot be matched by privacy reality.\nThe characteristics mentioned above make college-oriented\nnetworking sites intriguing candidates for our study of information\nrevelation and privacy preferences. 
In the rest of\nthis paper we analyze data gathered from the network of\nCarnegie Mellon University (CMU) students enlisted on one\nof such sites, the Facebook.\nThe Facebook has gained huge adoption within the CMU\nstudent community but is present with similar success at\nmany other colleges nationwide. It validates CMU-specific\nnetwork accounts by requiring the use of CMU email addresses\nfor registration and login. Its interface grants participants\nvery granular control on the searchability and visibility\nof their personal information (by friend or location,\nby type of user, and by type of data). The default settings,\nhowever, are set to make the participants profile searchable\nby anybody else in any school in the Facebook network, and\nmake its actual content visible to any other user at the same\ncollege or at another college in the same physical location.\n12\n12\nAt the time of writing, the geography feature which gen-17\n18 19 20 21 22 23 24 25 26 27 28 29 30 31\n0\n2\n4\n6\n8\n10\n12\n14\n16\n18\n20\nAge\nPercentage of Profiles\nMale\nFemale\nFigure 1:\nAge distribution of Facebook profiles at CMU.\nThe majority of users (95.6%) falls into the 18-24 age\nbracket.\nThe Facebook is straightforward about the usage it plans\nfor the participants' personal information: at the time of\nthis writing, its privacy policy [30] reports that the site will\ncollect additional information about its users (for instance,\nfrom instant messaging), not originated from the use of the\nservice itself. The policy also reports that participants' information\nmay include information that the participant has\nnot knowingly provided (for example, her IP address), and\nthat personal data may be shared with third parties.\n3.1\nAccess Tools\nIn June 2005, we separately searched for all \"female\" and\nall \"male\" profiles for CMU Facebook members using the\nwebsite's advanced search feature and extracted their profile\nIDs. Using these IDs we then downloaded a total of 4540\nprofiles - virtually the entire CMU Facebook population at\nthe time of the study.\n3.2\nDemographics\nThe majority of users of the Facebook at CMU are undergraduate\nstudents (3345 or 73.7% of all profiles; see Table\n1). This corresponds to 62.1% of the total undergraduate\npopulation at CMU [31]. Graduate students, staff and faculty\nare represented to a much lesser extent (6.3%, 1.3%,\nand 1.5% of the CMU population, respectively). The majority\nof users is male (60.4% vs. 39.2%). Table 2 shows the\ngender distribution for the different user categories. The\nstrong dominance of undergraduate users is also reflected in\nthe user age distribution shown in Figure 1. The vast majority\nof users (95.6%) falls in the 18-24 age bracket. Overall\nthe average age is 21.04 years.\nerates networks based on physical location is by default not\navailable to undergraduate students. However, the status of\na profile can easily be changed to e.g. \"graduate student\"\nfor which the feature is accessible.\n74\nTable 1: Distribution of CMU Facebook profiles for different user categories.\nThe majority of users are\nundergraduate students. 
The table lists the percentage of the CMU population (for each category) that are\nusers of the Facebook (if available).\n# Profiles\n% of Facebook Profiles\n% of CMU Population\nUndergraduate Students\n3345\n74.6\n62.1\nAlumni\n853\n18.8\nGraduate\nStudents\n270\n5.9\n6.3\nStaff\n35\n0.8\n1.3\nFaculty\n17\n0.4\n1.5\nTable 2: Gender distribution for different user categories.\n# Profiles\n% of Category\n% of CMU Population\nMale\n2742\n60.4\nOverall\nFemale\n1781\n39.2\nMale\n2025\n60.5\n62.0\nUndergraduate Students\nFemale\n1320\n39.5\n62.3\nMale\n484\n56.7\nAlumni\nFemale\n369\n43.3\nMale\n191\n70.7\n6.3\nGraduate Students\nFemale\n79\n29.3\n6.3\nMale\n23\n65.7\nStaff\nFemale\n12\n34.3\nMale\n17\n100\n3.4\nFaculty\nFemale\n0\n0.0\n0.0\n3.3\nTypes and Amount of Information\nDisclosed\nThe Facebook offers users the ability to disclose a large\nand varied amount of personal information. We evaluated\nto which extent users at CMU provide personal information.\nFigure 2 shows the percentages of CMU profiles that disclose\ndifferent categories of information.\nIn general, CMU users of the Facebook provide an astonishing\namount of information: 90.8% of profiles contain an\nimage, 87.8% of users reveal their birth date, 39.9% list a\nphone number (including 28.8% of profiles that contain a\ncellphone number), and 50.8% list their current residence.\nThe majority of users also disclose their dating preferences\n(male or female), current relationship status (single, married\n, or in a relationship), political views (from \"very liberal\"\nto \"very conservative\"), and various interests (including music\n, books, and movies). A large percentage of users (62.9%)\nthat list a relationship status other than single even identify\ntheir partner by name and/or link to their Facebook profile.\nNote that, as further discussed below in Section 3.4, Facebook\nprofiles tend to be fully identified with each participant's\nreal first and last names, both of which are used as\nthe profile's name. In other words, whoever views a profile\nis also able to connect the real first and last name of a person\nto the personal information provided - that may include\nbirthday or current residence.\nAcross most categories, the amount of information revealed\nby female and male users is very similar. A notable\nexception is the phone number, disclosed by substantially\nmore male than female users (47.1% vs. 28.9%). Single male\nusers tend to report their phone numbers in even higher frequencies\n, thereby possibly signalling their elevated interest\n0\n10\n20\n30\n40\n50\n60\n70\n80\n90\n100\nSummer Job\nFavorite Movies\nFavorite Books\nFavorite Music\nInterests\nPolitical Preference\nRelationship Partner\nRelationship Status\nDating Interests\nHighschool\nAIM Screenname\nPhone\nAddress\nHome Town\nBirthday\nProfile Image\nPercentage of Profiles\nFigure 2:\nPercentages of CMU profiles revealing various\ntypes of personal information.\nin making a maximum amount of contact information easily\navailable.\nAdditional types of information disclosed by Facebook\nusers (such as the membership of one's own network of\nfriends at the home college or elsewhere, last login information\n, class schedule, and others) are discussed in the rest\nof this paper.\n3.4\nData Validity and Data Identifiability\nThe terms of service of the site encourage users to only\npublish profiles that directly relate to them and not to other\n75\nentities, people or fictional characters. 
In addition, in order\nto sign up with the Facebook a valid email address of one\nof the more than 500 academic institutions that the site\ncovers has to be provided. This requirement, along with the\nsite's mission of organizing the real life social networks of\ntheir members, provides incentives for users to only publish\naccurate information.\nWe tested how valid the published data appears to be. In\naddition, we studied how identifiable or granular the provided\ndata is.\nIn general, determining the accuracy of the information\nprovided by users on the Facebook (or any other social networking\nwebsite) is nontrivial for all but selected individual\ncases. We therefore restrict our validity evaluation to the\nmeasurement of the manually determined perceived accuracy\nof information on a randomly selected subset of 100 profiles.\n3.4.1\nProfile Names\nWe manually categorized the names given on Facebook\nprofiles as being one of the the following:\n1. Real Name\nName appears to be real.\n2. Partial Name\nOnly a first name is given.\n3. Fake Name\nObviously fake name.\nTable 3 shows the results of the evaluation. We found 89%\nof all names to be realistic and likely the true names for the\nusers (for example, can be matched to the visible CMU email\naddress provided as login), with only 8% of names obviously\nfake. The percentage of people that choose to only disclose\ntheir first name was very small: 3%.\nTable 3: Categorization of name quality of a random\nsubset of 100 profile names from the Facebook. The\nvast majority of names appear to be real names with\nonly a very small percentage of partial or obviously\nfake names.\nCategory\nPercentage Facebook Profiles\nReal Name\n89%\nPartial Name\n3%\nFake Name\n8%\nIn other words, the vast majority of Facebook users seem\nto provide their fully identifiable names, although they are\nnot forced to do so by the site itself.\nAs comparison, 98.5% of the profiles that include a birthday\nactually report the fully identified birth date (day, month,\nand year), although, again, users are not forced to provide\nthe complete information (the remaining 1.5% of users reported\nonly the month or the month and day but not the\nyear of birth). Assessing the validity of birth dates is not\ntrivial. However, in certain instances we observed friends\nposting birthday wishes in the comments section of the profile\nof a user on the day that had been reported by the user\nas her birthday. In addition, the incentives to provide a\nfake birth date (rather than not providing one at all, which\nis permitted by the system) would be unclear.\n3.4.2\nIdentifiability of Images on Profile\nThe vast majority of profiles contain an image (90.8%,\nsee Section 3.3). While there is no explicit requirement to\nprovide a facial image, the majority of users do so. In order\nto assess the quality of the images provided we manually\nlabelled them into one of four categories:\n1. Identifiable\nImage quality is good enough to enable person recognition\n.\n2. Semi-Identifiable\nThe profile image shows a person, but due to the image\ncomposition or face pose the person is not directly\nrecognizable. Other aspects however (e.g. hair color,\nbody shape, etc.) are visible.\n3. Group Image\nThe image contains more than one face and no other\nprofile information (e.g. gender) can be used to identify\nthe user in the image.\n4. Joke Image\nImages clearly not related to a person (e.g. 
cartoon or\ncelebrity image).\nTable 4 shows the results of labelling the profile images into\nthe four categories. In the majority of profiles the images\nare suitable for direct identification (61%). Overall, 80% of\nimages contain at least some information useful for identification\n. Only a small subset of 12% of all images are clearly\nnot related to the profile user. We repeated the same evaluation\nusing 100 randomly chosen images from Friendster,\nwhere the profile name is only the first name of the member\n(which makes Friendster profiles not as identifiable as\nFacebook ones). Here the percentage of \"joke images\" is\nmuch higher (23%) and the percentage of images suitable\nfor direct identification lower (55%).\n13\n3.4.3\nFriends Networks\nThe Facebook helps in organizing a real-life social network\nonline. Since Facebook users interact with many of the other\nusers directly in real-life, often on a daily basis, the network\nof friends may function as profile fact checker, potentially\ntriggering questions about obviously erroneous information.\nFacebook users typically maintain a very large network of\nfriends. On average, CMU Facebook users list 78.2 friends at\nCMU and 54.9 friends at other schools. 76.6% of users have\n25 or more CMU friends, whereas 68.6% of profiles show\n25 or more non-CMU friends. See Figure 3 for histogram\nplots of the distribution of sizes of the networks for friends\nat CMU and elsewhere. This represents some effort, since\nadding a friend requires explicit confirmation.\n3.5\nData Visibility and Privacy Preferences\nFor any user of the Facebook, other users fall into four different\ncategories: friends, friends of friends, non-friend users\n13\nWe note that Friendster's profiles used to be populated\nby numerous fake and/or humorous profiles, also called\n\"Fakesters\" (see [8]). Friendster management tried to eliminate\nfake profiles and succeeded in significantly reducing\ntheir number, but not completely extirpating them from the\nnetwork. Based on our manual calculations, the share of\nfake Friendster profiles is currently comparable to the share\nof fake Facebook profiles reported above.\n76\nTable 4: Categorization of user identifiability based on manual evaluation of a randomly selected subset of\n100 images from both Facebook and Friendster profiles. Images provided on Facebook profiles are in the\nmajority of cases suitable for direct identification (61%). The percentage of images obviously unrelated to\na person (\"joke image\") is much lower for Facebook images in comparison to images on Friendster profiles\n(12% vs. 23%).\nCategory\nPercentage Facebook Profiles\nPercentage Friendster Profiles\nIdentifiable\n61%\n55%\nSemi-Identifiable\n19%\n15%\nGroup Image\n8%\n6%\nJoke Image\n12%\n23%\n0\n20\n40\n60\n80\n100\n120\n140\n160\n0\n0.05\n0.1\nPercentage of Profiles\nNumber of CMU Friends Listed\n0\n20\n40\n60\n80\n100\n120\n140\n160\n0\n0.05\n0.1\nPercentage of Profiles\nNumber of Non-CMU Friends Listed\n(a) Network of CMU friends\n(b) Network of Non-CMU friends\nFigure 3:\nHistogram of the size of networks for both CMU friends (a) and non-CMU friends (b). Users maintain\nlarge networks of friends with the average user having 78.2 friends at CMU and 54.9 friends elsewhere.\nat the same institution and non-friend users at a different\ninstitution.\n14\nBy default, everyone on the Facebook appears\nin searches of everyone else, independent of the searchers institutional\naffiliation. 
In search results the users' full names\n(partial searches for e.g. first names are possible) appear\nalong with the profile image, the academic institution that\nthe user is attending, and the users' status there. The Facebook\nreinforces this default settings by labelling it \"recom-mended\"\non the privacy preference page. Also by default\nthe full profile (including contact information) is visible to\neveryone else at the same institution.\nPrior research in HCI has shown that users tend to not\nchange default settings [19]. This makes the choice of default\nsettings by website operators very important. On the other\nhand, the site provides users a very granular and relatively\nsophisticated interface to control the searchability and visibility\nof their profiles. Undergrad users, for example, can\nmake their profiles searchable only to other undergrad users,\nor only users who are friends, or users who are friends of\nfriends, or users at the same institution - or combinations\nof the above constraints. In addition, visibility of the entire\nprofile can be similarly controlled. Granular control on\ncontact information is also provided.\nSociological theories of privacy have noted how an individual\nmay selectively disclose personal information to others\nin order to establish different degrees of trust and intimacy\nwith them (see [10]). In light of these theories, we tested\n14\nThe Facebook recently introduced a new relationship category\nbased on user location, e.g. Pittsburgh, which we did\nnot consider in this study.\nhow much CMU Facebook users take advantage of the ability\nthe site provides to manage their presentation of sel[ves].\nBy creating accounts at different institutions, and by using\naccounts with varying degree of interconnectedness with the\nrest of the CMU network, we were able to infer how individual\nusers within the CMU network were selecting their own\nprivacy preference.\n3.5.1\nProfile Searchability\nWe first measured the percentage of users that changed\nthe search default setting away from being searchable to\neveryone on the Facebook to only being searchable to CMU\nusers. We generated a list of profile IDs currently in use at\nCMU and compared it with a list of profile IDs visible from\na different academic institution. We found that only 1.2% of\nusers (18 female, 45 male) made use of this privacy setting.\n3.5.2\nProfile Visibility\nWe then evaluated the number of CMU users that changed\nprofile visibility by restricting access to CMU users.\nWe\nused the list of profile IDs currently in use at CMU and\nevaluated which percentage of profiles were fully accessible\nto an unconnected user (not friend or friend of friend of any\nprofile). Only 3 profiles (0.06%) in total did not fall into\nthis category.\n3.5.3\nFacebook Data Access\nWe can conclude that only a vanishingly small number\nof users change the (permissive) default privacy preferences.\nIn general, fully identifiable information such as personal\n77\nimage and first and last name is available to anybody registered\nat any Facebook member network. Since the Facebook\nboasts a 80% average participation rate among undergraduate\nstudents at the hundreds of US institutions it covers, and\nsince around 61% of our CMU subset provides identifiable\nface images, it is relatively easy for anybody to gain access\nto these data, and cheap to store a nation-wide database of\nfully identified students and their IDs. 
In other words, information\nsuitable for creating a brief digital dossier consisting\nof name, college affiliation, status and a profile image can be\naccessed for the vast majority of Facebook users by anyone\non the website. (To demonstrate this we downloaded and\nidentified the same information for a total of 9673 users at\nHarvard University.)\nAdditional personal data - such as political and sexual orientation\n, residence address, telephone number, class schedule\n, etc. - are made available by the majority of users to\nanybody else at the same institution, leaving such data accessible\nto any subject able to obtain even temporary control\nof an institution's single email address.\nPRIVACY IMPLICATIONS\nIt would appear that the population of Facebook users we\nhave studied is, by large, quite oblivious, unconcerned, or\njust pragmatic about their personal privacy. Personal data\nis generously provided and limiting privacy preferences are\nsparingly used. Due to the variety and richness of personal\ninformation disclosed in Facebook profiles, their visibility,\ntheir public linkages to the members' real identities, and\nthe scope of the network, users may put themselves at risk\nfor a variety of attacks on their physical and online persona.\nSome of these risks are common also in other online social\nnetworks, while some are specific to the Facebook. In this\nsection we outline a number of different attacks and quantify\nthe number of users susceptible based on the data we\nextracted. See Table 5 for an overview.\n4.1\nStalking\nUsing the information available on profiles on the Facebook\na potential adversary (with an account at the same\nacademic institution) can determine the likely physical location\nof the user for large portions of the day. Facebook\nprofiles include information about residence location, class\nschedule, and location of last login. A students' life during\ncollege is mostly dominated by class attendance. Therefore,\nknowledge of both the residence and a few classes that the\nstudent is currently attending would help a potential stalker\nto determine the users whereabouts. In the CMU population\n860 profiles fall into our definition of this category (280\nfemale, 580 male), in that they disclose both their current\nresidence and at least 2 classes they are attending. Since our\nstudy was conducted outside of the semester (when many\nstudents might have deleted class information from their\nprofiles) we speculate this number to be even higher during\nthe semester.\nA much larger percentage of users is susceptible to a form\nof cyber-stalking using the AOL instant messenger (AIM).\nUnlike other messengers, AIM allows users to add \"buddies\"\nto their list without knowledge of or confirmation from the\nbuddy being added. Once on the buddy list the adversary\ncan track when the user is online. In the CMU population\n77.7% of all profiles list an AIM screen name for a total of\nmore than 3400 users.\n4.2\nRe-identification\nData re-identification typically deals with the linkage of\ndatasets without explicit identifiers such as name and address\nto datasets with explicit identifiers through common\nattributes [25].\nExamples include the linkage of hospital\ndischarge data to voter registration lists, that allows to re-identify\nsensitive medical information [28].\n4.2.1\nDemographics re-identification\nIt has been shown previously that a large portion of the\nUS population can be re-identified using a combination of\n5-digit ZIP code, gender, and date of birth [29]. 
The vast majority of CMU users disclose both their full birthdate (day and year) and gender on their profiles (88.8%). For 44.3% of users (total of 1676) the combination of birthdate and gender is unique within CMU. In addition, 50.8% list their current residence, for which ZIP codes can be easily obtained. Overall, 45.8% of users list birthday, gender, and current residence. An adversary with access to the CMU section of the Facebook could therefore link a comparatively large number of users to outside, de-identified data sources such as e.g. hospital discharge data.
4.2.2
Face Re-Identification
In a related study we were able to correctly link facial images from Friendster profiles without explicit identifiers with images obtained from fully identified CMU web pages using a commercial face recognizer [13]. The field of automatic face recognition has advanced tremendously over the last decade and is now offering a number of commercial solutions which have been shown to perform well across a wide range of imaging conditions [14, 17, 24]. As shown in Section 3.4 a large number of profiles contain high quality images. At CMU more than 2500 profiles fall in this category [footnote 15]. Potential de-identified data sources include other social networking sites (e.g. Friendster) or dating sites (e.g. Match.com) that typically host anonymous profiles.
4.2.3
Social Security Numbers and Identity Theft
An additional re-identification risk lies in making birthdate, hometown, current residence, and current phone number publicly available at the same time. This information can be used to estimate a person's social security number and exposes her to identity theft.
The first three digits of a social security number reveal where that number was created (specifically, the digits are determined by the ZIP code of the mailing address shown on the application for a social security number). The next two digits are group identifiers, which are assigned according to a peculiar but predictable temporal order. The last four digits are progressive serial numbers [footnote 16].
When a person's hometown is known, the window of the first three digits of her SSN can be identified with probability decreasing with the home state's populousness. When that person's birthday is also known, and an attacker has access to SSNs of other people with the same birthdate in the same state as the target (for example obtained from the SSN death index or from stolen SSNs), it is possible to pin down a window of values in which the two middle digits are likely to fall. The last four digits (often used in unprotected logins and as passwords) can be retrieved through social engineering.
Footnote 15: In fact, 90.8% of profiles have images, out of which 61% are estimated to be of sufficient quality for re-identification.
Footnote 16: See http://www.ssa.gov/foia/stateweb.html and http://policy.ssa.gov/poms.nsf/lnx/0100201030.
Table 5: Overview of the privacy risks and number of CMU profiles susceptible to them.
Risk | # CMU Facebook Profiles | % CMU Facebook Profiles
Real-World Stalking | 280 (Female), 580 (Male) | 15.7 (Female), 21.2 (Male)
Online Stalking | 3528 | 77.7
Demographics Re-Identification | 1676 | 44.3
Face Re-Identification | 2515 (estimated) | 55.4
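Both the demographic linkage of Section 4.2.1 and the SSN narrowing sketched above start from the same primitive: counting how many people share a given quasi-identifier. Below is a minimal, hypothetical sketch of that uniqueness check; the record layout and field names are ours, standing in for whatever a crawler of the profiles would actually store.

from collections import Counter

def uniqueness_report(profiles, fields=('birth_date', 'gender', 'zip')):
    """Count how many records carry a quasi-identifier shared by nobody else.

    `profiles` is assumed to be a list of dicts; records missing any of the
    chosen fields are ignored.  Illustrative sketch only, not the measurement
    code used in the study.
    """
    keys = [tuple(p[f] for f in fields)
            for p in profiles if all(p.get(f) for f in fields)]
    counts = Counter(keys)
    unique = sum(1 for k in keys if counts[k] == 1)
    return unique, len(keys)

# Toy records only; dropping 'zip' from `fields` gives the (birth date, gender)
# style of count reported for the CMU profiles above.
toy = [
    {'birth_date': '1984-03-02', 'gender': 'F', 'zip': '15213'},
    {'birth_date': '1984-03-02', 'gender': 'F', 'zip': '15213'},
    {'birth_date': '1985-07-11', 'gender': 'M', 'zip': '15217'},
]
print(uniqueness_report(toy))   # (1, 3): one of the three records is unique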
Since the vast majority of the Facebook\nprofiles we studied not only include birthday and hometown\ninformation, but also current phone number and residence\n(often used for verification purposes by financial institutions\nand other credit agencies), users are exposing themselves to\nsubstantial risks of identity theft.\n4.3\nBuilding a Digital Dossier\nThe privacy implications of revealing personal and sensitive\ninformation (such as sexual orientation and political\nviews) may extend beyond their immediate impact, which\ncan be limited. Given the low and decreasing costs of storing\ndigital information, it is possible to continuously monitor\nthe evolution of the network and its users' profiles, thereby\nbuilding a digital dossier for its participants. College students\n, even if currently not concerned about the visibility\nof their personal information, may become so as they enter\nsensitive and delicate jobs a few years from now - when the\ndata currently mined could still be available.\n4.4\nFragile Privacy Protection\nOne might speculate that the perceived privacy protection\nof making personal information available only to members\nof a campus community may increase Facebook users' willingness\nto reveal personal information. However, the mechanisms\nprotecting this social network can be circumvented.\nAdding to this the recognition that users have little control\non the composition of their own networks (because often a\nmember's friend can introduce strangers into that member's\nnetwork), one may conclude that the personal information\nusers are revealing even on sites with access control and\nmanaged search capabilities effectively becomes public data.\n4.4.1\nFake Email Address\nThe Facebook verifies users as legitimate members of a\ncampus community by sending a confirmation email containing\na link with a seemingly randomly generated nine\ndigit code to the (campus) email address provided during\nregistration. Since the process of signing up and receiving\nthe confirmation email only takes minutes, an adversary simply\nneeds to gain access to the campus network for a very\nshort period of time. This can be achieved in a number of\nwell-known ways, e.g. by attempting to remotely access a\nhacked or virus-infected machine on the network or physi-cally\naccessing a networked machine in e.g. the library, etc.\n4.4.2\nManipulating Users\nSocial engineering is a well-known practice in computer\nsecurity to obtain confidential information by manipulating\nlegitimate users [22]. Implementation of this practice on the\nFacebook is very simple: just ask to be added as someone's\nfriend. The surprisingly high success rate of this practice\nwas recently demonstrated by a Facebook user who, using\nan automatic script, contacted 250,000 users of the Facebook\nacross the country and asked to be added as their\nfriend. According to [15], 75,000 users accepted: thirty percent\nof Facebook users are willing to make all of their profile\ninformation available to a random stranger and his network\nof friends.\n4.4.3\nAdvanced Search Features\nWhile not directly linked to from the site, the Facebook\nmakes the advanced search page of any college available to\nanyone in the network. Using this page various profile information\ncan be searched for, e.g. relationship status, phone\nnumber, sexual preferences, political views and (college) residence\n. 
By keeping track of the profile IDs returned in the\ndifferent searches a significant portion of the previously inaccessible\ninformation can be reconstructed.\nCONCLUSIONS\nOnline social networks are both vaster and looser than\ntheir offline counterparts. It is possible for somebody's profile\nto be connected to hundreds of peers directly, and thousands\nof others through the network's ties. Many individuals\nin a person's online extended network would hardly be defined\nas actual friends by that person; in fact many may be\ncomplete strangers. And yet, personal and often sensitive\ninformation is freely and publicly provided.\nIn our study of more than 4,000 CMU users of the Facebook\nwe have quantified individuals' willingness to provide\nlarge amounts of personal information in an online social\nnetwork, and we have shown how unconcerned its users appear\nto privacy risks: while personal data is generously provided\n, limiting privacy preferences are hardly used; only a\nsmall number of members change the default privacy preferences\n, which are set to maximize the visibility of users profiles\n. Based on the information they provide online, users\nexpose themselves to various physical and cyber risks, and\nmake it extremely easy for third parties to create digital\ndossiers of their behavior.\nThese risks are not unique to the Facebook. However, the\nFacebook's public linkages between an individual profile and\nthe real identity of its owner, and the Facebook's perceived\nconnection to a physical and ostensibly bounded community\n(the campus), make Facebook users a particularly interesting\npopulation for our research.\nOur study quantifies patterns of information revelation\nand infers usage of privacy settings from actual field data,\nrather than from surveys or laboratory experiments. Still,\nthe relative importance of the different drivers influencing\nFacebook users' information revelation behavior has to be\nquantified. Our evidence is compatible with a number of\n79\ndifferent hypotheses. In fact, many simultaneous factors are\nlikely to play a role. Some evidence is compatible with a\nsignalling hypothesis (see Section 3.3): users may be prag-matically\npublishing personal information because the benefits\nthey expect from public disclosure surpass its perceived\ncosts. Yet, our evidence is also compatible with an interface\ndesign explanation, such as the acceptance (and possibly ignorance\n) of the default, permeable settings (see Section 3.5).\nPeer pressure and herding behavior may also be influencing\nfactors, and so also myopic privacy attitudes (see [1]) and\nthe sense of protection offered by the (perceived) bounds of\na campus community. Clarifying the role of these different\nfactors is part of our continuing research agenda.\nAcknowledgements\nWe would like to thank Anne Zimmerman and Bradley Ma-lin\nfor first bringing the Facebook to our attention.\nWe\nwould also like to thank danah boyd, Lorrie Cranor, Julie\nDowns, Steven Frank, Julia Gideon, Charis Kaskiris, Bart\nNabbe, Mike Shamos, Irina Shklovski, and four anonymous\nreferees for comments. This work was supported in part by\nthe Data Privacy Lab in the School of Computer Science\nand by the Berkman Fund at Carnegie Mellon University.\nREFERENCES\n[1] A. Acquisti. Privacy in electronic commerce and the\neconomics of immediate gratification. In Proceedings\nof the ACM Conference on Electronic Commerce (EC\n'04), pages 2129, 2004.\n[2] B. Anderson. 
Imagined Communities: Reflections on\nthe Origin and Spread of Nationalism. Verso, London\nand New York, revised edition, 1991.\n[3] S. Arrison. Is Friendster the new TIA?\nTechCentralStation, January 7, 2004.\n[4] J. Black. The perils and promise of online schmoozing.\nBusinessWeek Online, February 20, 2004.\n[5] J. Brown. Six degrees to nowhere. Salon.com,\nSeptember 21, 1998.\n[6] D. Cave. 16 to 25? Pentagon has your number, and\nmore. The New York Times, June 24, 2005.\n[7] d. boyd. Reflections on friendster, trust and intimacy.\nIn Intimate (Ubiquitous) Computing Workshop Ubicomp\n2003, October 12-15, Seattle, Washington,\nUSA, 2003.\n[8] d. boyd. Friendster and publicly articulated social\nnetworking. In Conference on Human Factors and\nComputing Systems (CHI 2004), April 24-29, Vienna,\nAustria, 2004.\n[9] J. Donath and d. boyd. Public displays of connection.\nBT Technology Journal, 22:7182, 2004.\n[10] S. Gerstein. Intimacy and privacy. In F. D. Schoeman,\neditor, Philosophical Dimensions of Privacy: An\nAnthology. Cambridge University Press, Cambridge,\nUK, 1984.\n[11] M. Granovetter. The strength of weak ties. American\nJournal of Sociology, 78:13601380, 1973.\n[12] M. Granovetter. The strength of weak ties: A network\ntheory revisited. Sociological Theory, 1:201233, 1983.\n[13] R. Gross. Re-identifying facial images. Technical\nreport, Carnegie Mellon University, Institute for\nSoftware Research International, 2005. In preparation.\n[14] R. Gross, J. Shi, and J. Cohn. Quo vadis face\nrecognition? In Third Workshop on Empirical\nEvaluation Methods in Computer Vision, 2001.\n[15] K. Jump. A new kind of fame. The Columbian\nMissourian, September 1, 2005.\n[16] A. Leonard. You are who you know. Salon.com, June\n15, 2004.\n[17] S. Li and A. Jain, editors. Handbook of Face\nRecognition. Springer Verlag, 2005.\n[18] H. Liu and P. Maes. Interestmap: Harvesting social\nnetwork profiles for recommendations. In Beyond\nPersonalization - IUI 2005, January 9, San Diego,\nCalifornia, USA, 2005.\n[19] W. Mackay. Triggers and barriers to customizing\nsoftware. In Proceedings of CHI'91, pages 153160.\nACM Press, 1991.\n[20] S. Milgram. The small world problem. Psychology\nToday, 6:6267, 1967.\n[21] S. Milgram. The familiar stranger: An aspect of urban\nanonymity. In S. Milgram, J. Sabini, and M. Silver,\neditors, The Individual in a Social World: Essays and\nExperiments. Addison-Wesley, Reading, MA, 1977.\n[22] K. Mitnick, W. Simon, and S. Wozniak. The art of\ndeception: controlling the human element of security.\nJohn Wiley & Sons, 2002.\n[23] A. Newitz. Defenses lacking at social network sites.\nSecurityFocus, December 31, 2003.\n[24] P. Phillips, P. Flynn, T. Scruggs, K. Bowyer,\nJ. Chang, K. Hoffman, J. Marques, J. Min, and\nJ. Worek. Overview of the face recognition grand\nchallenge. In IEEE Conference on Computer Vision\nand Pattern Recognition, June 20-25, San Diego,\nCalifornia, USA, 2005.\n[25] P. Samarati and L. Sweeney. Protecting privacy when\ndisclosing information: k-anonymity and its\nenforcement through generalization and cell\nsuppression. Technical report, SRI International, 1998.\n[26] I. Sege. Where everybody knows your name.\nBoston.com, April 27, 2005.\n[27] L. J. Strahilevitz. A social networks theory of privacy.\nThe Law School, University of Chicago, John M. Olin\nLaw & Economics Working Paper No. 230 (2D Series),\nDecember 2004.\n[28] L. Sweeney. k-Anonymity: a model for protecting\nprivacy. 
International Journal on Uncertainty,\nFuzziness and Knowledge-based Systems,\n10(5):557570, 2002.\n[29] L. Sweeney. Uniqueness of simple demographics in the\nU.S. population. Technical report, Carnegie Mellon\nUniversity, Laboratory for International Data Privacy,\n2004.\n[30] The Facebook. Privacy policy.\nhttp://facebook.com/policy.php, August 2005.\n[31] University Planning. Carnegie Mellon Factbook 2005.\nCarnegie Mellon University, February 2005.\n[32] D. Watts. Six Degrees: The Science of a Connected\nAge. W.W.Norton & Company, 2003.\n80", "keywords": "information relevation;privacy;social networking sites;information revelation;privacy risk;Online privacy;online social networking;online behavior;college;social network theory;facebook;stalking;re-identification;data visibility;privacy perference"} {"name": "114", "title": "Integrating the Document Object Model with Hyperlinks for Enhanced Topic Distillation and Information Extraction", "abstract": "Topic distillation is the process of finding authoritative Web pages and comprehensive \"hubs\" which reciprocally endorse each other and are relevant to a given query. Hyperlink-based topic distillation has been traditionally applied to a macroscopic Web model where documents are nodes in a directed graph and hyperlinks are edges. Macroscopic models miss valuable clues such as banners, navigation panels , and template-based inclusions, which are embedded in HTML pages using markup tags. Consequently, results of macroscopic distillation algorithms have been deteriorating in quality as Web pages are becoming more complex. We propose a uniform fine-grained model for the Web in which pages are represented by their tag trees (also called their Document Object Models or DOMs) and these DOM trees are interconnected by ordinary hyperlinks. Surprisingly, macroscopic distillation algorithms do not work in the fine-grained scenario. We present a new algorithm suitable for the fine-grained model. It can dis-aggregate hubs into coherent regions by segmenting their DOMtrees. Mutual endorsement between hubs and authorities involve these regions , rather than single nodes representing complete hubs. Anecdotes and measurements using a 28-query, 366000-document benchmark suite, used in earlier topic distillation research, reveal two benefits from the new algorithm: distillation quality improves, and a by-product of distillation is the ability to extract relevant snippets from hubs which are only partially relevant to the query.", "fulltext": "Introduction\nKleinberg's Hyperlink Induced Topic Search (HITS) [14]\nand the PageRank algorithm [3] underlying Google have\nrevolutionized ranking technology for Web search engines.\nPageRank evaluates the \"prestige score\" of a page as roughly\nproportional to the sum of prestige scores of pages citing it\n\n(Note:\nTo view the HTML version using Netscape, add the\nfollowing line to your ~/.Xdefaults or ~/.Xresources file:\nNetscape*documentFonts.charset*adobe-fontspecific: iso-8859-1\nFor printing use the PDF version, as browsers may not print the\nmathematics properly.)\nCopyright is held by author/owner.\nWWW10, May 15, 2001, Hong Kong.\nACM1-58113-348-0/01/0005.\nusing hyperlinks. HITS also identifies collections of resource\nlinks or \"hubs\" densely coupled to authoritative pages on a\ntopic. 
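As a concrete, deliberately simplified rendering of that prestige recurrence, the sketch below repeatedly lets every page split its current score among the pages it cites and then renormalizes. It omits the damping factor and dangling-node handling of full PageRank, and the adjacency-list input format is an assumption of ours.

def prestige(out_links, iters=30):
    # out_links maps a page to the list of pages it cites (assumed input format).
    nodes = set(out_links) | {v for targets in out_links.values() for v in targets}
    p = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        for u, targets in out_links.items():
            if targets:
                share = p[u] / len(targets)      # u's endorsement is split evenly
                for v in targets:
                    nxt[v] += share
        norm = sum(nxt.values()) or 1.0
        p = {v: s / norm for v, s in nxt.items()}  # keep the scores summing to 1
    return p

print(prestige({'x': ['y', 'z'], 'y': ['z'], 'z': ['x']}))

Pages cited by many high-scoring pages end up with high prestige, which is the intuition the rest of this section contrasts with the hub/authority view.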
The model of the Web underlying these and related\nsystems is a directed graph with pages (HTML files) as\nnodes and hyperlinks as edges.\nSince those papers were published, the Web has been\nevolving in fascinating ways, apart from just getting larger.\nWeb pages are changing from static files to dynamic views\ngenerated from complex templates and backing semi-structured\ndatabases. A variety of hypertext-specific idioms such\nas navigation panels, advertisement banners, link exchanges,\nand Web-rings, have been emerging.\nThere is also a migration of Web content from syntac-tic\nHTML markups towards richly tagged, semi-structured\nXML documents (http://www.w3.org/XML/) interconnected\nat the XML element level by semantically rich links (see,\ne.g., the XLink proposal at http://www.w3.org/TR/xlink/).\nThese refinements are welcome steps to implementing what\nBerners-Lee and others call the semantic Web (http://www.\nw3.org/1999/04/13-tbl.html), but result in document, file,\nand site boundaries losing their traditional significance.\nContinual experiments performed by several researchers\n[2, 15] reveal a steady deterioration of distillation quality\nthrough the last few years. In our experience, poor results\nare frequently traced to the following causes:\nLinks have become more frequent and \"noisy\" from\nthe perspective of the query, such as in banners, navigation\npanels, and advertisements. Noisy links do not\ncarry human editorial endorsement, a basic assumption\nin topic distillation.\nHubs may be \"mixed\", meaning only a portion of\nthe hub may be relevant to the query. Macroscopic\ndistillation algorithms treat whole pages as atomic, indivisible\nnodes with no internal structure. This leads\nto false reinforcements and resulting contamination of\nthe query responses.\nThanks in part to the visibility of Google, content creators\nare well aware of hyperlink-based ranking technology.\nOne reaction has been the proliferation of nepotistic \"clique\nattacks\"--a collection of sites linking to each other without\nsemantic reason, e.g. http://www.411fun.com, http://\nwww.411fashion.com and http://www.411-loans.com. (Figures\n8 and 9 provide some examples.) Some examples look\nsuspiciously like a conscious attempt to spam search engines\nthat use link analysis. Interestingly, in most cases, the visual\npresentation clearly marks noisy links which surfers rarely\nfollow, but macroscopic algorithms are unable to exploit it.\n211\n<html>\n<head>\n<title>Portals</title>\n</head>\n<body>\n<ul>\n<li>\n<a href=\"...\">Yahoo</a>\n</li>\n<li>\n<a href=\"...\">Lycos</a>\n</li>\n</ul>\n</body>\n</html>\nhtml\nhead\nbody\ntitle\nul\nli\nli\na\na\nFigure 1: In the fine-grained model, DOMs for individual pages\nare trees interconnected by ordinary hyperlinks.Each triangle is\nthe DOM tree corresponding to one HTML page.Green boxes\nrepresent text.\nMany had hoped that HITS-like algorithms would put\nan end to spamming, but clearly the situation is more like\nan ongoing arms-race. Google combines link-based ranking\nwith page text and anchor text in undisclosed ways, and\nkeeps tweaking the combination, but suffers an occasional\nembarrassment\n1\n.\nDistillation has always been observed to work well for\n\"broad\" topics (for which there exist well-connected relevant\nWeb subgraphs and \"pure\" hubs) and not too well for\n\"narrow\" topics, because w.r.t. narrow topics most hubs are\nmixed and have too many irrelevant links. 
Mixed hubs and\nthe arbitrariness of page boundaries have been known to\nproduce glitches in the Clever system [6]: there has been\nno reliable way to classify hubs as mixed or pure.\nIf a\nfine-grained model can suitably dis-aggregate mixed hubs,\ndistillation should become applicable to narrow queries too.\nYet another motivation for the fine-grained model comes\nfrom the proliferation of mobile clients such as cell-phones\nand PDAs with small or no screens. Even on a conventional\nWeb browser, scrolling through search results for promising\nresponses, then scrolling through those responses to satisfy\na specific information need are tedious steps. The tedium is\nworse on mobile clients. Search engines that need to serve\nmobile clients must be able to pinpoint narrow sections of\npages and sites that address a specific information need, and\nlimit the amount of extra matter sent back to the client [4].\n1.1\nOur contributions\nWe initiate a study of topic distillation with a fine-grained\nmodel of the Web, built using the Document Object Model\n(DOM) of HTML pages. The DOM can model reasonably\nclean HTML, support XML documents that adhere to rigid\nschema definitions, and embed free text in a natural way.\nIn our model, HTML pages are represented by their DOMs\nand these DOMtrees are interconnected by ordinary hyperlinks\n(figure 1). The sometimes artificial distinction between\nWeb-level, site-level, page-level, and intra-page structures\nis thereby blurred.\nSurprisingly, macroscopic distillation\nalgorithms perform poorly in the fine-grained setting; we\ndemonstrate this using analysis and anecdotes. Our main\ntechnical contribution is a new fine-grained distillation al-1\nhttp://searchenginewatch.com/sereport/99/11-google.html\n(local copy GoogleDrEvil.html) and http://searchenginewatch.com/\nsereport/01/02-bush.html (local copy GoogleBush.html) provide some\nsamples.\nBibliometry,\nGraph theory\nPageRank/\nGoogle\nHITS\nClever@IBM\nExploiting\nanchor text\nTopic distillation\n@Compaq\nOutlier\nelimination\nDOM\nstructure\nThis paper\nFigure 2: This work in the context of HITS and related research.\ngorithm which can identify mixed hubs and segment their\ncorresponding DOMtrees into maximal subtrees which are\n\"coherent\" w.r.t. the query, i.e., each is almost completely\nrelevant or completely irrelevant. The segmentation algorithm\nuses the Minimum Description Length (MDL) principle\n[16] from Information Theory [9]. Rather than collapse\nthese diverse hub subtrees into one node, the new algorithm\nallocates a node for each subtree. This intermediate\nlevel of detail, between the macroscopic and the fine-grained\nmodel, is essential to the success of our algorithm.\nWe\nreport on experiments with 28 queries involving over 366000\nWeb pages.\nThis benchmark has been used in previous\nresearch on resource compilation and topic distillation [5,\n2, 6]. Our experience is that the fine-grained model and\nalgorithm significantly improve the quality of distillation,\nand are capable of extracting DOMsubtrees from mixed\nhubs that are relevant to the query.\nWe note that in this study we have carefully and deliberately\nisolated the model from possible influences of text\nanalysis. By controlling our experimental environment to\nnot use text, we push HITS-like ideas to the limit, evaluating\nexactly the value added by information present in DOM\nstructures. 
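To make the fine-grained model of Figure 1 concrete, the following sketch parses an HTML page with Python's standard html.parser and records, for every outgoing HREF, the path of tags from the document root down to the anchor leaf. It is only an illustration of the pared-down DOM view assumed in this paper (class and variable names are ours), not the system's actual page processor.

from html.parser import HTMLParser

class HrefLeafExtractor(HTMLParser):
    """Collect (DOM path, target URL) pairs for every <a href=...> leaf."""
    def __init__(self):
        super().__init__()
        self.stack = []          # open tags from the root to the current element
        self.href_leaves = []    # list of (path tuple, url)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        if tag == 'a':
            url = dict(attrs).get('href')
            if url:
                self.href_leaves.append((tuple(self.stack), url))

    def handle_endtag(self, tag):
        if tag in self.stack:
            # pop up to and including the matching open tag (tolerates sloppy HTML)
            while self.stack and self.stack.pop() != tag:
                pass

parser = HrefLeafExtractor()
parser.feed('<html><body><ul><li><a href="http://www.yahoo.com">Yahoo</a></li>'
            '<li><a href="http://www.lycos.com">Lycos</a></li></ul></body></html>')
print(parser.href_leaves)
# [(('html', 'body', 'ul', 'li', 'a'), 'http://www.yahoo.com'),
#  (('html', 'body', 'ul', 'li', 'a'), 'http://www.lycos.com')]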
In ongoing work, we have added textual support to our framework and obtained even better results [7].
1.2
Benefits and applications
Apart from offering a more faithful model of Web content, our approach enables solutions to the following problems.
Better topic distillation: We show less tendency for topic drift and contamination when the fine-grained model is used.
Web search using devices with small or no screen: The ability to identify page snippets relevant to a query is attractive to search services suitable for mobile clients.
Focused crawling: Identification of relevant DOM subtrees can be used to better guide a focused crawler's link expansion [8].
Annotation extraction: Experiments with a previous macroscopic distillation algorithm (Clever [6]) revealed that volunteers preferred Clever to Yahoo! only when Yahoo!'s manual site annotations were removed in a blind test. Our work may improve on current techniques for automatic annotation extraction [1] by first collecting candidate hub page fragments and then subjecting the text therein to further segmentation techniques.
Data preparation for linguistic analysis: Information extraction is a natural next step after resource discovery. It is easier to build extractors based on statistical and linguistic models if the domain or subject matter of the input documents is suitably segmented [12], as is effected by our hub subtree extraction technique, which is a natural successor to resource discovery, and a precursor to linguistic analysis.
1.3
Outline of the paper
In Section 2.1 we review HITS and related algorithms. This section can be skipped by a reader who is familiar with HITS-related literature. In Section 2.2 we illustrate some recent and growing threats to the continued success of macroscopic distillation algorithms. We show why the fine-grained model does not work with traditional HITS-like approaches in Section 3, and then propose our framework in Section 4. We report on experimental results in Section 5 and conclude in Section 6 with some comments on ongoing and future work.
Preliminaries
We review the HITS family of algorithms and discuss how they were continually enhanced to address evolving Web content.
2.1
Review of HITS and related systems
The HITS algorithm [14] started with a query q which was sent to a text search engine. The returned set of pages R_q was fetched from the Web, together with any pages having a link to any page in R_q, as well as any page cited in some page of R_q using a hyperlink. Links that connected pages on the same Web server (based on canonical host name match) were dropped from consideration because they were often seen to serve only a navigational purpose, or were "nepotistic" in nature.
Suppose the resulting graph is G_q = (V_q, E_q). We will drop the subscript q where clear from context. Each node v in V is assigned two scores: the hub score h(v) and the authority score a(v), initialized to any positive number. Next the HITS algorithm alternately updates a and h as follows: a(v) = Σ_{(u,v)∈E} h(u) and h(u) = Σ_{(u,v)∈E} a(v), making sure after each iteration to scale a and h so that Σ_v h(v) = Σ_v a(v) = 1, until the ranking of nodes by a and h stabilize (see figure 3).
If E is represented in the adjacency matrix format (i.e., E[i, j] = 1 if there is an edge (i, j) and 0 otherwise) then the above operation can be written simply as a = E^T h and h = E a, interspersed with scaling to set ||h||_1 = ||a||_1 = 1.
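A compact way to see the alternating update in action is to run it directly on an edge list. The sketch below follows the updates and L1 rescaling described above; it is a bare-bones illustration with invented variable names, ignoring root-set construction and edge weighting.

def hits(edges, iters=50):
    """Bare-bones HITS: edges is an iterable of (u, v) pairs meaning u links to v."""
    edges = list(edges)
    nodes = {x for e in edges for x in e}
    hub = {v: 1.0 for v in nodes}
    auth = {v: 1.0 for v in nodes}
    for _ in range(iters):
        # a <- E^T h : authority of v sums hub scores of pages linking to v
        auth = {v: 0.0 for v in nodes}
        for u, v in edges:
            auth[v] += hub[u]
        # h <- E a : hub score of u sums authority scores of pages u links to
        hub = {u: 0.0 for u in nodes}
        for u, v in edges:
            hub[u] += auth[v]
        # rescale so that ||h||_1 = ||a||_1 = 1
        sa, sh = sum(auth.values()) or 1.0, sum(hub.values()) or 1.0
        auth = {v: s / sa for v, s in auth.items()}
        hub = {v: s / sh for v, s in hub.items()}
    return hub, auth

hub, auth = hits([(1, 2), (1, 3), (4, 2), (4, 3)])
print(sorted(auth, key=auth.get, reverse=True))  # nodes 2 and 3 emerge as the authorities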
The HITS algorithm effectively uses power iterations [11] to find a, the principal eigenvector of E^T E, and h, the principal eigenvector of E E^T. Pages with large a are popular or authoritative sources of information; pages with large h are good collections of links.
A key feature of HITS is how endorsement or popularity diffuses to siblings. If (u, v) and (u, w) are edges and somehow a(v) becomes large, then in the next iteration h(u) will increase, and in the following iteration, a(w) will increase. We will describe this as "v's authority diffuses to w through the hub u." This is how sibling nodes reinforce each other's authority scores. We will revisit this property later in Section 3.
Google has no notion of hubs. Roughly speaking, each page v has a single "prestige" score p(v) called its PageRank [3] which is defined as proportional to Σ_{(u,v)∈E} p(u), the sum of prestige scores of pages u that cite v. Some conjecture that the prestige model is adequate for the living Web, because good hubs readily acquire high prestige as well. Our work establishes the value of a bipartite model like HITS, and indeed, the value of an asymmetric model where hubs are analyzed quite differently from authorities. Therefore we will not discuss prestige-based models any further.
Figure 3: (a) HITS, a macroscopic topic distillation algorithm with uniform edge weights; (b) the B&H algorithm, apart from using non-uniform edge weights, discards pages in the expanded set which are too dissimilar to the rootset pages to prevent topic drift. Documents are represented as vectors with each component representing one token or word [17].
2.2
The impact of the evolving Web on hyperlink analysis
Elegant as the HITS model is, it does not adequately capture various idioms of Web content. We discuss here a slew of follow-up work that sought to address these issues.
Kleinberg dropped links within the same Web-site from consideration because these were often found to be navigational, "nepotistic" and noisy. Shortly after HITS was published, Bharat and Henzinger (B&H [2]) found that nepotism was not limited to same-site links. In many trials with HITS, they found two distinct sites s_1 and s_2, where s_1 hosted a number of pages u linking to a page v on s_2, driving up a(v) beyond what may be considered fair. B&H proposed a simple and effective fix for such "site-pair" nepotism: if k pages on s_1 point to v, let the weight of each of these links be 1/k, so that they add up to one, assuming a site (not a page) is worth one unit of voting power.
Later work in the Clever system [6] used a small edge weight for same-site links and a larger weight for other links, but these weights were tuned empirically by evaluating the results on specific queries.
Another issue with HITS were "mixed hubs", or pages u that included a collection of links of which only a subset was relevant to a query. Because HITS modeled u as a single node with a single h score, high authority scores could diffuse from relevant links to less relevant links. E.g., responses to the query movie awards sometimes drifted into the neighboring, more densely linked domain of movie companies.
Later versions of Clever tried to address the issue in two ways.
First, links within a fixed number of tokens of\nquery terms were assigned a large edge weight (the width\nof the \"activation window\" was tuned by trial-and-error).\nSecond, hubs which were \"too long\" were segmented at a few\nprominent boundaries (such as <UL> or <HR>) into \"pagelets\"\nwith their own scores. The boundaries were chosen using a\nstatic set of rules depending on the markup tags on those\npages alone.\nTo avoid drift, B&H also computed a vector space representation\n[17] of documents in the response set (shown in\nFigure 3) and then dropped pages that were judged to be\n\"outliers\" using a suitable threshold of (cosine) similarity to\nthe vector space centroid. B&H is effective for improving\nprecision, but may reduce recall if mixed hubs are pruned\nbecause of small similarity to the root set centroid. This\n213\nQuery term\nActivation\nwindow\nFigure 4: Clever uses a slightly more detailed page model than\nHITS.Hyperlinks near query terms are given heavier weights.\nSuch links are shown as thicker lines.\nmay in turn distort hub and authority scores and hence the\ndesired ranking. Losing a few hubs may not be a problem\nfor broad queries but could be serious for narrower queries.\nAs resource discovery and topic distillation become more\ncommonplace, we believe the quest will be for every additional\nresource than can possibly be harvested, not merely\nthe ones that \"leap out at the surfer.\" Our goal should\ntherefore be to extract relevant links and annotations even\nfrom pages which are partially or largely irrelevant.\nGeneralizing hyperlinks to interconnected DOMs\nHTML documents have always embedded many sources of\ninformation (other that text) which have been largely ignored\nin previous distillation research. Markups are one\nsuch source. From a well-formed HTML document, it ought\nto be possible to extract a tree structure called the Document\nObject Model (DOM). In real life HTML is rarely\nwell formed, but using a few simple patches, it is possible\nto generate reasonably accurate DOMs. For XML sources\nadhering to a published DTD, a DOMis precise and well\ndefined.\nFor simplicity, we shall work with a greatly pared-down\nversion of the DOMfor HTML pages. We will discard all\ntext, and only retain those paths in the DOMtree that lead\nfrom the root to a leaf which is an <A...> element with an\nHREF leading to another page.\nHyperlinks always originate from leaf DOMelements,\ntypically deep in the DOMtree of the source document. If\nsame-site links are ignored, very few macro-level hyperlinks\ntarget an internal node in a DOMtree (using the \"#\" modifier\nin the URL). To simplify our model (and experiments)\nwe will assume that the target of a hyperlink is always the\nroot node of a DOMtree. In our experiments we found very\nfew URLs to be otherwise.\nA first-cut approach (which one may call MicroHITS )\nwould be to use the fine-grained graph directly in the HITS\nalgorithm. One may even generalize \"same-site\" to \"same-DOM\"\nand use B&H-like edge-weights. This approach turns\nout to work rather poorly.\nTo appreciate why, consider two simple example graphs\nshown in Figure 5 and their associated eigenvectors. The\nfirst graph is for the macro setting. 
Expanding out a ← E^T E a we get
a(2) ← a(2) + a(3) and
a(3) ← a(2) + a(3),
which demonstrates the mutual reinforcement. In the second example nodes numbered 3 and 4 are part of one DOM tree. This time, we get
a(2) ← 2a(2) + a(4) and
a(4) ← a(2) + a(4),
but there is no coupling between a(2) and a(5), which we would expect at the macroscopic level. Node 4 (marked red) effectively blocks the authority from diffusing between nodes 2 and 5.
Figure 5: A straight-forward application of HITS-like algorithms to a DOM graph may result in some internal DOM nodes blocking the diffusion of authority across siblings. (The figure shows the two example graphs, a macroscopic bipartite core and a graph containing a small DOM tree, together with their E and E^T E matrices.)
One may hope that bigger DOM trees and multiple paths to authorities might alleviate the problem, but the above example really depicts a basic problem. The success of HITS depends critically on reinforcement among bipartite cores (see figure 5) which may be destroyed by the introduction of fine-grained nodes.
Proposed model and algorithm
At this point the dilemma is clear: by collapsing hubs into one node, macroscopic distillation algorithms lose valuable detail, but the more faithful fine-grained model prevents bipartite reinforcement.
In this section we present our new model and distillation algorithm that resolves the dilemma. Informally, our model of hub generation enables our algorithm to find a cut or frontier across each hub's DOM tree. Subtrees attached to these cuts are made individual nodes in the distillation graph. Thus the hub score of the entire page is dis-aggregated at this intermediate level. The frontiers are not computed one time as a function of the page alone, neither do they remain unchanged during the HITS iterations. The frontiers are determined by the current estimates of the hub scores of the leaf HREF nodes.
We will first describe the hub segmentation technique and then use it in a modified iterative distillation algorithm.
4.1
Scoring internal micro-hub nodes
Macroscopic distillation algorithms rank and report complete hub pages, even if they are only partially relevant. In this section we address the problem of estimating the hub score of each DOM node in the fine-grained graph, given an estimate of authority scores. Because inter-page hyperlinks originate in leaf DOM nodes and target root nodes of DOM trees, we will also assume that only those DOM nodes that are document roots can have an authority score.
At the end of the h ← E a substep of MicroHITS, leaf DOM nodes get a hub score. Because leaf nodes point to exactly one page via an HREF, the hub score is exactly the authority score of the target page. Macroscopic distillation algorithms in effect aggregate all the leaf hub scores for a page into one hub score for the entire page. Reporting leaf hub scores in descending order would be useless, because they would simply follow the authority ranking and fail to identify good hub aggregates.
Instead of the total hub score, one may consider the density of hub scores in a subtree, which may be defined as the total hub score in the subtree divided by the number of HREF leaves. The maximum density will be achieved by the leaf node that links to the best authority.
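The two aggregation choices just mentioned, total hub score versus hub-score density, are easy to contrast on a toy tree. The sketch below uses a made-up nested representation of a mixed hub (one relevant list, one noisy panel); it only illustrates why totals favor whole pages while densities favor tiny subtrees, which motivates the frontier-finding machinery that follows.

def aggregate(node):
    """Return (total leaf hub score, number of HREF leaves) for a DOM subtree.

    A subtree is either a float (an HREF leaf carrying the authority score of
    its target) or a (tag, children) pair; this toy representation is ours,
    not the paper's data structure.
    """
    if isinstance(node, float):
        return node, 1
    _, children = node
    total, leaves = 0.0, 0
    for child in children:
        t, n = aggregate(child)
        total += t
        leaves += n
    return total, leaves

# One mixed hub: a relevant list with two strong links and a noisy panel of weak ones.
page = ('html', [('ul', [0.40, 0.35]), ('div', [0.01, 0.02, 0.01, 0.02])])
for name, subtree in [('whole page', page), ('relevant <ul>', page[1][0]), ('noisy <div>', page[1][1])]:
    total, leaves = aggregate(subtree)
    print(f'{name}: total={total:.2f} density={total / leaves:.2f}')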
4.1.1 A generative model for hubs
To help us find suitable frontiers along which we can aggregate hub scores, we propose the following generative model for hubs.
Imagine that the Web has stopped changing and, with respect to a fixed query, all Web pages have been manually rated for their worth as hubs. From these hub scores, one may estimate that the hub scores have been generated from a distribution Θ0. (E.g., Θ0 may represent an exponential distribution with mean 0.005.) If the author of a hub page sampled URLs at random to link to, the distribution of hub scores at the leaves of the page would approach the global distribution provided enough samples were taken.
However, authors differ in their choice of URLs. Hub authors are not aware of all URLs relevant to a given query or their relative authority; otherwise all hubs authored on a topic would be complete and identical, and therefore all but one would be pointless to author. (Here we momentarily ignore the value added by annotations and commentaries on hub pages.)
Therefore, the distribution of hub scores for pages composed by a specific author will be different from Θ0. (E.g., the author's personal average of hub scores may be 0.002, distributed exponentially.) Moreover, the authors of mixed hubs deliberately choose to dedicate not the entire page, but only a fragment or subtree of it, to URLs that are relevant to the given query. (As an extreme case a subtree could be a single HREF.)
We can regard the hub generation process as a progressive specialization of the hub score distribution starting from the global distribution. For simplicity, assume all document roots are attached to a "super-root" which corresponds to the global distribution Θ0. As the author works down the DOM tree, "corrections" are applied to the score distribution at nodes on the path.
At some suitable depth, the author fixes the score distribution and generates links to pages so that hub scores follow that distribution. This does not mean that there are no interesting DOM subtrees below this depth. The model merely posits that up to some depth, DOM structure is indicative of systematic choices of score distributions, whereas beyond that depth variation is statistical.
Figure 6: Our fine-grained model of Web linkage which unifies hyperlinks and DOM structure. (The figure shows the global distribution Θ0 at the super-root, progressive "distortion" of the distribution down the DOM tree, the model frontier separating "hot" from "cold" subtrees, the cumulative distortion cost KL(Θ0; Θu) + ... + KL(Θu; Θv), and the data encoding cost, roughly −Σ_{h∈Hv} log Pr_Θv(h).)
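To make the generative story concrete, here is a small simulation sketch. It is ours, not the authors'; the particular choice of multiplicative "corrections" and all names are illustrative assumptions. Starting from a global exponential distribution with mean 0.005, it applies one random correction per DOM level down to some depth, then draws leaf hub scores from the distribution fixed there.

    import random

    # Sketch of the hub-generation model: the score distribution is progressively
    # specialized from the global distribution theta_0 as the author walks down the
    # DOM tree; leaf hub scores are then drawn from the distribution fixed at that
    # depth.  The multiplicative corrections below are an assumption for illustration.

    GLOBAL_MEAN = 0.005          # mean of the global exponential distribution (theta_0)

    def specialize(mean, depth, rng):
        """Apply one multiplicative 'correction' per DOM level."""
        for _ in range(depth):
            mean *= rng.uniform(0.5, 2.0)
        return mean

    def generate_leaf_scores(n_leaves, depth, seed=0):
        rng = random.Random(seed)
        mean = specialize(GLOBAL_MEAN, depth, rng)
        # beyond the chosen depth, variation is purely statistical:
        return [rng.expovariate(1.0 / mean) for _ in range(n_leaves)]

    print(generate_leaf_scores(n_leaves=8, depth=3))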
4.1.2 Discovering DOM frontiers from generated hubs
During topic distillation we observe pages which are the outcome of the generative process described above, and our goal is to discover the "best" frontier at which the score distributions were likely to have been fixed.
A balancing act is involved here: one may choose a large and redundant frontier near the leaves and model the many small, homogeneous subtrees (each with a different distribution Θw) attached to that frontier accurately, or one may choose a short frontier near the root with a few subtrees which are harder to model because they contain diverse hub scores. The balancing act requires a common currency to compare the cost of the frontier with the cost of modeling hub score data beneath the frontier.
This is a standard problem in segmentation, clustering, and model estimation. A particularly successful approach to optimizing the trade-off is to use the Minimum Description Length (MDL) principle [16]. MDL provides a recipe for bringing the cost of model corrections to the same units as the cost for representing data w.r.t. a model, and postulates that "learning" is equivalent to minimizing the sum total of model and data encoding costs.
Data encoding cost: First we consider the cost of encoding all the h-values at the leaves of a subtree rooted at node w. Specifically, let the distribution associated with w be Θw. The set of HREF leaf nodes in the subtree rooted at node w is denoted Lw, and the set of hub scores at these leaves is denoted Hw. As part of the solution we will need to evaluate the number of bits needed to encode the h-values in Hw using the model Θw. There are efficient codes which can achieve a data encoding length close to Shannon's entropy-based lower bound [9] of
−Σ_{h∈Hw} log Pr_Θw(h)  bits,    (1)
where Pr_Θw(h) is the probability of hub score h w.r.t. a distribution represented by Θw. (E.g., Θw may include the mean and variance of a normal distribution.) We will use this lower bound as an approximation to our data encoding cost. (This would work if the h-values followed a discrete probability distribution, which is not the case with hub scores. We will come back to this issue in §4.2.)
Model encoding cost: Next we consider the model encoding cost. Consider node v in the DOM tree. We will assume that Θ0 is known to all, and use the path from the global root to v to inductively encode each node w.r.t. its parent. Suppose we want to specialize the distribution Θv of some v away from Θu, the distribution of its parent u. The cost for specifying this change is given by the well-known Kullback-Leibler (KL) distance [9] KL(Θu; Θv), expressed as
KL(Θu; Θv) = Σ_x Pr_Θu(x) log ( Pr_Θu(x) / Pr_Θv(x) ).    (2)
Intuitively, this is the cost of encoding the distribution Θv w.r.t. a reference distribution Θu. E.g., if X is a binary random variable and its probabilities of being zero and one are (.2, .8) under Θ1 and (.4, .6) under Θ2, then KL(Θ2; Θ1) = .4 log(.4/.2) + .6 log(.6/.8). Unlike in the case of entropy, the sum can be taken to an integral in the limit for a continuous variable x. Clearly for Θu = Θv, the KL distance is zero; it can also be shown that this is a necessary condition, and that the KL distance is asymmetric in general but always non-negative.
If Θu is specialized to Θv and Θv is specialized to Θw, the cost is additive, i.e., KL(Θu; Θv) + KL(Θv; Θw). We will denote the cost of such a path as KL(Θu; Θv; Θw). Moreover, the model encoding cost of Θv starting from the global root model will be denoted KL(Θ0; ...; Θv).
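Both costs are straightforward to compute. The following sketch is ours (not the paper's code); it uses log base 2 so costs are in bits, takes a toy discrete model (hub scores are in fact continuous, an issue the paper returns to in §4.2), and reproduces the binary example in the text for Equation (2).

    import math

    def data_encoding_cost(scores, prob):
        """Cost in bits of encoding the scores under a model: -sum log2 Pr(h), Equation (1).
        'prob' is any callable returning the model probability of a score."""
        return -sum(math.log2(prob(h)) for h in scores)

    def kl(p, q):
        """Discrete KL distance KL(p; q) = sum_x p(x) log2( p(x)/q(x) ), Equation (2)."""
        return sum(px * math.log2(px / qx) for px, qx in zip(p, q) if px > 0)

    # The binary example from the text: Theta_1 = (.2, .8), Theta_2 = (.4, .6).
    print(kl([0.4, 0.6], [0.2, 0.8]))   # KL(Theta_2; Theta_1) = .4 log(.4/.2) + .6 log(.6/.8)
    print(kl([0.2, 0.8], [0.2, 0.8]))   # zero when the two distributions coincide

    # A toy discrete model for illustration only:
    pmf = {0: 0.25, 1: 0.75}
    print(data_encoding_cost([1, 1, 0, 1], lambda h: pmf[h]))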
Combined optimization problem: Given the model Θu at the parent node u and the observed data Hv, we should choose Θv so as to minimize the sum of the KL distance and data encoding cost:
KL(Θv; Θu) − Σ_{h∈Hv} log Pr_Θv(h).    (3)
If Θv is expressed parametrically, this will involve an optimization over those parameters.
With the above set-up, we are looking for a cut or frontier F across the tree, and for each v ∈ F, a Θv, such that
Σ_{v∈F} [ KL(Θ0; ...; Θv) − Σ_{h∈Hv} log Pr_Θv(h) ]    (4)
is minimized. The first part expresses the total model encoding cost of all nodes v on the frontier F starting from the global root distribution. The second part corresponds to the data encoding cost for the set of hub scores Hv at the leaves of the subtrees rooted at the nodes v. Figure 6 illustrates the two costs.
4.2 Practical considerations
The formulation above is impractical for a number of reasons. There is a reduction from the knapsack problem to the frontier-finding problem. Dynamic programming can be used to give close approximations [13, 18], but with tens of thousands of macro-level pages, each with hundreds of DOM nodes, something even simpler is needed. We describe the simplifications we had to make to control the complexity of our algorithm.
We use the obvious greedy expansion strategy. We initialize our frontier with the global root and keep picking a node u from the frontier to see if expanding it to its immediate children {v} will result in a reduction in code length; if so we replace u by its children, and continue until no further improvement is possible. We compare two costs locally at each u:
The cost of encoding all the data in Hu with respect to model Θu.
The cost of expanding u to its children, i.e., Σ_v KL(Θu; Θv), plus the cost of encoding the subtrees Hv with respect to Θv.
If the latter cost is less, we expand u; otherwise, we prune it, meaning that u becomes a frontier node.
Another issue is with optimizing the model Θv. Usually, closed-form solutions are rare and numerical optimization must be resorted to; again impractical in our setting. In practice, if Hv is moderately large, the data encoding cost tends to be larger than the model cost. In such cases, a simple approximation which works quite well is to first minimize the data encoding cost for Hv by picking parameter values for Θv that maximize the probability of the observed data (the "maximum likelihood" or ML parameters), thus fixing Θv, then evaluate KL(Θu; Θv).
(As an example, if a coin tossed n times turns up heads k times, the ML parameter for bias is simply k/n, but if a uniform Θu = U(0, 1) is chosen, the mean of Θv shifts slightly to (k + 1)/(n + 2), which is a negligible change for moderately large k and n.)
Non-parametric evaluation of the KL distance is complicated, and often entails density estimates. We experimented with two parametric distributions, the Gaussian and the exponential, for which the KL distance has closed-form expressions. We finally picked the exponential distribution because it fit the observed hub score distribution more closely.
If Θ represents an exponential distribution with mean θ and probability density f(x) = (1/θ) exp(−x/θ), then
KL(Θ1; Θ2) = log(θ2/θ1) + θ1/θ2 − 1,    (5)
where θi corresponds to Θi (i = 1, 2).
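Under the exponential model, the greedy decision at each node needs only a few lines. The sketch below is illustrative, not the authors' implementation: leaf_scores() and the children attribute are assumed helpers, the means are the ML parameters, costs use natural logarithms, and the KL term is the closed form of Equation (5).

    import math

    def exp_kl(theta_u, theta_v):
        # KL(Theta_u; Theta_v) for exponentials with means theta_u, theta_v (Equation 5)
        return math.log(theta_v / theta_u) + theta_u / theta_v - 1.0

    def exp_data_cost(scores, theta):
        # -sum log f(h) for the exponential density f(h) = (1/theta) exp(-h/theta)
        return sum(math.log(theta) + h / theta for h in scores)

    def should_prune(u):
        """Greedy rule: prune u (keep it on the frontier) unless expanding u to its
        children reduces the total code length.  leaf_scores() is an assumed helper
        returning the hub scores at the HREF leaves under a node."""
        H_u = [h for h in leaf_scores(u) if h > 0]   # guard: ignore zeros so the ML mean is positive
        if not H_u:
            return True
        theta_u = sum(H_u) / len(H_u)                # ML mean for u
        cost_keep = exp_data_cost(H_u, theta_u)
        cost_expand = 0.0
        for v in u.children:
            H_v = [h for h in leaf_scores(v) if h > 0]
            if not H_v:
                continue
            theta_v = sum(H_v) / len(H_v)            # ML mean for v
            cost_expand += exp_kl(theta_u, theta_v) + exp_data_cost(H_v, theta_v)
        return cost_keep <= cost_expand

A full frontier search would wrap this test in the greedy loop just described, expanding from the root until should_prune() holds at every frontier node.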
The next issue is how to measure data encoding cost for continuous variables. There is a notion of the relative entropy of a continuous distribution which generalizes discrete entropy, but the relative entropy can be negative and is useful primarily for comparing the information content in two signal sources. Therefore we need to discretize the hub scores.
A common approach to discretizing real values is to scale the smallest value to one, in effect allocating log(hmax/hmin) bits per value. This poses a problem in our case. Consider the larger graph in Figure 5. If h is initialized to (1, 1, 1, 1, 1)ᵀ, after the first few multiplications by EEᵀ, which represents the linear transformation
(h(1), ..., h(5))ᵀ ← (h(1) + h(3), 0, h(1) + 2h(3), h(4), 0)ᵀ,
we get (2, 0, 3, 1, 0)ᵀ, (5, 0, 8, 1, 0)ᵀ, (13, 0, 21, 1, 0)ᵀ, and (34, 0, 55, 1, 0)ᵀ. Even if we disregard the zeroes, the ratio of the largest to the smallest positive component of h grows without bound. As scaling is employed to prevent overflow, h(4) decays towards zero. This makes the log(hmax/hmin) strategy useless.
A reasonable compromise is possible by noting that the user is not interested in the precision of all the hub scores. E.g., reporting the top ρ fraction of positive hub scores to within a small multiplicative error of ε is quite enough. We used ρ = 0.8 and ε = 0.05 in our experiments.
4.3 Distillation using segmented hubs
In this section we will embed the segmentation algorithm discussed in the previous section into the edge-weighted B&H algorithm. (Unlike the full B&H algorithm, we do no text analysis at this stage. We continue to call the edge-weighted version of HITS "B&H" for simplicity.)
The main modification will be the insertion of a call to the segmentation algorithm after the h ← Ea step and before the complementary step a ← Eᵀh. It is also a reasonable assumption that the best frontier will segment each hub non-trivially, i.e., below its DOM root. Therefore we can invoke the segmentation routine separately on each page. Let the segmentation algorithm described previously be invoked as
F ← segment(u)
where u is the DOM tree root of a page and F is the returned frontier for that page. Here is the pseudo-code for one iteration:
h ← Ea
for each document DOM root u
    F ← segment(u)
    for each frontier node v ∈ F
        h(v) ← Σ_{w∈Lv} h(w)
        for each w ∈ Lv
            h(w) ← h(v)
        reset h(v) ← 0
a ← Eᵀh
normalize a so that Σ_u a(u) = 1.
For convenience we can skip the hub normalization and only normalize authorities every complete cycle; this does not affect ranking.
The reader will observe that this is not a linear relaxation as was the case with HITS, Clever, or B&H, because segment may lead us to aggregate and redistribute different sets of hub scores in different iterations, based on the current leaf hub scores. (Also note that if F were fixed for each page for all time, the system would still be linear and therefore guaranteed to converge.) Although convergence results for non-linear dynamical systems are rare [10], in our experiments we never found convergence to be a problem (see §5).
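For reference, one full cycle might look as follows in code. This is our sketch of the pseudo-code above, not the authors' implementation: apply_E(), apply_E_T(), segment(), and leaves() (returning Lv, the HREF leaves under a frontier node) are assumed helpers, and scores are kept in dictionaries keyed by node.

    def one_cycle(page_roots, a):
        """One iteration of distillation with segmented hubs (sketch)."""
        h = apply_E(a)                       # h <- E a: each HREF leaf gets its target's authority
        for u in page_roots:                 # u is the DOM root of a hub page
            F = segment(u)                   # MDL-chosen frontier for this page
            for v in F:
                total = sum(h.get(w, 0.0) for w in leaves(v))
                for w in leaves(v):          # redistribute the aggregate to the leaves under v
                    h[w] = total
                h[v] = 0.0                   # the internal frontier node keeps no hub score itself
        a = apply_E_T(h)                     # a <- E^T h
        z = sum(a.values()) or 1.0
        return {v: x / z for v, x in a.items()}   # normalize authorities to sum to 1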
However, we do have to take care with the initial values of a and h, unlike in the linear relaxation situation where any positive value will do. Assume that the first iteration step transfers weights from authorities to hubs, and consider how we can initialize the authority scores. In contrast to HITS, we cannot start with all a(v) = 1. Why not? Because both good and bad authorities will get this score, resulting in many hub DOM subtrees looking more uniformly promising than they should. This will lead the segment algorithm to prune the frontier too eagerly, resulting in potentially excessive authority diffusion, as in HITS.
We propose a more conservative initialization policy. Similar to B&H, we assume that the textual content of the root-set documents returned by the text search engine is more reliably relevant than the radius-1 neighbors included for distillation. Therefore we start our algorithm by assigning only root-set authority scores to one. Of course, once the iterations start, this does not prevent authority from diffusing over to siblings, but the diffusion is controlled by hub segmentation.
There is one other way in which we bias our algorithm to be conservative w.r.t. authority diffusion. If a DOM node has only one child with a positive hub score, or if there is a tie in the cost of expanding vs. pruning, we expand the node, thereby pushing the frontier down and preventing the leaf hub score from spreading out to possibly irrelevant outlinks.
Taken together, these two policies may be a little too conservative, sometimes preventing desirable authority diffusion and bringing our algorithm closer to MicroHITS than we would like. For example, the graph being distilled may be such that page u has one DOM subtree clearly (to a human reading the text) dedicated to motorcycles, but only one link target v is in the expanded set. In ongoing work we are integrating text analysis into our fine-grained model to avoid such pitfalls [7].
Experiments and results
We used the 28 queries used in the Clever studies [5, 6] and by B&H [2] (shown in Figure 7). For each, RagingSearch returned at most 500 responses in the root set. These 500 × 28 pages were fetched and all their outlinks included in our database as well. RagingSearch and HotBot were used to get as many inlinks to the root set as possible; these were also included in our database. This resulted in about 488000 raw URLs.
After normalizing URLs and eliminating duplicates, approximately 366000 page fetches succeeded. We used the w3c command-line page fetching tool from http://www.w3c.org for its reliable timeout mechanism. We then scanned all these pages and filled a global (macro-)link table with 2105271 non-local links, i.e., links between pages not on the same hostname (as a lowercase string without port number).
We then proceeded to parse the documents into their DOMs in order to populate a different set of tables that represented the DOM nodes and the micro-links between them. We used the javax.swing.text.html.parser package and built a custom pared-down DOM generator on top of the SAX scanner provided. The total number of micro-links was 9838653, and the total number of micro-nodes likewise increased.
Out of the two million non-local links, less than 1% had targets that were not the root of the DOM tree of a page. Thus our introduction of the asymmetry in handling hubs and authorities seems to be not a great distortion of reality.
Even though our experiments were performed on a 700 MHz Pentium Xeon processor with 512 MB RAM and 60 GB of disk, handling this scale of operation required some care and compromise. In particular, to cut down the micro-graph to only about 10 million edges, we deleted all DOM paths that did not lead to an <A...>...</A> element.
Otherwise,\nwe estimated that the number of micro-links would be at\nleast two orders of magnitude larger\n2\n.\n2\nIn our ongoing work we are having to address this issue as we are\nalso analyzing text.\n217\n#\nQuery\nDrift\nMixed\n1\n``affirmative action''\nlarge\n2\nalcoholism\n\n3\n``amusement park*''\nsmall\n\n4\narchitecture\n\n5\nbicycling\n6\nblues\n7\n``classical guitar''\nsmall\n\n8\ncheese\n\n9\ncruises\n10\n``computer vision''\n11\n``field hockey''\n12\ngardening\n\n13\n``graphic design''\nlarge\n14\n``Gulf war''\nlarge\n15\nHIV\n\n16\n``lyme disease''\nsmall\n\n17\n``mutual fund*''\nsmall\n18\n``parallel architecture''\n\n19\n``rock climbing''\nlarge\n20\n+recycling +can*\n\n21\n+stamp +collecting\n22\nShakespeare\n\n23\nsushi\nsmall\n\n24\ntelecommuting\nlarge\n25\n+Thailand +tourism\nlarge\n26\n``table tennis''\nsmall\n27\n``vintage cars''\nsmall\n\n28\n+Zen +buddhism\nlarge\nFigure 7: The set of 28 broad queries used for comparing B&H\n(without text analysis) and our system.The second column shows\nthe extent of drift in the B&H response.The third column shows\nif mixed hubs were found within the top 50 hubs reported.\nFigure 7 shows the 28 queries used by the Clever study\nand by B&H. As indicated before, our baseline was B&H\nwith edge-weighting but without text-based outlier elimination\n, which we will simply call \"B&H\". We did not have any\narbitrary cut-off for the number of in-links used as we did not\nknow which to discard and which to keep. As B&H noted,\nedge-weighting improved results significantly, but without\ntext analysis is not adequate to prevent drift. Of the 28\nqueries, half show drift to some extent. We discuss a few\ncases.\n\"Affirmative action\" is understandably dominated by\nlists of US universities because they publicize their support\nfor the same. Less intuitive was the drift into the world\nof software, until we found http://206.243.171.145/7927.\nhtml in the root set which presents a dialup networking software\ncalled Affirmative Action, and links to many popular\nfreeware sites (figure 8).\nBy itself, this page would not\nsurvive the link-based ranking, but the clique of software\nsites leads B&H astray.\nAnother example was \"amusement parks\" where B&H\nfell prey to multi-host nepotism in spite of edge-weighting. A\ndensely connected conglomerate including the relevant starting\npoint http://www.411fun.com/THEMEPARKS/ (figure 9)\nformed a multi-site nepotistic cluster and misled macroscopic\nalgorithms.\nIn both these cases there were ample clues in the DOM\nstructure alone (leave alone text) that authority diffusion\nshould be suppressed. We obtained several cases of reduced\ndrift using our technique. (In ongoing work we are getting\nthe improvement evaluated by volunteers.) One striking\nexample was for the query \"amusement parks\" where our\nalgorithm prevented http://www.411... 
from taking over\nthe show (see figure 10; complete results are in AP-macro.\nhtml and AP-micro.html).\nFigure 8: The part of this HTML page that contains the query\naffirmative action is not very popular, but adjoining DOM\nsubtrees (upper right corner) create a dense network of software\nsites and mislead macroscopic distillation algorithms.Dotted red\nlines are drawn by hand.\nFigure 9: The 411 \"clique attack\" comprises a set of sibling sites\nwith different hostnames and a wide variety of topics linking to\neach other.A human can easily avoid paying attention to the\nsibling sites but macroscopic distillation will get misled.Dotted\nred lines are drawn by hand.\nFigure 7 also shows that for almost half the queries, we\nfound excellent examples of mixed hubs within the top 50\nhubs reported. Given the abundance of hubs on these topics,\nwe had anticipated that the best hubs would be \"pure\".\nWhile this was to some extent true, we found quite a few\nmixed hubs too. Our system automatically highlighted the\nmost relevant DOMsubtree; we present some examples in\nfigure 11 and urge the reader to sample the annotated hubs\npackaged with the HTML version of this paper.\nMacroscopic\nFine-grained\nhttp://www.411boating.com\nhttp://www.411jobs.com\nhttp://www.411insure.com\nhttp://www.411hitech.com\nhttp://www.411freestuff.com\nhttp://www.411commerce.com\nhttp://www.411-realestate.com\nhttp://www.411worldtravel.com\nhttp://www.411worldsports.com\nhttp://www.411photography.com\nhttp://www.kennywood.com\nhttp://www.beachboardwalk.com\nhttp://www.sixflags.com\nhttp://www.cedarpoint.com\nhttp://www.pgathrills.com\nhttp://www.pki.com\nhttp://www.valleyfair.com\nhttp://www.silverwood4fun.com\nhttp://www.knotts.com\nhttp://www.thegreatescape.com\nhttp://www.dutchwonderland.com\nFigure 10: The fine-grained algorithm is less susceptible to clique\nattacks.The query here is amusement parks.\n218\nFigure 11: Two samples of mixed hub annotations: amusement\nparks amidst roller-coaster manufacturers and sushi amidst\ninternational cuisine.\nQuery\nAnnotated file\nalcoholism\nAL1.html\nAmusement parks\nAP1.html\nArchitecture\nAR1.html\nClassical guitar\nCG1.html\nHIV\nHI1.html\nShakespeare\nSH1.html\nSushi\nSU1.html\nWe verified that our smoothing algorithm was performing\nnon-trivial work: it was not merely locating top-scoring\nauthorities and highlighting them. Within the highlighted\nregions, we typically found as many unvisited links as links\nalready rated as authorities. In ongoing work we are using\nthese new links for enhanced focused crawling.\nA key concern for us was whether the smoothing iterations\nwill converge or not.\nBecause the sites of hub\naggregation are data-dependent, the transform was non-linear\n, and we could not give a proof of convergence. In\npractice we faced no problems with convergence; figure 12\nis typical of all queries.\nThis raised another concern: was the smoothing subroutine\ndoing anything dynamic and useful, or was convergence\ndue to its picking the same sites for hub aggregation every\ntime? In figures 13 and 14 we plot relative numbers of\nnodes pruned vs. expanded against the number of iterations\n. Queries which do not have a tendency to drift look\nlike figure 13. Initially, both numbers are small. 
As the system bootstraps into controlled authority diffusion, more candidate hubs are pruned, i.e., accepted in their entirety. Diffused authority scores in turn lead to fewer nodes getting expanded. For queries with a strong tendency to drift (Figure 14), the number of nodes expanded does not drop as low as in low-drift situations. For all the 28 queries, the respective counts stabilize within 10-20 iterations.
Figure 12: In spite of the non-linear nature of our relaxation algorithm, convergence is quick in practice. A typical chart of average change to authority scores is shown against successive iterations. (The mean authority score change, plotted on a log scale, drops steadily over about 10 iterations.)
Figure 13: Our micro-hub smoothing technique is highly adaptive: the number of nodes pruned vs. expanded changes dramatically across iterations, but stabilizes within 10-20 iterations. There is also a controlled induction of new nodes into the response set owing to authority diffusion via relevant DOM subtrees (query: bicycling). (The chart plots the #Prune and #Expand counts per iteration.)
Figure 14: For some queries for which B&H showed high drift, our algorithm continues to expand a relatively larger number of nodes in an attempt to suppress drift (query: affirmative action).
Finally, we checked how close we were to B&H ranking. We expected our ranking to be correlated with theirs, but verified that there are meaningful exceptions. Figure 15 shows a scatter plot of authority scores. It illustrates that we systematically under-rate authorities compared to B&H (the axes have incomparable scale; the leading straight line should be interpreted as y = x). This is a natural outcome of eliminating pseudo-authorities that gain prominence in B&H via mixed hubs.
Figure 15: Our ranking is correlated to B&H, but not identical; we tend to systematically under-rate authorities compared to B&H. (Scatter plot of B&H authority score vs. our authority score.)
Conclusion and future work
We have presented a fine-grained approach to topic distillation that integrates document substructure (in the form of the Document Object Model) with regular hyperlinks. Plugging in the fine-grained graph in place of the usual coarse-grained graph does not work because the fine-grained graph may not have the bipartite cores so vital to the success of macroscopic distillation algorithms. We propose a new technique for aggregating and propagating micro-hub scores at a level determined by the Minimum Description Length principle applied to the DOM tree with hub scores at the leaves. We show that the resulting procedure still converges in practice, reduces drift, and is moreover capable of identifying and extracting regions (DOM subtrees) relevant to the query out of a broader hub or a hub with additional less-relevant contents and links.
In ongoing work, apart from completing a detailed user study (as in the Clever project), we are exploring three more ideas. First, our algorithm depends on DOM branch points to be able to separate relevant hub fragments from irrelevant ones. We have seen some pages with a long sequence of URLs without any helpful DOM structure such as <LI> providing natural segment candidates.
Second, we need to\nbring back some of the text analysis techniques that have\nimproved HITS and integrate them with our model. Third,\nwe are measuring if the link localization done by our system\ncan help in faster resource discovery.\nAcknowledgment: Thanks to Vivek Tawde and Hrishikesh\nGupta for helpful discussions, to S. Sudarshan for stimulating\ndiscussions and generous support from the Informatics\nLab, IIT Bombay, and the anonymous reviewers for helping\nto improve the presentation.\nReferences\n[1] E.Amitay and C.Paris. Automatically summarising web\nsites: Is there a way around it?\nIn 9th International\nConference on Information and Knowledge Management\n(CIKM 2000), Washington, DC, USA, 2000.ACM. Online\nat http://www.mri.mq.edu.au/~einat/publications/\ncikm2000.pdf.\n[2] K.Bharat and M.Henzinger. Improved algorithms for\ntopic distillation in a hyperlinked environment.In 21st\nInternational ACM SIGIR Conference on Research and\nDevelopment in Information Retrieval, pages 104111, Aug.\n1998.Online at ftp://ftp.digital.com/pub/DEC/SRC/\npublications/monika/sigir98.pdf.\n[3] S.Brin and L.Page. The anatomy of a large-scale\nhypertextual web search engine.In Proceedings of the 7th\nWorld-Wide Web Conference (WWW7), 1998.Online at\nhttp://decweb.ethz.ch/WWW7/1921/com1921.htm.\n[4] O.Buyukkokten, H.Garcia-Molina, and A.Paepcke.\nFocused web searching with PDAs.In World Wide Web\nConference, Amsterdam, May 2000.Online at http://www9.\norg/w9cdrom/195/195.html.\n[5] S.Chakrabarti, B.Dom, D.Gibson, J.Kleinberg, P.Raghavan\n, and S.Rajagopalan. Automatic resource compilation\nby analyzing hyperlink structure and associated text.In\n7th World-wide web conference (WWW7), 1998.Online\nat http://www7.scu.edu.au/programme/fullpapers/1898/\ncom1898.html.\n[6] S.Chakrabarti, B.E.Dom, S.Ravi Kumar, P.Raghavan,\nS.Rajagopalan, A.Tomkins, D.Gibson, and J.Kleinberg.\nMining the Web's link structure. IEEE Computer, 32(8):60\n67, Aug.1999.\n[7] S.Chakrabarti, M.Joshi, and V.Tawde. Enhanced\ntopic distillation using text, markup tags, and hyperlinks.\nSubmitted for publication, Jan.2001.\n[8] S.Chakrabarti, M.van den Berg, and B.Dom. Focused\ncrawling: a new approach to topic-specific web resource\ndiscovery. Computer Networks, 31:16231640, 1999.First\nappeared in the 8th International World Wide Web Conference\n, Toronto, May 1999.Available online at http://www8.\norg/w8-papers/5a-search-query/crawling/index.html.\n[9] T.M.Cover and J.A.Thomas. Elements of Information\nTheory.John Wiley and Sons, Inc., 1991.\n[10] D.A.Gibson, J.M.Kleinberg, and P.Raghavan.Clustering\ncategorical data: An approach based on dynamical systems.\nIn VLDB, volume 24, pages 311322, New York, Aug.1998.\n[11] G.H.Golub and C.F.van Loan. Matrix Computations.\nJohns Hopkins University Press, London, 1989.\n[12] M.Hearst. Multi-paragraph segmentation of expository\ntext.In Proceedings of the 32nd Annual Meeting of the\nAssociation for Computational Linguistics, Las Cruces, NM,\nJune 1994.Online at http://www.sims.berkeley.edu/\n~hearst/publications.shtml.\n[13] D.S.Johnson and K.A.Niemi. On knapsacks, partitions,\nand a new dynamic programming technique for trees.\nMathematics of Operations Research, 8(1):114, 1983.\n[14] J.Kleinberg. Authoritative sources in a hyperlinked\nenvironment.In ACM-SIAM Symposium on Discrete\nAlgorithms, 1998.Online at http://www.cs.cornell.edu/\nhome/kleinber/auth.ps.\n[15] R.Lempel and S.Moran. 
The stochastic approach for link-structure\nanalysis (SALSA) and the TKC effect.In WWW9,\npages 387401, Amsterdam, May 2000.Online at http://\nwww9.org/w9cdrom/175/175.html.\n[16] J.Rissanen. Stochastic complexity in statistical inquiry. In\nWorld Scientific Series in Computer Science, volume 15.\nWorld Scientific, Singapore, 1989.\n[17] G.Salton and M.J.McGill. Introduction to Modern\nInformation Retrieval.McGraw-Hill, 1983.\n[18] S.Sarawagi. Explaining differences in multidimensional\naggregates.In International Conference on Very Large\nDatabases (VLDB), volume 25, 1999.Online at http:\n//www.it.iitb.ernet.in/~sunita/papers/vldb99.pdf.\n220", "keywords": "PageRank algorithm;segmentation;HITS;link localization;Topic distillation;DOM;Document Object Model;XML;microscopic distillation;text analysis;Minimum Description Length principle;Google;hub fragmentation;hyperlink;topic distillation"} {"name": "115", "title": "Integration of Information Assurance and Security into the IT2005 Model Curriculum", "abstract": "In this paper we present the context of the work of the Curriculum Committee on IT2005, the IT curriculum volume described in the Overview Draft document of the Joint Task Force for Computing Curriculum 2004. We also provide a brief introduction to the history and work of the Information Assurance Education community. These two perspectives provide the foundation for the main thrust of the paper, which is a description of the Information Assurance and Security (IAS) component of the IT2005 document. Finally, we end the paper with an example of how IAS is being implemented at BYU as a \"pervasive theme\" that is woven throughout the curriculum and conclude with some observations about the first year's experience.", "fulltext": "INTRODUCTION\nIn December 2001 a meeting (CITC-1) of interested parties from\nfifteen four-year IT programs from the US along with\nrepresentatives from IEEE, ACM, and ABET began work on the\nformalization of Information Technology as an accredited\nacademic discipline. The effort has evolved into SIGITE, the\nACM SIG for Information Technology Education. During this\nevolution three main efforts have proceeded in parallel: 1)\nDefinition of accreditation standards for IT programs, 2) Creation\nof a model curriculum for four-year IT programs, and 3)\nDescription of the characteristics that distinguish IT programs\nfrom the sister disciplines in computing.\nOne of the biggest challenges during the creation of the model\ncurriculum was understanding and presenting the knowledge area\nthat was originally called \"security\". Some of us were\nuncomfortable with the term because it was not broad enough to\ncover the range of concepts that we felt needed to be covered. We\nbecame aware of a community that had resolved many of the\nissues associated with the broader context we were seeking,\nInformation Assurance. Information assurance has been defined\nas \"a set of measures intended to protect and defend information\nand information systems by ensuring their availability, integrity,\nauthentication, confidentiality, and non-repudiation. This includes\nproviding for restoration of information systems by incorporating\nprotection, detection, and reaction capabilities.\" The IA\ncommunity and work done by IA educators became useful in\ndefining requisite security knowledge for information technology\neducation programs.\nWe believe that the Information Technology and the Information\nAssurance Education communities have much to share. 
At the 9\nth\n\nColloquium for Information System Security Education in Atlanta\nwe introduced CC2005 and IT2005 to the IA Education\ncommunity[1]. In the current paper we introduce the history and\ncurrent state of IA education to the SIGITE community. In\naddition, we demonstrate how significant concepts from the\nInformation Assurance community have been integrated into\nIT2005.\n1.1 CC2005 and IT2005\nIn the first week of December of 2001 representatives from 15\nundergraduate information technology (IT) programs from across\nthe country gathered together near Provo, Utah, to develop a\ncommunity and begin to establish academic standards for this\nrapidly growing discipline. This first Conference on Information\nTechnology Curriculum (CITC-1) was also attended by\nrepresentatives from two professional societies, the Association\nfor Computing Machinery (ACM) and the Institute of Electrical\nand Electronics Engineers, Inc. (IEEE), and also the Accreditation\nBoard for Engineering and Technology, Inc. (ABET). This\ninvitational conference was the culmination of an effort begun\nseveral months earlier by five of these universities who had\nformed a steering committee to organize a response from existing\nIT programs to several initiatives to define the academic discipline\nof IT. The steering committee wanted to ensure that the input of\nexisting programs played a significant role in the definition of the\nfield.\nA formal society and three main committees were formed by the\nattendees of CITC-1. The society was the Society for Information\nTechnology Education (SITE); one of the committees formed was\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nSIGITE'05, October 20-22, 2005, Newark, New Jersey, USA.\nCopyright 2005 ACM\n1-59593-252-6/05/0010...$5.00.\n7\nthe executive board for SITE, composed of a president, vice-president\n, secretary, treasurer, regional representatives, and an\nactivities chairperson. The other two committees formed were the\nIT Curriculum Committee, including subcommittees for 4-year\nand 2-year programs, and the IT Accreditation Committee, also\nincluding subcommittees for 4-year and 2-year programs.\nThe development of IT as an academic discipline is similar to the\nprocess that computer science (CS) went through in the 70's and\n80's. In fact, looking at the placement of CS programs in academic\ninstitutions around the U.S. illustrates the debate that swirled\naround the discipline as its core was being defined. Some CS\nprograms are in departments of mathematics, others are in\nengineering schools, and many others have become mainstay\nprograms within newly emerging colleges of computing.\nInformation technology, as it is practiced at this moment in its\nevolution, reflects similar growing pains. IT programs exist in\ncolleges of computing, in CS departments, in schools of\ntechnology, and in business schools. Professors of information\ntechnology possess degrees in information systems, electronics,\ncommunications, graphics arts, economics, mathematics,\ncomputer science, and other disciplines. 
Few to none of them\nhave a degree in information technology.\nIt should be acknowledged here that IT has two substantially\ndifferent interpretations, and that these should be clarified.\nInformation Technology (IT) in its broadest sense encompasses all\naspects of computing technology. IT, as an academic discipline,\nfocuses on meeting the needs of users within an organizational\nand societal context through the selection, creation, application,\nintegration and administration of computing technologies. A\nmore detailed history of SIGITE is available in [2].\nSIGITE is directly involved with the Joint Task Force for\nComputing Curriculum 2004 and has 2 representatives on the task\nforce. This task force is a continuation of the effort that created\nCC2001\n\n[3]\n\nthe current computer science curriculum standard.\nCC2001 has been relabeled CS2001 and the current draft of the\nCC2004 Overview document [4] presents the structure being used\nto describe computing and its sub-disciplines (See Figure 1). The\nSIGITE Curriculum Committee is responsible for IT2005, the\nInformation Technology Curriculum Volume. IT2005 was made\navailable for comment in mid 2005.\n\n\nFigure 1\n1.2 Information Assurance Education\nInformation assurance has been defined as \"a set of measures\nintended to protect and defend information and information\nsystems by ensuring their availability, integrity, authentication,\nconfidentiality, and non-repudiation. This includes providing for\nrestoration of information systems by incorporating protection,\ndetection, and reaction capabilities.\" (National Security Agency,\nhttp://www.nsa.gov/ia/iaFAQ.cfm?MenuID=10#1).[5]\nInformation assurance education, then, includes all efforts to\nprepare a workforce with the needed knowledge, skills, and\nabilities to assure our information systems, especially critical\nnational security systems. Information assurance education has\nbeen growing in importance and activity for the past two decades.\nA brief look at the involved entities and history will shed light on\nthe growth.\nThe National Information Assurance Education and Training\nPartnership (NIETP) program is a partnership among government,\nacademia and industry focused on advancing information\nassurance education, training, and awareness. The NIETP was\ninitiated in 1990 under National Security Directive 42 and has\nsince been reauthorized several times. The NIETP serves in the\ncapacity of national manager for information assurance education\nand training related to national security systems and coordinates\nthis effort with the Committee on National Security Systems\n(CNSS). \"The CNSS provides a forum for the discussion of\npolicy issues, sets national policy, and promulgates direction,\noperational procedures, and guidance for the security of national\nsecurity systems. National security systems are information\nsystems operated by the U.S. Government, its contractors or\nagents that contain classified information or that:\n1. involve intelligence activities;\n2. involve cryptographic activities related to national security;\n3. involve command and control of military forces;\n4. involve equipment that is an integral part of a weapon or\nweapons system(s); or\n5. 
are critical to the direct fulfillment of military or intelligence\nmissions (not including routine administrative and business\napplications).\" http://www.cnss.gov/history.html[6]\nCNSS is responsible for the development of principles, policies,\nguidelines, and standards that concern systems holding or related\nto national security information. Education and training standards\nare among the many standards and guidelines that CNSS issues.\nThe training/education standards issued to date include: a)\nNSTISSI\n1\n4011 The National Training Standard for Information\nSystems Security Professionals, b) CNSSI 4012 The National\nInformation Assurance Training Standard for Senior Systems\nManagers, c) CNSSI 4013 The National Information Assurance\nTraining Standard for System Administrators, d) CNSSI 4014 -\nInformation Assurance Training Standard for Information Systems\nSecurity Officers, and e) NSTISSI 4015 The National Training\nStandard for Systems Certifiers. CNSSI 4016 The National\n1\nUnder Executive Order (E.O.) 13231 of October 16, 2001, Critical\nInfrastructure Protection in the Information Age, the President\nredesigned the National Security Telecommunications and Information\nSystems Security Committee (NSTISSC) as the Committee on National\nSecurity Systems (CNSS)\n8\nTraining Standard for Information Security Risk Analysts will be\nreleased soon.\nThe NSTISSI-CNSSI standards referenced above have been used\nto develop in-service training and education opportunities for\nenlisted and civilian employees in an effort to assure quality\npreparation of professionals entrusted with securing our critical\ninformation. In addition to providing a basis for in-service\neducation and training, the NSTISSI-CNSSI standards have also\nbeen deployed to colleges and universities in an effort to also\nprepare qualified individuals preservice. The most significant\neffort to involve colleges and universities has been through the\nNational Centers of Academic Excellence in Information\nAssurance Education (CAEIAE) Program. The CAEIAE program\nwas started in 1998 by the National Security Agency (NSA) and is\nnow jointly sponsored by the NSA and the Department of\nHomeland Security (DHS) in support of the President's National\nStrategy to Secure Cyberspace, February 2003. The purpose of the\nprogram is to recognize colleges and universities for their efforts\nin information assurance education and also to encourage more\ncolleges and universities to develop courses and programs of\nstudy in information assurance. In order to be eligible to apply\nfor CAEIAE certification, an institution must first demonstrate\nthat it teaches the content covered in NSTISSI 4011 - The\nNational Training Standard for Information Systems Security\nProfessionals. Once an institution has been 4011 certified, it is\neligible to apply for CAEIAE status. 
Criteria for becoming a\nCAEIAE include the following: a) evidence of partnerships in IA\neducation, b) IA must be treated as a multidisciplinary science, c)\nevidence that the university encourages the practice of\ninformation assurance in its operations, d) demonstration of\ninformation assurance research, e) demonstration that the IA\ncurriculum reaches beyond physical geographic borders, f)\nevidence of faculty productivity in information assurance research\nand scholarship, g) demonstration of state of the art information\nassurance resources, h) a declared concentration(s) in information\nassurance, i) a university recognized center in information\nassurance, and j) dedicated information assurance faculty\n(http://www.nsa.gov/ia/academia/caeCriteria.cfm?MenuID=10.1.1\n.2).[7]\nIn 1999, there were seven institutions designated as the inaugural\nCAEIAE schools. The certification is good for three years at\nwhich time institutions can reapply. Annually, an additional 6-10\ninstitutions are awarded the certification; today, there are more\nthan 60 CAEIAE institutions. The types of institutions and\nprograms that are applying and being certified are growing not\njust in number, but also in diversity. In the first round of\ncertification, the institutions were largely research institutions and\ntheir respective programs were at the graduate level in computer\nscience. Today, institutions are certifying courses at the\nundergraduate level in computer science, management\ninformation systems, and information technology. The work\nbeing done by SIGITE is important to the further expansion of\ninformation assurance education as information assurance\nexpands beyond the development of information systems to\ninclude the entire system life cycle including deployment,\noperation, maintenance, a retirement of such systems.\nINFORMATION ASSURANCE IN IT2005\nThe IT2005 volume is modeled on CS2001. It consists of 12\nchapters and 2 appendices. The current draft resides at\nhttp://sigite.acm.org/activities/curriculum/[8]\nChapter 1. Introduction\nChapter 2. Lessons from Past Reports\nChapter 3. Changes in the Information Technology\nDiscipline\nChapter 4. Principles\nChapter 5. Overview of the IT Body of Knowledge\nChapter 6. Overview of the Curricular Models\nChapter 7. The Core of the Curriculum\nChapter 8. Completing the Curriculum\nChapter 9. Professional Practice\nChapter 10. Characteristics of IT Graduates\nChapter 11. Computing across the Curriculum\nChapter 12. Institutional Challenges\nAcknowledgements\nBibliography\nAppendix A. The IT Body of Knowledge\nAppendix B. IT Course Descriptions\nChapters 5 and 7 are of particular interest for this discussion.\nChapter 5 is an overview of the IT body of knowledge. A\nsummary is included as Appendix A. Chapter 7 discusses the\nrelationship of the core topics described in the body of knowledge\nto IT curriculum. IAS is explicitly mentioned in three contexts:\nSection 7.2 as part of the IT Fundamentals Knowledge\nArea (KA)\nSection 7.2 as a \"pervasive theme\"\nSection 7.4 as a KA that integrates the IAS concepts for\nstudents ready to graduate.\nIAS is the only area that is an IT Fundamental, a \"pervasive\ntheme\" and also a complete KA with a recommended senior level\ncourse for integrating all of the concepts. 
Clearly, IT2005\npresents Information Assurance and Security as a core\ncompetency required by every graduate of an IT program.\nDuring the early analysis of IT as an academic discipline, Delphi\nstudies were performed that ranked \"Security\" as a central area for\nIT. [1]\n\nAs we studied the issues several members of the\ncommittees involved were uncomfortable with \"security\" as the\nname for the knowledge area. The name seemed too restrictive.\nAt the annual SIGITE conference in 2003 two of the authors were\nintroduced to the other author and the Center for Research and\nEducation in Information Assurance and Security (Cerias) at\nPurdue. The BYU faculty was dissatisfied with the security\ncomponent in the IT curriculum and the SIGITE curriculum\ncommittee was struggling with the Security KA for IT2005.\nThrough flyers at the conference we became aware of the\nInformation Assurance Education Graduate Certificate (IAEGC)\nprogram funded by the NSA. With encouragement from\ncolleagues and the administration of the School of Technology,\nthe primary author attended the 2004 program. The experience\nhas had a significant impact on IT2005 and the BYU curriculum.\n9\nWe discovered that NSA had begun to use the umbrella term\nInformation Assurance [9] to cover what we were calling security.\nEven though this term is defined to cover exactly what the IT\ncommunity meant by security, the use of the terminology elicited\na lot of blank stares. We found that explicitly adding security to\nthe name of the knowledge area eliminated much of the confusion.\nWe are indebted to the Center for Education and Research for\nInformation Assurance and Security (CERIAS)[10] at Purdue\nwhose name provided the inspiration to use IAS as a name for the\nknowledge area.\nOnce the naming issue was resolved, the SIGITE curriculum\ncommittee struggled to find a model for IAS that could\nbe understood by freshman IT students\nprovide a framework to integrate IAS concepts that are\nintegrated into nearly all of the other KAs\nbe rich enough to support a senior level course that ties\neverything together.\nWhen A Model for Information Assurance: An Integrative\nApproach [11] was discovered the writing committee achieved\nconsensus on a model. The cube (see Figure 2) provides a simple\nvisual representation that a freshman can understand, yet the 3\ndimensional structure facilitates the detailed analysis required for\nuse in technology specific contexts, and is comprehensive enough\nto encompass a capstone learning experience.\n\nFigure 2\nIT2005 uses this model to structure IAS concepts throughout the\ndocument.\n\nRECOMMENDATIONS FOR \"PERVASIVE THEMES\" IN IT2005\nDuring the deliberations of the SIGITE Curriculum Committee,\nseveral topics emerged that were considered essential, but that did\nnot seem to belong in a single specific knowledge area or unit.\nThese topics, referred to as pervasive themes, are:\n1.\nuser advocacy\n2.\ninformation assurance and security\n3.\nethics and professional responsibility\n4.\nthe ability to manage complexity through: abstraction & modeling,\nbest practices, patterns, standards, and the use of appropriate tools\n5. a deep understanding of information and communication\ntechnologies and their associated tools\n6. adaptability\n7.\nlife-long learning and professional development\n8. 
interpersonal\nskills\nThe committee states \"that these topics are best addressed\nmultiple times in multiple classes, beginning in the IT\nfundamentals class and woven like threads throughout the tapestry\nof the IT curriculum\"[12].\nThese themes need to be made explicit in the minds of the\nstudents and the faculty. The themes touch many of the topics\nthroughout the curriculum. Every time a new technology is\nannounced in the media, an instructor has the opportunity to drive\nhome the importance of \"life-long learning\". Every time there is a\ncyber-crime in the media we have the opportunity to discuss the\nethical and professional ramifications. It is recommended that an\nIT Fundamentals course be taught early in the curriculum where\nall of these themes are introduced and discussed as concepts that\ntouch everything an IT professional does.\nEach of these topics deserves a full treatment; however, for the\npurposes of this paper we will focus on IAS, possibly the most\npervasive theme. We will address one approach to achieve\naddressing IAS \"multiple times in multiple classes\" in section 6\nbelow.\n\nTHE INFORMATION ASSURANCE AND SECURITY KNOWLEDGE AREA\nIn early 2003, the SIGITE curriculum committee divided into\nworking groups around the knowledge areas defined by [3] to\nmake an initial cut at the list of topics for each KA. A significant\nrevision was accomplished and reviewed by the participants at the\n2004 IAEGC program at Purdue in August 2004. The list of areas\nfor the IAS KA was finalized in late 2004 at a full IT Curriculum\nCommittee meeting. The draft of the completed IAS KA was\ncompleted in early Feb 2005 by the IAS working group, edited by\nthe writing committee in late Feb 2005 and was presented to the\nfull committee in April 2005.\nFigure 3 is a list of the IAS KA and its areas. The basic structure\nand vocabulary is derived directly from work done in the IA\ncommunity, specifically Maconachy, et. al.[11]. The number in\nparenthesis is the number of lecture hours the committee thought\nwould be required to give an IT student minimum exposure to the\nunit. It should be noted that the ordering of units in all of the\nKAs, is first \"Fundamentals\", if there is one, and then the units are\nsorted in order of the number of core hours. This ordering should\nnot be considered as any indication of the order the units would\nbe covered pedagogically in an implemented curriculum.\n10\n\nFigure 3\nA summary of the IAS KA is in Appendix A, and a complete\ntreatment is found in IT2005 [4], including topics, core learning\noutcomes, and example elective learning outcomes.\nIn reviewing this model curriculum for IAS in Information\nTechnology, it should be remembered that the core topics and\nassociated lecture hours are the minimum coverage that every IT\nstudent in every program should receive. We would expect that\nmost institutions would provide additional instruction in\nInformation Assurance and Security according to the\nstrengths/areas of specialization in their programs of study.\n\nIT AT BRIGHAM YOUNG UNIVERSITY\nThe Information Technology program at BYU began officially in\nFall 2001 with a faculty consisting of:\n1. Two electronics engineering technology (EET) professors who\nwere instrumental in the evolution of the existing EET program at\nBYU into an IT program,\n2. One electrical engineering, Ph.D. newly arrived from the\naerospace industry.\n3. 
One computer science instructor who had done part time\nteaching and had been part of the department for 1 year with\nseveral years in system development in health care.\n4. One computer science Ph.D. with recent executive management\nresponsibilities in network hardware and service provider\nbusinesses.\n5. The former department chair of the technology education\nprogram for secondary schools joined in 2002.\n6. One computer science Ph. D. with extensive industry\nexperience in data privacy and IT management joined in 2004.\nThis is obviously a diverse group of people, each of whom joined\nthe department because they thought that the existing computing\nprograms at BYU did not offer students preparation for the\npractical aspects of system delivery to customers. We are evenly\ndivided between long-term academics and recent `retreads' from\nindustry. However, the academics have also each had significant\nindustrial experience, which provided the motivation for them to\naccept positions in the new IT program. The BYU curriculum\nbegan as a traditional \"stovepipe\" approach of courses oriented\naround topics like networking, databases, and operating systems\nborrowed from CS, EET, CE and IS, and evolved to a more\nintegrated approach starting at the introductory levels so that\nadvanced topic oriented courses are more easily sequenced. We\nhave also discovered that the integrative nature of IT forces a\nfocus on the seams between technologies rather than\nimplementation of components. This fundamental difference in\nfocus is one of the primary differences that distinguishes IT from\nother computing disciplines that focus on the design and\nimplementation of components[12] [13]. Over the last 4 years,\nBYU faculty has participated actively in SIGITE and attempted to\nshare what has been learned with the emerging IT community.\n[14] [15] [16]\nThe BYU curriculum has evolved into what IT2005 calls a\n\"core/integration first\" approach [17]. Significant portions of the\nintroductory material in operating systems, databases, web\nsystems, networking had been moved to lower division courses by\nearly 2004. Much of the shift occurred when the introduction to\nweb systems was moved from the junior to the sophomore year\nand introductory material sufficient to understand web systems\nwas included for networking, databases, operating system\nadministration and OS process models. The improvements in flow\nand reduced redundancy have been noticeable in the upper\ndivision core courses. Appendix B graphs the current BYU course\nstructure. In late 2004 and early 2005 we began implementing\nthe \"pervasive theme\" of IAS in earnest.\n\nINTEGRATING IAS INTO THE EXISTING BYU CURRICULUM\nA senior level IAS class had been introduced into the curriculum\nin early 2004 and was made a requirement in 2005. However, we\nrecognized that simply adding a required course at the end of a\nstudent's college experience would not be adequate. SIGITE\ndiscussions had placed security in the pervasive theme category at\nthe very beginning, though the name of the KA wasn't chosen\nuntil 2004. 
We were faced with the challenge of integrating the\nIAS fundamentals into the introductory courses, morphing the\nsecurity modules in the existing classes to use the MSRW [11]\nframework and bringing all of the students in the program up to\nspeed on the new framework simultaneously.\nOur approach has been to prepare one hour modules on the\nMSRW framework that can be used in an existing course to bring\nstudents up to speed or taught in seminars as needed. We are in\nthe process of integrating the IAS Fundamentals into our\nintroductory courses. We successfully integrated the IAS modules\ninto the sophomore introduction to web-based systems course,\nwhich was already introducing all of the major IT areas. The\ncourse was modified to replace a 3 week team project experience\nwith a 2 week team oriented lab and then using the time for IAS\ntopics. Much remains to be done, but the initial experience is\npositive. The faculty seems unified in their desire to implement\nIAS as a pervasive theme. For example, 2 lecture and 2 lab hours\nare now included in the computer communications course. 3\nlecture hours and 3 lab hours were added to the web systems\ncourse. The IAS component of the database course was\nrearranged and strengthened with 1 lecture hour added. Similar\nadjustments have been made throughout the curriculum.\nIAS. Information Assurance and Security (23 core hours)\nIAS1. Fundamental Aspects (3)\nIAS2. Security Mechanisms (countermeasures) (5)\nIAS3. Operational Issues (3)\nIAS4. Policy (3)\nIAS5. Attacks (2)\nIAS6. Security Domains (2)\nIAS7. Forensics (1)\nIAS8. Information States (1)\nIAS9. Security Services (1)\nIAS10. Threat Analysis Model (1)\nIAS11. Vulnerabilities (1)\n11\nIn addition to improving the IAS component of the BYU\ncurriculum, we have done an analysis of our coverage of the\nproposed IT2005 core. We have several adjustments in other\nparts of our curriculum. Since we evolved from an EET program,\nthe hardware coverage was extremely strong. We are weak in the\ncoverage of systems and database administration. We will\ncontinue to adjust our curriculum as IT matures as an academic\ndiscipline.\n\nSUMMARY\nInformation Technology is maturing rapidly as an academic\ndiscipline. A public draft of the IT volume described in the\nComputing Curriculum 2004 Overview is ready for review. The\nSIGITE Curriculum Committee is soliciting feedback on the\ndocument. This paper presents a brief history of SIGITE, the\nACM SIG for Information Technology Education, and a brief\nintroduction to the Information Assurance Education community.\nThe authors believe that collaboration between these communities\ncan be of benefit to all of the participants and the industry at large.\nSIGITE and the CC 2005 Joint Task Force solicit feedback on the\ndocuments at http://www.acm.org/education/ .\n\nACKNOWLEDGMENTS\nThe authors would like to thank the ACM Education committee\nfor their support of the IT2005 effort, especially Russ\nShackleford, without whose financial support and encouragement\nthe document would be years away from completion. We would\nalso like to express appreciation to the NSA for funding the\nIAEGC[18] program. Corey Schou's IAEGC lecture on helping\nstudents understand IAS in an hour was the genesis of the IAS\napproach in IT2005. 
The BYU authors would like to express\nappreciation to our colleagues and the administration of the\nSchool of Technology at Brigham Young University, who covered\nour classes and found the funding for the time and travel our\nparticipation in the SIGITE curriculum committee required.\n\nREFERENCES\n[1] Ekstrom, Joseph J., Lunt, Barry M., Integration of Information\nAssurance and Security into IT2005, 9\nth\nColloquium for\nInformation Systems Security Education, June 6-9, 2005,\nAtlanta, Georgia.\n[2] Lunt, Barry M.; Ekstrom, Joseph J.; Lawson, Edith A.;\nKamali, Reza; Miller, Jacob; Gorka, Sandra; Reichgelt, Han;\n\"Defining the IT Curriculum: The Results of the Last 2\nYears\"; World Engineer's Convention 2004, Shanghai,\nChina; Nov 2-6, 2004\n[3] Joint Task Force for Computing Curricula (2001), Computing\nCurricula 2001, Computer Science Volume, December 15,\n2001, Copyright 2001, ACM/IEEE\n[4] ]Joint Task Force for Computing Curricula (2004), Computing\nCurricula 2004: Overview Document,\nhttp://www.acm.org/education/Overview_Draft_11-22-04\n.pdf retrieved Mar. 2, 2005.\n[5] http://www.nsa.gov/ia/iaFAQ.cfm?MenuID=10#1\n[6] http://www.cnss.gov/history.html\n[7]\nhttp://www.nsa.gov/ia/academia/caeCriteria.cfm?MenuID=1\n0.1.1.2\n[8] SIGITE Curriculum Committee (2005), Computing\nCurriculum 2005, IT Volume,\nhttp://sigite.acm.org/activities/curriculum/\n[9] NSA web site, Information Assurance Division;\nhttp://www.nsa.gov/ia/ verified Mar, 4, 2005.\n[10] Cerias web site, http://cerias.purdue.edu/; verified Mar 4,\n2005\n[11] Machonachy, W. Victor; Schou, Corey D.; Ragsdale,\nDaniel; Welch , Don; \"A model for Information Assurance:\nAn Integrated Approach\", Proceedings of the 2001 IEEE\nWorkshop on Information Assurance and Security, United\nStates Military Academy, West Point , NY, 5-6 June 2001.\n[12] Ekstrom, Joseph, Renshaw, Stephen, Curriculum and Issues\nin a First Course of Computer Networking for Four-year\nInformation Technology Programs, ASEE 2002 Session\n2793\n[13] Ekstrom, Joseph, Renshaw, Stephen, A Project-Based\nIntroductory Curriculum in Networking, WEB and Database\nSystems for 4-year Information Technology Programs, CITC\n3 Rochester NY, September, 2002\n[14] Ekstrom, Joseph, Renshaw, Stephen, Database Curriculum\nIssues for Four-year IT Programs, CIEC 2003, Tucson, AZ,\nJanuary, 2003.\n[15] Ekstrom, Joseph; Lunt, Barry; Education at the Seams:\nPreparing Students to Stitch Systems Together; Curriculum\nand Issues for 4-Year IT Programs, CITC IV Purdue\nUniversity, West Lafayette, Indiana, October 2003.\n[16] Ekstrom, Joseph; Lunt, Barry M; Helps, C. Richard;\nEducation at the Seams: Preliminary Evaluation of Teaching\nIntegration as a Key to Education in Information\nTechnology; ASEE 2004, Salt Lake City, Utah, June 2004.\n[17] Section 6.3 of ref [4].\n[18] IAEGC, Information Assurance Education Graduate\nCertificate, http://www.cerias.purdue.edu/iae Validated\nApril 13, 2005.\n\n12\nAppendix A\n\nFrom IT2005 Mar 2005 Draft\n\nThe Information Technology Body of Knowledge\n\nITF. Information Technology Fundamentals (33 core)\nITF1. Pervasive Themes in IT (17)\nITF2. Organizational Issues (6)\nITF3. History of IT (3)\nITF4. IT and Its Related and Informing Disciplines (3)\nITF5. Application Domains (2)\nITF6. Applications of Math and Statistics to IT (2)\nHCI. Human Computer Interaction (20 core hours)\nHCI1. Human Factors (6)\nHCI2. HCI Aspects of Application Domains (3)\nHCI3. Human-Centered Evaluation (3)\nHCI4. Developing Effective Interfaces (3)\nHCI5. 
Accessibility (2)\nHCI6. Emerging Technologies (2)\nHCI7. Human-Centered Software (1)\nIAS. Information Assurance and Security (23 core)\nIAS1. Fundamental Aspects (3)\nIAS2. Security Mechanisms (Countermeasures) (5)\nIAS3. Operational Issues (3)\nIAS4. Policy (3)\nIAS5. Attacks (2)\nIAS6. Security Domains (2)\nIAS7. Forensics (1)\nIAS8. Information States (1)\nIAS9. Security Services (1)\nIAS10. Threat Analysis Model (1)\nIAS11. Vulnerabilities (1)\nIM. Information Management (34 core hours)\nIM1. IM Concepts and Fundamentals (8)\nIM2. Database Query Languages (9)\nIM3. Data Organization Architecture (7)\nIM4. Data Modeling (6)\nIM5. Managing the Database Environment (3)\nIM6. Special-Purpose Databases (1)\nIPT. Integrative Programming & Technologies (23 core)\nIPT1. Intersystems Communications (5)\nIPT2. Data Mapping and Exchange (4)\nIPT3. Integrative Coding (4)\nIPT4. Scripting Techniques (4)\nIPT5. Software Security Practices (4)\nIPT6. Miscellaneous Issues (1)\nIPT7. Overview of programming languages (1)\nNET. Networking (20 core hours)\nNET1. Foundations of Networking (3).\nNET2. Routing and Switching (8)\nNET3. Physical Layer (6)\nNET4. Security (2)\nNET5. Application Areas (1)\nNET6. Network Management\n\nPF. Programming Fundamentals (38 core hours)\nPF1. Fundamental Data Structures (10)\nPF2. Fundamental Programming Constructs (9)\nPF3. Object-Oriented Programming (9)\nPF4. Algorithms and Problem-Solving (6)\nPF5. Event-Driven Programming (3)\nPF6. Recursion (1)\nPT. Platform Technologies (14 core hours)\nPT1. Operating Systems (10)\nPT2. Architecture and Organization (3)\nPT3. Computer Infrastructure (1)\nPT4. Enterprise Deployment Software\nPT5. Firmware\nPT6. Hardware\nSA. System Administration and Maintenance (11 core hours)\nSA1. Operating Systems (4)\nSA2. Applications (3)\nSA3. Administrative Activities (2)\nSA4. Administrative Domains (2)\nSIA. System Integration and Architecture (21 core hours)\nSIA1. Requirements (6)\nSIA2. Acquisition/Sourcing (4)\nSIA3. Integration (3)\nSIA4. Project Management (3)\nSIA5. Testing and QA (3)\nSIA6. Organizational Context (1)\nSIA7. Architecture (1)\nSP. Social and Professional Issues (23 core hours)\nSP1. Technical Writing for IT (5)\nSP2. History of Computing (3)\nSP3. Social Context of Computing (3)\nSP4. Teamwork Concepts and Issues (3)\nSP5. Intellectual Properties (2)\nSP6. Legal Issues in Computing (2)\nSP7. Organizational Context (2)\nSP8. Professional and Ethical Issues and Responsibilities (2)\nSP9. Privacy and Civil Liberties (1)\nWS. Web Systems and Technologies (21 core hours)\nWS1. Web Technologies (10)\nWS2. Information Architecture (4)\nWS3. Digital Media (3)\nWS4. Web Development (3)\nWS5. Vulnerabilities (1)\nWS6. Social Software\nTotal Hours: 281\nNotes:\n1. Order of Knowledge Areas: Fundamentals first, then ordered alphabetically.\n2. Order of Units under each Knowledge Area: Fundamentals first (if present),\nthen ordered by number of core hours.\n\n\n13\nAppendix B\n\n\n14", "keywords": "Information assurance;IT2005 volume;Pervasive Themes;BYU curriculum;NIETP Program;Training standards;In-service training development;Committee on National Security Systems;CITC-1;Information Technology;IT;CC2005;IA;SIGITE Curriculum committee;Education;IT2005;Security Knowledge;Information Assurance;IAS"} {"name": "116", "title": "Interactive Machine Learning", "abstract": "Perceptual user interfaces (PUIs) are an important part of ubiquitous computing. 
Creating such interfaces is difficult because of the image and signal processing knowledge required for creating classifiers. We propose an interactive machine-learning (IML) model that allows users to train, classify/view and correct the classifications. The concept and implementation details of IML are discussed and contrasted with classical machine learning models. Evaluations of two algorithms are also presented. We also briefly describe Image Processing with Crayons (Crayons), which is a tool for creating new camera-based interfaces using a simple painting metaphor. The Crayons tool embodies our notions of interactive machine learning.", "fulltext": "INTRODUCTION
Perceptual user interfaces (PUIs) are establishing the need for machine learning in interactive settings. PUIs like VideoPlace [8], Light Widgets [3], and Light Table [15,16] all use cameras as their perceptive medium. Other systems use sensors other than cameras such as depth scanners and infrared sensors [13,14,15]. All of these PUIs require machine learning and computer vision techniques to create some sort of a classifier. This classification component of the UI often demands great effort and expense. Because most developers have little knowledge of how to implement recognition in their UIs, this becomes problematic. Even those who do have this knowledge would benefit if the classifier-building expense were lessened. We suggest the way to decrease this expense is through the use of a visual image classifier generator, which would allow developers to add intelligence to interfaces without forcing additional programming. Similar to how Visual Basic allows simple and fast development, this tool would allow for fast integration of recognition or perception into a UI.
Implementation of such a tool, however, poses many problems. First and foremost is the problem of rapidly creating a satisfactory classifier. The simple solution is to use behind-the-scenes machine learning and image processing.
Machine learning allows automatic creation of classifiers; however, the classical models are generally slow to train, and not interactive. The classical machine-learning (CML) model is summarized in Figure 1. Prior to the training of the classifier, features need to be selected. Training is then performed \"off-line\" so that classification can be done quickly and efficiently. In this model classification is optimized at the expense of longer training time. Generally, the classifier will run quickly so it can be done real-time. The assumption is that training will be performed only once and need not be interactive. Many machine-learning algorithms are very sensitive to feature selection and suffer greatly if there are very many features.
Figure 1 Classical machine learning model (feature selection, then training, then classification during interactive use)
With CML, it is infeasible to create an interactive tool to create classifiers. CML requires the user to choose the features and wait an extended amount of time for the algorithm to train. The selection of features is very problematic for most interface designers. If one is designing an interactive technique involving laser spot tracking, most designers understand that the spot is generally red. They are not prepared to deal with how to sort out this spot from red clothing, camera noise or a variety of other problems.
There are well-known image processing features for handling these problems, but very few interface designers would know how to carefully select them in a way that the machine learning algorithms could handle.
The current approach requires too much technical knowledge on the part of the interface designer. What we would like to do is replace the classical machine-learning model with the interactive model shown in Figure 2. This interactive training allows the classifier to be coached along until the desired results are met. In this model the designer is correcting and teaching the classifier, and the classifier must perform the appropriate feature selection.
Figure 2 Interactive machine learning (IML) model (feature selection, training and classification during interactive use, driven by feedback to the designer and the designer's manual corrections)
The pre-selection of features can be eliminated and transferred to the learning part of the IML if the learning algorithm used performs feature selection. This means that a large repository of features is initially calculated and fed to the learning algorithm so it can learn the best features for the classification problem at hand. The idea is to feed a very large number of features into the classifier training and let the classifier do the filtering rather than the human. The human designer then is focused on rapidly creating training data that will correct the errors of the classifier.
In classical machine learning, algorithms are evaluated on their inductive power. That is, how well the algorithm will perform on new data based on the extrapolations made on the training data. Good inductive power requires careful analysis and a great deal of computing time. This time is frequently exponential in the number of features to be considered. We believe that using the IML model a simple visual tool can be designed to build classifiers quickly. We hypothesize that when using the IML, having a very fast training algorithm is more important than strong induction. In place of careful analysis of many feature combinations we provide much more human input to correct errors as they appear. This allows the interactive cycle to be iterated quickly so it can be done more frequently.
The remainder of the paper is as follows. The next section briefly discusses the visual tool we created using the IML model, called Image Processing with Crayons (Crayons). This is done to show one application of the IML model's power and versatility. Following the explanation of Crayons, we explore the details of the IML model by examining its distinction from CML, the problems it must overcome, and its implementation details. Finally we present some results from some tests between two of the implemented machine learning algorithms. From these results we base some preliminary conclusions of IML as it relates to Crayons.
IMAGE PROCESSING WITH CRAYONS
Crayons is a system we created that uses IML to create image classifiers.
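Before describing Crayons in detail, a minimal sketch of the train-feedback-correct cycle may help fix ideas. The Python sketch below is for illustration only; the helper callables (collect_corrections, train_classifier, show_feedback) are hypothetical placeholders and not part of Crayons' actual (Java) implementation.

# A minimal sketch of the IML train-feedback-correct loop (hypothetical helpers).
def interactive_training(images, collect_corrections, train_classifier, show_feedback):
    labeled_pixels = []            # (feature_vector, class_label) pairs painted so far
    classifier = None
    while True:
        # The designer paints pixels where the current classifier is wrong (or, on the
        # first pass, paints a little initial data); an empty result means 'accept'.
        corrections = collect_corrections(images, classifier)
        if not corrections:
            return classifier
        labeled_pixels.extend(corrections)
        classifier = train_classifier(labeled_pixels)   # must complete within seconds
        show_feedback(images, classifier)               # overlay predicted classes on the images

The design burden is thus shifted almost entirely to supplying corrections, which is the behaviour the Crayons interface described next exposes through painting.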
Crayons is intended to aid UI designers who do\nnot have detailed knowledge of image processing and\nmachine learning. It is also intended to accelerate the efforts\nof more knowledgeable programmers.\nThere are two primary goals for the Crayons tool: 1) to allow\nthe user to create an image/pixel classifier quickly, and 2) to\nallow the user to focus on the classification problem rather\nthan image processing or algorithms. Crayons is successful if\nit takes minutes rather than weeks or months to create an\neffective classifier. For simplicity sake, we will refer to this\nas the UI principle of fast and focused. This principle refers\nto enabling the designer to quickly accomplish his/her task\nwhile remaining focused solely on that task.\nFigure 3 shows the Crayons design process. Images are input\ninto the Crayons system, which can then export the generated\nclassifier. It is assumed the user has already taken digital\npictures and saved them as files to import into the system, or\nthat a camera is set up on the machine running Crayons, so it\ncan capture images from it. Exporting the classifier is equally\ntrivial, since our implementation is written in Java. The\nclassifier object is simply serialized and output to a file using\nthe standard Java mechanisms.\n\nFigure 3 Classifier Design Process\nAn overview of the internal architecture of Crayons is shown\nin Figure 4. Crayons receives images upon which the user\ndoes some manual classification, a classifier is created, then\nfeedback is displayed. The user can then refine the classifier\nby adding more manual classification or, if the classifier is\nsatisfactory, the user can export the classifier. The internal\nloop shown in Figure 4 directly correlates to the\naforementioned train, feedback, correct cycle of the IML (see\nFigure 2). To accomplish the fast and focused UI principle,\nthis loop must be easy and quick to cycle through. To be\ninteractive the training part of the loop must take less than\nfive seconds and generally much faster. The cycle can be\nbroken down into two components: the UI and the Classifier.\nThe UI component needs to be simple so the user can remain\nfocused on the classification problem at hand. The classifier\ncreation needs to be fast and efficient so the user gets\nfeedback as quickly as possible, so they are not distracted\nfrom the classification problem.\n\nFigure 4 The classification design loop\n40\n\n\nAlthough the IML and the machine-learning component of\nCrayons are the primary discussion of this paper it is notable\nto mention that Crayons has profited from work done by\nViola and Jones [19] and Jaimes and Chang [5,6,7]. Also a\nbrief example of how Crayons can be used is illustrative. The\nsequence of images in Figure 5 shows the process of creating\na classifier using Crayons.\n\n\n\nFigure 5 Crayons interaction process\n\nFigure 5 illustrates how the user initially paints very little\ndata, views the feedback provided by the resulting classifier,\ncorrects by painting additional class pixels and then iterates\nthrough the cycle. As seen in the first image pair in Figure 5,\nonly a little data can generate a classifier that roughly learns\nskin and background. The classifier, however, over-generalizes\nin favor of background; therefore, in the second\nimage pair you can see skin has been painted where the\nclassifier previously did poorly at classifying skin. 
The resulting classifier shown on the right of the second image pair shows the new classifier classifying most of the skin on the hand, but also classifying some of the background as skin. The classifier is corrected again, and the resulting classifier is shown as the third image pair in the sequence. Thus, in only a few iterations, a skin classifier is created.
The simplicity of the example above shows the power that Crayons has due to the effectiveness of the IML model. The key issue in the creation of such a tool lies in quickly generating effective classifiers so the interactive design loop can be utilized.
MACHINE LEARNING
For the IML model to function, the classifier must be generated quickly and be able to generalize well. As such we will first discuss the distinctions between IML and CML, followed by the problems IML must overcome because of its interactive setting, and lastly its implementation details including specific algorithms.
CML vs IML
Classical machine learning generally has the following assumptions.
- There are relatively few carefully chosen features,
- There is limited training data,
- The classifier must amplify that limited training data into excellent performance on new training data,
- Time to train the classifier is relatively unimportant as long as it does not take too many days.
None of these assumptions hold in our interactive situation. Our UI designers have no idea what features will be appropriate. In fact, we are trying to insulate them from knowing such things. In our current Crayons prototype there are more than 150 features per pixel. To reach the breadth of application that we desire for Crayons we project over 1,000 features will be necessary. The additional features will handle texture, shape and motion over time. For any given problem somewhere between three and fifteen of those features will actually be used, but the classifier algorithm must automatically make this selection. The classifier we choose must therefore be able to accommodate such a large number of features, and/or select only the best features.
In Crayons, when a designer begins to paint classes on an image a very large number of training examples is quickly generated. With 77K pixels per image and 20 images one can rapidly generate over a million training examples. In practice, the number stays in the 100K examples range because designers only paint pixels that they need to correct rather than all pixels in the image. What this means, however, is that designers can generate a huge amount of training data very quickly. CML generally focuses on the ability of a classifier to predict correct behavior on new data. In IML, however, if the classifier's predictions for new data are wrong, the designer can rapidly make those corrections. By rapid feedback and correction the classifier is quickly (in a matter of minutes) focused onto the desired behavior. The goal of the classifier is not to predict the designer's intent into new situations but rapidly reflect intent as expressed in concrete examples.
Because additional training examples can be added so readily, IML's bias differs greatly from that of CML. Because it extrapolates a little data to create a classifier that will be frequently used in the future, CML is very concerned about overfit. Overfit is where the trained classifier adheres too closely to the training data rather than deducing general principles.
Cross-validation and other measures are generally taken to minimize overfit. These measures add substantially to the training time for CML algorithms. IML's bias is to include the human in the loop by facilitating rapid correction of mistakes. Overfit can easily occur, but it is also readily perceived by the designer and instantly corrected by the addition of new training data in exactly the areas that are most problematic. This is shown clearly in Figure 5 where a designer rapidly provides new data in the edges of the hand where the generalization failed.
Our interactive classification loop requires that the classifier training be very fast. To be effective, the classifier must be generated from the training examples in under five seconds. If the classifier takes minutes or hours, the process of `train-feedback-correct' is no longer interactive, and much less effective as a design tool. Training on 100,000 examples with 150 features each in less than five seconds is a serious challenge for most CML algorithms.
Lastly, for this tool to be viable the final classifier will need to be able to classify 320 x 240 images in less than a fourth of a second. If the resulting classifier is much slower than this it becomes impossible to use it to track interactive behavior in a meaningful way.
IML
Throughout our discussion thus far, many requirements for the machine-learning algorithm in IML have been made. The machine-learning algorithm must:
- learn/train very quickly,
- accommodate 100s to 1000s of features,
- perform feature selection,
- allow for tens to hundreds of thousands of training examples.
These requirements put firm bounds on what kind of a learning algorithm can be used in IML. They invoke the fundamental question of which machine-learning algorithm fits all of these criteria. We discuss several options and the reasons why they are not viable before we settle on our algorithm of choice: decision trees (DT).
Neural Networks [12] are a powerful and often used machine-learning algorithm. They can provably approximate any function in two layers. Their strength lies in their ability to intelligently integrate a variety of features. Neural networks also produce relatively small and efficient classifiers; however, they are not feasible in IML. The number of features used in systems like Crayons along with the number of hidden nodes required to produce the kinds of classifications that are necessary completely overpowers this algorithm. Even more debilitating is the training time for neural networks. The time this algorithm takes to converge is far too long for interactive use. For 150 features this can take hours or days.
The nearest-neighbor algorithm [1] is easy to train but not very effective. Besides not being able to discriminate amongst features, nearest-neighbor has serious problems in high dimensional feature spaces of the kind needed in IML and Crayons. Nearest-neighbor generally has a classification time that is linear in the number of training examples, which also makes it unacceptably slow.
There are yet other algorithms such as boosting that do well with feature selection, which is a desirable characteristic. While boosting has shown itself to be very effective on tasks such as face tracking [18], its lengthy training time is prohibitive for interactive use in Crayons.
There are many more machine-learning algorithms; however, this discussion is sufficient to preface our decision to use decision trees.
All the algorithms discussed above suffer from the curse of dimensionality. When many features are used (100s to 1000s), their creation and execution times dramatically increase. In addition, the number of training examples required to adequately cover such high dimensional feature spaces would far exceed what designers can produce. With just one decision per feature the size of the example set must approach 2^100, which is completely unacceptable. We need a classifier that rapidly discards features and focuses on the 1-10 features that characterize a particular problem.
Decision trees [10] have many appealing properties that coincide with the requirements of IML. First and foremost is that the DT algorithm is fundamentally a process of feature selection. The algorithm operates by examining each feature and selecting a decision point for dividing the range of that feature. It then computes the \"impurity\" of the result of dividing the training examples at that decision point. One can think of impurity as measuring the amount of confusion in a given set. A set of examples that all belong to one class would be pure (zero impurity). There are a variety of possible impurity measures [2]. The feature whose partition yields the least impurity is the one chosen, the set is divided and the algorithm applied recursively to the divided subsets. Features that do not provide discrimination between classes are quickly discarded. The simplicity of DTs also provides many implementation advantages in terms of speed and space of the resulting classifier.
Quinlan's original DT algorithm [10] worked only on features that were discrete (a small number of choices). Our image features do not have that property. Most of our features are continuous real values. Many extensions of the original DT algorithm, ID3, have been made to allow use of real-valued data [4,11]. All of these algorithms either discretize the data or select a threshold T for a given feature F that divides the training examples into two sets, where F < T and F >= T. The trick is, for each feature, to select a value T that gives the lowest impurity (best classification improvement). The selection of T from a large number of features and a large number of training examples is very slow to do correctly.
We have implemented two algorithms, which employ different division techniques. These two algorithms also represent the two approaches of longer training time with better generalization vs. shorter training time with poorer generalization. The first strategy slightly reduces interactivity and relies more on learning performance. The second relies on speed and interactivity. The two strategies are Center Weighted (CW) and Mean Split (MS).
Our first DT attempt was to order all of the training examples for each feature and step through all of the examples calculating the impurity as if the division was between each of the examples. This yielded a minimum impurity split; however, this generally provided a best split close to the beginning or end of the list of examples, still leaving a large number of examples in one of the divisions. Divisions of this nature yield deeper and more unbalanced trees, which correlate to slower classification times. To improve this algorithm, we developed Center Weighted (CW), which does the same as above, except that it more heavily weights central splits (more equal divisions).
By ensuring that the split threshold is generally in the middle of the feature range, the resulting tree tends to be more balanced and the sizes of the training sets to be examined at each level of the tree drop exponentially.
CW DTs do, however, suffer from an initial sort of all training examples for each feature, resulting in an O(f * N log N) cost up front, where f is the number of features and N the number of training examples. Since in IML we assume that both f and N are large, this can be extremely costly.
Because of the extreme initial cost of sorting all N training examples f times, we have extended Center Weighted with CWSS. The `SS' stands for sub-sampled. Since the iteration through training examples is purely to find a good split, we can sample the examples to find a statistically sound split. For example, if N is 100,000 and we sample 1,000 of the original N, sort those and calculate the best split, then our initial sort is 100 times faster. It is obvious that a better threshold could be computed using all of the training data, but this is mitigated by the fact that those data items will still be considered in lower levels of the tree. When a split decision is made, all of the training examples are split, not just the sub-sample. The sub-sampling means that each node's split decision is never greater than O(f*1000*5), but that eventually all training data will be considered.
Quinlan used a sampling technique called \"windowing\". Windowing initially used a small sample of training examples and increased the number of training examples used to create the DT, until all of the original examples were classified correctly [11]. Our technique, although similar, differs in that the number of samples is fixed. At each node in the DT a new sample of fixed size is drawn, allowing misclassified examples in a higher level of the DT to be considered at a lower level.
The use of sub-sampling in CWSS produced very slight differences in classification accuracy as compared to CW, but reduced training time by a factor of at least two (for training sets with N >= 5,000). This factor however will continue to grow as N increases. (For N = 40,000 CWSS is approximately 5 times faster than CW; 8 times for N = 80,000.)
The CW and CWSS algorithms spend considerable computing resources in trying to choose a threshold value for each feature. The Mean Split (MS) algorithm spends very little time on such decisions and relies on large amounts of training data to correct decisions at lower levels of the tree. The MS algorithm uses T = mean(F) as the threshold for dividing each feature F and compares the impurities of the divisions of all features. This is very efficient and produces relatively shallow decision trees by generally dividing the training set in half at each decision point. Mean split, however, does not ensure that the division will necessarily divide the examples at points that are meaningful to correct classification. Successive splits at lower levels of the tree will eventually correctly classify the training data, but may not generalize as well.
The resulting MS decision trees are not as good as those produced by more careful means such as CW or CWSS. However, we hypothesized that the speedup in classifier creation would improve interactivity and thus reduce the time for designers to train a classifier. We believe designers make up for the lower quality of the decision tree with the ability to correct more rapidly.
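To ground the threshold-selection strategies just described, the following simplified Python sketch contrasts a center-weighted, sub-sampled split (in the spirit of CWSS) with a mean split (MS), using Gini impurity. The sample size, the form of the center-weighting penalty, and the impurity measure are illustrative assumptions, not the exact choices made in Crayons.

import random

def gini(labels):
    # Gini impurity of a list of class labels.
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_impurity(examples, f, t):
    # Size-weighted impurity of splitting on feature f at threshold t.
    left = [y for x, y in examples if x[f] < t]
    right = [y for x, y in examples if x[f] >= t]
    return (len(left) * gini(left) + len(right) * gini(right)) / len(examples)

def cwss_threshold(examples, f, sample_size=1000, center_weight=0.5):
    # Sub-sample, then scan candidate thresholds, penalizing unbalanced splits.
    sample = random.sample(examples, min(sample_size, len(examples)))
    values = sorted(x[f] for x, _ in sample)
    n = len(values)
    best_t, best_score = values[n // 2], float('inf')
    for i in range(1, n):                      # O(n^2) for clarity; a real sweep keeps running counts
        t = (values[i - 1] + values[i]) / 2.0
        balance_penalty = abs(i - n / 2) / n   # 0 for a perfectly central split
        score = split_impurity(sample, f, t) + center_weight * balance_penalty
        if score < best_score:
            best_t, best_score = t, score
    return best_t

def ms_threshold(examples, f):
    # Mean split: no impurity scan at all, just the mean of the feature.
    return sum(x[f] for x, _ in examples) / len(examples)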
The key is in optimizing designer judgment rather than classifier predictions. MSSS is a sub-sampled version of MS in the same manner as CWSS. In MSSS, since we just evaluate the impurity at the mean, and since the mean is a simple statistical value, the resulting divisions are generally identical to those of straight MS.
As a parenthetical note, another important bottleneck that is common to all of the classifiers is the necessity to calculate all features initially to create the classifier. We made the assumption in IML that all features are pre-calculated and that the learning part will find the distinguishing features. Although this can be optimized so it is faster, all algorithms will suffer from this bottleneck.
There are many differences between the performances of each of the algorithms. The most important is that the CW algorithms train slower than the MS algorithms, but tend to create better classifiers. Other differences are of note though. For example, the sub-sampled versions, CWSS and MSSS, generally allowed the classifiers to be generated faster. More specifically, CWSS was usually twice as fast as CW, as was MSSS compared to MS.
Because of the gains in speed and lack of loss of classification power, only CWSS and MSSS will be used for comparisons. The critical comparison is to see which algorithm allows the user to create a satisfactory classifier the fastest. User tests comparing these algorithms are outlined and presented in the next section.
EVALUATIONS
User tests were conducted to evaluate the differences between CWSS and MSSS. When creating a new perceptual interface it is not classification time that is the real issue. The important issue is designer time. As stated before, classifier creation time for CWSS is longer than for MSSS, but the center-weighted algorithms tend to generalize better than the mean split algorithms. CWSS generally takes 1-10 seconds to train on training sets of 10,000-60,000 examples, while MSSS is approximately twice as fast on the same training sets. These differences are important, as our hypothesis was that faster classifier creation times can overcome poorer inductive strength and thus reduce overall designer time.
To test the difference between CWSS and MSSS we used three key measurements: wall clock time to create the classifier, number of classify/correct iterations, and structure of the resulting tree (depth and number of nodes). The latter of these three corresponds to the amount of time the classifier takes to classify an image in actual usage.
In order to test the amount of time a designer takes to create a good classifier, we need a standard to define \"good classifier\". A \"gold standard\" was created for four different classification problems: skin detection, paper card tracking, robot car tracking and laser tracking. These gold standards were created by carefully classifying pixels until, in human judgment, the best possible classification was being performed on the test images for each problem. The resulting classifier was then saved as a standard.
Ten total test subjects were used and divided into two groups. The first five did each task using CWSS followed by MSSS, and the remaining five MSSS followed by CWSS. The users were given each of the problems in turn and asked to build a classifier.
Each time the subject requested a classifier to be built, that classifier's performance was measured against the performance of the standard classifier for that task. When the subject's classifier agreed with the standard on more than 97.5% of the pixels, the test was declared complete.
Table 1 shows the average times and iterations for the first group, and Table 2 for the second group.

                 CWSS                  MSSS
Problem          Time    Iterations    Time    Iterations
Skin             03:06   4.4           10:35   12.6
Paper Cards      02:29   4.2           02:23   5.0
Robot Car        00:50   1.6           01:00   1.6
Laser            00:46   1.2           00:52   1.4
Table 1 CWSS followed by MSSS

                 MSSS                  CWSS
Problem          Time    Iterations    Time    Iterations
Skin             10:26   11.4          03:51   3.6
Paper Cards      04:02   5.0           02:37   2.6
Robot Car        01:48   1.2           01:37   1.2
Laser            01:29   1.0           01:16   1.0
Table 2 MSSS followed by CWSS

The laser tracker is a relatively simple classifier because of the uniqueness of bright red spots [9]. The robot car was contrasted with a uniform colored carpet and was similarly straightforward. Identifying colored paper cards against a cluttered background was more difficult because of the diversity of the background. The skin tracker is the hardest because of the diversity of skin color, camera over-saturation problems and cluttered background [20].
As can be seen in Tables 1 and 2, MSSS takes substantially more designer effort on the hard problems than CWSS. All subjects specifically stated that CWSS was \"faster\" than MSSS, especially in the Skin case. (Some did not notice a difference between the two algorithms while working on the other problems.) We did not test any of the slower algorithms such as neural nets or nearest-neighbor. Interactively these are so poor that the results are self-evident. We also did not test the full CW algorithm. Its classifier creation times tend into minutes and clearly could not compete with the times shown in Tables 1 and 2. It is clear from our evaluations that a classification algorithm must get under the 10-20 second barrier in producing a new classification, but that once under that barrier, the designer's time begins to dominate. Once the designer's time begins to dominate the total time, then the classifier with better generalization wins.
We also mentioned the importance of the tree structure as it relates to the classification time of an image. Table 3 shows the average tree structures (tree depth and number of nodes) as well as the average classification time (ACT) in milliseconds over the set of test images.

                 CWSS                        MSSS
Problem          Depth   Nodes   ACT         Depth   Nodes   ACT
Skin             16.20   577     243         25.60   12530   375
Paper Cards      15.10   1661    201         16.20   2389    329
Car              13.60   1689    235         15.70   2859    317
Laser            13.00   4860    110         8.20    513     171
Table 3 Tree structures and average classify time (ACT)

As seen in Table 3, depth, number of nodes and ACT were all lower in CWSS than in MSSS. This was predicted, as CWSS provides better divisions between the training examples.
While testing we observed that those who used MSSS (which is fast but less accurate) first ended up using more training data, even when they used CWSS, which usually generalizes better and needs less data. Those who used CWSS first were pleased with the interactivity of CWSS and became very frustrated when they used MSSS, even though it could cycle faster through the interactive loop.
In actuality,\nbecause of the poor generalization of the mean split\nalgorithm, even though the classifier generation time for\nMSSS was quicker than CWSS, the users felt it necessary to\npaint more using the MSSS, so the overall time increased\nusing MSSS.\nCONCLUSION\nWhen using machine learning in an interactive design setting,\nfeature selection must be automatic rather than manual and\nclassifier training-time must be relatively fast. Decision Trees\nusing a sub-sampling technique to improve training times are\nvery effective for both of these purposes. Once interactive\nspeeds are achieved, however, the quality of the classifier's\ngeneralization becomes important. Using tools like Crayons,\ndemonstrates that machine learning can form an appropriate\nbasis for the design tools needed to create new perceptual\nuser interfaces.\n\nREFERENCES\n1. Cover, T., and Hart, P. \"Nearest Neighbor Pattern\nClassification.\" IEEE Transactions on Information\nTheory, 13, (1967) 21-27.\n2. Duda, R. O., Hart, P. E., and Stork, D. G., Pattern\nClassification. (2001).\n3. Fails, J.A., Olsen, D.R. \"LightWidgets: Interacting in\nEveryday Spaces.\" Proceedings of IUI '02 (San\nFrancisco CA, January 2002).\n4. Fayyad, U.M. and Irani, K. B. \"On the Handling of\nContinuous-valued Attributes in Decision Tree\nGeneration.\" Machine Learning, 8, 87-102,(1992).\n5. Jaimes, A. and Chang, S.-F. \"A Conceptual Framework\nfor Indexing Visual Information at Multiple Levels.\"\nIS&T/SPIE Internet Imaging 2000, (San Jose CA,\nJanuary 2000).\n6. Jaimes, A. and Chang, S.-F. \"Automatic Selection of\nVisual Features and Classifier.\" Storage and Retrieval\nfor Image and Video Databases VIII, IS&T/SPIE (San\nJose CA, January 2000).\n7. Jaimes, A. and Chang, S.-F. \"Integrating Multiple\nClassifiers in Visual Object Detectors Learned from User\nInput.\" Invited paper, session on Image and Video\nDatabases, 4th Asian Conference on Computer Vision\n(ACCV 2000), Taipei, Taiwan, January 8-11, 2000.\n8. Krueger, M. W., Gionfriddo. T., and Hinrichsen, K.,\n\"VIDEOPLACE -- an artificial reality\". Human Factors\nin Computing Systems, CHI '85 Conference Proceedings,\nACM Press, 1985, 35-40.\n9. Olsen, D.R., Nielsen, T. \"Laser Pointer Interaction.\"\nProceedings of CHI '01 (Seattle WA, March 2001).\n10. Quinlan, J. R. \"Induction of Decision Trees.\" Machine\nLearning, 1(1); 81-106, (1986).\n11. Quinlan, J. R. \"C4.5: Programs for machine learning.\"\nMorgan Kaufmann, San Mateo, CA, 1993.\n12. Rumelhart, D., Widrow, B., and Lehr, M. \"The Basic\nIdeas in Neural Networks.\" Communications of the ACM,\n37(3), (1994), pp 87-92.\n13. Schmidt, A. \"Implicit Human Computer Interaction\nThrough Context.\" Personal Technologies, Vol 4(2),\nJune 2000.\n14. Starner, T., Auxier, J. and Ashbrook, D. \"The Gesture\nPendant: A Self-illuminating, Wearable, Infrared\nComputer Vision System for Home Automation Control\nand Medical Monitoring.\" International Symposium on\nWearable Computing (Atlanta GA, October 2000).\n15. Triggs, B. \"Model-based Sonar Localisation for Mobile\nRobots.\" Intelligent Robotic Systems '93, Zakopane,\nPoland, 1993.\n16. Underkoffler, J. and Ishii H. \"Illuminating Light: An\nOptical Design Tool with a Luminous-Tangible\nInterface.\" Proceedings of CHI '98 (Los Angeles CA,\nApril 1998).\n17. Underkoffler, J., Ullmer, B. and Ishii, H. \"Emancipated\nPixels: Real-World Graphics in the Luminous Room.\"\nProceedings of SIGGRAPH '99 (Los Angeles CA, 1999),\nACM Press, 385-392.\n18. Vailaya, A., Zhong, Y., and Jain, A. K. 
\"A hierarchical\nsystem for efficient image retrieval.\" In Proc. Int. Conf.\non Patt. Recog. (August 1996).\n19. Viola, P. and Jones, M. \"Robust real-time object\ndetection.\" Technical Report 2001/01, Compaq CRL,\nFebruary 2001.\n20. Yang, M.H. and Ahuja, N. \"Gaussian Mixture Model for\nHuman Skin Color and Its Application in Image and\nVideo Databases.\" Proceedings of SPIE '99 (San Jose\nCA, Jan 1999), 458-466.\n\n\n45\n", "keywords": "classification;Perceptual interface;image processing;perceptive user interfaces;Perceptual user iinterfaces;Machine learning;image/pixel classifier;Predict correct behaviour;Classification design loop;Interactive machine learning;interaction;Crayons prototype;Image processing with crayons;Crayons design process;Classical machine learning"} {"name": "117", "title": "Interestingness of Frequent Itemsets Using Bayesian Networks as Background Knowledge", "abstract": "The paper presents a method for pruning frequent itemsets based on background knowledge represented by a Bayesian network. The interestingness of an itemset is defined as the absolute difference between its support estimated from data and from the Bayesian network. Efficient algorithms are presented for finding interestingness of a collection of frequent itemsets, and for finding all attribute sets with a given minimum interestingness. Practical usefulness of the algorithms and their efficiency have been verified experimentally.", "fulltext": "INTRODUCTION\nFinding frequent itemsets and association rules in database\ntables has been an active research area in recent years.\nUnfortunately, the practical usefulness of the approach is\nlimited by huge number of patterns usually discovered. For\nlarger databases many thousands of association rules may\nbe produced when minimum support is low. This creates\na secondary data mining problem: after mining the data,\nwe are now compelled to mine the discovered patterns. The\nproblem has been addressed in literature mainly in the context\nof association rules, where the two main approaches are\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nKDD'04, August 2225, 2004, Seattle, Washington, USA.\nCopyright 2004 ACM 1-58113-888-1/04/0008 ...\n$\n5.00.\nsorting rules based on some interestingness measure, and\npruning aiming at removing redundant rules.\nFull review of such methods is beyond the scope of this\npaper. Overviews of interestingness measures can be found\nfor example in [3, 13, 11, 32], some of the papers on rule\npruning are [30, 31, 7, 14, 28, 16, 17, 33].\nMany interestingness measures are based on the divergence\nbetween true probability distributions and distributions\nobtained under the independence assumption. Pruning\nmethods are usually based on comparing the confidence\nof a rule to the confidence of rules related to it.\nThe main drawback of those methods is that they tend\nto generate rules that are either obvious or have already\nbeen known by the user. 
This is to be expected, since the most striking patterns which those methods select can also easily be discovered using traditional methods or are known directly from experience.
We believe that the proper way to address the problem is to include the user's background knowledge in the process. The patterns which diverge the most from that background knowledge are deemed most interesting. Discovered patterns can later be applied to improve the background knowledge itself.
Many approaches to using background knowledge in machine learning are focused on using background knowledge to speed up the hypothesis discovery process and not on discovering interesting patterns. Those methods often assume strict logical relationships, not probabilistic ones. Examples are knowledge based neural networks (KBANNs) and uses of background knowledge in Inductive Logic Programming. See Chapter 12 in [20] for an overview of those methods and a list of further references.
Tuzhilin et al. [23, 22, 29] worked on applying background knowledge to finding interesting rules. In [29, 22] interestingness measures are presented which take into account prior beliefs; in another paper [23], the authors present an algorithm for selecting a minimum set of interesting rules given background knowledge. The methods used in those papers are local, that is, they don't use a full joint probability of the data. Instead, the interestingness of a rule is evaluated using rules in the background knowledge with the same consequent. If no such knowledge is present for a given rule, the rule is considered uninteresting. This makes it impossible to take into account transitivity. Indeed, in the presence of the background knowledge represented by the rules A → B and B → C, the rule A → C is uninteresting. However, this cannot be discovered locally. See [25] for a detailed discussion of the advantages of global versus local methods. Some more comparisons can be found in [18].
In this paper we present a method of finding interesting patterns using background knowledge represented by a Bayesian network. The main advantage of Bayesian networks is that they concisely represent full joint probability distributions, and allow for practically feasible probabilistic inference from those distributions [25, 15]. Other advantages include the ability to represent causal relationships, an easy to understand graphical structure, as well as wide availability of modelling tools. Bayesian networks are also easy to modify by adding or deleting edges.
We opt to compute interestingness of frequent itemsets instead of association rules, agreeing with [7] that directions of dependence should be decided by the user based on her experience and not suggested by interestingness measures.
Our approach works by estimating supports of itemsets from Bayesian networks and comparing thus estimated supports with the data. Itemsets with strongly diverging supports are considered interesting.
Further definitions of interestingness exploiting the Bayesian network's structure are presented, as well as efficient methods for computing interestingness of large numbers of itemsets and for finding all attribute sets with given minimum interestingness.
There are some analogies between mining emerging patterns [6] and our approach, the main differences being that in our case a Bayesian network is used instead of a second dataset, and that we use a different measure for comparing supports.
Due to those differences our problem requires a different approach and a different set of algorithms.
DEFINITIONS AND NOTATION
Database attributes will be denoted with uppercase letters A, B, C, . . .; the domain of an attribute A will be denoted by Dom(A). In this paper we are only concerned with categorical attributes, that is, attributes with finite domains.
Sets of attributes will be denoted with uppercase letters I, J, . . .. We often use database notation for representing sets of attributes, i.e. I = A_1 A_2 . . . A_k instead of the set-theoretical notation {A_1, A_2, . . . , A_k}. The domain of an attribute set I = A_1 A_2 . . . A_k is defined as
Dom(I) = Dom(A_1) × Dom(A_2) × . . . × Dom(A_k).
Values from domains of attributes and attribute sets are denoted with corresponding lowercase boldface letters, e.g. i ∈ Dom(I).
Let P_I denote a joint probability distribution of the attribute set I. Similarly let P_{I|J} be a distribution of I conditioned on J. When used in arithmetic operations such distributions will be treated as functions of the attributes in I and I ∪ J respectively, with values in the interval [0, 1]. For example P_I(i) denotes the probability that I = i.
Let P_I be a probability distribution, and let J ⊆ I. Denote by P_I^{↓J} the marginalization of P_I onto J, that is
P_I^{\downarrow J} = \sum_{I \setminus J} P_I,    (1)
where the summation is over the domains of all variables from I \ J.
Probability distributions estimated from data will be denoted by adding a hat symbol, e.g. \hat{P}_I.
An itemset is a pair (I, i), where I is an attribute set and i ∈ Dom(I). The support of an itemset (I, i) is defined as
supp(I, i) = \hat{P}_I(i),
where the probability is estimated from some dataset. An itemset is called frequent if its support is greater than or equal to some user-defined threshold minsupp. Finding all frequent itemsets in a given database table is a well-known data mining problem [1].
A Bayesian network BN over a set of attributes H = A_1 . . . A_n is a directed acyclic graph BN = (V, E) with the set of vertices V = {V_{A_1}, . . . , V_{A_n}} corresponding to attributes of H, and a set of edges E ⊆ V × V, where each vertex V_{A_i} has associated a conditional probability distribution P_{A_i|par_i}, where par_i = {A_j : (V_{A_j}, V_{A_i}) ∈ E} is the set of attributes corresponding to the parents of V_{A_i} in G. See [25, 15] for a detailed discussion of Bayesian networks.
A Bayesian network BN over H uniquely defines a joint probability distribution
P_H^{BN} = \prod_{i=1}^{n} P_{A_i|par_i}
of H. For I ⊆ H the distribution over I marginalized from P_H^{BN} will be denoted by P_I^{BN},
P_I^{BN} = (P_H^{BN})^{\downarrow I}.
INTERESTINGNESS OF AN ATTRIBUTE SET WITH RESPECT TO A BAYESIAN NETWORK
Let us first define the support of an itemset (I, i) in a Bayesian network BN as
supp_BN(I, i) = P_I^{BN}(i).
Let BN be a Bayesian network over an attribute set H, and let (I, i) be an itemset such that I ⊆ H. The interestingness of the itemset (I, i) with respect to BN is defined as
I_BN(I, i) = |supp(I, i) − supp_BN(I, i)|,
that is, the absolute difference between the support of the itemset estimated from data, and the estimate of this support made from the Bayesian network BN.
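As a small illustration of these definitions, the Python sketch below computes I_BN(I, i) and I(I) with pandas, estimating supports from a data table; bn_marginal is a hypothetical callable standing in for whatever Bayesian-network inference routine is available, assumed to return the marginal P_I^BN as a dictionary from value tuples to probabilities.

import pandas as pd

def support_from_data(df, itemset):
    # supp(I, i): fraction of rows on which every attribute in I takes the given value.
    mask = pd.Series(True, index=df.index)
    for attr, value in itemset.items():
        mask &= (df[attr] == value)
    return mask.mean()

def itemset_interestingness(df, bn_marginal, itemset):
    # |supp(I, i) - supp_BN(I, i)| for a single itemset given as {attribute: value}.
    attrs = tuple(sorted(itemset))
    p_bn = bn_marginal(attrs).get(tuple(itemset[a] for a in attrs), 0.0)
    return abs(support_from_data(df, itemset) - p_bn)

def attribute_set_interestingness(df, bn_marginal, attrs):
    # I(I) = max over i in Dom(I); for brevity only value combinations observed in the
    # data are enumerated (combinations absent from the data have zero data support).
    observed = df[list(attrs)].drop_duplicates().itertuples(index=False)
    return max(itemset_interestingness(df, bn_marginal, dict(zip(attrs, row)))
               for row in observed)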
In the remaining part of the paper we assume that interestingness is always computed with respect to a Bayesian network BN and the subscript is omitted.
An itemset is ε-interesting if its interestingness is greater than or equal to some user-specified threshold ε.
A frequent interesting itemset represents a frequently occurring (due to the minimum support requirement) pattern in the database whose probability is significantly different from what it is believed to be based on the Bayesian network model.
An alternative would be to use supp(I, i)/supp_BN(I, i) as the measure of interestingness [6]. We decided to use the absolute difference instead of a quotient since we found it to be more robust, especially when both supports are small.
One could think of applying our approach to association rules with the difference in confidences as a measure of interestingness but, as mentioned in the Introduction, we think that patterns which do not suggest a direction of influence are more appropriate.
Since in Bayesian networks dependencies are modelled using attributes, not itemsets, it will often be easier to talk about interesting attribute sets, especially when the discovered interesting patterns are to be used to update the background knowledge.
Definition 3.1. Let I be an attribute set. The interestingness of I is defined as
I(I) = \max_{i \in Dom(I)} I(I, i),    (2)
analogously, I is ε-interesting if I(I) ≥ ε.
An alternative approach would be to use generalizations of Bayesian networks allowing dependencies to vary for different values of attributes, see [27], and deal with itemset interestingness directly.
3.1 Extensions to the Definition of Interestingness
Even though applying the above definition and sorting attribute sets on their interestingness works well in practice, there might still be a large number of patterns retained, especially if the background knowledge is not well developed and a large number of attribute sets have high interestingness values. This motivates the following two definitions.
Definition 3.2. An attribute set I is hierarchically ε-interesting if it is ε-interesting and none of its proper subsets is ε-interesting.
The idea is to prevent large attribute sets from becoming interesting when the true cause of them being interesting lies in their subsets.
There is also another problem with Definition 3.1. Consider a Bayesian network
A → B
where nodes A and B have respective probability distributions P_A and P_{B|A} attached. Suppose also that A is interesting. In this case, even if P_{B|A} is the same as \hat{P}_{B|A}, attribute sets B and AB may be considered ε-interesting. Below we present a definition of interestingness aiming at preventing such situations.
A vertex V is an ancestor of a vertex W in a directed graph G if there is a directed path from V to W in G. The set of ancestors of a vertex V in a graph G is denoted by anc(V). Moreover, let us denote by anc(I) the set of all ancestor attributes in BN of an attribute set I. More formally:
anc(I) = {A_i ∉ I : V_{A_i} ∈ anc(V_{A_j}) in BN, for some A_j ∈ I}.
Definition 3.3. An attribute set I is topologically ε-interesting if it is ε-interesting, and there is no attribute set J such that
1. J ⊆ anc(I) ∪ I, and
2. I ⊄ J, and
3.
J is ε-interesting.
The intention here is to prevent interesting attribute sets from causing all their successors in the Bayesian network (and the supersets of their successors) to become interesting in a cascading fashion.
To see why condition 2 is necessary consider a Bayesian network
A → X → B.
Suppose that there is a dependency between A and B in the data which makes AB ε-interesting. Now however ABX may also become interesting (even if P_{A|X} and P_{B|X} are correct in the network) and cause AB to be pruned. Condition 2 prevents AB from being pruned and ABX from becoming interesting.
Notice that topological interestingness is stricter than hierarchical interestingness. Indeed if J ⊂ I is ε-interesting, then it satisfies all the above conditions, and makes I not topologically ε-interesting.
ALGORITHMS FOR FINDING INTERESTING ITEMSETS AND ATTRIBUTE SETS
In this section we present algorithms using the definition of interestingness introduced in the previous section to select interesting itemsets or attribute sets. We begin by describing a procedure for computing marginal distributions for a large collection of attribute sets from a Bayesian network.
4.1 Computing a Large Number of Marginal Distributions from a Bayesian Network
Computing the interestingness of a large number of frequent itemsets requires the computation of a large number of marginal distributions from a Bayesian network. The problem has been addressed in the literature mainly in the context of finding marginals for every attribute [25, 15], while here we have to find marginals for multiple, overlapping sets of attributes. The approach taken in this paper is outlined below.
The problem of computing marginal distributions from a Bayesian network is known to be NP-hard; nevertheless, in most cases the network structure can be exploited to speed up the computations.
Here we use exact methods for computing the marginals. Approximate methods like Gibbs sampling are an interesting topic for future work.
The best known approaches to exact marginalization are join trees [12] and bucket elimination [5]. We chose the bucket elimination method, which is easier to implement and, according to [5], as efficient as join-tree based methods. Also, join trees are mainly useful for computing marginals for single attributes, and not for sets of attributes.
The bucket elimination method, which is based on the distributive law, proceeds by first choosing a variable ordering and then applying the distributive law repeatedly to simplify the summation. For example suppose that a joint distribution of a Bayesian network over H = ABC is expressed as
P_{ABC}^{BN} = P_A \, P_{B|A} \, P_{C|A},
and we want to find P_A^{BN}. We need to compute the sum
\sum_B \sum_C P_A \, P_{B|A} \, P_{C|A},
which can be rewritten as
P_A \Big( \sum_{b \in Dom(B)} P_{B|A} \Big) \Big( \sum_{c \in Dom(C)} P_{C|A} \Big).
Assuming that domains of all attributes have size 3, computing the first sum directly requires 12 additions and 18 multiplications, while the second sum requires only 4 additions and 6 multiplications.
The expression is interpreted as a tree of buckets; each bucket is either a single probability distribution, or a sum over a single attribute taken over a product of its child buckets in the tree. In the example above a special root bucket without summation could be introduced for completeness.
In most cases the method significantly reduces the time complexity of the problem.
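A small numpy sketch of the worked example (A is the only parent of both B and C, all domains of size 3) shows the direct summation and the distributive-law factorization producing the same marginal; the conditional probability tables are random placeholders rather than values from any real network.

import numpy as np

rng = np.random.default_rng(0)
P_A = rng.dirichlet(np.ones(3))                  # P_A[a]
P_B_given_A = rng.dirichlet(np.ones(3), size=3)  # P_{B|A}[a, b], rows sum to 1
P_C_given_A = rng.dirichlet(np.ones(3), size=3)  # P_{C|A}[a, c]

# Direct route: materialize the full joint P^BN_ABC and sum out B and C.
joint = P_A[:, None, None] * P_B_given_A[:, :, None] * P_C_given_A[:, None, :]
P_A_direct = joint.sum(axis=(1, 2))

# Distributive law: P_A * (sum_b P_{B|A}) * (sum_c P_{C|A}); the two inner sums are
# exactly the partial results that bucket elimination computes (and memoizes) once.
P_A_factored = P_A * P_B_given_A.sum(axis=1) * P_C_given_A.sum(axis=1)

assert np.allclose(P_A_direct, P_A_factored)     # both equal P^BN_A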
An important problem is choosing the right variable ordering. Unfortunately, that problem is itself NP-hard. We thus adopt a heuristic which orders variables according to the decreasing number of factors in the product depending on each variable. A detailed discussion of the method can be found in [5].

Although bucket elimination can be used to obtain supports of itemsets directly (i.e. P_I(i)), we use it to obtain complete marginal distributions. This way we can directly apply marginalization to obtain distributions for subsets of I (see below).

Since bucket elimination is performed repeatedly, we use memoization to speed it up, as suggested in [21]. We remember each partial sum and reuse it if possible. In the example above, Σ_{b ∈ Dom(B)} P_B|A, Σ_{c ∈ Dom(C)} P_C|A, and the computed P^BN_A would have been remembered.

Another method of obtaining a marginal distribution P_I is marginalizing it from P_J, where I ⊆ J, using Equation (1), provided that P_J is already known. If |J \ I| is small, this procedure is almost always more efficient than bucket elimination, so whenever some P_I is computed by bucket elimination, the distributions of all subsets of I are computed using Equation (1).

Definition 4.1. Let C be a collection of attribute sets. The positive border of C [19], denoted by Bd+(C), is the collection of those sets from C which have no proper superset in C:

Bd+(C) = {I ∈ C : there is no J ∈ C such that I ⊊ J}.

It is clear from the discussion above that we only need to use bucket elimination to compute distributions of itemsets in the positive border. We are going to go further than this; we will use bucket elimination to obtain supersets of sets in the positive border, and then use Equation (1) to obtain marginals even for sets in the positive border. Experiments show that this approach can give substantial savings, especially when many overlapping attribute sets from the positive border can be covered by a single set only slightly larger than the covered ones.

The algorithm for selecting the marginal distribution to compute is motivated by the algorithm from [9] for computing views that should be materialized for OLAP query processing. Bucket elimination corresponds to creating a materialized view, and marginalizing the distribution obtained this way corresponds to answering OLAP queries.

We first need to define the costs of marginalization and bucket elimination. In our case the cost is defined as the total number of additions and multiplications used to compute the marginal distribution.

The cost of marginalizing P_J from P_I, J ⊆ I, using Equation (1) is

cost(P_{I→J}) = |Dom(J)| · (|Dom(I \ J)| − 1).

It follows from the fact that each value of P_{I→J} requires adding |Dom(I \ J)| values from P_I.
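A small sketch of Definition 4.1 and of the marginalization cost just given; this is our own illustration (the three-valued domains are hypothetical), not code from the paper's implementation.

```python
def positive_border(collection):
    """Bd+(C): the sets in C that have no proper superset in C (Definition 4.1)."""
    sets = [frozenset(s) for s in collection]
    return [s for s in sets if not any(s < t for t in sets)]

def marginalization_cost(dom_sizes, I, J):
    """cost(P_{I->J}) = |Dom(J)| * (|Dom(I \\ J)| - 1) additions, for J a subset of I."""
    def dom(attrs):
        size = 1
        for a in attrs:
            size *= dom_sizes[a]
        return size
    return dom(J) * (dom(set(I) - set(J)) - 1)

# Example with hypothetical three-valued attributes.
dom_sizes = {"A": 3, "B": 3, "C": 3}
print(positive_border([{"A"}, {"A", "B"}, {"B", "C"}]))            # the two maximal sets
print(marginalization_cost(dom_sizes, {"A", "B", "C"}, {"A"}))     # 3 * (9 - 1) = 24
```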
The cost of bucket elimination can be computed cheaply without actually executing the procedure. Each bucket is either an explicitly given probability distribution, or computes a sum over a single variable of a product of functions (computed in the buckets contained in it) explicitly represented as multidimensional tables; see [5] for details. If the bucket is an explicitly given probability distribution, the cost is of course 0.

Consider now a bucket b containing child buckets b_1, ..., b_n yielding functions f_1, ..., f_n, respectively. Let Var(f_i) be the set of attributes on which f_i depends. Let f = f_1 · f_2 · · · f_n denote the product of all factors in b. We have Var(f) = ∪_{i=1}^{n} Var(f_i), and since each value of f requires n − 1 multiplications, computing f requires |Dom(Var(f))| · (n − 1) multiplications. Let A_b be the attribute over which the summation in b takes place. Computing the sum will require |Dom(Var(f) \ {A_b})| · (|Dom(A_b)| − 1) additions.

So the total cost of computing the function in bucket b (including the costs of computing its children) is thus

cost(b) = Σ_{i=1}^{n} cost(b_i) + |Dom(Var(f))| · (n − 1) + |Dom(Var(f) \ {A_b})| · (|Dom(A_b)| − 1).

The cost of computing P^BN_I through bucket elimination, denoted cost_BE(P^BN_I), is the cost of the root bucket of the summation used to compute P^BN_I.

Let C be a collection of attribute sets. The gain of using bucket elimination to find P^BN_I for some I, while computing the interestingness of attribute sets from C, can be expressed as:

gain(I) = −cost_BE(P^BN_I) + Σ_{J ∈ Bd+(C), J ⊆ I} [cost_BE(P^BN_J) − cost(P^BN_{I→J})].

An attribute set to which bucket elimination will be applied is found using a greedy procedure by adding in each iteration the attribute giving the highest increase of gain. The complete algorithm is presented in Figure 1.

Input: collection of attribute sets C, Bayesian network BN
Output: distributions P^BN_I for all I ∈ C
1. S ← Bd+(C)
2. while S ≠ ∅:
3.   I ← an attribute set from S
4.   for A in H \ I:
5.     compute gain(I ∪ {A})
6.   pick A* for which the gain in Step 5 was maximal
7.   if gain(I ∪ {A*}) > gain(I):
8.     I ← I ∪ {A*}
9.     goto 4
10.  compute P^BN_I from BN using bucket elimination
11.  compute P^BN_{I→J} for all J ∈ S, J ⊆ I, using Equation (1)
12.  remove from S all attribute sets included in I
13. compute P^BN_J for all J ∈ C \ Bd+(C) using Equation (1)

Figure 1: Algorithm for computing a large number of marginal distributions from a Bayesian network.
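The greedy growth step of Figure 1 (steps 4-9) can be sketched as follows; this is our own rough rendering, assuming that callables for cost_BE and for the marginalization cost are available, not the paper's implementation.

```python
def gain(I, border, cost_be, cost_marg):
    """gain(I) = -cost_BE(P_I) + sum over J in Bd+(C) with J a subset of I of
    [cost_BE(P_J) - cost(P_{I->J})]."""
    I = frozenset(I)
    total = -cost_be(I)
    for J in border:
        if J <= I:
            total += cost_be(J) - cost_marg(I, J)
    return total

def grow_greedily(I, all_attrs, border, cost_be, cost_marg):
    """Repeatedly add the single attribute that most increases the gain, stopping
    when no attribute improves it (steps 4-9 of Figure 1)."""
    I = frozenset(I)
    while True:
        candidates = [I | {A} for A in set(all_attrs) - I]
        if not candidates:
            return I
        best = max(candidates, key=lambda J: gain(J, border, cost_be, cost_marg))
        if gain(best, border, cost_be, cost_marg) > gain(I, border, cost_be, cost_marg):
            I = best
        else:
            return I
```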
4.2 Computing The Interestingness of a Collection of Itemsets

First we present an algorithm for computing the interestingness of all itemsets in a given collection. It is a simple application of the algorithm in Figure 1. It is useful if we already have a collection of itemsets (e.g. all frequent itemsets found in a database table) and want to select those which are the most interesting. The algorithm is given below.

Input: collection of itemsets K, supports of all itemsets in K, Bayesian network BN
Output: interestingness of all itemsets in K.
1. C ← {I : (I, i) ∈ K for some i ∈ Dom(I)}
2. compute P^BN_I for all I ∈ C using the algorithm in Figure 1
3. compute the interestingness of all itemsets in K using the distributions computed in step 2

4.3 Finding All Attribute Sets With Given Minimum Interestingness

In this section we will present an algorithm for finding all attribute sets with interestingness greater than or equal to a specified threshold, given a dataset and a Bayesian network BN.

Let us first make an observation:

Observation 4.2. If an itemset (I, i) has interestingness greater than or equal to ε with respect to a Bayesian network BN, then its support must be greater than or equal to ε in either the data or in BN. Moreover, if an attribute set is ε-interesting, by Definition 3.1, at least one of its itemsets must be ε-interesting.

It follows that if an attribute set is ε-interesting, then one of its itemsets must be frequent, with minimum support ε, either in the data or in the Bayesian network.

The algorithm works in two stages. First, all frequent itemsets with minimum support ε are found in the dataset and their interestingness is computed. The first stage might have missed itemsets which are ε-interesting but don't have sufficient support in the data. In the second stage all itemsets frequent in the Bayesian network are found, and their supports in the data are computed using an extra database scan.

To find all itemsets frequent in the Bayesian network we use the Apriori algorithm [1] with a modified support counting part, which we call AprioriBN. The sketch of the algorithm is shown in Figure 2; except for step 3 it is identical to the original algorithm.

Input: Bayesian network BN, minimum support minsupp.
Output: itemsets whose support in BN is at least minsupp
1. k ← 1
2. Cand ← {(I, i) : |I| = 1}
3. compute supp_BN(I, i) for all (I, i) ∈ Cand using the algorithm in Figure 1
4. Freq_k ← {(I, i) ∈ Cand : supp_BN(I, i) ≥ minsupp}
5. Cand ← generate new candidates from Freq_k
6. remove itemsets with infrequent subsets from Cand
7. k ← k + 1; goto 3

Figure 2: The AprioriBN algorithm.

We now have all the elements needed to present the algorithm for finding all ε-interesting attribute sets, which is given in Figure 3.

Input: Bayesian network BN, dataset, interestingness threshold ε.
Output: all attribute sets with interestingness at least ε, and some of the attribute sets with lower interestingness.
1. K ← {(I, i) : supp(I, i) ≥ ε} (using the Apriori algorithm)
2. C ← {I : (I, i) ∈ K for some i ∈ Dom(I)}
3. compute P^BN_I for all I ∈ C using the algorithm in Figure 1
4. K' ← {(I, i) : supp_BN(I, i) ≥ ε} (using the AprioriBN algorithm)
5. compute the support in the data for all itemsets in K' \ K by scanning the dataset
6. compute the interestingness of all itemsets in K ∪ K'
7. C' ← {I : (I, i) ∈ K' for some i ∈ Dom(I)}
8. compute the interestingness of all attribute sets I in C ∪ C': I(I) = max{I(I, i) : (I, i) ∈ K ∪ K', i ∈ Dom(I)}

Figure 3: Algorithm for finding all ε-interesting attribute sets.

Step 4 of the algorithm can reuse marginal distributions found in step 3 to speed up the computations.

Notice that it is always possible to compute the interestingness of every itemset in step 6, since both supports of each itemset will be computed either in steps 1 and 3, or in steps 4 and 5.
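A compact sketch of the two-stage procedure in Figure 3 is given below. It is ours and heavily simplified: the Apriori-style mining routines and the two support functions are assumed to be supplied by the caller, and itemsets are represented as (attribute-set, value-tuple) pairs with hashable attribute sets.

```python
def find_interesting_attribute_sets(apriori_data, apriori_bn, supp_data, supp_bn, eps):
    """Two-stage procedure of Figure 3 (sketch): mine itemsets frequent in the data,
    mine itemsets frequent in the Bayesian network, then score both collections."""
    K = set(apriori_data(eps))        # step 1: frequent in the data
    K_bn = set(apriori_bn(eps))       # step 4: frequent in the network (AprioriBN)
    scores = {}
    for (I, i) in K | K_bn:           # steps 5-6: both supports are now available
        scores[(I, i)] = abs(supp_data(I, i) - supp_bn(I, i))
    # steps 7-8: attribute-set interestingness is the maximum over its itemsets
    per_attrset = {}
    for (I, i), s in scores.items():
        per_attrset[I] = max(per_attrset.get(I, 0.0), s)
    return {I: s for I, s in per_attrset.items() if s >= eps}
```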
The authors implemented hierarchical and topological interestingness as a postprocessing step. They could, however, be used to prune the attribute sets which are not interesting without evaluating their distributions, thus providing a potentially large speedup in the computations. We plan to investigate that in the future.

EXPERIMENTAL RESULTS

In this section we present an experimental evaluation of the method. One problem we were faced with was the lack of publicly available datasets with nontrivial background knowledge that could be represented as a Bayesian network. The UCI Machine Learning repository contains a few datasets with background knowledge (Japanese credit, molecular biology), but they are aimed primarily at Inductive Logic Programming: the relationships are logical rather than probabilistic, and only relationships involving the class attribute are included. These examples are of little value for our approach.

We have thus used networks constructed using our own common-sense knowledge as well as networks learned from data.

5.1 An Illustrative Example

We first present a simple example demonstrating the usefulness of the method. We use the KSL dataset of Danish 70-year-olds, distributed with the DEAL Bayesian network package [4]. There are nine attributes, described in Table 1, related to the person's general health and lifestyle. All continuous attributes have been discretized into 3 levels using the equal weight method.

Table 1: Attributes of the KSL dataset.
FEV    Forced ejection volume of person's lungs
Kol    Cholesterol
Hyp    Hypertension (no/yes)
BMI    Body Mass Index
Smok   Smoking (no/yes)
Alc    Alcohol consumption (seldom/frequently)
Work   Working (yes/no)
Sex    male/female
Year   Survey year (1967/1984)

We began by designing a network structure based on the authors' (non-expert) knowledge. The network structure is given in Figure 4a. Since we were not sure about the relation of cholesterol to other attributes, we left it unconnected. Conditional probabilities were estimated directly from the KSL dataset. Note that this is a valid approach, since even when the conditional probabilities match the data perfectly, interesting patterns can still be found because the network structure usually is not capable of representing the full joint distribution of the data. The interesting patterns can then be used to update the network's structure. Of course, if both the structure and the conditional probabilities are given by the expert, then the discovered patterns can be used to update both the network's structure and conditional probabilities.

Figure 4: Network structures for the KSL dataset constructed by the authors (panels a and b).

We applied the algorithm for finding all interesting attribute sets to the KSL dataset and the network, using a threshold of 0.01. The attribute sets returned were sorted by interestingness, and the top 10 results were kept.

The two most interesting attribute sets were {FEV, Sex} with interestingness 0.0812 and {Alc, Year} with interestingness 0.0810.

Indeed, it is known (see [8]) that women's lungs are on average 20%-25% smaller than men's lungs, so sex influences the forced ejection volume (FEV) much more than smoking does (which we thought was the primary influence). This fact, although not new in general, was overlooked by the authors, and we suspect that, due to the large amount of literature on the harmful effects of smoking, it might have been overlooked by many domain experts. This proves the high value of our approach for verification of Bayesian network models.

The data itself implied a growth in alcohol consumption between 1967 and 1984, which we considered to be a plausible finding.

We then decided to modify the network structure based on our findings by adding the edges Sex → FEV and Year → Alc. One could of course consider other methods of modifying the network structure, like deleting edges or reversing their direction. A brief overview of more advanced methods of Bayesian network modification can be found in [15, Chap. 3, Sect. 3.5]. Instead of adapting the network structure one could keep the structure unchanged, and tune the conditional probabilities in the network instead; see [15, Chap. 3, Sect. 4] for details.
As a method of scoring network structures we used the natural logarithm of the probability of the structure conditioned on the data; see [10, 26] for details on computing the score.

The modified network structure had a score of -7162.71, which is better than that of the original network: -7356.68.

With the modified structure, the most interesting attribute set was {Kol, Sex, Year} with interestingness 0.0665. We found in the data that cholesterol levels decreased between the two years in which the study was made, and that cholesterol level depends on sex. We found similar trends in the U.S. population based on data from the American Heart Association [2]. Adding the edges Year → Kol and Sex → Kol improved the network score to -7095.25.

{FEV, Alc, Year} became the most interesting attribute set, with an interestingness of 0.0286. Its interestingness is, however, much lower than that of the previous most interesting attribute sets. Also, we were not able to get any improvement in the network score after adding edges related to that attribute set.

Since we were unable to obtain a better network in this case, we used topological pruning, expecting that some other attribute sets might be the true cause of the observed discrepancies. Only four attribute sets, given below, were topologically 0.01-interesting:

{Kol, BMI}           0.0144
{Kol, Alc}           0.0126
{Smok, Sex, Year}    0.0121
{Alc, Work}          0.0110

We found all those patterns intuitively valid, but were unable to obtain an improvement in the network's score by adding related edges. Moreover, the interestingness values were quite small. We thus finished the interactive network structure improvement process with the final result given in Figure 4b.

The algorithm was implemented in Python and used on a 1.7 GHz Pentium 4 machine. The computation of interestingness for this example took only a few seconds, so interactive use of the program was possible. Further performance evaluation is given below.

5.2 Performance Evaluation

We now present the performance evaluation of the algorithm for finding all attribute sets with given minimum interestingness. We used the UCI datasets and Bayesian networks learned from data using B-Course [26]. The results are given in Table 2.

The max. size column gives the maximum size of frequent attribute sets considered. The #marginals column gives the total number of marginal distributions computed from the Bayesian network. The attribute sets whose marginal distributions have been cached between the two stages of the algorithm are not counted twice.

The time does not include the initial run of the Apriori algorithm used to find frequent itemsets in the data (the time of the AprioriBN algorithm is included though). The times for larger networks can be substantial; however, the proposed method still has a huge advantage over manually evaluating thousands of frequent patterns, and there are several possibilities to speed up the algorithm, not yet implemented by the authors, discussed in the following section.

Figure 5: Time of computation depending on the number of marginal distributions computed for the lymphography database.
Figure 6: Time of computation depending on the number of attributes for datasets from Table 2 (separate curves for max. size = 3 and max. size = 4).

The maximum interestingness column gives the interestingness of the most interesting attribute set found for a given dataset. It can be seen that there are still highly interesting patterns to be found after using classical Bayesian network learning methods. This proves that frequent pattern and association rule mining has the capability to discover patterns which traditional methods might miss.

To give a better understanding of how the algorithm scales as the problem size increases, we present two additional figures. Figure 5 shows how the computation time increases with the number of marginal distributions that must be computed from the Bayesian network. It was obtained by varying the maximum size of attribute sets between 1 and 5. The value of ε = 0.067 was used (equivalent to one row in the database). It can be seen that the computation time grows slightly slower than the number of marginal distributions. The reason for that is that the more marginal distributions we need to compute, the more opportunities we have to avoid using bucket elimination by using direct marginalization from a superset instead.

Table 2: Performance evaluation of the algorithm for finding all ε-interesting attribute sets.
dataset        #attrs  ε      max. size  #marginals  time [s]  max. inter.
KSL            9       0.01   5          382         1.12      0.032
soybean        36      0.075  3          7633        1292      0.064
soybean        36      0.075  4          61976       7779      0.072
breast-cancer  10      0.01   5          638         3.49      0.082
annealing      40      0.01   3          9920        1006      0.048
annealing      40      0.01   4          92171       6762      0.061
mushroom       23      0.01   3          2048        132.78    0.00036
mushroom       23      0.01   4          10903       580.65    0.00036
lymphography   19      0.067  3          1160        29.12     0.123
lymphography   19      0.067  4          5036        106.13    0.126
splice         61      0.01   3          37882       8456      0.036

Determining how the computation time depends on the size of the network is difficult, because the time depends also on the network structure and the number of marginal distributions computed (which in turn depends on the maximum size of attribute sets considered).

We nevertheless show in Figure 6 the numbers of attributes and computation times plotted against each other for some of the datasets from Table 2. Data corresponding to maximum attribute set sizes equal to 3 and 4 are plotted separately.

It can be seen that the algorithm remains practically usable for fairly large networks of up to 60 variables, even though the computation time grows exponentially. For larger networks approximate inference methods might be necessary, but this is beyond the scope of this paper.

CONCLUSIONS AND DIRECTIONS OF FUTURE RESEARCH

A method of computing the interestingness of itemsets and attribute sets with respect to background knowledge encoded as a Bayesian network was presented. We built efficient algorithms for computing the interestingness of frequent itemsets and finding all attribute sets with given minimum interestingness. Experimental evaluation proved the effectiveness and practical usefulness of the algorithms for finding interesting, unexpected patterns.

An obvious direction for future research is increasing the efficiency of the algorithms. A partial solution would be to rewrite the code in C, or to use some off-the-shelf highly optimized Bayesian network library like Intel's PNL.
Another\napproach would be to use approximate inference methods\nlike Gibbs sampling.\nAdding or removing edges in a Bayesian network does not\nalways influence all of its marginal distributions. Interactiv-ity\nof network building could be imporved by making use of\nthis property.\nUsefulness of methods developed for mining emerging patterns\n[6], especially using borders to represent collections of\nitemsets, could also be investigated.\nAnother interesting direction (suggested by a reviewer)\ncould be to iteratively apply interesting patterns to modify\nthe network structure until no further improvement in the\nnetwork score can be achieved. A similar procedure has been\nused in [24] for background knowledge represented by rules.\nIt should be noted however that it might be better to just\ninform the user about interesting patterns and let him/her\nuse their experience to update the network. Manually up-dated\nnetwork might better reflect causal relationships between\nattributes.\nAnother research area could be evaluating other probabilistic\nmodels such as log-linear models and chain graphs\ninstead of Bayesian networks.\nREFERENCES\n[1] R. Agrawal, T. Imielinski, and A. Swami. Mining\nassociation rules between sets of items in large\ndatabases. In Proc. ACM SIGMOD Conference on\nManagement of Data, pages 207216, Washington,\nD.C., 1993.\n[2] American Heart Association. Risk factors: High blood\ncholesterol and other lipids.\nhttp://www.americanheart.org/downloadable/\nheart/1045754065601FS13CHO3.pdf\n, 2003.\n[3] R. J. Bayardo and R. Agrawal. Mining the most\ninteresting rules. In Proc. of the 5th ACM SIGKDD\nInt'l Conf. on Knowledge Discovery and Data Mining,\npages 145154, August 1999.\n[4] Susanne G. Bttcher and Claus Dethlefsen. Deal: A\npackage for learning bayesian networks.\nwww.math.auc.dk/novo/Publications/\nbottcher:dethlefsen:03.ps\n, 2003.\n[5] Rina Dechter. Bucket elimination: A unifying\nframework for reasoning. Artificial Intelligence,\n113(1-2):4185, 1999.\n[6] Guozhu Dong and Jinyan Li. Efficient mining of\nemerging patterns: Discovering trends and differences.\nIn Proc. of the 5th Intl. Conf. on Knowledge Discovery\nand Data Mining (KDD'99), pages 4352, San Diego,\nCA, 1999.\n[7] William DuMouchel and Daryl Pregibon. Empirical\nbayes screening for multi-item associations. In\nProceedings of the Seventh International Conference\non Knowledge Discovery and Data Mining, pages\n6776, 2001.\n[8] H. Gray. Gray's Anatomy. Grammercy Books, New\nYork, 1977.\n[9] Venky Harinarayan, Anand Rajaraman, and Jeffrey D.\nUllman. Implementing data cubes efficiently. In Proc.\nACM SIGMOD, pages 205216, 1996.\n[10] David Heckerman. A tutorial on learning with\nBayesian networks. Technical Report MSR-TR-95-06,\nMicrosoft Research, Redmond, WA, 1995.\n185\nResearch Track Paper\n[11] R. Hilderman and H. Hamilton. Knowledge discovery\nand interestingness measures: A survey. Technical\nReport CS 99-04, Department of Computer Science,\nUniversity of Regina, 1999.\n[12] C. Huang and A. Darwiche. Inference in belief\nnetworks: A procedural guide. Intl. Journal of\nApproximate Reasoning, 15(3):225263, 1996.\n[13] S. Jaroszewicz and D. A. Simovici. A general measure\nof rule interestingness. In 5th European Conference on\nPrinciples of Data Mining and Knowledge Discovery\n(PKDD 2001), pages 253265, 2001.\n[14] S. Jaroszewicz and D. A. Simovici. Pruning redundant\nassociation rules using maximum entropy principle. 
In\nAdvances in Knowledge Discovery and Data Mining,\n6th Pacific-Asia Conference, PAKDD'02, pages\n135147, Taipei, Taiwan, May 2002.\n[15] Finn V. Jensen. Bayesian Networks and Decision\nGraphs. Springer Verlag, New York, 2001.\n[16] Bing Liu, Wynne Hsu, and Shu Chen. Using general\nimpressions to analyze discovered classification rules.\nIn Proceedings of the Third International Conference\non Knowledge Discovery and Data Mining (KDD-97),\npage 31. AAAI Press, 1997.\n[17] Bing Liu, Wynne Jsu, Yiming Ma, and Shu Chen.\nMining interesting knowledge using DM-II. In\nProceedings of the Fifth ACM SIGKDD International\nConference on Knowledge Discovery and Data Mining,\npages 430434, N.Y., August 1518 1999.\n[18] Heikki Mannila. Local and global methods in data\nmining: Basic techniques and open problems. In\nICALP 2002, 29th International Colloquium on\nAutomata, Languages, and Programming, Malaga,\nSpain, July 2002. Springer-Verlag.\n[19] Heikki Mannila and Hannu Toivonen. Levelwise search\nand borders of theories in knowledge discovery. Data\nMining and Knowledge Discovery, 1(3):241258, 1997.\n[20] T.M. Mitchell. Machine Learning. McGraw-Hill, 1997.\n[21] Kevin Murphy. A brief introduction to graphical\nmodels and bayesian networks.\nhttp://www.ai.mit.edu/~murphyk/Bayes/\nbnintro.html\n, 1998.\n[22] B. Padmanabhan and A. Tuzhilin. Belief-driven\nmethod for discovering unexpected patterns. In\nProceedings. of the 4th International Conference on\nKnowledge Discovery and Data Mining (KDD'98),\npages 94100, August 1998.\n[23] B. Padmanabhan and A. Tuzhilin. Small is beautiful:\ndiscovering the minimal set of unexpected patterns. In\nProceedinmgs of the 6th ACM SIGKDD International\nConference on Knowledge Discovery and Data Mining\n(KDD'00), pages 5463, N. Y., August 2000.\n[24] B. Padmanabhan and A. Tuzhilin. Methods for\nknowledge refinement based on unexpected patterns.\nDecision Support Systems, 33(3):221347, July 2002.\n[25] Judea Pearl. Probabilistic Reasoning in Intelligent\nSystems. Morgan Kaufmann, Los Altos, CA, 1998.\n[26] P.Myllym\naki, T.Silander, H.Tirri, and P.Uronen.\nB-course: A web-based tool for bayesian and causal\ndata analysis. International Journal on Artificial\nIntelligence Tools, 11(3):369387, 2002.\n[27] D. Poole and N. L. Zhang. Exploiting contextual\nindependence in probablisitic inference. Journal of\nArtificial Intelligence Research, 18:263313, 2003.\n[28] D. Shah, L. V. S. Lakshmanan, K. Ramamritham, and\nS. Sudarshan. Interestingness and pruning of mined\npatterns. In 1999 ACM SIGMOD Workshop on\nResearch Issues in Data Mining and Knowledge\nDiscovery, 1999.\n[29] Abraham Silberschatz and Alexander Tuzhilin. On\nsubjective measures of interestingness in knowledge\ndiscovery. In Knowledge Discovery and Data Mining,\npages 275281, 1995.\n[30] E. Suzuki. Autonomous discovery of reliable exception\nrules. In Proceedings of the Third International\nConference on Knowledge Discovery and Data Mining\n(KDD-97), page 259. AAAI Press, 1997.\n[31] E. Suzuki and Y. Kodratoff. Discovery of surprising\nexception rules based on intensity of implication. In\nProc of PKDD-98, Nantes, France, pages 1018, 1998.\n[32] P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the\nright interestingness measure for association patterns.\nIn Proc of the Eighth ACM SIGKDD Int'l Conf. on\nKnowledge Discovery and Data Mining (KDD-2002),\npages 3241, 2002.\n[33] M. J. Zaki. Generating non-redundant association\nrules. 
In Proceedinmgs of the 6th ACM SIGKDD\nInternational Conference on Knowledge Discovery and\nData Mining (KDD-00), pages 3443, N. Y.,\nAugust 2023 2000.\n186\nResearch Track Paper\n", "keywords": "Bayesian network;frequent itemsets;association rules;interestingness;emerging pattern;association rule;background knowledge;frequent itemset"} {"name": "118", "title": "Interference Evaluation of Bluetooth and IEEE 802.11b Systems", "abstract": "The emergence of several radio technologies, such as Bluetooth and IEEE 802.11, operating in the 2.4 GHz unlicensed ISM frequency band, may lead to signal interference and result in significant performance degradation when devices are colocated in the same environment. The main goal of this paper is to evaluate the effect of mutual interference on the performance of Bluetooth and IEEE 802.11b systems. We develop a simulation framework for modeling interference based on detailed MAC and PHY models. First, we use a simple simulation scenario to highlight the effects of parameters, such as transmission power, offered load, and traffic type. We then turn to more complex scenarios involving multiple Bluetooth piconets and WLAN devices.", "fulltext": "Introduction\nThe proliferation of mobile computing devices including laptops\n, personal digital assistants (PDAs), and wearable computers\nhas created a demand for wireless personal area networks\n(WPANs). WPANs allow closely located devices to\nshare information and resources.\nA key challenge in the\ndesign of WPANs is adapting to a hostile radio environment\nthat includes noise, time-varying channels, and abundant\nelectromagnetic interference. Today, most radio technologies\nconsidered by WPANs (Bluetooth Special Interest\nGroup [2], and IEEE 802.15) employ the 2.4 GHz ISM frequency\nband, which is also used by Local Area Network\n(WLAN) devices implementing the IEEE 802.11b standard\nspecifications [9]. It is anticipated that some interference will\nresult from all these technologies operating in the same environment\n. WLAN devices operating in proximity to WPAN\ndevices may significantly impact the performance of WPAN\nand vice versa.\nThe main goal of this paper is to present our findings on the\nperformance of these systems when operating in close proximity\nto each other. Our results are based on detailed models\nfor the MAC, PHY, and wireless channel. Recently, a number\nof research activities has led to the development of tools for\nwireless network simulation [1,16]. While some of these tools\ninclude a PHY layer implementation, it is often abstracted to\na discrete channel model that does not implement interference\nper se. Therefore, in order to model interference and capture\nthe time and frequency collisions, we chose to implement an\nintegrated MAC-PHY module.\nEfforts to study interference in the 2.4 GHz band are relatively\nrecent. For example, interference caused by microwave\novens operating in the vicinity of a WLAN network has been\ninvestigated [17] and requirements on the signal-to-noise ratio\n(SNR) are presented by Kamerman and Erkocevic [11].\n\nCorresponding author.\nE-mail: nada.golmie@nist.gov\nIn addition, there has been several attempts at quantifying\nthe impact of interference on both the WLAN and Bluetooth\nperformance. 
Published results can be classified into at least\nthree categories depending on whether they rely on analysis,\nsimulation, or experimental measurements.\nAnalytical results based on probability of packet collision\nwere obtained by Shellhammer [13], Ennis [4], and\nZyren [18] for the WLAN packet error and by Golmie [6]\nfor the Bluetooth packet error. In all these cases, the probability\nof packet error is computed based on the probability of\npacket collision in time and frequency. Although these analytical\nresults can often give a first order approximation on the\nimpact of interference and the resulting performance degradation\n, they often make assumptions concerning the traffic\ndistributions and the operation of the media access protocol,\nwhich can make them less realistic. More importantly, in order\nfor the analysis to be tractable, mutual interference that\ncan change the traffic distribution for each system is often ig-nored\n.\nOn the other hand, experimental results, such as the ones\nobtained by Kamerman [10], Howitt et al. [8], and Fumolari\n[5] for a two-node WLAN system and a two-node Bluetooth\npiconet, can be considered more accurate at the cost\nof being too specific to the implementation tested. Thus, a\nthird alternative consists of using modeling and simulation to\nevaluate the impact of interference. This third approach can\nprovide a more flexible framework. Zurbes et al. [19] present\nsimulation results for a number of Bluetooth devices located\nin a single large room. They show that for 100 concurrent\nweb sessions, performance is degraded by only 5%. Golmie\net al. [7] use a detailed MAC and PHY simulation framework\nto evaluate the impact of interference for a pair of WLAN\ndevices and a pair of Bluetooth devices. Similar results have\nbeen obtained by Lansford et al. [12] for the case of colocated\nWLAN and Bluetooth devices on the same laptop. Their simulation\nmodels are based on a link budget analysis and a theoretical\ncalculation of the BER (Q function calculation). The\nwork in this paper is an extension of [7].\n202\nGOLMIE ET AL.\nFigure 1. Master TX/RX hopping sequence.\nThis paper is organized as follows. In section 2, we give\nsome general insights on the Bluetooth and IEEE 802.11 protocol\noperation. In section 3, we describe in great detail our\nmodeling approach for the MAC, PHY and wireless channel.\nIn section 4, we evaluate the impact of interference on both\nBluetooth and WLAN performance and present simulation results\n. Concluding remarks are offered in section 5.\nProtocol overview\nIn this section, we give a brief overview of the Bluetooth technology\n[2] and discuss the main functionality of its protocol\nspecifications. Bluetooth is a short range (010 m) wireless\nlink technology aimed at replacing non-interoperable proprietary\ncables that connect phones, laptops, PDAs and other\nportable devices together. Bluetooth operates in the ISM frequency\nband starting at 2.402 GHz and ending at 2.483 GHz\nin the USA and Europe. 79 RF channels of 1 MHz width are\ndefined. The air interface is based on an antenna power of\n1 mW with an antenna gain of 0 dB. The signal is modulated\nusing binary Gaussian Frequency Shift Keying (GFSK). The\nraw data rate is defined at 1 Mbit/s. A Time Division Multiplexing\n(TDM) technique divides the channel into 625 s\nslots. Transmission occurs in packets that occupy an odd\nnumber of slots (up to 5). 
Each packet is transmitted on a different hop frequency with a maximum frequency hopping rate of 1600 hops/s.

Two or more units communicating on the same channel form a piconet, where one unit operates as a master and the others (a maximum of seven active at the same time) act as slaves. A channel is defined as a unique pseudo-random frequency hopping sequence derived from the master device's 48-bit address and its Bluetooth clock value. Slaves in the piconet synchronize their timing and frequency hopping to the master upon connection establishment. In the connection mode, the master controls the access to the channel using a polling scheme where master and slave transmissions alternate. A slave packet always follows a master packet transmission, as illustrated in figure 1, which depicts the master's view of the slotted TX/RX channel.

There are two types of link connections that can be established between a master and a slave: the Synchronous Connection-Oriented (SCO) link, and the Asynchronous Connection-Less (ACL) link. The SCO link is a symmetric point-to-point connection between a master and a slave where the master sends an SCO packet in one TX slot at regular time intervals, defined by T_SCO time slots. The slave responds with an SCO packet in the next TX opportunity. T_SCO is set to either 2, 4 or 6 time slots for HV1, HV2, or HV3 packet formats, respectively. All three formats of SCO packets are defined to carry 64 Kbit/s of voice traffic and are never retransmitted in case of packet loss or error.

The ACL link is an asymmetric point-to-point connection between a master and active slaves in the piconet. An Automatic Repeat Request (ARQ) procedure is applied to ACL packets, where packets are retransmitted in case of loss until a positive acknowledgement (ACK) is received at the source. The ACK is piggy-backed in the header of the returned packet, where an ARQN bit is set to either 1 or 0 depending on whether or not the previous packet was successfully received. In addition, a sequence number (SEQN) bit is used in the packet header in order to provide a sequential ordering of data packets in a stream and filter out retransmissions at the destination. Forward Error Correction (FEC) is used on some SCO and ACL packets in order to correct errors and reduce the number of ACL retransmissions.

Both ACL and SCO packets have the same packet format. It consists of a 72-bit access code used for message identification and synchronization, a 54-bit header, and a variable length payload that contains either a voice or a data packet depending on the type of link connection that is established between a master and a slave.

A repetition code of rate 1/3 is applied to the header, and a block code with minimum distance d_min = 14 is applied to the access code, so that up to 13 errors are detected and (d_min - 1)/2 = 6 can be corrected. Note that uncorrected errors in the header and the access code lead to a packet drop. Voice packets have a total packet length of 366 bits including the access code and header. A repetition code of 1/3 is used for the HV1 packet payload. On the other hand, DM and HV2 packet payloads use a 2/3 block code where every 10 bits of information are encoded with 15 bits. DH and HV3 packets do not have any encoding on their payload. HV packets do not have a CRC in the payload. In case of an error occurrence in the payload, the packet is never dropped. Uncorrected errors for DM and DH packets lead to dropped packets and the application of the ARQ and SEQN schemes. Table 1 summarizes the error occurrences in the packet and the actions taken by the protocol.

Table 1. Summary of error occurrences in the packet and actions taken in case errors are not corrected.
Error location            Error correction    Action taken
Access code               d_min = 14          Packet dropped
Packet header             1/3 repetition      Packet dropped
HV1 payload               1/3 repetition      Packet accepted
HV2 payload               2/3 block code      Packet accepted
HV3 payload               No FEC              Packet accepted
DM1, DM3, DM5 payload     2/3 block code      Packet dropped
DH1, DH3, DH5 payload     No FEC              Packet accepted
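The per-packet accept/drop behavior can be summarized in a short decision routine. The sketch below is ours, not code from the simulator; the error flags passed in are assumed to come from the receiver after FEC decoding, and it covers only the cases that the prose and Table 1 state unambiguously (access code, header, HV voice payloads, and DM data payloads).

```python
def bluetooth_packet_action(pkt_type, access_code_error, header_error, payload_error):
    """Accept/drop decision following the rules above: uncorrected access-code or
    header errors always drop the packet; uncorrected DM payload errors drop the
    packet and trigger ARQ; HV voice payloads are accepted even when corrupted."""
    if access_code_error or header_error:
        return "dropped"
    if payload_error and pkt_type in {"DM1", "DM3", "DM5"}:
        return "dropped"          # ACL data: retransmitted via the ARQ/SEQN scheme
    return "accepted"             # HV1/HV2/HV3 voice is never retransmitted

assert bluetooth_packet_action("HV1", False, False, True) == "accepted"
assert bluetooth_packet_action("DM5", False, False, True) == "dropped"
```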
Figure 2. WLAN frame transmission scheme.

2.2. IEEE 802.11b

The IEEE 802.11 standard [9] defines both the physical (PHY) and medium access control (MAC) layer protocols for WLANs. In this sequel, we shall be using WLAN and 802.11b interchangeably.

The IEEE 802.11 standard calls for three different PHY specifications: frequency hopping (FH) spread spectrum, direct sequence (DS) spread spectrum, and infrared (IR). The transmit power for DS and FH devices is defined at a maximum of 1 W and the receiver sensitivity is set to -80 dBmW. Antenna gain is limited to 6 dB maximum. In this work, we focus on the 802.11b specification (DS spread spectrum) since it is in the same frequency band as Bluetooth and the most commonly deployed.

The basic data rate for the DS system is 1 Mbit/s encoded with differential binary phase shift keying (DBPSK). Similarly, a 2 Mbit/s rate is provided using differential quadrature phase shift keying (DQPSK) at the same chip rate of 11 x 10^6 chips/s. Higher rates of 5.5 and 11 Mbit/s are also available using techniques combining quadrature phase shift keying and complementary code keying (CCK); all of these systems use 22 MHz channels.

The IEEE 802.11 MAC layer specifications, common to all PHYs and data rates, coordinate the communication between stations and control the behavior of users who want to access the network. The Distributed Coordination Function (DCF), which describes the default MAC protocol operation, is based on a scheme known as carrier-sense, multiple access, collision avoidance (CSMA/CA). Both the MAC and PHY layers cooperate in order to implement collision avoidance procedures. The PHY layer samples the received energy over the medium transmitting data and uses a clear channel assessment (CCA) algorithm to determine if the channel is clear. This is accomplished by measuring the RF energy at the antenna and determining the strength of the received signal, commonly known as RSSI, or received signal strength indicator. In addition, carrier sense can be used to determine if the channel is available. This technique is more selective since it verifies that the signal is the same carrier type as 802.11 transmitters. In all of our simulations, we use carrier sense and not RSSI to determine if the channel is busy. Thus, a Bluetooth signal will corrupt WLAN packets, but it will not cause the WLAN to defer transmission.

A virtual carrier sense mechanism is also provided at the MAC layer. It uses the request-to-send (RTS) and clear-to-send (CTS) message exchange to make predictions of future traffic on the medium and updates the network allocation vector (NAV) available in stations.
Communication is established\nwhen one of the wireless nodes sends a short RTS frame. The\nreceiving station issues a CTS frame that echoes the sender's\naddress. If the CTS frame is not received, it is assumed that a\ncollision occurred and the RTS process starts over. Regardless\nof whether the virtual carrier sense routine is used or\nnot, the MAC is required to implement a basic access procedure\n(depicted in figure 2) as follows. If a station has data to\nsend, it waits for the channel to be idle through the use of the\nCSMA/CA algorithm. If the medium is sensed idle for a period\ngreater than a DCF interframe space (DIFS), the station\ngoes into a backoff procedure before it sends its frame. Upon\nthe successful reception of a frame, the destination station returns\nan ACK frame after a Short interframe space (SIFS).\nThe backoff window is based on a random value uniformly\ndistributed in the interval\n[CW\nmin\n,\nCW\nmax\n], where CW\nmin\nand CW\nmax\nrepresent the Contention Window parameters. If\nthe medium is determined busy at any time during the backoff\nslot, the backoff procedure is suspended. It is resumed after\nthe medium has been idle for the duration of the DIFS period.\nIf an ACK is not received within an ACK timeout interval, the\nstation assumes that either the data frame or the ACK was lost\n204\nGOLMIE ET AL.\nand needs to retransmit its data frame by repeating the basic\naccess procedure.\nErrors are detected by checking the Frame Check Sequence\n(FCS) that is appended to the packet payload. In case\nan error is found, the packet is dropped and is then later retransmitted\nIntegrated simulation model\nIn this section, we describe the methodology and platform\nused to conduct the performance evaluation. The simulation\nenvironment consists of detailed models for the RF channel,\nthe PHY, and MAC layers developed in C and OPNET (for the\nMAC layer). These detailed simulation models constitute an\nevaluation framework that is critical to studying the various\nintricate effects between the MAC and PHY layers. Although\ninterference is typically associated with the RF channel modeling\nand measured at the PHY layer, it can significantly impact\nthe performance of higher layer applications including\nthe MAC layer. Similarly, changes in the behavior of the\nMAC layer protocol and the associated data traffic distribution\ncan play an important factor in the interference scenario\nand affect the overall system performance.\nFigure 3 shows a packet being potentially corrupted by two\ninterference packets. Consider that the desired packet is from\nthe WLAN and the interference packets are Bluetooth (the\nfigure is equally valid if the roles are reversed, except that\nthe frequencies of the packets will be different). For interference\nto occur, the packets must overlap in both time and\nfrequency. That is, the interference packets must be within\nthe 22 MHz bandwidth of the WLAN. In a system with many\nBluetooth piconets, there may be interference from more than\none packet at any given time. We define a period of stationarity\n(POS) as the time during which the interference is constant\n. 
For example, t_i <= t <= t_{i+1} is such a period, as is t_{i+1} <= t <= t_{i+2}.

Even during a POS where there is one or more interferers, the number and location of bit errors in the desired packet depends on a number of factors: (1) the signal-to-interference ratio (SIR) and the signal-to-noise ratio at the receiver, (2) the type of modulation used by the transmitter and the interferer, and (3) the channel model. For this reason, it is essential to use accurate models of the PHY and channel, as described below. Just because two packets overlap in time and frequency does not necessarily lead to bit errors and the consequent packet loss. While one can use (semi-)analytic models instead, such as approximating Bluetooth interference on WLAN as a narrowband tone jammer, the use of detailed signal processing-based models better allows one to handle multiple simultaneous interferers.

Figure 3. Packet collision and placement of errors. The bit error rate (BER) is roughly constant during each of the three indicated periods.

In order to simulate the overall system, an interface module was created that allows the MAC models to use the physical layer and channel models. This interface module captures all changes in the channel state (mainly in the energy level). Consider the Bluetooth transmitter-channel-receiver chain of processes. For a given packet, the transmitter creates a set of signal samples that are corrupted by the channel and input to the receiver; interference may be present for all or only specific periods of stationarity, as shown in figure 3. A similar chain of processing occurs for an 802.11b packet. The interface module is designed to process a packet at a time.

At the end of each packet transmission, the MAC layer generates a data structure that contains all the information required to process the packet. This structure includes a list of all the interfering packets with their respective duration, timing offset, frequency, and transmitted power. The topology of the scenario is also included. The data structure is then passed to the physical layer along with a stream of bits representing the packet being transmitted. The physical layer returns the bit stream after placing the errors resulting from the interference.

3.1. MAC model

We used OPNET to develop a simulation model for the Bluetooth and IEEE 802.11 protocols. For Bluetooth, we implemented the access protocol according to the specifications [2]. We assume that a connection is already established between the master and the slave and that the synchronization process is complete. The Bluetooth hopping pattern algorithm is implemented. Details of the algorithm are provided in section 2.1. A pseudo-random number generator is used instead of the implementation-specific circuitry that uses the master's clock and 48-bit address to derive a random number.

For the IEEE 802.11 protocol, we used the model available in the OPNET library and modified it to bypass the OPNET radio model and to use our MAC/PHY interface module. We focus in this study on the Direct Sequence mode, which uses a fixed frequency that occupies 22 MHz of the frequency band. The center frequency is set to 2.437 GHz.

At the MAC layer, a set of performance metrics are defined, including probability of packet loss. Packet loss measures the number of packets discarded at the MAC layer due to errors in the bit stream. This measure is calculated after performing error correction.

3.2. PHY model

The transmitters, channel, and receivers are implemented at complex baseband. For a given transmitter, inphase
PHY model\nThe transmitters, channel, and receivers are implemented\nat complex baseband.\nFor a given transmitter, inphase\nINTERFERENCE EVALUATION OF BLUETOOTH AND IEEE 802.11b SYSTEMS\n205\nand quadrature samples are generated at a sampling rate of\n44\n10\n6\nper second. This rate provides four samples/symbol\nfor the 11 Mbit/s 802.11 mode, enough to implement a good\nreceiver.\nIt is also high enough to allow digital modulation\nof the Bluetooth signal to account for its frequency hopping\n. Specifically, since the Bluetooth signal is approximately\n1 MHz wide, it can be modulated up to almost 22 MHz,\nwhich is more than enough to cover the 11 MHz bandwidth\n(one-sided) of the 802.11 signal. The received complex samples\nfrom both the desired transmitter and the interferer(s) are\nadded together at the receiver.\nWhile there are a number of possible Bluetooth receiver\ndesigns, we chose to implement the noncoherent limiter-discriminator\n(LD) receiver [3,14]. Its simplicity and relatively\nlow cost should make it the most common type for\nmany consumer applications. Details of the actual design are\ngiven in [15].\nIn the 802.11b CCK receiver, each group of eight information\nbits chooses a sequence of eight consecutive chips that\nforms a symbol. As before, the inphase and quadrature components\nof these chips are transmitted. The receiver looks at\nthe received symbol and decides which was the most likely\ntransmitted one. While one can implement this decoding procedure\nby correlating against all 256 possible symbols, we\nchose a slightly sub-optimal, but considerably faster architecture\nsimilar to the WalshHadamard transform; again details\ncan be found in [15].\n3.3. Channel model\nThe channel model consists of a geometry-based propagation\nmodel for the signals, as well as a noise model. For the indoor\nchannel, we apply a propagation model consisting of\ntwo parts: (1) line-of-sight propagation (free-space) for the\nfirst 8 m, and (2) a propagation exponent of 3.3 for distances\nover 8 m. Consequently, the path loss in dB is given by\nL\np\n=\n\n\n\n\n32.45\n+ 20 log(f d) if d < 8 m,\n58.3\n+ 33 log d8\notherwise,\n(1)\nwhere f is the frequency in GHz, and d is the distance in meters\n. This model is similar to the one used by Kamerman [10].\nAssuming unit gain for the transmitter and receiver antennas\nand ignoring additional losses, the received power in dBmW\nis\nP\nR\n= P\nT\n- L\np\n,\n(2)\nwhere P\nT\nis the transmitted power also in dBmW. Equation\n(2) is used for calculating the power received at a given\npoint due to either a Bluetooth or an 802.11 transmitter, since\nthis equation does not depend on the modulation method.\nThe main parameter that drives the PHY layer performance\nis the signal-to-interference ratio between the desired signal\nand the interfering signal. This ratio is given in dB by\nSIR\n= P\nR\n- P\nI\n,\n(3)\nwhere P\nI\nis the interference power at the receiver. In the absence\nof interference, the bit error rate for either the Bluetooth\nor WLAN system is almost negligible for the transmitter powers\nand ranges under consideration.\nTo complete the channel model, noise is added to the received\nsamples, according to the specified SNR. In decibels,\nthe signal-to-noise ratio is defined by SNR\n= P\nR\n-S\nR\n, where\nP\nR\nis the received signal power, and S\nR\nis the receiver's sensitivity\nin dBmW; this latter value is dependent on the receiver\nmodel and so is an input parameter. 
Additive white Gaussian noise (AWGN) is used to model the noise at the receivers.

3.4. Model validation

The results obtained from the simulation models were validated against experimental and analytical results.

Since the implementation of the PHY layer required choosing a number of design parameters, the first step in the validation process is comparing the PHY results against theoretical results. Complete BER curves of the Bluetooth and 802.11b systems are given in [15]; for the AWGN and flat Rician channels without interference, all the results match very closely the analytical bounds and other simulation results. Also, the simulation results for both the MAC and PHY models were compared and validated against analytical results for packet loss given different traffic scenarios [6].

For the experimental testing, we use the topology in figure 4 and compare the packet loss observed for Bluetooth voice and WLAN data with the simulation results in figure 5. The experimental and simulation results are in good agreement.

Simulation results

We present simulation results to evaluate the performance of Bluetooth in the presence of WLAN interference and vice versa. First, we consider the effects of parameters such as transmitted power, offered load, hop rate, and traffic type on interference. Second, we look at two realistic interference scenarios to quantify the severity of the performance degradation for the Bluetooth and WLAN systems.

4.1. Factors affecting interference

We first consider a four-node topology consisting of two WLAN devices and two Bluetooth devices (one master and one slave), as shown in figure 4. The WLAN access point (AP) is located at (0, 15) m, and the WLAN mobile is fixed at (0, 1) m. The Bluetooth slave device is fixed at (0, 0) m and the master is fixed at (1, 0) m.

In an effort to control the interference on Bluetooth and WLAN, we define two scenarios. In the first scenario, we let the mobile be the generator of 802.11 data, while the AP is the sink. In this case, the interference is from the mobile sending data packets to the AP and receiving acknowledgments (ACKs) from it. Since most of the WLAN traffic is originating close to the Bluetooth piconet, both the master and the slave may suffer from serious interference. In the second scenario, the traffic is generated at the AP and received at the WLAN mobile. Because the data packets are generally longer than the ACKs, this is a more critical scenario for the WLAN than when the mobile is the source. Table 2 summarizes the two scenarios.

Figure 4. Topology 1. Two WLAN devices and one Bluetooth piconet.

Table 2. Summary of the scenarios.
Scenario   Desired signal   Interferer signal   WLAN AP   WLAN mobile
1          Bluetooth        WLAN                Sink      Source
2          WLAN             Bluetooth           Source    Sink

For Bluetooth, we consider two types of applications, voice and data. For voice, we assume a symmetric stream of 64 Kbit/s each way using HV1 packet encapsulation. For data traffic, we consider a source that generates DM5 packets. The packet interarrival time is exponentially distributed, and its mean in seconds is computed according to

t_B = 2 n_s T_s / lambda,      (4)

where lambda is the offered load; n_s is the number of slots occupied by a packet. For DM5, n_s = 5. T_s is the slot size, equal to 625 microseconds.
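As a quick illustration of equation (4) (its WLAN counterpart appears as equation (5) below), the following sketch computes the mean DM5 interarrival time for a given offered load and draws exponentially distributed interarrival times around it. This is our own illustration, not the OPNET traffic generator used in the paper.

```python
import random

def bt_mean_interarrival(offered_load, n_slots=5, slot_s=625e-6):
    """Equation (4): t_B = 2 * n_s * T_s / lambda (a DM5 packet occupies 5 slots of 625 us)."""
    return 2 * n_slots * slot_s / offered_load

def next_interarrival(offered_load):
    # The interarrival time is exponentially distributed with the mean above.
    return random.expovariate(1.0 / bt_mean_interarrival(offered_load))

# Example: at 60% offered load the mean DM5 interarrival is 2*5*625us/0.6, about 10.4 ms.
print(bt_mean_interarrival(0.6))
```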
For WLAN, we use the 11 Mbit/s mode and consider a data application. Typical applications for WLAN could be ftp or http. However, since we are mainly interested in the MAC layer performance, we abstract the parameters for the application model to packet size and offered load and do not model the entire TCP/IP stack. We fix the packet payload to 12,000 bits, which is the maximum size for the MAC payload data unit, and vary lambda. The packet interarrival time in seconds, t_W, is exponentially distributed, and its mean is computed according to

t_W = (192/1,000,000 + 12,224/11,000,000) / lambda,      (5)

where the 192-bit PLCP header is sent at 1 Mbit/s and the payload at 11 Mbit/s. Unless specified otherwise, we use the configuration and system parameters shown in table 3.

Table 3. Simulation parameters.
Simulation parameters                    Values
Propagation delay                        5 microseconds/km
Length of simulation run                 30 s
Bluetooth parameters
  ACL Baseband Packet Encapsulation      DM5
  SCO Baseband Packet Encapsulation      HV1
  Transmitted Power                      1 mW
WLAN parameters
  Transmitted power                      25 mW
  Packet header                          224 bits
  Packet payload                         12,000 bits

For scenarios 1 and 2, we run 15 trials using a different random seed for each trial. In addition to plotting the mean value, confidence intervals, showing plus and minus two standard deviations, are also included. From figures 5 and 6, one sees that the statistical variations around the mean values are very small. In addition to the comparisons with analytical and experimental results described in section 3.4, this fact provides further validation for the results.

4.1.1. WLAN transmission power

First, we look at the effect on Bluetooth of increasing the WLAN transmission power in scenario 1; that is, increasing the interferer transmission power on the victim signal. Since power control algorithms exist in many WLAN implementations, it is important to consider how varying the transmitted power changes the interference. However, since Bluetooth was designed as a low power device, we fix its transmitter power at 1 mW for all simulations.

We fix lambda_WLAN to 60% for different Bluetooth traffic types and values of lambda. In figure 5(a), we note a saturation effect around 10 mW. A threshold, which is close to 22/79, corresponds to the probability that Bluetooth is hopping in the WLAN occupied band. Thus, increasing the WLAN transmission power beyond 10 mW does not affect the Bluetooth packet loss. Between 1 and 5 mW, a small change in the WLAN transmitted power triples the Bluetooth packet loss. Please note the relative positions of the packet loss curves for different values of lambda between 1 and 5 mW; as lambda increases, the packet loss is higher. Also, note that Bluetooth voice has the lowest packet loss, partly due to its short packet size. A second reason for the low loss probability is that voice packets are rejected only if there are errors in the access code or packet headers, cf. table 1. A packet may be accepted with a relatively large number of bit errors in the payload, which may lead to a substantial reduction in subjective voice quality.

Figure 5. lambda_WLAN = 60%. (a) Scenario 1. Probability of packet loss for the Bluetooth slave. (b) Scenario 1. Probability of packet loss for the WLAN mobile. (c) Scenario 2. Probability of packet loss for the WLAN mobile.

Figure 5(b) shows the probability of packet loss for the WLAN mobile device. This corresponds to ACKs being
However, we notice a slight \"bump\" between 1 and\n5 mW. This is due to the effect of closed-loop interference.\nThe WLAN source increases its transmitted power and causes\nmore interference on the Bluetooth devices; as a result, there\nare more retransmissions in both the Bluetooth and WLAN\npiconets, which causes more lost ACKs at the WLAN source.\nNext, we consider the effect of increasing the WLAN\ntransmission power on the WLAN performance in scenario 2.\nFrom figure 5(c), we observe that even if the WLAN transmission\npower is fifty times more than the Bluetooth transmission\npower (fixed at 1 mW), the packet loss for the WLAN\ndoes not change. This leads us to an interesting observation\non power control. Basically, we note that increasing the\ntransmission power does not necessarily improve the performance\n. However, decreasing the transmission power is usu-ally\na \"good neighbor\" strategy that may help reduce the interference\non other devices.\n4.1.2. Offered load\nThe offered load, also referred to in some cases as duty cycle\n, is an interesting parameter to track. Consider scenario 1\nwhere Bluetooth is the interferer and fix the WLAN transmission\npower to 25 mW. We observe that for the WLAN, the\npacket loss is proportional to the Bluetooth offered load as\nshown in figure 6. For equal 20%, 50%, and 100%, the\npacket loss is 7%, 15%, and 25%, respectively. This observation\nhas been confirmed analytically in [6], where the packet\nerror is shown to depend not only on the offered load of the\ninterferer system but also on the packet sizes of both systems.\nAlso note that the probability of loss for the 30% WLAN of-208\nGOLMIE ET AL.\nFigure 6. Scenario 2. Probability of packet loss for the WLAN mobile.\nfered load is slightly higher than for the 60% WLAN offered\nload. However, this difference is statistically insignificant.\nThe significance of the packet size is apparent in figures\n5(a) and (c), where short Bluetooth voice packets lead\nto less packet loss for Bluetooth but cause more interference\nfor WLAN. However, for the WLAN 11 Mbit/s rate, the effect\nof changing the WLAN packet size over the range 1,000\nto 12,000 bits has very little effect on the performance of both\nthe WLAN and Bluetooth, and that is due to the relatively\nshort transmission time of the WLAN packet. At the 1 Mbit/s\nrate, WLAN packets of the same bit lengths take considerably\nlonger to transmit, and the effect of packet size is somewhat\nmore pronounced. For a further discussion of the 1 Mbit/s\ncase, please see [7].\n4.1.3. Bluetooth hop rate\nIn order to highlight the effect of the Bluetooth hop rate on\nWLAN, we use different packet types, DM1, DM3, and DM5;\nthese packets occupy 1, 3, and 5 time slots, respectively. The\nBluetooth hop rate is determined by the number of time slots\noccupied by a packet. Thus, the hop rate is 1600, 533, and\n320 hops/s for DM1, DM3, and DM5 packets, respectively.\nThe offered load for Bluetooth is set to 100%. The results in\ntable 4 clearly indicate that a faster hop rate leads to higher\npacket losses (44%, 28%, and 26% for DM1, DM3 and DM5,\nrespectively). Note that the results are rather insensitive to the\nWLAN offered load.\n4.1.4. Bluetooth traffic type\nThe question here is, whether Bluetooth voice effects WLAN\nmore than Bluetooth data, and vice versa.\nWe use three\ntypes of packets for voice encapsulation, namely, HV1, HV2,\nand HV3. HV1 represents the worst case of interference for\nWLAN as shown in table 5 with 44% packet loss. 
HV2 and HV3, which contain less error correction and more user information, are sent less often and, therefore, interfere less with WLAN (25% and 16% for HV2 and HV3, respectively). The WLAN packet loss with Bluetooth data interference is 19%. Please note that the results do not depend on the WLAN offered load.

Table 4
Scenario 2. Probability of WLAN packet loss versus Bluetooth hop rate.
BT packet type   λ_WLAN = 30%   λ_WLAN = 60%
DM1              0.449          0.449
DM3              0.286          0.277
DM5              0.269          0.248

Table 5
Scenario 2. Probability of WLAN packet loss versus Bluetooth traffic type.
BT traffic       λ_WLAN = 30%   λ_WLAN = 60%
Voice, HV1       0.446          0.470
Voice, HV2       0.253          0.257
Voice, HV3       0.166          0.169
Data, λ = 60%    0.191          0.177

Table 6
Scenario 1. Probability of Bluetooth packet loss versus Bluetooth traffic type.
BT traffic       λ_WLAN = 30%   λ_WLAN = 60%
Voice, HV1       0.077          0.141
Voice, HV2       0.075          0.149
Voice, HV3       0.069          0.136
Data, λ = 60%    0.2089         0.210

On the other hand, the probability of packet loss for Bluetooth data (20%) is higher than for Bluetooth voice (7%) as shown in table 6. Note that doubling the WLAN offered load to 60% doubles the Bluetooth voice packet loss. Also, since all three types of voice packets suffer the same packet loss, it is preferable to use HV3, which causes less interference on the WLAN. The error correction coding in HV1 and HV2 packets may provide greater range in a noise-limited environment, but this coding is far too weak to protect the packets from interference. Instead, it is the frequency hopping ability of Bluetooth that limits the damage done by the WLAN.

4.1.5. Bluetooth transmission power

While most Bluetooth devices will be operating at 1 mW, the specification also allows higher transmitter powers. Table 7 shows the probability of packet loss for both Bluetooth and the WLAN for three values of the BT transmitter power and two types of Bluetooth traffic. As expected, higher transmitter powers lead to more lost WLAN packets, regardless of the BT traffic type. Increasing the power from 1 to 10 mW leads to approximately a 50% increase in WLAN loss. Conversely, the Bluetooth packet error rate decreases. It is still not clear how beneficial this decrease is for Bluetooth; even a loss probability of 0.0335 may lead to unacceptable voice quality.

Table 7
Scenario 2. Probability of packet loss versus Bluetooth transmission power (mW). λ_WLAN = 60%.
BT traffic      BT power (mW)   BT loss probability   WLAN loss probability
Data, λ = 60%   1               0.2125                0.0961
                2.5             0.2085                0.1227
                10              0.1733                0.1358
Voice           1               0.1417                0.1253
                2.5             0.1179                0.1609
                10              0.0335                0.1977

4.1.6. Bluetooth packet error correction

So far, the results shown for the Bluetooth data are with DM5 packets, which use a 2/3 block code on the packet payload. In order to show the effect of error correction on the probability of packet loss, we repeat scenario 1 and compare the results given in figures 5(a) and 7, obtained with DM5 and DH5 packets, respectively. As expected, the probability of packet loss for DM5 packets (figure 5(a)) is slightly less than for DH5 packets (figure 7) for WLAN transmission powers less than 5 mW. Thus, for low levels of interference, a 2/3 block code can reduce the probability of loss by 4%. However, for WLAN transmission powers above 5 mW, the probability of packet loss is the same for both DM5 and DH5 packets.

Figure 7. Scenario 1. Probability of packet loss for the Bluetooth slave.

4.2.
Realistic interference topologies\nIn this section, we consider two practical interference topologies\n. While they appear to be somewhat different, they actually\ncomplement each other. The first one has the WLAN\ndevice, in the midst of the Bluetooth piconets, acting at the\nsource, while the second one has the WLAN access point acting\nas the source.\n4.2.1. Topology 2\nWe first look at the topology illustrated in figure 8. It consists\nof one WLAN AP located at (0, 15) m, and one WLAN\nmobile at (0, 0) m. The WLAN traffic is generated at the mobile\n, while the AP returns acknowledgments. The distance\nbetween the WLAN AP and mobile is d\nW\n= 15 m. There\nFigure 8. Topology 2. Two WLAN devices and ten Bluetooth piconets.\nTable 8\nExperiment 3 results.\nBT traffic\nWLAN\nBT loss\nWLAN loss\nd\nB\n= 1 m\nd\nB\n= 2 m\n\n= 30%\n30%\n0.056\n0.157\n0.121\n60%\n0.060\n0.188\n0.170\n\n= 60%\n30%\n0.057\n0.243\n0.405\n60%\n0.061\n0.247\n0.381\nVoice\n30%\n0.009\n0.104\n1\n60%\n0.008\n0.106\n1\nare ten Bluetooth piconets randomly placed, covering a disk.\nThe center of the disk is located at (0, 0) and its radius is\nr\n= 10 m. We define d\nB\nas the distance between a Bluetooth\nmaster and slave pair. d\nB\n= 1 m for half of the master and\nslave pairs, while d\nB\n= 2 m for the other half of the master\nand slave pairs.\nIn this case, the main interference on Bluetooth is caused\nby the WLAN source located in the center of the disk; the aggregation\nof the ten piconets affects the WLAN source. We\nfound that when the WLAN system is not operating, the Bluetooth\npacket loss is negligible (less than 1%). Table 8 gives\nthe packet loss for the Bluetooth and WLAN devices. The\npacket loss for the Bluetooth devices is averaged over the\nmaster and slave devices and split into two groups: piconets\nwith d\nB\n= 1 m and piconets with d\nB\n= 2 m. For WLAN, the\npacket loss is measured at the source. It is effectively zero at\nthe sink.\nWe observe that the WLAN packet loss depends on the\nBluetooth traffic load value, . As is varied from 30% to\n60%, the WLAN packet loss is significantly changed from\n12% to 40%. However, the WLAN packet loss is insensitive\nto the WLAN offered load. Consistent with previous results,\nBluetooth voice represents the worst case interference scenario\nfor WLAN.\n210\nGOLMIE ET AL.\nIn general, the Bluetooth packet loss for d\nB\n= 1 m is less\nthan for d\nB\n= 2 m. The reason is that when the Bluetooth\nsignal is stronger (over a shorter distance), the impact of interference\nis less significant.\n4.2.2. Topology 3\nWe next consider the topology given in figure 9. It includes\none WLAN AP and four WLAN mobile devices. The WLAN\nAP is located at (0, 15) m, and it is the source of the traffic\ngeneration. The four WLAN mobile devices are placed\non a two-dimensional grid at (\n-1, 1), (1, 1), (-1, -1), and\n(\n1,\n-1) m. In this topology, there are four Bluetooth piconets,\neach consisting of a masterslave device pair. The placement\nof the Bluetooth devices is as shown in the figure.\nIn this case, we are looking at the effect of Bluetooth piconets\non the four WLAN sink devices. The packet loss measure\nfor WLAN is averaged over the four devices. As shown\nin table 9, the impact of WLAN interference on Bluetooth is\nminimal, given that the WLAN source is far from the Bluetooth\npiconets. As expected, the WLAN packet loss depends\non the Bluetooth traffic conditions, and it is rather insensitive\nto the WLAN traffic activity. 
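As an aside, the random placement used in topology 2 (ten piconets spread over a disk of radius 10 m, half with d_B = 1 m and half with d_B = 2 m) can be reproduced with a few lines of Python. This sketch is ours, not the authors' simulation tool, and the uniformly random orientation of each slave around its master is an assumption, since the paper only specifies the master-slave separation.

# Sketch of the topology-2 placement: masters uniform over a disk of radius
# 10 m, each slave at distance d_B (1 m or 2 m) in a random direction.
import math, random

def place_piconets(n=10, radius=10.0, seed=0):
    rng = random.Random(seed)
    piconets = []
    for i in range(n):
        r = radius * math.sqrt(rng.random())   # sqrt gives uniform density over the disk
        theta = 2 * math.pi * rng.random()
        master = (r * math.cos(theta), r * math.sin(theta))
        d_b = 1.0 if i < n // 2 else 2.0       # half at 1 m, half at 2 m
        phi = 2 * math.pi * rng.random()
        slave = (master[0] + d_b * math.cos(phi), master[1] + d_b * math.sin(phi))
        piconets.append((master, slave))
    return piconets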
With Bluetooth voice, the\nWLAN packet loss is close to 84%. It is 57% for Bluetooth\ndata with WLAN loads of\n= 30, 60%.\nConcluding remarks\nWe presented results on the performance of Bluetooth and\nWLAN operating in the 2.4 GHz ISM band based on detailed\nchannel, MAC, and PHY layer models for both systems. The\nevaluation framework used allows us to study the impact of\ninterference in a closed loop environment where two systems\nare affecting each other, and explore the MAC and PHY layer\ninteractions in each system.\nWe are able to draw some useful conclusions based on our\nresults. First, we note that power control may have limited\nbenefits in this environment. Increasing the WLAN transmission\npower to even fifty times the power of Bluetooth is not\nsufficient to reduce the WLAN packet loss. On the other hand,\nlimiting the WLAN power, may help avoid interference to\nBluetooth. Second, using a slower hop rate for Bluetooth (i.e.\nlonger packet sizes) may cause less interference to WLAN.\nThird, Bluetooth voice represents the worst type of interference\nfor WLAN. In addition, the WLAN performance seems\nto degrade as the Bluetooth offered load is increased. Finally,\nthe use of error correcting block codes in the Bluetooth payload\ndoes not improve performance. The errors caused by\ninterference are often too many to correct.\nOverall, the results are dependent on the traffic distribution\n. Yet, there may be little room for parameter optimization\nespecially for the practical scenarios. Not only does the complexity\nof the interactions and the number of parameters to adjust\nmake the optimization problem intractable, but choosing\nan objective function is very dependent on the applications\nand the scenario. Thus, achieving acceptable performance for\nFigure 9. Topology 3. Five WLAN devices and four Bluetooth piconets.\nTable 9\nExperiment 4 results.\nBT traffic\nWLAN\nBT loss\nWLAN loss\n\n= 30%\n30%\n0.007\n0.574\n60%\n0.006\n0.580\n\n= 60%\n30%\n0.007\n0.576\n60%\n0.006\n0.580\nVoice\n30%\n0.002\n0.836\n60%\n0.001\n0.828\na particular system comes at the expense of the other system's\nthroughput. Therefore, we believe that the primary solutions\nto this problem lie in the development of coexistence mechanisms\nReferences\n[1] BlueHoc: Bluetooth Performance Evaluation Tool, Open-Source\n(2001) http://oss.software.ibm.com/developerworks/\nopensource/~bluehoc\n[2] Bluetooth Special Interest Group, Specifications of the Bluetooth system\n, Vol. 1, v.1.0B Core, and Vol. 2, v1.0B Profiles (December 1999).\n[3] T. Ekvetchavit and Z. Zvonar, Performance of phase-locked loop receiver\nin digital FM systems, in: Ninth IEEE International Symposium\non Personal, Indoor and Mobile Radio Communications, Vol. 1 (1998)\npp. 381385.\n[4] G. Ennis, Impact of Bluetooth on 802.11 direct sequence, IEEE\nP802.11 Working Group Contribution, IEEE P802.11-98/319 (September\n1998).\n[5] D. Fumolari, Link performance of an embedded Bluetooth personal\narea network, in: Proceedings of IEEE ICC'01, Helsinki, Finland (June\n2001).\n[6] N. Golmie and F. Mouveaux, Interference in the 2.4 GHz ISM band:\nImpact on the Bluetooth access control performance, in: Proceedings\nof IEEE ICC'01, Helsinki, Finland (June 2001).\nINTERFERENCE EVALUATION OF BLUETOOTH AND IEEE 802.11b SYSTEMS\n211\n[7] N. Golmie, R.E. Van Dyck, and A. 
Soltanian, Interference of Bluetooth\nand IEEE 802.11: Simulation modeling and performance evaluation\n, in: Proceedings of the Fourth ACM International Workshop on\nModeling, Analysis, and Simulation of Wireless and Mobile Systems,\nMSWIM'01, Rome, Italy (July 2001).\n[8] I. Howitt, V. Mitter and J. Gutierrez, Empirical study for IEEE 802.11\nand Bluetooth interoperability, in: Proceedings of IEEE Vehicular\nTechnology Conference (VTC) (Spring 2001).\n[9] IEEE Standard 802-11, IEEE standard for wireless LAN Medium Access\nControl (MAC) and Physical Layer (PHY) specification (June\n1997).\n[10] A. Kamerman, Coexistence between Bluetooth and IEEE 802.11 CCK:\nSolutions to avoid mutual interference, IEEE P802.11 Working Group\nContribution, IEEE P802.11-00/162r0 (July 2000).\n[11] A. Kamerman and N. Erkocevic, Microwave oven interference on wireless\nLANs operating in the 2.4 GHz ISM band, in: Proceedings of the\n8th IEEE International Symposium on Personal, Indoor and Mobile\nRadio Communications, Vol. 3 (1997) pp. 12211227.\n[12] J. Lansford, A. Stephens and R. Nevo, Wi-Fi (802.11b) and Bluetooth:\nEnabling coexistence, IEEE Network Magazine (September/October\n2001).\n[13] S. Shellhammer, Packet error rate of an IEEE 802.11 WLAN in the\npresence of Bluetooth, IEEE P802.15 Working Group Contribution,\nIEEE P802.15-00/133r0 (May 2000).\n[14] M.K. Simon and C.C. Wang, Differential versus limiter-discriminator\ndetection of narrow-band FM, IEEE Transactions on Communications\nCOM-31(11) (November 1983) 12271234.\n[15] A. Soltanian and R.E. Van Dyck, Physical layer performance for coexistence\nof Bluetooth and IEEE 802.11b, in: Virginia Tech. Symposium\non Wireless Personal Communications (June 2001).\n[16] M. Takai, R. Bagrodia, A. Lee and M. Gerla, Impact of channel models\non simulation of large scale wireless networks, in: Proceedings of\nACM/IEEE MSWIM'99, Seattle, WA (August 1999).\n[17] S. Unawong, S. Miyamoto and N. Morinaga, Techniques to improve the\nperformance of wireless LAN under ISM interference environments, in:\nFifth Asia-Pacific Conference on Communications, 1999 and Fourth\nOptoelectronics and Communications Conference, Vol. 1 (1999) pp.\n802805.\n[18] J. Zyren, Reliability of IEEE 802.11 WLANs in presence of Bluetooth\nradios, IEEE P802.11 Working Group Contribution, IEEE P802.15-99/073r0\n(September 1999).\n[19] S. Zurbes, W. Stahl, K. Matheus and J. Haartsen, Radio network performance\nof Bluetooth, in: Proceedings of IEEE International Conference\non Communications, ICC 2000, New Orleans, LA, Vol. 3 (June 2000)\npp. 15631567.\nNada Golmie received the M.S.E degree in computer\nengineering from Syracuse University, Syracuse\n, NY, in 1993, and the Ph.D. degree in computer\nscience from University of Maryland, College Park,\nMD, in 2002. Since 1993, she has been a research\nengineer at the advanced networking technologies\ndivision at the National Institute of Standards and\nTechnology (NIST). Her research in traffic management\nand flow control led to several papers presented\nat professional conferences, journals and numerous\ncontributions to international standard organizations and industry led consor-tia\n. Her current work is focused on the performance evaluation of protocols\nfor Wireless Personal Area Networks. Her research interests include modeling\nand performance analysis of network protocols, media access control,\nand Quality of Service for IP and wireless network technologies. 
She is the\nvice-chair of the IEEE 802.15 Coexistence Task Group.\nE-mail: nada.golmie@nist.gov\nRobert E. Van Dyck received the B.E and M.E.E\ndegrees from Stevens Institute of Technology, Hoboken\n, NJ, in 1985 and 1986, respectively, and the\nPh.D. degree in electrical engineering from the North\nCarolina State University at Raleigh in 1992. Since\nJune 2000, he has been a member of the Advanced\nNetwork Technologies Division of the National Institute\nof Standards and Technology, Gaithersburg,\nMD. Prior to that, he was an Assistant Professor in\nthe Department of Electrical Engineering, the Pennsylvania\nState University, University Park, PA. During 1999, he was a Summer\nFaculty Research Fellow at Rome Laboratory. His other previous affiliations\ninclude GEC-Marconi Electronic Systems, Wayne, NJ (19951996),\nthe Center for Computer Aids for Industrial Productivity, Rutgers University,\nPiscataway, NJ (19921995), the Computer Science Corporation, Research\nTriangle Park NC, (1989), and the Communications Laboratory, Raytheon\nCo., Marlborough, MA (19851988). His present research interests are in\nself-organization of sensor networks, multimedia communications and networking\n, and source and channel coding for wireless communications.\nAmir Soltanian received his M.S. degree from\nSharif University of Technology, Tehran, Iran, in\n1994. He has been working in the industry for 6\nyears doing research on GSM receivers. Currently,\nhe is a guest researcher at National Institute of Standards\nand Technology. His current research is the\nstudy of the interference cancellation methods for\nthe physical layer of the Bluetooth and IEEE802.11\nWLAN.\nArnaud Tonnerre is a graduate student at the\ncole Nationale Suprieure des Telecommunications\n(ENST) in Bretagne, France. He is currently doing\nan internship at the National Institute of Standards\nand Technology (NIST) in Gaithersburg, MD. He\nwill receive the Diplome d'Ingenieur in June 2003.\nHis research interests are in wireless personal area\nnetworks.\nOlivier Rbala received a computer science degree\nfrom the Institut suprieur d'informatique, de\nmodlisation et de leurs applications (ISIMA) in\nClermont-Ferrand, France, in September 2001. He\nis currently a Guest Researcher at the National Institute\nof Standards and Technology (NIST) in the\nadvanced networking technologies division. His research\ninterests includes the performance evaluation\nof wireless networking protocols.", "keywords": "evaluation;packet loss;performance degradation;IEEE 802.11b;simulation framework;Bluetooth;interference;hop rate;tranmission power;topology;WPANs;WLAN;offered load"} {"name": "119", "title": "Is a Picture Worth a Thousand Words?", "abstract": "What makes a peripheral or ambient display more effective at presenting awareness information than another ? Presently, little is known in this regard and techniques for evaluating these types of displays are just beginning to be developed. In this article, we focus on one aspect of a peripheral display's effectiveness-its ability to communicate information at a glance. We conducted an evaluation of the InfoCanvas, a peripheral display that conveys awareness information graphically as a form of information art, by assessing how well people recall information when it is presented for a brief period of time. We compare performance of the InfoCanvas to two other electronic information displays , a Web portal style and a text-based display, when each display was viewed for a short period of time. 
We found that participants noted and recalled significantly more information when presented by the InfoCanvas than by either of the other displays despite having to learn the additional graphical representations employed by the InfoCanvas.", "fulltext": "Introduction\nThe Peripheral awareness displays are systems that reside in\na user's environment within the periphery of the user's\nattention. As such, the purpose of these displays is not\nfor monitoring vital tasks. Rather, peripheral displays\nbest serve as communication media that people can\nopportunistically examine to maintain information\nawareness [11, 17].\nThe\nterm\nambient display [22] has been used to describe\nsystems like this as well, but to avoid confusion,\nthroughout this document we use this term to describe\nperipheral awareness systems that generally convey\nonly one piece of information. We use the term peripheral\ndisplay to describe peripheral awareness systems\nthat may present multiple information items. Both peripheral\nand ambient displays are designed not to distract\npeople from their tasks at hand, but to be subtle,\ncalm reminders that can be occasionally noticed. In\naddition to presenting information, the displays also\nfrequently contribute to the aesthetics of the locale in\nwhich they are deployed [1].\nDozens of peripheral/ambient displays have been\ncreated in many shapes and form factors. Some displays\n, such as the dangling string [21], tangible displays\nincluding water lamps and pinwheels [4], and the Information\nPercolator [7] have utilized physical (and\noften everyday) objects. Other displays, such as Informative\nArtwork [8] and the Digital Family Portrait [16]\nuse electronic displays to represent information in a\ngraphical manner. All these systems primarily communicate\none item of information.\n\nOther peripheral/ambient displays exist that are capable\nof conveying more than one information item\nsimultaneously. The Digital Family Portrait, although\nprimarily intended to allow geographically separated\nfamily members maintain awareness of each other, allows\nfor the optional displaying of additional information\nsuch as weather [16]. Audio cues, instead of visual\ndisplays, have also been utilized in peripheral displays\nto convey multiple nuggets of information in the Audio\nAura system [15]. The Kandinsky system [5] attempts\nto create artistic collages of various pieces of information\n, and the Scope system is an abstract visualization\ndisplaying notification information from multiple\nsources [19]. SideShow [3] provides a display sidebar\ncontaining multiple awareness icons such as traffic and\nweather indicators.\nThe InfoCanvas [14], the focus of this article, differs\nfrom the initial set of systems above by explicitly\npromoting the conveyance of multiple pieces of information\nconcurrently. It differs from the latter set of\nCopyright is held by the author/owner originally published by the\nCanadian Human-Computer Communications Society in the\nProceedings of Graphics Interface 2004, May 17-19, London, Ontario.\n\n117\nsystems in promoting greater flexibility of information\nmonitored and its subsequent visual representation, as\nwell as allowing for greater user control in specifying\nthose mappings.\nAlthough many types of displays exist and new ones\nare being developed, little is known about what makes a\nparticular peripheral/ambient display more successful at\npresenting information than another [10]. 
Furthermore,\nsuch displays are inherently difficult to evaluate formally\nsince they are designed not to distract the user.\nAs a result, evaluation techniques have been limited, as\nMankoff et al. note [10], to formative ethnographies\n[16] and within-lab studies where displays are developed\nand subsequently refined over time by their designers\n[6]. However, there has been recent work on\ndeveloping new evaluation techniques for ambient displays\n, most notably Mankoff et al.'s set of discount\nformative techniques [10] and McCrickard et al.'s notification\nsystem categorization framework [13].\nThe goal of this study is not to evaluate peripheral\ndisplays in general. Rather, we focus on one particular\ncomponent of a peripheral display's effectiveness, its\nability to communicate information. More specifically,\nwe examine how the abstract data mappings of electronic\ninformation artwork affect people's interpretation\nand memory of the data.\nBoth the InfoCanvas [14] and the Informative Artwork\n[8] projects make use of dynamic pieces of electronic\nartwork to represent information in an eye-appealing\nmanner. Such displays are placed within a\nperson's work environment or are publicly displayed,\nenabling at-a-glance information awareness. How well\nthe systems convey information is not known, however.\nNote that the success of a peripheral/ambient display\ninvolves more than simple information acquisition.\nBecause these displays are positioned in people's environments\n, aesthetics and attractiveness influence adop-tion\nas well. The research reported here, though, focuses\nsolely on such displays' ability to convey information\n. In a companion study, the issues of aesthetics\nand longer-term use of the InfoCanvas system are currently\nbeing explored.\nExperimental Design\nThis study examines if an electronic picture \"is worth a\nthousand words.\" That is, how well are users able to\nlearn mappings and subsequently comprehend and recall\ninformation when it is presented in the form of\nelectronic artwork in comparison to more traditional\nmethods. We accomplish this by designing an InfoCanvas\ndisplay as well as two more conventional information\ndisplays and evaluating participants' memories\nof them when they only see the displays for short\nperiods of time.\nStudy participants viewed three examples of each\ndisplay with each example encoding different data values\n(described in detail in the next section). After\nviewing a display for eight seconds, participants recalled\nthe information presented using a multiple-choice\nquestionnaire.\n2.1 Materials\nTen items of information were selected to be monitored\n: time of day, a weather forecast, a temperature\nforecast, traffic conditions, a news headline, the Dow\nJones stock index value, an airfare price, updates to a\nWeb site, a count of new emails, and a baseball score.\nThese items are examples of information people typically\nseek to maintain awareness of [14].\nThree information screens were designed including\nan InfoCanvas beach scene, a minimalist text-based\ndisplay, and a Web portal-like display. These three\ndisplays were chosen to represent interesting points in a\nspectrum of possibilities, as depicted in Figure 1, for\nrepresenting awareness information on electronic ambient\ndisplays. Styles range from pure textual presentations\nto highly abstract, graphical imagery. The InfoCanvas\nand the Text-based display inhabit positions\nnear the endpoints of that spectrum. 
The Web Portal\ndisplay was designed to incorporate a hybrid of textural\nand graphical representations, and resemble the types of\nWeb \"start pages\" that people frequently use to maintain\ninformation awareness today [14].\nOther interesting points in the spectrum include\nmore direct graphical (typically iconic) representations\nof information as embodied by systems such as Sideshow\n[3], and could be the subject of future experiments\n. For this study, we compare the InfoCanvas to\ntwo widely deployed types, Web portals (e.g. MyYahoo\n!) and text-heavy news summaries or Web pages.\n\nHighly Textual\n\n\n\n\n\n\nInfoCanvas [14]\nSideshow [3]\nWeb Portal\nMy Yahoo!\nText-Based\nInformative Artwork [6]\nHighly Graphical\nFigure 1: A spectrum of awareness displays ranging\nfrom textual to graphical presentations of information.\n118\nAll three displays in the study were designed seeking\na balance of experimental control and representation\nof ecologically valid real-world use. Extensive\npilot testing and redesign was used to refine their appearance\n. We designed the three displays to encode the\nten pieces of information in an appropriate manner for\nthat display style. In all three, we added a small\namount of extra information beyond the ten queried\ninformation values, much as similar real world displays\nwould undoubtedly do.\nAll displays were presented full-screen on a\nViewsonic 15\" LCD display running at a resolution of\n1024 x 768. The InfoCanvas used the entire screen\narea, and the other two displays used slightly less of the\nentire display as will be explained below. In the following\nsubsections, we describe each of the displays in\nmore detail.\nInfoCanvas Display\nThe InfoCanvas system supports a variety of artistic\nscenes or themes. We chose to use a beach scene as\nshown in Figure 2 for the experiment due to its popularity\nwith trial users. 
Individual objects in the scene represented\nthe ten data values as follows:\nAirfare price: Represented by the vertical height\nof the kite in the sky from $0 (near the water\nlevel) to $400 (top of the screen).\nNews headline: Shown on the banner behind the\nplane.\nTime of day: Denoted by the sailboat moving from\nthe left side (12:01 AM) to the right side (11:59\nPM).\nWeb site update: Represented by the color of the\nleaves on the palm tree, green indicates a recent\nupdate and brown indicates no recent changes.\nWeather forecast: Illustrated through the actual\nweather shown in the sky (e.g., clouds represents\na forecast of cloudy weather).\nTemperature forecast: Represented by the height\nof the large seagull in the sky, ranging from 50\ndegrees at water level to 90 degrees at the top of\nthe screen.\nDow Jones stock market change: Displayed by\nthe arrangement of seashells on the shoreline.\nShells form an arrow to indicate whether stocks\nare up or down and the quantity of shells indicates\nthe value (three shells indicate a change of\n0 50 points, five shells indicate a change of\nmore than 50 points).\nNew email messages: Depicted by the height of\nliquid in the glass ranging from 0 new emails\n(empty glass) to 20 new emails (a full glass).\nCurrent traffic speed on a local roadway: Sym-bolized\nby the color of the woman's bathing suit\nwith red indicating speed less than 25 MPH, yellow\nindicating a speed between 25 and 50 MPH,\nand green indicating a speed greater than 50\nMPH.\nBaseball score: Shown by the size of two beach\nballs: A larger ball indicates a winning team and\nidentical ball sizes indicate a tied score. Color is\nused to distinguish the two teams.\nThese mappings were chosen to reflect a variety of\nobjects moving or changing size or color. In addition,\nsome mappings were chosen for being more intuitive\nand direct, such as using weather icons to represent\nweather or the metaphor of a kite flying in the sky to\nreflect airfare price. Other mappings, such as representing\nupdates to a Website by tree leaf color, were intended\nto be more abstract and indirect. A pilot study\nof four InfoCanvas users revealed a wide variety of\nmapping styles, both natural and abstract. As a result,\nwe wanted the scene used in this study to reflect this.\nFurthermore, as also done in actual use, we placed additional\nitems in the scene such as the chair, umbrella,\nand crab simply for aesthetic purposes.\nSeveral items within this display present information\nas a precise point along a continuous scale, including\nthe time-of-day, airfare, and forecasted temperature,\nby displaying objects that move along a line. Other\nitems, including the traffic speed, stock update, and\nbaseball score, are represented using categorical encod-ings\n. For example, the different shell arrangements\nrepresenting the Dow Jones stock update indicate four\ndifferent ranges of values. The implications of this difference\nwill be explored more fully later when describing\nthe questionnaire formats.\nText-Based Display\nThe Text-based display (shown in Figure 2) predomi-nantly\nuses text to display information. Web pages\nsuch as MyYahoo were the inspiration for the Text-based\ndisplay, but the use of images, different colors,\nand graphics were removed. 
Thus, the display represents\na position near the endpoint of the graphics-text\nspectrum presented earlier.\nAs a result, we restricted formatting on this display\nto changes in point size and the use of bold text with the\nexception of using a fixed-width font to indicate stock\nchange values. (The fixed-width font helps to align numerical\nstock values, providing a clean and orderly appearance\nsimilar to the style used by existing services.)\nExtra information beyond the ten data values on this\ndisplay included a few lines from a news article related\nto the current headline, the current date, and additional\nstock information for the Standard & Poor's 500 and\n119\n\n\n\n\n\n\nFigure 2: Examples of the InfoCanvas beach scene (top), text-based (middle), and Web Portal displays (bottom)\nused in the study.\n120\nNASDAQ indices--items likely to appear on such a\ndisplay.\nThe Text-based display consisted of a region 970\npixels wide to 330 pixels high on the screen. Pilot testing\nfound this size optimal in allowing the use of columns\n, section headers, and white space to make an effective\nand visually pleasing display. Furthermore,\npilot testing indicated that information recall suffered as\nthe display's size increased, perhaps due to increased\neye movement, even though the data elements remained\nlocated in the same position.\nWeb Portal Display\nThe Web Portal display (shown in Figure 2) also mim-icked\nthe look and feel of popular no-cost \"start\" Web\npages such as My Yahoo. However, we added additional\nformatting and iconic graphics/images as found\nin awareness displays such as Sideshow [3] to differentiate\nthis display from the Text-based display. Web\nportals, in actuality, tend to make relatively limited use\nof images and graphics. Our introduction of graphics\nand images served two main purposes--making the\ndisplay more of a hybrid between the highly artistic\nInfoCanvas and a display utilizing only text, and also to\nincrease the effectiveness of the design by using graphics\nto position items or convey information.\nGraphics that encode values--those that change to\nreflect information--in the Web Portal display include\nthe weather icon indicating the weather forecast, the\nspeedometer icon with a meter indicating the current\nspeed of vehicles, and an icon indicating the presence\nof new email messages. In addition, an image related to\nthe news headline was displayed. Iconic images that\ndid not change and were used solely as positional anchors\nincluded a picture frame icon for the Web site\nupdate item, baseball team logos, and an airline logo.\nIn addition, colors and arrows were used to indicate\nstock trends and the baseball team currently winning\nwas displayed in bold text.\nThe Web Portal display's extra information (e.g. not\nencoding the ten queried values) included a few lines of\na news story related to the headline and the current\ndate, and the two other stock indices as done in the\nText-based display.\nThe Web Portal display used an area of 968 pixels\nwide by 386 pixels high on the display. Again, iterative\ndevelopment and pilot testing helped determine this\nsize was best to create a balanced and ordered layout\nand be an effective presenter of information. 
As in the\nText-based display, each element on the Web Portal\ndisplay remained in the same relative position.\nDesign Considerations\nAs noted above, wherever we faced a design choice in\ncreating the Web Portal and Text-based displays, we\nattempted to optimize the display to promote comprehension\n. For example, both the Web Portal and Text-based\ndisplays represent substantial improvements over\nreal-life examples. The Web Portal design contained\nmore graphics and images than what typically appears\non these Web pages. Pilot subjects found these graphics\nand images to be beneficial in remembering information\n. Furthermore, individual items were modified\nduring pilot testing to assist recall. For example, we\nmade the size of the weather forecast image substan-tially\nlarger than what is typically found on Web portals\n.\nLikewise, we designed the Text-based display to be\na substantial improvement over existing text-based information\ndisplays, such as tickers or small desktop\nwindow applications, by introducing columns, section\nheaders, and white space.\nInitial full-screen presentations used for the Web\nPortal and Text-based display tended to look unwieldy\nand resulted in lower recall of information during pilot\ntesting. We attributed this to the larger screen area that\nparticipants had to visually parse. Hence, we reduced\nthe screen area occupied by those displays to promote\ncomprehension. Following that logic, InfoCanvas' larger\nsize should have served to negatively impact its\nperformance, if anything.\n2.2 Participants\nForty-nine (11 female) individuals with normal or corrected\n-to-normal eyesight participated in this study.\nParticipants ranged from 18 to 61 years of age (mean\n24.2). 27 were graduate students, 17 were undergraduates\n, and 5 were non-students. Participants were com-pensated\n$10 for their time.\n2.3 Procedure\nTesting occurred in individual sessions lasting approximately\n45 minutes. Participants sat two feet in\nfront of the LCD monitor. The keyboard and mouse\nwere removed from the area, leaving empty desk space\nbetween the participant and the display. The experi-menter\ninformed participants that they were participating\nin a study to determine how much they could remember\nfrom different information screens when they\ncould only see the screen for a brief amount of time.\nA within-subjects experimental design was used and\nthe ordering of the display conditions was counterbal-anced\n. Participants were randomly assigned to an ordering\nsequence. For each of the three displays, an in-121\ntroductory tour, preparation task, and practice task were\ngiven prior to performing three actual trials.\nThe introductory session included an explanation of\nthe display and the information found on it. For the\nInfoCanvas and Web Portal displays, the behaviors of\nthe elements on the displays were also explained. Due\nto the display's more complex and dynamic nature, the\nintroductory tour took longer to perform with the InfoCanvas\n, approximately 3.5 minutes in duration, than\nwith the Web Portal and Text-based display, both approximately\n1.5 minutes in duration.\nInitially, especially with InfoCanvas, we had concerns\nthat the introductory tour might not be sufficient\nto allow participants to learn each display. Pilot testing,\nhowever, revealed that participants were able to quickly\nlearn the information mappings. 
To further ensure that\nwe would be testing information comprehension and\nrecall but not mapping recall with respect to the InfoCanvas\n, participants were asked to point out the different\nobjects on a sample display and say aloud what information\neach object represented. We also provided\nparticipants with a reference sheet labeling the mappings\nbetween information and objects on the InfoCanvas\n. In practice, we found that participants seldom\nlooked at the sheet and some actually turned it over.\nDuring the preparation task, participants were\nshown an example display and instructed to complete a\nsample recall questionnaire (explained in more detail\nlater in this section), much as they would in the actual\ntrials. In this phase, however, no time limit was en-forced\nfor viewing the display. This task then allowed\nthe participant to better familiarize him or herself with\nthe display, the questionnaire style, and to ask additional\nquestions regarding the display, all while it was\nvisible.\nNext, in the practice task, participants were exposed\nto what the actual trials would be like. A recall questionnaire\nwas placed text side down in front of the participant\nand then an information display was shown for\neight seconds. Pilot testing determined that this was a\nsuitable amount of exposure time to avoid ceiling or\nfloor effects, with recall averaging about five or six\nitems. Furthermore, participants during pilot testing\nfelt that this amount of time was indicative of the duration\nof a glance of a person seeking multiple information\nupdates. Upon completion of the exposure, the\ncomputer prompted the individual to turn over and\ncomplete the recall sheet. Participants were instructed\nto not guess on the recall questionnaire; if the participant\ndid not remember an item at all, he or she left that\nitem blank on the questionnaire.\nThe actual trials followed the practice task and consisted\nof three exposure and recall activities involving\ndifferent data sets and hence data displays. Again, specific\nemphasis was made to discourage the participant\nfrom guessing on the recall. The same data values were\nused for each position of the nine total experiment trials\nindependent of the display ordering, ensuring a balance\nacross the experiment.\nUpon completion of the three different display conditions\n, participants were given several concluding surveys\nthat captured subjective feedback from the participants\nregarding perceived performance and display\npreferences.\nRecall Task\nTen questions, one per each information item, were\npresented to participants after exposure to an information\ndisplay. We varied the question topic order across\ntrials to discourage participants from becoming accustomed\nto a particular topic being the subject of the first\nfew questions and then seeking out information from\nthe displays on those topics. 
While participants were\nnot explicitly informed of this, the varied order came as\nno surprise when they performed actual trials since they\nhad already encountered the recall sheet in the preparation\nand practice tasks.\nTo minimize cognitive load, the questions were\ndesigned to elicit the comprehension and recall of information\nin the same manner that it had been encoded.\nFor all questions about the Text and Web Portal displays\n, and for the majority of questions about the InfoCanvas\ndisplay, the question style was multiple-choice,\ntypically including four exact-value answers spread\nrelatively evenly across the range of possible answers.\nFor instance, the potential answers for the time of day\nmight have been 3:42am, 8:36am, 5:09pm, and\n10:11pm. The newspaper headline question used four\npossible answers containing some similarity (usually\nusing the same key words such as \"Iraq\" or \"President\nBush\") to ensure the recall of the headline by context,\nnot by recognition of a key word. The Web site update\nquestion simply asked whether the site had been up-dated\n, with yes and no as the possible answers. Finally,\nthe baseball score question asked which team was currently\nwinning and offered the choices of the Braves,\nPirates, or tied game. The data values used to generate\ndisplays for the nine trials also were chosen to range\nacross the possible set of values.\n\"Exact Value\"\n\n\"Categorical\"\nWhat is the status of the\nDow Jones?\n\nWhat is the status of the\nDow Jones?\n+ 89 points\n+ 42 points\n- 2 points\n- 75 points\n\nUp over 50 points\nUp 0 50 points\nDown 0 50 points\nDown over 50 points\n\nFigure 3: Example of exact value and categorical\nrecall questions.\n122\nFor topics that the InfoCanvas presented categories\nor ranges of values (e.g., traffic conditions, baseball\nscore, and stock updates), answer choices to the recall\nquestions were also presented in the form of ranges.\nFigure 3 shows an example of how these differed using\nstocks as an example. Note how the exact-value answers\nlie within the intervals used; the questions and\nanswers were designed to be as similar as possible.\nFurthermore, we felt that the more general issue of participants\nneeding to translate pictures into exact, usually\nnumeric, values would counter any benefit received by\nthe InfoCanvas in using ranges for a few questions.\nAdjacent to each multiple-choice question on the\nrecall questionnaire was a confidence level scale with\nchoices for high, medium, or low confidence. Participants\nwere instructed to indicate their relative confidence\nfor each item. We did this to further lessen the\n\"guessing factor\" and identify whether confidence\nwould play a measure.\nFollowing the nine cumulative trials for all three\ndisplays, participants completed a Likert scale survey\nrating all the displays for facilitating the recall of information\n, being an effective presenter of information,\nand visual appeal. In addition, participants rank-ordered\neach display for facilitating recall and visual\nappeal. 
Lastly, participants responded to open-ended\nquestions regarding which display they would employ\nat their workstation or on a wall if a dedicated display\nwould be available.\nResults\nTable 1 presents the means and standard deviations\nacross all conditions of the raw number of correct responses\nfor each of the three trials under each display.\nA repeated measures ANOVA identified an overall\neffect of the display for accurately recalled items,\nF(2,96) = 22.21, MSE = 2.31, p < .0001, and there was\nno effect for order. Additionally, pair-wise comparisons\nbetween display types found an advantage of the\nInfoCanvas display over the Web Portal, F(1,48) =\n14.65, MSE = 2.66, p < .0005), the Web Portal over the\nText-based display, F(1,48) = 8.17, MSE = 1.76, p <\n.007), and the InfoCanvas over the Text-based display,\nF(1,48) = 40.01, MSE =2.51, p < .0001).\nTo take into account participants' confidence of\ntheir answers, a second method to evaluate performance\nwas developed. Weights of value 3, 2, and 1 were assigned\nfor the high, medium, and low confidence levels,\nrespectively (e.g. a correct answer with medium confidence\nyielded +2 points, while an incorrect answer also\nwith a medium confidence yielded 2 points). Questions\nnot answered on the recall task were assigned a\nweighted score value of 0.\nParticipants forgot to assign a confidence on 13 of\nthe 4410 responses collected in the study. Since this\nnumber of accidental omissions was quite low, items\nwith omitted confidence ratings were assigned a medium\nlevel, the median of the obtainable point values.\nOf the 13 questions with omitted confidence, 3 were\nanswered incorrectly.\nIn examining the weighted scores shown in Table 2,\nan overall effect was found on the display, F(2,96) =\n10.40, MSE = 25.35, p < .001, and again there was no\neffect of order. Furthermore, pair-wise comparisons\nbetween the displays again found an advantage of the\nInfoCanvas display over the Web Portal, F(1,48) =\n7.29, MSE = 30.56, p = .0095, and of the InfoCanvas\ndisplay over the Text-based display, F(1,48) = 22.21,\nMSE = 22.93, p < .0001. However, the weighted scores\ngave no advantage of the Web Portal over the Text-based\ndisplay, F(1,48) = 2.59, MSE = 2.51, p = 0.11.\nFigure 3 presents an item-by-item breakdown of the\npercentage of correctly answered questions for each\ndisplay. The InfoCanvas had the highest average on\n\nEase of Info. Recall\n1\n2\n3\n4\n5\nMean\nText-Based 7\n18\n14\n10\n0\n2.6 (1.0)\nWeb Portal\n1\n8\n18\n17\n5\n3.3 (0.9)\nInfoCanvas 2\n4\n13\n20\n10\n3.7 (1.0)\n\nEffective Data Pres.\n1 2 3 4 5 Mean\nText-Based 6\n18\n16\n7\n2\n2.6 (1.0)\nWeb Portal\n2\n3\n14\n24\n6\n3.6 (0.9)\nInfoCanvas 5\n9\n13\n18\n4\n3.1 (1.1)\n\nVisual Appeal\n1 2 3 4 5 Mean\nText-Based 20\n19\n8\n2\n0\n1.8 (0.9)\nWeb Portal\n1\n2\n12\n22\n12\n3.9 (0.9)\nInfoCanvas 1\n1\n10\n17\n20\n4.1 (0.9)\nTable 3: Likert scale responses for display characteristics\n, with 1 = low rating and 5 = high rating.\n\n1st Trial\n2nd Trial\n3rd Trial\nText-Based\n5.14 (1.59)\n5.12 (1.33)\n5.02 (1.57)\nWeb Portal\n5.67 (1.61)\n5.65 (1.54)\n5.29 (1.89)\nInfoCanvas\n6.27 (1.80)\n6.22 (1.79)\n6.31 (1.76)\nTable 1: Means and standard deviations of correct\nresponses for three trials of each display.\n\n1st Trial\n2nd Trial\n3rd Trial\nText-Based\n11.47 (4.92)\n11.78 (4.81)\n10.57 (5.02)\nWeb Portal\n12.88 (5.09)\n12.35 (5.84)\n11.27 (6.40)\nInfoCanvas\n13.88 (5.96)\n14.02 (5.89)\n13.82 (6.63)\n\nTable 2. 
Means and standard deviations of correct\nresponses for weighted scores for three trials of each\ndisplay\n123\nseven of the ten items. The Web Portal score was\nhigher on the time and baseball items, and the Text display\nwas best for the airfare price.\nTable 3 contains a breakdown of participants' Likert\nratings captured during the post-experiment surveys.\nThese results mirror the performance data with the InfoCanvas\ngenerally being rated higher with the exception\nthat participants generally ranked the Web Portal\nhigher as being a more effective presenter of data.\nParticipants' order rankings of the three displays for\nfacilitation of recall and personal preference are shown\nin Table 4. Here, the Text-based display fared poorly\nalong both dimensions. More participants preferred the\nWeb Portal but rated the InfoCanvas as best for recall.\nDiscussion\nParticipants in the study recalled information best using\nthe InfoCanvas display despite having the greater cognitive\nload of remembering mappings and representations\nused in the art paradigm. This cognitive load also\nincludes translating pictorial InfoCanvas objects to the\nvalues used in the recall questions, while the two other\ndisplays presented data values more closely to the format\nof the questions. Even with these disadvantages,\nthe InfoCanvas conveyed information better and was\nmore vividly recalled.\nAnother possible interpretation is that the InfoCanvas\nsystem actually reduces the cognitive load of the\nindividuals. In this scenario, it follows that it is easier\nand cognitively more efficient to remember and recall\nthe InfoCanvas images, and then translate later to the\nvalues desired.\nRegardless of their cognitive interpretation, the\nstudy's results should not be too surprising. People are\nable to process images rapidly by leveraging the sophisticated\n, parallel, and high-bandwidth nature of the perceptual\nsystem [20]. Umanath and Scamell showed that\ngraphics are conducive towards recall tasks involving\nsimple fact retrieval in a series of studies investigating\nthe role of recall in real-time decision-making [18].\nFurthermore, \"ecological\" layouts with objects in natural\npositions have been shown to facilitate faster browsing\n[2]. This study, however, confirms our intuition\nthat the InfoCanvas, and displays like it, has potential to\nbe an effective peripheral display where people seek to\nobtain information at a glance.\nSeveral interesting observations emerged from the\nresults of this study. We noted that participants generally\nexpressed preference for the Web Portal display\nover the InfoCanvas display even though they felt that\nthe InfoCanvas display had best facilitated the recall of\ninformation. When asked about this preference, one\nparticipant remarked that the Web Portal design was\n\"more professional looking\" and \"more common than\nthe other two.\" Other participants praised the Web Portal\nfor its ability to display information in a more \"logi-cal\nand precise\" manner and providing \"accurate information\nthat is not influenced by my interpretation.\"\nThese comments seem to imply a conservative attitude\nabout adopting a new and unconventional technology\nsuch as an ambient display\nOther participants appeared to capture the essence\nof peripheral/ambient displays and their abilities to be\nsubtle communication channels, not distracting a user.\nOne participant remarked that, \"I think I could choose\nto ignore it [InfoCanvas] while I was working. 
I think\nonce I got used to what all the icons meant and what the\nscales were, I could easily look at it to see the information\nI was interested in.\" Others also echoed this sentiment\n: \"[InfoCanvas] is the quickest and easiest to see\nat a glance the information you want\" and \"[InfoCanvas\n] is informative but also relaxing.\" Finally, one participant\nsummarized the benefits of the InfoCanvas as\nbeing \"able to keep working and not get distracted by\ndetails; [InfoCanvas is] faster to see and interpret from\na distance.\"\nIn the context of this study, the InfoCanvas was\nevaluated on its abilities as an information purveyor.\nThe mappings between information and graphical elements\nused in this study were designed by the authors,\nand as such, did not always feel instinctive to participants\n. Some participants indicated they had difficulty\nin learning the mappings; one participant remarked that\n\"I struggled with the visual mappings\" and another felt\nthat InfoCanvas was \"counterintuitive.\" As was mentioned\nearlier, this was a concern in the design of the\n0%\n20%\n40%\n60%\n80%\n100%\nWeb Site Update\nWeather Forecast\nTraffic Conditions\nTime of Day\nStock Updates\nNews Headline\nNew Email\nForecasted Temp\nBaseball Score\nAirfare\nText\nWeb\nInfoCanvas\nFigure 3. Mean percentage for correctly recalled\nitems for each display type\n124\nstudy--would individuals even be able to learn these\nmappings in such a short period of time? Pilot studies\nand the final study data both indicated that despite not\nbeing able to define their own mappings for the information\n, participants were able to recall more information\nwhen presented on the InfoCanvas.\nA crucial implication lies in this; the InfoCanvas is\ndesigned to be a highly personalized peripheral display\nwhere users specify their own mappings and layouts.\nSince participants were able to recall information quite\nwell when they did not specify the mappings, it seems\nlogical to conclude that comprehension and recall\nwould benefit even more when people design their own\ndisplay and it is constantly present in their environment.\nSeveral interesting discussion points arise from the\nbreakdown of correctly recalled items shown in Figure\n3. First, note that on the whole, the InfoCanvas yielded\nthe largest percentage of correctly recalled items per\ncategory, with the exception of the airfare, time of day,\nand baseball score items. However, performance of the\nthree displays on the baseball score item was comparable\n, averaging a recall rate of 64-70%. In regards to the\nairfare and time of day items, the InfoCanvas produced\nthe second best percentage of correctly recalled items\nand was outperformed by the Text and Web Portal displays\n, respectively. Slightly lower performance was\nsomewhat expected with these two items, since their\nrepresentations moved along a straight line to indicate a\npoint on a scale. Pilot participants often remarked that\nthese representations were more difficult to keep track\nof since they could be found in different areas. Interestingly\n, even with these representations, the InfoCanvas\nperformed better than the Web Portal (for the airfare\nitem) and the Text display (for the time item), indicating\nthat despite their moving nature, graphical representations\nstill worked relatively well. 
The temperature\nelement, also represented by a moving object, illustrates\nthis point as well, generating a higher recall than the\nother displays.\nInterestingly, the InfoCanvas appeared to have the\nlargest advantage over the other two displays with the\ntraffic conditions item. While some may argue that this\nis due to the use of intervals to represent conditions, as\nopposed to the exact-value representations on the Web\nPortal and Text displays, note that the use of intervals\nfor the baseball score did not yield such an effect. This\ndifference implies that the representation used to indicate\ntraffic conditions--the color of the woman's bathing\nsuit--provided an excellent mapping. Therefore,\nwe speculate that if individuals create their own mappings\n, leveraging their personal experiences, recall with\nInfoCanvas will benefit even more.\nThis study examined the information conveyance\nabilities of three specific examples of displays involving\na sample population consisting mainly of academic-related\n, relatively young individuals. Generalizing its\nfindings too much would be unwise. Nevertheless, we\nspeculate that the results would extend to other similar\ntypes of displays and people of different demographics.\nThe lessons learned from this study could be applied\nto the design of new information systems. For example\n, in designing a system using a docked PDA as an\ninformation display, a graphical representation of information\n, such as using a miniature InfoCanvas, might\nconvey information more effectively than a traditional\ntext-based manner.\nConclusion and Future Work\nIn this paper, we present a formal evaluation of information\nrecall from three different electronic information\ndisplays, the InfoCanvas, a Web Portal-like, and a\nText-based display. We present results indicating that\nparticipants comprehended and recalled more awareness\ninformation when it was represented in graphical\nmanners; participants recalled more information from\nthe InfoCanvas display than the Web Portal and Text-based\ndisplays. Likewise, participants recalled more\ninformation from the Web Portal display than the Text-based\ndisplay. Our results suggest that there are benefits\nfor comprehension, when a person may only glance\nat a display for a short period of time, by displaying\ninformation in a highly graphical or stylized nature.\nA number of potential directions for follow-on work\nexist. It would be interesting to compare a more abstract\ngraphical presentation of information as embodied\nby the InfoCanvas with a purely graphical, but more\ndirect iconic encoding, such as in Sideshow [3].\nIn this study, we positioned the information displays\ndirectly in front of participants. Another possible experiment\ncould position the display further away, perhaps\non a neighboring wall, from the person's main\ncomputer display. Yet another possibility is to introduce\nan explicit primary task thus making information\ncomprehension more truly peripheral. 
For instance, participants could perform a primary task such as document editing while information is presented for comprehension and recall on a display in another location, as done in several other studies [9,12].
Table 4: Rankings of displays for facilitating recall and personal preference.
                          Text-based   Web Portal   InfoCanvas
Best Recall Facilitator    2 (4%)       16 (33%)     31 (63%)
Worst Recall Facilitator   41 (84%)     5 (10%)      3 (6%)
Most Preferred             2 (4%)       35 (71%)     12 (25%)
Least Preferred            35 (71%)     2 (4%)       12 (25%)
Acknowledgements. This research has been supported in part by a grant from the National Science Foundation, IIS-0118685, and the first author's NDSEG Graduate Fellowship. The authors would like to express gratitude to Richard Catrambone and Mary Czerwinski for providing valuable insights into the development and analysis of this study.
", "keywords": "evaluation;peripheral display;graphical representation;awareness information;ambient display;text-based display;information conveyance;InfoCanvas display;Peripheral display;information recall;empirical evaluation;information visualization;Web portal-like display"} {"name": "12", "title": "A Geometric Constraint Library for 3D Graphical Applications", "abstract": "Recent computer technologies have enabled fast high-quality 3D graphics on personal computers, and also have made the development of 3D graphical applications easier. However, most of such technologies do not sufficiently support layout and behavior aspects of 3D graphics. Geometric constraints are, in general, a powerful tool for specifying layouts and behaviors of graphical objects, and have been applied to 2D graphical user interfaces and specialized 3D graphics packages. In this paper, we present Chorus3D, a geometric constraint library for 3D graphical applications. It enables programmers to use geometric constraints for various purposes such as geometric layout, constrained dragging, and inverse kinematics. Its novel feature is to handle scene graphs by processing coordinate transformations in geometric constraint satisfaction. We demonstrate the usefulness of Chorus3D by presenting sample constraint-based 3D graphical applications.", "fulltext": "INTRODUCTION
Recent advances in commodity hardware have enabled fast high-quality 3D graphics on personal computers. Also, software technologies such as VRML and Java 3D have made the development of 3D graphical applications easier. However, most of such technologies mainly focus on rendering aspects of 3D graphics, and do not sufficiently support layout and behavior aspects.
Constraints are, in general, a powerful tool for specifying layouts and behaviors of graphical objects. It is widely recognized that constraints facilitate describing geometric layouts and behaviors of diagrams in 2D graphical user interfaces such as drawing editors, and therefore constraint solvers for this purpose have been extensively studied [3, 7, 8, 9, 11, 12, 13, 17, 18].
Also, many specialized 3D graphics\npackages enable the specification of object layouts and\nbehaviors by using constraints or similar functions.\nIt is natural to consider that various 3D graphical applications\ncan also be enhanced by incorporating constraints. It\nmight seem sufficient for this purpose to modify existing 2D\ngeometric constraint solvers to support 3D geometry. It is,\nhowever, insufficient in reality because of the essential difference\nbetween the ways of specifying 2D and 3D graphics;\ntypical 2D graphics handles only simple coordinate systems,\nwhereas most 3D graphics requires multiple coordinate systems\nwith complex relations such as rotations to treat scene\ngraphs. It means that we need to additionally support coordinate\ntransformations in 3D geometric constraint solvers.\nIn this paper, we present Chorus3D, a geometric constraint\nlibrary for 3D graphical applications. The novel feature of\nChorus3D is to handle scene graphs by processing coordinate\ntransformations in geometric constraint satisfaction.\nWe have realized Chorus3D by adding this feature to our\nprevious 2D geometric constraint library Chorus [13].\nAnother important point of Chorus3D is that it inherits from\nChorus the capability to handle \"soft\" constraints with hierarchical\nstrengths or preferences (i.e., constraint hierarchies\n[7]), which are useful for specifying default layouts and behaviors\nof graphical objects. It determines solutions so that\nthey satisfy as many strong constraints as possible, leaving\nweaker inconsistent constraints unsatisfied.\nChorus3D also inherits from Chorus a module mechanism\nwhich allows user-defined kinds of geometric constraints.\nThis feature enables programmers to use geometric constraints\nfor various purposes including the following:\nGeometric layout: A typical use of Chorus3D is to lay\nout graphical objects. For example, it allows putting\nobjects parallel or perpendicular to others without requiring\npredetermined positioning parameters. Also, it\nprovides constraint-based general graph layout based\non the spring model [14].\nConstrained dragging: Chorus3D enables dragging objects\nwith positioning constraints.\nFor example, it\ncan constrain a dragged object to be on the surface\nof a sphere. Constrained dragging is important for 3D\ngraphics because it provides a sophisticated way to ac-94\ncommodate ordinary mouse dragging to 3D spaces.\nInverse kinematics: Chorus3D is applicable to inverse\nkinematics, which is a problem of finding desired configurations\nof \"articulated\" objects [1, 20]. It allows\nthe specification of articulated objects by using coordinate\ntransformations, and can automatically calculate\nthe parameters of the transformations that satisfy\nconstraints. This method is also applicable to camera\ncontrol by aiming at a possibly moving target object.\nIn this paper, we demonstrate the usefulness of Chorus3D\nby presenting sample constraint-based 3D graphical applications\n.\nThis paper is organized as follows: We first present our approach\nto the use of constraints for 3D graphics. Second,\nwe describe our basic framework of constraints. Next, we\npresent a method for processing coordinate transformations\nin our framework. We then provide the implementation of\nChorus3D, and demonstrate examples of using constraints\nin 3D graphics. 
After giving related work and discussion, we\nmention the conclusions and future work of this research.\nOUR APPROACH\nIn this research, we integrate geometric constraints with 3D\ngraphics. Basically, we realize this by extending our previous\n2D geometric constraint solver Chorus [13] to support\n3D geometry. However, as already mentioned, it is not a\nstraightforward task because 3D graphics typically requires\nhandling scene graphs with hierarchical structures of coordinate\nsystems, which is not covered by the 2D version of\nthe Chorus constraint solver.\nTo support hierarchies of coordinate systems, we introduce\nthe following new model of constraints:\nPoint variables: Each point variable (which consists of\nthree real-valued constrainable variables) is associated\nwith one coordinate system, and its value is expressed\nas local coordinates.\nGeometric constraints: Geometric constraints on point\nvariables are evaluated by using the world coordinates\nof the point variables (they can also refer to 1D variables\nfor, e.g., distances and angles by using their values\ndirectly). A single constraint can refer to point\nvariables belonging to different coordinate systems.\nCoordinate transformations: Parameters of coordinate\ntransformations are provided as constrainable variables\n, and the solver is allowed to change the parameters\nof transformations to appropriately satisfy given\nconstraints.\nWith this model, we can gain the benefit of the easy maintenance\nof geometric relations by using constraints, as well as\nthe convenience of modeling geometric objects by employing\nscene graphs.\nIn our actual implementation, we provide the following three\nelemental kinds of coordinate transformations:\nTranslation: A translation transformation is characterized\nwith three variables t\nx\n, t\ny\n, and t\nz\n, and specifies the\ntranslation of vector (t\nx\n, t\ny\n, t\nz\n).\nRotation: A rotation transformation is parameterized with\nfour variables r\nx\n, r\ny\n, r\nz\n, and r\nw\n, and specifies the rotation\nof angle r\nw\nabout the axis (r\nx\n, r\ny\n, r\nz\n).\nScale: A scale transformation is represented with three\nvariables s\nx\n, s\ny\n, and s\nz\n, and specifies the axis-wise\nscale (s\nx\n, s\ny\n, s\nz\n) about the origin.\nWe can express many practically useful transformations by\nusing such elemental ones. In fact, any transformations represented\nwith Transform nodes in VRML can be realized by\ncombining these kinds of transformations [4].\nCONSTRAINT FRAMEWORK\nIn this section, we briefly describe our framework for handling\nconstraints. We base it on the framework for the 2D\nversion of the Chorus constraint solver. See [13] for further\ndetail.\n3.1\nProblem Formulation\nWe first present the mathematical formulation for modeling\nconstraints and constraint systems.\nIn the following, we\nwrite x to represent a variable vector (x\n1\n, x\n2\n, . . . , x\nn\n) of\nn variables, and also v to indicate a variable value vector\n(v\n1\n, v\n2\n, . . . , v\nn\n) of n real numbers (v\ni\nexpresses the value of\nx\ni\n).\nTo support various geometric constraints in a uniform manner\n, we adopt error functions as a means of expressing constraints\n. An error function e(x) is typically associated with\na single arithmetic constraint, and is defined as a function\nfrom variable value vectors to errors expressed as non-negative\nreal numbers; that is, e(v) gives the error of the\nassociated constraint for v. 
An error function returns zero if and only if the constraint is exactly satisfied. For example, e(x) = (x_i - x_j)^2 can be used for the constraint x_i = x_j. We assume that, for each e(x), its gradient is known:
grad e(x) = (∂e(x)/∂x_1, ∂e(x)/∂x_2, . . . , ∂e(x)/∂x_n).
In the same way as constraint hierarchies [7], constraint systems in our framework can be divided into levels consisting of constraints with equal strengths. Constraints with the strongest preference are said to be required (or hard), and are guaranteed to be always satisfied (if this is impossible, there will be no solution). By contrast, constraints with weaker preferences are said to be preferential (or soft), and may be relaxed if they conflict with stronger constraints.
Solutions to constraint systems are defined as follows: let e_{i,j}(x) be the error function of the j-th constraint (1 <= j <= m_i) at strength level i (0 <= i <= l); then solutions v are determined by the optimization problem
minimize_v E(v) subject to e_{0,j}(v) = 0 (1 <= j <= m_0)
where E is an objective function defined as
E(x) = sum_{i=1}^{l} sum_{j=1}^{m_i} w_i e_{i,j}(x)
in which w_i indicates the weight associated with strength i, and the relation w_1 >> w_2 >> . . . >> w_l holds. In this formulation, level 0 corresponds to required constraints, and the others to preferential ones. Intuitively, more heavily weighted (or stronger) preferential constraints should be better satisfied.
Our framework simulates constraint hierarchies. In particular, if the squares of constraint violations are used to compute error functions, a system in our framework will obtain approximate solutions to the similar hierarchy solved with the criterion least-squares-better [3, 17]. The largest difference is that a system in our framework slightly considers a weak constraint inconsistent with a stronger satisfiable one in computing its solutions, while the similar hierarchy would discard such a weak one.
Our actual implementation of the Chorus3D constraint solver provides four external strengths required, strong, medium, and weak, as well as two internal strengths very strong (used to approximately handle required nonlinear or inequality constraints) and very weak (exploited to make new solutions as close to previous ones as possible). It typically assigns weights 32^4, 32^3, 32^2, 32^1, and 1 to strengths very strong, strong, medium, weak, and very weak, respectively. These weights were determined according to the precision of the actual numerical algorithm (described in the next subsection). To see how much these weights affect solutions, suppose a system of the strong constraint x = 0 and the medium constraint x = 100. Then the unique solution will be obtained as x = 3.0303 (= 100/33). Thus the difference of strengths is obvious. According to our actual experience, this precision allows us to discriminate constraint strengths in most graphical applications.
3.2 Algorithm
To actually find solutions to the constraint systems presented above, we need to solve their corresponding optimization problems. For this purpose, we designed a constraint satisfaction algorithm by combining a numerical optimization technique with a genetic algorithm.
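Before turning to the algorithm itself, the following minimal sketch (with hypothetical class names, not the actual Chorus3D API) illustrates how an equality constraint can expose an error function with its gradient, and how preferential constraints are blended into the objective E(x):

// Minimal sketch, assuming hypothetical types; not the actual Chorus3D classes.
interface ErrorFunction {
    double error(double[] v);        // e(v) >= 0, zero iff the constraint is satisfied
    double[] gradient(double[] v);   // gradient of e at v
}

// Error function e(x) = (x_i - x_j)^2 for the constraint x_i = x_j.
final class EqualityError implements ErrorFunction {
    private final int i, j;
    EqualityError(int i, int j) { this.i = i; this.j = j; }
    public double error(double[] v) { double d = v[i] - v[j]; return d * d; }
    public double[] gradient(double[] v) {
        double[] g = new double[v.length];
        g[i] = 2 * (v[i] - v[j]);
        g[j] = -2 * (v[i] - v[j]);
        return g;
    }
}

// Weighted objective E(x) = sum over levels i and constraints j of w_i * e_{i,j}(x).
final class Objective {
    private final ErrorFunction[][] levels;  // levels[i] = preferential constraints with weight weights[i]
    private final double[] weights;          // decreasing weights, e.g. 32^3, 32^2, ...
    Objective(ErrorFunction[][] levels, double[] weights) { this.levels = levels; this.weights = weights; }
    double value(double[] v) {
        double sum = 0;
        for (int i = 0; i < levels.length; i++)
            for (ErrorFunction e : levels[i]) sum += weights[i] * e.error(v);
        return sum;
    }
}

A numerical routine then only needs value and gradient evaluations of E, which is what later makes it possible to hide coordinate transformations inside the error functions.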
The algorithm uses numerical optimization to find local solutions, while it adopts a genetic algorithm to search for global solutions.
For numerical optimization, we mainly use the quasi-Newton method based on the Broyden-Fletcher-Goldfarb-Shanno updating formula [2, 6], which is a fast iterative technique that exhibits superlinear convergence. Since it avoids fruitless searches by exploiting the history of previous iterations, it is usually faster than straightforward Newton's method.
We introduced a genetic algorithm to alleviate the problem that some kinds of geometric constraints suffer from locally optimal but globally non-optimal solutions [11, 16]. Generally, a genetic algorithm is a stochastic search method that repeatedly transforms a population of potential solutions into another next-generation population [10, 15]. We typically need it only for computing initial solutions; in other words, we can usually re-solve modified constraint systems without the genetic algorithm, only by applying numerical optimization to previous solutions.
PROCESSING COORDINATE TRANSFORMATIONS
In this section, we propose a method for integrating coordinate transformations with our constraint framework.
As already mentioned, we use the world coordinates of points to evaluate 3D geometric constraints. A naive method for this is to duplicate point variables in all ancestor coordinate systems, and then to impose required constraints that represent coordinate transformations between the point variables. However, this method requires an optimization routine supporting required nonlinear constraints, which limits the availability of actual techniques (in fact, we cannot use the quasi-Newton method for this purpose). Also, this method tends to yield many variables and constraints, and therefore requires an extra amount of memory.
Below we propose a more widely applicable method for handling coordinate transformations. Its characteristic is to hide transformations from optimization routines, which is realized by embedding transformations in error functions.
4.1 Model
To begin with, we introduce another variable vector x' = (x'_1, x'_2, . . . , x'_n), which is created by replacing the variables for local coordinates of 3D points in x with the corresponding ones for world coordinates (1D variables remain the same).
We can mathematically model this process as follows. Consider the sequence of the s transformations
y_0 (= x) --t_0--> y_1 --t_1--> . . . --t_{s-2}--> y_{s-1} --t_{s-1}--> y_s (= x')
where y_0 and y_s are equal to x and x' respectively, each y_k (1 <= k <= s-1) is an "intermediate" vector, and each t_k (0 <= k <= s-1) is a function that transforms y_k into y_{k+1}. Intuitively, t_k corresponds to a coordinate transformation, and transforms related point variables from its source coordinate system into its destination system. It should be noted that, although transformations are, in general, hierarchical (or tree-structured), we can always find such a linear sequence by "serializing" them in an appropriate order.
By using such transformations, we can compute x' as follows:
x' = t_{s-1}(t_{s-2}( . . . (t_1(t_0(x))) . . . )) ≡ t(x)
where t is defined as the composition of all the elemental transformations. In the following description, we write y_{k,i} to denote the i-th element of y_k, and also t_{k,i} to represent the i-th element of t_k; that is,
y_{k+1} = (y_{k+1,1}, y_{k+1,2}, . . . , y_{k+1,n}) = (t_{k,1}(y_k), t_{k,2}(y_k), . . . , t_{k,n}(y_k)) = t_k(y_k).
4.2 Method
Geometric constraints are evaluated by using the world coordinates of points, which means that their error functions are defined as e(x'). Using the composed transformation, we can evaluate them as
e(x') = e(t(x)).
Importantly, we can realize this computation efficiently by applying only the necessary transformations to the actually used variables.
We also need to compute the gradient of e(t(x)), i.e.,
grad e(t(x)) = (∂e(t(x))/∂x_1, ∂e(t(x))/∂x_2, . . . , ∂e(t(x))/∂x_n).
Basically, we can decompose each partial derivative ∂e(t(x))/∂x_i into primitive expressions by repeatedly using the chain rule. However, we should avoid the simple application of the chain rule since it would result in a large number of expressions.
Instead, we perform a controlled way of decomposing such partial derivatives; it appropriately arranges the chain rule to restrict the computation to only the necessary components. First, we decompose ∂e(t(x))/∂x_i as follows:
∂e(t(x))/∂x_i = sum_j (∂e(x')/∂x'_j) (∂t_{s-1,j}(y_{s-1})/∂x_i)
= sum_j (∂e(x')/∂x'_j) sum_{j_{s-1}} (∂t_{s-1,j}(y_{s-1})/∂y_{s-1,j_{s-1}}) (∂t_{s-2,j_{s-1}}(y_{s-2})/∂x_i)
= sum_{j_{s-1}} [ sum_j (∂e(x')/∂x'_j) (∂t_{s-1,j}(y_{s-1})/∂y_{s-1,j_{s-1}}) ] (∂t_{s-2,j_{s-1}}(y_{s-2})/∂x_i)
= sum_{j_{s-1}} (∂e(x')/∂y_{s-1,j_{s-1}}) (∂t_{s-2,j_{s-1}}(y_{s-2})/∂x_i).
Note that each ∂e(x')/∂x'_j is given by the definition of the geometric constraint, and also that each ∂t_{s-1,j}(y_{s-1})/∂y_{s-1,j_{s-1}} is a partial derivative in the gradient of the single coordinate transformation t_{s-1}. Thus we can obtain each ∂e(x')/∂y_{s-1,j_{s-1}}. Also, by repeating this process, we can compute, for each k,
∂e(t(x))/∂x_i = sum_{j_k} (∂e(x')/∂y_{k,j_k}) (∂t_{k-1,j_k}(y_{k-1})/∂x_i)
and finally reach
∂e(t(x))/∂x_i = sum_{j_1} (∂e(x')/∂y_{1,j_1}) (∂t_{0,j_1}(x)/∂x_i)
where each ∂t_{0,j_1}(x)/∂x_i is a component of the gradient of t_0. Therefore, ∂e(t(x))/∂x_i is now determined.
Furthermore, we can considerably reduce the number of computations of ∂e(x')/∂y_{k,j_k} in practice. We can make the following observations about the above computation:
For each variable x_j, ∂e(x')/∂x'_j can be non-zero only if x'_j is actually needed to evaluate the designated constraint.
If x_i originates in the coordinate system associated with t_k (that is, x_i is either a local coordinate or a parameter of the coordinate transformation), we have y_{k,i} = x_i, which means that ∂t_{k,j}(y_k)/∂x_i is directly available. Therefore, we can compute ∂e(x')/∂x_i immediately at that step.
These observations reveal that we need to transfer a partial derivative ∂e(x')/∂y_{k,j} to the next step only when x_j represents a really necessary coordinate that has not yet reached its local coordinate system. Also, since we can handle each necessary point independently, we can implement this process with a linear recursive function that hands over only three derivatives ∂e(x')/∂y_{k,j} at each recursive call.
IMPLEMENTATION
We implemented the proposed method by developing a constraint solver called Chorus3D, which is a 3D extension to our previous 2D geometric constraint solver Chorus [13]. We constructed Chorus3D as a C++ class library, and also developed a native method interface to make it available to Java programs.
Chorus3D allows programmers to add a new kind of arithmetic constraints (e.g., Euclidean geometric constraints) by constructing a new constraint class with a method that evaluates their error functions.
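The mechanism of Section 4.2 can be sketched as follows (hypothetical types, not the actual Chorus3D classes): a forward pass composes the transformations to obtain world coordinates, and a backward pass hands three derivatives per point down the chain. Derivatives with respect to the transformation parameters themselves would be accumulated analogously at each step.

// Minimal sketch: evaluating a constraint through a chain of coordinate
// transformations and back-propagating its gradient to the local coordinates.
interface Transform3D {
    double[] apply(double[] p);              // local -> next coordinate system
    double[][] jacobianWrtPoint(double[] p); // 3x3 matrix dt/dp evaluated at p
}

final class TranslateTransform implements Transform3D {
    final double tx, ty, tz;
    TranslateTransform(double tx, double ty, double tz) { this.tx = tx; this.ty = ty; this.tz = tz; }
    public double[] apply(double[] p) { return new double[]{p[0] + tx, p[1] + ty, p[2] + tz}; }
    public double[][] jacobianWrtPoint(double[] p) {
        return new double[][]{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}; // identity for a translation
    }
}

final class ChainEvaluator {
    // Forward pass: world coordinates x' = t_{s-1}(...t_0(x)...), keeping the intermediates y_k.
    static double[][] forward(double[] local, Transform3D[] chain) {
        double[][] y = new double[chain.length + 1][];
        y[0] = local;
        for (int k = 0; k < chain.length; k++) y[k + 1] = chain[k].apply(y[k]);
        return y;
    }
    // Backward pass: given de/dx' (three numbers), return de/dx by the chain rule,
    // handing only three derivatives down at each step.
    static double[] backward(double[] dWorld, double[][] y, Transform3D[] chain) {
        double[] g = dWorld;
        for (int k = chain.length - 1; k >= 0; k--) {
            double[][] J = chain[k].jacobianWrtPoint(y[k]);
            double[] gPrev = new double[3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    gPrev[i] += g[j] * J[j][i]; // de/dy_{k,i} = sum_j de/dy_{k+1,j} * dt_{k,j}/dy_{k,i}
            g = gPrev;
        }
        return g;
    }
}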
Also, programmers can introduce\na new kind of non-arithmetic (or pseudo) constraints\n(for, e.g., general graph layout) by developing a new evalua-tion\nmodule which computes an \"aggregate\" error function\nfor a given set of constraints.\nChorus3D currently provides linear equality, linear inequality\n, edit (update a variable value), stay (fix a variable value),\nEuclidean geometric constraints (for, e.g., parallelism, per-pendicularity\n, and distance equality), and graph layout constraints\nbased on the spring model [14]. Linear equality/\ninequality constraints can refer to only 1D variables (including\nelements of 3D point variables), while edit and stay constraints\ncan be associated with 1D and 3D point variables.\nEuclidean geometric constraints typically refer to point variables\nalthough they sometimes require 1D variables for angles\nand distances. Each graph layout constraint represents\na graph edge, and refers to two point variables as its associated\ngraph nodes. As stated earlier, constraints on such\npoint variables are evaluated by using world coordinates of\nthe points. Also, a single constraint can refer to point variables\nbelonging to different coordinate systems.\nThe application programming interface of Chorus3D is a\nnatural extension to that of Chorus, which provides a certain\ncompatibility with a recent linear solver called Cassowary\n[3]; in a similar way to Cassowary and Chorus, Chorus3D\nallows programmers to process constraint systems by creating\nvariables and constraints as objects, and by adding/\nremoving constraint objects to/from the solver object. In\naddition, Chorus3D handles coordinate transformations as\nobjects, and presents an interface for arranging them hier-archically\nEXAMPLES\nIn this section, we present three examples to demonstrate\nhow to incorporate geometric constraints into 3D graphics\nby using the Chorus3D constraint solver. All the examples\nare implemented in Java by using Java 3D as a graphics\n97\nFigure 1: A 3D geometric layout of a general graph\nstructure.\nprogramming interface as well as the native method interface\nwith Chorus3D. We also provide computation times taken\nfor constraint satisfaction in these examples.\n6.1\nGraph Layout\nThe first example is an application which lays out a set\nof points with a general graph structure in a 3D space as\nshown in Figure 1. This application also allows a user to\ndrag graph nodes with a mouse.\n1\nThe used graph layout\ntechnique is based on a 3D extension to the spring model\n[14]. This kind of 3D graph layout is practically useful to\ninformation visualization, and has actually been adopted in\na certain system [19].\nThe constraint system of this graph layout consists of 26\npoint variables (i.e., 78 real-valued variables), 31 graph layout\nconstraints, and three linear equality constraints for fixing\none of the point variables at the origin. When executed\non an 866 MHz Pentium III processor running Linux 2.2.16,\nChorus3D obtained an initial solution in 456 milliseconds. It\nperformed constraint satisfaction typically within 250 milliseconds\nto reflect the user's dragging a graph node.\n6.2\nConstrained Dragging\nThe second example is an application which allows a user\nto drag an object constrained to be on another spherical\nobject. Figure 2 depicts this application, where the smaller\nsolid spherical object is constrained to be on the surface of\nthe larger wireframe one. 
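Before continuing with the constrained-dragging example, here is a rough sketch of how the Section 6.1 graph layout could be assembled through the solver-object interface. Only C3Solver, C3Variable3D, and the add/solve calls mirror the robot-arm listing shown later in Figure 5; the constraint class names, constructors, and accessors used here are hypothetical stand-ins.

// Sketch of the Section 6.1 graph-layout setup (hypothetical constraint classes).
C3Solver s = new C3Solver();

// One 3D point variable per graph node, expressed in the world coordinate system.
C3Variable3D[] nodes = new C3Variable3D[26];
for (int i = 0; i < nodes.length; i++)
    nodes[i] = new C3Variable3D();               // constructor form is illustrative

// Spring-model layout constraints, one per graph edge.
int[][] edges = loadEdges();                     // the 31 graph edges (hypothetical helper)
for (int[] e : edges)
    s.add(new C3GraphEdgeConstraint(nodes[e[0]], nodes[e[1]]));

// Pin one node at the origin with three linear equality constraints (x = y = z = 0).
s.add(new C3LinearEquality(nodes[0].x(), 0.0));
s.add(new C3LinearEquality(nodes[0].y(), 0.0));
s.add(new C3LinearEquality(nodes[0].z(), 0.0));

s.solve();                                       // initial layout
// On each drag event, an edit constraint on the dragged node would be added or
// updated and s.solve() called again to re-satisfy the system incrementally.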
The application declares a strong\nEuclidean geometric constraint which specifies a constant\ndistance between the centers of these objects. When the\nuser tries to drag the smaller object with a mouse, the application\nimposes another medium Euclidean constraint which\ncollinearly locates the viewpoint, the 3D position of the\nmouse cursor (which is considered to be on the screen), and\n1\nUnlike constrained dragging in the next example, this\nmouse operation is simply implemented with Java 3D's\nPickMouseBehavior classes.\nFigure 2: Dragging an object constrained to be on\na sphere.\nViewpoint\nMouse cursor which\nis on the screen\nScreen\nObject which is on\nthe sphere surface\nCollinearity\nconstraint\nDistance\nconstraint\nSphere\nFigure 3: Implementation of constrained dragging.\nthe center of the dragged object as shown in Figure 3. This\ncollinearity constraint reflects the motion of the mouse in\nthe position of the dragged object. Since the collinearity\nconstraint is weaker than the first Euclidean constraint, the\nuser cannot drag the smaller object to the outside of the\nlarger sphere.\nThe application initially declares one Euclidean geometric\nconstraint on two point variables, and solved it in 1 millisecond\non the same computer as the first example. When\nthe user tries to drag the smaller object, it adds another\nEuclidean constraint as well as two edit constraints for the\nviewpoint and mouse position. The solver maintained this\nconstraint system usually within 2 milliseconds.\n6.3\nInverse Kinematics\nThe final example applies inverse kinematics to a virtual\nrobot arm by using constraints.\nUnlike the previous examples\n, it takes advantage of coordinate transformations to\nexpress its constraint system.\n98\n(a)\n(b)\n(c)\n(d)\n(e)\n(f )\nFigure 4: A robot arm application which performs inverse kinematics.\nAs illustrated in Figure 4(a), the robot arm consists of four\nparts called a base, a shoulder, an upper arm, and a forearm.\nConstraint satisfaction for inverse kinematics is performed\nto position its hand (the end of the forearm) at the target\nobject if possible, or otherwise to make it maximally close\nto the target. Figures 4(b)(f) show the movement of the\nrobot arm. In Figures 4(b)(e), its hand is positioned at\nthe exact location of the target by using appropriate angles\nof its joints. By contrast, in Figure 4(f), the hand cannot\nreach the target, and therefore the arm is extended toward\nthe target instead.\nFigure 5 describes the constraint program used in the robot\narm application.\nAfter constructing a constraint solver\ns, it creates six coordinate transformations shldrTTfm,\nshldrRTfm, uarmTTfm, uarmRTfm, farmTTfm, and farmRTfm.\nHere the rotation angle parameters of the rotation transformations\nshldrRTfm, uarmRTfm, and farmRTfm will actually\nwork as variables that can be altered by the solver.\nNext, it generates a point variable handPos to represent\nthe position of the hand, and then suggests the target position\nto the hand by using a preferential edit constraint\neditHandPos. Finally, executing the solver, it obtains the\ndesired angles shldrAngle, uarmAngle, and farmAngle of\nthe rotation transformations. 
These angles will be passed\nto the Java 3D library to render the properly configured\nrobot arm.\nThis program generates a constraint system which contains\nthree translation and three rotation transformations, one explicit\npoint variable as well as six point variables and three\n1D variables for coordinate transformations, and one edit\nconstraint. The solver found an initial solution to this system\nin 18 milliseconds, and obtained each new solution for\na frame update typically within 10 milliseconds.\nRELATED WORK AND DISCUSSION\nThere has been work on integrating constraints or similar\nfunctions with 3D graphics languages to facilitate the specification\nof graphical objects. For example, we can view the\nevent routing mechanism in VRML [4] as a limited form of\none-way propagation constraints. Also, there is an attempt\nto extend VRML by introducing one-way propagation and\nfinite-domain combinatorial constraints [5]. However, they\ncannot handle more powerful simultaneous nonlinear constraints\nsuch as Euclidean geometric constraints.\nAlthough many constraint solvers have been developed in\n99\n// constraint solver\ns = new C3Solver();\n// translation transformation for the shoulder: fixed to (0, .1, 0)\nshldrTTfm = new C3TranslateTransform(new C3Domain3D(0, .1, 0));\ns.add(shldrTTfm); // shldrTTfm is parented by the world coordinate system\n// rotation transformation for the shoulder: axis fixed to (0, 1, 0); angle ranging over [-10000, 10000]\nshldrRTfm = new C3RotateTransform(new C3Domain3D(0, 1, 0), new C3Domain(-10000, 10000));\ns.add(shldrRTfm, shldrTTfm); // shldrRTfm is parented by shldrTTfm\n// translation transformation for the upper arm: fixed to (0, .1, 0)\nuarmTTfm = new C3TranslateTransform(new C3Domain3D(0, .1, 0));\ns.add(uarmTTfm, shldrRTfm); // uarmTTfm is parented by shldrRTfm\n// rotation transformation for the upper arm: axis fixed to (0, 0, 1); angle ranging over [-1.57, 1.57]\nuarmRTfm = new C3RotateTransform(new C3Domain3D(0, 0, 1), new C3Domain(-1.57, 1.57));\ns.add(uarmRTfm, uarmTTfm); // uarmRTfm is parented by uarmTTfm\n// translation transformation for the forearm: fixed to (0, .5, 0)\nfarmTTfm = new C3TranslateTransform(new C3Domain3D(0, .5, 0));\ns.add(farmTTfm, uarmRTfm); // farmTTfm is parented by uarmRTfm\n// rotation transformation for the forearm: axis fixed to (0, 0, 1); angle ranging over [-3.14, 0]\nfarmRTfm = new C3RotateTransform(new C3Domain3D(0, 0, 1), new C3Domain(-3.14, 0));\ns.add(farmRTfm, farmTTfm); // farmRTfm is parented by farmTTfm\n// variable for the hand's position, associated with farmRTfm and fixed to (0, .5, 0)\nhandPos = new C3Variable3D(farmRTfm, new C3Domain3D(0, .5, 0));\n// medium-strength edit constraint for the hand's position\neditHandPos = new C3EditConstraint(handPos, C3.MEDIUM);\ns.add(editHandPos);\n// suggest the hand being located at the target's position\neditHandPos.set(getTargetWorldCoordinates());\n// solve the constraint system\ns.solve();\n// get solutions\ndouble shldrAngle = shldrRTfm.rotationAngle().value();\ndouble uarmAngle = uarmRTfm.rotationAngle().value();\ndouble farmAngle = farmRTfm.rotationAngle().value();\nFigure 5: Constraint program for the robot arm application.\nthe field of graphical user interfaces [3, 7, 11, 12, 13, 17, 18],\nmost of them do not provide special treatment for 3D graphics\n. 
In general, the role of nonlinear geometric constraints\nis more important in 3D applications than in 2D interfaces.\nMost importantly, 3D graphics usually requires rotations of\nobjects which are rarely used in 2D interfaces. The main\nreason is that we often equally treat all \"horizontal\" directions\nin a 3D space even if we may clearly distinguish them\nfrom \"vertical\" directions. Therefore, nonlinear constraint\nsolvers are appropriate for 3D applications. In addition, coordinate\ntransformations should be supported since they are\ntypically used to handle rotations of objects.\nGleicher proposed the differential approach [8, 9], which supports\n3D geometric constraints and coordinate transformations\n. In a sense, it shares a motivation with Chorus3D; in\naddition to support for 3D graphics, it allows user-defined\nkinds of geometric constraints. However, it is based on a different\nsolution method from Chorus3D; it realizes constraint\nsatisfaction by running virtual dynamic simulations. This\ndifference results in a quite different behavior of solutions as\nwell as an interface for controlling solutions. By contrast,\nChorus3D provides a much more compatible interface with\nrecent successful solvers such as Cassowary [3].\nMuch research on inverse kinematics has been conducted in\nthe fields of computer graphics and robotics [1, 20]. However\n, inverse kinematics is typically implemented as specialized\nsoftware which only provides limited kinds of geometric\nconstraints.\nChorus3D has two limitations in its algorithm: one is on the\nprecision of solutions determined by preferential constraints;\nthe other is on the speed of the satisfaction of large constraint\nsystems. These limitations are mainly caused by the\ntreatment of multi-level preferences of constraints in addition\nto required constraints (i.e., constraint hierarchies). Although\nmany numerical optimization techniques have been\nproposed and implemented in the field of mathematical programming\n[2, 6], most of them do not handle preferential\nconstraints. To alleviate the limitations of Chorus3D, we\nare pursuing a more sophisticated method for processing\nmulti-level preferential constraints.\nWe implemented Chorus3D as a class library which can\nbe exploited in C++ and Java programs. However, more\nhigh-level authoring tools will also be useful for declarative\napproaches to 3D design. One possible direction is to extend\nVRML [4] to support geometric constraints. Standard\nVRML requires scripts in Java or JavaScript to realize complex\nlayouts and behaviors. By contrast, constraint-enabled\nVRML will cover a wider range of applications without such\nadditional scripts.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we presented Chorus3D, a geometric constraint\nlibrary for 3D graphical applications. It enables programmers\nto use geometric constraints for various purposes\n100\nsuch as geometric layout, constrained dragging, and inverse\nkinematics.\nIts novel feature is to handle scene graphs\nby processing coordinate transformations in geometric constraint\nsatisfaction.\nOur future work includes the development of other kinds of\ngeometric constraints to further prove the usefulness of our\napproach. In particular, we are planning to implement non-overlapping\nconstraints [13] in Chorus3D so that we can use\nit for the collision resolution of graphical objects. Another\nfuture direction is to improve Chorus3D in the scalability\nand accuracy of constraint satisfaction.\nREFERENCES\n[1] Badler, N. 
I., Phillips, C. B., and Webber, B. L.\nSimulating Humans: Computer Graphics, Animation,\nand Control. Oxford University Press, Oxford, 1993.\n[2] Bertsekas, D. P. Nonlinear Programming, 2nd ed.\nAthena Scientific, 1999.\n[3] Borning, A., Marriott, K., Stuckey, P., and Xiao, Y.\nSolving linear arithmetic constraints for user interface\napplications. In Proc. ACM UIST , 1997, 8796.\n[4] Carey, R., Bell, G., and Marrin, C. The Virtual\nReality Modeling Language (VRML97). ISO/IEC\n14772-1:1997, The VRML Consortium Inc., 1997.\n[5] Diehl, S., and Keller, J. VRML with constraints. In\nProc. Web3D-VRML, ACM, 2000, 8186.\n[6] Fletcher, R. Practical Methods of Optimization,\n2nd ed. John Wiley & Sons, 1987.\n[7] Freeman-Benson, B. N., Maloney, J., and Borning, A.\nAn incremental constraint solver. Commun. ACM 33,\n1 (1990), 5463.\n[8] Gleicher, M. A graphical toolkit based on differential\nconstraints. In Proc. ACM UIST , 1993, 109120.\n[9] Gleicher, M. A differential approach to graphical\nmanipulation (Ph.D. thesis). Tech. Rep.\nCMU-CS-94-217, Sch. Comput. Sci. Carnegie Mellon\nUniv., 1994.\n[10] Herrera, F., Lozano, M., and Verdegay, J. L. Tackling\nreal-coded genetic algorithms: Operators and tools for\nbehavioural analysis. Artif. Intell. Rev. 12, 4 (1998),\n265319.\n[11] Heydon, A., and Nelson, G. The Juno-2\nconstraint-based drawing editor. Research Report\n131a, Digital Systems Research Center, 1994.\n[12] Hosobe, H. A scalable linear constraint solver for user\ninterface construction. In Principles and Practice of\nConstraint Programming--CP2000 , vol. 1894 of\nLNCS, Springer, 2000, 218232.\n[13] Hosobe, H. A modular geometric constraint solver for\nuser interface applications. In Proc. ACM UIST , 2001,\n91100.\n[14] Kamada, T., and Kawai, S. An algorithm for drawing\ngeneral undirected graphs. Inf. Process. Lett. 31, 1\n(1989), 715.\n[15] Kitano, H., Ed. Genetic Algorithms. Sangyo-Tosho,\n1993. In Japanese.\n[16] Kramer, G. A. A geometric constraint engine. Artif.\nIntell. 58, 13 (1992), 327360.\n[17] Marriott, K., Chok, S. S., and Finlay, A. A tableau\nbased constraint solving toolkit for interactive\ngraphical applications. In Principles and Practice of\nConstraint Programming--CP98 , vol. 1520 of LNCS,\nSpringer, 1998, 340354.\n[18] Sannella, M. Skyblue: A multi-way local propagation\nconstraint solver for user interface construction. In\nProc. ACM UIST , 1994, 137146.\n[19] Takahashi, S. Visualizing constraints in visualization\nrules. In Proc. CP2000 Workshop on Analysis and\nVisualization of Constraint Programs and Solvers,\n2000.\n[20] Zhao, J., and Badler, N. I. Inverse kinematics\npositioning using nonlinear programming for highly\narticulated figures. ACM Trans. Gr. 13, 4 (1994),\n313336.\n101\n", "keywords": "layout;scene graphs;3D graphics;geometric layout;constraint satisfaction;3D graphical applications;geometric constraints;graphical objects;behaviors;coordinate transformation"} {"name": "120", "title": "KDDCS: A Load-Balanced In-Network Data-Centric Storage Scheme for Sensor Networks", "abstract": "We propose an In-Network Data-Centric Storage (INDCS) scheme for answering ad-hoc queries in sensor networks. Previously proposed In-Network Storage (INS) schemes suffered from Storage Hot-Spots that are formed if either the sensors' locations are not uniformly distributed over the coverage area, or the distribution of sensor readings is not uniform over the range of possible reading values. 
Our K-D tree based Data-Centric Storage (KDDCS) scheme maintains the invariant that the storage of events is distributed reasonably uniformly among the sensors. KDDCS is composed of a set of distributed algorithms whose running time is within a poly-log factor of the diameter of the network. The number of messages any sensor has to send, as well as the bits in those messages, is poly-logarithmic in the number of sensors. Load balancing in KDDCS is based on defining and distributively solving a theoretical problem that we call the Weighted Split Median problem . In addition to analytical bounds on KDDCS individual algorithms , we provide experimental evidence of our scheme's general efficiency, as well as its ability to avoid the formation of storage hot-spots of various sizes, unlike all previous INDCS schemes.", "fulltext": "INTRODUCTION\nSensor networks provide us with the means of effectively monitoring\nand interacting with the physical world. As an illustrative\nexample of the type of sensor network application that concerns\nus here, consider an emergency/disaster scenario where sensors are\ndeployed in the area of the disaster [17]. It is the responsibility of\nthe sensor network to sense and store events of potential interest.\nAn event is composed of one or more attributes (e.g. temperature,\ncarbon monoxide level, etc.), the identity of the sensor that sensed\nthe event, and the time when the event was sensed. As first responders\nmove through the disaster area with hand-held devices, they\nissue queries about recent events in the network. For example, the\nfirst responder might ask for the location of all sensor nodes that\nrecorded high carbon monoxide levels in the last 15 minutes, or\nhe might ask whether any sensor node detected movement in the\nlast minute. Queries are picked up by sensors in the region of the\nfirst responder. The sensor network is then responsible for answering\nthese queries. The first responders use these query answers to\nmake decisions on how to best manage the emergency.\nThe ad-hoc queries of the first responders will generally be multi-dimensional\nrange queries [9], that is, the queries concern sensor\nreadings that were sensed over a small time window in the near past\nand that fall in a given range of the attribute values. In-Network\nStorage (INS) is a storage technique that has been specifically presented\nto efficiently process this type of queries. INS involves storing\nevents locally in the sensor nodes. Storage may be in-network\nbecause it is more efficient than shipping all the data (i.e., raw sensor\nreadings) out of the network (for example to base stations), or\nsimply because no out-of-network storage is available. All INS\nschemes already presented in literature were Data-Centric Storage\n(DCS) schemes [15]. In any In-Network Data-Centric Storage (INDCS\n) scheme, there exists a function from events to sensors that\nmaps each event to an owner sensor based on the value of the attributes\nof that event. The owner sensor will be responsible for\nstoring this event. The owner may be different than the sensor that\noriginally generated the event. 
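To fix terminology, the following sketch illustrates the event and query model assumed throughout; the field and class names are illustrative only, not taken from any particular system.

// Illustrative sketch of the data model: an event is a tuple of attribute
// readings plus the sensing node and time; an ad-hoc query asks for events
// whose attributes fall in given ranges within a recent time window.
final class Event {
    final double[] attributes;   // e.g., {temperature, carbonMonoxideLevel}
    final int nodeId;
    final long timestamp;
    Event(double[] attributes, int nodeId, long timestamp) {
        this.attributes = attributes; this.nodeId = nodeId; this.timestamp = timestamp;
    }
}

final class RangeQuery {
    final double[] lo, hi;       // per-attribute range [lo[j], hi[j]]
    final long sinceTimestamp;   // "in the last 15 minutes", etc.
    RangeQuery(double[] lo, double[] hi, long sinceTimestamp) {
        this.lo = lo; this.hi = hi; this.sinceTimestamp = sinceTimestamp;
    }
    boolean matches(Event e) {
        if (e.timestamp < sinceTimestamp) return false;
        for (int j = 0; j < lo.length; j++)
            if (e.attributes[j] < lo[j] || e.attributes[j] > hi[j]) return false;
        return true;
    }
}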
To date, the Distributed Index for\nMulti-dimensional data (DIM) scheme [9] has been shown to exhibit\nthe best performance among all proposed INDCS schemes in\ndealing with sensor networks whose query loads are basically composed\nof ad-hoc queries .\nIn DIM [9], the events-to-sensors mapping is based on a K-D tree\n[3], where the leaves\nR form a partition of the coverage area, and\neach element of\nR contains either zero or one sensor. The formation\nof the K-D tree consists of rounds. Initially,\nR is a one element\nset containing the whole coverage area. In each odd/even round r,\neach region R R that contains more than one sensor is bisected\nhorizontally/vertically. Each time that a region is split, each sensor\nin that region has a bit appended to its address specifying which\nside of the split the sensor was on. Thus, the length of a sensor's\naddress (bit-code) is its depth in the underlying K-D tree. When a\nsensor generates an event, it maps such event to a binary code based\non a repetitive fixed uniform splitting of the attributes' ranges in a\nround robin fashion. For our purposes, it is sufficient for now to\n317\nconsider the cases that the event consists of only one attribute, say\ntemperature. Then, the high order bits of the temperature are used\nto determine a root-to-leaf path in the K-D tree, and if there is a\nsensor in the region of the leaf, then this sensor is the owner of this\nevent. Due to the regularity of regions in this K-D tree, the routing\nof an event from the generating sensor to the owner sensor is particularly\neasy using Greedy Perimeter Stateless Routing (GPSR) [6].\nFull description of DIM is presented in Section 2.\nThough it is the best DCS scheme so far, DIM suffers from several\nproblems. One problem is that events may well be mapped to\norphan regions that contain no sensors. Thus, DIM requires some\nkludge to assign orphan regions to neighboring sensors.\nAnother major problem in DIM is that of storage hot-spots. Storage\nhot-spots may occur if the sensors are not uniformly distributed.\nA storage hot-spot occurs when relatively many events are assigned\nto a relatively small number of the sensors. For example, if there\nwas only one sensor on one side of the first bisection, then half of\nthe events would be mapped to this sensor if the events were uniformly\ndistributed over the range of possible events. Due to their\nstorage constraints, the presence of a storage hot-spot leads to increasing\nthe dropping rate of events by overloaded sensors. Clearly,\nthis has a significant impact on the quality of data (QoD) generated\nby the sensor network. Queries for events in a storage hot-spot may\nbe delayed due to contention at the storage sensors and the surrounding\nsensors. More critically, the sensors in and near the hot-spot\nmay quickly run out of energy, due to the high insertion/query\nload imposed to them. This results in a loss of the events generated\nat these sensors, the events stored at these sensors, and possibly a\ndecrease in network connectivity. Increased death of sensors results\nin decreasing the coverage area and causes the formation of coverage\ngaps within such area. Both of which consequently decrease\nQoD. Certainly, it is not desirable to have a storage scheme whose\nperformance and QoD guarantees rest on the assumption that the\nsensors are uniformly distributed geographically.\nStorage hot-spots may also occur in DIM if the distribution of\nevents is not uniform over the range of possible events. 
It is difficult\nto imagine any reasonable scenario where the events are uniformly\ndistributed over the range of all possible events. Consider the situation\nwhere the only attribute sensed is temperature. One would\nexpect that most temperature readings would be clustered within a\nrelatively small range rather than uniform over all possible temperatures\n. Without any load balancing, those sensors responsible for\ntemperatures outside this range would store no events.\nIn this paper, we provide a load-balanced INDCS scheme based\non K-D trees, that we, not surprisingly, call K-D tree based DCS\n(KDDCS). In our KDDCS scheme, the refinement of regions in the\nformation of the K-D tree has the property that the numbers of sensors\non both sides of the partition are approximately equal. As a\nresult of this, our K-D tree will be balanced, there will be no orphan\nregions, and, regardless of the geographic distribution of the\nsensors, the ownership of events will uniformly distributed over\nthe sensors if the events are uniformly distributed over the range\nof possible events. We present a modification of GPSR routing,\nnamely Logical Stateless Routing (LSR), for the routing of events\nfrom their generating sensors to their owner sensors, that is competitive\nwith the GPSR routing used in DIM. In order to maintain\nload balance in the likely situation that the events are not uniformly\ndistributed, we present a re-balancing algorithm that we call K-D\nTree Re-balancing (KDTR). Our re-balancing algorithm guarantees\nload balance even if the event distribution is not uniform. KDTR\nhas essentially minimal overhead. We identify a problem, that we\ncall the weighted split median problem, that is at the heart of both\nthe construction of the initial K-D tree, and the re-balancing of the\nK-D tree. In the weighted split median problem, each sensor has an\nassociated weight/multiplicity, and the sensors' goal is to distributively\ndetermine a vertical line with the property that the aggregate\nweight on each side of the line is approximately equal. We give a\ndistributed algorithm for the weighted split median problem, and\nshow how to use this algorithm to construct our initial K-D tree,\nand to re-balance the tree throughout the network lifetime.\nWe are mindful of the time, message complexity, and node storage\nrequirements, in the design and implementation of all of our\nalgorithms. The time for all of our algorithms is within a poly-log\nfactor of the diameter of the network. Obviously, no algorithm can\nhave time complexity less than the diameter of the network. The\nnumber of messages, and number of bits in those messages, that\nany particular node is required to send by our algorithms is poly-logarithmic\nin number of sensors. The amount of information that\neach node must store to implement one of our algorithms is logarithmic\nin the number of sensors.\nExperimental evaluation shows that the main advantages of KDDCS\n, when compared to the pure DIM, are:\nAchieving a better data persistence by balancing the storage\nresponsibility among sensor nodes.\nIncreasing the QoD by distributing the storage hot-spot events\namong a larger number of sensors.\nIncreasing the energy savings by achieving a well balanced\nenergy consumption overhead among sensor nodes.\nThe rest of the paper is organized as follows. Section 2 presents\nan overview of the differences between DIM and KDDCS. Section\n3 describes the weighted split median problem, and our distributed\nsolution. 
Section 4 describes the components of KDDCS. Section 5\npresents our K-D tree re-balancing algorithm. Experimental results\nare discussed in Section 6. Section 7 presents the related work.\nOVERVIEW OF DIM VS KDDCS\nIn this section, we will briefly describe the components of both\nschemes, DIM and KDDCS, and highlight the differences between\nthe two schemes using a simple example.\nWe assume that the sensors are arbitrarily deployed in the convex\nbounded region R. We assume also that each sensor is able to\ndetermine its geographic location (i.e., its x and y coordinates), as\nwell as, the boundaries of the service area R. Each node is assumed\nto have a unique NodeID, like a MAC address. Sensor nodes are\nassumed to have the capacity for wireless communication, basic\nprocessing and storage, and they are associated with the standard\nenergy limitations.\nThe main components of any DCS scheme are: the sensor to\naddress mapping that gives a logical address to each sensor, and\nthe event to owner-sensor mapping that determines which sensor\nwill store the event. The components of DIM and KDDCS are:\nRepetitive splitting of the geographic region to form the underlying\nK-D tree, and the logical sensor addresses.\nRepetitive splitting of the attribute ranges to form the bit-code\nfor an event.\nThe routing scheme to route an event from the generating\nsensor to the owner sensor.\nWe now explain how DIM implements these components.\nLet us start with the formation of the K-D tree in DIM. DIM\nstarts the network operation with a static node to bit-code mapping\nphase. In such phase, each sensor locally determines its binary\naddress by uniformly splitting the overall service area in a round\n318\nFigure 1: Initial network configuration\nFigure 2: DIM K-D tree\nrobin fashion, horizontally then vertically, and left shifting its bit-code\nwith every split by 0 (or 1) bit when falling above (or below)\nthe horizontal split line (similarly, by a 0 bit if falling on the left\nof the vertical split line, or a 1 bit otherwise). Considering the\nregion as partitioned into zones, the process ends when every sensor\nlies by itself in a zone, such that the sensor address is the zone bit\ncode. Thus, the length of the binary address of each sensor (in bits)\nrepresents its depth in the underlying K-D tree. Note that from\na sensor address, one can determine the physical location of the\nsensor. In case any orphan zones exist (zones physically containing\nno sensors in their geographic area), the ownership of each of these\nzones is delegated to one of its neighbor sensors. As an example,\nconsider the simple input shown in Figure 1. The K-D tree formed\nby DIM is shown in Figure 2. In this figure, the orphan zone (01)\nis assumed to be delegated to node 001, which is the least loaded\namong its neighbors.\nWe now turn to the construction of an event bit-code in DIM.\nThe generation of the event bit-code proceeds in rounds. As we\nproceed, there is a range R\nj\nassociated with each attribute j of the\nevent. Initially, the range R\nj\nis the full range of possible values for\nattribute j. We now describe how a round i 0 works. Round i,\ndetermines the (i+1)\nth\nhigh order bit in the code. Round i depends\non attribute j = i mod k of the event, where k is the number of\nattributes in the event. Assume the current value of R\nj\nis [a, c], and\nlet b = (a + c)/2 be the midpoint of the range R\nj\n. 
If the value of\nattribute j is in the lower half of the range R\nj\n, that is in [a, b], then\nthe i\nth\nbit is 0, and R\nj\nis set to be the lower half of R\nj\n. If the value\nof attribute j is in the upper half of the range R\nj\n, that is in [b, c],\nthen the i\nth\nbit is 1, and R\nj\nis set to be the upper half of R\nj\n.\nTo show the events to bit-code mapping in DIM, consider that\nthe events in our example (shown in Figure 2) are composed of\ntwo attributes, temperature and pressure, with ranges (30, 70) and\n(0, 2), respectively. Let an event with values (55, 0.6) be generated\nby Node N3(11). The 4 high-order bits of the bit-code for this\nevent are 1001. This is because temperature is in the top half of\nthe range [30, 70], pressure is in the bottom half of the range [0, 2],\nthen temperature is in the bottom half of the range [50, 70], and\npressure is in the top half of the range [0, 1]. Thus, the event should\nbe routed toward the geometric location specified by code 1001.\nIn DIM, an event is routed using Greedy Perimeter Stateless\nRouting (GPSR) [6] to the geographic zone with an address matching\nthe high order bits of the event bit-code. In our example, the\nsensor 10 will store this event since this is the sensor that matches\nFigure 3: KDDCS K-D tree\nthe high order bits of the bit-code 1001. If there is no sensor in this\nregion, then, the event is stored in a neighboring region.\nWe now highlight the differences between our proposed KDDCS\nscheme, and DIM. The first difference is how the splitting is accomplished\nduring the formation of the K-D tree. In KDDCS, the split\nline is chosen so that there are equal numbers of sensors on each\nside of the split line. Recall that, in DIM, the split line was the geometric\nbisector of the region. Thus, in KDDCS, the address of a\nsensor is a logical address and does not directly specify the location\nof the sensor. Also, note that the K-D tree in KDDCS will be balanced\n, while this will not be the case in DIM if the sensors are not\nuniformly distributed. This difference is illustrated by the K-D tree\nformed by KDDCS shown in Figure 3 for the same simple input\nshown in Figure 1. The second difference is that in determining the\nowner sensor for an event, the range split point b need not be the\nmidpoint of the range R\nj\n. The value of b is selected to balance the\nnumber of events in the ranges [a, b] and [b, c]. Thus, in KDDCS,\nthe storage of events will be roughly uniform over the sensors. The\nthird difference is that, since addresses are not geographic, KDDCS\nneeds a routing scheme that is more sophisticated than GPSR.\nTHE WEIGHTED SPLIT MEDIAN PROBLEM\nBefore presenting our KDDCS scheme, we first define the weighted\nsplit median problem in the context of sensor networks and present\nan efficient distributed algorithm to solve the problem. Each sensor\ns\ni\ninitially knows w\ni\nassociated values v\n1\n, . . . v\nw\ni\n. Let W =\nP\nn\ni=1\nw\ni\nbe the number of values. The goal for the sensors is to\ncome to agreement on a split value V with the property that approximately\nhalf of the values are larger than V and half of the values\nare smaller than V .\nWe present a distributed algorithm to solve this problem. The\ntime complexity of our algorithm is O(log n) times the diameter of\nthe communication network in general, and O(1) times the diameter\nif n is known a priori within a constant factor. Each node is\nrequired to send only O(log n) sensor ID's. The top level steps of\nthis algorithm are:\n1. 
Elect a leader sensor s , and form a breadth first search (BFS)\ntree T of the communication network that is rooted at s .\n2. The number of sensors n, and the aggregate number of values\nW is reported to s .\n3. The leader s collects a logarithmically-sized uniform random\nsample L of the values. The expected number of times\nthat a value from sensor s\ni\nis included in this sample is\n\n\"\nw\ni\nlog n\nW\n\"\n.\n4. The value of V is then the median of the reported values in\nL, which s reports to all of the sensors.\nWe need to explain how these steps are accomplished, and why the\nalgorithm is correct.\n319\nWe start with the first step. We assume that each sensor has a\nlower bound k on the number of sensors in R. If a sensor has no\nidea of the number of other sensors, it may take k = 2.\nThen, each sensor decides independently, with probability `\nln k\nk\n,\nto become a candidate for the leader. Each candidate sensor s\nc\ninitiates\nthe construction of a BFS tree of the communication graph\nrooted at s\nc\nby sending a message Construct(s\nc\n)\nto its neighbors.\nAssume a sensor s\ni\ngets a message Construct(s\nc\n) from sensor s\nj\n.\nIf this is the first Construct(s\nc\n) message that it has received, and\ns\nc\n's ID is larger than the ID of any previous candidates in prior\nConstruct messages, then:\ns\ni\nmakes s\nj\nits tentative parent in the BFS tree T , and\nforwards the Construct(s\nc\n) message to its neighbors.\nIf the number of candidates was positive, then, after time proportional\nto the diameter of the communication network, there will\nbe a BFS tree T rooted at the candidate with the largest ID. Each\nsensor may estimate an upper bound for the diameter of the communication\ngraph to be the diameter of R divided by the broadcast\nradius of a sensor. After this time, the sensors know that they have\nreached an agreement on T , or that there were no candidates. If\nthere were no candidates, each sensor can double its estimate of\nk, and repeat this process. After O(log n) rounds, it will be the\ncase that k = (n). Once k = (n), then, with high probability\n(that is, with probability 1\n1\npoly(n)\n), the number of candidates\nis (log n). Thus, the expected time complexity to accomplish\nthe first step is O(n log n). Assuming that each ID has O(log n)\nbits, the expected number of bits that each sensors has to send is\nO(log\n2\nn) since there are are likely only O(log n) candidates on\nthe first and only round in which there is a candidate. A log n factor\ncan be removed if each sensor initially knows an estimate of n\nthat is accurate to within a multiplicative constant factor.\nThe rest of the steps will be accomplished by waves of root-to-leaves\nand leaves-to-root messages in T . The second step is easily\naccomplished by a leave-to-root wave of messages reporting on the\nnumber of sensors and number of values in each subtree. Let T\ni\nbe\nthe subtree of T rooted at sensor s\ni\n, and W\ni\nthe aggregate number\nof values in T\ni\n. The value W\ni\nthat s\ni\nreports to its parents is w\ni\nplus the aggregate values reported to s\ni\nby its children in T . The\nsensor count that s\ni\nreports to its parents is one plus the sensor\ncounts reported to s\ni\nby its children in T .\nThe third step is also accomplished by a root-to-leaves wave and\nthen a leaves-to-root wave of messages. Assume a sensor s\ni\nwants\nto generate a uniform random sample of L\ni\nof the values stored\nin the sensors in T\ni\n. 
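A sketch of this sampling step appears below (helper and method names are hypothetical); the precise outcome probabilities it draws from are spelled out in the next paragraph.

import java.util.Random;

// Sketch: node s_i splits its sample budget L_i among itself and its children,
// proportionally to the number of values each of them holds in its subtree.
final class SampleSplitter {
    // counts[0] = samples s_i draws from its own w_i values;
    // counts[1..d] = budgets L_{i_j} forwarded to the d children.
    static int[] split(int budgetLi, int ownWeight, int[] childSubtreeWeights, Random rng) {
        int d = childSubtreeWeights.length;
        int total = ownWeight;                       // W_i = w_i + sum of the children's subtree weights
        for (int w : childSubtreeWeights) total += w;
        int[] counts = new int[d + 1];
        for (int trial = 0; trial < budgetLi; trial++) {
            int r = rng.nextInt(total);              // one (d+1)-outcome Bernoulli trial
            if (r < ownWeight) { counts[0]++; continue; }
            r -= ownWeight;
            for (int j = 0; j < d; j++) {
                if (r < childSubtreeWeights[j]) { counts[j + 1]++; break; }
                r -= childSubtreeWeights[j];
            }
        }
        return counts;
    }
}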
The third step is also accomplished by a root-to-leaves wave and then a leaves-to-root wave of messages. Assume a sensor s_i wants to generate a uniform random sample L_i of the values stored in the sensors in T_i. The value of L for the leader is Theta(log n). Let s_{i1}, ..., s_{id} be the children of s_i in T. Node s_i generates the results of L_i Bernoulli trials, where each trial has d + 1 outcomes corresponding to s_i and its d children. The probability that the outcome of a trial is s_i is w_i / W_i, and the probability that the outcome is the child s_{ij} is W_{ij} / W_i, where W_{ij} is the aggregate number of values in the subtree rooted at s_{ij}. Then, s_i informs each child s_{ij} how often it was selected, which becomes the value of L_{ij}. s_i then waits until it receives samples back from all of its children. s_i then unions these samples, plus a sample of values of the desired size from itself, and passes that sample back to its parent. Thus, each sensor has to send O(log n) IDs.
The leader s* then sets V to be the median of the values of the sample L, and then, in a root-to-leaves message wave, informs the other sensors of the value of V.
We now argue that, with high probability, the computed median of the values is close to the true median. Consider a value V' such that only a fraction delta < 1/2 of the values are less than V'. One can think of each sampled value as being a Bernoulli trial with outcomes less and more, depending on whether the sampled value is less than V'. The number of less outcomes is binomially distributed with mean delta * L. In order for the computed median to be less than V', the number of less outcomes must be at least L/2, or equivalently (1/2 - delta)L more than the mean delta * L. But the probability that a binomially distributed variable with mean mu exceeds its mean by a factor of 1 + epsilon is at most e^(-epsilon^2 mu / 3). Thus, by picking the multiplicative constant in the sample size to be sufficiently large (as a function of delta), one can guarantee that, with high probability, the number of values less than the computed median V cannot be much more than L/2. A similar argument shows that the number of values more than the computed median V cannot be much more than L/2.
Figure 4: Logical address assignment algorithm
If the leader finds that n is small in step 2, it may simply ask all sensors to report on their identities and locations, and then compute V directly.
Now that we have solved the weighted split median problem, we present the components of the KDDCS scheme in the next section.
KDDCS
We now present our KDDCS scheme in detail. We explain how the initial K-D tree is constructed, how events are mapped to sensors, and how events are routed to their owner sensors.
4.1 Distributed Logical Address Assignment Algorithm
The main idea of the algorithm is that the split lines used to construct the K-D tree are selected so that each of the two resulting regions contains an equal number of sensors. The split line can be determined using our weighted split median algorithm, with each sensor having unit weight, and the value for each sensor being either its x coordinate or its y coordinate. The recursive steps of the algorithm are shown in Figure 4. We now describe in greater detail how a recursive step works.
The algorithm starts by partitioning the complete region R horizontally. Thus, the distributed weighted split median algorithm (presented in Section 3) is applied to R using the y-coordinates of the sensors as the values sent to the BFS root. Upon determining the weighted split median of R, sensors having a lower y-coordinate than the median value (we refer to these sensors as those falling in the lower region of R) set their logical address to 0.
On the other hand,\nthose sensor falling on the upper region of R assign themselves a 1\nlogical address. At the end of the first recursive step, the terrain can\nbe looked at as split into two equally logically loaded partitions (in\nterms of the number of sensors falling in each partition).\nAt the next step, the weighted split median algorithm is applied\nlocally in each of the sub-regions (lower/upper), while using the\nsensors' x-coordinates, thus, partitioning the sub-regions vertically\nrather than horizontally. Similarly, sensors' logical addresses are\nupdated by left-shifting them with a 0 bit for those sensors falling\n320\nin the lower regions (in other words, sensor nodes falling on the\nleft of the weighted median line), or with a 1 bit for sensor nodes\nfalling in the upper regions (i.e., sensor nodes falling on the right\nof the weighted median line).\nThe algorithm continues to be applied distributively by the different\nsubtrees until each sensor obtains a unique logical address,\nusing x and y coordinates of sensors, in a round robin fashion, as\nthe criterion of the split. The algorithm is applied in parallel on\nthe different subtrees whose root nodes fall at the same tree level.\nAt the i\nth\nrecursive step, the algorithm is applied at all intermediate\nnodes falling at level i- 1 of the tree. Based on the definition of the\nweighted split median problem, the algorithm results in forming a\nbalanced binary tree, such that sensors represent leaf nodes of this\ntree (intermediate nodes of the tree are logical nodes, not physical\nsensors). The algorithm terminates in log n recursive steps. At the\nend of the algorithm, the size of the logical address given to each\nsensor will be log n bits.\nRecall that the time complexity of our weight split median algorithm\nis O(d log n), where d is the diameter of the region. Thus,\nas the depth of our K-D tree is O(log n), we get that the time complexity\nfor building the tree is O(d log\n2\nn). If the sensors are uniformly\ndistributed, then, as the construction algorithm recurses, the\ndiameters of the regions will be geometrically decreasing. Thus,\nin the case of uniformly distributed sensors, one would expect the\ntree construction to take time O(d log n). As our weighted split\nmedian algorithm requires each sensor to send O(log n) ID's, and\nour K-D tree has depth O(log n), we can conclude that during the\nconstruction of our K-D tree, the number of ID's sent by any node\nis O(log\n2\nn).\n4.2\nEvent to Bit-code Mapping\nIn this section, we explain how the event to bit-code mapping\nfunction is determined. Recall that the main idea is to set the split\npoints of the ranges so that the storage of events is roughly uniform\namong sensor nodes. To construct this mapping requires a probability\ndistribution on the events. In some situations, this distribution\nmight be known. For example, if the network has been operational\nfor some period of time, a sampling of prior events might be used\nto estimate a distribution. In cases where it is not known, say when\na network is first deployed, we can temporarily assume a uniform\ndistribution.\nIn both cases, we use the balanced binary tree as the base tree\nto overlay the attribute-specific K-D tree on (Recall that a K-D tree\nis formed by k attributes). This is basically done by assigning a\nrange for each of the k attributes to every intermediate node in the\ntree. Note that the non-leaf nodes in the K-D tree are logical nodes\nthat do not correspond to any particular sensor. 
One may think of\nnon-leaf nodes as regions. Any split point p of a node x of tree\nlevel l, where l%k = i, represents a value of attribute i. Such split\npoint partitions the range of attribute i falling under responsibility\nof node x into two subranges such that the the subrange lower than\np is assigned to the left child of x, while the other range is assigned\nto x's right child. Note that the other k - 1 ranges of node x,\ncorresponding to the remaining k-1 attributes, are simply inherited\nby both children of x.\nKnowing the data distribution, the split points of the tree should\nbe predefined in a way to cope with any expected irregularity in\nthe load distribution among the K-D tree leaf nodes. For example,\ngiven an initial temperature range (30, 70) and knowing that 50%\nof the events will fall in the temperature range (65, 70), the root\nsplit point should be defined as 65 (assuming that the temperature\nis the first attribute in the event). Therefore, based on the selected\nroot split point, the left child subtree of the root will be responsible\nof storing events falling in the temperature range (30, 65),\nwhile the right child subtree will store events falling in the range\n(65, 70). Figure 3 gives an example of non-uniform initialization\nof split points.\nWe finish by describing what information is stored in each sensor\nnode. Each sensor node corresponds to a leaf in the K-D tree. Each\nsensor knows its logical address in the tree. Further, each leaf in\nthe K-D tree knows all the pertinent information about each of its\nancestors in the tree. The pertinent information about each node is:\nThe geographic region covered.\nThe split line separating its two children.\nThe attribute range, and attribute split point, associated with\nthis region.\nFrom this information, each leaf/sensor can determine the range of\nevents that will be stored at this sensor. Note that each sensor only\nstores O(log n) information about the K-D tree.\n4.3\nIncremental Event Hashing and Routing\nStrictly speaking, the events-to-sensors mapping in DIM actually\nproduces a geographic location. GPSR routing can then be used to\nroute that event towards that geographic location. If the destination\nis contained in a leaf region with one sensor, then that sensor stores\nthe event. If the leaf region is an orphan, then one of the sensors in\nthe neighboring regions will store this event.\nIn our scheme, the events-to-sensors mapping provides a logical\naddress. Essentially, all that the sensor generating the event can\ndetermine from this logical address is a general direction of the\nowner sensor. Thus, our routing protocol, which we call Logical\nStateless Routing (LSR), is in some sense less direct.\nLSR operates in O(log n) rounds. We explain how a round\nworks. Assume that a source sensor with a logical address s wants\nto route an event e to a sensor with logical address t. However,\ns does not know the identity of the sensor t. Recall that s knows\nthe pertinent information about its ancestors in the K-D tree. In\nparticular, s knows the range split values of its ancestors. Thus, s\ncan compute the least common ancestor (LCA) of s and t in the\nK-D tree. Assume that the first bit of disagreement between s and\nt is the\nth\nbit. So, the least common ancestor (LCA) of s and t\nin the K-D tree has depth . Let R be the region corresponding to\nthe LCA of s and t, L the split line corresponding to this region,\nand R\n0\nand R\n1\nthe two subregions of R formed by L. 
Without\nloss of generality, assume that s R\n0\nand t R\n1\n. From its own\naddress, and the address of t, the sensor s can conclude that t is in\nthe region R\n1\n. Recall that s knows the location of the split line L.\nThe sensor s computes a location x in the region R\n1\n. For concrete-ness\nhere, let us assume that x is some point in R\n1\nthat lies on the\nline intersecting s and perpendicular to L (Although there might be\nsome advantages to selecting x to be the geometric center of the region\nR\n1\n). LSR then directs a message toward the location x using\nGPSR. The message contains an additional field noting that this is\na\nth\nround message. The\nth\nround terminates when this message\nfirst reaches a sensor s whose address agrees with the address of t\nin the first + 1 bits. The sensor s will be the first sensor reached\nin R\n1\n. Round + 1 then starts with s being the new source sensor.\nWe explain how range queries are routed by means of an example\n. This example also essentially illustrates how events are stored.\nFigure 5 gives an example of a multi-dimensional range query and\nshows how to route it to its final destination. In this example, a\nmulti-dimensional range query arises at node N 7(111) asking for\nthe number of events falling in the temperature range (30, 32) and\npressure range (0.4, 1) that were generated throughout the last 2\nminutes. Node N 7 knows that the range split point for the root\n321\nFigure 5: Example of routing a query on KDDCS\nwas temperature 40, and thus, this query needs to be routed toward\nthe left subtree of the root, or geometrically toward the top\nof the region, using GPSR. The first node in this region that this\nevent reaches is say N 3. Node N 3 knows that the first relevant\nsplit point is pressure = 0.5. Thus, the query is partitioned into two\nsub-queries, ((30, 32), (0.4, 0.5)) and ((30, 32), (0.5, 1)). When\nprocessing the first subquery, node N 3 forwards it to the left using\nGPSR. N 3 can then tell that the second query should be routed to\nthe other side of its parent in the K-D tree since the range split for\nits parent is temperature 34. The logical routing of this query is\nshown on the right in Figure 5, and a possible physical routing of\nthis query is shown on the left in Figure 5.\nAs LSR does not initially know the geometric location of the\nowner sensor, the route to the owner sensor cannot possibly be as\ndirect as it is in DIM. But, we argue that the length of the route in\nLSR should be at most twice the length of the route in DIM. Assume\nfor the moment that all messages are routed by GPSR along\nthe direct geometric line between the source sensor and the destination\nlocation. Let us assume, without loss of generality, that LSR is\nrouting horizontally in the odd rounds. Then, the routes used in the\nodd rounds do not cross any vertical line more than once. Hence,\nthe sum of the route distances used by LSR in the odd rounds is\nat most the diameter of the region. Similarly, the sum of the route\ndistances used by LSR in the even rounds is at most the diameter of\nthe region. Thus, the sum of the route distances for LSR, over all\nrounds, is at most twice the diameter. The geometric distance between\nthe source-destination pair in DIM is obviously at most the\ndiameter. So we can conclude that the length of the route found by\nLSR is at most twice the length of the route found by DIM, assuming\nthat GPSR is perfect. 
In fact, the only assumption that we need about GPSR to reach this conclusion is that the length of the path found by GPSR is roughly a constant multiple of the geometric distance between the source and destination. Even this factor of two can probably be improved significantly in expectation if the locations of the sensors are roughly uniform. A simple heuristic would be to make the location of the target x equal to the location of the destination sensor t if the sensors in R_1 were uniformly distributed. The location of x can easily be calculated by the source sensor s given information local to s.
KDTR: K-D TREE RE-BALANCING ALGORITHM
Based on the KDDCS components presented so far, KDDCS avoids the formation of storage hot-spots resulting from skewed sensor deployments, and from skewed event distributions if the distribution of events was known a priori. However, storage hot-spots may still be formed if the initially presumed event distribution was not correct, or if the event distribution evolves over time. We present a K-D tree re-balancing algorithm, KDTR, to re-balance the load.
In the next subsections, we first explain how to determine the roots of the subtrees that will re-balance, and then show how a re-balancing operation on a subtree works. We assume that this re-balancing is performed periodically with a fixed period.
5.1 Selection of Subtrees to be Re-Balanced
The main idea is to find the highest unbalanced node in the K-D tree. A node is unbalanced if the ratio of the number of events in one of the child subtrees over the number of events stored in the other child subtree exceeds some threshold h. This process of identifying nodes to re-balance proceeds in O(log n) rounds from the leaves to the root of the K-D tree.
We now describe how round i >= 1 works. Intuitively, round i occurs in parallel on all subtrees rooted at nodes of height i + 1 in the K-D tree. Let x be a node of height i + 1. Let the region associated with x be R, the split line be L, and the two subregions of R be R_0 and R_1. At the start of this round, each sensor in R_0 and R_1 knows the number of stored events C_0 and C_1 in R_0 and R_1, respectively. The count C_0 is then flooded to the sensors in R_1, and the count C_1 is flooded to the sensors in R_0. After this flooding, each sensor in R knows the number of events stored in R, and also knows whether the ratio max(C_0/C_1, C_1/C_0) exceeds h.
The time complexity per round is linear in the diameter of a region considered in that round. Thus, the total time complexity is O(D log n), where D is the diameter of the network, as there are O(log n) rounds. The number of messages sent per node i in a round is O(d_i), where d_i is the degree of node i in the communication network. Thus, the total number of messages sent by a node i is O(d_i log n).
Re-balancing is then performed in parallel on all unbalanced nodes that have no unbalanced ancestors. Note that every leaf knows if an ancestor will re-balance, and if so, the identity of the unique ancestor that will re-balance. All the leaves of a node that will re-balance will be aware of this at the same time.
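As a small illustration of the imbalance test applied in each round (a sketch only; the counts C_0 and C_1 are obtained by the flooding just described, and h is the re-balancing threshold, which is set to 3 in our experiments):

def is_unbalanced(c0, c1, h=3):
    # A node is unbalanced when one child subtree stores more than h times
    # as many events as the other; guard against empty subtrees.
    if min(c0, c1) == 0:
        return max(c0, c1) > 0
    return max(c0 / c1, c1 / c0) > h

# Example: with h = 3, counts (40, 10) trigger re-balancing, while (20, 10) do not.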
5.2 Tree Re-balancing Algorithm
Let x be an internal node of the K-D tree that needs to be re-balanced. Let the region associated with x be R. Let the attribute associated with node x be the j-th attribute. So, we need to find a new attribute split L for the j-th attribute for node x. To accomplish this, we apply the weighted split median procedure, where the weight w_i associated with sensor i is the number of events stored at sensor i, and the values are the j-th attributes of the w_i events stored at that sensor. Thus, the computed attribute split L has the property that, in expectation, half of the events stored in R have their j-th attribute larger than L, and half of the events stored in R have their j-th attribute smaller than L.
Let R_0 and R_1 be the two subregions of R. Eventually, we want to recursively apply this process in parallel to the regions R_0 and R_1. But before recursing, we need to route some events from one of R_0 or R_1 to the other. The direction of the routing depends on whether the attribute split value became larger or smaller. Let us assume, without loss of generality, that events need to be routed from R_0 to R_1. Consider an event e stored at a sensor s in R_0 that needs to be routed to R_1. The sensor s picks a destination logical address t, uniformly at random, from the possible addresses in the region R_1. The event e is then routed to t using the routing scheme described in Section 4.3. The final owner for e in R_1 cannot be determined until our process is recursively applied to R_1, but this process cannot be recursively applied until the events that should be stored in R_1 are contained in R_1. The fact that the destination addresses in R_1 were picked uniformly at random ensures load balance.
This process can now be recursively applied to R_0 and R_1.
Figure 6: KDDCS original K-D tree
We now discuss the complexity of this procedure. We break the complexity into two parts: the cost of performing the weighted split median operation, and the cost of migrating the events. One application of the weighted split median has time complexity O(D log n), where D is the diameter of the region, and requires O(log^2 n) messages sent per node. Thus, we get time complexity O(D log^2 n) and O(log^3 n) messages sent per node for all of the applications of the weighted split median. Every periodic re-balance requires each event to travel at most twice the diameter of the network (assuming that GPSR routes on a direct line). The total number of events that can be forced to migrate as a result of k new events being stored is O(k log k). Thus, the amortized number of migrations per event is logarithmic, O(log k), in the number of insertions. This amount of re-balancing per insertion is required for any standard dynamic data structure (e.g., 2-3 trees, AVL trees, etc.).
Figures 6 and 7 show a detailed example illustrating how KDTR works. Continuing the example we presented in Section 4.2, we monitor how KDTR maintains the balance of the K-D tree in the course of successive insertions. Starting with an equal number of 3 events stored at each sensor, a storage hot-spot arises in node N7 after 6 event insertions. By checking the ratio of N7's storage to that of its sibling, KDTR identifies the subtree rooted at node 11 as an unbalanced subtree. As none of node 11's ancestors is unbalanced at this point, KDTR selects 11 to be re-balanced. However, the storage load remains skewed toward subtree 1, thus, after another 6 insertions, KDTR re-balances the subtree rooted at 1.
After\n12\nmore insertions aiming the right subtree of the root, KDTR re-balances\nthe root of the tree, basically changing the attribute-based\nsplit points of almost all internal nodes, in order to maintain the balance\nof the tree. Note that, as long as the average loads of sensors\nwhich are falling outside the hot-spot area increases, the frequency\nof re-balancing decreases.\nWe digress slightly to explain a method that one might use to\ntrigger re-balancing, as opposed to fixed time period re-balancing.\nEach sensor s\ni\nknows the number of events that are stored in each\nregion corresponding to an ancestor of s\ni\nin the K-D tree when this\nregion was re-balanced. Let C\nj\nbe the number of events at the last\nre-balancing of the region R\nj\ncorresponding to node of depth j on\nthe path from the root to s\ni\nin the K-D tree. Assume that the region\nR\nj\nhas elected a leader s\nj\n. Then, the number of events that have\nto be stored in R\nj\n, since the last re-balancing, to cause another re-balancing\nin R\nj\nis something like hC\nj\n, where h is the unbalancing\nratio that we are willing to tolerate. Then, each insertion to s\ni\nis\nreported by s\ni\nto s\nj\nwith probability something like\n\"\nlog n\nhC\nj\n\"\n.\nThus, after seeing (log n) such notifications, the leader s\nj\ncan be\nconfident that there have been very close to hC\nj\ninsertions into the\nregion R\nj\n, and a re-balancing might be warranted. Note that the\nrole of leader requires only receiving O(log n) messages.\nFigure 7: KDTR example\nEXPERIMENTAL RESULTS\nIn order to evaluate our KDDCS scheme, we compared its performance\nwith that of the DIM scheme, that has been shown to be\nthe best among current INDCS schemes [9].\nIn our simulation, we assumed having sensors of limited buffer\nand constrained energy. We simulated networks of sizes ranging\nfrom 50 to 500 sensors, each having an initial energy of 50 units,\na radio range of 40m, and a storage capacity of 10 units. For simplicity\n, we assumed that the size of a message is equal to the size\nof a storage unit. We also assumed that the size of a storage unit\nis equal to the size of an event. When sent from a sensor to its\nneighbor, a message consumes 1 energy unit from the sender energy\nand 0.5 energy unit from the receiver energy. The service area\nwas computed such that each node has on average 20 nodes within\nits nominal radio range.\nAs each sensor has a limited storage capacity, it is assumed to\nfollow a FIFO storage approach to handle its cache. Thus, a sensor\nreplaces the oldest event in its memory by the newly incoming\nevent to be stored in case it is already full when receiving this new\nevent.\nWe modeled a network of temperature sensors. The range of possible\nreading values was [30, 70]. We modeled storage hot-spots by\nusing a random uniform distribution to represent sensors' locations,\nwhile using a skewed distribution of events among the attributes\nranges. Note that the regular sensor deployment assumption does\nnot affect our ability to assess the effectiveness of KDDCS as the\nstorage hot-spot can result from either skewed sensor deployments,\nor skewed data distributions, or both. 
The storage hot-spot size is characterized by the skewness dimensions, which are the percentage of the storage hot-spot events to the total number of events generated by the sensor network and the percentage of the readings' range in which the hot-spot events fall to the total possible range of temperature readings.
Figure 8: Number of dropped events for networks with a 80%-10% hot-spot (Dropped Events vs. Network Size; DIM vs. KDDCS/KDTR)
Figure 9: Number of dropped events for networks with a 80%-5% hot-spot (Dropped Events vs. Network Size; DIM vs. KDDCS/KDTR)
We assumed that a single storage hot-spot is imposed on the sensor network. To follow the behavior of KDDCS toward storage hot-spots of various sizes, we simulated, for each network size, a series of hot-spots where a percentage of 10% to 80% of the events fell into a percentage of 5% to 10% of the readings' range. Note that we always use the term x%-y% hot-spot to describe a storage hot-spot where x% of the total generated events fall into y% of the readings' range.
We used a uniform split-point initialization to set up the attribute range responsibilities of all internal nodes of the K-D tree. For the re-balancing threshold, we used a value of 3 to determine that a specific subtree is unbalanced. Node failures were handled in the same way as in DIM. When a node fails, its stored events are considered lost. Further events directed to the range responsibility of such a node are directed to one of its close neighbors.
We ran the simulation for each network size and storage hot-spot size pair. Each simulation run consisted of two phases: the insertion phase and the query phase. During the insertion phase, each sensor generates (i.e., reads) 5 events, according to the predefined hot-spot size and distribution, and forwards each of these events to its owner sensor. In the query phase, each sensor generates queries of sizes ranging from 10% to 90% of the [30, 70] range. The query phase is meant to measure the damage, in terms of QoD and energy losses, caused by the storage hot-spot.
The results of the simulations are shown in Figures 8 to 17. In these figures, we compare the performance of our KDDCS scheme versus that of the DIM scheme with respect to various performance measures. Note that we only show some of our findings due to space constraints.
R1. Data Persistence: Figures 8 and 9 present the total number of events dropped by all network nodes in networks with 80%-10% and 80%-5% hot-spots, respectively.
Figure 10: Query size of a 50% query for networks with a 80%-10% hot-spot (Events Returned for a 0.5 * attribute range query vs. Network Size; DIM vs. KDDCS/KDTR)
Figure 11: Query size of a 80% query for networks with a 80%-5% hot-spot (Events Returned for a 0.8 * attribute range query vs. Network Size; DIM vs. KDDCS/KDTR)
By analyzing the difference between KDDCS and DIM, we can see that the number of dropped events in the former is around 40% to 60% of that in the latter. This can be explained by the fact that KDDCS achieves a better load balancing of storage among the sensors.
This leads to decreasing the number of sensors reaching their maximum storage, and decreasing the total number of such nodes compared to that in the pure DIM. This directly results in decreasing the total number of dropped events and achieving better data persistence.
Another important remark based on the two figures is that decreasing the size of the hot-spot, by making the same number of events fall into a smaller attributes' range, does not highly affect the overall performance of KDDCS compared to that of DIM.
R2. Quality of Data: Figures 10 and 11 show the average query sizes of 50% and 80% of the attribute ranges for networks with 80%-10% and 80%-5% hot-spots, respectively. It is clear that KDDCS remarkably improves the QoD provided by the sensor network. This is mainly due to dropping less information (as pointed out in R1), thus increasing the number of events returned by each query. The gap between DIM and KDDCS, in terms of resulting query sizes, is very large in both graphs, which indicates that KDDCS outperforms DIM for different storage hot-spot sizes.
This result has a very important implication for the data accuracy of the sensor readings output from a network experiencing a hot-spot. The success of KDDCS in avoiding hot-spots improves the network's ability to keep a higher portion of the hot-spot data. This improves the correctness of any aggregate functions over the network readings, for example, an average of the temperature or pressure values where a high percentage of the data falls within a small range of the total attributes' range. We consider this to be a good achievement compared to the pure DIM scheme.
R3. Load Balancing: Figures 12 and 13 show the average node storage level for networks with 70%-10% and 60%-5% hot-spots, respectively.
Figure 12: Average node storage level for networks with a 70%-10% hot-spot (numbers rounded to the ceiling integer; Average Storage Level vs. Network Size; DIM vs. KDDCS/KDTR)
Figure 13: Average node storage level for networks with a 60%-5% hot-spot (numbers rounded to the ceiling integer; Average Storage Level vs. Network Size; DIM vs. KDDCS/KDTR)
By a node storage level, we mean the number of events stored in the node's cache. The figures show that KDDCS has a higher average storage level than DIM, especially for less skewed hot-spots. This can be interpreted as follows. When a storage hot-spot arises in DIM, the hot-spot load is directed to a small number of sensors. These nodes rapidly reach their storage maximum, while almost all other sensor nodes are nearly empty. Therefore, the load distribution is highly skewed among nodes, leading to a low average storage level. However, in KDDCS, the number of nodes effectively storing events increases. Consequently, the average storage level increases. This gives a truthful picture of the better storage balancing in the network. It is worth mentioning that each of the values in both figures is rounded to the ceiling integer. Thus, in both cases, the average in DIM does not exceed one event per sensor for all network sizes.
R4. Energy Consumption Balancing: Figures 14 and 15 show the average node energy level at the end of the simulation for networks with 70%-10% and 50%-5% hot-spots, respectively.
Figure 14: Average sensors' energy levels for networks with a 70%-10% hot-spot (Average Energy Level vs. Network Size; DIM vs. KDDCS/KDTR)
Figure 15: Average sensors' energy levels for networks with a 50%-5% hot-spot (Average Energy Level vs. Network Size; DIM vs. KDDCS/KDTR)
The figures show that this average generally decreases with the increase of the network size for both schemes. The interesting result these figures show is that both KDDCS and DIM result in fairly close average energy consumption among the sensors. However, as we mentioned in R3 and based on the way DIM works, most of the energy consumed in DIM is effectively consumed by a small number of nodes, namely those falling in the hot-spot region. On the other hand, the number of nodes consuming energy increases in KDDCS due to the better load balancing KDDCS achieves, while the average energy consumed by each sensor node decreases. Thus, although the overall energy consumption is the same in both KDDCS and DIM, this is a positive result in terms of increasing the overall network lifetime, as well as avoiding the early death of sensor nodes, which helps avoid network partitioning.
R5. Events Movements: Figures 16 and 17 show the number of migrated events for networks with x%-10% and x%-5% hot-spots, respectively, where x varies from 40 to 80.
Figure 16: Number of event movements for networks with an x%-10% hot-spot (Moved Events vs. Network Size; x = 40, 60, 80)
Figure 17: Number of event movements for networks with an x%-5% hot-spot (Moved Events vs. Network Size; x = 40, 60, 80)
For both sets of hot-spot sizes, the number of event movements increases linearly with the network size. The important result to be noted in both figures is that the total number of movements is not highly dependent on the hot-spot size. This is mainly because KDDCS avoids storage hot-spots in their early stages instead of waiting for a large storage hot-spot to be formed and then trying to decompose it. Therefore, most of the event movements are done at the start of the formation of the storage hot-spot. As a result, for highly skewed data distributions (i.e., large hot-spot sizes), the number of event movements does not change much with the exact storage hot-spot size.
RELATED WORK
Many approaches have been presented in the literature defining how to store the data generated by a sensor network. In the early age of sensor network research, the main storage trend consisted of sending all the data to be stored in base stations, which lie within, or outside, the network. However, this approach may be more appropriate for answering continuous queries, which are queries running on servers and mostly processing events generated by all the sensor nodes over a large period of time [4, 10, 18, 14, 12, 11].
In order to improve the lifetime of the sensor network, as well as the QoD of ad-hoc queries, INS techniques have been proposed. All INS schemes presented so far were based on the idea of DCS [15]. These INDCS schemes differ from each other based on the events-to-sensors mapping used.
The mapping was done using hash\ntables in DHT [15] and GHT [13], or using K-D trees in DIM [9].\nThe formation of storage hot-spots due to irregularity, in terms of\nsensor deployment or events distribution, represent a vital issue in\ncurrent INDCS techniques [5]. Some possible solutions for irregular\nsensors deployments were highlighted by [5], such as routing\nbased on virtual coordinates, or using heuristics to locally adapt to\nirregular sensor densities. Recently, some load balancing heuristics\nfor the irregular events distribution problem were presented\nby [2, 8]. Such techniques were limited in their capability to deal\nwith storage hot-spots of large sizes as they were basically acting\nlike storage hot-spots detection and decomposition schemes, rather\nthan storage hot-spots avoidance schemes like KDDCS. To the best\nof our knowledge, no techniques have been provided to cope with\nboth types of irregularities at the same time. A complentary work\nto our paper is that on exploting similarities in processing queries\nissued by neighboring sensors in a DCS scheme [16].\nQuery Hot-Spots is another important problem that is orthogonal\nto the storage hot-spots problem. The problem arizes when a large\npercentage of queries ask for data stored in few sensors. We identified\nthe problem in an earlier paper [1] and presented two algorithms\n, Zone Partitioning (ZP) and Zone Partial Replication (ZPR),\nto locally detect and decompose query hot-spots in DIM. We believe\nthat KDDCS is able to cope with query hot-spots provided\nminor changes are added to the scheme. We aim at addressing this\nproblem in the KDDCS testbed that we plan to develop.\nRecently, Krishnamurthy et al. [7] presented a novel DCS scheme,\ncalled RESTORE, that is characterized by real time event correlation\n. It would be interesting to compare the performance of both\nKDDCS and RESTORE in terms of load balacing.\nCONCLUSIONS\nSensor databases are becoming embedded in every aspect of our\nlife from merchandise tracking, healthcare, to disaster responds. In\nthe particular application of disaster management, it has been ar-gued\nthat it is more energy efficient to store the sensed data locally\nin the sensor nodes rather than shipping it out of the network, even\nif out-of-network storage is available.\nThe formation of Storage Hot-Spots is a major problem with the\ncurrent INDCS techniques in sensor networks. In this paper, we\npresented a new load-balanced INDCS scheme, namely KDDCS,\nthat avoids the formation of storage hot-spots arising in the sensor\nnetwork due to irregular sensor deployment and/or irregular events\ndistribution. Further, we proposed a new routing algorithm called\nLogical Stateless Routing, for routing events from the generating\nsensors to the storage sensors, that is competitive with the popular\nGPSR routing. Our experimental evaluation has confirmed that our\nproposed KDDCS both increases the quality of data and the energy\nsavings by distributing events of the storage hot-spots among\na larger number of sensors.\nAcknowledgments\nWe would like to thank Mohamed Sharaf for his useful feedback.\nWe would also like to thank the anonymous referees for their helpful\ncomments.\nREFERENCES\n[1] M. Aly, P. K. Chrysanthis, and K. Pruhs. Decomposing data-centric\nstorage query hot-spots in sensor networks. In Proc. of\nMOBIQUITOUS, 2006.\n[2] M. Aly, N. Morsillo, P. K. Chrysanthis, and K. Pruhs. Zone Sharing:\nA hot-spots decomposition scheme for data-centric storage in sensor\nnetworks. In Proc. 
of DMSN, 2005.\n[3] J. L. Bentley. Multidimensional binary search trees used for\nassociative searching. In CACM, 18(9), 1975.\n[4] P. Bonnet, J. Gehrke, and P. Seshadri. Towards sensor database\nsystems. In Proc. of MDM, 2001.\n[5] D. Ganesan, S. Ratnasamy, H. Wang, and D. Estrin. Coping with\nirregular spatio-temporal sampling in sensor networks. In Proc. of\nHotNets-II, 2003.\n[6] B. Karp and H. T. Kung. GPSR: Greedy perimeter stateless routing\nfor wireless sensor networks. In Proc. of ACM Mobicom, 2000.\n[7] S. Krishnamurthy, T. He, G. Zhou, J. A. Stankovic, and S. H. Son.\nRestore: A real-time event correlation and storage service for sensor\nnetworks. In Proc. of INSS, 2006.\n[8] X. Li, F. Bian, R. Govidan, and W. Hong. Rebalancing distributed\ndata storage in sensor networks. Technical Report No. 05-852, CSD,\nUSC, 2005.\n[9] X. Li, Y. J. Kim, R. Govidan, and W. Hong. Multi-dimensional range\nqueries in sensor networks. In Proc. of ACM SenSys, 2003.\n[10] S. Madden, M. Franklin, J. Hellerstein, and W. Hong. TAG: a tiny\naggregation service for ad-hoc sensor networks. In Proc. of OSDI,\n2002.\n[11] S.-J. Park, R. Vedantham, R. Sivakumar, and I. F. Akyildiz. A\nscalable approach for reliable downstream data delivery in wireless\nsensor networks. In Proc. of MobiHoc, 2004.\n[12] T. Pham, E. J. Kim, and W. M. Moh. On data aggregation quality and\nenergy efficiency of wireless sensor network protocols. In Proc. of\nBROADNETS, 2004.\n[13] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govidan, and\nS. Shenker. GHT: A grographic hash table for data-centric storage. In\nProc. of WSNA, 2002.\n[14] M. A. Sharaf, J. Beaver, A. Labrinidis, and P. K. Chrysanthis. TiNA:\nA scheme for temporal coherency-aware in-network aggregation. In\nProc. of MobiDE, 2003.\n[15] S. Shenker, S. Ratnasamy, B. Karp, R. Govidan, and D. Estrin.\nData-centric storage in sensornets. In Proc. of HotNets-I, 2002.\n[16] P. Xia, P. K. Chrysanthis, and A. Labrinidis. Similarity-aware query\nprocessing in sensor networks. In Proc. of WPDRTS, 2006.\n[17] T. Yan, T. He, and J. A. Stankovic. Differentiated surveillance for\nsensor networks. In Proc. of SenSys, 2003.\n[18] Y. Yao and J. Gehrke. Query processing for sensor networks. In Proc.\nof CIDR, 2003.\n326\n", "keywords": "quality of data (QoD);KDDCS;routing algorithm;Power-Aware;energy saving;Sensor Network;sensor network;Distributed Algorithms;weighted split median problem;DIM;data persistence;storage hot-spots;ad-hoc queries"} {"name": "121", "title": "Language-specific Models in Multilingual Topic Tracking", "abstract": "Topic tracking is complicated when the stories in the stream occur in multiple languages. Typically, researchers have trained only English topic models because the training stories have been provided in English. In tracking, non-English test stories are then machine translated into English to compare them with the topic models. We propose a native language hypothesis stating that comparisons would be more effective in the original language of the story. We first test and support the hypothesis for story link detection. For topic tracking the hypothesis implies that it should be preferable to build separate language-specific topic models for each language in the stream. We compare different methods of incrementally building such native language topic models.", "fulltext": "INTRODUCTION\nTopic detection and tracking (TDT) is a research area concerned\nwith organizing a multilingual stream of news broadcasts as it arrives\nover time. 
TDT investigations sponsored by the U.S. government include five different tasks: story link detection, clustering (topic detection), topic tracking, new event (first story) detection, and story segmentation. The present research focuses on topic tracking, which is similar to filtering in information retrieval. Topics are defined by a small number of (training) stories, typically one to four, and the task is to find all the stories on those topics in the incoming stream.
TDT evaluations have included stories in multiple languages since 1999. TDT2 contained stories in English and Mandarin. TDT3 and TDT4 included English, Mandarin, and Arabic. Machine translations into English for all non-English stories were provided, allowing participants to ignore issues of story translation.
All TDT tasks have at their core a comparison of two text models. In story link detection, the simplest case, the comparison is between pairs of stories, to decide whether given pairs of stories are on the same topic or not. In topic tracking, the comparison is between a story and a topic, which is often represented as a centroid of story vectors, or as a language model covering several stories.
Our focus in this research was to explore the best ways to compare stories and topics when stories are in multiple languages. We began with the hypothesis that if two stories originated in the same language, it would be best to compare them in that language, rather than translating them both into another language for comparison. This simple assertion, which we call the native language hypothesis, is easily tested in the TDT story link detection task.
The picture gets more complex in a task like topic tracking, which begins with a small number of training stories (in English) to define each topic. New stories from a stream must be placed into these topics. The streamed stories originate in different languages, but are also available in English translation. The translations have been performed automatically by machine translation algorithms, and are inferior to manual translations. At the beginning of the stream, native language comparisons cannot be performed because there are no native language topic models (other than English). However, later in the stream, once non-English documents have been seen, one can base subsequent tracking on native-language comparisons, by adaptively training models for additional languages. There are many ways this adaptation could be performed, and we suspect that it is crucial for the first few non-English stories to be placed into topics correctly, to avoid building non-English models from off-topic stories.
Previous research in multilingual TDT has not attempted to compare the building of multiple language-specific models with single-language topic models, or to obtain native-language models through adaptation. The focus of most multilingual work in TDT, for example [2][12][13], has been to compare the efficacy of machine translation of test stories into a base language with other means of translation. Although these researchers normalize scores for the source language, all story comparisons are done within the base language. This is also true in multilingual filtering, which is a similar task [14].
The present research is an exploration of the native language hypothesis for multilingual topic tracking. We first present results on story link detection, to support the native language hypothesis in a simple, understandable task. Then we present experiments that test the hypothesis in the topic tracking task. Finally we consider several different ways to adapt topic models to allow native language comparisons downstream.
We first present results on\nstory link detection, to support the native language hypothesis in a\nsimple, understandable task. Then we present experiments that\ntest the hypothesis in the topic tracking task. Finally we consider\nseveral different ways to adapt topic models to allow native language\ncomparisons downstream.\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nSIGIR '04, July 25-29, 2003, Sheffield, South Yorkshire, UK.\nCopyright 2004 ACM 1-58113-881-4/04/0007...$5.00.\n\n402\nAlthough these experiments were carried out in service of TDT,\nthe results should equally apply to other domains which require\nthe comparison of documents in different languages, particularly\nfiltering, text classification and clustering.\nEXPERIMENTAL SETUP\nExperiments are replicated with two different data sets, TDT3 and\nTDT4, and two very different similarity functions - cosine similarity\n, and another based on relevance modeling, described in the\nfollowing two sections. Cosine similarity can be seen as a basic\ndefault approach, which performs adequately, and relevance modeling\nis a state of the art approach which yields top-rated performance\n. Confirming the native-language hypothesis in both systems\nwould show its generality.\nIn the rest of this section, we describe the TDT data sets, then we\ndescribe how story link detection and topic tracking are carried\nout in cosine similarity and relevance modeling systems. Next, we\ndescribe the multilingual aspects of the systems.\n2.1\n\nTDT3 Data\nTDT data consist of a stream of news in multiple languages and\nfrom different media - audio from television, radio, and web news\nbroadcasts, and text from newswires. Two forms of transcription\nare available for the audio stream. The first form comes from\nautomatic speech recognition and includes transcription errors\nmade by such systems. The second form is a manual transcription,\nwhich has few if any errors. The audio stream can also be divided\ninto stories automatically or manually (so-called reference\nboundaries). For all the research reported here, we used manual\ntranscriptions and reference boundaries.\nThe characteristics of the TDT3 data sets for story link detection\nand topic tracking are summarized in Tables 1-3.\n\nTable 1: Number of stories in TDT3 Corpus\n\nEnglish Arabic Mandarin\nTotal\nTDT3 37,526 15,928 13,657 67,111\n\nTable 2: Characteristics of TDT3 story link detection data sets\nNumber of topics\n8\nNumber of link pairs\nSame topic\nDifferent topic\nEnglish-English\n605\n3999\nArabic-Arabic\n669\n3998\nMandarin-Mandarin\n440\n4000\nEnglish-Arabic\n676\n4000\nEnglish-Mandarin\n569\n4000\nArabic-Mandarin\n583\n3998\nTotal\n3542\n23,995\n\nTable 3: Characteristics of TDT3 topic tracking data sets\n\nN\nt\n=2\nN\nt\n=4\nNumber of topics\n36 30\nNum. 
2.2 Story Representation and Similarity
2.2.1 Cosine similarity
To compare two stories for link detection, or a story with a topic model for tracking, each story is represented as a vector of terms with tf-idf term weights:
a_i = tf * log((N + 0.5) / df) / log(N + 1)     (1)
where tf is the number of occurrences of the term in the story, N is the total number of documents in the collection, and df is the number of documents containing the term. Collection statistics N and df are computed incrementally, based on the documents already in the stream within a deferral period after the test story arrives. The deferral period was 10 for link detection and 1 for topic tracking. For link detection, story vectors were pruned to the 1000 terms with the highest term weights.
The similarity of two (weighted, pruned) vectors a = a_1, ..., a_n and b = b_1, ..., b_m is the normalized inner product of the two vectors:
Sim_cos(a, b) = (Sum_i a_i b_i) / sqrt((Sum_i a_i^2)(Sum_i b_i^2))     (2)
If the similarity of two stories exceeds a yes/no threshold, the stories are considered to be about the same topic.
For topic tracking, a topic model is a centroid, an average of the vectors for the N_t training stories. Topic models are pruned to 100 terms based on the term weights. Story vectors pruned to 100 terms are compared to centroids using equation (2). If the similarity exceeds a yes/no threshold, the story is considered on-topic.
2.2.2 Relevance modeling
Relevance modeling is a statistical technique for estimating language models from extremely small samples, such as queries [9]. If Q is a small sample of text, and C is a large collection of documents, the language model for Q is estimated as:
P(w|Q) = Sum_{d in C} P(w|M_d) P(M_d|Q)     (3)
A relevance model, then, is a mixture of the language models M_d of every document d in the collection, where the document models are weighted by the posterior probability of producing the query, P(M_d|Q). The posterior probability is computed as:
P(M_d|Q) = P(d) Prod_{q in Q} P(q|M_d) / Sum_{d' in C} P(d') Prod_{q in Q} P(q|M_{d'})     (4)
Equation (4) assigns the highest weights to documents that are most likely to have generated Q, and can be interpreted as nearest-neighbor smoothing, or as a massive query expansion technique.
To apply relevance modeling to story link detection, we estimate the similarity between two stories A and B by pruning the stories to short queries, estimating relevance models for the queries, and measuring the similarity between the two relevance models. Each story is replaced by a query consisting of the ten words in the story with the lowest probability of occurring by chance in randomly drawing |A| words from the collection C:
P_chance(w|A) = (C_w choose A_w) * ((|C| - C_w) choose (|A| - A_w)) / (|C| choose |A|)     (5)
where |A| is the length of the story A, A_w is the number of times word w occurs in A, |C| is the size of the collection, C_w is the number of times word w occurs in C, and (n choose k) denotes a binomial coefficient.
Story relevance models are estimated using equation (4).
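As one possible reading of equation (5) (a hypergeometric chance probability reconstructed from the surrounding definitions, so treat this as an assumption rather than the authors' code), the query-selection step can be sketched as follows:

import math

def chance_probability(a_w, story_len, c_w, coll_len):
    # Probability that a word occurring c_w times in the collection C
    # appears exactly a_w times when |A| words are drawn at random.
    return (math.comb(c_w, a_w) * math.comb(coll_len - c_w, story_len - a_w)
            / math.comb(coll_len, story_len))

def query_terms(story_counts, coll_counts, coll_len, k=10):
    # Keep the ten story words least likely to occur by chance (equation (5)).
    story_len = sum(story_counts.values())
    scores = {w: chance_probability(a_w, story_len, coll_counts[w], coll_len)
              for w, a_w in story_counts.items()}
    return sorted(scores, key=scores.get)[:k]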
Similarity between relevance models is measured using the symmetrized clarity-adjusted divergence [11]:
Sim_RM(A, B) = Sum_w P(w|Q_A) log(P(w|Q_B) / P(w|GE)) + Sum_w P(w|Q_B) log(P(w|Q_A) / P(w|GE))     (6)
where P(w|Q_A) is the relevance model estimated for story A, and P(w|GE) is the background (General English, Arabic, or Mandarin) probability of w, computed from the entire collection of stories in the language within the same deferral period used for cosine similarity.
To apply relevance modeling to topic tracking, the asymmetric clarity-adjusted divergence is used:
Sim_track(T, S) = Sum_w P(w|T) log(P(w|S) / P(w|GE))     (7)
where P(w|T) is a relevance model of the topic T. Because of computational constraints, smoothed maximum likelihood estimates rather than relevance models are used for the story model P(w|S). The topic model, based on Equation (3), is:
P(w|T) = (1 / |S_t|) Sum_{d in S_t} P(w|M_d)     (8)
where S_t is the set of training stories. The topic model is pruned to 100 terms. More detail about applying relevance models to TDT can be found in [2].
2.3 Evaluation
TDT tasks are evaluated as detection tasks. For each test trial, the system attempts to make a yes/no decision. In story link detection, the decision is whether the two members of a story pair belong to the same topic. In topic tracking, the decision is whether a story in the stream belongs to a particular topic. In all tasks, performance is summarized in two ways: a detection cost function (C_Det) and a decision error tradeoff (DET) curve. Both are based on the rates of two kinds of errors a detection system can make: misses, in which the system gives a no answer where the correct answer is yes, and false alarms, in which the system gives a yes answer where the correct answer is no.
The DET curve plots the miss rate (P_Miss) as a function of the false alarm rate (P_Fa), as the yes/no decision threshold is swept through its range. P_Miss and P_Fa are computed for each topic, and then averaged across topics to yield topic-weighted curves. An example can be seen in Figure 1 below. Better performance is indicated by curves closer to the lower left of the graph.
The detection cost function is computed for a particular threshold as follows:
C_Det = C_Miss * P_Miss * P_Target + C_Fa * P_Fa * (1 - P_Target)     (9)
where P_Miss = #Misses / #Targets and P_Fa = #FalseAlarms / #NonTargets. C_Miss and C_Fa are the costs of a missed detection and a false alarm, respectively, and are specified for the application, usually at 10 and 1, penalizing misses more than false alarms. P_Target is the a priori probability of finding a target, an item where the answer should be yes, set by convention to 0.02.
The cost function is normalized:
(C_Det)_Norm = C_Det / min(C_Miss * P_Target, C_Fa * (1 - P_Target))     (10)
and averaged over topics. Each point along the detection error tradeoff curve has a value of (C_Det)_Norm. The minimum value found on the curve is known as min(C_Det)_Norm. It can be interpreted as the value of (C_Det)_Norm at the best possible threshold. This measure allows us to separate performance on the task from the choice of yes/no threshold.
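For reference, a small sketch of the cost computation in equations (9) and (10), under the standard settings quoted above (C_Miss = 10, C_Fa = 1, P_Target = 0.02):

def normalized_detection_cost(misses, targets, false_alarms, non_targets,
                              c_miss=10.0, c_fa=1.0, p_target=0.02):
    # Equation (9): raw detection cost at one yes/no threshold.
    p_miss = misses / targets
    p_fa = false_alarms / non_targets
    c_det = c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)
    # Equation (10): normalize by the better of the trivial all-yes / all-no systems.
    return c_det / min(c_miss * p_target, c_fa * (1.0 - p_target))

# Sweeping the threshold and keeping the smallest value of this quantity,
# averaged over topics, gives the min(C_Det)_Norm figures reported below.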
Lower cost scores indicate better performance. More information about these measures can be found in [5].
2.4 Language-specific Comparisons
English stories were lower-cased and stemmed using the kstem stemmer [6]. Stop words were removed. For native Arabic comparisons, stories were converted from Unicode UTF-8 to Windows (CP1256) encoding, then normalized and stemmed with a light stemmer [7]. Stop words were removed. For native Mandarin comparisons, overlapping character bigrams were compared.
STORY LINK DETECTION
In this section we present experimental results for story link detection, comparing a native condition with an English baseline. In the English baseline, all comparisons are in English, using machine translation (MT) for Arabic and Mandarin stories. Corpus statistics are computed incrementally for all the English and translated-into-English stories. In the Native condition, two stories originating in the same language are compared in that language. Corpus statistics are computed incrementally for the stories in the language of the comparison. Cross-language pairs in the native condition are compared in English using MT, as in the baseline.
Figure 1: DET curve for TDT3 link detection based on English versions of stories, or native language versions, for cosine and relevance model similarity
We have just described global adaptation, in which stories are added to global topic models in English. Stories that originated in Arabic or Mandarin are compared and added in their machine-translated version.

Native adaptation differs from global adaptation in making separate topic models for each source language. To decide whether a test story should be added to a native topic set, the test story is compared in its native language with the native model, and it is added to the native topic set for that language if its similarity score exceeds θ_ad. The English version of the story is also compared to the global topic model, and if its similarity score exceeds θ_ad, it is added to the global topic set. (Global models continue to adapt for other languages which may not yet have a native model, or for smoothing, discussed later.)

At the start there are global topic models and native English topic models based on the training stories, but no native Arabic or Mandarin topic models. When there is not yet a native topic model in the story's original language, the translated story is compared to the global topic model. If the similarity exceeds θ_ad, the native topic model is initialized with the untranslated story. Yes/no decisions for topic tracking can then be based on the untranslated story's similarity to the native topic model if one exists. If there is no native topic model yet for that language and topic, the translated story is compared to the global topic model. (A code sketch of this decision logic appears below, after Table 5.)

We have described three experimental conditions: global adapted, native adapted, and a baseline. The baseline, described in Section 2.2, can also be called global unadapted; it uses a single English model per topic based on the small set of training stories. A fourth possible condition, native unadapted, is problematic and not included here: there is no straightforward way to initialize native language topic models without adaptation when training stories are provided only in English.

Figure 2: DET curves for TDT3 tracking, cosine similarity (above) and relevance models (below), N_t = 4 training stories, global unadapted baseline, global adapted, and native adapted

Table 5: Min(C_Det)_Norm for TDT3 topic tracking (Global and Native are the adapted conditions)
         |        N_t = 2            |        N_t = 4
         | Baseline | Global | Native | Baseline | Global | Native
Cosine   | .1501    | .1197  | .1340  | .1238    | .1074  | .1028
RM       | .1283    | .0892  | .0966  | .1060    | .0818  | .0934

The TDT3 tracking results for the three conditions, replicated with the two different similarity measures (cosine similarity and relevance modeling) and two different training set sizes (N_t = 2 and 4), can be seen in Table 5. DET curves for N_t = 4 are shown in Figure 2, for cosine similarity (above) and relevance modeling (RM) (below). Table 5 shows a robust adaptation effect for cosine and relevance model experiments, and for 2 or 4 training stories. Native and global adaptation are always better (lower cost) than baseline unadapted tracking. In addition, relevance modeling produces better results than cosine similarity. However, the results do not show the predicted advantage for native adapted topic models over global adapted topic models. Only cosine similarity at N_t = 4 seems to show the expected difference (native .1028 vs. global .1074), and that difference is very small. The DET curve in Figure 2 shows no sign of a native language effect.
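The per-language decision logic described at the beginning of this section (which model to adapt, and which score the yes/no decision is based on) can be sketched as follows. The data structures and the injected similarity, adapt and new_model callables are our own scaffolding, not the system's actual interface:

```python
from dataclasses import dataclass, field

THETA_AD = 0.5  # adaptation threshold

@dataclass
class Topic:
    global_model: dict                                  # English topic model
    native_models: dict = field(default_factory=dict)   # language -> model

@dataclass
class Story:
    language: str
    native_text: str
    english_text: str        # machine translation (or the original, if English)

def track_story(story, topic, similarity, adapt, new_model):
    """Score one story against one topic, adapting global/native models."""
    global_score = similarity(story.english_text, topic.global_model)
    if global_score > THETA_AD:          # global models always keep adapting
        adapt(topic.global_model, story.english_text)

    native = topic.native_models.get(story.language)
    if native is not None:
        native_score = similarity(story.native_text, native)
        if native_score > THETA_AD:
            adapt(native, story.native_text)
        return native_score              # yes/no decision uses the native score
    if global_score > THETA_AD:          # first on-topic story seeds the native model
        topic.native_models[story.language] = new_model(story.native_text)
    return global_score                  # otherwise decide with the global model
```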
Table 6 shows minimum cost figures computed separately for the English, Mandarin, and Arabic test sets. Only English shows a pattern similar to the composite results of Table 5. For cosine similarity, there is not much difference between global and native English topic models. For relevance modeling, native English topic models are slightly worse than global models. Arabic and Mandarin appear to show a native language advantage for all cosine similarity conditions and most relevance model conditions. However, DET curves comparing global and native adapted models separately for English, Arabic, and Mandarin (Figure 3) show no real native language advantage.

Table 6: Min(C_Det)_Norm for TDT3 topic tracking; breakdown by original story language (Global and Native are the adapted conditions)
English  |        N_t = 2            |        N_t = 4
         | Baseline | Global | Native | Baseline | Global | Native
Cosine   | .1177    | .0930  | .0977  | .0903    | .0736  | .0713
RM       | .1006    | .0681  | .0754  | .0737    | .0573  | .0628
Arabic
Cosine   | .2023    | .1654  | .1486  | .1794    | .1558  | .1348
RM       | .1884    | .1356  | .1404  | .1581    | .1206  | .1377
Mandarin
Cosine   | .2156    | .1794  | .1714  | .1657    | .1557  | .1422
RM       | .1829    | .1272  | .0991  | .1286    | .0935  | .0847

Figure 3: DET curves for TDT3 tracking, cosine similarity, N_t = 4 training stories, global adapted vs. native adapted breakdown for English, Arabic, and Mandarin

In trying to account for the discrepancy between the findings on link detection and tracking, we suspected that the root of the problem was the quality of the native models for Arabic and Mandarin. For English, adaptation began with 2 or 4 on-topic models. Mandarin and Arabic models, however, did not begin with on-topic stories; they could begin with off-topic models, which should hurt tracking performance. A related issue is data sparseness: when a native topic model is first formed, it is based on one story, which is a poorer basis for tracking than N_t stories. In the next three sections we pursue different aspects of these suspicions. In Section 5 we perform a best-case experiment, initializing native topic sets with on-topic stories and smoothing native scores with global scores to address the sparseness problem; if these conditions do not show a native language advantage, we would reject the native language hypothesis. In Section 6 we explore the role of the adaptation threshold. In Section 7 we compare some additional methods of initializing native language topic models.

ON-TOPIC NATIVE CENTROIDS
In this section we consider a best-case scenario, where we take the first N_t stories in each language relevant to each topic to initialize adaptation of native topic models. While this is cheating, and not a way to obtain native training documents in a realistic tracking scenario, it demonstrates what performance can be attained if native training documents are available. More realistic approaches to adapting native topic models are considered in subsequent sections.
The baseline and global adapted conditions were carried out as in Section 4, and the native adapted condition was similar except in the way adaptation of native topics began. If there were not yet N_t native stories in the topic set for the current test story in its native language, the story was added to the topic set if it was relevant. Once a native topic model had N_t stories, we switched to the usual non-cheating mode of adaptation, based on similarity score and adaptation threshold.

To address the data sparseness problem, we also smoothed the native similarity scores with the global similarity scores:

Sim_smooth(T, S) = \lambda \cdot Sim_native(T, S) + (1 - \lambda) \cdot Sim_global(T, S)    (11)

The parameter λ was not tuned, but set to a fixed value of 0.5.

The results can be seen in Table 7. Cell pairs in which the language-specific (native) topic models outperform the global models confirm the native language hypothesis.

Table 7: Min(C_Det)_Norm for TDT3 topic tracking, using N_t on-topic native training stories and smoothing native scores
         |        N_t = 2            |        N_t = 4
         | Baseline | Global | Native | Baseline | Global | Native
Cosine   | .1501    | .1197  | .0932  | .1238    | .1074  | .0758
RM       | .1283    | .0892  | .0702  | .1060    | .0818  | .0611

Figure 4: DET curve for TDT3 tracking, initializing native adaptation with relevant training stories during adaptation, cosine similarity, N_t = 4

Figure 4 shows the DET curves for the cosine, N_t = 4 case. When the native models are initialized with on-topic stories, the advantage of native models is clearly seen in the tracking performance.

Figure 5: DET curves for TDT3 tracking initializing native adaptation with relevant training stories during adaptation and smoothing, vs. global adaptation, cosine similarity, N_t = 4, separate analyses for English, Arabic, and Mandarin

DET curves showing results computed separately for the three languages can be seen in Figure 5, for the cosine, N_t = 4 case. English tracking remains about the same, but Arabic and Mandarin native tracking show a large native language advantage.

ADAPTATION THRESHOLD
The adaptation threshold was set to 0.5 in the experiments described above without any tuning. The increase in global tracking performance after adaptation shows that the value is at least acceptable. However, an analysis of the details of native adaptation showed that many Arabic and Mandarin topics were not adapting. A summary of some of this analysis can be seen in Table 8.

Table 8: Number of topics receiving new stories during native adaptation, breakdown by language
Similarity      | N_t | Total topics | English | Arabic | Mandarin
Cosine          | 2   | 36           | 24      | 8      | 11
Cosine          | 4   | 30           | 26      | 7      | 9
Relevance Model | 2   | 36           | 36      | 8      | 7
Relevance Model | 4   | 30           | 30      | 8      | 5

Fewer than a third of the topics received adapted stories. This means that for most topics, native tracking was based on the global models. In order to determine whether this was due to the adaptation threshold, we performed an experiment varying the adaptation threshold from .3 to .65 in steps of .05.
The results can be seen in Figure 6, which shows the minimum cost, min(C_Det)_Norm, across the range of adaptation threshold values. Although the original threshold, .5, was not always the optimal value, it is also clear that the pattern we saw at .5 (and in Figure 6) does not change as the threshold is varied; that is, tracking with native topic models is not better than tracking with global models. An improperly tuned adaptation threshold was therefore not the reason that the native language hypothesis was not confirmed for tracking. We suspect that different adaptation thresholds may be needed for the different languages, but it would be better to handle this problem by language-specific normalization of similarity scores.

Figure 6: Effect of adaptation threshold on min(C_Det)_Norm for TDT3 tracking with adaptation (two panels, cosine similarity and relevance model: minimum cost vs. threshold from .3 to .65, with curves for global and native models at N_t = 2 and N_t = 4)

IMPROVING NATIVE TOPIC MODELS
In the previous two sections we showed that when native topic models are initialized with language-specific training stories that are truly on-topic, topic tracking is indeed better with native models than with global models. However, in the context of the TDT test situation, the way we obtained our language-specific training stories was cheating. In this section we experiment with two different "legal" ways to initialize better native language models: (1) use both global and native models, and smooth native similarity scores with global similarity scores; (2) initialize native models with dictionary or other translations of the English training stories into the other language.

Smoothing was carried out in the native adapted condition according to Equation (11), setting λ = 0.5, without tuning. The comparison with unadapted and globally adapted tracking can be seen in Table 9. Smoothing improves native topic model performance relative to unsmoothed native topic models (cf. Table 5), and brings the native model performance to roughly the same level as the global. In other words, smoothing improves performance, but we still do not have strong support for the native language hypothesis. This is apparent in Figure 7: native adapted tracking is not better than global adapted tracking.

Table 9: Min(C_Det)_Norm for TDT3 topic tracking, smoothing native scores with global scores
         |          N_t = 2                 |          N_t = 4
         | Baseline | Global | Native-Smooth | Baseline | Global | Native-Smooth
Cosine   | .1501    | .1197  | .1125         | .1238    | .1074  | .1010
RM       | .1283    | .0892  | .0872         | .1060    | .0818  | .0840

Figure 7: DET curve for TDT3 tracking with smoothing, cosine similarity, N_t = 4 training stories

The final method of initializing topic models for different languages would be to translate the English training stories into the other languages required. We did not have machine translation from English into Arabic or Mandarin available for these experiments. However, we have had success with dictionary translations for Arabic. In [2] we found that dictionary translations from Arabic into English resulted in performance comparable to the machine translations on tracking, and better performance on link detection. Such translated stories would not be "native language" training stories, but they might be a better starting point for language-specific adaptation anyway.

Training story translations into Arabic used an English/Arabic probabilistic dictionary derived from the Linguistic Data Consortium's UN Arabic/English parallel corpus, developed for our cross-language information retrieval work [7]. Each English word has many different Arabic translations, each with a translation probability p(a|e). The Arabic words, but not the English words, have been stemmed according to a light stemming algorithm. To translate an English story, English stop words were removed, and each English word occurrence was replaced by all of its dictionary translations, weighted by their translation probabilities. Weights were summed across all the occurrences of each Arabic word, and the resulting Arabic term vector was truncated to retain only terms above a threshold weight. We translated training stories only into Arabic, because we did not have a method to produce good-quality English to Mandarin translation.
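The weighted replacement step just described can be sketched as follows. The dictionary data structure, the threshold value, and the placeholder term strings are illustrative assumptions, not the actual lexicon format:

```python
from collections import defaultdict

def translate_story(english_tokens, prob_dict, stopwords, weight_threshold=0.1):
    """Build an Arabic term vector from an English story.

    prob_dict maps an English word to a list of (arabic_stem, p(a|e)) pairs.
    Each English occurrence contributes all of its translations, weighted by
    translation probability; weights are summed per Arabic term and the
    vector is truncated at weight_threshold.
    """
    weights = defaultdict(float)
    for token in english_tokens:
        if token in stopwords:
            continue
        for arabic, p in prob_dict.get(token, []):
            weights[arabic] += p
    return {a: w for a, w in weights.items() if w >= weight_threshold}

# Tiny illustrative example with invented dictionary entries:
lexicon = {"election": [("ar_stem_1", 0.6), ("ar_stem_2", 0.3)],
           "minister": [("ar_stem_3", 0.9)]}
vec = translate_story(["the", "election", "minister", "election"],
                      lexicon, stopwords={"the"})
# -> {"ar_stem_1": 1.2, "ar_stem_2": 0.6, "ar_stem_3": 0.9}
```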
The results for Arabic can be seen in Table 10. For translation, it makes sense to include an unadapted native condition, labeled "translated" in the table.

Table 10: Min(C_Det)_Norm for Arabic TDT3 topic tracking, initializing native topic models with dictionary-translated training stories
Arabic, N_t = 2 | Unadapted            | Adapted
                | Baseline | Translated | Global | Native
Cosine          | .2023    | .2219      | .1694  | .2209
RM              | .1884    | .1625      | .1356  | .1613
Arabic, N_t = 4
Cosine          | .1794    | .1640      | .1558  | .1655
RM              | .1581    | .1316      | .1206  | .1325

Figure 8: DET curve for TDT3 tracking initializing native topics with dictionary-translated training stories, cosine similarity, N_t = 4, Arabic only

The results are mixed. First of all, this case is unusual in that adaptation does not improve the translated models. Further analysis revealed that very little adaptation was taking place. Because of this lack of native adaptation, global adaptation consistently outperformed native adaptation here. However, in the unadapted conditions, translated training stories outperformed the global models for Arabic in three of the four cases: cosine at N_t = 4, and relevance models at both N_t = 2 and N_t = 4 (the baseline-translated pairs in Table 10). The DET curve for the cosine, N_t = 4 case can be seen in Figure 8; the native unadapted (translated) curve is better (lower) than the global unadapted curve.

The translated stories were very different from the test stories, so their similarity scores almost always fell below the adaptation threshold. We believe the need to normalize scores between native stories and dictionary translations is part of the problem, but we also need to investigate the compatibility of the dictionary translations with the native Arabic stories.

CONCLUSIONS
We have confirmed the native language hypothesis for story link detection. For topic tracking, the picture is more complicated. When native language training stories are available, good native language topic models can be built for tracking stories in their original language. Smoothing the native models with global models improves performance slightly.
However, if training stories are\nnot available in the different languages, it is difficult to form native\nmodels by adaptation or by translation of training stories,\nwhich perform better than the adapted global models.\nWhy should language specific comparisons be more accurate than\ncomparisons based on machine translation? Machine translations\nare not always good translations. If the translation distorts the\nmeaning of the original story, it is unlikely to be similar to the\ntopic model, particularly if proper names are incorrect, or spelled\ndifferently in the machine translations than they are in the English\ntraining stories, a common problem in English translations from\nMandarin or Arabic. Secondly, even if the translations are correct,\nthe choice of words, and hence the language models, are likely to\nbe different across languages. The second problem could be han-dled\nby normalizing for source language, as in\n\n[12]. But\nnormalization cannot compensate for poor translation.\nWe were surprised that translating the training stories into Arabic\nto make Arabic topic models did not improve tracking, but again,\nour dictionary based translations of the topic models were different\nfrom native Arabic stories. We intend to try the same experiment\nwith manual translations of the training stories into Arabic\nand Mandarin. We are also planning to investigate the best way\nto normalize scores for different languages. When TDT4 relevance\njudgments are available we intend to replicate some of\nthese experiments on TDT4 data.\nACKNOWLEDGMENTS\nThis work was supported in part by the Center for Intelligent Information\nRetrieval and in part by SPAWARSYSCEN-SD grant\nnumber N66001-02-1-8903. Any opinions, findings and conclusions\nor recommendations expressed in this material are the author\n(s) and do not necessarily reflect those of the sponsor.\nREFERENCES\n[1]\n\nAllan, J. Introduction to topic detection and tracking. In\nTopic detection and tracking: Event-based information organization\n, J. Allan (ed.): Kluwer Academic Publishers, 1-16\n, 2002.\n[2]\n\nAllan, J. Bolivar, A., Connell, M., Cronen-Townsend, S.,\nFeng, A, Feng, F., Kumaran, G., Larkey, L., Lavrenko, V.,\nRaghavan, H. UMass TDT 2003 Research Summary. In\nProceedings of TDT 2003 evaluation, unpublished, 2003.\n[3]\n\nChen, H.-H. and Ku, L. W. An NLP & IR approach to topic\ndetection. In Topic detection and tracking: Event-based\ninformation organization, J. Allan (ed.). Boston, MA: Kluwer\n, 243-264, 2002.\n[4]\n\nChen, Y.-J. and Chen, H.-H. Nlp and IR approaches to\nmonolingual and multilingual link detection. Presented at\nProceedings of 19th International Conference on Computa-tional\nLinguistics, Taipei, Taiwan, 2002.\n[5]\n\nFiscus, J. G. and Doddington, G. R. Topic detection and\ntracking evaluation overview. In Topic detection and\ntracking: Event-based information organization, J. Allan\n(ed.). Boston, MA: Kluwer, 17-32, 2002.\n[6]\n\nKrovetz, R. Viewing morphology as an inference process.\nIn Proceedings of SIGIR '93, 191-203, 1993.\n[7]\n\nLarkey, Leah S. and Connell, Margaret E. (2003) Structured\nQueries, Language Modeling, and Relevance Modeling in\nCross-Language Information Retrieval. To appear in\nInformation Processing and Management Special Issue on\nCross Language Information Retrieval, 2003.\n[8]\n\nLarkey, L. S., Ballesteros, L., and Connell, M. E. Improving\nstemming for Arabic information retrieval: Light stemming\nand co-occurrence analysis. 
In Proceedings of SIGIR 2002,\n275-282, 2002.\n[9]\n\nLavrenko, V. and Croft, W. B. Relevance-based language\nmodels. In Proceedings of SIGIR 2001. New Orleans:\nACM, 120-127, 2001.\n[10]\n\nLavrenko, V. and Croft, W. B. Relevance models in\ninformation retrieval. In Language modeling for information\nretrieval, W. B. Croft and J. Lafferty (eds.). Boston:\nKluwer, 11-56, 2003.\n[11]\n\nLavrenko, V., Allan, J., DeGuzman, E., LaFlamme, D., Pollard\n, V., and Thomas, S. Relevance models for topic detection\nand tracking. In Proceedings of the Conference on\nHuman Language Technology, 104-110, 2002.\n[12]\n\nLeek, T., Schwartz, R. M., and Sista, S. Probabilistic approaches\nto topic detection and tracking. In Topic detection\nand tracking: Event-based information organization, J.\nAllan (ed.). Boston, MA: Kluwer, 67-83, 2002.\n[13]\n\nLevow, G.-A. and Oard, D. W. Signal boosting for translin-gual\ntopic tracking: Document expansion and n-best translation\n. In Topic detection and tracking: Event-based information\norganization, J. Allan (ed.). Boston, MA: Kluwer,\n175-195, 2002.\n[14]\n\nOard, D. W. Adaptive vector space text filtering for\nmonolingual and cross-language applications. PhD dissertation\n, University of Maryland, College Park, 1996.\nhttp://www.glue.umd.edu/~dlrg/filter/papers/thesis.ps.gz\n\n409", "keywords": "topic models;;classification;crosslingual;native topic models;similarity;story link;topic tracking;native language hypothesis;multilingual topic tracking;multilingual;Arabic;TDT;machine translation"} {"name": "122", "title": "Lazy Preservation: Reconstructing Websites by Crawling the Crawlers", "abstract": "Backup of websites is often not considered until after a catastrophic event has occurred to either the website or its webmaster. We introduce \"lazy preservation\" digital preservation performed as a result of the normal operation of web crawlers and caches. Lazy preservation is especially suitable for third parties; for example, a teacher reconstructing a missing website used in previous classes. We evaluate the effectiveness of lazy preservation by reconstructing 24 websites of varying sizes and composition using Warrick, a web-repository crawler. Because of varying levels of completeness in any one repository, our reconstructions sampled from four different web repositories: Google (44%), MSN (30%), Internet Archive (19%) and Yahoo (7%). We also measured the time required for web resources to be discovered and cached (10-103 days) as well as how long they remained in cache after deletion (7-61 days).", "fulltext": "INTRODUCTION\n\"My old web hosting company lost my site in its\nentirety (duh!) when a hard drive died on them.\nNeedless to say that I was peeved, but I do notice\nthat it is available to browse on the wayback\nmachine... Does anyone have any ideas if I can\ndownload my full site?\" - A request for help at\narchive.org [25]\nWebsites may be lost for a number of reasons: hard drive\ncrashes, file system failures, viruses, hacking, etc. A lost\nwebsite may be restored if care was taken to create a backup\nbeforehand, but sometimes webmasters are negligent in backing\nup their websites, and in cases such as fire, flooding, or\ndeath of the website owner, backups are frequently unavailable\n. In these cases, webmasters and third parties may turn\nto the Internet Archive (IA) \"Wayback Machine\" for help.\nAccording to a representative from IA, they have performed\nover 200 website recoveries in the past year for various individuals\n. 
Although IA is often helpful, it is strictly a best-effort approach that performs sporadic, incomplete and slow crawls of the Web (IA is at least 6 months out-of-date [16]). Another source of missing web content is the caches of search engines (SEs) like Google, MSN and Yahoo that scour the Web looking for content to index. Unfortunately, the SEs do not preserve canonical copies of all the web resources they cache, and it is assumed that the SEs do not keep web pages long after they have been removed from a web server.

We define lazy preservation as the collective digital preservation performed by web archives and search engines on behalf of the Web at large. It exists as a preservation service on top of distributed, incomplete, and potentially unreliable web repositories. Lazy preservation requires no individual effort or cost from Web publishers, but it also provides no quality-of-service guarantees. We explore the effectiveness of lazy preservation by downloading 24 websites of various sizes and subject matter and reconstructing them using a web-repository crawler named Warrick (named after a fictional forensic scientist with a penchant for gambling), which recovers missing resources from four web repositories (IA, Google, MSN and Yahoo). We compare the downloaded versions of the sites with the reconstructed versions to measure how successfully we reconstructed the websites.

We also measure the time it takes for SEs to crawl and cache web pages that we created on .com and .edu websites. In June 2005, we created four synthetic web collections consisting of HTML, PDF and image resources. For 90 days we systematically removed web pages and measured how long they remained cached by the SEs.

BACKGROUND AND RELATED WORK
The ephemeral nature of the Web has been widely acknowledged. To combat the disappearance of web resources, Brewster Kahle's Internet Archive has been archiving the Web since 1996 [4]. National libraries are also actively engaged in archiving culturally important websites [8]. Systems like LOCKSS [24] have been developed to ensure libraries have long-term access to publishers' web content, and commercial systems like Spurl.net and HanzoWeb.com have been developed to allow users to archive selected web resources that they deem important.

Other researchers have developed tools for archiving individual websites and web pages. InfoMonitor [7] archives a website's file system and stores the archive remotely. TTApache [9] is used to archive requested pages from a particular web server, and iPROXY [23] is used as a proxy server to archive requested pages from a variety of web servers. In many cases these services can be of some value for recovering a lost website, but they are largely useless when backups are inaccessible or destroyed, or when a third party wants to reconstruct a website.
They also require the webmaster to perform some amount of work in setting up, configuring and monitoring the systems.

With regard to commercial search engines, the literature has mostly focused on measuring the amount of content they have indexed (e.g., [15, 18]), the relevance of responses to users' queries (e.g., [5, 14]), and the ranking of pages (e.g., [28]). Lewandowski et al. [17] studied how frequently Google, MSN and Yahoo updated their cached versions of web pages, but we are unaware of any research that attempts to measure how quickly new resources are added to and removed from commercial SE caches, or research that explores the use of SE caches for reconstructing websites.

WEB CRAWLING AND CACHING
There are many SEs and web archives that index and store Web content. For them to be useful for website reconstruction, they must at a minimum provide a way to map a given URL to a stored resource. To limit the implementation complexity, we have focused on what we consider to be the four most popular web repositories that meet our minimum criteria. Recent measurements show that Google, MSN and Yahoo index significantly different portions of the Web and have an intersection of less than 45% [15]. Adding additional web repositories like ask.com, gigablast.com, incywincy.com and any other web repository that allows direct URL retrieval would likely increase our ability to reconstruct websites.

Figure 1: Timeline of SE resource acquisition and release (timeline marks t_0, t_d, t_a, t_r, t_m and t_p; TTL_ws on the web server and TTL_c in the SE cache; and the vulnerable, replicated, endangered and unrecoverable periods)

Although SEs often publish index size estimates, it is difficult to estimate the number of resources in each SE cache. An HTML web page may consist of numerous web resources (e.g., images, applets, etc.) that may not be counted in the estimates, and not all indexed resources are stored in the SE caches. Google, MSN and Yahoo will not cache an HTML page if it contains a NOARCHIVE meta-tag, and the HTTP Cache-control directives 'no-cache' and 'no-store' may also prevent caching of resources [1].

Only IA stores web resources indefinitely. The SEs have proprietary cache replacement and removal policies which can only be inferred from observed behavior. All four web repositories perform sporadic and incomplete crawls of websites, making their aggregate performance important for website reconstruction.

Table 1 shows the most popular types of resources held by the four web repositories. The table is based on our observations when reconstructing websites with a variety of content.

Table 1: Web repository-supported data types
Type          | G | Y | M | IA
HTML          | C | C | C | C
Plain text    | M | M | M | C
GIF, PNG, JPG | M | M | R | C
JavaScript    | M | M |   | C
MS Excel      | M | S | M | C
MS PowerPoint | M | M | M | C
MS Word       | M | M | M | C
PDF           | M | M | M | C
PostScript    | M | S |   | C
C = canonical version is stored; M = modified version is stored (image thumbnails or HTML conversions); R = stored but not retrievable with direct URL; S = indexed but stored version is not accessible.

IA keeps a canonical version of all web resources, but the SEs only keep canonical versions of HTML pages. When adding PDF, PostScript and Microsoft Office (Word, Excel, PowerPoint) resources to their caches, the SEs create HTML versions of the resources which are stripped of all images. SEs also keep only a thumbnail version of the images they cache, due to copyright law. MSN uses Picsearch for its image crawling; unfortunately, Picsearch and MSN do not support direct URL queries for accessing these images, so they cannot be used for recovering website images.

3.2 Lifetime of a Web Resource
Figure 1 illustrates the life span of a web resource, from when it is first made available on a web server to when it is finally purged from a SE cache.
A web resource's time-to-live on the web server (TTL_ws) is defined as the number of days from when the resource is first made accessible on the server (t_0) to when it is removed (t_r).

A new resource is vulnerable until it is discovered by a SE (t_d) and made available in the SE cache (t_a). The resource is replicated while it is accessible both on the web server and in cache. Once the resource is removed from the web server (t_r), it becomes endangered, since it is only accessible in cache. When a subsequent crawl reveals that the resource is no longer available on the web server (t_m), it will then be purged from cache (t_p) and become unrecoverable. The period between t_a and t_p defines a resource's time-to-live in the SE cache (TTL_c). A resource is recoverable if it is currently cached (i.e., is replicated or endangered). A recoverable resource can only be recovered during the TTL_c period, with a probability of P_r, the observed number of days that the resource is retrievable from the cache divided by TTL_c.

It should be noted that the TTL_ws and TTL_c periods of a resource may not necessarily overlap. A SE that is trying to maximize the freshness of its index will try to minimize the difference between TTL_ws and TTL_c. A SE that is slow in updating its index, perhaps because it obtains crawling data from a third party, may experience late caching, where t_r < t_a.

For a website to be lazily preserved, we would like its resources to be cached soon after their appearance on the website (minimal vulnerability). SEs may share this goal if they want to index newly discovered content as quickly as possible. Inducing a SE to crawl a website at a specific time is not currently possible. Webmasters may employ various techniques to ensure their websites are crawler-friendly [13, 27] and well connected to the Web. They may even submit their website URLs to SEs or use proprietary mechanisms like Google's Sitemap Protocol [12], but no technique will guarantee immediate indexing and caching of a website.

We would also like resources to remain cached long after they have been deleted from the web server (to remain endangered) so that they can be recovered for many days after their disappearance. SEs, on the other hand, may want to minimize the endangered period in order to purge missing content from their index. Just as we have no control over when a SE crawler will visit, we also have no control over cache eviction policies.
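These definitions translate directly into a few lines of bookkeeping. The sketch below is our own helper, using day-granularity integers for the timestamps; it classifies a resource's state on a given day and derives TTL_ws, TTL_c and P_r from observed crawl and cache dates:

```python
def state(day, t0, tr, ta=None, tp=None):
    """Lifecycle state of a resource on a given day."""
    on_server = t0 <= day < tr
    in_cache = ta is not None and ta <= day < (tp if tp is not None else float("inf"))
    if on_server and not in_cache:
        return "vulnerable"
    if on_server and in_cache:
        return "replicated"
    if in_cache:
        return "endangered"
    return "unrecoverable"

def lifetime_stats(t0, tr, ta, tp, days_retrievable):
    """TTL_ws, TTL_c and P_r from observed dates.

    days_retrievable is the number of days the cached copy actually answered
    our queries, since caches were not always consistently accessible.
    """
    ttl_ws = tr - t0
    ttl_c = tp - ta
    p_r = days_retrievable / ttl_c if ttl_c else 0.0
    return ttl_ws, ttl_c, p_r
```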
3.3 Web Collection Design
In order to obtain measurements for TTL_c and the other values in Figure 1, we created four synthetic web collections and placed them on websites for which we could obtain crawling data. We deployed the collections in June 2005 at four different locations: 1) www.owenbrau.com, 2) www.cs.odu.edu/~fmccown/lazy/, 3) www.cs.odu.edu/~jsmit/, and 4) www.cs.odu.edu/~mln/lazyp/. The .com website was new and had never been indexed by Google, Yahoo or MSN. The three .edu websites had existed for over a year and had been previously crawled by all three SEs. In order for the web collections to be found by the SEs, we placed links to the root of each web collection from the .edu websites, and we submitted owenbrau's base URL to Google, MSN and Yahoo one month prior to the experiment. For 90 days we systematically removed resources from each collection. We examined the server web logs to determine when resources were crawled, and we queried Google, MSN and Yahoo daily to determine when the resources were cached.

We organized each web collection into a series of 30 update bins (directories), each containing a number of HTML pages referencing the same three inline images (GIF, JPG and PNG) and a number of PDF files. An index.html file (with a single inline image) in the root of the web collection pointed to each of the bins. An index.html file in each bin pointed to the HTML pages and PDF files so a web crawler could easily find all the resources. All of these files were static and did not change throughout the 90-day period, except the index.html files in each bin, which were modified when links to deleted web pages were removed. In all, there were 381 HTML files, 350 PDF files, and 223 images in each web collection. More detail about the organization of the web collections and what the pages and images looked like can be found in [20, 26].

The PDF and HTML pages were made to look like typical web pages, with around 120 words per page. The text for each page was randomly generated from a standard English dictionary. By using random words we avoided creating duplicate pages that a SE might reject [6]; unfortunately, using random words may also cause pages to be flagged as spam [10]. Each HTML and PDF page contained a unique identifier (UID) at the top of the page (e.g., 'mlnODULPT2 dgrp18 pg18-2-pdf') encoding four identifiers: the web collection ('mlnODULPT2' indicates the 'mln' collection), the bin number ('dgrp18' indicates bin 18), and the page number and resource type ('pg18-2-pdf' indicates page number 2 from bin 18 and a PDF resource). The UID contains spaces to allow for more efficient querying of the SE caches.

The TTL_ws for each resource in the web collection is a function of its bin number b and page number p:

TTL_ws = b(\lfloor 90/b \rfloor - p + 1)    (1)
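A short script makes the deletion schedule implied by Equation (1) concrete. The floor in the equation is our reading of the (garbled) original, and it is consistent with the stated count of 350 non-index HTML pages per collection:

```python
import math

def ttl_ws(b, p):
    """Days that page p of bin b stays on the server (Equation 1)."""
    return b * (math.floor(90 / b) - p + 1)

# Bin b holds floor(90/b) pages; one page of bin b is removed every b days.
pages = [(b, p) for b in range(1, 31) for p in range(1, math.floor(90 / b) + 1)]
assert len(pages) == 350                            # 350 HTML (and PDF) pages per collection
assert max(ttl_ws(b, p) for b, p in pages) == 90    # page 1 of bin 1 lives the full 90 days
assert min(ttl_ws(b, p) for b, p in pages) == 1     # the last page of bin 1 lives a single day
```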
3.4 Daily SE Queries
In designing our daily SE queries, care was taken to perform a limited number of queries so as not to overburden the SEs. We could have queried the SEs using the URL for each resource, but this might have led to our resources being cached prematurely; it is possible that if a SE is queried for a URL it did not index, it will add the URL to a list of URLs to be crawled at a later date. This is how IA's advanced search interface handles missing URLs from users' queries.

To determine which HTML and PDF resources had been cached, we queried using subsets of the resources' UIDs and looked for cached URLs in the results pages. For example, to find PDF resources from the mln collection, we queried each SE to return the top 100 PDF results from the site www.cs.odu.edu containing the exact phrase 'mlnODULPT2 dgrp18' (MSN only allows limiting the results page to 50). It is necessary to divulge the site in the query or multiple results from the site will not be returned. Although this tells the SE on which site the resource is located, it does not divulge the URL of the resource. To query for cached images, we queried for the globally unique filename given to each image.

3.5 Crawling and Caching Observations
Although the web server logs registered visits from a variety of crawlers, we report only on crawls from Google, Inktomi (Yahoo) and MSN. (Due to a technical mishap beyond our control, we were unable to obtain crawling data for days 41-55 for owenbrau and parts of days 66-75 and 97 for the .edu web collections; we were also prevented from making cache queries on days 53, 54, 86 and 87.) Alexa Internet (which provides crawls to IA) only accessed our collection once (induced through our use of the Alexa toolbar). A separate IA robot accessed less than 1% of the collections, likely due to several submissions we made to their Wayback Machine's advanced search interface early in the experiment. Further analysis of the log data can be seen in a companion paper [26].

We report only detailed measurements for HTML resources (PDF resources were similar). Images were crawled and cached far less frequently; Google and Picsearch (the MSN Images provider) were the only ones to crawl a significant number of images. The three .edu collections had 29% of their images crawled, and owenbrau had 14% of its images crawled. Only 4 unique images appeared in Google Images, all from the mln collection; Google likely used an image duplication detection algorithm to prevent duplicate images from different URLs from being cached. Only one image (from fmccown) appeared in MSN Images. None of the cached images fell out of cache during our experiment.

Table 2 summarizes the performance of each SE in crawling and caching the 350 HTML resources of each of the four web collections. This table does not include the index.html resources, which had an infinite TTL_ws. We believe there was an error in the MSN query script which caused fewer resources to be found in the MSN cache, but the percentage of crawled URLs provides an upper bound on the number of cached resources; this has little to no effect on the other measurements reported.

Table 2: Caching of HTML resources from the 4 web collections (350 HTML resources in each collection)
               | % URLs crawled | % URLs cached | t_ca (days)   | TTL_c / P_r                       | Endangered (days)
Web collection | G    M    Y    | G    M    Y   | G    M    Y   | G         | M         | Y         | G    M    Y
fmccown        | 91   41   56   | 91   16   36  | 13   65   47  | 90 / 0.78 | 20 / 0.87 | 35 / 0.57 | 51   9    24
jsmit          | 92   31   92   | 92   14   65  | 12   66   47  | 86 / 0.82 | 20 / 0.91 | 36 / 0.55 | 47   7    25
mln            | 94   33   84   | 94   14   49  | 10   65   54  | 87 / 0.83 | 21 / 0.90 | 24 / 0.46 | 47   8    19
owenbrau       | 18   0    0    | 20   0    0   | 103  N/A  N/A | 40 / 0.98 | N/A       | N/A       | 61   N/A  N/A
Ave            | 74   26   58   | 74   11   37  | 35   66   50  | 76 / 0.86 | 20 / 0.89 | 32 / 0.53 | 51   8    23
(t_ca is the average number of days from a resource's appearance until it was cached; Endangered is the average number of days a deleted resource remained cached.)

The three SEs showed equal interest in crawling HTML and PDF resources. Inktomi (Yahoo) crawled twice as many resources as MSN, and Google crawled almost three times as many resources as MSN. Google was the only SE to crawl and cache any resources from the new owenbrau website.

From a preservation perspective, Google out-performed MSN and Yahoo in nearly every category. Google cached the highest percentage of HTML resources (76%) and took only 12 days on average to cache new resources from the .edu web collections.
On average, Google cached HTML resources for the longest period of time (76 days), consistently provided access to the cached resources (86%), and was the slowest to remove cached resources that had been deleted from the web server (51 days). Although Yahoo cached more HTML resources and kept them cached longer than MSN, the probability of accessing a resource on any given day was only 53%, compared to 89% for MSN.

Figure 2: Crawling (top) and caching (bottom) of HTML resources from the mln web collection

Figure 2 provides an interesting look at the crawling and caching behavior of Google, Yahoo and MSN. These graphs illustrate the crawling and caching of HTML resources from the mln collection; the other two .edu collections exhibited similar behavior. The resources are sorted by TTL_ws, with the longest-living resources appearing at the bottom. The index.html files, which were never removed from the web collection, have an infinite TTL (marked 'inf'). The red diagonal line indicates the decay of the web collection; on any particular day, only resources below the red line were accessible from the web server. On the top row of Figure 2, blue dots indicate resources that were crawled on a particular day. When deleted resources were requested, the web server responded with a 404 (not found) code, represented by green dots above the red line. The bottom row of Figure 2 shows the cached HTML resources (blue) resulting from the crawls. Some pages in Yahoo were indexed but not cached (green).

As Figure 2 illustrates, both Google and MSN were quick to make resources available in their caches soon after they were crawled, and they were quick to purge resources from their caches when a crawl revealed that the resources were no longer available on the web server. A surprising finding is that many of the HTML resources previously purged from Google's cache reappeared on day 102 and remained cached for the remainder of our experiment. The other two .edu collections exhibited similar behavior for HTML resources. HTML and PDF resources from owenbrau appeared in the Google cache for the first time on day 102; these resources had been deleted from the web server 10-20 days before day 102. Manual inspection weeks after the experiment had concluded revealed that the pages remained in Google's cache and fell out months later.

Yahoo was very sporadic in caching resources; there was often a lag of 30 days between the crawl of a resource and its appearance in cache. Many of the crawled resources never appeared in Yahoo's cache. Although Inktomi crawled nearly every available HTML resource on day 10, only half of those resources ever became available in the Yahoo cache. We have observed through subsequent interaction with Yahoo that links to cached content may appear and disappear when performing the same query just a few seconds apart. This likely accounts for the observed cache inconsistency.

We have observed from our measurements that nearly all new HTML and PDF resources that we placed on known websites were crawled and cached by Google several days after they were discovered.
Resources on a new website were\nnot cached for months.\nYahoo and MSN were 4-5 times\nslower than Google to acquire new resources, and Yahoo incurs\na long transfer delay from Inktomi's crawls into their\ncache.\nWe have also observed that cached resources are\noften purged from all three caches as soon as a crawl reveals\nthe resources are missing, but in the case of Google,\nmany HTML resources have reappeared weeks after being\nremoved. Images tend to be largely ignored.\nSearch engines may crawl and cache other websites differ-ently\ndepending on a variety of factors including perceived\nlevel of importance (e.g., PageRank) and modification rates.\nCrawling policies may also be changed over time. This experiment\nmerely provides a glimpse into the current caching\nbehavior of the top three SEs that has not been documented\nbefore. Our findings suggest that SEs vary greatly in the\nlevel of access they provide to cached resources, and that\nwebsites are likely to be reconstructed more successfully if\nthey are reconstructed quickly after being lost. Reconstructions\nshould also be performed several days in a row to ensure\nmaximum access to web repository holdings. In some\ncases, it may even be beneficial to attempt recovering resources\neven a month after they have been lost.\nRECONSTRUCTING WEBSITES\nWe define a reconstructed website to be the collection\nof recovered resources that share the same URIs as the resources\nfrom a lost website or from some previous version of\nthe lost website [19]. The recovered resources may be equivalent\nto, or very different from, the lost resources. For websites\nthat are composed of static files, recovered resources\nwould be equivalent to the files that were lost. For sites\nproduced dynamically using CGI, PHP, etc., the recovered\nresources would match the client's view of the resources and\nwould be useful to the webmaster in rebuilding the server-side\ncomponents. The server-side components are currently\nnot recoverable using lazy preservation (see Section 5).\nTo quantify the difference between a reconstructed website\nand a lost website, we classify the recovered resources from\nthe website graphs. A website can be represented as a graph\nG = (V, E) where each resource r\ni\n(HTML, PDF, image,\netc.), identified by a URI, is a node\nv\ni\n, and there exists\na directed edge from\nv\ni\nto\nv\nj\nwhen there is a hyperlink or\nreference from\nr\ni\nto\nr\nj\n. The left side of Figure 3 shows a web\ngraph for some website\nW if we began to crawl it starting\nat A. Suppose\nW was lost and reconstructed forming the\nwebsite\nW represented in the center of Figure 3.\nFor each resource\nr\ni\nin\nW we may examine its corresponding\nresource\nr\ni\nin\nW that shares the same URI and categorize\nr\ni\nas identical (\nr\ni\nis byte-for-byte identical to\nr\ni\n),\nchanged (\nr\ni\nis not identical to\nr\ni\n), or missing (\nr\ni\ncould not\nbe found in any web). We would categorize those resources\nin\nW that did not share a URI with any resource in W\nas added (\nr\ni\nwas not a part of the current website but was\nrecovered due to a reference from\nr\nj\n).\nFigure 3 shows that resources A, G and E were reconstructed\nand are identical to their lost versions. An older\nversion of B was found (B') that pointed to G, a resource\nthat does not currently exist in\nW . Since B' does not reference\nD, we did not know to recover it (it is possible that\nG is actually D renamed). 
4.2 Warrick Operation
Warrick, our web-repository crawler, is able to reconstruct a website when given a base URL pointing to where the site used to exist. The web repositories are crawled by issuing queries in the form of URLs to access their stored holdings. For example, Google's cached version of http://foo.edu/page1.html can be accessed like so: http://search.google.com/search?q=cache:http://foo.edu/page1.html. If Google has not cached the page, an error page is generated. Otherwise the cached page can be stripped of any Google-added HTML, and the page can be parsed for links to other resources from the foo.edu domain (and other domains if necessary). Most repositories require two or more queries to obtain a resource.

For each URL, the file extension (if present) is examined to determine whether the URL is an image (.png, .gif, .jpg, etc.) or some other resource type. All three SEs use a different method for retrieving images than for other resource types; IA has the same interface regardless of type. We would have better accuracy in determining whether a given URL references an image if we knew the URL's resource MIME type, but this information is not available to us.

IA is the first web repository queried by Warrick because it keeps a canonical version of all web resources. When querying for an image URL, if IA does not have the image, then Google and Yahoo are queried one at a time until one of them returns an image. Google and Yahoo do not publicize the cached date of their images, so it is not possible to pick the most recently cached image. If a non-image resource is being retrieved, again IA is queried first. If IA has the resource and the resource does not have a MIME type of 'text/html', then the SEs are not queried, since they only store canonical versions of HTML resources. If the resource does have a 'text/html' MIME type (or IA did not have a copy), then all three SEs are queried, the cache dates of the resources are compared (when available), and the most recent resource is chosen.

Warrick searches recovered HTML resources for URLs to other resources and adds them to the crawl frontier (a queue). Resources are recovered in breadth-first order, and reconstruction continues until the frontier is empty. All recovered resources are stored on the local filesystem, and a log is kept of recovered and missing resources. Warrick limits its requests per day to the web repositories based on their published API values (Google, 1000; Yahoo, 5000; MSN, 10,000) or, lacking an API, our best guess (IA, 1000). If any repository's limit is exceeded, Warrick checkpoints and sleeps for 24 hours.
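The repository-selection rules above can be summarized in a few lines. In this sketch the fetch_* callables are placeholders for the per-repository lookup code (the real interfaces differ per repository and, aside from the Google cache URL pattern shown above, are not spelled out here), so the code only captures the ordering and tie-breaking logic:

```python
IMAGE_EXTENSIONS = (".png", ".gif", ".jpg", ".jpeg")

def recover(url, fetch_ia, fetch_google, fetch_yahoo, fetch_msn):
    """Pick one stored copy of `url` from the four web repositories.

    Each fetch_* callable returns (content, mime_type, cache_date) or None;
    cache_date is assumed to be an integer timestamp, with missing dates
    treated as oldest.
    """
    ia_copy = fetch_ia(url)
    if url.lower().endswith(IMAGE_EXTENSIONS):
        # Images: IA first, then Google and Yahoo (MSN/Picsearch images
        # cannot be requested by URL); image cache dates are not published.
        return ia_copy or fetch_google(url) or fetch_yahoo(url)

    if ia_copy and ia_copy[1] != "text/html":
        return ia_copy   # IA's copy is canonical; SEs only store modified non-HTML

    # HTML (or nothing at IA): ask the SEs as well and keep the freshest copy.
    candidates = [c for c in (ia_copy, fetch_google(url),
                              fetch_yahoo(url), fetch_msn(url)) if c]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[2] or 0)
```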
If IA has the resource and the resource does\nnot have a MIME type of `text/html', then the SEs are not\nqueried since they only store canonical versions of HTML\nresources. If the resource does have a `text/html' MIME\ntype (or IA did not have a copy), then all three SEs are\nqueried, the cache dates of the resources are compared (if\navailable), and the most recent resource is chosen.\nWarrick will search HTML resources for URLs to other\nresources and add them to the crawl frontier (a queue). Resources\nare recovered in breadth-first order, and reconstruction\ncontinues until the frontier is empty. All recovered resources\nare stored on the local filesystem, and a log is kept of\nrecovered and missing resources. Warrick limits its requests\nper day to the web repositories based on their published API\nvalues (Google, 1000; Yahoo, 5000; MSN, 10,000) or lacking\nan API, our best guess (IA, 1000). If any repository's limit\nis exceeded, Warrick will checkpoint and sleep for 24 hours.\n4.3\nReconstruction Experiment and Results\nTo gauge the effectiveness of lazy preservation for website\nreconstruction, we compared the snap-shot of 24 live websites\nwith their reconstructions. We chose sites that were\neither personally known to us or randomly sampled from\ndmoz.org. The websites (some were actually subsites) were\npredominantly English, covered a range of topics, and were\nfrom a number of top-level domains. We chose 8 small (\n<150\nURIs), 8 medium (150-499 URIs) and 8 large (\n500 URIs)\nwebsites, and we avoided websites that used robots.txt and\nFlash exclusively as the main interface.\nIn August 2005 we downloaded all 24 websites by starting\nat the base URL and following all links and references that\nthat were in and beneath the starting directory, with no limit\nto the path depth. For simplicity, we restricted the download\nto port 80 and did not follow links to other hosts within\nthe same domain name. So if the base URL for the website\nwas http://www.foo.edu/bar/, only URLs matching http:\n//www.foo.edu/bar/* were downloaded. Warrick uses the\nsame default setting for reconstructing websites.\nImmediately after downloading the websites, we reconstructed\nfive different versions for each of the 24 websites:\nfour using each web repository separately, and one using\nall web repositories together. The different reconstructions\nhelped to show how effective individual web repositories\ncould reconstruct a website versus the aggregate of all four\nweb repositories.\nWe present 4 of the 24 results of the aggregate reconstructions\nin Table 3, ordered by percent of recovered URIs.\nThe complete results can be seen in [20].\nThe `PR' column\nis Google's PageRank (0-10 with 10 being the most\nimportant) for the root page of each website at the time of\nthe experiments. (MSN and Yahoo do not publicly disclose\ntheir `importance' metric.) For each website, the total number\nof resources in the website is shown along with the total\nnumber of resources that were recovered and the percentage.\nResources are also totalled by MIME type. The difference\nvector for the website accounts for recovered files that were\nadded.\nThe `Almost identical' column of Table 3 shows the percentage\nof text-based resources (e.g., HTML, PDF, PostScript\n, Word, PowerPoint, Excel) that were almost identical\nto the originals. 
The last column shows the reconstruction\nfigure for each website if these almost identical resources are\nmoved from the `Changed' category to `Identical' category.\nWe considered two text-based resources to be almost identical\nif they shared at least 75% of their shingles of size 10.\nShingling (as proposed by Broder et al. [3]) is a popular\nmethod for quantifying similarity of text documents when\nword-order is important [2, 11, 21]. We did not use any\nimage similarity metrics.\nWe were able to recover more than 90% of the original resources\nfrom a quarter of the 24 websites. For three quarters\nof the websites we recovered more than half of the resources.\nOn average we were able to recover 68% of the website resources\n(median=72%). Of those resources recovered, 30%\nof them on average were not byte-for-byte identical. A majority\n(72%) of the `changed' text-based files were almost\nidentical to the originals (having 75% of their shingles in\ncommon). 67% of the 24 websites had obtained additional\nfiles when reconstructed which accounted for 7% of the total\nnumber of files reconstructed per website.\nWhen all website resources are aggregated together and\nexamined, dynamic pages (those that contained a `?'\nin\nthe URL) were significantly less likely to be recovered than\nresources that did not have a query string (11% vs. 73%).\nURLs with a path depth greater than three were also less\nlikely to be recovered (52% vs. 61%). A chi-square analysis\nconfirms the significance of these findings (p\n< .001). We\nwere unable to find any correlation between percentage of\nrecovered resources with PageRank or website size.\nThe success of recovering resources based on their MIME\ntype is plotted in Figure 4. The percentage of resources\n72\n0\n25\n50\n75\n100\n125\n150\n175\n200\n225\nhtml\nimages\npdf\nother\nms\nMIME type groups\nNu\nm\nb\ner o\nf\nreso\nu\nr\nces\n0%\n10%\n20%\n30%\n40%\n50%\n60%\n70%\n80%\n90%\n100%\nAve # of resources\nin original websites\nAggregate % recon\nIA % recon\nGoogle % recon\nMSN % recon\nYahoo! % recon\nFigure 4: Recovery success by MIME type\nthat were recovered from the five different website reconstructions\nwe performed (one using all four web repositories\n, and four using each web repository individually) are\nshown along with the average number of resources making\nup the 24 downloaded (or original) websites. A majority\n(92%) of the resources making up the original websites are\nHTML and images. We were much more successful at recovering\nHTML resources than images; we recovered 100%\nof the HTML resources for 9 of the websites (38%) using\nall four web repositories. It is likely we recovered fewer images\nbecause MSN cannot be used to recover images, and as\nour caching experiment revealed, images are also much less\nlikely to be cached than other resource types.\nFigure 4 also emphasizes the importance of using all four\nweb repositories when reconstructing a website. By just using\nIA or just using Google, many resources will not be\nrecovered.\nThis is further illustrated by Figure 5 which\nshows the percentage of each web repository's contribution\nin the aggregate reconstructions (sites are ordered by number\nof URIs). Although Google was the largest overall contributor\nto the website reconstructions (providing 44% of\nthe resources) they provided none of the resources for site\n17 and provided less than 30% of the resources for 9 of\nthe reconstructions. 
MSN contributed on average 30% of the resources; IA was third with 19%, and Yahoo was last with a 7% contribution rate. Yahoo's poor contribution rate is likely due to their spotty cache access, as exhibited in our caching experiment (Figure 2), and because last-modified datestamps are frequently older than last-cached datestamps (Warrick chooses resources with the most recent datestamps).

Figure 5: Web repositories contributing to each website reconstruction (contribution percentage of Yahoo, IA, MSN and Google for each of the 24 reconstructed websites).

The amount of time and the number of queries required to reconstruct all 24 websites (using all 4 repositories) is shown in Figure 6. Here we see almost a 1:1 ratio of queries to seconds. Although the size of the original websites gets larger along the x-axis, the number of files reconstructed and the number of resources held in each web repository determine how many queries are performed. In none of our reconstructions did we exceed the daily query limit of any of the web repositories.

Figure 6: Number of queries performed and time taken (in seconds) to reconstruct the websites.

FUTURE WORK

We have made Warrick available on the Web (http://www.cs.odu.edu/~fmccown/warrick/), and it has been used to reconstruct several websites that have been lost due to fire, hard-drive crashes, death of the website owner, hacking, and discontinued charitable website hosting [19]. Although the reconstructions have not been complete, individuals are very thankful to have recovered any resources at all when faced with total loss.

There are numerous improvements we are making to Warrick, including an API for easier inclusion of new web repositories and new methods for discovering more resources within a web repository [19]. We are planning on reconstructing a larger sample from the Web to discover the website characteristics that allow for more effective "lazy recovery". Discovering such characteristics will allow us to create guidelines for webmasters to ensure better lazy preservation of their sites. Our next experiment will take into account rate of change and reconstruction differences over time.

We are also interested in recovering the server-side components (CGI programs, databases, etc.) of a lost website. We are investigating methods to inject server-side components into indexable content using erasure codes (popular with RAID systems [22]) so they can be recovered from web repositories when only a subset of pages can be found.

A web-repository crawler could be used in the future to safeguard websites that are at risk of being lost. When a website is detected as being lost, a reconstruction could be initiated to preserve what is left of the site. Additionally, websites in countries that are targeted by political censorship could be reconstructed at safe locations.

CONCLUSIONS

Lazy preservation is a best-effort, wide-coverage digital preservation service that may be used as a last resort when website backups are unavailable. It is not a substitute for digital preservation infrastructure and policy.
Web repositories\nmay not crawl orphan pages, protected pages (e.g.,\nrobots.txt, password, IP), very large pages, pages deep in\na web collection or links influenced by JavaScript, Flash or\nsession IDs. If a web repository will not or cannot crawl and\ncache a resource, it cannot be recovered.\nWe have measured the ability of Google, MSN and Yahoo\nto cache four synthetic web collections over a period of\nfour months. We measured web resources to be vulnerable\nfor as little as 10 days and in the worst case, as long as\nour 90 day test period. More encouragingly, many HTML\nresources were recoverable for 851 days on average after\nbeing deleted from the web server. Google proved to be the\nmost consistent at caching our synthetic web collections.\nWe have also used our web-repository crawler to reconstruct\na variety of actual websites with varying success.\nHTML resources were the most numerous (52%) type of resource\nin our collection of 24 websites and were the most successfully\nrecoverable resource type (89% recoverable). Images\nwere the second most numerous (40%) resource type,\nbut they were less successfully recovered (53%). Dynamic\npages and resources with path depths greater than three\nwere less likely to be recovered. Google was the most frequent\nsource for the reconstructions (44%), but MSN was a\nclose second (30%), followed by IA (19%) and Yahoo (7%).\nThe probability of reconstruction success was not correlated\nwith Google's PageRank or the size of the website.\nREFERENCES\n[1] H. Berghel. Responsible web caching. Communications\nof the ACM, 45(9):1520, 2002.\n[2] K. Bharat and A. Broder. Mirror, mirror on the web:\na study of host pairs with replicated content. In\nProceedings of WWW '99, pages 15791590, 1999.\n[3] A. Z. Broder, S. C. Glassman, M. S. Manasse, and\nG. Zweig. Syntactic clustering of the Web. Computer\nNetworks & ISDN Systems, 29(8-13):11571166, 1997.\n[4] M. Burner. Crawling towards eternity: Building an\narchive of the world wide web. Web Techniques\nMagazine, 2(5), 1997.\n[5] F. Can, R. Nuray, and A. B. Sevdik. Automatic\nperformance evaluation of web search engines. Info.\nProcessing & Management, 40(3):495514, 2004.\n[6] J. Cho, N. Shivakumar, and H. Garcia-Molina.\nFinding replicated web collections. In Proceedings of\nSIGMOD '00, pages 355366, 2000.\n[7] B. F. Cooper and H. Garcia-Molina. Infomonitor:\nUnobtrusively archiving a World Wide Web server.\nInternational Journal on Digital Libraries,\n5(2):106119, April 2005.\n[8] M. Day. Collecting and preserving the World Wide\nWeb. 2003. http:\n//library.wellcome.ac.uk/assets/WTL039229.pdf.\n[9] C. E. Dyreson, H. Lin, and Y. Wang. Managing\nversions of web documents in a transaction-time web\nserver. In Proceedings of WWW '04, pages 422432,\n2004.\n[10] D. Fetterly, M. Manasse, and M. Najork. Spam, damn\nspam, and statistics: using statistical analysis to\nlocate spam web pages. In Proceedings of WebDB '04,\npages 16, 2004.\n[11] D. Fetterly, M. Manasse, M. Najork, and J. Wiener. A\nlarge-scale study of the evolution of web pages. In\nProceedings of WWW '03, pages 669678, 2003.\n[12] Google Sitemap Protocol, 2005. http://www.google.\ncom/webmasters/sitemaps/docs/en/protocol.html.\n[13] Google webmaster help center: Webmaster guidelines,\n2006. http://www.google.com/support/webmasters/\nbin/answer.py?answer=35769.\n[14] M. Gordon and P. Pathak. Finding information on the\nWorld Wide Web: the retrieval effectiveness of search\nengines. Inf. Process. Manage., 35(2):141180, 1999.\n[15] A. Gulli and A. 
Signorini. The indexable web is more\nthan 11.5 billion pages. In Proceedings of WWW '05,\npages 902903, May 2005.\n[16] Internet Archive FAQ: How can I get my site included\nin the Archive?, 2006.\nhttp://www.archive.org/about/faqs.php.\n[17] D. Lewandowski, H. Wahlig, and G. Meyer-Beautor.\nThe freshness of Web search engine databases. Journal\nof Information Science, 32(2):131148, Apr 2006.\n[18] F. McCown, X. Liu, M. L. Nelson, and M. Zubair.\nSearch engine coverage of the OAI-PMH corpus. IEEE\nInternet Computing, 10(2):6673, Mar/Apr 2006.\n[19] F. McCown and M. L. Nelson. Evaluation of crawling\npolicies for a web-repository crawler. In Proceedings of\nHYPERTEXT '06, pages 145156, 2006.\n[20] F. McCown, J. A. Smith, M. L. Nelson, and J. Bollen.\nReconstructing websites for the lazy webmaster.\nTechnical report, Old Dominion University, 2005.\nhttp://arxiv.org/abs/cs.IR/0512069.\n[21] A. Ntoulas, J. Cho, and C. Olston. What's new on the\nWeb? The evolution of the Web from a search engine\nperspective. In Proceedings of WWW '04, pages 112,\n2004.\n[22] J. S. Plank. A tutorial on Reed-Solomon coding for\nfault-tolerance in RAID-like systems. Software:\nPractice and Experience, 27(9):9951012, 1997.\n[23] H. C. Rao, Y. Chen, and M. Chen. A proxy-based\npersonal web archiving service. SIGOPS Operating\nSystems Review, 35(1):6172, 2001.\n[24] V. Reich and D. S. Rosenthal. LOCKSS: A permanent\nweb publishing and access system. D-Lib Magazine,\n7(6), 2001.\n[25] A. Ross. Internet Archive forums: Web forum posting.\nOct 2004. http://www.archive.org/iathreads/\npost-view.php?id=23121.\n[26] J. A. Smith, F. McCown, and M. L. Nelson. Observed\nweb robot behavior on decaying web subsites. D-Lib\nMagazine, 12(2), Feb 2006.\n[27] M. Weideman and M. Mgidana. Website navigation\narchitectures and their effect on website visibility: a\nliterature survey. In Proceedings of SAICSIT '04,\npages 292296, 2004.\n[28] J. Zhang and A. Dimitroff. The impact of webpage\ncontent characteristics on webpage visibility in search\nengine results (part I). Information Processing &\nManagement, 41(3):665690, 2005.\n74\n", "keywords": "Search engines (SEs);cached resources;web repositories;recovery;reconstruction;crawling;caching;lazy preservation;search engine;digital preservation"} {"name": "123", "title": "Learning Concepts from Large Scale Imbalanced Data Sets Using Support Cluster Machines", "abstract": "This paper considers the problem of using Support Vector Machines (SVMs) to learn concepts from large scale imbalanced data sets. The objective of this paper is twofold. Firstly, we investigate the effects of large scale and imbalance on SVMs. We highlight the role of linear non-separability in this problem. Secondly, we develop a both practical and theoretical guaranteed meta-algorithm to handle the trouble of scale and imbalance. The approach is named Support Cluster Machines (SCMs). It incorporates the informative and the representative under-sampling mechanisms to speedup the training procedure. The SCMs differs from the previous similar ideas in two ways, (a) the theoretical foundation has been provided, and (b) the clustering is performed in the feature space rather than in the input space. The theoretical analysis not only provides justification , but also guides the technical choices of the proposed approach. Finally, experiments on both the synthetic and the TRECVID data are carried out. 
The results support the previous analysis and show that the SCMs are efficient and effective while dealing with large scale imbalanced data sets.", "fulltext": "INTRODUCTION\nIn the context of concept modelling, this paper considers\nthe problem of how to make full use of the large scale annotated\ndata sets. In particular, we study the behaviors of\nSupport Vector Machines (SVMs) on large scale imbalanced\ndata sets, not only because its solid theoretical foundations\nbut also for its empirical success in various applications.\n1.1\nMotivation\nBridging the semantic gap has been becoming the most\nchallenging problem of Multimedia Information Retrieval\n(MIR). Currently, there are mainly two types of methods\nto bridge the gap [8]. The first one is relevance feedback\nwhich attempts to capture the user's precise needs through\niterative feedback and query refinement. Another promising\ndirection is concept modelling. As noted by Hauptmann\n[14], this splits the semantic gap between low level features\nand user information needs into two, hopefully smaller gaps:\n(a) mapping the low-level features into the intermediate semantic\nconcepts and (b) mapping these concepts into user\nneeds. The automated image annotation methods for CBIR\nand the high level feature extraction methods in CBVR are\nall the efforts to model the first mapping. Of these methods,\nsupervised learning is one of the most successful ones. An\nearly difficulty of supervised learning is the lack of annotated\ntraining data. Currently, however, it seems no longer\na problem. This is due to both the techniques developed to\nleverage surrounding texts of web images and the large scale\ncollaborative annotation. Actually, there is an underway effort\nnamed Large Scale Concept Ontology for Multimedia\nUnderstanding (LSCOM), which intends to annotate 1000\nconcepts in broadcast news video [13]. The initial fruits of\nthis effort have been harvested in the practice of TRECVID\nhosted by National Institute of Standards and Technology\n(NIST) [1]. In TRECVID 2005, 39 concepts are annotated\nby multiple participants through web collaboration, and ten\nof them are used in the evaluation.\nThe available large amount of annotated data is undoubt-edly\nbeneficial to supervised learning. However, it also brings\nout a novel challenge, that is, how to make full use of the\ndata while training the classifiers. On the one hand, the annotated\ndata sets are usually in rather large scale. The de-441\nvelopment set of TRECVID 2005 includes 74523 keyframes.\nThe data set of LSCOM with over 1000 annotated concepts\nmight be even larger. With all the data, the training of\nSVMs will be rather slow. On the other hand, each concept\nwill be the minority class under one-against-all strategy\n. Only a small portion of the data belong to the concept,\nwhile all the others are not (In our case, the minority class\nalways refers to the positive class). The ratio of the positive\nexamples and the negative ones is typically below 1 : 100\nin TRECVID data. These novel challenges have spurred\ngreat interest in the communities of data mining and machine\nlearning[2, 6, 21, 22, 29]. Our first motivation is to\ninvestigate the effects of large scale and imbalance on SVMs.\nThis is critical for correct technical choices and development.\nThe second objective of this paper is to provide a practical\nas well as theoretical guaranteed approach to addressing the\nproblem.\n1.2\nOur Results\nThe major contribution of this paper can be summarized\nas follows:\n1. 
We investigate the effects of large scale and imbalance\non SVMs and highlight the role of linear non-separability\nof the data sets. We find that SVMs has\nno difficulties with linear separable large scale imbalanced\ndata.\n2. We establish the relations between the SVMs trained\non the centroids of the clusters and the SVMs obtained\non the original data set. We show that the difference\nbetween their optimal solutions are bounded by the\nperturbation of the kernel matrix. We also prove the\noptimal criteria for approximating the original optimal\nsolutions.\n3. We develop a meta-algorithm named Support Cluster\nMachines (SCMs).\nA fast kernel k-means approach\nhas been employed to partition the data in the feature\nspace rather than in the input space.\nExperiments on both the synthetic data and the TRECVID\ndata are carried out. The results support the previous analysis\nand show that the SCMs are efficient and effective while\ndealing with large scale imbalanced data sets.\n1.3\nOrganization\nThe structure of this paper is as follows. In Section 2 we\ngive a brief review of SVMs and kernel k-means. We discuss\nthe effects of the large scale imbalanced data on SVMs\nin Section 3. We develop the theoretical foundations and\npresent the detailed SCMs approach in Section 4. In Section\n5 we carry out experiments on both the synthetic and\nthe TRECVID data sets. Finally, we conclude the paper in\nSection 6.\nPRELIMINARIES\nHere, we present a sketch introduction to the soft-margin\nSVMs for the convenience of the deduction in Section 4. For\na binary classification problem, given a training data set\nD\nof size n\nD = {(x\ni\n, y\ni\n)\n|x\ni\nR\nN\n, y\ni\n{1, -1}},\nwhere x\ni\nindicates the training vector of the ith sample and\ny\ni\nindicates its target value, and i = 1, . . . , n. The classification\nhyperplane is defined as\nw, (x) + b = 0,\nwhere (\n) is a mapping from R\nN\nto a (usually) higher dimension\nHilbert space\nH, and , denotes the dot product\nin\nH. Thus, the decision function f(x) is\nf (x) = sign( w, (x) + b).\nThe SVMs aims to find the hyperplane with the maximum\nmargin between the two classes, i.e., the optimal hyperplane.\nThis can be obtained by solving the following quadratic optimization\nproblem\nmin\nw,b,\n1\n2 w\n2\n+ C\nn\ni=1\n\ni\nsubject to\ny\ni\n( w, (x\ni\n) + b)\n1 i\n(1)\n\ni\n0, i = 1, . . . , n.\nWith the help of Lagrange multipliers, the dual of the above\nproblem is\nmin\n\nG() = 1\n2\nT\nQ\n- e\nT\n\nsubject to\n0\n\ni\nC, i = 1, . . . , n\n(2)\n\nT\ny = 0,\nwhere is a vector with components\ni\nthat are the Lagrange\nmultipliers, C is the upper bound, e is a vector of\nall ones, and Q is an n\nn positive semi-definite matrix,\nQ\nij\n= y\ni\ny\nj\n(x\ni\n), (x\nj\n) . Since the mapping (\n) only appears\nin the dot product, therefore, we need not know its\nexplicit form. Instead, we define a kernel K(\n, ) to calculate\nthe dot product, i.e., K(x\ni\n, x\nj\n) = (x\ni\n), (x\nj\n) . The matrix\nK with components K(x\ni\n, x\nj\n) is named Gram Matrix\n(or kernel matrix). With kernel K\n, , we can implicitly\nmap the training data from input space to a feature space\nH.\n2.2\nKernel\nk\n-means and Graph Partitioning\nGiven a set of vectors x\n1\n, . . . , x\nn\n, the standard k-means\nalgorithm aims to find clusters\n1\n, . . . 
,\nk\nthat minimize the\nobjective function\nJ(\n{\nc\n}\nk\nc=1\n) =\nk\nc=1 x\ni\n\nc\nx\ni\n- m\nc 2\n,\n(3)\nwhere\n{\nc\n}\nk\nc=1\ndenotes the partitioning of the data set and\nm\nc\n=\n\nxic\nx\ni\n|\nc\n|\nis the centroid of the cluster\nc\n. Similar\nto the idea of nonlinear SVMs, the k-means can also be\nperformed in the feature space with the help of a nonlinear\nmapping (\n), which results in the so-called kernel k-means\nJ(\n{\nc\n}\nk\nc=1\n) =\nk\nc=1 x\ni\n\nc\n(x\ni\n)\n- m\nc 2\n,\n(4)\nwhere m\nc\n=\n\nxic\n(x\ni\n)\n|\nc\n|\n. If we expand the Euclidean distance\n(x\ni\n)\n- m\nc 2\nin the objective function, we can find\nthat the image of x\ni\nonly appears in the form of dot product\n. Thus, given a kernel matrix K with the same meaning\n442\nin SVMs, we can compute the distance between points and\ncentroids without knowing explicit representation of (x\ni\n).\nRecently, an appealing alternative, i.e., the graph clustering\nhas attracted great interest. It treats clustering as\na graph partition problem. Given a graph G = (\nV, E, A),\nwhich consists of a set of vertices\nV and a set of edges E such\nthat an edge between two vertices represents their similarity.\nThe affinity matrix A is\n|V||V| whose entries represent the\nweights of the edges. Let links(\nV\n1\n,\nV\n2\n) be the sum of the\nedge weights between the nodes in\nV\n1\nand\nV\n2\n, that is\nlinks(\nV\n1\n,\nV\n2\n) =\niV\n1\n,jV\n2\nA\nij\n.\nRatio association is a type of graph partitioning objective\nwhich aims to maximize within-cluster association relative\nto the size of the cluster\nRAssoc(G) =\nmax\nV\n1\n,...,V\nk\nk\nc=1\nlinks(\nV\nc\n,\nV\nc\n)\n|V\nc\n|\n.\n(5)\nThe following theorem establishes the relation between kernel\nk-means and graph clustering [10].\nWith this result,\nwe can develop some techniques to handle the difficulty of\nstoring the large kernel matrix for kernel k-means.\nTheorem 1. Given a data set, we can construct a weighted\ngraph G = (\nV, E, A), by treating each sample as a node and\nlinking an edge between each other. If we define the edge\nweight A\nij\n= K(x\ni\n, x\nj\n), that is, A = K, the minimization\nof (4) is equivalent to the maximization of (5).\nTHE EFFECTS OF LARGE SCALE IM-BALANCED DATA ON SVMS\nThere are two obstacles yielded by large scale. The first\none is the kernel evaluation, which has been intensively discussed\nin the previous work. The computational cost scales\nquadratically with the data size. Furthermore, it is impossible\nto store the whole kernel matrix in the memory for\ncommon computers. The decomposition algorithms (e.g.,\nSMO) have been developed to solve the problem [20, 22].\nThe SMO-like algorithms actually transform the space load\nto the time cost, i.e., numerous iterations until convergence.\nTo reduce or avoid the kernel reevaluations, various efficient\ncaching techniques are also proposed [16]. Another obstacle\ncaused by large scale is the increased classification difficulty\n, that is, the more probable data overlapping. We can\nnot prove it is inevitable but it typically happens. Assume\nwe will draw n randomly chosen numbers between 1 to 100\nfrom a uniform distribution, our chances of drawing a number\nclose to 100 would improve with increasing values of n,\neven though the expected mean of the draws is invariant [2].\nThe checkerboard experiment in [29] is an intuitive example\n. 
This is true especially for the real world data, either\nbecause of the weak features (we mean features that are\nless discriminative) or because of the noises. With the large\nscale data, the samples in the overlapping area might be\nso many that the samples violating KKT conditions become\nabundant. This means the SMO algorithm might need more\niterations to converge.\nGenerally, the existing algorithmic approaches have not\nbeen able to tackle the very large data set. Whereas, the\nunder-sampling method, e.g., active learning, is possible.\nWith unlabelled data, active learning selects a well-chosen\nsubset of data to label so that reduce the labor of manual annotations\n[24]. With large scale labelled data, active learning\ncan also be used to reduce the scale of training data [21].\nThe key issue of active learning is how to choose the most\n\"valuable\" samples. The informative sampling is a popular\ncriterion. That is, the samples closest to the boundary or\nmaximally violating the KKT conditions (the misclassified\nsamples) are preferred [24, 26]. Active learning is usually\nin an iterative style. It requires an initial (usually random\nselected) data set to obtain the estimation of the boundary.\nThe samples selected in the following iterations depend on\nthis initial boundary. In addition, active learning can not\nwork like the decomposition approach which stops until all\nthe samples satisfy the KKT conditions. This imply a potential\ndanger, that is, if the initial data are selected improperly,\nthe algorithm might not be able to find the suitable hyperplane\n. Thus, another criterion, i.e., representative, must be\nconsidered. Here, \"representative\" refers to the ability to\ncharacterize the data distribution. Nguyen et al. [19] show\nthat the active learning method considering the representative\ncriterion will achieve better results. Specifically for\nSVMs, pre-clustering is proposed to estimate the data distribution\nbefore the under-sampling [31, 3, 30]. Similar ideas\nof representative sampling appear in [5, 12].\n3.2\nThe Imbalanced Data\nThe reason why general machine learning systems suffer\nperformance loss with imbalanced data is not yet clear [23,\n28], but the analysis on SVMs seems relatively straightforward\n. Akbani et al. have summarized three possible causes\nfor SVMs [2].\nThey are, (a) positive samples lie further\nfrom the ideal boundary, (b) the weakness of the soft-margin\nSVMs, and (c) the imbalanced support vector ratio. Of these\ncauses, in our opinion, what really matters is the second one.\nThe first cause is pointed out by Wu et al. [29]. This situation\noccurs when the data are linearly separable and the\nimbalance is caused by the insufficient sampling of the minority\nclass. Only in this case does the \"ideal\" boundary\nmake sense.\nAs for the third cause, Akbani et al.\nhave\npointed out that it plays a minor role because of the constraint\nT\ny = 0 on Lagrange multipliers [2].\nThe second cause states that the soft-margin SVMs has inherent\nweakness for handling imbalanced data. We find that\nit depends on the linear separability of the data whether the\nimbalance has negative effects on SVMs. For linearly separable\ndata, the imbalance will have tiny effects on SVMs,\nsince all the slack variables of (1) tend to be zeros (, unless\nthe C is so small that the maximization of the margin dominates\nthe objective). In the result, there is no contradiction\nbetween the capacity of the SVMs and the empirical error\n. 
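The point just made about linearly separable data can be illustrated with a quick informal check: even at a 1:100 class ratio, a linear SVM trained on well-separated two-dimensional data still recovers the minority class. The snippet below uses scikit-learn for convenience (the experiments in this paper use LibSVM), and the data layout, sizes, and parameters are arbitrary choices made for the illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Linearly separable, highly imbalanced 2-D data: 20 positives vs. 2000 negatives.
X_pos = rng.normal(loc=[0.0, 3.0], scale=0.3, size=(20, 2))
X_neg = rng.normal(loc=[0.0, -3.0], scale=0.3, size=(2000, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * len(X_pos) + [-1] * len(X_neg))

clf = SVC(kernel="linear", C=1.0).fit(X, y)
# With a wide margin and (near-)zero slack, the minority class is still
# recovered despite the 1:100 imbalance.
print("positives correctly classified:", (clf.predict(X_pos) == 1).mean())
```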
Unfortunately, linear non-separable data often occurs.\nThe SVMs has to achieve a tradeoff between maximizing\nthe margin and minimizing the empirical error. For imbalanced\ndata, the majority class outnumbers the minority one\nin the overlapping area. To reduce the overwhelming errors\nof misclassifying the majority class, the optimal hyperplane\nwill inevitably be skew to the minority. In the extreme, if C\nis not very large, SVMs simply learns to classify everything\nas negative because that makes the \"margin\" the largest,\nwith zero cumulative error on the abundant negative examples\n. The only tradeoff is the small amount of cumulative\n443\nerror on the few positive examples, which does not count for\nmuch.\nSeveral variants of SVMs have been adopted to solve the\nproblem of imbalance. One choice is the so-called one-class\nSVMs, which uses only positive examples for training. Without\nusing the information of the negative samples, it is usually\ndifficult to achieve as good result as that of binary SVMs\nclassifier [18]. Using different penalty constants C\n+\nand C\nfor\nthe positive and negative examples have been reported to\nbe effective [27, 17]. However, Wu et al. point out that the\neffectiveness of this method is limited [29]. The explanation\nof Wu is based on the KKT condition\nT\ny = 0, which imposes\nan equal total influence from the positive and negative\nsupport vectors. We evaluate this method and the result\nshows that tuning\nC\n+\nC\ndoes\nwork (details refer to Section\n5). We find this also depends on the linear separability of\nthe data whether this method works. For linearly separable\ndata, tuning\nC\n+\nC\nhas\nlittle effects, since the penalty constants\nare useless with the zero-valued slack variables. However, if\nthe data are linearly non-separable, tuning\nC\n+\nC\ndoes\nchange\nthe position of separating hyperplane. The method to modify\nthe kernel matrix is also proposed to improve SVMs for\nimbalanced data [29]. A possible drawback of this type approach\nis its high computational costs.\nOVERALL APPROACH\nThe proposed approach is named Support Cluster Machines\n(SCMs). We first partition the negative samples into\ndisjoint clusters, then train an initial SVMs model using\nthe positive samples and the representatives of the negative\nclusters. With the global picture of the initial SVMs, we can\napproximately identify the support vectors and non-support\nvectors. A shrinking technique is then used to remove the\nsamples which are most probably not support vectors. This\nprocedure of clustering and shrinking are performed itera-tively\nseveral times until some stop criteria satisfied. With\nsuch a from coarse-to-fine procedure, the representative and\ninformative mechanisms are incorporated. There are four\nkey issues in the meta-algorithm of SCMs: (a) How to get\nthe partition of the training data, (b) How to get the representative\nfor each cluster, (c) How to safely remove the\nnon-support vector samples, (d) When to stop the iteration\nprocedure. Though similar ideas have been proposed\nto speed-up SVMs in [30, 3, 31], no theoretical analysis of\nthis idea has been provided. 
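The meta-algorithm just outlined can be sketched as a simple loop. This is only a schematic restatement of the procedure described above for the imbalanced case (only the majority class is clustered and shrunk); the helper functions kernel_kmeans, svm_train, svm_predict, shrink, and stop are placeholders for the components developed in the following sections, and the final retraining step is an assumption of the sketch.

```python
def support_cluster_machines(D_pos, D_neg, kernel_kmeans, svm_train,
                             svm_predict, shrink, stop):
    """Coarse-to-fine training on large scale imbalanced data (negatives are
    the majority class).  D_pos and D_neg are lists of samples; the remaining
    arguments are placeholder callables for the components of the approach."""
    while True:
        # Partition the majority class and take one representative per cluster.
        clusters, representatives = kernel_kmeans(D_neg)
        # Train on all positives plus the negative cluster representatives.
        model = svm_train(D_pos + representatives)
        # Score the full training set with the current, approximate model.
        scores = svm_predict(model, D_pos + D_neg)
        # Remove negatives that are unlikely to be support vectors.
        D_neg = shrink(D_neg, scores)
        if stop(D_pos, D_neg):
            break
    # Final model on the reduced, more balanced set (an assumption of this sketch).
    return svm_train(D_pos + D_neg)
```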
In the following, we present an\nin-depth analysis for this type of approaches and attempt to\nimprove the algorithm under the theoretical guide.\n4.1\nTheoretical Analysis\nSuppose\n{\nc\n}\nk\nc=1\nis a partition of the training set that the\nsamples within the same cluster have the same class label.\nIf we construct a representative u\nc\nfor each cluster\nc\n, we\ncan obtain two novel models of SVMs.\nThe first one is named Support Cluster Machines (SCMs).\nIt treats each representative as a sample, thus the data size\nis reduced from n to k. This equals to the classification of\nthe clusters. That is where the name SCMs come from. The\nnew training set is\nD\n\n=\n{(u\nc\n, y\nc\n)\n|u\nc\nR\nN\n, y\nc\n{1, -1}, c = 1, . . . , k},\nin which y\nc\nequals the labels of the samples within\nc\n. We\ndefine the dual problem of support cluster machines as\nmin\n\n\nG\n\n(\n\n) = 1\n2\nT\n\nQ\n\n\n\n- e\nT\n\n\n\nsubjectto\n0\n\ni\n|\ni\n|C, i = 1, . . . , k\n(6)\n\nT\n\ny\n\n= 0,\nwhere\n\nis a vector of size k with components\ni\ncorresponding\nto u\ni\n,\n|\ni\n|C is the upper bound for\ni\n, e\n\nis a\nk dimension vector of all ones, and Q\n\nis an k\nk positive\nsemi-definite matrix, Q\nij\n= y\ni\ny\nj\n(u\ni\n), (u\nj\n) .\nAnother one is named Duplicate Support Vector Machines\n(DSVMs). Different from SCMs, it does not reduce the size\nof training set. Instead, it replace each sample x\ni\nwith the\nrepresentative of the cluster that x\ni\nbelongs to. Thus, the\nsamples within the same cluster are duplicate. That is why\nit is named DSVMs. The training set is\n~\nD = {(~x\ni\n, ~\ny\ni\n)\n|x\ni\nD, if x\ni\n\nc\n, ~\nx\ni\n= u\nc\nand ~\ny\ni\n= y\ni\n},\nand the corresponding dual problem is defined as\nmin\n\n~\nG() = 1\n2\nT\n~\nQ\n- e\nT\n\nsubjectto\n0\n\ni\nC, i = 1, . . . , n\n(7)\n\nT\ny = 0,\nwhere ~\nQ is is an n\nn positive semi-definite matrix, ~\nQ\nij\n=\n~\ny\ni\n~\ny\nj\n(~\nx\ni\n), (~\nx\nj\n) .\nWe have the following theorem that\nstates (6) is somehow equivalent to (7):\nTheorem 2. With the above definitions of the SCMs and\nthe DSVMs, if\n\nand\n\nare their optimal solutions respectively\n, the relation G\n\n(\n\n) = ~\nG(\n\n) holds. Furthermore,\nany\n\nR\nk\nsatisfying\n{\nc\n=\n\nx\ni\n\nc\n\ni\n,\nc = 1, . . . , k}\nis the optimal solution of SCMs. Inversely, any\nR\nn\nsatisfying\n{\n\nx\ni\n\nc\n\ni\n=\nc\n,\nc = 1, . . . , k} and the constraints\nof (7) is the optimal solution of DSVMs.\nThe proof is in Appendix A. Theorem 2 shows that solving\nthe SCMs is equivalent to solving a quadratic programming\nproblem of the same scale as that of the SVMs in (2).\nComparing (2) and (7), we can find that only the Hessian\nmatrix is different. Thus, to estimate the approximation\nfrom SCMs of (6) to SVMs of (2), we only need to analyze\nthe stability of the quadratic programming model in\n(2) when the Hessian matrix varies from Q to ~\nQ. Daniel\nhas presented a study on the stability of the solution of definite\nquadratic programming, which requires that both Q\nand ~\nQ are positive definite [7]. However, in our situation,\nQ is usually positive definite and ~\nQ is not (because of the\nduplications). We develop a novel theorem for this case.\nIf define = Q\n- ~\nQ , where\ndenotes the Frobenius\nnorm of a matrix, the value of measure the size of the\nperturbations between Q and ~\nQ.\nWe have the following\ntheorem:\nTheorem 3. 
If Q is positive definite and = Q - ~\nQ\n,\nlet\n\nand ~\n\n\nbe the optimal solutions to (2) and (7) respectively\n, we have\n~\n\n\n\n~\nmC\n\nG( ~\n\n\n)\n- G(\n\n)\n(m\n2\n+ ~\nm\n2\n)C\n2\n\n2\n444\nwhere is the minimum eigenvalue of Q, m and ~\nm indicate\nthe numbers of the support vectors for (2) and (7) respectively\n.\nThe proof is in Appendix B. This theorem shows that the\napproximation from (2) to (7) is bounded by . Note that\nthis does not mean that with minimal we are sure to get\nthe best approximate solution. For example, adopting the\nsupport vectors of (1) to construct ~\nQ will yield the exact\noptimal solution of (2) but the corresponding are not necessarily\nminimum. However, we do not know which samples\nare support vectors beforehand. What we can do is to minimize\nthe potential maximal distortion between the solutions\nbetween (2) and (7).\nNow we consider the next problem, that is, given the partition\n{\nc\n}\nk\nc=1\n, what are the best representatives\n{u\nc\n}\nk\nc=1\nfor the clusters in the sense of approximating Q? In fact,\nwe have the following theorem:\nTheorem 4. Given the partition {\nc\n}\nk\nc=1\n, the\n{u\nc\n}\nk\nc=1\nsatisfying\n(u\nc\n) =\n\nx\ni\n\nc\n(x\ni\n)\n|\nc\n|\n, c = 1, . . . , k\n(8)\nwill make = Q\n- ~\nQ\nminimum.\nThe proof is in Appendix C. This theorem shows that, given\nthe partition, (u\nc\n) = m\nc\nyields the best approximation\nbetween ~\nQ and Q.\nHere we come to the last question, i.e., what partition\n{\nc\n}\nk\nc=1\nwill make =\nQ\n- ~\nQ\nminimum. To make the\nproblem more clearly, we expand\n2\nas\nQ\n- ~\nQ\n2\n=\nk\nh=1\nk\nl=1 x\ni\n\nh\nx\nj\n\nl\n( (x\ni\n), (x\nj\n)\n- m\nh\n, m\nl\n)\n2\n.\n(9)\nThere are approximately k\nn\n/k! types of such partitions of\nthe data set. An exhaustive search for the best partition is\nimpossible. Recalling that (9) is similar to (4), we have the\nfollowing theorem which states their relaxed equivalence.\nTheorem 5. The relaxed optimal solution of minimizing\n(9) and the relaxed optimal solution of minimizing (4) are\nequivalent.\nThe proof can be found in Appendix D. Minimizing amounts\nto find a low-rank matrix approximating Q. Ding et al. have\npointed out the relaxed equivalence between kernel PCA and\nkernel k-means in [11]. Note that minimizing (9) is different\nfrom kernel PCA in that it is with an additional block-wise\nconstant constraint. That is, the value of ~\nQ\nij\nmust be invariant\nwith respect to the cluster\nh\ncontaining ~\nx\ni\nand the\ncluster\nl\ncontaining ~\nx\nj\n. With Theorem 5 we know that\nkernel k-means is a suitable method to obtain the partition\nof data.\nAccording to the above results, the SCMs essentially finds\nan approximate solution to the original SVMs by smoothing\nthe kernel matrix K (or Hessian matrix Q). 
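The smoothing operation described here, replacing every sample by its cluster centroid in the feature space, amounts to averaging the kernel matrix over cluster blocks, since the dot product of two centroids is the mean of the corresponding block of kernel entries. A small NumPy sketch of this computation follows; it is an illustration of the idea rather than the paper's implementation, and the toy data and Gaussian kernel choice are mine.

```python
import numpy as np

def smooth_kernel_matrix(K, labels):
    """Block-average the kernel matrix K according to a clustering.

    Entry (i, j) of the result is the mean of K over the block formed by the
    cluster of sample i and the cluster of sample j, i.e. the dot product of
    the two cluster centroids in the feature space."""
    labels = np.asarray(labels)
    clusters = np.unique(labels)
    centroid_K = np.array([[K[np.ix_(labels == h, labels == l)].mean()
                            for l in clusters] for h in clusters])
    index = {c: i for i, c in enumerate(clusters)}
    rows = np.array([index[c] for c in labels])
    return centroid_K[np.ix_(rows, rows)]

# Toy check: a Gaussian kernel on two well-separated 2-D clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (3, 2)), rng.normal(3.0, 0.1, (3, 2))])
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists)
K_tilde = smooth_kernel_matrix(K, [0, 0, 0, 1, 1, 1])
print(np.round(K_tilde, 3))   # block-constant approximation of K
```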
Fig. 1 illustrates the procedure of smoothing the kernel matrix via clustering. Hence, by solving a smaller quadratic programming problem, the position of the separating hyperplane can be roughly determined.

Figure 1: (a) 2D data distribution, (b) the visualization of the kernel matrix Q, (c) the kernel matrix Q with its entries re-ordered so that the samples belonging to the same cluster come together, (d) the approximate kernel matrix ~Q obtained by replacing each sample with the corresponding centroid.

4.2 Kernel-based Graph Clustering

In the previous work, k-means [30], BIRCH [31] and PDDP [3] have been used to obtain the partition of the data. None of them performs clustering in the feature space, even though the SVMs work in the feature space. This is somewhat unnatural. Firstly, recalling that the kernel K(·,·) usually implies an implicit nonlinear mapping from the input space to the feature space, the optimal partition of the input space is not necessarily the optimal one of the feature space. Take k-means as an example: because the squared Euclidean distance is used as the distortion measure, the clusters must be separated by piece-wise hyperplanes (i.e., a Voronoi diagram). However, these separating hyperplanes are no longer hyperplanes in the feature space under a nonlinear mapping. Secondly, the k-means approach cannot capture the complex structure of the data. As shown in Fig. 2, the negative class is ring-shaped in the input space. If k-means is used, the centroids of the positive and negative class might overlap, whereas in the feature space the kernel k-means might get separable centroids.

Several factors limit the application of kernel k-means to large scale data. Firstly, it is almost impossible to store the whole kernel matrix K in memory; e.g., for n = 100,000 we would still need 20 gigabytes of memory, even taking the symmetry into account. Secondly, kernel k-means relies heavily on an effective initialization to achieve good results, and we do not have such a sound method yet. Finally, the computational cost of kernel k-means might exceed that of SVMs, and therefore we would lose the benefits of under-sampling. Dhillon et al. recently proposed a multilevel kernel k-means method [9], which seems to cater to our requirements. The approach is based on the equivalence between graph clustering and kernel k-means. It incorporates coarsening and initial partitioning phases to obtain a good initial clustering. Most importantly, the approach is extremely efficient. It can handle a graph with 28,294 nodes and 1,007,284 edges in several seconds. Therefore, here we adopt this approach. The detailed description can be found in [9]. In the following, we focus on how to address the difficulty of storing the large scale kernel matrix.

Theorem 1 states that kernel k-means is equivalent to a type of graph clustering. Kernel k-means focuses on grouping the data so that their average distance from the centroid is minimum, while graph clustering aims to minimize the average pair-wise distance among the data. Central grouping and pair-wise grouping are two different views of the same approach.
From the perspective of pair-wise grouping, we can expect that two samples with a large distance between them will not belong to the same cluster in the optimal solution. Thus, we add the constraint that two samples whose distance is large enough are not linked by an edge, that is, we transform the dense graph into a sparse graph. This procedure is common practice in spectral clustering and manifold embedding. Usually, two methods are widely used for this purpose, i.e., k-nearest neighbors and the $\varepsilon$-ball. Here, we adopt the $\varepsilon$-ball approach. Concretely, the edges with weight $A_{ij} < \varepsilon$ are removed from the original graph, where the parameter $\varepsilon$ is pre-determined. By transforming a dense graph into a sparse graph, we only need to store the sparse affinity matrix instead of the original kernel matrix. Nevertheless, we have to point out that the time complexity of constructing the sparse graph is $O(n^2)$ for a data set with n examples, which is the efficiency bottleneck of the current implementation. With the sparse graph, each iteration of the multilevel kernel k-means costs $O(l_n)$ time, where $l_n$ is the number of nonzero entries in the kernel matrix.

Figure 2: The left and right figures show the data distribution of the input space and the feature space respectively (input-space axes x1, x2; feature-space axes z1, z2, z3). The two classes are indicated by squares and circles. Each class is grouped into one cluster, and the solid mark indicates the centroid of the class.

4.3 Support Cluster Machines

According to Theorem 4, choosing the centroid of each cluster as its representative will yield the best approximation. However, the explicit form of the feature-space mapping is unknown. We do not know the exact pre-images of the centroids $\{m_c\}_{c=1}^{k}$; what we can get are the dot products between the centroids,
$\langle m_h, m_l \rangle = \frac{1}{|\Omega_h||\Omega_l|} \sum_{x_i \in \Omega_h} \sum_{x_j \in \Omega_l} \langle \phi(x_i), \phi(x_j) \rangle$,
which requires $O(n^2)$ cost. Then the pre-computed kernel SVMs can be used. The pre-computed kernel SVMs takes this kernel matrix as input and saves the indices of the support vectors in the model [15]. To classify an incoming sample x, we have to calculate the dot product between x and all the samples in the support clusters (if $m_c$ is a support vector, we define the cluster $\Omega_c$ as a support cluster):
$\langle x, m_c \rangle = \frac{1}{|\Omega_c|} \sum_{x_i \in \Omega_c} \langle x, x_i \rangle$.
We need another $O(nm)$ cost to predict all the training samples if there are m samples in the support clusters. This is unacceptable for large scale data. To reduce the kernel re-evaluation, we adopt the same method as [3], i.e., selecting a pseudo-center for each cluster as the representative,
$u_c = \arg\min_{x_i \in \Omega_c} \left\| \phi(x_i) - \frac{1}{|\Omega_c|} \sum_{x_j \in \Omega_c} \phi(x_j) \right\|^2$,
which can be directly obtained by
$u_c = \arg\max_{x_i \in \Omega_c} \sum_{x_j \in \Omega_c} \langle \phi(x_i), \phi(x_j) \rangle$.   (10)
Thus, the kernel evaluation within the training procedure requires $O(\sum_{c=1}^{k} |\Omega_c|^2 + k^2)$ time, which can be further reduced by the probabilistic speedup proposed by Smola [25]. (An illustrative sketch of this selection on a precomputed kernel matrix is given below.)

Figure 3: (a) Each class is grouped into one cluster, (b) each class is grouped into two clusters. The solid mark represents the centroid of the corresponding class. The solid lines indicate the support hyperplanes yielded by SCMs and the dotted lines indicate the true support hyperplanes.
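Equation (10) selects, for each cluster, the member with the largest total kernel value to its own cluster. A minimal sketch of this selection on a precomputed kernel matrix is given below; it assumes NumPy and a cluster labelling produced elsewhere (e.g., by the multilevel kernel k-means), and the variable names are mine.

```python
import numpy as np

def pseudo_centers(K, labels):
    """Pick the pseudo-center of each cluster as in Eq. (10): the member whose
    summed kernel value to the rest of its cluster is largest.  K is the
    precomputed kernel (Gram) matrix of the training samples."""
    labels = np.asarray(labels)
    centers = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]           # indices of cluster members
        block = K[np.ix_(idx, idx)]              # within-cluster kernel block
        centers[c] = int(idx[block.sum(axis=1).argmax()])
    return centers
```

The centroid dot products themselves are block averages of K, as in the earlier smoothing sketch, so both quantities can be obtained without explicit feature-space representations.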
The kernel\nevaluation of predicting the training samples is reduced\nfrom O(nm) to O(ns), where s indicates the number of support\nclusters.\n4.4\nShrinking Techniques\nWith the initial SCMs, we can remove the samples that\nare not likely support vectors. However, there is no theoretical\nguarantee for the security of the shrinking. In Fig. 3,\nwe give a simple example to show that the shrinking might\nnot be safe. In the example, if the samples outside the margin\nbetween support hyperplanes are to be removed, the\ncase (a) will remove the true support vectors while the case\n(b) will not. The example shows that the security depends\non whether the hyperplane of SCMs is parallel to the true\nseparating hyperplane. However, we do not know the direction\nof true separating hyperplane before the classification.\nTherefore, what we can do is to adopt sufficient initial cluster\nnumbers so that the solution of SCMs can approximate\nthe original optimal solution enough. Specifically for large\nscale imbalanced data, the samples satisfying the following\ncondition will be removed from the training set:\n| w, (x) + b| > ,\n(11)\nwhere is a predefined parameter.\n4.5\nThe Algorithm\nYu [31] and Boley [3] have adopted different stop criteria\n. In Yu et al.'s approach, the algorithm stops when each\ncluster has only one sample. Whereas, Boley et al. limit\nthe maximum iterations by a fixed parameter.\nHere, we\npropose two novel criteria especially suitable for imbalanced\ndata. The first one is to stop whenever the ratio of positive\nand negative samples is relatively imbalanced. Another\nchoice is the Neyman-Pearson criterion, that is, minimizing\nthe total error rate subject to a constraint that the miss rate\nof positive class is less than some threshold. Thus, once the\n446\nmiss rate of positive class exceeds some threshold, we stop\nthe algorithm.\nThe overall approach is illustrated in Algorithm 1. With\nlarge scale balanced data, we carry out the data clustering\nfor both classes separately. Whereas with imbalanced data,\nthe clustering and shrinking will only be conducted on the\nmajority class. The computation complexity is dominated\nby kernel evaluation. Therefore, it will not exceed O((n\n\n)\n2\n+\n(n\n+\n)\n2\n), where n\nand\nn\n+\nindicate the number of negative\nand positive examples respectively.\nAlgorithm 1: Support Cluster Machines\nInput\n: Training data set\nD = D\n+\nD\nOutput\n: Decision function f\nrepeat\n1\n{\n+\nc\n, m\n+\nc\n}\nk\n+\nc=1\n=KernelKMeans(\nD\n+\n)\n2\n{\nc\n, m\nc\n}\nk\nc=1\n=KernelKMeans(\nD\n\n)\n3\nD\n\n=\n{m\n+\nc\n}\nk\n+\nc=1\n{m\n+\nc\n}\nk\nc=1\n4\nf\n\n=SVMTrain(\nD\n\n)\n5\nf (\nD) =SVMPredict(f\n\n,\nD)\n6\nD = D\n+\nD\n=Shrinking\n(f (\nD));\n7\nuntil\nstop criterion is true\n8\nEXPERIMENTS\nThe experiments on both the synthetic and the TRECVID\ndata are carried out. The experiments on synthetic data\nare used to analyze the effects of large scale and imbalance\non SVMs and the experiments on TRECVID data serve to\nevaluate the effectiveness and efficiency of SCMs. The multilevel\nkernel graph partitioning code graclus [9] is adopted for\ndata clustering and the well-known LibSVM software [15] is\nused in our experiments. All our experiments are done in a\nPentium 4 3.00GHz machine with 1G memory.\n5.1\nSynthetic Data Set\nWe generate two-dimensional data for the convenience\nof observation. Let x is a random variable uniformly distributed\nin [0, ]. 
The data are generated by\nD\n+\n=\n{(x, y)|y =sin(x)-+0.7[rand(0, 1)-1], x [0, ]}\nD\n=\n{(x, y)|y =- sin(x)+1+0.7rand(0, 1), x [0, ]},\nwhere rand(0, 1) generates the random numbers uniformly\ndistributed between 0 and 1, and is a parameter controlling\nthe overlapping ratio of the two classes. Fig. 4 and\nFig. 5 show some examples of the synthetic data. We use\nthe linear kernel function in all the experiments on synthetic\ndata.\n5.1.1\nThe Effects of Scale\nWe generate two types of balanced data, i.e., n\n+\n= n\n\n,\nbut one (\nD\n1\n=\nD( = 1.5)) is linearly separable and the\nTable 1: The effects of scale and overlapping on the\ntime costs of training SVMs (in seconds).\nn\n+\n+ n\n200\n2000 4000 8000 20000 40000\n80000\ntime(\nD\n1\n) 0.01 0.03\n0.04\n0.07\n0.23\n0.63\n1.32\ntime(\nD\n2\n) 0.02 0.70\n3.24 14.01 58.51 201.07 840.60\n0\n0.5\n1\n1.5\n2\n2.5\n3\n3.5\n-2.5\n-2\n-1.5\n-1\n-0.5\n0\n0.5\n1\n1.5\n2\nPositive class\nNegative class\n0\n0.5\n1\n1.5\n2\n2.5\n3\n3.5\n-1.5\n-1\n-0.5\n0\n0.5\n1\n1.5\n2\nPositive class\nNegative class\n(a)\n(b)\nFigure 4: (a) example of non-overlapped balanced\ndata sets, (b) example of overlapped balanced data\nsets.\n0\n0.5\n1\n1.5\n2\n2.5\n3\n3.5\n-2.5\n-2\n-1.5\n-1\n-0.5\n0\n0.5\n1\n1.5\n2\nPositive class\nNegative class\n0\n0.5\n1\n1.5\n2\n2.5\n3\n3.5\n-1.5\n-1\n-0.5\n0\n0.5\n1\n1.5\n2\nPositive class\nNegative class\n(a)\n(b)\nFigure 5: (a) example of non-overlapped imbalanced\ndata sets, (b) example of overlapped imbalanced\ndata sets.\nother (\nD\n2\n=\nD( = 0.6)) is not, as shown in Fig.4. We\nobserve the difference of the behaviors of time costs for\nD\n1\nand\nD\n2\nwhen the scale increases. With the same parameter\nsettings, the time costs of optimizing the objective for\nD\n1\nand\nD\n2\nare shown in Table 1, from which we can get two\nconclusions, (a) time costs increase with the scale, and (b)\nin the same scale, the linearly non-separable data will cost\nmore time to converge.\n5.1.2\nThe Effects of Imbalance\nWe generate two types of imbalanced data, i.e., n\n+\nn\n\n,\nbut one (\nD\n1\n=\nD( = 1.5)) is linearly separable and the\nother (\nD\n2\n=\nD( = 0.6)) is not, as shown in Fig.5. We\nobserve the difference of the effects of imbalance for linearly\nseparable data\nD\n1\nand linearly non-separable\nD\n2\n. For the\nspace limitation, we will not describe the detailed results\nhere but only present the major conclusions. For linearly\nseparable data, SVMs can find the non-skew hyperplane if\nC is not too small. In this situation, tuning\nC\n+\nC\nis\nmeaningless\n. For linearly non-separable data, the boundary will be\nskew to positive class if C\n+\n= C\n\n. In this case, increasing\nC\n+\nC\ndose\n\"push\" the skewed separating hyperplane to the\nnegative class. For both\nD\n1\nand\nD\n2\n, if the C is too small,\nunderfitting occurs, that is, the SVMs simply classify all the\nsamples into negative class.\n5.2\nTRECVID Data Set\n5.2.1\nExperimental Setup\nIn this section, we evaluate the proposed approach on the\nhigh level feature extraction task of TRECVID [1]. Four\nconcepts, including \"car\",\"maps\",\"sports\" and \"waterscape\",\nare chosen to model from the data sets. The development\ndata of TRECVID 2005 are employed and divided into training\nset and validation set in equal size. 
The detailed statis-447\nTable 2: The details of the training set and validation\nset of TRECVID 2005.\nConcept\n|D\ntrain\n|\n|D\nval\n|\nPositive Negative Positive Negative\nCar\n1097\n28881\n1097\n28881\nMaps\n296\n30462\n296\n30463\nSports\n790\n29541\n791\n29542\nWaterscape\n293\n30153\n293\n30154\ntics of the data is summarized in Table 2. In our experiments\n, the global 64-dimension color autocorrelogram feature\nis used to represent the visual content of each image.\nConforming to the convention of TRECVID, average precision\n(AP) is chosen as the evaluation criterion. Totally five\nalgorithms have been implemented for comparison:\nWhole\nAll the negative examples are used\nRandom\nRandom sampling of the negative examples\nActive\nActive sampling of the negative examples\nSCMs I\nSCMs with k-means in the input space\nSCMs\nSCMs with kernel k-means\nIn the Active method, we firstly randomly select a subset\nof negative examples.\nWith this initial set, we train\nan SVMs model and use this model to classify the whole\ntraining data set. Then the maximally misclassified negative\nexamples are added to the training set. This procedure\niterates until the ratio between the negative and the\npositive examples exceeding five. Since both the Random\nand Active methods depend on the initial random chosen\ndata set, we repeat each of them for ten times and calculate\ntheir average performances for comparison. Both SCMs I\nand SCMs methods adopt the Gaussian kernel during the\nSVMs classification. The only difference is that SCMs I\nperforms data clustering with k-means in the input space\nwhile SCMs with k-means in the feature space.\n5.2.2\nParameter Settings\nCurrently, the experiments focus on the comparative performance\nbetween the different approaches based on the the\nsame parameter settings. Therefore, some of the parameters\nare heuristically determined and might not be optimal. The\ncurrent implementation of SCMs involves the following parameter\nsettings: (a) Gaussian kernel is adopted and the parameters\nare selected via cross-validation, furthermore, the\nkernel function of kernel k-means clustering is adopted the\nsame as that of SVMs, (b) the threshold for transforming\ndense graphs to sparse ones is experimentally determined\nas\n= 0.6, (c) the parameter of shrinking technique is experimentally\nchosen as = 1.3, (d) for SCMs, the data are\nimbalanced for each concept, we only carry out data clustering\nfor negative classes, therefore, k\n+\nalways equals\n|D\n+\n|\nand k\nis\nalways chosen as\n|D\n|\n10\n, (e) we stop the iteration\nof SCMs when the number of the negative examples are not\nmore than the five times of that of the positive examples.\n5.2.3\nExperiment Results\nThe average performance and time costs of the various\napproaches are in Table 3 and Table 4 respectively.\nWe\ncan see that both the Random and Active methods use\nfewer time than the others, but their performances are not\nas good as the others. 
Furthermore, the SCMs achieves comparable performance with that of Whole while using less time. Note that SCMs I also achieves satisfying results. This might be due to the Gaussian kernel, in which $e^{-\|x-y\|^2}$ is monotonic in $\|x-y\|^2$. Therefore, the order of the pair-wise distances is the same for both the input space and the feature space, which perhaps leads to similar clustering results.

Table 3: The average performance of the approaches on the chosen concepts, measured by average precision.
Concept      Whole   Random  Active  SCMs I  SCMs
Car          0.196   0.127   0.150   0.161   0.192
Maps         0.363   0.274   0.311   0.305   0.353
Sports       0.281   0.216   0.253   0.260   0.283
Waterscape   0.269   0.143   0.232   0.241   0.261

Table 4: The average time costs of the approaches on the chosen concepts (in seconds).
Concept      Whole    Random  Active  SCMs I  SCMs
Car          4000.2   431.0   1324.6  1832.0  2103.4
Maps         402.6    35.2    164.8   234.3   308.5
Sports       1384.5   125.4   523.8   732.5   812.7
Waterscape   932.4    80.1    400.3   504.0   621.3

CONCLUSIONS

In this paper, we have investigated the effects of scale and imbalance on SVMs. We highlight the role of data overlapping in this problem and find that SVMs have no difficulties with linearly separable large scale imbalanced data. We propose a meta-algorithm named Support Cluster Machines (SCMs) for effectively learning from large scale and imbalanced data sets. Different from the previous work, we develop theoretical justifications for the idea and choose the technical components guided by the theoretical results. Finally, experiments on both the synthetic and the TRECVID data are carried out. The results support the previous analysis and show that the SCMs are efficient and effective while dealing with large scale imbalanced data sets. However, as a pilot study, there is still some room for improvement. Firstly, we have not incorporated caching techniques to avoid kernel re-evaluations; therefore, we have to recalculate the dot products online whenever they are required. Secondly, the parameters within the algorithms are currently selected heuristically, which depends on the tradeoff between efficiency and accuracy.

ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their insightful suggestions. We also thank Dr. Chih-Jen Lin for the code of libSVM, Brian J. Kulis for the code of graclus and the National Institute of Standards and Technology for providing the TRECVID data sets. Finally, special thanks go to Dr. Ya-xiang Yuan for his helpful discussions on optimization theory.

REFERENCES
[1] TREC Video Retrieval. National Institute of Standards and Technology, http://www-nlpir.nist.gov/projects/trecvid/.
Mathematical Programming,\n5(1):4153, December 1973.\n[8] R. Datta, J. Li, and J. Z. Wang. Content-based Image\nRetrieval: Approaches and Trends of the New Age. In\nProceedings of ACM SIGMM workshop on MIR'05,\npages 253262, 2005.\n[9] I. Dhillon, Y. Guan, and B. Kulis. A Fast\nKernel-based Multilevel Algorithm for Graph\nClustering. In Proceeding of ACM SIGKDD'05, pages\n629634, 2005.\n[10] I. S. Dhillon, Y. Guan, and B. Kulis. A Unified View\nof Graph Partitioning and Weighted Kernel k-means.\nTechnical Report TR-04-25, The University of Texas\nat Austin, Department of Computer Sciences, June\n2004.\n[11] C. Ding and X. He. K-means clustering via principal\ncomponent analysis. In Proceedings of ICML'04, pages\n2936, 2004.\n[12] K.-S. Goh, E. Y. Chang, and W.-C. Lai. Multimodal\nConcept-dependent Active Learning for Image\nRetrieval. In Proceedings of ACM MM'04, pages\n564571, 2004.\n[13] A. G. Hauptmann. Towards a Large Scale Concept\nOntology for Broadcast Video. In Proceedings of\nCIVR'04, pages 674675, 2004.\n[14] A. G. Hauptmann. Lessons for the Future from a\nDecade of Informedia Video Analysis Research. In\nProceedings of CIVR'05, pages 110, 2005.\n[15] C.-W. Hsu, C.-C. Chang, and C.-J. Lin. A Practical\nGuide to Support Vector Classification. 2005. available\nat http://www.csie.ntu.edu.tw/~cjlin/libsvm/.\n[16] T. Joachims. Making Large-scale Support Vector\nMachine Learning Practical. Advances in kernel\nmethods: support vector learning, pages 169184, 1999.\n[17] Y. Lin, Y. Lee, and G. Wahba. Support Vector\nMachines for Classification in Nonstandard Situations.\nMachine Learning, 46(1-3):191202, 2002.\n[18] L. M. Manevitz and M. Yousef. One-class SVMs for\nDocument Classification. Journal of Machine Learning\nResearch, 2:139154, 2002.\n[19] H. T. Nguyen and A. Smeulders. Active Learning\nUsing Pre-clustering. In Proceedings of ICML'04,\npages 7986, 2004.\n[20] E. Osuna, R. Freund, and F. Girosi. An Improved\nTraining Algorithm for Support Vector Machines. In\nIEEE Workshop on Neural Networks and Signal\nProcessing, September 1997.\n[21] D. Pavlov, J. Mao, and B. Dom. Scaling-Up Support\nVector Machines Using Boosting Algorithm. In\nProceeding of ICPR'00, volume 2, pages 22192222,\n2000.\n[22] J. C. Platt. Fast Training of Support Vector Machines\nusing Sequential Minimal Optimization. Advances in\nkernel methods: support vector learning, pages\n185208, 1999.\n[23] R. C. Prati, G. E. A. P. A. Batista, and M. C.\nMonard. Class Imbalances versus Class Overlapping:\nan Analysis of a Learning System Behavior. In\nProceedings of the MICAI 2004, pages 312321, 2004.\n[24] G. Schohn and D. Cohn. Less is More: Active\nLearning with Support Vector Machines. In\nProceddings of ICML'00, pages 839846, 2000.\n[25] A. J. Smola and B. Sch\nokopf. Sparse Greedy Matrix\nApproximation for Machine Learning. In Proceedings\nof ICML'00, pages 911918, 2000.\n[26] S. Tong and E. Chang. Support Vector Machine\nActive Learning for Image Retrieval. In Proceedings of\nACM MM'01, pages 107118, 2001.\n[27] K. Veropoulos, N. Cristianini, and C. Campbell.\nControlling the Sensitivity of Support Vector\nMachines. In Proceedings of IJCAI'99, 1999.\n[28] G. M. Weiss and F. J. Provost. Learning When\nTraining Data are Costly: The Effect of Class\nDistribution on Tree Induction. Journal of Artificial\nIntelligence Research (JAIR), 19:315354, 2003.\n[29] G. Wu and E. Y. Chang. 
KBA: Kernel Boundary\nAlignment Considering Imbalanced Data Distribution.\nIEEE Transactions on Knowledge and Data\nEngineering, 17(6):786795, 2005.\n[30] Z. Xu, K. Yu, V. Tresp, X. Xu, and J. Wang.\nRepresentative Sampling for Text Classification Using\nSupport Vector Machines. In Proceedings of ECIR'03,\npages 393407, 2003.\n[31] H. Yu, J. Yang, J. Han, and X. Li. Making SVMs\nScalable to Large Data Sets using Hierarchical Cluster\nIndexing. Data Min. Knowl. Discov., 11(3):295321,\n2005.\nAPPENDIX\nA.\nPROOF OF THEOREM 2\nFirstly, we define ^\n\n\nwhich satisfies ^\n\nc\n=\n\nx\ni\n\nc\n\ni\n,\nc =\n1, . . . , k. It is easy to verify that ^\n\n\nis a feasible solution of\nSCMs. Secondly, we define\nsatisfying\n\ni\n=\n\n\nc\n|\nc\n|\nif x\ni\n\n\nc\n, i = 1, . . . , n. It is easy to verify that\nis a feasible solution\nof DSVMs. According to the relation of\nD\n\nand ~\nD, we\ncan obtain the following equation\n1\n2\nn\ni=1\nn\nj=1\n\ni\ny\ni\n(~\nx\ni\n), (~\nx\nj\n)\nj\ny\nj\nn\ni=1\n\ni\n=\n1\n2\nk\nh=1\nk\nl=1\n^\n\nh\ny\nh\n(u\nh\n), (u\nl\n) ^\n\nl\ny\nl\nk\nh=1\n^\n\nh\n,\nwhich means ~\nG(\n\n) = G\n\n( ^\n\n\n). Similarly, we can get ~\nG(\n) =\nG\n\n(\n\n). Using the fact that\n\nand\n\nare the optimal solu-449\ntions to SCMs and DSVMs respectively, we have G\n\n(\n\n)\n\nG\n\n( ^\n\n\n) and ~\nG(\n\n)\n~\nG(\n). Thus, the equation G\n\n(\n\n) =\n~\nG(\n\n) holds. For any\n\nR\nk\nsatisfying\n{\nc\n=\n\nx\ni\n\nc\n\ni\n,\nc = 1, . . . , k}, we know it is a feasible solution to SCMs\nand G\n\n(\n\n) = ~\nG(\n\n) = G\n\n(\n\n) holds, which means\n\nis\nthe optimal solution of SCMs. Similarly, for any\nR\nn\nsatisfying\n{\n\nx\ni\n\nc\n\ni\n=\nc\n,\nc = 1, . . . , k} and the constraints\nof (7), we have ~\nG() = G\n\n(\n\n) = ~\nG(\n\n), which\nmeans is the optimal solution of DSVMs.\nB.\nPROOF OF THEOREM 3\nNote that the feasible regions of (2) and (7) are the same.\nBy the fact that\n\nand ~\n\n\nare optimal solutions to (2) and\n(7) respectively, we know that\n( ~\n\n\n\n)\nT\nG(\n\n)\n0\n(12)\n(\n\n- ~\n\n)\nT\n~\nG( ~\n\n\n)\n0\n(13)\nhold, where the gradients\nG() = Q - e and ~\nG() = ~\nQ\n- e.\n(14)\nAdding (12) and (13) and then a little arrangement yields\n( ~\n\n\n\n)\nT\n[\n~\nG( ~\n\n\n)\n- ~\nG(\n\n)]\n(~\n\n\n)\nT\n[\nG(\n\n)\n- ~\nG(\n\n)].\nSubstituting (14) in the above inequality, we get\n( ~\n\n\n\n)\nT\n~\nQ( ~\n\n\n\n)\n(~\n\n\n)\nT\n(Q\n- ~\nQ)\n\n.\n(15)\nAdding ( ~\n\n\n\n)\nT\n(Q\n- ~\nQ)( ~\n\n\n\n) to the both sides of\n(15), we have\n( ~\n\n\n\n)\nT\nQ( ~\n\n\n\n)\n(~\n\n\n)\nT\n(Q\n- ~\nQ) ~\n\n\n.\n(16)\nIf > 0 is the smallest eigenvalue of Q, we have\n~\n\n\n2\n(~\n\n\n)\nT\nQ( ~\n\n\n\n)\n( ~\n\n\n\n)\nT\n(Q\n- ~\nQ) ~\n\n\n~\n\n\nQ\n- ~\nQ\n~\n\n\nand\n~\n\n\n~\nmC. Using (16) we get\n~\n\n\n\n\n~\nmC\n\n.\nNow we turn to prove the second result.\n\nis the optimal\nsolution of (2), therefore, 0\nG(~\n\n)\n- G(\n\n) is obvious.\nMeanwhile, we have\nG( ~\n\n\n)\n- G(\n\n)= 1\n2 ( ~\n\n\n)\nT\n(Q\n- ~\nQ) ~\n\n\n+ ~\nG( ~\n\n\n)\n- G(\n\n)\n12(~\n\n)\nT\n(Q\n- ~\nQ) ~\n\n\n+ ~\nG(\n\n)\n- G(\n\n)\n= 1\n2 ( ~\n\n\n)\nT\n(Q\n- ~\nQ) ~\n\n\n- 12(\n\n)\nT\n(Q\n- ~\nQ)\n\n12 Q - ~Q ~\n2\n+ 1\n2 Q ~\nQ\n\n2\n(m\n2\n+ ~\nm\n2\n)C\n2\n\n2\nC.\nPROOF OF THEOREM 4\nExpanding to be the explicit function of\n{(u\nc\n)\n}\nk\nc=1\n, we\nget\n2\n= YKY\n- ~\nY ~\nK ~\nY\n2\n, in which Y and ~\nY denote diagonal\nmatrices whose diagonal elements are y\n1\n, . . . , y\nn\nand\n~\ny\n1\n, . . . , ~\ny\nn\nrespectively. 
Using the fact that Y equals to ~\nY,\nwe have\n2\n=\nY(K\n- ~\nK)Y\n2\n. Since Y only change the\nsigns of the elements of K\n- ~\nK by Y(K\n- ~\nK)Y, we have\n2\n=\nK\n- ~\nK\n2\n=\n\nk\nh=1\n\nk\nl=1\n\nx\ni\n\nh\n\nx\nj\n\nl\n( (x\ni\n), (x\nj\n)\n\n(u\nh\n), (u\nl\n) )\n2\n. It is a biquadratic function of\n{(u\nc\n)\n}\nk\nc=1\n.\nTherefore, this is an unconstrained convex optimization problem\n[4]. The necessary and sufficient condition for\n{u\nc\n}\nk\nc=1\nto be optimal is\n\n2\n(\n{(u\nc\n)\n}\nk\nc=1\n) = 0. We can verify that\n(u\nc\n) =\n\nxic\n(x\ni\n)\n|\nc\n|\n, c = 1, . . . , k satisfies the condition\nthat the gradient is zero.\nD.\nPROOF OF THEOREM 5\nWe define a n\nk matrix Z as Z\nic\n=\n\n1\n\n|\nc\n|\nif x\ni\n\nc\n0\notherwise\n.\nWe can see that Z captures the disjoint cluster memberships.\nThere is only one non-zero entry in each row of Z and Z\nT\nZ =\nI\nk\nholds (I\nk\nindicates the identity matrix). Suppose is\nthe matrix of the images of the samples in feature space,\ni.e., = [(x\n1\n), . . . , (x\nn\n)]. We can verify that the matrix\nZZ\nT\nconsists of the mean vectors of the clusters containing\nthe corresponding sample. Thus, the\n2\ncan be written as\nQ\n- ~\nQ\n2\n=\nT\n\n- (ZZ\nT\n)\nT\nZZ\nT 2\n.\nUsing the fact that trace(A\nT\nA) = A\n2\nF\n, trace(A + B) =\ntrace(A) + trace(B) and trace(AB) = trace(BA), we have\n\n2\n= trace((\nT\n)\nT\n\nT\n\n- (Z\nT\n\nT\nZ)(Z\nT\n\nT\nZ)).\nSince trace((\nT\n)\nT\n\nT\n) is constant, minimizing is equivalent\nto maximizing\nJ\n1\n= trace((Z\nT\n\nT\nZ)(Z\nT\n\nT\nZ)).\n(17)\nWith similar procedure, we can see that minimizing J(\n{\nc\n}\nk\nc=1\n)\namounts to maximizing\nJ\n2\n= trace(Z\nT\n\nT\nZ).\n(18)\nMatrix K =\nT\nis a symmetric matrix. Let\n1\n. . . ,\n\nn\n0 denote its eigenvalues and (v\n1\n, . . . , v\nn\n) be the corresponding\neigenvectors. Matrix H = Z\nT\n\nT\nZ is also a\nsymmetric matrix.\nLet\n1\n. . . ,\nk\n0 denote its\neigenvalues.\nAccording to Poincar\ne Separation Theorem,\nwe know the relations\ni\n\ni\n, i = 1, . . . , k hold. Therefore\n, we have J\n2\n=\n\nk\ni=1\n\ni\n\n\nk\ni=1\n\ni\n. Similarly, we have\nJ\n1\n=\n\nk\ni=1\n\n2\ni\n\n\nk\ni=1\n\n2\ni\n. In both cases, the equations hold\nwhen Z = (v\n1\n, . . . , v\nk\n)R, where R is an arbitrary k\nk orthonormal\nmatrix. Actually, the solution to maximizing J\n2\nis just the well-known theorem of Ky Fan (the Theorem 3.2.\nof [11]). Note that the optimal Z might no longer conforms\nto the definition of Z\nic\n=\n\n1\n\n|\nc\n|\nif x\ni\n\nc\n0\notherwise\n, but it is\nstill a orthonormal matrix. That is why it is called a relaxed\noptimal solution.\n450", "keywords": "Support Vector Machines;concept modelling;Concept Modelling;Imbalance;Support Vector Machines (SVMs);Large Scale;Clustering;imbalanced data;kernel k-means;support cluster machines (SCMs);TRECVID;meta-algorithm;large scale data;shrinking techniques;clusters;Kernel k-means"} {"name": "124", "title": "Learning Query Languages of Web Interfaces", "abstract": "This paper studies the problem of automatic acquisition of the query languages supported by a Web information resource . We describe a system that automatically probes the search interface of a resource with a set of test queries and analyses the returned pages to recognize supported query operators. The automatic acquisition assumes the availability of the number of matches the resource returns for a submitted query. 
The match numbers are used to train a learning system and to generate classification rules that recognize the query operators supported by a provider and their syntactic encodings. These classification rules are employed during the automatic probing of new providers to determine query operators they support. We report on results of experiments with a set of real Web resources.", "fulltext": "INTRODUCTION\nSearching for relevant information is a primary activity\non the Web.\nOften, people search for information using\ngeneral-purpose search engines, such as Google or Yahoo!,\nwhich collect and index billions of Web pages. However,\nthere exists an important part of the Web that remains unavailable\nfor centralized indexing. This so-called \"hidden\"\npart of the Web includes the content of local databases and\ndocument collections accessible through search interfaces offered\nby various small- and middle-sized Web sites, including\ncompany sites, university sites, media sites, etc. According\nto the study conducted by BrightPlanet in 2000 [6], the size\nof the Hidden Web is about 400 to 550 times larger than the\ncommonly defined (or \"Visible\") World Wide Web. This\nsurprising discovery has fed new research on collecting and\norganizing the Hidden Web resources [1, 2, 15, 17, 19].\nCommercial approaches to the Hidden Web are usually in\nthe shape of Yahoo!-like directories which organize local sites\nbelonging to specific domains. Some important examples\nof such directories are InvisibleWeb[1] and BrightPlanet[2]\nwhose gateway site, CompletePlanet[3], is a directory as\nwell as a meta-search engine. For each database incorporated\ninto its search, the meta-search engine is provided with\na manually written \"wrapper\", a software component that\nspecifies how to submit queries and extract query answers\nembedded into HTML-formatted result pages.\nSimilar to the Visible Web, search resources on the Hidden\nWeb are highly heterogeneous. In particular, they use different\ndocument retrieval models, such as Boolean or vector-space\nmodels. They allow different operators for the query\nformulation and, moreover, the syntax of supported operators\ncan vary from one site to another. Conventionally,\nquery languages are determined manually; reading the help\npages associated with a given search interface, probing the\ninterface with sample queries and checking the result pages\nis often the method of choice.\nThe manual acquisition of Web search interfaces has important\ndrawbacks. First, the manual approach is hardly\nscalable to thousands of search resources that compose the\nHidden Web. Second, the manual testing of Web resources\nwith probe queries is often error-prone due to the inability\nto check results. Third, cases of incorrect or incomplete help\npages are frequent. Operators that are actually supported\nby an engine may not be mentioned in the help pages, and\nconversely, help pages might mention operators that are not\nsupported by the engine.\nTo overcome the shortcomings of the manual approach,\nwe address the problem of acquiring the query languages of\nWeb resources in an automatic manner. We develop a system\nthat automatically probes a resource's search interface\nwith a set of test queries and analyses the returned pages to\nrecognize supported query operators. The automatic acquisition\nassumes the availability of the number of matches the\nresource returns for a submitted query. 
The match numbers\nare used to train a learning system and to generate classification\nrules that recognize the query operators supported\nby a provider and their syntactic encodings.\nNew technologies surrounding the XML syntax standard,\nin particular Web Services [18], establish a new basis for automatic\ndiscovery and information exchange and are becoming\nwidely employed in corporate applications.\nHowever,\nthis has yet to happen for thousands of public information\nproviders. The question of when and how they will move\ntoward open cooperation using Web Service technologies remains\nwidely open [4]. Instead, the query-probing approach\nfor acquiring supported operators does not assume any cooperation\nof Web providers; its only requirement is that they\n1114\n2004 ACM Symposium on Applied Computing\nprovide an accessible interface and allow queries to be run.\nThis paper is organized as follows. In Section 2 we discuss\nthe heterogeneity of Web interfaces; we formalize the problem\nand show its connection with the concept of learning by\nquerying in Section 3. In Section 4 we design a classifier system\nfor the automatic acquisition of a query language and\ninvestigate different aspects of the system. In Section 6 we\nreview the prior art; in Section 5 we present experimental\nresults to illustrate the performance of our system. Section 7\ndiscusses open issues and Section 8 concludes the paper.\nQUERYING WEB RESOURCES\nWeb resources vary considerably in the ways they retrieve\nrelevant documents. In the theory of information retrieval,\nthere exist at least five basic retrieval models, but only three\nof these models are visible on the Web, namely the Boolean,\nthe extended Boolean and the vector-space models. In the\nBoolean query model, a query is a condition, which documents\neither do or do not satisfy, with the query result being\na set of documents. In the vector-space model, a query is\na list of terms, and documents are assigned a score according\nto how similar they are to the query. The query result\nis a ranked list of documents. A document in the query result\nmight not contain all query terms. Finally, the extended\nBoolean model combines the advantages of both the Boolean\nand the vector-space query model. In this model, keywords\ncan be preceded by special characters (like + and - ) requiring\nan obligatory presence or absence of a given keyword\nin a document. For example, the query +information\n+provider will retrieve all documents containing both keywords\nand rank them according to some similarity function.\nAnalysis of information providers suggests that the majority\nof providers adopt one of the three basic models. Moreover\n, beyond query answers, many resources report the number\nof documents in their collections matching the query. If\na resource deploys the (extended) Boolean model, the match\nnumber shows how many documents match the query. In the\ncase of the vector-space model, the match number refers to\ndocuments containing at least one query term, thus being\nequivalent to the Boolean disjunction.\nIn the following, we develop an approach for automatic\ndetermination of query operators by reasoning on submitted\nqueries and corresponding match numbers. Though this\napproach excludes resources that do not report match numbers\n, other ways of automatic detection of query operators\nappear even more problematic and difficult to implement. 
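As a toy illustration of the three retrieval models discussed above (the collection, the terms and the resulting counts below are invented for illustration only, not taken from any real provider), the following Python fragment shows how the same two query terms yield different match numbers under a strict Boolean conjunction, under a vector-space interpretation that counts documents containing at least one term, and under an extended Boolean query with obligatory terms:

# Documents modelled as term sets; counts mimic the "number of matches"
# that the approach developed below relies on.
docs = [{"casablanca", "bogart", "film"},
        {"casablanca", "morocco"},
        {"bogart", "actor"}]
terms = ["casablanca", "bogart"]

boolean_and  = sum(all(t in d for t in terms) for d in docs)   # Boolean conjunction
vector_space = sum(any(t in d for t in terms) for d in docs)   # at least one term, ranked
ext_boolean  = sum(all(t in d for t in terms) for d in docs)   # '+casablanca +bogart'
print(boolean_and, vector_space, ext_boolean)                  # 1 3 1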
A\nmethod based on downloading answer documents and verifying\nthe query against their content often fails, either for legal\nreasons, when the content of documents is unavailable or\npassword-protected, or for technical reasons, when a query\nmatches millions of documents and downloading even a part\nof them requires prohibitive time and network resources.\n2.1\nQuery language model\nA query language of a Web provider includes a set of basic\noperators and the way of combining the operators to get\ncomplex queries. Basic operators have different arities, in\nparticular, the default term processing and the unary and\nbinary operators. The default processing refers primarily\nto case sensitivity in this paper, but we could also refer to\nwhether the query term is treated as a complete word or as a\nsubstring in a possible matching document. Unary operators\ninclude the Stem-operator, which replaces a query term with\nits lexem; binary operators include the Boolean operators\n(conjunction), (disjunction), and (negation)\n1\nand the\noperator P hrase which requires the adjacency of all terms\nin a document.\nSome other operators, like substring matching or word\nproximity operators have been studied in various systems,\nhowever the six query operators mentioned above are by far\nthe ones most frequently supported by Web interfaces. In\nthe following, we develop a method to cope with the operator\nset O = {Case, Stem, , , , P hrase}. Issues relevant\nto the possible extension of set O with other operators are\ndelegated to Section 7.\n2.2\nQuery interpretation\nWeb providers are queried by filling their search forms\nwith query strings. CGI or JavaScript code linked to the\nquery form interprets the query strings according to certain\nrules.\nThese rules allow syntactic encodings for the supported\nquery operators. If correctly interpreted, the query\nis executed on the document collection before a (full or partial\n) answer is reported to the user.\nUnfortunately, the same query operator may be encoded\ndifferently by different providers. For example, the Boolean\nconjunction is often encoded as A AND B , A B , or +A\n+B , where A and B are query terms. Worse, two providers\ncan interpret the same query string differently. For example,\nquery string A B can be interpreted as a Boolean conjunction\n, Boolean disjunction, or P hrase.\nExample 1. To illustrate the problem, consider the query\nstring q = Casablanca AND Bogart .\nOn Google, AND\nis interpreted as the Boolean conjunction, that is, i\nGoogle\n( Casablanca AND Bogart ) = Casablanca Bogart . As\na result, query q matches 24,500 pages at Google, as op-posed\nto 551,000 for query q\n1\n= Casablanca and 263,000\nfor q\n2\n= Bogart . On the Internet Movie Database (IMDB)\n(http://www.imdb.com/.), AND is taken literally and all\nterms in a query are implicitly OR-connected. Therefore, the\nIMDB interprets query q as follows: i\nIM DB\n( Casablanca\nAND Bogart ) = Casablanca AND Bogart . The\nquery returns 12,020 matches documents on IMDB, as op-posed\nto only 22 for q\n1\n= Casablanca and 4 for q\n2\n= Bogart .\nIf we investigate an unknown query language, then Example\n1 shows that observing match numbers for probe queries\ncan provide a good insight into the supported operators.\nHowever, no definitive decision appears possible from the\nthree queries above q, q\n1\n, q\n2\n. 
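To make this reasoning concrete, the small sketch below (the function and the pruning rules are ours, not part of any described system) filters the candidate interpretations of "AND" using only the necessary set-size conditions; as noted above, such a filter narrows the hypotheses but cannot decide the interpretation by itself:

def plausible_hypotheses(n_q, n_a, n_b):
    """Return the operator hypotheses consistent with the observed match
    numbers of the combined query and of the two single-term queries.
    This is only a necessary-condition filter, not a decision."""
    hypotheses = []
    if n_q <= min(n_a, n_b):
        # conjunction (and likewise a phrase query) can only shrink the result set
        hypotheses += ["conjunction", "phrase"]
    if n_q >= max(n_a, n_b):
        # disjunction (e.g. 'AND' ignored or taken literally under an implicit OR)
        # can only grow it
        hypotheses += ["disjunction"]
    return hypotheses

# Match numbers quoted in Example 1
print(plausible_hypotheses(24500, 551000, 263000))   # Google: ['conjunction', 'phrase']
print(plausible_hypotheses(12020, 22, 4))            # IMDB:   ['disjunction']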
An accurate decision on supported\noperators/syntaxes will require probing the provider\nwith other queries and comparing all match numbers in order\nto confirm or reject various hypotheses.\nExample 2. As in Example 1, let us compare match numbers\nfor the queries q= Casablanca AND Bogart , q\n1\n= Casablanca\n, and q\n2\n= Bogart . For Google, the fact that q matches\nless documents than any of q\n1\nand q\n2\n, favors the Conjunction-hypotheses\n, but is still insufficient to exclude other hypotheses\n, like that of P hrase. Probing Google with query q\n3\n=\nBogart AND Casablanca returns the same number of matched\ndocuments as q. This (most likely) discards the P hrase-hypothesis\n, but not the hypothesis Casablanca AND\n1\nNegation is a binary operator in Web query languages and\nits interpretation is given by 'AND NOT', that is, A B is\na synonym for A B (the latter using the unary ).\n1115\nBogart . To exclude this one, even more queries should be\nsent to Google, like q\n4\n= Casablanca AND , and so on. Sim-ilarly\nin IMDB, the fact that query q matches more documents\nthan q\n1\nand q\n2\nsuggests that q is processed as a disjunction\n, but it can not tell whether AND is taken literally\nor ignored. A deeper analysis requires further probing\nIMDB with, for example, queries q\n4\n= Casablanca AND or\nq\n5\n= Casablanca Bogart to compare their match numbers to\nthe ones of previous queries and decide about the AND .\nOur approach to the automatic acquisition of Web query\nlanguages formalizes and generalizes the idea described in\nExamples 1 and 2. We build a learning system that trains\na number of classifiers with data from manually annotated\nsites to automatically determine supported operators and\ntheir syntaxes at a new site. The training data from annotated\nsites includes an ensemble of test queries together\nwith the corresponding match numbers.\nPROBLEM DEFINITION\nAssume an information provider P supports some or all\nquery operators in O; these operators form a set O\nP\n, O\nP\n\nO and allow us to compose a set of complex queries Q(O\nP\n).\nFor any operator o\ni\nO\nP\n, P accepts one or more syntactical\nencodings, s\ni1\n, s\ni2\n, . . .. The set {s\nij\n} of accepted syntaxes\nfor o\ni\nO\nP\nis denoted S\ni\n. The interpretation I\nP\nof operator\nset O\nP\nis defined as I\nP\n= {(o\ni\n, s\nij\n)|o\ni\nO\nP\n, s\nij\nS\ni\n} =\n{(o\ni\n, S\ni\n)|o\ni\nO\nP\n}. Interpretation I\nP\nis monovalued if each\noperator has at most one syntax, i.e, |S\ni\n| = 1 for all o\ni\nO\nP\n.\nI\nP\nis multivalued, if it allows multiple syntaxes for at least\none operator, i.e., o\ni\nO\nP\nsuch that |S\ni\n| > 1. In Google,\nthe Boolean conjunction can be encoded by both AND and\n(whitespace). Therefore, for any query terms A and B,\nboth query strings A B and A AND B are interpreted\nas A B. I\nGoogle\ncontains (, AND ) and (,\n) and is a\nmultivalued interpretation.\nWe distinguish between ambiguous and unambiguous interpretations\n. A pair of distinct operator encodings (o\ni\n, s\nij\n)\nand (o\nk\n, s\nkl\n) is ambiguous if the two operators have the same\nsyntax: o\ni\n= o\nk\nbut s\nij\n= s\nkl\n. An interpretation I\nP\nis ambiguous\n, if it contains at least one ambiguous pair of encodings\n. 
An interpretation I is unambiguous, if for any pair of\nencodings (o\ni\n, s\nij\n) and (o\nk\n, s\nkl\n) in I, o\ni\n= o\nk\ns\nij\n= s\nkl\n.\nAmbiguous interpretations can be observed with Web providers\nthat interpret query strings dynamically, when the final\ndecision depends on results of the query execution with\ndifferent retrieval models\n2\n. However, the major part of Web\nproviders interpret query strings unambiguously and our\nmethod copes with unambiguous interpretations only. Further\ndiscussion on ambiguous interpretations is in Section 7.\nLike with the query operators, we select the most frequent\nsyntaxes on the Web, S = { Default\n3\n, ,\n, AND , + ,\nOR , NOT , - , \"\" (quote marks)}. Like set O, these\nsyntaxes have been selected after verification of hundreds\nof Web providers. Set S is easily extendable to alternative\nsyntaxes, like ones employed by non-English providers. For\n2\nCiteseer at http://citeseer.nj.nec.com/cs is an example\nof ambiguous interpretation. By default, it interprets A\nB as a conjunction; however if A B matches zero documents\n, the query is interpreted as disjunction.\n3\n'Default' refers to the absence of any syntax; it assumes\nthe processing of plain terms.\nexample, French providers may use ET for the Boolean\nconjunction and OU for the disjunction.\nThe theoretical framework for the query language acquisition\nis derived from the learning of an unknown concept\nby querying [5]. Assume that provider P supports the basic\noperators in O; complex queries composed from the basic\noperators form a set Q(O). For the document collection at\nP , query q Q(O) constrains a subset P (q) of documents\nmatching q. An abstract query q Q(O) is mapped into\na textual string with a mapping M : O 2\nS\nthat defines\n(possibly multiple) syntaxes for operators in O. The mapping\nof a complex query q is denoted m(q), the set of mapped\nqueries is denoted Q(S) = Q(M (O)).\nThe sets O and S are assumed to be known, whereas the\nmapping M is unknown. We are given an oracle that can\nbe queried with a mapped query m(q) Q(S) on the size\nof subset P (q), oracle(m(q)) = |P (q)|. By observing the or-acle's\nresponses to queries, the learning system should produce\na hypothesis on the mapping M , which should be as\nclose as possible to the correct one.\nThe identification of the mapping M may be simple under\ncertain circumstances. Below we show an example of reconstruction\nwhen O\nP\nincludes a particular subset of operators\nand the oracle is noiseless.\nExample 3. Let O include the three Boolean operators\n(, and ) and P hrase. Then, for a given syntax set S,\nany unambiguous mapping M : O 2\nS\ncan be exactly identified\nif the oracle is noise-less\n4\n. 
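The following minimal sketch (the data layout and the names are ours) records an interpretation as a set of (operator, syntax) pairs and checks the two properties just defined, multivaluedness and unambiguity:

OPERATORS = {"Case", "Stem", "Conjunction", "Disjunction", "Negation", "Phrase"}
SYNTAXES = {"Default", "*", "whitespace", "AND", "+", "OR", "NOT", "-", '""'}

def is_unambiguous(interpretation):
    """True iff no syntax is bound to two different operators."""
    seen = {}
    for op, syntax in interpretation:
        if syntax in seen and seen[syntax] != op:
            return False
        seen[syntax] = op
    return True

def is_multivalued(interpretation):
    """True iff some operator has more than one accepted syntax."""
    counts = {}
    for op, _ in interpretation:
        counts[op] = counts.get(op, 0) + 1
    return any(c > 1 for c in counts.values())

# Google-like interpretation from the text: conjunction written either as
# 'AND' or as plain whitespace -> multivalued but unambiguous.
i_google = {("Conjunction", "AND"), ("Conjunction", "whitespace")}
assert is_unambiguous(i_google) and is_multivalued(i_google)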
In such a case, subset sizes\nreturned by the oracle fit the Boolean logic on sets.Indeed,\nwhen querying the oracle with terms A and B and syntaxes\nfrom S, the disjunction is distinguishable from other operators\nby the fact that it constrains bigger subsets in a collection\nthan any of terms does:\n|A B| |A|, |A B| |B|\n(1)\nFurthermore, among three other operators, the conjunction\nis recognized by its commutativity:\n|A B| = |B A|\n(2)\nFinally, the difference between negation and phrases is detected\nby the basic equation linking three Boolean operators:\n|A B| = |AB| + |A B| + |BA|\n(3)\nSizes of subsets constrained by the Boolean operators satisfy\nthe disequation (1) and equations (2), (3) for any pair of\nA and B, so one can easily design a learning system that\nexactly identifies an unambiguous mapping M after only a\nfew probing queries.\nUnfortunately, easy identification of the mapping M is\nrather an exception on the real Web, where few if any of the\nassumptions made in Example 3 become true. First, any\nchange in the operator set O\np\nmakes the exact reconstruction\nless obvious. If the conjunction and/or disjunction are\nnot supported, then the size of A B (or A B) is unavailable\nand equation (3) cannot help distinguish negation\nfrom phrases. In cases like this, the identification of supported\nsyntaxes requires an analysis of the semantic correlation\nbetween query terms A and B and guessing on their\nco-occurrence in (unknown) document collections.\n4\nOracle noiseless assumes the pure Boolean logics, with no\nquery preprocessing, like the stopword removal.\n1116\nSecond, Web query interfaces that play the role of oracles\nand return sizes of subsets constrained by queries m(q)\nQ(S) are rarely noiseless.\nWhen probing interfaces with\ntest queries, the match numbers may violate equations (2)\nand (3). Most violations happen because converting query\nstrings into queries on collections hides the stop-word removal\nand term stemming. It is not clear, whether queries\nlike A AND B are interpreted as one (A is a stopword),\ntwo, or three terms. Moreover, for the performance reasons,\nreal match numbers are often replaced by their estimations\nwhich are calculated using various collection statistics [13],\nwithout the real retrieval of documents matching the query.\nLEARNING SYSTEM\nTo automatically determine supported query operators,\nwe reduce the overall problem to a set of classification tasks,\nwhere each task is associated with recognizing a specific\nquery operator or syntax, and where some standard learning\nalgorithms like SVM, k-nearest neighbors or decision trees\ncan be applied. To build the classifiers, we collect and annotate\na set of Web providers. We develop a set of test queries\nand probe all selected providers with the test queries. We\ntrain the classifiers with query matches for test queries. 
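As an illustration of how the identities (1)-(3) can be tested against an oracle, the sketch below assumes a callable that returns the match number for a query string, and a small tolerance to accommodate the noise discussed above; the query-string formats used here ("A OR B", "A NOT B", ...) are assumptions of the sketch, since in the actual setting the accepted syntax is precisely what is unknown:

def consistent_with_boolean_model(oracle, a, b, tol=0.05):
    """Check the three set-theoretic relations on oracle counts, up to a
    relative tolerance that absorbs estimated or preprocessed match numbers."""
    n_a, n_b = oracle(a), oracle(b)
    n_or = oracle(f"{a} OR {b}")
    n_and = oracle(f"{a} AND {b}")
    n_and_rev = oracle(f"{b} AND {a}")
    n_a_not_b = oracle(f"{a} NOT {b}")
    n_b_not_a = oracle(f"{b} NOT {a}")

    def close(x, y):
        return abs(x - y) <= tol * max(x, y, 1)

    disjunction_ok = n_or >= n_a and n_or >= n_b                    # inequality (1)
    commutative_ok = close(n_and, n_and_rev)                        # equation  (2)
    partition_ok = close(n_or, n_a_not_b + n_and + n_b_not_a)       # equation  (3)
    return disjunction_ok and commutative_ok and partition_ok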
For\nany new provider, we first probe it with the test queries.\nQuery matches returned by the provider upon test queries\nare used to automatically classify operators and syntaxes\nand produce an unambiguous interpretation for P .\nTo achieve a good level of classification accuracy, we investigate\ndifferent aspects of the learning system including\nthe target function, probe queries, data preparation, and\nfeature encoding and selection.\n4.1\nTarget function\nDue to the multivalued relationships between query operators\nand syntaxes, the target function for our learning\nsystem has two alternatives, one for the direct mapping M\nand the other one for the inverted mapping M\n-1\n:\nT\n1\n: O 2\nS\n. T\n1\ntargets the unknown mapping M ;\nit assigns zero or more syntaxes to each operator in\nO. T\n1\nbuilds a multi-value classifier for every o\ni\nO,\nor alternatively, a set of binary classifiers for all valid\ncombinations (o\ni\n, s\nj\n), o\ni\nO, s\nj\nS(o\ni\n).\nT\n2\n: S O. T\n2\ntargets the inverted mapping M\n-1\n; it\nassigns at most one operator to every syntax s\nj\nS.\nEither target function gets implemented as a set of classifiers\n, operator classifiers for T\n1\nor syntax classifiers for T\n2\n.\nClassifiers are trained with match numbers for probe queries\nfrom annotated providers.\nFor a new provider P , either\nfunction produces a hypothesis I\nT\n(P ) that approximates\nthe real interpretation I\nP\n. The major difference between\nT\n1\nand T\n2\nis that the former can produce ambiguous interpretations\n, while the output of T\n2\nis always unambiguous.\nIndeed, two operator classifiers with T\n1\ncan output the same\nsyntax leading to ambiguity, while each classifier in T\n2\noutputs\nat most, one operator for one syntax. In experiments\nwe tested both functions, though when building the learning\nsystem we put an emphasis on T\n2\n, which is free of ambiguity.\nTo build syntax classifiers for the target function T\n2\n, we\nshould consider beyond \"good\" classification cases for the\noperators in O and include some \"real-world\" cases where\nproviders process syntaxes in S literally or simply ignore\nthem. For certain providers, it is difficult to find any valid\ninterpretation. In the learning system, we extend the set\nof possible interpretations of syntaxes in S by three more\ncases, O = O{Ignored, Literal, U nknown}. Syntaxes in\nS have different alternatives for their interpretation; below\nwe revisit some syntaxes and report possible matches in O\nas they are specified in the learning system.\nDefault : Case sensitivity for query terms: possible values\nare case-insensitive (Case) or case-sensitive (Literal).\n* : This unary operator can be interpreted as Stem, when\ni(A*) = Stem(A), Ignored when i(A*) = i(A), and\nLiteral, when A* is accepted as one term.\n: Whitespace is often a default for another syntax in\nS. 
Three possible interpretations include the Boolean\nconjunction when i( A B )= A B, the Boolean disjunction\nwhen i( A B )= A B, and P hrase when\ni( A B )= P hrase (A,B).\nAND : Three alternatives here are the conjunction when\ni( A AND B )= A B, Ignored, when AND is ignored\nand the interpretation goes with the whitespace\nmeaning, i( A AND B )= i( A B )= M\n-1\n(' ')\n(A, B), and Literal when i( A AND B )= M\n-1\n(\n)\n(A, AND ,B).\n\" \" (Quote marks): Two possible interpretations are P hrase,\nwhen i( \"A B\" )= P hrase(A,B), and Ignore when quote\nmarks are ignored and terms are interpreted with the\nwhitespace, i( \"A B\" )= i( A B ) = M\n-1\n(\n) (A, B).\nA similar analysis is done for the syntaxes + ,\nOR ,\nNOT and - . Additionally, all syntaxes for binary operators\ncan be labeled as U nknown.\n4.2\nProbing with test queries\nTo train syntax classifiers for T\n2\n, we collect data from annotated\nsites by probing their interfaces and extracting the\nmatch numbers. Probing has a fairly low cost, but requires\na certain policy when selecting terms for test queries to provide\nmeaningful data for the learning. We define a set R\nof model queries that contain syntaxes in S and parameter\nterms A and B, which are later bound with real terms.\nWe form the set R by first selecting well-formed queries\nthat contain all syntaxes we want to classify. Second, we add\nqueries that are term permutations of previously selected\nqueries, for example the permutation B A for query A\nB . Finally, we add model queries that are not well-formed,\nbut appear helpful for building accurate classification rules.\nBelow, the set R of model queries is illustrated using the\npair of terms A and B; model queries are split into three\ngroups containing one, two or three words:\nOne word queries: A , B ,UpperCase(A), A* , Stem(A).\nTwo word queries: A B , B A , \"A B\" , \"B A\" , +A\n+B , +B +A , A -B , A AND , A OR , A NOT .\nThree word queries: A AND B , B AND A , A OR\nB , B OR A , A NOT B , B NOT A .\nIn total, the set R is composed of 22 model queries, all in\nlower case, except UpperCase (A), which is an upper case of\nterm A. Six queries in R are permutations of other queries\n1117\nand three queries are (purposely) not well-formed. These\nqueries A AND , A OR , A NOT are selected to help\ndetect Literal-cases for AND , OR , NOT .\nProbe queries are obtained from the model queries by replacing\nparameters A and B with specific query terms, like\nknowledge and retrieval . These 22 probe queries form\na probe package denoted R\nA,B\n.\nFor a provider P , probe\nqueries together with corresponding match numbers form\nthe elementary feature set F\n0\nA,B\n= {(m(q\ni\n), oracle(P (q\ni\n))),\nw(q\ni\n) R\nA,B\n}. Query terms are selected from a generic\nEnglish vocabulary with all standard stopwords excluded.\nOne site can be probed with one or more probe packages,\nall packages using different term pairs (A,B).\nTo probe the sites with test queries, we bind model queries\nin R with query terms. To obtain meaningful training data,\nquery terms should not be common stopwords, such as and\nor the . As the term co-occurrence in a provider's document\ncollection is unknown, we select pairs with different degrees\nof semantic correlation. 
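The probe package R_{A,B} is obtained by binding the parameter terms into the model queries listed above; the templates below are our transcription of that list (the stemming function is left as a stub):

MODEL_QUERIES = [
    "{A}", "{B}", "{A_UPPER}", "{A}*", "{A_STEM}",                 # one-word queries
    "{A} {B}", "{B} {A}", '"{A} {B}"', '"{B} {A}"',
    "+{A} +{B}", "+{B} +{A}", "{A} -{B}",
    "{A} AND", "{A} OR", "{A} NOT",                                # two-word queries
    "{A} AND {B}", "{B} AND {A}", "{A} OR {B}", "{B} OR {A}",
    "{A} NOT {B}", "{B} NOT {A}",                                  # three-word queries
]

def probe_package(a, b, stem=lambda t: t):
    """Instantiate the model queries with a concrete term pair (a, b)."""
    return [q.format(A=a, B=b, A_UPPER=a.upper(), A_STEM=stem(a))
            for q in MODEL_QUERIES]

queries = probe_package("knowledge", "retrieval")
print(queries[:3])   # ['knowledge', 'retrieval', 'KNOWLEDGE']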
Here, the term pairs fall into three\ncategories:\nC\n1\n: terms that form a phrase (such as A= information\nand B= retrieval );\nC\n2\n: terms that do not form a phrase but occur in the\nsame document ( knowledge and wireless );\nC\n3\n: terms that rarely occur in the same document\n(such as cancer and wireless ).\nThese three categories can be expressed through term co-occurrence\nin some generic document collection P\nG\n.\nWe\nre-use our query probing component to establish criteria for\nterm selection for the three categories. A pair of terms (A,\nB) is in category C\n1\n(phrase co-occurrence) if the match\nnumber for P hrase(A, B) is comparable with the conjunction\nA B, that is\n|P\nG\n(P hrase(A,B))|\n|P\nG\n(AB)|\n> , for some threshold\n0 < < 1. A term pair (A, B) is in category C\n2\n(high co-occurrence\n) if the terms are not co-occurred in a phrase,\nbut their conjunction is comparable with either A or B,\n|P\nG\n(AB)|\nmin{|P\nG\n(A)|,|P\nG\n(B)|}\n> , for some 0 < < 1. If pair (A,B)\ndoes not fit the conditions for categories C\n1\nand C\n2\n, then it\nis in category C\n3\n(low co-occurrence). For our experiments,\nwe have selected Google as generic document collection G\nand set the values of and both to 0.01.\n4.3\nElementary features\nMatch numbers for probe queries in F\n0\nA,B\nrepresent elementary\nfeatures that can be directly used to train classifiers\n. Unfortunately, this often leads to poor results. The\nreason is that Web resources considerably differ in size and,\ntherefore, the query matches from different resources are of\ndifferent magnitude and thus hardly comparable. A query\nmay match millions of documents on Google, but only a\nfew at a small local resource. To leverage the processing of\nquery matches from resources of different size, we develop\ntwo alternative methods for the feature encoding.\nIn the first approach, we normalize the query matches in\nF\n0\nby the maximum number of matches for the two basic\nqueries A and B . We thus obtain features F\n1\nwith values\nmostly between 0 and 1 (except for queries related to\nthe Boolean disjunction). The second approach F\n2\nto the\nfeature encoding, uses the \"less-equal-greater\"-relationship\nbetween any two probe queries in a probe package. This\nproduces a three-value feature for each pair of test queries.\n4.4\nFeature selection\nThe refinement of raw features produces l=22 refined real\nvalue features with F\n1\nand\nl(l-1)\n2\n= 231 three-value features\nwith F\n2\n. The basic approach is to train each classifier with\nthe entire feature set F\n0\n, F\n1\nor F\n2\n. However, because of the\nnoise in the data, building accurate classifiers may require a\nlot of training data. To control the amount of training data\nand enhance the quality of classification rules, we proceed\nwith two methods of feature selection. First, we distinguish\nbetween relevant and irrelevant features for a given classifier\nand remove irrelevant ones. Second, beyond the direct\nfeature filtering, we use prior knowledge and classify new\nsyntaxes using previously classified ones.\nRemoving irrelevant features. The definition of relevant\nfeatures requires establishing syntactical dependencies\nbetween model queries in R and semantic relationships between\nsyntaxes in S. Model query r\ni\nR syntactically depends\non model query r\nj\nif r\ni\nincludes syntaxes present in\nr\nj\n. 
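A minimal sketch of the two refinements F_1 and F_2 described in Section 4.3 is given below (the function names are ours; 'matches' is the vector of match numbers for one probe package, with the positions of the single-term queries 'A' and 'B' given explicitly):

def encode_f1(matches, idx_a=0, idx_b=1):
    """F1: normalise every match number by the larger of the two single-term
    counts, so that resources of very different sizes become comparable."""
    scale = max(matches[idx_a], matches[idx_b], 1)
    return [m / scale for m in matches]

def encode_f2(matches):
    """F2: three-valued pairwise comparison, one feature per unordered pair
    of probe queries, taking values -1 (<), 0 (=), +1 (>).  For 22 probe
    queries this yields 231 features."""
    features = []
    for i in range(len(matches)):
        for j in range(i + 1, len(matches)):
            diff = matches[i] - matches[j]
            features.append(0 if diff == 0 else (1 if diff > 0 else -1))
    return features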
Syntaxes s\ni\nand s\nj\nin S are semantically related if they\ncan be interpreted with the same operator in O.\nWe define the relevant feature set F\ni\nfor syntax s\ni\nas containing\nthree parts, F S(s\ni\n) = F S\ni\n= F S\n0\ni\n+ F S\n1\ni\n+ F S\n2\ni\n.\nF S\n0\ni\nsimply contains all model queries r\nj\nR that involve\nsyntax s\ni\n, for example F S\n0\n( AND )= { A AND B , B AND\nA , A AND }. Next, F S\n1\ni\ncontains model queries for syntactically\ndependent syntaxes. Actually, F S\n1\ni\ncontains the\ntwo model queries A B and B A for all binary syntaxes.\nFinally, F S\n2\n1\ncontains the model queries for semantically related\nsyntaxes. For example, F S\n2\n( AND ) = F S\n0\n( + ), and\nvice versa, F S\n2\n( + )=F S\n0\n( AND ).\nUse of prior knowledge. Beyond removing irrelevant\nfeatures, it is possible to benefit from the dependencies between\nsyntaxes established in Section 4.1. For example, the\nLiteral-cases for OR and AND depend on the interpretation\nof whitespaces. The classification of AND as Literal\nbecomes simpler when the system already knows that, for\nexample,\nis interpreted as conjunction. To use the prior\nknowledge, we alter the training and classification process.\nWe impose an order on the syntaxes in S. When training\nor using syntax classifiers, we use the classification results\nof previous syntaxes.\nWe convert the syntax set in the ordered list S\nO\n= (Default\n, ,\n, \"\" , AND , + , OR ', NOT , - ) and impose\nthe order on how the classifiers are trained and used\nfor the classification. In the prior knowledge approach, the\nfeature set used to train the classifier for syntax s\ni\nS\nO\nwill\ninclude the classifications of all s\nj\npreceding s\ni\nin S\nO\n.\nRemoving irrelevant features and using prior knowledge\nare two independent methods for feature selection and can\nbe applied separately or together. This allows us to consider\nfour feature selection methods for training classifiers and\nclassifying new sites:\n1. Full feature set, F f s\ni\n= F , where F is a selected feature\nencoding, F\n0\n, F\n1\nor F\n2\n;\n2. Relevant feature set, Rf s\ni\n= F S\ni\n;\n3. Prior knowledge features, P Kf s\ni\n=F M\n-1\n(s\nj\n), j < i.\n4. Relevant prior knowledge feature set RP Kf s\ni\n= F S\ni\n\nM\n-1\n(s\nj\n), j < i.\nEXPERIMENTAL EVALUATION\nTo run experiments, we collected and annotated 36 Web\nsites with search interfaces. All sites report the match num-1118\nbers for user queries and unambiguously interpret their query\nlanguages. Selected sites represent a wide spectrum of supported\noperator sets. For each site, we annotated all supported\noperators and their syntaxes. For the extraction of\nthe match numbers from HTML pages we used the Xerox\nIWrap wrapper toolkit [7, 12]. Out of 36 providers, only 4\nsupport monovalued interpretations; in the other 32 cases,\nat least one operator has two or more syntaxes.\nFigure 1: T\n1\nand T\n2\ntarget functions.\nFigure 2: Three feature encodings for DT, KNN and\nSVM.\n5.1\nExperimental framework\nIn all experiments we estimate the classification accuracy\nfor the individual operators in O (with T\n1\n) and the syntaxes\nin S (with T\n2\n).\nWe also estimate the mean accuracy\nfor the target functions T\n1\nand T\n2\n. Experiments are\nconducted using the cross-validation method. 36 annotated\nsites are split into N =9 groups, S\n1\n, S\n2\n,. . . , S\nN\n. We run N\nexperiments; in experiment i, classifiers are trained with the\ngroups S\n1\n,. . . ,S\ni-1\n, S\ni+1\n,. . . 
,S\nN\nand then tested with sites\nfrom group S\ni\n. Accuracy (precision) values over N experiments\nare averaged for each operator/syntax classifier and\nform the individual accuracies. The average of individual\naccuracies over O/S gives the mean accuracy.\nWe test the learning system by varying the system parameters\nintroduced in Section 4. We train and test classifiers\nwith three different learning algorithms: decision trees\nfrom Borgelt's package (DT), k-nearest neighbors algorithm\n(KNN), and support vector machines (SVM)\n5\n. The following\nlist recalls the test parameters and possible options.\n1. Target function: T\n1\nincludes |O |=9 operator classifiers\n; multivalued interpretations are implemented as\nclassifications with subsets of |O |. For T\n2\n, the system\nincludes |S|=9 syntax classifiers.\n2. Feature encoding: The three different feature encodings\n(see Section 4.3) include the raw match numbers\ngiven by F\n0\n, the normalized match numbers given by\nF\n1\n, and three-value feature comparison given by F\n2\n.\n3. Feature selection: The four methods presented in Section\n4.4 include Ffs (full feature set), Rfs (relevant\nfeature set), PKfs (prior knowledge feature set) and\nRPKfs (relevant prior knowledge feature set).\n4. Term selection: We test three term selection categories\n, C\n1\n, C\n2\nand C\n3\nintroduced in Section 4.2. Additionally\n, we test the mixture of the three categories,\nwhen three term pairs are used to probe a site, i.e. one\nterm pair from each category C\n1\n, C\n2\nand C\n3\n.\nExperiments have been run for all parameter combinations\n; most combinations achieve mean accuracy superior to\n60%. The four system parameters appear to be uncorrelated\nin their impact on the classification results. To figure out\nthe most interesting ones, we determine overall \"winners\"\nfor each parameter, except for the learning algorithm. The\nwinners are T\n2\ntarget function, F\n2\nfeature encoding, and\nM ixed term selection.\nRP Kf s feature selection behaves\nbest for DT and KNN and P Kf s feature selection is the\nwinner for SVM. We report more detail below.\n5.2\nExperimental Results\nDecision trees are induced by the Borgelt's software; they\nare then pruned using the confidence level (p=0.5) pruning\nmethod. In SVM, linear kernel functions have been used.\nFor the KNN method, we report results for k=3 which behaves\nbetter that k=1, 5 and 10. Because the implemen-tation\nof the KNN algorithm cannot process large sets of\nfeatures, we were not able to test the Ffs and PKfs feature\nselection methods.\nAll three learning algorithms show a similar performance.\n3NN slightly outperforms DT and SVM for the \"winner\"\ncombination (86.42% against 84.26% and 79.74%), however\nit is often less accurate with other parameter combinations.\n5\nAvailable\nat\nhttp://fuzzy.cs.uni-magdeburg.de/\nborgelt/software.html,\nhttp://www.dontveter.com/\nnnsoft/nnsoft.html,\nhttp://svmlight.joachims.org/,\nrespectively.\n1119\nTarget functions and feature selection. The target functions\nT\n1\nand T\n2\nimplement alternative approaches to the\nquery language acquisition; T\n1\nuses operator classifiers while\nT\n2\nuses syntax classifiers. As seen in Section 4, T\n2\nhas an\nadvantage over T\n1\nbecause it avoids multivalued classification\nand outputs only unambiguous interpretations, while\nthe output of T\n1\nshould be further tested for unambiguity.\nThus we have built the learning system for T\n2\n. 
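The cross-validation protocol described above can be summarized by the following schematic loop (the 'train' and 'classify' callables stand in for whichever learner is being evaluated, and the fold construction and site representation are ours):

def cross_validate(sites, syntaxes, train, classify, n_folds=9):
    """9-fold cross-validation over the annotated sites: average per-syntax
    accuracies over the folds, then take their mean."""
    folds = [sites[i::n_folds] for i in range(n_folds)]
    per_syntax = {s: [] for s in syntaxes}
    for i in range(n_folds):
        test = folds[i]
        training = [site for j, f in enumerate(folds) if j != i for site in f]
        models = train(training)                     # one classifier per syntax
        for s in syntaxes:
            correct = sum(classify(models, site, s) == site.annotation[s]
                          for site in test)
            per_syntax[s].append(correct / len(test))
    individual = {s: sum(a) / len(a) for s, a in per_syntax.items()}
    mean_accuracy = sum(individual.values()) / len(individual)
    return individual, mean_accuracy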
Series of\nexperiments conducted with T\n1\nand T\n2\nconfirm the superiority\nof T\n2\n. As operator classifiers in T\n1\nare trained indepen-dently\n, their combined output does not guarantee unambiguity\n. Unlike T\n2\n, high accuracy of individual classifiers may\nnot be translated into global good accuracy, because one\nmisclassification may produce an ambiguous interpretation\nand undermine the good performance of other classifiers.\nIn practice, we test the output of operator classifiers of\nT\n1\nand discard those that form ambiguous interpretations.\nThis gives a 2% to 10% drop in the mean accuracy. Figure 1\nplots mean accuracies for T\n1\nand T\n2\nfor all feature selection\nmethods (with fixed F\n2\nfeature encoding and M ixed term\nselection) and the three learning methods (only Rfs and RPKfs\ncould be measured for 3NN algorithm). Within feature\nselection methods, keeping relevant features spurs the performance\nof DT and 3NN better than the prior knowledge,\nwith their combination being the winner. For SVM, instead,\nadding prior knowledge to the full feature set is the best\nchoice. In the following, all reported experiments refer to\nthe target function T\n2\n.\nFeature encoding. Previous figures compared the mean\naccuracies. We unfold the mean value and plot individual\naccuracies for the syntaxes in S. Figure 2 plots accuracy values\nfor the three feature encoding methods (for T\n2\n-RP Kf sM\nixed combination for DT and 3NN and T\n2\n-P Kf s-M ixed\ncombination for SVM). As the figure shows, the pair-wise\ncomparison F\n2\nperforms best with respect to the raw and\nnormalized match numbers.\nTerm selection. We complete the analysis of system parameters\nby testing four methods of term selection. They\ninclude categories C\n1\n, C\n2\nand C\n3\nand M ixed. Figure 3 plots\nmean accuracies for all learning algorithms and four term selection\nmethods, giving M ixed as the winner.\nFigure 3:\nFour term selection methods for DT,\nKNN, and SVM.\n5.3\nBias in training data\nAmong the syntaxes in S, all methods show only little difference\nfor the unary operators Def ault and Case. Among\nthe syntaxes for binary operators, certain (\n, AND and\n+ ) are easier to detect than others ( \" \" , OR and NOT ).\nHowever, this phenomenon is not linked to the nature of the\noperators or their syntaxes, but rather can be explained by\nthe bias in training data. In Table 1, we unfold the individual\naccuracies and show results for each case (s, o), s\nS, o O in the annotated data. Each non-empty cell in\nTable 1 reports the occurrence of the case (in brackets) and\nits classification accuracy. We can observe a definitive bias\nof high accuracy for more frequent cases; instead, rare cases\nhave a very low accuracy. This explains good results for\n,\nAND and + , where occurrences are fairly split between\ntwo main cases. For other syntaxes instead, the high error\nratio for rare cases decreases the individual accuracy.\nRELATED WORK\nThe Hidden Web has emerged as a research trend and different\nresearch groups have started to address various problems\nrelevant to organizing Hidden Web resources [15, 14,\n17, 19, 20].\nOne focus is crawling; [17] presents a task-specific\nand human-assisted approach to the crawling of the\nHidden Web.\nThe crawler searches for Hidden Web resources\nrelevant to a specific domain. 
The identification is\nachieved by selecting domain-specific keywords from a fuzzy\nset and assigning them to elements of HTML forms; the\nresource is judged relevant if it returns valid search results.\nAnother important task is the classification of Hidden\nWeb resources. [15, 14] and [19] have developed approaches\nto this problem based on query probing.\nMoreover, [15]\nmakes use of the number of documents matching a submitted\nconjunction query, as does our approach.\nInstead of\nquery languages, they use match numbers to reason about\nthe relevance of a provider for a given category.\nOriginally, the query probing has been used for the automatic\nlanguage model discovery in [9], it probed documents\nin a collection to analyze the content of the result pages.\n[14] extends the work in [15] to the problem of database\nselection by computing content summaries based on probing\n. Once query language interfaces are understood, meaningful\nquery transformation becomes possible. [11] describes\none way of transforming a front-end query into subsuming\nqueries that are supported by the sources and a way to filter\nout incorrectly returned documents. In [16], interaction\nwith online-vendors is automated.\nIn the close domain of meta-searching, the declaration of a\nsearch resource's query features is often coupled with methods\nof converting/translating meta-search queries into the\nresource's native queries. Considerable research effort has\nbeen devoted to minimizing the possible overheads of query\ntranslation when the meta-search and search resource differ\nin supporting basic query features [11]. In all these methods,\nthe manual discovery of the query features is assumed.\nIn information mediation systems that query Web resources\nto materialize views on hidden data [20], one approach is to\nreconstruct an image of a resource's database. Because of\na restricted Web interface, a critical problem is the entire\nor partial reconstruction of the database image without the\nunnecessary overload of the Web servers. [8] builds efficient\nquery covers that are accessible through nearest-neighbor\ninterfaces for the specific domain of spatial databases.\nOPEN QUESTIONS\nThe experiments have raised a number of open questions\nthat require further research. Below we list some of them.\nStopwords. In tests, common English stopwords were\nexcluded from probing. However, the set of stopwords is\n1120\nOperators\nSyntaxes\ndefault\n\n\"\"\nAND\nOR\n+\nNOT\nCase\n97.6(20)\nStemming\n41.1(9)\nConjunction\n92.9(15)\n95.2(27)\n100(16)\nDisjunction\n100(19)\n87.8(17)\nNegation\n91.0(15)\n94.9(26)\nPhrase\n0(1)\n90.4(28)\nLiteral\n100(16)\n79.6(7)\n91.7(11)\n69.2(14)\nIgnored\n79.9(27)\n18.5(4)\n3.7(1)\n55.6(4)\n100(19)\n25.0(4)\n81.0(7)\nUnknown\n0(1)\n4.1(4)\n0(1)\n19.4(4)\n0(1)\n0(3)\n0(3)\nTable 1: Classification accuracy and occurrence for all syntax+interpretation cases (DT,T\n2\n,F\n2\n,RPKfs,M ixed).\noften domain-dependent; this should be taken into account\nwhen generating test queries. A more difficult case is when\na resource treats terms as stopwords in a certain context.\nFor example, Google accepts the term \"WWW\" when it is\nqueried alone and ignores it when the term is conjuncted\nwith other terms. Such query-dependent treatment of stopwords\nis considered as noise in the current system.\nAcquiring other operators. We have addressed the\nset of most frequently used query operators. 
Other operators\ndefined by existing document retrieval models, like\nproximity operators, can be added to the operator set and\nprocessed in a similar manner. Two remarks concerning less\nfrequent operators are that their syntactical encodings may\nvary even more than for Boolean operators, and, more im-portantly\n, finding sufficient training data to build reliable\nclassifiers may be technically difficult.\nQuery composition. The next issue is the manner in\nwhich basic query operators are combined to form complex\nqueries. The most frequent manner on the Web is the use\nof parentheses or a certain operator priority. How to detect\nthis remains an open problem at this point.\nAmbiguous interpretations. Recognizing ambiguous\ninterpretations is the most difficult problem. One example\nis Citeseer, which interprets whitespaces as conjunction by\ndefault, but switches to disjunction if the conjunction query\nmatches no documents. Some other Web providers behave\nin the same or a similar manner. We will need to extend\nthe learning system to include a possibility of triggering the\nretrieval model as a function of the oracle answers.\nCONCLUSION\nWe have addressed the problem of automatic recognition\nof operators and syntaxes supported by query languages of\nWeb resources. We have developed a machine learning approach\nbased on reformulation of the entire problem as a set\nof classification problems. By introducing various refined\nsolutions for the target function, feature encoding, and feature\nselection, we have achieved 86% mean accuracy for the\nset of the most frequent operators and syntaxes. Further\nimprovement in the accuracy is possible with better preparation\nof annotated sites, but this is limited because of the\ncomplexity of the a-priori unknown operator composition\nand the noise produced by the hidden query preprocessing.\nREFERENCES\n[1] The InvisibleWeb, http://www.invisibleweb.com/.\n[2] BrightPlanet, http://www.brightplanet.com/.\n[3] CompletePlanet, http://www.completeplanet.com/.\n[4] G. Alonso. Myths around web services. IEEE Bulletin\non Data Engineering, 25(4):39, 2002.\n[5] D. Angluin. Queries and concept learning. Machine\nLearning, 2(4):319342, 1987.\n[6] M. K. Bergman. The Deep Web: Surfacing hidden\nvalue. Journal of Electronic Publishing, 7(1), 2001.\n[7] D. Bredelet and B. Roustant. Java IWrap: Wrapper\ninduction by grammar learning. Master's thesis,\nENSIMAG Grenoble, 2000.\n[8] S. Byers, J. Freire, and C. T. Silva. Efficient\nacquisition of web data through restricted query\ninterfaces. In Proc. WWW Conf., China, May 2001.\n[9] J. P. Callan, M. Connell, and A. Du. Automatic\ndiscovery of language models for text databases. In\nProc. ACM SIGMOD Conf., pp. 479490, June 1999.\n[10] C.-C. K. Chang and H Garcia-Molina. Approximate\nquery translation across heterogeneous information\nsources. In Proc. VLDB Conf., pp. 566577, Cairo,\nEgypt, September 2000.\n[11] C.-C. K. Chang, H. Garcia-Molina, and A. Paepcke.\nBoolean query mapping across heterogeneous\ninformation sources. IEEE TKDE, 8(4):515521, 1996.\n[12] B. Chidlovskii. Automatic repairing of web wrappers\nby combining redundant views. In Proc. of the IEEE\nIntern. Conf. Tools with AI, USA, November 2002.\n[13] L. Gravano, H. Garcia-Molina, and A. Tomasic. Gloss:\nText-source discovery over the internet. ACM TODS,\n24(2):229264, 1999.\n[14] P. G. Ipeirotis and L. Gravano. Distributed search\nover the hidden web: Hierarchical database sampling\nand selection. In Proc. VLDB Conf., pp. 
394405,\nHong Kong, China, August 2002.\n[15] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe,\ncount, and classify: Categorizing hidden-web\ndatabases. In Proc. ACM SIGMOD Conf., pp. 6778,\nSanta Barbara, CA, USA, May 2001.\n[16] M. Perkowitz, R. B. Doorenbos, O. Etzioni, and D. S.\nWeld. Learning to understand information on the\ninternet: An example-based approach. Journal of\nIntelligent Information Systems, 8(2):133153, 1997.\n[17] S. Raghavan and H. Garcia-Molina. Crawling the\nhidden web. In Proc. VLDB Conf., pp. 129138,\nRome, Italy, September 2001.\n[18] D. Tsur. Are web services the next revolution in\ne-commerce? In Proc. VLDB Conf., pp. 614617,\nRome, Italy, September 2001.\n[19] W. Wang, W. Meng, and C. Yu. Concept hierarchy\nbased text database categorization. In Proc. Intern.\nWISE Conf., pp. 283290, China, June 2000.\n[20] R. Yerneni, C. Li, H. Garcia-Molina, and J. Ullman.\nComputing capabilities of mediators. In Proc. ACM\nSIGMOD Conf., pp. 443454, PA, USA, June 1999.\n1121\n", "keywords": "query operators;automatic acquisition;learning;hidden web;search interface;web resources;machine learning;search engine;query languages;Hidden Web;web interfaces"} {"name": "125", "title": "Learning Spatially Variant Dissimilarity (SVaD) Measures", "abstract": "Clustering algorithms typically operate on a feature vector representation of the data and find clusters that are compact with respect to an assumed (dis)similarity measure between the data points in feature space. This makes the type of clusters identified highly dependent on the assumed similarity measure. Building on recent work in this area, we formally define a class of spatially varying dissimilarity measures and propose algorithms to learn the dissimilarity measure automatically from the data. The idea is to identify clusters that are compact with respect to the unknown spatially varying dissimilarity measure. Our experiments show that the proposed algorithms are more stable and achieve better accuracy on various textual data sets when compared with similar algorithms proposed in the literature.", "fulltext": "INTRODUCTION\nClustering plays a major role in data mining as a tool\nto discover structure in data. Object clustering algorithms\noperate on a feature vector representation of the data and\nfind clusters that are compact with respect to an assumed\n(dis)similarity measure between the data points in feature\nspace. As a consequence, the nature of clusters identified by\na clustering algorithm is highly dependent on the assumed\nsimilarity measure. The most commonly used dissimilarity\nmeasure, namely the Euclidean metric, assumes that the dissimilarity\nmeasure is isotropic and spatially invariant, and\nit is effective only when the clusters are roughly spherical\nand all of them have approximately the same size, which is\nrarely the case in practice [8]. The problem of finding non-spherical\nclusters is often addressed by utilizing a feature\nweighting technique. These techniques discover a single set\nof weights such that relevant features are given more importance\nthan irrelevant features. However, in practice, each\ncluster may have a different set of relevant features. We\nconsider Spatially Varying Dissimilarity (SVaD) measures\nto address this problem.\nDiday et. al. [4] proposed the adaptive distance dynamic\nclusters (ADDC) algorithm in this vain. 
A fuzzified version\nof ADDC, popularly known as the Gustafson-Kessel (GK)\nalgorithm [7] uses a dynamically updated covariance matrix\nso that each cluster can have its own norm matrix. These algorithms\ncan deal with hyperelliposoidal clusters of various\nsizes and orientations. The EM algorithm [2] with Gaussian\nprobability distributions can also be used to achieve similar\nresults. However, the above algorithms are computationally\nexpensive for high-dimensional data since they invert covariance\nmatrices in every iteration. Moreover, matrix inversion\ncan be unstable when the data is sparse in relation to the\ndimensionality.\nOne possible solution to the problems of high computation\nand instability arising out of using covariance matrices\nis to force the matrices to be diagonal, which amounts to\nweighting each feature differently in different clusters. While\nthis restricts the dissimilarity measures to have axis parallel\nisometry, the weights also provide a simple interpretation of\nthe clusters in terms of relevant features, which is important\nin knowledge discovery. Examples of such algorithms are\nSCAD and Fuzzy-SKWIC [5, 6], which perform fuzzy clustering\nof data while simultaneously finding feature weights\nin individual clusters.\nIn this paper, we generalize the idea of the feature weighting\napproach to define a class of spatially varying dissimilarity\nmeasures and propose algorithms that learn the dissimilarity\nmeasure automatically from the given data while\nperforming the clustering. The idea is to identify clusters\ninherent in the data that are compact with respect to the\nunknown spatially varying dissimilarity measure. We compare\nthe proposed algorithms with a diagonal version of GK\n(DGK) and a crisp version of SCAD (CSCAD) on a variety\nof data sets. Our algorithms perform better than DGK and\nCSCAD, and use more stable update equations for weights\nthan CSCAD.\nThe rest of the paper is organized as follows. In the next\nsection, we define a general class of dissimilarity measures\n611\nResearch Track Poster\nand formulate two objective functions based on them. In\nSection 3, we derive learning algorithms that optimize the\nobjective functions. We present an experimental study of\nthe proposed algorithms in Section 4. We compare the performance\nof the proposed algorithms with that of DGK and\nCSCAD. These two algorithms are explained in Appendix A.\nFinally, we summarize our contributions and conclude with\nsome future directions in Section 5.\nSPATIALLY VARIANT DISSIMILARITY (SVAD) MEASURES\nWe first define a general class of dissimilarity measures\nand formulate a few objective functions in terms of the given\ndata set. Optimization of the objective functions would result\nin learning the underlying dissimilarity measure.\n2.1\nSVaD Measures\nIn the following definition, we generalize the concept of\ndissimilarity measures in which the weights associated with\nfeatures change over feature space.\nDefinition 2.1 We define the measure of dissimilarity of\nx from y\n1\nto be a weighted sum of M dissimilarity measures\nbetween x and y where the values of the weights depend\non the region from which the dissimilarity is being measured\n. Let P = {R\n1\n, . . . , R\nK\n} be a collection of K regions\nthat partition the feature space, and w\n1\n, w\n2\n, . . ., and w\nK\nbe\nthe weights associated with R\n1\n, R\n2\n, . . ., and R\nK\n, respectively.\nLet g\n1\n, g\n2\n, . . ., and g\nM\nbe M dissimilarity measures. Then,\neach w\nj\n, j = 1, . . . 
, K, is an M -dimensional vector where its\nl-th component, w\njl\nis associated with g\nl\n. Let W denote the\nK-tuple (w\n1\n, . . . , w\nK\n) and let r be a real number. Then, the\ndissimilarity of x from y is given by:\nf\nW\n(x, y)\n\n=\nM\nl=1\nw\nr\njl\ng\nl\n(x, y), if y R\nj\n.\n(1)\nWe refer to f\nW\nas a Spatially Variant Dissimilarity (SVaD)\nmeasure.\nNote that f\nW\nneed not be symmetric even if g\ni\nare symmetric\n. Hence, f\nW\nis not a metric. Moreover, the behavior\nof f\nW\ndepends on the behavior of g\ni\n. There are many ways\nto define g\ni\n. We list two instances of f\nW\n.\nExample 2.1 (Minkowski) Let\nd\nbe the feature space and\nM = d. Let a point x\nd\nbe represented as (x\n1\n, . . . , x\nd\n).\nThen, when g\ni\n(x, y) = |x\ni\n- y\ni\n|\np\nfor i = 1, . . . , d, and p 1,\nthe resulting SVaD measure, f\nM\nW\nis called Minkowski SVaD\n(MSVaD) measure. That is,\nf\nM\nW\n(x, y)\n\n=\nd\nl=1\nw\nr\njl\n|x\nl\n- y\nl\n|\np\n, if y R\nj\n.\n(2)\nOne may note that when w\n1\n= = w\nK\nand p = 2, f\nM\nW\nis the weighted Euclidean distance. When p = 2, we call f\nM\nW\na Euclidean SVaD (ESVaD) measure and denote it by f\nE\nW\n.\n1\nWe use the phrase \"dissimilarity of x from y\" rather than\n\"dissimilarity between x and y\" because we consider a general\nsituation where the dissimilarity measure depends on\nthe location of y. As an example of this situation in text\nmining, when the dissimilarity is measured from a document\non `terrorism' to a document x, a particular set of keywords\nmay be weighted heavily whereas when the dissimilarity is\nmeasured from a document on `football' to x, a different set\nof keywords may be weighted heavily.\nExample 2.2 (Cosine) Let the feature space be the set\nof points with l\n2\nnorm equal to one. That is,\nx\n2\n= 1\nfor all points x in feature space. Then, when g\nl\n(x, y) =\n(1/d - x\nl\ny\nl\n) for l = 1, . . . , d, the resulting SVaD measure\nf\nC\nW\nis called a Cosine SVaD (CSVaD) measure:\nf\nC\nW\n(x, y)\n\n=\nd\ni=1\nw\nr\njl\n(1/d - x\nl\ny\nl\n), if y R\nj\n.\n(3)\nIn the formulation of the objective function below, we use\na set of parameters to represent the regions R\n1\n, R\n2\n, . . ., and\nR\nK\n. Let c\n1\n, c\n2\n, . . ., and c\nK\nbe K points in feature space.\nThen y R\nj\niff\nf\nW\n(y, c\nj\n) < f\nW\n(y, c\ni\n) for i = j.\n(4)\nIn the case of ties, y is assigned to the region with the lowest\nindex. Thus, the K-tuple of points C = (c\n1\n, c\n2\n, . . . , c\nK\n) defines\na partition in feature space. The partition induced by\nthe points in C is similar in nature to a Voronoi tessellation.\nWe use the notation f\nW,C\nwhenever we use the set C to\nparameterize the regions used in the dissimilarity measure.\n2.2\nObjective Function for Clustering\nThe goal of the present work is to identify the spatially\nvarying dissimilarity measure and the associated compact\nclusters simultaneously. It is worth mentioning here that,\nas in the case of any clustering algorithm, the underlying\nassumption in this paper is the existence of such a dissimilarity\nmeasure and clusters for a given data set.\nLet x\n1\n, x\n2\n, . . ., and x\nn\nbe n given data points. Let K be\na given positive integer. 
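Before formulating the clustering objective, a minimal sketch may help make Definition 2.1 concrete for the Euclidean case of Example 2.1. The snippet below is our own illustration, not part of the original formulation: it evaluates the ESVaD measure, with each center c_j taken to lie in its own region R_j so that Eq. (4) reduces to a weighted nearest-center rule, and the toy numbers at the end only show that the same pair (x, y) receives different scores when y falls in different regions.

```python
import numpy as np

def region_of(y, centers, W, r=1):
    """Region index of y under Eq. (4): the center c_j closest to y when
    measured with that region's own weights w_j (ESVaD case, squared differences)."""
    scores = (W ** r * (y - centers) ** 2).sum(axis=1)
    return int(np.argmin(scores))

def esvad(x, y, centers, W, r=1):
    """Dissimilarity of x from y (Definition 2.1 with the Euclidean g_l of
    Example 2.1): a weighted sum using the weights of the region containing y."""
    j = region_of(y, centers, W, r)
    return float((W[j] ** r * (x - y) ** 2).sum())

# Toy illustration with two regions in 3 dimensions: moving y from one region
# to the other changes which weight vector applies, hence the dissimilarity from x.
centers = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
W = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
x = np.array([1.0, 0.0, 0.0])
print(esvad(x, np.array([0.5, 0.0, 0.0]), centers, W))  # y in R_0: weight 0.8 on feature 1
print(esvad(x, np.array([5.5, 5.0, 5.0]), centers, W))  # y in R_1: weight 0.8 on feature 3
```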
Assuming that C represents the\ncluster centers, let us assign each data point x\ni\nto a cluster\nR\nj\nwith the closest c\nj\nas the cluster center\n2\n, i.e.,\nj = arg min\nl\nf\nW,C\n(x\ni\n, c\nl\n).\n(5)\nThen, the within-cluster dissimilarity is given by\nJ (W, C) =\nK\nj=1\nx\ni\nR\nj\nM\nl=1\nw\nr\njl\ng\nl\n(x\ni\n, c\nj\n).\n(6)\nJ (W, C) represents the sum of the dissimilarity measures of\nall the data points from their closest centroids. The objective\nis to find W and C that minimize J (W, C). To avoid\nthe trivial solution to J (W, C), we consider a normalization\ncondition on w\nj\n, viz.,\nM\nl=1\nw\njl\n= 1.\n(7)\nNote that even with this condition, J (W, C) has a trivial\nsolution: w\njp\n= 1 where p = arg min\nl\nx\ni\nR\nj\ng\nl\n(x\ni\n, c\nj\n),\nand the remaining weights are zero. One way to avoid convergence\nof w\nj\nto unit vectors is to impose a regularization\ncondition on w\nj\n. We consider the following two regularization\nmeasures in this paper: (1) Entropy measure:\nM\nl=1\nw\njl\nlog(w\njl\n) and (2) Gini measure:\nM\nl=1\nw\n2\njl\n.\n2\nWe use P = {R\n1\n, R\n2\n, . . . , R\nK\n} to represent the corresponding\npartition of the data set as well. The intended interpretation\n(cluster or region) would be evident from the context.\n612\nResearch Track Poster\nALGORITHMS TO LEARN SVAD MEASURES\nThe problem of determining the optimal W and C is similar\nto the traditional clustering problem that is solved by\nthe K-Means Algorithm (KMA) except for the additional W\nmatrix. We propose a class of iterative algorithms similar to\nKMA. These algorithms start with a random partition of the\ndata set and iteratively update C, W and P so that J (W, C)\nis minimized. These iterative algorithms are instances of Alternating\nOptimization (AO) algorithms. In [1], it is shown\nthat AO algorithms converge to a local optimum under some\nconditions. We outline the algorithm below before actually\ndescribing how to update C, W and P in every iteration.\nRandomly assign the data points to K clusters.\nREPEAT\nUpdate C: Compute the centroid of each cluster c\nj\n.\nUpdate W : Compute the w\njl\nj, l.\nUpdate P: Reassign the data points to the clusters.\nUNTIL (termination condition is reached).\nIn the above algorithm, the update of C depends on the\ndefinition of g\ni\n, and the update of W on the regularization\nterms. The update of P is done by reassigning the data\npoints according to (5). Before explaining the computation\nof C in every iteration for various g\ni\n, we first derive update\nequations for W for various regularization measures.\n3.1\nUpdate of Weights\nWhile updating weights, we need to find the values of\nweights that minimize the objective function for a given C\nand P. As mentioned above, we consider the two regularization\nmeasures for w\njl\nand derive update equations. If we\nconsider the entropy regularization with r = 1, the objective\nfunction becomes:\nJ\nEN T\n(W, C)\n=\nK\nj=1\nx\ni\nR\nj\nM\nl=1\nw\njl\ng\nl\n(x\ni\n, c\nj\n)\n+\nK\nj=1\n\nj\nM\nl=1\nw\njl\nlog(w\njl\n) +\nK\nj=1\n\nj\nM\nl=1\nw\njl\n- 1\n.\n(8)\nNote that\nj\nare the Lagrange multipliers corresponding\nto the normalization constraints in (7), and\nj\nrepresent\nthe relative importance given to the regularization term\nrelative to the within-cluster dissimilarity. Differentiating\nJ\nEN T\n(W, C) with respect to w\njl\nand equating it to zero, we\nobtain w\njl\n= exp\n-(\nj\n+\nx\ni Rj\ng\nl\n(\nx\ni\n,\nc\nj\n))\n\nj\n- 1 . 
Solving for\n\nj\nby substituting the above value of w\njl\nin (7) and substituting\nthe value of\nj\nback in the above equation, we obtain\nw\njl\n=\nexp x\ni\nR\nj\ng\nl\n(x\ni\n, c\nj\n)/\nj\nM\nn=1\nexp x\ni\nR\nj\ng\nn\n(x\ni\n, c\nj\n)/\nj\n.\n(9)\nIf we consider the Gini measure for regularization with\nr = 2, the corresponding w\njl\nthat minimizes the objective\nfunction can be shown to be\nw\njl\n=\n1/(\nj\n+\nx\ni\nR\nj\ng\nl\n(x\ni\n, c\nj\n))\nM\nn=1\n(1/(\nj\n+\nx\ni\nR\nj\ng\nn\n(x\ni\n, c\nj\n))) .\n(10)\nIn both cases, the updated value of w\njl\nis inversely related\nAlgorithm\nUpdate Equations\nAcronyms\nP\nC\nW\nEEnt\n(5)\n(11)\n(9)\nEsGini\n(5)\n(11)\n(10)\nCEnt\n(5)\n(12)\n(9)\nCsGini\n(5)\n(12)\n(10)\nTable 1: Summary of algorithms.\nto\nx\ni\nR\nj\ng\nl\n(x\ni\n, c\nj\n). This has various interpretations based\non the nature of g\nl\n. For example, when we consider the ESVaD\nmeasure, w\njl\nis inversely related to the variance of l-th\nelement of the data vectors in the j-th cluster. In other\nwords, when the variance along a particular dimension is\nhigh in a cluster, then the dimension is less important to\nthe cluster. This popular heuristic has been used in various\ncontexts (such as relevance feedback) in the literature [9].\nSimilarly, when we consider the CSVaD measure, w\njl\nis directly\nproportional to the correlation of the j-th dimension\nin the l-th cluster.\n3.2\nUpdate of Centroids\nLearning ESVaD Measures: Substituting the ESVaD measure\nin the objective function and solving the first order\nnecessary conditions, we observe that\nc\njl\n=\n1\n|R\nj\n| x\ni\nR\nj\nx\nil\n(11)\nminimizes J\nESV AD\n(W, C).\nLearning CSVaD Measures: Let x\nil\n= w\njl\nx\nil\n, then using\nthe Cauchy-Swartz inequality, it can be shown that\nc\njl\n=\n1\n|R\nj\n| x\ni\nR\nj\nx\nil\n(12)\nmaximizes\nx\ni\nR\nj\nd\nl=1\nw\njl\nx\nil\nc\njl\n. Hence, (12) also minimizes\nthe objective function when CSVaD is used as the\ndissimilarity measure.\nTable 1 summarizes the update equations used in various\nalgorithms. We refer to this set of algorithms as SVaD\nlearning algorithms.\nEXPERIMENTS\nIn this section, we present an experimental study of the algorithms\ndescribed in the previous sections. We applied the\nproposed algorithms on various text data sets and compared\nthe performance of EEnt and EsGini with that of K-Means,\nCSCAD and DGK algorithms. The reason for choosing the\nK-Means algorithm (KMA) apart from CSCAD and DGK\nis that it provides a baseline for assessing the advantages of\nfeature weighting. KMA is also a popular algorithm for text\nclustering. We have included a brief description of CSCAD\nand DGK algorithms in Appendix A.\nText data sets are sparse and high dimensional. We consider\nstandard labeled document collections and test the\nproposed algorithms for their ability to discover dissimilarity\nmeasures that distinguish one class from another without\nactually considering the class labels of the documents. We\nmeasure the success of the algorithms by the purity of the\nregions that they discover.\n613\nResearch Track Poster\n4.1\nData Sets\nWe performed our experiments on three standard data\nsets: 20 News Group, Yahoo K1, and Classic 3. These data\nsets are described below.\n20 News Group\n3\n: We considered different subsets of 20\nNews Group data that are known to contain clusters of varying\ndegrees of separation [10]. 
As in [10], we considered three\nrandom samples of three subsets of the 20 News Group data.\nThe subsets denoted by Binary has 250 documents each\nfrom talk.politics.mideast and talk.politics.misc. Multi5 has\n100 documents each from comp.graphics, rec.motorcycles,\nrec.sport.baseball, sci.space, and talk.politics.mideast. Finally\n, Multi10 has 50 documents each from alt.atheism, comp.\nsys.mac.hardware, misc.forsale, rec.autos, rec.sport.hockey,\nsci.crypt, sci.electronics, sci.med, sci.space, and talk.politics.\ngun. It may be noted that Binary data sets have two highly\noverlapping classes. Each of Multi5 data sets has samples\nfrom 5 distinct classes, whereas Multi10 data sets have only\na few samples from 10 different classes. The size of the vocabulary\nused to represent the documents in Binary data set\nis about 4000, Multi5 about 3200 and Multi10 about 2800.\nWe observed that the relative performance of the algorithms\non various samples of Binary, Multi5 and Multi10 data sets\nwas similar. Hence, we report results on only one of them.\nYahoo K1\n4\n: This data set contains 2340 Reuters news\narticles downloaded from Yahoo in 1997.\nThere are 494\nfrom Health, 1389 from Entertainment, 141 from Sports, 114\nfrom Politics, 60 from Technology and 142 from Business.\nAfter preprocessing, the documents from this data set are\nrepresented using 12015 words. Note that this data set has\nsamples from 6 different classes. Here, the distribution of\ndata points across the class is uneven, ranging from 60 to\n1389.\nClassic 3\n5\n: Classic 3 data set contains 1400 aerospace\nsystems abstracts from the Cranfield collection, 1033 medical\nabstracts from the Medline collection and 1460 information\nretrieval abstracts from the Cisi collection, making up\n3893 documents in all. After preprocessing, this data set\nhas 4301 words. The points are almost equally distributed\namong the three distinct classes.\nThe data sets were preprocessed using two major steps.\nFirst, a set of words (vocabulary) is extracted and then each\ndocument is represented with respect to this vocabulary.\nFinding the vocabulary includes: (1) elimination of the standard\nlist of stop words from the documents, (2) application\nof Porter stemming\n6\nfor term normalization, and (3) keeping\nonly the words which appear in at least 3 documents. We\nrepresent each document by the unitized frequency vector.\n4.2\nEvaluation of Algorithms\nWe use the accuracy measure to compare the performance\nof various algorithms. Let a\nij\nrepresent the number of data\npoints from class i that are in cluster j. Then the accuracy\nof the partition is given by\nj\nmax\ni\na\nij\n/n where n is the\ntotal number of data points.\nIt is to be noted that points coming from a single class\nneed not form a single cluster.\nThere could be multiple\n3\nhttp://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20\n.tar.gz\n4\nftp://ftp.cs.umn.edu/dept/users/boley/PDDPdata/doc-K\n5\nftp://ftp.cs.cornell.edu/pub/smart\n6\nhttp://www.tartarus.org/~martin/PorterStemmer/\nIteration\n0\n1\n2\n3\n4\n5\nJ (W, C)\n334.7\n329.5\n328.3\n328.1\n327.8\nAccuracy\n73.8\n80.2\n81.4\n81.6\n82\n82\nTable 2: Evolution of J (W, C) and Accuracies with\niterations when EEnt applied on a Multi5 data.\nclusters in a class that represent sub-classes. 
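Pulling the pieces of Section 3 together, the following is a compact sketch, with illustrative function names of our own, of the EEnt variant: centroids updated by Eq. (11), entropy-regularized weights by Eq. (9), where the exponent is negative so that a feature with large dispersion in a cluster receives a small weight, and regions by Eq. (5). The second function implements the accuracy (purity) measure just defined. The sketch uses a random initial partition as in the outline of Section 3 and omits refinements used in the experiments.

```python
import numpy as np

def eent_cluster(X, K, gamma=1.0, n_iter=20, seed=0):
    """Sketch of the EEnt variant: ESVaD measure, entropy regularization, r = 1.

    Alternates the three updates of Section 3: centroids C (Eq. 11),
    weights W (Eq. 9), and regions P (Eq. 5).  X is an (n, d) data matrix.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(K, size=n)                 # random initial partition
    W = np.full((K, d), 1.0 / d)                     # start from uniform feature weights
    C = np.zeros((K, d))
    for _ in range(n_iter):
        for j in range(K):
            members = X[labels == j]
            if len(members):
                C[j] = members.mean(axis=0)          # Eq. (11): cluster centroid
            D_j = ((members - C[j]) ** 2).sum(axis=0)    # per-feature dispersion in R_j
            e = np.exp(-(D_j - D_j.min()) / gamma)       # Eq. (9), shifted for stability
            W[j] = e / e.sum()
        scores = np.stack([(W[j] * (X - C[j]) ** 2).sum(axis=1) for j in range(K)])
        labels = scores.argmin(axis=0)               # Eq. (5): reassign to nearest centroid
    return labels, C, W

def clustering_accuracy(true_classes, labels):
    """Accuracy measure of Section 4.2: sum_j max_i a_ij / n."""
    classes = np.unique(true_classes)
    hits = sum(max(np.sum((labels == j) & (true_classes == c)) for c in classes)
               for j in np.unique(labels))
    return hits / len(labels)
```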
We study the\nperformance of SVaD learning algorithms for various values\nof K, i.e., the number of clusters.\n4.3\nExperimental Setup\nIn our implementations, we have observed that the proposed\nalgorithms, if applied on randomly initialized centroids\n, show unstable behavior. One reason for this behavior\nis that the number of parameters that are estimated in\nfeature-weighting clustering algorithms is twice as large as\nthat estimated by the traditional KMA. We, therefore, first\nestimate the cluster centers giving equal weights to all the\ndimensions using KMA and then fine-tune the cluster centers\nand the weights using the feature-weighting clustering\nalgorithms. In every iteration, the new sets of weights are\nupdated as follows. Let w\nn\n(t+1) represent the weights com-puted\nusing one of (9), (10), (14) or (15) in iteration (t + 1)\nand w(t) the weights in iteration t. Then, the weights in\niteration (t + 1) are\nw(t + 1) = (1 - (t))w(t) + (t)w\nn\n(t + 1),\n(13)\nwhere (t) [0, 1] decreases with t. That is, (t) = (t 1\n), for a given constant [0, 1]. In our experiments, we\nobserved that the variance of purity values for different initial\nvalues of (0) and above 0.5 is very small. Hence, we\nreport the results for (0) = 0.5 and = 0.5. We set the\nvalue of\nj\n= 1.\nIt may be noted that when the documents are represented\nas unit vectors, KMA with the cosine dissimilarity measure\nand Euclidean distance measure would yield the same clusters\n. This is essentially the same as Spherical K-Means algorithms\ndescribed in [3]. Therefore, we consider only the\nweighted Euclidean measure and restrict our comparisons to\nEEnt and EsGini in the experiments.\nSince the clusters obtained by KMA are used to initialize\nall other algorithms considered here, and since the results\nof KMA are sensitive to initialization, the accuracy numbers\nreported in this section are averages over 10 random\ninitializations of KMA.\n4.4\nResults and Observations\n4.4.1\nEffect of SVaD Measures on Accuracies\nIn Table 2, we show a sample run of EEnt algorithm on\none of the Multi5 data sets. This table shows the evolution\nof J (W, C) and the corresponding accuracies of the clusters\nwith the iterations. The accuracy, shown at iteration 0, is\nthat of the clusters obtained by KMA. The purity of clusters\nincreases with decrease in the value of the objective function\ndefined using SVaD measures. We have observed a similar\nbehavior of EEnt and EsGini on other data sets also. This\nvalidates our hypothesis that SVaD measures capture the\nunderlying structure in the data sets more accurately.\n614\nResearch Track Poster\n4.4.2\nComparison with Other Algorithms\nFigure 1 to Figure 5 show average accuracies of various\nalgorithms on the 5 data sets for various number of clusters\n. The accuracies of KMA and DGK are very close to\neach other and hence, in the figures, the lines corresponding\nto these algorithms are indistinguishable. The lines corresponding\nto CSCAD are also close to that of KMA in all the\ncases except Class 3.\nGeneral observations: The accuracies of SVaD algorithms\nfollow the trend of the accuracies of other algorithms.\nIn all our experiments, both SVaD learning algorithms improve\nthe accuracies of clusters obtained by KMA. It is observed\nin our experiments that the improvement could be\nas large as 8% in some instances. EEnt and EsGini consis-tently\nperform better than DGK on all data sets and for all\nvalues of K. 
EEnt and EsGini perform better than CSCAD\non all data sets excepts in the case of Classic 3 and for a few\nvalues of K.\nNote that the weight update equation of CSCAD (15)\nmay result in negative values of w\njl\n. Our experience with\nCSCAD shows that it is quite sensitive to initialization and\nit may have convergence problems. In contrast, it may be\nobserved that w\njl\nin (9) and (10) are always positive. Moreover\n, in our experience, these two versions are much less\nsensitive to the choice of\nj\n.\nData specific observations: When K = 2, EEnt and\nEsGini could not further improve the results of KMA on the\nBinary data set. The reason is that the data set contains\ntwo highly overlapping classes. However, for other values of\nK, they marginally improve the accuracies.\nIn the case of Multi5, the accuracies of the algorithms are\nnon-monotonic with K. The improvement of accuracies is\nlarge for intermediate values of K and small for extreme\nvalues of K. When K = 5, KMA finds relatively stable\nclusters.\nHence, SVaD algorithms are unable to improve\nthe accuracies as much as they did for intermediate values\nof K. For larger values of K, the clusters are closely spaced\nand hence there is little scope for improvement by the SVaD\nalgorithms.\nMulti10 data sets are the toughest to cluster because of\nthe large number of classes present in the data. In this case,\nthe accuracies of the algorithms are monotonically increasing\nwith the number of clusters. The extent of improvement\nof accuracies of SVaD algorithms over KMA is almost constant\nover the entire range of K. This reflects the fact that\nthe documents in Multi10 data set are uniformly distributed\nover feature space.\nThe distribution of documents in Yahoo K1 data set is\nhighly skewed. The extent of improvements that the SVaD\nalgorithms could achieve decrease with K. For higher values\nof K, KMA is able to find almost pure sub-clusters, resulting\nin accuracies of about 90%. This leaves little scope for\nimprovement.\nThe performance of CSCAD differs noticeably in the case\nof Classic 3. It performs better than the SVaD algorithms\nfor K = 3 and better than EEnt for K = 9. However, for\nlarger values of K, the SVaD algorithms perform better than\nthe rest. As in the case of Multi5, the improvements of SVaD\nalgorithms over others are significant and consistent. One\nmay recall that Multi5 and Classic 3 consist of documents\nfrom distinct classes.\nTherefore, this observation implies\nthat when there are distinct clusters in the data set, KMA\nyields confusing clusters when the number of clusters is over-Figure\n1: Accuracy results on Binary data.\nFigure 2: Accuracy results on Multi5 data.\nspecified. In this scenario, EEnt and EsGini can fine-tune\nthe clusters to improve their purity.\nSUMMARY AND CONCLUSIONS\nWe have defined a general class of spatially variant dissimilarity\nmeasures and proposed algorithms to learn the measure\nunderlying a given data set in an unsupervised learning\nframework.\nThrough our experiments on various textual\ndata sets, we have shown that such a formulation of dissimilarity\nmeasure can more accurately capture the hidden\nstructure in the data than a standard Euclidean measure\nthat does not vary over feature space. 
We have also shown\nthat the proposed learning algorithms perform better than\nother similar algorithms in the literature, and have better\nstability properties.\nEven though we have applied these algorithms only to\ntext data sets, the algorithms derived here do not assume\nany specific characteristics of textual data sets. Hence, they\nFigure 3: Accuracy results on Multi10 data.\n615\nResearch Track Poster\nFigure 4: Accuracy results on Yahoo K1 data.\nFigure 5: Accuracy results on Classic 3 data.\nare applicable to general data sets. Since the algorithms\nperform better for larger K, it would be interesting to investigate\nwhether they can be used to find subtopics of a\ntopic. Finally, it will be interesting to learn SVaD measures\nfor labeled data sets.\nREFERENCES\n[1] J. C. Bezdek and R. J. Hathaway. Some notes on\nalternating optimization. In Proceedings of the 2002\nAFSS International Conference on Fuzzy Systems.\nCalcutta, pages 288300. Springer-Verlag, 2002.\n[2] A. P. Dempster, N. M. Laird, and Rubin. Maximum\nlikelihood from incomplete data via the EM algorithm.\nJournal Royal Statistical Society B, 39(2):138, 1977.\n[3] I. S. Dhillon and D. S. Modha. Concept\ndecompositions for large sparse text data using\nclustering. Machine Learning, 42(1):143175, January\n2001.\n[4] E. Diday and J. C. Simon. Cluster analysis. In K. S.\nFu, editor, Pattern Recognition, pages 4794.\nSpringer-Verlag, 1976.\n[5] H. Frigui and O. Nasraoui. Simultaneous clustering\nand attribute discrimination. In Proceedings of\nFUZZIEEE, pages 158163, San Antonio, 2000.\n[6] H. Frigui and O. Nasraoui. Simultaneous\ncategorization of text documents and identification of\ncluster-dependent keywords. In Proceedings of\nFUZZIEEE, pages 158163, Honolulu, Hawaii, 2001.\n[7] D. E. Gustafson and W. C. Kessel. Fuzzy clustering\nwith the fuzzy covariance matrix. In Proccedings of\nIEEE CDC, pages 761766, San Diego, California,\n1979.\n[8] R. Krishnapuram and J. Kim. A note on fuzzy\nclustering algorithms for Gaussian clusters. IEEE\nTransactions on Fuzzy Systems, 7(4):453461, Aug\n1999.\n[9] Y. Rui, T. S. Huang, and S. Mehrotra. Relevance\nfeedback techniques in interactive content-based image\nretrieval. In Storage and Retrieval for Image and\nVideo Databases (SPIE), pages 2536, 1998.\n[10] N. Slonim and N. Tishby. Document clustering using\nword clusters via the information bottleneck method.\nIn Proceedings of SIGIR, pages 208215, 2000.\nAPPENDIX\nA.\nOTHER FEATURE WEIGHTING\nCLUSTERING TECHNIQUES\nA.1\nDiagonal Gustafson-Kessel (DGK)\nGustafson and Kessel [7] associate each cluster with a different\nnorm matrix. Let A = (A\n1\n, . . . , A\nk\n) be the set of k\nnorm matrices associated with k clusters. Let u\nji\nis the fuzzy\nmembership of x\ni\nin cluster j and U = [u\nji\n]. By restricting\nA\nj\ns to be diagonal and u\nji\n{0, 1}, we can reformulate the\noriginal optimization problem in terms of SVaD measures as\nfollows:\nmin\nC,W\nJ\nDGK\n(C, W ) =\nk\nj=1\nx\ni\nR\nj\nM\nl=1\nw\njl\ng\nl\n(x\ni\n, c\nj\n),\nsubject to\nl\nw\njl\n=\nj\n. Note that this problem can be solved\nusing the same AO algorithms described in Section 3. Here,\nthe update for C and P would remain the same as that\ndiscussed in Section 3. 
It can be easily shown that when\nj\n= 1, j,\nw\njl\n=\nM\nm=1\nx\ni\nR\nj\ng\nm\n(x\ni\n, c\nj\n)\n1/M\nx\ni\nR\nj\ng\nl\n(x\ni\n, c\nj\n)\n(14)\nminimize J\nDGK\nfor a given C.\nA.2\nCrisp Simultaneous Clustering and\nAttribute Discrimination (CSCAD)\nFrigui et.\nal.\nin [5, 6], considered a fuzzy version of\nthe feature-weighting based clustering problem (SCAD). To\nmake a fair comparison of our algorithms with SCAD, we derive\nits crisp version and refer to it as Crisp SCAD (CSCAD).\nIn [5, 6], the Gini measure is used for regularization. If the\nGini measure is considered with r = 1, the weights w\njl\nthat\nminimize the corresponding objective function for a given C\nand P, are given by\nw\njl\n= 1\nM +\n1\n2\nj\n\n\n1\nM\nM\nn=1\nx\ni\nR\nj\ng\nn\n(x\ni\n, c\nj\n) x\ni\nR\nj\ng\nl\n(x\ni\n, c\nj\n)\n\n.\n(15)\nSince SCAD uses the weighted Euclidean measure, the update\nequations of centroids in CSCAD remain the same as in\n(11). The update equation for w\njl\nin SCAD is quite similar\nto (15). One may note that, in (15), the value of w\njl\ncan\nbecome negative. In [5], a heuristic is used to estimate the\nvalue\nj\nin every iteration and set the negative values of w\njl\nto zero before normalizing the weights.\n616\nResearch Track Poster", "keywords": "Clustering;feature weighting;spatially varying dissimilarity (SVaD);Learning Dissimilarity Measures;clustering;dissimilarity measure"} {"name": "126", "title": "Learning the Unified Kernel Machines for Classification", "abstract": "Kernel machines have been shown as the state-of-the-art learning techniques for classification. In this paper, we propose a novel general framework of learning the Unified Kernel Machines (UKM) from both labeled and unlabeled data. Our proposed framework integrates supervised learning, semi-supervised kernel learning, and active learning in a unified solution. In the suggested framework, we particularly focus our attention on designing a new semi-supervised kernel learning method, i.e., Spectral Kernel Learning (SKL), which is built on the principles of kernel target alignment and unsupervised kernel design. Our algorithm is related to an equivalent quadratic programming problem that can be efficiently solved. Empirical results have shown that our method is more effective and robust to learn the semi-supervised kernels than traditional approaches. Based on the framework, we present a specific paradigm of unified kernel machines with respect to Kernel Logistic Regresions (KLR), i.e., Unified Kernel Logistic Regression (UKLR). We evaluate our proposed UKLR classification scheme in comparison with traditional solutions. The promising results show that our proposed UKLR paradigm is more effective than the traditional classification approaches.", "fulltext": "INTRODUCTION\nClassification is a core data mining technique and has been\nactively studied in the past decades. In general, the goal of\nclassification is to assign unlabeled testing examples with a\nset of predefined categories. Traditional classification methods\nare usually conducted in a supervised learning way, in\nwhich only labeled data are used to train a predefined classification\nmodel. In literature, a variety of statistical models\nhave been proposed for classification in the machine learning\nand data mining communities. One of the most popular\nand successful methodologies is the kernel-machine techniques\n, such as Support Vector Machines (SVM) [25] and\nKernel Logistic Regressions (KLR) [29]. 
Like other early\nwork for classification, traditional kernel-machine methods\nare usually performed in the supervised learning way, which\nconsider only the labeled data in the training phase.\nIt is obvious that a good classification model should take\nadvantages on not only the labeled data, but also the unlabeled\ndata when they are available. Learning on both labeled\nand unlabeled data has become an important research\ntopic in recent years. One way to exploit the unlabled data\nis to use active learning [7]. The goal of active learning is\nto choose the most informative example from the unlabeled\ndata for manual labeling. In the past years, active learning\nhas been studied for many classification tasks [16].\nAnother emerging popular technique to exploit unlabeled\ndata is semi-supervised learning [5], which has attracted\na surge of research attention recently [30].\nA variety of\nmachine-learning techniques have been proposed for semi-supervised\nlearning, in which the most well-known approaches\nare based on the graph Laplacians methodology [28, 31, 5].\nWhile promising results have been popularly reported in\nthis research topic, there is so far few comprehensive semi-supervised\nlearning scheme applicable for large-scale classification\nproblems.\nAlthough supervised learning, semi-supervised learning\nand active learning have been studied separately, so far\nthere is few comprehensive scheme to combine these techniques\neffectively together for classification tasks. To this\nend, we propose a general framework of learning the Unified\nKernel Machines (UKM) [3, 4] by unifying supervised\nkernel-machine learning, semi-supervised learning, unsupervised\nkernel design and active learning together for large-scale\nclassification problems.\nThe rest of this paper is organized as follows. Section 2 reviews\nrelated work of our framework and proposed solutions.\nSection 3 presents our framework of learning the unified ker-187\nResearch Track Paper\nnel machines. Section 4 proposes a new algorithm of learning\nsemi-supervised kernels by Spectral Kernel Learning (SKL).\nSection 5 presents a specific UKM paradigm for classification\n, i.e., the Unified Kernel Logistic Regression (UKLR).\nSection 6 evaluates the empirical performance of our proposed\nalgorithm and the UKLR classification scheme. Section\n7 sets out our conclusion.\nRELATED WORK\nKernel machines have been widely studied for data classification\nin the past decade.\nMost of earlier studies on\nkernel machines usually are based on supervised learning.\nOne of the most well-known techniques is the Support Vector\nMachines, which have achieved many successful stories\nin a variety of applications [25].\nIn addition to SVM, a\nseries of kernel machines have also been actively studied,\nsuch as Kernel Logistic Regression [29], Boosting [17], Regularized\nLeast-Square (RLS) [12] and Minimax Probability\nMachines (MPM) [15], which have shown comparable performance\nwith SVM for classification. The main theoretical\nfoundation behind many of the kernel machines is the theory\nof regularization and reproducing kernel Hilbert space\nin statistical learning [17, 25]. Some theoretical connections\nbetween the various kernel machines have been explored in\nrecent studies [12].\nSemi-supervised learning has recently received a surge of\nresearch attention for classification [5, 30]. The idea of semi-supervised\nlearning is to use both labeled and unlabeled data\nwhen constructing the classifiers for classification tasks. 
One\nof the most popular solutions in semi-supervised learning\nis based on the graph theory [6], such as Markov random\nwalks [22], Gaussian random fields [31], Diffusion models [13]\nand Manifold learning [2]. They have demonstrated some\npromising results on classification.\nSome recent studies have begun to seek connections between\nthe graph-based semi-supervised learning and the kernel\nmachine learning. Smola and Kondor showed some theoretical\nunderstanding between kernel and regularization based\non the graph theory [21]. Belkin et al. developed a framework\nfor regularization on graphs and provided some analysis\non generalization error bounds [1]. Based on the emerging\ntheoretical connections between kernels and graphs, some\nrecent work has proposed to learn the semi-supervised kernels\nby graph Laplacians [32]. Zhang et al. recently provided\na theoretical framework of unsupervised kernel design\nand showed that the graph Laplacians solution can be considered\nas an equivalent kernel learning approach [27]. All\nof the above studies have formed the solid foundation for\nsemi-supervised kernel learning in this work.\nTo exploit the unlabeled data, another research attention\nis to employ active learning for reducing the labeling efforts\nin classification tasks. Active learning, or called pool-based\nactive learning, has been proposed as an effective technique\nfor reducing the amount of labeled data in traditional supervised\nclassification tasks [19]. In general, the key of active\nlearning is to choose the most informative unlabeled examples\nfor manual labeling.\nA lot of active learning methods\nhave been proposed in the community. Typically they\nmeasure the classification uncertainty by the amount of disagreement\nto the classification model [9, 10] or measure the\ndistance of each unlabeled example away from the classification\nboundary [16, 24].\nFRAMEWORK OF LEARNING UNIFIED KERNEL MACHINES\nIn this section, we present the framework of learning the\nunified kernel machines by combining supervised kernel machines\n, semi-supervised kernel learning and active learning\ntechniques into a unified solution. Figure 1 gives an overview\nof our proposed scheme. For simplicity, we restrict our discussions\nto classification problems.\nLet\nM(K, ) denote a kernel machine that has some underlying\nprobabilistic model, such as kernel logistic regressions\n(or support vector machines). In general, a kernel machine\ncontains two components, i.e., the kernel K (either a\nkernel function or simply a kernel matrix), and the model parameters\n. In traditional supervised kernel-machine learning\n, the kernel K is usually a known parametric kernel function\nand the goal of the learning task is usually to determine\nthe model parameter . This often limits the performance of\nthe kernel machine if the specified kernel is not appropriate.\nTo this end, we propose a unified scheme to learn the unified\nkernel machines by learning on both the kernel K and\nthe model parameters together. In order to exploit the unlabeled\ndata, we suggest to combine semi-supervised kernel\nlearning and active learning techniques together for learning\nthe unified kernel machines effectively from the labeled\nand unlabeled data. More specifically, we outline a general\nframework of learning the unified kernel machine as follows.\nFigure 1: Learning the Unified Kernel Machines\nLet L denote the labeled data and U denote the unlabeled\ndata. 
The goal of the unified kernel machine learning task is\nto learn the kernel machine\nM(K\n\n,\n\n) that can classify the\ndata effectively. Specifically, it includes the following five\nsteps:\nStep 1. Kernel Initialization\nThe first step is to initialize the kernel component K\n0\nof the kernel machine\nM(K\n0\n,\n0\n). Typically, users can\nspecify the initial kernel K\n0\n(function or matrix) with a\nstanard kernel. When some domain knowledge is ava-iable\n, users can also design some kernel with domain\nknowledge (or some data-dependent kernels).\nStep 2. Semi-Supervised Kernel Learning\nThe initial kernel may not be good enough to classify\nthe data correctly. Hence, we suggest to employ\n188\nResearch Track Paper\nthe semi-supervised kernel learning technique to learn\na new kernel K by engaging both the labeled L and\nunlabled data U available.\nStep 3. Model Parameter Estimation\nWhen the kernel K is known, to estimate the parameters\nof the kernel machines based on some model assumption\n, such as Kernel Logistic Regression or Support\nVector Machines, one can simply employ the standard\nsupervised kernel-machine learning to solve the\nmodel parameters .\nStep 4. Active Learning\nIn many classification tasks, labeling cost is expensive.\nActive learning is an important method to reduce human\nefforts in labeling. Typically, we can choose a\nbatch of most informative examples S that can most effectively\nupdate the current kernel machine\nM(K, ).\nStep 5. Convergence Evaluation\nThe last step is the convergence evaluation in which we\ncheck whether the kernel machine is good enough for\nthe classification task. If not, we will repeat the above\nsteps until a satisfied kernel machine is acquired.\nThis is a general framework of learning unified kernel machines\n. In this paper, we focus our main attention on the\nthe part of semi-supervised kernel learning technique, which\nis a core component of learning the unified kernel machines.\nSPECTRAL KERNEL LEARNING\nWe propose a new semi-supervised kernel learning method,\nwhich is a fast and robust algorithm for learning semi-supervised\nkernels from labeled and unlabeled data. In the following\nparts, we first introduce the theoretical motivations and then\npresent our spectral kernel learning algorithm. Finally, we\nshow the connections of our method to existing work and\njustify the effectiveness of our solution from empirical observations\n.\n4.1\nTheoretical Foundation\nLet us first consider a standard supservisd kernel learning\nproblem. Assume that the data (X, Y ) are drawn from\nan unknown distribution\nD. The goal of supervised learning\nis to find a prediction function p(X) that minimizes the\nfollowing expected true loss:\nE\n(X,Y )D\nL(p(X), Y ),\nwhere E\n(X,Y )D\ndenotes the expectation over the true underlying\ndistribution\nD. In order to achieve a stable esimia-tion\n, we usually need to restrict the size of hypothesis function\nfamily. Given l training examples (x\n1\n,y\n1\n),. . .,(x\nl\n,y\nl\n),\ntypically we train a predition function ^\np in a reproducing\nHilbert space\nH by minimizing the empirical loss [25]. Since\nthe reproducing Hilbert space can be large, to avoid over-fitting\nproblems, we often consider a regularized method as\nfollow:\n^\np = arg inf\npH\n1\nl\nl\ni=1\nL(p(x\ni\n), y\ni\n) +\n||p||\n2\nH\n,\n(1)\nwhere is a chosen positive regularization parameter. 
It\ncan be shown that the solution of (1) can be represented as\nthe following kernel method:\n^\np(x) =\nl\ni=1\n^\n\ni\nk(x\ni\n, x)\n= arg inf\nR\nl\n1\nn\nl\ni=1\nL (p(x\ni\n), y\ni\n) +\nl\ni,j=1\n\ni\n\nj\nk(x\ni\n, x\nj\n)\n,\nwhere\nis a parameter vector to be estimated from the\ndata and k is a kernel, which is known as kernel function\n. Typically a kernel returns the inner product between\nthe mapping images of two given data examples, such that\nk(x\ni\n, x\nj\n) = (x\ni\n), (x\nj\n) for x\ni\n, x\nj\nX .\nLet us now consider a semi-supervised learning setting.\nGiven labeled data\n{(x\ni\n, y\ni\n)\n}\nl\ni=1\nand unlabeled data\n{x\nj\n}\nn\nj=l+1\n,\nwe consider to learn the real-valued vectors f\nR\nm\nby the\nfollowing semi-supervised learning method:\n^\nf = arg inf\nf R\n1\nn\nn\ni=1\nL(f\ni\n, y\ni\n) + f K\n-1\nf\n,\n(2)\nwhere K is an m\nm kernel matrix with K\ni,j\n= k(x\ni\n, x\nj\n).\nZhang et al. [27] proved that the solution of the above semi-supervised\nlearning is equivelent to the solution of standard\nsupervised learning in (1), such that\n^\nf\nj\n= ^\np(x\nj\n)\nj = 1, . . . , m.\n(3)\nThe theorem offers a princple of unsuperivsed kernel design\n: one can design a new kernel\nk(\n, ) based on the unlabeled\ndata and then replace the orignal kernel k by\nk in the\nstandard supervised kernel learning. More specifically, the\nframework of spectral kernel design suggests to design the\nnew kernel matrix\nK by a function g as follows:\n\nK =\nn\ni=1\ng(\ni\n)v\ni\nv\ni\n,\n(4)\nwhere (\ni\n, v\ni\n) are the eigen-pairs of the original kernel matrix\nK, and the function g(\n) can be regarded as a filter function\nor a transformation function that modifies the spectra\nof the kernel. The authors in [27] show a theoretical justification\nthat designing a kernel matrix with faster spectral decay\nrates should result in better generalization performance,\nwhich offers an important pricinple in learning an effective\nkernel matrix.\nOn the other hand, there are some recent papers that\nhave studied theoretical principles for learning effective kernel\nfunctions or matrices from labeled and unlabeled data.\nOne important work is the kernel target alignment, which\ncan be used not only to assess the relationship between the\nfeature spaces by two kernels, but also to measure the similarity\nbetween the feature space by a kernel and the feature\nspace induced by labels [8]. Specifically, given two kernel\nmatrices K\n1\nand K\n2\n, their relationship is defined by the\nfollowing score of alignment:\nDefinition 1. Kernel Alignment: The empirical alignment\nof two given kernels K\n1\nand K\n2\nwith respect to the\nsample set S is the quantity:\n^\nA(K\n1\n, K\n2\n) =\nK\n1\n, K\n2 F\n\nK\n1\n, K\n1 F\nK\n2\n, K\n2 F\n(5)\n189\nResearch Track Paper\nwhere K\ni\nis the kernel matrix induced by the kernel k\ni\nand\n, is the Frobenius product between two matrices, i.e.,\nK\n1\n, K\n2 F\n=\n\nn\ni,j=1\nk\n1\n(x\ni\n, x\nj\n)k\n2\n(x\ni\n, x\nj\n).\nThe above definition of kernel alignment offers a principle\nto learn the kernel matrix by assessing the relationship\nbetween a given kernel and a target kernel induced by the\ngiven labels. Let y =\n{y\ni\n}\nl\ni=1\ndenote a vector of labels in\nwhich y\ni\n{+1, -1} for binary classification. Then the target\nkernel can be defined as T = yy . 
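For reference, the alignment score of Definition 1 is straightforward to compute. The snippet below is a direct transcription, with a function name of our own; the trailing comment shows the case of aligning the labeled block of a kernel with the label-induced target T = y y^T.

```python
import numpy as np

def kernel_alignment(K1, K2):
    """Empirical alignment A^(K1, K2) from Definition 1."""
    num = np.sum(K1 * K2)                                   # Frobenius inner product <K1, K2>_F
    den = np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))
    return num / den

# With K_tr the (l, l) kernel block over labeled points and y in {+1, -1}^l:
# score = kernel_alignment(K_tr, np.outer(y, y))            # alignment with T = y y^T
```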
Let K be the kernel\nmatrix with the following structure\nK =\nK\ntr\nK\ntrt\nK\ntrt\nK\nt\n(6)\nwhere K\nij\n= (x\ni\n), (x\nj\n) , K\ntr\ndenotes the matrix part of\n\"train-data block\" and K\nt\ndenotes the matrix part of \"test-data\nblock.\"\nThe theory in [8] provides the principle of learning the\nkernel matrix, i.e., looking for a kernel matrix K with good\ngeneralization performance is equivalent to finding the matrix\nthat maximizes the following empirical kernel alignment\nscore:\n^\nA(K\ntr\n, T ) =\nK\ntr\n, T\nF\n\nK\ntr\n, K\ntr F\nT, T\nF\n(7)\nThis principle has been used to learn the kernel matrices\nwith multiple kernel combinations [14] and also the semi-supervised\nkernels from graph Laplacians [32]. Motivated by\nthe related theorecial work, we propose a new spectral kernel\nlearning (SKL) algorithm which learns spectrals of the\nkernel matrix by obeying both the principle of unsupervised\nkernel design and the principle of kernel target alignment.\n4.2\nAlgorithm\nAssume that we are given a set of labeled data L =\n{x\ni\n, y\ni\n}\nl\ni=1\n, a set of unlabeled data U =\n{x\ni\n}\nn\ni=l+1\n, and\nan initial kernel matrix K.\nWe first conduct the eigen-decomposition\nof the kernel matrix:\nK =\nn\ni=1\n\ni\nv\ni\nv\ni\n,\n(8)\nwhere (\ni\n, v\ni\n) are eigen pairs of K and are assumed in a\ndecreasing order, i.e.,\n1\n\n2\n. . .\nn\n. For efficiency\nconsideration, we select the top d eigen pairs, such that\nK\nd\n=\nd\ni=1\n\ni\nv\ni\nv\ni\nK ,\n(9)\nwhere the parameter d\nn is a dimension cutoff factor that\ncan be determined by some criteria, such as the cumulative\neigen energy.\nBased on the principle of unsupervised kernel design, we\nconsider to learn the kernel matrix as follows\n\nK =\nd\ni=1\n\ni\nv\ni\nv\ni\n,\n(10)\nwhere\ni\n0 are spectral coefficients of the new kernel matrix\n. The goal of spectral kernel learning (SKL) algorithm is\nto find the optimal spectral coefficients\ni\nfor the following\noptimization\nmax\n\nK,\n^\nA(\nK\ntr\n, T )\n(11)\nsubject to\n\nK =\n\nd\ni=1\n\ni\nv\ni\nv\ni\ntrace(\nK) = 1\n\ni\n0,\n\ni\nC\ni+1\n, i = 1 . . . d\n- 1 ,\nwhere C is introduced as a decay factor that satisfies C\n1,\nv\ni\nare top d eigen vectors of the original kernel matrix K,\n\nK\ntr\nis the kernel matrix restricted to the (labeled) training\ndata and T is the target kernel induced by labels.\nNote\nthat C is introduced as an important parameter to control\nthe decay rate of spectral coefficients that will influence the\noverall performance of the kernel machine.\nThe above optimization problem belongs to convex optimization\nand is usually regarded as a semi-definite programming\nproblem (SDP) [14], which may not be computation-ally\nefficient. In the following, we turn it into a Quadratic\nProgramming (QP) problem that can be solved much more\nefficiently.\nBy the fact that the objective function (7) is invariant\nto the constant term T, T\nF\n, we can rewrite the objective\nfunction into the following form\n\nK\ntr\n, T\nF\n\n\nK\ntr\n,\nK\ntr F\n.\n(12)\nThe above alignment is invariant to scales. In order to remove\nthe trace constraint in (11), we consider the following\nalternative approach. Instead of maximizing the objective\nfunction (12) directly, we can fix the numerator to 1 and\nthen minimize the denominator. Therefore, we can turn the\noptimization problem into:\nmin\n\n\n\nK\ntr\n,\nK\ntr F\n(13)\nsubject to\n\nK =\n\nd\ni=1\n\ni\nv\ni\nv\ni\n\nK\ntr\n, T\nF\n= 1\n\ni\n0,\n\ni\nC\ni+1\n, i = 1 . . . 
d - 1.

This minimization problem without the trace constraint is equivalent to the original maximization problem with the trace constraint.

Let vec(A) denote the column vectorization of a matrix A and let D = [vec(V_{1,tr}), ..., vec(V_{d,tr})] be a constant matrix of size l^2 x d, in which each of the d matrices V_i = v_i v_i^T, restricted to the training block, has size l x l. It is not difficult to show that the above problem is equivalent to the following optimization:

\min_{\mu} \; \|D\mu\| \qquad (14)
\text{subject to} \quad \mathrm{vec}(T)^{\top} D\mu = 1, \quad \mu_i \geq 0, \quad \mu_i \geq C\,\mu_{i+1}, \; i = 1, \ldots, d-1.

Minimizing the norm is equivalent to minimizing the squared norm.

[Figure 2: Illustration of cumulative eigen energy and the spectral coefficients of different decay factors on the Ionosphere dataset. The initial kernel is a linear kernel and the number of labeled data is 20. Panels: (a) cumulative eigen energy, (b) spectral coefficients; horizontal axis: Dimension (d); curves in (b): Original Kernel, SKL (C=1), SKL (C=2), SKL (C=3).]

[Figure 3: Classification performance of semi-supervised kernels with different decay factors on the Ionosphere dataset. The initial kernel is a linear kernel and the number of labeled data is 20. Panels: (a) C=1, (b) C=2, (c) C=3; axes: Dimension (d) versus Accuracy; curves: K_Origin, K_Trunc, K_Cluster, K_Spectral.]

Hence, we can obtain the final optimization problem as

\min_{\mu} \; \mu^{\top} D^{\top} D \mu
\quad \text{subject to} \quad \mathrm{vec}(T)^{\top} D\mu = 1, \quad \mu_i \geq 0, \quad \mu_i \geq C\,\mu_{i+1}, \; i = 1, \ldots, d-1.

This is a standard Quadratic Programming (QP) problem that can be solved efficiently.
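To make the final QP concrete, here is a small numerical sketch, not the authors' implementation, of how D can be assembled from the top-d eigenvectors and the program solved. The function name is ours, SciPy's SLSQP routine stands in for a dedicated QP solver, the labeled points are assumed to occupy the first l rows of K, and the default values of d and C are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def spectral_kernel_learning(K, y_train, d=20, C=2.0):
    """Sketch of the spectral kernel learning QP (Eq. 14); labeled points come first in K."""
    l = len(y_train)
    d = min(d, K.shape[0])
    # eigen-decomposition of the initial kernel; keep the top-d eigenvectors
    eigvals, eigvecs = np.linalg.eigh(K)
    top = np.argsort(eigvals)[::-1][:d]
    V = eigvecs[:, top]                                        # columns v_1, ..., v_d
    # D = [vec(V_1,tr) ... vec(V_d,tr)], V_i = v_i v_i^T restricted to the l x l train block
    D = np.column_stack([np.outer(V[:l, i], V[:l, i]).ravel() for i in range(d)])
    t = np.outer(y_train, y_train).ravel()                     # vec(T) with T = y y^T
    objective = lambda mu: mu @ (D.T @ D) @ mu                 # ||D mu||^2
    constraints = [{'type': 'eq', 'fun': lambda mu: t @ (D @ mu) - 1.0}]
    constraints += [{'type': 'ineq', 'fun': (lambda mu, i=i: mu[i] - C * mu[i + 1])}
                    for i in range(d - 1)]                     # decay: mu_i >= C * mu_{i+1}
    result = minimize(objective, x0=np.full(d, 1.0 / d), bounds=[(0.0, None)] * d,
                      constraints=constraints, method='SLSQP')
    mu = result.x
    return (V * mu) @ V.T                                      # learned kernel: sum_i mu_i v_i v_i^T
```

A dedicated QP solver can be substituted for SLSQP without changing the way the problem is set up.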
4.3 Connections and Justifications

The essence of our semi-supervised kernel learning method lies in the theories of unsupervised kernel design and kernel target alignment. More specifically, we consider an effective dimension-reduction method that learns a semi-supervised kernel maximizing the kernel alignment score. By examining the work on unsupervised kernel design, the following two approaches can be summarized as special cases of the spectral kernel learning framework:

Cluster Kernel. This method adopts a "[1, ..., 1, 0, ..., 0]" kernel that has been used in spectral clustering [18]. It sets the top spectral coefficients to 1 and the rest to 0, i.e.,

\mu_i = \begin{cases} 1 & \text{for } i \leq d \\ 0 & \text{for } i > d \end{cases} \qquad (15)

For comparison, we refer to this method as the "Cluster kernel," denoted by K_Cluster.

Truncated Kernel. Another method is the truncated kernel, which keeps only the top d spectral coefficients,

\mu_i = \begin{cases} \lambda_i & \text{for } i \leq d \\ 0 & \text{for } i > d \end{cases} \qquad (16)

where \lambda_i are the top eigenvalues of the initial kernel. This is exactly the method of kernel principal component analysis [20], which keeps only the d most significant principal components of a given kernel. For comparison, we denote this method by K_Trunc.

[Figure 4: Example of spectral coefficients and performance impacted by different decay factors on the Ionosphere dataset. The initial kernel is an RBF kernel and the number of labeled data is 20. Panels: (a) spectral coefficients (Dimension (d) versus Scaled Coefficient; curves: Original Kernel, SKL (C=1), SKL (C=2), SKL (C=3)), (b) C=1, (c) C=2 (Dimension (d) versus Accuracy; curves: K_Origin, K_Trunc, K_Cluster, K_Spectral).]

[Figure 5: Classification performance of semi-supervised kernels with different decay factors on the Heart dataset. The initial kernel is a linear kernel and the number of labeled data is 20. Panels: (a) C=1, (b) C=2, (c) C=3; axes: Dimension (d) versus Accuracy; curves: K_Origin, K_Trunc, K_Cluster, K_Spectral.]

In comparison with semi-supervised kernel learning methods based on graph Laplacians, our work is most similar to the approach in [32], which learns the spectral transformation of graph Laplacians by kernel target alignment with order constraints. However, we should emphasize two important differences that explain why our method can work more effectively.

First, the work in [32] belongs to traditional graph-based semi-supervised learning methods, which assume the kernel matrix is derived from the spectral decomposition of graph Laplacians. Instead, our spectral kernel learning method can start from any initial kernel and assumes the kernel matrix is derived from the spectral decomposition of the normalized kernel.

Second, compared to the kernel learning method in [14], the authors of [32] add order constraints to the optimization of kernel target alignment [8] to enforce graph smoothness. In our case, we introduce a decay factor C that constrains the relationship among the spectral coefficients in the optimization and makes them decay faster. In fact, if we ignore the difference of graph Laplacians and assume that the initial kernel in our method is given as K \sim L^{-1}, we can see that the method in [32] can be regarded as a special case of our method in which the decay factor C is set to 1 and the dimension cut-off parameter d is set to n.

4.4 Empirical Observations

To argue that C = 1 in the spectral kernel learning algorithm may not be a good choice for learning an effective kernel, we illustrate some empirical examples that justify the motivation of our spectral kernel learning algorithm. One goal of our spectral kernel learning methodology is to attain a fast decay rate of the spectral coefficients of the kernel matrix. Figure 2 illustrates how the resulting spectral coefficients change under different decay factors. From the figure, we can see that the curves with larger decay factors (C = 2, 3) have faster decay rates than the original kernel and the one using C = 1. Meanwhile, the cumulative eigen energy score converges to 100% quickly as the number of dimensions increases. This shows that we may use a much smaller number of eigen-pairs in our semi-supervised kernel learning algorithm for large-scale problems.

To examine the impact on performance in more detail, we evaluate the classification
From the figure, we\ncan see that the curves with larger decay factors (C = 2, 3)\nhave faster decay rates than the original kernel and the one\nusing C = 1. Meanwhile, we can see that the cumulative\neigen energy score converges to 100% quickly when the number\nof dimensions is increased. This shows that we may use\nmuch small number of eigen-pairs in our semi-supervised\nkernel learning algorithm for large-scale problems.\nTo examine more details in the impact of performance\nwith different decay factors, we evaluate the classification\n192\nResearch Track Paper\nperformance of spectral kernel learning methods with different\ndecay factors in Figure 3. In the figure, we compare\nthe performance of different kernels with respect to spectral\nkernel design methods. We can see that two unsupervised\nkernels, K\nTrunc\nand K\nCluster\n, tend to perform better than\nthe original kernel when the dimension is small. But their\nperformances are not very stable when the number of dimensions\nis increased. For comparison, the spectral kernel\nlearning method achieves very stable and good performance\nwhen the decay factor C is larger than 1. When the decay\nfactor is equal to 1, the performance becomes unstable due\nto the slow decay rates observed from our previous results\nin Figure 3. This observation matches the theoretical justification\n[27] that a kernel with good performance usually\nfavors a faster decay rate of spectral coefficients.\nFigure 4 and Figure 5 illustrate more empirical examples\nbased on different initial kernels, in which similar results\ncan be observed. Note that our suggested kernel learning\nmethod can learn on any valid kernel, and different initial\nkernels will impact the performance of the resulting spectral\nkernels. It is usually helpful if the initial kernel is provided\nwith domain knowledge.\nUNIFIED KERNEL LOGISTIC REGRESSION\nIn this section, we present a specific paradigm based on\nthe proposed framework of learning unified kernel machines.\nWe assume the underlying probabilistic model of the kernel\nmachine is Kernel Logistic Regression (KLR). Based on\nthe UKM framework, we develop the Unified Kernel Logistic\nRegression (UKLR) paradigm to tackle classification\ntasks. Note that our framework is not restricted to the KLR\nmodel, but also can be widely extended for many other kernel\nmachines, such as Support Vector Machine (SVM) and\nRegularized Least-Square (RLS) classifiers.\nSimilar to other kernel machines, such as SVM, a KLR\nproblem can be formulated in terms of a stanard regularized\nform of loss+penalty in the reproducing kernel Hilbert space\n(RKHS):\nmin\nf H\nK\n1\nl\nl\ni=1\nln(1 + e\n-y\ni\nf (x\ni\n)\n) +\n2 ||f ||\n2\nH\nK\n,\n(17)\nwhere\nH\nK\nis the RKHS by a kernel K and is a regularization\nparameter. By the representer theorem, the optimal\nf (x) has the form:\nf (x) =\nl\ni=1\n\ni\nK(x, x\ni\n) ,\n(18)\nwhere\ni\nare model parameters. Note that we omit the constant\nterm in f (x) for simplified notations. To solve the\nKLR model parameters, there are a number of available\ntechniques for effective solutions [29].\nWhen the kernel K and the model parameters are available\n, we use the following solution for active learning, which\nis simple and efficient for large-scale problems. 
More specifically\n, we measure the information entropy of each unlabeled\ndata example as follows\nH(x; , K) =\nN\nC\ni=1\np(C\ni\n|x)log(p(C\ni\n|x)) ,\n(19)\nAlgorithm: Unified Kernel Logistic Regresssion\nInput\nK\n0\n: Initial normalized kernel\nL: Set of labeled data\nU: Set of unlabeled data\nRepeat\nSpectral Kernel Learning\nK\nSpectral Kernel(K\n0\n, L, U );\nKLR Parameter Estimation\n\nKLR Solver(L, K);\nConvergence Test\nIf (converged), Exit Loop;\nActive Learning\nx\n\nmax\nxU\nH(x; , K)\nL\nL {x\n\n}, U U - {x\n\n}\nUntil converged.\nOutput\nUKLR = M(K, ).\nFigure 6: The UKLR Algorithm.\nwhere N\nC\nis the number of classes and C\ni\ndenotes the i\nth\nclass and p(C\ni\n|x) is the probability of the data example x\nbelonging to the i\nth\nclass which can be naturally obtained\nby the current KLR model (, K). The unlabeled data examples\nwith maximum values of entropy will be considered\nas the most informative data for labeling.\nBy unifying the spectral kernel learning method proposed\nin Section 3, we summarize the proposed algorithm of Unified\nKernel Logistic Regression (UKLR) in Figure 6. In the\nalgorithm, note that we can usually initialize a kernel by a\nstandard kernel with appropriate parameters determined by\ncross validation or by a proper deisgn of the initial kernel\nwith domain knowledge.\nEXPERIMENTAL RESULTS\nWe discuss our empirical evaluation of the proposed framework\nand algorithms for classification. We first evaluate the\neffectiveness of our suggested spectral kernel learning algorithm\nfor learning semi-supervised kernels and then compare\nthe performance of our unified kernel logistic regression\nparadigm with traditional classification schemes.\n6.1\nExperimental Testbed and Settings\nWe use the datasets from UCI machine learning repository\n1\n. Four datasets are employed in our experiments. Table\n1 shows the details of four UCI datasets in our experiments\n.\nFor experimental settings, to examine the influences of\ndifferent training sizes, we test the compared algorithms on\nfour different training set sizes for each of the four UCI\ndatasets. For each given training set size, we conduct 20\nrandom trials in which a labeled set is randomly sampled\n1\nwww.ics.uci.edu/ mlearn/MLRepository.html\n193\nResearch Track Paper\nTable 1: List of UCI machine learning datasets.\nDataset\n#Instances\n#Features\n#Classes\nHeart\n270\n13\n2\nIonosphere\n351\n34\n2\nSonar\n208\n60\n2\nWine\n178\n13\n3\nfrom the whole dataset and all classes must be present in\nthe sampled labeled set.\nThe rest data examples of the\ndataset are then used as the testing (unlabeled) data. To\ntrain a classifier, we employ the standard KLR model for\nclassification. We choose the bounds on the regularization\nparameters via cross validation for all compared kernels to\navoid an unfair comparison. For multi-class classification,\nwe perform one-against-all binary training and testing and\nthen pick the class with the maximum class probability.\n6.2\nSemi-Supervised Kernel Learning\nIn this part, we evaluate the performance of our spectral\nkernel learning algorithm for learning semi-supervised kernels\n. We implemented our algorithm by a standard Matlab\nQuadratic Programming solver (quadprog). The dimension-cut\nparameter d in our algorithm is simply fixed to 20 without\nfurther optimizing. Note that one can easily determine\nan appropriate value of d by examining the range of the\ncumulative eigen energy score in order to reduce the com-putational\ncost for large-scale problems. 
The decay factor C is important for our spectral kernel learning algorithm. As shown in the earlier examples, C must be a positive real value greater than 1. Typically we favor a larger decay factor to achieve better performance, but it must not be set too large, since a too-large decay factor may result in overly stringent constraints in the optimization, which then admits no solution. In our experiments, C is simply fixed to constant values (greater than 1) for the engaged datasets.
For comparison, we compare our SKL algorithms with the state-of-the-art semi-supervised kernel learning method based on graph Laplacians [32], which is related to a quadratically constrained quadratic program (QCQP). More specifically, we have implemented two graph-Laplacian-based semi-supervised kernels by order constraints [32]. One is the order-constrained graph kernel (denoted as "Order") and the other is the improved order-constrained graph kernel (denoted as "Imp-Order"), which removes the constraints from constant eigenvectors. To carry out a fair comparison, we use the top 20 smallest eigenvalues and eigenvectors from the graph Laplacian, which is constructed with 10-NN unweighted graphs. We also include three standard kernels for comparison.
Table 2 shows the experimental results of the compared kernels (3 standard and 5 semi-supervised kernels) based on KLR classifiers on four UCI datasets with different sizes of labeled data. Each cell in the table has two rows: the upper row shows the average test set accuracies with standard errors, and the lower row gives the average run time in seconds for learning the semi-supervised kernels on a 3 GHz desktop computer. We conducted a paired t-test at a significance level of 0.05 to assess the statistical significance of the test set accuracy results. From the experimental results, we found that the two order-constrained graph kernels perform well on the Ionosphere and Wine datasets, but they do not achieve important improvements on the Heart and Sonar datasets. Among all the compared kernels, the semi-supervised kernels produced by our spectral kernel learning algorithms achieve the best performance. The semi-supervised kernel initialized with an RBF kernel outperforms the other kernels in most cases. For example, on the Ionosphere dataset, an RBF kernel with 10 initial training examples achieves only 73.56% test set accuracy, while the SKL algorithm boosts the accuracy significantly to 83.36%. Finally, looking at the time performance, the average run time of our algorithm is less than 10% of that of the previous QCQP algorithms.
6.3 Unified Kernel Logistic Regression
In this part, we evaluate the performance of our proposed paradigm of unified kernel logistic regression (UKLR). As a comparison, we implement two traditional classification schemes: one is the traditional KLR classification scheme that is trained on randomly sampled labeled data, denoted as "KLR+Rand"; the other is the active KLR classification scheme that actively selects the most informative examples for labeling, denoted as "KLR+Active". The active learning strategy is based on the simple maximum-entropy criterion given in the previous section. The UKLR scheme is implemented based on the algorithm in Figure 6.
For the active learning evaluation, we choose a batch of the 10 most informative unlabeled examples for labeling in each trial. Table 3 summarizes the experimental results of the average test set accuracy on four UCI datasets.
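Before turning to the results, the active-learning step used by KLR+Active and UKLR can be sketched as follows: rank the unlabeled examples by the predictive entropy of Eq. (19) and take the top of the ranking as the batch. The sketch below is illustrative rather than the authors' code; the class-probability matrix is assumed to come from the current KLR model, and the batch size of 10 matches the evaluation protocol just described.

import numpy as np

def select_informative_batch(class_probs, batch_size=10):
    # class_probs: array of shape (n_unlabeled, n_classes) holding p(C_i | x)
    # for every unlabeled example under the current KLR model (alpha, K).
    # Returns the indices of the batch_size examples with the largest
    # predictive entropy H(x; alpha, K) = -sum_i p(C_i|x) log p(C_i|x).
    p = np.clip(class_probs, 1e-12, 1.0)          # avoid log(0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(-entropy)[:batch_size]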
From the experimental results, we can observe that the active learning classification schemes outperform the randomly sampled classification schemes in most cases. This shows that the suggested simple active learning strategy is effective. Further, among all compared schemes, the suggested UKLR solution significantly outperforms the other classification approaches in most cases. These results show that the unified scheme is effective and promising for integrating traditional learning methods into a unified solution.
6.4 Discussions
Although the experimental results have shown that our scheme is promising, some open issues in our current solution need to be explored further in future work. One problem is to investigate more effective active learning methods for selecting the most informative examples for labeling. One solution to this issue is to employ batch mode active learning methods, which can be more efficient for large-scale classification tasks [11, 23, 24]. Moreover, we will study more effective kernel learning algorithms without the assumption of spectral kernels. Further, we may examine the theoretical analysis of the generalization performance of our method [27]. Finally, we may combine kernel machine speedup techniques to deploy our scheme efficiently for large-scale applications [26].
CONCLUSION
This paper presented a novel general framework for learning Unified Kernel Machines (UKM) for classification. Different from traditional classification schemes, our UKM framework integrates supervised learning, semi-supervised learning, unsupervised kernel design, and active learning in a unified solution, making it more effective for classification tasks. For the proposed framework, we focus our attention on tackling a core problem of learning semi-supervised kernels from labeled and unlabeled data. We proposed a Spectral
Table 2: Classification performance of different kernels using KLR classifiers on four datasets. The mean accuracies and standard errors are shown in the table. 3 standard kernels and 5 semi-supervised kernels are compared. Each cell in the table has two rows.
The upper row shows the test set accuracy with standard\nerror; the lower row gives the average time used in learning the semi-supervised kernels (\"Order\" and \"Imp-Order\"\nkernels are sovled by SeDuMi/YALMIP package; \"SKL\" kernels are solved directly by the Matlab\nquadprog function.\nTrain\nStandard Kernels\nSemi-Supervised Kernels\nSize\nLinear\nQuadratic\nRBF\nOrder\nImp-Order\nSKL(Linear)\nSKL(Quad)\nSKL(RBF)\nHeart\n10\n67.19 1.94\n71.90 1.23\n70.04 1.61\n63.60 1.94\n63.60 1.94\n70.58 1.63\n72.33 1.60\n73.37 1.50\n\n\n\n( 0.67 )\n( 0.81 )\n( 0.07 )\n( 0.06 )\n( 0.06 )\n20\n67.40 1.87\n70.36 1.51\n72.64 1.37\n65.88 1.69\n65.88 1.69\n76.26 1.29\n75.36 1.30\n76.30 1.33\n\n\n\n( 0.71 )\n( 0.81 )\n( 0.06 )\n( 0.06 )\n( 0.06 )\n30\n75.42 0.88\n70.71 0.83\n74.40 0.70\n71.73 1.14\n71.73 1.14\n78.42 0.59\n78.65 0.52\n79.23 0.58\n\n\n\n( 0.95 )\n( 0.97 )\n( 0.06 )\n( 0.06 )\n( 0.06 )\n40\n78.24 0.89\n71.28 1.10\n78.48 0.77\n75.48 0.69\n75.48 0.69\n80.61 0.45\n80.26 0.45\n80.98 0.51\n\n\n\n( 1.35 )\n( 1.34 )\n( 0.07 )\n( 0.07 )\n( 0.07 )\nIonosphere\n10\n73.71 1.27\n71.30 1.70\n73.56 1.91\n71.86 2.79\n71.86 2.79\n75.53 1.75\n71.22 1.82\n83.36 1.31\n\n\n\n( 0.90 )\n( 0.87 )\n( 0.05 )\n( 0.05 )\n( 0.05 )\n20\n75.62 1.24\n76.00 1.58\n81.71 1.74\n83.04 2.10\n83.04 2.10\n78.78 1.60\n80.30 1.77\n88.55 1.32\n\n\n\n( 0.87 )\n( 0.79 )\n( 0.05 )\n( 0.06 )\n( 0.05 )\n30\n76.59 0.82\n79.10 1.46\n86.21 0.84\n87.20 1.16\n87.20 1.16\n82.18 0.56\n83.08 1.36\n90.39 0.84\n\n\n\n( 0.93 )\n( 0.97 )\n( 0.05 )\n( 0.05 )\n( 0.05 )\n40\n77.97 0.79\n82.93 1.33\n89.39 0.65\n90.56 0.64\n90.56 0.64\n83.26 0.53\n87.03 1.02\n92.14 0.46\n\n\n\n( 1.34 )\n( 1.38 )\n( 0.05 )\n( 0.04 )\n( 0.04 )\nSonar\n10\n63.01 1.47\n62.85 1.53\n60.76 1.80\n59.67 0.89\n59.67 0.89\n64.27 1.91\n64.37 1.64\n65.30 1.78\n\n\n\n( 0.63 )\n( 0.63 )\n( 0.08 )\n( 0.07 )\n( 0.07 )\n20\n68.09 1.11\n69.55 1.22\n67.63 1.15\n64.68 1.57\n64.68 1.57\n70.61 1.14\n69.79 1.30\n71.76 1.07\n\n\n\n( 0.68 )\n( 0.82 )\n( 0.07 )\n( 0.07 )\n( 0.08 )\n30\n66.40 1.06\n69.80 0.93\n68.23 1.48\n66.54 0.79\n66.54 0.79\n70.20 1.48\n68.48 1.59\n71.69 0.87\n\n\n\n( 0.88 )\n( 1.02 )\n( 0.07 )\n( 0.07 )\n( 0.07 )\n40\n64.94 0.74\n71.37 0.52\n71.61 0.89\n69.82 0.82\n69.82 0.82\n72.35 1.06\n71.28 0.96\n72.89 0.68\n\n\n\n( 1.14 )\n( 1.20 )\n( 0.07 )\n( 0.08 )\n( 0.07 )\nWine\n10\n82.26 2.18\n85.89 1.73\n87.80 1.63\n86.99 1.98\n86.99 1.45\n83.63 2.62\n83.21 2.36\n90.54 1.08\n\n\n\n( 1.02 )\n( 0.86 )\n( 0.09 )\n( 0.09 )\n( 0.09 )\n20\n86.39 1.39\n86.96 1.30\n93.77 0.99\n92.31 1.39\n92.31 1.39\n89.53 2.32\n92.56 0.56\n94.94 0.50\n\n\n\n( 0.92 )\n( 0.91 )\n( 0.09 )\n( 0.09 )\n( 0.09 )\n30\n92.50 0.76\n87.43 0.63\n94.63 0.50\n92.97 0.54\n92.97 0.54\n93.99 1.09\n94.29 0.53\n96.25 0.30\n\n\n\n( 1.28 )\n( 1.27 )\n( 0.09 )\n( 0.10 )\n( 0.09 )\n40\n94.96 0.65\n88.80 0.93\n96.38 0.35\n95.62 0.37\n95.62 0.37\n95.80 0.47\n95.36 0.46\n96.81 0.28\n\n\n\n( 1.41 )\n( 1.39 )\n( 0.08 )\n( 0.08 )\n( 0.10 )\nKernel Learning (SKL) algorithm, which is more effective\nand efficient for learning kernels from labeled and unlabeled\ndata. Under the framework, we developed a paradigm of\nunified kernel machine based on Kernel Logistic Regression,\ni.e., Unified Kernel Logistic Regression (UKLR). 
Empirical\nresults demonstrated that our proposed solution is more effective\nthan the traditional classification approaches.\nACKNOWLEDGMENTS\nThe work described in this paper was fully supported by\ntwo grants, one from the Shun Hing Institute of Advanced\nEngineering, and the other from the Research Grants Council\nof the Hong Kong Special Administrative Region, China\n(Project No. CUHK4205/04E).\nREFERENCES\n[1] M. Belkin and I. M. andd P. Niyogi. Regularization\nand semi-supervised learning on large graphs. In\nCOLT, 2004.\n[2] M. Belkin and P. Niyogi. Semi-supervised learning on\nriemannian manifolds. Machine Learning, 2004.\n[3] E. Chang, S. C. Hoi, X. Wang, W.-Y. Ma, and\nM. Lyu. A unified machine learning framework for\nlarge-scale personalized information management. In\nThe 5th Emerging Information Technology\nConference, NTU Taipei, 2005.\n[4] E. Chang and M. Lyu. Unified learning paradigm for\nweb-scale mining. In Snowbird Machine Learning\nWorkshop, 2006.\n[5] O. Chapelle, A. Zien, and B. Scholkopf.\nSemi-supervised learning. MIT Press, 2006.\n[6] F. R. K. Chung. Spectral Graph Theory. American\nMathematical Soceity, 1997.\n[7] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active\nlearning with statistical models. In NIPS, volume 7,\npages 705712, 1995.\n[8] N. Cristianini, J. Shawe-Taylor, and A. Elisseeff. On\nkernel-target alignment. JMLR, 2002.\n195\nResearch Track Paper\nTable 3: Classification performance of different classification schemes on four UCI datasets.\nThe mean\naccuracies and standard errors are shown in the table. \"KLR\" represents the initial classifier with the initial\ntrain size; other three methods are trained with additional 10 random/active examples.\nTrain\nLinear Kernel\nRBF Kernel\nSize\nKLR\nKLR+Rand\nKLR+Active\nUKLR\nKLR\nKLR+Rand\nKLR+Active\nUKLR\nHeart\n10\n67.19 1.94\n68.22 2.16\n69.22 1.71\n77.24 0.74\n70.04 1.61\n72.24 1.23\n75.36 0.60\n78.44 0.88\n20\n67.40 1.87\n73.79 1.29\n73.77 1.27\n79.27 1.00\n72.64 1.37\n75.10 0.74\n76.23 0.81\n79.88 0.90\n30\n75.42 0.88\n77.70 0.92\n78.65 0.62\n81.13 0.42\n74.40 0.70\n76.43 0.68\n76.61 0.61\n81.48 0.41\n40\n78.24 0.89\n79.30 0.75\n80.18 0.79\n82.55 0.28\n78.48 0.77\n78.50 0.53\n79.95 0.62\n82.66 0.36\nIonosphere\n10\n73.71 1.27\n74.89 0.95\n75.91 0.96\n77.31 1.23\n73.56 1.91\n82.57 1.78\n82.76 1.37\n90.48 0.83\n20\n75.62 1.24\n77.09 0.67\n77.51 0.66\n81.42 1.10\n81.71 1.74\n85.95 1.30\n88.22 0.78\n91.28 0.94\n30\n76.59 0.82\n78.41 0.79\n77.91 0.77\n84.49 0.37\n86.21 0.84\n89.04 0.66\n90.32 0.56\n92.35 0.59\n40\n77.97 0.79\n79.05 0.49\n80.30 0.79\n84.49 0.40\n89.39 0.65\n90.55 0.59\n91.83 0.49\n93.89 0.28\nSonar\n10\n61.19 1.56\n63.72 1.65\n65.51 1.55\n66.12 1.94\n57.40 1.48\n60.19 1.32\n59.49 1.46\n67.13 1.58\n20\n67.31 1.07\n68.85 0.84\n69.38 1.05\n71.60 0.91\n62.93 1.36\n64.72 1.24\n64.52 1.07\n72.30 0.98\n30\n66.10 1.08\n67.59 1.14\n69.79 0.86\n71.40 0.80\n63.03 1.32\n63.72 1.51\n66.67 1.53\n72.26 0.98\n40\n66.34 0.82\n68.16 0.81\n70.19 0.90\n73.04 0.69\n66.70 1.25\n68.70 1.19\n67.56 0.90\n73.16 0.88\nWine\n10\n82.26 2.18\n87.31 1.01\n89.05 1.07\n87.31 1.03\n87.80 1.63\n92.75 1.27\n94.49 0.54\n94.87 0.49\n20\n86.39 1.39\n93.99 0.40\n93.82 0.71\n94.43 0.54\n93.77 0.99\n95.57 0.38\n97.13 0.18\n96.76 0.26\n30\n92.50 0.76\n95.25 0.47\n96.96 0.40\n96.12 0.47\n94.63 0.50\n96.27 0.35\n97.17 0.38\n97.21 0.26\n40\n94.96 0.65\n96.21 0.63\n97.54 0.37\n97.70 0.34\n96.38 0.35\n96.33 0.45\n97.97 0.23\n98.12 0.21\n[9] S. Fine, R. Gilad-Bachrach, and E. Shamir. 
Query by committee, linear separation and random walks. Theor. Comput. Sci., 284(1):25-51, 2002.
[10] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Mach. Learn., 28(2-3):133-168, 1997.
[11] S. C. Hoi, R. Jin, and M. R. Lyu. Large-scale text categorization by batch mode active learning. In WWW 2006, Edinburgh, 2006.
[12] J. A. K. Suykens, G. Horvath, S. Basu, C. Micchelli, and J. Vandewalle, editors. Advances in Learning Theory: Methods, Models and Applications. NATO Science Series: Computer & Systems Sciences, 2003.
[13] R. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete structures. In Proc. ICML, 2002.
[14] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semi-definite programming. JMLR, 5:27-72, 2004.
[15] G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. Jordan. Minimax probability machine. In Advances in Neural Information Processing Systems 14, 2002.
[16] R. Liere and P. Tadepalli. Active learning with committees for text categorization. In Proceedings of the 14th Conference of the American Association for Artificial Intelligence (AAAI), pages 591-596, MIT Press, 1997.
[17] R. Meir and G. Rätsch. An introduction to boosting and leveraging. In Advanced Lectures on Machine Learning (LNAI 2600), 2003.
[18] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2001.
[19] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of error reduction. In Proc. 18th ICML, pages 441-448, 2001.
[20] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[21] A. Smola and R. Kondor. Kernels and regularization on graphs. In Intl. Conf. on Learning Theory, 2003.
[22] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In Advances in Neural Information Processing Systems, 2001.
[23] S. Tong and E. Chang. Support vector machine active learning for image retrieval. In Proc. ACM Multimedia Conference, pages 107-118, New York, 2001.
[24] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. In Proc. 17th ICML, pages 999-1006, 2000.
[25] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
[26] G. Wu, Z. Zhang, and E. Y. Chang. Kronecker factorization for speeding up kernel machines. In SIAM Int. Conference on Data Mining (SDM), 2005.
[27] T. Zhang and R. K. Ando. Analysis of spectral kernel design based semi-supervised learning. In NIPS, 2005.
[28] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS 16, 2005.
[29] J. Zhu and T. Hastie. Kernel logistic regression and the import vector machine. In NIPS 14, pages 1081-1088, 2001.
[30] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison, 2005.
[31] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. ICML, 2003.
[32] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning.
In NIPS2005, 2005.\n196\nResearch Track Paper\n", "keywords": "Active Learning;data mining;classification;unified kernel machine(UKM);Kernel Machines;spectral kernel learning (SKL);Kernel Logistic Regressions;Supervised Learning;supervised learning;Semi-Supervised Learning;active learning;Classification;Unsuper-vised Kernel Design;framework;Spectral Kernel Learning;semi-supervised kernel learning"} {"name": "127", "title": "Leo: A System for Cost Effective 3D Shaded Graphics", "abstract": "A physically compact, low cost, high performance 3D graphics accelerator is presented. It supports shaded rendering of triangles and antialiased lines into a double-buffered 24-bit true color frame buffer with a 24-bit Z-buffer. Nearly the only chips used besides standard memory parts are 11 ASICs (of four types). Special geometry data reformatting hardware on one ASIC greatly speeds and simplifies the data input pipeline. Floating-point performance is enhanced by another ASIC: a custom graphics microprocessor, with specialized graphics instructions and features. Screen primitive rasterization is carried out in parallel by five drawing ASICs, employing a new partitioning of the back-end rendering task. For typical rendering cases, the only system performance bottleneck is that intrinsically imposed by VRAM.", "fulltext": "INTRODUCTION\nTo expand the role of 3D graphics in the mainstream computer industry\n, cost effective, physically small, usable performance 3D\nshaded graphics architectures must be developed. For such systems,\nnew features and sheer performance at any price can no longer be\nthe driving force behind the architecture; instead, the focus must be\non affordable desktop systems.\nThe historical approach to achieving low cost in 3D graphics systems\nhas been to compromise both performance and image quality.\nBut now, falling memory component prices are bringing nearly ideal\nframe buffers into the price range of the volume market: double\nbuffered 24-bit color with a 24-bit Z-buffer. The challenge is to drive\nthese memory chips at their maximum rate with a minimum of supporting\nrendering chips, keeping the total system cost and physical\nsize to an absolute minimum. To achieve this, graphics architectures\nmust be repartitioned to reduce chip count and internal bus sizes,\nwhile still supporting existing 2D and 3D functionality.\nThis paper describes a new 3D graphics system, Leo, designed to\nthese philosophies. For typical cases, Leo's only performance limit\nis that intrinsically imposed by VRAM. This was achieved by a\ncombination of new architectural techniques and advances in VLSI\ntechnology. The result is a system without performance or image\nquality compromises, at an affordable cost and small physical size.\nThe Leo board set is about the size of one and a half paperback novels\n; the complete workstation is slightly larger than two copies of\nFoley and Van Dam [7]. Leo supports both the traditional requirements\nof the 2D X window system and the needs of 3D rendering:\nshaded triangles, antialiased vectors, etc.\nARCHITECTURAL ALTERNATIVES\nA generic pipeline for 3D shaded graphics is shown in Figure 1. ([7]\nChapter 18 is a good overview of 3D graphics hardware pipeline issues\n.) This pipeline is truly generic, as at the top level nearly every\ncommercial 3D graphics accelerator fits this abstraction. Where individual\nsystems differ is in the partitioning of this rendering pipeline\n, especially in how they employ parallelism. 
Two major areas\nhave been subject to separate optimization: the floating-point intensive\ninitial stages of processing up to, and many times including,\nprimitive set-up; and the drawing-intensive operation of generating\npixels within a primitive and Z-buffering them into the frame buffer.\nFor low end accelerators, only portions of the pixel drawing stages\nof the pipeline are in hardware; the floating-point intensive parts of\nthe pipe are processed by the host in software. As general purpose\nprocessors increase in floating-point power, such systems are starting\nto support interesting rendering rates, while minimizing cost\n[8]. But, beyond some limit, support of higher performance requires\ndedicated hardware for the entire pipeline.\nThere are several choices available for partitioning the floating-point\nintensive stages. Historically, older systems performed these\ntasks in a serial fashion [2]. In time though, breaking the pipe into\nmore pieces for more parallelism (and thus performance) meant\nthat each section was devoting more and more of its time to I/O\noverhead rather than to real work. Also, computational variance\nmeant that many portions of the pipe would commonly be idle\nwhile others were overloaded. This led to the data parallel designs\nof most recent 3D graphics architectures [12].\nLeo: A System for Cost Effective\n3D Shaded Graphics\nMichael F Deering, Scott R Nelson\nSun Microsystems Computer Corporation\n\nHere the concept is that multiple parallel computation units can\neach process the entire floating-point intensive task, working in parallel\non different parts of the scene to be rendered. This allows each\npipe to be given a large task to chew on, minimizing handshake\noverhead. But now there is a different load balancing problem. If\none pipe has an extra large task, the other parallel pipes may go idle\nwaiting for their slowest peer, if the common requirement of in-order\nexecution of tasks is to be maintained. Minor load imbalances\ncan be averaged out by adding FIFO buffers to the inputs and outputs\nof the parallel pipes. Limiting the maximum size of task given\nto any one pipe also limits the maximum imbalance, at the expense\nof further fragmenting the tasks and inducing additional overhead.\nBut the most severe performance bottleneck lies in the pixel drawing\nback-end. The most fundamental constraint on 3D computer\ngraphics architecture over the last ten years has been the memory\nchips that comprise the frame buffer. Several research systems have\nattempted to avoid this bottleneck by various techniques [10][4][8],\nbut all commercial workstation systems use conventional Z-buffer\nrendering algorithms into standard VRAMs or DRAMs. How this\nRAM is organized is an important defining feature of any high performance\nrendering system.\nLEO OVERVIEW\nFigure 2 is a diagram of the Leo system. This figure is not just a\nblock diagram; it is also a chip level diagram, as every chip in the\nsystem is shown in this diagram. All input data and window system\ninteractions enter through the LeoCommand chip. Geometry data is\nreformatted in this chip before being distributed to the array of LeoFloat\nchips below. The LeoFloat chips are microcoded specialized\nDSP-like processors that tackle the floating-point intensive stages\nof the rendering pipeline. The LeoDraw chips handle all screen\nspace pixel rendering and are directly connected to the frame buffer\nRAM chips. 
LeoCross handles the back-end color look-up tables, double buffering, and video timing, passing the final digital pixel values to the RAMDAC.

Figure 2: The Leo Block Diagram. Every chip in the system is represented in this diagram: LeoCommand, four LeoFloats (each with its SRAM microcode store), five LeoDraws (each driving its own interleave of VRAM and DRAM), LeoCross, the RAMDAC, the clock generator, and the boot PROM, linked by the CF, CD, and CX busses (the CX bus is a subset of the CD bus).

Figure 1: Generic 3D Graphics Pipeline. The floating-point intensive functions run from data input over the SBus through transformation, clip test, face determination, lighting, clip (if needed), perspective divide, screen space conversion, and set-up for incremental render; the drawing intensive functions cover edge-walk, span-interpolate, Z-buffered blend into the VRAM frame buffer, the double-buffered MUX, the output lookup table, and digital-to-analog conversion to the video output.

The development of the Leo architecture started with the constraints imposed by contemporary VRAM technology. As will be derived in the LeoDraw section below, these constraints led to the partitioning of the VRAM controlling LeoDraw chips, and set a maximum back-end rendering rate. This rate in turn set the performance goal for LeoFloat, as well as the data input bandwidth and processing rate for LeoCommand. After the initial partitioning of the rendering pipeline into these chips, each chip was subjected to additional optimization. Throughput bottlenecks in input geometry format conversion, floating-point processing, and pixel rendering were identified and overcome by adding reinforcing hardware to the appropriate chips.
Leo's floating-point intensive section uses data parallel partitioning. LeoCommand helps minimize load balancing problems by breaking down rendering tasks to the smallest isolated primitives: individual triangles, vectors, dots, portions of pixel rasters, rendering attributes, etc., at the cost of precluding optimizations for shared data in triangle strips and polylines. This was considered acceptable due to the very low average strip length empirically observed in real applications. The overhead of splitting geometric data into isolated primitives is minimized by the use of dedicated hardware for this task. Another benefit of converting all rendering operations to isolated primitives is that downstream processing of primitives is considerably simplified by only needing to focus on the isolated case.
INPUT PROCESSING: LEO COMMAND
Feeding the pipe
Leo supports input of geometry data both as programmed I/O and through DMA. The host CPU can directly store up to 32 data words in an internal LeoCommand buffer without expensive read-back testing of input status every few words. This is useful on hosts that do not support DMA, or when the host must perform format conversions beyond those supported in hardware. In DMA mode, LeoCommand employs efficient block transfer protocols on the system bus to transfer data from system memory to its input buffer, allowing much higher bandwidth than simple programmed I/O.
Virtual\nmemory pointers to application's geometry arrays are passed directly\nto LeoCommand, which converts them to physical memory\naddresses without operating system intervention (except when a\npage is marked as currently non-resident). This frees the host CPU\nto perform other computations during the data transfer. Thus the\nDMA can be efficient even for pure immediate-mode applications,\nwhere the geometry is being created on the fly.\nProblem: Tower of Babel of input formats\nOne of the problems modern display systems face is the explosion\nof different input formats for similar drawing functions that need to\nbe supported. Providing optimized microcode for each format\nrapidly becomes unwieldy. The host CPU could be used to pretrans-late\nthe primitive formats, but at high speeds this conversion operation\ncan itself become a system bottleneck. Because DMA completely\nbypasses the host CPU, LeoCommand includes a programmable\nformat conversion unit in the geometry data pipeline. This\nreformatter is considerably less complex than a general purpose\nCPU, but can handle the most commonly used input formats, and at\nvery high speeds.\nThe geometry reformatting subsystem allows several orthogonal\noperations to be applied to input data. This geometric input data is\nabstracted as a stream of vertex packets. Each vertex packet may\ncontain any combination of vertex position, vertex normal, vertex\ncolor, facet normal, facet color, texture map coordinates, pick IDs,\nheaders, and other information. One conversion supports arbitrary\nre-ordering of data within a vertex, allowing a standardized element\norder after reformatting. Another operation supports the conversion\nof multiple numeric formats to 32-bit IEEE floating-point. The\nsource data can be 8-bit or 16-bit fixed-point, or 32-bit or 64-bit\nIEEE floating-point. Additional miscellaneous reformatting allows\nthe stripping of headers and other fields, the addition of an internally\ngenerated sequential pick ID, and insertion of constants. The\nfinal reformatting stage re-packages vertex packets into complete\nisolated geometry primitives (points, lines, triangles). Chaining bits\nin vertex headers delineate which vertices form primitives.\nLike some other systems, Leo supports a generalized form of triangle\nstrip (see Figure 3), where vertex header bits within a strip specify\nhow the incoming vertex should be combined with previous vertices\nto form the next triangle. A stack of the last three vertices used\nto form a triangle is kept. The three vertices are labeled oldest, middle\n, and newest. An incoming vertex of type replace\n_oldest causes\nthe oldest vertex to be replaced by the middle, the middle to be replaced\nby the newest, and the incoming vertex becomes the newest.\nThis corresponds to a PHIGS PLUS triangle strip (sometimes called\na \"zig-zag\" strip). The replacement type replace\n_middle leaves the\noldest vertex unchanged, replaces the middle vertex by the newest,\nand the incoming vertex becomes the newest. This corresponds to a\ntriangle star. The replacement type restart marks the oldest and middle\nvertices as invalid, and the incoming vertex becomes the newest.\nGeneralized triangle strips must always start with this code. A triangle\nwill be output only when a replacement operation results in three\nvalid vertices. 
Restart corresponds to a \"move\" operation in\npolylines, and allows multiple unconnected variable-length triangle\nstrips to be described by a single data structure passed in by the user,\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n21\n1 Restart\n2 RO\n3 RO\n4 RO\n5 RO\n6 RO\n7 Restart\n8 RO\n9 RO\n10 RM\n11 RM\n12 RM\n13 RM\n14 RM\n15 Restart\n16 RO\n17 RO\n18 Restart\n19 RO\n20 RO\n21 RO\n22 Restart\n23 RO\n24 RO\n25 RO\n26 RO\n27 RO\n28 RO\n29 RM\n30 RM\n31 RM\n32 RM\n33 RO\nTriangle Strip\nTriangle Star\nIndependent\nTriangle\nIndependent\nQuad\nMixed Strip\nFigure 3: A Generalized Triangle Strip\nVertex Codes\nRO = Replace Oldest\nRM = Replace Middle\n103\nreducing the overhead. The generalized triangle strip's ability to effectively\nchange from \"strip\" to \"star\" mode in the middle of a strip\nallows more complex geometry to be represented compactly, and requires\nless input data bandwidth. The restart capability allows several\npieces of disconnected geometry to be passed in one DMA operation\n. Figure 3 shows a single generalized triangle strip, and the\nassociated replacement codes. LeoCommand also supports header-less\nstrips of triangle vertices either as pure strips, pure stars, or pure\nindependent triangles.\nLeoCommand hardware automatically converts generalized triangle\nstrips into isolated triangles. Triangles are normalized such that\nthe front face is always defined by a clockwise vertex order after\ntransformation. To support this, a header bit in each restart defines\nthe initial face order of each sub-strip, and the vertex order is reversed\nafter every replace\n_oldest. LeoCommand passes each com-pleted\ntriangle to the next available LeoFloat chip, as indicated by\nthe input FIFO status that each LeoFloat sends back to LeoCommand\n. The order in which triangles have been sent to each\nLeoFloat is scoreboarded by LeoCommand, so that processed triangles\nare let out of the LeoFloat array in the same order as they entered\n. Non-sequential rendering order is also supported, but the\nautomatic rendering task distribution hardware works so well that\nthe performance difference is less than 3%. A similar, but less complex\nvertex repackaging is supported for polylines and multi-polylines\nvia a move/draw bit in the vertex packet header.\nTo save IC pins and PC board complexity, the internal Leo data busses\nconnecting LeoCommand, LeoFloat, and LeoDraw are 16 bits in\nsize. When colors, normals, and texture map coefficients are being\ntransmitted on the CF-bus between LeoCommand and the LeoFloats\n, these components are (optionally) compressed from 32-bit\nIEEE floating-point into 16-bit fixed point fractions by LeoCommand\n, and then automatically reconverted back to 32-bit IEEE\nfloating-point values by LeoFloat. This quantization does not effect\nquality. Color components will eventually end up as 8-bit values in\nthe frame buffer. For normals, 16-bit (signed) accuracy represents a\nresolution of approximately plus or minus an inch at one mile. This\noptimization reduces the required data transfer bandwidth by 25%.\nFLOATING-POINT PROCESSING LEO FLOAT\nAfter canonical format conversion, the next stages of processing triangles\nin a display pipeline are: transformation, clip test, face determination\n, lighting, clipping (if required), screen space conversion,\nand set-up. 
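Before moving on to the floating-point stages, the vertex repackaging performed by LeoCommand can be pictured with a small software model that decodes a generalized triangle strip into isolated triangles using the restart / replace_oldest / replace_middle rules and the winding-flip convention described above. The tuple encoding of the input stream and the exact phase at which the winding bit is toggled are assumptions made for illustration; this is not the hardware's specification.

RESTART, REPLACE_OLDEST, REPLACE_MIDDLE = range(3)

def decode_generalized_strip(stream):
    # stream: iterable of (code, clockwise_bit, vertex) tuples.  The
    # clockwise_bit is only meaningful on RESTART, where it gives the
    # initial face order of the sub-strip (assumed encoding).
    oldest = middle = newest = None
    clockwise = True
    triangles = []
    for code, cw_bit, vertex in stream:
        if code == RESTART:
            oldest = middle = None          # invalidate the previous vertices
            newest = vertex
            clockwise = bool(cw_bit)        # header bit sets the initial face order
        elif code == REPLACE_OLDEST:
            oldest, middle, newest = middle, newest, vertex
            clockwise = not clockwise       # vertex order reverses on each replace_oldest
        else:                               # REPLACE_MIDDLE: oldest stays unchanged
            middle, newest = newest, vertex
        if oldest is not None and middle is not None:
            # three valid vertices: emit one isolated triangle
            triangles.append((oldest, middle, newest, clockwise))
    return triangles

Feeding this decoder the 33-vertex example of Figure 3 would emit one isolated triangle per replacement that completes three valid vertices, which is exactly the stream handed off to the LeoFloat array.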
These operations are complex enough to require the use\nof a general purpose processor.\nUse of commercially available DSP (Digital Signal Processing)\nchips for this work has two major drawbacks. First, most such processors\nrequire a considerable number of surrounding glue chips,\nespecially when they are deployed as multi-processors. These glue\nchips can easily quadruple the board area dedicated to the DSP\nchip, as well as adversely affecting power, heat, cost, and reliability.\nSecond, few of these chips have been optimized for 3D graphics.\nA better solution might be to augment the DSP with a special ASIC\nthat would replace all of these glue chips. Given the expense of developing\nan ASIC, we decided to merge that ASIC with a custom\nDSP core optimized for graphics.\nThe resulting chip was LeoFloat. LeoFloat combines a 32-bit mi-crocodable\nfloating-point core with concurrent input and output\npacket communication subsystems (see Figure 4\n.\n), similar to the approach\nof [3]. The only support chips required are four SRAM chips\nfor external microcode store. A number of specialized graphics instructions\nand features make LeoFloat different from existing DSP\nprocessors. Each individual feature only makes a modest incremen-tal\ncontribution to performance, and indeed many have appeared in\nother designs. What is novel about LeoFloat is the combination of\nfeatures, whose cumulative effect leads to impressive overall system\nperformance. The following sections describe some of the\nmore important special graphics instructions and features.\nDouble buffered asynchronous I/O register files.\nAll input and\noutput commands are packaged up by separate I/O packet hardware.\nVariable length packets of up to 32 32-bit words are automatically\nwritten into (or out of) on-chip double-buffered register files (the I\nand O registers). These are mapped directly into microcode register\nspace. Special instructions allow complete packets to be requested,\nrelinquished, or queued for transmission in one instruction cycle.\nEnough internal registers.\nMost commercial DSP chips support a\nvery small number of internal fast registers, certainly much smaller\nthan the data needed by the inner loops of most 3D pipeline algorithms\n. They attempt to make up for this with on-chip SRAM or\ndata caches, but typically SRAMs are not multi-ported and the\ncaches not user-schedulable. We cheated with LeoFloat. We first\nwrote the code for the largest important inner loop (triangles),\ncounted how many registers were needed (288), and built that many\ninto the chip.\nParallel internal function units\n. The floating-point core functions\n(32-bit IEEE format) include multiply, ALU, reciprocal, and integer\noperations, all of which can often be executed in parallel. It is\nparticularly important that the floating-point reciprocal operation\nnot tie up the multiply and add units, so that perspective or slope\ncalculations can proceed in parallel with the rest of geometric processing\n. Less frequently used reciprocal square root hardware is\nshared with the integer function unit.\nPut all non-critical algorithms on the host.\nWe avoided the necessity\nof building a high level language compiler (and support instructions\n) for LeoFloat by moving any code not worth hand coding in\nmicrocode to the host processor. The result is a small, clean kernel\nof graphics routines in microcode. 
(A fairly powerful macro-assembler\nwith a `C'-like syntax was built to support the hand coding.)\nSoftware pipeline scheduling.\nOne of the most complex parts of\nmodern CPUs to design and debug is their scoreboard section,\nwhich schedules the execution of instructions across multiple steps\nin time and function units, presenting the programmer with the\nFigure 4: LeoFloat arithmetic function units, registers and data paths.\nI0 -I31\nI0'-I31'\n*\n*\n+\n+\n+\nIALU\n1/X\nFALU\nFMULT\nInput from off-chip\nOff-chip output\nP0 -P31\nP32-P63\nP64-P91\nR0 -R31\nR32-R63\nO0 -O31\nO0'-O31'\n104\nillusion that individual instructions are executed in one shot. LeoFloat\navoided all this hardware by using more direct control fields,\nlike horizontal microprogrammable machines, and leaving it to the\nassembler (and occasionally the programmer) to skew one logical\ninstruction across several physical instructions.\nSpecial clip condition codes & clip branch.\nFor clip testing we\nemploy a modified Sutherland-Hodgman algorithm, which first\ncomputes a vector of clip condition bits. LeoFloat has a clip test instruction\nthat computes these bits two at a time, shifting them into\na special clip-bits register. After the bits have been computed, special\nbranch instructions decode these bits into the appropriate case:\nclip rejected, clip accepted, single edge clip (six cases), or needs\ngeneral clipping. There are separate branch instructions for triangles\nand vectors. (A similar approach was taken in [9].) The branch\ninstructions allow multiple other conditions to be checked at the\nsame time, including backfacing and model clipping.\nRegister Y sort instruction.\nThe first step of the algorithm we used\nfor setting up triangles for scan conversion sorts the three triangle\nvertices in ascending Y order. On a conventional processor this requires\neither moving a lot of data, always referring to vertex data\nthrough indirect pointers, or replicating the set-up code for all six\npossible permutations of triangle vertex order. LeoFloat has a special\ninstruction that takes the results of the last three comparisons and reorders\npart of the R register file to place vertices in sorted order.\nMiscellaneous.\nLeoFloat contains many performance features tra-ditionally\nfound on DSP chips, including an internal subroutine\nstack, block load/store SRAM, and integer functions. Also there is\na \"kitchen sink\" instruction that initiates multiple housekeeping\nfunctions in one instruction, such as \"transmit current output packet\n(if not clip pending), request new input packet, extract op-code and\ndispatch to next task.\"\nCode results: equivalent to 150 megaflop DSP.\nEach 25 MHz\nLeoFloat processes the benchmark isolated triangle (including clip-test\nand set-up) in 379 clocks. (With a few exceptions, microcode\ninstructions issue at a rate of one per clock tick.) The same graphics\nalgorithm was tightly coded on several RISC processors and DSP\nchips (SPARC, i860, C30, etc.), and typically took on the order of\n1100 clocks. Thus the 379 LeoFloat instruction at 25 MHz do the\nequivalent work of a traditional DSP chip running at 75 MHz (even\nthough there are only 54 megaflops of hardware). Of course these\nnumbers only hold for triangles and vectors, but that's most of what\nLeoFloat does. 
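The clip-test instructions described above can be pictured with a small software analogue: compute a vector of clip-condition bits per vertex against the canonical view volume, then branch on the combined bits to classify the triangle as trivially accepted, trivially rejected, or in need of (single-edge or general) clipping. The bit layout and the -w <= x <= w clip-space test are illustrative assumptions, not LeoFloat's actual microcode.

def clip_bits(x, y, z, w):
    # One outcode bit per half-space of the canonical clip volume -w <= x, y, z <= w.
    bits = 0
    if x < -w: bits |= 0x01
    if x > w:  bits |= 0x02
    if y < -w: bits |= 0x04
    if y > w:  bits |= 0x08
    if z < -w: bits |= 0x10
    if z > w:  bits |= 0x20
    return bits

def classify_triangle(v0, v1, v2):
    # Each vertex is a clip-space tuple (x, y, z, w).
    b0, b1, b2 = (clip_bits(*v) for v in (v0, v1, v2))
    if (b0 | b1 | b2) == 0:
        return "accept"            # every vertex inside every clip plane
    if (b0 & b1 & b2) != 0:
        return "reject"            # all vertices outside the same plane
    return "needs_clipping"        # single-edge or general clipping cases

print(classify_triangle((0, 0, 0, 1), (0.5, 0.2, 0.1, 1), (2.0, 0, 0, 1)))  # needs_clipping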
Four LeoFloats assure that floating-point processing\nis not the bottleneck for 100-pixel isolated, lighted triangles.\nSCREEN SPACE RENDERING: LEO DRAW\nVRAM limits\nCommercial VRAM chips represent a fundamental constraint on\nthe possible pixel rendering performance of Leo's class of graphics\naccelerator. The goal of the Leo architecture was to ensure to the\ngreatest extent possible that this was the only performance limit for\ntypical rendering operations.\nThe fundamental memory transaction for Z-buffered rendering\nalgorithms is a conditional read-modify-write cycle. Given an XY\naddress and a computed RGBZ value, the old Z value at the XY address\nis first read, and then if the computed Z is in front of the old\nZ, the computed RGBZ value is written into the memory. Such\ntransactions can be mapped to allowable VRAM control signals in\nmany different ways: reads and writes may be batched, Z may be\nread out through the video port, etc.\nVRAM chips constrain system rendering performance in two ways.\nFirst, they impose a minimum cycle time per RAM bank for the Z-buffered\nread-modify-write cycle. Figure 5 is a plot of this cycle\ntime (when in \"page\" mode) and its changes over a half-decade\nperiod. VRAMs also constrain the ways in which a frame buffer can\nbe partitioned into independently addressable banks. Throughout\nthe five year period in Figure 5, three generations of VRAM technology\nhave been organized as 256K by 4, 8, and 16-bit memories. For\ncontemporary display resolutions of 1280\n\n1024, the chips comprising\na minimum frame buffer can be organized into no more than\nfive separately-addressed interleave banks. Combining this information\n, a theoretical maximum rendering speed for a primitive can be\ncomputed. The second line in Figure 5 is the corresponding performance\nfor rendering 100-pixel Z-buffered triangles, including the\noverhead for entering page mode, content refresh, and video shift\nregister transfers (video refresh). Higher rendering rates are only\npossible if additional redundant memory chips are added, allowing\nfor higher interleaving factors, at the price of increased system cost.\nEven supporting five parallel interleaves has a cost: at least 305\nmemory interface pins (five banks of (24 RGB + 24 Z + 13 address/\ncontrol)) are required, more pins than it is currently possible to dedicate\nto a memory interface on one chip. Some systems have used\nexternal buffer chips, but on a minimum cost and board area system\n, this costs almost as much as additional custom chips. Thus, on\nthe Leo system we opted for five separate VRAM control chips\n(LeoDraws).\nTriangle scan conversion\nTraditional shaded triangle scan conversion has typically been via\na linear pipeline of edge-walking followed by scan interpolation\n[12]. There have been several approaches to achieving higher\nthroughput in rasterization. [2] employed a single edge-walker, but\nparallel scan interpolation. [4][10] employed massively parallel\nrasterizers. [6] and other recent machines use moderately parallel\nrasterizers, with additional logic to merge the pixel rasterization\nstreams back together.\nIn the Leo design we chose to broadcast the identical triangle specification\nto five parallel rendering chips, each tasked with rendering\nonly those pixels visible in the local interleave. Each chip performs\nits own complete edge-walk and span interpolation of the triangle,\nbiased by the chip's local interleave. 
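The interleave bias just mentioned can be modeled in a few lines: each of the five drawing chips owns every fifth pixel column, so when it interpolates a span it starts at the first column it owns and steps by five. This is a conceptual sketch only; the constants and names are illustrative, not the LeoDraw implementation.

NUM_INTERLEAVES = 5   # one LeoDraw chip per vertical pixel interleave

def owned_span_pixels(chip_id, x_left, x_right):
    # Pixels in [x_left, x_right] whose column belongs to this chip,
    # i.e. columns congruent to chip_id modulo the interleave factor.
    start = x_left + ((chip_id - x_left) % NUM_INTERLEAVES)
    return list(range(start, x_right + 1, NUM_INTERLEAVES))

# Example: the five chips jointly cover the span [13, 27] exactly once.
span = sorted(p for c in range(NUM_INTERLEAVES)
                for p in owned_span_pixels(c, 13, 27))
assert span == list(range(13, 28))

The assertion simply checks that the five chips jointly cover a span exactly once, which is the property the replicated edge-walk and span-interpolate design relies on.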
By paying careful attention to proper mathematical sampling theory for rasterized pixels, the five chips can act in concert to produce the correct combined rasterized image. Mathematically, each chip thinks it is rasterizing the triangle into an image memory with valid pixel centers only every five original pixels horizontally, with each chip starting off biased one more pixel to the right.

Figure 5: VRAM cycle time and theoretical maximum triangle rendering rate (for five-way interleaved frame buffers). Axes: years 1990-1994; VRAM minimum Z-buffer RGB read/modify/write cycle time on page (off page = 1.5x), spanning 140 ns to 200 ns; 100-pixel-triangle theoretical maximum render rate, spanning 200K to 260K; the 1 Meg, 2 Meg, and 4 Meg VRAM generations are marked.

To obtain the speed benefits of parallel chips, most high performance graphics systems have split the edge-walk and span-interpolate functions into separate chips. But an examination of the relative amounts of data flow between rendering pipeline stages shows that the overall peak data transfer bandwidth demand occurs between the edge-walk and span-interpolate sections, induced by long thin triangles, which commonly occur in tessellated geometry. To minimize pin counts and PC board bus complexity, Leo decided to replicate the edge-walking function into each of the five span-interpolation chips.
One potential drawback of this approach is that the edge-walking section of each LeoDraw chip will have to advance to the next scan line up to five times more often than a single rasterization chip would. Thus LeoDraw's edge-walking circuit was designed to operate in one single pixel cycle time (160 ns read-modify-write VRAM cycle), so it would never hold back scan conversion. Other usual pipelining techniques were used, such as loading in and buffering the next triangle to be drawn in parallel with rasterizing the current triangle. Window clipping, blending, and other pixel post processing are handled in later pipelined stages.
Line scan conversion
As with triangles, the mathematics of the line rasterization algorithms were set up to allow distributed rendering of aliased and antialiased lines and dots, with each LeoDraw chip handling the 1/5 of the frame buffer pixels that it owns. While the Leo system uses the X11 semantics of Bresenham lines for window system operations, these produce unacceptable motion artifacts in 3D wireframe rendering. Therefore, when rendering 3D lines, Leo employs a high-accuracy DDA algorithm, using 32 bits internally for sufficient subpixel precision.
At present there is no agreement in the industry on the definition of a high quality antialiased line. We chose to use the image quality of vector strokers of years ago as our quality standard, and we tested different algorithms with end users, many of whom were still using calligraphic displays. We found users desired algorithms that displayed no roping, angle sensitivities, short vector artifacts, or end-point artifacts. We submitted the resulting antialiased line quality test patterns as a GPC [11] test image. In achieving the desired image quality level, we determined several properties that a successful line antialiasing algorithm must have. First, the lines must have at least three pixels of width across the minor axis. Two-pixel wide antialiased lines exhibit serious roping artifacts.
Four-pixel wide lines offer no visible\nimprovement except for lines near 45 degrees. Second, proper endpoint\nramps spread over at least two pixels are necessary both for\nseamless line segment joins as well as for isolated line-ends. Third,\nproper care must be taken when sampling lines of subpixel length to\nmaintain proper final intensity. Fourth, intensity or filter adjustments\nbased on the slope are necessary to avoid artifacts when rotating\nwireframe images. To implement all this, we found that we needed at\nleast four bits of subpixel positional accuracy after cumulative interpolation\nerror is factored in. That is why we used 32 bits for XY coordinate\naccuracy: 12 for pixel location, 4 for subpixel location, and\n16 for DDA interpolation error. (The actual error limit is imposed by\nthe original, user-supplied 32-bit IEEE floating-point data.)\nBecause of the horizontal interleaving and preferred scan direction,\nthe X-major and Y-major aliased and antialiased line rasterization\nalgorithms are not symmetric, so separate optimized algorithms\nwere employed for each.\nAntialiased dots\nEmpirical testing showed that only three bits of subpixel precision\nare necessary for accurate rendering of antialiased dots. For ASIC\nimplementation, this was most easily accomplished using a brute-force\ntable lookup of one of 64 precomputed 3\n\n3 pixel dot images.\nThese images are stored in on-chip ROM, and were generated using\na circular symmetric Gaussian filter.\nTriangle, line, and dot hardware\nImplementation of the triangle and antialiased vector rasterization\nalgorithms require substantial hardware resources. Triangles need\nsingle pixel cycle edge-walking hardware in parallel with RGBZ\nspan interpolation hardware. To obtain the desired quality of antialiased\nvectors, our algorithms require hardware to apply multiple\nwaveform shaping functions to every generated pixel. As a result,\nthe total VLSI area needed for antialiased vectors is nearly as large\nas for triangles. To keep the chip die size reasonable, we reformu-lated\nboth the triangle and antialiased vector algorithms to combine\nand reuse the same function units. The only difference is how the\nseparate sequencers set up the rasterization pipeline.\nPer-pixel depth cue\nDepth cueing has long been a heavily-used staple of wireframe applications\n, but in most modern rendering systems it is an extra time\nexpense feature, performed on endpoints back in the floating-point\nsection. We felt that we were architecting Leo not for benchmarks,\nbut for users, and many wireframe users want to have depth cueing\non all the time. Therefore, we built a parallel hardware depth cue\nfunction unit into each LeoDraw. Each triangle, vector, or dot rendered\nby Leo can be optionally depth cued at absolutely no cost in\nperformance. Another benefit of per-pixel depth cueing is full compliance\nwith the PHIGS PLUS depth cueing specification. For Leo,\nper-pixel depth cueing hardware also simplifies the LeoFloat microcode\n, by freeing the LeoFloats from ever having to deal with it.\nPicking support\nInteractive graphics requires not only the rapid display of geometric\ndata, but also interaction with that data: the ability to pick a particular\npart or primitive within a part. 
Any pixels drawn within the\nbounds of a 3D pick aperture result in a pick hit, causing the current\npick IDs to be automatically DMAed back to host memory.\nWindow system support\nMany otherwise sophisticated 3D display systems become somewhat\nbefuddled when having to deal simultaneously with 3D rendering\napplications and a 2D window system. Modern window systems\non interactive workstations require frequent context switching\nof the rendering pipeline state. Some 3D architectures have tried to\nminimize the overhead associated with context switching by supporting\nmultiple 3D contexts in hardware. Leo goes one step further\n, maintaining two completely separate pipelines in hardware:\none for traditional 2D window operations; the other for full 3D rendering\n. Because the majority of context switch requests are for 2D\nwindow system operations, the need for more complex 3D pipeline\ncontext switching is significantly reduced. The 2D context is much\nlighter weight and correspondingly easier to context switch. The\ntwo separate graphics pipelines operate completely in parallel, allowing\nsimultaneous access by two independent CPUs on a multi-processor\nhost.\n2D functionality abstracts the frame buffer as a 1-bit, 8-bit, or 24-bit\npixel array. Operations include random pixel access, optimized character\ncell writes, block clear, block copy, and the usual menagerie of\n106\nboolean operations, write masks, etc. Vertical block moves are special\ncased, as they are typically used in vertical scrolling of text\nwindows, and can be processed faster than the general block move\nbecause the pixel data does not have to move across LeoDraw chip\ninterleaves. Rendering into non-rectangular shaped windows is\nsupported by special clip hardware, resulting in no loss in performance\n. A special block clear function allows designated windows\n(and their Z-buffers) to be initialized to any given constant in under\n200 microseconds. Without this last feature, 30 Hz or faster animation\nof non-trivial objects would have been impossible.\n7 VIDEO OUTPUT: LEO CROSS\nLeo's standard video output format is 1280\n\n1024 at 76 Hz refresh\nrate, but it also supports other resolutions, including 1152\n\n900,\ninterlaced 640\n\n480 RS-170 (NTSC), interlaced 768\n\n576 PAL\ntiming, and 960\n\n680 113 Hz field sequential stereo. LeoCross\ncontains several color look-up tables, supporting multiple pseudo\ncolor maps without color map flashing. The look-up table also supports\ntwo different true color abstractions: 24-bit linear color\n(needed by rendering applications), and REC-709 non-linear color\n(required by many imaging applications).\nVirtual reality support\nStereo output is becoming increasingly important for use in Virtual\nReality applications. Leo's design goals included support for the\nVirtual Holographic Workstation system configuration described in\n[5]. Leo's stereo resolution was chosen to support square pixels, so\nthat lines and antialiased lines are displayed properly in stereo, and\nstandard window system applications can co-exist with stereo. Stereo\ncan be enabled on a per-window basis (when in stereo mode windows\nare effectively quad-buffered). 
Hooks were included in LeoCross to support display technologies other than CRTs that may be needed for head-mounted virtual reality displays.
8 NURBS AND TEXTURE MAP SUPPORT
One of the advantages of using programmable elements within a graphics accelerator is that additional complex functionality, such as NURBS and texture mapping, can be accelerated. Texture mapping is supported through special LeoFloat microcode and features of LeoCommand. LeoFloat microcode also includes algorithms to accelerate dynamic tessellation of trimmed NURBS surfaces. The dynamic tessellation technique involves reducing trimmed NURBS surfaces into properly sized triangles according to a display/pixel space approximation criterion [1]; i.e., the fineness of tessellation is view dependent. In the past, dynamic tessellation tended to be mainly useful as a compression technique, to avoid storing all the flattened triangles from a NURBS surface in memory. Dynamic tessellation was not viewed as a performance enhancer, for while it might generate only a third as many triangles as a static tessellation, the triangles were generated at least an order of magnitude or more slower than brute force triangle rendering. In addition it had other problems, such as not handling general trimming. For many cases, Leo's dynamic tessellator can generate and render triangles only a small integer multiple slower than prestored triangle rendering, which for some views can result in faster overall object rendering.
9 RESULTS
Leo is physically a two-board sandwich, measuring 5.7 × 6.7 × 0.6 inches, that fits in a standard 2S SBus slot. Figure 6 is a photo of the two boards, separated, showing all the custom ASICs. Figure 7 is a photo of the complete Leo workstation, next to two of our units of scale and the board set.
Leo can render 210K 100-pixel isolated, lighted, Gouraud shaded, Z-buffered, depth cued triangles per second, with one infinite diffuse and one ambient light source enabled. At 100 pixels, Leo is still VRAM rendering speed limited; smaller triangles render faster. Isolated 10-pixel antialiased, constant color, Z-buffered, depth cued lines (which are actually 12 pixels long due to endpoint ramps, and three pixels wide) render at a 422K per second rate. Corresponding aliased lines render at 730K. Aliased and antialiased constant color, Z-buffered, depth cued dots are clocked at 1100K. 24-bit image rasters can be loaded onto the screen at a 10M pixel per second rate. Screen scrolls, block moves, and raster character draws also all have competitive performance. Figure 8 is a sample of shaded triangle rendering.
10 SIMULATION
A system as complex as Leo cannot be debugged after the fact. All the new rendering mathematics were extensively simulated before being committed to hardware design. As each chip was defined, high, medium, and low level simulators of its function were written and continuously used to verify functionality and performance. Complete images of simulated rendering were generated throughout the course of the project, from within weeks of its start. As a result, the window system and complex 3D rendering were up and running on a complete board set within a week of receiving the first set of chips.
11 CONCLUSIONS
By paying careful attention to the forces that drive both performance and cost, a physically compact complete 3D shaded graphics accelerator was created.
The focus was not on new rendering features\n, but on cost reduction and performance enhancement of the\nmost useful core of 3D graphics primitives. New parallel algorithms\nwere developed to allow accurate screen space rendering of\nprimitives. Judicious use of hardware to perform some key traditional\nsoftware functions (such as format conversion and primitive\nvertex reassembly) greatly simplified the microcode task. A specialized\nfloating-point core optimized for the primary task of processing\nlines and triangles also supports more general graphics processing\n, such as rasters and NURBS. The final system performance\nis limited by the only chips not custom designed for Leo: the standard\nRAM chips.\nACKNOWLEDGEMENTS\nThe authors would like to thank the entire Leo team for their efforts\nin producing the system, and Mike Lavelle for help with the paper.\n\nREFERENCES\n1.\nAbi-Ezzi, Salim, and L. Shirman.\nTessellation of Curved\nSurfaces under Highly Varying Transformations. Proc. Euro-graphics\n'91 (Vienna, Austria, September 1991), 385-397.\n2.\nAkeley, Kurt and T. Jermoluk.\nHigh-Performance Polygon\nRendering, Proceedings of SIGGRAPH '88 (Atlanta, GA, Aug\n1-5, 1988). In Computer Graphics 22, 4 (July 1988), 239-246.\n3.\nAnido, M., D. Allerton and E. Zaluska.\nMIGS - A Multiprocessor\nImage Generation System using RISC-like Micropro-cessors\n. Proceedings of CGI '89 (Leeds, UK, June 1989),\nSpringer Verlag 1990.\n4.\nDeering, Michael, S. Winner, B. Schediwy, C. Duffy and N.\nHunt.\nThe Triangle Processor and Normal Vector Shader: A\nVLSI system for High Performance Graphics. Proceedings of\nSIGGRAPH '88 (Atlanta, GA, Aug 1-5, 1988). In Computer\nGraphics 22, 4 (July 1988), 21-30.\n107\n5.\nDeering, Michael.\nHigh Resolution Virtual Reality. Proceedings\nof SIGGRAPH '92 (Chicago, IL, July 26-31, 1992). In\nComputer Graphics 26, 2 (July 1992), 195-202.\n6.\nDunnett, Graham, M. White, P. Lister and R. Grimsdale.\nThe Image Chip for High Performance 3D Rendering. IEEE\nComputer Graphics and Applications 12, 6 (November 1992),\n41-52.\n7.\nFoley, James, A. van Dam, S. Feiner and J Hughes.\nComputer\nGraphics: Principles and Practice, 2nd ed., Addison-Wesley\n, 1990.\n8.\nKelley, Michael, S. Winner, K. Gould.\nA Scalable Hardware\nRender Accelerator using a Modified Scanline Algorithm.\nProceedings of SIGGRAPH '92 (Chicago, IL, July 26-31,\n1992). In Computer Graphics 26, 2 (July 1992), 241-248.\n9.\nKirk, David, and D. Voorhies.\nThe Rendering Architecture\nof the DN10000VS. Proceedings of SIGGRAPH '90 (Dallas,\nTX, August 6-10, 1990). In Computer Graphics 24, 4 (August\n1990), 299-307.\n10. Molnar, Steven, J. Eyles, J. Poulton.\nPixelFlow: High-Speed\nRendering Using Image Composition. Proceedings of SIGGRAPH\n'92 (Chicago, IL, July 26-31, 1992). In Computer\nGraphics 26, 2 (July 1992), 231-240.\n11. Nelson, Scott.\nGPC Line Quality Benchmark Test. GPC Test\nSuite, NCGA GPC committee 1991.\n12. Torborg, John.\nA Parallel Processor Architecture for Graphics\nArithmetic Operations. Proceedings of SIGGRAPH '87\n(Anaheim, CA, July 27-31, 1987). In Computer Graphics 21,\n4 (July 1987), 197-204.\nFigure 8: Traffic Jam to Point Reyes. A scene containing 2,322,000 triangles, rendered by Leo Hardware. Sto-chastically\nsuper-sampled 8 times. 
Models courtesy of Viewpoint Animation Engineering.\nFigure 7: The complete SPARCstation ZX workstation, next to two\nof our units of scale and the Leo board set.\nFigure 6: The two boards, unfolded.\n108", "keywords": "input processing;3D graphics hardware;parallel algorithms;video output;general graphics processing;parallel graphics algorithms;small physical size;geometry data;3D shaded graphics;rendering;screen space rendering;antialiased lines;floating-point microprocessors;low cost;floating point processing;gouraud shading"} {"name": "128", "title": "Location based Indexing Scheme for DAYS", "abstract": "Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server. Push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes. None of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results.", "fulltext": "INTRODUCTION\nWireless data dissemination is an economical and efficient\nway to make desired data available to a large number of mobile or\nstatic users. The mode of data transfer is essentially asymmetric,\nthat is, the capacity of the transfer of data (downstream\ncommunication) from the server to the client (mobile user) is\nsignificantly larger than the client or mobile user to the server\n(upstream communication). The effectiveness of a data\ndissemination system is judged by its ability to provide user the\nrequired data at anywhere and at anytime. One of the best ways to\naccomplish this is through the dissemination of highly\npersonalized Location Based Services (LBS) which allows users\nto access personalized location dependent data. An example\nwould be someone using their mobile device to search for a\nvegetarian restaurant. The LBS application would interact with\nother location technology components or use the mobile user's\ninput to determine the user's location and download the\ninformation about the restaurants in proximity to the user by\ntuning into the wireless channel which is disseminating LDD.\nWe see a limited deployment of LBS by some service\nproviders. But there are every indications that with time some of\nthe complex technical problems such as uniform location\nframework, calculating and tracking locations in all types of\nplaces, positioning in various environments, innovative location\napplications, etc., will be resolved and LBS will become a\ncommon facility and will help to improve market productivity and\ncustomer comfort. 
In our project called DAYS, we use wireless\ndata broadcast mechanism to push LDD to users and mobile users\nmonitor and tune the channel to find and download the required\ndata. A simple broadcast, however, is likely to cause significant\nperformance degradation in the energy constrained mobile devices\nand a common solution to this problem is the use of efficient air\nindexing. The indexing approach stores control information which\ntells the user about the data location in the broadcast and how and\nwhen he could access it. A mobile user, thus, has some free time\nto go into the doze mode which conserves valuable power. It also\nallows the user to personalize his own mobile device by\nselectively tuning to the information of his choice.\nAccess efficiency and energy conservation are the two issues\nwhich are significant for data broadcast systems. Access efficiency\nrefers to the latency experienced when a request is initiated till the\nresponse is received. Energy conservation [7, 10] refers to the\nefficient use of the limited energy of the mobile device in\naccessing broadcast data. Two parameters that affect these are the\ntuning time and the access latency. Tuning time refers to the time\nduring which the mobile unit (MU) remains in active state to tune\nthe channel and download its required data. It can also be defined\nas the number of buckets tuned by the mobile device in active\nstate to get its required data. Access latency may be defined as the\ntime elapsed since a request has been issued till the response has\nbeen received.\n\n1\nThis research was supported by a grant from NSF IIS-0209170.\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise,\nor republish, to post on servers or to redistribute to lists, requires prior\nspecific permission and/or a fee.\nMobiDE'05, June 12, 2005, Baltimore, Maryland, USA.\nCopyright 2005 ACM 1-59593-088-4/05/0006...$5.00.\n17\nSeveral indexing schemes have been proposed in the past and\nthe prominent among them are the tree based and the exponential\nindexing schemes [17]. The main disadvantages of the tree based\nschemes are that they are based on centralized tree structures. To\nstart a search, the MU has to wait until it reaches the root of the\nnext broadcast tree. This significantly affects the tuning time of\nthe mobile unit. The exponential schemes facilitate index\nreplication by sharing links in different search trees. For\nbroadcasts with large number of pages, the exponential scheme\nhas been shown to perform similarly as the tree based schemes in\nterms of access latency. Also, the average length of broadcast\nincreases due to the index replication and this may cause\nsignificant increase in the access latency. None of the above\nindexing schemes is equally effective in broadcasting location\ndependent data. In addition to providing low latency, they lack\nproperties which are used to address LDD issues. We propose an\nindexing scheme in DAYS which takes care of some these\nproblems. We show with simulation results that our scheme\noutperforms some of the earlier indexing schemes for\nbroadcasting LDD in terms of tuning time.\nThe rest of the paper is presented as follows. 
In section 2, we\ndiscuss previous work related to indexing of broadcast data.\nSection 3 describes our DAYS architecture. Location dependent\ndata, its generation and subsequent broadcast is presented in\nsection 4. Section 5 discusses our indexing scheme in detail.\nSimulation of our scheme and its performance evaluation is\npresented in section 6. Section 7 concludes the paper and\nmentions future related work.\n\nPREVIOUS WORK\nSeveral disk-based indexing techniques have been used for air\nindexing. Imielinski et al. [5, 6] applied the B+ index tree, where\nthe leaf nodes store the arrival times of the data items. The\ndistributed indexing method was proposed to efficiently replicate\nand distribute the index tree in a broadcast. Specifically, the index\ntree is divided into a replicated part and a non replicated part.\nEach broadcast consists of the replicated part and the non-replicated\npart that indexes the data items immediately following\nit. As such, each node in the non-replicated part appears only once\nin a broadcast and, hence, reduces the replication cost and access\nlatency while achieving a good tuning time. Chen et al. [2] and\nShivakumar et al. [8] considered unbalanced tree structures to\noptimize energy consumption for non-uniform data access. These\nstructures minimize the average index search cost by reducing the\nnumber of index searches for hot data at the expense of spending\nmore on cold data. Tan and Yu discussed data and index\norganization under skewed broadcast Hashing and signature\nmethods have also been suggested for wireless broadcast that\nsupports equality queries [9]. A flexible indexing method was\nproposed in [5]. The flexible index first sorts the data items in\nascending (or descending) order of the search key values and then\ndivides them into p segments. The first bucket in each data\nsegment contains a control index, which is a binary index\nmapping a given key value to the segment containing that key,\nand a local index, which is an m-entry index mapping a given key\nvalue to the buckets within the current segment. By tuning the\nparameters of p and m, mobile clients can achieve either a good\ntuning time or good access latency. Another indexing technique\nproposed is the exponential indexing scheme [17]. In this scheme,\na parameterized index, called the exponential index is used to\noptimize the access latency or the tuning time. It facilitates index\nreplication by linking different search trees. All of the above\nmentioned schemes have been applied to data which are non\nrelated to each other. These non related data may be clustered or\nnon clustered. However, none of them has specifically addressed\nthe requirements of LDD. Location dependent data are data which\nare associated with a location. Presently there are several\napplications that deal with LDD [13, 16]. Almost all of them\ndepict LDD with the help of hierarchical structures [3, 4]. This is\nbased on the containment property of location dependent data.\nThe Containment property helps determining relative position of\nan object by defining or identifying locations that contains those\nobjects. The subordinate locations are hierarchically related to\neach other. Thus, Containment property limits the range of\navailability or operation of a service. 
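To illustrate the containment property just described, here is a minimal sketch in Python; the Location class and the Kansas City example locations are illustrative assumptions for this discussion, not part of DAYS.

```python
# Minimal sketch of a location hierarchy with the Containment property.
# The class and the example locations are illustrative, not from the paper.

class Location:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def contains(self, other):
        """True if `other` lies inside this location (or is this location)."""
        node = other
        while node is not None:
            if node is self:
                return True
            node = node.parent
        return False

# Example hierarchy: Country -> State -> City -> Zip code
usa      = Location("USA")
missouri = Location("Missouri", usa)
kc       = Location("Kansas City", missouri)
plaza    = Location("Plaza (zip 64112)", kc)

# Data attached to Kansas City is available anywhere KC contains, but not
# outside it -- this is how Containment bounds the range of a service.
print(kc.contains(plaza))      # True  -> KC-level data applies to the Plaza
print(missouri.contains(kc))   # True
print(plaza.contains(kc))      # False -> Plaza-level data does not cover all of KC
```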
We use this containment property in our indexing scheme to index LDD.
DAYS ARCHITECTURE
DAYS has been conceptualized to disseminate topical and non-topical data to users in a local broadcast space and to accept queries from individual users globally. Topical data, for example, weather information, traffic information, stock information, etc., constantly changes over time. Non-topical data such as hotel, restaurant, real estate prices, etc., do not change so often. Thus, we envision the presence of two types of data distribution: in the first case, the server pushes data to local users through wireless channels; the other case deals with the server sending results of user queries through downlink wireless channels. Technically, we see the presence of two types of queues in the pull based data access. One is a heavily loaded queue containing globally uploaded queries. The other is a comparatively lightly loaded queue consisting of locally uploaded queries. The DAYS architecture [12], as shown in Figure 1, consists of a Data Server, a Broadcast Scheduler, a DAYS Coordinator, a network of LEO satellites for global data delivery, and a Local Broadcast Space. Data is pushed into the local broadcast space so that users may tune into the wireless channels to access the data. The local broadcast space consists of a broadcast tower, mobile units and a network of data staging machines called the surrogates. Data staging in surrogates has been earlier investigated as a successful technique [12, 15] to cache users' related data. We believe that data staging can be used to drastically reduce the latency time for both the local broadcast data and global responses. Query requests in the surrogates may subsequently be used to generate the popularity patterns which ultimately decide the broadcast schedule [12].
Figure 1. DAYS Architecture (data server, broadcast scheduler and DAYS coordinator feeding a local broadcast space of broadcast tower, surrogates and mobile units, with local and global downlink channels and pull request queues).
Figure 2. Location Structure of Starbucks, Plaza (Kansas City contains the Plaza, which contains the Starbucks branch).
LOCATION DEPENDENT DATA (LDD)
We argue that incorporating location information in wireless data broadcast can significantly decrease the access latency. This property becomes highly useful for mobile units, which have limited storage and processing capability. There are a variety of applications to obtain information about traffic, restaurant and hotel booking, fast food, gas stations, post offices, grocery stores, etc. If these applications are coupled with location information, then the search will be fast and highly cost effective. An important property of locations is Containment, which helps to determine the relative location of an object with respect to its parent that contains the object. Thus, Containment limits the range of availability of a data item. We use this property in our indexing scheme. The database contains the broadcast contents, which are converted into LDD [14] by associating them with respective locations so that they can be broadcast in a clustered manner. The clustering of LDD helps the user to locate information efficiently and supports the containment property.
We present an example to\njustify our proposition.\nExample: Suppose a user issues query \"Starbucks Coffee in\nPlaza please.\" to access information about the Plaza branch of\nStarbucks Coffee in Kansas City. In the case of location\nindependent set up the system will list all Starbucks coffee shops\nin Kansas City area. It is obvious that such responses will\nincrease access latency and are not desirable. These can be\nmanaged efficiently if the server has location dependent data, i.e.,\na mapping between a Starbucks coffee shop data and its physical\nlocation. Also, for a query including range of locations of\nStarbucks, a single query requesting locations for the entire\nregion of Kansas City, as shown in Figure 2, will suffice. This\nwill save enormous amount of bandwidth by decreasing the\nnumber of messages and at the same time will be helpful in\npreventing the scalability bottleneck in highly populated area.\n4.1 Mapping Function for LDD\nThe example justifies the need for a mapping function to process\nlocation dependent queries. This will be especially important for\npull based queries across the globe for which the reply could be\ncomposed for different parts of the world. The mapping function\nis necessary to construct the broadcast schedule.\nWe define Global Property Set (GPS) [11], Information Content\n(IC) set, and Location Hierarchy (LH) set where IC\n\nGPS and\nLH\n\nGPS to develop a mapping function. LH = {l\n1\n, l\n2\n, l\n3\n...,l\nk\n}\nwhere l\ni\nrepresent locations in the location tree and IC = {ic\n1\n, ic\n2\n,\nic\n3\n,...,ic\nn\n} where ic\ni\nrepresent information type. For example, if\nwe have traffic, weather, and stock information are in broadcast\nthen IC = {ic\ntraffic\n, ic\nweather\n, and ic\nstock\n}. The mapping scheme must\nbe able to identify and select an IC member and a LH node for (a)\ncorrect association, (b) granularity match, (c) and termination\ncondition. For example, weather\n\nIC could be associated with a\ncountry or a state or a city or a town of LH. The granularity match\nbetween the weather and a LH node is as per user requirement.\nThus, with a coarse granularity weather information is associated\nwith a country to get country's weather and with town in a finer\ngranularity. If a town is the finest granularity, then it defines the\nterminal condition for association between IC and LH for weather.\nThis means that a user cannot get weather information about\nsubdivision of a town. In reality weather of a subdivision does\nnot make any sense.\nWe develop a simple heuristic mapping approach scheme based\non user requirement. Let IC = {m\n1\n, m\n2\n,m\n3 .,...,\nm\nk\n}, where m\ni\nrepresent its element and let LH = {n\n1\n, n\n2\n, n\n3, ...,\nn\nl\n}, where n\ni\n\nrepresents LH's member. We define GPS for IC (GPSIC)\n\nGPS\nand for LH (GPSLH)\n\nGPS as GPSIC = {P\n1\n, P\n2\n,..., P\nn\n}, where\nP\n1\n, P\n2\n, P\n3\n,..., P\nn\nare properties of its members and GPSLH = {Q\n1\n,\nQ\n2\n,..., Q\nm\n} where Q\n1\n, Q\n2\n,..., Q\nm\nare properties of its members.\nThe properties of a particular member of IC are a subset of\nGPSIC. It is generally true that (property set (m\ni\n\nIC)\n\nproperty\nset (m\nj\n\nIC))\n\n\n\n, however, there may be cases where the\nintersection is not null. For example, stock\n\nIC and movie\n\nIC\nrating do not have any property in common. We assume that any\ntwo or more members of IC have at least one common\ngeographical property (i.e. 
location) because DAYS broadcasts\ninformation about those categories, which are closely tied with a\nlocation. For example, stock of a company is related to a country,\nweather is related to a city or state, etc.\nWe define the property subset of m\ni\n\nIC as PSm\ni\n\nm\ni\n\nIC and\nPSm\ni\n= {P\n1\n, P\n2\n, ..., P\nr\n} where r n.\n\nP\nr\n{P\nr\n\nPSm\ni\n\n\nP\nr\n\n\nGPS\nIC\n} which implies that\n\ni, PSm\ni\n\n\nGPS\nIC\n. The geographical\nproperties of this set are indicative of whether m\ni\n\nIC can be\nmapped to only a single granularity level (i.e. a single location) in\nLH or a multiple granularity levels (i.e. more than one nodes in\n19\nthe hierarchy) in LH. How many and which granularity levels\nshould a m\ni\nmap to, depends upon the level at which the service\nprovider wants to provide information about the m\ni\nin question.\nSimilarly we define a property subset of LH members as PSn\nj\n\nn\nj\n\nLH which can be written as PSn\nj\n={Q\n1\n, Q\n2\n, Q\n3\n, ..., Q\ns\n} where s\nm. In addition,\n\nQ\ns\n{Q\ns\n\nPSn\nj\n\n\nQ\ns\n\nGPS\nLH\n} which implies that\n\nj, PSn\nj\n\nGPSLH.\nThe process of mapping from IC to LH is then identifying for\nsome m\nx\n\nIC one or more n\ny\n\nLH such that PSm\nx\n\nPSn\nv\n\n\n\n.\nThis means that when m\nx\nmaps to n\ny\nand all children of n\ny\nif m\nx\n\ncan map to multiple granularity levels or m\nx\nmaps only to n\ny\nif m\nx\n\ncan map to a single granularity level.\nWe assume that new members can join and old member can leave\nIC or LH any time. The deletion of members from the IC space is\nsimple but addition of members to the IC space is more restrictive.\nIf we want to add a new member to the IC space, then we first\ndefine a property set for the new member: PSm\nnew_m\n={P\n1\n, P\n2\n, P\n3\n,\n..., P\nt\n} and add it to the IC only if the condition:\n\nP\nw\n{P\nw\n\n\nPSp\nnew_m\n\n\nP\nw\n\nGPS\nIC\n} is satisfied. This scheme has an\nadditional benefit of allowing the information service providers to\nhave a control over what kind of information they wish to provide\nto the users. We present the following example to illustrate the\nmapping concept.\nIC = {Traffic, Stock, Restaurant, Weather, Important history\ndates, Road conditions}\nLH = {Country, State, City, Zip-code, Major-roads}\nGPS\nIC\n= {Surface-mobility, Roads, High, Low, Italian-food,\nStateName, Temp, CityName, Seat-availability, Zip, Traffic-jams,\nStock-price, CountryName, MajorRoadName, Wars, Discoveries,\nWorld}\nGPS\nLH\n= {Country, CountrySize, StateName, CityName, Zip,\nMajorRoadName}\nPs(IC\nStock\n) = {Stock-price, CountryName, High, Low}\nPs(IC\nTraffic\n) = {Surface-mobility, Roads, High, Low, Traffic-jams,\nCityName}\nPs(IC\nImportant dates in history\n) = {World, Wars, Discoveries}\nPs(IC\nRoad conditions\n) = {Precipitation, StateName, CityName}\nPs(IC\nRestaurant\n) = {Italian-food, Zip code}\nPs(IC\nWeather\n) = {StateName, CityName, Precipitation,\nTemperature}\nPS(LH\nCountry\n) = {CountryName, CountrySize}\nPS(LH\nState\n= {StateName, State size},\nPS(LH\nCity\n) ={CityName, City size}\nPS(LH\nZipcode\n) = {ZipCodeNum }\nPS(LH\nMajor roads\n) = {MajorRoadName}\nNow, only PS(IC\nStock\n)\n\n\nPS\nCountry\n\n\n. In addition, PS(IC\nStock\n)\nindicated that Stock can map to only a single location Country.\nWhen we consider the member Traffic of IC space, only\nPS(IC\nTraffic\n)\n\n\nPS\ncity\n\n\n\n. As PS(IC\nTraffic\n) indicates that Traffic can\nmap to only a single location, it maps only to City and none of its\nchildren. 
Now unlike Stock, the mapping of Traffic to Major roads, which is a child of City, is meaningful. However, service providers have the right to control the granularity levels at which they want to provide information about a member of the IC space.
PS(IC_Road conditions) ∩ PS_State ≠ ∅ and PS(IC_Road conditions) ∩ PS_City ≠ ∅, so Road conditions maps to State as well as City. As PS(IC_Road conditions) indicates that Road conditions can map to multiple granularity levels, Road conditions will also map to Zip Code and Major roads, which are the children of State and City. Similarly, Restaurant maps only to Zip code, and Weather maps to State, City and their children, Major Roads and Zip Code.

LOCATION BASED INDEXING SCHEME
This section discusses our location based indexing scheme (LBIS). The scheme is designed to conform to the LDD broadcast in our project DAYS. As discussed earlier, we use the containment property of LDD in the indexing scheme. This significantly limits the search for the required data to a particular portion of the broadcast. Thus, we argue that the scheme provides bounded tuning time.
We describe the architecture of our indexing scheme. Our scheme contains separate data buckets and index buckets. The index buckets are of two types. The first type is called the Major index. The Major index provides information about the types of data broadcast. For example, if we intend to broadcast information like Entertainment, Weather, Traffic, etc., then the major index points to these major types of information and/or their main subtypes, the number of main subtypes varying from one type of information to another. This strictly limits the number of accesses to a Major index. The Major index never points to the original data. It points to sub-indexes called Minor indexes. The minor indexes are the indexes which actually point to the original data. We call these minor index pointers Location Pointers, as they point to the data associated with a location. Thus, our search for a data item includes accessing one major index and some minor indexes, the number of minor indexes varying with the type of information.
Thus, our indexing scheme takes into account the hierarchical nature of the LDD and the Containment property, and requires our broadcast schedule to be clustered based on data type and location. The structure of the location hierarchy requires the use of different types of index at different levels. The structure and positions of the indexes strictly depend on the location hierarchy as described in our mapping scheme earlier. We illustrate the implementation of our scheme with an example. The rules for framing the index are mentioned subsequently.
Figure 3. Location Mapped Information for Broadcast (Entertainment split into Restaurant and Movie over areas A1-A4 and roads R1-R8; Weather over the cities KC, SL, JC and SF).
Figure 4. Data coupled with Location based Index (a broadcast of 12 buckets in which Major and Minor index buckets are interleaved with the data buckets).
Example: Let us suppose that our broadcast content contains IC_entertainment and IC_weather, which is represented as shown in Fig.
3.\nAi represents Areas of City and Ri represents roads in a certain\narea. The leaves of Weather structure represent four cities. The\nindex structure is given in Fig. 4 which shows the position of\nmajor and minor index and data in the broadcast schedule.\nWe propose the following rules for the creation of the air indexed\nbroadcast schedule:\n\nThe major index and the minor index are created.\n\nThe major index contains the position and range of different\ntypes of data items (Weather and Entertainment, Figure 3)\nand their categories. The sub categories of Entertainment,\nMovie and Restaurant, are also in the index. Thus, the major\nindex contains Entertainment (E), Entertainment-Movie\n(EM), Entertainment-Restaurant (ER), and Weather (W). The\ntuple (S, L) represents the starting position (S) of the data\nitem and L represents the range of the item in terms of\nnumber of data buckets.\n\nThe minor index contains the variables A, R and a pointer\nNext. In our example (Figure 3), road R represents the first\nnode of area A. The minor index is used to point to actual\ndata buckets present at the lowest levels of the hierarchy. In\ncontrast, the major index points to a broader range of\nlocations and so it contains information about main and sub\ncategories of data.\n\nIndex information is not incorporated in the data buckets.\nIndex buckets are separate containing only the control\ninformation.\n\nThe number of major index buckets m=#(IC), IC = {ic\n1\n, ic\n2\n,\nic\n3\n,...,ic\nn\n} where ic\ni\nrepresent information type and #\nrepresents the cardinality of the Information Content set IC.\nIn this example, IC= {ic\nMovie\n, ic\nWeather\n, ic\nRestaurant\n} and so\n#(IC) =3. Hence, the number of major index buckets is 3.\n\nMechanism to resolve the query is present in the java based\ncoordinator in MU. For example, if a query Q is presented as\nQ (Entertainment, Movie, Road_1), then the resultant search\nwill be for the EM information in the major index. We say,\nQ EM.\nOur proposed index works as follows: Let us suppose that an MU\nissues a query which is represented by Java Coordinator present in\nthe MU as \"Restaurant information on Road 7\". This is resolved\nby the coordinator as Q ER. This means one has to search for\nER unit of index in the major index. Let us suppose that the MU\nlogs into the channel at R2. The first index it receives is a minor\nindex after R2. In this index, value of Next variable = 4, which\nmeans that the next major index is present after bucket 4. The MU\nmay go into doze mode. It becomes active after bucket 4 and\nreceives the major index. It searches for ER information which is\nthe first entry in this index. It is now certain that the MU will get\nthe position of the data bucket in the adjoining minor index. The\nsecond unit in the minor index depicts the position of the required\ndata R7. It tells that the data bucket is the first bucket in Area 4.\nThe MU goes into doze mode again and becomes active after\nbucket 6. It gets the required data in the next bucket. We present\nthe algorithm for searching the location based Index.\n\nAlgorithm 1 Location based Index Search in DAYS\n1. Scan broadcast for the next index bucket, found=false\n2. While (not found) do\n3. if bucket is Major Index then\n4. Find the Type & Tuple (S, L)\n5. if S is greater than 1, go into doze mode for S seconds\n6. end if\n7. Wake up at the S\nth\nbucket and observe the Minor Index\n8. end if\n9. if bucket is Minor Index then\n10. 
if Type_Requested not equal to Type_found and (A, R)_Requested not equal to (A, R)_found then
11. Go into doze mode till NEXT and repeat from step 3
12. end if
13. else find the entry in the Minor Index which points to the data
14. Compute the time of arrival T of the data bucket
15. Go into doze mode till T
16. Wake up at T and access the data, found = true
17. end else
18. end if
19. end While

PERFORMANCE EVALUATION
Conservation of energy is the main concern when we try to access data from a wireless broadcast. An efficient scheme should allow the mobile device to access its required data by staying active for a minimum amount of time, which would save a considerable amount of energy. Since items are distributed based on types and are mapped to suitable locations, we argue that our broadcast deals with clustered data types. The mobile unit has to access a larger major index and a relatively much smaller minor index to get information about the time of arrival of data. This is in contrast to the exponential scheme, where the indexes are of equal sizes. The example discussed and Algorithm 1 reveal that to access any data, we need to access the major index only once, followed by one or more accesses to the minor index. The number of minor index accesses depends on the number of internal locations. As the number of internal locations varies from item to item (for example, Weather is generally associated with a City, whereas Traffic is granulated up to the major and minor roads of a city), we argue that the structure of the location mapped information may be visualized as a forest, i.e., a collection of general trees, the number of trees depending on the types of information broadcast and the depth of a tree depending on the granularity of the location information associated with that information.
For our experiments, we assume the forest is a collection of balanced m-ary trees. We further assume the m-ary trees to be full by assuming the presence of dummy nodes in different levels of a tree. Thus, if the number of data items (leaves) is d and the fan-out of each tree is m, then n = (m*d - 1)/(m - 1), where n is the number of vertices in the tree, and i = (d - 1)/(m - 1), where i is the number of internal vertices. The tuning time for a data item involves 1 unit of time required to access the major index plus the time required to reach the data item present in a leaf of the tree. Thus, the tuning time with d data items is t = log_m d + 1, so the tuning time is bounded by O(log_m d).
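To make the two-level lookup of Algorithm 1 and the t = log_m d + 1 tuning-time count concrete, here is a rough, self-contained sketch. The bucket dictionaries and the toy schedule are simplified assumptions for illustration, not the actual DAYS broadcast format.

```python
# Rough sketch of the Algorithm 1 lookup: tune the major index, doze to the
# minor index it points at, then doze again to the data bucket, counting how
# many buckets are read while active (the tuning time).
broadcast = [
    {"kind": "major", "index": {"ER": (2, 3), "W": (5, 2)}},      # bucket 0
    {"kind": "data",  "payload": "filler"},                        # bucket 1
    {"kind": "minor", "offsets": {"R5": 1, "R7": 2}},              # bucket 2 (ER region)
    {"kind": "data",  "payload": "Restaurant info for road R5"},   # bucket 3
    {"kind": "data",  "payload": "Restaurant info for road R7"},   # bucket 4
    {"kind": "minor", "offsets": {"KC": 1}},                       # bucket 5 (Weather)
    {"kind": "data",  "payload": "Weather for Kansas City"},       # bucket 6
]

def lbis_lookup(schedule, wanted_type, wanted_location):
    tuned = 0
    pos = 0
    while schedule[pos]["kind"] != "major":       # scan until a major index arrives
        pos += 1
        tuned += 1
    major = schedule[pos]; tuned += 1
    start, _length = major["index"][wanted_type]  # the (S, L) tuple for the type
    pos = start                                   # doze until position S
    minor = schedule[pos]; tuned += 1
    pos += minor["offsets"][wanted_location]      # doze again until the data bucket
    tuned += 1
    return schedule[pos]["payload"], tuned

print(lbis_lookup(broadcast, "ER", "R7"))   # ('Restaurant info for road R7', 3)
```

Only three buckets are tuned here, matching the pattern in the analysis above: one major-index access plus a small number of minor-index and data accesses. The real scheme additionally lets a minor index carry a NEXT pointer, so the mobile unit can doze rather than scan forward to the next major index.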
We compare our scheme with the distributed indexing and exponential schemes. We assume a flat broadcast and a number of pages varying from 5000 to 25000. The various simulation parameters are shown in Table 1.

Table 1. Simulation Parameters
  P   Definition                                             Values
  N   Number of data items                                   5000 - 25000
  m   Number of internal location nodes                      3, 4, 5, 6
  B   Capacity of bucket without index (exponential index)   10, 64, 128, 256
  i   Index base for exponential index                       2, 4, 6, 8
  k   Index size for distributed tree                        8 bytes

Figures 5-8 show the relative tuning times of the three indexing algorithms, i.e., LBIS, the exponential scheme and the distributed tree scheme. Figure 5 shows the result for number of internal location nodes m = 3. We can see that LBIS significantly outperforms both of the other schemes. The tuning time in LBIS ranges from approximately 6.8 to 8. This large tuning time is due to the fact that after reaching the lowest minor index, the MU may have to access a few buckets sequentially to get the required data bucket. We can see that the tuning time tends to become stable as the length of the broadcast increases. In Figure 6 we consider m = 4. Here we can see that the exponential and the distributed tree schemes perform almost similarly, though the former seems to perform slightly better as the broadcast length increases. A very interesting pattern is visible in Figure 7. For smaller broadcast sizes, LBIS seems to have a larger tuning time than the other two schemes, but as the length of the broadcast increases, it is clearly visible that LBIS outperforms the other two schemes. The distributed tree indexing shows behavior similar to LBIS. The tuning time in LBIS remains low because the algorithm allows the MU to skip some intermediate Minor Indexes. This allows the MU to move into lower levels directly after coming into active mode, thus saving valuable energy. This action is not possible in the distributed tree indexing, and hence we can observe that its tuning time is higher than that of the LBIS scheme, although it performs better than the exponential scheme. Figure 8, in contrast, shows that the tuning time in LBIS, though less than that of the other two schemes, tends to increase sharply as the broadcast length grows beyond 15000 pages. This may be attributed both to the increase in time required to scan the intermediate Minor Indexes and to the length of the broadcast. But we can observe that the slope of the LBIS curve is significantly less than that of the other two curves.
The simulation results establish some facts about our location based indexing scheme. The scheme performs better than the other two schemes in terms of tuning time in most cases. As the length of the broadcast increases, after a certain point, though the tuning time increases as a result of the factors described before, the scheme still performs better than the other two schemes. Due to the prescribed limit on the number of pages in the paper, we are unable to show more results, but these omitted results show a similar trend to the results depicted in Figures 5-8.

CONCLUSION AND FUTURE WORK
In this paper we have presented a scheme for mapping wireless broadcast data to their locations. We have presented an example to show how the hierarchical structure of the location tree maps with the data to create LDD. We have presented a scheme called LBIS to index this LDD. We have used the containment property of LDD in the scheme, which limits the search to a narrow range of data in the broadcast, thus saving valuable energy in the device. The mapping of data with locations and the indexing scheme will be used in our DAYS project to create the push based architecture. LBIS has been compared with two other prominent indexing schemes, i.e., the distributed tree indexing scheme and the exponential indexing scheme. We showed in our simulations that the LBIS scheme has the lowest tuning time for broadcasts having a large number of pages, thus saving valuable battery power in the MU.

In future work we will try to incorporate a pull based architecture in our DAYS project. Data from the server is available for access by global users. This may be done by putting a request to the source server. The query in this case is a global query. It is transferred from the user's source server to the destination server through the use of LEO satellites.
We intend to use our LDD scheme and data staging architecture in the pull based architecture. We will show that the LDD scheme together with the data staging architecture significantly improves the latency for global as well as local queries.
Figure 5. Average tuning time vs. broadcast size (# buckets) for Dist tree, Expo and LBIS.
Figure 6. Average tuning time vs. broadcast size (# buckets) for Dist tree, Expo and LBIS.
Figure 7. Average tuning time vs. broadcast size (# buckets) for Dist tree, Expo and LBIS.
Figure 8. Average tuning time vs. broadcast size (# buckets) for Dist tree, Expo and LBIS.
REFERENCES
[1] Acharya, S., Alonso, R., Franklin, M. and Zdonik, S. Broadcast disks: Data management for asymmetric communications environments. In Proceedings of the ACM SIGMOD Conference on Management of Data, pages 199-210, San Jose, CA, May 1995.
[2] Chen, M.S., Wu, K.L. and Yu, P.S. Optimizing index allocation for sequential data broadcasting in wireless mobile computing. IEEE Transactions on Knowledge and Data Engineering (TKDE), 15(1):161-173, January/February 2003.
[3] Hu, Q.L., Lee, D.L. and Lee, W.C. Performance evaluation of a wireless hierarchical data dissemination system. In Proceedings of the 5th Annual ACM International Conference on Mobile Computing and Networking (MobiCom'99), pages 163-173, Seattle, WA, August 1999.
[4] Hu, Q.L., Lee, W.C. and Lee, D.L. Power conservative multi-attribute queries on data broadcast. In Proceedings of the 16th International Conference on Data Engineering (ICDE'00), pages 157-166, San Diego, CA, February 2000.
[5] Imielinski, T., Viswanathan, S. and Badrinath, B.R. Power efficient filtering of data on air. In Proceedings of the 4th International Conference on Extending Database Technology (EDBT'94), pages 245-258, Cambridge, UK, March 1994.
[6] Imielinski, T., Viswanathan, S. and Badrinath, B.R. Data on air: Organization and access. IEEE Transactions on Knowledge and Data Engineering (TKDE), 9(3):353-372, May/June 1997.
[7] Shih, E., Bahl, P. and Sinclair, M.J. Wake on wireless: An event driven energy saving strategy for battery operated devices. In Proceedings of the 8th Annual ACM International Conference on Mobile Computing and Networking (MobiCom'02), pages 160-171, Atlanta, GA, September 2002.
[8] Shivakumar, N. and Venkatasubramanian, S. Energy-efficient indexing for information dissemination in wireless systems. ACM/Baltzer Journal of Mobile Networks and Applications (MONET), 1(4):433-446, December 1996.
[9] Tan, K.L. and Yu, J.X. Energy efficient filtering of non-uniform broadcast. In Proceedings of the 16th International Conference on Distributed Computing Systems (ICDCS'96), pages 520-527, Hong Kong, May 1996.
[10] Viredaz, M.A., Brakmo, L.S. and Hamburgen, W.R. Energy management on handheld devices. ACM Queue, 1(7):44-52, October 2003.
[11] Garg, N., Kumar, V. and Dunham, M.H. Information Mapping and Indexing in DAYS. In 6th International Workshop on Mobility in Databases and Distributed Systems, in conjunction with the 14th International Conference on Database and Expert Systems Applications, September 1-5, Prague, Czech Republic, 2003.
[12] Acharya, D., Kumar, V. and Dunham, M.H. InfoSpace: Hybrid and Adaptive Public Data Dissemination System for Ubiquitous Computing. Accepted for publication in the special issue of Pervasive Computing.
Wiley Journal for\nWireless Communications and Mobile Computing, 2004.\n[13] Acharya D., Kumar, V., & Prabhu, N. Discovering and using\nWeb Services in M-Commerce, Proceedings for 5th VLDB\nWorkshop on Technologies for E-Services, Toronto,\nCanada,2004.\n[14] Acharya D., Kumar, V. Indexing Location Dependent Data in\nbroadcast environment. Accepted for publication, JDIM\nspecial issue on Distributed Data Management, 2005.\n[15] Flinn, J., Sinnamohideen, S., & Satyanarayan, M. Data\nStaging on Untrusted Surrogates, Intel Research, Pittsburg,\nUnpublished Report, 2003.\n[16] Seydim, A.Y., Dunham, M.H. & Kumar, V. Location\ndependent query processing, Proceedings of the 2nd ACM\ninternational workshop on Data engineering for wireless and\nmobile access, p.47-53, Santa Barbara, California, USA,\n2001.\n[17]\nXu, J., Lee, W.C., Tang., X. Exponential Index: A\nParameterized Distributed Indexing Scheme for Data on Air.\n\nIn Proceedings of the 2nd ACM/USENIX International\nConference on Mobile Systems, Applications, and Services\n(MobiSys'04), Boston, MA, June 2004.\n\n24\n", "keywords": "containment;indexing scheme;access efficiency;indexing;Wireless data broadcast;mapping function;location based services;wireless;energy conservation;location dependent data;broadcast;push based architecture;data dissemination;data staging"} {"name": "129", "title": "Lossless Online Bayesian Bagging", "abstract": "Bagging frequently improves the predictive performance of a model. An online version has recently been introduced, which attempts to gain the benefits of an online algorithm while approximating regular bagging. However, regular online bagging is an approximation to its batch counterpart and so is not lossless with respect to the bagging operation. By operating under the Bayesian paradigm, we introduce an online Bayesian version of bagging which is exactly equivalent to the batch Bayesian version, and thus when combined with a lossless learning algorithm gives a completely lossless online bagging algorithm. We also note that the Bayesian formulation resolves a theoretical problem with bagging, produces less variability in its estimates, and can improve predictive performance for smaller data sets.", "fulltext": "Introduction\nIn a typical prediction problem, there is a trade-off between bias and variance, in that after\na certain amount of fitting, any increase in the precision of the fit will cause an increase in\nthe prediction variance on future observations. Similarly, any reduction in the prediction\nvariance causes an increase in the expected bias for future predictions. Breiman (1996)\nintroduced bagging as a method of reducing the prediction variance without affecting the\nprediction bias. As a result, predictive performance can be significantly improved.\nBagging, short for \"Bootstrap AGGregatING\", is an ensemble learning method. Instead\nof making predictions from a single model fit on the observed data, bootstrap samples\nare taken of the data, the model is fit on each sample, and the predictions are averaged\nover all of the fitted models to get the bagged prediction. Breiman (1996) explains that\nbagging works well for unstable modeling procedures, i.e. those for which the conclusions\nare sensitive to small changes in the data. He also gives a theoretical explanation of how\nbagging works, demonstrating the reduction in mean-squared prediction error for unstable\nc 2004 Herbert K. H. Lee and Merlise A. Clyde.\nLee and Clyde\nprocedures. 
Breiman (1994) demonstrated that tree models, among others, are empirically\nunstable.\nOnline bagging (Oza and Russell, 2001) was developed to implement bagging sequentially\n, as the observations appear, rather than in batch once all of the observations have\narrived. It uses an asymptotic approximation to mimic the results of regular batch bagging,\nand as such it is not a lossless algorithm. Online algorithms have many uses in modern\ncomputing. By updating sequentially, the update for a new observation is relatively quick\ncompared to re-fitting the entire database, making real-time calculations more feasible.\nSuch algorithms are also advantageous for extremely large data sets where reading through\nthe data just once is time-consuming, so batch algorithms which would require multiple\npasses through the data would be infeasible.\nIn this paper, we consider a Bayesian version of bagging (Clyde and Lee, 2001) based\non the Bayesian bootstrap (Rubin, 1981). This overcomes a technical difficulty with the\nusual bootstrap in bagging. It also leads to a theoretical reduction in variance over the\nbootstrap for certain classes of estimators, and a significant reduction in observed variance\nand error rates for smaller data sets. We present an online version which, when combined\nwith a lossless online model-fitting algorithm, continues to be lossless with respect to the\nbagging operation, in contrast to ordinary online bagging. The Bayesian approach requires\nthe learning base algorithm to accept weighted samples.\nIn the next section we review the basics of the bagging algorithm, of online bagging,\nand of Bayesian bagging. Next we introduce our online Bayesian bagging algorithm. We\nthen demonstrate its efficacy with classification trees on a variety of examples.\nBagging\nIn ordinary (batch) bagging, bootstrap re-sampling is used to reduce the variability of an\nunstable estimator. A particular model or algorithm, such as a classification tree, is specified\nfor learning from a set of data and producing predictions. For a particular data set X\nm\n,\ndenote the vector of predictions (at the observed sites or at new locations) by G(X\nm\n).\nDenote the observed data by X = (x\n1\n, . . . , x\nn\n). A bootstrap sample of the data is a\nsample with replacement, so that X\nm\n= (x\nm\n1\n, . . . , x\nm\nn\n), where m\ni\n{1, . . . , n} with\nrepetitions allowed. X\nm\ncan also be thought of as a re-weighted version of X, where the\nweights,\n(m)\ni\nare drawn from the set {0,\n1\nn\n,\n2\nn\n, . . . , 1}, i.e., n\n(m)\ni\nis the number of times that\nx\ni\nappears in the mth bootstrap sample. We denote the weighted sample as (X,\n(m)\n). For\neach bootstrap sample, the model produces predictions G(X\nm\n) = G(X\nm\n)\n1\n, . . . , G(X\nm\n)\nP\nwhere P is the number of prediction sites. M total bootstrap samples are used. The bagged\npredictor for the jth element is then\n1\nM\nM\nm\n=1\nG(X\nm\n)\nj\n= 1\nM\nM\nm\n=1\nG(X,\n(m)\n)\nj\n,\nor in the case of classification, the jth element is predicted to belong to the most frequently\npredicted category by G(X\n1\n)\nj\n, . . . , G(X\nM\n)\nj\n.\nA version of pseudocode for implementing bagging is\n1. For m {1, . . . , M },\n144\nLossless Online Bayesian Bagging\n(a) Draw a bootstrap sample, X\nm\n, from X.\n(b) Find predicted values G(X\nm\n).\n2. 
The bagging predictor is\n1\nM\nM\nm\n=1\nG\n(X\nm\n).\nEquivalently, the bootstrap sample can be converted to a weighted sample (X,\n(m)\n) where\nthe weights\n(m)\ni\nare found by taking the number of times x\ni\nappears in the bootstrap sample\nand dividing by n. Thus the weights will be drawn from {0,\n1\nn\n,\n2\nn\n, . . . , 1} and will sum to\n1. The bagging predictor using the weighted formulation is\n1\nM\nM\nm\n=1\nG\n(X\nm\n,\n(m)\n) for\nregression, or the plurality vote for classification.\n2.1 Online Bagging\nOnline bagging (Oza and Russell, 2001) was recently introduced as a sequential approximation\nto batch bagging. In batch bagging, the entire data set is collected, and then bootstrap\nsamples are taken from the whole database. An online algorithm must process observations\nas they arrive, and thus each observation must be resampled a random number of times\nwhen it arrives. The algorithm proposed by Oza and Russell resamples each observation\naccording to a Poisson random variable with mean 1, i.e., P (K\nm\n= k) = exp(-1)/k!, where\nK\nm\nis the number of resamples in \"bootstrap sample\" m, K\nm\n{0, 1, . . .}. Thus as each\nobservation arrives, it is added K\nm\ntimes to X\nm\n, and then G(X\nm\n) is updated, and this is\ndone for m {1, . . . , M }.\nPseudocode for online bagging is\nFor i {1, . . . , n},\n1. For m {1, . . . , M },\n(a) Draw a weight K\nm\nfrom a Poisson(1) random variable and add K\nm\ncopies\nof x\ni\nto X\nm\n.\n(b) Find predicted values G(X\nm\n).\n2. The current bagging predictor is\n1\nM\nM\nm\n=1\nG\n(X\nm\n).\nIdeally, step 1(b) is accomplished with a lossless online update that incorporates the K\nm\nnew points without refitting the entire model. We note that n may not be known ahead of\ntime, but the bagging predictor is a valid approximation at each step.\nOnline bagging is not guaranteed to produce the same results as batch bagging. In\nparticular, it is easy to see that after n points have been observed, there is no guarantee\nthat X\nm\nwill contain exactly n points, as the Poisson weights are not constrained to add\nup to n like a regular bootstrap sample. While it has been shown (Oza and Russell, 2001)\nthat these samples converge asymptotically to the appropriate bootstrap samples, there\nmay be some discrepancy in practice. Thus while it can be combined with a lossless online\nlearning algorithm (such as for a classification tree), the bagging part of the online ensemble\nprocedure is not lossless.\n145\nLee and Clyde\n2.2 Bayesian Bagging\nOrdinary bagging is based on the ordinary bootstrap, which can be thought of as replacing\nthe original weights of\n1\nn\non each point with weights from the set {0,\n1\nn\n,\n2\nn\n, . . . , 1}, with the\ntotal of all weights summing to 1. A variation is to replace the ordinary bootstrap with\nthe Bayesian bootstrap (Rubin, 1981). The Bayesian approach treats the vector of weights\nas unknown parameters and derives a posterior distribution for , and hence G(X, ).\nThe non-informative prior\nn\ni\n=1\n\n-1\ni\n, when combined with the multinomial likelihood, leads\nto a Dirichlet\nn\n(1, . . . , 1) distribution for the posterior distribution of . The full posterior\ndistribution of G(X, ) can be estimated by Monte Carlo methods: generate\n(m)\nfrom\na Dirichlet\nn\n(1, . . . , 1) distribution and then calculate G(X,\n(m)\n) for each sample. 
The\naverage of G(X,\n(m)\n) over the M samples corresponds to the Monte Carlo estimate of\nthe posterior mean of G(X, ) and can be viewed as a Bayesian analog of bagging (Clyde\nand Lee, 2001).\nIn practice, we may only be interested in a point estimate, rather than the full posterior\ndistribution. In this case, the Bayesian bootstrap can be seen as a continuous version of\nthe regular bootstrap. Thus Bayesian bagging can be achieved by generating M Bayesian\nbootstrap samples, and taking the average or majority vote of the G(X,\n(m)\n). This is\nidentical to regular bagging except that the weights are continuous-valued on (0, 1), instead\nof being restricted to the discrete set {0,\n1\nn\n,\n2\nn\n, . . . , 1}. In both cases, the weights must sum\nto 1. In both cases, the expected value of a particular weight is\n1\nn\nfor all weights, and the\nexpected correlation between weights is the same (Rubin, 1981). Thus Bayesian bagging\nwill generally have the same expected point estimates as ordinary bagging. The variability\nof the estimate is slightly smaller under Bayesian bagging, as the variability of the weights\nis\nn\nn\n+1\ntimes that of ordinary bagging. As the sample size grows large, this factor becomes\narbitrarily close to one, but we do note that it is strictly less than one, so the Bayesian\napproach does give a further reduction in variance compared to the standard approach. In\npractice, for smaller data sets, we often find a significant reduction in variance, possibly\nbecause the use of continuous-valued weights leads to fewer extreme cases than discrete-valued\nweights.\nPseudocode for Bayesian bagging is\n1. For m {1, . . . , M },\n(a) Draw random weights\n(m)\nfrom a Dirichlet\nn\n(1, . . . , 1) to produce the Bayesian\nbootstrap sample (X,\n(m)\n).\n(b) Find predicted values G(X,\n(m)\n).\n2. The bagging predictor is\n1\nM\nM\nm\n=1\nG\n(X,\n(m)\n).\nUse of the Bayesian bootstrap does have a major theoretical advantage, in that for\nsome problems, bagging with the regular bootstrap is actually estimating an undefined\nquantity. To take a simple example, suppose one is bagging the fitted predictions for a\npoint y from a least-squares regression problem. Technically, the full bagging estimate is\n1\nM\n0\nm\n^\ny\nm\nwhere m ranges over all possible bootstrap samples, M\n0\nis the total number\nof possible bootstrap samples, and ^\ny\nm\nis the predicted value from the model fit using the\nmth bootstrap sample. The issue is that one of the possible bootstrap samples contains the\n146\nLossless Online Bayesian Bagging\nfirst data point replicated n times, and no other data points. For this bootstrap sample,\nthe regression model is undefined (since at least two different points are required), and so\n^\ny and thus the bagging estimator are undefined. In practice, only a small sample of the\npossible bootstrap samples is used, so the probability of drawing a bootstrap sample with an\nundefined prediction is very small. Yet it is disturbing that in some problems, the bagging\nestimator is technically not well-defined. In contrast, the use of the Bayesian bootstrap\ncompletely avoids this problem. Since the weights are continuous-valued, the probability\nthat any weight is exactly equal to zero is zero. 
Thus with probability one, all weights\nare strictly positive, and the Bayesian bagging estimator will be well-defined (assuming the\nordinary estimator on the original data is well-defined).\nWe note that the Bayesian approach will only work with models that have learning algorithms\nthat handle weighted samples. Most standard models either have readily available\nsuch algorithms, or their algorithms are easily modified to accept weights, so this restriction\nis not much of an issue in practice.\nOnline Bayesian Bagging\nRegular online bagging cannot be exactly equivalent to the batch version because the Poisson\ncounts cannot be guaranteed to sum to the number of actual observations. Gamma random\nvariables can be thought of as continuous analogs of Poisson counts, which motivates our\nderivation of Bayesian online bagging. The key is to recall a fact from basic probability -- a\nset of independent gamma random variables divided by its sum has a Dirichlet distribution,\ni.e.,\nIf w\ni\n(\ni\n, 1), then\nw\n1\nw\ni\n, w\n2\nw\ni\n, . . . , w\nk\nw\ni\nDirichlet\nn\n(\n1\n,\n2\n, . . . ,\nk\n) .\n(See for example, Hogg and Craig, 1995, pp. 187188.) This relationship is a common\nmethod for generating random draws from a Dirichlet distribution, and so is also used in\nthe implementation of batch Bayesian bagging in practice.\nThus in the online version of Bayesian bagging, as each observation arrives, it has a\nrealization of a Gamma(1) random variable associated with it for each bootstrap sample,\nand the model is updated after each new weighted observation. If the implementation of the\nmodel requires weights that sum to one, then within each (Bayesian) bootstrap sample, all\nweights can be re-normalized with the new sum of gammas before the model is updated. At\nany point in time, the current predictions are those aggregated across all bootstrap samples,\njust as with batch bagging. If the model is fit with an ordinary lossless online algorithm, as\nexists for classification trees (Utgoff et al., 1997), then the entire online Bayesian bagging\nprocedure is completely lossless relative to batch Bayesian bagging. Furthermore, since\nbatch Bayesian bagging gives the same mean results as ordinary batch bagging, online\nBayesian bagging also has the same expected results as ordinary batch bagging.\nPseudocode for online Bayesian bagging is\nFor i {1, . . . , n},\n1. For m {1, . . . , M },\n147\nLee and Clyde\n(a) Draw a weight\n(m)\ni\nfrom a Gamma(1, 1) random variable, associate weight\nwith x\ni\n, and add x\ni\nto X.\n(b) Find predicted values G(X,\n(m)\n) (renormalizing weights if necessary).\n2. The current bagging predictor is\n1\nM\nM\nm\n=1\nG\n(X,\n(m)\n).\nIn step 1(b), the weights may need to be renormalized (by dividing by the sum of all\ncurrent weights) if the implementation requires weights that sum to one. We note that for\nmany models, such as classification trees, this renormalization is not a major issue; for a\ntree, each split only depends on the relative weights of the observations at that node, so\nnodes not involving the new observation will have the same ratio of weights before and\nafter renormalization and the rest of the tree structure will be unaffected; in practice, in\nmost implementations of trees (including that used in this paper), renormalization is not\nnecessary. 
We discuss the possibility of renormalization in order to be consistent with the\noriginal presentation of the bootstrap and Bayesian bootstrap, and we note that ordinary\nonline bagging implicitly deals with this issue equivalently.\nThe computational requirements of Bayesian versus ordinary online bagging are comparable\n. The procedures are quite similar, with the main difference being that the fitting\nalgorithm must handle non-integer weights for the Bayesian version. For models such as\ntrees, there is no significant additional computational burden for using non-integer weights.\nExamples\nWe demonstrate the effectiveness of online Bayesian bagging using classification trees. Our\nimplementation uses the lossless online tree learning algorithms (ITI) of Utgoff et al. (1997)\n(available at http://www.cs.umass.edu/lrn/iti/). We compared Bayesian bagging to\na single tree, ordinary batch bagging, and ordinary online bagging, all three of which were\ndone using the minimum description length criterion (MDL), as implemented in the ITI\ncode, to determine the optimal size for each tree. To implement Bayesian bagging, the code\nwas modified to account for weighted observations.\nWe use a generalized MDL to determine the optimal tree size at each stage, replacing all\ncounts of observations with the sum of the weights of the observations at that node or leaf\nwith the same response category. Replacing the total count directly with the sum of the\nweights is justified by looking at the multinomial likelihood when written as an exponential\nfamily in canonical form; the weights enter through the dispersion parameter and it is easily\nseen that the unweighted counts are replaced by the sums of the weights of the observations\nthat go into each count. To be more specific, a decision tree typically operates with a\nmultinomial likelihood,\nleaves\nj\nclasses\nk\np\nn\njk\njk\n,\nwhere p\njk\nis the true probability that an observation in leaf j will be in class k, and n\njk\nis\nthe count of data points in leaf j in class k. This is easily re-written as the product over\nall observations,\nn\ni\n=1\np\n\ni\nwhere if observation i is in leaf j and a member of class k then\np\n\ni\n= p\njk\n. For simplicity, we consider the case k = 2 as the generalization to larger k is\nstraightforward. Now consider a single point, y, which takes values 0 or 1 depending on\nwhich class is it a member of. Transforming to the canonical parameterization, let =\np\n1-p\n,\n148\nLossless Online Bayesian Bagging\nwhere p is the true probability that y = 1. Writing the likelihood in exponential family\nform gives exp\ny + log\n1\n1+exp{}\na where a is the dispersion parameter, which would\nbe equal to 1 for a standard data set, but would be the reciprocal of the weight for that\nobservation in a weighted data set. Thus the likelihood for an observation y with weight\nw is exp\ny + log\n1\n1+exp{}\n(1/w)\n= p\nwy\n(1 - p)\nw\n(1-y)\nand so returning to the full\nmultinomial, the original counts are simply replaced by the weighted counts. As MDL is a\npenalized likelihood criterion, we thus use the weighted likelihood and replace each count\nwith a sum of weights. 
We note that for ordinary online bagging, using a single Poisson\nweight K with our generalized MDL is exactly equivalent to including K copies of the data\npoint in the data set and using regular MDL.\nTable 1 shows the data sets we used for classification problems, the number of classes in\neach data set, and the sizes of their respective training and test partitions. Table 2 displays\nthe results of our comparison study. All of the data sets, except the final one, are available\nonline at http://www.ics.uci.edu/mlearn/MLRepository.html, the UCI Machine\nLearning Repository. The last data set is described in Lee (2001). We compare the results\nof training a single classification tree, ordinary batch bagging, online bagging, and Bayesian\nonline bagging (or equivalently Bayesian batch). For each of the bagging techniques, 100\nbootstrap samples were used. For each data set, we repeated 1000 times the following procedure\n: randomly choose a training/test partition; fit a single tree, a batch bagged tree, an\nonline bagged tree, and a Bayesian bagged tree; compute the misclassification error rate for\neach fit. Table 2 reports the average error rate for each method on each data set, as well as\nthe estimated standard error of this error rate.\nSize of\nSize of\nNumber of\nTraining\nTest\nData Set\nClasses\nData Set\nData Set\nBreast cancer (WI)\n2\n299\n400\nContraceptive\n3\n800\n673\nCredit (German)\n2\n200\n800\nCredit (Japanese)\n2\n290\n400\nDermatology\n6\n166\n200\nGlass\n7\n164\n50\nHouse votes\n2\n185\n250\nIonosphere\n2\n200\n151\nIris\n3\n90\n60\nLiver\n3\n145\n200\nPima diabetes\n2\n200\n332\nSPECT\n2\n80\n187\nWine\n3\n78\n100\nMushrooms\n2\n1000\n7124\nSpam\n2\n2000\n2601\nCredit (American)\n2\n4000\n4508\nTable 1: Sizes of the example data sets\n149\nLee and Clyde\nBayesian\nSingle\nBatch\nOnline\nOnline/Batch\nData Set\nTree\nBagging\nBagging\nBagging\nBreast cancer (WI)\n0.055 (.020)\n0.045 (.010)\n0.045 (.010)\n0.041 (.009)\nContraceptive\n0.522 (.019)\n0.499 (.017)\n0.497 (.017)\n0.490 (.016)\nCredit (German)\n0.318 (.022)\n0.295 (.017)\n0.294 (.017)\n0.285 (.015)\nCredit (Japanese)\n0.155 (.017)\n0.148 (.014)\n0.147 (.014)\n0.145 (.014)\nDermatology\n0.099 (.033)\n0.049 (.017)\n0.053 (.021)\n0.047 (.019)\nGlass\n0.383 (.081)\n0.357 (.072)\n0.361 (.074)\n0.373 (.075)\nHouse votes\n0.052 (.011)\n0.049 (.011)\n0.049 (.011)\n0.046 (.010)\nIonosphere\n0.119 (.026)\n0.094 (.022)\n0.099 (.022)\n0.096 (.021)\nIris\n0.062 (.029)\n0.057 (.026)\n0.060 (.025)\n0.058 (.025)\nLiver\n0.366 (.036)\n0.333 (.032)\n0.336 (.034)\n0.317 (.033)\nPima diabetes\n0.265 (.027)\n0.250 (.020)\n0.247 (.021)\n0.232 (.017)\nSPECT\n0.205 (.029)\n0.200 (.030)\n0.202 (.031)\n0.190 (.027)\nWine\n0.134 (.042)\n0.094 (.037)\n0.101 (.037)\n0.085 (.034)\nMushrooms\n0.004 (.003)\n0.003 (.002)\n0.003 (.002)\n0.003 (.002)\nSpam\n0.099 (.008)\n0.075 (.005)\n0.077 (.005)\n0.077 (.005)\nCredit (American)\n0.350 (.007)\n0.306 (.005)\n0.306 (.005)\n0.305 (.006)\nTable 2: Comparison of average classification error rates (with standard error)\nWe note that in all cases, both online bagging techniques produce results similar to\nordinary batch bagging, and all bagging methods significantly improve upon the use of a\nsingle tree. 
However, for smaller data sets (all but the last three), online/batch Bayesian\nbagging typically both improves prediction performance and decreases prediction variability.\nDiscussion\nBagging is a useful ensemble learning tool, particularly when models sensitive to small\nchanges in the data are used. It is sometimes desirable to be able to use the data in\nan online fashion. By operating in the Bayesian paradigm, we can introduce an online\nalgorithm that will exactly match its batch Bayesian counterpart. Unlike previous versions\nof online bagging, the Bayesian approach produces a completely lossless bagging algorithm.\nIt can also lead to increased accuracy and decreased prediction variance for smaller data\nsets.\n\nAcknowledgments\nThis research was partially supported by NSF grants DMS 0233710, 9873275, and 9733013.\nThe authors would like to thank two anonymous referees for their helpful suggestions.\n150\nLossless Online Bayesian Bagging\n\nReferences\nL. Breiman. Heuristics of instability in model selection. Technical report, University of\nCalifornia at Berkeley, 1994.\nL. Breiman. Bagging predictors. Machine Learning, 26(2):123140, 1996.\nM. A. Clyde and H. K. H. Lee. Bagging and the Bayesian bootstrap. In T. Richardson and\nT. Jaakkola, editors, Artificial Intelligence and Statistics 2001, pages 169174, 2001.\nR. V. Hogg and A. T. Craig. Introduction to Mathematical Statistics. Prentice-Hall, Upper\nSaddle River, NJ, 5th edition, 1995.\nH. K. H. Lee. Model selection for neural network classification. Journal of Classification,\n18:227243, 2001.\nN. C. Oza and S. Russell. Online bagging and boosting. In T. Richardson and T. Jaakkola,\neditors, Artificial Intelligence and Statistics 2001, pages 105112, 2001.\nD. B. Rubin. The Bayesian bootstrap. Annals of Statistics, 9:130134, 1981.\nP. E. Utgoff, N. C. Berkman, and J. A. Clouse. Decision tree induction based on efficient\ntree restructuring. Machine Learning, 29:544, 1997.\n151\n", "keywords": "classification;Dirichlet Distribution;online bagging;bootstrap;Classification Tree;Bayesian Bootstrap;mean-squared prediction error;Bayesian bagging;bagging;lossless learning algorithm"} {"name": "13", "title": "A Machine Learning Based Approach for Table Detection on The Web", "abstract": "Table is a commonly used presentation scheme, especially for describing relational information. However, table understanding remains an open problem. In this paper, we consider the problem of table detection in web documents. Its potential applications include web mining, knowledge management , and web content summarization and delivery to narrow-bandwidth devices. We describe a machine learning based approach to classify each given table entity as either genuine or non-genuine . Various features re ecting the layout as well as content characteristics of tables are studied. In order to facilitate the training and evaluation of our table classi er, we designed a novel web document table ground truthing protocol and used it to build a large table ground truth database. The database consists of 1,393 HTML les collected from hundreds of di erent web sites and contains 11,477 leaf <TABLE> elements, out of which 1,740 are genuine tables. Experiments were conducted using the cross validation method and an F-measure of 95 : 89% was achieved.", "fulltext": "INTRODUCTION\nThe increasing ubiquity of the Internet has brought about\na constantly increasing amount of online publications. 
As\na compact and e cient way to present relational information\n, tables are used frequently in web documents. Since\ntables are inherently concise as well as information rich, the\nautomatic understanding of tables has many applications including\nknowledge management, information retrieval, web\nCopyright is held by the author/owner(s).\nWWW2002\n, May 711, 2002, Honolulu, Hawaii, USA.\nACM 1-58113-449-5/02/0005.\nmining, summarization, and content delivery to mobile devices\n. The processes of table understanding in web documents\ninclude table detection, functional and structural\nanalysis and nally table interpretation 6]. In this paper,\nwe concentrate on the problem of table detection. The web\nprovides users with great possibilities to use their own style\nof communication and expressions. In particular, people use\nthe\n<TABLE>\ntag not only for relational information display\nbut also to create any type of multiple-column layout to\nfacilitate easy viewing, thus the presence of the\n<TABLE>\ntag does not necessarily indicate the presence of a relational\ntable. In this paper, we de ne\ngenuine\ntables to be document\nentities where a two dimensional grid is semantically\nsigni cant in conveying the logical relations among the cells\n10]. Conversely,\nNon-genuine\ntables are document entities\nwhere\n<TABLE>\ntags are used as a mechanism for grouping\ncontents into clusters for easy viewing only. Figure 1 gives\na few examples of genuine and non-genuine tables. While\ngenuine tables in web documents could also be created without\nthe use of\n<TABLE>\ntags at all, we do not consider such\ncases in this article as they seem very rare from our experience\n. Thus, in this study,\nTable detection\nrefers to the\ntechnique which classi es a document entity enclosed by the\n<TABLE></TABLE>\ntags as genuine or non-genuine tables.\nSeveral researchers have reported their work on web table\ndetection 2, 10, 6, 14]. In 2], Chen\net al.\nused heuristic\nrules and cell similarities to identify tables. They tested\ntheir table detection algorithm on 918 tables from airline information\nweb pages and achieved an F-measure of 86\n:\n50%.\nPenn\net al.\nproposed a set of rules for identifying genuinely\ntabular information and news links in HTML documents\n10]. They tested their algorithm on 75 web site front-pages\nand achieved an F-measure of 88\n:\n05%. Yoshida\net al.\nproposed\na method to integrate WWW tables according to the\ncategory of objects presented in each table 14]. Their data\nset contains 35,232 table tags gathered from the web. They\nestimated their algorithm parameters using all of table data\nand then evaluated algorithm accuracy on 175 of the tables.\nThe average F-measure reported in their paper is 82\n:\n65%.\nThese previous methods all relied on heuristic rules and were\nonly tested on a database that is either very small 10], or\nhighly domain speci c 2]. Hurst mentioned that a Naive\nBayes classi er algorithm produced adequate results but no\ndetailed algorithm and experimental information was provided\n6].\nWe propose a new machine learning based approach for\n242\nFigure 1: Examples of genuine and non-genuine tables.\ntable detection from generic web documents. In particular\n, we introduce a set of novel features which re ect the\nlayout as well as content characteristics of tables. These\nfeatures are used in classi ers trained on thousands of examples\n. 
To facilitate the training and evaluation of the table\nclassi ers, we designed a novel web document table ground\ntruthing protocol and used it to build a large table ground\ntruth database. The database consists of 1,393 HTML les\ncollected from hundreds of di erent web sites and contains\n11,477 leaf\n<TABLE>\nelements, out of which 1,740 are genuine\ntables. Experiments on this database using the cross\nvalidation method demonstrate signi cant performance improvements\nover previous methods.\nThe rest of the paper is organized as follows. We describe\nour feature set in Section 2, followed by a brief discussion\nof the classi ers we experimented with in Section 3. In Section\n4, we present a novel table ground truthing protocol\nand explain how we built our database. Experimental results\nare then reported in Section 5 and we conclude with\nfuture directions in Section 6.\nFEATURES FOR WEB TABLE DETECTION\nFeature selection is a crucial step in any machine learning\nbased methods. In our case, we need to nd a combination\nof features that together provide signi cant separation between\ngenuine and non-genuine tables while at the same time\nconstrain the total number of features to avoid the curse of\ndimensionality. Past research has clearly indicated that layout\nand content are two important aspects in table understanding\n6]. Our features were designed to capture both of\nthese aspects. In particular, we developed 16 features which\ncan be categorized into three groups: seven layout features,\neight content type features and one word group feature. In\nthe rst two groups, we attempt to capture the global composition\nof tables as well as the consistency within the whole\ntable and across rows and columns. The last feature looks at\nwords used in tables and is derived directly from the vector\nspace model commonly used in Information Retrieval.\nBefore feature extraction, each HTML document is rst\nparsed into a document hierarchy tree using Java Swing\nXML parser with W3C HTML 3.2 DTD 10]. A\n<TABLE>\nnode is said to be a\nleaf table\nif and only if there are no\n<TABLE>\nnodes among its children 10]. Our experience indicates\nthat almost all genuine tables are leaf tables. Thus\nin this study only leaf tables are considered candidates for\ngenuine tables and are passed on to the feature extraction\nstage. In the following we describe each feature in detail.\n2.1\nLayout Features\nIn HTML documents, although tags like\n<TR>\nand\n<TD>\n(or\n<TH>\n) may be assumed to delimit table rows and table\ncells, they are not always reliable indicators of the number\nof rows and columns in a table. Variations can be caused\nby spanning cells created using\n<ROWSPAN>\nand\n<COLSPAN>\ntags. Other tags such as\n<BR>\ncould be used to move content\ninto the next row. Therefore to extract layout features\nreliably one can not simply count the number of\n<TR>\n's and\n<TD>\n's. For this purpose, we maintain a matrix to record all\n243\nthe cell spanning information and serve as a pseudo rendering\nof the table. 
Layout features based on row or column numbers are then computed from this matrix.
Given a table T, assuming its numbers of rows and columns are rn and cn respectively, we compute the following layout features:
Average number of columns, computed as the average number of cells per row:
\bar{c} = \frac{1}{rn} \sum_{i=1}^{rn} c_i,
where c_i is the number of cells in row i, i = 1, ..., rn.
Standard deviation of the number of columns:
dC = \sqrt{ \frac{1}{rn} \sum_{i=1}^{rn} (c_i - \bar{c})^2 }.
Average number of rows, computed as the average number of cells per column:
\bar{r} = \frac{1}{cn} \sum_{i=1}^{cn} r_i,
where r_i is the number of cells in column i, i = 1, ..., cn.
Standard deviation of the number of rows:
dR = \sqrt{ \frac{1}{cn} \sum_{i=1}^{cn} (r_i - \bar{r})^2 }.
Since the majority of tables in web documents contain characters, we compute three more layout features based on cell length in terms of number of characters:
Average overall cell length: \bar{cl} = \frac{1}{en} \sum_{i=1}^{en} cl_i, where en is the total number of cells in a given table and cl_i is the length of cell i, i = 1, ..., en.
Standard deviation of cell length:
dCL = \sqrt{ \frac{1}{en} \sum_{i=1}^{en} (cl_i - \bar{cl})^2 }.
Average cumulative length consistency, CLC.
The last feature is designed to measure the cell length consistency along either the row or the column direction. It is inspired by the fact that most genuine tables demonstrate certain consistency either along the row or the column direction, but usually not both, while non-genuine tables often show no consistency in either direction. First, the average cumulative within-row length consistency, CLC_r, is computed as follows. Let the set of cell lengths of the cells from row i be R_i, i = 1, ..., r (considering only non-spanning cells):
1. Compute the mean cell length, m_i, for row R_i.
2. Compute the cumulative length consistency within each R_i:
CLC_i = \sum_{cl \in R_i} LC_{cl}.
Here LC_{cl} is defined as LC_{cl} = 0.5 - D, where D = \min\{ |cl - m_i| / m_i, 1.0 \}. Intuitively, LC_{cl} measures the degree of consistency between cl and the mean cell length, with -0.5 indicating extreme inconsistency and 0.5 indicating extreme consistency. When most cells within R_i are consistent, the cumulative measure CLC_i is positive, indicating a more or less consistent row.
3. Take the average across all rows:
CLC_r = \frac{1}{r} \sum_{i=1}^{r} CLC_i.
After the within-row length consistency CLC_r is computed, the within-column length consistency CLC_c is computed in a similar manner. Finally, the overall cumulative length consistency is computed as CLC = max(CLC_r, CLC_c).
2.2 Content Type Features
Web documents are inherently multimedia and have more types of content than traditional documents. For example, the content within a <TABLE> element could include hyperlinks, images, forms, alphabetical or numerical strings, etc. Because of the relational information it needs to convey, a genuine table is more likely to contain alphabetical or numerical strings than, say, images. The content type feature was designed to reflect such characteristics.
We define the set of content types T = {Image, Form, Hyperlink, Alphabetical, Digit, Empty, Others}. Our content type features include:
The histogram of content types for a given table.
This contributes seven features to the feature set.
Average content type consistency, CTC.
The last feature is similar to the cell length consistency feature. First, the within-row content type consistency CTC_r is computed as follows. Let the set of cell types of the cells from row i be T_i, i = 1, ..., r (again, considering only non-spanning cells):
1. Find the dominant type, DT_i, for T_i.
2. Compute the cumulative type consistency within each row R_i, i = 1, ..., r:
CTC_i = \sum_{ct \in R_i} D,
where D = 1 if ct is equal to DT_i and D = -1 otherwise.
3. Take the average across all rows:
CTC_r = \frac{1}{r} \sum_{i=1}^{r} CTC_i.
The within-column type consistency CTC_c is then computed in a similar manner. Finally, the overall cumulative type consistency is computed as CTC = max(CTC_r, CTC_c).
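To make the two consistency features concrete, here is a small Python sketch of the cumulative length consistency (CLC) and cumulative type consistency (CTC) computations defined in Sections 2.1 and 2.2. The input format (a rectangular matrix of (length, type) pairs for non-spanning cells, as produced by some pseudo-rendering step) is an assumption made for illustration, not the paper's actual data structure.

```python
def clc_1d(rows_of_lengths):
    """Average cumulative length consistency along one direction (rows here)."""
    per_row = []
    for lengths in rows_of_lengths:
        if not lengths:
            continue
        m = sum(lengths) / len(lengths)        # mean cell length m_i for this row
        if m > 0:
            # LC_cl = 0.5 - min(|cl - m| / m, 1.0), each term lies in [-0.5, 0.5]
            per_row.append(sum(0.5 - min(abs(cl - m) / m, 1.0) for cl in lengths))
        else:
            per_row.append(0.0)                # degenerate row of zero-length cells
    return sum(per_row) / len(per_row) if per_row else 0.0

def ctc_1d(rows_of_types):
    """Average cumulative type consistency along one direction (rows here)."""
    per_row = []
    for types in rows_of_types:
        if not types:
            continue
        dominant = max(set(types), key=types.count)            # dominant type DT_i
        per_row.append(sum(1 if t == dominant else -1 for t in types))
    return sum(per_row) / len(per_row) if per_row else 0.0

def consistency_features(cells):
    """cells[i][j] = (length, content_type); assumes a rectangular cell matrix."""
    rows_len = [[c[0] for c in row] for row in cells]
    rows_typ = [[c[1] for c in row] for row in cells]
    cols_len = [list(col) for col in zip(*rows_len)]            # transpose for the column pass
    cols_typ = [list(col) for col in zip(*rows_typ)]
    clc = max(clc_1d(rows_len), clc_1d(cols_len))               # CLC = max(CLC_r, CLC_c)
    ctc = max(ctc_1d(rows_typ), ctc_1d(cols_typ))               # CTC = max(CTC_r, CTC_c)
    return clc, ctc
```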
2.3 Word Group Feature
If we treat each table as a "mini-document" by itself, table classification can be viewed as a document categorization problem with two broad categories: genuine tables and non-genuine tables. We designed the word group feature to incorporate word content for table classification based on techniques developed in information retrieval [7, 13].
After morphing [11] and removing the infrequent words, we obtain the set of words found in the training data, W. We then construct weight vectors representing genuine and non-genuine tables and compare them against the frequency vector from each new incoming table.
Let Z represent the set of non-negative integers. The following functions are defined on the set W:
df_G : W -> Z, where df_G(w_i) is the number of genuine tables which include word w_i, i = 1, ..., |W|;
tf_G : W -> Z, where tf_G(w_i) is the number of times word w_i, i = 1, ..., |W|, appears in genuine tables;
df_N : W -> Z, where df_N(w_i) is the number of non-genuine tables which include word w_i, i = 1, ..., |W|;
tf_N : W -> Z, where tf_N(w_i) is the number of times word w_i, i = 1, ..., |W|, appears in non-genuine tables;
tf_T : W -> Z, where tf_T(w_i) is the number of times word w_i in W appears in a new test table.
To simplify the notation, in the following discussion we will use df_Gi, tf_Gi, df_Ni and tf_Ni to represent df_G(w_i), tf_G(w_i), df_N(w_i) and tf_N(w_i), respectively.
Let N_G and N_N be the numbers of genuine and non-genuine tables in the training collection, respectively, and let C = max(N_G, N_N). Without loss of generality, we assume N_G != 0 and N_N != 0. For each word w_i in W, i = 1, ..., |W|, two weights, p^G_i and p^N_i, are computed:
p^G_i = tf_{Gi} \log\!\left( \frac{df_{Gi}}{N_G} \cdot \frac{N_N}{df_{Ni}} + 1 \right) when df_{Ni} != 0, and p^G_i = tf_{Gi} \log\!\left( \frac{df_{Gi}}{N_G} \cdot C + 1 \right) when df_{Ni} = 0;
p^N_i = tf_{Ni} \log\!\left( \frac{df_{Ni}}{N_N} \cdot \frac{N_G}{df_{Gi}} + 1 \right) when df_{Gi} != 0, and p^N_i = tf_{Ni} \log\!\left( \frac{df_{Ni}}{N_N} \cdot C + 1 \right) when df_{Gi} = 0.
As can be seen from the formulas, the definitions of these weights were derived from the traditional tf-idf measures used in information retrieval, with some adjustments made for the particular problem at hand.
Given a new incoming table, let us denote the set including all the words in it as W_n. Since W is constructed using thousands of tables, the words that are present in both W and W_n are only a small subset of W. Based on the vector space model, we define the similarity between the weight vectors representing genuine and non-genuine tables and the frequency vector representing the incoming table as the corresponding dot products. Since we only need to consider the words that are present in both W and W_n, we first compute the effective word set W_e = W \cap W_n. Let the words in W_e be represented as w_{m_k}, where m_k, k = 1, ..., |W_e|, are indexes to the words from the set W = {w_1, w_2, ..., w_{|W|}}. We define the following vectors:
Weight vector representing the genuine table group:
G_S = \left( \frac{p^G_{m_1}}{U}, \frac{p^G_{m_2}}{U}, \ldots, \frac{p^G_{m_{|W_e|}}}{U} \right),
where U is the cosine normalization term: U = \sqrt{ \sum_{k=1}^{|W_e|} (p^G_{m_k})^2 }.
Weight vector representing the non-genuine table group:
N_S = \left( \frac{p^N_{m_1}}{V}, \frac{p^N_{m_2}}{V}, \ldots, \frac{p^N_{m_{|W_e|}}}{V} \right),
where V is the cosine normalization term: V = \sqrt{ \sum_{k=1}^{|W_e|} (p^N_{m_k})^2 }.
Frequency vector representing the new incoming table:
I_T = \left( tf_{T m_1}, tf_{T m_2}, \ldots, tf_{T m_{|W_e|}} \right).
Finally, the word group feature is defined as the ratio of the two dot products:
wg = (I_T \cdot G_S) / (I_T \cdot N_S) when I_T \cdot N_S != 0;
wg = 1 when I_T \cdot G_S = 0 and I_T \cdot N_S = 0;
wg = 10 when I_T \cdot G_S != 0 and I_T \cdot N_S = 0.
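The following Python sketch shows how the word group weights and the final wg ratio could be computed. The training statistics dictionary (term and document frequencies per class, plus the table counts N_G and N_N) is assumed to have been gathered beforehand; names such as stats and word_weights are hypothetical and are used only for illustration.

```python
import math

def word_weight(tf, df, n_self, n_other, df_other, C):
    """Weight of one word for one class, following the tf-idf style definition above."""
    if df_other != 0:
        return tf * math.log((df / n_self) * (n_other / df_other) + 1.0)
    return tf * math.log((df / n_self) * C + 1.0)

def word_group_feature(table_tf, stats):
    """table_tf: {word: count} for the incoming table.
    stats: {"tf_G": {...}, "df_G": {...}, "tf_N": {...}, "df_N": {...}, "N_G": int, "N_N": int}."""
    C = max(stats["N_G"], stats["N_N"])
    # Effective word set W_e: words of the incoming table also seen in training.
    effective = [w for w in table_tf if w in stats["tf_G"] or w in stats["tf_N"]]
    if not effective:
        return 1.0
    pG = [word_weight(stats["tf_G"].get(w, 0), stats["df_G"].get(w, 0),
                      stats["N_G"], stats["N_N"], stats["df_N"].get(w, 0), C) for w in effective]
    pN = [word_weight(stats["tf_N"].get(w, 0), stats["df_N"].get(w, 0),
                      stats["N_N"], stats["N_G"], stats["df_G"].get(w, 0), C) for w in effective]
    U = math.sqrt(sum(x * x for x in pG)) or 1.0      # cosine normalization terms
    V = math.sqrt(sum(x * x for x in pN)) or 1.0
    IT = [table_tf[w] for w in effective]
    dotG = sum(t * (g / U) for t, g in zip(IT, pG))
    dotN = sum(t * (n / V) for t, n in zip(IT, pN))
    if dotN != 0:
        return dotG / dotN
    return 1.0 if dotG == 0 else 10.0
```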
CLASSIFICATION SCHEMES
Various classification schemes have been widely used in document categorization as well as web information retrieval [13, 8]. For the table detection task, the decision tree classifier is particularly attractive as our features are highly non-homogeneous. We also experimented with Support Vector Machines (SVM), a relatively new learning approach which has achieved one of the best performances in text categorization [13].
3.1 Decision Tree
Decision tree learning is one of the most widely used and practical methods for inductive inference. It is a method for approximating discrete-valued functions that is robust to noisy data.
Decision trees classify an instance by sorting it down the tree from the root to some leaf node, which provides the classification of the instance. Each node in a discrete-valued decision tree specifies a test of some attribute of the instance, and each branch descending from that node corresponds to one of the possible values for this attribute. Continuous-valued decision attributes can be incorporated by dynamically defining new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals [9]. An implementation of the continuous-valued decision tree described in [4] was used for our experiments. The decision tree is constructed using a training set of feature vectors with true class labels. At each node, a discriminant threshold is chosen such that it minimizes an impurity value. The learned discriminant function splits the training subset into two subsets and generates two child nodes. The process is repeated at each newly generated child node until a stopping condition is satisfied, and the node is declared a terminal node based on a majority vote. The maximum impurity reduction, the maximum depth of the tree, and the minimum number of samples are used as stopping conditions.
3.2 SVM
Support Vector Machines (SVM) are based on the Structural Risk Minimization principle from computational learning theory [12]. The idea of structural risk minimization is to find a hypothesis h for which the lowest true error is guaranteed. The true error of h is the probability that h will make an error on an unseen and randomly selected test example.
The SVM method is defined over a vector space where the goal is to find a decision surface that best separates the data points into two classes. More precisely, the decision surface found by SVM for a linearly separable space is a hyperplane which can be written as
\vec{w} \cdot \vec{x} - b = 0,
where \vec{x} is an arbitrary data point and the vector \vec{w} and the constant b are learned from training data. Let D = {(y_i, \vec{x}_i)} denote the training set, and let y_i \in {+1, -1} be the classification of \vec{x}_i. The SVM problem is to find \vec{w} and b that satisfy the constraints
\vec{w} \cdot \vec{x}_i - b \geq +1 for y_i = +1,
\vec{w} \cdot \vec{x}_i - b \leq -1 for y_i = -1,
while minimizing the vector 2-norm of \vec{w}.
The SVM problem in linearly separable cases can be efficiently solved using quadratic programming techniques, while the non-linearly separable cases can be solved by either introducing soft margin hyperplanes, or by mapping the original data vectors to a higher dimensional space where the data points become linearly separable [12, 3].
One reason why SVMs are very powerful is that they are very universal learners. In their basic form, SVMs learn linear threshold functions. Nevertheless, by a simple "plug-in" of an appropriate kernel function, they can be used to learn polynomial classifiers, radial basis function (RBF) networks, three-layer sigmoid neural nets, etc. [3].
For our experiments, we used the SVMlight system implemented by Thorsten Joachims (http://svmlight.joachims.org).
DATA COLLECTION AND TRUTHING
Since there is no publicly available web table ground truth database, researchers tested their algorithms on different data sets in the past [2, 10, 14]. However, their data sets either had limited manually annotated table data (e.g., 918 table tags in [2], 75 HTML pages in [10], 175 manually annotated table tags in [14]), or were collected from specific domains (e.g., a set of tables selected from airline information pages was used in [2]). To develop our machine learning based table detection algorithm, we needed to build a general web table ground truth database of significant size.
4.1 Data Collection
Instead of working within a specific domain, our goal of data collection was to get tables of as many different varieties as possible from the web. To accomplish this, we composed a set of key words likely to indicate documents containing tables and used those key words to retrieve and download web pages using the Google search engine. Three directories on Google were searched: the business directory and news directory using the key words {table, stock, bonds, figure, schedule, weather, score, service, results, value}, and the science directory using the key words {table, results, value}. A total of 2,851 web pages were downloaded in this manner and we ground truthed 1,393 HTML pages out of these (chosen randomly among all the HTML pages).
These 1,393 HTML pages from around 200 web sites\ncomprise our database.\n4.2\nGround Truthing\nThere has been no previous report on how to systemati-cally\ngenerate web table ground truth data. To build a large\nweb table ground truth database, a simple, exible and complete\nground truth protocol is required. Figure 4.2(a) shows\nthe diagram of our ground truthing procedure. We created\na new Document Type De nition(DTD) which is a super-set\nof W3C HTML 3.2 DTD. We added three attributes for\n<TABLE>\nelement, which are \\tabid", \\genuine table" and\n\\table title". The possible value of the second attribute is\nyes\nor\nno\nand the value of the rst and third attributes is a\nstring. We used these three attributes to record the ground\ntruth of each leaf\n<TABLE>\nnode. The bene t of this design\nis that the ground truth data is inside HTML le format.\nWe can use exactly the same parser to process the ground\ntruth data.\nWe developed a graphical user interface for web table\nground truthing using the Java 1] language. Figure 4.2(b)\nis a snapshot of the interface. There are two windows. After\nreading an HTML le, the hierarchy of the HTML le is\nshown in the left window. When an item is selected in the\nhierarchy, the HTML source for the selected item is shown\nin the right window. There is a panel below the menu bar.\nThe user can use the radio button to select either genuine\ntable or non-genuine table. The text window is used to input\ntable title.\n4.3\nDatabase Description\nOur nal table ground truth database consists of 1,393\nHTML pages collected from around 200 web sites. There\nare a total of 14,609\n<TABLE>\nnodes, including 11,477 leaf\n<TABLE>\nnodes. Out of the 11,477 leaf\n<TABLE>\nnodes,\n1,740 are genuine tables and 9,737 are non-genuine tables.\nNot every genuine table has its title and only 1,308 genuine\ntables have table titles. We also found at least 253 HTML\nles have unmatched\n<TABLE>\n,\n</TABLE>\npairs or wrong\nhierarchy, which demonstrates the noisy nature of web documents\nEXPERIMENTS\nA hold-out method is used to evaluate our table classi-er\n. We randomly divided the data set into nine parts.\nEach classi er was trained on eight parts and then tested\non the remaining one part. This procedure was repeated\nnine times, each time with a di erent choice for the test\n246\nParser\nAdding attributes\nHTML with attributes and unique\nindex to each table(ground truth)\nValidation\nHTML File\nHierarchy\n(a)\n(b)\nFigure 2: (a) The diagram of ground truthing procedure (b) A snapshot of the ground truthing software.\npart. Then the combined nine part results are averaged to\narrive at the overall performance measures 4].\nFor the layout and content type features, this procedure\nis straightforward. However it is more complicated for the\nword group feature training. 
To compute wg for training samples, we need to further divide the training set into two groups: a larger one (7 parts) for the computation of the weights p^G_i and p^N_i, i = 1, ..., |W|, and a smaller one (1 part) for the computation of the vectors G_S, N_S, and I_T. This partition is again rotated to compute wg for each table in the training set.

Table 1: Possible true- and detected-state combinations for two classes.

                        Assigned class
True class              genuine table    non-genuine table
genuine table           N_gg             N_gn
non-genuine table       N_ng             N_nn

The output of each classifier is compared with the ground truth and a contingency table is computed to indicate how many tables of each true class are assigned to each of the two classes. The rows of the contingency table represent the true classes and the columns represent the assigned classes. The cell at row r and column c is the number of tables whose true class is r while the assigned class is c. The possible true- and detected-state combinations are shown in Table 1. Three performance measures, Recall Rate (R), Precision Rate (P) and F-measure (F), are computed as follows:
R = \frac{N_{gg}}{N_{gg} + N_{gn}}, \quad P = \frac{N_{gg}}{N_{gg} + N_{ng}}, \quad F = \frac{R + P}{2}.
For comparison among different features and learning algorithms, we report the performance measures when the best F-measure is achieved. First, the performance of various feature groups and their combinations was evaluated using the decision tree classifier. The results are given in Table 2.

Table 2: Experimental results using various feature groups and the decision tree classifier.

         L        T        LT       LTW
R (%)    87.24    90.80    94.20    94.25
P (%)    88.15    95.70    97.27    97.50
F (%)    87.70    93.25    95.73    95.88

L: layout only. T: content type only. LT: layout and content type. LTW: layout, content type and word group.

As seen from the table, content type features performed better than layout features as a single group, achieving an F-measure of 93.25%. However, when the two groups were combined the F-measure was improved substantially to 95.73%, reconfirming the importance of combining layout and content features in table detection. The addition of the word group feature improved the F-measure slightly more, to 95.88%.
Table 3 compares the performances of different learning algorithms using the full feature set. The learning algorithms tested include the decision tree classifier and the SVM algorithm with two different kernels: linear and radial basis function (RBF).

Table 3: Experimental results using different learning algorithms.

         Tree     SVM (linear)    SVM (RBF)
R (%)    94.25    93.91           95.98
P (%)    97.50    91.39           95.81
F (%)    95.88    92.65           95.89

As seen from the table, for this application the SVM with radial basis function kernel performed much better than the one with the linear kernel. It achieved an F-measure of 95.89%, comparable to the 95.88% achieved by the decision tree classifier.
Figure 3 shows two examples of correctly classified tables, where Figure 3(a) is a genuine table and Figure 3(b) is a non-genuine table.
Figure 4 shows a few examples where our algorithm failed. Figure 4(a) was misclassified as a non-genuine table, likely because its cell lengths are highly inconsistent and it has many hyperlinks, which is unusual for genuine tables. The reason why Figure 4(b) was misclassified as non-genuine is more interesting.
When we looked at its HTML source code,\nwe found it contains only two\n<TR>\ntags. All text strings\nin one rectangular box are within one\n<TD>\ntag. Its author\nused\n<p>\ntags to put them in di erent rows. This points\nto the need for a more carefully designed pseudo-rendering\nprocess. Figure 4(c) shows a non-genuine table misclassi-ed\nas genuine. A close examination reveals that it indeed\nhas good consistency along the row direction. In fact, one\ncould even argue that this is indeed a genuine table, with\nimplicit row headers of\nTitle, Name, Company A liation\nand\nPhone Number\n. This example demonstrates one of the\nmost di cult challenges in table understanding, namely the\nambiguous nature of many table instances (see 5] for a more\ndetailed analysis on that). Figure 4(d) was also misclassi-ed\nas a genuine table. This is a case where layout features\nand the kind of shallow content features we used are not\nenough | deeper semantic analysis would be needed in order\nto identify the lack of logical coherence which makes it\na non-genuine table.\nFor comparison, we tested the previously developed rule-based\nsystem 10] on the same database. The initial results\n(shown in Table 4 under \\Original Rule Based") were\nvery poor. After carefully studying the results from the\ninitial experiment we realized that most of the errors were\ncaused by a rule imposing a hard limit on cell lengths in genuine\ntables. After deleting that rule the rule-based system\nachieved much improved results (shown in Table 4 under\n\\Modi ed Rule Based"). However, the proposed machine\nlearning based method still performs considerably better in\ncomparison. This demonstrates that systems based on handcrafted\nrules tend to be brittle and do not generalize well.\nIn this case, even after careful manual adjustment in a new\ndatabase, it still does not work as well as an automatically\ntrained classi er.\n(a)\n(b)\nFigure 3: Examples of correctly classi ed tables.\n(a): a genuine table (b): a non-genuine table.\nTable 4: Experimental results of a previously developed\nrule based system.\nOriginal Rule Based Modi ed Rule Based\nR (%)\n48.16\n95.80\nP (%)\n75.70\n79.46\nF (%)\n61.93\n87.63\n248\n(a)\n(b)\n(c)\n(d)\nFigure 4: Examples of misclassi ed tables. (a) and (b): Genuine tables misclassi ed as non-genuine (c) and\n(d): Non-genuine tables misclassi ed as genuine.\nA direct comparison to other previous results 2, 14] is\nnot possible currently because of the lack of access to their\nsystem. However, our test database is clearly more general\nand far larger than the ones used in 2] and 14], while our\nprecision and recall rates are both higher.\nCONCLUSION AND FUTURE WORK\nTable detection in web documents is an interesting and\nchallenging problem with many applications. We present a\nmachine learning based table detection algorithm for HTML\ndocuments. Layout features, content type features and word\ngroup features were used to construct a novel feature set.\nDecision tree and SVM classi ers were then implemented\nand tested in this feature space. We also designed a novel table\nground truthing protocol and used it to construct a large\nweb table ground truth database for training and testing.\nExperiments on this large database yielded very promising\nresults.\nOur future work includes handling more di erent HTML\nstyles in pseudo-rendering, detecting table titles of the rec-ognized\ngenuine tables and developing a machine learning\nbased table interpretation algorithm. 
We would also like to\ninvestigate ways to incorporate deeper language analysis for\nboth table detection and interpretation.\nACKNOWLEDGMENT\nWe would like to thank Kathie Shipley for her help in\ncollecting the web pages, and Amit Bagga for discussions on\nvector space models.\n\nREFERENCES\n1] M. Campione, K. Walrath, and A. Huml. The\njava(tm) tutorial: A short course on the basics (the\njava(tm) series).\n2] H.-H. Chen, S.-C. Tsai, and J.-H. Tsai. Mining tables\nfrom large scale html texts. In\nProc. 18th\nInternational Conference on Computational\nLinguistics\n, Saabrucken, Germany, July 2000.\n3] C. Cortes and V. Vapnik. Support-vector networks.\nMachine Learning\n, 20:273{296, August 1995.\n4] R. Haralick and L. Shapiro.\nComputer and Robot\nVision\n, volume 1. Addison Wesley, 1992.\n5] J. Hu, R. Kashi, D. Lopresti, G. Nagy, and\nG. Wilfong. Why table ground-truthing is hard. In\nProc. 6th International Conference on Document\nAnalysis and Recognition (ICDAR01)\n, pages 129{133,\nSeattle, WA, USA, September 2001.\n6] M. Hurst. Layout and language: Challenges for table\nunderstanding on the web. In\nProc. 1st International\nWorkshop on Web Document Analysis\n, pages 27{30,\nSeattle, WA, USA, September 2001.\n7] T. Joachims. A probabilistic analysis of the rocchio\nalgorithm with t df for text categorization. In\nProc.\n14th International Conference on Machine Learning\n,\npages 143{151, Morgan Kaufmann, 1997.\n8] A. McCallum, K. Nigam, J. Rennie, and K. Seymore.\nAutomating the construction of internet portals with\nmachine learning. In\nInformation Retrieval Journal\n,\nvolume 3, pages 127{163, Kluwer, 2000.\n249\n9] T. M. Mitchell.\nMachine Learning\n. McGraw-Hill, 1997.\n10] G. Penn, J. Hu, H. Luo, and R. McDonald. Flexible\nweb document analysis for delivery to narrow-bandwidth\ndevices. In\nProc. 6th International\nConference on Document Analysis and Recognition\n(ICDAR01)\n, pages 1074{1078, Seattle, WA, USA,\nSeptember 2001.\n11] M. F. Porter. An algorithm for su x stripping.\nProgram\n, 14(3):130{137, 1980.\n12] V. N. Vapnik.\nThe Nature of Statistical Learning\nTheory\n, volume 1. Springer, New York, 1995.\n13] Y. Yang and X. Liu. A re-examination of text\ncategorization methods. In\nProc. SIGIR'99\n, pages\n42{49, Berkeley, California, USA, August 1999.\n14] M. Yoshida, K. Torisawa, and J. Tsujii. A method to\nintegrate tables of the world wide web. In\nProc. 1st\nInternational Workshop on Web Document Analysis\n,\npages 31{34, Seattle, WA, USA, September 2001.\n250", "keywords": "Table detection;table ground truthing protocol;Layout Analysis;classifers;word group;presentation;Information Retrieval;Algorithms;Support Vector Machine;classifcation schemes;Machine Learning;Table Detection;Layout;machine learning based approach;content type;Decision tree;HTML document"} {"name": "130", "title": "Low Latency Photon Mapping Using Block Hashing", "abstract": "For hardware accelerated rendering, photon mapping is especially useful for simulating caustic lighting effects on non-Lambertian surfaces. However, an efficient hardware algorithm for the computation of the k nearest neighbours to a sample point is required. Existing algorithms are often based on recursive spatial subdivision techniques, such as kd-trees. However, hardware implementation of a tree-based algorithm would have a high latency, or would require a large cache to avoid this latency on average. We present a neighbourhood-preserving hashing algorithm that is low-latency and has sub-linear access time. 
This algorithm is more amenable to fine-scale parallelism than tree-based recursive spatial subdivision, and maps well onto coherent block-oriented pipelined memory access. These properties make the algorithm suitable for implementation using future programmable fragment shaders with only one stage of dependent texturing.", "fulltext": "Introduction\nPhoton mapping, as described by Jensen\n, is a technique\nfor reconstructing the incoming light field at surfaces everywhere\nin a scene from sparse samples generated by forward\nlight path tracing. In conjunction with path tracing, photon\nmapping can be used to accelerate the computation of both\ndiffuse and specular global illumination . It is most effective\nfor specular or glossy reflectance effects, such as caustics\n.\nThe benefits of migrating photo-realistic rendering techniques\ntowards a real-time, hardware-assisted implementation\nare obvious. Recent work has shown that it is possible\nto implement complex algorithms, such as ray-tracing,\nusing the programmable features of general-purpose hardware\naccelerators\nand/or specialised hardware\n. We are\ninterested in hardware support for photon mapping: specifically\n, the application of photon maps to the direct visualisation\nof caustics on non-Lambertian surfaces, since diffuse\nglobal illumination effects are probably best handled in a\nreal-time renderer using alternative techniques such as irradiance\n.\nCentral to photon mapping is the search for the set of photons\nnearest to the point being shaded. This is part of the interpolation\nstep that joins light paths propagated from light\nsources with rays traced from the camera during rendering,\nand it is but one application of the well-studied k nearest\nneighbours (kNN) problem.\nJensen uses the kd-tree\n,\ndata structure to find these nearest\nphotons. However, solving the kNN problem via kd-trees\nrequires a search that traverses the tree. Even if the tree is\nstored as a heap, traversal still requires random-order memory\naccess and memory to store a stack. More importantly,\na search-path pruning algorithm, based on the data already\nexamined, is required to avoid accessing all data in the tree.\nThis introduces serial dependencies between one memory\nlookup and the next. Consequently, a hardware implementation\nof a kd-tree-based kNN solution would either have high\nlatency, or would require a large cache to avoid such latency.\nIn either case a custom hardware implementation would be\nrequired. These properties motivated us to look at alternatives\nto tree search.\nSince photon mapping is already making an approximation\nby using kNN interpolation, we conjectured that an\napproximate kNN (AkNN) solution should suffice so long\nas visual quality is maintained. In this paper we investigate\na hashing-based AkNN solution in the context of high-c\nThe Eurographics Association 2002.\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\nperformance hardware-based (specifically, programmable\nshader-based) photon mapping. Our major contribution is\nan AkNN algorithm that has bounded query time, bounded\nmemory usage, and high potential for fine-scale parallelism.\nMoreover, our algorithm results in coherent, non-redundant\naccesses to block-oriented memory. The results of one memory\nlookup do not affect subsequent memory lookups, so accesses\ncan take place in parallel within a pipelined memory\nsystem. 
Our algorithm is based on array access, and is\nmore compatible with current texture-mapping capabilities\nthan tree-based algorithms. Furthermore, any photon mapping\nacceleration technique that continues to rely on a form\nof kNN (such as irradiance caching\n) can still benefit from\nour technique.\nIn Section\n2\n, we first review previous work on the kNN\nand the approximate k-nearest neighbour (AkNN) problems.\nSection\n3\ndescribes the context and assumptions of our research\nand illustrates the basic hashing technique used in\nour algorithm. Sections\n4\nand\n5\ndescribe the details of our\nalgorithm. Section\n6\npresents numerical, visual quality, and\nperformance results. Section\n7\ndiscusses the mapping of the\nalgorithm onto a shader-based implementation. Finally, we\nconclude in Section\n8\n.\nPrevious Work\nJensen's book\n25\ncovers essentially all relevant previous work\nleading up to photon mapping. Due to space limitations, we\nwill refer the reader to that book and focus our literature review\non previous approaches to the kNN and AkNN problems\n.\nAny non-trivial algorithm that claims to be able to solve\nthe kNN problem faster than brute-force does so by reducing\nthe number of candidates that have to be examined when\ncomputing the solution set. Algorithms fall into the following\ncategories: recursive spatial subdivision, point location,\nneighbourhood graphs, and hashing.\nAmongst algorithms based on recursive spatial subdivision\n, the kd-tree\n5\nmethod is the approach commonly used\nto solve the kNN problem\n14\n. An advantage of the kd-tree is\nthat if the tree is balanced it can be stored as a heap in a\nsingle array. While it has been shown that kd-trees have optimal\nexpected-time complexity\n6\n, in the worst case finding\nthe k nearest neighbours may require an exhaustive search of\nthe entire data structure via recursive decent. This requires a\nstack the same size as the depth of the tree. During the recursion\n, a choice is made of which subtree to search next based\non a test at each internal node. This introduces a dependency\nbetween one memory access and the next and makes it hard\nto map this algorithm into high-latency pipelined memory\naccesses.\nMuch work has been done to find methods to optimise\nthe kd-tree method of solving the kNN and AkNN problems.\nSee Christensen\n26\n, Vanco et al.\n44\n, Havran\n19\n, and Sample\net al.\n39\n. Many other recursive subdivision-based techniques\nhave also been proposed for the kNN and AkNN problems,\nincluding kd-B-trees\n36\n, BBD-trees\n4\n, BAR-trees\n9\n, Principal-Axis\nTrees\n33\n, the R-tree family of data structures\n27\n, and\nANN-trees\n30\n. Unfortunately, all schemes based on recursive\nsearch over a tree share the same memory dependency problem\nas the kd-tree.\nThe second category of techniques are based on building\nand searching graphs that encode sample-adjacency information\n. The randomised neighbourhood graph approach\n3\nbuilds and searches an approximate local neighbourhood\ngraph. Eppstein et al.\n11\ninvestigated the fundamental properties\nof a nearest neighbour graph. Jaromczyk and Toussaint\nsurveyed data structures and techniques based on Relative\nNeighbourhood Graphs\n23\n. 
Graph-based techniques tend to\nhave the same difficulties as tree-based approaches: searching\na graph also involves stacks or queues, dependent memory\naccesses, and pointer-chasing unsuited to high-latency\npipelined memory access.\nVoronoi diagrams can be used for optimal 1-nearest\nneighbour searches in 2D and 3D\n10\n. This and other point-location\nbased techniques\n17\nfor solving nearest neighbour\nproblems do not need to calculate distances between the\nquery point and the candidates, but do need another data\nstructure (like a BSP tree) to test a query point for inclusion\nin a region.\nHashing approaches to the kNN and AkNN problems\nhave recently been proposed by Indyk et al.\n20\n,\n21\nand Gionis\net al.\n16\n. These techniques have the useful property that\nmulti-level dependent memory lookups are not required. The\nheart of these algorithms are simple hash functions that preserve\nspatial locality, such as the one proposed by Linial\nand Sasson\n31\n, and Gionis et al.\n16\n. We base our technique on\nthe latter. The authors also recognise recent work by Wald\net al.\n45\non real-time global illumination techniques where\na hashing-based photon mapping technique was apparently\nused (but not described in detail).\nNumerous surveys and books\n1\n,\n2\n,\n15\n,\n42\n,\n43\n,\n17\nprovide further\ninformation on this family of problems and data structures\ndeveloped to solve them.\nContext\nWe have developed a novel technique called Block Hashing\n(BH) to solve the approximate kNN (AkNN) problem in the\ncontext of, but not limited to, photon mapping.\nOur algorithm uses hash functions to categorise photons\nby their positions. Then, a kNN query proceeds by deciding\nwhich hash bucket is matched to the query point and retrieving\nthe photons contained inside the hash bucket for analysis\n. One attraction of the hashing approach is that evaluation\nof hash functions takes constant time. In addition, once we\nhave the hash value, accessing data we want in the hash table\ntakes only a single access. These advantages permit us to\nc The Eurographics Association 2002.\n90\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\navoid operations that are serially dependent on one another,\nsuch as those required by kd-trees, and are major stepping\nstones towards a low-latency shader-based implementation.\nOur technique is designed under two assumptions on the\nbehaviour of memory systems in (future) accelerators. First,\nwe assume that memory is allocated in fixed-sized blocks .\nSecond, we assume that access to memory is via burst transfer\nof blocks that are then cached. Under this assumption,\nif any part of a fixed-sized memory block is \"touched\", access\nto the rest of this block will be virtually zero-cost. This\nis typical even of software implementations on modern machines\nwhich rely heavily on caching and burst-mode transfers\nfrom SDRAM or RDRAM. In a hardware implementation\nwith a greater disparity between processing power and\nmemory speed, using fast block transfers and caching is even\nmore important. Due to these benefits, in BH all memory\nused to store photon data is broken into fixed-sized blocks.\n3.1. Locality-Sensitive Hashing\nSince our goal is to solve the kNN problem as efficiently as\npossible in a block-oriented cache-based context, our hashing\ntechnique requires hash functions that preserve spatial\nneighbourhoods. 
These hash functions take points that are close to each other in the domain space and hash them close to each other in hash space. By using such hash functions, photons within the same hash bucket as a query point can be assumed to be close to the query point in the original domain space. Consequently, these photons are good candidates for the kNN search. More than one such scheme is available; we chose to base our technique on the Locality-Sensitive Hashing (LSH) algorithm proposed by Gionis et al. [16], but have added several refinements (which we describe in Section 4).
The hash function in LSH groups one-dimensional real numbers in hash space by their spatial location. It does so by partitioning the domain space and assigning a unique hash value to each partition. Mathematically, let T = {t_i | 0 <= i <= P} be a monotonically increasing sequence of P + 1 thresholds between 0 and 1. Assume t_0 = 0 and t_P = 1, so there are P - 1 degrees of freedom in this sequence. Define a one-dimensional locality-sensitive hash function h_T : [0, 1] -> {0, ..., P - 1} by h_T(t) = i, where t_i <= t < t_{i+1}. In other words, the hash value i can take on P different values, one for each "bucket" defined by the threshold pair [t_i, t_{i+1}). An example is shown in Figure 1.
Figure 1: An example of h_T. The circles and boxes represent values to be hashed, while the vertical lines are the thresholds in T. The boxes lie between t_1 and t_2, thus they are given a hash value of 1.
The function h_T can be interpreted as a monotonic non-uniform quantisation of spatial position, and is characterised by P and the sequence T. It is important to note that h_T gives each partition of the domain space delineated by T equal representation in hash space. Depending on the location of the thresholds, h_T will contract some parts of the domain space and expand other parts. If we rely on only a single hash table to classify a data set, a query point will only hash to a single bucket within this table, and the bucket may represent only a subset of the true neighbourhood we sought. Therefore, multiple hash tables with different thresholds are necessary for the retrieval of a more complete neighbourhood around the query point (see Figure 2).
Figure 2: An example of multiple hash functions classifying a dataset. The long vertical line represents the query value. The union of results from multiple hash tables with different thresholds represents a more complete neighbourhood.
To deal with n-dimensional points, each hash table will have one hash function per dimension. Each hash function generates one hash value per coordinate of the point (see Figure 3). The final hash value is calculated as \sum_{i=0}^{n-1} h_i P^i, where the h_i are the per-coordinate hash values and P is the number of thresholds. If P were a power of two, then this amounts to concatenating the bits. The only reason we use the same number of thresholds for each dimension is simplicity. It is conceivable that a different number of thresholds could be used for each dimension to better adapt to the data.
Figure 3: Using two hash functions to handle a 2D point. Each hash function will be used to hash one coordinate.
LSH is very similar to grid files [34]. However, the grid file was specifically designed to handle dynamic data. Here, we assume that the data is static during the rendering pass. Also, the grid file is more suitable for range searches than it is for solving the kNN problem.
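A minimal Python sketch of the per-dimension hash and its combination into a single bucket index is given below; it assumes the thresholds have already been chosen (Section 4.2 describes how BH derives them from the photon distribution) and is only an illustration of the scheme above, not the paper's hardware-oriented implementation.

```python
import bisect

def h_T(t, thresholds):
    """One-dimensional locality-sensitive hash. thresholds = [t_0 = 0, ..., t_P = 1].
    Returns i such that t_i <= t < t_{i+1}, i.e. a bucket index in {0, ..., P-1}."""
    i = bisect.bisect_right(thresholds, t) - 1
    return min(max(i, 0), len(thresholds) - 2)   # clamp t = 1.0 into the last bucket

def lsh_bucket(point, per_dim_thresholds):
    """Combine the per-coordinate hash values h_i into the index sum_i h_i * P^i."""
    P = len(per_dim_thresholds[0]) - 1           # same number of thresholds per dimension
    index = 0
    for i, (coord, thresholds) in enumerate(zip(point, per_dim_thresholds)):
        index += h_T(coord, thresholds) * (P ** i)
    return index

# Example: one 3D hash table with P = 4 buckets per dimension. Uniform thresholds are
# used here for brevity, whereas BH places them adaptively from the photon positions.
table_thresholds = [[0.0, 0.25, 0.5, 0.75, 1.0]] * 3
print(lsh_bucket((0.1, 0.6, 0.9), table_thresholds))   # a bucket index in {0, ..., 63}
```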
Each hash function will be\nused to hash one coordinate.\nLSH is very similar to grid files\n34\n. However, the grid file\nwas specifically designed to handle dynamic data. Here, we\nassume that the data is static during the rendering pass. Also,\nthe grid file is more suitable for range searches than it is for\nsolving the kNN problem.\n3.2. Block-Oriented Memory Model\nIt has been our philosophy that hardware implementations of\nalgorithms should treat off-chip memory the same way software\nimplementations treat disk: as a relatively slow, \"out-of\n-core\", high-latency, block-oriented storage device. This\nc The Eurographics Association 2002.\n91\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\nanalogy implies that algorithms and data structures designed\nto optimise for disk access are potentially applicable to hardware\ndesign. It also drove us to employ fixed-sized blocks to\nstore the data involved in the kNN search algorithm, which\nare photons in the context of this application.\nIn our prototype software implementation of BH, each\nphoton is stored in a structure similar to Jensen's \"extended\"\nphoton representation\n25\n. As shown in Figure\n4\n, each component\nof the 3D photon location is represented by a 32-bit\nfixed-point number. The unit vectors representing incoming\ndirection ^d and surface normal ^n are quantised to 16-bit\nvalues using Jensen's polar representation. Photon power is\nstored in four channels using sixteen-bit floating point numbers\n. This medium-precision signed representation permits\nother AkNN applications beyond that of photon mapping.\nFour colour channels are also included to better match the\nfour-vectors supported in fragment shaders. For the photon\nmapping application specifically, our technique is compatible\nwith the replacement of the four colour channels with a\nWard RGBE colour representation\n46\n. Likewise, another implementation\nmight use a different representation for the normal\nand incident direction unit vectors.\n|\n32\n|\nx\ny\nz\n^\nd\n^\nn\nc\n1\nc\n2\nc\n3\nc\n4\n| 16 |\nFigure 4: Representation of a photon record.\nThe 32-bit values (x, y, z) denote the position of\na photon and are used as keys. Two quantised\n16-bit unit vectors ^d, ^n and four 16-bit floating\npoint values are carried as data.\nAll photon records are stored in fixed-sized memory\nblocks. BH uses a 64 32-bit-word block size, chosen to\npermit efficient burst-mode transfers over a wide bus to\ntransaction-oriented DRAM. Using a 128-bit wide path to\nDDR SDRAM, for instance, transfer of this block would\ntake eight cycles, not counting the overhead of command\ncycles to specify the operation and the address. Using next-generation\nQDR SDRAM this transfer would take only four\ncycles (or eight on a 64-bit bus, etc.)\nOur photon representation occupies six 32-bit words.\nSince photon records are not permitted to span block boundaries\n, ten photons will fit into a 64-word block with four\nwords left over. Some of this extra space is used in our implementation\nto record how many photons are actually stored in\neach block. For some variants of the data structures we describe\n, this extra space could also be used for flags or pointers\nto other blocks. It might be possible or desirable in other\nimplementations to support more or fewer colour channels,\nor channels with greater or lesser precision, in which case\nsome of our numerical results would change.\nBlock Hashing\nBlock Hashing (BH) contains a preprocessing phase and a\nquery phase. 
The preprocessing phase consists of three steps.\nAfter photons have been traced into the scene, the algorithm\norganises the photons into fixed-sized memory blocks, creates\na set of hash tables, and inserts photon blocks into the\nhash tables.\nIn the second phase, the hash tables will be queried for a\nset of candidate photons from which the k nearest photons\nwill be selected for each point in space to be shaded by the\nrenderer.\n4.1. Organizing Photons into Blocks\nDue to the coherence benefits associated with block-oriented\nmemory access, BH starts by grouping photons and storing\nthem into fixed-sized memory blocks. However, these benefits\nare maximised when the photons within a group are close\ntogether spatially.\nWe chose to use the Hilbert curve\n13\nto help group photons\ntogether. The advantage of the Hilbert curve encoding of position\nis that points mapped near each other on the Hilbert\ncurve are guaranteed to be within a certain distance of each\nother in the original domain\n22\n. Points nearby in the original\ndomain space have a high probability of being nearby on the\ncurve, although there is a non-zero probability of them being\nfar apart on the curve. If we sort photons by their Hilbert\ncurve order before packing them into blocks, then the photons\nin each block will have a high probability of being spatially\ncoherent. Each block then corresponds to an interval of\nthe Hilbert curve, which in turn covers some compact region\nof the domain (see Figure\n7\na). Each region of domain space\nrepresented by the blocks is independent, and regions do not\noverlap.\nBH sorts photons and inserts them into a B\n+\n-tree\n8\nusing\nthe Hilbert curve encoding of the position of each photon as\nthe key. This method of spatially grouping points was first\nproposed by Faloutsos and Rong\n12\nfor a different purpose.\nSince a B\n+\n-tree stores photon records only at leaves, with\na compatible construction the leaf nodes of the B\n+\n-tree can\nserve as the photon blocks used in the later stages of BH.\nOne advantage of using a B\n+\n-tree for sorting is that insertion\ncost is bounded: the tree is always balanced, and in the\nworst case we may have to split h nodes in the tree, when\nthe height of the tree is h. Also, B\n+\n-trees are optimised for\nblock-oriented storage, as we have assumed.\nThe B\n+\n-tree used by BH has index and leaf nodes that\nare between half full to completely full. To minimise the final\nnumber of blocks required to store the photons, the leaf\nnodes can be compacted (see Figure\n5\n.) After the photons\nare sorted and compacted, the resulting photon blocks are\nready to be used by BH, and the B\n+\n-tree index and any leaf\nnodes that are made empty by compaction are discarded. If\nthe complete set of photons is known a priori, the compact\nB\n+\n-tree\n37\nfor static data can be used instead. This data structure\nmaintains full nodes and avoids the extra compaction\nstep.\nc The Eurographics Association 2002.\n92\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\n(b)\nIndex node\nEmpty cell in leaf node\nOccupied cell in leaf node\n(a)\nFigure 5: Compaction of photon blocks. (a) B\n+\n-tree after inserting\nall photons. Many leaf nodes have empty cells. (b) All\nphoton records are compacted in the leaf nodes.\nRegardless, observe that each photon block contains a\nspatially clustered set of photons disjoint from those contained\nin other blocks. 
This is the main result we are after\n; any other data structures that can group photons into\nspatially-coherent groups, such as grid files\n34\n, can be used\nin place of the B\n+\n-tree and space-filling curve.\n4.2. Creating the Hash Tables\nThe hash tables used in BH are based on the LSH scheme described\nin Section\n3.1\n. BH generates L tables in total, serving\nas parallel and complementary indices to the photon data.\nEach table has three hash functions (since photons are classified\nby their 3D positions), and each hash function has P + 1\nthresholds.\nBH employs an adaptive method that generates the thresholds\nbased on the photon positions. For each dimension, a\nhistogram of photon positions is built. Then, the histogram is\nintegrated to obtain a cumulative distribution function (cdf ).\nLastly, stratified samples are taken from the inverse of the cdf\nto obtain threshold locations. The resulting thresholds will\nbe far apart where there are few photons, and close together\nwhere photons are numerous. Ultimately this method attempts\nto have a similar number of photons into each bucket.\nHash tables are stored as a one-dimensional array structure\n, shown in Figure\n6\n. The hash key selects a bucket out of\nthe P\nn\navailable buckets in the hash table. Each bucket refers\nup to B blocks of photons, and has space for a validity flag\nper reference, and storage for a priority value. We defer the\ndiscussion on the choice of P, L and B until Section\n5\n.\nB\nV V V\nV\nPriority\nFigure 6: Hash table bucket\nlayout\n4.3. Inserting Photon Blocks\nIn BH, references to entire photon blocks, rather than individual\nphotons, are inserted into the hash tables. One reason\nfor doing so is to reduce the memory required per bucket.\nAnother, more important, reason is that when merging results\nfrom multiple hash tables (Section\n3.1\n), BH needs\nto compare only block addresses instead of photons when\nweeding out duplicates as each block contains a unique set of\nphotons. This means fewer comparisons have to be made and\nthe individual photons are only accessed once per query, during\npost-processing of the merged candidate set to find the\nk nearest photons. Consequently, the transfer of each photon\nblock through the memory system happens at most once per\nquery. All photons accessed in a block are potentially useful\ncontributions to the candidate set, since photons within a single\nblock are spatially coherent. Due to our memory model\nassumptions, once we have looked at one photon in a block\nit should be relatively inexpensive to look at the rest.\n(a)\n(b)\n(c)\nFigure 7: Block hashing illustrated. (a) Each block corresponds\nto an interval of the Hilbert curve, which in turn covers\nsome compact region of the domain. Consequently, each\nbucket (b) represents all photons (highlighted with squares)\nin each block with at least one photon hashed into it (c).\nEach bucket in a hash table corresponds to a rectangular\nregion in a non-uniform grid as shown in Figure\n7\nb. Each\nblock is inserted into the hash tables once for each photon\nwithin that block, using the position of these photons to create\nthe keys. 
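The hashing machinery that these insertions rely on, the cdf-based thresholds of this section together with the per-dimension hash of Section 3.1, can be sketched as follows. This is a minimal Python illustration: the function names are ours, the 256-bin histogram is an arbitrary choice, and the stratified inverse-cdf sampling shown is only one plausible reading of the construction described above.

import bisect
import numpy as np

def make_thresholds(coords, P):
    """Adaptive thresholds for one dimension: histogram -> cdf -> inverse-cdf samples.
    Returns t_0 = 0, ..., t_P = 1 (coordinates assumed normalised to [0, 1])."""
    hist, edges = np.histogram(coords, bins=256, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / max(1, hist.sum())
    # Stratified probabilities 1/P, ..., (P-1)/P mapped back through the cdf, so
    # thresholds crowd together where photons are dense and spread where they are sparse.
    inner = [float(edges[np.searchsorted(cdf, q)]) for q in (np.arange(1, P) / P)]
    return [0.0] + inner + [1.0]

def h_T(t, thresholds):
    """One-dimensional LSH of Section 3.1: return i such that t_i <= t < t_{i+1}."""
    return min(bisect.bisect_right(thresholds, t) - 1, len(thresholds) - 2)

def hash_key(point, table_thresholds, P):
    """Combine the per-dimension hash values as sum_i h_i * P**i."""
    return sum(h_T(c, T) * (P ** i) for i, (c, T) in enumerate(zip(point, table_thresholds)))

A complete hash table holds three such threshold sequences (one per coordinate), and BH builds L tables with independently generated thresholds.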
Each bucket of the hash table refers to not only\nthe photons that have been hashed into that bucket, but also\nall the other photons that belong to the same blocks as the\nhashed photons (see Figure\n7\nc.)\nSince each photon block is inserted into each hash table\nmultiple times, using different photons as keys, a block\nmay be hashed into multiple buckets in the same hash table\n. Of course, a block should not be inserted into a bucket\nmore than once. More importantly, our technique ensures\nthat each block is inserted into at least one hash table. Orphaned\nblocks are very undesirable since the photons within\nwill never be considered in the subsequent AkNN evaluation\nand will cause a constant error overhead. Hence, our technique\ndoes not navely drop a block that causes a bucket to\noverflow.\nHowever, there may be collisions that cause buckets to\noverflow, especially when a large bucket load factor is chosen\nto give a compact hash table size, or there exists a large\nvariation in photon density (which, of course, is typical in\nthis application). Our algorithm uses two techniques to address\nthis problem. The first technique attempts to insert every\nblock into every hash table, but in different orders on\ndifferent hash tables, such that blocks that appear earlier in\nthe ordering are not favoured for insertion in all tables. BH\nuses a technique similar to disk-striping\n38\n, illustrated by the\nc The Eurographics Association 2002.\n93\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\npseudo code in Figure\n8\n. An example is given in the diagram\nin the same figure.\nfor h from 0 to (number_of_hash_tables-1)\nfor b from 0 to (number_of_blocks-1)\nidx = (h+b) modulo L\ninsert block[b] into hashtable[idx]\nendfor\nendfor\n0\n1\n2\n0\n1\n2\n1\n2\n0\n1\n2\n0\n1\n2\n0\n1\n2\n0\n1\n2\n0\n1\n2\n0\n3\n4\n5\n3\n4\n5\n3\n4\n5\nPhoton Block\nHash Table Bucket\n1st iteration\n2nd iteration\n3rd iteration\nFigure 8: Striping insertion strategy\nThe second technique involves a strategy to deal with\noverflow in a bucket. For each photon block, BH keeps the\ncount of buckets that the block has been hashed into so far.\nWhen a block causes overflow in a bucket, the block in the\nbucket that has the maximum count will be bumped if that\ncount is larger than one, and larger than that of the incoming\nblock. This way we ensure that all blocks are inserted into\nat least one bucket, given adequate hash table sizes, and no\nblock is hashed into an excessive number of buckets.\n4.4. Querying\nA query into the BH data structure proceeds by delegating\nthe query to each of the L hash tables. These parallel accesses\nwill yield as candidates all photon blocks represented\nby buckets that matched the query. The final approximate\nnearest neighbour set comes from scanning the unified candidate\nset for the nearest neighbours to the query point (see\nFigure\n9\n.) Note that unlike kNN algorithms based on hier-archical\ndata structures, where candidates for the kNN set\ntrickle in as the traversal progresses, in BH all candidates are\navailable once the parallel queries are completed. Therefore,\nBH can use algorithms like selection\n29\n(instead of a priority\nqueue) when selecting the k nearest photons.\nEach query will retrieve one bucket from each of the L\nhash tables. 
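A compact Python sketch of the whole lookup is given below. The names are illustrative; it anticipates the accuracy multiplier A and the bucket priority value that are introduced in the next paragraphs, and it assumes each table exposes a lookup method returning the matching bucket.

def query_knn(point, tables, blocks, k, A, dist):
    # One bucket per hash table, best (smallest) priority first.
    buckets = sorted((t.lookup(point) for t in tables), key=lambda b: b.priority)
    seen, candidates = set(), []
    for b in buckets:
        for block_id in b.block_refs:
            if block_id not in seen:                 # duplicates weeded out by block address
                seen.add(block_id)
                candidates.extend(blocks[block_id])  # each block is read at most once per query
        if len(candidates) >= A * k:                 # user accuracy setting caps the scan
            break
    # All candidates are available at once, so a selection (rather than a
    # priority queue) suffices for the final step.
    return sorted(candidates, key=lambda p: dist(p, point))[:k]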
If the application can tolerate elevated inaccuracy\nin return for increased speed of query (for example, to\npre-visualise a software rendering), it may be worthwhile to\nconsider using only a subset of the L candidate sets. Block\nhashing is equipped with a user-specified accuracy setting:\nLet A\nIN be an integer multiplier. The block hashing algorithm\nwill only consider Ak candidate photons in the final\nscan to determine the k nearest photons to a query. Obviously\nthe smaller A is, the fewer photons will be processed\nin the final step; as such, query time is significantly reduced,\nbut with an accuracy cost. Conversely, a higher A will lead\nto a more accurate result, but it will take more time. Experimental\nresults that demonstrate the impact of the choice of\nA will be explored in Section\n6\n.\nQuery point\nMatched point\nData point\n(a)\n(b)\n(c)\nFigure 9: Merging the results from multiple hash tables.\n(a) the query point retrieves different candidates sets from\ndifferent hash tables, (b) the union set of candidates after\nmerging, and (c) the two closest neighbours selected.\nThere needs to be a way to select the buckets from which\nthe Ak candidate photons are obtained. Obviously, we want\nto devise a heuristic to pick the \"best\" candidates. Suppose\nevery bucket in every hash table has a priority given by\n\n=\n|bucket_capacity - #entries - #overflows|\nwhere \"#overflows\" is the number of insertion attempts after\nthe bucket became full. The priority can be pre-computed\nand stored in each bucket of each hash table during the insertion\nphase. The priority of a bucket is smallest when the\nbucket is full but never overflowed. Conversely, when the\nhash bucket is underutilised or overflow has occurred,\n\nwill\nbe larger. If a bucket is underutilised, it is probably too small\nspatially (relative to local sample density). If it has experienced\noverflow, it is likely too large spatially, and covers too\nmany photon block regions.\nDuring each query, BH will sort the L buckets returned\nfrom the hash tables by their priority values, smallest values\nof\n\nfirst. Subsequently, buckets are considered in this order,\none by one, until the required Ak photons are found. In this\nway the more \"useful\" buckets will be considered first.\nChoice of Parameter Values\nBlock Hashing is a scheme that requires several parameters:\nB, the bucket capacity; L, the number of hash tables whose\nresults are merged; and P, the number of thresholds per dimension\n. We would like to determine reasonable values for\nthese parameters as functions of k, the number of nearest\nneighbours sought, and N, the number of photons involved.\nIt is important to realize the implications of these parameters\n. The total number of 32-bit pointers to photon blocks\nis given by LP\n3\nB. Along with the number of thresholds 3LP,\nthis gives the memory overhead required for BH. The upper\nbound for this value is 6N, the number of photons multiplied\nby the six 32-bit words each photon takes up in our\nimplementation. If we allow B to be a fixed constant for now,\nc The Eurographics Association 2002.\n94\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\nthe constraint LP\n3\n+ 3LP\nN arises from the reasonable assumption\nthat we do not want to have more references to\nblocks than there are photons, or more memory used in the\nindex than in the data.\nEmpirically, L = P = ln N has turned out to be a good\nchoice. 
The value ln N remains sub-linear as N increases, and this choice gives a satisfactory index memory overhead ratio. There are a total of B(ln N)^4 block references; at four bytes each, the references require 4B(ln N)^4 bytes. Across the hash tables there are 3LP = 3(ln N)^2 thresholds in total; represented by a 4-byte value each, the thresholds take another 12(ln N)^2 bytes. Next, assuming one photon block can hold ten photons, N photons require N/10 blocks; each block occupies 64 words, so the blocks require 25.6N bytes in total. The total memory required for N photons, each occupying 6 words, is 24N bytes. This gives an overhead ratio of
(4B(ln N)^4 + 12(ln N)^2 + 25.6N − 24N) / 24N.    (1)
The choice of B also depends on the value of k specified by the situation or the user. However, since it is usual in photon mapping that k is known ahead of time, B can be set accordingly. B should be set such that the total number of photons retrieved from the L buckets for each query will be larger than k. Mathematically speaking, each photon block in our algorithm holds ten photons, hence 10LB > k; in particular, 10LB > Ak should also be satisfied. Since we choose L = ln N, rearranging this inequality yields B > Ak/(10 ln N). For example, assuming A = 16, N = 2000000 and k = 50, then B = 6.
If we substitute B back into Equation (1), we obtain the final overhead equation
(4(Ak/10)(ln N)^3 + 12(ln N)^2 + 1.6N) / 24N.    (2)
Figure 10 plots the number of photons versus the memory overhead. For the usual range of photon counts in a photon mapping application, we see that the memory overhead, while relatively large for small numbers of photons, becomes reasonable for larger numbers of photons, and has an asymptote of about 6%. Of course, if we assumed a different block size (cache line size), these results would vary, but the analysis is the same.
Figure 10: Plot of photon count vs. memory overhead ratio (%) incurred by BH, assuming k = 50, with curves for A = 4, 8 and 16.
Results
For BH to be a useful AkNN algorithm, it must have satisfactory algorithmic accuracy. Moreover, in the context of photon mapping, BH must also produce good visual accuracy. This section demonstrates that BH satisfies both requirements, while providing a speed-up in terms of the time it takes to render an image, even in a software implementation.
To measure algorithmic accuracy, our renderer was rigged to use both the kd-tree and BH based photon maps. For each kNN query the result sets were compared using the following metrics:
False negatives: the number of photons incorrectly excluded from the kNN set.
Maximum distance dilation: the ratio of the bounding radii around the neighbours reported by the two algorithms.
Average distance dilation: the ratio of the average distances between the query point and each of the nearest neighbours reported by the two algorithms.
To gauge visual accuracy, we calculate a Caustic RMS Error metric, which compares the screen-space radiance difference between the caustic radiance values obtained from the kd-tree and from BH. A timing-related Time Ratio metric is calculated as the ratio of the time taken for a query into the BH data structure versus that for the kd-tree data structure.
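Returning to the parameter choices, the formulas above can be condensed into a small Python helper. This is our own illustration; it merely reproduces Equation (2) and the worked example with A = 16, N = 2000000, k = 50.

from math import log

def choose_parameters(N, k, A):
    # Empirically L = P = ln N (rounded to an integer here).
    L = P = max(1, round(log(N)))
    # Smallest integer with B > A*k / (10 ln N), as derived in the text.
    B = int(A * k / (10 * log(N))) + 1
    return L, P, B

def overhead_ratio(N, k, A):
    # Equation (2): (4*(A*k/10)*(ln N)**3 + 12*(ln N)**2 + 1.6*N) / (24*N)
    return (4 * (A * k / 10) * log(N) ** 3 + 12 * log(N) ** 2 + 1.6 * N) / (24 * N)

print(choose_parameters(2_000_000, 50, 16))         # -> (15, 15, 6), matching the B = 6 example
print(round(overhead_ratio(2_000_000, 50, 16), 3))  # -> 0.087, approaching the ~6% asymptote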
Obviously, as A increases, the time required for photon mapping using BH approaches that for kd-tree based photon mapping.
Our first test scene, shown in Figure 11, with numerical results in Figure 12, consists of a highly specular ring placed on top of a plane with a Modified Phong [28] reflectance model. This scene tests the ability of BH to handle a caustic of varying density, and a caustic that has been cast onto a non-Lambertian surface.
Figure 13 shows a second scene consisting of the Venus bust, with a highly specular ring placed around the neck of Venus. Figure 14 shows the numerical statistics of this test scene. The ring casts a cardioid caustic onto the (non-planar) chest of the Venus. This scene demonstrates a caustic on a highly tessellated curved surface. Global illumination is also used for both scenes; however, the times given are only for the query time of the caustic maps.
The general trend to notice is that for extremely low accuracy (A) settings the visual and algorithmic performance of BH is not very good. The candidate set is simply not large enough in these cases. However, as A increases, these performance indicators drop to acceptable levels very quickly, especially for values of A between 2 and 8. After A = 8 diminishing returns set in, and the increase in accuracy incurs a significant increase in the cost of the query time required. This numerical error comparison is parallelled by the visual comparison of the images: the images rendered with A = 8 and A = 16 are virtually indistinguishable. These results suggest that intermediate values of A, between 8 and 10, should be used as a good compromise between query speed and solution accuracy.
Figure 11: "Ring" images: (a) kd-tree, (b) BH with A = 4, (c) BH with A = 16, (d) BH with A = 8.
Figure 12: "Ring" numerical statistics: false negatives, maximum and average radius dilation, radiance RMS error and time ratio, each plotted against the accuracy setting A.
It is apparent from the query time ratio plots that there exists a close-to-linear relationship between the value of A and the time required for a query into BH. This is consistent with the design of the A parameter; it corresponds directly to the number of photons accessed and processed for each query.
Another important observation that can be made from the visual comparisons is that images computed with greater approximation look darker. This is because the density estimate is based on the inverse square of the radius of the sphere enclosing the k nearest neighbours. The approximate radius is always larger than the true radius.
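A toy calculation (our illustration, consistent with the inverse-square dependence just described but not the renderer's actual estimator) shows the size of this darkening effect.

from math import pi

def toy_density_estimate(total_power, radius):
    # Gathered power divided by an area that grows as the square of the radius,
    # so an overestimated radius biases the estimate downward.
    return total_power / (pi * radius ** 2)

print(toy_density_estimate(1.0, 1.2) / toy_density_estimate(1.0, 1.0))   # ~0.69: visibly darker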
The overestimation of the radius is an inherent problem with any approximate solution to the kNN problem, and indeed even with the exact kNN density estimator: as k goes to infinity, the kNN density estimator does converge on the true density, but always from below.
Figure 13: "Venus with Ring" images: (a) kd-tree, (b) BH with A = 16, (c) BH with A = 8, (d) BH with A = 4.
Figure 14: "Venus with Ring" numerical statistics: false negatives, maximum and average radius dilation, radiance RMS error and time ratio, each plotted against the accuracy setting A.
Hardware Implementation
There are two ways to approach a hardware implementation of an algorithm in a hardware accelerator: with a custom hardware implementation, or as an extension or exploitation of current hardware support. While there would be certain advantages in a full custom hardware implementation of BH, this would probably lead to a large chunk of hardware with low utilisation rates. Although there are many potential applications of AkNN search beyond photon mapping (we list several in the conclusions), it seems more reasonable to consider first whether BH can be implemented using current hardware and programmable shader features, and if not, what the smallest incremental changes would be. We have concluded that BH, while not quite implementable on today's graphics hardware, should be implementable in the near future.
We consider only the lookup phase here, since the preprocessing would indeed require some custom hardware support, but support which perhaps could be shared with other useful features. In the lookup phase, (1) we compute hash keys, (2) look up buckets in multiple hash tables, (3) merge and remove duplicates from the list of retrieved blocks, optionally sorting by priority, (4) retrieve the photon records stored in these blocks, and (5) process the photons. Steps (1) and (5) could be performed with current shader capabilities, although the ability to loop would be useful for the last step to avoid replicating the code to process each photon. Computing the hash function amounts to doing a number of comparisons, then adding up the zero-one results. This can be done in linear time with a relatively small number of instructions using the proposed DX9 fragment shader instruction set. If conditional assignment and array lookups into the register file are supported, this could be done in logarithmic time using binary search.
Steps (2) and (4) amount to table lookups and can be implemented as nearest-neighbour texture-mapping operations with suitable encoding of the data. For instance, the hash tables might be supported with one texture map giving the priority and number of valid entries in each bucket, while another texture map or set of texture maps might give the block references, represented by texture coordinates pointing into another set of texture maps holding the photon records.
Step (3) is difficult to do efficiently without true conditionals and conditional looping. Sorting is not the problem, as it could be done with conditional assignment.
The problem\nis that removal of a duplicate block reduces the number\nof blocks in the candidate set. We would like in this case to\navoid making redundant photon lookups and executing the\ninstructions to process them. Without true conditionals, an\ninefficient work-around is to make these redundant texture\naccesses and process the redundant photons anyhow, but discard\ntheir contribution by multiplying them by zero.\nWe have not yet attempted an implementation of BH on an\nactual accelerator. Without looping, current accelerators do\nnot permit nearly enough instructions in shaders to process\nk photons for adequate density estimation. However, we feel\nthat it might be feasible to implement our algorithm on a\nnext-generation shader unit using the \"multiply redundant\nphotons by zero\" approach, if looping (a constant number of\ntimes) were supported at the fragment level.\nWe expect that the generation that follows DX9-class accelerators\nwill probably have true conditional execution and\nlooping, in which case implementation of BH will be both\nstraightforward and efficient, and will not require any additional\nhardware or special shader instructions. It will also\nonly require two stages of conditional texture lookup, and\nlookups in each stage can be performed in parallel. In comparison\n, it would be nearly impossible to implement a tree-based\nsearch algorithm on said hardware due to the lack of a\nstack and the large number of dependent lookups that would\nbe required. With a sufficiently powerful shading unit, of\ncourse, we could implement any algorithm we wanted, but\nBH makes fewer demands than a tree-based algorithm does.\nConclusion and Future Work\nWe have presented an efficient, scalable, coherent and\nhighly parallelisable AkNN scheme suitable for the high-performance\nimplementation of photon mapping.\nThe coherent memory access patterns of BH lead to im-proved\nperformance even for a software implementation.\nHowever, in the near future we plan to implement the lookup\nphase of BH on an accelerator. Accelerator capabilities are\nnot quite to the point where they can support this algorithm,\nbut they are very close. What is missing is some form of\ntrue run-time conditional execution and looping, as well as\ngreater capacity in terms of numbers of instructions. However\n, unlike tree-based algorithms, block hashing requires\nonly bounded execution time and memory.\nAn accelerator-based implementation would be most interesting\nif it is done in a way that permits other applications\nto make use of the fast AkNN capability it would provide.\nAkNN has many potential applications in graphics beyond\nphoton maps. For rendering, it could also be used for sparse\ndata interpolation (with many applications: visualisation of\nsparse volume data, BRDF and irradiance volume representation\n, and other sampled functions), sparse and multi-resolution\ntextures, procedural texture generation (specifically\n, Worley's texture\n47\nfunctions), direct ray-tracing of\npoint-based objects\n40\n, and gap-filling in forward projection\npoint-based rendering\n48\n. AkNN could also potentially be\nused for non-rendering purposes: collision detection, surface\nreconstruction, and physical simulation (for interacting\nparticle systems). 
Unlike the case with tree-based algorithms\n, we feel that it will be feasible to implement BH as\na shader subroutine in the near future, which may make it a\nkey component in many potential applications of advanced\nprogrammable graphics accelerators.\nFor a more detailed description of the block hashing algorithm\n, please refer to the author's technical report\n32\n.\nAcknowledgements\nThis research was funded by grants from the National\nScience and Engineering Research Council of Canada\n(NSERC), the Centre for Information Technology of Ontario\n(CITO), the Canadian Foundation for Innovation (CFI), the\nOntario Innovation Trust (OIT), and the Bell University Labs\ninitiative.\n\nReferences\n1.\nP. K. Agarwal. Range Searching. In J. E. Goodman and\nJ. O'Rourke, editors, Handbook of Discrete and Computational\nGeometry. CRC Press, July 1997.\n2\n2.\nP. K. Agarwal and J. Erickson. Geometric range searching\nand its relatives. Advances in Discrete and Computational\nGeometry, 23:156, 1999.\n2\nc The Eurographics Association 2002.\n97\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\n3.\nS. Arya and D. M. Mount. Approximate Nearest Neighbor\nQueries in Fixed Dimensions. In Proc. ACM-SIAM\nSODA, 1993.\n2\n4.\nS. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman,\nand A. Y. Wu. An Optimal Algorithm for Approximate\nNearest Neighbor Searching. In Proc. ACM-SIAM\nSODA, pages 573582, 1994.\n2\n5.\nJ. L. Bentley.\nMultidimensional binary search trees\nused for associative searching. Communications of the\nACM, 18(9), September 1975.\n1\n,\n2\n6.\nJ. L. Bentley, B. W. Weide, and A. C. Chow. Optimal\nExpected-Time Algorithms for Closest Point Problems.\nACM TOMS, 6(4), December 1980.\n1\n,\n2\n7.\nPer H. Christensen. Faster Photon Map Global Illumination\n. Journal of Graphics Tools, 4(3):110, 1999.\n2\n8.\nD. Comer. The Ubiquitous B-Tree. ACM Computing\nSurveys, 11(2):121137, June 1979.\n4\n9.\nC. A. Duncan, M. T. Goodrich, and S. G. Kobourov.\nBalanced aspect ratio trees: Combining the advantages\nof k-d trees and octrees. In Proc. ACM-SIAM SODA,\nvolume 10, pages 300309, 1999.\n2\n10. H. Edelsbrunner. Algorithms in Combinatorial Geometry\n. Springer-Verlag, 1987.\n2\n11. D. Eppstein, M. S. Paterson, and F. F. Yao. On nearest\nneighbor graphs. Discrete & Computational Geometry,\n17(3):263282, April 1997.\n2\n12. C. Faloutsos and Y. Rong.\nDOT: A Spatial Access\nMethod Using Fractals. In Proc. 7th Int. Conf. on Data\nEngineering,, pages 152159, Kobe, Japan, 1991.\n4\n13. C. Faloutsos and S. Roseman. Fractals for Secondary\nKey Retrieval. In Proc. 8th ACM PODS, pages 247\n252, Philadelphia, PA, 1989.\n4\n14. J. H. Freidman, J. L. Bentley, and R. A. Finkel. An Algorithm\nfor Finding Best Matches in Logarithmic Expected\nTime. ACM TOMS, 3(3):209226, 1977.\n2\n15. V. Gaede and O. Gnther.\nMultidimensional access\nmethods.\nACM Computing Surveys (CSUR),\n30(2):170231, 1998.\n2\n16. A. Gionis, P. Indyk, and R. Motwani. Similarity Search\nin High Dimensions via Hashing. In Proc. VLDB, pages\n518529, 1999.\n2\n,\n3\n17. J. E. Goodman and J. O'Rourke, editors. Handbook\nof Discrete and Computational Geometry. CRC Press,\nJuly 1997. ISBN: 0849385245.\n2\n18. G. Greger, P. Shirley, P. Hubbard, and D. Greenberg.\nThe irradiance volume.\nIEEE CG&A, 18(2):3243,\n1998.\n1\n19. V. Havran. Analysis of Cache Sensitive Representation\nfor Binary Space Partitioning Trees. Informatica,\n23(3):203210, May 2000.\n2\n20. P. Indyk and R. Motwani. 
Approximate Nearest Neighbors\n: Towards Removing the Curse of Dimensionality.\nIn Proc. ACM STOC, pages 604613, 1998.\n2\n21. P. Indyk, R. Motwani, P. Raghavan, and S. Vem-pala\n. Locality-Preserving Hashing in Multidimensional\nSpaces. In Proc. ACM STOC, pages 618625, 1997.\n2\n22. H. V. Jagadish. Linear clustering of objects with multiple\nattributes. In Proc. Acm-sigmod, pages 332342,\nMay 1990.\n4\n23. J. W. Jaromczyk and G. T. Toussaint. Relative Neighborhood\nGraphs and Their Relatives.\nProc. IEEE,\n80(9):15021517, September 1992.\n2\n24. H. W. Jensen. Rendering Caustics on Non-Lambertian\nSurfaces.\nComputer Graphics Forum, 16(1):5764,\n1997. ISSN 0167-7055.\n1\n25. H. W. Jensen. Realistic Image Synthesis Using Photon\nMapping. A.K. Peters, 2001.\n1\n,\n2\n,\n4\n26. H. W. Jensen, F. Suykens, and P. H. Christensen. A\nPractical Guide to Global Illumination using Photon\nMapping. In SIGGRAPH 2001 Course Notes, number\n38. ACM, August 2001.\n2\n27. J. K. P. Kuan and P. H. Lewis. Fast k Nearest Neighbour\nSearch for R-tree Family. In Proc. of First Int.\nConf. on Information, Communication and Signal Processing\n, pages 924928, Singapore, 1997.\n2\n28. E. P. Lafortune and Y. D. Willems. Using the Modified\nPhong BRDF for Physically Based Rendering. Technical\nReport CW197, Department of Computer Science,\nK.U.Leuven, November 1994.\n7\n29. C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction\nto Algorithms. MIT Press, 2001.\n6\n30. K.-I. Lin and C. Yang. The ANN-Tree: An Index for\nEfficient Approximate Nearest-Neighbour Search. In\nConf. on Database Systems for Advanced Applications,\n2001.\n2\n31. N. Linial and O. Sasson. Non-Expansive Hashing. In\nProc. acm stoc, pages 509518, 1996.\n2\n32. V. Ma.\nLow Latency Photon Mapping using Block\nHashing.\nTechnical Report CS-2002-15, School of\nComputer Science, University of Waterloo, 2002.\n9\n33. J. McNames.\nA Fast Nearest-Neighbor Algorithm\nBased on a Principal Axis Search Tree. IEEE Transactions\non Pattern Analysis and Machine Intelligence,\n23(9):964976, 2001.\n2\n34. J. Nievergelt, H. Hinterberger, and K. C. Sevcik. The\nGrid File: an adaptable, symmetric multikey file structure\n. ACM TODS, 9(1):3871, March 1984.\n3\n,\n5\n35. T. J. Purcell, I. Buck, W. R. Mark, and P. Hanrahan.\nRay Tracing on Programmable Graphics Hardware. In\nto appear in Proc. SIGGRAPH, 2002.\n1\n36. J. T. Robinson. The K-D-B-tree: A Search Structure\nfor Large Multidimensional Dynamic Indexes. In Proc.\nacm sigmod, pages 1018, 1981.\n2\n37. Arnold L. Rosenberg and Lawrence Snyder. Time- and\nspace-optimality in b-trees. ACM TODS, 6(1):174193,\n1981.\n4\n38. K. Salem and H. Garcia-Molina. Disk striping. In IEEE\nICDE, pages 336342, 1986.\n5\nc The Eurographics Association 2002.\n98\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\n39. N. Sample, M. Haines, M. Arnold, and T. Purcell. Optimizing\nSearch Strategies in kd-Trees. May 2001.\n2\n40. G. Schaufler and H. W. Jensen.\nRay Tracing Point\nSampled Geometry. Rendering Techniques 2000, pages\n319328, June 2000.\n9\n41. J. Schmittler, I. Wald, and P. Slusallek. SaarCOR - A\nHardware Architecture for Ray Tracing. In to appear\nat EUROGRAPHICS Graphics Hardware, 2002.\n1\n42. M. Smid.\nClosest-Point Problems in Computational\nGeometry. In J. R. Sack and J. Urrutia, editors, Handbook\non Computational Geometry. Elsevier Science,\nAmsterdam, North Holland, 2000.\n2\n43. P. Tsaparas. Nearest neighbor search in multidimen-sional\nspaces.\nQualifying Depth Oral Report 319-02\n, Dept. 
of Computer Science, University of Toronto,\n1999.\n2\n44. M. Vanco, G. Brunnett, and T. Schreiber. A Hashing\nStrategy for Efficient k-Nearest Neighbors Computation\n. In Computer Graphics International, pages 120\n128. IEEE, June 1999.\n2\n45. I. Wald, T. Kollig, C. Benthin, A. Keller, and\nP. Slusallek. Interactive global illumination. Technical\nreport, Computer Graphics Group, Saarland University,\n2002. to be published at EUROGRAPHICS Workshop\non Rendering 2002.\n2\n46. G. Ward. Real Pixels. In James Arvo, editor, Graphics\nGems II, pages 8083. Academic Press, 1991.\n4\n47. Steven Worley. A cellular texture basis function. In\nProc. SIGGRAPH 1996, pages 291294. ACM Press,\n1996.\n9\n48. M. Zwicker, H. Pfister, J. van Baar, and M. Gross. Surface\nSplatting. Proc. SIGGRAPH 2001, pages 371378,\n2001.\n9\nc The Eurographics Association 2002.\n99\nMa and McCool / Low Latency Photon Mapping Using Block Hashing\n(a) kd-tree\n(b) BH, A=16\n(c) BH, A=8\n(d) BH, A=4\nFigure 13: \"Ring\"\n(a) kd-tree\n(b) BH, A=16\n(c) BH, A=8\n(d) BH, A=4\nFigure 14: \"Venus with Ring\"\nc The Eurographics Association 2002.\n158", "keywords": "photon mapping;block hashing (BH);hashing techniques;AkNN;kNN;accelerator"} {"name": "131", "title": "Lower Bounds & Competitive Algorithms for Online Scheduling of Unit-Size Tasks to Related Machines", "abstract": "In this paper we study the problem of assigning unit-size tasks to related machines when only limited online information is provided to each task. This is a general framework whose special cases are the classical multiple-choice games for the assignment of unit-size tasks to identical machines. The latter case was the subject of intensive research for the last decade. The problem is intriguing in the sense that the natural extensions of the greedy oblivious schedulers, which are known to achieve near-optimal performance in the case of identical machines, are proved to perform quite poorly in the case of the related machines. In this work we present a rather surprising lower bound stating that any oblivious scheduler that assigns an arbitrary number of tasks to n related machines would need log n polls of machine loads per task, in order to achieve a constant competitive ratio versus the optimum offline assignment of the same input sequence to these machines . On the other hand, we prove that the missing information for an oblivious scheduler to perform almost optimally , is the amount of tasks to be inserted into the system. In particular, we provide an oblivious scheduler that only uses O(loglog n) polls, along with the additional information of the size of the input sequence, in order to achieve a constant competitive ratio vs. the optimum offline assignment . The philosophy of this scheduler is based on an interesting exploitation of the slowfit concept ([1, 5, 3]; for a survey see [6, 9, 16]) for the assignment of the tasks to the related machines despite the restrictions on the provided online information, in combination with a layered induction argument for bounding the tails of the number of tasks passing from slower to faster machines. 
We finally use this oblivious scheduler as the core of an adaptive scheduler that does not demand the knowledge of the input sequence and yet achieves almost the same performance.", "fulltext": "INTRODUCTION\nThe problem of the Online Assignment of Tasks to Related\nMachines is defined as follows: there are\nn machines\npossibly of different speeds, that are determined by a speed\nvector c\n, and an input sequence\nof m independent tasks to be assigned to these machines.\nThe tasks arrive sequentially, along with their associated\nweights (positive reals) and have to be assigned immediately\nand uniquely to the machines of the system. The size\nof the input sequence as well as the weights of the tasks\nare determined by an oblivious adversary (denoted here by\nADV). Each task has to be assigned upon its arrival to one\nof the machines, using the following information:\n(possibly a portion of) the online information of current\nstatus of the system,\nthe offline information of the machine speeds, and\nits own weight.\nThe tasks are considered to be of infinite duration (permanent\ntasks) and no preemption is allowed. The cost of\nan online scheduler\nALG for the assignment of an input sequence\nof tasks (denoted by ALG()) is the maximum load\neventually appearing in the system. In case that a random-ized\nscheduler is taken into account, then the cost of the\nscheduler is the expectation of the corresponding random\nvariable. The quality of an online scheduler is compared vs.\nthe optimum offline assignment of the same input sequence\nto the n machines. We denote the optimum offline cost for\nby ADV(). That is, we consider the competitive ratio\n(or performance guarantee) to be the quality measure, (eg,\nsee [6]):\nDefinition\n1.1. An online scheduler\nALG is said to achieve\na competitive ratio of parameters (a, ), if for any\n124\ninput sequence the relation connecting its own cost ALG()\nand the optimum offline cost of\nADV, are related by\nALG() a ADV() + .\nALG is strictly a-competitive if\n, ALG() a ADV().\nIn this work we study the consequences of providing only\nsome portion of the online information to a scheduler. That\nis, we focus our interest on the case where each task is capable\nof checking the online status only by a (small wrt n)\nnumber d of polls from the n machines. In this case, the\nobjective is to determine the trade-off between the number\nof polls that are available to each of the tasks and the performance\nguarantee of the online scheduler, or equivalently,\nto determine the minimum number of polls per task so that\na strictly constant competitive ratio is achieved.\nAdditionally, we consider the case of unit-size tasks that\nare assigned to related machines. Thus, each task t [m]\nhas to be assigned to a machine host(t) [n] using the following\ninformation that is provided to it: the current loads\nof d suitably chosen machines (the kind of the \"suitable\"\nselection is one of the basic elements of a scheduler and will\nbe called the polling strategy from now on) and an assignment\nstrategy that determines the host of t among\nthese d selected candidates on behalf of t.\nIn what follows we shall consider homogeneous schedulers\n, ie, schedulers that apply exactly the same protocol on\nall the tasks that are inserted into the system. 
This choice\nis justified by the fact that no task is allowed to have access\nto knowledge concerning previous or forthcoming choices of\nother tasks in the system, except only for the current loads\nof those machines that have been chosen to be its candidate\nhosts. Additionally, we shall use the terms (capacitated)\nbins instead of (related) machines and (identical) balls\ninstead of (unit-size) tasks interchangeably, due to the profound\nanalogy of the problem under consideration with the\ncorresponding Balls & Bins problem.\n1.1\nPolling Strategies\nThe way a scheduler\nALG lets each newly inserted task\nchoose its d candidate hosts is called a polling strategy\n(PS). We call the strategies that poll candidate machines\nhomogeneously for all the inserted tasks of the same size,\nhomogeneous polling strategies (HPS). In the present\nwork we consider the tasks to be indistinguishable: Each\ntask upon its arrival knows only the loads of the machines\nthat it polls, along with the speed (or equiv. capacity wrt\nbins) vector c of the system.\nThis is why we focus our\ninterest in schedulers belonging to HPS. Depending on the\ndependencies of the polls that are taken on behalf of a task,\nwe classify the polling strategies as follows:\nOblivious polling strategies(HOPS)\nIn this case we consider that the polling strategy on behalf\nof a newly inserted task t consists of an independent (from\nother tasks) choice of a d-tuple from [n]\nd\naccording to a fixed\nprobability distribution f : [n]\nd\n[0, 1]. This probability\ndistribution may only depend on the common offline information\nprovided to each of the tasks. It should be clear\nthat any kind of d independent polls (with or without replacement\n) on behalf of each task, falls into this family of\npolling strategies. Thus the whole polling strategy is the sequence\nof m d-tuples chosen independently (using the same\nprobability distribution f ) on behalf of the m tasks that\nare to be inserted into the system. Clearly for any polling\nstrategy belonging to HOPS, the d random polls on behalf\nof each of the m tasks could have been fixed prior to the\nstart of the assignments.\nAdaptive polling strategies (HAPS)\nIn this case the i\nth\npoll on behalf of ball t [m] is allowed\nto exploit the information gained by the i - 1 previous polls\non behalf of the same ball. That is, unlike the case of HOPS\nwhere the choice of d candidates of a task was oblivious to\nthe current system state, now the polling strategy is allowed\nto direct the next poll to specific machines of the system\naccording to the outcome of the polls up to this point. In\nthis case all the polls on behalf of each task have to be taken\nat runtime, upon the arrival of the task.\nRemark:\nIt is commented that this kind of polling strategies\nare not actually helpful in the case of identical machines,\nwhere HOPS schedulers achieve asymptotically optimal performance\n(see [18]). Nevertheless, we prove here that this\nis not the case for the related machines. It will be shown\nthat oblivious strategies perform rather poorly in this setting\n, while HAPS schedulers achieve actually asymptotically\noptimal performance.\n1.2\nAssignment Strategies\nHaving chosen the d-size set of candidate hosts for task\nt [m], the next thing is to assign this task to one of these\nmachines given their current loads and possibly exploiting\nthe knowledge on the way that they were selected. We call\nthis procedure the assignment strategy. 
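As a minimal illustration of this framework (our own sketch, not a scheduler analysed in this paper), the following Python skeleton draws d polls independently and uniformly at random, the simplest HOPS strategy, and then applies a greedy assignment rule; both ingredients can be replaced independently.

import random

def greedy_rule(polled, work, caps, w):
    # One natural assignment rule: send the task where the resulting load is smallest.
    return min(polled, key=lambda i: (work[i] + w) / caps[i])

def run_game(caps, weights, d, rule, rng=random):
    # Oblivious (HOPS-style) polling: here simply d polls chosen iur from [n].
    n = len(caps)
    work = [0.0] * n
    for w in weights:
        polled = [rng.randrange(n) for _ in range(d)]
        host = rule(polled, work, caps, w)
        work[host] += w
    return max(work[i] / caps[i] for i in range(n))   # cost: the maximum load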
The significant\nquestion that arises here is the following: Given the polling\nstrategy adopted and the knowledge that is acquired at runtime\nby the polled d-tuple on behalf of a task t [m], which\nwould be the optimal assignment strategy for this task so that\nthe eventual maximum load in the system be minimized?\nIn the Unit Size Tasks-Identical Machines case, when\neach of the d polls is chosen iur (with replacement) from [n],\nAzar et al. ([2]) show that the best assignment strategy\nis the minimum load rule and requires\nO(log n) polls per\ntask for a strictly constant competitive ratio. Consequently\nV\nocking ([18]) has suggested the always go left strategy\n, which (in combination with a properly chosen oblivious\npolling strategy) only requires a total number of\nO(loglog n)\nin order to achieve a constant competitive ratio. In the same\nwork it was also shown that one should not expect much by\nexploiting possible dependencies of the polls in the case of\nunit-size tasks that are placed into identical machines, since\nthe load of the fullest machine is roughly the same as the one\nachieved in the case of non-uniform but independent polls\nusing the always go left rule.\nNevertheless, things are quite different in the Related\nMachines case: we show by our lower bound (section 3)\nthat even if a scheduler\nALG considers any oblivious polling\nstrategy and the best possible assignment strategy,\nALG\nhas a strict competitive ratio of at least\n2d\nn\n4\nd-2\n, where d is\nthe number of oblivious polls per task. This implies that\nin the case of the related machines there is still much space\nfor the adaptive polling strategies until the lower bound of\n(loglog n) polls per task is matched.\n125\n1.3\nRelated Work\nIn the case of assigning unit-size tasks to identical machines\n, there has been a lot of intensive research during the\nlast decade. If each task is capable of viewing the whole\nstatus of the system upon its arrival (we call this the Full\nInformation case), then Graham's greedy algorithm assures\na competitive ratio that asymptotically tends to 2\n1\nn\n([6]).\nNevertheless, when the tasks are granted only a limited number\nof polls, things are much more complicated: In the case\nof unit-size tasks and a single poll per task, the result of\nGonnet [10] has proved that for m n the maximum load\nis (1 + o(1))\nln\nn\nlnln\nn\nwhen the poll of each task is chosen iur\nfrom the n machines, whp.\n1\nIn [15] an explicit form for the\nexpected maximum load is given for all combinations of n\nand m. From this work it easily seen that for m n ln n, the\nmaximum load is\nm\nn\n+ (\nm ln n/n), which implies that by\nmeans of competitive ratio, m n is actually the hardest\ninstance.\nIn the case of d 2 polls per task, a bunch of new techniques\nhad to be applied for the analysis of such schedulers.\nThe main tools used in the literature for this problem have\nbeen the layered induction, the witness tree argument and\nthe method of fluid models (a comprehensive presentation\nof these techniques may be found in the very good survey\nof Mitzenmacher et al. [14]). In the seminal paper of Azar\nBroder Karlin and Upfal [2] it was proved that the proposed\nscheduler abku that chooses each of the polls of a task iur\nfrom [n] and then assigns the task to the candidate machine\nof minimum load, achieves a maximum load that is at\nmost\nm\nn\n+\nlnln\nn\nln\nd\n(1). 
This implies a strictly O\nlnln\nn\nln\nd\ncompetitive\nratio, or equivalently, at least\nO(ln n) polls per\ntask would be necessary in order to achieve a strictly constant\ncompetitive ratio. In [18] the always go left algorithm\nwas proposed, which assures a maximum load of\nat most\nm\nn\n+\nlnln\nn\nd ln 2\n(1) and thus only needs an amount\nof\nO(loglog n) polls per task in order to achieve a strictly\nconstant competitive ratio. In addition it was shown that\nthis is the best possible that one may hope for in the case\nof assigning unit-size tasks to identical machines with only\nd (either oblivious or adaptive) polls per task.\nThe Online Assignment of Tasks to Related Machines\nproblem has been thoroughly studied during the past years\nfor the Full Information case (eg, see chapter 12 in [6]). In\nparticular, it has been shown that a strictly (small) constant\ncompetitive ratio can be achieved using the slowfit-like algorithms\nthat are based on the idea of exploiting the least\nfavourable machines (this idea first appeared in [17]). The\ncase of Limited Information has attracted little attention\nup to this point: some recent works ([12, 13, 8]) study the\ncase of each task having a single poll, for its assignment\nto one of the (possibly related) machines when the probability\ndistributions of the tasks comprise a Nash Equilibrium\n. For example, in [8] it was shown that in the Related\nMachines case a coordination ratio (ie, the ratio between the\nworst possible expected maximum load among Nash Equilibria\nover the offline optimum) of\nO\nlog\nn\nlogloglog\nn\n. However,\nwhen all the task weights are equal then it was shown by\nMavronicolas and Spirakis [13] that the coordination ratio\n1\nA probabilistic event A is said to hold with high probability\n(whp) if for some arbitrarily chosen constant > 0,\nIP[A] 1 - n\n\n.\nis\nO\nlog\nn\nloglog\nn\n. As for the case of d > 1 in the Related\nMachines problem, up to the author's knowledge this is the\nfirst time that this problem is dealt with.\n1.4\nNew results\nIn this work we show that any HOPS scheduler requires at\nleast\nO\nlog\nn\nloglog\nn\npolls in order to achieve a strictly constant\ncompetitive ratio vs. an oblivious adversary. The key point\nin this lower bound argument is the construction of a system\nof d+1 groups of machines running at the same speed within\neach group, while the machine speeds (comparing machines\nof consecutive groups) fall by a fixed factor and on the other\nhand the cumulative capacities of the groups are raised by\nthe same factor. Then it is intuitively clear that any HOPS\nscheduler cannot keep track of the current status within each\nof these d + 1 groups while having only d polls per new task,\nand thus it will have to pay the cost of misplacing balls in\nat least one of these groups. 
More specifically, we show the\nfollowing lower bound:\nTheorem\n1.\nd 1, the competitive ratio of any\nd-hops scheduler is at least\n2d\nn\n4\nd-2\n.\nThen we propose a new d-hops scheduler OBL\n\nwhich,\nif it is fortified with the additional knowledge of the total\nnumber of tasks to be inserted, then it achieves the following\nupper-bound:\nTheorem 2.\nLet lnln n - lnlnln n > d 2 and suppose\nthat the size of the input sequence is given as offline\ninformation.If\nOBL\n\nprovides each task with (at most) 2d\npolls, then it has a strict competitive ratio that drops double-exponentially\nwith d.In particular the cost of OBL\n\nis with\nhigh probability at most\nOBL\n\n(m) (1 + o(1))8\nn\nd2\nd+3\n1\n/(2\nd+1\n-1)\n+ 1\nADV(m)\nIt is commented that all the schedulers for the Identical\nMachines-Limited Information case up to now used minimum\nload\nas the profound assignment rule. On the other\nhand,\nOBL\n\nwas inspired by the slowfit approaches for\nthe Related Machines-Full Information problem and the\nfact that a greedy scheduler behaves badly in that case.\nUp to the author's knowledge, the idea of using the slow-est\nmachine possible first appeared in [17]. Additionally, a\nlayered induction argument is employed for bounding the\namount of tasks that flow from the slower to the faster machines\nin the system.\n2\nThis then allows the use of relatively\nsimple arguments for bounding the maximum load\nof the tasks that end up in a small fraction of the system\nthat consists of the fastest machines. Clearly this upper\nbound is near-optimal (up to a multiplicative constant),\nsince it matches the (loglog n) lower bound of the Unit\nSize Tasks-Identical Machines problem ([18]) which is a\nsubcase of our setting.\nFinally we propose a haps scheduler (\nADAPT ) that combines\nthe previous hops scheduler with a classical guessing\n2\nNote that this does not imply preemption of tasks which is\nnot allowed in our setting, but rather the event that a task\nhits slower machines that are already overloaded and thus\nhas to assign itself to a faster machine.\n126\nargument for the cost of\nADV and assures a cost roughly 5\ntimes the cost of\nOBL\n\n:\nTheorem 3.\nFor any input sequence of identical tasks\nthat have to be assigned to n related machines using at most\n2d + 1 polls per task, the cost of ADAPT is (whp),\nADAPT () < O\nn\nd2\nd+3\n1\n/(2\nd+1\n-1)\n+ 1\nADV()\nA SIMPLE LOWER BOUND ON HOMOGENEOUS SINGLE-POLL GAMES\nThis section contains a simple argument for the claimed\nlower bound on online schedulers that devote a single poll\nper new task, ie, d = 1. Clearly by their nature these are\nHOPS schedulers, since there is no actual option for the\nassignment strategy. The proof for the lower bound of these\nschedulers is rather simple but it will and shed some light to\nthe essence of the construction for the proof of the general\nlower bound that will follow in the next section.\nLet's assume that there exists a HOPS scheduler that only\nuses 1 poll per task and claims to be strict a-competitive\nagainst any oblivious adversary\nADV. Initially ADV chooses\nan arbitrary real number r n which will be fixed in the\nend so as to maximize the lower bound on a. Let also the\nvariables C\ntotal\n, C\nmax\ndenote the total capacity and the maximum\npossible polled capacity using one poll (ie, the maximum\nbin capacity in this case) in the system. 
Consequently\nADV uses the following system of capacitated bins so that\nthese values are preserved:\nC\n1\n=\nC\nmax\n,\nC\ni\n=\nC\nmax\nr , i = 2, . . . ,\nC\ntotal\n- C\nmax\nC\nmax\nr\n+ 1 (\nn)\nObserve that the capacity of bin i 2 is r times smaller\nthan\nC\nmax\n, while on the other hand, the cumulative capacity\nof the last n - 1 machines is\nn-1\nr\ntimes larger than the\ncapacity of the largest bin in the system. Consider also the\nfollowing abbreviations of probabilities and events that may\noccur upon the arrival of a new ball:\nE\ni\n\"bin i is hit by a ball\"\nP\n1\nIP[E\n1\n],\nP\n1\n1 - P\n1\nObviously due to the assumption of a-competitiveness,\na C\ntotal\nC\nmax\n= C\nmax\n+\n(\nn-1)C\nmax\nr\nC\nmax\n= 1 + n - 1\nr\nsince\nALG could choose to assign all the incoming balls to\nthe largest bin in the system. The question that arises is\nwhether there exists a 1-poll scheduler that can do better\nthan that. We consider the following input sequences:\n|| = 1, w\n1\n= w:\nALG() = IE[L\nmax\n()]\nP\n1\nw\nC\nmax\n+\n\nP\n1\nw\nCmax\nr\nADV() =\nw\nC\nmax\n\n\n\n\n/ a-comp. /\n=\n\na P\n1\n+ r\nP\n1\n= r - P\n1\n(r - 1)\nP\n1\n\nr-a\nr-1\n|| = , t 1, w\nt\n= w: In this case the loads of all the\nbins will tend to their expected values, and thus\n||\n(1) = IE[\n||\n(1)] =\nP\n1\n||w\nC\nmax\nALG() = IE[L\nmax\n()]\nP\n1\n||w\nC\nmax\nADV() =\n||w\nC\ntotal\n\n\n/ a-comp. /\n=\n\na\n||w\nC\ntotal\n\nP\n1\n||w\nC\nmax\nP\n1\n\naC\nmax\nC\ntotal\n=\na\nn-1\nr\n+1\nCombining the two bounds on P\n1\nwe get:\naC\nmax\nC\ntotal\n=\na\nn-1\nr\n+ 1\nP\n1\nr - a\nr - 1\na r - a n - 1 + r - a\nn - 1\nr\n+ 1\na r + n - 1\nr\nr + n - 1\na\nr + n - 1\nr +\nn-1\nr\n= r\n2\n+ n r - r\nr\n2\n+ n - 1\nwhich is maximized for r = n + 1 and assures a lower\nbound on a of\nn\n2\n.\nRemark:\nIt is worth noting that the lower bound com-pletely\ndepends on the number of bins in the system, and on\nthe ratio r =\nC\nmax\nC\nmin\nand does not depend at all on the total\ncapacity of the system, C\ntotal\n.\nTHE LOWER BOUND ON MULTI-HOPS SHCEDULERS\nIn this section we study the behaviour of homogeneous\nschedulers that adopt an oblivious polling strategy (ie, the\npolling strategy is from HOPS) and an arbitrary assignment\nstrategy. We call these d-hops schedulers, since the choice of\nthe d candidates on behalf of each ball is done independently\nfor each ball, according to a common probability distribution\nf : [n]\nd\n[0, 1]. Recall that the choice of the candidate\nbins for each ball is oblivious to the current system state\nand thus could have been fixed prior to the beginning of the\nassignments.\nTheorem\n1.\nd 1, the competitive ratio of any d-hops\nscheduler is at least\n2d\nn\n4\nd-2\n.\nProof:\nLet f : [n]\nd\n[0, 1] be the adopted Oblivious\npolling strategy by an arbitrary d-hops scheduler, ALG. Assume\nalso that\nALG uses the best possible assignment strategy\ngiven this polling strategy, that is, each ball chooses its\nown candidate bins according to f and then it may assign\nitself to an arbitrarily chosen host among its candidates, depending\non the current loads of the candidate bins. Assume\nalso that\nALG claims a (strict) competitive ratio a against\noblivious adversaries.\nAs parameters of the problem we consider again the quantities\nC\ntotal\n=\nn\ni=1\nC\ni\nand\nC\nmax\n: the total capacity of a\nsystem of n related machines and the maximum capacity\nthat may be returned by a single poll. 
We shall describe\nan adversary\nADV that initially chooses an arbitrary real\nnumber 1\nr n and then considers the system of (d + 1\ngroups of) n capacitated bins that is described in Table 1.\nObserve that this construction preserves the following two\ninvariants when considering two successive groups of bins\nF\n\n, F\n+1\n: 1\nd:\n127\nGroup of Bins\nNumber of Bins in group\nCapacity per Bin\nCumulative Group Capacity\nF\n1\n1\nC\nmax\nC\nmax\nF\n2\nr(r - 1)\nC\nmax\n/r\n(r - 1)C\nmax\nF\n3\nr\n3\n(r - 1)\nC\nmax\n/r\n2\nr(r - 1)C\nmax\nF\n4\nr\n5\n(r - 1)\nC\nmax\n/r\n3\nr\n2\n(r - 1)C\nmax\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\nF\nd\nr\n2\nd-3\n(r - 1)\nC\nmax\n/r\nd-1\nr\nd-2\n(r - 1)C\nmax\nF\nd+1\nn - 1 r\nr+1\n(r\n2\nd-2\n- 1) n - r\n2\nd-2\nC\nmax\n/r\nd\n(\nn\nr\nd\n- r\nd-2\n)\nC\nmax\nTable 1: The system of (d + 1 groups of ) capacitated bins considered by ADV for the proof of the Lower\nBound on d-HOPS schedulers.\n(I1) when going from one group to its successor, the bin\ncapacities decrease by a factor of r, and\n(I2) the cumulative capacity of the first + 1 groups is\nlarger than the cumulative capacity of the first groups\nby a factor of r.\nWe shall denote by C[F ] the cumulative capacity of any\ngroup of bins F [n].\nRemark:\nThe preservation of invariant (I2) when = d\nimplies that C[F\nd+1\n]\nr\nd-1\n(r-1)C\nmax\n\nn\nr\nd\n- r\nd-2\nC\nmax\nr\nd-1\n(r - 1)C\nmax\nn r\n2\nd\n- r\n2\nd-1\n+ r\n2\nd-2\n.\nWe fortify\nALG by allowing a perfect balance of the bins\nof a group F\n\nwhenever at least one poll on behalf of a new\nball goes to a bin of this group. This is actually in order\nto capture the notion of the \"perfect assignment strategy\ngiven the polling strategy\" claim stated above. Clearly this\ndoes not cause any problem since we are looking for a lower\nbound. Because\nALG could lock its d choices to the first d\ngroups of the system, it is obvious that its competitive ratio\na is at most a\nC\ntotal\nC[\nd\n=1\nF\n\n]\n=\nn\nr\n2d-1\n+ 1.\nConsider now the d events E\n\n\"F\n\nis hit by a ball\" (1\n\nd), while P\n\nIP[E\n\n] (call it the hitting probability\nof group F\n\n) is the probability of at least one bin from\nF\n\nbeing hit by a ball.\nWe shall charge\nALG according\nto the hitting probabilities that its polling strategy determines\n. Notice that these are fixed at the beginning of the\nassignments since the polling strategy of\nALG is an oblivious\nstrategy. Furthermore, the following conditional hitting\nprobabilities are also determined uniquely by the polling\nstrategy of\nALG: i, j [n] : i > j,\nP\ni|j\nIP[E\ni\n|E\n1\nE\n2\nE\nj\n],\nQ\ni|j\nIP[E\ni\n|E\n1\nE\n2\nE\nj\n].\nFinally, let B\n\n() ( = 1, . . . , d) denote the maximum number\nof balls that may be hosted by bins of the set\n\n\n=1\nF\n\nwithout violating the assumption of a-competitiveness of\nALG, when the input sequence of tasks is chosen by ADV.\nThe following lemma states an inherent property of any d-hops\nscheduler:\nLemma\n3.1. For any > 1, unless ALG admits a competitive\nratio a >\n(\n-1)(1-r\n-2\n)\nd\n2\nr\n, the following property holds:\n1 d, P\n|-1\n1\n- 1\na\nr\nProof:\nWe prove this lemma by considering the following\ninput sequences of balls of the same (arbitrarily chosen) size\nw:\n|| = 1: In this case we know that ADV() =\nw\nC\nmax\n. 
The\ncost (ie, the expectation of maximum load) of\nALG is:\nALG() P\n1\nw\nC\nmax\n+\n(1\n- P\n1\n) Q\n2\n|1\nrw\nC\nmax\n+ (1\n- Q\n2\n|1\n)Q\n3\n|2\nr\n2\nw\nC\nmax\n+\n+\n+ (1 - Q\n2\n|1\n)\n(1 - Q\nd|d-1\n) r\nd\nw\nC\nmax\nDue to the demand for a-competitiveness of ALG against\nADV, this then implies\na\nr\n1 - P\n1\nP\n1\n1 a\nr\n.\n|| = r\n2-2\n, = 2, , d: In this case ADV will use\nonly the bins of\n\n\n=1\nF\n\nand thus he will pay a cost of\nADV() =\nr\n2-2\nw\nr\n-1\nC\nmax\n=\nr\n-1\nw\nC\nmax\n. As for the cost of ALG, we\nshall only charge it for the input subsequence of balls that\ndefinitely hit groups F\n1\n, . . . , F\n-1\n(call it ^\n). Our purpose\nis from this sequence of tasks to determine P\n|-1\n, ie, the\nconditional hitting probability of group F\n\ngiven that all the\nprevious groups are hit by a ball. Clearly,\nIE[\n|^|] = ||\n-1\n=1\nP\n|-1\n= r\n2\n-2\n\n-1\n=1\nP\n|-1\n(where for symmetry of representation we let P\n1\n|0\n= P\n1\n).\nRecall that B\n-1\n() denotes the maximum number of balls\nthat may be assigned to the bins of the first - 1 groups,\ngiven the claimed competitive ratio a by ALG and the input\nsequence . Then we have:\nwB\n-1\n(\n)\nC[\n-1\n=1\nF\n\n]\na ADV()\nB\n-1\n() a r\n2\n-3\n. Thus, there is a subsequence ~\nof ^\n\nthat consists of those tasks which cannot be assigned to the\nbins of the first - 1 groups due to the a-competitiveness\nconstraint. All these tasks have to exploit their remaining\n(at most) d - + 1 polls among the bins of [n] \\\n-1\n=1\nF\n\n.\nIt is clear that\nALG has no reason to spoil more than one\npoll per group due to the optimal assignment strategy that\nit adopts. Thus we can safely assume that there remain\nexactly d - + 1 polls for the remaining groups. Obviously\nIE[\n|~|] IE[|^|] - B\n-1\n() r\n2\n-2\n-1\n=1\nP\n|-1\n- ar\n2\n-3\nr\n2\n-3\n(r\n-1\n- a),\nwhere for simplification of notation we use the bounding se-128\nquence\n-1\n=\n-2\n1\n-1\na\nr\n\n-1\n= 1\n\n-1\na\nr\n-1\nand\n0\n= 1. This is true because P\n1\n1 a\nr\n\n1\n= 1\n\n-1\na\nr\n, > 1, while we assume inductively that\n-1\n=1\nP\n|-1\n\n-1\n. By showing that P\n|-1\n1\n-1\na\nr\nwe shall also\nhave assured that\n\n\n=1\nP\n|-1\n. We apply the Markov\nInequality (on the complementary random variable\n||-|~|)\nto find a lower bound on the size of ~\n:\n> 1, IP[|~| r\n2\n-3\n(r - r + r\n-1\n- a)] 1 - 1.\nNow it is clear that if\nALG claims a competitive ratio\na ( - 1)(1 - r\n-2\n)\n\n2\nd r\n( - 1) 1 1\nr\n2-2\n\n2\nr\n,\nthen at least one ball of will belong to ~\nwith probability\nat least 1\n1\n. Thus, either\nALG has a >\n(\n-1)(1-r\n-2\n)\n\n2\ndr\n, or\n(by simply charging it only for this very specific ball)\nALG()\n1\n- 1 [P\n|-1\n+ (1\n- P\n|-1\n)r] r\n-1\nw\nC\nmax\nwhich, combined with the demand for a-competitiveness and\nthe cost of\nADV for , implies that\nP\n|-1\n1\n- 1\na\nr .\nWe finally try the following input sequence, in case that\nALG still claims a competitive ratio a\n(\n-1)(1-r\n-2\n)\nd\n2\n\nr:\n|| = : For this input sequence it is clear that\nADV() = ||w\nC\ntotal\n=\n||w\nr\nd-1\n- r\nd-2\n+\nn\nr\nd\nC\nmax\n.\nFor\nALG we again consider the subsequence ^ of balls that\ndefinitely hit the first d - 1 groups of the system. Clearly\n|^| = IE[|^|]\nd-1\n|| since we now consider an infinite\nsequence of incoming balls. 
As for the upper bound on the\nballs that the first d - 1 groups can host, this is again given\nby the demand for a-competitiveness:\nwB\nd-1\n()\nC[\nd-1\n=1\nF\n\n] =\nwB\nd-1\n()\nr\nd-2\nC\nmax\n\na||w\nr\nd-1\n- r\nd-2\n+\nn\nr\nd\nC\nmax\nB\nd-1\n()\nr\nd-1\nr\nd-1\n- r\nd-2\n+\nn\nr\nd\na\nr ||\nThe subsequence ~\n^\nthat has to exploit a single poll\namong the bins of [n] \\\nd-1\n=1\nF\n\nhas size at least\n|~| = IE[|~|] IE[|^|] - B\nd-1\n()\n\n\nd-1\nr\nd-1\nr\nd-1\n- r\nd-2\n+\nn\nr\nd\na\nr\n||\n\n1\n- (d - 1)\n- 1\na\nr r\nd-1\nr\nd-1\n- r\nd-2\n+\nn\nr\nd\na\nr\n||\n\n1\n- d - 1\n- 1\na\nr\n||\nwhere for the last inequality we consider that n r\n2\nd-2\n.\nSince we consider that a\n(\n-1)(1-r\n-2\n)\n\n2\ndr\n, we can be sure\nthat\n|~| 1 1\n+\n1\nr\n2\n|| and thus, the cost of ALG will\nbe lower bounded by the expected load of the bins in F\nd\ndue\nto the tasks of ~\n:\na ADV() ALG()\nP\nd|d-1\n|~| w\n(r\nd-1\n- r\nd-2\n)\nC\nmax\na\nr\nd-1\n- r\nd-2\n+\nn\nr\nd\nP\nd|d-1\n1\n\n(\n-1)(1-r\n-2\n)\n\n2\ndr\nd-1\n- r\nd-2\n1\n1\n\n(\n-1)(1-r\n-2\n)\n(d-1)\n1 +\nn\nr\n2d-2\n(\nr-1)\n1\n- 1\na\nr\n1 - 1 - r\n-2\nd - 1\nwhich is not possible for any > 1 and n r\n2\nd\n. Thus we\nconclude that\nALG cannot avoid a competitive ratio\na min\n- 1\n(d - 1) r,\nn\nr\n2\nd-1\n+ 1\nfor any > 1 and n r\n2\nd\n, which for = 2 and n = r\n2\nd\ngives the desired bound.\nDEALING WITH INPUT SEQUENCES OF KNOWN TOTAL SIZE\nIn this section we prove that the missing information for\nan oblivious scheduler to perform efficiently is the size of\nthe input sequence. More specifically, considering that the\ninput size is provided as offline information to each of the\nnewly inserted tasks, we construct an oblivious scheduler\nthat exploits this information along with a slowfit assignment\nrule and a layered induction argument for the flow of\nballs from slower to faster bins, in order to achieve a strictly\nconstant competitive ratio with only\nO(loglog n) polls per\ntask.\nAssume that m unit-size balls are thrown into a system\nof n capacitated bins with capacities C\nmax\n=\nC\nn\nC\nn-1\n\nC\n1\n=\nC\nmin\n. Assume also that each ball is allowed to\npoll up to 2d bins and then it has to assign itself to one of\nthese candidates. We additionally assume that\nC\nmax\n\nn\n2\nd+1\n.\nAs it will become clear later by the analysis, if this was not\nthe case then it could only be in favour of the oblivious\nscheduler that we propose, because this would allow the absorption\nof the large additive constants in the performance\nguarantee of the scheduler.\nWe consider (wlog) that the capacity vector c is nor-malized\nby\nn\n||c||\n1\nso that\nn\ni=1\nC\ni\n= n. We also assume\nthat the total size m the input sequence is given to every\nnewly inserted ball.\nThis implies that each ball can\nestimate the cost\nADV(m) opt (ie, the optimum offline\nassignment of the m unit-size balls to the n capacitated\nbins), and thus it can know a priori the subset of bins\nthat may have been used by\nADV during the whole process\n.\n3\nHaving this in mind, we can assume that every bin\nin the system is legitimate, that is, it might have been\nused by the optimum solution, otherwise we could have each\nball ignore all the illegitimate bins in the system. Thus,\nopt\nmax\n1\nC\nmin\n,\nm\nC\ntotal\n= max\n1\nC\nmin\n,\nm\nn\n. Finally, we assume\nthat each of the legitimate bins of the system gets at\n3\nA bin i [n] may have been used by ADV if and only if\n1/C\ni\nopt.\n129\nleast one ball in the optimum offline schedule. 
This does not\naffect the performance of\nADV, while it may only deteriorate\nthe performance of an online scheduler. Nevertheless, it\nassures that\nm\nn\nopt\nm\nn\n+ 1, meaning that the fractional\nload on the bins is actually a good estimation of opt.\nLet the load of bin i [n] at time t (that is, right after the\nassignment of the t\nth\nball of the input sequence) be denoted\nas\nt\n(i)\nq\nt\n(\ni)\nC\ni\n, where q\nt\n(i) is the number of balls assigned\nto bin i up to that time. The following definition refers to\nthe notion of saturated bins in the system, ie, overloaded\nbins wrt the designed performance guarantee of an oblivious\nscheduler:\nDefinition\n4.1. A bin i [n] is called saturated upon\nthe arrival of a new ball t m, if and only if it has\nt-1\n(i) > a opt (a), where (a) is called the designed\nperformance guarantee\nof the oblivious scheduler.\nLet\nr [d], i\nr\nmin{i [n] :\ni\nj=1\nC\nj\n\nr\n=1\nn\n2\n\n}.\nThen, we consider the following partition of the set of bins\n[n] into d + 1 groups of (roughly geometrically) decreasing\ncumulative capacities:\nF\n1\n{1, . . . , i\n1\n},\nF\nr\n{i\nr-1\n+ 1, . . . , i\nr\n}, r = 2, . . . , d,\nF\nd+1\n{i\nd\n+ 1, . . . , n}.\nAlthough the cumulative capacity of group F\nr\nmay vary\nfrom\nn\n2\nr\n-C\nmax\nto\nn\n2\nr\n+\nC\nmax\n, for ease of the following computations\nwe assume that asymptotically\nr [d], C[F\nr\n]\n\nn\n2\nr\nand C[F\nd+1\n]\n\nn\n2\nd\n. We denote by\nC\nmin\n\nthe capacity of the\nsmallest bin in F\n\n, [d + 1].\nWe now consider the following ideal scheduler that uses\nan oblivious polling strategy and an assignment strategy\nbased on the slowfit rule. This scheduler (we call it\nOBL\n\n)\ninitially discards all the illegitimate bins in the system, using\nthe knowledge of m. Then first it normalizes the capacity\nvector of the remaining bins and afterwards it considers the\ngrouping mentioned above and adopts the following pair of\nstrategies:\nPOLLING:\n1 r d group F\nr\ngets exactly 1 poll, which\nis chosen among the bins of the group proportionally to the\nbin capacities. That is,\nr [d], i F\nr\n,\nIP[bin i F\nr\nis a candidate host of a ball]\nC\ni\nC[F\nr\n] .\nThe remaining d polls are assigned to the bins of group F\nd+1\n,\neither to the d fastest bins, or according to the polling strategy\nof always go left, depending only on the parameters\n(c, d) of the problem instance.\n4\nASSIGNMENT: Upon the arrival of a new ball t [m],\nthe smallest polled bin from\n\nd\n=1\nF\n\n(starting from F\n1\n, to\nF\n2\n, a.s.o.) that is unsaturated gets this ball (slowfit rule).\nIn case that all the first d polls of a ball are already saturated\n, then this ball has to be assigned to a bin of F\nd+1\nusing its remaining d candidates. Within group F\nd+1\n, either\nthe minimum post load rule (ie, the bin of minimum load\namong the d choices from F\nd+1\nis chosen, taking into account\nalso the additional load of the new ball), or the slowfit rule\n4\nObserve that this is offline information and thus this decision\ncan be made at the beginning of the assignment process,\nfor all the balls of the input sequence.\nis applied, depending on the offline parameters (c\n, d) of the\nproblem instance. If all the 2d polled bins are saturated,\nthen the minimum post load rule is applied among them.\nTies are always broken in favour of smaller bins (ie, slower\nmachines).\nThe following theorem gives the performance of\nOBL\n\n,\nwhen the additional information of the input size is also\nprovided offline:\nTheorem\n2. 
For lnln n - lnlnln n > d 2, when the\nsize of the input sequence m n is given as offline information\n,\nOBL\n\nhas a strict competitive ratio that drops\ndouble-exponentially with d.In particular the cost of OBL\n\nis (whp) at most\nOBL\n\n(m) (1 + o(1))8\nn\nd2\nd+3\n1\n/(2\nd+1\n-1)\n+ 1\nADV(m)\nProof:\nLet + 1 m be the first ball in the system that\nhits only saturated bins by its 2d polls. Our purpose is to\ndetermine the value of a in the designed performance guarantee\n(a), so that the probability of ball +1 existing to be\npolynomially small. As stated before, the technique that we\nshall employ is a layered induction argument on the number\nof balls that are passed to the right of group F\nr\n, r [d]. For\nthe assignment of the balls that end up in group F\nd+1\nwe\nuse a slightly modified version of the always go left scheduler\nof [18] that gives an upper bound on the maximum load\nin group F\nd+1\n(we denote this by\nL\nmax\n[F\nd+1\n]). This upper\nbound on\nL\nmax\n[F\nd+1\n] holds with high probability. This assignment\nis only used when it produces a smaller maximum\nload than the brute assignment of all the balls ending up in\nF\nd+1\nto the d fastest bins of the group.\nWe shall consider a notion of time that corresponds to the\nassignments of newly arrived balls into the system: At time\nt m, the t\nth\nball of the input sequence is thrown into the\nsystem and it has to be immediately assigned to one of its\n2d candidates.\nConsider the polls on behalf of a ball to be ordered according\nto the groups from which they are taken. Observe then\nthat each ball t is assigned to the first unsaturated bin\nthat it hits from the first d groups, or to a bin in group F\nd+1\n.\nThus,\n[d+1], each ball that has been assigned to group\nF\n\nup to (and including) ball , has definitely failed to hit\nan unsaturated bin in all the groups F\n1\n, . . . , F\n-1\n. For any\nball t m and [d], let Q\nt\n() denote the number of balls\nthat have been assigned to group F\n\nup to time t (ie, right\nafter the assignment of the t\nth\nball), while ~\nQ\nt\n() denotes the\nballs that have been assigned to the right of group F\n\n, that\nis, to bins of [n] \\\n\n=1\nF\n\n. Thus, ~\nQ\nt\n() =\nd+1\n=+1\nQ\nt\n().\nLet also S\nt\n() denote the set of saturated bins in F\n\nat time\nt. Then, [d], t ,\nIP[t hits a saturated bin in F\n\n] = C[S\nt-1\n()]\nC[F\n\n]\nC[S\n\n()]\nC[F\n\n] .\nObserve now that\n[d],\nQ\n\n() =\niF\n\nq\n\n(i)\n\niS\n\n(\n)\nC\ni\nq\n\n(i)\nC\ni\n> C[S\n\n()](a)\nC[S\n\n()]\nC[F\n\n]\n<\nQ\n\n()\n(a) C[F\n\n] P\n\n(1)\n130\nRecall that up to time , we can be sure that Q\n\n()\n(a) C[F\n\n] (because all these balls are assigned to unsaturated\nbins), which in turn assures that\nP\n\n1. Inequality\n(1) implies that, had we known ~\nQ\n\n(-1), then the number\n~\nQ\n\n() of balls before ball + 1 that go to the right of\nset F\n\nwould be stochastically dominated by the number of\nsuccesses in ~\nQ\n\n(-1) Bernoulli trials with success probability\nP\n\n. 
We shall denote this number by B( ~\nQ\n\n( - 1), P\n\n).\n5\nIn the following lemma, we apply the Chernoff-Hoeffding\nbound on these Bernoulli trials to get an upper bound on\nthe amount of balls that have to be assigned beyond group\nF\n\n, for [d], as a function of the number of balls that\nhave been assigned beyond group F\n-1\n:\nLemma\n4.1.\n[d] and for an arbitrary constant > 1,\nwith probability at least 1\n- n\n~\nQ\n\n() max\n2\n+1\n[ ~\nQ\n\n( - 1)]\n2\n(a)n\n,\n2 ln n ~\nQ\n\n( - 1) .\nProof:\nLet's assume that we already know the number\n~\nQ\n\n( - 1) of balls that have already failed in F\n1\n, . . . , F\n-1\n.\nThen ~\nQ\n\n() is stochastically dominated by the random variable\nB( ~\nQ\n\n( - 1), P\n\n).\nBy applying Chernoff-Hoeffding\nbounds ([11], p. 198) on these Bernoulli trials, we get that\nIP[B( ~\nQ\n\n( - 1), P\n\n)\n~\nQ\n\n( - 1) (P\n\n+ )]\nexp(-2 ~\nQ\n\n( - 1)\n2\n), > 0\nIP B( ~\nQ\n\n( - 1), P\n\n)\n~\nQ\n\n( - 1) P\n\n+\nln n\n2 ~\nQ\n\n(\n-1)\nn\n\n, > 0\nwhere we have set =\nln n\n2 ~\nQ\n\n(\n-1)\n. But recall that\nP\n\n=\nQ\n\n(\n)\n(\na)C[F\n\n]\nand also that C[F\n\n]\n\nn\n2\n\n. Thus we conclude that\nwith probability at least 1\n- n\n\n,\n~\nQ\n\n() B( ~\nQ\n\n( - 1), P\n\n)\n\n~\nQ\n\n( - 1)\n2\n\nQ\n\n(\n)\n(\na)n\n+\nln n\n2 ~\nQ\n\n(\n-1)\nQ\n\n() = ~\nQ\n\n( - 1) - ~\nQ\n\n()\n\n\n\n\n\n\n\n\n\n~\nQ\n\n() ~\nQ\n\n( - 1)\n2\n\n[ ~\nQ\n\n( - 1) - ~\nQ\n\n()]\n(a)n\n+\nln n\n2 ~\nQ\n\n( - 1)\n~\nQ\n\n() 1 + 2\n\n~\nQ\n\n( - 1)\n(a)n\n\n~\nQ\n\n( - 1)\n2\n\n~\nQ\n\n( - 1)\n(a)n\n+\nln n\n2 ~\nQ\n\n( - 1)\n~\nQ\n\n()\n2\n\n[ ~\nQ\n\n( - 1)]\n2\n+ (a)n\nln n\n2\n~\nQ\n\n( - 1)\n(a)n + 2\n\n~\nQ\n\n( - 1)\nfrom which we get the desired bound.\nConsider now the following finite sequence\n^\nQ() = max\n2\n+1\n[ ^\nQ(-1)]\n2\n(\na)n\n,\n2 ln n ^\nQ\n\n( - 1) , [d]\n^\nQ(0) = m\n5\nFor the integrity of the representation we set ~\nQ\n\n(0) =\nm - 1.\nWe then bound the number of balls that end up in group\nF\nd+1\nby the d\nth\nterm of this sequence:\nLemma\n4.2. With probability at least 1\n- dn\n\n, at most\n^\nQ(d) balls end up in group F\nd+1\n.\nProof:\nThe proof of this lemma is relatively simple and\ndue to lack of space it is differed to the full version of the\npaper.\nConsequently we estimate a closed form for the first terms\nof the bounding sequence\n^\nQ(), = 0, . . . , d that was\ndetermined earlier:\nLemma\n4.3. The first log\nlog\nm\nln n\nlog(\na/8)\n+2 terms of the bounding\nsequence\n^\nQ(), = 0, . . . , d are given by the closed\nform ^\nQ() =\nm\n2\n\n\n(\na\n8\n)\n2-1\n.\nProof:\nAssume that\n\nwas the first element in the sequence\nfor which\n2\n\n\n+1\n[ ^\nQ(\n\n- 1)]\n2\n(a)n\n<\n2 ln n ^\nQ(\n\n- 1)\n(2)\nThen, up to term\n\n- 1 the sequence is dominated by the\nright-hand term of the above inequality, and thus,\nr <\n\n,\n^\nQ(r) =\n2\nr+1\n(a)n [\n^\nQ(r - 1)]\n2\n= 2\nr+1\n(a)n\n2\nr\n(a)n\n2\n[ ^\nQ(r - 2)]\n4\n=\n2\nr+1\n(a)n\n2\n0\n2\nr\n(a)n\n2\n1\n2\nr-1\n(a)n\n2\n2\n[ ^\nQ(r - 3)]\n2\n3\n=\n=\nr\n=1\n2\n2\nr+2\n(a)n\n2\n-1\n[ ^\nQ(0)]\n2\nr\n=\n2\nr\n=1\n(\nr+2-)2\n-1\n\n1\n(a)n\nr\n=1\n2\n-1\n[ ^\nQ(0)]\n2\nr\n=\n2\n3\n2\nr\n-r-3\n(\na)n\nm\n2\nr\n-1\nm\nm\n2\nr\na\n8\n2\nr\n-1\nsince\n(\na)n\n8\nm\n\na\n8\n. 
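The doubly-exponential collapse promised by this bounding sequence is easy to see numerically. The sketch below simply iterates the recursion for Q_hat; it uses psi(a) * n ~ a * m (since opt ~ m/n for unit-size balls) and the value of a set later in Lemma 4.4, both read off the garbled formulas above, so treat the constants as indicative only.

import math

def bounding_sequence(m, n, a, d, theta=2.0):
    # Q_hat(l) = max( 2**(l+1) * Q_hat(l-1)**2 / (a*m),
    #                 sqrt(2*theta*ln(n) * Q_hat(l-1)) ),   Q_hat(0) = m,
    # with psi(a)*n approximated by a*m.
    q = [float(m)]
    for l in range(1, d + 1):
        squaring = 2 ** (l + 1) * q[-1] ** 2 / (a * m)
        additive = math.sqrt(2 * theta * math.log(n) * q[-1])
        q.append(max(squaring, additive))
    return q

n = m = 10 ** 6
d = 3
a = 8 * (n / (d * 2 ** (d + 3))) ** (1.0 / (2 ** (d + 1) - 1))   # as in Lemma 4.4
print([round(x) for x in bounding_sequence(m, n, a, d)])
# The overflow count shrinks from 10**6 to a few thousand balls within d = 3 steps.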
We now plug in this closed form for ^\nQ(\n\n1\n) in inequality (2) to get the following:\n2 ln n >\n2\n\n\n+1\n(a)n\n\nm\n2\n\n\n-1 a8 2\n-1\n-1\n\n3\n/2\nm\n3\n/2\n2 lnn < (a)n\n2\n\n\n+1\n2\n3\n2\n(\n\n\n-1)\n(a)n\n8m\n2\n-1\n-1\nm\n2 ln n\n< 2\n1\n2\n(\n\n\n+3)\na\n8\n2\n-1\nFrom the above and the definition of\n\n, it is easy to see that\n\n\n-1 = max r [d] :\nln\nm-ln -lnln n\nln 2\n- 4 r + 2\nr-2\nln\na\n8\n.\nBy setting A = log m lnln\nn+ln\nln 2\n- 4 and B = ln\na\n8\nwe get\nthe solution\n\nA W\n[\nB ln 2\n4\nexp(\nA ln 2)\n]\nln 2\n+ 1, where W (x)\nis the Lambert W Function ([7]). By approximating this\nfunction by ln x - lnln x (since x =\nB ln 2\n4\nexp(A ln 2) )\nwe conclude that\n\n\nlog log\nm\nln n\nlog(a/8)\n+ 3\n131\nLemma\n4.4. Assume that m n and d <\nlnln\nn-lnlnln n\n(2+1) ln 2\n.\nIf we set a = 8\nn\nd2\nd+3\n1\n/(2\nd+1\n-1)\nthe cost of\nOBL\n\nis upper\nbounded (whp) by\nOBL\n\n(m) (1 + o(1))(a + 1)ADV(m).\nProof:\nThe cost of\nOBL\n\nup to time is upper-bounded\nby max\n{L\nmax\n[F\nd+1\n], (a + 1) opt} since in the first d groups\nno saturated bin ever gets another ball and all the bins are\nlegitimate. We choose a so that the cost in the first d groups\nis at least as large as\nL\nmax\n[F\nd+1\n] (whp), given the upper\nbound ^\nQ(d) on the number of balls that end up in the last\ngroup. Thus the probability of + 1 existing will be then\npolynomially small because at least one poll in F\nd+1\nwill\nhave to be unsaturated whp. This then implies the claimed\nupper bound on the performance of\nOBL\n\n. Due to lack of\nspace, the complete proof of this lemma is presented in the\nfull version of the paper.\nCombining the statements of all these technical lemmas, we\nconclude with the desired bound on the competitive ratio of\nOBL\n\n.\nA COMPETITIVE HAPS SCHEDULER\nIn the previous section we have proposed a hops scheduler\nthat is based on the knowledge of the size of the input\nsequence and then assures that its own performance is never\nworse than (a + 1)opt, whp. In this section we propose a\nhaps\nscheduler (call it\nADAPT ) whose main purpose is\nto \"guess\" the value opt of the optimum offline cost by a\nclassical guessing argument and then let\nOBL\n\ndo the rest\nof the assignments. This approach is in complete analogy\nwith the online schedulers of the Related Machines-Full\nInformation problem (see [6], pp. 208-210). The only difference\nis that\nOBL\n\nhas a performance which holds whp,\nand this is why the final result also holds whp.\nA significant difficulty of\nADAPT is exactly this guessing\nmechanism that will have to be based on the limited information\nprovided to each of the new tasks. Our goal is not to\nassume that any kind of additional information (eg, global\nenvironment variables) is provided to the balls, other than\nthe capacity vector and the current loads of each ball's candidates\n.\nADAPT sacrifices one of the available polls per\nball, in order to create such a good guessing mechanism.\nOf course, a different approach that would be based on the\noutcome of some of the polls (eg, a constant fraction of the\npolls) in order to estimate a proper online prediction of opt\nwould be more interesting in the sense that it would not\ncreate a communication bottleneck for the tasks. 
Nevertheless\n, the purpose of\nADAPT is mainly to demonstrate the\npossibility of constructing such an adaptive scheduler whose\nperformance is close to that of\nOBL\n\n.\nLet's assume that the system now has n + 1 capacitated\nbins (\nC\nmin\n=\nC\n1\nC\n2\nC\nn+1\n=\nC\nmax\n). Assume also\nthat each new task is provided with 2d + 1 polls. Fix a\nnumber r > 1. Then an r-guessing mechanism proceeds\nin stages: Stage ( = 0, 1, . . . ) contains a (consecutive)\nsubsequence of tasks from the input sequence that use the\nsame prediction\n\n= r\n\n\n0\nfor their assignment. We set\n\n0\n=\n1\nC\nmax\n. The following definitions refer to the notions\nof eligible and saturated machines upon the arrival of a\nnew task t [m] into the system, that are used in a similar\nfashion as in [5] where the concept of eligible-saturated\nmachines was introduced:\nDefinition\n5.1. Suppose that task t m belongs to stage\n.The set of eligible machines for t is\nE\n\ni [n] : 1\nC\ni\n\n\n= r\n\n\n0\n,\nwhile a machine i [n] is considered to be saturated upon\nthe arrival of task t, if\nt-1\n(i) > a\n\n\n\n= r\n\na\n\n\n0\n, where\na\n\n= 8\nmax 1,\n|E\n\n|\nd2\nd+3\n1\n/(2\nd+1\n-1)\n.\nNotice that the static information of (E\n\n, a\n\n), = 0, 1, . . .\nonly depends on the capacity vector c and the number d of\npolls per task. Thus it can be easily computed by each of\nthe tasks, or alternatively it can be provided a priori to all\nthe tasks as additional offline information.\nADAPT proceeds in phases and works as follows: Upon\nthe arrival of a new ball t [m], first the fastest bin in the\nsystem is polled and the stage s(t) to which this ball belongs\nis determined, according to the following rule:\ns(t) = IN : r\n-1\na\n-1\n+ 1 < q\nt-1\n(n + 1) r\n\na\n\n+ 1\n(obviously for stage 0 only the second inequality must hold).\nThe remaining 2d polls of task t are taken from group E\ns(t)\nin a fashion similar to that of\nOBL\n\n. The assignment strategy\nof\nADAPT is exactly the same with that of OBL\n\n, with\nthe only difference that whenever the first 2d candidates of\na task are already saturated, then this task is assigned to\nthe fastest machine in the system, n + 1. If the latter event\ncauses machine n + 1 to become also saturated, then by the\ndefinition of the stages this task is the last ball of the current\nstage and a new stage (with an r times larger prediction)\nstarts from the next task on.\nLemma\n5.1. Suppose that\nADAPT uses r = 9/4.Let\nh denote the stage at which the prediction\nh\nof\nADAPT\nreaches opt for the first time.Then the last stage of\nADAPT\nis at most h + 1.\nProof:\nThe case of h = 0 is easy since this implies that\nopt = 1/C\nmax\nand then\nADAPT cannot be worse than\nOBL\n\nwhich (having the right prediction) succeeds to assign\nall the incoming tasks (but for those that might fit in machine\nn +1 without making it saturated) below (a\n0\n+ 1)\nopt,\nwhp. So let's consider the case where h 1.\nLet E\n\ndenote the set of legitimate machines in the system\n(ie, E\n\n=\n{i [n + 1] : 1/C\ni\nopt}). The amount\nof work inserted into the system during the whole assignment\nprocess is bounded by C[E\n\n]\nopt. By definition of\nh, r\nh-1\n/C\nmax\n< opt r\nh\n/C\nmax\n. Stage h + 1 ends when\nmachine n + 1 becomes saturated. Let W () denote the\ntotal amount of work assigned during stage , while R()\ndenotes the amount of remaining work at the end of stage\n. 
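As a brief aside before the remaining-work argument: the stage bookkeeping of ADAPT depends only on the capacity vector and on d, so every ball can recompute it locally. A minimal sketch follows (illustrative only; the exact form of a_sigma is our reading of the garbled definition above, and any theta factor lost in extraction is omitted).

def eligible(caps, lam):
    # Machines i with 1/C_i <= lam are eligible under prediction lam.
    return [i for i, c in enumerate(caps) if 1.0 / c <= lam]

def stage_tables(caps, d, r=9 / 4, max_stage=40):
    # Precompute (E_sigma, a_sigma) for each stage sigma; this static
    # information depends only on the capacity vector c and on d.
    lam0 = 1.0 / max(caps)
    tables = []
    for s in range(max_stage):
        E = eligible(caps, r ** s * lam0)
        a = 8 * max(1.0, len(E) / (d * 2 ** (d + 3))) ** (1.0 / (2 ** (d + 1) - 1))
        tables.append((E, a))
    return tables

def stage_of_new_ball(q_fastest, tables, r=9 / 4):
    # Stage s(t) of an arriving ball, read off from q_{t-1}(n+1), the number
    # of balls already sitting on the fastest machine n+1.
    for s, (_, a) in enumerate(tables):
        if q_fastest <= r ** s * a + 1:
            return s
    return len(tables) - 1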
We shall prove that the amount of remaining work at the\nend of stage h is not enough to make OBL\n\nfail within stage\nh + 1.\nObserve that at the end of stage h, no machine has exceeded\na load of (a\nh\n+ 1)\nh\n= (a\nh\n+ 1)r\nh\n/C\nmax\n(by definition\nof stages). On the other hand, during stage h + 1, each of\nthe eligible machines i E\nh+1\nneeds a load of more than\na\nh+1\n\nh+1\n= a\nh+1\nr\nh+1\n/C\nmax\nin order to become saturated\nfor this stage. We denote the additional free space of bin\n132\ni E\nh+1\nby f ree\ni\n(h+1) >\na\nh+1\nr\nh+1\nC\nmax\n\n(\na\nh\n+1)\nr\nh\nC\nmax\nC\ni\n, while\nF REE(h + 1)\niE\nh+1\nf ree\ni\n(h + 1)\n>\na\nh+1\nr\nh+1\nC\nmax\n- (a\nh\n+ 1)r\nh\nC\nmax\nC[E\nh+1\n]\n>\na\nh+1\nr\nh+1\nC\nmax\n1 - a\nh\n+ 1\nr a\nh+1\nC[E\nh+1\n]\n> (a\nh+1\n+ 1)optC[E\nh+1\n]\nra\nh+1\na\nh+1\n+ 1 a\nh\n+ 1\na\nh+1\n+ 1\n(3)\nis the cumulative free space granted to the eligible bins of\nthe system for stage h + 1. It only remains to prove that the\namount of work that has to be dealt with by\nOBL\n\nusing\nonly the bins of [n] is less than F REE(h+1)/(a\nh+1\n+1). Observe\nnow that\nOBL\n\nuses the bins of E\nh+1\nand has to deal\nwith an amount of work less than W (h + 1) - f ree\nn+1\n(h +\n1) < R(h)-a\nh+1\nr\nh\nC\nmax\nr a\nh\n+1\na\nh+1\nC\nmax\n< C[E\n\n]opt\n-a\nh+1\n(r-2\n)opt\nC\nmax\n< opt(C[E\n\n]\n-(r-2)a\nh+1\nC\nmax\n) < (C[E\n\n]\n-C\nmax\n)\n\nopt if we set r 17/8. This is because a\nh+1\na\nh\n8 and\nat the end of stage h+1 bin n+1 must have already become\nsaturated. Observe also that E\nh+1\nE\nh\nE\n\n\\ {n + 1} by\ndefinition of h. Thus, C[E\nh+1\n]\nC[E\n\n]\n- C\nmax\n. Thus, the\namount of work that has to be served by\nOBL\n\nduring stage\nh + 1 is less than opt C[E\nh+1\n]. Then, setting r = 9/4 in\ninequality (3) assures that\nra\nh+1\na\nh+1\n+1\na\nh\n+1\na\nh+1\n+1\n1 (recall\nthat a\nh+1\na\nh\n8) and thus the remaining work at the\nend of stage h is not enough to make OBL\n\nfail.\nThe following theorem is now a straightforward consequence\nof the previous lemma:\nTheorem\n3. For any input sequence of identical tasks\nthat have to be assigned to n related machines using at most\n2d + 1 polls per task, the cost of ADAPT is (whp),\nADAPT () < O\nn\nd2\nd+3\n1\n/(2\nd+1\n-1)\n+ 1\nADV().\nCONCLUSIONS\nIn this work we have studied the problem of exploiting limited\nonline information for the assignment of a sequence of\nunit-size tasks to related machines. We have shown that the\noblivious schedulers that perform asymptotically optimally\nin the case of identical machines, deteriorate significantly\nin this case. We have then determined an adaptive scheduler\nthat actually mimics the behaviour of an ideal oblivious\nscheduler, in order to achieve roughly the asymptotically optimal\nperformance similar to the case of the identical machines\n.\nAs for further research, the issue of providing only limited\ninformation to online algorithms is critical in many problems\nfor which an objective is also the minimization of the communication\ncost. In this category fall most of the network\ndesign problems. Another interesting line of research would\nbe the avoidance of communication bottlenecks for such limited\ninformation online algorithms. 
Additionally, it would be very interesting to study the case of tasks of arbitrary
sizes being injected into the system.
ACKNOWLEDGMENTS
The author wishes to thank Paul Spirakis for valuable discussions during the
write-up of the paper and also for suggesting an appropriate terminology on the
categorization of polling strategies.

REFERENCES
[1] J. Aspnes, Y. Azar, A. Fiat, S. Plotkin, O. Waarts. Online Machine Scheduling
with Applications to Load Balancing and Virtual Circuit Routing. In Journal of
the ACM, Vol. 44 (1997), pp. 486-504.
[2] Y. Azar, A. Broder, A. Karlin, E. Upfal. Balanced Allocations. In Proc. of
26th ACM-STOC (1994), pp. 593-602. Also in SIAM J. Computing, 29 (1999),
pp. 180-200.
[3] P. Berman, M. Charikar, M. Karpinski. Online Load Balancing for Related
Machines. In Proc. of 5th Int. Workshop on Algorithms and Data Structures
(1997), LNCS 1272, Springer-Verlag 1997, pp. 116-125.
[4] P. Berenbrink, A. Czumaj, A. Steger, and B. Vöcking. Balanced Allocations:
The Heavily Loaded Case. In Proc. of 32nd ACM-STOC (Portland, 2000), pp.
745-754.
[5] A. Bar-Noy, A. Freund, J. Naor. New Algorithms for Related Machines with
Temporary Jobs. In Journal of Scheduling, Vol. 3 (2000), pp. 259-272.
[6] A. Borodin, R. El-Yaniv. Online Computation and Competitive Analysis.
Cambridge Univ. Press 1998.
[7] R. Corless, G. Gonnet, D. Hare, D. Jeffrey, D. Knuth. On the Lambert W
Function. In Advances in Computational Mathematics, Vol. 5 (1996), pp. 329-359.
[8] A. Czumaj and B. Vöcking. Tight Bounds for Worst-Case Equilibria. In Proc.
of 13th ACM-SIAM SODA (San Francisco, 2002), pp. 413-420.
[9] Y. Azar. Online Load Balancing. In Online Algorithms: The State of the Art,
A. Fiat, G. Woeginger (Eds.), LNCS 1442, Springer 1998, pp. 178-195.
[10] G. Gonnet. Expected Length of the Longest Probe Sequence in Hash Code
Searching. In Journal of the ACM, Vol. 28, No. 2 (1981), pp. 289-304.
[11] M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, B. Reed (Eds.). Probabilistic
Methods for Algorithmic Discrete Mathematics. ISBN: 3-540-64622-1,
Springer-Verlag 1998.
[12] E. Koutsoupias, C. Papadimitriou. Worst-case Equilibria. In Proc. of 16th
Annual Symposium on Theoretical Aspects of Computer Science (STACS),
LNCS 1563, Springer-Verlag 1999, pp. 404-413.
[13] M. Mavronicolas, P. Spirakis. The Price of Selfish Routing. In Proc. of 33rd
ACM-STOC (Crete, 2001), pp. 510-519.
[14] M. Mitzenmacher, A. Richa, R. Sitaraman. The Power of Two Random Choices:
A Survey of Techniques and Results. In Handbook of Randomized Algorithms (to
appear). Also available through http://www.eecs.harvard.edu/~michaelm/.
[15] A. Steger, M. Raab. "Balls into Bins" - A Simple and Tight Analysis. In Proc.
of RANDOM'98, LNCS 1518, Springer-Verlag 1998, pp. 159-170.
[16] J. Sgall. Online Scheduling. In Online Algorithms: The State of the Art,
A. Fiat, G. Woeginger (Eds.), LNCS 1442, Springer 1998, pp. 196-231.
[17] D. Shmoys, J. Wein, D. Williamson. Scheduling Parallel Machines Online. In
Proc. of the 32nd IEEE-FOCS (1991), pp. 131-140.
[18] B. Vöcking. How Asymmetry Helps Load Balancing. In Proc. of 40th IEEE-FOCS
(New York, 1999), pp.
131-140.\n133\n", "keywords": "oblivious scheduler;HOPS;related machines;Limited Information;lower bounds;online information;scheduling;Online Load Balancing;input sequence;unit-size task;Related Machines"} {"name": "132", "title": "Machine Learning for Information Architecture in a Large Governmental Website", "abstract": "This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations. Categories and Subject Descriptors", "fulltext": "INTRODUCTION\nThe GovStat Project is a joint effort of the University\nof North Carolina Interaction Design Lab and the University\nof Maryland Human-Computer Interaction Lab\n1\n. Citing\nend-user difficulty in finding governmental information\n(especially statistical data) online, the project seeks to create\nan integrated model of user access to US government\nstatistical information that is rooted in realistic data models\nand innovative user interfaces. To enable such models\nand interfaces, we propose a data-driven approach, based\non data mining and machine learning techniques. In particular\n, our work analyzes a particular digital library--the\nwebsite of the Bureau of Labor Statistics\n2\n(BLS)--in efforts\nto discover a small number of linguistically meaningful concepts\n, or \"bins,\" that collectively summarize the semantic\ndomain of the site.\nThe project goal is to classify the site's web content according\nto these inferred concepts as an initial step towards\ndata filtering via active user interfaces (cf.\n[13]).\nMany\ndigital libraries already make use of content classification,\nboth explicitly and implicitly; they divide their resources\nmanually by topical relation; they organize content into hi-erarchically\noriented file systems. The goal of the present\n1\nhttp://www.ils.unc.edu/govstat\n2\nhttp://www.bls.gov\n151\nresearch is to develop another means of browsing the content\nof these collections. By analyzing the distribution of terms\nacross documents, our goal is to supplement the agency's\npre-existing information structures. Statistical learning technologies\nare appealing in this context insofar as they stand\nto define a data-driven--as opposed to an agency-driven-navigational\nstructure for a site.\nOur approach combines supervised and unsupervised learning\ntechniques. A pure document clustering [12] approach\nto such a large, diverse collection as BLS led to poor results\nin early tests [6]. But strictly supervised techniques [5] are\ninappropriate, too. Although BLS designers have defined\nhigh-level subject headings for their collections, as we discuss\nin Section 2, this scheme is less than optimal. 
Thus we\nhope to learn an additional set of concepts by letting the\ndata speak for themselves.\nThe remainder of this paper describes the details of our\nconcept discovery efforts and subsequent evaluation. In Section\n2 we describe the previously existing, human-created\nconceptual structure of the BLS website. This section also\ndescribes evidence that this structure leaves room for improvement\n. Next (Sections 35), we turn to a description\nof the concepts derived via content clustering under three\ndocument representations: keyword, title only, and full-text.\nSection 6 describes a two-part evaluation of the derived conceptual\nstructures. Finally, we conclude in Section 7 by outlining\nupcoming work on the project.\nSTRUCTURING ACCESS TO THE BLS WEBSITE\nThe Bureau of Labor Statistics is a federal government\nagency charged with compiling and publishing statistics pertaining\nto labor and production in the US and abroad. Given\nthis broad mandate, the BLS publishes a wide array of information\n, intended for diverse audiences.\nThe agency's\nwebsite acts as a clearinghouse for this process. With over\n15,000 text/html documents (and many more documents if\nspreadsheets and typeset reports are included), providing\naccess to the collection provides a steep challenge to information\narchitects.\n2.1\nThe Relation Browser\nThe starting point of this work is the notion that access\nto information in the BLS website could be improved by\nthe addition of a dynamic interface such as the relation\nbrowser described by Marchionini and Brunk [13]. The relation\nbrowser allows users to traverse complex data sets by\niteratively slicing the data along several topics. In Figure\n1 we see a prototype instantiation of the relation browser,\napplied to the FedStats website\n3\n.\nThe relation browser supports information seeking by allowing\nusers to form queries in a stepwise fashion, slicing and\nre-slicing the data as their interests dictate. Its motivation\nis in keeping with Shneiderman's suggestion that queries\nand their results should be tightly coupled [2]. Thus in Fig-3\nhttp://www.fedstats.gov\nFigure 1: Relation Browser Prototype\nure 1, users might limit their search set to those documents\nabout \"energy.\" Within this subset of the collection, they\nmight further eliminate documents published more than a\nyear ago. Finally, they might request to see only documents\npublished in PDF format.\nAs Marchionini and Brunk discuss, capturing the publication\ndate and format of documents is trivial. But successful\nimplementations of the relation browser also rely on topical\nclassification. This presents two stumbling blocks for system\ndesigners:\nInformation architects must define the appropriate set\nof topics for their collection\nSite maintainers must classify each document into its\nappropriate categories\nThese tasks parallel common problems in the metadata\ncommunity: defining appropriate elements and marking up\ndocuments to support metadata-aware information access.\nGiven a collection of over 15,000 documents, these hurdles\nare especially daunting, and automatic methods of approaching\nthem are highly desirable.\n2.2\nA Pre-Existing Structure\nPrior to our involvement with the project, designers at\nBLS created a shallow classificatory structure for the most\nimportant documents in their website. As seen in Figure 2,\nthe BLS home page organizes 65 \"top-level\" documents into\n15 categories. 
These include topics such as Employment and\nUnemployment, Productivity, and Inflation and Spending.\n152\nFigure 2: The BLS Home Page\nWe hoped initially that these pre-defined categories could\nbe used to train a 15-way document classifier, thus automating\nthe process of populating the relation browser altogether.\nHowever, this approach proved unsatisfactory. In personal\nmeetings, BLS officials voiced dissatisfaction with the existing\ntopics. Their form, it was argued, owed as much to\nthe institutional structure of BLS as it did to the inherent\ntopology of the website's information space. In other words,\nthe topics reflected official divisions rather than semantic\nclusters. The BLS agents suggested that re-designing this\nclassification structure would be desirable.\nThe agents' misgivings were borne out in subsequent analysis\n. The BLS topics comprise a shallow classificatory structure\n; each of the 15 top-level categories is linked to a small\nnumber of related pages. Thus there are 7 pages associated\nwith Inflation. Altogether, the link structure of this classificatory\nsystem contains 65 documents; that is, excluding\nnavigational links, there are 65 documents linked from the\nBLS home page, where each hyperlink connects a document\nto a topic (pages can be linked to multiple topics). Based on\nthis hyperlink structure, we defined M, a symmetric 65 65\nmatrix, where m\nij\ncounts the number of topics in which documents\ni and j are both classified on the BLS home page. To\nanalyze the redundancy inherent in the pre-existing structure\n, we derived the principal components of M (cf. [11]).\nFigure 3 shows the resultant scree plot\n4\n.\nBecause all 65 documents belong to at least one BLS topic,\n4\nA scree plot shows the magnitude of the k\nth\neigenvalue\nversus its rank. During principal component analysis scree\nplots visualize the amount of variance captured by each component\n.\n0\n10\n20\n30\n40\n50\n60\n0\n2\n4\n6\n8\n10\n12\n14\nEigenvalue Rank\nEigenvlue Magnitude\nFigure 3: Scree Plot of BLS Categories\nthe rank of M is guaranteed to be less than or equal to\n15 (hence, eigenvalues 16 . . . 65 = 0). What is surprising\nabout Figure 3, however, is the precipitous decline in magnitude\namong the first four eigenvalues. The four largest\neigenvlaues account for 62.2% of the total variance in the\ndata. This fact suggests a high degree of redundancy among\nthe topics. Topical redundancy is not in itself problematic.\nHowever, the documents in this very shallow classificatory\nstructure are almost all gateways to more specific information\n. Thus the listing of the Producer Price Index under\nthree categories could be confusing to the site's users. In\nlight of this potential for confusion and the agency's own request\nfor redesign, we undertook the task of topic discovery\ndescribed in the following sections.\nA HYBRID APPROACH TO TOPIC DISCOVERY\nTo aid in the discovery of a new set of high-level topics for\nthe BLS website, we turned to unsupervised machine learning\nmethods. In efforts to let the data speak for themselves,\nwe desired a means of concept discovery that would be based\nnot on the structure of the agency, but on the content of the\nmaterial. To begin this process, we crawled the BLS website\n, downloading all documents of MIME type text/html.\nThis led to a corpus of 15,165 documents. Based on this\ncorpus, we hoped to derive k 10 topical categories, such\nthat each document d\ni\nis assigned to one or more classes.\n153\nDocument clustering (cf. 
[16]) provided an obvious, but\nonly partial solution to the problem of automating this type\nof high-level information architecture discovery. The problems\nwith standard clustering are threefold.\n1. Mutually exclusive clusters are inappropriate for identifying\nthe topical content of documents, since documents\nmay be about many subjects.\n2. Due to the heterogeneity of the data housed in the\nBLS collection (tables, lists, surveys, etc.), many doc-uments'\nterms provide noisy topical information.\n3. For application to the relation browser, we require a\nsmall number (k 10) of topics. Without significant\ndata reduction, term-based clustering tends to deliver\nclusters at too fine a level of granularity.\nIn light of these problems, we take a hybrid approach to\ntopic discovery.\nFirst, we limit the clustering process to\na sample of the entire collection, described in Section 4.\nWorking on a focused subset of the data helps to overcome\nproblems two and three, listed above. To address the problem\nof mutual exclusivity, we combine unsupervised with\nsupervised learning methods, as described in Section 5.\nFOCUSING ON CONTENT-RICH DOCUMENTS\nTo derive empirically evidenced topics we initially turned\nto cluster analysis. Let A be the np data matrix with n observations\nin p variables. Thus a\nij\nshows the measurement\nfor the i\nth\nobservation on the j\nth\nvariable. As described\nin [12], the goal of cluster analysis is to assign each of the\nn observations to one of a small number k groups, each of\nwhich is characterized by high intra-cluster correlation and\nlow inter-cluster correlation. Though the algorithms for accomplishing\nsuch an arrangement are legion, our analysis\nfocuses on k-means clustering\n5\n, during which, each observation\no\ni\nis assigned to the cluster C\nk\nwhose centroid is closest\nto it, in terms of Euclidean distance. Readers interested in\nthe details of the algorithm are referred to [12] for a thorough\ntreatment of the subject.\nClustering by k-means is well-studied in the statistical\nliterature, and has shown good results for text analysis (cf.\n[8, 16]). However, k-means clustering requires that the researcher\nspecify k, the number of clusters to define. When\napplying k-means to our 15,000 document collection, indicators\nsuch as the gap statistic [17] and an analysis of\nthe mean-squared distance across values of k suggested that\nk 80 was optimal. This paramterization led to semantically\nintelligible clusters. However, 80 clusters are far too\nmany for application to an interface such as the relation\n5\nWe have focused on k-means as opposed to other clustering\nalgorithms for several reasons.\nChief among these is the\ncomputational efficiency enjoyed by the k-means approach.\nBecause we need only a flat clustering there is little to be\ngained by the more expensive hierarchical algorithms. In\nfuture work we will turn to model-based clustering [7] as a\nmore principled method of selecting the number of clusters\nand of representing clusters.\nbrowser. Moreover, the granularity of these clusters was un-suitably\nfine. For instance, the 80-cluster solution derived\na cluster whose most highly associated words (in terms of\nlog-odds ratio [1]) were drug, pharmacy, and chemist. 
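Concretely, each iteration of the procedure alternates an assignment step with a centroid update. The sketch below is a minimal Lloyd-style implementation applied to an arbitrary n x p matrix A (a random stand-in, not the BLS term-document matrix).

import numpy as np

def kmeans(A, k, iters=100, seed=0):
    # Minimal Lloyd's algorithm: assign each row of A to the nearest centroid
    # (Euclidean distance), then recompute centroids, until assignments settle.
    rng = np.random.default_rng(seed)
    centroids = A[rng.choice(len(A), size=k, replace=False)]
    labels = np.zeros(len(A), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(A[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = A[labels == j].mean(axis=0)
    return labels, centroids

A = np.random.default_rng(1).random((500, 20))   # stand-in for the document matrix
labels, _ = kmeans(A, k=10)

With the real term-document matrix and k near 80, this is the kind of run that produced very tight groupings such as the drug, pharmacy, and chemist cluster just described.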
These\nwords are certainly related, but they are related at a level\nof specificity far below what we sought.\nTo remedy the high dimensionality of the data, we resolved\nto limit the algorithm to a subset of the collection.\nIn consultation with employees of the BLS, we continued\nour analysis on documents that form a series titled From\nthe Editor's Desk\n6\n. These are brief articles, written by BLS\nemployees. BLS agents suggested that we focus on the Editor's\nDesk because it is intended to span the intellectual\ndomain of the agency. The column is published daily, and\neach entry describes an important current issue in the BLS\ndomain. The Editor's Desk column has been written daily\n(five times per week) since 1998. As such, we operated on a\nset of N = 1279 documents.\nLimiting attention to these 1279 documents not only reduced\nthe dimensionality of the problem. It also allowed\nthe clustering process to learn on a relatively clean data set.\nWhile the entire BLS collection contains a great deal of non-prose\ntext (i.e. tables, lists, etc.), the Editor's Desk documents\nare all written in clear, journalistic prose. Each document\nis highly topical, further aiding the discovery of term-topic\nrelations. Finally, the Editor's Desk column provided\nan ideal learning environment because it is well-supplied\nwith topical metadata. Each of the 1279 documents contains\na list of one or more keywords. Additionally, a subset\nof the documents (1112) contained a subject heading. This\nmetadata informed our learning and evaluation, as described\nin Section 6.1.\nCOMBINING SUPERVISED AND UNSUPERVISED LEARNING FOR TOPIC DISCOVERY\nTo derive suitably general topics for the application of a\ndynamic interface to the BLS collection, we combined document\nclustering with text classification techniques. Specif-ically\n, using k-means, we clustered each of the 1279 documents\ninto one of k clusters, with the number of clusters\nchosen by analyzing the within-cluster mean squared distance\nat different values of k (see Section 6.1). Constructing\nmutually exclusive clusters violates our assumption that\ndocuments may belong to multiple classes. However, these\nclusters mark only the first step in a two-phase process of\ntopic identification. At the end of the process, document-cluster\naffinity is measured by a real-valued number.\nOnce the Editor's Desk documents were assigned to clusters\n, we constructed a k-way classifier that estimates the\nstrength of evidence that a new document d\ni\nis a member\nof class C\nk\n. We tested three statistical classification techniques\n: probabilistic Rocchio (prind), naive Bayes, and support\nvector machines (SVMs). All were implemented using\nMcCallum's BOW text classification library [14]. Prind is a\nprobabilistic version of the Rocchio classification algorithm\n[9]. Interested readers are referred to Joachims' article for\n6\nhttp://www.bls.gov/opub/ted\n154\nfurther details of the classification method. Like prind, naive\nBayes attempts to classify documents into the most probable\nclass. It is described in detail in [15]. Finally, support\nvector machines were thoroughly explicated by Vapnik [18],\nand applied specifically to text in [10]. 
They define a decision\nboundary by finding the maximally separating hyperplane\nin a high-dimensional vector space in which document\nclasses become linearly separable.\nHaving clustered the documents and trained a suitable\nclassifier, the remaining 14,000 documents in the collection\nare labeled by means of automatic classification. That is, for\neach document d\ni\nwe derive a k-dimensional vector, quantifying\nthe association between d\ni\nand each class C\n1\n. . . C\nk\n.\nDeriving topic scores via naive Bayes for the entire 15,000-document\ncollection required less than two hours of CPU\ntime. The output of this process is a score for every document\nin the collection on each of the automatically discovered\ntopics. These scores may then be used to populate a\nrelation browser interface, or they may be added to a traditional\ninformation retrieval system. To use these weights in\nthe relation browser we currently assign to each document\nthe two topics on which it scored highest. In future work we\nwill adopt a more rigorous method of deriving document-topic\nweight thresholds. Also, evaluation of the utility of\nthe learned topics for users will be undertaken.\nEVALUATION OF CONCEPT DISCOVERY\nPrior to implementing a relation browser interface and\nundertaking the attendant user studies, it is of course important\nto evaluate the quality of the inferred concepts, and\nthe ability of the automatic classifier to assign documents\nto the appropriate subjects. To evaluate the success of the\ntwo-stage approach described in Section 5, we undertook\ntwo experiments. During the first experiment we compared\nthree methods of document representation for the clustering\ntask. The goal here was to compare the quality of document\nclusters derived by analysis of full-text documents,\ndocuments represented only by their titles, and documents\nrepresented by human-created keyword metadata. During\nthe second experiment, we analyzed the ability of the statistical\nclassifiers to discern the subject matter of documents\nfrom portions of the database in addition to the Editor's\nDesk.\n6.1\nComparing Document Representations\nDocuments from The Editor's Desk column came supplied\nwith human-generated keyword metadata. Additionally\n, The titles of the Editor's Desk documents tend to be\ngermane to the topic of their respective articles. With such\nan array of distilled evidence of each document's subject\nmatter, we undertook a comparison of document representations\nfor topic discovery by clustering. We hypothesized\nthat keyword-based clustering would provide a useful model.\nBut we hoped to see whether comparable performance could\nbe attained by methods that did not require extensive human\nindexing, such as the title-only or full-text representations\n. To test this hypothesis, we defined three modes of\ndocument representation--full-text, title-only, and keyword\nonly--we generated three sets of topics, T\nf ull\n, T\ntitle\n, and\nT\nkw\n, respectively.\nTopics based on full-text documents were derived by application\nof k-means clustering to the 1279 Editor's Desk documents\n, where each document was represented by a 1908-dimensional\nvector.\nThese 1908 dimensions captured the\nTF.IDF weights [3] of each term t\ni\nin document d\nj\n, for all\nterms that occurred at least three times in the data. To arrive\nat the appropriate number of clusters for these data, we\ninspected the within-cluster mean-squared distance for each\nvalue of k = 1 . . . 20. 
As k approached 10 the reduction in\nerror with the addition of more clusters declined notably,\nsuggesting that k 10 would yield good divisions. To select\na single integer value, we calculated which value of k led\nto the least variation in cluster size. This metric stemmed\nfrom a desire to suppress the common result where one large\ncluster emerges from the k-means algorithm, accompanied\nby several accordingly small clusters.\nWithout reason to\nbelieve that any single topic should have dramatically high\nprior odds of document membership, this heuristic led to\nk\nf ull\n= 10.\nClusters based on document titles were constructed simi-larly\n. However, in this case, each document was represented\nin the vector space spanned by the 397 terms that occur\nat least twice in document titles. Using the same method\nof minimizing the variance in cluster membership k\ntitle\nthe\nnumber of clusters in the title-based representationwas also\nset to 10.\nThe dimensionality of the keyword-based clustering was\nvery similar to that of the title-based approach. There were\n299 keywords in the data, all of which were retained. The\nmedian number of keywords per document was 7, where a\nkeyword is understood to be either a single word, or a multi-word\nterm such as \"consumer price index.\" It is worth noting\nthat the keywords were not drawn from any controlled vocabulary\n; they were assigned to documents by publishers at\nthe BLS. Using the keywords, the documents were clustered\ninto 10 classes.\nTo evaluate the clusters derived by each method of document\nrepresentation, we used the subject headings that were\nincluded with 1112 of the Editor's Desk documents. Each\nof these 1112 documents was assigned one or more subject\nheadings, which were withheld from all of the cluster applications\n. Like the keywords, subject headings were assigned\nto documents by BLS publishers. Unlike the keywords, however\n, subject headings were drawn from a controlled vocabulary\n. Our analysis began with the assumption that documents\nwith the same subject headings should cluster together\n. To facilitate this analysis, we took a conservative\napproach; we considered multi-subject classifications to be\nunique. Thus if document d\ni\nwas assigned to a single subject\nprices, while document d\nj\nwas assigned to two subjects,\ninternational comparisons, prices, documents d\ni\nand d\nj\nare\nnot considered to come from the same class.\nTable 1 shows all Editor's Desk subject headings that were\nassigned to at least 10 documents. As noted in the table,\n155\nTable 1: Top Editor's Desk Subject Headings\nSubject\nCount\nprices\n92\nunemployment\n55\noccupational safety & health\n53\ninternational comparisons, prices\n48\nmanufacturing, prices\n45\nemployment\n44\nproductivity\n40\nconsumer expenditures\n36\nearnings & wages\n27\nemployment & unemployment\n27\ncompensation costs\n25\nearnings & wages, metro. areas\n18\nbenefits, compensation costs\n18\nearnings & wages, occupations\n17\nemployment, occupations\n14\nbenefits\n14\nearnings & wage, regions\n13\nwork stoppages\n12\nearnings & wages, industries\n11\nTotal\n609\nTable 2:\nContingecy Table for Three Document\nRepresentations\nRepresentation\nRight\nWrong\nAccuracy\nFull-text\n392\n217\n0.64\nTitle\n441\n168\n0.72\nKeyword\n601\n8\n0.98\nthere were 19 such subject headings, which altogether covered\n609 (54%) of the documents with subjects assigned.\nThese document-subject pairings formed the basis of our\nanalysis. 
Limiting analysis to subjects with N > 10 kept\nthe resultant\n2\ntests suitably robust.\nThe clustering derived by each document representation\nwas tested by its ability to collocate documents with the\nsame subjects. Thus for each of the 19 subject headings\nin Table 1, S\ni\n, we calculated the proportion of documents\nassigned to S\ni\nthat each clustering co-classified. Further,\nwe assumed that whichever cluster captured the majority of\ndocuments for a given class constituted the \"right answer\"\nfor that class. For instance, There were 92 documents whose\nsubject heading was prices. Taking the BLS editors' classifications\nas ground truth, all 92 of these documents should\nhave ended up in the same cluster. Under the full-text representation\n52 of these documents were clustered into category\n5, while 35 were in category 3, and 5 documents were in category\n6. Taking the majority cluster as the putative right\nhome for these documents, we consider the accuracy of this\nclustering on this subject to be 52/92 = 0.56. Repeating\nthis process for each topic across all three representations\nled to the contingency table shown in Table 2.\nThe obvious superiority of the keyword-based clustering\nevidenced by Table 2 was borne out by a\n2\ntest on the\naccuracy proportions. Comparing the proportion right and\nTable 3: Keyword-Based Clusters\nbenefits\ncosts\ninternational\njobs\nplans\ncompensation\nimport\nemployment\nbenefits\ncosts\nprices\njobs\nemployees\nbenefits\npetroleum\nyouth\noccupations\nprices\nproductivity\nsafety\nworkers\nprices\nproductivity\nsafety\nearnings\nindex\noutput\nhealth\noperators\ninflation\nnonfarm\noccupational\nspending\nunemployment\nexpenditures\nunemployment\nconsumer\nmass\nspending\njobless\nwrong achieved by keyword and title-based clustering led to\np\n0.001. Due to this result, in the remainder of this paper,\nwe focus our attention on the clusters derived by analysis of\nthe Editor's Desk keywords. The ten keyword-based clusters\nare shown in Table 3, represented by the three terms most\nhighly associated with each cluster, in terms of the log-odds\nratio. Additionally, each cluster has been given a label by\nthe researchers.\nEvaluating the results of clustering is notoriously difficult.\nIn order to lend our analysis suitable rigor and utility, we\nmade several simplifying assumptions. Most problematic is\nthe fact that we have assumed that each document belongs\nin only a single category. This assumption is certainly false.\nHowever, by taking an extremely rigid view of what constitutes\na subject--that is, by taking a fully qualified and\noften multipart subject heading as our unit of analysis--we\nmitigate this problem. Analogically, this is akin to considering\nthe location of books on a library shelf. Although a\ngiven book may cover many subjects, a classification system\nshould be able to collocate books that are extremely similar,\nsay books about occupational safety and health. The most\nserious liability with this evaluation, then, is the fact that\nwe have compressed multiple subject headings, say prices :\ninternational into single subjects. This flattening obscures\nthe multivalence of documents. 
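The collocation test and the comparison of Table 2 are straightforward to reproduce. In the sketch below, subjects and clusters are hypothetical parallel lists for the 609 documents carrying one of the 19 frequent subject headings; the final chi-square call simply re-tests the keyword-versus-title proportions reported in Table 2.

from collections import Counter, defaultdict
from scipy.stats import chi2_contingency

def majority_cluster_accuracy(subjects, clusters):
    # For each subject heading, the cluster holding most of its documents is
    # taken as the "right answer"; documents outside it count as wrong.
    members = defaultdict(list)
    for s, c in zip(subjects, clusters):
        members[s].append(c)
    right = sum(Counter(cs).most_common(1)[0][1] for cs in members.values())
    return right, len(subjects) - right

# Keyword vs. title representation, using the counts from Table 2:
chi2, p, _, _ = chi2_contingency([[601, 8], [441, 168]])
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")     # p << 0.001, as reported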
We turn to a more realistic\nassessment of document-class relations in Section 6.2.\n6.2\nAccuracy of the Document Classifiers\nAlthough the keyword-based clusters appear to classify\nthe Editor's Desk documents very well, their discovery only\nsolved half of the problem required for the successful implementation\nof a dynamic user interface such as the relation\nbrowser. The matter of roughly fourteen thousand\nunclassified documents remained to be addressed. To solve\nthis problem, we trained the statistical classifiers described\nabove in Section 5.\nFor each document in the collection\nd\ni\n, these classifiers give p\ni\n, a k-vector of probabilities or distances\n(depending on the classification method used), where\np\nik\nquantifies the strength of association between the i\nth\ndocument and the k\nth\nclass. All classifiers were trained on\nthe full text of each document, regardless of the representation\nused to discover the initial clusters. The different\ntraining sets were thus constructed simply by changing the\n156\nTable 4: Cross Validation Results for 3 Classifiers\nMethod\nAv. Percent Accuracy\nSE\nPrind\n59.07\n1.07\nNaive Bayes\n75.57\n0.4\nSVM\n75.08\n0.68\nclass variable for each instance (document) to reflect its assigned\ncluster under a given model.\nTo test the ability of each classifier to locate documents\ncorrectly, we first performed a 10-fold cross validation on\nthe Editor's Desk documents. During cross-validation the\ndata are split randomly into n subsets (in this case n = 10).\nThe process proceeds by iteratively holding out each of the\nn subsets as a test collection for a model trained on the\nremaining n - 1 subsets. Cross validation is described in\n[15]. Using this methodology, we compared the performance\nof the three classification models described above. Table 4\ngives the results from cross validation.\nAlthough naive Bayes is not significantly more accurate\nfor these data than the SVM classifier, we limit the remainder\nof our attention to analysis of its performance.\nOur\nselection of naive Bayes is due to the fact that it appears to\nwork comparably to the SVM approach for these data, while\nbeing much simpler, both in theory and implementation.\nBecause we have only 1279 documents and 10 classes, the\nnumber of training documents per class is relatively small.\nIn addition to models fitted to the Editor's Desk data, then,\nwe constructed a fourth model, supplementing the training\nsets of each class by querying the Google search engine\n7\nand\napplying naive Bayes to the augmented test set. For each\nclass, we created a query by submitting the three terms\nwith the highest log-odds ratio with that class. Further,\neach query was limited to the domain www.bls.gov.\nFor\neach class we retrieved up to 400 documents from Google\n(the actual number varied depending on the size of the result\nset returned by Google).\nThis led to a training set\nof 4113 documents in the \"augmented model,\" as we call\nit below\n8\n. Cross validation suggested that the augmented\nmodel decreased classification accuracy (accuracy= 58.16%,\nwith standard error= 0.32). As we discuss below, however,\naugmenting the training set appeared to help generalization\nduring our second experiment.\nThe results of our cross validation experiment are encouraging\n. However, the success of our classifiers on the Editor's\nDesk documents that informed the cross validation study\nmay not be good predictors of the models' performance on\nthe remainder to the BLS website. 
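The 10-fold cross-validation protocol behind Table 4 can be sketched as follows; this uses scikit-learn estimators as stand-ins for the classifiers described in Section 5, with a hypothetical term matrix X and cluster labels y.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def ten_fold(model, X, y):
    # mean accuracy and its standard error over 10 folds
    scores = cross_val_score(model, X, y, cv=10)
    return scores.mean(), scores.std(ddof=1) / np.sqrt(len(scores))

# nb_acc, nb_se   = ten_fold(MultinomialNB(), X, y)
# svm_acc, svm_se = ten_fold(LinearSVC(), X, y)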
To test the generality\nof the naive Bayes classifier, we solicited input from 11 human\njudges who were familiar with the BLS website. The\nsample was chosen by convenience, and consisted of faculty\nand graduate students who work on the GovStat project.\nHowever, none of the reviewers had prior knowledge of the\noutcome of the classification before their participation. For\nthe experiment, a random sample of 100 documents was\ndrawn from the entire BLS collection. On average each re-7\nhttp://www.google.com\n8\nA more formal treatment of the combination of labeled and\nunlabeled data is available in [4].\nTable 5: Human-Model Agreement on 100 Sample\nDocs.\nHuman Judge 1\nst\nChoice\nModel\nModel 1\nst\nChoice\nModel 2\nnd\nChoice\nN. Bayes (aug.)\n14\n24\nN. Bayes\n24\n1\nHuman Judge 2\nnd\nChoice\nModel\nModel 1\nst\nChoice\nModel 2\nnd\nChoice\nN. Bayes (aug.)\n14\n21\nN. Bayes\n21\n4\nviewer classified 83 documents, placing each document into\nas many of the categories shown in Table 3 as he or she saw\nfit.\nResults from this experiment suggest that room for improvement\nremains with respect to generalizing to the whole\ncollection from the class models fitted to the Editor's Desk\ndocuments. In Table 5, we see, for each classifier, the number\nof documents for which it's first or second most probable\nclass was voted best or second best by the 11 human judges.\nIn the context of this experiment, we consider a first- or\nsecond-place classification by the machine to be accurate\nbecause the relation browser interface operates on a multi-way\nclassification, where each document is classified into\nmultiple categories. Thus a document with the \"correct\"\nclass as its second choice would still be easily available to\na user. Likewise, a correct classification on either the most\npopular or second most popular category among the human\njudges is considered correct in cases where a given document\nwas classified into multiple classes. There were 72 multi-class\ndocuments in our sample, as seen in Figure 4. The\nremaining 28 documents were assigned to 1 or 0 classes.\nUnder this rationale, The augmented naive Bayes classifier\ncorrectly grouped 73 documents, while the smaller model\n(not augmented by a Google search) correctly classified 50.\nThe resultant\n2\ntest gave p = 0.001, suggesting that increasing\nthe training set improved the ability of the naive\nBayes model to generalize from the Editor's Desk documents\nto the collection as a whole. However, the improvement afforded\nby the augmented model comes at some cost. In particular\n, the augmented model is significantly inferior to the\nmodel trained solely on Editor's Desk documents if we concern\nourselves only with documents selected by the majority\nof human reviewers--i.e. only first-choice classes. Limiting\nthe right answers to the left column of Table 5 gives p = 0.02\nin favor of the non-augmented model. For the purposes of\napplying the relation browser to complex digital library content\n(where documents will be classified along multiple categories\n), the augmented model is preferable. But this is not\nnecessarily the case in general.\nIt must also be said that 73% accuracy under a fairly\nliberal test condition leaves room for improvement in our\nassignment of topics to categories. We may begin to understand\nthe shortcomings of the described techniques by\nconsulting Figure 5, which shows the distribution of categories\nacross documents given by humans and by the augmented\nnaive Bayes model. 
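The first-or-second-choice criterion used in Table 5 can be made concrete with a simplified sketch: a document counts as correct when the model's two most probable classes overlap the judges' two most popular classes. The array names are hypothetical.

import numpy as np

def top2_agreement(model_probs, judge_votes):
    # model_probs: (n_docs, n_classes) probabilities from the classifier
    # judge_votes: (n_docs, n_classes) counts of judges assigning each class
    correct = 0
    for p, v in zip(model_probs, judge_votes):
        if set(np.argsort(p)[-2:]) & set(np.argsort(v)[-2:]):
            correct += 1
    return correct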
The majority of reviewers put documents into only three categories, jobs, benefits, and occupations. On the other hand, the naive Bayes classifier distributed classes more evenly across the topics.

Figure 4: Number of Classes Assigned to Documents by Judges (histogram of the number of human-assigned classes per document)

This behavior suggests areas for future improvement. Most importantly, we observed a strong correlation among the three most frequent classes among the human judges (for instance, there was 68% correlation between benefits and occupations). This suggests that improving the clustering to produce topics that were more nearly orthogonal might improve performance.

CONCLUSIONS AND FUTURE WORK
Many developers and maintainers of digital libraries share the basic problem pursued here. Given increasingly large, complex bodies of data, how may we improve access to collections without incurring extraordinary cost, and while also keeping systems receptive to changes in content over time? Data mining and machine learning methods hold a great deal of promise with respect to this problem. Empirical methods of knowledge discovery can aid in the organization and retrieval of information. As we have argued in this paper, these methods may also be brought to bear on the design and implementation of advanced user interfaces.
This study explored a hybrid technique for aiding information architects as they implement dynamic interfaces such as the relation browser. Our approach combines unsupervised learning techniques, applied to a focused subset of the BLS website. The goal of this initial stage is to discover the most basic and far-reaching topics in the collection.

Figure 5: Distribution of Classes Across Documents (two panels comparing human and machine classifications over categories such as jobs, prices, spending, and costs)

Based on a statistical model of these topics, the second phase of our approach uses supervised learning (in particular, a naive Bayes classifier, trained on individual words) to assign topical relations to the remaining documents in the collection.
In the study reported here, this approach has demonstrated promise. In its favor, our approach is highly scalable. It also appears to give fairly good results. Comparing three modes of document representation--full-text, title only, and keyword--we found 98% accuracy as measured by collocation of documents with identical subject headings. While it is not surprising that editor-generated keywords should give strong evidence for such learning, their superiority over full-text and titles was dramatic, suggesting that even a small amount of metadata can be very useful for data mining.
However, we also found evidence that learning topics from a subset of the collection may lead to overfitted models. After clustering 1279 Editor's Desk documents into 10 categories, we fitted a 10-way naive Bayes classifier to categorize the remaining 14,000 documents in the collection. While we saw fairly good results (classification accuracy of 75% with respect to a small sample of human judges), this experiment forced us to reconsider the quality of the topics learned by clustering. The high correlation among human judgments in our sample suggests that the topics discovered by analysis of the Editor's Desk were not independent.
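The inter-topic correlation noted above (for instance, the 68% figure for benefits and occupations) can be read off the judges' binary document-by-class matrix; a brief sketch, with A as a hypothetical 0/1 matrix of shape (documents x classes):

import numpy as np

def topic_correlations(A):
    # Pearson correlations between class-membership columns; large
    # off-diagonal entries indicate topics that are far from independent.
    return np.corrcoef(np.asarray(A, dtype=float), rowvar=False)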
While we do\nnot desire mutually exclusive categories in our setting, we\ndo desire independence among the topics we model.\nOverall, then, the techniques described here provide an\nencouraging start to our work on acquiring subject metadata\nfor dynamic interfaces automatically. It also suggests\nthat a more sophisticated modeling approach might yield\n158\nbetter results in the future. In upcoming work we will experiment\nwith streamlining the two-phase technique described\nhere.\nInstead of clustering documents to find topics and\nthen fitting a model to the learned clusters, our goal is to\nexpand the unsupervised portion of our analysis beyond a\nnarrow subset of the collection, such as The Editor's Desk.\nIn current work we have defined algorithms to identify documents\nlikely to help the topic discovery task. Supplied with\na more comprehensive training set, we hope to experiment\nwith model-based clustering, which combines the clustering\nand classification processes into a single modeling procedure.\nTopic discovery and document classification have long been\nrecognized as fundamental problems in information retrieval\nand other forms of text mining. What is increasingly clear,\nhowever, as digital libraries grow in scope and complexity,\nis the applicability of these techniques to problems at the\nfront-end of systems such as information architecture and\ninterface design. Finally, then, in future work we will build\non the user studies undertaken by Marchionini and Brunk\nin efforts to evaluate the utility of automatically populated\ndynamic interfaces for the users of digital libraries.\nREFERENCES\n[1] A. Agresti. An Introduction to Categorical Data\nAnalysis. Wiley, New York, 1996.\n[2] C. Ahlberg, C. Williamson, and B. Shneiderman.\nDynamic queries for information exploration: an\nimplementation and evaluation. In Proceedings of the\nSIGCHI conference on Human factors in computing\nsystems, pages 619626, 1992.\n[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern\nInformation Retrieval. ACM Press, 1999.\n[4] A. Blum and T. Mitchell. Combining labeled and\nunlabeled data with co-training. In Proceedings of the\neleventh annual conference on Computational learning\ntheory, pages 92100. ACM Press, 1998.\n[5] H. Chen and S. Dumais. Hierarchical classification of\nweb content. In Proceedings of the 23rd annual\ninternational ACM SIGIR conference on Research and\ndevelopment in information retrieval, pages 256263,\n2000.\n[6] M. Efron, G. Marchionini, and J. Zhang. Implications\nof the recursive representation problem for automatic\nconcept identification in on-line governmental\ninformation. In Proceedings of the ASIST Special\nInterest Group on Classification Research (ASIST\nSIG-CR), 2003.\n[7] C. Fraley and A. E. Raftery. How many clusters?\nwhich clustering method? answers via model-based\ncluster analysis. The Computer Journal,\n41(8):578588, 1998.\n[8] A. K. Jain, M. N. Murty, and P. J. Flynn. Data\nclustering: a review. ACM Computing Surveys,\n31(3):264323, September 1999.\n[9] T. Joachims. A probabilistic analysis of the Rocchio\nalgorithm with TFIDF for text categorization. In\nD. H. Fisher, editor, Proceedings of ICML-97, 14th\nInternational Conference on Machine Learning, pages\n143151, Nashville, US, 1997. Morgan Kaufmann\nPublishers, San Francisco, US.\n[10] T. Joachims. Text categorization with support vector\nmachines: learning with many relevant features. In\nC. N\nedellec and C. 
Rouveirol, editors, Proceedings of\nECML-98, 10th European Conference on Machine\nLearning, pages 137142, Chemnitz, DE, 1998.\nSpringer Verlag, Heidelberg, DE.\n[11] I. T. Jolliffe. Principal Component Analysis. Springer,\n2nd edition, 2002.\n[12] L. Kaufman and P. J. Rosseeuw. Finding Groups in\nData: an Introduction to Cluster Analysis. Wiley,\n1990.\n[13] G. Marchionini and B. Brunk. Toward a general\nrelation browser: a GUI for information architects.\nJournal of Digital Information, 4(1), 2003.\nhttp://jodi.ecs.soton.ac.uk/Articles/v04/i01/Marchionini/.\n[14] A. K. McCallum. Bow: A toolkit for statistical\nlanguage modeling, text retrieval, classification and\nclustering. http://www.cs.cmu.edu/~mccallum/bow,\n1996.\n[15] T. Mitchell. Machine Learning. McGraw Hill, 1997.\n[16] E. Rasmussen. Clustering algorithms. In W. B. Frakes\nand R. Baeza-Yates, editors, Information Retrieval:\nData Structures and Algorithms, pages 419442.\nPrentice Hall, 1992.\n[17] R. Tibshirani, G. Walther, and T. Hastie. Estimating\nthe number of clusters in a dataset via the gap\nstatistic, 2000.\nhttp://citeseer.nj.nec.com/tibshirani00estimating.html.\n[18] V. N. Vapnik. The Nature of Statistical Learning\nTheory. Springer, 2000.\n159\n", "keywords": "information architecture;Information Architecture;BLS;digital libraries;document classification;Machine Learning;topic discovery;document representation;Interface Design;clustering;relational browser"} {"name": "133", "title": "Machine Learning in DNA Microarray Analysis for Cancer Classification", "abstract": "The development of microarray technology has supplied a large volume of data to many fields. In particular, it has been applied to prediction and diagnosis of cancer, so that it expectedly helps us to exactly predict and diagnose cancer. To precisely classify cancer we have to select genes related to cancer because extracted genes from microarray have many noises. In this paper, we attempt to explore many features and classifiers using three benchmark datasets to systematically evaluate the performances of the feature selection methods and machine learning classifiers. Three benchmark datasets are Leukemia cancer dataset, Colon cancer dataset and Lymphoma cancer data set. Pearson's and Spearman's correlation coefficients, Euclidean distance, cosine coefficient, information gain, mutual information and signal to noise ratio have been used for feature selection. Multi-layer perceptron, k-nearest neighbour, support vector machine and structure adaptive selforganizing map have been used for classification. Also, we have combined the classifiers to improve the performance of classification. Experimental results show that the ensemble with several basis classifiers produces the best recognition rate on the benchmark dataset.", "fulltext": "Introduction\nThe need to study whole genome such as Human\nGenomic Project (HGP) is recently increasing because\nfragmentary knowledge about life phenomenon with\ncomplex control functions of molecular-level is limited.\nDNA chips have been developed during that process\nbecause understanding the functions of genome\nsequences is essential at that time.\nThe development of DNA microarray technology has\nbeen produced large amount of gene data and has made it\neasy to monitor the expression patterns of thousands of\ngenes simultaneously under particular experimental\nenvironments and conditions (Harrington et al. 
2000).\nAlso, we can analyze the gene information very rapidly\nand precisely by managing them at one time (Eisen et al.\n1999).\nMicroarray technology has been applied to the field of\naccurate prediction and diagnosis of cancer and expected\n\nthat it would help them. Especially accurate classification\nof cancer is very important issue for treatment of cancer.\nMany researchers have been studying many problems of\ncancer classification using gene expression profile data\nand attempting to propose the optimal classification\ntechnique to work out these problems (Dudoit et al. 2000,\nBen-Dor et al. 2000) as shown in Table . Some\nproduce better results than others, but there have been\nstill no comprehensive work to compare the possible\nfeature selection methods and classifiers. We need a\nthorough effort to give the evaluation of the possible\nmethods to solve the problems of analyzing gene\nexpression data.\nThe gene expression data usually consist of huge number\nof genes, and the necessity of tools analysing them to get\nuseful information gets radical. There is research that\nsystematically analyzes the results of test using a variety\nof feature selection methods and classifiers for selecting\ninformative genes to help classification of cancer and\nclassifying cancer (Ryu et al. 2002). However, the results\nwere not verified enough because only one benchmark\ndataset was used. Due to the reason, it is necessary to\nanalyse systematically the performance of classifiers\nusing a variety of benchmark datasets.\nIn this paper, we attempt to explore many features and\nclassifiers that precisely classify cancer using three\nrecently published benchmark dataset. We adopted seven\nfeature selection methods and four classifiers, which are\ncommonly used in the field of data mining and pattern\nrecognition. Feature selection methods include Pearson's\nand Spearman's correlation coefficients, Euclidean\ndistance, cosine coefficient, information gain, mutual\ninformation and signal to noise ratio. Also, classification\nmethods include multi-layer perceptron (MLP), k-nearest\nneighbour (KNN), support vector machine (SVM) and\nstructure adaptive self organizing map (SOM). We also\nattempt to combine some of the classifiers with majority\nvoting to improve the performance of classification.\n\nBackgrounds\nDNA arrays consist of a large number of DNA molecules\nspotted in a systemic order on a solid substrate.\nDepending on the size of each DNA spot on the array,\nDNA arrays can be categorized as microarrays when the\ndiameter of DNA spot is less than 250 microns, and\nmacroarrays when the diameter is bigger than 300\nmicrons. The arrays with the small solid substrate are also\nreferred to as DNA chips. It is so powerful that we can\ninvestigate the gene information in short time, because at\nleast hundreds of genes can be put on the DNA\nmicroarray to be analyzed.\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nSam ple\nGe\nne\ns\nALL\nAML\n1000\n2000\n3000\n4000\n5000\n6000\n7129\nGene expression data\nDNA microarray\nImage scanner\n\nFig. 1. General process of acquiring the gene expression\ndata from DNA microarray\nDNA microarrays are composed of thousands of\nindividual DNA sequences printed in a high density array\non a glass microscope slide using a robotic arrayer as\nshown in Fig. 1. The relative abundance of these spotted\nDNA sequences in two DNA or RNA samples may be\nassessed by monitoring the differential hybridization of\nthe two samples to the sequences on the array. 
For mRNA samples, the two samples are reverse-transcribed into cDNA, labeled using different fluorescent dyes (the red-fluorescent dye Cy5 and the green-fluorescent dye Cy3), and mixed. After the hybridization of these samples with the arrayed DNA probes, the slides are imaged using a scanner that makes fluorescence measurements for each dye. The log ratio between the two intensities of each dye is used as the gene expression data (Lashkari et al. 1997, Derisi et al. 1997, Eisen et al. 1998).

gene\_expression = \log_2 \frac{\mathrm{Int}(Cy5)}{\mathrm{Int}(Cy3)}    (1)

where Int(Cy5) and Int(Cy3) are the intensities of the red and green colors. Since at least hundreds of genes are put on the DNA microarray, it is so helpful that we can investigate the genome-wide information in a short time.

Table . Relevant works on cancer classification (accuracy in %)
Furey et al. (signal to noise ratio; SVM): Leukemia 94.1, Colon 90.3
Li et al. 2000 (model selection with Akaike information criterion and Bayesian information criterion, with logistic regression): Leukemia 94.1
Li et al. 2001 (Genetic Algorithm; KNN): Lymphoma 84.6~, Colon 94.1~
Ben-Dor et al. (all genes, TNoM score): nearest neighbor: Leukemia 91.6, Colon 80.6; SVM with quadratic kernel: Leukemia 94.4, Colon 74.2; AdaBoost: Leukemia 95.8, Colon 72.6
Dudoit et al. (ratio of between-groups to within-groups sum of squares): nearest neighbor: Leukemia 95.0~, Lymphoma 95.0~; diagonal linear discriminant analysis: Leukemia 95.0~, Lymphoma 95.0~; BoostCART: Leukemia 95.0~, Lymphoma 90.0~
Nguyen et al.: principal component analysis with logistic discriminant: Leukemia 94.2, Lymphoma 98.1, Colon 87.1; principal component analysis with quadratic discriminant analysis: Leukemia 95.4, Lymphoma 97.6, Colon 87.1; partial least squares with logistic discriminant: Leukemia 95.9, Lymphoma 96.9, Colon 93.5; partial least squares with quadratic discriminant analysis: Leukemia 96.4, Lymphoma 97.4, Colon 91.9

2.2 Related Works
It is essential to efficiently analyze DNA microarray data because the amount of DNA microarray data is usually very large. The analysis of DNA microarray data is divided into four branches: clustering, classification, gene identification, and gene regulatory network modeling. Many machine learning and data mining methods have been applied to solve them.
Information theory (Fuhrman et al. 2000) has been applied to the gene identification problem. Also, boolean networks (Thieffry et al. 1998), Bayesian networks (Friedman et al. 2000), and reverse engineering methods (Arkin et al. 1997) have been applied to the gene regulatory network modeling problem.
Several machine learning techniques have been previously used in classifying gene expression data, including Fisher linear discriminant analysis (Dudoit et al. 2000), k nearest neighbour (Li et al. 2001), decision tree, multi-layer perceptron (Khan et al. 2001, Xu et al. 2002), support vector machine (Furey et al. 2000, Brown et al. 2000), boosting, and self-organizing map (Golub et al. 1999). Also, many machine learning techniques have been used in clustering gene expression data (Shamir 2001). They include hierarchical clustering (Eisen et al. 1998), self-organizing map (Tamayo et al. 1999), and graph theoretic approaches (Hartuv et al. 2000, Ben-Dor et al. 1999, Sharan et al.
2000)\nThe first approach, classification method, is called\nsupervised method while the second approach, clustering\nmethod, is called unsupervised method. Clustering\nmethods do not use any tissue annotation (e.g., tumor vs.\nnormal) in the partitioning step. In contrast, classification\nmethods attempt to predict the classification of new\ntissues, based on their gene expression profiles after\ntraining on examples (training data) that have been\nclassified by an external \"supervision\" (Ben-Dor et al.\n2000). Table shows relevant works on cancer\nclassification.\n\nMachine Learning for DNA Microarray\nWe define machine learning for DNA microarray that\nselects discriminative genes related with classification\nfrom gene expression data, trains classifier and then\nclassifies new data using learned classifier. The system is\nas shown in Fig. 2. After acquiring the gene expression\ndata calculated from the DNA microarray, our prediction\nsystem has 2 stages: feature selection and pattern\nclassification stages.\nThe feature selection can be thought of as the gene\nselection, which is to get the list of genes that might be\ninformative for the prediction by statistical, information\ntheoretical methods, etc. Since it is highly unlikely that\nall the 7,129 genes have the information related to the\ncancer and using all the genes results in too big\ndimensionality, it is necessary to explore the efficient\nway to get the best feature. We have extracted 25 genes\nusing seven methods described in Section 3.1, and the\ncancer predictor classifies the category only with these\ngenes.\nGiven the gene list, a classifier makes decision to which\ncategory the gene pattern belongs at prediction stage. We\nhave adopted four most widely used classification\nmethods and an ensemble classifier as shown in Fig. 2.\n\nFeature selection\nTumor\nNormal\nCancer predictor\nPearson's correlation coefficient\nSpearman's correlation coefficient\nEuclidean distance\nCosine coefficient\nInformation gain\nMutual information\nSignal to noise ratio\n3-layered MLP with backpropagation\nk-nearest neighbor\nSupport vector machine\nStructure adaptive self-organizing map\nEnsemble classifier\nMicroarray\nExpression data\n\nFig. 2. Cancer classification system\n3.1 Gene Selection\nAmong thousands of genes whose expression levels are\nmeasured, not all are needed for classification.\nMicroarray data consist of large number of genes in small\nsamples. We need to select some genes highly related\nwith particular classes for classification, which is called\ninformative genes (Golub et al. 1999). This process is\nreferred to as gene selection. It is also called feature\nselection in machine learning.\nUsing the statistical correlation analysis, we can see the\nlinear relationship and the direction of relation between\ntwo variables. Correlation coefficient r varies from 1 to\n+1, so that the data distributed near the line biased to (+)\ndirection will have positive coefficients, and the data near\nthe line biased to (-) direction will have negative\ncoefficients.\nSuppose that we have a gene expression pattern g\ni\n(i = 1 ~\n7,129 in Leukemia data, i = 1 ~ 2,000 in Colon data, i = 1\n~ 4,026 in Lymphoma data)\n.\nEach g\ni\nis a vector of gene\nexpression levels from N samples, g\ni\n= (e\n1\n, e\n2\n, e\n3\n, ..., e\nN\n).\nThe first M elements (e\n1\n, e\n2\n, ..., e\nM\n) are examples of\ntumor samples, and the other N-M (e\nM+1\n, e\nM+2\n, ..., e\nN\n) are\nthose from normal samples. 
An ideal gene pattern that belongs to the tumor class is defined by g_ideal_tumor = (1, 1, ..., 1, 0, ..., 0), so that all the elements from tumor samples are 1 and the others are 0. In this paper, we have calculated the correlation coefficient between this g_ideal and the expression pattern of each gene. When we have two vectors X and Y that contain N elements, r_Pearson and r_Spearman are calculated as follows:

r_{Pearson} = \frac{\sum XY - \frac{\sum X \sum Y}{N}}{\sqrt{\left(\sum X^{2} - \frac{(\sum X)^{2}}{N}\right)\left(\sum Y^{2} - \frac{(\sum Y)^{2}}{N}\right)}}    (2)

r_{Spearman} = 1 - \frac{6 \sum (D_x - D_y)^{2}}{N(N^{2} - 1)}    (3)

where D_x and D_y are the rank matrices of X and Y, respectively.
The similarity between two input vectors X and Y can also be thought of as a distance. Distance is a measure of how far the two vectors are located, and the distance between g_ideal_tumor and g_i tells us how likely g_i is to belong to the tumor class. Calculating the distance between them, if it is bigger than a certain threshold, the gene g_i would belong to the tumor class; otherwise g_i belongs to the normal class. In this paper, we have adopted the Euclidean distance (r_Euclidean) and the cosine coefficient (r_Cosine), represented by the following equations:

r_{Euclidean} = \sqrt{\sum (X - Y)^{2}}    (4)

r_{Cosine} = \frac{\sum XY}{\sqrt{\sum X^{2} \sum Y^{2}}}    (5)

We have also utilized the information gain and mutual information that are widely used in many fields such as text categorization and data mining. If we count the number of genes excited (P(g_i)) or not excited (P(\bar{g}_i)) in category c_j (P(c_j)), the coefficients of the information gain and mutual information become as follows:

IG(g_i, c_j) = P(g_i, c_j) \log \frac{P(g_i, c_j)}{P(c_j) P(g_i)} + P(\bar{g}_i, c_j) \log \frac{P(\bar{g}_i, c_j)}{P(c_j) P(\bar{g}_i)}    (6)

MI(g_i, c_j) = \log \frac{P(g_i, c_j)}{P(g_i) P(c_j)}    (7)

Mutual information tells us the dependency relationship between two probabilistic variables or events. If two events are completely independent, the mutual information is 0. The more they are related, the higher the mutual information gets. Information gain is used when the features of samples are extracted by inducing the relationship between gene and class from the presence frequency of the gene in the sample. Information gain measures the goodness of a gene using its presence and absence within the corresponding class.
For each gene g_i, some expression values are from tumor samples and some are from normal samples. If we calculate the mean \mu and standard deviation \sigma of the gene expression values within each class, the signal to noise ratio of gene g_i, SN(g_i), is defined by:

SN(g_i) = \frac{\mu_{tumor}(g_i) - \mu_{normal}(g_i)}{\sigma_{tumor}(g_i) + \sigma_{normal}(g_i)}    (8)
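To illustrate how such scores drive gene selection, the following sketch (ours, not the authors' code) ranks genes either by Pearson correlation with the ideal pattern or by the signal-to-noise ratio of equation (8), and keeps the 25 top-ranked genes; expr is a hypothetical genes-by-samples matrix and labels marks tumor (1) versus normal (0) samples.

import numpy as np

def select_genes(expr, labels, n_genes=25, method="pearson"):
    labels = np.asarray(labels, dtype=float)
    if method == "pearson":
        ideal = labels   # g_ideal_tumor: 1 on tumor samples, 0 on normal samples
        scores = np.array([np.corrcoef(g, ideal)[0, 1] for g in expr])
    elif method == "sn":  # signal-to-noise ratio, equation (8)
        tumor, normal = expr[:, labels == 1], expr[:, labels == 0]
        scores = (tumor.mean(axis=1) - normal.mean(axis=1)) / (
            tumor.std(axis=1) + normal.std(axis=1))
    else:
        raise ValueError(method)
    return np.argsort(scores)[::-1][:n_genes]   # indices of the top-ranked genes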
3.2 Classification
Many algorithms designed for solving classification problems in machine learning have been applied to recent research on prediction and classification of cancer with gene expression data. The general process of classification in machine learning is to train a classifier to accurately recognize patterns from given training samples and to classify test samples with the trained classifier. Representative classification algorithms such as the multi-layer perceptron, k-nearest neighbour, support vector machine, and structure-adaptive self-organizing map are applied to the classification.

1) MLP
The error backpropagation neural network is a feed-forward multilayer perceptron (MLP) that is applied in many fields due to its powerful and stable learning algorithm (Lippman et al. 1987). The neural network learns the training examples by adjusting the synaptic weights of neurons according to the error occurred on the output layer. The power of the backpropagation algorithm lies in two main aspects: it is local for updating the synaptic weights and biases, and efficient for computing all the partial derivatives of the cost function with respect to these free parameters (Beale 1996). The weight-update rule in the backpropagation algorithm is defined as follows:

\Delta w_{ji}(n) = \eta \delta_j x_{ji} + \alpha \Delta w_{ji}(n-1)    (9)

where \Delta w_{ji}(n) is the weight update performed during the nth iteration through the main loop of the algorithm, \eta is a positive constant called the learning rate, \delta_j is the error term associated with unit j, x_{ji} is the input from node i to unit j, and 0 \le \alpha < 1 is a constant called the momentum.

2) KNN
The k-nearest neighbor (KNN) is one of the most common methods among memory based induction. Given an input vector, KNN extracts the k closest vectors in the reference set based on similarity measures, and makes a decision on the label of the input vector using the labels of the k nearest neighbors.
Pearson's correlation coefficient and Euclidean distance have been used as the similarity measures. When we have an input X and a reference set D = {d_1, d_2, ..., d_N}, the probability that X may belong to class c_j, P(X, c_j), is defined as follows:

P(X, c_j) = \sum_{d_i \in kNN} \mathrm{Sim}(X, d_i) P(d_i, c_j) - b_j    (10)

where Sim(X, d_i) is the similarity between X and d_i, and b_j is a bias term.

3) SASOM
The self-organizing map (SOM) defines a mapping from the input space onto an output layer by an unsupervised learning algorithm. SOM has an output layer consisting of N nodes, each of which represents a vector that has the same dimension as the input pattern. For a given input vector X, the winner node m_c is chosen using the Euclidean distance between x and its neighbors, m_i:

\| x - m_c \| = \min_i \| x - m_i \|    (11)

m_i(t+1) = m_i(t) + \alpha(t) \, n_{ci}(t) \{ x(t) - m_i(t) \}    (12)

Even though SOM is well known for its good topology-preserving performance, it is difficult to apply it to practical classification since the topology should be fixed before training. A structure adaptive self-organizing map (SASOM) is proposed to overcome this shortcoming (Kim et al. 2000). SASOM starts with a 4x4 map, and dynamically splits the output nodes of the map where the data from different classes are mixed, trained with the LVQ learning algorithm. Fig. 3 illustrates the algorithm of SASOM.

Fig. 3. Overview of SASOM (initialize the map as 4x4; train with Kohonen's algorithm; find nodes whose hit_ratio is less than 95.0%; split those nodes into 2x2 submaps; train the split nodes with the LVQ algorithm; remove nodes not participating in learning; repeat this structure adaptation until the stop condition is satisfied)

4) SVM
The support vector machine (SVM) estimates the function classifying the data into two classes (Vapnik 1995, Moghaddam et al. 2000). SVM builds up a hyperplane as the decision surface in such a way as to maximize the margin of separation between positive and negative examples. SVM achieves this by the structural risk minimization principle: the error rate of a learning machine on the test data is bounded by the sum of the training-error rate and a term that depends on the Vapnik-Chervonenkis (VC) dimension. Given a labeled set of M training samples (X_i, Y_i), where X_i \in R^N and Y_i is the associated label, Y_i \in \{-1, 1\}, the discriminant hyperplane is defined by:

f(X) = \sum_{i=1}^{M} Y_i \alpha_i k(X, X_i) + b    (13)

where k(.) is a kernel function and the sign of f(X) determines the membership of X. Constructing an optimal hyperplane is equivalent to finding all the nonzero \alpha_i (support vectors) and a bias b. We have used the SVM_light module and SVM with an RBF kernel in this paper.

5) Ensemble classifier
Classification can be defined as the process of approximating the I/O mapping from the given observation to the optimal solution. Generally, classification tasks consist of two parts: feature selection and classification. Feature selection is a transformation process of observations to obtain the best pathway to the optimal solution. Therefore, considering multiple features encourages obtaining various candidate solutions, so that we can estimate a solution closer to the optimal than any single local optimum.
When we have multiple features available, it is important to know which of the features should be used. Theoretically, the more features we consider, the more effective the classifier may be at solving the problem. But features that have overlapped feature spaces may cause the redundancy of irrelevant information and result in a counter effect such as overfitting. Therefore, it is more important to explore and utilize independent features to train classifiers than to increase the number of features we use. Correlation between feature sets can be induced from the distribution of feature numbers, or using mathematical analysis with statistics.
Meanwhile, there are many algorithms for classification in the machine learning approach, but none of them is perfect, and it is always difficult to decide what to use and how to set up its parameters. Depending on the environment the classifier is embedded in, some algorithms work well and others do not. This is because, depending on the algorithms, features and parameters used, the classifier searches a different solution space. These sets of classifiers produce their own outputs, and enable the ensemble classifier to explore a wider solution space.
We have applied this idea to a classification framework as shown in Fig. 4. If there are k features and n classifiers, there are kn feature-classifier combinations. There are {}_{kn}C_m possible ensemble classifiers when m feature-classifier combinations are selected for the ensemble classifier (a small illustrative sketch of the majority-voting combination is given below).
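As a concrete illustration of the combination scheme referred to above, a minimal sketch of majority voting over m selected feature-classifier pairs is given here; the members list and the scikit-learn-style predict interface are our own assumptions.

from collections import Counter

def ensemble_predict(members, x):
    # members: trained (feature_selector, classifier) pairs; feature_selector
    # maps the raw expression vector x to the gene subset its classifier expects.
    votes = [clf.predict([select(x)])[0] for select, clf in members]
    return Counter(votes).most_common(1)[0][0]   # majority-voted class label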
Then classifiers are trained using the features\nselected, finally a majority voting is accompanied to\ncombine the outputs of these classifiers. After classifiers\nwith some features are trained independently produce\ntheir own outputs, final answer will be judged by a\ncombining module, where the majority voting method is\nadopted.\nClass\ni\nClass\n1\nFeature\nextractor\nMajority\nVoting\nInput\npattern\n...\n...\nClassifier\nn1\nClassifier\nnk\n...\nClassifier\na1\nClassifier\nak\n...\nClassifier\na2\nClassifier\nn2\nFeature 1\nFeature k\nFeature 2\n...\n...\n...\n..\n.\nFig 4. Overview of the ensemble classifier\nExperimental Results\nThere are several microarray datasets from published\ncancer gene expression studies, including leukemia\ncancer dataset, colon cancer dataset, lymphoma dataset,\nbreast cancer dataset, NCI60 dataset, and ovarian cancer\ndataset. Among them three datasets are used in this paper.\nThe first dataset and third dataset involve samples from\ntwo variants of the same disease and second dataset\ninvolves tumor and normal samples of the same tissue.\nBecause the benchmark data have been studied in many\npapers, we can compare the results of this paper with\nothers.\n1) Leukemia cancer dataset\nLeukemia dataset consists of 72 samples: 25 samples of\nacute myeloid leukemia (AML) and 47 samples of acute\nlymphoblastic leukemia (ALL). The source of the gene\nexpression measurements was taken form 63 bone\nmarrow samples and 9 peripheral blood samples. Gene\nexpression levels in these 72 samples were measured\nusing high density oligonucleotide microarrays (Ben-Dor\net al. 2000).\n38 out of 72 samples were used as training data and the\nremaining were used as test data in this paper. Each\nsample contains 7129 gene expression levels.\n2) Colon cancer dataset\nColon dataset consists of 62 samples of colon epithelial\ncells taken from colon-cancer patients. Each sample\ncontains 2000 gene expression levels. Although original\ndata consists of 6000 gene expression levels, 4000 out of\n6000 were removed based on the confidence in the\nmeasured expression levels. 40 of 62 samples are colon\ncancer samples and the remaining are normal samples.\nEach sample was taken from tumors and normal healthy\nparts of the colons of the same patients and measured\nusing high density oligonucleotide arrays (Ben-Dor et al.\n2000).\n31 out of 62 samples were used as training data and the\nremaining were used as test data in this paper.\n3) Lymphoma cancer dataset\nB cell diffuse large cell lymphoma (B-DLCL) is a\nheterogeneous group of tumors, based on significant\nvariations in morphology, clinical presentation, and\nresponse to treatment. Gene expression profiling has\nrevealed two distinct tumor subtypes of B-DLCL:\ngerminal center B cell-like DLCL and activated B\ncell-like DLCL (Lossos et al. 2000). Lymphoma dataset\nconsists of 24 samples of GC B-like and 23 samples of\nactivated B-like.\n22 out of 47 samples were used as training data and the\nremaining were used as test data in this paper.\n4.2 Environments\nFor feature selection, each gene is scored based on the\nfeature selection methods described in Section 3.1, and\nthe 25 top-ranked genes are chosen as the feature of the\ninput pattern.\nFor classification, we have used 3-layered MLP with\n5~15 hidden nodes, 2 output nodes, 0.01~0.50 of learning\nrate and 0.9 of momentum. KNN has been used with\nk=1~8. Similarity measures used in KNN are Pearson's\ncorrelation coefficient and Euclidean distance. 
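For the KNN settings just listed, Pearson correlation can be plugged in as the similarity measure by turning it into a distance; a small sketch using scikit-learn's KNeighborsClassifier with a callable metric (an assumption of ours, not the implementation used here):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def pearson_distance(a, b):
    return 1.0 - np.corrcoef(a, b)[0, 1]   # highly correlated profiles become close

knn = KNeighborsClassifier(n_neighbors=3, metric=pearson_distance)  # k = 1..8 were tried above
# knn.fit(X_train, y_train); knn.predict(X_test)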
SASOM\nhas been used by 44 map with rectangular topology,\n0.05 of initial learning rate, 1000 of initial learning length,\n10 of initial radius, 0.02 of final learning rate, 10000 of\nfinal learning length and 3 of final radius. We have used\nSVM with linear function and RBF function as kernel\nfunction. In RBF, we have changed 0.1~0.5 gamma\nvariable.\n4.3 Analysis of results\nTable shows the IDs of genes overlapped by\nPearson's correlation coefficient, cosine coefficient,\nEuclidean distance in each dataset. Among these genes\nthere are some genes overlapped by other feature\nselection methods. For example, gene 2288 of leukemia\nhas been third-ranked in information gain. The number of\noverlapped genes of leukemia dataset is 17. The number\nof overlapped genes of colon dataset is 9. The number of\noverlapped genes of lymphoma dataset is 19. These\noverlapped genes are very informative. In particular,\nZyxin, gene 4847 of leukemia, has been reported as\ninformative (Golub et al. 1999), but there are no genes\nappeared commonly in every method.\n\nTable . The IDs of genes overlapped by Pearson's\ncorrelation coefficient, cosine coefficient, and Euclidean\ndistance\n461 1249 1745 1834 2020\n2043 2242 2288 3258 3320\n4196 4847 5039 6200 6201\nLeukemia\n6373 6803\n\n\n187\n619\n704\n767 1060\nColon\n1208 1546 1771 1772\n36 75 76 77 86\n86 678 680 1636\n1637\n2225 2243 2263 2412 2417\nLymphoma\n2467 3890 3893 3934\n\nFig. 5 shows the expression level of genes chosen by\nPearson's correlation coefficient method in Leukemia\ndataset. 1~27 samples are ALL and 28~38 samples are\nAML. The differences of brightness between AML and\nALL represent that genes chosen by Pearson's correlation\ncoefficient method divide samples into AML and ALL.\nThe results of recognition rate on the test data are as\nshown in Tables , , and . Column is the list of\nfeature selection methods: Pearson's correlation\ncoefficient (PC), Spearman's correlation coefficient (SC),\nEuclidean distance (ED), cosine coefficient (CC),\ninformation gain (IG), mutual information (MI), and\nsignal to noise ratio (SN). KNN\nPearson\nand MLP seem to\nproduce the best recognition rate among the classifiers on\nthe average. KNN\nPearson\nis better than KNN\ncosine\n. SVM is\npoorer than any other classifiers.\n0\n0.2\n0.4\n0.6\n0.8\n1\nSample\nGe\nn\ne\n27\n38\n5\n10\n15\n20\n25\n\nFig. 5. Expression level of genes chosen by\nr\nPearson\nin\nLeukemia dataset\n\nTable . Recognition rate with features and classifiers\n(%) in Leukemia dataset\nSVM KNN\nMLP\nSASOM\nLinear\nRBF Cosine\nPearson\nPC 97.1 76.5 79.4 79.4 97.1 94.1\nSC 82.4 61.8 58.8 58.8 76.5 82.4\nED 91.2 73.5 70.6 70.6 85.3 82.4\nCC 94.1 88.2 85.3 85.3 91.2 94.1\nIG 97.1 91.2 97.1 97.1 94.1 97.1\nMI 58.8 58.8 58.8 58.8 73.5 73.5\nSN 76.5 67.7 58.8 58.8 73.5 73.5\nMean 85.3 74.0 72.7 72.7 84.5 85.3\n\nTable . Recognition rate with features and classifiers\n(%) in Colon dataset\nSVM KNN\nMLP\nSASOM\nLinear\nRBF Cosine\nPearson\nPC 74.2 74.2 64.5 64.5 71.0 77.4\nSC 58.1 45.2 64.5 64.5 61.3 67.7\nED 67.8 67.6 64.5 64.5 83.9 83.9\nCC 83.9 64.5 64.5 64.5 80.7 80.7\nIG 71.0 71.0 71.0 71.0 74.2 80.7\nMI 71.0 71.0 71.0 71.0 74.2 80.7\nSN 64.5 45.2 64.5 64.5 64.5 71.0\nMean 70.1 62.7 66.4 66.4 72.7 77.4\n\nTable . 
Recognition rate with features and classifiers\n(%) in Lymphoma dataset\nSVM KNN\nMLP\nSASOM\nLinear\nRBF Cosine\nPearson\nPC 64.0 48.0 56.0 60.0 60.0 76.0\nSC 60.0 68.0 44.0 44.0 60.0 60.0\nED 56.0 52.0 56.0 56.0 56.0 68.0\nCC 68.0 52.0 56.0 56.0 60.0 72.0\nIG 92.0 84.0 92.0 92.0 92.0 92.0\nMI 72.0 64.0 64.0 64.0 80.0 64.0\nSN 76.0 76.0 72.0 76.0 76.0 80.0\nMean 69.7 63.4 62.9 63.4 69.1 73.1\nFig. 6 shows the comparison of the average performance\nof features. Although the results are different between\ndatasets, information gain is the best, and Pearson's\ncorrelation coefficient is the second. Mutual information\nand Spearman's correlation coefficient are poor. The\ndifference of performance in datasets might be caused by\nthe characteristics of data.\n\n0\n20\n40\n60\n80\n100\nLeukemia\nColon\nLy mphoma\nR\ne\nc\no\ng\nn\nit\nio\nn\n\nra\nt\ne\n\n[\n%\n]\nPC\nSC\nED\nCC\nIG\nMI\nSN\nFig. 6. Average performance of feature selection methods\n\nRecognition rates by ensemble classifiers are shown in\nTable . Majority voting-3 means the ensemble\nclassifier using majority voting with 3 classifiers, and\nmajority voting-all means the ensemble classifier using\nmajority voting with all 42 feature-classifier\ncombinations. Fig. 7 shows the comparison of the\nperformance of the best classifier of all possible\n42\nC\n3\n\nensemble classifiers, ensemble classifier-3 and ensemble\nclassifier-all. The best result of Leukemia is obtained by\nall classifier except SASOM. The result of the best\nclassifier is the same as that of the best ensemble\nclassifier using majority voting with 3 classifiers. In other\ndatasets, the performance of ensemble classifier surpasses\nthe best classifier. In all datasets, ensemble classifier\nusing majority voting with all classifiers are the worst.\n\nTable . Recognition rate by ensemble classifier\n\nMajority voting-3 Majority voting-all\nLeukemia\n97.1 91.2\nColon 93.6\n71.0\nLymphoma\n96.0 80.0\n\n60\n70\n80\n90\n100\nLeukemia\nColon\nLymphoma\nR\necognit\nion rat\ne\n[\n%]\nThe best classifier\nMajority voting-3\nMajority voting-all\nFig. 7. Comparison of the performance of the best\nclassifier, the best ensemble classifier-3, and ensemble\nclassifier-all\nTable . Classifiers of the best ensemble classifier of all\npossible\n42\nC\n3\nensemble classifiers in Colon dataset\nClassifier\nFeature selection method\nMLP\nKNN\ncosine\n\nKNN\ncosine\n\nCosine coefficient\nEuclidean distance\nPearson's correlation coefficient\nMLP\nKNN\ncosine\n\nKNN\nPearson\n\nCosine coefficient\nEuclidean distance\nPearson's correlation coefficient\nMLP\nKNN\ncosine\n\nSASOM\nCosine coefficient\nEuclidean distance\nPearson's correlation coefficient\nMLP\nKNN\ncosine\n\nKNN\npearson\n\nMutual information\nEuclidean distance\nPearson's correlation coefficient\nMLP\nKNN\ncosine\n\nKNN\npearson\n\nInformation gain\nEuclidean distance\nPearson's correlation coefficient\nMLP\nMLP\nKNN\npearson\n\nCosine coefficient\nPearson's correlation coefficient\nEuclidean distance\nKNN\npearson\n\nKNN\npearson\n\nSASOM\nEuclidean distance\nMutual information\nPearson's correlation coefficient\nKNN\npearson\n\nKNN\npearson\n\nSASOM\nEuclidean distance\nInformation gain\nPearson's correlation coefficient\n\nTable shows the classifiers of the best ensemble\nclassifier of all possible\n42\nC\n3\nensemble classifiers in\nColon dataset where its recognition rate is 93.6%. If we\nobserve the classifiers of the best ensemble classifier in\nFig. 
10, we find features more important to affect the\nresult than classifiers. In other words, in ensemble\nclassifiers there must be classifiers with Euclidean\ndistance and Pearson's correlation coefficient. The other\nclassifier is the one with cosine coefficient, mutual\ninformation or information gain.\nThis fact is also prominent in Lymphoma dataset. Most of\nthe classifiers of the best ensemble classifiers are\nclassifiers with information gain, signal to noise ratio and\nEuclidean distance, or the classifiers with information\ngain, signal to noise ratio and Pearson's correlation\ncoefficient.\nAs shown in Fig. 8~11, Euclidean distance, Pearson's\ncorrelation coefficient and cosine coefficient are highly\ncorrelated in Colon dataset. Table shows genes ranked\nby Euclidean distance, Pearson's correlation coefficient\nand cosine coefficient and the value of genes by each\nmethod. The bold faced figures mean the overlapped\ngenes of those features. There are some overlapped genes\namong them as shown in Table . This indicates\noverlapped genes of highly correlated features can\ndiscriminate classes and the other genes not overlapped\namong combined features can supplement to search the\nsolution spaces. For example, gene 1659 and gene 550\nare high-ranked in both of Pearson's correlation\ncoefficient and cosine coefficient, and gene 440 is\nhigh-ranked in both of Euclidean distance and cosine\ncoefficient. This subset of two features might paly an\nimportant role in classification.\nThis paper shows that the ensemble classifier works and\nwe can improve the classification performance by\ncombining complementary common sets of classifiers\nlearned from three independent features, even when we\nuse simple combination method like majority voting.\n\nFig. 8. Correlation of Euclidean distance and Pearson's\ncorrelation coefficient in Colon dataset\n\n\nFig. 9. Correlation of Euclidean distance and cosine\ncoefficient in Colon dataset\n\nFig. 10. Correlation of Pearson's correlation coefficient\nand cosine coefficient in Colon dataset\n\nFig. 11. Correlation of Euclidean distance, Pearson's\ncorrelation coefficient and cosine coefficient in Colon\ndataset\n\nTable . 
Genes ranked by Euclidean distance, Pearson's\ncorrelation coefficient and cosine coefficient\nRank Euclidean\nPearson\nCosine\n1 619(2.262385)\n2 767(2.335303)\n3 704(2.374358)\n4 187(2.388404)\n5 207(2.410640)\n6 887(2.473033)\n7 635(2.474971)\n8 1915(2.498611)\n9 1046(2.506833)\n10 1208(2.512257)\n11 482(2.520699)\n12 1771(2.525080)\n13 1993(2.529032)\n14 62(2.546894)\n15 1772(2.547455)\n16 1194(2.549244)\n17 1594(2.551892)\n18 199(2.557360)\n19 1867(2.587469)\n20 959(2.589989)\n21 440(2.593881)\n22 480(2.594514)\n23 1546(2.604907)\n24 399(2.613609)\n25 1060(2.614100)\n619(0.681038)\n1771(0.664378)\n1659(0.634084)\n550(0.631655)\n187(0.626262)\n1772(0.621581)\n1730( 0.615566)\n1648(0.614949)\n365(0.614591)\n1208(0.603313)\n1042(0.602160)\n1060(0.601712)\n513(0.596444)\n767(0.594119)\n1263(0.591725)\n138(0.587851)\n1826(0.584774)\n1546(0.582293)\n141(0.579073)\n1227(0.574537)\n704(0.569022)\n1549(0.562828)\n1489(0.561003)\n1724(0.559919)\n1209(0.559778)\n619(0.895971)\n1772(0.875472)\n767(0.874914)\n1771(0.873892)\n1659(0.870115)\n187(0.867285)\n704(0.866679)\n1208(0.866029)\n550(0.864547)\n1546(0.856904)\n251(0.855841)\n1915(0.855784)\n440(0.855453)\n1263(0.854854)\n1060(0.854829)\n965(0.854137)\n1648(0.854119)\n1942(0.853586)\n513(0.852270)\n1042(0.851993)\n1993(0.851753)\n365(0.851205)\n1400(0.849531)\n207(0.849084)\n271(0.848481)\n\nConcluding Remarks\nWe have conducted a thorough quantitative comparison\namong the 42 combinations of features and classifiers for\nthree benchmark dataset. Information gain and Pearson's\ncorrelation coefficient are the top feature selection\nmethods, and MLP and KNN are the best classifiers. The\nexperimental results also imply some correlations\nbetween features and classifiers, which might guide the\nresearchers to choose or devise the best classification\nmethod for their problems in bioinformatics. Based on the\nresults, we have developed the optimal feature-classifier\ncombination to produce the best performance on the\nclassification.\nWe have combined 3 classifiers among 42 classifiers\nusing majority voting. We could confirm that ensemble\nclassifier of highly correlated features works better than\nensemble of uncorrelated features. In particular, we\nanalyzed the improvement of the classification\nperformance for Colon dataset.\nMoreover, our method of combining classifiers is very\nsimple, and there are many methods of combining\nclassifiers in machine learning and data mining fields. We\nwill have to apply more sophisticated methods of\ncombining classifiers to the same dataset to confirm the\nresults obtained and get better results.\nAcknowledgements\nThis paper was supported by Brain Science and\nEngineering Research Program sponsored by Korean\nMinistry of Science and Technology.\n\nReferences\nAlon, U., Barkai, N., et al. (1999): Broad patterns of gene\nexpression revealed by clustering analysis of tumor and\nnormal colon tissues probed by oligonucleotide arrays.\nProc. of the Natl. Acad. of Sci. USA, 96:6745-6750.\nArkin, A., Shen, P. and Ross, J. (1997): A test case of\ncorrelation metric construction of a reaction pathway\nfrom measurements. Science, 277:1275-1279.\nBeale, H. D. (1996): Neural Network Design. 1-47, PWS\nPublish Company.\nBen-Dor, A., Shamir, R. and Yakhini, Z. (1999):\nClustering gene expression patterns. Journal of\nComputational Biology, 6:281-297.\nBen-Dor, A., Bruhn, L., Friedman, N., Nachman, I.,\nSchummer, M. and Yakhini, N. (2000): Tissue\nclassification with gene expression profiles. 
Journal of\nComputational Biology, 7:559-584.\nBrown, M. P. S., Grundy, W. N., Lin, D., Cristianini, N.,\nSugnet, C. W., Furey, T. S., Ares, M. Jr. and Haussler,\nD. (2000): Knowledge-based analysis of microarray\ngene expression data by using support vector machines.\nProc. of the Natl. Acad. of Sci. USA, 97:262-267, 2000.\nDerisi, J., Iyer, V. and Brosn, P. (1997): Exploring the\nmetabolic and genetic control of gene expression on a\ngenomic scale. Science, 278:680-686.\nDudoit, S., Fridlyand, J. and Speed, T. P. (2000):\nComparison of discrimination methods for the\nclassification of tumors using gene expression data.\nTechnical Report 576, Department of Statistics,\nUniversity of California, Berkeley.\nEisen, M. B., Spellman, P. T., Brown, P. O. and Bostein,\nD. (1998): Cluster analysis and display of\ngenome-wide expression patterns. Proc. of the Natl.\nAcad. of Sci. USA, 95:14863-14868.\nEisen, M. B. and Brown, P. O. (1999): DNA arrays for\nanalysis of gene expression. Methods Enzymbol, 303:\n179-205.\nFriedman, N., Linial, M., Nachman, I. and Pe'er, D.\n(2000): Using Bayesian networks to analyze expression\ndata. Journal of Computational Biology, 7:601-620.\nFuhrman, S., Cunningham, M. J., Wen, X., Zweiger, G.,\nSeilhamer, J. and Somogyi, R. (2000): The application\nof Shannon entropy in the identification of putative\ndrug targets. Biosystems, 55:5-14.\nFurey, T. S., Cristianini, N., Duffy, N., Bednarski, D. W.,\nSchummer, M. and Haussler, D. (2000): Support vector\nmachine classification and validation of cancer tissue\nsamples using microarray expression data.\nBioinformatics, 16(10):906-914.\nGolub, T. R., Slonim, D. K., Tamayo, P., Huard, C.,\nGaasenBeek, M., Mesirov, J. P., Coller, H., Loh, M. L.,\nDowning, J. R., Caligiuri, M. A., Blomfield, C. D., and\nLander, E. S. (1999): Molecular classification of\ncancer: Class discovery and class prediction by\ngene-expression monitoring. Science, 286:531-537.\nHarrington, C. A., Rosenow, C., and Retief, J. (2000):\nMonitoring gene expression using DNA microarrays.\nCurr. Opin. Microbiol., 3:285-291.\nHartuv, E., Schmitt, A., Lange, J., Meier-Ewert, S.,\nLehrach, H. and Shamir, R. (2000): An algorithm for\nclustering cDNA fingerprints. Genomics,\n66(3):249-256.\nKhan, J., Wei, J. S., Ringner, M., Saal, L. H., Ladanyi, M.,\nWestermann, F., Berthold, F., Schwab, M., Antonescu,\nC. R., Peterson, C. And Meltzer, P. S. (2001):\nClassification and diagnostic prediction of cancers\nusing gene expression profiling and artificial neural\nnetworks. Nature Medicine, 7(6):673-679.\nKim, H. D. and Cho, S.-B. (2000): Genetic optimization\nof structure-adaptive self-organizing map for efficient\nclassification. Proc. of International Conference on\nSoft Computing, 34-39, World-Scientific Publishing.\nLashkari, D., Derisi, J., McCusker, J., Namath, A.,\nGentile, C., Hwang, S., Brown, P., and Davis, R.\n(1997): Yeast microarrays for genome wide parallel\ngenetic and gene expression analysis. Proc. of the Natl.\nAcad. of Sci. USA, 94:13057-13062.\nLippman, R. P. (1987): An introduction to computing\nwith neural nets. IEEE ASSP Magazine, 4-22.\nLi, L., Weinberg, C. R., Darden, T. A. and Pedersen, L. G.\n(2001): Gene selection for sample classification based\non gene expression data: study of sensitivity to choice\nof parameters of the GA/KNN method. Bioinformatics,\n17(12):1131-1142.\nLi, W. and Yang, Y. (2000): How many genes are needed\nfor a discriminant microarray data analysis. 
", "keywords": "classification;MLP;SASOM;gene expression profile;SVM;KNN;Biological data mining;ensemble classifier;feature selection"} {"name": "134", "title": "Machine Learning in Low-level Microarray Analysis", "abstract": "Machine learning and data mining have found a multitude of successful applications in microarray analysis, with gene clustering and classification of tissue samples being widely cited examples. Low-level microarray analysis, often associated with the pre-processing stage within the microarray life-cycle, has increasingly become an area of active research, traditionally involving techniques from classical statistics. This paper explores opportunities for the application of machine learning and data mining methods to several important low-level microarray analysis problems: monitoring gene expression, transcript discovery, genotyping and resequencing.
Relevant methods and ideas from the machine learning community include semi-supervised learning, learning from heterogeneous data, and incremental learning.", "fulltext": "INTRODUCTION
DNA microarrays have revolutionized biological research over the short time since their inception [2; 27; 28; 29]. Although most widely used for parallel measurement of gene expression [27; 28], microarrays are starting to find common application in other areas of genomics and transcriptomics, including genomic re-sequencing [30; 31], genotyping [32; 33], and transcript discovery [34].

Research labs armed with microarrays have been able to partake in a range of studies, including finding gene function [35; 36; 37]; correcting mistaken database annotations [36; 7]; performing linkage analyses; determining specific genes involved in biological pathways; identifying genes that are important at certain times of development (or that are turned on/off over a course of treatment); elucidating gene regulatory networks [13]; diagnosing disease in tissue samples [38; 39; 40; 41]; and even identifying medical practitioners' misdiagnoses [38]. The common thread among these high-level microarray analysis problems is that they answer sophisticated questions of direct biological interest to medical researchers (such as "which genes are being co-expressed under treatment X?"), where the raw data used are estimates of biologically meaningful parameters (such as the expression level estimates for thousands of genes).

Figure 1: The relationship between low-level and high-level microarray analysis.

In contrast to these so-called high-level problems, low-level microarray analysis [19] is concerned with the preceding step in the microarray assay cycle (Figure 1): given raw data straight from a scanner, which have no direct biological interpretation, clean and summarize these data to produce the biologically meaningful parameter estimates (such as expression level estimates) that are later used in high-level analyses.

In low-level analysis, more consideration is generally given to the behavior of the underlying molecular biology, microarray technology, and experimental design than in high-level analysis. This makes generative methods readily applicable in low-level problems, facilitating the formulation of confidence statements such as p-values in gene expression calls. Hence, while high-level problems have been tackled with discriminative approaches, such as those found in machine learning and data mining, in addition to classical statistical methods, the low-level analysis community has traditionally called upon only the latter.

In this paper we argue that low-level microarray analysis poses a number of interesting problems for the data mining and machine learning community, distinct from the traditional high-level microarray problems. These problems are relevant to the long-term success of DNA microarrays and are already topics of active research in the low-level microarray analysis community. It is our hope that this position paper motivates and enables further machine learning research in the area. Although we will focus on high density oligonucleotide microarrays, particularly those of the Affymetrix GeneChip variety, the underlying concepts and opportunities remain the same for related technologies.
Throughout the paper, we distinguish machine learning from statistics. While these disciplines are closely related and serve as foundations for inference in microarray analysis, the distinction does have content. In our view, classical statistics is generative, dealing with relatively low-dimensional data and parameter spaces, while machine learning is often discriminative in nature and explicitly addresses computational issues in high-dimensional data analysis.

Section 2 reviews relevant background ideas from machine learning. For an overview of the background molecular biology and microarray technology, see the guest editorial elsewhere in this issue. The low-level problems of absolute and differential expression level summarization, expression detection, and transcript discovery are reviewed in Section 3, along with suggested applications of machine learning approaches to these problems. Sections 4 and 5 similarly cover microarray-based genotyping and re-sequencing. Finally, Section 6 concludes the paper.

BACKGROUND MACHINE LEARNING
We assume familiarity with the notions of unsupervised learning (clustering) and supervised learning (classification and regression). As many of the low-level analysis problems discussed below are amenable to learning from partially labeled data, learning from heterogeneous data, and incremental learning, we briefly review these paradigms here.

2.1 Learning from Partially Labeled Data
Given an i.i.d. labeled sample {(x_i, y_i)}_{i=1}^{n} drawn from the unknown and fixed joint distribution F(x, y), and an i.i.d. unlabeled sample {x_i}_{i=n+1}^{m} drawn from the marginal distribution F(x), the problem of learning from partially labeled data [22; 20] is to use the data in choosing a function ĝ_m(X) approximating E(Y|X), where (X, Y) ~ F. This problem has been motivated by a number of applications where only limited labeled data is present, say due to expense, while unlabeled data is plentiful [16]. This is particularly the case in the areas of text classification, medical research, and computer vision [42], within which much of the research into learning from partially labeled data has occurred.

This problem, also called the labeled-unlabeled data problem [42], has been explored under a number of closely-related guises. Some of the earliest approaches used so-called hybrid learners [6], where an unsupervised learning algorithm assigns labels to the unlabeled data, thereby expanding the labeled dataset for subsequent supervised learning. The term multimodal learning is sometimes used to refer to partially labeled learning in the computer vision literature [17]. Co-training is a form of partially labeled learning where the two datasets may be of different types and one proceeds by using the unlabeled data to bootstrap weak learners trained on the labeled data [16].

More recently, semi-supervised learning [25] and transductive learning [26] have gained popularity. Equivalent to partially labeled learning, semi-supervised learning includes a number of successful algorithms, such as those based on the support vector machine (SVM) [25; 8]. Transductive learners, on the other hand, aim to predict labels for just the unlabeled data at hand, without producing the inductive approximation ĝ_m. This approach can be used to generalize the aforementioned hybrid learners, whose unsupervised step typically ignores the labeled data.
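To make the hybrid, self-labeling flavour of partially labeled learning concrete, the following minimal sketch pseudo-labels high-confidence unlabeled points and retrains. It is only an illustration: the choice of scikit-learn's LogisticRegression as the base learner, the 0.95 confidence threshold and the function name are our own assumptions, not part of any of the cited methods.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    # Toy self-training loop: fit on labeled data, adopt confident pseudo-labels, refit.
    X, y = np.asarray(X_lab, float), np.asarray(y_lab)
    pool = np.asarray(X_unlab, float)
    model = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        model.fit(X, y)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # move confident points, with their predicted labels, into the labeled set
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, model.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    return model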
In particular\n, it is shown in [26] that direct transduction is more effective\nthan the traditional two-step approach of induction\nfollowed by deduction. A number of transductive schemes\nhave been proposed, such as those based on the SVM [4; 25],\na graph-based transductive learner [9], and a leave-one-out\nerror ridge regression method [26]. Joachims [25] describes\nan approximate solver for the semi-supervised SVM which\nutilizes a fast SVM optimizer as an inner loop.\nThe story is not all good. [10] tells us that while unlabeled\ndata may be useful, labeled examples are exponentially more\nvaluable in a suitable sense. [43] tells us that unlabeled data\nmay lead the transductive SVM to maximize the wrong margin\n, and in [42] it is shown that unlabeled data may in fact\ndegrade classifier performance under certain conditions relating\nthe risk and empirical risk. Nonetheless, learning from\npartially labeled data has enjoyed great success in many theoretical\nand empirical studies [16; 42; 44; 43].\nWe are especially interested in partially labeled learning as\nan approach to the low-level microarray analysis problems\ndiscussed in Sections 35, where we have relatively few labeled\nexamples but an abundant source of unlabeled data.\n[45] is a recent example of partially labeled learning applied\nto high-level microarray analysis.\nThere, the problem of\npredicting gene function is tackled using a semi-supervised\nscheme trained on a two-component dataset of DNA microarray\nexpression profiles and phylogenetic profiles from\nwhole-genome sequence comparisons. This leads us to the\nnext relevant idea from machine learning.\n2.2\nLearning from Heterogeneous Data\nLearning from heterogeneous data is the process of learning\nfrom training data, labeled or not, that can be partitioned\ninto subsets, each of which contains a different type of data\nstructure or originates from a different source. This notion\nis equivalent to the methods of data fusion [5].\nResearch into learning from heterogeneous data tends to\nbe quite domain-specific and has enjoyed increasing interest\nfrom the bioinformatics community in particular (e.g., [18]).\n[46] presents a kernel-based framework for learning from heterogeneous\ndescriptions of a collection of genes, proteins or\nother entities. The authors demonstrate the method's superiority\nto the homogeneous case on the problem of predicting\nyeast protein function using knowledge of amino acid\nsequence, protein complex data, gene expression data, and\nknown protein-protein interactions.\nSIGKDD Explorations.\nVolume 5,Issue 2 - Page 131\n[37] proposes an SVM method for classifying gene function\nfrom microarray expression estimates and phylogenetic profiles\n. This is achieved through the construction of an explicitly\nheterogeneous kernel: first separate kernels are constructed\nfor each data type, taking into account high-order\nwithin-type correlations, then these kernels are combined,\nignoring high-order across-type correlations.\nOur interest in learning from heterogeneous data arises because\nseveral sources of knowledge relevant to low-level microarray\nanalysis are available, and incorporating such problem\ndomain knowledge has been shown to improve the performance\nof learning algorithms in the past.\n2.3\nIncremental Learning\nIncremental learning is focused on learning from data presented\nsequentially, where the model may be required to\nmake predictions on unseen data during training. 
This is in contrast to cases where all training occurs before any predictions are made (batch learning), and is similar to online learning [24].

A number of incremental learning algorithms have been proposed and applied in the literature. For example, several incremental support vector machines have been studied [24; 21; 47]. In [48], incremental learning is applied to distributed video surveillance. SVM algorithm parameter selection is investigated in [47]. [21] applies an incremental SVM to detecting concept drift (the problem of varying distributions over long periods of data gathering) and to adaptive classification of documents with respect to user interest. An exact incremental SVM is proposed in [24], where decremental unlearning of incremental training data is possible. This can be used to efficiently evaluate the computationally-expensive leave-one-out error measure.

Due to the relatively small sizes of datasets typically available in low-level microarray analysis, there is great potential for learners that can incrementally incorporate new data gathered in the lab, thereby improving estimator performance specific to that lab's patterns of microarray assay.

EXPRESSION ANALYSIS
The most successful application of DNA microarray technology to date has been to gene expression analysis. Traditionally, this has involved estimating gene expression levels (Section 3.1), an area that is being addressed through successful statistical methods and active statistics research. However, the task of determining transcription activity over entire chromosomes (Section 3.2) is less well developed and offers serious opportunities for machine learning.

3.1 Gene Expression Monitoring
3.1.1 The Problem
Traditional microarrays measure mRNA target abundance using the scanned intensities of fluorescence from tagged molecules hybridized to substrate-attached probes [29]. The brighter the intensity within a cell of identical probes, the more hybridization there has been to those probes (Figure 2a). The scanned intensity, then, roughly corresponds to target abundance.

Figure 2: Probe-level features for expression level summarization: (a) a cell of probes; (b) target transcript, perfect match probe and mismatch probe sequences; and (c) scanned and image-analyzed probe-level intensities.

Since probes are limited in length while targets may be thousands of bases long, the GeneChip uses a set of probes to detect each target nucleic acid. The probes are spread out along a 600 base pair region close to the 3' end of the transcript. To measure the effects of cross-hybridization, or unintended hybridization of target A to the probes intended for target B, a system of probe pairs is used. In each pair, a perfect match (PM) probe contains the target's exact complementary sequence, while a mismatch (MM) probe replaces the middle base of the perfect match probe with its Watson-Crick complement. In this way, a target is probed by a probe set of 11-20 PM-MM probe pairs. The aim is roughly for the PMs to measure signal plus noise and for the MMs to measure just noise, so that the signal is revealed using some function of (PM - MM). Figure 2b depicts the probe set arrangement, while Figure 2c gives an example of the scanned intensities. We may now define the expression level summarization problem.

Low-level Problem 1.
Given a probe set's intensities (possibly after background correction and normalization), the expression level summarization problem is to estimate the amount of target transcript present in the sample.

While the expression level summary aims to estimate gene expression level from the features of Figure 2, expression detection is concerned with determining the presence of any gene expression at all.

Low-level Problem 2. Given a probe set's intensities, possibly normalized, the expression detection problem is to predict whether the target transcript is present (P) or absent (A) in the sample, or otherwise call marginal (M) if it is too difficult to tell. In addition to the P/M/A detection call, we wish to state a confidence level in the call, such as a p-value.

Detection calls are not as widely utilized as expression level estimates. They are often used, for example, to filter out genes with negligible expression before performing computationally-expensive high-level analyses, such as clustering on gene expression profiles.

The previous two problems dealt with estimates based on a single probe-set read from a single array. Comparative studies, on the other hand, involve assaying two arrays, one the baseline and the other the experiment, followed by computation of a single comparative estimate.

Low-level Problem 3. Given two sets of intensities, possibly normalized, for the same probe set on two arrays:
a. The differential expression level summarization problem is to estimate the relative abundance of target transcript on each array.
b. The comparison call problem is to predict whether the expression of the target has increased, not changed, or decreased from one chip to the other. As in Low-level Problem 2, a statement of confidence in the call should be supplied.

The log-ratio of expression levels for a target is sometimes known as the relative expression level [3] and is closely related to the notion of fold change (which is sign(log-ratio) × 2^|log-ratio|). Comparison calls are sometimes referred to as change calls. An advantage of working with these comparative estimates is that probe-specific affinities (one cause of undesired variation) are approximately cancelled out by taking ratios [3].

All of these problems are complicated by exogenous sources of variation which cloud the quantities we are interested in. [49] proposes a breakdown of the sources of variation in microarray experiments into intrinsic noise (variation inherent in the experiment's subjects), intermediate noise (arising for example from laboratory procedures), and measurement error (variation due to the instrumentation, such as array manufacture, scanning, or in silico processing).

3.1.2 Current Approaches
At the level of microarray design, sophisticated probe modeling and combinatorial techniques are used to reduce probe-specific effects and cross-hybridization. However, much of the unwanted variation identified above must still be tackled during low-level analysis. This means that care must be taken with the relevant statistical issues. For example, in experimental design, we must trade off between biological replicates (across samples) and technical replicates (one sample across chips).
Background correction and normalization, for reducing systematic variation within and across replicate arrays, also surface as major considerations [19; 11].

Three popular approaches to Low-level Problem 1 [11] are the Affymetrix microarray suite (MAS) 5.0 signal measure [14; 3; 1], the robust multi-array average (RMA) [50; 11] and the model-based expression index (MBEI) [51].

MAS5 first performs background correction by subtracting a background estimate for each cell, computed by partitioning the array into rectangular zones and setting the background of each zone to that zone's second-percentile intensity. Next MAS5 subtracts an "ideal mismatch value" from each PM intensity and log-transforms the adjusted PMs to stabilize the variance. A robust mean is computed for the resulting values using a biweight estimator, and finally this value is scaled using a trimmed mean to produce the signal estimate.

RMA proceeds by first performing quantile normalization [52], which puts the probe intensity distributions across replicate arrays on the same scale. RMA then models the PMs as background plus signal, where the signal is exponentially and the background normally distributed; MM intensities are not used in RMA. A robust additive model is used to model the PM signal (in log-space) as the sum of the log scale expression level, a probe affinity effect, and an i.i.d. error term. Finally, median polish estimates the model parameters and produces the log-scale expression level summary.

Figure 3: An ROC curve: (0, 0) and (1, 1) correspond to the "always negative" and "always positive" classifiers respectively. The closer to the ideal point (0, 1) the better. Neither of the two families A or B dominates the other. Instead, one or the other is better according to the desired trade-off between FP and TP.

MBEI fits PM_ij - MM_ij = θ_i φ_j + ε_ij, using maximum likelihood to estimate the per-gene expression levels θ_i. Here the φ_j are probe-specific affinities and the ε_ij are i.i.d. normal errors.

Although it may seem that expression detection is just a matter of thresholding expression level estimates, this has proven not to be the case [53]. It is known that expression level estimators often have difficulty at low levels of expression, while detection algorithms are designed with this setting in mind.

The most widely used detection algorithm for the GeneChip is a method based on a Wilcoxon signed-rank test [54; 3; 55]. This algorithm corresponds to a hypothesis test of H_0: median((PM_i - MM_i)/(PM_i + MM_i)) = τ versus H_1: median((PM_i - MM_i)/(PM_i + MM_i)) > τ, where τ is a small positive constant. These hypotheses correspond to absence and presence of expression, respectively. The test is conducted using a p-value for a sum of signed ranks of the scores R_i = (PM_i - MM_i)/(PM_i + MM_i) - τ. The p-value is thresholded so that values in [0, α_1), [α_1, α_2), and [α_2, 1] result in present, marginal, and absent calls, respectively. Here 0 < α_1 < α_2 < 0.5 control the trade-off between false positives (FP) and true positives (TP).

Recently, a number of alternate rank sum-based algorithms have been proposed [53].
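Before turning to these alternatives, a minimal sketch of the signed-rank detection call just described may be useful. The default values of τ, α_1 and α_2 below are placeholders of roughly the magnitude used in practice, not the vendor's exact settings, and the real MAS5 implementation includes probe-handling details omitted here.

import numpy as np
from scipy.stats import wilcoxon

def detection_call(pm, mm, tau=0.015, alpha1=0.04, alpha2=0.06):
    # P/M/A call for one probe set via a one-sided Wilcoxon signed-rank test.
    pm, mm = np.asarray(pm, float), np.asarray(mm, float)
    scores = (pm - mm) / (pm + mm) - tau       # the R_i defined above
    # H_1: median score > 0, i.e. the median discrimination exceeds tau
    _, p = wilcoxon(scores, alternative="greater")
    call = "P" if p < alpha1 else ("M" if p < alpha2 else "A")
    return call, p

The alternate rank sum-based algorithms mentioned above amount to changing how the scores R_i are defined before the same test is applied.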
One in particular, a variant on the MAS5 method where scores are set to R_i = log(PM_i/MM_i), has been shown to outperform MAS5 detection in a range of real-world situations. One aspect of the study in [53] of particular interest is the use of the Receiver Operating Characteristic (ROC) Convex Hull method [56] for comparing competing classifiers on a spike-in test set.

ROC curves (see Figure 3) characterize the classification performance of a family of classifiers parameterized by a tunable parameter that controls the FP-TP trade-off. For example, as the level of a hypothesis test is decreased, the rate of false positive rejections decreases (by definition), while the rate of false negative acceptances will typically go up. An ROC curve encodes this trade-off, extending the notion of contingency table to an entire curve. It is a more expressive object than accuracy, which boils performance down to one number [56; 57].

Comparing ROC curves has traditionally been achieved by either choosing the "clear winner" (in the rare case of domination [57]), or choosing the maximizer of the Area Under Curve (AUC). Although AUC works in some cases, it gives equal credit to performance over all misclassification cost and class size settings, usually an undesirable strategy if any domain knowledge is available. The ROC Convex Hull method, on the other hand, relates expected-cost optimality to conditions on relative misclassification cost and class size, so that the typical case of semi-dominance (as in Figure 3) can be handled in a principled way; rather than selecting p-value thresholds by hand, end-users are provided with the right classifier and thresholds by the method. This use of the ROCCH method demonstrates a surprising application of machine learning to low-level microarray analysis.

Many of these absolute expression algorithms have their comparative analogues. For example, MAS5 produces the signal log ratio with an associated confidence interval, using a biweight algorithm [14; 3]. MAS5 also implements a comparison call based on the Wilcoxon signed-rank sum test, just as in the absolute MAS5 detection algorithm above [55].

While the Affymetrix microarray suite is the software package bundled with the GeneChip, the Bioconductor project [15], an open-source set of R [12] packages for bioinformatics data analysis, has been gaining popularity and implements most of the methods discussed here.

3.1.3 Open Problems
While Low-level Problem 1 involves prediction of continuous expression levels (non-negative real values) given a vector of (non-negative real) perfect match and mismatch intensities, with total length between 22 and 40, Low-level Problem 2 is a 3-class classification problem with call confidence levels.

Open Problem 1. In the respective settings of Low-level Problems 1-3:
a. What machine learning techniques are competitive with algorithms based on classical statistical methods for expression level estimation?
b. Which machine learning classifiers are competitive for expression detection?
c. What machine learning methods achieve high performance on the comparative analogues of the previous two problems, posed on the appropriate product space of microarray measurements?

Comparisons for expression level estimators might be made based on bias and variance, computational efficiency, and biological relevance of learned models. The ROCCH method is ideal for detector comparison.
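As an illustration of such a comparison, a small sketch of the hull construction and of operating-point selection follows. The selection rule follows the iso-performance-line idea of [56]; the hull code and function names are our own, and the inputs are simply (FP rate, TP rate) pairs gathered from each candidate detector at many threshold settings.

def roc_convex_hull(points):
    # Upper convex hull of (FP rate, TP rate) points, anchored at (0,0) and (1,1).
    pts = sorted(set(tuple(p) for p in points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # pop the last vertex unless it makes a strict clockwise (right) turn towards p
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def best_operating_point(hull, cost_fp, cost_fn, p_positive):
    # Hull vertex minimizing expected cost for the given costs and class prior.
    slope = (cost_fp * (1.0 - p_positive)) / (cost_fn * p_positive)
    return max(hull, key=lambda v: v[1] - slope * v[0])

Only classifiers contributing vertices to the hull can be expected-cost optimal for some setting of costs and class sizes, which is exactly the condition the ROCCH method exposes to the end-user.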
Issues of background correction\nand normalization across multiple arrays must likely\nalso be addressed to enable competitiveness with the state\nof the art.\nResearch into applying semi-supervised, heterogeneous data\nand incremental learners to gene expression monitoring is directly\nmotivated by the proportion of labeled to unlabeled\ndata available, the existence of GeneChip domain knowledge\n, and the endemic nature of microarray assays that are\ncontinually performed in individual research labs. Biologists\ncould augment the limited labeled probe-level data available\nwith relatively abundant unlabeled data. Labeled data can\nbe procured, for example, from bacterial control experiments\nwith known concentrations, called spike-in assays, and bacterial\ncontrol probe sets that are present in some GeneChips\nfor calibration purposes. The former source of labeled data\nis the more useful for this problem, as it provides examples\nwith a range of labels. Unfortunately, spike-in studies are\nrare because they are not of independent scientific interest:\nthey are only performed for low-level microarray research.\nFor the few spike-in assays that are available, only a small\nnumber of targets are spiked in at an equally small number of\nconcentrations (typically 10). Unlabeled data, in contrast,\ncould be taken from the large collection of available biologically\nrelevant assays; each one providing tens of thousands\nof data points. Beyond probe intensities, other data sources\ncould include probe sequences and probe-affinity information\nderived from probe models. Such information is closely\nrelated to the hybridization process and might be of use\nin expression level estimation: both target and non-specific\nhybridization are known to be probe-dependent. Although\nlabeled data from spike-in studies are of greatest utility for\nlearning [10], the quantity of unlabeled data produced by\na series of biologically interesting microarray assays in any\ngiven lab suggests a semi-supervised incremental approach.\nSince the ROCCH involves taking a pointwise maximum\nover the individual noisy ROC curves, it incorporates a possibly\nlarge degree of uncertainty. It should be possible to\nextend the results of [53] to quantify this property.\nOpen Problem 2. Can the ROC Convex Hull method\nof [56] be extended to provide confidence intervals for its\nconditions on expected-cost optimality?\n3.2\nTranscript Discovery\n3.2.1\nThe Problem\nThe applications to expression monitoring described above\nare all related to addressing questions about pre-defined\ntranscripts.\nMore precisely, the vast majority of expression\nanalysis is performed using probes interrogating only\na small sub-sequence of each transcript. This has clearly\nbeen a useful approach, but there are at least two potential\ndrawbacks. One is that we can only monitor the expression\nof genes known to exist at the time of the array's design.\nEven in a genome as well-studied as that of the human, new\ntranscripts are routinely discovered. 
Another is that in directly\nmonitoring only a sub-sequence of the transcript, it\nwill often be impossible to distinguish between alternatively\nspliced forms of the same gene (which may have very different\nfunctional roles).\nAn alternative approach is to use arrays with probes tiled\nuniformly across genomic sequence, without regard to current\nknowledge of transcription.\nSuch genome tiling arrays\nhave been used to monitor expression in all the non-repetitive\nsequence of human chromosomes 21 and 22 [34],\nand more widespread use is underway.\nSIGKDD Explorations.\nVolume 5,Issue 2 - Page 134\nThe problems arising in the analysis of data from genome\ntiling arrays are essentially the same as those for the expression\nmonitoring arrays described above: estimation of\nexpression level, detection of presence, and detection of differential\nexpression. There is, however, the additional challenge\nof determining the number of distinct transcripts and\ntheir location within the tiled genomic region.\nLow-level Problem 4. The problem of transcript discovery\ncan be viewed in two steps:\na. Determining the exon structure of genes within a tiled\nregion; and\nb. Determining which exons should be classified together as\npart of a single gene's transcript.\n3.2.2\nCurrent Approaches\nA simple heuristic approach is taken in [34], in which PM-MM\nprobe pairs are classified as positive or negative based\non thresholds applied to the difference and ratio of the PM\nand MM values. Positions classified as positive and located\nclose to other positive positions are grouped together to form\npredicted exons.\nA more effective approach [58] is based on the application of\na Wilcoxon signed-rank test in a sliding window along the\ngenomic sequence, using the associated Hodges-Lehmann estimator\nfor estimation of expression level. Grouping into\nexons is achieved by thresholding on present call p-values or\nestimated expression level, then defining groups of probes\nexceeding the threshold to be exons.\n3.2.3\nOpen Problems\nThe problem of detecting exons based on probe intensities\n(Low-level Problem 4a) is very similar to the problem of\nabsolute expression detection (Low-level Problem 2). For\nexample, the exon detection method of [58] and the MAS5\nexpression detection algorithm [55] are both built around\nthe Wilcoxon signed-rank test. The problem of finding exons\nhas been addressed as described, but the methods are\nheuristic and there is plenty of room for improvement. Associating\nexons to form transcripts (Low-level Problem 4b) has\nbeen addressed in a large experiment across almost 70 experimental\npairs using a heuristic correlation-based method;\nagain, this presents an opportunity for research into more\neffective methods.\nOpen Problem 3. Are there machine learning methods\nthat are able to out-perform current classical statistical methods\nin transcript discovery as defined in Low-Level Problem\n4?\nOne possibility which appears well suited to the problem is\nthe use of hidden Markov models where the underlying un-observed\nMarkov chain is over states representing expressed\nversus non-expressed sequence. The distribution of the observed\nprobe intensities would depend on the underlying hidden\nstate. 
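A minimal sketch of that idea follows, with two states (0 = not expressed, 1 = expressed), Gaussian emissions on suitably normalized log-scale probe measurements, and hand-set parameters purely for illustration; in practice the means, variances and transition probabilities would be estimated, for example by EM, and none of the values below come from [34] or [58].

import numpy as np
from scipy.stats import norm

def viterbi_expressed_segments(log_obs, mu=(0.0, 2.0), sd=(1.0, 1.0),
                               p_stay=0.99, p_start=(0.5, 0.5)):
    # Viterbi path for a 2-state HMM; runs of state 1 are candidate expressed regions.
    x = np.asarray(log_obs, float)
    n = len(x)
    log_T = np.log(np.array([[p_stay, 1.0 - p_stay],
                             [1.0 - p_stay, p_stay]]))
    log_E = np.stack([norm.logpdf(x, mu[k], sd[k]) for k in (0, 1)], axis=1)
    delta = np.log(np.asarray(p_start, float)) + log_E[0]
    back = np.zeros((n, 2), dtype=int)
    for t in range(1, n):
        cand = delta[:, None] + log_T      # cand[i, j]: best score ending in i, then moving to j
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + log_E[t]
    path = np.empty(n, dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

Exon calling (Low-level Problem 4a) then falls out of the decoded state sequence, and posterior state probabilities from the forward-backward recursions could supply the per-position confidence that the heuristic thresholds above lack.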
Another possible approach, considering the success\nwhich has been demonstrated in predicting genes from\nsequence data alone, would also be to integrate array-derived\ndata with sequence information in prediction of transcripts.\nGENOTYPING\nDescriptions of genome sequencing efforts such as the human\ngenome project often lend the impression that there\nis a unique genomic sequence associated with each species.\nThis is a useful and approximately correct abstraction. But\nin fact, any two individuals picked at random from a species\npopulation will have differing nucleotides at a small fraction\nof the corresponding positions in their genomes. Such single-nucleotide\npolymorphisms, or SNPs, help form the basis of\ngenetically-determined variation across individuals. Biologists\nestimate that about one position in 1,000 in the human\ngenome is a SNP. With over 3 billion bases of genomic DNA,\nwe see that SNPs number in the several millions. Although\nthere are other kinds of individual genomic variation, such\nas insertions, deletions, and duplications of DNA segments,\nour focus here is SNPs.\nFurther complicating the picture is the fact that humans are\ndiploid organisms--each person possesses two complete but\ndifferent copies of the human genome, one inherited from the\nmother and one from the father. Now consider a polymorphic\nposition, or locus, at which two different bases occur\nin the population, say G and T. These variants are called\nthe alleles at the locus, so in this case we are describing a\nbiallelic SNP. A given individual will have inherited either a\nG or T in the paternal genome, and the same is true of the\nmaternal genome. Thus there are three possible genotypes,\nor individual genetic signatures, at this SNP: they are de-noted\nGG, TT, and GT. We do not distinguish the last case\nfrom TG, since there is no inherent ordering of the paternal\nand maternal genomes at a given polymorphic position.\nWe refer generically to the alleles of a biallelic SNP as A and\nB. Biological evidence suggests that essentially all SNPs are\nbiallelic in humans. The genotyping problem, then, is to establish\nan individual's genotype as AA, BB, or AB for as\nmany SNPs as possible in the human genome. The completion\nof the human genome project means that one has\nrecourse to the full genomic sequence surrounding a SNP\nto help solve the genotyping problem. Furthermore, various\nlarge-scale public projects to locate SNPs and identify their\nalleles exist, notably The SNP Consortium (TSC); the data\nthey generate may also be utilized for genotyping.\nThe major drawback to traditional genotyping protocols are\ntheir lack of parallelism, with consequent expense in terms\nof material and labor. In contrast, Kennedy et al. [33] describe\nwhole-genome sampling analysis (WGSA), which enables\nmassively parallel genotyping via genotyping microarrays\n.\nFor the Affymetrix Mapping 10k Array, which genotypes\napproximately 10,000 SNPs across the human genome, each\nSNP actually has 56 corresponding probes, collectively termed\na miniblock. The miniblock has 7 probe quartets for the\nSNP's flanking region on the forward strand and another 7\nprobe quartets for the reverse complement strand, so 4 7\n2 yields 56 probes. Each probe quartet in turn corresponds\nto a 25-mer in which the SNP is at one of 7 offsets from the\ncentral position. 
The four probes within a probe quartet differ in the base they put at the SNP: a perfect match to the A allele, a perfect match to the B allele, and mismatches for each.

Low-level Problem 5. Given a SNP's 56-vector of miniblock probe intensities, the genotype calling problem is to predict the individual's corresponding alleles as AA, BB or AB.

Write PM(A), PM(B), MM(A), and MM(B) for the probe intensities within a quartet. We would then hope that an AA individual has PM(A) > MM(A) but PM(B) ≈ MM(B), for all probe quartets on both strands. For a BB individual, we hope to find just the opposite effect, and an AB individual should have both PM(A) > MM(A) and PM(B) > MM(B). The mismatch probes in each quartet act as controls, establishing the level of nonspecific hybridization for their corresponding perfect match probes. The presence of multiple probe quartets allows for the determination of genotype even when one strand and/or some offsets do not yield reliable hybridization, say for biochemical reasons.

4.2 Current Approaches
Low-level Problem 5 is a three-class classification problem. In many machine learning applications, the metric of interest for competing classifiers is predictive accuracy, in this case the probability of correctly genotyping a new individual's SNP based on the miniblock vector. However, in the kinds of genetic studies which take large numbers of genotypes as input, there is usually an explicit requirement that genotype predictions have a prespecified accuracy, often 99%. To attain such accuracy, it is usually permissible for some fraction of genotypable SNPs to be no-calls; that is, the classifier can refuse to predict a genotype for some miniblocks. When comparing genotypers, our interest therefore lies in the trade-off between the rate of no-calls and the accuracy attained on those SNPs which are called. For example, some studies consider the punt rate, or lowest no-call rate which yields a prespecified accuracy level on the called SNPs.

A simple unsupervised approach to training a genotyper is to ignore available labels during training, instead using these labels to subsequently assess the trade-off between accuracy and no-call rate for the trained model. This is the strategy pursued by MPAM (modified partitioning around medoids) [59], the discriminative clustering genotyper used for the Affymetrix 10k array. An alternative approach, using a parametric generative model for the clustering, will be described elsewhere. It resembles ABACUS, a model studied in the context of re-sequencing microarrays [31] (see Section 5).

4.3 Open Problems
Open Problem 4. Are there machine learning methods that are able to meet typical accuracy and punt-rate specifications on the genotype calling problem?

In order to choose a genotyper using supervised learning, we need labels (true genotypes) along with corresponding miniblock reads from genotyping arrays. Unfortunately, there is no large-scale set of publicly available genotypes. Instead, one makes do with modestly-sized sets of genotypes available commercially from companies using smaller-scale techniques. Of course, no genotyping method is error-free, so in practice one measures concordance with reference genotypes. If the concordance is high enough, the remaining cases of disagreement between a candidate genotyper and the reference genotypes can be resolved via the older labor-intensive methods.
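Purely as an illustration of the allele-contrast intuition above, a thresholding caricature of a genotype caller is sketched below. This is not MPAM or any published caller: the summary statistic, the cutoffs of 0.3 and 0.7 and the no-call rule are invented for the sketch, and production callers cluster such summaries across many samples rather than thresholding a single miniblock.

import numpy as np

def toy_genotype_call(pm_a, mm_a, pm_b, mm_b, lo=0.3, hi=0.7):
    # Call one SNP from its quartet intensities via a per-quartet relative allele signal.
    a = np.maximum(np.asarray(pm_a, float) - np.asarray(mm_a, float), 0.0)
    b = np.maximum(np.asarray(pm_b, float) - np.asarray(mm_b, float), 0.0)
    total = a + b
    ras = a[total > 0] / total[total > 0]   # fraction of background-corrected signal on allele A
    if ras.size == 0:
        return "NoCall"
    r = float(np.median(ras))
    if r >= hi:
        return "AA"
    if r <= lo:
        return "BB"
    return "AB"

Here the inputs are the PM and MM intensities for the A and B alleles across the miniblock's quartets on both strands; a generative alternative would instead fit three cluster models to such summaries across samples and refuse to call miniblocks falling between clusters, which is where the accuracy versus no-call trade-off discussed above enters.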
The incomplete nature of reference genotype data leads naturally to the setting of semi-supervised learning. Rather than falling back to unsupervised methods such as those described above, we may consider employing more general semi-supervised learners as described in Section 2.1. Additionally, the methods of [23] could be used to incorporate low-level physical parametric models of hybridization into a kernel-based classifier.

RE-SEQUENCING
As explained in Section 4, within a single species genomic sequence will vary slightly from one individual to the next. While Low-level Problem 5 focuses on the determination of genotype at a position known in advance to be polymorphic, the problem described in this section concerns locating such polymorphic sites in the first place.

The usual starting point is a newly-sequenced genome, such as the recently-finished human genome. It is often the case that, based on previous research, an investigator will be interested in detailed study of variation in a particular genomic region (say on the order of tens or hundreds of kilobases) and wants to re-sequence this region in a large number of individuals. Such re-sequencing allows for identification of the small subset of polymorphic locations. Here we consider the more recent challenges of microarray-based re-sequencing of diploid genomic DNA.

A typical re-sequencing array uses eight probes to interrogate each base of the monitored sequence. These eight probes comprise two quartets, one for the forward strand and one for the reverse. Each quartet is formed of 25-mer probes perfectly complementary to the 25 bases of the reference sequence centered on the interrogated base, but with all four possible bases used at the central position.

Low-level Problem 6. The goal of the re-sequencing problem is to start with a set of probe intensities and classify each position as being one of A, C, G, T, AC, AG, AT, CG, CT, GT, or N, where N represents a 'no call' (due to sample failure or ambiguous data).

The intuition is that for a homozygous position, one of the four probes should be much brighter relative to the others on each strand, and for a heterozygous position, two probes corresponding to the two bases of a SNP should be brighter on each strand. Of particular interest are positions in which the called base is heterozygous, or homozygous and different to the reference sequence, as such positions exhibit polymorphism and are candidate positions for explaining phenotypic differences between individuals.

At face value, this classification problem is much harder than the genotyping problem. There are fewer probes to start with (a miniblock of 8 rather than 40 or more) and more categories (11 as opposed to 3 or 4) into which to classify.

5.2 Current Approaches
The most recent analysis of the kind of re-sequencing array discussed here [31] is based on modeling pixel intensities within each probe as independent random variables with a common mean and variance. The model for a homozygous base is that, on each strand, the probe corresponding to the base has one mean and variance, and the other three probes have another. The means and variance are estimated by maximum likelihood, and the likelihood of the model is evaluated.
The model for each of the six heterozygous\npossibilities is similar, except two probes correspond to\neach heterozygote model and the other two are background.\nThe likelihoods (overall and for each strand) are converted\nto scores and, provided the maximum score exceeds some\nthreshold, the best-scoring model is chosen as the base call.\nA number of other filters that deal with the signal absence,\nsignal saturation, sample failure, and so on are applied, as is\nan iterative procedure to account for bias in the background\nprobes. This method, called ABACUS, was found to make\nbase calls at over 80% of all bases, with an estimate accuracy\nin excess of 99% at the bases which were called.\n5.3\nOpen Problems\nA good base-calling method for re-sequencing arrays already\nexists in ABACUS, but there remains room for improvement\n. A recent and improved implementation [60] of the\nABACUS method on a new genomic region found the overall\nsequencing accuracy to be on the order of 99.998%, but\nthe accuracy on heterozygote calls to be about 96.7%. Biologists\nwould value highly an improvement in heterozygote\ncall accuracy.\nOpen Problem 5. Can a supervised learning method be\nused to call bases in re-sequencing arrays with accuracy, in\nparticular heterozygote accuracy, in excess of the accuracies\nachieved by the more classic statistical approaches used to\ndate?\nConsidering the ongoing efforts of SNP detection projects,\nthere is an abundance of labeled data available, so the problem\nseems quite amenable to machine learning approaches.\nAs with the genotyping problem, it would be desirable to\nhave a measure of confidence associated with base calls. It\nmay also be useful to take into account the sequences of the\n25-mer probes, as there are known sequence-specific effects\non the probe intensities.\nCONCLUSIONS\nWe have described a variety of low-level problems in microarray\ndata analysis and suggested the applicability of\nmethods from several areas of machine learning. Some properties\nof these problems which should be familiar to machine\nlearning researchers include high-dimensional observations\nwith complicated joint dependencies (probe intensities\n), partially labeled data sets (expression levels, genotypes\n), data from disparate domains (microarray assays,\nprobe sequences, phylogenetic information), and sequential\nobservations (ongoing experimental work at individual labs).\nWe pointed out the suitability of semi-supervised, heterogeneous\n, and incremental learning in these settings. It is worth\nremarking that analogous problems arise with other high-throughput\ntechnologies, such as cDNA and long oligonucleotide\nmicroarrays, mass spectrometry, and fluorescence-activated\ncell sorting.\nThere are other issues in low-level analysis we did not cover.\nHere we mention two of these. Image analysis is the problem\nof going from raw pixel values in the scanned image of a\nmicroarray to a set of pixel intensities for each feature placed\non the probe, and then to single-number probe intensities.\nThe surface of the GeneChip contains detectable grid points\nwhich facilitate rotation and translation of the image to a\ncanonical alignment; subsequent mapping of each pixel to a\nfeature is semi- or fully automated and has not previously\nraised major analysis issues. 
However, work is being done\non aggressive reduction of feature sizes to a scale where this\nmapping procedure could become a central concern.\nOn the more theoretical side, probe models based on the\nphysics of polymer hybridization have recently been the focus\nof considerable interest. These models reflect a significant\nincrease in the use of biological knowledge for estimating\ntarget abundance and present an opportunity for application\nof machine learning techniques which can exploit\nparametric distributions in high-dimensional data analysis,\nsuch as graphical models.\nWe close by observing that a fuller awareness of low-level\nmicroarray analysis issues will also benefit machine learning\nresearchers involved with high-level problems: the inevitable\ninformation reduction from earlier stage to later could well\nconceal too much of what the unfiltered array data reveal\nabout the biological issue at hand. Familiarity with initial\nnormalization and analysis methods will allow the high-level\nanalyst to account for such a possibility when drawing scientific\nconclusions.\nACKNOWLEDGMENTS\nWe thank Rafael Irizarry, Ben Bolstad, Francois Collin and\nKen Simpson for many useful discussions and collaboration\non low-level microarray analysis.\n\nREFERENCES\n[1] Affymetrix.\nAffymetrix\nMicroarray\nSuite\nGuide.\nAffymetrix Inc., Santa Clara, CA, 2001. version 5.0.\n[2] M. Schena. DNA Microarrays: A Practical Approach.\nOxford University Press, 1999.\n[3] Affymetrix. Statistical algorithms description document\n. Whitepaper, Affymetrix Inc., Santa Clara, CA,\n2002.\n[4] A. Gammerman, V. Vovk, and V. Vapnik. Learning by\ntransduction. In Fourteenth Conference on Uncertainty\nin Artificial Intelligence, pages 148155. Morgan Kaufmann\nPublishers, 1998.\n[5] P. K. Varshney. Scanning the issue: Special issue on\ndata fusion. Proceedings of the IEEE, 85:35, 1997.\n[6] R. O. Duda and P. E. Hart. Pattern Classification and\nScene Analysis. John Wiley and Sons, New York, 1973.\n[7] T. Gaasterland and S. Bekiranov. Making the most of\nmicroarray data. Nature Genetics, 24:204206, 2000.\n[8] K. P. Bennett and A. Demiriz. Semi-supervised support\nvector machines. In Advances in Neural Information\nProcessing Systems 11, pages 368374, Cambridge,\nMA, 1999. MIT Press.\n[9] T. Joachims. Transductive learning via spectral graph\npartitioning. In Proceedings of the International Conference\non Machine Learning (ICML), 2003.\n[10] V. Castelli and T. Cover. On the exponential value of\nlabeled samples. Pattern Recognition Letters, 16:105\n111, 1995.\nSIGKDD Explorations.\nVolume 5,Issue 2 - Page 137\n[11] R. A. Irizarry. Science and Statistics: A Festschrift for\nTerry Speed, volume 40 of Lecture NotesMonograph\nSeries,\nchapter Measures of gene expression for\nAffymetrix high density oligonucleotide arrays, pages\n391402. Institute of Mathematical Statistics, 2003.\n[12] R. Ihaka and R. Gentleman. R: A language for data\nanalysis and graphics. Journal of Computational and\nGraphical Statistics, 5(3):299314, 1996.\n[13] N. Friedman. Probabilistic models for identifying regulation\nnetworks. Bioinformatics, 19:II57, October 2003.\n[14] E. Hubbell, W. M. Liu, and R. Mei. Robust estimators\nfor expression analysis. Bioinformatics, 18:15851592,\n2002.\n[15] Bioconductor Core. An overview of projects in computing\nfor genomic analysis. Technical report, The Bioconductor\nProject, 2002.\n[16] A. Blum and T. Mitchell. Combining labeled and unlabeled\ndata with co-training. 
In Proceedings of the Workshop\non Computational Learning Theory. Morgan Kaufmann\nPublishers, 1998.\n[17] L. Wu, S. L. Oviatt, and P. R. Cohen. Multimodal integration\n- a statistical view. IEEE Transactions on Mul-timedia\n, 1:334 341, 1999.\n[18] A. J. Hartemink and E. Segal. Joint learning from multiple\ntypes of genomic data. In Proceedings of the Pacific\nSymposium on Biocomputing 2004, 2004.\n[19] G. K. Smyth, Y. H. Yang, and T. P. Speed. Functional\nGenomics: Methods and Protocols, volume 224 of Methods\nin Molecular Biology, chapter Statistical issues in\ncDNA microarray data analysis, pages 111136. Hu-mana\nPress, Totowa, NJ, 2003.\n[20] A. Blum and S. Chawla. Learning from labeled and\nunlabeled data using graph mincuts. In International\nConference on Machine Learning (ICML), 2001.\n[21] R. Klinkenberg and T. Joachims. Detecting concept\ndrift with support vector machines. In P. Langley, editor\n, Proceedings of ICML-00, 17th International Conference\non Machine Learning, pages 487494, Stanford,\nCA, 2000. Morgan Kaufmann Publishers.\n[22] M. Szummer and T. Jaakkola. Partially labeled classification\nwith Markov random walks. In Neural Information\nProcessing Systems (NIPS), 2001.\n[23] T. S. Jaakkola and D. Haussler. Exploiting generative\nmodels in discriminative classifiers. In Advances in Neural\nInformation Processing Systems 11: Proceedings of\nthe 1998 Conference, pages 487493. MIT Press, 1998.\n[24] G. Cauwenberghs and T. Poggio. Incremental and\ndecremental support vector machine learning. In NIPS,\npages 409415, 2000.\n[25] T. Joachims. Transductive inference for text classification\nusing support vector machines. In I. Bratko and\nS. Dzeroski, editors, Proceedings of the 16th Annual\nConference on Machine Learning, pages 200209. Morgan\nKaufmann, 1999.\n[26] O. Chapelle, V. Vapnik, and J. Weston. Advances\nin Neural Information Processing Systems 12, chapter\nTransductive inference for estimating values of functions\n. MIT Press, 2000.\n[27] M. Schena, D. Shalon, R. W. Davis, and P. O.\nBrown. Quantitative monitoring of gene expression patterns\nwith a complementary DNA microarray. Science,\n270:467470, 1995.\n[28] D. J. Lockhart, H. Dong, M. C. Byrne, M. T. Follet-tie\n, M. V. Gallo, M. S. Chee, M. Mittmann, C. Wang,\nM. Kobayashi, H. Horton, and E. L. Brown. Expression\nmonitoring by hybridization to high-density oligonucleotide\narrays. Nature Biotechnology, 14:16751680,\n1996.\n[29] R. J. Lipshutz, S. P. A. Fodor, T. R. Gingeras, and\nD. H. Lockhart. High density synthetic oligonucleotide\narrays. Nature Genetics, 21:2024, 1999. Supplement.\n[30] J. B. Fan, D. Gehl, L. Hsie, K. Lindblad-Toh, J. P.\nLaviolette, E. Robinson, R. Lipshutz, D. Wang, T. J.\nHudson, and D. Labuda. Assessing DNA sequence variations\nin human ests in a phylogenetic context using\nhigh-density oligonucleotide arrays. Genomics, 80:351\n360, September 2002.\n[31] D. J. Cutler, M. E. Zwick, M. M. Carrasquillo, C. T.\nYohn, K. P. Tobin, C. Kashuk, D. J. Mathews,\nN. A. Shah, E. E. Eichler, J. A. Warrington, and\nA. Chakravarti. High-throughput variation detection\nand genotyping using microarrays. Genome Research,\n11:19131925, November 2001.\n[32] J. B. Fan, X. Chen, M. K. Halushka, A. Berno,\nX. Huang, T. Ryder, R. J. Lipshutz, D. J. Lockhart\n, and A. Chakravarti. Parallel genotyping of human\nSNPs using generic high-density oligonucleotide tag arrays\n. Genome Research, 10:853860, June 2000.\n[33] G. C. Kennedy, H. Matsuzaki, D. Dong, W. Liu,\nJ. Huang, G. Liu, X. Su, M. Cao, W. Chen, J. 
Zhang,\nW. Liu, G. Yang, X. Di, T. Ryder, Z. He, U. Surti,\nM. S. Phillips, M. T. Boyce-Jacino, S. P. A. Fodor, and\nK. W. Jones. Large-scale genotyping of complex DNA.\nNature Biotechnology, October 2003.\n[34] P. Kapranov, S. E. Cawley, J. Drenkow, S. Bekiranov,\nR. L. Strausberg, S. P. A. Fodor, and T. R. Gingeras.\nLarge-scale transcriptional activity in chromosomes 21\nand 22. Science, 296:916919, 2002.\n[35] M. Brown, W. N. Grundy, D. Lin, N. Cristianini,\nC. Sugnet, M. Ares Jr, and D. Haussler. Support vector\nmachine classification of microarray gene expression\ndata. Technical Report UCSC-CRL-99-09, Department\nof Computer Science, University of California at Santa\nCruz, 1999.\n[36] M. P. S. Brown, W. N. Grundy, D. Lin, N. Cristianini,\nC. W. Sugnet, T. S. Furey, M. Ares Jr, and D. Haussler.\nKnowledge-based analysis of microarray gene expression\ndata by using support vector machines. Proceedings\nof the National Academy of Sciences, 97:262267,\n1997.\nSIGKDD Explorations.\nVolume 5,Issue 2 - Page 138\n[37] P. Pavlidis, J. Weston, J. Cai, and W. N. Grundy.\nGene functional classification from heterogeneous data.\nIn Proceedings of the Fifth International Conference on\nComputational Molecular Biology, pages 242248, 2001.\n[38] T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski,\nM. Schummer, and D. Haussler. Support vector machine\nclassification and validation of cancer tissue samples\nusing microarray expression data. Bioinformatics,\n16:906914, 2000.\n[39] S. Mukherjee,\nP. Tamayo,\nD. Slonim,\nA. Verri,\nT. Golub, J. P. Mesirov, and T. Poggio. Support vector\nmachine classification of microarray data. Technical\nReport 182, Center for Biological and Computational\nLearning Massachusetts Institute of Technology, 1998.\n[40] S. Ramaswamy, P. Tamayo, R. Rifkin, S. Mukherjee,\nC. Yeang, M. Angelo, C. Ladd, M. Reich, E. Latulippe,\nJ. P. Mesirov, T. Poggio, W. Gerald, M. Loda, E. S.\nLander, and T. R. Golub. Multiclass cancer diagnosis\nusing tumor gene expression signatures. Proceedings of\nthe National Academy of Sciences, 98, 2001.\n[41] C. Yeang, S. Ramaswamy, P. Tamayo, S. Mukherjee\n, R. M. Rifkin, M. Angelo, M. Reich, E. Lander,\nJ. Mesirov, and T. Golub. Molecular classification of\nmultiple tumor types. Bioinformatics, 1:17, 2001.\n[42] F. G. Cozman and I. Cohen. Unlabeled data can degrade\nclassification performance of generative classifiers\n. In Fifteenth International Florida Artificial Intelligence\nSociety Conference, pages 327331, 2002.\n[43] T. Zhang and F. J. Oles. A probability analysis on\nthe value of unlabeled data for classification problems.\nIn Proceedings of the International Conference on Machine\nLearning, pages 11911198, 2000.\n[44] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell.\nText classification from labeled and unlabeled documents\nusing EM. Machine Learning, 39:103134, 2000.\n[45] T. Li, S. Zhu, Q. Li, and M. Ogihara. Gene functional\nclassification by semi-supervised learning from heterogeneous\ndata. In Proceedings of the ACM Symposium\non Applied Computing, 2003.\n[46] G. R. G. Lanckriet, M. Deng, N. Cristianini, M. I. Jordan\n, and W. S. Noble. Kernel-based data fusion and its\napplication to protein function prediction in yeast. In\nProceedings of the Pacific Symposium on Biocomputing\n2004, 2004.\n[47] A. Shilton, M. Palaniswami, D. Ralph, and A. C. Tsoi.\nIncremental training in support vector machines. In\nProceedings of the International Joint Conference on\nNeural Networks, 2001.\n[48] C. P. Diehl. 
", "keywords": "low-level analysis;data mining;transductive learning;learning from heterogeneous data;heterogeneous data;semi-supervised learning;incremental learning;transcript discovery;microarray;gene expression estimation;statistics;genotyping;Low-level microarray analysis;re-sequencing"} {"name": "135", "title": "Measuring Cohesion of Packages in Ada95", "abstract": "Ada95 is an object-oriented programming language. Packages are basic program units in Ada 95 to support OO programming, which allow the specification of groups of logically related entities.
Thus, the cohesion of a package is mainly about how tightly these entities are encapsulated in the package. This paper discusses the relationships among these entities based on dependence analysis and presents properties for obtaining these dependencies. Based on these, the paper proposes an approach to measure package cohesion which satisfies the properties that a good measure should have.", "fulltext": "INTRODUCTION
Cohesion is one of the most important software features during its development. It tells us the tightness among the components of a software module. The higher the cohesion of a module, the more understandable, modifiable and maintainable the module is. A software system should have high cohesion and low coupling. Researchers have developed several guidelines to measure the cohesion of a module [1, 3, 4]. Since more and more applications are object-oriented, approaches to measure the cohesion of object-oriented (OO) programs have become an important research field.
Generally, each object-oriented programming language provides facilities to support OO features, such as data abstraction, encapsulation and inheritance. Each object consists of a set of attributes to represent the state of the object and a set of operations on these attributes. Thus, in an OO environment, cohesion is mainly about how tightly the attributes and operations are encapsulated.
There are several approaches proposed in the literature to measure OO program cohesion [2, 5, 6, 7, 11, 12]. Most approaches are based on the interaction between operations and attributes, and cohesion is measured as the number of such interactions. Generally only the references from operations to attributes are considered; few approaches consider the attribute-to-attribute and operation-to-operation interactions at the same time. This might lead to bias when measuring the cohesion of a class. For example, when designing a trigonometric function library class, we might set a global variable to record a temporary result. The variable is referred to in all the operations of the class. According to methods based on the interaction between operations and attributes [6, 7], the cohesion is the maximum value of 1. In fact, there are no relations among the operations if the calls are not taken into account; in this view, its cohesion is 0. The difference is caused by considering only the references from operations to attributes, while not considering the inter-operation relations.
In our previous work, we have studied the measurement of OO program cohesion [10, 13, 14]. Our approach overcomes the limitations of previous class cohesion measures, which consider only one or two of the three facets. Since the OO mechanisms in different programming languages differ from each other, this paper applies our measure to Ada packages.
The remaining sections are organized as follows. Section 2 introduces packages in Ada 95. Section 3 discusses the basic definitions and properties for our measure. Based on these definitions and properties, Section 4 proposes approaches to measure package cohesion. Concluding remarks are given in the last section.
PACKAGES IN ADA 95
In Ada 95 [ISO95], packages and tagged types are the basic program units to support OO programming. A package allows the specification of groups of logically related entities.
Typically, a package contains the declaration of a type along with the declarations of the primitive subprograms of the type, which can be called from outside the package, while its inner workings remain hidden from outside users. In this paper, we distinguish packages into four groups.
PG1: Packages that contain any kind of entities except tagged types.
PG2: Packages that only contain the declaration of one tagged type along with the primitive subprograms of the type. There are two subgroups in PG2:
- PG2-1: The type is an original tagged type.
- PG2-2: The type is a derived type.
PG3: Combination of PG1 and PG2.
PG4: Generic packages.
After a generic package is instantiated, it belongs to one of the former three groups. Thus, only the cohesion measurement of PG1, PG2 and PG3 is discussed in this paper.
DEFINITIONS
In this section, we will present our definitions in the form of PG1. The cohesion of a package from PG1 is mainly about how tightly the objects and subprograms are encapsulated in the package. In this paper, the relationships among objects and subprograms are defined as three dependencies: inter-object, inter-subprogram and subprogram-object dependence.
Definition 1 In the package body or a subprogram of the package, if the definition (modification) of object A uses (refers to, but does not modify) object B directly or indirectly, or whether A can be defined is determined by the state of B, then A depends on B, denoted by A → B.
Generally, if B is used in the condition part of a control statement (such as if and while), and the definition of A is in the inner statement of the control statement, the definition of A depends on B's state.
Definition 2 If object A is referred to in subprogram P, P depends on A, denoted by P → A.
Definition 3 There are two types of dependencies between subprograms: call dependence and potential dependence. If P is called in M, then M call depends on P, denoted by M → P. If the object A used in M is defined in P, the A used in M depends on the A defined in P, denoted by M →(A, A) P, where (A, A) is named a tag. For each call edge, add the tag (*, *) for unification, i.e., if P → Q, then P →(*, *) Q.
To obtain these dependencies, we introduce four sets for each subprogram M:
IN(M) is an object set, each element of which is an object referred to before its value is modified in M;
OUT(M) is an object set, each element of which is an object modified in M;
DEP_A(M) is a dependence set which represents the dependencies from the objects referred to in M to the objects defined outside M. Each element has the form <A, B>, where A and B are objects of the package;
DEP_A_OUT(M) is a dependence set which records the dependencies from the objects referred to in M to the objects defined outside M when exiting M.
In general, the intermediate results are invisible outside, and an object might be modified many times in a subprogram. We introduce DEP_A_OUT to improve the precision. Obviously, DEP_A_OUT(M) ⊆ DEP_A(M).
Property 1 A ∈ IN(M), A ∈ OUT(P) ⇒ M →(A, A) P.
Property 2 <A, B> ∈ DEP_A(M), B ∈ OUT(P) ⇒ M →(A, B) P.
Property 3 M →(A, B) P, ∃<B, C> (<B, C> ∈ DEP_A_OUT(P), C ∈ OUT(Q)) ⇒ M →(A, C) Q.
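To make the role of the four sets concrete, the following small Python sketch (ours, not part of the original paper; the names SubprogramSummary, derive_edges and extend_edges are illustrative only) shows how tagged inter-subprogram dependencies could be derived from them according to Properties 1-3.

# Illustrative sketch: deriving tagged inter-subprogram dependencies
# from the four per-subprogram sets, following Properties 1-3 above.
from dataclasses import dataclass, field

@dataclass
class SubprogramSummary:
    name: str
    IN: set = field(default_factory=set)         # objects referred to before being modified
    OUT: set = field(default_factory=set)        # objects modified
    DEP_A: set = field(default_factory=set)      # pairs (A, B): A referred to in M depends on B defined outside M
    DEP_A_OUT: set = field(default_factory=set)  # the subset of DEP_A that still holds on exit

def derive_edges(M, P):
    """Edges M --(A,B)--> P implied by Properties 1 and 2."""
    edges = {(M.name, P.name, (A, A)) for A in M.IN & P.OUT}                 # Property 1
    edges |= {(M.name, P.name, (A, B)) for (A, B) in M.DEP_A if B in P.OUT}  # Property 2
    return edges

def extend_edges(edges, P, Q):
    """Property 3: propagate M --(A,B)--> P through DEP_A_OUT(P) to edges M --(A,C)--> Q."""
    new = set()
    for (m, p, (A, B)) in edges:
        if p == P.name:
            new |= {(m, Q.name, (A, C)) for (B2, C) in P.DEP_A_OUT if B2 == B and C in Q.OUT}
    return new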
In our previous work [8, 9], we have proposed methods to analyze dependencies among statements of Ada programs, and these dependencies can easily be transformed into the dependencies proposed in this paper. Due to space limitations, we do not discuss them in detail here.
To present our cohesion measure in a unified model, we introduce the package dependence graph to describe all types of dependencies.
Definition 4 The package dependence graph (PGDG) of a package PG is a directed graph, PGDG = <N, E, T>, where N is the node set, E is the edge set and T is the tag set. N = N_O ∪ N_P, where N_O is the object node set, each node of which represents a unique object, and N_P is the subprogram node set, each node of which represents a unique subprogram. The PGDG consists of three sub-graphs:
- Inter-Object Dependence Graph (OOG), OOG = <N_O, E_O>, where N_O is the object node set (the name of a node is the name of the object it represents) and E_O is the edge set; if A → B, then edge <A, B> ∈ E_O.
- Inter-Subprogram Dependence Graph (PPG), PPG = <N_P, E_P, T>, where N_P is the subprogram node set, E_P is the edge set which represents the dependencies between subprograms, and T ⊆ (V × V) is the tag set, where V is the union of the objects and {*}.
- Subprogram-Object Dependence Graph (POG), POG = <N, E_PO>, where N is the node set representing objects and subprograms, and E_PO is the edge set representing dependencies between subprograms and objects; if P → A, then <P, A> ∈ E_PO.
Example 1 shows the package Tri, which contains three objects, temp, temp1 and temp2, and four subprograms, sin, cos, tg and ctg. Figure 1 shows the PGDG of the package Tri in Example 1 (all the tags on the PPG are (*, *), because there are only call dependencies in this example; we omit the tags for convenience).
Example 1: package Tri.
package Tri is
  temp, temp1, temp2: real;
  function sin (x: real) return real;
  function cos (x: real) return real;
  function tg (x: real) return real;
  function ctg (x: real) return real;
end Tri;

package body Tri is
  function sin (x: real) return real is
  begin temp := ...; return temp; end sin;
  ...
  function tg (x: real) return real is
  begin
    temp1 := sin(x); temp2 := cos(x);
    temp := temp1/temp2; return temp;
  end tg;
  ...
end Tri;
[Figure 1. PGDG of class Tri: (a) OOG, (b) POG, (c) PPG]
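As an illustration (ours, not from the original paper), the PGDG of Tri can be written down as three small edge sets in Python; the POG edges of cos and ctg are assumptions made by analogy with sin and tg, since their bodies are elided above.

# Illustrative sketch of the PGDG of package Tri as plain edge sets.
# The POG edges of cos and ctg are assumed from the elided bodies.
objects = {"temp", "temp1", "temp2"}
subprograms = {"sin", "cos", "tg", "ctg"}

# OOG: inter-object dependencies (in tg, temp is computed from temp1 and temp2)
OOG = {("temp", "temp1"), ("temp", "temp2")}

# PPG: inter-subprogram dependencies; only call edges here, so every tag is (*, *)
PPG = {("tg", "sin", ("*", "*")), ("tg", "cos", ("*", "*")),
       ("ctg", "sin", ("*", "*")), ("ctg", "cos", ("*", "*"))}

# POG: which objects each subprogram refers to
POG = {("sin", "temp"), ("cos", "temp"),
       ("tg", "temp"), ("tg", "temp1"), ("tg", "temp2"),
       ("ctg", "temp"), ("ctg", "temp1"), ("ctg", "temp2")}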
3.2 Extended Definitions
Since there is no object in a package of PG2, the definitions of Section 3.1 cannot be applied to these packages directly. Therefore, this section extends the definitions of Section 3.1 to a more general model by the following steps:
- For PG1, if there is an embedded package, the package is taken as an object.
- For PG2, take the components of the type as objects of the package. Let A, B be objects of a type T, M, P primitive subprograms, and Com1 and Com2 components of T. Then
∀A, B (A.Com1 → B.Com2) ⇒ Com1 → Com2;
∀A, P (P → A.Com) ⇒ P → Com;
∀A, B, M, P (M →(A.Com1, B.Com2) P) ⇒ M →(Com1, Com2) P.
- For PG3, take the types as objects of the package.
To present our measure in a unified model, we add powers for the different objects:
PW(O) =
  Cohesion(O)        if O is a package object,
  Cohesion(PG(O))    if O is a type object,
  1                  otherwise,
where Cohesion(O) is the cohesion of O and PG(O) returns the package containing O.
MEASURING PACKAGE COHESION
According to the PGDG, this section proposes our method to measure package cohesion. In the following discussion, we assume package PG contains n objects and m subprograms, where m, n ≥ 0.
4.1 Measuring Inter-Object Cohesion
Inter-object cohesion is about the tightness among the objects in a package. To measure this cohesion, for each object A we introduce a set O_DEP(A) to record the objects on which A depends, i.e. O_DEP(A) = {B | A → B, A ≠ B}.
Let PW(O_DEP(A)) = Σ_{B ∈ O_DEP(A)} PW(B). Then, we define the inter-object cohesion as:
Cohesion(O_O, PG) =
  0                                           if n = 0,
  PW(A)                                       if n = 1,
  (1/n) Σ_{i=1..n} PW(O_DEP(A_i)) / (n − 1)   if n > 1,
where PW(O_DEP(A)) / (n − 1) represents the degree to which A depends on the other objects.
If n = 0, there is no object in the package, and we set the cohesion to 0. If n = 1, there is one and only one object in the package, and the cohesion is its power.
4.2 Measuring Subprogram-Object Cohesion
Subprogram-object cohesion is the most important facet in measuring cohesion. Until now, several approaches have been proposed in the literature, such as Chae's methods [6, 7], and most of them are based on the POG. As we have mentioned above, all these methods describe the object references in a simple way: subprograms are connected by the objects they refer to, but whether these subprograms are actually related is not described exactly. Thus, these approaches should be improved to describe these relations. For completeness, we use Co(Prev) to represent a previous cohesion measure which satisfies Briand's four properties.
For each subprogram P, we introduce another two sets, P_O and P_O_OUT, where
- P_O(P) records all the objects referred to in P;
- P_O_OUT(P) records the objects referred to in P that relate to objects referred to by other subprograms, i.e. P_O_OUT(P) = {A | ∃B, M ((P →(A, B) M) ∨ (M →(B, A) P)), A, B ≠ '*'}.
Let δ(P) = Σ_{A ∈ P_O_OUT(P)} PW(A) / Σ_{A ∈ P_O(P)} PW(A). Then, we define the subprogram-object cohesion as:
Cohesion(P_O, PG) =
  0                                               if m = 0 or n = 0,
  Σ_{A ∈ P_O(P_1)} PW(A) / Σ_{i=1..n} PW(A_i)     if m = 1,
  Co(Prev) · (1/m) Σ_{i=1..m} δ(P_i)              otherwise.
If P_O(P) = ∅, i.e. no objects are referred to in P, we set δ(P) = 0. If the objects referred to in P are not related to other subprograms, these objects can work as local variables; it decreases the cohesion to take what is in effect a local variable of one subprogram as an object shared by all subprograms. If there is no object or subprogram in the package, no subprogram will depend on others; thus Cohesion(P_O, PG) = 0.
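As a worked instance of the two definitions above (our illustration, for the Tri package of Figure 1, with unit powers PW = 1 for all three objects): O_DEP(temp) = {temp1, temp2} and O_DEP(temp1) = O_DEP(temp2) = ∅, so Cohesion(O_O, Tri) = (1/3)·(2/2 + 0/2 + 0/2) = 1/3; and since every tag on the PPG of Tri is (*, *), P_O_OUT(P) = ∅ and hence δ(P) = 0 for each of sin, cos, tg and ctg, so Cohesion(P_O, Tri) = 0 regardless of Co(Prev). These values agree with the figures given in Section 4.4 below.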
4.3 Measuring Inter-Subprogram Cohesion
In the PGDG, although subprograms can be connected by objects, this does not necessarily mean that these subprograms are related. To measure the inter-subprogram cohesion, we introduce another set, P_DEP(P) = {M | P → M}, for each P. The inter-subprogram cohesion Cohesion(P_P, PG) is defined as follows:
Cohesion(P_P, PG) =
  0                                        if m = 0,
  1                                        if m = 1,
  (1/m) Σ_{i=1..m} |P_DEP(P_i)| / (m − 1)  if m > 1,
where |P_DEP(P)| / (m − 1) represents the tightness between P and the other subprograms in the package.
If each subprogram depends on all other subprograms, Cohesion(P_P, PG) = 1. If no subprogram has a relation with any other subprogram, Cohesion(P_P, PG) = 0.
4.4 Measuring Package Cohesion
After measuring the three facets independently, we have a discrete view of the cohesion of a package. We have two ways to measure the package cohesion:
1) Each measurement works as a separate field; the package cohesion is the 3-tuple Cohesion(PG) = <Cohesion(O_O, PG), Cohesion(P_O, PG), Cohesion(P_P, PG)>.
2) Integrate the three facets as a whole:
Cohesion(PG) =
  0                                  if m = 0,
  k · Cohesion(P_P, PG)              if n = 0, m ≠ 0,
  Σ_{i=1..3} k_i · Cohesion_i(PG)    otherwise,
where k ∈ (0, 1], k_1, k_2, k_3 > 0 and k_1 + k_2 + k_3 = 1, and
Cohesion_1(PG) = Cohesion(O_O, PG), Cohesion_2(PG) = Cohesion(P_P, PG), Cohesion_3(PG) = Cohesion(P_O, PG).
If n = 0 and m ≠ 0, the package cohesion describes only the tightness of the call relations; thus we introduce a parameter k to constrain it.
For the example shown in Figure 1, the cohesion of Tri is as follows: Cohesion(O_O, Tri) = 1/3, Cohesion(P_O, Tri) = 0, Cohesion(P_P, Tri) = 1/3. Let k_1 = k_2 = k_3 = 1/3 and Co(Prev) = 1; then Cohesion(Tri) = 2/9.
Briand et al. [3, 4] have stated that a good cohesion measure should (1) be non-negative and standardized, (2) have a minimum and a maximum, (3) be monotonous, and (4) not increase when combining two modules. These properties give a guideline for developing a good cohesion measure. According to the definitions, it is easy to prove that our measure satisfies these properties.
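The following self-contained Python sketch (ours, not the authors') evaluates the formulas above on the PGDG of Tri; with unit powers and Co(Prev) = 1 it reproduces Cohesion(O_O, Tri) = 1/3, Cohesion(P_P, Tri) = 1/3, Cohesion(P_O, Tri) = 0 and Cohesion(Tri) = 2/9.

# Illustrative sketch: evaluating the package cohesion measures on the Tri example.
# Unit powers PW = 1 for every object; Co(Prev) = 1 as in the text above.
objects = ["temp", "temp1", "temp2"]
subprograms = ["sin", "cos", "tg", "ctg"]

O_DEP = {"temp": {"temp1", "temp2"}, "temp1": set(), "temp2": set()}               # from the OOG
P_DEP = {"sin": set(), "cos": set(), "tg": {"sin", "cos"}, "ctg": {"sin", "cos"}}  # from the PPG
P_O = {"sin": {"temp"}, "cos": {"temp"},
       "tg": {"temp", "temp1", "temp2"}, "ctg": {"temp", "temp1", "temp2"}}        # from the POG
P_O_OUT = {p: set() for p in subprograms}  # every PPG tag is (*, *), so these sets are empty

n, m = len(objects), len(subprograms)
PW = lambda obj: 1
co_prev = 1

cohesion_oo = sum(sum(PW(b) for b in O_DEP[a]) / (n - 1) for a in objects) / n
cohesion_pp = sum(len(P_DEP[p]) / (m - 1) for p in subprograms) / m

def delta(p):
    total = sum(PW(a) for a in P_O[p])
    return 0 if total == 0 else sum(PW(a) for a in P_O_OUT[p]) / total

cohesion_po = co_prev * sum(delta(p) for p in subprograms) / m

k1 = k2 = k3 = 1.0 / 3
overall = k1 * cohesion_oo + k2 * cohesion_pp + k3 * cohesion_po
print(cohesion_oo, cohesion_pp, cohesion_po, overall)  # 0.333..., 0.333..., 0.0, 0.222...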
4.5 Cohesion for PG2-2
In a hierarchy of types, the derived type inherits the components and primitive subprograms of its super types. Generally, inheritance will increase the coupling and decrease the cohesion. For a package from PG2-2, we discuss its cohesion in four cases:
Case 1: Take the package independently.
Case 2: Take all the primitive subprograms and components (including those from the super type) into consideration.
Case 3: If the primitive subprograms of the derived type might access the components (or subprograms) of the super type, take these components (or subprograms) as those of the derived type.
Case 4: Take the super type as an object of the derived type.
The shortcoming of Case 1 is that it only measures the cohesion of the additional components and primitive subprograms of the derived type, not of the complete type.
The primitive subprograms of the super type cannot access the components of the derived type, except for dispatched subprograms. Consequently, in Case 2 or 3, the deeper the hierarchy of types, the smaller the cohesion, and it becomes hard to design a package whose cohesion is big enough.
Although we present four cases in this section, none is good enough to describe the cohesion of a package from PG2-2. To measure the cohesion of a derived type, many more aspects should be considered.
RELATED WORKS
Several methods have been proposed in the literature to measure class cohesion. This section gives a brief review of these methods.
(1) Chidamber's LCOM1 ∈ [0, m(m − 1)/2] measures cohesion by counting similar and non-similar method pairs. It is a reverse cohesion measure: the bigger the measure, the lower the cohesion.
(2) The PPG in Hitz's LCOM2 is represented by an undirected graph; LCOM2 is the number of connected sub-graphs. When there is one and only one sub-graph, he introduces connectivity to distinguish them.
(3) Briand's RCI is the ratio of the number of edges on the POG to the maximum possible number of interactions between subprograms and objects.
(4) Henderson's LCOM3 can be described as
LCOM3(C) = ((1/n) Σ_{j=1..n} |µ(A_j)| − m) / (1 − m),
where µ(A) = {M | A ∈ P_O(M)}, A is an attribute and M is a method.
(5) Chae's CO [6] introduces glue methods, and Xu-Zhou's CO [13] introduces a cut set (glue edges), to analyze the interaction patterns. These two measures are more rational than the other measures.
From the descriptions above, we can see that:
- All these methods consider attribute references in a simple way; whether the methods are related or not is not described exactly.
- LCOM1, LCOM2 and LCOM3 are non-standardized, because their upper bounds are related to the number of methods in the class. LCOM1 is non-monotonous, and the measured results might be inconsistent with intuition in some cases.
- RCI has the basic four properties proposed by Briand, but it does not consider the patterns of the interactions among its members; neither do LCOM1, LCOM2 or LCOM3.
- Chae's CO overcomes most limitations of the previous measures, but it is non-monotonous [13]. Xu-Zhou's CO improves Chae's cohesion measure and makes its result more consistent with intuition. The chief disadvantage of both measures is that they can only be applied to a connected POG; otherwise the result will always be 0.
- LCOM1 and LCOM2 measure the cohesion among the methods in a class. We can improve their similarity function using the dependencies among methods proposed in this paper.
- LCOM3, Chae's and Xu-Zhou's CO measure the cohesion among the methods and attributes in a class. In this paper we improve them by introducing δ(M) for each method M.
CONCLUSION
This paper proposes an approach to measure the cohesion of a package based on dependence analysis. In this method, we discussed the tightness of a package from three facets: inter-object, subprogram-object and inter-subprogram. These three facets can be used to measure package cohesion independently and can also be integrated into a whole. Our approach overcomes the limitations of previous class cohesion measures, which consider only one or two of the three facets; thus, our measure is more consistent with intuition. In future work, we will verify and improve our measure by experimental analysis.
When measuring package cohesion, attention should be paid to the following points.
(1) In a hierarchy of types, the primitive subprograms of a super type might access the objects of the derived type by dispatching.
Therefore, when measuring the cohesion of PG2, it is hard to determine whether such accesses to the derived type should be considered or not.
(2) We can determine polymorphic calls in an application system. However, this is impossible for a package that can be reused in many systems.
(3) How should we deal with some special subprograms, such as access subprograms, since such subprograms can access some special objects in the package?
(4) How can domain knowledge be applied to the cohesion measure?
In all, if a package can be applied in many applications, the cohesion is mainly about the package itself, without considering the application environments. Otherwise, it is the cohesion in the specific environments.
ACKNOWLEDGMENTS
This work was supported in part by the National Natural Science Foundation of China (NSFC) (60073012), the National Grand Fundamental Research 973 Program of China (G1999032701), and the National Research Foundation for the Doctoral Program of Higher Education of China (20020286004).
REFERENCES
[1] Allen, E.B., Khoshgoftaar, T.M. Measuring Coupling and Cohesion: An Information-Theory Approach. In Proceedings of the Sixth International Software Metrics Symposium, Florida, USA, IEEE CS Press, 1999, 119-127.
[2] Bansiya, J.L., et al. A Class Cohesion Metric for Object-Oriented Designs. Journal of Object-Oriented Programming, 1999, 11(8): 47-52.
[3] Briand, L.C., Morasca, S., Basili, V.R. Property-Based Software Engineering Measurement. IEEE Trans. Software Engineering, Jan. 1996, 22(1): 68-85.
[4] Briand, L.C., Daly, J., Wuest, J. A Unified Framework for Cohesion Measurement in Object-Oriented Systems. Empirical Software Engineering, 1998, 3(1): 65-117.
[5] Briand, L.C., Morasca, S., Basili, V.R. Defining and Validating Measures for Object-Based High-Level Design. IEEE Trans. Software Engineering, 1999, 25(5): 722-743.
[6] Chae, H.S., Kwon, Y.R., Bae, D.H. A Cohesion Measure for Object-Oriented Classes. Software - Practice & Experience, 2000, 30(12): 1405-1431.
[7] Chae, H.S., Kwon, Y.R. A Cohesion Measure for Classes in Object-Oriented Systems. In Proceedings of the Fifth International Software Metrics Symposium, Bethesda, MD, USA, IEEE CS Press, 1998, 158-166.
[8] Chen, Z., Xu, B., Yang, H. Slicing Tagged Objects in Ada 95. In Proceedings of Ada-Europe 2001, LNCS 2043: 100-112.
[9] Chen, Z., Xu, B., Yang, H., Zhao, J. Static Dependency Analysis for Concurrent Ada 95 Programs. In Proceedings of Ada-Europe 2002, LNCS 2361, 219-230.
[10] Chen, Z., Xu, B., Zhou, Y., Zhao, J., Yang, H. A Novel Approach to Measuring Class Cohesion Based on Dependence Analysis. In Proceedings of ICSM 2002, IEEE CS Press, 377-383.
[11] Chidamber, S.R., Kemerer, C.F. A Metrics Suite for Object-Oriented Design. IEEE Trans. Software Engineering, 1994, 20(6): 476-493.
[12] Hitz, M., Montazeri, B. Measuring Coupling and Cohesion in Object-Oriented Systems. In Proceedings of the International Symposium on Applied Corporate Computing, Monterrey, Mexico, October 1995: 25-27.
[13] Xu, B., Zhou, Y. Comments on "A Cohesion Measure for Object-Oriented Classes". Software - Practice & Experience, 2001, 31(14): 1381-1388.
[14] Zhou, Y., Guan, Y., Xu, B. On Revising Chae's Cohesion Measure for Classes. J. Software,
2001, 12(Suppl.): 295-300 (in Chinese).", "keywords": "Object-Oriented;Ada95;cohesion;dependence;Measurement;OO programming;measure;Cohesion"} {"name": "136", "title": "Measurement of e-Government Impact: Existing practices and shortcomings", "abstract": "Public administrations all over the world invest an enormous amount of resources in e-government. How the success of e-government can be measured, however, is often not clear. E-government involves many aspects of public administration, ranging from introducing new technology to business process (re-)engineering. The measurement of the effectiveness of e-government is a complicated endeavor. In this paper, current practices of e-government measurement are evaluated and a number of limitations of current measurement instruments are identified. Measurement focuses predominantly on the front office (primarily counting the number of services offered) and not on the back-office processes. Interpretation of measures is difficult, as all existing measurement instruments lack a framework depicting the relationships between the indicators and the use of resources. The different measures may fit the aims of the owners of the e-government services; however, due to conflicting aims and priorities, little agreement exists on a uniform set of measures needed for comparison of e-government development. Traditional methods of measuring e-government impact and resource usage fall short of the richness of data required for the effective evaluation of e-government strategies.", "fulltext": "INTRODUCTION
Public institutions as well as business organizations use the Internet to deliver a wide range of information and services at an increasing level of sophistication [24]. However, websites and the related business processes and information systems are so complex that it is difficult for governments to determine adequate measures for evaluating the efficiency and effectiveness of the spending of their public money. Moreover, measuring only the front office of public websites is too narrow a view of e-government: e-government involves the collaboration and communication between stakeholders and the integration of cross-agency business processes.
An important aim of having a well-founded theory on measuring e-government is to allow comparison or benchmarking. By examining the results of such benchmarks we might be able to distinguish good from bad practices and to give directives to designers of e-government services. Moreover, it should help to identify how effectively public money is spent, thus evaluating the relationship between results and the resources used.
Comparison can have different goals; selecting between options is just one of them. In principle we can distinguish three types of comparison: 1) comparison of alternatives (e.g. to make a choice between alternative solutions), 2) vertical comparison over time (e.g. in order to measure improvement of versions) and 3) horizontal comparison (e.g. benchmarking different solutions). Whatever type of comparison we choose, we can only compare if we have a set of preferred outcomes. Many measurement instruments currently applied are not described in such a way that the preferences underlying the instrument are made explicit. In this research we describe the different measurement instruments used and discuss their strengths and weaknesses.
The goal of this paper is to evaluate how current measurement instruments measure the impact of e-government.
The combined research methodology of literature research and case studies was chosen to address this goal. Case study research can be characterized as qualitative and observational, using predefined research questions [36]. Case studies were used to examine existing measurement instruments in the Netherlands. The e-government monitor of Accenture and a European Union measurement instrument were also evaluated, as these instruments are used by Dutch agencies to benchmark their situation.
THE NATURE OF E-GOVERNMENT
Before analyzing instruments developed to measure e-government development, it is necessary to understand the nature of e-government. The organization of public administration involves many heterogeneous types of business processes. When laws or regulations are changed, these processes and their supporting systems have to be adapted. Within one policy layer, the process of adaptation starts with legislation drafting (adapting the law or regulations), followed by a chain of processes varying from translating these law texts into specifications, the design of processes and supporting systems, the development of these processes and systems, and finally implementation and use (for a recent `legal engineering' approach see [34]). A complicating factor is that more than one governmental layer exists and interaction between these layers often occurs. Especially the need to adapt legislation from the European Union is a more dominant factor than ever, which further complicates this process.
In Figure 1 the fragmented nature of government is shown. Legislation and service provisioning efforts are distributed over the European, state, regional and local levels. The situation is even more complicated as within each level many agencies of various types exist. At the local level, municipalities, water boards, chambers of commerce, local tax offices and many other public and public-private agencies exist. As such, many agencies influence the impact of e-government, and measurement should include them all.
Figure 1: Fragmented nature of public administration
Two main types of interactions influence this fragmented landscape: the policy-making, implementation, execution and enforcement of new legislation, and the businesses and citizens searching for information and services.
We will denote the creation, implementation, execution, enforcement and maintenance of laws as the production cycle in this paper.
Governments are looking for ways to increase their\nefficiency, effectiveness, decrease the administrative burden and\nreduce the lead times for adopting new legislations. The\nconsequences of new laws at production phase (drafting) are only\nroughly known. Only after implementing the new regulations in\nthe processes and supporting systems the full meaning of applying\nthese regulations becomes clear. Certainly when the\ninterpretations and translation into practical applications take\nplace at local government level it often will not be possible to\ntimely inform the businesses or citizens, who are affected by the\nnew law, pro-actively, as no information about concrete effects on\ntheir cases is available. A complicating factor is that many times a\nadministrative process is supported by heterogeneous information\nsystems and many different parties can be involved in adapting\nthose systems.\nMost of the companies' ERP software components need to be\nupdated quite frequently to be in, and keep being in accordance\nwith small changes in administrative legislation. The same holds\nfor Human Resources software, Time reporting software, Tax\nreporting software and, more indirectly, Logistics software. All\nhave to be updated due to changes in legislation. It does not\nrequire extensive explanation to stress the need for smart public-private\nnetworks from a production chain perspective.\nNowadays businesses expect that the governments reduce the\nadministrative burden for businesses. Governments can achieve\nthis goal by creating a smart, service oriented, public\nadministration. To be able to provide these integrated services\naccess to different legal sources or better to the formal models\nthat could be derived from those models is needed (see e.g. [7]).\nStandardization initiatives like the Metalex standard for\ndescribing legal sources (see\nwww.metalex.nl\n) and the Power\nmethod for modeling normative systems (see\nwww.belastingdienst.nl/epower\n) are essential first steps towards\nthis. They provide a basis for interoperable and contextual better\nunderstandable and accessible legal sources that could easier be\nconnected to the context of business activities.\nFrom the demand perspective, citizens and businesses find it very\nhard to access relevant legislation, local procedures and rules,\npolicy documents etc. Governmental bodies are engaged in a\nflurry of policy and law making activities. Not only is this a\ncomplex myriad of legal issues, but the information is produced at\ndifferent levels of public administration, including local, regional,\nnational and European union. A commonly accepted requirement\nhowever is that online state legislative information is equally\naccessible to all (Fage & Fagan, 2004) and of course in the first\nplace it should be accessible. Many governments currently are\nsearching for ways to make their information accessible and\nretrievable. This involves issues regarding terminology,\nexplaining the type of legislative document, understandable and\neasy-to-use search interfaces and accessing the official status of\nonline documents.\nA central question for researchers working in the field of e-government\nis how to measure the progress made in complying to\nthe requirements mentioned before. In this paper we examine\nsome examples of measurement instruments that were developed\nto measure progress in e-Government. 
But before describing these\ninstruments we will discuss some literature on measuring eGovernment\nfirst.\nLITERATURE REVIEW\nThere is a diversity of literature focusing on measurements. Stage\nmodels are often used to positioning and evaluate the current\nstatus of e-government development. Services literature focusing\n\n482\non the measurement to of perceived quality, satisfaction of\ncomplex, multi-service organizations. Last there is the research\nfocusing on developing suitable `yardstick', performance\nindicators acting as surrogates for measuring the performance.\n3.1 Stage models\nMany researchers on e-business address the stages of Web site\ndevelopment. Green [17] suggests three stages: attracting,\ntransforming, and utilization of media technology. Fink et al.\n(2001) focuses on attracting, enhancing and retaining client\nrelationships using the Web site applications. Moon [27]proposes\na five stage model. Layne and Lee [23] propose four stages of a\ngrowth model towards e-government. In stage one, governments\ncreate a `state website' by establishing a separate Internet\ndepartment. There is no integration between the processes in the\nfront and back offices. The web sites are focused on creating a\nweb-presence and providing information. Transaction possibilities\nfor customers are restricted to printing online forms and sending\nthem by traditional mail. At stage two, transaction, there is two-way\ncommunication. Customers transact with government on-line\nby filling out forms and government responds by providing\nconfirmations, receipts, etc. The number of transactions is\nrelatively small and the organization becomes confronted with\ntranslating information from and back to the front office and the\nother way around. Often a working database is established in the\nfront office for supporting immediate transactions. The data in\nthis database is periodically derived from and exported to the\nvarious databases in the back office. At stage three, vertical\nintegration, the focus is moving towards transformation of\ngovernment services, rather than automating and digitizing\nexisting processes. Most information systems are localized and\nfragmented. Government organizations often maintain separate\ndatabases that are not connected to other governmental agencies\nat the same level or with similar agencies at the local or federal\nlevel. A natural progression according to Layne and Lee is the\nintegration of scattered information systems at different levels\n(vertical) of government services. Layne and Lee (2001) expect\nthat vertical integration within the similar functional walls but\nacross different levels of government will happen first, because\nthe gap between levels of government is much less than the\ndifference between different functions. Information systems are\nconnected to each other or, at least, can communicate with each\nother.\nThe problem with the description of governmental systems is that\nthey don't make a distinction between the (legal and\nadministrative `knowledge' contained in those systems and the\ndata (of citizens) to which that knowledge is applied (e.g. to\nderive if someone is entitled to receive a subsidy). Especially the\nsharing of data is limited and for very good reasons too! 
Both the\ndesire to guarantee a certain level of privacy and the vulnerability\nfor misuse of data have been reasons for the legislator to limit\nstorage, reusing and sharing data between different agencies (not\nto speak of passing data of citizens from the government to\nprivate institutions).\nThe challenge consequently is how to realize the full potential of\ninformation technology, from the customer's perspective, since\nthis can only be achieved by horizontally integrating government\nservices across different functional walls (`silos') in Layne and\nLee's stage four, horizontal integration. The question is how to\nachieve this without the need for having databases across different\nfunctional areas to communicate with each. The situation that\ninformation obtained by one agency can be used for CRM\npurposes by other agencies by sharing information is for reasons\nmentioned earlier, undesirable. The knowledge contained in these\ninformation systems however can be shared and reused thus\nallowing better integrated government services.\n3.2 Service literature\nThe concepts of perceived quality and satisfaction are two of the\nfundamental pillars of recent evaluation studies. Bign et al. [5]\nconclude that in most cases the fundamental unit of analysis has\nbeen an isolated service, and the fact that several services may be\noffered by an individual organization has not been taken into\naccount. Indeed multi-service organizations, where the customer\nhas access to several services, have not been so intensively dealt\nwith. The problems facing these organizations in terms of\nmeasurement and management of perceived quality and of\nsatisfaction are more complex than in those organizations where\nonly one service is offered, but less complex then situations where\na service has to be assembled from many other service suppliers.\nWhen measuring the quality of such integrated service it is\nnecessary to take into consideration not only the perceived quality\nof each of the elementary services, but also the perceived overall\nquality of the constituting ones.\nBign et al. [5] found that the scale used to determine the\nperceived quality of the core services of hospitals, and\nuniversities, is composed of five dimensions, as proposed by\nParasuraman et al. [28]: tangibility, reliability, responsiveness,\nconfidence and empathy.\n\n\n3.3 Performance indicators\nPerformance indicators serve as surrogates to deduce the quality\nof the e-government performance [20]. Lee [24] provides\nmeasurements based on development states and a modification of\nSimeon's [32]components.\n(1) The following five items determine the affect (Attracting) of a\nhomepage on the Web site:\n1. Design of logo and tagline (quick summary of what a\nWeb site is all about).\n2. Graphics (e.g. layout, color and figures of a homepage).\n3. Institution's\nself-advertising (e.g. banner, button, and\ninterstitials).\n4. Services for attracting (e.g. quiz, lottery, e-card, maps,\nweather, channels, download service).C\n5. Contents for attracting (e.g. 
entertainments, culture,\ntourism, game, kids, health, gallery).\n(2) Informing consists of nine items developed by modifying\nSimeon's (1999) components: local links, contents for publicity,\ncontents for learning, reports, descriptions on the institution,\ndescriptions on online administrative services, projects, contact\ninformation and counseling.\n(3) Community consists of ten items: online forum, events, partner\nlinks (or ads), e-Magazine (or newsletter or Webcast), message\nboards, users' participation (e.g. articles, photos, personal links),\nfocus of news, vision (or values), domain identity and community\nservices (or online support for community meeting or\nnetworking). A good example of the latter is the ``Citizen\ndiscussion room'' of Ulsan City (eulsan.go.kr).\n\n483\n(4) Delivering as a variable is determined by the presence or\nabsence of features like: search engine, mailing list, framework,\nmultimedia, password system, FAQ, chat, downloadable\npublications and update indication.\n(5) Innovation. Public institutions have to utilize the Internet for\nactual service innovation. Hence, two variables indicating\ninnovation results are selected: transformation level of existing\nservices and frequency of new innovative services. These are each\nrated on a five-point scale: ``(1) never; (2) only descriptions; (3)\nonline request; (4) partial; (5) full processing'' for the first item\nand'' (1) never to (5) many new systems'', for the second item.\nSuch quantification is possible because the introduction of new\ninnovative systems on public sector Web sites is growing, for\nexample, Docket Access, View Property Assessments, and\nRequest for Proposals of Philadelphia (phila.gov) and Citizen\nAssessment Systems, Citizen Satisfaction Monitor, Online\nProcedures Enhancement (OPEN) System of Seoul\n(metro.seoul.kr).\nThese measures focus mainly on components visible to users and\ndo not take into account back-office components like integration.\nVan der Merwe and Bekker [26] classify website evaluation\ncriteria in 5 groups as shown in table1. 
Many of their criteria\nseem to be inspired by their e-Commerce orientation but many of\nthe criteria will be applicable to e-Government as well.\n\nTable 1: Web site evaluation criteria groups according to Merwe and Bekker [26]\nPhase\nCriteria group\nThis criteria group evaluates/measures\nInterface\nGraphic design principles\nThe effective use of color, text, backgrounds, and other general graphic\ndesign principles\n\nGraphics and multimedia\nThe effectiveness of the graphics and multimedia used on the site\n\nStyle and text\nWhether or not the text is concise and relevant, and the style good\n\nFlexibility and compatibility\nThe degree to which the interface is designed to handle exceptions, for\nexample, text-only versions of pages\nNavigation\nLogical structure\nThe organization and menu system of the site\n\nEase of use\nThe ease of navigation to find the pages that the user is looking for\n\nSearch engine\nThe search engine's ability to find the correct pages easily and provide\nclear descriptions of the search results\n\nNavigational necessities\nOther important aspects of navigation like the absence of broken links\nand ``under-construction'' pages\nContent\nProduct/service-related information\nWhether or not the products/services are described precisely and\nthoroughly\n\nAgency and contact\nInformation\nWhether or not it is easy to find information on the company, its\nemployees and its principals\n\nInformation quality\nThe currency and relevance of the content on the site\n\nInteractivity\nHow much input the user has on the content displayed on the site\nReliability\nStored customer profile\nThe registering process and how the company uses the stored customer\nprofile\n\nOrder process\nThe effectiveness and ease of use of the online order process\n\nAfter-order to order receipt\nThe company's actions from order placement until the order is delivered\n\nCustomer service\nHow the company communicates and helps its online customers\nTechnical\nSpeed\nDifferent aspects of the loading speed of the site\n\nSecurity\nSecurity systems and the ways used by the company to protect\ncustomers' privacy on the site\n\nSoftware and database\nFlexibility in terms of different software used. Also looks at the data software and data\ncommunication systems used on the site\n\nSystem design\nThe correct functioning of the site and how well it integrates with\ninternal and external systems\nMEASUREMENTS INSTRUMENTS FOUND IN PRACTICE\nHazlett and Hill discuss [19] discuss the current level of\ngovernment measurement. Huang and Chao (2001) note that\nwhile the development and management of web sites are\nbecoming essential elements of public sector management, little is\nknown about their effectiveness. Indeed, Kaylor et al. (2001) note\nthat research into the effectiveness of e-Government efforts tends\nto concentrate on content analysis or measures of usage. These\nmay not be wholly appropriate metrics with which to gauge\nsuccess. Aspects of service relevant in this context may,\naccording to Voss (2000) include: consumer perceptions of\nsecurity and levels of trust; response times (bearing in mind that\nInternet consumers may well be accustomed to quick responses);\nnavigability of the Web site; download time; fulfillment of service\npromised; timely updating of information; site effectiveness and\nfunctionality. 
Reinforcing a point made above, Voss (2000) takes\nthe view that e-service channels should be regarded as providing\n\n484\nsupport for front-line service delivery and not replacements for\nfront-line service. However, such channels do enable change in\nthe nature of front-line service delivery and of human\ninvolvement in service.\n4.1 OVERHEID.NL\nThe Dutch Government has recently published \"Overheid.nl\nMonitor 2003\", its fifth annual e-government progress report.\nWhile highlighting a number of encouraging developments, the\nreport concludes that much remains to be done in areas such as\nuser-friendliness, transactional services and e-democracy.\nOverheid.nl focused on all government agencies, and mentions\nthe following agencies explicitly.\n\nMunicipalities\n\nMinistries\n\nProvinces\n\nWater boards\nA screenshot of the online monitor of the measurement of\nmunicipalities is shown in the figure below.\n\nFigure 2: Screenshot of Overheid.nl\n\n\"Overheid.nl Monitor 2003: developments in electronic\ngovernment\" is based on a periodical large-scale survey of\ngovernment websites, which was carried out in October 2003 by\nAdvies Overheid.nl on behalf of the Dutch Ministry of the Interior\nand Kingdom Relations. The survey assessed 1,124 government\nwebsites according to five criteria: user-friendliness, general\ninformation, government information, government services, and\nscope for participation (interactive policy making). The website\nuser friendliness measurement etc, is very thorough in this survey.\nThe e-service measurement is less well defined. The services\ninvestigated for the survey are listed clearly for several layers of\ngovernment but they seem to be limited to the so called `Dutch\nservice product catalogue', set of typical municipal products and\nservices.\nFigure 3: Sample listing of services measured and ways\nof accessing them investigated\nAdditionally, researchers measured the e-mail response time of\ngovernment websites and assessed user satisfaction via a survey\nof 3,000 users. The report states that, although e-government\nservices are developing on schedule and are becoming more\nsophisticated, there is still much room for improvement.\nOn the positive side, the report finds that:\nE-government is developing on schedule. The 2003 target of\nproviding 35% of government services electronically was\nwidely achieved, with 39% of services for citizens and 44% of\nservices for businesses e-enabled by October 2003.\nHowever, the report also identifies a number of shortcomings and\nareas where improvement is needed:\nPractically no full electronic transactions are available. In this\nrespect, the report considers that development of such services\nwill depend both on future solutions for user identification and\nauthentication and on back-office re-engineering.\nAlthough the use of e-services is growing, the development of e-government\nis still mainly supply-driven and the penetration of\ngovernment websites remains unknown. 
\"Only if we assess the\npenetration of government websites and the level of their use can\nwe take a truly demand-driven approach\", the report says.\nThe items related to municipalities are connected to the functionalities\nwithin an implemented product and services catalogue.\n\nD1\nIs a product catalogue or another form of systematically\noffered services been used?\n\nD2\n-if so, does at least contain 150 products?\nD3\n-if so, does is at least contain 50 forms?\ni.e.: can one request for or comment on at least 50 product\nby using a form that is contained in the product catalogue\nwhich can be filled in, printed and send in by the users ...?\nD4\n-if so, can these be accessed per life event?\nD5\n-if so, can these be accessed per theme?\nD6\n-if so, can these be accessed by using a lexicon (a-z)?\nD7\n-if so, does it contain a specific search function? (fill in the\nsearch term)\nBesides this, four commonly municipal products are mentioned that can\nbe supplied more or less in a digital form.\n(choices: no info; info; down loadable form; up loadable\nform; transaction)\nD8a\nRequest for building permission\nD8b\nRequest for Cutting trees permission\nD8c\nRequest for extract from GBA\nD8d\nReport of change of address / removal (no transaction\npossible)\n\n485\nUsers' satisfaction with e-government services is still\nsignificantly lower than with services delivered through\ntraditional channels.\nE-democracy tools and citizen engagement through electronic\nmeans remain embryonic. According to the report, this is due not\nonly to a lack of demand but also to a poorly organized supply\nside, with inadequate moderation, unappealing consultation\nsubjects and missing follow-up.\nIn addition to identifying progress accomplished and remaining\nissues, the report makes a number of recommendations that\nshould help reach the objectives of the Andere Overheid (\"Other\nGovernment\") action programme, presented in December 2003.\nSuch recommendations include the following:\nE-government services must become more user-friendly and\neasier to find. Metadata standards should be defined to make\nthem easier to find through search engines. FAQs and lists of\nthe most searched terms and documents should also be made\nmore widely available.\nE-government services must be permanently improved: even once\nall government websites are fully functional, government should\nstill constantly aim to improve e-government and consult target\ngroups about new services they might require, says the report.\nE-government must be further developed through service\nintegration across government bodies, which is currently still in\nits infancy. According to the report, the Dutch supply-driven\napproach has so far sought solutions within the limits and\nadministrative competencies of single bodies.\nEmphasis must be shifted from the breadth of services to their\ndepth. Rather than aiming to run every electronic product and\nservice conceivable, government bodies should aim to integrate\nservices as deeply as possible, especially those in frequent and\npopular demand, the report says. 
This implies developing seamless links from the front to the back office and fostering a more customer-minded culture.
From the perspective of the policymakers we may conclude that the benchmark takes into account individual agencies and their websites, the number of services and, to a certain degree, also service levels, but that the aim of horizontal integration is not measured by www.overheid.nl.
4.2 WEBDAM.NL
In the year 2000 the Ministry of the Interior decided that all municipalities should be online by 2002 and that 25% of service provisioning should be supported by websites. In order to help municipalities achieve this, the Webdam project was started in March 2000, aiming at stimulating municipalities to develop websites. These websites should deliver the better and improved service provisioning over the Internet that citizens expected.
One of the activities in the Webdam project has been the development of a website that allowed municipalities to share knowledge and make better use of the Internet. To further stimulate municipalities, Webdam started a Top 50 of municipal websites, using criteria such as design, content, service level and communication. Assessment and ranking of each municipality is performed by representatives coming from three groups: civil servants, citizens and experts.
Webdam uses a panel to judge the public agencies' web pages. The stakeholders include the following groups:
1. Webdam employees (experts)
2. Public servants of the municipality under study
3. Citizens
These stakeholders judge the web pages on five main criteria groups:
1. Layout
2. Content
3. Communication
4. Services
5. Plus/minus remarks
Each group has a minimum and maximum score, and the total is aggregated to determine a ranking. It is not made explicit who determines the final score.
Figure 4: Screenshot of Webdam
Webdam focuses exclusively on the front office, the aspects directly visible to the citizens using the web pages. No connection is made to the size of the municipality, the number of citizens or other aspects influencing the available resources of the municipality.
4.3 Accenture e-gov monitor
The yearly research conducted by Accenture [1][2][3] has a profound influence on governments. An increase or decrease in ranking in this report results in discussions about the future of e-government.
Accenture researchers in each of the 23 selected countries described the typical services that a national government should provide using the Internet in order to fulfill the obvious needs of citizens and businesses. They accessed and assessed the websites of national government agencies to determine the quality and maturity of services, and the level at which business can be conducted electronically with government.
In total, 169 national government services across nine major service sectors were investigated by Accenture, during a study lasting only two weeks (!), in 2002 using the web in 23 countries. The nine service sectors researched were Human Services, Justice & Public Safety, Revenue, Defence, Education, Transport & Motor Vehicles, Regulation & Democracy, Procurement and Postal. The main "indicator" of the e-government level chosen by Accenture is what they call service maturity. Service maturity is an indication of the level to which a government has developed an online presence.
It takes into account the numbers of services for\nwhich national governments are responsible that are available\nonline (Service Maturity Breadth), and the level of completeness\nwith which each service is offered (Service Maturity Depth).\nService Maturity Overall is the product of Service Maturity\nBreadth and Service Maturity Depth.\nService maturity is decomposed into the following aspects:\nPublish - Passive/Passive Relationship. The user does not\ncommunicate electronically with the government agency and the\nagency does not communicate (other than through what is\npublished on the website) with the user.\nInteract - Active/Passive Interaction. The user must be able to\ncommunicate electronically with the government agency, but the\nagency does not necessarily communicate with the user.\nTransact - Active/Active Interaction. The user must be able to\ncommunicate electronically with the government agency, and the\nagency must be able to respond electronically to the user. the\ndegree to which the services are organized around the citizen, as\nopposed to around internal government structures.\nIn 2004 Accenture again investigated 12 service sectors and 206\nservices in yet again two weeks. They were: agriculture; defence;\ne-Democracy; education; human services; immigration, justice\nand security; postal; procurement; regulation; participation;\nrevenue and customs; and transport.\nLittle is said by Accenture about the metrics involved. They have\nperformed the survey for five years now and the perspective\nchosen is that of a citizen accessing a government service using\non line means. For this article it is interesting to note the final\nremarks in the 2004 report: governments are at the end of what\ncan be achieved with traditional methods; they are developing\nstrategies to cope with horizontal integration between agencies.\n4.4 Regional innovation scorecard (ris)\nOne of the ambitions of the EU is to become the most competitive\nand dynamic knowledge-based economy of the world (Lissabon\nagenda). To measure this, the European Regional Innovation\nScorecard (RIS), a scorecard used for monitoring and comparing\nthe innovation in regions, has been developed [13]. The scorecard\nis seen as acknowledged instrument to compare regions in their\nability to foster economic growth. The Largest province of The\nNetherlands, The Province of South Holland, explicitly states on\ntheir website: \"The province of Noord-Brabant ranks third on the\nEuropean regional innovation scoreboard. Zuid-Holland will have\nto make a considerable effort in the coming years if it is to reach\nthe top-20\". They also state that \"the scoreboard is regarded as\nextremely relevant because it is generally accepted as the a\nleading European benchmark for innovation dynamics\" [30].\nSurprisingly the same regional authority does not pay any\nattention to the contribution of their own eGovernment services to\nthat level of innovation and economic growth. There is no\nmentioning of eGovernment or government services or anything\nclose to it in the whole policy documents related to innovation at\nthis Province. This becomes less surprising when the indicators of\nthe RIS and the EIS are viewed more closely. 
The RIS uses the following indicators:
(1) population with tertiary education,
(2) lifelong learning,
(3) employment in medium/high-tech manufacturing,
(4) employment in high-tech services,
(5) public R&D expenditures,
(6) business R&D expenditures,
(7) EPO high-tech patent applications,
(8) all EPO patent applications, and
(9)-(13) five indicators using unpublished CIS-2 data: the share of innovative enterprises in manufacturing and in services, innovation expenditures as a percentage of turnover in both manufacturing and services, and the share of sales of new-to-the-firm products in manufacturing.
These indicators are based on Eurostat exercises [13]. From this analysis it becomes obvious that this Province considers innovation as a development process "outside" the government and its own performance. The basic assumption made by the Dutch Provinces is that governments can stimulate innovation in the economy without being part of the regional economy. The main political driver for efficient eGovernment is economic growth and jobs, and the main driver for economic growth is considered to be innovation. Yet the metrics of the benchmarks do not coincide.
EVALUATION
The evaluation instruments described before are just examples of the overwhelming number of measurement instruments currently in use. Table 2 summarizes the described instruments. Although we focused on a limited number of instruments, these instruments are very likely representative of the other measurement instruments. The following observations can be made about the measurement instruments:
- Most instruments focus on the performance of a single agency;
- Measurements focus on the front office, which is directly visible, and not on the business processes and information systems in the back office. This is surprising, as these aspects influence the performance to a large extent;
- There is a short-term focus: not many indicators are aimed at measuring long-term performance;
- Interpretation of measures is difficult, as all existing measurement instruments lack a framework depicting the relationships between the indicators and the use of resources. Different stakeholders may come to different interpretations of the status of e-government.
From a theoretical point of view we conclude, after examining many other existing instruments, that these instruments lack a clear connection to any theoretical framework on e-Government and a well-described set of preferences that can be used for comparison. Even if we consider that these measurement instruments were developed independently of each other, it is astonishing that they show so little overlap, both in features and in measurement method.
Governmental performance depends on a complex of interlinked processes and the dependencies between these processes and the stakeholders involved, including civil servants and their departments. The legal and political context, which is very dominant in a governmental setting, furthermore increases complexity. Sometimes obvious improvements in a service provision chain may be blocked because data may not be shared due to data protection regulations. The system of checks and balances that is fundamental to governments' functioning and essential for maintaining citizens' trust in the government can be troublesome if we want to redesign inefficient processes.
A combination of factors, such as the volume of regulations and the lack of understanding of their original aims, the lack of formal process models that could help to gain insight into the dependencies between processes and explain how the legal requirements are translated into process requirements, and the lack of formally described legal models, does not really help if we want to explicitly formulate the criteria that determine e-Government success. These criteria, which determine e-Government success (or failure), are exactly the ones that should be captured in our measurement instruments.
But even if we had a better theory on the performance of e-Government processes and well-founded measurement instruments, interpretation of the outcomes of applying those instruments would be problematic, especially within the political context in which these instruments are generally used. Bruin [10] showed that the greater the distance between the interpreters and the providers of information, the more difficult it is to interpret the information.
Politicians do not always steer on rational grounds, but suppose they would; their control system (or policy-making process, see Figure 5) would then include a comparison and control function. We stated before that comparison is based upon a set of preferences. Public services can consequently be evaluated using competing norms and values [18]. A court, for example, might be asked to deal with cases as efficiently as possible, maximizing the number of cases dealt with within a certain time period, while on the other hand each sentence should be carefully made, founded on the right arguments and understandable. Performance measurement instruments that lack an explicit set of preferences (or norms for comparison) might give a wrong view of reality if looked at with other preferences in mind.
CONCLUSIONS AND FURTHER RESEARCH
We investigated the current e-government measurement practice in the Netherlands and examined some theoretical work in this field. Our analysis shows a messy picture of the measurement of e-government. Many measurement instruments take too simplistic a view and focus on measuring what is easy to measure. Many of the instruments focus on measuring the visible front of e-government and ignore the performance of the cross-agency business processes. None of the instruments focuses on measuring multi-service organizations. The instruments focus on one (type of) agency and do not provide an overall picture.
Interpretation of measures is difficult, as all existing measurement instruments lack a framework depicting the relationships between the indicators and the use of resources. The different measures may fit the aims of the owners of the e-governmental services; however, due to conflicting aims and priorities, little agreement exists on a uniform set of measures needed for comparison of e-government development. Different stakeholders may come to different interpretations of the status of e-government. As such, the existing instruments provide a picture of the status of e-government that may not be useful as a surrogate for deducing e-government performance.
Traditional methods of measuring e-government impact and resource usage fall short of the richness of data required for the effective evaluation of e-government strategies. A good theoretical framework for measuring the impact of e-government and the use of resources is still lacking.
Due to this fact, and the many reports that are produced on e-Government developments based on different measurement instruments that use different criteria, we can hardly learn from existing practices.

Table 2: Summary of measurement instruments studied
(columns: measurement instrument; focus; update frequency; source data; characteristics of the method)
- Overheid.nl; all public agency websites; yearly; experts; ranking based on web site features.
- Webdam; municipality websites; monthly (continuous); expert panel consisting of 3 types of representatives: 1) civil servants, 2) citizens and 3) experts; ranking based on web site features.
- Accenture; comparison of countries; yearly; Accenture researchers, based on judgment of selected services; ranking based on an inventory of services.
- Regional innovation scorecard; European regions; update frequency not stated; Eurostat; ranking based on economic quantitative indicators.

It would be beneficial for both citizens and governments if such a theoretical framework were developed and a more or less standardized measurement instrument became available. This would allow governments and designers to compare different e-government approaches and learn from them, and learning from our experiences certainly fits within the ambition of the European Union to become the most competitive and dynamic knowledge-based economy of the world.
REFERENCES
[1] Accenture (2001). Governments Closing Gap Between Political Rhetoric and eGovernment Reality, http://www.accenture.com/xdoc/en/industries/government/2001FullReport.pdf
[2] Accenture (2002). eGovernment Leadership - Realizing the Vision, http://www.accenture.com/xd/xd.asp?it=enWeb&xd=industries/government/gove_welcome.xml
[3] Accenture (2003). eGovernment Leadership: Engaging the Customer, http://www.accenture.com/xd/xd.asp?it=enweb&xd=industries/government/gove_capa_egov.xml
[4] Armour, F.J., Kaisler, S.H. and Liu, S.Y. (1999). A big-picture look at Enterprise Architectures, IEEE IT Professional, 1(1): 35-42.
[5] Bigné, E., Moliner, M.A., and Sánchez, J. (2003). Perceived Quality and satisfaction in multiservice organizations. The case of Spanish public services. Journal of Services Marketing, 17(4), pp. 420-442.
[6] Boer, A., van Engers, T. and Winkels, R. (2003). Using Ontologies for Comparing and Harmonizing Legislation. In Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL), Edinburgh (UK), ACM Press.
[7] Boer, A., Winkels, R., Hoekstra, R. and van Engers, T.M. (2003). Knowledge Management for Legislative Drafting in an International Setting. In D. Bourcier, editor, Legal Knowledge and Information Systems. Jurix 2003: The Sixteenth Annual Conference, pages 91-100, Amsterdam, IOS Press.
[8] Bons, R., Lee, R.M. and Tan, Y.-H. (1999). A Formal Specification of Automated Auditing of Trustworthy Trade Procedures for Open Electronic Commerce. Hawaii International Conference on System Sciences (HICSS).
[9] Buckland, M. and Gey, F. (1994). The relationship between recall and precision. Journal of the American Society for Information Science, 45(1): 12-19.
[10] Bruin, H. de (2002). Performance measurement in the public sector: strategies to cope with the risks of performance measurement. The International Journal of Public Sector Management, vol. 15, no. 7, pp. 578-594.
[11] Checkland, P. (1981). Systems Thinking, Systems Practice. Wiley, Chichester.
[12] Coase, R. (1937). The Nature of the Firm. Economica, 4: 386-405.
[13] European Commission (2002).
2003 European Innovation\nScoreboard: European Trend Chart on Innovation.\nInnovation/SMEs Programme.\n[14] European Commission (2004). Green paper on Public\nprivate partnerships and community law on public contracts\nand concessions, European Commission, no. 327.\n[15] Fagan, J.C. & Fagan, B. (2004). An accessibility study of\nstate legislative web sites. Government Information\nQuarterly, 21: 65-85.\n[16] Galliers, R.D. (1992). Information Systems Research. Issues,\nmethods and practical guidelines. Alfred Waller, Fawley,\nEngland.\n[17] Green, S.H.B. (1998), Cyberspace winners: how they did it,\nBusiness Week, 22 June, pp. 154-60.\n[18] Groot, H., de and R. Goudriaan (1991). De productiviteit van\nde overheid: over prestaties, personeel en uitgaven in de\npublieke sector. Academic Service, Schoonhoven, The\nNetherlands.\n\n[19] Hazlett, S.A. and Hill, F. (2003). E-government: the realities\nof using IT to transform the public sector. Managing Service\nQuality, Vol. 13, No. 6, pp. 445-452.\n[20] Janssen, M.F.W.H.A. (2001). Designing Electronic\nIntermediaries. Doctoral Dissertation, Delft University of\nTechnology.\n[21] Janssen, Marijn & Davidse, Anouk (2004). Evaluation of a\nPerformance-based Accountability System. The 4th\nEuropean Conference on E-government (ECEG), Dublin\nCastle, Dublin, Ireland, 17-18 June 2004\n[22] Jensen, M. and Meckling, W. (1976). Theory of the Firm:\nManagerial behavior, agency costs, and capital structure,\nJournal of Financial Economics, 5: 305-360.\n[23] Layne, KJL & Lee, J. (2001) \"Developing fully functional E-government\n: A four stage model\", Government Information\nQuarterly, Vol 18, No. 2, pp 122-136.\n[24] Lee, J.K. (2003). A model for monitoring public sector web\nsite strategy. Internet Research. Electronic networking\napplication and policy. Vol. 13, no. 4, pp259-266.\n[25] Malone, T.W. & Crowston, K. (1994). The Interdisciplinary\nStudy of Coordination. ACM Computing Surveys, vol. 26,\nno. 2, pp. 87-119.\n[26] Merwe, R. van der, and Bekker,J. (2003). A framework and\nmethodology for evaluating e-commerce web sites. Internet\nResearch: electronic Networking Applications and Policy.\nVol. 13, No.5, pp. 330-341.\n[27] Moon, M.J. (2002). The Evolution of E-Government Among\nMunicipalities; Rhetoric or reality? Public Administration\nReview. Vol. 62, no. 4, pp. 424-433.\n\n[28] Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988).\nSERVQUAL: a multiple item scale for measuring consumer\nperceptions of service quality, Journal of Retailing, Vol. 64,\npp. 12-40.\n[29] Peters, Rob and Wilson, Frank (2003). Natural Language\naccess to regional information sources: the PortofRotterdam\ncase: 4th International Workshop on Image Analysis for\nMultimedia Interactive Services, WIAMIS 2003.\n\n489\n[30] Provincial council (2003) Innovatiebrief kenniseconomie\nZuid-Holland \"Kennismaken met Kenniszaken,\nhttp://www.zuid-holland.nl/images/126_107822.pdf\n, page\n14.\n[31] Rohleder, S.J. et al. (2004). eGovernment Leadership: High\nPerformance, Maximum Value. Fifth Annual Accenture\neGovernment Study. Accenture Government Executive\nStudies,\nhttp://www.accenture.com/xdoc/en/industries/government/go\nve_egov_value.pdf\n\n[32] Simeon, R. (1999), ``Evaluating domestic and international\nWeb site strategies'',\nInternet Research, Vol. 9 No. 4, pp.\n297-308.\n\n[33] Quinn, R.E. and Rohrbaugh, J.W. (1983). A Spatial Model of\nEffectiveness criteria: Towards a competing values approach\nto organizational effectiveness. 
Management Science 29: 363-377.
[34] Van Engers, T.M. (2004). Legal Engineering: A Knowledge Engineering Approach To Improving Legal Quality, in eGovernment and eDemocracy: Progress and Challenges, Padget, J., Neira, R., De León, J.L., Editors, Instituto Politécnico Nacional, Centro de Investigación en Computación, ISBN 970-36-0152-9, pp. 189-206.
[35] Williamson, O.E. (1975). Markets and Hierarchies: Analysis and Antitrust Implications. A study in the economics of internal organization. Macmillan, New York.
[36] Yin, R.K. (1989). Case Study Research: Design and methods. Sage Publications, Newbury Park, California.", "keywords": "E-government;law;benchmark;interoperability;evaluation;measurement;architectures;e-government;business process;public administration"} {"name": "137", "title": "Minimizing Average Flow Time on Related Machines", "abstract": "We give the first on-line poly-logarithmic competitive algorithm for minimizing average flow time with preemption on related machines, i.e., when machines can have different speeds. This also yields the first poly-logarithmic polynomial time approximation algorithm for this problem. More specifically, we give an O(log P log S)-competitive algorithm, where P is the ratio of the biggest and the smallest processing time of a job, and S is the ratio of the highest and the smallest speed of a machine. Our algorithm also has the nice property that it is non-migratory. The scheduling algorithm is based on the concept of making jobs wait for a long enough time before scheduling them on slow machines.", "fulltext": "INTRODUCTION
We consider the problem of scheduling jobs that arrive over time in multiprocessor environments. This is a fundamental scheduling problem and has many applications, e.g., servicing requests in web servers. The goal of a scheduling algorithm is to process jobs on the machines so that some measure of performance is optimized. Perhaps the most natural measure is the average flow time of the jobs. Flow time of a job is defined as the difference of its completion time and its release time, i.e., the time it spends in the system. (Work done as part of the "Approximation Algorithms" partner group of MPI-Informatik, Germany. Supported by an IBM faculty development award and a travel grant from the Max Planck Society.)
This problem has received considerable attention in the recent past [1, 2, 10]. All of these works make the assumption that the machines are identical, i.e., they have the same speed. But it is very natural to expect that in a heterogeneous processing environment different machines will have different processing power, and hence different speeds. In this paper, we consider the problem of scheduling jobs on machines with different speeds, which is also referred to as related machines in the scheduling literature. We allow for jobs to be preempted. Indeed, the problem turns out to be intractable if we do not allow preemption. Kellerer et al.
[9] showed that the problem of minimizing average flow time without preemption has no online algorithm with o(n) competitive ratio even on a single machine. They also showed that it is hard to get a polynomial time O(n^(1/2-ε))-approximation algorithm for this problem. So preemption is a standard, and much needed, assumption when minimizing flow time. In the standard notation of Graham et al. [7], we consider the problem Q|r_j, pmtn| Σ_j F_j. We give the first poly-logarithmic competitive algorithm for minimizing average flow time on related machines. More specifically, we give an O(log^2 P log S)-competitive algorithm, where P is the ratio of the biggest and the smallest processing time of a job, and S is the ratio of the highest and the smallest speed of a machine. This is also the first polynomial time poly-logarithmic approximation algorithm for this problem. Despite its similarity to the special case when machines are identical, this problem is more difficult since we also have to worry about the processing times of jobs. Our algorithm is also non-migratory, i.e., it processes a job on only one machine. This is a desirable feature in many applications because moving jobs across machines may have many overheads.
Related Work. The problem of minimizing average flow time (with preemption) on identical parallel machines has received much attention in the past few years. Leonardi and Raz [10] showed that the Shortest Remaining Processing Time (SRPT) algorithm has a competitive ratio of O(log(min(n/m, P))), where n is the number of jobs, and m is the number of machines. A matching lower bound on the competitive ratio of any on-line (randomized) algorithm for this problem was also shown by the same authors. Awerbuch et al. [2] gave a non-migratory version of SRPT with a competitive ratio of O(log(min(n, P))). Chekuri et al. [5] gave a non-migratory algorithm with a competitive ratio of O(log(min(n/m, P))). One of the merits of their algorithm was a much simpler analysis of the competitive ratio. Instead of preferring jobs according to their remaining processing times, their algorithm divides jobs into classes when they arrive. A job goes to class k if its processing time is between 2^(k-1) and 2^k. The scheduling algorithm now prefers jobs of smaller class irrespective of the remaining processing time. We also use this notion of classes of jobs in our algorithm. Azar and Avrahami [1] gave a non-migratory algorithm with immediate dispatch, i.e., a job is sent to a machine as soon as it arrives. Their algorithm tries to balance the load assigned to each machine for each class of jobs. Their algorithm also has a competitive ratio of O(log(min(n/m, P))). It is interesting to note that these are also the best known results in the off-line setting of this problem.
Kalyanasundaram and Pruhs [8] introduced the resource augmentation model, where the algorithm is allowed extra resources when compared with the optimal off-line algorithm. These extra resources can be either extra machines or extra speed. For minimizing average flow time on identical parallel machines, Phillips et al. [11] showed that we can get an optimal algorithm if we are given twice the speed as compared to the optimal algorithm. In the case of single machine scheduling, Becchetti et al. [4] showed that we can get O(1/ε)-competitive algorithms if we are given (1 + ε) times more resources.
Bansal and Pruhs[3] extended this\nresult to a variety of natural scheduling algorithms and to\nL\np\nnorms of flow times of jobs as well. In case of identical\nparallel machines, Chekuri et. al.[6] gave simple scheduling\nalgorithms which are O(1/\n3\n)-competitive with (1 + )\nresource augmentation.\nOur Techniques A natural algorithm to try here would\nbe SRPT. We feel that it will be very difficult to analyze\nthis algorithm in case of related machines. Further SRPT is\nmigratory. Non-migratory versions of SRPT can be shown\nto have bad competitive ratios. To illustrate the ideas involved\n, consider the following example. There is one fast\nmachine, and plenty of slow machines. Suppose many jobs\narrive at the same time. If we distribute these jobs to all the\navailable machines, then their processing times will be very\nhigh. So at each time step, we need to be selective about\nwhich machines we shall use for processing.\nIdeas based\non distributing jobs in proportion to speeds of machines as\nused by Azar and Avrahami[1] can also be shown to have\nproblems.\nOur idea of selecting machines is the following. A job is\nassigned to a machine only if it has waited for time which\nis proportional to its processing time on this machine. The\nintuition should be clear if a job is going to a slow machine,\nthen it can afford to wait longer before it is sent to the\nmachine. Hopefully in this waiting period, a faster machine\nmight become free in which case we shall be able to process\nit in smaller time. We also use the notion of class of jobs as\nintroduced by Chekuri et. al.[5] which allows machines to\nhave a preference ordering over jobs. We feel that this idea\nof delaying jobs even if a machine is available is new.\nAs mentioned earlier, the first challenge is to bound the\nprocessing time of our algorithm. In fact a bulk of our paper\nis about this. The basic idea used is the following if a\njob is sent to a very slow machine, then it must have waited\nlong. But then most of this time, our algorithm would have\nkept the fast machines busy. Since we are keeping fast machines\nbusy, the optimum also can not do much better. But\nconverting this idea into a proof requires several technical\nsteps.\nThe second step is of course to bound the flow time of\nthe jobs. It is easy to see that the total flow time of the\njobs in a schedule is same as the sum over all time t of\nthe number of waiting jobs at time t in the schedule. So it\nwould be enough if we show that for any time t, the number\nof jobs which are waiting in our schedule is close to that\nin the optimal schedule. Chekuri et. al. [5] argue this in\nthe following manner. Consider a time t. They show that\nthere is a time t < t such that the number of waiting jobs\nof a certain class k or less in both the optimal and their\nschedule is about the same (this is not completely accurate,\nbut captures the main idea). Further they show that t is\nsuch that all machines are busy processing jobs of class k or\nless during (t , t). So it follows that the number of waiting\njobs of this class or less at time t are about the same in both\nthe schedules. We can not use this idea because we would\nnever be able to keep all machines busy (some machines can\nbe very slow). So we have to define a sequence of time steps\nlike t for each time and make clever use of geometric scaling\nto show that the flow time is bounded.\nPRELIMINARIES\nWe consider the on-line problem of minimizing total flow\ntime for related machines. 
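As a small, concrete illustration of the objective and of the counting identity used in the discussion above (the total flow time of a schedule equals the sum, over time, of the number of released but unfinished jobs), the following sketch checks the identity on a toy instance. It is only an illustration: the job data and the Python representation are made up here and are not taken from the paper.

```python
# Toy check: total flow time  ==  sum over unit time steps of the number of
# jobs that have been released but are not yet finished.  Times are integers,
# as in the paper's model; the data below is illustrative only.

jobs = [
    {"release": 0, "completion": 3},   # flow time 3
    {"release": 1, "completion": 6},   # flow time 5
    {"release": 4, "completion": 9},   # flow time 5
]

total_flow_time = sum(j["completion"] - j["release"] for j in jobs)

horizon = max(j["completion"] for j in jobs)
unfinished_count_sum = 0
for t in range(horizon):
    # a job contributes to time step [t, t+1) if released by t and not yet completed
    unfinished_count_sum += sum(1 for j in jobs if j["release"] <= t < j["completion"])

assert total_flow_time == unfinished_count_sum == 13
print(total_flow_time, unfinished_count_sum)
```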
Each job j has a processing requirement of p_j and a release date of r_j. There are m machines, numbered from 1 to m. The machines can have different speeds, and the processing time of a job j on a machine is p_j divided by the speed of the machine. The slowness of a machine is the reciprocal of its speed. It will be easier to deal with slowness, and so we shall use slowness instead of speed in the rest of the discussion. Let s_i denote the slowness of machine i. So the time taken by job j to process on machine i is p_j · s_i. Assume that the machines have been numbered so that s_1 ≤ s_2 ≤ ... ≤ s_m. We shall assume without loss of generality that processing times, release dates, and slownesses are integers. We shall use the term volume of a set of jobs to denote their processing time on a unit speed machine.
Let A be a scheduling algorithm. The completion time of a job j in A is denoted by C^A_j. The flow time of j in A is defined as F^A_j = C^A_j - r_j. Our goal is to find an on-line scheduling algorithm which minimizes the total flow time of jobs. Let O denote the optimal off-line scheduling algorithm.
We now develop some notation. Let P denote the ratio of the largest to the smallest processing time of the jobs, and S the ratio of the largest to the smallest slowness of the machines. For ease of notation, we assume that the smallest processing requirement of any job is 1, and the smallest slowness of a machine is 1. Let α and β be suitably chosen large enough constants. We divide the jobs and the machines into classes: a job j is said to be in class k if p_j ∈ [α^(k-1), α^k), and a machine i is said to be in class l if s_i ∈ [β^(l-1), β^l). Note that there are O(log P) classes of jobs and O(log S) classes of machines. Given a schedule A, we say that a job j is active at time t in A if r_j ≤ t but j has not finished processing by time t in A.
SCHEDULING ALGORITHM
We now describe the scheduling algorithm. The underlying idea of the algorithm is the following: if we send a job j to machine i, we make sure that it waits for at least p_j · s_i units of time (which is its processing time on machine i). Intuitively, the extra waiting time can be charged to its processing time. Of course, we still need to make sure that the processing time does not blow up in this process.
The algorithm maintains a central pool of jobs. When a new job gets released, it first goes to the central pool and waits there to get assigned to a machine. Let W(t) denote the set of jobs in the central pool at time t. Our algorithm will assign each job to a unique machine: if a job j gets assigned to a machine i, then j will get processed by machine i only. Let A_i(t) be the set of active jobs at time t which have been assigned to machine i. We shall maintain the invariant that A_i(t) contains at most one job of each class, so |A_i(t)| = O(log P).
We say that a job j ∈ W(t) of class k is mature for a machine i of class l at time t if it has waited for at least α^k · β^l time in the central pool, i.e., t - r_j ≥ α^k · β^l. For any time t, we define a total order ≺ on the jobs in W(t) as follows: j ≺ j' if either (i) class(j) < class(j'), or (ii) class(j) = class(j') and r_j > r_j' (in case class(j) = class(j') and r_j = r_j' we can order them arbitrarily). The actual details of the algorithm are described below, after a short illustrative sketch of these definitions.
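The three definitions above (geometric job and machine classes, the maturity test, and the total order ≺) can be written down directly. The following is a minimal illustrative sketch in Python; the function names and the concrete values of the constants α and β (written ALPHA and BETA) are chosen here for illustration only and are not prescribed by the paper.

```python
ALPHA, BETA = 4, 4   # "suitably chosen large enough constants"; the values here are illustrative

def job_class(p):
    """Class k of a job with processing requirement p >= 1, i.e. p in [ALPHA^(k-1), ALPHA^k)."""
    k = 1
    while p >= ALPHA ** k:
        k += 1
    return k

def machine_class(s):
    """Class l of a machine with slowness s >= 1, i.e. s in [BETA^(l-1), BETA^l)."""
    l = 1
    while s >= BETA ** l:
        l += 1
    return l

def is_mature(job, machine_cls, t):
    """A class-k job is mature for a class-l machine at time t once t - r_j >= ALPHA^k * BETA^l."""
    k = job_class(job["p"])
    return t - job["r"] >= ALPHA ** k * BETA ** machine_cls

def prefer_key(job):
    """Sort key realizing the order: smaller class first; within a class, later release date first."""
    return (job_class(job["p"]), -job["r"])

# Example: among mature jobs of equal class, the one released later is preferred.
pool = [{"p": 2, "r": 0}, {"p": 30, "r": 1}, {"p": 3, "r": 5}]
mature = [j for j in pool if is_mature(j, machine_cls=1, t=40)]
print(min(mature, key=prefer_key))   # -> {'p': 3, 'r': 5}
```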
Initially, at time 0, A_i(0) is empty for each machine. At each time t, the algorithm considers machines in the order 1, 2, ..., m (recall that the machines have been arranged in ascending order of their slowness). Let us describe the algorithm when it considers machine i. Let M_i(t) be the jobs in W(t) which are mature for machine i at time t. Let j ∈ M_i(t) be the smallest job according to the total order ≺. If class(j) < class(j') for all jobs j' ∈ A_i(t), then we assign j to machine i (i.e., we delete j from W(t) and add it to A_i(t)).
Once the algorithm has considered all the machines at time t, each machine i processes the job of smallest class in A_i(t) at time t. This completes the description of our algorithm. It is also easy to see that the machines need to perform the above steps for only a polynomial number of time steps t (i.e., when a job finishes or matures for a class of machines).
We remark that both of the conditions in the definition of ≺ are necessary. Condition (i) is clear because it prefers smaller jobs. Condition (ii) makes sure that we make old jobs wait longer so that they can mature for slower machines. It is easy to construct examples where, if we do not obey condition (ii), slow machines will never get used and so we will incur very high flow time.
ANALYSIS
We now show that the flow time incurred by our algorithm is within a poly-logarithmic factor of that of the optimal algorithm. The outline of the proof is as follows. We first argue that the total processing time incurred by our algorithm is not too large. Once we have shown this, we can charge the waiting time of all the jobs in A_i(t), over all machines i and times t, to the total processing time. After this, we show that the total waiting time of the jobs in the central pool is also bounded by a poly-logarithmic factor times the optimum's flow time.
Let A denote our algorithm. For a job j, define the dispatch time d^A_j of j in A as the time at which it is assigned to a machine. For a job j and a class l of machines, let t_M(j, l) denote the time at which j matures for machines of class l, i.e., t_M(j, l) = r_j + α^k · β^l, where k is the class of job j. Let F^A denote the total flow time of our algorithm. For a job j, let P^A_j denote the time it takes to process job j in A (i.e., the processing time of j in A). Similarly, for a set of jobs J, define P^A_J as the total processing time incurred by A on these jobs. Let P^A denote the sum Σ_j P^A_j, i.e., the total processing time incurred by A. Define F^O, P^O_j, P^O_J and P^O similarly. Let m_l denote the number of machines of class l.
4.1 Bounding the Processing Time
We shall now compare P^A with F^O. For each value of a class k of jobs and a class l of machines, let J(k, l) be the jobs of class k which get processed by A on machines of class l. For each value of k and l, we shall bound the processing time incurred by A on J(k, l). So fix a class k of jobs and a class l of machines.
The idea of the proof is as follows. We shall divide the time line into intervals I_1 = (t^1_b, t^1_e), I_2 = (t^2_b, t^2_e), ..., so that each interval I_q satisfies the following property: A is almost busy in I_q processing jobs of class at most k on machines of class l - 1 or less. Further, these jobs have release time at least t^q_b. We shall denote these jobs by H^q.
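Before the charging argument over the intervals I_q is carried out (it continues directly below), the dispatch rule described above can be made concrete with the following simulation sketch. It reuses the illustrative helpers job_class, machine_class, is_mature and prefer_key from the previous sketch, steps through time at unit granularity rather than only at event times, and is meant purely as a reading aid, not as the authors' implementation.

```python
def run_schedule(jobs, slowness, horizon=10_000):
    """Simulate the algorithm: jobs are dicts {"p": ..., "r": ...}; slowness is s_1 <= ... <= s_m.
    Returns a dict mapping job index to completion time (jobs unfinished by the horizon are omitted)."""
    m = len(slowness)
    pool = []                                # central pool W(t)
    assigned = [dict() for _ in range(m)]    # A_i(t): class -> (job index, remaining time on machine i)
    completion = {}
    for t in range(horizon):
        pool += [dict(j, idx=jid) for jid, j in enumerate(jobs) if j["r"] == t]
        # dispatch step: machines are considered in order of increasing slowness
        for i in range(m):
            cls_i = machine_class(slowness[i])
            mature = [j for j in pool if is_mature(j, cls_i, t)]
            if not mature:
                continue
            j = min(mature, key=prefer_key)          # smallest mature job under the order
            k = job_class(j["p"])
            if all(k < k2 for k2 in assigned[i]):    # strictly smaller class than everything in A_i(t)
                pool.remove(j)
                assigned[i][k] = (j["idx"], j["p"] * slowness[i])
        # processing step: each machine works on its smallest-class assigned job for one time unit
        for i in range(m):
            if assigned[i]:
                k = min(assigned[i])
                idx, rem = assigned[i][k]
                rem -= 1
                if rem <= 0:
                    completion[idx] = t + 1
                    del assigned[i][k]
                else:
                    assigned[i][k] = (idx, rem)
    return completion

# Example with one fast machine (s = 1) and one slow machine (s = 4); with the
# illustrative constants ALPHA = BETA = 4 this prints {0: 18, 1: 20, 2: 272}.
print(run_schedule([{"p": 2, "r": 0}, {"p": 2, "r": 0}, {"p": 16, "r": 0}], slowness=[1, 4]))
```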
Now, if\nO\nschedules jobs in J(k, l) on machines of class l or more, we\nhave nothing to prove since\nO would incur at least as much\nprocessing time as we do for these jobs. If\nO schedules some\njobs in J(k, l) on machines of class l - 1 or less during the\ninterval I\nq\n, then one of these two cases must happen (i)\nsome jobs in H\nq\nneed to be processed on machines of class\nl or more, or (ii) some jobs in H\nq\nget processed after time\nt\nq\ne\n. We shall show that both of these cases are good for us\nand we can charge the processing times of jobs in J(k, l) to\nthe flow time of jobs in H\nq\nin\nO.\nLet us formalize these ideas (see Figure 1). The starting\npoints of the intervals I\n1\n, I\n2\n, . . . will be in decreasing order,\ni.e., t\n1\nb\n> t\n2\nb\n> (so we shall work backwards in time while\ndefining these intervals). Suppose we have defined intervals\nI\n1\n, . . . , I\nq-1\nso far. Let J\nq\n(k, l) denote the set of jobs in\nJ(k, l) which get released before interval I\nq-1\nbegins, i.e.,\nbefore t\nq-1\nb\n(J\n1\n(k, l) is defined as J(k, l)).\nNow we define I\nq\n. Let j\nq\n0\nJ\nq\n(k, l) be the job with the\nhighest dispatch time. Let r\nq\n0\ndenote the release time of j\nq\n0\n.\nLet k\nq\n0\ndenote the class of job j\nq\n0\n. Let d\nq\n0\ndenote the dispatch\ntime of j\nq\n0\n. The right end-point t\nq\ne\nof I\nq\nis defined as d\nq\n0\n.\nConsider the jobs in J(k, l) which get dispatched during\n(r\nq\n0\n, d\nq\n0\n). Let j\nq\n1\nbe such a job with the earliest release date.\nDefine r\nq\n1\n, k\nq\n1\n, d\nq\n1\nsimilarly.\nLet H\nq\n0\n(l ), l < l, be the set of jobs of class at most k\nq\n0\nwhich are dispatched to a machine of class l during the time\ninterval (t\nM\n(j\nq\n0\n, l ), d\nq\n0\n). Note that the phrase \"class at most\nk\nq\n0\n\" in the previous sentence is redundant because we cannot\ndispatch a job of class greater than k\nq\n0\nduring (t\nM\n(j\nq\n0\n, l ), d\nq\n0\n)\non machines of class l (otherwise we should have dispatched\nj\nq\n0\nearlier). Let H\nq\n0\ndenote\n\nl-1\nl =1\nH\nq\n0\n(l ). Define H\nq\n1\n(l ), H\nq\n1\nsimilarly.\nIf all jobs in H\nq\n1\nH\nq\n0\nget released after r\nq\n1\n, we stop the\nprocess here. Otherwise find a job in H\nq\n1\nH\nq\n0\nwhich has the\nearliest release date, let j\nq\n2\ndenote this job. As before define\nr\nq\n2\nas the release date of j\nq\n2\n, and k\nq\n2\nas the class of j\nq\n2\n. Again\ndefine H\nq\n2\n(l ) as the set of jobs (of class at most k\nq\n2\n) which\nget dispatched on machines of class l during (t\nM\n(j\nq\n2\n, l ), d\nq\n2\n).\nDefine H\nq\n2\nanalogously.\nSo now assume we have defined\nj\nq\n0\n, j\nq\n1\n, . . . , j\nq\ni\nand H\nq\n0\n, H\nq\n1\n, . . . , H\nq\ni\n, i 2. If all jobs in H\nq\ni\nare\n732\nj\nq\n3\nj\nq+1\n0\nI\nq\nj\nq\n2\nI\nq-1\nr\nq\n0\nd\nq\n0\n= t\nq\ne\nd\nq\n2\nr\nq\n1\nd\nq\n3\nr\nq\n2\nr\nq\n3\nj\nq\n0\nl\nl\nl - 1\nl - 2\n1\nj\nq\n1\nFigure 1: An illustration of the notation used\nreleased after r\nq\ni\n, then we stop the process. Otherwise, define\nj\nq\ni+1\nas the job in H\nq\ni\nwith the earliest release date. Define\nH\nq\ni+1\nin a similar manner (see Figure 1). We remark that\nthe first two steps of this process differ from the subsequent\nsteps. This we will see later is requierd to ensure that the\nintervals do not overlap too much. The following simple\nobservation shows that this process will stop.\nClaim 4.1. For i 2, k\nq\ni\n< k\nq\ni-1\n.\nProof. Consider the job j\nq\ni\nH\nq\ni-1\n(l ). 
A prefers j\nq\ni\nover\nj\nq\ni-1\n. If class(j\nq\ni\n) = class(j\nq\ni-1\n), release time of j\nq\ni\nmust be\nat least that of j\nq\ni-1\n. But this is not the case. Thus, class of\nj\nq\ni\nmust be less than that of j\nq\ni-1\n.\nSuppose this process stops after u\nq\nsteps. Define the beginning\nof I\nq\n, i.e., t\nq\nb\nas r\nq\nu\nq\n.\nThis completes our description of I\nq\n.\nLet H\nq\ndenote\n\ni\nH\nq\ni\n. We are now ready to show that interval I\nq\nis sufficiently\nlong, and that for a large fraction of this time, all\nmachines of class less than l are processing jobs of class less\nthan k which are released in this interval itself. This would\nbe valuable in later arguing that\nO could not have sched-uled\njobs of J(k, l) on the lower class machines and thereby\nincurred small processing time.\nLemma 4.2. Length of the interval I\nq\nis at least\nk\n\nl\n.\nProof. Indeed, job j\nq\n0\nmust wait for at least\nk\n\nl\namount\nof time before being dispatched to a machine of class l. So\nt\nq\ne\n- t\nq\ns\nd\nq\n0\n- r\nq\n0\n\nk\n\nl\n.\nLemma 4.3. H\nq\nconsists of jobs of class at most k and\nall jobs in H\nq\nare released after t\nq\nb\n.\nProof. The first statement is clear from the definition\nof H\nq\n. Let us look at H\nq\ni\n. As argued in proof of Claim 4.1\nall jobs of class k\nq\ni\nin H\nq\ni\nare released after r\nq\ni\n. If all jobs of\nclass less than k\nq\ni\nin H\nq\ni\nare released after r\nq\ni\n, then we are\ndone. Otherwise, all such jobs are released after r\nq\ni+1\n(after\nr\nq\n2\nif i = 0). This completes the proof of the lemma.\nLemma 4.4. A machine of class l < l processes jobs of\nH\nq\nfor at least\n|I\nq\n| - 6\nk\n\nl\namount of time during I\nq\n.\nProof. Let us fix a machine i of class l < l. Let us\nconsider the interval (r\nq\np\n, d\nq\np\n). Job j\nq\np\nmatures for i at time\nt\nM\n(j\nq\np\n, l ). So machine i must be busy during (t\nM\n(j\nq\np\n, l ), d\nq\np\n),\notherwise we could have dispatched j\nq\np\nearlier. We now want\nto argue that machine i is mainly busy during this interval\nprocessing jobs from H\nq\n.\nLet j be a job which is processed by i during (t\nM\n(j\nq\np\n, l ), d\nq\np\n).\nWe have already noted that j can be of class at most k\nq\np\n. If j\ngets dispatched after t\nM\n(j\nq\np\n, l ), then by definition it belongs\nto H\nq\n. If j gets dispatched before t\nM\n(j\nq\np\n, l ), it must belong\nto A\ni\n(t ). But A\ni\n(t ) can contain at most one job of each\nclass. So the total processing time taken by jobs of A\ni\n(t )\nduring (t , d\np\n) is at most\nl\n(+\n2\n+\n+\nk\nq\np\n)\n3/2\nk\nq\np\n\nl\n.\nSo during (r\nq\np\n, d\nq\np\n), i processes jobs from H\nq\nexcept perhaps\nfor a period of length (t\nM\n(j\nq\np\n, l ) - r\nq\np\n) + 3/2\nk\nq\np\n\nl\n=\n5/2\nk\nq\np\n\nl\n. Since\n\np\n(r\nq\np\n, d\np\n) covers I\nq\n, the amount of time\ni does not process jobs from H\nq\nis at most 5/2\nl\n(\nk\n+\n\nk\n+\n+ 1), which proves the lemma.\nWe would like to charge the processing time for jobs in\nJ(k, l) in our solution to the flow time of jobs in H\nq\nin\nO.\nBut to do this we require that the sets H\nq\nbe disjoint. We\nnext prove that this is almost true.\nLemma 4.5. For any q, I\nq\nand I\nq+2\nare disjoint. Hence\nH\nq\nand H\nq+2\nare also disjoint.\nProof. Recall that J\nq+1\n(k, l) is the set of jobs in J(k, l)\nwhich get released before t\nq\nb\n. 
However some of these jobs\nmay get dispatched after t\nq\nb\nand hence I\nq\nand I\nq+1\ncan intersect\n.\nConsider some job j J\nq+1\n(k, l) which is dispatched during\nI\nq\n. Now, observe that j\nq+1\n0\nis released before t\nq\nb\n(by\ndefinition of J\nq+1\n(k, l)) and dispatched after j. So, j gets\ndispatched in (r\nq\n0\n, d\nq\n0\n). This means that release date of j\nq+1\n1\nmust be before release date of j. But t\nq+1\nb\nr\nq+1\n1\nand so, j\nis released after t\nq+1\nb\n. But then j /\nJ\nq+2\n(k, l). So all jobs\nin J\nq+2\n(k, l) get dispatched before I\nq\nbegins, which implies\nthat I\nq\nand I\nq+2\nare disjoint.\nConsider an interval I\nq\n. Let D\nq\n(k, l) denote the jobs in\nJ(k, l) which get released after t\nq\nb\nbut dispatched before t\nq\ne\n.\nIt is easy to see that D\nq\n(k, l) is a superset of J\nq\n(k, l) J\nq+1\n(k, l). So\nq\nD\nq\n(k, l) covers all of J(k, l).\nNow we would like to charge the processing time of jobs\nin D\nq\n(k, l) to the flow time incurred by O on the jobs in H\nq\n.\nBefore we do this, let us first show that\nO incurs significant\n733\namount of flow time processing the jobs in H\nq\n.\nThis is\nproved in the technical theorem below whose proof we defer\nto the appendix.\nTheorem 4.6. Consider a time interval I = (t\nb\n, t\ne\n) of\nlength T . Suppose there exists a set of jobs J\nI\nsuch that\nevery job j J\nI\nis of class at most k and is released after\nt\nb\n. Further\nA dispatches all the jobs in J\nI\nduring I and only\non machines of class less than l. Further, each machine i\nof class l < l satisfies the following condition : machine i\nprocesses jobs from J\nI\nfor at least T - 6\nk\n\nl\namount of\ntime. Assuming T\nk\n\nl\n, the flow time F\nO\nJ\nI\nincurred by\nO on the jobs in J\nI\nis at least `P\nA\nJ\nI\n.\nSubstituting t\nb\n= t\nq\nb\n, t\ne\n= t\nq\ne\n, I = I\nq\n, T = |I\nq\n|, J\nI\n= H\nq\nin\nthe statement of Theorem 4.6 and using Lemmas 4.2, 4.3,\n4.4, we get\nP\nA\nH\nq\nO(F\nO\nH\nq\n).\n(1)\nWe are now ready to prove the main theorem of this section\n.\nTheorem 4.7. The processing time incurred by A on\nD\nq\n(k, l), namely P\nA\nD\nq\n(k,l)\n, is O(F\nO\nH\nq\n+ F\nO\nD\nq\n(k,l)\n).\nProof. Let V denote the volume of D\nq\n(k, l). By Lemma\n4.4, machines of class l do not process jobs from H\nq\nfor at\nmost 6\n\nk\n\nl\nunits of time during I\nq\n. This period translates\nto a job-volume of at most 6\nk\n. If V is sufficiently small\nthen it can be charged to the processing time (or equivalently\nthe flow time in\nO) of the jobs in H\nq\n.\nLemma 4.8. If V c\nk\n(m\n0\n+ . . . + m\nl-1\n) where c is\na constant, then P\nA\nD\nq\n(k,l)\nis O(F\nO\nH\nq\n).\nProof. The processing time incurred by A on D\nq\n(k, l) is\nat most\nl\nV = O(\nk\n\nl+1\n(m\n0\n+ . . .+m\nl-1\n)). Now, Lemmas\n4.2 and 4.4 imply that machines of class l < l process jobs\nfrom H\nq\nfor at least\nk\n\nl\n/2 amount of time. So P\nA\nH\nq\nis at\nleast\nk\n\nl\n(m\n0\n+ . . . + m\nl-1\n)/2. Using equation (1), we get\nthe result.\nSo from now on we shall assume that V c\nk+1\n(m\n0\n+\n. . . + m\nl-1\n) for a sufficiently large constant c. We deal with\neasy cases first :\nLemma 4.9. 
In each of the following cases P\nA\nD\nq\n(k,l)\nis\nO(F\nO\nD\nq\n(k,l)\n+ F\nO\nH\nq\n) :\n(i) At least V /2 volume of D\nq\n(k, l) is processed on machines\nof class at least l by O.\n(ii) At least V /4 volume of D\nq\n(k, l) is finished by O after\ntime t\nq\ne\nk\n\nl\n/2.\n(iii) At least V /8 volume of H\nq\nis processed by\nO on machines\nof class l or more.\nProof. Note that P\nA\nD\nq\n(k,l)\nis at most V\nl\n. If (i) happens,\nO pays at least V/2\nl-1\namount of processing time for\nD\nq\n(k, l) and so we are done. Suppose case (ii) occurs. All\njobs in D\nq\n(k, l) get dispatched by time t\nq\ne\n. So they must get\nreleased before t\nq\ne\nk\n\nl\n. So at least 1/4 volume of these\njobs wait for at least\nk\n\nl\n/2 amount of time in O. This\nmeans that F\nO\nD\nq\n(k,l)\nis at least V /8\nl\n, because each job has\nsize at most\nk\n. This again implies the lemma. Case (iii) is\nsimilar to case (i).\nSo we can assume that none of the cases in Lemma 4.9\noccur.\nNow, consider a time t between t\nq\ne\nk\n\nl\n/2 and\nt\nq\ne\n. Let us look at machines of class 1 to l - 1. O finishes\nat least V /4 volume of D\nq\n(k, l) on these machines before\ntime t. Further, at most V /8 volume of H\nq\ngoes out from\nthese machines to machines of higher class. The volume that\ncorresponds to time slots in which these machines do not\nprocess jobs of H\nq\nin\nA is at most V/16 (for a sufficiently\nlarge constant c). So, at least V /16 amount of volume of\nH\nq\nmust be waiting in\nO at time t. So the total flow time\nincurred by\nO on H\nq\nis at least V /(16\nk\n)\n\nk\n\nl\n/2 which\nagain is (V\nl\n). This proves the theorem.\nCombining the above theorem with Lemma 4.5, we get\nCorollary 4.10. P\nA\nJ(k,l)\nis O(F\nO\n), and so P\nA\nis\nO(log S log P F\nO\n).\n4.2\nBounding the flow time\nWe now show that the average flow time incurred by our\nalgorithm is within poly-logarithmic factor of that incurred\nby the optimal solution. We shall say that a job j is waiting\nat time t in A if it is in the central pool at time t in A and\nhas been released before time t.\nLet V\nA\nk\n(t) denote the volume of jobs of class at most\nk which are waiting at time t in A. Define V\nO\nk\n(t) as the\nremaining volume of jobs of class at most k which are active\nat time t in O. Note the slight difference in the definitions\nfor\nA and O -- in case of A, we are counting only those jobs\nwhich are in the central pool, while in\nO we are counting\nall active jobs. Our goal is to show that for all values of k,\nP\nt\nV\nA\nk\n(t) is bounded from above by P\nt\nV\nO\nk\n(t) plus small\nadditive factors, i.e., O(\nk\n(P\nO\n+ P\nA\n)). Once we have this,\nthe desired result will follow from standard calculations.\nBefore we go to the details of the analysis, let us first show\nhow to prove such a fact when all machines are of the same\nspeed, say 1. The argument for this case follows directly\nfrom [5], but we describe it to develop some intuition for the\nmore general case. Fix a time t and class k of jobs. Suppose\nall machines are processing jobs of class at most k at time\nt in our schedule. Let t be the first time before t at which\nthere is at least one machine in\nA which is not processing\njobs of class k or less (if there is no such time, set t as\n0). It follows that at time t there are no jobs of class k or\nless which are mature for these machines (since all machines\nare identical, a job becomes mature for all the machines\nat the same time). So these jobs must have been released\nafter t k\n. 
During (t k\n, t ), the optimal schedule can\nprocess at most m\nk\nvolume of jobs of class at most k. So\nit follows that V\nA\nk\n(t ) - V\nO\nk\n(t ) is at most m\nk\n. Since our\nschedule processes jobs of class at most k during (t , t), we\nget V\nA\nk\n(t) - V\nO\nk\n(t) x\nA\n(t)\nk\n, where x\nA\n(t) denotes the\nnumber of busy machines at timer t in our schedule (x\nA\n(t)\nis same as m because all machines are busy at time t). The\nother case when a machine may be processing jobs of class\nmore than k or remain idle at time t can be shown similarly\nto yield the same expression. Adding this for all values of t,\nwe get P\nt\nV\nA\nk\n(t) - P\nt\nV\nO\nk\n(t) is at most\nk\nP\nA\n, which is\nwhat we wanted to show.\nThis argument does not extend to the case when machines\ncan have different speeds. There might be a very slow machine\nwhich is not processing jobs of class k or less (it may\nbe idle), but then the jobs which are waiting could have\nbeen released much earlier and so we cannot argue that the\n734\nvolume of remaining jobs of class at most k at time t in the\ntwo schedules are close. This complicates our proof and instead\nof defining one time t as above, we need to define a\nsequence of times.\nBefore we describe this process, we need more notations.\nThese are summarized in the table below for ease of reference\n.\nLet j\nk\n(t) J\nk\n(t) be the job with the earliest release date.\nLet r\nk\n(t) denote the release date of job j\nk\n(t), and c\nk\n(t) denote\nthe class of j\nk\n(t). Let l\nk\n(t) denote the largest l such that\nj\nk\n(t) has matured for machines of class l at time t. In other\nwords, l\nk\n(t) is the highest value of l such that t\nM\n(j, l) t.\nObserve that all machines of class l\nk\n(t) or less must be busy\nprocessing jobs of class at most c\nk\n(t) at time t, otherwise our\nalgorithm should have dispatched j by time t. Our proof will\nuse the following lemma.\nLemma 4.11. For any class k of jobs and time t,\nV\nA\nk\n(t) - V\nO\nk\n(t) 2\nc\nk\n(t)\nm\n(l\nk\n(t)-1)\n+ V\nA\n(c\nk\n(t)-1)\n(r\nk\n(t)) - V\nO\n(c\nk\n(t)-1)\n(r\nk\n(t))\n+ X\nll\nk\n(t)\nP\nO\nl\n(r\nk\n(t), t)\n\nl-1\nProof. Let V\nA\ndenote the volume of jobs of class at most\nk which are processed by A on machines of class l\nk\n(t) - 1 or\nless during (r\nk\n(t), t). Define U\nA\nas the volume of jobs of class\nat most k which are processed by A on machines of class\nl\nk\n(t) or more during (r\nk\n(t), t). Define V\nO\nand U\nO\nsimilarly.\nClearly, V\nA\nk\n(t) - V\nO\nk\n(t) = V\nA\nk\n(r\nk\n(t)) - V\nO\nk\n(r\nk\n(t)) + (V\nO\nV\nA\n) + (U\nO\n- U\nA\n).\nLet us look at V\nO\n- V\nA\nfirst.\nAny machine of class\nl l\nk\n(t) - 1 is busy in A during (t\nM\n(j\nk\n(t), l ), t) processing\njobs of class at most c\nk\n(t). The amount of volume O can\nprocess on such machines during (r\nk\n(t), t\nM\n(j\nk\n(t), l )) is at\nmost\nm\nl\n\nck(t)\n\nl\n\nl -1\n, which is at most m\nl\n\nc\nk\n(t)\n. So we\nget V\nO\n- V\nA\nm\n(l\nk\n(t)-1)\n\nc\nk\n(t)\n.\nLet us now consider V\nA\nk\n(r\nk\n(t)) - V\nO\nk\n(r\nk\n(t)). Let us consider\njobs of class c\nk\n(t) or more which are waiting at time\nr\nk\n(t) in A (so they were released before r\nk\n(t)). By our definition\nof j\nk\n(t), all such jobs must be processed by time\nt in A. If l l\nk\n(t) - 1, then such jobs can be done on\nmachines of class l only during (t\nM\n(j\nk\n(t), l ), t). 
So again\nwe can show that the total volume of such jobs is at most\nm\n(l\nk\n(t)-1)\n\nc\nk\n(t)\n+ U\nA\n.\nThus we get V\nA\nk\n(r\nk\n(t)) V\nO\nk\n(r\nk\n(t)) m\n(l\nk\n(t)-1)\n\nc\nk\n(t)\n+U\nA\n+V\nA\n(c\nk\n(t)-1)\n(r\nk\n(t))V\nO\n(c\nk\n(t)-1)\n(r\nk\n(t)), because V\nO\n(c\nk\n(t)-1)\n(r\nk\n(t)) V\nO\nk\n(r\nk\n(t)).\nFinally note that U\nO\nis at most P\nll\nk\n(t) P\nO\nl\n(r\nk\n(t),t)\n\nl-1\n. Combining\neverything, we get the result.\nThe rest of the proof is really about unraveling the expression\nin the lemma above. To illustrate the ideas involved,\nlet us try to prove the special case for jobs of class 1 only.\nThe lemma above implies that V\nA\n1\n(t) - V\nO\n1\n(t) 2\nm\n(l\n1\n(t)-1)\n+P\nll\n1\n(t) P\nO\nl\n(r\n1\n(t),t)\n\nl-1\n. We are really interested in\nP\nt\n(V\nA\n1\n(t) - V\nO\n1\n(t)). Now, the sum P\nt\nm\n(l\n1\n(t)-1)\nis not a\nproblem because we know that at time t all machines of class\nl\n1\n(t) or less must be busy in A. So, x\nA\n(t) m\n(l\n1\n(t)-1)\n. So\nP\nt\nm\n(l\n1\n(t)-1)\nis at most P\nt\nx\nA\n(t), which is the total processing\ntime of\nA. It is little tricky to bound the second\nterm. We shall write P\nll\n1\n(t) P\nO\nl\n(r\n1\n(t),t)\n\nl-1\nas\nP\nll\n1\n(t)\nP\nt\nt =r\n1\n(t) x\nO\nl\n(t )\n\nl-1\n.\nWe can think of this as saying\nthat at time time t , r\n1\n(t) t t, we are charging 1/\nl-1\namount to each machine of class l which is busy at time t\nin\nO. Note that here l is at least l\n1\n(t).\nNow we consider P\nt\nP\nll\n1\n(t)\nP\nt\nt =r\n1\n(t) x\nO\nl\n(t )\n\nl-1\n. For a fixed\ntime t and a machine i of class l which is busy at time t\nin\nO, let us see for how many times t we charge to i. We\ncharge to i at time t if t lies in the interval (r\n1\n(t), t), and\nl l\n1\n(t). Suppose this happens. We claim that t - t has\nto be at most\nl+1\n. Indeed otherwise t - r\n1\n(t)\nl+1\nand so j\n1\n(t) has matured for machines of class l + 1 as well.\nBut then l < l\n1\n(t). So the total amount of charge machine i\ngets at time t is at most\nl+1\n1/\nl-1\n= O(1). Thus, the\ntotal sum turns out to be at most a constant times the total\nprocessing time of\nO.\nLet us now try to prove this for the general case. We\nbuild some notation first. Fix a time t and class k. We\nshall define a sequence of times t\n0\n, t\n1\n, and a sequence of\njobs j\n0\n(t), j\n1\n(t), . . . associated with this sequence of times.\nc\ni\n(t) shall denote the class of the job j\ni\n(t). Let us see how\nwe define this sequence. First of all t\n0\n= t, and j\n0\n(t) is\nthe job j\nk\n(t) (as defined above). Recall the definition of\nj\nk\n(t) it is the job with the earliest release date among all\njobs of class at most k which are waiting at time t in A.\nNote that c\n0\n(t), the class of this job, can be less than k.\nNow suppose we have defined t\n0\n, . . . , t\ni\nand j\n0\n(t), . . . , j\ni\n(t),\ni 0. t\ni+1\nis the release date of j\ni\n(t). j\ni+1\n(t) is defined as\nthe job j\nc\ni\n(t)-1\n(t\ni+1\n), i.e., the job with the earliest release\ndate among all jobs of class less than c\ni\n(t) waiting at time\nt\ni+1\nin\nA. We shall also define a sequence of classes of\nmachines l\n0\n(t), l\n1\n(t), . . . in the following manner l\ni\n(t) is the\nhighest class l of machines such that job j\ni\n(t) has matured\nfor machines of class l at time t\ni\n. Figure 2 illustrates these\ndefinitions. 
The vertical line at time t\ni\ndenotes l\ni\n(t) the\nheight of this line is proportional to l\ni\n(t).\nWe note the following simple fact.\nClaim 4.12.\nl\ni\n(t)\n\nc\ni\n(t)\nt\ni\n- t\ni+1\n\nl\ni\n(t)+1\n\nc\ni\n(t)\n.\nProof. Indeed t\ni+1\nis the release date of j\ni\n(t) and j\ni\n(t)\nmatures for machines of class l\ni\n(t) at time t\ni\n, but not for\nmachines of class l\ni+1\n(t) at time t\ni\n.\nThe statement in Lemma 4.11 can be unrolled iteratively\nto give the following inequality :\nV\nA\nk\n(t) - V\nO\nk\n(t) 2 (\nc\n0\n(t)\nm\n<l\n0\n(t)\n+\nc\n1\n(t)\nm\n<l\n1\n(t)\n+\n) +\n0\n@ X\nll\n0\n(t)\nP\nO\nl\n(t\n1\n, t\n0\n)\n\nl-1\n+ X\nll\n1\n(t)\nP\nO\nl\n(t\n2\n, t\n1\n)\n\nl-1\n+\n\n1\nA (2)\nLet us try to see what the inequality means. First look\nat the term\nc\ni\n(t)\nm\n<l\ni\n(t)\n.\nConsider a machine of class\nl < l\ni\n(t). It is busy in A during (t\nM\n(j\ni\n(t), l), t\ni\n).\nNow\nt\ni\n- t\nM\n(j\ni\n(t), l) = (t\ni\n- t\ni+1\n)\n- (t\nM\n(j\ni\n(t), l) - t\ni+1\n)\n\nl\ni\n(t)\n\n\nc\ni\n(t)\nc\ni\n(t)\n\nl\n\nc\ni\n(t)\n\nl\n, as l < l\ni\n(t). Hence machine l is\nalso busy in\nA during (t\ni\nc\ni\n(t)\n\nl\n, t\ni\n), So the term\nc\ni\n(t)\n\nm\n<l\ni\n(t)\nis essentially saying that we charge 1/\nl\namount to\neach machine of class l < l\ni\n(t) during each time in (t\ni\n735\nx\nA\n(t), x\nO\n(t)\nNumber of machines busy at time t in A, O.\nx\nA\nl\n(t), x\nO\nl\n(t)\nNumber of machines of class l busy at time t in A, O.\nP\nA\nl\n(t\n1\n, t\n2\n), P\nO\nl\n(t\n1\n, t\n2\n)\nTotal processing time incurred by machines of class l\nduring (t\n1\n, t\n2\n) in\nA, O.\nm\nl\n, m\nl\n, m\n<l\nNumber of machines of class l, at most l, less than l.\nm\n(l\n1\n,l\n2\n)\nNumber of machines of class between (and including) l\n1\nand l\n2\n.\nJ(k, t)\nSet of jobs of class at most k which are waiting at time t in A.\nTable 1: Table of definitions\nt\nt\nt\nt\nt\n1\n0\n2\n3\n4\nl\nl\nl\nl\nl\n0\n1\n2\n3\n4\nt\n5\n5\nl\nt\n6\nr\njj\nj\nr\nr\nr\nr\nr\nj\nj\nj\nj\nj\n0\n1\n2\n3\n4\n5\nFigure 2: Illustrating the definitions of t\n0\n, . . . and c\n0\n(t), . . ..\n\nc\ni\n(t)\n\nl\n, t\ni\n). These intervals are shown in figure 2 using\nlight shade. So a machine i of class l which is busy in A\nduring these lightly shaded regions is charged 1/\nl\nunits.\nLet us now look at the term P\nll\ni\n(t) P\nO\nl\n(t\ni+1\n,t\ni\n)\n\nl-1\n. This\nmeans the following consider a machine of class l l\ni\n(t)\nwhich is busy in\nO at some time t during (t\ni+1\n, t\ni\n) then\nwe charge it 1/\nl-1\nunits. Figure 2 illustrates this fact\nwe charge 1/\nl-1\nto all machines of class l l\ni\n(t) which are\nbusy in\nO during the darkly shaded regions.\nLet us see how we can simplify the picture. We say that\nthe index i is suffix maximum if l\ni\n(t) > l\ni-1\n(t), . . . , l\n0\n(t)\n(i = 0 is always suffix maximum). In Figure 2, t\n0\n, t\n2\nand\nt\n5\nare suffix maximum.\nLet the indices which are suffix\nmaximum be i\n0\n= 0 < i\n1\n< i\n2\n< . . .. The following lemma\nsays that we can consider only suffix maximum indices in\n(2). We defer its proof to the appendix.\nLemma 4.13.\nV\nA\nk\n(t) - V\nO\nk\n(t) 4\n2\nX\ni\nu\n\nc\niu\n(t)\nm\n(l\niu-1\n(t),l\niu\n(t)-1)\n+ X\ni\nu\nX\nll\niu\n(t)\nP\nO\nl\n(t\ni\nu+1\n, t\ni\nu\n)\n\nl-1\n,\nwhere i\nu\nvaries over the suffix maximum indices (define l\ni\n-1\n(t)\nas 1). Recall that m\n(l\n1\n,l\n2\n)\ndenotes the number of machines\nof class between l\n1\nand l\n2\n.\nFigure 3 shows what Lemma 4.13 is saying. 
In the lightly\nshaded region, if there is a machine of class l which is busy\nat some time t in A, we charge it 1/\nl\nunits at time t . In\nthe darkly shaded region if there is a machine i of class l\nwhich is busy at time t in O we charge it 1/\nl-1\nunits.\nNow we try to bound the charges on the darkly shaded\nregion for all time t. Let us fix a machine h of class l.\nSuppose h is busy in O at time t . We are charging 1/\nl-1\namount to h at time t if the following condition holds : for\nall i such that t\ni\nt , l\ni\n(t) is at most l. Now we ask, if we\nfix t , for how many values of t do we charge h at time t ?\nThe following claim shows that this can not be too large.\nClaim 4.14. Given a machine h of class l, we can charge\nit for the darkly shaded region at time t for at most 2\nl+1\n\n\nk\nvalues of t.\nProof. Suppose we charge h at time t for some value\nof t. Fix this t. Clearly t t . Let i be the largest index\nsuch that t t\ni\n. So t - t t - t\ni+1\n. Now consider any\ni i. Lemma 4.12 implies that t\ni\n- t\ni +1\n\nl\ni\n(t)+1\n\n\n\ni\n(t)\n\nl+1\n\nc\ni\n(t)\n. Since c\ni\n(t) decrease as i increases,\nP\ni\ni =0\n(t\ni\n- t\ni +1\n)\n2\nl+1\n\nk\n. This implies the claim.\nSo the total amount of charge to machine h at time t is\nat most 2\n\n2\n\nk\n. Thus we get the following fact :\nX\nt\n0\n@X\ni\nu\nX\nll\niu\n(t)\nP\nO\nl\n(t\ni\nu\n+1\n, t\ni\nu\n)\n\nl-1\n1\nA 2\n2\n\nk\nP\nO\n(3)\nNow we look at the charges on the lightly shaded region.\nLet h be a machine of class l which is busy in A at time t .\nAs argued earlier, we charge 1/\nl\nunits to h at time t if the\nfollowing condition holds there exists a suffix maximum\nindex i\nu\nsuch that t lies in the interval (t\ni\nu\nc\niu\n(t)\n\nl\n, t\ni\nu\n).\nFurther for all suffix maximum indices i < i\nu\n, it must be\nthe case that l\ni\n(t) < l. Now we want to know for how many\nvalues of t do we charge h at time t .\n736\nt\nt\nt\nt\nt\n1\n0\n2\n3\n4\nl\nl\nl\nl\nl\n0\n1\n2\n3\n4\nt\n5\n5\nl\nt\n6\nr\njj\nj\nr\nr\nr\nr\nr\nj\nj\nj\nj\nj\n0\n1\n2\n3\n4\n5\nFigure 3: Illustrating Lemma 4.13\nClaim 4.15. Given a machine h of class l, we can charge\nit for the lightly shaded region at time t for at most 3\n\nl+1\n\n\nk\nvalues of t.\nProof. Fix a time t such that while accounting for V\nA\nk\n(t)\nwe charge h at time t . So there is a maximum index i\nu\nsuch\nthat t\ni\nu\n- t\nc\niu\n(t)\n\nl\n. Further, if i is an index less than\ni\nu\n, then l\ni\n(t) must be less than l. We can argue as in the\nproof of Claim 4.14 that t - t\ni\nu\ncan be at most 2\n\nl+1\n\nk\n.\nSo t - t\n3\nl+1\n\nk\n.\nSo we get\nX\nt\nX\ni\nu\n\nc\niu\n(t)\nm\n(l\niu-1\n(t),l\niu\n(t)-1)\n!\n3\nk\nP\nA\n(4)\nPutting everything together, we see that Lemma 4.13, and\nequations (3) and (4) imply that there is a constant c such\nthat X\nt\nV\nA\nk\n(t) X\nt\nV\nO\nk\n(t) + c\nk\n(P\nO\n+ P\nA\n).\n(5)\nThe final result now follows from standard calculations.\nTheorem 4.16. F\nA\nis O(log S log\n2\nP F\nO\n).\nProof. We have already bounded the processing time of\nA. Once a job gets dispatched to a machine i, its waiting\ntime can be charged to the processing done by i. Since at\nany time t, there are at most log P active jobs dispatched to\na machine, the total waiting time of jobs after their dispatch\ntime is at most O(log P P\nA\n). So we just need to bound the\ntime for which jobs are waiting in the central pool.\nLet n\nA\nk\n(t) be the number of jobs of class k waiting in the\ncentral pool at time t in our algorithm. 
Let n\nO\nk\n(t) be the\nnumber of jobs of class k which are active at time t in O\n(note the difference in the definitions of the two quantities).\nSince jobs waiting in the central pool in\nA have not been\nprocessed at all, it is easy to see that n\nA\nk\n(t)\nV\nA\nk\n(t)\n\nk\n. Further\n, V\nO\nk\n(t)\nk\nn\nO\nk\n(t) + + n\nO\n1\n(t). Combining these\nobservations with equation (5), we get for all values of k,\nX\nt\nn\nA\nk\n(t) X\nt\n,,\nn\nO\nk\n(t) + n\nO\nk-1\n(t)\n\n+\n+ n\nO\n1\n(t)\n\nk-1\n\n+c (P\nO\n+ P\nA\n).\nWe know that total flow time of a schedule is equal to the\nsum over all time t of the number of active jobs at time t in\nthe schedule. So adding the equation above for all values of\nk and using Corollary 4.10 implies the theorem.\nACKNOWLEDGEMENTS\nWe would like to express our thanks to Gagan Goel, Vinayaka\nPandit, Yogish Sabharwal and Raghavendra Udupa for useful\ndiscussions.\nREFERENCES\n[1] N. Avrahami and Y. Azar. Minimizing total flow time\nand total completion time with immediate\ndispatching. In Proc. 15th Symp. on Parallel\nAlgorithms and Architectures (SPAA), pages 1118.\nACM, 2003.\n[2] Baruch Awerbuch, Yossi Azar, Stefano Leonardi, and\nOded Regev. Minimizing the flow time without\nmigration. In ACM Symposium on Theory of\nComputing, pages 198205, 1999.\n[3] N. Bansal and K. Pruhs. Server scheduling in the l p\nnorm: A rising tide lifts all boats. In ACM Symposium\non Theory of Computing, pages 242250, 2003.\n[4] Luca Becchetti, Stefano Leonardi, Alberto\nMarchetti-Spaccamela, and Kirk R. Pruhs. Online\nweighted flow time and deadline scheduling. Lecture\nNotes in Computer Science, 2129:3647, 2001.\n[5] C. Chekuri, S. Khanna, and A. Zhu. Algorithms for\nweighted flow time. In ACM Symposium on Theory of\nComputing, pages 8493. ACM, 2001.\n[6] Chandra Chekuri, Ashish Goel, Sanjeev Khanna, and\nAmit Kumar. Multi-processor scheduling to minimize\nflow time with epsilon resource augmentation. In\nACM Symposium on Theory of Computing, pages\n363372, 2004.\n[7] R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H.\nG. Rinnooy Kan. Optimization and approximation in\ndeterministic sequencing and scheduling : a survey.\nAnn. Discrete Math., 5:287326, 1979.\n[8] Bala Kalyanasundaram and Kirk Pruhs. Speed is as\npowerful as clairvoyance. In IEEE Symposium on\n737\nFoundations of Computer Science, pages 214221,\n1995.\n[9] Hans Kellerer, Thomas Tautenhahn, and Gerhard J.\nWoeginger. Approximability and nonapproximability\nresults for minimizing total flow time on a single\nmachine. In ACM Symposium on Theory of Cmputing,\npages 418426, 1996.\n[10] Stefano Leonardi and Danny Raz. Approximating\ntotal flow time on parallel machines. In ACM\nSymposium on Theory of Computing, pages 110119,\n1997.\n[11] C. A. Phillips, C. Stein, E. Torng, and J. Wein.\nOptimal time-critical scheduling via resource\naugmentation. In ACM Symposium on Theory of\nComputing, pages 140149, 1997.\nAppendix\nProof of Theorem 4.6.\nLet M denote the set of machines\nof class less than l. First observe that the processing\ntime incurred by\nA on J\nI\nis at most twice of\n|M | T (the\nfactor twice comes because there may be some jobs which\nare dispatched during I but finish later -- there can be at\nmost one such job for a given class and a given machine).\nSo we will be done if we can show that F\nO\nJ\nI\nis (\n|M | T ).\nLet V be the volume of jobs in J\nI\nwhich are done by\nO\non machines of class l or more. 
If V\n|M |T\n\nl+1\n, then we are\ndone because then the processing time incurred by\nO on J\nI\nis at least V\nl\n. So we will assume in the rest of the proof\nthat V\n|M |T\n\nl+1\n.\nLet i be a machine of class less than l. We shall say that\ni is good is i processes jobs from J\nI\nfor at least T /4 units\nof time during I in the optimal solution O. Otherwise we\nsay that i is bad. Let G denote the set of good machines. If\n|G|\n|M |\n\n, then we are done again, because P\nO\nJ\nI\nis at least\n|G| T/4. Let B denote the set of bad machines.\nSo we can assume that the number of good machines is\nat most 1/ fraction of the number of machines of class less\nthan l. Now consider a time t in the interval (t\nb\n+ T /2, t\ne\n).\nClaim 6.1. At time t, at least\nk\n|M | volume of jobs from\nJ\nI\nis waiting in\nO.\nProof. Let V\n1\ndenote the volume of jobs from J\nI\nwhich\nis done by\nA during (t\nb\n, t). Let V\n2\ndenote the volume of jobs\nfrom J\nI\nwhich is done by\nO on machines of class less than\nl during (t\nb\n, t). Recall that for a machine i, s\ni\ndenotes the\nslowness of i.\nSince a machine i of class l < l does not perform jobs\nfrom J\nI\nfor at most 6\nk\n\nl\namount of time during I, we see\nthat V\n1\nP\niM t-t\nb\n-6\nk\n\nci\ns\ni\n, where c\ni\ndenotes the class\nof i. Let us look at V\n2\nnow. In\nO all bad machines do not\nprocess jobs from J\nI\nfor at least 3T /4 units of time during I.\nSo they do not process jobs from J\nI\nfor at least T /4 units of\ntime during (t\nb\n, t\nb\n+T /2). So V\n2\nP\niM (t-t\nb\n)\ns\ni\n-P\niB T\n4s\ni\n.\nThis shows that V\n1\n- V\n2\nP\niB T\n4s\ni\n- P\niM 6\nk\n\nci\ns\ni\n.\nFor a bad machine i, T /4 - 6\nk\n\nc\ni\nT/8, since c\ni\nl 1\n(assuming is large enough). So, we can see that this\ndifference is at least P\niB T\n8s\ni\n- P\ni /\nB\n6\nk\n. Since T\nl\n\n\nk\n, we see that this difference is at least\nT\n\nl-1\n(\n|B|/8 - 6|G|),\nwhich is at least\nT |M |\n10\nl-1\n, because we have assumed that\n|B| is\nlarger than\n|G| by a sufficiently high constant factor. Recall\nthat V is the volume of jobs in J\nI\nwhich is done by\nO on\nmachines of class l or more. Clearly, the volume of jobs from\nJ\nI\nwhich is waiting at time t in O is at least V\n1\n- V\n2\n- V .\nBut V is at most\nT |M |\n\nl+1\n. Hence the volume waiting at time\nt is at least\nT |M |\n\nl\n. This proves the lemma.\nSince each job\nin J\nI\nis of size at most\nk\n, we see that at least (\n|M |) jobs\nare waiting at time t. Summing over all values of t in the\nrange (t\nb\n+ T /2, t\ne\n) implies the theorem.\nProof of Lemma 4.13.\nConsider an i, i\nu+1\n< i i\nu\n.\nThen l\ni\n(t) l\ni\nu\n(t), otherwise we should have another suffix\nmaximal index between i\nu+1\nand i\nu\n. So P\ni\nu+1\n-1\ni=i\nu\n\nc\ni\n(t)\n\nm\n<l\ni\n(t)\nm\n<l\niu\n(t)\nP\ni\nu\n-1\ni=i\nu\n\nc\ni\n(t)\n2 m\n<l\niu\n(t)\n\nc\niu\n(t)\n. So\nwe get P\ni\n\nc\ni\n(t)\nm\n<l\ni\n(t)\n2 P\ni\nu\nm\n<l\niu\n(t)\n\nc\niu\n(t)\n.\nNow we consider the sum P\ni\nP\nll\ni\n(t) P\nO\nl\n(t\ni+1\n,t\ni\n)\n\nl-1\n. Fix an\ni, i\nu+1\n< i i\nu\n. Using Claim 4.12, we see that P\nO\nl\n(t\ni+1\n, t\ni\n)\n\nm\nl\n\nl\ni\n(t)+1\n\nc\ni\n(t)\n. So we get\nX\nll\ni\n(t)\nP\nO\nl\n(t\ni+1\n, t\ni\n)\n\nl-1\nX\nll\niu\n(t)\nP\nO\nl\n(t\ni+1\n, t\ni\n)\n\nl-1\n+\nl\niu\n(t)-1\nX\nl=l\ni\n(t)\nm\nl\n\nl\ni\n(t)+1\n\nc\ni\n(t)\n\nl-1\nNow the second term on the right hand side above is at\nmost\n2\nm\n<l\niu\n(t)\n\nc\ni\n(t)\n. 
So we get\ni\nu+1-1\nX\ni=i\nu\nX\nll\ni\n(t)\nP\nO\nl\n(t\ni+1\n, t\ni\n)\n\nl-1\nX\nll\niu\n(t)\nP\nO\nl\n(t\ni\nu+1\n, t\ni\nu\n)\n\nl-1\n+ 2\n2\n\nc\niu\n(t)\nm\n<l\niu\n(t)\n,\nbecause\nc\ni\n(t)\nscales down geometrically as i increases.\nFinally note that P\ni\nu\n\nc\niu\n(t)\nm\n<l\niu\n(t)\nis at most twice of\nP\ni\nu\n\nc\niu\n(t)\nm\n(l\niu-1\n(t),l\niu\n(t)-1)\n, because\nc\ni\n(t)\nscale down\ngeometrically. This proves the lemma (using (2)).\n738", "keywords": "non-migratory algorithm;flow-time;average flow time;approximation algorithms;processing time;competitive ratio;related machines;poly-logarithmic factor;preemption;multiprocessor environment;scheduling;Scheduling"} {"name": "138", "title": "Modeling and Predicting Personal Information Dissemination Behavior", "abstract": "In this paper, we propose a new way to automatically model and predict human behavior of receiving and disseminating information by analyzing the contact and content of personal communications. A personal profile, called CommunityNet, is established for each individual based on a novel algorithm incorporating contact, content, and time information simultaneously. It can be used for personal social capital management. Clusters of CommunityNets provide a view of informal networks for organization management. Our new algorithm is developed based on the combination of dynamic algorithms in the social network field and the semantic content classification methods in the natural language processing and machine learning literatures. We tested CommunityNets on the Enron Email corpus and report experimental results including filtering, prediction, and recommendation capabilities. We show that the personal behavior and intention are somewhat predictable based on these models. For instance, "to whom a person is going to send a specific email" can be predicted by one's personal social network and content analysis. Experimental results show the prediction accuracy of the proposed adaptive algorithm is 58% better than the social network-based predictions, and is 75% better than an aggregated model based on Latent Dirichlet Allocation with social network enhancement. Two online demo systems we developed that allow interactive exploration of CommunityNet are also discussed.", "fulltext": "INTRODUCTION\nWorking in the information age, the most important is not\nwhat you know, but who you know [1]. A social network, the\ngraph of relationships and interactions within a group of\nindividuals, plays a fundamental role as a medium for the spread\nof information, ideas, and influence. At the organizational level,\npersonal social networks are activated for recruitment, partnering,\nand information access. At the individual level, people exploit\ntheir networks to advance careers and gather information.\nInformal network within formal organizations is a major, but\nhard to acquire, factor affecting companies' performance.\nKrackhardt [2] showed that companies with strong informal\nnetworks perform five or six times better than those with weak\nnetworks, especially on the long-term performance. Friend and\nadvice networks drive enterprise operations in a way that, if the\nreal organization structure does not match the informal networks,\nthen a company tends to fail [3]. 
Since Max Weber first studied\nmodern bureaucracy structures in the 1920s, decades of related\nsocial scientific researches have been mainly relying on\nquestionnaires and interviews to understand individuals' thoughts\nand behaviors for sensing informal networks. However, data\ncollection is time consuming and seldom provides timely,\ncontinuous, and dynamic information. This is usually the biggest\nhurdle in social studies.\nPersonal Social Network (PSN) could provide an organizing\nprinciple for advanced user interfaces that offer information\nmanagement and communication services in a single integrated\nsystem. One of the most pronounced examples is the\nnetworking\n\nstudy by Nardi et al. [4], who coined the term\nintensional\nnetworks to describe personal social networks. They presented a\nvisual model of user's PSN to organize personal communications\nin terms of a social network of contacts. From this perspective,\nmany tools were built such as LinkedIn [5], Orkut [6], and\nFriendster [7]. However, all of them only provide tools for\nvisually managing personal social networks. Users need to\nmanually input, update, and manage these networks. This results\nin serious drawbacks. For instance, people may not be able to\ninvest necessary efforts in creating rich information, or they may\nnot keep the information up-to-date as their interests,\nresponsibilities, and network change. They need a way to organize\nthe relationship and remember who have the resources to help\nthem. We coin the terminology of managing these goals as\npersonal social capital management\n1\n.\nIn this paper, we develop a user-centric modeling\ntechnology, which can dynamically describe and update a\nperson's personal social network with context-dependent and\ntemporal evolution information from personal communications.\nWe refer to the model as a CommunityNet. Senders and receivers,\ntime stamps, subject and content of emails contribute three key\ncomponents content semantics, temporal information, and\nsocial relationship. We propose a novel Content-Time-Relation\n(CTR) algorithm to capture dynamic and context-dependent\ninformation in an unsupervised way. Based on the CommunityNet\nmodels, many questions can be addressed by inference, prediction\nand filtering. For instance, 1) Who are semantically related to\neach other? 2) Who will be involved in a special topic? Who are\nthe important (central) people in this topic? 3) How does the\ninformation flow? and 4) If we want to publicize a message,\nwhom should we inform?\nFigure 1 shows the procedure of our proposed scheme. First,\ntopic detection and clustering is conducted on training emails in\norder to define topic-communities. Then, for each individual,\nCommunityNet is built based on the detected topics, the sender\nand receiver information, and the time stamps. Afterwards, these\npersonal CommunityNets can be applied for inferring\norganizational informal networks and predicting personal\nbehaviors to help users manage their social capitals. 
We\nincorporate the following innovative steps:\n1) Incorporate content analysis into social network in an\nunsupervised way\n2) Build a CommunityNet for each user to capture the context-dependent\n, temporal evolutionary personal social network\nbased on email communication records\n\n3) Analyze people's behaviors based on CommunityNet,\nincluding predicting people's information sending and\nreceiving behaviors\n\n4) Show the potential of using automatically acquired personal\nsocial network for organization and personal social capital\nmanagement\nInput: Emails\nFrom: sally.beck@enron.com\nTo: shona.wilson@enron.com\nSubject: Re: timing of submitting\ninformation to Risk Controls\nGood memo - let me know if you see results.\n......\nTopic Detection,\nContent Analysis\nTopics\nMeeting schedule\nAgreement\nCalifornia Energy\nGame\nHoliday celebration\nCommunityNet\nCommunityNet\nModeling\nApplications\nRecommendation system\nPrediction,\nFiltering\nInput: Emails\nFrom: sally.beck@enron.com\nTo: shona.wilson@enron.com\nSubject: Re: timing of submitting\ninformation to Risk Controls\nGood memo - let me know if you see results.\n......\nTopic Detection,\nContent Analysis\nTopics\nMeeting schedule\nAgreement\nCalifornia Energy\nGame\nHoliday celebration\nCommunityNet\nCommunityNet\nModeling\nApplications\nRecommendation system\nPrediction,\nFiltering\n\nFigure 1. An Overview of CommunityNet\n\nWe tested the CommunityNet model on the Enron email\ncorpus comprising the communication records of 154 Enron\nemployees dating from Jan. 1999 to Aug. 2002. The Enron email\ndataset was originally made available to public by the Federal\nEnergy Regulatory Commission during the investigation [9]. It\nwas later collected and prepared by Melinda Gervasio at SRI for\nthe CALO (A Cognitive Assistant that Learns and Organizes)\nproject. William Cohen from CMU has put up the dataset on the\nweb for research purpose [9]. This version of the dataset contains\naround 517,432 emails within 150 folders. We clean the data and\nextract 154 users from those 150 folders with 166,653 unique\nmessages from 1999 to 2002. In the experiments, we use 16,873\nintra-organizational emails which connect these 154 people.\nThe primary contributions of this paper are three-fold. First\nwe develop an algorithm incorporating content-time-relation\ndetection. Second, we generate an application model which\ndescribes personal dynamic community network. Third, we show\nhow this model can be applied to organization and social capital\nmanagement. To the best of our knowledge, this is among the first\nreported technologies on fusing research in the social network\nanalysis field and the content analysis field for information\nmanagement.\n\nWe propose the CTR algorithm and the\nCommunityNet based on the Latent Dirichlet Allocation\nalgorithm.\n\nIn our experiments, we observed clear benefit of\ndiscovering knowledge based on multi-modality information\nrather than using only single type of data.\nThe rest of the paper is organized as follows. In Section 2,\nwe present an overview of related work. In Section 3, we present\nour model. We discuss how to use CommunityNet to analyze\ncommunities and individuals in section 4 and 5, respectively. In\nSection 6, we show two demo systems for query, visualization\nand contact recommendation. Finally, conclusions and future\nwork are addressed in Section 7.\nRELATED WORK\nTo capture relationships between entities, social network has\nbeen a subject of study for more than 50 years. 
An early sign of the potential of social networks was perhaps the classic paper by Milgram [10], estimating that on average every person in the world is only six edges away from any other, if an edge between i and j means "i knows j". Lately, introducing social network analysis into information mining has become an important research area. Schwartz and Wood [11] mined social relationships from email logs by using a set of heuristic graph algorithms. The Referral Web project [12] mined a social network from a wide variety of publicly available online information, and used it to help individuals find experts who could answer their questions based on geographical proximity. Flake et al. [13] used graph algorithms to mine communities from the Web (defined as sets of sites that have more links to each other than to non-members). Tyler et al. [14] used a betweenness-centrality algorithm for the automatic identification of communities of practice from email logs within an organization. The Google search engine [15] and Kleinberg's HITS algorithm for finding hubs and authorities on the Web [16] are also based on social network concepts. The success of these approaches, and the discovery of widespread network topologies with nontrivial properties, have led to a recent flurry of research on applying link analysis for information mining.
A promising class of statistical models for expressing structural properties of social networks is the class of Exponential Random Graph Models (ERGMs, or p* models) [17]. These statistical models can represent structural properties that define complicated dependence patterns which cannot be easily captured by deterministic models. Let Y denote a random graph on a set of n nodes and let y denote a particular graph on those nodes. Then the probability that Y equals y is

P(Y = y) = \frac{\exp\left(\theta^{T} s(y)\right)}{c(\theta)}    (1)

where s(y) is a known vector of graph statistics (density, reciprocity, transitivity, etc.) of y, \theta is a vector of coefficients modeling the influence of each statistic on the whole graph, T denotes transpose, and c(\theta) is a normalization term ensuring that \sum_{y} P(Y = y) = 1. The parameters \theta are estimated from the observed graph y_{obs} by maximum likelihood estimation.
All the research discussed above has focused on using static properties of a network to represent its complex structure. However, social networks evolve over time, and this evolution has a great deal of influence; e.g., it affects the rate of information diffusion, the ability to acquire and use information, and the quality and accuracy of organizational decisions.
The dynamics of social networks have attracted many researchers' attention recently. Given a snapshot of a social network, [19] tries to infer which new interactions among its members are likely to occur in the near future. In [20], Kubica et al. track changes in large-scale data by periodically creating an agglomerative clustering and examining the evolution of clusters over time. Among the known dynamic social network models in the literature, Snijders' dynamic actor-oriented social network [18] is one of the most successful. Changes in the network are modeled as the stochastic result of network effects (density, reciprocity, etc.), and evolution is modeled by continuous-time Markov chains, whose parameters are estimated by Markov chain Monte Carlo procedures.
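To make Eq. (1) concrete, the following is a minimal, self-contained sketch that evaluates the ERGM probability of a small directed graph by brute-force normalization. The choice of statistics (edge count and reciprocity) and the coefficient values are illustrative only; real ERGM estimation relies on MCMC-based maximum likelihood rather than enumeration.

```python
# Minimal illustration of the exponential random graph model in Eq. (1):
#   P(Y = y) = exp(theta^T s(y)) / c(theta)
# The statistics and coefficients below are toy choices for a 3-node graph.
import itertools
import math

NODES = [0, 1, 2]
DYADS = [(i, j) for i in NODES for j in NODES if i != j]  # ordered pairs

def stats(edges):
    """s(y): number of directed edges and number of reciprocated pairs."""
    edge_count = len(edges)
    reciprocity = sum(1 for (i, j) in edges if (j, i) in edges) // 2
    return (edge_count, reciprocity)

def ergm_probability(y, theta):
    """Brute-force P(Y = y) over all 2^6 directed graphs on three nodes."""
    weight = lambda e: math.exp(sum(t * s for t, s in zip(theta, stats(e))))
    c = 0.0
    for bits in itertools.product([0, 1], repeat=len(DYADS)):
        edges = {d for d, b in zip(DYADS, bits) if b}
        c += weight(edges)
    return weight(set(y)) / c

theta = (-1.0, 2.0)                    # favor sparse but reciprocated ties
observed = {(0, 1), (1, 0), (1, 2)}    # a toy "observed" graph y_obs
print(ergm_probability(observed, theta))
```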
In [21], Handcock et al. proposed a curved ERGM model and applied it to new specifications of ERGMs. This latest model uses nonlinear parameters to represent structural properties of networks. The above-mentioned dynamic analyses show some success in analyzing longitudinal stream data. However, most of them rely on pure network properties alone, without knowing what people are talking about or why they have close relationships.
2.2 Content Analysis
In statistical natural language processing, one common way of modeling the contributions of different topics to a document is to treat each topic as a probability distribution over words, and to view a document as a probabilistic mixture over these topics. Given T topics, the probability of the ith word in a given document is formalized as:

P(w_i) = \sum_{j=1}^{T} P(w_i \mid z_i = j) \, P(z_i = j)    (2)

where z_i is a latent variable indicating the topic from which the ith word was drawn, and P(w_i | z_i = j) is the probability of the word w_i under the jth topic. P(z_i = j) gives the probability of choosing a word from topic j in the current document, which varies across different documents.
Hofmann [22] introduced the aspect model Probabilistic Latent Semantic Analysis (PLSA), in which topics are modeled as multinomial distributions over words, and documents are assumed to be generated by the activation of multiple topics. Blei et al. [23] proposed Latent Dirichlet Allocation (LDA) to address the problems of PLSA, namely that its parameterization was susceptible to overfitting and did not provide a straightforward way to make inferences about new documents. In LDA, a distribution over topics is sampled from a Dirichlet distribution for each document, and each word is sampled from a multinomial distribution over words specific to the sampled topic. Following the notation in [24], for D documents containing T topics expressed over W unique words, we can represent P(w|z) with a set of T multinomial distributions \phi over the W words, such that P(w | z = j) = \phi_j^{(w)}, and P(z) with a set of D multinomial distributions \theta over the T topics, such that for a word in document d, P(z = j) = \theta_j^{(d)}.
Recently, the Author-Topic (AT) model [25] extended LDA to include authorship information, trying to recognize which part of a document is contributed by which co-author. In recent unpublished work, McCallum et al. [26] further extended the AT model to the Author-Recipient-Topic model by regarding the sender-receiver pair as an additional author variable for topic classification. Their goal is role discovery, which is similar to one of our goals as discussed in Sec. 4.1.2, but without taking the temporal nature of emails into consideration.
In LDA, \phi and \theta are parameters that need to be estimated by sophisticated approximation, either with variational Bayes or with expectation propagation.
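For concreteness, Eq. (2) with the \phi/\theta notation above amounts to a simple matrix product between per-topic word distributions and a document's topic mixture. The toy vocabulary, topic count, and all numbers in the following sketch are invented for illustration.

```python
# Toy illustration of Eq. (2): P(w_i) = sum_j P(w_i | z_i = j) P(z_i = j),
# i.e. a document-specific mixture of per-topic word distributions.
import numpy as np

vocab = ["meeting", "agenda", "stock", "price"]
# phi[j, w] = P(w | z = j): one multinomial over words per topic.
phi = np.array([[0.50, 0.40, 0.05, 0.05],    # topic 0: "meetings"
                [0.05, 0.05, 0.50, 0.40]])   # topic 1: "stock market"
# theta[d, j] = P(z = j) for document d: its mixture over topics.
theta = np.array([[0.9, 0.1],                # document 0: mostly topic 0
                  [0.2, 0.8]])               # document 1: mostly topic 1

# Eq. (2) for every document and word at once: theta @ phi.
p_w_given_d = theta @ phi
for d, row in enumerate(p_w_given_d):
    print(f"doc {d}:", dict(zip(vocab, row.round(3))))
```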
To solve this problem, Griffiths and Steyvers [24] extended LDA by considering the posterior distribution over the assignments of words to topics and showed how Gibbs sampling can be applied to build such models. Specifically,

P(z_i = j \mid z_{-i}, w) \;\propto\; \frac{n_{-i,j}^{(w_i)} + \beta}{n_{-i,j}^{(\cdot)} + W\beta} \cdot \frac{n_{-i,j}^{(d_i)} + \alpha}{n_{-i,\cdot}^{(d_i)} + T\alpha}    (3)

where n_{-i} denotes a count that does not include the current assignment z_i, n_j^{(w)} is the number of times word w has been assigned to topic j in the vector of assignments z, n_j^{(d)} is the number of times a word from document d has been assigned to topic j, n_j^{(\cdot)} is the sum of n_j^{(w)} over all words, and n_{\cdot}^{(d)} is the sum of n_j^{(d)} over all topics. Further, one can estimate \phi_j^{(w)}, the probability of using word w in topic j, and \theta_j^{(d)}, the probability of topic j in document d, as follows:

\hat{\phi}_j^{(w)} = \frac{n_j^{(w)} + \beta}{n_j^{(\cdot)} + W\beta}    (4)

\hat{\theta}_j^{(d)} = \frac{n_j^{(d)} + \alpha}{n_{\cdot}^{(d)} + T\alpha}    (5)

In [24], experiments show that topics can be recovered by this algorithm and that they reveal meaningful aspects of the structure and relationships between scientific papers.
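The update in Eq. (3) and the point estimates in Eqs. (4)-(5) can be written down compactly. The following is a minimal collapsed Gibbs sampler over a toy corpus, intended only as a sketch of the procedure in [24]; the corpus, hyperparameters, and iteration count are arbitrary, and no optimizations are attempted.

```python
# Minimal collapsed Gibbs sampler for LDA, following Eqs. (3)-(5).
import numpy as np

docs = [["meeting", "agenda", "meeting", "room"],
        ["stock", "price", "market", "price"],
        ["meeting", "stock", "agenda", "market"]]
vocab = sorted({w for d in docs for w in d})
w_id = {w: i for i, w in enumerate(vocab)}

T, W, D = 2, len(vocab), len(docs)
alpha, beta = 0.5, 0.1
rng = np.random.default_rng(0)

# Count tables: n_wt[w, j], n_dt[d, j], and the per-topic totals n_t[j].
n_wt = np.zeros((W, T)); n_dt = np.zeros((D, T)); n_t = np.zeros(T)
z = []                                  # topic assignment for every token
for d, doc in enumerate(docs):
    z.append([])
    for w in doc:
        j = rng.integers(T)             # random initialization
        z[d].append(j)
        n_wt[w_id[w], j] += 1; n_dt[d, j] += 1; n_t[j] += 1

for _ in range(200):                    # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            wi, j = w_id[w], z[d][i]
            # Remove the current assignment (the "-i" counts in Eq. (3)).
            n_wt[wi, j] -= 1; n_dt[d, j] -= 1; n_t[j] -= 1
            # Eq. (3); the second factor's denominator is constant in j,
            # so it can be dropped before normalizing.
            p = (n_wt[wi] + beta) / (n_t + W * beta) * (n_dt[d] + alpha)
            p /= p.sum()
            j = rng.choice(T, p=p)
            z[d][i] = j
            n_wt[wi, j] += 1; n_dt[d, j] += 1; n_t[j] += 1

phi = (n_wt + beta) / (n_t + W * beta)                              # Eq. (4)
theta = (n_dt + alpha) / (n_dt.sum(1, keepdims=True) + T * alpha)   # Eq. (5)
print(np.round(phi.T, 2))
print(np.round(theta, 2))
```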
Contextual, relational, and temporal information are three key factors for current data mining and knowledge management models. However, few papers address these three components simultaneously. In our recent paper [27], we built user models that explicitly describe a person's expertise by a relational and evolutionary graph representation called ExpertiseNet. In this paper, we continue exploring this thread and build a CommunityNet model which incorporates these three components together for data mining and knowledge management.
COMMUNITYNET
In this section, we first define terminologies. Then, we propose a Content-Time-Relation (CTR) algorithm to build the personal CommunityNet. We also specifically address the prediction of a user's behaviors as a classification problem and solve it based on the CommunityNet models.
3.1 Terminology
Definition 1. Topic-Community: A topic-community is a group of people who participate in one specific topic.
Definition 2. Personal Topic-Community Network (PTCN): A personal topic-community network is a group of people directly connected to one person about a specific topic.
Definition 3. Evolutionary Personal Social Network: An evolutionary personal social network illustrates how a personal social network changes over time.
Definition 4. Evolutionary Personal Topic-Community Network: An evolutionary personal topic-community network illustrates how a person's personal topic-community network changes over time.
Definition 5. Personal Social Network Information Flow: A personal social network information flow illustrates how information flows over a person's personal social network to other people's personal social networks.
Definition 6. Personal Topic-Community Information Flow: A personal topic-community information flow illustrates how information about one topic flows over a person's personal social network to other people's personal social networks.
3.2 Personal Social Network
We build people's personal social networks by collecting their communication records. The nodes of a network represent the people this person contacts, and the weights of the links measure the probabilities of the emails he or she sends to those people. A basic form of the probability that a user u sends email to a recipient r is:

P(r \mid u) = \frac{\text{number of times } u \text{ sends emails to } r}{\text{total number of emails sent out by } u}    (6)

We build evolutionary personal social networks to explore these dynamics. The ERGM in Eq. (1) can be used in place of Eq. (6) for probabilistic graph modeling. A big challenge in automatically building an evolutionary personal social network is evolutionary segmentation, which is to detect changes between cohesive sections of the personal social network. Here we apply the same algorithm we proposed in [27]. For each personal social network in one time period t, we use the exponential random graph model [17] to estimate an underlying distribution that describes the social network. An ERGM is estimated from the data in each temporal sliding window. With these operations, we obtain a series of parameters which indicate the graph configurations.
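A minimal sketch of the personal-social-network estimate in Eq. (6) follows, computed separately inside successive time windows to obtain the evolutionary PSN of this subsection. The record format (sender, recipient, "YYYY-MM") and the half-year window size are assumptions made for illustration.

```python
# Sketch of Eq. (6): P(r | u) = (#emails u sent to r) / (#emails u sent),
# estimated per time window for the evolutionary personal social network.
from collections import Counter, defaultdict

records = [("john", "kate", "2000-01"), ("john", "mike", "2000-01"),
           ("john", "kate", "2000-02"), ("john", "kate", "2000-07"),
           ("john", "sara", "2000-08"), ("kate", "john", "2000-07")]

def half_year(month_str):
    year, month = map(int, month_str.split("-"))
    return f"{year}-H{1 if month <= 6 else 2}"

def evolutionary_psn(records, user):
    """Return {window: {recipient: P(r | u)}} for the given sender."""
    counts = defaultdict(Counter)
    for sender, recipient, month in records:
        if sender == user:
            counts[half_year(month)][recipient] += 1
    return {w: {r: n / sum(c.values()) for r, n in c.items()}
            for w, c in counts.items()}

print(evolutionary_psn(records, "john"))
# ≈ {'2000-H1': {'kate': 0.67, 'mike': 0.33}, '2000-H2': {'kate': 0.5, 'sara': 0.5}}
```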
3.3 Content-Time-Relation Algorithm
We begin with email content, sender and receiver information, and time stamps, and use these sources of knowledge to create a joint probabilistic model. An observation (u, r, d, w, t) corresponds to an event of a user u sending to receivers r an email d containing words w during a particular time period t. Conceptually, users choose latent topics z, which in turn generate receivers r, documents d, and their content words w during time period t:

P(u, r \mid d, t) = \sum_{z} P(u, r \mid z, t) \, P(z \mid d, t)    (7)

where (u, r) is a sender-receiver pair during time period t. The pair (u, r) can be replaced by any variable indicating the user's behavior, as long as that variable is also assumed to depend on the latent topics of the emails.
In order to model the PTCN, one challenge is how to detect latent topics dynamically and, at the same time, track the emails related to old topics. This is a problem similar to topic detection and tracking [28]. We propose an incremental LDA (ILDA) algorithm to solve it, in which the number of topics is dynamically updated based on the Bayesian model selection principle [24]. The procedure is as follows:
Incremental Latent Dirichlet Allocation (ILDA) algorithm:
Input: Email streams with timestamps t
Output: \phi_{j,t}^{(w)} and \theta_{j,t}^{(d)} for each time period t
Steps:
1) Apply LDA on the currently observed emails in a time period t_0 to generate latent topics z_j, and estimate P(w | z_j, t_0) = \phi_{j,t_0}^{(w)} and P(z_j | d, t_0) = \theta_{j,t_0}^{(d)} by equations (4) and (5). The number of topics is determined by the Bayesian model selection principle.
2) When new emails arrive during time period t_k, use the Bayesian model selection principle to determine the number of topics and apply

P(z_i = j \mid z_{-i}, w, t_k) \;\propto\; P(w_i \mid z_i = j, t_{k-1}) \cdot \frac{n_{-i,j}^{(d_i)} + \alpha}{n_{-i,\cdot}^{(d_i)} + T\alpha}

to estimate P(z | d, t_k), P(w | z, t_k), and P(z | w, t_k).
3) Repeat step 2) until no more data arrive.
Based on this ILDA algorithm, we propose a Content-Time-Relation (CTR) algorithm. It consists of two phases, the training phase and the testing phase. In the training phase, emails as well as the senders, receivers, and time stamps are available. P(w | z, t_old) and P(u, r | z, t_old) are learnt from the observed data. In the testing phase, we apply ILDA to learn P(z | d, t_new). Based on P(u, r | z, t_old), which is learnt from the training phase, (u, r) can be inferred. Again, (u, r) represents a sender-receiver pair or any variable indicating the user's behavior, as long as it is dependent on the latent topics of the emails.
Content-Time-Relation (CTR) algorithm:
1) Training phase
Input: Old emails with content, sender and receiver information, and time stamps t_old
Output: P(w | z, t_old), P(z | d, t_old), and P(u, r | z, t_old)
Steps:
a) Apply Gibbs sampling on the data according to equation (3).
b) Estimate P(w | z_j, t_old) = \phi_{j,t_old}^{(w)} and P(z_j | d, t_old) = \theta_{j,t_old}^{(d)} by equations (4) and (5).
c) Estimate

P(u, r \mid z, t_{old}) = \sum_{d} P(u, r \mid d, t_{old}) \, P(d \mid z, t_{old}) \;\propto\; \sum_{d} P(u, r \mid d, t_{old}) \, P(z \mid d, t_{old})    (8)

2) Testing phase
Input: New emails with content and time stamps t_new
Output: P(u, r | d, t_new), P(w | z, t_new), and P(z | d, t_new)
Steps:
a) Apply incremental LDA by Gibbs sampling based on

P(z_i = j \mid z_{-i}, w, t_{new}) \;\propto\; P(w_i \mid z_i = j, t_{old}) \cdot \frac{n_{-i,j}^{(d_i)} + \alpha}{n_{-i,\cdot}^{(d_i)} + T\alpha}

to estimate P(w | z, t_new) and P(z | d, t_new) by equations (4) and (5).
b) If the topics are within the training set, estimate \hat{P}(u, r \mid d, t_{new}) = \sum_{z} P(u, r \mid z, t_{old}) \, P(z \mid d, t_{new}); otherwise, if the sender and receivers are within the training set, estimate \hat{P}(u, r \mid d, t_{new}) by the topic-independent social network P(u, r \mid t_{old}).
c) If new topics are detected, update the model by incorporating the new topics.
Inference, filtering, and prediction can be conducted based on this model. For the CTR algorithm, the sender variable u or the receiver variable r is fixed. For instance, we may be interested in P(r | u, d, t), which answers the question of whom we should send the message d to during time period t. The answer is

\hat{r} = \arg\max_{r} P(r \mid u, d, t) = \arg\max_{r} \Big[ \sum_{z_{t_{old}}} P(r \mid u, z_{t_{old}}, t_{old}) \, P(z_{t_{old}} \mid u, d, t_{new}) + \sum_{z_{t_{new}} \setminus z_{t_{old}}} P(r \mid u, t_{old}) \, P(z_{t_{new}} \mid u, d, t_{new}) \Big]    (9)

where z_{t_{new}} \setminus z_{t_{old}} represents the new topics emerging during time period t. Another question is, if we receive an email, who is the likely sender?

\hat{u} = \arg\max_{u} P(u \mid r, d, t) = \arg\max_{u} \Big[ \sum_{z_{t_{old}}} P(u \mid r, z_{t_{old}}, t_{old}) \, P(z_{t_{old}} \mid r, d, t_{new}) + \sum_{z_{t_{new}} \setminus z_{t_{old}}} P(u \mid r, t_{old}) \, P(z_{t_{new}} \mid r, d, t_{new}) \Big]    (10)

Eq. (9) and Eq. (10) integrate the PSN, content, and temporal analysis. Social network models such as the ERGM in Eq. (1) or the model in Sec. 3.2 can be applied to the P(u, r | d, t) terms. A small sketch of how Eq. (9) is evaluated in practice is given below. Figure 2 illustrates the CTR model and compares it to the LDA, AT, and ART models.
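The following minimal sketch shows how the ranking rule in Eq. (9) can be evaluated once the training-phase tables are available. The probability tables are toy values, and mass assigned to topics unseen at training time simply falls back to the topic-independent PSN term, as in the second sum of Eq. (9).

```python
# Sketch of the receiver-ranking rule in Eq. (9): combine the training-phase
# table P(r | u, z, t_old) with the inferred topic mixture of the new email,
# and fall back to the topic-independent PSN P(r | u, t_old) for new topics.
def rank_receivers(p_r_given_uz, p_r_given_u, p_z_given_d, old_topics):
    scores = {}
    for r in p_r_given_u:
        s = 0.0
        for z, p_z in p_z_given_d.items():
            if z in old_topics:
                s += p_r_given_uz.get(z, {}).get(r, 0.0) * p_z
            else:                           # new topic: second sum of Eq. (9)
                s += p_r_given_u[r] * p_z
        scores[r] = s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy tables for one sender u.
p_r_given_uz = {"z_meeting": {"kate": 0.7, "mike": 0.3},
                "z_stock":   {"kate": 0.1, "mike": 0.9}}
p_r_given_u  = {"kate": 0.6, "mike": 0.4}          # Eq. (6), all topics pooled
p_z_given_d  = {"z_stock": 0.8, "z_new": 0.2}      # inferred for the new email

print(rank_receivers(p_r_given_uz, p_r_given_u, p_z_given_d,
                     old_topics={"z_meeting", "z_stock"}))
# ≈ [('mike', 0.80), ('kate', 0.20)]
```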
In CTR, the observed variables not\nonly include the words w in an email but also the sender u and the\ntimestamp on each email d.\n3.4 Predictive Algorithms\nFor the sake of easier evaluation, we focus on prediction\nschemes in details. Specifically, we address the problem of\npredicting receivers and senders of emails as a classification\nproblem, in which we train classifiers to predict the senders or\nreceivers and other behavior patterns given the observed people's\ncommunication records. The trained classifier represents a\nfunction in the form of:\n\n:\n(\n, )\nf Comm t i t\nY\n\n(11)\nwhere\n(\n)\n,\nComm t i t\nis\nthe observed communication record\nduring the interval from time t-i to t, Y is a set of receivers or\nsenders or other user behavior patterns to be discriminated, and\nthe value of\n(\n)\n(\n)\n,\nf Comm t i t\nis\nthe classifier prediction\nregarding which user behavior patterns gave rise to the observed\ncommunication records. The classifier is trained by providing the\nhistory of the communication records with known user behaviors.\n3.4.1 Using Personal Social Network Model\nWe aggregate all the communication records in the history of\na given user, and build his/her personal social network. We\nchoose those people with the highest communication frequency\nwith this person as the prediction result.\n3.4.2 Using LDA combined with PSN Model\nWe use the LDA model and combine it with PSN to do the\nprediction, which is referred as LDA-PSN in the paper. Latent\ntopics are detected by applying original LDA on the training set\nand LDA is used for inference in testing data without\nincorporating new topics when time passes by. The possible\nsenders and receivers when new emails arrive,\n(\n)\n, | ,\nnew\nP u r d t\nis\nestimated as\n(\n)\n(\n)\n(\n)\n^\n, | ,\n, | ,\n| ,\nnew\nold\nnew\nz\nP u r d t\nP u r z t\nP z d t\n=\n\n.\nPeople are ranked by this probability as the prediction results.\n3.4.3 Using CTR Model\nPeople tend to send emails to different group of people under\ndifferent topics during different time periods. This is the\nassumption we made for our predictive model based on CTR.\nLDA\nAT\nART\nu\nu\nLDA\nAT\nART\nu\nu\nLDA\nAT\nART\nu\nu\n\n: observations\n\nA\nN\nD\n\nT\nu\nz\nw\nr\nCTR\nS\n\nTm\nt\n\n\n\n: observations\n\nA\nN\nD\n\nT\nu\nz\nw\nr\nCTR\nS\n\nTm\nt\n\n\n\n\nFigure 2. The graphical model for the CTR model comparing to LDA, AT and ART models, where u: sender, t: time, r: receivers, w:\nwords, z: latent topics, S: social network, D: number of emails, N: number of words in one email, T: number of topics, Tm: size of the\ntime sliding window, A: number of authors, ,\n\nand\n\nare the parameters we want to estimate with the hyperparameters\n, ,\n\n483\nIndustry/Government Track Paper\n(\n)\n, | ,\nnew\nP u r d t\nis estimated by applying the CTR model\ndiscussed in section 3.3. The prediction results are people with\nhighest scores calculated by equation (9) and (10).\n3.4.4 Using an Adaptive CTR Model\nBoth the personal social network and the CTR model ignore\na key piece of information from communication records -- the\ndynamical nature of emails. Both personal social network and\nTopic-Community dynamically change and evolve. Only based on\nthe training data which are collected in history will not get the\noptimal performance for the prediction task. Adaptive prediction\nby updating the model with newest user behavior information is\nnecessary. 
We apply several strategies for the adaptive prediction.\nThe first strategy is aggregative updating the model by adding\nnew user behavior information including the senders and receivers\ninto the model. Then the model becomes:\n(\n)\n(\n)\n(\n)\n(\n)\n(\n)\n1\n/\n^ , | ,\n, | ,\n| ,\n, |\n| ,\ni\nt\nt\ni\nold\nK\ni\nk old\nk\ni\nold\nt\ni\nk\nz z\nP u r d t\nP u r z t P z d t\nP u r t P z d t\n=\n=\n+\n\n\n(12)\nwhere K is the number of old topics. Here, we always use the data\nfrom\nold\nt\n, including\n0\nt to\n1\ni\nt\nto\npredict the user behavior\nduring\ni\nt .\nIn the second strategy, we assume the correlation between\ncurrent data and the previous data decays over time. The more\nrecent data are more important. Thus, a sliding window of size n\nis used to choose the data for building the prediction model, in\nwhich the prediction is only dependent on the recent data, with the\ninfluence of old data ignored. Here in equation (12),\nold\nt consists\nof\ni n\nt\nto\n1\ni\nt\n\n.\n3.5 CommunityNet Model\nWe then build a CommunityNet model based on the CTR\nalgorithm. The CommunityNet model, which refers to the personal\nTopic-Community Network, draws upon the strengths of the topic\nmodel and the social network as well as the dynamic model, using\na topic-based representation to model the content of the\ndocument, the interests of the users, the correlation of the users\nand the receivers and all these relationship changing over time.\nFor prediction, CommunityNet incorporates the adaptive CTR\nmodel as described in Section 3.4.4.\nCOMMUNITY ANALYSIS\nThe first part of our analysis focuses on identifying clusters\nof topics, and the senders and receivers who participated in those\ntopics. First, we analyze the topics detected from the Enron\nCorpus. Then, we study the topic-community patterns.\n4.1 Topic Analysis\nIn the experiment, we applied Bayesian model selection [24]\nto choose the number of topics. In the Enron intra-organization\nemails, there are 26,178\n\nword-terms involved after we apply stop-words\nremoval and stemming, We computed\n(\n)\n|\nP w T\nfor T values\nof 30, 50, 70, 100, 110, 150 topics and chose T = 100 with the\nmaximum value of\n(\n)\n(\n)\nlog\n|\nP w T\nfor the experiment.\n4.1.1 Topic Distribution\nAfter topic clustering based on words, for each document, we\nhave P(z|d), which indicates how likely each document belongs to\neach topic. By summing up this probability for all the documents,\nwe get the topic distribution of how likely each topic occurs in\nthis corpus. We define this summed likelihood as \"Popularity\" of\nthe topic in the dataset. From this topic distribution, we can see\nthat some topics are hot - people frequently communicate with\neach other about them, while some others are cold, with only few\nemails related to them. Table 1 illustrates the top 5 topics in Enron\ncorpus. We can see that most of them are talking about regular\nissues in the company like meeting, deal, and document. Table 2\nillustrates the bottom 5 topics in Enron corpus. Most of them are\nspecific and sensitive topics, like \"Stock\" or \"Market\". People\nmay feel less comfortable to talk about them broadly.\nTable 1. Hot Topics\nmeeting deal Petroleum\nTexas\ndocument\nmeeting\nplan\nconference\nbalance\npresentation\ndiscussion\ndeal\ndesk\nbook\nbill\ngroup\nexplore\nPetroleum\nresearch\ndear\nphoto\nEnron\nstation\nHouston\nTexas\nEnron\nnorth\nAmerica\nstreet\nletter\ndraft\nattach\ncomment\nreview\nmark\nTable 2. 
Cold Topics\nTrade stock network\nProject\nMarket\ntrade\nLondon\nbank\nname\nMexico\nconserve\nStock\nearn\ncompany\nshare\nprice\nnew\nnetwork\nworld\nuser\nsave\nsecure\nsystem\nCourt\nstate\nIndia\nserver\nproject\ngovern\ncall\nmarket\nweek\ntrade\ndescription\nrespond\n\n4.1.2 Topic Trend Analysis\nTo sense the trend of the topics over time, we calculate the\ntopic popularity for year 2000 and 2001, and calculate the\ncorrelation coefficients of these two series. For some topics, the\ntrends over years are similar. Figure 3(a) illustrates the trends for\ntwo topics which have largest correlation coefficients between\ntwo years. Topic 45, which is talking about a schedule issue,\nreaches a peak during June to September. For topic 19, it is\ntalking about a meeting issue. The trend repeats year to year.\nFigure 3(b) illustrates the trend of Topic \"California Power\"\nover 2000 to 2001. We can see that it reaches a peak from the end\nof year 2000 to the beginning of year 2001. From the timeline of\nEnron [29], we found that \"California Energy Crisis\" occurred at\nexactly this time period. Among the key people related to this\ntopic, Jeff Dasovich was an Enron government relations\nexecutive. His boss, James Steffes was Vice President of\nGovernment Affairs. Richard Schapiro was Vice President of\nRegulatory Affairs. Richard Sanders was Vice President and\nAssistant General Counsel. Steven Kean was Executive Vice\nPresident and Chief of Staff. Vincent Kaminski was a Ph.D.\neconomist and Head of Research for Enron Corp. Mary Han was\na lawyer at Enron's West Coast trading hub. From the timeline,\nwe found all these people except Vince were very active in this\nevent. We will further analyze their roles in Section 5.\n484\nIndustry/Government Track Paper\nTopic Trend Comparison\n0\n0.005\n0.01\n0.015\n0.02\n0.025\n0.03\nJan\nMar\nMay\nJul\nSep\nNov\nPo\np\nu\nl\na\nr\ni\nt\ny\nTopic45(y2000)\nTopic45(y2001)\nTopic19(y2000)\nTopic19(y2001)\n\n(a) Trends of two yearly repeating events.\nTopic Analysis for Topic 61\n0\n0.002\n0.004\n0.006\n0.008\n0.01\n0.012\n0.014\n0.016\n0.018\nJan-00 Apr-00 Jul-00\nOct-00 Jan-01 Apr-01 Jul-01 Oct-01\nP\nopul\nar\ni\nt\ny\n\nKeywords\nwith\n(\n)\n|\nP w z\n\npower 0.089361 California 0.088160 electrical\n0.087345 price 0.055940 energy 0.048817 generator\n0.035345 market 0.033314 until 0.030681\n\nKey people\nwith\n( )\n|\nP u z\n\n\nJeff_Dasovich 0.249863 James_Steffes 0.139212\nRichard_Shapiro 0.096179 Mary_Hain 0.078131\nRichard_Sanders 0.052866 Steven_Kean 0.044745\nVince_Kaminski 0.035953\n\n(b) The trend of \"California Power\" and most related keywords\nand people.\nFigure 3. Topic trends\n\n4.2 Predicting Community Patterns\nWe assume that, people communicate with certain people\nonly under certain few topics. People in the same community\nunder a topic would share the information. Thus, if there is\nsomething new about one topic, people in that topic-community\nwill most likely get the information and propagate it to others in\nthe community. Finally, many people in the community will get\nthe information.\nTo evaluate our assumption and answer the question of who\nwill be possibly involved in an observed email, we collect the\nground truth about who are the senders and receivers for the\nemails and use the CTR algorithm to\ninfer\n(\n)\n, | ,\nj\nnew\nP u r z t\nby\n(\n)\n, | ,\nj\nold\nP u r z t\n. We partitioned the data into\ntraining set and testing set. We tried two strategies for this\nexperiment. 
First is to randomly partition the data into a training\nset with 8465 messages and a testing set with 8408 messages.\nPrediction accuracy is calculated by comparing the inference\nresults and the ground truth (i.e., receiver-sender pair of that\nemail). We found that 96.8446% people stick in the old topics\nthey are familiar with. The second strategy is to partition data by\ntime: emails before 1/31/2000 as the training data (8011) and\nafter that as the testing data (8862). We found 89.2757% of the\npeople keep their old topics. Both results are quite promising. It is\nfound that people really stick in old topics they are familiar with.\nINDIVIDUAL ANALYSIS\nIn this section, we evaluate the performance of\nCommunityNet. First, we show how people's roles in an event can\nbe inferred by CommunityNet. Then, we show the predicting\ncapability of the proposed model in experiments.\n5.1 Role Discovery\nPeople with specific roles at company hierarchy behave\nspecifically on specific topics. Here we show it is possible to\ninfer people's roles by using CommunityNet.\nIn Section 4.1.2, we show there are some key people\ninvolved in \"California Energy Crisis\". In reality, Dasovich,\nSteffes, Schapiro, Sanders, and Kean, were in charge of\ngovernment affairs. Their roles were to \"solve the problem\". Mary\nHain was a lawyer during the worst of the crisis and attended\nmeetings with key insiders. We calculated the correlation\ncoefficients of the trends of these people and the overall trend of\nthis topic. Jeff Dasovich got 0.7965, James Steffes got 0.6501,\nMary Hain got 0.5994, Richard Shapiro got 0.5604, Steven Kean\ngot 0.3585 (all among the 10 highest correlation scores among\n154 people), and Richard Sanders got 0.2745 (ranked 19), while\nVince Kaminski had correlation coefficient of -0.4617 (Figure 4).\nWe can see that all the key people except Vince Kaminski have\nstrong correlation with the overall trend of \"California Energy\nCrisis\". From their positions, we can see that all of them were sort\nof politicians while Vince Kaminski is a researcher. Thus, it is\nclear to see the difference of their roles in this topic.\n0\n0.1\n0.2\n0.3\n0.4\n0.5\nJan-00 May-00 Sep-00 Jan-01 May-01 Sep-01\nPo\np\nu\nl\na\nr\ni\nt\ny\nOverall trend\nJeff_Dasovich\nVince_Kaminski\nFigure 4. Personal topic trend comparison on \"California Power\"\n5.2 Predicting Receivers\nHere we want to address the problem of whether it is\npossible to infer who will possibly be the receivers by a person's\nown historic communication records and the content of the email-to\n-send. One possible application is to help people organize\npersonal social capital. For instance, if a user has some\ninformation to send or a question to ask, CommunityNet can\nrecommend the right persons to send the info or get the answer.\nWe conduct experiments by partitioning the dataset into a\ntraining set with the emails from 1999 to 2000, and a testing set\nwith the emails from 2001 to 2002. The testing set is further\npartitioned into sub-sets with emails from one month as a subset.\nWith this, we have 15 testing sets. (We exclude the emails after\nMarch 2002 because the total number of emails after that is only\n78.) One issue we want to mention is that the number of people\nfrom 1999 to 2000 is 138, while from 2001 to 2002 is 154. 
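The receiver-prediction experiments in this subsection reduce to a per-email, top-k hit test against the real recipients of each test email. Below is a minimal sketch of that measurement; predict_receivers is a hypothetical stand-in for any of the models compared in Figure 5, not code from the paper.

```python
# Sketch of the top-k accuracy measurement used for receiver prediction:
# a test email counts as a hit if one of its real receivers appears among
# the top-k candidates returned by the model.
def top_k_accuracy(test_emails, predict_receivers, k=5):
    hits = 0
    for email in test_emails:
        ranked = predict_receivers(email["sender"], email["content"],
                                   email["month"])
        if set(ranked[:k]) & set(email["receivers"]):
            hits += 1
    return hits / len(test_emails)

# Example with a trivial predictor that always proposes the same people.
dummy = lambda sender, content, month: ["kate", "mike", "sara"]
emails = [{"sender": "john", "content": "budget meeting", "month": "2001-01",
           "receivers": ["mike"]},
          {"sender": "john", "content": "ski trip", "month": "2001-02",
           "receivers": ["ann"]}]
print(top_k_accuracy(emails, dummy, k=5))   # 0.5: one of two emails is a hit
```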
In this\nstudy, we test each email in the training set by using its content,\n485\nIndustry/Government Track Paper\nsender, and time as prior information to predict the receiver,\nwhich is compared to the real receiver of that email.\nIn Figure 5, we illustrate the prediction performance by\ncomparing the CTR algorithm, PSN, and the aggregated LDA-PSN\nmodel. The result shows that CTR beats PSN by 10% on\naccuracy. The aggregated LDA-PSN model performs even worse\nthan PSN, because of the inaccurate clustering results. The\nperformance gain is 21%. Moreover, intuitively, personal contacts\nevolve over time. Models built at a specific time should have\ndecreasing predicting capability over time. In this figure, we\nobtain strong evidence of this hypothesis by observing that the\nperformance of these models monotonically decays. This also\nimplies our models well match the practice.\n0\n0.2\n0.4\n0.6\n0.8\n1\nJan-01\nApr-01\nJul-01\nOct-01\nJan-02\nA\nc\ncu\nr\na\ncy\nby PSN\nby CTR\nby LDA-PSN\n(a) Accuracy based on the top 5 most likely people\n0\n0.2\n0.4\n0.6\n0.8\n1\nJan-01\nApr-01\nJul-01\nOct-01\nJan-02\nA\nc\ncu\nr\na\ncy\nby PSN\nby CTR\nby LDA-PSN\n(b) Accuracy based on the top 10 most likely people\nFigure 5. Prediction Accuracy comparisons. Accuracy is\nmeasured by testing whether the \"real\" receiver is among the\nprediction list of the top 5 or 10 most likely people\n5.3 Inferring Senders\nWe test whether it is possible to infer who will possibly be\nthe senders given a person's CommunityNet and the content of the\nemail. One possible application is to exclude spam emails or\ndetect identification forgery. Figure 6 illustrates the prediction\nresult, which also shows the prediction accuracy decays over time.\n0\n0.2\n0.4\n0.6\n0.8\n1\nJan-01 Mar-01 May-01 Jul-01 Sep-01 Nov-01\nA\nccu\nr\na\nc\ny\ntop5\ntop3\nFigure 6. Predicting senders given receiver and content\n5.4 Adaptive Prediction\nWe observed the prediction performance decays over time\nfrom the results of 5.2 and 5.3, which reflects the changes of the\nnature of email streams. Here we apply adaptive prediction\nalgorithms we mentioned in 3.4.3, in which we incrementally and\nadaptively estimate statistical parameters of the model by\ngradually forgetting out-of-state statistics.\n0\n0.2\n0.4\n0.6\n0.8\n1\nJan-01 Mar-01 May-01 Jul-01 Sep-01 Nov-01\nAc\nc\nu\nr\na\nc\ny\nAdaptive CT R(T op 5)\nAdaptive CT R(T op 10)\nCT R(T op5)\nCT R(T op10)\n\n(a). Comparison between Adaptive CTR and CTR models\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\nJan-01 Mar-01 May-01 Jul-01\nSep-01 Nov-01\nAdaptive CTR(aggregative)\nAdaptive CTR(6 months)\nCTR\nLDA-PSN\nPSN\n(b) Comparison of algorithms using Breese evaluation metrics\nFigure 7. Performance evaluation for adaptive prediction\nalgorithm and overall comparison\nFigure 7 (a) illustrates the performance of the Adaptive CTR\nalgorithm and compares it to the CTR algorithm.\n\nFor the data far\naway from the training data,\n\nthe improvement is more than 30%.\nAnd, if we compare it to the PSN and LDA-PSN algorithms, the\nperformance gains are 58% and 75%, respectively. Evaluation by\nthis accuracy metric tells us how related the top people ranked in\nthe prediction results are. To understand the overall performance\nof the ranked prediction results, we apply the evaluation metric\nproposed by Breese [30], and illustrate the overall comparison in\nFigure 7(b). 
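One common reading of the Breese et al. [30] score is a half-life expected utility over the ranked candidate lists. The sketch below is our own illustration under that reading; the half-life parameter and the toy data are assumptions, not values from the experiments.

```python
# Rough sketch of a Breese-style expected-utility score for ranked
# predictions: a hit at rank k contributes 1 / 2^((k-1)/(h-1)), where h is
# the "half-life" rank, normalized so that a perfect ranking scores 100.
def breese_utility(ranked_lists, true_receivers, half_life=5):
    score, best = 0.0, 0.0
    for ranking, truth in zip(ranked_lists, true_receivers):
        for k, candidate in enumerate(ranking, start=1):
            if candidate in truth:
                score += 2 ** (-(k - 1) / (half_life - 1))
        best += len(truth)              # every true receiver ranked first
    return 100.0 * score / best

ranked = [["kate", "mike", "sara"], ["mike", "kate", "sara"]]
truth  = [{"kate"}, {"sara"}]
print(round(breese_utility(ranked, truth), 1))   # one top hit, one rank-3 hit
```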
This metric is an aggregation of the accuracy\nmeasurements in various top-n retrievals in the ranked list.\nAmong all predictive algorithms, adaptive CTR models perform\nbest and PSN performs worst. In adaptive CTR models,\nestimating from recent data of six months beats aggregative\nupdating the model from all the data from the history.\nCOMMUNITYNET APPLICATIONS\nIn this section, we show two application systems we built\nbased on the CommunityNet. The first one is a visualization and\nquery tool to demonstrate informal networks incorporation. The\nsecond one is a receiver recommendation tool which can be used\nin popular email systems. These demos can be accessed from\nhttp://nansen.ee.washington.edu/CommunityNet/.\n6.1 Sensing Informal Networks\n6.1.1 Personal Social Network\nFigure 8 illustrates the interface of a visualization and query\nsystem of CommunityNet.\n\nThe distance of nodes represents the\ncloseness (measured by the communication frequencies) of a\nperson to the center person. Users can click on the node to link to\n486\nIndustry/Government Track Paper\nthe CommunityNet of another person. This system can show\npersonal social networks, which includes all the people a user\ncontacts with during a certain time period. For instance, Figure\n8(a) illustrates the personal social network of Vice President John\nArnold from January 1999 to December 2000. During this period,\nthere were 22 people he sent emails to, regardless what they were\ntalking about. An evolutionary personal social network is\nillustrated in Figure 8(b), in which we show people's personal\nsocial network changes over time. From Jan. 1999 to Dec. 2000,\nno new contact was added to John's PSN. However, people's\nrelationship changed in 2000. A Personal Social Network\nInformation Flow is illustrated in Figure 8(c), in which we show\nhow the information flows through the network (here we illustrate\nthe information in two levels.)\n\n(a) Personal Social Network of John Arnold\n\n\n\n(b1) Jan-`99 to Dec-`99 (b2) Jan-`00 to Jun-`00 (b3) Jul-`00 to Dec-`00\n(b) Evolutionary Personal Social Network\n\n(c) Personal Social Network Information Flow with two-level\npersonal social network of John Arnold\nFigure 8. Personal social networks of John Arnold\n6.1.2 Personal Topic-Community Network\nPersonal topic-community network can show whom this user\nwill contact with under a certain topic.\n\nOn retrieval, keywords are\nrequired for inferring the related topics. Figure 9 illustrates\nseveral personal topic-community networks for John Arnold.\nFirst, we type in \"Christmas\" as the keyword. CommunityNet\ninfers it as \"holiday celebration\" and shows the four people John\ncontacted with about this topic. About \"Stock\", we find John\ntalked with five people on \"Stock Market\" and \"Company Share\"\nfrom Jan. 1999 to Dec. 2000. Personal Topic-Community network\ncan be depicted by the system, too.\n\n\nFigure 9. Personal Topic-Community Networks when we type in\n\"Christmas\" and \"Stock\"\n6.2 Personal Social Capital Management Receiver\nRecommendation Demo\nWhen a user has some questions, he/she may want to know\nwhom to ask how to find an expert and who may tell him/her\nmore details because of their close relationships. In our second\ndemo, we show a CommunityNet application which addresses this\nproblem. This tool can be incorporated with general email\nsystems to help users organize their personal social capitals. 
First, after a user logs in to a webmail system, he can type in content and/or a subject and then click on "Show Content Topic-Community". The tool then recommends appropriate people to send this email to, based on the learned personal social network or personal topic-community. The distances of the nodes represent the closeness of the people to the user. Users can click on a node to select an appropriate person to send the email to. If the center node is clicked, a sphere grows to represent the user's ties to a group of experts. Clicking on "Mail To" then includes the people in the sphere in the recipient list.
In the examples in Figure 10, we log in as Jeff Dasovich. He can ask his closest friends whenever he has questions or wants to disseminate information. If he wants to inform others or be informed about "Government"-related topics, the system suggests that he send emails to Steffes, Allen, Hain, or Scott. The topics are inferred by matching terms from the subject as well as the content of the email. He can also type in "Can you tell me the current stock price?" as the email content. The system detects "Stock Market" as the most relevant topic and, based on Dasovich's CommunityNet, shows three possible contacts. He then chooses the appropriate contact(s).
Figure 10. Receiver recommendation demo system: (a) receiver recommendation for "Government"; (b) receiver recommendation for "Can you tell me the current stock price?".
CONCLUSIONS AND FUTURE WORK
In this paper, we propose a new way to automatically model and predict the human behavior of receiving and disseminating information. We establish personal CommunityNet profiles based on a novel Content-Time-Relation algorithm, which incorporates contact, content, and time information simultaneously from personal communication. CommunityNet can model and predict community behavior as well as personal behavior. Many interesting results are obtained, such as finding the most important employees during events and predicting the senders and receivers of emails. Our experiments show that this multi-modality algorithm performs better than both the social-network-based predictions and the content-based predictions. Ongoing work includes studying the response time of each individual to emails from different people, to further analyze user behavior, and incorporating nonparametric Bayesian methods such as hierarchical LDA with contact and time information.
ACKNOWLEDGMENTS
We would like to thank D. Blei, T. Griffiths, Yi Wu and the anonymous reviewers for valuable discussions and comments. This work was supported by funds from NEC Labs America.
REFERENCES
[1] B. A. Nardi, S. Whittaker, and H. Schwarz. "It's not what you know, it's who you know: work in the information age," First Monday, 5, 2000.
[2] D. Krackhardt, "Panel on Informal Networks within Formal Organizations," XXV Intl. Social Network Conf., Feb. 2005.
[3] D. Krackhardt and M. Kilduff, "Structure, culture and Simmelian ties in entrepreneurial firms," Social Networks, Vol. 24, 2002.
[4] B. Nardi, S. Whittaker, E. Isaacs, M. Creech, J. Johnson, and J. Hainsworth, "ContactMap: Integrating Communication and Information Through Visualizing Personal Social Networks," Communications of the ACM, April 2002.
[5] https://www.linkedin.com/home?trk=logo.
[6] https://www.orkut.com/Login.aspx.
[7] http://www.friendster.com/.
[8] N.
Lin, \"Social Capital,\" Cambridge Univ. Press, 2001.\n[9]\nW. Cohen. http://www-2.cs.cmu.edu/~enron/.\n[10]\nS. Milgram. \"The Small World Problem,\" Psychology Today, pp 60-67, May\n1967.\n[11]\nM. Schwartz and D. Wood, \"Discovering Shared Interests Among People\nUsing Graph Analysis\", Comm. ACM, v. 36, Aug. 1993.\n[12]\nH. Kautz, B. Selman, and M. Shah. \"Referral Web: Combining social\nnetworks and collaborative filtering,\" Comm. ACM, March 1997.\n[13]\nG. W. Flake, S. Lawrence, C. Lee Giles, and F. M. Coetzee. \"Self-organization\nand identification of Web communities,\" IEEE Computer, 35(3):6670, March\n2002.\n[14]\nJ. Tyler, D. Wilkinson, and B. A. Huberman. \"Email as spectroscopy:\nAutomated Discovery of Community Structure Within Organizations,\" Intl.\nConf. on Communities and Technologies., 2003.\n[15]\nL. Page, S. Brin, R. Motwani and T. Winograd. \"The PageRank Citation\nRanking: Bringing Order to the Web,\" Stanford Digital Libraries Working\nPaper, 1998.\n[16]\nJ. Kleinberg. \"Authoritative sources in a hyperlinked environment,\" In Proc.\n9th ACM-SIAM Symposium on Discrete Algorithms, 1998.\n[17]\nS. Wasserman, and P. E. Pattison, \"Logit models and logistic regression for\nsocial networks: I. An introduction to Markov graphs and p*\", Psychometrika,\n61: 401 425, 1996.\n[18]\nT. A.B. Snijders. \"Models for Longitudinal Network Data,\" Chapter 11 in\nModels and methods in social network analysis, New York: Cambridge\nUniversity Press, 2004.\n[19]\nD. L.-Nowell and J. Kleinberg, \"The Link Prediction Problem for Social\nNetworks,\" In Proceedings of the 12th Intl. Conf. on Information and\nKnowledge Management, 2003.\n[20]\nJ. Kubica, A. Moore, J. Schneider, and Y. Yang. \"Stochastic Link and Group\nDetection,\" In Proceedings of the 2002 AAAI Conference. Edmonton, Alberta,\n798-804, 2002.\n[21]\nM. Handcock and D. Hunter, \"Curved Exponential Family Models for\nNetworks,\" XXV Intl. Social Network Conf., Feb. 2005.\n[22]\nT. Hofmann, \"Probabilistic Latent Semantic Analysis,\"\n\nProc. of the Conf. on Uncertainty in Artificial Intelligence, 1999.\n[23]\nD. Blei, A. Ng, and M. Jordan, \"Latent Dirichlet allocation,\" Journal of\nMachine Learning Research, 3:993-1022, January 2003.\n[24]\nT. Griffiths and M. Steyvers, \"Finding Scientific Topics,\" Proc. of the\nNational Academy of Sciences, 5228-5235, 2004.\n[25]\nM. R.-Zvi, T. Griffiths, M. Steyvers and P. Smyth, \"The Author-Topic Model\nfor Authors and Documents\",\n\nProc. of the Conference on Uncertainty in Artificial Intelligence volume 21,\n2004.\n[26]\nA. McCallum, A. Corrada-Emmanuel, and X. Wang, \"The Author-Recipient-Topic\nModel for Topic and Role Discovery in Social Networks: Experiments\nwith Enron and Academic Email,\" Technical Report UM-CS-2004-096, 2004.\n[27]\nX. Song, B. L. Tseng, C.-Y. Lin, and M.-T. Sun, "ExpertiseNet: Relational and\nEvolutionary Expert Modeling," 10th Intl. Conf. on User Modeling,\nEdinburgh, UK, July 24-30, 2005.\n[28]\nJ. Allan, R. Papka, and V. Lavrenko. \"On-line New Event Detection and\nTracking,\" Proc. of 21st ACM SIGIR, pp.37-45, August 1998.\n[29]\nhttp://en.wikipedia.org/wiki/Timeline_of_the_Enron_scandal.\n[30]\nJ. Breese, D. Heckerman, and C. Kadie. \"Empirical analysis of predictive\nalgorithms for collaborative filtering,\" Conf. 
on Uncertainty in Artificial\nIntelligence, Madison,WI, July 1998.\n\n488\nIndustry/Government Track Paper", "keywords": "user behavior modeling;information dissemination;personal information management"} {"name": "139", "title": "Modeling behavioral design patterns of concurrent objects", "abstract": "Object-oriented software development practices are being rapidly adopted within increasingly complex systems, including reactive, real-time and concurrent system applications. While data modeling is performed very well under current object-oriented development practices, behavioral modeling necessary to capture critical information in real-time, reactive, and concurrent systems is often lacking. Addressing this deficiency, we offer an approach for modeling and analyzing concurrent object-oriented software designs through the use of behavioral design patterns, allowing us to map stereotyped UML objects to colored Petri net (CPN) representations in the form of reusable templates. The resulting CPNs are then used to model and analyze behavioral properties of the software architecture, applying the results of the analysis to the original software design.", "fulltext": "Introduction\nObject-oriented software development practices are being\nrapidly adopted within increasingly complex systems, including\nreactive, real-time and concurrent system applications. In\npractice, though, object-oriented software design techniques are\nstill predominantly focused on the creation of static class\nmodels. Dynamic architectural models capturing the overall\nbehavioral properties of the software system are often\nconstructed using ad hoc techniques with little consideration\ngiven to the resulting performance or reliability implications\nuntil the project reaches implementation. Efforts to analyze\nbehavioral issues of these architectures occur through\nopportunistic rather than systematic approaches and are\ninherently cumbersome, unreliable, and unrepeatable.\nOne means of improving the behavioral modeling capabilities of\nobject-oriented architecture designs is to integrate formalisms\nwith the object-oriented specifications. Using this technique,\nobject-oriented design artifacts are captured in a format such as\nthe Unified Modeling Language (UML) [1], which is intuitive to\nthe software architect. The native object-oriented design is then\naugmented by integrating an underlying formal representation\ncapable of providing the necessary analytical tools. The\nparticular method used in this research [2] is to integrate colored\nPetri nets (CPNs) [3] with object-oriented architecture designs\ncaptured in terms of UML communication diagrams.\n\nSpecifically, this paper will present a method to systematically\ntranslate a UML software architecture design into an underlying\nCPN model using a set of pre-defined CPN templates based on a\nset of object behavioral roles. These behavioral roles are based\non the object structuring criteria found in the COMET method\n[4], but are not dependent on any given method and are\napplicable across application domains. This paper will also\ndemonstrate some of the analytical benefits provided by\nconstructing a CPN representation of the UML software\narchitecture. After a survey of related research, Section 2\ndescries the concept of behavioral design pattern templates for\nmodeling concurrent objects. Section 3 discusses how we\nconstruct an overall CPN model of the concurrent software\narchitecture by interconnecting the individual behavioral design\npattern templates. 
Section 4 describes the validation of the approach.
1.1 Related Research
There are many existing works dealing with the use of Petri nets for describing software behavior. As they relate to this paper, the existing works can be broadly categorized into the modeling of software code and the modeling of software designs. In this research, the focus is on improving the reliability of object-oriented software designs rather than delaying defect detection until the software code exists. In terms of object-oriented design, the related Petri net research can be categorized as new development methodologies [5-8]; object-oriented extensions to Petri nets [9-12]; and the integration of Petri nets with existing object-oriented methodologies [13-20]. Since one of the goals of this research effort is to provide a method that requires no additional tools or language constructs beyond those currently available for the UML and CPN definitions, this approach [2,21-25] falls into the last category of integrating Petri nets with existing methodologies. The main features that distinguish this approach from other related works are a focus on the concurrent software architecture design and the use of consistent, reusable CPN templates to model the behavior of concurrent objects and their interactions. This paper also extends our more recent work [25] by specifically focusing on the behavioral design patterns of individual concurrent objects and applying these patterns to construct an underlying representation of the concurrent software design architecture.
Modeling Behavioral Design Patterns
To model concurrent object behavioral design patterns with CPNs, our approach starts with a concurrent software architecture model captured in UML. For the construction of this architecture model, we identify a set of behavioral design patterns used to categorize the objects, along with a set of specification requirements necessary to correctly model the concurrent behavior with the underlying CPN model. Each of the identified behavioral design patterns then has a corresponding template, represented as a CPN segment, which is paired with the UML object and instantiated to capture specific behavioral characteristics based on the object specifications. The following sections describe the object architecture definition along with the concept of behavioral pattern templates for modeling concurrent objects. Section 3 will then discuss how we construct an overall CPN model of the concurrent object architecture by connecting the individual behavioral pattern templates.
2.1 Concurrent Object Modeling
Our approach uses a UML communication diagram to capture the concurrent software architecture. Depending on the desired level of modeling, this architecture model can be constructed for an entire software system or for one or more individual subsystems.
This communication diagram contains a collection\nof concurrent (active) and passive objects along with the\nmessage communication that occurs between the objects. Using\nour approach, objects within the concurrent software\narchitecture are organized using the notion of components and\nconnectors. Under this paradigm, concurrent objects are treated\nas components that can be connected through passive message\ncommunication objects and entity objects. In keeping with the\nCOMET object structuring criteria, each object is assigned a\nUML stereotype to indicate its behavioral design pattern.\nObjects are broadly divided into application objects, which\nperform the work, and connector objects, which provide the\nmeans of communicating between application objects. For\napplication objects, we use six stereotyped behavioral design\npatterns as illustrated in Figure 1: interface, entity, coordinator,\nstate-dependent, timer, and algorithm. Additionally, connector\nobjects can take the roles of: queue, buffer, or buffer-with-response\n, corresponding to asynchronous, synchronous, and\nreturn messages. These patterns are not intended to be an\nexhaustive list, but rather are intended to represent sufficient\nvariety to model concurrent systems across a wide range of\ndomains while also allowing these patterns to be extended as\nnecessary for future applications.\nThe identification of stereotyped behavioral roles allows us to\nselect a specific CPN template to model each object (further\ndescribed in Section 3.2). These behavioral stereotypes are\ngeneric across applications, so we also capture specific\napplication information using the following tagged values:\n\nExecution Type. Each object must be declared as either\npassive or concurrent and for concurrent objects, further\nspecified to be asynchronous or periodic.\n\nIO Mapping. Input-output message pairings must be\nspecified for each object\n\nCommunication Type. Indicate whether message\ncommunication occurs through asynchronous or synchronous\nmeans.\n\nActivation Time. The period of activation must be specified\nfor each periodic concurrent object.\n\nProcessing Time. Estimated processing times for completing\nan execution cycle should be assigned to each object if\ntiming is to be accounted for in performance analysis.\n\nOperation Type. Indicate whether operations on entity\nobjects perform \"reader\" or \"writer\" functionality.\n\nStatechart. For each state-dependent object, a UML\nstatechart is used to capture the state behavior for that object.\nA detailed discussion of how the statechart is translated into\nthe CPN model is provided in Pettit and Gomaa [24].\nFigure 1. Stereotype Hierarchy for Application Objects\n2.2\n\nDefining Behavioral Pattern Templates\nThe basis for our approach to modeling concurrent object\nbehavior lies in the notion of a behavioral design pattern (BDP)\ntemplate, which represents concurrent objects according to their\nrole along with associated message communication constructs.\nFor each BDP template, we employ a self-contained CPN\nsegment that, through its places, transitions, and tokens, models\na given stereotyped behavioral pattern. Each template is generic\nin the sense that it provides us with the basic behavioral pattern\nand component connections for the stereotyped object but does\nnot contain any application-specific information. 
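As a concrete illustration, the stereotype and tagged values attached to each object can be thought of as a small record that parameterizes the generic template when it is instantiated. The sketch below is only one plausible encoding of that information in Python; it is not part of the method's tooling, and the field names simply mirror the tagged values listed above.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ObjectSpec:
    """Stereotype plus tagged values for one object in the architecture model."""
    name: str
    stereotype: str                     # e.g. "interface", "algorithm", "entity"
    execution: str                      # "passive", "async", or "periodic"
    io_mapping: Dict[str, str] = field(default_factory=dict)   # input message -> output message
    communication: str = "async"        # "async" or "sync" message communication
    activation_time_ms: Optional[int] = None   # required when execution == "periodic"
    processing_time_ms: Optional[int] = None   # estimated time per execution cycle
    operation_types: Dict[str, str] = field(default_factory=dict)  # operation -> "reader"/"writer"
    statechart: Optional[str] = None    # reference to a statechart for state-dependent objects

# Example values drawn from the cruise control lever interface used later in the paper.
lever_interface = ObjectSpec(
    name="CruiseControlLeverInterface",
    stereotype="interface",
    execution="async",
    io_mapping={"cruiseControlLeverInput": "cruiseControlRequest"},
    processing_time_ms=100,
)
```

Instantiating a template for a given object then amounts to reading such a record and specializing the template's token colors and arc inscriptions accordingly.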
The connections provided by each template are consistent across the set of templates and allow concurrent objects to be connected to passive objects (entities or message communication) in any order.
We provide a BDP template for each object type identified in the previous section. Since each of these templates captures a generic behavioral design pattern, when a template is assigned to a specific object, we then augment that template with the information captured in the tagged values for the object. For the resulting CPN representation, this affects the color properties of the tokens (e.g. to represent specific messages) and the rules for processing tokens (e.g. to account for periodic processing or special algorithms). The following sections describe a subset of our behavioral templates for both concurrent object components and their connectors.
2.2.1 Asynchronous Interface Object Template
Consider the case of an asynchronous, input-only interface object. The template for this behavioral design pattern is given in Figure 2.
This template represents a concurrent object, that is, an object that executes its own thread of control concurrently with other objects in the software system. While this template models relatively simple behavior (wait for input; process input; wait for next input), it features characteristics found throughout the concurrent object templates. First, to model the thread of control within a concurrent object, a control token (CTRL) is assigned to each concurrent object. For this template, a control token is initially present in the Ready place. Thus, this template is initialized in a state whereby it is ready to receive an input at the ProcessInput transition. As an input arrives (and given that the control token is in the Ready place), ProcessInput is allowed to fire, simulating the processing of the external input and the behavior of the asynchronous input interface object. ProcessInput consumes both a token representing the external input as well as the control token representing the executable thread of control. An output arc from ProcessInput uses a function, processInput(Input_event), to generate the appropriate token representing an internal message passed to another object within the system. The exact behavior of the processInput function (as with any arc-inscription functions throughout the templates) is determined from the object specification when a template is instantiated for a specific object. Finally, to complete the behavioral pattern for this template, the control token is passed to the MessageSent place and eventually back to the Ready place, enabling the template to process the next input.
2.2.2 Periodic Algorithm Object Template
The asynchronous interface template addressed asynchronous behavior for a concurrent object, where the object is activated on demand by the receipt of a message or an external stimulus (as in the case of the interface example).
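Before turning to the periodic case, it may help to see the token flow of the asynchronous interface template in executable form. The following Python fragment is a simplified sketch written for this summary: it mimics the Ready/ProcessInput/MessageSent behavior described above with plain booleans rather than real CPN places, and the lever-style event mapping at the end is hypothetical.

```python
class AsyncInputInterface:
    """Token-level sketch of the asynchronous input-only interface template."""

    def __init__(self, name, process_input):
        self.name = name
        self.process_input = process_input  # maps an external input event to an internal message
        self.ready = True                   # control token currently in the Ready place
        self.message_sent = False           # control token currently in the MessageSent place

    def fire_process_input(self, input_event):
        """ProcessInput consumes the input event token and the control token."""
        if not self.ready:
            return None                     # transition not enabled; previous cycle unfinished
        self.ready, self.message_sent = False, True
        return self.process_input(input_event)

    def fire_return_control(self):
        """Moves the control token from MessageSent back to Ready for the next input."""
        if self.message_sent:
            self.ready, self.message_sent = True, False

# Hypothetical elaboration for a lever-style device: raw events become request messages.
lever = AsyncInputInterface(
    "asyncInputInterface",
    process_input=lambda event: {"Accel": "accelerate", "Off": "clear"}.get(event, "noop"),
)
message = lever.fire_process_input("Accel")   # -> "accelerate"
lever.fire_return_control()                   # template is ready for the next input event
```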
For periodic behavior,\nwhere an object is activated on a regular periodic interval,\nconsider the template for a concurrent periodic algorithm object\ngiven in Figure 3.\nAlgorithm objects are internal concurrent objects that\nencapsulate algorithms, which may be activated or deactivated\non demand. In the case of the periodic algorithm object, once\nthe algorithm is enabled, it awakens on its activation period,\nperforms the desired algorithmic task, and then returns to a sleep\nstate until the next activation period.\nLooking at the periodic algorithm template from Figure 3, you\nshould notice that, like the previous concurrent object template,\nthere is Ready place with a control token that indicates when the\nobject is ready to start its next processing cycle and models the\nthread of execution. This is common across all concurrent\nobject templates. To model the ability for an algorithm object to\nbe enabled or disabled, the input interface to this template\noccurs through the Enable_Alg and Disable_Alg transitions.\n(Note that we maintain the use of transitions as the interface\npoints for all concurrent objects.) Thus, in addition to the\ncontrol token being present on the Ready place, an Enable token\nmust also be present on the Alg_Enabled place in order for the\nPerform_Alg transition to be enabled and subsequently fired.\nThe actual behavior performed by the algorithm is captured by\ndecomposing the Perform_Alg transition.\nThe resulting decomposition uses one or more place-transition\npaths to model the behavior performed within the algorithm.\nThe information necessary to derive the CPN algorithm model\nmay be contained in the UML class specification for the\nalgorithm object or, for more complex algorithms, may be\ncaptured in supporting UML artifacts such as the activity\ndiagram. Multiple algorithms may be encapsulated within the\nsame algorithm object. In these cases, the enable/disable\ntransitions, enabled place, and processing transition are repeated\nfor each encapsulated algorithm. However, there will only ever\nbe one control token and ready place in a single concurrent\nobject as our approach does not allow for multi-threaded\nconcurrent objects.\nFinally, to capture the periodic nature of this template, a Sleep\nplace along with Wakeup and Timeout transitions have been\nadded to the basic asynchronous object template. This place-transition\npair will be common to all periodic templates. In this\ncase, the periodic algorithm starts in the Sleep place rather than\nReady. After the desired sleep time (indicating the activation\nperiod of the object) has elapsed, the Wakeup transition is\nenabled and, when fired, removes the CTRL token from the\nSleep place and places it in the Ready place. This now enables\nthe template to perform any enabled algorithms. If one or more\nalgorithms are enabled, the template proceeds in the same\nmanner as the previous asynchronous algorithm template.\nHowever, if no algorithms are enabled when the template wakes\nup, the Timeout transition will fire and return the Control token\nto the Sleep place and wait for the next period of activation.\n2.2.3\n\nEntity Object Template\nIn contrast to concurrent objects, passive objects do not execute\ntheir own thread of control and must rely on operation calls\nfrom a concurrent object. Using our approach, the entity objects\nfrom Figure 1 are passive objects. The purpose of an entity\nobject is to store persistent data. 
Entities provide operations to\naccess and manipulate the data stored within the object. These\noperations provide the interface to the entity object. To account\nfor the possibility of multiple concurrent objects accessing a\nsingle entity object, our approach stipulates that each operation\nbe tagged as having \"read\" or \"write\" access and for the object\nto be tagged with \"mutually exclusive\" or \"multiple-reader/single\n-writer\" rules for access control. This allows us to\napply the appropriate template with the desired mutual exclusion\nprotection for the encapsulated object attributes. The behavioral\ndesign pattern template representing an entity object with\nmutually exclusive access is shown in Figure 4.\nIn this template, attributes are modeled with a CPN place\ncontaining tokens representing the attribute values. The\nunderlying functionality of each operation is captured in an\n\"idmOperation\" transition that can be further decomposed as\nnecessary to implement more complex functions. When\ninstantiated for a specific entity object, the \"idm\" tag is replaced\nwith a specific identifier for each operation. Finally, the\ninterface to each operation is provided by a pair of CPN places\none place for the operation call and another for the return.\nCollectively, these places form the interface to the entity object.\nAs opposed to concurrent objects, all passive objects and\nmessage connectors will use CPN places for their interface,\nallowing concurrent objects to be connected through their\ntransition interfaces. Thus, for performing an operation call, a\n204\nconcurrent object places its control token and any necessary\nparameter tokens on the calling place and then waits for the\ncontrol token to be returned along with any additional operation\nresults at the call return place. Recall that entity objects do not\nhave their own thread of control, thus they become part of the\ncalling object's thread of control for the duration of the\noperation call.\n2.2.4\n\nMessage Communication Templates\nFinally, in addition to application object templates, our method\nalso provides templates for connector objects representing\nmessage communication. These connectors may represent\nasynchronous or synchronous message communication between\ntwo concurrent objects.\n\nFigure 2. Asynchronous Input-Only Interface Object: (a) UML (2.0); (b) CPN Template\n\nFigure 3. Periodic Algorithm Template: (a) UML; (b) CPN Representation\n{Execution = async;\nIO = input\nProcess Time = <process time>\n}\nasyncInput\nInterface\n<<interface>>\nexternal\nInputSource\n<<external input device>>\ninputEvent\nasyncMsg\nTo internal\nconnector\nobject\n(a)\n(b)\n{Execution = async;\nIO = input\nProcess Time = <process time>\n}\nasyncInput\nInterface\n<<interface>>\nexternal\nInputSource\n<<external input device>>\ninputEvent\nasyncMsg\nTo internal\nconnector\nobject\n(a)\n(b)\n{Execution = periodic;\nActivation Time = <sleep time>\nProcess Time = <process time>\n}\nperiodic\nAlgorithm\nObject\n<<algorithm>>\nenable\n(a)\n(b)\nperiodic\nAlgorithm\nObject\n{Execution = periodic;\nActivation Time = <sleep time>\nProcess Time = <process time>\n}\nperiodic\nAlgorithm\nObject\n<<algorithm>>\nenable\n(a)\n(b)\nperiodic\nAlgorithm\nObject\n205\n\nFigure 4. 
Passive Entity Template: (a) UML; (b) CPN Representation\nConsider the message buffer template shown in Figure 5.\nNotice that, as with passive entity objects, the interface to\nconnector objects always occurs through a place rather than a\ntransition, thus allowing concurrent object interfaces to be\nlinked with connector interfaces while still enforcing the Petri\nnet connection rules of only allowing arcs to occur between\ntransitions and places. The message buffer template models\nsynchronous message communication between two concurrent\nobjects. Thus, only one message may be passed through the\nbuffer at a time and both the producer (sender) and consumer\n(receiver) are blocked until the message communication has\ncompleted. The behavior of synchronous message\ncommunication is modeled through this template by first having\nthe producer wait until the buffer is free as indicated by the\npresence of a \"free\" token in the buffer. The producer then\nplaces a message token in the buffer and removes the free token,\nindicating that the buffer is in use. Conversely, the consumer\nwaits for a message token to appear in the buffer. After\nretrieving the message token, the consumer sets the buffer once\nagain to free and places a token in the \"Return\" place, indicating\nto the producer that the communication has completed.\nAsynchronous message connector templates continue to employ\nplaces for their interfaces. However, asynchronous message\ncommunication, which involves the potential for queuing of\nmessages, is more involved than the simple synchronous\nmessage buffer and must therefore add a transition to handle this\nbehavior. The corresponding template is shown in Figure 6.\nWith asynchronous communication the sender is not blocked\nawaiting acknowledgement that the sent message has been\nreceived and a message queue is allowed to form for the object\nreceiving the asynchronous messages. In this template, the\nManageQueue transition is decomposed into a subnet that\nimplements the FIFO placement and retrieval of messages in the\nqueue [26]. To send an asynchronous message, a concurrent\nobject places a message token on the Enqueue place. The\nsubnet under ManageQueue would then add this message token\nto the tail of the queue. Another concurrent object receiving the\nasynchronous message would wait for a message token to be\navailable in the Dequeue place (representing the head of the\nqueue). It would then remove the message token from Dequeue\nand signal DequeueComplete in a similar manner to the\noperation calls previously described for entities. This signals\nthe queue that a message token has been removed from the head\nof the queue and that the remaining messages need to be\nadvanced.\nFigure 5. Synchronous Message Buffer Connector Template:\n(a) UML; (b) CPN Representation\n\n\n(a)\n(b)\nanActiveObject\nanotherActive\nObject\nanEntityObject\nread()\nwrite()\n<<entity>>\n{Access Control = mutually-exclusive}\n(a)\n(b)\nanActiveObject\nanotherActive\nObject\nanEntityObject\nread()\nwrite()\n<<entity>>\n{Access Control = mutually-exclusive}\n(a)\n(b)\nproducer\nconsumer\ndata\n(a)\n(b)\nproducer\nconsumer\ndata\n206\nFigure 6. Asynchronous Message Queue Template\nConstructing CPN Models from UML\nUp to this point, we have just discussed individual CPN\ntemplates being used to model behavioral design patterns of\nconcurrent objects, passive entity objects, and message\ncommunication mechanisms. 
This section presents our method\nfor constructing a CPN model of the concurrent software\narchitecture by applying and interconnecting these templates.\nThe basic construction process consists of the following steps:\n1.\n\nConstruct a concurrent software architecture model using a\nUML communication diagram to show all concurrent and\npassive objects participating in the (sub) system to be\nanalyzed along with their message communication.\n2.\n\nBegin constructing the CPN model by first developing a\ncontext-level CPN model showing the system as a single\nCPN substitution (hierarchically structured) transition and\nthe external interfaces as CPN places. Using a series of\nhierarchically structured transitions allows us to work with\nthe CPN representation at varying levels of abstraction, from\na completely black-box view, a concurrent software\narchitecture view (in the next step), or within an individual\nobject as desired for the level of analysis being applied to the\nmodel.\n3.\n\nDecompose the system transition of the CPN context-level\nmodel, populating an architecture-level model with the\nappropriate CPN templates representing the objects from the\nconcurrent software architecture.\n4.\n\nElaborate each instance of CPN template to account for the\nspecific behavioral properties of the object it models.\n5.\n\nConnect the templates, forming a connected graph between\nconcurrent object templates and passive entity objects or\nmessage communication mechanisms.\nTo illustrate the application of this approach, consider a partial\nexample from the well-known Cruise Control System [4]. This\nexample was chosen for this paper as it requires little\nexplanation for the UML model and allows us to focus on the\nuse of behavioral design pattern templates and the CPN\nrepresentations. Figure 7 provides a partial communication\ndiagram of the Cruise Control System concurrent software\narchitecture.\nTo begin, focus on the input events being provided by the\nCruise Control Lever. (We will return to the brake and engine\ninputs later in this section.) Cruise control lever events enter the\nsystem via a concurrent interface object that sends an\nasynchronous message to the state dependent control object to\nprocess the requests based on rules defined in a corresponding\nstatechart. Based on the state of the CruiseControl object,\ncommands are given to a concurrent periodic algorithm object\nenabling it to compare speed values from two passive entity\nobjects and determine the correct throttle values, which are then\npassed on to the periodic output interface, ThrottleInterface.\nGiven this concurrent software architecture, the second step in\nour process would construct the context-level CPN model\nshown in Figure 8. At this level, we see the system as a black-box\nrepresented as a single transition, \"CruiseControlSystem\".\nExternal input and output interfaces for the cruise control lever,\nbrake, and engine devices are represented as places. The\npurpose of this context-level CPN model is to provide a central\nstarting point for our modeling and analysis. 
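The five construction steps amount to building a set of template instances and wiring them together through shared connector interfaces. The fragment below sketches that composition for the cruise control fragment just described; the classes are placeholders standing in for template instantiation, which the method performs in CPN rather than in code, so only the wiring pattern should be read from it.

```python
# Placeholder classes standing in for instantiated CPN templates (not actual CPN objects).
class Template:
    def __init__(self, name):
        self.name = name
        self.connections = []

    def connect(self, connector):
        # Concurrent object templates attach only to passive connectors
        # (queues, buffers, entity objects), never directly to each other.
        self.connections.append(connector.name)

class AsyncInterfaceTemplate(Template): pass
class PeriodicInterfaceTemplate(Template): pass
class StateDependentControlTemplate(Template): pass
class PeriodicAlgorithmTemplate(Template): pass
class EntityTemplate(Template): pass
class QueueConnector(Template): pass
class BufferConnector(Template): pass

def build_cruise_control_fragment():
    """Steps 3-5 in miniature: instantiate one template per object, then connect them."""
    lever = AsyncInterfaceTemplate("CruiseControlLeverInterface")
    request_queue = QueueConnector("cruiseControlRequestQueue")
    control = StateDependentControlTemplate("CruiseControl")
    speed_adjust = PeriodicAlgorithmTemplate("SpeedAdjustment")
    desired_speed = EntityTemplate("DesiredSpeed")
    current_speed = EntityTemplate("CurrentSpeed")
    throttle_buffer = BufferConnector("throttleValue")   # assumed here to be a synchronous buffer
    throttle = PeriodicInterfaceTemplate("ThrottleInterface")

    lever.connect(request_queue)         # asynchronous message into the request queue
    control.connect(request_queue)       # state-dependent control consumes from the queue
    speed_adjust.connect(desired_speed)  # entity read access
    speed_adjust.connect(current_speed)  # entity read access
    speed_adjust.connect(throttle_buffer)
    throttle.connect(throttle_buffer)
    # Remaining connectors (e.g., the ccCommand connector between CruiseControl and
    # SpeedAdjustment) would be added in exactly the same way.
    return [lever, request_queue, control, speed_adjust,
            desired_speed, current_speed, throttle_buffer, throttle]

components = build_cruise_control_fragment()
print([c.name for c in components])
```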
By structuring the\nCPN model in this way, we can analyze the system as a black\nbox, dealing only with external stimuli and observed results\n(corresponding to the tokens stored in these places) or we can\nuse hierarchical decomposition to gain access to the individual\nobject behavioral design pattern templates (and their detailed\nCPN implementation) by systematically decomposing the\nhierarchically structured transitions (indicated with the HS tag).\nIn the third step, the CruiseControlSystem transition from the\ncontext-level model is decomposed into an architecture-level\nmodel populated with the appropriate CPN behavioral design\npattern template for each of the cruise control objects. Given\nthe architecture design from Figure 7 (and continuing to ignore\nAutoSensors for the moment), we would need to instantiate two\ninterface templates, two entity templates, one state\ndependent control template, and one algorithm template. We\nwould also need to use queue and buffer templates for the\nasynchronous and synchronous message communication\nrespectively.\nOnce the appropriate templates have been assigned to each\nobject, the fourth step in the process is to elaborate each\ntemplate to model a specific object. To illustrate, consider\nCruiseControlLeverInterface. This object is an asynchronous\ninput-only interface that accepts events from the cruise control\nlever device and, based on the input event, generates the\nappropriate messages for the cruise control request queue.\nApplying the asynchronous input interface template from Figure\n2, we arrive at the elaborated CPN segment for\nCruiseControlLeverInterface shown in Figure 9.\nTo elaborate the template for the CruiseControlLeverInterface,\nthe place and transition names from the basic template have\nbeen appended with the object ID (1) for the specific object.\nThe control token for this model has also been set to the specific\ncontrol token for the CruiseControlLeverInterface object\n(CTRL1) and the time region for the PostProcessing_1\ntransition has been set to \"@+100\" to reflect the Process Time\ntagged value. The CruiseControlLeverInterface CPN\nrepresentation is then connected to the software architecture by\nestablishing an input arc from the CruiseControlLeverDevice\nplace, representing the external input from the device, and an\noutput arc to the Enqueue place, modeling the asynchronous\nmessage communication identified in the UML software\narchitecture. Token types (colors) are then specifically created\nto represent the incoming event and outgoing messages. Finally,\nthe processInput1() function is elaborated to generate the\nappropriate asynchronous message based on an incoming lever\nevent. This elaboration process is similar for all templates.\n(a)\n(b)\nanActiveObject\nanotherActive\nObject\ndata\n(a)\n(b)\nanActiveObject\nanotherActive\nObject\ndata\n207\n\nFigure 7. Partial Concurrent Software Architecture for Cruise Control\nFigure 8. CPN Context-Level Model for Cruise Control\nFigure 9. Asynchronous Input-Only Interface Template\nApplied to CruiseControlLeverInterface\nOnce all templates have been elaborated, our fifth and final step\nconnects the templates to form a connected graph of the\nconcurrent software architecture. The entire CPN architecture\nmodel for cruise control is too large for inclusion in this paper.\nHowever, Figure 10 illustrates the component connections\nbetween the CruiseControlLeverInterface and the CruiseControl\ntemplates using an asynchronous message queue connector. 
As\ncan be seen from this figure, the two concurrent object templates\ncommunicate via the queue connector by establishing arcs\nbetween the interface transitions of the concurrent objects and\nthe interface places of the queue connector. This component\nconnection method applies to the entire software architecture\nusing our approach of allowing concurrent objects to be\nconnected to either passive entity objects or to a message\ncommunication connector.\nTo further illustrate the component-based approach used for\nconstructing these CPN let us now consider expanding the\nmodel to include input from the brake and engine devices. In\naddition to the cruise control lever inputs, Figure 7 also shows\nbrake and engine status messages arriving from the respective\ndevices. These status messages are handled by the AutoSensors\nperiodic interface object and are passed to CruiseControl via an\nasynchronous message through the same cruise control request\nqueue already being used by CruiseControlLeverInterface.\nUsing our component-based modeling approach, the\nAutoSensors object can be added to our CPN model by simply\ninstantiating a CPN representation of the periodic input interface\nbehavioral design pattern template using the specified\ncharacteristics for AutoSensors and then connecting it to the\nexisting queue template. The resulting CPN model is given in\nFigure 11.\nThe addition of AutoSensors also illustrates another capability\nof the interface template. Whereas the cruise control lever is an\nasynchronous device, providing interrupts to\nCruiseControlLeverInterface, the brake and engine devices are\npassive devices that must be polled for their status. In Figure\n11, every time AutoSensors is activated, it retrieves the status\ntoken from the brake and engine device places. After checking\nthe status, the token is immediately returned to the device\nplaces, modeling persistence of device status information that\ncan be polled as necessary. 
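A small sketch of this periodic, polling-style behavior is given below. It compresses the Sleep/Wakeup/Ready token flow into a single activation step and treats the device status places as a plain dictionary, so it approximates the template rather than reproducing the CPN semantics; the status values and message format are illustrative.

```python
class PeriodicInputInterface:
    """Sketch of a periodic interface object that polls passive device status places."""

    def __init__(self, name, activation_time_ms, process_status):
        self.name = name
        self.activation_time_ms = activation_time_ms  # period between Wakeup firings
        self.process_status = process_status          # builds a message from polled statuses

    def wakeup(self, device_places):
        """One activation cycle: read each status token and leave it in place.

        device_places maps a device name to its current status token; copying the
        token rather than consuming it models persistent status information that
        can be polled again on every activation.
        """
        statuses = dict(device_places)
        return self.process_status(statuses)

# Hypothetical elaboration in the spirit of AutoSensors: report brake and engine
# status as an asynchronous request message on each 100 ms activation.
auto_sensors = PeriodicInputInterface(
    "AutoSensors",
    activation_time_ms=100,
    process_status=lambda s: ("statusUpdate", s["brake"], s["engine"]),
)
print(auto_sensors.wakeup({"brake": "BrakeOff", "engine": "EngineOn"}))
```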
The remainder of the AutoSensors template should be familiar, being constructed of the standard Ready and ProcessInput place-transition pair for interface object templates (Section 2.2.1) and the Sleep and Wakeup place-transition pair included for periodic objects (Section 2.2.2).
As demonstrated in this section, the primary benefits of our component-based modeling approach are that connections can easily be added or modified as the architecture evolves or to provide rapid "what-if" modeling and analysis.
Figure 10. Connecting CruiseControlLeverInterface and CruiseControl via Asynchronous Communication
Figure 11. Addition of AutoSensors to the CPN Architecture
Furthermore, by maintaining the integrity between a CPN template and the object it represents, modeling and analysis results can readily be applied to the original UML software architecture model. Thus, while from a pure CPN perspective our CPNs could be further optimized, we feel that it is of greater benefit to maintain a component-based architecture that closely represents the structure of our original UML design artifacts.
Validation
The validation of our approach was in three parts. First, there was the issue of whether our behavioral stereotypes and corresponding templates could be applied across domains and projects. This was demonstrated by successfully applying our process to two case studies, the cruise control system (a portion of which was shown in the previous sections) and the signal generator system [2]. Secondly, we performed validation to determine if the resulting CPN models provided a correct model of the concurrent software architecture.
This was necessary to\nvalidate that our approach would result in an accurate\nrepresentation of the original architecture and was by far the\nmost tedious part of validation, as it required manual inspection\nand unit testing of each object and its corresponding CPN\ntemplate representation for the two case studies. Finally, after\ndetermining that our template approach satisfied the modeling\nrequirements for both case studies, we then sought to\ndemonstrate the analytical capabilities gained from using CPNs\nto model concurrent software architectures. The behavioral\nanalysis addresses both the functional behavior of the concurrent\narchitecture as well as its performance, as described next. The\ndetailed analytical results for both case studies are provided in\n[2].\n4.1\n\nValidating Functional Behavior\nFor functional analysis, the simulation capabilities of the\nDesignCPN tool are used to execute the model over a set of test\ncases. These test cases may be black-box tests in which we are\nonly monitoring the context-level model in terms of input events\nand output results or they may be white-box tests in which we\nanalyze one or more individual object representations. In our\napproach, black box test cases were derived from use cases\nwhile white box test cases were derived from object interactions,\nobject specifications, and statecharts. In each of these cases, the\n209\nappropriate inputs for each test case were provided by placing\ntokens on the CPN places representing the external actors in the\ncontext model. The CPN model was then executed in the\nsimulator and observed at the desired points to determine if the\ncorrect output was generated or if the correct logical paths were\nchosen.\nAgain, consider the cruise control system. Figure 12 illustrates\na black-box simulation in which the driver has selected\n\"Accelerate\" from the cruise control lever (with the engine being\non and the brake being released). Figure 12(a) shows the state\nof the system before the simulation run and Figure 12(b)\nillustrates the results of accelerating, namely a value being sent\nto the throttle. This form of simulation may be applied to as low\nor as high of a level of abstraction as desired in order to gain\nvisibility into the desired behavior of the architecture. For\nexample, one could choose to simply conduct black box testing\nby placing input tokens on actor places, executing the\nsimulation, and then observing the resulting token values on\noutput actor places. Alternatively, if a more detailed\ninvestigation is desired, the engineer may navigate the CPN\nhierarchical construction and observe such characteristics as the\nbehavior of state changes within a state dependent object's CPN\nrepresentation. A detailed analysis of this state-dependent\nbehavior is provided in [24].\nFigure 12. Example Cruise Control Black-Box Simulation\n4.2\n\nValidating Performance\nIn addition to simulation capabilities, the DesignCPN [27] tool\nused in this effort also has a very powerful performance tool\n[28] that can be employed to analyze performance aspects of the\nconcurrent software architecture. This tool can be used to\nanalyze such things as queue backlogs, system throughput, and\nend-to-end timing characteristics. As an example of the latter,\nwe conducted a test to monitor the cruise control system\nresponse times to commands being input from the cruise control\nlever. 
To conduct this analysis, commands were issued to the cruise control system while the system was in a simulated state of operation at a speed of 60 miles per hour (100 kph). The performance tool was used to monitor changes in the throttle output and to compare the time of an observed output change to the time the original command was issued.
The results from this analysis are shown in Figure 13. From this figure, we can see that all cruise control commands complete in less than one second (1000 ms) and most complete in less than 500 ms. Detailed performance requirements were not provided for our cruise control case study. However, if this cruise control system were an actual production system, an engineer could compare the analysis results against documented performance requirements to determine whether the system in fact satisfies the necessary performance criteria. By being able to conduct this form of analysis from the concurrent software design, an engineer can both improve the reliability of the software architecture at the design level and correct problems prior to implementation.
Figure 13. Cruise Control End-to-End Timing Analysis (command completion time in ms versus elapsed time in ms)
Conclusions and Future Research
The long-term goal of this research effort is to provide an automated means of translating a UML concurrent software architecture design into an underlying CPN representation that can then be used to conduct behavioral analysis, with results communicated in terms of the original UML model. To date, we have developed a method for systematically translating a UML software architecture into a CPN representation. This method employs reusable templates that model the behavior of a set of objects according to their stereotyped behavioral roles. Each template provides a consistent interface that allows templates to be interconnected as components of a larger system, thus creating the overall CPN representation. The resulting CPN model enables the analysis of both the functional and performance behavior of the concurrently executing objects. As the CPN representation mirrors the structure of the concurrent software architecture, the results can be readily applied to the original UML model.
Future research in this area will need to investigate approaches to facilitate the automated translation from a UML model into a CPN model that can be read by a tool such as DesignCPN. Additional research also needs to be conducted to investigate the scalability of this approach to larger systems, including distributed applications, and to provide behavioral templates for the COMET distributed components [4]. Finally, the use of state space analysis should be investigated further. Most of the analysis conducted with this research effort has focused on the use of simulations for functional analysis and on the performance tool for performance analysis. State space analysis could also be used to further refine deadlock detection as well as to analyze system-wide state changes.
References
[1] J. Rumbaugh, I. Jacobson, and G. Booch, The Unified Modeling Language Reference Manual, 2nd Edition. Addison-Wesley, 2005.
[2] R. G.
Pettit, Analyzing Dynamic Behavior of Concurrent\nObject-Oriented Software Designs, Ph.D., School of IT&E,\nGeorge Mason University, 2003.\n[3]\n\nK. Jensen, Coloured Petri Nets: Basic Concepts, Analysis\nMethods, and Practical Use, vol. I-III. Berlin, Germany:\nSpringer-Verlag, 1997.\n[4]\n\nH. Gomaa, Designing Concurrent, Distributed, and Real-Time\nApplications with UML, Addison-Wesley, 2000.\n[5]\n\nM. Baldassari, G. Bruno, and A. Castella, \"PROTOB: an\nObject-Oriented CASE Tool for Modeling and Prototyping\nDistributed Systems,\" Software-Practice & Experience,\nv.21, pp. 823-44, 1991.\n[6]\n\nB. Mikolajczak and C. A. Sefranek, \"Integrating Object\nOriented Design with Concurrency Using Petri Nets,\"\nIEEE International Conference on Systems, Man and\nCybernetics, Piscataway, NJ, USA, 2001.\n[7]\n\nR. Aihua, \"An Integrated Development Environment for\nConcurrent Software Developing Based on Object Oriented\nPetri Nets,\" Fourth International Conference/Exhibition on\nHigh Performance Computing in the Asia-Pacific Region.,\nLos Alamitos, CA, USA, 2000.\n[8]\n\nX. He and Y. Ding, \"Object Orientation in Hierarchical\nPredicate Transition Nets,\" Concurrent Object-Oriented\nProgramming and Petri Nets. Advances in Petri Nets,\nBerlin: Springer-Verlag, 2001, pp. 196-215.\n[9]\n\nO. Biberstein, D. Buchs, and N. Guelfi, \"Object-Oriented\nNets with Algebraic Specifications: The CO-OPN/2\nFormalism,\" Concurrent Object-Oriented Programming\nand Petri Nets. Advances in Petri Nets, Berlin: Springer-Verlag\n, 2001, pp. 73-130.\n[10]\n\nS. Chachkov and D. Buchs, \"From Formal Specifications\nto Ready-to-Use Software Components: The Concurrent\nObject Oriented Petri Net Approach,\" Second International\nConference on Application of Concurrency to System\nDesign, Los Alamitos, CA, USA, 2001.\n[11]\n\nA. Camurri, P. Franchi, and M. Vitale, \"Extending High-Level\nPetri Nets for Object-Oriented Design,\" IEEE\nInternational Conference on Systems, Man and\nCybernetics, New York, NY, USA, 1992.\n[12]\n\nJ. E. Hong and D. H. Bae, \"Software Modeling and\nAnalysis Using a Hierarchical Object-Oriented Petri Net,\"\nInformation Sciences, v.130, pp. 133-64, 2000.\n[13]\n\nD. Azzopardi and D. J. Holding, \"Petri Nets and OMT for\nModeling and Analysis of DEDS,\" Control Engineering\nPractices, v.5, pp. 1407-1415, 1997.\n[14]\n\nC. Lakos, \"Object Oriented Modeling With Object Petri\nNets,\" Concurrent Object-Oriented Programming and\nPetri Nets. Advances in Petri Nets, Berlin: Springer-Verlag,\n2001, pp. 1-37.\n[15]\n\nC. Maier and D. Moldt, \"Object Coloured Petri Nets- A\nFormal Technique for Object Oriented Modelling,\"\nConcurrent Object-Oriented Programming and Petri Nets.\nAdvances in Petri Nets, Berlin: Springer-Verlag, 2001, pp.\n406-27.\n[16]\n\nJ. A. Saldhana, S. M. Shatz, and H. Zhaoxia,\n\"Formalization of Object Behavior and Interactions from\nUML Models,\" International Journal of Software\nEngineering & Knowledge Engineering, v.11, pp. 643-73,\n2001.\n[17]\n\nL. Baresi and M. Pezze, \"On Formalizing UML with High-Level\nPetri Nets,\" Concurrent Object-Oriented\nProgramming and Petri Nets. Advances in Petri Nets,\nBerlin: Springer-Verlag, 2001, pp. 276-304.\n[18]\n\nK. M. Hansen, \"Towards a Coloured Petri Net Profile for\nthe Unified Modeling\" Centre for Object Technology,\nAarhus, Denmark, Technical Report COT/2-52-V0.1\n(DRAFT), 2001.\n[19]\n\nJ. B. 
Jrgensen, \"Coloured Petri Nets in UML-Based\nSoftware Development - Designing Middleware for\nPervasive Healthcare,\" CPN '02, Aarhus, Denmark, 2002.\n[20]\n\nB. Bordbar, L. Giacomini, and D. J. Holding, \"UML and\nPetri Nets for Design and Analysis of Distributed Systems,\"\nInternational Conference on Control Applications,\nAnchorage, Alaska, USA, 2000.\n[21]\n\nR. G. Pettit and H. Gomaa, \"Integrating Petri Nets with\nDesign Methods for Concurrent and Real-Time Systems,\"\nReal Time Applications Workshop, Montreal, Canada,\n1996.\n[22]\n\nR. G. Pettit, \"Modeling Object-Oriented Behavior Using\nPetri Nets,\" OOPSLA Workshop on Behavioral\nSpecification, 1999.\n[23]\n\nR. G. Pettit and H. Gomaa, \"Validation of Dynamic\nBehavior in UML Using Colored Petri Nets,\" UML 2000,\nYork, England, 2000.\n[24]\n\nR. G. Pettit and H. Gomaa, \"Modeling State-Dependent\nObjects Using Colored Petri Nets,\" CPN 01 Workshop on\nModeling of Objects, Components, and Agents, Aarhus,\nDenmark, 2001.\n[25]\n\nR.G. Pettit and H. Gomaa, \"Modeling Behavioral Patterns\nof Concurrent Software Architectures Using Petri Nets.\"\nWorking IEEE/IFIP Conference on Software Architectures,\nOslo, Norway, 2004.\n[26]\n\nR. David and H. Alla, \"Petri Nets for Modeling of Dynamic\nSystems: A Survey.\" Automatica v.30(2). Pp. 175-202.\n1994.\n[27]\n\nK. Jensen, \"DesignCPN,\" 4.0 ed. Aarhus, Denmark:\nUniversity of Aarhus, 1999.\n[28]\n\nB. Lindstrom and L. Wells, \"Design/CPN Performance\nTool Manual,\" University of Aarhus, Aarhus, Denmark\nSeptember 1999.\n211", "keywords": "Software Architecture;Behavioral Design Patterns;Colored Petri Nets;COMET"} {"name": "14", "title": "A New Approach to Intranet Search Based on Information Extraction", "abstract": "This paper is concerned with `intranet search'. By intranet search, we mean searching for information on an intranet within an organization. We have found that search needs on an intranet can be categorized into types, through an analysis of survey results and an analysis of search log data. The types include searching for definitions, persons, experts, and homepages. Traditional information retrieval only focuses on search of relevant documents, but not on search of special types of information. We propose a new approach to intranet search in which we search for information in each of the special types, in addition to the traditional relevance search. Information extraction technologies can play key roles in such kind of `search by type' approach, because we must first extract from the documents the necessary information in each type. We have developed an intranet search system called `Information Desk'. In the system, we try to address the most important types of search first - finding term definitions, homepages of groups or topics, employees' personal information and experts on topics. For each type of search, we use information extraction technologies to extract, fuse, and summarize information in advance. The system is in operation on the intranet of Microsoft and receives accesses from about 500 employees per month. Feedbacks from users and system logs show that users consider the approach useful and the system can really help people to find information. This paper describes the architecture, features, component technologies, and evaluation results of the system.", "fulltext": "INTRODUCTION\nInternet search has made significant progress in recent years. In\ncontrast, intranet search does not seem to be so successful. 
The IDC white paper entitled "The high cost of not finding information" [13] reports that information workers spend from 15% to 35% of their work time searching for information, and that 40% of information workers complain that they cannot find the information they need to do their jobs on their company intranets.
Many commercial systems [35, 36, 37, 38, 39] have been developed for intranet search. However, most of them view intranet search as a problem of conventional relevance search. In relevance search, when a user types a query, the system returns a list of ranked documents with the most relevant documents at the top.
Relevance search can only serve average needs well. It cannot, however, help users to find information of a specific type, e.g., definitions of a term or experts on a topic. The characteristics of intranet search do not seem to be sufficiently leveraged in the commercial systems.
In this paper, we address intranet search with a novel approach. We assume that the needs of information access on intranets can be categorized into searches for information of different types. An analysis of search log data from the intranet of Microsoft and an analysis of the results of a survey conducted at Microsoft have verified the correctness of this assumption.
Our proposal, then, is to take a strategy of `divide-and-conquer'. We first figure out the most important types of search, e.g., definition search and expert search. For each type, we employ information extraction technologies to extract, fuse, and summarize search results in advance. Finally, we combine all the types of searches together, including the traditional relevance search, in a unified system. In this paper, we refer to the approach as `search by type'. Search by type can also be viewed as a simplified version of Question Answering, adapted to the intranet setting.
The advantage of the new approach is that it can help people find the types of information that relevance search cannot easily find. The approach is particularly reasonable on intranets, because in this setting users are information workers and search needs are business oriented.
We have developed a system based on the approach, called `Information Desk'. Information Desk can help users find term definitions, homepages of groups or topics, employees' personal information, and experts on topics on their company intranets.
The system has been in practical use since November 24th, 2004. Each month, about 500 Microsoft employees access the system. Both the results of an analysis of a survey and the results of an analysis of the system log show that the features of definition search and homepage search are really helpful. The results also show that search by type is necessary in the enterprise.
RELATED WORK
The need for search on intranets is huge.
It is estimated that\nintranets at enterprises have tens or even hundreds of times larger\ndata collections (both structured and unstructured) than internet.\nAs explained above, however, many users are not satisfied with\nthe current intranet search systems. How to help people access\ninformation on intranet is a big challenge in information retrieval.\nMuch effort has been made recently on solutions both in industry\nand in academia.\nMany commercial systems [35, 36, 37, 38, 39] dedicated to\nintranet search have been developed. Most of the systems view\nintranet search as a problem of conventional relevance search.\nIn the research community, ground designs, fundamental\napproaches, and evaluation methodologies on intranet search have\nbeen proposed.\nHawking et al [17] made ten suggestions on how to conduct high\nquality intranet search. Fagin et al [12] made a comparison\nbetween internet search and intranet search. Recently, Hawking\n[16] conducted a survey on previous work and made an analysis\non the intranet search problem. Seven open problems on intranet\nsearch were raised in their paper.\nChen et al [3] developed a system named `Cha-Cha', which can\norganize intranet search results in a novel way such that the\nunderlying structure of the intranet is reflected. Fagin et al [12]\nproposed a new ranking method for intranet search, which\ncombine various ranking heuristics. Mattox et al [25] and\nCraswell et al [7] addressed the issue of expert finding on a\ncompany intranet. They developed methods that can automatically\nidentify experts in an area using documents on the intranet.\nStenmark [30] proposed a method for analyzing and evaluating\nintranet search tools.\n2.2 Question Answering\nQuestion Answering (QA) particularly that in TREC\n(http://trec.nist.gov/) is an application in which users type\nquestions in natural language and the system returns short and\nusually single answers to the questions.\nWhen the answer is a personal name, a time expression, or a place\nname, the QA task is called `Factoid QA'. Many QA systems have\nbeen developed, [2, 4, 18, 20, 22, 27]. Factoid QA usually\nconsists of the following steps: question type identification,\nquestion expansion, passage retrieval, answer ranking, and answer\ncreation.\nTREC also has a task of `Definitional QA'. In the task, \"what is\n<term>\" and \"who is <person>\" questions are answered in a\nsingle combined text [1, 11, 15, 33, 34]. A typical system consists\nof question type identification, document retrieval, key sentence\nmatching, kernel fact finding, kernel fact ranking, and answer\ngeneration.\n\nOUR APPROACH TO INTRANET SEARCH\nSearch is nothing but collecting information based on users'\ninformation access requests. If we can correctly gather\ninformation on the basis of users' requests, then the problem is\nsolved. Current intranet search is not designed along this\ndirection. Relevance search can help create a list of ranked\ndocuments that serve only average needs well. The limitation of\nthis approach is clear. That is, it cannot help users to find\ninformation of a specific type, e.g., definitions of a term. On the\nother hand, Question Answering (QA) is an ideal form for\ninformation access. When a user inputs a natural language\nquestion or a query (a combination of keywords) as a description\nof his search need, it is ideal to have the machine `understand' the\ninput and return only the necessary information based on the\nrequest. 
However, there is still a lot of research work to do before QA can be put into practical use. In the short term, we need to consider adopting a different approach.
One question arises here: can we take a hybrid approach? Specifically, on one hand, we adopt the traditional approach for search, and on the other hand, we realize some of the most frequently asked types of search with QA. Finally, we integrate them in a single system. For the QA part, we can employ information extraction technologies to extract, fuse, and summarize the results in advance. This is exactly the proposal we make for intranet search.
Can we categorize users' search needs easily? We have found that we can create a hierarchy of search needs for intranet search, as will be explained in section 4.
On intranets, users are information workers and their motivations for conducting search are business oriented. We think, therefore, that our approach may be relatively easily realized on intranets first. (There is no reason why we cannot apply the same approach to the internet, however.)
To verify the correctness of the proposal, we have developed a system and made it available internally at Microsoft. The system, called Information Desk, is in operation on the intranet of Microsoft and receives accesses from about 500 employees per month.
At Information Desk, we try to solve the most important types of search first - finding term definitions, homepages of groups or topics, experts on topics, and employees' personal information. We are also trying to increase the number of search types and to integrate them with the conventional relevance search. We will explain the working of Information Desk in section 5.
ANALYSIS OF SEARCH NEEDS
In this section, we describe our analyses of intranet search needs using search query logs and survey results.
4.1 Categorization of Search Needs
In order to understand the underlying needs of search queries, we would need to ask the users about their search intentions. Obviously, this is not feasible. We therefore conducted an analysis using query log data. Here query log data means the records of queries typed by users, and the documents clicked by the users after sending the queries.
Our work was inspired by that of Rose and Levinson [28]. In their work, they categorized the search needs of users on the internet by analyzing search query logs.
We tried to understand users' search needs on the intranet by identifying and organizing a manageable number of categories of the needs. The categories encompass the majority of actual requests users may have when conducting search on an intranet. We used a sample of queries from the search engine of the intranet of Microsoft. First, we brainstormed a number of categories, based on our own experiences and previous work. Then, we modified the categories, including adding, deleting, and merging categories, by assigning queries to the categories.
Given a query, we used the following information to deduce the underlying search need: the query itself, the documents returned by the search engine, and the documents clicked on by the user. For example, if a user typed the keyword `.net' and clicked a homepage of .net, then we judged that the user was looking for a homepage of .net.
As we repeated the process, we gradually reached the conclusion that search needs on an intranet can be categorized into the hierarchical structure shown in Figure 1.
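As a rough illustration of how such a search need can be deduced from a log record, the sketch below combines simple query patterns with a crude clicked-URL heuristic, in the spirit of the `.net' example above. The rules, category names and thresholds are our own simplifications for illustration, not the labeling procedure actually used in the analysis.

```python
# Hypothetical sketch: deduce a coarse search-need category from a query-log
# record (query text plus the URL the user clicked), loosely following the
# categories of Figure 1. The patterns below are illustrative only.
import re

INFORMATIONAL_PATTERNS = [
    (r"^(what is|what's)\b", "what is (definition)"),
    (r"^who knows about\b",  "who knows about (expert)"),
    (r"^who is\b",           "who is (person)"),
    (r"^how to\b",           "how to (manual)"),
    (r"^when\b",             "when (time)"),
    (r"^where\b",            "where (place)"),
    (r"^why\b",              "why (reason)"),
]

def deduce_search_need(query: str, clicked_url: str = "") -> str:
    """Guess the underlying search need of one log record."""
    q = query.strip().lower()
    for pattern, category in INFORMATIONAL_PATTERNS:
        if re.search(pattern, q):
            return category
    # If the user clicked what looks like a site entry page (short path),
    # treat the query as navigational, as in the `.net' homepage example.
    path = clicked_url.split("//")[-1].rstrip("/")
    if clicked_url and path.count("/") <= 1:
        return "navigational (homepage)"
    if any(w in q for w in ("download", "install", "setup")):
        return "transactional"
    return "tell me about (relevance)"

if __name__ == "__main__":
    print(deduce_search_need(".net", "http://teamsite/dotnet/"))
    print(deduce_search_need("what is blaster"))
    print(deduce_search_need("data mining whitepaper", "http://research/pubs/dm2004.doc"))
```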
In fact, the top level of the hierarchy resembles that in the taxonomy proposed by Rose and Levinson for the internet [28]. However, the second level differs. On an intranet, users' search needs are less diverse than those on the internet, because the users are information workers and their motivations for conducting search are business oriented.
There is a special need called `tell me about' here. It is similar to the traditional relevance search. Many search needs are by nature difficult to categorize, for example, "I want to find documents related to both .net and SQL Server". We can put them into this category.
We think that the search needs are not Microsoft specific; one can imagine that similar needs exist in other companies as well.
Figure 1. Categories of search needs:
Informational - When (time), Where (place), Why (reason), What is (definition), Who knows about (expert), Who is (person), How to (manual), Tell me about (relevance)
Navigational - Person, Product, Technology, Services, Group
Transactional
4.2 Analysis on Search Needs by Query Log
We randomly selected 200 unique queries and tried to assign the queries to the categories of search needs described above. Table 1 shows the distribution. We also picked up the top 350 most frequently submitted queries and assigned them to the categories. Table 2 shows the distribution. (There is no result for `why', `what is', and `who knows about', because it is nearly impossible to guess users' search intentions by only looking at query logs.)
For random queries, informational needs are dominating. For high frequency queries, navigational needs are dominating. The most important types for random queries are relevance search, personal information search, and manual search. The most important types for high frequency queries are home page search and relevance search.
4.3 Analysis on Search Needs by Survey
We can use query log data to analyze users' search needs, as described above. However, there are two shortcomings in this approach. First, sometimes it is difficult to guess the search intentions of users by only looking at query logs. This is especially true for the categories of `why' and `what'. Usually it is hard to distinguish them from `relevance search'. Second, query log data cannot reveal users' potential search needs. For example, many employees report that they have needs of searching for experts on specific topics. However, it is difficult to find expert searches in the query log of a conventional search engine, because users understand that such search is not supported and they do not conduct the search.
To alleviate the negative effect, we have conducted another analysis through a survey. Although a survey also has limitations (i.e., it only asks people to answer pre-defined questions and thus can be biased), it can help to understand the problem from a different perspective.
Table 1. Distribution of search needs for random queries
Category of Search Needs / Percentage
When 0.02
Where 0.02
Why NA
What is NA
Who knows about NA
Who is 0.23
How to 0.105
Tell me about 0.46
Informational total 0.835
Groups 0.03
Persons 0.005
Products 0.02
Technologies 0.02
Services 0.06
Navigational total 0.135
Transactional 0.025
Other 0.005
Table 2. Distribution of search needs for high frequency queries
Category of Search Needs / Relative Prevalence
When 0.0057
Where 0.0143
Why NA
What is NA
Who knows about NA
Who is 0.0314
How to 0.0429
Tell me about 0.2143
Informational total 0.3086
Groups 0.0571
Persons 0.0057
Products 0.26
Technologies 0.0829
Services 0.2371
Navigational total 0.6428
Transactional 0.0086
Other 0.04
I have experience of conducting search on the Microsoft intranet to look for the web sites (or homepages) of (multiple choice):
technologies 74 %
products 74 %
services 68 %
projects 68 %
groups 60 %
persons 42 %
none of the above 11 %
I have experience of conducting search on the Microsoft intranet in which the needs can be translated into questions like (multiple choice):
`what is' - e.g., "what is blaster" 77 %
`how to' - e.g., "how to submit expense report" 54 %
`where' - e.g., "where is the company store" 51 %
`who knows about' - e.g., "who knows about data mining" 51 %
`who is' - e.g., "who is Rick Rashid" 45 %
`when' - e.g., "when is TechFest'05" 42 %
`why' - e.g., "why do Windows NT device drivers contain trusted code" 28 %
none of the above 14 %
I have experience of conducting search on the Microsoft intranet in order to (multiple choice):
download a software, a document, or a picture, e.g., "getting MSN logo" 71 %
make use of a service, e.g., "getting a serial number of Windows" 53 %
none of the above 18 %
Figure 2. Survey results on search needs
In the survey, we asked questions regarding search needs at the enterprise. 35 Microsoft employees took part in the survey. Figure 2 shows the questions and the corresponding results.
We see from the answers that definition search, manual search, expert finding, personal information search, and time schedule search are requested by the users. Homepage finding on technologies and products is important as well. Search for a download site is also a common request.
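Once each sampled query has been assigned a category by hand, distributions like those in Tables 1 and 2 are a matter of simple counting. The sketch below only tallies such manual labels into proportions; the sample labels in it are invented.

```python
# Minimal sketch: turn manually assigned category labels for sampled queries
# into a distribution of search needs, in the style of Tables 1 and 2.
# The sample data below is invented for illustration.
from collections import Counter

def need_distribution(labeled_queries):
    """labeled_queries: iterable of (query, category) pairs."""
    counts = Counter(category for _, category in labeled_queries)
    total = sum(counts.values())
    return {category: round(n / total, 3) for category, n in counts.most_common()}

if __name__ == "__main__":
    sample = [
        ("expense report", "How to (manual)"),
        ("john smith", "Who is (person)"),
        ("sql server sp4", "Tell me about (relevance)"),
        ("office team site", "Navigational - Group"),
        ("msn logo download", "Transactional"),
    ]
    for category, share in need_distribution(sample).items():
        print(f"{category:30s} {share:.3f}")
```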
Figure 3: Information Desk system. Example result pages for the four search types: `what is Longhorn' (a ranked list of extracted definitions of Longhorn with source URLs), `where is homepage of Office' (Office-related portal and team sites), `who knows about data mining' (persons such as Jamie MacLennan and Jim Gray with their associated documents), and `who is Bill Gates' (profile information, authored documents, and top key terms).
INFORMATION DESK
Currently Information Desk provides four types of search. The four types are:
1. `what is' search of definitions and acronyms. Given a term, it returns a list of definitions of the term. Given an acronym, it returns a list of possible expansions of the acronym.
2. `who is' search of employees' personal information. Given the name of a person, it returns his/her profile information, authored documents and associated key terms.
3. `where is homepage of' search of homepages. Given the name of a group, a product, or a technology, it returns a list of its related home pages.
4. `who knows about' search of experts. Given a term on a technology or a product, it returns a list of persons who might be experts on the technology or the product.
Figure 4. Workflow of Information Desk: a crawler and extractor processes documents on MS Web and populates stores of term-definition/acronym pairs (`what is'), person-document-key term associations (`who is'), term-person-document associations (`who knows about'), and term-homepage pairs (`where is homepage of'), which are served through a web server.
There are check boxes on the UI, and each represents one search type. In search, users can designate search types by checking the corresponding boxes and then submit queries. By default, all the boxes are checked.
For example, when users type `longhorn' with the `what is' box checked, they get a list of definitions of `Longhorn' (the first snapshot in figure 3). Users can also search for homepages (team web sites) related to `Office', using the `where is homepage' feature (the second snapshot in figure 3). Users can search for experts on, for example, `data mining' by asking `who knows about data mining' (the third snapshot in figure 3). Users can also get a list of documents that are automatically identified as being authored by `Bill Gates', for example, with the `who is' feature (the last snapshot in figure 3). The top ten key terms found in his documents are also given.
Links to the original documents, from which the information has been extracted, are also available on the search result UIs.
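The check boxes can be pictured as a small dispatcher that routes the query to the handlers of whichever search types are selected and collects their answers. The sketch below is our own schematic of this routing; the handler table, the in-memory dictionaries standing in for the database of extracted information, and the sample entries are all hypothetical.

```python
# Hypothetical sketch of the search-by-type routing behind the UI check boxes.
# Pre-extracted information is modeled here as plain dictionaries; in the real
# system it is extracted offline and stored in a database.
DEFINITIONS = {"longhorn": ["Longhorn is the codename for the next release of Windows."]}
HOMEPAGES   = {"office":   ["http://office-portal.example/"]}
EXPERTS     = {"data mining": ["Jamie MacLennan", "Jim Gray"]}
PEOPLE      = {"bill gates": {"title": "Chairman & Chief Software Architect", "documents": 118}}

HANDLERS = {
    "what is":              lambda q: DEFINITIONS.get(q, []),
    "where is homepage of": lambda q: HOMEPAGES.get(q, []),
    "who knows about":      lambda q: EXPERTS.get(q, []),
    "who is":               lambda q: PEOPLE.get(q),
}

def search_by_type(query, checked_types):
    """Run the query against every checked search type and collect the answers."""
    q = query.strip().lower()
    return {t: HANDLERS[t](q) for t in checked_types if t in HANDLERS}

if __name__ == "__main__":
    print(search_by_type("Longhorn", ["what is", "who knows about"]))
```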
5.2 Technologies
5.2.1 Architecture
Information Desk makes use of information extraction technologies to support the search by type features: document metadata and domain specific knowledge are automatically extracted from the web sites on the intranet. The domain specific knowledge includes definitions, acronyms, and experts. The document metadata includes titles, authors, key terms, and homepages. Documents are in the form of Word, PowerPoint, or HTML. Information Desk stores all the extracted data in Microsoft SQL Server and provides search through a web interface.
Titles, authors, and key terms are extracted from the crawled documents as metadata. The precision and recall of title extraction from PowerPoint documents are 0.907 and 0.951, respectively.
Metadata extraction has been intensively studied. For instance, Han et al [14] proposed a method for metadata extraction from research papers. They considered the problem as one of classification based on SVM. They mainly used linguistic information as features. To the best of our knowledge, no previous work has been done on metadata extraction from general documents. We report our title extraction work in detail in [19].
The feature of `who is' can help find documents authored by a person even when they exist on different team web sites. Information extraction (specifically metadata extraction) makes this aggregation of information possible.
5.2.4 `Who knows about'
The basic idea for the feature is that if a person has authored many documents on an issue (term), then it is very likely that he/she is an expert on the issue, or if the person's name co-occurs many times with the issue, then it is likely that he/she is an expert on the issue.
As described above, we can extract titles, authors, and key terms from all the documents. In this way, we know how many times each person is associated with each topic in the extracted titles and in the extracted key terms. We also go through all the documents and see how many times each person's name co-occurs with each topic in text segments within a pre-determined window size.
In search, we use the three types of information, topic in title, topic in key term, and topic in text segment, to rank persons, five persons for each type. We rank persons with a heuristic method and return the list of ranked persons. A person who has several documents with titles containing the topic will be ranked higher than a person whose name co-occurs with the topic in many documents.
It appears that the results of the feature largely depend on the size of the document collection we crawl. Users' feedback on the results shows that sometimes the results are very accurate; however, sometimes they are not (due to the lack of information).
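One possible reading of this ranking heuristic is sketched below: matches of the topic in document titles are weighted above matches in extracted key terms, which in turn outweigh plain co-occurrence counts, so that an author of topical documents outranks someone whose name merely co-occurs with the topic. The weights, data layout and example counts are our own guesses for illustration; the paper does not give the exact scoring.

```python
# Hypothetical sketch of the `who knows about' ranking heuristic: a person who
# authors documents whose titles contain the topic is preferred over one whose
# name merely co-occurs with the topic in text. Weights are illustrative.
from collections import defaultdict

TITLE_WEIGHT, KEYTERM_WEIGHT, COOCCUR_WEIGHT = 10.0, 3.0, 1.0

def rank_experts(topic, title_hits, keyterm_hits, cooccur_hits, top_n=5):
    """Each *_hits argument maps topic -> {person: number of associations}."""
    scores = defaultdict(float)
    for person, n in title_hits.get(topic, {}).items():
        scores[person] += TITLE_WEIGHT * n
    for person, n in keyterm_hits.get(topic, {}).items():
        scores[person] += KEYTERM_WEIGHT * n
    for person, n in cooccur_hits.get(topic, {}).items():
        scores[person] += COOCCUR_WEIGHT * n
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    # Invented counts, assumed to be extracted offline from crawled documents.
    title_hits   = {"data mining": {"Jamie MacLennan": 2}}
    keyterm_hits = {"data mining": {"Jamie MacLennan": 3, "Jim Gray": 2}}
    cooccur_hits = {"data mining": {"Jim Gray": 12, "Someone Else": 4}}
    print(rank_experts("data mining", title_hits, keyterm_hits, cooccur_hits))
```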
Craswell et al. developed a system called `P@NOPTIC', which can automatically find experts using documents on an intranet [7]. The system took documents as plain texts and did not utilize metadata of documents as we do at Information Desk.
5.2.5 `Where is homepage of'
We identify homepages (team web sites) using several rules. Most of the homepages on the intranet of Microsoft are created by SharePoint, a product of Microsoft. From SharePoint, we can obtain a property of each page called `ContentClass'. It tells exactly whether a web page corresponds to a homepage or a team site, so we know it is a homepage (obviously, this does not apply in general). Next we use several patterns to pull out titles from the homepages. The precision of home page identification is nearly 100%.
In search, we rank the discovered home pages related to a query term using the URL lengths of the home pages. A home page with a shorter URL will be ranked higher.
TREC has a task called `home/named page finding' [8, 9], which is to find home pages talking about a topic. Many methods have been developed for pursuing the task [5, 6, 26, 29]. Since we can identify homepages by using special properties of our domain, we do not consider employing a similar method.
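The URL-length rule lends itself to a one-line comparator. The sketch below assumes that the identified homepages are available as (title, URL) pairs and simply sorts those whose titles match the query term, shorter URLs first; it is our own illustration of the rule, not the production code.

```python
# Minimal sketch of the `where is homepage of' ranking rule: among homepages
# whose titles match the query term, a shorter URL is ranked higher.
def rank_homepages(query_term, homepages):
    """homepages: iterable of (title, url) pairs identified beforehand."""
    term = query_term.lower()
    matches = [(title, url) for title, url in homepages if term in title.lower()]
    return sorted(matches, key=lambda pair: len(pair[1]))

if __name__ == "__main__":
    pages = [  # invented examples
        ("Office Portal Site", "http://intranet.example/sites/office/portal/home.aspx"),
        ("Office", "http://office.example/"),
        ("New Office Site", "http://intranet.example/office/"),
    ]
    for title, url in rank_homepages("office", pages):
        print(f"{title:20s} {url}")
```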
EVALUATION
Usually it is hard to conduct an evaluation of a practical system. We evaluated the usefulness of Information Desk by conducting a survey and by recording system logs.
We have found from the analysis results that the `what is' and `where is homepage of' features are very useful. The `who is' feature works well, but the `who knows about' feature still needs improvement.
6.1 Survey Result Analysis
The survey described in section 4.3 also includes feedback on Information Desk.
Figure 6 shows a question on the usefulness of the features and a summary of the answers. We see that the features `where is homepage of' and `what is' are regarded as useful by the responders in the survey.
Figure 7 shows a question on new features and a summary of the answers. We see that the users want to use the features of `how to', `when', `where' and `why' in the future. This also justifies the correctness of our claim on intranet search made in section 4.
Figure 8 shows a question on purposes of use and a digest of the results. About 50% of the responders really want to use Information Desk to search for information.
There is also an open-ended question asking people to make comments freely. Figure 9 gives some typical answers from the responders. The first and second answers are very positive, while the third and fourth point out the necessity of increasing the coverage of the system.
Which feature of Information Desk has helped you in finding information?
`where is homepage of' - finding homepages 54 %
`what is' - finding definitions/acronyms 25 %
`who is' - finding information about people 18 %
`who knows about' - finding experts 3 %
Figure 6. Users' evaluation of Information Desk
What kind of new feature do you want to use at Information Desk? (multiple choice)
`how to' - e.g., "how to activate Windows" 57 %
`when' - e.g., "when is Yukon RTM" 57 %
`where' - e.g., "where can I find an ATM" 39 %
`why' - e.g., "why doesn't my printer work" 28 %
others 9 %
Figure 7. New features expected by users
I visited Information Desk today to
conduct testing on Information Desk 54 %
search for information related to my work 46 %
Figure 8. Motivation of using Information Desk
Please provide any additional comments, thanks!
This is a terrific tool! Including `how to' and `when' capabilities will put this in the `can't live without it' category.
Extremely successful searching so far! Very nice product with great potential.
I would like to see more `Microsoftese' definitions. There is a lot of cultural/tribal knowledge here that is not explained anywhere.
Typing in my team our website doesn't come up in the results, is there any way we can provide content for the search tool e.g., our group sharepoint URL?
...
Figure 9. Typical user comments to Information Desk
6.2 System Log Analysis
We have kept a log during the running of Information Desk. The log includes user IP addresses, queries and clicked documents (recall that links to the original documents, from which information has been extracted, are given in search). The log data was collected from 1,303 unique users during the period from November 26th, 2004 to February 22nd, 2005. The users were Microsoft employees.
In the log, there are 9,076 query submission records. The records include 4,384 unique query terms. About 40% of the queries are related to the `what is' feature, 29% to `where is homepage of', 30% to `who knows about' and 22% to `who is'. A query can be related to more than one feature.
In the log, there are 2,316 clicks on documents after query submissions. The numbers of clicks for the `what is', `where is homepage of', `who knows about', and `who is' features are 694, 1041, 200 and 372, respectively. Note that for `what is', `where is homepage of', and `who knows about' we conduct ranking on the retrieved information. The top ranked results are considered to be the best. If a user has clicked a top ranked document, then it means that he is interested in the document, and thus it is very likely that he has found the information he looks for. Thus a feature for which the clicked documents have a smaller average rank can be considered to perform better. We used the average rank of clicked documents to evaluate the performances of the features. The average ranks of clicks for `what is', `where is homepage of' and `who knows about' are 2.4, 1.4 and 4.7 respectively. The results indicate that for the first two features, users usually can find the information they look for within the top three answers. Thus it seems safe to say that the system has achieved practically acceptable performance for these two features. As for `who is', ranking of a person's documents does not seem to be necessary and the performance should be evaluated in a different way (for example, by the precision and recall of metadata extraction, as we have already reported in section 5).
CONCLUSION
In this paper, we have investigated the problem of intranet search using information extraction.
Through an analysis of survey results and an analysis of search log data, we have found that search needs on an intranet can be categorized into a hierarchy.
Based on the finding, we propose a new approach to intranet search in which we conduct search for each special type of information.
We have developed a system called `Information Desk', based on the idea. In Information Desk, we provide search on four types of information - finding term definitions, homepages of groups or topics, employees' personal information and experts on topics.
Information Desk has\nbeen deployed to the intranet of Microsoft and has received\naccesses from about 500 employees per month. Feedbacks\nfrom users show that the proposed approach is effective and\nthe system can really help employees to find information.\nFor each type of search, information extraction technologies\nhave been used to extract, fuse, and summarize information\nin advance. High performance component technologies for\nthe mining have been developed.\nAs future work, we plan to increase the number of search types\nand combine them with conventional relevance search.\n\nACKNOWLEDGMENTS\nWe thank Jin Jiang, Ming Zhou, Avi Shmueli, Kyle Peltonen,\nDrew DeBruyne, Lauri Ellis, Mark Swenson, and Mark Davies for\ntheir supports to the project.\n\nREFERENCES\n[1] S. Blair-Goldensohn, K.R. McKeown, A.H. Schlaikjer. A\nHybrid Approach for QA Track Definitional Questions. In\nProc. of Twelfth Annual Text Retrieval Conference (TREC12\n), NIST, Nov., 2003.\n[2] E. Brill, S. Dumais, and M. Banko, An Analysis of the\nAskMSR Question-Answering System,\nEMNLP 2002\n\n[3] M. Chen, A. Hearst, A. Marti, J. Hong, and J. Lin, Cha-Cha:\nA System for Organizing Intranet Results. Proceedings of the\n2nd USENIX Symposium on Internet Technologies and\nSystems. Boulder, CO. Oct. 1999.\n[4] C. L. A. Clarke, G. V. Cormack, T. R. Lynam, C. M. Li, and\nG. L. McLearn, Web Reinforced Question Answering\n(MultiText Experiments for TREC 2001). TREC 2001\n[5] N. Craswell, D. Hawking, and S.E. Robertson. Effective site\nfinding using link anchor information. In Proc. of the 24th\nannual international ACM SIGIR conference on research\nand development in information retrieval, pages 250--257,\n2001.\n[6] N. Craswell, D. Hawking, and T. Upstill. TREC12 Web and\nInteractive Tracks at CSIRO. In TREC12 Proceedings, 2004.\n[7] N. Craswell, D. Hawking, A. M. Vercoustre, and P. Wilkins.\nP@noptic expert: Searching for experts not just for\ndocuments. Poster Proceedings of AusWeb'01,\n467\n2001b./urlausweb.scu.edu.au/aw01/papers/edited/vercoustre/\npaper.htm.\n[8] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu.\nOverview of the TREC-2003 Web Track. In NIST Special\nPublication: 500-255, The Twelfth Text REtrieval\nConference (TREC 2003), Gaithersburg, MD, 2003.\n[9] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu. Task\nDescriptions: Web Track 2003. In TREC12 Proceedings,\n2004.\n\n[10] H. Cui, M-Y. Kan, and T-S. Chua. Unsupervised Learning of\nSoft Patterns for Definitional Question Answering,\nProceedings of the Thirteenth World Wide Web conference\n(WWW 2004), New York, May 17-22, 2004.\n[11] A. Echihabi, U.Hermjakob, E. Hovy, D. Marcu, E. Melz, D.\nRavichandran. Multiple-Engine Question Answering in\nTextMap. In Proc. of Twelfth Annual Text Retrieval\nConference (TREC-12), NIST, Nov., 2003.\n[12] R. Fagin, R. Kumar, K. S. McCurley, J. Novak, D.\nSivakumar, J. A. Tomlin, and D. P. Williamson. Searching\nthe workplace web. Proc. 12th World Wide Web Conference,\nBudapest, 2003.\n[13] S. Feldman and C. Sherman. The high cost of not finding\ninformation. Technical Report #29127, IDC, April 2003.\n[14] H. Han, C. L. Giles, E. Manavoglu, H. Zha, Z. Zhang, and E.\nA. Fox. Automatic Document Metadata Extraction using\nSupport Vector Machines. In Proceedings of the third\nACM/IEEE-CS joint conference on Digital libraries, 2003\n[15] S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, J.\nWilliams, J. Bensley. Answer Mining by Combining\nExtraction Techniques with Abductive Reasoning. 
In Proc.\nof Twelfth Annual Text Retrieval Conference (TREC-12),\nNIST, Nov., 2003.\n[16] D. Hawking. Challenges in Intranet search. Proceedings of\nthe fifteenth conference on Australasian database. Dunedin,\nNew Zealand, 2004.\n[17] D. Hawking, N. Craswell, F. Crimmins, and T. Upstill.\nIntranet search: What works and what doesn't. Proceedings\nof the Infonortics Search Engines Meeting, San Francisco,\nApril 2002.\n[18] E. Hovy, L. Gerber, U. Hermjakob, M. Junk, and C. Y. Lin.\nQuestion Answering in Webclopedia. TREC 2000\n[19] Y. Hu, H. Li, Y. Cao, D. Meyerzon, and Q. Zheng.\nAutomatic Extraction of Titles from General Documents\nusing Machine Learning. To appear at Proc. of Joint\nConference on Digital Libraries (JCDL), 2005. Denver,\nColorado, USA. 2005.\n[20] A. Ittycheriah and S. Roukos, IBM's Statistical Question\nAnswering System-TREC 11. TREC 2002\n[21] J. Klavans and S. Muresan. DEFINDER: Rule-Based\nMethods for the Extraction of Medical Terminology and\ntheir Associated Definitions from On-line Text. In\nProceedings of AMIA Symposium 2000.\n[22] C. C. T. Kwok, O. Etzioni, and D. S. Weld, Scaling question\nanswering to the Web. WWW-2001: 150-161\n[23] Y. Li, H Zaragoza, R Herbrich, J Shawe-Taylor, and J. S.\nKandola. The Perceptron Algorithm with Uneven Margins.\nin Proceedings of ICML'02.\n[24] B. Liu, C. W. Chin, and H. T. Ng. Mining Topic-Specific\nConcepts and Definitions on the Web. In Proceedings of the\ntwelfth international World Wide Web conference (WWW-2003\n), 20-24 May 2003, Budapest, HUNGARY.\n[25] D. Mattox, M. Maybury and D. Morey. Enterprise Expert\nand Knowledge Discovery. Proceedings of the HCI\nInternational '99 (the 8th International Conference on\nHuman-Computer Interaction) on Human-Computer\nInteraction: Communication, Cooperation, and Application\nDesign-Volume 2 - Volume 2. 1999.\n[26] P. Ogilvie and J. Callan. Combining Structural Information\nand the Use of Priors in Mixed Named-Page and Homepage\nFinding. In TREC12 Proceedings, 2004.\n\n[27] D. R. Radev, W. Fan, H. Qi, H. Wu, and A. Grewal.\nProbabilistic question answering on the web. WWW 2002:\n408-419\n[28] D. E. Rose and D. Levinson. Understanding user goals in\nweb search. Proceedings of the 13th international World\nWide Web conference on Alternate track papers & posters,\n2004 New York, USA.\n[29] J. Savoy, Y. Rasolofo, and L. Perret, L. Report on the TREC-2003\nExperiment: Genomic and Web Searches. In TREC12\nProceedings, 2004.\n[30] D. Stenmark. A Methodology for Intranet Search Engine\nEvaluations. Proceedings of IRIS22, Department of CS/IS,\nUniversity of Jyvskyl, Finland, August 1999.\n[31] V. N. Vapnik. The Nature of Statistical Learning Theory.\nSpringer, 1995.\n[32] J. Xu, Y. Cao, H. Li, and M. Zhao. Ranking Definitions with\nSupervised Learning Methods. In Proc. of 14\nth\nInternational\nWorld Wide Web Conference (WWW05), Industrial and\nPractical Experience Track, Chiba, Japan, pp.811-819, 2005.\n[33] J. Xu, A. Licuanan, R. Weischedel. TREC 2003 QA at BBN:\nAnswering Definitional Questions. In Proc. of 12\nth\nAnnual\nText Retrieval Conference (TREC-12), NIST, Nov., 2003.\n[34] H. Yang, H. Cui, M. Maslennikov, L. Qiu, M-Y. Kan, and TS\n. Chua, QUALIFIER in TREC-12 QA Main Task. TREC\n2003: 480-488\n[35] Intellectual capital management products. Verity,\nhttp://www.verity.com/\n[36] IDOL server. Autonomy,\nhttp://www.autonomy.com/content/home/\n[37] Fast data search. Fast Search & Transfer,\nhttp://www.fastsearch.com/\n[38] Atomz intranet search. 
Atomz, http://www.atomz.com/\n[39] Google Search Appliance. Google,\nhttp://www.google.com/enterprise/\n\n468", "keywords": "Search Needs;metadata extraction;features;architecture;Experimentation;definition search;INFORMATION DESK;information extraction;expert finding;Algorithms;intranet search;Human Factors;information retrieval;component technologies;Intranet search;types of information"} {"name": "140", "title": "Modeling Node Compromise Spread in Wireless Sensor Networks Using Epidemic Theory", "abstract": "Motivated by recent surfacing viruses that can spread over the air interfaces, in this paper, we investigate the potential disastrous threat of node compromise spreading in wireless sensor networks. Originating from a single infected node, we assume such a compromise can propagate to other sensor nodes via communication and preestablished mutual trust. We focus on the possible epidemic breakout of such propagations where the whole network may fall victim to the attack. Based on epidemic theory, we model and analyze this spreading process and identify key factors determining potential outbreaks. In particular, we perform our study on random graphs precisely constructed according to the parameters of the network, such as distance, key sharing constrained communication and node recovery, thereby reflecting the true characteristics therein. The analytical results provide deep insights in designing potential defense strategies against this threat. Furthermore , through extensive simulations, we validate our model and perform investigations on the system dynamics. Index Terms-- Sensor Networks, Epidemiology, Random Key Predistribution, Random Graph.", "fulltext": "Introduction\nAs wireless sensor networks are unfolding their vast\npotential in a plethora of application environments [1],\nsecurity still remains one of the most critical challenges\nyet to be fully addressed. In particular, a vital problem\nin the highly distributed and resource constrained environment\nis node compromise, where a sensor node can\nbe completely captured and manipulated by the adversary.\nWhile extensive work has focused on designing schemes\nthat can either defend and delay node capture or timely\nidentify and revoke compromised nodes themselves [5],\nlittle attention has been paid to the node compromise\nprocess itself. Inspired by recently emerged viruses that\ncan spread over air interfaces, we identify in this paper\nthe threat of epidemic spreading of node compromises in\nlarge scale wireless sensor networks and present a model\nthat captures the unique characteristic of wireless sensor\nnetworks in conjunction with pairwise key schemes. In\nparticular, we identify the key factors determining the\npotential epidemic outbreaks that in turn can be employed\nto devise corresponding defense strategies.\nA. Motivation\nDue to its scarce resources and hence low defense capabilities\n, node compromises can be expected to be common\nphenomena for wireless sensor networks in unattended\nand hostile environments. While extensive research efforts,\nincluding those from ourselves [15], have been engineered\ntoward designing resilient network security mechanisms\n[12], [13], the compromise itself and in particular the propagation\nof node compromise (possible epidemics) have\nattracted little attention.\nWhile node compromise, thanks to physical capture and\nsucceeding analysis, is naturally constrained by the adver-sary's\ncapability, software originated compromises can be\nmuch more damaging. 
Specifically, the recently surfaced\nvirus Cabir\n1\nthat can spread over the air interface has\nunveiled a disastrous threat for wireless sensor networks.\nInescapably, viruses targeting wireless sensor networks\nwill emerge. Consequently, node compromise by way of\nvirus spreading (over the air interface) can effortlessly\ndevastate the entire network in a short period of time. With\nrecent advancements on sensor design empowering nodes\nsuch as MICA2 motes with over-the-air programmability,\nthe network becomes vulnerable to the above described\nattack. Even worse, the inherent dense, large scale nature\nof sensor networks undoubtedly further facilitates the virus\npropagation.\nWhile virus spreading over the internet has been widely\nstudied, and notably by means of epidemic theory [2], [3],\nthe distance and pairwise key restricted communication\npattern in wireless sensor networks uniquely distinguish\nthe phenomena from those on the Internet.\nB. Our Contribution\nIn this paper, we investigate the spreading process of\nnode compromise in large scale wireless sensor networks.\nStarting from a single point of failure, we assume that the\nadversary can effectively compromise neighboring nodes\nthrough wireless communication and thus can threat the\nwhole network without engaging in full scale physical\nattacks. In particular, due to security schemes employed by\nthe sensor networks, we assume that communication can\nonly be performed when neighboring nodes can establish\nmutual trust by authenticating a common key. Therefore,\nnode compromise is not only determined by the deployment\nof sensor nodes which in turn affects node density,\nbut also determined by the pairwise key scheme employed\ntherein. By incorporating these factors of the networks,\nwe propose an epidemiological model to investigate the\nprobability of a breakout (compromise of the whole network\n) and if not, the sizes of the affected components\n(compromised clusters of nodes). Furthermore, we analyze\nthe effect of node recovery in an active infection scenario\nand obtain critical values for these parameters that result\nin an outbreak. Through extensive simulations, we show\nthat our analytical results can closely capture the effects\nin a wide range of network setups.\nThe remainder of the paper is organized as follows. In\nSection II we present the preliminaries, including the threat\nmodel, random key pre-distribution, and epidemic theory.\nIn Section III, we study the compromise propagation\nwithout node recovery and with node recovery, and detail\nour analytical results. We perform experimental study in\nSection IV. Related work is presented in Section V and\nwe conclude in Section VI.\nPreliminaries\nIn this section, we present our threat model and briefly\noverview pairwise key distribution in wireless sensor networks\nand epidemic theory.\nA. Threat model\nWe assume that a compromised node, by directly\ncommunicating with a susceptible node, can spread the\ninfection and conduce to the compromise of the susceptible\nnode. Communication among sensor nodes is not only\nconstrained by their distances, but also shall be secured\nand thus determined by the probability of pairwise key\nsharing. Therefore, the spreading of node compromise is\ndependent on the network deployment strategy and the\npairwise key scheme employed therein. 
We assume that\nthe \"seed\" compromise node could be originated by an\nadversary through physical capture and analysis of that\nnode or by other similar means.\nThe spread of node compromise in a wireless sensor\nnetwork, particularly thanks to its dense nature, can lead\nto an epidemic effect where the whole network will get\ninfected. We consider this epidemic effect as the key threat\nto the network and hence the investigation target of this\npaper.\nB. Pairwise Key Pre-distribution\nAs the pairwise key scheme affects the communication\nand hence the propagation of the node compromise, we\nprovide below, a brief overview of the key distribution\nschemes in wireless sensor networks.\nDue to the severe resource constraint of wireless sensor\nnetworks and limited networking bandwidth, proposed\npairwise key schemes have commonly adopted the predistribution\napproach instead of online key management\nschemes with prohibitive resource consumption. The concept\nof pre-distribution was originated from [11], where\nthe authors propose to assign a number of keys, termed key\nring randomly drawn from a key pool. If two neighboring\nnodes share a common key on their key rings, a shared\npairwise key exists and a secure communication can be\nestablished. Pre-distribution schemes that rely on bivariate\npolynomials is discussed in [13]. In this scheme, each\nsensor node is pre-distributed a set of polynomials. Two\nsensor nodes with the same polynomials can respectively\nderive the same key.\nRegardless of the specific key distribution scheme, a\ncommon parameter capturing the performance is the probability\nthat two neighbors can directly establish a secure\ncommunication. We denote this probability by\nq. As it shall\nbe revealed later,\nq plays an important role in the spreading\nof node compromise, because direct communication, as\nexplained in the threat model, can result in propagation of\nmalicious code.\nC. Node Recovery\nIn the event that a node is compromised, its secrets will\nbe revealed to the attacker. The network may attempt to\nrecover the particular node. Recovery might be realized\nin several possible ways. For example, the keys of the\nnodes might be revoked and the node may be given a\nfresh set of secret keys. In this context, key revocation,\nwhich refers to the task of securely removing keys that\nare known to be compromised, has been investigated as\npart of the key management schemes, for example in\n[5]. Moreover, recovery can also be achieved by simply\nremoving the compromised node from the network, for\nexample by announcing a blacklist, or simply reload the\nnode's programs. More sophisticated methods may include\nimmunizing a node with an appropriate antivirus patch that\nmight render the node immune from the same virus attack.\nRegardless, in our analysis, we will study virus spreading\nunder the two cases respectively depending on whether\na node can be recovered or not.\nD. Epidemic Theory\nOriginally, epidemic theory concerns about contagious\ndiseases spreading in the human society. The key feature\nof epidemiology [2], [7] is the measurement of infection\noutcomes in relation to a population at risk. The population\nat risk basically comprises of the set of people who\npossess a susceptibility factor with respect to the infection.\nThis factor is dependent on several parameters including\nexposure, spreading rate, previous frequency of occurrence\netc., which define the potential of the disease causing\nthe infection. 
Example models characterizing the infection spreading process include the Susceptible Infected Susceptible (SIS) model, the Susceptible Infected Recovered (SIR) model, etc. In the former, a susceptible individual acquires the infection and then, after an infectious period (i.e., the time the infection persists), the individual becomes susceptible again. In the latter, the individual recovers and becomes immune to further infections.
Of particular interest is the phase transition of the spreading process, which depends on an epidemic threshold: if the epidemic parameter is above the threshold, the infection will spread out and become persistent; on the contrary, if the parameter is below the threshold, the virus will die out.
Epidemic theory has indeed been borrowed by the networking field to investigate virus spreading. In this paper, we will mainly rely on a random graph model to characterize the unique connectivity of the sensor network and perform the epidemic study [8], [10].
Modelling and Analysis of Compromise Propagation
In this section, we analyze the propagation of node compromise originating from a single node that has been affected. Our focus is to study the outbreak point of the epidemic effect where the whole network will fall victim to the compromise procedure.
Our key method is to characterize the sensor network, including its key distribution, by mathematically formulating it as a random graph whose key parameters are precisely determined by those of the sensor network. Therefore, the investigation of epidemic phenomena can be performed on the random graph instead. Following this approach, we observe the epidemic process under two scenarios: without node recovery and with node recovery, depending on whether infected nodes will be recovered by external measures like key revocation, immunization, etc.
A. Network Model as Random Graph
Assume that sensor nodes are uniformly deployed in a disc area with radius R. Let \rho = N / (\pi R^2) denote the node density of the network, where N is the total number of nodes. For a sensor node with communication range r, the probability that l nodes are within its communication range is given by
p(l) = \binom{n}{l} p^l (1-p)^{n-l}   (1)
where p is defined by
p = r^2 / R^2 = \pi \rho r^2 / N.   (2)
Thus p is the probability of a link existing at the physical level, i.e., whether the two nodes fall within their respective communication ranges.
We further assume that the probability that two neighboring nodes share at least one key under the random pre-distribution pairwise key scheme is q. For a particular node having l neighboring nodes, the probability that there are k nodes, k <= l, sharing at least one key with it is given by
p(k|l) = \binom{l}{k} q^k (1-q)^{l-k}   (3)
Therefore, the probability of having k neighboring nodes sharing at least one key is
p(k) = \sum_{l=k} p(l) p(k|l)   (4)
     = \sum_{l=k} \binom{n}{l} p^l (1-p)^{n-l} \binom{l}{k} q^k (1-q)^{l-k}   (5)
Thus, based on both physical proximity and the probability of key sharing between neighbors, we get a degree distribution p(k). Notice that this degree distribution can be employed to generate a random graph G. Since G possesses the same property in terms of secure communication pattern as the sensor network of concern, we will next perform the analysis on G instead.
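The degree distribution of Eqs. (1)-(5) can be evaluated numerically in a few lines; the sketch below is a direct transcription of those formulas, taking the number of potential neighbors of a node to be N - 1 and using parameter values chosen only for illustration.

```python
# Numerical sketch of the key-sharing degree distribution of Eqs. (1)-(5):
# p is the probability that a random node falls within communication range,
# and a physical neighbour is a key-sharing neighbour with probability q.
from math import comb

def degree_distribution(N, R, r, q):
    n = N - 1                      # potential neighbours of a given node
    p = (r / R) ** 2               # Eq. (2): p = r^2 / R^2
    def p_l(l):                    # Eq. (1): physical-degree distribution
        return comb(n, l) * p**l * (1 - p)**(n - l)
    def p_k_given_l(k, l):         # Eq. (3): thinning by key sharing
        return comb(l, k) * q**k * (1 - q)**(l - k)
    # Eqs. (4)-(5): p(k) = sum over l >= k of p(l) p(k|l)
    return [sum(p_l(l) * p_k_given_l(k, l) for l in range(k, n + 1))
            for k in range(n + 1)]

if __name__ == "__main__":
    pk = degree_distribution(N=1000, R=600.0, r=100.0, q=0.1)
    mean_degree = sum(k * x for k, x in enumerate(pk))
    print(f"sum p(k) = {sum(pk):.6f}, mean key-sharing degree = {mean_degree:.2f}")
```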
B. Compromise Spread Without Node Recovery
Given the random graph construction, we now analyze the case of compromise spread when no node recovery is performed. In other words, a compromised sensor node will remain infectious indefinitely.
Let G_0(x) be the generating function of the degree distribution of a randomly chosen vertex in G, defined by
G_0(x) = \sum_{k=0} p(k) x^k   (6)
Moreover, with G_1(x) given by
G_1(x) = \frac{1}{G_0'(1)} G_0'(x)   (7)
and with \beta denoting the infection probability of a node being infected by communicating with a compromised node, then following the analysis presented in [8], the average size of the outbreak is derived as
\langle s \rangle = 1 + \frac{\beta G_0'(1)}{1 - \beta G_1'(1)}.   (8)
The infection probability \beta essentially captures the spreading capability of the virus that could compromise the network: the larger it is, the stronger the virus is. We assume that its value can be obtained by means of measurement or analysis.
Given the above result, we can see that the outbreak point for the network is \beta = 1/G_1'(1), which marks the onset of an epidemic. For \beta > 1/G_1'(1) we have an epidemic in the form of a giant component in the random network, and the size S of the epidemic, where S denotes the expected fraction of the network that will be compromised if an outbreak happens, is given by
S = 1 - G_0(u).
Here u is the root of the self-consistency relation
u = G_1(u).
Intuitively, the above conclusion reveals that if \beta <= 1/G_1'(1), the component of compromised nodes is finite in size regardless of the size of the network, and each node's probability of being compromised is zero for large networks. On the contrary, if \beta > 1/G_1'(1), there always exists a finite probability for a node to be compromised.
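Given a degree distribution p(k), for instance from the sketch above, the outbreak point and the mean outbreak size follow directly from the derivatives of the generating functions, since G_0'(1) = <k> and G_1'(1) = (<k^2> - <k>)/<k>. The sketch below evaluates Eq. (8) and the critical infection probability for an illustrative distribution; it is a numerical illustration of the formulas, not code from the paper.

```python
# Sketch: epidemic threshold and mean outbreak size from a degree distribution,
# using G0'(1) = <k> and G1'(1) = (<k^2> - <k>) / <k>.
def outbreak_statistics(pk, beta):
    """pk[k] = probability of key-sharing degree k; beta = infection probability."""
    k1 = sum(k * x for k, x in enumerate(pk))           # <k>  = G0'(1)
    k2 = sum(k * k * x for k, x in enumerate(pk))       # <k^2>
    g1_prime = (k2 - k1) / k1                           # G1'(1)
    beta_c = 1.0 / g1_prime                             # outbreak point
    if beta < beta_c:
        mean_size = 1.0 + beta * k1 / (1.0 - beta * g1_prime)   # Eq. (8)
    else:
        mean_size = float("inf")                        # giant component regime
    return beta_c, mean_size

if __name__ == "__main__":
    # Illustrative Poisson-like degree distribution with mean about 10.
    from math import exp, factorial
    mean = 10.0
    pk = [exp(-mean) * mean**k / factorial(k) for k in range(80)]
    for beta in (0.05, 0.09, 0.12):
        beta_c, size = outbreak_statistics(pk, beta)
        print(f"beta={beta:.2f}  beta_c={beta_c:.3f}  mean outbreak size={size}")
```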
Fig. 1. Size of compromised node clusters: (a) depicts the average size of infected clusters when there is no epidemic (non-epidemic cluster size vs. infection probability \beta), and (b) shows the epidemic size as a fraction of the entire network vs. \beta, for key sharing probabilities q = 0.01, 0.02, 0.04, 0.1. The point where a non-zero value appears indicates the transition from non-epidemic to epidemic.
Fig. 1 depicts this effect for a network with N = 1000 nodes with different key sharing probabilities q. The underlying physical topology, determined by the communication range and node density, has an average edge probability of p = 0.25. Given the physical deployment, we vary the probability of direct pairwise key sharing (q) and study the point of outbreak. As we can see in Fig. 1, while increasing q undoubtedly facilitates communication in the network, the network also becomes more vulnerable to virus spreading. Specifically, when q = 0.01, a network-wide breakout is only possible when a compromised node has an infection probability (\beta) larger than 0.4 to infect a neighbor. We note that in this case, we have an average node degree of 2.5. On the contrary, this probability only needs to be around 0.05 when q = 0.1, which subsequently makes the node degree 25. Fig. 1(b) illustrates the fraction of the network that is ultimately infected as the infection probability is increased beyond the critical point of the onset of outbreak. For instance, we observe that when q = 0.1, the whole network is compromised with a \beta value of less than 0.2. On the contrary, with q = 0.01, 80% of the network could be compromised only with a high value of \beta = 0.8.
In summary, Fig. 1 clearly indicates the tradeoff between the key sharing probability among sensor nodes and the vulnerability of the network to compromise.
C. Compromise Spread With Node Recovery
In this case, we assume that the network has the capability to recover some of the compromised nodes by either immunization or removal from the network. To capture this recovery effect, we assume that an infected node recovers or is removed from the network after an average duration of infectivity \tau. In other words, a node in the sensor network remains infective for an average period \tau, after which it is immunized. During this infective period, the node transmits the epidemic to its neighbors with the infection rate \beta, denoting the probability of infection per unit time. Evidently, the parameter \tau is critical to the analysis as it measures how soon a compromised node recovers. Naturally, we will perform our analysis following the SIR model in epidemic theory [10], [8].
First, consider a pair of adjacent nodes where one is infected and the other is susceptible. If T denotes the compromise transmission probability, given the above definitions for \beta and \tau, the probability that the disease will not be transmitted from the infected node to the susceptible one is given by
1 - T = \lim_{\delta t \to 0} (1 - \beta \delta t)^{\tau / \delta t} = e^{-\beta \tau}.   (9)
Subsequently, we have the transmission probability
T = 1 - e^{-\beta \tau}.
In other words, the compromise propagation can be considered as a Poisson process with average \beta \tau. The outcome of this process is the same as bond percolation, and T is basically analogous to the bond occupation probability on the graph representing the key sharing network. Thus, the outbreak size would be precisely the size of the cluster of vertices that can be reached from the initial vertex (the infected node) by traversing only occupied edges, which are occupied with probability T. Notice that T explicitly captures node recovery in terms of the parameter \tau.
Replacing \beta with T in Equation (8), and following similar steps, we get the size of the average cluster as
\langle s \rangle = 1 + \frac{T G_0'(1)}{1 - T G_1'(1)}.   (10)
The epidemic size is obtained by
S = 1 - G_0(u; T),   (11)
where u is obtained by
u = G_1(u; T),   (12)
and G_0(u; T) and G_1(u; T) are given respectively by
G_0(u; T) = G_0(1 + (u - 1)T),   (13)
and
G_1(u; T) = G_1(1 + (u - 1)T).   (14)
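Equation (9) also answers how quickly an infected node must be recovered: setting the transmissibility T = 1 - e^{-\beta \tau} equal to the threshold 1/G_1'(1) and solving for \tau gives a critical average infectivity duration. The sketch below does this numerically; the value of G_1'(1) used in the example is illustrative, so the numbers are not those of Fig. 2.

```python
# Sketch: critical infectivity duration from T = 1 - exp(-beta * tau) and the
# bond-percolation threshold T_c = 1 / G1'(1).
from math import exp, log

def transmissibility(beta, tau):
    return 1.0 - exp(-beta * tau)                       # Eq. (9)

def critical_duration(beta, g1_prime):
    """Largest average infectivity duration tau that still avoids an epidemic."""
    t_c = 1.0 / g1_prime
    if t_c >= 1.0:
        return float("inf")     # even permanently infective nodes cannot percolate
    return -log(1.0 - t_c) / beta

if __name__ == "__main__":
    g1_prime = 10.0             # illustrative value, e.g. Poisson degrees of mean 10
    for beta in (0.01, 0.02, 0.04, 0.2):
        tau_c = critical_duration(beta, g1_prime)
        print(f"beta={beta:.2f}: recover within tau < {tau_c:.1f} time units "
              f"(T at tau_c = {transmissibility(beta, tau_c):.3f})")
```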
Fig. 2. Size of compromised node clusters with node recovery: (a) depicts the average size of infected clusters when there is no epidemic (non-epidemic cluster size vs. infectivity duration \tau), and (b) shows the epidemic size as a fraction of the entire network vs. infectivity duration \tau, for infection rates \beta between 0.01 and 0.2. The point where a non-zero value appears indicates the transition from non-epidemic to epidemic.
Fig. 2 summarizes this effect, depicting the epidemic outbreak against the average recovery time \tau for the respective infection rates \beta. The plots are for a sensor network with a typical average degree of 10. In Fig. 2(a), we can identify the average duration that an infected node is allowed to remain infective before an epidemic outbreak occurs. We notice that, when the infection rate is 0.01, infected nodes have to be recovered/removed on average in less than 100 time units in order to prevent an epidemic. As expected, this time is much lower when the infection rate is 0.2. Fig. 2(b) depicts the epidemic outbreak point for different infection rates \beta in terms of the average duration of infectivity of a node.
We remark that both the analytical and experimental results have significant implications for security scheme design in terms of revoking/immunizing compromised nodes in wireless sensor networks: they dictate the speed at which the network must react in order to contain/prevent the effect of a network-wide epidemic.
Simulation
We employ a discrete event-driven simulation to accurately simulate the propagation of the infection spreading process. In this section, we first outline our discrete-event-driven simulation model for the gradual progress of the spread of node compromise. Then we use this model to capture the time dynamics of the spread of the compromise in the whole population.
A. Simulation Setup
In our simulation, we assume the number of sensor nodes in the network to be 1000. The sensor network is produced by uniformly distributing the sensors in a 1200 x 1200 unit^2 area. The communication range of each node is assumed to be 100 units. Our goal is to make the physical network fairly connected, with an average node degree of around 20 to 25. We use the key sharing probability on top of this network to further reduce the average node degree of the final key sharing network to typical values of 3 and 10.
We employ the random key pre-distribution scheme described in [11] to establish the pairwise keys among sensor nodes. By tuning the parameters of the scheme, we can achieve any specific value for the probability of two neighbors sharing at least one key.
Our simulation works in two phases. In the first phase, we form the network, where each node identifies its set of neighbors and entries are made into a neighbor table. The average degree of the key sharing network is controlled by changing the value of the key sharing probability between neighbors. The entry for each node in the neighborhood table indicates whether the node is susceptible, infected or recovered. We use typical values obtained for the average node degree of the network, namely 3 and 10.
In the second phase, we simulate the actual virus propagation. Initially, at t = 0, the number of infected nodes, denoted by I(0), is set to 1. At any time point t, the population is divided into the group of susceptible nodes, S(t), and the group of infected nodes, I(t). In the situation where we have nodes that are immunized and thus recovered, we denote this set of recovered nodes by R(t). The sub-population dynamics is obtained by observing the population counts after fixed simulation intervals of 1 time unit. We assume that the time it takes for an infected node to infect its susceptible neighbor is negative-exponentially distributed with a mean of 1 time unit.
There are two simulation scenarios corresponding to our analysis.
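A bare-bones version of such an event-driven simulation is sketched below: infection delays across the edges of a given key-sharing graph are drawn from an exponential distribution with mean 1, each transmission succeeds with probability beta, and (optionally) an infected node recovers after an exponential infectivity duration of mean tau0. The graph construction and all parameter values are illustrative and much simpler than the full simulator described above.

```python
# Illustrative discrete-event simulation of compromise spread on a key-sharing
# graph (not the authors' simulator): exponential transmission delays of mean 1,
# per-edge infection probability beta, optional recovery with mean duration tau0.
import heapq, random

def simulate(adj, beta=0.2, tau0=None, seed=0, rng=None):
    """adj: node -> list of key-sharing neighbours; tau0=None means no recovery."""
    rng = rng or random.Random(1)
    INFECT, RECOVER = 0, 1
    state = {v: "S" for v in adj}          # S(usceptible), I(nfected), R(ecovered)
    events = [(0.0, INFECT, seed)]
    while events:
        t, kind, v = heapq.heappop(events)
        if kind == INFECT and state[v] == "S":
            state[v] = "I"
            stop = rng.expovariate(1.0 / tau0) if tau0 is not None else float("inf")
            if tau0 is not None:
                heapq.heappush(events, (t + stop, RECOVER, v))
            for u in adj[v]:
                delay = rng.expovariate(1.0)      # mean transmission delay of 1 unit
                if state[u] == "S" and rng.random() < beta and delay < stop:
                    heapq.heappush(events, (t + delay, INFECT, u))
        elif kind == RECOVER and state[v] == "I":
            state[v] = "R"
    return state

if __name__ == "__main__":
    # Small illustrative random key-sharing graph on 200 nodes, average degree ~6.
    g = random.Random(7)
    n, p_edge = 200, 6 / 199
    adj = {v: [] for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if g.random() < p_edge:
                adj[i].append(j)
                adj[j].append(i)
    for tau0 in (None, 5.0, 0.5):
        state = simulate(adj, beta=0.2, tau0=tau0, rng=random.Random(3))
        touched = sum(1 for s in state.values() if s != "S")
        print(f"tau0={tau0}: nodes ever compromised = {touched} / {n}")
```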
In the situation where we have nodes that are immunized and thus recovered, we denote this set of recovered nodes by R(t). The sub-population dynamics is obtained by observing the population counts after fixed simulation intervals of 1 time unit. We assume that the time it takes for an infected node to infect its susceptible neighbor is negative-exponentially distributed with a mean of 1 unit time.
There are two simulation scenarios corresponding to our analysis.
B. Simulation Results and Discussion
1) Simulation Results for No Recovery Case: The simulation results for the case without recovery are shown in Fig. 3.
Fig. 3. System dynamics without recovery: (a) average node degree = 5; (b) average node degree = 10. Each panel plots the fraction of compromised nodes against time t for N = 1000 and infection probabilities 0.05, 0.1, 0.2, and 0.3.
We vary the value of the infection probability under different network connectivities and study the time dynamics of the infected population. We notice, as expected, that an increase in the average node degree from 5 to 10 has an impact on the rate of compromise of the network. For instance, the curve with the lowest infection probability (0.05) has compromised the entire network by simulation time 700 when the average node degree is 10. However, with the node degree at 5, a value of 0.05 could compromise up to 70% of the network by that same simulation time. Thus, we find that in the no-recovery case the two key parameters affecting the network compromise rate are the infection probability and the average node degree.
2) Simulation Results for Recovery Case: Fig. 4 and 5 show the simulation results for the three sub-populations (infected, immunized, and susceptible) in the situation where nodes do recover.
In Fig. 4 we see the effects of the infectivity duration τ0 and the infection rate on the dynamics of the epidemic. In Fig. 4(c), the highest point is reached very fast because of the high infection rate; thereafter, the recovery also takes less time. In Fig. 4(a), however, the infection rate is smaller but τ0 is higher (i.e., 30), so the infection rises slowly and also falls slowly because of the long recovery time.
In comparison, Fig. 5 has better connectivity, with an average node degree of 10, which in turn increases the rate of infection significantly. Comparing Fig. 4(c) and Fig. 5(c), we observe that infection penetration is higher in the latter even in the presence of a smaller infection rate: Fig. 5(c) shows that even with a low infection rate, the infection still rises to above 60%.
Therefore, we observe that network connectivity has a high impact on the infection propagation and on the speed of reaching the maximal point of the outbreak. However, thereafter, during the recovery phase, τ0 strongly affects the time it takes to recover the whole network.
Related Work
The mathematical modeling of epidemics is well documented [2], [7].
In fact, visualizing the population as a complex network of interacting individuals has resulted in the analysis of epidemics from a network or graph theoretic point of view [8], [9], [10].
Node compromise in sensor networks and the need for their security has also received immense attention [4]. A large portion of current research on security in sensor networks has been focused on protocols and schemes for securing the communication between nodes [12], [13]. Revocation of keys of compromised nodes has been studied in [14]. In [4], the authors demonstrate the ease with which a sensor node can be compromised and all its information extracted. Unfortunately, little work has been done on defense strategies for the case where the compromise of a single node can be used to compromise other nodes over the air. In this paper, we take the first step towards modeling this potentially disastrous propagation. In [6], the authors used an epidemic modeling technique for information dissemination in a MANET; however, they assumed homogeneous mixing, which is not possible in a static sensor network such as ours. In our work, we adopted some of the results presented in [8], where the author proposes a percolation theory based evaluation of the spread of an epidemic on graphs with given degree distributions. However, little has been shown there on the temporal dynamics of the epidemic spread, and only the final outcome of an infection spread is studied.
Conclusion
In this paper, we investigate the potential threat of compromise propagation in wireless sensor networks. Based on epidemic theory, we model the process of compromise spreading from a single node to the whole network. In particular, we focus on the key network parameters that determine a potential epidemic outbreak in the network. Due to the unique distance- and key-sharing-constrained communication pattern, we resort to a random graph model that is generated precisely according to the parameters of the real sensor network, and we perform the study on this graph. Furthermore, we introduce the effect of node recovery after compromise and adapt our model to accommodate this effect. Our results reveal key network parameters for defending against and containing potential epidemics. In particular, the results provide a benchmark time period within which the network must recover a compromised node in order to defend against epidemic spreading. Our extensive simulation results validate our analyses and, moreover, provide insight into the temporal dynamics of the system.
Fig. 4. The dynamics of the population with recovery for average degree of 3: (a) τ0 = 30, infection probability 0.6; (b) τ0 = 20, infection probability 0.7; (c) τ0 = 10, infection probability 0.9. Each panel plots the fractions of susceptible S(t), infected I(t), and recovered R(t) nodes over time.
Fig. 5. The dynamics of the population with recovery for average degree of 10: (a) τ0 = 30, infection probability 0.15; (b) τ0 = 20, infection probability 0.17; (c) τ0 = 10, infection probability 0.2. Each panel plots S(t), I(t), and R(t) over time.
References
[1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 8, 2002.
[2] R. M. Anderson and R. M. May, "Infectious Diseases of Humans: Dynamics and Control," Oxford Univ. Press, Oxford, 1991.
[3] S. Staniford, V. Paxson, and N. Weaver, "How to Own the Internet in Your Spare Time," in 11th Usenix Security Symposium, San Francisco, August 2002.
[4] C. Hartung, J. Balasalle, and R. Han, "Node Compromise in Sensor Networks: The Need for Secure Systems," Technical Report CU-CS-990-05, 2005.
[5] H. Chan, V. D. Gligor, A. Perrig, and G. Muralidharan, "On the Distribution and Revocation of Cryptographic Keys in Sensor Networks," IEEE Transactions on Dependable and Secure Computing, 2005.
[6] A. Khelil, C. Becker, J. Tian, and K. Rothermel, "An Epidemic Model for Information Diffusion in MANETs," MSWiM 2002, pages 54-60.
[7] N. T. J. Bailey, "The Mathematical Theory of Infectious Diseases and its Applications," Hafner Press, New York, 1975.
[8] M. E. J. Newman, "Spread of epidemic disease on networks," Phys. Rev. E, 66 (2002), art. no. 016128.
[9] C. Moore and M. E. J. Newman, "Epidemics and percolation in small-world networks," Phys. Rev. E 61, 5678-5682, 2000.
[10] P. Grassberger, "On the critical behavior of the general epidemic process and dynamic percolation," Math. Biosc. 63 (1983) 157.
[11] L. Eschenauer and V. D. Gligor, "A key-management scheme for distributed sensor networks," in Proc. of the 9th ACM Conference on Computer and Communications Security - CCS '02, pages 41-47, Washington D.C., USA, November 2002.
[12] H. Chan, A. Perrig, and D. Song, "Random key predistribution schemes for sensor networks," in Proc. of the IEEE Symposium on Research in Security and Privacy - SP '03, pages 197-215, Washington D.C., USA, May 2003.
[13] D. Liu and P. Ning, "Establishing pairwise keys in distributed sensor networks," in Proc. of the 10th ACM Conference on Computer and Communications Security - CCS '03, pages 52-61, Washington D.C., USA, October 2003.
[14] H. Chan, V. D. Gligor, A. Perrig, and G. Muralidharan, "On the Distribution and Revocation of Cryptographic Keys in Sensor Networks," IEEE Transactions on Dependable and Secure Computing, Volume 2, Issue 3, July-Sept. 2005.
[15] A. Chadha, Y. Liu, and S.
Das, \"Group key distribution via local\ncollaboration in wireless sensor networks,\" in Proceedings of the\nIEEE SECON 2005, Santa Clara, CA, Sept. 2005.\n7\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE", "keywords": "Random Key Predistribution;Sensor Networks;Random Graph;Epidemiology"} {"name": "141", "title": "Modelling Adversaries and Security Objectives for Routing Protocols in Wireless Sensor Networks", "abstract": "The literature is very broad considering routing protocols in wireless sensor networks (WSNs). However, security of these routing protocols has fallen beyond the scope so far. Routing is a fundamental functionality in wireless networks, thus hostile interventions aiming to disrupt and degrade the routing service have a serious impact on the overall operation of the entire network. In order to analyze the security of routing protocols in a precise and rigorous way, we propose a formal framework encompassing the definition of an adversary model as well as the \"general\" definition of secure routing in sensor networks. Both definitions take into account the feasible goals and capabilities of an adversary in sensor environments and the variety of sensor routing protocols. In spirit, our formal model is based on the simulation paradigm that is a successfully used technique to prove the security of various cryptographic protocols. However, we also highlight some differences between our model and other models that have been proposed for wired or wireless networks. Finally, we illustrate the practical usage of our model by presenting the formal description of a simple attack against an authenticated routing protocol, which is based on the well-known TinyOS routing.", "fulltext": "INTRODUCTION\nRouting is a fundamental function in every network that\nis based on multi-hop communications, and wireless sensor\nnetworks are no exceptions.\nConsequently, a multitude\nof routing protocols have been proposed for sensor\nnetworks in the recent past. However, most of these protocols\nhave not been designed with security requirements in\nmind. This means that they can badly fail in hostile environments\n. Paradoxically, research on wireless sensor networks\nhave been mainly fuelled by their potential applications in\nmilitary settings where the environment is hostile. The natural\nquestion that may arise is why then security of routing\nprotocols for sensor networks has fallen beyond the scope of\nresearch so far.\nWe believe that one important reason for this situation\nis that the design principles of secure routing protocols for\nwireless sensor networks are poorly understood today. First\nof all, there is no clear definition of what secure routing\nshould mean in this context. Instead, the usual approach,\nexemplified in [10], is to list different types of possible attacks\nagainst routing in wireless sensor networks, and to\ndefine routing security implicitly as resistance to (some of)\nthese attacks. However, there are several problems with this\napproach. For instance, a given protocol may resist a different\nset of attacks than another one. How to compare these\nprotocols? Shall we call them both secure routing protocols\n? Or on what grounds should we declare one protocol\nmore secure than another? Another problem is that it is\nquite difficult to carry out a rigorous analysis when only a\nlist of potential attack types are given. 
How can we be sure\nthat all possible attacks of a given type has been considered\nin the analysis? It is not surprising that when having such\na vague idea about what to achieve, one cannot develop the\nnecessary design principles. It is possible to come up instead\nwith some countermeasures, similar to the ones described in\n[10], which are potentially usefully to thwart some specific\ntypes of attacks, but it remains unclear how to put these\ningredients together in order to obtain a secure and efficient\nrouting protocol at the end.\nIn order to remedy this situation, we propose to base the\ndesign of secure routing protocols for wireless sensor networks\non a formal security model.\nWhile the benefit of\nformal models is not always clear (indeed, in some cases,\nthey tend to be overly complicated compared to what they\nachieve), we have already demonstrated their advantages\nin the context of ad hoc network routing protocols. More\nspecifically, we developed formal security models in [4, 1, 2],\nand we successfully used them to prove the security of some\n49\nad hoc network routing protocols, and to find security holes\nin others. The idea here is to use the same approach in the\ncontext of wireless sensor networks. The rationale is that\nrouting protocols in sensor networks are somewhat similar\nto those in ad hoc networks, hence they have similar pitfalls\nand they can be modeled in a similar way.\nThus, in this paper, we present a formal model, in which\nsecurity of routing is precisely defined, and which can serve\nas the basis for rigorous security analysis of routing protocols\nproposed for wireless sensor networks. Our model is based\non the simulation paradigm, where security is defined in\nterms of indistinguishability between an ideal-world model\nof the system (where certain attacks are not possible by\ndefinition) and the real-world model of the system (where\nthe adversary is not constrained, except that he must run in\ntime polynomial). This is a standard approach for defining\nsecurity, however, it must be adopted carefully to the specific\nenvironment of wireless sensor networks.\nSimilar to [4], in this paper, we develop an adversary\nmodel that is different from the standard Dolev-Yao model,\nwhere the adversary can control all communications in the\nsystem. In wireless sensor networks, the adversary uses wireless\ndevices to attack the systems, and it is more reasonable\nto assume that the adversary can interfere with communications\nonly within its power range. In addition, we must\nalso model the broadcast nature of radio communications.\nHowever, in addition to the model described in [4], here\nwe take into account that there are some attacks which exploit\nthe constraint energy supply of sensor nodes (e.g., the\nadversary decreases the network lifetime by diverting the\ntraffic in order to overload, and thus, deplete some sensor\nnodes). Hence, we explicitly model the energy consumption\ncaused by sending a message between each pair of nodes in\nthe network.\nAnother difference with respect to the model of [4] lies in\nthe definition of the outputs of the ideal-world and the real-world\nmodels. It is tempting to consider the state stored\nin the routing tables of the nodes as the output, but an\nadversary can distort that state in unavoidable ways. This\nmeans that if we based our definition of security on the indistinguishability\nof the routing states in the ideal-world and\nin the real-world models, then no routing protocol would\nsatisfy it. 
Hence, we define the output of the models as a\nsuitable function of the routing state, which hides the unavoidable\ndistortions in the states. This function may be different\nfor different types of routing protocols, but the general\napproach of comparing the outputs of this function in the\nideal-world and in the real-world models remain the same.\nFor instance, this function could be the average length of the\nshortest pathes between the sensor nodes and the base station\n; then, even if the routing tables of the nodes would not\nalways be the same in the ideal-world and in the real-world\nmodels, the protocol would still be secure given that the\ndifference between the distributions of the average length of\nthe shortest pathes in the two models is negligibly small.\nThe rest of the paper is organized as follows: In Section 2,\nwe present the elements of our formal model, which includes\nthe presentation of the adversary model adopted to wireless\nsensor networks, the description of the ideal-world and the\nreal-world models, the general definition of the output of\nthese models, as well as the definition of routing security.\nThen, in Section 3, we illustrate the usage of our model by\nrepresenting in it a known insecurity of an authenticated\nversion of the TinyOS routing protocol.\nFinally, in Section\n4, we report on some related work, and in Section 5, we\nconclude the paper.\nWe must note that the work described in this paper is a\nwork in progress, and it should be considered as such. In\nparticular, the reader will not find security proofs in this\npaper. There are two reasons for this: first, we are still\ndeveloping the proof techniques, and second, we have not\nidentified yet any routing protocols that would be secure in\nour model.\nTHE MODEL OF WIRELESS SENSOR NETWORKS\nThe adversary is represented by adversarial nodes in the\nnetwork. An adversarial node can correspond to an ordinary\nsensor node, or a more resourced laptop-class device.\nIn the former case, the adversary may deploy some corrupted\nsensor-class devices or may capture some honest sensor\nnodes. In the latter case, he has a laptop-class device\nwith a powerful antenna and unconstrained energy supply.\nAll of these adversarial nodes may be able to communicate in\nout-of-band channels (e.g., other frequency channel or direct\nwired connection), which may be used to create wormholes.\nIn general, when capturing honest sensor nodes, the adversary\nmay be able to compromise their cryptographic secrets\n(assuming that such secrets are used in the system).\nHowever, in this paper, we assume that the adversary cannot\ncompromise cryptographic material. This is certainly\na simplifying assumption, and we intend to relax it in our\nfuture work.\nThe adversary attacking the routing protocol primarily\nintends to shorten the network lifetime, degrade the packet\ndelivery ratio, increase his control over traffic, and increase\nnetwork delay. Some of these goals are highly correlated;\ne.g., increasing hostile control over traffic may also cause\nthe network delay to be increased.\nIn order to achieve the aforementioned goals, the adversary\nis able to perform simple message manipulations: fab-ricated\nmessage injection, message deletion, message modification\nand re-ordering of message sequences. 
In the followings\n, we describe how the adversary can perform message\ndeletion and injection in a wireless sensor network.\nRe-ordering of message sequences is straightforward using\nmessage deletion and insertion, thus, we do not elaborate it\nfurther.\nBasically, an adversarial node can affect the communication\nof two honest nodes in two cases: In the first case, an adversarial\nnode relays messages between honest nodes which\nare not able to communicate directly with each other. In\nthe second case, the honest nodes can also reach each other,\nand the adversarial node can also hear the nodes' communication\n, i.e., he can send and receive messages to/from both\nhonest nodes. We further assume that communication range\nimplies interference range, and vice-versa.\nIn case of adversarial relaying of messages between the\nnodes, all of the message manipulations are quite straightforward\n. On the contrary, if the honest nodes can also communicate\nwith each other, message manipulations must be\nperformed in a very sophisticated way. The adversarial node\ncan inject messages easily, but deletion and modification re-50\nquire jamming capability. Message deletion may be achieved\nby employing various selective jamming techniques against\neither the sender node or the receiver node. Message modification\nis only feasible, if both the sender and the receiver\nnodes are within the communication range of the adversarial\nnode. Here, we sketch two scenarios for message modification\n, which are illustrated on Figure 1.\nBy these simple\nexamples, we intend to point out the feasibility of message\nmodification assuming even direct communication between\nthe sender and the receiver node.\nScenario 1: There are two honest nodes X and Y , and\nnode X intends to send a message m to node Y . A\n1\nand A\n2\nare adversarial nodes, where A\n2\nis able to interfere with Y 's\ncommunication, but not with X's and A\n1\n's communication.\nLet A\n1\nbe in the communication range of X and Y , whereas\nA\n2\ncan only communicate with Y . When X transmits m to\nY , node A\n1\noverhears m, meanwhile A\n2\nperforms jamming\nto cause Y not to be able to receive m. In order to take\nthis action, A\n1\nand A\n2\nare connected by an out-of-band\nchannel, thus, A\n1\ncan send a signal to A\n2\nwhen A\n2\nshould\nstart jamming Y 's communication. It is also feasible that\nA\n2\nperforms constant jamming for a certain amount of time,\nafterwards, A\n1\ncan send the modified message m to Y .\nScenario 2: In this scenario, there is only one adversarial\nnode denoted by A. We assume that transmitting a message\nfrom the routing sublayer consists of passing the message to\nthe data-link layer, which, after processing the message, also\npasses it further to the physical layer. The data-link layer\nuses CRC in order to provide some protection against faults\nin noisy channels; a sender generally appends a frame check\nsequence to each frame (e.g., see [7]). The adversary can\nexploit this CRC mechanism to modify a message in the\nfollowing way (illustrated on Figure 1). When X transmits\nmessage m to Y , node A also overhears m, in particular,\nhe can see the frame(s) belonging to m. A intends to modify\nmessage m. Here, we must note that most messages\noriginated from the routing sublayer are composed of only\none frame per message in the data-link layer due to performance\nreasons, especially when they are used to discover\nrouting topology. 
Upon reception of the frame corresponding\nto the message, the adversary can corrupt the frame\ncheck sequence by jamming once the data field of the frame\nhas been received. This causes node Y to drop the frame\n(and the message), since Y detects that the last frame is incorrect\n, and waits for retransmission. At this point, if some\nacknowledgement mechanism is in use, A should send an acknowledgement\nto X so that it does not re-send the original\nframe. In addition, A retransmits message m in the name\nof X, where m is the modified message.\nThe feasibility of jamming attacks is studied and demonstrated\nin [17]. Although, the authors conclude in that paper\nthat the success of jamming attacks mainly depend on the\ndistance of the honest nodes and the jammer node, various\njamming techniques has been presented there that can\nseverely interfere with the normal operation of the network.\n2.2\nNetwork model\nWe assume that each honest device has exactly one antenna\nin the network. If the adversary uses several antennas\nwe represent each of them by a distinct node. The network\nnodes are considered to be static, and we further assume\nthat there is a single base station in the network.\nLet us denote the honest nodes in the network by\nv\n0\n, v\n1\n, . . . , v\nk\n, where v\n0\ndenotes the base station. Similarly,\nv\nk+1\n, . . . , v\nk+m\nrepresent the adversarial nodes. The set of\nall nodes is denoted by V . Furthermore, n denotes the number\nof all nodes in the network, i.e., n = |V | = k + m + 1.\nFor each pair of nodes v\ni\nand v\nj\n, we define e\nv\ni\n,v\nj\nto be the\nenergy level needed to transmit a message from v\ni\nto v\nj\n,\nwhere v\ni\n, v\nj\nV . This values can be ordered in a matrix\nwith size n n, called reachability matrix, and it is denoted\nby E.\n1\nIn the rest, if we intend to emphasize the distinction\nbetween the honest and the adversarial nodes in the notation\n, we prefer to denote the adversarial nodes by v\n\n1\n, . . . , v\n\nm\n(where v\n\n= v\nk+\n, 1\nm).\nFor the sake of simplicity, we also assume that at least\nenergy e\nv\ni\n,v\nj\nis needed for node v\ni\nto interfere with node\nv\nj\n's packet reception. This means that if v\ni\ncan reach v\nj\n,\nthen v\ni\ncan also interfere with all the communication of v\nj\n.\nLet us assume that each node uses a globally unique\nidentifier in the network, and these identifiers are authenticated\nin some way (e.g., by symmetric keys).\nWe denote\nthe set of these identifiers by L, and there is a function\nL : V L {undef} that assigns an identifier to\neach node, where undef /\nL. According to our adversary\nmodel described in Subsection 2.1, we assume that the adversary\nhas no (authenticated) identifier in the network, i.e.,\nL(v\n\nj\n) = undef for all 1\nj m.\nWe also introduce a cost function\nC : V R, which assigns\na cost to each node (e.g., the remaining energy in the\nbattery, or constant 1 to each node in order to represent\nhop-count).\nConfiguration:\nA configuration conf is a quadruple\n(V, L, E, C) that consists of the set of nodes, the labelling\nfunction, the reachability matrix, and the cost function of\nnodes.\n2.3\nSecurity objective function\nDiverse sensor applications entail different requirements\nfor routing protocols. 
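Before turning to these requirements, we note that a configuration as defined in the previous subsection can be represented directly as a data structure. The following minimal sketch is only an illustration of the notation introduced above; the class and field names are our own (not part of the model) and are reused in later sketches, with math.inf playing the role of an unreachable pair in the reachability matrix E.

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Configuration:
    """conf = (V, L, E, C): nodes, labelling, reachability matrix, node costs.

    Node indices follow the convention above: 0 is the base station v_0,
    1..k are the remaining honest nodes, and k+1..k+m are the adversarial
    nodes. E[i][j] is the energy level needed to transmit from v_i to v_j
    (math.inf if v_j cannot be reached), and C[i] is the cost of v_i
    (e.g., remaining battery, or the constant 1 to model hop count).
    """
    labels: List[Optional[str]]   # L(v_i); None plays the role of undef
    E: List[List[float]]          # n x n reachability matrix
    C: List[float]                # cost function, one value per node
    k: int                        # number of honest nodes other than v_0
    m: int                        # number of adversarial nodes

    @property
    def n(self) -> int:
        return self.k + self.m + 1

    def reachable(self, i: int, j: int) -> bool:
        return self.E[i][j] != math.inf
```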
For instance, remote surveillance applications may require minimal delay for messages, while sensor applications performing some statistical measurements favour routing protocols prolonging network lifetime. The diversity of routing protocols is caused by these conflicting requirements: e.g., shortest-path routing algorithms cannot maximize the network lifetime, since always choosing the same nodes to forward messages causes these nodes to run out of their energy supply sooner. Several sensor routing protocols use a trade-off to satisfy conflicting requirements [16, 11].
This small argument also points out that one cannot judge the utility of all routing protocols uniformly. Without a unified metric of utility we cannot refine our security objectives for routing protocols. By the above argument, a routing protocol that is secure against attacks aiming at decreasing network lifetime cannot be secure against attacks aiming at increasing network delay. We therefore model the negatively correlated requirements of routing, and essentially our security objectives, in a very general manner. We represent the output of a routing protocol, which is actually the ensemble of the routing entries of the honest nodes, with a given configuration conf by a matrix T_conf of size (k+1) x (k+1). (Footnote 1: In this paper, the rows and the columns of all matrices are numbered from zero.) (Footnote 2: Of course, here we only consider the result of the protocol with respect to the honest nodes, since the adversarial nodes may not follow the protocol rules faithfully.) We let T_conf_{i,j} = 1 if honest node v_i sends every message to the honest node identified by L(v_j) in order to deliver the message to the base station, and T_conf_{i,j} = 0 otherwise. In the rest of the paper, we shortly refer to the result of a routing protocol with a given configuration as a routing topology, which can be considered as a directed graph described by matrix T_conf. In the following, we will omit the index conf of T when the configuration can be unambiguously determined in a given context. In fact, T_conf is a random variable, where the randomness is caused by the sensor readings initiated randomly by the environment, the processing and transmission times of the sensed data, etc.
Figure 1: Message modification performed by the cooperation of two adversarial nodes A_1 and A_2 (on the right-hand side) in Scenario 1, and employing overhearing, jamming, and relaying with a single adversarial node A (on the left-hand side) in Scenario 2. Honest nodes are labelled by X and Y. Arrows between nodes illustrate the direction of communication; the sequence of message exchanges is also depicted on these arrows. Dashed arrows illustrate failed message delivery caused by jamming.
Let us denote the set of all configurations by G. Furthermore, T denotes the set of the routing topologies of all configurations. The security objective function F : G x T -> R assigns a real number to a random routing topology of a configuration. This function intends to distinguish "attacked" topologies from "non-attacked" topologies based on a well-defined security objective. We note that the definition of F is protocol dependent. For example, let us consider routing protocols that build a routing tree, where the root is the base station. We can compare routing trees based on network lifetime by the following security objective function:
F(conf, T_conf) = (1/k) * Σ_{i=1}^{k} E(v_i, conf, T_conf)
where E : V x G x T -> R assigns to a node v_i the overall energy consumption of the path from v_i to v_0 (the base station) in a routing tree of a configuration. Since T_conf is a random variable, the output of F is a random variable too. If the distribution of this output in the presence of an attacker non-negligibly differs from the distribution when there is no attacker, then the protocol is not secure. If we intend to compare routing trees based on network delay, a simple security objective function may be
F(conf, T_conf) = (1/k) * Σ_{i=1}^{k} M(v_i, conf, T_conf)
where M : V x G x T -> R assigns to a node the length of the path from that node to v_0 in a routing topology of a configuration.
2.4 Dynamic model
Following the simulation paradigm, we define a real-world model and an ideal-world model. The real-world model represents the real operation of the protocol, and the ideal-world model describes how the system should work ideally. Both models contain an adversary. The real-world adversary is not constrained apart from requiring it to run in polynomial time. This enables us to be concerned with arbitrary feasible attacks. In addition, the ideal-world adversary is constrained in such a way that it cannot modify messages or inject extra ones, due to the construction of the ideal-world system. In other words, all attacks that modify or inject any messages are unsuccessful in the ideal-world system. However, the ideal-world adversary can perform attacks that are unavoidable or very costly to defend against (e.g., message deletion).
Once the models are defined, the goal is to prove that for any real-world adversary, there exists an ideal-world adversary that can achieve essentially the same effects in the ideal-world model as those achieved by the real-world adversary in the real-world model (i.e., the ideal-world adversary can simulate the real-world adversary).
2.4.1 Real-world model
The real-world model that corresponds to a configuration conf = (V, L, E, C) and adversary A is denoted by sys^real_{conf,A}, and it is illustrated in Figure 2. We model the operation of the protocol participants by interactive and probabilistic Turing machines. Correspondingly, we represent the adversary, the honest sensor nodes, and the broadcast nature of the radio communication by machines A, M_i, and C, respectively. These machines communicate with each other via common tapes.
Each machine must be initialized with some input data (e.g., cryptographic keys, reachability matrix, etc.), which determines its initial state. Moreover, the machines are also provided with some random input (the coin flips to be used during the operation). Once the machines have been initialized, the computation begins. The machines operate in a reactive manner, i.e., they need to be activated in order to perform some computation. When a machine is activated, it reads the content of its input tapes, processes the received data, updates its internal state, writes some output on its output tapes, and goes back to sleep. The machines are activated in rounds by a hypothetical scheduler, and each machine in each round is activated only once.
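To make this reactive, round-based execution concrete, the sketch below shows one way the scheduler loop could be organized. It is an illustration only, not the formal Turing-machine model; the class and method names are our own. Each machine exposes an activate() step that reads its input tape, updates its state, and writes its output tape; the channel machine C is activated last in each round, and at the end the routing entries of the honest machines are collected.

```python
from typing import Dict, List, Protocol

class Machine(Protocol):
    def activate(self) -> None: ...                     # read input tape, update state, write output tape
    def finished(self) -> bool: ...
    def routing_entries(self) -> Dict[str, str]: ...    # node label -> parent label

def run_model(honest: List[Machine], adversary: Machine, channel: Machine,
              max_rounds: int = 1000) -> List[Dict[str, str]]:
    """Round-based activation: every machine is activated once per round,
    with the channel machine C activated at the end of each round."""
    for _ in range(max_rounds):                         # time-out after max_rounds
        for machine in honest + [adversary]:
            machine.activate()
        channel.activate()                              # C delivers this round's messages
        if all(m.finished() for m in honest):
            break
    # the output of the model is derived from the honest routing entries
    return [m.routing_entries() for m in honest]
```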
The order of\nactivation is arbitrary with the only restriction that C must\nbe activated at the end of the rounds.\n52\nNow, we present the machines in more details:\nMachine C. This machine is intended to model the radio\ncommunication. It has input tapes out\ni\nand out\nj\n,\nfrom which it reads messages written by M\ni\nand A,\nresp. It also has output tapes in\ni\nand in\nj\n, on which it\nwrites messages to M\ni\nand A, resp. C is also initialized\nby matrix E at the beginning of the computation.\nMessages\non\ntape\nout\ni\ncan\nhave\nthe\nformat\n(\nsndr\n, cont , e, dest), where\nsndr\nL is the identifier of\nthe sender, cont is the message content, e is the energy\nlevel to be used to determine the range of transmission,\nand dest is the identifier of the intended destination\ndest\nL {}, where indicates broadcast message.\nMessages on tape out\nj\ncan have the following formats:\n(MSG,\nsndr\n, cont, e, dest ): MSG message models a\nnormal broadcast message sent by the adversary\nto machine C with sender identifier\nsndr\nL,\nmessage content cont , energy level e, and identifier\nof the intended destination dest\nL {}.\n(JAM, e): Special JAM message, that is sent by\nthe adversary to machine C, models the jamming\ncapability of the adversary. When machine C receives\na message JAM, it performs the requested\njamming by deleting all messages in the indicated\nrange e around the jamming node, which\nmeans that those deleted messages are not delivered\nto the nodes (including the jammer node\nitself) within the jamming range.\n(DEL,\ntar\n, e): Special DEL message, that is sent\nby the adversary to machine C, models the modification\ncapability of the adversary. When receiving\na message DEL with identifier\ntar\nL,\nmachine C does not deliver any messages sent by\nnode v\nV , where L(v ) =\ntar\n, if v is within\nthe indicated range e, except the adversarial node\nitself that will receive the deleted messages. This\nmodels the sophisticated jamming technique that\nwe described in Subsection 2.1.\nIn a more formal way, when reading a message msg\nin\n=\n(MSG,\nsndr\n, cont, e, dest) from out\nj\n, C determines the\nnodes which receive the message by calculating the set\nof nodes V\ne\nV , such that for all v V\ne\ne\nv\nj\n,v\ne.\nFinally, C processes msg\nin\nas follows.\n1. if dest\nL {}, then C writes\nmsg\nout\n= (\nsndr\n, cont, dest ) to the input tapes\nof machines corresponding to honest nodes in\nV\ne\nmsg\nout\n= (MSG,\nsndr\n, cont, dest ) to the input\ntapes of machines corresponding to adversarial\nnodes in V\ne\n\\ {v\n\nj\n}\n2. otherwise C discards msg\nin\nWhen reading a message msg\nin\n= (JAM, e) from out\nj\n,\nC determines the set of nodes which receive the message\nby calculating V\ne\nV , such that for all v V\ne\ne\nv\nj\n,v\ne. Afterwards, C does not write any messages\nwithin the same round to the input tapes of machines\ncorresponding to V\ne\n.\nWhen reading a message msg\nin\n= (DEL,\ntar\n, e) from\nout\nj\n, C determines the set of nodes which receive the\nmessage by calculating V\ne\nV , such that for all v\nV\ne\ne\nv\nj\n,v\ne. Finally, C processes msg\nin\nas follows.\n1. if there exists v\nx\nV\ne\n(1\nx k), such that\nL(v\nx\n) =\ntar\n, then C does not write any messages\nwithin the same round from tape out\nx\nto the input\ntapes of machines corresponding to V\ne\n\\ {v\n\nj\n}\n2. 
otherwise C discards msg_in
When reading a message msg_in = (sndr, cont, e, dest) from out_i, C determines the set of nodes which receive the message by calculating V_e ⊆ V such that for all v ∈ V_e: e_{v_i,v} ≤ e. Finally, C processes msg_in as follows.
1. if dest ∈ L ∪ {*}, then C writes
 - msg_out = (sndr, cont, dest) to the input tapes of the machines corresponding to the honest nodes in V_e \ {v_i}
 - msg_out = (MSG, sndr, cont, dest) to the input tapes of the machines corresponding to the adversarial nodes in V_e
2. otherwise C discards msg_in
Machine M_i. This machine models the operation of an honest sensor node, and it corresponds to node v_i. It has input tape in_i and output tape out_i, which are shared with machine C. The format of input messages must be (sndr, cont, dest), where dest ∈ L ∪ {*}. The format of output messages must be (sndr, cont, e, dest), where sndr must be L(v_i), dest ∈ L ∪ {*}, and e indicates the transmission range of the message for C. When this machine reaches one of its final states, or there is a time-out during the computation process, it outputs its routing table.
Machine A. This machine models the adversary logic. Encapsulating each adversarial node into a single machine allows us to model wormholes inside A. One can imagine that the adversary deploys several antennas in the network field, which are connected to a central adversary logic. In this convention, node v*_j corresponds to an adversarial antenna, which is modelled by input tape in_j and output tape out_j. These tapes are shared with machine C.
The format of input messages must be msg_in = (MSG, sndr, cont, e, dest), where dest ∈ L ∪ {*}. The format of output messages msg_out can be
 - (MSG, sndr, cont, e, dest), where dest ∈ L ∪ {*} and e indicates the transmission range of the message;
 - (JAM, e), where e indicates the range of jamming;
 - (DEL, tar, e), where e indicates the range of selective jamming, and tar ∈ L.
The computation ends when all machines M_i reach their final states, or there is a time-out. The output of sys^real_{conf,A} is the value of the security objective function F, defined in Subsection 2.3, applied to the resulting routing topology and configuration conf. The routing topology is represented by the ensemble of the routing entries of the machines M_i. We denote the output by Out^{real,F}_{conf,A}(r), where r is the random input of the model. In addition, Out^{real,F}_{conf,A} will denote the random variable describing Out^{real,F}_{conf,A}(r) when r is chosen uniformly at random.
Figure 2: The real-world model (on the left-hand side) and the ideal-world model (on the right-hand side).
2.4.2 Ideal-world model
The ideal-world model (illustrated in Figure 2) that corresponds to a configuration conf = (V, L, E, C) and adversary A is denoted by sys^ideal_{conf,A}. The ideal-world model is identical to the real-world model with the exception that the ideal-world adversary cannot modify messages or inject extra ones. However, he is allowed to simply drop any messages or perform jamming, since these attacks are unavoidable, or at least, they are too costly to defend against. Our model is considered to be ideal in this sense.
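In both models, the output is obtained in the same way: the routing entries collected from the machines M_i are assembled into the topology matrix T, and the security objective function F is applied to it together with the configuration. The sketch below illustrates this for the average-path-energy objective of Section 2.3; it reuses the hypothetical Configuration structure sketched earlier, and the function names are our own illustration, not part of the model.

```python
import math
from typing import Dict, List, Optional

def build_topology(conf, parents: Dict[int, Optional[int]]) -> List[List[int]]:
    """T[i][j] = 1 iff honest node v_i forwards every message to L(v_j)."""
    size = conf.k + 1
    T = [[0] * size for _ in range(size)]
    for i, j in parents.items():                 # parent index per honest node, if any
        if j is not None:
            T[i][j] = 1
    return T

def path_energy(conf, T, i: int) -> float:
    """E(v_i, conf, T): energy of the path from v_i to the base station v_0."""
    total, seen = 0.0, set()
    while i != 0:
        if i in seen:                            # routing loop: no valid path
            return math.inf
        seen.add(i)
        nxt = next((j for j in range(conf.k + 1) if T[i][j] == 1), None)
        if nxt is None:
            return math.inf                      # no parent entry: base station unreachable
        total += conf.E[i][nxt]
        i = nxt
    return total

def objective_lifetime(conf, T) -> float:
    """F(conf, T) = (1/k) * sum_{i=1..k} E(v_i, conf, T)."""
    return sum(path_energy(conf, T, i) for i in range(1, conf.k + 1)) / conf.k
```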
Comparing to the\nreal-world model, we replace machine C with machine C\nand machine A with machine A in order to implement our\nrestricted ideal-world adversary. Hence, we only detail the\noperation of C and A here, since M\ni\nare the same as in the\nreal-world model.\nReceiving an MSG message from machines M\ni\n, C internally\nstores that message with a unique message identifier\nin its internal store. Delivering any MSG message to A ,\nC also includes the message identifier into the message. A\ncan send an MSG message to C with a different format; it\nonly contains an identifier id and an energy level e. Upon\nthe reception of such a message, C searches for the original\nmessage which is associated with identifier id in its internal\nstore, and delivers this stored message using the energy level\ne. Although A also receives the original message with its\nassociated identifier from C , he is not able to modify that,\nsince C only accepts a message identifier issued by himself\nand an energy level from A . In other words, A can only\ndelete messages, since A can also send special DEL and JAM\nmessages to C . We elaborate the operation of C and A in\na more formal way as follows.\nA and C communicate via tapes in\nj\nand out\nj\n.\nMachine C . It has input tapes out\ni\nand out\nj\n, from\nwhich it reads messages written by M\ni\nand A, resp. It\nalso has output tapes in\ni\nand in\nj\n, on which it writes\nmessages to M\ni\nand A, resp. C is also initialized by\nmatrix E. In addition, it sets its internal variable id\nC\nto 1 at the beginning of the computation.\nC interacts with machines M\ni\nin a similar way as C\ndoes in the real-world model; when reading a message\nmsg\nin\n= (\nsndr\n, cont, e, dest ) from out\ni\n, C processes\nmsg\nin\nidentically to C in the real-world model\nonly with one exception:\nBefore writing msg\nin\n=\n(MSG, id\nC\n,\nsndr\n, cont, dest) to output tapes in\nj\n, C\ninternally stores msg\nin\nin set S. After writing msg\nin\nto output tapes in\nj\n, C increments id\nC\nby one. Therefore\n, C knows what messages are passed to A from M\ni\n.\nMessages on out\nj\ncan have the formats:\n(MSG, id, e):\nMSG message models a normal\nbroadcast message sent by the ideal-world adversary\nto machine C , where e indicates the transmission\nrange of the message identified by id.\n(JAM, e): Special JAM message, that is sent by\nthe adversary to machine C, models the jamming\ncapability of the ideal-world adversary, where e\nindicates the range of jamming.\n(DEL,\ntar\n, e): Special DEL message, that is sent\nby the adversary to machine C, models the modification\ncapability of the ideal-world adversary,\nwhere e indicates the range of selective jamming,\nand\ntar\nL.\nWhen reading a message msg\nin\n= (MSG, id, e) from\nout\nj\n, machine C operates differently from C. C determines\nthe set of nodes which receive the message by\ncalculating V\ne\nV , such that for all v V\ne\ne\nv\nj\n,v\ne.\nFinally, C processes msg\nin\nas follows.\n1. if 1\nid id\nC\n, then C searches the msg =\n(MSG, id ,\nsndr\n, cont , dest ) in S such that id\nequals to id, and C writes\nmsg\nout\n= (\nsndr\n, cont , dest ) to the input\ntapes of machines corresponding to honest\nnodes in V\ne\nmsg\nout\n= (MSG, id ,\nsndr\n, cont , dest ) to the\ninput tapes of machines corresponding to adversarial\nnodes in V\ne\n\\ {v\n\nj\n}\n2. 
otherwise C discards msg\nin\nWhen reading a message msg\nin\n= (JAM, e) or msg\nin\n=\n(DEL,\ntar\n, e) from out\nj\n, machine C operates the same\n54\nway as C does in case of the corresponding message\nformats.\nMachine A . It has output tapes out\nj\nand input tapes\nin\nj\n. The format of messages on input tape in\nj\nmust\nbe msg\nin\n= (MSG, id,\nsndr\n, cont, e, dest), where dest\nL {}.\nThe format of output messages msg\nout\ncan be\n(MSG, id, e), where id is a message identifier and\ne indicates the transmission range of the message\nidentified by id;\n(JAM, e), where e indicates the range of jamming;\n(DEL,\ntar\n, e), where e indicates the range of selective\njamming, and\ntar\nL.\nThe computation ends, when all machines M\ni\nreach their\nfinal states, or there is a time-out. Similar to the real-world\nmodel, the output of sys\nideal\nconf ,A\nis the value of the security\nobjective function\nF applied to the resulted routing topology\nand configuration conf . The routing topology is represented\nby the ensemble of the routing entries of machines M\ni\n. We\ndenote the output by Out\nideal,F\nconf ,A\n(r), where r is the random\ninput of the model. Moreover, Out\nideal,F\nconf ,A\nwill denote the\nrandom variable describing Out\nideal,F\nconf ,A\n(r) when r is chosen\nuniformly at random.\n2.5\nDefinition of secure routing\nLet us denote the security parameter of the model by\n(e.g., is the key length of the cryptographic primitive employed\nin the routing protocol, such as digital signature,\nMAC, etc.). Based on the model described in the previous\nsubsections, we define routing security as follows:\nDefinition 1\n(Statistical security). A\nrouting\nprotocol is statistically secure with security objective function\nF, if for any configuration conf and any real-world\nadversary\nA, there exists an ideal-world adversary A ,\nsuch that Out\nreal,F\nconf ,A\nis statistically indistinguishable from\nOut\nideal,F\nconf ,A\n.\nTwo random variables are statistically indistinguishable\nif the L\n1\ndistance of their distributions is a\nnegligible function of the security parameter .\nIntuitively, if a routing protocol is statistically secure,\nthen any system using this routing protocol cannot satisfy\nits security objectives represented by function\nF only with\na probability that is a negligible function of .\nThis negligible probability is related to the fact that the\nadversary can always forge the cryptographic primitives\n(e.g., generate a valid digital signature) with a very small\nprobability depending on the value of .\nINSECURITY OF TINYOS ROUTING\nIn this section, we present an authenticated routing mechanism\nbased on the well-known TinyOS routing, and we\nshow that it is not secure in our model for a given security\nobjective function representing a very minimal security\nrequirement.\n3.1\nOperation of an authenticated routing\nprotocol\nOriginally, the authors of TinyOS implemented a very\nsimple routing protocol, where each node uses a globally\nunique identifier. The base station periodically initiates a\nrouting topology discovery by flooding the network by a beacon\nmessage. Upon reception of the first beacon within a\nsingle beaconing interval, each sensor node stores the identifier\nof the node, from which it received the beacon, as its\nparent (aka. next-hop towards the base station), and then\nre-broadcasts the beacon after changing the sender identifier\nto its own identifier. 
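This per-node beacon handling rule can be summarized by a short sketch. It is an illustration of the behaviour just described, not the TinyOS implementation; the names are ours, and the way a beaconing interval is identified (an explicit interval counter) is our own simplification.

```python
class BeaconingNode:
    """First-beacon-wins parent selection of the basic TinyOS-style routing."""

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.parent = None
        self.current_interval = None

    def on_beacon(self, sender_id: str, interval: int, rebroadcast) -> None:
        # Accept only the first beacon heard in each beaconing interval.
        if interval == self.current_interval:
            return
        self.current_interval = interval
        self.parent = sender_id                  # next hop towards the base station
        rebroadcast(self.node_id, interval)      # forward the beacon with our own identifier

    def on_data(self, packet, send) -> None:
        if self.parent is not None:
            send(self.parent, packet)            # forward data towards the base station
```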
As for each node only one parent\nis stored, the resulted routing topology is a tree.\nEvery\nsensor node receiving a data packet forwards that towards\nthe base station by sending the packet to its parent.\nA\nlightweight cryptographic extension is employed in [14] in\norder to authenticate the beacon by the base station. This\nauthenticated variant of TinyOS routing uses Tesla scheme\nto provide integrity for the beacon; each key is disclosed by\nthe next beacon in the subsequent beaconing interval. We\nremark that this protocol has only been defined informally\nthat inspired us to present a new protocol, which provides\nthe \"same\" security as the authenticated routing protocol in\n[14], but due to its simplicity it fits more in demonstrating\nthe usage of our model. Consequently, the presented attack\nagainst this new protocol also works against the protocol in\n[14]. We must note again that this protocol is only intended\nto present the usefulness of our model rather than to be\nconsidered as a proposal of a new sensor routing protocol.\nWe assume that the base station B has a public-private\nkey pair, where the public key is denoted by K\npub\n. Furthermore\n, it is assumed that each sensor node is also deployed\nwith K\npub\n, and they are capable to perform digital signature\nverification with K\npub\nas well as to store some beacons in\nits internal memory. We note that B never relays messages\nbetween sensor nodes.\nInitially, B creates a beacon, that contains a constant message\nidentifier BEACON, a randomly generated number rnd,\nthe identifier of the base station Id\nB\n, and a digital signature\nsig\nB\ngenerated on the previous elements except Id\nB\n. Afterwards\n, the base station floods the network by broadcasting\nthis beacon:\nB\n:\nmsg\n1\n= (BEACON, rnd, Id\nB\n, sig\nB\n)\nEach sensor node X receiving msg\n1\nchecks whether it has\nalready received a beacon with the same rnd in conjunction\nwith a correct signature before. If it is true, the node discards\nmsg\n1\n, otherwise it verifies sig\nB\n. If the verification is\nsuccessful, then X sets Id\nB\nas its parent, stores msg\n1\nin its\ninternal memory, and re-broadcasts the beacon by changing\nthe sender identifier Id\nB\nto its own identifier Id\nX\n:\nX\n:\nmsg\n2\n= (BEACON, rnd, Id\nX\n, sig\nB\n)\nIf the signature verification is unsuccessful, then X discards\nmsg\n1\n. Every sensor node receiving msg\n2\nperforms the same\nsteps what X has done before.\nOptionally, B can initiate this topology construction periodically\nby broadcasting a new beacon with different rnd.\nIn the rest, we shortly refer to this protocol as ABEM\n(Authenticated Beaconing Mechanism).\n3.2\nFormalization of a simple attack\nA simple security objective is to guarantee the correctness\nof all routing entries in the network. Namely, it is desirable\nthat a sender node v\ni\nis always able to reach node v\nj\n, if v\ni\nset\nL(v\nj\n) as its parent identifier earlier. 
It means that if node v_i sets L(v_j) as its parent identifier, then E_{i,j} should contain a finite value, or v_i as well as v_j should have adversarial neighboring nodes v*_{ℓ1} and v*_{ℓ2}, respectively, such that E_{i,k+ℓ1} and E_{k+ℓ2,j} are finite values, where 1 ≤ ℓ1, ℓ2 ≤ m and ℓ1 = ℓ2 may hold.
In order to formalize this minimal security requirement, we introduce the following security objective function:
F_ABEM(conf, T) = 1, if for all i, j: T_{i,j} = 1 implies E'_{i,j} * ( Π_{ℓ=1}^{m} E'_{i,k+ℓ} + Π_{ℓ=1}^{m} E'_{k+ℓ,j} ) = 0, and 0 otherwise,
where we derive the matrix E' of size n x n from E such that E'_{i,j} = 1 if E_{i,j} = ∞, and E'_{i,j} = 0 otherwise. In other words, E'_{i,j} = 1 if v_i cannot send a message directly to v_j, and E'_{i,j} = 0 otherwise.
We will show that ABEM is not secure in our model for the security objective function F_ABEM. In particular, we present a configuration conf and an adversary A for which there does not exist any ideal-world adversary A' such that Out^{real,F_ABEM}_{conf,A} is statistically indistinguishable from Out^{ideal,F_ABEM}_{conf,A'}. Equivalently, we show that for a real-world adversary A, F_ABEM(conf, T) = 0 with a probability that is a non-negligible function of the security parameter in the real-world model, while F_ABEM(conf, T) = 0 with probability zero for every ideal-world adversary A' in the ideal-world model, where T describes the routing topology in the ideal-world model. Moreover, the success probability of the real-world adversary A described below is independent of the security parameter.
Figure 3: A simple attack against ABEM. v_0, v_1, and v_2 are honest nodes with identifiers L(v_0) = B, L(v_1) = X, and L(v_2) = Y, whereas v*_1 is an adversarial node. E_{1,0}, E_{3,0}, and E_{2,3} are finite values, and E_{3,1} = E_{2,0} = E_{2,1} = ∞. Links are assumed to be symmetric, i.e., E_{i,j} = E_{j,i}. The configuration is illustrated on the left-hand side, where a dashed line denotes a direct link. In the routing topology of the real-world model, on the right-hand side, v_2 sets X as its parent identifier; however, E_{2,1} = ∞ and E_{3,1} = ∞.
The configuration conf and the result of the attack are depicted in Figure 3. We assume that the base station broadcasts only a single beacon during the computational process, i.e., only a single beaconing interval is analyzed in our model. At the beginning, the base station B floods the network by a beacon
B : msg_1 = (BEACON, rnd, B, sig_B)
Both the adversarial node v*_1 and the honest node X receive this beacon, and X sets B as its parent, since the verification of the signature is successful. X modifies the beacon by replacing the sender identifier B with X, and broadcasts the resulting beacon:
X : msg_2 = (BEACON, rnd, X, sig_B)
In parallel, v*_1 modifies the beacon by replacing the sender identifier B with X, and broadcasts the resulting beacon:
v*_1 : msg_2 = (BEACON, rnd, X, sig_B)
Upon the reception of msg_2, node Y sets X as its parent, since sig_B is correct.
In the real-world model, these actions result in T_{2,1} = 1, which implies that F_ABEM(conf, T) = 0. On the contrary, F_ABEM(conf, T) never equals 0, where T represents the routing topology in the ideal-world model. Let us assume that F_ABEM(conf, T) = 0, which means that T_{1,2} = 1 or T_{2,1} = 1.
T\n1,2\n= 1 is only possible, if X receives\nmsg\n3\n= (BEACON, rnd, Y, sig\nB\n)\nHowever, it yields contradiction, since E\n3,1\n= E\n2,1\n=\n,\nand B never broadcasts msg\n3\n. Similarly, if T\n2,1\n= 1 then\nY must receive msg\n2\n, which means that v\n\n1\nmust broadcast\nmsg\n2\n. Conversely, B never broadcasts msg\n2\n, and E\n3,1\n=\n.\nTherefore, v\n\n1\ncan only broadcast msg\n2\n, if he successfully\nmodifies msg\n1\nor forges msg\n2\n. However, it also contradicts\nour assumption that the ideal-world adversary cannot modify\nand inject messages in the ideal-world model.\nRELATED WORK\nIn [10], the authors map some adversary capabilities and\nsome feasible attacks against routing in wireless sensor networks\n, and they define routing security implicitly as resistance\nto (some of) these attacks.\nHence, the security of\nsensor routing is only defined informally, and the countermeasures\nare only related to specific attacks. In this way, we\neven cannot compare the sensor routing protocols in terms of\nsecurity. Another problem with this approach is the lack of a\nformal model, where the security of sensor routing can be described\nin a precise and rigorous way. While secure messaging\nand key-exchange protocols are classical and well-studied\nproblems in traditional networks [3, 15], formal modelling of\nsecure routing in sensor networks has not been considered\nso far.\nThe adversarial nodes are also classified into the\ngroups of sensor-class and laptop-class nodes in [10], but\nthe capabilities of an adversarial node regarding message\nmanipulations are not discussed.\nThe simulation paradigm is described in [15, 5]. These\nmodels were mainly proposed with wired networks in mind\ntypically implemented on the well-known Internet architecture\n, and the wireless context is not focused there. In our\nopinion, the multi-hop nature of communications is an inherent\ncharacteristic of wireless sensor networks, therefore,\nit should be explicitly modelled. In more particular, the\nbroadcast nature of communication enables a party to overhear\nthe transmission of a message that was not destined to\nhim, however, this transmission can be received only in a\ncertain range of the sender. The size of this range is determined\nby the power at which the sender sent the message.\nAnother deviation from [15] is the usage of the security objective\nfunction in the definition of security. In [15], the\n56\nindistinguishability is defined on the view of the honest parties\n(on their input, states, and output) in the ideal-world\nand in the real-world models. However, an adversary can\ndistort the states of the honest parties in unavoidable ways,\nand hence, the classical definition would be too strong and\nno routing protocol would satisfy it. On the other hand,\nour model is compliant with [15] considering high-level connections\nbetween nodes. In [15], the standard cryptographic\nsystem allows us to define each high-level connection as secure\n(private and authentic), authenticated (only authentic\n), and insecure (neither private nor authentic). In this\ntaxonomy, the communication channel between two honest\nnodes can be either insecure or secure in our model. If an\nadversarial node is placed in the communication range of\none of the communicating nodes, then it is considered to\nbe an insecure channel. 
If the adversary can reach none of\nthe communicating nodes, the channel between that nodes\nis hidden from the adversary, and thus, it is considered to\nbe secure.\nAlthough some prior works [18, 12] also used formal techniques\nto model the security of multi-hop routing protocols\n, these ones were mainly proposed for ad hoc routing.\nMoreover, the model proposed in [12] is based on CPAL-ES,\nand the model in [18] is similar to the strand spaces model.\nBoth of these formal techniques differ from the simulation\nparadigm.\nOur work is primarily based on [4, 1]. Here, the authors\nalso use the simulation paradigm to prove the security of\nrouting protocols in wireless ad-hoc networks. However, our\nmodel differs from the models in [4, 1] in two ways:\nAdversary model: The adversary in [4] and [1] is assumed\nto have the same resources and communication\ncapabilities as an ordinary node in the network.\nTherefore, that adversary model deviates from the so-called\nDolev-Yao model in [6]. In our work, the adversary\nalso uses wireless devices to attack the systems,\nand it is reasonable to assume that the adversary can\ninterfere with communications only within its power\nrange. The adversarial nodes belonging to the sensor-class\nnodes has the same resources and communication\ncapabilities as an ordinary sensor node, but a more resourced\nadversarial node (e.g., laptops) may affect the\noverall communication of an entire part of the network\ndepending on the power range of the resourced adversarial\ndevice. That resourced devices also make the\nadversary able to perform more sophisticated message\nmanipulations.\nModelling security objectives: In ad hoc networks,\nnodes construct routes between a source and a destination\n[13, 8], whereas sensor nodes should build a complete\nrouting topology for the entire network. In case\nof sensor networks, the only destination for all nodes is\nthe base station [9]. In addition, sensor nodes are resource\nconstrained, which implies that we also need to\nmodel the energy consumption of sensor nodes, since\nseveral attacks impacts the network lifetime. These\ndifferences from ad hoc networks has yielded a wide\nrange of sensor applications, and thus, sensor routing\nprotocols [9] are much diverse than ad hoc routing\nprotocols.\nHence, the security objectives cannot be\nmodelled uniformly for sensor routing protocols.\nCONCLUSION\nIn this paper, we proposed a formal security model for\nrouting protocols in wireless sensor networks. Our model is\nbased on the well-known simulation paradigm, but it differs\nfrom previously proposed models in several important aspects\n. First of all, the adversary model is carefully adopted\nto the specific characteristics of wireless sensor networks. In\nour model, the adversary is not all-powerful, but it can only\ninterfere with communications within its own radio range. A\nsecond important contribution is that we defined the output\nof the dynamic models that represent the ideal and the real\noperations of the system as a suitable function of the routing\nstate of the honest nodes, instead of just using the routing\nstate itself as the output. We expect that this will allow\nus to model different types of routing protocols in a common\nframework. In addition, this approach hides the unavoidable\ndistortions caused by the adversary in the routing\nstate, and in this way, it makes our definition of routing security\nsatisfiable. 
As an illustrative example, we considered\nan authenticated version of the TinyOS beaconing routing\nprotocol, and we showed how an attack against this protocol\ncan be represented in our formal model.\nAs we mentioned in the Introduction, this paper is a workin\n-progress paper.\nIn particular, we have presented neither\na new secure routing protocol designed with the help\nof our formal model, nor a detailed security proof carried\nout within our model. These are left for future study. We\nmust note, however, that the generality of the simulation\nparadigm and the fact that we could represent a known\nattack against the authenticated TinyOS protocol in our\nmodel make us confident that we are on the right track.\nACKNOWLEDGEMENTS\nThe\nwork\ndescribed\nin\nthis\npaper\nis\nbased\non\nresults\nof\nIST\nFP6\nSTREP\nUbiSec&Sens\n(\nhttp://www.ist-ubisecsens.org).\nUbiSec&Sens\nreceives\nresearch funding from the European Community's\nSixth Framework Programme. Apart from this, the European\nCommission has no responsibility for the content of\nthis paper. The information in this document is provided\nas is and no guarantee or warranty is given that the\ninformation is fit for any particular purpose.\nThe user\nthereof uses the information at its sole risk and liability.\nThe work presented in this paper has also been partially\nsupported by the Hungarian Scientific Research Fund (contract\nnumber T046664).\nThe first author has been further\nsupported by the HSN Lab. The second author has\nbeen supported by the Hungarian Ministry of Education\n(B\nO2003/70).\nREFERENCES\n[1] G.\nAcs, L. Butty\nan, and I. Vajda. Provable Security of\nOn-Demand Distance Vector Routing in Wireless Ad\nHoc Networks. In Proceedings of the Second European\nWorkshop on Security and Privacy in Ad Hoc and\nSensor Networks (ESAS 2005), July 2005.\n[2] G.\nAcs, L. Butty\nan, and I. Vajda. Provably Secure\nOn-demand Source Routing in Mobile Ad Hoc\nNetworks. To appear in IEEE Transactions on Mobile\nComputing.\n[3] M. Bellare, R. Canetti, and H. Krawczyk. A modular\napproach to the design and analysis of authentication\n57\nand key exchange protocols. In Proceedings of the\nACM Symposium on the Theory of Computing, 1998.\n[4] L. Butty\nan and I. Vajda. Towards provable security\nfor ad hoc routing protocols. In Proceedings of the\nACM Workshop on Security in Ad Hoc and Sensor\nNetworks (SASN), October 2004.\n[5] R. Canetti. Universally composable security: A new\nparadigm for cryptographic protocols. In Proceedings\nof the 42nd IEEE Symposium on Foundations of\nComputer Science (FOCS), 2001.\n[6] D. Dolev and A. C. Yao. On the Security of Public\nKey Protocols. In IEEE Transactions on Information\nTheory 29 (2), 1983.\n[7] IEEE Standard for Information\ntechnology--Telecommunications and information\nexchange between systems--Local and metropolitan\narea networks--Specific requirements. Part 15.4:\nWireless Medium Access Control (MAC) and Physical\nLayer (PHY) Specifications for Low-Rate Wireless\nPersonal Area Networks (LR-WPANs), 2003.\n[8] D. Johnson and D. Maltz. Dynamic source routing in\nad hoc wireless networks. In Mobile Computing, edited\nby Tomasz Imielinski and Hank Korth, Chapter 5,\npages 153-181. Kluwer Academic Publisher, 1996.\n[9] J. N. Al-Karaki and A. E. Kamal. Routing techniques\nin wireless sensor networks: a survey. In IEEE\nWireless Communications, Volume 11, pp. 6-28, 2004.\n[10] C. Karlof, D. Wagner. Secure routing in wireless\nsensor networks: attacks and countermeasures. 
In Ad\nHoc Networks, Volume 1, 2003.\n[11] Q. Li, J. Aslam, and D. Rus. Hierarchical\nPower-aware Routing in Sensor Networks. In\nProceedings of the DIMACS Workshop on Pervasive\nNetworking, May, 2001.\n[12] J. Marshall. An Analysis of the Secure Routing\nProtocol for mobile ad hoc network route discovery:\nusing intuitive reasoning and formal verification to\nidentify flaws. MSc thesis, Department of Computer\nScience, Florida State University, April 2003.\n[13] C. Perkins and E. Royer. Ad hoc on-demand distance\nvector routing. In Proceedings of the IEEE Workshop\non Mobile Computing Systems and Applications, pp.\n90-100, February 1999.\n[14] A. Perrig, R. Szewczyk, V. Wen, D. Culler, J. D.\nTygar. SPINS: Security Protocols for Sensor\nNetworks. In Wireless Networks Journal (WINE),\nVolume 8, September 2002.\n[15] B. Pfitzman and M. Waidner. A model for\nasynchronous reactive systems and its application to\nsecure message transmission. In Proceedings of the\n22nd IEEE Symposium on Security & Privacy, 2001.\n[16] S. Singh, M. Woo, and C. Raghavendra. Power-Aware\nRouting in Mobile Ad Hoc Networks. In Proceedings\nof the Fourth Annual ACM/IEEE International\nConference on Mobile Computing and Networking\n(MobiCom '98), Oct. 1998.\n[17] W. Xu, W. Trappe, Y. Zhang and T. Wood. The\nFeasibility of Launching and Detecting Jamming\nAttacks in Wireless Networks. In Proceedings of the\n6th ACM International Symposium on Mobile Ad Hoc\nNetworking and Computing (MobiHoc'05), May 2005.\n[18] S. Yang and J. Baras. Modeling vulnerabilities of ad\nhoc routing protocols. In Proceedings of the ACM\nWorkshop on Security of Ad Hoc and Sensor\nNetworks, October 2003.\n58\n", "keywords": "Simulatability;Adversary Model;Routing Protocols;Sensor Networks;Provable Security"} {"name": "142", "title": "Obfuscated Databases and Group Privacy", "abstract": "We investigate whether it is possible to encrypt a database and then give it away in such a form that users can still access it, but only in a restricted way. In contrast to conventional privacy mechanisms that aim to prevent any access to individual records, we aim to restrict the set of queries that can be feasibly evaluated on the encrypted database. We start with a simple form of database obfuscation which makes database records indistinguishable from lookup functions . The only feasible operation on an obfuscated record is to look up some attribute Y by supplying the value of another attribute X that appears in the same record (i.e., someone who does not know X cannot feasibly retrieve Y ). We then (i) generalize our construction to conjunctions of equality tests on any attributes of the database, and (ii) achieve a new property we call group privacy. This property ensures that it is easy to retrieve individual records or small subsets of records from the encrypted database by identifying them precisely, but \"mass harvesting\" queries matching a large number of records are computationally infeasible. Our constructions are non-interactive. The database is transformed in such a way that all queries except those ex-plicitly allowed by the privacy policy become computationally infeasible, i.e., our solutions do not rely on any access-control software or hardware.", "fulltext": "INTRODUCTION\nConventional privacy mechanisms usually provide all-or-nothing\nprivacy. 
For example, secure multi-party computation\nschemes enable two or more parties to compute some\njoint function while revealing no information about their respective\ninputs except what is leaked by the result of the\ncomputation. Privacy-preserving data mining aims to com-pletely\nhide individual records while computing global statistical\nproperties of the database. Search on encrypted data\nand private information retrieval enable the user to retrieve\ndata from an untrusted server without revealing the query.\nIn this paper, we investigate a different concept of privacy.\nConsider a data owner who wants to distribute a database to\npotential users. Instead of hiding individual data entries, he\nwants to obfuscate the database so that only certain queries\ncan be evaluated on it, i.e., the goal is to ensure that the\ndatabase, after it has been given out to users, can be accessed\nonly in the ways permitted by the privacy policy.\nNote that there is no interaction between the data owner and\nthe user when the latter accesses the obfuscated database.\nOur constructions show how to obfuscate the database\nbefore distributing it to users so that only the queries permitted\nby the policy are computationally feasible. This concept\nof privacy is incomparable to conventional definitions\nbecause, depending on the policy, a permitted query may or\neven should reveal individual data entries.\nFor example, a college alumni directory may be obfuscated\nin such a way that someone who already knows a person's\nname and year of graduation is able to look up that person's\nemail address, yet spammers cannot indiscriminately\nharvest addresses listed in the directory. Employees of a\ncredit bureau need to have access to customers' records so\nthat they can respond to reports of fraudulent transactions,\nyet one may want to restrict the bureau's ability to compile\na list of customers' addresses and sell it to a third party.\nWe develop provably secure obfuscation techniques for\nseveral types of queries. We do not assume that users of the\nobfuscated database access it through a trusted third party,\nnor that they use trusted or \"tamper-proof\" access-control\nsoftware or hardware (in practice, such schemes are vulnerable\nto circumvention and reverse-engineering, while trusted\nthird parties are scarce and often impractical). Our constructions\nare cryptographically strong, i.e., they assume an\nadversary who is limited only by his computational power.\nWe prove security in the standard \"virtual black-box\"\nmodel for obfuscation proposed by Barak et al. [2]. Intuitively\n, a database is securely obfuscated if the view of any\nefficient adversary with access to the obfuscation can be efficiently\nsimulated by a simulator who has access only to\nthe ideal functionality, which is secure by definition. The\nideal functionality can be thought of as the desired privacy\npolicy for the database. One of our contributions is coming\nup with several ideal functionalities that capture interesting\nprivacy policies for databases.\n102\nDirected-access databases.\nOur \"warm-up\" construction\nis a directed-access database. Some attributes of the\ndatabase are designated as query attributes, and the rest\nas data attributes. 
The database is securely obfuscated if,\nfor any record, it is infeasible to retrieve the values of the\ndata attributes without supplying the values of the query\nattributes, yet a user who knows the query attributes can\neasily retrieve the corresponding data attributes.\nTo illustrate by example, a directed-access obfuscation of\na telephone directory has the property that it is easy to\nlook up the phone number corresponding to a particular\nname-address pair, but queries such as \"retrieve all phone\nnumbers stored in the directory\" or \"retrieve all names\"\nare computationally infeasible. Such a directory is secure\nagainst abusive harvesting, but still provides useful functionality\n. Note that it may be possible to efficiently enumerate\nall name-address pairs because these fields have less\nentropy than regular cryptographic keys, and thus learn the\nentire database through the permitted queries. Because the\ndatabase is accessed only in permitted ways, this does not\nviolate the standard definition of obfuscation. Below, we\ngive some examples where it is not feasible to enumerate all\npossible values for query attributes.\nThe directed-access property of a single database record\ncan be modeled as a point function, i.e., a function from\n{0, 1}\nn\nto {0, 1} that returns 1 on exactly one input x (in\nour case, query attributes are the arguments of the point\nfunction). Directed-access obfuscation guarantees that the\nadversary's view of any obfuscated record can be efficiently\nsimulated with access only to this point function. Therefore\n, for this \"warm-up\" problem, we can use obfuscation\ntechniques for point functions such as [22]. Informally, we\nencrypt the data attributes with a key derived from hashed\nquery attributes. The only computationally feasible way to\nretrieve the data attributes is to supply the corresponding\nquery attributes. If the retriever does not know the right\nquery attributes, no information can be extracted at all.\nGroup-exponential databases.\nWe then consider a more\ninteresting privacy policy, which requires that computational\ncost of access be exponential in the number of database\nrecords retrieved. We refer to this new concept of privacy\nas group privacy. It ensures that users of the obfuscated\ndatabase can retrieve individual records or small subsets of\nrecords by identifying them precisely, i.e., by submitting\nqueries which are satisfied only by these records. Queries\nmatching a large number of records are infeasible.\nWe generalize the idea of directed access to queries consisting\nof conjunctions of equality tests on query attributes,\nand then to any boolean circuit over attribute equalities.\nThe user can evaluate any query of the form attribute\nj\n1\n=\nvalue\n1\n. . .attribute\nj\nt\n= value\nt\n, as long as it is satisfied by\na small number of records. Our construction is significantly\nmore general than simple keyword search on encrypted data\nbecause the value of any query attribute or a conjunction\nthereof can be used as the \"keyword\" for searching the obfuscated\ndatabase, and the obfuscator does not need to know\nwhat queries will be evaluated on the database.\nTo distinguish between \"small\" and \"large\" queries, suppose\nthe query predicate is satisfied by n records.\nOur\nconstruction uses a form of secret sharing that forces the\nretriever to guess n bits before he can access the data attributes\nin any matching record. 
(If n=1, i.e., the record is\nunique, the retriever still has to guess 1 bit, but this simply\nmeans that with probability\n1\n2\nhe has to repeat the query.)\nThe policy that requires the retriever to uniquely identify\na single record, i.e., forbids any query that is satisfied by\nmultiple records, can also be easily implemented using our\ntechniques. Our construction can be viewed as the noninteractive\nanalog of hash-reversal \"client puzzles\" used to\nprevent denial of service in network security [21], but, unlike\nclient puzzles, it comes with a rigorous proof of security.\nFor example, consider an airline passenger database in\nwhich every record contains the passenger's name, flight\nnumber, date, and ticket purchase details. In our construction\n, if the retriever knows the name and date that uniquely\nidentify a particular record (e.g., because this information\nwas supplied in a court-issued warrant), he (almost) immediately\nlearns the key that encrypts the purchase details in\nthe obfuscated record. If the passenger traveled on k flights\non that date, the retriever learns the key except for k bits.\nSince k is small, guessing k bits is still feasible. If, however,\nthe retriever only knows the date and the flight number, he\nlearns the key except for m bits, where m is the number of\npassengers on the flight, and retrieval of these passengers'\npurchase details is infeasible.\nA database obfuscated using our method has the group\nprivacy property in the following sense. It can be accessed\nonly via queries permitted by the privacy policy. The probability\nof successfully evaluating a permitted query is inversely\nexponential in the number of records that satisfy the\nquery predicate. In particular, to extract a large number of\nrecords from the database, the retriever must know a pri-ori\nspecific information that uniquely identifies each of the\nrecords, or small subsets thereof. The obfuscated database\nitself does not help him obtain this information.\nIn obfuscated databases with group privacy, computational\ncost of access depends on the amount of information\nretrieved. Therefore, group privacy can be thought of as a\nstep towards a formal cryptographic model for \"economics\nof privacy.\" It is complementary to the existing concepts of\nprivacy, and appears to be a good fit for applications such\nas public directories and customer relationship management\n(CRM) databases, where the database user may need to access\nan individual record for a legitimate business purpose,\nbut should be prevented from extracting large subsets of\nrecords for resale and abusive marketing.\nWhile our constructions for group privacy are provably\nsecure in the \"virtual black-box\" sense of [2], the cost of\nthis rigorous security is a quadratic blowup in the size of the\nobfuscated database, rendering the technique impractical for\nlarge datasets. We also present some heuristic techniques to\ndecrease the size of the obfuscated database, and believe\nthat further progress in this area is possible.\nAlternative privacy policies.\nDefining rigorous privacy\npolicies that capture intuitive \"database privacy\" is an important\nchallenge, and we hope that this work will serve as\na starting point in the discussion. For example, the group\nprivacy policy that we use in our constructions permits the\nretriever to learn whether a given attribute of a database\nrecord is equal to a particular value. 
While this leaks more\ninformation than may be desirable, we conjecture that the\nprivacy policy without this oracle is unrealizable.\nWe also consider privacy policies that permit any query\nrather than just boolean circuits of equality tests on attributes\n. We show that this policy is vacuous: regardless\nof the database contents, any user can efficiently extract\n103\nthe entire database by policy-compliant queries. Therefore,\neven if the obfuscation satisfies the virtual black-box property\n, it serves no useful purpose. Of course, there are many\ntypes of queries that are more general than boolean circuits\nof equality tests on attributes. Exact characterization of\nnon-vacuous, yet realizable privacy policies is a challenging\ntask, and a topic of future research.\nOrganization of the paper.\nWe discuss related work in\nsection 2. The ideas are illustrated with a \"warm-up\" construction\nin section 3. In section 4, we explain group privacy\nand the corresponding obfuscation technique. In section 5,\nwe generalize the class of queries to boolean circuits over attribute\nequalities. In section 6, we show that policies which\npermit arbitrary queries are vacuous, and give an informal\nargument that a policy that does not allow the retriever to\nverify his guesses of individual attribute values cannot be realized\n. Conclusions are in section 7. All proofs will appear\nin the full version of the paper.\nRELATED WORK\nThis paper uses the \"virtual black-box\" model of obfuscation\ndue to Barak et al. [2]. In addition to the impossibility\nresult for general-purpose obfuscation, [2] demonstrates several\nclasses of circuits that cannot be obfuscated. We focus\non a different class of circuits.\nTo the best of our knowledge, the first provably secure\nconstructions for \"virtual black-box\" obfuscation were proposed\nby Canetti et el. [5, 6] in the context of \"perfectly\none-way\" hash functions, which can be viewed as obfuscators\nfor point functions (a.k.a. oracle indicators or delta\nfunctions). Dodis and Smith [15] recently showed how to\nconstruct noise-tolerant \"perfectly one-way\" hash functions.\nwhich they used to obfuscate proximity queries with \"en-tropic\nsecurity.\" It is not clear how to apply techniques\nof [15] in our setting. In section 6, we present strong evidence\nthat our privacy definitions may not be realizable if\nqueries other than equality tests are permitted.\nLynn et al. [22] construct obfuscators for point functions\n(and simple extensions, such as public regular expressions\nwith point functions as symbols) in the random oracle model.\nThe main advantage of [22] is that it allows the adversary\npartial information about the preimage of the hash function,\ni.e., secrets do not need to have high entropy. This feature\nis essential in our constructions, too, thus we also use the\nrandom oracle model. Wee [27] proposed a construction for\na weaker notion of point function obfuscation, along with\nthe impossibility result for uniformly black-box obfuscation.\nThis impossibility result suggests that the use of random\noracles in our proofs (in particular, the simulator's ability\nto choose the random oracle) is essential.\nMany ad-hoc obfuscation schemes have been proposed in\nthe literature [1, 10, 9, 12, 13, 11]. 
Typically, these schemes\ncontain neither a cryptographic definition of security, nor\nproofs, except for theoretical work on software protection\nwith hardware restrictions on the adversary [19, 20].\nForcing the adversary to pay some computational cost for\naccessing a resource is a well-known technique for preventing\nmalicious resource exhaustion (a.k.a. denial of service\nattacks). This approach, usually in the form of presenting\na puzzle to the adversary and forcing him to solve it, has\nbeen proposed for combating junk email [16], website metering\n[17], prevention of TCP SYN flooding attacks [21],\nprotecting Web protocols [14], and many other applications.\nPuzzles based on hash reversal, where the adversary must\ndiscover the preimage of a given hash value where he already\nknows some of the bits, are an especially popular technique\n[21, 14, 26], albeit without any proof of security. Our\ntechniques are similar, but our task is substantially harder\nin the context of non-interactive obfuscation.\nThe obfuscation problem is superficially similar to the\nproblem of private information retrieval [3, 8, 18] and keyword\nsearch on encrypted data [25, 4]. These techniques are\nconcerned, however, with retrieving data from an untrusted\nserver, whereas we are concerned with encrypting the data\nand then giving them away, while preserving some control\nover what users can do with them.\nA recent paper by Chawla et al. [7] also considers database\nprivacy in a non-interactive setting, but their objective is\ncomplementary to ours. Their definitions aim to capture privacy\nof data, while ours aim to make access to the database\nindistinguishable from access to a certain ideal functionality.\nDIRECTED-ACCESS DATABASES\nAs a warm-up example, we show how to construct directed-access\ndatabases in which every record is indistinguishable\nfrom a lookup function. The constructions and theorems in\nthis section are mainly intended to illustrate the ideas.\nLet X be a set of tuples\nx\n, Y a set of tuples\ny\n, and\nY\n\n= Y {}. Let D X Y be the database. We want to\nobfuscate each record of D so that the only operation that\na user can perform on it is to retrieve\ny\nif he knows\nx\n.\nWe use the standard approach in secure multi-party computation\n, and formally define this privacy policy in terms of\nan ideal functionality. The ideal functionality is an (imaginary\n) trusted third party that permits only policy-compliant\ndatabase accesses. An obfuscation algorithm is secure if any\naccess to the obfuscated database can be efficiently simulated\nwith access only to the ideal functionality. This means\nthat the user can extract no more information from the obfuscated\ndatabase than he would be able to extract had all\nof his accesses been filtered by the trusted third party.\nDefinition 1. (Directed-access privacy policy) For\ndatabase D, define the corresponding directed-access functionality\nDA\nD\nas the function that, for any input\nx\nX\nsuch that\nx ,\ny\n1\n, . . . ,\nx ,\ny\nm\nD, outputs {\ny\n1\n, . . . ,\ny\nm\n}.\nIntuitively, a directed-access database is indistinguishable\nfrom a lookup function. Given the query attributes of an\nindividual record (\nx\n), it is easy to learn the data attributes\n(\ny\n), but the database cannot be feasibly accessed in any\nother way. 
In particular, it is not feasible to discover the\nvalue of\ny\nwithout first discovering a corresponding\nx\n.\nMoreover, it is not feasible to harvest all\ny\nvalues from\nthe database without first discovering all values of\nx\n.\nThis definition does not say that, if set X is small, it is\ninfeasible to efficiently enumerate all possible values of\nx\nand stage a dictionary attack on the obfuscated database.\nIt does guarantee that even for this attack, the attacker is\nunable to evaluate any query forbidden by the privacy policy.\nIn applications where X cannot be efficiently enumerated\n(e.g., X is a set of secret keywords known only to some\nusers of the obfuscated database), nothing can be retrieved\nfrom the obfuscated database by users who don't know the\nkeywords. Observe that\nx\ncan contain multiple attributes,\n104\nand thus multiple keywords may be required for access to\n\ny\nin the obfuscated database.\nDirected-access databases are easy to construct in the random\noracle model, since lookup functionality is essentially\na point function on query attributes, and random oracles\nnaturally provide an obfuscation for point functions [22].\nThe obfuscation algorithm OB\nda\ntakes D and replaces every\nrecord\nx\ni\n,\ny\ni\nD with\nhash\n(r\ni\n1\n||\nx\ni\n), hash(r\ni\n2\n||\nx\ni\n)\ny\ni\n, r\ni\n1\n, r\ni\n2\nwhere r\ni\n1,2\nare random numbers, || is concatenation, and\nhash\nis a hash function implementing the random oracle.\nTheorem 1. (Directed-access obfuscation is \"virtual\nblack-box\") Let OB\nda\nbe as described above. For any\nprobabilistic polynomial-time adversarial algorithm A, there\nexists a probabilistic polynomial-time simulator algorithm S\nand a negligible function of the security parameter k such\nthat for any database D:\n|P(A(OB\nda\n(D)) = 1) - P(S\nDA\nD\n(1\n|D|\n) = 1)| (k)\nwhere probability P is taken over random oracles (implemented\nas hash functions), as well as the the randomness of\nA\nand S. Intuitively, this theorem holds because retrieving\n\ny\ni\nrequires finding the (partial) pre-image of hash(r\ni\n2\n,\nx\ni\n).\nThe standard definition of obfuscation in [2] also requires\nthat there exist an efficient retrieval algorithm that, given\nsome\nx\n\n, extracts the corresponding\ny\nfrom the obfuscation\nOB\nda\n(D). Clearly, our construction has this property\n. Someone who knows\nx\n\nsimply finds the record(s)\nin which the first value is equal to hash(r\ni\n1\n||\nx\n\n), computes\nhash\n(r\ni\n2\n||\nx\n\n) and uses it as the key to decrypt\ny\n.\nGROUP-EXPONENTIAL DATABASES\nFor the purposes of this section, we restrict our attention\nto queries P that are conjunctions of equality tests over\nattributes (in section 5, we show how this extends to arbitrary\nboolean circuits over equality tests). For this class of\nqueries, we show how to obfuscate the database so that evaluation\nof the query is exponential in the size of the answer\nto the query. Intuitively, this means that only precise query\npredicates, i.e., those that are satisfied by a small number\nof records, can be efficiently computed. \"Mass harvesting\"\nqueries, i.e., predicates that are satisfied by a large number\nof records, are computationally infeasible.\nRecall that our goal is to restrict how the database can\nbe accessed. For some databases, it may be possible to efficiently\nenumerate all possible combinations of query attributes\nand learn the entire database by querying it on every\ncombination. 
GROUP-EXPONENTIAL DATABASES

For the purposes of this section, we restrict our attention to queries P that are conjunctions of equality tests over attributes (in section 5, we show how this extends to arbitrary boolean circuits over equality tests). For this class of queries, we show how to obfuscate the database so that the cost of evaluating a query is exponential in the size of the answer to the query. Intuitively, this means that only precise query predicates, i.e., those that are satisfied by a small number of records, can be efficiently computed. "Mass harvesting" queries, i.e., predicates that are satisfied by a large number of records, are computationally infeasible.

Recall that our goal is to restrict how the database can be accessed. For some databases, it may be possible to efficiently enumerate all possible combinations of query attributes and learn the entire database by querying it on every combination. For databases where the values of query attributes are drawn from a large set, our construction prevents the retriever from extracting any records that he cannot describe precisely. In either case, we guarantee that the database can be accessed only through the interface permitted by the privacy policy, without any trust assumptions about the retriever's computing environment.

In our construction, each data attribute is encrypted with a key derived from a randomly generated secret. We use a different secret for each record. The secret itself is split into several (unequal) shares, one per query attribute. Each share is then encrypted itself, using a key derived from the output of a hash function on the value of the corresponding query attribute. If the retriever knows the correct value only for some query attribute a, he must guess the missing shares. The size of the revealed share in bits is inversely related to the number of other records in the database that have the same value of attribute a. This provides protection against queries on frequently occurring attribute values.

4.1 Group privacy policy

We define the privacy policy in terms of an ideal functionality, which consists of two parts. When given the index of a particular query attribute and a candidate value, it responds whether the guess is correct, i.e., whether this value indeed appears in the corresponding attribute of the original database. When given a predicate, it evaluates this predicate on every record in the database. For each record on which the predicate is true, it returns this record's data attributes with probability 2^{-q}, where q is the total number of records in the database that satisfy the predicate. The obfuscation is secure if no more information can be extracted from it than from this ideal functionality.

Definition 2. (Group privacy policy) Let X be a set and Y a set of tuples. Let D be the database ρ_1, ρ_2, . . . , ρ_N, where ρ_i = ⟨x_{i1}, x_{i2}, . . . , x_{im}, y_i⟩ ∈ X^m × Y. Let P : X^m → {0, 1} be a predicate of the form X_{j_1} = x_{j_1} ∧ X_{j_2} = x_{j_2} ∧ . . . ∧ X_{j_t} = x_{j_t}. Let D_[P] = {ρ_i ∈ D | P(x_{i1}, x_{i2}, . . . , x_{im}) = 1} be the subset of records on which P is true.

The group-exponential functionality GP_D consists of two functions:

- C_D(x, i, j) is 1 if x = x_{ij} and 0 otherwise, where 1 ≤ i ≤ N, 1 ≤ j ≤ m.

- R_D(P) = ⋃_{1 ≤ i ≤ N} { ⟨i, τ_i⟩ }, where

  τ_i = y_i  with probability 2^{-|D_[P]|}, if P(ρ_i)
  τ_i = ⊥   with probability 1 − 2^{-|D_[P]|}, if P(ρ_i)
  τ_i = ⊥   if ¬P(ρ_i)

Probability is taken over the internal coin tosses of GP_D.

Informally, function C answers whether the jth attribute of the ith record is equal to x, while function R returns all records that satisfy some predicate P, but only with probability inversely exponential in the number of such records.

It may appear that function C is unnecessary. Moreover, it leaks additional information, making our privacy policy weaker than it might have been. In section 6, we argue that it cannot simply be eliminated, because the resulting functionality would be unrealizable. Of course, there may exist policies that permit some function C' which leaks less information than C, but it is unclear what C' might be. We discuss several alternatives to our definition in section 6.

We note that data attributes are drawn from a set of tuples Y because there may be multiple data attributes that need to be obfuscated.
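For concreteness, the ideal functionality of Definition 2 can be written out directly. The sketch below is hypothetical Python written only for this illustration: records and attributes are 0-indexed, a record is a tuple of m query attributes plus a data attribute, and a predicate is encoded as a dict mapping attribute indices to required values:

```python
# Sketch (hypothetical Python) of the ideal functionality GP_D of Definition 2.
import random

class GroupPrivacyIdeal:
    def __init__(self, db):
        # db: list of records; record = (tuple of m query attributes, data attribute)
        self.db = db

    def C(self, x, i, j):
        """1 iff the j-th query attribute of record i equals x."""
        return 1 if self.db[i][0][j] == x else 0

    def R(self, pred):
        """pred: {attribute index: value}, a conjunction of equality tests.
        Each matching record is released only with probability 2^(-#matches)."""
        matches = [i for i, (attrs, _) in enumerate(self.db)
                   if all(attrs[j] == v for j, v in pred.items())]
        out = []
        for i, (attrs, data) in enumerate(self.db):
            if i in matches and random.random() < 2 ** (-len(matches)):
                out.append((i, data))
            else:
                out.append((i, None))    # the symbol "bottom" of Definition 2
        return out

db = [(("Smith", 88), "Acme Travel"), (("Brown", 500), "Cash"),
      (("Jones", 88), "Nonrevenue"), (("Smith", 1492), "Travel.com")]
ideal = GroupPrivacyIdeal(db)
print(ideal.C("Smith", 0, 0))        # 1
print(ideal.R({0: "Smith", 1: 88}))  # one match: released with probability 1/2
print(ideal.R({1: 88}))              # two matches: each released with probability 1/4
```

A query matched by two records succeeds on each of them only with probability 1/4; this inverse-exponential degradation is what makes mass harvesting infeasible for large answer sets.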
Also observe that we have no restrictions\non the values of query attributes, i.e., the same\nm\n-tuple of query attributes may appear in multiple records,\nwith different or identical data attributes.\n4.2\nObfuscating the database\nWe now present the algorithm OB\ngp\n, which, given any\ndatabase D, produces its obfuscation. For notational convenience\n, we use a set of random hash functions H\n\n: {0, 1}\n\n\n{0, 1}\nk\n. Given any hash function H, these can be implemented\nsimply as H(||x). To convert the k-bit hash function\noutput into a key as long as the plaintext to which it is\n105\napplied, we use a set of pseudo-random number generators\nprg\n,\n: {0, 1}\nk\n{0, 1}\n\n(this implements random oracles\nwith unbounded output length).\nLet N be the number of records in the database. For\neach row\ni\n, 1 i N , generate a random N -bit secret\nr\ni\n= r\ni\n1\n||r\ni\n2\n|| . . . ||r\niN\n, where r\nij\n\nR\n{0, 1}. This secret will\nbe used to protect the data attribute\ny\ni\nof this record. Note\nthat there is 1 bit in r\ni\nfor each record of the database.\nNext, split r\ni\ninto m shares corresponding to query attributes\n. If the retriever can supply the correct value of\nthe jth attribute, he will learn the jth share (1 j m).\nDenote the share corresponding to the jth attribute as s\nij\n.\nShares are also N bits long, i.e., s\nij\n= s\nij\n1\n|| . . . ||s\nijN\n.\nEach of the N bits of s\nij\nhas a corresponding bit in r\ni\n,\nwhich in its turn corresponds to one of the N records in the\ndatabase. For each p s.t. 1 p N , set s\nijp\n= r\nip\nif x\nij\n=\nx\npj\n, and set s\nijp\n= 0 otherwise. In other words, the jth\nshare s\nij\nconsists of all bits of r\ni\nexcept those corresponding\nto the records where the value of the jth attribute is the\nsame. An example can be found in section 4.4.\nThe result of this construction is that shares corresponding\nto commonly occurring attribute values will be missing\nmany bits of r\ni\n, while a share corresponding to an attribute\nthat uniquely identifies one record will contain all bits of\nr\ni\nexcept one. Intuitively, this guarantees group privacy. If\nthe retriever can supply query attribute values that uniquely\nidentify a single record or a small subset of records, he will\nlearn the shares that reveal all bits of the secret r\ni\nexcept\nfor a few, which he can easily guess. If the retriever cannot\ndescribe precisely what he is looking for and supplies\nattribute values that are common in the database, many of\nthe bits of r\ni\nwill be missing in the shares that he learns,\nand guessing all of them will be computationally infeasible.\nShares corresponding to different query attributes may\noverlap. For example, suppose that we are obfuscating a\ndatabase in which two records have the same value of attribute\nX\n1\nif and only if they have the same value of attribute\nX\n2\n. In this case, for any record in the database, the\nshare revealed if the retriever supplies the correct value of\nX\n1\nwill be exactly the same as the share revealed if the retriever\nsupplies the value of X\n2\n. The retriver gains nothing\nby supplying X\n2\nin conjunction with X\n1\nbecause this does\nnot help him narrow the set of records that match his query.\nTo construct the obfuscated database, we encrypt each\nshare with a pseudo-random key derived from the value of\nthe corresponding query attribute, and encrypt the data attribute\nwith a key derived from the secret r\ni\n. More precisely,\nwe replace each record\ni\n= x\ni\n1\n, . . . 
, x_{im}, y_i⟩ of the original database with the obfuscated record

⟨ v_{i1}, w_{i1}, v_{i2}, w_{i2}, . . . , v_{im}, w_{im}, u_i, z_i ⟩

where

- v_{ij} = H_{1,i,j}(x_{ij}). This enables the retriever to verify that he supplied the correct value for the jth query attribute.

- w_{ij} = prg_{1,i,j}(H_{2,i,j}(x_{ij})) ⊕ s_{ij}. This is the jth share of the secret r_i, encrypted with a key derived from the value of the jth query attribute.

- u_i = H_{3,i}(r_i). This enables the retriever to verify that he computed the correct secret r_i.

- z_i = prg_{2,i}(H_{4,i}(r_i)) ⊕ y_i. This is the data attribute y_i, encrypted with a key derived from the secret r_i.

Clearly, algorithm OB_gp runs in time polynomial in N (the size of the database). The size of the resulting obfuscation is N^2·m. Even though this is within a polynomial factor of N (and thus satisfies the definition of [2]), the quadratic blowup means that our technique is impractical for large databases.

We claim that OB_gp produces a secure obfuscation of D, i.e., it is not feasible to extract any more information from OB_gp(D) than permitted by the privacy policy GP_D.

Theorem 2. (Group-exponential obfuscation is "virtual black-box") For any probabilistic polynomial-time (adversarial) algorithm A, there exists a probabilistic polynomial-time simulator algorithm S and a negligible function ε of the security parameter k such that for any database D:

| P(A(OB_gp(D)) = 1) − P(S^{GP_D}(1^{|D|}) = 1) | ≤ ε(k)

Remark. An improper implementation of the random oracles in the above construction could violate privacy under composition of obfuscation, i.e., when more than one database is obfuscated and published. For instance, if the hash of some attribute is the same in two databases, then the adversary learns that the attributes are equal without having to guess their value. To prevent this, the same hash function may not be used more than once. One way to achieve this is to pick H_i(·) = H(r_i || ·), where r_i ∈_R {0,1}^k, and publish r_i along with the obfuscation. This is an example of the pitfalls inherent in the random oracle model.

4.3 Accessing the obfuscated database

We now explain how the retriever can efficiently evaluate queries on the obfuscated database. Recall that the privacy policy restricts the retriever to queries consisting of conjunctions of equality tests on query attributes, i.e., every query predicate P has the form X_{j_1} = x_{j_1} ∧ . . . ∧ X_{j_t} = x_{j_t}, where j_1, . . . , j_t are some indices between 1 and m.

The retriever processes the obfuscated database record by record. The ith record of the obfuscated database (1 ≤ i ≤ N) has the form ⟨v_{i1}, w_{i1}, v_{i2}, w_{i2}, . . . , v_{im}, w_{im}, u_i, z_i⟩. The retriever's goal is to compute the N-bit secret r_i so that he can decrypt the ciphertext z_i and recover the value of y_i.

First, the retriever recovers as many shares as he can from the ith record. Recall from the construction of section 4.2 that each w_{ij} is a ciphertext of some share, but the only way to decrypt it is to supply the corresponding query attribute value x_{ij}. Let ℓ range over the indices of attributes supplied by the retriever as part of the query, i.e., ℓ ∈ {j_1, . . .
, j\nt\n}.\nFor each , if H\n1,i,\n(x\n\n) = v\ni\n, then the retriever extracts\nthe corresponding share s\ni\n= prg\n1,i,\n(H\n2,i,\n(x\n\n)) w\ni\n. If\nH\n1,i,\n(x\n\n) = v\ni\n, this means that the retriever supplied the\nwrong attribute value, and he learns nothing about the corresponding\nshare. Let S be the set of recovered shares.\nEach recovered share s\n\nS reveals only some bits of r\ni\n,\nand, as mentioned before, bits revealed by different shares\nmay overlap. For each p s.t. 1 p N , the retriever sets the\ncorresponding bit r\nip\nof the candidate secret r\ni\nas follows:\nr\nip\n=\ns\np\nif s\n\nS s.t. v\np\n= H\n1,1,\n(x\n\n)\nrandom\notherwise\nInformally, if at least one of recovered shares s\n\ncontains\nthe pth bit of r\ni\n(this can be verified by checking that the\nvalue of th attribute is not the same in the pth record of\nthe database -- see construction in section 4.2), then this\n106\nbit is indeed to the pth bit of the secret r\ni\n. Otherwise, the\nretriever must guess the pth bit randomly.\nOnce a candidate r\ni\nis constructed, the retriever checks\nwhether H\n3,i\n(r\ni\n) = u\ni\n. If not, the missing bits must have\nbeen guessed incorrectly, and the retriever has to try another\nchoice for these bits. If H\n3,i\n(r\ni\n) = u\ni\n, then the retriever\ndecrypts the data attribute\ny\ni\n= prg\n2,i\n(H\n4,i\n(r\ni\n)) z\ni\n.\nThe obfuscation algorithm of section 4.2 guarantees that\nthe number of missing bits is exactly equal to the number of\nrecords satisfied by the query P. This provides the desired\ngroup privacy property. If the retriever supplies a query\nwhich is satisfied by a single record, then he will only have\nto guess one bit to decrypt the data attributes. If a query is\nsatisfied by two records, then two bits must be guessed, and\nso on. For queries satisfied by a large number of records,\nthe number of bits to guess will be infeasible large.\n4.4\nExample\nConsider a toy airline passenger database with 4 records,\nwhere the query attributes are \"Last name\" and \"Flight,\"\nand the data attribute (in bold) is \"Purchase details.\"\nLast name\nFlight\nPurchase details\nSmith\n88\nAcme Travel, Visa 4390XXXX\nBrown\n500\nAirline counter, cash\nJones\n88\nNonrevenue\nSmith\n1492\nTravel.com, AmEx 3735XXXX\nBecause N = 4, we need to create a 4-bit secret to protect\neach data attribute. (4-bit secrets can be easily guessed,\nof course. We assume that in real examples N would be\nsufficiently large, and use 4 records in this example only to\nsimplify the explanations.) Let =\n1\n\n2\n\n3\n\n4\nbe the secret\nfor the first data attribute, and , , the secrets for the\nother data attributes, respectively.\nFor simplicity, we use a special symbol \"?\" to indicate\nthe missing bits that the retriever must guess. In the actual\nconstruction, each of these bits is equal to 0, but the retriever\nknows that he must guess the ith bit of the jth share if the\nvalue of the jth attribute in the current record is equal to\nthe value of the jth attribute in the ith record.\nConsider the first record. Each of the two query attributes,\n\"Last name\" and \"Flight,\" encrypts a 4-bit share. The share\nencrypted with the value of the \"Last name\" attribute (i.e.,\n\"Smith\") is missing the 1st and 4th bits because the 1st and\n4th records in the database have the same value of this attribute\n. (Obviously, all shares associated the ith record have\nthe ith bit missing). 
The share encrypted with the value of\nthe \"Flight\" attribute is missing the 1st and 3rd bits.\nH\n111\n(\"Smith\"), prg\n1,1,1\n(H\n211\n(\"Smith\")) (?\n2\n\n3\n?),\nH\n112\n(\"88\"), prg\n1,1,2\n(H\n212\n(\"88\")) (?\n2\n?\n4\n),\nH\n31\n(\n1\n\n2\n\n3\n\n4\n), prg\n2,1\n(H\n41\n(\n1\n\n2\n\n3\n\n4\n)) (\"Acme. . . \")\nH\n121\n(\"Brown\"), prg\n1,2,1\n(H\n221\n(\"Brown\")) (\n1\n?\n3\n\n4\n),\nH\n122\n(\"500\"), prg\n1,2,2\n(H\n222\n(\"500\")) (\n1\n?\n3\n\n4\n),\nH\n32\n(\n1\n\n2\n\n3\n\n4\n), prg\n2,2\n(H\n42\n(\n1\n\n2\n\n3\n\n4\n)) (\"Airline. . . \")\nH\n131\n(\"Jones\"), prg\n1,3,1\n(H\n231\n(\"Jones\")) (\n1\n\n2\n?\n4\n),\nH\n132\n(\"88\"), prg\n1,3,2\n(H\n232\n(\"88\")) (?\n2\n?\n4\n),\nH\n33\n(\n1\n\n2\n\n3\n\n4\n), prg\n2,3\n(H\n43\n(\n1\n\n2\n\n3\n\n4\n)) (\"Nonrev. . . \")\nH\n141\n(\"Smith\"), prg\n1,4,1\n(H\n241\n(\"Smith\")) (?\n2\n\n3\n?),\nH\n142\n(\"1492\"), prg\n1,4,2\n(H\n242\n(\"1492\")) (\n1\n\n2\n?\n4\n),\nH\n34\n(\n1\n\n2\n\n3\n\n4\n), prg\n2,4\n(H\n44\n(\n1\n\n2\n\n3\n\n4\n)) (\"Travel.com. . . \")\nSuppose the retriever knows only that the flight number is\n88. There are 2 records in the database that match this predicate\n. From the first obfuscated record, he recovers ?\n2\n?\n4\nand from the third obfuscated record, ?\n2\n?\n4\n. The retriever\nlearns which bits he must guess by computing H\n2i2\n(\"88\") for\n1 i 4, and checking whether the result is equal to v\ni\n2\nfrom the ith obfuscated record. In both cases, the retriever\nlearns that he must guess 2 bits (1st and 3rd) in order to\nreconstruct the secret and decrypt the data attribute.\nNow suppose the retriever knows that the flight number\nis 88 and the last name is Smith. There is only 1 record\nin the database that satisfies this predicate. From the first\npart of the first obfuscated record, the retriever can recover\n?\n2\n\n3\n?, and from the second part ?\n2\n?\n4\n(note how the\nshares overlap). Combining them, he learns ?\n2\n\n3\n\n4\n, so he\nneeds to guess only 1 bit to decrypt the data attribute.\nIt may appear that this toy example is potentially vulnerable\nto a dictionary attack, since it is conceivable that all combinations\nof last names and flight numbers can be efficiently\nenumerated with enough computing power. Note, however,\nthat this \"attack\" does not violate the definition of secure\nobfuscation because the retriever must supply the name-flight\npair before he can recover the purchase details. Therefore\n, the obfuscated database is only accessed via queries\npermitted by the privacy policy. In databases where values\nare drawn from a large set, even this \"attack\" is infeasible.\n4.5\nEfficiency\nThe algorithm of section 4.2 produces obfuscations which\nare a factor of (N ) larger than original databases. Thus,\nwhile our results establish feasibility of database obfuscation\nand group privacy, they are not directly applicable to real-world\ndatabases. This appears to be a recurring problem in\nthe field of database privacy: the cryptography community\nhas very strict definitions of security but loose notions of\nefficiency (typically polynomial time and space), whereas the\ndatabase community has very strict efficiency requirements\nbut loose security (typically heuristic or statistical). As a\nresult, many proposed schemes are either too inefficient, or\ntoo insecure for practical use.\nA possible compromise might be to start with a provably\nsecure but inefficient construction and employ heuristic techniques\nto improve its efficiency. 
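The example above can also be reproduced mechanically. The following sketch is hypothetical Python, written only for illustration: SHA-256 stands in for the random oracles H_{1..4}, a hash-counter stream stands in for the generators prg, and a small cap bounds the number of guessed bits. It implements the obfuscation of section 4.2 and the retrieval of section 4.3 for a small database:

```python
# Sketch (hypothetical Python) of the group-exponential construction (sections 4.2-4.3).
import os, hashlib, itertools

def H(*parts) -> bytes:
    return hashlib.sha256("|".join(map(str, parts)).encode()).digest()

def bitstream(seed: bytes, n: int):
    """n pseudo-random bits derived from seed (stand-in for prg)."""
    out, ctr = [], 0
    while len(out) < n:
        d = hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        out += [(b >> k) & 1 for b in d for k in range(8)]
        ctr += 1
    return out[:n]

def obfuscate(db):
    """db: list of (query_attrs, data_str).  Returns the obfuscated records."""
    N, m = len(db), len(db[0][0])
    secrets = [[os.urandom(1)[0] & 1 for _ in range(N)] for _ in range(N)]
    ob = []
    for i, (attrs, y) in enumerate(db):
        rec = {"v": [], "w": [], "u": H(3, i, *secrets[i])}
        for j in range(m):
            # Share s_ij: bit p is r_i[p] unless record p agrees on attribute j.
            share = [secrets[i][p] if db[p][0][j] != attrs[j] else 0 for p in range(N)]
            pad = bitstream(H(2, i, j, attrs[j]), N)
            rec["v"].append(H(1, i, j, attrs[j]))
            rec["w"].append([s ^ b for s, b in zip(share, pad)])
        key = bitstream(H(4, i, *secrets[i]), 8 * len(y))
        rec["z"] = [ord(c) ^ int("".join(map(str, key[8*k:8*k+8])), 2)
                    for k, c in enumerate(y)]
        ob.append(rec)
    return ob

def retrieve(ob, query, max_guess=20):
    """query: {attribute index: value}.  Returns data of records it can decrypt."""
    N, results = len(ob), {}
    for i, rec in enumerate(ob):
        if not all(H(1, i, j, x) == rec["v"][j] for j, x in query.items()):
            continue                      # record i does not satisfy the predicate
        known = [None] * N
        for j, x in query.items():
            pad = bitstream(H(2, i, j, x), N)
            share = [w ^ b for w, b in zip(rec["w"][j], pad)]
            for p in range(N):
                if H(1, p, j, x) != ob[p]["v"][j]:   # record p differs: genuine bit
                    known[p] = share[p]
        missing = [p for p in range(N) if known[p] is None]
        if len(missing) > max_guess:
            continue                      # too many matching records: infeasible
        for guess in itertools.product([0, 1], repeat=len(missing)):
            cand = list(known)
            for p, g in zip(missing, guess):
                cand[p] = g
            if H(3, i, *cand) == rec["u"]:
                key = bitstream(H(4, i, *cand), 8 * len(rec["z"]))
                results[i] = "".join(chr(c ^ int("".join(map(str, key[8*k:8*k+8])), 2))
                                     for k, c in enumerate(rec["z"]))
                break
    return results

db = [(("Smith", 88), "Acme Travel"), (("Brown", 500), "Cash"),
      (("Jones", 88), "Nonrevenue"), (("Smith", 1492), "Travel.com")]
ob = obfuscate(db)
print(retrieve(ob, {0: "Smith", 1: 88}))   # {0: 'Acme Travel'}: one missing bit
print(retrieve(ob, {1: 88}))               # records 0 and 2, two missing bits each
```

Each obfuscated record stores m encrypted shares of N bits each, which makes the N^2·m blowup of section 4.2 directly visible and motivates the size reductions discussed next.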
In this spirit, we now propose\nsome modifications to reduce the size of the obfuscated\ndatabase without providing a security proof. The presentation\nis informal due to lack of space; see the full version of\nthe paper for a more rigorous version.\nThe obfuscation algorithm is modified as follows. For each\nrecord i, we split r\ni\ninto\nN\nk\n\"blocks\" of k bits each, padding\nthe last block if necessary (k is the security parameter).\nInstead of generating the bits randomly, we create a binary\ntree of depth log\nN\nk\n. A key of length k is associated with each\nnode of the tree, with the property the two \"children\" keys\nare derived from the \"parent\" key (e.g., using a size-doubling\npseudo-random generator). This is similar to a Merkle tree\nin which keys are derived in the reverse direction. The edge\nof tree (minus the padding of the last block) is r\ni\n.\nLet us denote the j\nth\nattribute of the i\nth\nrecord by i, j .\nSay that i, j is entitled to the secret bit r\ni\n\nj\nif x\nij\n= x\ni\n\nj\n,\nand i, j is entitled to an entire block B if it is entitled\nto each secret bit r\ni\n\nj\nin that block. Intuitively, if an entire\nblock is entitled, then we encode it efficiently using the\n\"reverse Merkle\" tree described above; if it is partially entitled\n, then we fall back on our original construction. Thus,\n107\nlet N\nij\nbe the minimal set of nodes in the tree which are\nsufficient for reconstructing all entitled blocks (i.e., every\nentitled block has among its parents an element of N\nij\n),\nand only these blocks. Then the share s\nij\nconsists of (a\nsuitable encoding of) N\nij\ntogether with the remaining bits\nr\ni\n\nj\nto which i, j is entitled. These are the entitled bits\nfrom any block which also includes non-entitled bits.\nIn the worst case, this algorithm does not decrease the\nblowup in the size of the obfuscated database. This occurs\nwhen for every query attribute j of every record i, there are\n(N ) records i\n\nfor which the value of the query attribute\nis the same, i.e., x\nij\n= x\ni\n\nj\n. If we assume a less pathological\ndatabase, however, we can get a better upper bound.\nIf there is a threshold t such that for any (i, j) there are at\nmost t records i\n\nfor which x\nij\n= x\ni\n\nj\n, then the size of the\nkey material (which causes the blowup in the size of the obfuscated\ndatabase) is O(mN t(k log\nN\nk\n)) bits (recall that m\nis the number of attributes). This bound is tight only for\nsmall values of t, and the new algorithm does no worse than\nthe original one even when t = (N ). When we consider\nthat each of the mN entries of the original database is several\nbits long, the size of the obfuscated database could be\nacceptably small for practical use.\nIt must be noted that this obfuscation reveals the size of\nthe share, and thus, for a given attribute of a given record, it\nleaks information about the number of other records whose\nattribute value is the same (but not which records they are).\nThis opens two research questions:\n- Is there a provably secure database obfuscation algorithm\nthat produces obfuscations that are smaller than O(N\n2\n).\n- Can the heuristic described in this section be improved to\nobtain acceptable lower bounds in the worst case?\nARBITRARY PREDICATES OVER EQUALITIES ON ATTRIBUTES\nWe now consider queries formed by taking an arbitrary\npredicate P over m boolean variables b\n1\n, b\n2\n. . . 
b\nm\n, and substituting\n(X\nj\n= x\nj\n) for b\nj\n, where X\nj\nis a query attribute,\nand x\nj\nX {} is a candidate value for this attribute,\ndrawn from the domain X of query attribute values. The\nspecial value denotes that the value of the X\nj\nattribute is\nignored when evaluating the predicate. The class of queries\nconsidered in section 4 is a partial case of this definition,\nwhere P =\n\n1jm\nb\nj\n. The group-exponential property is\nsimilar to definition 2 except for the domain of P.\nLet C be a boolean circuit computing P. We assume that C\nis a monotone circuit, i.e., a poly-size directed acyclic graph\nwhere each node is an AND, OR or FANOUT gate. AND\nand OR gates have two inputs and one output each, while\nFANOUT gates have one input and two outputs. Circuit\nC has m inputs, one per each query attribute. Below, we\nshow how to generalize our obfuscation technique to non-monotone\ncircuits.\nObfuscation algorithm.\nThe algorithm is similar to the\none in section 4, and consists of generating a random secret\nto encrypt each data attribute, splitting this secret into\n(unequal) shares, and encrypting these shares under the keys\nderived from the values of query attributes.\nAs before, let H\n\n: {0, 1}\n\n{0, 1}\nk\nbe a set of random\nhash functions and prg\n,\n: {0, 1}\nk\n{0, 1}\n\na set of\npseudo-random generators.\nFor each record\ni\nin the database, do the following:\nGenerate a block of uniformly random bits {r\nilEt\n},\nwhere 1 l N , E ranges over all edges of the circuit\nC, and 1 t k, where k is the length of the hash\nfunctions' output. Denote\nr\niEt\n= r\ni\n1Et\n||r\ni\n2Et\n|| . . . ||r\niN Et\n-r\nilE\n= r\nilE\n1\n||r\nilE\n2\n|| . . . ||r\nilEk\nThen, for each query attribute X\nj\n:\n\nOutput v\nij\n= H\n1,i,j\n(x\nij\n)\n\nLet E\nj\nbe the input edge in the circuit C whose\ninput is the X\nj\n= x\nj\ntest. Define the bits of the\ncorresponding share s\niljt\n= r\nilE\nj\nt\nif x\nij\n= x\nlj\n, and\n0 otherwise. Encrypt the resulting share using a\nkey derived from x\nij\n, i.e., output\nw\nij\n= prg\n1,i,j\n(H\n2,i,j\n(x\nij\n)) (\ns\ni\n1j\n||-s\ni\n2j\n|| . . . ||-s\niN j\n).\nLet E\nout\nbe the output edge in the circuit C. Output\nu\ni\n= H\n3,i\n(r\niE\nout\n0\n)\nOutput z\ni\n= prg\n2,i\n(H\n4,i\n(r\niE\nout\n0\n))\ny\ni\n.\nThe previous procedure obfuscated only the output\nedge of C. Repeat the following step recursively for\neach gate G C, whose output edge (or both of whose\noutput edges, for a FANOUT gate) have been obfuscated\n. Stop when all edges have been obfuscated:\n\nIf G is an AND gate, let E\n0\nand E\n1\nbe the input\nedges and E the output edge. For each l, set\n-r\nilE\n0\n= -r\nilE\n1\n= -r\nilE\n.\n\nIf G is an OR gate, then, for each l, generate\nrandom -r\nilE\n0\n\nR\n{0, 1}\nk\nand set -r\nilE\n1\n= -r\nilE\n0\n\n-r\nilE\n.\n\nIf G is a FANOUT gate, let E\n0\nand E\n1\nbe the\noutput edges and E the input edge. 
For each l,\ngenerate random -r\nilE\n0\n, -r\nilE\n1\n\nR\n{0, 1}\nk\nand output\n\nilE\n0\n= H\n5,i,l,E\n0\n(-r\nilE\n) -r\nilE\n0\nand\n\nilE\n1\n= H\n5,i,l,E\n1\n(-r\nilE\n) -r\nilE\n1\nRetrieval algorithm.\nLet Q be the query predicate in\nwhich specific values of x\nj\nor have been plugged into all\nX\nj\n= x\nj\nexpressions in the leaves of the circuit C.\nThe retrieval algorithm consists of two functions:\nC\nob\n(OB\ngp\n(D), x, i, j), which enables the retriever to check\nwhether the jth query attribute of the ith record is equal to\nx\n, and R\nob\n(OB\ngp\n(D), Q, i), which attempts to retrieve the\nvalue of the obfuscated data attribute in the ith record.\nDefine C\nob\n(OB\ngp\n(D), x, i, j) = 1 if H\n1,i,j\n(x) = v\nij\nand 0\notherwise.\nEvaluate Q(\ni\n) using C\nob\n. If Q\nOB\ngp\n(\ni\n), then\nR\nob\n(OB\ngp\n(D), Q, i) =.\nFor each l and each circuit edge E, set -r\nilE\n=?? . . .?\n(i.e., none of the bits of the secret are initially known).\nFor each query attribute j, let E\nj\nbe the input edge of\nthe circuit associated with the equality test for this attribute\n. If Q contains this test, i.e., if Q contains X\nj\n=\n108\nx\nj\nfor some candidate value x\nj\n(rather than X\nj\n= ),\nthen set (\ns\ni\n1j\n|| . . . ||-s\niN j\n) = w\nij\nprg\n1,i,j\n(H\n2,i,j\n(x\nij\n)),\ni.e., decrypt the secret bits with the key derived from\nthe value of the jth attribute.\nFor each l, if C\nob\n(x\nij\n, l, j\n) = 0, then set -r\nilE\nj\n= s\nilj\n,\ni.e., use only those of the decrypted bits that are true\nbits of the secret -r\nilE\n.\nSo far, only the input gates of the circuit have been\nvisited. Find a gate all of whose input edges have\nbeen visited, and repeat the following step for every\ngate until the output edge E\nout\nhas been visited.\n\nIf E is the output of an AND gate with inputs E\n0\nand E\n1\n, then, for each l, if -r\nilE\n0\n=?, set -r\nilE\n=\n-r\nilE\n0\n; if -r\nilE\n1\n=?, set -r\nilE\n= -r\nilE\n1\n.\n\nE\nis the output of an OR gate with inputs E\n0\nand E\n1\n. For each l, if -r\nilE\n0\n=? and -r\nilE\n1\n=?, set\n-r\nilE\n= -r\nilE\n0\n-r\nilE\n1\n.\n\nE\nis the output of a FANOUT gate with input\nE\n0\n. For each l, if -r\nilE\n0\n=?, set r\nilE\n=\nilE\n0\n\nH\n5,i,l,E\n0\n(-r\nilE\n0\n).\nFor each l, if r\nilE\nout\n0\n=?, this means that the corresponding\nsecret bit must be guessed. Choose random\nr\nilE\nout\n0\n\nR\n{0, 1}.\nIf H\n3,i\n(r\niE\nout\n0\n) = u\ni\n, this means that the retriever successfully\nreconstructed the secret. In this case, define\nR\nob\n(OB\ngp\n(D), Q, i) = prg\n2,i\n(H\n4,i\n(r\niE\nout\n0\n)) z\ni\n. Otherwise\n, define R\nob\n(OB\ngp\n(D), Q, i) =.\nTheorem 3. The obfuscation algorithm for arbitrary\npredicates over equalities on attributes satisfies the virtual\nblack-box property.\n5.1\nObfuscating non-monotone circuits\nGiven a non-monotone circuit C, let C be the monotone\ncircuit whose leaves are literals and negated literals formed\nby \"pushing down\" all the NOT gates. Observe that C has\nat most twice as many gates as C. Also, C can be considered\na monotone circuit over the 2m predicates X\n1\n= x\n1\n, X\n2\n=\nx\n2\n, . . . , X\nm\n= x\nm\n, X\n1\n= x\n1\n, X\n2\n= x\n2\n, . . . X\nm\n= x\nm\n. Observe\nthat a predicate of the form X\nj\n= x\nj\nis meaningful only\nwhen x\nj\n= x\nij\nfor some record i. 
5.1 Obfuscating non-monotone circuits
Given a non-monotone circuit C, let C' be the monotone circuit whose leaves are literals and negated literals, formed by "pushing down" all the NOT gates. Observe that C' has at most twice as many gates as C. Also, C' can be considered a monotone circuit over the 2m predicates X_1 = x_1, X_2 = x_2, ..., X_m = x_m, X_1 ≠ x_1, X_2 ≠ x_2, ..., X_m ≠ x_m. Observe that a predicate of the form X_j ≠ x_j is meaningful only when x_j = x_{ij} for some record i. This is because if x_j ≠ x_{ij} for every record i, then X_j ≠ x_j matches all the records. Hence there exists a circuit C'' (obtained by setting the leaf in C' corresponding to the predicate X_j ≠ x_j to true) that evaluates to the same value as C for every record in the database.
Given that x_j = x_{ij} for some record i, the predicate X_j ≠ x_j is equivalent to the predicate X_j ≠ x_{ij} for some value of i. C' can thus be viewed as a monotone circuit over the m + mN attribute predicates X_1 = x_1, X_2 = x_2, ..., X_m = x_m, and X_j ≠ x_{ij} for each i and j. It follows that a database D with N records and m columns can be transformed into a database D' with N records and m + mN columns such that obfuscating D over the circuit C is equivalent to obfuscating D' over the monotone circuit C'.
ALTERNATIVE PRIVACY POLICIES
In general, a privacy policy can be any computable, possibly randomized, joint function of the database and the query. Clearly, it may be useful to consider generalizations of our privacy policies in several directions.
First, we discuss alternatives to definition 2 that may be used to model the requirement that accessing individual records should be easy, but mass harvesting of records should be hard. To motivate this discussion, let us consider a small database with, say, 10 or 20 records. For such a database, the group-exponential property is meaningless. Even if all records match the adversary's query, he can easily try all 2^10 or 2^20 possibilities for the random bits r_{ik}, because database accesses are noninteractive.
This does not in any way violate our definition of privacy. Exactly the same attack is possible against the ideal functionality; therefore, the simulation argument goes through, showing that the obfuscated database leaks no more information than the ideal functionality. It is thus natural to seek an alternative privacy definition that will make the above attack infeasible when N is small (especially when N < k, the security parameter).
Our construction can be easily modified to support a wide variety of (monotonically decreasing) functions capturing the dependence between the probability of the ideal functionality returning the protected attributes and the number of records matching the query. For example, the following threshold ideal functionality can be implemented using a threshold (n−t)-out-of-n secret sharing scheme [24].
- C_D(x, i, j) is 1 if x = x_{ij} and 0 otherwise, where 1 ≤ i ≤ N, 1 ≤ j ≤ m.
- R_D(P) = ∪_{1≤i≤N} {⟨i, ρ_i⟩}, where ρ_i = y_i if record i satisfies P and |D[P]| ≤ t; ρ_i = ⊥ if record i satisfies P and |D[P]| > t; and ρ_i = ⊥ if record i does not satisfy P.
The adversary can evaluate the query if there are at most t matching records, but learns nothing otherwise. The details of the construction are deferred to the full version.
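Since those details are deferred, the following is only an intuition-building sketch of the underlying (n−t)-out-of-n primitive, Shamir secret sharing over a prime field as in [24]. The prime, the function names, and the encoding of the secret as a field element are illustrative choices of this sketch, not part of the paper's construction.

import random

PRIME = 2**127 - 1  # illustrative field size, large enough for a short secret

def split(secret, needed, n):
    """Produce n shares such that any `needed` of them reconstruct `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(needed - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at 0, given at least `needed` distinct shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % PRIME
                den = (den * (xj - xm)) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

One natural way to obtain the behaviour of R_D above is to create one share per record with needed = N − t and let the retriever learn the shares of the non-matching records: the secret is then recoverable exactly when at most t records match the query.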
We may also consider which query language should be permitted by the privacy policy. We demonstrated how to obfuscate databases in accordance with any privacy policy that permits evaluation of some predicate consisting of equality tests over database attributes. Such queries can be considered a generalization of "partial match" searches [23], which is a common query model in the database literature. Also, our algorithms can be easily modified to support policies that forbid some attributes from having ⊥ as a legal value, i.e., policies that require the retriever to supply the correct value for one or more designated attributes before he can extract anything from the obfuscated database.
It is worth asking whether we can allow predicates over primitives looser than exact attribute equality (e.g., the proximity queries of [15] are an interesting class). We present strong evidence that this is impossible with our privacy definitions. In fact, even using ideal functionalities (IF) that are more restrictive than the one we have used does not seem to help. Recall that the IF considered in section 4 consists of two functions: C_D (it tells the retriever whether his guess of a particular query attribute value is correct) and R_D (it evaluates the query with the inverse-exponential probability). We will call this IF the permissive IF.
We define two more IFs. The strict IF is like the permissive IF except that it doesn't have the function C. The semi-permissive IF falls in between the two. It, too, doesn't have the function C, but its retrieval function R leaks slightly more information. Instead of returning the same symbol ⊥ in both cases, the function R of the semi-permissive IF gives different responses depending on whether it failed to evaluate the query because it matches no records (no-matches) or because it matches too many records and the probability came out to the retriever's disadvantage (too-many-matches).
Define R_D(P) as ∪_{1≤i≤N} R'(P, i), where R' is as follows:
- If record i does not satisfy P, then R'(P, i) = ⊥.
- If record i satisfies P, then R'(P, i) = y_i with probability 2^{−|D[P]|} and ⊥ with probability 1 − 2^{−|D[P]|}.
Observe that if the privacy policy allows single-attribute equality tests, i.e., if all queries of the form X_j = x_j are permitted, then the semi-permissive IF can simulate the permissive IF. Of course, the permissive IF can always simulate the semi-permissive IF.
We say that a privacy policy leaks query attributes if all x_{ij} can be computed (with overwhelming probability) simply by accessing the corresponding ideal functionality I_D, i.e., there exists a probabilistic poly-time oracle algorithm A such that, for any database D, P(A^{I_D, O}(i, j) = x_{ij}) ≥ 1 − ν(k). Note that the order of quantifiers has been changed: the algorithm A is now independent of the database. This captures the idea that A has no knowledge of the specific query attributes, yet successfully retrieves them with access only to the ideal functionality. Such a policy, even if securely realized, provides no meaningful privacy.
We have the following results (proofs omitted):
- If X = {1, 2, ..., M} and queries consisting of conjunctions over inequalities are allowed, then the semi-permissive IF leaks query attributes. Each of the x_{ij} can be separately computed by binary search using queries of the form X_j ≥ x_low ∧ X_j ≤ x_high.
- If arbitrary PPT-computable queries are allowed, then even the strict IF leaks query attributes.
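The first of these results can be illustrated with a deterministic caricature of the binary search. The oracle in_range below is a hypothetical stand-in for querying the semi-permissive IF with X_j ≥ x_low ∧ X_j ≤ x_high and observing whether the response differs from no-matches; the real functionality is randomized, but the no-matches distinction it leaks is all the search needs.

def recover_attribute(in_range, M):
    """Recover a hidden value in X = {1, ..., M} with O(log M) range queries."""
    lo, hi = 1, M
    while lo < hi:
        mid = (lo + hi) // 2
        if in_range(lo, mid):      # the hidden value lies in the lower half
            hi = mid
        else:
            lo = mid + 1
    return lo

# Illustrative use: the hidden query attribute is 42 in a domain of size 1000.
hidden = 42
assert recover_attribute(lambda lo, hi: lo <= hidden <= hi, 1000) == hidden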
Note that a policy does not have to leak all query attributes to be intuitively useless or vacuous. For instance, a policy which allows the retriever to evaluate conjunctions of inequalities on the first m − 1 query attributes, and allows no queries involving the last attribute, is vacuous for the semi-permissive IF. Therefore, we give a stronger criterion for vacuousness, which formalizes the notion that "all information contained in the IF can be extracted without knowing anything about the query attributes". Note that the definition below applies to arbitrary privacy policies, for it makes no reference to query or data attributes.
Definition 3. (Vacuous privacy policy) We say that an ideal functionality I_D is vacuous if there exists an efficient extractor Ext such that for any PPT algorithm A there exists a simulator S so that for any database D:
|P(A^{I_D}(1^k) = 1) − P(S(Ext^{I_D}(1^k)) = 1)| = ν(k)
In other words, we first extract all useful information from I_D without any specific knowledge of the database, throw away I_D, and use the extracted information to simulate I_D against an arbitrary adversary. As a special case, if Ext can recover the entire database D from I_D, then the functionality can be simulated, because the privacy policy is required to be computable and the simulator is not required to be computationally bounded (if we consider only privacy policies which are computable in probabilistic polynomial time, then we can define vacuousness with a PPT simulator as well). At the other extreme, the ideal functionality that permits no queries is also simulatable: Ext simply outputs nothing. The reader may verify that the IF in the all-but-one-query-attribute example above is also vacuous.
Theorem 4. The strict ideal functionality that permits arbitrary queries is vacuous.
Finally, we consider what happens if we use the strict IF but don't increase the power of the query language. We conjecture the existence of very simple languages, including a language that contains only conjunctions of equality tests on attributes, which are unrealizable even for single-record databases, in the sense that there is no efficient obfuscation algorithm that would make the database indistinguishable from the corresponding IF. This can be seen as justification for the choice of the permissive, rather than strict, IF for our constructions.
Conjecture 1. The strict IF for the following query language cannot be realized even for single-record databases: ⋁_{i=1}^{2k} (X_{2i−1} = x_{2i−1} ∧ X_{2i} = x_{2i}), where ∀i, x_i ∈ {0, 1}.
Note that the only constraint on the database is that its size should be polynomial in the security parameter k, and therefore we are allowed to have 2k query attributes.
We expect that a proof of this conjecture will also yield a proof of the following conjecture:
Conjecture 2. The strict IF for a query language consisting of conjunctions of equality tests on k query attributes is unrealizable even for single-record databases.
These conjectures are interesting from another perspective. They can be interpreted as statements about the impossibility of circuit obfuscation in the random oracle model. They also motivate the question: given a query language, is it possible to achieve the group-exponential property with the strict IF provided there exists an obfuscation algorithm for this query language on a single record? In other words, given a class of predicates over single records and an efficient obfuscator for the corresponding circuit class, does there exist an obfuscator for the entire database that realizes the group-exponential ideal functionality for that query language?
We discuss this question in the full version of the\npaper.\nCONCLUSIONS\nWe introduced a new concept of database privacy, which\nis based on permitted queries rather than secrecy of individual\nrecords, and realized it using provably secure obfuscation\ntechniques. This is but a first step in investigating\nthe connection between obfuscation and database privacy.\nWhile our constructions are secure in the \"virtual black-box\"\nmodel for obfuscation, the blowup in the size of the\nobfuscated database may render our techniques impractical\nfor large databases. Our query language permits any predicate\nover equalities on database attributes, but other query\nlanguages may also be realizable. We define group privacy in\nterms of a particular ideal functionality, but there may be\n110\nother functionalities that better capture intuitive security\nagainst \"mass harvesting\" queries. In general, investigating\nwhich ideal functionalities for database privacy can be\nsecurely realized is an important topic of future research.\nFinally, all proofs in this paper are carried out in the random\noracle model. Whether privacy-via-obfuscation can be\nachieved in the plain model is another research challenge.\nREFERENCES\n[1] D. Aucsmith. Tamper resistant software: an\nimplementation. In Proc. 1st International Workshop\non Information Hiding, volume 1174 of LNCS, pages\n317333. Springer, 1996.\n[2] B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich,\nA. Sahai, S. Vadhan, and K. Yang. On the\n(im)possibility of obfuscating programs. In Proc.\nAdvances in Cryptology - CRYPTO 2001, volume 2139\nof LNCS, pages 118. Springer, 2001.\n[3] D. Beaver, J. Feigenbaum, J. Kilian, and P. Rogaway.\nLocally random reductions: improvements and\napplications. J. Cryptology, 10:1736, 1997.\n[4] D. Boneh, G. Di Crescenzo, R. Ostrovsky, and\nG. Persiano. Public key encryption with keyword\nsearch. In Proc. Advances in Cryptology EUROCRYPT\n2004, volume 3027 of LNCS, pages\n506522. Springer, 2004.\n[5] R. Canetti. Towards realizing random oracles: hash\nfunctions that hide all partial information. In Proc.\nAdvances in Cryptology - CRYPTO 1997, volume 1294\nof LNCS, pages 455469. Springer, 1997.\n[6] R. Canetti, D. Micciancio, and O. Reingold. Perfectly\none-way probabilistic hash functions. In Proc. 30th\nAnnual ACM Symposium on Theory of Computing\n(STOC), pages 131140. ACM, 1998.\n[7] S. Chawla, C. Dwork, F. McSherry, A. Smith, and\nH. Wee. Towards privacy in public databases. In Proc.\n2nd Theory of Cryptography Conference (TCC),\nvolume 3378 of LNCS, pages 363385. Springer, 2005.\n[8] B. Chor, E. Kushilevitz, O. Goldreich, and M. Sudan.\nPrivate information retrieval. J. ACM, 45(6):965981,\n1998.\n[9] S. Chow, P. Eisen, H. Johnson, and P. van Oorschot.\nWhite-box cryptography and an AES implementation.\nIn 9th Annual International Workshop on Selected\nAreas in Cryptography (SAC), volume 2595 of LNCS,\npages 250270. Springer, 2003.\n[10] S. Chow, P. Eisen, H. Johnson, and P. van Oorschot. A\nwhite-box DES implementation for DRM applications.\nIn ACM Digital Rights Management Workshop,\nvolume 2696 of LNCS, pages 115. Springer, 2003.\n[11] C. Collberg and C. Thomborson. Watermarking,\ntamper-proofing, and obfuscation - tools for software\nprotection. IEEE Transactions on Software\nEngineering, 28(8):735746, 2002.\n[12] C. Collberg, C. Thomborson, and D. Low. A\ntaxonomy of obfuscating transformations. Technical\nReport 148, Department of Computer Sciences, The\nUniversity of Auckland, July 1997.\n[13] C. Collberg, C. 
Thomborson, and D. Low.\nManufacturing cheap, resilient, and stealthy opaque\nconstructs. In Proc. 25th ACM SIGPLAN-SIGACT\nSymposium on Principles of Programming Languages\n(POPL), pages 184196. ACM, 1998.\n[14] D. Dean and A. Stubblefield. Using client puzzles to\nprotect TLS. In Proc. 10th USENIX Security\nSymposium, pages 18. USENIX, 2001.\n[15] Y. Dodis and A. Smith. Correcting errors without\nleaking partial information. In Proc. 37th Annual\nACM Symposium on Theory of Computing (STOC),\npages 654663. ACM, 2005.\n[16] C. Dwork and M. Naor. Pricing via processing or\ncombatting junk mail. In Proc. Advances in\nCryptology - CRYPTO 1992, volume 740 of LNCS,\npages 139147. Springer, 1993.\n[17] M. Franklin and D. Malkhi. Auditable metering with\nlightweight security. J. Computer Security,\n6(4):237255, 1998.\n[18] Y. Gertner, Y. Ishai, E. Kushilevitz, and T. Malkin.\nProtecting data privacy in private information\nretrieval schemes. In Proc. 30th Annual ACM\nSymposium on Theory of Computing (STOC), pages\n151160. ACM, 1998.\n[19] O. Goldreich and R. Ostrovsky. Software protection\nand simulation on oblivious rams. J. ACM,\n43(3):431473, 1996.\n[20] Y. Ishai, A. Sahai, and D. Wagner. Private circuits:\nsecuring hardware against probing attacks. In Proc.\nAdvances in Cryptology - CRYPTO 2003, volume 2729\nof LNCS, pages 463481. Springer, 2003.\n[21] A. Juels and J. Brainard. Client puzzles: a\ncryptographic defense against connection depletion. In\nProc. Network and Distributed System Security\nSymposium (NDSS), pages 151165. The Internet\nSociety, 1999.\n[22] B. Lynn, M. Prabhakaran, and A. Sahai. Positive\nresults and techniques for obfuscation. In Proc.\nAdvances in Cryptology - EUROCRYPT 2004, volume\n3027 of LNCS, pages 2039. Springer, 2004.\n[23] R. Rivest. Partial-match retrieval algorithms. SIAM\nJournal of Computing, 5(1):1950, 1976.\n[24] A. Shamir. How to share a secret. Communications of\nthe ACM, 22(11):612613, 1979.\n[25] D. Song, D. Wagner, and A. Perrig. Practical\ntechniques for searches on encrypted data. In Proc.\nIEEE Symposium on Security and Privacy, pages\n4455. IEEE Computer Society, 2000.\n[26] X. Wang and M. Reiter. Defending against\ndenial-of-service attacks with puzzle auctions. In Proc.\nIEEE Symposium on Security and Privacy, pages\n7892. IEEE Computer Society, 2003.\n[27] H. Wee. On obfuscating point functions. In Proc. 37th\nAnnual ACM Symposium on Theory of Computing\n(STOC), pages 523532. ACM, 2005.\n111", "keywords": "Database privacy;Obfuscation"} {"name": "143", "title": "On the Complexity of Computing Peer Agreements for Consistent Query Answering in Peer-to-Peer Data Integration Systems", "abstract": "Peer-to-Peer (P2P ) data integration systems have recently attracted significant attention for their ability to manage and share data dispersed over different peer sources. While integrating data for answering user queries, it often happens that inconsistencies arise, because some integrity constraints specified on peers' global schemas may be violated. In these cases, we may give semantics to the inconsistent system by suitably \"repairing\" the retrieved data, as typically done in the context of traditional data integration systems. However , some specific features of P2P systems, such as peer autonomy and peer preferences (e.g., different source trusting ), should be properly addressed to make the whole approach effective. In this paper, we face these issues that were only marginally considered in the literature. 
We first present a formal framework for reasoning about autonomous peers that exploit individual preference criteria in repairing the data. The idea is that queries should be answered over the best possible database repairs with respect to the preferences of all peers, i.e., the states on which they are able to find an agreement. Then, we investigate the computational complexity of dealing with peer agreements and of answering queries in P2P data integration systems. It turns out that considering peer preferences makes these problems only mildly harder than in traditional data integration systems.", "fulltext": "INTRODUCTION\nPeer-to-Peer (P2P ) data integration systems are networks\nof autonomous peers that have recently emerged as an effective\narchitecture for decentralized data sharing, integration,\nand querying. Indeed, P2P systems offer transparent access\nto the data stored at (the sources of) each peer p, by\nmeans of the global schema equipped with p for modeling\nits domain of interest; moreover, pair of peers with the same\ndomain of interest one peer and the system is in charge of\naccessing each peer containing relevant data separately, and\ncombining local results into a global answer by suitably exploiting\nthe mapping rules.\nP2P systems can be considered the natural evolution of\ntraditional data integration systems, which have received\nconsiderable attention in the last few years, and which have\nalready become a key technology for managing enormous\namounts of information dispersed over many data sources.\nIn fact, P2P systems have attracted significant attention\nrecently, both in the development of efficient distributed algorithms\nfor the retrieval of relevant information and for\nanswering user queries (see, e.g., [9, 21, 12, 13]), and in the\ninvestigation of its theoretical underpinnings (see, e.g., [16,\n3, 20, 11, 9, 5]).\nIn this paper, we continue along this latter line of research,\nby investigating some important theoretical issues. In particular\n, we consider an expressive framework where integrity\nconstraints are specified on peer schemas in order to enhance\ntheir expressiveness, so that each peer can be in fact considered\na completely specified data integration system. In\nthis scenario, it may happen that data at different peers are\nmutually inconsistent, i.e., some integrity constraints are violated\nafter the integration is carried out; then, a \"repair\"\nfor the P2P system has to be computed [5, 17]. 
Roughly\nspeaking, repairs may be viewed as insertions or deletions\nof tuples at the peers that are able to lead the system to a\nconsistent state.\nOur aim is to deal with data integration in P2P systems,\nby extending some of the ideas described in previous studies\non merging mutually inconsistent databases into a single\nconsistent theory [2, 14] and on repairing individual data\nintegration systems [8, 6, 4, 10].\n36\nIndeed, in order to be effective in this framework, the repair\napproach should consider the peculiarities of P2P systems\nand, specifically, the following two issues:\nIn practical applications, peers often have an a-priori\nknowledge about the reliability of the sources that, in\nturn, determines their criteria for computing repairs.\nThat is, peers will rarely delete tuples coming from\nhighly reliable sources, and will try to solve conflicts\nby updating the less reliable sources only.\nPeers are autonomous and not benevolent: they rarely\ndisregard their individual preferences in order to find\nan agreement with other peers on the way the repair\nshould be carried out. Therefore, the presence of possibly\ncontrasting interests of selfish peers should be\naccounted for, when answering user queries.\nDespite the wide interest in this field, none of the approaches\nin the literature considered the issue of modeling the autonomy\nof the peers in providing a semantics for the system,\nand therefore they implicitly assume that all the peers act\ncooperatively in the network. Moreover, the possibility of\nmodeling peer preferences has been rarely considered in previous\nstudies, even though it has been widely recognized to\nbe a central issue for the design of quality-aware integration\nsystems (cf. [17]). Indeed, the first and almost isolated\nattempt is in [5], where the authors considered trust relationships\namong peers in a simplified setting in which the\nsystem does not transitively propagate information through\npeers. Actually, an extension to the case of transitive propagations\nis also argued, but peers autonomy is not considered,\nand query answering is undecidable in presence of loops.\nIn this paper, we face the above issues by introducing\na formal framework for reasoning about autonomous peers\nthat exploit individual preference criteria in repairing data.\nIn summary, our contributions are the following:\nWe preliminary introduce a framework for P2P data\nintegration systems, where each peer is equipped with\nintegrity constraints on its global schema. The model\nis simple yet very expressive, since each peer is assumed\nto be in turn a data integration system. The\nsemantics of a P2P system is defined in terms of suitable\ndatabases for the peers, called models. We show\nthat checking whether a system has a model can be\ndone efficiently.\nWe propose an approach to the repair of inconsistent\nP2P systems that focuses on data stored at the\nsources, rather than on the global schema (following\nthe approach described by [15] for the standard data\nintegration setting).\nThis is particularly suited for\ndealing with peers, as their preferences are typically\nexpressed over the sources. Indeed, if repairs were considered\non the global schema, suitable reformulations\nand translation of the preferences would be required.\nWe investigate the effect of considering individual preferences\non the semantics of P2P database integration\nsystems. 
The idea is that queries should be answered over the best possible database repairs with respect to the preferences of all peers, i.e., over the states on which they are able to find an agreement. Unfortunately, but not surprisingly, it turns out that considering autonomous peers gives rise to scenarios where they are not able to find any agreement on the way the integration should be done.
The above result motivates the subsequent study of the complexity of dealing with peer agreements and of answering queries in such P2P data integration systems. We show that checking whether a given database is an agreed repair is a difficult task, since it is complete for the class co-NP. Moreover, the complexity of computing an agreement turns out to be complete for the functional class FP^NP. Finally, we study the complexity of computing consistent answers and show that this problem is Π^p_2-complete. It follows that our approach for handling preferences in P2P systems is just mildly harder than the basic data integration framework, where in fact query answering lies at the first level of the polynomial hierarchy [8], as well.
The rest of the paper is organized as follows. In Section 2, we briefly present some preliminaries on relational databases. In Section 3, we introduce a simple formalization of P2P data integration systems, and in the subsequent section we enrich it to take care of peers' preferences. The computational complexity of the concept of agreement in query answering is studied in Section 5. Finally, in Section 6 we draw our conclusions.
PRELIMINARIES ON RELATIONAL DATABASES
We recall the basic notions of the relational model with integrity constraints. For further background on relational database theory, we refer the reader to [1].
We assume a (possibly infinite) fixed database domain Γ whose elements can be referenced by constants c_1, ..., c_n under the unique name assumption, i.e., different constants denote different objects. These elements are assumed to be shared by all the peers and are, in fact, the constants that can appear in the P2P system.
A relational schema (or simply schema) RS is a pair ⟨Ψ, Σ⟩, where Ψ is a set of relation symbols, each with an associated arity that indicates the number of its attributes, and Σ is a set of integrity constraints, i.e., (first-order) assertions that have to be satisfied by each database instance. We deal with quantified constraints, i.e., first-order formulas of the form:
∀x̄. ⋀_{i=1}^{l} A_i ⊃ ∃ȳ. ⋁_{j=1}^{m} B_j ∨ ⋁_{k=1}^{n} φ_k,   (1)
where l + m > 0, n ≥ 0, A_1, ..., A_l and B_1, ..., B_m are positive literals, φ_1, ..., φ_n are built-in literals, and x̄ and ȳ are lists of distinct variables.
Actually, to keep things simple, we shall assume throughout the paper that ȳ is empty, thereby dealing with universally quantified constraints. We recall here that this kind of constraint covers most of the classical constraints issued on a relational schema, such as keys, functional dependencies, and exclusion dependencies. A brief discussion on how to generalize the results in the paper to other classes of constraints is reported in Section 6.
A database instance (or simply database) DB for a schema RS = ⟨Ψ, Σ⟩ is a set of facts of the form r(t) where r is a relation of arity n in Ψ and t is an n-tuple of constants from Γ. We denote as r^DB the set {t | r(t) ∈ DB}. A database DB for a schema RS is said to be consistent with RS if it satisfies (in the first-order logic sense) all constraints expressed on RS.
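As a concrete, entirely illustrative reading of the consistency notion just defined, the sketch below represents an instance as a dictionary from relation names to sets of tuples and checks universally quantified constraints of form (1) with ȳ empty: a list of body atoms together with a predicate standing for the disjunction of built-in literals. The encoding and the sample data are assumptions of this sketch, not the paper's.

from itertools import product

def satisfies(instance, body, builtins_hold):
    """True iff every choice of tuples for the body atoms makes the head true."""
    for tuples in product(*(instance.get(rel, set()) for rel in body)):
        if not builtins_hold(*tuples):
            return False
    return True

# A key dependency r(X, Y) and r(X, Y') implies Y = Y', written as an instance of (1):
key_ok = lambda t1, t2: t1[0] != t2[0] or t1[1] == t2[1]

db = {"r": {("a", 1), ("a", 2), ("b", 3)}}
print(satisfies(db, ["r", "r"], key_ok))   # False: "a" is mapped to both 1 and 2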
Figure 1: The P2P system P_r in Example 1.
A relational query (or simply query) over RS is a formula that is intended to extract tuples of elements from the underlying domain of constants Γ. We assume that queries over RS = ⟨Ψ, Σ⟩ are Unions of Conjunctive Queries (UCQs), i.e., formulas of the form {x̄ | ∃ȳ_1.conj_1(x̄, ȳ_1) ∨ ... ∨ ∃ȳ_m.conj_m(x̄, ȳ_m)} where, for each i ∈ {1, ..., m}, conj_i(x̄, ȳ_i) is a conjunction of atoms whose predicate symbols are in Ψ and involve x̄ = X_1, ..., X_n and ȳ_i = Y_{i,1}, ..., Y_{i,n_i}, where n is the arity of the query, and each X_k and each Y_{i,ℓ} is either a variable or a constant in Γ.
Given a database DB for RS, the answer to a UCQ Q over DB, denoted Q^DB, is the set of n-tuples of constants ⟨c_1, ..., c_n⟩ such that, when substituting each X_i with c_i, the formula ∃ȳ_1.conj_1(x̄, ȳ_1) ∨ ... ∨ ∃ȳ_m.conj_m(x̄, ȳ_m) evaluates to true on DB.
DATA INTEGRATION IN P2P SYSTEMS
In this section, we introduce a simple framework for dealing with P2P systems. The model is not meant to be a novel comprehensive formalization, since our aim here is to face the problem of finding agreement among peers rather than to investigate new syntactic modeling features. Therefore, our approach takes basically the same perspective as [9, 11, 5, 17].
3.1 Basic Framework
A P2P system P is a tuple ⟨P, I, N, map⟩, where P is a non-empty set of distinct peers and I, N and map are functions whose meaning will be explained below. First, each peer p ∈ P is equipped with its own data integration system I(p), which is formalized as a triple ⟨G_p, S_p, M_p⟩.
Basically, S_p is meant to denote the set of sources that p is allowed to access, and is in fact modeled as a relational schema of the form S_p = ⟨Ψ_p, ∅⟩, i.e., there are no integrity constraints on the sources.
The structure of\nthe global schema is, instead, represented by means of the\nschema\nG\np\n=\np\n,\np\n, whereas the relationships between\nthe sources and the global schema are specified by\nM\np\n,\nwhich is a set of local mapping assertions between\nG\np\nand\nS\np\n.\nWe assume that each assertion is of the form Q\nS\np\nQ\nG\np\n,\nwhere Q\nS\np\nand Q\nG\np\nare two conjunctive queries of the same\narity over the source schema\nS\np\nand the peer schema\nG\np\n,\nrespectively.\nExample 1 Let us introduce three peers, namely p\n1\n, p\n2\n,\nand p\n3\n, that constitute the P2P scenario that will be used\nas a running example throughout this paper to illustrate\ntechnical definitions.\nThe global schema\nG\np\n1\nof peer p\n1\nconsists of the relation\npredicate secretary (Employee, Manager ) (without constraints\n), the source schema\nS\np\n1\nconsists of the relation symbol\ns\n1\n, and the set\nM\np\n1\nof the local mapping assertions is\n{X, Y | s\n1\n(X, Y )}\n{X, Y | secretary (X, Y )}.\nAs for peer p\n2\n, the schema\nG\np\n2\nconsists of the relation\nfinancial (Employee, Manager ) (without constraints),\nthe source schema consists of the relation symbol s\n2\n, and\nM\np\n2\n=\n{X, Y | s\n2\n(X, Y )}\n{X, Y | financial(X, Y )}.\nThe schema\nG\np\n3\nof peer p\n3\nconsists of the relations\nemployee(Name, Dept) and boss(Employee, Manager ),\nwhose set of constraints contains the assertions (quantifiers\nare omitted) employee (X, Y ) boss(X\n1\n, Y\n1\n)\nX = Y\n1\nand\nboss(X, Y ) boss(X\n1\n, Y\n1\n)\nY\n1\n= X, stating that managers\nare never employees; the source schema\nS\np\n3\ncomprises the\nrelation symbols s\n3\n; and, the set of the local mapping assertions\nis\n{X, Y | s\n3\n(X, Y )}\n{X, Y | employee(X, Y )}.\nP\nEach peer p P in a P2P system P = P, I, N , map is\nalso equipped with the neighborhood function\nN providing\na set of peers\nN (p) P - {p} containing the peers (called\nneighbors) who potentially have some information of interest\nto p. Intuitively, the neighborhood relation determines the\nstructure of a P2P system\nP. Such a structure is better\ndescribed by the dependency graph G(\nP) of P, i.e., by a\ndirected graph having P as its set of vertices and {(p, q) |\nq P p N (q)} as its set of edges.\nIn particular, a peer q is in N (p) iff p is interested in the\ndata exported by q by means of its global schema, i.e., some\nof the global relations of p can be populated by means of\nthe data coming from q besides the data coming from the\nsources of p itself. To this aim, map(p) defines the set of\npeer mapping assertions of p.\nEach assertion is an expression of the form Q\nq\nQ\np\n,\nwhere the peer q N (p) is a neighbor of p, and Q\nq\nand Q\np\nare two conjunctive queries of the same arity over schemas\nG\nq\nand\nG\np\n, respectively.\nExample 1 (contd.) Let\nP\nr\n=\nP\nr\n, I\nr\n, N\nr\n, map\nr\nbe a\nP2P system, where P\nr\nconsists of three peers p\n1\n, p\n2\nand p\n3\n,\nsuch that\nN\nr\n(p\n1\n) =\nN\nr\n(p\n2\n) =\nand N\nr\n(p\n3\n) =\n{p\n1\n, p\n2\n}.\nFigure 1 summarizes the structure of the system\nP\nr\nby\nshowing, for each peer, its global schema, its source schema,\nand its local and peer mapping assertions. 
In particular, notice that the mapping assertions are such that: map(p_1) = map(p_2) = ∅, and map(p_3) = { {X, Y | financial(X, Y)} ⊑ {X, Y | boss(X, Y)}, {X, Y | secretary(X, Y)} ⊑ {X, Y | boss(X, Y)} }.
A source database for a P2P system P is a function D assigning to each peer p ∈ P such that I(p) = ⟨G_p, S_p, M_p⟩ a database instance D(p) for S_p. A global database for P is a function B assigning to each peer p a database instance B(p) for G_p. Usually, we are interested in global databases that can be "retrieved" from a given source, as formalized below.
Given a source database D for P, a retrieved global database for D is a global database B that satisfies the mapping assertions M_p of each peer p, i.e., B is such that: for each p ∈ P and each (Q_{S_p} ⊑ Q_{G_p}) ∈ M_p, it is the case that Q_{S_p}^{D(p)} ⊆ Q_{G_p}^{B(p)}.
We denote by ret(P, D) the set of all the retrieved global databases for D in the system P.
Notice that in the definition above we are considering sound mappings: data retrieved from the sources by the mapping views are assumed to be a subset of the data that satisfy the corresponding global relation. This is a classical assumption in data integration, where sources in general do not provide all the intended extensions of the global schema, hence extracted data are to be considered sound but not necessarily complete.
Example 1 (contd.) Let D_r be a source database for the P2P system P_r such that D_r(p_1) is {s_1(Albert, Bill)}, D_r(p_2) consists of {s_2(John, Mary), s_2(Mary, Tom)}, and D_r(p_3) = {s_3(Mary, D1)}. Consider also the global database B_r such that B_r(p_1) = {secretary(Albert, Bill)}, B_r(p_2) = {financial(John, Mary), financial(Mary, Tom)} and B_r(p_3) = {employee(Mary, D1)}. Then, it is easy to see that B_r is a retrieved database for D_r in P_r, i.e., B_r ∈ ret(P_r, D_r).
Note that a global database B whose peer schema for some peer p ∈ {p_1, p_2, p_3} is a superset of B_r(p) is in ret(P_r, D_r) as well; we simply say that B is a superset of B_r.
3.2 Models of Peer-to-Peer Systems
Given a source database D, it is particularly important to investigate whether it is possible to retrieve from D a database which satisfies the semantics of the network. Therefore, we next define a suitable notion of model for a P2P system. The approach has been inspired by the autoepistemic approach of [9]; in particular, we assume that peers propagate through mapping assertions only the values they really trust.
Definition 2. Let P = ⟨P, I, N, map⟩ be a P2P system, p ∈ P a peer with I(p) = ⟨G_p, S_p, M_p⟩ and G_p = ⟨Ψ_p, Σ_p⟩, and D a source instance for P. Then, a p-model for P w.r.t. D is a maximal nonempty set of global databases M ⊆ ret(P, D) such that:
1. for each B ∈ M, B(p) satisfies the constraints in Σ_p, and
2. for each assertion Q_q ⊑ Q_p ∈ map(p), it holds: ⋂_{B ∈ M} Q_q^{B(q)} ⊆ ⋂_{B ∈ M} Q_p^{B(p)}.
Thus, according to Condition 1, any database in the p-model satisfies all the integrity constraints issued over the global schema of p; moreover, Condition 2 guarantees that peers communicate only those values that belong to all models, i.e., a cautious approach to the propagation has been pursued.
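Before moving on, the retrieved global database of the running example can be spelled out with a small sketch (the Python encoding and the helper names are assumptions of this sketch). All mapping assertions of Example 1 are simple copy rules, so the minimal retrieved database is obtained by copying source tuples into the corresponding global relations; checking p_3's constraints then exposes the inconsistency discussed next.

D_r = {"s1": {("Albert", "Bill")},
       "s2": {("John", "Mary"), ("Mary", "Tom")},
       "s3": {("Mary", "D1")}}

B = {"secretary": set(D_r["s1"]),      # local mapping assertion of p1
     "financial": set(D_r["s2"]),      # local mapping assertion of p2
     "employee":  set(D_r["s3"])}      # local mapping assertion of p3
# peer mapping assertions of p3: financial and secretary tuples must also appear in boss
B["boss"] = B["financial"] | B["secretary"]

managers = {m for (_, m) in B["boss"]}
workers = {e for (e, _) in B["employee"]} | {e for (e, _) in B["boss"]}
# p3's constraints state that nobody may be both an employee and a manager
print(managers & workers)   # {'Mary'}: the violation discussed in the running example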
Finally we point out that, as for local mapping assertions\n, peer mapping assertions are assumed to be sound.\nNow, given that each peer singles out its models, a notion\nof model for the whole system can be easily stated.\nDefinition 3 Let\nP = P, I, N , map be a P2P system.\nA model for\nP w.r.t. D is a maximal nonempty set M\nret (\nP, D) of global databases such that, for each p P , M\nis a p-model. If a model for P w.r.t. D exists, we say that\nD satisfies P, denoted by D |= P.\nP\nFor\ninstance,\nin\nour\nrunning\nexample,\nD\nr\ndoes\nnot\nsatisfy\nP\nr\n;\nindeed,\nthe\npeer\nmapping\nassertions\nconstrain the schema of p\n3\nto contain in every\nglobal\ndatabase\n(retrieved\nfrom\nD\nr\n)\nthe\ntuples\nboss(Albert, Bill), boss(John, Mary), boss(Mary, Tom),\nand\nemployee (Mary, D1) that violate the integrity constraints\nover p\n3\n, since Mary results to be both an employee and a\nmanger. Therefore, retrieving data from\nD\nr\nleads to an inconsistent\nscenario.\nWe conclude by noticing that deciding whether a P2P\nsystem admits a model can be done efficiently. The result\ncan be proven by modifying the techniques in [9], in order\nto first evaluate all the mappings in the network and then\ncheck for the satisfaction of the integrity constraints over\npeer schemas.\nTheorem 4\nLet\nP = P, I, N , map be a P2P system, and\nD be a database instance for P. Then, deciding whether\nthere is a model for\nP w.r.t. D, i.e., D |= P, is feasible in\npolynomial time.\nDEALING WITH AUTONOMOUS PEERS\nAs shown in our running example, in general data stored\nin local and autonomous sources are not required to satisfy\nconstraints expressed on the global schema (for example\nwhen a key dependency on\nG is violated by data retrieved\nfrom the sources). Thus, a P2P system may be unsatisfiable\nw.r.t. a source database\nD. In this section, we face the problem\nof solving inconsistencies in P2P systems. Specifically,\nwe introduce a semantics for \"repairing\" a P2P system. To\nthis aim, we first provide a model for peer preferences, and\nthen show the impact of these individual preferences on the\ncost of reaching a global agreed repair.\n4.1\nPeer Preferences and Repairs\nLet\nP = P, I, N , map be a P2P system, and D be a\nsource database instance for\nP. Next, we define a repair\nweighting function w\np\n(P,D)\nfor each peer p, encoding its preferences\non candidate repairs of\nD. Formally, w\np\n(P,D)\nis a\npolynomially-computable function assigning, to each source\ndatabase instance\nD, a natural number that is a measure of\nthe preference of p on having D as a repair for D (the lower\nthe number, the more preferred the repair).\nAs a quite simple, yet natural example of weighting function\n, we can consider the evaluation of the number of deletions\nperformed to the peer's sources.\nIn this case, we\nhave that w\np\n(P,D)\n(\nD ) = |D (p) - D(p)|, which in fact corresponds\nto the size of the difference between\nD and D\nrestricted to tuples of peer p. This weighting function is\ncalled cardinality-based in the following.\nExample 1 (contd.) 
Consider the source databases\nD\nr\n1\n,\nD\nr\n2\n, and\nD\nr\n3\nsuch that:\nD\nr\n1\n(p\n1\n) =\nD\nr\n2\n(p\n1\n) =\nD\nr\n3\n(p\n1\n) =\nD\nr\n(p\n1\n),\n39\nD\nr\n1\n(p\n2\n) =\n{s\n2\n(John, Mary)}, D\nr\n2\n(p\n2\n) =\n{s\n2\n(Mary, Tom)},\nD\nr\n3\n(p\n2\n) =\n{}, D\nr\n1\n(p\n3\n) =\n{}, D\nr\n2\n(p\n3\n) =\n{s\n3\n(Mary, D1)}, and\nD\nr\n3\n(p\n3\n) =\n{s\n3\n(Mary, D1)}.\nAssume that, for each peer p, w\np\n(P\nr\n,D\nr\n)\n(\nD) = |D(p) D\nr\n(p)|, i.e., she prefers source repairs where the minimum\nnumber of tuples is deleted from\nD\nr\n(p).\nThen,\nw\np\n1\n(P\nr\n,D\nr\n)\n(\nD\nr\n1\n)\n=\nw\np\n1\n(P\nr\n,D\nr\n)\n(\nD\nr\n2\n)\n=\nw\np\n1\n(P\nr\n,D\nr\n)\n(\nD\nr\n3\n)\n=\n0;\nw\np\n2\n(P\nr\n,D\nr\n)\n(\nD\nr\n1\n) = w\np\n2\n(P\nr\n,D\nr\n)\n(\nD\nr\n2\n) = 1; w\np\n2\n(P\nr\n,D\nr\n)\n(\nD\nr\n3\n) = 2;\nw\np\n3\n(P\nr\n,D\nr\n)\n(\nD\nr\n1\n) = 1; w\np\n3\n(P\nr\n,D\nr\n)\n(\nD\nr\n2\n) = w\np\n3\n(P\nr\n,D\nr\n)\n(\nD\nr\n3\n) = 0.\nP\nThe problem of solving inconsistency in \"classical\" data\nintegration systems has been traditionally faced by providing\na semantics in terms of the repairs of the global\ndatabases that the mapping forces to be in the semantic\nof the system [4, 7, 6]. Repairs are obtained by means of\naddition and deletion of tuples according to some minimality\ncriterion.\nWe next propose a generalization of these approaches to\nthe P2P framework, which takes into account peers preferences\n. To this aim, we focus on finding the proper set of facts\nat the sources that imply as a consequence a global database\nsatisfying all integrity constraints. Basically, such a way of\nproceeding allows us to easily take into account information\non preferences when trying to solve inconsistency, since repairing\nis performed by directly focusing on those sources,\nwhose integration has caused inconsistency.\nDefinition 5 (Repair) Let\nP be a P2P system, p a peer,\nand\nD and D two source databases. We say that D is p-minimal\nif\nD |= P, and there exists no source database D\nsuch that w\np\n(P,D)\n(\nD ) < w\np\n(P,D)\n(\nD ) and D |= P.\nThen,\nD is a repair for P w.r.t. D if D is p-minimal for\neach peer p.\nP\nExample 1 (contd.) It is easy to see that\nD\nr\n1\n,\nD\nr\n2\n, and\nD\nr\n3\nsatisfy\nP\nr\nand they are both p\n1\n-minimal. Indeed, peer\np\n1\nhas no preferences among the three databases, since\nw\np\n1\n(P\nr\n,D\nr\n)\n(\nD\nr\n1\n) = w\np\n1\n(P\nr\n,D\nr\n)\n(\nD\nr\n2\n) = w\np\n1\n(P\nr\n,D\nr\n)\n(\nD\nr\n3\n) = 0.\nMoreover,\nD\nr\n1\nand\nD\nr\n2\nare equally preferred by p\n2\n, whereas\nD\nr\n2\nand\nD\nr\n3\nare equally preferred by p\n3\n. Therefore, all peers\nagree on\nD\nr\n2\n, which is thus a repair for\nD\nr\nw.r.t.\nP\nr\n. However\n, neither\nD\nr\n3\nis p\n2\n-minimal, nor\nD\nr\n1\nis p\n3\n-minimal, and\nthus they are not repairs.\nP\nWe next define the semantics of a P2P system, in terms\nof models for those sources on which all the peers agree.\nDefinition 6 (Agreement) Let\nP = P, I, N , map be a\nP2P system, and\nD be an instance for P. The agreement for\nP w.r.t. D is the set of all of its models w.r.t. 
some repair,\nand will be denoted by Agr (\nP, D).\nP\nExample 1 (contd.)\nD\nr\n2\nis p-minimal, for each peer p,\nand it is easy to see that the set Agr (\nP\nr\n, D\nr\n) contains\nall databases belonging to some model for\nP\nr\nw.r.t.\nD\nr\n2\n.\nIn particular,\nit contains the supersets\n(satisfying\nthe constraints)\nof\nthe database\nB\nr\n2\nsuch that\nB\nr\n2\n(p\n1\n) =\n{secretary (Albert, Bill)}, B\nr\n2\n(p\n2\n) =\n{financial(Mary, Tom)} and B\nr\n2\n(p\n3\n) =\n{boss(Albert, Bill),\nboss (Mary, Tom), employee(Mary, D1)}. Moreover, no other\nglobal database is in Agr (\nP\nr\n, D\nr\n).\nP\nWe can finally characterize the answer to a user query in\nterms of the repairs for the system.\nDefinition 7 Let\nP = P, I, N , map be a P2P system, let\nD be a source database for it, and let Q be a query over\nthe schema of a peer p. Then, the answer to Q is the evaluation\nof the query over all the possible agreed databases:\nans(Q, p, P, D) =\nB Agr(P,D)\nQ\nB(p)\np\n.\nP\nFor instance, in our running example, the answer to the\nuser query\n{X | boss(X, Y )} posed over peer p\n3\n, which asks\nfor all employees that have a boss, is\n{ Albert , Mary },\nsince this query is evaluated over the supersets of the\ndatabase\nB\nr\n2\nretrieved from\nD\nr\n2\nonly.\nWe conclude the section by noticing that Agr (\nP, D) is just\na formal characterization of the semantics of a P2P system.\nUsually, we are not interested in computing such a set; and,\nin fact, for practical applications, suitable techniques and\noptimization algorithms should be investigated to handle\ninconsistency at query time (in the spirit of, e.g., [10]).\n4.2\nThe Price of Autonomy\nGiven the framework presented so far, we are in the position\nof studying the effects of having autonomous peers\nrepairing their source databases according to their own preferences\n. We next show that, in some cases, peers might not\nfind an agreement on the way the repair has to be carried\nout. This is a somehow expected consequence of having selfish\ninterested peers in the absence of a global coordination.\nProposition 8\nThere exists a P2P system\nP and a source\ndatabase\nD such that there is no agreement, i.e., Agr(P , D)\nis empty.\nProof\n[Sketch].\nConsider the P2P system\nP =\nP , I , N , map , where P consists of the peers challenger\n(short: c) and duplicator (short: d), that are mutually connected\n, i.e.,\nN (c) = {d} and N (d) = {c}.\nPeer c is such that I (c) = G\nc\n, S\nc\n, M\nc\n, where the schema\nG\nc\nconsists of predicates r\nc\n(X) and mr\nd\n(X) with constraints\nr\nc\n(X) r\nc\n(Y ) X = Y and r\nc\n(X) mr\nd\n(Y ) X = Y ; the\nsource schema consists of the relation symbol s\nc\n; and\nM\nc\ncontains only the assertion\n{X | s\nc\n(X)}\n{X | r\nc\n(X)}.\nPeer d is such that I (d) = G\nd\n, S\nd\n, M\nd\n, where the schema\nG\nd\nconsists of predicates r\nd\n(X) and mr\nc\n(X) with constraints\nr\nd\n(X) r\nd\n(Y ) X = Y and r\nd\n(X) mr\nc\n(Y ) X = Y ; the\nsource schema consists of the relation symbol s\nd\n; and\nM\nd\ncontains only the assertion\n{X | s\nd\n(X)}\n{X | r\nd\n(X)}.\nFinally, map(c) contains the assertion {X | r\nc\n(X))}\n{X | mr\nc\n(X)}, while map(d) contains the assertion {X |\nr\nd\n(X))}\n{X | mr\nd\n(X)}.\nLet\nD be a source database for P such that D(c) =\n{s\nc\n(0), s\nc\n(1)\n} and D(d) = {s\nd\n(0), s\nd\n(1)\n}. We build four\nsource databases, say\nD\n1\n,\nD\n2\n,\nD\n3\nand\nD\n4\n, that satisfy\nP. 
They are such that: D\n1\n(c) = {}, D\n1\n(d) = {s\nd\n(0)\n};\nD\n2\n(c) = {}, D\n2\n(d) = {s\nd\n(1)\n}; D\n3\n(c) = {s\nc\n(0)\n}, D\n3\n(d) = {};\nD\n4\n(c) = {s\nc\n(1)\n}, D\n4\n(d) = {}. Notice that all the other\ndatabases satisfying\nP are proper subsets of these ones.\nThen, by assuming that each peer wants to minimize the\nnumber of deletions in\nD, there exists no source database\nsatisfying\nP that is both c-minimal and d-minimal.\nTHE COMPLEXITY OF QUERY ANSWERING\nIn the light of Proposition 8, it is particulary relevant to\ninvestigate the complexity of dealing with peer agreements\n40\nand query answering in such P2P data integration systems.\nIn this section, we first present some basic problems arising\nin the proposed framework, and subsequently analyze their\ncomputational complexity. This analysis is a fundamental\npremise to devise effective and optimized implementations.\n5.1\nProblems\nGiven a P2P system\nP and a source database D for P, we\nconsider the following problems:\nRepairChecking: given a source instance D , is D a\nrepair for\nP w.r.t. D?\nAgreementExistence: is Agr(P, D) = ?\nAnyAgreementComputation: compute a database B in\nthe agreement Agr (\nP, D), if any.\nQueryOutputTuple: given a query Q over a peer\nschema\nG\np\nand a tuple t, is t ans(Q, p, P, D)?\nIntuitively,\nRepairChecking\nis\nthe\nvery\nbasic\nproblem\nof\nassessing\nwhether\na\nsource\ninstance\nat\nhand\nsatisfies\nthe\ndata\nintegration\nsystem.\nThen,\nAgreementExistence (and its corresponding computational\nversion\nAnyAgreementComputation) asks for singling\nout scenarios where some agrement can be in fact computed\n. Finally,\nQueryOutputTuple represents the problem\ncharacterizing the intrinsic complexity of a query answering\nin the proposed framework; indeed, it is the problem of\ndeciding the membership of a given tuple in the result of\nquery evaluation.\n5.2\nResults\nOur first result is that checking whether all the peers are\nsatisfied by a given source database is a difficult task that\nis unlikely to be feasible in polynomial time.\nTheorem 9\nRepairChecking is co-NP-complete. Hardness\nholds even for cardinality-based weighting functions.\nProof [Sketch].\nMembership. Consider the complementary\nproblem of deciding whether there exists a peer p\nsuch that\nD is not p-minimal. This problem is feasible\nin NP by guessing a source database\nD and checking in\nthat 1.\nD |= P , and 2. there exists a peer p such that\nw\np\n(P,D)\n(\nD ) < w\np\n(P,D)\n(\nD ). In particular, 1. is feasible in\npolynomial time because of Theorem 4, and 2. is feasible in\npolynomial time because our weighting functions are polynomially\ncomputable.\nHardness. Recall that deciding whether a Boolean formula\nin conjunctive normal form = C\n1\n. . . C\nm\nover the\nvariables X\n1\n, . . . , X\nn\nis not satisfiable, i.e., deciding whether\nthere exists no truth assignments to the variables making\neach clause C\nj\ntrue, is a co-NP-hard problem.\nWe built a P2P system\nP\n\nsuch that:\nP\n\ncontains a peer\nx\ni\nfor each variable X\ni\n, a peer c\nj\nfor each clause C\nj\n, and\nthe distinguished peer e. The source schema of x\ni\n(resp. c\nj\n)\nconsists of the unary relation s\nx\ni\n(resp. s\nc\nj\n), whereas the\nglobal schema consists of the unary relation r\nx\ni\n(resp. r\nc\nj\n).\nThe source schema of e consists of the unary relations s\ne\nand\ns\na\n, whereas its global schema consists of the unary relations\nr\ne\nand r\na\n. 
For each source relation, say s , P() contains\na local mapping assertion of the form\n{X | s (X)}\n{X |\nr (X)}. Each global relation of the form r\nx\ni\nis equipped\nwith the constraint r\nx\ni\n(X\n1\n)\nr\nx\ni\n(X\n2\n)\nX\n1\n= X\n2\n, stating\nthat each relation must contain one atom at most. Each\nglobal relation of the form r\nc\nj\nis equipped with the constraint\nr\nc\nj\n(tx\ni\n)\nr\nc\nj\n(fx\ni\n)\n, where is the empty disjunction\n, stating that for each variable x\ni\n, r\nc\nj\ncannot contain\nboth tx\ni\nand fx\ni\nat the same time. Moreover, peer e\nhas also the constraint r\ne\n(X\n1\n)\nr\na\n(X\n2\n)\nX\n1\n= X\n2\n.\nConsider the source database\nD\n\nfor\nP\n\nsuch that:\nD\n\n(x\ni\n)\n=\n{s\nx\ni\n(tx\ni\n), s\nx\ni\n(fx\ni\n)\n}; for each x\ni\noccurring\nin c\nj\n,\nD\n\n(c\nj\n)\n=\n{s\nc\nj\n(tx\ni\n), s\nc\nj\n(fx\ni\n)\n}; and D\n\n(e) =\n{s\ne\n(t), s\ne\n(f), s\na\n(t)}. Notice that due to the constraints issued\nover peers schemas, any source database\nD , with\nD |= P\n\n, is such that\n|D (x\ni\n)\n| 1, for each x\ni\n. Therefore\n, the restriction of\nD to the peers of the form x\ni\nis in\none-to-one correspondence with a truth-value assignment for\n, denoted by (D ). Intuitively, the atom s\nx\ni\n(tx\ni\n) (resp.\ns\nx\ni\n(fx\ni\n)) means that variable X\ni\nis set to true (resp. false),\nwhereas the atom s\nc\nj\n(tx\ni\n) means that the clause C\nj\nis true,\nwitnessed by the assignment for the variable X\ni\noccurring\nin c\nj\n.\nFinally, the peers mapping assertions in\nP\n\nare defined\nas follows.\nFor each variable X\ni\noccurring positively\n(resp.\nnegatively) in the clause C\nj\nthere are exactly\ntwo mappings of the form\n{r\nx\ni\n(tx\ni\n)\n}\n{r\nc\nj\n(tx\ni\n)\n} and\n{r\nx\ni\n(fx\ni\n)\n}\n{r\nc\nj\n(fx\ni\n)\n} (resp. {r\nx\ni\n(fx\ni\n)\n}\n{r\nc\nj\n(tx\ni\n)\n}\nand\n{r\nx\ni\n(tx\ni\n)\n}\n{r\nc\nj\n(fx\ni\n)\n}); moreover, for each clause\nC\nj\ncontaining variables X\nj\n1\n, ..., X\nj\nk\n, there exists a mapping\n{r\nc\nj\n(fx\nj\n1\n)\nr\nc\nj\n(fx\nj\nk\n)\n}\n{r\ne\n(f)}.\nFigure 2 shows on the upper part the dependency graph\nG(\nP\n\n) for the formula = (X\n1\nX\n2\n)\n(X\n3\n)\n(X\n1\nX\n3\n\nX\n4\n)\n(X\n4\n)\n(X\n5\nX\n6\nX\n7\n)\n(X\n4\nX\n6\nX\n8\n).\nAssume that each peer wants to minimize the number of\ndeletions in\nD\n\n. Then, given a source database\nD minimal\nw.r.t. each peer in\nP\n\nbut e, we can show that the\nabove mappings encode an evaluation of the assignment\n(D ). In particular, it is easy to see that (D ) is a satisfying\nassignment for if and only if\nD (e) contains the facts\n{s\ne\n(t), s\na\n(t)}, i.e., one fact is deleted from the source of e\nonly. Assume, now, that\nD is such that D (e) = {s\ne\n(f)},\ni.e., two facts are deleted from the source of e. Then, D is\nalso e-minimal if and only if is not satisfiable.\nP\nGiven the above complexity result, one can easily see that\nAnyAgreementComputation is feasible in the functional version\nof\nP\n2\n. Indeed, we can guess in NP a source instance\nD, build in polynomial time a model B for P w.r.t. D (by\nconstruction in Theorem 4), and check in co-NP that\nD is\nminimal for each peer.\nActually, we can do much better. In fact, we next show\nthat the problem is complete for the polynomial time closure\nof NP, and thus remains at the first level of the polynomial\nhierarchy.\nTheorem 10\nAnyAgreementComputation\nis\nFPNP-complete\n.\nHardness\nholds even for cardinality-based\nweighting functions.\nProof [Sketch]. Membership. 
The problem can be solved\nby processing peers in a sequential manner. For each peer in\nP, we can find the minimum value of the associated preference\nfunction by means of a binary search, in which at each\nstep we guess in NP a database instance and verify that\nsuch a preference holds. After having collected the minimum\nvalues for all peers, we conclude with a final guess to\nget a repair\nD, and a subsequent check that actually each\npeer gets its minimum possible value for\nP w.r.t. D.\n41\nFigure 2: Constructions in Proofs of Complexity Results\n.\nFinally, a model for\nP w.r.t. D can be build in polynomial\ntime (again, by construction in Theorem 4).\nHardness. Let be a boolean formula in conjunctive normal\nform = C\n1\n. . . C\nm\nover the variables X\n1\n, . . . , X\nn\n.\nAssume that each clause, say C\nj\n, is equipped with a weight\nw\nj\n(natural number).\nLet be an assignment for the\nvariables in .\nIts weight is the sum of the weights of\nall the clauses satisfied in .\nThe problem of computing\nthe maximum weight over any truth assignment, called\nMAX - WEIGHT - SAT, is FPNP-complete.\nConsider again the construction in Theorem 9, and modify\nP\n\nas follows.\nThe source schema of peer e consists\nof the relation s\nw\n, whereas its global schema consists\nof the relations r\nw\nand r\nv\n, and of the constraint\nr\nv\n(X) r\nw\n(X, Y ) . The local mappings of e is {X, Y |\ns\nw\n(X, Y )}\n{X, Y | r\nw\n(X, Y )}. Moreover, for each clause\nc\nj\nover variables X\nj\n1\n, ..., X\nj\nk\n, map(e) contains the assertion\n{r\nc\nj\n(fx\nj\n1\n)\nr\nc\nj\n(fx\nj\nk\n)\n}\n{r\nv\n(fc\nj\n)\n}. Let\nP\n\nbe such\na modified P2P system. Notice that G(\nP\n\n) coincides with\nG(\nP\n\n) (see again Figure 2).\nConsider now the database instance\nD\n\nfor\nP\n\nobtained\nby modifying\nD\n\nsuch that\nD\n\n(e) contains the atoms\ns\nw\n(fc\nj\n, 1), s\nw\n(fc\nj\n, 2), ...s\nw\n(fc\nj\n, w\nj\n) for each clause c\nj\n. Intuitively\n, peer e stores w\nj\ndistinct atoms for each clause c\nj\n.\nLet\nD be a source instance that satisfies\nP\n\n. As in Theorem\n9, the restriction of\nD over the variables is in one-to-one\ncorrespondence with a truth assignment for , denoted\nby (D ). Then, it is easy to see that peer e must delete\nin\nD all the w\nj\ndistinct atoms corresponding to a clause\nC\nj\nthat is not satisfied by the assignment (D ). Therefore\n,\n|D (e)| =\ni|C\ni\nis false in\n(D )\nw\ni\n. Hence, the result\neasily follows, since computing the source instance that is e-minimal\n, say\nD, determines the maximum weight over any\nassignment for as (\ni\nw\ni\n)\n- |D(e)|.\nP\nWe next focus on the\nAgreementExistence problem. Note\nthat membership of this problem in\nP\n2\nis easy to proven,\nafter the above theorem. However, the reduction for the\nhardness part we shall exploit here is rather different.\nTheorem 11\nAgreementExistence is\nP\n2\n-complete. Hardness\nholds even for cardinality-based weighting functions.\nProof [Sketch]. Membership is shown with the same line\nof reasoning of Theorem 10. For the hardness, consider again\nMAX - WEIGHT - SAT, and the\nP\n2\n-complete problem of deciding\nwhether it has a unique solution.\nLet\nP\n\nbe the P2P system built in Theorem 10, and let\n\nP\n\nbe a copy of it, obtained by replacing each element\n(both relations and peers) r in\nP\n\nby r . Then, consider the\nsystem ~\nP\n\nobtained as the union of\nP\n\n,\nP\n\nand a fresh\npeer u. 
Figure 2 shows the dependency graph G( ~\nP\n\n).\nThe local schema of u is empty, while its global schema\nconsists of the unary relation r\nu\nwith the constraint\nn\ni=1\nr\nu\n(bad\ni\n)\n. The mapping assertions are as follows.\nFor each variable X\ni\nin , map(u) contains {r\nx\ni\n(tx\ni\n)\n\nr\nx\ni\n(tx\ni\n)\n}\n{r\nu\n(bad\ni\n)\n} and {r\nx\ni\n(fx\ni\n)\nr\nx\ni\n(fx\ni\n)\n}\n{r\nu\n(bad\ni\n)\n}. It is worthwhile noting that, for the sake of\nsimplicity, the mapping assertions are slightly more general\nthan those allowed in the usual definition of P2P systems,\nsince they involve joins among different peers. However, this\nis only a syntactical facility, as such a mapping can be easily\nsimulated by introducing a suitable dummy peer.\nThe idea of the reduction is that, if the same assignment\nthat maximizes the weight of the satisfied clauses is selected\nfor both\nP\n\nand\nP\n\n, then r\nu\n(bad\ni\n) is pushed to u (for each\ni), thereby violating the constraint. Thus, there is a (nonempty\n) agreement in ~\nP\n\nif and only if there are at least two\nsuch assignments.\nP\nWe conclude our investigation by observing that query\nanswering is at least as hard as\nAgreementExistence. Indeed\n, intuitively, if peers are not able to find an agreement\nin an inconsistent P2P system, then the answer to any given\nquery will be empty. Moreover, membership can be proven\nby the same line of reasoning of Theorem 10, and we thus\nget the following result.\nTheorem 12\nQueryOutputTuple is\nP\n2\n-complete.\nHardness\nholds even for cardinality-based weighting functions.\nCONCLUSIONS\nIn this paper, we investigated some important theoretical\nissues in P2P data integration systems. Specifically, we\nintroduced a setting in which peers take into account their\nown preferences over data sources, in order to integrate data\nif some inconsistency arise. This seems a natural setting for\nsuch kind of systems, which has not been previously investigated\nin the literature. It turns out that there are scenarios\nwhere peers do not find any agreement on the way the repair\nshould be carried out, and where some kind of centralized\ncoordination is required.\nActually, our results show that this coordination comes\nwith a cost and some basic problems are unlikely to be\ntractable. However, the complexity of the problems studied\nin this paper are only mildly harder than the corresponding\nproblems in traditional data integration systems.\n42\nThis is an important feature of our approach, that paves\nthe way for possible easy implementations, based on available\nsystems.\nIn particular, the prototypical implementation appears viable\nwith minor efforts if done on top of integration systems\nthat exploit a declarative approach to data integration (e.g.,\n[18], where logic programs serve as executable logic specifications\nfor the repair computation). Indeed, our complexity\nresults show that logic engines able to express all problems\nin the second level of the polynomial hierarchy, such as the\nDLV system [19], suffices for managing the framework, once\nwe provide appropriate logic specifications.\nA number of interesting research questions arise from this\nwork.\nFirst, it is natural to ask whether the framework\ncan be extended to the presence of existentially quantified\nconstraints. 
This can be easily done for some special syntactic\nfragments, such as for non key-conflicting schemas,\ni.e., global schemas enriched with inclusion dependencies\nand keys, for which decidability in the context of data integration\nsystems has been proven in [7]. To this aim, one has\nto modify the algorithm in [9] to propagate information in a\nP2P system by accounting for mapping assertion as well as\nfor inclusion dependencies, and eventually check that after\nsuch propagation no key has been violated.\nWe conclude by noticing that an avenue of further research\nis to consider more sophisticated peer-agreement semantics,\nbesides the Pareto-like approach described here.\nFor instance\n, we may think of some applications where peers may\nform cooperating groups, or do not cooperate at all. Another\nline of research may lead to enrich the setting by further\nkinds of peer preferences criteria, by replacing or complementing\nthe weighting functions proposed in this paper.\nAcknowledgments\nThe work was partially supported by the European Commission\nunder project IST-2001-33570 INFOMIX.\nFrancesco Scarcello's work was also supported by ICAR-CNR\n, Rende, Italy.\nREFERENCES\n[1] Serge Abiteboul, Richard Hull, and Victor Vianu.\nFoundations of Databases. Addison Wesley Publ. Co.,\nReading, Massachussetts, 1995.\n[2] Marcelo Arenas, Leopoldo E. Bertossi, and Jan\nChomicki. Consistent query answers in inconsistent\ndatabases. In Proc. of PODS'99, pages 6879, 1999.\n[3] P. Bernstein, F. Giunchiglia, A. Kementsietsidis,\nJ. Mylopoulos, L. Serafini, and I. Zaihrayeu. Data\nmanagement for peer-to-peer computing: A vision. In\nWorkshop on the Web and Databases, WebDB, 2002.\n[4] Leopoldo Bertossi, Jan Chomicki, Alvaro Cortes, and\nClaudio Gutierrez. Consistent answers from integrated\ndata sources. In Proc. of FQAS'02, pages 7185, 2002.\n[5] Leopoldo E. Bertossi and Loreto Bravo. Query\nanswering in peer-to-peer data exchange systems. In\nProc. of EDBT Workshops 2004, pages 476485, 2004.\n[6] Loreto Bravo and Leopoldo Bertossi. Logic\nprogramming for consistently querying data\nintegration systems. In Proc. of IJCAI'03, pages\n1015, 2003.\n[7] Andrea Cal`i, Domenico Lembo, and Riccardo Rosati.\nOn the decidability and complexity of query\nanswering over inconsistent and incomplete databases.\nIn Proc. of PODS'03, pages 260271, 2003.\n[8] Andrea Cal`i, Domenico Lembo, and Riccardo Rosati.\nQuery rewriting and answering under constraints in\ndata integration systems. In Proc. of IJCAI'03, pages\n1621, 2003.\n[9] Diego Calvanese, Giuseppe De Giacomo, Maurizio\nLenzerini, and Riccardo Rosati. Logical foundations of\npeer-to-peer data integration. In Proc. of PODS'04,\npages 241251, 2004.\n[10] Thomas Eiter, Michael Fink, Gianluigi Greco, and\nDomenico Lembo. Efficient evaluation of logic\nprograms for querying data integration systems. In\nProc. of ICLP'03, pages 348364, 2003.\n[11] Enrico Franconi, Gabriel Kuper, Andrei Lopatenko,\nand Luciano Serafini. A robust logical and\ncomputational characterisation of peer-to-peer\ndatabase systems. In Proc. of DBISP2P'03, pages\n6476, 2003.\n[12] Enrico Franconi, Gabriel Kuper, Andrei Lopatenko,\nand Ilya Zaihrayeu. A distributed algorithm for robust\ndata sharing and updates in p2p database networks.\nIn Proc. of P2P&DB'04, pages 446455, 2004.\n[13] Enrico Franconi, Gabriel Kuper, Andrei Lopatenko,\nand Ilya Zaihrayeu. Queries and updates in the codb\npeer to peer database system. In Proc. 
of VLDB'04,\npages 12771280, 2004.\n[14] Gianluigi Greco, Sergio Greco, and Ester Zumpano. A\nlogic programming approach to the integration,\nrepairing and querying of inconsistent databases. In\nProc. of ICLP'01, pages 348364. Springer, 2001.\n[15] Gianluigi Greco and Domenico Lembo. Data\nintegration with prefernces among sources. In Proc. of\nER'04, pages 231244, 2004.\n[16] Alon Y. Halevy, Zachary G. Ives, Peter Mork, and\nIgor Tatarinov. Piazza: data management\ninfrastructure for semantic web applications. In Proc.\nof WWW'03, pages 556567, 2003.\n[17] Maurizio Lenzerini. Quality-aware peer-to-peer data\nintegration. In Proc. of IQIS'04, 2004.\n[18] Nicola Leone, Thomas Eiter, Wolfgang Faber, Michael\nFink, Georg Gottlob, Gianluigi Greco, Giovambattista\nIanni, Edyta Kalka, Domenico Lembo, Maurizio\nLenzerini, Vincenzino Lio, Bartosz Nowicki, Riccardo\nRosati, Marco Ruzzi, Witold Staniszkis, and Giorgio\nTerracina. The INFOMIX system for advanced\nintegration of incomplete and inconsistent data. In\nProc. of SIGMOD'05, pages 915917, 2005.\n[19] Nicola Leone, Gerald Pfeifer, Wolfgang Faber,\nThomas Eiter, Georg Gottlob, Simona Perri, and\nFrancesco Scarcello. The DLV System for Knowledge\nRepresentation and Reasoning. ACM Transaction on\nCumputational Logic. To appear.\n[20] Luciano Serafini, Fausto Giunchiglia, John\nMylopoulos, and Philip A. Bernstein. Local relational\nmodel: A logical formalization of database\ncoordination. In Fourth International and\nInterdisciplinary Conference on Modeling and Using\nContext, CONTEXT 2003, pages 286299, 2003.\n[21] Igor Tatarinov and Alon Halevy. Efficient query\nreformulation in peer data management systems. In\nProc. of SIGMOD'04, pages 539550, 2004.\n43\n", "keywords": "Peer-to-Peer Systems;Data Integration Systems"} {"name": "144", "title": "On the Discovery of Significant Statistical Quantitative Rules", "abstract": "In this paper we study market share rules, rules that have a certain market share statistic associated with them. Such rules are particularly relevant for decision making from a business perspective. Motivated by market share rules, in this paper we consider statistical quantitative rules (SQ rules) that are quantitative rules in which the RHS can be any statistic that is computed for the segment satisfying the LHS of the rule. Building on prior work, we present a statistical approach for learning all significant SQ rules, i.e., SQ rules for which a desired statistic lies outside a confidence interval computed for this rule. In particular we show how resampling techniques can be effectively used to learn significant rules. Since our method considers the significance of a large number of rules in parallel, it is susceptible to learning a certain number of "false" rules. To address this, we present a technique that can determine the number of significant SQ rules that can be expected by chance alone, and suggest that this number can be used to determine a "false discovery rate" for the learning procedure. We apply our methods to online consumer purchase data and report the results.", "fulltext": "INTRODUCTION\nRule discovery is widely used in data mining for learning\ninteresting patterns. Some of the early approaches for rule\nlearning were in the machine learning literature [11, 12, 21]. More\nrecently there have been many algorithms [1, 25, 28, 31] proposed\nin the data mining literature, most of which are based on the\nconcept of association rules [1]. 
While all these various\napproaches have been successfully used in many applications [8,\n22, 24], there are still situations that these types of rules do not\ncapture. The problem studied in this paper is motivated by market\nshare rules, a specific type of rule that cannot be represented as\nassociation rules. Informally, a market share rule is a rule that\nspecifies the market share of a product or a firm under some\nconditions.\nThe results we report in this paper are from real user-level Web\nbrowsing data provided to us by comScore Networks. The data\nconsists of browsing behavior of 100,000 users over 6 months. In\naddition to customer specific attributes, two attributes in a\ntransaction that are used to compute the market share are the site\nat which a purchase was made and the purchase amount. Consider\nthe example rules below that we discovered from the data:\n(1) Household Size = 3\n\n35K < Income < 50K\n\nISP =\nDialup\n\nmarketshare\nExpedia\n= 27.76%, support = 2.1%\n(2) Region = North East\n\nHousehold Size = 1\n\n\nmarketshare\nExpedia\n= 25.15%, support = 2.2%\n(3) Education\n\n=\n\nCollege\n\nRegion = West\n\n50 < Household\nEldest\n\nAge\n\n<\n\n55\n\n\n\nmarketshare\nExpedia\n=\n\n2.92%,\n\nsupport=2.2%\n(4) 18 < Household Eldest Age < 20\n\nmarketshare\nExpedia\n=\n8.16%, support = 2.4%\nThe market share for a specific site, e.g. Expedia.com, is\ncomputed as the dollar value of flight ticket purchases (satisfying\nthe LHS of the rule) made at Expedia.com, divided by the total\ndollar value of all flight ticket purchases satisfying the LHS. The\ndiscovered rules suggest that Expedia seems to be doing\nparticularly well among the single households in the North East\nregion (rule 2), while it cedes some market in the segment of\nteenagers (rule 4). Rules such as these are particularly relevant for\nbusiness since they suggest natural actions that may be taken. For\nexample, it may be worth investigating the higher market share\nsegments to study if there is something particularly good that is\nbeing done, which is not being done in the lower market share\nsegments.\nMore generally, \"market share\" is an example of a statistic that is\ncomputed based on the segment satisfying the antecedent of the\nrule. Besides market share, various other quantitative statistics on\nthe set of transactions satisfying the LHS of a rule can be\ncomputed, including mean and variance of an attribute. Prior\nwork on learning quantitative association rules [2, 33] studied the\ndiscovery of rules with statistics such as the mean, variance, or\nminimum/maximum of a single attribute on the RHS of a rule. In\nthis paper we generalize the structure of the rules considered in\n[2] to rules in which the RHS can be any quantitative statistic that\ncan be computed for the subset of data satisfying the LHS. This\nstatistic can even be computed based on multiple attributes. We\nterm such rules as statistical quantitative rules (SQ rules).\nWith respect to learning SQ rules from data, we formulate the\nproblem as learning significant SQ rules that have adequate\nsupport. We define an SQ rule to be significant if the specific\nstatistic computed for the rule lies outside a certain confidence\ninterval. This confidence interval represents a range in which the\nstatistic can be expected by chance alone. This is an important\nrange to identify if the rules discovered are to be interpreted as\nsuggesting fundamental relationships between the LHS and the\nmarket share. 
For example, by chance alone if it is highly likely\nthat the market share of Expedia is between 25% and 30% for any\nsubset of data, then it is not at all clear that the rule relating\nincome and Expedia's market share (rule 1 in the example) is\nidentifying a fundamental relationship between income and the\nmarket share.\nWhile prior work [6, 9] has used confidence intervals to identify\nsignificant rules, most of these approaches are either parametric\nor specific for binary data. Building on prior work in this paper\nwe present a statistical approach for learning significant SQ rules\nthat is entirely non-parametric. In particular we show how\nresampling techniques, such as permutation, can be effectively\nused to learn confidence intervals for rules. Based on these\nconfidence intervals, significant rules can be identified. However,\nsince our method considers the significance of a large number of\nrules in parallel, for a given significance level it is susceptible to\nlearning a certain number of false rules. To address this we\npresent an intuitive resampling technique that can determine the\nnumber of false rules, and argue that this number can be used to\ndetermine a "false discovery rate" for the learning procedure. The\npractical significance of this approach is that we learn significant\nSQ rules from data and specify what the false discovery rate\nexactly is.\nThe paper is organized as follows. We first define SQ rules in the\nnext section. Section 3 presents an algorithm for computing\nconfidence intervals and Section 4 presents an algorithm for\nlearning significant SQ rules. In Section 5 we explain how the\nfalse discovery rate for our approach can be computed. We\npresent detailed experimental results on real web browsing data in\nSection 6 followed by a literature review and conclusions.\n\nSTATISTICAL QUANTITATIVE RULES\nIn this section we define SQ rules and significant SQ rules. Let\nA= {A\n1\n, A\n2\n,..., A\nn\n} be a set of attributes that will be used to\ndescribe segments and B = {B\n1\n, B\n2\n,..., B\nm\n} be another set of\nattributes that will be used to compute various statistics that\ndescribe the segment. Let dom(A\ni\n) and dom(B\nj\n) represent the set\nof values that can be taken by attribute A\ni\nand B\nj\nrespectively, for\nany A\ni\n\nA and B\nj\n\nB. Let D be a dataset of N transactions where\neach transaction is of the form {A\n1\n= a\n1\n, A\n2\n= a\n2\n,..., A\nn\n= a\nn\n, B\n1\n=\nb\n1\n, B\n2\n= b\n2\n,..., B\nm\n= b\nm\n} where a\ni\n\ndom(A\ni\n) and b\nj\n\ndom(B\nj\n). Let\nan atomic condition be a proposition of the form value\n1\n\n\nA\ni\n\n\n\nvalue\n2\n\nfor ordered attributes and A\ni\n= value for unordered\nattributes where value, value\n1\n, value\n2\nbelong to the finite set of\ndiscrete values taken by A\ni\nin D. Finally, let an itemset represent a\nconjunction of atomic conditions.\nDefinition 2.1 (SQ rule). Given (i) sets of attributes A and B, (ii)\na dataset D and (iii) a function f that computes a desired statistic\nof interest on any subset of data, an SQ rule is a rule of the form:\n\nX\nf(D\nX\n) = statistic, support = sup\n1\n(2.1)\nwhere X is an itemset involving attributes in A only, D\nX\nis the\nsubset of D satisfying X, the function f computes some statistic\nfrom the values of the B attributes in the subset D\nX\n, and support is\nthe number of transactions in D satisfying X.\n\n\nNote that the statistic on the RHS of the rule can be computed\nusing the values of multiple attributes. 
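To make Definition 2.1 concrete, the following Python sketch evaluates a single SQ rule on a transaction table: it selects the subset D_X satisfying the itemset X and applies a user-supplied statistic f to it. The pandas representation, the equality-only conditions, and the column names used in the market-share example ('site', 'price') are illustrative assumptions, not part of the definition or of the paper's code.

```python
import pandas as pd

def sq_rule_rhs(D: pd.DataFrame, X: dict, f):
    """Evaluate an SQ rule  X -> f(D_X) = statistic, support = sup.

    X is an itemset given here as {attribute: value} equality conditions over
    the A attributes (range conditions would be handled analogously); f is any
    function computing a statistic from the B attributes of the subset D_X.
    """
    mask = pd.Series(True, index=D.index)
    for attr, value in X.items():              # conjunction of atomic conditions
        mask &= (D[attr] == value)
    D_X = D[mask]
    return f(D_X), len(D_X)                    # support = #transactions satisfying X

def marketshare(site):
    """Hypothetical market-share statistic: the dollar share of `site` among
    the purchases in D_X, using assumed columns 'site' and 'price'."""
    return lambda D_X: D_X.loc[D_X["site"] == site, "price"].sum() / D_X["price"].sum()

# e.g.  stat, sup = sq_rule_rhs(D, {"HouseholdSize": 3, "ISP": "Dialup"}, marketshare("Expedia"))
```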
The following examples\nare listed to demonstrate different types of rules that an SQ rule\ncan represent. For ease of exposition we use the name of the\ndesired statistic in the RHS instead of referring to it as f(D\nX\n).\n1.\n\nQuantitative association rules [2]:\npopulation-subset\nmean or variance values for the subset (2.2)\nQuantitative association rules are a popular representation for\nrules in the data mining literature in which the RHS of a rule\nrepresents the mean or variance of some attribute. Example:\nEducation = graduate\nMean(purchase) = $15.00. (2.2) is a\nspecial case of (2.1), where f(subset) is the mean of some attribute\nB\nj\nin the subset of data.\n2.\n\nMarket share rules:\nLet {A\n1\n, A\n2\n,..., A\nn\n, MSV, P} be a set of attributes in a dataset D.\nMSV (Market Share Variable) is a special categorical attribute for\nwhich the market share values are to be computed. P is a special\ncontinuous variable that is the basis for the market share\ncomputation for MSV. For example, each transaction T\nk\nmay\nrepresent a book\n2\npurchased online. A\n1\nthrough A\nn\nmay represent\nattributes of the customer who makes the purchase, such as\nincome, region of residence and household size. For each\ntransaction, MSV is the variable indicating the online book retailer\nwhere the purchase was made. dom(MSV) may be {Amazon,\nBarnes&Noble, Ebay} and P is the price of the book purchased.\nFor a specific v\ndom(MSV) a market share statistic can be\ncomputed as described below. Market share rules have the\nfollowing form:\n\nX\nmarketshare\nv\n\n= msh, support = sup\n(2.3)\nwhere X is an itemset consisting of attributes in {A\n1\n, A\n2\n,..., A\nn\n}\nand marketshare\nv\nis a statistic that represents the market share of a\nspecific v\ndom(MSV). This is computed as follows. Let D\nX\n\nrepresent the subset of transactions satisfying X and D\nX, MSV=v\n\n\n1\n\nIn association rules, support is the number of transactions satisfying both\nLHS and RHS of a rule. In SQ rules, since the RHS is not an itemset, we\ndefine support as the number of transactions satisfying the LHS of a rule\nonly.\n2\nThe provider, comScore Networks categorizes each purchase into\ncategories such as \"book\", \"travel\" and \"consumer electronics\". Hence\nwe can generate datasets in which all transactions represent purchases in\na single category, and this helps in the generation of market share rules\nrepresenting specific categories.\n375\nResearch Track Paper\nrepresent the subset of transactions satisfying (X\nMSV = v).\nThen marketshare\nv\nis computed as sum(P, D\nX, MSV=v\n) / sum(P, D\nX\n),\nwhere sum(P, D) is the sum of all the values of attribute P in the\ntransactions in D.\nMarket share rules naturally occur in various applications,\nincluding online purchases at various Web sites, sales\napplications, and knowledge management applications. The\nexamples presented in the introduction are real market share rules\ndiscovered in online purchase data. The following additional\nexamples illustrate the versatility and usefulness of market share\nrules.\n\n\nWithin a specific product category (e.g. shoes) Footlocker\nsells competing brands of shoes. In their transaction data, the\nbrand of the shoe can be the MSV and the purchase price is\nP.\n\n\nConsider a dataset of patents associated with some area (e.g.\nhard disks). 
Each record may consist of several attributes\ndescribing a patent, including one attribute (MSV) which\nrepresents the organization to which the patent belongs and\nanother attribute that is always 1 (representing P and\nindicating a granted patent) in the data. For a specific\norganization, e.g. IBM, market share rules will represent the\npercentage of patents that belong to IBM under some\nconditions involving other attributes of the patent.\nDefinition 2.1 differs from the definition of quantitative rule [2,\n33] as follows. First, it is not limited to mean and variance\nstatistics and assumes a much broader class of statistics, including\nthe market share statistics. Second, unlike quantitative rules, the\nstatistic of interest in the RHS of a rule can be computed based on\nmultiple attributes.\nDefinition 2.2 (Significant SQ rule). For a given significance\nlevel\n\n\n(0, 1), let (stat\nL\n, stat\nH\n) be the (1\n) confidence interval\nfor a desired statistic, where this confidence interval represents\nthe range in which the statistic can be expected by chance alone.\nAn SQ rule X\n\nf(D\nX\n) = statistic, support = sup is significant if\nstatistic lies outside the range (stat\nL\n, stat\nH\n).\n\nThe main objective of this paper is to discover all significant SQ\nrules. The first challenge in learning significant SQ rules is in\nconstructing a confidence interval for the desired statistic such\nthat this interval represents a range of values for the RHS statistic\nthat can be expected by chance alone. In the next section we\npresent an algorithm for learning these confidence intervals.\n\nCOMPUTING CONF INTERVALS\nThe first question that needs to be addressed is what is meant by\n\"a range for the statistic that can be expected by chance alone\". In\nthis section we start by addressing this question and outline a\nprocedure by which such a range can be computed. Next we will\npoint out the computational challenge in implementing such a\nprocedure for learning these intervals for several SQ rules and\nthen outline three observations that will substantially help address\nthe computational problems. Based on these observations we\npresent a resampling-based algorithm for computing the\nconfidence intervals.\n3.1\n\nMotivation and outline of a procedure\nFor a given SQ rule, the desired confidence interval theoretically\nrepresents the range in which the statistic can be expected when\nthere is no fundamental relationship between the LHS of the rule\nand the statistic. More precisely, since the statistic is computed\nfrom the values of the B attributes, the confidence interval\nrepresents the range in which the statistic can be expected when\nthe A attributes are truly independent of the B attributes.\nWithout making any parametric distributional assumptions, such a\nconfidence interval can be generated using the classical nonparametric\ntechnique of permutation. Indeed permutation-based\napproaches have been commonly used to generate confidence\nintervals in the statistics literature [16]. If R is the set of all\nattributes in a dataset, the basic idea in permutation is to create\nmultiple datasets by randomly permuting the values of some\nattributes R\ni\n\nR. 
Such a permutation would create a dataset in\nwhich R\ni\nis independent of (R R\ni\n), but would maintain the\ndistributions of R\ni\nand (R R\ni\n) in the permutation dataset to be the\nsame as the distributions of these attributes in the original dataset.\nTable 3.1 illustrates one example of a permutation dataset D\nin\nwhich the B attributes are randomly permuted. Since a desired\nstatistic can be computed on each permutation dataset, a\ndistribution for the statistic can be computed based on its values\nfrom the multiple permutation datasets. A confidence interval can\nthen be computed from this distribution.\nTable 3.1 Dataset permutation\nOriginal dataset D:\n\nPermutation dataset D\n:\nA\n1\n\nA\n2\n\nB\n1\n\nB\n2\n\nA\n1\n\nA\n2\n\nB\n1\n\nB\n2\n\n1 2 3 8\n1 2 5 6\n1 3 5 6\n1 3 7 4\n2 3 7 4\n\n\n2 3 3 8\nAs mentioned above, this is a commonly used procedure in nonparametric\nstatistics. The reason this procedure makes sense is as\nfollows. Even if there is a relationship between the LHS of an SQ\nrule and the statistic on the RHS, by holding the A attributes fixed\nand randomly re-ordering the values of the B attributes the\nrelationship is destroyed and the A attributes and B attributes are\nnow independent of each other. Repeating this procedure many\ntimes provides many datasets in which the A attributes and B\nattributes are independent of each other, while maintaining the\ndistributions of the A and B attributes to be the same as their\ndistributions in the original dataset. The values for the statistic\ncomputed from the many permutation datasets is used to construct\na distribution for the statistic that can be expected when the A\nattributes are truly independent of the B attributes.\nSpecifically, for the same itemset X, compare the following two\nSQ rules in D and D\n,\n\nD: X\nf(\nX\nD\n) = stat\nD\n, support = sup\nD\n(3.1)\n\nD\n: X f(\nX\nD ) = stat\nD\n, support = sup\nD\n(3.2)\nFirst note that the supports of the rules are the same since the\nnumber of records satisfying X in the permutation dataset is the\nsame as the original dataset. We will use this observation to build\na more efficient method for computing confidence intervals\n376\nResearch Track Paper\nshortly. A confidence interval for the rule in (3.1) can be\ncomputed using the following nave procedure.\n1.\n\nCreate permutation dataset D\nfrom the original dataset D\nand compute stat\nD\n\n(as mentioned earlier in Section 2, the\nfunction f computes this number based on the records\nsatisfying X).\n2.\n\nRepeat step 1 N\nperm\n> 1000 times\n3\n, sort all the N\nperm\nstat\nD\n\n\nvalues in an ascending order (stat\nD\n-1\n, stat\nD\n-2\n,..., stat\nD\n-Nperm\n)\nand let the\n/2\nth\nand (1\n/2)\nth\npercentiles\n4\nfrom this list be\nstat\nD\n-L\nand stat\nD\n-H\n. The N\nperm\nvalues computed above\nrepresents a distribution for the statistic that can be expected\nby chance alone, while the percentile values from this\ndistribution determine a specific confidence interval. (Below\nwe use the terms \"distribution\" and \"confidence interval\"\nfrequently.)\n3.\n\nThe (1\n) confidence interval for the SQ rule in Equation\n(3.1) is (stat\nD\n-L\n, stat\nD\n-H\n).\n3.2\n\nComputational challenges and solutions\nComputing these confidence intervals for multiple candidate SQ\nrules creates several computational problems which we will\naddress in this section. 
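The naive procedure above can be sketched in a few lines of Python. This is only an illustration of the permutation idea, reusing the hypothetical sq_rule_rhs helper from the earlier sketch, not the implementation used in the paper: the B attributes are permuted jointly while the A attributes stay fixed, and the alpha/2 and (1 - alpha/2) percentiles of the resulting statistics give the confidence interval.

```python
import numpy as np

def naive_permutation_ci(D, X, f, B_cols, n_perm=1999, alpha=0.05, seed=0):
    """Naive CI for the SQ rule X -> f(D_X): permute the B attributes n_perm
    times, recompute the statistic on the records satisfying X each time, and
    take the alpha/2 and (1 - alpha/2) percentiles of the resulting values."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_perm):
        D_perm = D.copy()
        # permute the rows of the B attributes jointly; the A attributes stay fixed
        D_perm[B_cols] = D[B_cols].to_numpy()[rng.permutation(len(D))]
        stat_perm, _ = sq_rule_rhs(D_perm, X, f)   # statistic on the permuted data
        stats.append(stat_perm)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```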
For example, if we need to test 10,000\npotential significant rules (which is a reasonably small number for\ndata mining tasks), then we would need to repeat the above steps\n10,000 times, and this means generating permutation datasets\n10,000\nN\nperm\n> 10\n7\ntimes and to compute the desired statistic in\neach permutation dataset.\nThe following observations substantially reduce the\ncomputational complexity of the procedure.\n1. Sampling can be used instead of creating permutation datasets.\nFor the SQ rule in Equation (3.1), computing stat\nD\n\non a\npermutation dataset is really equivalent to computing stat\nD\n\nbased\non a random sample of sup\nD\nrecords in D. This is the case since\nnone of the A attributes play a role in the computation of the\nstatistic. Permuting the dataset, identifying the (sup\nD\n) records\nwhere X holds, and then computing the statistic on this subset\nachieves the same effect as picking a random sample of sup\nD\n\nrecords from D and computing the statistic on this random subset.\nHence to determine the confidence interval for the SQ rule in\nEquation (3.1), instead of permuting the dataset N\nperm\ntimes, it is\nenough to sample sup\nD\nrecords from D for N\nperm\ntimes.\n2. Some of the candidate SQ rules have the same support values\nas other rules. Based on this observation, confidence intervals for\ntwo SQ rules with the same support can be approximated by the\nsame interval. This is the case since for a given rule the interval is\ngenerated by sampling sup\nD\nrecords many times and if another\nrule has support = sup\nD\nthen the interval for that rule will be\nsimilar if the same procedure is repeated (it will not be exactly the\n3\n\nN\nperm\nis typically a big number. If we let N\nperm\n= N!, which is the number\nof all possible permutations, we will be implementing a Monte Carlo\ntest. On large datasets, such a test is impractical. For a statistic like\nmarket share whose value is limited by 0 and 1, N\nperm\n> 1000 makes the\ndistribution precise to the third decimal place. In our experiments, N\nperm\n\n= 1999.\n4\n\nSince we do not have any prior assumption about the expected value of\nthe statistic we use a two sided p-value.\nsame because of randomization). Therefore, fewer confidence\nintervals need to be generated.\n3. It is adequate to generate a fixed number of intervals,\nindependent of the number of rules considered. We observe that\nthe interval for an SQ rule with support = sup\nD\ncan be\napproximated by an interval computed by sampling sup\nE\nrecords\nwhere sup\nE\nis \"reasonably close\" to sup\nD\n. This is a heuristic that\nwe use to considerably reduce the complexity of the procedure.\nDenote N\nRule\nas the number of rules to be tested. If all rules have\ndifferent support values, we need to construct N\nRule\ndistributions.\nInstead, we would construct a fixed number N\ndist\ndistributions,\nsuch that for rule \"X\nf(D\nX\n) = statistic, support = sup\", statistic\nis compared with the distribution that is constructed by sampling\nthe closest number of transactions to sup. 
This heuristic is more\nmeaningful when we consider support in terms of percentage of\ntransactions satisfying LHS of a rule, which is a number between\n0 and 1.\n3.3\n\nAlgorithm CIComp\nBased on the above observations, we present in Figure 3.1\nalgorithm CIComp for constructing N\ndist\ndistributions and\ndetermining the (1\n) confidence intervals for a given\nsignificance level.\nInput: dataset\nD\nwith\nN\ntransactions, the number of\ndistributions\nN\ndist\n, the number of points in each\ndistribution\nN\nperm\n, a function\nf\nthat computes the desired\nstatistic, and significance level\n\n.\nOutput:\nN\ndist\ndistributions and significance thresholds.\n1\nfor (\ndist\n= 1;\ndist\n\n\n\nN\ndist\n;\ndist\n++ ) {\n2\n\nN\nsample\n=\ndist\n/\nN\ndist\n\n\n\nN\n;\n3\nfor (\nperm\n= 1;\nperm\n<\nN\nperm\n;\nperm\n++ ) {\n4\n\nS\n=\nN\nsample\ntransactions from\nD\nsampled without\nreplacements\n5\n;\n5\nstat[\ndist\n][\nperm\n] =\nf\n(\nS\n);\n6\n}\n7\nsort(stat[\ndist\n]);\n8\nLowerCI[\ndist\n] = stat[\ndist\n][(\nN\nperm\n+ 1)\n\n\n\n/2];\n9\nUpperCI[\ndist\n] = stat[\ndist\n][(\nN\nperm\n+ 1)\n\n(1\n\n/2)];\n10 }\n11 Output stat[][], LowerCI[], UpperCI[]\nFigure 3.1 Algorithm CIComp\nIn the above algorithm, N\ndist\n, N\nperm\n, and\nare user-defined\nparameters.\nis usually chosen to be 5%, 2.5% or 1%. For N\ndist\n\nand N\nperm\n, the larger they are, the more precise the distributions\nwill be. Let N = 1000, N\ndist\n= 100, N\nperm\n= 999, and\n= 5%. We\nuse these numbers as an example to explain the algorithm. For\nstep 2, the first distribution corresponds to N\nsample\n= dist/N\ndist\n\n\nN\n= 1/100\n1000 = 10 transactions. Step 3 to 6 computes N\nperm\n=\n999 statistics for 10 randomly sampled transactions from dataset\nD. Then we sort these 999 statistics and pick\n/2 and 1 /2\npercentiles, which are the 25\nth\nand 975\nth\nnumbers in the\ndistribution, as the lower and upper thresholds for the (1\n)\nconfidence interval. Steps 2 through 9 are repeated N\ndist\n= 100\ntimes to get the desired number of distributions and confidence\nintervals.\n5\nIf the sampling is done with replacement then the interval will be the\nbootstrap confidence interval. The two intervals will essentially be the\nsame when the support of the itemset is small.\n377\nResearch Track Paper\nThe computation complexity of the algorithm in Figure 3.1 is O(N\nN\nperm\n\nN\ndist\n), whereas the complexity of nave method is O(N\n\nN\nperm\n\nN\nrule\n). Note that N\ndist\ncan be fixed to a reasonable small\nnumber, e.g. 100, whereas N\nrule\nis the number of rules that are\nbeing tested and can easily be orders of magnitude more than\nN\ndist\n.\nDISCOVERING SQ RULES\nGiven the distributions and confidence intervals, discovering all\nsignificant statistical rules is straightforward. 
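For reference, here is a minimal Python rendering of the CIComp procedure of Figure 3.1, which produces the distributions and thresholds that the rule discovery step below consumes. It assumes the transactions are in a pandas DataFrame D and that f is the statistic function as in the earlier sketches; it is a sketch of the stated steps, not the authors' implementation.

```python
import numpy as np

def ci_comp(D, f, n_dist=100, n_perm=999, alpha=0.05, seed=0):
    """Sketch of CIComp (Figure 3.1): build n_dist null distributions by
    sampling without replacement, and record the (1 - alpha) CI thresholds."""
    rng = np.random.default_rng(seed)
    N = len(D)
    stat = np.empty((n_dist, n_perm))
    lower_ci = np.empty(n_dist)
    upper_ci = np.empty(n_dist)
    for d in range(1, n_dist + 1):
        n_sample = max(1, round(d / n_dist * N))   # support level for this distribution
        for p in range(n_perm):
            S = D.sample(n=n_sample, replace=False,
                         random_state=int(rng.integers(1 << 31)))
            stat[d - 1, p] = f(S)                  # statistic on the random subset
        stat[d - 1].sort()
        lower_ci[d - 1] = stat[d - 1, int((n_perm + 1) * alpha / 2) - 1]        # e.g. 25th value
        upper_ci[d - 1] = stat[d - 1, int((n_perm + 1) * (1 - alpha / 2)) - 1]  # e.g. 975th value
    return stat, lower_ci, upper_ci
```

The three outputs correspond to stat[][], LowerCI[] and UpperCI[] in Figure 3.1.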
Algorithm\nSigSQrules is presented in Figure 4.1.\nInput: dataset\nD\nwith\nN\ntransactions, sets of attributes\nA and B, N\ndist\n, stat[][], LowerCI[], and UpperCI[] from\nalgorithm\nCIComp\n, a function\nf\nthat computes the desired\nstatistic, minimum support\nminsup\nand a large itemset\ngeneration procedure\nlargeitemsets\n.\nOutput: set of\n\nSignificant rules, sigrules.\n1\nL =\nlargeitemsets\n(\nD\n,\nA\n,\nminsup\n) # generates large\nitemsets involving attributes in A\n2\nsigrules = {}\n3\nforall (itemsets\nx\n\n\nL) {\n4\n\nx.stat\n=\nf\n(\nD\nx\n) // statistic computed on\ntransactions satisfying\nx\n\n5\n\ndist\n= round(\nsupport(x)\n/\nN\n\n\n\nN\ndist\n)\n6\nif\nx\n.stat\n\n\n(LowerCI[\ndist\n], UpperCI[\ndist\n]) {\n//\nx\n\n\n\nf\n(\nD\nx\n)\n\n=\nx\n.\nstat\nis significant\n7\n\nx.pvalue\n= 2\n\npercentile of\nx.stat\nin\nstat[\ndist\n][1..\nN\nperm\n]\n8\nsigrules = sigrules\n\n{\nx\n\n\n\nf\n(\nD\nx\n)\n\n=\nx\n.\nstat\n,\nsupport\n=\nsupport\n(\nx\n)}\n9\n}\n10 }\nFigure 4.1 Algorithm SigSQrules\nGiven N\ndist\ndistributions constructed from the algorithm CIComp,\nwe use the above algorithm to discover all significant SQ rules.\nWe continue to use the example N = 1000, N\ndist\n= 100, and N\nperm\n=\n999 to describe the steps in Figure 4.1. Note that the attributes in\nA represent the attributes in the dataset that are used to describe\nsegments for which statistics can be computed. Step 1 uses any\nlarge itemset generation procedure in rule discovery literature to\ngenerate all large itemsets involving attributes in A. The exact\nprocedure used will depend on whether the attributes in A are all\ncategorical or not. If they are, then Apriori algorithm can be used\nto learn all large itemsets. If some of them are continuous then\nother methods such as the ones described in [31] can be used.\nStep 4 computes the statistic function for each large itemset, x. In\nstep 5, we find out which distribution is to be used for\nsignificance test. For example, if support(x) = 23, then\nsupport(x)/N\nN\ndist\n= (23/1000)\n100 = 2.3, and hence dist will\nbe round(2.3) = 2. We would compare x.stat with its\ncorresponding confidence interval (LowerCI[2], UpperCI[2]) in\nstep 6. If x.stat is outside of the confidence interval, the rule is\nsignificant, and we use step 7 to calculate its 2-side p-value. If\nx.stat is the qth percentile, the 2-side p-value is 2\nmin(q%, 1\nq%). The p-value is not only a value to understand how\nsignificant a rule is, but is also useful for determining the false\ndiscovery rate in Section 5. Note that the confidence interval used\nto test significance of a rule is approximate since we do not\ncompute this interval for the exact value of the support of this\nrule. Instead we use the closest interval (which was pre-computed\nas described in Section 3.2) corresponding to this support value.\nIn future research we will quantify the effects of this\napproximation.\nWe would also like to point out that in many cases (see below) the\ncomputation of the statistic can be done efficiently within the\nitemset generation procedure (largeitems) itself. This can be used\nto modify the algorithm to make it more efficient once a specific\nitemset generation procedure is used. 
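As an illustration of steps 4 through 8 of Figure 4.1, the sketch below checks each large itemset against the pre-computed distribution closest to its support. The itemset generation procedure is treated as a black box, and the helpers (sq_rule_rhs and the arrays returned by the CIComp sketch) are the hypothetical ones introduced earlier, not code from the paper.

```python
def sig_sq_rules(D, large_itemsets, f, stat, lower_ci, upper_ci,
                 n_dist=100, n_perm=999):
    """For each large itemset x, compare f(D_x) against the pre-computed null
    distribution whose support level is closest to support(x) (steps 4-8)."""
    N = len(D)
    sigrules = []
    for x in large_itemsets:                     # x: {attribute: value} conditions
        x_stat, support = sq_rule_rhs(D, x, f)
        d = min(n_dist, max(1, round(support / N * n_dist)))   # closest distribution
        lo, hi = lower_ci[d - 1], upper_ci[d - 1]
        if not (lo < x_stat < hi):               # significant: outside the interval
            q = (stat[d - 1] < x_stat).mean()    # percentile of x_stat in the null
            p_value = 2 * min(q, 1 - q)          # two-sided p-value (step 7)
            sigrules.append((x, x_stat, support, p_value))
    return sigrules
```

As pointed out above, this loop can often be merged into the itemset generation procedure itself, so that f is computed incrementally rather than from scratch for every x.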
This is the case if the\nfunction f that computes the statistic on transactions T\n1\n, T\n2\n,..., T\ns\n\nis a recursive function on s, that is,\n\nf(T\n1\n, T\n2\n,..., T\ns\n) = g(f(T\n1\n, T\n2\n,..., T\ns-1\n), f(T\ns\n), s)\n(4.1)\nMany statistics, such as mean and market share, are recursive. For\nexample, Mean(T\n1\n, T\n2\n,..., T\ns\n) = [Mean(T\n1\n, T\n2\n,..., T\ns 1\n)\n(s 1) +\nMean(T\ns\n)] / s.\nIn this section we presented an algorithm SigSQrules for\ngenerating significant SQ rules. However, as mentioned in the\nintroduction, for any given level of significance for a rule, the fact\nthat thousands of rules are evaluated for their significance makes\nit possible to discover a certain number of false rules. This is the\nwell known multiple hypothesis testing problem [4]. While it is\ndifficult to eliminate this problem, it is possible to quantify this\neffect. In the next section we discuss the problem of false\ndiscovery in detail and present an algorithm for determining the\nfalse discovery rate associated with the discovery of significant\nSQ rules.\n\nFALSE DISCOVERY OF SQ RULES\nAs mentioned above, when multiple rules are tested in parallel for\nsignificance, it is possible to learn a number of \"false\" rules by\nchance alone. Indeed, this is a problem for many rule discovery\nmethods in the data mining literature. The false discovery rate\n(FDR) is the expected percentage of false rules among all the\ndiscovered rules. Prior work in statistics has taken two approaches\nto deal with the multiple hypothesis testing problem [4, 17, 34].\nOne approach attempts to lower the false discovery rate by\nadjusting the significance level at which each rule is tested. As we\nwill describe below, this approach is not suitable for data mining\nsince it will result in very few rules being discovered. The second\napproach assumes that a given number of false discoveries should\nbe expected, and focuses on estimating what the false discovery\nrate (FDR) exactly is. This is more useful for data mining, since it\npermits the discovery of a reasonable number of rules, but at the\nsame time computes a FDR that can give users an idea of what\npercentage of the discovered rules are spurious. In this section, we\nfirst review key ideas related to the multiple hypotheses testing\nproblem and then present a nonparametric method to determine\nfalse discovery rate for our procedure.\nFor significance tests for a single rule, the significance level\nis\ndefined as the probability of discovering a significant rule when\nthe LHS and RHS of the rule are actually independent of each\nother; in other words,\nis the probability of a false (spurious)\ndiscovery. For example, on a random dataset where all attributes\nare independent, if we test 10,000 rules, then by definition of\n,\nwe expect 10,000\n5% = 500 false discoveries by pure chance\nalone. When some of the attributes are dependent on each other,\nas is the case for most datasets on which rule discovery methods\nare used, the above approach cannot be used to get an expectation\nfor the number of false rules. In such cases, two approaches are\n378\nResearch Track Paper\npossible. In statistics, a measure called familywise error rate\n(FWER) is defined as the probability of getting at least one false\nrule output. Most conventional approaches in statistics that deals\nwith the multiple hypotheses testing problem use different\nmethods to control FWER by lowering significance level for\nindividual rule,\n\nind\n. 
For example, Bonferroni-type procedures\nwould have\n\nind\n=\n/ the number of rules tested, which is 5% /\n10,000 = 5\n10\n-6\n. However, when the number of hypotheses\ntested is large (as is the case in data mining algorithms), extreme\nlow\nvalue, e.g. 5 10\n-6\n, will result in very few rules discovered.\nThe other type of approach, as taken recently in [4] estimates the\nfalse discovery rate (FDR), the expectation of the proportion of\nfalse discoveries in all discoveries.\nTable 5.1 Confusion matrix of the number of rules\nNon-Significant\nRules\nSignificant\nRules\nLHS independent of RHS\na b\nLHS dependent on RHS\nc d\nIn Table 5.1, the number of rules tested is (a + b + c + d), out of\nwhich (a + b) is the number of rules where the LHS of the rules is\ntruly independent of the RHS, and (c + d) is the number of rules\nwhere there is a real relationship between the LHS and the RHS\nof the rules. The columns determine how many tested rules are\noutput as significant or non-significant. The two terms FDR and\nFWER can be defined precisely as FDR = Exp(b / b + d) and\nFWER = Prob(b >0).\nWe adopt FDR estimation in this section because it effectively\nestimates false discoveries without rejecting too many discovered\nrules. However, the method proposed in the literature [4, 7, 35]\nfor FDR cannot be used for large scale rule discovery because of\nthe following two reasons: first, the assumption that statistics of\nthe rules tested are independent from each other (which some of\nthe approaches use) is not true. For example, rules A\n1\n= 1\n\nMean(D\nA1 = 1\n) and A\n1\n= 1\nA\n2\n= 2\nMean(D\nA1 = 1\nA2 = 2\n) are not\nindependent. In fact a large number of rules are related to each\nother in rule discovery because their LHS share common\nconditions and RHS come from the same attributes. Second,\nmethods in statistics draw conclusions based on the number of\nrules tested (= a + b + c + d), however, as indicated in [25], a and\nc are unknown values due to the filtering by support constraint.\nWithout making any assumptions, below we present another\npermutation-based method to estimate the FDR for our procedure\nfor learning significant SQ rules.\nDenote N\nsig\n(\n) to be the number of significant rules discovered\nfrom dataset D when the significant level =\n. In Table 5.1,\nN\nsig\n(\n) = b + d. Similar to the procedure described in Section 3,\nby keeping the values in attributes A intact and permuting the B\nattributes, we get a permutation dataset D\n. Since we remove any\nrelationship between A and B attributes by this procedure, all the\nLHS and RHS statistic of each rule tested in D\nare independent.\nIf we apply the significant rule discovery algorithm SigSQrules,\nthe number of significant rules discovered from D\nwhen the\nsignificant level =\nwill be one instance of false discovery, that\nis, N\nsig-perm\n(\n) = b. It is easy to see that by creating multiple\npermutation datasets, we can estimate the expectation of the\nnumber of false discoveries and thus compute a false discovery\nrate FDR = Exp(N\nsig-perm\n(\n)) / N\nsig\n(\n). We will describe the steps\nhow FDR(\n) can be estimated in detail in the Appendix.\nIn this section, we described the problem of multiple hypotheses\ntesting and pointed out that for any given significance level a\ncertain number of significant SQ rules will be discovered by\nchance alone. We then described an intuitive permutation based\nprocedure to compute the false discovery rate. 
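The estimate FDR = Exp(N_sig-perm(alpha)) / N_sig(alpha) is straightforward to compute once the rule discovery step is in place. The sketch below, built on the hypothetical helpers from the earlier sketches, averages the number of significant rules found on a few permutation datasets; it reuses the large itemsets and the null distributions across permutations, since permuting the B attributes leaves the A attributes and the multiset of B values unchanged.

```python
import numpy as np

def estimate_fdr(D, large_itemsets, f, B_cols, alpha=0.05, n_perm_datasets=9, seed=0):
    """FDR = E[#significant rules on permuted data] / #significant rules on D.
    Permuting the B attributes breaks any LHS/RHS relationship, so every rule
    found significant on a permuted dataset is a false discovery."""
    rng = np.random.default_rng(seed)
    stat, lo, hi = ci_comp(D, f, alpha=alpha)          # null distributions (reusable)
    n_sig = len(sig_sq_rules(D, large_itemsets, f, stat, lo, hi))
    false_counts = []
    for _ in range(n_perm_datasets):
        D_perm = D.copy()
        D_perm[B_cols] = D[B_cols].to_numpy()[rng.permutation(len(D))]
        false_counts.append(len(sig_sq_rules(D_perm, large_itemsets, f, stat, lo, hi)))
    return np.mean(false_counts) / max(n_sig, 1), n_sig
```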
From a practical\npoint of view the procedure described above can be used in\nconjunction with SigSQrules to discover a set of significant SQ\nrules and provide a number representing the percentage of these\nrules that are likely to be spurious.\n\nEXPERIMENTS\nIn this section we present results from learning significant market\nshare rules, a specific type of SQ rules. We started with user-level\nonline purchase data gathered by comScore Networks, a market\ndata vendor. The data consist of 100,000 users' online browsing\nand purchase behavior over a period of six months. The market\ndata vendor tracks all online purchases explicitly by parsing the\ncontent of all pages delivered to each user. Starting from the raw\ndata we created a dataset of purchases where each transaction\nrepresents a purchase made at an online retailer. Attributes of the\ntransaction include user demographics, the site at which the\npurchase was made, the primary category (e.g. books, home\nelectronics etc) of the site, the product purchased and the price\npaid for the product. Within a specific category, e.g. books,\nsignificant market share rules would be particularly interesting to\ndiscover. We selected many datasets with purchases belonging to\neach specific category and applied our method to learn several\ninteresting significant market share rules. For space limitations we\ndo not present all the results, but report the results for learning\nmarket share rules for the top three retailers in the online book\nindustry. Specifically the dataset consists of all transactions in\nwhich a book was purchased at any site and we use the methods\npresented in the paper to learn market share rules for the top 3\nsites Amazon.com, Barnes&Noble and Ebay. The total number\nof transactions was 26,439 records and we limit the rules to\nhaving at most five items on the LHS of a rule.\n6.1\n\nRule Examples\nAmong the most significant market share rules (as determined by\nthe p-values of these rules), we picked four rules to list that were\nparticularly interesting for each online retailer.\nAmazon.com\n(1) Education\n\n=\n\nHigh\n\nSchool\n\n\n\nmarketshare\nAmazon\n=\n\n42.72%,\nsupport\n\n=\n\n20.7%, CI\n\n=\n\n(46.07%,\n\n50.92%)\n(2) Region\n\n=\n\nWest\n\n\n\nHousehold\n\nSize\n\n=\n\n2\n\nmarketshare\nAmazon\n=\n57.93%, support\n\n=\n\n7.9%, CI\n\n=\n\n(44.36%, 52.50%)\n(3) Region\n\n=\n\nSouth\n\n\n\nHousehold Size\n\n=\n\n4\n\nmarketshare\nAmazon\n=\n38.54%, support\n\n=\n\n5.4%, CI\n\n=\n\n(43.76%, 53.39%)\n(4) 35\n\n<\n\nHousehold\n\nEldest\n\nAge\n\n<\n\n40\n\n\n\nISP\n\n=\n\nBroadband\n\n\n\nmarketshare\nAmazon\n=\n\n60.52%,\n\nsupport\n\n=\n\n4.3%,\n\nCI\n\n=\n\n(42.88%, 53.99%)\nBarnesandnoble.com\n(1) Education\n\n=\n\nGraduate\n\n\n\nHousehold\n\nSize\n\n=\n\n2\n\n\nmarketshare\nBN\n=\n\n13.12%,\n\nsupport\n\n=\n\n6.0%,\n\nCI\n\n=\n\n(16.81%, 25.68%)\n(2) 50 < Household Eldest Age < 55\n\nIncome > 100K\n\n\nmarketshare\nBN\n=\n\n30.28%,\n\nsupport\n\n=\n\n4.2%,\n\nCI\n\n=\n\n(16.05%, 26.79%)\n(3) Region = South\n\nHousehold Size = 3\n\nChild = Yes\n\n\nmarketshare\nBN\n=\n\n13.27%,\n\nsupport\n\n=\n\n4.2%,\n\nCI\n\n=\n\n(16.68%, 26.10%)\n(4) Region = South\n\n60 < Household Eldest Age < 65\n\n\nmarketshare\nBN\n=\n\n39.84%,\n\nsupport\n\n=\n\n2.8%,\n\nCI\n\n=\n\n(15.55%, 27.10%)\n379\nResearch Track Paper\nEbay.com\n(1) Education = College\n\nRegion = South\n\n\nmarketshare\nEbay\n=\n\n8.28%,\n\nsupport\n\n=\n\n6.9%,\n\nCI\n\n=\n\n(11.70%, 17.71%)\n(2) Education = College\n\nRegion = North 
Central\n\n\nmarketshare\nEbay\n=\n\n21.77%,\n\nsupport\n\n=\n\n4.0%,\n\nCI\n\n=\n\n(11.05%, 18.29%)\n(3) Region\n\n= South\n\nIncome > 100K\n\nmarketshare\nEbay\n=\n4.83%,\n\nsupport\n\n=\n\n2.9%,\n\nCI\n\n=\n\n(9.54%, 20.46%)\n(4) 18 < Household Eldest Age < 20\n\nmarketshare\nEbay\n=\n27.50%, support = 2.8%,\n\nCI\n\n=\n\n(10.12%, 19.70%)\nRule (4) for Amazon.com indicates that it is doing particularly\nwell in households with middle-aged heads that have broadband\naccess. The market share for Amazon.com in this segment lies\nsignificantly outside the confidence interval computed for the\nrule. On the other hand, rule (1) for Barnesandnoble.com shows\nthat they are doing poorly selling to a segment which perhaps\nrepresents well educated couples. Given that this is a large\nsegment (support = 6%), this rule suggests that they could try and\nexamine why this is the case and how they can achieve greater\npenetration in this segment. In Ebay's case, all four rules are very\ninteresting. Rule (4) indicates that they have high market share\namong teenagers, while rule (3) describes a segment they clearly\nhave trouble penetrating. For many other categories too (travel\nand home electronics in particular) the significant SQ rules that\nwe learned were highly interesting. As these examples suggest,\nthese rules can be insightful, identify interesting segments and\nhave significant business potential.\n6.2\n\nVarying support and significance levels\nTo test how the methods perform as the minimum support and\nsignificance levels vary, for one site we generated significant SQ\nrules for many values of the minimum support and significance\nlevel parameters. Figures 6.1 and 6.2 show how the number of\nsignificant rules and the false discovery rate vary with support.\nAs the minimum support threshold is lowered the number of\nsignificant SQ rules discovered increases. However the FDR\nincreases as the support threshold is lowered, suggesting a\ntradeoff between discovering many significant rules while\nkeeping the FDR low. A practical outcome is that it may be\ndesirable to have higher minimum supports (to keep FDR low),\nbut not too high that very few rules are discovered. Figures 6.3\nand 6.4 illustrate a similar tradeoff for the significance level\nparameter. As\ndecreases FDR is lower, but this results in fewer\nnumber of significant rules being discovered. Again, the\nimplication is that it may be desirable to have a low\n(to keep\nFDR low) but not too low that very few rules are discovered.\n0\n500\n1000\n1500\n2000\n2500\n0.0%\n2.0%\n4.0%\n6.0%\n8.0%\n10.0%\nsupport\n#\nof\ns\ni\ngn\ni\nf\ni\nc\nan\nt\nr\nu\nl\ne\ns\n= 10%\n= 5%\n= 2.5%\n= 1%\n0.0%\n5.0%\n10.0%\n15.0%\n20.0%\n25.0%\n30.0%\n35.0%\n0.0%\n2.0%\n4.0%\n6.0%\n8.0%\n10.0%\nsupport\nFD\nR\n= 10%\n= 5%\n= 2.5%\n= 1%\n\n\nFigure 6.1. Effect of support on # of rules\nFigure 6.2. Effect of support on FDR\n0.00%\n5.00%\n10.00%\n15.00%\n20.00%\n25.00%\n30.00%\n0.00%\n2.00%\n4.00%\n6.00%\n8.00%\n10.00%\nsignificance level\n\nFD\nR\nsupport = 1%\nsupport = 2%\nsupport = 5%\nsupport = 10%\n\n0\n200\n400\n600\n800\n1000\n1200\n1400\n0.0%\n2.0%\n4.0%\n6.0%\n8.0%\n10.0%\nsignificance level\n\n# o\nf\ns\ni\ng\nn\nificant r\nu\nle\ns\nsupport = 1%\nsupport = 2%\nsupport = 5%\nsupport = 10%\n\n\nFigure 6.3. Effect of\non FDR\nFigure 6.4. 
Effect of\non # of rules\n\n380\nResearch Track Paper\n6.3\n\nSummary results for online book retailers\nBased on this general tradeoff we chose minimum support of 2%\nand chose an\nof 2.5% in order to report summary results for the\nthree sites. Table 6.1 summarizes the number of significant rules\ndiscovered and the false discovery rates of the procedure. As the\nvalues in the table and the examples above show, our procedure\ncan be used effectively to learn a good set of significant SQ rules\nwhile keeping the false discovery rates reasonable.\nTable 6.1 Summary of results\nWeb site\nSignificant Rules\nFalse Discovery\nRate\nAmazon 651\n6.30%\nBarnesandnoble 393\n9.67%\nEbay 679 5.60%\nIn this section we first presented compelling examples of rules\ndiscovered that illustrate the potential of learning significant\nmarket share rules. We then examined how the number of\nsignificant rules discovered and the false discovery rate changes\nwith the support and significance level (\n) parameters. The results\nof this analysis suggested a tradeoff between generating\nsignificant rules and keeping the false discovery rate low. Based\non this tradeoff we identified a specific value of the support and\nsignificance parameters and showed the number of rules\ndiscovered for these values.\n\nRELATED WORK\nWe compare our work with the literature based on three aspects:\nrule structure, rule significance, and methodology.\nRule structure. Rule discovery methods on a quantitative dataset\ncan be traced back to [29], where rules of the form x\n1\n< A < x\n2\n\n\ny\n1\n< B < y\n2\nare discovered. [31] extends the structure to be\nconjunctions of multiple conditions on both antecedent and\nconsequent of a rule, and proposes their discovery method based\non the Apriori algorithm [1]. Although rules in [31] are important,\npartitions like y\n1\n< B < y\n2\nfor continuous attributes on the RHS of\na rule only gives partial description of the subset satisfying the\nLHS of the rule and partial descriptions sometimes are\nmisleading. Observing this problem, [2] introduces a new\nstructure where the consequent of a rule is Mean(D\nX\n) or\nVariance(D\nX\n) to summarize the behavior of the subset satisfying\nthe antecedent. [33] further extends the form of the consequent of\nthe rule, such that it can be of Min(D\nX\n), Max(D\nX\n), or Sum(D\nX\n).\nOur rule structure is based on prior work: the antecedent is\nconjunctions of conditions, while the consequent can be any\naggregate function f on multiple attributes to describe the\nbehavior of the subset satisfying the antecedent.\nRule significance. Any combination of attributes with conditions\ncan potentially form a rule. Researchers use different\nmeasurements, e.g. support and confidence, to select only\nimportant rules from all possible rules. Based on the support and\nconfidence framework, many metrics have been developed, such\nas gain [15], conviction [10], unexpectedness [27]. Although\nthese metrics can be generalized to rules where the antecedent and\nconsequent are both conjunctions of the form value\n1\n< Attribute <\nvalue\n2\nfor quantitative datasets, they are not applicable for rules\nwhose consequent is a function, such as Mean(D\nX\n), or in general,\nf(D\nX\n). To solve this non-trivial problem, we use statistical\nsignificance tests to evaluate rules, so that the consequent of a\nrule is not expected by chance alone. In the data mining literature,\nstatistical significance tests are commonly used in many\napplications. 
For example, chi-square (\n\n2\n) is a statistic to test\ncorrelations between attributes in binary or categorical data, and it\nhas been applied to discover correlation rules [9], actionable rules\n[23], and contrast sets [3, 32]. For sparse data, [35, 36] employ\nFisher's Exact Test to detect anomaly patterns for disease\noutbreaks. As mentioned in Section 3, these two tests are special\ncases of our significance test when we apply our significance\ndefinition to categorical data. For quantitative rules in [2], the\nauthors use a standard Z-test to determine the significance of\ninequality of means between a subset D\nX\nand its complement D\nD\nX\n. [33] defines a new measurement, impact, to evaluate\nquantitative rules, where impact can identify those groups that\ncontribute most to some outcome, such as profits or costs. For\nareas other than rule discovery, standard Z-tests with log-linear\nmodels is used in Exploratory Data Analysis for OLAP data cubes\n[30]. Our significance test is different from the above primarily\nbecause (i) our significance definition is applicable to any user-defined\naggregate function f(D\nX\n), and (ii) we using nonparametric\nmethods to construct distributions and confidence intervals, in\nwhich f(D\nX\n) is expected from random effects alone.\nMethodology. Nonparametric statistics is philosophically related\nto data mining, in that both methods typically make no\nassumptions on distributions of data or test statistics. Even with\nknown distribution of a statistic, nonparametric methods are\nuseful to estimate parameters of the distribution [13].\nNonparametric methods are widely used on testing models that\nare built from data: as earliest in [18], the author uses\nrandomization tests to tackle a model overfitting problem; [20]\ncompares bootstrap and cross-validation for model accuracy\nestimation; for decision trees, [14, 26] use permutation tests to\nselect attributes based on 2\n2 contingency tables. Rule discovery\nis to learn local features, which is inherently different from\nmodels. Although we have seen methods using parametric\nhypothesis testing approach to learning rules from dataset [5, 6],\nno prior work has been found on discovering large number of\nrules based on nonparametric significance tests.\nThe problem of multiple hypothesis testing/multiple comparison\nis well known in rule discovery, a good review of which can be\nfound in [19]. On sparse binary data, [25] shows that with proper\nsupport and confidence control, very few false rules will be\ndiscovered. However, rule discovery on quantitative data faces\nmuch more complicated challenges, and conventional p-value\nadjustment methods cannot be directly applied. To solve this\nproblem, we employ false discovery rate [4] metric to estimate the\nnumber of false rules discovered due to testing a large number of\nrules. In data mining, FDR has been shown useful in [7, 36] for\ncategorical data with known number of hypotheses, and we\nextend it to quantitative rules with resampling methods.\nCONCLUSION\nIn this paper we defined a new category of rules, SQ rules, and\nthe significance of SQ rules, on quantitative data. Then we\npresented a permutation-based algorithm for learning significant\nSQ rules. Furthermore, we show how an explicit false discovery\nrate can be estimated for our procedure, which makes the\napproach useful from a practical perspective. 
We presented\nexperiments in which we discovered market share rules, a specific\n381\nResearch Track Paper\ntype of SQ rules, in real online purchase datasets and\ndemonstrated that our approach can be used to learn interesting\nrules from data.\nWe would also like to point out that it is possible to compute the\nfalse discovery rate (FDR) for several possible significance levels\nin an efficient manner (without creating permutation datasets for\neach significance level). Although a detailed presentation of this\nis beyond the scope of this paper, in the appendix we provide an\noverview of how this can be done. One main advantage of being\nable to do this is that significant SQ rules can be discovered at a\nchosen significance level that is computed from some desired\nFDR. Hence rather than just estimating FDR we may be able to\ndiscover significant rules given a specific FDR. However this\nneeds to be studied in greater detail in future work.\n\nREFERENCES\n[1] Agrawal, R. and Srikant, R., Fast Algorithms for Mining\nAssociation Rules, in Proceedings of the 20th International\nConference on Very Large Databases, Santiago, Chile, 1994.\n[2] Aumann, Y. and Lindell, Y., A Statistical Theory for\nQuantitative Association Rules, in Proceedings of The Fifth\nACM SIGKDD Int'l Conference on Knowledge Discovery\nand Data Mining, pp. 261-270, San Diego, CA, 1999.\n[3] Bay, S. D. and Pazzani, M. J., Detecting Change in\nCategorical Data: Mining Contrast Sets, in Proceedings of\nthe Fifth ACM SIGKDD International Conference on\nKnowledge Discovery and Data Mining, pp. 302 - 306, San\nDiego, CA, 1999.\n[4] Benjamini, Y. and Hochberg, Y., Controlling the False\nDiscovery Rate: A Practical and Powerful Approach to\nMultiple Testing, Journal of Royal Statistical Society B, vol.\n57, iss. 1, pp. 289-300, 1995.\n[5] Bolton, R. and Adams, N., An Iterative Hypothesis-Testing\nStrategy for Pattern Discovery, in Proceedings of the Ninth\nACM SIGKDD Int'l Conference on Knowledge Discovery\nand Data Mining, pp. 49-58, Washington, DC, 2003.\n[6] Bolton, R. J. and Hand, D. J., Significance Tests for Patterns\nin Continuous Data, in Proceedings of the 2001 IEEE\nInternational Conference on Data Mining, pp. 67-74, San\nJose, CA, 2001.\n[7] Bolton, R. J., Hand, D. J., and Adams, N. M., Determining\nHit Rate in Pattern Search, in Pattern Detection and\nDiscovery, ESF Exploratory Workshop, pp. 36-48, London,\nUK, 2002.\n[8] Brijs, T., Swinnen, G., Vanhoof, K., and Wets, G., Using\nAssociation Rules for Product Assortment: Decisions Case\nStudy, in Proceedings of the Fifth ACM SIGKDD\nInternational Conference on Knowledge Discovery and Data\nMining, pp. 254-260, San Diego, CA, 1999.\n[9] Brin, S., Motwani, R., and Silverstein, C., Beyond Market\nBaskets: Generalizing Association Rules to Correlations, in\nProceedings of the ACM SIGMOD/PODS '97 Joint\nConference, pp. 265-276, Tucson, AZ, 1997.\n[10] Brin, S., Motwani, R., Ullman, J. D., and Tsur, S., Dynamic\nItemset Counting and Implication Rules for Market Basket\nData, in Proceedings ACM SIGMOD International\nConference on Management of Data (SIGMOD'97), pp. 255-264\n, Tucson, AZ, 1997.\n[11] Clark, P. and Niblett, T., The Cn2 Induction Algorithm,\nMachine Learning, vol. 3, pp. 261-283, 1989.\n[12] Clearwater, S. and Provost, F., Rl4: A Tool for Knowledge-Based\nInduction, in Procs. of the Second International IEEE\nConference on Tools for Artificial Intelligence, pp. 24-30,\n1990.\n[13] Efron, B. and Tibshirani, R. J., An Introduction to the\nBootstrap. 
New York, NY: Chapman & Hall, 1993.\n[14] Frank, E. and Witten, I. H., Using a Permutation Test for\nAttribute Selection in Decision Trees, in Proceedings of 15th\nInt'l Conference on Machine Learning, pp. 152-160, 1998.\n[15] Fukuda, T., Morimoto, Y., Morishita, S., and Tokuyama, T.,\nData Mining Using Two-Dimensional Optimized\nAssociation Rules: Scheme, Algorithms and Visualization, in\nProceedings of the 1996 ACM SIGMOD International\nConference on Management of Data (SIGMOD'96), pp. 13-23\n, Montreal, Quebec, Canada, 1996.\n[16] Good, P., Permutation Tests: A Practical Guide to\nResampling Methods for Testing Hypotheses - 2nd Edition.\nNew York: Springer, 2000.\n[17] Hsu, J. C., Multiple Comparisons - Theory and Methods.\nLondon, UK: Chapman & Hall, 1996.\n[18] Jensen, D., Knowledge Discovery through Induction with\nRandomization Testing, in Proceedings of the 1991\nKnowledge Discovery in Databases Workshop, pp. 148-159,\nMenlo Park, 1991.\n[19] Jensen, D. and Cohen, P. R., Multiple Comparisons in\nInduction Algorithms, Machine Learning, vol. 38, pp. 309-338\n, 2000.\n[20] Kohavi, R., A Study of Cross-Validation and Bootstrap for\nAccuracy Estimation and Model Selection, in Proceedings of\nthe Fourteenth International Joint Conference on Artificial\nIntelligence, pp. 1137-1143, San Mateo, CA, 1995.\n[21] Lee, Y., Buchanan, B. G., and Aronis, J. M., Knowledge-Based\nLearning in Exploratory Science: Learning Rules to\nPredict Rodent Carcinogenicity, Machine Learning, vol. 30,\npp. 217-240, 1998.\n[22] Ling, C. X. and Li, C., Data Mining for Direct Marketing:\nProblems and Solutions, in Proceedings of the Fourth\nInternational Conference on Knowledge Discovery and Data\nMining, pp. 73-79, New York, NY, 1998.\n[23] Liu, B., Hsu, W., and Ma, Y., Identifying Non-Actionable\nAssociation Rules, in Proceedings of the Seventh ACM\nSIGKDD International Conference on Knowledge Discovery\nand Data Mining, pp. 329-334, San Francisco, CA, 2001.\n[24] Mani, D. R., Drew, J., Betz, A., and Datta, P., Statistics and\nData Mining Techniques for Lifetime Value Modeling, in\nProceedings of the Fifth ACM SIGKDD International\nConference on Knowledge Discovery and Data Mining, pp.\n94-103, San Diego, CA, 1999.\n[25] Megiddo, N. and Srikant, R., Discovering Predictive\nAssociation Rules, in Proceedings of the Fourth\nInternational Conference on Knowledge Discovery and Data\nMining, pp. 274-278, New York, NY, 1998.\n[26] Oates, T. and Jensen, D., Large Datasets Lead to Overly\nComplex Models: An Explanation and a Solution, in\nProceedings of the Fourth International Conference on\nKnowledge Discovery and Data Mining, pp. 294-298, Menlo\nPark, CA, 1998.\n[27] Padmanabhan, B. and Tuzhilin, A., A Belief-Driven Method\nfor Discovering Unexpected Patterns, in Proceedings of the\nFourth International Conference on Knowledge Discovery\nand Data Mining, pp. 94-100, New York, NY, 1998.\n382\nResearch Track Paper\n[28] Padmanabhan, B. and Tuzhilin, A., Small Is Beautiful:\nDiscovering the Minimal Set of Unexpected Patterns, in\nProceedings of the Sixth ACM SIGKDD International\nConference on Knowledge Discovery & Data Mining, pp. 54-63\n, Boston, MA, 2000.\n[29] Piatesky-Shapiro, G., Discovery, Analysis, and Presentation\nof Strong Rules, in Knowledge Discovery in Databases,\nPiatesky-Shapiro, G. and Frawley, W. J., Eds. Menlo Park,\nCA: AAAI/MIT Press, pp. 
229-248, 1991.
[30] Sarawagi, S., Agrawal, R., and Megiddo, N., Discovery-Driven Exploration of OLAP Data Cubes, in Proceedings of the Sixth International Conference on Extending Database Technology (EDBT'98), pp. 168-182, Valencia, Spain, 1998.
[31] Srikant, R. and Agrawal, R., Mining Quantitative Association Rules in Large Relational Tables, in Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data, 1996.
[32] Webb, G., Butler, S., and Newlands, D., On Detecting Differences between Groups, in Proceedings of the Ninth ACM SIGKDD Int'l Conference on Knowledge Discovery and Data Mining, pp. 256-265, Washington, DC, 2003.
[33] Webb, G. I., Discovering Associations with Numeric Variables, in Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, 2001.
[34] Westfall, P. H. and Young, S. S., Resampling-Based Multiple Testing - Examples and Methods for P-Value Adjustment. New York, NY: John Wiley & Sons, Inc., 1993.
[35] Wong, W.-K., Moore, A., Cooper, G., and Wagner, M., Rule-Based Anomaly Pattern Detection for Detecting Disease Outbreaks, in Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-2002), Edmonton, Canada, 2002.
[36] Wong, W.-K., Moore, A., Cooper, G., and Wagner, M., Bayesian Network Anomaly Pattern Detection for Disease Outbreaks, in Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), Washington, DC, 2003.

APPENDIX: Discovering false discovery rates for multiple significance levels
Let us continue to use the example N_perm = 999 and α = 5%. On the dataset D, the algorithm SigSQrules generates the significant rules as well as each rule's p-value. Because there are N_perm values in each distribution, the smallest possible p-value from the permutation tests is 1/(N_perm + 1) = 0.001, and the possible p-values are S = {1/(N_perm + 1) = 0.001, 2/(N_perm + 1) = 0.002, ..., α = 0.05}. Let N_sig[α_ind] be the number of significant rules whose p-value is no larger than α_ind ∈ S. For example, if there are 50 rules whose p-value is 0.001 and 30 rules whose p-value is 0.002, then N_sig[0.001] = 50 and N_sig[0.002] = 50 + 30 = 80. Without further permutation tests, N_sig[·] tells us how many rules will be discovered if we lower the significance level from α to α_ind. For example, if α_ind = 0.002, there are only N_sig[0.002] = 80 rules whose p-value is no larger than α_ind = 0.002, so we expect to discover 80 rules.
Similarly, for each permutation dataset D', at each significance level α_ind < α we can compute the number of significant rules and their p-values by applying SigSQrules only once. Note that all discoveries from D' are false discoveries, because the relationships between A and B have been removed. Let N_sig-perm[i][α_ind] be the number of discoveries from the permutation dataset D'[i]. For example, N_sig-perm[1][0.002] = 20 means we have 20 discoveries from the permutation dataset D'[1] at α_ind = 0.002. We carry out this procedure on multiple permutation datasets, and Median(N_sig-perm[·][α_ind]) is the estimate of the number of false discoveries at each significance level α_ind on the permutation datasets. Therefore, FDR(α_ind) = Median(N_sig-perm[·][α_ind]) / N_sig[α_ind].
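The bookkeeping just described is straightforward to implement once every rule's p-value is available. The following is a minimal sketch with our own illustrative names (the quantities correspond to N_sig and N_sig-perm above), assuming the p-values from D and from each permutation dataset have already been produced by SigSQrules.

```python
import numpy as np

def fdr_by_level(rule_pvals, perm_pvals, n_perm=999, alpha=0.05):
    """FDR(alpha_ind) for every attainable level alpha_ind in S = {0.001, ..., alpha}.

    rule_pvals : p-values of the significant rules found on the original dataset D
    perm_pvals : list of p-value arrays, one per permutation dataset D'[i]
    """
    rule_pvals = np.asarray(rule_pvals)
    levels = np.arange(1, int(round(alpha * (n_perm + 1))) + 1) / (n_perm + 1)
    fdr = {}
    for a_ind in levels:
        n_sig = np.count_nonzero(rule_pvals <= a_ind)               # N_sig[a_ind]
        n_false = np.median([np.count_nonzero(np.asarray(p) <= a_ind)
                             for p in perm_pvals])                  # Median(N_sig-perm[.][a_ind])
        fdr[a_ind] = n_false / n_sig if n_sig else 0.0
    return fdr
```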
We use the median to estimate the expectation, which conforms to nonparametric statistical considerations (the median is the best estimator of the expectation when the underlying distribution is unknown). Empirically, we showed in Figure 6.3 that FDR(α_ind) is an increasing function of α_ind. This means that by decreasing α_ind we can control FDR(α_ind) to a smaller value. We are not always guaranteed, though, to be able to set an individual significance level such that FDR < 5%. It is possible that even when we decrease α_ind to a level at which almost no rules are discovered, the FDR is still much larger than 5%. In other words, for some datasets a large proportion of the discovered rules will always be spurious. For example, if the attributes are independent with respect to the test statistic, then Median(N_sig-perm[·][α_ind]) ≈ N_sig[α_ind] for all significance levels, and FDR ≈ 1. We want to point out that this is a desirable property of our method for controlling FDR, because there are many real-world datasets whose attributes are truly independent of each other. Traditional methods cannot estimate how many rules should be discovered, but with our technique we can conclude that there is no rule to be discovered, because none of the rules is better than chance. This nonparametric method for estimating and controlling FDR is applicable to quantitative datasets and broad types of rules.", "keywords": "nonparametric methods;resampling;Rule discovery;market share rules;statistical quantitative rules"} {"name": "145", "title": "Optimal transmission range for cluster-based wireless sensor networks with mixed communication modes", "abstract": "Prolonging the network lifetime is one of the most important design objectives in wireless sensor networks (WSN). We consider a heterogeneous cluster-based WSN, which consists of two types of nodes: powerful cluster-heads and basic sensor nodes. All the nodes are randomly deployed in a specific area. To better balance the energy dissipation, we use simple mixed communication modes in which the sensor nodes can communicate with cluster-heads in either single-hop or multi-hop mode. Given the initial energy of the basic sensor nodes, we derive the optimal communication range and identify the optimal mixed communication mode to maximize the WSN's lifetime through optimization. Moreover, we also extend our model from 2-D space to 3-D space.", "fulltext": "INTRODUCTION
A wireless sensor network consists of a large number of sensor nodes, which have wireless communication capability and some signal-processing ability. Distributed wireless sensor networks enable a variety of applications for sensing and controlling the physical world [1], [2]. One of the most important applications is monitoring a specific geographical area (e.g., detecting and monitoring environmental changes in forests) by spreading a great number of wireless sensor nodes across the area [3]-[6]. Because of the sensor nodes' inherent constraints (generally tiny size, limited energy supply, weak computation ability, etc.), it is challenging to develop a scalable, robust, and long-lived wireless sensor network. Much research effort has focused on this area in recent years, resulting in many new technologies and methods to address these problems.
The combination\nof clustering and data-fusion is one of the most effective\napproaches to construct the large-scale and energy-efficient\ndata gathering sensor networks [7][9].\nIn particular, the authors of [9] develop a distributed algorithm\ncalled Low-Energy Adaptive Clustering Hierarchy\n(LEACH) for homogeneous sensor networks where each sensor\nelects itself as a cluster-head with some probability and\nThe research reported in this paper was supported in part by the U.S.\nNational Science Foundation CAREER Award under Grant ECS-0348694.\nthe cluster reconfiguration scheme is used to balance the\nenergy load. The LEACH allows only single-hop clusters\nto be constructed. On the other hand, in [10] we proposed\nthe similar clustering algorithms where sensors communicate\nwith their cluster-heads in multi-hop mode. However, in these\nhomogeneous sensor networks, the requirement that every\nnode is capable of aggregating data leads to the extra hardware\ncost for all the nodes. Instead of using homogeneous sensor\nnodes and the cluster reconfiguration scheme, the authors\nof [11] focus on the heterogeneous sensor networks in which\nthere are two types of nodes: supernodes and basic sensor\nnodes. The supernodes act as the cluster-heads. The basic\nsensor nodes communicate with their closest cluster-heads via\nmulti-hop mode. The authors of [11] formulate an optimization\nproblem to minimize the network cost which depends on the\nnodes' densities and initial energies.\nIn addition, The authors of [12] obtain the upper bounds\non the lifetime of a sensor network through all possible\nroutes/ communication modes. However, it is complicated to\nimplement a distributed scheme to achieve the upper bound\nof the WSN lifetime because it is required to know the\ndistance between every two sensor nodes in their scheme. The\nauthors of [13] develop a simpler, but sub-optimal, scheme\nwhere the nodes employ the mixed communication modes:\nsingle-hop mode and multi-hop mode periodically. This mixed\ncommunication modes can better balance the energy load\nefficiently over WSNs. However, the authors of [13] do not\nobtain the optimal communication range for the multi-hop\nmode which is a critically important parameter for the mixed\ncommunication modes scheme. Also, their analytical model\ncan only deal with the case of grid deployment, where the\nnodes are placed along the grids, without considering the\nrandom deployment.\nIn order to further increase the network lifetime of heterogeneous\nWSNs by remedying the deficiencies in the aforementioned\npervious work, we develop the analytical models to\ndetermine the optimal transmission range of the sensor nodes\nand identify the optimal communication modes in this paper.\nIn our models, the basic sensor nodes are allowed to communicate\nwith their cluster-heads with mixed communication\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE\nCluster-head\nBasic sensor node\nZone1\nZone 2\nZone 3\nZone 4\nR\nMobile base station\nCluster\nFig. 1.\nAn example of the wireless sensor data-gathering networks. In each\nround, the aircraft hovers above the cluster-heads in the monitored area to\ncollect the aggregated data. Within each cluster, the basic sensor nodes can\ncommunicate with its cluster-head in either single-hop or multi-hop.\nmodes instead of only multi-hop mode or only single-hop\nmode. 
Applying the derived optimal transmission range and\ncommunication modes, we also study how the other WSN\nparameters (e.g., the density and initial energy of the cluster-heads\n, etc.) affect the network lifetime through simulation\nexperiments. Moreover, simulation results verify our analytical\nmodels. We also extend our model from 2-D space to 3-D\nspace.\nThe rest of the paper is organized as follows. Section II\ndevelops our proposed models and formulates the design\nprocedure as an optimization problem. Section III solves the\nformulated optimization problem. Section IV presents the\nnumerical and experimental results. Section V addresses the\nextended 3-D model. The paper concludes with Section VI.\nSYSTEM MODEL\nWe study the following WSN scenario in this paper. A\nheterogeneous sensor network consisting of two types of\nsensor nodes, i) the more powerful but more expensive cluster-head\nnodes with density of\n\n1\nand ii) the inexpensive basic\nsensor nodes with density of\n\n2\n, is deployed in a specific\narea. The density of the basic sensor nodes is determined by\nthe application requirements. A basic sensor node joins the\ncluster whose cluster-head is the closest in hops or distance to\nthis basic sensor node. In each round, the cluster-heads send\nthe aggregated data to the mobile base station (e.g., an aircraft\nor a satellite) after the cluster-heads receive and process the\nraw data from the basic sensor node. Fig. 1 shows an example\nof this type of sensor networks.\nThe definition of network lifetime used in this paper is\nthe period in rounds from the time when the network starts\nworking to the time when the first node dies [15]. Notice that\nenergy dissipation is not uniform over the cluster-based WSNs,\nimplying that some nodes in specific zones (e.g., the nodes\nwhich are close to the cluster-heads need to consume more\nenergy for relaying traffic of other nodes in multi-hop mode)\ndrain out their energy faster than others. Thus, the network\nlifetime of these critical nodes decides how long the WSN can\nsurvive. Hence, maximizing the network lifetime is equivalent\nto minimizing the energy consumption of the critical sensor\nnodes if the initial energy of sensor nodes is given. In this\npaper, our optimization objective is not to minimize the total\nenergy consumption by all the sensor nodes, but to minimize\nthe energy consumption of the critical nodes to prolong the\nnetwork lifetime.\nA. Node Architectures and Energy Models\nA wireless sensor node typically consists of the following\nthree parts: 1) the sensor component, 2) the transceiver component\n, and 3) the signal processing component. In this paper,\nwe have the following assumptions for each component.\n1) For the sensor component:\nAssume that the sensor nodes sense constant amount\nof information every round. The energy consumed in\nsensing is simply\nE\nsense\n(l) = l, where is the power\nconsumption for sensing a bit of data and\nl is the length\nin bits of the information which a sensor node should\nsense in every data-gathering round. In general the value\nof\nl is a constant.\n2) For the transceiver component:\nWe use a simple model for the radio hardware energy\nconsumption [16]. 
The energy\nE\ntx\n(r, l) and the energy\nE\nrx\n(l) required for a node to transmit and receive a\npacket of\nl bits over r distance, respectively, can be\nexpressed as follows.\nE\ntx\n(r, l) = (r\nn\n+ )l\nE\nrx\n(l)\n= l\n(1)\nwhere\nr\nn\naccounts for the radiated power necessary\nto transmit over a distance of\nr between source and\ndestination and\nis the energy dissipated in the transmitter\ncircuit (PLLs, VCOs, etc) which depends on\nthe digital coding, modulation, etc. The value of path\nloss exponent\nn depends on the surrounding environment\n[16]. In general,\n= 10pJ/bit/m\n2\nwhen\nn = 2,\n= 0.0013pJ/bit/m\n4\nwhen\nn = 4, = 50nJ/bit [9].\n3) For the signal processing component:\nThis component conducts data-fusion. Because the signal\nprocessing component usually consists of complicated\nand expensive gear such as Digital Signal Proces-sors\n(DSP), Field Programmable Gate Arrays (FPGA),\netc., the basic sensor nodes do not contain the signal\nprocessing component. Only the cluster-heads have these\ncomponents and the ability for data-fusion. The energy\nspent in aggregating\nk streams of l bits raw information\ninto a single stream is determined by\nE\naggr\n(k, l) = kl\n(2)\nwhere the typical value of\nis 5nJ/bit/stream [17].\nB. Mixed Communication Modes\nNotice that in the multi-hop mode, the closer the distance\nbetween the sensor node and its cluster-head, the more energy\nthe sensor node consumes since the inner nodes are required\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE\nto relay the more traffics than that for the outer nodes. On\nthe other hand, for the case of pure single-hop mode, the\nbasic sensor nodes which are closer to their cluster-heads\ndissipate less energy than those farther from the cluster-heads\nbecause the energy consumption increases as the\nn-th power\nof distance, where\nn is the path loss exponent.\nIn order to balance the energy load of the basic sensor\nnodes, our model employs the mixed communication modes\nwhich consist of the mixed single-hop and multi-hop mode.\nThe basic sensor nodes use single-hop communication mode in\nsome rounds but multi-hop communication mode in the other\nrounds. This kind of mixed communication modes is easy\nto implement. For example, the cluster-head can broadcast a\nnotifying message periodically to all member nodes to inform\nof which communication mode should be used for next data-gathering\nround. We use parameter\nto measure how often\nthe single-hop mode is used.\nSuppose\nT is the total rounds that the network can perform,\nT\ns\nis the number of rounds that single-hop communication\nmode is used and\nT\nm\nis the number of rounds that multi-hop\ncommunication mode is employed. Let\n= T\ns\n/T = 1-T\nm\n/T\nbe the frequency with which the single-hop communication\nmode is used. Note that\n= 0 means that the pure multi-hop\ncommunication mode is employed, while\n= 1 represents that\nonly the single-hop communication mode is used.\nC. Deployment Models\nThe sensor nodes and the cluster-heads are randomly distributed\non a 2-D circular area, whose radius is\nA unit. We\ncan model such random deployment (e.g., deployed by the\naircraft in a large-scale mode) as a spatial Poisson point\nprocess. 
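As a small illustration of the transceiver and data-fusion costs above, the sketch below encodes Eqs. (1) and (2) with the typical constants quoted in the text. The constant and function names are placeholders of ours, since the original symbols do not survive in this version of the text.

```python
# Typical constants quoted in the text (placeholder names; n is the path-loss exponent).
EPS_AMP = {2: 10e-12, 4: 0.0013e-12}   # J/bit/m^n: radiated-power coefficient
E_ELEC  = 50e-9                        # J/bit: transmitter/receiver electronics
E_FUSE  = 5e-9                         # J/bit/stream: data-fusion cost

def e_tx(r, l, n=2):
    """Energy to transmit l bits over distance r, Eq. (1)."""
    return (EPS_AMP[n] * r**n + E_ELEC) * l

def e_rx(l):
    """Energy to receive l bits, Eq. (1)."""
    return E_ELEC * l

def e_aggr(k, l):
    """Energy to aggregate k streams of l bits into a single l-bit stream, Eq. (2)."""
    return E_FUSE * k * l
```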
Specifically, the cluster-heads and basic sensor nodes\nin the wireless sensor network are distributed according to\ntwo independent spatial Poisson processes PP1 and PP2 with\ndensities equal to\n\n1\nand\n\n2\n, respectively.\nThe basic sensor nodes will join the clusters in which\nthe cluster-heads are the closest to the sensor nodes to form\nVoronoi cells [14]. The 2-D plane is thus partitioned into a\nnumber of Voronoi cells which correspond to a PP1 process\npoint. The authors of [14] have studied the moments and tail\nof the distributions of the number of PP2 nodes (i.e., the basic\nsensor nodes) connected to a particular PP1 node (i.e, a cluster-head\n). Because PP1 and PP2 are homogeneous Poisson point\nprocesses, we can shift the origin to one of the PP1 nodes.\nLet\nV be the set of nodes which belong to the Voronoi cell\ncorresponding to a PP1 node located at the origin and\nS\n(r,)\nbe a PP2 node whose polar coordinate is (\nr, ). By using the\nresults of [14], we can get the probability that\nS\n(r,)\nbelongs\nto\nV as follows:\nP r S\n(r,)\nV = e\n1\nr\n2\n(3)\nLet\nN\nV\nbe the number of PP2 nodes belonging to the\nV.\nThen, the average\nN\nV\ncan be given by\nE[N\nV\n] =\n2\n0\n\n0\nP r S\n(r,)\nV\n2\nrdrd\n=\n2\n0\n\n0\ne\n1\nr\n2\n\n2\nrdrd\n=\n2\n\n1\n(4)\nwhere\n\n2\nrdrd denotes the number of PP2 nodes in a small\narea of\nrdrd.\nIf we confine the cluster size to\nX hops (i.e., maximum\nnumber of\nX hops is allowed from the basic sensor node\nto its cluster-head), the average number of sensors which do\nnot belong to any cluster-head, denoted by\nE[N\nO\n], can be\nexpressed as follows:\nE[N\nO\n] =\n1\nA\n2\n2\n0\n\nXR\nP r S\n(r,)\nV\n2\nrdrd\n=\n2\nA\n2\ne\n1\n(XR)\n2\n(5)\nClearly, we want a small\nE[N\nO\n] with an appropriate cluster\nsize\nX.\nE [N\nO\n]\n2\nA\n2\n(6)\nwhere (\n> 0) is the percentage of sensor nodes that will not\njoin any cluster-head.\nSolving Eq. (5) and Eq. (6) together, we obtain the following\ninequality.\nX\n- log\n1\nR\n2\n(7)\nThe average number of sensor nodes which do not join any\ncluster-head is less or equal than\n\n2\nA\n2\nif\nX is given by\nX =\n- log\n1\nR\n2\n(8)\nThus, we know that every cluster can be divided into\nX\nring zones with the ring width equal to\nR when multi-hop\ncommunication mode is used.\nThe average number of sensor nodes in the\ni-th zone of the\ncluster can be written as follows:\nE[N\ni\n] =\n2\n0\niR\n(i-1)R\nP r S\n(r,)\nV\n2\nrdrd\n=\n2\n\n1\ne\n1\n[(i-1)R]\n2\n- e\n1\n(iR)\n2\n(9)\nwhere\ni is an positive integer ranging from 1 to X.\nD. The Wireless Sensor Network Lifetime\nIn our model, the nodes at the same zone will die at almost\nthe same time. Therefore, the network lifetime is equivalent\nto the period from the time when the WSN begins working to\nthe time when all nodes of a zone die.\ni) The basic sensor nodes\nThe sensor nodes have the responsibility to relay the traffic\nfrom the peers laid in the outer zone in the multi-hop communication\nmode. We define the average number of packets\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE\nby\nY(i), which a sensor node placed in the i-th zone need\nto relay. 
Because each basic sensor node sends out a packet\nof sensed information per round,\nY(i) is determined by the\naverage number of nodes for which node\ni needs to forward\nmessages and can be written as follows:\nY(i) =\nX\nj=i+1\nE[N\nj\n]\nE[N\ni\n]\n.\n(10)\nThe energy consumption of the basic sensor nodes in the\ni-th zone can be written as the summation of the following 3\nterms:\nE\nsensor\n(i, ) = (1 - ) E\ntx\n(R, l) + Y(i) E\ntx\n(R, l)\n+E\nrx\n(l) + E\ntx\n(iR, l) + E\nsense\n(l)\n(11)\nwhere\n(0 1) is a parameter measuring the frequency\nwith which the single-hop communication mode will\nbe employed. The first term of Eq. (11) represents the energy\nspent in the relaying the traffic for the sensor nodes in the\nouter zone and transmitting its own traffic when the multihop\ncommunication mode is employed. The second term of\nEq. (11) is energy consumption for transmitting a packet by\nsingle-hop. The third term of Eq. (11) is the energy dissipation\nfor sensing.\nIn our model, because the basic sensor nodes in the same\nzone will consume almost the same energy in each round\n(i.e., sharing the same relaying traffic load when multi-hop\ncommunication mode is used), their lifetimes are equal if\ntheir initial energies are the same. Therefore, if the initial\nenergy is the same for each basic sensor node, the sensor\nnodes which belong to the (arg max\n1iX\n{E\nsensor\n(i, )})-th\nzone will cost the most energy and die first which decides the\nnetwork lifetime. Given the initial energy, denoted by\nE\ninit2\n,\nwhich is carried by the basic sensor node, the network lifetime\nin rounds (\nT ) can be written as follows:\nT =\nE\ninit2\nmax\n1iX\nE\nsensor\n(i, ) =\nE\ninit2\nE\nmax\n()\n(12)\nwhere\nE\nmax\n()\nmax\n1iX\nE\nsensor\n(i, ). One of our objectives\nis to find the optimal\n\n\nwhich is determined by\n\n\n= arg min\n01\n{E\nmax\n()}\n(13)\nii) The cluster-heads\nBecause the main functions of cluster-heads include (1)\nsensing, (2) collecting data from the basic sensor nodes, (3)\naggregating the raw data, and (4) transferring the processed\ndata to the base station, the energy consumption of cluster-heads\nis the sum of the energy dissipation of these four parts\nfor the above four parts. Therefore, the energy consumption\nfor a cluster-head, denoted by\nE\nCH\n, in each round can be\nexpressed as summation of the following 4 terms:\nE\nCH\n= E [N ] E\nrx\n(l) + E\ntx\n(H, l ) + E\nsense\n(l) +\nE\naggr\n(E [N ] + 1, l)\n(14)\nwhere\nE[N ] is the average number of basic sensor nodes in\na cluster, and\nH is the distance between the cluster-head and\nthe mobile base-station.\nThe first term of Eq. (14) represents the energy consumed\nfor receiving the packets from the basic sensor nodes. The\nsecond term of Eq. (14) is the energy spent in transmitting the\naggregated information to the mobile base station. The third\nterm of Eq. (14) denotes the energy dissipation for sensing and\nthe fourth term of Eq. (14) means the energy consumption for\naggregating (\nE [N ] + 1) packets, each with l bits, into one\npacket of\nl bits.\nBecause\nE[N ] =\n2\n/\n1\n,\nE\nCH\ndepends on the ratio between\n\n2\nand\n\n1\n. The energy consumption of cluster-heads\nis reversely proportional to the density of cluster-heads. The\nlarger the density of cluster-heads, the smaller the value of\nE\nCH\n. Thus, in order to ensure\nT\n0\nrounds network lifetime, the\ninitial energy for the cluster-heads can be written as follows:\nE\ninit1\nT\n0\nE\nCH\n(15)\nE. 
Connectivity\nBecause the mixed communication modes contains multihop\ncommunication mode, the communication range (\nR) is\nrequired to be large enough to ensure the connectivity of the\nnetwork. When the nodes are assumed to be distributed with\nPoisson density\nin a disc of a unit area, the authors of [18]\nderived a lower bound on the communication range (\nR) to\nensure the network connectivity with probability\nP r{conn},\nwhich is determined by\nP r{conn} 1 - e\n-R\n2\n(16)\nHence, the following inequality need to hold to ensure the\nconnectivity.\ne\n-R\n2\n\n(17)\nwhere\n> 0.\nBy resolving Eq. (17), we obtain the minimum transmission\nrange, denoted by\nR\nmin\n, to ensure that the probability of\nconnectivity is greater than (1\n- ).\nR\nmin\n=\n- 1\nlog\n\n\n=\n1\n(\n1\n+\n2\n) log\n(\n1\n+\n2\n)\n\n(18)\nF. The Optimization Problem Formulation\nOur objective is to find the optimal transmission range (\nR)\nand the parameter for the mixed communication modes (\n) to\nmaximize the network lifetime.\nObjective:\nmax{T }\n(19)\nSubject to\nR R\nmin\n(20)\n0 1\n(21)\n0 E\ninit2\nE\n0\n(22)\nwhere the first constraint given by Eq. (20) is to ensure the\nconnectivity of the network. The expression of\nR\nmin\ndepends\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE\non the types of deployment. The second constraint given by\nEq. (21) indicates that we can use mixed communication\nmodes. The third constraint given by Eq. (22) is due to the\nvery limited energy carried by the basic sensor nodes.\nBy observing Eq. (12), we can simplify the objective\nfunction as \"min\n{E\nmax\n}\" and remove the constraint given\nby Eq. (22) by letting\nE\ninit2\n= E\n0\n. Then, the simplified\noptimization problem can be written as follows:\nObjective:\nmin{E\nmax\n}\n(23)\nSubject to\nR R\nmin\n0 1\nSOLUTIONS FOR THE OPTIMIZATION PROBLEM\nFirst, we show that given a specified transmission range\n(\nR), the function of E\nmax\n() is convex in and the optimal\n\n\n= arg min\n01\n{E\nmax\n()} is a function of R.\nBecause the energy consumption\nE\nsensor\n(i, ) of the sensor\nnodes in the\ni-th zone is a convex function of i (the proof\nis omitted for lack of space), the value of\nE\nmax\n() will be\nachieved by the sensor nodes laid in either the 1st zone or the\nX-th zone, i.e.,\nE\nmax\n() = max {E\nsensor\n(1, ), E\nsensor\n(X, )}\n(24)\nNote that\nE\nsensor\n(1, ) is a monotonically decreasing linear\nfunction of\nwhile E\nsensor\n(X, ) is a monotonically increasing\nlinear function of\n, and E\nsensor\n(1, 1) = E\nsensor\n(X, 0).\nHence, we can find that the two lines corresponding to these\ntwo linear functions will intersect at the point where\nis\nwithin the range between 0 to 1. Clearly, the intersecting\npoint, denoted by\n\n\n, yields the minimum value of\nE\nmax\n().\nTherefore,\n=\n\nis the optimal value when the following\nequation is satisfied.\nE\nsensor\n(1,\n\n) = E\nsensor\n(X,\n\n)\n(25)\nBy resolving Eq. (25), we obtain the solution for Eq. 
(13).\n\n\n= arg min\n01\n{E\nmax\n()}\n=\nY(1)[E\ntx\n(R) + E\nrx\n]\nY(1)[E\ntx\n(R) + E\nrx\n] + E\ntx\n(XR) - E\ntx\n(R)\n=\nY(1)(R\nn\n+ 2)\nY(1)(R\nn\n+ 2) + R\nn\n(X\nn\n- 1)\n=\nY(1)(R\nn\n+ 2)\nY(1)(R\nn\n+ 2) + R\nn\n(X\nn\n- 1)\n(26)\nwhere\n= /, and we use E\ntx\n(R), E\ntx\n(XR), and E\nrx\ninstead of\nE\ntx\n(R, l), E\ntx\n(XR, l), and E\nrx\n(l) since the value\nof\nl is a constant for a specific application.\nThe factor\nmeasures to what extent R has the impact\non the transmission energy consumption. For example, The\ntransmission energy consumption is more sensitive to\nR when\nis larger. The factor also determines the cost of relaying\ntraffic. The cost of relaying traffic increases with the increment\nof\nbecause receiving packets consumes more energy with a\ngreater\n.\n10\n-5\n10\n0\n10\n5\n10\n4\n10\n3\n10\n2\n10\n1\n10\n-1\n10\n-2\n10\n-3\n10\n-4\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n\n\n*\nR=0.1, n=2\nR=1, n=2\nR=0.1, n=4\nR=1, n=4\nFig. 2.\nThe optimal\n\n\nagainst\n.\nIf\n\nR\nn\n, Eq. (26) reduces to its approximated expression\nas follows:\n\n\n\nY(1)\nY(1) + (X\nn\n- 1)\n(27)\nBy substituting Eq. (9) and (10) into Eq. (26), we have\n\n\n=\n1-e\n-1(X2-1)R2\ne\n1R2\n-1\n(R\nn\n+ 2)\n1-e\n-1(X2-1)R2\ne\n1R2\n-1\n(R\nn\n+ 2) + R\nn\n(X\nn\n- 1)\n(28)\nNotice that\n\n\nis a function of\nR if we substitute Eq. (8) into\nEq. (28).\nLet\n\n1\n= 0.001,\n2\n= 3, and = 10\n-12\n. By using Eq. (28),\nwe plot the optimal\n\n\nagainst\nas shown in Fig. 2. We\nobserve from Fig. 2 that the optimal\n\n\nis almost 0 (i.e., pure\nmulti-hop communication mode) if\nn = 4 and is small. The\nreason is because the energy consumption for transmission is\nproportional to\nR\n4\nand the term of\nR\n4\nin the first part of\nEq. (1) dominates the transmission energy consumption if\n\nis small. Thus, the energy consumption in single-hop mode is\nmuch more than that in multi-hop mode. In contrast, if\nis\nlarge, the multi-hop mode loses its advantage over the single-hop\nmode because the transmission energy consumption is\ndominated by the constant term of\nin the first part of Eq. (1)\nand it is not sensitive to the transmission range.\nSo far, given the communication range (\nR), we obtain the\nminimum\nE\nmax\n(\n\n) for the basic sensor nodes in order to\nderive the solutions for objective function of Eq. (24) as\nfollows:\nE\nmax\n(\n\n) = [R\nn\n(1 +\n\nX\nn\n\n) + + ]l\n(29)\nNext, we want to identify the optimal\nR\n\nto minimize\nE\nmax\n(\n\n) when constraint given by Eq. (20) applies. Because\nit is difficult to find the closed form for the optimal\nR\n\n, we\nuse a numerical solutions to determine the optimal\nR\n\nwhich\nis detailed for some scenarios in Section IV.\nTHE NUMERICAL AND SIMULATION EVALUATIONS\nFor the following discussions, we set\n= 10pJ/bit/m\n2\n,\n= 50nJ/bit, = 5nJ/bit/stream, the packet length\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE\n10\n-3\n10\n-2\n10\n-1\n10\n0\n10\n1\n10\n2\n0\n5\n10\n15\n20\n25\n30\nThe density of the cluster-heads (\n\n1\n)\nThe optimal transmission range (R\n*\n)\n\n2\n/\n\n1\n=100\n\n2\n/\n\n1\n=5000\n=1000\n=50\n=0\n10\n-3\n10\n-2\n10\n-1\n10\n0\n10\n1\n10\n2\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nThe density of the cluster-heads (\n\n1\n)\n\n*\n\n2\n/\n\n1\n=100\n\n2\n/\n\n1\n=5000\n=1000\n=0\n=50\n(a)\n(b)\nFig. 3.\nThe optimal parameters versus the density of cluster-heads under different\n. 
(a) The optimal communication range R\n\n. (b) The optimal\n\n\n.\n0.5\n1\n1.5\n2\n2.5\n3\n3.5\n4\n4.5\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nTransmission range in meters (R)\nThe normalized network lifetime\n=0\n=0.2\n=0.5\n=0.8\n=1\nFig. 4.\nThe normalized network lifetime versus the transmission range (\nR)\nwith\n= 50,\n1\n= 0.1,\n2\n/\n1\n= 5000, and = 0, 0.2, 0.5, 0.8, 1.\nl = 120 bits and the path loss exponent n = 2. We consider\nvarious scenarios with three different values of\n, two average\nnumbers of basic sensor nodes in a cluster (\n\n2\n/\n1\n) and various\ndensities of cluster-heads (\n\n1\n). According to the discussion in\nSection III, we get the numerical solutions for the optimal\nR\n\n,\nand then the value of optimal\n\n\ncan be calculated by using\nEq. (28).\nThe numerical results of optimal\nR\n\nand\n\n\nwith different\n\nand (\n\n2\n/\n1\n) are shown in Fig. 3(a) and Fig. 3(b), respectively.\nThe optimal\nR\n\ndecreases as\n\n1\nincreases. With the same\n\n1\n,\nthe larger\n, the larger the value of R\n\n. When\nis small\n(e.g.,\n= 0 and = 50), the value of\n\nis small (e.g.,\n\n\n< 0.1). This implies that the multi-hop communication\nmode dominates the single-hop mode. The reason for these\nobservations includes the following two. First, the cost of\nrelaying traffics is small since the receiving energy is small.\nSecond, the transmission energy is sensitive to the transmission\nrange.\nWe also conduct the simulation experiments to verify our\nanalytical results. In our simulations, Minimum Transmission\nEnergy (MTE) routing algorithm [19], which minimizes the\ntotal energy consumption for sending a packet, is used as the\nrelaying scheme for the multi-hop communication mode. Let\n= 50,\n1\n= 0.1, and\n2\n/\n1\n= 5000. We set the initial\nenergy of cluster-heads (\nE\ninit2\n) high enough to guarantee that\nthe cluster-heads can have longer lifetime than the basic sensor\nnodes. Fig. 4 shows the simulation results of network lifetime.\nThe plots in Fig. 4 is the average results of 1000 experiments.\nIt shows that in most cases (i.e.,\n= 0, 0.2, 0.5, and 0.8) the\nnetwork lifetime is maximized when\nR\n\n= 3.5, which agrees\nwith the numerical results shown in Fig. 3.\nIn the following simulations, the parameters are set as\nfollows: the initial energy of basic sensor nodes\nE\ninit2\n=\n0.01J, the distance between the cluster-heads and the mobile\nbase station\nH = 100m, l = l = 120bits and\n2\n= 1000.\nFig. 5 shows the network lifetime changes with the average\nnumber of the basic sensor nodes in a cluster by using the\noptimal\nR\n\nand\n\n\nwhen\nis equal to 0, 100 and 1000. We\nobserve that increasing the density of cluster-heads (i.e.,\n\n2\n/\n1\ndecreases given constant\n\n2\n) does not always help to extend\nthe network lifetime. For example, the network lifetime can be\nincreased by 51% when the density of cluster-heads changes\nfrom\n\n1\n= 0.01 or\n2\n/\n1\n= 10\n6\nto\n\n1\n= 0.1 or\n2\n/\n1\n= 10\n5\n,\nwhile the network lifetime is almost the same when the density\nof cluster-heads is greater than\n\n1\n= 1 (i.e., the average of\nbasic sensor nodes\n\n2\n/\n1\n10\n4\n). On the other hand, Fig. 6\nshows the energy consumption of a cluster-head against the\naverage number of the basic sensor nodes. We find that the\naverage energy consumption of a cluster-head is proportional\nto (\n\n2\n/\n1\n) from Fig. 6. The required initial energy of cluster-heads\nincreases with the decrease of the density of cluster-heads\n(\n\n1\n). 
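To make the numerical procedure of Sections III and IV concrete, the rough sketch below grid-searches for R* and α* under our reading of Eqs. (8)-(11). The constant names are placeholders, the n = 2 values quoted above are assumed, and the constant sensing term is omitted because it only shifts every E_sensor(i, α) by the same amount and does not affect the optimum.

```python
import numpy as np

# Placeholder names; n = 2 radio constants, packet length, and epsilon as quoted above.
EPS_AMP, E_ELEC, L_BITS, N_EXP, EPS_ORPHAN = 10e-12, 50e-9, 120, 2, 1e-12

def e_tx(r):
    return (EPS_AMP * r**N_EXP + E_ELEC) * L_BITS

def zone_counts(R, lam1, lam2):
    """E[N_i] for the X ring zones of width R, following Eqs. (8)-(9)."""
    X = int(np.ceil(np.sqrt(-np.log(EPS_ORPHAN) / (np.pi * lam1)) / R))
    i = np.arange(1, X + 1)
    counts = (lam2 / lam1) * (np.exp(-lam1 * np.pi * ((i - 1) * R) ** 2)
                              - np.exp(-lam1 * np.pi * (i * R) ** 2))
    return counts, X

def e_max(R, alpha, lam1, lam2):
    """max_i E_sensor(i, alpha) under the mixed communication modes, Eq. (11)."""
    counts, X = zone_counts(R, lam1, lam2)
    relay = np.array([counts[i + 1:].sum() / counts[i] for i in range(X)])  # Y(i)
    i = np.arange(1, X + 1)
    e_sensor = ((1 - alpha) * (e_tx(R) + relay * (e_tx(R) + E_ELEC * L_BITS))
                + alpha * e_tx(i * R))
    return e_sensor.max()

def optimal_R_alpha(lam1, lam2, r_min, r_max=30.0):
    """Grid search for the (R, alpha) pair that minimizes E_max, with R >= R_min."""
    candidates = ((e_max(R, a, lam1, lam2), R, a)
                  for R in np.linspace(r_min, r_max, 200)
                  for a in np.linspace(0.0, 1.0, 101))
    _, R_star, alpha_star = min(candidates)
    return R_star, alpha_star
```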
Thus, there is a trade-off between the network\nlifetime and the initial energy of the cluster-heads.\n3-D WSN EXTENSION\nOur work can be easily extended to a 3-D space model.\nThe differences between the 3-D space model and the 2-D\nspace model lie in the deployment models and the connectivity\nmodels.\nThe probability that a basic sensor node with spherical\ncoordinate (\nr, , ) belongs to a cluster-head located in the\norigin is\nP r S\n(r,,)\nV = e\n1\n4\n3\nr\n3\n(30)\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\n600\n700\n800\n900\n1000\n1100\n1200\n1300\nThe average number of basic sensor nodes in a cluster (\n\n2\n/\n\n1\n)\nNetwork lifetime in rounds\n=0\n=100\n=1000\nFig. 5. Optimal network lifetime under various situations where\n\n2\n= 1000.\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\nThe average number of basic sensor nodes in a cluster (\n\n2\n/\n\n1\n)\nEnergy consumption of a cluster-head in Joules\n=0\n=100\n=1000\nFig. 6.\nThe energy consumption of cluster-heads against the number of the\nbasic sensor nodes.\nThe average number of basic sensor nodes in a cluster is\ndetermined by\nE[N\nV\n]\n=\n\n0\n2\n0\n\n0\nP r S\n(r,,)\nV\n2\nr\n2\nsin drdd\n=\n\n0\n2\n0\n\n0\ne\n1\n4\n3\nr\n3\n\n2\nr\n2\nsin drdd\n=\n2\n\n1\n(31)\nIn the similar way, we can obtain the number of basic sensor\nnodes in the\ni-th zone as follows:\nE [N\nV\n]\n=\n\n0\n2\n0\niR\n(i-1)R\nP r S\n(r,,)\nV\n2\nr\n2\nsin drdd\n=\n2\n\n1\ne\n1\n4\n3\n[(i-1)R]\n3\n- e\n1\n4\n3\n(iR)\n3\n(32)\nTo satisfy the requirement of connectivity, the minimum\ncommunication range\nR\nmin\ncan be written as follows:\nR\nmin\n=\n3\n3\n4(\n1\n+\n2\n) log\n(\n1\n+\n2\n)\n\n(33)\nAgain, we can obtain the optimal\n\n\nand\nR\n\nfor the 3-D\nspace model along the same manner as a class of the case of\n2-D space WSN model.\nCONCLUSION\nWe investigated the optimal transmission range for a heterogeneous\ncluster-based sensor network which consists of\ntwo types of nodes, the super cluster-heads and the basic\nsensor nodes. To balance the energy load of the basic sensor\nnodes, the mixed communication modes are employed. By\ndeveloping the analytical models, we numerically derive the\noptimal transmission range\nR\n\nand the frequency of single-hop\nmode\n\n\nto achieve the longest network lifetime. Our analyses\nalso showed that our proposed model can be easily extended\nfrom 2-D to 3-D. The simulation results validated our proposed\nanalytical models. Our simulation results with the optimal\nR\n\nand\n\n\nindicated that the high density of cluster-heads is not\nvery helpful for prolonging the network lifetime.\nREFERENCES\n[1] I. F. Akyildiz, W. Su, Y. Sankarsubramaniam, and E. Cayirci, \"Wireless sensor\nnetworks: a survey,\" Computer Networks, 38 (2002) pp. 393-422.\n[2] G. J. Pottie and W. J. Kaiser, \"Wireless integrated network sensors,\" Communications\nof the ACM, Vol. 43, No 5, pp 51-58, May 2000.\n[3] A. Cerpa, J. Elson, D. Estrin, L. Girod, M. Hamilton, and J. Zhao, \"Habitat\nmonitoring: application driver for wireless communications technology,\" in Proc.\nof ACM SIGCOMM Workshop on Data Communications in Latin America and the\nCaribbean, Costa Rica, April 2001.\n[4] J. Lundquist, D. Cayan, and M. Dettinger, \"Meteorology and hydrology in yosemite\nnational park: a sensor network application,\" in Proc. 
of Information Processing\nin Sensor Networks (IPSN), April, 2003.\n[5] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson, \"Wireless\nsensor networks for habitat monitoring,\" in Proc. of WSNA'02, Atlanta, Georgia,\nSeptember 28, 2002.\n[6] G. Tolle, D. Gay, W. Hong, J. Polastre, R. Szewczyk, D. Culler, N. Turner, K.\nTu, S. Burgess, T. Dawson, and P. Buonadonna, \"A macroscope in the redwoods,\"\nin 3rd ACM Conference on Embedded Networked Sensor Systems (SenSys), San\nDiego, November 2-4, 2005\n[7] O. Younis and S. Fahmy, \"Distributed clustering in ad-hoc sensor networks: a\nhybrid, energy-efficient approach\", in Proc. of INFOCOM'04, March 2004\n[8] A. Boulis, S. Ganeriwal, and M. B. Srivastava, \"Aggregation in sensor networks:\nan energy-accuracy trade-off,\" Elsevier Ad Hoc Networks Journal, Vol. 1, 2003,\npp. 317-331\n[9] W. R. Heinzelman, A. Chandrakasan and H. Balakrishman, \"Energy-efficient\ncommunication protocol for wireless microsensor networks,\" in Proc. Of IEEE\nHICSS, January 2000.\n[10] H. Su and X. Zhang, \"Energy-efficient clustering system model and reconfiguration\nschemes for wireless sensor networks,\" in proc. of the 40th Conference on\nInformation Sciences and Systems (CISS 2006), March 2006.\n[11] V. P. Mhatre, C. Rosenberg, D. Kofman, R. Mazumdar and N. Shroff, \"A minmum\ncost heterogeneous sensor network with a lifetime constraint,\" IEEE Trans. on\nMobile Computing, Vol.4, No.1, Jan./Feb. 2005, pp.4-15.\n[12] M. Bhardwaj and A. P. Chanrakasan, \"Bouding the lifetime of sensor networks\nvia optimal role assignments,\" in Proc. of INFOCOM'02, pp.1587-1596, 2002.\n[13] V. Mhatre and C. Rosenberg, \"Design guidelines for wireless sensor networks:\ncommunication, clustering and aggregation\", Elsevier Ad Hoc Networks Journal,\nVol. 2, 2004, pp. 45-63.\n[14] S. G. Foss and S. A. Zuyev, \"On a voronoi aggregative process related to a bivariate\npoisson process,\" Advances in Applied Probability, vol. 28, no. 4, 1996, pp. 965-981\n.\n[15] J. H. Chang and L. Tassiulas, \"Energy conserving routing in wireless ad hoc\nnetworks,\" in Proc. INFOCOM, Tel Aviv, Israel, March 2000, pp. 22-31.\n[16] T. Rappaport, Wireless Communication Priciples and Practice (2nd Edition). Upper\nSaddle River, N.J. Prentice Hall PTR, 2002.\n[17] A. Wang, W. Heinzelman, and A. Chandrakasan, \"Energy-scalable protocols for\nbattery-operated microsensor networks,\" in Proc. of the IEEE workshop on Signal\nProcessing Systems (SiPS'99), pp. 483-492, 1999.\n[18] P. Gupta and P. R. Kumar, Critical power for asymptotic connectivity in wireless\nnetworks, in W. M. McEneany, G. Yin, Q. Zhang (Editors), Stochastic Analysis,\nControl, Optimization and Applications: A Volume in Honor of W. H. Fleming,\nBirkhauser, Boston, MA, 1998, pp. 547-566.\n[19] T. Shepard, \"Decentralized channel management in scalable multihop spread\nspectrum packet radio networks,\" Massachusetts Inst. of Technol., Lab. for Comput.\nSci., Cambridge, Tech. Rep. MIT/LCS/ TR-670, July 1995.\nProceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia\nNetworks (WoWMoM'06)\n0-7695-2593-8/06 $20.00 2006\nIEEE", "keywords": "Wireless sensor networks;heterogeneous cluster-based sensor network;optimization;network lifetime;optimal transmission range;energy optimization;Voronoi cell;numerical model;clustering"} {"name": "146", "title": "Parallel Crawlers", "abstract": "In this paper we study how we can design an effective parallel crawler. 
As the size of the Web grows, it becomes imperative to parallelize a crawling process, in order to finish downloading pages in a reasonable amount of time. We first propose multiple architectures for a parallel crawler and identify fundamental issues related to parallel crawling. Based on this understanding, we then propose metrics to evaluate a parallel crawler, and compare the proposed architectures using 40 million pages collected from the Web. Our results clarify the relative merits of each architecture and provide a good guideline on when to adopt which architecture.", "fulltext": "INTRODUCTION\nA crawler is a program that downloads and stores Web\npages, often for a Web search engine. Roughly, a crawler\nstarts off by placing an initial set of URLs, S\n, in a queue,\nwhere all URLs to be retrieved are kept and prioritized.\nFrom this queue, the crawler gets a URL (in some order),\ndownloads the page, extracts any URLs in the downloaded\npage, and puts the new URLs in the queue. This process is\nrepeated until the crawler decides to stop. Collected pages\nare later used for other applications, such as a Web search\nengine or a Web cache.\nAs the size of the Web grows, it becomes more difficult to\nretrieve the whole or a significant portion of the Web using\na single process. Therefore, many search engines often run\nmultiple processes in parallel to perform the above task, so\nthat download rate is maximized. We refer to this type of\ncrawler as a parallel crawler.\nIn this paper we study how we should design a parallel\ncrawler, so that we can maximize its performance (e.g.,\ndownload rate) while minimizing the overhead from parallelization\n. We believe many existing search engines already\nuse some sort of parallelization, but there has been little\nscientific research conducted on this topic. Thus, little has\nbeen known on the tradeoffs among various design choices\nfor a parallel crawler. In particular, we believe the following\nissues make the study of a parallel crawler challenging and\ninteresting:\nOverlap: When multiple processes run in parallel to\ndownload pages, it is possible that different processes\ndownload the same page multiple times. One process\nmay not be aware that another process has already\ndownloaded the page. Clearly, such multiple downloads\nshould be minimized to save network bandwidth\nand increase the crawler's effectiveness. Then how can\nwe coordinate the processes to prevent overlap?\nQuality: Often, a crawler wants to download \"important\"\npages first, in order to maximize the \"quality\"\nof the downloaded collection. However, in a parallel\ncrawler, each process may not be aware of the whole\nimage of the Web that they have collectively downloaded\nso far. For this reason, each process may make\na crawling decision solely based on its own image of\nthe Web (that itself has downloaded) and thus make\na poor crawling decision. Then how can we make sure\nthat the quality of the downloaded pages is as good for\na parallel crawler as for a centralized one?\nCommunication bandwidth: In order to prevent overlap\n, or to improve the quality of the downloaded pages,\ncrawling processes need to periodically communicate\nto coordinate with each other. However, this communication\nmay grow significantly as the number of crawling\nprocesses increases. Exactly what do they need to\ncommunicate and how significant would this overhead\nbe? 
Can we minimize this communication overhead\nwhile maintaining the effectiveness of the crawler?\nWhile challenging to implement, we believe that a parallel\ncrawler has many important advantages, compared to a\nsingle-process crawler:\nScalability: Due to enormous size of the Web, it is often\nimperative to run a parallel crawler. A single-process\ncrawler simply cannot achieve the required download\nrate in certain cases.\nNetwork-load dispersion: Multiple crawling processes\nof a parallel crawler may run at geographically distant\nlocations, each downloading \"geographically-adjacent\"\npages. For example, a process in Germany may download\nall European pages, while another one in Japan\ncrawls all Asian pages. In this way, we can disperse\nthe network load to multiple regions. In particular,\nthis dispersion might be necessary when a single network\ncannot handle the heavy load from a large-scale\ncrawl.\nNetwork-load reduction: In addition to the dispersing\nload, a parallel crawler may actually reduce the network\nload.\nFor example, assume that a crawler in\nNorth America retrieves a page from Europe. To be\ndownloaded by the crawler, the page first has to go\nthrough the network in Europe, then the Europe-to-North\nAmerica inter-continental network and finally\nthe network in North America. Instead, if a crawling\nprocess in Europe collects all European pages, and\nif another process in North America crawls all North\nAmerican pages, the overall network load will be reduced\n, because pages go through only \"local\" networks.\nNote that the downloaded pages may need to be transferred\nlater to a central location, so that a central index\ncan be built. However, even in that case, we believe\nthat the transfer can be significantly smaller than the\noriginal page download traffic, by using some of the\nfollowing methods:\n\nCompression: Once the pages are collected and\nstored, it is easy to compress the data before sending\nthem to a central location.\n\nDifference: Instead of sending the entire image\nwith all downloaded pages, we may first take difference\nbetween previous image and the current\none and send only this difference. Since many\npages are static and do not change very often,\nthis scheme can significantly reduce the network\ntraffic.\n\nSummarization: In certain cases, we may need\nonly a central index, not the original pages themselves\n. In this case, we may extract the necessary\ninformation for the index construction (e.g., postings\nlist) and transfer this data only.\nTo build an effective web crawler, we clearly need to address\nmany more challenges than just parallelization. For\nexample, a crawler needs to figure out how often a page\nchanges and how often it would revisit the page in order to\nmaintain the page up to date [7, 10]. Also, it has to make\nsure that a particular Web site is not flooded with its HTTP\nrequests during a crawl [17, 12, 24]. In addition, it has to\ncarefully select what page to download and store in its limited\nstorage space in order to make the best use of its stored\ncollection of pages [9, 5, 11]. While all of these issues are\nimportant, we focus on the crawler parallelization in this\npaper, because this problem has been paid significantly less\nattention than the others.\nIn summary, we believe a parallel crawler has many advantages\nand poses interesting challenges. 
In particular, we\nbelieve our paper makes the following contributions:\nWe identify major issues and problems related to a\nparallel crawler and discuss how we can solve these\nproblems.\nWe present multiple techniques for a parallel crawler\nand discuss their advantages and disadvantages. As far\nas we know most of these techniques have not been described\nin open literature. (Very little is known about\nthe internals of crawlers, as they are closely guarded\nsecrets.)\nUsing a large dataset (40M web pages) collected from\nthe Web, we experimentally compare the design choices\nand study their tradeoffs quantitatively.\nWe propose various optimization techniques that can\nminimize the coordination effort between crawling processes\n, so that they can operate more independently\nwhile maximizing their effectiveness.\n1.1\nRelated work\nWeb crawlers have been studied since the advent of the\nWeb [18, 23, 4, 22, 14, 6, 19, 12, 9, 5, 11, 10, 7]. These\nstudies can be roughly categorized into one of the following\ntopics:\nGeneral architecture [22, 14, 6, 19, 12]: The work in\nthis category describes the general architecture of a\nWeb crawler and studies how a crawler works. For example\n, Reference [14] describes the architecture of the\nCompaq SRC crawler and its major design goals. Some\nof these studies briefly describe how the crawling task\nis parallelized. For instance, Reference [22] describes\na crawler that distributes individual URLs to multiple\nmachines, which download Web pages in parallel.\nThe downloaded pages are then sent to a central machine\n, on which links are extracted and sent back to\nthe crawling machines. However, these studies do not\ncarefully compare various issues related to a parallel\ncrawler and how design choices affect performance. In\nthis paper, we first identify multiple techniques for a\nparallel crawler and compare their relative merits using\nreal Web data.\nPage selection [9, 5, 11]: Since many crawlers can\ndownload only a small subset of the Web, crawlers need\nto carefully decide what page to download. By retrieving\n\"important\" or \"relevant\" pages early, a crawler\nmay improve the \"quality\" of the downloaded pages.\nThe studies in this category explore how a crawler can\ndiscover and identify \"important\" pages early, and propose\nvarious algorithms to achieve this goal. In our paper\n, we study how parallelization affects some of these\ntechniques and explain how we can fix the problems\nintroduced by parallelization.\nPage update [10, 7]: Web crawlers need to update the\ndownloaded pages periodically, in order to maintain\nthe pages up to date. The studies in this category\ndiscuss various page revisit policies to maximize the\n\"freshness\" of the downloaded pages.\nFor example,\nReference [7] studies how a crawler should adjust revisit\nfrequencies for pages when the pages change at\ndifferent rates. We believe these studies are orthogonal\nto what we discuss in this paper.\nThere also exists a significant body of literature studying\nthe general problem of parallel and distributed computing\n[21, 25]. Some of these studies focus on the design of efficient\nparallel algorithms. For example, References [20, 16]\n125\npresent various architectures for parallel computing, propose\nalgorithms that solve various problems (e.g., finding\nmaximum cliques) under the architecture, and study the\ncomplexity of the proposed algorithms. 
While the general\nprinciples described are being used in our work,\n1\nnone of\nthe existing solutions can be directly applied to the crawling\nproblem.\nAnother body of literature designs and implements distributed\noperating systems, where a process can use distributed\nresources transparently (e.g., distributed memory,\ndistributed file systems) [25, 1]. Clearly, such OS-level support\nmakes it easy to build a general distributed application\n, but we believe that we cannot simply run a centralized\ncrawler on a distributed OS to achieve parallelism. A web\ncrawler contacts millions of web sites in a short period of\ntime and consumes extremely large network, storage and\nmemory resources. Since these loads push the limit of existing\nhardwares, the task should be carefully partitioned\namong processes and they should be carefully coordinated.\nTherefore, a general-purpose distributed operating system\nthat does not understand the semantics of web crawling will\nlead to unacceptably poor performance.\nARCHITECTURE OF A PARALLEL CRAWLER\nIn Figure 1 we illustrate the general architecture of a parallel\ncrawler. A parallel crawler consists of multiple crawling\nprocesses, which we refer to as\nC-proc's. Each C-proc performs\nthe basic tasks that a single-process crawler conducts.\nIt downloads pages from the Web, stores the pages locally,\nextracts URLs from the downloaded pages and follows links.\nDepending on how the\nC-proc's split the download task,\nsome of the extracted links may be sent to other\nC-proc's.\nThe\nC-proc's performing these tasks may be distributed either\non the same local network or at geographically distant\nlocations.\nIntra-site parallel crawler: When all C-proc's run on\nthe same local network and communicate through a\nhigh speed interconnect (such as LAN), we call it an\nintra-site parallel crawler. In Figure 1, this scenario\ncorresponds to the case where all\nC-proc's run only on\nthe local network on the top. In this case, all\nC-proc's\nuse the same local network when they download pages\nfrom remote Web sites. Therefore, the network load\nfrom\nC-proc's is centralized at a single location where\nthey operate.\nDistributed crawler: When C-proc's run at geographically\ndistant locations connected by the Internet (or\na wide area network), we call it a distributed crawler.\nFor example, one\nC-proc may run in the US, crawling\nall US pages, and another\nC-proc may run in France,\ncrawling all European pages. As we discussed in the\nintroduction, a distributed crawler can disperse and\neven reduce the load on the overall network.\nWhen\nC-proc's run at distant locations and communicate\nthrough the Internet, it becomes important how\noften and how much\nC-proc's need to communicate.\nThe bandwidth between\nC-proc's may be limited and\n1\nFor example, we may consider that our proposed solution\nis a variation of \"divide and conquer\" approach, since we\npartition and assign the Web to multiple processes.\nC-proc\n. . .\nC-proc\nLocal connect\nC-proc\ncollected pages\nqueues of URLs to visit\n. . .\nC-proc\nLocal connect\nNET\nINTER\nFigure 1: General architecture of a parallel crawler\n1\nS\n2\nS\n1\n2\n(C )\n(C )\nb\na\nc\nd\ne\nf\ng\nh\ni\nFigure 2: Site S\n1\nis crawled by C\n1\nand site S\n2\nis\ncrawled by C\n2\nsometimes unavailable, as is often the case with the\nInternet.\nWhen multiple\nC-proc's download pages in parallel, different\nC-proc's may download the same page multiple times. 
In\norder to avoid this overlap,\nC-proc's need to coordinate with\neach other on what pages to download. This coordination\ncan be done in one of the following ways:\nIndependent: At one extreme, C-proc's may download\npages totally independently without any coordination.\nThat is, each\nC-proc starts with its own set of seed\nURLs and follows links without consulting with other\nC-proc's. In this scenario, downloaded pages may overlap\n, but we may hope that this overlap will not be significant\n, if all\nC-proc's start from different seed URLs.\nWhile this scheme has minimal coordination overhead\nand can be very scalable, we do not directly study\nthis option due to its overlap problem. Later we will\nconsider an improved version of this option, which significantly\nreduces overlap.\nDynamic assignment: When there exists a central coordinator\nthat logically divides the Web into small partitions\n(using a certain partitioning function) and dy-namically\nassigns each partition to a\nC-proc for download\n, we call it dynamic assignment.\nFor example, assume that a central coordinator partitions\nthe Web by the site name of a URL. That\nis, pages in the same site (e.g., http://cnn.com/top.\nhtml and http://cnn.com/content.html) belong to\n126\nthe same partition, while pages in different sites belong\nto different partitions. Then during a crawl, the\ncentral coordinator constantly decides on what partition\nto crawl next (e.g., the site cnn.com) and sends\nURLs within this partition (that have been discovered\nso far) to a\nC-proc as seed URLs. Given this request,\nthe\nC-proc downloads the pages and extracts links from\nthem. When the extracted links point to pages in the\nsame partition (e.g., http://cnn.com/article.html),\nthe\nC-proc follows the links, but if a link points to a\npage in another partition (e.g., http://nytimes.com/\nindex.html), the\nC-proc reports the link to the central\ncoordinator. The central coordinator later uses\nthis link as a seed URL for the appropriate partition.\nNote that the Web can be partitioned at various gran-ularities\n. At one extreme, the central coordinator may\nconsider every page as a separate partition and assign\nindividual URLs to\nC-proc's for download. In this case,\na\nC-proc does not follow links, because different pages\nbelong to separate partitions.\nIt simply reports all\nextracted URLs back to the coordinator. Therefore,\nthe communication between a\nC-proc and the central\ncoordinator may vary dramatically, depending on the\ngranularity of the partitioning function.\nStatic assignment: When the Web is partitioned and\nassigned to each\nC-proc before they start to crawl, we\ncall it static assignment. In this case, every\nC-proc\nknows which\nC-proc is responsible for which page during\na crawl, and the crawler does not need a central\ncoordinator. We will shortly discuss in more detail\nhow\nC-proc's operate under this scheme.\nIn this paper, we mainly focus on static assignment because\nof its simplicity and scalability, and defer the study of\ndynamic assignment to future work. Note that in dynamic\nassignment, the central coordinator may become the major\nbottleneck, because it has to maintain a large number of\nURLs reported from all\nC-proc's and has to constantly coordinate\nall\nC-proc's. Thus the coordinator itself may also\nneed to be parallelized.\nCRAWLING MODES FOR STATIC ASSIGNMENT\nUnder static assignment, each\nC-proc is responsible for\na certain partition of the Web and has to download pages\nwithin the partition. 
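To make the static-assignment setup concrete, the following is a minimal sketch, not code from the paper, of a single C-proc that owns one partition. The helpers fetch and extract_urls are hypothetical placeholders for a real HTTP downloader and link extractor, and MD5 over the host name is just one possible site-hash; inter-partition links are only collected here, since how they are treated (ignored, followed later, or transferred to their owner) depends on the crawling mode discussed next.

```python
# Minimal sketch of one crawling process (C-proc) under static assignment.
# `fetch` and `extract_urls` are hypothetical stand-ins for a real downloader
# and HTML link extractor; they are passed in so the sketch stays self-contained.
import hashlib
from collections import deque
from urllib.parse import urlparse


def site_partition(url: str, n_procs: int) -> int:
    """Map a URL to the C-proc responsible for its site (site-hash partitioning)."""
    site = urlparse(url).netloc
    return int(hashlib.md5(site.encode("utf-8")).hexdigest(), 16) % n_procs


class CProc:
    def __init__(self, proc_id: int, n_procs: int, seed_urls):
        self.proc_id = proc_id
        self.n_procs = n_procs
        # Only seeds belonging to our own partition go into the frontier.
        self.frontier = deque(u for u in seed_urls
                              if site_partition(u, n_procs) == proc_id)
        self.downloaded = set()
        self.inter_partition = []   # links owned by other C-proc's

    def run(self, fetch, extract_urls):
        """Crawl until the partition's reachable pages are exhausted."""
        while self.frontier:
            url = self.frontier.popleft()
            if url in self.downloaded:
                continue
            page = fetch(url)                   # download and store locally
            self.downloaded.add(url)
            for link in extract_urls(page):
                if site_partition(link, self.n_procs) == self.proc_id:
                    if link not in self.downloaded:
                        self.frontier.append(link)
                else:
                    # Mode-dependent: firewall drops these, cross-over follows
                    # them later, exchange transfers them to the owning C-proc.
                    self.inter_partition.append(link)
```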
However, some pages in the partition\nmay have links to pages in another partition. We refer to\nthis type of link as an inter-partition link. To illustrate how\na\nC-proc may handle inter-partition links, we use Figure 2 as\nour example. In the figure, we assume two\nC-proc's, C\n1\nand\nC\n2\n, are responsible for sites S\n1\nand S\n2\n, respectively. For\nnow, we assume that the Web is partitioned by sites and\nthat the Web has only S\n1\nand S\n2\n. Also, we assume that\neach\nC-proc starts its crawl from the root page of each site,\na and f.\n1. Firewall mode: In this mode, each\nC-proc downloads\nonly the pages within its partition and does not follow\nany inter-partition link. All inter-partition links are\nignored and thrown away. For example, the links a\n\ng, c\ng and h d in Figure 2 are ignored and thrown\naway by C\n1\nand C\n2\n.\nIn this mode, the overall crawler does not have any\noverlap in the downloaded pages, because a page can\nbe downloaded by only one\nC-proc, if ever. However,\nthe overall crawler may not download all pages that it\nhas to download, because some pages may be reachable\nonly through inter-partition links. For example,\nin Figure 2, C\n1\ncan download a, b and c, but not d and\ne, because they can be reached only through h\nd\nlink. However,\nC-proc's can run quite independently\nin this mode, because they do not conduct any run-time\ncoordination or URL exchanges.\n2. Cross-over mode: Primarily, each\nC-proc downloads\npages within its partition, but when it runs out of\npages in its partition, it also follows inter-partition\nlinks. For example, consider C\n1\nin Figure 2. Process\nC\n1\nfirst downloads pages a, b and c by following links\nfrom a. At this point, C\n1\nruns out of pages in S\n1\n, so\nit follows a link to g and starts exploring S\n2\n. After\ndownloading g and h, it discovers a link to d in S\n1\n, so\nit comes back to S\n1\nand downloads pages d and e.\nIn this mode, downloaded pages may clearly overlap\n(pages g and h are downloaded twice), but the overall\ncrawler can download more pages than the firewall\nmode (C\n1\ndownloads d and e in this mode). Also, as\nin the firewall mode,\nC-proc's do not need to communicate\nwith each other, because they follow only the\nlinks discovered by themselves.\n3. Exchange mode: When\nC-proc's periodically and\nincrementally exchange inter-partition URLs, we say\nthat they operate in an exchange mode. Processes do\nnot follow inter-partition links.\nFor example, C\n1\nin Figure 2 informs C\n2\nof page g after\nit downloads page a (and c) and C\n2\ntransfers the URL\nof page d to C\n1\nafter it downloads page h. Note that\nC\n1\ndoes not follow links to page g. It only transfers\nthe links to C\n2\n, so that C\n2\ncan download the page. In\nthis way, the overall crawler can avoid overlap, while\nmaximizing coverage.\nNote that the firewall and the cross-over modes give\nC-proc's\nmuch independence (C-proc's do not need to communicate\nwith each other), but they may download the same\npage multiple times, or may not download some pages. In\ncontrast, the exchange mode avoids these problems but requires\nconstant URL exchange between\nC-proc's.\n3.1\nURL exchange minimization\nTo reduce URL exchange, a crawler based on the exchange\nmode may use some of the following techniques:\n1. Batch communication: Instead of transferring an\ninter-partition URL immediately after it is discovered,\na\nC-proc may wait for a while, to collect a set of URLs\nand send them in a batch. 
That is, with batching, a\nC-proc collects all inter-partition URLs until it downloads\nk pages. Then it partitions the collected URLs\nand sends them to an appropriate\nC-proc. Once these\nURLs are transferred, the\nC-proc then purges them\nand starts to collect a new set of URLs from the next\ndownloaded pages. Note that a\nC-proc does not maintain\nthe list of all inter-partition URLs discovered so\nfar. It only maintains the list of inter-partition links\n127\nin the current batch, in order to minimize the memory\noverhead for URL storage.\nThis batch communication has various advantages over\nincremental communication. First, it incurs less communication\noverhead, because a set of URLs can be\nsent in a batch, instead of sending one URL per message\n. Second, the absolute number of exchanged URLs\nwill also decrease. For example, consider C\n1\nin Figure\n2. The link to page g appears twice, in page a and\nin page c. Therefore, if C\n1\ntransfers the link to g after\ndownloading page a, it needs to send the same URL\nagain after downloading page c.\n2\nIn contrast, if C\n1\nwaits until page c and sends URLs in batch, it needs\nto send the URL for g only once.\n2. Replication: It is known that the number of incoming\nlinks to pages on the Web follows a Zipfian distribution\n[3, 2, 26]. That is, a small number of Web\npages have an extremely large number of links pointing\nto them, while a majority of pages have only a small\nnumber of incoming links.\nThus, we may significantly reduce URL exchanges, if\nwe replicate the most \"popular\" URLs at each\nC-proc\n(by most popular, we mean the URLs with most incoming\nlinks) and stop transferring them between\nC-proc's\n. That is, before we start crawling pages, we\nidentify the most popular k URLs based on the image\nof the Web collected in a previous crawl. Then\nwe replicate these URLs at each\nC-proc, so that the\nC-proc's do not exchange them during a crawl. Since\na small number of Web pages have a large number of\nincoming links, this scheme may significantly reduce\nURL exchanges between\nC-proc's, even if we replicate\na small number of URLs.\nNote that some of the replicated URLs may be used\nas the seed URLs for a\nC-proc. That is, if some URLs\nin the replicated set belong to the same partition that\na\nC-proc is responsible for, the C-proc may use those\nURLs as its seeds rather than starting from other pages.\nAlso note that it is possible that each\nC-proc tries to\ndiscover popular URLs on the fly during a crawl, instead\nof identifying them based on the previous image\n. For example, each\nC-proc may keep a \"cache\"\nof recently seen URL entries. This cache may pick\nup \"popular\" URLs automatically, because the popular\nURLs show up repeatedly. However, we believe\nthat the popular URLs from a previous crawl will be a\ngood approximation for the popular URLs in the current\nWeb; Most popular Web pages (such as Yahoo!)\nmaintain their popularity for a relatively long period\nof time, even if their exact popularity may change\nslightly.\n3.2\nPartitioning function\nSo far, we have mainly assumed that the Web pages are\npartitioned by Web sites. Clearly, there exists a multitude\nof ways to partition the Web, including the following:\n1. URL-hash based: Based on the hash value of the\nURL of a page, we assign the page to a\nC-proc. In\n2\nWhen it downloads page c, it does not remember whether\nthe link to g has been already sent.\nthis scheme, pages in the same site can be assigned\nto different\nC-proc's. 
Therefore, the locality of link structure is not reflected in the partition, and there will be many inter-partition links. (Footnote 3: According to our experiments, about 90% of the links in a page point to pages in the same site on average.)

2. Site-hash based: Instead of computing the hash value on an entire URL, we compute the hash value only on the site name of a URL (e.g., cnn.com in http://cnn.com/index.html) and assign the page to a C-proc. In this scheme, the pages in the same site will be allocated to the same partition. Therefore, only some of the inter-site links will be inter-partition links, and thus we can reduce the number of inter-partition links quite significantly compared to the URL-hash based scheme.

3. Hierarchical: Instead of using a hash value, we may partition the Web hierarchically based on the URLs of pages. For example, we may divide the Web into three partitions (the pages in the .com domain, the .net domain, and all other pages) and allocate them to three C-proc's. Even further, we may decompose the Web by language or country (e.g., .mx for Mexico). Because pages hosted in the same domain or country may be more likely to link to pages in the same domain, this scheme may have even fewer inter-partition links than the site-hash based scheme.

In this paper, we do not consider the URL-hash based scheme, because it generates a large number of inter-partition links. When the crawler uses the URL-hash based scheme, C-proc's need to exchange a much larger number of URLs (exchange mode), and the coverage of the overall crawler can be much lower (firewall mode).

In addition, in our later experiments we will mainly use the site-hash based scheme as our partitioning function. We chose this option because it is much simpler to implement, and because it captures the core issues that we want to study; a small code sketch contrasting the two hash-based schemes is given below. For example, under the hierarchical scheme it is not easy to divide the Web into equal-size partitions, while it is relatively straightforward under the site-hash based scheme. (Footnote 4: While the sizes of individual Web sites vary, the sizes of partitions are similar, because each partition contains many Web sites and their average sizes are similar among partitions.) Also, we believe we can interpret the results from the site-hash based scheme as an upper/lower bound for the hierarchical scheme. For instance, assuming Web pages link to more pages in the same domain, the number of inter-partition links will be lower in the hierarchical scheme than in the site-hash based scheme (although we could not confirm this trend in our experiments).

In Figure 3, we summarize the options that we have discussed so far. The right-hand table in the figure shows a more detailed view of the static coordination scheme. In the diagram, we highlight the main focus of our paper in dark grey. That is, we mainly study the static coordination scheme (the third column in the left-hand table) and we use the site-hash based partitioning for our experiments (the second row in the second table). However, during our discussion we will also briefly explore the implications of other options.

[Figure 3: Summary of the options discussed -- coordination (independent, dynamic, static), crawling mode (firewall, cross-over, exchange), partitioning (URL-hash, site-hash, hierarchical) and URL exchange (batch, replication, none), with the main focus of the paper highlighted and the remaining options marked as also discussed or not discussed further.]
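The sketch below, not taken from the paper, contrasts the URL-hash and site-hash partitioning functions on a toy link set and counts how many links become inter-partition links under each; the toy URLs and the choice of MD5 as the hash function are assumptions made only for the example.

```python
# Compare URL-hash vs. site-hash partitioning on a small set of links.
import hashlib
from urllib.parse import urlparse


def url_partition(url: str, n: int) -> int:
    """URL-hash based: pages of one site may land in different partitions."""
    return int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16) % n


def site_partition(url: str, n: int) -> int:
    """Site-hash based: hash only the host name, so a site stays together."""
    host = urlparse(url).netloc
    return int(hashlib.md5(host.encode("utf-8")).hexdigest(), 16) % n


def count_inter_partition(links, partition, n):
    """`links` is a list of (source_url, target_url) pairs."""
    return sum(1 for src, dst in links if partition(src, n) != partition(dst, n))


if __name__ == "__main__":
    # Toy link structure: most links stay within a site, as the paper observes.
    links = [
        ("http://cnn.com/top.html", "http://cnn.com/content.html"),
        ("http://cnn.com/top.html", "http://cnn.com/article.html"),
        ("http://cnn.com/content.html", "http://nytimes.com/index.html"),
        ("http://nytimes.com/index.html", "http://nytimes.com/world.html"),
    ]
    n = 4
    print("URL-hash  inter-partition links:", count_inter_partition(links, url_partition, n))
    print("Site-hash inter-partition links:", count_inter_partition(links, site_partition, n))
```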
For instance, the firewall mode is an \"improved\"\nversion of the independent coordination scheme (in the first\ntable), so our study on the firewall mode will show the implications\nof the independent coordination scheme. Also, we\nroughly estimate the performance of the URL-hash based\nscheme (first row in the second table) when we discuss the\nresults from the site-hash based scheme.\nGiven our table of crawler design space, it would be very\ninteresting to see what options existing search engines selected\nfor their own crawlers. Unfortunately, this information\nis impossible to obtain in most cases because companies\nconsider their technologies proprietary and want to keep\nthem secret. The only two crawler designs that we know of\nare the prototype Google crawler [22] (when it was developed\nat Stanford) and the Mercator crawler [15] at Compaq.\nThe prototype google crawler used the intra-site, static and\nsite-hash based scheme and ran in exchange mode [22]. The\nMercator crawler uses the site-based hashing scheme.\nEVALUATION MODELS\nIn this section, we define metrics that will let us quantify\nthe advantages or disadvantages of different parallel crawling\nschemes. These metrics will be used later in our experiments\n.\n1. Overlap: When multiple\nC-proc's are downloading\nWeb pages simultaneously, it is possible that different\nC-proc's download the same page multiple times.\nMultiple downloads of the same page are clearly undesirable\n.\nMore precisely, we define the overlap of downloaded\npages as\nN-I\nI\n. Here, N represents the total number of\npages downloaded by the overall crawler, and I represents\nthe number of unique pages downloaded, again,\nby the overall crawler. Thus, the goal of a parallel\ncrawler is to minimize the overlap.\nNote that a parallel crawler does not have an overlap\nproblem, if it is based on the firewall mode (Section 3,\nItem 1) or the exchange mode (Section 3, Item 3). In\nthese modes, a\nC-proc downloads pages only within its\nown partition, so the overlap is always zero.\n2. Coverage: When multiple\nC-proc's run independently,\nit is possible that they may not download all pages that\nthey have to. In particular, a crawler based on the firewall\nmode (Section 3, Item 1) may have this problem,\nbecause its\nC-proc's do not follow inter-partition links\nnor exchange the links with others.\nTo formalize this notion, we define the coverage of\ndownloaded pages as\nI\nU\n, where U represents the total\nnumber of pages that the overall crawler has to download\n, and I is the number of unique pages downloaded\nby the overall crawler. For example, in Figure 2, if C\n1\ndownloaded pages a, b and c, and if C\n2\ndownloaded\npages f through i, the coverage of the overall crawler\nis\n7\n9\n= 0.77, because it downloaded 7 pages out of 9.\n3. Quality: Often, crawlers cannot download the whole\nWeb, and thus they try to download an \"important\" or\n\"relevant\" section of the Web. For example, a crawler\nmay have storage space only for 1 million pages and\nmay want to download the \"most important\" 1 million\npages. To implement this policy, a crawler needs\na notion of \"importance\" of pages, often called an importance\nmetric [9].\nFor example, let us assume that the crawler uses backlink\ncount as its importance metric. That is, the crawler\nconsiders a page p important when a lot of other pages\npoint to it. Then the goal of the crawler is to download\nthe most highly-linked 1 million pages. 
To achieve this goal, a single-process crawler may use the following method [9]: the crawler constantly keeps track of how many backlinks each page has from the pages that it has already downloaded, and visits the page with the highest backlink count first. Clearly, the pages downloaded in this way may not be the top 1 million pages, because the page selection is not based on the entire Web, only on what has been seen so far. Thus, we may formalize the notion of "quality" of downloaded pages as follows [9]:

First, we assume a hypothetical oracle crawler, which knows the exact importance of every page under a certain importance metric. We assume that the oracle crawler downloads the most important N pages in total, and use P_N to represent that set of N pages. We also use A_N to represent the set of N pages that an actual crawler would download, which would not necessarily be the same as P_N. Then we define |A_N ∩ P_N| / |P_N| as the quality of the pages downloaded by the actual crawler. Under this definition, the quality represents the fraction of the true top N pages that are downloaded by the crawler.

Note that the quality of a parallel crawler may be worse than that of a single-process crawler, because many importance metrics depend on the global structure of the Web (e.g., backlink count). That is, each C-proc in a parallel crawler may know only the pages that are downloaded by itself, and thus has less information on page importance than a single-process crawler does. On the other hand, a single-process crawler knows all pages it has downloaded so far. Therefore, a C-proc in a parallel crawler may make worse crawling decisions than a single-process crawler.

In order to avoid this quality problem, C-proc's need to periodically exchange information on page importance. For example, if the backlink count is the importance metric, a C-proc may periodically notify other C-proc's of how many pages in its partition have links to pages in other partitions.

Note that this backlink exchange can be naturally incorporated in an exchange mode crawler (Section 3, Item 3). In this mode, crawling processes exchange inter-partition URLs periodically, so a C-proc can simply count how many inter-partition links it receives from other C-proc's to count backlinks originating in other partitions. More precisely, if the crawler uses the batch communication technique (Section 3.1, Item 1), process C1 would send a message like [http://cnn.com/index.html, 3] to C2, to notify that C1 has seen 3 links to the page in the current batch. (Footnote 5: If the C-proc's send inter-partition URLs incrementally after every page, they can send the URL only, and other C-proc's can simply count these URLs.) On receipt of this message, C2 then increases the backlink count for the page by 3 to reflect the inter-partition links. By incorporating this scheme, we believe that the quality of the exchange mode will be better than that of the firewall mode or the cross-over mode. A small sketch of how such batched backlink-count messages can be built is given below.

However, note that the quality of an exchange mode crawler may vary depending on how often it exchanges backlink messages. For instance, if crawling processes exchange backlink messages after every page download, they will have essentially the same backlink information as a single-process crawler does. (They know backlink counts from all pages that have been downloaded.)
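The following is a minimal sketch, not the authors' implementation, of the batched backlink-count message described above: during a batch, a C-proc tallies how many links it has seen to pages owned by other C-proc's, then emits one [URL, count] pair per page instead of one message per link. The function owner_of stands in for whatever partitioning function (e.g., a site hash) maps a URL to the C-proc responsible for it.

```python
# Build and apply batched [URL, backlink count] messages between C-proc's.
from collections import Counter, defaultdict


def build_backlink_messages(inter_partition_links, owner_of):
    """
    inter_partition_links: iterable of target URLs seen in the current batch
    owner_of(url) -> id of the C-proc responsible for url
    Returns {destination C-proc id: [(url, backlink_count), ...]}
    """
    counts = Counter(inter_partition_links)          # duplicate links collapse here
    messages = defaultdict(list)
    for url, cnt in counts.items():
        messages[owner_of(url)].append((url, cnt))
    return dict(messages)


def apply_backlink_messages(local_backlinks, pairs):
    """Receiver side: fold the reported counts into the local importance table."""
    for url, cnt in pairs:
        local_backlinks[url] = local_backlinks.get(url, 0) + cnt
    return local_backlinks


if __name__ == "__main__":
    # Example: C1 saw three links to a page owned by C2 in the current batch.
    owner = lambda url: 2 if "cnn.com" in url else 1   # toy ownership function
    msgs = build_backlink_messages(
        ["http://cnn.com/index.html"] * 3 + ["http://nytimes.com/index.html"],
        owner)
    print(msgs)   # e.g. {2: [('http://cnn.com/index.html', 3)], 1: [...]}
```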
Therefore, the quality of the downloaded pages would be virtually the same as that of a single-process crawler. In contrast, if C-proc's rarely exchange backlink messages, they do not have "accurate" backlink counts for the downloaded pages, so they may make poor crawling decisions, resulting in poor quality. Later, we will study how often C-proc's should exchange backlink messages in order to maximize the quality.

4. Communication overhead: The C-proc's in a parallel crawler need to exchange messages to coordinate their work. In particular, C-proc's based on the exchange mode (Section 3, Item 3) swap their inter-partition URLs periodically. To quantify how much communication is required for this exchange, we define communication overhead as the average number of inter-partition URLs exchanged per downloaded page. For example, if a parallel crawler has downloaded 1,000 pages in total and its C-proc's have exchanged 3,000 inter-partition URLs, its communication overhead is 3,000/1,000 = 3. Note that crawlers based on the firewall and the cross-over modes do not have any communication overhead, because they do not exchange any inter-partition URLs.

In Table 1, we compare the relative merits of the three crawling modes (Section 3, Items 1-3). In the table, "Good" means that the mode is expected to perform relatively well for that metric, and "Bad" means that it may perform worse compared to other modes. For instance, the firewall mode does not exchange any inter-partition URLs (Communication: Good) and downloads pages only once (Overlap: Good), but it may not download every page (Coverage: Bad). Also, because C-proc's do not exchange inter-partition URLs, the downloaded pages may be of lower quality than those of an exchange mode crawler. Later, we will examine these issues more quantitatively through experiments based on real Web data.

Table 1: Comparison of three crawling modes
Mode        Coverage  Overlap  Quality  Communication
Firewall    Bad       Good     Bad      Good
Cross-over  Good      Bad      Bad      Good
Exchange    Good      Good     Good     Bad

DESCRIPTION OF DATASET

We have discussed various issues related to a parallel crawler and identified multiple alternatives for its architecture. In the remainder of this paper, we quantitatively study these issues through experiments conducted on real Web data.

In all of the following experiments, we used 40 million Web pages in our Stanford WebBase repository. Because the properties of this dataset may significantly impact the results of our experiments, readers might be interested in how we collected these pages.

We downloaded the pages using our Stanford WebBase crawler in December 1999 over a period of 2 weeks. In downloading the pages, the WebBase crawler started with the URLs listed in Open Directory (http://www.dmoz.org) and followed links. We decided to use the Open Directory URLs as seed URLs because these pages are the ones that are considered "important" by its maintainers. In addition, some of our local WebBase users were keenly interested in the Open Directory pages and explicitly requested that we cover them. The total number of URLs in the Open Directory was around 1 million at that time.
Then conceptually, the WebBase crawler downloaded all these pages, extracted URLs from the downloaded pages, and followed links in a breadth-first manner. (The WebBase crawler uses various techniques to expedite and prioritize the crawling process, but we believe these optimizations do not affect the final dataset significantly.)

Our dataset is relatively "small" (40 million pages) compared to the full Web, but keep in mind that using a significantly larger dataset would have made many of our experiments prohibitively expensive. As we will see, each of the graphs we present studies multiple configurations, and for each configuration multiple crawler runs were made to obtain statistically valid data points. Each run involves simulating how one or more C-proc's would visit the 40 million pages. Such detailed simulations are inherently very time consuming. It is clearly difficult to predict what would happen for a larger dataset. In the extended version of this paper [8], we examine this data size issue a bit more carefully and discuss whether a larger dataset would have changed our conclusions.

FIREWALL MODE AND COVERAGE

A firewall mode crawler (Section 3, Item 1) has minimal communication overhead, but it may have coverage and quality problems (Section 4). In this section, we quantitatively study the effectiveness of a firewall mode crawler using the 40 million pages in our repository. In particular, we estimate the coverage (Section 4, Item 2) of a firewall mode crawler when it employs n C-proc's in parallel. (We discuss the quality issue of a parallel crawler later.)

In our experiments, we considered the 40 million pages within our WebBase repository as the entire Web, and we used site-hash based partitioning (Section 3.2, Item 2). As the seed URLs, each C-proc was given 5 random URLs from its own partition, so 5n seed URLs were used in total by the overall crawler. (We discuss the effect of the number of seed URLs shortly.) Since the crawler ran in firewall mode, C-proc's followed only intra-partition links, not inter-partition links. Under these settings, we let the C-proc's run until they ran out of URLs. After this simulated crawling, we measured the overall coverage of the crawler. We repeated these experiments multiple times with different sets of 5n random seed URLs; in all of the runs, the results were essentially the same. (A simplified sketch of this simulation loop is given below.)

In Figure 4, we summarize the results from the experiments. The horizontal axis represents n, the number of parallel C-proc's, and the vertical axis shows the coverage of the overall crawler for the given experiment. Note that the coverage is only 0.9 even when n = 1 (a single process). This result is because the crawler in our experiment started with only 5 URLs, while the actual dataset was collected with 1 million seed URLs. Thus, some of the 40 million pages were unreachable from the 5 seed URLs.

[Figure 4: Number of processes vs. Coverage -- coverage on the vertical axis, number of C-proc's n (2 to 64) on the horizontal axis.]

[Figure 5: Number of seed URLs vs. Coverage -- coverage on the vertical axis, total number of seed URLs s (64 to about 30,000) on the horizontal axis, with one curve each for 2, 8, 32 and 64 processes.]

From the figure it is clear that the coverage decreases as the number of processes increases.
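The simulation loop referenced above can be sketched roughly as follows; this is a simplification written for illustration, not the WebBase simulator. Each C-proc performs a breadth-first crawl from its own seeds, inter-partition links are dropped as in firewall mode, and coverage is reported as I/U, the fraction of all pages that were downloaded. The in-memory adjacency-list graph is an assumption that only makes sense for a small toy dataset.

```python
# Simplified firewall-mode coverage simulation.
from collections import deque


def firewall_coverage(graph, seeds_per_proc, partition):
    """
    graph:          {url: [outgoing urls]} -- the whole (toy) dataset, U = len(graph)
    seeds_per_proc: {proc_id: [seed urls]}
    partition(url): returns the id of the C-proc responsible for url
    Returns coverage = I / U, where I is the number of unique pages downloaded.
    """
    downloaded = set()
    for proc_id, seeds in seeds_per_proc.items():
        frontier = deque(u for u in seeds if partition(u) == proc_id)
        while frontier:
            url = frontier.popleft()
            if url in downloaded:
                continue
            downloaded.add(url)
            for link in graph.get(url, []):
                # Firewall mode: inter-partition links are ignored entirely.
                if partition(link) == proc_id and link not in downloaded:
                    frontier.append(link)
    return len(downloaded) / len(graph)


if __name__ == "__main__":
    # Tiny example mirroring Figure 2: C1 owns pages a-e, C2 owns f-i, and the
    # inter-partition links are a->g, c->g and h->d.
    graph = {"a": ["b", "c", "g"], "b": [], "c": ["g"], "d": ["e"], "e": [],
             "f": ["g", "i"], "g": ["h"], "h": ["d", "i"], "i": []}
    part = lambda p: 1 if p in "abcde" else 2
    print(firewall_coverage(graph, {1: ["a"], 2: ["f"]}, part))  # 7/9, as in the text
```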
This trend is because the number of inter-partition links increases as the Web is split into smaller partitions, and thus more pages are reachable only through inter-partition links.

From this result we can see that we may run a crawler in firewall mode without much decrease in coverage when fewer than 4 C-proc's are used. For example, for the 4-process case, the coverage decreases only 10% from the single-process case. At the same time, we can also see that the firewall mode crawler becomes quite ineffective with a large number of C-proc's: less than 10% of the Web can be downloaded when 64 C-proc's run together, each starting with 5 seed URLs.

Clearly, coverage may depend on the number of seed URLs that each C-proc starts with. To study this issue, we also ran experiments varying the number of seed URLs, s, and we show the results in Figure 5. The horizontal axis in the graph represents s, the total number of seed URLs that the overall crawler used, and the vertical axis shows the coverage for that experiment. For example, when s = 128, the overall crawler used 128 total seed URLs, each C-proc starting with 2 seed URLs when 64 C-proc's ran in parallel. We performed the experiments for the 2, 8, 32 and 64 C-proc cases and plotted their coverage values. From this figure, we can observe the following trends:

When a large number of C-proc's run in parallel (e.g., 32 or 64), the total number of seed URLs affects the coverage very significantly. For example, when 64 processes run in parallel, the coverage value jumps from 0.4% to 10% if the number of seed URLs increases from 64 to 1024.

When only a small number of processes run in parallel (e.g., 2 or 8), coverage is not significantly affected by the number of seed URLs. While coverage increases slightly as s increases, the improvement is marginal.

Based on these results, we draw the following conclusions:

1. When a relatively small number of C-proc's are running in parallel, a crawler using the firewall mode provides good coverage. In this case, the crawler may start with only a small number of seed URLs, because coverage is not much affected by the number of seed URLs.

2. The firewall mode is not a good choice if the crawler wants to download every single page on the Web. The crawler may miss some portion of the Web, particularly when it runs many C-proc's in parallel.

Example 1. (Generic search engine) To illustrate how our results could guide the design of a parallel crawler, consider the following example. Assume that to operate a Web search engine we need to download 1 billion pages in one month. (Footnote 6: Currently the Web is estimated to have around 1 billion pages.) Each machine that we plan to run our C-proc's on has a 10 Mbps link to the Internet, and we can use as many machines as we want.

Given that the average size of a web page is around 10K bytes, we roughly need to download 10^4 × 10^9 = 10^13 bytes in one month. This download rate corresponds to 34 Mbps, and we need 4 machines (thus 4 C-proc's) to obtain the rate. Given the results of our experiment (Figure 4), we may estimate that the coverage will be at least 0.8 with 4 C-proc's. Therefore, in this scenario, the firewall mode may be good enough, unless it is very important to download the "entire" Web.

[Figure 6: Coverage vs. Overlap for a cross-over mode crawler -- overlap (0.5 to 3) on the vertical axis, coverage (0.2 to 1) on the horizontal axis, with one curve for each number of C-proc's n = 2, 4, 8, 16, 32, 64.]

Example 2.
(High freshness) As a second example, let us now assume that we have strong "freshness" requirement on the 1 billion pages and need to revisit every page once every week, not once every month. This new scenario requires approximately 140 Mbps for page download, and we need to run 14 C-proc's. In this case, the coverage of the overall crawler decreases to less than 0.5 according to Figure 4. Of course, the coverage could be larger than our conservative estimate, but to be safe one would probably want to consider using a crawler mode different than the firewall mode.

CROSS-OVER MODE AND OVERLAP

In this section, we study the effectiveness of a cross-over mode crawler (Section 3, Item 2). A cross-over crawler may yield improved coverage of the Web, since it follows inter-partition links when a C-proc runs out of URLs in its own partition. However, this mode incurs overlap in downloaded pages (Section 4, Item 1), because a page can be downloaded by multiple C-proc's. Therefore, the crawler increases its coverage at the expense of overlap in the downloaded pages.

In Figure 6, we show the relationship between the coverage and the overlap of a cross-over mode crawler obtained from the following experiments. We partitioned the 40M pages using site-hash partitioning and assigned them to n C-proc's. Each of the n C-proc's then was given 5 random seed URLs from its partition and followed links in the cross-over mode. During this experiment, we measured how much overlap the overall crawler incurred when its coverage reached various points. The horizontal axis in the graph shows the coverage at a particular time and the vertical axis shows the overlap at the given coverage. We performed the experiments for n = 2, 4, ..., 64.

Note that in most cases the overlap stays at zero until the coverage becomes relatively large. For example, when n = 16, the overlap is zero until coverage reaches 0.5. We can understand this result by looking at the graph in Figure 4. According to that graph, a crawler with 16 C-proc's can cover around 50% of the Web by following only intra-partition links. Therefore, even a cross-over mode crawler will follow only intra-partition links until its coverage reaches that point. Only after that, each C-proc starts to follow inter-partition links, thus increasing the overlap. For this reason, we believe that the overlap would have been much worse in the beginning of the crawl, if we adopted the independent model (Section 2). By applying the partitioning scheme to C-proc's, we make each C-proc stay in its own partition in the beginning and suppress the overlap as long as possible.

While the crawler in the cross-over mode is much better than one based on the independent model, it is clear that the cross-over crawler still incurs quite significant overlap. For example, when 4 C-proc's run in parallel in the cross-over mode, the overlap becomes almost 2.5 to obtain coverage close to 1. For this reason, we do not recommend the cross-over mode, unless it is absolutely necessary to download every page without any communication between C-proc's.

[Figure 7: Number of crawling processes vs. number of URLs exchanged per page -- communication overhead (up to 3) on the vertical axis, number of C-proc's n (2 to 64) on the horizontal axis, with one curve for the URL-hash scheme and one for the site-hash scheme.]

EXCHANGE MODE AND COMMUNICATION

To avoid the overlap and coverage problems, an exchange mode crawler (Section 3, Item 3) constantly exchanges inter-partition URLs between C-proc's.
In this section, we study\nthe communication overhead (Section 4, Item 4) of an exchange\nmode crawler and how much we can reduce it by\nreplicating the most popular k URLs. For now, let us assume\nthat a\nC-proc immediately transfers inter-partition URLs.\n(We will discuss batch communication later when we discuss\nthe quality of a parallel crawler.)\nIn the experiments, again, we split the 40 million pages\ninto n partitions based on site-hash values and ran n\nC-proc's\nin the exchange mode. At the end of the crawl, we\nmeasured how many URLs had been exchanged during the\ncrawl. We show the results in Figure 7. In the figure, the\nhorizontal axis represents the number of parallel\nC-proc's,\nn, and the vertical axis shows the communication overhead\n(the average number of URLs transferred per page). For\ncomparison purposes, the figure also shows the overhead for\na URL-hash based scheme, although the curve is clipped at\nthe top because of its large overhead values.\nTo explain the graph, we first note that an average page\nhas 10 out-links, and about 9 of them point to pages in\n132\nthe same site. Therefore, the 9 links are internally followed\nby a\nC-proc under site-hash partitioning. Only the remaining\n1 link points to a page in a different site and may be\nexchanged between processes. Figure 7 indicates that this\nURL exchange increases with the number of processes. For\nexample, the\nC-proc's exchanged 0.4 URLs per page when\n2 processes ran, while they exchanged 0.8 URLs per page\nwhen 16 processes ran. Based on the graph, we draw the\nfollowing conclusions:\nThe site-hash based partitioning scheme significantly\nreduces communication overhead, compared to the URL-hash\nbased scheme. We need to transfer only up to\none link per page (or 10% of the links), which is significantly\nsmaller than the URL-hash based scheme.\nFor example, when we ran 2\nC-proc's using the URL-hash\nbased scheme the crawler exchanged 5 links per\npage under the URL-hash based scheme, which was\nsignificantly larger than 0.5 links per page under the\nsite-hash based scheme.\nThe network bandwidth used for the URL exchange is\nrelatively small, compared to the actual page download\nbandwidth.\nUnder the site-hash based scheme,\nat most 1 URL will be exchanged per page, which is\nabout 40 bytes.\n7\nGiven that the average size of a Web\npage is 10 KB, the URL exchange consumes less than\n40/10K = 0.4% of the total network bandwidth.\nHowever, the overhead of the URL exchange on the\noverall system can be quite significant. The processes\nneed to exchange up to one message per page, and the\nmessage has to go through the TCP/IP network stack\nat the sender and the receiver. Thus it is copied to and\nfrom kernel space twice, incurring two context switches\nbetween the kernel and the user mode. Since these operations\npose significant overhead even if the message\nsize is small, the overall overhead can be important if\nthe processes exchange one message per every downloaded\npage.\nIn the extended version of this paper [8], we also study how\nmuch we can reduce this overhead by replication. In short,\nour results indicate that we can get significant reduction in\ncommunication cost (between 40% 50% reduction) when\nwe replication the most popular 10,000 100,000 URLs\nin each\nC-proc. 
When we replicated more URLs, the cost reduction was not as dramatic as for the first 100,000 URLs. Thus, we recommend replicating 10,000 to 100,000 URLs.

(Footnote 7: In our estimation, an average URL was about 40 bytes long.)

QUALITY AND BATCH COMMUNICATION

As we discussed, the quality (Section 4, Item 3) of a parallel crawler can be worse than that of a single-process crawler, because each C-proc may make crawling decisions solely based on the information collected within its own partition. We now study this quality issue. In the discussion we also study the impact of the batch communication technique (Section 3.1, Item 1) on quality.

Throughout the experiments in this section, we assume that the crawler uses the number of backlinks to page p as the importance of p, or I(p). That is, if 1000 pages on the Web have links to page p, the importance of p is I(p) = 1000. Clearly, there exist many other ways to define the importance of a page, but we use this metric because it (or its variations) is being used by some existing search engines [22, 13]. Also, note that this metric depends on the global structure of the Web. If we use an importance metric that depends solely on a page itself, not on the global structure of the Web, the quality of a parallel crawler will be essentially the same as that of a single crawler, because each C-proc in a parallel crawler can make good decisions based on the pages that it has downloaded.

Under the backlink metric, each C-proc in our experiments counted how many backlinks a page has from the downloaded pages and visited the page with the most backlinks first. Remember that the C-proc's need to periodically exchange messages to inform others of the inter-partition backlinks. Depending on how often they exchange messages, the quality of the downloaded pages will differ. For example, if C-proc's never exchange messages, the quality will be the same as that of a firewall mode crawler, and if they exchange messages after every downloaded page, the quality will be similar to that of a single-process crawler.

To study these issues, we compared the quality of the downloaded pages when C-proc's exchanged backlink messages at various intervals, and we show the results in Figures 8(a), 9(a) and 10(a). Each graph shows the quality achieved by the overall crawler when it downloaded a total of 500K, 2M, and 8M pages, respectively. The horizontal axis in the graphs represents the total number of URL exchanges during a crawl, x, and the vertical axis shows the quality for the given experiment. For example, when x = 1, the C-proc's exchanged backlink count information only once in the middle of the crawl. Therefore, the case x = 0 represents the quality of a firewall mode crawler, and the case x → ∞ shows the quality of a single-process crawler. In Figures 8(b), 9(b) and 10(b), we also show the communication overhead (Section 4, Item 4); that is, the average number of [URL, backlink count] pairs exchanged per downloaded page.

[Figure 8: Crawlers downloaded 500K pages (1.2% of 40M) -- (a) number of URL exchanges x vs. quality, (b) number of URL exchanges x vs. communication overhead, with one curve each for 2, 8 and 64 processes.]

[Figure 9: Crawlers downloaded 2M pages (5% of 40M) -- (a) number of URL exchanges x vs. quality, (b) number of URL exchanges x vs. communication overhead, with one curve each for 2, 8 and 64 processes.]

[Figure 10: Crawlers downloaded 8M pages (20% of 40M) -- (a) number of URL exchanges x vs. quality, (b) number of URL exchanges x vs. communication overhead, with one curve each for 2, 8 and 64 processes.]

From these figures, we can observe the following trends:

As the number of crawling processes increases, the quality of downloaded pages becomes worse, unless they exchange backlink messages often. For example, in Figure 8(a), the quality achieved by a 2-process crawler (0.12) is significantly higher than that of a 64-process crawler (0.025) in the firewall mode (x = 0). Again, this result is because each C-proc learns less about the global backlink counts when the Web is split into smaller parts.

The quality of the firewall mode crawler (x = 0) is significantly worse than that of the single-process crawler (x → ∞) when the crawler downloads a relatively small fraction of the pages (Figures 8(a) and 9(a)). However, the difference is not very significant when the crawler downloads a relatively large fraction (Figure 10(a)). In other experiments, when the crawler downloaded more than 50% of the pages, the difference was almost negligible in any case. (Due to space limitations, we do not show the graphs.) Intuitively, this result makes sense because quality is an important issue only when the crawler downloads a small portion of the Web. (If the crawler will visit all pages anyway, quality is not relevant.)

The communication overhead does not increase linearly as the number of URL exchanges increases. The graphs in Figures 8(b), 9(b) and 10(b) are not straight lines. This result is because a popular URL appears multiple times between backlink exchanges. Therefore, a popular URL can be transferred as one entry (URL and its backlink count) in the exchange, even if it appeared multiple times. This reduction increases as C-proc's exchange backlink messages less frequently.

One does not need a large number of URL exchanges to achieve high quality. Through multiple experiments, we tried to identify how often C-proc's should exchange backlink messages to achieve the highest quality value. From these experiments, we found that a parallel crawler can get the highest quality values even if the processes communicate less than 100 times during a crawl.

We use the following example to illustrate how one can use the results of our experiments.

Example 3. (Medium-Scale Search Engine) Say we plan to operate a medium-scale search engine, and we want to maintain about 20% of the Web (200M pages) in our index. Our plan is to refresh the index once a month. The machines that we can use have individual T1 links (1.5 Mbps) to the Internet.

In order to update the index once a month, we need about 6.2 Mbps of download bandwidth, so we have to run at least 5 C-proc's on 5 machines.
According to Figure 10(a) (the 20% download case), we can achieve the highest quality if the C-proc's exchange backlink messages 10 times during a crawl when 8 processes run in parallel. (We use the 8-process case because it is the closest number to 5.) Also, from Figure 10(b), we can see that when C-proc's exchange messages 10 times during a crawl they need to exchange fewer than 0.17 × 200M = 34M [URL, backlink count] pairs in total. Therefore, the total network bandwidth used by the backlink exchange is only (34M × 40)/(200M × 10K) ≈ 0.06% of the bandwidth used by actual page downloads. Also, since the exchange happens only 10 times during a crawl, the context-switch overhead for message transfers (discussed in Section 8) is minimal.

Note that in this scenario we need to exchange 10 backlink messages in one month, or one message every three days. Therefore, even if the connection between C-proc's is unreliable or sporadic, we can still use the exchange mode without any problem.

CONCLUSION

Crawlers are being used more and more often to collect Web data for search engines, caches, and data mining. As the size of the Web grows, it becomes increasingly important to use parallel crawlers. Unfortunately, almost nothing is known (at least in the open literature) about options for parallelizing crawlers and their performance. Our paper addresses this shortcoming by presenting several architectures and strategies for parallel crawlers, and by studying their performance. We believe that our paper offers some useful guidelines for crawler designers, helping them, for example, select the right number of crawling processes or select the proper inter-process coordination scheme.

In summary, the main conclusions of our study were the following:

When a small number of crawling processes run in parallel (in our experiments, 4 or fewer), the firewall mode provides good coverage. Given that firewall mode crawlers can run totally independently and are easy to implement, we believe that it is a good option to consider. The cases when the firewall mode might not be appropriate are: 1. when we need to run more than 4 crawling processes, or 2. when we download only a small subset of the Web and the quality of the downloaded pages is important.

A crawler based on the exchange mode consumes small network bandwidth for URL exchanges (less than 1% of the network bandwidth). It can also minimize other overheads by adopting the batch communication technique. In our experiments, the crawler could maximize the quality of the downloaded pages even if it exchanged backlink messages fewer than 100 times during a crawl.

By replicating between 10,000 and 100,000 popular URLs, we can reduce the communication overhead by roughly 40%. Replicating more URLs does not significantly reduce the overhead.

REFERENCES

[1] T. E. Anderson, M. D. Dahlin, J. M. Neefe, D. A. Patterson, D. S. Roselli, and R. Y. Wang. Serverless network file systems. In Proc. of SOSP, 1995.
[2] A. Barabasi and R. Albert. Emergence of scaling in random networks. Science, 286(509), 1999.
[3] A. Z. Broder, S. R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. L. Wiener. Graph structure in the web. In Proc. of WWW Conf., 2000.
[4] M. Burner. Crawling towards eternity: Building an archive of the world wide web. Web Techniques Magazine, 2(5), May 1998.
[5] S. Chakrabarti, M. van den Berg, and B.
Dom.\nFocused crawling: A new approach to topic-specific\nweb resource discovery. In Proc. of WWW Conf., 1999.\n[6] J. Cho and H. Garcia-Molina. The evolution of the\nweb and implications for an incremental crawler. In\nProc. of VLDB Conf., 2000.\n[7] J. Cho and H. Garcia-Molina. Synchronizing a\ndatabase to improve freshness. In Proc. of SIGMOD\nConf., 2000.\n[8] J. Cho and H. Garcia-Molina. Parallel crawlers.\nTechnical report, UCLA Computer Science, 2002.\n[9] J. Cho, H. Garcia-Molina, and L. Page. Efficient\ncrawling through URL ordering. In Proc. of WWW\nConf., 1998.\n[10] E. Coffman, Jr., Z. Liu, and R. R. Weber. Optimal\nrobot scheduling for web search engines. Technical\nreport, INRIA, 1997.\n[11] M. Diligenti, F. M. Coetzee, S. Lawrence, C. L. Giles,\nand M. Gori. Focused crawling using context graphs.\nIn Proc. of VLDB Conf., 2000.\n[12] D. Eichmann. The RBSE spider: Balancing effective\nsearch against web load. In Proc. of WWW Conf.,\n1994.\n[13] Google Inc. http://www.google.com.\n[14] A. Heydon and M. Najork. Mercator: A scalable,\nextensible web crawler. Word Wide Web,\n2(4):219229, December 1999.\n[15] A. Heydon and M. Najork. High-performance web\ncrawling. Technical report, SRC Research Report, 173,\nCompaq Systems Research Center, September 2001.\n[16] D. Hirschberg. Parallel algorithms for the transitive\nclosure and the connected component problem. In\nProc. of STOC Conf., 1976.\n[17] M. Koster. Robots in the web: threat or treat?\nConneXions, 4(4), April 1995.\n[18] O. A. McBryan. GENVL and WWWW: Tools for\ntaming the web. In Proc. of WWW Conf., 1994.\n[19] R. C. Miller and K. Bharat. SPHINX: a framework for\ncreating personal, site-specific web crawlers. In Proc.\nof WWW Conf., 1998.\n[20] D. Nassimi and S. Sahni. Parallel permutation and\nsorting algorithms and a new generalized connection\nnetwork. Journal of ACM, 29:642667, July 1982.\n[21] M. T. Ozsu and P. Valduriez. Principles of Distributed\nDatabase Systems. Prentice Hall, 1999.\n[22] L. Page and S. Brin. The anatomy of a large-scale\nhypertextual web search engine. In Proc. of WWW\nConf., 1998.\n[23] B. Pinkerton. Finding what people want: Experiences\nwith the web crawler. In Proc. of WWW Conf., 1994.\n[24] Robots exclusion protocol. http://info.webcrawler.\ncom/mak/projects/robots/exclusion.html.\n[25] A. S. Tanenbaum and R. V. Renesse. Distributed\noperating systems. ACM Computing Surveys, 17(4),\nDecember 1985.\n[26] G. K. Zipf. Human Behaviour and the Principle of\nLeast Effort: an Introduction to Human Ecology.\nAddison-Wesley, 1949.\n135\n", "keywords": "guideline;architecture;Parallelization;Web Spider;parallel crawler;Web Crawler;model evaluation"} {"name": "147", "title": "Performance Enhancing Proxy for Interactive 3G Network Gaming", "abstract": "Unlike non-time-critical applications like email and file transfer , network games demand timely data delivery to maintain the seemingly interactive presence of players in the virtual game world. Yet the inherently large transmission delay mean and variance of 3G cellular links make on-time game data delivery difficult. Further complicating the timely game data delivery problem is the frequent packet drops at these links due to inter-symbol interference, fading and shadowing at the physical layer. In this paper, we propose a proxy architecture that enhances the timeliness and reliability of data delivery of interactive games over 3G wireless networks. 
In particular, a performance enhancing proxy is designed to optimize a new time-critical data type -- variable-deadline data, where the utility of a datum is inversely proportional to the time required to deliver it. We show how a carefully designed and configured proxy can noticeably improve the delivery of network game data.", "fulltext": "INTRODUCTION\nWhile network gaming has long been projected to be an\napplication of massive economic growth, as seen in the recent\nexplosive development on the wired Internet in South Korea\nand Japan, deployment of similar network games on 3G\nwireless networks continues to be slow and difficult. One reason\nis that unlike their wired counterparts, wireless links are\nnotoriously prone to errors due to channel fading, shadowing\nand inter-symbol interference. While 3G wireless networks,\nsuch as High Speed Downlink Packet Access (HSDPA) of\n3rd Generation Partnership Project (3GPP) Release 5 (R5)\n[1] and CDMA 1x EvDO of 3GPP2 [5], combat wireless\nlink failures at the MAC and physical layer with an elaborate\nsystem of channel coding, retransmission, modulation\nand spreading, with resulting packet loss rate being reduced\nto negligible 1 to 2%, the detrimental side-effect to network\ngaming is the large and often unpredictable transmission delay\nmean and variance [15]. Such large and variable delays\ngreatly reduce the necessary interactivity of network game\nplayers and deteriorate the overall gaming experience.\nIn a separate development, a new 3G network element\ncalled IP Multimedia Subsystem (IMS) [3] has been intro-duced\nin 3GPP specifications R5 and later, as shown in Figure\n1. The Session Initiation Protocol (SIP)-based IMS provides\na multitude of multimedia services: from establishing\nconnections from the legacy telephone networks to the new\nIP core network using Voice over IP (VoIP), to delivering\nstreaming services such as video as a value-added service\nto mobile users (UE). Strategically located as a pseudo-gateway\nto the private and heavily provisioned 3G networks,\nit is foreseeable that IMS will continue to enlarge and enrich\nits set of multimedia services in future wireless networks.\nIn this paper, we propose a performance enhancing proxy\n(PEP) called (W)ireless (I)nteractive (N)etwork (G)aming\nProxy (WING) to improve the timely delivery of network\ngame data in 3G wireless networks. WING is located inside\nIMS as an application service on top of the myriad of\n207\nservices that IMS already provides. In a nutshell, WING improves\nthe delivery of game data from the game server to 3G\nwireless game players\n1\nusing the following three techniques.\nFirst, by virtue of locating at the intersection of the private\nwireless network and the open Internet, connection from the\ngame server to the wireless game player can be strategi-cally\nsplit; for the server-WING connection, only the statis-tically\nstable and fast round-trip time (RTT) and low wired-network\n-only packet loss rate (PLR) are used for congestion\ncontrol, resulting in a steady yet TCP-friendly server-WING\nconnection. 
Second, by configuring parameters in the radio\nlink layer (RLC) specifically for gaming during session setup,\nexcessive RLC retransmissions are avoided, and timeliness\nof game data is improved at the controlled expense of in-creased\npacket losses.\nFinally, by constructing small but\nerror-resilient packets that contain location data, packets\ncan be transmitted in fewer MAC-layer protocol data units\n(PDU) and hence further reduces delay.\nThe paper is organized as follows. Related work is presented\nin Section 2. We overview the 3G wireless system in\nfocus, HSDPA of 3GPP R5, in Section 3. Note that because\nsimilar link and MAC layer transport optimizations that\nchiefly affect delay mean and variance are also employed in\nother 3G networks, our proposed WING can conceivably be\napplied to other wireless networks such as CDMA 1x EvDO\nof 3GPP2. We discuss the design of WING in details in\nSection 4. Finally, experimental results and conclusion are\nprovided in Section 5 and 6, respectively.\nRELATED WORK\nWe divide the discussion on the large volume of related\nwork into two section. Section 2.1 discusses related research\non wireless transport optimization.\nSection 2.2 discusses\nrelated research in transport of network game data.\n2.1\nWireless Transport Optimization\nWe note that proxy-based transport optimization for last-hop\nwireless networks has a long history, with the majority\nof the research [4, 15] focusing on optimization of TCP over\nlast-hop wireless networks. In particular, [15] showed that\nwhile 3G network packet losses can indeed be successfully\novercome by using ample link layer retransmissions, the resulting\nlarge RTT mean and variance may severely affect the\nperformance of a TCP-like congestion avoidance rate control\nthat is based on end-to-end observable statistics of RTT and\nPLR. The limiting rate constraint and undesirable fluctuations\ncan be alleviated using a proxy with split-connection\n-- a theme we develop in Section 4.2.\nRecently, efforts on proxy design have shifted to delay-sensitive\nmultimedia transport [18, 13, 8, 9], though all of\nthem focused exclusively on streaming media, while we focus\non network gaming. Note that due to cited complexity\nreason, a competing end-to-end approach for rate control\nthat does not rely on an intermediate proxy is popular as\nwell [17, 6]. However, we chose the proxy-based approach\nand will juxtapose its advantages in Section 4.2.\n1\nWhile peer-to-peer model for interactive network games\nis also possible, we assume the more common server-client\nmodel where the game server maintains and disseminates all\ngame states in this paper.\n2.2\nTransport of Network Game Data\nIn [3], a general gaming platform for IMS that provides\nnetwork services needed for network gaming such as session\nsetup and registration is proposed to ease deployment over\n3G networks. Our work is orthogonal to [3] since we focus\nonly on the efficient transport of game data.\nAn early work on gaming protocol is [10], which defined\na Game Transport Protocol (GTP) for massive multi-player\non-line games (MMPOGs) over the Internet. 
Our proposed gaming proxy WING differs in the following respects: i) we design WING specifically for lossy, bandwidth-limited networks, hence focusing on the design of network-optimized differential coding to produce small but loss-resilient packets; and, ii) we tailor WING for HSDPA of 3G wireless networks, optimizing performance by intelligently configuring parameters of the RLC layer.
The most similar related work is [12], which proposed an end-to-end adaptive FEC and dynamic packetization algorithm to combat packet losses due to wireless link failures and reduce packet sizes. Unlike [12], our approach is proxy-based, and we tailor our gaming optimization exclusively for 3G networks.
OVERVIEW OF UMTS RELEASE 5
HSDPA of UMTS Release 5, also known as 3.5G, improves upon Release 4 with numerous lower-layer optimizations. First, a shared channel is periodically scheduled to users in the cell with good observable network conditions, to take advantage of user diversity during fading without sacrificing fairness. Second, an elaborate MAC-layer scheme chooses an appropriate combination of FEC, hybrid ARQ, modulation and spreading based on client-observable network state. In this section, we instead focus on the RLC layer, where the user has limited control over behavior using configuration of parameters during session setup.
The Radio Link Control (RLC) layer [1] buffers upper-layer service data units (SDU) on a per-session basis -- IP packets in this case -- and segments each SDU into smaller protocol data units (PDU) of size S_PDU that await transmission at lower layers. There are three transmission modes: transparent mode (TM), unacknowledged mode (UM) and acknowledged mode (AM). Only AM performs link-layer retransmissions if transmission in the lower layer fails. For error resiliency, we focus only on AM. In particular, we look at how SDUs are discarded in the RLC layer: using a method of retransmission-based discard (RBD), an SDU can be discarded before successful transmission. In a nutshell, an SDU is discarded if a predefined maximum number of retransmissions B has been reached before successful transmission of a PDU belonging to the SDU. We will investigate how the value B can be selected to trade off error resiliency with delay in Section 4.3.
WING FOR WIRELESS INTERACTIVE NETWORK GAMING
Before we discuss the three optimizations of our proposed gaming proxy WING in detail in Sections 4.2, 4.3 and 4.4, we first define a new type of transport data called variable deadline data in Section 4.1 -- a consequence of a prediction procedure used at a network game client to predict locations of other game players in the virtual game world.

Figure 2: Examples of Dead-Reckoning: a) distortion vs. delay, b) utility vs. delay, for the random walk and weighted random walk models.

4.1 Variable Deadline Data Delivery
Unlike media streaming applications where a data unit containing media data is fully consumed if it is correctly delivered by a playback deadline and useless otherwise [9], the usefulness (utility) of a game datum is inversely proportional to the time it requires to deliver it.
This relationship between utility and transmission delay is the behavioral result of a commonly used game view reconstruction procedure at a game client called dead-reckoning [2]. It works as follows. To maintain time-synchronized virtual world views among game players at time t_0, a player P_A predicts the location θ̂_{t_0} of another player P_B and draws it in P_A's virtual world at time t_0, extrapolating from previously received location updates of P_B in the past, θ_τ, τ < t_0. When location update θ_{t_0} arrives at P_A from P_B at a later time t_1, P_A updates its record of P_B's locations with (t_0, θ_{t_0}), in order to make an accurate prediction of (t_1, θ̂_{t_1}) for display in P_A's virtual world at time t_1. Regardless of what prediction method is used at the client, it is clear that a smaller transmission delay will in general induce a smaller prediction error. We term this type of data, with an inversely proportional relationship between quantifiable utility and delay, variable deadline data. We next show examples of how such a utility-delay curve u(d) can be derived in practice given a player movement model and a prediction method.

4.1.1 Examples of Dead-Reckoning
We first consider two simple movement models that model a game player in two-dimensional space (x, y). The first is random walk, where for each time increment t, the probability mass function (pmf) of the random variable of x-coordinate x_t, p(x_t), is defined as follows:

p(x_{t+1} = x_t + 1) = 1/3
p(x_{t+1} = x_t)     = 1/3
p(x_{t+1} = x_t - 1) = 1/3    (1)

The random variable of y-coordinate y_t is calculated similarly and is independent of x_t.
The second movement model is weighted random walk, whose pmf is defined as follows:

p(x_{t+1} = x_t + ((x_t - x_{t-1} + 1) mod 2)) = 1/6
p(x_{t+1} = x_t + (x_t - x_{t-1}))             = 2/3
p(x_{t+1} = x_t + ((x_t - x_{t-1} - 1) mod 2)) = 1/6    (2)

In words, the player continues the same movement as done in the previous instant with probability 2/3, and changes to one of the two other movements each with probability 1/6. The random variable of y-coordinate y_t is calculated similarly.
We defined a simple prediction method called 0th-order prediction as follows: each unknown x_t is simply set to the most recently updated x_τ. Using each of the two movement models in combination with the prediction method, we constructed distortion-delay curves experimentally as shown in Figure 2a. As seen, 0th-order prediction is a better match to random walk than weighted random walk, inducing a smaller distortion for all delay values. Utility u(d) -- shown in Figure 2b -- is simply the reciprocal of distortion. Having derived u(d) gives us a quantifiable metric on which we can objectively evaluate game data transport systems.
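As a rough illustration of how a distortion-delay curve of this kind can be produced, the sketch below simulates the random walk of Eq. (1) and measures the prediction error of 0th-order prediction under a fixed update delay. The function names, the assumption that one update is sent per tick, and the use of mean absolute prediction error as the distortion measure are our own choices for illustration, not taken from the paper.

# Sketch: estimating a distortion-delay curve for 0th-order prediction
# under the random-walk model of Section 4.1.1 (assumptions noted above).
import random

def random_walk(steps):
    """x-coordinate trajectory of the random walk in Eq. (1)."""
    x = [0]
    for _ in range(steps):
        x.append(x[-1] + random.choice((-1, 0, 1)))
    return x

def distortion(delay_ticks, steps=10000):
    """Mean prediction error when each update arrives delay_ticks late and
    the client holds the last received value (0th-order prediction)."""
    x = random_walk(steps)
    err = 0.0
    for t in range(delay_ticks, steps):
        predicted = x[t - delay_ticks]   # most recently delivered update
        err += abs(x[t] - predicted)
    return err / (steps - delay_ticks)

# Utility is taken as the reciprocal of distortion, as in Figure 2b.
for d in (1, 5, 10, 25, 50):
    dist = distortion(d)
    print(f"delay={d} ticks  distortion={dist:.2f}  utility={1.0/dist:.3f}")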
4.2 Proxy-based Congestion Control
We argue that by locating WING between the open wired Internet and the provisioned wireless networks to conduct split-connection data transfer, stable TCP-friendly congestion control can be maintained on top of UDP in the wired server-WING connection. Traditional congestion control algorithms like TCP-friendly Rate Control (TFRC) [11] space outgoing packets with interval T_cc as a function of estimated packet loss rate (PLR) ε_cc, RTT mean m_cc and RTT variance σ_cc² due to wired network congestion:

T_cc = m_cc √(2 ε_cc / 3) + 3 (m_cc + 4 σ_cc²) ε_cc (1 + 32 ε_cc²) √(3 ε_cc / 8)    (3)

Past end-to-end efforts [17, 6] have focused on methodologies to distinguish wired network congestion losses from wireless link losses, in order to avoid unnecessary rate reduction due to erroneous perception of wireless losses as network congestion. Split connection offers the same effect regarding PLR by completely shielding the sender from packet losses due to wireless link failures. Moreover, by performing TFRC (3) in the server-WING connection using only stable wired network statistics, split connection shields the server-WING connection from large rate fluctuations due to large RTT variance in the last-hop 3G link, as shown in [15]. For this reason, [15] showed experimentally that proxy-based split-connection congestion control indeed performs better than end-to-end counterparts, even in negligible wireless loss environments.
Lastly, we note that split connection can benefit from a rate-mismatch environment [8, 9], where the available bandwidth R_1 in the server-WING connection is larger than the available bandwidth R_2 in the WING-client connection. In such a case, the surplus bandwidth R_1 - R_2 can be used for redundancy packets like forward-error correction (FEC) or retransmission to lower PLR in the server-WING connection. We refer interested readers to [8, 9] for further details.

4.3 Optimizing RLC Configuration
Given the utility-delay function u(d) in Section 4.1, we optimize the configuration of RLC to maximize utility. More precisely, we pick the value of the maximum retransmission limit B -- inducing expected SDU loss rate l* and delay d* -- so that the expected utility (1 - l*) u(d*) is maximized.
We assume a known average SDU size S_SDU, PDU loss rate ε_PDU, and probability density function (pdf) of PDU transmission delay Φ(δ) with mean m_Φ and variance σ_Φ². First, the expected number of PDUs fragmented from an SDU is N = ⌈S_SDU / S_PDU⌉. For a given B, the expected SDU loss rate l_SDU can be written simply as:

P_PDU = Σ_{i=1}^{B} ε_PDU^{i-1} (1 - ε_PDU)    (4)
l_SDU = 1 - P_PDU^N    (5)

where P_PDU is the probability that a PDU is successfully delivered given B.
The delay d_SDU experienced by a successfully delivered SDU is the sum of queuing delay d^q_SDU and transmission delay d^t_SDU. Queuing delay d^q_SDU is the delay experienced by an SDU while waiting for head-of-queue SDUs to clear due to early termination or delivery success. d^t_SDU is the expected wireless medium transmission delay given the SDU is successfully delivered. d^t_SDU is easier and can be calculated as:

X_PDU = (1 / P_PDU) Σ_{i=1}^{B} i ε_PDU^{i-1} (1 - ε_PDU)    (6)
d^t_SDU = N m_Φ X_PDU    (7)

where X_PDU is the expected number of PDU (re)transmissions given PDU delivery success.
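The sketch below transcribes Equations (4)-(7) directly, computing the SDU loss rate and expected transmission delay for a candidate retransmission limit B. The variable names and the example mean PDU delay (m_phi = 30 ms) are our own illustrative choices; the SDU and PDU sizes follow Table 3.

# Sketch of Equations (4)-(7): RLC acknowledged-mode SDU loss rate and
# transmission delay as a function of the retransmission limit B.
import math

def rlc_stats(B, eps_pdu, s_sdu, s_pdu, m_phi):
    N = math.ceil(s_sdu / s_pdu)                       # PDUs per SDU
    # (4) probability a PDU is delivered within B attempts
    p_pdu = sum(eps_pdu**(i - 1) * (1 - eps_pdu) for i in range(1, B + 1))
    # (5) SDU loss rate: all N PDUs must get through
    l_sdu = 1 - p_pdu**N
    # (6) expected number of (re)transmissions given PDU success
    x_pdu = sum(i * eps_pdu**(i - 1) * (1 - eps_pdu)
                for i in range(1, B + 1)) / p_pdu
    # (7) expected transmission delay of a successfully delivered SDU
    d_t = N * m_phi * x_pdu
    return N, p_pdu, l_sdu, x_pdu, d_t

# Example with 61-byte SDUs and 40-byte PDUs (Table 3); m_phi is illustrative.
print(rlc_stats(B=2, eps_pdu=0.2, s_sdu=61, s_pdu=40, m_phi=30.0))

The queuing component of the delay, derived next from the M/G/1 model, is what couples these quantities to the final utility maximization.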
To calculate d^q_SDU, we assume an M/G/1 queue² with arrival rate λ_q, mean service time m_q, and variance of service time σ_q². Using the Pollaczek-Khinchin mean value formula [14], d^q_SDU can be written as:

d^q_SDU = [λ_q m_q (1 + σ_q² / m_q²)] / [2 (1 - λ_q m_q)] · m_q    (8)

In our application, λ_q is the rate at which game data arrive at WING from the server, which we assume to be known. m_q is the mean service rate for both cases of SDU delivery success and failure and can be derived as follows:

Y_SDU = (1 / l_SDU) Σ_{i=1}^{N} (B + (i - 1) X_PDU) ε_PDU^B P_PDU^{i-1}    (9)
m_q = (1 - l_SDU) d^t_SDU + l_SDU m_Φ Y_SDU    (10)

where Y_SDU is the expected total number of PDU (re)transmissions in an SDU given SDU delivery failure. Similar analysis will show that the variance of the service rate σ_q² for our application is:

σ_q² = (1 - l_SDU) N² X_PDU² σ_Φ² + l_SDU Y_SDU² σ_Φ²    (11)

We can now evaluate the expected queuing delay d^q_SDU, from which we evaluate the expected delay d_SDU. The optimal B* is the one that maximizes (1 - l*_SDU) u(d_SDU).

² Our system is actually more similar to a D/G/1 queue, since the arrivals of game data are more likely to be deterministic than Markovian. Instead, we use the M/G/1 queue as a first-order approximation.
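Continuing the previous sketch (and reusing its rlc_stats helper), the following code evaluates Equations (8)-(11) and scans B for the value maximizing expected utility. The utility function u, the arrival rate lam_q and the delay-distribution moments are stand-ins to be supplied by the caller; they are not values given in the paper.

# Sketch of Equations (8)-(11) and the selection of B*.
def expected_delay(B, eps_pdu, s_sdu, s_pdu, m_phi, var_phi, lam_q):
    N, p_pdu, l_sdu, x_pdu, d_t = rlc_stats(B, eps_pdu, s_sdu, s_pdu, m_phi)
    # (9) expected PDU (re)transmissions for an SDU that is eventually dropped
    y_sdu = sum((B + (i - 1) * x_pdu) * eps_pdu**B * p_pdu**(i - 1)
                for i in range(1, N + 1)) / l_sdu
    # (10), (11) service-time mean and variance over success and failure cases
    m_q = (1 - l_sdu) * d_t + l_sdu * m_phi * y_sdu
    var_q = (1 - l_sdu) * N**2 * x_pdu**2 * var_phi + l_sdu * y_sdu**2 * var_phi
    # (8) Pollaczek-Khinchin mean queuing delay (requires lam_q * m_q < 1)
    d_q = lam_q * m_q * (1 + var_q / m_q**2) / (2 * (1 - lam_q * m_q)) * m_q
    return l_sdu, d_q + d_t

def best_B(u, **kw):
    """Pick the retransmission limit maximizing (1 - l_SDU) * u(d_SDU)."""
    best, best_util = None, -1.0
    for B in range(1, 11):
        l, d = expected_delay(B, **kw)
        util = (1 - l) * u(d)
        if util > best_util:
            best, best_util = B, util
    return best, best_util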
4.4 Loss-optimized Differential Coding
If the location data -- player position updates sent to improve the dead-reckoning discussed in Section 4.1 -- are in absolute values, then the size of the packet containing the data can be large, resulting in large delay due to many PDU fragmentations and spreading. The alternative is to describe the location in relative terms -- the difference in the location from a previous time slot. Differential values are smaller, resulting in fewer encoded bits and smaller packets, and hence smaller transmission delay. This differential coding of location data is used today in networked games.
The obvious disadvantage of differential coding is that the created dependency chain is vulnerable to network loss; a single loss can result in error propagation until the next absolute location data (refresh).
To lessen the error propagation effect while maintaining the coding benefit of differential coding, one can reference a position in an earlier time slot. An example is shown in Figure 3, where we see position 3 (θ_3) references θ_1 instead of θ_2. This way, loss of the packet containing θ_2 will not affect θ_3, which depends only on θ_1. The problem is then: for a new position θ_t, how to select the reference position θ_{t-r} for differential coding such that the right tradeoff of error resilience and packet size is selected? This selection must be done in an on-line manner as each new position becomes available from the application, to avoid additional delay.

Figure 3: Example of Differential Coding

4.4.1 Specifying Coding Modes
To implement loss-optimized differential coding, we first define a coding specification that dictates how the receiver should decode location packets. For simplicity, we propose only four coding modes, where each mode is specified by a designated bit sequence (mode marker) in the packet. Assuming the original absolute position is specified by two 32-bit fixed-point numbers, mode 0 encodes the unaltered absolute position in x-y order, resulting in a data payload size of 2 + 64n bits for n game entities. Mode 1 uses the previous position as reference for differential encoding with 16 bits per coordinate, resulting in 2 + 32n bits for n entities. Mode 2 uses the first 2 bits to specify r in reference position t - r for differential encoding. Each coordinate takes 8 bits, resulting in 4 + 16n total bits for n entities. Mode 3 is similar to mode 2 with the exception that each of the reference marker and the two coordinates takes only 4 bits to encode, resulting in 6 + 8n bits for n entities.

mode | mode marker | ref. size | coord. size | total
0    | 00          | 0         | 32          | 2 + 64n
1    | 01          | 0         | 16          | 2 + 32n
2    | 10          | 2         | 8           | 4 + 16n
3    | 11          | 4         | 4           | 6 + 8n
Table 1: Differential Coding Modes

For a given position θ_t = (x_t, y_t) and reference θ_{t-r} = (x_{t-r}, y_{t-r}), some modes may be infeasible due to the fixed coding bit budgets for reference and coordinate sizes. So, limited to the set of feasible modes, we seek a reference position / mode pair that maximizes an objective function.

4.4.2 Finding Optimal Coding Modes
For an IP packet of size s_t containing position θ_t that is sent at time t, we first define the probability that it is correctly delivered by time τ as π_t(τ). π_t(τ) depends on the expected PLR l(s_t) and delay d(s_t), resulting from the retransmission limit B chosen in Section 4.3:

N(s_t) = ⌈s_t / S_PDU⌉    (12)
l(s_t) = 1 - (P_PDU)^{N(s_t)}    (13)
d(s_t) = d^q_SDU + N(s_t) m_Φ X_PDU    (14)

where N(s_t) is the number of PDUs fragmented from an SDU of size s_t, l(s_t) is the PLR in (5) generalized to SDU size s_t, and d(s_t) is the expected queuing delay in (8) plus the transmission delay in (7) generalized to SDU size s_t. We can now approximate π_t(τ) as:

π_t(τ) ≈ 1, if ACKed by τ;  (1 - l(s_t)) · 1(τ - t - d(s_t)), otherwise    (15)

where 1(x) = 1 if x ≥ 0, and 0 otherwise. If no acknowledgment packets (ACK) are sent from client to WING, then π_t(τ) is simply the second case in (15).
We next define the probability that position θ_t is correctly decoded by time τ as P_t(τ). Due to dependencies resulting from differential coding, P_t(τ) is written as follows:

P_t(τ) = π_t(τ) Π_{j≺t} π_j(τ)    (16)

where j ≺ t denotes the set of positions j that precede t in the dependency graph due to differential coding.
Given the utility function u(d) in Section 4.1 and the decode probability (16), the optimal reference position / mode pair is the one that maximizes the following objective function:

max P_t(t + d(s_t)) u(d(s_t))    (17)
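A minimal sketch of this selection loop is given below, assuming a single tracked entity and treating the decode probability P_t as an opaque callback supplied by the caller (in a full implementation it would be evaluated from Eq. (16) over the chosen dependency chain). The mode table follows Table 1; the feasibility checks, packet-size bookkeeping and helper names are our simplifications.

# Sketch of the reference/mode selection of Section 4.4.2 (Eq. (17)).
HEADER_BITS = (20 + 8) * 8                      # IP + UDP header, Table 3
MODES = {0: (0, 32, 2), 1: (0, 16, 2), 2: (2, 8, 4), 3: (4, 4, 6)}  # Table 1

def payload_bits(mode, n_entities):
    ref_bits, coord_bits, marker = MODES[mode]
    return marker + n_entities * 2 * coord_bits  # x and y per entity

def feasible(mode, dx, dy, r):
    ref_bits, coord_bits, _ = MODES[mode]
    fits = lambda v, bits: -(2**(bits - 1)) <= v < 2**(bits - 1)
    if mode == 0:
        return True                              # absolute coordinates
    if mode == 1 and r != 1:
        return False                             # mode 1 references t-1 only
    if ref_bits and r > 2**ref_bits:
        return False                             # r must fit the ref field
    return fits(dx, coord_bits) and fits(dy, coord_bits)

def choose_mode(history, new_pos, n_entities, decode_prob, delay, utility):
    """history: past positions, oldest..newest; returns best (mode, r)."""
    best, best_obj = None, -1.0
    for r, ref in enumerate(reversed(history), start=1):
        dx, dy = new_pos[0] - ref[0], new_pos[1] - ref[1]
        for mode in MODES:
            if not feasible(mode, dx, dy, r):
                continue
            s_t = (HEADER_BITS + payload_bits(mode, n_entities) + 7) // 8
            obj = decode_prob(s_t, r) * utility(delay(s_t))   # Eq. (17)
            if obj > best_obj:
                best, best_obj = (mode, r), obj
    return best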
EXPERIMENTATION
We first present network statistics for HSDPA and discuss the implications. We collected network statistics of 10,000 ping packets, of packet sizes 50, 100 and 200 bytes, spaced 200ms apart, between hosts in Tokyo and Singapore inside the HP intranet. The results are shown in Table 2. We then conducted the same experiment over a network emulator called WiNe2 [16] emulating the HSDPA link with 10 competing ftp users, each with mobility model Pedestrian A. We make two observations in Table 2.

                      | PLR | RTT mean  | RTT variance
Tokyo-Singapore (50)  | 0   | 94.125ms  | 178.46
Tokyo-Singapore (100) | 0   | 95.131ms  | 385.30
Tokyo-Singapore (200) | 0   | 96.921ms  | 445.63
HSDPA (50)            | 0   | 62.232ms  | 7956.8
HSDPA (100)           | 0   | 72.547ms  | 25084
HSDPA (200)           | 0   | 152.448ms | 143390
Table 2: Comparison of Network Statistics

One, though the results from both experiments had similar RTT means, HSDPA's RTT variances were very large, substantiating our claim that using split-connection to shield the server-WING connection from HSDPA's RTT variance would drastically improve the TFRC bandwidth (3) of the server-WING connection. Two, larger packets entailed larger RTT means for HSDPA. This means that the differential coding discussed in Section 4.4 indeed has substantial performance improvement potential.
We next used an internally developed network simulator called (mu)lti-path (n)etwork (s)imulator (muns), which was used in other simulations [7], to test RLC configurations and differential coding. For the PDU transmission delay Φ(δ), we used a shifted Gamma distribution:

Φ(δ) = λ_g (λ_g (δ - κ_g))^{α_g - 1} e^{-λ_g (δ - κ_g)} / Γ(α_g),  κ_g < δ < ∞    (18)

where Γ() is the Gamma function [14]. The parameters used are shown in Table 3.

number of entities n    | 4
frame rate              | 10 fps
IP + UDP header         | 20 + 8 bytes
RLC PDU size            | 40 bytes
RLC PDU loss rate       | 0.1 to 0.3
average packet size     | 61 bytes
shifted Gamma parameter | 2
shifted Gamma parameter | 0.1
shifted Gamma parameter | 10.0
Table 3: Simulation Parameters

Figure 4: Delay and Utility vs. Retransmission Limit B: a) expected delay vs. B, b) expected utility vs. B, for PDU loss rates 0.1, 0.2 and 0.3.

Figure 4 shows the expected delay and utility as a function of the retransmission limit B for different PDU loss rates. As expected, when B increases, the expected delay increases. The expected utility, on the other hand, reaches a peak and then decreases. For a given PDU loss rate, we simply select the B with the largest expected utility.
Next, we compare the results of our loss-optimized differential coding optimization opt in Section 4.4 with two schemes: abs, which always encodes in absolute values; and rel, which uses only the previous frame for differential coding and refreshes with absolute values every 10 updates. abs represents the most error-resilient coding method in differential coding, while rel represents a reasonably coding-efficient method with periodic resynchronization. Note, however, that neither abs nor rel adapts differential coding in real time using client feedback.
abs and rel were each tested twice. In the first trial, limit B was set to 1, and in the second, B was set to the optimal configured value as discussed in Section 4.3. 20000 data points were generated and averaged for each distortion value in Table 4. As we see in Table 4, for various PDU loss rates ε_PDU, the resulting distortions for opt were always lower than abs's and rel's, particularly for high PDU loss rates.

ε_PDU | B* | abs(1) | abs(B*) | rel(1) | rel(B*) | opt
0.10  | 2  | 1.181  | 1.134   | 1.806  | 1.154   | 1.070
0.15  | 3  | 1.222  | 1.166   | 2.288  | 1.108   | 1.073
0.20  | 2  | 1.290  | 1.192   | 2.619  | 1.380   | 1.086
0.25  | 2  | 1.356  | 1.232   | 3.035  | 1.568   | 1.090
0.30  | 2  | 1.449  | 1.268   | 3.506  | 1.750   | 1.110
0.35  | 2  | 1.509  | 1.300   | 3.556  | 2.054   | 1.121
Table 4: Distortion Comparison
opt performed better than rel because of opt's error\nresiliency of loss-optimized differential coding, while opt\nperformed better than abs because opt's smaller packets in-duced\na smaller queuing delay and a smaller transmission\ndelay due to smaller number of RLC fragmentations. This\ndemonstrates that it is important not only to find an optimal\nRLC configuration, but a suitable differential coding\nscheme to match the resulting loss rate and delay of the\nconfiguration.\nCONCLUSION\nWe propose a performance enhancing proxy called WING\nto improve the delivery of game data from a game server to\n3G game players using three techniques: i) split-connection\nTCP-friendly congestion control, ii) network game optimized\nRLC configuration, and, iii) packet compression using differential\ncoding. For future, we will investigate how similar\ntechniques can be applied for the 3G uplink from game\nplayer to game server.\nACKNOWLEDGMENTS\nThe authors thank other members of the multimedia systems\narchitecture team, Yasuhiro Araki and Takeaki Ota,\nfor their valuable comments and discussions.\nREFERENCES\n[1] Universal Mobile Telecommunications System\n(UMTS); Radio Link Control (RLC) protocol\nspecification (3GPP TS.25.322 version 5.12.0 Release\n5). http://www.3gpp.org/ftp/Specs/archive/25 series/\n25.322/25322-5c0.zip, September 2005.\n[2] S. Aggarwal, H. Banavar, and A. Khandelwal.\nAccuracy in dead-reckoning based distributed\nmulti-player games. In ACM SIGCOMM NetGames,\nPortland, OR, August 2004.\n[3] A. Akkawi, S. Schaller, O. Wellnitz, and L. Wolf. A\nmobile gaming platform for the IMS. In ACM\nSIGCOMM NetGames, Portland, OR, August 2004.\n[4] H. Balakrishnan, V. Padmanabhan, S. Seshan, and\nR. Katz. A comparison of mechanisms for improving\nTCP performance over wireless links. In IEEE/ACM\nTrans. Networking, volume 5, no.6, December 1997.\n[5] Q. Bi and S. Vitebsky. Performance analysis of 3G-1x\nEvDO high data rate system. In IEEE Wireless\nCommunications and Networking Conference,\nOrlando, FL, March 2002.\n[6] M. Chen and A. Zakhor. AIO-TRFC: A light-weight\nrate control scheme for streaming over wireless. In\nIEEE WirelessCom, Maui, HI, June 2005.\n[7] G. Cheung, P. Sharma, and S. J. Lee. Striping\ndelay-sensitive packets over multiple bursty wireless\nchannels. In IEEE International Conference on\nMultimedia and Expo, Amsterdam, the Netherlands,\nJuly 2005.\n[8] G. Cheung and W. t. Tan. Streaming agent for wired\nnetwork / wireless link rate-mismatch environment. In\nInternational Workshop on Multimedia Signal\nProcessing, St. Thomas, Virgin Islands, December\n2002.\n[9] G. Cheung, W. t. Tan, and T. Yoshimura. Double\nfeedback streaming agent for real-time delivery of\nmedia over 3G wireless networks. In IEEE\nTransactions on Multimedia, volume 6, no.2, pages\n304314, April 2004.\n[10] S. P. et al. Game transport protocol: A reliable\nlightweight transport protocol for massively\nmultiplayer on-line games (MMPOGs). In\nSPIE-ITCOM, Boston, MA, July 2002.\n[11] S. Floyd, M. Handley, J. Padhye, and J. Widmer.\nEquation-based congestion control for unicast\napplications. In ACM SIGCOMM, Stockholm,\nSweden, August 2000.\n[12] P. Ghosh, K. Basu, and S. Das. A cross-layer design to\nimprove quality of service in online multiplayer\nwireless gaming networks. In IEEE Broadnets, Boston,\nMA, October 2005.\n[13] L. Huang, U. Horn, F. Hartung, and M. Kampmann.\nProxy-based TCP-friendly streaming over mobile\nnetworks. 
In IEEE International Symposium on a\nWorld of Wireless, Mobile and Multimedia Networks,\nAtlanta, GA, September 2002.\n[14] A. Leon-Garcia. Probability and Random Processes for\nElectrical Engineering. Addison Wesley, 1994.\n[15] M. Meyer, J. Sachs, and M. Holzke. Performance\nevaluation of a TCP proxy in WCDMA networks. In\nIEEE Wireless Communications, October 2003.\n[16] Nomor Research GmbH. WiSe2.\nhttp://www.nomor.de.\n[17] F. Yang, Q. Zhang, W. Zhu, and Y.-Q. Zhang. Bit\nallocation for scalable video streaming over mobile\nwireless internet. In IEEE Infocom, Hong Kong,\nMarch 2004.\n[18] T. Yoshimura, T. Ohya, T. Kawahara, and M. Etoh.\nRate and robustness control with RTP monitoring\nagent for mobile multimedia streaming. In IEEE\nInternational Conference on Communication, New\nYork, NY, April 2002.\n212\n", "keywords": "Wireless Networks;3G wireless network;time critical data;Network Gaming;congestion control;loss-optimized;RLC configuration;proxy architecture"} {"name": "148", "title": "Physically-Based Visual Simulation on Graphics Hardware", "abstract": "In this paper, we present a method for real-time visual simulation of diverse dynamic phenomena using programmable graphics hardware. The simulations we implement use an extension of cellular automata known as the coupled map lattice (CML). CML represents the state of a dynamic system as continuous values on a discrete lattice. In our implementation we store the lattice values in a texture, and use pixel-level programming to implement simple next-state computations on lattice nodes and their neighbors. We apply these computations successively to produce interactive visual simulations of convection, reaction-diffusion, and boiling. We have built an interactive framework for building and experimenting with CML simulations running on graphics hardware, and have integrated them into interactive 3D graphics applications.", "fulltext": "Introduction\nInteractive 3D graphics environments, such as games, virtual\nenvironments, and training and flight simulators are\nbecoming increasingly visually realistic, in part due to the\npower of graphics hardware. However, these scenes often\nlack rich dynamic phenomena, such as fluids, clouds, and\nsmoke, which are common to the real world.\nA recent approach to the simulation of dynamic\nphenomena, the coupled map lattice\n[Kaneko 1993]\n, uses a\nset of simple local operations to model complex global\nbehavior. When implemented using computer graphics\nhardware, coupled map lattices (CML) provide a simple, fast\nand flexible method for the visual simulation of a wide\nvariety of dynamic systems and phenomena.\nIn this paper we will describe the implementation of\nCML systems with current graphics hardware, and\ndemonstrate the flexibility and performance of these systems\nby presenting several fast interactive 2D and 3D visual\nsimulations. Our CML boiling simulation runs at speeds\nranging from 8 iterations per second for a 128x128x128\nlattice to over 1700 iterations per second for a 64x64 lattice.\nSection 2 describes CML and other methods for\nsimulating natural phenomena. Section 3 details our\nimplementation of CML simulations on programmable\ngraphics hardware, and Section 4 describes the specific\nsimulations we have implemented. In Section 5 we discuss\nlimitations of current hardware and investigate some\nsolutions. Section 6 concludes.\n\nCML and Related Work\nThe standard approach to simulating natural phenomena is to\nsolve equations that describe their global behavior. 
For\nexample, multiple techniques have been applied to solving\nthe Navier-Stokes fluid equations\n[Fedkiw, et al. 2001;Foster\nand Metaxas 1997;Stam 1999]\n. While their results are\ntypically numerically and visually accurate, many of these\nsimulations require too much computation (or small lattice\nsizes) to be integrated into interactive graphics applications\nsuch as games. CML models, instead of solving for the\nglobal behavior of a phenomenon, model the behavior by a\nnumber of very simple local operations. When aggregated,\nthese local operations produce a visually accurate\napproximation to the desired global behavior.\nFigure 1: 3D coupled map lattice simulations running on\ngraphics hardware. Left: Boiling. Right: Reaction-Diffusion\n.\nA coupled map lattice is a mapping of continuous\ndynamic state values to nodes on a lattice that interact (are\n`coupled') with a set of other nodes in the lattice according\nto specified rules. Coupled map lattices were developed by\nKaneko for the purpose of studying spatio-temporal\ndynamics and chaos\n[Kaneko 1993]\n. Since their introduction,\nCML techniques have been used extensively in the fields of\nphysics and mathematics for the simulation of a variety of\nphenomena, including boiling\n[Yanagita 1992]\n, convection\n[Yanagita and Kaneko 1993]\n, cloud formation\n[Yanagita and\nKaneko 1997]\n, chemical reaction-diffusion\n[Kapral 1993]\n, and\nthe formation of sand ripples and dunes\n[Nishimori and Ouchi\n1993]\n. CML techniques were recently introduced to the field\nof computer graphics for the purpose of cloud modeling and\nanimation\n[Miyazaki, et al. 2001]\n. Lattice Boltzmann\ncomputation is a similar technique that has been used for\nsimulating fluids, particles, and other classes of phenomena\n[Qian, et al. 1996]\n.\nA CML is an extension of a cellular automaton (CA)\n[Toffoli and Margolus 1987;von Neumann 1966;Wolfram 1984]\n\nin which the discrete state values of CA cells are replaced\nwith continuous real values. Like CA, CML are discrete in\nspace and time and are a versatile technique for modeling a\nwide variety of phenomena. Methods for animating cloud\nformation using cellular automata were presented in\n[Dobashi, et al. 2000;Nagel and Raschke 1992]\n. Discrete-state\nautomata typically require very large lattices in order to\nsimulate real phenomena, because the discrete states must be\nfiltered in order to compute real values. By using\ncontinuous-valued state, a CML is able to represent real\nphysical quantities at each of its nodes.\nWhile a CML model can certainly be made both\nnumerically and visually accurate\n[Kaneko 1993]\n, our\nimplementation on graphics hardware introduces precision\nconstraints that make numerically accurate simulation\ndifficult. Therefore, our goal is instead to implement\nvisually accurate simulation models on graphics hardware, in\nthe hope that continuing improvement in the speed and\nprecision of graphics hardware will allow numerically\naccurate simulation in the near future.\nThe systems that have been found to be most amenable to\nCML implementation are multidimensional initial-value\npartial differential equations. These are the governing\nequations for a wide range of phenomena from fluid\ndynamics to reaction-diffusion. Based on a set of initial\nconditions, the simulation evolves forward in time. The only\nrequirement is that the equation must first be explicitly\ndiscretized in space and time, which is a standard\nrequirement for conventional numerical simulation. 
This\nflexibility means that the CML can serve as a model for a\nwide class of dynamic systems.\n2.1 A CML Simulation Example\nTo illustrate CML, we describe the boiling simulation of\n[Yanagita 1992]\n. The state of this simulation is the\ntemperature of a liquid. A heat plate warms the lower layer\nof liquid, and temperature is diffused through the liquid. As\nthe temperature reaches a threshold, the phase changes and\n\"bubbles\" of high temperature form. When phase changes\noccur, newly formed bubbles absorb latent heat from the\nliquid around them, and temperature differences cause them\nto float upward under buoyant force.\nYanagita implements this global behavior using four local\nCML operations; Diffusion, Phase change, Buoyancy, and\nLatent heat. Each of these operations can be written as a\nsimple equation. Figures 1, 2 and 7 (see color pate) show this\nsimulation running on graphics hardware, and Section 4.1\ngives details of our implementation. We will use this\nsimulation as an example throughout this paper.\nHardware Implementation\nGraphics hardware is an efficient processor of images it\ncan use texture images as input, and outputs images via\nrendering. Images arrays of values map well to state\nvalues on a lattice. Two-dimensional lattices can be\nrepresented by 2D textures, and 3D lattices by 3D textures or\ncollections of 2D textures. This natural correspondence, as\nwell as the programmability and performance of graphics\nhardware, motivated our research.\n3.1 Why Graphics Hardware?\nOur primary reason to use graphics hardware is its speed at\nimaging operations compared to a conventional CPU. The\nCML models we have implemented are very fast, making\nthem well suited to interactive applications (See Section 4.1).\nGPUs were designed as efficient coprocessors for\nrendering and shading. The programmability now available\nin GPUs such as the NVIDIA GeForce 3 and 4 and the ATI\nRadeon 8500 makes them useful coprocessors for more\ndiverse applications. Since the time between new\ngenerations of GPUs is currently much less than for CPUs,\nfaster coprocessors are available more often than faster\ncentral processors. GPU performance tracks rapid\nimprovements in semiconductor technology more closely\nthan CPU performance. This is because CPUs are designed\nfor high performance on sequential operations, while GPUs\nare optimized for the high parallelism of vertex and fragment\nprocessing\n[Lindholm, et al. 2001]\n. Additional transistors can\nFigure 2: A sequence of stills (10 iterations apart) from a\n2D boiling simulation running on graphics hardware.\n110\nHarris, Coombe, Scheuermann, and Lastra / Simulation on Graphics Hardware\n\n\n\nThe Eurographics Association 2002.\n\ntherefore be used to greater effect in GPU architectures. In\naddition, programmable GPUs are inexpensive, readily\navailable, easily upgradeable, and compatible with multiple\noperating systems and hardware architectures.\nMore importantly, interactive computer graphics\napplications have many components vying for processing\ntime. Often it is difficult to efficiently perform simulation,\nrendering, and other computational tasks simultaneously\nwithout a drop in performance. Since our intent is visual\nsimulation, rendering is an essential part of any solution. 
By\nmoving simulation onto the GPU that renders the results of a\nsimulation, we not only reduce computational load on the\nmain CPU, but also avoid the substantial bus traffic required\nto transmit the results of a CPU simulation to the GPU for\nrendering. In this way, methods of dynamic simulation on\nthe GPU provide an additional tool for load balancing in\ncomplex interactive applications.\nGraphics hardware also has disadvantages. The main\nproblems we have encountered are the difficulty of\nprogramming the GPU and the lack of high precision\nfragment operations and storage. These problems are related\nprogramming difficulty is increased by the effort required\nto ensure that precision is conserved wherever possible.\nThese issues should disappear with time. Higher-level\nshading languages have been introduced that make hardware\ngraphics programming easier\n[Peercy, et al. 2000;Proudfoot, et\nal. 2001]\n. The same or similar languages will be usable for\nprogramming simulations on graphics hardware. We believe\nthat the precision of graphics hardware will continue to\nincrease, and with it the full power of programmability will\nbe realised.\n3.2 General-Purpose Computation\nThe use of computer graphics hardware for general-purpose\ncomputation has been an area of active research for many\nyears, beginning on machines like the Ikonas\n[England 1978]\n,\nthe Pixel Machine\n[Potmesil and Hoffert 1989]\nand Pixel-Planes\n5\n[Rhoades, et al. 1992]\n. The wide deployment of\nGPUs in the last several years has resulted in an increase in\nexperimental research with graphics hardware.\n[Trendall and\nSteward 2000]\ngives a detailed summary of the types of\ncomputation available on modern GPUs.\nWithin the realm of graphics applications, programmable\ngraphics hardware has been used for procedural texturing\nand shading\n[Olano and Lastra 1998; Peercy, et al. 2000;\nProudfoot, et al. 2001; Rhoades, et al. 1992]\n. Graphics\nhardware has also been used for volume visualization\n[Cabral, et al. 1994]\n. Recently, methods for using current and\nnear-future GPUs for ray tracing computations have been\ndescribed in\n[Carr, et al. 2002]\nand\n[Purcell, et al. 2002]\n,\nrespectively.\nOther researchers have found ways to use graphics\nhardware for non-graphics applications. The use of\nrasterization hardware for robot motion planning is described\nin\n[Lengyel, et al. 1990]\n.\n[Hoff, et al. 1999]\ndescribes the use\nof z-buffer techniques for the computation of Voronoi\ndiagrams. The PixelFlow SIMD graphics computer\n[Eyles, et\nal. 1997]\nwas used to crack UNIX password encryption\n[Kedem and Ishihara 1999]\n, and graphics hardware has been\nused in the computation of artificial neural networks\n[Bohn\n1998]\n.\nOur work uses CML to simulate dynamic phenomena that\ncan be described by PDEs. Related to this is the\nvisualization of flows described by PDEs, which has been\nimplemented using graphics hardware to accelerate line\nintegral convolution and Lagrangian-Eulerian advection\n[Heidrich, et al. 1999; Jobard, et al. 2001; Weiskopf, et al. 
2001]. NVIDIA has demonstrated the Game of Life cellular automata running on their GPUs, as well as a 2D physically-based water simulation that operates much like our CML simulations [NVIDIA 2001a; NVIDIA 2001b].

3.3 Common Operations
A detailed description of the implementation of the specific simulations that we have modeled using CML would require more space than we have in this paper, so we will instead describe a few common CML operations, followed by details of their implementation. Our goal in these descriptions is to impart a feel for the kinds of operations that can be performed using a graphics hardware implementation of a CML model.

3.3.1 Diffusion and the Laplacian
The divergence of the gradient of a scalar function is called the Laplacian [Weisstein 1999]:

∇²T(x, y) = ∂²T/∂x² + ∂²T/∂y².

The Laplacian is one of the most useful tools for working with partial differential equations. It is an isotropic measure of the second spatial derivative of a scalar function. Intuitively, it can be used to detect regions of rapid change, and for this reason it is commonly used for edge detection in image processing. The discretized form of this equation is:

∇²T_{i,j} = T_{i+1,j} + T_{i-1,j} + T_{i,j+1} + T_{i,j-1} - 4 T_{i,j}.

The Laplacian is used in all of the CML simulations that we have implemented. If the results of the application of a Laplacian operator at a node T_{i,j} are scaled and then added to the value of T_{i,j} itself, the result is diffusion [Weisstein 1999]:

T'_{i,j} = T_{i,j} + (c_d / 4) ∇²T_{i,j}.    (1)

Here, c_d is the coefficient of diffusion. Application of this diffusion operation to a lattice state will cause the state to diffuse through the lattice¹.

¹ See Appendix A for details of our diffusion implementation.
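For reference, a minimal CPU sketch of the diffusion operation of Eq. (1) on a 2D lattice is shown below. It is a software stand-in for the GPU operator, not the hardware implementation itself; periodic boundaries via np.roll are chosen here only for brevity (the hardware version can use clamp or repeat texture addressing).

# CPU reference sketch of Eq. (1): each node adds a scaled Laplacian of its
# four nearest neighbors.
import numpy as np

def diffuse(T, c_d=1.0):
    """One diffusion step on a 2D lattice T with periodic boundaries."""
    lap = (np.roll(T, 1, axis=0) + np.roll(T, -1, axis=0) +
           np.roll(T, 1, axis=1) + np.roll(T, -1, axis=1) - 4.0 * T)
    return T + (c_d / 4.0) * lap

# Example: a single hot spot spreads out over repeated applications.
T = np.zeros((64, 64), dtype=np.float32)
T[32, 32] = 1.0
for _ in range(100):
    T = diffuse(T, c_d=0.5)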
3.3.2 Directional Forces
Most dynamic simulations involve the application of force. Like all operations in a CML model, forces are applied via computations on the state of a node and its neighbors. As an example, we describe a buoyancy operator used in convection and cloud formation simulations [Miyazaki, et al. 2001; Yanagita and Kaneko 1993; Yanagita and Kaneko 1997].
This buoyancy operator uses temperature state T to compute a buoyant velocity at a node and add it to the node's vertical velocity state, v:

v'_{i,j} = v_{i,j} + (c_b / 2) [2 T_{i,j} - T_{i+1,j} - T_{i-1,j}].    (2)

Equation (2) expresses that a node is buoyed upward if its horizontal neighbors are cooler than it is, and pushed downward if they are warmer. The strength of the buoyancy is controlled via the parameter c_b.

3.3.3 Computation on Neighbors
Sometimes an operation requires more complex computation than the arithmetic of the simple buoyancy operation described above. The buoyancy operation of the boiling simulation described in Section 2.1 must also account for phase change, and is therefore more complicated:

T'_{i,j} = T_{i,j} + (σ / 2) [2 ρ(T_{i,j}) - ρ(T_{i,j+1}) - ρ(T_{i,j-1})],
ρ(T) = tanh[α (T - T_c)].    (3)

In Equation (3), σ is the buoyancy strength coefficient, and ρ(T) is an approximation of density relative to temperature, T. The hyperbolic tangent is used to simulate the rapid change of density of a substance around the phase change temperature, T_c. A change in density of a lattice node relative to its vertical neighbors causes the temperature of the node to be buoyed upward or downward. The thing to notice in this equation is that simple arithmetic will not suffice; the hyperbolic tangent function must be applied to the temperature at the neighbors above and below node (i,j). We will discuss how we can compute arbitrary functions using dependent texturing in Section 3.4.

3.4 State Representation and Storage
Our goal is to maintain all state and operation of our simulations in the GPU and its associated memory. To this end, we use the frame buffer like a register array to hold transient state, and we use textures like main memory arrays for state storage. Since the frame buffer and textures are typically limited to storage of 8-bit unsigned integers, state values must be converted to this format before being written to texture.
Texture storage can be used for both scalar and vector data. Because of the four color channels used in image generation, two-, three-, or four-dimensional vectors can be stored in each texel of an RGBA texture. If scalar data are needed, it is often advantageous to store more than one scalar state in a single texture by using different color channels. In our CML implementation of the Gray-Scott reaction-diffusion system, for example, we store the concentrations of both reactants in the same texture. This is not only efficient in storage but also in computation, since operations that act equivalently on both concentrations can be performed in parallel.
Physical simulation also requires the use of signed values. Most texture storage, however, uses unsigned fixed-point values. Although the fragment-level programmability available in current GPUs uses signed arithmetic internally, the unsigned data stored in the textures must be biased and scaled before and after processing [NVIDIA 2002].

3.5 Implementing CML Operations
An iteration of a CML simulation consists of successive application of simple operations on the lattice. These operations consist of three steps: setup the graphics hardware rendering state, render a single quadrilateral fit to the view port, and store the rendered results into a texture. We refer to each of these setup-render-copy operations as a single pass. In practice, due to limited GPU resources (number of texture units, number of register combiners, etc.), a CML operation may span multiple passes.
The setup portion of a pass simply sets the state of the hardware to correctly perform the rest of the pass. To be sure that the correct lattice nodes are sampled during the pass, texels in the input textures must map directly to pixels in the output of the graphics pipeline. To ensure that this is true, we set the view port to the resolution of the lattice, and the view frustum to an orthographic view fit to the lattice, so that there is a one-to-one mapping between pixels in the rendering buffer and texels in the texture to be updated.
The render-copy portion of each pass performs 4 suboperations: Neighbor Sampling, Computation on Neighbors, New State Computation, and State Update. Figure 3 illustrates the mapping of the suboperations to graphics hardware. Neighbor sampling and Computation on Neighbors are performed by the programmable texture mapping hardware.
New State Computation performs\narithmetic on the results of the previous suboperations using\nprogrammable texture blending. Finally, State Update feeds\nthe results of one pass to the next by rendering or copying\nthe texture blending results to a texture.\nNeighbor Sampling: Since state is stored in textures,\nneighbor sampling is performed by offsetting texture\ncoordinates toward the neighbors of the texel being updated.\nFor example, to sample the four nearest neighbor nodes of\nFigure 3: Components of a CML operation map to\ngraphics hardware pipeline components.\n112\nHarris, Coombe, Scheuermann, and Lastra / Simulation on Graphics Hardware\n\n\n\nThe Eurographics Association 2002.\n\nnode (x,y), the texture coordinates at the corners of the\nquadrilateral mentioned above are offset in the direction of\neach neighbor by the width of a single texel. Texture\ncoordinate interpolation ensures that as rasterization\nproceeds, every texel's neighbors will be correctly sampled.\nNote that beyond sampling just the nearest neighbors of a\nnode, weighted averages of nearby nodes can be computed\nby exploiting the linear texture interpolation hardware\navailable in GPUs. An example of this is our single-pass\nimplementation of 2D diffusion, described in Appendix A.\nCare must be taken, though, since the precision used for\nthe interpolation coefficients is sometimes lower than the rest\nof the texture pipeline.\nComputation on Neighbors: As described in Section\n3.3.3, many simulations compute complex functions of the\nneighbors they sample. In many cases, these functions can\nbe computed ahead of time and stored in a texture for use as\na lookup table. The programmable texture shader\nfunctionality of recent GPUs provides several dependent\ntexture addressing operations. We have implemented table\nlookups using the \"DEPENDENT_GB_TEXTURE_\n2D_NV\" texture shader of the GeForce 3. This shader\nprovides memory indirect texture addressing the green and\nblue colors read from one texture unit are used as texture\ncoordinates for a lookup into a second texture unit. By\nbinding the precomputed lookup table texture to the second\ntexture unit, we can implement arbitrary function operations\non the values of the nodes (Figure 4).\nNew State Computation: Once we have sampled the\nvalues of neighboring texels and optionally used them for\nfunction table lookups, we need to compute the new state of\nthe lattice. We use programmable hardware texture blending\nto perform arithmetic operations including addition,\nmultiplication, and dot products. On the GeForce 3 and 4,\nwe implement this using register combiners\n[NVIDIA 2002]\n\nRegister combiners take the output of texture shaders and\nrasterization as input, and provide arithmetic operations,\nuser-defined constants, and temporary registers. The result\nof these computations is written to the frame buffer.\nState Update: Once the new state is computed, we must\nstore it in a state texture. In our current implementation, we\ncopy the newly-rendered frame buffer to a texture using the\nglCopyTexSubImage2D() instruction in OpenGL. Since all\nsimulation state is stored in textures, our technique avoids\nlarge data transfers between the CPU and GPU during\nsimulation and rendering.\n3.6 Numerical Range of CML Simulations\nThe physically based nature of CML simulations means that\nthe ranges of state values for different simulations can vary\nwidely. 
The graphics hardware we use to implement them,\non the other hand, operates only on fixed-point fragment\nvalues in the range [0,1]. This means that we must\nnormalize the range of a simulation into [0,1] before it can be\nimplemented in graphics hardware.\nBecause the hardware uses limited-precision fixed-point\nnumbers, some simulations will be more robust to this\nnormalization than others. The robustness of a simulation\ndepends on several factors. Dynamic range is the ratio\nbetween a simulation's largest absolute value and its smallest\nnon-zero absolute value. If a simulation has a high dynamic\nrange, it may not be robust to normalization unless the\nprecision of computation is high enough to represent the\ndynamic range. We refer to a simulation's resolution as the\nsmallest absolute numerical difference that it must be able to\ndiscern. A simulation with a resolution finer than the\nresolution of the numbers used in its computation will not be\nrobust. Finally, as the arithmetic complexity of a simulation\nincreases, it will incur more roundoff error, which may\nreduce its robustness when using low-precision arithmetic.\nFor example, the boiling simulation (Section 4.1) has a\nrange of approximately [0,10], but its values do not get very\nclose to zero, so its dynamic range is less than ten. Also, its\nresolution is fairly coarse, since the event to which it is most\nsensitive phase change is near the top of its range. For\nthese reasons, boiling is fairly robust under normalization.\nReaction-diffusion has a range of [0,1] so it does not require\nnormalization. Its dynamic range, however, is on the order\nof 10\n5\n, which is much higher than that of the 8-bit\n\nnumbers\nstored in textures. Fortunately, by scaling the coefficients of\nreaction-diffusion, we can reduce this dynamic range\nsomewhat to get interesting results. However, as we\ndescribe in Section 4.3, it suffers from precision errors (See\nSection 5.1 for more discussion of precision issues). As\nmore precision becomes available in graphics hardware,\nnormalization will become less of an issue. When floating\npoint computation is made available, simulations can be run\nwithin their natural ranges.\nResults\nWe have designed and built an interactive framework,\n\"CMLlab\", for constructing and experimenting with CML\nsimulations (Figure 5). The user constructs a simulation\nfrom a set of general purpose operations, such as diffusion\nand advection, or special purpose operations designed for\nspecific simulations, such as the buoyancy operations\ndescribed in Section 3.3. Each operation processes a set of\ninput textures and produces a single output texture. The user\nconnects the outputs and inputs of the selected operations\ninto a directed acyclic graph. An iteration of the simulation\nconsists of traversing the graph in depth-first fashion so that\neach operation is performed in order. The state textures\nresulting from an iteration are used as input state for the next\niteration, and for displaying the simulated system. The\nFigure 4: Arbitrary function lookups are implemented\nusing dependent texturing in graphics hardware.\n113\nHarris, Coombe, Scheuermann, and Lastra / Simulation on Graphics Hardware\n\n\n\nThe Eurographics Association 2002.\n\nresults of intermediate passes in a simulation iteration can be\ndisplayed to the user in place of the result textures. 
This is useful for visually debugging the operation of a new simulation.
While 2D simulations in our framework use only 2D textures for storage of lattice state, 3D simulations can be implemented in two ways. The obvious way is to use 3D textures. However, the poor performance of copying to 3D textures in current driver implementations would make our simulations run much slower. Instead, we implement 3D simulations using a collection of 2D slices to represent the 3D volume. This has disadvantages over using true 3D textures. For example, we must implement linear filtering and texture boundary conditions (clamp or repeat) in software, whereas 3D texture functionality provides these in hardware.
It is worth noting that we trade optimal performance for flexibility in the CMLlab framework. Because we want to allow a variety of models to be built from a set of operations, we often incur the expense of some extra texture copies in order to keep operations separate. Thus, our implementation is not optimal; even faster rates are achievable on the same hardware by sacrificing operator reuse.
To demonstrate the utility of hardware CML simulation in interactive 3D graphics applications, we have integrated the simulation system into a virtual environment built on a 3D game engine, "Wild Magic" [Eberly 2001]. Figure 7 (see color plate) is an image of a boiling witch's brew captured from a real-time demo we built with the engine. The demo uses our 3D boiling simulation (Section 4.1) and runs at 45 frames per second.
We will now describe three of the CML simulations that we have implemented. The test computer we used is a PC with a single 2.0 GHz Pentium 4 processor and 512 MB of RAM. Tests were performed on this machine with both an NVIDIA GeForce 3 Ti 500 GPU with 64 MB of RAM, and an NVIDIA GeForce 4 Ti 4600 GPU with 128 MB of RAM.

4.1 Boiling
We have implemented 2D and 3D boiling simulations as described in [Yanagita 1992]. Rather than simulate all components of the boiling phenomenon (temperature, pressure, velocity, phase of matter, etc.), their model simulates only the temperature of the liquid as it boils. The simulation is composed of successive application of thermal diffusion, bubble formation and buoyancy, and latent heat transfer. Sections 3.3.1 and 3.3.3 described the first two of these, and Section 2.1 gave an overview of the model. For details of the latent heat transfer computation, we refer the reader to [Yanagita 1992]. Our implementation requires seven passes per iteration for the 2D simulation, and 9 passes per slice for the 3D simulation. Table 1 shows the simulation speed for a range of resolutions. For details of our boiling simulation implementation, see [Harris 2002b].

4.2 Convection
The Rayleigh-Bénard convection CML model of [Yanagita and Kaneko 1993] simulates convection using four CML operations: buoyancy (described in 3.3.2), thermal diffusion, temperature and velocity advection, and a viscosity and pressure effect. The viscosity and pressure effect is implemented as

v' = v + (k_v / 4) ∇²v + k_p grad(div v),

where v is the velocity, k_v is the viscosity ratio and k_p is the coefficient of the pressure effect. The first two terms of this equation account for diffusion of the velocity, and the last term is the flow caused by the gradient of the mass flow around the lattice [Miyazaki, et al. 2001]. See [Miyazaki, et al. 2001; Yanagita and Kaneko 1993] for details of the discrete implementation of this operation.
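As an illustration of the viscosity and pressure-effect update, the sketch below applies it componentwise on a periodic 2D lattice. The discretization choices (central differences via np.roll) and parameter values are ours, not the exact stencil of [Miyazaki, et al. 2001].

# Sketch: v' = v + (k_v/4) * laplacian(v) + k_p * grad(div v).
import numpy as np

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def ddx(f):  # central difference along x (axis 1)
    return 0.5 * (np.roll(f, -1, 1) - np.roll(f, 1, 1))

def ddy(f):  # central difference along y (axis 0)
    return 0.5 * (np.roll(f, -1, 0) - np.roll(f, 1, 0))

def viscosity_pressure(vx, vy, k_v=0.2, k_p=0.1):
    div = ddx(vx) + ddy(vy)
    vx_new = vx + (k_v / 4.0) * laplacian(vx) + k_p * ddx(div)
    vy_new = vy + (k_v / 4.0) * laplacian(vy) + k_p * ddy(div)
    return vx_new, vy_new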
The remaining operation is advection of temperature and velocity by the velocity field. [Yanagita and Kaneko 1993] implements this by distributing state from a node to its neighbors according to the velocity at the node. In our implementation, this was made difficult by the precision limitations of the hardware, so we used a texture shader-based advection operation instead. This operation advects state stored in a texture using the GL_OFFSET_TEXTURE_2D_NV dependent texture addressing mode of the GeForce 3 and 4. A description of this method can be found in [Weiskopf, et al. 2001]. Our 2D convection implementation (Figure 8 in the color plate section) requires 10 passes per iteration. We have not implemented a 3D convection simulation because the GeForce 3 and 4 do not have a 3D equivalent of the offset texture operation.
Due to the precision limitations of the graphics hardware, our implementation of convection did not behave exactly as described by [Yanagita and Kaneko 1993]. We do observe the formation of convective rolls, but the motion of both the temperature and velocity fields is quite turbulent. We believe that this is a result of low-precision arithmetic.

Resolution  | Software | GeForce 3 | GeForce 4 | Speedup
64x64       | 266.5    | 1252.9    | 1752.5    | 4.7 / 6.6
128x128     | 61.8     | 679.0     | 926.6     | 11.0 / 15.0
256x256     | 13.9     | 221.3     | 286.6     | 15.9 / 20.6
512x512     | 3.3      | 61.2      | 82.3      | 18.5 / 24.9
1024x1024   | 0.9      | 15.5      | 21.6      | 17.2 / 24
32x32x32    | 25.5     | 104.3     | 145.8     | 4.1 / 5.7
64x64x64    | 3.2      | 37.2      | 61.8      | 11.6 / 19.3
128x128x128 | 0.4      | NA        | 8.3       | NA / 20.8
Table 1: A speed comparison (in iterations per second) of our hardware CML boiling simulation to a software version. The speedup column gives the speedup for both GeForce 3 and 4.

Figure 5: CMLlab, our interactive framework for building and experimenting with CML simulations.

4.3 Reaction-Diffusion
Reaction-diffusion processes were proposed by [Turing 1952] and introduced to computer graphics by [Turk 1991; Witkin and Kass 1991]. They are a well-studied model for the interaction of chemical reactants, and are interesting due to their complex and often chaotic behavior. The patterns that emerge are reminiscent of patterns occurring in nature [Lee, et al. 1993]. We implemented the Gray-Scott model, as described in [Pearson 1993]. This is a two-chemical system defined by the initial value partial differential equations:

∂U/∂t = D_u ∇²U - UV² + F(1 - U),
∂V/∂t = D_v ∇²V + UV² - (F + k)V,

where F, k, D_u, and D_v are parameters given in [Pearson 1993]. We have implemented 2D and 3D versions of this process, as shown in Figure 5 (2D), and Figures 1 and 9 (3D, on color plate). We found reaction-diffusion relatively simple to implement in our framework because we were able to reuse our existing diffusion operator. In 2D this simulation requires two passes per iteration, and in 3D it requires three passes per slice. A 256x256 lattice runs at 400 iterations per second in our interactive framework, and a 128x128x32 lattice runs at 60 iterations per second.
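A CPU reference sketch of one Gray-Scott update, reusing the same Laplacian stencil as the diffusion operator, is shown below. The parameter values and seeding pattern are illustrative choices on our part; [Pearson 1993] gives the parameter ranges the simulation actually uses.

# Sketch of a Gray-Scott reaction-diffusion step on a periodic 2D lattice.
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    lapU = (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
            np.roll(U, 1, 1) + np.roll(U, -1, 1) - 4.0 * U)
    lapV = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
            np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4.0 * V)
    UVV = U * V * V
    U_new = U + dt * (Du * lapU - UVV + F * (1.0 - U))
    V_new = V + dt * (Dv * lapV + UVV - (F + k) * V)
    return U_new, V_new

# Seed: U = 1 everywhere, with a small square perturbation of reactant V.
n = 256
U = np.ones((n, n), dtype=np.float32)
V = np.zeros((n, n), dtype=np.float32)
U[120:136, 120:136], V[120:136, 120:136] = 0.5, 0.25
for _ in range(1000):
    U, V = gray_scott_step(U, V)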
We have seen a variety of results, but much\nless diversity than produced by a floating point\nimplementation. As with convection, this appears to be\ncaused by the effects of low-precision arithmetic.\nHardware Limitations\nWhile current GPUs make a good platform for CML\nsimulation, they are not without problems. Some of these\nproblems are performance problems of the current\nimplementation, and may not be issues in the near future.\nNVIDIA has shown in the past that slow performance can\noften be alleviated via optimization of the software drivers\nthat accompany the GPU. Other limitations are more\nfundamental.\nMost of the implementation limitations that we\nencountered were limitations that affected performance. We\nhave found glCopyTexSubImage3D(), which copies the\nframe buffer to a slice of a 3D texture, to be much slower (up\nto three orders of magnitude) than glCopyTexSubImage2D()\nfor the same amount of data. This prevented us from using\n3D textures in our implementation. Once this problem is\nalleviated, we expect a 3D texture implementation to be\nfaster and easier to implement, since it will remove the need\nto bind multiple textures to sample neighbors in the third\ndimension. Also, 3D textures provide hardware linear\ninterpolation and boundary conditions (periodic or fixed) in\nall three dimensions. With our slice-based implementation,\nwe must interpolate and handle boundary conditions in the\nthird dimension in software.\nThe ability to render to texture will also provide a speed\nimprovement, as we estimate that in a complex 3D\nsimulation, much of the processing time is spent copying\nrendered data from the frame buffer to textures (typically one\ncopy per pass). When using 3D textures, we will need the\nability to render to a slice of a 3D texture.\n5.1 Precision\nThe hardware limitation that causes the most problems to\nour implementation is precision. The register combiners in\nthe GeForce 3 and 4 perform arithmetic using nine-bit signed\nfixed-point values. Without floating point, the programmer\nmust scale and bias values to maintain them in ranges that\nmaximize precision. This is not only difficult, it is subject to\narithmetic error. Some simulations (such as boiling) handle\nthis error well, and behave as predicted by a floating point\nimplementation. Others, such as our reaction-diffusion\nimplementation, are more sensitive to precision errors.\nWe have done some analysis of the error introduced by\nlow precision and experiments to determine how much\nprecision is needed (For full details, see\n[Harris 2002a]\n). We\nhypothesize that the diffusion operation is very susceptible to\nFigure 6: High-precision fragment computations in near\nfuture graphics hardware will enable accurate simulation of\nreaction-diffusion at hundreds of iterations per second.\n115\nHarris, Coombe, Scheuermann, and Lastra / Simulation on Graphics Hardware\n\n\n\nThe Eurographics Association 2002.\n\nroundoff error, because in our experiments in CMLlab,\niterated application of a diffusion operator never fully\ndiffuses its input. We derive the error induced by each\napplication of diffusion (in 2D) to a node (i,j) as\n,\n3\n(3\n)\n4\nd\ni j\nd\nx\n\n\n\n+\n+\n,\nwhere d is the diffusion coefficient, x\ni,j\nis the value at node (i,\nj), and\n\nis the amount of roundoff error in each arithmetic\noperation. Since d and x\ni,j\nare in the range [0,1], this error is\nbounded above by\n4.75\nd\n\n\n\n. With 8 bits of precision,\n\nis\nat most 2\n-9\n. 
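The failure of iterated diffusion to fully flatten its input is simple to reproduce outside the GPU. The following sketch (lattice size, diffusion coefficient, bit depth, and iteration count are illustrative choices, and this is not the CMLlab code itself) applies a standard four-neighbor diffusion step and rounds the state to 8 fractional bits after every iteration, a crude stand-in for the per-operation round-off of the register combiners:

import numpy as np

def diffuse(T, c_d):
    # Four-neighbor CML diffusion step with periodic boundaries.
    neighbor_sum = (np.roll(T, 1, axis=0) + np.roll(T, -1, axis=0) +
                    np.roll(T, 1, axis=1) + np.roll(T, -1, axis=1))
    return (1.0 - c_d) * T + (c_d / 4.0) * neighbor_sum

def quantize(T, frac_bits):
    # Round to a fixed-point grid; with 8 fractional bits the round-off
    # introduced by this step is at most 2**-9.
    scale = 2.0 ** frac_bits
    return np.round(T * scale) / scale

rng = np.random.default_rng(1)
T_float = rng.random((16, 16))
T_fixed = T_float.copy()

for _ in range(2000):
    T_float = diffuse(T_float, 0.25)
    T_fixed = quantize(diffuse(T_fixed, 0.25), frac_bits=8)

# Remaining spread in each lattice; complete diffusion would drive this to zero.
print(T_float.max() - T_float.min(), T_fixed.max() - T_fixed.min())

With \epsilon = 2^{-9} and d and x_{i,j} at their maximum of 1, the bound above evaluates to roughly 4.75 \times 2^{-9} \approx 0.009, nearly one percent of the [0,1] range of the state values per application.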
This error is fairly large, meaning that a\nsimulation that is sensitive to small numbers will quickly\ndiverge.\nIn an attempt to better understand the precision needs of\nour more sensitive simulations, we implemented a software\nversion of our reaction-diffusion simulation with adjustable\nfixed-point precision. Through experimentation, we have\nfound that with 14 or more bits of fixed-point precision, the\nbehavior of this simulation is visually very similar to our\nsingle-precision floating-point implementation. Like the\nfloating-point version, a diverse variety of patterns grow,\nevolve, and sometimes develop unstable formations that\nnever cease to change. Figure 6 shows a variety of patterns\ngenerated with this 14-bit fixed-point simulation.\nGraphics hardware manufacturers are quickly moving\ntoward higher-quality pixels. This goal, along with\nincreasing programmability, makes high-precision\ncomputation essential. Higher precision, including floating-point\nfragment values, will become a standard feature of\nGPUs in the near future\n[Spitzer 2002]\n. With the increasing\nprecision and programmability of GPUs, we believe that\nCML methods for simulating natural phenomena using\ngraphics hardware will become very useful.\nConclusions and Future Work\nIn this paper, we have described a method for simulating a\nvariety of dynamic phenomena using graphics hardware. We\npresented the coupled map lattice as a simple and flexible\nsimulation technique, and showed how CML operations map\nto computer graphics hardware operations. We have\ndescribed common CML operations and how they can be\nimplemented on programmable GPUs.\nOur hardware CML implementation shows a substantial\nspeed increase (up to 25 times on a GeForce 4) over the same\nsimulations implemented to run on a Pentium 4 CPU.\nHowever, this comparison (and the speedup numbers in\nTable 1) should be taken with a grain of salt. While our\nCPU-based CML simulator is an efficient, straightforward\nimplementation that obeys common cache coherence\nprinciples, it is not highly optimized, and could be\naccelerated by using vectorized CPU instructions. Our\ngraphics hardware implementation is not highly optimized\neither. We sacrifice optimal speed for flexibility. The CPU\nversion is also written to use single precision floats, while\nthe GPU version uses fixed-point numbers with much less\nprecision. Nevertheless, we feel that it would be difficult, if\nnot impossible, to achieve a 25x speedup over our current\nCPU implementation by optimizing the code and using lower\nprecision numbers. A more careful comparison and\noptimized simulations on both platforms would be useful in\nthe future.\n\"CMLlab\", our flexible framework for building CML\nmodels, allows a user to experiment with simulations\nrunning on graphics processors. We have described various\n2D and 3D simulations that we have implemented in this\nframework. We have also integrated our CML framework\nwith a 3D game engine to demonstrate the use of 3D CML\nmodels in interactive scenes and virtual environments. In the\nfuture, we would like to add more flexibility to CMLlab.\nUsers currently cannot define new, custom operations\nwithout writing C++ code. It would be possible, however, to\nprovide generic, scriptable operators, since the user\nmicrocode that runs on the GPU can be dynamically loaded.\nWe have described the problems we encountered in\nimplementing CML in graphics hardware, such as limited\nprecision and 3D texturing performance problems. 
We\nbelieve that these problems will be alleviated in near future\ngenerations of graphics hardware. With the continued\naddition of more texture units, memory, precision, and more\nflexible programmability, graphics hardware will become an\neven more powerful platform for visual simulation. Some\nrelatively simple extensions to current graphics hardware and\nAPIs would benefit CML and PDE simulation. For example,\nthe ability to render to 3D textures could simplify and\naccelerate each pass of our simulations. One avenue for\nfuture research is to increase parallelization of simulations on\ngraphics hardware. Currently, it is difficult to add multiple\nGPUs to a single computer because PCs have a single AGP\nport. If future PC hardware adds support for multiple GPUs,\npowerful multiprocessor machines could be built with these\ninexpensive processors.\nWe plan to continue exploring the use of CML on current\nand future generations of graphics hardware. We are\ninterested in porting our system to ATI Radeon hardware.\nThe Radeon 8500 can sample more textures per pass and has\nmore programmable texture addressing than GeForce 3,\nwhich could add power to CML simulations. Also, our\ncurrent framework relies mostly on the power of the\nfragment processing pipeline, and uses none of the power\navailable in the programmable vertex engine. We could\ngreatly increase the complexity of simulations by taking\nadvantage of this. Currently, this would incur additional cost\nfor feedback of the output of the fragment pipeline (through\nthe main memory) and back into the vertex pipeline, but\ndepending on the application, it may be worth the expense.\nGPU manufacturers could improve the performance of this\nfeedback by allowing textures in memory to be interpreted as\nvertex meshes for processing by the vertex engine, thus\navoiding unneccessary transfers back to the host.\nWe hope to implement the cloud simulation described by\n[Miyazaki, et al. 2001]\nin the near future, as well as other\n116\nHarris, Coombe, Scheuermann, and Lastra / Simulation on Graphics Hardware\n\n\n\nThe Eurographics Association 2002.\n\ndynamic phenomena. Also, since the boiling simulation of\n[Yanagita 1992]\nmodels only temperature, and disregards\nsurface tension, the bubbles are not round. We are interested\nin extending this simulation to improve its realism. We plan\nto continue exploring the use of computer graphics hardware\nfor general computation. As an example, the anisotropic\ndiffusion that can be performed on a GPU may be useful for\nimage-processing and computer vision applications.\n\nAcknowledgements\nThe authors would like to thank Steve Molnar, John Spitzer\nand the NVIDIA Developer Relations team for answering\nmany questions. This work was supported in part by\nNVIDIA Corporation, US NIH National Center for Research\nResources Grant Number P41 RR 02170, US Office of\nNaval Research N00014-01-1-0061, US Department of\nEnergy ASCI program, and National Science Foundation\ngrants ACR-9876914 and IIS-0121293.\nA Implementation\nof\nDiffusion\nOn GeForce 3 hardware, the diffusion operation can be\nimplemented more efficiently than the Laplacian operator\nitself. To do so, we rewrite Equation (1) as\n'\n,\n,\n1,\n1,\n, 1\n, 1\n4\n,\n( , )\n1\n(1\n)\n(\n)\n4\n1\n[(1\n)\n],\n4\nk\nd\ni j\nd\ni j\ni\nj\ni\nj\ni j\ni j\nd\ni j\nd n i j\nk\nc\nT\nc T\nT\nT\nT\nT\nc T\nc T\n+\n+\n=\n= +\n+\n+\n+\n=\n+\n\n\nwhere n\nk\n(x,y) represents the kth nearest neighbor of (x, y). 
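In conventional notation, this rewrite is

    T'_{i,j} = (1 - c_d) T_{i,j} + \frac{c_d}{4} (T_{i+1,j} + T_{i-1,j} + T_{i,j+1} + T_{i,j-1})
             = \frac{1}{4} \sum_{k=1}^{4} [ (1 - c_d) T_{i,j} + c_d T_{n_k(i,j)} ].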
In\nthis form, we see that the diffusion operator is the average of\nfour weighted sums of the center texel, T\ni,j\nand its four\nnearest neighbor texels. These weighted sums are actually\nlinear interpolation computations, with c\nd\nas the parameter of\ninterpolation. This means that we can implement the\ndiffusion operation described by Equation 3 by enabling\nlinear texture filtering, and using texture coordinate offsets\nof c\nd\n\nw\n,\nwhere w is the width of a texel as described in\nSection 3.5.\nReferences\n[Bohn 1998] Bohn, C.-A. Kohonen Feature Mapping\nThrough Graphics Hardware. In Proceedings of 3rd Int.\nConference on Computational Intelligence and\nNeurosciences 1998. 1998.\n[Cabral, et al. 1994] Cabral, B., Cam, N. and Foran, J.\nAccelerated Volume Rendering and Tomographic\nReconstruction Using Texture Mapping Hardware. In\nProceedings of Symposium on Volume Visualization 1994,\n91-98. 1994.\n[Carr, et al. 2002] Carr, N.A., Hall, J.D. and Hart, J.C. The\nRay Engine. In Proceedings of SIGGRAPH / Eurographics\nWorkshop on Graphics Hardware 2002. 2002.\n[Dobashi, et al. 2000] Dobashi, Y., Kaneda, K., Yamashita,\nH., Okita, T. and Nishita, T. A Simple, Efficient Method for\nRealistic Animation of Clouds. In Proceedings of\nSIGGRAPH 2000, ACM Press / ACM SIGGRAPH, 19-28.\n2000.\n[Eberly 2001] Eberly, D.H. 3D Game Engine Design.\nMorgan Kaufmann Publishers. 2001.\n[England 1978] England, J.N. A system for interactive\nmodeling of physical curved surface objects. In Proceedings\nof SIGGRAPH 78 1978, 336-340. 1978.\n[Eyles, et al. 1997] Eyles, J., Molnar, S., Poulton, J., Greer,\nT. and Lastra, A. PixelFlow: The Realization. In Proceedings\nof 1997 SIGGRAPH / Eurographics Workshop on Graphics\nHardware 1997, ACM Press, 57-68. 1997.\n[Fedkiw, et al. 2001] Fedkiw, R., Stam, J. and Jensen, H.W.\nVisual Simulation of Smoke. In Proceedings of SIGGRAPH\n2001, ACM Press / ACM SIGGRAPH. 2001.\n[Foster and Metaxas 1997] Foster, N. and Metaxas, D.\nModeling the Motion of a Hot, Turbulent Gas. In\nProceedings of SIGGRAPH 1997, ACM Press / ACM\nSIGGRAPH, 181-188. 1997.\n[Harris 2002a] Harris, M.J. Analysis of Error in a CML\nDiffusion Operation. University of North Carolina Technical\nReport TR02-015.\nhttp://www.cs.unc.edu/~harrism/cml/dl/HarrisTR02-015.pdf\n. 2002a.\n[Harris 2002b] Harris, M.J. Implementation of a CML\nBoiling Simulation using Graphics Hardware. University of\nNorth Carolina Technical Report TR02-016.\nhttp://www.cs.unc.edu/~harrism/cml/dl/HarrisTR02-016.pdf\n. 2002b.\n[Heidrich, et al. 1999] Heidrich, W., Westermann, R., Seidel,\nH.-P. and Ertl, T. Applications of Pixel Textures in\nVisualization and Realistic Image Synthesis. In Proceedings\nof ACM Symposium on Interactive 3D Graphics 1999. 1999.\n[Hoff, et al. 1999] Hoff, K.E.I., Culver, T., Keyser, J., Lin,\nM. and Manocha, D. Fast Computation of Generalized\nVoronoi Diagrams Using Graphics Hardware. In\nProceedings of SIGGRAPH 1999, ACM / ACM Press, 277-286\n. 1999.\n[Jobard, et al. 2001] Jobard, B., Erlebacher, G. and Hussaini,\nM.Y. Lagrangian-Eulerian Advection for Unsteady Flow\nVisualization. In Proceedings of IEEE Visualization 2001.\n2001.\n[Kaneko 1993] Kaneko, K. (ed.), Theory and applications of\ncoupled map lattices. Wiley, 1993.\n[Kapral 1993] Kapral, R. Chemical Waves and Coupled Map\nLattices. in Kaneko, K. ed. Theory and Applications of\nCoupled Map Lattices, Wiley, 135-168. 1993.\n[Kedem and Ishihara 1999] Kedem, G. and Ishihara, Y.\nBrute Force Attack on UNIX Passwords with SIMD\nComputer. 
In Proceedings of The 8th USENIX Security\nSymposium 1999. 1999.\n[Lee, et al. 1993] Lee, K.J., McCormick, W.D., Ouyang, Q.\nand Swinn, H.L. Pattern Formation by Interacting Chemical\nFronts. Science, 261. 192-194. 1993.\n[Lengyel, et al. 1990] Lengyel, J., Reichert, M., Donald, B.R.\nand Greenberg, D.P. Real-Time Robot Motion Planning\nUsing Rasterizing Computer Graphics Hardware. In\nProceedings of SIGGRAPH 1990, 327-335. 1990.\n117\nHarris, Coombe, Scheuermann, and Lastra / Simulation on Graphics Hardware\n\n\n\nThe Eurographics Association 2002.\n\n[Lindholm, et al. 2001] Lindholm, E., Kilgard, M. and\nMoreton, H. A User Programmable Vertex Engine. In\nProceedings of SIGGRAPH 2001, ACM Press / ACM\nSIGGRAPH, 149-158. 2001.\n[Miyazaki, et al. 2001] Miyazaki, R., Yoshida, S., Dobashi,\nY. and Nishita, T. A Method for Modeling Clouds Based on\nAtmospheric Fluid Dynamics. In Proceedings of The Ninth\nPacific Conference on Computer Graphics and Applications\n2001, IEEE Computer Society Press, 363-372. 2001.\n[Nagel and Raschke 1992] Nagel, K. and Raschke, E. Self-organizing\ncriticality in cloud formation? Physica A, 182.\n519-531. 1992.\n[Nishimori and Ouchi 1993] Nishimori, H. and Ouchi, N.\nFormation of Ripple Patterns and Dunes by Wind-Blown\nSand. Physical Review Letters, 71 1. 197-200. 1993.\n[NVIDIA 2002] NVIDIA. NVIDIA OpenGL Extension\nSpecifications.\nhttp://developer.nvidia.com/view.asp?IO=nvidia_opengl_specs\n. 2002.\n[NVIDIA 2001a] NVIDIA. NVIDIA OpenGL Game Of Life\nDemo.\nhttp://developer.nvidia.com/view.asp?IO=ogl_gameoflife\n.\n2001a.\n[NVIDIA 2001b] NVIDIA. NVIDIA Procedural Texture\nPhysics Demo.\nhttp://developer.nvidia.com/view.asp?IO=ogl_dynamic_bumpreflection\n.\n2001b.\n[Olano and Lastra 1998] Olano, M. and Lastra, A. A Shading\nLanguage on Graphics Hardware: The PixelFlow Shading\nSystem. In Proceedings of SIGGRAPH 1998, ACM / ACM\nPress, 159-168. 1998.\n[Pearson 1993] Pearson, J.E. Complex Patterns in a Simple\nSystem. Science, 261. 189-192. 1993.\n[Peercy, et al. 2000] Peercy, M.S., Olano, M., Airey, J. and\nUngar, P.J. Interactive Multi-Pass Programmable Shading. In\nProceedings of SIGGRAPH 2000, ACM Press / ACM\nSIGGRAPH, 425-432. 2000.\n[Potmesil and Hoffert 1989] Potmesil, M. and Hoffert, E.M.\nThe Pixel Machine: A Parallel Image Computer. In\nProceedings of SIGGRAPH 89 1989, ACM, 69-78. 1989.\n[Proudfoot, et al. 2001] Proudfoot, K., Mark, W.R.,\nTzvetkov, S. and Hanrahan, P. A Real-Time Procedural\nShading System for Programmable Graphics Hardware. In\nProceedings of SIGGRAPH 2001, ACM Press / ACM\nSIGGRAPH, 159-170. 2001.\n[Purcell, et al. 2002] Purcell, T.J., Buck, I., Mark, W.R. and\nHanrahan, P. Ray Tracing on Programmable Graphics\nHardware. In Proceedings of SIGGRAPH 2002, ACM /\nACM Press. 2002.\n[Qian, et al. 1996] Qian, Y.H., Succi, S. and Orszag, S.A.\nRecent Advances in Lattice Boltzmann Computing. in\nStauffer, D. ed. Annual Reviews of Computational Physics\nIII, World Scientific, 195-242. 1996.\n[Rhoades, et al. 1992] Rhoades, J., Turk, G., Bell, A., State,\nA., Neumann, U. and Varshney, A. Real-Time Procedural\nTextures. In Proceedings of Symposium on Interactive 3D\nGraphics 1992, ACM / ACM Press, 95-100. 1992.\n[Spitzer 2002] Spitzer, J. Shading and Game Development\n(Presentation on NVIDIA Technology). IBM EDGE\nWorkshop. 2002.\n[Stam 1999] Stam, J. Stable Fluids. In Proceedings of\nSIGGRAPH 1999, ACM Press / ACM SIGGRAPH, 121-128\n. 1999.\n[Toffoli and Margolus 1987] Toffoli, T. and Margolus, N.\nCellular Automata Machines. The MIT Press. 
1987.\n[Trendall and Steward 2000] Trendall, C. and Steward, A.J.\nGeneral Calculations using Graphics Hardware, with\nApplications to Interactive Caustics. In Proceedings of\nEurogaphics Workshop on Rendering 2000, Springer, 287-298\n. 2000.\n[Turing 1952] Turing, A.M. The chemical basis of\nmorphogenesis. Transactions of the Royal Society of London,\nB237. 37-72. 1952.\n[Turk 1991] Turk, G. Generating Textures on Arbitrary\nSurfaces Using Reaction-Diffusion. In Proceedings of\nSIGGRAPH 1991, ACM Press / ACM SIGGRAPH, 289-298\n. 1991.\n[von Neumann 1966] von Neumann, J. Theory of Self-Reproducing\nAutomata. University of Illinois Press. 1966.\n[Weiskopf, et al. 2001] Weiskopf, D., Hopf, M. and Ertl, T.\nHardware-Accelerated Visualization of Time-Varying 2D\nand 3D Vector Fields by Texture Advection via\nProgrammable Per-Pixel Operations. In Proceedings of\nVision, Modeling, and Visualization 2001, 439-446. 2001.\n[Weisstein 1999] Weisstein, E.W. CRC Concise\nEncyclopedia of Mathematics. CRC Press. 1999.\n[Witkin and Kass 1991] Witkin, A. and Kass, M. Reaction-Diffusion\nTextures. In Proceedings of SIGGRAPH 1991,\nACM Press / ACM SIGGRAPH, 299-308. 1991.\n[Wolfram 1984] Wolfram, S. Cellular automata as models of\ncomplexity. Nature, 311. 419-424. 1984.\n[Yanagita 1992] Yanagita, T. Phenomenology of boiling: A\ncoupled map lattice model. Chaos, 2 3. 343-350. 1992.\n[Yanagita and Kaneko 1993] Yanagita, T. and Kaneko, K.\nCoupled map lattice model for convection. Physics Letters A,\n175. 415-420. 1993.\n[Yanagita and Kaneko 1997] Yanagita, T. and Kaneko, K.\nModeling and Characterization of Cloud Dynamics. Physical\nReview Letters, 78 22. 4297-4300. 1997.\n\n118\nHarris, Coombe, Scheuermann, and Lastra / Simulation on Graphics Hardware\n\n\n\nThe Eurographics Association 2002.\n\n\nFigure 7: A CML boiling simulation running in an\ninteractive 3D environment (the steam is a particle\nsystem).\n\nFigure 9: A sequence from our 3D version of the Gray-Scott reaction-diffusion model.\nFigure 8: A CML convection simulation. The left panel\nshows temperature; the right panel shows 2D velocity\nencoded in the blue and green color channels\n.\n160", "keywords": "Coupled Map Lattice;Visual Simulation;Reaction-Diffusion;dynamic phenomena;Multipass Rendering;simulation;CML;graphic hardware;Graphics Hardware"} {"name": "149", "title": "Physiological Measures of Presence in Stressful Virtual Environments", "abstract": "A common measure of the quality or effectiveness of a virtual environment (VE) is the amount of presence it evokes in users. Presence is often defined as the sense of being there in a VE. There has been much debate about the best way to measure presence, and presence researchers need, and have sought, a measure that is reliable, valid, sensitive, and objective. We hypothesized that to the degree that a VE seems real, it would evoke physiological responses similar to those evoked by the corresponding real environment, and that greater presence would evoke a greater response. To examine this, we conducted three experiments, the results of which support the use of physiological reaction as a reliable, valid, sensitive, and objective presence measure. The experiments compared participants' physiological reactions to a non-threatening virtual room and their reactions to a stressful virtual height situation. We found that change in heart rate satisfied our requirements for a measure of presence, change in skin conductance did to a lesser extent, and that change in skin temperature did not. 
Moreover, the results showed that inclusion of a passive haptic element in the VE significantly increased presence and that for presence evoked: 30FPS > 20FPS > 15FPS.", "fulltext": "Introduction\nVirtual environments (VEs) are the most sophisticated human-computer\ninterfaces yet developed. The effectiveness of a VE\nmight be defined in terms of enhancement of task performance,\neffectiveness for training, improvement of data comprehension,\netc. A common metric of VE quality is the degree to which the VE\ncreates in the user the subjective illusion of presence a sense of\nbeing in the virtual, as opposed to the real, environment. Since\npresence is a subjective condition, it has most commonly been\nmeasured by self-reporting, either during the VE experience or\nimmediately afterwards by questionnaires. There has been\nvigorous debate as to how to best measure presence [Barfield et al.\n1995; Ellis 1996; Freeman et al. 1998; IJsselsteijn and de Ridder\n1998; Lombard and Ditton 1997; Regenbrecht and Schubert 1997;\nSchubert et al. 1999; Sheridan 1996; Slater 1999; Witmer and\nSinger 1998].\nIn order to study a VE's effectiveness in evoking presence,\nresearchers need a well-designed and verified measure of the\nphenomena. This paper reports our evaluation of three\nphysiological measures heart rate, skin conductance, and skin\ntemperature as alternate operational measures of presence in\nstressful VEs. Since the concept and idea of measuring presence\nare heavily debated, finding a measure that could find wide\nacceptance would be ideal. In that hope, we investigated the\nreliability, validity, sensitivity, and objectivity of each\nphysiological measure.\n\nFigure 1. Side view of the virtual environment. Subjects start\nin the Training Room and later enter the Pit Room.\n1.2. Physiological Reaction as a Surrogate\nMeasure of Presence\nAs VE system and technology designers, we have sought for a\npresence measure that is\nReliable produces repeatable results, both from trial to trial on\nthe same subject and across subjects,\nValid measures subjective presence, or at least correlates with\nwell-established subjective presence measures,\nSensitive discriminates among multiple levels of presence, and\nObjective is well shielded from both subject and experimenter\nbias.\nWe hypothesize that to the degree that a VE seems real, it will\nevoke physiological responses similar to those evoked by the\ncorresponding real environment, and that greater presence will\nevoke a greater response. If so, these responses can serve as\nobjective surrogate measures of subjective presence.\nOf the three physiological measures in our studies, Change\nin Heart Rate performs best. It consistently differentiates among\nconditions with more sensitivity and more statistical power than\nthe other physiological measures, and more than most of the self-reported\nmeasures. It also best correlates with the reported\nmeasures.\n\nFigure 2. View of the 20' pit from the wooden ledge.\nChange in Skin Temperature is less sensitive, less powerful, and\nslower responding than Change in Heart Rate, although its\nresponse curves are similar. It also correlates with reported\nmeasures. 
Our results and the literature on skin temperature\nreactions suggest that Change in Skin Temperature would\ndifferentiate among conditions better if the exposures to the\nstimulus were at least 2 minutes [McMurray 1999; Slonim 1974].\nOurs averaged 1.5 minutes in each experiment.\nChange in\nSkin Conductance Level yielded significant\ndifferentiation in some experiments but was not so consistent as\nChange in Heart Rate. More investigation is needed to establish\nwhether it can reliably differentiate among multiple levels of\npresence.\nSince Change in Heart Rate best followed the hypotheses, the\nremainder of this paper will treat chiefly the results for it. For a\nfull account of all measures, please see [Meehan 2001].\n1.3. Our Environment and Measures\nWe use a derivative of the compelling VE reported by Usoh et al.\n[1999]. Figure 1 shows the environment: a Training Room, quite\nordinary, and an adjacent Pit Room, with an unguarded hole in the\nfloor leading to a room 20 ft. below. On the upper level the Pit\nRoom is bordered with a 2-foot wide walkway. The 18x32 foot, 2-room\nvirtual space fits entirely within the working space of our\nlab's wide-area ceiling tracker. Users, equipped with a head-tracked\nstereoscopic head-mounted display, practice walking about\nand picking up and placing objects in the Training Room. Then\nthey are told to carry an object into the next room and place it at a\ndesignated spot. The door opens, and they walk through it to an\nunexpected hazard, a virtual drop of 20 ft. if they move off the\nwalkway. Below is a furnished Living Room (Figure 2).\nUsers report feeling frightened. Some report vertigo. Some will\nnot walk out on the ledge and ask to stop the experiment or demo\nat the doorway. A few boldly walk out over the hole, as if there\nwere a solid glass floor. For most of us, doing that, if we can,\nrequires conscious mustering of will.\nThis environment, with its ability to elicit a fear reaction in users,\nenables investigation of physiological reaction as a measure of\npresence. If so strong a stress-inducing VE does not produce\nsignificant physiological reactions, a less stressful VE won't. This\ninvestigation is a first step. Follow-on research should investigate\nwhether less stressful environments also elicit statistically\nsignificant physiological reactions.\nThis remainder section will discuss the physiological measures we\ntested and the reported measures we used to evaluate validity.\n1.3.1. The Physiological Measures\nAs stated above, we investigated three physiological metrics that\nmeasure stress in real environments [Andreassi 1995;\nGuyton 1986; Weiderhold et al. 1998]:\nChange in heart rate ( Heart Rate). The heart beats faster in stress.\nChange in skin conductance ( Skin Conductance Level). The\nskin of the palm sweats more in stress, independently of\ntemperature, so its conductance rises.\nChange in skin temperature ( Skin Temperature). Circulation\nslows in the extremities in stress, causing skin temperature to drop.\nEach of these measures was constructed to increase when the\nphysiological reaction to the Pit Room was greater.\nHeart Rate = mean HR\nPit Room\nmean HR\nTraining Room.\n\nSkin Conductance = mean SC\nPit Room\nmean SC\nTraining Room\n\nSkin Temperature = mean ST\nTraining Room\nmean ST\nPit Room\n\nWe first measured heart rate with a convenient finger-mounted\nblood-pulse plethysmograph, but the noise generated by the sensor\nmoving on the finger made the signal unstable and unusable. 
We\nthen went to more cumbersome chest-attached three-electrode\nelectrocardiography (ECG). This gave a good signal. Skin\nconductivity and skin temperature were successfully measured on\nthe fingers. Once connected, users reported forgetting about the\nphysiological sensors they did not cause breaks in presence\nduring the experiments. Figure 3 shows a subject wearing the\nphysiological monitoring equipment.\n1.3.2. The Reported Measures\nReported Presence. We used the University College London\n(UCL) questionnaire [Slater et al. 1995; Usoh et al. 1999]. The\nUCL questionnaire contains seven questions that measure presence\n(Reported Presence), three questions that measure behavioral\npresence (Reported Behavioral Presence) does the user act as if\nin a similar real environment and three that measure ease of\nlocomotion (Ease of Locomotion). Responses for each question\nare on a scale of 1 to 7. Reported Ease of Locomotion was\nadministered for consistency with earlier experiments, but we do\nnot report on it in this paper.\n\nFigure 3. Subject wearing HMD and physiological monitoring\nequipment in the \"Pit Room\".\n646\n\nEven though each question is rated on a scale of 1-7, Slater et al.\nuse it only to yield a High-Presence/ Low-Presence result. A\njudgment must be made as to the high-low threshold. Slater et al.\nhave investigated the use of 6 and 7 as \"high\" responses [\n6] and\nthe use of 5, 6, and 7 as \"high\" responses [\n5] as well as other\nconstructions: addition of raw scores, and a combination based on\nprincipal-components analysis. They have found that [\n6] better\nfollowed conditions [Slater et al. 1994], and, therefore they chose\nthat construction. We found that the [\n5] construction better\nfollows presence conditions but has lower correlations with our\nphysiological measures. Therefore, in order to best follow the\noriginal intention of the measures, irrespective of the lower\ncorrelations with our measures, we choose the [\n5] construction.\nOn the study for which data is published, Slater's subjects rarely\n(<10%) reported \"5\" values; over 25% of our subjects did. One\nexplanation for this difference in subjects' reporting may be that\nuniversity students today expect more technically of a VE than\nthey did several years ago and, therefore, are more likely to report\nlower values (5s) even for the most presence-inducing VEs.\nReported Behavioral Presence. Three questions asked subjects if\nthey behaved as if present when in the VE. The count of high\nscores [\n5] on these questions made up the Reported Behavioral\nPresence measure.\n\nMultiple\nExposures\nPassive\nHaptics\nFrame\nRate\nPresence in\nVEs\nDoes\npresence\ndecrease\nwith exposures?\nPassive\nHaptics\nincrease\npresence?\nHigher Frame\nRate increases\npresence?\nReliability\nof Measures\nAre repeated\nmeasures\nhighly\ncorrelated?\nRegardless of condition, will the\nPit Room evoke similar\nphysiological reactions on every\nexposure?\nValidity\nDo results correlate with reported measures?\nSensitivity\nof Measures\nDo measures\ndetect any\neffect?\nDo measures\ndistinguish\nbetween 2\nconditions?\nDo measures\ndistinguish\namong 4\nconditions?\nTable 1. Questions investigated in each study.\n1.4. Methods and Procedures\n1.4.1. Experimental procedures.\nWe conducted three experiments: Effects of Multiple Exposures on\nPresence (Multiple Exposures), Effects of Passive Haptics on\nPresence (Passive Haptics), and Effects of Frame Rate on Presence\n(Frame Rate). 
Each of the three studies investigated some\ninteresting aspect of VEs and the properties of the physiological\nmeasures themselves. Table 1 summarizes all the questions\nstudied. For all studies we excluded subjects who had previously\nexperienced VEs more than three times. The experiments were\nalso limited to subjects who were ambulatory, could use stereopsis\nfor depth perception, had no history of epilepsy or seizure, were\nnot overly prone to motion sickness, were in their usual state of\ngood physical fitness at the time of the experiment, and were\ncomfortable with the equipment.\nMultiple Exposures: 10 subjects (average age 24.4;\n= 8.2; 7\nfemale, 3 male) were trained to pick up books and move about in\nthe Training Room at which time a physiological baseline was\ntaken. Subjects then carried a virtual book from the Training\nRoom and placed it on a virtual chair on the far side of the Pit\nRoom. After that, they returned to the Training Room. The\nsubjects performed this task three times per day on four separate\ndays. We investigated whether the presence-evoking power of a\nVE declines with multiple exposures. Heart Rate was not\nsuccessfully measured in this study due to problems with the\nsensor.\n\nFigure 4. Subject in slippers with toes over 1.5-inch ledge.\nPassive Haptics: 52 subjects (average age 21.4;\n= 4.3; 16\nfemale, 36 male) reported on two days. Subjects experienced the\nVE with the 1.5-inch wooden ledge on one of their two days. The\n1.5-inch height was selected so that the edge-probing foot did not\nnormally contact the real laboratory floor where the virtual pit was\nseen. On their other day, subjects experienced the VE without the\nledge. Subjects were counterbalanced as to the order of\npresentation of the physical ledge. Subjects performed all\nexposures to the VE wearing only thin sock-like slippers (Figure\n4). The task was the same as in the Multiple Exposures study\nexcept subjects were instructed to walk to the edge of the wooden\nplatform, place their toes over the edge, and count to ten before\nthey proceeded to the chair on the far side of the room to drop the\nbook. We investigated whether the 1.5-inch wooden ledge\nincreased the presence-evoking power of the VE.\nFrame Rate: 33 participants (average age 22.3;\n= 3.6; 8 female,\n25 male) entered the VE four times on one day and were presented\nthe same VE with a different frame rate each time. The four frame\nrates were 10, 15, 20, and 30 frames-per-second (FPS). Subjects\nwere counterbalanced as to the order of presentation of the four\n\nAll exposures\nFirst Exposure Only (Between Subjects)\nStudy\nVariable\nMean\nP\n% > 0\nN\nMean\nP\n% > 0\nN\nSkin Conductance\n2.3\nmSiemens\n< .001\n99% 112\n2.9\nmSiemens\n.002\n100% 9\nMultiple\nExposures\nSkin Temperature\n0.6\n\no\nF\n< .001\n77% 94\n1.2\n\no\nF\n.015\n100% 7\nHeart Rate\n6.3\nBPM\n< .001\n89% 92\n6.2\nBPM\n< .001\n85% 46\nSkin Conductance\n4.8\nmSiemens\n< .001\n100% 100\n4.7\nmSiemens\n< .001\n100% 50\nPassive Haptics\nSkin Temperature\n1.1\n\no\nF\n< .001\n90% 98\n1.1\n\no\nF\n< .001\n94% 49\nHeart Rate\n6. 3\nBPM\n< .001\n91% 132\n8.1\nBPM\n< .001\n91% 33\nSkin Conductance\n2.0\nmSiemens\n< .001\n87% 132\n2.6\nmSiemens\n< .001\n97% 33\nFrame Rate\nSkin Temperature\n0.8\n\no\nF\n< .001\n100% 132\n1.0\n\no\nF\n< .001\n100% 33\nTable 2. Summary of means and significance of differences (\n)\nbetween Training Room and Pit Room. 
The mean, P-value for the\none-sample t-test, percentage of times the measure was > 0, and number of samples are shown. The left side shows the means and\nsignificances of all exposures. The right side shows these for only subjects' first exposures. The greater mean is shown in bold.\n647\n\nframe rates. Subjects were trained to pick up and drop blocks in\nthe Training Room and then carried a red block to the Pit Room\nand dropped it on a red X-target on the floor of the Living Room, a\nprocedural improvement that forced subjects to look down into the\npit. They then plucked from the air two other colored blocks\nfloating in the Pit Room and dropped each on the correspondingly-colored\nXs on the floor of the Living Room. The X-targets and the\ngreen and blue blocks are visible in Figures 1 and 2. In this study,\nwe investigated the effect of several different frame rates on\npresence and hypothesized that the higher the frame rate, the\ngreater the presence evoked.\nIn all three studies, the amount of physical activity (walking,\nmanipulating objects) was approximately balanced between the Pit\nand Training Rooms. This lessened any difference between the\ntwo rooms in physiological reaction due to physical activity.\n1.4.2. Statistics\nIn this paper, we define statistical significance at the 5% level, i.e.\nP < 0.050. Findings significant at the 5% level are discussed as\n\"demonstrated\" or \"shown\". To find the best statistical model for\neach measure, we used Stepwise Selection and Elimination as\ndescribed by Kleinbaum et al. [1998]. As they suggest, to account\nbetter (statistically) for variation in the dependent variable (e.g.,\nHeart Rate), we included all variables in the statistical models\nthat were significant at the P < 0.100 level.\nThe analysis of differences in physiological reaction between the\nPit Room and the Training Room for all studies (Table 2) was\nperformed with a One-Sample T-Test. The correlations among\nmeasures were performed using the Bivariate Pearson Correlation.\nWe analyzed order effects and the effects on presence of passive\nhaptics and frame rate with the Univariate General Linear Model,\nusing the repeated measure technique described in the SAS 6.0\nManual [SAS 1990]. This technique allows one to investigate the\neffect of the condition while taking into account inter-subject\nvariation, order effects, and the effects of factors that change from\nexposure to exposure such as loss of balance on the 1.5-inch ledge.\nSection 2 details our evaluation of physiological measures as a\nsurrogate for presence. In Section 3, we analyze physiological\nreactions as between-subject measures. In Section 4, we\nsummarize the results as they pertain to interesting aspects of VEs.\nPhysiological measures of presence\nIn this section, we discuss the reliability, validity, sensitivity, and\nobjectivity of the physiological measures.\n2.1. Reliability\nReliability is \"the extent to which the same test applied on different\noccasions ... yields the same result\" [Sutherland 1996].\nSpecifically, we wanted to know whether the virtual environment\nwould consistently evoke similar physiological reactions as the\nsubject entered and remained in the Pit Room on several occasions.\nInconsistency could manifest itself as either a systematic increase\nor decrease in reactions or in uncorrelated measures for repeated\nexposure to the same VE. In the Multiple Exposures study the\ncondition was the same each time, so this was our purest measure\nof reliability. 
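The statistical tests of Section 1.4.2 are standard; as a minimal sketch (the arrays below are hypothetical placeholders, not data from our studies), the per-exposure room difference and its agreement with a reported measure can be checked with SciPy as follows:

import numpy as np
from scipy import stats

# Hypothetical per-exposure values: Delta Heart Rate (Pit Room minus Training
# Room, in beats/minute) and the count of "high" questionnaire responses.
delta_hr = np.array([5.1, 7.9, 3.2, 8.4, 6.0, 4.7, 9.3, 2.8])
reported_presence = np.array([4, 6, 3, 6, 5, 4, 7, 3])

# One-sample t-test of the room difference against a population mean of zero.
t_stat, p_value = stats.ttest_1samp(delta_hr, popmean=0.0)

# Bivariate Pearson correlation between the physiological and reported measures.
r, p_corr = stats.pearsonr(delta_hr, reported_presence)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}; r = {r:.2f}, p = {p_corr:.3f}")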
We also hypothesized that in the Passive Haptics and\nFrame Rate studies, regardless of condition, that the Pit Room\nwould also evoke similar physiological reactions on every\nexposure. We hypothesized that simply being exposed to the Pit\nRoom would cause a greater physiological reaction than the\ndifference between \"high\" and \"low\" presence conditions.\n\nTherefore, all three studies provide information on reliability.\nAs we hypothesized, the environment consistently evoked\nphysiological reactions over multiple exposures to the Pit Room.\nWhen analyzing the data from all exposures, we found there were\nsignificant physiological reactions to the Pit Room: heart rate and\nskin conductance were significantly higher and skin temperature\nwas significantly lower in the Pit Room in all three studies. Heart\nrate was higher in the Pit Room for 90% of the exposures to the\nVE, skin conductance was higher for nearly 95%, and skin\ntemperature was lower for 90%. Table 2 shows the mean\ndifference, t-test, percentage of occurrences where the measure was\nabove zero, and the total count for each physiological measure for\neach study. It shows results both for all exposures taken together,\nwhich is the approach discussed for most of the paper, and for\nanalysis of the first exposure only, which we discuss in Section 4.\nWe also wanted to know whether the physiological reactions to the\nenvironment would diminish over multiple exposures. Since our\nhypotheses relied on presence in the VE evoking a stress reaction\nover multiple exposures (2-12 exposures), we wanted to know\nwhether physiological reactions to the VE would drop to zero or\nbecome unusably small due to habituation. In fact,\nSkin\nTemperature, Reported Presence, Reported Behavioral Presence,\nand\nHeart Rate each decreased with multiple exposures in every\nstudy (although this effect was not always statistically significant),\nand\nSkin Conductance decreased in all but one study. None\ndecreased to zero, though, even after twelve exposures to the VE.\nTable 3 shows the significant order effects.\nA decrease in physiological reaction over multiple exposures\nwould not necessarily weaken validity, since the literature shows\nthat habituation diminishes the stress reactions to real heights and\nother stressors [Abelson and Curtis, 1989; Andreassi 1995]. Since,\nhowever, the reported presence measures, not just the physiological\nstress measures, decrease over multiple exposures, the decreases\nmay not be due to habituation to the stressor; there may also be, as\nHeeter hypothesized, a decrease in a VE's ability to evoke\npresence as novelty wears off [Heeter 1992].\nOrienting Effect. In general, each measure decreased after the\nfirst exposure. Moreover, for each measure except\nHeart Rate,\nthere was a significant decrease after the first exposure in at least\none of the studies (see Table 3). For physiological responses, this\nis called an orienting effect a higher physiological reaction when\none sees something novel [Andreassi 1995]. Though this term\ntraditionally refers to physiological reactions, we will also use the\nterm for the initial spike in the reported measures.\nWe attempted, with only partial success, to overcome the orienting\neffect by exposing subjects to the environment once as part of their\norientation to the experimental setup and prior to the data-gathering\nportion of the experiment. 
In the Passive Haptics and\nFrame Rate studies, subjects entered the VE for approximately two\nOrder Effects\nHeart Rate\n(\nBPM)\nSkin Conductance\n(\nmSiemens)\nSkin Temperature\n(\n\no\nF)\nReported Presence\n(Count \"high\")\nReported Behavioral Presence\n(Count \"high\")\nMultiple Exposures\nNA\n-0.7 (1\nst\n)\n-0.9 (1\nst\n)\n\n-0.7 (1\nst\n)\nPassive Haptics\n- -\n-0.8 (1\nst\n) -0.4\n(1\nst\n)\nFrame Rate\n-1.0 (Task)\n-0.8 (1\nst\n) -0.3\n(1\nst\n)\n\n-0.2 (Task)\nTable 3. Significant order effects for each measure in each study. \"(1\nst\n)\" indicates a decrease after the first exposure only. \"(Task)\"\nindicates a decrease over tasks on the same day. There was an order effect for each measure in at least one study. NA is \"Not\navailable\". Significant results are listed at the P < 0.050 level (bold) and P < 0.100 (normal text). Full details given in [Meehan 2001].\n648\n\nminutes and were shown both virtual rooms before the experiment\nstarted. These pre-exposures reduced but did not eliminate the\norienting effects.\n2.2. Validity\nValidity is \"the extent to which a test or experiment genuinely\nmeasures what it purports to measure\" [Sutherland 1996]. The\nconcept of presence has been operationalized in questionnaires so\nthe validity of the physiological measures can be established by\ninvestigating how well the physiological reactions correlate with\none or more of the questionnaire-based measures of presence. We\ninvestigated their correlations with two such measures: Reported\nPresence and Reported Behavioral Presence.\nReported Presence. Of the physiological measures,\nHeart Rate\ncorrelated best with the Reported Presence. There was a\nsignificant correlation in the Frame Rate study (corr. = 0.265,\nP < 0.005) and no correlation (corr. = 0.034, P = 0.743) in the\nPassive Haptics study. In the Multiple Exposures study, where\nHeart Rate was not available, Skin Conductance had the highest\ncorrelation with Reported Presence (corr. = 0.245, P < 0.010).\nReported Behavioral Presence.\nHeart Rate had the highest\ncorrelation, and a significant one, with Reported Behavioral\nPresence in the Frame Rate study (corr. = 0.192, P < 0.050), and\nthere was no correlation between the two (corr. = 0.004, P = 0.972)\nin the Passive Haptics study. In the Multiple Exposures study,\nwhere\nHeart Rate was not measured, Skin Conductance had the\nhighest correlation with reported behavioral presence (corr. =\n0.290, P < 0.005).\nThe correlations of the physiological measures with the reported\nmeasures give some support to their validities. The validity of\nHeart Rate appears to be better established by its correlation with\nthe well-established reported measures. There was also some\nsupport for the validity of\nSkin Conductance from its correlation\nwith reported measures.\nFollowing hypothesized relationships. According to Singleton,\nthe validation process includes \"examining the theory underlying\nthe concept being measured,\" and \"the more evidence that supports\nthe hypothesized relationships [between the measure and the\nunderlying concept], the greater one's confidence that a particular\noperational definition is a valid measure of the concept\"\n[Singleton et al. 1993]. We hypothesized that presence should\nincrease with frame rate and with the inclusion of the 1.5-inch\nwooden ledge, since each of these conditions provides increased\nsensory stimulation fidelity. 
As presented in the next section, our\nphysiological measures did increase with frame rate and with\ninclusion of the 1.5-inch wooden ledge. This helps validate the\nphysiological reactions as measures of presence.\n2.3. Sensitivity and multi-level sensitivity\nSensitivity is \"the likelihood that an effect, if present, will be\ndetected\" [Lipsey 1998]. The fact that the physiological measures\nreliably distinguished between subjects reaction in the Pit Room\nversus the Training Room in every study assured us of at least a\nminimal sensitivity. For example, heart rate increased an average\nacross all conditions of 6.3 beats / minute (BPM) in the Pit Room\n(P < 0.001) compared to the Training Room in both the Passive\nHaptics and Frame Rate studies. See Table 2 for a full account of\nsensitivity of physiological measures to the difference between the\ntwo rooms.\nAcrophobic patients', when climbing to the second story of a fire\nescape (with a handrail), waiting one minute, and looking down,\naveraged an increase in heart rate of 13.4 BPM\n[Emmelkamp and Felten 1985]. Our subjects were non-phobic,\nand our height was virtual; so, we would expect, and did find, our\nsubjects' heart rate reactions to be lower but in the same direction.\nMulti-level sensitivity. For guiding VE technological\ndevelopment and for better understanding of the psychological\nphenomena of VEs, we need a measure that reliably yields a higher\nvalue as a VE is improved along some \"goodness\" dimension, i.e.,\nis sensitive to multiple condition values. We distinguish this from\nsensitivity as described above and call this multi-level sensitivity.\nThe Passive Haptics study provided us some evidence of the\nmeasures' ability to discriminate between two \"high presence\"\nsituations. We have informally observed that walking into the Pit\nRoom causes a strong reaction in users, and this reaction seems\ngreater in magnitude than the differences in reaction to the Pit\nRoom between any two experimental conditions (e.g., with and\nwithout the 1.5-inch wooden ledge). Therefore, we expected the\ndifferences in reaction among the conditions to be less than the\ndifferences between the two rooms. For example, in Passive\nHaptics, we expected there to be a significant difference in the\nphysiological measures between the two conditions (with and\nwithout the 1.5-inch wooden ledge), but expected it to be less than\nthe difference between the Training Room and Pit Room in the\n\"lower\" presence condition (without the 1.5-inch wooden ledge).\nFor\nHeart Rate, we did find a significant difference between the\ntwo conditions of 2.7 BPM (P < 0.050), and it was less than the\ninter-room difference for the without-ledge condition: 4.9 BPM.\nSee Figure 5. Figure 6 shows that the differences among the\nconditions in the FR study are smaller in magnitude as compared to\nthe differences between the two rooms.\n0\n2\n4\n6\n8\n10\nNo Physical Ledge\nWith Physical Ledge\nChange in beats/minute\n2.7, P<0.050\n4.9,\nP <0.001\n7.6,\nP <0.001\nFigure 5.\nHeart Rate in Passive Haptics study.\nIn the Passive Haptics study, we investigated the multi-level\nsensitivity of the measures by testing whether presence was\nsignificantly higher with the 1.5-inch wooden ledge. Presence as\nmeasured by each of\nHeart Rate (2.7 BPM; P < 0.050), Skin\nConductance (0.8 mSiemens; P < 0.050), and Reported Behavioral\nPresence (0.5 more \"high\" responses; P < 0.005) was significantly\nhigher with the wooden ledge. 
Reported Presence had a strong\ntrend in the same direction (0.5 more \"high\" responses; P = 0.060).\nIn the Frame Rate study, we investigated the multi-level sensitivity\nof the measures by testing whether presence increased significantly\nas graphic frame update rates increased. We hypothesized that\nphysiological reactions would increase monotonically with frame\nrates of 10, 15, 20, and 30 FPS. They did not do exactly that (see\nFigure 6). During the 10 FPS condition, there was an anomalous\nreaction for all of the physiological measures and for Reported\nBehavioral Presence. That is, at 10 FPS, subjects had higher\nphysiological reaction and reported more behavioral presence. We\nbelieve that this reaction at 10 FPS was due to discomfort, added\nlag, and reduced temporal fidelity while in the ostensibly\ndangerous situation of walking next to a 20-foot pit\n[Meehan 2001].\n649\n\nWe also observed that subjects often lost their balance while trying\nto inch to the edge of the wooden platform at this low frame rate;\ntheir heart rate jumped an average of 3.5 BPM each time they lost\ntheir balance (P < 0.050). Statistically controlling for these Loss of\nBalance incidents improved the significance of the statistical model\nfor\nHeart Rate and brought the patterns of responses closer to the\nhypothesized monotonic increase in presence with frame rate but\ndid not completely account for the increased physiological reaction\nat 10 FPS. Loss of Balance was not significant in any other model.\n0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n10\n15\n20\n30\nFrame Rate\nChange in beats/minute\n\nFigure 6. heart rate, after correcting for Loss of Balance,\nat 10, 15, 20, and 30 frames per second.\nBeyond 10 FPS, Heart Rate followed the hypothesis. After we\nstatistically controlled for Loss of Balance, Heart Rate\nsignificantly increased between 15 FPS and 30 FPS (3.2 BPM;\nP < 0.005) and between 15 FPS and 20 FPS (2.4 BPM; P < 0.050).\nThere was also a non-significant increase between 20 FPS and 30\nFPS (0.7 BPM; P = 0.483) and a non-significant decrease between\n10 FPS and 15 FPS (1.6 BPM; P = 0.134). Reported Presence, and\nReported Behavioral Presence also increased with frame rate from\n15-20-30 FPS, but with less distinguishing power.\nThese findings support the multi-level sensitivity of\nHeart Rate.\n2.4. Objectivity\nThe measure properties of reliability, validity, and multi-level\nsensitivity are established quantitatively. Objectivity can only be\nargued logically. We argue that physiological measures are\ninherently better shielded from both subject bias and experimenter\nbias than are either reported measures or measures based on\nbehavior observations. Reported measures are liable to subject\nbias the subject reporting what he believes the experimenter\nwants. Post-experiment questionnaires are also vulnerable to\ninaccurate recollection and to modification of impressions garnered\nearly in a run by impressions from later. Having subjects report\nduring the session, whether by voice report or by hand-held\ninstrument, intrudes on the very presence illusion one is trying to\nmeasure. Behavioral measures, while not intrusive, are subject to\nbias on the part of the experimenters who score the behaviors.\nPhysiological measures, on the other hand, are much harder for\nsubjects to affect, especially with no biofeedback. These measures\nare not liable to experimenter bias, if instructions given to the\nparticipants are properly limited and uniform. 
We read instructions\nfrom a script in the Multiple Exposures study. We improved our\nprocedure in the later Passive Haptics and Frame Rate studies by\nplaying instructions from a compact disk player located in the real\nlaboratory and represented by a virtual radio in the VE.\n2.5. Summary and discussion\nThe data presented here show that physiological reactions can be\nused as reliable, valid, multi-level sensitive, and objective\nmeasures of presence in stressful VEs. Of the physiological\nmeasures,\nHeart Rate performed the best. There was also some\nsupport for\nSkin Conductance.\nHeart Rate significantly differentiated between the Training\nRoom and the Pit Room, and although this reaction faded over\nmultiple exposures, it never decreased to zero. It correlated with\nthe well-established reported measure, the UCL questionnaire. It\ndistinguished between the presence and absence of passive haptics\nand among frame rates at and above 15 FPS. As we argued above,\nit is objective. In total, it satisfies all of the requirements for a\nreliable, valid, multi-level sensitive, and objective measure of\npresence in a stressful VE.\nSkin Conductance has some, but not all, of the properties we\ndesire in a measure of presence. In particular, it did not\ndifferentiate among frame rates. We do not have a theory as to\nwhy.\nAlthough,\nHeart Rate satisfied the requirements for a presence\nmeasure for our VE, which evokes a strong reaction, it may not for\nless stressful VEs. To determine whether physiological reaction\ncan more generally measure presence, a wider range of VEs must\nbe tested, including less stressful, non-stressful, and relaxing\nenvironments. Investigation is currently under way to look at\nphysiological reaction in relaxing 3D Television environments\n[Dillon et al. 2001].\nThe height reaction elicited by our VE could be due to vertigo,\nfear, or other innate or learned response. The reactions are well\nknown in the literature and manifest as increased heart rate and\nskin conductance and decreased skin temperature [Andreassi 1995;\nGuyton 1986]. We hypothesized that the more present a user feels\nin our stressful environment, the more physiological reaction the\nuser will exhibit. What causes this higher presence and higher\nphysiological reaction? Is it due to a more realistic flow of visual\ninformation? Is it due to more coherence between the visual and\nhaptic information? Is it due to the improved visual realism? All\nof these are likely to improve presence. We cannot, however,\nanswer these questions definitively. We can say, though, that we\nhave empirically shown that physiological reaction and reported\npresence are both higher when we present a \"higher presence\" VE.\nWhatever it is that causes the higher reported presence and\nphysiological reaction, it causes more as we improve the VE.\nAn additional desirable aspect of a measure is ease of use in the\nexperimental setting. We did not record the time needed for each\nmeasure, but after running many subjects we can say with some\nconfidence that use of the physiological monitoring and of the\npresence questionnaire each added approximately the same amount\nof time to the experiment. It took about five minutes per exposure\nto put on and take off the physiological sensors. It took about an\nextra minute at the beginning and end of each set of exposures to\nput on and take off the ECG sensor it was left on between\nexposures on the same day. 
It took subjects about five minutes to\nfill out the UCL Presence Questionnaire. It took some training for\nexperimenters to learn the proper placement of the physiological\nequipment on the hands and chest of the subject thirty minutes\nwould probably be sufficient.\nAnother aspect of ease of use is the amount of difficulty\nparticipants have with the measure and to what extent the measure,\nif concurrent with an experimental task, interferes with the task.\nNo subjects reported difficulties with the questionnaires. Only\n650\n\nabout one in ten subjects reported noticing the physiological\nmonitoring equipment on the hands during the VE exposures. Our\nexperiment, though, was designed to use only the right hand,\nkeeping the sensor-laden left hand free from necessary activity.\nNo subjects reported noticing the ECG sensor once it was attached\nto the chest. In fact, many subjects reported forgetting about the\nECG electrodes when prompted to take them off at the end of the\nday. There are groups investigating less cumbersome equipment,\nwhich would probably improve ease of use, including a\nphysiological monitoring system that subjects wear like a shirt\n[Cowings et al. 2001]. Overall, questionnaires and physiological\nmonitoring were both easy to use and non-intrusive.\nPhysiological reactions as between-subjects measures\nWe conducted all of the studies as within-subjects to avoid the\nvariance due to natural human differences. That is, each subject\nexperienced all of the conditions for the study in which she\nparticipated. This allowed us to look at relative differences in\nsubject reaction among conditions and to overcome the differences\namong subjects in reporting and physiological reaction.\nThe UCL questionnaire has been used successfully between-subjects\n[Usoh et al. 1999]. We suspected, however, that\nphysiological reaction would not perform as well if taken between-subjects\n. We expected the variance among subjects would mask,\nat least in part, the differences in physiological reaction evoked by\nthe different conditions. We investigated this hypotheses by\nanalyzing the data using only the first task for each subject\neliminating order effects and treating the reduced data sets as\nbetween-subjects experiments. That is, we treat each experiment\nas if only the first task for each subject was run. This means that\nthe analysis uses only 10 data points (10 subjects first exposure\nonly) for the Multiple Exposures study, 52 data points for the\nPassive Haptics study, and 33 data points for the Frame Rate study.\nReliability between-subjects: Physiological reaction in the Pit\nRoom. Even between subjects, we expected that there would be a\nconsistent physiological reaction to the Pit Room, since we\nexpected such a reaction for every exposure to the VE. We\nexpected the significance to be lower, however, because of the\nreduced size of the data set. We found exactly that. The right half\nof Table 2 shows the values of the physiological measures\naveraged across conditions for the between-subjects analysis. As\ncompared to the full data set, the between-subjects data have lower\nsignificance values, but subjects still have strong physiological\nreactions to the Pit Room. Table 2 demonstrates that the\nphysiological orienting effects caused the averages for the first\nexposures to be higher than for the full data set.\nValidity between-subjects: Correlation with established\nmeasures. 
We expected correlations with the reported measures\nto be lower when taken between subjects since there were fewer\ndata points and individual differences in physiological reaction and\nreporting would confound the correlations. This was the case. No\nphysiological measure correlated significantly with any reported\nmeasure when analyzing between-subjects.\nMulti-level sensitivity between-subjects: Differentiating among\npresence conditions. We expected inter-subject variation in\nphysiological reaction to mask the differences in physiological\nreactions evoked by the presence conditions (e.g., various frame\nrates). Contrary to this expectation, however, we found strong\ntrends in the physiological measures among conditions in both the\nPassive Haptics and Frame Rate studies. (The condition was not\nvaried in the Multiple Exposures study.)\nIn the Passive Haptics study, both\nHeart Rate and Skin\nConductance both varied in the expected direction non-significantly\n(3.3 BPM, P = 0.097; 1.0 mSiemens, P = 0.137,\nrespectively).\nIn the Frame Rate study,\nHeart Rate followed hypothesized\npatterns, but\nSkin Conductance did not. After the anomalous\nreaction at 10 FPS (as in full data set compare Figures 6 and 7),\nHeart Rate differentiated among presence conditions: at 30 FPS it\nwas higher than at 15 FPS, and this difference was nearly\nsignificant (7.2 BPM; P = 0.054).\nOverall,\nHeart Rate shows promise as a between-subjects\nmeasure of presence. Though it did not correlate well with the\nreported measures (between-subjects), it did differentiate among\nthe conditions with some statistical power in Passive Haptics and\nFrame Rate.\nSkin Conductance did not show as much promise as\na between-subjects measure. For more discussion of physiological\nreactions as between-subjects measures of presence, see\n[Meehan 2001].\n0\n2\n4\n6\n8\n10\n12\n14\n10\n15\n20\n30\nFrame Rate\nChange in beat\ns/minut\ne\nFigure 7. Between-subjects analysis:\nHeart Rate.\nVE Effectiveness results\nAbove we described the experiments as they related to the testing\nof the physiological presence measures, below we discuss each\nexperiment with respect to the aspect of VEs it investigated.\nEffect of Multiple Exposures on Presence. As described in\nSection 1.4.1, ten users go through the same VE twelve times (over\nfour days) in order to study whether the presence inducing power\nof a VE declines, or becomes unusably small, over multiple\nexposures. We did find significant decreases in each presence\nmeasure (reported and physiological) in either this experiment or\none of the subsequent two experiments (see Table 3). However,\nnone of the measures decreased to zero nor did any become\nunusably small. The findings support our hypothesis that all\npresence measures decrease over multiple exposures to the same\nVE, but not to zero.\nEffect of Passive Haptics on Presence. Our hypothesis was that\nsupplementing a visual-aural VE with even rudimentary, low-fidelity\npassive haptics cues significantly increases presence. This\nexperiment was only one of a set of studies investigating the\npassive haptics hypothesis. The detailed design, results, and\ndiscussion for the set are reported elsewhere [Insko 2001].\nWe found significant support for the hypothesis in that, with the\ninclusion of the 1.5-inch ledge, presence as measured by Heart\nRate, Reported Behavioral Presence, and\nSkin Conductance was\nsignificantly higher at the P < 0.05 level. 
Reported Presence also\nhad a strong trend (P < 0.10) in the same direction.\n651\n\nEffect of Frame Rate on Presence. Our hypothesis was that as\nframe rate increases from 10, 15, 20, 30 frames/second, presence\nincreases. For frame rates of 15 frames/second and above, the\nhypothesis was largely confirmed. It was confirmed with statistical\nsignificance for 15 to 20 FPS and 15 to 30 FPS. 20 to 30 FPS\nthough not statistically significant was in the same direction. 10\nFPS gave anomalous results on all measures except Reported\nPresence, which increased monotonically with frame rate with no\nstatistical significance.\nFuture Work\nGiven a compelling VE and a sensitive, quantitative presence\nmeasure, the obvious strategy is to degrade quantitative VE quality\nparameters in order to answer the questions: What makes a VE\ncompelling? What are the combinations of minimum system\ncharacteristics to achieve this?\nFor example, we would like to study the effect of\nLatency\nSelf-avatar fidelity\nAural localization\nVisual Detail\nLighting Realism\nRealistic physics in interactions with objects\nInteractions with other people or agents\nThen we hope to begin to establish trade-offs for presence evoked:\nIs it more important to have latency below 50 ms or frame rate\nabove 20 FPS?\nAdditionally, we must eliminate the cables that tether subjects to\nthe monitoring, tracking, and rendering equipment. Our subjects\nreported this encumbrance as the greatest cause of breaks in\npresence.\n\nAcknowledgements\nWe would like to thank the University of North Carolina (UNC)\nGraduate School, the Link Foundation, and the National Institutes\nof Health National Center for Research Resources (Grant Number\nP41 RR 02170) for funding this project. We would like to thank\nthe members of the Effective Virtual Environments group, the\nUNC Computer Science Department, and Dr. McMurray of the\nUNC Applied Physiology Department. Without their hard work,\nnone of this research would have been possible. We would like to\nthank Drs. Slater, Usoh, and Steed of the University College of\nLondon who built much of the foundation for this work. We\nwould also like to thank the reviewers for their thoughtful\ncomments and suggestions.\n\n\n\nReferences\nAbelson, J. L. and G. C. Curtis (1989). Cardiac and neuroendocrine\nresponses to exposure therapy in height phobics. Behavior Research and\nTherapy, 27(5): 561-567.\nAndreassi, J. L. (1995). Psychophysiology: Human behavior and\nphysiological response. Hillsdale, N.J., Lawrence Erlbaum Associates.\nBarfield, W., T. Sheridan, D. Zeltzer and M. Slater (1995). Presence and\nperformance within virtual environments. In W. Barfield and T.\nFurness, Eds., Virtual environments and advanced interface design.\nLondon, Oxford University Press.\nCowings, P., S. Jensen, D. Bergner and W. Toscano (2001). A lightweight\nambulatory physiological monitoring system. NASA Ames, California.\nDillon, C., E. Keogh, J. Freeman and J. Davidoff (2001). Presence: Is your\nheart in it? 4th Int. Wkshp. on Presence, Philadelphia.\nEllis, S. R. (1996). Presence of mind: A reaction to Thomas Sheridan's\n"Further musings on the psychophysics of presence". Presence:\nTeleoperators and Virtual Environments, 5(2): 247-259.\nEmmelkamp, P. and M. Felten (1985). The process of exposure in vivo:\ncognitive and physiological changes during treatment of acrophobia.\nBehavior Research and Therapy, 23(2): 219.\nFreeman, J., S. E. Avons, D. Pearson, D. Harrison and N. 
Lodge (1998).\nBehavioral realism as a metric of presence. 1st Int. Wkshp. on Presence.\nGuyton, A. C. (1986). Basic characteristics of the sympathetic and\nparasympathetic function. In Textbook of Medical Physiology, 688-697.\nPhiladelphia, W.B. Saunders Company.\nHeeter, C. (1992). Being there: The subjective experience of presence.\nPresence: Teleoperators and Virtual Environments, 1: 262-271.\nIJsselsteijn, W. A. and H. d. Ridder (1998). Measuring temporal variations\nin presence. 1st Int. Wkshp. on Presence.\nB. Insko (2001). Passive haptics significantly enhance virtual\nenvironments, Doctoral Dissertation. Computer Science. University of\nNorth Carolina, Chapel Hill, NC, USA.\nKleinbaum, D., L. Kupper, K. Muller and A. Nizam (1998). Applied\nregression analysis and other multivariate methods.\nLipsey, M. W. (1998). Design sensitivity: Statistical power for applied\nexperimental research. In L. Brickman and D. J. Rog, Eds., Handbook\nof applied social research methods, 39-68. Thousand Oaks, California,\nSage Publications, Inc.\nLombard, M. and T. Ditton (1997). At the heart of it all: The concept of\npresence. Journal of Computer Mediated Communication, 3(2).\nMcMurray, D. R. (1999). Director of Applied Physiology lab, University of\nNorth Carolina. Personal Communication.\nM. Meehan (2001). Physiological reaction as an objective measure of\npresence in virtual environments. Doctoral Dissertation. Computer\nScience. University of North Carolina, Chapel Hill, NC, USA.\nRegenbrecht, H. T. and T. W. Schubert (1997). Measuring presence in\nvirtual environments. In Proc. of Human Computer Interface\nInternational, San Francisco.\nSAS (1990). SAS/ STAT User's Guide, Version 6, Fourth Edition. Cary,\nNC, USA, SAS Institute Inc.\nSchubert, T., F. Friedmann and H. Regenbrecht (1999). Embodied presence\nin virtual environments. In R. Paton and I. Neilson, Eds., Visual\nRepresentations and Interpretations. London, Springer-Verlag.\nSheridan, T. B. (1996). Further musings on the psychophysics of presence.\nPresence: Teleoperators and Virtual Environments, 5(2): 241-246.\nSingleton, R. A., B. C. Straits and M. M. Straits (1993). Approaches to\nSocial Research. New York, Oxford University Press.\nSlater, M., M. Usoh and A. Steed (1994). Depth of presence in virtual\nenvironments. Presence: Teleoperators and Virtual Environments, 3(2):\n130-144.\nSlater, M., M. Usoh and A. Steed (1995). Taking steps: The influence of a\nwalking technique on presence in virtual reality. ACM Transactions on\nComputer Human Interaction (TOCHI), 2(3): 201-219.\nSlater, M. (1999). Measuring Presence: A Response to the Witmer and\nSinger Presence Questionnaire. Presence: Teleoperators and Virtual\nEnvironments, 8(5): 560-565.\nSlonim, N. B., Ed. (1974). Environmental Physiology. Saint Louis. The C.\nV. Mosby Company.\nSutherland, S. (1996). The international dictionary of psychology. New\nYork, The Crossroads Publishing Company.\nUsoh, M., K. Arthur, M. Whitton, R. Bastos, A. Steed, M. Slater and F.\nBrooks (1999). Walking > walking-in-place > flying in virtual\nenvironments. In Proc. of ACM SIGGRAPH 99. ACM Press/ ACM\nSIGGRAPH.\nWeiderhold, B. K., R. Gervirtz and M. D. Wiederhold (1998). Fear of\nflying: A case report using virtual reality therapy with physiological\nmonitoring. CyberPsychology and Behavior, 1(2): 97-104.\nWitmer, B. G. and M. J. Singer (1998). Measuring presence in virtual\nenvironments: A presence questionnaire. 
Presence: Teleoperators and\nVirtual Environments, 7(3): 225-240.\n652\n", "keywords": "presence;Haptics;measurement;Frame Rate;virtual environment;Presence;Physiology"} {"name": "15", "title": "A New Statistical Formula for Chinese Text Segmentation Incorporating Contextual Information", "abstract": "A new statistical formula for identifying 2-character words in Chinese text, called the contextual information formula, was developed empirically by performing stepwise logistic regression using a sample of sentences that had been manually segmented. Contextual information in the form of the frequency of characters that are adjacent to the bigram being processed as well as the weighted document frequency of the overlapping bigrams were found to be significant factors for predicting the probablity that the bigram constitutes a word. Local information (the number of times the bigram occurs in the document being segmented) and the position of the bigram in the sentence were not found to be useful in determining words. The contextual information formula was found to be significantly and substantially better than the mutual information formula in identifying 2-character words. The method can also be used for identifying multi-word terms in English text.", "fulltext": "INTRODUCTION\nChinese text is different from English text in that there is no\nexplicit word boundary. In English text, words are separated by\nspaces. Chinese text (as well as text of other Oriental languages)\nis made up of ideographic characters, and a word can comprise\none, two or more such characters, without explicit indication\nwhere one word ends and another begins.\nThis has implications for natural language processing and\ninformation retrieval with Chinese text. Text processing\ntechniques that have been developed for Western languages deal\nwith words as meaningful text units and assume that words are\neasy to identify. These techniques may not work well for Chinese\ntext without some adjustments. To apply these techniques to\nChinese text, automatic methods for identifying word boundaries\naccurately have to be developed. The process of identifying word\nboundaries has been referred to as text segmentation or, more\naccurately, word segmentation.\nSeveral techniques have been developed for Chinese text\nsegmentation. They can be divided into:\n1.\n\nstatistical methods, based on statistical properties and\nfrequencies of characters and character strings in a corpus\n(e.g. [13] and [16]).\n2.\n\ndictionary-based methods, often complemented with\ngrammar rules. This approach uses a dictionary of words to\nidentify word boundaries. Grammar rules are often used to\nresolve conflicts (choose between alternative segmentations)\nand to improve the segmentation (e.g. [4], [8], [19] and [20]).\n3.\n\nsyntax-based methods, which integrate the word\nsegmentation process with syntactic parsing or part-of-speech\ntagging (e.g. [1]).\n4.\n\nconceptual methods, that make use of some kind of semantic\nprocessing to extract information and store it in a knowledge\nrepresentation scheme. Domain knowledge is used for\ndisambiguation (e.g. [9]).\nMany researchers use a combination of methods (e.g. [14]).\nThe objective of this study was to empirically develop a\nstatistical formula for Chinese text segmentation. Researchers\nhave used different statistical methods in segmentation, most of\nwhich were based on theoretical considerations or adopted from\nother fields. 
In this study, we developed a statistical formula\nempirically by performing stepwise logistic regression using a\nsample of sentences that had been manually segmented. This\npaper reports the new formula developed for identifying 2-character\nwords, and the effectiveness of this formula compared\nwith the mutual information formula.\nThis study has the following novel aspects:\n\n\nThe statistical formula was derived empirically using\nregression analysis.\n\n\nThe manual segmentation was performed to identify\nmeaningful\nwords rather than simple words.\nMeaningful\nwords include phrasal words and multi-word\nterms.\n\n\nIn addition to the relative frequencies of bigrams and\ncharacters often used in other studies, our study also\ninvestigated the use of document frequencies and weighted\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, to republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nSIGIR '99 8/99 Berkley, CA USA\nCopyright 1999 ACM 1-58113-096-1/99/0007 . . . $5.00\n82\n)\n(\n*\n)\n(\n)\n(\nlog\n2\nC\nfreq\nB\nfreq\nBC\nfreq\ndocument frequencies. Weighted document frequencies are\nsimilar to document frequencies but each document is\nweighted by the square of the number of times the character\nor bigram occurs in the document.\n\n\nContextual information was included in the study. To predict\nwhether the bigram BC in the character string\nA B C D\nconstitutes a word, we investigated whether the\nfrequencies for AB, CD, A and D should be included in the\nformula.\n\n\nLocal frequencies were included in the study. We\ninvestigated character and bigram frequencies within the\ndocument in which the sentence occurs (i.e. the number of\ntimes the character or bigram appears in the document being\nsegmented).\n\n\nWe investigated whether the position of the bigram (at the\nbeginning of the sentence, before a punctuation mark, or after\na punctuation mark) had a significant effect.\n\n\nWe developed a segmentation algorithm to apply the\nstatistical formula to segment sentences and resolve conflicts.\nIn this study, our objective was to segment text into\nmeaningful words\nrather than\nsimple words\n. A simple\nword is the smallest independent unit of a sentence that has\nmeaning on its own. A meaningful word can be a simple word or\na compound word comprising 2 or more simple words\ndepending on the context. In many cases, the meaning of a\ncompound word is more than just a combination of the meanings\nof the constituent simple words, i.e. some meaning is lost when\nthe compound word is segmented into simple words.\nFurthermore, some phrases are used so often that native speakers\nperceive them and use them as a unit. Admittedly, there is some\nsubjectivity in the manual segmentation of text. But the fact that\nstatistical models can be developed to predict the manually\nsegmented words substantially better than chance indicates some\nlevel of consistency in the manual segmentation.\nThe problem of identifying meaningful words is not limited to\nChinese and oriental languages. 
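To make the frequency measures listed above concrete, the following is a minimal Python sketch of how relative frequency, document frequency, and weighted document frequency of overlapping bigrams could be computed from a corpus. The corpus representation (a list of documents, each given as a string of characters) and all function names are assumptions made for the illustration, not code from the study; character (unigram) frequencies can be computed the same way using substrings of length one.

    from collections import Counter

    def bigram_statistics(documents):
        # documents: list of strings, each one document's character sequence.
        total_chars = sum(len(doc) for doc in documents)
        num_docs = len(documents)
        rel_count, doc_count, wt_doc_count = Counter(), Counter(), Counter()
        for doc in documents:
            counts = Counter(doc[i:i + 2] for i in range(len(doc) - 1))
            for bigram, c in counts.items():
                rel_count[bigram] += c          # total occurrences in the corpus
                doc_count[bigram] += 1          # one per document containing it
                wt_doc_count[bigram] += c * c   # square of the within-document count
        rel_freq = {b: c / total_chars for b, c in rel_count.items()}
        doc_freq = {b: c / num_docs for b, c in doc_count.items()}
        wt_doc_freq = {b: c / num_docs for b, c in wt_doc_count.items()}
        return rel_freq, doc_freq, wt_doc_freq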
Identifying multi-word terms is\nalso a problem in text processing with English and other Western\nlanguages, and researchers have used the mutual information\nformula and other statistical approaches for identifying such\nterms (e.g. [3], [6] and [7]).\nPREVIOUS STUDIES\nThere are few studies using a purely statistical approach to\nChinese text segmentation. One statistical formula that has been\nused by other researchers (e.g. [11] and [16]) is the mutual\ninformation formula. Given a character string\nA B C D\n, the mutual information for the bigram BC is given by the\nformula:\nMI(BC) =\n= log\n2\nfreq(BC) log\n2\nfreq(B) log\n2\nfreq(C)\nwhere freq refers to the relative frequency of the character or\nbigram in the corpus (i.e. the number of times the character or\nbigram occurs in the corpus divided by the number of characters\nin the corpus).\nMutual information is a measure of how strongly the two\ncharacters are associated, and can be used as a measure of how\nlikely the pair of characters constitutes a word. Sproat & Shih\n[16] obtained recall and precision values of 94% using mutual\ninformation to identify words. This study probably segmented\ntext into simple words rather than meaningful words. In our\nstudy, text was segmented into meaningful words and we\nobtained much poorer results for the mutual information\nformula.\nLua [12] and Lua & Gan [13] applied information theory to the\nproblem of Chinese text segmentation. They calculated the\ninformation content of characters and words using the\ninformation entropy formula I = - log\n2\nP, where P is the\nprobability of occurrence of the character or word. If the\ninformation content of a character string is less than the sum of\nthe information content of the constituent characters, then the\ncharacter string is likely to constitute a word. The formula for\ncalculating this\nloss\nof information content when a word is\nformed is identical to the mutual information formula. Lua &\nGan [13] obtained an accuracy of 99% (measured in terms of the\nnumber of errors per 100 characters).\nTung & Lee [18] also used information entropy to identify\nunknown words in a corpus. However, instead of calculating the\nentropy value for the character string that is hypothesized to be a\nword (i.e. the candidate word), they identified all the characters\nthat occurred to the left of the candidate word in the corpus. For\neach left character, they calculated the probability and entropy\nvalue for that character given that it occurs to the left of the\ncandidate word. The same is done for the characters to the right\nof the candidate word. If the sum of the entropy values for the\nleft characters and the sum of the entropy values for the right\ncharacters are both high, than the candidate word is considered\nlikely to be a word. In other words, a character string is likely to\nbe a word if it has several different characters to the left and to\nthe right of it in the corpus, and none of the left and right\ncharacters predominate (i.e. not strongly associated with the\ncharacter string).\nOgawa & Matsuda [15] developed a statistical method to\nsegment Japanese text. Instead of attempting to identify words\ndirectly, they developed a formula to estimate the probability that\na bigram straddles a word boundary. They referred to this as the\nsegmentation probability. 
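For reference, the mutual information score defined at the beginning of this section, MI(BC) = log2( freq(BC) / (freq(B) * freq(C)) ) = log2 freq(BC) - log2 freq(B) - log2 freq(C), can be computed directly from corpus relative frequencies. The sketch below is a minimal illustration and its names are assumptions, not code from any of the cited systems.

    import math

    def mutual_information(bigram, char_freq, bigram_freq):
        # char_freq and bigram_freq map characters / bigrams to their relative
        # frequencies in the corpus (occurrence counts divided by corpus size).
        b, c = bigram[0], bigram[1]
        return (math.log2(bigram_freq[bigram])
                - math.log2(char_freq[b])
                - math.log2(char_freq[c]))

    # A bigram whose characters co-occur far more often than chance predicts
    # receives a high score and is therefore more likely to be a word.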
This was complemented with some\nsyntactic information about which class of characters could be\ncombined with which other class.\nAll the above mathematical formulas used for identifying words\nand word boundaries were developed based on theoretical\nconsiderations and not derived empirically.\nOther researchers have developed statistical methods to find the\nbest segmentation for the whole sentence rather than focusing on\nidentifying individual words. Sproat et al. [17] developed a\nstochastic finite state model for segmenting text. In their model,\na word dictionary is represented as a weighted finite state\ntransducer. Each weight represents the estimated cost of the\nword (calculated using the negative log probability). Basically,\nthe system selects the sentence segmentation that has the\nsmallest total cost. Chang & Chen [1] developed a method for\nword segmentation and part-of-speech tagging based on a first-order\nhidden Markov model.\n83\nRESEARCH METHOD\nThe purpose of this study was to empirically develop a statistical\nformula for identifying 2-character words as well as to\ninvestigate the usefulness of various factors for identifying the\nwords. A sample of 400 sentences was randomly selected from 2\nmonths (August and September 1995) of news articles from the\nXin Hua News Agency, comprising around 2.3 million characters.\nThe sample sentences were manually segmented. The\nsegmentation rules described in [10] were followed fairly closely.\nMore details of the manual segmentation process, especially with\nregard to identifying meaningful words will be given in [5].\n300 sentences were used for model building, i.e. using regression\nanalysis to develop a statistical formula. 100 sentences were set\naside for model validation to evaluate the formula developed in\nthe regression analysis. The sample sentences were broken up\ninto overlapping bigrams. In the regression analysis, the\ndependent variable was whether a bigram was a two-character\nword according to the manual segmentation. The independent\nvariables were various corpus statistics derived from the corpus\n(2 months of news articles).\nThe types of frequency information investigated were:\n1. Relative frequency of individual characters and bigrams\n(character pairs) in the corpus, i.e. the number of times the\ncharacter or bigram occurs in the corpus divided by the total\nnumber of characters in the corpus.\n2. Document frequency of characters and bigrams, i.e. the\nnumber of documents in the corpus containing the character\nor bigram divided by the total number of documents in the\ncorpus.\n3. Weighted document frequency of characters and bigrams. To\ncalculate the weighted document frequency of a character\nstring, each document containing the character string is\nassigned a score equal to the square of the number of times\nthe character string occurs in the document. The scores for all\nthe documents containing the character string are then\nsummed and divided by the total number of documents in the\ncorpus to obtain the weighted document frequency for the\ncharacter string. The rationale is that if a character string\noccurs several times within the same document, this is\nstronger evidence that the character string constitutes a word,\nthan if the character string occurs once in several documents.\nTwo or more characters can occur together by chance in\nseveral different documents. It is less likely for two\ncharacters to occur together several times within the same\ndocument by chance.\n4. 
Local frequency in the form of within-document frequency of\ncharacters and bigrams, i.e. the number of times the character\nor bigram occurs in the document being segmented.\n5. Contextual information. Frequency information of characters\nadjacent to a bigram is used to help determine whether the\nbigram is a word. For the character string\nA B C D\n, to determine whether the bigram BC is a word,\nfrequency information for the adjacent characters A and D, as\nwell as the overlapping bigrams AB and BC were considered.\n6. Positional information. We studied whether the position of a\ncharacter string (at the beginning, middle or end of a\nsentence) gave some indication of whether the character\nstring was a word.\nThe statistical model was developed using forward stepwise\nlogistic regression, using the Proc Logistic function in the SAS\nv.6.12 statistical package for Windows. Logistic regression is an\nappropriate regression technique when the dependent variable is\nbinary valued (takes the value 0 or 1). The formula developed\nusing logistic regression predicts the probability (more\naccurately, the log of the odds) that a bigram is a meaningful\nword.\nIn the stepwise regression, the threshold for a variable to enter\nthe model was set at the 0.001 significance level and the\nthreshold for retaining a variable in the model was set at 0.01. In\naddition, preference was given to relative frequencies and local\nfrequencies because they are easier to calculate than document\nfrequencies and weighted document frequencies. Also, relative\nfrequencies are commonly used in previous studies.\nFurthermore, a variable was entered in a model only if it gave a\nnoticeable improvement to the effectiveness of the model. During\nregression analysis, the effectiveness of the model was estimated\nusing the measure of concordance that was automatically output\nby the SAS statistical program. A variable was accepted into the\nmodel only if the measure of concordance improved by at least\n2% when the variable was entered into the model.\nWe evaluated the accuracy of the segmentation using measures of\nrecall and precision. Recall and precision in this context are\ndefined as follows:\nRecall = No. of 2-character words identified in the automatic\nsegmentation that are correct\nNo. of 2-character words identified in the manual\nsegmentation\nPrecision = No. of 2-character words identified in the automatic\nsegmentation that are correct\nNo. of 2-character words identified in the automatic\nsegmentation\n\nSTATISTICAL FORMULAS DEVELOPED\nThe formula that was developed for 2-character words is as\nfollows. Given a character string\nA B C D\n, the\nassociation strength for bigram BC is:\nAssoc(BC) = 0.35 * log\n2\nfreq(BC) + 0.37 * log\n2\nfreq(A) +\n0.32 log\n2\nfreq(D) 0.36 * log\n2\ndocfreq\nwt\n(AB)\n0.29 * log\n2\ndocfreq\nwt\n(CD) + 5.91\nwhere freq refers to the relative frequency in the corpus and\ndocfreq\nwt\nrefers to the weighted document frequency. We refer to\nthis formula as the contextual information formula. More details\nof the regression model are given in Table 1.\nThe formula indicates that contextual information is helpful in\nidentifying word boundaries. A in the formula refers to the\ncharacter preceding the bigram that is being processed, whereas\nD is the character following the bigram. 
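Reading the coefficient signs from Table 1 (the minus signs before the two weighted document frequency terms are easily lost in the typeset formula), the score is Assoc(BC) = 0.35 * log2 freq(BC) + 0.37 * log2 freq(A) + 0.32 * log2 freq(D) - 0.36 * log2 docfreq_wt(AB) - 0.29 * log2 docfreq_wt(CD) + 5.91. A minimal Python sketch of such a scorer follows; the handling of zero frequencies inside the logarithms (for instance when A or D is suppressed at a punctuation mark, which the paper maps to a frequency of 0) is an assumption made for the illustration, as are the function names.

    import math

    COEFFICIENTS = {"BC": 0.35, "A": 0.37, "D": 0.32, "AB": -0.36, "CD": -0.29}
    INTERCEPT = 5.91

    def _log2(x):
        # Assumption: a zero frequency (e.g. a neighbour beyond a punctuation
        # mark, which the paper assigns frequency 0) contributes nothing.
        return math.log2(x) if x > 0 else 0.0

    def assoc(a, b, c, d, freq, docfreq_wt):
        # Contextual information score for the bigram BC in the string ...A B C D...
        # freq: relative corpus frequencies of characters and bigrams.
        # docfreq_wt: weighted document frequencies of bigrams.
        # a and d may be None at sentence boundaries or punctuation marks.
        score = INTERCEPT
        score += COEFFICIENTS["BC"] * _log2(freq.get(b + c, 0.0))
        score += COEFFICIENTS["A"] * _log2(freq.get(a, 0.0) if a else 0.0)
        score += COEFFICIENTS["D"] * _log2(freq.get(d, 0.0) if d else 0.0)
        score += COEFFICIENTS["AB"] * _log2(docfreq_wt.get(a + b, 0.0) if a else 0.0)
        score += COEFFICIENTS["CD"] * _log2(docfreq_wt.get(c + d, 0.0) if d else 0.0)
        return score

    # A bigram is taken as a 2-character word when its score exceeds a chosen
    # threshold; raising the threshold favours precision, lowering it favours recall.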
The formula indicates\nthat if the character preceding and the character following the\nbigram have high relative frequencies, then the bigram is more\nlikely to be a word.\n84\nContextual information involving the weighted document\nfrequency was also found to be significant. The formula indicates\nthat if the overlapping bigrams AB and CD have high weighted\ndocument frequencies, then the bigram BC is less likely to be a\nword. We tried replacing the weighted document frequencies\nwith the unweighted document frequencies as well as the relative\nfrequencies. These were found to give a lower concordance score.\nEven with docfreq (AB) and docfreq (CD) in the model, docfreq\nwt\n(AB) and docfreq\nwt\n(CD) were found to improve the model\nsignificantly. However, local frequencies were surprisingly not\nfound to be useful in predicting 2-character words.\nWe investigated whether the position of the bigram in the\nsentence was a significant factor. We included a variable to\nindicate whether the bigram occurred just after a punctuation\nmark or at the beginning of the sentence, and another variable to\nindicate whether the bigram occurred just before a punctuation\nmark or at the end of a sentence. The interaction between each of\nthe\nposition\nvariables and the various relative frequencies\nwere not significant. However, it was found that whether or not\nthe bigram was at the end of a sentence or just before a\npunctuation mark was a significant factor. Bigrams at the end of\na sentence or just before a punctuation mark tend to be words.\nHowever, since this factor did not improve the concordance score\nby 2%, the effect was deemed too small to be included in the\nmodel.\nIt should be noted that the contextual information used in the\nstudy already incorporates some positional information. The\nfrequency of character A (the character preceding the bigram)\nwas given the value 0 if the bigram was preceded by a\npunctuation mark or was at the beginning of a sentence.\nSimilarly, the frequency of character D (the character following\nthe bigram) was given the value 0 if the bigram preceded a\npunctuation mark.\nWe also investigated whether the model would be different for\nhigh and low frequency words. We included in the regression\nanalysis the interaction between the relative frequency of the\nbigram and the other relative frequencies. The interaction terms\nwere not found to be significant. Finally, it is noted that the\ncoefficients for the various factors are nearly the same, hovering\naround 0.34.\n4.2\n\nImproved Mutual Information Formula\nIn this study, the contextual information formula (CIF) was\nevaluated by comparing it with the mutual information formula\n(MIF). We wanted to find out whether the segmentation results\nusing the CIF was better than the segmentation results using the\nMIF.\nIn the CIF model, the coefficients of the variables were\ndetermined using regression analysis. If CIF was found to give\nbetter results than MIF, it could be because the coefficients for\nthe variables in CIF had been determined empirically and not\nbecause of the types of variables in the formula. 
To reject this\nexplanation, regression analysis was used to determine the\ncoefficients for the factors in the mutual information formula.\nWe refer to this new version of the formula as the improved\nmutual information formula.\nGiven a character string\nA B C D\n, the improved\nmutual information formula is:\nImproved MI(BC) = 0.39 * log\n2\nfreq(BC) - 0.28 * log\n2\nfreq(B) 0\n.23 log\n2\nfreq(C) - 0.32\nThe coefficients are all close to 0.3. The formula is thus quite\nsimilar to the mutual information formula, except for a\nmultiplier of 0.3.\nSEGMENTATION ALGORITHMS\nThe automatic segmentation process has the following steps:\n1.\n\nThe statistical formula is used to calculate a score for each\nbigram to indicate its association strength (or how likely the\nbigram is a word).\n2.\n\nA threshold value is then set and used to decide which\nbigram is a word. If a bigram obtains a score above the\nthreshold value, then it is selected as a word. Different\nthreshold values can be used, depending on whether the user\nprefers high recall or high precision.\n3.\n\nA segmentation algorithm is used to resolve conflict. If two\noverlapping bigrams both have association scores above the\nParameter Standard Wald Pr > Standardized\nVariable DF Estimate Error Chi-Square Chi-Square Estimate\nINTERCPT\n1\n5.9144\n0.1719\n1184.0532\n0.0001\n.\nLog freq(BC)\n\n1\n0.3502\n0.0106\n1088.7291\n0.0001\n0.638740\nLog freq(A)\n\n1\n0.3730\n0.0113\n1092.1382\n0.0001\n0.709621\nLog freq(D)\n\n1\n0.3171\n0.0107\n886.4446\n0.0001\n0.607326\nLog docfreq\nwt\n(AB)\n\n1\n-0.3580\n0.0111\n1034.0948\n0.0001\n-0.800520\nLog docfreq\nwt\n(CD)\n\n1\n-0.2867\n0.0104\n754.2276\n0.0001\n-0.635704\nNote: freq refers to the relative frequency, and docfreq\nwt\nrefers to the\nweighted document frequency.\nAssociation of Predicted Probabilities and Observed Responses\nConcordant = 90.1% Somers' D = 0.803\nDiscordant = 9.8% Gamma = 0.803\nTied = 0.1% Tau-a = 0.295\n(23875432 pairs) c = 0.901\nTable 1. Final regression model for 2-character words\n85\nthreshold value, then there is conflict or ambiguity. The\nfrequency of such conflicts will rise as the threshold value is\nlowered. The segmentation algorithm resolves the conflict\nand selects one of the bigrams as a word.\nOne simple segmentation algorithm is the forward match\nalgorithm. Consider the sentence\nA B C D E\n. The\nsegmentation process proceeds from the beginning of the\nsentence to the end. First the bigram AB is considered. If the\nassociation score is above the threshold, then AB is taken as a\nword, and the bigram CD is next considered. If the association\nscore of AB is below the threshold, the character A is taken as a\n1-character word. And the bigram BC is next considered. In\neffect, if the association score of both AB and BC are above\nthreshold, the forward match algorithm selects AB as a word and\nnot BC.\nThe forward match method for resolving ambiguity is somewhat\narbitrary and not satisfactory. When overlapping bigrams exceed\nthe threshold value, it simply decides in favour of the earlier\nbigram. Another segmentation algorithm was developed in this\nstudy which we refer to as the comparative forward match\nalgorithm. This has an additional step:\nIf 2 overlapping bigrams AB and BC both have scores above\nthe threshold value then their scores are compared. If AB has a\nhigher value, then it is selected as a word, and the program\nnext considers the bigrams CD and DE. 
On the other hand, if\nAB has a lower value, then character A is selected as a 1-character\nword, and the program next considers bigrams BC\nand CD.\nThe comparative forward match method (CFM) was compared\nwith the forward match method (FM) by applying them to the 3\nstatistical formulas (the contextual information formula, the\nmutual information formula and the improved mutual\ninformation formula). One way to compare the effectiveness of\nthe 2 segmentation algorithms is by comparing their precision\nfigures at the same recall levels. The precision figures for\nselected recall levels are given in Table 2. The results are based\non the sample of 300 sentences.\nThe comparative forward match algorithm gave better results for\nthe mutual information and improved mutual information\nformulas especially at low threshold values when a large\nnumber of conflicts are likely. Furthermore, for the forward\nmatch method, the recall didn\nt go substantially higher than\n80% even at low threshold values.\nFor the contextual information formula, the comparative forward\nmatch method did not perform better than forward match, except\nat very low threshold values when the recall was above 90%.\nThis was expected because the contextual information formula\nalready incorporates information about neighboring characters\nwithin the formula. The formula gave very few conflicting\nsegmentations. There were very few cases of overlapping\nbigrams both having association scores above the threshold\nexcept when threshold values were below 1.5.\nEVALUATION\nIn this section we compare the effectiveness of the contextual\ninformation formula with the mutual information formula and\nthe improved mutual information formula using the 100\nsentences that had been set aside for evaluation purposes. For the\ncontextual information formula, the forward match segmentation\nalgorithm was used. The comparative forward match algorithm\nwas used for the mutual information and the improved mutual\ninformation formulas.\nThe three statistical formulas were compared by comparing their\nprecision figures at 4 recall levels at 60%, 70%, 80% and 90%.\nFor each of the three statistical formulas, we identified the\nthreshold values that would give a recall of 60%, 70%, 80% and\n90%. We then determined the precision values at these threshold\nvalues to find out whether the contextual information formula\ngave better precision than the other two formulas at 60%, 70%,\n80% and 90% recall. These recall levels were selected because a\nrecall of 50% or less is probably unacceptable for most\napplications.\nThe precision figures for the 4 recall levels are given in Table 3.\nThe recall-precision graphs for the 3 formulas are given in Fig. 1.\nThe contextual information formula substantially outperforms\nthe mutual information and the improved mutual information\nformulas. At the 90% recall level, the contextual information\nPrecision\nRecall\nComparative\nForward Match\nForward\nMatch\nImprovement\nMutual Information\n90%\n51%\n\n80%\n52%\n47%\n5%\n70%\n53%\n51%\n2%\n60%\n54%\n52%\n2%\nImproved Mutual Information\n90%\n51%\n\n80%\n53%\n46%\n7%\n70%\n54%\n52%\n2%\n60%\n55%\n54%\n1%\nContextual Information Formula\n90%\n55%\n54%\n1%\n80%\n62%\n62%\n0%\n70%\n65%\n65%\n0%\n60%\n68%\n68%\n0%\nTable 2. Recall and precision values for the comparative\nforward match segmentation algorithm vs. 
forward match\nPrecision\nRecall\nMutual\nInformation\nImproved Mutual\nInformation\nContextual\nInformation\n90%\n57% (0.0)\n57% (-2.5)\n61% (-1.5)\n80%\n59% (3.7)\n59% (-1.5)\n66% (-0.8)\n70%\n59% (4.7)\n60% (-1.0)\n70% (-0.3)\n60%\n60% (5.6)\n62% (-0.7)\n74% (0.0)\n* Threshold values are given in parenthesis.\nTable 3. Recall and precision for three statistical formulas\n86\nformula was better by about 4%. At the 60% recall level, it\noutperformed the mutual information formula by 14% (giving a\nrelative improvement of 23%). The results also indicate that the\nimproved mutual information formula does not perform better\nthan the mutual information formula.\n6.2\n\nStatistical Test of Significance\nIn order to perform a statistical test, recall and precision figures\nwere calculated for each of the 100 sentences used in the\nevaluation. The average recall and the average precision across\nthe 100 sentences were then calculated for the three statistical\nformulas. In the previous section, recall and precision were\ncalculated for all the 100 sentences combined. Here, recall and\nprecision were obtained for individual sentences and then the\naverage across the 100 sentences was calculated. The average\nprecision for 60%, 70%, 80% and 90% average recall are given\nin Table 4.\nFor each recall level, an analysis of variance with repeated\nmeasures was carried out to find out whether the differences in\nprecision were significant. Pairwise comparisons using Tukey s\nHSD test was also carried out. The contextual information\nformula was significantly better (\n\n=0.001) than the mutual\ninformation and the improved mutual information formulas at all\n4 recall levels. The improved mutual information formula was\nnot found to be significantly better than mutual information.\nANALYSIS OF ERRORS\nThe errors that arose from using the contextual information\nformula were analyzed to gain insights into the weaknesses of\nthe model and how the model can be improved. There are two\ntypes of errors: errors of commission and errors of omission.\nErrors of commission are bigrams that are identified by the\nautomatic segmentation to be words when in fact they are not\n(according to the manual segmentation). Errors of omission are\nbigrams that are not identified by the automatic segmentation to\nbe words but in fact they are.\nThe errors depend of course on the threshold values used. A high\nthreshold (e.g. 1.0) emphasizes precision and a low threshold\n(e.g. 1.0) emphasizes recall. 50 sentences were selected from\nthe 100 sample sentences to find the distribution of errors at\ndifferent regions of threshold values.\nAssociation Score >1.0 (definite errors)\n\nwill\nthrough\n\ntelegraph [on the] day [31 July]\nAssociation Score Between 1.0 and 1.0\n(borderline errors)\n\nstill\nto\nwill be\n\npeople etc.\n\nI want\nPerson's name\n(\n)\nWan Wen Ju\nPlace name\n(\n)\na village name in China\n(\n)\nCanada\nName of an organization/institution\n(\n)\nXin Hua Agency\n(\n)\nThe State Department\nTable 6. Bigrams incorrectly identified as words\n55\n60\n65\n70\n75\n60\n65\n70\n75\n80\n85\n90\n95\nRecall(%)\nPrecision(%)\nContextual information\nMutual information\nImproved mutual\ninformation\nFig. 1. 
Recall-precision graph for the three statistical\nmodels.\nAssociation Score>1.0 (definite errors)\n\n(\n)\nuniversity (agricultural\nuniversity)\n(\n)\ngeology (geologic age)\n(\n)\nplant (upland plant)\n(\n)\nsovereignty (sovereign state)\nAssociation Score Between 1.0 and 1.0\n(borderline errors)\n\n(\n)\nstatistics (statistical data)\n(\n)\ncalamity (natural calamity)\n(\n)\nresources (manpower resources)\n(\n)\nprofessor (associate professor)\n(\n)\npoor (pauperization)\n(\n)\nfourteen (the 14\nth\nday)\n(\n)\ntwenty (twenty pieces)\nTable 5. Simple words that are part of a longer\nmeaningful word\nAvg Precision\nAvg\nRecall\nMutual\nInformation\nImproved Mutual\nInformation\nContextual\nInformation\n90%\n57% (1.0)\n58% (-2.3)\n61% (-1.5)\n80%\n60% (3.8)\n60% (-1.4)\n67% (-0.7)\n70%\n59% (4.8)\n60% (-1.0)\n70% (-0.3)\n60%\n60% (5.6)\n63% (-0.6)\n73% (0.0)\n* Threshold values are given in parenthesis.\nTable 4. Average recall and average precision for the three\nstatistical formulas\n87\nWe divide the errors of commission (bigrams that are incorrectly\nidentified as words by the automatic segmentation) into 2 groups:\n1.\n\nDefinite errors: bigrams with association scores above 1.0 but\nare not words\n2.\n\nBorderline errors: bigrams with association scores between\n1.0 and 1.0 and are not words\nWe also divide the errors of omission (bigrams that are words\nbut are not identified by the automatic segmentation) into 2\ngroups:\n1. Definite errors: bigrams with association scores below 1.0\nbut are words\n2. Borderline errors: bigrams with association scores between\n1.0 and 1.0 and are words.\n7.1\n\nErrors of Commission\nErrors of commission can be divided into 2 types:\n1.\n\nThe bigram is a simple word that is part of a longer\nmeaningful word.\n2.\n\nThe bigram is not a word (neither simple word nor\nmeaningful word).\nErrors of the first type are illustrated in Table 5. The words\nwithin parenthesis are actually meaningful words but segmented\nas simple words (words on the left). The words lose part of the\nmeaning when segmented as simple words. These errors\noccurred mostly with 3 or 4-character meaningful words.\nErrors of the second type are illustrated in Table 6. Many of the\nerrors are caused by incorrectly linking a character with a\nfunction word or pronoun. Some of the errors can easily be\nremoved by using a list of function words and pronouns to\nidentify these characters.\n7.2\n\nErrors of Omission\nExamples of definite errors of omission (bigrams with\nassociation scores below 1.0 but are words) are given in Table\n7. Most of the errors are rare words and time words. Some are\nancient names, rare and unknown place names, as well as\ntechnical terms. Since our corpus comprises general news\narticles, these types of words are not frequent in the corpus. Time\nwords like dates usually have low association values because\nthey change everyday! These errors can be reduced by\nincorporating a separate algorithm for recognizing them.\nThe proportion of errors of the various types are given in Table 8.\n\nCONCLUSION\nA new statistical formula for identifying 2-character words in\nChinese text, called the contextual information formula, was\ndeveloped empirically using regression analysis. The focus was\non identifying meaningful words (including multi-word terms\nand idioms) rather than simple words. 
The formula was found to\ngive significantly and substantially better results than the mutual\ninformation formula.\nContextual information in the form of the frequency of characters\nthat are adjacent to the bigram being processed as well as the\nweighted document frequency of the overlapping bigrams were\nfound to be significant factors for predicting the probablity that\nthe bigram constitutes a word. Local information (e.g. the\nnumber of times the bigram occurs in the document being\nsegmented) and the position of the bigram in the sentence were\nnot found to be useful in determining words.\nOf the bigrams that the formula erroneously identified as words,\nabout 80% of them were actually simple words. Of the rest,\nmany involved incorrect linking with a function words. Of the\nwords that the formula failed to identify as words, more than a\nthird of them were rare words or time words. The proportion of\nrare words increased as the threshold value used was lowered.\nThese rare words cannot be identified using statistical\ntechniques.\nThis study investigated a purely statistical approach to text\nAssociation Score between -1.0 and -2.0\n\nthe northern section of a construction project\n\nfragments of ancient books\nAssociation Score < -2.0\n\nSeptember\n\n3rd day\n\n(name of a district in China )\n\n(name of an institution)\n\nthe Book of Changes\nTable 7. 2-character words with association score\nbelow -1.0\nErrors of Commission\nAssociation score > 1.0\n(No. of errors=34)\nBorderline Cases\nAssociation score: 1.0 to1.0\n(No. of cases: 210)\nErrors of Omission\nAssociation score < 1.0\nAssociation score:\n1.0 to 2.0\n(No. of errors=43)\nAssociation score\n< 2.0\n(No. of errors=22)\nSimple\nwords\n82.3%\nNot words\n17.7%\nSimple\nwords\n55.2%\nNot\nwords\n20.5%\nMeaningful\nwords\n24.3%\nRare words\n& time\nwords\n23.2%\nOthers\n76.8%\nRare words\n& time\nwords\n63.6%\nOthers\n36.4%\nTable 8. Proportion of errors of different types\n88\nsegmentation. The advantage of the statistical approach is that it\ncan be applied to any domain, provided that the document\ncollection is sufficiently large to provide frequency information.\nA domain-specific dictionary of words is not required. In fact, the\nstatistical formula can be used to generate a shortlist of candidate\nwords for such a dictionary. On the other hand, the statistical\nmethod cannot identify rare words and proper names. It is also\nfooled by combinations of function words that occur frequently\nand by function words that co-occur with other words.\nIt is well-known that a combination of methods is needed to give\nthe best segmentation results. The segmentation quality in this\nstudy can be improved by using a list of function words and\nsegmenting the function words as single character words. A\ndictionary of common and well-known names (including names\nof persons, places, institutions, government bodies and classic\nbooks) could be used by the system to identify proper names that\noccur infrequently in the corpus. Chang et al. [2] developed a\nmethod for recognizing proper nouns using a dictionary of family\nnames in combination with a statistical method for identifying\nthe end of the name. An algorithm for identifying time and dates\nwould also be helpful. 
It is not clear whether syntactic processing\ncan be used to improve the segmentation results substantially.\nOur current work includes developing statistical formulas for\nidentifying 3 and 4-character words, as well as investigating\nwhether the statistical formula developed here can be used with\nother corpora. The approach adopted in this study can also be\nused to develop statistical models for identifying multi-word\nterms in English text. It would be interesting to see whether the\nregression model developed for English text is similar to the one\ndeveloped in this study for Chinese text. Frantzi, Ananiadou &\nTsujii [7], using a different statistical approach, found that\ncontextual information could be used to improve the\nidentification of multi-word terms in English text.\nREFERENCES\n[1] Chang, C.-H., and Chen, C.-D. A study of integrating\nChinese word segmentation and part-of-speech tagging.\nCommunications of COLIPS, 3, 1 (1993), 69-77.\n[2] Chang, J.-S., Chen, S.-D., Ker, S.-J., Chen, Y., and Liu, J.S.\nA multiple-corpus approach to recognition of proper names\nin Chinese texts. Computer Processing of Chinese and\nOriental Languages, 8, 1 (June 1994), 75-85.\n[3] Church, K.W., and Hanks, P. Word association norms,\nmutual information and lexicography. In Proceedings of the\n27\nth\nAnnual Meeting of the Association for Computational\nLinguistics (Vancouver, June 1989), 76-83.\n[4] Dai, J.C., and Lee, H.J. A generalized unification-based LR\nparser for Chinese. Computer Processing of Chinese and\nOriental Languages, 8, 1 (1994), 1-18.\n[5] Dai, Y. Developing a new statistical method for Chinese\ntext segmentation. (Master\ns thesis in preparation)\n[6] Damerau, F.J. Generating and evaluating domain-oriented\nmulti-word terms from texts. Information Processing &\nManagement, 29, 4 (1993), 433-447.\n[7] Frantzi, K.T., Ananiadou, S., and Tsujii, J. The C-value/NC\n-value method of automatic recognition for multi-word\nterms. In C. Nikolaou and C. Stephanidis (eds.),\nResearch and Advanced Technology for Digital Libraries,\n2\nnd\nEuropean Conference, ECDL\n98 (Heraklion, Crete,\nSeptember 1998), Springer-Verlag, 585-604.\n[8] Liang, N.Y. The knowledge of Chinese words segmentation\n[in Chinese]. Journal of Chinese Information Processing, 4,\n2 (1990), 42-49.\n[9] Liu, I.M. Descriptive-unit analysis of sentences: Toward a\nmodel natural language processing. Computer Processing of\nChinese & Oriental Languages, 4, 4 (1990), 314-355.\n[10] Liu, Y., Tan, Q., and Shen, X.K. Xin xi chu li yong xian dai\nhan yu fen ci gui fan ji zi dong fen ci fang fa [\nModern\nChinese Word Segmentation Rules and Automatic Word\nSegmentation Methods for Information Processing\n]. Qing\nHua University Press, Beijing, 1994.\n[11] Lua, K.T. Experiments on the use of bigram mutual\ninformation in Chinese natural language processing.\nPresented at the 1995 International Conference on Computer\nProcessing of Oriental Languages (ICCPOL) (Hawaii,\nNovember 1995). Available: http://137.132.89.143/luakt/\npublication.html\n[12] Lua, K.T. From character to word - An application of\ninformation theory. Computer Processing of Chinese &\nOriental Languages, 4, 4 (1990), 304-312.\n[13] Lua, K.T., and Gan, G.W. An application of information\ntheory in Chinese word segmentation. Computer Processing\nof Chinese & Oriental Languages, 8, 1 (1994), 115-124.\n[14] Nie, J.Y., Hannan, M.L., and Jin, W.Y. Unknown word\ndetection and segmentation of Chinese using statistical and\nheuristic knowledge. 
Communications of COLIPS, 5, 1&2\n(1995), 47-57.\n[15] Ogawa, Y., and Matsuda, T. Overlapping statistical word\nindexing: A new indexing method for Japanese text. In\nProceedings of the 20th Annual International ACM SIGIR\nConference on Research and Development in Information\nRetrieval (Philadelphia, July 1997), ACM, 226-234.\n[16] Sproat, R., and Shih, C.L. A statistical method for finding\nword boundaries in Chinese text. Computer Processing of\nChinese & Oriental Languages, 4, 4 (1990), 336-351.\n[17] Sproat, R., Shih, C., Gale, W., and Chang, N. A stochastic\nfinite-state word-segmentation algorithm for Chinese.\nComputational Lingustics, 22, 3 (1996), 377-404.\n[18] Tung, C.-H., and Lee, H.-J. Identification of unknown words\nfrom a corpus. Computer Processing of Chinese and\nOriental Languages, 8 (Supplement, Dec. 1994), 131-145.\n[19] Wu, Z., and Tseng, G. ACTS: An automatic Chinese text\nsegmentation system for full text retrieval. Journal of the\nAmerican Society for Information Science, 46, 2 (1995), 83-96\n.\n[20] Yeh, C.L., and Lee, H.J. Rule-based word identification for\nmandarin Chinese sentences: A unification approach.\nComputer Processing of Chinese and Oriental Languages, 5,\n2 (1991), 97-118.\n89", "keywords": "logistic regression;statistical formula;word boundary identification;Chinese text segmentation;word boundary;natural language processing;mutual information;regression model;contextual information;multi-word terms"} {"name": "150", "title": "Preventing Attribute Information Leakage in Automated Trust Negotiation", "abstract": "Automated trust negotiation is an approach which establishes trust between strangers through the bilateral, iterative disclosure of digital credentials. Sensitive credentials are protected by access control policies which may also be communicated to the other party. Ideally, sensitive information should not be known by others unless its access control policy has been satisfied. However, due to bilateral information exchange, information may flow to others in a variety of forms, many of which cannot be protected by access control policies alone. In particular, sensitive information may be inferred by observing negotiation participants' behavior even when access control policies are strictly enforced. In this paper, we propose a general framework for the safety of trust negotiation systems. Compared to the existing safety model, our framework focuses on the actual information gain during trust negotiation instead of the exchanged messages. Thus, it directly reflects the essence of safety in sensitive information protection. Based on the proposed framework, we develop policy databases as a mechanism to help prevent unauthorized information inferences during trust negotiation. We show that policy databases achieve the same protection of sensitive information as existing solutions without imposing additional complications to the interaction between negotiation participants or restricting users' autonomy in defining their own policies.", "fulltext": "INTRODUCTION\nAutomated trust negotiation (ATN) is an approach to\naccess control and authentication in open, flexible systems\nsuch as the Internet. ATN enables open computing by as-Permission\nto make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. 
To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nCCS'05, November 711, 2005, Alexandria, Virginia, USA.\nCopyright 2005 ACM 1-59593-226-7/05/0011 ...\n$\n5.00.\nsigning an access control policy to each resource that is to\nbe made available to entities from different domains. An\naccess control policy describes the attributes of the entities\nallowed to access that resource, in contrast to the traditional\napproach of listing their identities. To satisfy an access control\npolicy, a user has to demonstrate that they have the\nattributes named in the policy through the use of digital\ncredentials. Since one's attributes may also be sensitive, the\ndisclosure of digital credentials is also protected by access\ncontrol policies.\nA trust negotiation is triggered when one party requests\naccess to a resource owned by another party. Since each\nparty may have policies that the other needs to satisfy, trust\nis established incrementally through bilateral disclosures of\ncredentials and requests for credentials, a characteristic that\ndistinguishes trust negotiation from other trust establishment\napproaches [2, 11].\nAccess control policies play a central role in protecting\nprivacy during trust negotiation. Ideally, an entity's sensitive\ninformation should not be known by others unless they\nhave satisfied the corresponding access control policy. However\n, depending on the way two parties interact with each\nother, one's private information may flow to others in various\nforms, which are not always controlled by access control\npolicies. In particular, the different behaviors of a negotiation\nparticipant may be exploited to infer sensitive information\n, even if credentials containing that information are\nnever directly disclosed.\nFor example, suppose a resource's policy requires Alice to\nprove a sensitive attribute such as employment by the CIA.\nIf Alice has this attribute, then she likely protects it with an\naccess control policy. Thus, as a response, Alice will ask the\nresource provider to satisfy her policy. On the other hand,\nif Alice does not have the attribute, then a natural response\nwould be for her to terminate the negotiation since there is\nno way that she can access the resource. Thus, merely from\nAlice's response, the resource provider may infer with high\nconfidence whether or not Alice is working for the CIA, even\nthough her access control policy is strictly enforced.\nThe problem of unauthorized information flow in ATN has\nbeen noted by several groups of researchers [20, 22, 27]. A\nvariety of approaches have been proposed, which mainly fall\ninto two categories. Approaches in the first category try to\n\"break\" the correlation between different information. Intuitively\n, if the disclosed policy for an attribute is independent\nfrom the possession of the attribute, then the above inference\nis impossible. A representative approach in this category is\nby Seamons et al. [20], where an entity possessing a sensi-36\ntive credential always responds with a cover policy of f alse\nto pretend the opposite. Only when the actual policy is satisfied\nby the credentials disclosed by the opponent will the\nentity disclose the credential. Clearly, since the disclosed\npolicy is always f alse, it is not correlated to the possession\nof the credential. 
One obvious problem with this approach,\nhowever, is that a potentially successful negotiation may fail\nbecause an entity pretends to not have the credential.\nApproaches in the second category aim to make the correlation\nbetween different information \"safe\", i.e., when an\nopponent is able to infer some sensitive information through\nthe correlation, it is already entitled to know that information\n. For example, Winsborough and Li [23] proposed the\nuse of acknowledgement policies (\"Ack policies\" for short) as\na solution. Their approach is based on the principle \"discuss\nsensitive topics only with appropriate parties\". Therefore,\nbesides an access control policy P , Alice also associates an\nAck policy P\nAck\nwith a sensitive attribute A. Intuitively,\nP\nAck\ndetermines when Alice can tell others whether or not\nshe has attribute A. During a negotiation, when the attribute\nis requested, the Ack policy P\nAck\nis first sent back\nas a reply. Only when P\nAck\nis satisfied by the other party,\nwill Alice disclose whether or not she has A and may then\nask the other party to satisfy the access control policy P .\nIn order to prevent additional correlation introduced by Ack\npolicies, it is required that all entities use the same Ack policy\nto protect a given attribute regardless of whether or not\nthey have A. In [23], Winsborough and Li also formally defined\nthe safety requirements in trust negotiation based on\nAck policies.\nThough the approach of Ack policies can provide protection\nagainst unauthorized inferences, it has a significant disadvantage\n. One benefit of automated trust negotiation is\nthat it gives each entity the autonomy to determine the appropriate\nprotection for its own resources and credentials.\nThe perceived sensitivity of possessing an attribute may be\nvery different for different entities. For example, some may\nconsider the possession of a certificate showing eligibility for\nfood stamps highly sensitive, and thus would like to have a\nvery strict Ack policy for it. Some others may not care as\nmuch, and have a less strict Ack policy, because they are\nmore concerned with their ability to get services than their\nprivacy. The Ack Policy system, however, requires that all\nentities use the same Ack policy for a given attribute, which.\ndeprives entities of the autonomy to make their own decisions\n. This will inevitably be over-protective for some and\nunder-protective for others. And either situation will result\nin users preferring not to participate in the system.\nIn this paper, we first propose a general framework for safe\ninformation flow in automated trust negotiation. Compared\nwith that proposed by Winsborough and Li, our framework\nfocuses on modeling the actual information gain caused by\ninformation flow instead of the messages exchanged. Therefore\nit directly reflects the essence of safety in sensitive information\nprotection. Based on this framework, we propose\npolicy databases as a solution to the above problem. Policy\ndatabases not only prevent unauthorized inferences as described\nabove but also preserve users' autonomy in deciding\ntheir own policies. In order to do this, we focus on severing\nthe correlation between attributes and policies by introducing\nrandomness, rather than adding additional layers or\nfixed policies as in the Ack Policy system. In our approach,\nthere is a central database of policies for each possession\nsensitive attribute. 
Users who possess the attribute submit their policies to the database anonymously. Users who do not possess the attribute can then draw a policy at random from the database. The result of this process is that the distributions of policies for a given possession sensitive attribute are identical for users who have the attribute and users who do not. Thus, an opponent cannot infer whether or not users possess the attribute by looking at their policies.\nThe rest of the paper is organized as follows. In section 2, we propose a formal definition of safety for automated trust negotiation. In section 3, we discuss the specifics of our approach, including what assumptions underlie it, how well it satisfies our safety principle, both theoretically and in practical situations, and what practical concerns arise in implementing it. Closely related work is reported in section 4. We conclude this paper in section 5.\nSAFETY IN TRUST NEGOTIATION\nIn [23], Winsborough and Li put forth several definitions of safety in trust negotiation based on an underlying notion of indistinguishability. The essence of indistinguishability is that if an opponent is given the opportunity to interact with a user in two states corresponding to two different potential sets of attributes, the opponent cannot detect a difference in those sets of attributes based on the messages sent. In the definition of deterministic indistinguishability, the messages sent in the two states must be precisely the same. In the definition of probabilistic indistinguishability, they must have precisely the same distribution.\nThese definitions, however, are overly strict. To determine whether or not a given user has a credential, it is not sufficient for an opponent to know that the user acts differently depending on whether or not that user has the credential: the opponent also has to be able to figure out which behavior corresponds to having the credential and which corresponds to lacking the credential. Otherwise, the opponent has not actually gained any information about the user.\nExample 1. Suppose we have a system in which there is only one attribute and two policies, p_1 and p_2. Half of the users use p_1 when they have the attribute and p_2 when they do not. The other half of the users use p_2 when they have the attribute and p_1 when they do not. Every user's messages would be distinguishable under the definition of indistinguishability presented in [23] because for each user the distribution of messages is different. However, if a fraction r of the users have the attribute and a fraction 1 - r do not, then (1/2)r + (1/2)(1 - r) = 1/2 of the users display policy p_1 and the other half of the users display policy p_2. As such, the number of users displaying p_1 or p_2 does not change as r changes. Hence, the displayed policy and the attribute are independent. Since the policy displayed is independent of the attribute when users are viewed as a whole, seeing either policy does not reveal any information about whether or not the user in question has the attribute.\nAs such, Winsborough and Li's definitions of indistinguishability rule out a number of valid systems where a given user will act differently in the two cases, but an opponent cannot actually distinguish which case is which. In fact, their definitions allow only systems greatly similar to the Ack Policy system that they proposed in [22].
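To make the independence argument of Example 1 concrete, the following is a minimal simulation sketch (illustration only, not part of the original paper; the population size, the fraction r, and the function name are assumptions). It estimates the probability that a user holds the attribute given the policy they display, and that estimate matches the prior r, so the displayed policy reveals nothing.

# Minimal simulation sketch of Example 1 (illustration only; parameters are assumptions).
# Half the users display p1 exactly when they hold the attribute; the other half
# display p1 exactly when they do not. We estimate P(has attribute | displays p1).
import random

def posterior_given_p1(num_users=100_000, r=0.3, seed=0):
    rng = random.Random(seed)
    shows_p1 = 0
    has_attr_and_shows_p1 = 0
    for _ in range(num_users):
        has_attr = rng.random() < r       # the user holds the attribute with probability r
        first_half = rng.random() < 0.5   # which of the two conventions the user follows
        displays_p1 = (has_attr == first_half)
        if displays_p1:
            shows_p1 += 1
            has_attr_and_shows_p1 += has_attr
    return has_attr_and_shows_p1 / shows_p1

print(posterior_given_p1(r=0.3))  # approximately 0.3: the posterior equals the prior r
print(posterior_given_p1(r=0.8))  # approximately 0.8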
Instead we\npropose a definition of safety based directly on information\n37\ngain instead of the message exchange sequences between the\ntwo parties.\nBefore we formally define safety, we first discuss what\nsafety means informally. In any trust negotiation system,\nthere is some set of objects which are protected by policies\n. Usually this includes credentials, information about\nattribute possession, and sometimes even some of the policies\nin the system. All of these can be structured as digital\ninformation, and the aim of the system is to disclose that\ninformation only to appropriate parties.\nThe straight-forward part of the idea of safety is that an\nobject's value should not be revealed unless its policy has\nbeen satisfied. However, we do not want to simply avoid\nan object's value being known with complete certainty, but\nalso the value being guessed with significant likelihood.\nAs such, we can define the change in safety as the change\nin the probability of guessing the value of an object.\nIf\nthere are two secrets, s\n1\nand s\n2\n, we can define the conditional\nsafety of s\n1\nupon the disclosure of s\n2\nas the conditional\nprobability of guessing s\n1\ngiven s\n2\n. Thus, we define\nabsolute safety in a system as being the property that no\ndisclosure of objects whose policies have been satisfied results\nin any change in the probability of guessing the value\nof any object whose policy has not been satisfied regardless\nof what inferences might be possible.\nThere exists a simple system which can satisfy this level of\nsafety, which is the all-or-nothing system, a system in which\nall of every user's objects are required to be protected by\na single policy which is the same for all users. Clearly in\nsuch a system there are only two states, all objects revealed\nor no objects revealed. As such, there can be no inferences\nbetween objects which are revealed and objects which are\nnot. This system, however, has undesirable properties which\noutweigh its safety guarantees, namely the lack of autonomy,\nflexibility, and fine-grained access control. Because of the\nnecessity of protecting against every possible inference which\ncould occur, it is like that any system which achieves ideal\nsafety would be similarly inflexible.\nSince there have been no practical systems proposed which\nmeet the ideal safety condition, describing ideal safety is not\nsufficient unto itself. We wish to explore not just ideal safety,\nbut also safety relative to certain types of attacks. This will\nhelp us develop a more complete view of safety in the likely\nevent that no useful system which is ideally safe is found.\nIf a system does not have ideal safety, then there must\nbe some inferences which can cause a leakage of information\nbetween revealed objects and protected objects. But\nthis does not mean that every single object revealed leaks\ninformation about every protected object. As such, we can\npotentially describe what sort of inferences a system does\nprotect against. For example, Ack Policy systems are moti-vated\nby a desire to prevent inferences from a policy to the\npossession of the attribute that it protects. Inferences from\none attribute to another are not prevented by such a system\n(for example, users who are AARP members are more likely\nto be retired than ones who are not). Hence, it is desirable\nto describe what it means for a system to be safe relative to\ncertain types of inferences.\nNext we present a formal framework to model safety in\ntrust negotiation. 
The formalism which we are using in this paper is based on that used by Winsborough and Li, but is substantially revised.\n2.0.1 Trust Negotiation Systems\nA Trust Negotiation System is comprised of the following elements:\nA finite set, K, of principals, uniquely identified by a randomly chosen public key, Pub_k. Each principal knows the associated private key, and can produce a proof of identity.\nA finite set, T, of attributes. An attribute is something which each user either possesses or lacks. An example would be status as a licensed driver or enrollment at a university.\nA set, G, of configurations, each of which is a subset of T. If a principal k is in a configuration g ∈ G, then k possesses the attributes in g and no other attributes.\nA set, P, of possible policies, each of which is a logical proposition comprised of a combination of and, or, and attributes in T. We define an attribute in a policy to be true with respect to a principal k if k has that attribute. We consider all logically equivalent policies to be the same policy.\nObjects. Every principal k has objects which may be protected, which include the following:\n- A set, S, of services provided by a principal. Every principal offers some set of services to all other principals. These services are each protected by some policy, as we will describe later. A simple service which each principal offers is a proof of attribute possession. If another principal satisfies the appropriate policy, the principal will offer some proof that he holds the attribute. This service is denoted s_t for any attribute t ∈ T.\n- A set, A, of attribute status objects. Since the set of all attributes is already known, we want to protect the information about whether or not a given user has an attribute. As such we formally define A as a set of boolean-valued random variables, a_t. The value of a_t for a principal k, which we denote a_t(k), is defined to be true if k possesses t ∈ T and false otherwise. Thus A = {a_t | t ∈ T}.\n- A set, Q, of policy mapping objects. A system may desire to protect an object's policy either because of correlations between policies and sensitive attributes or because in some systems the policies themselves may be considered sensitive. Similar to attribute status objects, we do not protect a policy, per se, but instead the pairing of a policy with what it is protecting. As such, each policy mapping object is a random variable q_o with range P, where o is an object. The value of q_o for a given principal k, denoted q_o(k), is the policy that k has chosen to protect object o.\nEvery system should define which objects are protected. It is expected that all systems should protect the services, S, and the attribute status objects, A. In some systems, there will also be policies which protect policies. Thus protected objects may also include a subset of Q. We call the set of protected objects O, where O ⊆ S ∪ A ∪ Q. If an object is not protected, this is equivalent to it having a policy equal to true.\nFor convenience, we define Q_X to be the members of Q which are policies protecting members of X, where X is a set of objects. Formally, Q_X = {q_o ∈ Q | o ∈ X}.\nSome subset of the information objects are considered to be sensitive objects. These are the objects about which we want an opponent to gain no information unless they have satisfied that object's policy.
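The following is a minimal data-model sketch of the objects just defined (illustration only, not code from the paper; the class and field names, and the use of Python, are assumptions). It encodes a principal's configuration g ⊆ T, the attribute status objects a_t(k), and the policy mapping q_o(k), with unprotected objects defaulting to the policy true.

# Illustrative sketch of the formal model of Section 2 (names are assumptions).
from dataclasses import dataclass
from typing import Dict, FrozenSet

Attribute = str   # an element of T
Policy = str      # an element of P, e.g. "student and (faculty or staff)"

@dataclass
class Principal:
    public_key: str                  # Pub_k, identifies the principal
    config: FrozenSet[Attribute]     # g in G: the attributes the principal possesses
    policies: Dict[str, Policy]      # q_o(k): object name -> protecting policy

    def a(self, t: Attribute) -> bool:
        """Attribute status object a_t(k): true iff the principal possesses t."""
        return t in self.config

    def policy_of(self, obj: str) -> Policy:
        """q_o(k); an unprotected object is equivalent to having the policy 'true'."""
        return self.policies.get(obj, "true")

# Example: the proof service s_t and the status object a_t for a hypothetical attribute.
alice = Principal(
    public_key="pk_alice",
    config=frozenset({"cia_employee"}),
    policies={"s_cia_employee": "government_clearance",
              "a_cia_employee": "government_clearance"},
)
print(alice.a("cia_employee"), alice.policy_of("s_cia_employee"))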
Full information about any object, sensitive or insensitive, is not released by the system until its policy has been satisfied, but it is acceptable for inferences to cause the leakage of information which is not considered sensitive.\nA set, N, of negotiation strategies. A negotiation strategy is the means that a principal uses to interact with other principals. Established strategies include the eager strategy [24] and the trust-target graph strategy [22]. A negotiation strategy, n, is defined as an interactive, deterministic, Turing-equivalent computational machine augmented by a random tape. The random tape serves as a random oracle which allows us to discuss randomized strategies.\nA negotiation strategy takes as initial input the public knowledge needed to operate in a system, the principal's attributes, its services, and the policies which protect its objects. It then produces additional inputs and outputs by interacting with other strategies. It can output policies, credentials, and any additional information which is useful. We do not further define the specifics of the information communicated between strategies except to note that all the strategies in a system should have compatible input and output protocols. We refrain from further specifics of strategies since they are not required in our later discussion.\nAn adversary, M, is defined as a set of principals coordinating to discover the value of sensitive information objects belonging to some k ∉ M. Preventing this discovery is the security goal of a trust negotiation system. We assume that adversaries may only interact with principals through trust negotiation and are limited to proving possession of attributes which they actually possess. In other words, the trust negotiation system provides a means of proof which is resistant to attempts at forgery.\nA set, I, of all inferences. Each inference is a minimal subset of information objects such that the joint distribution of the set differs from the product of the individual distributions of the items in the set. (A system need not define the particulars of inferences, but it should discuss what sort of inferences it can deal with, and hence what sort of inferences are assumed to exist.)\nThese then allow a partitioning, C, of the information objects into inference components. We define a relation ~ such that o_1 ~ o_2 iff there exists i ∈ I with o_1, o_2 ∈ i. C is the partition induced by the transitive closure of ~.\nIn general, we assume that all of the information objects in our framework are static. We do not model changes in a principal's attribute status or policies. If such modeling is necessary, the model would need to be adapted.\nIt should also be noted that there is an additional constraint on policies that protect policies which we have not described. This is because in most systems there is a way to gain information about what a policy is, which is to satisfy it. When a policy is satisfied, this generally results in some service being rendered or information being released. As such, this will let the other party know that they have satisfied the policy for that object. Therefore, the effective policy protecting a policy mapping object must be the logical or of the policy in the policy mapping object and the policy which protects it.\n2.0.2 The Ack Policy System\nTo help illustrate the model, let us describe how the Ack Policy system maps onto the model.
The mapping of opponents and of the sets of principals, attributes, configurations, and policies in the Ack Policy system is straightforward.\nIn an Ack Policy system, any mutually compatible set of negotiation strategies is acceptable. There are policies for protecting services, protecting attribute status objects, and protecting the policies which protect attribute proving services. As such, the set of protected objects is O = S ∪ A ∪ Q_S.\nAccording to the definition of the Ack Policy system, for a given attribute, the policy that protects the proof service for that attribute is protected by the same policy that protects the attribute status object. Formally, ∀t ∈ T, ∀k ∈ K, q_{a_t}(k) = q_{q_{s_t}}(k). Further, the Ack policy for an attribute is required to be the same for all principals. Thus we know that ∀t ∈ T ∃p ∈ P such that ∀k ∈ K, q_{a_t}(k) = p.\nTwo basic assumptions about the set of inferences, I, exist in Ack Policy systems, which also lead us to conclusions about the inference components, C. It is assumed that inferences between the policy which protects the attribute proving service, q_{s_t}(k), and the attribute status object, a_t(k), exist. As such, those two objects should always be in the same inference component. Because Ack policies are uniform for all principals, they are uncorrelated with any other information object and they cannot be part of any inference. Hence, each Ack policy is in an inference component of its own.\n2.0.3 Safety in Trust Negotiation Systems\nIn order to formally define safety in trust negotiation, we need to define the specifics of the opponent. We need to model the potential capabilities of an opponent and the information initially available to the opponent. Obviously, no system is safe against an opponent with unlimited capabilities or unlimited knowledge.\nAs such, we restrict the opponent to having some tactic for forming trust negotiation messages, processing responses to those messages, and, finally, forming a guess about the values of unrevealed information objects. We model the tactic as an interactive, deterministic, Turing-equivalent computational machine. This is a very powerful model, and we argue that it describes any reasonable opponent. It does, however, restrict the opponent to calculating things which are computable from its input and implies that the opponent behaves in a deterministic fashion.\nThe input available to the machine at the start is the knowledge available to the opponent before any trust negotiation has taken place. What this knowledge is varies depending on the particulars of a trust negotiation system. However, in every system it should include the knowledge available to the principals who are part of the opponent, such as their public and private keys and their credentials. It should also include public information such as how the system works, the public keys of the attribute authorities, and other information that every user knows. In most systems, information about the distribution of attributes and credentials and knowledge of inference rules should also be considered public information. All responses from principals in different configurations become available as input to the tactic as they are made.
The tactic\nmust output both a sequence of responses and, at the end,\nguesses about the unknown objects of all users.\nWe observe that an opponent will have probabilistic knowledge\nabout information objects in a system. Initially, the\nprobabilities will be based only on publicly available knowl-39\nedge, so we can use the publicly available knowledge to describe\nthe a priori probabilities.\nFor instance, in most systems, it would be reasonable to\nassume that the opponent will have knowledge of the odds\nthat any particular member of the population has a given attribute\n. Thus, if a fraction h\nt\nof the population is expected\nto possess attribute t T , the opponent should begin with\nan assumption that some given principal has a h\nt\nchance of\nhaving attribute t. Hence, h\nt\nrepresents the a priori probability\nof any given principal possessing t. Note that we\nassume that the opponent only knows the odds of a given\nprincipal having an attribute, but does not know for certain\nthat a fixed percentage of the users have a given attribute.\nAs such, knowledge about the value of an object belonging\nto some set of users does not imply any knowledge about\nthe value of objects belonging to some other user.\nDefinition 1. A trust negotiation system is safe relative\nto a set of possible inferences if for all allowed mappings between\nprincipals and configurations there exists no opponent\nwhich can guess the value of sensitive information objects\nwhose security policies have not been met with odds better\nthan the a priori odds over all principals which are not in\nthe opponent, over all values of all random tapes, and over\nall mappings between public key values and principals.\nDefinition 1 differs from Winsborough and Li's definitions\nin several ways. The first is that it is concerned with multiple\nusers. It both requires that the opponent form guesses\nover all users and allows the opponent to interact with all\nusers. Instead of simply having a sequence of messages sent\nto a single principal, the tactic we have defined may interact\nwith a variety of users, analyzing incoming messages, and\nthen use them to form new messages. It is allowed to talk\nto the users in any order and to interleave communications\nwith multiple users, thus it is more general than those in [23].\nThe second is that we are concerned only with the information\nwhich the opponent can glean from the communication,\nnot the distribution of the communication itself. As such,\nour definition more clearly reflects the fundamental idea of\nsafety.\nWe next introduce a theorem which will be helpful in proving\nthe safety of systems.\nTheorem 1. There exists no opponent which can beat the\na priori odds of guessing the value of an object, o, given\nonly information about objects which are not in the same\ninference component as o, over all principals not in M and\nwhose policy for o M cannot satisfy, over all random tapes,\nand over all mappings between public keys and principals.\nThe formal proof for this theorem can be found in Appendix\nA. Intuitively, since the opponent only gains information\nabout objects not correlated to o, its guess of the\nvalue of o is not affected.\nWith theorem 1, let us take a brief moment to prove\nthe safety of the Ack Policy systems under our framework.\nSpecifically, we examine Ack Policy systems in which the\ndistribution of strategies is independent of the distributions\nof attributes, an assumption implicitly made in [23]. 
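Before walking through that argument, here is a small numerical sketch of the intuition behind Theorem 1 (illustration only, not the formal proof in Appendix A; the population size, the odds h, and the function name are assumptions): when everything the opponent observes is independent of the object o, no deterministic guessing rule beats the a priori odds.

# Numerical sketch of the intuition behind Theorem 1 (illustration only).
# The hidden object o has a priori odds h, and the observed information is
# independent of o, so no deterministic guessing rule does better than h.
import random

def best_rule_rate(num_principals=200_000, h=0.7, seed=1):
    rng = random.Random(seed)
    data = [(rng.random() < h, rng.random() < 0.5) for _ in range(num_principals)]
    # a deterministic tactic maps what it observed (here a single bit) to a guess
    rules = [lambda obs: True, lambda obs: False, lambda obs: obs, lambda obs: not obs]
    return max(sum(rule(obs) == o for o, obs in data) / num_principals for rule in rules)

print(best_rule_rate())  # approximately 0.7, i.e. the a priori odds h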
In Ack\nPolicy systems the Ack Policy is a policy which protects\ntwo objects in our model: an attribute's status object and\nits policy for that attribute's proof service. Ack Policies are\nrequired to be uniform for all users, which ensures that they\nare independent of all objects.\nAck Policy systems are designed to prevent inferences\nfrom an attribute's policy to an attribute's status for attributes\nwhich are sensitive. So, let us assume an appropriate\nset of inference components in order to prove that Ack\nPolicy systems are safe relative to that form of inference.\nAs we described earlier, each attribute status object should\nbe in the same inference component with the policy which\nprotects that attribute's proof service, and the Ack policy\nfor each attribute should be in its own inference component.\nThe Ack Policy system also assumes that different attributes\nare independent of each other. As such, each attribute status\nobject should be in a different inference group.\nThis set of inference components excludes all other possible\ntypes of inferences. The set of sensitive objects is the\nset of attribute status objects whose value is true. Due to\nTheorem 1, we know then that no opponent will be able to\ngain any information based on objects in different inference\ncomponents. So the only potential source of inference for\nwhether or not a given attribute's status object, a\nt\n, has a\nvalue of true or f alse is the policy protecting the attribute\nproof service, s\nt\n.\nHowever, we know that the same policy, P , protects both\nof these objects. As such unauthorized inference between\nthem is impossible without satisfying P .\n2\nThus, the odds\nfor a\nt\ndo not change. Therefore, the Ack Policy system is\nsecure against inferences from an attribute's policy to its\nattribute status.\nPOLICY DATABASE\nWe propose a new trust negotiation system designed to\nbe safe under the definition we proposed, but to also allow\nthe users who have sensitive attributes complete freedom to\ndetermine their own policies. It also does not rely on any\nparticular strategy being used. Potentially, a combination of\nstrategies could even be used so long as the strategy chosen\nis not in any way correlated to the attributes possessed.\nThis system is based on the observation that there is more\nthan one way to deal with a correlation. A simple ideal system\nwhich prevents the inference from policies to attribute\npossession information is to have each user draw a random\npolicy. This system obviously does not allow users the freedom\nto create their own policies. Instead we propose a system\nwhich looks like the policies are random even though\nthey are not.\nThis system is similar to the existing trust negotiation\nsystems except for the addition of a new element: the policy\ndatabase. The policy database is a database run by a trusted\nthird party which collects anonymized information about the\npolicies which are in use. In the policy database system, a\n2\nExcept that one of these is a policy mapping object which\nis being protected by a policy. As such, we have to keep\nin mind that there exists a possibility that the opponent\ncould gain information about the policy without satisfying\nit. Specifically, the opponent can figure out what attributes\ndo not satisfy it by proving that he possesses those\nattributes. However, in an Ack Policy system, the policy\nprotecting an attribute proof object of an attribute which a\nuser does not hold is always f alse. 
No opponent can distinguish\nbetween two policies which they cannot satisfy since\nall they know is that they have failed to satisfy them. And\nwe are unconcerned with policies which they have satisfied.\nThus, we know that the opponent cannot gain any useful information\nabout the policies which they have not satisfied,\nand hence cannot beat the a priori odds for those policies.\n40\nuser who has a given sensitive attribute chooses their own\npolicy and submits it anonymously to the policy database\nfor that attribute. The policy database uses pseudonymous\ncertificates to verify that users who submit policies actually\nhave the attribute, in a manner that will be discussed later\nin section 3.2. Then users who do not have the attribute\nwill pull policies at random from the database to use as\ntheir own. The contents of the policy database are public,\nso any user who wishes to can draw a random policy from\nthe database.\nIn our system, each user uses a single policy to protect all\nthe information objects associated with an attribute. They\nneither acknowledge that they have the attribute nor prove\nthat they do until the policy has been satisfied. This means\nthat users are allowed to have policies which protect attributes\nwhich they do not hold. The policy in our system\nmay be seen as the combination of the Ack policy and a\ntraditional access control policy for attribute proofs.\nThe goal of this system is to ensure that the policy is in\na separate inference component from the attribute status\nobject, thus guaranteeing that inferences between policies\nand attribute status objects cannot be made.\nThis system is workable because of the following.\nWe\nknow that policies cannot require a lack of an attribute, thus\nusers who do not have a given attribute will never suffer from\ntheir policy for that attribute being too strong. Changes\nin the policy which protects an attribute that they do not\nhave may vary the length of the trust negotiation, but it\nwill never cause them to be unable to complete transactions\nwhich they would otherwise be able to complete. Also, we\ndeal only with possession sensitive attributes. We do not\ndeal with attributes where it is at all sensitive to lack them.\nAs such, users who do not have the attribute cannot have\ntheir policies be too weak. Since there is no penalty for those\nusers for their policies being either too weak or too strong,\nthey can have whatever policy is most helpful for helping\ndisguise the users who do possess the attribute.\nThis also means that users who do not have the attribute\ndo not need to trust the policy database since no policy\nwhich the database gives them would be unacceptable to\nthem. Users who have the attribute, however, do need to\ntrust that the policy database will actually randomly distribute\npolicies to help camouflage their policies. They do\nnot, however, need to trust the policy database to act appropriately\nwith their sensitive information because all information\nis anonymized.\n3.1\nSafety of the Approach of Policy Databases\nLet us describe the Policy Database system in terms of\nour model. Again the opponent and the sets of principals,\nattributes, configurations, and policies need no special comment\n.\nBecause we only have policies protecting the services\nand attribute status objects, the set of protected objects\n, O = S A. 
Also, each attribute proving service and\nattribute status object are protected by the same policy.\nt T , k K, q\na\nt\n(k) = q\ns\nt\n(k).\nThis system is only designed to deal with inferences from\npolicies to attribute possession, so we assume that every attribute\nstatus object is in a different inference component.\nIf the policies do actually appear to be completely random,\nthen policies and attribute status objects should be in separate\ninference components as well.\nThe obvious question is whether Policy Database systems\nactually guarantee that this occurs. The answer is that they\ndo not guarantee it with any finite number of users due to\nthe distribution of policies being unlikely to be absolutely,\nprecisely the same. This is largely due to a combination of\nrounding issues and the odds being against things coming\nout precisely evenly distributed. However, as the number\nof users in the system approaches infinity, the system approaches\nthis condition.\nIn an ideal system, the distribution of policies would be\ncompletely random. If an opponent observes that some number\nof principals had a given policy for some attribute, this\nwould give them no information about whether or not any\nof those users had the attribute. However, in the Policy\nDatabase system, every policy which is held is known to be\nheld by at least one user who has the attribute. As such, we\nneed to worry about how even the distributions of different\npolicies are.\nWe can describe and quantify the difference which exists\nbetween a real implementation of our system and the ideal.\nThere are two reasons for a difference to exist. The first is\ndifference due to distributions being discrete. For example,\nlet us say that there are five users in our system, two of\nwhich have some attribute and three who do not. Let us\nalso say that the two users with the attribute each have\ndifferent policies. For the distributions to be identical, each\nof those policies would need to be selected by one and a\nhalf of the remaining three users. This, obviously, cannot\nhappen. We refer to this difference as rounding error.\nThe second is difference due to the natural unevenness of\nrandom selection.\nThe distributions tend towards evenness\nas the number of samples increases, but with any finite\nnumber of users, the distributions are quite likely to vary\nsome from the ideal.\nThese differences can both be quantified the same way:\nas a difference between the expected number of principals\nwho have a policy and the actual number. If the opponent\nknows that one half of the principals have an attribute and\none half do not, and they observe that among four users,\nthere are two policies, one of which is held by three users\nand the other by one user, then they can know that the user\nwith the unique policy holds the attribute. In general, any\ntime the number of users who share a policy is less than the\nexpectation, it is more likely that a user who has that policy\nalso has the attribute. Information is leaked when there is\na difference between the expected number of principals who\nhave a policy and the actual number of principals who have\nthat policy in proportion to the ratio between them.\nTheorem 2. The limit of the difference between the expected\nnumber of principals who have a policy and the actual\nnumber of principals who have the policy as the number of\nusers goes to infinity is 0.\nThe proof of Theorem 2 can be found in Appendix B. 
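A quick numerical sketch of what Theorem 2 asserts is the following (illustration only, not from the paper; the values of n, h, and f_l are arbitrary assumptions): the relative gap between the expected and the actual number of users displaying a given policy shrinks as the population grows. The intuition is spelled out in the next paragraph.

# Illustration of Theorem 2: the relative difference between the expected and the
# actual number of users displaying a given policy l shrinks as n grows. Here h is
# the fraction holding the attribute and f_l the fraction of holders who chose l.
import random

def relative_gap(n, h=0.3, f_l=0.25, seed=2):
    rng = random.Random(seed)
    holders = int(n * h)
    holders_with_l = round(holders * f_l)                                 # holders who submitted l
    drawers_with_l = sum(rng.random() < f_l for _ in range(n - holders))  # non-holders who drew l
    expected = n * f_l                                                    # ideal, fully independent count
    return abs((holders_with_l + drawers_with_l) - expected) / expected

for n in (100, 10_000, 1_000_000):
    print(n, round(relative_gap(n), 4))   # the gap shrinks, roughly like 1/sqrt(n)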
The\nintuition behind it is that as the number of samples grows\nvery large, the actual distribution approaches the ideal distribution\nand the rounding errors shrink towards zero.\n3.2\nAttacks and Countermeasures\nUntil now, we have only proven things about a system\nwhich is assumed to be in some instantaneous unchanging\nstate. In the real world we have to deal with issues related\nto how policies change over time and multiple interactions.\n41\nTherefore, we also want the policy which a given user randomly\nselects from the database to be persistent. Otherwise\nan adversary would simply make multiple requests to the\nsame user over time and see if the policy changed. If it did,\nespecially if it changed erratically, it would indicate that the\nuser was repeatedly drawing random policies. Instead, the\nuser should hold some value which designates which policy\nthe user has.\nAn obvious answer would be to have the user hold onto\nthe policy itself, but this would open the user up to a new attack\n. If users lacking a given attribute simply grabbed onto\na policy and never changed it, this itself could be a tell. If\nthere were some event which occurred which made having a\ngiven attribute suddenly more sensitive than it used to be,\nthen rational users who have the attribute would increase\nthe stringency of their policies.\nFor example, if a country\nundertook an action which was politically unpopular on\na global scale, holders of passports issued by that country\nwould likely consider that more sensitive information now\nand would increase their policies appropriately. The result\nwould then be that the average policy for people who had\ncached a previously fetched policy would then be less stringent\nthan those who were making their own policies.\nInstead of a permanent policy, it would be more sensible\nfor a principal to receive a cookie which could get it the\npolicy from a particular principal so that when principals\nwho posses the attribute changed their policies, principals\nwho do not possess it would too.\nWe also need to guard against stacking the deck. Obviously\nwe can restrict the database to users who actually have\nthe attribute by requiring the presentation of a pseudonymous\ncertificate [6, 7, 8, 9, 10, 18] which proves that they\nhave the attribute. However, we also need to assure that a\nlegitimate attribute holder cannot submit multiple policies\nin order to skew the set of policies. To this end, we require\nthat each policy be submitted initially with a one-time-show\npseudonymous credential [8]. The attribute authorities can\nbe restricted so that they will only issue each user a single\none-time-show pseudonymous credential for each Policy\nDatabase use. Then we can accept the policy, knowing it to\ncome from a unique user who holds the attribute, and issue\nthem a secret key which they can later use to verify that\nthey were the submitter of a given policy and to replace it\nwith an updated policy.\nThis does not prevent a user who has the attribute from\nsubmitting a single false policy, perhaps one which is distinctly\ndifferent from normal policies. The result would be\nthat users who draw that policy would be known to not have\nthe attribute. However, under the assumptions of our system\n, not having the attribute is not sensitive, so this does\nnot compromise safety.\n3.3\nLimitations\nWe assume that for the attribute being protected, it is not\nsignificantly sensitive to lack the attribute. 
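To summarize the mechanics just described, the following is a minimal sketch of a policy database interface (illustration only; the class and method names are assumptions, and the one-time-show pseudonymous credential check is reduced to a placeholder, whereas the real construction is the cryptographic one in [8]).

# Minimal sketch of a policy database (illustration only; names are assumptions).
import random
import secrets

class PolicyDatabase:
    def __init__(self):
        self.policies = {}            # update token -> submitted policy
        self.used_credentials = set() # one-time-show credentials already presented

    def submit(self, one_time_credential, policy):
        """Accept one policy per unique attribute holder; return a token for later updates."""
        if one_time_credential in self.used_credentials:
            raise ValueError("credential already shown")
        self.used_credentials.add(one_time_credential)
        token = secrets.token_hex(16)
        self.policies[token] = policy
        return token

    def update(self, token, new_policy):
        self.policies[token] = new_policy

    def draw(self):
        """Users lacking the attribute draw a policy uniformly at random (contents are public)."""
        return random.choice(list(self.policies.values()))

db = PolicyDatabase()
tok = db.submit("cred-123", "member_of_partner_org and age_over_18")
print(db.draw())
db.update(tok, "security_clearance")   # holders can later replace their policy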
This assumption\nmeans that our system likely cannot be used in practice to\nprotect all attributes. Most notably it fails when lacking an\nattribute implies having or being highly likely to have some\nother attribute. For example, not having a valid passport\nprobably means that you are a permanent resident of the\ncountry you are currently in (although users could be an\nillegal immigrants or citizens of a defunct nation).\nIt also fails when the lack of an attribute is more sensitive\nthan having it. For instance, few people are going to wish\nto prevent people from knowing that they have graduated\nfrom high school, but many would consider their lack of a\nhigh school graduation attribute to be sensitive. However,\nwe argue that no system can adequately handle such a case\nbecause those who do have the attribute would likely be\nunwilling to accept any system which would result in them\nhaving to not disclose the attribute when it was useful for\nthem to do so. And if they still easily disclose their attribute,\nthen it becomes impossible for those without to disguise\ntheir lack.\nSimilarly to the Ack Policy system, policy databases also\ndo not generally handle any form of probabilistic inference\nrule between attributes. The existence of such a rule would\nlikely imply certain relationships between policies which most\nusers would enforce. If the possession of a city library card\nsuggested with strong probability that the user was a city\nresident, then perhaps all users who have both would have\na policy protecting their library card which is stricter than\nthe policy protecting their city residency. However, as there\nis variety in the policies of individuals, a user could pick a\nrandom pair of policies which did not have this property.\nThat would then be a sure tell that he did not actually have\nboth of those attributes.\nAnother drawback of the system is that it requires a policy\ndatabase service be available on-line.\nThis decreases\nthe decentralized nature of trust negotiation. However, our\napproach is still less centralized than Ack Policies, which\nrequire that users cooperate to determine a universally accepted\nAck policy. And this centralization may be able to be\ndecreased by decentralizing the database itself. Although we\ndiscuss the database as if it were a single monolithic entity,\nit could be made of a number of different entities acting\ntogether. The only requirement is that it accepts policies\nfrom unique users who have the attribute and distributes\nthem randomly.\nRELATED WORK\nThe framework of automated trust negotiation was first\nproposed by Winsborough et al.\n[24].\nSince then, great\nefforts have been put forward to address challenges in a variety\nof aspects of trust negotiation.\nAn introduction to\ntrust negotiation and related trust management issues can\nbe found in [25]. As described in detail there, a number of\ntrust negotiation systems and supporting middleware have\nbeen proposed and/or implemented in a variety of contexts\n(e.g., [3, 4, 11, 12, 14, 17, 19]). Information leakage during\ntrust negotiation is studied in [13, 5, 15, 20, 21, 22,\n23]. The work by Winsborough and Li has been discussed\nin detail in previous sections. Next, we discuss several other\napproaches.\nIn [20], non-response is proposed as a way to protect\npossession-sensitive attributes.\nThe basic idea is to have\nAlice, the owner of a sensitive attribute, act as if she does\nnot have the attribute. 
Only later when the other party\naccidentally satisfies her policy for that attribute will Alice\ndisclose that attribute. This approach is easy to deploy in\ntrust negotiation. But clearly it will often cause a potentially\nsuccessful negotiation to fail because of Alice's conservative\nresponse.\nYu and Winslett [26] introduce a technique called policy\nmigration to mitigate the problem of unauthorized inference.\nIn policy migration, Alice dynamically integrates her poli-42\ncies for sensitive attributes with those of other attributes, so\nthat she does not need to explicitly disclose policies for sensitive\nattributes. Meanwhile, policy migration makes sure\nthat \"migrated\" policies are logically equivalent to original\npolicies, and thus guarantees the success of the negotiation\nwhenever possible. On the other hand, policy migration is\nnot a universal solution, in the sense that it may not be applicable\nto all the possible configurations of a negotiation.\nFurther, it is subject to a variety of attacks. In other words,\nit only seeks to make unauthorized inference harder instead\nof preventing it completely.\nMost existing trust negotiation frameworks [16, 17, 28]\nassume that the appropriate access control policies can be\nshown to Bob when he requests access to Alice's resource.\nHowever, realistic access control policies also tend to contain\nsensitive information, because the details of Alice's policy\nfor the disclosure of a credential C tends to give hints about\nC's contents. More generally, a company's internal and external\npolicies are part of its corporate assets, and it will\nnot wish to indiscriminately broadcast its policies in their\nentirety. Several schemes have been proposed to protect the\ndisclosure of sensitive policies. In [4], Bonatti and Samarati\nsuggests dividing a policy into two parts prerequisite rules\nand requisite rules. The constraints in a requisite rule will\nnot be disclosed until those in prerequisite rules are satisfied.\nIn [19], Seamons et al. proposed organizing a policy into a\ndirected graph so that constraints in a policy can be disclosed\ngradually. In [26], access control policies are treated\nas first-class resources, thus can be protected in the same\nmanner as services and credentials.\nRecently, much work has been done on mutual authentication\nand authorization through the use of cryptographic\ntechniques that offer improved privacy guarantees. For example\n, Balfanz et al. [1] designed a secret-handshake scheme\nwhere two parties reveal their memberships in a group to\neach other if and only if they belong to the same group. Li\net al. [15] proposed a mutual signature verification scheme\nto solve the problem of cyclic policy interdependency in trust\nnegotiation. Under their scheme, Alice can see the content\nof Bob's credential signed by a certification authority CA\nonly if she herself has a valid certificate also signed by CA\nand containing the content she sent to Bob earlier. A similar\nidea was independently explored by researchers [5, 13]\nto handle more complex access control policies. Note that\napproaches based on cryptographic techniques usually impose\nmore constraints on access control policies. Therefore,\npolicy databases are complementary to the above work.\nCONCLUSION AND FUTURE WORK\nIn this paper, we have proposed a general framework for\nsafety in automated trust negotiation. The framework is\nbased strictly on information gain, instead of on communication\n. 
It thus more directly reflects the essence of safe information\nflow in trust negotiation. We have also shown that\nAck policy systems are safe under our framework. Based\non the framework, we have presented policy databases, a\nnew, safe trust negotiation system. Compared with existing\nsystems, policy databases do not introduce extra layers of\npolicies or other complications to the negotiation between\nusers. Further, policy databases preserve user's autonomy\nin defining their own policies instead of imposing uniform\npolicies across all users. Therefore they are more flexible\nand easier to deploy than other systems.\nFurther, we have discussed a number of practical issues\nwhich would be involved in implementing our system. In\nthe future, we plan to address how our system can be used\nin the presence of delegated credentials. And we plan to\nattempt to broaden the system to account for probabilistic\ninferences rules which are publicly known.\nAcknowledgments This research was sponsored by NSF\nthrough IIS CyberTrust grant number 0430166 (NCSU). We\nalso thank anonymous reviewers for their helpful comments.\nREFERENCES\n[1] D. Balfanz, G. Durfee, N. Shankar, D. Smetters, J. Staddon,\nand H. Wong. Secret Handshakes from Pairing-Based Key\nAgreements. In IEEE Symposium on Security and Privacy,\nBerkeley, CA, May 2003.\n[2] M. Blaze, J. Feigenbaum, J. Ioannidis, and A. Keromytis. The\nKeyNote Trust Management System Version 2. In Internet\nDraft RFC 2704, September 1999.\n[3] M. Blaze, J. Feigenbaum, and A. D. Keromytis. KeyNote:\nTrust Management for Public-Key Infrastructures. In Security\nProtocols Workshop, Cambridge, UK, 1998.\n[4] P. Bonatti and P. Samarati. Regulating Service Access and\nInformation Release on the Web. In Conference on Computer\nand Communications Security, Athens, November 2000.\n[5] R.W. Bradshaw, J.E. Holt, and K.E. Seamons. Concealing\nComplex Policies in Hidden Credentials. In ACM Conference\non Computer and Communications Security, Washington,\nDC, October 2004.\n[6] S. Brands. Rethinking Public Key Infrastructures and Digital\nCertificates: Building in Privacy. The MIT Press, 2000.\n[7] J. Camenisch and E.V. Herreweghen. Design and\nImplementation of the Idemix Anonymous Credential System.\nIn ACM Conference on Computer and Communications\nSecurity, Washington D.C., November 2002.\n[8] J. Camenisch and A. Lysyanskaya. Efficient Non-Transferable\nAnonymous Multi-Show Credential System with Optional\nAnonymity Revocation. In EUROCRYPT 2001, volume 2045\nof Lecture Notes in Computer Science. Springer, 2001.\n[9] D. Chaum. Security without Identification: Transactions\nSystems to Make Big Brother Obsolete. Communications of\nthe ACM, 24(2), 1985.\n[10] I.B. Damg\nard. Payment Systems and Credential Mechanism\nwith Provable Security Against Abuse by Individuals. In\nCRYPTO'88, volume 403 of Lecture Notes in Computer\nScience. Springer, 1990.\n[11] A. Herzberg, J. Mihaeli, Y. Mass, D. Naor, and Y. Ravid.\nAccess Control Meets Public Key Infrastructure, Or: Assigning\nRoles to Strangers. In IEEE Symposium on Security and\nPrivacy, Oakland, CA, May 2000.\n[12] A. Hess, J. Jacobson, H. Mills, R. Wamsley, K. Seamons, and\nB. Smith. Advanced Client/Server Authentication in TLS. In\nNetwork and Distributed System Security Symposium, San\nDiego, CA, February 2002.\n[13] J. Holt, R. bradshaw, K.E. Seamons, and H. Orman. Hidden\nCredentials. In ACM Workshop on Privacy in the Electronic\nSociety, Washington, DC, October 2003.\n[14] W. Johnson, S. Mudumbai, and M. 
Thompson. Authorization\nand Attribute Certificates for Widely Distributed Access\nControl. In IEEE International Workshop on Enabling\nTechnologies: Infrastructure for Collaborative Enterprises,\n1998.\n[15] N. Li, W. Du, and D. Boneh. Oblivious Signature-Based\nEnvelope. In ACM Symposium on Principles of Distributed\nComputing, New York City, NY, July 2003.\n[16] N. Li, J.C. Mitchell, and W. Winsborough. Design of a\nRole-based Trust-management Framework. In IEEE\nSymposium on Security and Privacy, Berkeley, California,\nMay 2002.\n[17] N. Li, W. Winsborough, and J.C. Mitchell. Distributed\nCredential Chain Discovery in Trust Management. Journal of\nComputer Security, 11(1), February 2003.\n[18] A. Lysyanskaya, R. Rivest, A. Sahai, and S. Wolf. Pseudonym\nSystems. In Selected Areas in Cryptography, 1999, volume\n1758 of Lecture Notes in Computer Science. Springer, 2000.\n[19] K. Seamons, M. Winslett, and T. Yu. Limiting the Disclosure\nof Access Control Policies during Automated Trust\n43\nNegotiation. In Network and Distributed System Security\nSymposium, San Diego, CA, February 2001.\n[20] K. Seamons, M. Winslett, T. Yu, L. Yu, and R. Jarvis.\nProtecting Privacy during On-line Trust Negotiation. In 2nd\nWorkshop on Privacy Enhancing Technologies, San Francisco,\nCA, April 2002.\n[21] W. Winsborough and N. Li. Protecting Sensitive Attributes in\nAutomated Trust Negotiation. In ACM Workshop on Privacy\nin the Electronic Society, Washington, DC, November 2002.\n[22] W. Winsborough and N. Li. Towards Practical Automated\nTrust Negotiation. In 3rd International Workshop on Policies\nfor Distributed Systems and Networks, Monterey, California,\nJune 2002.\n[23] W. Winsborough and N. Li. Safety in Automated Trust\nNegotiation. In IEEE Symposium on Security and Privacy,\nOakland, CA, May 2004.\n[24] W. Winsborough, K. Seamons, and V. Jones. Automated Trust\nNegotiation. In DARPA Information Survivability Conference\nand Exposition, Hilton Head Island, SC, January 2000.\n[25] M. Winslett, T. Yu, K.E. Seamons, A. Hess, J. Jarvis,\nB. Smith, and L. Yu. Negotiating Trust on the Web. IEEE\nInternet Computing, special issue on trust management, 6(6),\nNovember 2002.\n[26] T. Yu and M. Winslett. A Unified Scheme for Resource\nProtection in Automated Trust Negotiation. In IEEE\nSymposium on Security and Privacy, Oakland, CA, May 2003.\n[27] T. Yu and M. Winslett. Policy Migration for Sensitive\nCredentials in Trust Negotiation. In ACM Workshop on\nPrivacy in the Electronic Society, Washington, DC, October\n2003.\n[28] T. Yu, M. Winslett, and K. Seamons. Supporting Structured\nCredentials and Sensitive Policies through Interoperable\nStrategies in Automated Trust Negotiation. ACM Transactions\non Information and System Security, 6(1), February 2003.\nAPPENDIX\nA.\nPROOF OF THEOREM 1\nOur goal is to prove the following theorem:\nThere exists no opponent which can beat the a\npriori odds of guessing the value of an object, o\ngiven only information about objects which are\nnot in the same inference component as o, over\nall principals not in M and whose policy for o M\ncannot satisfy, over all random tapes, and over all\nmappings of public key values to principals.\nNow it follows that if the opponent can beat the a priori\nodds of guessing the value of an object, o, then the opponent\ncan beat the a priori odds of guessing the parity of o. Hence,\nif no opponent can beat the a priori odds of guessing the\nparity of an object, then none can beat the odds of guessing\nthe value of the object.\nLemma 1. 
There exists no opponent which can beat the a priori odds of guessing the parity of an object, o, given only information about objects which are not in the same inference component as o, over all principals not in M and whose policy for o M cannot satisfy, over all random tapes, and over all mappings of public key values to principals.\nTo prove this, we begin with the assumption that there exists some tactic which can successfully guess the parity of o with odds better than the a priori odds for at least some public key mappings. We are going to prove that any such tactic cannot beat the a priori odds on average across all mappings, because there must be more mappings where it fails to beat the a priori odds than where it beats them.\nJust to be clear, the tactic is allowed to interact with principals whose policy for o it can satisfy. It just does not get to guess about the value of o for those principals, as it is entitled to beat the a priori odds for them. Hence, doing so is not considered a leakage in the system.\nBecause the tactic is a deterministic Turing-equivalent computational machine, when it outputs its final guesses, it must output them in some order. We will define n to be the number of users, |K|. We will number the series of principals k_1, k_2, ..., k_n. Without loss of generality, we can assume that every principal's strategy's random tape has some fixed value, resulting in the principals behaving in a strictly deterministic manner. Therefore, as the tactic and strategies are deterministic, the only remaining variable is the mapping of public keys to principals.\nNext we will fix the sequence of public keys. Because public keys are randomly chosen to begin with, and we are varying over the set of all public-key-to-user mappings, we can do this without loss of generality. The order in which guesses are made must in some way depend only on the a priori knowledge, the public keys, and the communications which the tactic has with the strategies. So, if all of these things are kept constant, the guesses will not change.\nLet us suppose that a fraction h of the population whose policy for o has not been satisfied has one parity value, and a fraction 1 - h of the population has the other. Without loss of generality, we assume that h ≥ 1 - h. We determine h by calculating the relative a priori probabilities given the distribution of the values of the object.\nThe a priori probability of successfully guessing which parity a given user's object has is h. Now, if there exists some order of interaction, i, which beats the a priori odds, then its number of correct guesses must be expressible as hn + ε for some ε > 0.\nWe can break the set of users whose policies for o M cannot meet down into a group of sets according to the values of the objects which are in inference components other than the one which contains o. We will define a set of sets, VG, such that each vg ∈ VG is a set of users all of which have the same values for all objects in all inference components other than the one which contains o.\nNow, let us consider the possibility of rearranging the public keys of the members of this group.
Because the strategies in use are defined to be deterministic with respect to the policies governing the attributes which distinguish the two configurations, and because the opponent is defined to be deterministic, it follows that if we were to rearrange users' public keys from the original mapping to create a new mapping, the communication would be the same in both. Since the communication would be the same, it follows that the tactic would make the same guesses relative to the order of users, because it is a deterministic machine and must produce the same output given the same input. The end result is that switching two users, both of whom are members of the same value group, will result in the guesses of the parity of those two users switching as well.\nWe can then consider the set of all arrangements of public keys formed by switching principals around within their value groups, which we shall call I. So the question at hand, then, is whether or not the expected value of ε across all members of I is positive. If we can demonstrate that it is not, then no successful opponent can exist.\nHere we introduce another lemma. Proof of this lemma is now sufficient to establish our earlier lemma.\nLemma 2. The expected value of ε across all public key mappings is less than or equal to zero.\nIf we have some quantity of extra correct guesses, ε, for some public key mapping i, then these guesses must be distributed over some set of value groups. If ε is to be positive on average, then at least some value groups must average a number of correct guesses above the a priori probability over all arrangements in I.\nLet us assume that we have one such group vg. Because the distributions of values of items in other inference components are defined to be precisely independent of o, we know that in each group there must be a fraction h of the members which have one parity and 1 - h which have the other. So, in vg there will be x = h|vg| principals with the first parity and y = (1 - h)|vg| principals with the second, and the a priori expected number of correct guesses would be x.\nIf, for some mapping, i, the tactic is successful, then there must be some number of correct guesses x + ε where ε > 0. We also know that ε ≤ y, simply because the tactic is limited in total correct guesses to |vg| = x + y. As the number of correct guesses is x + ε, it must follow that the number of incorrect guesses is y - ε.\nFurther, we need to note that the tactic must make some quantity of first parity guesses and some quantity of second parity guesses. Obviously, these quantities need to add up to |vg|, but they need not match up with x and y. Every extra first parity or second parity guess guarantees at least one mistake, but even with several mistakes, it is quite possible to beat the a priori odds for some arrangements. So we define x + c to be the number of first parity guesses and y - c to be the number of second parity guesses.\nNow, we know that each increase of one in |c| guarantees at least one wrong guess, so we have the bound ε + |c| ≤ y. Further, we know that since c is fixed (as it is not dependent on the arrangement, only on the guesses, which are unchanging), the only way to gain a wrong guess is to swap a first parity principal with a second parity principal, which must necessarily create two wrong guesses.
So we can quantify the number of wrong first parity guesses and the number of wrong second parity guesses using the terms we have set up. Specifically, there must be $\frac{1}{2}(y-\epsilon+c)$ incorrect first parity guesses, and $\frac{1}{2}(y-\epsilon-c)$ incorrect second parity guesses.
Now we can determine the number of arrangements of principals which will create $x+\epsilon$ correct guesses. Specifically, we look at the total number of principals which are first parity and choose a way to arrange them to match up with incorrect second parity guesses, and we look at the total number of principals which are second parity and choose a way to arrange them to match up with incorrect first parity guesses. Then we multiply that by the number of permutations of first parity principals and the number of permutations of second parity principals. And we arrive at
$\binom{x}{\frac{1}{2}(y-\epsilon-c)}\binom{y}{\frac{1}{2}(y-\epsilon+c)}\,x!\,y!$.
Now, similarly, we can calculate the number of arrangements which will result in $x-\epsilon$ correct answers. And if for all $\epsilon$ there are at least as many arrangements which produce $x-\epsilon$ correct answers as produce $x+\epsilon$ of them, then the average of $\epsilon$ cannot exceed 0. Now, if there are $x-\epsilon$ correct answers, then there must be $y+\epsilon$ incorrect ones. And we can use the same reasoning to establish that there must be $\frac{1}{2}(y+\epsilon+c)$ incorrect first parity guesses and $\frac{1}{2}(y+\epsilon-c)$ incorrect second parity guesses, and hence $\binom{x}{\frac{1}{2}(y+\epsilon-c)}\binom{y}{\frac{1}{2}(y+\epsilon+c)}\,x!\,y!$ arrangements which result in $x-\epsilon$ correct guesses. So if we can prove that this is no less than the previous quantity then our proof will be complete:
$\binom{x}{\frac{1}{2}(y-\epsilon-c)}\binom{y}{\frac{1}{2}(y-\epsilon+c)}\,x!\,y! \le \binom{x}{\frac{1}{2}(y+\epsilon-c)}\binom{y}{\frac{1}{2}(y+\epsilon+c)}\,x!\,y!$
$\iff \binom{x}{\frac{1}{2}(y-\epsilon-c)}\binom{y}{\frac{1}{2}(y-\epsilon+c)} \le \binom{x}{\frac{1}{2}(y+\epsilon-c)}\binom{y}{\frac{1}{2}(y+\epsilon+c)}$
$\iff \frac{x!}{(\frac{1}{2}(y-\epsilon-c))!\,(x-\frac{1}{2}(y-\epsilon-c))!}\cdot\frac{y!}{(\frac{1}{2}(y-\epsilon+c))!\,(y-\frac{1}{2}(y-\epsilon+c))!} \le \frac{x!}{(\frac{1}{2}(y+\epsilon-c))!\,(x-\frac{1}{2}(y+\epsilon-c))!}\cdot\frac{y!}{(\frac{1}{2}(y+\epsilon+c))!\,(y-\frac{1}{2}(y+\epsilon+c))!}$
$\iff \frac{1}{(\frac{1}{2}(y-\epsilon-c))!\,(x-\frac{1}{2}(y-\epsilon-c))!}\cdot\frac{1}{(\frac{1}{2}(y-\epsilon+c))!\,(\frac{1}{2}(y+\epsilon-c))!} \le \frac{1}{(\frac{1}{2}(y+\epsilon-c))!\,(x-\frac{1}{2}(y+\epsilon-c))!}\cdot\frac{1}{(\frac{1}{2}(y+\epsilon+c))!\,(\frac{1}{2}(y-\epsilon-c))!}$
(using $y-\frac{1}{2}(y-\epsilon+c)=\frac{1}{2}(y+\epsilon-c)$ and $y-\frac{1}{2}(y+\epsilon+c)=\frac{1}{2}(y-\epsilon-c)$)
$\iff \frac{1}{(x-\frac{1}{2}(y-\epsilon-c))!\,(\frac{1}{2}(y-\epsilon+c))!} \le \frac{1}{(x-\frac{1}{2}(y+\epsilon-c))!\,(\frac{1}{2}(y+\epsilon+c))!}$
$\iff (x-\tfrac{1}{2}(y-\epsilon-c))!\,(\tfrac{1}{2}(y-\epsilon+c))! \ge (x-\tfrac{1}{2}(y+\epsilon-c))!\,(\tfrac{1}{2}(y+\epsilon+c))!$
$\iff \frac{(x-\frac{1}{2}(y-\epsilon-c))!}{(x-\frac{1}{2}(y+\epsilon-c))!} \ge \frac{(\frac{1}{2}(y+\epsilon+c))!}{(\frac{1}{2}(y-\epsilon+c))!}$
$\iff \frac{(x-\frac{1}{2}(y-\epsilon-c))!}{(x-\frac{1}{2}(y-\epsilon-c)-\epsilon)!} \ge \frac{(\frac{1}{2}(y+\epsilon+c))!}{(\frac{1}{2}(y+\epsilon+c)-\epsilon)!}$
We define a function $f(a,k)=a!/(a-k)!$, i.e. the product starting from $a$ going down $k$ integers. And obviously $a \ge b \implies f(a,k) \ge f(b,k)$ for $b \ge k \ge 0$. Then we can rewrite the last inequality as $f(x-\frac{1}{2}(y-\epsilon-c),\epsilon) \ge f(\frac{1}{2}(y+\epsilon+c),\epsilon)$, which, noting that $\epsilon \ge 0$ and $\epsilon+|c| \le y \implies \epsilon \le y-|c| \le y+c \implies 2\epsilon \le y+c+\epsilon \implies \epsilon \le \frac{1}{2}(y+c+\epsilon)$, is implied by $x-\frac{1}{2}(y-\epsilon-c) \ge \frac{1}{2}(y+\epsilon+c) \iff x-\frac{1}{2}y \ge \frac{1}{2}y \iff x \ge y \iff h|vg| \ge (1-h)|vg| \iff h \ge (1-h)$, which we know to be true from our assumption at the start of the proof.
So we have proven Lemma 2, and this completes the proof.
B. PROOF OF THEOREM 2
We define n to be the number of users, |K|. Because we assume that this system is in a fixed state, every user k is in some configuration $g_k$. Now let us examine some particular attribute, t. We know that a fraction h of users have that attribute and 1 - h do not. Let us define a set of policies $L = \{p \mid t \in T,\ k \in K,\ q_s^t \in Q \text{ such that } p = q_s^t(k)\}$. We also need to know the fraction of users who have each policy in L.
As the number of users grows towards infinity, the\nnumber of possible policies stays finite, so multiple users\nwith the attribute will wind up sharing the same policy.\nFor every member l L, we define f\nl\nto be the fraction of\nusers with attribute t who have policy l. P\nlL\nf\nl\n= 1. We\nassume that as n approaches infinity, f\nl\napproaches some\nfixed quantity ^\nf\nl\nfor every l L. Essentially, what we are\nassuming is that there is a fixed fraction of users with the\nattribute who will chose any given policy. The particular\nnumber will vary at any given time, but over time, we will\napproach this fraction. We should then know that for some\nparticular policy l, the odds of a user without the attribute\ndrawing policy l are also f\nl\nbecause policies are handed out\nwith the same distribution that they are submitted.\nThe distribution which describes how many users we are\nactually going to have with this policy is a binomial distribution\n. The variance of a binomial distribution is\n2\n=\nn(1-h)f\nl\n(1-f\nl\n). The difference between the actual and the\nideal is the square root of the variance divided by the expected\nnumber of users who have a given policy, which is nf\nl\n.\nHence, the expected difference between our practical system\nand the ideal system is\n\nn(1-h)f\nl\n(1-f\nl\n)\nnf\nl\n= q\n(1-h)(1-f\nl\n)\nnf\nl\n.\n1 - h is a constant term, and f\nl\nwill approach ^\nf\nl\n, which is\na fixed quantity. So lim\nninf\n(1-h)(1-f\nl\n)\nnf\nl\n= 0, and we have\nproven that our system approaches the ideal as the number\nof users goes to infinity.\n45", "keywords": "Privacy;Trust Negotiation;Attribute-based Access Control"} {"name": "151", "title": "Probabilistic Author-Topic Models for Information Discovery", "abstract": "We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm . We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer.", "fulltext": "INTRODUCTION\nWith the advent of the Web and various specialized digital\nlibraries, the automatic extraction of useful information\nfrom text has become an increasingly important research\narea in data mining. In this paper we discuss a new algorithm that extracts both the topics expressed in large text\ndocument collections and models how the authors of documents\nuse those topics. 
The methodology is illustrated using\na sample of 160,000 abstracts and 80,000 authors from the\nwell-known CiteSeer digital library of computer science research\npapers (Lawrence, Giles, and Bollacker, 1999). The\nalgorithm uses a probabilistic model that represents topics\nas probability distributions over words and documents\nas being composed of multiple topics. A novel feature of\nour model is the inclusion of author models, in which authors\nare modeled as probability distributions over topics.\nThe author-topic models can be used to support a variety\nof interactive and exploratory queries on the set of documents\nand authors, including analysis of topic trends over\ntime, finding the authors who are most likely to write on a\ngiven topic, and finding the most unusual paper written by\na given author. Bayesian unsupervised learning is used to\nfit the model to a document collection.\nSupervised learning techniques for automated categorization\nof documents into known classes or topics has received\nconsiderable attention in recent years (e.g., Yang, 1998).\nFor many document collections, however, neither predefined\ntopics nor labeled documents may be available. Furthermore\n, there is considerable motivation to uncover hidden\ntopic structure in large corpora, particularly in rapidly changing\nfields such as computer science and biology, where predefined\ntopic categories may not accurately reflect rapidly\nevolving content.\nAutomatic extraction of topics from text, via unsupervised\nlearning, has been addressed in prior work using a\nnumber of different approaches. One general approach is\nto represent the high-dimensional term vectors in a lower-dimensional\nspace. Local regions in the lower-dimensional\nspace can then be associated with specific topics. For example\n, the WEBSOM system (Lagus et al. 1999) uses nonlinear\ndimensionality reduction via self-organizing maps to\nrepresent term vectors in a two-dimensional layout. Linear\nprojection techniques, such as latent semantic indexing\n(LSI), are also widely used (Berry, Dumais, and O' Brien,\n1995). For example, Deerwester et al. (1990), while not\nusing the term \"topics\" per se, state:\nRoughly speaking, these factors may be thought\nof as artificial concepts; they represent extracted\ncommon meaning components of many different\nwords and documents.\nResearch Track Paper\n306\nA somewhat different approach is to cluster the documents\ninto groups containing similar semantic content, using\nany of a variety of well-known document clustering techniques\n(e.g., Cutting et al., 1992; McCallum, Nigam, and\nUngar, 2000; Popescul et al., 2000). Each cluster of documents\ncan then be associated with a latent topic (e.g., as\nrepresented by the mean term vector for documents in the\ncluster). While clustering can provide useful broad information\nabout topics, clusters are inherently limited by the fact\nthat each document is (typically) only associated with one\ncluster. This is often at odds with the multi-topic nature of\ntext documents in many contexts. In particular, combinations\nof diverse topics within a single document are difficult\nto represent. For example, this present paper contains at\nleast two significantly different topics: document modeling\nand Bayesian estimation. 
For this reason, other representations\n(such as those discussed below) that allow documents\nto be composed of multiple topics generally provide better\nmodels for sets of documents (e.g., better out of sample predictions\n, Blei, Ng, and Jordan (2003)).\nHofmann (1999) introduced the aspect model (also referred\nto as probabilistic LSI, or pLSI) as a probabilistic\nalternative to projection and clustering methods. In pLSI,\ntopics are modeled as multinomial probability distributions\nover words, and documents are assumed to be generated\nby the activation of multiple topics. While the pLSI model\nproduced impressive results on a number of text document\nproblems such as information retrieval, the parameterization\nof the model was susceptible to overfitting and did not provide\na straightforward way to make inferences about new\ndocuments not seen in the training data. Blei, Ng, and\nJordan (2003) addressed these limitations by proposing a\nmore general Bayesian probabilistic topic model called latent\nDirichlet allocation (LDA). The parameters of the LDA\nmodel (the topic-word and document-topic distributions)\nare estimated using an approximation technique known as\nvariational EM, since standard estimation methods are intractable\n. Griffiths and Steyvers (2004) showed how Gibbs\nsampling, a Markov chain Monte Carlo technique, could be\napplied in this model, and illustrated this approach using 11\nyears of abstract data from the Proceedings of the National\nAcademy of Sciences.\nOur focus here is to extend the probabilistic topic models\nto include authorship information. Joint author-topic\nmodeling has received little or no attention as far as we\nare aware. The areas of stylometry, authorship attribution,\nand forensic linguistics focus on the problem of identifying\nwhat author wrote a given piece of text. For example,\nMosteller and Wallace (1964) used Bayesian techniques to\ninfer whether Hamilton or Madison was the more likely author\nof disputed Federalist papers. More recent work of a\nsimilar nature includes authorship analysis of a purported\npoem by Shakespeare (Thisted and Efron, 1987), identifying\nauthors of software programs (Gray, Sallis, and MacDonell,\n1997), and the use of techniques such as support vector machines\n(Diederich et al., 2003) for author identification.\nThese author identification methods emphasize the use of\ndistinctive stylistic features (such as sentence length) that\ncharacterize a specific author. In contrast, the models we\npresent here focus on extracting the general semantic content\nof a document, rather than the stylistic details of how\nit was written. For example, in our model we omit common\n\"stop\" words since they are generally irrelevant to the topic\nof the document--however, the distributions of stop words\ncan be quite useful in stylometry. While \"topic\" information\ncould be usefully combined with stylistic features for author\nclassification we do not pursue this idea in this particular\npaper.\nGraph-based and network-based models are also frequently\nused as a basis for representation and analysis of relations\namong scientific authors. For example, Newman (2001),\nMutschke (2003) and Erten et al. (2003) use methods from\nbibliometrics, social networks, and graph theory to analyze\nand visualize co-author and citation relations in the\nscientific literature. 
Kautz, Selman, and Shah (1997) developed\nthe interactive ReferralWeb system for exploring\nnetworks of computer scientists working in artificial intelligence\nand information retrieval, and White and Smyth\n(2003) used PageRank-style ranking algorithms to analyze\nco-author graphs. In all of this work only the network con-nectivity\ninformation is used--the text information from the\nunderlying documents is not used in modeling. Thus, while\nthe grouping of authors via these network models can implicitly\nprovide indications of latent topics, there is no explicit\nrepresentation of the topics in terms of the text content (the\nwords) of the documents.\nThe novelty of the work described in this paper lies in\nthe proposal of a probabilistic model that represents both\nauthors and topics, and the application of this model to a\nlarge well-known document corpus in computer science. As\nwe will show later in the paper, the model provides a general\nframework for exploration, discovery, and query-answering\nin the context of the relationships of author and topics for\nlarge document collections.\nThe outline of the paper is as follows: in Section 2 we describe\nthe author-topic model and outline how the parameters\nof the model (the topic-word distributions and author-topic\ndistributions) can be learned from training data consisting\nof documents with known authors. Section 3 illustrates\nthe application of the model to a large collection of\nabstracts from the CiteSeer system, with examples of specific\ntopics and specific author models that are learned by\nthe algorithm. In Section 4 we illustrate a number of applications\nof the model, including the characterization of topic\ntrends over time (which provides some interesting insights\non the direction of research in computer science), and the\ncharacterization of which papers are most typical and least\ntypical for a given author. An online query interface to the\nsystem is described in Section 5, allowing users to query the\nmodel over the Web--an interesting feature of the model is\nthe coupling of Bayesian sampling and relational database\ntechnology to answer queries in real-time. Section 6 contains\na brief discussion of future directions and concluding\ncomments.\nAN OVERVIEW OF THE AUTHOR-TOPIC MODEL\nThe author-topic model reduces the process of writing a\nscientific document to a simple series of probabilistic steps.\nThe model not only discovers what topics are expressed in a\ndocument, but also which authors are associated with each\ntopic. To simplify the representation of documents, we use\na bag of words assumption that reduces each document to a\nResearch Track Paper\n307\nx\nz\nw\nD\n\n\n\n\nK\nT\nd\na\nGiven the set of\nco-authors:\nN\nd\n1. Choose an author\n2. Choose a topic\ngiven the author\n3. Choose a word\ngiven the topic\nFigure 1: The graphical model for the author-topic\nmodel using plate notation.\nvector of counts, where each vector element corresponds to\nthe number of times a term appears in the document.\nEach author is associated with a multinomial distribution\nover topics. A document with multiple authors has a distribution\nover topics that is a mixture of the distributions\nassociated with the authors. When generating a document,\nan author is chosen at random for each individual word in\nthe document. This author picks a topic from his or her\nmultinomial distribution over topics, and then samples a\nword from the multinomial distribution over words associated\nwith that topic. 
This process is repeated for all words\nin the document.\nIn the model, the authors produce words from a set of\nT\ntopics. When T is kept relatively small relative to the\nnumber of authors and vocabulary size, the author-topic\nmodel applies a form of dimensionality reduction to documents\n; topics are learned which capture the variability in\nword choice across a large set of documents and authors.\nIn our simulations, we use 300 topics (see Rosen-Zvi et al.\n(2004) for an exploration of different numbers of topics).\nFigure 1 illustrates the generative process with a graphical\nmodel using plate notation. For readers not familiar\nwith plate notation, shaded and unshaded variables indicate\nobserved and latent variables respectively. An arrow\nindicates a conditional dependency between variables and\nplates (the boxes in the figure) indicate repeated sampling\nwith the number of repetitions given by the variable in the\nbottom (see Buntine (1994) for an introduction). In the\nauthor-topic model, observed variables not only include the\nwords w in a document but also the set of coauthors A\nd\non\neach document d. Currently, the model does not specify the\ngenerative process of how authors choose to collaborate. Instead\n, we assume the model is provided with the authorship\ninformation on every document in the collection.\nEach author (from a set of K authors) is associated with\na multinomial distribution over topics, represented by .\nEach topic is associated with a multinomial distribution over\nwords, represented by . The multinomial distributions\nand have a symmetric Dirichlet prior with hyperparame-ters\nand (see Rosen-Zvi et al. (2004) for details). For\neach word in the document, we sample an author x uni-formly\nfrom A\nd\n, then sample a topic z from the multinomial\ndistribution associated with author x and sample a word\nw\nfrom a multinomial topic distribution associated with\ntopic z. This sampling process is repeated N times to form\ndocument d.\n2.2 Bayesian Estimation of the Model Parameters\nThe author-topic model includes two sets of unknown\nparameters--the K author-topic distributions , and the T\ntopic distributions --as well as the latent variables corresponding\nto the assignments of individual words to topics z\nand authors x. The Expectation-Maximization (EM) algorithm\nis a standard technique for estimating parameters in\nmodels with latent variables, finding a mode of the posterior\ndistribution over parameters. However, when applied to\nprobabilistic topic models (Hofmann, 1999), this approach\nis susceptible to local maxima and computationally inefficient\n(see Blei, Ng, and Jordan, 2003). We pursue an alternative\nparameter estimation strategy, outlined by Griffiths\nand Steyvers (2004), using Gibbs sampling, a Markov chain\nMonte Carlo algorithm to sample from the posterior distribution\nover parameters. 
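The generative process just described can be summarized in a short sketch (ours, not the authors' code); it assumes the author-topic distributions theta and the topic-word distributions phi are already given, and the toy values below are purely illustrative.

```python
import numpy as np

def generate_document(authors, theta, phi, n_words, rng=np.random.default_rng(0)):
    """Minimal sketch of the author-topic generative process: for each word,
    pick an author uniformly from the co-author set, pick a topic from that
    author's topic distribution, then pick a word from that topic's word
    distribution."""
    words = []
    for _ in range(n_words):
        x = rng.choice(authors)                  # choose an author
        z = rng.choice(len(phi), p=theta[x])     # choose a topic given the author
        w = rng.choice(phi.shape[1], p=phi[z])   # choose a word given the topic
        words.append(w)
    return words

# Toy example: 2 authors, 3 topics, a vocabulary of 5 words.
theta = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
phi = np.full((3, 5), 0.2)
print(generate_document(authors=[0, 1], theta=theta, phi=phi, n_words=10))
```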
Instead of estimating the model parameters directly, we evaluate the posterior distribution on just x and z and then use the results to infer $\theta$ and $\phi$. For each word, the topic and author assignment are sampled from:
$P(z_i = j, x_i = k \mid w_i = m, \mathbf{z}_{-i}, \mathbf{x}_{-i}) \;\propto\; \frac{C^{WT}_{mj} + \beta}{\sum_{m'} C^{WT}_{m'j} + V\beta} \cdot \frac{C^{AT}_{kj} + \alpha}{\sum_{j'} C^{AT}_{kj'} + T\alpha} \qquad (1)$
where $z_i = j$ and $x_i = k$ represent the assignments of the ith word in a document to topic j and author k respectively, $w_i = m$ represents the observation that the ith word is the mth word in the lexicon, and $\mathbf{z}_{-i}, \mathbf{x}_{-i}$ represent all topic and author assignments not including the ith word. Furthermore, $C^{WT}_{mj}$ is the number of times word m is assigned to topic j, not including the current instance, $C^{AT}_{kj}$ is the number of times author k is assigned to topic j, not including the current instance, and V is the size of the lexicon.
During parameter estimation, the algorithm only needs to keep track of a $V \times T$ (word by topic) count matrix and a $K \times T$ (author by topic) count matrix, both of which can be represented efficiently in sparse format. From these count matrices, we can easily estimate the topic-word distributions and author-topic distributions by:
$\phi_{mj} = \frac{C^{WT}_{mj} + \beta}{\sum_{m'} C^{WT}_{m'j} + V\beta} \qquad (2)$
$\theta_{kj} = \frac{C^{AT}_{kj} + \alpha}{\sum_{j'} C^{AT}_{kj'} + T\alpha} \qquad (3)$
where $\phi_{mj}$ is the probability of using word m in topic j, and $\theta_{kj}$ is the probability of using topic j by author k. These values correspond to the predictive distributions over new words w and new topics z conditioned on w and z.
We start the algorithm by assigning words to random topics and authors (from the set of authors on the document). Each Gibbs sample then constitutes applying Equation (1) to every word token in the document collection. This sampling process is repeated for I iterations.
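As a rough illustration of how Equation (1) drives the sampler, the following sketch performs one Gibbs sweep over all tokens using only the two count matrices. It is our own simplified rendering under assumed data structures, not the authors' implementation.

```python
import numpy as np

def gibbs_pass(docs, doc_authors, z, x, CWT, CAT, alpha, beta, rng):
    """One Gibbs sweep: re-sample the topic z[d][i] and author x[d][i] of every
    word token using Equation (1), maintaining only the word-topic (CWT, V x T)
    and author-topic (CAT, K x T) count matrices. docs[d] is a list of word ids
    and doc_authors[d] a list of the co-author ids of document d."""
    V, T = CWT.shape
    for d, words in enumerate(docs):
        for i, w in enumerate(words):
            # Remove the current token's assignment from the counts.
            CWT[w, z[d][i]] -= 1
            CAT[x[d][i], z[d][i]] -= 1
            # Posterior over (author, topic) pairs for this token, Equation (1).
            # (Column totals are recomputed here for clarity; a real sampler
            # would maintain running sums.)
            p_wt = (CWT[w, :] + beta) / (CWT.sum(axis=0) + V * beta)      # shape (T,)
            denom = CAT[doc_authors[d], :].sum(axis=1, keepdims=True) + T * alpha
            p_at = (CAT[doc_authors[d], :] + alpha) / denom               # shape (A_d, T)
            p = (p_at * p_wt).ravel()
            p /= p.sum()
            k = rng.choice(len(p), p=p)
            a_idx, j = divmod(k, T)
            x[d][i], z[d][i] = doc_authors[d][a_idx], j
            # Add the new assignment back into the counts.
            CWT[w, j] += 1
            CAT[x[d][i], j] += 1
```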
In this paper we primarily focus on results based on a single sample so that specific topics can be identified and interpreted--in tasks involving prediction of words and authors one can average over topics and use multiple samples when doing so (Rosen-Zvi et al., 2004).
[Figure 2: Eight example topics extracted from the CiteSeer database. Each is illustrated with the 10 most likely words and authors with corresponding probabilities.]
[Figure 3: The four most similar topics to the topics in the bottom row of Figure 2, obtained from a different Markov chain run.]
AUTHOR-TOPICS FOR CITESEER
Our collection of CiteSeer abstracts contains D = 162,489 abstracts with K = 85,465 authors. We preprocessed the text by removing all punctuation and common stop words. This led to a vocabulary size of V = 30,799 and a total of 11,685,514 word tokens.
There is inevitably some noise in data of this form given that many of the fields (paper title, author names, year, abstract) were extracted automatically by CiteSeer from PDF or postscript or other document formats. We chose the simple convention of identifying authors by their first initial and second name, e.g., A Einstein, given that multiple first initials or fully spelled first names were only available for a relatively small fraction of papers. This means of course that for some very common names (e.g., J Wang or J Smith) there will be multiple actual individuals represented by a single name in the model. This is a known limitation of working with this type of data (e.g., see Newman (2001) for further discussion). There are algorithmic techniques that could be used to automatically resolve these identity problems--however, in this paper, we don't pursue these options and instead for simplicity work with the first-initial/last-name representation of individual authors.
In our simulations, the number of topics T was fixed at 300 and the smoothing parameters $\alpha$ and $\beta$ (Figure 1) were set at 0.16 and 0.01 respectively. We ran 5 independent Gibbs sampling chains for 2000 iterations each.
On a 2GHz\nPC workstation, each iteration took 400 seconds, leading to\na total run time on the order of several days per chain.\n3.2 Author-Topic and Topic-Word Models for\nthe CiteSeer Database\nWe now discuss the author-topic and topic-word distributions\nlearned from the CiteSeer data. Figure 2 illustrates\neight different topics (out of 300), obtained at the 2000th\niteration of a particular Gibbs sampler run.\nEach table in Figure 2 shows the 10 words that are most\nlikely to be produced if that topic is activated, and the 10\nauthors who are most likely to have produced a word if it is\nknown to have come from that topic. The words associated\nwith each topic are quite intuitive and, indeed, quite precise\nin the sense of conveying a semantic summary of a particular\nfield of research. The authors associated with each topic\nare also quite representative--note that the top 10 authors\nassociated with a topic by the model are not necessarily the\nmost well-known authors in that area, but rather are the\nauthors who tend to produce the most words for that topic\n(in the CiteSeer abstracts).\nThe first 3 topics at the top of Figure 2, topics #163, #87\nand #20 show examples of 3 quite specific and precise topics\non string matching, human-computer interaction, and astronomy\nrespectively. The bottom four topics (#205, #209,\n#289, and #10) are examples of topics with direct relevance\nto data mining--namely data mining itself, probabilistic\nlearning, information retrieval, and database querying and\nindexing. The model includes several other topics related\nto data mining, such as predictive modeling and neural networks\n, as well as topics that span the full range of research\nareas encompassed by documents in CiteSeer. The full list is\navailable at http://www.datalab.uci.edu/author-topic.\nTopic #273 (top right Figure 2) provides an example of a\nResearch Track Paper\n309\ntopic that is not directly related to a specific research area.\nA fraction of topics, perhaps 10 to 20%, are devoted to \"non-research\n-specific\" topics, the \"glue\" that makes up our research\npapers, including general terminology for describing\nmethods and experiments, funding acknowledgments and\nparts of addresses(which inadvertently crept in to the abstracts\n), and so forth.\nWe found that the topics obtained from different Gibbs\nsampling runs were quite stable. For example, Figure 3\nshows the 4 most similar topics to the topics in the bottom\nrow of Figure 2, but from a different run. There is\nsome variability in terms of ranking of specific words and\nauthors for each topic, and in the exact values of the associated\nprobabilities, but overall the topics match very closely.\nAPPLICATIONS OF THE AUTHOR-TOPIC MODEL TO CITESEER\nOf the original 162,489 abstracts in our data set, estimated\nyears of publication were provided by CiteSeer for 130, 545 of\nthese abstracts. There is a steady (and well-known) increase\nyear by year in the number of online documents through the\n1990's. From 1999 through 2002, however, the number of\ndocuments for which the year is known drops off sharply-the\nyears 2001 and 2002 in particular are under-represented\nin this set. This is due to fact that it is easier for CiteSeer to\ndetermine the date of publication of older documents, e.g.,\nby using citations to these documents.\nWe used the yearly data to analyze trends in topics over\ntime. 
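A minimal sketch of this per-year computation (described in the next paragraph) might look as follows; the data structures are our assumptions rather than the authors' code.

```python
import numpy as np
from collections import defaultdict

def topic_fractions_by_year(doc_years, doc_word_topics, n_topics=300):
    """For each year, count how many word tokens were assigned to each topic
    and normalize to fractions. doc_word_topics[d] holds the topic assignment
    of every word in document d, and doc_years[d] its publication year."""
    counts = defaultdict(lambda: np.zeros(n_topics))
    for d, year in enumerate(doc_years):
        for j in doc_word_topics[d]:
            counts[year][j] += 1
    return {year: c / c.sum() for year, c in counts.items()}
```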
Using the same 300 topic model described earlier, the\ndocuments were partitioned by year, and for each year all\nof the words were assigned to their most likely topic using\nthe model. The fraction of words assigned to each topic for\na given year was then calculated for each of the 300 topics\nand for each year from 1990 to 2002.\nThese fractions provide interesting and useful indicators of\nrelative topic popularity in the research literature in recent\nyears. Figure 4 shows the results of plotting several different\ntopics. Each topic is indicated in the legend by the five\nmost probable words in the topic. The top left plot shows\na steady increase (roughly three-fold) in machine learning\nand data mining topics. The top right plot shows a \"tale of\ntwo topics\": an increase in information-retrieval coupled to\nan apparent decrease in natural language processing.\nOn the second row, on the left we see a steady decrease in\ntwo \"classical\" computer science topics, operating systems\nand programming languages. On the right, however, we see\nthe reverse behavior, namely a corresponding substantial\ngrowth in Web-related topics.\nIn the third row, the left plot illustrates trends within\ndatabase research: a decrease in the transaction and concurrency-related\ntopic, query-related research holding steady over time,\nand a slow but steady increase in integration-related database\nresearch. The plot on the right in the third row illustrates\nthe changing fortunes of security-related research--a decline\nin the early 90's but then a seemingly dramatic upward trend\nstarting around 1995.\nThe lower left plot on the bottom row illustrates the somewhat\nnoisy trends of three topics that were \"hot\" in the\n1990's: neural networks exhibits a steady decline since the\nearly 1990's (as machine learning has moved on to areas such\nas support vector machines), genetic algorithms appears to\nbe relatively stable, and wavelets may have peaked in the\n199498 time period.\nFinally, as with any large data set there are always some\nsurprises in store. The final figure on the bottom right shows\ntwo somewhat unexpected \"topics\". The first topic consists\nentirely of French words (in fact the model discovered 3 such\nFrench language topics ). The apparent peaking of French\nwords in the mid-1990s is likely to be an artifact of how CiteSeer\npreprocesses data rather than any indication of French\nresearch productivity. The lower curve corresponds to a\ntopic consisting of largely Greek letters, presumably from\nmore theoretically oriented papers--fans of theory may be\nsomewhat dismayed to see that there is an apparent steady\ndecline in the relative frequency of Greek letters in abstracts\nsince the mid-1990s!\nThe time-trend results above should be interpreted with\nsome caution. As mentioned earlier, the data for 2001 and\n2002 are relatively sparse compared to earlier years. In addition\n, the numbers are based on a rather skewed sample (online\ndocuments obtained by the CiteSeer system for which\nyears are known). Furthermore, the fractions per year only\nindicate the relative number of words assigned to a topic\nby the model and make no direct assessment of the quality\nor importance of a particular sub-area of computer science.\nNonetheless, despite these caveats, the results are quite informative\nand indicate substantial shifts in research topics\nwithin the field of computer science.\nIn terms of related work, Popescul et al. 
(2000) investi-gated\ntime trends in CiteSeer documents using a document\nclustering approach. 31K documents were clustered into 15\nclusters based on co-citation information while the text information\nin the documents was not used. Our author-topic\nmodel uses the opposite approach. In effect we use the text\ninformation directly to discover topics and do not explic-itly\nmodel the \"author network\" (although implicitly the\nco-author connections are used by the model). A direct\nquantitative comparison is difficult, but we can say that our\nmodel with 300 topics appears to produce much more noticeable\nand precise time-trends than the 15-cluster model.\n4.2 Topics and Authors for New Documents\nIn many applications, we would like to quickly assess the\ntopic and author assignments for new documents not contained\nin our subset of the CiteSeer collection. Because our\nMonte Carlo algorithm requires significant processing time\nfor 160K documents, it would be computationally inefficient\nto rerun the algorithm for every new document added to the\ncollection (even though from a Bayesian inference viewpoint\nthis is the optimal approach). Our strategy instead is to\napply an efficient Monte Carlo algorithm that runs only on\nthe word tokens in the new document, leading quickly to\nlikely assignments of words to authors and topics. We start\nby assigning words randomly to co-authors and topics. We\nthen sample new assignments of words to topics and authors\nby applying Equation 1 only to the word tokens in the new\ndocument each time temporarily updating the count matrices\nC\nW T\nand C\nAT\n. The resulting assignments of words to\nauthors and topics can be saved after a few iterations (10\niterations in our simulations).\nFigure 5 shows an example of this type of inference. 
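The online inference procedure described above can be sketched as follows (our rendering under stated assumptions: word ids and co-author ids index into the trained count matrices, and the temporary count updates are undone afterwards).

```python
import numpy as np

def infer_new_document(word_ids, coauthors, CWT, CAT, alpha, beta,
                       n_iter=10, rng=np.random.default_rng(0)):
    """Sample topic/author assignments for a new document's tokens only,
    temporarily updating the global count matrices CWT (word x topic) and
    CAT (author x topic), then undoing the updates."""
    V, T = CWT.shape
    z = rng.integers(T, size=len(word_ids))      # random initial topics
    x = rng.choice(coauthors, size=len(word_ids))  # random initial authors
    for w, j, a in zip(word_ids, z, x):          # add initial assignments
        CWT[w, j] += 1
        CAT[a, j] += 1
    for _ in range(n_iter):
        for i, w in enumerate(word_ids):
            CWT[w, z[i]] -= 1
            CAT[x[i], z[i]] -= 1
            p_wt = (CWT[w, :] + beta) / (CWT.sum(axis=0) + V * beta)
            denom = CAT[coauthors, :].sum(axis=1, keepdims=True) + T * alpha
            p_at = (CAT[coauthors, :] + alpha) / denom
            p = (p_at * p_wt).ravel()
            p /= p.sum()
            k = rng.choice(len(p), p=p)
            a_idx, j = divmod(k, T)
            x[i], z[i] = coauthors[a_idx], j
            CWT[w, j] += 1
            CAT[x[i], j] += 1
    for w, j, a in zip(word_ids, z, x):          # undo the temporary updates
        CWT[w, j] -= 1
        CAT[a, j] -= 1
    return z, x
```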
Abstracts from two authors, B Scholkopf and A Darwiche, were combined together into one "pseudo-abstract" and the document treated as if they had both written it.
[Figure 4: Topic trends for research topics in computer science. Each panel plots, for the years 1990 to 2002, the fraction of words assigned to selected topics, with each topic labeled in the legend by its five most probable words.]
[Figure 5: Automated labeling of a pseudo-abstract from two authors by the model. Each word of the combined Scholkopf/Darwiche abstract is tagged with the author to whom the model assigns it.]
These two authors work in relatively different but not entirely unrelated sub-areas of computer science: Scholkopf in machine learning and Darwiche in probabilistic reasoning. The document is then parsed by the model, i.e., words are assigned to these authors. We would hope that the author-topic model, conditioned now on these two authors, can separate the combined abstract into its component parts.
Figure 5 shows the results after the model has classified each word according to the most likely author. Note that the model only sees a bag of words and is not aware of the word order that we see in the figure. For readers viewing this in color, the more red a word is the more likely it is to have been generated (according to the model) by Scholkopf (and blue for Darwiche). For readers viewing the figure in black and white, the superscript 1 indicates words classified by the model for Scholkopf, and superscript 2 for Darwiche. The results show that all of the significant content words (such as kernel, support, vector, diagnoses, directed, graph) are classified correctly. As we might expect, most of the "errors" are words (such as "based" or "criterion") that are not specific to either author's area of research. Were we to use word order in the classification, and classify (for example) whole sentences, the accuracy would increase further. As it is, the model correctly classifies 69% of Scholkopf's words and 72% of Darwiche's.
4.3 Detecting the Most Surprising and Least Surprising Papers for an Author
In Tables 1 through 3 we used the model to score papers attributed to three well-known researchers in computer science (Christos Faloutsos, Michael Jordan, and Tom Mitchell). For each document for each of these authors we calculate a perplexity score. Perplexity is widely used in language modeling to assess the predictive power of a model. It is a measure of how surprising the words are from the model's perspective, loosely equivalent to the effective branching factor.
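As a concrete reading of this measure (the formal definition follows in the next paragraph), here is a minimal sketch, under the assumption that a document's probability given a single author factorizes over words, with each word's probability obtained by summing over that author's topics.

```python
import numpy as np

def perplexity(word_ids, author, theta, phi):
    """Perplexity of a document's words under one author's topic mixture:
    exp(-log p(W_d | a) / |W_d|), where p(w | a) = sum_j theta[a, j] * phi[j, w]."""
    p_w = theta[author] @ phi[:, word_ids]   # per-word probabilities, shape (|W_d|,)
    return float(np.exp(-np.log(p_w).sum() / len(word_ids)))
```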
Formally, the perplexity score of a new unobserved document d that contains a set of words $W_d$, conditioned on a topic model for a specific author a, is:
$\mathrm{Perplexity}(W_d \mid a) = \exp\left(-\frac{\log p(W_d \mid a)}{|W_d|}\right)$
where $p(W_d \mid a)$ is the probability assigned by the author-topic model to the words $W_d$ conditioned on the single author a, and $|W_d|$ is the number of words in the document. Even if the document was written by multiple authors we evaluate the perplexity score relative to a single author in order to judge perplexity relative to that individual.
Our goal here is not to evaluate the out-of-sample predictive power of the model, but to explore the range of perplexity scores that the model assigns to papers from specific authors. Lower scores imply that the words w are less surprising to the model (lower bounded by zero). In particular we are interested in the abstracts that the model considers most surprising (highest perplexity) and least surprising (lowest perplexity)--in each table we list the 2 abstracts with the highest perplexity scores, the median perplexity, and the 2 abstracts with the lowest perplexity scores.
Table 1 for Christos Faloutsos shows that the two papers with the highest perplexities have significantly higher perplexity scores than the median and the two lowest perplexity papers. The high perplexity papers are related to "query by example" and the QBIC image database system, while the low perplexity papers are on high-dimensional indexing. As far as the topic model for Faloutsos is concerned, the indexing papers are much more typical of his work than the query by example papers.
Tables 2 and 3 provide interesting examples in that the most perplexing papers (from the model's viewpoint) for each author are papers that the author did not write at all. As mentioned earlier, by combining all T Mitchell's and M Jordan's together, the data set may contain authors who are different from Tom Mitchell at CMU and Michael Jordan at Berkeley. Thus, the highest perplexity paper for T Mitchell is in fact authored by a Toby Mitchell and is on the topic of estimating radiation doses (quite different from the machine learning work of Tom Mitchell). Similarly, for Michael Jordan, the most perplexing paper is on software
Table 1: Papers ranked by perplexity for C. Faloutsos, from 31 documents.
Paper Title | Perplexity Score
MindReader: Querying databases through multiple examples | 1503.7
Efficient and effective querying by image content | 1498.2
MEDIAN SCORE | 603.5
Beyond uniformity and independence: analysis of R-trees using the concept of fractal dimension | 288.9
The TV-tree: an index structure for high-dimensional data | 217.2
Table 2: Papers ranked by perplexity for M. Jordan, from 33 documents.
Paper Title | Perplexity Score
Software configuration management in an object oriented database | 1386.0
Are arm trajectories planned in kinematic or dynamic coordinates? An adaptation study | 1319.2
MEDIAN SCORE | 372.4
On convergence properties of the EM algorithm for Gaussian mixtures | 180.0
Supervised learning from incomplete data via an EM approach | 179.0
Table 3: Papers ranked by perplexity for T.
Mitchell from 15 documents.\nPaper Title\nPerplexity Score\nA method for estimating occupational radiation dose to individuals, using weekly dosimetry data\n2002.9\nText classification from labeled and unlabeled documents using EM\n845.4\nMEDIAN SCORE\n411.5\nLearning one more thing\n266.5\nExplanation based learning for mobile robot perception\n264.2\nconfiguration management and was written by Mick Jordan\nof Sun Microsystems. In fact, of the 7 most perplexing papers\nfor M Jordan, 6 are on software management and the\nJAVA programming language, all written by Mick Jordan.\nHowever, the 2nd most perplexing paper was in fact coauthored\nby Michael Jordan, but in the area of modeling of\nmotor planning, which is a far less common topic compared\nto the machine learning papers that Jordan typically writes.\nAN AUTHOR-TOPIC BROWSER\nWe have built a JAVA-based query interface tool that supports\ninteractive querying of the model\n1\n. The tool allows a\nuser to query about authors, topics, documents, or words.\nFor example, given a query on a particular author the tool\nretrieves and displays the most likely topics and their probabilities\nfor that author, the 5 most probable words for each\ntopic, and the document titles in the database for that author\n. Figure 6(a) (top panel) shows the result of querying\non Pazzani M and the resulting topic distribution (highly-ranked\ntopics include machine learning, classification, rule-based\nsystems, data mining, and information retrieval).\nMouse-clicking on one of the topics (e.g., the data mining\ntopic as shown in the figure) produces the screen display to\nthe left (Figure 6(b)). The most likely words for this topic\nand the most likely authors given a word from this topic are\nthen displayed. We have found this to be a useful technique\nfor interactively exploring topics and authors, e.g., which\nauthors are active in a particular research area.\nSimilarly, one can click on a particular paper (e.g., the\npaper A Learning Agent for Wireless News Access as shown\nin the lower screenshot (Figure 6(c)) and the display in the\npanel to the right is then produced. This display shows the\nwords in the documents and their counts, the probability\ndistribution over topics for the paper given the word counts\n1\nA prototype online version of the tool can be accessed at\nhttp://www.datalab.uci.edu/author-topic\n.\n(ranked by highest probability first), and a probability distribution\nover authors, based on the proportion of words\nassigned by the model to each topic and author respectively.\nThe system is implemented using a combination of a relational\ndatabase and real-time Bayesian estimation (a relatively\nrare combination of these technologies for a real-time\nquery-answering system as far as we are aware). We use\na database to store and index both (a) the sparse author-topic\nand topic-word count matrices that are learned by our\nalgorithm from the training data, and (b) various tables describing\nthe data such as document-word, document-author,\nand document-title tables. For a large document set such\nas CiteSeer (and with 300 topics) these tables can run into\nthe hundred's of megabytes of memory--thus, we do not\nload them into main memory automatically but instead issue\nSQL commands to retrieve the relevant records in real-time.\nFor most of the queries we have implemented to date the\nqueries can be answered by simple table lookup followed by\nappropriate normalization (if needed) of the stored counts\nto generate conditional probabilities. 
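A sketch of this lookup-and-normalize pattern is shown below, with a hypothetical table layout (the paper does not specify its schema; the table and column names are ours).

```python
import sqlite3

def author_topic_distribution(db_path, author_id, alpha=0.16, n_topics=300):
    """Fetch an author's sparse topic counts from a relational store and turn
    them into the smoothed distribution of Equation (3). Schema is hypothetical:
    author_topic_counts(author_id, topic_id, cnt)."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT topic_id, cnt FROM author_topic_counts WHERE author_id = ?",
        (author_id,),
    ).fetchall()
    counts = {j: 0 for j in range(n_topics)}
    counts.update(dict(rows))
    total = sum(counts.values()) + n_topics * alpha
    return {j: (c + alpha) / total for j, c in counts.items()}
```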
For example, displaying\nthe topic distribution for a specific author is simply a\nmatter of retrieving the appropriate record. However, when\na document is the basis of a query (e.g., as in the lower\nscreenshot, Figure 6(c)) we must compute in real-time the\nconditional distribution of the fraction of words assigned to\neach topic and author, a calculation that cannot be computed\nin closed form. This requires retrieving all the relevant\nword-topic counts for the words in the document via\nSQL, then executing the estimation algorithm outlined in\nSection 4.2 in real-time using Gibbs sampling, and displaying\nthe results to the user. The user can change adjust the\nburn-in time, the number of samples and the lag time in the\nsampling algorithm--typically we have found that as few as\n10 Gibbs samples gives quite reasonable results (and takes\non the order of 1 or 2 seconds depending on the machine\nbeing used other factors).\nResearch Track Paper\n313\n(b)\n(a)\n(c)\nFigure 6: Examples of screenshots from the interactive query browser for the author-topic model with (a)\nquerying on author Pazzani M, (b) querying on a topic (data mining) relevant to that author, and (c) querying\non a particular document written by the author.\nResearch Track Paper\n314\nCONCLUSIONS\nWe have introduced a probabilistic algorithm that can\nthat can automatically extract information about authors,\ntopics, and documents from large text corpora. The method\nuses a generative probabilistic model that links authors to\nobserved words in documents via latent topics. We demon-strated\nthat Bayesian estimation can be used to learn such\nauthor-topic models from very large text corpora, using CiteSeer\nabstracts as a working example. The resulting CiteSeer\nauthor-topic model was shown to extract substantial novel\n\"hidden\" information from the set of abstracts, including\ntopic time-trends, author-topic relations, unusual papers for\nspecific authors and so forth. Other potential applications\nnot discussed here include recommending potential reviewers\nfor a paper based on both the words in the paper and the\nnames of the authors. Even though the underlying probabilistic\nmodel is quite simple, and ignores several aspects of\nreal-world document generation (such as topic correlation,\nauthor interaction, and so forth), it nonetheless provides a\nuseful first step in understanding author-topic structure in\nlarge text corpora.\nAcknowledgements\nWe would like to thank Steve Lawrence, C. Lee Giles, and\nIsaac Council for providing the CiteSeer data used in this\npaper. We also thank Momo Alhazzazi, Amnon Meyers,\nand Joshua O'Madadhain for assistance in software development\nand data preprocessing. The research in this paper\nwas supported in part by the National Science Foundation\nunder Grant IRI-9703120 via the Knowledge Discovery and\nDissemination (KD-D) program.\nReferences\nBlei, D. M., Ng, A. Y., and Jordan, M. I., (2003) Latent\nDirichlet allocation, Journal of Machine Learning Research\n3, pp. 9931022.\nBuntine, W.L. (1994) Operations for learning with graphical\nmodels, Journal of Artificial Intelligence Research\n2, pp. 159-225.\nCutting, D., Karger, D. R., Pederson, J., and Tukey, J.\nW. (1992) Scatter/Gather: a cluster-based approach\nto browsing large document collections, in Proceedings\nof the 15th Annual International ACM SIGIR Conference\non Research and Development in Information\nRetrieval, pp. 318329.\nDeerwester, S. C., Dumais, S. T., Landauer, T. K., Furnas,\nG. W., and Harshman, R. A. 
(1990) Indexing by latent\nsemantic analysis, Journal of the American Society of\nInformation Science, 41(6), pp. 391407.\nDiederich, J., Kindermann, J., Leopold, E., and Paass, G.\n(2003) Authorship attribution with support vector machines\n, Applied Intelligence 19 (1).\nErten, C., Harding, P. J., Kobourov, S. G., Wampler, K.,\nand Yee, G. (2003) Exploring the computing literature\nusing temporal graph visualization, Technical Report,\nDepartment of Computer Science, University of Arizona\n.\nGray, A., Sallis, P., MacDonell, S. (1997) Software forensics\n: Extending authorship analysis techniques to computer\nprograms, Proceedings of the 3rd Biannual Conference\nof the International Association of Forensic\nLinguists (IAFL), Durham NC.\nGriffiths, T. L., and Steyvers , M. (2004) Finding scientific\ntopics, Proceedings of the National Academy of\nSciences, 101 (suppl. 1), 52285235.\nHofmann, T. (1999) Probabilistic latent semantic indexing\n, in Proceedings of the 22nd International Conference\non Research and Development in Information Retrieval\n(SIGIR'99).\nKautz, H., Selman, B., and Shah, M. (1997) Referral Web:\nCombining social networks and collaborative filtering,\nCommunications of the ACM, 3, pp. 6365.\nLagus, K, Honkela, T., Kaski, S., and Kohonen, T. (1999)\nWEBSOM for textual data mining, Artificial Intelligence\nReview, 13 (56), pp. 345364.\nLawrence, S., Giles, C. L., and Bollacker, K. (1999) Digital\nlibraries and autonomous citation indexing, IEEE\nComputer, 32(6), pp. 6771.\nMcCallum, A., Nigam, K., and Ungar, L. (2000) Efficient\nclustering of high-dimensional data sets with application\nto reference matching, in Proceedings of the Sixth\nACM SIGKDD Conference on Knowledge Discovery\nand Data Mining, pp. 169178.\nMosteller, F., and Wallace, D. (1964) Applied Bayesian and\nClassical Inference: The Case of the Federalist Papers,\nSpringer-Verlag.\nMutschke, P. (2003) Mining networks and central entities\nin digital libraries: a graph theoretic approach applied\nto co-author networks, Intelligent Data Analysis\n2003, Lecture Notes in Computer Science 2810,\nSpringer Verlag, pp. 155166\nNewman, M. E. J. (2001) Scientific collaboration networks:\nI. Network construction and fundamental results, Physical\nReview E, 64, 016131.\nPopescul, A., Flake, G. W., Lawrence, S., Ungar, L. H., and\nGiles, C. L. (2000) Clustering and identifying temporal\ntrends in document databases, IEEE Advances in\nDigital Libraries, ADL 2000, pp. 173182.\nRosen-Zvi, M., Griffiths, T., Steyvers, M., Smyth, P. (2004)\nThe author-topic model for authors and documents,\nProceedings of the 20th UAI Conference, July 2004.\nThisted, B., and Efron, R. (1987) Did Shakespeare write a\nnewly discovered poem?, Biometrika, pp. 445455.\nWhite, S. and Smyth, P. (2003) Algorithms for estimating\nrelative importance in networks, in Proceedings of the\nNinth ACM SIGKDD Conference on Knowledge Discovery\nand Data Mining, pp. 266275.\nYang, Y. (1999) An evaluation of statistical approaches to\ntext categorization, Information Retrieval, 1, pp. 69\n90.\nResearch Track Paper\n315", "keywords": "Gibbs sampling;text modeling;unsupervised learning"} {"name": "152", "title": "Proportional Search Interface Usability Measures", "abstract": "Speed, accuracy, and subjective satisfaction are the most common measures for evaluating the usability of search user interfaces. However, these measures do not facilitate comparisons optimally and they leave some important aspects of search user interfaces uncovered. 
We propose new, proportional measures to supplement the current ones. Search speed is a normalized measure for the speed of a search user interface expressed in answers per minute. Qualified search speed reveals the trade-off between speed and accuracy while immediate search accuracy addresses the need to measure success in typical web search behavior where only the first few results are interesting. The proposed measures are evaluated by applying them to raw data from two studies and comparing them to earlier measures. The evaluations indicate that they have desirable features.", "fulltext": "INTRODUCTION\nIn order to study the usability of search user interfaces we\nneed proper measures. In the literature, speed, accuracy and\nsubjective satisfaction measures are common and reveal\ninteresting details. They have, however, a few\nshortcomings that call for additional measures.\nFirst, comparing results even within one experiment--let\nalone between different experiments--is hard because the\nmeasures are not typically normalized in the research\nreports but multiple raw numbers (like answers found and\ntime used) are reported. Of course, unbiased comparison\nbetween studies will always be difficult as the test setup has\na big effect on the results, but the problem is compounded\nby the presentation of multiple task dependent measures. A\ngood measure would be as simple as possible, yet it must\nnot discard relevant information.\nSecond, the current measures do not reveal the sources of\nspeed differences. In particular, the relation between speed\nand accuracy may be hard to understand since the current\nmeasures for those dimensions are completely separate. For\nexample, it is essential to know if the increase in speed is\ndue to careless behavior or better success.\nThird, in the web environment, a typical goal for a search is\nto find just a few good enough answers to a question. This\nis demonstrated by studies that show that about half of the\nusers only view one or two result pages per query [11].\nCurrent search user interface usability measures do not\ncapture the success of such a behavior very well.\nIn order to address these problems, we present three new\nproportional, normalized usability measures. The new\nmeasures are designed for the result evaluation phase of the\nsearch process [10] where real users are involved. Search\nspeed is a normalized speed measure expressed in answers\nper minute. It makes within study comparisons simple and\nbetween studies bit more feasible. Qualified search speed is\na combination of speed and accuracy measures that reveals\nthe tradeoff between speed and accuracy. It shows the\nsource of speed differences in terms of accuracy and is also\nmeasured in answers per minute. Immediate search\naccuracy is a measure that captures the success of result\nevaluation when only the first few hits are interesting.\nThese new measures are evaluated by applying them to\ndata from real experiments and comparing them to\nconventional measures.\n\nRELATED WORK\nIn usability evaluations, the measurements are typically\nbased on the three major components of usability:\neffectiveness, efficiency, and satisfaction [3, 4].\nInternational ISO 9241-11 standard [4] defines\neffectiveness as the \"accuracy and completeness with which\nthe users achieve specified goals\" and efficiency as\n\"resources expended in relation to the accuracy and\ncompleteness with which users achieve goals\". 
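To make these ISO 9241-11 terms concrete, the following minimal Python sketch (our own illustration; the function and parameter names are not taken from the standard or from any cited study) computes effectiveness as the share of task-relevant selections and efficiency as effectiveness per minute of search time:

def effectiveness(relevant_selected, total_selected):
    """Accuracy of the user's result selections for a task, as a 0..1 proportion."""
    return relevant_selected / total_selected if total_selected else 0.0

def efficiency(effectiveness_score, minutes_used):
    """ISO 9241-11 style efficiency: effectiveness divided by the resource spent (here, time)."""
    return effectiveness_score / minutes_used if minutes_used else 0.0

# Example: 4 relevant selections out of 6, found in 0.9 minutes.
e = effectiveness(4, 6)
print(e, efficiency(e, 0.9))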
According to the standard, the efficiency measure divides the effectiveness (achieved results) by the resources used (e.g. time, human effort, or cost). In this work, we will leave satisfaction measures out of the discussion and concentrate on objective quantitative measures.
Usability measurements are strongly domain dependent. In the search user interface domain, effectiveness is typically measured in terms of accuracy (which is recognized as an example measure in the ISO standard as well). Time (speed of use) is typically used as the critical resource when calculating the efficiency.
In the following we will discuss measuring practices in typical studies evaluating search user interfaces. Note that although almost every study in the information retrieval community deals with searching, they tend to focus on system performance [8] and thus only a few studies are mentioned here.
Speed Measures
The basic approach for measuring speed is simply to measure the time required for performing a task, but the actual implementation differs from study to study. In early evaluations of the Scatter/Gather system by Pirolli et al. [6], times were recorded simply on a task basis. In the results they reported how many minutes it took, on average, to complete a task. In the study by Dumais et al. [2], roughly the same method was used, except that the times were divided into categories according to the difficulty of the task. Sebrechts et al. [9] used a different categorization method where task execution times were divided into categories according to the subject's computer experience.
Time measurements can also be recorded in a somewhat reversed manner, as Pratt and Fagan [7] did. They reported how many results users found in four minutes. This is close to measuring speed (achievement / time), but the normalization to four minutes is arbitrary and does not facilitate comparisons optimally. In a study by Dennis et al. [1], the time to bookmark a result page was measured and only one page was bookmarked per task. This setup makes the comparison fairly easy since the reported time tells how much time it takes to find a result with the given user interface. However, this desirable feature was caused by the setup where only one result was chosen, and other types of tasks were not considered.
Accuracy Measures
Accuracy measures are based on the notion of relevance, which is typically determined by independent judges in relation to a task. In information retrieval studies, accuracy is typically a combination of two measures: recall and precision. Recall describes the amount of relevant results found in a search in relation to all the relevant results in the collection. As a perfect query in terms of recall could return all the entries in the collection, it is counterbalanced with the precision measure. Precision describes how clean the result set is by describing the density of relevant results in it.
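As a concrete sketch of these accuracy measures, and of the selection-based ("interactive") variants discussed next, the following Python functions may help; the names and argument conventions are our own illustration rather than anything defined in the cited studies:

def recall(relevant_retrieved, relevant_in_collection):
    """Query-based recall: share of all relevant collection items that the search returned."""
    return relevant_retrieved / relevant_in_collection if relevant_in_collection else 0.0

def precision(relevant_retrieved, total_retrieved):
    """Query-based precision: density of relevant items in the returned result set."""
    return relevant_retrieved / total_retrieved if total_retrieved else 0.0

def interactive_recall(relevant_selected, relevant_in_result_set):
    """Selection-based recall: share of the relevant results in the set that the user selected."""
    return relevant_selected / relevant_in_result_set if relevant_in_result_set else 0.0

def interactive_precision(relevant_selected, total_selected):
    """Selection-based precision: density of relevant results among the user's selections."""
    return relevant_selected / total_selected if total_selected else 0.0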
Precision, like recall, is expressed by a percentage number which states the proportion of relevant targets in the result set.
Recall and precision measures are designed for measuring the success of a query. In contrast, when the success of the result evaluation process is studied, the users need to complete the process by selecting the interesting results. Measures are then based on analyzing the true success of the selections. Recall and precision measures are used here too, but the calculation is different. In these cases recall describes the amount of relevant results selected in relation to the amount of them in the result set. Precision, on the other hand, describes the density of relevant results among the selected results.
Veerasamy and Heikes [13] used such measures (called interactive recall and interactive precision) in their study of a graphical display of retrieval results. They asked participants to judge the relevance of the results in order to get the users' idea of the document relevance. Pirolli et al. [6] used only the precision measure in their test of the Scatter/Gather system. The selection of the results was implemented by a save functionality. Dennis et al. [1] used an approach where they reported the average relevance of the results found with a given user interface. Relevant results were indicated by bookmarking them. Further variations of the measures where user interaction is taken into account in accuracy evaluation were proposed and used by Veerasamy and Belkin [12].
Information Foraging Theory
Stuart Card, Peter Pirolli and colleagues have done extensive research on information foraging theory [5] at Xerox PARC, and the results are relevant here as well. In its conventional form, information foraging theory states that the rate of gain of valuable information (R) can be calculated using the formula:

R = G / (T_B + T_W)    (1)

In the formula, G is the amount of gained information, T_B is the total time spent between information patches and T_W is the total time spent within an information patch [5]. An information patch is understood to mean a collection of information such as a document collection, a search result collection or even a single document that can be seen to be a collection of information that requires some actions for digesting the information. In the information foraging process, the forager navigates first between patches and then finds actual meaningful information within a patch. The process is then started over by seeking a new patch.
If we discard the separation of two different types of activities (between and within patches) for simplicity, equation 1 states the information gain rate per time unit. This matches common practices in the field and is the basis for our proposed measurements as well.
The gap that is left in information foraging theory in relation to making concrete measurements is the definition of information gain. The gap is well justified, as the definition would unnecessarily reduce the scope of the theory. On the other hand, when we deal with concrete problems, we can be more specific and thus obtain preciseness. This is our approach here: we apply the basic relationships stated in information foraging theory and provide meaningful ways of measuring the gain. All this is done in the context of evaluating search user interfaces in the search result evaluation phase. We will get back to this topic in the discussions of the new measures to see their relationship to information foraging theory in more detail.
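The rate of equation 1 is simple to compute in code. The sketch below is our own illustration (the variable names are hypothetical); it also shows the simplified variant used in the rest of this paper, where the between-patch/within-patch distinction is dropped and only the total search time is counted:

def gain_rate(gain, time_between_patches, time_within_patches):
    """Equation 1: R = G / (T_B + T_W)."""
    total_time = time_between_patches + time_within_patches
    return gain / total_time if total_time else 0.0

def gain_rate_simplified(gain, total_minutes):
    """Simplified form: information gain per minute of total search time."""
    return gain / total_minutes if total_minutes else 0.0

# Example: 5 useful results gained with 0.3 min spent between patches and 0.7 min within them.
print(gain_rate(5, 0.3, 0.7))        # 5.0
print(gain_rate_simplified(5, 1.0))  # 5.0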
EXPERIMENT
We will evaluate the proposed measures using data from an experiment of ours. This experiment was conducted to evaluate a new search user interface idea by comparing it to the de facto standard solution.
Our proposed user interface used automatically calculated categories for facilitating the result access (Figure 1, left). As the categories we used the most common words and phrases found within the result titles and text summaries (snippets). A stop word list and a simple stemmer were used for improving the quality of the categories (e.g. discarding very common words such as 'and' or 'is'). As the category word (or phrase) selection was based solely on the word frequencies, the categories were neither exclusive nor exhaustive. There was a special built-in category for accessing all the results as one long list. The hypothesis behind the category user interface was that it would allow users to identify and locate interesting results more easily and quickly than the conventional solution.
The calculated categories were presented to the user as a list beside the actual result list. When a category was selected from the list, the result listing was filtered to display only those result items that contained the selected word or phrase. There were a total of 150 results that the user could access and from which the categories were computed.
Figure 1. Compared user interfaces in our experiment. Category user interface on the left, reference user interface on the right.
Participants
There were 20 volunteer participants (8 male, 12 female) in the experiment. Their average age was 35 years, varying from 19 to 57 years, and they were recruited from the local university. Almost all of the participants can be regarded as experienced computer users, but none of them was an information technology professional.
Apparatus
There were two user interfaces to access the search results:
1. The category interface (category UI, Figure 1, left) presented the users with a list of 15 automatically generated categories on the left side of the user interface. When the user selected a category, the corresponding results were shown on the right side of the user interface, much like in popular e-mail clients.
2. The reference interface (reference UI, Figure 1, right) was a Google web search engine imitation showing results in separate pages, ten results per page. The order of the results was defined by the search engine (Google). At the bottom of the window, there were controls to browse the pages in order (Previous and Next buttons) or in random order (a radio button for
As dependent variables we\nmeasured: 1) time to accomplish a task in seconds, 2)\nnumber of results selected for a task, 3) relevance of\nselected result in a three step scale (relevant, related, not\nrelevant), and 4) subjective attitudes towards the systems.\nThe experiments were carried out in a usability laboratory.\nOne experiment lasted approximately 45 minutes and\ncontained 18 (9+9) information seeking tasks in two\nblocks: one carried out with the category interface and the\nother using the reference interface. The order of the blocks\nand the tasks were counterbalanced between the\nparticipants. For each task, there was a ready-made query\nand users did not (re)formulate the queries themselves. This\nkind of restriction in the setup was necessary to properly\nfocus on measuring the success in the result evaluation\nphase of the search.\nThe actual task of the participant was to \"collect as many\nrelevant results for the information seeking task as possible\nas fast as you can\". The participants collected results by\nusing check boxes that were available beside each result\nitem (see Figure 1).\nIn the test situation there were two windows in the\ncomputer desktop. The task window displayed information\nseeking tasks for the participants who were instructed to\nfirst read the task description, then push the `Start' button\nin the task window and promptly proceed to accomplish the\ntask in the search window. Upon task completion\n(participant's own decision or time-out), the participants\nwere instructed to push the `Done' button in the task\nwindow. The time between `Start' and `Done' button\npresses was measured as the total time for the task. This\ntiming scheme was explained to the participants. Time for\neach task was limited to one minute.\nAccuracy measures are based on ratings done by the\nexperimenter (one person). The rating judgments were\nmade based solely on the task description and the very\nsame result title and summary texts that the participants\nsaw in the experiment. Actual result pages were not used\nbecause it would have added an extra variable into the\ndesign (result summary vs. page relation), which we did not\nwish. All the tasks had at least two defining concepts like in\n\"Find pictures of planet Mars\". For relevant results, all of\nthe concepts was required to be present in some form\n(different wording was of course allowed). Related results\nwere those where only the most dominant concept was\npresent (e.g. planet Mars). Rest of the results was\nconsidered to be not relevant.\n\nRESULTS\nFor comparing the proposed measures we present here the\nresults of our experiment using the conventional measures:\ntime, number of results, and precision. The time measure\ndid not reveal very interesting results, because the test setup\nlimited the total time for one task to one minute. Thus the\nmean times for conditions were close to each other: 56.6\nseconds (sd = 5.5) for the category UI and 58.3 seconds\n(sd = 3.5) for the reference UI. The difference is not\nstatistically significant as repeated measures analysis of\nvariance (ANOVA) gives F(1,19) = 3.65, ns.\nIn contrast, number of results revealed a difference. When\nusing the category UI the participants were able to find on\naverage 5.1 (sd = 2.1) results per task whereas using the\nreference UI yielded on average 3.9 (sd = 1.2) selections.\nThe difference is significant since ANOVA gives F(1,19) =\n9.24, p < .01.\nPrecision measure gave also a statistically significant\ndifference. 
When using the category UI, on average 65% (sd = 13) of the participants' selections were relevant in relation to the task. The corresponding number for the reference UI was 49% (sd = 15). ANOVA gave F(1,19) = 14.49, p < .01.
The results are compatible with other studies done with similar categorizing search user interfaces. For example, Pratt and Fagan [7] have also reported similar results in favor of a categorizing user interface. When categories work, they enhance the result evaluation process by reducing the number of items that need to be evaluated. Users find interesting looking categories and evaluate only the results within those categories. The concentration of relevant documents in the interesting categories is higher than in the whole result set.
SEARCH SPEED
In order to make the comparison of speed measures easier, we suggest a proportional measure. When the search time and number of results are combined into one measure, just like in measuring physical speed by kilometers or miles per hour, we get a search user interface search speed measure expressed in answers per minute (APM). It is calculated by dividing the number of answers found by the time it took to find them:

search speed = answers found / minutes searched    (2)

In relation to the ISO 9241-11 standard this is an efficiency measure, whereas the plain number of answers is a (simple) effectiveness measure. In terms of information foraging theory, we replace the G term in equation 1 with the number of results found, and the time is normalized to minutes. This concretizes the rate (R) in equation 1 to be answers per minute. The structure of equations 1 and 2 is essentially the same.
Whenever two (or more) measures are reduced into one, there is a risk of losing relevant information. This is the case here as well. The proposed measure does not make the distinction between a situation where one answer is found in 10 seconds and a situation where four answers are found in 40 seconds. In both cases the speed is 6 answers per minute and the details of the situation are lost in the measurement. However, we feel that the speed measure is nevertheless correct also in this case. The situation can be compared to driving 50 km/h for 10 or 40 minutes. The traveled distance is different, but the speed is the same. This means that the proposed speed measure does not apply in every situation, and attention must be paid to measurement selection.
The problem of reducing two measures into one has also been thoroughly discussed by Shumin Zhai [14] in the context of input devices. He points out that the reduction of two Fitts' law variables (a and b) in calculating the throughput of an input device leads to a measure that is dependent on the task. The same problem does not apply here as our situation is not related to Fitts' law. However, our measure is dependent on the task, but it is not dependent on the time used or the number of results collected like the previous measures.
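A minimal sketch of the search speed measure of equation 2, written in Python for illustration (the per-task log format and function names are our own assumptions, not taken from the paper):

def search_speed(answers_found, minutes_searched):
    """Equation 2: answers per minute (APM)."""
    return answers_found / minutes_searched if minutes_searched else 0.0

def mean_search_speed(task_logs):
    """Average APM over per-task logs given as (answers_found, seconds_searched) pairs."""
    speeds = [search_speed(answers, seconds / 60.0) for answers, seconds in task_logs]
    return sum(speeds) / len(speeds) if speeds else 0.0

# Example: two tasks, 5 answers found in 56.6 s and 4 answers found in 58.3 s.
print(mean_search_speed([(5, 56.6), (4, 58.3)]))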
Evaluation
In order to evaluate the suggested measure, it was applied to the results of the Scatter/Gather evaluation by Pirolli et al. [6]. In their experiment the task was to find relevant documents for a given topic. The table below summarizes the results (SS = similarity search, SG = scatter/gather):

Measurement                  SS      SG
Original
  Time used in minutes       10.10   30.64
  Number of answers          16.44   12.26
Search speed
  Answers per minute         1.62    0.40

The first two rows show the actual numbers reported in the paper while the third row shows the same results in answers per minute. It is arguably easier to understand the relationship between the two user interfaces from the normalized search speed measure. It communicates that the SS condition was roughly four times faster than the SG condition. The relation is hard to see from the original results. In addition, measurements can be easily related to one's own experiences with similar user interfaces because of the normalization.
In the second table below, the search speed measure is applied to the data from our own experiment. Here the difference between the raw numbers and the normalized measure is not as large as in the previous example because the time used for the tasks is roughly the same in both cases due to the test setup. Nevertheless, the suggested measure makes the comparison easier. Note also that the fairly large difference from the speeds in the experiment by Pirolli et al. is presumably due to the experiment set-up (tasks, conditions, equipment, etc.).

Measurement                  Category UI   Reference UI
Raw numbers
  Time used in minutes       0.94          0.97
  Number of answers          5.1           3.9
Search speed
  Answers per minute         5.4           4.0

When an analysis of variance is calculated on the answers per minute measure, we see a slightly stronger result compared to the conventional measures, where just the number of results revealed a significant difference. Here ANOVA gives F(1,19) = 11.3, p < .01. The slight increase in the F statistic is due to the combination of two measures that both have a difference in the same direction. In summary, search speed measures the same phenomena as the previously used measures (it is calculated from the same numbers) and it can make distinctions between the measured objects.
QUALIFIED SEARCH SPEED
Previously used recall and precision measures do not directly tell where possible speed differences come from or what the relation between speed and accuracy is. The suggested qualified search speed measure refines the search speed measure with categories of relevance to address this shortcoming. To keep the measure understandable and robust, we use only two or three categories of relevance. Like the previous measure, qualified search speed is also measured in answers per minute, with the distinction that the speed is calculated separately for each relevance category according to equation 3, where RC_i stands for relevance category i (typical categories are e.g. relevant and irrelevant).

qualified search speed_RC_i = answers_RC_i found / minutes searched    (3)

Note that the sum over all relevance categories equals the normal search speed.
When qualified search speed is described in information foraging terminology, we can see that the gain is now defined more precisely than with search speed. While search speed takes into account only the number of results, qualified search speed adds the quality of the results into the equation. In essence, this gives us a more accurate estimate of the gain of information, and thus a more accurate rate of information gain. Note that this shows also in the rate magnitude: the rate is now stated in (e.g.)
number of\nrelevant results per minute.\nEvaluation\nWhen the qualified search speed measure is applied to the\ndata of our experiment and compared to the simple measure\nof precision, a few observations can be made. First, the\nproposed measure preserves the statistically significant\n369\n\n\ndifference that was observed with the conventional\nprecision measure. ANOVA for the speed of acquiring\nrelevant results gives F(1,19) = 32.4, p < .01.\nSecond, both measures (Figure 2) convey roughly the same\ninformation about the precision of the user interfaces\nincluding: 1) with the category UI more than half of the\nselected results were relevant whereas with the reference\nUI about half of the results were relevant, and 2) using the\ncategory UI participants were more successful in terms of\nprecision. However, with the suggested qualified search\nspeed measure, the amplitude of difference in precision is\nnot obvious and thus the new measure cannot replace the\nold one.\nThird, in addition to what can be seen in the precision\nchart, the qualified search speed chart (Figure 2) reveals\nsome interesting data. It shows that the improvement in\nspeed is due to the fact that participants have been able to\nselect more relevant results while the proportion of not\nrelevant results decreased a bit. The same information\ncould surely be acquired by combining conventional speed\nand precision measures, but when the information is visible\nin one figure it is arguably easier to find such a\nrelationship. Note also that although the new measure is\nmainly concerned about the accuracy of use, it informs the\nreader simultaneously about the speed of use as well.\nFigure 3 makes a comparison between the new measure\nand the original precision measure using the data collected\nin the Scatter/Gather experiment [6]. Here it is worthwhile\nto note that even though precision measures are close to\nthose in the previous example, the qualified search speed\nmeasure reveals large differences between the conditions.\nQualified search speed seems to reveal the tradeoff between\naccuracy and speed convincingly in this case. We can also\nnotice that both conditions here are much slower than those\nin Figure 2 as the qualified search speed is normalized just\nlike the simpler search speed.\nIt is notable that qualified search speed does not measure\nthe same phenomena as precision and thus they are not\nreplaceable. We can image a situation where high qualified\nspeed is associated with low precision and vice versa. In\nreality this could happen when users try to be very precise\nin one condition and very fast in another. On the other\nhand, we saw that qualified evaluation speed can make\nclear distinctions between user interfaces, which is a\ncompulsory quality for a useful measure.\n\nIMMEDIATE ACCURACY\nThe last suggested measure captures the success of typical\nweb search behavior. In such a task, the user wants to find a\npiece of information that would be good enough for an\ninformation need and overall speed and accuracy are not as\nimportant as quick success. The measure is called\nimmediate accuracy and it is expressed as a success rate.\nThe success rate states the proportion of cases where at\nleast one relevant result is found by the n\nth\nselection. For\napplying the measure, the order of each result selection\nmust be stored and the relevance of them must be judged\nagainst the task. 
The selections for each task and participant are then gone through in the order they were made, and the frequency of finding the first relevant result is calculated for each selection (first, second, and so on). When this figure is divided by the total number of observations (number of participants * number of tasks), we get the percentage of first relevant results found at each selection. Equation 4 shows the calculation more formally, where n stands for the nth selection.

immediate accuracy_n = number of first relevant results_n / total number of observations    (4)

When the figures calculated with equation 4 are plotted into a cumulative line chart (Figure 4), we can see when at least one relevant result is found on average. For example (in Figure 4), after the second selection at least one relevant result has been found in 79 % of the cases when using the category user interface. Notice also that the lines do not reach the 100 % value. This means that in some of the cases the users were not able to find any relevant results.
When looking back at information foraging theory, this measure takes us to a different approach compared to the previous ones. This measure abandons time as the limiting resource against which the gain is compared and replaces it with the selection ordinal (remember that the ISO standard leaves the choice of resource up to the domain). As this new resource is discrete in nature, expressing the measure as a single figure (rate) becomes hard, and thus, for example, a cumulative chart is preferred for easily interpretable figures. From another perspective of information foraging theory, we can say that immediate accuracy is a measure for estimating the beginning of the within-patch gain slope. Note that it is only an estimate of the beginning of the slope, as all subsequent relevant selections are discarded in this measure. In this view, we define an information patch to be a search result set.
Figure 3. Qualified search speed measure compared to precision measure in the Scatter/Gather study [6].
Figure 2. Qualified search speed measure compared to precision measure of data gathered in our own study.
Evaluation
The evaluation is based only on our own data because the measure requires information that is typically not reported in the publications. Figure 4 shows that the user orientates faster while using the category UI, as the first selection already produces a relevant result in 56 % of the cases. In contrast, the reference UI produces a relevant result in 40 % of the first selections. By the second selection, the difference is a bit greater, since in 79 % of the cases the users have found at least one relevant result with the category UI, while the corresponding number for the reference UI is 62 %.
In the analysis of cumulative data, the most interesting points are those where the difference between the compared measurements changes. Change points are most important because cumulative figures will preserve the difference if no further changes happen. In our case the difference is made at the first selection and remains virtually the same afterwards. This difference is statistically significant, as ANOVA gives F(1,19) = 12.5, p < .01, and it is preserved throughout the selections (F(1,19) ≥ 10.4, p < .01 for all subsequent selections).
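A small Python sketch of the immediate accuracy computation of equation 4 (the input format, one list of 0/1 relevance flags per participant/task observation in selection order, and the function name are our own illustrative assumptions):

def immediate_accuracy(selection_logs, max_rank=8):
    """Per-rank shares of first relevant results (equation 4) and their cumulative success rate."""
    if not selection_logs:
        return [], []
    first_hits = [0] * max_rank
    for flags in selection_logs:  # flags[i] is 1 if the (i+1)-th selection was relevant
        for i, flag in enumerate(flags[:max_rank]):
            if flag:
                first_hits[i] += 1
                break
    total = len(selection_logs)
    per_rank = [hits / total for hits in first_hits]
    cumulative, running = [], 0.0
    for share in per_rank:
        running += share
        cumulative.append(running)
    return per_rank, cumulative

# Example: three observations; the first relevant result is found at selection 1,
# at selection 3, and never, respectively.
print(immediate_accuracy([[1, 0, 1], [0, 0, 1], [0, 0, 0]]))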
Findings of Spink et al. [11] stated that users only select one or two results per query. Immediate accuracy allows us to see the success of the studied user interface in such a case. We can focus on a given selection and quickly see the success rate at that point. Note that this kind of information is not available using the conventional accuracy measures and straightforward speed measures.
Immediate Success Speed
Another fairly simple and obvious way of measuring immediate success would be to record the time to the first relevant result. We did try this measure as well, but found a problem.
In our experiment, the average time to find the first relevant result was practically the same in both cases (20 and 21 seconds for the category and reference UI, respectively) and there was no statistically significant difference. This could, of course, be the true situation, but the number of relevant results suggested the opposite.
The problem comes from the fact that the first relevant result is not always found. With the category UI users were not able to find a single relevant result for a task in 10% of the cases, whereas the same number for the reference UI was 21%. We felt that this is a big difference and that it should be visible in the measurement as well. However, we were not able to come up with a reasonable solution for normalizing the time measurement in this respect and thus the measurement is not promoted as such.
In addition, the results of Spink et al. [11] suggest that the time to the first relevant result is not very important for the search process. Since searchers tend to open only one or two results, time does not seem to be the limiting factor, but the number of result selections is. This also supports the choice of immediate accuracy over the time to the first relevant result.
Figure 4. Immediate accuracy of the category UI and reference UI. The measure shows the proportion of the cases where a relevant result has been found by the nth selection.
DISCUSSION
Our goal was to provide search user interface designers, researchers, and evaluators with additional measures that would complement the current ones. The first problem with the current measures is that result comparison is hard, even within one experiment. Proportional measures make within-study comparisons easy, and in addition they let readers relate their previous experience better to the presented results. We proposed the normalized search speed measure that is expressed in answers per minute. As the measure combines two figures (number of answers and time searched) into one proportional number, it makes comparisons within an experiment easy and between experiments a bit more feasible.
The second shortcoming of the current measures is that it is difficult to see the tradeoff between speed and accuracy. To address this problem, we proposed the qualified search speed measure that divides the search speed measure into relevance categories. The measure allows readers to see what the source of the speed is in terms of accuracy. In the evaluation we showed that conventional measures may tell only half of the story. For instance, in the case of the Scatter/Gather experiment the precision measure showed only a moderate difference between the systems whereas qualified speed revealed a vast difference in the gain of relevant results.
Combining speed and\naccuracy measures is particularly effective in such a case as\nit eliminates the need to mentally combine the two\nmeasures (speed and accuracy).\nThe third weakness of the current measures is their inability\nto capture users' success in typical web search behavior\nwhere the first good enough result is looked for. We\nproposed the immediate accuracy measure to solve this\nflaw. Immediate accuracy shows the proportion of the cases\nwhere the users are able to find at least one relevant result\nper n\nth\nresult selection. It allows readers to see how well\nand how fast the users can orient themselves to the task\nwith the given user interface. As the measurements are\nmade based on finding the first relevant result, the reader\ncan compare how well different solutions support users'\ngoal of finding the first relevant answer (and presumably\nfew others as well) to the search task.\nThe proposed measures are not intended to replace the old\nmeasures, but rather to complement them. They lessen the\nmental burden posed to the reader as important information\nof different type (e.g. speed, accuracy) is combined into\none proportional measure. In summary, the proposed\nmeasures capture important characteristics of search user\ninterface usability and communicate them effectively.\nThe issue of making comparisons between experiments is\nnot completely solved by these new measures. We feel that\nthe problem is not in the properties of the new measures but\nin the nature of the phenomena to be measured. In the\ncontext of search user interfaces the test settings have a\nhuge effect on the results that cannot be solved simply with\nnew measures. One solution for the problem could be test\nsetup standardization. In the TREC interactive track such\nan effort have been taken, but it seems that the wide variety\nof research questions connected to searching cannot be\naddressed with a single standard test setup.\n\nACKNOWLEDGMENTS\nThis work was supported by the Graduate School in User\nCentered Information Technology (UCIT). I would like to\nthank Scott MacKenzie, Kari-Jouko Rih, Poika Isokoski,\nand Natalie Jhaveri for invaluable comments and\nencouragement.\n\nREFERENCES\n1.\nDennis, S., Bruza, P., McArthur, R. Web Searching: A\nProcess-Oriented Experimental Study of Three\nInteractive Search Paradigms. Journal of the American\nSociety for Information Science and Technology, Vol.\n53, No. 2, 2002, 120-133.\n2.\nDumais, S., Cutrell, E., Chen, H. Optimizing Search by\nShowing Results in Context. Proceedings of ACM\nCHI'01 (Seattle, USA), ACM Press, 2001, 277-284.\n3.\nFrkjr, E., Hertzum, M., Hornbk, K. Measuring\nUsability: Are Effectiveness, Efficiency, and\nSatisfaction Really Correlated? Proceedings of ACM\nCHI'2000 (The Hague, Netherlands), ACM Press,\n2000, 345-352.\n4.\nISO 9241-11: Ergonomic requirements for office work\nwith visual display terminals (VDTs) - Part 11:\nGuidance on usability, International Organization for\nStandardization, March 1998.\n5.\nPirolli, P. and Card, S. Information Foraging.\nPsychological Review, 1999, Vol. 106, No. 4, 643-675.\n6.\nPirolli, P., Schank, P., Hearst, M., Diehl, C.\nScatter/Gather Browsing Communicates the Topic\nStructure of a Very Large Text Collection. Proceedings\nof ACM CHI'96 (Vancouver, Canada), ACM Press,\n1996, 213-220.\n7.\nPratt, W., Fagan, L. The Usefulness of Dynamically\nCategorizing Search Results. Journal of the American\nMedical Informatics Association, Vol. 7, No. 6,\nNov/Dec 2000, 605-617.\n8.\nSaracevic, T. 
Evaluation of Evaluation in Information\nRetrieval. Proceedings of ACM SIGIR'95 (Seattle,\nUSA), ACM Press, 1995, 138-146.\n9.\nSebrechts, M., Vasilakis, J., Miller, M., Cugini, J.,\nLaskowski, S. Visualization of Search Results: A\nComparative Evaluation of Text, 2D, and 3D Interfaces.\nProceedings of ACM SIGIR'99 (Berkeley, USA), ACM\nPress, 1999.\n10.\nShneiderman, B., Byrd, D., Croft, B. Clarifying Search:\nA User-Interface Framework for Text Searches. D-Lib\nMagazine, January 1997.\n11.\nSpink, A., Wolfram, D., Jansen, M., and Saracevic, T.:\nSearching the Web: The Public and Their Queries.\nJournal of the American Society for Information Science\nand Technology, 2001, Vol. 52, No. 6, 226-234.\n12.\nVeerasamy, A., Belkin, N. Evaluation of a Tool for\nVisualization of Information Retrieval Results.\nProceedings of ACM SIGIR'96 (Zurich, Switzerland),\nACM Press, 1996, 85-92.\n13.\nVeerasamy, A., Heikes, R. Effectiveness of a Graphical\nDisplay of Retrieval Results. Proceedings of ACM\nSIGIR'97 (Philadelphia, USA), ACM Press, 1997, 236-244\n.\n14.\nZhai, S. On the validity of Throughput as a\nCharacteristic of Computer Input. IBM Research\nReport, RJ 10253, IBM Research Division. August\n2002.\n\n\n372", "keywords": "usability evaluation;Search user interface;speed;usability measure;accuracy"} {"name": "153", "title": "Protected Interactive 3D Graphics Via Remote Rendering", "abstract": "Valuable 3D graphical models, such as high-resolution digital scans of cultural heritage objects, may require protection to prevent piracy or misuse, while still allowing for interactive display and manipulation by a widespread audience. We have investigated techniques for protecting 3D graphics content, and we have developed a remote rendering system suitable for sharing archives of 3D models while protecting the 3D geometry from unauthorized extraction . The system consists of a 3D viewer client that includes low-resolution versions of the 3D models, and a rendering server that renders and returns images of high-resolution models according to client requests. The server implements a number of defenses to guard against 3D reconstruction attacks, such as monitoring and limiting request streams, and slightly perturbing and distorting the rendered images. We consider several possible types of reconstruction attacks on such a rendering server, and we examine how these attacks can be defended against without excessively compromising the interactive experience for non-malicious users.", "fulltext": "Protecting digital information from theft and misuse, a subset of the\ndigital rights management problem, has been the subject of much\nresearch and many attempted practical solutions. Efforts to protect\nsoftware, databases, digital images, digital music files, and other\ncontent are ubiquitous, and data security is a primary concern in\nthe design of modern computing systems and processes. However,\nthere have been few technological solutions to specifically protect\ninteractive 3D graphics content.\nThe demand for protecting 3D graphical models is significant. Contemporary\n3D digitization technologies allow for the reliable and\nefficient creation of accurate 3D models of many physical objects,\nand a number of sizable archives of such objects have been created.\nThe Stanford Digital Michelangelo Project [Levoy et al. 2000], for\nexample, has created a high-resolution digital archive of 10 large\nstatues of Michelangelo, including the David. 
These statues represent\nthe artistic patrimony of Italy's cultural institutions, and the\ncontract with the Italian authorities permits the distribution of the\n3D models only to established scholars for non-commercial use.\nThough all parties involved would like the models to be widely\navailable for constructive purposes, were the digital 3D model of\nthe David to be distributed in an unprotected fashion, it would soon\nbe pirated, and simulated marble replicas would be manufactured\noutside the provisions of the parties authorizing the creation of the\nmodel.\nDigital 3D archives of archaeological artifacts are another example\nof 3D models often requiring piracy protection. Curators of such\nartifact collections are increasingly turning to 3D digitization as a\nway to preserve and widen scholarly usage of their holdings, by allowing\nvirtual display and object examination over the Internet, for\nexample. However, the owners and maintainers of the artifacts often\ndesire to maintain strict control over the use of the 3D data and\nto guard against theft. An example of such a collection is [Stanford\nDigital Forma Urbis Project 2004], in which over one thousand\nfragments of an ancient Roman map were digitized and are being\nmade available through a web-based database, providing that the\n3D models can be adequately protected.\nOther application areas such as entertainment and online commerce\nmay also require protection for 3D graphics content. 3D character\nmodels developed for use in motion pictures are often repurposed\nfor widespread use in video games and promotional materials. Such\nmodels represent valuable intellectual property, and solutions for\npreventing their piracy from these interactive applications would be\nvery useful. In some cases, such as 3D body scans of high profile\nactors, content developers may be reluctant to distribute the 3D\nmodels without sufficient control over reuse. In the area of online\ncommerce, a number of Internet content developers have reported\nan unwillingness of clients to pursue 3D graphics projects specifically\ndue to the lack of ability to prevent theft of the 3D content\n[Ressler 2001].\nPrior technical research in the area of intellectual property protections\nfor 3D data has primarily concentrated on 3D digital watermarking\ntechniques. Over 30 papers in the last 7 years describe\nsteganographic approaches to embedding hidden information into\n3D graphical models, with varying degrees of robustness to attacks\nthat seek to disable watermarks through alterations to the 3D shape\nor data representation. Many of the most successful 3D watermarking\nschemes are based on spread-spectrum frequency domain\ntransformations, which embed watermarks at multiple scales by introducing\ncontrolled perturbations into the coordinates of the 3D\nmodel vertices [Praun et al. 1999; Ohbuchi et al. 2002]. Complementary\ntechnologies search collections of 3D models and examine\nthem for the presence of digital watermarks, in an effort to detect\npiracy.\nWe believe that for the digital representations of highly valuable\n3D objects such as cultural heritage artifacts, it is not sufficient to\ndetect piracy after the fact; we must instead prevent it. The computer\nindustry has experimented with a number of techniques for\npreventing unauthorized use and copying of computer software and\ndigital data. 
These techniques have included physical dongles, software\naccess keys, node-locked licensing schemes, copy prevention\nsoftware, program and data obfuscation, and encryption with embedded\nkeys. Most such schemes are either broken or bypassed by\ndetermined attackers, and cause undue inconvenience and expense\nfor non-malicious users. High-profile data and software is particularly\nsusceptible to being quickly targeted by attackers.\nFortunately, 3D graphics data differs from most other forms of digital\nmedia in that the presentation format, 2D images, is fundamen-tally\ndifferent from the underlying representation (3D geometry).\nUsually, 3D graphics data is displayed as a projection onto a 2D\ndisplay device, resulting in tremendous information loss for single\nviews. This property supports an optimistic view that 3D graphics\nsystems can be designed that maintain usability and utility, while\nnot being as vulnerable to piracy as other types of digital content.\nIn this paper, we address the problem of preventing the piracy of 3D\nmodels, while still allowing for their interactive display and manipulation\n. Specifically, we attempt to provide a solution for maintainers\nof large collections of high-resolution static 3D models, such as\nthe digitized cultural heritage artifacts described above. The methods\nwe develop aim to protect both the geometric shape of the 3D\nmodels, as well as their particular geometric representation, such\nas the 3D mesh vertex coordinates, surface normals, and connectivity\ninformation. We accept that the coarse shape of visible objects\ncan be easily reproduced regardless of our protection efforts, so we\nconcentrate on defending the high-resolution geometric details of\n3D models, which may have been most expensive to model or measure\n(perhaps requiring special access and advanced 3D digitizing\ntechnology), and which are most valuable in exhibiting fidelity to\nthe original object.\nIn the following paper sections, we first examine the graphics\npipeline to identify its possible points of attack, and then propose\nseveral possible techniques for protecting 3D graphics data from\nsuch attacks. Our experimentation with these techniques led us to\nconclude that remote rendering provides the best solution for protecting\n3D graphical models, and we describe the design and implementation\nof a prototype system in Section 4. Section 5 describes\nsome types of reconstruction attacks against such a remote rendering\nsystem and the initial results of our efforts to guard against\nthem.\nPossible Attacks in the Graphics Pipeline\nFigure 1 shows a simple abstraction of the graphics pipeline for\npurposes of identifying possible attacks to recover 3D geometry.\nWe note several places in the pipeline where attacks may occur:\n3D model file reverse-engineering. Fig. 1(a). 3D graphics models\nare typically distributed to users in data streams such as files in\ncommon file formats. One approach to protecting the data is to\nobfuscate or encrypt the data file. If the user has full access to the\ndata file, such encryptions can be reverse-engineered and broken,\nand the 3D geometry data is then completely unprotected.\nTampering with the viewing application. Fig. 1(b). A 3D viewer\napplication is typically used to display the 3D model and allow for\nits manipulation. Techniques such program tracing, memory dumping\n, and code replacement are practiced by attackers to obtain access\nto data in use by application programs.\nGraphics driver tampering. Fig. 1(c). 
Because the 3D geometry\nusually passes through the graphics driver software on its way to\nthe GPU, the driver is vulnerable to tampering. Attackers can replace\ngraphics drivers with malicious or instrumented versions to\ncapture streams of 3D vertex data, for example. Such replacement\ndrivers are widely distributed for purposes of tracing and debugging\ngraphics programs.\nReconstruction from the framebuffer. Fig. 1(d). Because the\nframebuffer holds the result of the rendered scene, its contents can\nbe used by sophisticated attackers to reconstruct the model geometry\n, using computer vision 3D reconstruction techniques. The\nFigure 1: Abstracted graphics pipeline showing possible attack locations\n(a-e). These attacks are described in the text.\nframebuffer contents may even include depth values for each pixel,\nand attackers may have precise control over the rendering parameters\nused to create the scene (viewing and projection transformations\n, lighting, etc.). This potentially creates a perfect opportunity\nfor computer vision reconstruction, as the synthetic model data and\ncontrolled parameters do not suffer from the noise, calibration, and\nimprecision problems that make robust real world vision with real\nsensors very difficult.\nReconstruction from the final image display. Fig. 1(e). Regardless\nof whatever protections a graphics system can guarantee\nthroughout the pipeline, the rendered images finally displayed to\nthe user are accessible to attackers. Just as audio signals may be\nrecorded by external devices when sound is played through speakers\n, the video signals or images displayed on a computer monitor\nmay be recorded with a variety of video devices. The images so\ngathered may be used as input to computer vision reconstruction\nattacks such as those possible when the attacker has access to the\nframebuffer itself, though the images may be of degraded quality,\nunless a perfect digital video signal (such as DVI) is available.\nTechniques for Protecting 3D Graphics\nIn light of the possible attacks in the graphics pipeline as described\nin the previous section, we have considered a number of approaches\nfor sharing and rendering protected 3D graphics.\nSoftware-only rendering. A 3D graphics viewing system that does\nnot make use of hardware acceleration may be easier to protect from\nthe application programmer's point of view. Displaying graphics\nwith a GPU can require transferring the graphics data in precisely\nknown and open formats, through a graphics driver and hardware\npath that is often out of the programmer's control. A custom 3D\nviewing application with software rendering allows the 3D content\ndistributor to encrypt or obfuscate the data in a specific manner, all\nthe way through the graphics pipeline until display.\nHybrid hardware/software rendering. Hybrid hardware and software\nrendering schemes can be used to take at least some advantage\nof hardware accelerated rendering, while benefiting from software\nrendering's protections as described above. In one such scheme, a\nsmall but critically important portion of a protected model's geometry\n(such as the nose of a face) is rendered in software, while the\nrest of the model is rendered normally with the accelerated GPU\nhardware. 
This technique serves as a deterrent to attackers tampering\nwith the graphics drivers or hardware path, but the two-phase\ndrawing with readback of the color and depth buffers can incur a\n696\nperformance hit, and may require special treatment to avoid artifacts\non the border of the composition of the two images.\nIn another hybrid rendering scheme, the 3D geometry is transformed\nand per-vertex lighting computations are performed in software\n. The depth values computed for each vertex are distorted in\na manner that still preserves the correct relative depth ordering,\nwhile concealing the actual model geometry as much as possible.\nThe GPU is then used to complete rendering, performing rasteri-zation\n, texturing, etc. Such a technique potentially keeps the 3D\nvertex stream hidden from attackers, but the distortions of the depth\nbuffer values may impair certain graphics operations (fog computation\n, some shadow techniques), and the geometry may need to be\ncoarsely depth sorted so that Z-interpolation can still be performed\nin a linear space.\nDeformations of the geometry. Small deformations in large 2D\nimages displayed on the Internet are sometimes used as a defense\nagainst image theft; zoomed higher resolution sub-images with\nvarying deformations cannot be captured and easily reassembled\ninto a whole. A similar idea can be used with 3D data: subtle 3D\ndeformations are applied to geometry before the vertices are passed\nto the graphics driver. The deformations are chosen so as to vary\nsmoothly as the view of the model changes, and to prohibit recovery\nof the original coordinates by averaging the deformations over\ntime. Even if an attacker is able to access the stream of 3D data after\nit is deformed, they will encounter great difficulty reconstructing\na high-resolution version of the whole model due to the distortions\nthat have been introduced.\nHardware decryption in the GPU. One sound approach to providing\nfor protected 3D graphics is to encrypt the 3D model data with\npublic-key encryption at creation time, and then implement custom\nGPUs that accept encrypted data, and perform on-chip decryption\nand rendering. Additional system-level protections would need to\nbe implemented to prevent readback of framebuffer and other video\nmemory, and to place potential restrictions on the command stream\nsent to the GPU, in order to prevent recovery of the 3D data.\nImage-based rendering. Since our goal is to protect the 3D geometry\nof graphic models, one technique is to distribute the models\nusing image-based representations, which do not explicitly include\nthe complete geometry data. Examples of such representations\ninclude light fields and Lumigraphs [Levoy and Hanrahan\n1996; Gortler et al. 1996], both of which are highly amenable to\ninteractive display.\nRemote rendering. A final approach to secure 3D graphics is to\nretain the 3D model data on a secure server, under the control of\nthe content owner, and pass only 2D rendered images of the models\nback to client requests. Very low-resolution versions of the models,\nfor which piracy is not a concern, can be distributed with special\nclient programs to allow for interactive performance during manipulation\nof the 3D model. This method relies on good network\nbandwidth between the client and server, and may require significant\nserver resources to do the rendering for all client requests, but\nit is vulnerable primarily only to reconstruction attacks.\nDiscussion. 
We have experimented with several of the 3D model\nprotection approaches described above. For example, our first protected\n3D model viewer was an encrypted version of the \"QS-plat\"\n[Rusinkiewicz and Levoy 2000] point-based rendering system\n, which omits geometric connectivity information.\nThe 3D\nmodel files were encrypted using a strong symmetric block cipher\nscheme, and the decryption key was hidden in a heavily obfuscated\n3D model viewer program, using modern program obfuscation\ntechniques [Collberg and Thomborson 2000]. Vertex data was\ndecrypted on demand during rendering, so that only a very small\nportion of the decrypted model was ever in memory, and only software\nrendering modes were used.\nUnfortunately, systems such as this ultimately rely on \"security\nthrough obfuscation,\" which is theoretically unsound from a computer\nsecurity point of view. Given enough time and resources, an\nattacker will be able to discover the embedded encryption key or\notherwise reverse-engineer the protections on the 3D data. For this\nreason, any of the 3D graphics protection techniques that make the\nactual 3D data available to potential attackers in software can be\nbroken [Schneier 2000]. It is possible that future \"trusted comput-ing\"\nplatforms for general purpose computers will be available that\nmake software tampering difficult or impossible, but such systems\nare not widely deployed today. Similarly, the idea of a GPU with\ndecryption capability has theoretical merit, but it will be some years\nbefore such hardware is widely available for standard PC computing\nenvironments, if ever.\nThus, for providing practical, robust, anti-piracy protections for 3D\ndata, we gave strongest consideration to purely image-based representations\nand to remote rendering. Distributing light fields at\nthe high resolutions necessary would involve huge, unwieldy file\nsizes, would not allow for any geometric operations on the data\n(such as surface measurements performed by archaeologists), and\nwould still give attackers unlimited access to the light field for purposes\nof performing 3D reconstruction attacks using computer vision\nalgorithms. For these reasons, we finally concluded that the\nlast technique, remote rendering, offers the best solution for protecting\ninteractive 3D graphics content.\nRemote rendering has been used before in networked environments\nfor 3D visualization, although we are not aware of a system specifically\ndesigned to use remote rendering for purposes of security\nand 3D content protection. Remote rendering systems have been\npreviously implemented to take advantage of off-site specialized\nrendering capabilities not available in client systems, such as intensive\nvolume rendering [Engel et al. 
2000], and researchers have developed special algorithmic approaches to support efficient distribution of rendering loads and data transmission between rendering servers and clients [Levoy 1995; Yoon and Neumann 2000]. Remote rendering of 2D graphical content is common for Internet services such as online map sites; only small portions of the whole database are viewed by users at one time, and protection of the entire 2D data corpus from theft via image harvesting may be a factor in the design of these systems.
Remote Rendering System
To test our ideas for providing controlled, protected interactive access to collections of 3D graphics models, we have implemented a remote rendering system with a client-server architecture, as described below.
4.1 Client Description
Users of our protected graphics system employ a specially-designed 3D viewing program to interactively view protected 3D content. This client program is implemented as an OpenGL and wxWindows-based 3D viewer, with menus and GUI dialogs to control various viewing and networking parameters (Figure 2). The client program includes very low-resolution, decimated versions of the 3D models, which can be interactively rotated, zoomed, and re-lit by the user in real-time. When the user stops manipulating the low-resolution model, detected via a "mouse up" event, the client program queries the remote rendering server via the network for a high-resolution rendered image corresponding to the selected rendering parameters.
Figure 2: Screenshot of the client program.
These parameters include the 3D model name, viewpoint position and orientation, and lighting conditions. When the server passes the rendered image back to the client program, it replaces the low-resolution rendering seen by the user (Figure 3). On computer networks with reasonably low latencies, the user thus has the impression of manipulating a high-resolution version of the model. In typical usage for cultural heritage artifacts, we use models with approximately 10,000 polygons for the low resolution version, whereas the server-side models often contain tens of millions of polygons. Such low-resolution model complexities are of little value to potential thieves, yet still provide enough clues for the user to navigate. The client viewer could be further extended to cache the most recent images returned from the server and projectively texture map them onto the low-resolution model as long as they remain valid during subsequent rotation and zooming actions.
4.2 Server Description
The remote rendering server receives rendering requests from users' client programs, renders corresponding images, and passes them back to the clients. The rendering server is implemented as a module running under the Apache 2.0 HTTP Server; as such, the module communicates with client programs using the standard HTTP protocol, and takes advantage of the wide variety of access protection and monitoring tools built into Apache. The rendering server module is based upon the FastCGI Apache module, and allows for multiple rendering processes to be spread across any number of server hardware nodes.
As render requests are received from clients, the rendering server checks their validity and dispatches the valid requests to a GPU for OpenGL hardware-accelerated rendering. The rendered images are read back from the framebuffer, compressed using JPEG compression, and returned to the client.
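To make this request path concrete, the following is a minimal sketch of how such a validate-render-compress loop might be organized. It is not the actual ScanView server code: the request fields, the model whitelist, the minimum surface offset, and the `renderer`/`jpeg_encode` callables are illustrative assumptions standing in for the OpenGL and libjpeg steps described above.

```python
from dataclasses import dataclass

# Hypothetical request structure; field names are assumptions, not the real protocol.
@dataclass
class RenderRequest:
    model: str
    viewpoint: tuple        # camera position (x, y, z)
    orientation: tuple      # camera rotation, e.g. a quaternion
    light_direction: tuple  # requested primary light direction

ALLOWED_MODELS = {"david"}      # assumed server-side whitelist
MIN_SURFACE_OFFSET = 0.5        # assumed minimum viewpoint-to-surface distance

def is_valid(req: RenderRequest, distance_to_surface: float) -> bool:
    """Reject requests that fall outside the constrained viewing conditions."""
    if req.model not in ALLOWED_MODELS:
        return False
    # Viewpoints too close to the surface are refused (see the server defenses below).
    return distance_to_surface >= MIN_SURFACE_OFFSET

def handle_request(req: RenderRequest, renderer, jpeg_encode):
    """Validate, render on the GPU, read back the framebuffer, compress, return JPEG bytes."""
    if not is_valid(req, renderer.distance_to_surface(req)):
        return None
    framebuffer = renderer.render(req)           # rendering + readback: ~100 ms in the deployed system
    return jpeg_encode(framebuffer, quality=50)  # JPEG compression: ~25 ms, quality cap as quoted below
```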
If multiple requests from the same client are pending (such as if the user rapidly changes views while on a slow network), earlier requests are discarded, and only the most recent is rendered. The server uses level-of-detail techniques to speed the rendering of highly complex models, and lower level-of-detail renderings can be used during times of high server load to maintain high throughput rates. In practice, an individual server node with a Pentium 4 CPU and an NVIDIA GeForce4 video card can handle a maximum of 8 typical client requests per second; the bottlenecks are in the rendering and readback (about 100 milliseconds), and in the JPEG compression (approximately 25 milliseconds). Incoming request sizes are about 700 bytes each, and the images returned from our deployed servers average 30 kB per request.
Figure 3: Client-side low resolution (left) and server-side high resolution (right) model renderings.
4.3 Server Defenses
In Section 2, we enumerated several possible places in the graphics pipeline that an attacker could steal 3D graphics data. The benefit of using remote rendering is that it leaves only 3D reconstruction from 2D images in the framebuffer or display device as possible attacks. General 3D reconstruction from images constitutes a very difficult computer vision problem, as evidenced by the great amount of research effort being expended to design and build robust computer vision systems. However, synthetic 3D graphics renderings can be particularly susceptible to reconstruction because the attacker may be able to exactly specify the parameters used to create the images, there is a low human cost to harvest a large number of images, and synthetic images are potentially perfect, with no sensor noise or miscalibration errors. Thus, it is still necessary to defend the remote rendering system from reconstruction attacks; below, we describe a number of such defenses that we have implemented in combination for our server.
Session-based defenses. Client programs that access the remote rendering system are uniquely identified during the course of a usage session. This allows the server to monitor and track the specific sequence of rendering requests made by each client. Automatic analysis of the server logs allows suspicious request streams to be classified, such as an unusually high number of requests per unit time, or a particular pattern of requests that is indicative of an image harvesting program. High quality computer vision reconstructions often require a large number of images that densely sample the space of possible views, so we are able to effectively identify such access patterns and terminate service to those clients. We can optionally require recurrent user authentication in order to further deter some image harvesting attacks, although a coalition of users mounting a low-rate distributed attack from multiple IP addresses could still defeat such session-based defenses.
Obfuscation. Although we do not rely on obfuscation to protect the 3D model data, we do use obfuscation techniques on the client side of the system to discourage and slow down certain attacks. The low-resolution models that are distributed with the client viewer program are encrypted using an RC4-variant stream cipher, and the keys are embedded in the viewer and heavily obfuscated. The rendering request messages sent from the client to the server are also encrypted with heavily obfuscated keys.
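As a purely illustrative sketch of that message flow: the request carries only the small set of viewing parameters (about 700 bytes in the deployed system) and is stream-encrypted before transmission. The RC4-variant cipher and key handling used in the real client are not specified here, so a hash-derived keystream stands in for the cipher, and the field names are assumptions.

```python
import hashlib
import json

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from the key (a stand-in for the unspecified
    RC4-variant stream cipher, not a recommendation for real-world use)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_request(params: dict, key: bytes) -> bytes:
    """Serialize the rendering parameters and XOR them with the keystream."""
    plaintext = json.dumps(params).encode()
    ks = keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# Hypothetical request; the actual parameter names are not given in the text.
request = {
    "model": "david",
    "viewpoint": [1.2, 0.4, 3.0],
    "orientation": [0.0, 0.7, 0.0, 0.7],
    "light_direction": [0.3, 0.8, 0.5],
}
ciphertext = encrypt_request(request, key=b"obfuscated-client-key")
```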
These encryptions simply serve as another line of defense; even if they were broken, attackers would still not be able to gain access to the high resolution 3D data except through reconstruction from 2D images.
Limitations on valid rendering requests. As a further defense, we provide the capability in our client and remote server to constrain the viewing conditions. Some models may have particular "stayout" regions defined that disallow certain viewing and lighting angles, thus keeping attackers from being able to reconstruct a complete model. For the particular purpose of defending against the enumeration attacks described in Section 5.1, we put restrictions on the class of projection transformations allowed to be requested by users (requiring a perspective projection with particular fixed field of view and near and far planes), and we prevent viewpoints within a small offset of the model surface.
Perturbations and distortions. Passive 3D computer vision reconstructions of real-world objects from real-world images are usually of relatively poor quality compared to the original object. This failure inspires the belief that we can protect our synthetically rendered models from reconstruction by introducing into the images the same types of obstacles known to plague vision algorithms. The particular perturbations and distortions that we use are described below; we apply these defenses to the images only to the degree that they do not distract the user viewing the models. Additionally, these defenses are applied in a pseudorandomly generated manner for each different rendering request, so that attackers cannot systematically determine and reverse their effects, even if the specific form of the defenses applied is known (such as if the source code for the rendering server is available). Rendering requests with identical parameters are mapped to the same set of perturbations, in order to deter attacks which attempt to defeat these defenses by averaging multiple images obtained under the same viewing conditions.
Perturbed viewing parameters. We pseudorandomly introduce subtle perturbations into the view transformation matrix for the images rendered by the server; these perturbations have the effect of slightly rotating, translating, scaling, and shearing the model. The range of these distortions is bounded such that no point in the rendered image is further than either m object space units or n pixels from its corresponding point in an unperturbed view. In practice, we generally set m proportional to the size of the model's geometry being protected, and use values of n = 15 pixels, as experience has shown that users can be distracted by larger shifts between consecutively displayed images.
Perturbed lighting parameters. We pseudorandomly introduce subtle perturbations into the lighting parameters used to render the images; these perturbations include modifying the lighting direction specified in the client request, as well as addition of randomly changing secondary lighting to illuminate the model. Users are somewhat sensitive to shifts in the overall scene intensity and shading, so the primary light direction perturbations used are generally fairly small (maximum of 10° for typical models, which are rendered using the OpenGL local lighting model).
High-frequency noise added to the images. We introduce two types of high-frequency noise artifacts into the rendered images.
The first, JPEG artifacts, are a convenient result of\nthe compression scheme applied to the images returned from\nthe server. At high compression levels (we use a maximum\nlibjpeg quality factor of 50), the quantization of DCT coefficients\nused in JPEG compression creates \"blocking\" discontinuities\nin the images, and adds noise in areas of sharp contrast.\nThese artifacts create problems for low-level computer vision\nimage processing algorithms, while the design of JPEG compression\nspecifically seeks to minimize the overall perceptual\nloss of image quality for human users.\nAdditionally, we add pseudorandomly generated monochromatic\nGaussian noise to the images, implemented efficiently\nby blending noise textures during hardware rendering on the\nserver. The added noise defends against computer vision attacks\nby making background segmentation more difficult, and\nby breaking up the highly regular shading patterns of the synthetic\nrenderings. Interestingly, users are not generally distracted\nby the added noise, but have even commented that the\nrendered models often appear \"more realistic\" with the high-frequency\nvariations caused by the noise. One drawback of\nthe added noise is that the increased entropy of the images can\nresult in significantly larger compressed file sizes; we address\nthis in part by primarily limiting the application of noise to the\nnon-background regions of the image via stenciled rendering.\nLow-frequency image distortions Just as real computer vision\nlens and sensor systems sometimes suffer from image\ndistortions due to miscalibration, we can effectively simulate\nand extend these distortions in the rendering server. Subtle\nnon-linear radial distortions, pinching, and low-frequency\nwaves can be efficiently implemented with vertex shaders, or\nwith two-pass rendering of the image as a texture onto a non-uniform\nmesh, accelerated with the \"render to texture\" capabilities\nof modern graphics hardware.\nDue to the variety of random perturbations and distortions that are\napplied to the images returned from the rendering server, there is\na risk of distracting the user, as the rendered 3D model exhibits\nchanges from frame to frame, even when the user makes very minor\nadjustments to the view. However, we have found that the\nbrief switch to the lower resolution model in between display of the\nhigh resolution perturbed images, inherent to our remote rendering\nscheme, very effectively masks these changes. This masking of\nchanges is attributed to the visual perception phenomenon known\nas change blindness [Simons and Levin 1997], in which significant\nchanges occurring in full view are not noticed due to a brief disruption\nin visual continuity, such as a \"flicker\" introduced between\nsuccessive images.\nReconstruction Attacks\nIn this section we consider several classes of attacks, in which sets\nof images may be gathered from our remote rendering server to\nmake 3D reconstructions of the model, and we analyze their efficacy\nagainst the countermeasures we have implemented.\n5.1\nEnumeration Attacks\nThe rendering server responds to rendering requests from users\nspecifying the viewing conditions for the rendered images. This\nability for precise specification can be exploited by attackers, as\nthey can potentially explore the entire 3D model space, using the returned\nimages to discover the location of the 3D model to any arbitrary\nprecision. 
In practice, these attacks involve enumerating many small cells in a voxel grid, and testing each such voxel to determine intersection with the remote high-resolution model's surface; thus we term them enumeration attacks. Once this enumeration process is complete, occupied cells of the voxel grid are exported as a point cloud and then input to a surface reconstruction algorithm.
In the plane sweep enumeration attack, the view frustum is specified as a rectangular, one-voxel-thick "plane," and is swept over the model (Figure 4(a)). Each requested image represents one slice of the model's surface, and each pixel of each image corresponds to a single voxel. A simple comparison of each image pixel against the expected background color is performed to determine whether that pixel is a model surface or background pixel. Sweeps from multiple view angles (such as the six faces of the voxels) are done to catch backfacing polygons that may not be visible from a particular angle. These redundant multiple sweeps also allow the attacker to be liberal about ignoring questionable background pixels that may occur, such as if low-amplitude background noise or JPEG compression is being used as a defense on the server.
Figure 4: Enumeration Attacks: (a) the plane sweep enumeration attack sweeps a one-voxel thick orthographic view frustum over the model, (b) the near plane sweep enumeration attack sweeps the viewpoint over the model, marking voxels where the model surface is clipped by the near plane.
Our experiments demonstrate that the remote model can be efficiently reconstructed against a defenseless server using this attack (Figure 5(b)). Perturbing viewing parameters can be an effective defense against this attack; the maximum reconstruction resolution will be limited by the maximum relative displacement that an individual model surface point undergoes. Figure 5(c) shows the results of a reconstruction attempt against a server pseudorandomly perturbing the viewing direction by up to 0.3° in the returned images. Since plane sweep enumeration relies on the correspondence between image pixels and voxels, image warps can also be effective as a defense. The large number of remote image requests required for plane sweep enumeration (O(n) requests for an n × n × n voxel grid) and the unusual request parameters may look suspicious and trigger the rendering server log analysis monitors. Plane sweep enumeration attacks can be completely nullified by limiting user control of the view frustum parameters, which we implement in our system and use for valuable models.
Another enumeration attack, near plane sweep enumeration, involves sweeping the viewpoint (and thus the near plane) over the model, checking when the model surface is clipped by the near plane and marking voxels when this happens (Figure 4(b)). The attacker knows that the near plane has clipped the model when a pixel previously containing the model surface begins to be classified as the background. In order to determine which voxel each image pixel corresponds to, the attacker must know two related parameters: the distance between the viewpoint position and the near plane, and the field of view.
These parameters can be easily discovered. The near plane distance can be determined by first obtaining the exact location of one feature point on the model surface through triangulation of multiple rendering requests and then moving the viewpoint slowly toward that point on the model.
When the near plane clips the feature point, the distance between that point and the view position equals the near plane distance. The horizontal and vertical field of view angles can be obtained by moving the viewpoint slowly toward the model surface, stopping when any surface point becomes clipped by the near plane. The viewpoint is then moved a small amount perpendicular to its original direction of motion such that the clipped point moves slightly relative to the view but stays on the new image (near plane). Since the near plane distance has already been obtained, the field of view angle (horizontal or vertical depending on direction of motion) can be obtained from the relative motion of the clipped point across the image.
Figure 5: 3D reconstruction results from enumeration attacks: (a) original 3D model, (b) plane sweep attack against defenseless server (6 passes, 3,168 total rendered images), (c) plane sweep attack against 0.3° viewing direction perturbation defense (6 passes, 3,168 total rendered images), (d) near plane sweep attack against defenseless server (6 passes, 7,952 total rendered images).
Because the near plane is usually small compared to the dimensions of the model, many sweeps must be tiled in order to attain full coverage. Sweeps must also be made in several directions to ensure that all model faces are seen. Because this attack relies on seeing the background to determine when the near plane has clipped a surface, concave model geometries will present a problem for surface detection. Although sweeps from multiple directions will help, this problem is not completely avoidable. Figure 5(d) illustrates this problem, showing a case in which six sweeps have not fully captured all the surface geometry.
Viewing parameter perturbations and image warps will nearly destroy the effectiveness of near plane sweep enumeration attacks, as they can make it very difficult to determine where the surface lies and where it does not near silhouette edges (pixels near these edges will change erratically between surface and background). The most solid defense against this attack is to prevent views within a certain small offset of the model surface. This defense, which we use in our system to protect valuable models, prevents the near plane from ever clipping the model surface and thereby completely nullifies this attack.
5.2 Shape-from-silhouette Attacks
Shape-from-silhouette [Slabaugh et al. 2001] is one well studied, robust technique for extracting a 3D model from a set of images. The method consists of segmenting the object pixels from the background in each image, then intersecting in space their resulting extended truncated silhouettes, and finally computing the surface of the resulting shape. The main limitation of this technique is that only a visual hull [Laurentini 1994] of the 3D shape can be recovered; the line-concave parts of the model are beyond the capabilities of the reconstruction. Thus, the effectiveness of this attack depends on the specific geometric characteristics of the object; the high-resolution 3D models that we target often have many concavities that are difficult or impossible to fully recover using shape-from-silhouette.
However, this attack may also be of use to attackers to obtain a coarse, low-resolution version of the model, if they are unable to break through the obfuscation protections we use for the low-resolution models distributed with the client.
Figure 6: The 160 viewpoints used to reconstruct the model with a shape-from-silhouette attack; results are shown in Figure 7.
To measure the potential of a shape-from-silhouette attack against our protected graphics system, we have conducted reconstruction experiments on a 3D model of the David as served via the rendering server, using a shape-from-silhouette implementation described in [Tarini et al. 2002]. With all server defenses disabled, 160 images were harvested from a variety of viewpoints around the model (Figure 6); these viewpoints were selected incrementally, with later viewpoints chosen to refine the reconstruction accuracy as measured during the process. The resulting 3D reconstruction is shown in Figure 7(b).
Several of the perturbation and distortion defenses implemented in our server are effective against the shape-from-silhouette attack. Results from experiments showing the reconstructed model quality with server defenses independently enabled are shown in Figures 7(c-g). Small perturbations in the viewing parameters were particularly effective at decreasing the quality of the reconstructed model, as would be expected; Niem [1997] performed an error analysis of silhouette-based modeling techniques and showed the linear relationship between error in the estimation of the view position and error in the resulting reconstruction. Perturbations in the images returned from the server, such as radial distortion and small random shifts, were also effective. Combining the different perturbation defenses, as they are implemented in our remote rendering system, makes for further deterioration of the reconstructed model quality (Figure 7(h)).
High frequency noise and JPEG defenses in the server images can increase the difficulty of segmenting the object from the background. However, shape-from-silhouette software implementations with specially tuned image processing operations can take the noise characteristics into account to help classify pixels accurately. The intersection stage of shape-from-silhouette reconstruction algorithms makes them innately robust with respect to background pixels misclassified as foreground.
5.3 Stereo Correspondence-based Attacks
Stereo reconstruction is another well known 3D computer vision technique. Stereo pairs of similarly neighborhooded pixels are detected, and the position of the corresponding point on the 3D surface is found via the intersection of epipolar lines. Of particular relevance to our remote rendering system, Debevec et al. [1996] showed that the reconstruction task can be made easier and more accurate if an approximate low resolution model is available, by warping the images over it before performing the stereo matching.
Figure 7: Performance of shape-from-silhouette reconstructions against various server defenses. Error values (E) measure the mean surface distance (mm) from the 5 m tall original model. Top row: (a) original model (E = 0), (b) reconstruction from defenseless server (E = 4.5), reconstruction with (c) 0.5° (E = 13.5) and (d) 2.0° (E = 45.5) perturbations of the view direction.
Bottom row: (e) reconstruction with a random image offset of 4 pixels (E = 11.6), with (f) 1.2% (E = 9.3) and (g) 2.5% (E = 16.2) radial image distortion, and (h) reconstruction against combined defenses (1.0° view perturbation, 2 pixel random offset, and 1.2% radial image distortion; E = 26.6).
Ultimately, however, stereo correspondence techniques usually rely on matching detailed, high-frequency features in order to yield high-resolution reconstruction results. The smoothly shaded 3D computer models generated by laser scanning that we share via our remote rendering system thus present significant problems to basic two-frame stereo matching algorithms. When we add in the server defenses such as image-space high frequency noise, and slight perturbations in the viewing and lighting parameters, the stereo matching task becomes even more ill-posed. Other stereo research such as [Scharstein and Szeliski 2002] also reports great difficulty in stereo reconstruction of noise-contaminated, low-texture synthetic scenes. Were we to distribute 3D models with high resolution textures applied to their surfaces, stereo correspondence methods may be a more effective attack.
5.4 Shape-from-shading Attacks
Shape-from-shading attacks represent another family of computer vision techniques for reconstructing the shape of a 3D object (see [Zhang et al. 1999] for a survey). The primary attack on our remote rendering system that we consider in this class involves first obtaining several images from the same viewpoint under varying, known lighting conditions. Then, using photometric stereo methods, a normal is computed for each pixel by solving a system of rendering equations. The resulting normal map can be registered and applied to an available approximate 3D geometry, such as the low-resolution model used by the client, or one obtained from another reconstruction technique such as shape-from-silhouette.
Figure 8: Performance of shape-from-shading reconstruction attacks. Error values (E) measure the mean surface distance (mm) from the original model. Top row: (a) original model (E = 0), (b) low-resolution base mesh (E = 1.9), (c) reconstruction from defenseless server (E = 1.0). Bottom row: reconstruction results against (d) high-frequency image noise (E = 1.1), (e) complicated lighting model with 3 lights (E = 1.7), and (f) viewing angle perturbation of up to 1.0° (E = 2.0) defenses.
This coarse normal-mapped model itself may be of value to some attackers: when rendered it will show convincing 3D high frequency details that can be shaded under new lighting conditions, though with artifacts at silhouettes. However, the primary purpose of our system is to protect the high-resolution 3D geometry, which if stolen could be used maliciously for shape analysis or to create replicas. Thus, a greater risk is posed if the normal map is integrated by the attacker to compute a displacement map, and the results are used to displace a refined version of the low-resolution model mesh. Following this procedure with images harvested from a defenseless remote rendering server and using a low-resolution client model, we were able to successfully reconstruct a high-resolution 3D model. The results shown in Figure 8(c) depict a reconstruction of the David's head produced from 200 1600x1114 pixel images taken from 10 viewpoints, with 20 lighting positions used at each viewpoint, assuming a known, single-illuminant OpenGL lighting model and using a 10,000 polygon low-resolution model (Fig.
8(b)) of the whole statue.
Some of the rendering server defenses, such as adding high-frequency noise to the images, can be compensated for by attackers by simply adding enough input images to increase the robustness of the photometric stereo solution step (although harvesting too many images will eventually trigger the rendering server log analysis monitors). Figure 8(d) shows the high quality reconstruction result possible when only random Gaussian noise is used as a defense. More effective defenses against shape-from-shading attacks include viewing and lighting perturbations and low-frequency image distortions, which can make it difficult to precisely register images onto the low-resolution model, and can disrupt the photometric stereo solution step without a large number of aligned input images. Figure 8(e) shows a diminished quality reconstruction when the rendering server complicates the lighting model by using 3 perturbed light sources with a Phong component unknown to the attacker, and Figure 8(f) shows the significant loss of geometric detail in the reconstruction when the server randomly perturbs the viewing direction by up to 1.0° (note that the reconstruction error exceeds that of the starting base mesh).
The quality of the base mesh is an important determinant in the success of this particular attack. For example, repeating the experiment of Figure 8 with a more accurate base mesh of 30,000 polygons yields results of E = 0.8, E = 0.6, and E = 0.7 for the conditions of Figures 8(b), 8(c), and 8(e), respectively. This reliance on an accurate low-resolution base mesh for the 3D model reconstruction is a potential weak point of the attack; attackers may be deterred by the effort required to reverse-engineer the protections guarding the low-resolution model or to reconstruct an acceptable base mesh from harvested images using another technique.
5.5 Discussion
Because we know of no single mechanism for guaranteeing the security of 3D content delivered through a rendering server, we have instead taken a systems-based approach, implementing multiple defenses and using them in combination. Moreover, we know of no formalism for rigorously analyzing the security provided by our defenses; the reconstruction attacks that we have empirically considered here are merely representative of the possible threats.
Of the reconstruction attacks we have experimented with so far, the shape-from-shading approach has yielded the best results against our defended rendering server. Enumeration attacks are easily foiled when the user's control over the viewpoint and view frustum is constrained, pure shape-from-silhouette methods are limited to reconstructing a visual hull, and two-frame stereo algorithms rely on determining accurate correspondences, which is difficult with the synthetic, untextured models we are attempting to protect. Attackers could improve the results of the shape-from-shading algorithm against our perturbation defenses by explicitly modeling the distortions and trying to take them into account in the optimization step, or alternatively by attempting to align the images by interactively establishing point to point correspondences or using an automatic technique such as [Lensch et al. 2001].
Such procedures for explicitly modeling the server defenses, or correcting for them via manual specification of correspondences, are applicable to any style of reconstruction attempt.
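To make the photometric-stereo step at the core of the shape-from-shading attack concrete, the following is a minimal sketch of the per-pixel normal recovery under a Lambertian assumption with known light directions. It is a generic textbook formulation, not the attackers' or our own code, and it omits the registration of the normal map onto the base mesh and the integration into a displacement map.

```python
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """Recover per-pixel surface normals from k images of one fixed viewpoint.

    images: (k, H, W) grayscale intensities under k known light directions
    lights: (k, 3) unit light direction vectors (assumed known, as with the
            single-illuminant OpenGL lighting model discussed above)
    Returns (H, W, 3) unit normals; the albedo is divided out.
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)                    # (k, H*W)
    # Least-squares solve of lights @ g = intensities, where g = albedo * normal.
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)   # (3, H*W)
    norms = np.linalg.norm(g, axis=0) + 1e-8
    return (g / norms).T.reshape(h, w, 3)
```

Adding more input images simply over-determines this solve, which is why extra noise alone (Figure 8(d)) is not an effective defense, whereas misregistration caused by viewing perturbations corrupts the correspondence between pixels across images and degrades the solution itself.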
To combat these attacks, we must rely on the combined discouraging effect of multiple defenses running simultaneously, which increases the number of degrees of freedom of perturbation to a level that would be difficult and time-consuming to overcome. Some of our rendering server defenses, such as the lighting model and non-linear image distortions, can be increased arbitrarily in their complexity. Likewise, the magnitude of server defense perturbations can be increased with a corresponding decrease in the fidelity of the rendered images.
Ultimately, no fixed set of defenses is bulletproof against a sophisticated, malicious attacker with enough resources at their disposal, and one is inevitably led to an "arms race" between attacks and countermeasures such as we have implemented. As the expense required to overcome our remote rendering server defenses becomes greater, determined attackers may instead turn to reaching their piracy goals via non-reconstruction-based methods beyond the scope of this paper, such as computer network intrusion or exploitation of non-technical human factors.
Results and Future Work
A prototype of our remote rendering system (ScanView, available at http://graphics.stanford.edu/software/scanview/) has been deployed to share 3D models from a major cultural heritage archive, the Digital Michelangelo Project [Levoy et al. 2000], as well as other collections of archaeological artifacts that require protected usage. In the several months since becoming publicly available, more than 4,000 users have installed the client program on their personal computers and accessed the remote servers to view the protected 3D models. The users have included art students, art scholars, art enthusiasts, and sculptors examining high-resolution artworks, as well as archaeologists examining particular artifacts. Few of these individuals would have qualified under the strict guidelines required to obtain completely unrestricted access to the models, so the protected remote rendering system has given large, entirely new groups of users access to 3D graphical models for professional study and enjoyment.
Reports from users of the system have been uniformly positive and enthusiastic. Fetching high-resolution renderings over intercontinental broadband Internet connections takes less than 2 seconds of latency, while fast continental connections generally experience latencies dominated by the rendering server's processing time (around 150 ms). The rendering server architecture can scale up to support an arbitrary number of requests per second by adding additional CPU and GPU nodes, and rendering servers can be installed at distributed locations around the world to reduce intercontinental latencies if desired.
Our log analysis defenses have detected multiple episodes of system users attempting to harvest large sets of images from the server for purposes of later 3D reconstruction attempts, though these incidents were determined to be non-malicious.
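A simple way to see how such log analysis can flag harvesting behavior is sketched below. The thresholds, the log fields, and the notion of binning views into coarse cells on the view sphere are illustrative assumptions; the deployed monitor's actual rules are not described here.

```python
from collections import defaultdict

# Illustrative thresholds, not the deployed monitor's actual values.
MAX_REQUESTS_PER_MINUTE = 30
MAX_DISTINCT_VIEW_CELLS = 200   # coarse bins on the sphere of viewing directions

def flag_suspicious_sessions(log):
    """log: iterable of (session_id, minute, view_cell) tuples parsed from the
    server logs. Returns the session ids whose access patterns resemble image
    harvesting: very high request rates, or dense sampling of the view space."""
    per_minute = defaultdict(lambda: defaultdict(int))
    coverage = defaultdict(set)
    for session, minute, view_cell in log:
        per_minute[session][minute] += 1
        coverage[session].add(view_cell)
    suspicious = set()
    for session, minutes in per_minute.items():
        if max(minutes.values()) > MAX_REQUESTS_PER_MINUTE:
            suspicious.add(session)          # unusually many requests per unit time
        if len(coverage[session]) > MAX_DISTINCT_VIEW_CELLS:
            suspicious.add(session)          # views densely cover the model
    return suspicious
```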
In general, the monitoring capabilities of a remote rendering server are useful for reasons beyond just security, as the server logs provide complete accounts of all usage of the 3D models in the archive, which can be valuable information for archive managers to gauge popularity of individual models and understand user interaction patterns.
Our plans for future work include further investigation of computer vision techniques that address 3D reconstruction of synthetic data under antagonistic conditions, and analysis of their efficacy against the various rendering server defenses. More sophisticated extensions to the basic vision approaches described above, such as multi-view stereo algorithms, and robust hybrid vision algorithms which combine the strengths of different reconstruction techniques, can present difficult challenges to protecting the models. Another direction of research is to consider how to allow users a greater degree of geometric analysis of the protected 3D models without further exposing the data to theft; scholarly and professional users have expressed interest in measuring distances and plotting profiles of 3D objects for analytical purposes beyond the simple 3D viewing supported in the current system. Finally, we are continuing to investigate alternative approaches to protecting 3D graphics, designing specialized systems which make data security a priority while potentially sacrificing some general purpose computing platform capabilities. The GPU decryption scheme described herein, for example, is one such idea that may be appropriate for console devices and other custom graphics systems.
Acknowledgements
We thank Kurt Akeley, Sean Anderson, Jonathan Berger, Dan Boneh, Ian Buck, James Davis, Pat Hanrahan, Hugues Hoppe, David Kirk, Matthew Papakipos, Nick Triantos, and the anonymous reviewers for their useful feedback, and Szymon Rusinkiewicz for sharing code. This work has been supported in part by NSF contract IIS0113427, the Max Planck Center for Visual Computing and Communication, and the EU IST-2001-32641 ViHAP3D Project.
References
Collberg, C., and Thomborson, C. 2000. Watermarking, tamper-proofing, and obfuscation: Tools for software protection. Tech. Rep. 170, Dept. of Computer Science, The University of Auckland.
Debevec, P., Taylor, C., and Malik, J. 1996. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In Proc. of ACM SIGGRAPH 96, 11-20.
Engel, K., Hastreiter, P., Tomandl, B., Eberhardt, K., and Ertl, T. 2000. Combining local and remote visualization techniques for interactive volume rendering in medical applications. In Proc. of IEEE Visualization 2000, 449-452.
Gortler, S., Grzeszczuk, R., Szeliski, R., and Cohen, M. F. 1996. The lumigraph. In Proc. of ACM SIGGRAPH 96, 43-54.
Laurentini, A. 1994. The visual hull concept for silhouette-based image understanding. IEEE Trans. on Pattern Analysis and Machine Intelligence 16, 2, 150-162.
Lensch, H. P., Heidrich, W., and Seidel, H.-P. 2001. A silhouette-based algorithm for texture registration and stitching. Graphical Models 63, 245-262.
Levoy, M., and Hanrahan, P. 1996. Light field rendering. In Proc.
of ACM SIGGRAPH 96, 31-42.
Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., Ginsberg, J., Shade, J., and Fulk, D. 2000. The Digital Michelangelo Project. In Proc. of ACM SIGGRAPH 2000, 131-144.
Levoy, M. 1995. Polygon-assisted JPEG and MPEG compression of synthetic images. In Proc. of ACM SIGGRAPH 95, 21-28.
Niem, W. 1997. Error analysis for silhouette-based 3D shape estimation from multiple views. In International Workshop on Synthetic-Natural Hybrid Coding and 3D Imaging.
Ohbuchi, R., Mukaiyama, A., and Takahashi, S. 2002. A frequency-domain approach to watermarking 3D shapes. Computer Graphics Forum 21, 3.
Praun, E., Hoppe, H., and Finkelstein, A. 1999. Robust mesh watermarking. In Proc. of ACM SIGGRAPH 99, 49-56.
Ressler, S. 2001. Web3D security discussion. Online article: http://web3d.about.com/library/weekly/aa013101a.htm.
Rusinkiewicz, S., and Levoy, M. 2000. QSplat: A multiresolution point rendering system for large meshes. In Proc. of ACM SIGGRAPH 2000, 343-352.
Scharstein, D., and Szeliski, R. 2002. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47, 1-3, 7-42.
Schneier, B. 2000. The fallacy of trusted client software. Information Security (August).
Simons, D., and Levin, D. 1997. Change blindness. Trends in Cognitive Sciences 1, 7, 261-267.
Slabaugh, G., Culbertson, B., Malzbender, T., and Schafer, R. 2001. A survey of methods for volumetric scene reconstruction from photographs. In Proc. of the Joint IEEE TCVG and Eurographics Workshop (Volume Graphics 2001), Springer-Verlag, 81-100.
Stanford Digital Forma Urbis Project, 2004. http://formaurbis.stanford.edu.
Tarini, M., Callieri, M., Montani, C., Rocchini, C., Olsson, K., and Persson, T. 2002. Marching intersections: An efficient approach to shape-from-silhouette. In Proceedings of the Conference on Vision, Modeling, and Visualization (VMV 2002), 255-262.
Yoon, I., and Neumann, U. 2000. Web-based remote rendering with IBRAC. Computer Graphics Forum 19, 3.
Zhang, R., Tsai, P.-S., Cryer, J. E., and Shah, M. 1999. Shape from shading: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 8, 690-706.", "keywords": "digital rights management;remote rendering;security;3D models"} {"name": "154", "title": "Providing the Basis for Human-Robot-Interaction: A Multi-Modal Attention System for a Mobile Robot", "abstract": "In order to enable the widespread use of robots in home and office environments, systems with natural interaction capabilities have to be developed. A prerequisite for natural interaction is the robot's ability to automatically recognize when and how long a person's attention is directed towards it for communication. As in open environments several persons can be present simultaneously, the detection of the communication partner is of particular importance. In this paper we present an attention system for a mobile robot which enables the robot to shift its attention to the person of interest and to maintain attention during interaction.
Our approach is based on a method for multi-modal person tracking which uses a pan-tilt camera for face recognition, two microphones for sound source localization, and a laser range finder for leg detection. Shifting of attention is realized by turning the camera into the direction of the person who is currently speaking. From the orientation of the head it is decided whether the speaker addresses the robot. The performance of the proposed approach is demonstrated with an evaluation. In addition, qualitative results from the performance of the robot at the exhibition part of the ICVS'03 are provided.", "fulltext": "INTRODUCTION
A prerequisite for the successful application of mobile service robots in home and office environments is the development of systems with natural human-robot-interfaces. Much research focuses on the communication process itself, e.g. speaker-independent speech recognition or robust dialog systems. In typical tests of such human-machine interfaces, the presence and position of the communication partner is known beforehand as the user either wears a close-talking microphone or stands at a designated position. On a mobile robot that operates in an environment where several people are moving around, it is not always obvious for the robot which of the surrounding persons wants to interact with it. Therefore, it is necessary to develop techniques that allow a mobile robot to automatically recognize when and how long a user's attention is directed towards it for communication.
For this purpose some fundamental abilities of the robot are required. First of all, it must be able to detect persons in its vicinity and to track their movements over time in order to differentiate between persons. In previous work, we have demonstrated how tracking of persons can be accomplished using a laser range finder and a pan-tilt color camera [6].
As speech is the most important means of communication for humans, we extended this framework to incorporate sound source information for multi-modal person tracking and attention control. This enables a mobile robot to detect and localize sound sources in the robot's surroundings and, therefore, to observe humans and to shift its attention to a person that is likely to communicate with the robot. The proposed attention system is part of a larger research effort aimed at building BIRON, the Bielefeld Robot Companion.
BIRON has already performed attention control successfully during several demonstrations. Figure 1 depicts a typical situation during the exhibition of our mobile robot at the International Conference on Computer Vision Systems (ICVS) 2003 in Graz.
Figure 1: Even in crowded situations (here at the ICVS'03) the mobile robot BIRON is able to robustly track persons and shift its attention to the speaker.
The paper is organized as follows: First we discuss approaches that are related to the detection of communication partners in section 2. Then, in section 3 the robot hardware is presented. Next, multi-modal person tracking is outlined in section 4, followed by the explanation of the corresponding perceptual systems in section 5. This is the basis of our approach for the detection of communication partners explained in section 6. In section 7 an extensive evaluation of the system is presented.
The paper concludes with a\nshort summary in section 8.\nRELATED WORK\nAs long as artificial systems interact with humans in static setups\nthe detection of communication partners can be achieved rather easily\n. For the interaction with an information kiosk the potential user\nhas to enter a definite space in front of it (cf. e.g. [14]). In intelligent\nrooms usually the configuration of the sensors allows to monitor all\npersons involved in a meeting simultaneously (cf. e.g. [18]).\nIn contrast to these scenarios a mobile robot does not act in a\nclosed or even controlled environment. A prototypical application\nof such a system is its use as a tour guide in scientific laboratories\nor museums (cf. e.g. [3]). All humans approaching or passing the\nrobot have to be considered to be potential communication partners\n. In order to circumvent the problem of detecting humans in\nan unstructured and potentially changing environment, in the approach\npresented in [3] a button on the robot itself has to be pushed\nto start the interaction.\nTwo examples for robots with advanced human-robot interfaces\nare SIG [13] and ROBITA [12] which currently demonstrate their\ncapabilities in research labs. Both use a combination of visual face\nrecognition and sound source localization for the detection of a person\nof interest. SIG's focus of attention is directed towards the\nperson currently speaking that is either approaching the robot or\nstanding close to it. In addition to the detection of talking people,\nROBITA is also able to determine the addressee of spoken utterances\n. Thus, it can distinguish speech directed towards itself from\nutterances spoken to another person. Both robots, SIG and ROBITA\n, can give feedback which person is currently considered to be\nthe communication partner. SIG always turns its complete body towards\nthe person of interest. ROBITA can use several combinations\nof body orientation, head orientation, and eye gaze.\nThe multi-modal attention system for a mobile robot presented\nin this paper is based on face recognition, sound source localization\nand leg detection. In the following related work on these topics will\nbe reviewed.\nFor human-robot interfaces tracking of the user's face is indispensable\n. It provides information about the user's identity, state,\nand intent. A first step for any face processing system is to detect\nthe locations of faces in the robot's camera image. However,\nface detection is a challenging task due to variations in scale and\nposition within the image. In addition, it must be robust to different\nlighting conditions and facial expressions. A wide variety of\ntechniques has been proposed, for example neural networks [15],\ndeformable templates [23], skin color detection [21], or principle\ncomponent analysis (PCA), the so-called Eigenface method [19].\nFor an overview the interested reader is referred to [22, 9].\nIn current research on sound or speaker localization mostly microphone\narrays with at least 3 microphones are used. Only a few\napproaches employ just one pair of microphones. Fast and robust\ntechniques for sound (and therefore speaker) localization are\ne.g. 
the Generalized Cross-Correlation Method [11] or the Cross-Powerspectrum Phase Analysis [8], which both can be applied to microphone arrays as well as to only one pair of microphones. More complex algorithms for speaker localization, like spectral separation and measurement fusion [2] or Linear-Correction Least-Squares [10], are also very robust and can additionally estimate the distance and the height of a speaker or separate different audio sources. Such complex algorithms require more than one pair of microphones to work adequately and also require substantial processing power.
In mobile robotics 2D laser range finders are often used, primarily for robot localization and obstacle avoidance. A laser range finder can also be applied to detect persons. In the approach presented in [16], for every object detected in a laser scan features like diameter, shape, and distance are extracted. Then, fuzzy logic is used to determine which of the objects are pairs of legs. In [17] local minima in the range profile are considered to be pairs of legs. Since other objects (e.g. trash bins) produce patterns similar to persons, moving objects are distinguished from static objects, too.
ROBOT HARDWARE
The hardware platform for BIRON is a Pioneer PeopleBot from ActivMedia (Fig. 2) with an on-board PC (Pentium III, 850 MHz) for controlling the motors and the on-board sensors and for sound processing. An additional PC (Pentium III, 500 MHz) inside the robot is used for image processing.
Figure 2: The mobile robot BIRON.
The two PCs running Linux are linked with a 100 Mbit Ethernet and the controller PC is equipped with wireless Ethernet to enable remote control of the mobile robot. For the interaction with a user a 12" touch screen display is provided on the robot.
A pan-tilt color camera (Sony EVI-D31) is mounted on top of the robot at a height of 141 cm for acquiring images of the upper body part of humans interacting with the robot. Two AKG far-field microphones, which are usually used for hands-free telephony, are located at the front of the upper platform at a height of 106 cm, right below the touch screen display. The distance between the microphones is 28.1 cm. A SICK laser range finder is mounted at the front at a height of approximately 30 cm.
For robot navigation we use the ISR (Intelligent Service Robot) control software developed at the Center for Autonomous Systems, KTH, Stockholm [1].
MULTI-MODAL PERSON TRACKING
In order to enable a robot to direct its attention to a specific person it must be able to distinguish between different persons. Therefore, it is necessary for the robot to track all persons present as robustly as possible.
Person tracking with a mobile robot is a highly dynamic task. As both the persons tracked and the robot itself might be moving, the sensory perception of the persons is constantly changing. Another difficulty arises from the fact that a complex object like a person usually cannot be captured completely by a single sensor system alone. Therefore, we use the sensors presented in section 3 to obtain different percepts of a person:
The camera is used to recognize faces.
From a face detection step the distance, direction, and height of the observed person are extracted, while an identification step provides the identity of the person if it is known to the system beforehand (see section 5.1).
Stereo microphones are applied to locate sound sources using a method based on Cross-Powerspectrum Phase Analysis [8]. From the result of the analysis the direction relative to the robot can be estimated (see section 5.2).
The laser range finder is used to detect legs. In range readings, pairs of legs of a human result in a characteristic pattern that can be easily detected [6]. From detected legs the distance and direction of the person relative to the robot can be extracted (see section 5.3).
The processes which are responsible for processing the data of these sensors provide information about the same overall object: the person. Consequently, this data has to be fused. We combine the information from the different sensors in a multi-modal framework which is described in the following section.
4.1 Multi-Modal Anchoring
In order to solve the problem of person tracking we apply multi-modal anchoring [6]. This approach extends the idea of standard anchoring as proposed in [4]. The goal of anchoring is defined as establishing connections between processes that work on the level of abstract representations of objects in the world (symbolic level) and processes that are responsible for the physical observation of these objects (sensory level). These connections, called anchors, must be dynamic, since the same symbol must be connected to new percepts every time a new observation of the corresponding object is acquired.
Therefore, in standard anchoring at every time step t, an anchor contains three elements: a symbol, which is used to denote an object, a percept of the same object, generated by the corresponding perceptual system, and a signature, which is meant to provide an estimate for the values of the observable properties of the object. If the anchor is grounded at time t, it contains the percept perceived at t as well as the updated signature. If the object is not observable at t and therefore the anchor is ungrounded, then no percept is stored in the anchor but the signature still contains the best available estimate.
Because standard anchoring only considers the special case of connecting one symbol to the percepts acquired from one sensor, the extension to multi-modal anchoring was necessary in order to handle data from several sensors. Multi-modal anchoring makes it possible to link the symbolic description of a complex object to different types of percepts, originating from different perceptual systems. It enables distributed anchoring of individual percepts from multiple modalities and copes with different spatio-temporal properties of the individual percepts. Every part of the complex object which is captured by one sensor is anchored by a single component anchoring process. The composition of all component anchors is realized by a composite anchoring process which establishes the connection between the symbolic description of the complex object and the percepts from the individual sensors. In the domain of person tracking the person itself is the composite object while its components are face, speech, and legs, respectively.
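The anchor bookkeeping just described can be summarized in a small data structure. The following is a schematic sketch of the idea only, not the implementation of [4] or [6]; the field names, the dictionary-based signature, and the update interface are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Anchor:
    """Connects a symbol to the most recent percept of the denoted object."""
    symbol: str                                    # e.g. "person-1" or "face-of-person-1"
    percept: Optional[Any] = None                  # latest matching percept, if the anchor is grounded
    signature: dict = field(default_factory=dict)  # estimated observable properties (position, height, ...)

    def ground(self, t: float, percept: Any, properties: dict) -> None:
        """Ground the anchor at time t with a newly matched percept."""
        self.percept = percept
        self.signature.update(properties)
        self.signature["t"] = t

    def estimate(self) -> dict:
        """Even when ungrounded, the signature still provides the best available
        estimate; a motion model could extrapolate it to the current time (omitted)."""
        return dict(self.signature)
```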
Figure 3: Multi-modal anchoring of persons.
In addition to standard anchoring, the composite anchoring module requires a composition model, a motion model, and a fusion model:
The composition model defines the spatial relationships of the components with respect to the composite object. It is used in the component anchoring processes to anchor only those percepts that satisfy the composition model.
The motion model describes the type of motion of the complex object, and therefore makes it possible to predict its position. Using the spatial relationships of the composition model, the position of percepts can be predicted, too. This information is used by the component anchoring processes in two ways: 1. If multiple percepts are generated from one perceptual system, the component anchoring process selects the percept which is closest to the predicted position. 2. If the corresponding perceptual system receives its data from a steerable sensor with a limited field of view (e.g. pan-tilt camera), it turns the sensor into the direction of the predicted position.
The fusion model defines how the perceptual data from the component anchors has to be combined. It is important to note that the processing times of the different perceptual systems may differ significantly. Therefore, the perceptual data may not arrive at the composite anchoring process in chronological order. Consequently, the composite anchor provides a chronologically sorted list of the fused perceptual data. New data from the component anchors is inserted in the list, and all subsequent entries are updated.
The anchoring of a single person is illustrated in Figure 3. It is based on anchoring the three components legs, face, and speech. For more details please refer to [6].
4.2 Tracking Multiple Persons
If more than one person has to be tracked simultaneously, several anchoring processes have to be run in parallel. In this case, multi-modal anchoring as described in the previous section may lead to the following conflicts between the individual composite anchoring processes:
The anchoring processes try to control the pan-tilt unit of the camera in a contradictory way.
A percept is selected by more than one anchoring process.
In order to resolve these problems a supervising module is required, which grants access to the pan-tilt camera and controls the selection of percepts.
The first problem is handled in the following way: The supervising module restricts the access to the pan-tilt unit of the camera to only one composite anchoring process at a time. How access is granted to the processes depends on the intended application. For the task of detecting communication partners which is presented in this paper, only the anchoring process corresponding to the currently selected person of interest controls the pan-tilt unit of the camera (see section 6).
In order to avoid the second problem, the selection of percepts is implemented as follows. Instead of selecting a specific percept deterministically, every component anchoring process assigns scores to all percepts rating the proximity to the predicted position.
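One way to realize this scoring, and the conflict-free assignment that the supervising module then computes from it (described next), is sketched below. The exponential score and the greedy strategy are illustrative choices only; the paper states merely that scores rate proximity to the predicted position and that the supervisor computes an optimal non-contradictory assignment.

```python
import math

def proximity_score(percept_pos, predicted_pos, scale=1.0):
    """Rate a percept by its distance to the anchor's predicted position.
    Higher is better; `scale` is an assumed tuning parameter."""
    return math.exp(-math.dist(percept_pos, predicted_pos) / scale)

def assign_percepts(score_table):
    """score_table: {(anchor_id, percept_id): score}.
    Greedy non-contradictory assignment in which each percept is given to at
    most one anchor; an optimal assignment could instead be computed with,
    e.g., the Hungarian method."""
    assignment, used_percepts = {}, set()
    for (anchor, percept), _ in sorted(score_table.items(),
                                       key=lambda kv: kv[1], reverse=True):
        if anchor not in assignment and percept not in used_percepts:
            assignment[anchor] = percept
            used_percepts.add(percept)
    return assignment
```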
After all component anchoring processes have assigned scores, the supervising module computes the optimal non-contradictory assignment of percepts to component anchors. Percepts that are not assigned to any of the existing anchoring processes are used to establish new anchors. Additionally, an anchor that was not updated for a certain period of time will be removed by the supervising module.

PERCEPTUAL SYSTEMS

In order to supply the anchoring framework presented in 4.1 with sensory information about observed persons, three different perceptual systems are used. These are outlined in the following subsections.

5.1 Face Recognition

In our previous work [6], face detection was realized using a method which combines adaptive skin-color segmentation with face detection based on Eigenfaces [7]. The segmentation process reduces the search space, so that only those sub-images which are located at skin-colored regions have to be verified with the Eigenface method. In order to cope with varying lighting conditions the model for skin color is continuously updated with pixels extracted from detected faces. This circular process requires initialization, which is realized by performing face detection using Eigenfaces on the whole image, since initially no suitable model for skin color is available. This method has two major drawbacks: It is very sensitive to false positive detections of faces, since then the skin model may adapt to a wrong color. In addition, initialization is computationally very expensive.

In our current system presented in this paper, the detection of faces (in frontal view) is based on the framework proposed by Viola and Jones [20]. This method makes it possible to process images very rapidly with high detection rates for the task of face detection. Therefore, neither a time-consuming initialization nor the restriction of the search using a model of skin color is necessary.

The detection is based on two types of features (Fig. 4), which are the same as proposed in [24]. A feature is a scalar value which is computed as the weighted sum of all intensities of pixels in rectangular regions. The computation can be realized very efficiently using integral images (see [20]). The features have six degrees of freedom for two-block features and seven degrees of freedom for three-block features. With restrictions on the size of the rectangles and their distances we obtain about 300,000 different features for sub-windows of a fixed size. Classifiers are constructed by selecting a small number of important features using AdaBoost [5]. A cascade of classifiers C1, ..., Cn of increasing complexity (increasing number of features) forms the over-all face detector (Fig. 5).

Figure 4: The two types of features used for face detection. Each feature takes a value which is the weighted sum of all pixels in the rectangles.

Figure 5: A cascade of n classifiers of increasing complexity enables fast face detection.

For face detection an image is scanned, and every sub-image is classified with the first classifier C1 of the cascade. If classified as non-face, the process continues with the next sub-image.
Otherwise the current sub-image is passed to the next classifier (C2), and so on. The first classifier of the cascade is based on only two features, but rejects approximately 75 % of all sub-images. Therefore, the detection process is very fast. The cascade used in our system consists of 16 classifiers based on 1327 features altogether.

In order to update the multi-modal anchoring process the position of the face is extracted: With the orientation of the pan-tilt camera, the angle of the face relative to the robot is calculated. The size of the detected face is used to estimate the distance of the person: Assuming that the sizes of heads of adult humans only vary to a minor degree, the distance is proportional to the reciprocal of the size. The height of the face above the ground is also extracted by using the distance and the camera position.

Since the approach presented so far does not provide face identification, a post-processing step is required. Therefore, we use a slightly enhanced version of the Eigenface method [19]. Each individual is represented in face space by a mixture of several Gaussians with diagonal covariances. Practical experiments have shown that the use of four to six Gaussians leads to a satisfying accuracy in discriminating between a small set of known persons.

5.2 Sound Source Localization

In order to detect speaking persons, we realize the localization of sound sources using a pair of microphones. Given a sound source s in 3D space, the distances d1 and d2 between s and the two microphones m1 and m2 generally differ by the amount Δd = d1 - d2 (see Fig. 6). This difference Δd results in a time delay τ of the received signal between the left and the right channel (microphone). Based on Cross-Powerspectrum Phase Analysis [8] we first calculate a spectral correlation measure

C(τ) = IFFT[ Xl(ω) Xr*(ω) / ( |Xl(ω)| |Xr(ω)| ) ]    (1)

where Xl(ω) and Xr(ω) are the short-term spectra of the left and the right channel, respectively (calculated within a 43 ms window from the signal sampled at 48 kHz). If only a single sound source is present, the time delay τ will be given by the argument that maximizes the spectral correlation measure C:

τ̂ = arg max_τ C(τ)    (2)

Taking into account also local maxima delivered by equation (1), we are able to detect several sound sources simultaneously.

Figure 6: The distances d1 and d2 between the sound source s and the two microphones m1 and m2 differ by the amount Δd. All sound events with identical Δd are located on one half of a two-sheeted hyperboloid (gray). The figure shows the contours for Δd = 0 cm to 25 cm in steps of 5 cm; the microphone distance is b = 28.1 cm.

Even in the planar case, where all sound sources are on the same level as the microphones, the position of s can be estimated only if its distance is known or additional assumptions are made. In a simplified geometry the microphone distance b is considered sufficiently small compared to the distance of the source. Therefore, the angles of incidence of the signals observed at the left or right microphone, respectively, will be approximately equal and can be calculated directly from Δd.
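For the simplified planar case just described, the delay estimation and the angle computation can be illustrated with a small numpy sketch. This is a hedged illustration only: the buffer handling, the assumed speed of sound, and all function names are assumptions rather than the system's actual code; the 28.1 cm baseline is taken from Fig. 6.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, assumed room temperature
MIC_DISTANCE   = 0.281   # m, microphone baseline b as given in Fig. 6

def csp_time_delay(left, right, fs):
    """Estimate the inter-channel time delay via the CSP / GCC-PHAT measure:
    inverse FFT of the phase-normalized cross-power spectrum, cf. eq. (1)/(2)."""
    n = len(left) + len(right)
    L = np.fft.rfft(left, n=n)
    R = np.fft.rfft(right, n=n)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12                  # keep only the phase information
    corr = np.fft.irfft(cross, n=n)
    # rearrange so that index 0 corresponds to the most negative lag
    corr = np.concatenate((corr[-(len(right) - 1):], corr[:len(left)]))
    lag = int(np.argmax(corr)) - (len(right) - 1)   # lag in samples
    return lag / fs                                 # time delay in seconds

def planar_angle(delay, b=MIC_DISTANCE, c=SPEED_OF_SOUND):
    """Far-field planar case: path difference c*delay, angle of incidence
    relative to the direction straight ahead of the microphone pair."""
    return float(np.degrees(np.arcsin(np.clip(c * delay / b, -1.0, 1.0))))

# Example: white noise that reaches the right microphone 24 samples (0.5 ms) earlier.
fs = 48000
rng = np.random.default_rng(0)
sig = rng.standard_normal(2048)
right = sig
left = np.concatenate((np.zeros(24), sig))[:2048]   # left channel delayed by 24 samples
print(planar_angle(csp_time_delay(left, right, fs)))  # roughly 37-38 degrees
```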
In the 3D case the observed time delay not only depends on the direction and distance but also on the relative elevation of the source with respect to the microphones. Therefore, given only Δd the problem is under-determined. All sound events which result in the same Δd are located on one half of a two-sheeted hyperboloid, given by

y² / (Δd/2)²  -  (x² + z²) / ((b/2)² - (Δd/2)²)  =  1    (3)

where (x, y, z) is the position of the sound source given in Cartesian coordinates and b is the distance between the microphones. The axis of symmetry of the hyperboloid coincides with the axis on which the microphones are located (y-axis). Figure 6 shows the intersections of hyperboloids for different Δd with the plane spanned by s, m1, and m2. Consequently, the localization of sound sources in 3D using two microphones requires additional information.

As in our scenario sound sources of interest correspond to persons talking, the additional spatial information necessary can be obtained from the other perceptual systems of the multi-modal anchoring framework. Leg detection and face recognition provide information about the direction, distance, and height of a person with respect to the local coordinate system of the robot. Even if no face was detected at all, the height of a person can be estimated as the standard size of an adult.

In order to decide whether a sound percept can be assigned to a specific person, the sound source has to be located in 3D. For this purpose it is assumed that the sound percept originates from the person and is therefore located at the same height and same distance. Then, the corresponding direction of the sound source can be calculated from equation (3) transformed to cylindric coordinates. Depending on the difference between this direction and the direction in which the person is located, the sound percept is assigned to the person's sound anchor. Similar to other component anchors, the direction of the speech is also fused with the position of the person. Note that the necessity of positional attributes of a person for the localization of speakers implies that speech cannot be anchored until the legs or the face of a person have been anchored.

In conclusion, the use of only one pair of microphones is sufficient for feasible speaker localization in the multi-modal anchoring framework.

5.3 Leg Detection

The laser range finder provides distance measures within a 180° field of view at leg height. The angular resolution is 0.5°, resulting in 361 reading points for a single scan (see Fig. 7 for an example). Usually, human legs result in a characteristic pattern which can be easily detected. This is done as follows: At first, neighboring reading points with similar distance values are grouped into segments. Then, these segments are classified as legs or non-legs based on a set of features (see [6]). Finally, legs with a distance that is below a threshold are grouped into pairs of legs.

Figure 7: A sample laser scan. The arrow marks a pair of legs.

FOCUSING THE ATTENTION

For the detection of a person of interest from our mobile robot we apply multi-modal person tracking, as described in section 4. Every person in the vicinity of the robot is anchored and, therefore, tracked by an individual anchoring process, as soon as the legs or the face can be recognized by the system. If the robot detects that a person is talking, this individual becomes the person of interest and the robot directs its attention towards it.
This is achieved by turning the camera into the direction of the person. The anchoring process corresponding to the person of interest maintains access to the pan-tilt camera and keeps the person in the center of the camera's field of view. If necessary, the entire robot basis is turned in the direction of the person of interest. If this person moves too far away from the robot, the robot will start to follow the person. This behavior ensures that the sensors of the robot do not lose track of this person. Moreover, the person can guide the robot to a specific place.

As long as the speech of the person of interest is anchored, other people talking are ignored. This allows the person of interest to take a breath or make short breaks while speaking without losing the robot's attention. When the person of interest stops talking for more than two seconds, the person of interest loses its speech anchor. Now, another person can become the person of interest. If no other person is speaking in the vicinity of the robot, the person which is in the focus of attention of the robot remains person of interest. Only a person that is speaking can take over the role of the person of interest. Notice that a person who is moving fast in front of the robot is considered as a passer-by, and hence is definitely not a person of interest even if this person is speaking.

In addition to the attention system described so far, which enables the robot to detect the person of interest and to maintain its attention during interaction, the robot decides whether the person of interest is addressing the robot and, therefore, is considered as communication partner. This decision is based on the orientation of the person's head, as it is assumed that humans face their addressees for most of the time while they are talking to them. Whether a tracked person faces the robot or not is derived from the face recognition system. If the face of the person of interest is detected for more than 20 % of the time the person is speaking, this person is considered to be the communication partner.

A sample behavior of the robot is depicted in Figure 8.

Figure 8: Sample behavior with two persons P1 and P2 standing near the robot R: In (1) P1 is considered as communication partner, thus the robot directs its attention towards P1. Then P1 stops speaking but remains person of interest (2). In (3) P2 begins to speak. Therefore the robot's attention shifts to P2 by turning its camera (4). Since P2 is facing the robot, P2 is considered as new communication partner.

SYSTEM PERFORMANCE

In order to analyze the performance of the proposed approach, we present results from three different types of evaluation. At first, we study the accuracy of sound source localization independently. The second part deals with a quantitative evaluation of our approach for a multi-modal attention system. Finally, qualitative results from a performance of the robot at the exhibition part of the ICVS'03 are presented.

7.1 Evaluation of Sound Source Localization

The objective of this evaluation was to analyze the accuracy of locating speakers with a pair of microphones using the method described in section 5.2, independently from the multi-modal anchoring framework.
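Before turning to the evaluation, the attention behavior just described (the two-second speech timeout, the exclusion of passers-by, and the 20 % facing criterion for communication partners) can be summarized in a small illustrative Python sketch. Class and function names are assumptions, not the system's actual code; only the two quoted thresholds come from the text above.

```python
SPEECH_TIMEOUT_S = 2.0    # speech anchor is dropped after 2 s without sound percepts
FACING_RATIO     = 0.20   # face seen > 20 % of speaking time => communication partner

class TrackedPerson:
    def __init__(self, pid):
        self.pid = pid
        self.last_speech_time = None    # time of the last assigned sound percept
        self.speaking_time = 0.0        # accumulated speaking time
        self.face_detected_time = 0.0   # speaking time during which the face was seen
        self.is_passer_by = False       # e.g. set for persons moving fast past the robot

    def is_speaking(self, now):
        return (self.last_speech_time is not None and
                now - self.last_speech_time <= SPEECH_TIMEOUT_S)

    def faces_robot(self):
        return (self.speaking_time > 0 and
                self.face_detected_time / self.speaking_time > FACING_RATIO)

def select_person_of_interest(persons, current_poi, now):
    """Keep the current person of interest while their speech is anchored;
    otherwise hand the role over to another speaking, non passer-by person."""
    if current_poi is not None and current_poi.is_speaking(now):
        return current_poi
    for p in persons:
        if p.is_speaking(now) and not p.is_passer_by:
            return p
    return current_poi    # nobody else speaks: the focused person keeps the role

def is_communication_partner(person_of_interest):
    return person_of_interest is not None and person_of_interest.faces_robot()
```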
In order to be able to estimate the arrival angle relative to the microphones, the setup for the experiment was arranged such that the sound source (the mouth of the speaker) was always at the same height as the microphones. Therefore, the simplified geometric model mentioned in section 5.2 can be used.

The experiments were carried out with five subjects. Every subject was positioned at six different angles (0°, 10°, 20°, 40°, 60°, and 80°) and at two different distances (100 cm and 200 cm), respectively, resulting in 12 positions altogether. At every position a subject had to read out one specific sentence, which took about 8 seconds. During every utterance the position of the speaker was calculated every 50 ms.

Based on the angles estimated by our localization algorithm we calculated the mean angle and the variance for every speaker. It is important to note that in our setup it is almost impossible to position the test speaker accurately on the target angle. For this reason, we used the mean estimated angle for every speaker instead of the target angle to calculate the variance. Following the calculation of mean angle and variance for every speaker we averaged for every position the mean angle and the variance across all speakers.

Table 1: Averaged estimated speaker positions and averaged variances for the acoustic speaker localization (mean estimated angle, with averaged variance in parentheses, per target angle and distance between speaker and robot).

Target angle    100 cm            200 cm
0°              -0.9° (0.56)      -0.3° (0.81)
10°              9.1° (0.34)       9.2° (0.37)
20°             18.9° (0.21)      19.3° (0.27)
40°             38.2° (0.50)      38.8° (0.22)
60°             57.7° (0.40)      57.5° (0.64)
80°             74.0° (2.62)      73.3° (2.18)

Table 1 shows the results of our experiments. First, the results suggest that the robot was not correctly aligned, because especially for small angles (0° to 20°) the averaged angle differs constantly from the target angle by about 1°. Under this justifiable assumption the speaker localization works very well for angles between 0° and 40°. The estimated angle is nearly equivalent to the actual angle and the variance is also very low. Furthermore, the acoustic position estimation works equally well for 100 cm and for 200 cm. For angles greater than 40° the difference between estimated angle and target angle as well as the variance increases. This means that the accuracy of the acoustic position estimation decreases with an increasing target angle. The main reason for this behavior is the directional characteristic of the microphones.

However, the evaluation has shown that the time delay estimation works reasonably well. Thus the sound source localization provides important information for the detection and localization of the current person of interest.

7.2 Evaluation of the Attention System

The objective of this evaluation was to analyze the performance of the attention system presented in this paper. On the one hand, the capability of the robot to successfully shift its attention to the speaker, and to recognize when it was addressed, was investigated. On the other hand, details about the perceptual sub-systems were of interest.

The experiment was carried out in an office room (Fig. 9). Four persons were standing around the robot at designated positions. In reference to the local coordinate system of the robot, each person was placed at a designated distance and angle (see Fig. 9), where 0° is defined as the direction straight ahead of the robot.
The subjects were asked to speak for about 10 seconds, one after another. They had to either address the robot or one of the other persons by turning their heads into the corresponding direction. There were no restrictions on how to stand. The order in which the persons were speaking was predetermined (see Table 2). The experiment was carried out three times with nine different subjects altogether.

Figure 9: Setup for the evaluation of the attention system.

Table 2: Order in which the persons were speaking, either to the robot (steps 1-4 and 9-12) or to another person (steps 5-8).

The following results were achieved:

The attention system was always able to determine the correct person of interest within the time the person was speaking. However, in some situations either the reference to the last person of interest was sustained too long or an incorrect person of interest was selected intermediately. A diagram of the robot's focus of attention is shown in Figure 10. The erroneous behavior occurred in 4 of the 36 time slices: In these cases, the robot shifted its attention to a person which was currently not speaking (see column 5 in all experiments and column 4 in the last experiment in Fig. 10). Note that in all failure cases the person located in front of the robot was selected as person of interest. In addition, there were two shifts which were correct but had a very long delay (eighth time slice of the first and the third experiment). Again, the person in front of the robot was involved. All errors occurred because a sound source was located in the direction of this person, although he or she was not speaking. This can be explained with the noise of the robot itself, which is interpreted as a sound source in the corresponding direction. This error could be suppressed using voice activity detection, which distinguishes speech from noise. This will be part of our future work.

As the diagram in Fig. 10 shows, every shift of attention had a delay of approximately 2 seconds. This results from the anchoring framework: The anchor for the sound source is removed after a period of 2 seconds with no new assigned percepts. Now, if another person is talking it becomes the person of interest.

The decision whether the current person of interest was addressing the robot or not was made as described in section 6. It was correct for all persons in all runs. This means that the robot always determined itself as the addressee in steps 1-4 and 9-12, and never in steps 5-8.

These results prove that the presented approach for a multi-modal attention system on a mobile robot is capable of identifying communication partners successfully.

Figure 10: Diagram for the three runs of the experiment. Every person is assigned a track (light-gray) which is shaded while the person was speaking.
The solid line shows which person was in focus of the robot's attention.

In addition, the following measurements concerning the anchoring framework were extracted during the experiments: The attention system and the face recognition were running on one PC (Pentium III, 500 MHz), while the sound source localization and the robot control software were running on the other PC (Pentium III, 850 MHz). Face recognition was performed on the camera images at a rate of 9.6 Hz. Localization of sound sources was running at a rate of 5.5 Hz. The laser range finder provided new data at a rate of 4.7 Hz, while the processing time for the detection of legs was negligible.

The anchoring processes of the persons which were currently speaking to the robot were updated with percepts at a rate of 15.4 Hz. Face percepts were assigned to the corresponding anchor 71.4 % of the time. Note that after a new person of interest is selected it takes up to approximately 1 second until the camera is turned and the person is in the field of view. During this time, no face percept for the person of interest can be generated. Sound percepts were assigned 69.5 % of the time, and leg percepts 99.9 % of the time.

The multi-modal anchoring framework was able to quantify the body heights of all subjects with an accuracy of at least ±5 cm, which was sufficient to precisely locate sound sources in 3D (see section 5.2).

7.3 Performance at an Exhibition

At the beginning of April 2003 our robot was presented at the exhibition part of the International Conference on Computer Vision Systems (ICVS) in Graz. There we were able to demonstrate the robot's capabilities in multi-modal person tracking, and also in following people. BIRON was continuously running without any problems.

On the two exhibition days, the robot was running 9:20 hours and 6:30 hours, respectively, tracking about 2240 persons on the first day, and about 1400 persons on the second day. The large number of persons tracked results from the following counting scheme: Every person who came into the vicinity of the robot was counted once. However, if a person left the observed area and came back later, he or she was counted again as a new person.

Since the coffee breaks of the conference took place in the exhibition room, there were extremely busy phases. Even then, the robot was able to track up to 10 persons simultaneously. Despite the high noise level, the sound source localization worked reliably, even though it was necessary to talk slightly louder to attract the robot's attention.

SUMMARY

In this paper we presented a multi-modal attention system for a mobile robot. The system is able to observe several persons in the vicinity of the robot and to decide, based on a combination of acoustic and visual cues, whether one of these persons is willing to engage in communication with the robot. This attentional behavior is realized by combining an approach for multi-modal person tracking with the localization of sound sources and the detection of head orientation derived from a face recognition system. Note that due to the integration of cues from multiple modalities it is possible to verify the position of a speech source in 3D space using only a single pair of microphones. Persons that are observed by the robot and are also talking are considered persons of interest. If a person of interest is also facing the robot, he or she will become the current communication partner.
Otherwise the robot assumes that the speech\nwas addressed to another person present.\nThe performance of our approach and its robustness even in real\nworld situations were demonstrated by quantitative evaluations in\nour lab and a qualitative evaluation during the exhibition of the\nmobile robot system at the ICVS'03.\nACKNOWLEDGMENTS\nThis work has been supported by the German Research Foundation\nwithin the Collaborative Research Center 'Situated Artificial\nCommunicators' and the Graduate Programs 'Task Oriented Com-munication'\nand 'Strategies and Optimization of Behavior'.\nREFERENCES\n[1] M. Andersson, A. Oreback, M. Lindstrom, and H. I.\nChristensen. ISR: An intelligent service robot. In H. I.\nChristensen, H. Bunke, and H. Noltmeier, editors, Sensor\nBased Intelligent Robots; International Workshop Dagstuhl\nCastle, Germany, September/October 1998, Selected Papers,\nvolume 1724 of Lecture Notes in Computer Science, pages\n287310. Springer, New York, 1999.\n[2] B. Berdugo, J. Rosenhouse, and H. Azhari. Speakers'\ndirection finding using estimated time delays in the\nfrequency domain. Signal Processing, 82:1930, 2002.\n[3] W. Burgard, A. B. Cremers, D. Fox, D. Hahnel,\nG. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun. The\ninteractive museum tour-guide robot. In Proc. Nat. Conf. on\nArtificial Intelligence (AAAI), pages 1118, Madison,\nWisconsin, 1998.\n[4] S. Coradeschi and A. Saffiotti. Perceptual anchoring of\nsymbols for action. In Proc. Int. Conf. on Artificial\nIntelligence, pages 407412, Seattle, WA, 2001.\n[5] Y. Freund and R. E. Shapire. A decision-theoretic\ngeneralization of on-line learning and an application to\nboosting. Computational Learning Theory: Eurocolt '95,\npages 2327, 1995.\n[6] J. Fritsch, M. Kleinehagenbrock, S. Lang, T. Plotz, G. A.\nFink, and G. Sagerer. Multi-modal anchoring for\nhuman-robot-interaction. Robotics and Autonomous Systems,\nSpecial issue on Anchoring Symbols to Sensor Data in Single\nand Multiple Robot Systems, 43(23):133147, 2003.\n[7] J. Fritsch, S. Lang, M. Kleinehagenbrock, G. A. Fink, and\nG. Sagerer. Improving adaptive skin color segmentation by\nincorporating results from face detection. In Proc. IEEE Int.\nWorkshop on Robot and Human Interactive Communication\n(ROMAN), pages 337343, Berlin, Germany, 2002.\n[8] D. Giuliani, M. Omologo, and P. Svaizer. Talker localization\nand speech recognition using a microphone array and a\ncross-powerspectrum phase analysis. In Proc. Int. Conf. on\nSpoken Language Processing, volume 3, pages 12431246,\nYokohama, Japan, 1994.\n[9] E. Hjelmas and B. K. Low. Face detection: A survey.\nComputer Vision and Image Understanding, 83(3):236274,\n2001.\n[10] Y. Huang, J. Benesty, G. W. Elko, and R. M. Mersereau.\nReal-time passiv source localization: A practical\nlinear-correction least-square approach. IEEE Trans. on\nSpeech and Audio Processing, 9(8):943956, 2001.\n[11] C. H. Knapp and G. C. Carter. The generalized correlation\nmethod for estimation of time delay. IEEE Trans. on\nAcoustics, Speech and Signal Processing,\nASSP-24(4):320327, 1976.\n[12] Y. Matsusaka, S. Fujie, and T. Kobayashi. Modeling of\nconversational strategy for the robot participating in the\ngroup conversation. In Proc. European Conf. on Speech\nCommunication and Technology, pages 21732176, Aalborg,\nDenmark, 2001.\n[13] H. G. Okuno, K. Nakadai, and H. Kitano. Social interaction\nof humanoid robot based on audio-visual tracking. In Proc.\nInt. Conf. 
on Industrial and Engineering Applications of\nArtificial Intelligence and Expert Systems, Cairns, Australia,\n2002.\n[14] V. Pavlovic, A. Garg, J. Rehg, and T. Huang. Multimodal\nspeaker detection using error feedback dynamic bayesian\nnetworks. In Proc. Int. Conf. on Computer Vision and Pattern\nRecognition, pages 3443, Los Alamitos, CA, 2000.\n[15] H. Rowley, S. Baluja, and T. Kanade. Neural network-based\nface detection. IEEE Trans. on Pattern Analysis and Machine\nIntelligence, 20(1):2338, 1998.\n[16] R. D. Schraft, B. Graf, A. Traub, and D. John. A mobile\nrobot platform for assistance and entertainment. Industrial\nRobot, 28(1):2934, 2001.\n[17] D. Schulz, W. Burgard, D. Fox, and A. B. Cremers. Tracking\nmultiple moving objects with a mobile robot. In Proc. Int.\nConf. on Computer Vision and Pattern Recognition,\nvolume 1, pages 371377, Kauwai, Hawaii, 2001.\n[18] R. Stiefelhagen, J. Yang, and A. Waibel. Estimating focus of\nattention based on gaze and sound. In Workshop on\nPerceptive User Interfaces, Orlando, FL, 2001.\n[19] M. Turk and A. Pentland. Eigenfaces for recognition.\nJournal of Cognitive Neuro Science, 3(1):7186, 1991.\n[20] P. Viola and M. Jones. Robust real-time object detection. In\nProc. IEEE Int. Workshop on Statistical and Computational\nTheories of Vision, Vancouver, Canada, 2001.\n[21] J. Yang and A. Waibel. A real-time face tracker. In Proc.\nIEEE Workshop on Applications of Computer Vision, pages\n142147, Sarasota, Florida, 1996.\n[22] M. H. Yang, D. J. Kriegman, and N. Ahuja. Detecting faces\nin images: A survey. IEEE Trans. on Pattern Analysis and\nMachine Intelligence, 24(1):3458, 2002.\n[23] A. L. Yuille, P. W. Hallinan, and D. S. Cohen. Feature\nextraction from faces using deformable templates. Int.\nJournal of Computer Vision, 8(2):99111, 1992.\n[24] Z. Zhang, L. Zhu, S. Z. Li, and H. Zhang. Real-time\nmulti-view face detection. In Proc. Int. Conf. on Automatic\nFace and Gesture Recognition, Washington, DC, 2002.\n35\n", "keywords": "Multi-modal person tracking;Attention;Human-robot-interaction"} {"name": "155", "title": "Psychologically Targeted Persuasive Advertising and Product Information in E-Commerce", "abstract": "In this paper, we describe a framework for a personalization system to systematically induce desired emotion and attention related states and promote information processing in viewers of online advertising and e-commerce product information. Psychological Customization entails personalization of the way of presenting information (user interface, visual layouts, modalities, structures) per user to create desired transient psychological effects and states, such as emotion, attention, involvement, presence, persuasion and learning. Conceptual foundations and empiric evidence for the approach are presented.", "fulltext": "INTRODUCTION\nAdvertising and presentation of product information is done both\nto inform people about new products and services and to persuade\nthem into buying them. Persuasion can be thought of as\ninfluencing peoples attitudes and behavior. Advertising done in a\nmass medium, such as television or magazines can be segmented\nto desired audiences. However, there is a possibility to\npersonalize or mass customize advertising in the internet and for\ninstance in mobile phones. Similarly, in the internet also product\ninformation of various items for sale can be personalized to\ndesired users. 
These two areas are introduced here together as\nthey represent interesting opportunities for personalization.\nConsequently, personalization may turn out to be an important\ndriver for future commercial applications and services in a one-to-one\nworld in which automatic and intelligent systems tailor the\ninteractions of users, contexts and systems in real-time. This\npaper describes the foundations of information personalization\nsystems that may facilitate desired psychological states in\nindividual users of internet based advertising and product\ninformation presentation in e-commerce thereby creating\npsychologically targeted messages for users of such systems. It is\npreliminarily hypothesized that such personalization may be one\nway to more efficient persuasion.\nWhen perceiving information via media and communications\ntechnologies users have a feeling of presence. In presence, the\nmediated information becomes the focused object of perception,\nwhile the immediate, external context, including the technological\ndevice, fades into the background [8, 36, 37]. Empirical studies\nshow that information experienced in presence has real\npsychological effects on perceivers, such as emotional responses\nbased on the events described or cognitive processing and\nlearning from the events [see 51]. It is likely that perceivers of\nadvertisements and product information experience presence that\nmay lead to various psychological effects. For instance, an\nattitude may be hold with greater confidence the stronger the\npresence experience.\nPersonalization and customization entails the automatic or semiautomatic\nadaptation of information per user in an intelligent way\nwith information technology [see 33, 68]. One may also vary the\nform of information (modality for instance) per user profile,\nwhich may systematically produce, amplify, or shade different\npsychological effects [56, 57, 58, 59, 60, 61, 62, 63].\n\nMedia- and communication technologies as special cases of\ninformation technology may be considered as consisting of three\nlayers [6]. At the bottom is a physical layer that includes the\nphysical technological device and the connection channel that is\nused to transmit communication signals. In the middle is a code\nlayer that consists of the protocols and software that make the\nphysical layer run. At the top is a content layer that consists of\nmultimodal information. The content layer includes both the\nsubstance and the form of multimedia content [7, 56]. Substance\nrefers to the core message of the information. Form implies\naesthetic and expressive ways of organizing the substance, such\nas using different modalities and structures of information [56].\nWith the possibility of real-time customization and adaptation of\ninformation for different perceivers it is hypothesized that one\nmay vary the form of information within some limits per the same\nsubstance of information. For instance, the same substance can be\nexpressed in different modalities, or with different ways of\ninteraction with the user and technology. This may produce a\ncertain psychological effect in some perceivers; or shade or\namplify a certain effect. 
In Figure 1 the interaction of media and communications technology and the user in context, with certain types of tasks, is seen as producing transient psychological effects, thereby creating various "archetypal technologies" that systematically facilitate desired user experiences [see 55, 56]. Media and communication technology is divided into the physical, code and content layers. The user is seen as consisting of various different psychological profiles, such as individual differences related to cognitive style, personality, cognitive ability, previous knowledge (mental models related to task) and other differences, such as pre-existing mood [49, 56, 58, 59].

Media and communication technologies may be called Mind-Based if they simultaneously take into account the interaction of three different key components: i) the individual differences and/or user segment differences of perceptual processing and sense making, ii) the elements and factors inherent in information and technology that may produce psychological effects (physical, code and content layers), and iii) the consequent transient psychological effects emerging based on perception and processing of information at the level of each individual [see 63]. This definition can be extended to include both context and at least short-term behavioral consequences. Regarding context, a Mind-Based system may alter its functionalities depending on the type of task of the user, physical location, social situation or other ad-hoc situational factors that may have a psychological impact. Behavioral consequences of using a Mind-Based system may be thought of, especially in the case of persuasion, as facilitating desired instant behaviors such as impulse buying. Of course, if a Mind-Based system builds a positive image and schema of a product over longer periods of time, reflected in product and brand awareness, that may then influence user behaviors later on.

As the task of capturing and predicting the user's psychological state in real time is highly complex, one possible realization is to have the user linked to a sufficient number of measurement channels of various i) psychophysiological signals (EEG, EMG, GSR, cardiovascular activity, other), ii) eye-based measures (eye blinks, pupil dilation, eye movements) and iii) behavioral measures (response speed, response quality, voice pitch analysis etc.). An index based on these signals would then verify to the system whether a desired psychological effect has been realized.

Fig. 1. Mind-Based Technologies as a framework for producing psychological effects. Adapted from [56].

Another approach would be to conduct a large number of user studies on certain tasks and contexts with certain user groups, psychological profiles and content-form variations, and to measure various psychological effects as objectively as possible. Here, both subjective methods (questionnaires and interviews) and objective measures (psychophysiological or eye-based measures) may be used [for a review on the use of psychophysiological methods in media research, see 46]. This would constitute a database of design rules for automatic adaptations of information per user profile, intended to create similar effects in highly similar situations with real applications.
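A minimal sketch of this design-rule database idea, together with a crude index over measured signals for checking whether a desired effect was realized, is given below. All rule contents, signal names, weights and thresholds are illustrative assumptions, not results reported in the studies cited above.

```python
# Design rules map a user profile condition and a form variation to an expected effect.
DESIGN_RULES = [
    # (profile predicate, form variation, expected effect)
    (lambda p: p.get("cognitive_style") == "verbal", {"modality": "text"},      "low_cognitive_load"),
    (lambda p: p.get("cognitive_style") == "visual", {"modality": "animation"}, "high_attention"),
    (lambda p: p.get("mood") == "negative",          {"background": "warm"},    "positive_emotion"),
]

def select_form_variations(profile):
    """Return the form variations whose design rules match the user profile."""
    return [(variation, effect)
            for matches, variation, effect in DESIGN_RULES if matches(profile)]

def effect_realized(signals, weights=None, threshold=0.5):
    """Combine normalized signal readings (0..1) into a single index and compare
    it against a threshold, as a stand-in for verifying the desired effect."""
    weights = weights or {"gsr": 0.4, "eeg_engagement": 0.4, "response_speed": 0.2}
    index = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return index, index >= threshold

print(select_form_variations({"cognitive_style": "visual", "mood": "negative"}))
print(effect_realized({"gsr": 0.7, "eeg_engagement": 0.6, "response_speed": 0.4}))
```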
Naturally, a hybrid approach would combine both of these methods for capturing and facilitating the user's likely psychological state.

Capturing context and short-term user behavior is a challenge. A computational approach to context utilizes a mass of sensors that detect various signals in an environment. AI-based software then massively computes from the signal flow significant events, either directly or with the help of some simplifying rules and algorithms. Capturing user behavior in context is easier if the user is using an internet browser to buy an item, for instance. In this case action, or behavior, can be captured by the system as the user clicks his mouse to buy an item. If the user is wandering around in a supermarket with a mobile phone that presented a persuasive message to buy the item on aisle 7, it may be difficult to verify this other than by cross-referencing his checkout bill with the adverts displayed inside the store. However, it is beyond the scope of this paper to fully elaborate on the contextual and behavioral dimensions of Mind-Based Technologies.

2.2 Description of a Psychological Customization System

Psychological Customization is one possible way of implementing Mind-Based Technologies in system design. It can be applied to various areas of HCI, such as Augmentation Systems (augmented and context sensitive financial news), Notification Systems (alerts that mobilize a suitable amount of attention per task or context of use), Affective Computing (emotionally adapted games), Collaborative Filtering (group-focused information presentation), Persuasive Technology (advertising for persuasion, e-commerce persuasion), Computer Mediated Social Interaction Systems (collaborative work, social content creation templates), Messaging Systems (emotionally adapted mobile multimedia messaging and email) and Contextually Sensitive Services (psychologically efficient adaptation of presentation of information sensitive to physical, social or situational context, such as available menus to control a physical space, or available information related to a particular situation, such as social interaction or city navigation with a mobile device).

It can be hypothesized that the selection and manipulation of substance of information takes place through the technologies of the various application areas of Psychological Customization. Underlying the application areas is a basic technology layer for customizing design. This implies that within some limits one may automatically vary the form of information per a certain category of substance of information.
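The separation of substance and form underlying this basic technology layer can be illustrated with a short sketch: the same core message is stored once, with several alternative presentations, and one presentation is chosen per user profile. The mapping and all field names are illustrative assumptions, not a specification of the system described here.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    substance: str     # the core message, kept constant
    forms: dict        # alternative presentations of the same substance

def customize(item: ContentItem, profile: dict) -> dict:
    """Pick one presentation of the same substance per user profile."""
    if profile.get("modality_preference") == "audio":
        chosen = item.forms.get("audio", item.forms["text"])
    elif profile.get("mood") == "negative":
        chosen = item.forms.get("calm_layout", item.forms["text"])
    else:
        chosen = item.forms["text"]
    return {"substance": item.substance, "presentation": chosen}

item = ContentItem(
    substance="New phone X now available at 299 EUR",
    forms={
        "text":        {"modality": "text",  "font": "serif",   "background": "#ffffff"},
        "audio":       {"modality": "audio", "voice": "neutral"},
        "calm_layout": {"modality": "text",  "font": "rounded", "background": "#dce9f5"},
    },
)
print(customize(item, {"modality_preference": "audio"}))
```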
The design space for Psychological Customization is formed in the interaction of a particular application area and the possibilities of the technical implementation of automated design variation. Initially, Psychological Customization includes modeling of individuals, groups, and communities to create psychological profiles and other profiles based on which customization may be conducted. In addition, a database of design rules is needed to define the desired cognitive and emotional effects for different types of profiles. Once these components are in place, content management technologies can be extended to cover variations of form and substance of information based on psychological profiles and design rules to create the desired psychological effects. [see 63]

At the technically more concrete level, a Psychological Customization System is a new form of middleware between applications, services, content management systems and databases. It provides an interface for designing desired psychological effects and user experiences for individual users or user groups. The most popular framework for building customized Web-based applications is Java 2 Enterprise Edition (J2EE). A J2EE-based implementation of the Psychological Customization System for Web-based applications is depicted in Figure 2. The basic J2EE three-tiered architecture, consisting of databases, application servers, and presentation servers, has been extended with three middleware layers: a content management layer, a customer relationship management layer, and a psychological customization layer. The profiles of the users and the communities are available in the profile repository. [see 69]

The Content Management System is used to define and manage the content repositories. This is typically based on metadata descriptions of the content assets. The metadata of the content repositories is matched against the user and community profiles by the Customer Relationship Management (CRM) system. The CRM system includes tools for managing the logic behind content, application and service customization. Rules can be simple matching rules or more complex rule sets. A special case of rule sets are scenarios, which are rule sets involving sequences of the interactions on the Web site. The Customer Relationship Management layer also includes functionality for user and community modeling. This layer can also perform automated customer data analysis, such as user clustering. [see 69]

The Psychological Customization System layer performs the optimization of the form of the content as selected by the Customer Relationship Management layer. This functionality can be considered similar to device adaptation by using content transformation rules (for example XSL-T). In the case of psychological customization, the transformation rules are produced based on the design rules for content presentation variation and the contents of the psychological profile of the user. After this optimization, the content is passed to the Web presentation layer.

Figure 2.
J2EE implementation of the Psychological\nCustomization System [69]\nEven though a working prototype of Psychological Customization\nhas not been built yet, several empirical studies support the\nfeasibility of a user-experience driven system that matches the\nform of information to the psychologically relevant properties and\nother profile factors of individual users and user groups.\nFor instance, there are individual differences in cognitive\nprocesses such as attention, working memory capacity, general\nintelligence, perceptual-motor skills and language abilities. These\nindividual differences have a considerable effect on computer-based\nperformance and may product sometimes quite large\nvariance in the intensity or type of psychological effects, such as\ndepth of learning, positive emotion, persuasion, presence, social\npresence and other types of psychological states and effects as\nwell as consequent behavior [13, 14, 18, 56, 57, 58, 59, 60, 61,\n62, 63, 70].\nThere is considerable evidence in literature and in our own\nexperimental research that varying the form of information, such\nas modality, layouts, background colors, text types, emotionality\nof the message, audio characteristics, presence of image motion\nand subliminality creates for instance emotional, cognitive and\nattentional effects [9, 25, 27, 28, 29, 30, 31, 32, 33, 34, 48]. Some\nof these effects are produced in interaction with individual\ndifferences, such as cognitive style, personality, age and gender\n[21, 46, 47], or pre-existing mood [49]. The role of hardware\nshould not be neglected. A device with a large screen or a\n247\n247\nportable device with smaller screen with user-changeable covers\nmay also influence the emerging effects [e.g. 30].\nTable 1. Key factors influencing psychological effects.\nAdapted from [56].\nLayer of\ntechnology\nKey factors\nPhysical\nHardware\n- large or small vs. human scale\n- mobile or immobile\n- close or far from body (intimate-personal\n-social distance)\nInteraction\n- degree of user vs. system\ncontrol and proactivity through\nuser interface\nCode\nVisual-functional aspects\n- way of presenting controls in an\ninterface visually and functionally\nSubstance\n- the essence of the event\ndescribed\n- type of substance\n(factual/imaginary; genre, other)\n- narrative techniques used by\nauthors\nContent\nForm\n1. Modalities\n- text, video, audio, graphics,\nanimation, etc.\n2. Visual layout\n- ways of presenting various shapes,\ncolours, font types, groupings and\nother relationships or expressive\nproperties of visual representations\n- ways of integrating modalities into\nthe user interface\n\n3. Structure\n- ways of presenting modalities,\nvisual layout and other elements of\nform and their relationships over time\n- linear and/or non-linear\nstructure (sequential vs.\nparallel; narrative\ntechniques,\nhypertextuality)\n\n\n\nThis empiric evidence partly validates the possibility for\nPsychological Customization Systems at least with mobile\ndevices and user interface prototypes used in our own research.\nTypical experiments we have conducted on the influence of form\nof information on psychological effects have included such\nmanipulations as animation and movement (for orientation\nresponse), fonts of text, layout of text, background colors of text,\nuser interface navigation element shapes (round vs. 
sharp), user\ninterface layout directions, adding background music to reading\ntext, use of subliminal affective priming in the user interface\n(emotionally loaded faces) and use of different modalities, for\ninstance. Table 1 addresses the key factors that may influence\npsychological effects of processing mediated information.\nAPPLICATION AREAS\nThe focus of this paper is on persuasion in advertising and\nproduct information presentation in e-commerce. The key\napplication area to realize this with Psychological Customization\nis Persuasive Technology. It refers to human-computer interaction\nin which there is an underlying goal to non-coercively change the\nattitudes, motivation and/or behavior of the user [15, 16]. For\ninstance, one may motivate users to quit smoking via motivating\ngames.\nHowever, it is clear that how much people allocate resources to\nprocessing a particular persuasive message has to be taken into\naccount. Further, it may be that there is not so much freedom to\nmanipulate persuasive messages to produce effects and the effects\nthemselves may be sometimes small. Despite this, empiric\nevidence in personalization, as discussed, suggests that\nstatistically significant effects perhaps in the range of a few\npercentages to even tens of percents exist in the area of\nPsychological Customization, such as emotion, presence and\nefficiency of information processing. Hence, it can be at least\npreliminarily assumed that with persuasion similar level effects\nmay be achievable also.\nPersuasion in human computer interaction has been researched\nfrom the point of view of seeing computers as tools (increasing\ncapabilities of the user), as a medium (providing experiences) and\nas a social actor (creating a relationship) [15, 16]. For the\npurposes of this article, technology used in Psychological\nCustomization for presentation of e-commerce product\ninformation and online advertising is seen mostly as a medium\nand partly as a social actor. How then to model and explain\npersuasion in more detail? Evidently no universal theory of what\nis the process of persuasion has been created yet [17].\nCandidates for explaining and modeling persuasion include i)\nlearning theory (operant conditioning), ii) functional paradigm\ntheory (similarity-attraction, pleasure seeking), iii) cognitive\nconsistency theory (new information creates tension that needs to\nbe relieved by adopting schemata), iv) congruity principle theory\n(interpretations of new information tend to be congruent with\nexisting schemata), v) cognitive dissonance theory (certain\nactions and information produce tension that needs to be relieved\nby adopting mental structures or behavior), counter-attitudinal\nadvocacy (belief-discrepant messages are persuasive), vi)\ninoculation theory (combining supportive and refutational\ninformation to achieve better persuasion) and vii) attribution\ntheory (people make simple models to predict events of the world\nand behaviors of other people). [for a review, see 50].Some\ncontemporary models of persuasion are i) social learning theory\n(environmental learning is the source of persuasion, such as social\nrelationships), ii) the elaboration likelihood model (a specific and\nlimited model on how a piece of information may influence the\nattitudes of the receiver) and iii) the communication/persuasion\nmodel (the source, the message content and form, the channel, the\nproperties of the receiver and the immediacy of the\ncommunication influence persuasion). 
[2, 39, 42]\nThe latter approach partly resembles the approach of Mind-Based\nTechnologies as a way of finding out the values of relevant\nparameters in the layers of technology, the user and the transient\nresults of processing, such as emotion and cognition. Other\nframeworks have also been presented. Meyers-Levy and\nMalaviya (1998) have presented a framework introducing several\n248\n248\nstrategies to process persuasive messages. Each strategy\nrepresents a different amount of cognitive resources employed\nduring processing and may influence the level of persuasion. [38]\nThe position of the authors is that while various theories and\nmodels for persuasion have been presented, within the context of\npersonalized information presentation especially by varying both\nthe substance of the message and the form of the message it is\ndifficult to know what types of persuasive effects may emerge.\nThis is partly due to the fact that especially the perception of form\nof information is most likely not a conscious process involving in-depth\nprocessing and cognitive appraisal but a rather automatic\nand non-cognitive process. Hence, if one influences the\nconditions of perceptual processing or some early-level cognitive\nprocessing of multimodal information, no clear models are\navailable for explaining and predicting persuasion. Also, the exact\ninfluence of the amount of cognitive resources employed during\nearly and later processing of a persuasive message remains\nunknown. It is most evident that case studies with particular\napplication are needed to verify such effects.\nHowever, the authors present one possible way of seeing\npersuasion mostly via a link to transient emotional states and\nmoods immediately before, during and after processing\ninformation presented through a Psychological Customization\nsystem. Yet, based on this approach the claim for more efficient\npersuasion in each application area, such as using Psychological\nCustomization in advertising or e-commerce product information\nremains a complex task. Despite this difficulty, we now present a\npossible selection of relevant psychological principles related to\nperceptual processing and persuasion of advertising and e-commerce\nproduct information.\nFirst, a similarity-attraction process may arise between the\npresented information and the personality of the user that may\nlead to the information being processed more fully [i.e., trait-congruency\nhypothesis; see 54]. That is, users are likely to be\nattracted to information with content and formal characteristics\nmanifesting a personality similar to their own [see e.g., 21].\nSecond, the decrease of cognitive load in perceptual processing\n(i.e., high processing fluency) may induce a feeling of\npleasantness that may label the information processed [for a\nreview, see 65]. That is, fluent stimuli are associated with\nincreased liking and positive affective responses as assessed by\nfacial EMG, for example. Third, the creation of specific\nemotional reactions and moods varying in valence and arousal\nmay label the information processed; here the effects may depend\non the type of emotion. For instance, mood-congruency may\nprovide more intensive engagement with the information\npresented when the mood induced by the information processed\nmatches a pre-existing mood of the user [see 49]. Fourth, the\nemotional reactions may induce increased attention that may lead\nto more in-depth processing of information [e.g., 26]. 
Fifth, as\nsuggested by excitation transfer theory, arousal induced by a\nprocessed stimuli influences the processing of subsequent stimuli\n[see 71, 72].\nSixth, according to selective-exposure theory, individuals are\nmotivated to make media choices in order to regulate their\naffective state [i.e., to maintain excitatory homeostasis; 73]. This\nmay mean that people use also e-commerce product information\nto manage their moods, i.e. neutralize an unwanted mood, such as\ndepression by engaging with exciting and positive product\ninformation. Users may also intensify an existing mood by\nselecting product information content that may add to the present\nmood. Seventh, affective priming research indicates that the\nvalence of subliminally exposed primes (e.g., facial expressions)\ninfluences the affective ratings of subsequent targets [40, 43],\nincluding video messages presented on a small screen [48].\nEighth, the perceived personal relevance of the particular\ninformation to the user exerts a robust influence on message\nprocessing and involvement [64]. This means that if the user is\ninterested in the product described in the information presented,\nhe will be quite involved when processing the information and\nhence his memory of the product will be enhanced. Consequently,\nit has been shown that information tailored to the needs and\ncontexts of users often increases the potential for attitude and\nbehavior change [5, 11, 41, 66, 67]. Further, there is quite a lot of\nresearch indicating that, when compared to video form, text has a\ngreater capacity to trigger active or systematic message\nprocessing and is perceived as more involving [see 44]; this\ndepends on the mood of the user, however [48]. Ninth, some\nemotional states and moods lead to secondary effects related to\ndecision-making, judgment and behavior [4, 10, 20].\nIt then seems that indeed a relevant area to focus on regarding\npersuasion with Psychological Customization is emotion (arousal\nand valence) immediately before, during and right after viewing\nproduct information and ads.\nOne may focus on "primitive" emotional responses or emotions\nrequiring more extensive cognitive appraisal and processing. Both\nof these types of emotions can be linked to various psychological\nconsequences. Consequently, with emotionally loaded\npersonalized information products one may focus on i) creating\nimmediate and primitive emotional responses, ii) creating mood\nand iii) indirectly influencing secondary effects of emotion and\nmood, such as attention, memory, performance and judgment.\nKnown psychological mechanisms used to create desired\nemotions or moods would be for instance similarity attraction\n(trait congruency), decrease of cognitive load (high processing\nfluency), mood congruency, excitation transfer, mood\nmanagement and affective priming.\nThese mechanisms are not without problems as they may have\nalso opposite effects. For instance mood congruency may\ndecrease attention and hence lessen the mobilization of cognitive\nresources in processing a persuasive message. Also, even though\nemotion is good candidate to look for a strong link to persuasion,\nthe exact nature of this link is unclear.\nThe key idea of using emotion as a hypothesized gateway to\npersuasion would be that more in-depth processing of information\ncaused by arousal, valence, attention or involvement may lead to\nincreased memory and perceived trustworthiness of information\nand also influence attitudes towards the product in question [e.g.\n65]. 
This in turn may lead to instant behavior, such as buying\nonline, clicking through an ad or purchasing the item later in a\ndepartment store based on long-term memory schemata. It should\nbe noted that this view is based on empiric evidence of the\npsychological effects and their consequences in general, but they\nhave not yet been validated with the use of e-commerce systems\nthat personalize the form of information for persuasion.\nHence, it would be most beneficial to capture the users emotional\nstates or mood before the user starts to browse a particular piece\nof product information to be able to automatically realize various\neffects with adaptation of the form of information and track the\nchanges of the online behavior of the user.\n249\n249\n3.2 Persuasive Advertising\nThe effectiveness of persuasion in advertisements in general is a\ncomplex issue. Subliminal priming, use of commonly known\nsymbols, matching the advertisement to basic biological needs,\nsuch as food, shelter and sex, maximizing the credibility of the\nmessage, telling a compelling story, creating a desirable image of\nthe perceiver with the product, placing TV-ads immediately after\nemotional (arousal) and attentional peaks of TV-programming\nand other approaches have been widely used. However, research\ninto effectiveness of the form of presentation of advertising is not\nwidely available in the scientific community. Moreover, little\nresearch has been done to understand the psychological\neffectiveness of online advertising. It should be also noted that\nadvertising may be mostly a creative and design-driven high-speed\nfield of industrial production in which various types of\nauthors and artists collaborate like in film-production to make the\nadvertisement rather than a field filled with scientists attempting\nto analyze the advertisements and their effects in great detail.\nIn internet-based advertising the advertisement is typically\npresented on a web page as a banner. The banner is embedded in\neditorial content, such as the front page of a magazine or online\nnewspaper. The banners are often placed according to the number\nand demography of the visitors on a particular section of editorial\ncontent. This means a best guess is taken as to what may be the\nmost efficient and visible way of placing the banner based on\nprevious knowledge of the behavior of desired user segments on\nthe website.\nIt seems that ads placed in context work best also online. This\nmeans that an ad that is related to the editorial content it is\ndisplayed with is more efficiently persuasive [3]. Another issue is\nthat text-based ads online may work better than only graphics.\nThis implies that most ads contain text based over a graphical\nsurface. Overall, very simple principles (larger is better etc.) seem\nto guide people's choices: e.g., larger ads are thought to be more\nappealing and affective.\nFurther, in mobile contexts personalized advertising has been\nstudied from the point of view of targeting users by emotions in\naddition to location and other relevant factors [19].\nHowever, the exact transient psychological influence of a\nparticular piece of editorial content the online advertisement is\ndisplayed with remains unknown. It is possible that the editorial\ncontent repels the user and the advertisement is also labeled by\nthis emotion. It is also possible that the editorial content induces a\npositive emotion and the advertisement gets an advantage based\non this emotional state. 
The emotional tone of the advertisements and editorial content may also interact. For example, Kamins, Marks, and Skinner (1991) showed that commercials that are more consistent in emotional tone with the TV program perform better, as measured by likeability and purchase intention ratings, than those that are inconsistent in tone. Sometimes advertisements are changed in real time per type of user as the system recognizes a user segment to which a certain banner has been targeted. However, what is lacking here is i) more detailed information about the type of user (such as what may be the most efficient way to influence him psychologically) and ii) what may be the psychological impact on the same user of the editorial content within which the banner is placed. [22]
With a Psychological Customization system some of these gaps may be at least indirectly addressed, as presented in Table 2.

Table 2. Technological possibilities of persuasive advertising with Psychological Customization.
Layer of Technology | Adaptations of Advertising Banners
1. Physical (multimedia PC or mobile device):
- The advertisement substance and form may be matched to the technology used by lifestyle segments or other means of segmentation (hip ads for mobile phones etc.)
- Mobile device: user-changeable covers in colors and shapes that facilitate emotion
2. Code (Windows-type user interface; mouse, pen, speech):
- The user interface elements (background color, forms, shapes, directions of navigation buttons etc.) may be varied in real time per page and per user in which a certain advertisement is located, to create various emotions and ease of perceptual processing
- The audio channel may be used to create emotional effects (using audio input/output sound, varying pitch, tone, background music, audio effects etc.)
3. Content
A. Substance (fixed multimedia content):
- The editorial content may be matched with the ad
- The content of the ad may be matched to the users based on various factors (interests, use history, demography, personality etc.)
- Adding subliminal extra content to create emotion
B. Form
Modality (multimedia):
- Modality may be matched to the cognitive style or pre-existing mood of the user to enable easier processing
- Background music, audio effects or ringing tones may be used as a separate modality to facilitate desired emotions and moods
- Animated text can be used to create more efficient processing of text and to facilitate some emotional effects
Visual presentation:
- Emotionally evaluated and positioned layout designs and templates for ads (colors, shapes and textures) may be utilized per type of user segment
Structure (temporal, other):
- Offering emotionally evaluated and positioned narrative templates for creating emotionally engaging stories

Based on Table 2, a Psychological Customization system may operate by trying to optimize desired emotional effects that may be related to persuasion. The content provider, such as a media company, is able to set desired effects per type of user group and advertiser need by using a Psychological Customization system. Also, the placement of ads within desired editorial contexts may be utilized with a more developed system. When a user logs in with his profile already in the database of the content provider, the system will start real-time personalization of the form of information. As the user has logged in, the front page of the service may be altered for him according to advertiser needs.
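To make this kind of effect-driven adaptation more concrete, a small and purely hypothetical sketch of a design-rule lookup is given below. The profile fields, rule contents, and function names are illustrative assumptions for the sketch, not parts of any existing Psychological Customization system.

# Hypothetical sketch: looking up a form-of-information adaptation
# for a desired psychological effect. All rule contents are invented.
from dataclasses import dataclass

@dataclass
class UserProfile:
    personality: str      # e.g., "extravert"
    cognitive_style: str  # e.g., "verbalizer"
    current_mood: str     # e.g., "neutral"

# Each rule maps (desired effect, profile trait) to form choices.
ADAPTATION_RULES = [
    {"effect": "positive_emotion", "personality": "extravert",
     "layout": "warm_colors", "modality": "animated_text"},
    {"effect": "positive_emotion", "personality": "introvert",
     "layout": "calm_colors", "modality": "plain_text"},
]

def select_adaptation(profile: UserProfile, desired_effect: str) -> dict:
    """Return the first matching form adaptation for a user and a desired
    effect; fall back to an unmodified presentation if nothing matches."""
    for rule in ADAPTATION_RULES:
        if rule["effect"] == desired_effect and rule["personality"] == profile.personality:
            return {"layout": rule["layout"], "modality": rule["modality"]}
    return {"layout": "default", "modality": "default"}

# Example: choose a banner presentation for an extravert when the
# advertiser has requested positive emotion.
print(select_adaptation(UserProfile("extravert", "verbalizer", "neutral"),
                        "positive_emotion"))

In such a sketch, the content provider's "desired effect per user group" becomes a query against the rule database, and the returned form choices (layout, modality) would be applied when the page is rendered.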
As the user\nnavigates the system and consumes information the system\nfollows ready-set effects to be realized to the user. It is clear that\nsuch a scenario is difficult, but if it is done in a simple enough\nmanner it may be that the persuasive efficiency of online\nadvertising may increase.\n3.3 Persuasive e-Commerce Product\nInformation\nPersonalized e-commerce has not been studied widely. It has been\nfound that while personalization of the content substance\ndisplayed to each user may provide value, the users have a strong\nmotivation to understand why the system is displaying a\nparticular piece of information for them. Also, users wish to be in\ncontrol of their user profiles. [see 1, 23]\nHence, it seems that users are at least partly suspicious to the\nsystem adapting the substance of information to them. However,\n250\n250\nin many cases it may be possible to adapt the form of information\nin personalized applications in conjunction to content substance\nvariation or even without it. The adaptation of form of\ninformation to the user may even be a more transparent way of\npersonalizing user-system- interaction as the user most likely does\nnot question the form of a particular substance. Hence, there are\nemerging possibilities for personalization and customization in\nthis area.\nThere are at least two different types of advanced e-commerce\nsystems commonly used: i) systems using recommendation\nengines and other personalization features to present information\nin a media-like manner and ii) systems using persuasive interface\nagents, creating a relationship between the user and the agent. The\nfocus here is mostly on presentation of product information, such\nas information (product properties, comparisons, pricing,\nfunctionalities and other information) of a new car, digital\ncamera, computer or garment. Although, in the context of product\npresentation, users have usually been suggested to prefer a\ncombination site including pictures and text [e.g. 35], individual\ndifferences in their preferences are likely to occur.\nThe technological possibilities for persuasive presentation of\nproduct information are much like those presented for persuasive\nadvertising seen in Table 2. In other words, different layers of\ntechnology may be adapted to the user of an e-commerce system\nto create various psychological effects when presenting product\ninformation.\nWith personalized e-commerce systems for product information\npresentation one may facilitate positive emotional responses for\ninstance by selecting the modalities of the information to be\ndisplayed according to the processing styles and alter visual\nlayouts of the interface according to the personalities of the users.\nThe ease of processing information and the similarity-attraction\nbetween visual layouts and personalities may create positive\nemotional states. As for brand awareness one may indirectly\ninfluence memory with the facilitation of positive emotion and\nincrease memory-based performance on the task such as brand\nrecognition and recall. By increasing attention one may increase\nthe likelihood of the user of an e-commerce system to learn\nproduct information more efficiently. Positive emotion and mood\nalso has the effect of making the user adapt a less risk-prone\napproach to making decisions [20]. 
This may be used to present\nproduct information in a familiar manner creating a safe\natmosphere around the product to make it more desirable when\nthe user is making purchasing decisions in a positive mood.\nPsychological Customization may be used for persuasion also\nwith recommendation systems. The system knows the users\nprofile, such as type of personality, and the desired psychological\neffect is set to positive emotion in as many page-views of the\nrecommendations as possible. The user starts using the system\nand finds an interesting product that the system recommends to\nher. The form of the recommendation information is tailored to\nthe users profile and desired psychological effect in real-time\nwhen the page uploads to make the realization of positive emotion\nas probable as possible. The system may select the modality of\nrecommendation from text to audio, or from audio to animation;\nthe system may change the background colors of the page and\nmodify the shape and color of the navigation buttons, for instance.\nIn this case, the system will try to do everything possible to\nfacilitate positive emotion but change the substance of the\nrecommendation itself. Naturally in some cases depending on\ntype of user and the type of recommendation, the available\ndatabases of recommendation information and the available\nmeans of Psychological Customization of form of\nrecommendation information, the effect to be achieved is more or\nless likely to occur. However, even effects that provide some\npercentages or even tens of percents of more targeted positive\nemotion may make a difference in persuasion and hence attitudes\ntowards the product and buying behavior. This is especially true if\nthe recommendation system website has masses of users and\nhence even a slight increase in sales effectiveness may add up to\nsignificant amounts of revenue.\nFurther, one may discuss interface agents for product information\npresentation. Often with interface agents an illusion of being in\ninteraction with another human being is created in the user via\nusing for example animated agents that seemingly exhibit various\nhuman properties, such as gender, personality, politeness, group\nmembership and other factors. Here one possible application\nwould be to add an agent to float atop of a page with product\ninformation to comment or recommend it, to aid the user in\nnavigation and finding interesting information and to act as a\nfeedback channel for the user, such as collecting the users\ninterest profile or other situational relevant information.\nIt is known that both the substance of the interaction (what is\nbeing sold, or what information is presented, and what the agent\nsays, or how it acts) and the form of interaction (how information\nis presented, what is the appearance and personality or other\nfactors of the agent) influence for instance trust, persuasiveness,\nemotion and liking of the transaction [e.g. 51].\nWhat Psychological Customization may add here may be more\nsystematic and efficient personalization of the way of presenting\ninformation together with customizing the selected appearance\nand other features of the agent in without actually changing the\nsubstance of the interaction, i.e. 
what the agent says or what\nproduct information is presented, for instance.\nCONCLUSION\nThe authors believe that no other comprehensive framework of\nvarying form of information to systematically create emotional\nand cognitive effects has been presented, specifically in\npersuasive presentation of online advertising and product\ninformation in e-commerce sites. Differences to other approaches\nto influencing user experience in general are various. Usability\nstudies traditionally address the question of how to make difficult\ntechnology easy to use. Usability is at least partly founded on the\nidea of optimal human-machine performance, i.e. how well a user\ncan manipulate and control a machine. However, there is a\ngrowing conviction that, in order to ensure high user satisfaction\nusability is not sufficient [see 12, 24].\nThe approach to system design presented in this paper may be\nbeneficial to the fields of e-commerce and online advertising\nbecause: i) it provides a possibility to personalize the form of\ninformation that may be more transparent and acceptable to the\nusers than adapting the substance of information, ii) it offers a\nway of more systematically accessing and controlling transient\npsychological effects of users of e-commerce and advertisement\ndisplaying systems, iii) it offers possibilities to more efficiently\npersuade and consequently influence behavior of individual users\nand iv) it is compatible with existing and new systems\n(recommendation engines, click-through-systems, other) as an\nadd-on or a middleware layer in software with many potential\napplication areas.\n251\n251\nThe potential drawbacks of the framework include the following:\ni) it may be costly to build the design-rule databases and actually\nworking real-life systems for creating systematic psychological\neffects, ii) the rule-databases may have to be adapted also locally\nand culturally, iii) the method needed to create a rule-database is\nnot easy to use and may be suspect to ecological validity (eye-tracking\n, behavioral and psychophysiological measures, self-report\n, field tests are needed to verify laboratory results etc.) and\niv) if the system works efficiently it may raise privacy issues,\nsuch as the intimacy of a personal psychological user profile\n(personality, cognitive style, values, other). Also ethical issues\nrelated to mind-control or even propaganda may arise.\nIt should be noted that to build a smoothly functioning\nPsychological Customization system one should do much more\nresearch and gain more evidence of the systematic relationships of\nuser profiles, information forms and psychological effects.\nHowever, in our research for the past four years we have found\nmany feasible rules for personalization for psychological effects.\nRegarding future research, content management technologies\nshould be elaborated to provide for the platform that prototypes\ncan be built on. Consequently, we aim to build, evaluate and\nfield-test prototypes of Psychological Customization in various\nareas, specifically in mobile, urban ad-hoc contexts and situations\nrelated to mobile advertising and e-commerce, but also other\nareas such as mobile gaming communities, mobile content,\nmobile messaging, knowledge work systems and city navigation.\nREFERENCES\n[1] Alpert, S. R.; Karat, J.; Karat, C-M; Brodie, C and Vergo,\nJ.G. (2003) User Attitudes Regarding a User-Adaptive\neCommerce Web Site. 
User Modeling and User-Adapted\nInteraction\n13\n(4): 373-396, November 2003.\n[2] Bandura (1977) Social learning theory. Englewood Cliffs,\nNJ: Prentice-Hall.\n[3] Baudisch, P. and Leopold, D. (2000) Attention, indifference,\ndislike, action: Web advertising involving users. Netnomics\nVolume 2 , Issue 1 2000, pp. 75-83.\n\n\n[4] Brave S. and Nass, C. (2003). Emotion in human-computer\ninteraction. In Jacko, J.A. and Sears, A. (Ed.), The Human-Computer\nInteraction Handbook. Fundamentals, Evolving\nTechnologies and Emerging Applications. (pp. 81-96).\nLondon : Lawrence Erlbaum Associates.\n[5] Beniger, J. R. (1987) Personalization of mass media and the\ngrowth of pseudo-community. Communication Research,\n14(3), pp 352-371.\n[6] Benkler, Y. (2000) From Consumers to Users: Shifting the\nDeeper Structures of Regulation. Federal Communications\nLaw Journal 52, 561-63.\n[7] Billmann, D. (1998) Representations. In Bechtel, W. and\nGraham, G. (1998) A companion to cognitive science, 649-659\n. Blackwell publishers, Malden, MA.\n[8] Biocca, F. and Levy, M. (1995) Communication in the age of\nvirtual reality. Lawrence Erlbaum, Hillsdale, NJ.\n[9] Cuperfain, R. and Clarke, T. K. (1985) A new perspective on\nsubliminal perception. Journal of Advertising, 14, 36-41.\n[10] Clore, G. C. and Gasper, K. (2000). Feeling is believing.\nSome affective influences on belief. In Frijda, N.H.,\nManstead, A. S. R. and Bem, S. (Ed.), Emotions and beliefs:\nHow feelings influence thoughts (pp. 10-44).\nParis/Cambridge: Editions de la Maison des Sciences de\nlHomme and Cambridge University Press.\n[11] Dijkstra, J. J., Liebrand, W.B.G and Timminga, E. (1998)\nPersuasiveness of expert systems. Behavior and Information\nTechnology, 17(3), pp 155-163.\n[12] Dillon, A. (2001). Beyond usability: process, outcome and\naffect in human computer interactions. Online:\nhttp://www.ischool.utexas.edu/~adillon/publications/beyond\n_usability.html.\n[13] Egan, D. E. (1988). Individual differences in human-computer\ninteraction. In: M. Helander (Ed.), Handbook of\nHuman-Computer Interaction, p. 543 568. Elsevier, New\nYork.\n[14] Eysenck, M. (1994) Individual Differences: Normal and\nAbnormal. New Jersey: Erlbaum.\n[15] Fogg, B.J. (2003) Motivating, influencing and persuading\nusers. In Jacko, J.A. and Sears, A. (Ed.), The Human-Computer\nInteraction Handbook. Fundamentals, Evolving\nTechnologies and Emerging Applications. (pp. 81-96).\nLondon : Lawrence Erlbaum Associates.\n[16] Fogg, B. J. (2002) Persuasive technology. Using computers\nto change what we think and do. Morgan Kaufmann\nPublishers, New York.\n[17] Ford, M. E. (1992) Motivating humans: goals, emotions,\npersonal agency beliefs. Newbury Park, Ca: Sage.\n[18] Hampson, S. E. & Colman, A. M. (Eds., 1995) Individual\ndifferences and personality. London: Longman.\n[19] Hristova, N. and O'Hare, G. M. P. (2004) Ad-me: Wireless\nAdvertising Adapted to the User Location, Device and\nEmotions.\nProceedings of the Proceedings of the 37th\nAnnual Hawaii International Conference on System\nSciences (HICSS'04) - Track 9 - Volume 9\n\n[20] Isen, A. M. (2000). Positive affect and decision making. In\nLewis, M. and Haviland-Jones, J. M. (Ed.), Handbook of\nemotions (2nd ed.) (pp. 417-435). New York: Guilford Press.\n[21] Kallinen, K., & Ravaja, N. (in press). Emotion-related\neffects of speech rate and rising vs. falling background music\nmelody during audio news: The moderating influence of\npersonality. 
Personality and Individual Differences.\n[22] Kamins, M.A., Marks, L.J., & Skinner, D. (1991). Television\ncommercial evaluation in the context of program induced\nmood: Congruency versus consistency effects. Journal of\nAdvertising, 20, 1-14.\n[23] Karat, M-C., Blom, J. and Karat, J. (in press) Designing\nPersonalized User Experiences in eCommerce. Dordrecht:\nKluwer.\n[24] Karat, J. (2003) Beyond task completion: Evaluation of\naffective components of use. In Jacko, J.A. and Sears, A.\n(Ed.), The Human-Computer Interaction Handbook.\nFundamentals, Evolving Technologies and Emerging\nApplications. (pp. 81-96). London : Lawrence Erlbaum\nAssociates.\n[25] Kihlstrm, J. F., Barnhardt, T. M. and Tataryn, D. J. (1992)\nImplicit perception. In Bornstein, R. F. and Pittmann, T. S.\n252\n252\n(eds.) Perception without awareness. Cognitive, clinical and\nsocial perspectives, 17-54. Guilford, New York.\n[26] Krohne, H.W., Pieper, M., Knoll, N., & Breimer, N. (2002).\nThe cognitive regulation of emotions: The role of success\nversus failure experience and coping dispositions. Cognition\nand Emotion, 16, 217-243.\n[27] Krosnick, J. A., Betz, A. L., Jussim, J. L. and Lynn, A. R.\n(1992) Subliminal conditioning of attitudes. Personality and\nSocial Psychology Bulletin, 18, 152-162.\n[28] Laarni, J. (2003). Effects of color, font type and font style on\nuser preferences. In C. Stephanidis (Ed.) Adjunct\nProceedings of HCI International 2003. (Pp. 31-32). Crete\nUniversity Press, Heraklion.\n[29] Laarni, J. (2002). Searching for optimal methods of\npresenting dynamic text on different types of screens. In:\nO.W. Bertelsen, S. Bdker & K. Kuutti (Eds.), Tradition and\nTranscendence. Proceedings of The Second Nordic\nConference on Human-Computer Interaction, October 19-23,\n2002, Arhus, Denmark (Pp. 217 220).\n[30] Laarni, J. & Kojo, I.(2001). Reading financial news from\nPDA and laptop displays. In: M. J. Smith & G. Salvendy\n(Eds.) Systems, Social and Internationalization Design\nAspects of Human-Computer Interaction. Vol. 2 of\nProceedings of HCI International 2001. Lawrence Erlbaum,\nHillsdale, NJ. (Pp. 109 113.)\n[31] Laarni, J., Kojo, I. & Krkkinen, L. (2002). Reading and\nsearching information on small display screens. In: D. de\nWaard, K. Brookhuis, J. Moraal, & A. Toffetti (Eds.),\nHuman Factors in Transportation, Communication, Health,\nand the Workplace. (Pp. 505 516). Shake, Maastricht. (On\nthe occasion of the Human Factors and Ergonomics Society\nEurope Chapter Annual Meeting in Turin, Italy, November\n2001).\n[32] Lang, A. (1990) Involuntary attention and physiological\narousal evoked by structural features and mild emotion in\nTV commercials. Communication Research, 17 (3), 275-299.\n[33] Lang, A., Dhillon, P. and Dong, Q. (1995) Arousal, emotion\nand memory for television messages. Journal of\nBroadcasting and Electronic Media, 38, 1-15.\n[34] Lang, A., Newhagen, J. and Reeves. B. (1996) Negative\nvideo as structure: Emotion, attention, capacity and memory.\nJournal of Broadcasting and Electronic Media, 40, 460-477.\n[35] Lightner, N.J., & Eastman, C. (2002). User preference for\nproduct information in remote purchase environments.\nJournal of Electronic Commerce Research [Online], 3,\nAvailable from\nhttp://www.csulb.edu/web/journals/\njecr/issues/20023/paper6.pdf\n.\n[36] Lombard, M. and Ditton, T. (2000) Measuring presence: A\nliterature-based approach to the development of a\nstandardized paper-and-pencil instrument. 
Project abstract\nsubmitted to Presence 2000: The third international\nworkshop on presence.\n[37] Lombard, M., Reich, R., Grabe, M. E., Bracken, C. and\nDitton, T. (2000) Presence and television: The role of screen\nsize. Human Communication Research, 26(1), 75-98.\n[38] Meyers-Levy, J. & Malaviya, P. (1998). Consumers'\nprocessing of persuasive advertisements. An integrative\nframework of persuasion theories. Journal of Marketing 63,\n4560.\n[39] McGuire, W. J. (1989) Theoretical foundations of\ncampaigns. In R.E. Rice and C.K. Atkin (eds.) Public\ncommunication campaigns (2\nnd\ned, pp 43-65). Newbury\nPark, Ca: Sage.\n[40] Monahan, J.L. (1998). I don't know it but I like you: The\ninfluence of nonconscious affect on person perception.\nHuman Communication Research, 24, 480-500.\n[41] Nowak, G. J., Shamp, S., Hollander, B., Cameron, G. T.,\nSchumann, D. W. and Thorson, E. (1999) Interactive media:\nA means for more meaningful advertising? Advertising and\nconsumer psychology. Mahwah, NJ: Lawrence Erlbaum.\n[42] Petty, R.E., & Cacioppo, J.T. (1986). Communication and\npersuasion: Central and peripheral routes to attitude change.\nNew York: Springer-Verlag.\n[43] Murphy, S.T., Monahan, J.L., & Zajonc, R.B. (1995).\nAdditivity of nonconscious affect: Combined effects of\npriming and exposure. Journal of Personality and Social\nPsychology, 69, 589-602.\n[44] Pfau, M., Holbert, R.L., Zubric, S.J., Pasha, N.H., & Lin,\nW.-K. (2000). Role and influence of communication\nmodality in the process of resistance to persuasion. Media\nPsychology, 2, 1-33.\n[45] Ravaja, N. (2004). Effects of a small talking facial image on\nautonomic activity: The moderating influence of\ndispositional BIS and BAS sensitivities and emotions.\nBiological Psychology, 65, 163-183.\n[46] Ravaja, N. (in press). Contributions of psychophysiology to\nmedia research: Review and recommendations. Media\nPsychology.\n[47] Ravaja, N., & Kallinen, K. (in press). Emotional effects of\nstartling background music during reading news reports: The\nmoderating influence of dispositional BIS and BAS\nsensitivities. Scandinavian Journal of Psychology.\n[48] Ravaja, N., Kallinen, K., Saari, T., & Keltikangas-Jrvinen,\nL. (in press). Suboptimal exposure to facial expressions\nwhen viewing video messages from a small screen: Effects\non emotion, attention, and memory. Journal of Experimental\nPsychology: Applied.\n[49] Ravaja, N., Saari, T., Kallinen, K., & Laarni, J. (2004). The\nRole of Mood in the Processing of Media Messages from a\nSmall Screen: Effects on Subjective and Physiological\nResponses. Manuscript submitted for publication.\n[50] Reardon, K. R. (1991) Persuasion in practice. Newbury Park,\nCa: Sage.\n[51] Reeves, B. and Nass, C. (1996) The media equation. How\npeople treat computers, television and new media like real\npeople and places. Cambridge University Press, CSLI,\nStanford.\n[52] Riding, R. J. and Rayner, S. (1998) Cognitive styles and\nlearning strategies. Understanding style differences in\nlearning and behavior. David Fulton Publishers, London.\n253\n253\n[53] Riecken, D. (2000) Personalized views on personalization.\nCommunications of the ACM, V. 43, 8, 27-28.\n[54] Rusting, C.L. (1998). Personality, mood, and cognitive\nprocessing of emotional information: Three conceptual\nframeworks. Psychological Bulletin, 124, 165-196.\n[55] Saari, T. (1998) Knowledge creation and the production of\nindividual autonomy. How news influences subjective\nreality. 
Reports from the department of teacher education in\nTampere university. A15/1998.\n[56] Saari, T. (2001) Mind-Based Media and Communications\nTechnologies. How the Form of Information Influences Felt\nMeaning. Acta Universitatis Tamperensis 834. Tampere\nUniversity Press, Tampere 2001.\n[57] Saari, T. (2002) Designing Mind-Based Media and\nCommunications Technologies. Proceedings of Presence\n2002 Conference, Porto, Portugal.\n[58] Saari, T. (2003a) Designing for Psychological Effects.\nTowards Mind-Based Media and Communications\nTechnologies. In Harris, D., Duffy, V., Smith, M. and\nStephanidis, C. (eds.) Human-Centred Computing:\nCognitive, Social and Ergonomic Aspects. Volume 3 of the\nProceedings of HCI International 2003, pp. 557-561.\n[59] Saari, T. (2003b) Mind-Based Media and Communications\nTechnologies. A Framework for producing personalized\npsychological effects. Proceedings of Human Factors and\nErgonomics 2003 -conference. 13.-17.10.2003 Denver,\nColorado.\n[60] Saari, T. (in press, a) Using Mind-Based Technologies to\nfacilitate Positive Emotion and Mood with Media Content.\nAccepted to proceedings of to Positive Emotion, 2\nnd\n\nEuropean Conference. Italy, July 2004.\n[61] Saari, T. (in press, b) Facilitating Learning from Online\nNews with Mind-Based Technologies. Accepted to\nproceedings of EDMedia 2004, Lugano, Switzerland.\n[62] Saari, T. and Turpeinen, M. (in press, a) Towards\nPsychological Customization of Information for Individuals\nand Social Groups. In Karat, J., Blom, J. and Karat. M.-C.\n(eds.) Personalization of User Experiences for eCommerce,\nKluwer.\n[63] Saari, T. and Turpeinen, M. (in press, b) Psychological\ncustomization of information. Applications for personalizing\nthe form of news. Accepted to proceedings of ICA 2004, 27.31\n.5. 2004, New Orleans, USA.\n[64] Schneider, S.L., & Laurion, S.K. (1993). Do we know what\nwe've learned from listening to the news? Memory and\nCognition, 21, 198-209.\n[65] Schwarz, N. (in press). Meta-cognitive experiences in\nconsumer judgment and decision making. Journal of\nConsumer Psychology.\n[66] Stretcher, V. J., Kreutzer, M., Den Boer, D.-J., Kobrin, S.,\nHospers, H. J., and Skinner, C. S. (1994) the effects of\ncomputer-tailored smoking cessation messages in family\npractice settings. Journal of Family Practice, 39(3), 262-270.\n[67] Stretcher, V. J. (1999) Computer tailored smoking cessation\nmaterials: A review and discussion. Special issue: Computer\ntailored education. Patient Education & Counceling, 36(2),\n107-117.\n[68] Turpeinen, M. (2000) Customizing news content for\nindividuals and communities. Acta Polytechnica\nScandinavica. Mathematics and computing series no. 103.\nHelsinki University of Technology, Espoo.\n[69] Turpeinen, M. and Saari, T. (2004) System Architechture for\nPsychological Customization of Information. Proceedings of\nHICSS-37- conference, 5.-8.1. 2004, Hawaii.\n[70] Vecchi, T., Phillips, L. H. & Cornoldi, C. (2001). Individual\ndifferences in visuo-spatial working memory. In: M. Denis,\nR. H. Logie, C. Cornoldi, M. de Vega, & J. Engelkamp\n(Eds.), Imagery, language, and visuo-spatial thinking.\nPsychology Press, Hove.\n[71] Zillmann, D. (1971). Excitation transfer in communication-mediated\naggressive behavior. Journal of Experimental\nSocial Psychology, 7, 419-434.\n[72] Zillmann, D. (1978). Attribution and misattribution of\nexcitatory reactions. In J.H. Harvey, W.J. Ickes, & R.F. Kidd\n(Eds.), New directions in attribution research (Vol. 2, pp.\n335-368). 
Hillsdale, NJ: Lawrence Erlbaum Associates.\n[73] Zillmann, D., & Bryant, J. (1985). Affect, mood, and\nemotion as determinants of selective exposure. In D.\nZillmann & J. Bryant (Eds.), Selective exposure to\ncommunication (pp. 157-190). Hillsdale, NJ: Lawrence\nErlbaum.\n254\n254", "keywords": "personalization emotion;persuasion;advertising;E-commerce"} {"name": "156", "title": "Publicly Verifiable Ownership Protection for Relational Databases", "abstract": "Today, watermarking techniques have been extended from the multimedia context to relational databases so as to protect the ownership of data even after the data are published or distributed. However , all existing watermarking schemes for relational databases are secret key based , thus require a secret key to be presented in proof of ownership. This means that the ownership can only be proven once to the public (e.g., to the court). After that, the secret key is known to the public and the embedded watermark can be easily destroyed by malicious users. Moreover, most of the existing techniques introduce distortions to the underlying data in the watermarking process, either by modifying least significant bits or exchanging categorical values. The distortions inevitably reduce the value of the data. In this paper, we propose a watermarking scheme by which the ownership of data can be publicly proven by anyone, as many times as necessary. The proposed scheme is distortion-free , thus suitable for watermarking any type of data without fear of error constraints. The proposed scheme is robust against typical database attacks including tuple/attribute insertion/deletion, ran-dom/selective value modification, data frame-up, and additive attacks", "fulltext": "INTRODUCTION\nOwnership protection of digital products after dissemination has\nlong been a concern due to the high value of these assets and the\nlow cost of copying them (i.e., piracy problem). With the fast development\nof information technology, an increasing number of digital\nproducts are distributed through the internet. The piracy problem\nhas become one of the most devastating threats to networking systems\nand electronic business. In recent years, realizing that \"the law\ndoes not now provide sufficient protection to the comprehensive\nand commercially and publicly useful databases that are at the heart\nof the information economy\" [12], people have joined together to\nfight against theft and misuse of databases published online (e.g.,\nparametric specifications, surveys, and life sciences data) [32, 4].\nTo address this concern and to fight against data piracy, watermarking\ntechniques have been introduced, first in the multimedia\ncontext and now in relational database literature, so that the ownership\nof the data can be asserted based on the detection of watermark\n. The use of watermark should not affect the usefulness of\ndata, and it must be difficult for a pirate to invalidate watermark detection\nwithout rendering the data much less useful. Watermarking\nthus deters illegal copying by providing a means for establishing\nthe original ownership of a redistributed copy [1].\nIn recent years, researchers have developed a variety of watermarking\ntechniques for protecting the ownership of relational databases\n[1, 28, 26, 29, 13, 19, 20, 2] (see Section 5 for more on related\nwork). 
One common feature of these techniques is that they are secret\nkey based, where ownership is proven through the knowledge\nof a secret key that is used for both watermark insertion and detection\n. Another common feature is that distortions are introduced\nto the underlying data in the process of watermarking. Most techniques\nmodify numerical attributes [1, 28, 29, 13, 19, 20], while\nothers swap categorical values [26, 2]. The distortions are made\nsuch that the usability of data for certain applications is not affected\nand that watermark detection can be performed even in the\npresence of attacks such as value modification and tuple selection.\nThe above two features may severely affect the application of\nwatermarking techniques for relational databases. First, the secret\nkey based approach is not suitable for proving ownership to the\npublic (e.g., in a court). To prove ownership of suspicious data,\nthe owner has to reveal his secret key to the public for watermark\ndetection. After being used one time, the key is no longer secret.\nWith access to the key, a pirate can invalidate watermark detection\nby either removing watermarks from protected data or adding a\nfalse watermark to non-watermarked data.\nSecond, the distortions that are introduced in the process of watermarking\nmay affect the usefulness of data. Even though certain\nkind of error constraints (e.g., means and variances of watermarked\nattributes) can be enforced prior to or during the watermarking\nprocess, it is difficult or even impossible to quantify all\npossible constraints, which may include domain constraint, unique-ness\nconstraint, referential integrity constraint, functional dependencies\n, semantic integrity constraint, association, correlation, car-dinality\nconstraint, the frequencies of attribute values, and statisti\n\ncal distributes. In addition, any change to categorical data may be\nconsidered to be significant. Another difficulty is that the distortions\nintroduced by watermarking cannot be reduced arbitrarily. A\ntradeoff has to be made between watermark distortions and the robustness\nof watermark detection (roughly speaking, the more distortions\nintroduced in the watermarking process, the more likely\nthat a watermark can be detected in the presence of database attacks\n).\nIn this paper, we attempt to design a new database watermarking\nscheme that can be used for publicly verifiable ownership protection\nand that introduces no distortions. Our research was motivated\nin part by certain aspects of public key watermarking schemes in\nthe multimedia context, yet it is fundamentally different and particularly\ncustomized for relational databases (see also Section 5 for related\nwork). Our scheme has the following unique properties. First,\nour scheme is publicly verifiable. Watermark detection and ownership\nproof can be effectively performed publicly by anyone as\nmany times as necessary. Second, our scheme introduces no errors\nto the underlying data (i.e., it is distortion-free); it can be used for\nwatermarking any type of data including integer numeric, real numeric\n, character, and Boolean, without fear of any error constraints.\nThird, our scheme is efficient for incremental updating of data. It\nis designed to facilitate typical database operations such as tuple\ninsertion, deletion, and value modification. Fourth, our scheme is\nrobust. It is difficult to invalidate watermark detection and ownership\nproof through typical database attacks and other attacks. 
With these properties, we believe that our watermarking technique can be applied practically in the real world for the protection of ownership of published or distributed databases.
The rest of the paper is organized as follows. Section 2 presents our watermarking scheme, which includes watermark generation and detection. Section 3 studies how to prove ownership publicly using a watermark certificate. It also investigates certificate revocation and incremental update in our scheme. Section 4 analyzes the robustness of our scheme and the tradeoff between its robustness and overhead. Section 5 comments on related work, and Section 6 concludes the paper.
THE SCHEME
Our scheme watermarks a database relation R whose schema is R(P, A_0, ..., A_{ν-1}), where P is a primary key attribute (later we discuss extensions for watermarking a relation that does not have a primary key attribute). There is no constraint on the types of attributes used for watermarking; the attributes can be integer numeric, real numeric, character, Boolean, or any other type. Attributes are represented by bit strings in computer systems. Let η denote the number of tuples in relation R. For each attribute of a tuple, the most significant bit (MSB) of its standard binary representation may be used in the generation of a watermark. It is assumed that any change to an MSB would introduce an intolerable error to the underlying data value. For ease of reference, Table 1 lists the symbols that will be used in this paper.
2.1 Watermark Generation
Let the owner of relation R possess a watermark key K, which will be used in both watermark generation and detection. The watermark key should be capable of publicly proving ownership as many times as necessary. This is in contrast to traditional watermarking, where a watermark key is kept secret so that the database owner can prove his ownership by revealing the key for detecting the watermark. However, under that formulation, the ownership can be publicly proven only once. In addition, the key should be long enough to thwart brute-force guessing attacks against the key.
Algorithm 1 genW(R, K, γ) // Generating watermark W for DB relation R
1: for each tuple r in R do
2:   construct a tuple t in W with the same primary key t.P = r.P
3:   for i = 0; i < γ; i = i + 1 do
4:     j = G_i(K, r.P) mod (the number of attributes in r)
5:     t.W_i = MSB of the j-th attribute in r
6:     delete the j-th attribute from r
7:   end for
8: end for
9: return W
In our scheme, the watermark key is public and may take any value (numerical, binary, or categorical) selected by the owner. There is no constraint on the formation of the key. To reduce unnecessary confusion, the watermark key should be unique to the owner with respect to the watermarked relation. We suggest the watermark key be chosen as
K = h(ID | DB name | version | ...)    (1)
where ID is the owner's identity, `|' indicates concatenation, and h() is a cryptographic hash function (e.g., SHA-512) [22]. In the case of multiple owners, the public key can be extended to be a combination of all the owners' IDs or generated from them using a threshold scheme. For simplicity, we assume that there is a single owner of DB relation R in the following.
Our concept of a public watermark key is different from that of a public key in a public key infrastructure (PKI) [16].
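Before elaborating on that comparison, here is a minimal Python sketch of the key derivation in Equation (1) and of Algorithm 1 (genW). It is an illustration rather than a reference implementation: the tuple representation (dicts keyed by attribute name), the HMAC-based stand-in for the generator G, and the msb() helper are assumptions made for the sketch.

import hashlib
import hmac

def watermark_key(owner_id: str, db_name: str, version: str) -> bytes:
    """Equation (1): K = h(ID | DB name | version | ...), here with SHA-512
    and a literal '|' separator (an illustrative encoding choice)."""
    return hashlib.sha512(f"{owner_id}|{db_name}|{version}".encode()).digest()

def G(K: bytes, pk, i: int) -> int:
    """Stand-in for the pseudo-random sequence G_i(K, r.P): an HMAC of the
    primary key and the index i, keyed with K, read as an integer."""
    msg = str(pk).encode() + i.to_bytes(4, "big")
    return int.from_bytes(hmac.new(K, msg, hashlib.sha512).digest(), "big")

def msb(value) -> int:
    """Illustrative MSB: the sign bit for numeric values, otherwise the
    leftmost bit of the value's byte representation."""
    if isinstance(value, (int, float)):
        return 1 if value < 0 else 0
    data = str(value).encode()
    return (data[0] >> 7) & 1 if data else 0

def genW(R, K: bytes, gamma: int):
    """Algorithm 1 (sketch): R is a list of dicts with 'P' as the primary
    key plus candidate attributes; returns the public watermark W."""
    W = []
    for r in R:
        attrs = [a for a in r if a != "P"]   # non-primary-key attributes
        t = {"P": r["P"]}
        for i in range(gamma):               # assumes gamma <= len(attrs)
            j = G(K, r["P"], i) % len(attrs)
            t[f"W{i}"] = msb(r[attrs[j]])    # copy an MSB, no distortion
            del attrs[j]                     # each attribute used at most once
        W.append(t)
    return W

The sketch assumes that γ does not exceed the number of non-primary-key attributes, which the delete step in Algorithm 1 already implies.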
In the cryptography literature, a public key is paired with a private key such that a message encoded with one key can be decoded with its paired key; the key pair is selected in a specific way such that it is computationally infeasible to infer a private key from the corresponding public key. In our watermarking scheme, there is no private key, and the public watermark key can be arbitrarily selected. If the watermark key is derived from the owner's ID as suggested, it is similar to the public key in identity-based cryptography [25, 3, 5], though the owner does not need to request a private key from a key distribution center (KDC).
The watermark key is used to decide the composition of a public watermark W. The watermark W is a database relation whose schema is W(P, W_0, ..., W_{γ-1}), where W_0, ..., W_{γ-1} are binary attributes. Compared to DB relation R, the watermark (relation) W has the same number of tuples and the same primary key attribute P. The number γ of binary attributes in W is a control parameter that determines the number ω of bits in W, where ω = ηγ and γ ≤ ν. In particular, we call γ the watermark generation parameter.
Table 1: Notation in watermarking
R  database relation to be watermarked
η  number of tuples in relation R
ν  number of attributes in relation R
W  database watermark (relation) generated in watermarking
γ  (watermark generation parameter) number of binary attributes in watermark W
ω  number of bits in W; ω = ηγ
τ  (watermark detection parameter) least fraction of watermark bits required for watermark detection
K  watermark key
Algorithm 1 gives the procedure genW(R, K, γ) for generating the watermark W. In the algorithm, a cryptographic pseudo-random sequence generator (see chapter 16 in [24]) G is seeded with the concatenation of watermark key K and the primary key r.P for each tuple r in R, generating a sequence of numbers {G_i(K, r.P)}. The MSBs of selected values are used for generating the watermark. The whole process does not introduce any distortions to the original data. The use of MSBs is for thwarting potential attacks that modify the data. Since the watermark key K, the watermark W, and the algorithm genW are publicly known, anyone can locate those MSBs in R that are used for generating W. However, an attacker cannot modify those MSBs without introducing intolerable errors to the data.
In the construction of watermark W, each tuple in relation R contributes MSBs from different attributes that are pseudo-randomly selected based on the watermark key and the primary key of the tuple. It is impossible for an attacker to remove all of the watermark bits by deleting some but not all of the tuples and/or attributes from the watermarked data.
Algorithm 2 detW(R', K, γ, W, τ) // Detecting watermark for DB relation R'
1: match_count = 0
2: total_count = 0
3: for each tuple r in R' do
4:   get a tuple t in W with the same primary key t.P = r.P
5:   for i = 0; i < γ; i = i + 1 do
6:     total_count = total_count + 1
7:     j = G_i(K, r.P) mod (the number of attributes in r)
8:     if t.W_i = MSB of the j-th attribute in r then
9:       match_count = match_count + 1
10:    end if
11:    delete the j-th attribute from r
12:   end for
13: end for
14: if match_count/total_count > τ then
15:   return true
16: else
17:   return false
18: end if
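A matching Python sketch of Algorithm 2 (detW) follows; it reuses the G and msb helpers from the previous sketch, and the detection threshold τ is passed in as tau (its role is described in Section 2.2 below). As before, this is an illustrative reading of the pseudo-code, not a reference implementation; skipping tuples whose primary key is absent from W is a convenience added for the sketch.

def detW(R_suspect, K: bytes, gamma: int, W, tau: float) -> bool:
    """Algorithm 2 (sketch): return True if more than a fraction tau of the
    re-located MSBs in R_suspect match the public watermark W."""
    W_by_pk = {t["P"]: t for t in W}   # index watermark tuples by primary key
    match_count = total_count = 0
    for r in R_suspect:
        t = W_by_pk.get(r["P"])
        if t is None:
            continue                   # tuple deleted or primary key altered
        attrs = [a for a in r if a != "P"]
        for i in range(gamma):
            total_count += 1
            j = G(K, r["P"], i) % len(attrs)
            if t[f"W{i}"] == msb(r[attrs[j]]):
                match_count += 1
            del attrs[j]
    return total_count > 0 and match_count / total_count > tau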
The larger the watermark generation parameter γ, the more robust our scheme is against such deletion attacks.
2.2 Watermark Detection
Our watermark detection is designed to be performed publicly by anyone as many times as necessary. This is a notable difference compared with previous approaches, which are secret key based. In watermark detection, the public watermark key K and watermark W are needed to check a suspicious database relation R'. It is assumed that the primary key attribute has not been changed or else can be recovered. If the primary key cannot be relied on, one can turn to other attributes, as will be discussed in Section 2.4.
Algorithm 2 gives the procedure detW(R', K, γ, W, τ) for detecting watermark W from relation R', where γ is the watermark generation parameter used in watermark generation, and τ is the watermark detection parameter, that is, the least fraction of correctly detected watermark bits. Both parameters are used to control the assurance and robustness of watermark detection, as will be analyzed in Section 4. The watermark detection parameter τ is in the range of [0.5, 1). To increase the robustness of watermark detection, we do not require that all detected MSBs in R' match the corresponding bits in W, but that the percentage of the matches is more than τ (i.e., match_count/total_count > τ in Algorithm 2).
2.3 Randomized MSBs
Most modern computers can represent and process four primitive types of data besides memory addresses: integer numeric, real numeric, character, and Boolean. Regardless of its type, a data item is represented in computer systems as a bit string. The MSB of the bit string is the leftmost digit, which has the greatest weight. In a signed numeric format (integer or real), the MSB can be the sign bit, indicating whether the data item is negative or not (in most commonly used storage formats, the sign bit is 1 for a negative number and 0 for a non-negative number). If the sign bit is not chosen (or there is no sign bit), the MSB can be the high-order bit (next to the sign bit; in floating point format, it is the leftmost bit of the exponent). For character or Boolean data, any bit can be an MSB and we simply choose the leftmost one.
We assume that watermark bits generated from selected MSBs are randomly distributed; that is, each MSB has the same probability of 1/2 to be 1 or 0. This randomness is important in our robustness analysis (see Section 4). If this is not the case, then we randomize the MSBs by XOR'ing them with random mask bits. For the MSB of the j-th attribute of tuple r, the corresponding mask bit is the j-th bit of hash value h(K|r.P) if j ≤ ℓ, where ℓ is the bit-length of the hash output. In general, if (k-1)ℓ < j ≤ kℓ, the mask bit is the (j-(k-1)ℓ)-th bit of the k-fold iterated hash value h^k(K|r.P). Since the hash value is computed from the unique primary key, the mask bit is random; thus, the MSB after masking is random. The randomized MSBs are then used in watermark generation and detection in our scheme.
2.4 Discussion on Relations without Primary Keys
Most watermarking schemes (e.g., [1, 20, 26, 2]) for relational databases, including ours, depend critically on the primary key attribute in the watermarking process. In the case that there is no primary key attribute, or that the primary key attribute is destroyed in malicious attacks, one can turn to other attributes and construct a virtual primary key that will be used instead of the primary key in the watermarking process.
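Before turning to the virtual primary key, the masking rule of Section 2.3 can be made concrete with a short sketch. Assumptions: SHA-512 plays the role of h, bit positions are 1-based as in the text, and the raw MSB comes from the same msb() helper used in the earlier sketches.

import hashlib

def mask_bit(K: bytes, pk, j: int) -> int:
    """Mask bit for the MSB of the j-th attribute (1-based) of tuple r:
    bit (j - (k-1)*L) of the k-fold hash h^k(K|r.P), where L is the
    bit-length of the hash output and (k-1)*L < j <= k*L."""
    L = hashlib.sha512().digest_size * 8     # 512 bits for SHA-512
    k = (j + L - 1) // L                     # smallest k with j <= k*L
    digest = K + str(pk).encode()
    for _ in range(k):                       # iterate the hash k times
        digest = hashlib.sha512(digest).digest()
    pos = j - (k - 1) * L                    # 1-based bit position within h^k
    byte_index, offset = divmod(pos - 1, 8)
    return (digest[byte_index] >> (7 - offset)) & 1

def randomized_msb(K: bytes, pk, j: int, raw_msb: int) -> int:
    """XOR the raw MSB with its mask bit so that the resulting watermark
    bits are (close to) uniformly distributed."""
    return raw_msb ^ mask_bit(K, pk, j)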
The virtual primary key is constructed by combining the most significant bits of some selected attributes. The actual attributes that are used to construct the virtual primary key differ from tuple to tuple, and the selection of the attributes is based on a key that could be the watermark key in the context of this paper. The reader is referred to [19] for more details on the construction of a virtual primary key.
Since the virtual primary key is constructed from the MSBs of selected attributes, it is difficult to destroy the virtual primary key through value modification or attribute deletion. However, unlike a real primary key, the virtual primary key may not be unique for each tuple; consequently, there could be multiple tuples in both R and W sharing the same value of the primary key. In watermark detection, the exact mapping between pairs of these tuples needs
The validity\ninformation is a triple T = T\norigin\n, T\nstart\n, T\nend\nindicating the\noriginal time T\norigin\nwhen the DB relation is first certified, the\nstarting time T\nstart\n, and the ending time T\nend\nof this certificate in\nthe current binding. When the DB relation is certified for the first\ntime, T\norigin\nshould be the same as T\nstart\n. Compared with the\nidentity certificate or attribute certificate, the watermark certificate\nnot only has a validity period defined by T\nstart\nand T\nend\n, but also\ncontains the original time T\norigin\n. The original time will be useful\nin thwarting possible attacks that confuse ownership proof.\nA comparison of the watermark certificate with the traditional\nidentity certificate is illustrated in Figure 1. The two kinds of certificates\nshare a similar structure except that the public key information\nin the identity certificate is replaced by the watermark key,\nwatermark hash, and database hash in the watermark certificate.\nIn traditional identity certificate, the subject's public key is paired\nwith a private key known only to the subject. In the case of damage\nor loss of the private key (e.g., due to collision attacks), the identity\ncertificate needs to be revoked before the expiration of the certificate\n. In the watermark certificate, since there is no private key asso-ciated\nwith the public watermark key, it seems that there is no need\nVersion\nSerial Number\nSignature Algorithm\nIssuer\nValidity Period\nSubject\nSubject Public Key Info\nSignature\nVersion\nSerial Number\nSignature Algorithm\nDB-CA\nValidity Info T\nDB owner ID\nWatermark Key K\nWatermark Hash h(W)\nDB hash h(R)\nSignature Sig\nIdentity Certificate\nWatermark Certificate\nFigure 1: Relation between watermark and identity certificate\nof certificate revocation. Nonetheless, certificate revocation and recertification\nmay be needed in the case of identity change, ownership\nchange, DB-CA signature compromise, and database update.\nThe role of DB-CA is similar to that of the traditional CA in PKI\nin terms of authentication of an applicant's identity. The differences\nare: (i) it binds the applicant's ID to the watermark key, watermark,\nand watermarked data; and (ii) it confirms the original time when\nthe watermarked data was first certified. The original time is especially\nuseful in the case of recertification so as to thwart false\nclaims of ownership by a pirate. This is addressed in the following\nsubsection.\n3.2 Public Verifiability\nWhile the watermark detection process can be performed by anyone\n, voluntarily or in delegation, who has access to the public watermark\nand watermark key, the ownership is proven by further\nchecking the corresponding watermark certificate. This involves\nchecking (i) if the watermark certificate has been revoked (see the\nnext subsection for details); (ii) if the watermark key and (the hash\nof) the watermark used in watermark detection are the same as\nthose listed in the watermark certificate; (iii) if the signature is correctly\nsigned by the DB-CA stipulated in the watermark certificate\n(this is done in traditional PKI and may involve checking the DB-CA's\npublic key certificate, a chain of CA's certificates, and a certificate\nrevocation list); and (iv) the similarity of suspicious data\nR\n\nto the original data R as published by the owner of watermark\ncertificate. 
If all are proven, the ownership of the suspicious data\nis publicly claimed to belong to the owner of the watermark certificate\nfor the time period stipulated in the certificate. The original\ntime that the data was certified is also indicated in the certificate.\nThe last requirement is optional, depending on whether data\nframe-up attack\nis of concern. In a data frame-up attack, an attacker\nmodifies the watermarked data as much as possible while\nleaving the watermarked bits (i.e., MSBs of selected values) untouched\n. Note that in our scheme, an attacker can pinpoint the watermarked\nbits since the watermark key, watermark, and watermark\nalgorithm are all public. Since the ownership is publicly verifiable,\nsuch \"frame-up\" data may cause confusion and damage to the legitimate\nownership.\nThe data frame-up attack has not been discussed before, even\nthough it is also possible in secret key based schemes. For example\n, in Agrawal and Kiernan's watermarking scheme [1], the watermark\ninformation is embedded in one of least significant bits\nof some selected values. Data frame-up attack is possible if an attacker\nmodifies all significant bits except the last least significant\nbits in each value. However, this attack is less serious in secret key\n\n\nbased schemes because the owner of watermarked data may choose\nnot to claim the ownership for \"frame-up\" data. In our scheme, this\nattack is thwarted by requiring that the suspicious data is similar\nenough to the original data (the authenticity of the original data R\ncan be checked with h(R) in the watermark certificate).\nThe rationale is that when an attacker forges a low quality data\nR\n\nwith the MSBs given in the public watermark W , such R\n\nwill\nbe significantly different from the original R due to its low quality.\nThe similarity between R and R\n\nmay be measured, for example,\nby the portion of significant bits that match for each pair of values\nin R and R\n\nwhose watermarked MSBs match. The similarity may\nalso be measured in terms of the usefulness of data, such as the\ndifference of individual values, means, and variances.\n3.3 Certificate Management\nOnce publicly proven based on a valid watermark certificate,\nthe ownership of watermarked data is established for the owner\nof the certificate. The current ownership is valid for a time period\n[T\nstart\n, T\nend\n] stipulated in the certificate. The original time\nT\norigin\nwhen the data was first certified is also indicated in the certificate\n.\nThe use of original time is to thwart additive attack. Additive attack\nis a common type of attacks to watermarking schemes in which\nan attacker simply generates another watermark for watermarked\ndata so as to confuse ownership proof. The additional watermark\ncan be generated using a watermark key that is derived from the\nattacker's ID. It is also possible for the attacker to obtain a valid\nwatermark certificate for this additional watermark.\nWe solve this problem by comparing the original time T\norigin\nin the certificate of real owner with the original time T\norigin\nin the\ncertificate of the attacker. We assume that the owner of data will\nnot make the data available to potential attackers unless the data is\nwatermarked and a valid watermark certificate is obtained. Therefore\n, one always has T\norigin\n< T\norigin\nby which the legitimate\nownership can be proven in the case of an ownership dispute. 
After this, the attacker's valid certificate should be officially revoked.
Besides revocation upon losing an ownership dispute, a certificate may be revoked before its expiration based on the following reasons: (1) identity change; (2) ownership change; (3) validity period change; (4) DB-CA compromise; and (5) database update. When the owner of a valid certificate changes his identity, he needs to revoke the certificate and, at the same time, apply for a new certificate to replace the old one. Upon the owner's request, the DB-CA will grant a new validity period [T_start, T_end] according to its policy while keeping the original time T_origin unchanged in the new certificate. The case of ownership change is handled in a similar manner, except that the DB-CA needs to authenticate the new owner and ensure the ownership change is granted by the old owner. In both cases, a new watermark key and a new watermark may be derived and included in the new certificate.
Sometimes the owner wants to prolong or shorten the validity period of his certificate. In this case, the watermark certificate needs to be re-certified with a new validity period. The watermark key or watermark does not need to change in the recertification process.
In our scheme, the DB-CA is trusted, similar to the CA in traditional PKI. A traditional PKI certificate would need to be revoked for a variety of reasons, including key compromise and CA compromise. Since a watermark key is not paired with a private key in our scheme, there is no scenario of watermark key compromise. However, there is a possibility of DB-CA compromise if any of the following happens: (i) the DB-CA's signature is no longer safe (e.g., due to advanced collision attacks); (ii) the DB-CA loses its signature key; (iii) the DB-CA ceases its operation or business; or (iv) any CA who certifies the DB-CA's public key is compromised (the public key is used to verify the DB-CA's signature in our scheme). In the case of DB-CA compromise, all related watermark certificates must be revoked and re-examined by a valid DB-CA and recertified with new validity periods but unchanged original times.
Due to the similarity between the watermark certificate and the traditional identity certificate, many existing standards and mechanisms regarding certificate management, such as certification path constraints and CRL distribution points, can be borrowed from PKI with appropriate adaptations. For simplicity and convenience, the functionality of a DB-CA may be performed by a CA in traditional PKI.
3.4 Efficient Revocation of Watermark Certificate
Micali proposed an efficient public key certificate revocation scheme [23] called CRS (for certificate revocation status). Compared with the CRL-based solution, CRS substantially reduces the cost of management of certificates in traditional PKI. This scheme can easily be adapted to our scheme for efficient revocation of watermark certificates.
As pointed out in [23], the costs of running a PKI are staggering and most of the costs are due to CRL transmission. The major reason is that each time a user queries the status of a single certificate, he needs to query a directory, an agent receiving certificate information from a CA and handling user queries about it, and the directory sends him the whole CRL list that has been most recently signed by the CA. Since the CRL list tends to be very long and transmitted very often, the CRL solution is extremely expensive.
In CRS, however, the directory responds to a user's query by sending a 100-bit value only, instead of the whole CRL. The 100-bit value is employed by the user to verify whether the relevant certificate is valid or has been revoked.
In our watermarking scheme, the DB-CA selects a secret 100-bit value Y_0 for a watermark certificate, and recursively applies to it a one-way function F 365 times, assuming that the validity period of the certificate is a normal year. The DB-CA then includes the 100-bit value Y_365 = F^365(Y_0) in the watermark certificate C = (ID, K, h(W), h(R), T, DB-CA, Y_365, Sig).
Assume that the current day is the i-th day in the validity period of the certificate. The DB-CA generates a 100-bit value Y_{365-i} = F^{365-i}(Y_0) and gets it published through the directory. It is the DB owner's responsibility to obtain Y_{365-i} from the directory and publish it together with the watermark certificate C. Anyone can verify the validity of the certificate by checking whether F^i(Y_{365-i}) = Y_365, where i is the number of days since the start of the validity period (i.e., T_start in T). If this is the case, the certificate is valid; otherwise, it has been revoked before the i-th day, in which case the DB-CA did not get Y_{365-i} published. Note that Y_{365-i} cannot be computed from previously released Y_{365-j} (j < i) due to the one-way property of function F.
In this scheme, the DB owner needs to query the directory and update Y_{365-i} every day. To make the transition from Y_{365-i} to Y_{364-i} smooth, one more hour may be granted for the validity period of Y_{365-i} (i.e., 25 hours). To avoid high query load at certain hours, the validity period of Y_{365-i} should start at a different time each day for a different certificate. A policy stating this may also be included in the watermark certificate.
Note that Micali's original scheme requires a CA to (i) sign another 100-bit value besides Y_{365-i} to explicitly indicate a certificate being revoked; and (ii) sign an updated list indicating all and only the serial numbers of issued and not-yet-expired certificates. The signed value and list are sent to the directory so that any user query can be answered by the directory. In our scheme, it is the DB owner's responsibility (for his own benefit, namely anti-piracy) to query the directory and publish the updated Y_{365-i} online together with the DB, watermark, and certificate. A user who wants to verify the certificate will obtain the validity information from the owner rather than from the directory. This separation of duty simplifies the scheme and clarifies the responsibility of the DB owner.
It is relatively straightforward to analyze the communication cost of our scheme as compared with the CRL-based solution. The analysis is very similar to that given in [23] for comparing CRS with CRL (CRS is about 900 times cheaper than CRL in terms of communication cost with the Federal PKI estimates). We omit this analysis due to space limitations.
3.5 Incremental Updatability
The proposed scheme is also designed to facilitate incremental database update. In relational database systems, database update has been tailored to tuple operations (tuple insertion, deletion, and modification), where each tuple is uniquely identified by its primary key.
In our scheme, both watermark generation and detection are tuple oriented; each tuple is processed independently of other tuples, based on its primary key.
The watermark is updated as follows. If a set of new tuples is inserted into the watermarked data, the watermark generation algorithm (Algorithm 1) can be performed on those new tuples only. As a result, a set of corresponding new tuples is generated and inserted into the watermark. If a set of tuples is deleted from the watermarked data, the corresponding tuples with the same primary keys are simply deleted from the watermark. In the case that a set of values is modified, only the related tuples need to be updated in the watermark. This can be done in a similar manner as in the tuple insertion case. Note that if a modified value does not contribute any MSB to the watermark, then no update is needed for that value.
The update of the watermark certificate follows the update of the watermark. To update a watermark certificate, the owner of the watermarked data needs to authenticate himself to a DB-CA, revoke the old certificate, and get a new certificate for the updated DB and watermark. The new certificate may have an updated validity period, but the original time will not be altered. As this process involves interactions with a DB-CA, it may not be efficient if executed frequently. Fortunately, our scheme is very robust against database update, as will be indicated in the next section. Therefore, the update of the watermark and watermark certificate may lag behind the update of the watermarked data; it can be done periodically after a batch of data updates. The lag-behind watermark and certificate can still be used for checking the ownership of the updated data as long as the updates do not severely degrade the robustness of our scheme.
3.6 Discussion
Like traditional PKI, certificate revocation in our scheme is handled only by the trusted party (i.e., the DB-CA). An alternative solution is to let the DB owner himself handle the certificate revocation. After the DB-CA signs a watermark certificate C = (ID, K, h(W), h(R), T, DB-CA, Y_365, Sig), where Y_365 = F^365(Y_0), it gives Y_0 to the DB owner through a secure channel. The DB owner keeps Y_0 secret. On the i-th day in the validity period of the certificate, the DB owner himself can generate and publish Y_{365-i} = F^{365-i}(Y_0), based on which anyone can verify the validity of the certificate. This solution further simplifies our scheme in the sense that the DB-CA does not need to generate Y-values for all valid certificates each day, and that DB owners do not need to query a directory to update the Y-values. The communication cost is thus reduced substantially. Whenever the DB owner deems it appropriate (e.g., after the database is updated), he can refuse to release new Y-values to the public, thus revoking the certificate in a de facto manner, and apply for a new certificate if necessary. This solution works well for database updates because it is to the benefit of the DB owner to maintain the certificate status. However, it may not work well in the case of DB-CA compromise or loss of Y_0, but this fortunately would not happen very often as compared with database updates.
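To make the hash-chain mechanism of Sections 3.4 and 3.6 concrete, here is a minimal Python sketch. It is an illustration under our own assumptions: SHA-256 stands in for the 100-bit one-way function F, and the seed value is invented.

import hashlib

def F(value: bytes) -> bytes:
    """One-way function; SHA-256 stands in for the 100-bit F of Section 3.4."""
    return hashlib.sha256(value).digest()

def iterate(value: bytes, times: int) -> bytes:
    """Apply F repeatedly: iterate(v, t) = F^t(v)."""
    for _ in range(times):
        value = F(value)
    return value

# At certification time: Y0 stays secret (with the DB-CA, or with the owner in the
# Section 3.6 variant); Y365 = F^365(Y0) goes into the certificate.
Y0 = b"illustrative secret seed"        # invented value, for the sketch only
Y365 = iterate(Y0, 365)

def certificate_still_valid(day_i: int, published_value: bytes) -> bool:
    """Accept if applying F another day_i times reproduces the certified Y365."""
    return iterate(published_value, day_i) == Y365

day_i = 40
Y_released = iterate(Y0, 365 - day_i)   # the value published on day i
print(certificate_still_valid(day_i, Y_released))   # True as long as the chain is maintained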
It is possible to develop a hybrid solution that combines the merits of both DB-owner-handled revocation and CA-handled revocation.
ROBUSTNESS AND OVERHEAD
For a watermarking scheme to be useful, it must be robust against typical attacks and be efficient in practice. In this section, we first present a quantitative model for the robustness of our watermarking scheme. We analyze the robustness of our scheme by the same method (i.e., binomial probability) as was used in [1]. We then investigate the overhead of our watermarking scheme. We also study the tradeoffs between the robustness and the overhead in terms of the watermark generation parameter gamma and the watermark detection parameter tau.
4.1 Survival Binomial Probability
Consider n Bernoulli trials of an event, with probability p of success and q = 1 - p of failure in any trial. Let P_p(k; n) be the probability of obtaining exactly k successes out of n Bernoulli trials (i.e., the discrete probability of the binomial distribution). Then
P_p(k; n) = (n choose k) p^k q^(n-k)    (2)
(n choose k) = n! / (k! (n - k)!)    (3)
Let C_p(k; n) denote the probability of having more than k successes in n Bernoulli trials; that is, C_p(k; n) is the survival binomial probability. According to the standard analysis of the binomial distribution, we have
C_p(k; n) = sum_{i=k+1}^{n} P_p(i; n)    (4)
In many widely available computation software packages such as Matlab and Mathematica, the survival binomial probability can be computed by C_p(k; n) = 1 - binocdf(k, n, p), where binocdf(k, n, p) is the binomial cumulative distribution function with parameters n and p at value k. When n is large, the binomial distribution can be approximated by a normal distribution with mean np and standard deviation sqrt(npq), evaluated at value k + 0.5, where 0.5 is the correction of continuity (for p = 0.5, the normal is a good approximation when n is as low as 10; see chapter 7.6 in [31]). Thus, C_p(k; n) = 1 - normcdf(k + 0.5, np, sqrt(npq)), where normcdf is the normal cumulative distribution function.
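The following small Python sketch of the survival binomial probability C_p(k; n) uses exact summation of Equation 2 for moderate n and the normal approximation with continuity correction otherwise; the size threshold is our choice, not taken from the paper.

from math import comb, sqrt
from statistics import NormalDist

def survival_binomial(k: int, n: int, p: float) -> float:
    """C_p(k; n): probability of more than k successes in n Bernoulli(p) trials (Eq. 4)."""
    if n <= 1000:  # exact summation of Eq. 2 stays within float range at this size
        q = 1.0 - p
        return sum(comb(n, i) * p**i * q**(n - i) for i in range(k + 1, n + 1))
    # Normal approximation with continuity correction, as suggested in Section 4.1.
    mu, sigma = n * p, sqrt(n * p * (1.0 - p))
    return 1.0 - NormalDist(mu, sigma).cdf(k + 0.5)

print(survival_binomial(510, 1000, 0.5))  # more than 510 heads in 1000 fair flips: roughly 0.25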
4.2 Detecting Non-Watermarked Data
First consider the robustness of our scheme in terms of false hit, which is the probability of a valid watermark being detected from non-watermarked data. The lower the false hit, the better the robustness. We show that the false hit is under control in our scheme and can be made highly improbable.
Recall that in watermark detection, a collection of MSBs are located in the suspicious data and compared with the corresponding bits recorded in the public watermark. When the watermark detection is applied to non-watermarked data, each MSB in the data has the same probability 1/2 to match or not to match the corresponding bit in the watermark. Assume that the non-watermarked data has the same number of tuples (and the same primary keys) as the original data. Let omega = gamma * eta be the total number of bits in the watermark, where gamma is the watermark generation parameter (the number of MSBs contributed by each tuple) and eta is the number of tuples. The false hit is the probability that at least a tau portion of the omega bits can be detected from the non-watermarked data by sheer chance, where tau is the watermark detection parameter. The false hit H can be written as
H = C_{1/2}(tau*omega, omega) = C_{1/2}(tau*gamma*eta, gamma*eta)    (5)
Figure 2: False hit H as a function of gamma (eta = 1000; curves for tau = 0.51 to 0.55)
Figure 3: False hit H as a function of eta (gamma = 5; curves for tau = 0.51 to 0.55)
Figure 2 shows the change of the false hit when the watermark insertion parameter gamma increases from 1 to 10 for fixed eta = 1000 and various values of the watermark detection parameter tau. The figure illustrates that the false hit is monotonically decreasing with both the watermark insertion parameter gamma and the detection parameter tau. On the one hand, the larger the insertion parameter gamma, the more MSBs are included in the watermark and the smaller the false hit. On the other hand, the false hit can be decreased by increasing the detection parameter tau, which is the least fraction of watermark bits required for ownership assertion.
Figure 3 illustrates the trend of the false hit when the number of tuples eta is scaled up from 1000 to 10,000. The trend is that the false hit is monotonically decreasing with eta. This trend is linear, which is similar to that of increasing gamma, as indicated in Figure 2. A conclusion drawn from these two figures is that with reasonably large values of gamma, eta, and/or tau, the false hit can be made extremely low.
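Assuming the survival_binomial helper sketched after Section 4.1, Equation 5 can be evaluated as below; reading "at least a fraction tau of the bits" as "more than floor(tau*omega) matches" is our rounding choice and may differ slightly from the paper's.

from math import floor

def false_hit(gamma: int, eta: int, tau: float) -> float:
    """Eq. (5): chance that non-watermarked data matches at least a fraction tau of the
    omega = gamma * eta public watermark bits by sheer luck."""
    omega = gamma * eta
    k = floor(tau * omega)                   # 'at least tau*omega matches' read as 'more than k'
    return survival_binomial(k, omega, 0.5)  # helper from the Section 4.1 sketch

# Trend of Figures 2 and 3: H falls quickly as gamma (or eta) grows.
for gamma in (1, 5, 10):
    print(gamma, false_hit(gamma, 1000, 0.51))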
4.3 Detecting Watermarked Data
We now consider the robustness of our scheme in terms of false miss, which is the probability of not detecting a valid watermark from watermarked data that has been modified in typical attacks. The robustness can also be measured in terms of the error introduced by typical attacks. The less the false miss, or the larger the error introduced by typical attacks, the better the robustness. The typical attacks include database update, selective value modification, and suppression. Other typical attacks include the data frame-up attack and the additive attack, which have been addressed in a previous section.
4.3.1 Typical Database Update
Typical database update includes tuple insertion, tuple deletion, attribute deletion, and value modification. For tuple deletion and attribute deletion, the MSBs in the deleted tuples or attributes will not be detected in watermark detection; however, the MSBs in other tuples or attributes will not be affected. Therefore, all detected MSBs will match their counterparts in the public watermark, and the false miss is zero.
Though the deletion of tuples or attributes will not affect the false miss, it will make the false hit worse. The more tuples or attributes are deleted, the larger the false hit, as indicated in Section 4.2. The effect on the false hit of deleting tuples is equivalent to that of decreasing eta, as shown in Figure 3, while the effect of deleting attributes is equivalent to decreasing gamma proportionally, as shown in Figure 2.
Since the watermark detection is primary key based, a newly inserted tuple should have a valid primary key value; otherwise, there is no corresponding tuple in the public watermark. We thus consider tuple insertion to be "mix-and-match" [1]; that is, an attacker inserts zeta new tuples to replace zeta watermarked tuples with their primary key values unchanged. For watermark detection to return a false answer, at least omega - tau*omega MSBs in those newly added tuples (which consist of gamma*zeta MSBs) must not match their counterparts in the public watermark (which consists of omega bits). Therefore, the false miss M for inserting zeta tuples in mix-and-match can be written as
M = C_{1/2}(omega - tau*omega - 1, gamma*zeta)    (6)
Figure 4: False miss (tuple insertion) as a function of zeta/eta (gamma = 5, eta = 1000; curves for tau = 0.51 to 0.55)
Figure 5: False miss (tuple insertion) as a function of gamma (eta = 1000, zeta/eta = 90%)
Figure 6: False miss (tuple insertion) as a function of eta (gamma = 5, zeta/eta = 90%)
Figures 4, 5, and 6 show the false miss in the case of tuple insertion. The default parameters in these figures are zeta/eta = 90% (i.e., 90% of the new tuples are inserted into the data to replace the watermarked tuples), gamma = 5, and eta = 1000. A general trend shown in these figures is that the false miss is monotonically increasing with the watermark detection parameter tau. This trend is opposite to that of the false hit, which is monotonically decreasing with tau as indicated in Figures 2 and 3. Therefore, there is a tradeoff between false hit and false miss with respect to tau.
Figure 4 shows that even if 80% of watermarked tuples are replaced with new tuples, the false miss is as low as 10^-15 for all tau values greater than or equal to 51%. The false miss is close to one only if more than 90% of watermarked tuples are replaced in this figure.
Figures 5 and 6 illustrate that the false miss is monotonically decreasing with gamma and eta, which is similar to the trend of the false hit as indicated in Figures 2 and 3. With reasonably large gamma and/or eta, the false miss can be made extremely low.
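Reusing the survival_binomial sketch from Section 4.1, Equation 6 can be evaluated as follows; the rounding of omega - tau*omega to an integer threshold is our choice.

from math import ceil

def false_miss_insertion(gamma: int, eta: int, tau: float, zeta: int) -> float:
    """Eq. (6): zeta watermarked tuples are replaced by fresh ones ('mix-and-match');
    detection fails only if the gamma*zeta new MSBs yield at least omega - tau*omega
    mismatches.  Uses survival_binomial from the Section 4.1 sketch."""
    omega = gamma * eta
    k = ceil(omega - tau * omega) - 1        # 'at least omega - tau*omega' mismatches
    return survival_binomial(k, gamma * zeta, 0.5)

print(false_miss_insertion(5, 1000, 0.51, 800))  # 80% replacement: negligible, as in Figure 4
print(false_miss_insertion(5, 1000, 0.51, 980))  # only near-total replacement drives it up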
For value modification, we assume that the modified values are randomly chosen. We leave the selective modification targeted on watermarked values to the next subsection. Recall that there are nu attributes in the original data, of which gamma attributes are watermarked for each tuple. When a random modification happens, it has probability gamma/nu that a watermarked value is chosen. When a watermarked value is modified, its MSB has probability 1/2 to change (i.e., the value is modified randomly). In watermark detection, a randomly modified value therefore has probability gamma/(2*nu) of producing an MSB that does not match its counterpart in the public watermark. The false miss M for randomly modifying zeta values can be written as
M = C_{gamma/(2*nu)}(omega - tau*omega - 1, zeta)    (7)
Figure 7: False miss (value modification) as a function of zeta/(eta*nu) (nu = 10, gamma = 5, eta = 1000; curves for tau = 0.51 to 0.55)
Figure 8: False miss (value modification) as a function of gamma (nu = 10, eta = 1000, zeta/(eta*nu) = 90%)
Figure 9: False miss (value modification) as a function of eta (nu = 10, gamma = 5, zeta/(eta*nu) = 90%)
Figures 7, 8, and 9 show the false miss in the case of random value modification. The default parameters in these figures are zeta/(eta*nu) = 90% (i.e., 90% of the values are modified randomly), nu = 10, gamma = 5, and eta = 1000. The general trend shown in these figures for value modification is similar to that shown in the previous Figures 4, 5, and 6 for tuple insertion. The difference in calculation is due to the use of probability gamma/(2*nu) in Equation 7 instead of probability 1/2 in Equation 6. Figure 7 shows that even if 80% of values are modified randomly, which would make the data less useful, the false miss rate in detection is less than 10^-10 in our computation.
4.3.2 Selective Value Modification and Suppression
Since both the watermark key and the watermark are public in our scheme, an attacker can pinpoint the MSBs of watermarked values. A simple attack would be to flip some of those MSBs so that the watermark detection will detect no match. Assuming that zeta watermarked MSBs are flipped in selective value modification, the false miss M can be written as
M = 1 if zeta >= omega - tau*omega, and 0 otherwise    (8)
If no less than omega - tau*omega watermarked MSBs are flipped, the watermarked data will no longer be detected. The robustness of our scheme can then be measured in terms of the error introduced by this attack. The larger the error introduced for defeating the watermark detection (i.e., achieving M = 1), the better the robustness.
Recall that any change to an MSB would introduce intolerable error to the related data value. To defeat the watermark detection, no less than omega - tau*omega MSBs have to be flipped; this would introduce intolerable errors to no less than omega - tau*omega data values. We thus measure the robustness in terms of failure error rate, which is the least fraction F of total data values that need to be intolerably modified for defeating the watermark detection. This failure error rate can be written as
F = (gamma/nu)(1 - tau)    (9)
A larger failure error rate (or better robustness) can be achieved by increasing gamma (the watermark generation parameter) or decreasing tau (the watermark detection parameter). There is a tradeoff between the robustness of our scheme and the size of the public watermark (which has gamma binary attributes). To achieve the best robustness in terms of thwarting the selective modification attacks, one may choose gamma = nu and tau close to 0.5. (However, this would increase the false hit as indicated in Section 4.2.) In this extreme case, approximately 50% of data values have to be intolerably modified so as to defeat the watermark detection.
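The value-modification false miss (Equation 7) and the failure error rate (Equation 9) can be sketched the same way, again reusing survival_binomial from Section 4.1; the parameter values in the examples mirror the defaults quoted for Figure 7 and the gamma = nu extreme.

from math import ceil

def false_miss_modification(gamma: int, eta: int, nu: int, tau: float, zeta: int) -> float:
    """Eq. (7): zeta randomly chosen values are modified; each one disturbs a watermarked
    MSB with probability gamma/(2*nu)."""
    omega = gamma * eta
    k = ceil(omega - tau * omega) - 1
    return survival_binomial(k, zeta, gamma / (2.0 * nu))

def failure_error_rate(gamma: int, nu: int, tau: float) -> float:
    """Eq. (9): least fraction of the eta*nu values that must be intolerably modified to
    defeat detection by flipping watermarked MSBs."""
    return (gamma / nu) * (1.0 - tau)

print(false_miss_modification(5, 1000, 10, 0.51, 8000))  # 80% of values modified: still ~0 (Figure 7)
print(failure_error_rate(10, 10, 0.51))                  # gamma = nu extreme: about 0.49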
To avoid the intolerable error, an attacker may choose to suppress some watermarked values rather than flipping their MSBs. Since this attack causes no mismatch in watermark detection, the false miss is zero. However, it will increase the false hit because those MSBs will be missed in watermark detection. It is easy to see that the effect of suppressing zeta MSBs on the false hit is equivalent to decreasing the total number of MSBs by zeta in the computation of the false hit. Thus, the false hit formula (see Section 4.2) changes from C_{1/2}(tau*omega, omega) to C_{1/2}(tau*(omega - zeta), omega - zeta) for selective suppression of zeta watermarked values.
Figure 10: False hit (value suppression) as a function of zeta/(gamma*eta) (gamma = 5, eta = 1000; curves for tau = 0.51 to 0.55)
Figure 10 shows the influence of selective value suppression on the false hit for fixed gamma = 5, eta = 1000, and various tau from 0.51 to 0.55. In the figure, we change the rate zeta/(gamma*eta) (the percentage of watermarked bits that are suppressed) from 0% to 99%. Even if the rate zeta/(gamma*eta) increases up to 50%, the false hit is still below 15.4% for tau = 0.51, below 2.2% for tau = 0.52, below 0.13% for tau = 0.53, below 3 x 10^-5 for tau = 0.54, and below 2.6 x 10^-7 for tau = 0.55.
4.4 Overhead
We now analyze the time and space overhead for both watermark generation and watermark detection. Throughout the analysis, we ignore the IO cost (i.e., reading and writing tuples). Table 2 describes the symbols that will be used in this section.
Table 2: Symbols used in the analysis of overhead
t_seed: cost of seeding random sequence generator S with the public key and a tuple's primary key
t_genS: cost of generating a random number from S
t_mod: cost of a mod operation
t_delA: cost of deleting an attribute from a copy of a tuple
t_bit: cost of assigning/comparing a bit value to/with the public watermark
t_count: cost of assigning/updating a count in watermark detection
m_count: number of bits required to store a count in watermark detection
m_tuple: number of bits required to store a copy of a tuple
m_wkey: number of bits to store a watermark key
m_pkey: number of bits to store a primary key value
Consider watermark generation. For each of the eta tuples to be processed, a random sequence generator G is first seeded, then gamma MSBs are determined based on random numbers generated by G. The MSBs are assigned to the corresponding attributes in the public watermark. For each MSB to be determined, one mod operation is involved and one attribute is deleted from the copy of the related tuple. The memory requirement for processing a tuple is to keep the copy of the tuple, the gamma MSBs, and the watermark key in concatenation with the tuple's primary key. Therefore, the time overhead t_genW and space overhead m_genW for watermark generation are
t_genW = eta * t_seed + gamma*eta * (t_genS + t_mod + t_bit + t_delA) = O(gamma*eta)    (10)
m_genW = m_tuple + gamma + m_wkey = O(gamma)    (11)
In watermark detection, the time and space overheads are the same as in watermark generation except for the cost of processing the count information. Let t_if denote the cost of the last operation "if match_count/total_count > tau". The time overhead t_detW and space overhead m_detW for watermark detection can be written as
t_detW = 2*t_count + eta * t_seed + gamma*eta * (t_genS + t_mod + t_bit + t_delA + 2*t_count) + t_if = O(gamma*eta)    (12)
m_detW = 2*m_count + m_tuple + gamma + m_wkey = O(gamma)    (13)
The generated watermark W will be stored on disk. The disk storage requirement m_disk is thus
m_disk = |W| = eta * m_pkey + gamma*eta = O(gamma*eta)    (14)
4.5 Tradeoffs
In our watermark scheme, we have two parameters: the watermark generation parameter gamma and the watermark detection parameter tau. The two parameters can be used to balance between the robustness and the overhead of our scheme. Table 3 summarizes the tradeoffs that can be made when choosing the two parameters.
Table 3: Tradeoffs
parameter | false hit | false miss | failure error rate | robustness (summary) | overhead (time) | overhead (space)
increasing gamma | H decreases | M decreases | F increases | better | worse | worse
increasing tau | H decreases | M increases | F decreases | better in terms of H, worse in terms of M, F | no effect | no effect
The watermark generation parameter gamma is used to balance between robustness and overhead. The larger the gamma, the better the robustness of our scheme and the worse the time and space overhead. While the watermark detection parameter tau has no effect on the overhead, it is used as a tradeoff between false hit, false miss, and failure error rate. Increasing tau will make the robustness better in terms of false hit, but worse in terms of false miss and failure error rate.
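Putting the pieces together, one way to explore the tradeoffs of Table 3 is to tabulate the relevant quantities for candidate (gamma, tau) pairs. This reuses the sketches given earlier in this section and is our own illustration, not part of the scheme itself; the watermark size line follows the O(gamma*eta) disk estimate of Equation 14 while ignoring the primary-key bits.

def tradeoff_summary(gamma: int, tau: float, eta: int = 1000, nu: int = 10) -> dict:
    """Tabulate Table 3 quantities for one (gamma, tau) choice, using the earlier sketches."""
    return {
        "false_hit": false_hit(gamma, eta, tau),
        "failure_error_rate": failure_error_rate(gamma, nu, tau),
        "watermark_bits": gamma * eta,     # disk cost grows with gamma (and eta)
    }

for gamma, tau in [(1, 0.51), (5, 0.51), (5, 0.55), (10, 0.51)]:
    print(gamma, tau, tradeoff_summary(gamma, tau))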
RELATED WORK
Watermarking has been extensively studied in the context of multimedia data for the purpose of ownership protection and authentication [7, 17, 18]. Most watermarking schemes proposed so far are secret key based, which require complete disclosure of the watermarking key in watermark verification.
These watermarking schemes\ncan be further classified as private (both the secret key and original\ndata are required in watermark verification), blind (only the secret\nkey is needed for watermark bit decoding), and semi-blind (it\nrequires both the secret key and watermark bit sequence in watermark\ndetection). Watermarking schemes can also be classified as\nbeing robust (the watermark is hardly destroyed in attacks), or fragile\n(the watermark is hardly untouched if the watermarked data is\nmodified). The robust watermark may be used for ownership proof\nwhile the fragile watermark is suitable for data authentication and\nintegrity check.\nAs database piracy increasingly becomes a serious problem, watermarking\ntechniques have been extended to protect the ownership\nof published or distributed databases [1, 13, 28, 29, 26, 19, 20,\n2]. Agrawal and Kiernan [1] first proposed a robust watermarking\nscheme for database relations. Their scheme modifies a collection\nof least significant bits of numerical attributes. The locations of\nthose least significant bits, and the values to which those bits are\nmodified, are all determined by a secret key. With the same secret\nkey, those modified values can be localized in watermark detection,\nand ownership is claimed if a large portion of the detected values\nare as expected.\nAs noted by Agrawal and Kiernan [1], database relations differ\nfrom multimedia data in significant ways and hence require a\ndifferent class of watermarking techniques. A major difference is\nthat a database relation is composed of a set of tuples; each tuple\nrepresents an independent object which can be added, deleted, and\nmodified frequently in either benign updates or malicious attacks.\nIn contrast, a multimedia object consists of a large number of bits;\nportions of a multimedia object are bound together in fixed spatial\nor temporal order that cannot be arbitrarily changed. It is also noted\nthat the frequency domain watermarking being used in the multimedia\ncontext is not suitable for watermarking relational data. The\nreason is that the error introduced in frequency domain will spread\nover all attribute values (i.e., the whole \"image\"), which may not\nbe acceptable in certain database applications.\nThere have been other schemes proposed for watermarking relational\ndata. In Sion et al.'s scheme [28], an arbitrary bit is embedded\ninto a selected subset of numeric values by changing the distribution\nof the values. The selection of the values is based on a secret\nsorting. In another work, Gross-Amblard [13] designs a query-preserving\nscheme which guarantees that special queries (called local\nqueries) can be answered up to an acceptable distortion. Recent\nwork also includes watermarking categorical data [26], streaming\ndata [29], XML data [27], and medical databases [2]. The watermarking\nschemes for categorical data [26, 2] exchange pairs of\ncategorical values so as to embed watermark information. In this\ncase, there is no insignificant change and the error constraint is considered\nat aggregation level (e.g., k-anonymity).\nA common feature of this class of work is that a watermark is\nembedded and detected based on a secret key. Without knowing\nthe key, an attacker is not able to locate exactly where the watermark\nis embedded, nor does he destroy the embedded watermark\nunless too many errors are introduced. A drawback of such a solution\nis that the ownership of watermarked data can be proven only\nonce. 
After the key is revealed to the public (e.g., to the court) in\nthe proof, anyone knowing the key can easily locate and remove the\nembedded watermark. Another common feature of these schemes\nis that the watermarking process introduces errors to the underlying\ndata. This may severely affect database applications unless error\nconstraints are carefully enforced in the watermarking process. In\naddition, a tradeoff between the watermarking error and the robustness\nof watermarking schemes has to be made.\nThe concept of public key based watermark (or asymmetric watermark\n) was first conceived in the multimedia context. Hachez and\nQuisquater summarized the work in this area in [14]. As mentioned\nin [14], one of the first ideas was proposed by Hartung and Girod\n\n\n[15] for watermarking compressed video. The basic idea is to make\na part of the embedded watermark public such that a user can check\nthe presence of this part of watermark. However, an attacker is able\nto remove this part of watermark and thus invalidate a public detector\n. Another idea is to embed private key information into a host\nsignal and detect a correlation between the signal and a transformation\nof the signal using a public key [33]. Other correlation-based\npublic watermarking schemes include [9, 30, 11]. However, such\nwatermarks can be removed by certain attacks such as a sensitivity\nattack [6, 21] or confusing attack [34].\nCraver and Katzenbeisser [8] used a zero knowledge protocol to\nprove the presence of a watermark in a signal \"without revealing the\nexact location and nature of the watermark (specified by a private\nkey).\" As in most zero knowledge protocols, the proposed scheme\nrequires many rounds of interactions between prover and verifier,\nwhich may not be efficient in practice. It is also not clear how to extend\nthis scheme to watermarking relational databases. Because the\noriginal watermark is not certified and because a verifier is allowed\nto perform the protocol multiple times, this scheme may be subject\nto oracle attack (an attacker uses a public detector repeatedly to test\nmodified signals so as to remove the watermark), plain-text chosen\nattack (a special case of oracle attack in which the tested signals\nare chosen by an attacker), or ambiguity attack (also called invert-ibility\nattack, in which a fake watermark is discovered from the\nwatermarked signal). In comparison, our scheme requires no interaction\nbetween a verifier and the owner of data, thus is immune to\nboth oracle attack and plain-text chosen attack. The watermark is\ncertified in our scheme for thwarting the ambiguity attack (which\nwe call additive attack in this paper). In addition, our scheme is\nboth efficient and robust for typical database operations.\nCONCLUSION\nIn this paper, we proposed a public watermarking scheme for relational\ndatabases. The scheme is unique in that it has the following\nproperties.\nPublic verifiability Given a database relation to be published\nor distributed, the owner of data uses a public watermark\nkey to generate a public watermark, which is a relation\nwith binary attributes. Anyone can use the watermark\nkey and the watermark to check whether a suspicious copy\nof data is watermarked, and, if so, prove the ownership of\nthe data by checking a watermark certificate officially signed\nby a trusted certificate authority, DB-CA. 
The watermark\ncertificate contains the owner's ID, the watermark key, the\nhashes of both the watermark and DB relation, the first time\nthe relation was certified, the validity period of the current\ncertificate, and the DB-CA's signature. The watermark certificate\nmay be revoked and re-certified in the case of identity\nchange, ownership change, DB-CA compromise, or data\nupdate. Therefore, the revocation status also needs to be\nchecked in ownership proof. To our best knowledge, our\nscheme is the only one to achieve public ownership proof\nin database literature. In contrast, all existing schemes are\nbased on secret key, by which ownership cannot be proven\nmore than once in public.\nDistortion free Different from typical watermarking schemes\n(e.g., [1]) for database ownership proof that hide watermark\ninformation in data by modifying least significant bits (LSBs),\nour scheme generates a public watermark from a collection\nof the most significant bits (MSBs). Our scheme does not\nmodify any MSBs; therefore, it is distortion-free. The public\nwatermark is a database relation that has the same primary\nkey attribute as the original data, plus one or more binary\nattributes to store the MSBs. Even though the MSBs are\npublicly known, an attacker cannot modify them without introducing\nintolerable error to the underlying data. In comparison\n, all previous watermarking schemes for databases\nintroduce some kind of distortion to the watermarked data.\nThey either modify LSB's for numerical data (e.g., [1, 19,\n20]), or exchange values among categorical data (e.g., [26,\n2]). Those schemes work well for particular types of data\nonly, while our scheme can be applied for any type of data\ndistortion-free.\nIncremental updatability Following the line of [1], each\ntuple in a database relation is independently processed in\nour scheme. Neither watermark generation nor detection depends\non any correlation or costly sorting among data items\nas required in [28, 26, 2]. Therefore, the scheme is particularly\nefficient for typical database operations, which are\nmostly tuple oriented. In the case of tuple insertion, deletion,\nor modification, the watermark can be easily updated by processing\nthose relating tuples only, with simple computation\nof random sequence numbers and modulus operations. Due\nto the robustness of our scheme, the update of watermark\ncertificate can be performed periodically after a batch of data\nupdates.\nRobustness Since the ownership of data is proven after the\ndata is published or distributed, it is crucial that our scheme\nis robust against various attacks that intend to invalidate watermark\ndetection or ownership proof. The robustness of our\nscheme is measured in terms of: (i) false hit, the probability\nof detecting a valid watermark from non-watermarked\ndata; (ii) false miss, the probability of not detecting a valid\nwatermark from watermarked data due to attacks; and (iii)\nfailure error rate, the least portion of data that has to be intolerably\nmodified so as to defeat our watermark detection.\nTypical database attacks considered in this paper include tuple/attribute\ninsertion, deletion, and random/selecitive value\nmodification/suppression. Both theoretical analysis and experimental\nstudy show that our scheme is robust in terms\nof these measures, which can be adjusted by the watermark\ngeneration and detection parameters. We have also studied\nthe tradeoff between the robustness and the overhead of our\nscheme. 
Our scheme is robust against the data frame-up attack\nand additive attack that may be more perilous to public\nwatermarking schemes.\nThe major contribution of this paper is the proposal of a public\nwatermarking scheme that has the above properties. Though our\nscheme may not necessarily supersede secret key based schemes\ndue to the overhead of using certificate and public watermark, we\nbelieve that it can be applied more practically in the real world for\ndatabase ownership protection. Our future plan includes extending\nour scheme to other types of data such as XML and streaming data.\nREFERENCES\n[1] R. Agrawal and J. Kiernan. Watermarking relational\ndatabases. In Proceedings of VLDB, pages 155166, 2002.\n[2] E. Bertino, B. C. Ooi, Y. Yang, and R. Deng. Privacy and\nownership preserving of outsourced medical data. In\nProceedings of IEEE International Conference on Data\nEngineering\n, pages 521532, 2005.\n\n\n[3] D. Boneh and M. Franklin. Identity-based encryption from\nthe weil pairing. In Proceedings of CRYPTO'2001, LNCS\n2139, Springer-Varlag\n, pages 213229, 2001.\n[4] Coalition Against Database Piracy (CADP). Piracy is\nunacceptable in the information age or any other age, July 2,\n2005. http://cadp.net/default.asp.\n[5] C. Cocks. An identity based encryption scheme based on\nquadratic residues. In Cryptography and Coding - Institute of\nMathematics and Its Applications International Conference\non Cryp- tography and Coding Proceedings of IMA 2001,\nLNCS 2260\n, pages 360363, 2001.\n[6] I. J. Cox and J. M. G. Linnartz. Public watermarks and\nresistance to tampering. In Proceedings of International\nConference on Image Processing\n, pages 36, 1997.\n[7] I. J. Cox, M. L. Miller, and J. A. Bloom. Digital\nWatermarking: Principles and Practice\n. Morgan Kaufmann,\n2001.\n[8] S. Craver and S. Katzenbeisser. Security analysis of\npublic-key watermarking schemes. In SPIE Vol. 4475,\nMathematics of Data/Image Coding, Compression, and\nEncryption IV\n, pages 172182, 2001.\n[9] J. J. Eggers, J. K. Su, and B. Girod. Public key watermarking\nby eigenvectors of linear transforms. In Proceedings of\nEuropean Signal Processing Conference (EUSIPCO)\n, 2000.\n[10] S. Farrell and R. Housley. An internet attribute certificate\nprofile for authorization, internet draft, April, 2002.\nhttp://www.ietf.org/rfc/rfc3281.txt.\n[11] T. Furon, I. Venturini, and P. Duhamel. A unified approach of\nasymmetric watermarking schemes. In SPIE Vol. 4314,\nSecurity and Watermarking of Multimedia Contents III\n,\npages 269279, 2001.\n[12] B. Gray and J. Gorelick. Database piracy plague. The\nWashington Times\n, March 1, 2004.\nhttp://www.washingtontimes.com.\n[13] D. Gross-Amblard. Query-preserving watermarking of\nrelational databases and xml documents. In Proceedings of\nACM Symposium on Principles of Database Systems\n(PODS)\n, pages 191201, 2003.\n[14] G. Hachez and J. Quisquater. Which directions for\nasymmetric watermarking. In Proceedings of XI European\nSignal Processing Conference (EUSIPCO), Vol. I\n, pages\n283286, 2002.\n[15] F. Hartung and B. Girod. Fast public-key watermarking of\ncompressed video. In Proceedings of IEEE International\nConference on Speech and Signal Processing\n, 1997.\n[16] R. Housley, W. Ford, W. Polk, and D. Solo. Internet x.509\npublic key infrastructure certificate and crl profile, July 2,\n2005. http://www.ietf.org/rfc/rfc2459.txt.\n[17] N. F. Johnson, Z. Duric, and S. Jajodia. Information Hiding:\nSteganography and WatermarkingAttacks and\nCountermeasures\n. 
Kluwer Publishers, 2000.\n[18] S. Katzenbeisser and F. A. Petitcolas, editors. Information\nHiding Techniques for Steganography and Digital\nWatermarking\n. Artech House, 2000.\n[19] Y. Li, V. Swarup, and S. Jajodia. Constructing a virtual\nprimary key for fingerprinting relational data. In Proceedings\nof ACM Workshop on Digital Rights Management (DRM)\n,\nOctober 2003.\n[20] Y. Li, V. Swarup, and S. Jajodia. Fingerprinting relational\ndatabases: Schemes and specialties. IEEE Transactions on\nDependable and Secure Computing (TDSC)\n, 2(1):3445,\n2005.\n[21] J. M. G. Linnartz and M. van Dijk. Analysis of the sensitivity\nattack against electronic watermarks in images. In\nProceedings of 2nd Workshop on Information Hiding\nWorkshop\n, 1998.\n[22] A. Menezes, P. C. van Oorschot, and S. A. Vanstone.\nHandbook of Applied Cryptography\n. CRC Press, 1997.\n[23] S. Micali. Efficient certificate revocation. In Technical\nReport: TM-542b.\nMassachusetts Institute of Technology.\nCambridge, MA, USA, 1996.\n[24] B. Schneier. Applied Cryptography. John Wiley & Sons,\nInc., 1996.\n[25] A. Shamir. Identity-based cryptosystems and signature\nschemes. In Proceedings of CRYPTO'84, LNCS 196,\nSpringer-Varlag\n, pages 4753, 1984.\n[26] R. Sion. Proving ownership over categorical data. In\nProceedings of IEEE International Conference on Data\nEngineering\n, pages 584596, 2004.\n[27] R. Sion, M. Atallah, and S. Prabhakar. Resilient information\nhiding for abstract semi-structures. In Proceedings of the\nWorkshop on Digital Watermarking\n, 2003.\n[28] R. Sion, M. Atallah, and S. Prabhakar. Rights protection for\nrelational data. In Proceedings of ACM SIGMOD\nInternational Conference on Management of Data\n, pages\n98108, 2003.\n[29] R. Sion, M. Atallah, and S. Prabhakar. Resilient rights\nprotection for sensor streams. In Proceedings of the Very\nLarge Databases Conference\n, pages 732743, 2004.\n[30] J. Smith and C. Dodge. Developments in steganography. In\nProceedings of 3rd International Workshop on Information\nHiding\n, pages 7787, 1999.\n[31] G. W. Snedecor and W. G. Cochran. Statistical Methods. 8th\nedition, Iowa State Press, 1989.\n[32] L. Vaas. Putting a stop to database piracy. eWEEK,\nenterprise news and reviews\n, September 24, 2003.\nhttp://www.eweek.com/print article/0,3048,a=107965,00.asp.\n[33] R. G. van Schyndel, A. Z. Tirkel, and I. D. Svalbe. Key\nindependent watermark detection. In Proceedings of IEEE\nInternational Conference on Multimedia Computing and\nSystems, Vol. 1\n, 1999.\n[34] Y. Wu, F. Bao, and C. Xu. On the security of two public key\nwatermarking schemes. In Proceedings of 4th IEEE\nPacific-Rim Conference on Multimedia\n, 2003.\n", "keywords": "public verifiability;certificate;Relational database;watermark;ownership protection"} {"name": "157", "title": "Putting Integrated Information in Context: Superimposing Conceptual Models with SPARCE", "abstract": "A person working with diverse information sources--with possibly different formats and information models--may recognize and wish to express conceptual structures that are not explicitly present in those sources. Rather than replicate the portions of interest and recast them into a single, combined data source, we leave base information where it is and superimpose a conceptual model that is appropriate to the task at hand. This superimposed model can be distinct from the model(s) employed by the sources in the base layer. 
An application that superimposes a new conceptual model over diverse sources, with varying capabilities, needs to accommodate the various types of information and differing access protocols for the base information sources. The Superimposed Pluggable Architecture for Contexts and Excerpts (SPARCE) defines a collection of architectural abstractions, placed between superimposed and base applications, to demarcate and revisit information elements inside base sources and provide access to content and context for elements inside these sources. SPARCE accommodates new base information types without altering existing superimposed applications. In this paper, we briefly introduce several superimposed applications that we have built, and describe the conceptual model each superimposes. We then focus on the use of context in superimposed applications. We describe how SPARCE supports context and excerpts. We demonstrate how SPARCE facilitates building superimposed applications by describing its use in building our two, quite diverse applications.", "fulltext": "Introduction\nWhen a physician prepares for rounds in a hospital intensive\ncare unit, she often creates a quick synopsis of important\nproblems, with relevant lab tests or observations,\nfor each patient, as shown in Figure 1. The information\nis largely copied from elsewhere, e.g., from the patient\nmedical record, or the laboratory system. Although the\nunderlying data sources use various information\nstructures, including dictated free text, tabular results and\nformatted reports, the physician may organize the\nselected information items into the simple cells or groups\nas shown in Figure 1 (without concern for the format or\ninformation model of the base sources). Each row\ncontains information about a single patient, with the four\ncolumns containing patient identifying information, (a\nsubset of) the patient's current problems, (a subset of)\nrecent lab results or other reports, and notes (including a\n\"To Do\" list for the patient). While the information\nelements selected for this synopsis will generally suffice\nfor the task at hand (patient rounds), the physician may\nneed to view an element (such as a problem or a lab\nresult) in the original source [Gorman 2000, Ash 2001].\nHowever, this paper artefact obviously provides no means\nof automatically returning to the original context of an\ninformation element.\nIn an ICU, we have observed a clinician actively working\nwith a potentially diverse set of underlying information\nsources as she prepares to visit a patient, selecting bits of\ninformation from the various information sources, organizing\nthem to suit the current purpose, possibly\nelaborating them with highlighting or annotation, or mixing\nthem with new additional information, including new\nrelationships among bits of information [Gorman 2000].\nIn our work [Delcambre 2001], we have put forth the\nnotion of superimposed information for use in such scenarios\n. The superimposed layer contains marks, which\nare encapsulated addresses, to the information elements\nof interest in the base layer. More than that, the superimposed\nlayer may contain additional information (beyond\nmarks) and may be structured according to an appropriate\nconceptual model. We are particularly interested in\nviewing and manipulating base information using tools\nappropriate for the information source (e.g., Microsoft\nWord for .doc files, Adobe Acrobat for .PDF files, and an\nelectronic medical record system for patient data). 
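As a minimal illustration of this superimposed-layer vocabulary (the class and field names below are ours, for exposition only, and are not the SPARCE API), a mark and a superimposed element might be modeled as:

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Mark:
    """Encapsulated address of one base-layer information element."""
    base_application: str   # e.g. "MS Word", "Adobe Acrobat", an electronic medical record system
    address: str            # opaque, application-specific locator for the element

@dataclass
class SuperimposedElement:
    """A superimposed-layer object: a mark plus whatever extra attributes the
    superimposed conceptual model calls for."""
    mark: Mark
    attributes: Dict[str, str] = field(default_factory=dict)

note = SuperimposedElement(
    Mark("MS Word", "doc://patient-summary#problem-3"),   # hypothetical locator format
    {"label": "current problems"},
)
print(note.mark.base_application, note.attributes["label"])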
We have built several superimposed applications that use conceptual models that are quite different from those of any of the underlying base information sources.
In past work we have implemented superimposed applications and models that rely solely on the ability of a base application to create a mark and to return to the marked region. In this paper, we explore the use of excerpts and context for marks in superimposed applications. An excerpt consists of the extracted content for a mark, and the context contains additional descriptive information (such as section heading and font characteristics) about the marked information.
In Section 2 we present two superimposed applications that superimpose a new conceptual model over the base information (which is largely text documents), and make use of excerpt and mark capabilities. In Section 3 we describe the notion of excerpts and contexts in more detail and provide the rationale for using middleware to access them. The main contribution of this paper is our architecture for building superimposed applications, called the Superimposed Pluggable Architecture for Contexts and Excerpts (SPARCE), presented in Section 4. This architecture makes it easy for a developer to build superimposed applications, including those that superimpose a conceptual model that is different from any of the base conceptual models. The paper concludes with a discussion of how to structure and access context, a summary of related work, and conclusions and plans for future work, in Sections 5, 6, and 7, respectively.
Sample Applications
We present two superimposed applications built using SPARCE to demonstrate the ability to superimpose different conceptual models over the same corpus of base information. These applications are designed for use in the Appeals Decision Process in the Forest Service of the US Department of Agriculture (USFS).
USFS routinely makes decisions to solve (or prevent) problems concerning forests. The public may appeal any USFS decision after it is announced. The appeal process begins with a set period of time during which an appellant can send in an appeal letter that raises one or more issues with a USFS decision or the decision-making process. A USFS editor processes all appeal letters pertaining to a decision and prepares an appeal packet for a reviewing officer. An appeal packet contains all documents a reviewing officer might need to consult while formulating a recommended decision about the complete set of issues raised in the appeals. This set of documents is called the Records, Information, and Documentation (RID) section of the appeal packet. This section contains a RID letter that lists the issues raised and a summary response for each issue. An editor synthesizes a RID letter using documents in the RID such as the Decision Notice, the Environmental Assessment, the Finding of No Significant Impact (FONSI), and specialists' reports. In the RID letter, the editor presents information from other documents in a variety of forms such as excerpts, summaries, and commentaries. In addition, the editor documents the location and identity of the information sources referenced in the RID letter.
2.1 RIDPad
Composing a RID letter requires an editor to maintain a large working set of information. Since it is not unusual for an editor to be charged with preparing appeal packets for several decisions simultaneously, the editor may need to maintain several threads of organization.
Though using documents in electronic form can be helpful, such use does not necessarily alleviate all problems. For example, the editor still needs to document the identity and location of information. In using electronic documents, the editor may have to cope with more than a dozen documents simultaneously.
Figure 1: (Hand-drawn) Information summary as prepared by a resident prior to conducting rounds in a hospital intensive care unit (used with permission)
RIDPad is a superimposed application for the USFS appeal process. A USFS editor can use this application to collect and organize information needed to prepare a RID letter. A RIDPad instance is a collection of items and groups. An item is a superimposed information element associated with a mark. It has a name and a description. The name is user-defined and the description is the text excerpt from the associated mark. A group is a convenient collection of items and other groups.
Figure 2 shows a RIDPad instance with information concerning the "Road 18 Caves" decision (made in the Pacific Northwest Region of USFS). The instance shown has eight items (labeled Summary, Details, Comparison of Issues, Alternative A, Alternative B, Statement, Details, and FONSI) in four groups (labeled Environmental Assessment, Proposed Action, Other Alternatives, and Decision). The group labeled "Environmental Assessment" contains two other groups. The information in the instance shown comes from three distinct base documents in two different base applications. (The item labeled "Comparison of Issues" contains an MS Excel mark; all other items contain MS Word marks.) All items were created using base-layer support included in the current implementation of SPARCE.
Figure 2: A RIDPad Instance
RIDPad affords many operations on items and groups. A user can create new items and groups, and move items between groups. The user can also rename, resize, and change visual characteristics such as colour and font for items and groups. With the mark associated with an item, the user can navigate to the base layer if necessary, or browse the mark's context from within RIDPad via the Context Browser (as shown in Figure 3). Briefly, the Context Browser is a superimposed application window with information related to a mark. Figure 3 shows the Context Browser for the item labelled "FONSI". From the context elements listed on the left we see that this item has both content and presentation kinds of context elements. The browser displays the value of the selected context element to the right. The formatted text content is currently selected and displayed in the browser.
Figure 3: Context of a RIDPad Item
RIDPad superimposes a simple conceptual model over the selected base information, with Group and Item as the only model constructors. A group contains a name, size, location, and an ID. An item contains a name, description, size, location, and an ID. Items can occur within a Group and Groups can be nested within a Group. Figure 4 shows the model as a UML Class Diagram. The class RIDPadDoc represents the RIDPad instance, which includes information that will likely be used to prepare the RIDPad document.
Figure 4: RIDPad Information Model (Simplified)
2.2 Schematics Browser
Appeal letters from different appellants in the USFS appeal process tend to share features. They all contain appellant names and addresses, refer to a Decision Notice, and raise issues.
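(Stepping back for a moment to the RIDPad model of Figure 4: the Group/Item structure described above is small enough to sketch directly. A minimal sketch follows; the class and field names mirror the figure, while the use of Python dataclasses and the way containment is represented are our own illustrative choices, not part of RIDPad's implementation.)

```python
# Illustrative sketch of the RIDPad superimposed model (after Figure 4).
# Field names follow the paper; everything else is an assumption.
from dataclasses import dataclass, field
from typing import List, Optional, Union


@dataclass
class Mark:
    """Encapsulated address of a base-layer element (see Section 2.3)."""
    id: str
    address: str            # agent-specific address string


@dataclass
class Item:
    """A superimposed element tied to exactly one mark."""
    id: str
    name: str               # user-defined label
    description: str        # text excerpt taken from the associated mark
    size: tuple = (100, 40)
    location: tuple = (0, 0)
    mark: Optional[Mark] = None


@dataclass
class Group:
    """A convenient collection of items and other (nested) groups."""
    id: str
    name: str
    size: tuple = (200, 120)
    location: tuple = (0, 0)
    members: List[Union["Group", Item]] = field(default_factory=list)


@dataclass
class RIDPadDoc:
    """A RIDPad instance: the top-level collection of groups and items."""
    name: str
    contents: List[Union[Group, Item]] = field(default_factory=list)
```

Under this reading, the "Road 18 Caves" instance of Figure 2 would be a RIDPadDoc whose groups (Environmental Assessment, Decision, and so on) hold items whose descriptions are the excerpts of their marks.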
Such similarities among appeal letters suggest a schema for them. A superimposed schematic is an E-R schema superimposed over base information [Bowers 2002]. The Schematics Browser (see Figure 5) is a superimposed application that demonstrates the use of superimposed schematics. It is meant to allow USFS personnel to consider a set of appeal decisions to look for important issues or trends. The Schematics Browser might be used to support strategic planning activities.
Figure 5: Schematics Browser
Figure 5 shows an instance of a USFS appeal decision schematic opened in the Schematics Browser. The upper left frame lists instances of the appeal decision schematic. The user can select one of these instances, and then use the large middle frame to browse through information associated with the decision. The "1997 Ranch House Timber Sale" appeal decision is selected in Figure 5. This schematic allows the user to easily browse from a particular issue to the appeal letter(s) where the issue was raised, and on to the appellant who raised the issue, for example. Marks into any number of base sources can be associated with entities, relationships, and attributes (but only one mark per entity and attribute). When an entity, relationship, or attribute has an associated mark, a user can either visit the base layer or choose to view the excerpt from within the browser.
Figure 6 shows a simplified version of the information model the Schematics Browser uses in superimposing the E-R model over base information. The browser stores all superimposed information in a relational database. This structure is a simple generic model that accommodates arbitrary Entity-Relationship style schematics.
Figure 6: Schematics Browser's Information Model
Figure 7 uses the Schematics Browser's meta model to show a partial superimposed schematic instance. It shows an instance of the "1997 Ranch House Timber Sale" appeal decision schematic (also shown in Figure 5) and an Issue entity. It also shows the two attribute instances, desc and number, of the Issue entity. The desc attribute is associated with a mark instance (ID 41). In this simple implementation, the schematic instance data has its corresponding type information stored in the Name field.
2.3 Impact of Superimposed Information on Conceptual Model(s)
Superimposed information introduces one significant modeling construct: the mark. The mark spans between information at the superimposed layer and information in the various base layer sources.
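(Before continuing with the role of marks, it may help to return briefly to the generic storage structure of Figure 6, populated with the instance values that Figure 7 gives for the "1997 Ranch House Timber Sale" decision. The sketch below follows the class and field names in those figures; how the objects reference one another is our assumption, since the figures' key structure is not spelled out in the text.)

```python
# Sketch of the Schematics Browser's generic model (Figure 6), filled with
# the Figure 7 example values.  Object wiring is assumed, not prescribed.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Mark:
    id: int
    address: str


@dataclass
class Attribute:
    id: int
    name: str
    value: str
    mark: Optional[Mark] = None        # an attribute may carry a mark


@dataclass
class Entity:
    id: int
    name: str
    description: str
    attributes: List[Attribute] = field(default_factory=list)


@dataclass
class SchematicInst:
    id: int
    name: str
    entities: List[Entity] = field(default_factory=list)


@dataclass
class Schematic:
    name: str
    instances: List[SchematicInst] = field(default_factory=list)


appeal_decision = Schematic(
    name="Appeal Decision",
    instances=[SchematicInst(
        id=2, name="1997 Ranch House Timber Sale",
        entities=[Entity(
            id=1, name="Issue",
            description="Failed to meet Treaty and trust obligations",
            attributes=[
                Attribute(1, "desc", "The Forest Service i...",
                          mark=Mark(41, "Win1997.pdf|1|79|115")),
                Attribute(2, "number", "1"),
            ])],
    )],
)
```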
The mark thus serves as a bridge between the conceptual model used in the superimposed layer and the conceptual model used in a base information source.
Figure 7: Partial Superimposed Schematic Instance (an Appeal Decision Schematic, the "1997 Ranch House Timber Sale" SchematicInst, an Issue Entity with description "Failed to meet Treaty and trust obligations", its desc and number Attributes, and a Mark with ID 41 and address Win1997.pdf|1|79|115)
In the RIDPad application, the superimposed model consists of groups and items, where groups can be nested. This model is somewhat like a simplified XML model where groups are analogous to elements. But one important difference is that items contain marks, as opposed to PCDATA or other content. In a similar manner, the Schematics Browser uses a superimposed model that is similar to an entity-relationship model, but marks may appear as attribute values. In addition, each entity and relationship instance may be anchored, i.e., may be in one-to-one correspondence with a mark.
Any superimposed application, by definition, includes marks in the superimposed layer. Thus, the conceptual model used in the superimposed layer must, necessarily, be extended to include marks in some manner.
The use of marks has no impact on the conceptual model of the base layer. In fact, the use of marks, in general, requires no change to the base information or the base application. Marks encapsulate an address to an information element in the base source. Thus, the use of marks requires an addressing scheme for each base source that participates in a superimposed application. The addressing scheme may exploit the data model of the base information source. As an example, we could use XPath expressions to address information elements in an XML document. It is also possible to use addressing schemes that are independent of the data model used in the base information source. For example, an MS Word document could be converted to a PDF document and a user could create a mark using a bounding box where the interior of the box contains parts of individual characters. Regardless of the addressing scheme used in a mark, the superimposed layer is shielded from the details of the addressing scheme as well as the details of the conceptual model used in the base information source.
Excerpts and Contexts
Superimposed applications may want to incorporate contents of base-layer elements in the superimposed layer. For example, an application might use the extracted base-layer content as the label of a superimposed element. We call the contents of a base-layer element an excerpt. An excerpt can be of various types. For example, it may be plain text, formatted text, or an image. An excerpt of one type could also be transformed into other types. For example, formatted text in a word processor could also be seen as plain text, or as a graphical image.
In addition to excerpts, applications may use other information related to base-layer elements. For example, an application may group superimposed information by the section in which the base-layer elements reside. To do so, the application needs to retrieve the section heading (assuming one exists) of each base-layer element. We call information concerning a base-layer element, retrieved from the base layer, its context.
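(A plain collection of name-value pairs is enough to carry such information, which is how Section 4 will model context. The element names and values below are only an illustration, loosely anticipating the HTML example developed in the next paragraphs; the grouping comments refer to the context kinds listed shortly.)

```python
# Context as a simple property set (name-value pairs).  Element names and
# values are illustrative only; real elements come from a context agent.
context = {
    "Excerpt":             "Cheatgrass, Bromus tectorum, grows near ...",  # content
    "Font name":           "Times New Roman",                              # presentation
    "Line number":         37,                                             # placement
    "Containing document": "ea1.html",                                     # container
}

# A superimposed application can ignore elements it does not understand
# and simply pick out the ones it needs:
label = context.get("Excerpt", "<no excerpt>")
font  = context.get("Font name")
```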
Presentation information such as font name, and location information such as line number, might be included in the context of a mark. The context of a base-layer element may contain more than one piece of information related to that element. Each such piece of information is a context element (and context is a collection of context elements).
Figure 8: A Base-Layer Selection
Figure 8 shows a fragment of an HTML page as displayed by a web browser. The highlighted region of the fragment is the marked region. Table 1 shows an excerpt and a few context elements of this marked region. The column on the left lists names of context elements, whereas the column on the right shows values of those context elements.
Excerpt: Cheatgrass, Bromus tectorum, grows near many caves in this project area.
HTML: Cheatgrass, <i>Bromus tectorum</i>, grows near many caves in this project area.
Font name (Inherited): Times New Roman
Font size (Inherited): 12
Table 1: Sample Context Elements of an HTML Mark
Note that superimposed applications may access context information that a user might not explicitly access (or even be aware of). For example, consider the marked region shown in Figure 8. The HTML markup for this region (shown in Table 1) does not contain font information. If a superimposed application needs to display the mark's excerpt exactly as it is in the base layer, the application needs to examine the markup of the enclosing element, possibly traversing to the beginning of the document (because font characteristics can be inherited in HTML). The superimposed application may also need to examine the configuration of the Web browser to retrieve some or all of the format specification.
Several kinds of context are possible for a mark. The following is a representative list of context kinds along with example context elements for each kind.
Content: Text, graphics.
Presentation: Font name, color.
Placement: Line number, section.
Sub-structure: Rows, sentences.
Topology: Next sentence, next paragraph.
Container: Containing paragraph, document.
Application: Options, preferences.
Contexts can vary across base-layer types. For example, the context of a mark to a region in a graphics-format base layer might include background colour and foreground colour, but not font name. However, the context of a mark to a selection in a web page might include all three elements. Contexts can also vary between marks of the same base-layer type. For example, an MS Word mark to text situated inside a table may have a "column heading" context element, but a mark to text not situated in a table does not include that context element. Lastly, the context of a mark itself may change with time. For example, the context of a mark to a figure inside a document includes a "caption" context element only as long as a caption is attached to that figure.
Supporting excerpts and contexts for marks is a natural extension of our original notion of a mark as an encapsulated address. Because we use the same mechanism to support both contexts and excerpts, we will often use the term "context" broadly to refer to both kinds of information about a base-layer element.
Accessing information inside diverse base-layer types requires superimposed applications to work with a variety of base information models, addressing mechanisms, and access protocols. In addition, base applications may have different capabilities.
For example, base applications may\nvary in their support for navigation or querying, but users\nof superimposed applications may want to navigate\nthrough selected base information elements seamlessly\nand uniformly, e.g., using the Schematics Browser. We\nuse middleware to ease communication between the two\nlayers and make up for deficiencies of base applications.\nAnd we want the middleware to allow independent evolution\nof components in these layers.\nBy providing a uniform interface to base information and\nits context, the middleware reduces the complexity of\nsuperimposed applications and allows superimposed\napplication developers to focus on the needs of their applications\nsuch as the intricacies of the conceptual model\nthey aim to superimpose.\nSPARCE\nThe Superimposed Pluggable Architecture for Contexts\nand Excerpts (SPARCE) is a middleware for mark and\ncontext management [Murthy 2003]. It is designed to be\nextensible in terms of supporting new base-layer types\n75\nand contexts, without adversely affecting existing superimposed\napplications.\n\nFigure 9: SPARCE Reference Model\nFigure 9 shows a reference model for SPARCE. The\nMark Management module implements operations such\nas mark creation. It also maintains a repository of marks.\nThe Context Management module is responsible for retrieving\ncontext of base information. This module\ndepends on the Mark Management module to locate information\ninside base layers. The Clipboard module is\nmodelled after the Clipboard object in operating systems\nsuch as Macintosh and MS Windows. The Superimposed\nInformation Management module provides storage service\nto superimposed applications. We have developed a\ngeneric representation for information, called the Uni-Level\nDescription [Bowers 2003], that can represent\ninformation (including superimposed information) structured\naccording to various data models or representation\nschemes, such as XML, RDF or database models, in a\nuniform way. In this architecture, superimposed applications\ncan choose whether they use this module for\nstorage, or another storage manager.\n4.1 Key Abstractions\nTable 2 provides a brief description of the classes and\ninterfaces SPARCE uses for mark and context management\n. SPARCE supports context for three classes of\nobjects: marks, containers, and applications (using the\nclasses Mark, Container, and Application respectively). A\nContainer is an abstraction for a base document (or a\nportion of that document). An Application is an abstraction\nfor a base application. SPARCE also defines the\ninterface Context-Aware Object to any base-layer element\nthat supports context. The classes Mark, Container, and\nApplication implement this interface. Superimposed\napplications use the class SPARCE Manager to create\nnew marks and to retrieve existing marks. The SPARCE\nManager maintains a repository of marks.\nSPARCE treats context as a property set (a collection of\nname-value pairs). Context is the entire set of properties\nof a base-layer element and a context element is any one\nproperty. For example, the text excerpt and font name of\na mark are context elements. Modelling context as a\nproperty set makes it possible to support a variety of\ncontexts, both across and within base layers, without affecting\nexisting superimposed applications. This model\nalso provides a uniform interface to context of any base-layer\nelement, for any base-layer type.\nSPARCE uses the interface Context Agent to achieve its\nextensibility goal. 
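(The extensibility idea can be made concrete with a small sketch. SPARCE itself is COM-based, as Section 4.4 explains; the Python below is only an analogue of the Context Agent and Context-Aware Object interfaces, with names following Table 2 and everything else assumed.)

```python
# Illustrative Python analogue of SPARCE's Context Agent interface.
# The real system uses ActiveX/COM; names follow Table 2, the rest is assumed.
from abc import ABC, abstractmethod
from typing import Any, Dict


class ContextAwareObject(ABC):
    """Any base-layer element able to provide context (Mark, Container, Application)."""
    @abstractmethod
    def address(self) -> str:
        """Return the encapsulated address of the element."""


class ContextAgent(ABC):
    """Retrieves the context of a context-aware object for one base-layer type."""
    @abstractmethod
    def get_context(self, obj: ContextAwareObject) -> Dict[str, Any]:
        """Return the property set (name-value pairs) for `obj`."""


class PDFAgent(ContextAgent):
    """One implementation per base-layer type, e.g. for PDF documents."""
    def get_context(self, obj: ContextAwareObject) -> Dict[str, Any]:
        # A real agent would open the document named in obj.address() and
        # read content, placement, presentation, ... elements from it.
        return {"Excerpt": "...", "Page number": 1}
```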
A class that implements this interface takes a context-aware object and returns its context. That is, SPARCE does not access base-layer elements or their contexts directly. It uses external agents to do so on its behalf. However, SPARCE is responsible for associating a context-aware object with an appropriate context agent. The SPARCE Manager obtains the name of the class that will be the context agent for a mark from the description of the mark. The SPARCE Manager instantiates the context agent class by name whenever a superimposed application accesses the context of a context-aware object. Typically, there is one implementation of the context agent interface per base-layer type. For example, a PDF Agent is an implementation of this interface for use with PDF documents. A context agent implementation determines the constitution of context for its context-aware objects. SPARCE does not require an implementation to support particular context elements (nor does it prevent an implementation from defining any context element). However, we expect implementations to support kinds of context elements commonly expected (such as those listed in Section 3), and to use meaningful names for context kinds and elements.
Mark: A mark to base-layer information.
Container: The base document (or a portion of it) in which a mark is made.
Application: The base application in which a mark is made.
Context-Aware Object (interface): Interface to any base-layer element able to provide context. The classes Mark, Container, and Application implement this interface.
Context: Context of a context-aware object; a collection of context elements.
Context Element: A single piece of context information about a context-aware object.
Context Agent (interface): Interface to any base layer. An implementation will retrieve context from a context-aware object.
SPARCE Manager: Creates, stores, and retrieves marks; associates context-aware objects with appropriate context agents.
Table 2: SPARCE Classes and Interfaces
4.2 Creating Marks
A user initiates mark creation after selecting some information in a base application. The mark creation process consists of two steps: (1) generating the address of the selected base information, perhaps with other auxiliary information (collectively called mark fodder), and (2) creating a mark object in the mark repository. The address contained in mark fodder uses the addressing mechanism appropriate for the base information source. For example, the address to information inside a PDF document contains the page number and the starting and ending word indexes; the address to a selection in a spreadsheet contains the row and column numbers for the first and last cell in the selection. (Other addressing schemes are possible for these base types.)
Figure 10 depicts two possible mark-creation scenarios as a UML Use Case Diagram. (The boxes in this figure denote system boundaries; the broken arrows denote object flows.) In both scenarios, a user starts mark creation in a base application and completes it in a superimposed application. In the first scenario, labelled "Copy", the user is able to use the normal copy operation, e.g., of a word processor, to create the mark fodder.
In the "Mark" use case, the user invokes a newly introduced function (such as the Mark menu item shown in Figure 8). The superimposed application retrieves the mark fodder from the Clipboard, and passes it to the SPARCE Manager. The SPARCE Manager creates a mark object (from the fodder), assigns it a unique ID, stores it in the mark repository, and returns the new object to the superimposed application.
Figure 10: Two Mark-creation Scenarios
The first scenario allows a user to select base information in a preferred base application and copy it to the Clipboard without having to learn any new application, tool, or process to create marks. However, supporting this scenario requires cooperative base applications such as Microsoft Word and Excel. Some base applications do not directly support Clipboard operations, but they provide mechanisms (such as plug-ins or add-ins) to extend their environments. A special mark creation tool or menu option can be inserted into the user interface of such applications. The Mark use case in Figure 10 demonstrates this scenario. Early versions of Adobe Acrobat and Netscape Navigator are examples of base applications in this category.
Figure 11 shows the internal representation of a mark. This mark corresponds to the selection in the HTML page shown in Figure 8. Superimposed applications do not have visibility of a mark's internal representation. They simply use the mark's interface to access its details.
4.3 Accessing Marks and Context
A superimposed application sends a mark ID to the SPARCE Manager to retrieve the corresponding mark object from the marks repository. The SPARCE Manager instantiates an implementation of the context agent interface that is appropriate for the mark. The superimposed application can work with the mark object directly (for example, to navigate to the base layer) or can interact with the mark's context agent object (for example, to retrieve mark context).
With a context object in hand, a superimposed application can find out what context elements are available. It can also retrieve values for context elements of interest. The superimposed application may use a context element's value in various ways. For example, it may use the text content of the mark as a label, or it may apply the font characteristics of the marked region to some superimposed information.
<Mark ID="HTML2003Apr22065837YZXsmurthy">
<Agent>HTMLAgents.IEAgent</Agent>
<Class>HTMLMark</Class>
<Address>4398|4423</Address>
<Description>Noxius Weeds in ea1.html</Description>
<Excerpt>Cheatgrass, Bromus tectorum, grows near many caves in this project area.</Excerpt>
<Who>smurthy</Who>
<Where>YZX</Where>
<When>2003-04-22 06:58:37</When>
<ContainerID>cdocsea1html</ContainerID>
</Mark>
Figure 11: Internal Representation of a Mark
For ease of use, our design also allows the application to retrieve the value of a context element from the context-aware object or even from the context-agent object. An application developer may choose the access path that is most convenient to his or her particular situation.
4.4 Implementation
We have implemented SPARCE for Microsoft-Windows operating systems using ActiveX technology [COM].
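(Nothing about the stored form of a mark in Figure 11 depends on that technology choice: it is plain XML, and a few lines of standard tooling are enough to read it back. The XML below is quoted from the figure; the parser and the dictionary representation are our own illustrative choices.)

```python
# Reading back the Figure 11 mark representation with standard XML tooling.
import xml.etree.ElementTree as ET

mark_xml = """
<Mark ID="HTML2003Apr22065837YZXsmurthy">
  <Agent>HTMLAgents.IEAgent</Agent>
  <Class>HTMLMark</Class>
  <Address>4398|4423</Address>
  <Description>Noxius Weeds in ea1.html</Description>
  <Excerpt>Cheatgrass, Bromus tectorum, grows near many caves in this
  project area.</Excerpt>
  <Who>smurthy</Who>
  <Where>YZX</Where>
  <When>2003-04-22 06:58:37</When>
  <ContainerID>cdocsea1html</ContainerID>
</Mark>
"""

root = ET.fromstring(mark_xml)
mark = {child.tag: (child.text or "").strip() for child in root}
mark["ID"] = root.get("ID")

# The Agent element names the context-agent class to instantiate for this
# mark; the Address string (here "4398|4423") is opaque to the superimposed
# layer and is interpreted only by that agent.
print(mark["Agent"], mark["Address"])
```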
The\ncurrent implementation includes support for the following\nbase applications: MS Word, MS Excel, Adobe Acrobat\n(PDF files), and MS Windows Media Player (a variety of\naudio/video file types). The agents for these base applications\nsupport the following kinds of context: content,\npresentation, containment, placement, sub-structure,\ntopology, document, and application. (Some possible\ncontext elements of these kinds are listed in Section 3.)\nWe have implemented reusable view facilities such as the\nContext Browser to display the complete context of a\ncontext-aware object, and tabbed property pages to display\nproperties of context-aware objects. We have also\nimplemented a few testing aids. For example, we have\nimplemented a generic context agent with limited\nfunctionality (that can be used with any base-layer type)\nto test integration with new base-layer types. The Context\nBrowser is also a good testing tool when support for a\nnew a base type is added or when definition of context is\naltered for a base type.\n4.5 Extensibility\nSupporting new context elements is straightforward in\nSPARCE: The new context element name is just added to\nthe property set. Superimposed applications may ignore\nthe new context elements if they are not capable of\nhandling them.\nSupporting new base-layer types is more involved. It requires\na developer to understand the base layer and its\naddressing mechanisms. The developer must implement\nthe context agent interface for the base-layer type. And\nthe developer must implement a means to allow users to\n77\nselect regions within this type of base information and\ncopy mark fodder to the Clipboard. As we mention in\nSection 4.2, the developer might be able to exploit extensibility\nmechanisms of base applications for creating\nmark fodder.\nWe have used the extensibility mechanism to add support\nfor MS Word, MS Excel, Adobe Acrobat, and MS\nWindows Media Player. It took us about 7-12 hours to\nsupport each of these base types. The SPARCE implementation\nand the superimposed applications were not\nchanged or recompiled when new base types were added.\n4.6 Evaluation\nOur observations show that developing superimposed\napplications with SPARCE is relatively easy. Although\nthe effort required to develop a superimposed application\ndepends on the specifics of that application, using abstractions\nsuch as marks and contexts alleviate the need to\nmodel those entities in each application. For example,\nRIDPad is a complex application due to its graphical nature\nand the variety of operations it supports. However,\nwe were able to develop that application in approximately\n30 hours. As we added support for new base types using\nthe extensibility mechanism of SPARCE, RIDPad was\nable to automatically work with the new base types.\nThe original Schematics Browser application worked\nonly with PDF files. The application was responsible for\nmanaging marks and interacting with Adobe Acrobat.\nThe application had no context-management capabilities.\nWe altered this application to use SPARCE and it\ninstantaneously had access to all base-layer types\nSPARCE supported (and those it will support in future).\nIn addition, it also had access to context of base\ninformation. In less than 7 hours, we were able to alter\nthe Schematics Browser to use SPARCE.\nThere are many ways to deploy the components of\nSPARCE and its applications (based on application and\nuser needs). For example, RIDPad is expected to be a\nsingle-user application. 
Thus, all components of RIDPad\nand SPARCE may run on a single computer. In contrast,\nthe Schematics Browser is likely to be used by many\nUSFS personnel to browse schematic instances of past\nappeal decisions. That is, shared repositories of superimposed\ninformation and marks can be useful. Based on\nsuch analyses, we are currently in the process of evaluating\ndifferent deployment configurations of SPARCE and\nits applications. In addition to studying performance of\nthese configurations, we intend to explore the benefits of\ncaching context information.\nIssues in Context Representation\nOne of the areas of SPARCE design we are still exploring\nis the representation of context. We have considered defining\ncontexts via data types (say, a context type for each\nbase type), but feel that approach would be too restrictive.\nThe set of context elements available for a mark might\nvary across a document. For example, a mark in a Word\ndocument might have a \"column name\" context element\nif it is in a table, but not otherwise. It is even possible\nthat the context elements available for a single mark may\nchange over time. For instance, the \"image\" context element\nmight only be available while the invocation of the\nbase application in which the mark was originally created\nis still running. A context type could define all possible\ncontext elements, where a particular mark produces null\nvalues on elements undefined for it, but that approach\ncomplicates the application programming interface (especially\nfor context elements of scalar types such as\nnumbers and strings). Another issue with types is making\nit possible to write a superimposed application without\nspecifying in advance all the base sources it will be used\nwith (and their context types). We have demonstrated\nwith our current approach the ability of a superimposed\napplication to work with new context agents without\nmodifying the application. The superimposed application\ncan make use of any context elements it knows about\n(from the elements the new agent supplies). While\ninheritance schemes can support some polymorphism in\ntypes, they do not seem adequate to support the arbitrary\nkinds of overlap we have seen among context elements\nacross base types.\nAnother issue is the internal structure of a context. Cur-rently\na context is a property set of context elements,\nwhere each element is a name-value pair. Context elements\nalso have kinds (such as presentation and\nsubstructure), which allows grouping context elements in\nuser interfaces. We are considering giving contexts an\nexplicit hierarchical structure. There are several alternatives\nfor such an approach: Make a context a compound\nobject capable of holding sub-contexts, use of qualified\nnames (for example, format.font.fontsize), or employ a\nhierarchical namespace as in a directory structure. We do\nnot see great differences in these three alternatives. The\nadvantage of some kind of hierarchical structure, however\n, versus the current flat structure might come in the\ninterface between superimposed applications and the\ncontext agent. 
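(One way to get that benefit without abandoning the flat property set is to overlay qualified names on it, as in the format.font.fontsize example above. The helper below is only a sketch, and the element names in it are invented for illustration.)

```python
# Grouped access over a flat property set via qualified names (a sketch).
def subgroup(context: dict, prefix: str) -> dict:
    """Return the context elements whose qualified names start with `prefix`."""
    dotted = prefix + "."
    return {name: value for name, value in context.items()
            if name.startswith(dotted)}

ctx = {
    "format.font.name": "Times New Roman",
    "format.font.size": 12,
    "placement.line":   37,
    "content.text":     "Cheatgrass, Bromus tectorum, ...",
}

print(subgroup(ctx, "format.font"))
# {'format.font.name': 'Times New Roman', 'format.font.size': 12}
```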
Rather than the application asking for context elements individually (or for all context elements), it could ask for a particular subgroup of elements of interest.
A methodological issue related to context structure is how to coordinate the naming of context elements across multiple base types and multiple superimposed applications. There is no requirement currently that the "same" context element be named the same thing for different base types (or, in fact, in alternative context agents for the same base type). Even if the same name is used, the types of the associated values could be different. With an individual or small group writing context agents and superimposed applications, informal methods will work for consistency in naming. However, a more structured process will be needed at the point that context agents and superimposed applications are being produced by different organizations.
Related Work
Memex and Evolutionary List File were visionary proposals for organizing information from diverse sources [Bush 1945, Nelson 1965]. Hypertext and compound document models are two classes of systems that attempt to realize these visions. Hypertext systems are helpful in preparing information for non-linear media. Although designed to help organize information, they tend to be limited in the types of source, granularity of information, and location of information that can be organized. For example, NoteCards and Dexter both require information consulted to be stored in a proprietary database [Halasz 1987, 1994]. Intermedia can address base information only at sub-document granularity [Yankelovich 1988]. Hypertext systems typically do not support retrieval of contextual information from sources.
Compound document systems are helpful in preparing information for linear media (such as paper). They can address base information at both document and sub-document granularity, but they tend to constrain display models of tools developed. For example, OLE 2 requires rectangular display areas [COM]. Like SPARCE (and unlike hypertext systems), compound document systems provide architectural support for building applications. Compound document systems support only retrieval of contents. Information sources decide the content, its format, and geometry.
Table 3 provides a brief comparison of SPARCE with hypertext and compound document systems. NoteCards, Intermedia, and Dexter are hypertext systems. OpenDoc [Apple 1994] and OLE 2 are compound document systems.
System: NoteCards / Intermedia / Dexter / OpenDoc / OLE 2 / SPARCE
Base types: 2 / 3 / Any / Any / Any / Any
Base location: Custom / Files / Custom / Any / Any / Any
Base granularity: Whole / Part / Both / Both / Both / Both
Context kinds: None / None / None / Content / Content / Many
Table 3: SPARCE Compared with Related Systems
Multivalent documents [Phelps 2000b] allow multiple behaviours to be superimposed on to a single base document using an abstraction similar to the context-agent interface in SPARCE. The system uses contents of a region of interest (and its surrounding), but only to address that region [Phelps 2000a].
In the area related to dynamism and representation of context, OLE Automation [Microsoft 1996] provides an interesting comparison to our approach. An OLE automation object exposes an interface to the type information object (ITypeInfo) that corresponds to itself.
The type\ninformation object is resident in a type library (that contains\ntype information for possibly many automation\nobject types). Changing type information (deleting members\nor adding new members) requires creation of a new\ntype-information object and a new type library. Although\nthe framework allows each instance of an object type to\nreturn a different type-information object, the requirement\nto create new type information and a type library\nmakes it impractical to do so. Consequently, type information\nof an OLE automation object tends to include all\npossible elements, without regard to whether those members\nare relevant in a given situation. For example, the\ntype information for a Range object of a MS Word document\ncontains over 30 members [Microsoft]. The value of\na member of scalar type that is not applicable for a given\nRange object will be equivalent of NULL (and the inapplicable\ncollection-type members will be empty). In\nSPARCE, the context of a mark contains only those elements\nthat apply to the mark.\nIt might seem that links in OLE 2 compound documents\nprovide similar functionality to marks. An OLE 2 compound\ndocument supports only retrieval of contents from\nlinks. It does not provide a mechanism from within a\ncompound document to obtain the OLE automation object\nthat corresponds to a link (even when the link source\ndefines an automation object corresponding to the region\nthe link represents). As a consequence, context-like information\nabout the linked region cannot be accessed via\nthe link directly. For example, linking a selection S from\nthe main body of MS Word document D1 into Word\ndocument D2 makes D2 a compound document. However\n, the Range object for S (which is available in D1) is\nnot accessible through the link in D2. A user needing\nmore information about S must navigate to the source\ndocument D1. SPARCE not only provides the ability to\nlink information via marks, it also provides access to\ncontext of the mark through the mark itself.\nDiscussion and Future Work\nOne way to view our work is that we have extended the\nstandard modelling building blocks (integers, floats,\ndates, strings, etc.) with a new primitive--mark--that\nencapsulates an information element from an external\nsource. A conceptual model (extended to be a superimposed\nmodel) can permit the use of marks in any of its\nstructuring constructs (tuples, relationships, attributes,\nentities, etc.), without regard to the complexities of the\nunderlying element. Support for context allows superimposed\napplications to extract information from that\nelement and its surrounding elements or the information\nsource in a controlled manner, to augment what is explicitly\nstored in the superimposed model.\nAs a means to provide \"new models for old data,\" our\napproach is quite different from data integration approaches\nsuch as mediators and data warehouses. Such\napproaches seek to provide an integrated view through a\nglobal schema describing base information that faithfully\nreflects the structure of the base source. In our work, we\nare exploring the use of selected base information\nelements (using marks). Note that the selection of marks\nis often performed manually, by a domain expert (e.g., a\nclinician or a USFS scientist), for a specific purpose (e.g.,\nto treat a patient or prepare a RID). We have no\nrequirement to represent the structure or relationships\npresent within the base layer. 
Rather, we rely on the\noriginal application to provide interpretation for a mark\nand, if appropriate, to describe any relationships among\nmarks. Standard integration approaches describe information\nfrom various sources and expect the mediator to\nbe responsible for its interpretation.\n79\nThe superimposed layer, by definition, allows the user to\nmix marks with additional information that may not exist\nin any of the base information sources. Such information\nmay augment the base layer, e.g., by making implicit information\nexplicit (e.g., \"this issue relates only to\nAlternative A\") or by providing commentary. Another\nuse of superimposed information is to link related\ninformation from multiple sources, e.g., by placing marks\nin the same group or by explicitly linking between\ninformation elements in two sources. Finally, the\nsuperimposed approach permits reinterpretations that are\nmuch less structured than the original. For example, base\ninformation elements can be grouped or linked without\nhaving to observe any type constraints imposed in the\noriginal source.\nExploring different representations of context and ways\nto reconcile context definition from different context\nagents is one area of our future work. Understanding the\nneeds of new superimposed conceptual models (other\nthan those we have described), and exploiting contexts to\nsuperimpose richer conceptual models is another area of\nour interest. A natural application of superimposed conceptual\nmodels would be to create means of querying\njointly over superimposed and base information. We are\nalso interested in superimposed applications that facilitate\n\"schema later\" organization of diverse information. That\nis, a user can start accumulating and arranging information\nitems of interest, and--as he or she starts forming a\nmental conceptual model--incrementally define a\nsuperimposed model that reflects it.\nAcknowledgements\nThis work has been supported by US NSF grants IIS\n9817492 and IIS 0086002. We thank John Davis for\nhelping us understand the USFS appeal process. We also\nthank the anonymous reviewers for their comments.\n\nReferences\nAcrobat SDK: Acrobat Software Development Kit,\nAdobe Systems Incorporated.\nApple (1994): The OpenDoc Technical Summary. Apple\nWorld Wide Developers Conference Technologies CD,\nSan Jose; CA.\nAsh, J., Gorman P., Lavelle, M., Lyman J., Delcambre,\nL., Maier, D., Bowers, S. and Weaver, M. (2001):\nBundles: Meeting Clinical Information Needs. Bulletin\nof the Medical Library Association 89(3):294-296.\nBowers, S., Delcambre, L. and Maier, D. (2002): Superimposed\nSchematics: Introducing E-R Structure for In-Situ\nInformation Selections. Proc. ER 2002, pp 90\n104, Springer LNCS 2503.\nBowers, S. and Delcambre, L. (2003): The Uni-Level\nDescription: A Uniform Framework for Representing\nInformation in Multiple Data Models. Proc. of the 22\nnd\n\nInternational Conference on Conceptual Modeling (ER\n2003), Chicago, IL, October 2003.\nBush, V. (1945): As We May Think. The Atlantic\nMonthly; 1945; July.\nDelcambre, L., Maier, D., Bowers, S., Weaver, M., Deng,\nL., Gorman, P., Ash, J., Lavelle, M. and Lyman, J.\n(2001): Bundles in Captivity: An Application of Superimposed\nInformation. Proc. ICDE 2001, Heidelberg\n, Germany, pp 111-120.\nGorman, P., Ash, J., Lavelle, M., Lyman, J., Delcambre,\nL. and Maier, D. (2000): Bundles in the wild: Managing\ninformation to solve problems and maintain situation\nawareness. Lib. Trends 2000 49(2):266-289.\nHalasz, F.G., Moran, T.P. 
and Trigg, R.H. (1987):\nNoteCards in a Nutshell. Proc. ACM CHI+GI Conference\n, New York, NY, pp 45-52, ACM Press.\nHalasz, F.G. and Schwartz, F. (1994): The Dexter Hypertext\nReference Model. Communications of the ACM,\n37(2):30-39, ACM Press.\nMaier, D. and Delcambre, L. (1999): Superimposed Information\nfor the Internet. Proc. WebDB 1999 (informal\n), Philadelphia, PA, pp 1-9.\nCOM: The Component Object Model Specification, Microsoft\nCorporation.\nMicrosoft Corporation. (1996): OLE Automation Pro-", "keywords": "superimposed information;SPARCE .;excerpts;Conceptual modelling;software architecture;context"} {"name": "158", "title": "Quality-of-Service in IP Services over Bluetooth Ad-Hoc Networks", "abstract": "Along with the development of multimedia and wireless networking technologies, mobile multimedia applications are playing more important roles in information access. Quality of Service (QoS) is a critical issue in providing guaranteed service in a low bandwidth wireless environment. To provide Bluetooth-IP services with differentiated quality requirements, a QoS-centric cascading mechanism is proposed in this paper. This innovative mechanism, composed of intra-piconet resource allocation, inter-piconet handoff and Bluetooth-IP access modules, is based on the Bluetooth Network Encapsulation Protocol (BNEP) operation scenario. From our simulations the handoff connection time for a Bluetooth device is up to 11.84 s and the maximum average transmission delay is up to 4e-05 s when seven devices join a piconet simultaneously. Increasing the queue length for the Bluetooth-IP access system will decrease the traffic loss rate by 0.02 per 1000 IP packets at the expense of a small delay performance.", "fulltext": "Introduction\nWireless communications have evolved rapidly over the past\nfew years. Much attention has been given to research and\ndevelopment in wireless networking and personal mobile\ncomputing [10,17]. The number of computing and telecommunications\ndevices is increasing and consequently, portable\ncomputing and communications devices like cellular phones,\npersonal digital assistants, tablet PCs and home appliances\nare used widely. Wireless communication technologies will\noffer the subscriber greater flexibility and capability than ever\nbefore [14].\nIn February 1998, mobile telephony and computing leaders\nEricsson, Nokia, IBM, Toshiba, and Intel formed a\nSpecial Interest Group (SIG) to create a standard radio interface\nnamed Bluetooth [13]. The main aim of Bluetooth\nhas been the development of a wireless replacement for cables\nbetween electronic devices via a universal radio link in\nthe globally available and unlicensed 2.4 GHz Industrial Scientific\nand Medical (ISM) frequency band [9]. Bluetooth\ntechnologies have the potential to ensure that the best services\n, system resources and quality are delivered and used efficiently\n. However, global services will embrace all types of\nnetworks. Therefore, bluetooth-based service networks will\ninterconnect with IPv4/v6 existing networks to provide wide\narea network connectivity and Internet access to and between,\nindividuals and devices [7]. 
In Reference [2], the BLUEPAC (BLUEtooth Public ACcess) concepts presented ideas for enabling mobile Bluetooth devices to access local area networks in public areas, such as airports, train stations and supermarkets.
The Bluetooth specification defined how Bluetooth-enabled devices (BT) can access IP network services using the IETF Point-to-Point Protocol (PPP) and the Bluetooth Network Encapsulation Protocol (BNEP) [12,19,20]. By mapping IP addresses onto the corresponding BT addresses (BD_ADDR), common access across networks is enabled [3]. This means that devices from different networks are allowed to discover and use one another's services without the need for service translation or user interaction. To support communications between all Bluetooth-based home appliances and the existing IP world, IPv6 over Bluetooth (6overBT) technology was proposed [8]. The 6overBT technology suggested that no additional link layer or encapsulation protocol headers be used. However, the development of 6overBT technology is still in progress. The BNEP protocol therefore serves as the key technology in our research.
Along with the development of applications and wireless networking technologies, mobile multimedia applications are playing more important roles in information access [21]. To provide responsive multimedia services (high QoS) in a Bluetooth-IP mobile environment, a pre-requisite for our discussion is seamless data transmission. A cascading system with a fair resource allocation scheme, a seamless handoff strategy, and a transparent bridging system that provides a way of integrating IP networks and Bluetooth-based service networks to relay multimedia applications within residential and enterprise networks is thus inevitable [6].
The rest of this paper is organized as follows. The following section describes Bluetooth background information, including the piconet, the scatternet, and the IP over Bluetooth service architecture. Section 3 presents the proposed QoS-centric cascading mechanism, including the intra-piconet resource allocation, inter-piconet handoff, and Bluetooth-IP access system. The simulation model and performance analysis of the queue length, loss rate, and average delay are introduced in section 4. Section 5 presents our concluding remarks.
Bluetooth-IP services
Figure 1 shows a Bluetooth-IP service network environment. When a BT user wants to receive a networked video stored on a remote video server, the BT (maybe a slave in a picocell) will initiate a connection procedure with the picocell master. The master initiates a connection procedure with the video server through the Bluetooth-IP access system. During the connection state, the video stream will be fed through the access system and master to the BT user (the dotted line in figure 1).
2.1 Piconet
When two BTs establish a connection, one BT will act as master and the other as the slave. More BTs can join the piconet. A single piconet can accommodate up to seven active slaves. If a slave does not need to participate in the channel, it should still be frequency-hop synchronized and switch to a low-power Park mode. The master can also request that a slave enter the Park mode so that the master can communicate with more than seven slave BTs.
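(The piconet membership rules just described, one master, at most seven active slaves, and further slaves parked, can be captured in a few lines. The sketch below only illustrates those constraints; class and method names are ours, and timing and frequency hopping are ignored.)

```python
# Minimal model of piconet membership (one master, <= 7 active slaves,
# extra slaves parked).  Purely illustrative.
MAX_ACTIVE_SLAVES = 7

class Piconet:
    def __init__(self, master: str):
        self.master = master
        self.active = []   # slaves currently participating in the channel
        self.parked = []   # synchronized but inactive (low-power Park mode)

    def join(self, slave: str) -> str:
        if len(self.active) < MAX_ACTIVE_SLAVES:
            self.active.append(slave)
            return "active"
        self.parked.append(slave)
        return "parked"

    def park(self, slave: str) -> None:
        """Master asks an active slave to enter Park mode, freeing a slot."""
        self.active.remove(slave)
        self.parked.append(slave)

p = Piconet(master="AP")
states = [p.join(f"BT{i}") for i in range(9)]   # the last two joiners are parked
```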
The master determines the frequency-hop sequence, the timing and the scheduling of all packets in the piconet.
Figure 1. Bluetooth-IP service network environment.
Figure 2. An inter-piconet node in the scatternet.
Within a piconet, the master initiates the connection procedure, although the application may necessitate that the master and slave roles be exchanged. For instance, such an exchange is necessary when a BT receives network services through Bluetooth-IP access systems. In this circumstance, the access system provides an IP service that may be used by many other BTs. This situation requires that the access system be the master of the piconet and the other BTs act as slaves. However, when the device is initially activated and looks for an access system, it may be the device initiating the connection. This will make the device the master and the access system the slave [1].
2.2 Scatternet
Several piconets can be established and linked together to form an ad-hoc network. This is called a scatternet. A BT can participate in two or more overlaying piconets by applying time multiplexing. To participate on the proper channel, the BT should use the associated master device address and proper clock offset to obtain the correct phase. A BT can act as a slave in several piconets, but only as a master in a single piconet, because two piconets with the same master are synchronous and use the same hopping sequence. This synchronization makes them one and the same piconet.
Time multiplexing must be used to switch between piconets. In figure 2, an inter-piconet node is capable of time-sharing between multiple piconets. This allows the traffic to flow within and between the piconets [18]. In the case of data links only, a BT can request to enter the Hold or Park mode in the current piconet, during which time it may join another piconet by just changing the channel parameters. BTs in the Sniff mode may have sufficient time to visit another piconet in between the Sniff slots. If audio links are established, other piconets can only be visited in the non-reserved slots.
2.3 IP networking
The LAN Access profile defines IP service access using PPP over RFCOMM. TCP/IP runs on the PPP protocol. BTs can receive IP services using the PPP protocol [20]. When a BT wants to receive IP service, it will find a LAN Access Point (LAP) within radio range through inquiry and paging. After the data links have been set up, the LMP (Link Manager Protocol) will process the master/slave switch and the L2CAP/RFCOMM/PPP connection will then be established. A suitable IP address is negotiated between devices in the PPP layer. The BT can forward IP packets through the LAP.
Encapsulating an Ethernet packet inside a PPP packet is not an efficient solution. Moreover, PPP is not sufficient for ad-hoc networks that contain multiple wireless hops. The best way of providing networking would be to run Ethernet over the L2CAP layer. The Bluetooth Network Encapsulation Protocol (BNEP) was pursued by the Bluetooth SIG PAN working group to provide an Ethernet-like interface to IP services [19], as depicted in figure 3.
QoS-centric cascading mechanism
From figure 1, a BT that accesses multimedia services on the Internet may do so through a piconet or scatternet. QoS is critical when transmitting across different network segments. To provide Bluetooth-IP services with differentiated quality requirements, a QoS-centric cascading mechanism is proposed
To\nprovide Bluetooth-IP services with differentiated quality requirements\n, a QoS-centric cascading mechanism is proposed\nFigure 3. BNEP protocol stack.\nFigure 4. The proposed QoS-centric cascading modules.\nin our research to tunnel the guaranteed applications. The operational\nmodules are illustrated in figure 4. The innovative\nmechanism consists of three modules: intra-piconet resource\nallocation, inter-piconet handoff and Bluetooth-IP access system\n. These modules were developed based on the BNEP operation\nscenarios.\n3.1. Intra-piconet resource allocation\nTwo service types: Synchronous Connection Oriented (SCO)\nand Asynchronous Connectionless Link (ACL) were defined\nin the Bluetooth service environment. Through the QoS setup,\nthe ACL link can be configured to provide QoS requirements.\nThe ACL link can be configured with the Flush Timeout setting\n, which prevents re-transmission when it is no longer\nuseful. Acknowledgement can be received within 1.25 ms.\nThis makes the delay small and it is possible to perform re-transmission\nfor real-time applications. The ACL link also\nsupports variable and asymmetrical bandwidth requirement\napplications.\nCurrently, the IP QoS architecture is based purely on\nIP-layer decision making, packet buffering and scheduling\nthrough a single link-layer service access point. The mechanism\nin the link layer has better understanding of the communications\nmedium status. However, simplicity has been\nimportant design objective for the Bluetooth interface and the\nnumber of IP-based protocols is becoming increasingly more\ncomplex. As depicted in figure 5, QoS architecture at the\nnetwork layer such as differentiated and integrated services\nprovides different services to applications. These services at\n702\nW.-C. CHAN ET AL.\nFigure 5. Bluetooth IP QoS mechanism.\nFigure 6. General QoS framework.\nthe high layer are sufficient depending on the particular scenario\n. With shared bandwidth and a re-transmission scheme,\nthe Host Controller (HC) buffering will invoke delays. The\nHC buffer size can be decreased to reduce the buffer delay.\nIn bluetooth-based service layer, the L2CAP layer informs\nthe remote side of the non-default parameters and sends a\nConfiguration Request. The remote L2CAP-side responds\nto the Configuration Response that agrees or disagrees with\nthese requirements. If the Configuration Response disagrees,\nthe local side sends other parameters to re-negotiate or stop\nthe connection. The Link Manager uses the poll interval and\nrepetitions for broadcast packets to support QoS. The poll interval\n, which is defined as the maximum time between subsequent\ntransmissions from the master to a particular slave on\nthe ACL link, is used to support bandwidth allocation and latency\ncontrol. Figure 6 depicts the general framework, which\ndefines the basic functions required to support QoS.\nIn figure 6, the traffic and QoS requirements for the QoS\nflow from the high layer sends the request to the Resource\nRequester (RR). Based on this request, the RR generates a resource\nrequest to the Resource Manager (RM). When the RM\naccepts the request the RR configures parameters to the local\nResource Allocation (RA) entity. The RA actually reserves\nresources to satisfy the QoS requirements. The QoS is satisfied\nby the application that receives the appropriate resource.\nIn our scheme, bluetooth-based operation identifies the following\nfunctions and procedures to determine the amount\nof resources assigned to a traffic flow. 
A polling algorithm determines which slave is polled next, and bandwidth is assigned to that slave. The slave uses the air-interface scheduler to determine which data to send when it is polled. An inter-piconet scheduling algorithm is used by the inter-piconet node to efficiently control the traffic flow between two piconets. Bluetooth also uses the Flush Timeout setting to determine the maximum delay involved with retransmissions. The Link Manager module in the master selects the baseband packet type for transmission in single, three or five time slots [4].

3.2. Inter-piconet handoff

The inter-piconet environment faces many challenges. First, the formation of Bluetooth networks is spontaneous, and the problem of scatternet formation has not been defined in the Bluetooth specification [16]. Some research has addressed these issues in the formation of an efficient scatternet [5,22]. Data forwarded between nodes must be sent via the master, and sometimes this traffic passes through multiple hops in the scatternet, so efficient routing protocols are needed for Bluetooth. The inter-piconet node acts as the bridge through which a piconet communicates with other piconets, and smarter traffic control is needed to coordinate with these masters. Different data rates exist on each link in different scenarios, so the inter-piconet node becomes the bottleneck of the scatternet.

In a piconet, the master controls all of the slaves in the piconet. The inter-piconet node joins two or more piconets, but it is only active in one piconet at a time. To efficiently move traffic between different piconets, scheduling is needed to coordinate the inter-piconet node and the masters. In Reference [11], inter-piconet scheduling (IPS) and intra-piconet scheduling (IPRS) were presented; they interact with one another to provide an efficient scatternet scheduler, as illustrated in figure 2. The IPS and IPRS must coordinate to schedule when the inter-piconet node belongs to which piconet, and how and when to transfer data packets between the different masters.

The Bluetooth connection process includes two steps: the inquiry process and the paging process. This causes the bottleneck in the handoff. Two situations were discussed in BLUEPAC. When the Access Point is the master, the mobile node joins the piconet as a slave. The Access Point can then efficiently control the traffic to the Internet. However, the disadvantage is that the Access Point must periodically enter the inquiry stage to find newly arrived mobile nodes, which interrupts the Access Point's packet transfers to the Internet. In the situation in which the BT is the master, the Access Point involves itself in multiple piconets. The BT can actively inquire for the Access Point when it wants to connect to it. However, traffic control becomes difficult and complex when the Access Point must switch between the various piconets.

The scheduler still does not support seamless handoffs for real-time applications. To solve this problem, reference [15] proposes the Next Hop Handoff Protocol (NHHP) to support fast handoffs. The major focus of this strategy is on reducing the connection time. A scheme that passes the inquiry information to the next Access Point is used to avoid wasting time in the inquiry stage.
The disadvantage is that the Access Points are divided into two categories: Entry Points, which constantly make inquiries for newly arriving BTs and pass their information on, and Access Points in the internal regions, which are responsible for the handoff. If a newly arrived BT is initiated in an internal region, the scheme does not work.

To resolve the above problem, a fast handoff scheme was proposed. This scheme assumes that the Bluetooth service environment is a micro-cellular network architecture; however, it can also be adapted to Bluetooth as a macro-cellular network. In line with the fast-handoff goal, the major focus is on reducing the connection procedure, which causes significant delay. We assumed the following conditions: the Access Point and the mobile BT periodically scan for page attempts and inquiry requests. To obtain seamless and efficient handoff support, the RF range of an Access Point should cover the nearby Access Points. The neighborhood set records all Access Point locations.

3.2.1. Connection procedure

As depicted in figure 7, when the mobile BT accesses the Internet it initiates an inquiry to the Access Point and makes a connection. The Access Point passes the BT address and clock information to the nearest Access Point according to the neighborhood set. The nearest Access Point uses this information to page the BT and form the piconets. These piconets form a scatternet and the BT becomes the inter-piconet node between them. The BT depends on the received signal strength indicator (RSSI) to determine which Access Point to use to access the Internet. The BT is connected to a dedicated Access Point only in the connection state; the remaining piconets are all in the Hold state.

3.2.2. Handoff

The BT periodically monitors the RSSI and the bit error rate. When the RSSI decreases to the lower threshold a handoff may take place. To know where the mobile BT is moving, the BT detects which RSSI is becoming stronger. It then informs the Access Points and the Foreign Agent that a handoff is imminent (figure 8(A)). The BT leaves the Hold state for the connection state in the piconet that contains the upcoming Access Point (AP0 in figure 8(B)). The routing path also changes to a new path. Additional caches may be needed in the Access Point to avoid packet losses. The new nearest Access Point (APa in figure 8(C)), identified according to the neighborhood set, is notified that the mobile BT is within range and receives the BT information. It begins to page the BT and enters the Hold state with the BT. The original connection state also turns into the Hold state in the piconet that contains the original Access Point (AP1 in figure 8(D)).

When the BT does not seek access service from the Access Point, it should inform the Access Point that it no longer wants a connection. A connection could also break down without prior warning. In the Bluetooth specification, both the master and the slave use the link supervision timeout to detect such a loss. The supervision timeout period is chosen so that its value is longer than the Hold periods.

For simplicity of discussion we assumed that the Bluetooth AP is the application sender and divided the architecture into two parts: the wired part, from the correspondent node (CN) to the Bluetooth AP, and the wireless part, from the Bluetooth AP to the mobile BT device. The wired part is the same as the current general mechanism. We will only discuss the mechanism that combines the wireless part with our handoff mechanism in the Bluetooth environment.
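A minimal sketch of the RSSI-driven handoff decision described in section 3.2.2 follows. The threshold, the hysteresis margin and the function names are illustrative assumptions, not values taken from the paper.

    def handoff_decision(current_ap, rssi_dbm, lower_threshold=-80.0, hysteresis=3.0):
        """Return the AP to hand off to, or None to stay with the current AP.

        rssi_dbm maps AP id -> latest signal strength measured by the BT.
        A handoff is only considered once the serving AP's RSSI has dropped
        to the lower threshold; the BT then picks the neighbour whose RSSI
        is stronger than the serving AP's by at least the hysteresis margin."""
        serving = rssi_dbm[current_ap]
        if serving > lower_threshold:
            return None                          # signal still good, no handoff
        neighbours = {ap: s for ap, s in rssi_dbm.items() if ap != current_ap}
        if not neighbours:
            return None
        best = max(neighbours, key=neighbours.get)
        if neighbours[best] >= serving + hysteresis:
            return best                          # notify APs/FA, swap Hold/connection states
        return None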
After making a connection and switching roles as mentioned earlier, the Bluetooth AP becomes a master. As illustrated in figure 9, the Bluetooth AP1 sends an active PATH message to the BT (figure 9(1)). After the BT receives the PATH message, if the BT needs RSVP support, it sends a resource reservation request with a RESV message to AP1 (figure 9(2)). When the RSVP message carries the traffic specification, the higher layer sends the traffic and QoS requirements for the QoS flow as a request to the Resource Requester (RR). The lower Bluetooth layers will thus coordinate with one another.

Figure 7. Connection procedures.
Figure 8. Handoff procedures.
Figure 9. RSVP messages for Bluetooth resource reservation.

Once the Bluetooth AP1 accepts the request, the reservation along the flow between AP1 and the BT is made. After this point, the Bluetooth AP1 must also have reservations in the neighboring APs. The resource reservation request is the same as an active reservation. The current AP1 sends a Passive PATH message to the neighboring AP2. The AP2 responds with a Passive RESV message and reserves the resources that the BT may need. Because Bluetooth can have only seven active slaves in a piconet at the same time, the resources must be used efficiently. One way is to add more Bluetooth devices to an AP. This can be easily achieved by modifying the application layer.

As discussed before, to support seamless handoffs, information about the BT is sent to AP2 after the RSVP process is performed. The Hold state is maintained between the BT and AP2. However, if the BT does not need a QoS guarantee, the QoS mechanism is not added. After end-to-end QoS has been supported using the RSVP protocol, packet classification and scheduling can be used to control the traffic.

Figure 10. The protocol stack for Bluetooth-IP interworking.

3.3. Bluetooth-IP access system

The difference between existing Bluetooth piconets and IP LANs lies in the communication protocol stacks illustrated in figure 10. As figure 10 shows, these differences are found in the lower layers of the OSI seven-layer model. The lower layers are responsible for connection and addressing. We therefore focused on the connection management and address resolution issues in our research.

The access function allows connections to be established without requiring any particular knowledge or interaction. The Bluetooth-IP access system plays the role of bridging/routing multimedia traffic between the various LANs and piconets. The operational scenario is illustrated in figure 11. When different piconet devices are connected directly to the access system, the access system functions as a bridge (piconet Master and Slave roles). If a LAN host (piconet BT) communicates with a piconet BT (LAN host) through the access system, then, because of the different protocol stacks of the piconet and LAN networks, two issues, addressing and connection, must be resolved in the design.

Figure 11. Bluetooth-IP interworking operational scenario.

3.3.1. Address resolution

To achieve the interconnection function in a Bluetooth-IP environment, the access system must belong to both the piconet and the LAN as a member. Thus, two different protocol stacks must be combined to form a new communication protocol stack suitable for a routing server. The combined protocol stack is shown in figure 10.
Using the protocol stack specifications, a scenario for addressing is identified as follows:

Step 1: Each LAN host is assigned an IP address. Each host thus possesses two addresses: an IP address and a MAC address.

Step 2: Each piconet BT acquires two addresses: a BT address (BD_ADDR) and an IP address.

Step 3: A routing table must be built for interconnection in the access system. A lookup on the destination address is needed to find the corresponding outbound BD_ADDR or MAC address to which the packet must be forwarded.

3.3.2. Connection management

Because the existing LAN is a packet-switching service and BT connections are made on an ad hoc basis, interconnection is very difficult. To solve this problem, a mechanism based on IP services over Bluetooth was proposed in the Bluetooth specification. The system is as follows.

Step 1: Bluetooth adapters and an Ethernet card are embedded into a desktop computer. These adapters and the card are referred to as network attachment units (wired or radio). Each port in the interface is directly attached to a different network (see figure 12).

Step 2: The API of the Bluetooth adapter and the Ethernet packet driver are used to design an access system. The operational procedures for this system follow the scenario in figure 11.

Figure 12. The Bluetooth-IP access system.

4. Performance analysis

The Bluehoc toolkit was used to simulate the various scenarios in the Bluetooth-IP service environment. As depicted in figure 13, the data is dumped from the L2CA_DataWrite and L2CA_DataRead events in the connection state. The L2CA_DataWrite and L2CA_DataRead events are the upper-layer interface to the L2CAP layer. L2CA_DataRead is the event that requests transfer of received data from the L2CAP entity to the upper layer. L2CA_DataWrite is the event that requests data transfer from the upper layer to the L2CAP entity for transmission over an open channel.

Figure 13. L2CAP packet flow.

In the Bluehoc toolkit the connection procedures, such as inquiry and paging, are simulated according to the Bluetooth specifications. The master sends the QoS parameters, which depend on the application. The QoS parameters are then passed to the Deficit Round Robin (DRR) based scheduler, which determines whether the connection can be accepted by the LMP. The DRR-based scheduler finds the appropriate ACL link baseband packet type (DM1, DM3, DM5, DH1, DH3 or DH5) depending upon the application-level MTU and loss sensitivity. The simulated applications include packetized voice, Telnet and FTP.

4.2. Simulation results

Figure 14 shows the average delay for various numbers of slaves using packetized voice in a piconet. The voice application is real-time and would be sensitive to the loss of several consecutive small packets. Figure 15 shows the average delay for the mixed traffic source. Short-packet delays, such as those of Telnet applications, are significantly increased by the long packets in the DRR-based scheduler. An efficient and simple scheme that does not add to the Bluetooth load is therefore important.

4.2.1. Queue length analysis

The queue length analysis was based on the access system queue size. We observed the variations in queue length under specified traffic levels. In figure 16, the queue length of the traffic from the 100 M LAN to the 1 M piconet increases very fast. It reaches 50000 packets within 10 s.
The queue length for the traffic from the 10 M LAN to the 1 M piconet increases more smoothly, and the queue length of the traffic from the 1 M piconet to the 100 M or 10 M LANs is almost zero.

Figure 14. Average delay with voice services.
Figure 15. Average delay in the mixed traffic.
Figure 16. Queue length analysis.

4.2.2. Loss rate analysis

In the loss rate analysis the queue length was varied to observe the changes in the loss rate. In figure 17, when the queue length is smaller than 1000 packets, the loss rate is close to 0.9. When the queue length increases to 5000 packets, the loss rate decreases to 0.8. Therefore, increasing the queue length will decrease the traffic loss rate. The loss rate from the 100 M LAN to the 1 M piconet is a little higher than that from the 10 M LAN to the 1 M piconet because of the difference in LAN transport speed.

Figure 17. Loss rate analysis.
Figure 18. Throughput analysis.

4.2.3. Throughput analysis

From figure 18, increasing the queue length has no effect on improving the throughput. The throughput is smooth in both the LAN-to-piconet and piconet-to-LAN traffic.

4.2.4. Delay analysis

From figure 19, when the queue length is 500 packets, the delay is about 0.0005 seconds per bit. If the queue length increases to 1000 packets, the delay almost doubles. When the queue length reaches 5000 packets, the delay is close to 0.003 seconds per bit. The transfer delay shows no obvious change when the queue length increases to 10000 packets.

Figure 19. Delay analysis.

5. Conclusions

In a wireless environment the provision of QoS guarantees becomes more important. The frequent mobility of a host increases service disruption in real-time applications. Even though RSVP enhances the resource reservation capability and allows a specific QoS to be requested from the network, these mechanisms do not provide a strong enough QoS guarantee to prevent service disruption during handoff. In this paper a cascading mechanism for QoS guarantees in a Bluetooth-IP environment was proposed, together with a fast and efficient handoff scheme that supports BT roaming between different Access Points. Concepts for the mobile RSVP issues in Bluetooth networks were presented with our fast handoff mechanism. The Bluetooth-IP access system can be implemented using available technology such as Network Address Translation (NAT) and the Linux Bluetooth stack to connect Bluetooth piconets and the LAN. In our simulations Bluetooth required a long time to process the inquiry and paging procedures; the results show that the connection time is up to 11.84 sec when seven slaves join a piconet at the same time. Moreover, the access system queue length increases by about 10000 packets per second in a 100 Mbps LAN and about 1000 packets per second in a 10 Mbps LAN when the traffic load is 80%. In the loss rate analysis, the loss rate was close to 90% when the queue length was less than 1000 packets; however, when the queue length increased to 10000 packets the loss rate decreased to 70%. In the delay analysis, the delay was about 0.000045 seconds per bit when the queue length was 500 packets. The delay doubles when the queue length doubles. However, when the queue length is more than 5000 packets the delay shows no obvious variance.

Acknowledgement

This paper is a partial result of project no.
NSC-90-2218-E-259\n-006 conducted by National Dong Hwa University under\nthe sponsorship of National Science Council, ROC.\nReferences\n[1] M. Albrecht, M. Frank, P. Martini, M. Schetelig, A. Vilavaara and A.\nWenzel, IP service over bluetooth: Leading the way to a new mobility,\nin: Proceedings of the International Conference on Local Computer\nNetworks (1999) pp. 143154.\n[2] S. Baatz, M. Frank, R. Gopffarth, D. Kassatkine, P. Martini, M.\nSchetelig and A. Vilavaara, Handoff support for mobility with IP over\nbluetooth, in: Proceedings of the 25th Annual IEEE Conference on Local\nComputer Networks, USA (2000) pp. 143154.\n[3] J.L. Chen and K.C. Yen, Transparent bridging support for bluetooth-IP\nservice interworking, International Journal of Network Management 12\n(2002) 379386.\n[4] J.L. Chen, A.P. Shu, H.W. Tzeng and P.T. Lin, Fair scheduling for guaranteed\nservices on personal area networks, in: Proceedings of 2002\nInternational Conference on Communications, Circuits and Systems,\nChina (2002) pp. 440444.\n[5] L. Ching and K.Y. Siu, A bluetooth scatternet formation algorithm, in:\nProceedings of IEEE Global Telecommunications Conference, Vol. 5\n(2001) pp. 28642869.\n[6] A. Das, A. Ghose, A. Razdan, H. Saran and R. Shorey, Enhancing performance\nof asynchronous data traffic over the bluetooth wireless ad-hoc\nnetwork, in: Proceedings of the IEEE INFOCOM (2001) pp. 591\n600.\n[7] D. Famolari and P. Agrawal, Architecture and performance of an em-bedded\nIP bluetooth personal area network, in: Proceedings of the International\nConference on Personal Wireless Communications, India\n(2000) pp. 7579.\n[8] M. Frank, R. Goepffarth, W. Hansmann and U. Mueller, Transmission\nof native IPv6 over bluetooth, http://www.ispras.ru/~ipv6/\ndocs/draft-hansmann-6overbt-00.txt\n[9] C. Haartsen and S. Mattisson, Bluetooth a new low-power radio interface\nproviding short-range connectivity, Proceedings of the IEEE\n88(10) (2000) 16511661.\n[10] G. Ivano, D. Paolo and F. Paolo, The role of Internet technology in\nfuture mobile data systems, IEEE Communications Magazine 38(11)\n(2000) 6873.\n[11] P. Johansson, R. Kapoor, M. Kazantzidis and M. Gerla, Rendezvous\nscheduling in bluetooth scatternets, in: Proceedings of IEEE International\nConference on Communications, Vol. 1 (2002) pp. 318324.\n[12] D.J.Y. Lee and W.C.Y. Lee, Ricocheting bluetooth, in: Proceedings of\nthe 2nd International Conference on Microwave and Millimeter Wave\nTechnology (2000) pp. 432435.\n[13] D.G. Leeper, A long-term view of short-range wireless, IEEE Computer\n34(6) (2001) 3944.\n[14] Y.B. Lin and I. Chlamtac, Wireless and Mobile Network Architectures\n(Wiley, 2000).\n[15] I. Mahadevan and K.M. Sivalingam, An architecture and experimental\nresults for quality if service in mobile networks using RSVP and CBQ,\nACM/Baltzer Wireless Networks Journal 6(3) (2000) 221234.\n[16] B. Raman, P. Bhagwat and S. Seshan, Arguments for cross-layer opti-mizations\nin Bluetooth scatternets, in: Proceedings of 2001 Symposium\non Applications and the Internet (2001) pp. 176184.\n[17] T.S. Rappaport, Wireless Communications Principles and Practice,\n2nd ed. (2002).\n[18] T. Salonidis, P. Bhagwat, L. Tassiulas and R. LaMaire, Distributed\ntopology construction of bluetooth personal area networks, in: Proceedings\nof the IEEE INFOCOM (2001) pp. 
15771586.\n[19] The Bluetooth Special Interest Group, Bluetooth Network Encapsulation\nProtocol Specification (2001).\n[20] The Bluetooth Special Interest Group, Documentation available at\nhttp://www.bluetooth.com/techn/index.asp\n[21] The Bluetooth Special Interest Group,\nQuality of service in\nbluetooth networking, http://ing.ctit.utwente.nl/WU4/\nDocuments/Bluetooth_QoS_ING_A_part_I.pdf\n[22] V. Zaruba, S. Basagni and I. Chlamtac, Bluetrees-scatternet formation\nto enable bluetooth-based ad hoc networks, in: Proceedings of IEEE\nInternational Conference on Communications, Vol. 1 (2001) pp. 273\n277.\nQUALITY-OF-SERVICE IN IP SERVICES\n709\nWah-Chun Chan received the Ph.D. degree from\nUniversity of British Columbia in 1965. He is cur-rently\na Visiting Professor in the Department of\nComputer Science at National Chiao Tung University\n. Dr. Chan's research interest has been in the areas\nof queueing theory and telecommunication networks\n.\nResearch on telecommunication networks\nhas been in the development of models for the performance\nanalysis of computer communication networks\n.\nJiann-Liang Chen received the Ph.D. degree in\nelectrical engineering from National Taiwan University\n, Taipei, Taiwan in 1989. Since August 1997,\nhe has been with the Department of Computer Science\nand Information Engineering of National Dong\nHwa University, where he is a Professor now. His\ncurrent research interests are directed at cellular mobility\nmanagement and personal communication systems", "keywords": "handoff;quality of service;Bluetooth-IP access system;BNEP protocol;resource allocation"} {"name": "159", "title": "Web Question Answering: Is More Always Better?", "abstract": "This paper describes a question answering system that is designed to capitalize on the tremendous amount of data that is now available online. Most question answering systems use a wide variety of linguistic resources. We focus instead on the redundancy available in large corpora as an important resource. We use this redundancy to simplify the query rewrites that we need to use, and to support answer mining from returned snippets. Our system performs quite well given the simplicity of the techniques being utilized. Experimental results show that question answering accuracy can be greatly improved by analyzing more and more matching passages. Simple passage ranking and n-gram extraction techniques work well in our system making it efficient to use with many backend retrieval engines.", "fulltext": "INTRODUCTION\nQuestion answering has recently received attention from the\ninformation retrieval, information extraction, machine learning,\nand natural language processing communities [1][3][19][20] The\ngoal of a question answering system is to retrieve `answers' to\nquestions rather than full documents or even best-matching\npassages as most information retrieval systems currently do. The\nTREC Question Answering Track which has motivated much of\nthe recent work in the field focuses on fact-based, short-answer\nquestions such as \"Who killed Abraham Lincoln?\" or \"How tall is\nMount Everest?\". 
In this paper we focus on this kind of question answering task, although the techniques we propose are more broadly applicable.

The design of our question answering system is motivated by recent observations in natural language processing that, for many applications, significant improvements in accuracy can be attained simply by increasing the amount of data used for learning. Following the same guiding principle, we take advantage of the tremendous data resource that the Web provides as the backbone of our question answering system. Many groups working on question answering have used a variety of linguistic resources: part-of-speech tagging, syntactic parsing, semantic relations, named entity extraction, dictionaries, WordNet, etc. (e.g., [2][8][11][12][13][15][16]). We chose instead to focus on the Web as a gigantic data repository with tremendous redundancy that can be exploited for question answering. The Web, which is home to billions of pages of electronic text, is orders of magnitude larger than the TREC QA document collection, which consists of fewer than 1 million documents. This is a resource that can be usefully exploited for question answering. We view our approach as complementary to more linguistic approaches, but have chosen to see how far we can get initially by focusing on data per se as a key resource available to drive our system design.

Automatic QA from a single, small information source is extremely challenging, since there is likely to be only one answer in the source to any user's question. Given a source, such as the TREC corpus, that contains only a relatively small number of formulations of answers to a query, we may be faced with the difficult task of mapping questions to answers by way of uncovering complex lexical, syntactic, or semantic relationships between question string and answer string. The need for anaphor resolution and synonymy, the presence of alternate syntactic formulations, and indirect answers all make answer finding a potentially challenging task. However, the greater the answer redundancy in the source data collection, the more likely it is that we can find an answer that occurs in a simple relation to the question, and therefore the less likely it is that we will need to resort to solving the aforementioned difficulties facing natural language processing systems.

EXPLOITING REDUNDANCY FOR QA

We take advantage of the redundancy (multiple, differently phrased answer occurrences) available when considering massive amounts of data in two key ways in our system.

Enables Simple Query Rewrites. The greater the number of information sources we can draw from, the easier the task of rewriting the question becomes, since the answer is more likely to be expressed in different manners. For example, consider the difficulty of gleaning an answer to the question "Who killed Abraham Lincoln?" from a source which contains only the text "John Wilkes Booth altered history with a bullet. He will forever be known as the man who ended Abraham Lincoln's life,"
versus a source that also contains the transparent answer string, "John Wilkes Booth killed Abraham Lincoln."

Figure 1. System Architecture. (The pipeline is: Rewrite Query, send to <Search Engine>, Collect Summaries and Mine N-Grams, Filter N-Grams, Tile N-Grams, return N-Best Answers. For the example question "Where is the Louvre Museum located?" the rewrites include "+the Louvre Museum +is located", "+the Louvre Museum +is +in", "+the Louvre Museum +is near", "+the Louvre Museum +is" and the backoff Louvre AND Museum AND near; the top-ranked answers are "in Paris France 59%", "museums 12%", "hostels 10%".)

Facilitates Answer Mining. Even when no obvious answer strings can be found in the text, redundancy can improve the efficacy of question answering. For instance, consider the question: "How many times did Bjorn Borg win Wimbledon?" Assume the system is unable to find any obvious answer strings, but does find the following sentences containing "Bjorn Borg" and "Wimbledon", as well as a number:

(1) Bjorn Borg blah blah Wimbledon blah blah 5 blah
(2) Wimbledon blah blah blah Bjorn Borg blah 37 blah.
(3) blah Bjorn Borg blah blah 5 blah blah Wimbledon
(4) 5 blah blah Wimbledon blah blah Bjorn Borg.

By virtue of the fact that the most frequent number in these sentences is 5, we can posit that as the most likely answer.

RELATED WORK

Other researchers have recently looked to the web as a resource for question answering. The Mulder system described by Kwok et al. [14] is similar to our approach in several respects. For each question, Mulder submits multiple queries to a web search engine and analyzes the results. Mulder does sophisticated parsing of the query and the full text of retrieved pages, which is far more complex and compute-intensive than our analysis. They also require global idf term weights for answer extraction and selection, which requires local storage of a database of term weights. They have done some interesting user studies of the Mulder interface, but they have not evaluated it with TREC queries, nor have they looked at the importance of various system components.

Clarke et al. [9][10] investigated the importance of redundancy in their question answering system. In [9] they found that the best weighting of passages for question answering involves using both passage frequency (what they call redundancy) and a global idf term weight. They also found that analyzing more top-ranked passages was helpful in some cases and not in others. Their system builds a full-content index of a document collection, in this case TREC. In [10] they use web data to reinforce the scores of promising candidate answers by providing additional redundancy, with good success. Their implementation requires that an auxiliary web corpus be available for full-text analysis and global term weighting. In our work, the web is the primary source of redundancy and we operate without a full-text index of documents or a database of global term weights.

Buchholz's Shapaqa NLP system [7] has been evaluated on both TREC and Web collections. Question answering accuracy was higher with the Web collection (although both runs were poor in absolute terms), but few details about the nature of the differences are provided.

These systems typically perform complex parsing and entity extraction for both queries and best-matching web pages ([7][14]), which limits the number of web pages that they can analyze in detail.
Other systems require term weighting for selecting or ranking the best-matching passages ([10][14]), and this requires auxiliary data structures. Our approach is distinguished from these in its simplicity (simple rewrites and string matching) and efficiency in the use of web resources (use of only summaries and simple ranking). We now describe how our system uses redundancy in detail and evaluate this systematically.

SYSTEM OVERVIEW

A flow diagram of our system is shown in Figure 1. The system consists of four main components.

Rewrite Query. Given a question, the system generates a number of rewrite strings, which are likely substrings of declarative answers to the question. To give a simple example, from the question "When was Abraham Lincoln born?" we know that a likely answer formulation takes the form "Abraham Lincoln was born on <DATE>". Therefore, we can look through the collection of documents, searching for such a pattern.

We first classify the question into one of seven categories, each of which is mapped to a particular set of rewrite rules. Rewrite rule sets range in size from one to five rewrite types. The output of the rewrite module is a set of 3-tuples of the form [string, L/R/-, weight], where "string" is the reformulated search query, "L/R/-" indicates the position in the text where we expect to find the answer with respect to the query string (to the left, right or anywhere), and "weight" reflects how much we prefer answers found with this particular query. The idea behind using a weight is that answers found using a high precision query (e.g., "Abraham Lincoln was born on") are more likely to be correct than those found using a lower precision query (e.g., "Abraham" AND "Lincoln" AND "born").

We do not use a parser or part-of-speech tagger for query reformulation, but we do use a lexicon in order to determine the possible parts of speech of a word as well as its morphological variants. We created the rewrite rules and associated weights manually for the current system, although it may be possible to learn query-to-answer reformulations and weights (e.g. see Agichtein et al. [4]; Radev et al. [17]).

The rewrites generated by our system are simple string-based manipulations. For instance, some question types involve query rewrites with possible verb movement; the verb "is" in the question "Where is the Louvre Museum located?" should be moved in formulating the desired rewrite to "The Louvre Museum is located in". While we might be able to determine where to move a verb by analyzing the sentence syntactically, we took a much simpler approach. Given a query such as "Where is w1 w2 ... wn", where each of the wi is a word, we generate a rewrite for each possible position the verb could be moved to (e.g. "w1 is w2 ... wn", "w1 w2 is ... wn", etc.). While such an approach results in many nonsensical rewrites (e.g. "The Louvre is Museum located in"), these very rarely result in the retrieval of bad pages, and the proper movement position is guaranteed to be found via exhaustive search. If we instead relied on a parser, we would require fewer query rewrites, but a misparse would result in the proper rewrite not being found.

For each query we also generate a final rewrite which is a backoff to a simple ANDing of the non-stop words in the query.
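A minimal sketch of the exhaustive verb-movement rewriting described above, assuming the verb has already been identified (in the real system a lexicon does this, and each rewrite also carries an L/R/- hint and a weight); the function name and the simplifications are ours.

    def verb_movement_rewrites(question, verb="is"):
        """Generate one rewrite for every position the verb could move to,
        e.g. "Where is the Louvre Museum located?" -> "the Louvre Museum is located"."""
        words = question.rstrip("?").split()
        rest = [w for w in words[1:] if w.lower() != verb]   # drop wh-word and verb
        return [" ".join(rest[:i] + [verb] + rest[i:]) for i in range(1, len(rest) + 1)]

    for r in verb_movement_rewrites("Where is the Louvre Museum located?"):
        print(r)
    # the is Louvre Museum located
    # the Louvre is Museum located      (nonsensical rewrites are kept)
    # the Louvre Museum is located      (the useful rewrite is guaranteed to appear)
    # the Louvre Museum located is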
We\ncould backoff even further to ranking using a best-match retrieval\nsystem which doesn't require the presence of all terms and uses\ndifferential term weights, but we did not find that this was\nnecessary when using the Web as a source of data. There are an\naverage of 6.7 rewrites for the 500 TREC-9 queries used in the\nexperiments described below.\nAs an example, the rewrites for the query \"Who created the\ncharacter of Scrooge?\" are:\nLEFT_5_\"created +the character +of Scrooge\"\nRIGHT_5_\"+the character +of Scrooge +was created\n+by\"\nAND_2_\"created\" AND \"+the character\" AND \"+of\nScrooge\"\nAND_1_\"created\" AND \"character\" AND \"Scrooge\"\nTo date we have used only simple string matching techniques.\nSoubbotin and Soubbotin [18] have used much richer regular\nexpression matching to provide hints about likely answers, with\nvery good success in TREC 2001, and we could certainly\nincorporate some of these ideas in our rewrites. Note that many\nof our rewrites require the matching of stop words like \"in\" and\n\"the\", in the above example. In our system stop words are\nimportant indicators of likely answers, and we do not ignore them\nas most ranked retrieval systems do, except in the final backoff\nAND rewrite.\nThe query rewrites are then formulated as search engine queries\nand sent to a search engine from which page summaries are\ncollected and analyzed.\nMine N-Grams\n. From the page summaries returned by the search\nengine, n-grams are mined. For reasons of efficiency, we use\nonly the returned summaries and not the full-text of the\ncorresponding web page. The returned summaries contain the\nquery terms, usually with a few words of surrounding context. In\nsome cases, this surrounding context has truncated the answer\nstring, which may negatively impact results. The summary text is\nthen processed to retrieve only strings to the left or right of the\nquery string, as specified in the rewrite triple.\n1-, 2-, and 3-grams are extracted from the summaries. An N-gram\nis scored according the weight of the query rewrite that retrieved\nit. These scores are summed across the summaries that contain\nthe n-gram (which is the opposite of the usual inverse document\nfrequency component of document/passage ranking schemes).\nWe do not count frequency of occurrence within a summary (the\nusual tf component in ranking schemes). Thus, the final score for\nan n-gram is based on the rewrite rules that generated it and the\nnumber of unique summaries in which it occurred. When\nsearching for candidate answers, we enforce the constraint that at\nmost one stopword is permitted to appear in any potential n-gram\nanswers.\nThe top-ranked n-grams for the Scrooge query are:\nDickens 117\nChristmas Carol 78\nCharles Dickens 75\nDisney 72\nCarl Banks 54\nA Christmas 41\nuncle 31\nFilter/Reweight N-Grams.\nNext, the n-grams are filtered and\nreweighted according to how well each candidate matches the\nexpected answer-type, as specified by a handful of handwritten\nfilters. The system uses filtering in the following manner. First,\nthe query is analyzed and assigned one of seven question types,\nsuch as who-question, what-question, or how-many-question.\nBased on the query type that has been assigned, the system\ndetermines what collection of filters to apply to the set of potential\nanswers found during n-gram harvesting. 
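Returning to the Mine N-Grams step described above, here is a minimal sketch of the scoring: an n-gram accumulates the rewrite weight of each distinct summary it appears in, with no within-summary term frequency and at most one stopword allowed. The tokenization and the tiny stopword list are placeholder assumptions.

    from collections import defaultdict

    STOPWORDS = {"the", "of", "a", "an", "in", "is", "was", "and"}   # illustrative only

    def mine_ngrams(snippets):
        """snippets: iterable of (summary_text, rewrite_weight) pairs.
        Returns (ngram, score) pairs sorted by decreasing score."""
        scores = defaultdict(float)
        for text, weight in snippets:
            words = text.lower().split()
            seen = set()                                   # count each summary once
            for n in (1, 2, 3):
                for i in range(len(words) - n + 1):
                    gram = tuple(words[i:i + n])
                    if sum(w in STOPWORDS for w in gram) > 1:
                        continue                           # too many stopwords
                    if gram not in seen:
                        scores[gram] += weight
                        seen.add(gram)
        return sorted(scores.items(), key=lambda kv: -kv[1])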
The answers are analyzed for features relevant to the filters, and then rescored according to the presence of such information.

A collection of approximately 15 filters was developed based on human knowledge about question types and the domains from which their answers can be drawn. These filters used surface string features, such as capitalization or the presence of digits, and consisted of handcrafted regular expression patterns.

After the system has determined which filters to apply to a pool of candidate answers, the selected filters are applied to each candidate string and used to adjust the score of the string. In most cases, filters are used to boost the score of a potential answer when it has been determined to possess the features relevant to the query type. In other cases, filters are used to remove strings from the candidate list altogether. This type of exclusion was only performed when the set of correct answers was determined to be a closed set (e.g. "Which continent....?") or definable by a set of closed properties (e.g. "How many...?").

Tile N-Grams. Finally, we applied an answer tiling algorithm, which both merges similar answers and assembles longer answers out of answer fragments. Tiling constructs longer n-grams from sequences of overlapping shorter n-grams. For example, "A B C" and "B C D" is tiled into "A B C D". The algorithm proceeds greedily from the top-scoring candidate: all subsequent candidates (up to a certain cutoff) are checked to see if they can be tiled with the current candidate answer. If so, the higher scoring candidate is replaced with the longer tiled n-gram, and the lower scoring candidate is removed. The algorithm stops only when no n-grams can be further tiled.

The top-ranked n-grams after tiling for the Scrooge query are:

Charles Dickens 117
A Christmas Carol 78
Walt Disney's uncle 72
Carl Banks 54
uncle 31

Our system works most efficiently and naturally with a backend retrieval system that returns best-matching passages or query-relevant document summaries. We can, of course, post-process the full text of matching documents to extract summaries for n-gram mining, but this is inefficient, especially in Web applications where the full text of documents would have to be downloaded over the network at query time.

EXPERIMENTS

For our experimental evaluations we used the first 500 TREC-9 queries (201-700) [19]. For simplicity we ignored queries which are syntactic rewrites of earlier queries (701-893), although including them does not change the results in any substantive way. We used the patterns provided by NIST for automatic scoring. A few patterns were slightly modified to accommodate the fact that some of the answer strings returned using the Web were not available for judging in TREC-9. We did this in a very conservative manner, allowing for more specific correct answers (e.g., Edward J. Smith vs. Edward Smith) but not more general ones (e.g., Smith vs. Edward Smith), and simple substitutions (e.g., 9 months vs. nine months). These changes influence the absolute scores somewhat but do not change relative performance, which is our focus here.

Many of the TREC queries are time sensitive; that is, the correct answer depends on when the question is asked. The TREC database covers a period of time more than 10 years ago; the Web is much more current. Because of this mismatch, many correct answers returned from the Web will be scored as incorrect using the TREC answer patterns.
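Returning to the Tile N-Grams step described above, a minimal sketch of the greedy tiling over scored word-tuples; the containment test and the cutoff value are simplifying assumptions.

    def tile_pair(a, b):
        """Tile two word tuples if one contains the other or one's tail
        overlaps the other's head; return the tiled tuple, or None."""
        for x, y in ((a, b), (b, a)):
            if " ".join(y) in " ".join(x):
                return x
            for k in range(min(len(x), len(y)) - 1, 0, -1):   # longest overlap first
                if x[-k:] == y[:k]:
                    return x + y[k:]
        return None

    def tile_answers(candidates, cutoff=25):
        """Greedy tiling over (ngram, score) pairs: repeatedly try to tile the
        current top candidate with each lower-scoring one, keep the longer
        tiled n-gram at the higher score, and drop the lower-scoring one."""
        cands = sorted(candidates, key=lambda c: -c[1])[:cutoff]
        changed = True
        while changed:
            changed = False
            for i in range(len(cands)):
                for j in range(i + 1, len(cands)):
                    merged = tile_pair(cands[i][0], cands[j][0])
                    if merged is not None:
                        cands[i] = (merged, cands[i][1])
                        del cands[j]
                        changed = True
                        break
                if changed:
                    break
        return cands

    print(tile_answers([(("a", "b", "c"), 10), (("b", "c", "d"), 7)]))
    # [(('a', 'b', 'c', 'd'), 10)]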
10-20% of the TREC queries have temporal dependencies, e.g., Who is the president of Bolivia? What is the exchange rate between England and the U.S.? We did not modify the answer key to accommodate these time differences, because this is a subjective job and would make comparison with earlier TREC results impossible.

For the main Web retrieval experiments we used Google as a backend because it provides query-relevant summaries that make our n-gram mining techniques more efficient. Thus we have access to more than 2 billion web pages. For some experiments in TREC retrieval we use the standard QA collection consisting of news documents from Disks 1-5. The TREC collection has just under 1 million documents [19].

All runs are completely automatic, starting with queries and generating a ranked list of 5 candidate answers. Candidate answers are a maximum of 50 bytes long, and typically much shorter than that. We report the Mean Reciprocal Rank (MRR) of the first correct answer, the Number of Questions Correctly Answered (NumCorrect), and the Proportion of Questions Correctly Answered (PropCorrect). Correct answers at any rank are included in the number and proportion correct measures. Most correct answers are at the top of the list: 70% of the correct answers occur in the first position and 90% in the first or second positions.

Using our system with default settings for query rewrite weights, number of summaries returned, etc., we obtain an MRR of 0.507 and answer 61% of the queries. The average answer length was 12 bytes, so the system is really returning short answers, not passages. This is very good performance and would place us near the top of the 50-byte runs for TREC-9. However, since we did not take part in TREC-9 it is impossible to compare our results precisely with those systems (e.g., we used TREC-9 for tuning our TREC-10 system, increasing our score somewhat, but we return several correct answers that were not found in TREC-9, thus decreasing our score somewhat).

Redundancy is used in two key ways in our data-driven approach. First, the occurrence of multiple linguistic formulations of the same answers increases the chances of being able to find an answer that occurs within the context of a simple pattern match with the query. Second, answer redundancy facilitates the process of answer extraction by giving higher weight to answers that occur more often (i.e., in more different document summaries). We now evaluate the contributions of these experimentally.

5.1 Number of Snippets

We begin by examining the importance of redundancy in answer extraction. To do this we vary the number of summaries (snippets) that we get back from the search engine and use as input to the n-gram mining process. Our standard system uses 100 snippets. We varied the number of snippets from 1 to 1000. The results are shown in Figure 2.

Figure 2. MRR as a function of number of snippets returned. TREC-9, queries 201-700.

Performance improves sharply as the number of snippets increases from 1 to 50 (0.243 MRR for 1 snippet, 0.370 MRR for 5, 0.423 MRR for 10, and 0.501 for 50), somewhat more slowly after that
We believe\nthat the slight drop at the high end is due to the increasing\ninfluence that the weaker rewrites have when many snippets are\nreturned. The most restrictive rewrites return only a few matching\ndocuments. Increasing the number of snippets increases the\nnumber of the least restrictive matches (the AND matches), thus\nswamping the restrictive matches. In addition, frequent n-grams\nbegin to dominate our rankings at this point.\nAn example of failures resulting from too many AND matches is\nQuery 594: What is the longest word in the English language?\nFor this query, there are 40 snippets matching the rewrite \"is the\nlongest word in the English language\" with weight 5, 40 more\nsnippets matching the rewrite \"the longest word in the English\nlanguage is\" with the weight 5, and more than 100 snippets\nmatching the backoff AND query (\"longest\" AND \"word\" AND\n\"English\" AND \"language\") with a weight of 1. When 100\nsnippets are used, the precise rewrites contribute almost as many\nsnippets as the AND rewrite. In this case we find the correct\nanswer, \"pneumonoultramicroscopicsilicovolcanokoniosis\", in the\nsecond rank. The first answer, \"1909 letters long\", which is\nincorrect, also matches many precise rewrites such as \"the longest\nword in English is ## letters long\", and we pick up on this.\nWhen 1000 snippets are used, the weaker AND rewrites dominate\nthe matches. In this case, the correct answer falls to seventh on\nthe list after \"letters long\", \"one syllable\", \"is screeched\", \"facts\",\n\"stewardesses\" and \"dictionary\", all of which occur commonly in\nresults from the least restrictive AND rewrite. A very common\nAND match contains the phrase \"the longest one-syllable word in\nthe English language is screeched\", and this accounts for two of\nour incorrect answers.\nUsing differential term weighting of answer terms, as many\nretrieval systems do, should help overcome this problem to some\nextent but we would like to avoid maintaining a database of global\nterm weights. Alternatively we could refine our weight\naccumulation scheme to dampen the effects of many weakly\nrestrictive matches by sub-linear accumulation, and we are\ncurrently exploring several alternatives for doing this.\nOur main results on snippet redundancy are consistent with those\nreported by Clarke et al. [9], although they worked with the much\nsmaller TREC collection. They examined a subset of the TREC-9\nqueries requiring a person's name as the answer. They varied the\nnumber of passages retrieved (which they call depth) from 25 to\n100, and observed some improvements in MRR. When the\npassages they retrieved were small (250 or 500 bytes) they found\nimprovement, but when the passages were larger (1000 or 2000\nbytes) no improvements were observed. The snippets we used\nare shorter than 250 bytes, so the results are consistent. Clarke et\nal. [9] also explored a different notion of redundancy (which they\nrefer to as c\ni\n). c\ni\nis the number of different passages that match an\nanswer. Their best performance is achieved when both c\ni\nand\nterm weighting are used to rank passages. We too use the number\nof snippets that an n-gram occurs in. We do not, however, use\nglobal term weights, but have tried other techniques for weighting\nsnippets as described below.\n5.2\n\nTREC vs. Web Databases\nAnother way to explore the importance of redundancy is to run\nour system directly on the TREC documents. 
As noted earlier, there are three orders of magnitude more documents on the Web than in the TREC QA collection. Consequently, there will be fewer alternative ways of saying the same thing and fewer matching documents available for mining the candidate n-grams. We suspect that this lack of redundancy will limit the success of our approach when applied directly on TREC documents.

While corpus size is an obvious and important difference between the TREC and Web collections, there are other differences as well. For example, text analysis, ranking, and snippet extraction techniques will all vary somewhat in ways that we can not control. To better isolate the size factor, we also ran our system against another Web search engine.

For these experiments we used only the AND rewrites and looked at the first 100 snippets. We had to restrict ourselves to AND rewrites because some of the search engines we used do not support the inclusion of stop words in phrases, e.g., "created +the character +of Scrooge".

5.2.1 TREC Database

The TREC QA collection consists of just under 1 million documents. We expect much less redundancy here compared to the Web, and suspect that this will limit the success of our approach. An analysis of the TREC-9 query set (201-700) shows that no queries have 100 judged relevant documents. Only 10 of the 500 questions have 50 or more relevant documents, which the results in Figure 2 suggest are required for good system performance. And a very large number of queries, 325, have fewer than 10 relevant documents.

We used an Okapi backend retrieval engine for the TREC collection. Since we used only Boolean AND rewrites, we did not take advantage of Okapi's best-match ranking algorithm. However, most queries return fewer than 100 documents, so we wind up examining most of the matches anyway.

We developed two snippet extraction techniques to generate query-relevant summaries for use in n-gram mining. A Contiguous technique returned the smallest window containing all the query terms, along with 10 words of context on either side. Windows that were greater than 500 words were ignored. This approach is similar to passage retrieval techniques, albeit without differential term weighting. A Non-Contiguous technique returned the union of two-word matches along with 10 words of context on either side. Single words not previously covered are included as well. The search engine we used for the initial Web experiments returns both contiguous and non-contiguous snippets. Figure 3 shows the results of this experiment.

                               MRR     NumCorrect   PropCorrect
Web1                           0.450   281          0.562
TREC, Contiguous Snippet       0.186   117          0.234
TREC, Non-Contiguous Snippet   0.187   128          0.256

Figure 3: Web vs. TREC as data source (AND rewrites only, top 100 snippets)

Our baseline system using all rewrites and retrieving 100 snippets achieves 0.507 MRR (Figure 2). Using only the AND query rewrites results in worse performance for our baseline system, with 0.450 MRR (Figure 3). More noticeable than this difference is the drop in performance of our system using TREC as a data source compared to using the much larger Web as a data source. MRR drops from 0.450 to 0.186 for contiguous snippets and 0.187 for non-contiguous snippets, and the proportion of questions answered correctly drops from 56% to 23% for contiguous snippets and 26% for non-contiguous snippets.
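For reference, a minimal sketch of how the reported measures (MRR of the first correct answer, NumCorrect and PropCorrect over the top-5 answer lists) can be computed; the is_correct callback stands in for the NIST answer-pattern match and is an assumption of this sketch.

    def score_run(ranked_answers_per_query, is_correct):
        """ranked_answers_per_query: one top-5 answer list per query.
        is_correct(query_index, answer) -> bool, e.g. an answer-pattern match."""
        rr_sum, num_correct = 0.0, 0
        for qi, answers in enumerate(ranked_answers_per_query):
            ranks = [r for r, a in enumerate(answers, start=1) if is_correct(qi, a)]
            if ranks:
                num_correct += 1             # correct at any rank counts
                rr_sum += 1.0 / ranks[0]     # reciprocal rank of first correct answer
        n = len(ranked_answers_per_query)
        return rr_sum / n, num_correct, num_correct / n   # MRR, NumCorrect, PropCorrect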
It is worth noting that the TREC MRR scores would still place this system in the top half of the systems for the TREC-9 50-byte task, even though we tuned our system to work on much larger collections. However, we can do much better simply by using more data. The lack of redundancy in the TREC collection accounts for a large part of this drop-off in performance. Clarke et al. [10] have also reported better performance using the Web directly for TREC 2001 questions.

We expect that the performance difference between TREC and the Web would increase further if all the query rewrites were used. This is because there are so few exact phrase matches in TREC relative to the Web, and the precise matches improve performance by 13% (0.507 vs. 0.450).

We believe that database size per se (and the associated redundancy) is the most important difference between the TREC and Web collections. As noted above, however, there are other differences between the systems, such as text analysis, ranking, and snippet extraction techniques. While we can not control the text analysis and ranking components of Web search engines, we can use the same snippet extraction techniques. We can also use a different Web search engine to mitigate the effects of specific text analysis and ranking algorithms.

5.2.2 Another Web Search Engine

For these experiments we used the MSNSearch search engine. At the time of our experiments, the summaries returned were independent of the query. So we retrieved the full text of the top 100 web pages and applied the two snippet extraction techniques described above to generate query-relevant summaries. As before, all runs are completely automatic, starting with queries, retrieving web pages, extracting snippets, and generating a ranked list of 5 candidate answers. The results of these experiments are shown in Figure 4. The original results are referred to as Web1 and the new results as Web2.

                               MRR     NumCorrect   PropCorrect
Web1                           0.450   281          0.562
TREC, Contiguous Snippet       0.186   117          0.234
TREC, Non-Contiguous Snippet   0.187   128          0.256
Web2, Contiguous Snippet       0.355   227          0.454
Web2, Non-Contiguous Snippet   0.383   243          0.486

Figure 4: Web vs. TREC as data source (AND rewrites only, top 100 snippets)

The Web2 results are somewhat worse than the Web1 results, but this is expected given that we developed our system using the Web1 backend and did not do any tuning of our snippet extraction algorithms. In addition, we believe that the Web2 collection indexed somewhat less content than Web1 at the time of our experiments, which should decrease performance in and of itself. More importantly, the Web2 results are much better than the TREC results for both snippet extraction techniques, almost doubling MRR in both cases. Thus, we have shown that QA is more successful using another large Web collection compared to the small TREC collection. The consistency of this result across Web collections points to size and redundancy as the key factors.

5.2.3 Combining TREC and Web

Given that the system benefits from having a large text collection from which to search for potential answers, we would expect that combining both the Web and the TREC corpus would result in even greater accuracy. We ran two experiments to test this. Because there was no easy way to merge the two corpora, we instead combined the output of QA systems built on each corpus. For these experiments we used the original Web1 system and our TREC system.
We used only the AND query rewrites, looked at the top 1000 search results for each rewrite, and used a slightly different snippet extraction technique. For these parameter settings, the base TREC-based system had an MRR of .262 and the Web-based system had an MRR of .416.

First, we ran an oracle experiment to assess the potential gain that could be attained by combining the output of the Web-based and TREC-based QA systems. We implemented a "switching oracle", which decides for each question whether to use the output from the Web-based QA system or the TREC-based QA system, based upon which system output had a higher-ranking correct answer. The switching oracle had an MRR of .468, a 12.5% improvement over the Web-based system. Note that this oracle does not precisely give us an upper bound, as combining algorithms (such as that described below) could re-order the rankings of outputs.

Next, we implemented a combining algorithm that merged the outputs from the TREC-based and Web-based systems by having both systems vote on answers, where the vote is the score assigned to a particular answer by the system. For voting, we defined string equivalence such that if a string X is a substring of Y, then a vote for X is also a vote for Y. The combined system achieved an MRR of .433 (a 4.1% improvement over the Web-based system) and answered 283 questions correctly.

5.3 Snippet Weighting

Until now, we have focused on the quantity of information available and less on its quality. Snippet weights are used in ranking n-grams. An n-gram weight is the sum of the weights for all snippets in which that n-gram appears.

Each of our query rewrites has a weight associated with it reflecting how much we prefer answers found with this particular query. The idea behind using a weight is that answers found using a high precision query (e.g., "Abraham Lincoln was born on") are more likely to be correct than those found using a lower precision query (e.g., "Abraham" AND "Lincoln" AND "born"). Our current system has 5 weights.

These rewrite weights are the only source of snippet weighting in our system. We explored how important these weights are and considered several other factors that could be used as additional sources of information for snippet weighting. Although we specify Boolean queries, the retrieval engine can provide a ranking based on factors like link analyses, proximity of terms, location of terms in the document, etc. So, different weights can be assigned to matches at different positions in the ranked list. We also looked at the number of matching terms in the best fixed-width window, and the window size of the smallest matching passage, as indicators of passage quality.

Rewrite Wts uses our heuristically determined rewrite weights as a measure of the quality of a snippet. This is the current system default. Equal Wts gives equal weight to all snippets regardless of the rewrite rule that generated them. To the extent that more precise rewrites retrieve better answers, we will see a drop in performance when we make all weights equal. Rank Wts uses the rank of the snippet as a measure of its quality, SnippetWt = (100 - rank)/100. NMatch Wts uses the number of matching terms in a fixed-width window as the measure of snippet quality. Length Wts uses a measure of the length of the snippet needed to encompass all query terms as the measure of snippet quality. We also look at combinations of these factors.
For example, Rewrite+Rank Wts uses both the rewrite weight and the rank according to the following formula: SnippetWt = RewriteScore + (100-rank)/100. All of these measures are available from the query-relevant summaries returned by the search engine and do not require analyzing the full text of the document. The results of these experiments are presented in Figure 5.

Weighting                 MRR     NumCorrect   PropCorrect
Equal Wts                 0.489   298          0.596
Rewrite Wts (Default)     0.507   307          0.614
Rank Wts                  0.483   292          0.584
Rewrite + Rank Wts        0.508   302          0.604
NMatch Wts                0.506   301          0.602
Length Wts                0.506   302          0.604
Figure 5: Snippet Weighting

Our current default 5-level weighting scheme, which reflects the specificity of the query rewrites, does quite well. Equal weighting is somewhat worse, as we expected. Interestingly, search engine rank is no better for weighting candidate n-grams than equal weighting. None of the other techniques we looked at surpasses the default weights in both MRR and PropCorrect. Our heuristic rewrite weights provide a simple and effective technique for snippet weighting that can be used with any backend retrieval engine.
Most question answering systems use IR-based measures of passage quality, and do not typically evaluate the best measure of similarity for purposes of extracting answers. The work of Clarke et al. [9] noted above is an exception. Soubbotin and Soubbotin [18] mention different weights for different regular expression matches, but they did not describe the mechanism in detail, nor did they evaluate how useful it is. Harabagiu et al. [11] have a kind of backoff strategy for matching which is similar to weighting, but again we do not know of parametric evaluations of its importance in their overall system performance. The question of what kinds of passages can best support answer mining for question answering, as opposed to document retrieval, is an interesting one that we are pursuing.
DISCUSSION AND FUTURE DIRECTIONS
The design of our question answering system was motivated by the goal of exploiting the large amounts of text data available on the Web and elsewhere as a useful resource. While many question answering systems take advantage of linguistic resources, fewer depend primarily on data. Vast amounts of data provide several sources of redundancy that our system capitalizes on. Answer redundancy (i.e., multiple, differently phrased answer occurrences) enables us to use only simple query rewrites for matching, and facilitates the extraction of candidate answers.
We evaluated the importance of redundancy in our system parametrically. First, we explored the relationship between the number of document snippets examined and question answering accuracy. Accuracy improves sharply as the number of snippets included for n-gram analysis increases from 1 to 50, more slowly after that, peaking at 200 snippets, and then falls off somewhat. More is better, up to a limit. We believe that we can increase this limit by improving our weight accumulation algorithm so that matches from the least precise rewrites do not dominate. Second, in smaller collections (like TREC), the accuracy of our system drops sharply, although it is still quite reasonable in absolute terms. Finally, snippet quality is less important to system performance than snippet quantity. We have a simple 5-level snippet weighting scheme based on the specificity of the query rewrite, and this appears to be sufficient.
More\ncomplex weighting schemes that we explored were no more\nuseful.\nThe performance of our system shows promise for approaches to\nquestion answering which makes use of very large text databases\neven with minimal natural language processing. Our system does\nnot need to maintain its own index nor does it require global term\nweights, so it can work in conjunction with any backend retrieval\nengine. Finally, since our system does only simple query\ntransformations and n-gram analysis, it is efficient and scalable.\nOne might think that our system has limited applicability, because\nit works best with large amounts of data. But, this isn't\nnecessarily so. First, we actually perform reasonably well in the\nsmaller TREC collection, and could perhaps tune system\nparameters to work even better in that environment. More\ninterestingly, Brill et al. [6] described a projection technique that\ncan be used to combine the wealth of data available on the Web\nwith the reliability of data in smaller sources like TREC or an\nencyclopedia. The basic idea is to find candidate answers in a\nlarge and possibly noisy source, and then expand the query to\ninclude likely answers. The expanded queries can then be used\non smaller but perhaps more reliable collections either directly\nto find support for the answer in the smaller corpus, or indirectly\nas a new query which is issued and mined as we currently do.\nThis approach appears to be quite promising. Our approach\nseems least applicable in applications that involve a small amount\nof proprietary data. In these cases, one might need to do much\nmore sophisticated analyses to map user queries to the exact\nlexical form that occur in the text collection rather than depend on\nprimarily on redundancy as we have done.\nAlthough we have pushed the data-driven perspective, more\nsophisticated language analysis might help as well by providing\nmore effective query rewrites or less noisy data for mining.\n297\n\n\nMost question answering systems contain aspects of both we use\nsome linguistic knowledge in our small query typology and\nanswer filtering, and more sophisticated systems often use simple\npattern matching for things like dates, zip codes, etc.\nThere are a number of open questions that we hope to explore. In\nthe short term, we would like to look systematically at the\ncontributions of other system components. Brill et al. [5] have\nstarted to explore individual components in more detail, with\ninteresting results. In addition, it is likely that we have made\nseveral sub-optimal decisions in our initial implementation (e.g.,\nomitting most stop words from answers, simple linear\naccumulation of scores over matching snippets) that we would\nlike to improve. Most retrieval engines have been developed\nwith the goal of finding topically relevant documents. Finding\naccurate answers may require somewhat different matching\ninfrastructure. We are beginning to explore how best to generate\nsnippets for use in answer mining. Finally, time is an interesting\nissue. We noted earlier how the correct answer to some queries\nchanges over time. Time also has interesting implications for\nusing redundancy. For example, it would take a while for a news\nor Web collection to correctly answer a question like \"Who is the\nU. S. President?\" just after an election.\nAn important goal of our work is to get system designers to treat\ndata as a first class resource that is widely available and\nexploitable. 
We have made good initial progress, and there are\nseveral interesting issues remaining to explore.\n\nREFERENCES\n[1]\n\nAAAI Spring Symposium Series (2002). Mining Answers\nfrom Text and Knowledge Bases.\n[2]\n\nS. Abney, M. Collins and A. Singhal (2000). Answer\nextraction. In Proceedings of the 6\nth\nApplied Natural\nLanguage Processing Conference (ANLP 2000), 296-301.\n[3]\n\nACL-EACL (2002). Workshop on Open-domain Question\nAnswering.\n[4]\n\nE. Agichtein, S. Lawrence and L. Gravano (2001). Learning\nsearch engine specific query transformations for question\nanswering. In Proceedings of the 10\nth\nWorld Wide Web\nConference (WWW10), 169-178.\n[5]\n\nE. Brill, S. Dumais and M. Banko (2002). An analysis of the\nAskMSR question-answering system. In Proceedings of\n2002 Conference on Empirical Methods in Natural\nLanguage Processing (EMNLP 2002).\n[6]\n\nE. Brill, J. Lin, M. Banko, S. Dumais and A. Ng (2002).\nData-intensive question answering. In Proceedings of the\nTenth Text REtrieval Conference (TREC 2001).\n[7]\n\nS. Buchholz (2002). Using grammatical relations, answer\nfrequencies and the World Wide Web for TREC question\nanswering. In Proceedings of the Tenth Text REtrieval\nConference (TREC 2001).\n[8]\n\nJ. Chen, A. R. Diekema, M. D. Taffet, N. McCracken, N. E.\nOzgencil, O. Yilmazel, E. D. Liddy (2002). Question\nanswering: CNLP at the TREC-10 question answering track.\nIn Proceedings of the Tenth Text REtrieval Conference\n(TREC 2001).\n[9]\n\nC. Clarke, G. Cormack and T. Lyman (2001). Exploiting\nredundancy in question answering. In Proceedings of the\n24\nth\nAnnual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval\n(SIGIR'2001), 358-365.\n[10]\n\nC. Clarke, G. Cormack and T. Lynam (2002). Web\nreinforced question answering. In Proceedings of the Tenth\nText REtrieval Conference (TREC 2001).\n[11]\n\nS. Harabagiu, D. Moldovan, M. Pasca, R. Mihalcea, M.\nSurdeanu, R. Bunescu, R. Girju, V. Rus and P. Morarescu\n(2001). FALCON: Boosting knowledge for question\nanswering. In Proceedings of the Ninth Text REtrieval\nConference (TREC-9), 479-488.\n[12]\n\nE. Hovy, L. Gerber, U. Hermjakob, M. Junk and C. Lin\n(2001). Question answering in Webclopedia. In\nProceedings of the Ninth Text REtrieval Conference (TREC-9\n), 655-664.\n[13]\n\nE. Hovy, U. Hermjakob and C. Lin (2002). The use of\nexternal knowledge in factoid QA. In Proceedings of the\nTenth Text REtrieval Conference (TREC 2001).\n[14]\n\nC. Kwok, O. Etzioni and D. Weld (2001). Scaling question\nanswering to the Web. In Proceedings of the 10\nth\nWorld\nWide Web Conference (WWW'10), 150-161.\n[15]\n\nM. A. Pasca and S. M. Harabagiu (2001). High performance\nquestion/answering. In Proceedings of the 24\nth\nAnnual\nInternational ACM SIGIR Conference on Research and\nDevelopment in Information Retrieval (SIGIR'2001), 366-374\n.\n[16]\n\nJ. Prager, E. Brown, A. Coden and D. Radev (2000).\nQuestion answering by predictive annotation. In\nProceedings of the 23\nrd\nAnnual International ACM SIGIR\nConference on Research and Development in Information\nRetrieval (SIGIR'2000), 184-191.\n[17]\n\nD. R. Radev, H. Qi, Z. Zheng, S. Blair-Goldensohn, Z.\nZhang, W. Fan and J. Prager (2001). Mining the web for\nanswers to natural language questions. In Proceeding of the\n2001 ACM CIKM: Tenth International Conference on\nInformation and Knowledge Management, 143-150\n[18]\n\nM. M. Soubbotin and S. M. Soubbotin (2002). Patterns and\npotential answer expressions as clues to the right answers. 
In\nProceedings of the Tenth Text REtrieval Conference (TREC\n2001).\n[19]\n\nE. Voorhees and D. Harman, Eds. (2001). Proceedings of\nthe Ninth Text REtrieval Conference (TREC-9). NIST\nSpecial Publication 500-249.\n[20]\n\nE. Voorhees and D. Harman, Eds. (2002). Proceedings of\nthe Tenth Text REtrieval Conference (TREC 2001). ). NIST\nSpecial Publication 500-250.\n\n298", "keywords": "rewrite query;n-gram extraction techniques;automatic QA;Experimentation;information extraction;Algorithms;question answering system;redundancy in large corpora;facilitates answer mining;natural language processing;information retrieval;machine learning;TREC QA;simple passage ranking"} {"name": "16", "title": "A Programming Languages Course for Freshmen", "abstract": "Programming languages are a part of the core of computer science. Courses on programming languages are typically offered to junior or senior students, and textbooks are based on this assumption. However, our computer science curriculum offers the programming languages course in the first year. This unusual situation led us to design it from an untypical approach. In this paper, we first analyze and classify proposals for the programming languages course into different pure and hybrid approaches. Then, we describe a course for freshmen based on four pure approaches, and justify the main choices made. Finally, we identify the software used for laboratories and outline our experience after teaching it for seven years.", "fulltext": "INTRODUCTION\nThe topic of programming languages is a part of the core of\ncomputer science. It played a relevant role in all the curricula\nrecommendations delivered by the ACM or the IEEE-CS since\nthe first Curriculum'68 [2]. Recent joint curricular\nrecommendations of the ACM and the IEEE-CS identified several\n\"areas\" which structure the body of knowledge of the discipline.\nThe list of areas has grown since the first proposal made by the\nDenning Report [4] up to 14 in the latest version, Computing\nCurricula 2001 [12]. Programming languages has always been\none of these areas.\nInternationally reputed curricular recommendations are a valuable\ntool for the design of particular curricula. However, each country\nhas specific features that constrain the way of organizing their\nstudies. In Spain, the curriculum of a discipline offered by a\nuniversity is the result of a trade-off. On the one hand, the\nuniversity must at least offer a number of credits of the core\nsubject matters established by the Government. On the other\nhand, the university may offer supplementary credits of the core\nas well as mandatory and optional courses defined according to\nthe profile of the University and the faculty. Any proposal of a\nnew curriculum follows a well-established process: (1) the\ncurriculum is designed by a Center after consulting the\ndepartments involved; (2) it must be approved by the University\nCouncil; (3) the Universities Council of the Nation must deliver a\n(positive) report; and (4) the curriculum is published in the\nOfficial Bulletin of the Nation. This scheme has a number of\nadvantages, e.g. a minimum degree of coherence among all the\nuniversities is guaranteed. However, it also has a number of\ndisadvantages, e.g. the process to change a curriculum is very\nrigid.\nThe Universidad Rey Juan Carlos is a young university, now\nseven years old. It offered studies of computer science since the\nvery first year. 
The curriculum was designed by an external\ncommittee, so the teachers of computer science thereafter hired by\nthe university did not have the opportunity to elaborate on it. The\ncurriculum had a few weak points that would recommend a light\nreform, but the priorities of the new university postponed it.\nThe curriculum establishes the features of the \"Foundations of\nprogramming languages\" course. The course is scheduled to last\nfor fifteen weeks, with three lecture hours per week and two\nsupervised laboratory hours per week. However, some flexibility\nis allowed, so that some weeks may be released from the lab\ncomponent.\nThis course is both a strong and a weak feature of the curriculum.\nIt is a strong feature because programming languages are\nmarginal in the official core. Consequently, our curriculum is\ncloser to international recommendations than most Spanish\nuniversities. However, it is a weak feature, because the course is\noffered in the second semester of the first year! Notice that the\nprogramming languages course is more typically offered as an\nintermediate or advanced course in the third or fourth year.\nOur problem was how to teach the programming languages course\nto freshmen. The paper presents our design of the course and our\nexperience. In the second section we first analyze and classify\nproposals for the programming languages course into different\npure and hybrid approaches. In section 3, we describe a course for\nfreshmen based on four pure approaches, and justify the choices\nmade with respect to the factors that most influenced its design.\nFinally, we identify the software used for laboratories and outline\nour experience after teaching it for seven years.\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nITiCSE'05, June 2729, 2005, Monte de Caparica, Portugal.\nCopyright 2005 ACM 1-59593-024-8/05/0006...$5.00.\n\n271\n\nAPPROACHES TO TEACHING PROGRAMMING LANGUAGES\nSince Curriculum'68, several issues on programming languages\nhave received attention in the different curriculum\nrecommendations: particular programming languages, language\nimplementation, etc. It is formative to study (or to browse, at\nleast) such recommendations, even though the large number of\ntopics can be discouraging for the teacher.\nIn this section, we try to organize different contributions. Firstly,\nwe identify \"pure\" approaches to the programming languages\ncourse. Secondly, we describe their implementation, usually as\nhybrid approaches. Finally, we briefly discuss the issue of which\nprogramming languages and paradigms to use for the course.\n2.1 Pure Approaches\nProbably, the best study on approaches to the programming\nlanguages course was given by King [7]. He made a study of 15\ntextbooks and found 10 different goals. Furthermore, he identified\n3 approaches on which these textbooks were based and discussed\na number of issues. We have extended his classification up to 5\napproaches. Although most books and courses follow a hybrid\napproach, it is clarifying to distinguish the following pure ones:\nCatalogue approach. It provides a survey of several\nprogramming languages. 
This approach has several\nadvantages: the student acquires a good education in several\nlanguages, it allows studying the interaction among different\nconstructs of a language, and the languages may be studied in\nchronological order. However, it also exhibits disadvantages:\nthere is much redundancy in studying similar features in\ndifferent languages, and there is no guarantee that the student\nacquires a solid education.\nDescriptive (or anatomic [5]) approach. Programming\nlanguages have many common elements which can be\ngrouped into categories and studied independently. Typical\nexamples are: data types, hiding mechanisms, execution\ncontrol, etc. The advantages and disadvantages of this\napproach are roughly the opposite of the previous one.\nParadigm approach. Although each language has different\ncharacteristics and constructs, it is based on a basic model of\ncomputation called programming paradigm. The paradigm\napproach is an evolution of the descriptive approach described\nabove since it generalizes language constructs and groups\nthem consistently. Typical examples of paradigms are\nfunctional, logic and imperative programming.\nFormal approach. It studies the foundations of programming\nlanguages, mainly their syntax and semantics. The main\nadvantage of this approach is that the student acquires a solid\nconceptual background. However, it has the risk of being too\nformal and therefore keeping far from the study of the\nprogramming languages themselves.\nImplementation approach. It comprises language processing\ntopics. This approach is usually adopted jointly with the\ndescriptive one, so that the run-time mechanisms that support\neach language construct are also described. This allows\nestimating the computational cost of each construct. However,\nthe student may associate each concept with a particular\nimplementation; this approach may also be in contradiction\nwith the idea that a high level language should be\nunderstandable independently from its implementation.\n2.2 Implementation of Pure Approaches\nThe formal and implementation approaches are the basis of two\nwell known and established courses: computation theory and\nlanguage processors. They are not studied in this paper as\nstandalone courses, but we consider their integration into the\nprogramming languages course.\nDespite these \"pure\" approaches, it is more common to adopt a\nhybrid one, formed by a combination of several approaches. For\ninstance, we have explained that a descriptive course may address\nthe implementation of language constructs. The descriptive and\nparadigm approaches also are commonly complemented by a\nsmall catalogue of selected languages. It is also common to find a\ndescriptive part based on the imperative paradigm, followed by a\nsecond part based on the catalogue or the paradigm approaches.\nFinally, the formal approach can complement the descriptive or\nimplementation ones.\nFrom a historical point of view, the catalogue approach was the\nmost common in the first years of computing curricula. However,\nit has almost universally been abandoned, with some interesting\nexceptions, such as the experience by Maurer [9] on a subfamily\nof four C-like languages.\nThe trend has been towards giving more importance to the\nfoundations of programming languages, mainly elements and\nparadigms. After this evolution, it seems that the two most\ncommon organizations are:\nDescriptive approach, complemented with some languages or\nparadigms. 
It is the most common organization, according to\ncurrently available textbooks.\nDescriptive approach, illustrated by means of interpreters of\nsome selected languages. Interpreters and paradigms can be\ncombined in two symmetrical ways: either implementing an\ninterpreter in each paradigm [14], or implementing an\ninterpreter for one language of each paradigm; the latter\napproach can be adopted with an imperative language [6] or,\nmore probably, with a functional language [1].\nThere are few proposals for a holistic approach. One exception is\nWick and Stevenson's proposal [17], which combines the\ndescriptive, formal, paradigm and implementation approaches into\na \"reductionistic\" one.\n2.3 Choice of Languages and Paradigms\nCourses on programming languages do not simply consist in the\nstudy of one or several languages, but their study usually is a part\nof the course. The selection of these languages rises some\nquestions, that we cannot discuss here.\nA related issue is the selection of programming paradigms. Not all\nthe paradigms can be equally useful for a course on programming\nlanguages for freshmen. Firstly, some paradigms are richer to\nillustrate language elements than others. Secondly, some\nparadigms can be more adequate to freshmen than others. There is\nnot a catalogue of paradigms classified by suitable academic year,\nbut it is important to use a more objective criterion than just the\npersonal opinion of faculty.\n272\nWe have used the Computing Curricula 2001 [12] as an objective\nbasis to identify feasible paradigms. They identify the paradigms\nthat have succeeded in CS1: procedural, object-oriented,\nfunctional, algorithmic notations, and low-level languages. The\nlast two choices are useful for CS1 but not for a programming\nlanguages course, thus the remaining choices are procedural,\nobject-oriented, and functional. An additional choice, not cited by\nComputing Curricula 2001, consists in the use of tiny languages\n[8][10]. These languages can be ad hoc designed for a specific\ndomain or embedded into operating systems or applications.\n\nOUR PROPOSAL\nWe discarded the catalogue approach because it would only\ncontribute to a memorization effort by freshmen. Consequently,\nour course is based on the remaining four approaches:\nFormal approach. Formal grammars and syntax notations are\ngiven in depth. Language semantics is simply introduced.\nImplementation approach. Only basic concepts are given.\nDescriptive approach. Basic language elements are reviewed.\nParadigm approach. A programming paradigm is given in\ndepth (functional programming) but others are just sketched.\nTable 1 contains the contents of the course we offer.\nThere are several major factors to consider for the design of a\nprogramming languages course. Firstly, the course when it is\noffered to students determines to a large extent the knowledge and\nmaturity of students. Secondly, the existence of related courses,\nsuch as courses on automata theory or programming\nmethodology, may recommend removing some overlapping\ntopics. Thirdly, the specialization profile of faculty can foster the\nchoice of a given topic instead of another one. Fourthly, a course\nwith so many different topics must guarantee coherence among\nthem. Finally, time constraints typically limit the number of\ntopics to consider.\nIn the following subsections we justify the adequacy of the\nchoices made with respect to these factors.\nTable 1. Syllabus of our programming languages course\nPART I. INTRODUCTION\nChapter 1. 
General issues\nComputer and programming languages. Elements, properties\nand history of programming languages. Classifications.\nPART II. SYNTACTIC FOUNDATIONS OF\nPROGRAMMING LANGUAGES\nChapter 2. Grammars and formal languages\nAlphabets, symbols and chains. Languages. Grammars.\nDerivation of sentences. Recursive grammars. Classification\nof grammars: Chomsky's hierarchy. Abstract machines.\nChapter 3. Regular grammars\nDefinition. Uses and limitations. Regular expressions and\nregular languages. Finite-state automata.\nChapter 4. Context-free grammars\nDefinition. Uses and limitations. Parsing trees. The ambiguity\nproblem and its removal.\nPART III. DESCRIPTION AND PROCESSING OF\nPROGRAMMING LANGUAGES\nChapter 5. Language processors\nAbstract (or virtual) machines. Classes of processors. Stages\nin language processing. Concrete vs. abstract syntax.\nChapter 6. Lexical and syntactic notations\nLexical and syntactic elements. Regular definitions. Syntax\nnotations: BNF, EBNF, syntax charts.\nChapter 7. Semantics\nSemantics. Classes of semantics. Static semantics. Binding.\nPART IV. THE FUNCTIONAL PARADIGM\nChapter 8. Basic elements\nFunction definition and application. Programs and\nexpressions. Overview of functional languages. Built-in types,\nvalues and operations. The conditional expression.\nChapter 9. Advanced elements\nOperational semantics: term rewriting. Recursive functions.\nLocal definitions.\nChapter 10. Functional data types\nConstructors. Equations and patterns. Pattern matching.\nPART V. ELEMENTS OF PROGRAMMING LANGUAGES\nChapter 11. Lexical and syntactical elements\nIdentifiers. Numbers. Characters. Comments. Delimeters.\nNotations for expressions. Program structure and blocks.\nChapter 12. Data types\nRecursive types. Parametric types: polymorphism.\nPolymorphic functions. Type systems. Type checking and\ninference. Type equivalence. Type conversion. Overloading.\nPART VI. PROGRAMMING PARADIGMS\nChapter 13. Other paradigms and languages\nImperative paradigms. Logic paradigm. Motivation of\nconcurrency. Other computer languages: mark-up languages.\n3.1 Maturity of Students\nOne major concern was the fact that the course is offered to\nfreshmen. A single approach could not be used because of the\nfreshmen's lack of knowledge of programming languages. A\nvariety of contents from the different approaches must be selected\nin order to give them a comprehensible and rich view of\nprogramming languages.\nThe lack of maturity and capability of students to understand\ncertain topics was a bottleneck for course organization. The\ndifferent topics can be given with varying degrees of depth, but\nalways making sure that freshmen can master them. We found\nthat some topics are especially difficult to understand, even\nformulated in the simplest way. This mainly applies to:\nSemantics of programming languages.\nImplementation of programming languages.\nSome programming paradigms, such as concurrency.\nConsequently, these topics were included in a summarized way,\nso that students could achieve a global view of them and\n273\nunderstand the main issues involved. 
The rest of the topics could\npotentially be taught more deeply, but without forgetting that they\nwere offered to freshmen.\nIn terms of Bloom's taxonomy [2], the three topics above can be\nmastered at the knowledge level, or even comprehension level.\nHowever, for the rest of topics, we can expect students to achieve\nthe application and analysis levels, at least.\n3.2 Overlapping with Other Courses\nSome topics are also offered in other courses, either in the same\nor in a subsequent year. Consequently, these topics can be\nremoved or dealt with more shallowly. The most probable\nconflicts are:\nImperative programming, either procedural or object-oriented.\nGrammars and formal languages.\nLanguage processors.\nIn our case, there is an annual course on programming\nmethodology based on the imperative paradigm, but the other two\ntopics are not included in the curriculum. The programming\nmethodology course is offered in the first academic year.\nConsequently, we removed the imperative paradigm, except for\nits use in some illustrating examples, mainly in part V. However,\nwe kept chapters on grammars and formal languages, and on\nlanguage processors.\n3.3 Preferences and Specialization of Faculty\nThis factor is important in order to choose among equally eligible\noptions, or to give broader coverage of some topics. In particular,\nfaculty can be more familiar with some paradigms than with\nothers. This was over-riding for our choice of the programming\nparadigm.\nIn subsection 2.3 we discussed suitable paradigms for freshmen,\nand we concluded that Computing Curricula 2001 fosters the\nselection of the procedural, object-oriented, and functional\nparadigms. We discarded the procedural paradigm as it is\nconcurrently taught in the programming methodology courses.\nFinally, our specialization gave priority to the functional\nparadigm over object-orientation. The reader can find many\nexperiences in the literature, but we recommend a monograph on\nfunctional programming in education [13].\nFunctional programming is a paradigm with several advantages,\nsuch as short definition of languages, simple and concise\nprograms, high level of abstraction, etc. However, its main\nadvantage for us is richness of elements. This allows us to deal\nwith many aspects of programming languages (e.g. data types,\nrecursion, polymorphism, etc.) in a natural and easy way.\nThe use of tiny languages is another attractive choice in a course\nfor freshmen. However, we also discarded them in favor of the\nfunctional paradigm because they have fewer language elements.\n3.4 Coherence and Unifying Themes\nA key issue in a course based on several approaches is to provide\ncontents coherence. A network of relationships among the\ndifferent parts makes possible their coherent integration.\nPart III (description and processing of languages) is the pragmatic\ncontinuation of part II (formal grammars). Thus, EBNF and\nsyntactic charts are introduced in part III as more adequate\nnotations for language description than pure grammar definitions.\nLanguage processing is given at a conceptual level, but the role of\nregular and syntax-free grammars in the architecture of language\nprocessors is highlighted.\nParts IV and V are both based on a functional language, which is\ndescribed with the tools given in part III, mainly EBNF and type\nconstraints.\nParts IV and V are also related because they are based on the\nsame language. 
In order to provide more homogeneity, language\nelements studied in part V are introduced in a universal way, but\nthey are mainly illustrated with the functional paradigm.\nLast, but not least, recursion is adopted as a recurring theme\nduring the course. In effect, it is found in grammars, functions and\ndata types. The recurrent presentation of this topic fosters deeper\nunderstanding by students.\nThe \"pure\" definition of recursion is given early in the course, but\nits three instantiations enumerated above are studied later. For\neach instantiation, the mechanisms that accompany a recursive\ndefinition are clearly identified, in particular representation of\ninformation and operational semantics [16]. For instance,\nrecursive grammars represent sentences as strings of terminal\nsymbols, and its operating semantics is defined in terms of\nderivation of sentences. However, recursive functions represent\ninformation as expressions, and its operating semantics is defined\nin terms of term rewriting.\n3.5 Time Constraints\nAs a final factor, time constraints limit the depth of study of those\ntopics that could be studied longer. A global view of the course\nschedule is given in Table 2.\nTable 2 Schedule of the course\nPart\nTheory #hours\nLab #hours\nPart I\n5\nPart\nII\n14\n8\nPart III\n10\nPart\nIV\n12\n6\nPart V\n8\n6\nPart VI\n4\n2\nIn part II (formal grammars), regular and context-free grammars\nare the only ones studied in depth because of their importance for\nlanguage description and processing.\nMoreover, only one paradigm can be studied in depth. Even so,\nthe lack of time limits the presentation of functional programming\n(parts IV and V) to the core elements of the paradigm. Other\nelements, important for the functional programmer, can not be\naddressed (e.g. higher-order, lazy evaluation, or currification).\nHowever, this is not a serious drawback since the aim of including\nfunctional programming in the course is teaching the essentials of\na new paradigm as well as illustrating language elements.\n274\nLABORATORY COURSEWARE\nA course on programming languages must have a laboratory\ncomponent. The laboratory schedule includes sessions for those\nparts of the course where problem solving can be faced, mainly\nformal grammars and functional programming. Laboratory tools\nwere selected carefully so that they are adequate for freshmen to\nexercise non-trivial concepts; simple user interaction and\nvisualization facilities are of great help here. There are a number\nof tools that fulfill these requirements. For formal grammars, we\nrequire simulators that allow at least manipulating regular\nexpressions, finite automata, context-free grammars and\nderivation trees. Our final selection was JFLAP [11]. For\nfunctional programming, we require a programming environment\nthat shows term rewriting as the operational semantics. Our final\nselection was WinHIPE [15].\n\nEXPERIENCE\nWe have been teaching this course for seven years. Although the\nbasic structure has roughly been constant, it was refined\naccording to our experience. In particular, the emphasis on\nrecursion was introduced after several years as we noticed student\nproblems with this concept. We consider that we have succeeded,\nat least in eliminating the magical connotation of recursion.\nA major change was the relative order of chapters on formal\ngrammars and the functional paradigm. During the first year, they\nwere given in reverse order. 
However, students had problems in\nunderstanding the syntax of functional declarations that led us to\nteach in the first place formal grammars (and therefore syntax\nnotations such as EBNF). Thus, a foundation to declare syntax\nwas laid and then used to introduce functional programming.\nThe literature classifies the main difficulties for teaching\nfunctional programming into syntactical, conceptual and\n\"psychological\" problems [13]. In our approach, the two former\nkinds of problems are avoided, but the latter remains. As\nfreshmen learn concurrently the functional and one imperative\nlanguage, they get the idea that functional is an exotic, useless\nparadigm.\n\nCONCLUSION\nWe have described a course on programming languages for\nfreshmen. It comprises elements from four different approaches.\nWe have described the contents of the course, and we have\nexplained the factors that led us to its current design. The\nexperience has been very positive both for teachers and for\nstudents. As the Denning report sought for CS1, we consider that\nour course illustrates that it is feasible to offer some traditionally\nintermediate or advanced matters in introductory courses.\n\nACKNOWLEDGMENTS\nThis work is supported by the research project TIN2004-07568 of\nthe Spanish Ministry of Education and Science.\n\nREFERENCES\n[1] Abelson, H., and Sussman, G.J. Structure and Interpretation of\nComputer Programs. MIT Press, 2 ed., 1996.\n[2] Bloom, B., Furst, E., Hill, W., and Krathwohl, D.R.\nTaxonomy of Educational Objectives: Handbook I, The\nCognitive Domain. Addison-Wesley, 1959.\n[3] Curriculum Committee on Computer Science. Curriculum\n'68: Recommendations for academic programs in computer\nscience. Comm. ACM, 11, 3 (March 1968), 151-197.\n[4] Denning, P. et al. Computing as a Discipline. ACM Press,\nNew York, 1988.\n[5] Fischer, A.E., and Grodzinsky, F.S. The Anatomy of\nProgramming Languages. Prentice-Hall, 1993.\n[6] Kamin, S.N. Programming Languages: An Interpreter-Based\nApproach. Addison-Wesley, 1990.\n[7] King, K.N. The evolution of the programming languages\ncourse. In 23\nrd\nSIGCSE Technical Symposium on Computer\nScience Education (SIGCSE'92). ACM Press, New York,\n1992, 213-219.\n[8] Kolesar, M.V., and Allan, V.H. Teaching computer science\nconcepts and problem solving with a spreadsheet. In 26\nth\n\nSIGCSE Technical Symposium on Computer Science\nEducation (SIGCSE'95). ACM Press, New York, 1995, 10-13\n.\n[9] Maurer, W.D. The comparative programming languages\ncourse: A new chain of development. In 33\nrd\nSIGCSE\nTechnical Symposium on Computer Science Education\n(SIGCSE 2002). ACM Press, New York, 2002, 336-340.\n[10] Popyack, J.L., and Herrmann, N. Why everyone should\nknow how to program a computer. In IFIP World\nConference on Computers in Education VI (WCCE'95).\nChapman & Hall, 1995, 603-612.\n[11] Hung, T., and Rodger, S.H. Increasing visualization and\ninteraction in the automata theory course. In 31\nst\nSIGCSE\nTechnical Symposium on Computer Science Education\n(SIGCSE 2000). ACM Press, New York, 2000, 6-10.\n[12] The Joint Task Force on Computing Curricula IEEE-CS/ACM\n: Computing Curricula 2001 Computer Science,\nhttp://www.computer.org/education/cc2001/final, 2001.\n[13] Thomson, S., and Wadler, P. (eds.) Functional programming\nin education. Journal of Functional Programming, 3, 1\n(1993).\n[14] Tucker, A.B., and Noonan, R.E. Integrating formal models\ninto the programming languages course. 
In 33\nrd\nSIGCSE\nTechnical Symposium on Computer Science Education\n(SIGCSE 2002). ACM Press, New York, 2002, 346-350.\n[15] Velzquez-Iturbide, J.. Improving functional programming\nenvironments for education. In M. D. Brouwer-Hanse y T.\nHarrington (eds.), Man-Machine Communication for\nEducational Systems Design. Springer-Verlag, NATO ASI\nSeries F 124, 1994, 325-332.\n[16] Velzquez-Iturbide, J.. Recursion in gradual steps (is\nrecursion really that difficult?). In 31\nst\nSIGCSE Technical\nSymposium on Computer Science Education (SIGCSE 2000).\nACM Press, New York, 2000, 310-314.\n[17] Wick, M.R., and Stevenson, D.E. A reductionistic approach\nto a course on programming languages. In 32\nnd\nSIGCSE\nTechnical Symposium on Computer Science Education\n(SIGCSE 2001). ACM Press, New York, 2001, 253-257.\n275", "keywords": "programming language course;language description;formal grammars;laboratory component;functional programming;computer science;programming methodology;Programming languages;recursion;curriculum;freshmen;topics;programming paradigms"} {"name": "160", "title": "Query Result Ranking over E-commerce Web Databases", "abstract": "To deal with the problem of too many results returned from an E-commerce Web database in response to a user query, this paper proposes a novel approach to rank the query results. Based on the user query, we speculate how much the user cares about each attribute and assign a corresponding weight to it. Then, for each tuple in the query result, each attribute value is assigned a score according to its \"desirableness\" to the user. These attribute value scores are combined according to the attribute weights to get a final ranking score for each tuple. Tuples with the top ranking scores are presented to the user first. Our ranking method is domain independent and requires no user feedback. Experimental results demonstrate that this ranking method can effectively capture a user's preferences.", "fulltext": "INTRODUCTION\nWith the rapid expansion of the World Wide Web, more and more\nWeb databases are available. At the same time, the size of existing\nWeb databases is growing rapidly. One common problem faced by\nWeb users is that there is usually too many query results returned\nfor a submitted query. For example, when a user submits a query to\nautos.yahoo.com to search for a used car within 50 miles of New\nYork with a price between $5,000 and $10,000, 10,483 records are\nreturned. In order to find \"the best deal\", the user has to go through\nthis long list and compare the cars to each other, which is a tedious\nand time-consuming task.\nMost Web databases rank their query results in ascending or\ndescending order according to a single attribute (e.g., sorted by date,\nsorted by price, etc.). However, many users probably consider\nmultiple attributes simultaneously when judging the relevance or\ndesirableness of a result. While some extensions to SQL allow the\nuser to specify attribute weights according to their importance to\nhim/her [21], [26], this approach is cumbersome and most likely\nhard to do for most users since they have no clear idea how to set\nappropriate weights for different attributes. Furthermore, the user-setting\n-weight approach is not applicable for categorical attributes.\nIn this paper, we tackle the many-query-result problem for Web\ndatabases by proposing an automatic ranking method, QRRE\n(Query Result Ranking for E-commerce), which can rank the query\nresults from an E-commerce Web database without any user\nfeedback. 
We focus specifically on E-commerce Web databases\nbecause they comprise a large part of today's online databases. In\naddition, most E-commerce customers are ordinary users who may\nnot know how to precisely express their interests by formulating\ndatabase queries. The carDB Web database is used in the following\nexamples to illustrate the intuitions on which QRRE is based.\nExample 1: Consider a used Car-selling Web database D with a\nsingle table carDB in which the car instances are stored as tuples\nwith attributes: Make, Model, Year, Price, Mileage and Location.\nEach tuple t\ni\nin D represents a used car for sale.\nGiven a tuple t\ni\nin the query result T\nq\nfor a query q that is submitted\nby a buyer, we assign a ranking score to t\ni\n, based on its attribute\nvalues, which indicates its desirableness, from an E-commerce\nviewpoint, to the buyer. For instance, it is obvious that a luxury,\nnew and cheap car is globally popular and desired in the used car\nmarket. However, sometimes the desired attribute values conflict\nwith each other. For example, a new luxury car with low mileage is\nunlikely to be cheap. Hence, we need to decide which attributes are\nmore important for a buyer. Some buyer may care more about the\nmodel of a car, while some other buyer may be more concerned\nabout its price. For each attribute, we use a weight to denote its\nimportance to the user.\nIn this work, we assume that the attributes about which a user cares\nmost are present in the query he/she submits, from which the\nattribute importance can be inferred. We define specified attributes\nto be attributes that are specified in a query and unspecified\nattributes to be attributes that are not specified in a query.\nFurthermore, we also consider that a subset of the unspecified\nattributes, namely, those attributes that are closely correlated to the\nquery, is also important.\nExample 2: Given a query with condition \"Year > 2005\", the query\ncondition suggests that the user wants a relatively new car.\nIntuitively, besides the Year attribute, the user is more concerned\nabout the Mileage than he/she is concerned about the Make and\nLocation, considering that a relatively new car usually has low\nmileage.\nGiven an unspecified attribute A\ni\n, the correlation between A\ni\nand the\nuser query q is evaluated by the difference between the distribution\nof A\ni\n's values over the query results and their distribution over the\nwhole database D. The bigger the difference, the more A\ni\ncorrelates\nto the specified attribute value(s). Consequently, we assign a bigger\nattribute weight to A\ni\n. Example 3 explains our intuition.\nExample 3: Suppose a used car database D contains car instances\nfor which the Year has values 1975 and onwards and D returns a\nsubset d containing the tuples that satisfy the query with condition\n\"Year > 2005\". Intuitively, Mileage values of the tuples in d\ndistribute in a small and dense range with a relatively low average,\nwhile the Mileage values of tuples in D distribute in a large range\nwith a relatively high average. The distribution difference shows a\nclose correlation between the unspecified attribute, namely,\nMileage, and the query \"Year > 2005\".\nBesides the attribute weight, we also assign a preference score to\neach attribute value, including the values of both specified and\nunspecified attributes. In the E-commerce context, we first assume\nthat an expensive product is less preferred than a cheap product if\nother attribute values are equal. 
Hence, we assign a small preference\nscore for a high Price value and a large preference score for a low\nPrice value. We further assume that a non-Price attribute value with\nhigh desirableness, such as a luxury car or a new car, correlates\npositively with a high Price value. Thus, a luxury car is more\nexpensive than a standard car and a new car is usually more\nexpensive than an old car. Based on this assumption, we convert a\nvalue a\ni\nof a non-Price attribute A\ni\nto a Price value p'\nI\nwhere p'\nI\nis\nthe average price of the product for A\ni\n= a\ni\nin the database.\nConsequently, the preference score for a\ni\nwill be large if p'\nI\nis large\nbecause a large Price value denotes a high desirableness for the user.\nFinally, the attribute weight and the value preference score are\ncombined to get the final ranking score for each tuple. The tuples\nwith the largest ranking scores are presented to the user first.\nThe contributions of this paper include the following:\n1. We present a novel approach to rank the tuples in the query\nresults returned by E-commerce Web databases.\n2. We propose a new attribute importance learning approach,\nwhich is domain independent and query adaptive.\n3. We also propose a new attribute-value preference score\nassignment approach for E-commerce Web databases.\nIn the entire ranking process, no user feedback is required.\nThe rest of the paper is organized as follows. Section 2 reviews\nsome related work. Section 3 gives a formal definition of the many-query\n-result problem and presents an overview of QRRE. Section 4\nproposes our attribute importance learning approach while Section 5\npresents our attribute preference score assignment approach.\nExperimental results are reported in Section 6. The paper is\nconcluded in Section 7.\nRELATED WORK\nQuery result ranking has been investigated in information retrieval\nfor a long time. Cosine Similarity with TF-IDF weighting of the\nvector space model [2] and [26], the probabilistic ranking model\n[30] and [31] and the statistical language model [12] have been\nsuccessfully used for ranking purposes. In addition, [10], [11], [14]\nand [15] explore the integration of database and information\nretrieval techniques to rank tuples with text attributes. [1], [5] and\n[17] propose some keyword-query based retrieval techniques for\ndatabases. However, most of these techniques focus on text\nattributes and it is very difficult to apply these techniques to rank\ntuples with categorical or numerical attributes.\nSome recent research addresses the problem of relational query\nresult ranking. In [9], [26], [28] and [33], user relevance feedback is\nemployed to learn the similarity between a result record and the\nquery, which is used to rank the query results in relational\nmultimedia databases. In [21] and [26], the SQL query language is\nextended to allow the user to specify the ranking function according\nto their preference for the attributes. In [18] and [19], users are\nrequired to build profiles so that the query result is ranked according\nto their profile. Compared with the above work, our approach is\nfully automatic and does not require user feedback or other human\ninvolvement.\nIn [1] and [12], two ranking methods have been proposed that take\nadvantage of the links (i.e., associations) among records, such as the\ncitation information between papers. 
Unfortunately, linking\ninformation among records does not exist for most domains.\nThe work that is most similar to ours is the probabilistic information\nretrieval (PIR) model in [8], which addresses the many-query-result\nproblem in a probabilistic framework. In PIR, the ranking score is\ncomposed of two factors: global score, which captures the global\nimportance of unspecified values, and conditional score, which\ncaptures the strength of the dependencies between specified and\nunspecified attribute values. The two scores are combined using a\nprobabilistic approach. Our approach differs from that in [8] in the\nfollowing aspects:\n1. PIR only focuses on point queries, such as \"A\ni\n= a\ni\n\". Hence, both\na query with condition \"Mileage < 5000\" and a query with\ncondition \"Mileage < 2500\" may have to be converted to a\nquery with condition \"Mileage = small\" to be a valid query in\nPIR, which is not reasonable for many cases. In contrast, QRRE\ncan handle both point and range queries.\n2. PIR focuses on the unspecified attributes during query result\nranking while QRRE deals with both specified and unspecified\nattributes. For example, suppose a car with price less than\n$10,000 is labeled as a \"cheap\" car. For a query \"Price < 10000\",\nPIR will only consider the value difference for non-Price\nattributes among tuples and ignore the price difference, which is\nusually important for a poor buyer. On the contrary, QRRE will\nconsider the value difference for all attributes.\n3. A workload containing past user queries is required by PIR in\norder to learn the dependency between the specified and\nunspecified attribute values, which is unavailable for new online\ndatabases, while QRRE does not require such a workload.\nThe experimental results in Section 6 show that QRRE produces a\nbetter quality ranking than does PIR.\nThe attribute-importance learning problem was studied in [23] and\n[24], in which attribute importance is learned according to the\nattribute dependencies. In [23], a Bayesian network is built to\ndiscover the dependencies among attributes. The root attribute is the\nmost important while the leaf attributes are less important.\n576\nIn [24], an attribute dependency graph is built to discover the\nattribute dependencies. Both of these methods learn the attribute\nimportance based on some pre-extracted data and their result is\ninvariant to the user queries. Furthermore, both methods can only\ndetermine the attribute importance sequence. They are incapable of\ngiving a specific value to show how important each attribute is. In\ncontrast, the attribute-importance learning method presented in this\npaper can be adapted to the user's query and thus can be tailored to\ntake into account the desire of different users, since each attribute is\nassigned a weight that denotes its importance for the user. To our\nknowledge, this is the first work that generates attribute weights that\nare adaptive to the query the user submitted.\n\nQUERY RESULT RANKING\nIn this section, we first define the many-query-result problem and\nthen present an overview of QRRE.\n3.1 Problem Formulation\nConsider an autonomous Web database D with attributes A={A\n1\n, A\n2\n,\n..., A\nm\n} and a selection query q over D with a conjunctive selection\ncondition that may include point queries, such as \"A\ni\n= a\ni\n\", or range\nqueries, such as \"a\ni1\n< A\ni\n< a\ni2\n\". Let T={t\n1\n, t\n2\n, ..., t\nn\n} be the set of\nresult tuples returned by D for the query q. 
In many cases, if q is not\na selective query, it will produce a large number of query results\n(i.e., a large T). The goal is to develop a ranking function to rank the\ntuples in T that captures the user's preference, is domain-independent\nand does not require any user feedback.\n3.2 QRRE\nInitially, we focus on E-commerce Web databases because E-commerce\nWeb databases comprise a large proportion of the\ndatabases on the Web. We further assume that each E-commerce\nWeb database has a Price attribute, which we always assume to be\nA\n1\n. The Price attribute A\n1\nplays an intermediate role for all attributes\nduring the attribute preference score assignment.\nExample 4: Consider the tuples in Table 1 that represent an\nexample query result set T. It can be seen that most tuples have their\nown advantages when compared with other tuples. For example, t\n1\n\nis a relatively new car while t\n2\nis a luxury car and t\n3\nis the cheapest\namong all cars. Hence, depending on a user's preferences, different\nrankings may be needed for different users. Assuming that a user\nwould prefer to pay the smallest amount for a car and that all other\nattribute values are equal, then the only certainty is that t\n4\nshould\nalways be ranked after t\n3\nbecause its mileage is higher than t\n3\nwhile\nit is more expensive than t\n3\n.\nTable 1. Examples of used car tuples.\nYear Make Model Mileage Price Location\nt\n1\n\n2005 Toyota Corolla 16995 26700 Seattle\nt\n2\n\n2002 Mercedes-Benz\nG500 47900 39825 Seattle\nt\n3\n\n2002 Nissan 350Z 26850 17448 Seattle\nt\n4\n\n2002 Nissan 350Z 26985 18128 Seattle\nAccording to Example 4, two problems need to be solved when we\nassign a ranking score for a tuple t\ni\n={t\ni1\n, t\ni2\n, ..., t\nim\n} in the query\nresult T:\n1. How can we surmise how much a user cares about an attribute\nA\nj\nand how should we assign a suitable weight w\nj\nfor the\nattribute(s) A\nj\nto reflect its (their) importance to the user?\n2. How do we assign a preference score v\nij\nfor an attribute value t\nij\n?\nFor example, when assigning the score for the attribute value \"Year\n= 2005\" in t\n1\n, should the score be larger than the score assigned for\nattribute value \"Year = 2002\" in t\n2\nand how much larger is\nreasonable? The first problem will be discussed in Section 4. The\nsecond problem will be discussed in Section 5.\nHaving assigned a preference score v\nij\n(1jm) to each attribute-value\nof t\ni\nand a weight w\nj\nto the attribute A\nj\n, the value preference\nscores v\nij\nare summed to obtain the ranking score s\ni\nfor t\ni\nto reflect\nthe attribute importance for the user. That is:\n\n=\n=\nm\nj\nij\nj\ni\nv\nw\ns\n1\n\nThe overall architecture of a system employing QRRE is shown in\nFigure 1. Such a system includes two components: pre-processing\ncomponent and online processing component. The pre-processing\ncomponent collects statistics about the Web database D using a set\n\nFigure 1: Architecture of a system employing Query Result Ranking for E-commerce (QRRE).\n577\nof selected queries. Two kinds of histograms are built in the preprocessing\nstep: single-attribute histograms and bi-attribute\nhistograms. A single-attribute histogram is built for each attribute A\nj\n.\nA bi-attribute histogram is built for each non-Price attribute (i.e., A\nj\n\nin which i>1) using the Price attribute A\n1\n.\nThe online-processing component ranks the query results given the\nuser query q. 
After getting the query results T from the Web\ndatabase D for q, a weight is assigned for each attribute by\ncomparing its data distribution in D and in the query results T. At\nthe same time, the preference score for each attribute value in the\nquery result is determined using the information from the bi-attribute\nhistograms. The attribute weights and preference scores are\ncombined to calculate the ranking score for each tuple in the query\nresult. The tuples' ranking scores are sorted and the top K tuples\nwith the largest ranking scores are presented to the user first.\nATTRIBUTE WEIGHT ASSIGNMENT\nIn the real world, different users have different preferences. Some\npeople prefer luxury cars while some people care more about price\nthan anything else. Hence, we need to surmise the user's preference\nwhen we make recommendations to the user as shown by Example\n4 in Section 3. The difficulty of this problem lies in trying to\ndetermine what a user`s preference is (i.e., which attributes are more\nimportant) when no user feedback is provided. To address this\nproblem, we start from the query the user submitted. We assume\nthat the user's preference is reflected in the submitted query and,\nhence, we use the query as a hint for assigning weights to attributes.\nThe following example provides the intuition for our attribute\nweight assignment approach.\nExample 5: Consider the query q with condition \"Year > 2005\",\nwhich denotes that the user prefers a relatively new car. It is\nobvious that the specified attribute Year is important for the user.\nHowever, all the tuples in the query result T satisfy the query\ncondition. Hence, we need to look beyond the specified attribute and\nspeculate further about what the user's preferences may be from the\nspecified attribute. Since the user is interested in cars that are made\nafter 2005, we may speculate that the user cares about the Mileage\nof the car. Considering the distribution of Mileage values in the\ndatabase, cars whose model year is greater than 2005 usually have\na lower mileage when compared to all other cars. In contrast,\nattribute Location is less important for the user and its distribution\nin cars whose model year is greater than 2005 may be similar to the\ndistribution in the entire database.\nAccording to this intuition, an attribute A\nj\nthat correlates closely\nwith the query will be assigned a large weight and vice verse.\nFurthermore, as Example 3 in Section 1 shows, the correlation of A\nj\n\nand the query can be measured by the data distribution difference of\nA\nj\nin D and in T.\nIt should be noted that the specified attribute is not always important,\nespecially when the condition for the specified attribute is not\nselective. For example, for a query with condition \"Year > 1995 and\nMake = BMW\", the specified attribute Year is not important\nbecause almost all tuples in the database satisfy the condition\n\"Year > 1995\" and the Year distribution in D and in T is similar.\nA natural measure of the distribution difference of A\nj\nin D and in T\nis the Kullback-Leibler distance or Kullback-Leibler (KL)\ndivergence [13]. Suppose that A\nj\nis a categorical attribute with value\nset {a\nj1\n, a\nj2\n, ..., a\njk\n}. 
Then the KL-divergence of A_j from D to T is:

    D_{KL}(D || T) = \sum_{l=1}^{k} prob(A_j = a_{jl} | D) \log \frac{prob(A_j = a_{jl} | D)}{prob(A_j = a_{jl} | T)}        (1)

in which prob(A_j = a_jl | D) refers to the probability that A_j = a_jl in D and prob(A_j = a_jl | T) refers to the probability that A_j = a_jl in T. If A_j is a numerical attribute, its value range is first discretized into a few value sets, where each set refers to a category, and then the KL-divergence of A_j is calculated as in (1).
4.1 Histogram Construction
To calculate the KL-divergence in equation (1) we need to obtain the distribution of attribute values over D. The difficulty here is that we are dealing with an autonomous database and we do not have full access to all the data. In [24], the attribute value distribution over a collection of data crawled from D is used to estimate the actual attribute value distribution over D. However, it is highly likely that the distribution of the crawled data differs from that of D because the crawled data may be biased towards the submitted queries.
In this paper, we propose a probing-and-count based method to build a histogram for an attribute over a Web database.^1 We assume that the number of query results is available from D for a given query. After submitting a set of selected queries to D, we can extract the number of query results, instead of the actual query results, to get the attribute value distribution of A_i. An equi-depth histogram [27] is used to represent the attribute value distribution, from which we obtain the probabilities required in Equation (1). The key problem in our histogram construction for A_i is how to generate a set of suitable queries to probe D.
Figure 2 shows the algorithm for building a histogram for attribute A_i. For each attribute A_i, a histogram is built in the pre-processing stage. We assume that one attribute value of A_i is enough to form a query for D. If A_i is a categorical attribute, each category of A_i is used as a query to get its occurrence count (Lines 2-3). If A_i is a numerical attribute, an equi-depth histogram is built for A_i. We first decide the occurrence frequency threshold t for each bucket by dividing |D|, namely the number of tuples in D, by the minimum bucket number n that will be created for a numerical attribute A_i. In our experiments, n is empirically set to 20. Then we probe D using a query with a condition on A_i such that low <= A_i < up and get c, the number of instances in that range (Line 8). If c does not exceed t, a bucket is added for it in H_Di (Line 10) and another query probe is prepared (Line 11). Otherwise, we update the query probe condition on A_i by reducing the size of the bucket (Line 13) and a new iteration begins. The iteration continues until each value in the value range is in a bucket. There are some obvious improvements that could be made to accelerate the histogram construction. They are not described here because histogram construction is not the major focus of this paper. Considering that only single-attribute histograms are constructed, the process should complete quickly.

^1 Although both our histogram construction method and the histogram construction methods in [1] and [5] are probing-based, they have different goals.
The goal in [1] and [5] is to build a histogram that precisely describes the regions on which the queries concentrate, while our purpose is to build a histogram that summarizes the data distribution of D as precisely as possible with a limited number of query probes.

A histogram H_Ti also needs to be built for A_i over T (the result set) to get its probability distribution over T. For each bucket of H_Di, a bucket with the same boundaries is built in H_Ti and its frequency is counted in T.
4.2 Attribute Weight Setting
After getting the histograms of A_i over D and T, the histograms are converted to probability distributions by dividing the frequency in each bucket by the frequency sum of the histogram. That is, the probability distribution of A_i over D, P_Di, has bucket probabilities

    p_{Dk} = \frac{|c_{Dk}|}{|D|}

in which c_Dk is the frequency of the k-th bucket in H_Di. The probability distribution of A_i over T, P_Ti, has bucket probabilities

    p_{Tk} = \frac{|c_{Tk}|}{|T|}

in which c_Tk is the frequency of the k-th bucket in H_Ti.
Next, for the i-th attribute A_i, we assign its importance w_i as

    w_i = \frac{KL(P_{Di}, P_{Ti})}{\sum_{j=1}^{m} KL(P_{Dj}, P_{Tj})}

The attribute weight assignment is performed not only on the unspecified attributes, but also on the specified attributes. If a specified attribute appears in a point condition, its attribute weight will be the same for all tuples in the query result. If a specified attribute appears in a range condition, its attribute weight will differ across the tuples in the query result. Example 6 illustrates this point.
Example 6: Consider a query q with condition "Year = 2004 and Price < 10000". In q, since the specified attribute Year appears in a point condition, the attribute weight assigned to it is useless because all the query results have the same value for Year. On the other hand, since the attribute Price appears in a range condition, the price of different tuples is an important factor to consider during ranking.
4.3 Examples of Attribute Weight Assignment
In our experiments, we found that the attribute weight assignment was intuitive and reasonable for the given queries. Table 2 shows the attribute weights assigned to different attributes for two queries in our experiments on carDB. Given a query with condition "Mileage < 20000", which means that the user prefers a new car, the attribute Mileage is, as expected, assigned a large weight because it is a specified attribute, and the attribute Year is assigned a large weight too. The attribute Model is assigned a large weight because a new car usually has a model that appeared recently. In contrast, consider the query with condition "Make = BMW & Mileage < 100000". The sub-condition "Mileage < 100000" possesses very weak selective capability because almost all tuples in the database satisfy it. The buyer is actually just concerned about the Make and the Model of the car.
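Before returning to this example and Table 2, here is a minimal sketch of the weight computation of Sections 4.1 and 4.2, assuming bucket-aligned histograms for D and T have already been built and converted to probabilities; the function names and the smoothing constant eps are illustrative assumptions, not part of the paper.

```python
import math
from typing import Dict, List

def kl_divergence(p: List[float], q: List[float], eps: float = 1e-9) -> float:
    """KL divergence from the D-distribution p to the T-distribution q over aligned buckets."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q) if pi > 0)

def attribute_weights(dist_D: Dict[str, List[float]],
                      dist_T: Dict[str, List[float]]) -> Dict[str, float]:
    """w_i = KL(P_Di, P_Ti) / sum_j KL(P_Dj, P_Tj)."""
    kl = {attr: kl_divergence(dist_D[attr], dist_T[attr]) for attr in dist_D}
    total = sum(kl.values()) or 1.0      # avoid division by zero when all divergences are 0
    return {attr: v / total for attr, v in kl.items()}
```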
As expected, the attributes Make and Model are assigned large weights, while Year and Mileage are no longer assigned large weights.

Table 2: Attribute weight assignments for two queries.

            Mileage < 20000    Make = BMW & Mileage < 100000
  Year      0.222              0.015
  Make      0.017              0.408
  Model     0.181              0.408
  Price     0.045              0.120
  Mileage   0.533              0.04
  Location  0.0003             0.002

ATTRIBUTE PREFERENCE SCORE ASSIGNMENT
In addition to the attributes themselves, different values of an attribute may have different attractions for the buyer. For example, a car with a low price is obviously more attractive than a more expensive one if the other attribute values are the same. Similarly, a car with low mileage is also obviously more desirable. Given an attribute value, the goal of the attribute preference score assignment module is to assign a preference score to it that reflects its desirableness for the buyer. To facilitate the combination of scores of different attribute values, all scores assigned to attribute values lie in [0, 1].
Instead of requiring human involvement for attribute value assignment, given a normal E-commerce context, we make the following two intuitive assumptions:
1. Price assumption: A product with a lower price is always more desired by buyers than a product with a higher price if the other attributes of the two products have the same values. For example, if all other attribute values are the same, a cheaper car is preferred over a more expensive car.
2. Non-Price assumption: A non-Price attribute value with higher desirableness for the user corresponds to a higher price. For example, a new car, which most buyers prefer, is usually more expensive than an old car. Likewise, a luxury car is usually more expensive than an ordinary car.

Input:  Attribute A_i and its value range
        Web database D with the total number of tuples |D|
        Minimum bucket number n
Output: A single-attribute histogram H_Di for A_i
Method:
1.  If A_i is a categorical attribute
2.    For each category a_ij of A_i, probe D using a query with condition "A_i = a_ij" and get its occurrence count c
3.      Add a bucket (a_ij, c) into H_Di
4.  If A_i is a numerical attribute with value range [a_low, a_up)
5.    t = |D| / n
6.    low = a_low, up = a_up
7.    Do
8.      Probe D with a query with condition "low <= A_i < up" and get its occurrence count c
9.      if c <= t
10.       Add a bucket (low, up, c) into H_Di
11.       low = up, up = a_up
12.     else
13.       up = low + (up - low) / 2
14.   While low < a_up
15.   Return H_Di

Figure 2: Probing-based histogram construction algorithm.

With the above two assumptions, we divide the attributes into two sets: the Price attribute set, which only includes the attribute Price, and the non-Price attribute set, which includes all attributes except Price. The two sets of attributes are handled in different ways.
According to the Price assumption, we assign a large score to a low price and a small score to a high price. To avoid requiring human involvement to assign a suitable score to a Price value, the Price distribution in D is used to assign the scores. Given a Price value t, a score v_t is assigned to it as the percentage of tuples in D whose Price value is bigger than t:

    v_t = \frac{S_t}{|D|}

in which S_t denotes the number of tuples whose Price value is bigger than t.
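The algorithm in Figure 3 (described next) formalizes this computation over the Price histogram; the following is an illustrative Python sketch of the same idea, assuming (low, up, count) buckets from the equi-depth Price histogram and a uniform price distribution within each bucket. Names are assumptions made for the example.

```python
from typing import List, Tuple

Bucket = Tuple[float, float, int]   # (low, up, count); buckets sorted, non-overlapping

def price_score(hist: List[Bucket], t: float) -> float:
    """v_t = S_t / |D|, where S_t counts tuples whose Price is larger than t."""
    total = sum(c for _, _, c in hist)
    s_t = 0.0
    for low, up, c in hist:
        if low > t:                          # whole bucket lies above t
            s_t += c
        elif low <= t < up:                  # t falls inside this bucket: uniform assumption
            s_t += c * (up - t) / (up - low)
    return s_t / total if total else 0.0
```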
In our experiments, the histogram for the Price attribute A_1, whose construction method is described in Section 4, is used for the Price preference score assignment.
Figure 3 shows the algorithm used to assign a score v to a Price value t using the Price histogram. Given the Price histogram H_D1, the frequency sum is first calculated (Line 1). Then we count the number S_t of tuples whose Price value is bigger than t. For each bucket in H_D1, if the lower boundary of the bucket is bigger than t, all the tuples in this bucket have a Price value bigger than t and the frequency of the bucket is added to S_t (Line 4). If t falls within the boundaries of the bucket, we assume that Price is uniformly distributed within the bucket and the corresponding fraction of the bucket's frequency is added to S_t (Line 5). If the upper boundary of the bucket is smaller than t, all the tuples in this bucket have a Price value lower than t and the bucket is ignored. Finally the score is obtained by dividing S_t by the frequency sum.
For a value a_i of a non-Price attribute A_i, the difficulty of assigning it a score is two-fold:
1. How do we make the attribute preference score assignment adaptive to different attributes? Our goal is to have an intuitive assignment for each attribute without human involvement. The difficulty is that different attributes can have totally different attribute values.
2. How do we establish the correspondence between different attributes? For example, how can we know that the desirableness of "Year = 2005" is the same as the desirableness of "Mileage = 5000" for most users?
We solve the problem in two steps. First, based on the non-Price assumption, we convert a non-Price value a_i to a Price value t_i:
- If A_i is a categorical attribute, t_i is the average price of all tuples in D such that A_i = a_i.
- If A_i is a numerical attribute, t_i is the average price of all tuples in D such that a_i - d < A_i < a_i + d, where d is used to prevent too few tuples (or no tuple) being collected if we simply set A_i = a_i.
In our experiments, a bi-attribute histogram (A_1, A_i) is used when a_i is converted to a Price value. The bi-attribute histograms are built in the pre-processing step in a way similar to the histogram construction described in Section 4.
Second, after converting all non-Price attribute values to Price values, we use a uniform mechanism to assign them a preference score. We assign a large score to a large converted Price value, according to the non-Price assumption. That is, given a converted Price value t_i, a preference score v_i is assigned to it as the percentage of Price values in D that are smaller than t_i. The algorithm for the converted Price preference score assignment can be easily adapted from the algorithm in Figure 3.
5.1 Examples of Attribute Preference Score Assignment
Table 3 shows the average Price and assigned score for different Make values in the carDB database used in our experiments. It can be seen that the prices for different car makes fit our intuition well. Luxury cars have a higher average price than standard cars and, consequently, are assigned a larger preference score.
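To illustrate the two-step conversion just described for a categorical non-Price attribute, here is a small sketch; the precomputed value-to-average-Price mapping (which would normally be derived from the bi-attribute histogram) and all identifiers are assumptions made for the example, not artifacts of the paper.

```python
from typing import Dict, List, Tuple

def categorical_value_score(value: str,
                            avg_price_by_value: Dict[str, float],
                            price_hist: List[Tuple[float, float, int]]) -> float:
    """Convert a categorical value (e.g. a Make) to its average Price t_i in D, then
    score it as the fraction of Price values in D that are smaller than t_i
    (non-Price assumption: a higher implied price means a more desirable value)."""
    t_i = avg_price_by_value[value]              # hypothetical precomputed mapping
    total = sum(c for _, _, c in price_hist)
    below = 0.0
    for low, up, c in price_hist:
        if up <= t_i:                            # whole bucket lies below t_i
            below += c
        elif low < t_i < up:                     # uniform distribution inside the bucket
            below += c * (t_i - low) / (up - low)
    return below / total if total else 0.0
```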
We found that the attribute preference assignments for the other attributes in carDB are intuitive too.

Table 3: Make-Price-Score correspondence.

  Make         Average Price   Score
  Mitsubishi   12899           0.183
  Volkswagen   16001           0.372
  Honda        16175           0.373
  Toyota       16585           0.387
  Acura        20875           0.599
  BMW          33596           0.893
  Benz         37930           0.923

Input:  Price histogram H_D1 = {(c_1, low_1, up_1), ..., (c_m, low_m, up_m)}
        Price value t
Output: Price score v
Method:
1.  sum = \sum_i c_i
2.  S_t = 0
3.  For i = 1..m
4.    if (low_i > t) S_t = S_t + c_i
5.    if (low_i < t < up_i) S_t = S_t + c_i * (up_i - t) / (up_i - low_i)
6.  v = S_t / sum
7.  return v

Figure 3: Price value score assignment algorithm.

EXPERIMENTS
In this section, we describe our experiments, report the QRRE experimental results and compare them with some related work. We first introduce the databases we used and the related work used for comparison. Then we informally give some examples of query result ranking to provide some intuition for our experiments. Next, a more formal evaluation of the ranking results is presented. Finally, the running time statistics are presented.
6.1 Experimental Setup
To evaluate how well different ranking approaches capture a user's preference, five postgraduate students were invited to participate in the experiments and behave as buyers using the E-commerce databases.
6.1.1 Databases
For our evaluation, we set up two databases from two E-commerce domains. The first database is a used car database carDB(Make, Model, Year, Price, Mileage, Location) containing 100,000 tuples extracted from Yahoo! Autos. The attributes Make, Model, Year and Location are categorical attributes and the attributes Price and Mileage are numerical attributes. The second database is a real estate database houseDB(City, Location, Bedrooms, Bathrooms, Sq Ft, Price) containing 20,000 tuples extracted from Yahoo! Real Estate. The attributes City, Location, Bedrooms and Bathrooms are categorical attributes and the attributes Sq Ft and Price are numerical attributes. To simulate the Web databases for our experiments we used MySQL on a P4 3.2-GHz PC with 1GB of RAM. We implemented all algorithms in Java and connected to the RDBMS by DAO.
6.1.2 Implemented Algorithms
Besides QRRE described above, we implemented two other ranking methods, described briefly below, to compare with QRRE.
RANDOM ranking model: In the RANDOM ranking model, the tuples in the query result are presented to the user in a random order. The RANDOM model provides a baseline showing how much better than a random method QRRE can capture user behavior.
Probabilistic Information Retrieval (PIR) ranking model: A probabilistic information retrieval (PIR) technique, which has been successfully used in the Information Retrieval field, is used in [8] for ranking query results. This technique addresses the same problem as QRRE. In PIR, given a tuple t, its ranking score is given by the following equation:

    Score(t) = \prod_{y \in Y} \frac{p(y | W)}{p(y | D)} \cdot \prod_{x \in X} \prod_{y \in Y} \frac{p(x | y, W)}{p(x | y, D)}

in which X is the set of specified attributes, Y is the set of unspecified attributes, W is a past query workload and p denotes a probability.
As mentioned in Section 2, the PIR work focuses on point queries and does not consider range queries.
Therefore, when applying the PIR ranking model, the numerical attributes Price and Mileage in carDB and Sq Ft and Price in houseDB are discretized into meaningful ranges as categories, which in reality requires a domain expert.
In PIR, a workload is required to obtain the conditional probabilities used to measure the correlation between the specified attribute values present in the query and the unspecified attributes. In our experiments, we asked 5 subjects to behave as different kinds of buyers, such as rich people, clerks, students, women, etc., and to post queries against the databases. We collected 200 queries for each database and these queries are used as the workload W for the PIR model.
6.2 Examples of Query Result Ranking
When we examine the query result rankings, we find that the ranking results of both QRRE and PIR are much more reasonable and intuitive than those of RANDOM. However, there are some interesting examples showing that the QRRE rankings are superior to those of PIR. We found that the ranking results of QRRE are more reasonable than those of PIR in several ways:
- QRRE can discover an assumption that is implicitly held by a buyer. For example, for a query with condition "Mileage < 5000", QRRE ranks cars with Year = 2006 as the top recommendation. Intuitively, this is because a 2006 model year car usually has lower mileage, and this is what the user is looking for. However, PIR is unable to identify the importance of Year because most users assume that Mileage itself is enough to represent their preference and, consequently, the relationship between Year and Mileage is not reflected in the workload.
- In PIR, given a numerical attribute, its value range needs to be discretized into meaningful categories, and the values within a category are treated as identical during ranking. For example, if we assign a car with "Mileage < 10000" to a category "Mileage = small", then PIR will treat "Mileage = 2000" the same as "Mileage = 9000", which is obviously unreasonable. In contrast, QRRE will identify that "Mileage = 2000" is more desirable than "Mileage = 9000".
- QRRE considers the value differences of the specified attributes among the tuples in the query result, while PIR ignores these differences. For example, for a query with condition "Make = Mercedes-Benz and Model = ML500 and Year > 2003", QRRE usually ranks the most recently made cars first. However, PIR does not take the Year differences among the query result records into consideration during ranking.
Likewise, QRRE often produces a better ranking than PIR for houseDB. The evaluation in the following section confirms these observations.
6.3 Ranking Evaluation
We now present a more formal evaluation of the query result ranking quality. A survey was conducted to show how well each ranking algorithm captures the user's preference. We evaluate the query results in two ways: average precision and user preference ranking.
6.3.1 Average Precision
In this experiment, each subject was asked to submit three queries for carDB and one query for houseDB according to their preferences. Each query had on average 2.2 specified attributes for carDB and 2.4 specified attributes for houseDB. We found that every attribute of carDB and houseDB was specified at least once in the collected queries. On average, for carDB, each query had a query result of 686 tuples, with the maximum being 4,213 tuples and the minimum 116 tuples.
It can be seen that the many-query-result problem is a common problem in practice. Each query for houseDB had a query result of 166 tuples on average.
Since it is not practical to ask the subjects to rank the whole query result of a query, we adopt the following strategy to compare the performance of the different ranking approaches. For each implemented ranking algorithm, we collected the first 10 tuples that it recommended; hence, thirty tuples were collected in total. If there was overlap among the tuples recommended by different algorithms, we extracted more tuples using the RANDOM algorithm so that thirty unique tuples were collected in total. Next, for each of the fifteen queries, each subject was asked to select and rank, as the relevant tuples, the 10 tuples that they preferred most from the thirty unique tuples collected for that query. During ranking, they were asked to behave like real buyers and rank the records according to their preferences.

Table 4: Average precision of the different ranking methods for carDB.

           QRRE    PIR    RANDOM
  q1       0.72    0.52   0.08
  q2       0.62    0.62   0.06
  q3       0.72    0.22   0.06
  q4       0.52    0.64   0.04
  q5       0.84    0.78   0.06
  q6       0.68    0.36   0.04
  q7       0.92    0.46   0.02
  q8       0.88    0.64   0.06
  q9       0.78    0.62   0.04
  q10      0.74    0.64   0.04
  q11      0.56    0.66   0.06
  q12      0.86    0.76   0.08
  q13      0.84    0.36   0.02
  q14      0.58    0.38   0.04
  q15      0.76    0.66   0.06
  Average  0.735   0.555  0.048

We use the Precision/Recall metrics to evaluate how well the user's preference is captured by the different ranking algorithms. Precision is the ratio obtained by dividing the number of retrieved tuples that are relevant by the total number of retrieved tuples. Recall is the ratio obtained by dividing the number of retrieved tuples that are relevant by the total number of relevant tuples. In our experiments, both the number of relevant tuples and the number of retrieved tuples are 10, which makes Precision and Recall equal. Table 4 shows the average precision of the different ranking methods for each query. It can be seen that both QRRE and PIR consistently have a higher precision than RANDOM. For 11 queries out of 15, the precision of QRRE is higher than that of PIR. The precision of QRRE is equal to that of PIR for two queries and is lower than that of PIR for the remaining two queries. QRRE's average precision is 0.18 higher than that of PIR. QRRE has a precision higher than 0.5 for every query while PIR has a precision as low as 0.22 for q3. It should be noted that there is some overlap between the top-10 ranked results of QRRE and those of PIR for most queries. Figure 4 and Figure 5 show the average precision of the three ranking methods graphically for carDB and houseDB, respectively.

Figure 4: Average precision of the different ranking methods for carDB.

Figure 5: Average precision of the different ranking methods for houseDB.

6.3.2 User Preference Ranking
In this experiment, 10 queries were collected from the 5 subjects for carDB and 5 queries were collected for houseDB.
After getting the query results, they were ranked using the three ranking methods. The ranking results were then shown to the subjects so that they could select which result they liked best.
Table 5 and Table 6 show the user preference ranking (UPR) of the different ranking methods for each query for carDB and houseDB, respectively. It can be seen that again both QRRE and PIR greatly outperform RANDOM. In most cases, the subjects preferred the ranking results of QRRE to those of PIR.

Table 5: UPR of the different ranking methods for carDB.

           QRRE   PIR    RANDOM
  q1       0.8    0.2    0
  q2       1      0      0
  q3       0.6    0.4    0
  q4       0.4    0.6    0
  q5       0.4    0.6    0
  q6       0.8    0.2    0
  q7       1      0      0
  q8       0.6    0.2    0.2
  q9       0.8    0.2    0
  q10      0.8    0.2    0
  Average  0.72   0.26   0.02

Table 6: UPR of the different ranking methods for houseDB.

           QRRE   PIR    RANDOM
  q1       0.4    0.4    0.2
  q2       0.8    0.2    0
  q3       1      0      0
  q4       0.4    0.6    0
  q5       0.6    0.4    0
  Average  0.66   0.32   0.02

While these preliminary experiments indicate that QRRE is promising and better than the existing work, a much larger scale user study is necessary to conclusively establish this finding.
6.4 Performance Report
Using QRRE, histograms need to be constructed before the query results can be ranked. The histogram construction time depends on the number of buckets and on the time needed to query the Web database to get the number of occurrences for each bucket. However, in most cases the histogram does not change much over time and therefore needs to be constructed only once in a given time period.
The query result ranking in the online processing part includes four modules: the attribute weight assignment module, the attribute-value preference score assignment module, the ranking score calculation module and the ranking score sorting module. Each of the first three modules has a time complexity of O(n) once the histograms are constructed, where n is the number of query results, and the ranking score sorting module has a time complexity of O(n log(n)). Hence, the overall time complexity of the online processing stage is O(n log(n)).
Figure 6 shows the online execution time of the queries over carDB as a function of the number of tuples in the query result. It can be seen that the execution time of QRRE grows almost linearly with the number of tuples in the query result. This is because ranking score sorting is fairly quick even for a large amount of data, so most of the running time is spent in the first three modules.

Figure 6: Execution times for different numbers of query results for carDB.

CONCLUSION
In this paper, a novel automated ranking approach for the many-query-result problem in E-commerce is proposed. Starting from the user query, we assume that the specified attributes are important to the user. We also assume that the attributes that are highly correlated with the query are also important to the user. We assign a weight to each attribute according to its importance to the user. Then, for each value in each tuple of the query result, a preference score is assigned according to its desirableness in the E-commerce context, where users are assumed to prefer products with lower prices. All preference scores are combined according to the attribute weights. No domain knowledge or user feedback is required in the whole process.
Preliminary experimental results indicate that QRRE captures the user's preference fairly well and better than existing work.
We acknowledge the following shortcomings of our approach, which will be the focus of our future research. First, we do not deal with string attributes, such as book titles or the comments for a house, contained in many Web databases. It would be extremely useful to find a method to incorporate string attributes into QRRE. Second, QRRE has only been evaluated on small-scale datasets. We realize that a large, comprehensive benchmark should be built to extensively evaluate a query result ranking system, both for QRRE and for future research. Finally, QRRE has been specifically tailored for E-commerce Web databases. It would be interesting to extend QRRE to also deal with non-E-commerce Web databases.
ACKNOWLEDGMENTS
This research was supported by the Research Grants Council of Hong Kong under grant HKUST6172/04E.
REFERENCES
[1] A. Aboulnaga and S. Chaudhuri. "Self-tuning Histograms: Building Histograms Without Looking at Data," Proc. of the ACM SIGMOD Conf., 181-192, 1999.
[2] S. Agrawal, S. Chaudhuri and G. Das. "DBXplorer: A System for Keyword Based Search over Relational Databases," Proc. of the 18th Intl. Conf. on Data Engineering, 5-16, 2002.
[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval, Addison-Wesley, 1999.
[4] A. Balmin, V. Hristidis and Y. Papakonstantinou. "ObjectRank: Authority-Based Keyword Search in Databases," Proc. of the 30th Intl. Conf. on Very Large Databases, 564-575, 2004.
[5] G. Bhalotia, C. Nakhe, A. Hulgeri, S. Chakrabarti and S. Sudarshan. "Keyword Searching and Browsing in Databases using BANKS," Proc. of the 18th Intl. Conf. on Data Engineering, 431-440, 2002.
[6] N. Bruno, S. Chaudhuri and L. Gravano. "STHoles: A Multidimensional Workload-aware Histogram," Proc. of the ACM SIGMOD Conf., 211-222, 2001.
[7] K. Chakrabarti, S. Chaudhuri and S. Hwang. "Automatic Categorization of Query Results," Proc. of the ACM SIGMOD Conf., 755-766, 2004.
[8] S. Chaudhuri, G. Das, V. Hristidis and G. Weikum. "Probabilistic Ranking of Database Query Results," Proc. of the Intl. Conf. on Very Large Databases, 888-899, 2004.
[9] K. Chakrabarti, K. Porkaew and S. Mehrotra. "Efficient Query Refinement in Multimedia Databases," Proc. of the 16th Intl. Conf. on Data Engineering, 196, 2000.
[10] W. Cohen. "Integration of Heterogeneous Databases Without Common Domains Using Queries Based on Textual Similarity," Proc. of the ACM SIGMOD Conf., 201-212, 1998.
[11] W. Cohen. "Providing Database-like Access to the Web Using Queries Based on Textual Similarity," Proc. of the ACM SIGMOD Conf., 558-560, 1998.
[12] W.B. Croft and J. Lafferty. Language Modeling for Information Retrieval. Kluwer, 2003.
[13] R.O. Duda, P.E. Hart and D.G. Stork. Pattern Classification. John Wiley & Sons, USA, 2001.
[14] N. Fuhr. "A Probabilistic Framework for Vague Queries and Imprecise Information in Databases," Proc. of the 16th Intl. Conf. on Very Large Databases, 696-707, 1990.
[15] N. Fuhr. "A Probabilistic Relational Model for the Integration of IR and Databases," Proc. of the ACM SIGIR Conf., 309-317, 1993.
[16] F. Geerts, H. Mannila and E. Terzi. "Relational Link-based Ranking," Proc. of the 30th Intl. Conf. on Very Large Databases, 552-563, 2004.
[17] V. Hristidis and Y. Papakonstantinou. "DISCOVER: Keyword Search in Relational Databases," Proc.
of the 28th Intl. Conf. on Very Large Databases, 670-681, 2002.
[18] G. Koutrika and Y.E. Ioannidis. "Personalization of Queries in Database Systems," Proc. of the 20th Intl. Conf. on Data Engineering, 597-608, 2004.
[19] G. Koutrika and Y.E. Ioannidis. "Constrained Optimalities in Query Personalization," Proc. of the ACM SIGMOD Conf., 73-84, 2005.
[20] Y.E. Ioannidis. "The History of Histograms (abridged)," Proc. of the 29th Intl. Conf. on Very Large Databases, 19-30, 2003.
[21] W. Kießling. "Foundations of Preferences in Database Systems," Proc. of the 28th Intl. Conf. on Very Large Databases, 311-322, 2002.
[22] R. Kooi. The Optimization of Queries in Relational Databases. PhD Thesis, Case Western Reserve University, 1980.
[23] I. Muslea and T. Lee. "Online Query Relaxation via Bayesian Causal Structures Discovery," Proc. of the AAAI Conf., 831-836, 2005.
[24] U. Nambiar and S. Kambhampati. "Answering Imprecise Queries over Autonomous Web Databases," Proc. of the 22nd Intl. Conf. on Data Engineering, 45, 2006.
[25] Z. Nazeri, E. Bloedorn and P. Ostwald. "Experiences in Mining Aviation Safety Data," Proc. of the ACM SIGMOD Conf., 562-566, 2001.
[26] M. Ortega-Binderberger, K. Chakrabarti and S. Mehrotra. "An Approach to Integrating Query Refinement in SQL," Proc. of the Intl. Conf. on Extending Data Base Technology, 15-33, 2002.
[27] G. Piatetsky-Shapiro and C. Connell. "Accurate Estimation of the Number of Tuples Satisfying a Condition," Proc. of the ACM SIGMOD Conf., 256-276, 1984.
[28] Y. Rui, T.S. Huang and S. Mehrotra. "Content-Based Image Retrieval with Relevance Feedback in MARS," Proc. of the IEEE Intl. Conf. on Image Processing, 815-818, 1997.
[29] G. Salton, A. Wong and C.S. Yang. "A Vector Space Model for Information Retrieval," Communications of the ACM 18(11), 613-620, 1975.
[30] K. Sparck Jones, S. Walker and S.E. Robertson. "A Probabilistic Model of Information Retrieval: Development and Comparative Experiments - Part 1," Inf. Process. Management 36(6), 779-808, 2000.
[31] K. Sparck Jones, S. Walker and S.E. Robertson. "A Probabilistic Model of Information Retrieval: Development and Comparative Experiments - Part 2," Inf. Process. Management 36(6), 809-840, 2000.
[32] E.M. Voorhees. "The TREC-8 Question Answering Track Report," Proc. of the 8th Text Retrieval Conf., 1999.
[33] L. Wu, C. Faloutsos, K. Sycara and T. Payne. "FALCON: Feedback Adaptive Loop for Content-Based Retrieval," Proc. of the 26th Intl. Conf. on Very Large Databases, 297-306, 2000.", "keywords": "many query result problem;rank the query results;query result ranking;QRRE;algorithms;experimentation;attribute value;Attribute weight assignment;Query result ranking;attribute preference;design;PIR;e-commerce web databases;human factors;E-commerce"} {"name": "161", "title": "Query Type Classification for Web Document Retrieval", "abstract": "The heterogeneous Web exacerbates IR problems and short user queries make them worse. The contents of web documents are not enough to find good answer documents. Link information and URL information compensate for the insufficiencies of content information. However, a static combination of multiple evidences may lower the retrieval performance. We need different strategies to find target documents according to the query type. We can classify user queries into three categories: the topic relevance task, the homepage finding task, and the service finding task.
In this paper, a user query classification scheme is proposed. This scheme uses the difference of distribution, mutual information, the usage rate as anchor texts, and POS information for the classification. After we classify a user query, we apply different algorithms and information to obtain better results. For the topic relevance task, we emphasize content information; for the homepage finding task, we emphasize Link information and URL information. We obtained the best performance when our proposed classification method was used with the OKAPI scoring algorithm.", "fulltext": "INTRODUCTION
The Web is rich with various sources of information. It contains the contents of documents, web directories, multimedia data, user profiles and so on. The massive and heterogeneous web document collections, as well as the unpredictable querying behaviors of typical web searchers, exacerbate Information Retrieval (IR) problems. Retrieval approaches based on a single source of evidence suffer from weaknesses that can hurt the retrieval performance in certain situations [5]. For example, content-based IR approaches have difficulty dealing with the diversity in vocabulary and quality of web documents, while link-based approaches can suffer from an incomplete or noisy link structure. Combining multiple sources of evidence compensates for the weakness of any single source [17]. Fusion IR studies have repeatedly shown that combining multiple sources of evidence can improve retrieval performance [5][17].
However, previous studies did not consider the user query when combining evidences [5][7][10][17]. Not only the documents on the Web but also users' queries are diverse. For example, for the user query `Mutual Information', if we rely on link information too heavily, a well-known site that has `mutual funds' and `information' as index terms gets a higher rank. For the user query `Britney's Fan Club', if we rely on content information too heavily, Yahoo or Lycos web directory pages get a higher rank instead of Britney's fan club site. As these examples show, combining content information and link information is not always good. We have to use different strategies to meet the need of the user. User queries can be classified into three categories according to their intent [4]:
- topic relevance task (informational)
- homepage finding task (navigational)
- service finding task (transactional)
The topic relevance task is a traditional ad hoc retrieval task where web documents are ranked by decreasing likelihood of meeting the information need provided in a user query [8]. For example, `What is a prime factor?' or `prime factor' is a query of the topic relevance task. The goal of this query is finding the meaning of `prime factor'. The homepage finding task is a known-item task where the goal is to find the homepage (or site entry page) of the site described in a user query. Users are interested in finding a certain site. For example, `Where is the site of John Hopkins Medical Institutions?' or `John Hopkins Medical Institutions' is a query of the homepage finding task. The goal of this query is finding the entry page of `John Hopkins Medical Institutions'. The service finding task is a task where the goal is to find web documents that provide the service described in a user query. For example, `Where can I buy concert tickets?'
or `buy concert tickets' is a query of the service finding task. The goal of this query is finding documents where the user can buy concert tickets.
Users may want different documents with the same query. We cannot always tell the class of a query clearly, but we can tell what kind of documents most people want with a given query. In this paper, we calculate the probability that the class of a user query is the topic relevance task or the homepage finding task. Based on this probability, we combine multiple evidences dynamically. In this paper, we consider the topic relevance task and the homepage finding task only. Because the proposed method is based on the difference between databases, the same method can be applied to classify the service finding task.
In this paper, we present a user query classification method and a combining method for each query type. In Section 2, we describe various types of information (Content, Link, and URL information). Section 3 lists the differences between the search tasks and the properties of Content, Link, and URL information. In Section 4, we present the model for query classification. In Section 5, we report our experiments with the proposed model. The conclusion is given in Section 6.
MULTIPLE SOURCES OF INFORMATION
In this section, we explain various sources of information for web document retrieval. There are three types of information: Content information, Link information, and URL information.
2.1 Content Information
There are multiple types of representations for a document. These representations typically contain titles, anchor texts, and main body texts [5]. A title provides the main idea and a brief explanation of a web document. An anchor text provides a description of linked web documents and files. An anchor text often provides a more accurate description of a web document than the document itself.
We usually use tf and df to calculate the relevance of a given web document [1]. tf is the raw frequency of a given term inside a document. It provides one measure of how well that term describes the document contents. df is the number of documents in which the index term appears. The motivation for using an inverse document frequency is that terms that appear in many documents are not very useful for distinguishing a relevant document from a non-relevant one. There are various scoring algorithms that use tf and df. These scoring algorithms include different normalizations and combinations of the two factors, tf and df.
2.2 Link Information
A hyperlink in a web document is a kind of citation. The essential idea is that if page u has a link to page v, then the author of u is implicitly assigning some importance to page v. Since we can represent the Web as a graph, we can use graph theory to help us make a search engine that returns the most important pages first. The PageRank or PR(A) of a page A is given as follows [13]:

    PR(A) = (1 - d) + d (PR(T_1)/C(T_1) + ... + PR(T_n)/C(T_n))        (1)

We assume page A has pages T_1 ... T_n that point to it. The parameter d is a damping factor that can be set between 0 and 1. Also, C(A) is defined as the number of links going out of page A. PR(A) can be calculated using a simple iterative algorithm, and corresponds to the principal eigenvector of the normalized link matrix of the Web [3].
2.3 URL Information
The URL string of a site entry page often contains the name or acronym of the corresponding organization.
Therefore, an obvious way of exploiting URL information is to try to match query terms and URL terms. Additionally, the URLs of site entry pages tend to be higher in a server's directory tree than other web documents, i.e., the number of slashes (`/') in an entry page URL tends to be relatively small. Kraaij et al. suggested 4 types of URLs [16]:
- root: a domain name (e.g. http://trec.nist.gov)
- subroot: a domain name followed by a single directory (e.g. http://trec.nist.gov/pubs/)
- path: a domain name followed by an arbitrarily deep path (e.g. http://trec.nist.gov/pubs/trec9/papers)
- file: anything ending in a filename other than `index.html' (e.g. http://trec.nist.gov/pubs/trec9/t9proc.html)
Kraaij et al. estimated a prior probability (URLprior) of being an entry page on the basis of the URL type, for all URL types t (root, subroot, path, and file).
2.4 Combination of Information
We can combine the results of each search engine or the scores of each measure to get better results. Croft proposed the INQUERY retrieval system, based on the inference network, to combine multiple evidences [5]. The inference network model is a general model for combining information; it is data-level fusion. The model is based on probabilistic updating of the values of nodes in the network, and many retrieval techniques and sources of information can be implemented by configuring the network properly.
Several researchers have experimented with linearly combining the normalized relevance scores s_i(d) given to each document [7][10][16]:

    score(d) = \sum_i \alpha_i s_i(d)        (2)

This requires training the weight \alpha_i given to each input system. For example, we can get a better result by combining content information and URL type information with the following weights [16]:

    score(d) = 0.7 * content + 0.3 * URLprior        (3)

TOPIC RELEVANCE TASK AND HOMEPAGE FINDING TASK
In this section, we show the properties of Content information, Link information, and URL information in each search task. In addition, we propose a method for linearly combining information for each task.
We use the TREC data collection to show the differences between the search tasks. We made a simple search engine that uses a variation of the OKAPI scoring function [15]. Given a query Q, the scoring formula is:

    score = \sum_{t \in (Q \cap D_d)} TF_{d,t} \cdot IDF_t        (4)

    TF_{d,t} = 0.4 + 0.6 \cdot \frac{tf_{d,t}}{tf_{d,t} + 0.5 + 1.5 \cdot \frac{doclen_d}{avg\_doclen}}        (5)

    IDF_t = \log\left(\frac{N + 0.5}{df_t}\right) / \log(N + 1)        (6)

N is the number of documents in the collection, tf_{d,t} is the number of occurrences of an index term t in a document d, and df_t is the number of documents in which t occurs.
We use the data for the web track, the 10-gigabyte WT10g collection [2], distributed by CSIRO [6]. We use the TREC-2001 topic relevance task queries (topics 501-550) for the topic relevance task, and 145 queries for the homepage finding task [8]. For the homepage finding task, NIST found a homepage within WT10g and then composed a query designed to locate it.
We used the anchor text representation (Anchor) and the common content text representation (Common) for indexing. Every document in the anchor text representation has anchor texts and the title as content, and excludes the body text. Consequently, the anchor text representation has brief or main explanations of a document. We used two other evidences for the scoring function besides the OKAPI score: URLprior for URL information and PageRank for Link information.
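For concreteness, here is a minimal sketch of the OKAPI-style content score of equations (4)-(6); the per-document term-frequency dictionary and the df table are assumed inputs chosen for the example, not structures defined in the paper.

```python
import math
from typing import Dict, List

def okapi_score(query_terms: List[str],
                doc_tf: Dict[str, int],        # tf_{d,t} for this document
                doclen: int, avg_doclen: float,
                df: Dict[str, int], n_docs: int) -> float:
    """Variant OKAPI score: sum over t in (Q intersect D_d) of TF_{d,t} * IDF_t."""
    score = 0.0
    for t in query_terms:
        if t not in doc_tf:                    # term not in this document
            continue
        tf = doc_tf[t]
        TF = 0.4 + 0.6 * tf / (tf + 0.5 + 1.5 * (doclen / avg_doclen))
        IDF = math.log((n_docs + 0.5) / df.get(t, 1)) / math.log(n_docs + 1)
        score += TF * IDF
    return score
```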
We linearly interpolated Content information (the OKAPI score), URLprior, and PageRank. We call this interpolation CMB:

    rel(d) = 0.65 * Content Information + 0.25 * URL Information + 0.1 * Link Information        (7)

We used `and' and `sum' operators for matching query terms [1]. The `and' operator means that a result document has all query terms in it. The `sum' operator means that a result document has at least one query term in it.
Table 1 shows the average precision for the topic relevance task and the MRR for the homepage finding task [8]. The first column in Table 1 indicates the method we used for indexing and scoring. For example, `Anchor and CMB' means that we used the anchor text representation for indexing, the `and' operator for query matching, and the OKAPI score, PageRank and URLprior for scoring.

Table 1: Topic Relevance Task vs. Homepage Finding Task

  model             Topic (P_avg)   Homepage (MRR)
  Anchor and        0.031           0.297
  Anchor and CMB    0.031           0.431
  Anchor sum        0.034           0.351
  Anchor sum CMB    0.034           0.583
  Common and        0.131           0.294
  Common and CMB    0.122           0.580
  Common sum        0.182           0.355
  Common sum CMB    0.169           0.673
  MAX               0.226           0.774
  AVG               0.145           0.432

The average precision is defined as the average of the precision obtained at the rank of each relevant document:

    P_{avg} = \frac{1}{|R|} \sum_{d \in R} \frac{|R_{r(d)}|}{r(d)}        (8)

R is the set of all relevant documents and R_{r(d)} is the set of relevant documents with rank r(d) or better. MRR (Mean Reciprocal Rank) is the main evaluation measure for the homepage finding task. MRR is based on the rank of the first correct document (answer_i rank) according to the following formula:

    MRR = \frac{1}{\#queries} \sum_{i=1}^{\#queries} \frac{1}{answer_i\ rank}        (9)

MAX represents the best score of the search engines submitted to TREC-2001. AVG represents the average score of all search engines submitted to TREC-2001.
We got a better result with the common content text representation than with the anchor text representation in the topic relevance task. A title and anchor texts do not carry enough information for the topic relevance task. On the other hand, we could get similar performance with the anchor text representation in the homepage finding task. URL information and Link information are good for the homepage finding task but bad for the topic relevance task. In the topic relevance task, we lost performance by combining URL and Link information.
The query of the topic relevance task usually consists of main keywords that are relevant to some concept or an explanation of what the user wants to know. However, we cannot assume that other people use the same expressions and keywords to explain what a user wants to know. Therefore, we could not get a good result with the `and' operator in the topic relevance task. By contrast, the query of the homepage finding task consists of entity names or proper nouns. Therefore, we could get good results with the `and' operator whenever it returned a result document. However, the MRR of `Anchor and CMB' is lower than that of `Common sum CMB' in the homepage finding task, because the `Anchor and CMB' method did not retrieve any document for 31 queries. To compensate for this sparseness problem, we combined the results of `Anchor and CMB' and `Common sum CMB'. This combined result showed 0.730 in the homepage finding task. When we combined the results of `Anchor and' and `Common sum', it showed 0.173 in the topic relevance task.
This implies that the result documents retrieved with the `and' operator are good and useful in the homepage finding task.
We can conclude that we need different retrieval strategies according to the category of a query. We have to use the field information (title, body, and anchor text) of each term, and combine evidences dynamically, to get good results. In the topic relevance task, the body text of a document is good for indexing, the `sum' operator is good for query term matching, and combining URL and Link information is useless. On the other hand, in the homepage finding task, anchor texts and titles are useful for indexing, the `and' operator is also good for query term matching, and URL and Link information is useful. By combining results from the main body text and from anchor texts and titles we can get better performance.
USER QUERY CLASSIFICATION
In this section, we present the method for making a language model for user query classification.
4.1 Preparation for Language Model
We may use the question type of a query to classify the category of a user query. For example, "What is a two electrode vacuum tube?" is a query of the topic relevance task. "Where is the site of SONY?" is a query of the homepage finding task. We can guess the category of a query from an interrogative pronoun and cue expressions (e.g. `the site of'). However, people do not provide natural language queries to a search engine. They usually use keywords for their queries. It is not easy to anticipate natural language queries. In this paper, we assume that users provide only main keywords as their queries.
We define a query Q as a set of words:

    Q = {w_1, w_2, ..., w_n}        (10)

To see the characteristics of each query class, we use two query sets. For the topic relevance task, the TREC-2000 topic relevance task queries (topics 451-500) are used. For the homepage finding task, queries for 100 randomly selected homepages^1 are used. We call them QUERY_T-TRAIN and QUERY_H-TRAIN.
We divided WT10g into two sets, DB_TOPIC and DB_HOME. If the URL type of a document is the `root' type, we put the document into DB_HOME. Others are added to DB_TOPIC. According to the report of [16], our division method can get site entry pages with 71.7% precision. Additionally, we put virtual documents into DB_HOME using anchor texts. If a linked document is in DB_TOPIC, then we make a virtual document that consists of its anchor texts and put it into DB_HOME. If a linked document is in DB_HOME, then we add the anchor texts to the original document. Usually a site entry page does not have many words. It is not an explanatory document for some topic or concept, but a brief explanation of a site. We can assume that site entry pages have a different usage of words. If we find distinctive features of site entry pages, then we can discriminate the category of a given query.
#DB_TOPIC and #DB_HOME denote the number of documents in DB_TOPIC and DB_HOME, respectively. However, since most documents in DB_HOME have a short length, we normalized the number of documents with the following equations:

    \#DB_{TOPIC} = number of documents in DB_{TOPIC}        (11)

    \#DB_{HOME} = (number of documents in DB_{HOME}) \times \frac{avg\_doclength_{HOME}}{avg\_doclength_{TOPIC}}        (12)

^1 Available at http://www.ted.cmis.csiro.au/TRECWeb/Qrels/

4.2 Distribution of Query Terms
`Earthquake' occurs more frequently in DB_TOPIC.
But `Hunt Memorial Library' shows a high relative frequency in DB_HOME. General terms tend to have the same distribution regardless of the database. If the difference in distribution is larger than expected, this tells whether a given query belongs to the topic relevance task class or the homepage finding task class. We can calculate the occurrence ratio of a query with the following equation [11]:

    Dist(w_1, ..., w_n) = \frac{n \cdot C(w_1, ..., w_n)}{\sum_{i=1}^{n} C(w_i)}        (13)

C(w) is the number of documents that have w as an index term; the df of w is used for C(w). C(w_1, ..., w_n) is the number of documents that have all of w_1, ..., w_n as index terms. To see the distribution difference of a query, we use the following ratio:

    diff_{Dist}(Q) = \frac{Dist_{HOME}(Q)}{Dist_{TOPIC}(Q)}        (14)

If a query has only one term, we use the chi-square [11]. We make a 2-by-2 table for the given word `w':

              word = w    word != w
  DB_TOPIC    a           b
  DB_HOME     c           d

Here a + b = #DB_TOPIC and c + d = #DB_HOME. `a' is the frequency of the word `w' in DB_TOPIC and `c' is the frequency of the word `w' in DB_HOME. The chi-square value shows the dependence between the word `w' and the database. If the chi-square value of the word `w' is high, then `w' is a special term of DB_TOPIC or DB_HOME. We classify these words that have a high chi-square value according to their df. If `w' has a high df, then the word `w' is a topic relevance task query; otherwise `w' is a homepage finding task query. If the chi-square value of the word `w' is low, then `w' is a general term. For example, `fast' shows a high chi-square value, since it is often used to modify proper names. However, the single word `fast' is not a proper name. We classify a word that has a high chi-square value and a high df into the topic relevance task.
Fig. 2 shows the results of diff_Dist for queries that have at least two query terms. The mean values of QUERY_T-TRAIN's diff_Dist and QUERY_H-TRAIN's diff_Dist are 0.5138 and 1.1, respectively. The higher the value of diff_Dist of a given query is, the more confident we are that the query contains special terms. On the other hand, if the value of diff_Dist is near the mean value of QUERY_T-TRAIN, the query contains general terms rather than a special expression. We calculate the possibility that a given query is in each class using the mean value and the standard deviation. However, there are queries in QUERY_T-TRAIN that show a high diff_Dist. For example, `Jenniffer Aniston' and `Chevrolet Trucks' showed 2.04 and 0.76, respectively. Usually proper names showed high diff_Dist values.
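A compact sketch of this distribution-difference feature, following the reconstruction of equation (13) above (joint document frequency relative to the average single-term document frequency), is shown below; the document-frequency lookup functions for the two databases are assumed to exist and all names are illustrative.

```python
from typing import Callable, List

def dist(query: List[str],
         joint_df: Callable[[List[str]], int],   # C(w_1,...,w_n) in one database
         df: Callable[[str], int]) -> float:     # C(w) in the same database
    """Dist(w_1..w_n): joint df relative to the average single-term df."""
    denom = sum(df(w) for w in query)
    return len(query) * joint_df(query) / denom if denom else 0.0

def diff_dist(query: List[str],
              joint_df_home, df_home, joint_df_topic, df_topic) -> float:
    """diffDist(Q) = Dist_HOME(Q) / Dist_TOPIC(Q); larger values suggest the homepage finding task."""
    d_topic = dist(query, joint_df_topic, df_topic)
    d_home = dist(query, joint_df_home, df_home)
    return d_home / d_topic if d_topic else float("inf")
```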
If a proper name is frequently used in DB_HOME, then we can think of it as the name of a site.

if length(Q) = 1 then
    calculate the chi-square of Q
    if chi-square > 18 then
        if the df of the query > 65 then
            the topic relevance task
        else
            the homepage finding task
    else
        the topic relevance task
else
    calculate the distributions of the query in each database
    calculate diffDist(Q)
    if diffDist(Q) > threshold then
        the homepage finding task
    else
        unknown

Figure 1: The algorithm of the distribution difference method.

Figure 2: Distribution of queries (percentage of observations against the ratio of distribution difference, for QUERY-TOPIC-TRAIN and QUERY-HOMEPAGE-TRAIN).

4.3 Mutual Information
There are two or more words that co-occur frequently. These words may have syntactic or semantic relations to each other. We say these words have some dependency. For example, `tornadoes formed' shows a similar dependency regardless of the database, but `Fan Club' has a high dependency in DB_HOME. This means that `tornadoes formed' is a general usage of words but `Fan Club' is a special usage in DB_HOME. Therefore, the dependency of `Fan Club' can be a key clue for guessing the category of a user query. If the difference in the dependency of the terms is larger than expected, this tells whether a given query is the topic relevance task or the homepage finding task. For two variables A and B, we can calculate the dependency with mutual information, I(A;B) [9]. We use the pointwise mutual information I(x, y) to calculate the dependency of the terms in a query [11]:

    I(A;B) = H(A) + H(B) - H(A,B) = \sum_{a,b} p(a,b) \log \frac{p(a,b)}{p(a)p(b)}        (15)

    I(x,y) = \log \frac{p(x,y)}{p(x)p(y)}        (16)

We extend pointwise mutual information to three variables. We use set theory to calculate the value of the intersection part, as in the two-variable case:

    I(A;B;C) = H(A,B,C) - H(A) - H(B) - H(C) + I(A;B) + I(B;C) + I(C;A)
             = \sum_{a,b,c} p(a,b,c) \log \frac{p(a,b)p(b,c)p(c,a)}{p(a,b,c)p(a)p(b)p(c)}        (17)

    I(x,y,z) = \log \frac{p(x,y)p(y,z)p(z,x)}{p(x,y,z)p(x)p(y)p(z)}        (18)

In principle, p(x, y) means the probability that x and y co-occur within a specific distance [11]; usually x and y are consecutive words. Since the numbers of words and documents are so huge in the IR domain, it is not easy to keep such statistics. Our measure assumes that x and y co-occur in a document. We use the df of a given term to calculate the number of documents that contain the term. As with the distribution difference measure, we use a ratio to see the difference of MI. If the pointwise mutual information is below zero, we use zero:

    diff_{MI}(Q) = \frac{MI_{HOME}(Q)}{MI_{TOPIC}(Q)}        (19)

Fig. 3 shows the results of diff_MI. The mean values of QUERY_T-TRAIN's diff_MI and QUERY_H-TRAIN's diff_MI are 1.9 and 2.7, respectively. For example, the topic relevance task query `mexican food culture' showed 1.0, but the homepage finding task query `Newave IFMO' showed 7.5. QUERY_H-TRAIN has a slightly higher standard deviation. It means that the queries of QUERY_H-TRAIN have different MI in DB_HOME.

Figure 3: Mutual information of queries (percentage of observations against the ratio of MI difference, for QUERY-TOPIC-TRAIN and QUERY-HOMEPAGE-TRAIN).
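Similarly, here is a sketch of the MI-based feature for the two-term case, estimating document-level co-occurrence probabilities from document frequencies and clipping negative pointwise MI to zero as stated in the text; the three-term extension of equations (17)-(18) is omitted, and all names are assumptions made for the example.

```python
import math
from typing import Callable, List

def pointwise_mi(query: List[str],
                 joint_df: Callable[[List[str]], int],
                 df: Callable[[str], int], n_docs: int) -> float:
    """Two-term pointwise MI, I(x,y) = log p(x,y)/(p(x)p(y)), from document frequencies."""
    x, y = query                                   # assumes a two-term query
    p_xy = joint_df(query) / n_docs
    p_x, p_y = df(x) / n_docs, df(y) / n_docs
    if p_xy == 0 or p_x == 0 or p_y == 0:
        return 0.0
    return max(0.0, math.log(p_xy / (p_x * p_y)))  # negative values clipped to zero

def diff_mi(query, joint_df_home, df_home, n_home,
            joint_df_topic, df_topic, n_topic) -> float:
    """diffMI(Q) = MI_HOME(Q) / MI_TOPIC(Q)."""
    mi_topic = pointwise_mi(query, joint_df_topic, df_topic, n_topic)
    mi_home = pointwise_mi(query, joint_df_home, df_home, n_home)
    return mi_home / mi_topic if mi_topic else float("inf")
```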
The higher the diff_MI value, the more confident we can be that the query has a special dependency. We estimate the probability that a given query belongs to each class using the mean value and the standard deviation.
4.4 Usage Rate as an Anchor Text
If query terms appear frequently in titles and anchor texts, this indicates that the category of the query is the homepage finding task. Since titles and anchor texts are usually entity names or proper nouns, the usage rate gives the probability that the given terms are special terms.
use_Anchor(w_1, ..., w_n) = [ C_SITE+ANCHOR(w_1, ..., w_n) - C_SITE(w_1, ..., w_n) ] / C_SITE(w_1, ..., w_n)   (20)
C_SITE(w) is the number of site entry documents that have w as an index term. C_SITE+ANCHOR(w) is the number of site entry documents and anchor texts that have w as an index term.
4.5 POS Information
Since homepage finding task queries are proper names, they do not usually contain a verb. However, some topic relevance task queries include a verb to explain what the user wants to know; for example, `How are tornadoes formed?', or briefly `tornadoes formed', contains the verb `formed'. If a query has a verb other than the `be' verb, we classify it into the topic relevance task.
4.6 Combination of Measures
The distribution difference method applies to more queries than the MI difference, while the anchor-text usage rate and the POS information have small coverage. However, the four measures cover different queries, so we can gain both confidence and coverage by combining them. We use a different combination equation depending on the number of query terms; if the query has 2 or 3 terms, we also use pointwise mutual information.
S(Q) = α·diff_Dist(Q) + β·diff_MI(Q) + γ·use_Anchor(Q) + δ·POS_info(Q)   (21)
We choose α, β, γ, and δ with the training data (QUERY_T-TRAIN and QUERY_H-TRAIN). If the S(Q) score is not high or low enough, we make no decision.
EXPERIMENTS
In this section, we show the effectiveness of our user query classification.
5.1 Query Classification
We used four query sets to evaluate our query classification method. QUERY_T-TRAIN and QUERY_H-TRAIN are used for training (TRAIN). TREC-2001 topic relevance task queries (Topics 501-550) and TREC-2001 homepage finding task queries (1-145) are used for testing (TEST); we call these two test sets QUERY_T-TEST and QUERY_H-TEST. We used WT10g to build the classification model.
We classified queries with the proposed method. If the score S(Q) is decisive enough to tell that a given query belongs to the topic relevance task or the homepage finding task, we assign that query type to it; otherwise we leave the query unclassified. Table 2 shows the classification results of our proposed language model.
Table 2: Query Classification Result
Measure | TRAIN Precision | TRAIN Recall | TEST Precision | TEST Recall
Dist.   | 77.3%           | 38.7%        | 82.1%          | 28.2%
MI      | 90.9%           | 20.0%        | 78.2%          | 29.9%
Anchor  | 73.6%           | 35.3%        | 82.4%          | 35.9%
POS     | 100%            |  9.3%        | 96.4%          | 13.8%
All     | 81.1%           | 57.3%        | 91.7%          | 61.5%
By combining the measures, we could apply our method to more queries and increase both precision and recall.
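A minimal sketch of the decision rule implied by Eq. (21) and the "no decision" policy described above. The weights and the two thresholds are assumed to have been fit on the training sets; their names and values here are placeholders, not the paper's.

def classify_query(q, measures, weights, hi, lo):
    # measures: {name: callable(q)} for diff_Dist, diff_MI, use_Anchor, POS_info
    # weights: the four coefficients of Eq. (21), fit on QUERY_T-TRAIN / QUERY_H-TRAIN
    s = sum(weights[name] * f(q) for name, f in measures.items())
    if s >= hi:
        return 'HOMEPAGE'
    if s <= lo:
        return 'TOPIC'
    return None  # score not decisive enough: leave the query unclassified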
Table 3: Average Precision of the Topic Relevance Task
model  | OKAPI | TF-IDF | KL_DIR | MIXFB_KL_D
Lemur  | 0.182 | 0.170  | 0.210  | 0.219
MLemur | 0.169 | 0.159  | 0.200  | 0.209
Table 4: MRR of the Homepage Finding Task
model  | OKAPI | TF-IDF | KL_DIR | MIXFB_KL_D
Lemur  | 0.355 | 0.340  | 0.181  | 0.144
MLemur | 0.673 | 0.640  | 0.447  | 0.360
Our proposed method shows the better result on the test set. This is due to the characteristics of the query sets: there are 7 queries containing a verb in QUERY_T-TRAIN and 28 in QUERY_T-TEST, so we can assume that the POS information is useful.
The main source of misclassification is an incorrect division of WT10g. Since our method usually gives a high score to proper names, we need correct information to distinguish a proper name from a site name. We tried to build DB_HOME automatically, but some root pages are not site entry pages; a more sophisticated division method is needed.
There are cases where a verb appears in a homepage finding task query. `Protect & Preserve' is a homepage finding task query, but `protect' and `preserve' are verbs. However, `Protect' and `Preserve' start with a capital letter, so we can correct the wrong POS tags.
There are queries in QUERY_T-TEST that look like queries of QUERY_H-TEST. For example, `Dodge Recalls' is used to find documents that report on the recall of any Dodge automobile products, but a user may also want to find the entry page of `Dodge recall'. This comes from the use of main keywords instead of a natural language query.
There are 6 queries in QUERY_T-TEST and 6 queries in QUERY_H-TEST for which no result document contains all query terms; we could not apply our method to them. WT10g is not large enough to extract probability information for these two query sets. To make up for this sparseness problem, we need a different indexing-term extraction module, special parsing techniques for URL strings and acronyms in a document, and a query expansion technique to get a better result.
5.2 The Improvement of IR Performance
We used the Lemur Toolkit [12] to build a general search engine for the topic relevance task. The Lemur Toolkit is an information retrieval toolkit designed with language modeling in mind, and it supports several retrieval algorithms, including a dot-product function using the TF-IDF weighting algorithm, the Kullback-Leibler (KL) divergence algorithm, the OKAPI retrieval algorithm, the feedback retrieval algorithm, and the mixture model with Dirichlet smoothing, MIXFB_KL_D [14]. For the homepage finding task, we add the URL prior probability of a URL string to the Lemur Toolkit. Besides Link information, we add the PageRank of a document.
We normalized the PageRank values so that the maximum value is 100 and the minimum value is 0. First we extracted the top 1,000 results with the Lemur Toolkit. Then we combined URL information and Link information to reorder the results with Eq. 7. We presented the top 1,000 documents as the answer in the topic relevance task, and 100 documents in the homepage finding task. We call this modified toolkit the MLemur Toolkit.
Tables 3 and 4 show the results of the topic relevance task and the homepage finding task using the Lemur Toolkit and the MLemur Toolkit. MIXFB_KL_D showed a good result in the topic relevance task but a poor result in the homepage finding task. We can say that a good information retrieval algorithm for the topic relevance task is not always good for the homepage finding task. For the test of performance improvement by query type classification, we chose the three algorithms that got the best and worst scores in each task: OKAPI, TF-IDF, and MIXFB_KL_D.
Table 5: The Retrieval Performance with Classification Method
                        OKAPI           TF-IDF          MIXFB_KL_D
Measure   DEFAULT   TOPIC   HOME    TOPIC   HOME    TOPIC   HOME
Dist.     TOPIC     0.178   0.469   0.168   0.447   0.216   0.226
Dist.     HOME      0.174   0.666   0.164   0.633   0.212   0.359
MI        TOPIC     0.179   0.465   0.168   0.445   0.218   0.233
MI        HOME      0.169   0.673   0.159   0.640   0.209   0.360
Anchor    TOPIC     0.176   0.513   0.165   0.489   0.215   0.232
Anchor    HOME      0.169   0.666   0.159   0.633   0.209   0.359
POS       TOPIC     0.182   0.355   0.170   0.340   0.219   0.144
POS       HOME      0.173   0.673   0.163   0.640   0.212   0.354
All       TOPIC     0.180   0.552   0.168   0.528   0.217   0.280
All       HOME      0.173   0.666   0.163   0.633   0.212   0.353
Table 5 shows the change in performance. `DEFAULT' is the default category assigned to an unclassified query. Figures in the TOPIC columns and the HOME columns are average precision and MRR respectively. From the result, the OKAPI algorithm with the homepage finding task as the default class shows good performance.
5.3 Discussion
To classify a query type, we need the document frequency of each query term in each database, which lowers system efficiency. However, we may create the two databases proposed in this paper for indexing, retrieve a result document set from each database while classifying the query type at the same time, and then merge the two results according to the category of the query. From Table 1, merging the results of the anchor text representation and the common content representation shows good performance. More work is needed to unify the query classification and the document retrieval.
In this paper, we proposed a user query classification method for the topic relevance task and the homepage finding task. The queries of the homepage finding task usually consist of entity names or proper nouns, whereas queries of the service finding task have verbs for the service definition; for example, "Where can I buy concert tickets?" has `buy' as the service definition. To find these cue expressions, we need a more sophisticated analysis of anchor texts. Since a service on the Web is provided as a program, there is a trigger button, and these trigger buttons are mostly explained by anchor texts. We have to distinguish entity names from action verbs in anchor texts.
We have to change the measures for query classification from the word unit to entity and action units.
User query classification can be applied to various areas. MetaSearch is a search algorithm that combines the results of several search engines to get a better overall result [7]. [10] proposed CombMNZ (Multiply by NonZeros) and showed it to be better than another scoring algorithm, CombSUM (Summed similarity over systems). But if we consider the homepage finding task, the situation is different.
Tables 6 and 7 show the performance improvement of the MetaSearch algorithms. We experimented with random samplings of 2, 3, 4, and 5 engine results; the score is the average improvement over 100 tests. CombMNZ was good for the topic relevance task, but CombSUM was good for the homepage finding task. This also tells us that we need different MetaSearch strategies depending on the class of a query.
Table 6: Performance of MetaSearch in the Topic Relevance Task
engine # |   2    |   3   |   4   |   5
CombSUM  | -2.4%  | 4.4%  | 3.7%  | 4.8%
CombMNZ  | -1.2%  | 5.7%  | 5.3%  | 5.8%
Table 7: Performance of MetaSearch in the Homepage Finding Task
engine # |   2    |   3    |   4   |   5
CombSUM  | -4.5%  |  0.7%  | -0.9% |  0.8%
CombMNZ  | -6.0%  | -0.4%  | -4.5% | -2.4%
CONCLUSIONS
The Web holds many forms of resources, and consequently the purposes of user queries are diverse. We classify user queries into three categories: the topic relevance task, the homepage finding task, and the service finding task. Search engines need different strategies to meet the purpose of a user query; for example, URL information and Link information are bad for the topic relevance task but good for the homepage finding task. We built two representative databases, DB_HOME and DB_TOPIC, one for each task, by dividing the text collection according to the URL type of a web document: if the URL of a document contains a host name only, we put it into DB_HOME, and we also build a virtual document from its anchor texts and put it into DB_HOME; all other documents go into DB_TOPIC. If a given query's distributions in DB_HOME and DB_TOPIC are different, the query is not made of general words, so we can assume its category is the homepage finding task. Likewise, the difference of dependency (mutual information) and the usage rate as anchor texts tell whether a given query belongs to the homepage finding task or not. We tested the proposed classification method with two query sets, QUERY_T-TEST and QUERY_H-TEST. The usage rate as anchor texts and the POS information show small coverage; on the other hand, the distribution difference and the dependency showed good precision and coverage. Each classifier also applied to different queries, so by combining the classifiers we obtained better precision and recall: 91.7% precision and 61.5% recall. After classifying the category of a query, we used different information in the search engine: for the topic relevance task, content information such as TF-IDF; for the homepage finding task, Link information and URL information in addition to content information. We tested this dynamic combining method, and our classification method showed the best result with the OKAPI scoring algorithm.
ACKNOWLEDGMENTS
We would like to thank Jamie Callan for providing useful experiment data and the Lemur toolkit.
REFERENCES
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval.
ACM PRESS BOOKS, 1999.\n[2] P. Bailey, N. Craswell, and D. Hawking. Engineering a\nmulti-purpose test collection for web retrieval\nexperiments. Information Processing and\nManagement, to appear.\n[3] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual Web search engine. Computer Networks\nand ISDN Systems, 30(1-7):107117, 1998.\n[4] A. Broder. A taxonomy of web search. SIGIR Forum,\n36(2), 2002.\n[5] W. B. Croft. Combining approaches to information\nretrieval. In Advances in Information Retrieval:\nRecent Research from the Center for Intelligent\nInformation Retrieval, pages 136. Kluwer Academic\nPublishers, 2000.\n[6] CSIRO. Web research collections - trec web track.\nwww.ted.cmis.csiro.au /TRECWeb/, 2001.\n[7] E. Fox and J. Shaw. Combination of multiple searches.\nIn Text REtrieval Conference (TREC-1), pages\n243252, 1993.\n[8] D. Hawking and N. Craswell. Overview of the\ntrec-2001 web track. In Text REtrieval Conference\n(TREC-10), pages 6167, 2001.\n[9] E. Jaynes. Information theory and statistical\nmechanics. Physics Review, 106(4):620630, 1957.\n[10] J. H. Lee. Analyses of multiple evidence combination.\nIn Proceedings of the 20th Annual International ACM\nSIGIR Conference on Research and Development in\nInformation Retrieval, pages 267276, 1997.\n[11] C. D. Manning and H. Schutze. Foundations of\nStatistical Natural Language Processing. The MIT\nPress, 1999.\n[12] P. Ogilvie and J. Callan. Experiments using the lemur\ntoolkit. In Text REtrieval Conference (TREC-10)\nhttp://www-2.cs.cmu.edu/ lemur, pages 103108, 2001.\n[13] L. Page, S. Brin, R. Motwani, and T. Winograd. The\npagerank citation ranking: Bringing order to the web.\nTechnical report, Stanford Digital Library\nTechnologies Project, 1998.\n[14] J. M. Ponte. Language models for relevance feedback.\nIn W. B. Croft, editor, Advances in Information\nRetrieval: Recent Research from the Center for\nIntelligent Information Retrieval, pages 7395. Kluwer\nAcademic Publishers, 2000.\n[15] S. E. Robertson, S. Walker, S. Jones,\nM. Hancock-Beaulieu, and M. Gatford. Okapi at\ntrec-3. In Text REtrieval Conference (TREC-2), pages\n109126, 1994.\n[16] T. Westerveld, W. Kraaij, and D. Hiemstra.\nRetrieving web pages using content, links, urls and\nanchors. In Text REtrieval Conference (TREC-10),\npages 663672, 2001.\n[17] K. Yang. Combining text and link-based retrieval\nmethods for web ir. In Text REtrieval Conference\n(TREC-10), pages 609618, 2001.\n71\n", "keywords": "URL Information;web document;URL;improvement;frequency;task;information;model;rate;IR;Combination of Multiple Evidences;Link Information;query;Query Classification"} {"name": "162", "title": "Querying Bi-level Information", "abstract": "In our research on superimposed information management, we have developed applications where information elements in the superimposed layer serve to annotate, comment, restructure, and combine selections from one or more existing documents in the base layer. Base documents tend to be unstructured or semi-structured (HTML pages, Excel spreadsheets, and so on) with marks delimiting selections. Selections in the base layer can be programmatically accessed via marks to retrieve content and context. The applications we have built to date allow creation of new marks and new superimposed elements (that use marks), but they have been browse-oriented and tend to expose the line between superimposed and base layers. 
Here, we present a new access capability, called bi-level queries, that allows an application or user to query over both layers as a whole. Bi-level queries provide an alternative style of data integration where only relevant portions of a base document are mediated (not the whole document) and the superimposed layer can add information not present in the base layer. We discuss our framework for superimposed information management, an initial implementation of a bi-level query system with an XML Query interface, and suggest mechanisms to improve scalability and performance.", "fulltext": "INTRODUCTION\nYou are conducting background research for a paper you are\nwriting. You have found relevant information in a variety of\nsources: HTML pages on the web, PDF documents on the web\nand on your SIGMOD anthology of CDs, Excel spreadsheets and\nWord documents from your past work in a related area, and so on.\nYou identify relevant portions of the documents and add\nannotations with clarifications, questions, and conclusions. As\nyou collect information, you frequently reorganize the\ninformation you have collected thus far (and your added\nannotations) to reflect your perspective. You intentionally keep\nyour information structure loose so you can easily move things\naround. When you have collected sufficient information, you\nimport it, along with your comments, in to a word-processor\ndocument. As you write your paper in your word-processor, you\nrevisit your sources to see information in its context. Also, as you\nwrite your paper you reorganize its contents, including the imported\ninformation, to suit the flow. Occasionally, you search the\nimported annotations, selections, and the context of the selections.\nYou mix some of the imported information with other information\nin the paper and transform the mixture to suit presentation needs.\nMost researchers will be familiar with manual approaches to the\nscenario we have just described. Providing computer support for\nthis scenario requires a toolset with the following capabilities:\n1. Select portions of documents of many kinds (PDF, HTML,\netc.) in many locations (web, CD, local file system, etc.), and\nrecord the selections.\n2. Create and associate annotations (of varying structure) with\ndocument selections.\n3. Group and link document selections and annotations,\nreorganize them as needed, and possibly even maintain\nmultiple organizations.\n4. See a document selection in its context by opening the\ndocument and navigating to the selected region, or access the\ncontext of a selection without launching its original document.\n5. Place document selections and annotations in traditional documents\n(such as the word-processor document that contains\nyour paper).\n6. Search and transform a mixture of document selections,\nannotations, and other information.\nSystems that support some subset of these capabilities exist, but\nno one system supports the complete set. It is hard to use a\ncollection of systems to get the full set of features because the\nsystems do not interoperate well. Some hypertext systems can\ncreate multiple organizations of the same information, but they\ntend to lack in the types of source, granularity of information, or\nthe location of information consulted. For example, Dexter [6]\nrequires all information consulted to be stored in its proprietary\ndatabase. Compound document systems can address sub-documents\n, but they tend to have many display constraints. 
For\nexample, OLE 2 [9] relies on original applications to render\ninformation. Neither type of system supports querying a mixture\nof document selections and annotations.\nSuperimposed information management is an alternative solution\nfor organizing heterogeneous in situ information, at document and\nsub-document granularity. Superimposed information (such as\nannotations) refers to data placed over existing information\nsources (base information) to help organize, access, connect and\nreuse information elements in those sources [8]. In our previous\nwork [12], we have described the Superimposed Pluggable\nArchitecture for Contexts and Excerpts (SPARCE), a middleware\nfor superimposed information management, and presented some\nsuperimposed applications built using SPARCE. Together they\nsupport Capabilities 1 through 4. In this paper, we show how\nSPARCE can be used to support Capability 6. Details of support\nfor Capability 5 are outside the scope of this paper.\nBefore we proceed with the details of how we support Capability\n6, we introduce a superimposed application called RIDPad [12].\nFigure 1 shows a RIDPad document that contains information\nselections and annotations related to the topic of information\nintegration. The document shown contains eight items: CLIO,\nDefinition, SchemaSQL, Related Systems, Goal, Model, Query\nOptimizer, and Press. These items are associated with six distinct\nbase documents of three kinds--PDF, Excel, and HTML. An item\nhas a name, a descriptive text, and a reference (called a mark) to a\nselection in a base document. For example, the item labeled\n`Goal' contains a mark into a PDF document. The boxes labeled\nSchematic Heterogeneity and Garlic are groups. A group is a\nnamed collection of items and other groups. A RIDPad document\nis a collection of items and groups.\nRIDPad affords many operations for items and groups. A user can\ncreate new items and groups, and move items between groups.\nThe user can also rename, resize, and change visual\ncharacteristics such as color and font for items and groups. With\nthe mark associated with an item, the user can navigate to the base\nlayer if necessary, or examine the mark's properties and browse\ncontext information (such as containing paragraph) from within\nRIDPad via a reusable Context Browser we have built.\nThe operations RIDPAD affords are at the level of items and\ngroups. However, we have seen the need to query and manipulate\na RIDPad document and its base documents as a whole. For\nexample, possible queries over the RIDPad document in Figure 1\ninclude:\nQ1: List base documents used in this RIDPad document.\nQ2: Show abstracts of papers related to Garlic.\nQ3: Create an HTML table of contents from the groups and items.\nQuery Q1 examines the paths to base documents of marks associated\nwith items in the RIDPad document. Q2 examines the\ncontext of marks of items in the group labeled `Schematic\nHeterogeneity.' Q3 transforms the contents of the RIDPad document\nto another form (table of contents). In general, queries such\nas these operate on both superimposed information and base\ninformation. Consequently, we call them bi-level queries.\n\nFigure 1: A RIDPad document.\nThere are many possible choices on how to present the contents of\nsuperimposed documents (such as the RIDPad document in\nFigure 1) and base documents for querying. 
We could make the\ndivision between the superimposed and base documents obvious\nand let the user explicitly follow marks from superimposed\ninformation to base information. Instead, our approach is to\nintegrate a superimposed document's contents and related base\ninformation to present a uniform representation of the integrated\ninformation for querying.\nThe rest of this paper is organized as follows: Section 2 provides\nan overview of SPARCE. Section 3 provides an overview of bi-level\nquery systems and describes a nave implementation of a bi-level\nquery system along with some example bi-level queries.\nSection 4 discusses some applications and implementation\nalternatives for bi-level query systems. Section 5 briefly reviews\nrelated work. Section 6 summarizes the paper.\nWe use the RIDPad document in Figure 1 for all examples in this\npaper.\nSPARCE OVERVIEW\nThe Superimposed Pluggable Architecture for Contexts and\nExcerpts (SPARCE) facilitates management of marks and context\ninformation in the setting of superimposed information\nmanagement [12]. A mark is an abstraction of a selection in a\nbase document. Several mark implementations exist, typically one\nper base type (PDF, HTML, Excel, and so on). A mark\nimplementation chooses an addressing scheme appropriate for the\nbase type it supports. For example, an MS Word mark\nimplementation uses the starting and ending character index of a\ntext selection, whereas an MS Excel mark uses the row and\ncolumn names of the first and last cell in the selection. All mark\nimplementations provide a common interface to address base\ninformation, regardless of base type or access protocol they\n8\nsupport. A superimposed application can work uniformly with\nany base type due to this common interface.\nContext is information concerning a base-layer element. Presentation\ninformation such as font name, containment information\nsuch as enclosing paragraph and section, and placement\ninformation such as line number are examples of context\ninformation. An Excerpt is the content of a marked base-layer\nelement. (We treat an excerpt also as a context element.) Figure 2\nshows the PDF mark corresponding to the item `Goal' (of the\nRIDPad document in Figure 1) activated. The highlighted portion\nis the marked region. Table 1 shows some of the context elements\nfor this mark.\n\nFigure 2: A PDF mark activated.\nFigure 3 shows the SPARCE architecture reference model. The\nMark Management module is responsible for operations on marks\n(such as creating and storing marks). The Context Management\nmodule retrieves context information. The Superimposed\nInformation Management module provides storage service to\nsuperimposed applications. The Clipboard is used for inter-process\ncommunication.\nTable 1: Some context elements of a PDF mark.\nElement name\nValue\nExcerpt\nprovide applications and users\nwith ... Garlic system\nFont name\nTimes New Roman\nEnclosing paragraph\nLoosely speaking, the goal ...\nSection Heading\nGarlic Overview\nSPARCE uses mediators [13] called context agents to interact\nwith different base types. A context agent is responsible for resolving\na mark and returning the set of context elements\nappropriate to that mark. A context agent is different from mediators\nused in other systems because it only mediates portions of\nbase document a mark refers to. For example, if a mark refers to\nthe first three lines of a PDF document, the mark's context agent\nmediates those three lines and other regions immediately around\nthe lines. 
A user could retrieve broader context information for\nthis mark, but the agent will not do so by default.\n\nFigure 3: SPARCE architecture reference model.\nA superimposed application allows creation of information elements\n(such as annotations) associated with marks. It can use an\ninformation model of its choice (SPARCE does not impose a\nmodel) and the model may vary from one application to another.\nFor example, RIDPad uses a group-item model (simple nesting),\nwhereas the Schematics Browser, another application we have\nbuilt, uses an ER model [2, 12]. The superimposed model may be\ndifferent from any of the base models. A detailed description of\nSPARCE is available in our previous work [12].\nBI-LEVEL QUERY SYSTEM\nA bi-level query system allows a superimposed application and its\nuser to query the superimposed information and base information\nas a whole. User queries are in a language appropriate to the\nsuperimposed model. For example, XQuery may be the query\nlanguage if the superimposed model is XML (or a model that can\nbe mapped to XML), whereas SQL may be the query language if\nsuperimposed information is in the relational model.\n\nFigure 4: Overview of a bi-level query system.\nFigure 4 provides an overview of a bi-level query system. An oval\nin the figure represents an information source. A rectangle\ndenotes a process that manipulates information. Arrows indicate\ndata flow. The query processor accepts three kinds of\ninformation--superimposed, mark, and context. Model transformers\ntransform information from the three sources in to\nmodel(s) appropriate for querying. One of these transformers, the\ncontext transformer, is responsible for transforming context information\n. We restrict bi-level query systems to use only one\nsuperimposed model at a time, for practical reasons. Choosing a\nquery language and the model for the result can be hard if\nsuperimposed models are mixed.\nBase Info 1\nBase Info n\nContext\nAgents\nModel Transformers\nMark\nInfo\nSuperimposed\nInfo\nQuery\nProcessor\nSuperimposed\nApplication\nSuperimposed\nInformation\nManagement\nMark\nManagement\nContext\nManagement\nClipboard\nBase\nApplication\nResult\nQuery\n9\n3.1 Implementation\nWe have implemented a nave bi-level query system for the XML\nsuperimposed model. We have developed a transformer to convert\nRIDPad information to XML. We have developed a context\ntransformer to convert context information to XML. We are able\nto use mark information without any transformation since\nSPARCE already represents that information in XML. User queries\ncan be in XPath, XSLT, and XQuery. We use Microsoft's\nXML SDK 4.0 [10] and XQuery demo implementation [11] to\nprocess queries.\nWe use three XML elements to represent RIDPad information in\nXML-<\n;RIDPadDocument>\nfor the document,\n<Group>\nfor\na group, and\n<Item>\nfor an item. For each RIDPad item, the\nsystem creates four children nodes in the corresponding\n<Item>\n\nelement. These children nodes correspond to the mark, container\n(base document where the mark is made), application, and\ncontext. We currently transform the entire context of the mark.\nThe XML data is regenerated if the RIDPad document changes.\n\nFigure 5: Partial XML data from a RIDPad document.\nFigure 5 shows partial XML data generated from the RIDPad\ndocument in Figure 1. It contains two\n<Group>\nelements (corresponding\nto the two groups in Figure 1). The `Garlic' element\ncontains four\n<Item>\nelements (one for each item in that group\nin Figure 1). 
There is also an\n<Item>\nelement for the group-less\nitem CLIO. The\n<Item>\nelement for `Goal' is partially expanded\nto reveal the\n<Mark>\n, <\nContainer>\n, <\nApplication>\n, and\n<\nContext>\nelements it contains. Contents of these elements are\nnot shown.\n3.2 Example Bi-level Queries\nWe now provide bi-level query expressions for the queries Q1 to\nQ3 listed in Section 1.\nQ1: List base documents used in this RIDPad document.\nThis query must retrieve the path to the base document of the\nmark associated with each item in a RIDPad document. The\nfollowing XQuery expression does just that. The Location\nelement in the Container element contains the path to the\ndocument corresponding to the mark associated with an item.\n\n<Paths> {FOR $l IN\ndocument("source")//Item/Container/Location\nRETURN <Path>{$l/text()}</Path>\n} </Paths>\nQ2: Show abstracts of papers related to Garlic.\nThis query must examine the context of items in the group labeled\n`Garlic.' The following XPath expression suffices. This\nexpression returns the text of a context element whose name\nattribute is `Abstract', but only for items in the required group.\n//Group[@name='Garlic']/Item/Context//Elemen\nt[@name='Abstract']/text()\nQ3: Create an HTML table of contents from the groups and items.\nWe use an XSLT style-sheet to generate a table of contents (TOC)\nfrom a RIDPad document. Figure 6 shows the query in the left\npanel and its results in the right panel. The right panel embeds an\ninstance of MS Internet Explorer. The result contains one list item\n(HTML LI tag) for each group in the RIDPad document. There is\nalso one list sub-item (also an HTML LI tag) for each item in a\ngroup. The group-less item CLIO is in the list titled `Other Items.'\nA user can save the HTML results, and open it in any browser\noutside our system.\n\nFigure 6: RIDPAD document transformed to HTML TOC.\nThe HTML TOC in Figure 6 shows that each item has a hyperlink\n(HTML A tag) attached to it. A hyperlink is constructed using a\ncustom URL naming scheme and handled using a custom handler.\nCustom URLs are one means of implementing Capability 5 identified\nin Section 1.\nDISCUSSION\nThe strength of the current implementation is that it retrieves\ncontext information for only those parts of base documents that\nthe superimposed document refers to (via marks). Interestingly,\nthe same is also its weakness: it retrieves context information for\nall parts of the base documents the superimposed document refers\nto, regardless of whether executing a query requires those elements\n. For example, only Query Q2 looks at context information\n(Q1 looks only at container information, Q3 looks at superimposed\ninformation and mark information). However, the XML\ndata generated includes context information for all queries. Generating\ndata in this manner is both inefficient and unnecessary-information\nmay be replicated (different items may use the same\nmark), and context information can be rather large (the size of the\ncomplete context of a mark could exceed the size of its docu-10\nment), depending on what context elements a context agent\nprovides. It is possible to get the same results by separating\nRIDPad data from the rest and joining the various information\nsources. Doing so preserves the layers, and potentially reduces the\nsize of data generated. 
Also, it is possible to execute a query in-crementally\nand only generate or transform data that qualifies in\neach stage of execution.\nFigure 7 gives an idea of the proposed change to the schema of\nthe XML data generated. Comparing with the Goal Item element\nof Figure 5, we see that mark, container, application, and context\ninformation are no longer nested inside the Item element. Instead,\nan\n<Item>\nelement has a new attribute called\nmarkID\n. In the\nrevised schema, the RIDPad data, mark, container, application,\nand context information exist independently in separate\ndocuments, with references linking them. With the revised\nschema, no context information would be retrieved for Query Q1.\nContext information would be retrieved only for items in the\n`Schematic Heterogeneity' group when Q2 is executed.\n\nFigure 7: XML data in the revised schema.\nPreserving the layers of data has some disadvantages. A major\ndisadvantage is that a user will need to use joins to connect data\nacross layers. Such queries tend to be error-prone, and writing\nthem can take too much time and effort. A solution would be to\nallow a user to write bi-level queries as they currently do (against\na schema corresponding to the data in Figure 5), and have the\nsystem rewrite the query to match the underlying XML schema\n(as in Figure 7). That is, user queries would actually be expressed\nagainst a view of the actual data. We are currently pursuing this\napproach to bi-level querying.\nOur current approach of grabbing context information for all\nmarks could be helpful in some cases. For example, if a query\nworkload ends up retrieving context of all (or most) marks, the\ncurrent approach is similar to materializing views, and could lead\nto faster overall query execution.\nThe current implementation does not exploit relationships between\nsuperimposed information elements. For example, Figure 8\nshows the RIDPad document in Figure 1 enhanced with two relationships\n`Uses' and `Addresses' from the item CLIO. A user may\nexploit these relationships, to pose richer queries and possibly\nrecall more information. For example, with the RIDPad document\nin Figure 8, a user could now pose the following queries: What\nsystem does CLIO use? How is CLIO related to SchemaSQL?\nOur initial use anticipated for bi-level queries was to query superimposed\nand base information as a whole, but we have noticed\nthat superimposed application developers and users could use the\ncapability to construct and format (on demand) superimposed\ninformation elements themselves. For example, a RIDPad item's\nname may be a section heading. Such a representation of an item\ncould be expressed as the result of a query or a transformation.\n\nFigure 8: A RIDPad document with relationships.\nBi-level queries could also be used for repurposing information.\nFor example, Query Q3 could be extended to include the contents\nof items (instead of just names) and transform the entire RIDPad\ndocument to HTML (like in Figure 6). The HTML version can\nthen be published on the web.\nWe have demonstrated bi-level queries using XML query\nlanguages, but superimposed applications might benefit from\nother query languages. The choice of the query language depends\nlargely on the superimposed information model (which in turn\ndepends on the task at hand). More than one query language may\nbe appropriate for some superimposed information models, in\nsome superimposed applications. 
For example, both CXPath [3]\nand XQuery may be appropriate for some applications that use the\nXML superimposed model.\nThe base applications we have worked with so far do not\nthemselves have query capabilities. If access to context or a selection\nover context elements can be posed as a query in a base\napplication, we might benefit from applying distributed query-processing\ntechniques. Finally, the scope of a bi-level query is\ncurrently the superimposed layer and the base information accessible\nvia the marks used. Some applications might benefit from\nincluding marks generated automatically (for example, using IR\ntechniques) in the scope of a query.\nRELATED WORK\nSPARCE differs from mediated systems such as Garlic [4] and\nMIX [1]. Sources are registered with SPARCE simply by the act\nof mark creation in those sources. Unlike in Garlic there is no\nneed to register a source and define its schema. Unlike MIX,\nSPARCE does not require a DTD for a source.\n11\nMETAXPath [5] allows a user to attach metadata to XML elements\n. It enhances XPath with an `up-shift' operator to navigate\nfrom data to metadata (and metadata to meta-metadata, and so\non). A user can start at any level, but only cross between levels in\nan upwards direction. In our system, it is possible to move both\nupwards and downwards between levels. METAXPath is\ndesigned to attach only metadata to data. A superimposed\ninformation element can be used to represent metadata about a\nbase-layer element, but it has many other uses.\nCXPath [3] is an XPath-like query language to query concepts,\nnot elements. The names used in query expressions are concept\nnames, not element names. In the CXPath model there is no\ndocument root--all concepts are accessible from anywhere. For\nexample, the CXPath expression `/Item' and `Item' are equivalent\n. They both return all Item elements when applied to the XML\ndata in Figure 5. The `/' used for navigation in XPath follows a\nrelationship (possibly named) in CXPath. For example, the expression\n\"/Item/{Uses}Group\" returns all groups that are related\nto an item by the `Uses' relationship when applied to an XML\nrepresentation of the RIDPad in Figure 8. CXPath uses predefined\nmappings to translate CXPath expressions to XPath expressions.\nThere is one mapping for each concept name and for each direction\nof every relationship of every XML source. In our system,\nwe intend to support multiple sources without predefined mappings\n, but we would like our query system to operate at a\nconceptual level like CXPath does.\nAs discussed in Section 4, preserving the layers of data, yet allowing\na user to express queries as if all data is in one layer\nmeans queries are expressed against views. Information Manifold\n[7] provides useful insight in to how heterogeneous source may be\nqueried via views. That system associates a capability record with\neach source to describe its inputs, outputs, and selection capabilities\n. We currently do not have such a notion in our system, but we\nexpect to consider source descriptions in the context of distributed\nquery processing mentioned in Section 4.\nSUMMARY\nOur existing framework for superimposed applications supports\nexamination and manipulation of individual superimposed and\nbase information elements. More global ways to search and manipulate\ninformation become necessary as the size and number of\ndocuments gets larger. A bi-level query system is a first step in\nthat direction. 
We have an initial implementation of a query\nsystem, but still have a large space of design options to explore.\nACKNOWLEDGMENTS\nThis work was supported in part by US NSF Grant IIS-0086002.\nWe thank all reviewers.\n\nREFERENCES\n[1] Baru, C., Gupta, A., Ludscher, B., Marciano, R.,\nPapakonstantinou, Y., Velikhov, P., and Chu, V. XML-Based\nInformation Mediation with MIX. In Proceedings of\nthe SIGMOD conference on Management of Data\n(Philadelphia, June, 1999). ACM Press, New York, NY,\n1999, 597-599.\n[2] Bowers, S., Delcambre, L. and Maier, D. Superimposed\nSchematics: Introducing E-R Structure for In-Situ\nInformation Selections. In Proceedings of ER 2002\n(Tampere, Finland, October 7-11, 2002). Springer LNCS\n2503, 2002. 90104.\n[3] Camillo, S.D., Heuser, C.A., and Mello, R. Querying\nHeterogeneous XML Sources through a Conceptual Schema.\nIn Proceedings of ER 2003 (Chicago, October 13-16, 2003).\nSpringer LNCS 2813, 2003. 186199.\n[4] Carey, M.J., Haas, L.M., Schwarz, P.M., Arya, M., Cody,\nW.F., Fagin, R., Flickner, M., Luniewski, A.W., Niblack,\nW., Petkovic, D., Thomas, J., Williams, J.H., and Wimmers,\nE.L. Towards heterogeneous multimedia information\nsystems: The Garlic approach. IBM Technical Report RJ\n9911, 1994.\n[5] Dyreson, C.E., Bohlen, M.H., and Jensen, C.S. METAXPath.\nIn Proceedings of the International Conference on Dublin\nCore and Metadata Applications (Tokyo, Japan, October\n2001). 2001, 17-23.\n[6] Halasz, F.G., and Schwartz, F. The Dexter Hypertext\nReference Model. Communications of the ACM, 37, 2, 30-39\n.\n[7] Levy, A.Y., Rajaraman, A., and Ordille, J.J. Querying\nheterogeneous information sources using source descriptions.\nIn Proceedings of VLDB (Bombay, India 1996). 251-262.\n[8] Maier, D., and Delcambre, L. Superimposed Information for\nthe Internet. In Informal Proceedings of WebDB '99\n(Philadelphia, June 3-4, 1999). 1-9.\n[9] Microsoft. COM: The Component Object Model\nSpecification, Microsoft Corporation. 1995.\n[10] Microsoft. MS XML 4.0 Software Development Kit.\nMicrosoft Corporation. Available online at\nhttp://msdn.microsoft.com/\n[11] Microsoft. XQuery Demo. Microsoft Corporation. Available\nonline at http://xqueryservices.com/\n[12] Murthy, S., Maier, D., Delcambre, L., and Bowers, S.\nPutting Integrated Information in Context: Superimposing\nConceptual Models with SPARCE. In Proceedings of the\nFirst Asia-Pacific Conference of Conceptual Modeling\n(Dunedin, New Zealand, Jan. 22, 2004). 71-80.\n[13] Wiederhold, G. Mediators in the architecture of future\ninformation systems. IEEE Computer, 25, 3 (March 1992).\n3849.\n\n12", "keywords": "Bi-level queries;implementation;system;Superimposed information management;SPARCE;superimposed;document;management;RIDPAD;query;information;Information integration;METAXPath;hyperlink"} {"name": "163", "title": "Ranking Flows from Sampled Traffic", "abstract": "Most of the theoretical work on sampling has addressed the inversion of general traffic properties such as flow size distribution , average flow size, or total number of flows. In this paper, we make a step towards understanding the impact of packet sampling on individual flow properties. We study how to detect and rank the largest flows on a link. To this end, we develop an analytical model that we validate on real traces from two networks. First we study a blind ranking method where only the number of sampled packets from each flow is known. 
Then, we propose a new method, protocol-aware ranking, where we make use of the packet sequence number (when available in transport header) to infer the number of non-sampled packets from a flow, and hence to improve the ranking. Surprisingly, our analytical and experimental results indicate that a high sampling rate (10% and even more depending on the number of top flows to be ranked) is required for a correct blind ranking of the largest flows. The sampling rate can be reduced by an order of magnitude if one just aims at detecting these flows or by using the protocol-aware method.", "fulltext": "INTRODUCTION\nThe list of the top users or applications is one of the most\nuseful statistics to be extracted from network traffic.\nNetwork operators use the knowledge of the most popular\ndestinations to identify emerging markets and applications\nor to locate where to setup new Points of Presence. Content\ndelivery networks use the popularity of sites to define\ncaching and replication strategies. In traffic engineering, the\nidentification of heavy hitters in the network can be used to\ntreat and route them differently across the network [20, 17,\n10]. Keeping track of the network prefixes that generate\nmost traffic is also of great importance for anomaly detection\n. A variation in the pattern of the most common applications\nmay be used as a warning sign and trigger careful\ninspection of the packet streams.\nHowever, the ability to identify the top users in a packet\nstream is limited by the network monitoring technology.\nCapturing and processing all packets on high speed links still\nremains a challenge for today's network equipment [16, 9].\nIn this context, a common solution is to sample the packet\nstream to reduce the load on the monitoring system and to\nsimplify the task of sorting the list of items. The underlying\nassumption in this approach is that the sampling process\ndoes not alter the properties of the data distribution.\nSampled traffic data is then used to infer properties of the\noriginal data (this operation is called inversion). The inversion\nof sampled traffic is, however, an error-prone procedure\nthat often requires a deep study of the data distribution to\nevaluate how the sampling rate impacts the accuracy of the\nmetric of interest. Although the inversion may be simple\nfor aggregate link statistics (e.g., to estimate the number\nof packets transmitted on a link, it is usually sufficient to\nmultiply the number of sampled packets by the inverse of\nthe sampling rate), it is much harder for the properties of\nindividual connections or \"flows\" [9, 11, 8].\nFor these reasons, in this paper, we address this simple,\nand so far unanswered, question: which sampling rate is\nneeded to correctly detect and rank the flows that carry the\nmost packets?\nWe define the problem as follows. Consider a traffic monitor\nthat samples packets independently of each other with\nprobability p (random sampling) and classifies them into\nsampled flows. 
At the end of the measurement period, the\nmonitor processes the list of sampled flows, ranks them\nbased on their size in packets, and returns an ordered list of\nthe t largest flows.\nWe are interested in knowing (i) whether the ordered list\ncontains all the actual largest flows in the original packet\n188\nstream (detection), and (ii) if the items in the list appear in\nthe correct order (ranking).\nWe build an analytical model and define a performance\nmetric that evaluates the accuracy of identification and ranking\nof the largest flows. We consider a flow to consist of a\nsingle TCP connection. However, our results are general\nand can be applied to alternative definitions of flow, as well.\nWe evaluate two approaches to sort the list of flows:\n(i) Blind, where the sampled flows are ranked just based\non their sampled size. This method can be applied to any\ndefinition of flow.\n(ii) Protocol-aware, where we make use of additional information\nin the packet header (e.g., the sequence number\nin TCP packets) to infer the number of non-sampled packets\nbetween sampled ones. This method can only be applied to\nflow definitions that preserve the protocol level details.\nThe contributions of this work are the following: (1) We\nperform an analytical study of the problem of ranking two\nsampled flows and compute the probability that they are\nmisranked. We propose a Gaussian approximation to make\nthe problem numerically tractable. (2) We introduce the\nprotocol-aware ranking method that uses protocol level information\nto complement the flow statistics and render the\ndetection and ranking of the largest flows more accurate. (3)\nBased on the model for the ranking of two flows, we propose\na general model to study the detection and ranking problem,\ngiven a generic flow size distribution. We define a performance\nmetric and evaluate the impact of several metric's\nparameter on the accuracy of the ranking. (4) We validate\nour findings on measurement data using publicly-available\npacket-level traces. Our results indicate that a surprisingly\nhigh sampling rate is required to obtain a good accuracy\nwith the blind approach (10% and even more depending on\nthe number of flows of interest). As for the protocol-aware\napproach, it allows to reduce the required sampling rate by\nan order of magnitude compared to the blind approach.\nThe paper is structured as follows. Next, we discuss the\nrelated literature. In Section 3 and 4, we present our model.\nSection 5 analyzes the model numerically and Section 6 validates\nit on real packet-level traces. Section 7 concludes the\npaper and provides perspectives for our future research.\nRELATED WORK\nThe inversion of sampled traffic has been extensively studied\nin the literature. The main focus has been on the inversion\nof aggregate flow properties such as flow size distribution\n[9, 11], average flow size or total number of flows [8] on\na given network link. Duffield et al. [8] study the problem of\nflow splitting and propose estimators for the total number\nof flows and for the average flow size in the original traffic\nstream. [9, 11] study the inversion of the flow size distribution\nwith two different methods. They both show that the\nmajor difficulty comes from the number of flows that are not\nsampled at all and that need to be estimated with an auxiliary\nmethod. As an auxiliary method, [8, 9] propose the use\nof the SYN flag in the TCP header to mark the beginning of\na flow. 
[9] shows that periodic and random sampling provide roughly the same result on high speed links, and so random sampling can be used for mathematical analysis due to its appealing features. [4] finds the sampling rate that assures a bounded error on the estimated size of flows contributing more than some predefined percentage of the traffic volume. [14] studies whether the number of sampled packets is a good estimator for the detection of large flows, without considering its impact on the flow ranking.
Given the potential applications of finding the list of top users, it does not come as a surprise that there has been a significant effort in the research community to find ways to track frequent items in a data stream [5, 7, 3, 10]. However, this problem has usually been addressed from a memory-requirement standpoint. All the works in the literature assume that if the algorithm and the memory size are well chosen, the largest flows can be detected and ranked with a high precision. However, in the presence of packet sampling, even if the methods rank correctly the set of sampled flows, there is no guarantee that the sampled rank corresponds to the original rank. The problem we address in this paper complements these works, as it focuses on the impact of sampling on the flow ranking.
BASIC MODEL: RANKING TWO FLOWS
In this section, we study the probability of misranking two flows of original sizes S_1 and S_2 in packets. This probability is the basis for the general model for detecting and ranking the largest flows that we will present later. Indeed, the detection and ranking of the largest flows can be transformed into a problem of ranking over a set of flow pairs. Without loss of generality, we assume S_1 < S_2. We consider random sampling of rate p. Let s_1 and s_2 denote the sizes in packets of both flows after sampling. The two sampled flows are misranked if (i) s_1 is larger than s_2, or (ii) both flows are not sampled, i.e., their sampled sizes equal zero. By combining (i) and (ii), one can see that the necessary condition for a good ranking is to sample at least one packet from the larger flow (i.e., the smaller of the two flows can disappear after sampling). The probability to misrank the two flows can then be written as P_m(S_1, S_2) = P{s_1 >= s_2}. For the case S_1 = S_2, we consider the two flows as misranked if s_1 != s_2, or if both flows are not sampled at all, i.e., s_1 = s_2 = 0.
We compute and study the misranking probability of two flows of given sizes in the rest of this section. First, we consider the blind ranking method, where only the number of sampled packets from a flow is known. For this method, we express the misranking probability as a double sum of binomials, then we present a Gaussian approximation to make the problem numerically tractable. Second, we consider the protocol-aware ranking method, for which we calculate a numerically tractable closed-form expression of the misranking probability. Note that the misranking probability is a symmetric function, i.e., P_m(S_1, S_2) = P_m(S_2, S_1).
3.1 Blind ranking
With this method, s_1 and s_2 represent the numbers of sampled packets from flows S_1 and S_2. Under our assumptions, these two variables are distributed according to a binomial distribution of probability p. Hence, we can write for S_1 < S_2,
Hence, we can write\nfor S\n1\n< S\n2\n,\nP\nm\n(S\n1\n, S\n2\n) = P {s\n1\ns\n2\n} =\nS\n1\ni=0\nb\np\n(i, S\n1\n)\ni\nj=0\nb\np\n(j, S\n2\n). (1)\nb\np\n(i, S) is the probability density function of a binomial distribution\nof probability p, i.e., the probability of obtaining i\nsuccesses out of S trials. We have b\np\n(i, S) =\nS\ni\np\ni\n(1 - p)\nS-i\nfor i = 0, 1, ..., S, and b\np\n(i, S) = 0 for i < 0 and i > S. The\n189\nprobability to misrank two flows of equal sizes is given by\nP {s\n1\n= s\n2\nor s\n1\n= s\n2\n= 0} = 1 - P {s\n1\n= s\n2\n= 0}\n= 1 S\n1\ni=1\nb\n2\np\n(i, S\n1\n).\nUnfortunately, the above expression for the misranking\nprobability is numerically untractable since it involves two\nsums of binomials. For large flows of order S packets, the\nnumber of operations required to compute such a probability\nis on the order of O(S\n3\n), assuming that the complexity of\nthe binomial computation is on the order of O(S). The\nproblem becomes much more complex if one has to sum\nover all possible flow sizes (i.e., O(S\n5\n)). For this reason, we\npropose next a Gaussian approximation to the problem of\nblind ranking that is accurate and easy to compute. We use\nthis approximation to study the ranking performance as a\nfunction of the sampling rate and the flow sizes.\n3.1.1\nGaussian approximation to blind ranking\nConsider a flow made of S packets and sampled at rate\np. The sampled size follows a binomial distribution. However\n, it is well known that the binomial distribution can be\napproximated by a Normal (or Gaussian) distribution when\np is small and when the product pS is on the order of one\n(flows for which, on average, at least few packets are sampled\n) [21, pages 108109]. We assume that this is the case\nfor the largest flows, and we consider the sampled size of\na flow as distributed according to a Normal distribution of\naverage pS and of variance p(1 - p)S. Using this approximation\n, one can express the misranking probability for the\nblind ranking problem in the following simple form.\nProposition 1. For any two flows of sizes S\n1\nand S\n2\npackets (S\n1\n= S\n2\n), the Gaussian approximation gives,\nP\nm\n(S\n1\n, S\n2\n)\n1\n2 erf c\n|S\n2\n- S\n1\n|\n2(1/p - 1)(S\n1\n+ S\n2\n)\n,\n(2)\nwhere erfc(x) = (\n2\n\n)\n\nx\ne\n-u\n2\ndu is the complementary error\ncumulative function.\nProof: Consider two flows of sizes S\n1\nand S\n2\nin packets\nsuch that S\n1\n< S\n2\n. Their sampled versions s\n1\nand s\n2\nboth\nfollow Normal distributions of averages pS\n1\nand pS\n2\n, and\nof variances p(1 - p)S\n1\nand p(1 - p)S\n2\n. We know that the\nsum of two Normal variables is a Normal variable. So the\ndifference s\n1\n- s\n2\nfollows a Normal distribution of average\np(S\n1\n- S\n2\n) and of variance p(1 - p)(S\n1\n+ S\n2\n). We have then\nthis approximation for the misranking probability:\nP\nm\n(S\n1\n, S\n2\n) = P {s\n1\n- s\n2\n0}\nP V >\np(S\n2\n- S\n1\n)\np(1 - p)(S\n1\n+ S\n2\n)\n=\n1\n2 erfc\nS\n2\n- S\n1\n2(1/p - 1)(S\n1\n+ S\n2\n)\n. (3)\nV is a standard Normal random variable. Given the symmetry\nof the misranking probability, one can take the absolute\nvalue of S\n2\n- S\n1\nin (3) and get the expression stated in the\nproposition, which is valid for all S\n1\nand S\n2\n.\nFor S\n1\n= S\n2\n, one can safely approximate the misranking\nprobability to be equal to 1. 
3.2 Protocol-aware ranking
Packets can carry in their transport header an increasing sequence number. A typical example is the byte sequence number in the TCP header. Another example is the sequence number in the header of the Real Time Protocol (RTP) [19]. One can use this sequence number, when available, to infer the number of non-sampled packets (or bytes in the case of TCP) between sampled ones, and hence to improve the accuracy of ranking. The size of the sampled flow in this case is no longer the number of packets collected, but rather the number of packets that exist between the first and the last sampled packets of the flow. Although this solution is limited to flows whose packets carry a sequence number, we believe that the study of this ranking method is important given the widespread use of the TCP protocol. Our objective is to understand how the use of protocol-level information can supplement the simple, and more general, blind method, and whether it is worth the additional overhead it introduces (i.e., storing two sequence numbers per flow record).
In the following, we calculate the misranking probability of two flows of given sizes when using the protocol-aware method. This probability will be used later in the general ranking problem. The main contribution of this section is a closed-form expression for the misranking probability that is numerically tractable, without the need for any approximation.
Let S be the size of a flow in packets. Let $s_b$, $s_b = 1, 2, \dots, S$, denote the (packet) sequence number carried by the first sampled packet, and let $s_e$, $s_e = S, S-1, \dots, s_b$, denote the sequence number carried by the last sampled packet. Given $s_b$ and $s_e$, one can estimate the size of the sampled flow in packets as $s = s_e - s_b + 1$. The error in this estimation comes from the non-sampled packets that are transmitted before $s_b$ and after $s_e$. We give next the distribution of s, which is needed for the computation of the misranking probability, and then we state our main result.
Before presenting the analysis, note that this new flow size estimator only counts the packets that are transmitted with distinct sequence numbers. In the case of TCP, this corresponds to the number of bytes received at the application layer, rather than the number of bytes carried over the network. It is equivalent to assuming that the probability of sampling a retransmitted (or duplicated) packet is negligible. This is a reasonable assumption if the loss rate is low. We will address this aspect in more detail in Section 6.
Consider a flow of size $S \ge 2$ in packets. Using the above definition of s, the sampled flow has a size of i packets, $i \ge 2$, with probability
$$P\{s = i\} = \sum_{k=1}^{S-i+1} P\{s_b = k\}\, P\{s_e = k + i - 1\}.$$
We have $P\{s_b = k\} = (1-p)^{k-1} p$ and $P\{s_e = k + i - 1\} = (1-p)^{S-k-i+1} p$. This gives
$$P\{s = i\} = \sum_{k=1}^{S-i+1} (1-p)^{k-1} p\, (1-p)^{S-k-i+1} p = p^2 (1-p)^{S-i} (S - i + 1). \qquad (4)$$
As for $i = 0$, we have $P\{s = 0\} = (1-p)^S$ for $S \ge 1$, and for $i = 1$ we have $P\{s = 1\} = p(1-p)^{S-1} S$ for $S \ge 1$. It is easy to prove that the cumulative distribution of s is the following for all values of S:
$$P\{s \le i\} = p(1-p)^{S-i}(S - i + 1) + (1-p)^{S-i+1}, \qquad i \ge 1. \qquad (5)$$
We come now to the misranking probability, which we recall is a symmetric function. For $S_1 < S_2$ we have
$$P_m(S_1, S_2) = P\{s_2 \le s_1\} = \sum_{i=0}^{S_1} P\{s_1 = i\} \sum_{j=0}^{i} P\{s_2 = j\}, \qquad (6)$$
and for $S_1 = S_2$ we have
$$P_m(S_1, S_2) = 1 - \sum_{i=1}^{S_1} P\{s_1 = i\}^2. \qquad (7)$$
Our main result is the following.
Proposition 2. For $S_1 < S_2$, the misranking probability is equal to
$$P_m(S_1, S_2) = (1-p)^{S_1}(1-p)^{S_2} + p(1-p)^{S_1-1} S_1 \big[\, p(1-p)^{S_2-1} S_2 + (1-p)^{S_2} \big] + p^3\, \frac{\partial^2 F}{\partial x\, \partial y}(1-p, 1-p) + p^2\, \frac{\partial F}{\partial x}(1-p, 1-p),$$
where
$$F(x, y) = x\, y^{S_2 - S_1 + 1} + \dots + x^{S_1 - 1} y^{S_2 - 1} = \frac{x\, y^{S_2 - S_1 + 1}\big(1 - (xy)^{S_1 - 1}\big)}{1 - xy}.$$
For $S_1 = S_2 = S$, the misranking probability is equal to
$$P_m(S, S) = 1 - p^2 (1-p)^{2(S-1)} S^2 - p^4\, \frac{\partial^2 G}{\partial x\, \partial y}(1-p, 1-p),$$
where
$$G(x, y) = xy + x^2 y^2 + \dots + x^{S-1} y^{S-1} = \frac{xy - (xy)^S}{1 - xy}.$$
Proof: One can validate the results by plugging (4) and (5) into (6) and (7).
Note that the main gain of writing the misranking probability in such a condensed form is a complexity that drops from $O(S^3)$ in (6) to $O(S)$ in our final result. This gain comes from the closed-form expression for the cumulative distribution in (5) and from introducing the two functions F(x, y) and G(x, y), which transform two series of complexity $O(S^2)$ into closed-form expressions of complexity $O(S)$.
We solve the derivatives in the above equations using the symbolic toolbox of Matlab, which gives explicit expressions for the misranking probability. These expressions are simple to compute, but they span multiple lines, so we omit them for lack of space.
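As an illustration, the sketch below evaluates the closed form of Proposition 2 with symbolic derivatives (using sympy in place of the Matlab symbolic toolbox mentioned above) and cross-checks it against the direct sums (6)-(7) built from the distribution (4)-(5); the flow sizes used are small illustrative values:

```python
import sympy as sp

def pmf_aware(i, S, p):
    """P{s = i} for the protocol-aware size estimator, equations (4)-(5)."""
    if i == 0:
        return (1 - p) ** S
    if i == 1:
        return p * (1 - p) ** (S - 1) * S
    return p ** 2 * (1 - p) ** (S - i) * (S - i + 1)

def pm_aware_direct(S1, S2, p):
    """Double sum (6), or (7) when S1 = S2; O(S^2) operations."""
    if S1 == S2:
        return 1 - sum(pmf_aware(i, S1, p) ** 2 for i in range(1, S1 + 1))
    return sum(pmf_aware(i, S1, p) * sum(pmf_aware(j, S2, p) for j in range(i + 1))
               for i in range(S1 + 1))

def pm_aware_closed(S1, S2, p):
    """Closed form of Proposition 2, with the derivatives of F and G taken symbolically."""
    x, y = sp.symbols('x y')
    q = 1 - p
    if S1 == S2:
        S = S1
        G = (x * y - (x * y) ** S) / (1 - x * y)
        d2G = sp.diff(G, x, y).subs({x: q, y: q})
        return float(1 - p ** 2 * q ** (2 * (S - 1)) * S ** 2 - p ** 4 * d2G)
    F = x * y ** (S2 - S1 + 1) * (1 - (x * y) ** (S1 - 1)) / (1 - x * y)
    d2F = sp.diff(F, x, y).subs({x: q, y: q})
    dF = sp.diff(F, x).subs({x: q, y: q})
    return float(q ** S1 * q ** S2
                 + p * q ** (S1 - 1) * S1 * (p * q ** (S2 - 1) * S2 + q ** S2)
                 + p ** 3 * d2F + p ** 2 * dF)

# The two computations agree; only the closed form remains usable for large sizes.
print(pm_aware_direct(8, 12, 0.2), pm_aware_closed(8, 12, 0.2))
print(pm_aware_direct(10, 10, 0.2), pm_aware_closed(10, 10, 0.2))
```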
3.3 Analysis of the misranking probability
3.3.1 The blind case
We use the Gaussian approximation to study how the misranking probability varies with the sampling rate and with the sizes of both flows, in particular with their difference. The study of the impact of the flow sizes is important to understand the relation between the flow size distribution and the ranking of the largest flows.
The misranking probability is a decreasing function of the sampling rate. It moves to zero when p moves to 1, and to 0.5 when p approaches zero (the Gaussian approximation does not account for the case p = 0, where the misranking probability should be equal to 1 based on our definition). Therefore, there exists one sampling rate that leads to some desired misranking probability, and any lower sampling rate results in a larger error.
We study now how the misranking probability varies with the sizes of both flows. Take $S_1 = S_2 - k$, with k a positive integer. From (2) and for fixed k, the misranking probability increases with $S_1$ and $S_2$ (erfc(x) is a decreasing function of x). This indicates that it is more difficult to rank correctly two flows that differ by k packets as their sizes increase in absolute terms. The result is different if we take the size of one flow equal to $\alpha < 1$ times the size of the second, i.e., $S_1 = \alpha S_2$. Here, $(S_2 - S_1)/\sqrt{S_1 + S_2}$ equals $\sqrt{S_1}\,(1-\alpha)/\sqrt{\alpha(1+\alpha)}$, which increases with $S_1$. Hence, the misranking probability given in (2) decreases when $S_1$ increases. We conclude that, when the two flow sizes maintain the same proportion, it is easier to obtain a correct ranking when they are large in absolute terms.
We can now generalize the result above. One may think that the larger the flows, the better the ranking of their sampled versions. Our last two examples indicate that this is not always the case. The ranking accuracy depends on the relative difference of the flow sizes. In general, to obtain a better ranking, the difference between the two flow sizes must increase with the flow sizes, and the increase must be larger than a certain threshold. This threshold is given by (2): the difference must increase at least as the square root of the flow sizes. This is an interesting finding. In the context of the general ranking problem, it can be interpreted as follows. Suppose that the flow size has a cumulative distribution function y = F(x). As we move to the tail of the distribution (because we focus more and more on large flows, or because the number of flows available for ranking increases), the size of the flows to be ranked increases. The ranking performance improves if the difference between flow sizes increases faster than $\sqrt{x}$. This is equivalent to saying that dx/dy should increase with x faster than $\sqrt{x}$. All common distributions satisfy this condition, at least at their tails. For example, with the exponential distribution we have $dx/dy \propto e^{\lambda x}$ ($1/\lambda$ is the average), while for the Pareto distribution we have $dx/dy \propto x^{\beta + 1}$ ($\beta$ is the shape parameter).
3.3.2 The protocol-aware case
The first difference with the blind case is in the estimation error ($S - s = s_b - 1 + S - s_e$), which can be safely assumed to be independent of the flow size for large flows (it depends only on p). This means that if two large flows keep the same distance between them while their sizes increase, their ranking maintains the same accuracy. Their ranking improves if the difference between their sizes increases as well, and it deteriorates if the difference between their sizes decreases. So, in contrast to the blind case, the threshold for the ranking to improve is only that the larger flow should have its size increase a little faster than the smaller one. In the context of the general ranking problem, where flow sizes are distributed according to a cumulative distribution function y = F(x), and when the top flows become larger, the protocol-aware ranking improves if the derivative dx/dy increases with x. This is equivalent to saying that the function F(x) should be concave, which is satisfied by most common distributions at their tail. For blind ranking, concavity was not enough to obtain a better ranking; the derivative dx/dy had to increase faster than $\sqrt{x}$. In conclusion, the condition for obtaining a better ranking as we move to the tail of the flow size distribution is less strict with the protocol-aware method, which is an indication of its good performance.
The second difference with the blind case is in the relation between the ranking accuracy and the sampling rate. Consider two large flows of sizes $S_1$ and $S_2$ in packets, and let $s_1$ and $s_2$ denote their sampled sizes. The coefficient of variation of the difference $s_2 - s_1$ is an indication of how well the ranking performs: for $S_1 < S_2$ we are interested in $P\{s_1 \ge s_2\}$, which, by Chebyshev's inequality, can be expected to behave like $\mathrm{VAR}[s_1 - s_2]/\mathrm{E}[s_1 - s_2]^2$, the square of the coefficient of variation, so a small coefficient of variation results in a better ranking. It is easy to prove that this coefficient of variation scales as $1/p$ for protocol-aware ranking and as $1/\sqrt{p}$ for blind ranking. This is again an important finding. It tells us that when the sampling rate is very small, blind ranking could (asymptotically) perform better than protocol-aware ranking. Our numerical and experimental results will confirm this finding.
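As a quick sanity check of this scaling, the following sketch evaluates both coefficients of variation under the model of Section 3: binomial sampled sizes for the blind method, and, for the protocol-aware method, an estimation error treated as two geometric "missed tails" (an approximation for large flows); the sizes below are illustrative.

```python
from math import sqrt

def cv_blind(S1, S2, p):
    # s_i ~ Binomial(S_i, p): Var[s2 - s1] = p(1-p)(S1+S2), E[s2 - s1] = p(S2 - S1).
    return sqrt(p * (1 - p) * (S1 + S2)) / (p * (S2 - S1))

def cv_aware(S1, S2, p):
    # For large flows, the error S - s is roughly the sum of two geometric
    # "missed tails", each of mean (1-p)/p and variance (1-p)/p**2, so
    # Var[s2 - s1] ~ 4(1-p)/p**2 while E[s2 - s1] ~ S2 - S1, independent of p.
    return sqrt(4 * (1 - p)) / (p * (S2 - S1))

S1, S2 = 1000, 1200
for p in (0.1, 0.01, 0.001):
    # cv_blind grows like 1/sqrt(p); cv_aware grows like 1/p and overtakes it
    # when the sampling rate becomes very small.
    print(p, cv_blind(S1, S2, p), cv_aware(S1, S2, p))
```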
GENERAL MODEL: DETECTING AND RANKING THE LARGEST FLOWS
We generalize the previous model from the ranking of two flows to the detection and ranking of the top t flows, $t = 1, 2, \dots, N$. The misranking probability $P_m(S_1, S_2)$ previously calculated is the basis for this generalization. Let $N \ge t$ denote the total number of flows available in the measurement period before sampling. We want the sampled list of top t flows to match the list of top t flows in the original traffic. Two criteria are considered to decide whether this match is accurate. The first criterion requires the two lists to be identical; this corresponds to the ranking problem. The second, less constrained, criterion requires the two lists to contain the same flows regardless of their relative order within the list; this corresponds to the detection problem. For both problems, the quality of the result is expressed as a function of the sampling rate p, the flow size distribution, the number of flows to rank t, and the total number of flows N.
4.1 Performance metric
In order to evaluate the accuracy of detection and ranking, we need to define a performance metric that is easy to compute and that focuses on the largest flows. A flow at the top of the list can be misranked with a neighboring large flow or with a distant small flow. We want our metric to differentiate between these two cases and to penalize the latter one more; a top-10 flow replaced by the 100-th flow in the sampled top list is worse than the same top-10 flow being replaced by the 11-th flow. We also want our metric to be zero when the detection and ranking of the top flows are correct.
We introduce our performance metric using the ranking problem; the performance metric for the detection problem is a straightforward extension. Let us form all flow pairs where the first element of a pair is a flow in the top t and the second element is anywhere in the sorted list of the N original flows. The number of these pairs is equal to $(N-1) + (N-2) + \dots + (N-t) = (2N - t - 1)t/2$. We then count the pairs in this set that are misranked after sampling, and we take this count as our metric for ranking accuracy. This sum indicates how good the ranking is at the top of the list. It is equal to zero when the ranking is correct. When the ranking is not correct, it takes a value proportional to the original rank of the flows that have taken a slot in the top-t list. For example, if the top flow is replaced by its immediate successor in the list, the metric returns a ranking error of 1; if the same flow is instead replaced by a distant flow, say the 100-th, the metric returns an error of 99. Also, note that our metric does not account for any misranking of flows outside the list of top t flows: for any two flows n and m such that $n > m > t$, the fact that n takes the position of m does not add anything to our performance metric, since the metric requires at least one element of a flow pair to be in the original list of top t flows.
In the detection problem, we are no longer interested in comparing flow pairs whose both elements are in the top t list.
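The detection variant of the metric is completed just below; as a concrete illustration, the following sketch shows how both counts can be computed for a single realization of flow sizes and sampled sizes (the tie-handling convention, which follows Section 3, the helper names, and the synthetic sizes are our illustrative choices):

```python
import random

def misranked_pairs(original, sampled, t, detection=False):
    """Count misranked flow pairs with at least one element among the top-t
    original flows (ranking metric), or with exactly one element in the top t
    (detection metric).  `original` and `sampled` are lists of flow sizes."""
    n = len(original)
    order = sorted(range(n), key=lambda f: original[f], reverse=True)
    top = set(order[:t])
    errors = 0
    for i in range(n):
        for j in range(i + 1, n):                      # each unordered pair once
            in_top = (i in top) + (j in top)
            if (detection and in_top != 1) or (not detection and in_top == 0):
                continue
            if original[i] == original[j]:
                bad = sampled[i] != sampled[j] or sampled[i] == 0
            else:
                big, small = (i, j) if original[i] > original[j] else (j, i)
                bad = sampled[big] <= sampled[small]
            errors += bad
    return errors

# One realization: random packet sampling at rate p over synthetic flow sizes.
rng = random.Random(1)
p, original = 0.05, [rng.randint(1, 2000) for _ in range(500)]
sampled = [sum(rng.random() < p for _ in range(S)) for S in original]
print(misranked_pairs(original, sampled, t=10),
      misranked_pairs(original, sampled, t=10, detection=True))
```

The metrics of this section are the averages of these counts over the flow size distribution and over the sampling runs.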
We are only interested in the ranking between flows\nin the top t list and those outside the list. Therefore, our\ndetection metric is defined as the number of misranked flow\npairs, where the first element of a pair is in the list of top t\nflows and the second element is outside this list (non top t).\nThe above metrics return one value for each realization\nof flow sizes and of sampled packets. Given that we want\nto account for all realizations, we define the performance\nmetrics as the number of misranked flow pairs averaged over\nall possible values of flow sizes in the original list of N flows\nand over all sampling runs. We deem the ranking/detection\nas acceptable when our metric takes a value below one (i.e.,\non average less than one flow pair is misranked).\nIn addition to the above, our metrics have the advantage\nof being easily and exactly calculable. Performance metrics\nbased on probabilities (e.g.,[12]) require lot of assumptions\nthat make them only suitable for computing bounds, but\nnot exact values.\n4.2\nComputation of the performance metric\nfor the ranking problem\nConsider a flow of i packets belonging to the list of top t\nflows in the original traffic (before sampling). First, we compute\nthe probability that this flow is misranked with another\nflow of general size and general position. Denote this probability\nby P\nmt\n(i), where m stands for misranking and t for\ntop. Then, we average over all values of i to get\nP\nmt4\n. This\nlatter function gives us the probability that, on average, the\ntop t-th flow is misranked with another flow. Thus, our performance\nmetric, which is defined as the average number of\nmisranked flow pairs where at least one element of a pair\nis in the top t, is equal to (2N - t - 1)t\nP\nmt\n/2. Next, we\ncompute the value of\nP\nmt\n.\nLet p\ni\ndenote the probability that the size of a general\nflow is equal to i packets, and P\ni\ndenote the flow size complementary\ncumulative distribution, i.e., P\ni\n=\nj=i\np\nj\n. For\na large number of flows N and a high degree of multiplexing,\nwe consider safe to assume that flow sizes are independent\nof each other (see [2] for a study of the flow size correlation\non a OC-12 IP backbone link). A flow of size i belongs to\nthe list of top t flows if the number of flows in the original\ntotal list, with a size larger than i, is less or equal than t - 1.\nSince each flow can be larger than i with probability P\ni\nindependently\nof the other flows, we can write the probability\nthat a flow of size i belongs to the list of the top t flows\n4\nNote that the distribution of the size of a flow at the top\nof the list is different from that of a generic flow.\n192\nas P\nt\n(i, t, N ) =\nt-1\nk=0\nb\nP\ni\n(k, N - 1), where b\nP\ni\n(k, N - 1)\nis the probability to obtain k successes out of N - 1 trials\n, P\ni\nbeing the probability of a success. 
The probability that the t-th largest flow has a size of i packets is equal to $P_t(i) = p_i\, P_t(i, t, N)/P_t(t, N)$. $P_t(t, N)$ is the probability that a flow of general size is among the top t in the original total list, which is simply equal to $t/N$.
Using the above notation, one can write the misranking probability between a top-t flow of original size i packets and any other flow as follows:
$$P_{mt}(i) = \frac{1}{P_t(i, t, N)} \Big[ \sum_{j=1}^{i-1} p_j\, P_t(i, t, N-1)\, P_m(j, i) + \sum_{j=i}^{\infty} p_j\, P_t(i, t-1, N-1)\, P_m(i, j) \Big]. \qquad (8)$$
In this expression, we sum over all possible original sizes of the other flow (the variable j), and we separate the case when this other flow is smaller than i from the case when it is larger than i (in the latter case, at most $t-2$ flows can be larger than i packets if we want the flow of size i to be in the top t). $P_m(i, j)$ is the misranking probability of two flows of sizes i and j packets, which we calculated in the previous section for the two ranking methods. $\bar{P}_{mt}$ is then equal to $\sum_{i=1}^{\infty} P_t(i)\, P_{mt}(i)$.
For protocol-aware ranking, $P_m(i, j)$ is given explicitly in Proposition 2 and can be easily computed. For blind ranking, we use the Gaussian approximation of Proposition 1, which we recall holds when at least one of the two flows to be compared is large.
4.3 Computation of the performance metric for the detection problem
Consider the probability that a flow among the top t is swapped with a flow that does not belong to the top t. Let $\tilde{P}_{mt}$ denote this probability. Following the same approach described in Section 4.2, we can write
$$\tilde{P}_{mt} = \frac{1}{\tilde{P}_t} \sum_{i=1}^{\infty} \sum_{j=1}^{i-1} p_i\, p_j\, P_t(j, i, t, N)\, P_m(j, i).$$
To get this expression for $\tilde{P}_{mt}$, we sum over all possible values of the size of the flow in the top t (index i) and all possible values of the size of the other flow not among the top t (index j). In this expression, $p_i$ and $p_j$ represent the probability that the size of a flow is equal to i or j packets, respectively. $P_m(j, i)$ is the probability that two flows of sizes i and j are misranked; it is given by the Gaussian approximation of Proposition 1 for the blind method and by the result stated in Proposition 2 for the protocol-aware method. $P_t(j, i, t, N)$ is the joint probability that a flow of size i belongs to the list of the top t flows while another flow of size j does not belong to it (i.e., it is in the bottom $N-t$ flows). $\tilde{P}_t$ is the joint probability that a flow of any size belongs to the list of the top t flows while another flow of any size does not belong to this list; it is equal to $t(N-t)/(N(N-1))$.
We now compute $P_t(j, i, t, N)$ for $j < i$, i.e., the probability that flow i belongs to the top list while flow j does not. The number of flows larger than i should be smaller than t, while the number of flows larger than j should be larger than t. The probability that a flow size is larger than i is $P_i = \sum_{k=i}^{\infty} p_k$. The probability that it is larger than j is $P_j = \sum_{k=j}^{\infty} p_k$. The probability that a flow size is between j and i, given that it is smaller than i, is $(P_j - P_i)/(1 - P_i)$. We call it $P_{j,i}$.
It follows that:
$$P_t(j, i, t, N) = \sum_{k=0}^{t-1} b_{P_i}(k, N-2) \sum_{l=t-k-1}^{N-k-2} b_{P_{j,i}}(l, N-k-2).$$
The first sum accounts for the probability of seeing fewer than t flows above i packets. The second sum accounts for the probability of seeing more than t flows above j, given that k flows ($k < t$) were already seen above i. For $t = 1$, $P_t(j, i, t, N)$ is nothing other than $P_t(i, t, N-1)$, and $\bar{P}_{mt}$ and $\tilde{P}_{mt}$ are equal (i.e., the ranking and the detection problems coincide).
Once $\tilde{P}_{mt}$ is computed, we multiply it by the total number of flow pairs with one element in the top t and the other outside, which is equal to $t(N-t)$. Our metric for the detection problem is the result of this multiplication. As for the ranking problem, we want this metric to be below one for the detection of the top t flows to be accurate.
NUMERICAL RESULTS
We analyze now the accuracy of identifying and ranking the largest flows in a packet stream for both the blind and protocol-aware methods. Our metrics require the following input: $p_i$, the flow size distribution, and N, the total number of flows observed on the link during the measurement period. To derive realistic values for these two quantities, we consider two publicly available packet-level traces. The first trace is Abilene-I, collected by NLANR [15] on an OC-48 (2.5 Gbps) link of the Abilene Network [1]. The second trace has been collected by the Metropolis project [13] on a Gigabit Ethernet access link from the Jussieu University campus in Paris to the Renater Network [18]. Table 1 summarizes the characteristics of the two traces.

Table 1: Summary of the traces
Trace            | Jussieu        | Abilene
Link speed       | GigE (1 Gbps)  | OC-48 (2.5 Gbps)
Duration         | 2 hours        | 30 minutes
TCP connections  | 11M            | 15M
Packets          | 112M           | 125M

We model the flow size distribution in the traces with a Pareto distribution. We opted for Pareto since it is known to be appropriate for modeling flow sizes in the Internet due to its heavy-tailed feature [6]. Note that our goal is not to find an accurate approximation of the distribution of flow sizes in our traces, but rather a general, well-known distribution that approaches the actual flow sizes. In this section we analyze a wide range of parameters, while Section 6 focuses on the performance we observe in the two packet-level traces.
The Pareto distribution is continuous, with a complementary cumulative distribution function given by $P\{S > x\} = (x/a)^{-\beta}$. $\beta > 0$ is a parameter describing the shape of the distribution and $a > 0$ is a parameter describing its scale. The Pareto random variable takes values larger than a and has an average value equal to $a\beta/(\beta - 1)$. The tail of the Pareto distribution becomes heavier as $\beta$ decreases.
We use our traces to derive an indicative value of the shape parameter $\beta$. To this end, we compute the empirical complementary cumulative distribution of flow sizes and we plot it on a log-log scale. A heavy-tailed distribution of shape parameter $\beta$ decays linearly on a log-log scale at rate $-\beta$. The empirical distributions are shown in Figure 1.
Figure 1: Empirical flow size distribution (log-log complementary CDF of the flow size in Kbytes for the Abilene and Jussieu traces, with reference slopes $x^{-2}$ and $x^{-1.5}$).
The plots show that $\beta$ equal to 2 suits the Abilene trace and $\beta$ equal to 1.5 suits the Jussieu one.
This means that the flow\nsize distribution has a heavier tail in the Jussieu trace.\nThen, we compute the average flow size in packets to get\nthe starting point a for the Pareto distribution. As an average\nflow size we measure 5.76 Kbytes and 7.35 packets on\nthe Abilene trace, and 9.22 Kbytes and 9.9 packets on the\nJussieu trace. The total number of flows N is set by taking a\nmeasurement interval equal to one minute, then multiplying\nthis interval by the average arrival rate of flows per second\non each trace. This gives N = 487 Kflows for the Abilene\ntrace and N = 103 Kflows for the Jussieu one.\nIn the rest of this section, all figures plot the ranking metric\nversus the packet sampling rate p on a log-log scale. We\nvary p from 0.1% to 50%. Each figure shows different lines\nthat correspond to different combinations of t, , and N .\nWe are interested in the regions where the value of the metric\nis below one, indicating that the ranking is accurate on\naverage. To ease the interpretation of results in the figures,\nwe plot the horizontal line of ordinate 1.\n5.1\nBlind ranking\n5.1.1\nImpact of the number of flows of interest\nThe first parameter we study is t, the number of largest\nflows to rank. The purpose is to show how many flows can\nbe detected and ranked correctly for a given sampling rate.\nWe set , N , and the average flow size to the values described\nbefore. The performance of blind ranking the top t\nflows is shown in Figure 2 for both traces. We observe that\nthe larger the number of top flows of interest, the more difficult\nit is to detect and rank them correctly. In particular,\nwith a sampling rate on the order of 1%, it is possible to\nrank at most the top one or two flows. As we focus at larger\nvalues of t, the required sampling rate to get a correct ranking\nincreases well above 10%. Note that with a sampling\nrate on the order of 0.1%, it is almost impossible to detect\neven the largest flow. We also observe that the ranking on\nthe Jussieu trace behaves slightly better than that on the\nAbilene trace. The Jussieu trace has a heavier tail for its\nflow size distribution, and so the probability to get larger\nflows at the top of the list is higher, which makes the ranking\nmore accurate. This will be made clear next as we will\nstudy the impact of the shape parameter .\n5.1.2\nImpact of the flow size distribution\nWe consider the blind ranking of the top 10 flows varying\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nAbilene trace, N=487K, beta=2, blind ranking\nt=25\nt=10\nt=5\nt=2\nt=1\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nJussieu trace, N=103K, beta=1.5, blind ranking\nt=25\nt=10\nt=5\nt=2\nt=1\nFigure 2: Performance of blind ranking varying the\nnumber t of top flows of interest\nthe shape parameter for the Pareto distribution among five\ndistinct values: 3, 2.5, 2, 1.5 and 1.2. Note that for 2\nthe Pareto distribution is known to be heavy tailed (infinite\nvariance). The other parameters of the model (N and the\naverage flow size) are set as before. The values taken by our\nmetric are shown in Figure 3 for both traces. We can make\nthe following observations from the figure:\nGiven a sampling rate, the ranking accuracy improves\nas becomes smaller, i.e., the tail of the flow size\ndistribution becomes heavier. 
Indeed, when the distribution\ntail becomes heavier, the probability to obtain\nlarger flows at the top of the list increases, and since it\nis simpler to blindly rank larger flows (for distributions\nsatisfying the square root condition, see Section 3.1.1),\nthe ranking becomes more accurate.\nThe ranking is never correct unless the sampling rate is\nvery high. In our setting, one needs to sample at more\nthan 50% to obtain an average number of misranked\nflow pairs below one for a value of equal to 1.5 (i.e,\nheavy tailed distribution), and at more than 10% for\na value of equal to 1.2 (i.e., pronounced heavy tailed\ndistribution). For larger values of (i.e., lighter tail),\nthe sampling rate needs to be as high as 100%.\n5.1.3\nImpact of the total number of flows\nAnother important parameter in the ranking problem is\nN , the total number of flows available during the measurement\nperiod. When N increases, the flows at the top of\nthe list should become larger, and therefore as we saw in\nSection 3.1.1, the blind ranking accuracy should improve\n194\n10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\n10\n7\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nAbilene trace, N = 487K, t = 10 flows, blind ranking\nbeta=3\nbeta=2.5\nbeta=2\nbeta=1.5\nbeta=1.2\n10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\n10\n7\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nJussieu trace, N = 103K, t = 10 flows, blind ranking\nbeta=3\nbeta=2.5\nbeta=2\nbeta=1.5\nbeta=1.2\nFigure 3: Performance of blind ranking varying the\nshape parameter of the flow size distribution\nfor flow size distributions satisfying the square root condition\n(in particular the Pareto distribution we are considering\nhere). N varies with the utilization of the monitored link\nthe higher the utilization, the larger the number of flows. N\ncan also vary with the duration of the measurement period\nthe longer we wait before ranking and reporting results,\nthe larger the number of flows.\nWe study the impact of N on the blind ranking accuracy\n. We take the same value of N used in the previous\nsections and computed over one minute measurement period\n(487 Kflows for the Abilene trace and 103 Kflows for\nthe Jussieu trace), then we multiply it by some constant\nfactor ranging from 0.5 (2 times fewer flows) to 5 (5 times\nmore flows). Results are shown in Figure 4. The lines in\nthe figures correspond to a factor value equal to: 0.5, 1,\n2.5, and 5. In these figures, we consider the ranking of the\ntop 10 flows with the values of and average flow size set\nfrom the traces. Clearly, the ranking accuracy improves as\nN increases. However, in our setting, this improvement is\nstill not enough to allow a perfect ranking. One can always\nimagine increasing N (e.g., by increasing the measurement\nperiod) until the top t flows are extremely large and hence,\nperfectly detected and ranked.\n5.2\nProtocol-aware ranking\nProtocol-aware ranking takes advantage of the information\ncarried in the transport header of the sampled packets\nto infer the number of non-sampled packets of a flow. We\nuse our model to check whether this improvement exists and\nto evaluate it. 
Remember that we are always in the context\nof low retransmission and duplication rates, which is neces-10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nAbilene trace, beta=2, t = 10, blind ranking\nN=244K\nN=487K\nN=1.2M\nN=2.4M\n10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nJussieu trace, beta = 2, t = 10, blind ranking\nN=52K\nN=103K\nN=258K\nN=515K\nFigure 4: Performance of blind ranking varying the\ntotal number of flows\nsary to remove the discrepancy between carried data volume\n(throughput) and application data volume (goodput).\nUsing the previous values for N , and average flow size,\nwe reproduce Figure 2, but this time for the protocol-aware\ncase. This leads to Figure 5, which illustrates the impact of\nthe number of largest flows to rank. For lack of space, we\nomit the other figures.\nWe compare this new figure to its counterpart in the blind\ncase. We make the following two observations:\n(i) The protocol-aware method improves the accuracy of\nthe largest flows ranking by an order of magnitude for high\nsampling rates (above 1%). For example, for the Abilene\ntrace, a sampling rate on the order of 50% was necessary to\ndetect and rank the largest 5 flows with the blind method.\nNow, with the protocol-aware method, a sampling rate on\nthe order of 5% is sufficient. The same conclusion applies\nto the Jussieu trace. A sampling rate on the order of 10%\nis needed. With the protocol-aware method, it becomes on\nthe order of 1%.\n(ii) The protocol-aware method does not improve the performance\nwhen applied at low sampling rates (above 1%).\nThis can be clearly seen if we compare the plots between\nboth figures for sampling rates below 1%. This results confirms\nour observations in Section 3.3.2.\n5.3\nLargest flows detection\nTo illustrate the difference between ranking and detection,\nwe consider the same scenario as in Section 5.1.1. We plot\nthe detection metric as a function of the sampling rate for\ndifferent values of t (the number of top flows of interest) and\nfor both Abilene and Jussieu traces. This gives Figure 6 for\n195\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nAbilene trace, N=487K, beta=2, protocol-aware ranking\nt=25\nt=10\nt=5\nt=2\nt=1\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nJussieu trace, N=103K, beta=1.5, protocol-aware ranking\nt=25\nt=10\nt=5\nt=2\nt=1\nFigure 5: Performance of protocol-aware ranking\nvarying the number t of top flows of interest\nblind ranking and Figure 7 for protocol-aware ranking. A\ncomparison between these results and their counterparts in\nFigure 2 and 5, respectively, shows a significant improvement\nin the detection case for both ranking methods. All\nplots are shifted down by an order of magnitude. For example\n, in the case of blind ranking, the required sampling\nrate to correctly rank the top 5 flows was around 50% for\nthe Abilene trace and 10% for the Jussieu trace. Now, with\nblind detection, it is around 10% and 3%, respectively. Another\nexample is with the protocol-aware method where a\nsampling rate around 10% was required to rank the largest\n10 flows (Figure 5), whereas now, a sampling rate around\n1% is sufficient to only detect them. 
The same gain can be\nobserved if we reconsider the other scenarios in Section 5.1\n(not presented here for lack of space). Also, note how in\nthe detection case the protocol aware method allows a better\naccuracy for high sampling rates when compared to the\nblind method. For low sampling rates (e.g., below 1%), the\naccuracy does not improve.\nEXPERIMENTAL RESULTS\nIn this section we present the results of running random\nsampling experiments directly on the packet traces. We use\nthe traces described in Section 5 and compute the performance\nmetrics defined in Section 4.1.\nIn our traces we consider only TCP packets. Since TCP\nsequence numbers count bytes, we express the flow sizes in\nbytes instead of packets throughout this section.\nOur experiments are meant to address four major issues\nthat arise when we move from the analytical study to a real\nnetwork setting: (i) how to deal with invalid TCP sequence\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nAbilene trace, N=487K, beta=2, blind detection\nt=25\nt=10\nt=5\nt=2\nt=1\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nJussieu trace, N=103K, beta=1.5, blind detection\nt=25\nt=10\nt=5\nt=2\nt=1\nFigure 6: Only detecting the largest flows: Performance\nof blind ranking varying the number t of top\nflows of interest\nnumbers in the packet stream; (ii) the importance of flow\nsize distributions and duration of the measurement interval;\n(iii) the impact of packet loss rates on individual flows\nlost packets trigger retransmissions by the TCP senders; (iv)\nthe variability of the detection/ranking performance across\nmultiple bins and packet sampling patterns.\n6.1\nImplementation of protocol-aware\nranking\nThe protocol-aware method depends on TCP sequence\nnumbers to perform the ranking. For a given flow, it keeps\ntrack of the lowest and highest sequence number observed\n(taking care of packets that wrap around the sequence number\nspace), s\nb\nand s\ne\nrespectively.\nNote that an actual implementation of this method would\njust require two 32 bit fields per flow to store the two sequence\nnumbers.\nAt the end of the measurement period, we compute the\ndifference between the highest and lowest sequence numbers\nfor each sampled flow, and we use the obtained values to\nrank flows. We then compare this ranking with the one\nobtained by counting all the bytes each flow transmits in\nthe original non sampled traffic.\nIn order to discard invalid packets carrying incorrect sequence\nnumbers that would corrupt the ranking, we implement\na simple heuristic to update s\ne\nand s\nb\n. A sampled\npacket with sequence number S > s\ne\ncauses an update\ns\ne\nS if S - s\ne\n< ( MTU)/p. The same rule applies to\nthe updates of s\nb\n. This way we set a limit on the maximum\ndistance in the sequence space between two sampled pack-196\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nAbilene trace, N=487K, beta=2, protocol-aware detection\nt=25\nt=10\nt=5\nt=2\nt=1\n10\n-1\n10\n0\n10\n1\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\nPacket sampling rate (%)\nAverage number of misranked flow pairs\nJussieu trace, N=103K, beta=1.5, protocol-aware detection\nt=25\nt=10\nt=5\nt=2\nt=1\nFigure 7: Only detecting the largest flows: Performance\nof protocol-aware ranking varying the number\nt of top flows of interest\nets. 
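A sketch of the per-flow state and of this update heuristic follows (Python; the permissiveness parameter, written here as alpha, is the one discussed just below, since its symbol is not preserved in our copy, and sequence-number wrap-around handling is omitted for brevity):

```python
class FlowTracker:
    """Per-flow state for protocol-aware ranking: only the lowest and highest
    TCP sequence numbers seen among the sampled packets are kept (two 32-bit
    fields per flow).  `alpha` is the permissiveness parameter of the update
    heuristic (the text reports little sensitivity above 10 and uses 100)."""

    def __init__(self, p, mtu=1500, alpha=100):
        self.threshold = alpha * mtu / p
        self.s_b = None   # lowest sequence number seen so far
        self.s_e = None   # highest sequence number seen so far

    def add_sampled_packet(self, seq):
        if self.s_b is None:
            self.s_b = self.s_e = seq
            return
        # Accept a new extreme only if it is not too far from the current one,
        # so that packets carrying corrupted sequence numbers are discarded.
        if seq > self.s_e and seq - self.s_e < self.threshold:
            self.s_e = seq
        elif seq < self.s_b and self.s_b - seq < self.threshold:
            self.s_b = seq

    def estimated_size(self):
        """Estimated flow size in bytes: distance between the highest and
        lowest sequence numbers observed among the sampled packets."""
        return 0 if self.s_b is None else self.s_e - self.s_b
```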
This distance is inversely proportional to the sampling\nrate and depends on the Maximum Transmission Unit.\nFurthermore, we use the parameter that allows to make\nthis threshold more or less \"permissive\" in order to account\nfor the randomness of the sampling process and for other\ntransport-layer events (e.g., packet retransmissions when the\nTCP window is large). We have run several experiments\nwith different values of and the results have shown little\nsensitivity to values of > 10. All the results in this Section\nare derived with = 100.\n6.2\nFlow size distribution and measurement\ninterval\nAs shown in Figure 1, flow size distributions do not follow\na perfect Pareto. Furthermore, the measurement interval\nitself plays a major role in shaping the distribution: it caps\nthe size of the largest flows, that is not unbounded but now\ndepends on the link speed. Indeed, network operators often\nrun measurements using a \"binning\" method, where packets\nare sampled for a time interval, classified into flows, ranked,\nand then reported. At the end of the interval, the memory is\ncleared and the operation is repeated for the next measurement\ninterval. With this binning method, all flows active at\nthe end of the measurement interval are truncated, so that\nnot all sampled packets of the truncated flow are considered\nat the same time for the ranking. The truncation may,\ntherefore, penalize large flows and alter the tail of the flow\nsize distribution (where flows are of large size and probably\nlast longer than the measurement interval).\nEach experiment consists of the following. We run ran-10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\npacket sampling rate %\naverage number of misranked flow pairs\nJussieu trace, blind ranking\ntop 1\ntop 2\ntop 5\ntop 10\ntop 25\n10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\npacket sampling rate %\naverage number of misranked flow pairs\nJussieu trace, protocol-aware ranking\ntop 1\ntop 2\ntop 5\ntop 10\ntop 25\nFigure 8: Performance of blind and protocol-aware\nranking on Jussieu trace (60s measurement interval\n).\ndom sampling on the packet traces and classify the sampled\npackets into flows. At the end of each measurement interval\n(set to 1 or 5 minutes), we collect the flows and rank them\nby the number of bytes sampled for each flow. We compare\nthe ranking before and after sampling using our performance\nmetric (Section 4.1). For each sampling rate we conduct 15\nruns and we calculate averages.\nThe results of the experiments confirm the numerical results\nof the previous section. In the interest of space, we\nplot the results of two representative experiments on which\nwe make several observations. The difference between numerical\nand experimental results, especially at low sampling\nrates, is caused by the non perfect match of the empirical\nflow size distribution with Pareto (Figure 1).\nFigure 8 shows the performance of ranking flows on the\nJussieu trace when the measurement bin is 60s. We consider\na wide range of sampling rates from 0.1% to 50% and study\nthe performance when ranking the top 1, 2, 5, 10 and 25\nflows in the packet stream. The top graph in Figure 8 is derived\nusing the blind method while the bottom graph shows\nthe performance of the protocol-aware methods. These results\nare very similar to the numerical results. 
For sampling\nrates above 1%, protocol-aware ranking gives approximately\nan order of magnitude gain on the performance when compared\nto blind ranking. When the sampling rate is lower\nthan 1%, however, the performance of the two methods is\nsimilar. Overall, the blind method requires a sampling rate\nof 10% to correctly identify the largest flow in the packet\nstream. The same sampling rate allows to correctly rank\nthe largest 5 flows when using the protocol-aware method.\n197\n6.3\nImpact of loss rate\nIn the analysis of the protocol-aware method in Section 3.2,\nwe made the assumption of negligible number of retransmissions\nfor all the flows in the packet stream.\nA retransmitted packet may cause inconsistency between\nthe blind and protocol-aware method depending on the location\nof the monitoring point. Indeed, the blind method\ncounts the total number of bytes sent by the flow while the\nprotocol-aware method considers only the data sent by the\ntransport layer. Therefore, if the packet is lost before the\nmonitoring point, the blind and protocol-aware method will\nhave a consistent view of the number of bytes sent. Instead,\nif the packet is lost after the monitoring point, the blind\nmethod may count this packet twice.\nThe impact of packet losses on the detection and ranking\nof the largest flows depends on the metric used to estimate\nthe size of the flows. If flow sizes are estimated according to\nthe total number of bytes sent (i.e., the throughput), then\nthe protocol-aware method may incur in an underestimation\nerror that is independent of the sampling rate (it will occur\neven if all packets are sampled!). On the other hand, if the\nflow sizes are estimated according to the transport data sent\n(i.e., the goodput), then the blind method may incur in an\noverestimation error independently of the sampling rate.\nTo illustrate the effect of packet loss rates, we plot in\nFigure 9 the performance of detecting the largest flows in the\nAbilene trace when the measurement bin is 5 minutes and\nthe flow sizes are measured using the total number of bytes\nsent over the link. The top graph shows the performance\nof the blind method, while the bottom graph presents the\nresults for the protocol-aware method.\nWe can make the following observations:\nThe protocol-aware method keeps performing better\nthan the blind method when the sampling rate is above\n1%. At lower sampling rates, the blind method performs\nbetter although it presents very large errors.\nFor sampling rates above 2%, the curve relative to\nthe detection of the top-25 flows in the protocol-aware\nmethod flattens to a value around 70. This is due\nto the presence of a few flows that experience a high\nloss rate when compared to other flows. Increasing\nthe sampling rate does not help the protocol-aware\nmethod in detecting the largest flows when the volume\nof bytes sent is used to define the flow size. However\n, the protocol-aware method can correctly detect\nthe top-25 flows when their size is defined in terms of\ntransport data (see Figure 10).\nIn summary, the network operator has to choose the metric\nof interest that depends on the application. For example\n, for anomaly detection or traffic engineering, a metric\nthat counts the number of bytes sent may be more appropriate\n. Instead, for dimensioning caches and proxies, the\nmetric that considers the size of the objects transferred may\nbe preferred. 
This latter metric suits more the protocol-aware\nmethod.\n6.4\nVariability of the results\nA last important aspect that we need to address is the\nvariability of the results across multiple measurement intervals\nand different realizations of the sampling process.\nIndeed, moving from one measurement interval to another,\n10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\npacket sampling rate %\naverage number of misranked flow pairs\nAbilene trace, blind detection\ntop 1\ntop 2\ntop 5\ntop 10\ntop 25\n10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\npacket sampling rate %\naverage number of misranked flow pairs\nAbilene trace, protocol-aware detection\ntop 1\ntop 2\ntop 5\ntop 10\ntop 25\nFigure 9: Performance of blind (top) and protocol-aware\n(bottom) detection on Abilene trace (300s\nmeasurement interval).\nthe composition of flows varies and with it the flow size distribution\n. Moreover, the sampling process may \"get lucky\"\nin certain cases and provide good results. The opposite is\nalso possible.\nFigure 11 shows the average performance over 15 sampling\nexperiments of the detection of the top-10 flows in\nthe Abilene trace over the 5-minute measurement intervals.\nThe error bars indicate the standard deviation across the\n15 experiments. As usual, the top graph refers to the blind\nmethod, while the bottom graph presents the protocol-aware\nmethod results.\nAs we can see the average performance shows limited\nvariability. A sampling rate of 0.1% gives poor results for\nall bins, while increasing the sampling rates consistently\nhelps. With a sampling rate of 10% the performance metric\n(i.e., average number of misranked flow pairs) for the\nblind method is always below 100 while the protocol-aware\nmethod is always below 1.\nLooking at the standard deviation, we observe large values\nfor the blind method and much smaller values for the\nprotocol-aware method. This indicates that the blind method\nis more sensitive to the sampling process than the protocol-aware\nmethod. The explanation is given in Section 3.3.2\nwhere we showed that that the blind method presents a\nlarger error for large flow sizes (expect when the sampling\nrate is very low).\nCONCLUSIONS\nWe study the problem of detection and ranking the largest\nflows from a traffic sampled at the packet level. The study is\n198\n10\n-1\n10\n0\n10\n1\n10\n-1\n10\n0\n10\n1\n10\n2\n10\n3\n10\n4\n10\n5\n10\n6\npacket sampling rate %\naverage number of misranked flow pairs\nAbilene trace, protocol-aware detect\ntop 1\ntop 2\ntop 5\ntop 10\ntop 25\nFigure 10: Performance of protocol-aware detection\non Abilene trace (300s measurement interval) when\nusing actual amount of data sent by the transport\nlayer application.\ndone with stochastic tools and real packet-level traces. We\nfind that the ranking accuracy is strongly dependent on the\nsampling rate, the flow size distribution, the total number\nof flows and the number of largest flows to be detected and\nranked. By changing all these parameters, we conclude that\nranking the largest flows requires a high sampling rate (10%\nand even more). One can reduce the required sampling rate\nby only detecting the largest flows without considering their\nrelative order.\nWe also introduce a new method for flow ranking that\nexploits the information carried in transport header. 
By\nanalysis and experimentation, we demonstrate that this new\ntechnique allows to reduce the required sampling rate by an\norder of magnitude.\nWe are currently exploring two possible future directions\nfor this work. First, we want to study the accuracy of the\nranking when the sampled traffic is fed into one of the mechanisms\nproposed in [10, 12] for sorting flows with reduced\nmemory requirements. Second, we are exploring the use of\nadaptive schemes that set the sampling rate based on the\ncharacteristics of the observed traffic.\nAcknowledgements\nWe wish to thank NLANR [15], Abilene/Internet2 [1] and\nthe Metropolis project [13] for making available the packet\ntraces used in this work.\n\nREFERENCES\n[1] Abilene: Advanced networking for leading-edge research and\neducation. http://abilene.internet2.edu.\n[2] C. Barakat, P. Thiran, G. Iannaccone, C. Diot, and\nP. Owezarski. Modeling Internet backbone traffic at the flow\nlevel. IEEE Transactions on Signal Processing (Special Issue\non Signal Processing in Networking), 51(8):21112124, Aug.\n2003.\n[3] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent\nitems in data streams. In Proceedings of ICALP, 2002.\n[4] B. Y. Choi, J. Park, and Z. Zhang. Adaptive packet sampling\nfor flow volume measurement. Technical Report TR-02-040,\nUniversity of Minnesota, 2002.\n[5] G. Cormode and S. Muthukrishnan. What's hot and what's\nnot: Tracking most frequent items dynamically. In Proceedings\nof ACM PODS, June 2003.\n[6] M. Crovella and A. Bestravos. Self-similarity in the World\nWide Web traffic: Evidence and possible causes. IEEE/ACM\nTransactions on Networking, 5(6):835846, Dec. 1997.\n0\n200\n400\n600\n800\n1000\n1200\n1400\n1600\n1800\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\n10\n8\ntime (sec)\naverage number of swapped flow pairs\nAbilene trace, detection\nsampling 0.1%\nsampling 1%\nsampling 10%\n0\n200\n400\n600\n800\n1000\n1200\n1400\n1600\n1800\n10\n-2\n10\n0\n10\n2\n10\n4\n10\n6\n10\n8\ntime (sec)\naverage number of swapped flow pairs\nAbilene trace, detection\nsampling 0.1%\nsampling 1%\nsampling 10%\nFigure 11: Performance of blind (top) and protocol-aware\n(bottom) detection over multiple 300s intervals\n(Abilene trace). Vertical bars show the standard\ndeviation over multiple experiments.\n[7] E. Demaine, A. Lopez-Ortiz, and I. Munro. Frequency\nestimation of internet packet streams with limited space. In\nProceedings of 10th Annual European Symposium on\nAlgorithms, 2002.\n[8] N. G. Duffield, C. Lund, and M. Thorup. Properties and\nprediction of flow statistics from sampled packet streams. In\nProceedings of ACM Sigcomm Internet Measurement\nWorkshop, Nov. 2002.\n[9] N. G. Duffield, C. Lund, and M. Thorup. Estimating flow\ndistributions from sampled flow statistics. In Proceedings of\nACM Sigcomm, Aug. 2003.\n[10] C. Estan and G. Varghese. New directions in traffic\nmeasurement and accounting. In Proceedings of ACM\nSigcomm, Aug. 2002.\n[11] N. Hohn and D. Veitch. Inverting sampled traffic. In\nProceedings of ACM Sigcomm Internet Measurement\nConference, Oct. 2003.\n[12] J. Jedwab, P. Phaal, and B. Pinna. Traffic estimation for the\nlargest sources on a network, using packet sampling with\nlimited storage. Technical Report HPL-92-35, HP\nLaboratories, Mar. 1992.\n[13] Metropolis: METROlogie Pour l'Internet et ses services.\nhttp://www.laas.fr/ owe/METROPOLIS/metropolis eng.html.\n[14] T. Mori, M. Uchida, R. Kawahara, J. Pan, and S. Goto.\nIdentifying elephant flows through periodically sampled\npackets. 
In Proceedings of ACM Sigcomm Internet\nMeasurement Conference, Oct. 2004.\n[15] NLANR: National Laboratory for Applied Network Research.\nhttp://www.nlanr.net.\n[16] Packet Sampling Working Group. Internet Engineering Task\nForce. http://www.ietf.org/html.charters/psamp-charter.html.\n[17] K. Papagiannaki, N. Taft, and C. Diot. Impact of flow\ndynamics on traffic engineering design principles. In\nProceedings of IEEE Infocom, Hong Kong, China, Mar. 2004.\n[18] Renater. http://www.renater.fr.\n[19] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson.\nRTP: A transport protocol for real-time applications. RFC\n1889, Jan. 1996.\n[20] A. Shaikh, J. Rexford, and K. G. Shin. Load-sensitive routing\nof long-lived IP flows. In Proceedings of ACM Sigcomm, Sept.\n1999.\n[21] M. Spiegel. Theory and Problems of Probability and\nStatistics. McGraw-Hill, 1992.\n199", "keywords": "largest flow detection and ranking;validation with real traces;Packet sampling;performance evaluation"} {"name": "164", "title": "Ranking Target Objects of Navigational Queries", "abstract": "Web navigation plays an important role in exploring public interconnected data sources such as life science data. A navigational query in the life science graph produces a result graph which is a layered directed acyclic graph (DAG). Traversing the result paths in this graph reaches a target object set (TOS). The challenge for ranking the target objects is to provide recommendations that reflect the relative importance of the retrieved object, as well as its relevance to the specific query posed by the scientist. We present a metric layered graph PageRank (lgPR) to rank target objects based on the link structure of the result graph. LgPR is a modification of PageRank; it avoids random jumps to respect the path structure of the result graph. We also outline a metric layered graph ObjectRank (lgOR) which extends the metric ObjectRank to layered graphs. We then present an initial evaluation of lgPR. We perform experiments on a real-world graph of life sciences objects from NCBI and report on the ranking distribution produced by lgPR. We compare lgPR with PageRank. In order to understand the characteristics of lgPR, an expert compared the Top K target objects (publications in the PubMed source) produced by lgPR and a word-based ranking method that uses text features extracted from an external source (such as Entrez Gene) to rank publications.", "fulltext": "INTRODUCTION\nThe last few years have seen an explosion in the number\nof public Web accessible data sources, Web services and semantic\nWeb applications. While this has occurred in many\ndomains, biologists have taken the lead in making life science\ndata public, and biologists spend a considerable amount of\ntime navigating through the contents of these sources, to\nobtain information that is critical to their research.\nProviding meaningful answers to queries on life science\ndata sources poses some unique challenges. First, information\nabout a scientific entity, e.g., genes, proteins, sequences\nand publications, may be available in a large number of autonomous\nsources and several sources may provide different\ndescriptions of some entity such as a protein. Second,\nthe links between scientific objects (links between data entries\nin the different sources) are important in this domain\nsince they capture significant knowledge about the relationship\nand interactions between these objects. Third, interconnected\ndata entries can be modeled as a large complex\ngraph. 
Queries could be expressed as regular expression navigational\nqueries and can more richly express a user's needs,\ncompared to simpler keyword based queries.\nConsider the following navigational query: Retrieve publications\nrelated to the gene 'tnf ' that are reached by traversing\none intermediate (protein or sequence) entry. This query\nexpresses the scientist's need to expand a search for gene related\npublications beyond those publications whose text directly\naddresses the 'tnf' gene, while still limiting the search\nto publications that are closely linked to gene entries.\nConsider gene sources OMIM Gene and Entrez Gene, protein\nsources NCBI Protein and SwissProt, sequences in NCBI\nNucleotide and biomedical publications in PubMed. Figure\n1 represents the results of evaluating this navigational query\nagainst these sources. The result is a layered DAG; we refer\nto it as a result graph (RG). All paths in this directed result\ngraph (RG) start with data entries in the sources OMIM\nGene or Entrez Gene; this is the first layer. They visit one\nintermediate data entry in sources NCBI Protein, Swiss Prot\nor NCBI Nucleotide (second layer) and they terminate in a\npublication data entry in PubMed (final layer).\nThe query returns all objects in PubMed that are reached\n27\n"tnf"\nkeyword\n"tnf"\nkeyword\nNCBI Nucleotide\nSwiss Prot\nPubMed\nNCBI Protein\nOMIM\nGene\nEntrez\nGene\nFigure 1: An example of a result graph (RG)\nby traversing results paths; these PubMed entries are re-ferred\nto as the target object set (TOS) reached by traversing\nthe result paths of the RG. In contrast, a keyword based\nquery would not have been able to specify the set of target\npublications. Navigational queries, the RG and the target\nobject set (TOS) that answers the query are defined in the\npaper.\nIt is difficult for a user to explore all target objects in a\nreasonable amount of time and it is important to provide\na ranking of the TOS. As is well known, word based ranking\nmethods are very good at identifying the most relevant\nresults, typically using features extracted from the contents\nof the target objects. For example [13] produces a ranking\nof documents in PubMed that are most relevant to a gene.\nIn contrast, PageRank [11] focuses on the importance of the\ntarget object and importance is transferred from other important\nobjects via the link structure. A recent technique\nObjectRank [1] addresses both relevance and importance; it\nexploits schema knowledge to determine the correct authority\ntransfer between important pages. We note that there\nis also research on ranking paths [2]. For term-based query\ndependent ranking, we refer to [3, 12].\nThe focus of this paper is to produce a ranking method\nto select the best target objects in the RG that answer the\nnavigational query. Our ideal ranking must identify target\nobjects that are both relevant and important. The ranking\nmust also be query dependent since we must guarantee that\nthe target objects that are ranked indeed occur in the RG\nand answer the navigational query. Further, both relevance\nand importance must be determined with respect to the objects\nin TOS, rather than with respect to all the data entries\n(as is the case with PageRank).\nWe propose two ranking metrics for the layered graph\nRG; they are layered graph PageRank (lgPR) and layered\ngraph ObjectRank (lgOR). 
lgPR extends PageRank by distinguishing\ndifferent roles (intermediate node, answer node)\nwhich can be played by the same node in the result graph.\nIt does not perform random jumps so as to respect the RG.\nOur second metric lgOR is an extension to ObjectRank; due\nto space limitations we only discuss it briefly.\nWe report on our preliminary evaluation of lgPR on a real\ndataset from NCBI/NIH. For some navigational queries, we\napply lgPR to the corresponding RG and use the ranking\ndistribution for lgPR to illustrate that lgPR indeed discriminates\namong the TOS objects. We also apply the original\nPR metric to the object graph of life science data (against\nwhich we evaluate the query). We compare with applying\nlgPR to the actual RG to illustrate that lgPR and PR produce\ndissimilar rankings.\nFinally, we report on an initial user experiment. We consider\na set of complex queries typical of a scientist searching\nfor gene related PubMed publications, and the Top K\nresults of a word based ranking technique (Iowa) that has\nbeen shown to be accurate in answering gene queries [13].\nWe compare the Iowa Top K publications with the lgPR Top\nK publications, for some sample gene related queries, using\ncriteria that reflect both relevance and importance. We use\nthese criteria to understand the characteristics of lgPR.\nThe paper is organized as follows: Section 2 describes the\ndata model, navigational query language and layered DAG\nresult graph. Section 3 presents PageRank, lgPR, ObjectRank\n, and lgOR. 4 reports on preliminary results of an\nexperimental study with NCBI data and concludes.\nDATA MODEL\nWe briefly describe a data model and navigational query\nlanguage for the life science graph. Details in [6, 9, 14].\n2.1\nData Model for the Life Science Graph\nThe data model comprises three levels: ontology, source\nand data (Figure 2). At the ontology level, a domain ontol-Gene\nMarker\nPublication\nNucleotide\nProtein\nDisease\nOMIM\nGene\n\\NCBI\nGene\nPubMed\nOMIM\nDisease\nSwiss\nProt\nNCBI\nProtein\nNCBI\nNucleotide\nUniSTS\nOntology Level\nSource Level\nMappings\nData Level\nMappings\nFigure 2: A Data Model for the Life Science Graph\nogy describes the universe of discourse, e.g., a gene, a pro-28\ntein, etc., and the relationships among them. An ontology\ngraph OG = (C, L\nC\n) models the domain ontology, where\nnodes in C represent classes, and edges in L\nC\ncorrespond to\nrelationships among classes. For example, genes and publications\nare classes in OG and the association discuss relates\npublications with genes.\nIn this paper, we only consider\none type of link, isRelatedTo, to capture the semantics of a\nrelationship; therefore, we omit all link labels.\nAt the source level, a source graph SG = (S, L\nS\n) describes\ndata sources and links that implement logical classes\n(C) and associations (L\nC\n) in OG, respectively. For example,\nPubMed and Entrez Gene are sources that implement the\nlogical classes publications and genes, respectively. A mapping\ndefines logical classes in C in terms of the sources in S\nthat implement them. A link between sources represents a\nhyper-link, a service or an application that connects these\ntwo sources. At the data level, a Data Graph is a graph\n(D, L\nD\n), where D is a set of data entries and L\nD\nis a set\nof references between entries. 
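To make the three levels concrete, the sketch below (in Python) encodes a tiny ontology graph OG, source graph SG and data graph; all identifiers are placeholders and the dictionary encoding is only illustrative, not part of the paper's formal model. The source tag attached to each data entry plays the role of the mapping m_S described next.

# Illustrative encoding of the three-level model (placeholder identifiers).
# Ontology graph OG = (C, L_C): classes and isRelatedTo relationships.
OG = {
    "classes": {"Gene", "Protein", "Nucleotide", "Publication"},
    "links": {("Gene", "Protein"), ("Gene", "Nucleotide"),
              ("Protein", "Publication"), ("Nucleotide", "Publication")},
}

# Source graph SG = (S, L_S): sources, the class each one implements,
# and links (hyper-links, services or applications) between sources.
SG = {
    "sources": {"Entrez Gene", "OMIM Gene", "NCBI Protein",
                "Swiss Prot", "NCBI Nucleotide", "PubMed"},
    "implements": {"Entrez Gene": "Gene", "OMIM Gene": "Gene",
                   "NCBI Protein": "Protein", "Swiss Prot": "Protein",
                   "NCBI Nucleotide": "Nucleotide", "PubMed": "Publication"},
    "links": {("Entrez Gene", "NCBI Protein"), ("NCBI Protein", "PubMed")},
}

# Data graph (D, L_D): data entries tagged with their publishing source
# (the tag stands in for the mapping m_S) and references between entries.
data_graph = {
    "entries": {"gene:1": "Entrez Gene", "prot:7": "NCBI Protein",
                "pub:42": "PubMed"},
    "links": {("gene:1", "prot:7"), ("prot:7", "pub:42")},
}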
A mapping m\nS\nestablishes\nwhich data entries in D are published by source S.\n2.2\nNavigational Query Language\nWe define a query as a path expression over the alphabet\nC in OG, where each class occurrence can optionally be an-notated\nwith a Boolean expression. The simplest Boolean\nexpression is the comparison of a Field to a particular value.\nIn this paper, a field can be either source or Object content,\nand the relational operators can be \"=\" for source and \"contains\"\nfor Object content. A condition over source and the\nrelational operator \"=\", (source = \"name-of-source\"), restricts\nthe query to some specific sources that implement\nthe class. A condition on Object content and the relational\noperator \"contains\", specifies the set of keywords that must\noccur within objects in the Data Graph. The symbol\nis\na wild-card matching any class and the \".\" represents any\nrelationship.\nThe query: Retrieve publications that are related to the\ngene \"tnf or aliases in human\" in OMIM or Enrtez Gene,\nand are reached by traversing one intermediate resource, is\nexpressed in the navigational query language as follows: Q =\nGene[Object content contains\n{\"tnf\" and aliases in human}\nand source = OMIM or Entrez Gene]\n\nPublication\nThe answer to a query Q is defined at the three levels of\nthe data model. It comprises three sets of paths:\nOG\n(Q),\n\nSG\n(Q) and\nDG\n(Q). The meaning of query Q with respect\nto the ontology graph OG,\nOG\n(Q), is the set of simple paths\nin OG that correspond to words in the language induced by\nthe regular expression Q. The meaning of the query with\nrespect to the source graph SG,\nSG\n(Q), is the set of all\nsimple paths in SG that correspond to mappings of the paths\nin\nOG\n(Q). Finally, the answer for query Q with respect to\nthe data graph DG,\nDG\n(Q), is the set of simple paths in\nDG that are the result of mapping the paths in\nSG\n(Q)\nusing mapping function m\nS\n. A simple path does not repeat\n(revisit) the same class, data source or data entry (in the\nsame path).\nThe queries that are presented in this section are typical\nqueries posed by researchers. At present, there are no\nquery evaluation engines to answer navigational queries and\nresearchers must rely on manual navigation via browsers or\nthey must write scripts; the latter involves labor to keep\nwriting the scripts and the scripts may be inefficient in answering\nthese queries.\n2.3\nResult Graph\nThe union of paths in\nDG\n(Q) is the result graph RG. We\nnote that for our query language, all the paths that satisfy\na query are of the same length, i.e., all the paths in the\nsets\nOG\n(Q),\nSG\n(Q) and\nDG\n(Q) are of the same length.\nWe model a result graph RG\nQ\n= (D\nRG\n, L\nRG\n) for a query\nQ, as a layered directed acyclic graph comprising k layers,\nL\n1\n, ..., L\nk\n, where k is determined by the query. The set of\nnodes D\nRG\ncorresponds to the union of the data entries that\nappear in the paths in\nDG\n(Q). L\nRG\nrepresents the links\namong these data entries. A layer L\ni\nis composed of the\nunion of the data entries in the paths\nDG\n(Q) that appear\nin the i-th position of the paths. The data entries in the\nk-th layer are called the target objects and they form the\ntarget object set (TOS) of the RG.\nNote that since the result graph has multiple paths, and\nsince a source may occur in different layers of these paths,\nthe same data entry may appear multiple times in the different\nlayers, depending on its connectivity to other data\nentries. 
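As a concrete illustration of this definition, the sketch below (Python, placeholder identifiers) builds a three-layer RG_Q from a set of same-length result paths, keeps each occurrence of a data entry separate per layer, and extracts the target object set (TOS).

# Result paths of equal length, as produced for a query with k = 3 layers.
paths = [("gene:1", "prot:7", "pub:42"),
         ("gene:2", "prot:7", "pub:42"),
         ("gene:2", "nucl:3", "pub:99")]
k = len(paths[0])

layers = [set() for _ in range(k)]   # L_1, ..., L_k as sets of occurrences
edges = set()                        # links between consecutive layers
for path in paths:
    for i, entry in enumerate(path):
        layers[i].add((i, entry))    # occurrence keyed by (layer, entry)
        if i > 0:
            edges.add(((i - 1, path[i - 1]), (i, entry)))

TOS = {entry for (_, entry) in layers[k - 1]}
print(TOS)                           # the target object set: pub:42 and pub:99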
In this case, each occurrence of the data entry is\nrepresented independently within each layer/path in which\nit occurs. The result graph framework distinguishes the different\nroles (intermediate node, answer node) which can be\nplayed by the same node in the result graph.\nFigure 1 is a layered RG for the following query: Retrieve\npublications related to the gene \"tnf \" traversing one intermediate\nsource; it has three layers. The first layer corresponds\nto the genes in the sources OMIM Gene and Entrez Gene\nthat are related to the keyword \"tnf\". The second layer are\nthe entries in the sources NCBI Protein, Swiss Prot or NCBI\nNucleotide that are reached by objects in the first layer. Finally\n, the target objects in the third layer (TOS) are the\npublications in PubMed that are linked to the objects in\nthe second layer.\nRANKING METRICS\nWe briefly describe the PageRank metric [11] and then\ndiscuss our metric lgPR for layered DAGs. We briefly discuss\nthe ObjectRank metric [1] and our extension lgOR.\n3.1\nPageRank\nPageRank assumes that links between pages confer authority\n. A link from page i to page j is evidence that i is\nsuggesting that j is important. The importance from page\ni that is contributed to page j is inversely proportional to\nthe outdegree of i. Let N\ni\nbe the outdegree of page i. The\ncorresponding random walk on the directed web graph can\nbe expressed by a transition matrix A as follows:\nA[i, j] =\n1\nN\ni\nif there is an edge from i to j\n0\notherwise\nLet E be an arbitrary vector over the webpages, representing\nthe initial probability of visiting a page. Let d be\nthe probability of following a link from a page and let (1\n-d)\nbe the probability of a random jump to a page. The PageRank\nranking vector R = dA\nR + (1 - d)E. R converges for\nthe web graph with any E, since generally the web graph is\naperiodic and irreducible[5, 10].\nPageRank cannot be directly applied to a layered graph.\nA Markov Chain is irreducible if and only if the graph contains\nonly one strongly connected component. RG is not\n29\noutgoing links with respect to the query.\nThere are several potential ways to extend PageRank for\nRG. First, one can ignore links that point to pages without\noutgoing edges since these pages do not affect the ranking\nof other pages [11]. However we are specifically interested\nin obtaining a ranking for the TOS or the objects in the\nlast layer of the layered result graph RG with no outgoing\nlinks, we cannot ignore these pages.\nAnother possibility\nis modifying the transition matrix probability so that one\ntakes a random jump from a node in the TOS [5]. This\nwill ensure that the graph will be irreducible and aperiodic.\nHowever, this would arbitrarily modify RG whose structure\nis determined by the query; modifying RG will not assure\nthat it answers the query. 
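As a baseline for the layered-graph metric introduced next, the following is a minimal power-iteration sketch of the PageRank recurrence R = dA·R + (1-d)E on a toy three-page graph (Python/NumPy); the graph, d and E are illustrative, and dangling pages are deliberately avoided.

import numpy as np

# Toy directed graph in which every page has at least one out-link.
out_links = {0: [1, 2], 1: [2], 2: [0]}
n = len(out_links)

# A[i, j] = 1 / N_i if there is an edge from page i to page j (Section 3.1).
A = np.zeros((n, n))
for i, targets in out_links.items():
    for j in targets:
        A[i, j] = 1.0 / len(targets)

d = 0.85                      # probability of following a link
E = np.full(n, 1.0 / n)       # initial/random-jump distribution
R = E.copy()                  # ranking vector

for _ in range(100):          # power iteration until (approximate) convergence
    R_next = d * (A.T @ R) + (1 - d) * E
    if np.abs(R_next - R).sum() < 1e-12:
        R = R_next
        break
    R = R_next

print(R)                      # stationary scores; they sum to one here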
To summarize, the extensions to\nPageRank in the literature cannot be applied to the problem\nof ranking the target object set TOS of RG.\n3.2\nLayered Graph PageRank(lgPR)\nWe describe layered graph PageRank to rank the TOS.\n3.2.1\nThe Metric\nTable 1 lists the symbols used to compute lgPR.\nSymbol\nMeaning\nRG(V\nRG\n, E\nRG\n)\nResult Graph, a layered DAG, with\nobjects V\nRG\nand edges E\nRG\ne\nE\nRG\nan edge in E\nRG\nR\nranking vector for objects in RG\nR\nini\ninitial ranking vector\nA\nlg\nthe transition matrix for objects in RG\nk\nthe number of layers in the result graph\nOutDeg\nRG\n(u\np\n)\noutdegree from object u at layer p\n(across multiple link types to objects in\nlayer p + 1\nTable 1: Symbols used by lgPR\nThe layered DAG result graph RG is represented by a\ntransition matrix A\nlg\nto be defined next. Note that an object\nin the object graph may occur in multiple paths of the\nresult graph, in different layers; it will be replicated in the\ntransition matrix for each occurrence. Each object u at layer\np will have an entry in the transition matrix to some object\nv at layer q. We denote the occurrence of them as u\np\nand\nv\nq\nrespectively.\nThe ranking vector R is defined by a transition matrix\nA\nlg\nand initial ranking vector R\nini\n, is as follows:\nR = A\nk\n-1\nlg\nR\nini\n= (\nk\n-1\nl=1\nA\nlg\n) R\nini\nWe pick R\nini\nas follows: the entry for an object in R\nini\nis\n1 if this object is a link in start layer and 0 otherwise. The\ntransition matrix A\nlg\nis computed as follows:\nA\nlg\n[i\np\nu\n, j\nq\nv\n] =\n\n\n\n\n\n\n1\nOutDeg\nRG\n(u\np\n)\nif OutDeg\nRG\n(u\np\n) > 0\nand e(u\np\n, v\nq\n)\nE\nRG\n,\n0\notherwise.\nNote that we define the outdegree of each object in RG\nto only consider those edges that actually occur in RG and\nlink to objects in the next layer. This reflect the probability\nthat a user follows an object path in the RG. In contrast,\nPageRank considers all outgoing edges from a page.\nUnlike PageRank, lgPR differentiates the occurrence of a\ndata entry in different layers, as well as the links to entries\nin subsequent layers; lgPR is thus able to reflect the role of\nobjects and links (from the entire graph of data entries) in\nanswering a navigational query. Suppose an object a occurs\nin an intermediate layer as well as in the TOS of the RG. It\nis possible that a is able to convey authority to other objects\nin the TOS. However, a may not rank very high in the TOS\nfor this query. This characteristic is unique to lgPR. Thus,\nthe score associated with the object is query dependent to\nreflect the role played by the object in the result graph.\n3.2.2\nConvergence Property\nThis transition matrix A\nlg\nis neither irreducible nor aperiodic\nas all rows for target objects contain only 0's. The\nmatrix A is a nilpotent matrix and the number of layers is\nthe index. We provide two defintions (details in [8]).\nDefinition\n3.1. A square matrix A is a nilpotent matrix,\nif there exists some positive integer k such that A\nk\n= 0 but\nA\nk\n-1\n= 0. Integer k is known as the index of A.\nDefinition\n3.2. Let k be the index of A.\n{A\nk\n-1\nx, A\nk\n-2\nx,\n..., Ax, x\n} form a Jordan Chain, where x is any vector such\nthat A\nk\n-1\nx = 0.\nA characteristic of a nilpotent matrix is that its only eigenvalue\nis 0. The consequence is that any vector x is an eigenvector\nof A as long as Ax = 0. 
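Since A_lg is nilpotent with index k, the lgPR vector R = A_lg^{k-1} R_ini can be obtained with exactly k-1 propagation steps over the layers. The sketch below (Python, with a small RG and placeholder identifiers) applies the transition once per layer, dividing each occurrence's score by its outdegree counted within the RG; after k-1 steps only the TOS carries non-zero scores, in line with Lemma 3.4.

from collections import defaultdict

# Edges of a small three-layer RG between occurrences (layer, entry).
edges = [((0, "gene:1"), (1, "prot:7")),
         ((0, "gene:2"), (1, "prot:7")),
         ((0, "gene:2"), (1, "nucl:3")),
         ((1, "prot:7"), (2, "pub:42")),
         ((1, "nucl:3"), (2, "pub:42")),
         ((1, "nucl:3"), (2, "pub:99"))]
k = 3

out = defaultdict(list)                 # OutDeg_RG is taken over these lists
for u, v in edges:
    out[u].append(v)

score = defaultdict(float)              # R_ini: 1 on the start layer, 0 elsewhere
for u in out:
    if u[0] == 0:
        score[u] = 1.0

for p in range(k - 1):                  # exactly k - 1 applications of A_lg
    nxt = defaultdict(float)
    for u, targets in out.items():
        if u[0] == p and score[u] > 0:
            share = score[u] / len(targets)
            for v in targets:
                nxt[v] += share
    score = nxt

print(dict(score))   # {(2, 'pub:42'): 1.75, (2, 'pub:99'): 0.25}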
From the previous definition\n{A\nk\n-1\nlg\nR\nini\n, A\nk\n-2\nlg\nR\nini\n, ..., A\nlg\nR\nini\n, A\nlg\n} forms a Jordan\nChain, since A\nk\n-1\nlg\nR\nini\n= 0.\nWe show following two lemmas without providing proof\nin this paper.\nLemma\n3.3. Jordan chain\n{A\nk\n-1\nlg\nR\nini\n, A\nk\n-2\nlg\nR\nini\n, ...,\nA\nlg\nR\nini\n, A\nlg\n} is a linearly independent set.\nLemma\n3.4.\n{A\nk\n-1\nlg\nR\nini\n, A\nk\n-2\nlg\nR\nini\n, ..., A\nlg\nR\nini\n, A\nlg\n} consists\nof a sequence of ranking vectors. In R\nini\n, only objects\nin layer 0 have non-zero scores; In ranking vector A\nm\nlg\nR\nini\n,\nonly objects in layer m receive non-zero scores.\nThe final ranking vector by lgPR is the first eigenvector\nin the Jordan Chain, given the above initial ranking vector\nR\nini\nand the transition matrix A\nlg\n. While the traditional\nPageRank algorithm converges on a ranking in multiple iterations\n, lgPR can be computed in exactly k\n- 1 iterations.\nNote that because RG is a layered DAG, we can use link\nmatrices, each of which represents links between neighboring\nlayers, instead of the single transition matrix A\nlg\nfor the\nentire graph. We also use keywords to filter query answers\nat each iteration.\n3.3\nLayered Graph ObjectRank(lgOR)\nPR is computed a priori on the complete data graph and is\nindependent of the RG. A recent technique ObjectRank [1]\nextends PageRank to consider relevance of query keywords.\nIt exploits schema knowledge to determine the correct authority\ntransfer in a schema graph.\nIn ObjectRank, the\nauthority flows between objects according to semantic connections\n. It does so by determining an authority weight for\neach edge in their schema graph. The ranking is (keyword)\nquery dependent.\nDue to space limitations, we do not provide the details\nof the ObjectRank metric. Instead, we briefly describe how\nirreducible since the last layer in RG contains nodes with no\n30\nthe transition matrix for lgPR can be extended to consider\nthe authority weights associated with edges that occur in\nRG.\nConsider a metric layered graph ObjectRank(lgOR). The\ndifference from lgPR is the transition matrix A\nOG\n. It is as\nfollows:\nA[i\np\nu\n, j\nq\nv\n] =\n(e\nE\nRG\n)\nif e(u\np\n, v\nq\n)\nE\nRG\n,\n0\notherwise.\n(e\nE\nRG\n) =\n(e\nESG\n)\nOutDeg(u\np\n,e\nESG\n)\nif OutDeg(u\np\n, e\nE\nSG\n) > 0\n0\nif OutDeg(u\np\n, e\nE\nSG\n) = 0\nLet the edge between u\np\nand v\nq\nmap to an edge E\nSG\nin\nthe SG. (E\nSG\n) represents the authority transfer weight\nassociated with E\nSG\n. OutDeg(u\np\n, e\nE\nSG\n) is the outdegree in\nRG of type E\nSG\n.\nAs discussed in [1], the success of ObjectRank depends on\ncorrectly determining the authority weight to be associated\nwith each link. Figure 3 (next section) illustrates the source\ngraph that we use in our evaluation of navigational queries.\nFor lgOR to be successful, an authority weight may have\nto be associated with each link in each result path (type)\nin the RG. Experiments with users to determine the correct\nauthority weights for lgOR is planned for future work.\nCurrently the importance is computed after query evaluation\n. We compute result graph first, then ranking, for\nthe reason that the transition matrix is defined in terms of\noutdegree in the RG. This motivates further research of combination\nof two problems, whose ideal solution is to ranking\nobjects during query evaluation.\nEXPERIMENTS ON LGPR\nWe report on experiments on real world data. 
We show\nthat the lgPR ranking distribution has the ability to differentiate\namong the target objects of the RG and it is different\nfrom PageRank. A user compared the Top K results\nof lgPR and a word based ranking (Iowa) [13], using criteria\nthat reflect both importance and relevance, to determine\ntheir characteristics.\n4.1\nExperiment Setting\nNCBI/NIH is the gatekeeper for biological data produced\nusing federal funds in the US\n1\n. We consider a source graph\nSG of 10 data sources and 46 links. Figure 3 presents the\nsource graph used in this task.We used several hundred keywords\nto sample data from these sources (the EFetch utility)\nand followed links to the other sources (the ELink utility).\nWe created a data graph of approximately 28.5 million objects\nand 19.4 million links. We note that several objects are\nmachine predicted objects so it is not uncommon that they\nhave no links. The object identifiers for the data entries\n(nodes of the data graph) and the pair of object identifiers\n(links) were stored in a DB2 relational database.\nTable 2 identifies the queries and keywords that were used\nin this experiment. The symbols g, p, n, s refer to classes\ngene, publication, nucleotide and SNP, respectively. Note\nthat\nis the wild card and can match all the classes and\nsources (in the source graph).\nFor each navigational query, the source paths that answer\nthe query were determined using an algorithm described in\n1\nwww.ncbi.nlm.nih.gov\nClass\nPubMed\nPmId\nTitle\nClass Author\nName\nClass Journal\nName\nClassYear\nYear\n(1,*)\n(1,1)\n(1,*)\nClass Lash\nTerms\nTermId\nDescrip\n(1,*)\n(0,*)\nClass Entrez\nGene\nEGId\n(1,*) (1,1)\nClass Geo\nGeoId\nClassOMIM\nCddId\nClass UniSTS\nUSId\nClass\nUniGene\nUGId\nClass Entrez\nProtein\nEPId\nClass dbSNP\ndbSID\nClass CDD\nCddId\nClass Entrez\nNucleotide\nNuId\n(1,*)\nFigure 3: Source Graph for User Evaluation\nQueries\ng.n.p, g.s.p, g.n.s.p, g.s.n.p, g.s.g.n.p, g.s.n.g.p,\ng. .p, g. . .p, g. . . .p\n\"parkinson disease\", \"aging\",\"cancer\"\nKeywords\n\"diabetes\", \"flutamide\", \"stress\"\n\"degenerative joint\",\"tnf\",\"insulin\"\n\"fluorouracil\", \"osteoarthritis\",\"sarcoma\"\nTable 2: Experiment setting\n[14]. Evaluating the paths in the data graph for each source\npath was implemented by SQL queries. Since a result graph\nRG could involve multiple source paths whose computation\nmay overlap we applied several multiple query optimization\ntechniques. The SQL queries were executed on DB2 Enterprise\nServer V8.2 installed on a 3.2 GHz Intel Xeon processor\nwith 1GB RAM. The execution time for these queries varied\nconsiderably, depending on the size and shape of RG. If we\nconsider the query g.n.p with keyword \"degenerative joint\"\nused to filter 'g', one source path was ranked in approximately\n1 second. However, the query (g. . . .p) with the\nkeyword aging used to filter 'g' created a very large result\ngraph and the execution time for this was approximately\n2000 seconds. Typically the We note that computing the\nhigh scoring TOS objects of the RG efficiently is a related\nbut distinct optimization problem.\n4.2\nlgPR Distribution\nWe report on the query (g. .p), i.e., paths from genes to\npublications via one intermediate source.\nFigures 4 and 5 report on the distribution of scores produced\nby the lgPR metric for the target objects in TOS\nfor some representative queries. 
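The bucketing behind these histograms is easy to reproduce; the sketch below (Python, with invented lgPR scores) counts TOS objects in ten 0.01-wide buckets over (0.00, 0.10] plus a final (0.10, 1.00] bucket, mirroring the bars of Figures 4 and 5.

# Bin lgPR scores into the buckets used by Figures 4 and 5.
scores = [0.003, 0.007, 0.012, 0.040, 0.090, 0.150, 0.310]   # invented scores

counts = [0] * 11
for s in scores:
    if s > 0.10:
        counts[10] += 1                      # last bucket: (0.10, 1.00]
    else:
        counts[min(int(s / 0.01), 9)] += 1   # buckets of width 0.01

labels = [f".{i:02d}-.{i + 1:02d}" for i in range(10)] + [".10-1.00"]
for label, c in zip(labels, counts):
    print(label, c)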
The first 10 bars represent\nscores in the range (0.00-0.01) to (0.09-0.1) and the last bar\nrepresents the range (0.1-1.0).\nFig 4 shows that a small\nnumber of objects have very high score and the majority\nhave a low score. As expected, many queries and keywords\nproduced distributions that were similar to Figure 4. Most\nof the objects in TOS, in this case approx. 12,000 objects,\nhad a very low score, and less than 200 object had a score\nin the range (0.1-1.0).\nHowever, we made an interesting observation that some\nqueries produced distributions that were similar to Figure\n5. In this case, while many of the results (approx. 120) had\nlow scores in the range (0.00-0.02), 46 objects had scores in\nthe range (0.1-1.0) and 120 objects had scores in between.\n31\n.00-.01\n.01-.02\n.02-.03\n.03-.04\n.04-.05\n.05-.06\n.06-.07\n.07-.08\n.08-.09\n.09-.10\n.10-1.00\n0\n2000\n4000\n6000\n8000\n10000\n12000\n12183\n677\n404\n172\n105\n62\n52\n43\n34\n19\n197\nlgPR score\nNumber of objects\nFigure 4: Histogram for query: g[Object content contains\n\"aging\"]\np\n.00-.01\n.01-.02\n.02-.03\n.03-.04\n.04-.05\n.05-.06\n.06-.07\n.07-.08\n.08-.09\n.09-.10\n.10-1.00\n0\n10\n20\n30\n40\n50\n60\n70\n80\n81\n41\n12\n4\n32\n11\n8\n0\n2\n1\n46\nlgPR score\nNumber of objects\nFigure 5: Histogram for query: g [Object content contains\n\"degenerative joint\"]\np\nFinally, we compared the ranking produced by lgPR and\nPageRank. We apply PageRank to the entire data graph\nof 28.5 million objects and 19.4 million links described in\nsection 4.1. For the three sample queries (described in the\nnext section), there are no PubMed ID's in common to the\nTop 25, 50, 100 for each of the queries, except that the\ntop 50 of query with Lash term \"allele\" have 1 PubMed\npublications in common, and top 100 of same query have 3\nin common. We speculate that the link structure of the RG\nis distinct compared to the link structure of the data graph;\nhence applying lgPR to the RG results in dissimilar ranking\ncompared to a priori applying PageRank to the entire data\ngraph.\nWe summarize that the lgPR score can both identify those\nobjects with a very low ranking that may not be of interest\nto the user. However, it can also be used to discriminate\namongst objects in the TOS whose ranking has a much lower\nvariation of scores. Finally, lgPR ranking is not the same\nas that produced by PageRank applied to the entire data\ngraph.\n4.3\nUser Evaluation\nIn our user evaluation of lgPR, we consider a set of complex\nqueries typical of a scientist searching for gene related\nPubMed publications, and the Top K results of a word based\nranking technique (Iowa) that has been shown to be accurate\nin answering gene queries [13]. We compare the Iowa Top\nK publications with the lgPR Top K publications, for some\nsample gene related queries.\nWe use criteria that reflect\nboth relevance and importance to identify characteristics of\nlgPR.\nResearchers are particularly interested in genetic and phe-notypic\nvariations associated with genes; these phenomena\nare often studied in the context of diseases, in a chromoso-mal\nregion identified by a genomic marker (a unique known\nsequence) associated with the disease. Genetic and pheno-typic\nknowledge are described using terms of the Lash controlled\nvocabulary [7]. We focus on a branch of the Lash\nvocabulary that relates to phenotypes and population genetics\n. Terms of interest include linkage disequilibrium,\nquantitative trait locus and allele. 
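A small illustration of how occurrences of such controlled-vocabulary terms (and their listed synonyms) might be detected in publication text is sketched below in Python; the synonym table is an abbreviated subset of the vocabulary branch shown next, and the abstract is invented.

# Match Lash-vocabulary terms, including listed synonyms, in free text.
lash_terms = {
    "quantitative trait locus": ["quantitative trait locus", "qtl"],
    "allelic association": ["allelic association", "allele"],
    "linkage disequilibrium": ["linkage disequilibrium"],
}

def matched_terms(text):
    text = text.lower()
    return {term for term, variants in lash_terms.items()
            if any(v in text for v in variants)}

abstract = ("We mapped a QTL on chromosome 6 showing strong "
            "linkage disequilibrium with the STS marker.")    # invented text
print(matched_terms(abstract))
# -> quantitative trait locus, linkage disequilibrium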
Figure 6 presents\na portion of the Lash controlled vocabulary (term hierarchy\n). LD is not listed as the synonym to the term linkage\ndisequilibrium, because LD may often refer to another concept\n. In the following experiment, we did not consider the\nplural form of some terms, such as alleles to allele, but this\ncan be extended in the future studies.\n1. EPIGENETIC ALTERATION\n\n2. GENOMIC SEGMENT LOSS\n\n3. GENOMIC SEGMENT GAIN\n\n4. GENOMIC SEQUENCE ALTERATION\n\n5. PHENOTYPIC ASSOCIATION\n(synonym: phenotype, trait)\n(a) locus association (synonym: locus, loci)\ni. linkage\nii. quantitative trait locus (synonym: QTL)\n(b) allelic association (synonym: allele)\ni. linkage disequilibrium\nFigure 6: Branch 5 in Hierarchical controlled vocabulary\nof genetics terms (Lash Controlled Vocabulary\n)\nThe navigational query used in our evaluation experiment\ncan be described in English as follows: \"Return all publications\nin PubMed that are linked to an Entrez Gene entry\nthat is related to the human gene TNF (or its aliases). The\nentry in PubMed must contain an STS marker and a term\nfrom the Lash controlled vocabulary.\"\nWe used the query term \"TNF AND 9606[TAXID]\"\n2\nto\nsample data from Entrez Gene. We then followed 8 paths to\nPubMed. Table 3 reports on the number of entries in Entrez\nGene as well as the cardinality of the TOS for some sample\nqueries\n3\n.\nWe briefly describe the word-based ranking method (Iowa)\nthat focuses on ranking documents retrieved by PubMed\n2\nNote the Taxonomy ID for human is 9606 [4], and term\n9606[TAXID] was used to select human genes.\n3\nWe\nuse\ng[\"tnf\"\nand\naliases\nin\nhuman]\nto\ndenote\ng[Object content contains\n{\"tnf\" and aliases in human}];\nthe entries in the first column of Table 3 are similar.\n32\nQuery\nCardinality of TOS\ng[\"tnf\" and aliases in human]\n649\ng[\"tnf\" and aliases in human]\np[STS\n2777\nmarker and \"allele\"]\ng[\"tnf\" and aliases in human]\np[STS\n257\nmarker and \"linkage disequilibrium\"]\ng[\"tnf\" and aliases in human]\np[STS\n22\nmarker and \"quantitative trait locus\"]\nTable 3: Cardinality of TOS\nfor human gene queries [13], so that relevant documents are\nranked higher than non-relevant documents. This method\nrelies on using post-retrieval queries (ranking queries), au-tomatically\ngenerated from an external source, viz., Entrez\nGene (Locus Link), to rank retrieved documents. The research\nshows that ranking queries generated from a combination\nof the Official Gene Symbol, Official Gene Name,\nAlias Symbols, Entrez Summary, and Protein Products (optional\n) were very effective in ranking relevant documents\nhigher in the retrieved list. Documents and ranking queries\nare represented using the traditional vector-space representation\n, commonly used in information retrieval.\nGiven a\ngene, the cosine similarity score between the ranking query\nvector for the gene and each document vector is computed.\nCosine scores are in the [0, 1] range and documents assigned\na higher score are ranked higher than documents with a\nlower score. In the absence of summary and protein product\ninformation, ranking queries generated from the gene\nsymbol, name and aliases are used to rank retrieved documents\n. In this experiment study we are working on the Bio\nWeb documents alone.\nWe use the following criteria to compare the Top K results\nfrom Iowa and lgPR, to understand basic characteristics of\nthe two methods. 
Criteria labeled R appear to judge the\nrelevance of the paper and those labeled I appear to judge\nimportance. Some criteria appear to judge both and are\nlabeled R,I.\n1. R: Does the title or abstract of the article contain the\nterm TNF or its aliases in human? Does the article\ndiscuss immune response?\n2. R,I: Does the article contain any disease related terms?\nDoes the article contain any genomic components (genes,\nmarkers, snps, sequences, etc.)?\n3. R,I: Does the article discuss biological processes related\nto the Lash terms?\n4. R,I: What is the connectivity of the article to gene\nentries in Entrez Gene that are related to TNF? Note\nthat as shown in Table 3, there are 649 Entrez Gene\nentries that are related to human gene TNF. Each\nPubMed publication was reached by following a result\npath through the result graph RG that started\nwith one of these Entrez Gene entries. However, some\nPubMed publications may have been reached along\nmultiple paths in the RG reflecting much greater connectivity\n.\n5. I: What is the category of the article (review, survey,\netc.). Does the article address some specific topics or\nis it a broad brush article?\n6. I: Where did the article appear? What is the journal\nimpact factor? Has the article been highly cited?\nTop 10\nRel.\nImp.\nCriteria\nPMID\n(0-5)\n(0-5)\n1.\n2.\n3.\n4.\n5.\n6.\n16271851\n4\n2\nH\nM\nH\nL\nL\nL\n1946393\n4\n4\nH\nL\nH\nM\nH\nH\n12217957\n4\n4\nH\nH\nH\nL\nH\nH\n12545017\n4\n4\nH\nM\nH\nL\nH\nH\n9757913\n3\n3\nH\nL\nH\nL\nH\nH\n8882412\n4\n4\nH\nM\nH\nL\nH\nH\n2674559\n4\n3\nH\nM\nH\nL\nH\nL\n7495783\n4\n3\nH\nH\nH\nL\nH\nL\n15976383\n5\n4\nH\nH\nH\nH\nH\nL\n10698305\n3\n3\nH\nL\nH\nL\nH\nH\nTable 4: Relevance and Importance of Top 10 Puli-cations\nReported by Iowa Ranking Method\nTop 10\nRel.\nImp.\nCriteria\nPMID\n(0-5)\n(0-5)\n1.\n2.\n3.\n4.\n5.\n6.\n7560085\n5\n5\nH\nH\nH\nH\nH\nH\n12938093\n5\n5\nH\nH\nH\nH\nH\nH\n10998471\n3\n3\nM\nH\nH\nL\nH\nL\n11290834\n5\n4\nH\nH\nH\nH\nH\nL\n11501950\n4\n3\nH\nH\nH\nL\nH\nL\n11587067\n5\n4\nH\nH\nH\nH\nH\nL\n11845411\n2\n4\nL\nH\nH\nL\nH\nH\n12133494\n5\n4\nH\nH\nH\nH\nH\nL\n12594308\n4\n4\nH\nH\nH\nL\nH\nH\n12619925\n5\n5\nH\nH\nH\nH\nH\nH\nTable 5: Relevance and Importance of Top 10 Puli-cations\nReported by lgPR Ranking Method\nTables 4 and 5 report Top 10 publications in PubMed that\nare linked to an Entrez Gene entry that is related to human\ngene TNF and contain the term linkage disequilibrium.\nThe first column reports the PubMed identifiers (PMIDs)\nof the Top 10 publications returned by the Iowa and the\nlgPR ranking methods. The human evaluation results are\nreported in the fourth to the ninth columns using the the\nsix criteria listed above. An H represents the publication\nis highly matched to the correspoinding criteria (M and L\nrepresents medium and low respectively). An H indicates:\n1. The PubMed entry is linked to the human gene TNF\nwith Entrez Gene identifier GeneID:7124.\n2. The publication contains both diseases related terms\nand genomic components.\n3. The publication contains multiple Lash terms.\n4. The connectivity is high, if there are more than five\nrelated gene entries linked to the publication.\n5. A research article considered more important than a\nreview or a survey, and a more specific topic is better.\n6. The article is published in a journal with the impact\nfactor higher than 10.0, or the article is cited by ten\nor more publications.\nWe then score the relevance (rel.) and the importance\n(imp.) 
in the second and the third columns by combining\n33\nthe number of H and M reported in the six criteria. Criteria\n1 weighs twice compared to the other five criteria. We\nuse a number between 0 and 5, in which 5 indicates the\ncorresponding PubMed entry is highly relevant or highly\nimportant to the given query. While both rankings appear\nto identify \"good\" documents, Iowa appears to favor relevant\ndocuments based on their word content. lgPR appears\nto exploit the link structure of the RG, and have higher in-terconnectivity\nto TNF related entries in Entrez gene. The\npublications retrieved by lgPR are more likely to contain\ndiseases related terms or genomic components. The Iowa\nranking has a primary focus on the relevance of documents\n(based on document contents; it is not able to differentiate\nthe importance of these relevant documents. In contrast,\nlgPR has a primary focus on importance (based on the link\nstructure of the result graph); it is not able to differentiate\nthe relevance of important documents. We conclude that\nfurther study is needed to determine how we can exploit the\ncharacteristics of both methods.\nThere is no intersection between two sets of Top 10 publications\nreturned by these two ranking methods. The first\ncommon PMID is 7935762, which is ranked 24 in the Iowa\nmethod and 21 by the lgPR method.\nCONCLUSIONS\nWe have defined a model for life science sources. The answer\nto a navigational query are the target objects (TOS) of\na layered graph Result Graph (RG). We define two ranking\nmetrics layered graph PageRank (lgPR) and layered graph\nObjectRank (lgOR). We also report on the results of experiments\non real world data from NCBI/NIH. We show that\nthe ranking distribution of lgPR indeed discriminates among\nthe TOS objects of the RG. The lgPR distribution is not the\nsame as applying PageRank a priori to the data graph. We\nperform a user experiment on complex queries typical of\na scientist searching for gene related PubMed publications,\nand the Top K results of a word based ranking technique\n(Iowa) that has been shown to be accurate in answering gene\nqueries the query. Using criteria that judge both relevance\nand importance, we explore the characteristics of these two\nrankings. Our preliminary evaluation indicates there may\nbe a benefit or a meta-ranking.\nWe briefly presented layered graph ObjectRank (lgOR)\nwhich is an extension to ObjectRank.\nThe challenge of\nObjectRank is determining the correct authority weight for\neach edge. For lgOR, we need to find the weight for the\nedges that occur in RG. Experiments with users to determine\nthe correct authority weights for lgOR is planned for\nfuture work. We expect that IR techniques can be used to\ndetermine authority weights.\nREFERENCES\n[1] Andrey Balmin, Vagelis Hristidis, and Yannis\nPapakonstantinou. Objectrank: Authority-based\nkeyword search in databases. In VLDB, pages\n564575, 2004.\n[2] Magdalini Eirinaki, Michalis Vazirgiannis, and\nDimitris Kapogiannis. Web path recommendations\nbased on page ranking and markov models. In WIDM\n'05: Proceedings of the 7th annual ACM international\nworkshop on Web information and data management,\npages 29, New York, NY, USA, 2005. ACM Press.\n[3] Taher H. Haveliwala. Topic-sensitive pagerank. In\nWWW '02: Proceedings of the 11th international\nconference on World Wide Web, pages 517526, New\nYork, NY, USA, 2002. 
ACM Press.\n[4] Homo sapiens in NCBI Taxonomy Browser.\nwww.ncbi.nih.gov/Taxonomy/Browser/wwwtax.cgi?\nmode=Info&id=9606.\n[5] Sepandar D. Kamvar, Taher H. Haveliwala,\nChristopher D. Manning, and Gene H. Golub.\nExtrapolation methods for accelerating pagerank\ncomputations. In WWW, pages 261270, 2003.\n[6] Z. Lacroix, L. Raschid, and M.-E. Vidal. Semantic\nmodel ot integrate biological resources. In\nInternational Workshop on Semantic Web and\nDatabases (SWDB 2006), Atlanta, Georgia, USA, 3-7\nApril 2006.\n[7] Alex Lash, Woei-Jyh Lee, and Louiqa Raschid. A\nmethodology to enhance the semantics of links\nbetween PubMed publications and markers in the\nhuman genome. In Fifth IEEE Symposium on\nBioinformatics and Bioengineering (BIBE 2005),\npages 185192, Minneapolis, Minnesota, USA, 19-21\nOctober 2005.\n[8] Carl D. Meyer. Matrix Analysis and Applied Linear\nAlgebra. Society for Industrial and Applied\nMathmatics, 2000.\n[9] G. Mihaila, F. Naumann, L. Raschid, and M. Vidal. A\ndata model and query language to explore enhanced\nlinks and paths in life sciences data sources.\nProceedings of the Workshop on Web and Databases,\nWebDB, Maryland, USA, 2005.\n[10] Rajeev Motwani and Prabhakar Raghavan.\nRandomized algorithms. Cambridge University Press,\nNew York, NY, USA, 1995.\n[11] Lawrence Page, Sergey Brin, Rajeev Motwani, and\nTerry Winograd. The pagerank citation ranking:\nBringing order to the web. Technical report, Stanford\nDigital Library Technologies Project, 1998.\n[12] Matthew Richardson and Pedro Domingos. Combining\nlink and content information in web search. In Web\nDynamics '04: Web Dynamics - Adapting to Change\nin Content, Size, Topology and Use, pages 179194.\nSpringer, 2004.\n[13] Aditya Kumar Sehgal and Padmini Srinivasan.\nRetrieval with gene queries. BMC Bioinformatics,\n7:220, 2006.\n[14] Maria-Esther Vidal, Louiqa Raschid, Natalia\nM\narquez, Marelis C\nardenas, and Yao Wu. Query\nrewriting in the semantic web. In InterDB, 2006.\n34\n", "keywords": "Navigational Query;Link Analysis;PageRank;Ranking"} {"name": "165", "title": "Ranking Web Objects from Multiple Communities", "abstract": "Vertical search is a promising direction as it leverages domain-specific knowledge and can provide more precise information for users. In this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities , and take high-quality photo search as the test bed for this investigation. We proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums . Both intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described. Though the experiments were conducted on high-quality photo ranking , the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking", "fulltext": "INTRODUCTION\nDespite numerous refinements and optimizations, general\npurpose search engines still fail to find relevant results for\nmany queries. 
As a new trend, vertical search has shown\npromise because it can leverage domain-specific knowledge\nand is more effective in connecting users with the information\nthey want.\nThere are many vertical search engines,\nincluding some for paper search (e.g. Libra [21], Citeseer\n[7] and Google Scholar [4]), product search (e.g. Froogle\n[5]), movie search [6], image search [1, 8], video search [6],\nlocal search [2], as well as news search [3]. We believe the\nvertical search engine trend will continue to grow.\nEssentially, building vertical search engines includes data\ncrawling, information extraction, object identification and\nintegration, and object-level Web information retrieval (or\nWeb object ranking) [20], among which ranking is one of the\nmost important factors. This is because it deals with the\ncore problem of how to combine and rank objects coming\nfrom multiple communities.\nAlthough object-level ranking has been well studied in\nbuilding vertical search engines, there are still some kinds\nof vertical domains in which objects cannot be effectively\nranked. For example, algorithms that evolved from PageRank\n[22], PopRank [21] and LinkFusion [27] were proposed\nto rank objects coming from multiple communities, but can\nonly work on well-defined graphs of heterogeneous data.\n\"Well-defined\" means that like objects (e.g. authors in paper\nsearch) can be identified in multiple communities (e.g.\nconferences). This allows heterogeneous objects to be well\nlinked to form a graph through leveraging all the relationships\n(e.g. cited-by, authored-by and published-by) among\nthe multiple communities.\nHowever, this assumption does not always stand for some\ndomains. High-quality photo search, movie search and news\nsearch are exceptions. For example, a photograph forum\nwebsite usually includes three kinds of objects: photos, authors\nand reviewers.\nYet different photo forums seem to\nlack any relationships, as there are no cited-by relationships.\nThis makes it difficult to judge whether two authors cited\nare the same author, or two photos are indeed identical photos\n. Consequently, although each photo has a rating score\nin a forum, it is non-trivial to rank photos coming from different\nphoto forums. Similar problems also exist in movie\nsearch and news search. Although two movie titles can be\nidentified as the same one by title and director in different\nmovie discussion groups, it is non-trivial to combine rating\nscores from different discussion groups and rank movies\neffectively. We call such non-trivial object relationship in\nwhich identification is difficult, incomplete relationships.\nOther related work includes rank aggregation for the Web\n[13, 14], and learning algorithm for rank, such as RankBoost\n[15], RankSVM [17, 19], and RankNet [12]. We will contrast\ndifferences of these methods with the proposed methods after\nwe have described the problem and our methods.\nWe will specifically focus on Web object-ranking problem\nin cases that lack object relationships or have with incomplete\nobject relationships, and take high-quality photo\nsearch as the test bed for this investigation. In the following,\nwe will introduce rationale for building high-quality photo\nsearch.\n1.1\nHigh-Quality Photo Search\nIn the past ten years, the Internet has grown to become\nan incredible resource, allowing users to easily access a huge\nnumber of images. 
However, compared to the more than 1\nbillion images indexed by commercial search engines, actual\nqueries submitted to image search engines are relatively minor\n, and occupy only 8-10 percent of total image and text\nqueries submitted to commercial search engines [24]. This\nis partially because user requirements for image search are\nfar less than those for general text search. On the other\nhand, current commercial search engines still cannot well\nmeet various user requirements, because there is no effective\nand practical solution to understand image content.\nTo better understand user needs in image search, we conducted\na query log analysis based on a commercial search\nengine.\nThe result shows that more than 20% of image\nsearch queries are related to nature and places and daily\nlife categories. Users apparently are interested in enjoying\nhigh-quality photos or searching for beautiful images of locations\nor other kinds. However, such user needs are not\nwell supported by current image search engines because of\nthe difficulty of the quality assessment problem.\nIdeally, the most critical part of a search engine the\nranking function can be simplified as consisting of two\nkey factors: relevance and quality. For the relevance factor\n, search in current commercial image search engines provide\nmost returned images that are quite relevant to queries,\nexcept for some ambiguity. However, as to quality factor,\nthere is still no way to give an optimal rank to an image.\nThough content-based image quality assessment has been\ninvestigated over many years [23, 25, 26], it is still far from\nready to provide a realistic quality measure in the immediate\nfuture.\nSeemingly, it really looks pessimistic to build an image\nsearch engine that can fulfill the potentially large requirement\nof enjoying high-quality photos. Various proliferating\nWeb communities, however, notices us that people today\nhave created and shared a lot of high-quality photos on the\nWeb on virtually any topics, which provide a rich source for\nbuilding a better image search engine.\nIn general, photos from various photo forums are of higher\nquality than personal photos, and are also much more appealing\nto public users than personal photos. In addition,\nphotos uploaded to photo forums generally require rich metadata\nabout title, camera setting, category and description to\nbe provide by photographers. These metadata are actually\nthe most precise descriptions for photos and undoubtedly\ncan be indexed to help search engines find relevant results.\nMore important, there are volunteer users in Web communities\nactively providing valuable ratings for these photos.\nThe rating information is generally of great value in solving\nthe photo quality ranking problem.\nMotivated by such observations, we have been attempting\nto build a vertical photo search engine by extracting rich\nmetadata and integrating information form various photo\nWeb forums. In this paper, we specifically focus on how to\nrank photos from multiple Web forums.\nIntuitively, the rating scores from different photo forums\ncan be empirically normalized based on the number of photos\nand the number of users in each forum. However, such\na straightforward approach usually requires large manual\neffort in both tedious parameter tuning and subjective results\nevaluation, which makes it impractical when there are\ntens or hundreds of photo forums to combine. To address\nthis problem, we seek to build relationships/links between\ndifferent photo forums. 
That is, we first adopt an efficient\nalgorithm to find duplicate photos which can be considered\nas hidden links connecting multiple forums. We then formulate\nthe ranking challenge as an optimization problem,\nwhich eventually results in an optimal ranking function.\n1.2\nMain Contributions and Organization.\nThe main contributions of this paper are:\n1. We have proposed and built a vertical image search engine\nby leveraging rich metadata from various photo\nforum Web sites to meet user requirements of searching\nfor and enjoying high-quality photos, which is impossible\nin traditional image search engines.\n2. We have proposed two kinds of Web object-ranking\nalgorithms for photos with incomplete relationships,\nwhich can automatically and efficiently integrate as\nmany as possible Web communities with rating information\nand achieves an equal qualitative result compared\nwith the manually tuned fusion scheme.\nThe rest of this paper is organized as follows. In Section\n2, we present in detail the proposed solutions to the ranking\nproblem, including how to find hidden links between\ndifferent forums, normalize rating scores, obtain the optimal\nranking function, and contrast our methods with some\nother related research. In Section 3, we describe the experimental\nsetting and experiments and user studies conducted\nto evaluate our algorithm. Our conclusion and a discussion\nof future work is in Section 4.\nIt is worth noting that although we treat vertical photo\nsearch as the test bed in this paper, the proposed ranking\nalgorithm can also be applied to rank other content that\nincludes video clips, poems, short stories, drawings, sculptures\n, music, and so on.\n378\nALGORITHM\nThe difficulty of integrating multiple Web forums is in\ntheir different rating systems, where there are generally two\nkinds of freedom. The first kind of freedom is the rating\ninterval or rating scale including the minimal and maximal\nratings for each Web object. For example, some forums use\na 5-point rating scale whereas other forums use 3-point or\n10-point rating scales. It seems easy to fix this freedom, but\ndetailed analysis of the data and experiments show that it\nis a non-trivial problem.\nThe second kind of freedom is the varying rating criteria\nfound in different Web forums. That is, the same score does\nnot mean the same quality in different forums. Intuitively, if\nwe can detect same photographers or same photographs, we\ncan build relationships between any two photo forums and\ntherefore can standardize the rating criterion by score normalization\nand transformation. Fortunately, we find that\nquite a number of duplicate photographs exist in various\nWeb photo forums. This fact is reasonable when considering\nthat photographers sometimes submit a photo to more\nthan one forum to obtain critiques or in hopes of widespread\npublicity. In this work, we adopt an efficient duplicate photo\ndetection algorithm [10] to find these photos.\nThe proposed methods below are based on the following\nconsiderations. Faced with the need to overcome a ranking\nproblem, a standardized rating criterion rather than a reasonable\nrating criterion is needed. Therefore, we can take\na large scale forum as the reference forum, and align other\nforums by taking into account duplicate Web objects (duplicate\nphotos in this work). Ideally, the scores of duplicate\nphotos should be equal even though they are in different\nforums. 
Yet we can deem that scores in different forums\nexcept for the reference forum can vary in a parametric\nspace. This can be determined by minimizing the objective\nfunction defined by the sum of squares of the score differences\n. By formulating the ranking problem as an optimization\nproblem that attempts to make the scores of duplicate\nphotos in non-reference forums as close as possible to those\nin the reference forum, we can effectively solve the ranking\nproblem.\nFor convenience, the following notations are employed.\nS\nki\nand\nS\nki\ndenote the total score and mean score of\nith Web\nobject (photo) in the\nkth Web site, respectively. The total\nscore refers to the sum of the various rating scores (e.g., novelty\nrating and aesthetic rating), and the mean score refers\nto the mean of the various rating scores. Suppose there are\na total of\nK Web sites. We further use\n{S\nkl\ni\n|i = 1, ..., I\nkl\n;\nk, l = 1, ..., K; k = l}\nto denote the set of scores for Web objects (photos) in\nkth\nWeb forums that are duplicate with the\nlth Web forums,\nwhere\nI\nkl\nis the total number of duplicate Web objects between\nthese two Web sites. In general, score fusion can be\nseen as the procedure of finding\nK transforms\n\nk\n(\nS\nki\n)\n=\ne\nS\nki\n, k = 1, ..., K\nsuch that e\nS\nki\ncan be used to rank Web objects from different\nWeb sites. The objective function described in the above\nFigure 1: Web community integration. Each Web\ncommunity forms a subgraph, and all communities\nare linked together by some hidden links (dashed\nlines).\nparagraph can then be formulated as\nmin\n{\nk\n|k=2,...,K}\nK\nX\nk=2\nI\nk1\nX\ni=1\n\nw\nk\ni\n\"S\n1k\ni\nk\n(\nS\nk1\ni\n)\n\"\n2\n(1)\nwhere we use\nk = 1 as the reference forum and thus\n1\n(\nS\n1i\n) =\nS\n1i\n.\nw\nk\ni\n(\n0) is the weight coefficient that can be set heuris-tically\naccording to the numbers of voters (reviewers or com-menters\n) in both the reference forum and the non-reference\nforum. The more reviewers, the more popular the photo is\nand the larger the corresponding weight\nw\nk\ni\nshould be. In\nthis work, we do not inspect the problem of how to choose\nw\nk\ni\nand simply set them to one. But we believe the proper use\nof\nw\nk\ni\n, which leverages more information, can significantly\nimprove the results.\nFigure 1 illustrates the aforementioned idea. The Web\nCommunity 1 is the reference community. The dashed lines\nare links indicating that the two linked Web objects are actually\nthe same. The proposed algorithm will try to find the\nbest\n\nk\n(\nk = 2, ..., K), which has certain parametric forms\naccording to certain models.\nSo as to minimize the cost\nfunction defined in Eq. 1, the summation is taken on all the\nred dashed lines.\nWe will first discuss the score normalization methods in\nSection 2.2, which serves as the basis for the following work.\nBefore we describe the proposed ranking algorithms, we first\nintroduce a manually tuned method in Section 2.3, which is\nlaborious and even impractical when the number of communities\nbecome large. In Section 2.4, we will briefly explain\nhow to precisely find duplicate photos between Web forums.\nThen we will describe the two proposed methods: Linear fusion\nand Non-linear fusion, and a performance measure for\nresult evaluation in Section 2.5. 
Finally, in Section 2.6 we\nwill discuss the relationship of the proposed methods with\nsome other related work.\n2.2\nScore Normalization\nSince different Web (photo) forums on the Web usually\nhave different rating criteria, it is necessary to normalize\nthem before applying different kinds of fusion methods. In\naddition, as there are many kinds of ratings, such as ratings\nfor novelty, ratings for aesthetics etc, it is reasonable\nto choose a common one -- total score or average score -that\ncan always be extracted in any Web forum or calculated\nby corresponding ratings. This allows the normaliza-379\ntion method on the total score or average score to be viewed\nas an impartial rating method between different Web forums\n.\nIt is straightforward to normalize average scores by lin-early\ntransforming them to a fixed interval. We call this\nkind of score as Scaled Mean Score. The difficulty, however,\nof using this normalization method is that, if there are only\na few users rating an object, say a photo in a photo forum,\nthe average score for the object is likely to be spammed or\nskewed.\nTotal score can avoid such drawbacks that contain more\ninformation such as a Web object's quality and popularity.\nThe problem is thus how to normalize total scores in different\nWeb forums. The simplest way may be normalization\nby the maximal and minimal scores. The drawback of this\nnormalization method is it is non robust, or in other words,\nit is sensitive to outliers.\nTo make the normalization insensitive to unusual data,\nwe propose the Mode-90% Percentile normalization method.\nHere, the mode score represents the total score that has been\nassigned to more photos than any other total score. And The\nhigh percentile score (e.g.,90%) represents the total score for\nwhich the high percentile of images have a lower total score.\nThis normalization method utilizes the mode and 90% percentile\nas two reference points to align two rating systems,\nwhich makes the distributions of total scores in different forums\nmore consistent. The underlying assumption, for example\nin different photo forums, is that even the qualities of\ntop photos in different forums may vary greatly and be less\ndependent on the forum quality, the distribution of photos\nof middle-level quality (from mode to 90% percentile) should\nbe almost of the same quality up to the freedom which reflects\nthe rating criterion (strictness) of Web forums. Photos\nof this middle-level in a Web forum usually occupy more\nthan 70 % of total photos in that forum.\nWe will give more detailed analysis of the scores in Section\n3.2.\n2.3\nManual Fusion\nThe Web movie forum, IMDB [16], proposed to use a\nBayesian-ranking function to normalize rating scores within\none community. Motivated by this ranking function, we propose\nthis manual fusion method: For the\nkth Web site, we\nuse the following formula\ne\nS\nki\n=\n\nk\n\n,, n\nk\n\nS\nki\nn\nk\n+\nn\nk\n+ n\nk\nS\n\nk\nn\nk\n+\nn\nk\n\n(2)\nto rank photos, where\nn\nk\nis the number of votes and\nn\nk\n,\nS\n\nk\nand\n\nk\nare three parameters. This ranking function first\ntakes a balance between the original mean score\nS\nki\nand a\nreference score\nS\n\nk\nto get a weighted mean score which may\nbe more reliable than\nS\nki\n. Then the weighted mean score is\nscaled by\n\nk\nto get the final score f\nS\nki\n.\nFor\nn Web communities, there are then about 3n parameters\nin\n{(\nk\n, n\nk\n, S\n\nk\n)\n|k = 1, ..., n} to tune. 
Though this\nmethod can achieves pretty good results after careful and\nthorough manual tuning on these parameters, when\nn becomes\nincreasingly large, say there are tens or hundreds of\nWeb communities crawled and indexed, this method will become\nmore and more laborious and will eventually become\nimpractical. It is therefore desirable to find an effective fusion\nmethod whose parameters can be automatically determined\n.\n2.4\nDuplicate Photo Detection\nWe use Dedup [10], an efficient and effective duplicate image\ndetection algorithm, to find duplicate photos between\nany two photo forums. This algorithm uses hash function\nto map a high dimensional feature to a 32 bits hash code\n(see below for how to construct the hash code). Its computational\ncomplexity to find all the duplicate images among\nn images is about O(n log n). The low-level visual feature\nfor each photo is extracted on\nk k regular grids. Based\non all features extracted from the image database, a PCA\nmodel is built. The visual features are then transformed to\na relatively low-dimensional and zero mean PCA space, or\n29 dimensions in our system. Then the hash code for each\nphoto is built as follows: each dimension is transformed to\none, if the value in this dimension is greater than 0, and 0\notherwise. Photos in the same bucket are deemed potential\nduplicates and are further filtered by a threshold in terms\nof Euclidean similarity in the visual feature space.\nFigure 2 illustrates the hashing procedure, where visual\nfeatures -- mean gray values -- are extracted on both 6\n6\nand 7\n7 grids. The 85-dimensional features are transformed\nto a 32-dimensional vector, and the hash code is generated\naccording to the signs.\nFigure 2:\nHashing procedure for duplicate photo\ndectection\n2.5\nScore Fusion\nIn this section, we will present two solutions on score fusion\nbased on different parametric form assumptions of\n\nk\nin Eq. 1.\n2.5.1\nLinear Fusion by Duplicate Photos\nIntuitively, the most straightforward way to factor out the\nuncertainties caused by the different criterion is to scale, rel-380\native to a given center, the total scores of each unreferenced\nWeb photo forum with respect to the reference forum. More\nstrictly, we assume\n\nk\nhas the following form\n\nk\n(\nS\nki\n)\n=\n\nk\nS\nki\n+\nt\nk\n, k = 2, ..., K\n(3)\n\n1\n(\nS\n1i\n)\n=\nS\n1i\n(4)\nwhich means that the scores of\nk(= 1)th forum should be\nscaled by\n\nk\nrelative to the center\nt\nk\n1k\nas shown in Figure\n3.\nThen, if we substitute above\n\nk\nto Eq. 1, we get the\nfollowing objective function,\nmin\n{\nk\n,t\nk\n|k=2,...,K}\nK\nX\nk=2\nI\nk1\nX\ni=1\n\nw\nk\ni\nhS\n1k\ni\nk\nS\nk1\ni\n- t\nk\ni\n2\n. (5)\nBy solving the following set of functions,\n(\nf\n\nk\n=\n=\n0\nf\nt\nk\n=\n0 ,\nk = 1, ..., K\nwhere\nf is the objective function defined in Eq. 5, we get\nthe closed form solution as:\n,,\nk\nt\nk\n\n=\nA\n-1\nk\nL\nk\n(6)\nwhere\nA\nk\n=\n,, P\ni\n\nw\ni\n(\nS\nk1\ni\n)\n2\nP\ni\n\nw\ni\nS\nk1\ni\nP\ni\n\nw\ni\nS\nk1\ni\nP\ni\n\nw\ni\n\n(7)\nL\nk\n=\n,, P\ni\n\nw\ni\nS\n1k\ni\nS\nk1\ni\nP\ni\n\nw\ni\nS\n1k\ni\n\n(8)\nand\nk = 2, ..., K.\nThis is a linear fusion method. It enjoys simplicity and\nexcellent performance in the following experiments.\nFigure 3: Linear Fusion method\n2.5.2\nNonlinear Fusion by Duplicate Photos\nSometimes we want a method which can adjust scores on\nintervals with two endpoints unchanged. 
2.5.2 Nonlinear Fusion by Duplicate Photos
Sometimes we want a method that can adjust scores on intervals while keeping the two endpoints unchanged. As illustrated in Figure 4, such a method can tune scores between [C_0, C_1] while leaving the scores C_0 and C_1 unchanged. This kind of fusion method is much finer-grained than the linear one and contains many more parameters to tune, which is expected to further improve the results.
Here, we propose a nonlinear fusion solution to satisfy such constraints. First, we introduce a transform:
\phi_{c_0, c_1, \alpha}(x) = \left( \frac{x - c_0}{c_1 - c_0} \right)^{\alpha} (c_1 - c_0) + c_0  if x \in (c_0, c_1],  and  \phi_{c_0, c_1, \alpha}(x) = x  otherwise,
where \alpha > 0. This transform satisfies that for x \in [c_0, c_1], \phi_{c_0, c_1, \alpha}(x) \in [c_0, c_1], with \phi_{c_0, c_1, \alpha}(c_0) = c_0 and \phi_{c_0, c_1, \alpha}(c_1) = c_1. We can then utilize this nonlinear transform to adjust the scores in a certain interval, say (M, T]:
f_k(S_{ki}) = \phi_{M, T, \alpha}(S_{ki}).    (9)
Figure 4: Nonlinear Fusion method. We intend to finely adjust the shape of the curves in each segment.
Even though there is no closed-form solution for the optimization problem
\min_{\{\alpha_k | k = 2, ..., K\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} w^k_i \left[ S^{1k}_i - \phi_{M, T, \alpha_k}(S^{k1}_i) \right]^2,
it is not hard to obtain a numerical one. Under the same assumptions made in Section 2.2, we can use this method to adjust scores of the middle level (from the mode point to the 90% percentile).
This more complicated nonlinear fusion method is expected to achieve better results than the linear one. However, difficulties in evaluating the ranking results prevented us from tuning these parameters extensively. The current experiments in Section 3.5 do not reveal any advantages over the simple linear model.
2.5.3 Performance Measure of the Fusion Results
Since our objective function is to make the scores of the same Web objects (e.g., duplicate photos) in a non-reference forum and the reference forum as close as possible, it is natural to investigate how close they become to each other, and how the scores of the same Web objects change between two non-reference forums, before and after score fusion.
Taking Figure 1 as an example, the proposed algorithms minimize the score differences of the same Web objects in two Web forums: the reference forum (Web Community 1) and a non-reference forum, which corresponds to minimizing the objective function on the red dashed (hidden) links. After the optimization, we must ask what happens to the score differences of the same Web objects in two non-reference forums. In other words, do the scores of two objects linked by the green dashed (hidden) links become more consistent?
We therefore define the following performance measure -- the δ measure -- to quantify the changes of the scores of the same Web objects in different Web forums:
\delta_{kl} = \mathrm{Sim}(\tilde{S}^{lk}, \tilde{S}^{kl}) - \mathrm{Sim}(S^{lk}, S^{kl})    (10)
where S^{kl} = (S^{kl}_1, ..., S^{kl}_{I_{kl}})^T, \tilde{S}^{kl} = (\tilde{S}^{kl}_1, ..., \tilde{S}^{kl}_{I_{kl}})^T, and \mathrm{Sim}(a, b) = \frac{a \cdot b}{\|a\| \, \|b\|}.
δ_kl > 0 means that after score fusion, the scores of the same Web objects in the kth and lth Web forums become more consistent, which is what we expect.
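For concreteness, the following minimal Python sketch implements the transform and the δ measure as reconstructed above; the function names and any toy inputs are illustrative assumptions, and a positive return value again indicates increased consistency after fusion.

import numpy as np

def nonlinear_transform(x, c0, c1, alpha):
    """phi_{c0,c1,alpha}: adjust scores inside (c0, c1] while keeping the
    endpoints c0 and c1 fixed; scores outside the interval are unchanged."""
    x = np.asarray(x, dtype=float)
    inside = (x > c0) & (x <= c1)
    out = x.copy()
    out[inside] = ((x[inside] - c0) / (c1 - c0)) ** alpha * (c1 - c0) + c0
    return out

def cosine_sim(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def delta_measure(s_lk, s_kl, s_lk_fused, s_kl_fused):
    """delta_kl of Eq. 10: change in cosine similarity of the scores of the
    duplicate photos shared by forums k and l, after versus before fusion."""
    return cosine_sim(s_lk_fused, s_kl_fused) - cosine_sim(s_lk, s_kl)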
On the contrary, if\n\nkl\n< 0,\nthose scores become more inconsistent.\nAlthough we cannot rely on this measure to evaluate our\nfinal fusion results as ranking photos by their popularity and\nqualities is such a subjective process that every person can\nhave its own results, it can help us understand the intermediate\nranking results and provide insights into the final\nperformances of different ranking methods.\n2.6\nContrasts with Other Related Work\nWe have already mentioned the differences of the proposed\nmethods with the traditional methods, such as PageRank\n[22], PopRank [21], and LinkFusion [27] algorithms in Section\n1. Here, we discuss some other related works.\nThe current problem can also be viewed as a rank aggregation\none [13, 14] as we deal with the problem of how to\ncombine several rank lists. However, there are fundamental\ndifferences between them. First of all, unlike the Web\npages, which can be easily and accurately detected as the\nsame pages, detecting the same photos in different Web forums\nis a non-trivial work, and can only be implemented by\nsome delicate algorithms while with certain precision and\nrecall. Second, the numbers of the duplicate photos from\ndifferent Web forums are small relative to the whole photo\nsets (see Table 1). In another words, the top\nK rank lists\nof different Web forums are almost disjointed for a given\nquery. Under this condition, both the algorithms proposed\nin [13] and their measurements -- Kendall tau distance or\nSpearman footrule distance -- will degenerate to some trivial\ncases.\nAnother category of rank fusion (aggregation) methods is\nbased on machine learning algorithms, such as RankSVM\n[17, 19], RankBoost [15], and RankNet [12]. All of these\nmethods entail some labelled datasets to train a model. In\ncurrent settings, it is difficult or even impossible to get these\ndatasets labelled as to their level of professionalism or popularity\n, since the photos are too vague and subjective to rank.\nInstead, the problem here is how to combine several ordered\nsub lists to form a total order list.\nEXPERIMENTS\nIn this section, we carry out our research on high-quality\nphoto search. We first briefly introduce the newly proposed\nvertical image search engine -- EnjoyPhoto in section 3.1.\nThen we focus on how to rank photos from different Web\nforums.\nIn order to do so, we first normalize the scores\n(ratings) for photos from different multiple Web forums in\nsection 3.2. Then we try to find duplicate photos in section\n3.3. Some intermediate results are discussed using\nmeasure\nin section 3.4. Finally a set of user studies is carried out\ncarefully to justify our proposed method in section 3.5.\n3.1\nEnjoyPhoto: high-quality Photo Search\nEngine\nIn order to meet user requirement of enjoying high-quality\nphotos, we propose and build a high-quality photo search engine\n-- EnjoyPhoto, which accounts for the following three\nkey issues: 1. how to crawl and index photos, 2. how to\ndetermine the qualities of each photo and 3. how to display\nthe search results in order to make the search process\nenjoyable. For a given text based query, this system ranks\nthe photos based on certain combination of relevance of the\nphoto to this query (Issue 1) and the quality of the photo\n(Issue 2), and finally displays them in an enjoyable manner\n(Issue 3).\nAs for Issue 3, we devise the interface of the system de-liberately\nin order to smooth the users' process of enjoying\nhigh-quality photos. 
Techniques such as Fisheye and slide shows are utilized in the current system. Figure 5 shows the interface. We will not discuss this issue further, as it is not an emphasis of this paper.
Figure 5: EnjoyPhoto: an enjoyable high-quality photo search engine, where 26,477 records are returned for the query "fall" in about 0.421 seconds
As for Issue 1, we extracted from a commercial search engine a subset of photos coming from various photo forums all over the world, and explicitly parsed the Web pages containing these photos. The number of photos in the data collection is about 2.5 million. After the parsing, each photo was associated with its title, category, description, camera setting, EXIF data^1 (when available for digital images), location (when available in some photo forums), and many kinds of ratings. All these metadata are generally precise descriptions or annotations of the image content, which are then indexed by general text-based search technologies [9, 18, 11]. In the current system, the ranking function was specifically tuned to emphasize title, categorization, and rating information.
Issue 2 is essentially dealt with in the following sections, which derive the quality of photos by analyzing ratings provided by various Web photo forums. Here we chose six photo forums to study the ranking problem and denote them as Web-A, Web-B, Web-C, Web-D, Web-E and Web-F.
^1 Digital cameras save JPEG (.jpg) files with EXIF (Exchangeable Image File) data. Camera settings and scene information are recorded by the camera into the image file. www.digicamhelp.com/what-is-exif/
3.2 Photo Score Normalization
Different score normalization methods are analyzed in detail in this section. In this analysis, the zero scores that usually account for about 30% of the total number of photos in some Web forums are not taken into account. How to utilize these photos is left for future exploration.
In Figure 6, we list the distributions of the mean score, which is transformed to a fixed interval [0, 10].
Figure 6: Distributions of mean scores normalized to [0, 10] for Web-A through Web-F (each panel plots the total number of photos against the normalized score)
The distributions of the average scores of these Web forums look quite different. The distributions in Figures 6(a), 6(b), and 6(e) look like Gaussian distributions, while those in Figures 6(d) and 6(f) are dominated by the top score. The reason for these skewed distributions for Web-D and Web-F lies in their coarse rating systems. In fact, Web-D and Web-F use 2- or 3-point rating scales, whereas the other Web forums use 7- or 14-point rating scales. Therefore, it would be problematic to use these averaged scores directly. Furthermore, the average score is very likely to be spammed if there are only a few users rating a photo.
Figure 7 shows the total score normalization by maximal and minimal scores, which is one of our baseline systems. All the total scores of a given Web forum are normalized to [0, 100] according to the maximal and minimal scores of the corresponding Web forum. We notice that the total score distribution of Web-A in Figure 7(a) has two larger tails than all the others. To show the shape of the distributions more clearly, we only show the distributions on [0, 25] in Figures 7(b), 7(c), 7(d), 7(e), and 7(f).
Figure 7: Maxmin Normalization (normalized total-score distributions for Web-A through Web-F)
Figure 8 shows the Mode-90% Percentile normalization method, where the modes of the six distributions are normalized to 5 and the 90% percentiles to 8. We can see that this normalization method makes the distributions of total scores in different forums more consistent. The two proposed algorithms are both based on these normalization methods.
Figure 8: Mode-90% Percentile Normalization (normalized total-score distributions for Web-A through Web-F)
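A minimal sketch of the Mode-90% Percentile normalization as applied here (mode mapped to 5 and 90% percentile mapped to 8), assuming a linear alignment through the two reference points; the helper and variable names are our own, and the exact alignment used in the system may differ.

import numpy as np
from collections import Counter

def mode_percentile_normalize(total_scores, mode_target=5.0,
                              pct_target=8.0, pct=90):
    """Align a forum's total scores so that the mode maps to mode_target
    and the 90th percentile maps to pct_target (the two reference points)."""
    scores = np.asarray(total_scores, dtype=float)
    mode_score = Counter(scores.tolist()).most_common(1)[0][0]
    pct_score = np.percentile(scores, pct)
    # Linear map fixed by the two reference points (mode, 90th percentile).
    slope = (pct_target - mode_target) / (pct_score - mode_score)
    return mode_target + slope * (scores - mode_score)

# Example: two forums with very different score ranges end up on a common
# scale once their modes and 90th percentiles are aligned.
forum_a = [3, 3, 3, 4, 5, 6, 9, 12]
forum_b = [10, 20, 20, 20, 35, 50, 80, 90]
print(mode_percentile_normalize(forum_a))
print(mode_percentile_normalize(forum_b))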
3.3 Duplicate Photo Detection
Targeting computational efficiency, the Dedup algorithm may lose some recall but achieves a high precision rate. We also focus on finding precise hidden links rather than all hidden links. Figure 9 shows some duplicate detection examples. The results are shown in Table 1 and verify that large numbers of duplicate photos exist between any two Web forums, even with the strict condition for Dedup in which we chose the first 29 bits as the hash code. Since there are only a few parameters to estimate in the proposed fusion methods, the numbers of duplicate photos shown in Table 1 are sufficient to determine these parameters. The last column of the table lists the total number of photos in the corresponding Web forum.
3.4 δ Measure
The parameters of the proposed linear and nonlinear algorithms are calculated using the duplicate data shown in Table 1, where Web-C is chosen as the reference Web forum since it shares the most duplicate photos with the other forums.
Tables 2 and 3 show the δ measure for the linear model and the nonlinear model. As δ_kl is symmetric and δ_kk = 0, we only show the upper triangular part.
The NaN values in both tables arise because no duplicate photos were detected by the Dedup algorithm for those pairs, as reported in Table 1.
Table 1: Number of duplicate photos between each pair of Web forums
        A       B       C       D       E       F       Scale
A       0       316     1,386   178     302     0       130k
B       316     0       14,708  909     8,023   348     675k
C       1,386   14,708  0       1,508   19,271  1,083   1,003k
D       178     909     1,508   0       1,084   21      155k
E       302     8,023   19,271  1,084   0       98      448k
F       0       348     1,083   21      98      0       122k
Figure 9: Some results of duplicate photo detection
Table 2: δ measure on the linear model.
        Web-B   Web-C   Web-D   Web-E   Web-F
Web-A   0.0659  0.0911  0.0956  0.0928  NaN
Web-B   -       0.0672  0.0578  0.0791  0.4618
Web-C   -       -       0.0105  0.0070  0.2220
Web-D   -       -       -       0.0566  0.0232
Web-E   -       -       -       -       0.6525
The linear model guarantees that the δ measures related to the reference community should theoretically be no less than 0. This is indeed the case (see the underlined numbers in Table 2). But this model cannot guarantee that the δ measures between the non-reference communities are also no less than 0, as the normalization steps are based only on duplicate photos between the reference community and a non-reference community. The results show that all the numbers in the δ measure are greater than 0 (see all the non-underlined numbers in Table 2), which indicates that it is probable that this model will give optimal results.
On the contrary, the nonlinear model does not guarantee that the δ measures related to the reference community are no less than 0, as not all duplicate photos between the two Web forums can be used when optimizing this model. In fact, the duplicate photos that lie in different intervals are not used in this model. It is these specific duplicate photos that make the δ measure negative. As a result, there are both negative and positive items in Table 3, but overall the number of positive entries is greater than the number of negative ones (9:5), which indicates that this model may be better than the "normalization only" method (see the next subsection), which has an all-zero δ measure, and worse than the linear model.
Table 3: δ measure on the nonlinear model.
        Web-B   Web-C   Web-D   Web-E   Web-F
Web-A   0.0559  0.0054  -0.0185 -0.0054 NaN
Web-B   -       -0.0162 -0.0345 -0.0301 0.0466
Web-C   -       -       0.0136  0.0071  0.1264
Web-D   -       -       -       0.0032  0.0143
Web-E   -       -       -       -       0.214
3.5 User Study
Because it is hard to find an objective criterion to evaluate which ranking function is better, we chose to employ user studies for subjective evaluations. Ten subjects were invited to participate in the user study. They were recruited from nearby universities. As both text search engines and image search engines are familiar to university students, there was no prerequisite criterion for choosing the students.
We conducted the user studies using Internet Explorer 6.0 on Windows XP with 17-inch LCD monitors set at 1,280 by 1,024 pixels in 32-bit color. Data was recorded with server logs and paper-based surveys after each task.
Figure 10: User study interface
We specifically devised an interface for the user study, as shown in Figure 10. For each pair of fusion methods, participants were encouraged to try any query they wished. For those without specific ideas, two combo boxes (category list and query list) were listed on the bottom panel, where the top 1,000 image search queries from a commercial search engine were provided.
After a participant submitted a query, the\nsystem randomly selected the left or right frame to display\neach of the two ranking results. The participant were then\nrequired to judge which ranking result was better of the two\nranking results, or whether the two ranking results were of\nequal quality, and submit the judgment by choosing the corresponding\nradio button and clicking the \"Submit\" button.\nFor example, in Figure 10, query \"sunset\" is submitted to\nthe system. Then, 79,092 photos were returned and ranked\nby the Minmax fusion method in the left frame and linear\nfusion method in the right frame. A participant then compares\nthe two ranking results (without knowing the ranking\nmethods) and submits his/her feedback by choosing answers\nin the \"Your option.\"\nTable 4: Results of user study\nNorm.Only\nManually\nLinear\nLinear\n29:13:10\n14:22:15\n-Nonlinear\n29:15:9\n12:27:12\n6:4:45\nTable 4 shows the experimental results, where \"Linear\"\ndenotes the linear fusion method, \"Nonlinear\" denotes the\nnon linear fusion method, \"Norm. Only\" means Maxmin\nnormalization method, \"Manually\" means the manually tuned\nmethod.\nThe three numbers in each item, say 29:13:10,\nmean that 29 judgments prefer the linear fusion results, 10\n384\njudgments prefer the normalization only method, and 13\njudgments consider these two methods as equivalent.\nWe conduct the ANOVA analysis, and obtain the following\nconclusions:\n1. Both the linear and nonlinear methods are significantly\nbetter than the \"Norm. Only\" method with respective\nP-values 0\n.00165(< 0.05) and 0.00073(<< 0.05). This\nresult is consistent with the\n-measure evaluation result\n. The \"Norm. Only\" method assumes that the top\n10% photos in different forums are of the same quality\n. However, this assumption does not stand in general\n. For example, a top 10% photo in a top tier photo\nforum is generally of higher quality than a top 10%\nphoto in a second-tier photo forum. This is similar\nto that, those top 10% students in a top-tier university\nand those in a second-tier university are generally\nof different quality. Both linear and nonlinear fusion\nmethods acknowledge the existence of such differences\nand aim at quantizing the differences. Therefore, they\nperform better than the \"Norm. Only\" method.\n2. The linear fusion method is significantly better than\nthe nonlinear one with P-value 1\n.195 10\n-10\n. This\nresult is rather surprising as this more complicated\nranking method is expected to tune the ranking more\nfinely than the linear one. The main reason for this\nresult may be that it is difficult to find the best intervals\nwhere the nonlinear tuning should be carried out\nand yet simply the middle part of the Mode-90% Percentile\nNormalization method was chosen. The time-consuming\nand subjective evaluation methods -- user\nstudies -- blocked us extensively tuning these parameters\n.\n3. The proposed linear and nonlinear methods perform\nalmost the same with or slightly better than the manually\ntuned method. Given that the linear/nonlinear\nfusion methods are fully automatic approaches, they\nare considered practical and efficient solutions when\nmore communities (e.g. dozens of communities) need\nto be integrated.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we studied the Web object-ranking problem\nin the cases of lacking object relationships where traditional\nranking algorithms are no longer valid, and took\nhigh-quality photo search as the test bed for this investigation\n. 
We have built a vertical high-quality photo search\nengine, and proposed score fusion methods which can automatically\nintegrate as many data sources (Web forums) as\npossible. The proposed fusion methods leverage the hidden\nlinks discovered by duplicate photo detection algorithm, and\nminimize score differences of duplicate photos in different\nforums. Both the intermediate results and the user studies\nshow that the proposed fusion methods are a practical\nand efficient solution to Web object ranking in the aforesaid\nrelationships. Though the experiments were conducted\non high-quality photo ranking, the proposed algorithms are\nalso applicable to other kinds of Web objects including video\nclips, poems, short stories, music, drawings, sculptures, and\nso on.\nCurrent system is far from being perfect. In order to make\nthis system more effective, more delicate analysis for the\nvertical domain (e.g., Web photo forums) are needed. The\nfollowing points, for example, may improve the searching\nresults and will be our future work: 1. more subtle analysis\nand then utilization of different kinds of ratings (e.g.,\nnovelty ratings, aesthetic ratings); 2. differentiating various\ncommunities who may have different interests and preferences\nor even distinct culture understandings; 3. incorporating\nmore useful information, including photographers' and\nreviewers' information, to model the photos in a heterogeneous\ndata space instead of the current homogeneous one.\nWe will further utilize collaborative filtering to recommend\nrelevant high-quality photos to browsers.\nOne open problem is whether we can find an objective and\nefficient criterion for evaluating the ranking results, instead\nof employing subjective and inefficient user studies, which\nblocked us from trying more ranking algorithms and tuning\nparameters in one algorithm.\nACKNOWLEDGMENTS\nWe thank Bin Wang and Zhi Wei Li for providing Dedup\ncodes to detect duplicate photos; Zhen Li for helping us\ndesign the interface of EnjoyPhoto; Ming Jing Li, Longbin\nChen, Changhu Wang, Yuanhao Chen, and Li Zhuang etc.\nfor useful discussions. Special thanks go to Dwight Daniels\nfor helping us revise the language of this paper.\n\nREFERENCES\n[1] Google image search. http://images.google.com.\n[2] Google local search. http://local.google.com/.\n[3] Google news search. http://news.google.com.\n[4] Google paper search. http://Scholar.google.com.\n[5] Google product search. http://froogle.google.com.\n[6] Google video search. http://video.google.com.\n[7] Scientific literature digital library.\nhttp://citeseer.ist.psu.edu.\n[8] Yahoo image search. http://images.yahoo.com.\n[9] R. Baeza-Yates and B. Ribeiro-Neto. Modern\nInformation Retrieval. New York: ACM Press;\nHarlow, England: Addison-Wesley, 1999.\n[10] W. Bin, L. Zhiwei, L. Ming Jing, and M. Wei-Ying.\nLarge-scale duplicate detection for web image search.\nIn Proceedings of the International Conference on\nMultimedia and Expo, page 353, 2006.\n[11] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual web search engine. In Computer\nNetworks, volume 30, pages 107117, 1998.\n[12] C. Burges, T. Shaked, E. Renshaw, A. Lazier,\nM. Deeds, N. Hamilton, and G. Hullender. Learning\nto rank using gradient descent. In Proceedings of the\n22nd international conference on Machine learning,\npages 89 96, 2005.\n[13] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar.\nRank aggregation methods for the web. 
In Proceedings\n10th International Conference on World Wide Web,\npages 613 622, Hong-Kong, 2001.\n[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing\ntop k lists. SIAM Journal on Discrete Mathematics,\n17(1):134 160, 2003.\n[15] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An\nefficient boosting algorithm for combining preferences.\n385\nJournal of Machine Learning Research,\n4(1):933969(37), 2004.\n[16] IMDB. Formula for calculating the top rated 250 titles\nin imdb. http://www.imdb.com/chart/top.\n[17] T. Joachims. Optimizing search engines using\nclickthrough data. In Proceedings of the eighth ACM\nSIGKDD international conference on Knowledge\ndiscovery and data mining, pages 133 142, 2002.\n[18] J. M. Kleinberg. Authoritative sources in a\nhyperlinked environment. Journal of the ACM,\n46(5):604632, 1999.\n[19] R. Nallapati. Discriminative models for information\nretrieval. In Proceedings of the 25th annual\ninternational ACM SIGIR conference on Research and\ndevelopment in information retrieval, pages 64 71,\n2004.\n[20] Z. Nie, Y. Ma, J.-R. Wen, and W.-Y. Ma. Object-level\nweb information retrieval. In Technical Report of\nMicrosoft Research, volume MSR-TR-2005-11, 2005.\n[21] Z. Nie, Y. Zhang, J.-R. Wen, and W.-Y. Ma.\nObject-level ranking: Bringing order to web objects.\nIn Proceedings of the 14th international conference on\nWorld Wide Web, pages 567 574, Chiba, Japan,\n2005.\n[22] L. Page, S. Brin, R. Motwani, and T. Winograd. The\npagerank citation ranking: Bringing order to the web.\nIn Technical report, Stanford Digital Libraries, 1998.\n[23] A. Savakis, S. Etz, and A. Loui. Evaluation of image\nappeal in consumer photography. In SPIE Human\nVision and Electronic Imaging, pages 111120, 2000.\n[24] D. Sullivan. Hitwise search engine ratings. Search\nEngine Watch Articles, http://searchenginewatch.\ncom/reports/article.php/3099931, August 23, 2005.\n[25] S. Susstrunk and S. Winkler. Color image quality on\nthe internet. In IS&T/SPIE Electronic Imaging 2004:\nInternet Imaging V, volume 5304, pages 118131,\n2004.\n[26] H. Tong, M. Li, Z. H.J., J. He, and Z. C.S.\nClassification of digital photos taken by photographers\nor home users. In Pacific-Rim Conference on\nMultimedia (PCM), pages 198205, 2004.\n[27] W. Xi, B. Zhang, Z. Chen, Y. Lu, S. Yan, W.-Y. Ma,\nand E. A. Fox. Link fusion: a unified link analysis\nframework for multi-type interrelated data objects. In\nProceedings of the 13th international conference on\nWorld Wide Web, pages 319 327, 2004.\n386\n", "keywords": "image search;ranking;Web objects"} {"name": "166", "title": "Real-world Oriented Information Sharing Using Social Networks", "abstract": "While users disseminate various information in the open and widely distributed environment of the Semantic Web, determination of who shares access to particular information is at the center of looming privacy concerns. We propose a real-world -oriented information sharing system that uses social networks. The system automatically obtains users' social relationships by mining various external sources. It also enables users to analyze their social networks to provide awareness of the information dissemination process. Users can determine who has access to particular information based on the social relationships and network analysis.", "fulltext": "INTRODUCTION\nWith the current development of tools and sites that enable\nusers to create Web content, users have become able to\neasily disseminate various information. 
For example, users\ncreate Weblogs, which are diary-like sites that include various\npublic and private information. Furthermore, the past\nyear has witnessed the emergence of social networking sites\nthat allow users to maintain an online network of friends\nor associates for social or business purposes. Therein, data\nrelated to millions of people and their relationships are publicly\navailable on the Web.\nAlthough these tools and sites enable users to easily disseminate\ninformation on the Web, users sometimes have difficulty\nin sharing information with the right people and frequently\nhave privacy concerns because it is difficult to determine\nwho has access to particular information on such\napplications. Some tools and applications provide control\nover information access. For example, Friendster, a huge\nsocial networking site, offers several levels of control from\n\"public information\" to \"only for friends\". However, it provides\nonly limited support for access control.\nAn appropriate information sharing system that enables\nall users to control the dissemination of their information\nis needed to use tools and sites such as Weblog, Wiki, and\nsocial networking services fully as an infrastructure of disseminating\nand sharing information. In the absence of such\na system, a user would feel unsafe and would therefore be\ndiscouraged from disseminating information.\nHow can we realize such an information sharing system\non the Web? One clue exists in the information sharing\nprocesses of the real world. Information availability is often\nclosely guarded and shared only with the people of one's\nsocial relationships. Confidential project documents which\nhave limited distribution within a division of company, might\nbe made accessible to other colleagues who are concerned\nwith the project. Private family photographs might be shared\nnot only with relatives, but also with close friends. A professor\nmight access a private research report of her student.\nWe find that social relationships play an important role in\nthe process of disseminating and receiving information. This\npaper presents a real-world oriented information sharing system\nusing social networks. It enables users to control the\ninformation dissemination process within social networks.\nThe remainder of this paper is organized as follows: section\n2 describes the proposed information sharing system\nusing social networks. In section 3, we describe the application\nof our system. Finally, we conclude this paper in\nsection 4.\nINFORMATION SHARING USING SOCIAL NETWORKS\nFigure 1 depicts the architecture of the proposed information\nsharing system. The system functions as a \"plug-in\"\nfor applications so that external applications enable users\nto leverage social networks to manage their information dis-81\nSocial network analysis\nApplications for Information Sharing\ne.g. 
Weblog, Wiki, CMS, SNS etc\nContents\nData\nAccess control List Editor\nAccess control\nData (XACML)\nSocial networks extraction\nWeb\nWeb pages,\nSNS, FOAF, etc\nEmail\nSensors\nSocial networks Editor\nSocial networks\nData (FOAF)\nEdit social networks\nEdit access control list\nContents data\nAccess to contents\nAccess permission\nUser data\nFigure 1: Architecture of the proposed information\nsharing system\nBirthplace\n: Kagawa, Japan\nWorkplace\n: AIST\nJob\n: CS researcher\nResearch topics\n: Web\nUniversity\n: Tokyo univ.\nInterest\n: Sumo wrestling\n...\nperson\nProperties\nBirthplace\n: Los Angels, US\nWorkplace\n: Washington Univ.\nJob\n: CS researcher\nResearch topics\n: Web\nUniversity\n: UC California\nInterest\n: Sumo wrestling\n...\nperson\nProperties\nBirthplace\n: Kagawa, Japan\nWorkplace\n: AIST\nJob\n: CS researcher\nResearch topics\n: Web\nUniversity\n: Tokyo univ.\nInterest\n: Sumo wrestling\n...\nperson\nProperties\nBirthplace\n: Los Angels, US\nWorkplace\n: Washington Univ.\nJob\n: CS researcher\nResearch topics\n: Web\nUniversity\n: UC California\nInterest\n: Sumo wrestling\n...\nperson\nProperties\nperson\nperson\nEvent\nParticipate\nCommon\nEvent-participation relationship\nCommon property relationship\nFigure 2: Two kinds of relationships\nsemination. A user can attach an access control list to his\ncontent using his social network when creating content on\nan application. Then, when the application receives a request\nto access the content, it determines whether to grant\nthe request based on the access control list.\nBecause users determine the access control to information\nbased on the social network, the system requires social\nnetwork data. The system obtains users' social networks automatically\nby mining various external sources such as Web,\nemails, and sensor information; subsequently, it maintains a\ndatabase of the social network information. Users can adjust\nthe network if necessary.\nThe system enables users to analyze their social network\nto provide awareness of the information dissemination process\nwithin the social network. Using social relationships\nand the results of social network analyses, users can decide\nwho can access their information.\nCurrently, the proposed system is applied to an academic\nsociety because researchers have various social relationships\n(e.g., from a student to a professor, from a company to a university\n) through their activities such as meetings, projects,\nand conferences. Importantly, they often need to share various\ninformation such as papers, ideas, reports, and schedules\n. Sometimes, such information includes private or confidential\ninformation that ought only to be shared with appropriate\npeople. In addition, researchers have an interest\nin managing the information availability of their social relationships\n. The information of social relationships of an\nacademic society, in particular computer science, is easily\navailable online to a great degree. Such information is important\nto obtain social networks automatically.\nHereafter, we explain in detail how social networks are\nmodeled, extracted and analyzed.\nThen we explain how\nusers can decide to control information access using social\nnetworks.\n2.1\nRepresentation of Social Relationships\nWith the variety of social relationships that exist in the\nreal world, a salient problem has surfaced: integration and\nconsolidation on a semantic basis. 
The representation of\nsocial relationships must be sufficiently fine-grained that we\ncan capture all details from individual sources of information\nin a way that these can be recombined later and taken as\nevidence of a certain relationship.\nSeveral representations of social relationships exist. For\nexample, social network sites often simplify the relationship\nas \"friend\" or \"acquaintance\". In the Friend of a Friend\n(FOAF) [1] vocabulary, which is one of the Semantic Web's\nlargest and most popular ontologies for describing people\nand whom they know, many kinds of relationships between\npeople are deliberately simplified as \"knows\" relations. A\nrich ontological consideration of social relationships is needed\nfor characterization and analysis of individual social networks\n.\nWe define two kinds of social relationship (Fig. 2) [7].\nThe first basic structure of social relationship is a person's\nparticipation in an event. Social relationships come into existence\nthrough events involving two or more individuals.\nSuch events might not require personal contact, but they\nmust involve social interaction. From this event, social relationships\nbegin a lifecycle of their own, during which the\ncharacteristics of the relationship might change through interaction\nor the lack thereof. An event is classified as perdu-rant\nin the DOLCE ontology [6], which is a popular ontology.\nFor example, an event might be a meeting, a conference, a\nbaseball game, a walk, etc. Assume that person\nA and person\nB participate in Event X. In that situation, we note\nthat\nA and B share an event co-participation relationship\nunder event\nX.\nA social relationship might have various social roles asso-ciated\nwith it. For example, a student-professor relationship\nwithin a university setting includes an individual playing the\nrole of a professor; another individual plays the role of a student\n. If\nA and B take the same role to Event X, they are in\na same role relationship under event\nX (e.g., students at a\nclass, colleagues in a workspace). If\nA cannot take over B's\nrole or vice versa,\nA and B are in a role-sharing relationship\n(e.g., a professor and students, a project leader and staff).\nAnother kind of social relationship is called a common\nproperty relationship. Sharing the same property value generates\na common property relationship between people. For\nexample, person\nA and person B have a common working\nplace, common interests, and common experiences. Consequently\n, they are in a common property relationship with\nregard to those common properties.\n2.2\nExtraction of Social Networks\nIf two persons are in either an event co-participation relationship\nor a common property relationship, they often\ncommunicate. The communication media can be diverse:\n82\nFigure 3: Editor for social relationships\nFigure 4: Editor for analyzing social networks and\nassigning an access control list to content\nface-to-face conversation, telephone call, email, chat, online\ncommunication on Weblogs, and so on. If we wish to discover\nthe social relationship by observation, we must estimate\nrelationships from superficial communication. The\nemerging field of social network mining provides methods\nfor discovering social interactions and networks from legacy\nsources such as web pages, databases, mailing lists, and personal\nemails.\nCurrently, we use three kinds of information sources to obtain\nsocial relationships using mining techniques. 
From the\nWeb, we extract social networks using a search engine and\nthe co-occurrence of two persons' names on the Web. Consequently\n, we can determine the following relationships among\nresearchers: Coauthor, Same affiliation, Same project, Same\nevent (participants of the same conference, workshop, etc.)\n[8]. Coauthor and Same event correspond to an event co-participation\nrelationship. Same affiliation and same project\ncorrespond to a common property relationship. We are also\nusing other sources such as email and sensors (we are developing\na device that detects users within social spaces such\nas parties and conferences) to obtain social relationships.\nNecessarily, the quality of information obtained by mining\nis expected to be inferior to that of manually authored profiles\n. We can reuse those data if a user has already declared\nhis relationships in FOAF or profiles of social networking\nservices. Although users might find it difficult and demanding\nto record social relations, it would be beneficial to ask\nusers to provide information to obtain social relationships.\nIn addition to the relationship type, another factor of the\nsocial relationship is tie strength. Tie strength itself is a\ncomplex construct of several characteristics of social relations\n. It is definable as affective, frequency, trust, comple-mentarity\n, etc. No consensus for defining and measuring\nthem exists, which means that people use different elicita-tion\nmethods when it comes to determining tie strength.\nFor example, Orkut, a huge social networking service, allows\ndescription of the strength of friendship relations on a\nfive-point scale from \"haven't met\" to \"best friend\", whereas\nother sites might choose other scales or terms.\nIn our system, we use trust as a parameter of tie strength.\nTrust has several very specific definitions. In [4], Golbeck\ndescribes trust as credibility or reliability in a human sense:\n\"how much credence should I give to what this person speaks\nabout\" and \"based on what my friends say, how much should\nI trust this new person?\" In the context of information sharing\n, trust can be regarded as reliability regarding \"how a\nperson will handle my information\". Users can give trust\ndirectly in a numerical value to a person in his relation.\nAlternatively, trust is obtainable automatically as authori-tativeness\nof each person using the social network [8].\nThe obtained social network data are integrated as extended\nFOAF files and stored in database. Users can adjust\nnetworks if needed (Fig. 3). The social relationship and its\ntie strength become guiding principles when a user determines\nan access control list to information.\n2.3\nSocial Network Analysis for Information\nSharing\nThe system enables users to analyze their social networks\nto provide awareness of the information dissemination process\nwithin the social network.\nSocial network analysis (SNA) is distinguishable from other\nfields of sociology by its focus on relationships between actors\nrather than attributes of actors, a network view, and\na belief that structure affects substantive outcomes.\nBecause\nan actor's position in a network affects information\ndissemination, SNA provides an important implication for\ninformation sharing on the social network. 
For example, occupying\na favored position means that the actor will have\nbetter access to information, resources, and social support.\nThe SNA models are based on graphs, with graph measures\n, such as centrality, that are defined using a sociological\ninterpretation of graph structure. Freeman proposes numerous\nways to measure centrality [2]. Considering a social network\nof actors, the simplest measure is to count the number\nof others with whom an actor maintains relations. The actor\nwith the most connections, the highest degree, is most\ncentral. This measure is called degreeness. Another measure\nis closeness, which calculates the distance from each\nactor in the network to every other actor based on connections\namong all network members. Central actors are closer\nto all others than are other actors. A third measure is betweenness\n, which examines the extent to which an actor is\nsituated among others in the network, the extent to which\n83\nFigure 5: Web site for sharing research information\ninformation must pass through them to get to others, and\nconsequently, the extent to which they are exposed to information\ncirculation within the network. If the betweenness\nof an actor is high, it frequently acts as a local bridge that\nconnects the individual to other actors outside a group. In\nterms of network ties, this kind of bridge is well known as\nGranovetter's \"weak tie\" [5], which contrasts with \"strong\ntie\" within a densely-closed group.\nAs the weak tie becomes a bridge between different groups,\na large community often breaks up to a set of closely knit\ngroup of individuals, woven together more loosely according\nto occasional interaction among groups. Based on this\ntheory, social network analysis offers a number of clustering\nalgorithms for identifying communities based on network\ndata.\nThe system provides users with these network analyses\n(Fig. 4) so that they can decide who can access their information\n. For example, if user wants to diffuse her information\n, she might consider granting access to a person (with\ncertain trust) who has both high degreeness and betweenness\n. On the other hand, she must be aware of betweenness\nwhen the information is private or confidential. Clustering\nis useful when a user wishes to share information within a\ncertain group.\nAPPLICATION\nTo demonstrate and evaluate our system, we developed a\ncommunity site (Fig. 5) using communication tools such as\nWeblogs, Wikis, and Forums. By that system, studies from\ndifferent organizations and projects can be disseminated and\ntheir information thereby shared. Users can share various\ninformation such as papers, ideas, reports, and schedules at\nthe site. Our system is integrated into a site that provides\naccess control to that information. Integrating our system\ntakes advantage of the open and information nature of the\ncommunication tools. It also maintains the privacy of the\ncontent and activities of those applications.\nUsers can manage their social networks (Fig. 3) and attach\nthe access control list to their content (e.g., Blog entries\n, profiles, and Wiki pages) using extracted social relationships\nand social network analysis (Fig. 4).\nOnce a user determines the access control list, she can save\nit as her information access policy for corresponding content.\nThe access policy is described using extended eXtensible\nAccess Control Markup Language (XACML) and is stored\nin a database. 
She can reuse and modify the previous policy\nif she subsequently creates a similar content.\nOne feature of our system is that it is easily adaptable to\nnew applications because of its plug-and-play design. We\nare planning to integrate it into various Web sites and applications\nsuch as social network sites and RSS readers.\nRELATED WORKS AND CONCLUSIONS\nGoecks and Mynatt propose a Saori infrastructure that\nalso uses social networks for information sharing [3]. They\nobtain social networks from users' email messages and provide\nsharing policies based on the type of information. We\nobtain social networks from various sources and integrate\nthem into FOAF files. This facilitates the importation and\nmaintenance of social network data. Another feature is that\nour system enables users to analyze their social networks.\nThereby, users can control information dissemination more\neffectively and flexibly than through the use of pre-defined\npolicies.\nAs users increasingly disseminate their information on the\nWeb, privacy concerns demand that access to particular information\nbe limited.\nWe propose a real-world oriented\ninformation sharing system using social networks. It enables\nusers to control the information dissemination process\nwithin social networks, just as they are in the real world.\nFuture studies will evaluate the system with regard to how\nit contributes to wider and safer information sharing than it\nwould otherwise. We will also develop a distributed system\nthat can be used fully on the current Web.\nREFERENCES\n[1] D. Brickley and L. Miller. FOAF: the 'friend of a\nfriend' vocabulary. http://xmlns. com/foaf/0.1/, 2004.\n[2] L. C. Freeman. Centrality in social networks:\nConceptual clarification, Social Networks, Vol.1,\npp.215239, 1979.\n[3] J. Goecks and E. D. Mynatt. Leveraging Social\nNetworks for Information Sharing In Proc. of\nCSCW'04, 2004.\n[4] J. Golbeck, J. Hendler, and B. Parsia. Trust networks\non the semantic web, in Proc. WWW 2003, 2003.\n[5] M. Granovetter. Strength of weak ties, American\nJournal of Sociology, Vol.18, pp.13601380, 1973.\n[6] C. Masolo, S. Borgo, A. Gangemi, N. Guarinno, and\nA. Oltramari. WonderWeb Deliverable D18,\nhttp://wonderweb.semanticweb.org/deliverable/D18.shtml\n[7] Y. Matsuo, M. Hamasaki, J. Mori, H. Takeda and K.\nHasida. Ontological Consideration on Human\nRelationship Vocabulary for FOAF. In Proc. of the 1st\nWorkshop on Friend of a Friend, Social Networking\nand Semantic Web, 2004.\n[8] Y. Matsuo, H. Tomobe, K. Hasida, M. Ishiz uka.\nFinding Social Network for Trust Calculation. In\nProc. of 16th European Conference on Artificial\nIntelligence, 2004.\n84\n", "keywords": "Social network;Information sharing"} {"name": "167", "title": "Remote Access to Large Spatial Databases", "abstract": "Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet . However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. 
Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions.", "fulltext": "INTRODUCTION\nIn recent years, enterprises in the public and private sectors\nhave provided access to large volumes of spatial data\nover the Internet. Interactive work with such large volumes\nof online spatial data is a challenging task. We have been developing\nan interactive browser for accessing spatial online\ndatabases: the SAND (Spatial and Non-spatial Data) Internet\nBrowser. Users of this browser can interactively and\nvisually manipulate spatial data remotely. Unfortunately,\ninteractive remote access to spatial data slows to a crawl\nwithout proper data access mechanisms. We developed two\nseparate methods for improving the system performance, together\n, form a dynamic network infrastructure that is highly\nscalable and provides a satisfactory user experience for interactions\nwith large volumes of online spatial data.\nThe core functionality responsible for the actual database\noperations is performed by the server-based SAND system.\nSAND is a spatial database system developed at the University\nof Maryland [12].\nThe client-side SAND Internet\nBrowser provides a graphical user interface to the facilities\nof SAND over the Internet. Users specify queries by choosing\nthe desired selection conditions from a variety of menus\nand dialog boxes.\nSAND Internet Browser is Java-based, which makes it deployable\nacross many platforms. In addition, since Java has\noften been installed on target computers beforehand, our\nclients can be deployed on these systems with little or no\nneed for any additional software installation or customiza-tion\n. The system can start being utilized immediately without\nany prior setup which can be extremely beneficial in\ntime-sensitive usage scenarios such as emergencies.\nThere are two ways to deploy SAND. First, any standard\nWeb browser can be used to retrieve and run the client piece\n(SAND Internet Browser) as a Java application or an applet.\nThis way, users across various platforms can continuously\naccess large spatial data on a remote location with little or\n1\n5\nno need for any preceding software installation. The second\noption is to use a stand-alone SAND Internet Browser along\nwith a locally-installed Internet-enabled database management\nsystem (server piece). In this case, the SAND Internet\nBrowser can still be utilized to view data from remote locations\n. However, frequently accessed data can be downloaded\nto the local database on demand, and subsequently accessed\nlocally. Power users can also upload large volumes of spatial\ndata back to the remote server using this enhanced client.\nWe focused our efforts in two directions. We first aimed at\ndeveloping a client-server architecture with efficient caching\nmethods to balance local resources on one side and the significant\nlatency of the network connection on the other. The\nlow bandwidth of this connection is the primary concern in\nboth cases. 
The outcome of this research primarily addresses\nthe issues of our first type of usage (i.e., as a remote browser\napplication or an applet) for our browser and other similar\napplications.\nThe second direction aims at helping users\nthat wish to manipulate large volumes of online data for\nprolonged periods. We have developed a centralized peer-to\n-peer approach to provide the users with the ability to\ntransfer large volumes of data (i.e., whole data sets to the\nlocal database) more efficiently by better utilizing the distributed\nnetwork resources among active clients of a client-server\narchitecture. We call this architecture APPOINT -Approach\nfor Peer-to-Peer Offloading the INTernet. The\nresults of this research addresses primarily the issues of the\nsecond type of usage for our SAND Internet Browser (i.e.,\nas a stand-alone application).\nThe rest of this paper is organized as follows. Section 2 describes\nour client-server approach in more detail. Section 3\nfocuses on APPOINT, our peer-to-peer approach. Section 4\ndiscusses our work in relation to existing work. Section 5\noutlines a sample SAND Internet Browser scenario for both\nof our remote access approaches. Section 6 contains concluding\nremarks as well as future research directions.\nTHE CLIENT-SERVER APPROACH\nTraditionally, Geographic Information Systems (GIS)\nsuch as ArcInfo from ESRI [2] and many spatial databases\nare designed to be stand-alone products.\nThe spatial\ndatabase is kept on the same computer or local area network\nfrom where it is visualized and queried. This architecture\nallows for instantaneous transfer of large amounts of data\nbetween the spatial database and the visualization module\nso that it is perfectly reasonable to use large-bandwidth protocols\nfor communication between them. There are however\nmany applications where a more distributed approach is desirable\n. In these cases, the database is maintained in one location\nwhile users need to work with it from possibly distant\nsites over the network (e.g., the Internet). These connections\ncan be far slower and less reliable than local area networks\nand thus it is desirable to limit the data flow between the\ndatabase (server) and the visualization unit (client) in order\nto get a timely response from the system.\nOur client-server approach (Figure 1) allows the actual\ndatabase engine to be run in a central location maintained\nby spatial database experts, while end users acquire a Java-based\nclient component that provides them with a gateway\ninto the SAND spatial database engine.\nOur client is more than a simple image viewer. Instead, it\noperates on vector data allowing the client to execute many\noperations such as zooming or locational queries locally. In\nFigure 1: SAND Internet Browser -- Client-Server\narchitecture.\nessence, a simple spatial database engine is run on the client.\nThis database keeps a copy of a subset of the whole database\nwhose full version is maintained on the server. This is a\nconcept similar to `caching'. In our case, the client acts as\na lightweight server in that given data, it evaluates queries\nand provides the visualization module with objects to be\ndisplayed. 
It initiates communication with the server only\nin cases where it does not have enough data stored locally.\nSince the locally run database is only updated when additional\nor newer data is needed, our architecture allows the\nsystem to minimize the network traffic between the client\nand the server when executing the most common user-side\noperations such as zooming and panning. In fact, as long\nas the user explores one region at a time (i.e., he or she is\nnot panning all over the database), no additional data needs\nto be retrieved after the initial population of the client-side\ndatabase.\nThis makes the system much more responsive\nthan the Web mapping services. Due to the complexity of\nevaluating arbitrary queries (i.e., more complex queries than\nwindow queries that are needed for database visualization),\nwe do not perform user-specified queries on the client. All\nuser queries are still evaluated on the server side and the\nresults are downloaded onto the client for display. However,\nassuming that the queries are selective enough (i.e., there are\nfar fewer elements returned from the query than the number\nof elements in the database), the response delay is usually\nwithin reasonable limits.\n2.1\nClient-Server Communication\nAs mentioned above, the SAND Internet Browser is a\nclient piece of the remotely accessible spatial database server\nbuilt around the SAND kernel. In order to communicate\nwith the server, whose application programming interface\n(API) is a Tcl-based scripting language, a servlet specifically\ndesigned to interface the SAND Internet Browser with the\nSAND kernel is required on the server side. This servlet listens\non a given port of the server for incoming requests from\nthe client. It translates these requests into the SAND-Tcl\nlanguage. Next, it transmits these SAND-Tcl commands or\nscripts to the SAND kernel. After results are provided by\nthe kernel, the servlet fetches and processes them, and then\nsends those results back to the originating client.\nOnce the Java servlet is launched, it waits for a client to\ninitiate a connection. It handles both requests for the actual\nclient Java code (needed when the client is run as an applet)\nand the SAND traffic. When the client piece is launched,\nit connects back to the SAND servlet, the communication\nis driven by the client piece; the server only responds to\nthe client's queries. The client initiates a transaction by\n6\nsending a query.\nThe Java servlet parses the query and\ncreates a corresponding SAND-Tcl expression or script in\nthe SAND kernel's native format.\nIt is then sent to the\nkernel for evaluation or execution. The kernel's response\nnaturally depends on the query and can be a boolean value,\na number or a string representing a value (e.g., a default\ncolor) or, a whole tuple (e.g., in response to a nearest tuple\nquery). If a script was sent to the kernel (e.g., requesting\nall the tuples matching some criteria), then an arbitrary\namount of data can be returned by the SAND server. In this\ncase, the data is first compressed before it is sent over the\nnetwork to the client. The data stream gets decompressed\nat the client before the results are parsed.\nNotice, that if another spatial database was to be used\ninstead of the SAND kernel, then only a simple modification\nto the servlet would need to be made in order for the\nSAND Internet Browser to function properly. 
In particular\n, the queries sent by the client would need to be recoded\ninto another query language which is native to this different\nspatial database. The format of the protocol used for communication\nbetween the servlet and the client is unaffected.\nTHE PEER-TO-PEER APPROACH\nMany users may want to work on a complete spatial data\nset for a prolonged period of time. In this case, making an\ninitial investment of downloading the whole data set may be\nneeded to guarantee a satisfactory session. Unfortunately,\nspatial data tends to be large. A few download requests\nto a large data set from a set of idle clients waiting to be\nserved can slow the server to a crawl. This is due to the fact\nthat the common client-server approach to transferring data\nbetween the two ends of a connection assumes a designated\nrole for each one of the ends (i.e, some clients and a server).\nWe built APPOINT as a centralized peer-to-peer system\nto demonstrate our approach for improving the common\nclient-server systems. A server still exists. There is a central\nsource for the data and a decision mechanism for the\nservice. The environment still functions as a client-server\nenvironment under many circumstances. Yet, unlike many\ncommon client-server environments, APPOINT maintains\nmore information about the clients. This includes, inventories\nof what each client downloads, their availabilities, etc.\nWhen the client-server service starts to perform poorly or\na request for a data item comes from a client with a poor\nconnection to the server, APPOINT can start appointing\nappropriate active clients of the system to serve on behalf\nof the server, i.e., clients who have already volunteered their\nservices and can take on the role of peers (hence, moving\nfrom a client-server scheme to a peer-to-peer scheme). The\ndirectory service for the active clients is still performed by\nthe server but the server no longer serves all of the requests.\nIn this scheme, clients are used mainly for the purpose of\nsharing their networking resources rather than introducing\nnew content and hence they help offload the server and scale\nup the service. The existence of a server is simpler in terms\nof management of dynamic peers in comparison to pure peer-to\n-peer approaches where a flood of messages to discover\nwho is still active in the system should be used by each peer\nthat needs to make a decision. The server is also the main\nsource of data and under regular circumstances it may not\nforward the service.\nData is assumed to be formed of files. A single file forms\nthe atomic means of communication. APPOINT optimizes\nrequests with respect to these atomic requests. Frequently\naccessed data sets are replicated as a byproduct of having\nbeen requested by a large number of users. This opens up\nthe potential for bypassing the server in future downloads for\nthe data by other users as there are now many new points of\naccess to it. Bypassing the server is useful when the server's\nbandwidth is limited.\nExistence of a server assures that\nunpopular data is also available at all times. The service\ndepends on the availability of the server. The server is now\nmore resilient to congestion as the service is more scalable.\nBackups and other maintenance activities are already being\nperformed on the server and hence no extra administrative\neffort is needed for the dynamic peers. If a peer goes\ndown, no extra precautions are taken. 
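APPOINT's internal data structures are not given in the paper; purely as an illustration, the server-side directory of active clients, their download inventories and availabilities, and a very crude appointment decision might look like this (all names and the selection rule are made up):

import java.util.*;

/** Illustrative sketch of APPOINT's server-side directory of active clients (peers). */
class PeerDirectory {
    /** What each registered peer has downloaded and whether it is currently available. */
    static final class PeerInfo {
        final String address;
        final Set<String> filesHeld = new HashSet<>();
        boolean available = true;
        PeerInfo(String address) { this.address = address; }
    }

    private final Map<String, PeerInfo> peers = new HashMap<>();

    void register(String peerAddress)   { peers.put(peerAddress, new PeerInfo(peerAddress)); }
    void unregister(String peerAddress) { peers.remove(peerAddress); }    // peer simply leaves the table
    void recordDownload(String peerAddress, String file) {
        PeerInfo p = peers.get(peerAddress);
        if (p != null) p.filesHeld.add(file);                             // inventory of what the client holds
    }

    /**
     * Decide who should serve a request: if the server is overloaded and some available
     * peer already holds the file, appoint that peer; otherwise the server serves it directly.
     */
    String appointServer(String file, boolean serverOverloaded) {
        if (serverOverloaded) {
            for (PeerInfo p : peers.values()) {
                if (p.available && p.filesHeld.contains(file)) return p.address;
            }
        }
        return "SERVER";   // the server remains the fallback source, so unpopular data stays available
    }
}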
In fact, APPOINT\ndoes not require any additional resources from an already\nexisting client-server environment but, instead, expands its\ncapability. The peers simply get on to or get off from a table\non the server.\nUploading data is achieved in a similar manner as downloading\ndata. For uploads, the active clients can again be\nutilized. Users can upload their data to a set of peers other\nthan the server if the server is busy or resides in a distant\nlocation. Eventually the data is propagated to the server.\nAll of the operations are performed in a transparent fashion\nto the clients. Upon initial connection to the server,\nthey can be queried as to whether or not they want to share\ntheir idle networking time and disk space. The rest of the\noperations follow transparently after the initial contact. APPOINT\nworks on the application layer but not on lower layers\n. This achieves platform independence and easy deploy-ment\nof the system. APPOINT is not a replacement but\nan addition to the current client-server architectures. We\ndeveloped a library of function calls that when placed in a\nclient-server architecture starts the service. We are developing\nadvanced peer selection schemes that incorporate the\nlocation of active clients, bandwidth among active clients,\ndata-size to be transferred, load on active clients, and availability\nof active clients to form a complete means of selecting\nthe best clients that can become efficient alternatives to the\nserver.\nWith APPOINT we are defining a very simple API that\ncould be used within an existing client-server system easily.\nInstead of denial of service or a slow connection, this API\ncan be utilized to forward the service appropriately. The\nAPI for the server side is:\nstart(serverPortNo)\nmakeFileAvailable(file,location,boolean)\ncallback receivedFile(file,location)\ncallback errorReceivingFile(file,location,error)\nstop()\nSimilarly the API for the client side is:\nstart(clientPortNo,serverPortNo,serverAddress)\nmakeFileAvailable(file,location,boolean)\nreceiveFile(file,location)\nsendFile(file,location)\nstop()\nThe server, after starting the APPOINT service, can make\nall of the data files available to the clients by using the\nmakeFileAvailable method.\nThis will enable APPOINT\nto treat the server as one of the peers.\nThe two callback methods of the server are invoked when\na file is received from a client, or when an error is encountered\nwhile receiving a file from a client. APPOINT guar-7\nFigure 2: The localization operation in APPOINT.\nantees that at least one of the callbacks will be called so\nthat the user (who may not be online anymore) can always\nbe notified (i.e., via email).\nClients localizing large data\nfiles can make these files available to the public by using the\nmakeFileAvailable method on the client side.\nFor example, in our SAND Internet Browser, we have the\nlocalization of spatial data as a function that can be chosen\nfrom our menus. This functionality enables users to download\ndata sets completely to their local disks before starting\ntheir queries or analysis. In our implementation, we have\ncalls to the APPOINT service both on the client and the\nserver sides as mentioned above. Hence, when a localization\nrequest comes to the SAND Internet Browser, the browser\nleaves the decisions to optimally find and localize a data set\nto the APPOINT service. Our server also makes its data\nfiles available over APPOINT. 
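The parameter types and the callback mechanism behind this API are not spelled out in the paper, so the following is just one plausible Java rendering of the server-side calls listed above (the meaning of the location and boolean arguments is guessed), together with how a host application such as the SAND server might wire itself to it:

import java.io.File;

/** One plausible Java rendering of the APPOINT server-side API (types are assumptions). */
interface AppointServer {
    void start(int serverPortNo);
    void makeFileAvailable(File file, String location, boolean advertiseToPeers);
    void stop();

    /** The two server-side callbacks, expressed as an interface the host application implements. */
    interface Callbacks {
        void receivedFile(File file, String location);
        void errorReceivingFile(File file, String location, Exception error);
    }
}

/** Sketch of how a host application such as the SAND server might use the service. */
class SandServerHost implements AppointServer.Callbacks {
    private final AppointServer appoint;

    SandServerHost(AppointServer appoint) { this.appoint = appoint; }

    void publishDataSets(File[] dataFiles) {
        appoint.start(9000);                                   // port number is arbitrary
        for (File f : dataFiles) {
            appoint.makeFileAvailable(f, f.getPath(), true);   // lets APPOINT treat the server as one of the peers
        }
    }

    @Override public void receivedFile(File file, String location) {
        System.out.println("Upload completed: " + file + " -> " + location);  // e.g., notify the user by email
    }

    @Override public void errorReceivingFile(File file, String location, Exception error) {
        System.err.println("Upload failed for " + file + ": " + error.getMessage());
    }
}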
The mechanism for the localization\noperation is shown with more details from the\nAPPOINT protocols in Figure 2. The upload operation is\nperformed in a similar fashion.\nRELATED WORK\nThere has been a substantial amount of research on remote\naccess to spatial data.\nOne specific approach has\nbeen adopted by numerous Web-based mapping services\n(MapQuest [5], MapsOnUs [6], etc.). The goal in this approach\nis to enable remote users, typically only equipped\nwith standard Web browsers, to access the company's spatial\ndatabase server and retrieve information in the form of\npictorial maps from them. The solution presented by most\nof these vendors is based on performing all the calculations\non the server side and transferring only bitmaps that represent\nresults of user queries and commands. Although the\nadvantage of this solution is the minimization of both hardware\nand software resources on the client site, the resulting\nproduct has severe limitations in terms of available functionality\nand response time (each user action results in a new\nbitmap being transferred to the client).\nWork described in [9] examines a client-server architecture\nfor viewing large images that operates over a low-bandwidth\nnetwork connection.\nIt presents a technique\nbased on wavelet transformations that allows the minimization\nof the amount of data needed to be transferred over\nthe network between the server and the client. In this case,\nwhile the server holds the full representation of the large image\n, only a limited amount of data needs to be transferred\nto the client to enable it to display a currently requested\nview into the image. On the client side, the image is reconstructed\ninto a pyramid representation to speed up zooming\nand panning operations. Both the client and the server keep\na common mask that indicates what parts of the image are\navailable on the client and what needs to be requested. This\nalso allows dropping unnecessary parts of the image from the\nmain memory on the server.\nOther related work has been reported in [16] where a\nclient-server architecture is described that is designed to provide\nend users with access to a server. It is assumed that\nthis data server manages vast databases that are impractical\nto be stored on individual clients. This work blends raster\ndata management (stored in pyramids [22]) with vector data\nstored in quadtrees [19, 20].\nFor our peer-to-peer transfer approach (APPOINT), Nap-ster\nis the forefather where a directory service is centralized\non a server and users exchange music files that they have\nstored on their local disks. Our application domain, where\nthe data is already freely available to the public, forms a\nprime candidate for such a peer-to-peer approach. Gnutella\nis a pure (decentralized) peer-to-peer file exchange system.\nUnfortunately, it suffers from scalability issues, i.e., floods of\nmessages between peers in order to map connectivity in the\nsystem are required. Other systems followed these popular\nsystems, each addressing a different flavor of sharing over\nthe Internet. Many peer-to-peer storage systems have also\nrecently emerged. PAST [18], Eternity Service [7], CFS [10],\nand OceanStore [15] are some peer-to-peer storage systems.\nSome of these systems have focused on anonymity while others\nhave focused on persistence of storage. Also, other approaches\n, like SETI@Home [21], made other resources, such\nas idle CPUs, work together over the Internet to solve large\nscale computational problems. 
Our goal is different than\nthese approaches. With APPOINT, we want to improve existing\nclient-server systems in terms of performance by using\nidle networking resources among active clients. Hence, other\nissues like anonymity, decentralization, and persistence of\nstorage were less important in our decisions. Confirming\nthe authenticity of the indirectly delivered data sets is not\nyet addressed with APPOINT. We want to expand our research\n, in the future, to address this issue.\nFrom our perspective, although APPOINT employs some\nof the techniques used in peer-to-peer systems, it is also\nclosely related to current Web caching architectures. Squirrel\n[13] forms the middle ground. It creates a pure peer-to-peer\ncollaborative Web cache among the Web browser caches\nof the machines in a local-area network. Except for this recent\npeer-to-peer approach, Web caching is mostly a well-studied\ntopic in the realm of server/proxy level caching [8,\n11, 14, 17]. Collaborative Web caching systems, the most\nrelevant of these for our research, focus on creating either\na hierarchical, hash-based, central directory-based, or\nmulticast-based caching schemes. We do not compete with\nthese approaches.\nIn fact, APPOINT can work in tandem\nwith collaborative Web caching if they are deployed\ntogether. We try to address the situation where a request\narrives at a server, meaning all the caches report a miss.\nHence, the point where the server is reached can be used to\ntake a central decision but then the actual service request\ncan be forwarded to a set of active clients, i.e., the down-8\nload and upload operations.\nCache misses are especially\ncommon in the type of large data-based services on which\nwe are working. Most of the Web caching schemes that are\nin use today employ a replacement policy that gives a priority\nto replacing the largest sized items over smaller-sized\nones. Hence, these policies would lead to the immediate replacement\nof our relatively large data files even though they\nmay be used frequently. In addition, in our case, the user\ncommunity that accesses a certain data file may also be very\ndispersed from a network point of view and thus cannot take\nadvantage of any of the caching schemes. Finally, none of\nthe Web caching methods address the symmetric issue of\nlarge data uploads.\nA SAMPLE APPLICATION\nFedStats [1] is an online source that enables ordinary citizens\naccess to official statistics of numerous federal agencies\nwithout knowing in advance which agency produced them.\nWe are using a FedStats data set as a testbed for our work.\nOur goal is to provide more power to the users of FedStats\nby utilizing the SAND Internet Browser. As an example,\nwe looked at two data files corresponding to Environmen-tal\nProtection Agency (EPA)-regulated facilities that have\nchlorine and arsenic, respectively. For each file, we had the\nfollowing information available: EPA-ID, name, street, city,\nstate, zip code, latitude, longitude, followed by flags to indicate\nif that facility is in the following EPA programs: Hazardous\nWaste, Wastewater Discharge, Air Emissions, Abandoned\nToxic Waste Dump, and Active Toxic Release.\nWe put this data into a SAND relation where the spatial\nattribute `location' corresponds to the latitude and longitude\n. Some queries that can be handled with our system on\nthis data include:\n1. 
Find all EPA-regulated facilities that have arsenic and\nparticipate in the Air Emissions program, and:\n(a) Lie in Georgia to Illinois, alphabetically.\n(b) Lie within Arkansas or 30 miles within its border.\n(c) Lie within 30 miles of the border of Arkansas (i.e.,\nboth sides of the border).\n2. For each EPA-regulated facility that has arsenic, find\nall EPA-regulated facilities that have chlorine and:\n(a) That are closer to it than to any other EPA-regulated\nfacility that has arsenic.\n(b) That participate in the Air Emissions program\nand are closer to it than to any other EPA-regulated\nfacility which has arsenic. In order to\navoid reporting a particular facility more than\nonce, we use our `group by EPA-ID' mechanism.\nFigure 3 illustrates the output of an example query that\nfinds all arsenic sites within a given distance of the border of\nArkansas. The sites are obtained in an incremental manner\nwith respect to a given point. This ordering is shown by\nusing different color shades.\nWith this example data, it is possible to work with the\nSAND Internet Browser online as an applet (connecting to\na remote server) or after localizing the data and then opening\nit locally. In the first case, for each action taken, the\nclient-server architecture will decide what to ask for from\nthe server. In the latter case, the browser will use the peer-to\n-peer APPOINT architecture for first localizing the data.\nCONCLUDING REMARKS\nAn overview of our efforts in providing remote access to\nlarge spatial data has been given. We have outlined our\napproaches and introduced their individual elements. Our\nclient-server approach improves the system performance by\nusing efficient caching methods when a remote server is accessed\nfrom thin-clients. APPOINT forms an alternative approach\nthat improves performance under an existing client-server\nsystem by using idle client resources when individual\nusers want work on a data set for longer periods of time\nusing their client computers.\nFor the future, we envision development of new efficient algorithms\nthat will support large online data transfers within\nour peer-to-peer approach using multiple peers simultane-ously\n. We assume that a peer (client) can become unavail-able\nat any anytime and hence provisions need to be in place\nto handle such a situation. To address this, we will augment\nour methods to include efficient dynamic updates. Upon\ncompletion of this step of our work, we also plan to run\ncomprehensive performance studies on our methods.\nAnother issue is how to access data from different sources\nin different formats. In order to access multiple data sources\nin real time, it is desirable to look for a mechanism that\nwould support data exchange by design.\nThe XML protocol\n[3] has emerged to become virtually a standard for\ndescribing and communicating arbitrary data. GML [4] is\nan XML variant that is becoming increasingly popular for\nexchange of geographical data. We are currently working\non making SAND XML-compatible so that the user can instantly\nretrieve spatial data provided by various agencies in\nthe GML format via their Web services and then explore,\nquery, or process this data further within the SAND framework\n. This will turn the SAND system into a universal tool\nfor accessing any spatial data set as it will be deployable on\nmost platforms, work efficiently given large amounts of data,\nbe able to tap any GML-enabled data source, and provide\nan easy to use graphical user interface. 
This will also convert\nthe SAND system from a research-oriented prototype\ninto a product that could be used by end users for accessing\n, viewing, and analyzing their data efficiently and with\nminimum effort.\nREFERENCES\n[1] Fedstats: The gateway to statistics from over 100 U.S.\nfederal agencies. http://www.fedstats.gov/, 2001.\n[2] Arcinfo: Scalable system of software for geographic\ndata creation, management, integration, analysis, and\ndissemination. http://www.esri.com/software/\narcgis/arcinfo/index.html, 2002.\n[3] Extensible markup language (xml).\nhttp://www.w3.org/XML/, 2002.\n[4] Geography markup language (gml) 2.0.\nhttp://opengis.net/gml/01-029/GML2.html, 2002.\n[5] Mapquest: Consumer-focused interactive mapping site\non the web. http://www.mapquest.com, 2002.\n[6] Mapsonus: Suite of online geographic services.\nhttp://www.mapsonus.com, 2002.\n[7] R. Anderson. The Eternity Service. In Proceedings of\nthe PRAGOCRYPT'96, pages 242252, Prague, Czech\nRepublic, September 1996.\n[8] L. Breslau, P. Cao, L. Fan, G. Phillips, and\nS. Shenker. Web caching and Zipf-like distributions:\n9\nFigure 3: Sample output from the SAND Internet Browser -- Large dark dots indicate the result of a query\nthat looks for all arsenic sites within a given distance from Arkansas. Different color shades are used to\nindicate ranking order by the distance from a given point.\nEvidence and implications. In Proceedings of the IEEE\nInfocom'99, pages 126134, New York, NY, March\n1999.\n[9] E. Chang, C. Yap, and T. Yen. Realtime visualization\nof large images over a thinwire. In R. Yagel and\nH. Hagen, editors, Proceedings IEEE Visualization'97\n(Late Breaking Hot Topics), pages 4548, Phoenix,\nAZ, October 1997.\n[10] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and\nI. Stoica. Wide-area cooperative storage with CFS. In\nProceedings of the ACM SOSP'01, pages 202215,\nBanff, AL, October 2001.\n[11] A. Dingle and T. Partl. Web cache coherence.\nComputer Networks and ISDN Systems,\n28(7-11):907920, May 1996.\n[12] C. Esperanca and H. Samet. Experience with\nSAND/Tcl: a scripting tool for spatial databases.\nJournal of Visual Languages and Computing,\n13(2):229255, April 2002.\n[13] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A\ndecentralized peer-to-peer Web cache. Rice\nUniversity/Microsoft Research, submitted for\npublication, 2002.\n[14] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad,\nR. Dhanidina, K. Iwamoto, B. Kim, L. Matkins, and\nY. Yerushalmi. Web caching with consistent hashing.\nComputer Networks, 31(11-16):12031213, May 1999.\n[15] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski,\nP. Eaton, D. Geels, R. Gummadi, S. Rhea,\nH. Weatherspoon, W. Weimer, C. Wells, and B. Zhao.\nOceanStore: An architecture for global-scale persistent\nstore. In Proceedings of the ACM ASPLOS'00, pages\n190201, Cambridge, MA, November 2000.\n[16] M. Potmesil. Maps alive: viewing geospatial\ninformation on the WWW. Computer Networks and\nISDN Systems, 29(813):13271342, September 1997.\nAlso Hyper Proceedings of the 6th International World\nWide Web Conference, Santa Clara, CA, April 1997.\n[17] M. Rabinovich, J. Chase, and S. Gadde. Not all hits\nare created equal: Cooperative proxy caching over a\nwide-area network. Computer Networks and ISDN\nSystems, 30(22-23):22532259, November 1998.\n[18] A. Rowstron and P. Druschel. Storage management\nand caching in PAST, a large-scale, persistent\npeer-to-peer storage utility. In Proceedings of the ACM\nSOSP'01, pages 160173, Banff, AL, October 2001.\n[19] H. Samet. 
Applications of Spatial Data Structures:\nComputer Graphics, Image Processing, and GIS.\nAddison-Wesley, Reading, MA, 1990.\n[20] H. Samet. The Design and Analysis of Spatial Data\nStructures. Addison-Wesley, Reading, MA, 1990.\n[21] SETI@Home. http://setiathome.ssl.berkeley.edu/,\n2001.\n[22] L. J. Williams. Pyramidal parametrics. Computer\nGraphics, 17(3):111, July 1983. Also Proceedings of\nthe SIGGRAPH'83 Conference, Detroit, July 1983.\n10\n", "keywords": "GIS;Client/server;Peer-to-peer;Internet"} {"name": "168", "title": "ResearchExplorer: Gaining Insights through Exploration in Multimedia Scientific Data", "abstract": "An increasing amount of heterogeneous information about scientific research is becoming available on-line. This potentially allows users to explore the information from multiple perspectives and derive insights and not just raw data about a topic of interest. However, most current scientific information search systems lag behind this trend; being text-based, they are fundamentally incapable of dealing with multimedia data. An even more important limitation is that their information environments are information-centric and therefore are not suitable if insights are desired. Towards this goal, in this paper, we describe the design of a system, called ResearchExplorer, which facilitates exploring multimedia scientific data to gain insights. This is accomplished by providing an interaction environment for insights where users can explore multimedia scientific information sources. The multimedia information is united around the notion of research event and can be accessed in a unified way. Experiments are conducted to show how ResearchExplorer works and how it cardinally differs from other search systems.", "fulltext": "INTRODUCTION\nCurrent web search engines and bibliography systems are\ninformation-centric. Before searching for information, users need\nto construct a query typically, by using some keywords to\nrepresent the information they want. After the query is issued, the\nsystem retrieves all information relevant to the query. The results\nfrom such queries are usually presented to users by listing all\nrelevant hits. Thus, with these information-centric systems, users\ncan find information such as a person's homepage, a paper, a\nresearch project's web page, and so on. However, when users\nwant to know the following types of things, they are unable to\nfind answers easily with current search systems:\n1) Evolution of a field\n2) People working in the field\n3) A person's contribution to the field\n4) Classical papers (or readings) in the field\n5) Conferences/journals in the field\n6) How the research of a person or an organization (group,\ndept, university, etc) has evolved.\nThe reasons why current information-centric search systems have\ndifficulty to help users to find answers to questions above are due\nto the limitations of their information environments.\nFirst, some issues result from their data modeling. For example, to\nanswer the question of \"evolution of a field\", the most important\ninformation components, which are time and location, need to be\ncaptured and appropriately presented or utilized. However, in\ntypical bibliography systems such information is rigidly utilized\n(if at all available) in the time-stamping sense.\nSecond, many important issues arise due to the presentation\nmethods utilized by such systems. 
For example, even though users\ncan find all papers of a person with some systems, it is not easy\nfor users to observe the trend if the results are just listed\nsequentially. As an alternative, presenting results in a visual form\ncan make trend easier to identify.\nThird, some of the questions listed above can not be answered\ndirectly by the system because the answers depend on individual\nperson. For example, different users will have different judgments\non a researcher's contribution to a field. To form their own\nthoughts, users may need to investigate and compare several\nfactors many times. In this case, it is too tedious if each query is a\nnew query. Thus, it is necessary that the system can maintain\nquery and user states and allow users to refine queries\ndynamically. In other words, the user can not only query but also\nexplore information.\nFor this study, we propose a bibliography system with novel\ninteraction environment that aids not just in syntactic query\nretrieval but also aids in developing insights. The goal of this\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nMIR'04, October 1516, 2004, New York, New York, USA.\nCopyright 2004 ACM 1-58113-940-3/04/0010...$5.00.\n\n7\nsystem is to provide users an interaction environment where\ninformation is modeled, accessed, and presented in such a way\nthat users can gain insights easily through exploration.\nSpecifically, in the interaction environment, scientific information\nis modeled around the notion of a research event, which brings\ntogether all semantically related information regardless of the\nmedia (text, image, or video), through which it is expressed. Thus,\nwhen users explore the information space, they can view\nresearch in multiple media formats. Further, the interaction\nenvironment presents information using multidimensional views,\nwhich include temporal and spatial views. At the same time, the\ninteraction environment shows information of other attributes of\nresearch, like category and people information.\nIn summary, the contribution of this work is to propose a novel\ninteraction environment for insights. Although the system is\nfocused on scientific information, we believe the techniques\ndeveloped in this work are applicable to other applications and\ncan work as a framework guiding design of interaction\nenvironments for insights. The paper is structured as follows. We\nbegin with an introduction of interaction environment for insights.\nSection 3 describes the system architecture. Section 4 explains\ndata modeling of the interaction environment. Section 5 presents\nhow the interaction environment is implemented. Section 6\ndiscusses experiments and results. Section 7 gives a review of\nrelated work. Section 8 concludes.\n\nINTERACTION ENVIRONMENT FOR INSIGHTS\nOur goal in designing the system is to provide an interaction\nenvironment for users to explore multimedia scientific data to\ngain insights into research. Insight is commonly understood as\nfollows.\nInsight: the clear (and often sudden) understanding of a complex\nsituation [21].\nFrom the definition, we can see insight is different from\ninformation. 
If insight is gained, people should be able to\nunderstand the inner nature of things. To illustrate their\ndifference, we refer the reader to Figure 1. In the figure, left part\nshows two columns of numbers. What these numbers convey to\npeople is just information. It is very difficult for people to\nunderstand the relationship between numbers in these two\ncolumns by looking at numbers only. But if we show these\nnumbers by a chart as in the right hand, people can easily tell and\nunderstand that the two columns have linear relationship. That is\nthe insight. In this case, people gain insight by understanding\nrelationship, which is visualized by a certain technique.\nIn the context of research, insights should include clear\nunderstanding of different situations. Examples of these situations\nare a research field, a person, an organization, and a specific\nresearch event which will be defined later.\n\n2.2 Key Characteristics of Interaction\nEnvironment for Insights\nAn interaction environment for insights is an environment that\nhelps users to gain insights through exploration. It consists of a\ndatabase to store data, and user interface to explore data. Such an\nenvironment has the following key characteristics.\n1) Database to store information. As described in section 1,\nspatio-temporal characteristics of information are critical to\npresent the evolution of a situation. Clear understanding a\nsituation often requires understanding how the situation\nevolves. Therefore, spatio-temporal aspects of information\nshould be captured in data modeling. In addition, the data\nmodeling should be able to unify multimedia information.\nMultimedia enables users to observe things using different\nsenses. Some media can help people to understand quickly.\nFigure 2 shows such an example. By looking at the text at the\ntop, people may not be able to understand what the paper\n\"Content Based Image Synthesis\" talks about. But with the\nhelp of the images below, people can get an idea quickly\nwhat the paper is about. Further, multimedia provides users\nopportunity to view things from different perspectives. This\nis especially important when clear understanding a situation\nrequires examine the situation from multiple angles.\n2) User interface. As people gain insights by exploration, they\nmay need to check into a situation repeatedly and from\ndifferent viewpoints. Thus, interaction between a user and\nthe environment becomes very important. The design of user\ninterface should take this into consideration. We believe the\nkey features of UI are as follows.\na. The UI should support exploration of the spatial and\ntemporal characteristics of information.\nb. The UI should support direct interactions between the\nusers and the information. This requires the UI to have\ntwo characteristics: First, the UI should have the same\nquery and presentation space. In other words, a window\nin the UI can not only be used to show information but\nalso be used to specify queries. For example, time and\nlocation windows can show temporal and spatial\ninformation. At the same time, users can issue temporal\nand spatial queries in time and location windows. To\nFigure 1. Information vs. insight\n51\n255\n153\n459\n408\n204\n102\n306\n357\n1\n5\n3\n9\n8\n4\n2\n6\n7\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n500\n1\n2\n3\n4\n5\n6\n7\n8\n9\n8\nspecify a query, the operation should be simple and\ndirect. The other characteristic is the reflective nature of\nthe UI. 
This means once information in a window is\nchanged, all other windows will be updated\nautomatically. This helps users to interact with the\nenvironment directly and effectively.\nc. The UI enables users to issue dynamic query. In some\ncurrent interaction environments, users are constrained\nin forming queries. For example, users can only\ngenerate temporal query with one time interval. In an\ninteraction environment for insights, users should be\nable to form a query with multiple choices. This\nprovides users more flexibility to look into a situation of\ninterest.\nd. The UI maintains the query state. It should know which\nquery whose results are used in the search condition of\nanother query and which query is based on another\nquery's results. This helps users not only to be aware to\ncontexts but also to form complex queries.\ne. The UI should have zoom-in/zoom-out functionality\nthat allows examining the information at different\nresolutions. When large volume of data is retrieved,\nthere is readability issue. To address this issue, zoom-in/zoom\n-out functionality is needed.\nf. Different visualization techniques need to be used. As\nshown in figure 1, visualization techniques help users to\nunderstand relationships and gain insights. However,\ndifferent relationships need different visualization\ntechniques. For example, social relationships are of\nnetwork structure. Temporal relationships are two\ndimensional. To visualize these two types of\nrelationships effectively, it requires different techniques.\nThese characteristics will guide the interaction environment\ndesign of the system we are discussing.\nSYSTEM ARCHITECTURE\nFigure 3 shows the high level architecture of ResearchExplorer.\nThere are three main components: Event Collector, Event\nDatabase and Interaction Environment. One of the functions of\nEvent Collector is to gather data from different sources. Then it\nparses and assimilates information around the notion of research\nevent. Finally, it sends these data to Event Database. Event\nDatabase is a database of events. It stores all information around\nevents. ResearchExplorer uses a natural XML database for Event\nDatabase. The reasons will be explained in next section. In this\ndatabase, all information about a research event is stored as an\nXML file. The schema will be defined in section 4.2. Interaction\nEnvironment consists of User Interface and Searcher. Through the\nUI, users form a query. The query is then converted into XPath\nformat by the Searcher and sent to the Event Database. After the\nresults are retrieved from the Event Database, the Searcher gets\nthem back to the User Interface to present to users.\nIn this paper, our focus is not on how to collect data or unify\nmultimedia information by event. Interested readers can refer to\n[5][6][14][15][18] for information gathering and [17] for\nmultimedia information assimilation. What we focus is on the\ndesign of Interaction Environment based on research event.\n\n\n\nEVENT DATABASE\nAs described above, the interaction environment for insights\nneeds data modeling which can capture temporal and spatial\ncharacteristics, and unify multimedia. A recent work in [17] has\nproposed a unified multimedia data model that is capable of\ndescribing spatial-temporal characteristics of information. This\nmodel is built on the notion of events. 
Its efficacy has been\ndemonstrated in different domains including modeling of\nmultimedia information related to meetings [17] and personal\nResearch Events\nEvent\nCollector\nSearcher\nUser Interface\nData\nEvent\nDatabase\nFigure 3. ResearchExplorer architecture.\nquery\nInteraction\nEnvironment\nFigure 2. Multimedia helps understanding.\nPaper Title: Content Based Image Synthesis\n9\nmultimedia information management [16]. We base our current\nresearch on these ideas and extend them further to the specific\ndomain of scientific information management. In [17], an event is\ndefined as follows.\nEvent: An event is an observed physical reality parameterized by\nspace and the time. The observations describing the event are\ndefined by the nature or physics of the observable, the\nobservation model, and the observer.\nThis definition was given to events in general. In order to be\nconcrete in research domain, a specific event definition to\nresearch is necessary. Therefore, based on their definition, we are\ndefining an event in research domain, which is called research\nevent, as follows.\nResearch event: A research event is a set of semantically\ncorrelated events within research domain, parameterized by time,\nlocation, participant, and content.\nNote that semantics is contextual. It depends on many factors like\ntime, location, people etc. Thus, a research event is flexible. For\ninstance, it can be a research paper, a thesis, a technical report, a\nbook, a patent, a presentation, an image, a video, or a project\ncombining part or all of aforementioned. Semantics also depends\non domain level. It is generated differently at different domain\nlevels even though from the same event. That is because different\naspects of the event are emphasized at different domain levels.\nThus, a research event could be part of another one. For example,\nsomeone is talking is an event by itself. At the same time, it is part\nof a seminar event as well.\nThe definition of a research event provides us with the central\ncharacteristics to meet the requirements of our application. By the\ndefinition, a research event is parameterized by time and location.\nIt can capture the dynamics of itself. Thus, users can easily\nobserve how events evolve, which is helpful to insight generation.\nRelationships between events can be shown in terms of attributes\nof an event. This will enable users to observe events in a big\ncontext and get deeper understanding. Further, all multimedia data\nis unified around the notion of a research event. Thus, a research\nevent becomes an access point to multimedia data.\n4.2 Semi-Structured Data\nMultimedia data about scientific research does not follow a rigid\nstructure. For example, research papers have reference while\nimages do not have. Even for references, the number of citations\nvaried over different papers. At the same time, these data do have\nsome common information components such as time and location\ninformation. This semi-structured characteristic makes methods\nlike relational database for storing structured data unsuitable.\nTechniques for storing semi-structured data are appropriate\ninstead.\nXML is one of the solutions to model semi-structured data. It has\nbecome very popular for introducing semantics in text. And it has\nrapidly replaced automatic approaches to deduce semantics from\nthe data in text files. This approach to explicitly introduce tags to\nhelp processes compute semantics has been very successful so far\n[13]. 
Based on this, we choose XML to store research event\ninformation. Figure 4 shows the schema of XML files for research\nevents.\n\n4.3 Description of the Data Model\nBased on the definition of research event, four fundamental\ninformation components are needed to describe a research event.\nThese components are: when the research event happens, where\nthe research event occurs, who participates in the research event,\nand what the research event is about. Thus, a data model as\nfollows is proposed to represent a research event. As shown in\nFigure 5, a research event is characterized by the following\nattributes: Name, Time, Participant, Category, Mediasource,\nSubevents, and Free Attributes. Here Name refers to the name of a\nresearch event, Time refers to the times when the research is done,\nParticipant refers to people who do the research and their\naffiliations, Category refers to the ACM Classification of the\nresearch, which can belong to several categories, Mediasource\ncontains media type and source (URL) of the media covering the\nresearch event, Subevent refers to part of the research event and\nhas the same structure of a research event, Free Attributes are\nused to capture media specific characteristics when needed, for\nexample, it refers to reference for a paper.\nAs described above, the data model encapsulates all information\ncomponents of a research event by one or more attributes. When\ncomponent is captured by Time, where is captured by Participant\nAffiliation, who is captured by Participant, and what is captured\nby Name and Categories. Multimedia supporting the research\nevent is brought in by Mediasource attribute.\n4.4 XML Database\nIn ResearchExplorer, Berkeley DB XML [3] is chosen for Event\nDatabase. Berkeley DB XML is an application-specific native\nXML data manager. It is supplied as a library that links directly\ninto the application's address space. Berkeley DB XML provides\nstorage and retrieval for native XML data and semi-structured\ndata. So it can meet the requirements of Event Database.\nFigure 4. Schema of research event.\n10\n\n\nUSER INTERFACE\nIn ResearchExplorer, a unified presentation-browsing-querying\ninterface is used. Research events are shown in multidimensional\nviews. As multimedia data is organized around research event, the\ndata is presented by fundamental components of research event,\ni.e., When, Where, Who, and What. Figure 7(a) shows a\nscreenshot of the user interface we developed. There are totally\nfive windows plus a text box. In the upper are timeline and map\nwindows showing time and location information of research\nevents. In the lower right, there are two windows showing people\nand category information. The window in the lower left is\ndifferent from those windows aforementioned. It is used to show\nmultimedia data of research events. Once a research event is\nselected, multimedia data like papers, images, and videos are\npresented in this window and they are presented according to the\nevent-subevent structure. Clicking on a specific media instance\nlabel will lead users to the original source of the media and trigger\nappropriate application for that particular kind of media. So users\ncan view original media as they want. The text box is designed for\nkeyword-based searching. It enables users to search information\nin traditional way.\n5.2 Research Representation\nTime and location are the primary parameters based on which\ndynamics is captured. Therefore, they are depicted as the primary\nexploration dimensions. 
The way to represent research events in\nthese windows is critical. In ResearchExplorer, two different\nrepresentation methods are used. We borrowed idea of\nrepresenting research events from [16] in the timeline window\nwhere research events are represented by rectangles. A rectangle\nspans the duration of a research event. Within each rectangle,\nthere may be smaller rectangles. These smaller ones represent\nsubevents of the research event. All rectangles for one research\nevent are nested according to the event-subevent structure. The\nmedia, presenting the research event, are represented by icons in\nthe rectangles. Icons are chosen intuitively for users to recognize\neasily. They are specific to each media. Icons belong to the same\nresearch event are grouped together in chronological order. The\nfidelity of such a representation is maintained during temporal\nzoom-in/zoom-out operations as described later. The recursive\nnature of the representation is used to capture aggregate\nrelationships where a research event may comprise of other\nevents. The primary purpose of such a representation is to provide\nusers with a structural and temporal view of research events. In\nthe map window, research events are represented by \"dot maps\"\n[19]. Each dot in the map shows a research event at the location\nof the dot. By means of dot maps, the precision of location\ninformation is high, and the variable density of dots conveys\ninformation about the amount of research events at a location.\n5.3 What-You-See-Is-What-You-Get\n(WYSIWYG)\nIn the system, WYSIWYG search is employed the query and\npresentation spaces are the same. As described above, windows\nserve as a way to display information and relationships of research\nevents. These windows, except details window, serve another\nfunction in specifying queries as well. Contrary to many search\ninterfaces where users specify several properties and then press a\nbutton to issue a query, users can issue a query by a simple\noperation in this user interface. For example, users can launch a\nquery by specifying a time interval, a location region, a person's\nname, or a research category. Figure 6 shows examples of these\nmethods. In ResearchExplorer, exploration is based on sessions.\nEach session consists of one or more queries. A query is either a\nnew session query or a refine query. A new session query is the\nfirst query of each session. All other queries in a session are refine\nqueries. For new session query, the system retrieves results from\nthe database. If it is a refine query, the query will not be sent to\nthe database. It will be executed based on the results set of the\nnew session query of that session. With this method, users can\nchoose a broad set of results first, and then observe any subset of\nthe results of interest. This is very important because knowledge is\naccumulated as users manipulate the results by choosing different\nperspectives. Once a refine query is posed, results of the query\nwill be highlighted in all windows.\n\nFigure 5. Graphical representation of a research event.\nRE: Research Event\nN: Name\nT: Time\nPS: Participants\nP: Participant\nPN: Participant's Name\nFN: First Name\nMS: Media Source\nMT: Media Type\nS: Source\nSS: Subevents\nSE: Subevent\nFA: Free Attribute\nN\nT\nRE\nC\nSS\nPS\nAN LA LO\nFN\nPN\nPA\nLN\nP\nCS FA\n...\n...\nMT\nS\nLN: Last Name\nPA: Participant Affiliation\nAN: Affiliation Name\nLA: Latitude\nLO: Longitude\nCS: Categories\nC: Category\nSE\n...\nMS\nFigure 6. 
Different query methods. (a) shows a query by\ntime. (b) shows a query by a person. (c) shows query by\nspecifying a region of locations. (d) shows query by\ncategory.\n11\n5.4 Reflective UI\nIn designing the user interface, multiple window coordination\nstrategy is used. By means of this strategy, components of the user\ninterface are coupled tightly. The windows respond to user\nactivity in a unified manner such that user interaction in one\nwindow is reflected instantly in other windows. For example,\nwhen the user selects a research event in timeline window, this\nresearch event will be highlighted in the map and other windows.\nThis cooperative visualization is effective in information\nexploration as it maintains the context as the user interacts with\nthe data. Figure 7(a) shows an example where a research event is\nselected and its information in other windows is highlighted.\n5.5 Interactions with Time and Location\nInformation\nIn ResearchExplorer, both timeline and map provide zoom-in/zoom\n-out function. This makes it possible for users to look at\nhow a research event evolves in details. The timeline has year as\nthe highest level of temporal representation. So it's likely that two\nsubevents of a research event are overlapped when they are shown\nin year level. With zoom-in functionality, users can zoom into\nfiner level to see the temporal relationship between these two\nsubevents. The location window in ResearchExplorer has been\nimplemented using an open source JavaBeansTM based package\ncalled OpenMapTM [2]. Research events are presented as dots on\nthe map. Due to the size limitation, it is hard for users to\ndifferentiate events when they are close to each other. Similarly as\nin timeline, users can zoom into that area and see the events at\nfiner resolution. Further, panning of the entire map is also\nsupported.\nEXPERIMENTS\nIn this section, we conduct some experiments as case studies.\nThese case studies will show how users can look into details of\nresearch events and observe relationships between research\nevents.\n6.1 Experiment I: Exploring Information\nExploring information with context is one of important features of\nResearchExplorer. With this function, users can refine retrieved\nresults to check into different aspects of a situation. In this\nexperiment, we are interested in how users can refine results as\nthey explore information. Assume following information is of\ninterest:\n\nShow all research events on AI during the time period from\n1989 through 2004?\n\nOut of the results above, show the part done in CA\n\nFor each person, show all research events he/she participated\nin.\nTo find answers to the first query, users can select the time\ninterval from 1989 to 2004 and then choose AI category only.\nFigure 7(a) shows all results. Note that the results consist of all\nresearch events in AI, but they do not include all in the world. As\nshown in the figure, timeline window shows the temporal\ninformation and temporal relationships between research events,\nmap window shows the distribution of location, category window\nshows what all categories these research events belong to,\nparticipant window shows all people being involved in these\nresearch events. The details window shows all multimedia data\nabout research events. In this window, a research event named\n\"UMN MegaScout\" is placed at the top after the rectangle\nrepresenting this event is clicked in the timeline window. 
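ResearchExplorer's query code is not given in the paper; the minimal sketch below (element names taken from the data model of Section 4.3 and Figure 5, everything else hypothetical, including the made-up event record) only illustrates how the Searcher could turn this first query (time interval 1989-2004, category AI) into an XPath expression, and how a later spatial refinement could then be applied to the session's cached results rather than to the database:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.*;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

/** Sketch: new-session query as XPath, spatial refinement applied to the cached results. */
public class SearcherSketch {
    public static void main(String[] args) throws Exception {
        // A tiny made-up stand-in for one research-event record in the Event Database.
        String record =
            "<RE><N>Example Project</N><T>2000</T><C>AI</C>" +
            "<PS><P><PA><AN>Example University</AN><LA>37.0</LA><LO>-120.0</LO></PA></P></PS></RE>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(record.getBytes(StandardCharsets.UTF_8)));

        XPath xp = XPathFactory.newInstance().newXPath();

        // New-session query: research events in category AI between 1989 and 2004.
        String expr = "/RE[C='AI' and number(T) >= 1989 and number(T) <= 2004]";
        NodeList hits = (NodeList) xp.evaluate(expr, doc, XPathConstants.NODESET);

        // Refine query: executed against the session's result set, not sent to the database.
        for (int i = 0; i < hits.getLength(); i++) {
            Node event = hits.item(i);
            double lat = Double.parseDouble(xp.evaluate(".//LA", event));
            double lon = Double.parseDouble(xp.evaluate(".//LO", event));
            boolean inRegion = lat >= 32.5 && lat <= 42.0 && lon >= -124.5 && lon <= -114.1; // rough CA box
            System.out.println(xp.evaluate("N", event) + (inRegion ? " [highlighted]" : ""));
        }
    }
}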
If we\nclick the image thumbnails, the original images will be shown.\nFigures 7(b) shows the three images and four videos. When we\nlook at these images and video frames, we can have a better\nunderstanding about this research. In other words, direct\nobservation of the multimedia data of a research event helps users\nto gain insights about the research. In order to answer the second\nquery, we specify a region on the map which encloses all dots on\nCA. The research events are then highlighted in timeline windows\nas shown in Figure 7(c). To check research events a person\nparticipated in, we only need to move the mouse cursor to be over\nthe person's name. Similarly, all research events he/she\nparticipated in will be highlighted.\n6.2 Experiment II: Comparisons with Other\nSystems\nIn this section, we compare ResearchExplorer with other systems\nin terms of functionalities. Without loss of generality, we choose\nGoogle [9], CiteSeer [7], and ACM Digital Library [1] as\nexamples. First, we compare the presentation methods. Figure 8\nshows the screen shots of these systems after a query of \"artificial\nintelligence\" is issued. Compared with ResearchExplorer, these\nsystems are unable to show how AI research evolves. Users thus\ncan not get a whole picture of AI research, which otherwise is\nimportant to understand this area and conduct research in AI.\nAnother comparison is done on query and explore functions. The\nresults are shown in Table 1.\n\nRELATED WORK\nThere are many systems which can search for scientific\ninformation. These systems can be classified into two categories.\nThe first are bibliographical systems developed especially for\nscientific information searching. ACM Digital Library, IEEE\nXplore [10], and INSPEC [11] are good examples of this class.\nThese systems store information about publications, which are\nfrom some pre-selected sources, in their repositories. They\norganize data by using structural information of publications like\ntitle, author, etc. CiteSeer is another well-known system of this\nkind. Compared with the aforementioned, it collects publications\nfrom the web and performs citation indexing in addition to full-text\nindexing [8]. Another class is web search engine for general\ninformation. The most well-known of this type is Google. Systems\nof this class index the text contained in documents, allowing users\nto find information using keyword search. Our work differs\nsignificantly from these systems. These systems are designed to\nconcentrating in information providing. Our work is focused on\nproviding an interaction environment for insights into research.\nOther related work comes from research on multimedia\nexperience. Boll and Westermann [4] presented the Medither\nmultimedia event space, a decentralized peer-to-peer\ninfrastructure that allows to publish, to find and to be notified\nabout multimedia events of interest. Our focus was not to create a\nmultimedia event space but rather to develop an interaction\nenvironment for users to experience multimedia events. The\nInformedia group at CMU has also worked on multimedia\n12\nexperience [20]. However there are important differences here\ntheir main goal was to capture and integrate personal multimedia\nImage1 Image3\nImage2\nVideo4\nVideo1\nVideo2\nVideo3\n(a)\n(c)\n(b)\nFigure 7. ResearchExplorer UI. (a) shows screen shot of the UI. (b) shows the images and videos of a research event. (c) shows the\nhighlighted results when a spatial refinement is made.\nFigure 8. 
Screen shots of other search systems.\n13\nexperiences not create an environment for experiencing\nmultimedia personally.\nIn [12], Jain envisioned the essence of an experiential\nenvironment. The main goal of experiential environment is for\ninsights. Following this work, there are some other work on\nexperiential environment [13][16]. However, in these work, there\nis little discussion on experiential environments. Our work\ndeveloped further some ideas in [12] and concretized the design\nframework of an interaction environment for insights.\nCONCLUSION\nWe have described a novel system which helps users to gain\ninsights through exploring multimedia scientific data. Although\nframework for designing an interaction environment for insights is\nidentified, the implementation is a first step towards a mature\nsystem for insights. In future work, we will build on the methods\ndescribed here. Also, we will investigate more on relationships\nbetween research events and methodologies to present these\nrelationships. We believe some of the more interesting research\nproblems will be identified when new relationships between\nresearch events are used to help users to gain insights.\n\nACKNOWLEDGMENTS\nWe would thank Punit Gupta and Rachel L. Knickmeyer for their\nhelp on timeline and map components of ResearchExplorer.\n\nREFERENCES\n[1] ACM Digital Library, http://portal.acm.org/portal.cfm.\n[2] BBN Technologies (1999), OpenMapTM Open Systems\nMapping Technoloy, http://openmap.bbn.com/.\n[3] Berkeley DB XML,\nhttp://www.sleepycat.com/products/xml.shtml.\n[4] Boll, S., and Westermann, U. Medither -- an Event Space\nfor Context-Aware Multimedia Experiences. Proc. of the\n2003 ACM SIGMM Workshop on Experiential Telepresence\n(ETP '03), 21-30.\n[5] Brin, S., and Page, L. The Anatomy of a Large-Scale\nHypertextual Web Search Engine. Proc. of 7\nth\nInternational\nWorld Wide Web Conference (WWW '98), 107-117.\n[6] Cho, J., Garcia-Monila, H., and Page, L. Efficient Crawling\nthrough URL Ordering. Proc. of 7\nth\nWWW Conference\n(1998), 161-172.\n[7] CiteSeer, http://citeseer.ist.psu.edu/cis.\n[8] Giles, C. L., Bollacker, K. D., and Lawrence, S. CiteSeer: An\nAutomatic Citation Indexing System. The Third ACM\nConference on Digital Libraries (1998), 89-98.\n[9] Google,\nhttp://www.google.com.\n[10] IEEE Xplore, http://ieeexplore.ieee.org/Xplore/DynWel.jsp.\n[11] INSPEC, http://www.iee.org/Publish/INSPEC/.\n[12] Jain, R. Experiential Computing. Communications of the\nACM, 46, 7 (July 2003), 48-54.\n[13] Jain, R., Kim, P., and Li, Z. Experiential Meeting System.\nProc. of the 2003 ACM SIGMM Workshop on Experiential\nTelepresence (ETP '03), 1-12.\n[14] Rowe, N. C. Marie-4: A High-Recall, Self-Improving Web\nCrawler that Finds Images using Captions. IEEE Intelligent\nSystems, 17, 4 (2002), 8-14.\n[15] Shkapenyuk, V., and Suel, T. Design and Implementation of\na High-Performance Distributed Web Crawler. Proc. of the\nIntl. Conf. on Data Engineering (ICDE '02).\n[16] Singh, R., Knickmeyer, R. L., Gupta, P., and Jain, R.\nDesigning Experiential Environments for Management of\nPersonal Multimedia. ACM Multimedia 2004. To Appear.\n[17] Singh, R., Li, Z., Kim, P., and Jain, R. Event-Based\nModeling and Processing of Digital Media. 1\nst\nACM\nSIGMOD Workshop on Computer Vision Meets Databases,\nParis, France, 2004.\n[18] Teng, S-H., Lu, Q., and Eichstaedt, M. Collaborative Web\nCrawling: Information Gathering/Processing over Internet.\nProc. Of the 32\nnd\nHawaii Intl. Conf. 
on System Sciences\n(1999).\n[19] Toyama, K., Logan, R., Roseway, A., and Anandan, P.\nGeographic Location Tags on Digital Images. ACM\nMultimedia (2003), 156-166.\n[20] Wactlar, H. D., Christel, M. G., Hauptmann A. G., and Gong,\nY. Informedia Experience-on-Demand: Capturing,\nIntegrating and Communicating Experiences across People,\nTime and Space. ACM Computing Surveys 31(1999).\n[21] WordNet 2.0 http://www.cogsci.princeton.edu/cgi-bin/webwn\n.\nTable 1. Comparisons of ResearchExplorer and other systems\nFunctions ResearchExplorer\nGoogle\nCiteSeer ACM\nDigital\nLibrary\nShow spatio-temporal\nrelationships\nYes No No Can only list results by\ndate order.\nSame query and\npresentation space?\nYes No No No\nDynamic query\nYes\nNo\nNo\nNo\nMaintain query state\nYes\nNo\nNo\nNo\nZoom-in/zoom-out Yes\nNo\nNo\nNo\nVisualization\ntechniques\nMultiple\nListing only\nListing only\nListing only\n14", "keywords": "Event;Research Event;Multimedia Data;Spatio-Temporal Data;Exploration;Interaction Environment;Insight"} {"name": "169", "title": "Robustness Analysis of Cognitive Information Complexity Measure using Weyuker Properties", "abstract": "Cognitive information complexity measure is based on cognitive informatics, which helps in comprehending the software characteristics. For any complexity measure to be robust, Weyuker properties must be satisfied to qualify as good and comprehensive one. In this paper, an attempt has also been made to evaluate cognitive information complexity measure in terms of nine Weyuker properties, through examples. It has been found that all the nine properties have been satisfied by cognitive information complexity measure and hence establishes cognitive information complexity measure based on information contained in the software as a robust and well-structured one.", "fulltext": "Introduction\nMany well known software complexity measures have been\nproposed such as McCabe's cyclomatic number [8], Halstead\nprogramming effort[5], Oviedo's data flow complexity\nmeasures[9], Basili's measure[3][4], Wang's cognitive complexity\nmeasure[11] and others[7]. All the reported complexity measures\nare supposed to cover the correctness, effectiveness and clarity of\nsoftware and also to provide good estimate of these parameters.\nOut of the numerous proposed measures, selecting a particular\ncomplexity measure is again a problem, as every measure has its\nown advantages and disadvantages. There is an ongoing effort to\nfind such a comprehensive complexity measure, which addresses\nmost of the parameters of software. Weyuker[14] has suggested\nnine properties, which are used to determine the effectiveness of\nvarious software complexity measures. A good complexity\nmeasure should satisfy most of the Weyuker's properties. A new\ncomplexity measure based on weighted information count of a\nsoftware and cognitive weights has been developed by Kushwaha\nand Misra[2]. In this paper an effort has been made to estimate this\ncognitive information complexity measure as robust and\ncomprehensive one by evaluating this against the nine Weyuker's\nproperties.\n\nCognitive Weights of a Software\nBasic control structures [BCS] such as sequence, branch and\niteration [10][13] are the basic logic building blocks of any\nsoftware and the cognitive weights (W\nc\n) of a software [11] is the\nextent of difficulty or relative time and effort for comprehending a\ngiven software modeled by a number of BCS's. 
These cognitive weights for BCS's measure the complexity of the logical structures of the software. Either all the BCS's are in a linear layout or some BCS's are embedded in others. In the former case we sum the weights of all the BCS's; in the latter, the cognitive weights of the inner BCS's are multiplied by the weight of the external BCS.

Cognitive Information Complexity Measure (CICM)
Since software represents computational information and is a mathematical entity, the amount of information contained in the software is a function of the identifiers that hold the information and the operators that perform operations on it, i.e.

Information = f (Identifiers, Operators)

Identifiers are variable names, defined constants and other labels in a software. The information contained in one line of code is therefore the number of all operators and operands in that line of code. Thus the information contained in the k-th line of code is:

I_k = (Identifiers + Operands)_k = (ID_k + OP_k) IU

where ID_k = total number of identifiers in the k-th LOC of the software, OP_k = total number of operators in the k-th LOC of the software, and IU is the Information Unit, expressing that any identifier or operator carries at least one unit of information.

The total information contained in the software (ICS) is the sum of the information contained in each line of code, i.e.

ICS = SUM(k = 1..LOCS) I_k

where I_k = information contained in the k-th line of code and LOCS = total lines of code in the software.

Thus it is the information contained in the identifiers, together with the operations carried out by the operators to achieve the desired goal of the software, that makes software difficult to understand.

Once we have established that software can be comprehended as information measured in information units (IU's) [2], the weighted information count is defined as follows. The Weighted Information Count of a line of code (WICL) is a function of the identifiers, operands and LOC and is defined as:

WICL_k = ICS_k / [LOCS - k]

where WICL_k = weighted information count for the k-th line and ICS_k = information contained in the k-th line of the software. The Weighted Information Count of the Software (WICS) is defined as:

WICS = SUM(k = 1..LOCS) WICL_k

In order to be a complete and robust measure, the complexity measure should also consider the internal control structure of the software. The basic control structures have also been regarded as the Newton's laws of software engineering [10, 11]: they are a set of fundamental and essential flow control mechanisms used for building the logical architecture of software. Using the above definitions, the Cognitive Information Complexity Measure (CICM) is defined as the product of the weighted information count of the software (WICS) and the cognitive weight (Wc) of the BCS's in the software, i.e.

CICM = WICS * Wc

This complexity measure encompasses the major parameters that have a bearing on the difficulty of comprehending software, i.e. its cognitive complexity, and it clearly establishes a relationship between the difficulty in understanding software and its cognitive complexity.
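To make the arithmetic of these definitions concrete, the following minimal C sketch computes ICS, WICS and CICM from per-line identifier/operator tallies. The line tallies, the example cognitive weight and the helper names (line_info, compute_cicm) are illustrative assumptions, not part of the measure's definition; the guard for the final line is also an assumption, since the paper's worked examples never weight the last statement.

#include <stdio.h>

/* per-line tallies: ID_k identifiers and OP_k operators in line k */
struct line_info { int id; int op; };

/* CICM = WICS * Wc, where WICS = sum_k ICS_k / (LOCS - k)          */
/* and ICS_k = ID_k + OP_k is the information of line k, in IU's.   */
static double compute_cicm(const struct line_info *lines, int locs, double wc)
{
    double wics = 0.0;
    for (int k = 1; k <= locs; k++) {
        double ics_k = lines[k - 1].id + lines[k - 1].op;  /* I_k in IU */
        if (locs - k > 0)      /* divide by the number of remaining lines */
            wics += ics_k / (double)(locs - k);
    }
    return wics * wc;
}

int main(void)
{
    /* hypothetical tallies for a 6-line program with one loop:        */
    /* a linear BCS (weight 1) plus one iteration (weight 3) => Wc = 4 */
    struct line_info prog[6] = { {2,1}, {0,0}, {1,0}, {3,3}, {2,2}, {0,0} };
    printf("CICM = %.2f\n", compute_cicm(prog, 6, 4.0));
    return 0;
}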
CICM also introduces a way to measure the amount of information contained in the software, which enables us to calculate the coding efficiency (E_I) as E_I = ICS / LOCS [2].

Evaluation of Cognitive Information Complexity Measure
Weyuker [14] proposed nine properties to evaluate any software complexity measure. These properties also expose the weaknesses of a measure in a concrete way, and with their help one can determine the most suitable measure among the different available complexity measures. In the following paragraphs the cognitive information complexity measure is evaluated against the nine Weyuker properties in order to establish it as a comprehensive measure.

Property 1: (∃P)(∃Q)(|P| ≠ |Q|), where P and Q are program bodies.
This property states that a measure should not rank all programs as equally complex. Consider the two examples given in Fig. 1 and Fig. 2. For the program given in Fig. 1 in Appendix I there are two control structures, a sequential and an iteration, so the cognitive weight of these two BCS's is 1 + 3 = 4. The weighted information count of this program is WICS = 3/6 + 1/4 + 6/3 + 4/2 = 4.75, hence its cognitive information complexity measure is CICM = WICS * Wc = 4.75 * 4 = 19.0. For the program given in Fig. 2 in Appendix I there is only one sequential structure, so the cognitive weight Wc is 1; its WICS is 2.27, hence its CICM is 2.27 * 1 = 2.27. Since the CICM values of the two programs differ, CICM satisfies this property.

Property 2: Let c be a non-negative number. Then there are only finitely many programs of complexity c.
The calculation of WICS depends on the number of identifiers and operators in a given program statement as well as on the number of statements remaining after that very statement, and every programming language consists of only a finite number of BCS's. Therefore CICM cannot rank the complexity of all programs as c, and CICM holds for this property.

Property 3: There are distinct programs P and Q such that |P| = |Q|.
For the program given in Fig. 3 in Appendix I the CICM is 19, the same as that of the program in Fig. 1. Therefore this property holds for CICM.

Property 4: (∃P)(∃Q)(P ≡ Q & |P| ≠ |Q|)
In the program illustrated in Fig. 1 we have replaced the while loop by the formula "sum = (b+1)*b/2", as illustrated in Fig. 2. Both programs compute the sum of the first n integers, yet their CICM values differ, which establishes this property for CICM.

Property 5: (∀P)(∀Q)(|P| ≤ |P;Q| and |Q| ≤ |P;Q|)
Consider the program body given in Fig. 4 in Appendix I. The program body for computing the factorial of a number consists of one sequential and one branch BCS, so Wc = 3. The program body for checking whether the number is prime has one sequential, one iteration and two branch BCS's, so Wc = 1 + 2*3*2 = 13. The main program body, which obtains both the prime check and the factorial of the number, has one sequential, two call and one branch BCS's, so Wc = 1 + 5 + 15 + 2 = 23. The WICS of the program is 5.1, therefore its cognitive information complexity measure is 5.1 * 23 = 117.3.

Now consider the program given in Fig. 5 in Appendix I, which checks whether a number is prime.
There is one sequential, one iteration and three branch BCS's, so Wc = 1 + 2*3*2 + 2 = 15 and WICS = 1.85, giving CICM = 1.85 * 15 = 27.75. For the program given in Fig. 6 in Appendix I there is one sequential, one iteration and one branch BCS; its Wc is 7 and its WICS is 5.11, hence CICM = WICS * Wc = 5.11 * 7 = 35.77. It is clear from the above example that if we take the two separate program bodies, one for checking for a prime (Fig. 5, CICM = 27.75) and the other for calculating the factorial (Fig. 6, CICM = 35.77), both values are less than 117.3. So Property 5 also holds for CICM.

Property 6(a): (∃P)(∃Q)(∃R)(|P| = |Q|) & (|P;R| ≠ |Q;R|)
Let P be the program illustrated in Fig. 1 and Q the program illustrated in Fig. 3. The CICM of both programs is 19. Let R be the program illustrated in Fig. 6. Appending R to P we have the program illustrated in Fig. 7 in Appendix I. The cognitive weight of this program is 9 and its WICS is 8.3, therefore CICM = 8.3 * 9 = 74.7. Similarly, appending R to Q we have Wc = 9 and WICS = 8.925, therefore CICM = 8.925 * 9 = 80.325, and 74.7 ≠ 80.325. This proves that Property 6(a) holds for CICM.

Property 6(b): (∃P)(∃Q)(∃R)(|P| = |Q|) & (|R;P| ≠ |R;Q|)
To illustrate this property let us arbitrarily prepend three program statements to the program given in Fig. 1, which yields the program given in Fig. 8 in Appendix I. There is only one sequential and one iteration BCS, hence the cognitive weight is 1 + 3 = 4 and WICS = 5.58, so CICM = 5.58 * 4 = 22.32. Similarly, prepending the same three statements to the program in Fig. 3 we again have a cognitive weight of 4 and WICS = 5.29, therefore CICM = 21.16 ≠ 22.32. Hence this property also holds for CICM.

Property 7: There are program bodies P and Q such that Q is formed by permuting the order of the statements of P and (|P| ≠ |Q|).
Since WICS depends on the number of operators and operands in a given program statement and on the number of statements remaining after this very statement, permuting the order of statements in any program will change the value of WICS. Also, the cognitive weights of BCS's depend on the sequence of the statements [1]. Hence CICM will be different for the two programs, and CICM holds for this property as well.

Property 8: If P is a renaming of Q, then |P| = |Q|.
CICM is a numeric measure, and naming or renaming a program has no impact on CICM. Hence CICM holds for this property as well.

Property 9: (∃P)(∃Q)(|P| + |Q|) < (|P;Q|), or (∃P)(∃Q)(∃R)(|P| + |Q| + |R|) < (|P;Q;R|)
For the program illustrated in Fig. 4, if we separate the main program body P by segregating Q (prime check) and R (factorial), we obtain the program illustrated in Fig. 9 in Appendix I. This program has one sequential and one branch BCS, thus its cognitive weight is 3 and its WICS is 1.475, therefore CICM = 4.425. Hence 4.425 + 27.75 + 35.77 < 117.3. This proves that CICM also holds for this property.

Comparative Study of Cognitive Information Complexity Measure and Other Measures in Terms of Weyuker Properties
In this section the cognitive information complexity measure is compared with other complexity measures in terms of all nine Weyuker properties. In Table 1, P.N. stands for Property Number, S.C. for Statement Count, C.N. for Cyclomatic Number, E.M. for Effort Measure, D.C. for Dataflow Complexity, C.C.M. for Cognitive Complexity Measure and CICM for Cognitive Information Complexity Measure; Y means Yes and N means No.

P.N.  S.C.  C.N.  E.M.  D.C.  C.C.M.  CICM
1     Y     Y     Y     Y     Y       Y
2     Y     N     Y     N     Y       Y
3     Y     Y     Y     Y     Y       Y
4     Y     Y     Y     Y     Y       Y
5     Y     Y     N     N     Y       Y
6     N     N     Y     Y     N       Y
7     N     N     N     Y     Y       Y
8     Y     Y     Y     Y     Y       Y
9     N     N     Y     Y     Y       Y

Table 1: Comparison of complexity measures with the Weyuker properties.

It may be observed from Table 1 that the complexity of a program under the effort measure, the dataflow measure and the Cognitive Information Complexity Measure depends directly on the placement of statements, and therefore these measures also hold for Property 6. All of the compared complexity measures intend to rank programs differently.

Conclusion
Software complexity measures serve both as analyzers and as predictors in quantitative software engineering. Software quality is defined in terms of completeness, correctness, consistency, absence of misinterpretation and ambiguity, and feasible verifiability in both specification and implementation. For a good complexity measure it is necessary that the measure not only satisfies the above-mentioned properties of software quality but also satisfies the nine Weyuker properties. The software complexity in terms of the cognitive information complexity measure has thus been established as a well-structured complexity measure.

References
[1] Misra, S. and Misra, A.K. (2005): Evaluating Cognitive Complexity Measure with Weyuker Properties. Proc. of the 3rd IEEE International Conference on Cognitive Informatics (ICCI'04).
[2] Kushwaha, D.S. and Misra, A.K. (2005): A Modified Cognitive Information Complexity Measure of Software. Proc. of the 7th International Conference on Cognitive Systems (ICCS'05), accepted for presentation.
[3] Basili, V.R. and Phillips, T.Y. (1983): Metric Analysis and Data Validation Across Fortran Projects. IEEE Trans. Software Eng., SE-9(6):652-663, 1983.
[4] Basili, V.R. (1980): Qualitative Software Complexity Models: A Summary. In: Tutorial on Models and Methods for Software Management and Engineering, IEEE Computer Society Press, Los Alamitos, CA, 1980.
[5] Halstead, M. (1977): Elements of Software Science. Elsevier North-Holland, New York, 1977.
[6] Klemola, T. and Rilling, J. (2003): A Cognitive Complexity Metric Based on Category Learning. Proc. of the 2nd IEEE International Conference on Cognitive Informatics (ICCI'03).
[7] Kearney, J.K., Sedlmeyer, R.L., Thompson, W.B., Gray, M.A. and Adler, M.A. (1986): Software Complexity Measurement. ACM Press, New York, 28:1044-1050, 1986.
[8] McCabe, T.A. (1976): A Complexity Measure. IEEE Trans. Software Eng., SE-2(6):308-320, 1976.
[9] Oviedo, E. (1980): Control Flow, Data Flow and Program Complexity. In Proc. IEEE COMPSAC, Chicago, IL, pages 146-152, November 1980.
[10] Wang, Y. and Shao, J. (2002): On Cognitive Informatics. Keynote lecture, Proc. of the 1st IEEE International Conference on Cognitive Informatics, pages 34-42, August 2002.
[11] Wang, Y. and Shao, J. (2004): Measurement of the Cognitive Functional Complexity of Software. Proc. of the 3rd IEEE International Conference on Cognitive Informatics (ICCI'04).
[12] Wang, Y. and Shao, J. (2003): On Cognitive Informatics. Proc. of the 2nd IEEE International Conference on Cognitive Informatics (ICCI'03), London, England, IEEE CS Press, pages 67-71, August 2003.
[13] Wang, Y. (2002): The Real-Time Process Algebra (RTPA).
Annals\nof Software Engineering, an international journal, and 14:235-247\n,2002.\n\n[14] Weyuker, E.(1988): Evaluating software complexity measure.\nIEEE Transaction on Software Complexity Measure, 14(9): 1357-1365\n,september1988.\nACM SIGSOFT Software Engineering Notes Page 4 January 2006 Volume 31 Number 1\nAppendix I\n\n/*Calculate the sum of first n integer*/\nmain() {\nint i, n, sum=0;\nprintf("enter the number"); //BCS1\nscanf("%d" , &n);\nfor (i=1;i<=n;i++) //BCS2\nsum=sum+i;\nprintf("the sum is %d" ,sumssss);\ngetch();}\n\nFig. 1 : Source code of the sum of first n integers.\n\nmain()\n{\nint b;\nint sum = 0;\nPrintf(\"Enter the Number\");\nScanf(\"%d\", &n);\nSum = (b+1)*b/2;\nPrintf(\"The sum is %d\",sum);\ngetch();\n}\n\nFig. 2 : Source code to calculate sum of first n integers.\n\n# define N 10\nmain( )\n{\nint count\nfloat, sum,average,number;\nsum = count =0;\nwhile (count < N )\n{\nscanf (\" %f\",& number);\nsum = sum+ number;\ncount = count+1;\n}\naverage = sum / N;\nprintf (\"Average =%f\",average);\n}\n\nFig. 3 : Source code to calculate the average of a set of N\nnumbers.\n\n#include< stdio.h >\n#include< stdlib.h >\nint main() {\nlong fact(int n);\nint isprime(int n);\nint n;\nlong int temp;\nclrscr();\nprintf("\\n input the number"); //BCS11\nscanf("%d",&n);\ntemp=fact(n); //BCS12\n{printf("\\n is prime");}\nint flag1=isprime(n); //BCS13\nif (flag1= =1) //BCS14\nelse\n{printf("\\n is not prime")};\nprintf("\\n factorial(n)=%d",temp);\ngetch();\nlong fact(int n) {\nlong int facto=1; //BCS21\nif (n= =0) //BCS22\nfacto=1;else\nfacto=n*fact(n-1);\nreturn(facto); }\nint isprime(int n)\n{ int flag; //BCS31\nif (n= =2)\nflag=1; //BCS32\nelse\nfor (int i=2;i<n;i++) //BCS33\n{ if (n%i= =0) //BCS34\n{ flag=0; Therefore Wc = 3\n\n\nbreak; }\nelse {\nflag=1 ;}}\nreturn (flag);}}\n\nFig. 4: Source code to check prime number and to calculate\nfactorial of the number\n\n#include< stdio.h >\n#include< stdlib.h >\n#include< conio.h >\nint main() { //BCS1\nint flag = 1,n;\nclrscr();\nprintf("\\ n enter the number");\nscanf("%d",&n);\nif (n= =2)\nflag=1; //BCS21\nelse\n{for (int i=2;i<n;i++) //BCS22\nif (n%i= =0) //BCS23\n{ flag=0;\nbreak;}\nelse{\nflag=1;\ncontinue;} }\nif(flag) //BCS3\nprintf("the number is prime");\nelse\nprintf("the number is not prime");\ngrtch();}\nFig.5 : Source code for checking prime number\n\n#include< stdio.h >\n#include< stdlib.h >\n#include< conio.h >\nint main () {\nlong int fact=1;\nint n;\nclrscr();\nACM SIGSOFT Software Engineering Notes Page 5 January 2006 Volume 31 Number 1\nprintf("\\ input the number"); //BCS1\nscanf("%d",&n);\nif (n==0) //BCS21\nelse\nfor(int i=n;i>1;i--) //BCS22\nfact=fact*i;\nprintf("\\n factorial(n)=%1d",fact);\ngetch();}\n\nFig.6 : Source code for calculating factorial of a number\n\nInt main() {\nlong fact(int n);\nint i, n, sum=0;\nprintf("enter the number");\nscanf("%d" , &n);\ntemp = fact(n);\nfor (i=1;i<=n;i++)\nsum=sum+i;\nprintf("the sum is %d" ,sum);\ngetch();\nlong fact(int n){\nlong int facto = 1;\nif (n == 0)\nfacto = 1 else\nfacto = n*fact(n-1);\nreturn(facto);}}\n\nFig.7: Source code of sum of first n integer and factorial of n.\n\nmain() {\nint a,b,result;\nresult = a/b;\nprintf(the result is %d\",result);\nint i, n, sum=0;\nprintf("enter the number");\nscanf("%d" , &n);\nfor (i=1;i<=n;i++)\nsum=sum+i;\nprintf("the sum is %d" ,sum);\ngetch();}\nFig. 
8 : Source code of division and the sum of first\nn integers.\nint main(){\nint n;\nlong int temp;\nclrscr();\nprintf(\"\\n input the number\");\nscanf(\"%d\",&n);\ntemp = fact(n);\n{printf(\"\\ is prime\");}\nint flag1 = isprime(n);\nif (flag1 == 1)\nelse\n{printf(\"\\n is not prime)};\nprintf(\"\\n factorial(n) = %d\",temp);\ngetch();}\n\nFig.9 : Source code of main program body of program in Fig.4\nACM SIGSOFT Software Engineering Notes Page 6 January 2006 Volume 31 Number 1", "keywords": "cognitive weight;cognitive information complexity measure;basic control structures;cognitive information complexity unit;Weighted information count"} {"name": "17", "title": "A Pseudo Random Coordinated Scheduling Algorithm for Bluetooth Scatternets", "abstract": "The emergence of Bluetooth as a default radio interface allows handheld devices to be rapidly interconnected into ad hoc networks. Bluetooth allows large numbers of piconets to form a scatternet using designated nodes that participate in multiple piconets. A unit that participates in multiple piconets can serve as a bridge and forwards traffic between neighbouring piconets. Since a Bluetooth unit can transmit or receive in only one piconet at a time, a bridging unit has to share its time among the different piconets. To schedule communication with bridging nodes one must take into account their availability in the different piconets, which represents a difficult , scatternet wide coordination problem and can be an important performance bottleneck in building scatternets. In this paper we propose the Pseudo-Random Coordinated Scatternet Scheduling (PCSS) algorithm to perform the scheduling of both intra and inter-piconet communication. In this algorithm Bluetooth nodes assign meeting points with their peers such that the sequence of meeting points follows a pseudo random process that is different for each pair of nodes. The uniqueness of the pseudo random sequence guarantees that the meeting points with different peers of the node will collide only occasionally. This removes the need for explicit information exchange between peer devices, which is a major advantage of the algorithm. The lack of explicit signaling between Bluetooth nodes makes it easy to deploy the PCSS algorithm in Bluetooth devices, while conformance to the current Bluetooth specification is also maintained. To assess the performance of the algorithm we define two reference case schedulers and perform simulations in a number of scenarios where we compare the performance of PCSS to the performance of the reference schedulers.", "fulltext": "INTRODUCTION\nShort range radio technologies enable users to rapidly interconnect\nhandheld electronic devices such as cellular phones, palm devices\nor notebook computers. The emergence of Bluetooth [1] as default\nradio interface in these devices provides an opportunity to turn\nthem from stand-alone tools into networked equipment. Building\nBluetooth ad hoc networks also represents, however, a number of\nnew challenges, partly stemming from the fact that Bluetooth was\noriginally developed for single hop wireless connections. In this\npaper we study the scheduling problems of inter-piconet communication\nand propose a lightweight scheduling algorithm that Bluetooth\nnodes can employ to perform the scheduling of both intra and\ninter-piconet communication.\nBluetooth is a short range radio technology operating in the unlicensed\nISM (Industrial-Scientific-Medical) band using a frequency\nhopping scheme. 
Bluetooth (BT) units are organized into piconets.\nThere is one Bluetooth device in each piconet that acts as the master\n, which can have any number of slaves out of which up to seven\ncan be active simultaneously. The communication within a piconet\nis organized by the master which polls each slave according to some\npolling scheme. A slave is only allowed to transmit in a slave-to\n-master slot if it has been polled by the master in the previous\nmaster-to-slave slot. In Section 3 we present a brief overview of\nthe Bluetooth technology.\nA Bluetooth unit can participate in more than one piconet at any\ntime but it can be a master in only one piconet. A unit that participates\nin multiple piconets can serve as a bridge thus allowing\nthe piconets to form a larger network. We define bridging degree\nas the number of piconets a bridging node is member of. A set\nof piconets that are all interconnected by such bridging units is referred\nto as a scatternet network (Figure 1). Since a Bluetooth unit\ncan transmit or receive in only one piconet at a time, bridging units\nmust switch between piconets on a time division basis. Due to the\nfact that different piconets are not synchronized in time a bridging\nunit necessarily loses some time while switching from one piconet\nto the other. Furthermore, the temporal unavailability of bridging\nnodes in the different piconets makes it difficult to coordinate the\ncommunication with them, which impacts throughput and can be\nan important performance constraint in building scatternets.\nThere are two important phenomena that can reduce the efficiency\nof the polling based communication in Bluetooth scatternets:\nslaves that have no data to transmit may be unnecessarily\npolled, while other slaves with data to transmit may have to\nwait to be polled; and\nat the time of an expected poll one of the nodes of a master-slave\nnode pair may not be present in the piconet (the slave\nthat is being polled is not listening or the master that is expected\nto poll is not polling).\nThe first problem applies to polling based schemes in general, while\nthe second one is specific to the Bluetooth environment. In order\nto improve the efficiency of inter-piconet communication the\nscheduling algorithm has to coordinate the presence of bridging\nnodes in the different piconets such that the effect of the second\nphenomenon be minimized.\nHowever, the scheduling of inter-piconet communication expands\nto a scatternet wide coordination problem. Each node that has more\nthan one Bluetooth links have to schedule the order in which it communicates\nwith its respective neighbours. A node with multiple\nBluetooth links can be either a piconet master or a bridging node or\nboth. The scheduling order of two nodes will mutually depend on\neach other if they have a direct Bluetooth link in which case they\nhave to schedule the communication on their common link for the\nsame time slots. This necessitates some coordination between the\nrespective schedulers. For instance in Figure 1 the scheduling order\nof node A and the scheduling order of its bridging neighbours, B,\nC, D and E mutually depend on each other, while nodes D and E\nfurther effects nodes F, G and H as well. 
Furthermore, the possible\nloops in a scatternet (e.g., A-E-G-H-F-D) makes it even more\ncomplicated to resolve scheduling conflicts.\nIn case of bursty traffic in the scatternet the scheduling problem\nis further augmented by the need to adjust scheduling order in response\nto dynamic variation of traffic intensity. In a bursty traffic\nenvironment it is desirable that a node spends most of its time on\nthose links that have a backlogged burst of data.\nOne way to address the coordination problem of inter-piconet\nscheduling is to explicitly allocate, in advance, time slots for communication\nin each pair of nodes. Such a hard coordination approach\neliminates ambiguity with regards to a node's presence in\npiconets, but it implies a complex, scatternet wide coordination\nproblem and requires explicit signaling between nodes of a scatternet\n. In the case of bursty traffic, hard coordination schemes\ngenerate a significant computation and signaling overhead as the\ncommunication slots have to be reallocated in response to changes\nin traffic intensity and each time when a new connection is established\nor released.\nIn this paper we propose the Pseudo-Random Coordinated Scatternet\nScheduling algorithm which falls in the category of soft coordination\nschemes. In soft coordination schemes nodes decide their\npresence in piconets based on local information. By nature, soft coordination\nschemes cannot guarantee conflict-free participation of\nbridging nodes in the different piconets, however, they have a significantly\nreduced complexity. In the PCSS algorithm coordination\nis achieved by implicit rules in the communication without the need\nof exchanging explicit control information. The low complexity of\nthe algorithm and its conformance to the current Bluetooth specification\nallow easy implementation and deployment.\nThe first key component of the algorithm is the notion of checkpoints\nwhich are defined in relation to each pair of nodes that\nare connected by a Bluetooth link and which represent predictable\npoints in time when packet transmission can be initiated on the particular\nlink. In other words, checkpoints serve as regular meeting\npoints for neighboring nodes when they can exchange packets. In\norder to avoid systematic collision of checkpoints on different links\nof a node the position of checkpoints follows a pseudo random sequence\nthat is specific to the particular link the checkpoints belong\nto.\nThe second key component of the algorithm is the dynamic adjustment\nof checking intensity, which is necessary in order to effec-tively\nsupport bursty data traffic. Bandwidth can be allocated and\ndeallocated to a particular link by increasing and decreasing checkpoint\nintensity, respectively.\nTo assess the performance of the algorithm we define two reference\nschedulers and relate the performance of the PCSS scheme to these\nreference algorithms in a number of simulation scenarios.\nThe remainder of the paper is structured as follows. In Section 2 we\ngive an overview of related work focusing on Bluetooth scheduling\nrelated studies available in the literature. Section 3 gives a brief\noverview of the Bluetooth technology. In Section 4 and 5 we introduce\nthe proposed algorithm. In Section 6 we define the reference\nschedulers. Finally, in Section 7 we present simulation results.\nRELATED WORK\nA number of researchers have addressed the issue of scheduling in\nBluetooth. 
Most of these studies have been restricted, however, to\nthe single piconet environment, where the fundamental question is\nthe polling discipline used by the piconet master to poll its slaves.\nThese algorithms are often referred to as intra-piconet scheduling\nschemes. In [7] the authors assume a simple round robin polling\nscheme and investigate queueing delays in master and slave units\ndepending on the length of the Bluetooth packets used. In [5] Johansson\net al. analyze and compare the behavior of three different\npolling algorithms. They conclude that the simple round robin\nscheme may perform poorly in Bluetooth systems and they propose\na scheme called Fair Exhaustive Polling. The authors demonstrate\nthe strength of this scheme and argue in favor of using multi-slot\npackets. Similar conclusions are drawn by Kalia et al. who argue\nthat the traditional round robin scheme may result in waste and un-fairness\n[8]. The authors propose two new scheduling disciplines\nthat utilize information about the status of master and slave queues.\nIn [9, 10] the authors concentrate on scheduling policies designed\nwith the aim of low power consumption. A number of scheduling\npolicies are proposed which exploit either the park or sniff low\npower modes of Bluetooth.\n194\nAlthough the above studies have revealed a number of important\nperformance aspects of scheduling in Bluetooth piconets, the algorithms\ndeveloped therein are not applicable for inter-piconet communication\n. In [6] the authors have shown that constructing an optimal\nlink schedule that maximizes total throughput in a Bluetooth\nscatternet is an NP hard problem even if scheduling is performed\nby a central entity. The authors also propose a scheduling algorithm\nreferred to as Distributed Scatternet Scheduling Algorithm\n(DSSA), which falls in the category of distributed, hard coordination\nschemes. Although the DSSA algorithm provides a solution\nfor scheduling communication in a scatternet, some of its idealized\nproperties (e.g., nodes are aware of the traffic requirements of their\nneighbours) and its relatively high complexity make it difficult to\napply it in a real life environment.\nThere is an ongoing work in the Personal Area Networking (PAN)\nworking group of the Bluetooth Special Interest Group (SIG) [2] to\ndefine an appropriate scheduling algorithm for Bluetooth scatternets\nBLUETOOTH BACKGROUND\nBluetooth is a short range radio technology that uses frequency\nhopping scheme, where hopping is performed on 79 RF channels\nspaced 1 MHz apart. Communication in Bluetooth is always between\nmaster and slave nodes. Being a master or a slave is only\na logical state: any Bluetooth unit can be a master or a slave.\nThe Bluetooth system provides full-duplex transmission based on\nslotted Time Division Duplex (TDD) scheme, where each slot is\n0.625 ms long. Master-to-slave transmission always starts in an\neven-numbered time slot, while slave-to-master transmission always\nstarts in an odd-numbered time slot. A pair of master-to-slave\nand slave-to-master slots are often referred to as a frame. The communication\nwithin a piconet is organized by the master which polls\neach slave according to some polling scheme. A slave is only allowed\nto transmit in a slave-to-master slot if it has been polled by\nthe master in the previous master-to-slave slot. The master may\nor may not include data in the packet used to poll a slave. 
Bluetooth\npackets can carry synchronous data (e.g., real-time traffic) on\nSynchronous Connection Oriented (SCO) links or asynchronous\ndata (e.g., elastic data traffic, which is the case in our study) on\nAsynchronous Connectionless (ACL) links. Bluetooth packets on\nan ACL link can be 1, 3 or 5 slot long and they can carry different\namount of user data depending on whether the payload is FEC\ncoded or not. Accordingly, the Bluetooth packet types DH1, DH3\nand DH5 denote 1, 3 and 5 slot packets, respectively, where the\npayload is not FEC encoded, while in case of packet types DM1,\nDM3 and DM5 the payload is protected with FEC encoding. There\nare two other types of packets, the POLL and NULL packets that do\nnot carry user data. The POLL packet is used by the master when\nit has no user data to the slave but it still wants to poll it. Similarly,\nthe NULL packet is used by the slave to respond to the master if it\nhas no user data. For further information regarding the Bluetooth\ntechnology the reader is referred to [1, 3].\nOVERVIEW OF THE PCSS ALGORITHM\nCoordination in the PCSS algorithm is achieved by the unique\npseudo random sequence of checkpoints that is specific to each\nmaster-slave node pair and by implicit information exchange between\npeer devices. A checkpoint is a designated Bluetooth frame.\nThe activity of being present at a checkpoint is referred to as to\ncheck. A master node actively checks its slave by sending a packet\nto the slave at the corresponding checkpoint and waiting for a response\nfrom the slave. The slave node passively checks its master\nby listening to the master at the checkpoint and sending a response\npacket in case of being addressed.\nThe expected behaviour of nodes is that they show up at each\ncheckpoint on all of their links and check their peers for available\nuser data. The exchange of user data packets started at a checkpoint\ncan be continued in the slots following the checkpoint. A\nnode remains active on the current link until there is user data in\neither the master-to-slave or slave-to-master directions or until it\nhas to leave for a next checkpoint on one of its other links. In\nthe PCSS scheme we exploit the concept of randomness in assigning\nthe position of checkpoints, which excludes the possibility that\ncheckpoints on different links of a node will collide systematically,\nthus giving the node an equal chance to visit all of its checkpoints.\nThe pseudo random procedure is similar to the one used to derive\nthe pseudo random frequency hopping sequence. In particular, the\nPCSS scheme assigns the positions of checkpoints on a given link\nfollowing a pseudo random sequence that is generated based on the\nBluetooth clock of the master and the MAC address of the slave.\nThis scheme guarantees that the same pseudo random sequence\nwill be generated by both nodes of a master-slave pair, while the sequences\nbelonging to different node pairs will be different. Figure\n2 shows an example for the pseudo random arrangement of checkpoints\nin case of a node pair A and B. The length of the current base\nchecking interval is denoted by\nT\n(i)\ncheck\nand the current checking intensity\nis defined accordingly as\n1\nT\n(i)\ncheck\n. 
There is one checkpoint within each base checking interval, and the position of the checkpoint within this window changes from one time window to the other in a pseudo random manner.

Figure 2: Pseudo-random positioning of checkpoints

Since the pseudo random sequence is different from one link to another, checkpoints on different links of a node will collide only occasionally. In case of a collision the node can attend only one of the colliding checkpoints, which implies that the corresponding neighbours have to be prepared for a non-present peer. That is, the master might not poll and the slave might not listen at a checkpoint. We note that a collision occurs either if more than one checkpoint is scheduled for the same time slot, or if the checkpoints are so close to each other that a packet transmission started at the first checkpoint necessarily overlaps the second one. Furthermore, if the colliding checkpoints belong to links in different piconets, the time needed to perform the switch must also be taken into account.

During the communication there is the possibility to increase or decrease the intensity of checkpoints depending on the amount of user data to be transmitted and on the available capacity of the node. According to the PCSS algorithm a node performs certain traffic measurements at the checkpoints and increases or decreases the current checking intensity based on these measurements. Since nodes decide independently about the current checking intensity without explicit coordination, two nodes on a given link may select different base checking periods. In order to ensure that two nodes with different checking intensities on the same link can still communicate, we require the pseudo random generation of checkpoints to be such that the set of checkpoint positions at a lower checking intensity is a subset of the checkpoint positions at any higher checking intensity. In the Appendix we present a pseudo random scheme for generating the positions of checkpoints which has the desired properties.

OPERATION OF PCSS
In what follows, we describe the procedures of the PCSS algorithm. We start with the initialization process, which ensures that two nodes can start communication as soon as a new link has been established or the connection has been reset. Next, we describe the rules that define how nodes calculate their checkpoints, decide upon their presence at checkpoints and exchange packets. Finally, we present the way neighboring nodes can dynamically increase and decrease the checkpoint intensity.

5.1 Initialization
In the PCSS algorithm there is no need for a separate initialization procedure to start communication, since the pseudo random generation of checkpoints is defined such that once a master-slave node pair share the same master's clock and slave's MAC address information, it is guaranteed that the same pseudo random sequence will be produced at each node. That is, two nodes starting checkpoint generation at different time instants with different checking intensities will still be able to communicate. It is up to each node to select an appropriate initial checking intensity, which may depend, for example, on the free capacity of the node or on the amount of data to transmit.
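The Appendix construction itself is not reproduced in this version, so the following C sketch shows only one possible way to obtain the required subset property; the function names prf() and checkpoint_in_window(), the mixing constants and the example parameters in main() are illustrative assumptions rather than the paper's actual PseudoChkGen procedure. The idea is that at every halving step one half of the window is kept pseudo-randomly, so the single checkpoint chosen for a window of length 2T coincides with the checkpoint of one of its two halves of length T, which makes the checkpoint set of any lower intensity a subset of that of any higher intensity.

#include <stdint.h>
#include <stdio.h>

/* Toy pseudo-random function mixing a clock-derived value with the slave's
   MAC address, so that both ends of a link compute identical values while
   different links obtain different sequences.                             */
static uint32_t prf(uint32_t x, uint32_t slave_addr)
{
    x ^= slave_addr * 2654435761u;
    x ^= x >> 13;  x *= 2246822519u;  x ^= x >> 16;
    return x;
}

/* Checkpoint (frame index) inside the base checking window of T_check frames
   starting at frame win_start; win_start must be a multiple of T_check
   (derived from the master's clock), and T_min <= T_check <= T_max are
   powers of two.                                                           */
static uint32_t checkpoint_in_window(uint32_t win_start, uint32_t T_check,
                                     uint32_t T_min, uint32_t slave_addr)
{
    uint32_t lo = win_start, len = T_check;
    while (len > T_min) {
        len /= 2;
        if (prf(lo ^ len, slave_addr) & 1u)   /* keep first or second half */
            lo += len;
    }
    return lo + prf(lo, slave_addr) % T_min;  /* frame inside the kept window */
}

int main(void)
{
    const uint32_t T_min = 4, slave_addr = 0x2A;
    /* checkpoints at a high intensity (period 4) and a low intensity (16) */
    for (uint32_t w = 0; w < 32; w += 4)
        printf("period 4:  window %2u -> frame %2u\n", (unsigned)w,
               (unsigned)checkpoint_in_window(w, 4, T_min, slave_addr));
    for (uint32_t w = 0; w < 32; w += 16)
        printf("period 16: window %2u -> frame %2u\n", (unsigned)w,
               (unsigned)checkpoint_in_window(w, 16, T_min, slave_addr));
    return 0;
}

Because every quantity fed to prf() is derived from the master's clock and the slave's address, the master and the slave of a link compute identical checkpoint positions, while different links obtain different sequences, which is the behaviour the text requires of PseudoChkGen.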
Once the communication is established, the increase and decrease procedures will adjust the possibly different initial checking intensities to a common value.

5.2 Communication
A pair of nodes can start exchanging user data packets at a checkpoint, and the exchange can extend over the slots following the checkpoint. The nodes remain active on the current link after a checkpoint as long as there is user data to be transmitted, or until one of them has to leave in order to attend a checkpoint on one of its other links. After a POLL/NULL packet pair has been exchanged, indicating that there is no more user data left, the nodes switch off their transmitters/receivers and remain idle until the next checkpoint on one of their links. However, during the communication either of the nodes can leave in order to attend a coming checkpoint on one of its other links. After one of the nodes has left, the remaining peer will realize the absence of its peer and go idle until the time of its next checkpoint. If the master has left earlier, the slave will realize the absence of the master at the next master-to-slave slot by not receiving the expected poll. In the worst case the master has left before receiving the last packet response from the slave, which can be a 5 slot packet, in which case the slave wastes 5+1 slots before realizing the absence of the master. Similarly, if the master does not get a response from the slave it assumes that the slave has already left the checkpoint and goes idle until its next checkpoint. Note that the master may also waste 5+1 slots in the worst case before realizing the absence of the slave.

A node stores the current length of the base checking interval and the time of the next checkpoint for each of its Bluetooth links separately. For its i-th link a node maintains the variable T_check^(i), which stores the length of the current base checking period in number of frames, and the variable t_check^(i), which stores the Bluetooth clock of the master at the next checkpoint. After passing a checkpoint, the variable t_check^(i) is updated to the next checkpoint by running the pseudo random generator (PseudoChkGen) with the current value of the master's clock t^(i), the length of the base checking period T_check^(i) and the MAC address of the slave A_slave^(i) as input parameters:

t_check^(i) = PseudoChkGen(T_check^(i), A_slave^(i), t^(i)).

The procedure PseudoChkGen is described in the Appendix. There is a maximum and a minimum checking interval, T_max = 2^f_max and T_min = 2^f_min, respectively. The length of the checking period must be a power of 2 number of frames and must take a value from the interval [2^f_min, 2^f_max].

5.3 Increasing and Decreasing Checking Intensity
The increase and decrease procedures are used to adjust the checking intensity of a node according to the traffic intensity and to the availability of the peer device. Each node decides independently about the current checking intensity based on traffic measurements at checkpoints.

Since the time spent by a node on a link is proportional to the ratio of the number of checkpoints on that link to the number of checkpoints on all links of the node, the bandwidth allocated to a link can be controlled by the intensity of checkpoints on that link.
This can be shown by the following simple calculation. Let us assume that the node has L links and that for the base checking periods on all links of the node it holds that T_min <= T_check^(i) <= T_max, i = 1, ..., L. Then the average number of checkpoints within an interval of length T_max is

N = SUM(i = 1..L) T_max / T_check^(i),

and the average time between two consecutive checkpoints is

t = T_max / N = 1 / ( SUM(i = 1..L) 1/T_check^(i) ),

provided that the pseudo random generator produces a uniformly distributed sequence of checkpoints. Then the share of link j of the total capacity of the node is

r_j = (1/T_check^(j)) / ( SUM(i = 1..L) 1/T_check^(i) ).

A node has to measure the utilization of checkpoints on each of its links separately in order to provide input to the checking intensity increase and decrease procedures. According to the algorithm, a given checkpoint is considered to be utilized if both nodes have shown up at the checkpoint and at least one Bluetooth packet carrying user data has been transmitted or received. If there has not been a successful poll at the checkpoint due to the unavailability of either of the nodes, or if there has been only a POLL/NULL packet pair exchange but no user data has been transmitted, the checkpoint is considered to be unutilized. We note that due to packet losses the utilization of a given checkpoint might be interpreted differently by the two nodes; however, this does not impact the correct operation of the algorithm.
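As a numerical illustration of this formula (the link periods below are arbitrary example values, not taken from the paper): if a node has L = 3 links with base checking periods T_check^(1) = 2, T_check^(2) = 4 and T_check^(3) = 4 frames, then 1/2 + 1/4 + 1/4 = 1, so r_1 = (1/2)/1 = 0.5 and r_2 = r_3 = 0.25. Doubling T_check^(1) to 4 frames would reduce link 1's share to (1/4)/(3/4) = 1/3, which is exactly the mechanism by which the increase and decrease procedures reallocate node capacity.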
Whenever a\ndecrease or increase is performed on link\ni the measured utilization\n\n(i)\nmust be reset.\nSince the parameter\nT\n(i)\ncheck\nis one of the inputs to the pseudo random\ncheckpoint generation process,\nP seudoChkGen the checkpoints\nafter the decrease will be generated according to the new\nperiod. Furthermore, due to the special characteristic of the checkpoint\ngeneration scheme the remaining checkpoints after the decrease\nwill be a subset of the original checkpoints, which guarantees\nthat the two nodes can sustain communication independent of\nlocal changes in checking intensities.\nAn example for the checking intensity decrease in case of a node\npair A and B is shown in Figure 3. First, node A decreases checking\nintensity by doubling its current base checking period in response\nto the measured low utilization. As a consequence node B\nwill find node A on average only at every second checkpoint and\nits measured utilization will decrease rapidly. When the measured\nutilization at node B falls below the threshold\n\nlower\n, B realizes\nthat its peer has a lower checking intensity and follows the decrease\nby doubling its current base checking period. Although we\nhave not explicitly indicated in the Figure, it is assumed that there\nhas been user data exchanged at each checkpoint where both nodes\nwere present.\n=0.35<\nlower\n=0.36<\nlower\nnode A reduces the checking\nintensity, by doubling its base period\ncheckpoints of B toward A\ncheckpoints of A toward B\ndoubles its base period\nnode B realizes the decrease and\n=0.6=0.5\n=0.5\n=0.2\n=0.7\n=0.48 =0.56 =0.46\n=0.5\n=0.58\n=0.35\n=0.35\n=0.65\n=0.2\nFigure 3: Checking intensity decrease\nRecall from the utilization measurement procedure that there is a\nminimum number of checkpoints\nN\nsample,min\nthat has to be sam-pled\nbefore the measured utilization is considered to be confident\nand can be used to decide about checking intensity decrease. The\nparameter\nN\nsample,min\ntogether with the parameter of the moving\naverage method\nq\nuti\ndetermine the time scale over which the\nutilization of checkpoints has to be above the threshold\n\nlower\n,\notherwise the node decreases checking intensity. It might be also\nreasonable to allow that the parameter\nN\nsample,min\nand the moving\naverage parameter\nq\nuti\ncan be changed after each decrease or\nincrease taking into account for example the current checking intensity\n, the available resources of the node or the amount of user\ndata to be transmitted, etc. However, in the current implementation\nwe apply fixed parameter values.\nAfter a checkpoint where user data has been exchanged (not only a\nPOLL/NULL packet pair) checking intensity can be increased provided\nthat the measured utilization of checkpoints exceeds the upper\nthreshold\n\nupper\nand the node has available capacity. Formally\na checking intensity increase is performed on link\ni if the following\ntwo conditions are satisfied:\n\n(i)\n>\nupper\nand\n\n(node)\n<\n(node)\nupper\n,\nwhere\n\n(node)\nupper\nis the upper threshold of the total utilization of the\nnode. This last condition ensures that the intensity of checkpoints\nwill not increase unbounded. The intensity of checkpoints is doubled\nat each increase by dividing the current length of the base\nchecking period\nT\n(i)\ncheck\nby 2. 
For typical values of\n\nupper\nwe recommend\n0.8\nupper\n0.9 in which case the respective\nlower\nvalue should be\n\nlower\n0.4 in order to avoid oscillation of increases\nand decreases.\nFigure 4 shows an example where node A and B communicate and\nafter exchanging user data at the second checkpoint both nodes\ndouble the checking intensity. In the Figure we have explicitly indicated\nwhether there has been user data exchanged at a checkpoint\nor not.\nuser data\ncheckpoints of B toward A\ncheckpoints of A toward B\n=0.8>\nupper\n=0.8>\nupper\n=0.7\n=0.2\n=0.55\n=0.4\n=0.55\n=0.7\nuser data\nuser data\nuser data\n=0.4\n=0.2\nchecking intensity\nboth node A and B double\n=0.3\n=0.3\nFigure 4: Checking intensity increase\n197\nREFERENCE ALGORITHMS\nIn this section we define the Ideal Coordinated Scatternet Scheduler\n(ICSS) and the Uncoordinated Greedy Scatternet Scheduler\n(UGSS) reference algorithms.\nThe ICSS algorithm represents\nthe \"ideal\" case where nodes exploit all extra information when\nscheduling packet transmissions, which would not be available in a\nrealistic scenario. The UGSS algorithm represents the greedy case\nwhere nodes continuously switch among their Bluetooth links in a\nrandom order.\n6.1\nThe ICSS Algorithm\nThe ICSS algorithm is a hypothetical, ideal scheduling algorithm\nthat we use as a reference case in the evaluation of the PCSS\nscheme. In the ICSS algorithm a node has the following extra\ninformation about its neighbours, which represents the idealized\nproperty of the algorithm:\na node is aware of the already pre-scheduled transmissions\nof its neighbours; and\na node is aware of the content of the transmission buffers of\nits neighbours.\nAccording to the ICSS algorithm each node maintains a scheduling\nlist, which contains the already pre-scheduled tasks of the node. A\ntask always corresponds to one packet pair exchange with a given\npeer of the node. Knowing the scheduling list of the neighbours\nallows the node to schedule communication with its neighbours\nwithout overlapping their other communication, such that the capacity\nof the nodes is utilized as much as possible. Furthermore\nbeing aware of the content of the transmission buffers of neighbours\neliminates the inefficiencies of the polling based scheme,\nsince there will be no unnecessary polls and the system will be\nwork-conserving.\nIn the scheduling list of a node there is at most one packet pair\nexchange scheduled in relation to each of its peers, provided that\nthere is a Bluetooth packet carrying user data either in the transmission\nbuffer of the node or in the transmission buffer of the peer\nor in both. After completing a packet exchange on a given link the\ntwo nodes schedule the next packet exchange, provided that there\nis user data to be transmitted in at least one of the directions. If\nthere is user data in only one of the directions, a POLL or NULL\npacket is assumed for the reverse direction depending on whether\nit is the master-to-slave or slave-to-master direction, respectively.\nThe new task is fitted into the scheduling lists of the nodes using\na first fit strategy. According to this strategy the task is fitted into\nthe first time interval that is available in both of the scheduling lists\nand that is long enough to accommodate the new task. 
Note that the\nalgorithm strives for maximal utilization of node capacity by trying\nto fill in the unused gaps in the scheduling lists.\nIf there is no more user data to be transmitted on a previously busy\nlink, the link goes to idle in which case no tasks corresponding to\nthe given link will be scheduled until there is user data again in at\nleast one of the directions.\nAn example for the scheduling lists of a node pair A and B is shown\nin Figure 5. The tasks are labeled with the name of the corresponding\npeers the different tasks belong to. Each node has as many\npre-scheduled tasks in its scheduling list as the number of its active\nBluetooth links. A link is considered to be active if there is\nschedule the next packet pair\nexchange for node A and B\nscheduling list of\nnode A\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ncurrent time\nscheduling list of\nnode B\nt\nt\npeer B\npeer A\npeer A\npeer B\npeer E\npeer E\npeer C\npeer D\npeer C\nFigure 5: Example for the scheduling lists of a node pair in case\nof the ICSS algorithm\nuser data packet in at least one of the directions. Node A has active\npeers B and C, while node B has active peers A, D and E. After\nnode A and B have finished the transmission of a packet pair they\nschedule the next task for the nearest time slots that are available\nin both of their scheduling lists and the number of consecutive free\ntime slots is greater than or equal to the length of the task.\n6.2\nThe UGSS Algorithm\nIn the UGSS algorithm Bluetooth nodes do not attempt to coordinate\ntheir meeting points, instead each node visits its neighbours\nin a random order. Nodes switch continuously among their Bluetooth\nlinks in a greedy manner. If the node has\nn number of links it\nchooses each of them with a probability of\n1/n. The greedy nature\nof the algorithm results in high power consumption of Bluetooth\ndevices.\nIf the node is the master on the visited link it polls the slave by\nsending a packet on the given link. The type of Bluetooth packet\nsent can be a 1, 3 or 5 slot packet carrying useful data or an empty\nPOLL packet depending on whether there is user data to be transmitted\nor not. After the packet has been sent the master remains\nactive on the link in order to receive any response from the slave.\nIf the slave has not been active on the given link at the time when\nthe master has sent the packet it could not have received the packet\nand consequently it will not send a response to the master. After\nthe master has received the response of the slave or if it has sensed\nthe link to be idle indicating that no response from the salve can be\nexpected, it selects the next link to visit randomly.\nSimilar procedure is followed when the node is the slave on the\nvisited link. The slave tunes its receiver to the master and listens\nfor a packet transmission from the master in the current master-to\n-slave slot. If the slave has not been addressed by the master\nin the actual master-to-slave slot it immediately goes to the next\nlink. However, if the slave has been addressed it remains active on\nthe current link and receives the packet. After having received the\npacket of the master the slave responds with its own packet in the\nfollowing slave-to-master slot. After the slave has sent its response\nit selects the next link to visit randomly.\nSIMULATION RESULTS\nFirst, we evaluate the algorithm in a realistic usage scenario, which\nis the Network Access Point (NAP) scenario. 
Next we investigate\ntheoretical configurations and obtain asymptotical results that reveals\nthe scaling properties of the algorithm. For instance we investigate\nthe carried traffic in function of the number of forwarding\n198\nhops along the path and in function of bridging degree. Both in the\nrealistic and theoretical configurations we relate the performance of\nthe PCSS scheme to the performance of the ICSS and UGSS reference\nalgorithms. Before presenting the scenarios and simulation\nresults we shortly describe the simulation environment and define\nthe performance metrics that are going to be measured during the\nsimulations.\n7.1\nSimulation Environment\nWe have developed a Bluetooth packet level simulator, which is\nbased on the Plasma simulation environment [4]. The simulator\nhas a detailed model of all the packet transmission, reception procedures\nin the Bluetooth Baseband including packet buffering, upper\nlayer packet segmentation/reassemble, the ARQ mechanism,\netc. The simulator supports all Bluetooth packet types and follows\nthe same master-slave slot structure as in Bluetooth. For the physical\nlayer we employ a simplified analytical model that captures the\nfrequency collision effect of interfering piconets.\nIn the current simulations the connection establishment procedures,\ne.g., the inquiry and page procedures are not simulated in detail and\nwe do not consider dynamic scatternet formation either. Instead we\nperform simulations in static scatternet configurations where the\nscatternet topology is kept constant during one particular run of\nsimulation.\nIn the current simulations we run IP directly on top of the Bluetooth\nlink layer and we apply AODV as the routing protocol in the\nIP layer. The simulator also includes various implementations of\nthe TCP protocol (we employed RenoPlus) and supports different\nTCP/IP applications, from which we used TCP bulk data transfer\nin the current simulations.\nOne of the most important user perceived performance measures is\nthe achieved throughput. We are going to investigate the throughput\nin case of bulk TCP data transfer and in case of Constant Bit Rate\n(CBR) sources.\nIn order to take into account the power consumption of nodes we\ndefine activity ratio of a node,\nr\nact\nas the fraction of time when\nthe node has been active over the total elapsed time; and power\nefficiency,\np\nef f\nas the fraction of the number of user bytes success-fully\ncommunicated (transmitted and received) over the total time\nthe node has been active. The power efficiency shows the number\nof user bytes that can be communicated by the node during an\nactive period of length 1 sec. Power efficiency can be measured\nin [kbit/sec], or assuming that being active for 1 sec consumes 1\nunit of energy we can get a more straightforward dimension of\n[kbit/energy unit], which is interpreted as the number of bits that\ncan be transmitted while consuming one unit of energy.\n7.2\nNetwork Access Point Scenario\nIn this scenario we have a NAP that is assumed to be connected to\na wired network infrastructure and it provides network access via\nits Bluetooth radio interface. The NAP acts as a master and up to 7\nlaptops, all acting as slaves, can connect to the NAP. Furthermore\nwe assume that each laptop has a Bluetooth enabled mouse and\neach laptop connects to its mouse by forming a new piconet as it is\nshown in Figure 6.\nWe simulate a bulk TCP data transfer from the NAP towards each\nlaptop separately. 
Regarding the traffic generated by the mouse we\nassume that the mouse produces a 16 byte long packet each 50 ms,\nNAP\nlaptop\nmax 7\nlaptop\nmouse\nmouse\nFigure 6: Network Access Point Scenario\nperiodically. In the NAP-laptop communication we are interested\nin the achieved throughput while in the laptop-mouse communication\nwe are concerned with the delay perceived by the mouse.\nIn the current scenario we switched off the dynamic checkperiod\nadjustment capability of the PCSS algorithm and we set the base\nchecking period to 32 frames (40 ms), which is in accordance with\nthe delay requirement of a mouse. Note that this same base checking\nperiod is applied also on the NAP-laptop links, although, there\nis no delay requirement for the TCP traffic. However, the current\nimplementation in the simulator does not yet support the setting of\nthe base checking periods for each link separately. The dynamic\nchecking period adjustment would definitely improve the throughput\nof NAP-laptop communication as we are going to see later in\ncase of other configurations.\nThe simulation results are shown in Figure 7. In plot (a) the averaged\nthroughput of NAP-laptop communications are shown in the\nfunction of number of laptops for the different algorithms, respectively\n. Graph (b) plots the sum of the throughputs between the\nNAP and all laptops. As we expect, the individual laptop throughput\ndecreases as the number of laptops increases. However, it is\nimportant to notice that the sum of laptop throughputs do not decrease\nwith increasing number of laptops in case of the PCSS and\nICSS algorithms. As the number of laptops increases the efficient\ncoordination becomes more important and the total carried traffic\nwill decrease with the uncoordinated UGSS scheme. The increase\nof the total throughput in case of the PCSS algorithm is the consequence\nof the fixed checking intensities, which allocates one half\nof a laptop capacity to the mouse and the other half to the NAP. In\ncase of small number of laptops this prevents the laptops to fully\nutilize the NAP capacity, which improves as the number of laptops\nincreases.\nThe\n99% percentile of the delay seen by mouse packets is shown in\nplot (c). The delay that can be provided with the PCSS algorithm\nis determined by the base checking period that we use. Recall, that\nin the current setup the base checking period of the PCSS scheme\nwas set to 32 frames, which implies that the delay has to be in the\norder of 32 frames, as shown in the figure. The low delay with the\nUGSS algorithm is due to the continuous switching among the links\nof a node, which ensures high polling intensity within a piconet\nand frequent switching between piconets. The UGSS algorithm\nprovides an unnecessarily low delay, which is less than the delay\nrequirement at the expense of higher power consumption.\nPlots (d) and (e) show the averaged activity ratio over all laptops\nand mice, respectively. The considerably higher throughput\nachieved for small number of laptops by the ICSS scheme explains\nits higher activity ratio. On graph (f) the averaged power efficiency\nof laptops is shown, which relates the number of bytes transmitted\nto the total time of activity. The power efficiency of the PCSS\n199\nscheme decreases with increasing number of laptops, which is a\nconsequence of the fixed checking intensities. 
Since the NAP has\nto share its capacity among the laptops, with an increasing number\nof laptops there will be an increasing number of checkpoints where\nthe NAP cannot show up. In such cases the dynamic checking intensity\nadjustment procedure could help by decreasing checking\nintensity on the NAP-laptop links. Recall that in the current scenario\nwe employed fixed checking intensities in order to satisfy the\nmouse delay requirement. It is also important to notice that with the\nuncoordinated UGSS scheme the activity ratio of a mouse is relatively\nhigh, which is an important drawback considering the low\npower capabilities of such devices.\n7.3\nImpact of Number of Forwarding Hops\nIn what follows, we investigate the performance impact of the number\nof forwarding hops along the communication path in the scatternet\nconfiguration shown in Figure 8. The configuration consists\nof a chain of S/M forwarding nodes (\nF\ni\n) and a certain number of\nadditional node pairs connected to each forwarding node in order to\ngenerate background traffic. The number of S/M forwarding nodes\nis denoted by\nN\nF\n. There are\nN\nB\nnumber of background node pairs\nconnected to each forwarding node as masters. The background\ntraffic flows from each source node\nB\n(S)\nij\nto its destination pair\nB\n(D)\nij\nthrough the corresponding forwarding node\nF\ni\n. The traffic\nthat we are interested in is a bulk TCP data transfer between node\nS and D. The background traffic is a CBR source, which generates\n512 byte long IP packets with a period of length 0.05 sec.\nD\nB\n(D)\n1i\nB\n(D)\n2i\nB\n(D)\nNF i\nB\n(S)\nNF i\nB\n(S)\n2i\nB\n(S)\n1i\nF\n1\nF\n2\nF\nNF\nS\nFigure 8: Impact of number of forwarding nodes\nDuring the simulations we vary the number of forwarding hops\nN\nF\nand the number of background node pairs\nN\nB\nconnected to each\nforwarding node. As one would expect, with increasing number of\nforwarding hops and background node pairs the coordinated algorithms\nwill perform significantly better than the one without any\ncoordination (UGSS).\nThe throughput of the S-D traffic as a function of the number of\nforwarding nodes (\nN\nF\n) without background traffic (\nN\nB\n= 0) and\nwith two pairs of background nodes (\nN\nB\n= 2) are shown in Figure\n9 (a) and (b), respectively. The throughput in case of no cross\ntraffic drops roughly by half when we introduce the first forwarding\nnode. Adding additional forwarding hops continuously reduces\nthe throughput, however, the decrease at each step is less drasti-cal\n. We note that in case of the ICSS scheme one would expect\nthat for\nN\nF\n> 1 the throughput should not decrease by adding\nadditional forwarding hops. However, there are a number of other\neffects besides the number of forwarding hops that decrease the\nthroughput. For instance, with an increasing number of forwarding\nhops the number of piconets in the same area increases, which,\nin turn, causes an increasing number of lost packets over the radio\ninterface due to frequency collisions. Furthermore with increasing\nnumber of hops the end-to-end delay suffered by the TCP flow increases\n, which makes the TCP connection less reactive to recover\nfrom packet losses.\nIn the no background traffic case the PCSS scheme performs close\nto the UGSS algorithm in terms of throughput. However, as we\nintroduce two pairs of background nodes the UGSS algorithm fails\ncompletely, while the PCSS scheme still achieves approximately 20\nkbit/sec throughput. 
Furthermore, the power efficiency of the PCSS\nscheme is an order of magnitude higher than that of the UGSS algorithm\nin both cases, which indicates that the PCSS algorithm consumes\nsignificantly less power to transmit the same amount of data\nthan the UGSS scheme.\n7.4\nImpact of Bridging Degree\nNext we investigate the performance of scheduling algorithms as\nthe number of piconets that a bridging node participates in is increased\n. The scatternet setup that we consider is shown in Figure\n10, where we are interested in the performance of the bridging node\nC. Node C is an all slave bridging node and it is connected to master\nnodes\nP\ni\n, where the number of these master nodes is denoted\nby\nN\nP\n. To each master node\nP\ni\nwe connect\nN\nL\nnumber of leaf\nnodes as slaves in order to generate additional background load in\nthe piconets. We introduce bulk TCP data transfer from node C\ntowards each of its master node\nP\ni\nand CBR background traffic\non each\nL\nij\n- P\ni\nlink. The packet generation interval for background\nsources was set to 0.25 sec, which corresponds to a 16\nkbit/sec stream. During the simulation we vary the number of piconets\nN\nP\nparticipated by node C and investigate the performance\nof the PCSS algorithm with and without dynamic checkpoint intensity\nchanges. The number of background nodes\nN\nL\nconnected to\neach master node\nP\ni\nwas set to\nN\nL\n= 3 and it was kept constant in\nthe simulations.\nC\nP\n1\nL\n1i\nL\n1N\nL\nL\nN\nP\n1\nL\nN\nP\nN\nL\nP\nN\nP\nFigure 10: Impact of number of participated piconets\nThe throughputs of TCP flows between node C and each\nP\ni\nare averaged\nand it is shown in Figure 10 (a). The sum of TCP throughputs\nare plotted in graph (b) and the power efficiency of the central\nnode is shown in graph (c). The PCSS algorithm has been tested\nboth with fixed base checking periods equal to 32 frames (\"PCSS-32\"\n) and with dynamic checking intensity changes as well (\"PCSS-dyn\"\n). 
The parameter settings of the dynamic case is shown in Table\n1.\nq\nuti\n= 0.7\nN\nsample,min\n= 4\n\nlower\n= 0.3\n\nupper\n= 0.7\nq\n(node)\nuti\n= 0.7\nN\nuti,win\n= 10\n\n(node)\nmax\n= 0.8\nT\nmin\n= 8\nT\nmax\n= 256\nTable 1: Parameter setting of the dynamic PCSS scheme\n200\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n500\n1\n2\n3\n4\n5\n6\n7\nThroughput [kbit/s]\nNumber of laptops\nTCP throughput per laptop\nPCSS\nUGSS\nICSS\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n500\n1\n2\n3\n4\n5\n6\n7\nThroughput [kbit/s]\nNumber of laptops\nSum TCP throughput of laptops\nPCSS\nUGSS\nICSS\n0\n0.01\n0.02\n0.03\n0.04\n0.05\n0.06\n1\n2\n3\n4\n5\n6\n7\nDelay [sec]\nNumber of laptops\n0.99 percentile of mouse dealy\nPCSS\nUGSS\nICSS\n(a)\n(b)\n(c)\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n2\n3\n4\n5\n6\n7\nActivity ratio\nNumber of laptops\nActivity Ratio of laptops\nPCSS\nUGSS\nICSS\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n2\n3\n4\n5\n6\n7\nActivity ratio\nNumber of laptops\nActivity Ratio of mice\nPCSS\nUGSS\nICSS\n0\n100\n200\n300\n400\n500\n600\n1\n2\n3\n4\n5\n6\n7\nkbit/Energy unit\nNumber of laptops\nPower efficiency of laptops\nPCSS\nUGSS\nICSS\n(d)\n(e)\n(f)\nFigure 7: Throughput, delay and power measures in the function of number of laptops connected to the NAP\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n500\n0\n1\n2\n3\n4\n5\n6\n7\n8\nTCP throughput [kbit/s]\nNumber of forwarding nodes (N_F)\nTCP throughput without background nodes (N_B=0)\nPCSS\nUGSS\nICSS\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n500\n0\n1\n2\n3\n4\n5\n6\n7\n8\nTCP throughput [kbit/s]\nNumber of forwarding nodes (N_F)\nTCP throughput with 2 pairs of background nodes (N_B=2)\nPCSS\nUGSS\nICSS\n0\n100\n200\n300\n400\n500\n600\n0\n1\n2\n3\n4\n5\n6\n7\n8\nPower Efficiency [kbit/Energy unit]\nNumber of forwarding nodes (N_F)\nPower efficiency of forwarding nodes (N_B=2)\nPCSS\nUGSS\nICSS\n(a)\n(b)\n(c)\nFigure 9: Throughput and power efficiency in function of number of forwarding hops\nIt is important to notice that the per flow TCP throughputs in case\nof the dynamic PCSS scheme matches quite closely the throughput\nachieved by the ICSS algorithm and it significantly exceeds the\nthroughput that has been achieved by the fixed PCSS. This large\ndifference is due to the relatively low background traffic in the\nneighbouring piconets of node\nC, in which case the dynamic PCSS\nautomatically reduces checkpoint intensity on the lightly loaded\nlinks and allocates more bandwidth to the highly loaded ones by\nincreasing checking intensity.\nCONCLUSIONS\nWe have presented Pseudo Random Coordinated Scatternet\nScheduling, an algorithm that can efficiently control communication\nin Bluetooth scatternets without exchange of control information\nbetween Bluetooth devices. 
The algorithm relies on two key\ncomponents, namely the use of pseudo random sequences of meeting\npoints, that eliminate systematic collisions, and a set of rules\nthat govern the increase and decrease of meeting point intensity\nwithout explicit coordination.\nWe have evaluated the performance of PCSS in a number of simulation\nscenarios, where we have compared throughput and power\nmeasures achieved by PCSS to those achieved by two reference\nschedulers.\nThe first reference scheduler is an uncoordinated\ngreedy algorithm, while the other is a hypothetical \"ideal\" scheduler\n.\nIn all the scenarios investigated we have found that PCSS achieves\nhigher throughput than the uncoordinated reference algorithm.\nMoreover, with the traffic dependent meeting point intensity adjustments\nthe throughput and power measures of PCSS quite closely\nmatch the results of the \"ideal\" reference algorithm. At the same\ntime PCSS consumes approximately the same amount of power as\nthe ideal scheduler to achieve the same throughput, which is significantly\nless than the power consumption of the uncoordinated\nreference scheduler.\nREFERENCES\n[1] Bluetooth Special Interest Group. Bluetooth Baseband\nSpecification Version 1.0 B. http://www.bluetooth.com/.\n201\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n1\n2\n3\n4\n5\n6\nThroughput [kbit/s]\nNumber of piconets participated by the central node (N_P)\nAveraged TCP throughput between central node and master nodes\nPCSS-32\nPCSS-dyn\nUGSS\nICSS\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n1\n2\n3\n4\n5\n6\nThroughput [kbit/s]\nNumber of piconets participated by the central node (N_P)\nSum of TCP throughputs at the central node\nPCSS-32\nPCSS-dyn\nUGSS\nICSS\n0\n100\n200\n300\n400\n500\n600\n1\n2\n3\n4\n5\n6\nPower efficiency [kbit/Energy unit]\nNumber of piconets participated by the central node (N_P)\nEffective power of central node\nPCSS-32\nPCSS-dyn\nUGSS\nICSS\n(a)\n(b)\n(c)\nFigure 11: Throughput and power efficiency in function of the bridging degree of node C\n[2] Bluetooth Special Interest Group.\nhttp://www.bluetooth.com/.\n[3] J. Haartsen. BLUETOOTH- the universal radio interface for\nad-hoc, wireless connectivity. Ericsson Review, (3), 1998.\n[4] Z. Haraszti, I. Dahlquist, A. Farago, and T. Henk. Plasma an\nintegrated tool for ATM network operation. In Proc.\nInternational Switching Symposium, 1995.\n[5] N. Johansson, U. Korner, and P. Johansson. Performance\nevaluation of scheduling algorithms for Bluetooth. In IFIP\nTC6 WG6.2 Fifth International Conference on Broadband\nCommunications (BC'99), Hong Kong, November 1999.\n[6] N. Johansson, U. Korner, and L. Tassiulas. A distributed\nscheduling algorithm for a Bluetooth scatternet. In Proc. of\nThe Seventeenth International Teletraffic Congress, ITC'17,\nSalvador da Bahia, Brazil, September 2001.\n[7] P. Johansson, N. Johansson, U. Korner, J. Elgg, and\nG. Svennarp. Short range radio based ad hoc networking:\nPerformance and properties. In Proc. of ICC'99, Vancouver,\n1999.\n[8] M. Kalia, D. Bansal, and R. Shorey. MAC scheduling and\nSAR policies for Bluetooth: A master driven TDD\npico-cellular wireless system. In IEEE Mobile Multimedia\nCommunications Conference MOMUC'99, San Diego,\nNovember 1999.\n[9] M. Kalia, D. Bansal, and R. Shorey. MAC scheduling\npolicies for power optimization in Bluetooth: A master\ndriven TDD wireless system. In IEEE Vehicular Technology\nConference 2000, Tokyo, 2000.\n[10] M. Kalia, S. Garg, and R. Shorey. 
Efficient policies for\nincreasing capacity in Bluetooth: An indoor pico-cellular\nwireless system. In IEEE Vehicular Technology Conference\n2000, Tokyo, 2000.\nAPPENDIX\nHere, we present the procedure for generating the pseudo random\nsequence of checkpoints, where we reuse the elements of\nthe pseudo random frequency hop generation procedure available\nin Bluetooth. The inputs to the checkpoint generation procedure\nP seudoChkGen are the current checking period T\n(i)\ncheck\n, the Bluetooth\nMAC address of the slave\nA\nslave\nand the current value of the\nmaster's clock\nt\n(i)\n. A node can perform checkpoint generation using\nthe\nP seudoChkGen procedure at any point in time, it is always\nguaranteed that the position of checkpoint generated by the\ntwo nodes will be the same, as it has been pointed out in Section\n5.1. Nevertheless the typical case will be that whenever a node arrives\nto a checkpoint it generates the position of the next checkpoint\non the given link. The variable\nt\n(i)\ncheck\nalways stores the master's\nclock at the next checkpoint, thus it needs to be updated every time\na checkpoint is passed. Here we note that the Bluetooth clock of a\ndevice is a 28 bit counter, where the LSB changes at every half slot.\nLet us assume that the base period of checkpoints on the\ni\nth\nlink of\nthe node is\nT\n(i)\ncheck\n= 2\nj-2\n,\nj > 2 number of frames, which means\nthat there is one pseudo randomly positioned checkpoint in each\nconsecutive time interval of length\nT\n(i)\ncheck\nand the\nj\nth\nbit of the\nBluetooth clock changes at every\nT\n(i)\ncheck\n. Upon arrival to a checkpoint\nthe variable\nt\n(i)\ncheck\nequals to the current value of the master's\nclock on that link. After the checkpoint generation procedure has\nbeen executed the variable\nt\n(i)\ncheck\nwill store the master's clock at\nthe time of the next checkpoint on that link.\nBefore starting the procedure the variable\nt\n(i)\ncheck\nis set to the current\nvalue of the master's clock\nt\n(i)\nin order to cover the general\ncase when at the time of generating the next checkpoint the value\nof\nt\n(i)\ncheck\ndoes not necessarily equals to the current value of the\nmaster's clock\nt\n(i)\n. The position of the next checkpoint is obtained\nsuch that the node first adds the current value of\nT\n(i)\ncheck\nto the variable\nt\n(i)\ncheck\n, clears the bits\n[j - 1, . . . , 0] of t\n(i)\ncheck\nand\nthen generates the bits\n[j - 1, . . . , 2] one by one using the procedure\nP seudoBitGen(X, W\nctrl\n). When generating the k\nth\nbit\n(\nj -1 k 2) the clock bits X = t\n(i)\ncheck\n[k+1, . . . , k+5] are fed\nas inputs to the\nP seudoBitGen procedure, while the control word\nW\nctrl\nis derived from\nt\n(i)\ncheck\nincluding the bits already generated\nand from the MAC address of the slave\nA\nslave\n. The schematic view\nof generating the clock bits of the next checkpoint is illustrated in\nFigure 12.\nW\n27.\nX\nPseudoBitGen\nctrl\nk.\nk+5.\nk+1.\n28.\n0.\n1.\n2.\nFigure 12: Generating the clock bits of the next checkpoint\n202\nThe\nP seudoBitGen procedure is based on the pseudo random\nscheme used for frequency hop selection in Bluetooth.\nHowever\n, before presenting the\nP seudoBitGen procedure we give the\npseudo-code of the\nP seudoChkGen procedure.\nPseudoChkGen procedure:\nt\n(i)\n: the current value of the master's clock;\nT\n(i)\ncheck\n= 2\nj-2\n, j > 2: current length of the base checkperiod\nin terms of number of frames.\nt\n(i)\ncheck\n= t\n(i)\n;\nt\n(i)\ncheck\n[j - 1, . . . 
, 0] = 0;\nt\n(i)\ncheck\n= t\n(i)\ncheck\n+ T\n(i)\ncheck\n;\nk = j - 1;\nwhile (\nk 2)\nX[0, . . . , 4] = t\n(i)\ncheck\n[k + 1, . . . , k + 5];\nt\n(i)\ncheck\n[k] = P seudoBitGen(X, W\nctrl\n);\nk=k-1;\nend\nFinally, we discuss the\nP seudoBitGen procedure, which is illustrated\nin Figure 13.\n5\nA\nY\n5\nX\n5\nO\n1\n5\nB\nZ\n5\nPERM5\n5\nC\n9\nD\nV\n5\nV[k mod 5]\nbit selector\nX\nO\nR\nAdd\nmod 32\nFigure 13: The PseudoBitGen procedure\nThe\ncontrol\nwords\nof\nthe\nP seudoBitGen\nprocedure\nW\nctrl\n= {A, B, C, D} are the same as the control words of\nthe frequency hop selection scheme in Bluetooth and they are\nshown in Table 2.\nHowever, the input\nX and the additional\nbit selection operator at the end are different.\nAs it has been\ndiscussed above the input\nX is changing depending on which\nbit of the checkpoint is going to be generated.\nWhen generating\nthe\nk\nth\nclock bit of the next checkpoint the clock bits\nX = t\n(i)\ncheck\n[k + 1, . . . , k + 5] are fed as inputs and the bit\nselection operator at the end selects the\n(k mod 5)\nth\nbit of the 5\nbits long output\nV .\nA\nA\nslave\n[27 - 23] t\n(i)\ncheck\n[25 - 21]\nB\nB[0 - 3] = A\nslave\n[22 - 19], B[4] = 0\nC\nA\nslave\n[8, 6, 4, 2, 0] t\n(i)\ncheck\n[20 - 16]\nD\nA\nslave\n[18 - 10] t\n(i)\ncheck\n[15 - 7]\nTable 2: Control words\nThe operation PERM5 is a butterfly permutation, which is the\nsame as in the frequency hop selection scheme of Bluetooth and\nit is described in Figure 14. Each bit of the control word\nP is\nassociated with a given bit exchange in the input word. If the\ngiven bit of the control word equals to 1 the corresponding bit exchange\nis performed otherwise skipped. The control word\nP is\nobtained from\nC and D, such that P [i] = D[i], i = 0 . . . 8 and\nP [j + 9] = C[j], j = 0 . . . 4.\nZ[1]\nZ[2]\nZ[3]\nZ[4]\nZ[0]\nP[11,10]\nP[13,12]\nP[9,8]\nP[7,6]\nP[5,4]\nP[3,2]\nP[1,0]\nFigure 14: Butterfly permutation\n203", "keywords": "checkpoint;total utilization;piconets;threshold;scatternet;PCSS algorithm;Bluetooth;slaves;inter-piconet communication;scheduling;intensity;Network Access Point;bridging unit"} {"name": "170", "title": "Run-Time Dynamic Linking for Reprogramming Wireless Sensor Networks", "abstract": "From experience with wireless sensor networks it has become apparent that dynamic reprogramming of the sensor nodes is a useful feature. The resource constraints in terms of energy, memory, and processing power make sensor network reprogramming a challenging task. Many different mechanisms for reprogramming sensor nodes have been developed ranging from full image replacement to virtual machines. We have implemented an in-situ run-time dynamic linker and loader that use the standard ELF object file format. We show that run-time dynamic linking is an effective method for reprogramming even resource constrained wireless sensor nodes. To evaluate our dynamic linking mechanism we have implemented an application-specific virtual machine and a Java virtual machine and compare the energy cost of the different linking and execution models. We measure the energy consumption and execution time overhead on real hardware to quantify the energy costs for dynamic linking. Our results suggest that while in general the overhead of a virtual machine is high, a combination of native code and virtual machine code provide good energy efficiency. 
Dynamic run-time linking can be used to update the native code, even in heterogeneous networks.", "fulltext": "Introduction\nWireless sensor networks consist of a collection of programmable\nradio-equipped embedded systems. The behavior\nof a wireless sensor network is encoded in software running\non the wireless sensor network nodes. The software in\ndeployed wireless sensor network systems often needs to be\nchanged, both to update the system with new functionality\nand to correct software bugs. For this reason dynamically\nreprogramming of wireless sensor network is an important\nfeature. Furthermore, when developing software for wireless\nsensor networks, being able to update the software of a\nrunning sensor network greatly helps to shorten the development\ntime.\nThe limitations of communication bandwidth, the limited\nenergy of the sensor nodes, the limited sensor node memory\nwhich typically is on the order of a few thousand bytes large,\nthe absence of memory mapping hardware, and the limited\nprocessing power make reprogramming of sensor network\nnodes challenging.\nMany different methods for reprogramming sensor nodes\nhave been developed, including full system image replacement\n[14, 16], approaches based on binary differences [15,\n17, 31], virtual machines [18, 19, 20], and loadable native\ncode modules in the first versions of Contiki [5] and\nSOS [12]. These methods are either inefficient in terms of\nenergy or require non-standard data formats and tools.\nThe primary contribution of this paper is that we investigate\nthe use of standard mechanisms and file formats for\nreprogramming sensor network nodes. We show that in-situ\ndynamic run-time linking and loading of native code using\nthe ELF file format, which is a standard feature on many operating\nsystems for PC computers and workstations, is feasible\neven for resource-constrained sensor nodes. Our secondary\ncontribution is that we measure and quantify the energy\ncosts of dynamic linking and execution of native code\nand compare it to the energy cost of transmission and execution\nof code for two virtual machines: an application-specific\nvirtual machine and the Java virtual machine.\nWe have implemented a dynamic linker in the Contiki operating\nsystem that can link, relocate, and load standard ELF\nobject code files. Our mechanism is independent of the particular\nmicroprocessor architecture on the sensor nodes and\nwe have ported the linker to two different sensor node platforms\nwith only minor modifications to the architecture dependent\nmodule of the code.\nTo evaluate the energy costs of the dynamic linker we implement\nan application specific virtual machine for Contiki\ntogether with a compiler for a subset of Java. We also adapt\nthe Java virtual machine from the lejOS system [8] to run under\nContiki. We measure the energy cost of reprogramming\nand executing a set of program using dynamic linking of native\ncode and the two virtual machines. Using the measurements\nand a simple energy consumption model we calculate\nbreak-even points for the energy consumption of the different\nmechanisms. Our results suggest that while the execution\ntime overhead of a virtual machine is high, a combination of\nnative code and virtual machine code may give good energy\nefficiency.\nThe remainder of this paper is structured as follows. In\nSection 2 we discuss different scenarios in which reprogramming\nis useful. 
Section 3 presents a set of mechanisms for\nexecuting code inside a sensor node and in Section 4 we discuss\nloadable modules and the process of linking, relocating\n, and loading native code. Section 5 describes our implementation\nof dynamic linking and our virtual machines. Our\nexperiments and the results are presented in Section 6 and\ndiscuss the results in Section 7. Related work is reviewed in\nSection 8. Finally, we conclude the paper in Section 9.\nScenarios for Software Updates\nSoftware updates for sensor networks are necessary for a\nvariety of reasons ranging from implementation and testing\nof new features of an existing program to complete reprogramming\nof sensor nodes when installing new applications.\nIn this section we review a set of typical reprogramming scenarios\nand compare their qualitative properties.\n2.1\nSoftware Development\nSoftware development is an iterative process where code\nis written, installed, tested, and debugged in a cyclic fashion\n. Being able to dynamically reprogram parts of the sensor\nnetwork system helps shorten the time of the development\ncycle. During the development cycle developers typically\nchange only one part of the system, possibly only a single\nalgorithm or a function. A sensor network used for software\ndevelopment may therefore see large amounts of small\nchanges to its code.\n2.2\nSensor Network Testbeds\nSensor network testbeds are an important tool for development\nand experimentation with sensor network applications\n. New applications can be tested in a realistic setting\nand important measurements can be obtained [36]. When a\nnew application is to be tested in a testbed the application\ntypically is installed in the entire network. The application\nis then run for a specified time, while measurements are collected\nboth from the sensors on the sensor nodes, and from\nnetwork traffic.\nFor testbeds that are powered from a continuous energy\nsource, the energy consumption of software updates is only\nof secondary importance. Instead, qualitative properties such\nas ease of use and flexibility of the software update mechanism\nare more important. Since the time required to make an\nupdate is important, the throughput of a network-wide software\nupdate is of importance. As the size of the transmitted\nbinaries impact the throughput, the binary size still can be\nUpdate\nUpdate\nUpdate\nProgram\nScenario\nfrequency\nfraction\nlevel\nlongevity\nDevelopment\nOften\nSmall\nAll\nShort\nTestbeds\nSeldom\nLarge\nAll\nLong\nBug fixes\nSeldom\nSmall\nAll\nLong\nReconfig.\nSeldom\nSmall\nApp\nLong\nDynamic\nApplication\nOften\nSmall\nApp\nLong\nTable 1. Qualitative comparison between different reprogramming\nscenarios.\nused as an evaluation metric for systems where throughput is\nmore important than energy consumption.\n2.3\nCorrection of Software Bugs\nThe need for correcting software bugs in sensor networks\nwas early identified [7]. Even after careful testing, new bugs\ncan occur in deployed sensor networks caused by, for example\n, an unexpected combination of inputs or variable link\nconnectivity that stimulate untested control paths in the communication\nsoftware [30].\nSoftware bugs can occur at any level of the system. To\ncorrect bugs it must therefore be possible to reprogram all\nparts of the system.\n2.4\nApplication Reconfiguration\nIn an already installed sensor network, the application\nmay need to be reconfigured. 
This includes change of parameters\n, or small changes in the application such as changing\nfrom absolute temperature readings to notification when\nthresholds are exceeded [26]. Even though reconfiguration\nnot necessarily include software updates [25], application reconfiguration\ncan be done by reprogramming the application\nsoftware. Hence software updates can be used in an application\nreconfiguration scenario.\n2.5\nDynamic Applications\nThere are many situations where it is useful to replace the\napplication software of an already deployed sensor network.\nOne example is the forest fire detection scenario presented by\nFok et al. [9] where a sensor network is used to detect a fire.\nWhen the fire detection application has detected a fire, the\nfire fighters might want to run a search and rescue application\nas well as a fire tracking application. While it may possible\nto host these particular applications on each node despite\nthe limited memory of the sensor nodes, this approach is not\nscalable [9]. In this scenario, replacing the application on the\nsensor nodes leads to a more scalable system.\n2.6\nSummary\nTable 1 compares the different scenarios and their properties\n. Update fraction refers to what amount of the system\nthat needs to be updated for every update, update level to\nat what levels of the system updates are likely to occur, and\nprogram longevity to how long an installed program will be\nexpected to reside on the sensor node.\nCode Execution Models and Reprogramming\nMany different execution models and environments have\nbeen developed or adapted to run on wireless sensor nodes.\n16\nSome with the notion of facilitating programming [1], others\nmotivated by the potential of saving energy costs for reprogramming\nenabled by the compact code representation of\nvirtual machines [19]. The choice of the execution model\ndirectly impacts the data format and size of the data that\nneeds to be transported to a node. In this section we discuss\nthree different mechanisms for executing program code\ninside each sensor node: script languages, virtual machines,\nand native code.\n3.1\nScript Languages\nThere are many examples of script languages for embedded\nsystems, including BASIC variants, Python interpreters\n[22], and TCL machines [1]. However, most script\ninterpreters target platforms with much more resources than\nour target platforms and we have therefore not included them\nin our comparison.\n3.2\nVirtual Machines\nVirtual machines are a common approach to reduce the\ncost of transmitting program code in situations where the\ncost of distributing a program is high. Typically, program\ncode for a virtual machine can be made more compact than\nthe program code for the physical machine. For this reason\nvirtual machines are often used for programming sensor networks\n[18, 19, 20, 23].\nWhile many virtual machines such as the Java virtual machine\nare generic enough to perform well for a variety of\ndifferent types of programs, most virtual machines for sensor\nnetworks are designed to be highly configurable in order\nto allow the virtual machine to be tailored for specific applications\n. 
In effect, this means that parts of the application\ncode is implemented as virtual machine code running on the\nvirtual machine, and other parts of the application code is implemented\nin native code that can be used from the programs\nrunning on the virtual machine.\n3.3\nNative Code\nThe most straightforward way to execute code on sensor\nnodes is by running native code that is executed directly by\nthe microcontroller of the sensor node. Installing new native\ncode on a sensor node is more complex than installing code\nfor a virtual machine because the native code uses physical\naddresses which typically need to be updated before the program\ncan be executed. In this section we discuss two widely\nused mechanisms for reprogramming sensor nodes that execute\nnative code: full image replacement and approaches\nbased on binary differences.\n3.3.1\nFull Image Replacement\nThe most common way to update software in embedded\nsystems and sensor networks is to compile a complete new\nbinary image of the software together with the operating system\nand overwrite the existing system image of the sensor\nnode. This is the default method used by the XNP and Deluge\nnetwork reprogramming software in TinyOS [13].\nThe full image replacement does not require any additional\nprocessing of the loaded system image before it is\nloaded into the system, since the loaded image resides at the\nsame, known, physical memory address as the previous system\nimage. For some systems, such as the Scatterweb system\ncode [33], the system contains both an operating system image\nand a small set of functions that provide functionality\nfor loading new operating system images. A new operating\nsystem image can overwrite the existing image without overwriting\nthe loading functions. The addresses of the loading\nfunctions are hard-coded in the operating system image.\n3.3.2\nDiff-based Approaches\nOften a small update in the code of the system, such as\na bugfix, will cause only minor differences between in the\nnew and old system image. Instead of distributing a new\nfull system image the binary differences, deltas, between the\nmodified and original binary can be distributed. This reduces\nthe amount of data that needs to be transferred. Several types\nof diff-based approaches have been developed [15, 17, 31]\nand it has been shown that the size of the deltas produced by\nthe diff-based approaches is very small compared to the full\nbinary image.\nLoadable Modules\nA less common alternative to full image replacement and\ndiff-based approaches is to use loadable modules to perform\nreprogramming. With loadable modules, only parts of\nthe system need to be modified when a single program is\nchanged. Typically, loadable modules require support from\nthe operating system. Contiki and SOS are examples of systems\nthat support loadable modules and TinyOS is an example\nof an operating system without loadable module support.\nA loadable module contains the native machine code of\nthe program that is to be loaded into the system. The machine\ncode in the module usually contains references to functions\nor variables in the system. These references must be\nresolved to the physical address of the functions or variables\nbefore the machine code can be executed. The process of\nresolving those references is called linking. Linking can be\ndone either when the module is compiled or when the module\nis loaded. We call the former approach pre-linking and\nthe latter dynamic linking. 
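As a deliberately minimal example of the kind of reference that must be resolved, consider a module that calls a function provided by the system core; until the module is linked, by either of the two approaches, the call site cannot contain the correct physical address:

/* Minimal illustrative module. radio_send() is assumed to live in the
   system core, so the call below is an unresolved reference in the
   module's object code until pre-linking or dynamic linking fills in
   the physical address. */
extern void radio_send(void);

void report(void)
{
  radio_send();
}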
A pre-linked module contains\nthe absolute physical addresses of the referenced functions\nor variables whereas a dynamically linked module contains\nthe symbolic names of all system core functions or variables\nthat are referenced in the module. This information increases\nthe size of the dynamically linked module compared to the\npre-linked module. The difference is shown in Figure 1. Dynamic\nlinking has not previously been considered for wireless\nsensor networks because of the perceived run-time overhead\n, both in terms of execution time, energy consumption,\nand memory requirements.\nThe machine code in the module usually contains references\nnot only to functions or variables in the system, but\nalso to functions or variables within the module itself. The\nphysical address of those functions will change depending\non the memory address at which the module is loaded in the\nsystem. The addresses of the references must therefore be\nupdated to the physical address that the function or variable\nwill have when the module is loaded. The process of updating\nthese references is known as relocation. Like linking,\nrelocation can be done either at compile-time or at run-time.\nWhen a module has been linked and relocated the program\nloader loads the module into the system by copying the\n17\nmemcpy\n/* ... */\n}\nvoid radio_send() {\n/* ... */\n}\n0x0237\n0x1720\nCore\nmemcpy();\nradio_send();\ncall 0x1720\ncall 0x0237\nModule with dynamic linking information\nPre-linked module\nmemcpy();\nradio_send();\ncall 0x0000\ncall 0x0000\ncall instruction\ncall instruction\nradio_send\nint memcpy() {\nFigure 1. The difference between a pre-linked module\nand a module with dynamic linking information: the pre-linked\nmodule contains physical addresses whereas the\ndynamically linked module contains symbolic names.\nlinked and relocated native code into a place in memory from\nwhere the program can be executed.\n4.1\nPre-linked Modules\nThe machine code of a pre-linked module contains absolute\naddresses of all functions and variables in the system\ncode that are referenced by the module. Linking of the module\nis done at compile time and only relocation is performed\nat run-time. To link a pre-linked module, information about\nthe physical addresses of all functions and variables in the\nsystem into which the module is to be loaded must be available\nat compile time.\nThere are two benefits of pre-linked modules over dynamically\nlinked modules. First, pre-linked modules are smaller\nthan dynamically linked modules which results in less information\nto be transmitted. Second, the process of loading a\npre-linked module into the system is less complex than the\nprocess of linking a dynamically linked module. However,\nthe fact that all physical addresses of the system core are\nhard-coded in the pre-linked module is a severe drawback as\na pre-linked module can only be loaded into a system with\nthe exact same physical addresses as the system that was to\ngenerate the list of addresses that was used for linking the\nmodule.\nIn the original Contiki system [5] we used pre-linked binary\nmodules for dynamic loading. When compiling the\nContiki system core, the compiler generated a map file containing\nthe mapping between all globally visible functions\nand variables in the system core and their addresses. 
This\nlist of addresses was used to pre-link Contiki modules.\nWe quickly noticed that while pre-linked binary modules\nworked well for small projects with a homogeneous set\nof sensor nodes, the system quickly became unmanageable\nwhen the number of sensor nodes grew. Even a small change\nto the system core of one of the sensor nodes would make it\nimpossible to load binary a module into the system bedcase\nthe addresses of variables and functions in the core were different\nfrom when the program was linked. We used version\nnumbers to guard against this situation. Version numbers did\nhelp against system crashes, but did not solve the general\nproblem: new modules could not be loaded into the system.\n4.2\nDynamic Linking\nWith dynamic linking, the object files do not only contain\ncode and data, but also names of functions are variables\nof the system core that are referenced by the module. The\ncode in the object file cannot be executed before the physical\naddresses of the referenced variables and functions have\nbeen filled in. This process is done at run time by a dynamic\nlinker.\nIn the Contiki dynamic linker we use two file formats for\nthe dynamically linked modules, ELF and Compact ELF.\n4.2.1\nELF - Executable and Linkable Format\nOne of the most common object code format for dynamic\nlinking is the Executable and Linkable Format (ELF) [3]. It\nis a standard format for object files and executables that is\nused for most modern Unix-like systems. An ELF object\nfile include both program code and data and additional information\nsuch as a symbol table, the names of all external\nunresolved symbols, and relocation tables. The relocation\ntables are used to locate the program code and data at other\nplaces in memory than for which the object code originally\nwas assembled. Additionally, ELF files can hold debugging\ninformation such as the line numbers corresponding to specific\nmachine code instructions, and file names of the source\nfiles used when producing the ELF object.\nELF is also the default object file format produced by the\nGCC utilities and for this reason there are a number of standard\nsoftware utilities for manipulating ELF files available.\nExamples include debuggers, linkers, converters, and programs\nfor calculating program code and data memory sizes.\nThese utilities exist for a wide variety of platforms, including\nMS Windows, Linux, Solaris, and FreeBSD. This is a clear\nadvantage over other solutions such as FlexCup [27], which\nrequire specialized utilities and tools.\nOur dynamic linker in Contiki understands the ELF format\nand is able to perform dynamic linking, relocation, and\nloading of ELF object code files. The debugging features of\nthe ELF format are not used.\n4.2.2\nCELF - Compact ELF\nOne problem with the ELF format is the overhead in terms\nof bytes to be transmitted across the network, compared to\npre-linked modules. There are a number of reasons for the\nextra overhead. First, ELF, as any dynamically relocatable\nfile format, includes the symbolic names of all referenced\nfunctions or variables that need to be linked at run-time. Second\n, and more important, the ELF format is designed to work\non 32-bit and 64-bit architectures. This causes all ELF data\nstructures to be defined with 32-bit data types. For 8-bit or\n16-bit targets the high 16 bits of these fields are unused.\nTo quantify the overhead of the ELF format we devise an\nalternative to the ELF object code format that we call CELF\n- Compact ELF. 
A CELF file contains the same information\nas an ELF file, but represented with 8 and 16-bit datatypes.\n18\nCELF files typically are half the size of the corresponding\nELF file. The Contiki dynamic loader is able to load CELF\nfiles and a utility program is used to convert ELF files to\nCELF files.\nIt is possible to further compress CELF files using lossless\ndata compression. However, we leave the investigation of the\nenergy-efficiency of this approach to future work.\nThe drawback of the CELF format is that it requires a\nspecial compressor utility is for creating the CELF files. This\nmakes the CELF format less attractive for use in many real-world\nsituations.\n4.3\nPosition Independent Code\nTo avoid performing the relocation step when loading a\nmodule, it is in some cases possible to compile the module\ninto position independent code. Position independent code is\na type of machine code which does not contain any absolute\naddresses to itself, but only relative references. This is the\napproach taken by the SOS system.\nTo generate position independent code compiler support\nis needed. Furthermore, not all CPU architectures support\nposition independent code and even when supported, programs\ncompiled to position independent code typically are\nsubject to size restrictions. For example, the AVR microcontroller\nsupports position independent code but restricts the\nsize of programs to 4 kilobytes. For the MSP430 no compiler\nis known to fully support position independent code.\nImplementation\nWe have implemented run-time dynamic linking of ELF\nand CELF files in the Contiki operating system [5]. To evaluate\ndynamic linking we have implemented an application\nspecific virtual machine for Contiki together with a compiler\nfor a subset of Java, and have ported a Java virtual machine\nto Contiki.\n5.1\nThe Contiki Operating System\nThe Contiki operating system was the first operating system\nfor memory-constrained sensor nodes to support dynamic\nrun-time loading of native code modules. Contiki is\nbuilt around an event-driven kernel and has very low memory\nrequirements.\nContiki applications run as extremely\nlightweight protothreads [6] that provide blocking operations\non top of the event-driven kernel at a very small memory\ncost. Contiki is designed to be highly portable and has been\nported to over ten different platforms with different CPU architectures\nand using different C compilers.\nLoaded program\n00000000000\n11111111111\n00000000000\n00000000000\n11111111111\n11111111111\n00000000000\n00000000000\n00000000000\n11111111111\n11111111111\n11111111111\n00000000000\n00000000000\n00000000000\n11111111111\n11111111111\n11111111111\nRAM\nCore\nLoaded program\nCore\nROM\nDevice drivers\nContiki kernel\nContiki kernel\nDynamic linker\nSymbol table\nLanguage run-time\nDevice drivers\nFigure 2. Partitioning in Contiki: the core and loadable\nprograms in RAM and ROM.\nA Contiki system is divided into two parts: the core and\nthe loadable programs as shown in Figure 2. The core consists\nof the Contiki kernel, device drivers, a set of standard\napplications, parts of the C language library, and a symbol\ntable. Loadable programs are loaded on top of the core and\ndo not modify the core.\nThe core has no information about the loadable programs,\nexcept for information that the loadable programs explicitly\nregister with the core. Loadable programs, on the other hand,\nhave full knowledge of the core and may freely call functions\nand access variables that reside in the core. 
Loadable\nprograms can call each other by going through the kernel.\nThe kernel dispatches calls from one loaded program to another\nby looking up the target program in an in-kernel list of\nactive processes. This one-way dependency makes it possible\nto load and unload programs at run-time without needing\nto patch the core and without the need for a reboot when a\nmodule has been loaded or unloaded.\nWhile it is possible to replace the core at run-time by running\na special loadable program that overwrites the current\ncore and reboots the system, experience has shown that this\nfeature is not often used in practice.\n5.2\nThe Symbol Table\nThe Contiki core contains a table of the symbolic names\nof all externally visible variable and function names in the\nContiki core and their corresponding addresses. The table\nincludes not only the Contiki system, but also the C language\nrun-time library. The symbol table is used by the dynamic\nlinker when linking loaded programs.\nThe symbol table is created when the Contiki core binary\nimage is compiled. Since the core must contain a correct\nsymbol table, and a correct symbol table cannot be created\nbefore the core exists, a three-step process is required to\ncompile a core with a correct symbol table. First, an intermediary\ncore image with an empty symbol table is compiled.\nFrom the intermediary core image an intermediary symbol\ntable is created. The intermediary symbol table contains the\ncorrect symbols of the final core image, but the addresses\nof the symbols are incorrect. Second, a second intermediary\ncore image that includes the intermediary symbol table\nis created. This core image now contains a symbol table of\nthe same size as the one in the final core image so the addresses\nof all symbols in the core are now as they will be\nin the final core image. The final symbol table is then created\nfrom the second intermediary core image. This symbol\ntable contains both the correct symbols and their correct addresses\n. Third, the final core image with the correct symbol\ntable is compiled.\nThe process of creating a core image is automated through\na simple make script. The symbol table is created using a\ncombination of standard ELF tools.\nFor a typical Contiki system the symbol table contains\naround 300 entries which amounts to approximately 4 kilobytes\nof data stored in flash ROM.\n5.3\nThe Dynamic Linker\nWe implemented a dynamic linker for Contiki that is designed\nto link, relocate, and load either standard ELF files [3]\nand CELF, Compact ELF, files. The dynamic linker reads\n19\nELF/CELF files through the Contiki virtual filesystem interface\n, CFS, which makes the dynamic linker unaware of the\nphysical location of the ELF/CELF file. Thus the linker can\noperate on files stored either in RAM, on-chip flash ROM,\nexternal EEPROM, or external ROM without modification.\nSince all file access to the ELF/CELF file is made through\nthe CFS, the dynamic linker does not need to concern itself\nwith low-level filesystem details such as wear-leveling\nor fragmentation [4] as this is better handled by the CFS.\nThe dynamic linker performs four steps to link, relocate\nand load an ELF/CELF file. The dynamic linker first parses\nthe ELF/CELF file and extracts relevant information about\nwhere in the ELF/CELF file the code, data, symbol table,\nand relocation entries are stored. Second, memory for the\ncode and data is allocated from flash ROM and RAM, respectively\n. 
Third, the code and data segments are linked and\nrelocated to their respective memory locations, and fourth,\nthe code is written to flash ROM and the data to RAM.\nCurrently, memory allocation for the loaded program is\ndone using a simple block allocation scheme. More sophisticated\nallocation schemes will be investigated in the future.\n5.3.1\nLinking and Relocating\nThe relocation information in an ELF/CELF file consists\nof a list of relocation entries. Each relocation entry corresponds\nto an instruction or address in the code or data in the\nmodule that needs to be updated with a new address. A relocation\nentry contains a pointer to a symbol, such as a variable\nname or a function name, a pointer to a place in the code or\ndata contained in the ELF/CELF file that needs to be updated\nwith the address of the symbol, and a relocation type\nwhich specifies how the data or code should be updated. The\nrelocation types are different depending on the CPU architecture\n. For the MSP430 there is only one single relocation\ntype, whereas the AVR has 19 different relocation types.\nThe dynamic linker processes a relocation entry at a time.\nFor each relocation entry, its symbol is looked up in the symbol\ntable in the core. If the symbol is found in the core's symbol\ntable, the address of the symbol is used to patch the code\nor data to which the relocation entry points. The code or data\nis patched in different ways depending on the relocation type\nand on the CPU architecture.\nIf the symbol in the relocation entry was not found in the\nsymbol table of the core, the symbol table of the ELF/CELF\nfile itself is searched. If the symbol is found, the address that\nthe symbol will have when the program has been loaded is\ncalculated, and the code or data is patched in the same way\nas if the symbol was found in the core symbol table.\nRelocation entries may also be relative to the data, BSS,\nor code segment in the ELF/CELF file. In that case no symbol\nis associated with the relocation entry. For such entries\nthe dynamic linker calculates the address that the segment\nwill have when the program has been loaded, and uses that\naddress to patch the code or data.\n5.3.2\nLoading\nWhen the linking and relocating is completed, the text and\ndata have been relocated to their final memory position. The\ntext segment is then written to flash ROM, at the location\nthat was previously allocated. The memory allocated for the\ndata and BSS segments are used as an intermediate storage\nfor transferring text segment data from the ELF/CELF file\nbefore it is written to flash ROM. Finally, the memory allocated\nfor the BSS segment is cleared, and the contents of the\ndata segment is copied from the ELF/CELF file.\n5.3.3\nExecuting the Loaded Program\nWhen the dynamic linker has successfully loaded the code\nand data segments, Contiki starts executing the program.\nThe loaded program may replace an already running Contiki\nservice. If the service that is to be replaced needs to pass\nstate to the newly loaded service, Contiki supports the allocation\nof an external memory buffer for this purpose. However\n, experience has shown that this mechanism has been\nvery scarcely used in practice and the mechanism is likely\nto be removed in future versions of Contiki.\n5.3.4\nPortability\nSince the ELF/CELF format is the same across different\nplatforms, we designed the Contiki dynamic linker to be easily\nportable to new platforms. The loader is split into one\narchitecture specific part and one generic part. 
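The following sketch illustrates how the generic relocation logic of Section 5.3.1 can hand off the architecture-dependent patching to the architecture specific part; all type and function names here are hypothetical and do not correspond to the actual Contiki interfaces:

struct relocation {
  const char *symbol;    /* NULL for segment-relative entries */
  unsigned int offset;   /* where in the segment to patch */
  int type;              /* CPU-specific relocation type */
};

/* Hypothetical lookup and patch primitives; the lookups return nonzero
   on success and store the resolved address in *addr. */
extern int core_symtab_lookup(const char *name, unsigned int *addr);
extern int module_symtab_lookup(const char *name, unsigned int *addr);
extern int arch_patch(unsigned char *location, int type, unsigned int addr);

int relocate_entry(struct relocation *rel, unsigned char *segment,
                   unsigned int segment_load_addr)
{
  unsigned int addr;

  if(rel->symbol != NULL) {
    /* Look the symbol up in the core symbol table first,
       then in the symbol table of the loaded file itself. */
    if(!core_symtab_lookup(rel->symbol, &addr) &&
       !module_symtab_lookup(rel->symbol, &addr)) {
      return -1; /* undefined symbol */
    }
  } else {
    /* The entry is relative to a segment in the file: use the address
       the segment will have once the program has been loaded. */
    addr = segment_load_addr;
  }

  /* How the patching is done depends on the relocation type and the CPU
     (one type on the MSP430, 19 on the AVR), so it is delegated to the
     architecture specific part. */
  return arch_patch(segment + rel->offset, rel->type, addr);
}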
The generic\npart parses the ELF/CELF file, finds the relevant sections of\nthe file, looks up symbols from the symbol table, and performs\nthe generic relocation logic. The architecture specific\npart does only three things: allocates ROM and RAM, writes\nthe linked and relocated binary to flash ROM, and understands\nthe relocation types in order to modify machine code\ninstructions that need adjustment because of relocation.\n5.3.5\nAlternative Designs\nThe Contiki core symbol table contains all externally visible\nsymbols in the Contiki core. Many of the symbols may\nnever need to be accessed by loadable programs, thus causing\nROM overhead. An alternative design would be to let the\nsymbol table include only a handful of symbols, entry points,\nthat define the only ways for an application program to interact\nwith the core. This would lead to a smaller symbol table,\nbut would also require a detailed specification of which entry\npoints that should be included in the symbol table. The main\nreason why we did not chose this design, however, is that\nwe wish to be able to replace modules at any level of the system\n. For this reason, we chose to provide the same amount of\nsymbols to an application program as it would have, would\nit have been compiled directly into the core. However, we\nare continuing to investigate this alternative design for future\nversions of the system.\n5.4\nThe Java Virtual Machine\nWe ported the Java virtual machine (JVM) from lejOS [8],\na small operating system originally developed for the Lego\nMindstorms. The Lego Mindstorms are equipped with an\nHitachi H8 microcontroller with 32 kilobytes of RAM available\nfor user programs such as the JVM. The lejOS JVM\nworks within this constrained memory while featuring pre-emptive\nthreads, recursion, synchronization and exceptions.\nThe Contiki port required changes to the RAM-only model\nof the lejOS JVM. To be able to run Java programs within the\n2 kilobytes of RAM available on our hardware platform, Java\nclasses needs to be stored in flash ROM rather than in RAM.\nThe Contiki port stores the class descriptions including bytecode\nin flash ROM memory. Static class data and class flags\nthat denote if classes have been initialized are stored in RAM\n20\nas well as object instances and execution stacks. The RAM\nrequirements for the Java part of typical sensor applications\nare a few hundred bytes.\nJava programs can call native code methods by declaring\nnative Java methods. The Java virtual machine dispatches\ncalls to native methods to native code. Any native function\nin Contiki may be called, including services that are part of\na loaded Contiki program.\n5.5\nCVM - the Contiki Virtual Machine\nWe designed the Contiki Virtual Machine, CVS, to be a\ncompromise between an application-specific and a generic\nvirtual machine. CVM can be configured for the application\nrunning on top of the machine by allowing functions to be\neither implemented as native code or as CVM code. To be\nable to run the same programs for the Java VM and for CVM,\nwe developed a compiler that compiles a subset of the Java\nlanguage to CVM bytecode.\nThe design of CVM is intentionally similar to other virtual\nmachines, including Mate [19], VM [18], and the Java\nvirtual machine. CVM is a stack-based machine with sepa-rated\ncode and data areas. The CVM instruction set contains\ninteger arithmetic, unconditional and conditional branches,\nand method invocation instructions. 
Method invocation can\nbe done in two ways, either by invocation of CVM bytecode\nfunctions, or by invocation of functions implemented in native\ncode. Invocation of native functions is done through a\nspecial instruction for calling native code. This instruction\ntakes one parameter, which identifies the native function that\nis to be called. The native function identifiers are defined at\ncompile time by the user that compiles a list of native functions\nthat the CVM program should be able to call. With the\nnative function interface, it is possible for a CVM program to\ncall any native functions provided by the underlying system,\nincluding services provided by loadable programs.\nNative functions in a CVM program are invoked like any\nother function. The CVM compiler uses the list of native\nfunctions to translate calls to such functions into the special\ninstruction for calling native code. Parameters are passed to\nnative functions through the CVM stack.\nEvaluation\nTo evaluate dynamic linking of native code we compare\nthe energy costs of transferring, linking, relocating, loading,\nand executing a native code module in ELF format using dynamic\nlinking with the energy costs of transferring, loading,\nand executing the same program compiled for the CVM and\nthe Java virtual machine. We devise a simple model of the\nenergy consumption of the reprogramming process. Thereafter\nwe experimentally quantify the energy and memory\nconsumption as well as the execution overhead for the reprogramming\n, the execution methods and the applications. We\nuse the results of the measurements as input into the model\nwhich enables us to perform a quantitative comparison of the\nenergy-efficiency of the reprogramming methods.\nWe use the ESB board [33] and the Telos Sky board [29]\nas our experimental platforms. The ESB is equipped with an\nMSP430 microcontroller with 2 kilobytes of RAM and 60\nkilobytes of flash ROM, an external 64 kilobyte EEPROM,\nas well as a set of sensors and a TR1001 radio transceiver.\nPROCESS_THREAD(test_blink, ev, data)\n{\nstatic struct etimer t;\nPROCESS_BEGIN();\netimer_set(&t, CLOCK_SECOND);\nwhile(1) {\nleds_on(LEDS_GREEN);\nPROCESS_WAIT_UNTIL(etimer_expired(&t));\netimer_reset(&t);\nleds_off(LEDS_GREEN);\nPROCESS_WAIT_UNTIL(etimer_expired(&t));\netimer_reset(&t);\n}\nPROCESS_END();\n}\nFigure 3.\nExample Contiki program that toggles the\nLEDs every second.\nThe Telos Sky is equipped with an MSP430 microcontroller\nwith 10 kilobytes of RAM and 48 kilobytes of flash ROM\ntogether with a CC2420 radio transceiver. We use the ESB to\nmeasure the energy of receiving, storing, linking, relocating,\nloading and executing loadable modules and the Telos Sky\nto measure the energy of receiving loadable modules.\nWe use three Contiki programs to measure the energy efficiency\nand execution overhead of our different approaches.\nBlinker, the first of the two programs, is shown in Figure 3.\nIt is a simple program that toggles the LEDs every second.\nThe second program, Object Tracker, is an object tracking\napplication based on abstract regions [35]. To allow running\nthe programs both as native code, as CVM code, and\nas Java code we have implemented these programs both in C\nand Java. A schematic illustration of the C implementation\nis in Figure 4. To support the object tracker program, we\nimplemented a subset of the abstract regions mechanism in\nContiki. The Java and CVM versions of the program call native\ncode versions of the abstract regions functions. 
The third program is a simple 8 by 8 vector convolution calculation.

PROCESS_THREAD(use_regions_process, ev, data)
{
  PROCESS_BEGIN();
  while(1) {
    value = pir_sensor.value();
    region_put(reading_key, value);
    region_put(reg_x_key, value * loc_x());
    region_put(reg_y_key, value * loc_y());
    if(value > threshold) {
      max = region_max(reading_key);
      if(max == value) {
        sum = region_sum(reading_key);
        sum_x = region_sum(reg_x_key);
        sum_y = region_sum(reg_y_key);
        centroid_x = sum_x / sum;
        centroid_y = sum_y / sum;
        send(centroid_x, centroid_y);
      }
    }
    etimer_set(&t, PERIODIC_DELAY);
    PROCESS_WAIT_UNTIL(etimer_expired(&t));
  }
  PROCESS_END();
}

Figure 4. Schematic implementation of an object tracker based on abstract regions.

6.1 Energy Consumption

We model the energy consumption E of the reprogramming process with

  E = E_p + E_s + E_l + E_f

where E_p is the energy spent in transferring the object over the network, E_s the energy cost of storing the object on the device, E_l the energy consumed by linking and relocating the object, and E_f the energy required for storing the linked program in flash ROM. We use a simplified model of the network propagation energy where we assume a propagation protocol in which the energy consumption E_p is proportional to the size of the object to be transferred. Formally,

  E_p = P_p * s_o

where s_o is the size of the object file to be transferred and P_p is a constant scale factor that depends on the network protocol used to transfer the object. We use similar equations for E_s (energy for storing the binary) and E_l (energy for linking and relocating). The equation for E_f (the energy for loading the binary to ROM) contains the size of the compiled code of the program instead of the size of the object file. This model is intentionally simple and we consider it good enough for our purpose of comparing the energy-efficiency of different reprogramming schemes.

6.1.1 Lower Bounds on Radio Reception Energy

We measured the energy consumption of receiving data over the radio for two different radio transceivers: the TR1001 [32], which is used on the ESB board, and the CC2420 [2], which conforms to the IEEE 802.15.4 standard [11] and is used on the Telos Sky board. The TR1001 provides a very low-level interface to the radio medium. The transceiver decodes data at the bit level and transmits the bits in real-time to the CPU. Start bit detection, framing, MAC layer, checksums, and all protocol processing must be done in software running on the CPU. In contrast, the interface provided by the CC2420 is at a higher level. Start bits, framing, and parts of the MAC protocol are handled by the transceiver. The software driver handles incoming and outgoing data on the packet level.

Since the TR1001 operates at the bit level, the communication speed of the TR1001 is determined by the CPU. We use a data rate of 9600 bits per second. The CC2420 has a data rate of 250 kilobits per second, but also incurs some protocol overhead as it provides a more high-level interface.

Figure 5 shows the current draw from receiving 1000 bytes of data with the TR1001 and CC2420 radio transceivers. These measurements constitute a lower bound on the energy consumption for receiving data over the radio, as they do not include any control overhead caused by a code propagation protocol. Nor do they include any packet headers.
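Before turning to the measured numbers, the following sketch shows how the terms of the model above combine. The propagation constant below uses the CC2420 per-byte lower bound and the Deluge overhead factor reported in this section; the storage, linking and flash constants are placeholders of our own, not measured values.

/* Sketch of the reprogramming energy model E = E_p + E_s + E_l + E_f.
 * Each term is modelled as a per-byte cost times a size.  P_P combines
 * the CC2420 lower bound (0.0048 mJ/byte) with the Deluge overhead
 * factor (3.35); the other factors are illustrative assumptions. */
#include <stdio.h>

#define P_P (0.0048 * 3.35)  /* network propagation, mJ per object byte        */
#define P_S 0.001            /* EEPROM storage, mJ per object byte (assumed)   */
#define P_L 0.001            /* link + relocate, mJ per object byte (assumed)  */
#define P_F 0.005            /* flash write, mJ per byte of linked code (assumed) */

static double reprogramming_energy(double obj_bytes, double code_bytes) {
  double e_p = P_P * obj_bytes;   /* transfer the object file             */
  double e_s = P_S * obj_bytes;   /* store it in EEPROM                   */
  double e_l = P_L * obj_bytes;   /* link and relocate it                 */
  double e_f = P_F * code_bytes;  /* write only the compiled code to ROM  */
  return e_p + e_s + e_l + e_f;
}

int main(void) {
  /* Blinker: 1056-byte ELF file containing 130 bytes of code. */
  printf("E = %.1f mJ\n", reprogramming_energy(1056, 130));
  return 0;
}

With the measured per-step energies substituted for the assumed per-byte factors, this is the same bookkeeping that the quantitative comparison later in this section performs.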
An actual propagation protocol would incur overhead because of both packet headers and control traffic. For example, the Deluge protocol has a control packet overhead of approximately 20% [14]. This overhead is derived from the total number of control packets and the total number of data packets in a sensor network. The average overhead in terms of the number of excess data packets received is 3.35 [14]. In addition to the code propagation protocol overhead, there is also overhead from the MAC layer, both in terms of packet headers and control traffic.

The TR1001 provides a low-level interface to the CPU, which enabled us to measure only the current draw of the receiver. We first measured the time required for receiving one byte of data from the radio. To produce the graph in the figure, we measured the current draw of an ESB board which we had programmed to turn on receive mode and busy-wait for the time corresponding to the reception time of 1000 bytes.

When measuring the reception current draw of the CC2420, we could not measure the time required for receiving one byte because the CC2420 does not provide an interface at the bit level. Instead, we used two Telos Sky boards and programmed one to continuously send back-to-back packets with 100 bytes of data. We programmed the other board to turn on receive mode when the on-board button was pressed. The receiver would receive 1000 bytes of data, corresponding to 10 packets, before turning the receiver off. We placed the two boards next to each other on a table to avoid packet drops. We produced the graph in Figure 5 by measuring the current draw of the receiver Telos Sky board. To ensure that we did not get spurious packet drops, we repeated the measurement five times without obtaining differing results.

Figure 5. Current draw for receiving 1000 bytes with the TR1001 and CC2420, respectively.

Table 2 shows the lower bounds on the time and energy consumption for receiving data with the TR1001 and CC2420 transceivers. The results show that while the current draw of the CC2420 is higher than that of the TR1001, the energy efficiency in terms of energy per byte of the CC2420 is better because of the shorter time required to receive the data.

Table 2. Lower bounds on the time and energy consumption for receiving 1000 bytes with the TR1001 and CC2420 transceivers. All values are rounded to two significant digits.

  Transceiver   Time (s)   Energy (mJ)   Time per byte (s)   Energy per byte (mJ)
  TR1001        0.83       21            0.0008              0.021
  CC2420        0.060      4.8           0.00006             0.0048

6.1.2 Energy Consumption of Dynamic Linking

To evaluate the energy consumption of dynamic linking, we measure the energy required for the Contiki dynamic linker to link and load two Contiki programs. Normally, Contiki loads programs from the radio network, but to avoid measuring any unrelated radio or network effects, we stored the loadable object files in flash ROM before running the experiments. The loadable objects were stored as ELF files from which all debugging information and symbols that were not needed for run-time linking were removed. At boot-up,
one ELF file was copied into an on-board EEPROM, from where the Contiki dynamic linker linked and relocated the ELF file before it loaded the program into flash ROM.

Figure 6. Current draw for writing the Blinker ELF file to EEPROM (0 - 0.166 s), linking and relocating the program (0.166 - 0.418 s), writing the resulting code to flash ROM (0.418 - 0.488 s), and executing the binary (0.488 s and onward). The current spikes delimit the three steps and are intentionally caused by blinking on-board LEDs. The high energy consumption when executing the binary is caused by the green LED.

Figure 6 shows the current draw when loading the Blinker program, and Figure 7 shows the current draw when loading the Object Tracker program. The current spikes seen in both graphs are intentionally caused by blinking the on-board LEDs. The spikes delimit the four different steps that the loader goes through: copying the ELF object file to EEPROM, linking and relocating the object code, copying the linked code to flash ROM, and finally executing the loaded program. The current draw of the green LED is slightly above 8 mA, which causes the high current draw when executing the Blinker program (Figure 6). Similarly, when the object tracking application starts, it turns on the radio for neighbor discovery. This causes the current draw to rise to around 6 mA in Figure 7, and matches the radio current measurements in Figure 5.

Figure 7. Current draw for writing the Object Tracker ELF file to EEPROM (0 - 0.282 s), linking and relocating the program (0.282 - 0.882 s), writing the resulting code to flash ROM (0.882 - 0.988 s), and executing the binary (0.988 s and onward). The current spikes delimit the three steps and are intentionally caused by blinking on-board LEDs. The high current draw when executing the binary comes from the radio being turned on.

Table 3 shows the energy consumption of loading and linking the Blinker program. The energy was obtained by integrating the curve from Figure 6 and multiplying it by the voltage used in our experiments (4.5 V). We see that the linking and relocation step is the most expensive in terms of energy. It is also the longest step.

To evaluate the energy overhead of the ELF file format, we compare the energy consumption for receiving four different Contiki programs using the ELF and CELF formats. In addition to the two programs from Figures 3 and 4, we include the code for the Contiki code propagation mechanism and a network publish/subscribe program that performs periodic flooding and converging of information. The two latter programs are significantly larger. We calculate an estimate of the required energy for receiving the files by using the measured energy consumption of the CC2420 radio transceiver and multiplying it by the average overhead of the Deluge code propagation protocol, 3.35 [14]. The results are listed in Table 4 and show that radio reception is more energy consuming than linking and loading a program, even for a small program.
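The integration step mentioned for Table 3 is simple enough to sketch: given a sampled current-draw trace, the energy is the time integral of the current multiplied by the supply voltage (4.5 V in our setup). The sample trace below is made up purely for illustration.

/* Sketch of how the energies in Table 3 can be obtained from a sampled
 * current-draw trace: integrate current over time (trapezoidal rule)
 * and multiply by the supply voltage.  The trace values are invented. */
#include <stdio.h>

#define VOLTAGE 4.5  /* volts */

/* Returns energy in millijoules for current samples given in mA,
 * taken at uniform intervals of dt seconds. */
static double energy_mj(const double *current_ma, int n, double dt) {
  double charge_mc = 0.0;               /* accumulated charge, mA*s = mC */
  for (int i = 1; i < n; i++)
    charge_mc += 0.5 * (current_ma[i - 1] + current_ma[i]) * dt;
  return charge_mc * VOLTAGE;           /* mC * V = mJ */
}

int main(void) {
  /* e.g. 6 samples, 50 ms apart */
  double trace[] = { 1.0, 1.1, 1.1, 1.0, 1.1, 1.0 };
  printf("%.2f mJ\n", energy_mj(trace, 6, 0.05));
  return 0;
}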
Furthermore, the results show that the relative average size and energy overhead for ELF files compared to the code and data contained in the files is approximately 4, whereas the relative CELF overhead is just under 2.

Table 4. The overhead of the ELF and CELF file formats in terms of bytes and estimated reception energy for four Contiki programs. The reception energy is the lower bound of the radio reception energy with the CC2420 chip, multiplied by the average Deluge overhead (3.35).

  Program           Code size   Data size   ELF file size   ELF size overhead   ELF reception energy (mJ)   CELF file size   CELF size overhead   CELF reception energy (mJ)
  Blinker           130         14          1056            7.3                 17                          361              2.5                  5.9
  Object tracker    344         22          1668            5.0                 29                          758              2.0                  12
  Code propagator   2184        10          5696            2.6                 92                          3686             1.7                  59
  Flood/converge    4298        42          8456            1.9                 136                         5399             1.2                  87

Table 3. Measured energy consumption of the storing, linking and loading of the 1056 bytes large Blinker binary and the 1824 bytes large Object Tracker binary. The size of the Blinker code is 130 bytes and the size of the Object Tracker code is 344 bytes.

  Step             Blinker time (s)   Blinker energy (mJ)   Obj. Tr. time (s)   Obj. Tr. energy (mJ)
  Wrt. EEPROM      0.164              1.1                   0.282               1.9
  Link & reloc     0.252              1.2                   0.600               2.9
  Wrt. flash ROM   0.070              0.62                  0.106               0.76
  Total            0.486              2.9                   0.988               5.5

6.2 Memory Consumption

Memory consumption is an important metric for sensor nodes since memory is a scarce resource on most sensor node platforms. The ESB nodes feature only 2 KB RAM and 60 KB ROM, while Mica2 motes provide 128 KB of program memory and 4 KB of RAM. The less memory required for reprogramming, the more is left for applications and support for other important tasks such as security, which may also require a large part of the available memory [28].

Table 5 lists the memory requirements of the static linker, the dynamic linker and loader, the CVM and the Java VM. The dynamic linker needs to keep a table of all core symbols in the system. For a complete Contiki system with process management, networking, the dynamic loader, memory allocation, Contiki libraries, and parts of the standard C library, the symbol table requires about 4 kilobytes of ROM. This is included in the ROM size for the dynamic linker.

Table 5. Memory requirements, in bytes. The ROM size for the dynamic linker includes the symbol table. The RAM figures do not include memory for programs running on top of the virtual machines.

  Module                   ROM     RAM
  Static loader            670     0
  Dynamic linker, loader   5694    18
  CVM                      1344    8
  Java VM                  13284   59

6.3 Execution Overhead

To measure the execution overhead of the application-specific virtual machine and the Java virtual machine, we implemented the object tracking program in Figure 4 in C and Java. We compiled the Java code to CVM code and Java bytecode. We ran the compiled code on the MSP430-equipped ESB board. The native C code was compiled with the MSP430 port of GCC version 3.2.3. The MSP430 digitally-controlled oscillator was set to clock the CPU at a speed of 2.4576 MHz.

Table 6. Execution times and energy consumption of one iteration of the tracking program.

  Execution type   Execution time (ms)   Energy (mJ)
  Native           0.479                 0.00054
  CVM              0.845                 0.00095
  Java VM          1.79                  0.0020

Table 7. Execution times and energy consumption of the 8 by 8 vector convolution.

  Execution type   Execution time (ms)   Energy (mJ)
  Native           0.67                  0.00075
  CVM              58.52                 0.065
  Java VM          65.6                  0.073
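For reference, an 8-by-8 vector convolution of the kind benchmarked in Table 7 can be written as the short C routine below. This is our own illustrative version; the benchmark code used for the measurements may differ in detail.

/* Illustrative C version of an 8-by-8 vector convolution of the kind
 * benchmarked in Table 7; a sketch, not the measured benchmark code. */
#include <stdio.h>

#define N 8

/* Full (linear) convolution of two length-N vectors: 2N-1 outputs. */
static void convolve(const int a[N], const int b[N], int out[2 * N - 1]) {
  for (int k = 0; k < 2 * N - 1; k++) {
    out[k] = 0;
    for (int i = 0; i < N; i++) {
      int j = k - i;
      if (j >= 0 && j < N)
        out[k] += a[i] * b[j];
    }
  }
}

int main(void) {
  int a[N] = { 1, 2, 3, 4, 5, 6, 7, 8 };
  int b[N] = { 1, 1, 1, 1, 1, 1, 1, 1 };
  int out[2 * N - 1];
  convolve(a, b, out);
  for (int k = 0; k < 2 * N - 1; k++)
    printf("%d ", out[k]);
  printf("\n");
  return 0;
}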
We measured the execution time of the three implementations using the on-chip timer A1, which was set to generate a timer interrupt 1000 times per second. The execution times are averaged over 5000 iterations of the object tracking program.

The results in Table 6 show the execution time of one run of the object tracking application from Figure 4. The energy consumption is calculated by multiplying the execution time with the average power draw when a program is running with the radio turned off. The table shows that the overhead of the Java virtual machine is higher than that of the CVM, which in turn is higher than the execution overhead of the native C code.

All three implementations of the tracker program use the same abstract regions library, which is compiled as native code. Thus much of the execution time in the Java VM and CVM implementations of the object tracking program is spent executing the native code in the abstract regions library. Essentially, the virtual machine simply acts as a dispatcher of calls to various native functions. For programs that spend a significant part of their time executing virtual machine code, the relative execution times are significantly higher for the virtual machine programs. To illustrate this, Table 7 lists the execution times of a convolution operation of two vectors of length 8. Convolution is a common operation in digital signal processing, where it is used for algorithms such as filtering or edge detection. We see that the execution time of the program running on the virtual machines is close to ten times that of the native program.

Table 8. Comparison of the energy consumption of reprogramming the Blinker application using dynamic linking with an ELF file and using full image replacement.

  Step             Dynamic linking (mJ)   Full image replacement (mJ)
  Receiving        17                     330
  Wrt. EEPROM      1.1                    22
  Link & reloc     1.4                    -
  Wrt. flash ROM   0.45                   72
  Total            20                     424

Table 9. Comparison of the energy consumption in mJ of reprogramming for the object tracking application using the four different methods.

  Step             ELF    CELF   CVM   Java
  Size (bytes)     1824   968    123   1356
  Receiving        29     12     2.0   22
  Wrt. EEPROM      1.9    0.80   -     -
  Link & reloc     2.5    2.5    -     -
  Wrt. flash ROM   1.2    1.2    -     4.7
  Total            35     16.5   2.0   26.7

6.4 Quantitative Comparison

Using our model from Section 6.1 and the results from the above measurements, we can calculate approximations of the energy consumption for distribution, reprogramming, and execution of native and virtual machine programs in order to compare the methods with each other. We set P_p, the scale factor of the energy consumption for receiving an object file, to the average Deluge overhead of 3.35.

6.4.1 Dynamic Linking vs Full Image Replacement

We first compare the energy costs for the two native code reprogramming models: dynamic linking and full image replacement. Table 8 shows the results for the energy consumption of reprogramming the Blinker application. The size of the Blinker application including the operating system is 20 KB, which is about 20 times the size of the Blinker application itself.
Even though no linking needs to be performed during a full image replacement, replacing the whole image is about 20 times more expensive than a modular update using the dynamic linker.

6.4.2 Dynamic Linking vs Virtual Machines

We use the tracking application to compare reprogramming using the Contiki dynamic linker with code updates for the CVM and the Java virtual machine. CVM programs are typically very small and are not stored in EEPROM, nor are they linked or written to flash. Java uncompressed class files are loaded into flash ROM before they are executed. Table 9 shows the sizes of the corresponding binaries and the energy consumption of each reprogramming step.

As expected, the process of updating sensor nodes with native code is less energy-efficient than updating with a virtual machine. Also, as shown in Table 6, executing native code is more energy-efficient than executing code for the virtual machines.

By combining the results in Table 6 and Table 9, we can compute break-even points for how often we can execute native code as opposed to virtual machine code for the same energy consumption. That is, after how many program iterations do the cheaper execution costs outweigh the more expensive code updates.

Figure 8. Break-even points for the object tracking program implemented with four different linking and execution methods.

Figure 9. Break-even points for the vector convolution implemented with four different linking and execution methods.

Figure 8 shows the modeled energy consumption for executing the Object Tracking program using native code loaded with an ELF object file, native code loaded with a CELF object file, CVM code, and Java code. We see that the Java virtual machine is expensive in terms of energy and will always require more energy than native code loaded with a CELF file. For native code loaded with an ELF file, the energy overhead due to receiving the file makes the Java virtual machine more energy efficient until the program is repeated a few thousand times. Due to the small size of the CVM code it is very energy efficient for small numbers of program iterations. It takes about 40000 iterations of the program before the interpretation overhead outweighs the linking and loading overhead of the same program running as native code and loaded as a CELF file. If the native program was loaded with an ELF file, however, the CVM program needs to run approximately 80000 iterations before the energy costs are the same. At the break-even point, the energy consumption is only about one fifth of the energy consumption for loading the Blinker program using full image replacement, as shown in Table 8.

In contrast with Figure 8, Figure 9 contains the break-even points from the vector convolution in Table 7. We assume that the convolution algorithm is part of a program with the same size as in Figure 8, so that the energy consumption for reprogramming is the same. In this case the break-even points are drastically lower than in Figure 8. Here the native code loaded with an ELF file outperforms the Java implementation already at 100 iterations.
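The break-even points in Figures 8 and 9 follow directly from the numbers in Tables 6 and 9: the total energy after n iterations is the one-time update cost plus n times the per-iteration cost, and two methods break even where these lines cross. The sketch below is our own arithmetic, not code from the evaluation; it lands near the roughly 40000 and 80000 iteration figures quoted above.

/* Sketch of how the break-even points in Figure 8 follow from the
 * measured numbers: E(n) = E_update + n * E_iteration, and two methods
 * break even where the lines cross.  Values in mJ from Tables 6 and 9. */
#include <stdio.h>

struct method { const char *name; double e_update, e_iter; };

static double break_even(struct method cheap_update, struct method cheap_iter) {
  /* iterations after which the cheaper-to-run method catches up */
  return (cheap_iter.e_update - cheap_update.e_update) /
         (cheap_update.e_iter - cheap_iter.e_iter);
}

int main(void) {
  struct method elf  = { "native/ELF",  35.0, 0.00054 };
  struct method celf = { "native/CELF", 16.5, 0.00054 };
  struct method cvm  = { "CVM",          2.0, 0.00095 };

  printf("CVM vs CELF: %.0f iterations\n", break_even(cvm, celf));
  printf("CVM vs ELF:  %.0f iterations\n", break_even(cvm, elf));
  return 0;
}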
The CVM implementation has spent as much energy as the native ELF implementation after 500 iterations.

6.5 Scenario Suitability

We can now apply our results to the software update scenarios discussed in Section 2. In a scenario with frequent code updates, such as the dynamic application scenario or during software development, a low loading overhead is preferable. From Figure 8 we see that both an application-specific virtual machine and a Java machine may be good choices. Depending on the type of application, it may be beneficial to run the program on top of a more flexible virtual machine such as the Java machine. The price for such a decision is a higher energy overhead.

In scenarios where the update frequency is low, e.g. when fixing bugs in installed software or when reconfiguring an installed application, the higher price for dynamic linking may be worth paying. If the program is continuously run for a long time, the energy savings of being able to use native code outweigh the energy cost of the linking process. Furthermore, with a virtual machine it may not be possible to make changes to all levels of the system. For example, a bug in a low-level driver can usually only be fixed by installing new native code. Moreover, programs that are computationally heavy benefit from being implemented as native code, as native code has lower energy consumption than virtual machine code.

The results from Figures 8 and 9 suggest that a combination of virtual machine code and native code can be energy efficient. For many situations this may be a viable alternative to running only native code or only virtual machine code.

6.6 Portability

Because of the diversity of sensor network platforms, the Contiki dynamic linker is designed to be portable between different microcontrollers. The dynamic linker is divided into two modules: a generic part that parses and analyzes the ELF/CELF file that is to be loaded, and a microcontroller-specific part that allocates memory for the program to be loaded, performs code and data relocation, and writes the linked program into memory.

To evaluate the portability of our design we have ported the dynamic linker to two different microcontrollers: the TI MSP430 and the Atmel AVR. The TI MSP430 is used in several sensor network platforms, including the Telos Sky and the ESB. The Atmel AVR is used in the Mica2 motes.

Table 10 shows the number of lines of code needed to implement each module. The dramatic difference between the MSP430-specific module and the AVR-specific module is due to the different addressing modes used by the machine code of the two microcontrollers. While the MSP430 has only one addressing mode, the AVR has 19 different addressing modes. Each addressing mode must be handled differently by the relocation function, which leads to a larger amount of code for the AVR-specific module.

Table 10. Number of lines of code for the dynamic linker and the microcontroller-specific parts.

  Module            Lines of code, total   Lines of code, relocation function
  Generic linker    292                    -
  MSP430-specific   45                     8
  AVR-specific      143                    104

Discussion

Standard file formats. Our main motivation behind choosing the ELF format for dynamic linking in Contiki was that the ELF format is a standard file format. Many compilers and utilities, including all GCC utilities, are able to produce and handle ELF files. Hence no special software is needed to compile and upload new programs into a network of Contiki nodes.
In contrast, FlexCup [27] or diff-based\napproaches require the usage of specially crafted utilities to\nproduce meta data or diff scripts required for uploading software\n. These special utilities also need to be maintained and\nported to the full range of development platforms used for\nsoftware development for the system.\nOperating system support. Dynamic linking of ELF\nfiles requires support from the underlying operating system\nand cannot be done on monolithic operating systems such as\nTinyOS. This is a disadvantage of our approach. For monolithic\noperating systems, an approach such as FlexCup is better\nsuited.\nHeterogeneity. With diff-based approaches a binary diff\nis created either at a base station or by an outside server. The\nserver must have knowledge of the exact software configuration\nof the sensor nodes on which the diff script is to be\nrun. If sensor nodes are running different versions of their\nsoftware, diff-based approaches do not scale.\nSpecifically, in many of our development networks we\nhave witnessed a form of micro heterogeneity in the software\nconfiguration. Many sensor nodes, which have been\nrunning the exact same version of the Contiki operating system\n, have had small differences in the address of functions\nand variables in the core. This micro heterogeneity comes\nfrom the different core images being compiled by different\ndevelopers, each having slightly different versions of the C\ncompiler, the C library and the linker utilities. This results\nin small variations of the operating system image depending\non which developer compiled the operating system image.\nWith diff-based approaches micro heterogeneity poses a big\nproblem, as the base station would have to be aware of all\nthe small differences between each node.\nCombination of native and virtual machine code. Our\nresults suggest that a combination of native and virtual machine\ncode is an energy efficient alternative to pure native\ncode or pure virtual machine code approaches. The dynamic\nlinking mechanism can be used to load the native code that\nis used by the virtual machine code by the native code interfaces\nin the virtual machines.\nRelated Work\nBecause of the importance of dynamic reprogramming of\nwireless sensor networks there has been a lot of effort in the\narea of software updates for sensor nodes both in the form\nof system support for software updates and execution environments\nthat directly impact the type and size of updates as\nwell as distribution protocols for software updates.\n26\nMainwaring et al. [26] also identified the trade-off between\nusing virtual machine code that is more expensive to\nrun but enables more energy-efficient updates and running\nnative code that executes more efficiently but requires more\ncostly updates. This trade-off has been further discussed by\nLevis and Culler [19] who implemented the Mate virtual machine\ndesigned to both simplify programming and to leverage\nenergy-efficient large-scale software updates in sensor\nnetworks. Mate is implemented on top of TinyOS.\nLevis and Culler later enhanced Mate by application specific\nvirtual machines (ASVMs) [20]. They address the main\nlimitations of Mate: flexibility, concurrency and propagation\n. Whereas Mate was designed for a single application\ndomain only, ASVM supports a wide range of application\ndomains. 
Further, instead of relying on broadcasts for code\npropagation as Mate, ASVM uses the trickle algorithm [21].\nThe MagnetOS [23] system uses the Java virtual machine\nto distribute applications across an ad hoc network\nof laptops. In MagnetOS, Java applications are partitioned\ninto distributed components. The components transparently\ncommunicate by raising events. Unlike Mate and Contiki,\nMagnetOS targets larger platforms than sensor nodes such\nas PocketPC devices.\nSensorWare [1] is another script-based\nproposal for programming nodes that targets larger\nplatforms. VM* is a framework for runtime environments\nfor sensor networks [18]. Using this framework Koshy and\nPandey have implemented a subset of the Java Virtual Machine\nthat enables programmers to write applications in Java,\nand access sensing devices and I/O through native interfaces.\nMobile agent-based approaches extend the notion of injected\nscripts by deploying dynamic, localized and intelligent\nmobile agents. Using mobile agents, Fok et al. have\nbuilt the Agilla platform that enables continuous reprogramming\nby injecting new agents into the network [9].\nTinyOS uses a special description language for composing\na system of smaller components [10] which are statically\nlinked with the kernel to a complete image of the system.\nAfter linking, modifying the system is not possible [19] and\nhence TinyOS requires the whole image to be updated even\nfor small code changes.\nSystems that offer loadable modules besides Contiki include\nSOS [12] and Impala [24]. Impala features an application\nupdater that enables software updates to be performed\nby linking in updated modules. Updates in Impala\nare coarse-grained since cross-references between different\nmodules are not possible. Also, the software updater in\nImpala was only implemented for much more resource-rich\nhardware than our target devices. The design of SOS [12]\nis very similar to the Contiki system: SOS consists of a\nsmall kernel and dynamically-loaded modules. However,\nSOS uses position independent code to achieve relocation\nand jump tables for application programs to access the operating\nsystem kernel. Application programs can register\nfunction pointers with the operating system for performing\ninter-process communication. Position independent code is\nnot available for all platforms, however, which limits the ap-plicability\nof this approach.\nFlexCup [27] enables run-time installation of software\ncomponents in TinyOS and thus solves the problem that\na full image replacement is required for reprogramming\nTinyOS applications. In contrast to our ELF-based solution,\nFlexCup uses a non-standard format and is less portable.\nFurther, FlexCup requires a reboot after a program has been\ninstalled, requiring an external mechanism to save and restore\nthe state of all other applications as well as the state of\nrunning network protocols across the reboot. Contiki does\nnot need to be rebooted after a program has been installed.\nFlexCup also requires a complete duplicate image of the\nbinary image of the system to be stored in external flash\nROM. 
The copy of the system image is used for constructing\na new system image when a new program has been loaded.\nIn contrast, the Contiki dynamic linker does not alter the core\nimage when programs are loaded and therefore no external\ncopy of the core image is needed.\nSince the energy consumption of distributing code in sensor\nnetworks increases with the size of the code to be distributed\nseveral attempts have been made to reduce the size\nof the code to be distributed. Reijers and Langendoen [31]\nproduce an edit script based on the difference between the\nmodified and original executable. After various optimiza-tions\nincluding architecture-dependent ones, the script is distributed\n. A similar approach has been developed by Jeong\nand Culler [15] who use the rsync algorithm to generate the\ndifference between modified and original executable. Koshy\nand Pandey's diff-based approach [17] reduces the amount\nof flash rewriting by modifying the linking procedure so that\nfunctions that are not changed are not shifted.\nXNP [16] was the previous default reprogramming mechanism\nin TinyOS which is used by the multi-hop reprogramming\nscheme MOAP (Multihop Over-the-Air Programming)\ndeveloped to distribute node images in the sensor network.\nMOAP distributes data to a selective number of nodes on\na neighbourhood-by-neighbourhood basis that avoids flooding\n[34]. In Trickle [21] virtual machine code is distributed\nto a network of nodes. While Trickle is restricted to single\npacket dissemination, Deluge adds support for the dissemination\nof large data objects [14].\nConclusions\nWe have presented a highly portable dynamic linker and\nloader that uses the standard ELF file format and compared\nthe energy-efficiency of run-time dynamic linking with an\napplication specific virtual machine and a Java virtual machine\n. We show that dynamic linking is feasible even for\nconstrained sensor nodes.\nOur results also suggest that a combination of native and\nvirtual machine code provide an energy efficient alternative\nto pure native code or pure virtual machine approaches. The\nnative code that is called from the virtual machine code can\nbe updated using the dynamic linker, even in heterogeneous\nsystems.\nAcknowledgments\nThis work was partly financed by VINNOVA, the\nSwedish Agency for Innovation Systems, and the European\nCommission under contract IST-004536-RUNES. Thanks to\nour paper shepherd Feng Zhao for reading and commenting\non the paper.\n27\n\nReferences\n[1] A. Boulis, C. Han, and M. B. Srivastava. Design and implementation\nof a framework for efficient and programmable sensor networks. In\nProceedings of The First International Conference on Mobile Systems,\nApplications, and Services (MOBISYS `03), May 2003.\n[2] Chipcon\nAS.\nCC2420\nDatasheet\n(rev.\n1.3),\n2005.\nhttp://www.chipcon.com/\n[3] TIS Committee. Tool Interface Standard (TIS) Executable and Linking\nFormat (ELF) Specification Version 1.2, May 1995.\n[4] H. Dai, M. Neufeld, and R. Han. Elf: an efficient log-structured flash\nfile system for micro sensor nodes. In SenSys, pages 176187, 2004.\n[5] A. Dunkels, B. Gronvall, and T. Voigt. Contiki - a lightweight and\nflexible operating system for tiny networked sensors. In Proceedings\nof the First IEEE Workshop on Embedded Networked Sensors, Tampa,\nFlorida, USA, November 2004.\n[6] A. Dunkels, O. Schmidt, T. Voigt, and M. Ali. Protothreads: Simplifying\nevent-driven programming of memory-constrained embedded\nsystems. 
In Proceedings of the 4th International Conference on Embedded\nNetworked Sensor Systems, SenSys 2006, Boulder, Colorado,\nUSA, 2006.\n[7] D. Estrin (editor). Embedded everywhere: A research agenda for networked\nsystems of embedded computers. National Academy Press, 1st\nedition, October 2001. ISBN: 0309075688\n[8] G. Ferrari, J. Stuber, A. Gombos, and D. Laverde, editors. Programming\nLego Mindstorms with Java with CD-ROM. Syngress Publishing,\n2002. ISBN: 1928994555\n[9] C. Fok, G. Roman, and C. Lu. Rapid development and flexible deploy-ment\nof adaptive wireless sensor network applications. In Proceedings\nof the 24th International Conference on Distributed Computing Systems\n, June 2005.\n[10] D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer, and D. Culler.\nThe nesC language: A holistic approach to networked embedded systems\n. In Proceedings of the ACM SIGPLAN 2003 conference on Programming\nlanguage design and implementation, pages 111, 2003.\n[11] J. A. Gutierrez, M. Naeve, E. Callaway, M. Bourgeois, V. Mitter, and\nB. Heile. IEEE 802.15.4: A developing standard for low-power low-cost\nwireless personal area networks. IEEE Network, 15(5):1219,\nSeptember/October 2001.\n[12] C. Han, R. K. Rengaswamy, R. Shea, E. Kohler, and M. Srivastava.\nSos: A dynamic operating system for sensor networks. In MobiSYS\n'05: Proceedings of the 3rd international conference on Mobile systems\n, applications, and services. ACM Press, 2005.\n[13] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister. System\narchitecture directions for networked sensors. In Proceedings of\nthe 9th International Conference on Architectural Support for Programming\nLanguages and Operating Systems, November 2000.\n[14] J. W. Hui and D. Culler. The dynamic behavior of a data dissemination\nprotocol for network programming at scale. In Proc. SenSys'04,\nBaltimore, Maryland, USA, November 2004.\n[15] J. Jeong and D. Culler. Incremental network programming for wireless\nsensors. In Proceedings of the First IEEE Communications Society\nConference on Sensor and Ad Hoc Communications and Networks\nIEEE SECON (2004), October 2004.\n[16] J.\nJeong,\nS.\nKim,\nand\nA.\nBroad.\nNetwork\nreprogramming\n.\nTinyOS\ndocumentation,\n2003.\nVisited\n2006-04-06.\nhttp://www.tinyos.net/tinyos-1\n.x/doc/NetworkReprogramming.pdf\n[17] J. Koshy and R. Pandey.\nRemote incremental linking for energy-efficient\nreprogramming of sensor networks. In Proceedings of the\nsecond European Workshop on Wireless Sensor Networks, 2005.\n[18] J. Koshy and R. Pandey. Vm*: Synthesizing scalable runtime environments\nfor sensor networks. In Proc. SenSys'05, San Diego, CA,\nUSA, November 2005.\n[19] P. Levis and D. Culler. Mate: A tiny virtual machine for sensor networks\n. In Proceedings of ASPLOS-X, San Jose, CA, USA, October\n2002.\n[20] P. Levis, D. Gay, and D Culler. Active sensor networks. In Proc.\nUSENIX/ACM NSDI'05, Boston, MA, USA, May 2005.\n[21] P. Levis, N. Patel, D. Culler, and S. Shenker. Trickle: A self-regulating\nalgorithm for code propagation and maintenance in wireless sensor\nnetworks. In Proc. NSDI'04, March 2004.\n[22] J. Lilius and I. Paltor.\nDeeply embedded python, a virtual\nmachine for embedded systems.\nWeb page.\n2006-04-06.\nhttp://www.tucs.fi/magazin/output.php?ID=2000.N2.LilDeEmPy\n[23] H. Liu, T. Roeder, K. Walsh, R. Barr, and E. Gun Sirer. Design and\nimplementation of a single system image operating system for ad hoc\nnetworks. In MobiSys, pages 149162, 2005.\n[24] T. Liu, C. Sadler, P. Zhang, and M. 
Martonosi. Implementing software on resource-constrained mobile sensors: Experiences with Impala and ZebraNet. In Proc. Second Intl. Conference on Mobile Systems, Applications and Services (MOBISYS 2004), June 2004.
[25] G. Mainland, L. Kang, S. Lahaie, D. C. Parkes, and M. Welsh. Using virtual markets to program global behavior in sensor networks. In Proceedings of the 2004 SIGOPS European Workshop, Leuven, Belgium, September 2004.
[26] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson. Wireless sensor networks for habitat monitoring. In First ACM Workshop on Wireless Sensor Networks and Applications (WSNA 2002), Atlanta, GA, USA, September 2002.
[27] P. Jose Marron, M. Gauger, A. Lachenmann, D. Minder, O. Saukh, and K. Rothermel. FlexCup: A flexible and efficient code update mechanism for sensor networks. In European Workshop on Wireless Sensor Networks, 2006.
[28] A. Perrig, R. Szewczyk, V. Wen, D. E. Culler, and J. D. Tygar. SPINS: Security protocols for sensor networks. In Mobile Computing and Networking, pages 189-199, 2001.
[29] J. Polastre, R. Szewczyk, and D. Culler. Telos: Enabling ultra-low power wireless research. In Proc. IPSN/SPOTS'05, Los Angeles, CA, USA, April 2005.
[30] N. Ramanathan, E. Kohler, and D. Estrin. Towards a debugging system for sensor networks. International Journal for Network Management, 3(5), 2005.
[31] N. Reijers and K. Langendoen. Efficient code distribution in wireless sensor networks. In Proceedings of the 2nd ACM International Conference on Wireless Sensor Networks and Applications, pages 60-67, 2003.
[32] RF Monolithics. 868.35 MHz Hybrid Transceiver TR1001, 1999. http://www.rfm.com
[33] J. Schiller, H. Ritter, A. Liers, and T. Voigt. ScatterWeb - low power nodes and energy aware routing. In Proceedings of the Hawaii International Conference on System Sciences, Hawaii, USA, 2005.
[34] T. Stathopoulos, J. Heidemann, and D. Estrin. A remote code update mechanism for wireless sensor networks. Technical Report CENS-TR-30, University of California, Los Angeles, Center for Embedded Networked Computing, November 2003.
[35] M. Welsh and G. Mainland. Programming sensor networks using abstract regions. In Proc. USENIX/ACM NSDI'04, San Francisco, CA, March 2004.
[36] G. Werner-Allen, P. Swieskowski, and M. Welsh. MoteLab: A wireless sensor network testbed. In Proc. IPSN/SPOTS'05, Los Angeles, CA, USA, April 2005.

Keywords: Wireless sensor networks; Embedded systems; Dynamic linking; Operating systems; Virtual machines

S2DB: A Novel Simulation-Based Debugger for Sensor Network Applications

ABSTRACT

Sensor network computing can be characterized as resource-constrained distributed computing using unreliable, low-bandwidth communication. This combination of characteristics poses significant software development and maintenance challenges. Effective and efficient debugging tools for sensor networks are thus critical. Existing development tools, such as TOSSIM, EmStar, ATEMU and Avrora, provide useful debugging support, but not with the fidelity, scale and functionality that we believe are sufficient to meet the needs of the next generation of applications. In this paper, we propose a debugger, called S2DB, based on a distributed full system sensor network simulator with high fidelity and scalable performance, DiSenS.
By exploiting the potential of DiSenS as a scalable full system simulator, S2DB extends conventional debugging methods by adding novel device level, program source level, group level, and network level debugging abstractions. The performance evaluation shows that all these debugging features introduce overhead that is generally less than 10% into the simulator, thus making S2DB an efficient and effective debugging tool for sensor networks.

INTRODUCTION

Sensor networks, comprised of tiny resource-constrained devices connected by short range radios and powered by batteries, provide an innovative way to implement pervasive and non-intrusive environmental instrumentation and (potentially) actuation. The resource-constrained nature of sensor network devices poses significant software development and maintenance challenges. To prolong battery life and promote miniaturization, most devices have little memory, use low-power and unreliable radios, and run long duty cycles. In addition to these per-device constraints, by definition sensor networks are also distributed systems, with all of the concomitant synchronization and consistency concerns that distributed coordination implies.

For these reasons, effective debugging support is critical. A number of sensor network development systems [2, 18, 3, 17, 13, 6] provide debugging support for individual devices and/or the complete network. However, they all have their limitations. Some rely on hardware support, subject to the same resource constraints as the programs on which they operate. Some only monitor the network radio traffic. And most importantly, as networks scale, these tools become difficult to apply to the details of collections of interacting sensor nodes.

In this paper, we present a new approach that is based on scalable full system sensor network simulation with enhanced debugging features. Our debugging tool is called S2DB (where S2 stands for Simulation and Sensor network). The goal of S2DB is to adapt conventional debugging methods to sensor network applications so that we can have better control of hardware details and debug the complete sensor network in a coordinated way. Our approach relies upon four principal innovations in the area of debugging resource-constrained devices:

At the single device level, we introduce the concept of a debugging point, a generalized notion of break point, watch point, and state interrogation that permits state display from all sensor device subsystems (flash pages, buffers, etc.);

Also at the device level, we introduce virtual registers within the simulator to support source level instrumentation and tracing. The access to these registers does not affect the correct functioning of other components;

At the multi-device level, we introduce a coordinated break condition, which enables the coordinated execution control of multiple devices;

Finally, at the network level, we provide a "time traveling" facility to use with network level trace analysis, so that the error site can be rapidly restored for detailed inspection.

S2DB is built upon DiSenS [25], a scalable distributed full system sensor network simulator. DiSenS has a distributed simulation framework. Individual sensor devices are emulated in separate operating system threads. DiSenS then partitions and schedules these device emulations to the computer nodes of a cluster, and simulates inter-device communication at the radio level (i.e. below the communication
below the communication\nprotocol stack and radio hardware device interfaces).\nSensor device emulations in DiSenS are cycle-accurate. Moreover,\na plugin mechanism allows the insertion of power models and radio\nmodels with different fidelity levels. Thus DiSenS is capable of accurate\n, large-scale sensor network simulation where the application\nand operating system code can be executed, unmodified on native\nhardware.\nDiSenS benefits our design and implementation in many aspects.\nIts simulator infrastructure gives us the full control of device states,\nwhich enables the design of debugging points. Its high performance\nmakes our debugger execute efficiently. Its scalability enables us\nto debug large-scale sensor networks. While the availability of a\nhigh-fidelity radio model for sensor network radio remains elusive\n(making many senor network implementors reluctant to embrace\nsimulation and/or emulation), we believe the ability to debug sensor\nnetwork programs at scale as a precursor to actual deployment will\ncut development time and reduce the amount of in situ debugging\nthat will be required in an actual deployment.\nWe also wish to emphasize that in this paper we do not claim\nS\n2\nDB adequately addresses many of the thorny difficulties associated\nwith all debugging tools (e.g. the ability to debug optimized\ncode). Rather our focus is on innovations that we believe are important\nto the development of large-scale senor network deploy-ments\nand that also improve the current state-of-the-practice in sensor\nnetwork debugging. In Section 2, we first give the background\nof sensor network debugging. In Section 3, we briefly introduce\nthe features and details of DiSenS that are relevant to our debugging\npurpose. In Section 4, we introduce the debugging point and\nits use with break conditions. We also present the design of virtual\nhardware based source level instrumentation. In Section 5, we\ndiscuss how to control the execution of multiple devices in a coordinated\nway. We focus on the implementation detail in DiSenS\ninfrastructure. In Section 6, we talk about the checkpoint implementations\nfor fast time traveling. We evaluate the performance of\nour enhancing techniques in Section 7. And we conclude our work\nin Section 8.\nRELATED WORK\nLike most embedded devices, sensor network devices can be debugged\nwith special hardware support. For motes (e.g. Mica2 and\nMicaZ), Atmel's AVR JTAG ICE (In-Circuit Emulator) [2] is one\nof the popular hardware-based debuggers. Atmel's AVR family\nof microcontrollers (that are currently used as the processing elements\nin many mote implementations) has built-in debugging support\n, called On-Chip Debugging (OCD). Developers can access the\nOCD functions via JTAG [10] hardware interface. With JTAG ICE,\ndevelopers can set break points, step-execute program and query\nhardware resources. JTAG ICE can also be used with GUI interfaces\nor a GDB debugging console. Hardware-based approaches\nsuch as JTAG ICE typically have their limitations. For example, it\nis not possible to synchronize the states of program execution with\nI/O systems in debugging. This is because when the program execution\nis stopped in JTAG ICE, the I/O system continues to run at\nfull speed [1]. Also since the debugging support is only provided\nwith the processing unit (i.e. 
the microcontroller), it is not easy to\ninterrogate the state of other on-board systems, like flash memory.\nIn contrast, by working with the full system DiSenS simulations,\nS\n2\nDB does not suffer from these limitations.\nAt network level, many monitoring and visualization tools like\nSympathy [18, 19], SpyGlass [3], Surge Network Viewer [22] and\nMote-VIEW [16] provide a way to trace, display and analyze network\nactivities for a physical sensor network. These tools usually\nuse a software data collecting module running on sensor nodes in\nthe network. The collected data is transferred using flooding or\nmultihop routing to the gateway node. The gateway node then forwards\nthe data to a PC class machine for analysis or visualization.\nThese tools are useful for displaying the network topology and and\nanalyzing the dynamics of data flow, particularly with respect to\nspecific inter-node communication events. Tools like Sympathy\neven specialize in detecting and localizing sensor network failures\nin data collection applications. However, these monitoring may be\nintrusive in that they share many of the scarce device resources they\nuse with the applications they are intended to instrument. These\ntools may complement what we have with S\n2\nDB . When a communication\nanomaly is detected, for example, often a program-level\ndebugger may still be necessary to pinpoint the exact location of\nerror in code.\nMore generally, while debugging on real hardware is the ultimate\nway to verify the correctness of sensor network applications\n, simulation based debuggers provide complementary advantages\nthat have been successfully demonstrated by other projects.\nMany sensor network simulators, like TOSSIM [13], ATEMU [17],\nAvrora [23] and EmStar [6], provide significant debugging capabilities\n. TOSSIM is a discrete event simulator for TinyOS applications\n. It translates the TinyOS code into emulation code and links\nwith the emulator itself. So debugging with TOSSIM is actually\ndebugging the emulator. Developers have to keep in their mind\nthe internal representation of device states. While discrete event\nsimulators are useful for verifying functional correctness, they typically\ndo not capture the precise timing characteristics of device\nhardware, and thus have limited capability in exposing errors in\nprogram logic. In contrast, full system simulators, such as ATEMU\nand Avrora, have much higher fidelity. ATEMU features a source\nlevel debugger XATDB, which has a graphic frontend for easy use.\nXATDB can debug multiple sensor devices, but can only focus on\none at a time. Avrora provides rich built-in support for profiling\nand instrumentation. User code can be inserted at any program address\n, watches can be attached to memory locations, and specific\nevents can be monitored. These facilities can be quite useful for\ndebugging purposes. Indeed, we extend Avrora's probe and watch\nconcepts in the development of S\n2\nDB's debugging points (cf. Section\n4). In addition to this support for simulator instrumentation,\nS\n2\nDB also provides a source code level instrumentation facility,\nvia virtual debugging registers, since it is easier to use for some\ndebugging problems.\nTime traveling for debugging is currently the subject of much\nresearch [11, 20] in the field of software system development and\nvirtualization. Flashback [20] is a lightweight extension for rollback\nand replay for software debugging. 
Flashback uses shadow\nprocesses to take snapshots of the in-memory states of a running\nprocess and logs the process' I/O events with the underlying system\nto support deterministic rollback or replay. VMM (virtual machine\nmonitor) level logging is used in [11] for replaying the system executing\nin a virtual machine. Checkpointing the state of a full system\nsimulator is easier than that in a real OS or virtual machine monitor\nsince all the hardware are simulated in software. Our results show\nthat time traveling support in DiSenS has very low overhead due to\nthe simpleness of sensor hardware it emulates.\nTHE DiSenS SIMULATOR\nS\n2\nDB is built upon DiSenS [25], a distributed sensor network\nsimulator designed for high fidelity and scalable performance. DiSenS\nprovides sensor network applications an execution environment\nas \"close\" to real deployment as possible. DiSenS is also able\n103\nto simulate a sensor network with hundreds of nodes in real time\nspeed using computer clusters. In this section, we briefly introduce\nthe design aspects of DiSenS that are relevant to the implementation\nof S\n2\nDB . The complete discussion and evaluation of DiSenS\nare in papers [25, 24].\n3.1\nFull System Device Simulation\nThe building blocks of DiSenS are full system device simulators,\nsupporting popular sensor network devices, including iPAQ [9],\nStargate [21] and Mica2/MicaZ motes [15]. In this paper, we confine\nour description to the functionality necessary for debugging\nmote applications. However, the same functionality is implemented\nfor more complex devices such as the iPAQ and Stargate. A more\nfull examination of debugging for heterogeneous sensor devices is\nthe subject of our future work.\nThe mote device simulator in DiSenS supports most of the Mica2\nand MicaZ hardware features, including the AVR instruction set,\nthe ATmega128L microcontroller (memories, UARTs, timers, SPI\nand ADC, etc.), the on-board Flash memory, CC1000 (Mica2) and\nCC2420 (MicaZ) radio chips and other miscellaneous components\n(like sensor board, LEDs, etc.).\nThe core of the device simulator is a cycle-accurate AVR instruction\nemulator. The instruction emulator interacts with other hardware\nsimulation components via memory mapped I/O. When an\napplication binary is executed in the simulator, each machine instruction\nis fed into the instruction emulator, shifting the internal\nrepresentation of hardware states accordingly and faithfully. Asyn-chronous\nstate change is modelled as events. Events are scheduled\nby hardware components and kept in an event queue. The instruction\nemulator checks the event queue for each instruction execution\n, triggering timed events. The collection of simulated hardware\nfeatures is rich enough to boot and execute unmodified binaries of\nTinyOS [8] and most sensor network applications, including Surge,\nTinyDB [14] and Deluge [4]. By correctly simulating hardware\ncomponents, the device simulator ensures the cycle accuracy, providing\nthe basis of faithful simulation of a complete sensor network\n.\nThe full system device simulator in DiSenS also presents extension\npoints or \"hooks\" for integrating power and radio models.\nThis extensible architecture provides a way to support the development\nof new models and to trade simulation speed for level of\naccuracy. For debugging, this extensibility enables developers to\ntest applications with different settings. For example, radio models\nrepresenting different environments (like outdoor, indoor, etc.) 
can\nbe plugged in to test applications under different circumstances.\nIn i t s defaul t confi gurat i on, Di S enS i ncorporat es an accurat e power\nmodel from [12], a simple linear battery model, a basic lossless radio\nmodel, and a simple parameterized statistical model. The structure\nof the system, however, incorporates these models as modules\nthat can be replaced with more sophisticated counterparts.\n3.2\nScalable Distributed Simulation\nDiSenS's ability to simulate hundreds of mote devices using distributed\ncluster computing resources is its most distinctive feature.\nThis level of scalability makes it possible to experiment with large\nsensor network applications before they are actually deployed and\nto explore reconfiguration options \"virtually\" so that only the most\npromising need to be investigated in situ. As a debugging tool,\nDiSenS's scalability allows developers to identify and correct problems\nassociated with scale. For example, a data sink application\nmay work well in a network of dozens of nodes, but fails when the\nnetwork size increases to hundreds, due to the problems such as\ninsufficient queue or buffer size. Even for small scale network, the\nscalability is useful because it translates into simulation speed, and\nthus debugging efficiency.\nDiSenS achieves its scalability by using a simple yet effective\nsynchronization protocol for radio simulation and applying automatic\nnode partition algorithms to spread the simulation/emulation\nworkload across machines in a computer cluster. In DiSenS, sensor\nnodes are simulated in parallel, each running in its own operating\nsystem thread and keeping its own virtual clock. Sensor nodes interact\nwith each other only in the radio transmission, during which\nradio packets are exchanged. The radio interaction of sensor nodes\ncan be abstracted into two operations: read radio channel and write\nradio channel. The analysis [25] shows that only when a node reads\nradio channel, it needs to synchronize its clock with its neighbors\n(i.e., potential radio transmitters in its radio range). This ensures\nthat each receiving node receives all the packets it is supposed to\nreceive. A primitive called wait on sync is introduced to perform\nthis synchronization, which forces the caller to wait for neighbor\nnodes to catch up with its current clock time. To implement this\nprotocol, each node also has to keep its neighbors updated about its\nclock advance by periodically sending out its current clock time. A\nmore detailed description and analysis of this protocol is in [25].\nTo utilize distributed computing r esources, D iS enS partitions nodes\ninto groups, each simulated on one machine within a cluster. Communication\nbetween sensor nodes assigned to the same machine\nis via a shared-memory communication channel. However, when\nmotes assigned to distinct machines communicate, that communication\nand synchronization must be implemented via a message\npass between machines. Due to the relatively large overhead of\nremote synchronization via message passing (caused by network\nlatency), partitioning of simulated nodes to cluster machines plays\nan important role in making the ensemble simulation efficient.\nTo address this problem, graph-partitioning algorithms, originally\ndeveloped for tightly-coupled data-parallel high-performance\ncomputing applications, are employed. 
DiSenS uses a popular partitioning\npackage [7] to partition nodes nearly optimally.\nOur S\n2\nDB debugging tool is built upon DiSenS , whose design\nhas huge impact on how the debugging facilities that we have implemented\n, including both advantages and limitations. In the next 3\nsections, we'll discuss how DiSenS interacts with S\n2\nDB to support\nboth conventional and novel debugging techniques.\nDEBUGGING INDIVIDUAL DEVICES\nS\n2\nDB was first built as a conventional distributed debugger on\nthe DiSenS simulator. Each group of sensor nodes has a standalone\ndebugging proxy waiting for incoming debugging commands. A\ndebugger console thus can attach to each individual sensor node\nvia this group proxy and perform debugging operations. The basic\nS\n2\nDB includes most functions in a conventional debugger, like\nstate (register and memory) checking, break points and step execution\n, etc.\nIn this section, we discuss how we exploit the potential of a simulation\nenvironment to devise novel techniques for debugging single\nsensor devices.\n4.1\nDebugging Point\nDebugging is essentially a process of exposing program's internal\nstates relevant to its abnormal behavior and pinpointing the\ncause. Visibility of execution states is a determining factor of how\ndifficult the debugging task is. Building upon a full system simulator\nfor each device gives S\n2\nDB a great potential to expose time\nsynchronized state.\nConventional debuggers essentially manipulate three states of a\nprogram: register, memory and program counter (PC). Simulators\n104\nComponent\nParameters\nValue\nInterrupt\nWatchable\nOverhead\nPC (pc)\nmicrocontroller\nnone\nInt\nNo\nYes\nLarge\nRegister (reg)\nmicrocontroller\naddress\nInt\nNo\nYes\nLarge\nMemory Read (mem rd)\nSRAM\naddress\nBoolean\nNo\nYes\nSmall\nMemory Write (mem wr)\nSRAM\naddress\nBoolean\nNo\nYes\nSmall\nMemory (mem)\nSRAM\naddress\nInt\nNo\nYes\nSmall\nFlash Access (flash access)\nFlash\ncommand, address\nBoolean\nNo\nYes\nSmall\nFlash (flash)\nFlash\naddress\nInt\nNo\nYes\nSmall\nPower Change (power)\nPower Model\nnone\nFloat\nNo\nYes\nSmall\nTimer Match (timer)\nTimers\nnone\nBoolean\nYes\nNo\nSmall\nRadio Data Ready (spi)\nSPI (radio)\nnone\nBoolean\nYes\nNo\nSmall\nADC Data Ready (adc)\nADC (radio/sensor)\nnone\nBoolean\nYes\nNo\nSmall\nSerial Data Received (uart)\nUART\nnone\nBoolean\nYes\nNo\nSmall\nClock (clock)\nVirtual\nnone\nInt\nNo\nYes\nMinimal\nRadio Packet Ready (packet)\nRadio Chip\nnone\nPacket\nNo\nYes\nSmall\nProgram Defined (custom)\nVirtual Debugging Hardware\nID\nInt\nNo\nYes\nProgram defined\nTable 1: The current set of debugging points in S\n2\nDB .\ncan provide much more abundant state information, which may\nenable or ease certain debugging tasks. For example, to debug a\nTinyOS module that manages on-board flash memory, it is important\nfor the internal buffers and flash pages to be displayed directly.\nIt is straightforward for DiSenS but rather difficult in a conventional\ndebugger, which has to invoke complex code sequence to access the\nflash indirectly.\nWe carefully studied the device states in DiSenS and defined a\nseries of debugging points. A debugging point is the access point\nto one of the internal states of the simulated device. The device\nstate that is exposed by a debugging point can then be used by the\ndebugger for displaying program status and controlling program\nexecution, e.g., break and watch, as that in a conventional debugger\n. 
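As a rough illustration of the abstraction, a debugging point can be thought of as a small descriptor that the owning hardware component exposes to the debugger. The sketch below is one plausible shape for such a descriptor, with fields loosely mirroring the columns of Table 1; the type and field names are assumptions made for illustration, not the actual S2DB data structures.\n#include <stdbool.h>\n#include <stdint.h>\n\nstruct break_cond;                        /* a break/watch condition registered by the debugger */\n\ntypedef struct debug_point {\n    const char *name;                     /* console notation, e.g. "mem", "timer", "packet" */\n    const char *component;                /* owning hardware component, e.g. SRAM, Timers */\n    /* Current value of the point; 'param' is the optional argument such as a\n     * memory address, ignored by points that take no parameter. */\n    int64_t   (*read)(struct debug_point *dp, uint32_t param);\n    bool        watchable;                /* can a watch be attached? */\n    bool        has_interrupt;            /* associated with a hardware interrupt? */\n    struct break_cond *monitor_queue;     /* conditions to re-evaluate when the state changes */\n} debug_point_t;\n\n/* Invoked by the owning component whenever the point's state changes: per\n * instruction for pc, per access for mem_rd/mem_wr, and only on the\n * corresponding hardware event for timer, spi, adc and uart. */\nvoid debug_point_changed(debug_point_t *dp);\nHow often debug_point_changed would fire for a given point is what the Overhead column of Table 1 estimates.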
In this sense, debugging points have extended our debugger's capability of program manipulation.\nTable 1 lists the current set of debugging points defined in S2DB. It is not a complete list, since we are still improving our implementation and discovering more meaningful debugging points. In the table, the first column shows the debugging point name and the abbreviated notation (in parentheses) used by the debugger console. The corresponding hardware component that a debugging point belongs to is listed in the second column. The third and fourth columns specify the parameters and return value of a debugging point. For example, the "memory" point returns the byte content at the given memory address. The fifth column tells whether a debugging point has an associated interrupt, and the sixth column specifies whether a watch can be added to the point. The last column estimates the theoretical performance overhead of monitoring a particular debugging point.\nAs we see in the table, the common program states interrogated by conventional debuggers, i.e., register, memory and program counter, are also generalized as debugging points in S2DB, listed as reg, mem and pc. For memory, we also introduced two extra debugging points, mem_rd and mem_wr, to monitor the access to memory in terms of direction. Notice that debugging points have different time properties: some are persistent while others are transient. In the memory case, the memory content, mem, is persistent, while memory accesses, mem_rd and mem_wr, are transient; they are valid only when memory is read or written.\nSimilarly, the on-board flash has two defined debugging points: one for the page content (flash) and the other (flash_access) for flash accesses, including read, program and erase. The power debugging point is used to access the simulated power state of the device, which may be useful for debugging power-aware algorithms.\nFour important hardware events are defined as debugging points: timer match event (timer), radio (SPI) data ready (spi), ADC data ready (adc) and serial data ready (uart). They are all transient and all related to an interrupt. These debugging points provide a natural and convenient way to debug sensor network programs, since many such programs are event-driven, as TinyOS and its application suite are. As an example, if we want to break the program execution at the occurrence of a timer match event, we can simply invoke the command:\n> break when timer() == true\nIn a more conventional debugger, a breakpoint is typically set in the interrupt handling code, the name of which must be known to the programmer. Furthermore, breaking on these event-based debugging points is much more efficient than breaking on a source code line (i.e., a specific program address). This is because matching program addresses requires a comparison after the execution of each instruction, while matching event-based debugging points happens only when the corresponding hardware events are triggered, which occurs much less frequently. We discuss how to use debugging points to set break conditions, and their overhead, later in this subsection.\nThe clock debugging point provides a way to exert accurate timing control over program execution. It can be used to fast-forward the execution to a certain point if we know that the bug of interest will not occur until after a period of time.
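Inside an event-driven simulator such as the one described in Section 3.1, such fast-forwarding need not test the clock after every instruction: a break request can itself be scheduled as an event in the simulator's event queue. The sketch below illustrates this idea; schedule_event, current_cycle and suspend_execution are hypothetical stand-ins for the simulator's internal interfaces, not actual DiSenS functions.\n#include <stdint.h>\n\ntypedef void (*event_fn)(void *arg);\n\n/* Hypothetical simulator internals: schedule a callback at a virtual cycle,\n * query the current virtual clock, and hand control back to the debugger. */\nextern void     schedule_event(uint64_t fire_at_cycle, event_fn fn, void *arg);\nextern uint64_t current_cycle(void);\nextern void     suspend_execution(void);\n\nstatic void clock_break_event(void *arg)\n{\n    (void)arg;\n    /* The instruction emulator pops this event when the virtual clock reaches\n     * the requested cycle, so no per-instruction comparison is needed. */\n    suspend_execution();\n}\n\n/* Debugger-side handler for a break condition on the clock debugging point,\n * e.g. "break when clock() == target". */\nvoid set_clock_breakpoint(uint64_t target_cycle)\n{\n    if (target_cycle > current_cycle())\n        schedule_event(target_cycle, clock_break_event, 0);\n    else\n        suspend_execution();               /* already past the target */\n}\nThis is the same observation that later allows the clock debugging point to be monitored with essentially no extra overhead.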
It would be rather difficult\nto implement this in a conventional debugger since there is no\neasy way to obtain accurate clock timing across device subsystems.\nIt is also possible to analyze the states and data in the simulator to\nextract useful high-level semantics and use them to build advanced\ndebugging points. An example is the recognition of radio packet.\nThe Mica2 sensor device uses the CC1000 radio chip, which operates\nat the byte level. Thus an emulator can only see the byte stream\ntransmitted from/to neighbor nodes and not packet boundaries. For\napplication debugging, however, it is often necessary to break program\nexecution when a complete packet has been transmitted or\nreceived. A typical debugging strategy is to set a breakpoint in the\nradio software stack at the the line of code line that finishes a packet\nreception. However, this process can be both tedious and unreliable\n(e.g. software stack may change when a new image is installed),\nespecially during development or maintenance of the radio stack\nitself. Fortunately, in the current TinyOS radio stack implementation\n, the radio packet has a fixed format. We implemented a tiny\nradio packet recognizer in the radio chip simulation code. A \"radio\npacket ready\" (packet) debugging point is defined to signal the state\n105\nwhen a complete packet is received. These extracted high-level semantics\nare useful because we can debug applications without relying\non the source code, especially when the application binary is\noptimized code and it is hard to associate exact program addresses\nwith specific source code line. However, discovering these semantics\nusing low-level data/states is challenging and non-obvious (at\nleast, to us) and as such continues to be a focus of our on-going\nresearch in this area.\n4.1.1\nBreak Conditions Using Debugging Points\nDebugging points are used in a functional form. For example, if\nwe want to print a variable X, we can use:\n> print mem(X)\nTo implement conditional break or watch points, they can be included\nin imperatives such as:\n> break when flash_access(erase, 0x1)\nwhich breaks the execution when the first page of the flash is erased.\nIt is also possible to compose them:\n> break when timer() && mem(Y) > 1\nwhich breaks when a timer match event occurs and a state variable\nY , like a counter, is larger than 1.\nThe basic algorithm for monitoring and evaluating break conditions\nis as follows. Each debugging point maintains a monitor\nqueue. Whenever a break point is set, its condition is added to the\nqueue of every debugging point that is used by the condition. Every\ntime the state changes at a debugging point, the conditions in\nits queue is re-evaluated to check whether any of them is satisfied.\nIf so, one of the break points is reached and the execution is suspended\n. Otherwise, the execution continues.\nNote that the monitoring overhead varies for different debugging\npoints revealing the possibility for optimization of the basic condition\nevaluating algorithm. The monitoring overhead is determined\nby the frequency of state change at a debugging point. Obviously,\npc has the largest overhead because it changes at each instruction\nexecution. Event related debugging points have very low overhead\nsince hardware events occur less frequently. For example, the timer\nevent may be triggered for every hundreds of cycles. Clock logi-cally\nhas a large overhead since it changes every clock cycle. 
However, in simulation, clock time is checked anyway for event triggering. By implementing the clock monitoring itself as an event, we introduce no extra overhead for monitoring the clock debugging point.\nThus we are able to optimize the implementation of condition evaluation. For example, consider the following break condition:\n> break when pc() == foo && mem(Y) > 1\nUsing the basic algorithm, the overhead of monitoring the condition is the sum of pc's overhead and mem's overhead. However, since the condition is satisfied only when both debugging points match their expressions, we can track only mem, since it has smaller overhead than pc. When mem is satisfied, we then continue to check pc. In this way, the overall overhead is reduced.\nNow we present the general condition evaluation algorithm. Given a condition as a logic expression, C, it is first converted into canonical form as a product of maxterms:\nC = t_1 ∧ t_2 ∧ ... ∧ t_n  (1)\nwhere t_i is a maxterm. The overhead function f_ov is defined as the total overhead of monitoring all the debugging points in a maxterm. Then we sort the maxterms by the value of f_ov(t_i) in increasing order, say t_k1, ..., t_kn. We start the monitoring of C using maxterm t_k1, by adding C to all the debugging points that belong to t_k1. When t_k1 is satisfied, we re-evaluate C and stop if it is true. Otherwise, we remove C from t_k1's debugging points and start monitoring t_k2. If t_kn is being monitored and C is still not satisfied, we loop back to t_k1. We repeat this process until C is satisfied. If C is unsatisfiable, this process never ends.\nDebugging points give us a powerful capability to debug sensor network programs at a level between the hardware level and the source-code level. However, direct instrumentation of the source code is sometimes the easiest and most straightforward debugging method. The typical methodology for implementing source-level instrumentation is to use print statements to dump states. Printing, however, can introduce considerable overhead that can mask the problem being tracked.\nIn S2DB we include an instrumentation facility based on virtual registers that serves the same purpose with reduced overhead. We introduce our instrumentation facility in the next subsection.\n4.2\nVirtual Hardware Based Source Code Instrumentation\nSensor devices are usually resource-constrained, lacking the necessary facilities for debugging in both hardware and software. On a Mica2 sensor device, the only I/O method the program can use to display internal status is to flash the three LEDs, which is tedious and error-prone to decode. DiSenS faithfully simulates the sensor hardware, thus inheriting this limitation. Because we insist that DiSenS maintain binary transparency with the native hardware it emulates, the simulated sensor network program is not able to perform a simple "printf".\nTo solve this problem, we introduce three virtual registers as an I/O channel for communication between the application and the simulator. Their I/O addresses are allocated in the reserved memory space of the ATmega128L. Thus accesses to these virtual registers will not affect the correct functioning of other components.
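To give a feel for the application side of this channel, the sketch below shows what a tiny helper library over the three registers might look like. The register addresses follow Table 2 below; the command byte values, the access macros, and whether the registers are reached through I/O or data address space on the ATmega128L are assumptions made for illustration, not the actual S2DB encodings.\n#include <stdint.h>\n\n/* Memory-mapped access to the three virtual debugging registers (addresses as\n * in Table 2). Command codes below are hypothetical. */\n#define VDBCMD  (*(volatile uint8_t *)0x75)   /* command register */\n#define VDBIN   (*(volatile uint8_t *)0x76)   /* input register  */\n#define VDBOUT  (*(volatile uint8_t *)0x77)   /* output register */\n\n#define VDB_CMD_PRINT  0x01                   /* hypothetical encoding */\n#define VDB_CMD_DEBUG  0x02                   /* hypothetical encoding */\n\n/* Print a NUL-terminated message on the host console of the simulating\n * machine; the simulator flushes when it sees the newline. */\nvoid vdb_print(const char *msg)\n{\n    VDBCMD = VDB_CMD_PRINT;\n    while (*msg)\n        VDBOUT = (uint8_t)*msg++;\n    VDBOUT = '\n';\n}\n\n/* Emit an <id, value> tuple for a program defined (custom) debugging point;\n * only three register accesses when both fields fit in 8 bits. */\nvoid vdb_debug(uint8_t id, uint8_t value)\n{\n    VDBCMD = VDB_CMD_DEBUG;\n    VDBOUT = id;\n    VDBOUT = value;\n}\nThe two helpers correspond to the print and program defined debugging point uses of the channel described in the remainder of this subsection.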
Table 2\nlists the three registers and their functions.\nAddress\nName\nFunctionality\n0x75\nVDBCMD\nCommand Register\n0x76\nVDBIN\nInput Register\n0x77\nVDBOUT\nOutput Register\nTable 2: Virtual registers for communication between application\nand simulator.\nThe operation of virtual registers is as follows: an application\nfirst issues a command in the command register, VDBCMD; then\nthe output data is transferred via VDBOUT register and the input\ndata is read from em VDBIN register. The simplest application\nof virtual registers is to print debugging messages by first sending\na \"PRINT\" command and then continuously writing the ASCII\ncharacters in a string to the VDBOUT register until a new line is\nreached. On the simulator side, whenever a command is issued, it\neither reads from the VDBOUT register or sending data to VDBIN.\nIn the print case, when the simulator gets all the characters (ended\nby a new line), it will print out on the host console of the simulating\nmachine.\nA more advanced use of virtual registers is to control a debugging\npoint. We term this combination of virtual registers and debugging\npoints a program defined debugging point (custom, as listed\nin the last line of Table 1). The state of a custom debugging point is\ngenerated by the instrumentation code in the program. To do so, the\ninstrumentation code first sends a \"DEBUG\" command to the VD-106\nBCMD register, then outputs the debugging data on the VDBOUT\nregister, in the form of a tuple, < id, value >. The id is used to\nidentify the instrumentation point in the source code and the value\nis any value generated by the instrumented code. If there is a break\ncondition registered at this point, it will be checked against the tuple\nand execution will stop when it is matched. As an example, if\nwe want to break at the\n10th entry of a function, we can instrument\nthe function and keep a counter of entries. Every time the\ncounter changes, we output the counter value via virtual registers.\nThe break condition will be satisfied when the value equals to\n10.\nTo make it easy to use, we developed a small C library for accessing\nthe virtual registers transparently. Developers can invoke\naccessing functions on these registers by simply calling the C APIs,\nfor example, in a TinyOS program.\nInstrumentation via the virtual registers has the minimal intru-siveness\non application execution. When generating a debugging\npoint event by sending a < id, value > tuple, only three register\naccesses are needed if both values in the tuple are\n8-bits each (one\nfor the command and two for the data).\nCOORDINATED PARALLEL DEBUGGING OF MULTIPLE DEVICES\nDiSenS's scalability and performance enables S\n2\nDB to debug\nlarge cooperating ensembles of sensors as a simulated sensor network\ndeployment. Like other debuggers, S\n2\nDB permits its user to\nattach to and \"focus\" on a specific sensor while the other sensors\nin the ensemble execute independently. However, often, more systematic\nerrors emerge from the interactions among sensor nodes\neven when individual devices and/or applications are functioning\ncorrectly. To reveal these kinds of errors, developers must be able\nto interrogate and control multiple sensor devices in a coordinated\nway.\nDebugging a program normally involves displaying program status\n, breaking program execution at arbitrary points, step-executing,\netc. By extending this concept to parallel debugging, we want to be\nable to:\n1. Display the status of multiple devices in parallel;\n2. 
Break the execution of multiple devices at certain common\npoint;\n3. Step-execute multiple devices at the same pace.\nThe first and third items in the above \"wish list\" are easy to implement\nin a simulation context. S\n2\nDB can simply \"multicast\" its debugging\ncommands to a batch of sensor nodes once their execution\nstop at a certain common point. As for the second item, since DiSenS\nis, in effect, executing multiple parallel simulations without\na centralized clock, implementing a time-correlated and common\nbreakpoint shares the same coordination challenges with in parallel\ndebugging counterpart.\nThe simplest form of coordinated break is to pause the execution\nof a set of involved nodes at a specific virtual time, T :\n> :break when clock() == T\nwhere the colon before \"break\" indicates that it is a batch command\nand will be sent to all the nodes in a global batch list (maintained\nby other commands).\nIt is necessary to review DiSenS's synchronization mechanism\nfirst. We summarize the major rules as follows:\n1. A node that receives or samples radio channel must wait for\nall its neighbors to catch up with its current clock time;\nA\nC\nB\nD\nY\nReceive Receive\nUpdate Transmit\nX\nclock update & byte transmission\nclock update\nFigure 1: Illustration of synchronization between sensor nodes\nin DiSenS . Dashed arrows indicate the update and transmission\nmessages. A < B < C < D.\n2. All nodes must periodically broadcast their clock updates to\nneighbors;\n3. Before any wait, a node must first send its clock update (to\navoid loop waiting);\n4. Radio byte is always sent with a clock update at the end of\nits last bit transmission.\nFigure 1 illustrates the process. At time point A, node X receives.\nIt first sends an update of its clock and wait for its neighbor Y\n(rule 3 & 1). Y runs to B and sends its clock update (rule 2), which\nwakes up X. X proceeds to C and receives again. Y starts a byte\ntransmission at B. At D, the last bit transmitted and so the byte\nalong with a clock update is sent to X (rule 4). X receives the\nbyte, knowing Y passes its current time, and proceeds.\nB\nY\nX\nA\nReceive\nC\nBreak point\nLast update Next update\nupdate for break\nFigure 2: Break at a certain point of time. Dashed arrows indicate\nthe update and transmission messages. B < A < C.\nNow, let's see what happens when we ask multiple nodes to stop\nat the same time. Figure 2 shows one case of the situation. X\nreceives at time A and sends an update and waits for Y . Y sends\nan update at B. Its next update time is C. But we want to break at\na point before C but after A. Since Y breaks (thus waits), it sends\nan update (rule 3). X receives the update, wakes up, proceeds to\nthe break point and stops. Now both X and Y are stopped at the\nsame time point.\nIn Figure 3, the situation is similar to the case in Figure 2. The\ndifference is that now the break point is in the middle of a byte\ntransmission for Y . Y can not just send an update to X and let\nX proceeds to break point as in Figure 2. because if X gets the\nupdate from Y , it believes Y has no byte to send up to the break\npoint and will continue its radio receiving logic. Thus the partial\nbyte from Y is lost. This problem is caused by rule 4. We solve\nit by relaxing the rule: Whenever a node is stopped (thus it waits)\nin the middle of a byte transmission, the byte is pre-transmitted\nwith the clock update. We can do this because mote radio always\ntransmits in byte unit. 
Once a byte transmission starts, we already\n107\nB\nY\nX\nA\nReceive\nC\nBreak point\nTransmit start Transmit\nupdate & pre-transmit\nFigure 3: Extension to the synchronization protocol: pre-transmission\n. Dashed arrows indicate the update and transmission\nmessages. B < A < C.\nknow its content. Also, in DiSenS , each byte received by a node is\nbuffered with timestamp. It will be processed only when the time\nmatches the local clock. With this relaxed rule, we are now able to\nstop multiple sensor nodes at the same virtual time.\nThe next question is how to perform a conditional break on multiple\nnodes. Notice that we cannot simply implement:\n> :break when mem(X) > 3\nbecause it asks the nodes to break independently. Whenever a node\nbreaks at some point, other nodes with direct or indirect neighborhood\nrelationship with it will wait at indeterminate points due to\nthe synchronization requirement. Whether they all satisfy the condition\nis not clear. A reasonable version of this command is:\n> :break when *.mem(X) > 3\nor\n> :break when node1.mem(X) > 3\n&& ... && nodek.mem(X) > 3\nwhich means \"break when X > 3 for all the nodes\". In the general\nform, we define a coordinated break as a break with condition\ncond\n1\ncond\n2\n... cond\nk\n, where cond\ni\nis a logic expression\nfor node i.\n000000000000000\n000000000000000\n111111111111111\n111111111111111\n000000000\n000000000\n111111111\n111111111\n00000000000000\n00000000000000\n11111111111111\n11111111111111\nX\nY\nZ\nA\nB\nC\nD\nCondition satisfied\nFigure 4: Coordinated break. The shaded boxes represent the\ntime range during which a local condition is satisfied. Between\nC and D, the global condition is satisfied. A < B < C < D.\nFigure 4 illustrates the meaning of this form of breakpoint. The\nshaded boxes are the time period during which the local condition\nfor a node is satisfied. In Figure 4, the global condition, i.e.\ncond\nx\ncond\ny\ncond\nz\n, is satisfied between time C and D. Time\nC is the exact point where we want to break.\nBefore we present the algorithm that implements coordinated\nbreak, we need to first introduce a new synchronization scheme.\nWe call it partially ordered synchronization. By default DiSenS\nimplements peer synchronization: all the nodes are running in arbitrary\norder except synchronized during receiving or sampling. The\nnew scheme imposes a partial order. In this scheme, a node master\nis first specified. Then all the other nodes proceed by following the\nmaster node. That is, at any wall clock time t\nwall\n(i.e., the real\nworld time), for any node\ni\n, clock\ni\n<= clock\nmaster\n.\nA\nC\nB\nD\nA\nC\nE\nB\nD\nX\nY\nX\nY\nFigure 5: TOP: peer synchronization in DiSenS . A < B <\nD < C. BOTTOM: partially ordered synchronization for S\n2\nDB.\nA < C < E, B = C, D = E. Dashed arrows indicate the\nupdate and transmission messages. (Some update messages are\nomitted)\nFigure 5 illustrates the two synchronization schemes. The top\npart shows DiSenS's peer synchronization scheme. Node X waits\nat A. Y sends update at B and wakes X. Then Y waits at D,\nwaken by X's update at C. X and Y proceed in parallel afterwards.\nThe bottom part shows S\n2\nDB's partially ordered synchronization\nscheme. Here Y is the master. X first waits at A. Y sends its\nupdate at B. X receives the update and runs to the updated point,\nwhich is C (=B). Then X waits again. When Y runs to D and\nsends update. X can proceed to E (=D). 
If Y needs to wait to\nreceive, X will wake it up when X reaches E according to rule 3.\nObviously, in this scheme, X always follows Y .\nNow we can give our algorithm for coordinated break. Using\nFigure 4 as the example, we first designate X as the master. At\npoint A, X's condition is satisfied. X stops at A. Since Y and\nZ follow X, they all stop at A. Then we choose the next node as\nthe new master, whose condition is not satisfied yet. It is Y . X\nand Z follow Y until Y reaches B. Next, similarly, we choose Z\nas new master. At time C, we find cond\nx\ncond\ny\ncond\nz\n=\ntrue. We break the execution and C is exactly our break point. In\nthis algorithm, the aforementioned pre-transmission also plays an\nimportant role in that it enables us to stop all nodes at the same time\npoint precisely.\nCoordinated break, however, does not work with arbitrary conditions\n. Consider the case where the local conditions in Figure 4\nare connected by injunction instead of conjunction. The break\npoint now should be at time A. Since we are not able to predict\nwhich node will first satisfy its condition, it is not possible for us\nto stop all the nodes together at time A unless we synchronize all\nthe nodes cycle by cycle, which would limit the scalability and the\nperformance significantly. For the same reason, we can not set up\nmultiple coordinated break points. We reiterate that these limita-108\ntions are a direct result of our desire to scale DiSenS and to use\nS\n2\nDB on large-scale simulated networks. That is, we have sacri-ficed\ngenerality in favor of the performance gained through parallel\nand distributed-memory implementation.\nAlthough the generality of coordinated break is limited, it is still\nuseful in many situations. For example, for a data sink application,\nwe may want to determine why data is lost when a surge of data\nflows to the sink node. In this case, we would break the execution\nof the sink node based on the condition that its neighbor nodes have\nsent data to it. Then we step-execute the program running on the\nsink node to determine why the data is being lost. To implement\nthe condition of data sent on neighbor nodes we can simply use\nsource code instrumentation exporting a custom debugging point.\nThus this example also illustrates how the single-device debugging\nfeatures discussed in the previous section can be integrated with the\ngroup debugging features.\nFAST TIME TRAVELING FOR REPLAYABLE DEBUGGING\nEven with the ability to perform coordinated breakpoints, the\nnormal debugging cycle of break/step/print i s still cumbersome when\nthe complete sensor network is debugged, especially if the size of\nnetwork is large. The high level nature of some systematic errors\nrequires a global view of the interactions among sensor nodes. An\nalternative model for debugging sensor networks is:\nA simulation is conducted with tracing. Trace log is analyzed\nto pinpoint the anomaly.\nQuickly return to the point when the anomaly occurs to perform\ndetailed source code level debugging.\nTo achieve this, we need to trace the simulation and restore the\nstate of network at any point in the trace. The debugging points\nand virtual hardware based instrumentation discussed in section 4\ncan be used to trace the simulation in a way similar to [23]. In this\nsection, we present the S\n2\nDB's design of fast time traveling, which\nenables the restoration of network states.\nThe basic mechanism required to implement time traveling is\na periodic checkpoint. 
A checkpoint of a simulation is a complete\ncopy of the state of the simulated sensor network. DiSenS is\nan object oriented framework in representing device components.\nWhen a checkpoint is initiated, the state saving function is invoked\nfirst at the highest level \"machine\" object. Recursively, the sub-components\nin the \"machine\" invoke their own state saving functions\n. The saved state is comprised of registers, memories (SRAM,\nEEPROM, etc.) and auxiliary state variables in each component.\nIt also includes some simulation related states. For example, we\nneed to save the event queue content, the received radio byte queue\nin the radio model and the status of the power model, etc. The\ncomplete binary of the state is saved into a timestamped file. The\nresult checkpoint file for DiSenS has a size of\n4948 bytes, mostly\ncomprised of SRAM (\n4KB) content.\nCheckpoint for the on-board flash has to be handled differently.\nMotes have a\n512KB flash chip used for sensor data logging and\nin-network programming. If flash content is saved as other components\n, the checkpoint file will be as large as over half megabyte,\nwhich is\n128 times larger than the one without flash. So if flash is\nalso saved in a snapshot way, it is both extremely space and time\ninefficient for a large scale sensor network. We solve this problem\nby saving flash operations in a log file. Since most sensor network\napplications use flash infrequently and flash content is updated in\npage unit, the overhead of saving log is much smaller than saving\nflash snapshots. Notice that the flash buffers have to be saved in the\nsnapshot checkpoint file.\nOnce a simulation is finished, we have a set of snapshot checkpoints\nand a continuous flash log. Given an arbitrary time point T ,\nto restore the state of system includes the following steps:\n1. Restore: find the latest checkpoint CP that is prior to T and\nload the snapshot checkpoint file;\n2. Replay: if flash is used, replay the flash operation log up to\nCP 's time;\n3. Re-run: start from CP , re-run the simulation until time T .\nCheckpoints can also be initiated by methods other than the need\nto take a periodic snapshot. For example, under S\n2\nDB a break\npoint can be associated with a checkpoint so that once the execution\nbreaks, a checkpoint is generated. Thus, a developer can move\nbetween the checkpoints to find the exact point when error occurs\nduring a replayed simulation. Checkpoint can also be initiated by a\ndebugging point, especially custom debugging points. By allowing\ncheckpoint to be triggered in conjunction with debugging points\nS\n2\nDB integrates the replay and state-saving capabilities needed to\nefficiently re-examine an error condition with the execution control\nover state changes.\nEVALUATION\nSince S\n2\nDB is built upon DiSenS , its performance is highly dependent\non DiSenS itself. We begin this section by focusing on the\nthe performance of DiSenS simulation/emulation and then show the\noverhead introduced by various S\n2\nDB debugging facilities. All experiments\ndescribed in this section are conducted using a 16-node\ncluster in which each host has dual 3.2GHz Intel Xeon processors\nwith 1GB memory. The hosts are connected via switched gigabit\nEthernet. To make fair comparison, we use the same sensor network\napplication CntToRfm for evaluation.\n7.1\nPerformance of DiSenS\nFor brevity, we present only the typical simulation speed of DiSenS\non the cluster. 
A more thorough examination of scalability\nand performance under different configurations can be found in paper\n[25].\nFigure 6 shows the performance achieved by DiSenS when simulating\nvarious numbers of nodes on the cluster in both\n1-D and\n2-D topologies. In the figure, the X axis shows the total number\nof nodes simulated. The Y -axis is the normalized simulation speed\n(compared to real time speed on hardware). For the\n1-D topology,\nall nodes are oriented on a straight line,\n50 meters apart (assuming\nthe maximal radio range is\n60 meters). For the 2-D topology,\nnodes are arranged in a square grid. Again the distance between\ntwo nodes is\n50 meters. Both performance curves are very close\nexcept in the middle part, where\n2-D topology has slightly worse\nperformance.\nThe simulation speed drops noticeably from\n1 to 4 nodes but\nthen the speed curve keeps flat until\n128 nodes are simulated. After\nthat, the speed decreases linearly. The transition from flat to linear\ndecrement is because there is not enough computing resources\nwithin the cluster (\n16 hosts).\nTo summarize the results from [25], DiSenS is able to simulate\none mote 9 times faster than real time speed, or 160 nodes at near\nreal time speed, or 2048 nodes at nearly a tenth of real time speed.\n109\nTotal number of nodes\nNormalized simulated clock speed\n1\n2\n4\n8\n16\n32\n64\n128\n256\n512 1024\n0.01\n0.10\n1.00\n10.00\none dimension\ntwo dimensions\nFigure 6: DiSenS simulation performance in\n1-D and 2-D\ntopologies. X-axis is total number of nodes simulated. Y -axis\nis normalized simulation speed (compared to execution speed\non real device).\n7.2\nPerformance of a Break Condition on a\nSingle Device\nWe first evaluate the cost of monitoring debugging points in single-device\ndebugging. Not all the listed (in Table 1) debugging points\nare evaluated since the overhead for some of them is application\ndependent.\npc\nmemrd\nmemwr\npower\ntimer\nspi\nDebugging Point\nRelative Simulation Speed\n0.0\n0.2\n0.4\n0.6\n0.8\n1.0\nFigure 7: Relative simulation speed for various debugging\npoints. X-axis shows the name of debugging points. Y -axis\nis the ratio to original simulation speed (without monitoring\ndebugging points).\nFigure 7 gives the relative simulation speed of evaluating various\ndebugging points. For each one, we set a break condition using\nthe debugging point and run the simulation. The result shows that\npc has the largest overhead since the PC change occurs for every\ninstruction execution. Memory related debugging points has less\noverhead. Power and event-based debugging points have the least\noverhead since their states change infrequently.\n7.3\nPerformance of a Coordinated Break\nCondition with Multiple Devices\nWe evaluate the overhead of monitoring the coordinated break\ncondition in this subsection. We run our experiments with a\n2-D\n4 4 grid of sensor nodes, distributed in 4 groups (hosts).\n0.85\n0.90\n0.95\n1.00\nInvolved Groups\nRelative Simulation Speed\n1\n2\n3\n4\nFigure 8: Relative simulation speed of monitoring a coordinated\nbreak condition for multiple devices. X-axis is the number\nof groups (hosts) involved. Y -axis is the ratio to original\nsimulation speed (without condition monitoring).\nFigure 8 shows the speed ratio between the simulation with monitoring\nand without. When the group number is\n1, only nodes in\none group are involved in the break condition. 
For group number\n2, nodes in both groups are used in the break condition, and so on.\nThe speed ratio curve drops when the number of groups increases.\nThe overhead of monitoring coordinated break condition is mostly\ndue to the extra synchronization cost introduced by the new partially\nordered synchronization scheme. Obviously, when more nodes\n(especially remote nodes) involved, the simulation overhead is higher.\n7.4\nPerformance of Checkpointing for Time\nTraveling\nWe evaluate the overhead of checkpointing in four configurations\n:\n1 1, 4 1, 16 1 and 4 4, where x y means x nodes\nper group and y groups. For each one, we vary the checkpoint interval\nfrom\n1/8 up to 4 virtual seconds.\nFigure 9 shows the relative simulation speed when checkpointing\nthe system periodically. Naturally, the overhead increases when\ncheckpointing more frequently. It is hard to distinguish the single-group\ncurves since their differences are so small. In general, checkpointing\nin multi-group simulation seems to have larger overhead\nthan single-group. However, the checkpoint overhead is relatively\nsmall. All four curves lie above\n96% of original simulation speed,\nwhich translates to less than\n4% of overhead. This result encourages\nus to use time-traveling extensively in debugging. Developers\nthus can always return to the last break point or a previous trace\npoint with little cost.\nTo summarize, we find that most of the new debugging facilities\nwe have introduced with S\n2\nDB have small overhead (less than\n10%). As a result, we are able to debug sensor network applications\nusing tools that operate at different levels of abstraction while\npreserving the high performance and scalability provided by DiSenS\n.\n110\n0.96\n0.98\n1.00\n1.02\n1.04\nCheckpoint Interval in Virtual Time (second)\nRelative Simulation Speed\n1/8\n1/4\n1/2\n1\n2\n4\n1x1\n4x1\n16x1\n4x4\nFigure 9: Relative simulation speed for checkpointing. X-axis\nis the interval between two checkpoints (in terms of virtual\nclock time of mote device). Y -axis is the ratio to original simulation\nspeed (without checkpointing).\nCONCLUSION\nS\n2\nDB is an efficient and effective sensor network debugger based\non DiSenS, a scalable distributed sensor network simulator. S\n2\nDB\nmakes four innovations to the conventional debugging scheme at\ndifferent levels of abstraction. For effective debugging of single\nsensor devices, debugging points are introduced for the interrogation\nof all interested subsystem states in a sensor device. To facilitate\nsource level tracing and instrumentation, we extend the simulated\nsensor device hardware with a set of virtual registers providing\na way for the communication between simulator and simulated\nprogram. At the multi-device level, we discuss the implementation\nof coordinated break condition in the distributed framework.\nThis new type of break condition enables coordinated parallel execution\ncontrol of multiple sensor devices. A time traveling facility\nis introduced for the network level debugging, used for rapid error\nsite restoration when working with sensor network trace analysis\n. Overall, these debugging features impose overhead of less than\n10% (generally) to DiSenS, and thus enable efficient debugging of\nlarge scale sensor networks.\nS\n2\nDB is still an ongoing project that we think to make it a comprehensive\ndebugging tool for sensor networks, there is still a lot of\nwork to do. 
The most imperative task is to design and implement a\ngraphic user interface for intuitive and productive debugging. We\nare planning to build a plugin in the famous Eclipse [5] development\nenvironment, which controls the debugging and simulation\nfunctions in S\n2\nDB and DiSenS. We are also interested in incorporating\nthe debugging needs according to people's experiences in\nsensor network development and discovering new debugging techniques\n, especially at the network level.\nREFERENCES\n[1] Atmel. AVR JTAG ICE User Guide. 2001. http://www.atmel.\ncom/dyn/resources/prod documents/DOC2475.PDF\n.\n[2] Atmel's AVR JTAG ICE. http://www.atmel.com/dyn/\nproducts/tools card.asp?tool id=2737\n.\n[3] C. Buschmann, D. Pfisterer, S. Fischer, S. P. Fekete, and A. Kroller.\nSpyGlass: taking a closer look at sensor networks. In the Proceedings\nof the 2nd international conference on Embedded networked sensor\nsystems, pages 301302, 2004. New York, NY, USA.\n[4] A. Chlipala, J. W. Hui, and G. Tolle. Deluge: Dissemination\nProtocols for Network Reprogramming at Scale. Fall 2003 UC\nBerkeley class project paper, 2003.\n[5] Eclipse: an extensible development platform and application\nframeworks for building software. http://www.eclipse.org.\n[6] L. Girod, J. Elson, A. Cerpa, T. Stathopoulos, N. Ramanathan, and\nD. Estrin. EmStar: a Software Environment for Developing and\nDeploying Wireless Sensor Networks. USENIX Technical\nConference, 2004.\n[7] B. Hendrickson and R. Leland. The Chaco User's Guide: Version\n2.0. Technical Report SAND942692, Sandia National Lab, 1994.\n[8] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister.\nSystem architecture directions for network sensors. International\nConference on Architectural Support for Programming Languages\nand Operating Systems, Oct. 2000.\n[9] iPAQ devices. http://welcome.hp.com/country/us/en/\nprodserv/handheld.html\n.\n[10] Boundary-Scan (JTAG) test and in-system programming solutions\n(IEEE 1149.1). http://www.jtag.com/main.php.\n[11] S. T. King, G. W. Dunlap, and P. M. Chen. Debugging Operating\nSystems with Time-Traveling Virtual Machines. In the Proceedings\nof USENIX Annual Technical Conference 2005, Apr. 2005. Anaheim,\nCA.\n[12] O. Landsiedel, K. Wehrle, and S. Gtz. Accurate Prediction of Power\nConsumption in Sensor Networks. In Proceedings of The Second\nIEEE Workshop on Embedded Networked Sensors (EmNetS-II), May\n2005. Sydney, Australia.\n[13] P. Levis, N. Lee, M. Welsh, and D. Culler. TOSSIM: Accurate and\nScalable Simulation of Entire TinyOS Applications. ACM\nConference on Embedded Networked Sensor Systems, Nov. 2003.\n[14] S. R. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. The\nDesign of an Acquisitional Query Processor for Sensor Networks. In\nProceedings of SIGMOD 2003, June 2003.\n[15] Mote hardware platform.\nhttp://www.tinyos.net/scoop/special/hardware\n.\n[16] MOTE-VIEW Monitoring Software. http://www.xbow.com/\nProducts/productsdetails.aspx?sid=88\n.\n[17] J. Polley, D. Blazakis, J. McGee, D. Rusk, and J. S. Baras. ATEMU:\nA Fine-grained Sensor Network Simulator. IEEE Communications\nSociety Conference on Sensor and Ad Hoc Communications and\nNetworks, 2004.\n[18] N. Ramanathan, K. Chang, R. Kapur, L. Girod, E. Kohler, and\nD. Estrin. Sympathy for the Sensor Network Debugger. In the\nProceedings of 3rd ACM Conference on Embedded Networked\nSensor Systems (SenSys '05), Nov. 2005. San Diego, California.\n[19] N. Ramanathan, E. Kohler, and D. Estrin. Towards a debugging\nsystem for sensor networks. 
International Journal of Network\nManagement, 15(4):223234, 2005.\n[20] S. M. Srinivasan, S. Kandula, C. R. Andrews, and Y. Zhou.\nFlashback: A Lightweight Extension for Rollback and Deterministic\nReplay for Software Debugging. In the Proceedings of USENIX\nAnnual Technical Conference 2004, June 2004. Boston, MA.\n[21] Stargate: a platform X project.\nhttp://platformx.sourceforge.net/\n.\n[22] Surge Network Viewer. http://xbow.com/Products/\nproductsdetails.aspx?sid=86\n.\n[23] B. Titzer and J. Palsberg. Nonintrusive Precision Instrumentation of\nMicrocontroller Software. In the Proceedings of ACM\nSIGPLAN/SIGBED 2005 Conference on Languages, Compilers, and\nTools for Embedded Systems (LCTES'05), June 2005. Chicago,\nIllinois.\n[24] Y. Wen, S. Gurun, N. Chohan, R. Wolski, and C. Krintz. SimGate:\nFull-System, Cycle-Close Simulation of the Stargate Sensor Network\nIntermediate Node. In Proceedings of International Conference on\nEmbedded Computer Systems: Architectures, MOdeling, and\nSimulation (IC-SAMOS), 2006. Samos, Greece.\n[25] Y. Wen, R. Wolski, and G. Moore. DiSenS: Scalable Distributed\nSensor Network Simulation. Technical Report CS2005-30,\nUniversity of California, Santa Barbara, 2005.\n111\n", "keywords": "Sensor Network;Simulation;Debugging"} {"name": "172", "title": "Scalable Data Aggregation for Dynamic Events in Sensor Networks", "abstract": "Computing and maintaining network structures for efficient data aggregation incurs high overhead for dynamic events where the set of nodes sensing an event changes with time. Moreover, structured approaches are sensitive to the waiting-time which is used by nodes to wait for packets from their children before forwarding the packet to the sink. Although structure-less approaches can address these issues, the performance does not scale well with the network size. We propose a semi-structured approach that uses a structure-less technique locally followed by Dynamic Forwarding on an implicitly constructed packet forwarding structure to support network scalability. The structure, ToD, is composed of multiple shortest path trees. After performing local aggregation , nodes dynamically decide the forwarding tree based on the location of the sources. The key principle behind ToD is that adjacent nodes in a graph will have low stretch in one of these trees in ToD, thus resulting in early aggregation of packets. Based on simulations on a 2000 nodes network and real experiments on a 105 nodes Mica2-based network, we conclude that efficient aggregation in large scale networks can be achieved by our semi-structured approach.", "fulltext": "Introduction\nData aggregation is an effective technique for conserving\ncommunication energy in sensor networks. In sensor\nnetworks, the communication cost is often several orders of\nmagnitude larger than the computation cost. Due to inherent\nredundancy in raw data collected from sensors, in-network\ndata aggregation can often reduce the communication cost\nby eliminating redundancy and forwarding only the extracted\ninformation from the raw data. As reducing consumption\nof communication energy extends the network lifetime, it is\ncritical for sensor networks to support in-network data aggregation\n.\nVarious data aggregation approaches have been proposed\nfor data gathering applications and event-based applications.\nThese approaches make use of cluster based structures [1, 2]\nor tree based structures [38]. 
In data gathering applications,\nsuch as environment and habitat monitoring [912], nodes\nperiodically report the sensed data to the sink. As the traffic\npattern is unchanging, these structure-based approaches\nincur low maintenance overhead and are therefore suitable\nfor such applications. However, in event-based applications,\nsuch as intrusion detection [13, 14] and biological hazard detection\n[15], the source nodes are not known in advance.\nTherefore the approaches that use fixed structures can not\nefficiently aggregate data, while the approaches that change\nthe structure dynamically incur high maintenance overhead\n[4, 5]. The goal of this paper is to design a scalable and efficient\ndata aggregation protocol that incurs low maintenance\noverhead and is suited for event-based applications.\nConstructing an optimal structure for data aggregation for\nvarious aggregation functions has been proven to be an NP-hard\nproblem [16, 17]. Although heuristics can be used to\nconstruct structures for data aggregation, another problem\nassociated with the convergecast traffic pattern, where nodes\ntransmit their packets to the cluster-head or parent in cluster\nor tree structures, results in low performance of structure\nbased data aggregation protocols. In [18] the simulation\nresults show that the packet dropping rate in Shortest Path\nTree (SPT) is higher because of heavy contention caused by\nthe convergecast traffic. This results in more packet drops\nand increased delays. As a result, enforcing a fixed order\nof packet transmissions becomes difficult, which impacts the\nperformance of data aggregation in structured approaches.\nTypically, packets have to be transmitted in a fixed order\n181\nfrom leaves to the root in a tree-like structure to achieve maximum\naggregation. Dropped packets not only make the optimal\nstructure sub-optimal, but also waste energy on transmitting\npackets that are unable to reach the sink.\nIn [19] it shows that the performance gain by using heuristics\nto create the Steiner Minimum Tree (SMT) for aggregation\nis not significant compared with using only the Shortest\nPath Tree (SPT), not to mention that the overhead of\nconstructing such a structure may negate the benefit resulting\nfrom data aggregation. However, their conclusions were\nbased on the assumption of randomly located data sources,\nwhich is different from the scenarios in event-based sensor\nnetworks where a set of close-by nodes is expected to sense\nan event.\nRealizing the shortcomings of structured approaches, [20]\nproposes an anycast based structure-less approach at the\nMAC layer to aggregate packet. It involves mechanisms\nto increase the chance of packets meeting at the same node\n(Spatial Aggregation) at the same time (Temporal Aggregation\n). As the approach does not guarantee aggregation of all\npackets from a single event, the cost of forwarding unaggregated\npackets increases with the scale of the network and the\ndistance of the event from the sink.\nTo benefit from the strengths of the structured and the\nstructure-less approaches, we propose a semi-structured approach\nin this paper. The main challenge in designing such\na protocol is to determine the packet forwarding strategy in\nabsence of a pre-constructed global structure to achieve early\naggregation. Our approach uses a structure-less technique locally\nfollowed by Dynamic Forwarding on an implicitly constructed\npacket forwarding structure to support network scalability\n. 
The structure, ToD (Tree on Directed acyclic graph),\nis composed of multiple shortest path trees. After performing\nlocal aggregation, nodes dynamically decide the forwarding\ntree based on the location of the source nodes. The key principle\nbehind ToD is that adjacent nodes in a graph will have\nlow stretch in at least one of these trees in ToD, thus resulting\nin early aggregation of packets. This paper makes the\nfollowing contributions:\nWe propose an efficient and scalable data aggregation\nmechanism that can achieve early aggregation without\nincurring any overhead of constructing a structure.\nWe implement the ToD approach on TinyOS and compare\nits performance against other approaches on a 105\nnodes sensor network.\nFor studying the scalability aspects of our approach, we\nimplement ToD in the ns2 simulator and study its performance\nin networks of up to 2000 nodes.\nThe organization of the rest of the paper is as follows.\nSection 2 presents background and related work. Section\n3 presents the structure-less approach. Section 4 analyzes\nthe performance of ToD in the worst case. The performance\nevaluation of the protocols using simulations and experiments\nis presented in Section 5. Finally Section 6 concludes\nthe paper.\nRelated Work\nData aggregation has been an active research area in sensor\nnetworks for its ability to reduce energy consumption.\nSome works focus on how to aggregate data from different\nnodes [2124], some focus on how to construct and maintain\na structure to facilitate data aggregation [18,17,2530],\nand some focus on how to efficiently compress and aggregate\ndata by taking the correlation of data into consideration\n[17, 3134]. As our work focuses on how to facilitate\ndata aggregation without incurring the overhead of constructing\na structure, we briefly describe the structure-based\nas well as structure-less approaches in current research.\nIn [1,2], the authors propose the LEACH protocol to cluster\nsensor nodes and let the cluster-heads aggregate data. The\ncluster-heads then communicate directly with the base station\n. PEGASIS [26] extends LEACH by organizing all nodes\nin a chain and letting nodes be the head in turn. [26, 27] extend\nPEGASIS by allowing simultaneous transmission that\nbalances the energy and delay cost for data gathering. Both\nLEACH and PEGASIS assume that any node in the network\ncan reach the base-station directly in one-hop, which limits\nthe size of the network for them to be applicable.\nGIT [3] uses a different approach as compared to LEACH.\nGIT is built on top of a routing protocol, Directed Diffusion\n[21,22], which is one of the earliest proposed attribute-based\nrouting protocols. In Directed Diffusion, data can be aggregated\nopportunistically when they meet at any intermediate\nnode. Based on Directed Diffusion, the Greedy Incremental\nTree (GIT) establishes an energy-efficient tree by attaching\nall sources greedily onto an established energy-efficient path\nand pruning less energy efficient paths. However due to the\noverhead of pruning branches, GIT might lead to high cost\nin moving event scenarios.\nIn [4, 5], the authors propose DCTC, Dynamic Convoy\nTree-Based Collaboration, to reduce the overhead of tree migration\nin mobile event scenarios. DCTC assumes that the\ndistance to the event is known to each sensor and uses the\nnode near the center of the event as the root to construct and\nmaintain the aggregation tree dynamically. 
However it involves\nheavy message exchanges which might offset the benefit\nof aggregation in large-scale networks. From the simulation\nresults in DCTC [5], the energy consumption of tree\nexpansion, pruning and reconfiguration is about 33% of the\ndata collection.\nIn [8], the authors propose an aggregation tree construction\nalgorithm to simultaneously approximate the optimum\ntrees for all non-decreasing and concave aggregation functions\n. The algorithm uses a simple min-cost perfect matching\nto construct the tree. [7] also uses similar min-cost matching\nprocess to construct an aggregation tree that takes the data\nfusion cost into consideration. Other works, such as SMT\n(Steiner Minimum Tree) and MST (Multiple Shared Tree)\nfor multicast algorithms which can be used in data aggregation\n[17, 19, 30], build a structure in advance for data aggregation\n. In addition to their complexity and overhead, they\nare only suitable for networks where the sources are known\nin advance. Therefore they are not suitable for networks with\nmobile events.\nMoreover, fixed tree structure might have long stretch between\nadjacent nodes. A stretch of two nodes u and v in a\ntree T on a graph G is the ratio between the distance from\nnode u to v in T and their distance in G.\nLong stretch\n182\nimplies packets from adjacent nodes have to be forwarded\nmany hops away before they can be aggregated. This problem\nhas been studied as MSST (Minimum Stretch Spanning\nTree) [35] and MAST (Minimum Average Stretch Spanning\nTree) [36]. They are also NP-hard problems, and it has\nbeen shown that for any graph, the lower bound of the average\nstretch is O\n(log(n)) [36], and it can be as high as\nO\n(n) for the worst case [37]. Even for a grid network, it\nhas been shown that the lower bound for the worst case is\nO\n(n) [36]. [38] proposes a polynomial time algorithm to\nconstruct a group-independent spanning tree that can achieve\nO\n(log(n)) stretch. However the delay in [38] is high in large\nnetworks if only nodes near the sink are triggered.\n[20] is the first proposed structure-less data aggregation\nprotocol that can achieve high aggregation without incurring\nthe overhead of structure approaches. [20] uses anycast to\nforward packets to one-hop neighbors that have packets for\naggregation. It can efficiently aggregate packets near the\nsources and effectively reduce the number of transmissions.\nHowever, it does not guarantee the aggregation of all packets\nfrom a single event. As the network grows, the cost of\nforwarding packets that were unable to be aggregated will\nnegate the benefit of energy saving resulted from eliminating\nthe control overhead.\nIn order to get benefit from structure-less approaches even\nin large networks, scalability has to be considered in the design\nof the aggregation protocol. In this paper, we propose\na scalable structure-less protocol, ToD, that can achieve efficient\naggregation even in large networks. ToD uses a semi-structure\napproach that does not have the long stretch problem\nin fixed structure nor incur structure maintenance overhead\nof dynamic structure, and further improves the performance\nof the structure-less approach.\nScalable Data Aggregation\nAs described before, the goal of our protocol is to achieve\naggregation of data near the sources without explicitly constructing\na structure for mobile event scenarios. Aggregating\npackets near the sources is critical for reducing the number of\ntransmissions. 
Aggregating without using an explicit structure\nreduces the overhead of construction and maintenance\nof the structure. In this section, we propose a highly scalable\napproach that is suitable for very large sensor networks.\nOur protocol is based on the Data Aware Anycast (DAA)\nand Randomized Waiting (RW) approaches\n1\nproposed in\n[20]. There are two phases in our protocol: DAA and Dynamic\nForwarding. In the first phase, packets are forwarded\nand aggregated to a selected node, termed aggregator, using\nDAA. In DAA [20], packets were destined to the sink,\nwhereas in our approach they are destined to an aggregator.\nIn the second phase, the leftover un-aggregated or partially\naggregated packets are forwarded on a structure, termed Tree\non DAG (ToD), for further aggregation. First we briefly describe\nthe DAA protocol proposed in [20].\n3.1\nData Aware Anycast [20]\nData Aware Anycast is a structure-less protocol that aggregates\npackets by improving the Spatial and Temporal con-1\nIn rest of this paper, we use DAA or Data Aware Anycast to\nrefer to the combination of the two approaches.\nvergence. Spatial convergence and temporal convergence\nduring transmission are two necessary conditions for aggregation\n. Packets have to be transmitted to the same node\nat the same time to be aggregated. Structured approaches\nachieve these two conditions by letting nodes transmit packets\nto their parents in the aggregation tree and parents wait\nfor packets from all their children before transmitting the aggregated\npackets. Without explicit message exchanges in\nstructure-less aggregation, nodes do not know where they\nshould send packets to and how long they should wait for\naggregation. Therefore improving spatial or temporal convergence\nis critical for improving the chance of aggregation.\nSpatial Convergence is achieved by using anycast to forward\npackets to nodes that can achieve aggregation. Anycast\nis a routing scheme whereby packets are forwarded to\nthe best one, or any one, of a group of target destinations\nbased on some routing metrics. By exploiting the nature\nof wireless radio transmission in sensor networks where all\nnodes within the transmission range can receive the packet,\nnodes are able to tell if they can aggregate the transmitting\npacket, and the anycast mechanism allows the sender to forward\npackets to any one of them. Transmitting packets to\nnodes that can achieve aggregation reduces the number of\nremaining packets in the network, thereby reducing the total\nnumber of transmissions.\nTemporal Convergence is used to further improve the aggregation\n. Randomized Waiting is a simple technique for\nachieving temporal convergence, in which nodes wait for a\nrandom delay before transmitting. In mobile event triggered\nnetworks, nodes are unable to know which nodes are triggered\nand have packets to transmit in advance. Therefore\nnodes can not know if they should wait for their upstream\nnodes and how long they should wait for aggregation. A\nnaive approach of using a fixed delay depending on the distance\nto the sink may make the detection delay very high.\nFor example, as shown in Fig. 1, nodes closer to the sink\nmust wait longer for packets from possible upstream nodes\nif fixed waiting time is employed. When events are closer to\nthe sink, the longer delay chosen by nodes closer to the sink\nis unnecessary. 
Random delay is used to avoid long delay in\nlarge networks while increasing the chance of aggregation.\n\nsink\n......\n......\n= 0\n= 1\n= n-2\n= n-1\n= n\nnodes triggered by an event\nFigure 1.\nLonger delay is unnecessary but is inevitable using fixed\ndelay when the event is closer to the sink. Nodes closer to the sink have\nlonger delay (\n) because they have to wait for packets from possible\nupstream nodes.\nWhen a node detects an event and generates a packet for\nreporting, it picks a random delay between 0 and\nbefore\ntransmitting, where\nis a network parameter that specifies\nthe maximum delay. After delaying the packet, the node\n183\nbroadcasts an RTS packet containing an Aggregation ID.\nIn [20], the timestamp is used as the Aggregation ID, which\nmeans that packets generated at the same time can be aggregated\n. When a node receives an RTS packet, it checks if it\nhas packets with the same Aggregation ID. If it does, it has\nhigher priority for replying with a CTS than nodes that do\nnot have packets for aggregation. The priority is decided by\nthe delay of replying a CTS packet. Nodes with higher priority\nreply a CTS with shorter delay. If a node overhears any\ntraffic before transmitting its CTS packet, it cancels the CTS\ntransmission in order to avoid collision of multiple CTS responses\nat the sender. Therefore, nodes can send their packets\nfor aggregation as long as at least one of its neighbors has\na packet with the same Aggregation ID. More details and extensions\nof the DAA approach can be found in [20].\nHowever, DAA can not guarantee that all packets will be\naggregated into one packet. When more packets are transmitted\nfrom sources to the sink without aggregation, more\nenergy is wasted. This effect becomes more severe when the\nnetwork is very large and the sources are very far away from\nthe sink. Therefore, instead of forwarding packets directly to\nthe sink when DAA can not aggregate packets any more, we\npropose the use of Dynamic Forwarding for further packet\naggregation. We now describe the Dynamic Forwarding and\nthe construction of ToD.\n3.2\nDynamic Forwarding over ToD\n\nsink\nnodes triggered by event B\nnodes triggered by event A\n\nFigure 2.\nFixed tree structure for aggregation can have long distance\n(link-stretch) between adjacent nodes, as in the case of nodes triggered\nby event B. In this example we assume that nodes in the range of event\nB are within transmission range of each other.\nWe adopt the pre-constructed structure approach in the\nsecond phase to achieve further aggregation. Having a structure\nto direct all packets to a single node is inevitable if\nwe want to aggregate all packets into one. Constructing a\nstructure dynamically with explicit message exchanges incurs\nhigh overhead. Therefore we use an implicitly computed\npre-constructed structure that remains unchanged for\nrelatively long time periods (several hours or days). However\n, using a fixed structure has the long stretch problem as\ndescribed in Section 2. Take Fig. 2 as an example of pre-computed\ntree structure where gray nodes are the sources.\nThe fixed tree structure works well if the nodes that generate\npackets are triggered by event A because their packets can be\naggregated immediately on the tree. 
However, if the nodes\nthat generate packets are triggered by event B, their packets\ncan not be aggregated even if they are adjacent to each other.\nTherefore we design a dynamic forwarding mechanism over\nToD, to avoid the problem of long stretch.\n3.2.1\nToD in One Dimensional Networks\n\n\n......\n........................\n........................\n......\nnetwork\none row instance of the network\nsink\n\nFigure 3.\nWe illustrate the ToD construction from one row's point of\nview to simplify the discussion.\nFor illustrating the concept of ToD, we first describe the\nconstruction of ToD for a 1-D (a single row of nodes) network\n, as shown in Fig. 3. We assume that the nodes can communicate\nwith their adjacent nodes in the same row through\none hop.\nWe define a cell as a square with side length\nwhere is\ngreater than the maximum diameter of the area that an event\ncan span. The network is divided into cells. These cells are\ngrouped into clusters, called F-clusters (First-level clusters).\nThe size of the F-clusters must be large enough to cover the\ncells an event can span, which is two when we only consider\n1-D cells in the network. All nodes in F-clusters send their\npackets to their cluster-heads, called F-aggregators (First-level\naggregators). Note that nodes in the F-cluster can be\nmultiple hops away from the F-aggregator. The formation\nof the clusters and the election of the aggregators are discussed\nlater in Section 3.2.3. Each F-aggregator then creates\na shortest path to the sink. Therefore the structure is a shortest\npath tree where the root is the sink and the leaves are\nF-aggregators. We call this tree an F-Tree. Fig. 4(a) shows\nthe construction of the F-Tree.\nIn addition to the F-clusters, we create the second type\nof clusters, S-clusters (Second-level clusters) for these cells.\nThe size of an S-cluster must also be large enough to\ncover all cells spanned by an event, and it must interleave\nwith the F-clusters so it can cover adjacent cells in different\nF-clusters. Each S-cluster also has a cluster-head, S-aggregator\n, for aggregating packets. Each S-aggregator creates\na shortest path to the sink, and forms a second shortest\npath tree in the network. We call it S-Tree. The illustration of\nan S-Tree is shown in Fig. 4(b). For all sets of nearby cells\nthat can be triggered by an event, either they will be in the\nsame F-cluster, or they will be in the same S-cluster. This\nproperty is exploited by Dynamic Forwarding to avoid the\nlong stretch problem discussed earlier.\nAfter the S-Tree is constructed, the F-aggregators connect\nthemselves to the S-aggregators of S-clusters which its\nF-cluster overlaps with, as shown in Fig. 4(c). For example,\nin Fig. 4(c), the F-aggregator F4 connects to S-aggregators\nS3 and S4 because its F-cluster overlaps with S-cluster 3 and\n4. Thus, the combination of F-Tree and S-Tree creates a Di-184\nA B\nC D\nF1\nF2\nS2\nF4\nS4\nF6\nS6\nF8\nF3\nF5\nF7\nS1\nS3\nS5\nS7\nCells\nOther nodes in the\nnetwork\nF-Aggregators\nCells with packets\nF1\nF2 F4 F6\nF8\nF3 F5 F7\nF-Tree S-Tree\nOverlapping\nToD\n(a)\n(b)\n(c)\nA B\nC D\nF-clusters\nA\nB\nC\nD\nS-clusters\nS-Aggregators\nS2\nS4 S6\nS1\nS3\nS5\nS7\n\nFigure 4.\nThe construction of F-Tree, S-Tree, and ToD. (a) Leaf nodes are cells. Pairs of neighbor cells define F-clusters. Each F-cluster has an\nF-aggregator, and F-aggregators form the F-Tree. (b) Each pair of adjacent cells not in the same F-cluster form an S-cluster. 
Each S-cluster has an\nS-aggregator, and S-aggregators form the S-Tree. Nodes on the network boundary do not need to be in any S-cluster. (c) Each F-aggregator connects\nto two S-aggregators of S-clusters which its F-cluster overlaps with. This structure called the Tree on DAG or ToD. F-aggregator in ToD uses Dynamic\nForwarding to forward packets to the root, or through an S-aggregator in the S-Tree based on where the packets come from.\nrected Acyclic Graph, which we refer to as the ToD (Tree on\nDAG).\nNodes first use the Data Aware Anycast (DAA) approach\nto aggregate as many packets as possible. When no further\naggregation can be achieved by DAA, nodes forward their\npackets to the F-aggregator in its F-cluster. If an event only\ntriggers nodes within a single F-cluster, its packets can be\naggregated at the F-aggregator, and be forwarded to the sink\nusing the F-Tree. However, in case the event spans multiple\nF-clusters, the corresponding packets will be forwarded to\ndifferent F-aggregators. As we assumed that the event size\nis not larger than the size of a cell, an event on the boundary\nof F-clusters will only trigger nodes in cells on the boundary\nof the F-clusters. By the construction of S-clusters, adjacent\ncells on the boundary of F-clusters belong to the same S-cluster\n. Thus, F-aggregators can exploit the information collected\nfrom received packets to select the S-aggregator that\nis best suited for further aggregation. This information is obtained\nfrom the source of traffic that can be encoded in the\npackets. Often such information is readily available in the\npacket. Otherwise, 4 extra bits can be used to indicate which\ncell the packet comes from.\nConsider the example in Fig. 4(c). Since the maximum\nnumber of cells an event can span is two, either these two\ncells are in the same F-cluster, or they are in the same S-cluster\n. If they are in the same F-cluster, their packets can\nbe aggregated at the F-aggregator. For example, if the event\nspans A and B, F1 knows that no other F-cluster has packets\nfor aggregation, and it can forward the packets using\nthe F-Tree. If the event spans two cells that are in different\nF-clusters, the two F-aggregators in the two F-clusters\nwill receive packets only from one of their cells. The F-aggregators\nthen conjecture which F-cluster might also have\npackets based on which cells the packets come from. For example\n, if the event spans C and D, F4 will only receive packets\nfrom C. Therefore F4 can know either the event happens\nonly in C, or the event spans C and D. Consequently, F4 can\nforward packets to S4, the S-aggregator of its overlapped S-clusters\ncovering C. Also F5 will forward its packets to S4 if\npackets only come from D. Therefore these packets can be\naggregated at S4.\nNote that we do not specifically assign cells on the boundary\nof the network to any S-cluster. They do not need to be in\nany S-cluster if they are not adjacent to any other F-cluster,\nor they can be assigned to the same S-cluster as its adjacent\ncell.\nThe ToD for the one dimensional network has the\nfollowing property.\nProperty 1. For any two adjacent nodes in ToD in one dimensional\nnetwork, their packets will be aggregated either\nat a first level aggregator, or will be aggregated at a second\nlevel aggregator.\nProof. There are only three possibilities when an event triggers\nnodes to generate packets. 
If only nodes in one cell are\ntriggered and generate the packets, their packets can be aggregated\nat one F-aggregator since all nodes in a cell reside\n185\nin the same F-cluster, and all packets in an F-cluster will be\naggregated at the F-aggregator.\nIf an event triggers nodes in two cells, and these two cells\nare in the same F-cluster, the packets can be aggregated at\nthe F-aggregator as well.\nIf an event triggers nodes in two cells, but these two\ncells are in different F-clusters, they must reside in the same\nS-cluster because S-clusters and F-clusters are interleaved.\nMoreover, packets in one F-cluster will only originate from\nthe cell that is closer to the other F-cluster that also has packets\n. Therefore the F-aggregator can forward packets to the\nS-aggregator for aggregation accordingly, and packets will\nbe aggregated at the S-aggregator.\nSince the cell is not smaller than the maximum size of an\nevent, it is impossible for an event to trigger more than two\ncells, and this completes the proof.\n3.2.2\nToD in Two Dimensional Networks\nSection 3.2.1 only demonstrates the construction for one\nrow of nodes to illustrate the basic idea of dynamic forwarding\n, and it works because each cell is only adjacent to one (or\nnone, if the cell is on the boundary of the network) of the F-clusters\n. Therefore if an event spans two cells, the two cells\nare either in the same F-cluster or in the same S-cluster, and\nthe F-aggregator can conjecture whether to forward the packets\nto the S-aggregator, or to the sink directly. When we consider\nother cells and F-clusters in the adjacent row, a cell on\nthe boundary of an F-cluster might be adjacent to multiple F-clusters\n. If an event spans multiple cells, each F-aggregator\nmay have multiple choices of S-aggregators if the cells in\ntheir F-cluster are adjacent to multiple F-clusters. If these F-aggregators\nselect different S-aggregators, their packets will\nnot be aggregated. However, the ideas presented in 1D networks\ncan be extended for the 2D networks. But instead of\nguaranteeing that packets will be aggregated within two steps\nas in the 1D case (aggregating either at an F-aggregator or an\nS-aggregator), the ToD in 2D guarantees that the packets can\nbe aggregated within three steps.\nWe first define the cells and clusters in two dimensions.\nFor the ease of understanding, we use grid clustering to illustrate\nthe construction. As defined before, the size of a cell is\nnot less than the maximum size of an event, and an F-cluster\nmust cover all the cells that an event might span, which is\nfour cells in 2D grid-clustering. Therefore the entire network\nis divided into F-clusters, and each F-cluster contains\nfour cells. The S-clusters have to cover all adjacent cells in\ndifferent F-clusters. Each F-cluster and S-cluster also has\na cluster-head acting as the aggregator to aggregate packets\n. Fig. 5 shows a 5\n5 network with its F-clusters and\nS-clusters.\nSince the size of a cell (one side of the square cell) must\nbe greater or equal to the maximum size of an event (diameter\nof the event), an event can span only one, two, three, or\nfour cells as illustrated in Fig. 6. If the event only spans cells\nin the same F-cluster, the packets can be aggregated at the\nF-aggregator. 
Therefore we only consider scenarios where\nan event spans cells in multiple F-clusters.\n\n(a) F-clusters\n(c) S-cluters\nA B\nC\nD\n(b) Cells\nG H\nI\nE\nF\nC1\nA4 B3\nB1 C2\nA3\nA1 A2\nB2\nB4 C3 C4\nD3\nD1 D2\nD4 E3\nE1 E2\nE4 F3\nF1 F2\nF4\nG3\nG1 G2\nG4 H3\nH1 H2\nH4 I3\nI1 I2\nI4\nS1 S2\nS3 S4\nC1\nA4\nB3\nB1 C2\nA3\nA1\nA2\nB2\nB4 C3 C4\nD3\nD1\nD2\nD4\nE3\nE1 E2\nE4 F3\nF1 F2\nF4\nG3\nG1 G2\nG4 H3\nH1 H2\nH4 I3\nI1 I2\nI4\n2\n2\n2\n\n\nFigure 5.\nGrid-clustering for a two-dimension network. (a) The network\nis divided into 5\n5 F-clusters. (b) Each F-cluster contains four\ncells. For example the F-cluster A in (a) contains cell A1, A2, A3, and A4.\n(c) The S-clusters have to cover all adjacent cells in different F-clusters.\nEach S-cluster contains four cells from four different F-clusters.\n\n\nFigure 6.\nThe possible numbers of cells an event may span in 2\n2\ncells, which are one, two, three, and four from left to right. The four\ncells in each case are any instance of four cells in the network. They\nmay be in the same F-cluster or different F-clusters.\nFig. 7 shows four basic scenarios that an F-aggregator\nmay encounter when collecting all packets generated in its\nF-cluster. All other scenarios are only different combinations\nof these four scenarios. If packets originate from three\nor four cells in the same F-cluster, the F-aggregator knows\nthat no other nodes in other F-clusters have packets, and it\ncan forward the packets directly to the sink. If only one or\ntwo cells generate packets, it is possible that other F-clusters\nalso have packets. We assume that the region spanned by an\nevent is contiguous. So simultaneous occurrence of scenarios\nof (a) and (c), or (b) and (d), is impossible in the F-cluster.\nHowever, such scenarios are possible in presence of losses in\na real environment where packets from third or fourth cluster\nare lost. In such cases the F-aggregator can just forward the\npackets directly to the sink because no other F-cluster will\nhave packets from the same event.\n\nFigure 7.\nAll possible scenarios in an F-aggregator's point of view.\nEach case shows 3\n3 F-clusters, and the aggregator of the center F-cluster\nis making the decision. The dark grayed squares are cells that\ngenerate packets, and the light grayed squares represent the corresponding\nS-cluster of the dark grayed cells.\nWhen the F-aggregator collects all packets within its cluster\n, it knows which cells the packets come from and forwards\nthe packets to best suited S-aggregator for further aggregation\n. For example, if the packets only come from one cell\nas in case (a) in Fig. 7, the F-aggregator can forward the\npacket to the S-aggregator of the S-cluster that covers that\n186\ncell. However, if packets come from two cells in an F-cluster,\nthe two cells must be in different S-clusters. For example,\nas in Fig. 8 where the F-aggregator of F-cluster X receives\npackets from two cells, is the combination of case (a) and (b)\nin Fig. 7. It is possible that the F-aggregator of F-cluster Y\nmay receive packets from cells as in Fig. 7 (c), (d), or both.\nSince the F-aggregator of F-cluster X does not know which\ncase the F-aggregator of F-cluster Y encounters, it does not\nknow which S-aggregator to forward packets to. 
To guarantee\nthe aggregation, the F-aggregator of F-cluster X forwards\nthe packet through two S-aggregators that covers cell C1 and\nC2, therefore packets can meet at least at one S-aggregator.\nIf both F-aggregators receive packets from two cells in its\ncluster, to guarantee that the packets can meet at least at\none S-aggregator, these two F-aggregators must select the\nS-aggregator deterministically. The strategy is to select the\nS-aggregator that is closer to the sink. If the packets meet\nat the first S-aggregator, it does not need to forward packets\nto the second S-aggregator. The S-aggregator only forwards\npackets to the second S-aggregator if the packets it received\nonly come from two cells in one F-cluster. We will present a\nsimplified construction later (in Section 3.2.3) for the selection\nof S-aggregators.\n\n\n\n\nF-cluster X\nF-cluster Y\nS-cluster I\nS-cluster II\nC1 C2\nC3\n\n\nFigure 8.\nThe F-aggregators have two choices for S-aggregators if\nthey receive packets from two cells.\nTo guarantee that the packets can meet at least at one S-aggregator\n, the second S-aggregator must wait longer than\nthe first S-aggregator. Therefore, if the S-aggregator receives\npackets from only one cell, it waits longer to wait for possible\npackets forwarded by the other S-aggregator because it could\nbe the second S-aggregator of the other F-aggregator. Fig. 9\nshows an example of one F-aggregator sending packets to the\nfirst S-aggregator and then the second S-aggregator, while\nthe other F-aggregator sends packets directly to the second\nS-aggregator. As long as the second S-aggregator waits suf-ficiently\nlonger than the first S-aggregator the packets can be\naggregated at the second S-aggregator.\n\nF-aggregators\n1\nst\nS-aggregators\n2\nnd\nS-aggregators\n\n\n\n\nFigure 9.\nDepending on how many cells generate packets in its F-cluster\n, one F-aggregator sends packets to two S-aggregators while the\nother F-aggregator sends packets to only one S-aggregator. We assume\nthat the sink is located at bottom-left of the network.\nThe ToD for the two dimension networks has the following\nproperty.\nProperty 2. For any two adjacent nodes in ToD, their packets\nwill be aggregated at the F-aggregator, at the 1\nst\nS-aggregator\n, or at the 2\nnd\nS-aggregator.\nProof. First we define the F-aggregator X as the aggregator\nof F-cluster X and S-aggregator I as the aggregator of S-cluster\nI, and so forth.\nFor packets generated only in one F-cluster, their packets\ncan be aggregated at the F-aggregator since all packets in the\nF-cluster will be sent to the F-aggregator.\nIf an event triggers nodes in different F-clusters, there are\nonly three cases. First, only one cell in each F-cluster generates\npackets. In this case, all cells having packets will be\nin the same S-cluster since the adjacent cells in different F-clusters\nare all in the same S-cluster. Therefore their packets\ncan be aggregated at the S-aggregator.\nSecond, the event spans three cells, C1, C2, and C3, and\ntwo of them are in one F-cluster and one of them is in the\nother F-cluster. Without loss of generality, we assume that\nC1 and C2 are in the same F-cluster, F-cluster X , and C3\nis in the other F-cluster, F-cluster Y . Moreover C3 must be\nadjacent to either C1 or C2, and let us assume that it is C2.\nFrom the ToD construction we know that C2 and C3 will\nbe in the same S-cluster, S-cluster II, and C1 will be in another\nS-cluster, S-cluster I. Fig. 8 illustrates one instance\nof this case. 
First the F-aggregator X will aggregate packets\nfrom C1 and C2 because they are in the same F-cluster,\nand forward the aggregated packets through S-aggregator I\nto S-aggregator II, or the other way around, because C1\nis in S-cluster I and C2 is in S-cluster II. F-aggregator Y\nwill aggregate packets from C3 and forward packets to S-aggregator\nII because C3 is in S-cluster II. Because packets\nof F-aggregator Y only come from C3, they will have\nlonger delay in S-aggregator II in order to wait for packets\nbeing forwarded through the other S-aggregator. In the mean\ntime, if F-aggregator X forwards packets to S-aggregator II\nfirst, the packets can be aggregated at S-aggregator II. If\nF-aggregator X forwards packets to S-aggregator I first, S-aggregator\nI will forward packets to S-aggregator II with\nshorter delay because the packets come from two cells in\none F-cluster, therefore their packets can also be aggregated\nat S-aggregator II.\nIn the third case, the event spans four cells. Two of them\nwill be in one F-cluster and the other two will be in the other\nF-cluster. Without loss of generality, we can assume that\ncells C1 and C2 are in F-cluster X and cells C3 and C4 are\nin F-cluster Y , and C1 and C3 are adjacent, C2 and C4 are\nadjacent. From the ToD construction, C1 and C3 will be\nin one S-cluster, S-cluster I, and C2 and C4 will be in the\nother S-cluster, S-cluster II. Because from S-aggregator I\nand II, F-aggregator X and Y choose one that is closer to the\nsink as the first S-aggregator, they will choose the same S-aggregator\n. Therefore their packets can be aggregated at the\nfirst S-aggregator, and this completes the proof.\nThough in this section we assume that the size of an event\nis smaller than the size of the cell, our approach can still work\n187\ncorrectly and perform more efficiently than DAA even if the\nsize of the event is not known in advance. This is because\nthe nodes will use Dynamic Forwarding over ToD only at\nsecond phase where the aggregation by DAA is no longer\nachievable. Therefore at worst our approach just falls back to\nDAA. Section 5.1 shows that in experiments, ToD improves\nthe performance of DAA by 27% even if the size of the event\nis greater than the size of a cell.\n3.2.3\nClustering and Aggregator Selection\nIn this paper we use grid-clustering to construct the cells\nand clusters. Although other clustering methods, such as\nclustering based on hexagonal or triangular tessellation, can\nalso be used, we do not explore them further in this paper.\nIn principle any clustering would work as long as they satisfy\nthe following conditions. First, the size of the cell must\nbe greater than or equal to the maximum size of an event.\nSecond, the F-cluster and S-cluster must cover the cells that\nan event may span, and the S-cluster must cover the adjacent\ncells in different F-clusters.\nAs opposed to defining an arbitrary clustering, using grid-clustering\nhas two advantages. First, the size of the grid can\nbe easily determined by configuring the grid size as a network\nparameter. Second, as long as the geographic location\nis known to the node, the cell, F-cluster and S-cluster it belongs\nto can be determined immediately without any communication\n. Geographic information is essential in sensor\nnetworks, therefore we assume that sensor nodes know their\nphysical location by configuration at deployment, a GPS device\n, or localization protocols [39, 40]. 
As a consequence,\nall the cells, F-clusters, and S-clusters can be implicitly constructed\n.\nAfter the grids are constructed, nodes in an F-cluster and\nS-cluster have to select an aggregator for their cluster. Because\nthe node that acts as the aggregator consumes more\nenergy than other nodes, nodes should play the role of aggregator\nin turn in order to evenly distribute the energy consumption\namong all nodes. Therefore the aggregator selection\nprocess must be performed periodically. However\nthe frequency of updating the aggregator can be very low,\nfrom once in several hours to once in several days, depending\non the capacity of the battery on the nodes. Nodes can\nelect themselves as the cluster-head with probability based\non metrics such as the residual energy, and advertise to all\nnodes in its cluster. In case two nodes select themselves as\nthe cluster-head, the node-id can be used to break the tie.\nThe other approach is that the nodes in a cluster use a\nhash function to hash the current time to a node within that\ncluster, and use that node as the aggregator. Nodes have to\nknow the address of all nodes in its F-cluster and sort them\nby their node id. A hash function hashes the current time\nto a number k from 1 to n where n is the number of nodes\nin its cluster, and nodes use the k\nth\nnode as the aggregator.\nBecause the frequency of changing the aggregator could be\nlow, the time used could be in hours or days, therefore the\ntime only needs to be coarsely synchronized, and the cluster-head\nelection overhead can be avoided.\nHowever, the Dynamic Forwarding approach requires that\neach F-aggregator knows the location of S-aggregators of S-clusters\nthat its F-cluster overlaps with. Therefore each time\nthe S-aggregator changes, it has to notify the F-aggregators.\nTo simplify the cluster-head selection process and avoid the\noverhead of propagating the update information, we delegate\nthe role of S-aggregators to F-aggregators. Instead of selecting\na node as the S-aggregator and changing it periodically\nfor an S-cluster, we choose an F-cluster, called Aggregating\nCluster, for each S-cluster, and use the F-aggregator of the\nAggregating Cluster as its S-aggregator. The Aggregating\nCluster of an S-cluster is the F-cluster which is closest to the\nsink among all F-clusters that the S-cluster overlaps with,\nas shown in Fig. 10(a), assuming that the sink is located\non the bottom-left corner. Therefore as the F-aggregator\nchanges, the corresponding S-aggregator changes as well.\nWhen an F-aggregator forwards a packet to an S-aggregator,\nit forwards the packet toward the Aggregating Cluster of\nthat S-aggregator. When the packet reaches the Aggregating\nCluster, nodes in that F-cluster know the location of its\nF-aggregator and can forward the packet to it. Therefore no\naggregator update has to be propagated to neighboring clusters\n.\n\nF-cluster S-cluster\nThe common aggregator for both\nthe shaded F-cluster and S-cluster\n(a)\n(b)\nF-aggregator\n\n\n\nF-aggregator and 1\nst\nS-aggregator\n2\nnd\nS-aggregator\n\nFigure 10.\n(a) The S-cluster selects the F-cluster closest to the sink\namong its overlapped F-clusters, assuming that the sink is located at\nthe bottom-left corner of the network. (b) The white F-aggregator selects\nthe F-cluster containing the gray F-aggregator as the aggregating\ncluster.\nNow the role of S-aggregators is passed on to the F-aggregators\n, and the F-cluster selected by an S-aggregator\nis the one closer to the sink. 
When an F-aggregator wants to\nforward packets to both S-aggregators, it selects the F-cluster\nthat is closer to itself as the aggregating cluster of the first\nS-aggregator (could be itself) to reduce the number of transmissions\nbetween aggregators, as shown in Fig. 10(b). This\nselection does not affect the property that packets will eventually\nbe aggregated at one aggregator because the S-clusters\nthat cover the cells in two F-clusters are the same, therefore\nthe aggregating cluster selected by two F-aggregators will be\nthe same.\nThe benefits of using this approach are five-fold. First,\nno leader election is required for S-clusters, which eliminates\nthe leader election overhead. Second, nodes only\nneed to know the F-aggregator of its F-cluster, which make\nthis approach very scalable. Third, when the F-aggregator\nchanges, the S-aggregator changes as well, but the change\ndoes not need to be propagated to other F-clusters or S-clusters\n. Fourth, if nodes choose the aggregator by hashing\ncurrent time to get a node id of the aggregator in its cluster,\n188\nonly nodes within the same F-cluster need to be synchronized\nwith each other. And last, since the Aggregating Clusters\nof S-clusters are statically computed, there is no packet\noverhead for computing the Aggregating Clusters.\nPerformance Analysis\nIn this section we show that the maximum distance between\nany two adjacent nodes in ToD only depends on the\nsize of the cells, and is independent of the size of the network\n. We ignore the cost from the aggregator to the sink\nsince for perfect aggregation, only one packet will be forwarded\nto the sink from the aggregator, therefore the cost is\ncomparatively small. Compared to the lower bound O\n(n)\n[36] of the grid network for a fixed tree, ToD can achieve\nconstant factor even in the worst case.\n\nu\nv\ns\nf\nu\nf\nv\n\n\n\nFigure 11.\nThe worst case scenario for ToD.\nThe worst case in ToD is illustrated in Fig. 11 where only\ntwo adjacent nodes, u and v, in the corner of two different\nF-clusters generate packets, and their F-aggregators, f\nu\nand\nf\nv\n, are located at the opposite corner. We assume a dense\ndeployment of sensor nodes, therefore the distance between\ntwo nodes can be transferred to the cost of transmitting a\npacket between these nodes. Fig. 11 is the worst case since if\nmore nodes are generating packets in one cluster, it will only\namortize the cost of sending packets from the F-aggregator\nto the S-aggregator, and more nodes in multiple F-clusters\ngenerating packets will only lower the average distance.\nWe assume that the length of one side of the cell is\n, and\ntwo nodes are adjacent if their distance is less than a unit of\ndistance. Therefore in Fig. 11 the distance that packets from\nu and v have to be forwarded before they are aggregated at\ns is the sum of distances between u to f\nu\nto s and v to f\nv\nto s, and is\n(22 + 42) + (22 + 4) = 82 + 4.\nTherefore in the optimal approach, only one transmission is\nrequired because u and v are adjacent. In ToD, 8\n2 + 4\nnumber of transmission is required for the worst case.\nHowever, since we use DAA as the aggregation technique,\npackets from adjacent nodes will be aggregated immediately\n. Therefore for the worst cast to happen, the distance\nbetween u and v must be at least 2 units, and our protocol\nhas 4\n2+2 7.66 times number of transmissions than\noptimal. The upper bound is only dependent on the size of\na cell, and the size of the cell is dependent on the size of an\nevent. 
This value is independent of the size of the network\nand therefore is very suitable for large-scale networks.\nOn average, the number of transmissions will be much\nless than 4\n2 + 2 because first, typically there will be\nmany nodes generating packets. Second, the distance between\na node and its F-aggregator is not always 2\n2,\nand the distances between the F-aggregators and the S-aggregator\nare shorter, too. Third, the DAA approach can efficiently\naggregate packets from adjacent nodes thereby further\nreducing the number of transmissions. Therefore we\nexpect the average distance for nodes generating packets to\nbe much less than the worst case.\nPerformance Evaluation\nIn this section we use experiments and simulations to\nevaluate the performance of our semi-structured approach\nand compare it with other protocols.\n5.1\nTestbed Evaluation\nWe conduct experiments with 105 Mica2-based nodes on\na sensor testbed. The testbed consists of 105 Mica2-based\nmotes and each mote is hooked onto a Stargate. The Stargate\nis a 32-bit hardware device from CrossBow [41] running\nLinux, which has an Ethernet interface and a serial port\nfor connecting a mote. The Stargates are connected to the\nserver using wired Ethernet. Therefore we can program these\nmotes and send messages and signals to them through Stargates\nvia Ethernet connection. The 105 nodes form a 7\n15\ngrid network with 3 feet spacing. The radio signal using default\ntransmission power covers a lot of nodes in the testbed.\nIn our experiments we do not change the transmission power\nbut limit nodes only to receive packets from nodes within\ntwo grid neighbors away, i.e. each node has maximum 12\nneighbors.\nWe implement an Anycast MAC protocol on top of the\nMica2 MAC layer. The Anycast MAC layer has its own\nbackoff and retransmission mechanisms and we disable the\nACK and backoff of the Mica2 MAC module. The Anycast\nMAC implements the RTS-CTS-DATA-ACK for anycast\n. An event is emulated by broadcasting a message on\nthe testbed to the Stargates, and the Stargates send the message\nto the Mica2 nodes through serial port. The message\ncontains a unique ID distinguishing packets generated at different\ntime.\nWhen a node is triggered by an event, an event report is\ngenerated. If the node has to delay its transmission, it stores\nthe packet in a report queue. Both the application layer and\nAnycast MAC layer can access the queue, therefore they can\ncheck if the node has packets for aggregation, or aggregate\nthe received packets to packets in the queue.\nFirst we evaluate the following protocols on the testbed\nand the codes are available on-line\n2\n:\nDynamic Forwarding over ToD (ToD). The semi-structured\napproach we proposed in this paper. DAA\nis used to aggregate packets in each F-cluster, and aggregated\npackets are forwarded to the sink on ToD.\nData Aware Anycast (DAA). The structure-less approach\nproposed in [20].\nShortest Path Tree (SPT). Nodes send packets to the\nsink through the shortest path tree immediately after\nsensing an event. Aggregation is opportunistic and happens\nonly if two packets are at the same node at the\nsame time. The shortest path tree is constructed immediately\nafter the network is deployed. 
A message is\n2\nhttp://www.cse.ohio-state.edu/\nfank/research/tod.tar.gz\n189\nbroadcast from the sink and flooded into the network to\ncreate a shortest path tree from all nodes to the sink.\nShortest Path Tree with Fixed Delay (SPT-D) Same\nas the SPT approach, but nodes delay their transmission\naccording to their height in the tree to wait for packets\nfrom their children.\nDue to the scale of the testbed, in ToD we only divide the\nnetwork into two F-clusters, which forces the smallest cell to\nhave only 9 sensor nodes. However we do not limit the size\nof an event to be smaller than the cell size. The event size is\nlarger than the cell size in all following experiments.\nWe use the normalized number of transmissions as the\nmetric to compare the performance of these protocols. The\nnormalized number of transmissions is the average number\nof transmissions performed in the entire network to deliver\none unit of useful information from sources to the sink. It\ncan be converted to the normalized energy consumption if\nwe know the sending and receiving cost of one transmission,\ntherefore the energy spent on data collection for one packet\ncan be derived. We do not consider energy consumption on\nidle listening here since all nodes are fully active for all protocols\nin the experiments and simulations, and the idle energy\nconsumption would be similar for all protocols. To reduce\nthe energy consumption on idle listening, various duty\ncycling protocols have been proposed. However, due to the\npage limitation, we are unable to describe how to integrate\nthose works.\nFig. 12 shows the normalized number of transmissions\nfor different event sizes. We fixed the location of the event\nand vary its diameter from 12 ft to 36 ft where nodes within\ntwo grid-hops to six grid-hops of the event will be triggered\nrespectively and send packets to the sink located at one corner\nof the network. We use 6 seconds as maximum delay\nfor all protocols except SPT. For event size less than 12 ft,\nthere are too little nodes been triggered (less than five), and\nall triggered nodes are within transmission range. Data aggregation\nis not so interesting in such scenario therefore we\ndo not evaluate it. Actually DAA can perform best since all\npackets can be aggregated because all triggered nodes are\nwithin transmission range of each other.\nAll protocols have better performance when the size of\nthe event increases because packets have more chances of being\naggregated. ToD performs best among all protocols in all\nscenarios. This shows that DAA can efficiently achieve early\naggregation and the Dynamic Forwarding over ToD can effectively\nreduce the cost of directly forwarding unaggregated\npackets to the sink in DAA. In SPT-D, when the event size\nis smaller, the long stretch effect is more significant than in\nlarger event scenario. When event size is large, for example\n, two-third of nodes in the network are triggered when the\ndiameter of the event is 36 feet, most of the packets can be\naggregated to their parent with one transmission. This indicates\nthat in applications where most nodes are transmitting,\nthe fixed structure such as SPT-D is better, but when only\na small subset of nodes are transmitting, their performance\ndegrades because of the long stretch problem.\nWe notice that the variance of some results in SPT and\nSPT-D is very high. 
For example, when the event size is 12\nfeet in diameter, the maximum normalized number of trans-0\n0.5\n1\n1.5\n2\n2.5\n3\n3.5\n4\n4.5\n36\n30\n24\n18\n12\nNormalized Number of Transmissions\nEvent Size (ft)\nToD\nDAA\nSPT\nSPT-D\nFigure 12.\nThe normalized number of transmissions for different\nevent sizes from experiments on 105 sensors.\nmissions in SPT-D is 3\n.41, and the minimum value is 2.41.\nBy tracing into the detail experiment logs we found that the\nhigh variance is because of the different shortest path trees.\nThe tree is re-constructed for each experiment, and therefore\nmay change from experiment to experiment. We found that\nSPT-D always gets better performance in one tree where all\nsources are under the same subtree, and performs badly in\nthe other tree where sources are located under two or three\ndifferent subtrees. This further supports our claims that the\nlong stretch problem in fixed structured approaches affects\ntheir performance significantly.\nThe second experiment evaluates the performance of\nthese protocols for different values of maximum delay. We\nvary the delay from 0 to 8 seconds, and all nodes in the\nnetwork generate one packet every 10 seconds. Fig. 13\nshows the results. As we described, the performance of\nthe structure-based approaches heavily depends on the delay\n. The SPT-D performs worse than ToD when the maximum\ndelay is less than five seconds, and the performance\nincreases as the delay increases. On the contrary, the performance\nof ToD and DAA does not change for different delays,\nwhich is different from results observed in [20]. We believe\nthat this is because with the default transmission power, a\nlarge number of nodes are in interference range when nodes\ntransmit. Therefore even if nodes do not delay their transmissions\n, only one node can transmit at any given time. Other\nnodes will be forced to delay, which has the same effect as\nthe Randomized Waiting.\n5.2\nLarge Scale Simulation\nTo evaluate and compare the performance and scalability\nof ToD with other approaches requires a large sensor\nnetwork, which is currently unavailable in real experiments.\nTherefore we resort to simulations. In this section we use the\nns2 network simulator to evaluate these protocols. Besides\nToD, DAA, and SPT, we evaluate OPT, Optimal Aggregation\nTree, to replace the SPT-D protocol.\nIn OPT, nodes forward their packets on an aggregation\ntree rooted at the center of the event. Nodes know where to\nforward packets to and how long to wait. The tree is constructed\nin advance and changes when the event moves assuming\nthe location and mobility of the event are known.\n190\n1\n1.2\n1.4\n1.6\n1.8\n2\n2.2\n0\n1\n2\n3\n4\n5\n6\n7\n8\nNormalized Number of Transmissions\nMaximum Delay (s)\nToD\nDAA\nSPT\nSPT-D\nFigure 13.\nThe normalized number of transmissions for different\nmaximum delays from experiments on 105 sensors.\nIdeally only n\n- 1 transmissions are required for n sources.\nThis is the lower bound for any structure, therefore we use it\nas the optimal case. This approach is similar to the aggregation\ntree proposed in [4] but without its tree construction and\nmigration overhead. We do not evaluate SPT-D in simulation\nbecause SPT-D is not practical in the large scale network. In\nthe largest simulation scenario, the network is a 58-hop network\n. According to the simulation in smaller network, SPT-D\ngets best performance when the delay of each hop is about\n0\n.64 seconds. 
This makes nodes closer to the sink have about\n36 seconds delay in SPT-D, which is not advisable.\nWe perform simulations of these protocols on a 2000m\n\n1200m grid network with 35m node separation, therefore\nthere are a total of 1938 nodes in the network. The data\nrate of the radio is 38\n.4Kbps and the transmission range of\nthe nodes is slightly higher than 50m. An event moves in the\nnetwork using the random way-point mobility model at the\nspeed of 10m\n/s for 400 seconds. The event size is 400m in\ndiameter. The nodes triggered by an event will send packets\nevery five seconds to the sink located at\n(0,0). The aggregation\nfunction evaluated here is perfect aggregation, i.e. all\npackets can be aggregated into one packet without increasing\nthe packet size.\n5.3\nEvent Size\nWe first evaluate the performance for these protocols on\ndifferent number of nodes generating the packets. This simulation\nreflects the performance of each protocol for different\nevent sizes. We study the performance for 4 mobility scenarios\nand show the average, maximum, and minimum values\nof the results.\nFig. 14(a) shows the result of normalized number of\ntransmissions. ToD improves the performance of DAA and\nSPT by 30% and 85%, and is 25% higher than OPT. However\nOPT has the best performance by using the aggregation\ntree that keeps changing when event moves but its overhead\nis not considered in the simulation. SPT has very poor performance\nsince its aggregation is opportunistic. Except the\nSPT, the performance of all other protocols is quite steady.\nThis shows that they are quite scalable in terms of the event\nsize.\nFig. 14(b) and (c) show the total number of transmissions\nand total units of useful information received by the sink.\nAs observed in [20], DAA and ToD have higher number of\nreceived packets than OPT due to the ability of structure-less\naggregation to aggregate packets early and scatter them\naway from each other to reduce contention. ToD performs\nbetter than DAA in terms of the normalized number of transmissions\nbecause of its ability to aggregate packets at nodes\ncloser to the source, and thus it reduces the cost of forwarding\npackets from sources to the sink.\n5.4\nScalability\nIn this set of simulations we evaluate the scalability of our\nprotocol since our goal is to design a scalable data aggregation\nprotocol. If a protocol is not scalable, its performance\nwill degrade as the size of the network increases.\nTo evaluate the scalability of a protocol, we limit an event\nto move only in a bounded region at a certain distance from\nthe sink to simulate the effect of different network sizes. We\nlimit an event to move within a 400m\n1200m rectangle,\nand change the distance of the rectangle to the sink from\n200m to 1400m, as shown in Fig. 15. In order to be fair\nto all scenarios, we limit the event not to move closer than\n200m to the network boundary such that the number of nodes\ntriggered by the event does not change drastically.\n\n2000m\n1200m\n200m\n400m\n200m\n\nFigure 15.\nThe simulation scenario for scalability. The event is limited\nto move only within a small gray rectangle in each simulation.\nFig. 16 shows the results of the scalability simulation.\nThe performance of ToD and OPT remains steady, and ToD\nis 22% higher than OPT. This shows that ToD is quite scalable\nas its performance does not degrade as the size of the\nnetwork increases. The performance of both DAA and SPT\ndegrades as the size of the network increases. 
The normalized\nnumber of transmissions for DAA and SPT doubled\nwhen the event moves from the closest rectangle (to the sink)\nto the farthest rectangle.\nFig. 16(c) shows the number of packets received at the\nsink per event. If all packets can be aggregated near the\nevent and forwarded to the sink, the sink will receive only\none packet. Conversely, more packets received at the sink\nshows that fewer aggregations happened in the network. The\ncost of forwarding more packets to the sink increases rapidly\nas the size of the network increases. We can see that in both\nDAA and SPT the sink receives many packets. Though the\nnumber of packets received at the sink remains quite steady,\nthe total number of transmissions increases linearly as the\ndistance from the sources to the sink increases.\nIdeally the number of received packets at sink is 1, if all\npackets can be aggregated at the aggregator. However the\nnumber of received packets at sink is higher than 1 in ToD\n191\n0\n5\n10\n15\n20\n25\n30\n35\n500\n400\n300\n200\nNormalized Number of Transmissions\nEvent Size (m)\nToD\nDAA\nSPT\nOPT\n0\n10000\n20000\n30000\n40000\n50000\n60000\n70000\n80000\n90000\n100000\n500\n400\n300\n200\nNumber of Total Transmissions\nEvent Size (m)\nToD\nDAA\nSPT\nOPT\n0\n2000\n4000\n6000\n8000\n10000\n12000\n14000\n16000\n500\n400\n300\n200\nUnit of Received Information\nEvent Size (m)\nToD\nDAA\nSPT\nOPT\n(a) Normalized number of transmissions\n(b) Number of transmissions\n(c) Unit of received information\nFigure 14.\nThe simulation results for different event sizes.\nand OPT. This is because the delay in CSMA-based MAC\nprotocol can not be accurately predicted therefore the aggregator\nmight send the packet to the sink before all packets\nare forwarded to it. Though the cost of forwarding the unaggregated\npackets from aggregator to the sink in ToD and\nOPT also increases when the size of the network increases,\nthe increase is comparably smaller than DAA and SPT because\nfew packets are forwarded to the sink without aggregation\n.\nWe observe that the number of received packets at the sink\nin ToD is higher when the event is closer to the sink. In our\nsimulation, nodes in the same F-cluster as the sink will always\nuse sink as the F-aggregator because we assume that\nthe sink is wire powered therefore there is no need to delegate\nthe role of aggregator to other nodes in order to evenly\ndistribute the energy consumption.\n5.5\nCell Size\nThe above simulations use the maximum size of an event\nas the cell size. As we described in Section 3.2.2, this ensures\nthat the Dynamic Forwarding can aggregate all packets at\nan S-aggregator, and the cost of forwarding the aggregated\npackets to the sink is minimized. However, large cell size\nincreases the cost of aggregating packets to the aggregator as\nwe use DAA as the aggregation technique in an F-cluster. In\nthis section we evaluate the impact of the size of a cell on the\nperformance of ToD.\nWe vary the cell size from 50m\n50m to 800m 800m\nand run simulations for three different event sizes, which are\n200m, 400m, and 600m in diameter. The results are collected\nfrom five different event mobility patterns and shown in Fig.\n17.\nWhen the size of cell is larger than the event size, the performance\nis worse because the cost of aggregating packets\nto F-aggregator increases, but the cost of forwarding packets\nfrom S-aggregator does not change. 
When the size of cell\nis too small, the cost of forwarding packets to sink increases\nbecause packets will be aggregated at different F-aggregators\nand more packets will be forwarded to the sink without further\naggregation. In general, when the size of the F-cluster\nis small enough to only contain one node, or when the size\nof the F-cluster is large enough to include all nodes in the\nnetwork, ToD just downgrades to DAA.\nToD has the best performance when the cell size is\n100m\n100m (F-cluster size is 200m200m) when the event\nsize is 200m in diameter. When the diameter of an event\nis 400m and 600m, using 200m\n200m as the cell size has\nthe best performance (F-cluster size is 400m\n400m). This\nshows that the ToD performance can be further optimized by\nselecting the appropriate cell size. To explore the relation\nbetween the event and cell size for optimization will be part\nof our future work.\nConclusion\nIn this paper we propose a semi-structured approach that\nlocally uses a structure-less technique followed by Dynamic\nForwarding on an implicitly constructed packet forwarding\nstructure, ToD, to support network scalability. ToD avoids\nthe long stretch problem in fixed structured approaches and\neliminates the overhead of construction and maintenance of\ndynamic structures. We evaluate its performance using real\nexperiments on a testbed of 105 sensor nodes and simulations\non 2000 node networks. Based on our studies we find\nthat ToD is highly scalable and it performs close to the optimal\nstructured approach. Therefore, it is very suitable for\nconserving energy and extending the lifetime of large scale\nsensor networks.\nReferences\n[1] W. Heinzelman, A. Chandrakasan, and\nH. Balakrishnan, \"Energy-Efficient Communication\nProtocol for Wireless Microsensor Networks,\" in\nProceedings of the 33rd Annual Hawaii International\nConference on System Sciences, vol. 2, January 2000.\n[2] W. Heinzelman, A. Chandrakasan, and\nH. Balakrishnan, \"An Application-Specific Protocol\nArchitecture for Wireless Microsensor Networks,\" in\nIEEE Transactions on Wireless Communications,\nvol. 1, October 2002, pp. 660670.\n[3] C. Intanagonwiwat, D. Estrin, and R. Goviindan,\n\"Impact of Network Density on Data Aggregation in\nWireless Sensor Networks,\" in Technical Report\n01-750, University of Southern California, November\n2001.\n[4] W. Zhang and G. Cao, \"Optimizing Tree\nReconfiguration for Mobile Target Tracking in Sensor\nNetworks,\" in Proceedings of INFOCOM 2004, vol. 4,\nMarch 2004, pp. 24342445.\n192\n0\n5\n10\n15\n20\n25\n30\n35\n40\n400\n600\n800\n1000\n1200\n1400\n1600\nNormalized Number of Transmissions\nDistance to the Sink (m)\nToD\nDAA\nSPT\nOPT\n0\n2\n4\n6\n8\n10\n400\n600\n800\n1000\n1200\n1400\n1600\nNormalized Number of Transmissions\nDistance to the Sink (m)\nToD\nDAA\nOPT\n0\n5\n10\n15\n20\n400\n600\n800\n1000\n1200\n1400\n1600\nNumber of Received Packets\nDistance to the Sink (m)\nToD\nDAA\nSPT\nOPT\n(a) Normalized number of transmission\n(b) Zoom in of Fig. 
16(a)\n(c) Number of received packets\nFigure 16.\nThe simulation results for difference distances from the event to the sink.\n3\n3.5\n4\n4.5\n5\n5.5\n6\n6.5\n7\n7.5\n8\n100\n200\n300\n400\n500\n600\n700\n800\nNormalized Number of Transmissions\nCell Size (m)\n200m\n400m\n600m\n0\n10000\n20000\n30000\n40000\n50000\n60000\n70000\n80000\n90000\n100\n200\n300\n400\n500\n600\n700\n800\nNumber of Total Transmissions\nCell Size (m)\n200m\n400m\n600m\n0\n100\n200\n300\n400\n500\n600\n700\n800\n900\n1000\n1100\n100\n200\n300\n400\n500\n600\n700\n800\nNumber of Received Packets\nCell Size (m)\n200m\n400m\n600m\n(a) Normalized number of transmissions\n(b) Number of transmissions\n(c) Number of received packets\nFigure 17.\nThe simulation results for difference cell sizes.\n[5] W. Zhang and G. Cao, \"DCTC: Dynamic Convoy\nTree-based Collaboration for Target Tracking in\nSensor Networks,\" in IEEE Transactions on Wireless\nCommunications, vol. 3, September 2004, pp.\n16891701.\n[6] M. Ding, X. Cheng, and G. Xue, \"Aggregation Tree\nConstruction in Sensor Networks,\" in Proceedings of\nthe 58th IEEE Vehicular Technology Conference,\nvol. 4, October 2003, pp. 21682172.\n[7] H. Luo, J. Luo, and Y. Liu, \"Energy Efficient Routing\nwith Adaptive Data Fusion in Sensor Networks,\" in\nProceedings of the Third ACM/SIGMOBILEe\nWorkshop on Foundations of Mobile Computing,\nAugust 2005.\n[8] A. Goel and D. Estrin, \"Simultaneous Optimization\nfor Concave Costs: Single Sink Aggregation or Single\nSource Buy-at-Bulk,\" in Proceedings of the 14th\nAnnual ACM-SIAM Symposium on Discrete\nAlgorithms, 2003.\n[9] \"Networked Infomechanical Systems,\"\nhttp://www.cens.ucla.edu .\n[10] \"Center for Embedded Networked Sensing at UCLA,\"\nhttp://www.cens.ucla.edu .\n[11] J. Polastre, \"Design and Implementation of Wireless\nSensor Networks for Habitat Mon itoring,\" Master's\nThesis, University of California at Berkeley, Spring\n2003.\n[12] A. Mainwaring, R. Szewczyk, J. Anderson, and\nJ. Polastre, \"Habitat Monitoring on Great Duck\nIsland,\" http://www.greatduckisland.net.\n[13] A. Arora, P. Dutta, and S. Bapat, \"Line in the Sand: A\nWireless Sensor Network for Target Detection,\nClassification, and Tracking,\"\nOSU-CISRC-12/03-TR71, 2003.\n[14] \"ExScal,\" http://www.cast.cse.ohio-state.edu/exscal/.\n[15] S. Corporation, \"Chemical/Bio Defense and Sensor\nNetworks,\"\nhttp://www.sentel.com/html/chemicalbio.html .\n[16] E. L. Lawler, J. K. Lenstra, A. H. G. R. Kan, and D. B.\nShmoys, The Traveling Salesman Problem : A Guided\nTour of Combinatorial Optimization.\nJohn Wiley &\nSons, 1985.\n[17] R. Cristescu, B. Beferull-Lozano, and M. Vetterli, \"On\nNetwork Correlated Data Gathering,\" in Proceedings\nof the 23rd Annual Joint Conference of the IEEE\nComputer and Communications Societies, vol. 4,\nMarch 2004, pp. 25712582.\n[18] K. W. Fan, S. Liu, and P. Sinha, \"Structure-free Data\nAggregation in Sensor Networks,\" in\nOSU-CISRC-4/06-TR35, Technical Report, Dept of\nCSE, OSU, April 2006.\n[19] Y. Zhu, K. Sundaresan, and R. Sivakumar, \"Practical\nLimits on Achievable Energy Improvements and\n193\nUseable Delay Tolerance in Correlation Aware Data\nGathering in Wireless Sensor Networks,\" in IEEE\nSecond Annual IEEE Communications Society\nConference on Sensor and Ad Hoc Communications\nand Networks, September 2005.\n[20] K. W. Fan, S. Liu, and P. Sinha, \"On the potential of\nStructure-free Data Aggregation in Sensor Networks,\"\nin To be appear in Proceedings of INFOCOM 2006,\nApril 2006.\n[21] C. Intanagonwiwat, R. 
Govindan, and D. Estrin,\n\"Directed Diffusion: A Scalable and Robust\nCommunication Paradigm for Sensor Networks,\" in\nProceedings of the 6th Annual International\nConference on Mobile Computing and Networking,\nAugust 2000, pp. 5667.\n[22] C. Intanagonwiwat, R. Govindan, D. Estrin,\nJ. Heidemann, and F. Silva, \"Directed Diffusion for\nWireless Sensor Networking,\" in IEEE/ACM\nTransactions on Networking, vol. 11, February 2003,\npp. 216.\n[23] S. Madden, M. J. Franklin, J. M. Hellerstein, and\nW. Hong, \"TAG: a Tiny AGgregation Service for\nAd-Hoc Sensor Networks,\" in Proceedings of the 5th\nsymposium on Operating systems design and\nimplementation, December 2002, pp. 131146.\n[24] S. Madden, R. Szewczyk, M. J. Franklin, and\nD. Culler, \"Supporting Aggregate Queries Over\nAd-Hoc Wireless Sensor Networks,\" in Proceedings of\nthe 4th IEEE Workshop on Mobile Computing Systems\nand Applications, June 2004, pp. 4958.\n[25] S. Lindsey, C. S. Raghavendra, and K. M. Sivalingam,\n\"Data Gathering in Sensor Networks using the\nEnergy*Delay Metric,\" in Proceedings 15th\nInternational Parallel and Distributed Processing\nSymposium, April 2001, pp. 20012008.\n[26] S. Lindsey and C. Raghavendra, \"PEGASIS:\nPower-efficient gathering in sensor information\nsystems,\" in Proceedings of IEEE Aerospace\nConference, vol. 3, March 2002, pp. 11251130.\n[27] S. Lindsey, C. Raghavendra, and K. M. Sivalingam,\n\"Data Gathering Algorithms in Sensor Networks\nUsing Energy Metrics,\" in IEEE Transactions on\nParallel and Distributed Systems, vol. 13, September\n2002, pp. 924935.\n[28] J. Wong, R. Jafari, and M. Potkonjak, \"Gateway\nplacement for latency and energy efficient data\naggregation,\" in 29th Annual IEEE International\nConference on Local Computer Networks, November\n2004, pp. 490497.\n[29] B. J. Culpepper, L. Dung, and M. Moh, \"Design and\nAnalysis of Hybrid Indirect Transmissions (HIT) for\nData Gathering in Wireless Micro Sensor Networks,\"\nin ACM SIGMOBILE Mobile Computing and\nCommunications Review, vol. 8, January 2004, pp.\n6183.\n[30] H. F. Salama, D. S. Reeves, and Y. Viniotis,\n\"Evaluation of Multicast Routing Algorithms for\nReal-time Communication on High-speed Networks,\"\nin IEEE Journal on Selected Area in Communications,\nvol. 15, April 1997.\n[31] A. Scaglione and S. D. Servetto, \"On the\nInterdependence of Routing and Data Compression in\nMulti-Hop Sensor Networks,\" in Proceedings of the\n8th Annual International Conference on Mobile\nComputing and Networking, September 2002, pp.\n140147.\n[32] A. Scaglione, \"Routing and Data Compression in\nSensor Networks: Stochastic Models for Sensor Data\nthat Guarantee Scalability,\" in Proceedings of IEEE\nInternational Symposium on Information Theory, June\n2003, p. 174.\n[33] R. Cristescu and M. Vetterli, \"Power Efficient\nGathering of Correlated Data: Optimization,\nNP-Completeness and Heuristics,\" in Summaries of\nMobiHoc 2003 posters, vol. 7, July 2003, pp. 3132.\n[34] S. Pattern, B. Krishnamachari, and R. Govindan, \"The\nImpact of Spatial Correlation on Routing with\nCompression in Wireless Sensor Networks,\" in\nProceedings of the 3rd International Symposium on\nInformation Processing in Sensor Networks, April\n2004, pp. 2835.\n[35] L. Cai and D. Corneil, \"Tree Spanners,\" in SIAM\nJournal of Discrete Mathematics, vol. 8, 1995.\n[36] N. Alon, R. M. Karp, D. Peleg, and D. West, \"A graph\ntheoretic game and its application to the k-server\nproblem,\" in SIAM Journal of Computing, vol. 24,\n1995.\n[37] D. Peleg and D. 
Tendler, \"Low Stretch Spanning Trees\nfor Planar Graphs,\" in Technical Report MCS01-14,\nMathematics & Computer Sience, Weizmann Institute\nOf Sience, 2001.\n[38] L. Jia, G. Noubir, R. Rajaraman, and R. Sundaram,\n\"GIST: Group-Independent Spanning Tree for Data\nAggregation in Dense Sensor Networks,\" in\nInternational Conference on Distributed Computing in\nSensor Systems, June 2006.\n[39] N. Bulusu, J. Heidemann, and D. Estrin, \"GPS-less\nLow Cost Outdoor Localization For Very Small\nDevices,\" in IEEE Personal Communications, Special\nIssue on \"Smart Spaces and Environments\", vol. 7,\nOctober 2000.\n[40] D. Moore, J. Leonard, D. Rus, and S. Teller, \"Robust\nDistributed Network Localization with Noisy Range\nMeasurements,\" in Proceedings of 2nd ACM Sensys,\npp. 5061.\n[41] Crossbow, \"Crossbow,\" http://www.xbow.com.\n194\n", "keywords": "ToD;Structure-free;Anycasting;Data Aggregation"} {"name": "173", "title": "Scalable Mining of Large Disk-based Graph Databases", "abstract": "Mining frequent structural patterns from graph databases is an interesting problem with broad applications. Most of the previous studies focus on pruning unfruitful search subspaces effectively, but few of them address the mining on large, disk-based databases. As many graph databases in applications cannot be held into main memory, scalable mining of large, disk-based graph databases remains a challenging problem. In this paper, we develop an effective index structure, ADI (for adjacency index), to support mining various graph patterns over large databases that cannot be held into main memory. The index is simple and efficient to build. Moreover, the new index structure can be easily adopted in various existing graph pattern mining algorithms. As an example , we adapt the well-known gSpan algorithm by using the ADI structure. The experimental results show that the new index structure enables the scalable graph pattern mining over large databases. In one set of the experiments, the new disk-based method can mine graph databases with one million graphs, while the original gSpan algorithm can only handle databases of up to 300 thousand graphs. Moreover, our new method is faster than gSpan when both can run in main memory.", "fulltext": "INTRODUCTION\nMining frequent graph patterns is an interesting research\nproblem with broad applications, including mining structural patterns from chemical compound databases, plan\ndatabases, XML documents, web logs, citation networks,\nand so forth. Several efficient algorithms have been proposed\nin the previous studies [2, 5, 6, 8, 11, 9], ranging\nfrom mining graph patterns, with and without constraints,\nto mining closed graph patterns.\nMost of the existing methods assume implicitly or explic-itly\nthat the databases are not very large, and the graphs\nin the database are relatively simple. That is, either the\ndatabases or the major part of them can fit into main memory\n, and the number of possible labels in the graphs [6] is\nsmall. For example, [11] reports the performance of gSpan,\nan efficient frequent graph pattern mining algorithm, on\ndata sets of size up to 320 KB, using a computer with 448\nMB main memory.\nClearly, the graph database and the\nprojected databases can be easily accommodated into main\nmemory.\nUnder the large main memory assumption, the computation\nis CPU-bounded instead of I/O-bounded. Then, the\nalgorithms focus on effective heuristics to prune the search\nspace. 
Few of them address the concern of handling large\ngraph databases that cannot be held in main memory.\nWhile the previous studies have made excellent progress\nin mining graph databases of moderate size, mining large,\ndisk-based graph databases remains a challenging problem.\nWhen mining a graph database that cannot fit into main\nmemory, the algorithms have to scan the database and navigate\nthe graphs repeatedly. The computation becomes I/O-bounded\n.\nFor example, we obtain the executable of gSpan from the\nauthors and test its scalability. In one of our experiments\n1\n,\nwe increase the number of graphs in the database to test\nthe scalability of gSpan on the database size. gSpan can\nonly handle up to 300 thousand graphs. In another experiment\n, we increase the number of possible labels in graphs.\nWe observe that the runtime of gSpan increases exponentially\n. It finishes a data set of 300 thousand graphs with 636\nseconds when there are only 10 possible labels, but needs\n15 hours for a data set with the same size but the number\nof possible labels is 45! This result is consistent with the\nresults reported in [11].\nAre there any real-life applications that need to mine large\ngraph databases? The answer is yes. For example, in data\nintegration of XML documents or mining semantic web, it is\noften required to find the common substructures from a huge\ncollection of XML documents. It is easy to see applications\nwith collections of millions of XML documents. There are\n1\nDetails will be provided in Section 6\n316\nResearch Track Paper\nhundreds of even thousands of different labels. As another\nexample, chemical structures can be modeled as graphs. A\nchemical database for drug development can contain millions\nof different chemical structures, and the number of different\nlabels in the graphs can easily go to up to 100. These large\ndatabases are disk-based and often cannot be held into main\nmemory.\nWhy is mining large disk-based graph databases so challenging\n? In most of the previous studies, the major data\nstructures are designed for being held in main memory. For\nexample, the adjacency-list or adjacency-matrix representations\nare often used to represent graphs. Moreover, most of\nthe previous methods are based on efficient random accesses\nto elements (e.g., edges and their adjacent edges) in graphs.\nHowever, if the adjacency-list or adjacency-matrix representations\ncannot be held in main memory, the random accesses\nto them become very expensive. For disk-based data, without\nany index, random accesses can be extremely costly.\nCan we make mining large, disk-based graph databases feasible\nand scalable? This is the motivation of our study.\nSince the bottleneck is the random accesses to the large\ndisk-based graph databases, a natural idea is to index the\ngraph databases properly. Designing effective and efficient\nindex structures is one of the most invaluable exercises in\ndatabase research. A good index structure can support a\ngeneral category of data access operations. Particularly, a\ngood index should be efficient and scalable in construction\nand maintenance, and fast for data access.\nInstead of inventing new algorithms to mine large, disk-based\ngraph patterns, can we devise an efficient index structure\nfor graph databases so that mining various graph patterns\ncan be conducted scalably? 
Moreover, the index structure\nshould be easy to be adopted in various existing methods\nwith minor adaptations.\nStimulated by the above thinking, in this paper, we study\nthe problem of efficient index for scalable mining of large,\ndisk-based graph databases, and make the following contributions\n.\nBy analyzing the frequent graph pattern mining problem\nand the typical graph pattern mining algorithms\n(taking gSpan as an example), we identify several bottleneck\ndata access operations in mining large, disk-based\ngraph databases.\nWe propose ADI (for adjacency index), an effective\nindex structure for graphs. We show that the major\noperations in graph mining can be facilitated efficiently\nby an ADI structure. The construction algorithm of\nADI structure is presented.\nWe adapt the gSpan algorithm by using the ADI structure\non mining large, disk-based graph databases, and\nachieve algorithm ADI-Mine. We show that ADI-Mine\noutperforms gSpan in mining complex graph databases\nand can mine much larger databases than gSpan.\nA systematic performance study is reported to verify\nour design. The results show that our new index structure\nand algorithm are scalable on large data sets.\nThe remainder of the paper is organized as follows. We\ndefine the problem of frequent graph pattern mining in Section\n2. The idea of minimum DFS code and algorithm gSpan\nb\nb\na\na\ny\nz\nx\nx\nx\nx\nz\na\na\nb\nv3\nv2\nv1\nv0\nb\nb\na\na\ny\nz\nx\nx\nv3\nv2\nv0\nv1\nb\nb\na\na\ny\nz\nx\nx\n(a) Graph\n(b) Subgraph\n(c) DFS-tree\n(d) DFS-tree\nG\nG\nT\n1\nT\n2\nFigure 1: Subgraph and DFS codes\nare reviewed in Section 3, and the major data access operations\nin graph mining are also identified. The ADI structure\nis developed in Section 4. The efficient algorithm ADI-Mine\nfor mining large, disk-based graph databases using ADI is\npresented in Section 5.\nThe experimental results are reported\nin Section 6. The related work is discussed in Section\n7. Section 8 concludes the paper.\nPROBLEM DEFINITION\nIn this paper, we focus on undirected labeled simple graphs.\nA labeled graph is a 4-tuple G = (V, E, L, l), where V is a set\nof vertices, E\nV V is a set of edges, L is a set of labels,\nand l : V\nE L is a labeling function that assigns a label\nto an edge or a vertex. We denote the vertex set and the\nedge set of a graph G by V (G) and E(G), respectively.\nA graph G is called connected if for any vertices u, v\n\nV (G), there exist vertices w\n1\n, . . . , w\nn\nV (G) such that\n{(u, w\n1\n), (w\n1\n, w\n2\n), . . . , (w\nn-1\n, w\nn\n), (w\nn\n, v)\n} E(G).\nFrequent patterns in graphs are defined based on subgraph\nisomorphism.\nDefinition 1\n(Subgraph isomorphism). Given graphs\nG = (V, E, L, l) and G = (V , E , L , l ). An injective function\nf : V\nV is called a subgraph isomorphism from G to\nG if (1) for any vertex u\nV , f(u) V and l (u) = l(f(u));\nand (2) for any edge (u, v)\nE , (f(u), f(v)) E and\nl\n(u, v) = l(f (u), f (v)).\nIf there exists a subgraph isomorphism from G to G, then\nG\nis called a subgraph of G and G is called a supergraph of\nG\n, denoted as G\nG.\nFor example, the graph G in Figure 1(b) is a subgraph of\nG in Figure 1(a).\nA graph database is a set of tuples (gid, G), where gid is\na graph identity and G is a graph. 
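To make Definition 1 concrete, the following minimal Python sketch (illustrative only; the class, the helper names and the brute-force search are used here for exposition and are not an implementation discussed later in this paper) represents a labeled graph and tests subgraph isomorphism directly from the definition. The graph G below is the graph G of Figure 1(a), reconstructed from its DFS code given in Section 3.1; the pattern P is simply one of its paths.

from itertools import permutations

class LabeledGraph:
    # vertex_labels: dict vertex -> label; edges: dict frozenset({u, v}) -> edge label
    def __init__(self, vertex_labels, edges):
        self.vl = vertex_labels
        self.el = edges

def is_subgraph(pattern, g):
    # Brute-force check of Definition 1: search for an injective mapping f from
    # V(pattern) into V(g) that preserves vertex labels and maps every edge of
    # pattern onto an edge of g carrying the same label.  Exponential; for
    # illustration only, not for use on large graphs.
    vp, vg = list(pattern.vl), list(g.vl)
    for image in permutations(vg, len(vp)):
        f = dict(zip(vp, image))
        if any(pattern.vl[u] != g.vl[f[u]] for u in vp):
            continue
        if all(g.el.get(frozenset({f[u], f[v]})) == lab
               for (u, v), lab in ((tuple(e), l) for e, l in pattern.el.items())):
            return True
    return False

# The graph G of Figure 1(a) (vertex labels x, x, z, y; edge labels a, a, b, b):
G = LabeledGraph({0: 'x', 1: 'x', 2: 'z', 3: 'y'},
                 {frozenset({0, 1}): 'a', frozenset({1, 2}): 'a',
                  frozenset({2, 0}): 'b', frozenset({1, 3}): 'b'})
# A small pattern, the path x-a-x-a-z, which is a subgraph of G:
P = LabeledGraph({0: 'x', 1: 'x', 2: 'z'},
                 {frozenset({0, 1}): 'a', frozenset({1, 2}): 'a'})
assert is_subgraph(P, G)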
Given a graph database\nGDB, the support of a graph G in GDB, denoted as sup(G )\nfor short, is the number of graphs in the database that are\nsupergraphs of G , i.e.,\n|{(gid, G) GDB|G\nG\n}|.\nFor a support threshold min sup (0\nmin sup |GDB|),\na graph G is called a frequent graph pattern if sup(G )\n\nmin sup\n. In many applications, users are only interested\nin the frequent recurring components of graphs. Thus, we\nput a constraint on the graph patterns: we only find the\nfrequent graph patterns that are connected.\nProblem definition. Given a graph database GDB and\na support threshold min sup. The problem of mining frequent\nconnected graph patterns is to find the complete set of\nconnected graphs that are frequent in GDB.\n317\nResearch Track Paper\nMINIMUM DFS CODE AND GSPAN\nIn [11], Yan and Han developed the lexicographic ordering\ntechnique to facilitate the graph pattern mining. They\nalso propose an efficient algorithm, gSpan, one of the most\nefficient graph pattern mining algorithms so far.\nIn this\nsection, we review the essential ideas of gSpan, and point\nout the bottlenecks in the graph pattern mining from large\ndisk-based databases.\n3.1\nMinimum DFS Code\nIn order to enumerate all frequent graph patterns efficiently\n, we want to identify a linear order on a representation\nof all graph patterns such that if two graphs are in identical\nrepresentation, then they are isomorphic. Moreover, all the\n(possible) graph patterns can be enumerated in the order\nwithout any redundancy.\nThe depth-first search tree (DFS-tree for short) [3] is popularly\nused for navigating connected graphs.\nThus, it is\nnatural to encode the edges and vertices in a graph based\non its DFS-tree. All the vertices in G can be encoded in\nthe pre-order of T . However, the DFS-tree is generally not\nunique for a graph. That is, there can be multiple DFS-trees\ncorresponding to a given graph.\nFor example, Figures 1(c) and 1(d) show two DFS-trees of\nthe graph G in Figure 1(a). The thick edges in Figures 1(c)\nand 1(d) are those in the DFS-trees, and are called forward\nedges, while the thin edges are those not in the DFS-trees,\nand are called backward edges. The vertices in the graph\nare encoded v\n0\nto v\n3\naccording to the pre-order of the corresponding\nDFS-trees.\nTo solve the uniqueness problem, a minimum DFS code\nnotation is proposed in [11].\nFor any connected graph G, let T be a DFS-tree of G.\nThen, an edge is always listed as (v\ni\n, v\nj\n) such that i < j. A\nlinear order\non the edges in G can be defined as follows.\nGiven edges e = (v\ni\n, v\nj\n) and e = (v\ni\n, v\nj\n). e\ne if (1)\nwhen both e and e are forward edges (i.e., in DFS-tree T ),\nj < j\nor (i > i\nj = j ); (2) when both e and e are\nbackward edges (i.e., edges not in DFS-tree T ), i < i or\n(i = i\nj < j ); (3) when e is a forward edge and e is a\nbackward edge, j\ni ; or (4) when e is a backward edge and\ne\nis a forward edge, i < j .\nFor a graph G and a DFS-tree T , a list of all edges in\nE(G) in order\nis called the DFS code of G with respect to\nT , denoted as code(G, T ). For example, the DFS code with\nrespect to the DFS-tree T\n1\nin Figure 1(c) is code(G, T\n1\n) =\n(v\n0\n, v\n1\n, x, a, x)-(v\n1\n, v\n2\n, x, a, z)-(v\n2\n, v\n0\n, z, b, x)-(v\n1\n, v\n3\n, x, b, y) ,\nwhere an edge (v\ni\n, v\nj\n) is written as (v\ni\n, v\nj\n, l(v\ni\n), l(v\ni\n, v\nj\n),\nl(v\nj\n)), i.e., the labels are included. 
Similarly, the DFS code\nwith respect to the DFS-tree T\n2\nin Figure 1(d) is\ncode(G, T\n2\n) = (v\n0\n, v\n1\n, y, b, x)-(v\n1\n, v\n2\n, x, a, x)-(v\n2\n, v\n3\n, x, b, z)\n(v\n3\n, v\n1\n, z, a, x) .\nSuppose there is a linear order over the label set L. Then,\nfor DFS-trees T\n1\nand T\n2\non the same graph G, their DFS\ncodes can be compared lexically according to the labels of\nthe edges. For example, we have code(G, T\n1\n) < code(G, T\n2\n)\nin Figures 1(c) and 1(d).\nThe lexically minimum DFS code is selected as the representation\nof the graph, denoted as min(G). In our example\nin Figure 1, min(G) = code(G, T\n1\n).\nMinimum DFS code has a nice property: two graphs G\nand G are isomorphic if and only if min(G) = min(G ).\nMoreover, with the minimum DFS code of graphs, the prob-Input\n: a DFS code s, a graph database GDB and min sup\nOutput: the frequent graph patterns\nMethod:\nif s is not a minimum DFS code then return;\noutput s as a pattern if s is frequent in GDB;\nlet C =\n;\nscan GDB once, find every edge e such that\ne can be concatenated to s to form a DFS code s\ne\nand s\ne is frequent; C = C\n{s e};\nsort the DFS codes in C in lexicographic order;\nfor each s e C in lexicographic order do\ncall gSpan(s e, GDB, min sup);\nreturn;\nFigure 2: Algorithm\ngSpan.\nlem of mining frequent graph patterns is reduced to mining\nfrequent minimum DFS codes, which are sequences, with\nsome constraints that preserve the connectivity of the graph\npatterns.\n3.2\nAlgorithm gSpan\nBased on the minimum DFS codes of graphs, a depth-first\nsearch, pattern-growth algorithm, gSpan, is developed\nin [11], as shown in Figure 2. The central idea is to conduct\na depth-first search of minimum DFS codes of possible\ngraph patterns, and obtain longer DFS codes of larger\ngraph patterns by attaching new edges to the end of the\nminimum DFS code of the existing graph pattern.\nThe\nanti-monotonicity of frequent graph patterns, i.e., any super\npattern of an infrequent graph pattern cannot be frequent, is\nused to prune.\nComparing to the previous methods on graph pattern\nmining, gSpan is efficient, since gSpan employs the smart\nidea of minimum DFS codes of graph patterns that facilitates\nthe isomorphism test and pattern enumeration. Moreover\n, gSpan inherits the depth-first search, pattern-growth\nmethodology to avoid any candidate-generation-and-test. As\nreported in [11], the advantages of gSpan are verified by the\nexperimental results on both real data sets and synthetic\ndata sets.\n3.3\nBottlenecks in Mining Disk-based Graph\nDatabases\nAlgorithm gSpan is efficient when the database can be\nheld into main memory. For example, in [11], gSpan is scalable\nfor databases of size up to 320 KB using a computer\nwith 448 MB main memory. However, it may encounter difficulties\nwhen mining large databases. The major overhead\nis that gSpan has to randomly access elements (e.g., edges\nand vertices) in the graph database as well as the projections\nof the graph database many times. For databases that\ncannot be held into main memory, the mining becomes I/O\nbounded and thus is costly.\nRandom accesses to elements in graph databases and checking\nthe isomorphism are not unique to gSpan. 
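Viewed as data, a DFS code is just a list of 5-tuples (i, j, l(v_i), l(v_i, v_j), l(v_j)), which the following illustrative Python snippet (not the gSpan implementation) uses to encode the two codes of Figure 1 and to pick the minimum one. The plain positional tuple comparison used here agrees with the DFS lexicographic order on this example; the general order additionally distinguishes forward and backward edges, as defined in Section 3.1.

# code(G, T1) and code(G, T2) from Figure 1, written as lists of 5-tuples:
code_T1 = [(0, 1, 'x', 'a', 'x'), (1, 2, 'x', 'a', 'z'),
           (2, 0, 'z', 'b', 'x'), (1, 3, 'x', 'b', 'y')]
code_T2 = [(0, 1, 'y', 'b', 'x'), (1, 2, 'x', 'a', 'x'),
           (2, 3, 'x', 'b', 'z'), (3, 1, 'z', 'a', 'x')]

# Python compares lists of tuples element by element, which on this pair
# reproduces code(G, T1) < code(G, T2); the minimum over the codes of all
# DFS-trees is the canonical form min(G), and two graphs are isomorphic if
# and only if their minimum DFS codes are equal.
assert min([code_T1, code_T2]) == code_T1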
Instead, such\noperations are extensive in many graph pattern mining algorithms\n, such as FSG [6] (another efficient frequent graph\npattern mining algorithm) and CloseGraph [9] (an efficient\nalgorithm for mining frequent closed graph patterns).\nIn mining frequent graph patterns, the major data access\noperations are as follows.\n318\nResearch Track Paper\nOP1: Edge support checking. Find the support of an\nedge (l\nu\n, l\ne\n, l\nv\n), where l\nu\nand l\nv\nare the labels of vertices\nand l\ne\nis the label of the edge, respectively;\nOP2: Edge-host graph checking. For an edge e\n=\n(l\nu\n, l\ne\n, l\nv\n), find the graphs in the database where e appears\n;\nOP3: Adjacent edge checking. For\nan\nedge\ne\n=\n(l\nu\n, l\ne\n, l\nv\n), find the adjacent edges of e in the graphs\nwhere e appears, so that the adjacent edges can be\nused to expand the current graph pattern to larger\nones.\nEach of the above operations may happen many times\nduring the mining of frequent graph patterns. Without an\nappropriate index, each of the above operations may have to\nscan the graph database or its projections. If the database\nand its projections cannot fit into main memory, the scanning\nand checking can be very costly.\nCan we devise an index structure so that the related information\ncan be kept and all the above operations can be\nachieved using the index only, and thus without scanning\nthe graph database and checking the graphs? This motivates\nthe design of the ADI structure.\nTHE ADI STRUCTURE\nIn this section we will devise an effective data structure,\nADI (for adjacency index), to facilitate the scalable mining\nof frequent graph patterns from disk-based graph databases.\n4.1\nData Structure\nThe ADI index structure is a three-level index for edges,\ngraph-ids and adjacency information. An example is shown\nin Figure 3, where two graphs, G\n1\nand G\n2\n, are indexed.\n4.1.1\nEdge Table\nThere can be many edges in a graph database. The edges\nare often retrieved by the labels during the graph pattern\nmining, such as in the operations identified in Section 3.3.\nTherefore, the edges are indexed by their labels in the ADI\nstructure.\nIn ADI, an edge e = (u, v) is recorded as a tuple\n(l(u), l(u, v), l(v)) in the edge table, and is indexed by the\nlabels of the vertices, i.e., l(u) and l(v), and the label of\nthe edge itself, i.e., l(u, v). Each edge appears only once in\nthe edge table, no matter how many times it appears in the\ngraphs. For example, in Figure 3, edge (A, d, C) appears\nonce in graph G\n1\nand twice in graph G\n2\n. However, there\nis only one entry for the edge in the edge table in the ADI\nstructure.\nAll edges in the edge table in the ADI structure are sorted.\nWhen the edge table is stored on disk, a B+-tree is built on\nthe edges. When part of the edge table is loaded into main\nmemory, it is organized as a sorted list. Thus, binary search\ncan be conducted.\n4.1.2\nLinked Lists of Graph-ids\nFor each edge e, the identities of the graphs that contain\ne form a linked list of graph-ids. Graph-id G\ni\nis in the list\nof edge e if and only if there exists at least one instance of e\nin G\ni\n. For example, in Figure 3, both G\n1\nand G\n2\nappear in\nthe list of edge (A, d, C), since the edge appears in G\n1\nonce\nand in G\n2\ntwice. 
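A minimal in-memory model of the three levels may help fix the ideas; the Python below is only illustrative (the real structure keeps the graph-id lists and the adjacency blocks on disk, as described in the rest of this section), and it assumes every labeled edge (l_u, l_e, l_v) is stored in one fixed orientation.

from collections import defaultdict

class ADIModel:
    # Levels 1 and 2: edge table keyed by the label triple, with a set of
    # graph-ids per edge.  Level 3: per-graph blocks of encoded edge instances
    # (u, v, l_u, l_e, l_v).  Disk layout and pointers are omitted.
    def __init__(self):
        self.graph_ids = defaultdict(set)
        self.blocks = {}

    def add_graph(self, gid, instances):
        self.blocks[gid] = list(instances)
        for (_, _, lu, le, lv) in instances:
            self.graph_ids[(lu, le, lv)].add(gid)   # a gid is recorded once per distinct edge

    def support(self, triple):                      # OP1: edge support checking
        return len(self.graph_ids.get(triple, ()))

    def host_graphs(self, triple):                  # OP2: edge-host graph checking
        return self.graph_ids.get(triple, set())

    def adjacent_edges(self, triple):               # OP3: adjacent edge checking
        # Only the blocks of graphs that contain the edge are visited; the
        # instances returned share a vertex with an instance of `triple`.
        for gid in self.host_graphs(triple):
            block = self.blocks[gid]
            hits = [e for e in block if (e[2], e[3], e[4]) == triple]
            ends = {v for e in hits for v in (e[0], e[1])}
            yield gid, [e for e in block
                        if e not in hits and (e[0] in ends or e[1] in ends)]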
Please note that the identity of graph G\ni\nG1\nG2\nG1\nG2\nG2\nG1\nA\nB\nC\nD\na\nd\nd\nb\n1\n2\n3\n4\nG1\nA\nB\nC\nC\nD\nB\na\nc\nd\nd\nd\n1\n2\n3\n4\n5\nG2\nEdges\nBlock 1\nBlock 2\nGraph-ids (on disk)\n1 2\n2 3\n1 4\n3 4\n1 2\n1 4\n1 6\n2 3\n4 5\n(A, a, B)\n(A, d, C)\n(B, b, D)\nG2\nG1\n(B, c, C)\n(B, d, D)\n(C, d, D)\nAdjacency (on disk)\n6\nFigure 3: An\nADI structure.\nappears in the linked list of edge e only once if e appears in\nG\ni\n, no matter how many times edge e appears in G\ni\n.\nA list of graph-ids of an edge are stored together. Therefore\n, given an edge, it is efficient to retrieve all the identities\nof graphs that contain the edge.\nEvery entry in the edge table is linked to its graph-id\nlinked list. By this linkage, the operation OP2: edge-host\ngraph checking can be conducted efficiently. Moreover, to\nfacilitate operation OP1: edge support checking, the length\nof the graph-id linked list, i.e., the support of an edge, is\nregistered in the edge table.\n4.1.3\nAdjacency Information\nThe edges in a graph are stored as a list of the edges\nencoded. Adjacent edges are linked together by the common\nvertices, as shown in Figure 3. For example, in block 1,\nall the vertices having the same label (e.g., 1) are linked\ntogether as a list. Since each edge has two vertices, only\ntwo pointers are needed for each edge.\nMoreover, all the edges in a graph are physically stored\nin one block on disk (or on consecutive blocks if more space\nis needed), so that the information about a graph can be\nretrieved by reading one or several consecutive blocks from\ndisk. Often, when the graph is not large, a disk-page (e.g.,\nof size 4k) can hold more than one graph.\nEncoded edges recording the adjacency information are\nlinked to the graph-ids that are further associated with the\nedges in the edge table.\n4.2\nSpace Requirement\nThe storage of an ADI structure is flexible. If the graph\ndatabase is small, then the whole index can be held into\nmain memory. On the other hand, if the graph database\nis large and thus the ADI structure cannot fit into main\n319\nResearch Track Paper\nmemory, some levels can be stored on disk. The level of\nadjacency information is the most detailed and can be put\non disk. If the main memory is too small to hold the graph-id\nlinked lists, they can also be accommodated on disk. In\nthe extreme case, even the edge table can be held on disk\nand a B+-tree or hash index can be built on the edge table.\nTheorem 1\n(Space complexity). For graph database\nGDB\n=\n{G\n1\n, . . . , G\nn\n},\nthe\nspace\ncomplexity\nis\nO(\nn\ni=1\n|E(G\ni\n)\n|).\nProof. The space complexity is determined by the following\nfacts. (1) The number of tuples in the edge table is equal to\nthe number of distinct edges in the graph database, which is\nbounded by\nn\ni=1\n|E(G\ni\n)\n|; (2) The number of entries in the\ngraph-id linked lists in the worst case is the number of edges\nin the graph database, i.e.,\nn\ni=1\n|E(G\ni\n)\n| again; and (3) The\nadjacency information part records every edge exactly once.\nPlease note that, in many application, it is reasonable to\nassume that the edge table can be held into main memory.\nFor example, suppose we have 1, 000 distinct vertex labels\nand 1, 000 distinct edge labels. There can be up to 1000\n\n999\n2 1000 = 4.995 10\n8\ndifferent edges, i.e., all possible\ncombinations of vertex and edge labels. 
Suppose up to 1%\nedges are frequent, there are only less than 5 million different\nedges, and thus the edge table can be easily held into main\nmemory.\nIn real applications, the graphs are often sparse, that is,\nnot all possible combinations of vertex and edge labels appear\nin the graphs as an edge. Moreover, users are often\ninterested in only those frequent edges. That shrinks the\nedge table substantially.\n4.3\nSearch Using ADI\nNow, let us examine how the ADI structure can facilitate\nthe major data access operations in graph pattern mining\nthat are identified in Section 3.3.\nOP1: Edge support checking Once an ADI structure is\nconstructed, this information is registered on the edge\ntable for every edge. We only need to search the edge\ntable, which is either indexed (when the table is on\ndisk) or can be searched using binary search (when\nthe table is in main memory).\nIn some cases, we may need to count the support of an\nedge in a subset of graphs G\nG. Then, the linked\nlist of the graph-ids of the edge is searched. There is\nno need to touch any record in the adjacency information\npart. That is, we do not need to search any detail\nabout the edges. Moreover, for counting supports of\nedges in projected databases, we can maintain the support\nof each edge in the current projected database and\nthus we do not even search the graph-id linked lists.\nOP2: Edge-host graph checking We only need to search\nthe edge table for the specific edge and follow the link\nfrom the edge to the list of graph-ids. There is no\nneed to search any detail from the part of adjacency\ninformation.\nOP3: Adjacent edge checking Again, we start from an\nentry in the edge table and follow the links to find\nthe list of graphs where the edge appears. Then, only\nInput: a graph database GDB and min sup\nOutput: the ADI structure\nMethod:\nscan GDB once, find the frequent edges;\ninitialize the edge table for frequent edges;\nfor each graph do\nremove infrequent edges;\ncompute the mininmum DFS code [11];\nuse the DFS-tree to encode the vertices;\nstore the edges in the graph onto disk and form\nthe adjacency information;\nfor each edge do\ninsert the graph-id to the graph-id list\nassociated with the edge;\nlink the graph-id to the related adjacency\ninformation;\nend for\nend for\nFigure 4: Algorithm of\nADI construction.\nthe blocks containing the details of the instances of the\nedge are visited, and there is no need to scan the whole\ndatabase. The average I/O complexity is O(log n +\nm + l), where n is the number of distinct edges in the\ngraph, m is the average number of graph-ids in the\nlinked lists of edges, and l is the average number of\nblocks occupied by a graph. In many applications, m\nis orders of magnitudes smaller than the n, and l is a\nvery small number (e.g., 1 or 2).\nThe algorithms for the above operations are simple. Limited\nby space, we omit the details here. As can be seen,\nonce the ADI structure is constructed, there is no need to\nscan the database for any of the above operations. That is,\nthe ADI structure can support the random accesses and the\nmining efficiently.\n4.4\nConstruction of ADI\nGiven a graph database, the corresponding ADI structure\nis easy to construct by scanning the database only twice.\nIn the first scan, the frequent edges are identified. According\nto the apriori property of frequent graph patterns, only\nthose frequent edges can appear in frequent graph patterns\nand thus should be indexed in the ADI structure. 
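As a small illustration of this first scan (in Python, over the simplified in-memory representation sketched in Section 4.1, and not the disk-based code used in the experiments), the per-edge supports can be accumulated by counting every distinct labeled edge at most once per graph:

from collections import Counter

def find_frequent_edges(graph_db, min_sup):
    # graph_db: iterable of (gid, instances), where each instance is an
    # encoded edge (u, v, l_u, l_e, l_v).  Each labeled edge is counted at
    # most once per graph, so the resulting count is exactly its support.
    support = Counter()
    for gid, instances in graph_db:
        distinct = {(lu, le, lv) for (_, _, lu, le, lv) in instances}
        support.update(distinct)
    return {edge: cnt for edge, cnt in support.items() if cnt >= min_sup}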
After the\nfirst scan, the edge table of frequent edges is initialized.\nIn the second scan, graphs in the database are read and\nprocessed one by one. For each graph, the vertices are encoded\naccording to the DFS-tree in the minimum DFS code,\nas described in [11] and Section 3. Only the vertices involved\nin some frequent edges should be encoded. Then, for each\nfrequent edge, the graph-id is inserted into the corresponding\nlinked list, and the adjacency information is stored. The\nsketch of the algorithm is shown in Figure 4.\nCost Analysis\nThere are two major costs in the ADI construction: writing\nthe adjacency information and updating the linked lists of\ngraph-ids. Since all edges in a graph will reside on a disk\npage or several consecutive disk pages, the writing of adjacency\ninformation is sequential. Thus, the cost of writing\nadjacency information is comparable to that of making a\n320\nResearch Track Paper\n4\n3\n2\n1\nD\nC\nA\nB\na\nd 1\nd 3\nd 4\nb 2\na 1\nb 3\nd 4\na 2\n4 C\n3 D\n2 B\n1 A\nd\nd\nb\n(a) The graph and the adjacency-lists\n1 A\n2 B\n3 D\n4 C\n1 A\n0\na\n0\nd\n2 B\na\n0\nb\n0\n3 D\n0\nb\n0\nd\n4 C\nd\n0\nd\n0\n(b) The adjacency-matrix\nFigure 5: The adjacency-list and adjacency-matrix\nrepresentations of graphs.\ncopy of the original database plus some bookkeeping.\nUpdating the linked lists of graph-ids requires random\naccesses to the edge table and the linked lists. In many\ncases, the edge table can be held into main memory, but not\nthe linked list. Therefore, it is important to cache the linked\nlists of graph-ids in a buffer. The linked lists can be cached\naccording to the frequency of the corresponding edges.\nConstructing ADI for large, disk-based graph database\nmay not be cheap. However, the ADI structure can be built\nonce and used by the mining many times. That is, we can\nbuild an ADI structure using a very low support threshold,\nor even set min sup = 1.\n2\nThe index is stored on disk.\nThen, the mining in the future can use the index directly,\nas long as the support threshold is no less than the one that\nis used in the ADI structure construction.\n4.5\nProjected Databases Using ADI\nMany depth-first search, pattern-growth algorithms utilize\nproper projected databases. During the depth-first search\nin graph pattern mining, the graphs containing the current\ngraph pattern P should be collected and form the P projected\ndatabase. Then, the further search of larger graph\npatterns having P as the prefix of their minimum DFS codes\ncan be achieved by searching only the P -projected database.\nInterestingly, the projected databases can be constructed\nusing ADI structures. A projected database can be stored\nin the form of an ADI structure. In fact, only the edge table\nand the list of graph-ids should be constructed for a new\nprojected database and the adjacency information residing\non disk can be shared by all projected databases.\nThat\ncan save a lot of time and space when mining large graph\ndatabases that contain many graph patterns, where many\nprojected databases may have to be constructed.\n4.6\nWhy Is ADI Good for Large Databases?\nIn most of the previous methods for graph pattern mining,\nthe adjacency-list or adjacency-matrix representations are\nused to represent graphs. Each graph is represented by an\nadjacency-matrix or a set of adjacency-lists. An example is\nshown in Figure 5.\n2\nIf min sup = 1, then the ADI structure can be constructed\nby scanning the graph database only once. 
We do not need\nto find frequent edges, since every edge appearing in the\ngraph database is frequent.\nIn Figure 5(a), the adjacency-lists have 8 nodes and 8\npointers. It stores the same information as Block 1 in Figure\n3, where the block has 4 nodes and 12 pointers.\nThe space requirements of adjacency-lists and ADI structure\nare comparable. From the figure, we can see that each\nedge in a graph has to be stored twice: one instance for\neach vertex. (If we want to remove this redundancy, the\ntradeoff is the substantial increase of cost in finding adjacency\ninformation). In general, for a graph of n edges, the\nadjacency-list representation needs 2n nodes and 2n pointers\n. An ADI structure stores each edge once, and use the\nlinkage among the edges from the same vertex to record the\nadjacency information. In general, for a graph of n edges, it\nneeds n nodes and 3n pointers.\nThen, what is the advantage of ADI structure against\nadjacency-list representation? The key advantage is that the\nADI structure extracts the information about containments\nof edges in graphs in the first two levels (i.e., the edge table\nand the linked list of graph-ids). Therefore, in many operations\n, such as the edge support checking and edge-host graph\nchecking, there is no need to visit the adjacency information\nat all. To the contrast, if the adjacency-list representation\nis used, every operation has to check the linked lists. When\nthe database is large so that either the adjacency-lists of all\ngraphs or the adjacency information in the ADI structure\ncannot be accommodated into main memory, using the first\ntwo levels of the ADI structure can save many calls to the\nadjacency information, while the adjacency-lists of various\ngraphs have to be transferred between the main memory and\nthe disk many times.\nUsually, the adjacency-matrix is sparse. The adjacency-matrix\nrepresentation is inefficient in space and thus is not\nused.\nALGORITHM ADI-MINE\nWith the help from the ADI structure, how can we improve\nthe scalability and efficiency of frequent graph pattern\nmining? Here, we present a pattern-growth algorithm ADI-Mine\n, which is an improvement of algorithm gSpan. The\nalgorithm is shown in Figure 6.\nIf the ADI structure is unavailable, then the algorithm\nscans the graph database and constructs the index. Otherwise\n, it just uses the ADI structure on the disk.\nThe frequent edges can be obtained from the edge table in\nthe ADI structure. Each frequent edge is one of the smallest\nfrequent graph patterns and thus should be output. Then,\nthe frequent edges should be used as the \"seeds\" to grow\nlarger frequent graph patterns, and the frequent adjacent\nedges of e should be used in the pattern-growth. An edge\ne\nis a frequent adjacent edge of e if e is an adjacent edge of\ne in at least min sup graphs. The set of frequent adjacent\nedges can be retrieved efficiently from the ADI structure\nsince the identities of the graphs containing e are indexed\nas a linked-list, and the adjacent edges are also indexed in\nthe adjacency information part in the ADI structure.\nThe pattern growth is implemented as calls to procedure\nsubgraph-mine.\nProcedure subgraph-mine tries every frequent\nadjacent edge e (i.e., edges in set F\ne\n) and checks\nwhether e can be added into the current frequent graph pattern\nG to form a larger pattern G . We use the DFS code to\ntest the redundancy. Only the patterns G whose DFS code\nis minimum is output and further grown. 
All other patterns\nG\nare either found before or will be found later at other\n321\nResearch Track Paper\nInput: a graph database GDB and min sup\nOutput: the complete set of frequent graph patterns\nMethod:\nconstruct the ADI structure for the graph database if\nit is not available;\nfor each frequent edge e in the edge table do\noutput e as a graph pattern;\nfrom the ADI structure, find set F\ne\n, the set of\nfrequent adjacent edges for e;\ncall subgraph-mine(e, F\ne\n);\nend for\nProcedure\nsubgraph-mine\nParameters: a frequent graph pattern G, and\nthe set of frequent adjacent edges F\ne\n// output the frequent graph patterns whose\n// minimum DFS-codes contain that of G as a prefix\nMethod:\nfor each edge e in F\ne\ndo\nlet G be the graph by adding e into G;\ncompute the DFS code of G ; if the DFS code is\nnot minimum, then return;\noutput G as a frequent graph pattern;\nupdate the set F\ne\nof adjacent edges;\ncall subgraph-mine(G , F\ne\n);\nend for\nreturn;\nFigure 6: Algorithm\nADI-Mine.\nbranches. The correctness of this step is guaranteed by the\nproperty of DFS code [11].\nOnce a larger pattern G is found, the set of adjacent edges\nof the current pattern should be updated, since the adjacent\nedges of the newly inserted edge should also be considered\nin the future growth from G . This update operation can be\nimplemented efficiently, since the identities of graphs that\ncontain an edge e are linked together in the ADI structure,\nand the adjacency information is also indexed and linked\naccording to the graph-ids.\nDifferences Between ADI-Mine and gSpan\nAt high level, the structure as well as the search strategies\nof ADI-Mine and gSpan are similar. The critical difference\nis on the storage structure for graphs--ADI-Mine uses ADI\nstructure and gSpan uses adjacency-list representation.\nIn the recursive mining, the critical operation is finding\nthe graphs that contain the current graph pattern (i.e.,\nthe test of subgraph isomorphism) and finding the adjacent\nedges to grow larger graph patterns.\nThe current graph\npattern is recorded using the labels. Thus, the edges are\nsearched using the labels of the vertices and that of the\nedges.\nIn gSpan, the test of subgraph isomorphism is achieved\nby scanning the current (projected) database.\nSince the\ngraphs are stored in adjacency-list representation, and one\nlabel may appear more than once in a graph, the search can\nbe costly. For example, in graph G\n2\nin Figure 3, in order\nto find an edge (C, d, A), the adjacency-list for vertices 4\nand 6 may have to be searched. If the graph is large and\nthe labels appear multiple times in a graph, there may be\nmany adjacency-lists for vertices of the same label, and the\nadjacency-lists are long.\nMoreover, for large graph database that cannot be held\ninto main memory, the adjacency-list representation of a\ngraph has to be loaded into main memory before the graph\ncan be searched.\nIn ADI-Mine, the graphs are stored in the ADI structure.\nThe edges are indexed by their labels. Then, the graphs that\ncontain the edges can be retrieved immediately. Moreover,\nall edges with the same labels are linked together by the\nlinks between the graph-id and the instances. That helps\nthe test of subgraph isomorphism substantially.\nFurthermore, using the index of edges by their labels, only\nthe graphs that contain the specific edge will be loaded into\nmain memory for further subgraph isomorphism test. 
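Putting the pieces together, the pattern-growth loop of Figures 2 and 6 can be summarized by the following Python skeleton. It is illustrative only: the helper methods assumed on the index object (frequent_edges, frequent_adjacent_edges) are not part of the paper's interface, patterns are kept abstract as edge lists, and the canonical-form test of Section 3.1 is left as a stub.

def is_minimum_dfs_code(pattern):
    # Stub for the canonical-form (minimum DFS code) test of Section 3.1.
    return True

def subgraph_mine(adi, pattern, fe, min_sup, emit):
    if not is_minimum_dfs_code(pattern):
        return          # the pattern is, or will be, generated from its minimum code elsewhere
    emit(pattern)
    for edge in fe:     # fe: frequent adjacent edges, retrieved through the index
        grown = pattern + [edge]
        fe2 = adi.frequent_adjacent_edges(grown, min_sup)   # assumed helper; updated via graph-id links
        subgraph_mine(adi, grown, fe2, min_sup, emit)

def adi_mine(adi, min_sup, emit):
    for edge in adi.frequent_edges(min_sup):                # assumed helper over the edge table
        fe = adi.frequent_adjacent_edges([edge], min_sup)
        subgraph_mine(adi, [edge], fe, min_sup, emit)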
Irrelevant\ngraphs can be filtered out immediately by the index.\nWhen the database is too large to fit into main memory, it\nsaves a substantial part of transfers of graphs between disk\nand main memory.\nEXPERIMENTAL RESULTS\nIn this section, we report a systematic performance study\non the ADI structure and a comparison of gSpan and ADI-Mine\non mining both small, memory-based databases and\nlarge, disk-based databases. We obtain the executable of\ngSpan from the authors. The ADI structure and algorithm\nADI-Mine are implemented using C/C++.\n6.1\nExperiment Setting\nAll the experiments are conducted on an IBM NetFinity\n5100 machine with an Intel PIII 733MHz CPU, 512M RAM\nand 18G hard disk. The speed of the hard disk is 10, 000\nRPM. The operating system is Redhat Linux 9.0.\nWe implement a synthetic data generator following the\nprocedure described in [6]. The data generator takes five\nparameters as follows.\nD:\nthe total number of graphs in the data set\nT :\nthe average number of edges in graphs\nI:\nthe average number of edges in potentially frequent\ngraph patterns (i.e., the frequent kernels)\nL:\nthe number of potentially frequent kernels\nN :\nthe number of possible labels\nPlease refer to [6] for the details of the data generator.\nFor example, a data set D10kN 4I10T 20L200 means that\nthe data set contains 10k graphs; there are 4 possible labels;\nthe average number of edges in the frequent kernel graphs\nis 10; the average number of edges in the graphs is 20; and\nthe number of potentially frequent kernels is 200. Hereafter\nin this section, when we say \"parameters\", it means the\nparameters for the data generator to create the data sets.\nIn [11], L is fixed to 200. In our experiments, we also set\nL = 200 as the default value, but will test the scalability of\nour algorithm on L as well.\nPlease note that, in all experiments, the runtime of ADI-Mine\nincludes both the ADI construction time and the mining\ntime.\n6.2\nMining Main Memory-based Databases\nIn this set of experiments, both gSpan and ADI-Mine run\nin main memory.\n322\nResearch Track Paper\n6.2.1\nScalability on Minimum Support Threshold\nWe test the scalability of gSpan and ADI-Mine on the minimum\nsupport threshold. Data set D100kN 30I5T 20L200 is\nused. The minimum support threshold varies from 4% to\n10%. The results are shown in Figure 7(a).\nAs can be seen, both gSpan and ADI-Mine are scalable,\nbut ADI-Mine is about 10 times faster. We discussed the\nresult with Mr. X. Yan, the author of gSpan. He confirms\nthat counting frequent edges in gSpan is time consuming.\nOn the other hand, the construction of ADI structure is\nrelatively efficient. When the minimum support threshold\nis set to 1, i.e., all edges are indexed, the ADI structure uses\napproximately 57M main memory and costs 86 seconds in\nconstruction.\n6.2.2\nScalability on Database Size\nWe test the scalability of gSpan and ADI-Mine on the\nsize of databases. We fix the parameters N = 30, I = 5,\nT = 20 and L = 200, and vary the number of graphs in\ndatabase from 50 thousand to 100 thousand. The minimum\nsupport threshold is set to 1% of the number of graphs in\nthe database. The results are shown in Figure 7(b). The\nconstruction time of ADI structure is also plotted in the\nfigure.\nBoth the algorithms and the construction of ADI structure\nare linearly scalable on the size of databases. ADI-Mine\nis faster. We observe that the size of ADI structure is also\nscalable. 
For example, it uses 28M when the database has 50\nthousand graphs, and 57M when the database has 100 thousand\ngraphs. This observation concurs with Theorem 1.\n6.2.3\nEffects of Data Set Parameters\nWe test the scalability of the two algorithms on parameter\nN --the number of possible labels. We use data set\nD100kN 20-50I5T 20L200, that is, the N value varies from\n20 to 50. The minimum support threshold is fixed at 1%.\nThe results are shown in Figure 7(c). Please note that the\nY -axis is in logarithmic scale.\nWe can observe that the runtime of gSpan increases exponentially\nas N increases. This result is consistent with\nthe result reported in [11].\n3\nWhen there are many possible\nlabels in the database, the search without index becomes\ndramatically more costly. Interestingly, both ADI-Mine and\nthe construction of ADI structure are linearly scalable on N .\nAs discussed before, the edge table in ADI structure only indexes\nthe unique edges in a graph database. Searching using\nthe indexed edge table is efficient. The time complexity of\nsearching an edge by labels is O(log n), where n is the number\nof distinct edges in the database. This is not affected by\nthe increase of the possible labels. As expected, the size of\nthe ADI structure is stable, about 57M in this experiment.\nWe use data set D100kN 30I5T 10-30L200 to test the scalability\nof the two algorithms on parameter T --the average\nnumber of edges in a graph. The minimum support threshold\nis set to 1%. The results are shown in Figure 7(d).\nAs the number of edges increases, the graph becomes more\ncomplex. The cost of storing and searching the graph also\nincreases accordingly. As shown in the figure, both algorithms\nand the construction of ADI are linearly scalable.\nWe also test the effects of other parameters. The experimental\nresults show that both gSpan and ADI-Mine are not\n3\nPlease refer to Figures 5(b) and 5(c) in the UIUC technical\nreport version of [11].\nsensitive to I--the average number of edges in potentially\nfrequent graph patterns--and L--the number of potentially\nfrequent kernels. The construction time and space cost of\nADI structures are also stable. The reason is that the effects\nof those two parameters on the distribution in the data\nsets are minor. Similar observations have been reported by\nprevious studies on mining frequent itemsets and sequential\npatterns. Limited by space, we omit the details here.\n6.3\nMining Disk-based Databases\nNow, we report the experimental results on mining large,\ndisk-based databases. In this set of experiments, we reserve\na block of main memory of fixed size for ADI structure.\nWhen the size is too small for the ADI-structure, some levels\nof the ADI structure are accommodated on disk. On the\nother hand, we do not confine the memory usage for gSpan.\n6.3.1\nScalability on Database Size\nWe test the scalability of both gSpan and ADI-Mine on the\nsize of databases. We use data set D100k-1mN 30I5T 20L200.\nThe number of graphs in the database is varied from 100\nthousand to 1 million. The main memory block for ADI\nstructure is limited to 250M. The results are shown in Figure\n8(a). The construction time of ADI structure is also\nplotted. Please note that the Y -axis is in logarithmic scale.\nThe construction runtime of ADI structure is approximately\nlinear on the database size. That is, the construction\nof the ADI index is highly scalable. We also measure\nthe size of ADI structure. The results are shown in Figure\n8(b). 
We can observe that the size of the ADI structure\nis linear to the database size. In this experiment, the ratio\nsize of ADI structure in megabytes\nnumber of graphs in thousands\nis about 0.6. When the\ndatabase size is 1 million, the size of ADI structure is 601M,\nwhich exceeds the main memory size of our machine. Even\nin such case, the construction runtime is still linear.\nAs explained before, the construction of ADI structure\nmakes sequential scans of the database and conducts a sequential\nwrite of the adjacency information. The overhead\nof construction of edge table and the linked lists of graph-ids\nis relatively small and thus has a minor effect on the\nconstruction time.\nWhile gSpan can handle databases of only up to 300 thousand\ngraphs in this experiment, ADI-Mine can handle\ndatabases of 1 million graphs. The curve of the runtime\nof ADI-Mine can be divided into three stages.\nFirst, when the database has up to 300 thousand graphs,\nthe ADI structure can be fully accommodated in main memory\n. ADI-Mine is faster than gSpan.\nSecond, when the database has 300 to 600 thousand graphs,\ngSpan cannot finish. The ADI structure cannot be fully held\nin main memory. Some part of the adjacency information is\nput on disk. We see a significant jump in the runtime curve\nof ADI-Mine between the databases of 300 thousand graphs\nand 400 thousand graphs.\nLast, when the database has 800 thousand or more graphs,\neven the linked lists of graph-ids cannot be fully put into\nmain memory. Thus, another significant jump in the runtime\ncurve can be observed.\n6.3.2\nTradeoff Between Efficiency and Main Memory\nConsumption\nIt is interesting to examine the tradeoff between efficiency\nand size of available main memory.\nWe use data set\n323\nResearch Track Paper\n0\n200\n400\n600\n800\n1000\n1200\n0\n2\n4\n6\n8\n10\nRuntime (second)\nmin_sup (%)\ngSpan\nADI-Mine\n0\n200\n400\n600\n800\n1000\n1200\n50 55 60 65 70 75 80 85 90 95 100\nRuntime (second)\nNumber of graphs (thousand)\ngSpan\nADI-Mine\nADI-construction\n10\n100\n1000\n10000\n100000\n20\n25\n30\n35\n40\n45\n50\nRuntime (second)\nN\ngSpan\nADI-Mine\nADI-construction\n0\n500\n1000\n1500\n2000\n10\n15\n20\n25\n30\nRuntime (second)\nT\ngSpan\nADI-Mine\nADI-construction\n(a) scalability on min sup\n(b) Scalability on size\n(c) Scalability on N\n(d) Scalability on T\nD100kN 30I5T 20L200\nD50-100kN 30I5T 20L200\nD100kN 20-50I5T 20L200\nD100kN 30I5T 10-30L200\nmin sup\n= 1%\nmin sup = 1%\nmin sup = 1%\nFigure 7: The experimental results of mining main memory-based databases.\n100\n1000\n10000\n100000\n100 200 300 400 500 600 700 800 900 1000\nRuntime (second)\nNumber of graphs (thousand)\ngSpan\nADI-Mine\nADI-construction\n0\n100\n200\n300\n400\n500\n600\n700\n100 200 300 400 500 600 700 800 900 1000\nSize (M)\nNumber of graphs (thousand)\nADI structure\n0\n200\n400\n600\n800\n1000\n0\n20\n40\n60\n80 100 120 140 160\nRuntime (s)\nSize of available main memory (M)\nADI-Mine\n(a) scalability on size\n(b) Size of ADI structure\n(c) Runtime vs. main memory\nD100k-1mN 30I5T 20L200\nD100k-1mN 30I5T 20L200\nD100kN 30I5T 20L200\nmin sup\n= 1%\nmin sup = 1%\nmin sup = 1%\nFigure 8: The experimental results of mining large disk-based databases.\nD100kN 30I5T 20L200, set the minimum support threshold\nto 1%, vary the main memory limit from 10M to 150M for\nADI structure, and measure the runtime of ADI-Mine. The\nresults are shown in Figure 8(c). In this experiment, the\nsize of ADI structure is 57M. The construction time is 86\nseconds. 
The highest watermark of main memory usage for\ngSpan in mining this data set is 87M. gSpan uses 1161 seconds\nin the mining if it has sufficient main memory.\nWhen the ADI structure can be completely loaded into\nmain memory (57M or larger), ADI-Mine runs fast. Further\nincrease of the available main memory cannot reduce the\nruntime.\nWhen the ADI structure cannot be fully put into main\nmemory, the runtime increases. The more main memory,\nthe faster ADI-Mine runs.\nWhen the available main memory is too small to even\nhold the linked lists of graph-ids, the runtime of ADI-Mine\nincreases substantially. However, it still can finish the mining\nwith 10M main memory limit in 2 hours.\n6.3.3\nNumber of Disk Block Reads\nIn addition to runtime, the efficiency of mining large disk-based\ndatabases can also be measured by the number of disk\nblock read operations.\nFigure 9(a) shows the number of disk block reads versus\nthe minimum support threshold. When the support threshold\nis high (e.g., 9% or up), the number of frequent edges\nis small. The ADI structure can be held into main memory\nand thus the I/O cost is very low. As the support threshold\ngoes down, larger and larger part of the ADI structure is\nstored on disk, and the I/O cost increases. This curve is\nconsistent with the trend in Figure 7(a).\nFigure 9(b) shows the number of disk block reads versus\nthe number of graphs in the database. As the database size\ngoes up, the I/O cost increases exponentially. This explains\nthe curve of ADI-Mine in Figure 8(a).\nWe also test the I/O cost on available main memory. The\nresult is shown in Figure 9(c), which is consistent with the\ntrend of runtime curve in Figure 8(c).\n6.3.4\nEffects of Other Parameters\nWe also test the effects of the other parameters on the\nefficiency. We observe similar trends as in mining memory-based\ndatabases. Limited by space, we omit the details here.\n6.4\nSummary of Experimental Results\nThe extensive performance study clearly shows the following\n. First, both gSpan and ADI-Mine are scalable when\ndatabase can be held into main memory. ADI-Mine is faster\nthan gSpan. Second, ADI-Mine can mine very large graph\ndatabases by accommodating the ADI structure on disk.\nThe performance of ADI-Mine on mining large disk-based\ndatabases is highly scalable. Third, the size of ADI structure\nis linearly scalable with respect to the size of databases.\nFourth, we can control the tradeoff between the mining efficiency\nand the main memory consumption. Last, ADI-Mine\nis more scalable than gSpan in mining complex graphs--the\ngraphs that have many different kinds of labels.\nRELATED WORK\nThe problem of finding frequent common structures has\nbeen studied since early 1990s. For example, [1, 7] study the\nthe problem of finding common substructures from chemical\ncompounds. SUBDUE [4] proposes an approximate algorithm\nto identify some, instead of the complete set of,\n324\nResearch Track Paper\n0\n100000\n200000\n300000\n400000\n500000\n600000\n700000\n0\n2\n4\n6\n8\n10\nNumber of blocks read\nmin_sup (%)\nADI-Mine\n0\n5e+07\n1e+08\n1.5e+08\n2e+08\n2.5e+08\n0\n200\n400\n600\n800\n1000\nNumber of blocks read\nNumber of graphs (thousand)\nADI-Mine\n0\n5e+06\n1e+07\n1.5e+07\n2e+07\n2.5e+07\n3e+07\n3.5e+07\n4e+07\n0\n20\n40\n60\n80 100 120 140 160\nNumber of blocks read\nSize of available main memory (M)\nADI-Mine\n(a) # blocks vs. support threshold\n(b) # blocks vs. database size\n(c) # blocks vs. 
main memory size\nD100kN 30I5T 20L200\nD100k-1mN 30I5T 20L200\nD100kN 30I5T 20L200\nmin sup\n= 1%\nmin sup = 1%\nFigure 9: The number of disk blocks read in the mining.\nfrequent substructures. However, these methods do not aim\nat scalable algorithms for mining large graph databases.\nThe problem of mining the complete set of frequent graph\npatterns is firstly explored by Inokuchi et al. [5]. An Apriori-like\nalgorithm AGM is proposed. Kuramochi and Karypis [6]\ndevelop an efficient algorithm, FSG, for graph pattern mining\n. The major idea is to utilize an effective graph representation\n, and conduct the edge-growth mining instead of\nvertex-growth mining. Both AGM and FSG adopt breadth-first\nsearch.\nRecently, Yan and Han propose the depth-first search approach\n, gSpan [11] for graph mining. They also investigate\nthe problem of mining frequent closed graphs [9], which is\na non-redundant representation of frequent graph patterns.\nAs a latest result, Yan et al. [10] uses frequent graph patterns\nto index graphs.\nAs a special case of graph mining, tree mining also receives\nintensive research recently.\nZaki [12] proposes the\nfirst algorithm for mining frequent tree patterns.\nAlthough there are quite a few studies on the efficient mining\nof frequent graph patterns, none of them addresses the\nproblem of effective index structure for mining large disk-based\ngraph databases. When the database is too large to\nfit into main memory, the mining becomes I/O bounded,\nand the appropriate index structure becomes very critical\nfor the scalability.\nCONCLUSIONS\nIn this paper, we study the problem of scalable mining\nof large disk-based graph database. The ADI structure, an\neffective index structure, is developed. Taking gSpan as a\nconcrete example, we propose ADI-Mine, an efficient algorithm\nadopting the ADI structure, to improve the scalability\nof the frequent graph mining substantially.\nThe ADI-Mine structure is a general index for graph mining\n. As future work, it is interesting to examine the effect of\nthe index structure on improving other graph pattern mining\nmethods, such as mining frequent closed graphs and mining\ngraphs with constraints. Furthermore, devising index structures\nto support scalable data mining on large disk-based\ndatabases is an important and interesting research problem\nwith extensive applications and industrial values.\nAcknowledgements\nWe are very grateful to Mr. Xifeng Yan and Dr. Jiawei Han\nfor kindly providing us the executable of gSpan and answering\nour questions promptly. We would like to thank the\nanonymous reviewers for their insightful comments, which\nhelp to improve the quality of the paper.\nREFERENCES\n[1] D.M. Bayada, R. W. Simpson, and A. P. Johnson. An\nalgorithm for the multiple common subgraph problem. J. of\nChemical Information & Computer Sci., 32:680685, 1992.\n[2] C. Borgelt and M.R. Berthold. Mining molecular\nfragments: Finding relevant substructures of molecules. In\nProc. 2002 Int. Conf. Data Mining (ICDM'02), Maebashi\nTERRSA, Maebashi City, Japan, Dec. 2002.\n[3] Thomas H. Cormen, Charles E. Leiserson, Ronald L.\nRivest, and Clifford Stein. Introduction to Algorithms,\nSecond Edition. MIT Press and McGraw-Hill, 2002.\n[4] L. B. Holder, D. J. Cook, and S. Djoko. Substructure\ndiscovery in the subdue system. In Proc. AAAI'94\nWorkshop Knowledge Discovery in Databases (KDD'94),\npages 359370, Seattle, WA, July 1994.\n[5] A. Inokuchi, T. Washio, and H. Motoda. 
An apriori-based\nalgorithm for mining frequent substructures from graph\ndata. In Proc. 2000 European Symp. Principle of Data\nMining and Knowledge Discovery (PKDD'00), pages\n1323, Lyon, France, Sept. 2000.\n[6] M. Kuramochi and G. Karypis. Frequent subgraph\ndiscovery. In Proc. 2001 Int. Conf. Data Mining\n(ICDM'01), pages 313320, San Jose, CA, Nov. 2001.\n[7] Y. Takahashi, Y. Satoh, and S. Sasaki. Recognition of\nlargest common fragment among a variety of chemical\nstructures. Analytical Sciences, 3:2338, 1987.\n[8] N. Vanetik, E. Gudes, and S.E. Shimony. Computing\nfrequent graph patterns from semistructured data. In Proc.\n2002 Int. Conf. Data Mining (ICDM'02), Maebashi\nTERRSA, Maebashi City, Japan, Dec. 2002.\n[9] X. Yan and J. Han. Closegraph: Mining closed frequent\ngraph patterns. In Proceedings of the 9th ACM SIGKDD\nInternational Conference on Knowledge Discovery and\nData Mining (KDD'03), Washington, D.C, 2003.\n[10] X. Yan, P.S. Yu, and J. Han. Graph indexing: A frequent\nstructure-based approach. In Proc. 2004 ACM SIGMOD\nInt. Conf. on Management of Data (SIGMOD'04), Paris,\nFrance, June 2004.\n[11] Y. Yan and J. Han. gspan: Graph-based substructure\npattern mining. In Proc. 2002 Int. Conf. on Data Mining\n(ICDM'02), Maebashi, Japan, December 2002.\n[12] M.J. Zaki. Efficiently mining frequent trees in a forest. In\nProc. 2002 Int. Conf. on Knowledge Discovery and Data\nMining (KDD'02), Edmonton, Alberta, Canada, July 2002.\n325\nResearch Track Paper\n", "keywords": "index;Edge table;Graph mining;Subgraph mine;Frequent graph pattern mining;Adjacency list representation;graph database;DFS code;ADI Index structure;frequent graph pattern;Gspan algorithm;Disk bases databases;GRaph databases;Memory based databases"} {"name": "174", "title": "Secure Access to IP Multimedia Services Using Generic Bootstrapping Architecture (GBA) for 3G & Beyond Mobile Networks", "abstract": "The IP Multimedia Subsystem (IMS) defined by Third Generation Partnership Projects (3GPP and 3GPP2) is a technology designed to provide robust multimedia services across roaming boundaries and over diverse access technologies with promising features like quality-of-service (QoS), reliability and security. The IMS defines an overlay service architecture that merges the paradigms and technologies of the Internet with the cellular and fixed telecommunication worlds. Its architecture enables the efficient provision of an open set of potentially highly integrated multimedia services, combining web browsing, email, instant messaging, presence, VoIP, video conferencing, application sharing, telephony, unified messaging, multimedia content delivery, etc. on top of possibly different network technologies. As such IMS enables various business models for providing seamless business and consumer multimedia applications. In this communication converged world, the challenging issues are security, quality of service (QoS) and management & administration. In this paper our focus is to manage secure access to multimedia services and applications based on SIP and HTTP on top of IP Multimedia Subsystem (IMS). These services include presence, video conferencing, messaging, video broadcasting, and push to talk etc. We will utilize Generic Bootstrapping Architecture (GBA) model to authenticate multimedia applications before accessing these multimedia services offered by IMS operators. 
We will make enhancement in GBA model to access these services securely by introducing Authentication Proxy (AP) which is responsible to implement Transport Layer Security (TLS) for HTTP and SIP communication. This research work is part of Secure Service Provisioning (SSP) Framework for IP Multimedia System at Fokus Fraunhofer IMS 3Gb Testbed.", "fulltext": "Introduction\nWith the emergence of mobile multimedia services, such as\nunified messaging, click to dial, across network multiparty\nconferencing and seamless multimedia streaming services, the\nconvergence of networks (i.e. fixedmobile convergence and\nvoicedata integration) has started, leading to an overall Internet\nTelecommunications convergence. In prospect of these global\ntrends, the mobile communications world has defined within the\nevolution of cellular systems an All-IP Network vision which\nintegrates cellular networks and the Internet. This is the IP\nMultimedia System (IMS) [1], namely overlay architecture for the\nprovision of multimedia services, such as VoIP (Voice over\nInternet Protocol) and videoconferencing on top of globally\nemerging 3G (Third Generation) broadband packet networks. The\nIP Multimedia System (IMS) which is standardized by Third\nGeneration Partnership Project (3GPP & 3GGP2) in releases 5 is\nan overlay network on top of GPRS/UMTS (General Packet\nRadio Systems/Universal Mobile Telecommunication Systems)\nnetworks and extended by ETSI TISPAN [2] for fixed line access\nnetwork within the Next Generation Network (NGN) architecture.\nThe IMS provides all IP Service Delivery Platform (SDP) for\nmobile multimedia services provisioning e.g. VoIP, Video-telephony\n, Multimedia conferencing, Mobile Content, Push-to-Talk\netc. and it is based on IETF protocols like SIP for session\ncontrol, Diameter for AAA (Authentication, Authorization, and\nAuditing) and SDP (Service Delivery Protocol), RTP etc.\nDifferent components and parts of IMS are highlighted in figure 1\nconsisting IMS Core (P-CSCF, I-CSCF, S-CSCF), IMS Client\n(UE) and Application & Media Servers along with the concept of\nhome network and visited network for roaming users on top of\ndifferent access networks technologies.\n\nThe security and data privacy is a big challenge when there is\nintegration of different networks and technologies. The\nintegration of different access technologies causes much\nvulnerability and hackers get access to steal financial and\nconfidential information. As these hackers networks are often\nbeyond the law enforcement agencies of the today's\ncommunication world. So the question arises how to prevent these\nhackers for performing such attacks on the corporate networks. In\norder to provide confidentiality, security and privacy, the 3G\nauthentication infrastructure is a valuable and milestone\ndevelopment and asset for 3G operators. This infrastructure\nconsists of authentication centre (AuC), the USIM (Universal\nSubscriber Identity Module) or ISIM (IP Multimedia Services\nIdentity Module) and AKA (Authentication and Key Agreement)\nProcedure.\nIt has recognized that this infrastructure could utilize to enable\napplication function in the network and on the user side to enable\nshared keys. Therefore, Third Generation Partnership Project\n(3GPP) has provided the bootstrapping of application security to\nauthenticate the subscriber by defining a Generic Bootstrapping\nArchitecture (GBA) [3] based on Authentication and Key\nAgreement (AKA) protocol. 
The GBA model can be utilized to\nauthenticate subscriber before accessing multimedia services and\napplications over HTTP. The candidate applications to use this\nbootstrapping mechanism include but are not restricted to\nsubscriber certificate distribution. These certificates supports\nservices including presence, conferencing, messaging and push to\ntalk etc. provided by mobile operators. The GBA model has\nenhanced by implementing Generic Authentication Architecture\n(GAA) [4] to provide secure assess over HTTP using TLS\n(Transport Layer Security).\nIn prospective of the advancement of telecommunication, the\nFraunhofer Fokus established a Third Generation & beyond (3Gb)\nTestbed and IMS Testbed [5] for research & development and\neducational purpose to provide state-of-the-art knowledge to\nengineers, researchers, educationists and technologists in this area\nof modern telecommunication. Fokus Fraunhofer has developed a\nSecure Service Provisioning (SSP) Framework [6] for IMS\nTestbed to provide security, privacy and authentication of\nsubscriber as well as confidential and protection to the network\nresources of 3G operators.\nThe paper is organised as: section 2 is about IMS as platform for\nmultimedia services, sections 3, 4 and 5 explain generic\nbootstrapping architecture, bootstrapping authentication\nprocedure and its application usage procedure respectively.\nSection 6 discusses the use of authentication proxy for\nimplementing TLS for securing multimedia services. In section 7,\nwe will discus briefly the IMS Testbed at Fokus and than\nconcludes the paper in last section.\nIMS - Platform for Next Generation Multimedia Services\nThe IMS defines service provision architecture, and it can be\nconsidered as the next generation service delivery platform. It\nconsists of modular design with open interfaces and enables the\nflexibility for providing multimedia services over IP technology.\nThe IMS does not standardize specific services but uses standard\nservice enablers e.g. presence, GLMS/XDMS etc. and supports\ninherently multimedia over IP, VoIP, Internet Multimedia and\npresence. In IMS architecture, SIP protocol use as the standard\nsignaling protocol that establishes controls, modifies and\nterminates voice, video and messaging sessions between two or\nmore participants. The related signaling servers in the architecture\nare referred to as Call State Control Functions (CSCFs) and\ndistinguished by their specific functionalities. It is important to\nnote that an IMS compliant end user system has to provide the\nnecessary IMS protocol support, namely SIP, and the service\nrelated media codecs for multimedia applications in addition to\nbasic connectivity support, e.g. GPRS, WLAN, etc. The IMS is\ndesigned to provide number of key capabilities required to enable\nnew IP services via mobile and fixed networks. The important key\nfunctionalities which enable new mobile IP services are:\nMultimedia session negotiation and management\nQuality of service management\nMobility\nmanagement\nService execution, control and interaction\nPrivacy and security management\n\nFigure 1:- IP Multimedia Subsystem (IMS) Architecture\n\nIn IMS specification, Application Server (AS) provides the\nservice logic and service creation environment for applications\nand services. The AS is intended to influence and maintain the\nvarious IMS SIP sessions on behalf of the services. It can behave\nas a termination point for signaling, redirecting or forwarding SIP\nrequests. 
It also can act as third party call control unit. Services in\nthis instance refer to IMS services, which are based on the IMS\nreference points (e.g. instant messaging, presence, conferencing\netc.). The advantage of application server is to enable IMS to\noperate in a more flexible and dynamic way, whereas the AS\nprovides more intelligence to the system. Most Application\nServers are closed boxes which map network functions (e.g. via\nOSA gateways) or signaling protocols (SIP) onto application\nprogramming interfaces based on a particular technology (Java,\n18\nCORBA, web-services). An alternative approach pursued by the\nOpen Mobile Alliance (OMA) is strongly related to the service\noriented methodology, which follows the top-down approach\nbeginning with service design down to service mapping over the\nunderlying network technologies. The SIP services can be\ndeveloped and deployed on a SIP application server using several\ntechnologies such as SIP servlets, Call Processing Language\n(CPL) script, SIP Common Gateway Interface (CGI) and JAIN\nAPIs.\n\nGeneric Bootstrapping Architecture (GBA)\nDifferent 3G Multimedia Services including video conferencing,\npresence, push to talk etc. has potential usage of Generic\nBootstrapping Architecture (GBA) to distribute subscriber\ncertificates. These certificates are used by mobile operators to\nauthenticate the subscriber before accessing the multimedia\nservices and applications. Now we discuss components, entities\nand interfaces of GBA.\n3.1 GBA Components and Entities\nThe GBA consists of five entities: UE (User Equipment), NAF\n(Network Authentication Function), BSF (Bootstrapping Server\nFunction) and HSS (Home Subscriber Server) and are explained\nbelow as specified in 3GPP standards (shown in figure 2).\nUser Equipment: UE is UICC (Universal Integrated Circuit\nCard) containing USIM or ISIM related information that supports\nHTTP Digest AKA (Authentication & Key Agreement) and NAF\n(Network Authentication Function) specific protocols. A USIM\n(Universal Subscriber Identity Module) is an application for\nUMTS mobile telephony running on a UICC smartcard which is\ninserted in a 3G mobile phone. It stores user subscriber\ninformation, authentication information and provides with storage\nspace for text messages. An IP Multimedia Services Identity\nModule (ISIM) is an application running on a UICC smartcard in\na 3G telephone in the IP Multimedia Subsystem (IMS). It contains\nparameters for identifying and authenticating the user to the IMS.\nThe ISIM application can co-exist with SIM and USIM on the\nsame UICC making it possible to use the same smartcard in both\nGSM networks and earlier releases of UMTS.\nBootstrapping Server Function (BSF): It hosts in a network\nelement under the control of mobile network operator. The BSF,\nHSS, and UEs participate in GBA in which a shared secret is\nestablished between the network and a UE by running the\nbootstrapping procedure. The shared secret can be used between\nNAFs and UEs, for example, for authentication purposes. A\ngeneric Bootstrapping Server Function (BSF) and the UE shall\nmutually authenticate using the AKA protocol, and agree on\nsession keys that are afterwards applied between UE and a\nNetwork Application Function (NAF). The BSF shall restrict the\napplicability of the key material to a specific NAF by using the\nkey derivation procedure. 
The key derivation procedure may be\nused with multiple NAFs during the lifetime of the key material.\nThe lifetime of the key material is set according to the local policy\nof the BSF. The BSF shall be able to acquire the GBA User\nsecurity Settings (GUSS) from HSS [3].\n\nFigure 2: Network Entities of GBA\n\nNetwork Authentication Function: NAF has the functionality to\nlocate and communicate securely with subscriber's BSF\n(Bootstrapping Server Function). It should be able to acquire a\nshared key material established between the UE and the BSF\nduring application specific protocol runs.\nHome Subscriber Server: HSS stores GBA user security settings\n(GUSSs). The GUSS is defined in such a way that interworking of\ndifferent operators for standardized application profiles is\npossible. It also supports operator specific application profiles\nwithout the standardized of existing application profiles. The\nGUSS shall be able to contain application-specific USSs that\ncontain parameters that relates to key selection indication,\nidentification or authorization information of one or more\napplications hosted by one ore more NAFs. Any other types of\nparameters are not allowed in the application-specific USS [3].\nDiameter-Proxy: In case where UE has contacted NAF of visited\nnetwork than home network, this visited NAF will use diameter\nproxy (D-Proxy) of NAFs network to communicate with\nsubscriber's BSF (i.e. home BSF). D-Proxy's general\nfunctionality requirements include [3]:\nD-Proxy functions as a proxy between visited NAF and\nsubscriber's home BSF and it will be able to locate subscriber's\nhome BSF and communicate with it over secure channel.\nThe D-Proxy will be able to validate that the visited NAF is\nauthorized to participate in GBA and shall be able to assert to\nsubscriber's home BSF the visited NAFs DNS name.\nThe D-Proxy shall also be able to assert to the BSF that the\nvisited NAF is authorized to request the GBA specific user\nprofiles contained in the NAF request.\n19\n\nFigure 3: Bootstrapping Authentication Procedure\n\n3.2 GBA Reference Points\nUb: The reference point Ub is between the UE and the BSF and\nprovides mutual authentication between them. It allows the UE to\nbootstrap the session keys based on 3GPP AKA infrastructure.\nThe HTTP Digest AKA protocol is used on the reference point\nUb. It is based on the 3GPP AKA [7] protocol.\nUa: The reference point Ua carries the application protocol, which\nis secured using the keys material agreed between UE and BSF as\na result of running of HTTP Digest AKA over reference point Ub.\nFor instance, in case of support for subscriber certificates, it is a\nprotocol, which allows the user to request certificates from NAF.\nIn this case, NAF would be PKI portal.\nZh: The reference point Zh used between BSF and HSS. It allows\nBSF to fetch the required authentication information and all GBA\nuser security settings from HSS. The interface to 3G\nAuthentication Centre is HSS-internal, and it need not be\nstandardised as part of this architecture.\nZn: The reference point Zn is used by the NAF to fetch the key\nmaterial agreed during a previous HTTP Digest AKA protocol run\nover the reference point Ub from the UE to the BSF. It is also\nused to fetch application-specific user security settings from the\nBSF, if requested by the NAF.\n\nBootstrapping Authentication Procedure\nThe UE and Network Authentication Function (NAF) have to\ndecide whether to use GBA before the start of communication\nbetween them. 
When UE wants to interact with NAF, it starts\ncommunication with NAF over Ua interface without GBA\nparameters. If NAF requires the use of shared keys obtained by\nmeans of GBA, but the request from UE does not include GBA-related\nparameters, the NAF replies with a bootstrapping initiation\nmessage [3]. When UE wants to interact with NAF, and it knows\n20\nthat the bootstrapping procedure is needed, it shall first perform a\nbootstrapping authentication as shown in figure 3. Otherwise, the\nUE shall perform a bootstrapping authentication only when it has\nreceived bootstrapping initiation required message or a\nbootstrapping negotiation indication from the NAF, or when the\nlifetime of the key in UE has expired. The UE sends an HTTP\nrequest to the BSF and the BSF retrieves the complete set of GBA\nuser security settings and one Authentication Vector (AV) [8] as\ngiven in equation 1 over the reference point Zh from the HSS.\nAV = RAND||AUTN||XRES||CK||IK\n------------------- Eq. 1\nAfter that BSF forwards the RAND and AUTN to the UE in the\n401 message without the CK, IK and XRES. This is to demand\nthe UE to authenticate itself. The UE checks AUTN to verify that\nthe challenge is from an authorized network; the UE also\ncalculates CK, IK and RES [8]. This will result in session keys IK\nand CK in both BSF and UE. The UE sends another HTTP\nrequest to the BSF, containing the Digest AKA response which is\ncalculated using RES.\nThe BSF authenticates the UE by verifying the Digest AKA\nresponse. The BSF generates key material Ks by concatenating\nCK and IK and it also generates B-TID (Bootstrapping\nTransaction Identifier) which is used to bind the subscriber\nidentity to the keying material in reference points Ua, Ub and Zn.\nThe BSF shall send a 200 OK message, including a B-TID to the\nUE to indicate the success of the authentication and the lifetime of\nthe key Ks. The key material Ks is generated in UE by\nconcatenating CK and IK. Both the UE and the BSF shall use the\nKs to derive the key material Ks-NAF which will be used for\nsecuring the reference point Ua. The Ks-NAF is computed as\nequation 2.\nKs-NAF = f\nKD\n(Ks, "gba-me", RAND, IMPI, NAF-ID) ----- Eq. 2\nwhere f\nKD\nis the key derivation function and will be implemented\nin the ME, and the key derivation parameters consist of user's\nIMPI, NAF-ID and RAND. The NAF-ID consists of the full DNS\nname of the NAF, concatenated with the Ua security protocol\nidentifier. The UE and the BSF shall store the key Ks with the\nassociated B-TID for further use, until the lifetime of Ks has\nexpired, or until the key Ks is updated [3].\nBootstrapping Usage Procedure\nBefore communication between the UE and the NAF can start, the\nUE and the NAF first have to agree whether to use shared keys\nobtained by means of the GBA. If the UE does not know whether\nto use GBA with this NAF, it uses the initiation of bootstrapping\nprocedure. Once the UE and the NAF have decided that they want\nto use GBA then every time the UE wants to interact with NAF.\nThe UE starts communication over reference point Ua with the\nNAF by supplying the B-TID to the NAF to allow the NAF to\nretrieve the corresponding keys from the BSF. The NAF starts\ncommunication over reference point Zn with BSF. The NAF\nrequests key material corresponding to the B-TID supplied by the\nUE to the NAF over reference point Ua. 
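To make the key material handling of equations 1 and 2 concrete before continuing with the Zn exchange, the following sketch (Python) shows how Ks and Ks-NAF could be computed on the UE and BSF side. HMAC-SHA-256 is used here only as an illustrative stand-in for the key derivation function f_KD, and the identities, hostnames and Ua protocol-identifier bytes are hypothetical examples; the normative input encoding is defined in TS 33.220 [3].

import hashlib
import hmac

def derive_ks(ck: bytes, ik: bytes) -> bytes:
    # Ks is the concatenation of CK and IK obtained from the HTTP Digest AKA run (context of Eq. 1).
    return ck + ik

def derive_ks_naf(ks: bytes, rand: bytes, impi: str, naf_id: bytes) -> bytes:
    # Sketch of Eq. 2: Ks-NAF = f_KD(Ks, "gba-me", RAND, IMPI, NAF-ID).
    # HMAC-SHA-256 stands in for f_KD; TS 33.220 specifies the real encoding.
    data = b"gba-me" + rand + impi.encode("utf-8") + naf_id
    return hmac.new(ks, data, hashlib.sha256).digest()

# Hypothetical example values.
ck, ik, rand = b"\x01" * 16, b"\x02" * 16, b"\x03" * 16
impi = "user@operator.example"
naf_id = b"naf.operator.example" + b"\x01\x00\x01\x00\x02"  # FQDN || Ua security protocol identifier (example bytes)

ks = derive_ks(ck, ik)
ks_naf = derive_ks_naf(ks, rand, impi, naf_id)

Both the UE and the BSF run the same derivation; the NAF never computes Ks-NAF itself but obtains it from the BSF over the Zn reference point, as described next.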
With the key material\nrequest, the NAF shall supply NAF's public hostname that UE has\nused to access NAF to BSF, and BSF shall be able to verify that\nNAF is authorized to use that hostname. The NAF may also\nrequest one or more application-specific USSs for the\napplications, which the request received over Ua from UE may\naccess.\nThe BSF derives the keys required to protect the protocol used\nover reference point Ua from the key Ks and the key derivation\nparameters. Than it supplies requested key Ks-NAF,\nbootstrapping time and the lifetime of the key to NAF. If the key\nidentified by the B-TID supplied by the NAF is not available at\nthe BSF, the BSF shall indicate this in the reply to the NAF. The\nNAF then indicates a bootstrapping renegotiation request to the\nUE. The BSF may also send the private user identity (IMPI) and\nrequested USSs to NAF according to the BSF's policy. The NAF\ncontinues with the protocol used over the reference point Ua with\nthe UE. Once the run of the protocol used over reference point Ua\nis completed the purpose of bootstrapping is fulfilled as it enabled\nUE and NAF to use reference point Ua in a secure way.\n\nFigure 4: Bootstrapping Application\n\n\nAuthentication Proxy Usage for Multimedia Services\nAuthentication Proxy (AP) is like a Network Authentication\nFunction (NAF) and performs the function of HTTP proxy for the\nUE. It is responsible to handle the Transport Layer Security (TLS)\nand implement the secure HTTP channel between AP and UE as\nshown in figure 5. It utilizes the generic bootstrapping\narchitecture to assure the application servers (ASs) that the\nrequest is coming from an authorized subscriber of mobile\n21\nnetwork operator. When HTTPS request is sent to AS through\nAP, the AP performs UE authentication. The AP may insert the\nuser identity when it forwards the request to application server.\nFigure 5b presents the architecture view of using AP for different\nIMS SIP services e.g. presence, messaging, conferencing etc.\nFigure 5: Authentication Proxy\nThe UE shall manipulate own data such as groups, through the\nUa/Ut reference point [4]. The reference point Ut will be\napplicable to data manipulation of IMS based SIP services, such\nas Presence, Messaging and Conferencing services. When the\nHTTPS client starts communication via Ua reference point with\nthe NAF, it shall establish a TLS tunnel with the NAF. The NAF\nis authenticated to the HTTPS client by means of a public key\ncertificate. The HTTPS client will verify that the server certificate\ncorresponds to the FQDN (Fully Qualified Domain Name) of the\nAP it established the tunnel with. We explain the procedure\nbriefly as:\nThe HTTPS client sends an HTTP request to NAF inside the TLS\ntunnel. In response to HTTP request over Ua interface, the AP\nwill invoke HTTP digest with HTTPS client in order to perform\nclient authentication using the shared keys. On the receipt of\nHTTPS digest from AP, the client will verify that the FDQN\ncorresponds the AP it established the TLS connection with, if not\nthe client will terminate the TLS connection with the AP. In this\nway the UE and AP are mutually authenticated as the TLS tunnel\nendpoints.\nNow we discuss an example that application residing on UICC\n(Universal Integrated Circuit Card) may use TLS over HTTP in\nGeneric Authentication Architecture (GAA) mechanism to secure\nits communication with Authentication Proxy (AP). 
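As a complement to the Ua run between the HTTPS client and the AP described above, the sketch below (Python) illustrates the HTTP Digest computation that both endpoints would perform inside the TLS tunnel. The use of the B-TID as the Digest username and a base64 encoding of Ks-NAF as the password is a common GBA convention rather than something stated in the text above, and all concrete values are hypothetical.

import base64
import hashlib

def md5_hex(text: str) -> str:
    return hashlib.md5(text.encode("utf-8")).hexdigest()

def digest_response(username, realm, password, method, uri, nonce, nc, cnonce, qop="auth"):
    # Standard RFC 2617 Digest; in the GBA case the bootstrapped key takes the place of a user password.
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Hypothetical values; ks_naf would come from the derivation sketched earlier.
ks_naf = b"\x04" * 32
response = digest_response(
    username="btid-example@bsf.operator.example",    # B-TID returned by the BSF
    realm="ap.operator.example",                      # FQDN of the Authentication Proxy
    password=base64.b64encode(ks_naf).decode("ascii"),
    method="GET",
    uri="/presence/list",
    nonce="servernonce1", nc="00000001", cnonce="clientnonce1",
)

The AP, having fetched the same Ks-NAF over Zn, recomputes the response to authenticate the client, while the client only proceeds if the server certificate FQDN matches the AP it established the TLS tunnel with.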
The GBA\nsecurity association between a UICC-based application and AP\ncould establish as:\n\nFigure 6: HTTPS and BIP (Bearer Independent Protocol)\nProcedures\nThe ME (Mobile Equipment) executes the bootstrapping\nprocedure with the BSF supporting the Ub reference point. The\nUICC, which hosts the HTTPS client, runs the bootstrapping\nusage procedure with AP supporting the Ua reference point [9].\nFigure 6 shows the use of BIP (Bearer Independent Protocol) to\nestablish the HTTPS connection between UICC and AP. When\nUICC opens channel with AP as described in [10] than an active\nTCP/IP connection establishes between UICC and AP.\n\nFokus IMS Testbed\nIn face of the current challenges within telecommunications\nmarket are mainly consequences of insufficient early access to\nnew enabling technologies by all market players. , In this\ndevelopment Fraunhofer Institute FOKUS, known as a leading\nresearch institute in the field of open communication systems, has\nestablished with support of German Ministry of Education and\nResearch (BMBF) a 3G beyond Testbed, known as \"National\nHost for 3Gb Applications\". This Testbed provides technologies\nand related know-how in the field of fixed and wireless next\ngeneration network technologies and related service delivery\n22\nplatforms. As a part of 3Gb Testbed, the FOKUS Open IMS\nPlayground is deployed as an open technology test field with the\ntarget to validate existing and emerging IMS standards and to\nextend the IMS appropriately to be used on top of new access\nnetworks as well as to provide new seamless multimedia\napplications [11]. All major IMS core components, i.e., x-CSCF,\nHSS, MG, MRF, Application Servers, Application Server\nSimulators, service creation toolkits, and demo applications are\nintegrated into one single environment and can be used and\nextended for R&D activities by academic and industrial partners.\nAll these components can be used locally on top of all available\naccess technologies or can be used over IP tunnels remotely.\nUsers of the \"Open IMS playground\" can test their components\nperforming interoperability tests. The SIP Express Router (SER),\none of the fastest existing SIP Proxies, can be used as a reference\nimplementation and to proof interoperability with other SIP\ncomponents [11]. The major focal point of IMS Playground is to\nput Application Server aside. Varieties of platforms enable rapid\ndevelopment of innovative services.\n\nFigure 7: Fokus IMS Testbed\n\nThe Open IMS playground is deployed as an open technology test\nfield with the target to develop prototype and validate existing and\nemerging NGN/IMS standard components. It extends the IMS\narchitecture and protocols appropriately to be used on top of new\naccess networks as well as to provide new seamless multimedia\napplications. It is important to stress that all components have\nbeen developed by FOKUS as reference implementations, such as\nan own open source IMS core system (to be publicly released in\n2006 based on the famous SIP Express Router), IMS Clients and\napplication servers (SIPSee), and HSS. The IMS playground is\nused on the one hand as the technology basis for own industry\nprojects performed for national and international vendors and\nnetwork operators as well as for more mid term academic R&D\nprojects in the European IST context. In addition, the playground\nis used by others as well, i.e. FOKUS is providing consultancy\nand support services around the IMS playground. Users of the\n\"Open IMS playground\", e.g. 
vendors, are testing their\ncomponents performing interoperability and benchmarking tests.\nApplication developers are developing new IMS applications\nbased on various programming platforms provided, i.e.\nIN/CAMEL, OSA/Parlay, JAIN, SIP Servlets, etc., and gain a\nproof of concept implementation.. The different platform options,\neach with their strengths and weaknesses, can be selected and\nused according to the customers' needs. Figure 7 displays the\nOpen IMS playground partner components.\n\nConclusion\nIn this paper, we have presented the architecture of secure access\nand authentication of IP Multimedia Services based of SIP and\nHTTP communication using GBA (Generic Bootstrapping\nArchitecture) as recommended by 3GPP and TISPAN as a part of\nSecure Service Provisioning (SSP) Framework of IMS at Fokus\nFraunhofer IMS and 3Gb Testbed.\n\nREFERENCES\n[1] Third Generation Partnership Project; Technical\nSpecification Group Services and System Aspects; TS\n23.228 IP Multimedia Subsystem (IMS), Stage 2 / 3GPP2\nX.S0013-002-0 v1.0, www.3gpp.org.\n[2] ETSI TISPAN (Telecommunications and Internet converged\nServices and Protocols for Advanced Networking) WG\nhttp://portal.etsi.org/tispan/TISPAN_ToR.asp.\n[3] Third Generation Partnership Project; Technical\nSpecification Group Services and System Aspects; Generic\nAuthentication Architecture (GAA); Generic Bootstrapping\nArchitecture (GBA) (Release 7), 3GPP TS 33.220 V7\n(2005).\n[4] Third Generation Partnership Project; Technical\nSpecification Group Services and System Aspects; Generic\nAuthentication Architecture (GAA); Access to Network\nApplication Functions using Hypertext Transfer Protocol\nover Transport Layer Security (HTTPS) (Release 7), 3GPP\nTS 33.222 V7 (2005).\n[5] Third Generation & Beyond (3Gb) Testbed,\nwww.fokus.fraunhofer.de/national_host &\n\nIP Multimedia System (IMS) Playground\nwww.fokus.fraunhofer.de/ims.\n[6] M. Sher, T. Magedanz, \"Secure Service Provisioning\nFramework (SSPF) for IP Multimedia System and Next\nGeneration Mobile Networks\" 3rd International Workshop in\nWireless Security Technologies, London, U.K. (April 2005),\nIWWST'05 Proceeding (101-106), ISSN 1746-904X.\n[7] Third Generation Partnership Project; Technical\nSpecification Group Services and System Aspects; 3G\nSecurity; Security Architecture (Release 6); 3GPP, TS\n33.102 V6 (2004).\n[8] M. Sher, T. Magedanz: "Network Access Security\nManagement (NASM) Model for Next Generation Mobile\nTelecommunication Networks", IEEE/IFIP MATA'2005, 2\nnd\n\nInternational Workshop on Mobility Aware Technologies\nand Applications - Service Delivery Platforms for Next\nGeneration Networks, Montreal, Canada, October 17-19,\n2005, Proceeding Springer-Verlag LNCS 3744-0263, Berlin\n23\nHeidelberg 2005, pp. 263-272.\nhttp://www.congresbcu.com/mata2005\n[9] Third Generation Partnership Project; Technical\nSpecification Group Services and System Aspects; Generic\nAuthentication Architecture (GAA); Early Implementation of\nHTTPS Connection between a Universal Integrated Circuit\nCard (UICC) and Network Application Function (NAF)\n(Release 7), 3GPP TR 33.918 V7 (2005).\n[10] Third Generation Partnership Project; Technical\nSpecification Group Core Network and Terminals; Universal\nSubscriber Identity Module (USIM) Application Toolkit\n(USAT) (Release 7), 3GPP TS 31.111 V7 (2005).\n[11] K. Knttel, T.Magedanz, D. Witszek: \"The IMS Playground\n@ Fokus an Open Testbed for Next Generation Network\nMultimedia Services\", 1\nst\nInt. 
IFIP Conference on Testbeds\nand Research Infrastructures for the Development of\nNetworks and Communities (Tridentcom), Trento, Italian,\nFebruary 23 - 25, 2005, Proceedings pp. 2 11, IBSN 0-7695\n-2219-x, IEEE Computer Society Press, Los Alamitos,\nCalifornia.\nAcronyms\n3GPP\nThird Generation Partnership Project\n3GPP2\nThird Generation Partnership Project 2\nAAA\nAuthentication, Authorisation, and Accounting\nAKA\nAuthentication and Key Agreement\nAP Authentication\nProxy\nAS\nApplication Server\nAuC Authentication\nCentre\nAV Authentication\nFunction\nBGA\nGeneric Bootstrapping Architecture\nBSF\nBootstrapping Server Function\nB-TID\nBootstrapping Transaction Identifier\nCAMEL\nCustomized Applications for Mobile Enhanced Logic\nCGI\nCommon Gateway Interface\nCK Cipher\nKey\nCORBA\nCommon Object Request Broker Architecture\nCPL\nCall Programming Language\nCSCFs\nCall State Control Functions\nDNS\nDomain Name Server\nFMC\nFixed Mobile Convergence\nFQDN\nFully Qualified Domain Name\nGAA\nGeneric Authentication Architecture\nGPRS\nGeneral Packet Radio System\nGUSS\nGBA User Security Settings\nHSS\nHome Subscriber Server\nHTTP\nHyper Text Transfer Protocol\nHTTPS\nHTTP Secure ( HTTP over TLS)\nICSCF\nInterrogating Call State Control Function\nIETF\nInternet Engineering Task Force\nIK Integrity\nKey\nIM\nIP Multimedia\nIMPI\nIP Multimedia Private Identity\nIMS\nIP Multimedia Subsystem\nIN\nIntelligent Network\nIP Internet\nProtocol\nISIM\nIM Service Identity Module\nKs Session\nKey\nME Mobile\nEquipment\nMG Media\nGate\nMRF\nMedia Resource Function\nNAF\nNetwork Authentication Function\nNGN\nNext Generation Network\nOMA\nOpen Mobile Alliance\nOSA\nOpen Service Access\nPCSCF\nProxy Call State Control Function\nPDP\nPacket Data Protocol\nPoC\nPPT over Cellular\nPTT\nPush To Talk\nQoS\nQuality of Service\nRES Response\nRTP\nReal-time Transport Protocol\nSCSCF\nServing Call State Control Function\nSDP\nService Delivery Platform\nSER\nSIP Express Router\nSIP\nSession Initiation Protocol\nSSP\nSecure Service Provisioning\nTCP\nTransmission Control Protocol\nTISPAN\nTelecoms & Internet converged Services & Protocols\nfor Advanced Networks\nTLS\nTransport Layer Security\nUE User\nEquipment\nUICC\nUniversal Integrated Circuit Card\nUMTS\nUniversal Mobile Telecommunication Standard\nUSIM\nUniversal Subscriber Identity Module\nWLAN\nWireless Local Area Network\n\n\n24\n", "keywords": "TLS Tunnel end points;Generic Authentication Architecture;GLMS/XDMS;General bootstrapping architecture;Transport Layer Security;Network authentication function;Signalling protocols;Generic Bootstrapping Architecture;Authentication Proxy;GBA;Diameter proxy;Transport layer security;IP Multimedia System;Authentication proxy;TLS;IP multimedia subsystem;IMS platform;Fokus IMS Testbed;NAF;AP;Security and Privacy"} {"name": "175", "title": "Secure Hierarchical In-Network Aggregation in Sensor Networks", "abstract": "In-network aggregation is an essential primitive for performing queries on sensor network data. However, most aggregation algorithms assume that all intermediate nodes are trusted. In contrast, the standard threat model in sensor network security assumes that an attacker may control a fraction of the nodes, which may misbehave in an arbitrary (Byzantine) manner. We present the first algorithm for provably secure hierarchical in-network data aggregation. 
Our algorithm is guaranteed to detect any manipulation of the aggregate by the adversary beyond what is achievable through direct injection of data values at compromised nodes. In other words, the adversary can never gain any advantage from misrepresenting intermediate aggregation computations. Our algorithm incurs only O(log2n) node congestion, supports arbitrary tree-based aggregator topologies and retains its resistance against aggregation manipulation in the presence of arbitrary numbers of malicious nodes. The main algorithm is based on performing the SUM aggregation securely by first forcing the adversary to commit to its choice of intermediate aggregation results, and then having the sensor nodes independently verify that their contributions to the aggregate are correctly incorporated. We show how to reduce secure MEDIAN , COUNT , and AVERAGE to this primitive.", "fulltext": "INTRODUCTION\nWireless sensor networks are increasingly deployed in security-critical\napplications such as factory monitoring, environmental monitoring\n, burglar alarms and fire alarms. The sensor nodes for these\napplications are typically deployed in unsecured locations and are\nnot made tamper-proof due to cost considerations. Hence, an adversary\ncould undetectably take control of one or more sensor nodes\nand launch active attacks to subvert correct network operations.\nSuch environments pose a particularly challenging set of constraints\nfor the protocol designer: sensor network protocols must be highly\nenergy efficient while being able to function securely in the presence\nof possible malicious nodes within the network.\nIn this paper we focus on the particular problem of securely and\nefficiently performing aggregate queries (such as\nMEDIAN\n,\nSUM\nand\nAVERAGE\n) on sensor networks. In-network data aggregation is\nan efficient primitive for reducing the total message complexity of\naggregate sensor queries. For example, in-network aggregation of\nthe\nSUM\nfunction is performed by having each intermediate node\nforward a single message containing the sum of the sensor readings\nof all the nodes downstream from it, rather than forwarding each\ndownstream message one-by-one to the base station. The energy\nsavings of performing in-network aggregation have been shown to\nbe significant and are crucial for energy-constrained sensor networks\n[9, 11, 20].\nUnfortunately, most in-network aggregation schemes assume that\nall sensor nodes are trusted [12, 20]. An adversary controlling just\na few aggregator nodes could potentially cause the sensor network\nto return arbitrary results, thus completely subverting the function\nof the network to the adversary's own purposes.\nDespite the importance of the problem and a significant amount\nof work on the area, the known approaches to secure aggregation\neither require strong assumptions about network topology or adversary\ncapabilities, or are only able to provide limited probabilistic\nsecurity properties. For example, Hu and Evans [8] propose\na secure aggregation scheme under the assumption that at most a\nsingle node is malicious. Przydatek et al. [17] propose Secure Information\nAggregation (SIA), which provides a statistical security\nproperty under the assumption of a single-aggregator model. In the\nsingle-aggregator model, sensor nodes send their data to a single\naggregator node, which computes the aggregate and sends it to the\nbase station. 
This form of aggregation reduces communications\nonly on the link between the aggregator and the base station, and is\nnot scalable to large multihop sensor deployments. Most of the algorithms\nin SIA (in particular,\nMEDIAN\n,\nSUM\nand\nAVERAGE\n) cannot\nbe directly adapted to a hierarchical aggregation model since\n278\nthey involve sorting all of the input values; the final aggregator in\nthe hierarchy thus needs to access all the data values of the sensor\nnodes.\nIn this paper, we present the first provably secure sensor network\ndata aggregation protocol for general networks and multiple adver-sarial\nnodes. The algorithm limits the adversary's ability to manipulate\nthe aggregation result with the tightest bound possible for\ngeneral algorithms with no knowledge of the distribution of sensor\ndata values. Specifically, an adversary can gain no additional\ninfluence over the final result by manipulating the results of the\nin-network aggregate computation as opposed to simply reporting\nfalse data readings for the compromised nodes under its control.\nFurthermore, unlike prior schemes, our algorithm is designed for\ngeneral hierarchical aggregator topologies and multiple malicious\nsensor nodes. Our metric for communication cost is congestion,\nwhich is the maximum communication load on any node in the\nnetwork. Let n be the number of nodes in the network, and\nbe\nthe maximum degree of any node in the aggregation tree. Our algorithm\ninduces only O\n(log\n2\nn\n) node congestion in the aggregation\ntree.\nRELATED WORK\nResearchers have investigated resilient aggregation algorithms to\nprovide increased likelihood of accurate results in environments\nprone to message loss or node failures. This class of algorithms\nincludes work by Gupta et al. [7], Nath et al. [15], Chen et al. [3]\nand Manjhi et al. [14].\nA number of aggregation algorithms have been proposed to ensure\nsecrecy of the data against intermediate aggregators. Such algorithms\nhave been proposed by Girao et al. [5], Castelluccia et\nal. [2], and Cam et al. [1].\nHu and Evans [8] propose securing in-network aggregation against\na single Byzantine adversary by requiring aggregator nodes to forward\ntheir inputs to their parent nodes in the aggregation tree. Jadia\nand Mathuria [10] extend the Hu and Evans approach by incorporating\nprivacy, but also considered only a single malicious node.\nSeveral secure aggregation algorithms have been proposed for\nthe single-aggregator model. Przydatek et al. [17] proposed Secure\nInformation Aggregation (SIA) for this topology. Also for the\nsingle-aggregator case, Du et al. [4] propose using multiple witness\nnodes as additional aggregators to verify the integrity of the\naggregator's result. Mahimkar and Rappaport [13] also propose\nan aggregation-verification scheme for the single-aggregator model\nusing a threshold signature scheme to ensure that at least t of the\nnodes agree with the aggregation result. Yang et al. [19] describe\na probabilistic aggregation algorithm which subdivides an aggregation\ntree into subtrees, each of which reports their aggregates\ndirectly to the base station. Outliers among the subtrees are then\nprobed for inconsistencies.\nWagner [18] addressed the issue of measuring and bounding malicious\nnodes' contribution to the final aggregation result. 
The paper\nmeasures how much damage an attacker can inflict by taking\ncontrol of a number of nodes and using them solely to inject erroneous\ndata values.\nPROBLEM MODEL\nIn general, the goal of secure aggregation is to compute aggregate\nfunctions (such as\nSUM\n,\nCOUNT\nor\nAVERAGE\n) of the sensed\ndata values residing on sensor nodes, while assuming that a portion\nof the sensor nodes are controlled by an adversary which is\nattempting to skew the final result. In this section, we present the\nformal parameters of the problem.\n3.1\nNetwork Assumptions\nWe assume a general multihop network with a set S\n= {s\n1\n,...,s\nn\n}\nof n sensor nodes and a single (untrusted) base station R, which is\nable to communicate with the querier which resides outside of the\nnetwork. The querier knows the total number of sensor nodes n,\nand that all n nodes are alive and reachable.\nWe assume the aggregation is performed over an aggregation\ntree which is the directed tree formed by the union of all the paths\nfrom the sensor nodes to the base station (one such tree is shown\nin Figure 1(a)). These paths may be arbitrarily chosen and are not\nnecessarily shortest paths. The optimisation of the aggregation tree\nstructure is out of the scope of this paper--our algorithm takes the\nstructure of the aggregation tree as given. One method for constructing\nan aggregation tree is described in TaG [11].\n3.2\nSecurity Infrastructure\nWe assume that each sensor node has a unique identifier s and\nshares a unique secret symmetric key K\ns\nwith the querier. We further\nassume the existence of a broadcast authentication primitive\nwhere any node can authenticate a message from the querier. This\nbroadcast authentication could, for example, be performed using\nTESLA [16]. We assume the sensor nodes have the ability to perform\nsymmetric-key encryption and decryption as well as computations\nof a collision-resistant cryptographic hash function H.\n3.3\nAttacker Model\nWe assume that the attacker is in complete control of an arbitrary\nnumber of sensor nodes, including knowledge of all their secret\nkeys. The attacker has a network-wide presence and can record and\ninject messages at will. The sole goal of the attacker is to launch\nwhat Przydatek et al. [17] call a stealthy attack, i.e., to cause the\nquerier to accept a false aggregate that is higher or lower than the\ntrue aggregate value.\nWe do not consider denial-of-service (DoS) attacks where the\ngoal of the adversary is to prevent the querier from getting any\naggregation result at all. While such attacks can disrupt the normal\noperation of the sensor network, they are not as potentially\nhazardous in security-critical applications as the ability to cause\nthe operator of the network to accept arbitrary data. Furthermore,\nany maliciously induced extended loss of service is a detectable\nanomaly which will (eventually) expose the adversary's presence\nif subsequent protocols or manual intervention do not succeed in\nresolving the problem.\n3.4\nProblem Definition and Metrics\nEach sensor node s\ni\nhas a data value a\ni\n. We assume that the\ndata value is a non-negative bounded real value a\ni\n[0,r] for some\nmaximum allowed data value r. The objective of the aggregation\nprocess is to compute some function f over all the data values,\ni.e., f\n(a\n1\n,...,a\nn\n). 
Note that for the\nSUM\naggregate, the case where\ndata values are in a range\n[r\n1\n,r\n2\n] (where r\n1\n,r\n2\ncan be negative)\nis reducible to this case by setting r\n= r\n2\n- r\n1\nand add nr\n1\nto the\naggregation result.\nDefinition 1 A direct data injection attack occurs when an attacker\nmodifies the data readings reported by the nodes under its direct\ncontrol, under the constraint that only legal readings in\n[0,r] are\nreported.\nWagner [18] performed a quantitative study measuring the effect\nof direct data injection on various aggregates, and concludes\nthat the aggregates addressed in this paper (truncated\nSUM\nand\nAV\nERAGE\n,\nCOUNT\nand\nQUANTILE\n) can be resilient under such attacks\n.\n279\nWithout domain knowledge about what constitutes an anomalous\nsensor reading, it is impossible to detect a direct data injection\nattack, since they are indistinguishable from legitimate sensor readings\n[17, 19]. Hence, if a secure aggregation scheme does not make\nassumptions on the distribution of data values, it cannot limit the\nadversary's capability to perform direct data injection. We can thus\ndefine an optimal level of aggregation security as follows.\nDefinition 2 An aggregation algorithm is optimally secure if, by\ntampering with the aggregation process, an adversary is unable to\ninduce the querier to accept any aggregation result which is not\nalready achievable by direct data injection.\nAs a metric for communication overhead, we consider node congestion\n, which is the worst case communication load on any single\nsensor node during the algorithm. Congestion is a commonly\nused metric in ad-hoc networks since it measures how quickly the\nheaviest-loaded nodes will exhaust their batteries [6, 12]. Since the\nheaviest-loaded nodes are typically the nodes which are most essential\nto the connectivity of the network (e.g., the nodes closest to\nthe base station), their failure may cause the network to partition\neven though other sensor nodes in the network may still have high\nbattery levels. A lower communication load on the heaviest-loaded\nnodes is thus desirable even if the trade-off is a larger amount of\ncommunication in the network as a whole.\nFor a lower bound on congestion, consider an unsecured aggregation\nprotocol where each node sends just a single message to\nits parent in the aggregation tree. This is the minimum number\nof messages that ensures that each sensor node contributes to the\naggregation result. There is\n(1) congestion on each edge on the\naggregation tree, thus resulting in\n(d) congestion on the node(s)\nwith highest degree d in the aggregation tree. The parameter d is\ndependent on the shape of the given aggregation tree and can be as\nlarge as\n(n) for a single-aggregator topology or as small as (1)\nfor a balanced aggregation tree. Since we are taking the aggregation\ntree topology as an input, we have no control over d. Hence,\nit is often more informative to consider per-edge congestion, which\ncan be independent of the structure of the aggregation tree.\nConsider the simplest solution where we omit aggregation altogether\nand simply send all data values (encrypted and authenticated\n) directly to the base station, which then forwards it to the\nquerier. This provides perfect data integrity, but induces O\n(n) congestion\nat the nodes and edges nearest the base station. 
For an algorithm\nto be practical, it must cause only sublinear edge congestion.\nOur goal is to design an optimally secure aggregation algorithm\nwith only sublinear edge congestion.\nTHE SUM ALGORITHM\nIn this section we describe our algorithm for the\nSUM\naggregate,\nwhere the aggregation function f is addition. Specifically, we wish\nto compute a\n1\n+ + a\nn\n, where a\ni\nis the data value at node i. We\ndefer analysis of the algorithm properties to Section 5, and discuss\nthe application of the algorithm to other aggregates such as\nCOUNT\n,\nAVERAGE\nand\nMEDIAN\nin Section 6.\nWe build on the aggregate-commit-prove framework described\nby Przydatek et al. [17] but extend their single aggregator model\nto a fully distributed setting. Our algorithm involves computing a\ncryptographic commitment structure (similar to a hash tree) over\nthe data values of the sensor nodes as well as the aggregation process\n. This forces the adversary to choose a fixed aggregation topology\nand set of aggregation results. The individual sensor nodes\nthen independently audit the commitment structure to verify that\ntheir respective contributions have been added to the aggregate. If\nthe adversary attempts to discard or reduce the contribution of a\nlegitimate sensor node, this necessarily induces an inconsistency\nin the commitment structure which can be detected by the affected\nnode. This basic approach provides us with a lower bound for the\nSUM\naggregate. To provide an upper-bound for\nSUM\n, we can re-use\nthe same lower-bounding approach, but on a complementary\naggregate called the\nCOMPLEMENT\naggregate. Where\nSUM\nis defined\nas\na\ni\n,\nCOMPLEMENT\nis defined as\n(r - a\ni\n) where r is the\nupper bound on allowable data values. When the final aggregates\nare computed, the querier enforces the constraint that\nSUM\n+\nCOM\nPLEMENT\n= nr. Hence any adversary that wishes to increase\nSUM\nmust also decrease\nCOMPLEMENT\n, and vice-versa, otherwise the\ndiscrepancy will be detected. Hence, by enforcing a lower-bound\non\nCOMPLEMENT\n, we are also enforcing an upper-bound on\nSUM\n.\nThe overall algorithm has three main phases: query dissemination\n, aggregation-commit, and result-checking.\nQuery dissemination. The base station broadcasts the query to\nthe network. An aggregation tree, or a directed spanning tree over\nthe network topology with the base station at the root, is formed as\nthe query is sent to all the nodes, if one is not already present in the\nnetwork.\nAggregation commit. In this phase, the sensor nodes iteratively\nconstruct a commitment structure resembling a hash tree. First, the\nleaf nodes in the aggregation tree send their data values to their parents\nin the aggregation tree. Each internal sensor node in the aggregation\ntree performs an aggregation operation whenever it has\nheard from all its child sensor nodes. Whenever a sensor node s\nperforms an aggregation operation, s creates a commitment to the\nset of inputs used to compute the aggregate by computing a hash\nover all the inputs (including the commitments that were computed\nby the children of s). Both the aggregation result and the commitment\nare then passed on to the parent of s. After the final commitment\nvalues are reported to the base station (and thus also to the\nquerier), the adversary cannot subsequently claim a different aggregation\nstructure or result. 
We describe an optimisation to ensure\nthat the constructed commitment trees are perfectly balanced, thus\nrequiring low congestion overhead in the next phase.\nResult-checking. The result-checking phase is a novel distributed\nverification process. In prior work, algorithms have relied on the\nquerier to issue probes into the commitment structure to verify its\nintegrity [17, 19]. This induces congestion nearest the base station,\nand moreover, such algorithms yield at best probabilistic security\nproperties. We show that if the verification step is instead fully distributed\n, it is possible to achieve provably optimal security while\nmaintaining sublinear edge congestion.\nThe result-checking phase proceeds as follows. Once the querier\nhas received the final commitment values, it disseminates them to\nthe rest of the network in an authenticated broadcast. At the same\ntime, sensor nodes disseminate information that will allow their\npeers to verify that their respective data values have been incorporated\ninto the aggregate. Each sensor node is responsible for\nchecking that its own contribution was added into the aggregate.\nIf a sensor node determines that its data value was indeed added\ntowards the final sum, it sends an authentication code up the aggregation\ntree towards to the base station. Authentication codes are aggregated\nalong the way with the XOR function for communication\nefficiency. When the querier has received the XOR of all the authentication\ncodes, it can then verify that all the sensor nodes have\nconfirmed that the aggregation structure is consistent with their data\nvalues. If so, then it accepts the aggregation result.\nWe now describe the details of each of the three phases in turn.\n280\n(a) Example network graph.\nArrows:\nAggregation\ntree.\nR: Base station. Q: Querier.\nG\n0\n= 1,a\nG\n,r - a\nG\n,G\nF\n1\n= 2,v\nF\n1\n,v\nF\n1\n,H[N||2||v\nF\n1\n||v\nF\n1\n||F\n0\n||G\n0\n]\nC\n1\n= 4,v\nC\n1\n,v\nC\n1\n,H[N||4||v\nC\n1\n||v\nC\n1\n||C\n0\n||E\n0\n||F\n1\n]\nA\n1\n= 9,v\nA\n1\n,v\nA\n1\n,H[N||9||v\nA\n1\n||v\nA\n1\n||A\n0\n||B\n1\n||C\n1\n||D\n0\n]\nR\n= 12,v\nR\n,v\nR\n,H[N||12||v\nR\n||v\nR\n||H\n0\n||A\n1\n||I\n0\n]\n(b) Naive commitment tree, showing derivations of some of the vertices. For each sensor\nnode X , X\n0\nis its leaf vertex, while X\n1\nis the internal vertex representing the aggregate\ncomputation at X (if any). On the right we list the labels of the vertices on the path of\nnode G to the root.\nFigure 1: Aggregation and naive commitment tree in network context\n4.1\nQuery Dissemination\nFirst, an aggregation tree is established if one is not already\npresent. Various algorithms for selecting the structure of an aggregation\ntree may be used. For completeness, we describe one\nsuch process, while noting that our algorithm is directly applicable\nto any aggregation tree structure. The Tiny Aggregation Service\n(TaG) [11] uses a broadcast from the base station where each node\nchooses as its parent in the aggregation tree, the node from which\nit first heard the tree-formation message.\nTo initiate a query in the aggregation tree, the base station originates\na query request message which is distributed following the\naggregation tree. 
The query request message contains an attached\nnonce N to prevent replay of messages belonging to a prior query,\nand the entire request message is sent using an authenticated broadcast\n.\n4.2\nAggregation-Commit Phase\nThe goal of the aggregation-commit phase is to iteratively construct\na series of cryptographic commitments to data values and to\nintermediate in-network aggregation operations. This commitment\nis then passed on to the querier. The querier then rebroadcasts the\ncommitment to the sensor network using an authenticated broadcast\nso that the rest of the sensor network is able to verify that their\nrespective data values have been incorporated into the aggregate.\n4.2.1\nAggregation-Commit: Naive Approach\nWe first describe a naive approach that yields the desired security\nproperties but has suboptimal congestion overhead when sensor\nnodes perform their respective verifications. In the naive approach,\nwhen each sensor node performs an aggregation operation, it computes\na cryptographic hash of all its inputs (including its own data\nvalue). The hash value is then passed on to the parent in the aggregation\ntree along with the aggregation result. Figure 1(b) shows a\ncommitment tree which consists of a series of hashes of data values\nand intermediate results, culminating in a set of final commitment\nvalues which is passed on by the base station to the querier along\nwith the aggregation results. Conceptually, a commitment tree is\na hash tree with some additional aggregate accounting information\nattached to the nodes. A definition follows. Recall that N is the\nquery nonce that is disseminated with each query.\nDefinition 3 A commitment tree is a tree where each vertex has\nan associated label representing the data that is passed on to its\nparent. The labels have the following format:\ncount, value, complement, commitment\nWhere count is the number of leaf vertices in the subtree rooted\nat this vertex; value is the\nSUM\naggregate computed over all\nthe leaves in the subtree; complement is the aggregate over the\nCOMPLEMENT\nof the data values; and commitment is a cryptographic\ncommitment. The labels are defined inductively as follows:\nThere is one leaf vertex u\ns\nfor each sensor node s, which we\ncall the leaf vertex of s. The label of u\ns\nconsists of count=1,\nvalue\n=a\ns\nwhere a\ns\nis the data value of s, complement=r\n- a\ns\nwhere r is the upper bound on allowable data values, and\ncommitment\nis the node's unique ID.\nInternal vertices represent aggregation operations, and have labels\nthat are defined based on their children. Suppose an internal\nvertex has child vertices with the following labels: u\n1\n,u\n2\n,...,u\nq\n,\nwhere u\ni\n= c\ni\n,v\ni\n,v\ni\n,h\ni\n. Then the vertex has label c\n,v,v,h , with\nc\n= c\ni\n, v\n= v\ni\n, v\n= v\ni\nand h\n= H[N||c||v||v||u\n1\n||u\n2\n||||u\nq\n].\nFor brevity, in the remainder of the paper we will often omit references\nto labels and instead refer directly to the count, value,\ncomplement\nor commitment of a vertex.\nWhile there exists a natural mapping between vertices in a commitment\ntree and sensor nodes in the aggregation tree, a vertex is\na logical element in a graph while a sensor node is a physical device\n. 
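A minimal sketch of the label computation in Definition 3 follows (Python). SHA-256 stands in for the hash H, and the byte encoding of labels is a simplification; a real implementation would need an unambiguous, fixed-width encoding of the fields.

import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    count: int          # number of leaf vertices in the subtree
    value: float        # SUM aggregate over the leaves
    complement: float   # COMPLEMENT aggregate over the leaves
    commitment: bytes   # node ID (leaf) or hash (internal vertex)

def label_bytes(v: Vertex) -> bytes:
    # Simplified serialization of a label <count, value, complement, commitment>.
    return f"{v.count},{v.value},{v.complement},".encode() + v.commitment

def leaf(node_id: str, a: float, r: float) -> Vertex:
    # Leaf label: count = 1, value = a_s, complement = r - a_s, commitment = node ID.
    return Vertex(1, a, r - a, node_id.encode())

def internal(nonce: bytes, children: list) -> Vertex:
    # Internal label: sums over the children plus H[N || c || v || v_bar || u_1 || ... || u_q].
    c = sum(ch.count for ch in children)
    v = sum(ch.value for ch in children)
    v_bar = sum(ch.complement for ch in children)
    h = hashlib.sha256()
    h.update(nonce + f"{c},{v},{v_bar},".encode())
    for ch in children:
        h.update(label_bytes(ch))
    return Vertex(c, v, v_bar, h.digest())

# Example: an internal vertex aggregating two leaves with data range r = 100.
N = b"query-nonce"
u1, u2 = leaf("A", 40.0, 100.0), leaf("B", 25.0, 100.0)
parent = internal(N, [u1, u2])   # count = 2, value = 65.0, complement = 135.0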
To prevent confusion, we will always refer to the vertices in\nthe commitment tree; the term nodes always refers to the physical\nsensor node device.\nSince we assume that our hash function provides collision resistance\n, it is computationally infeasible for an adversary to change\nany of the contents of the commitment tree once the final commitment\nvalues have reached the root.\nWith knowledge of the root commitment value, a node s may\nverify the aggregation steps between its leaf vertex u\ns\nand the root\nof the commitment tree. To do so, s needs the labels of all its off-path\nvertices.\nDefinition 4 The set of off-path vertices for a vertex u in a tree is\nthe set of all the siblings of each of the vertices on the path from u\nto the root of the tree that u is in (the path is inclusive of u).\n281\nFigure 2: Off-path vertices for u are highlighted in bold. The\npath from u to the root of its tree is shaded grey.\nFigure 2 shows a pictorial depiction of the off-path vertices for a\nvertex u in a tree. For a more concrete example, the set of off-path\ncommitment tree vertices for G\n0\nin Figure 1 is\n{F\n0\n, E\n0\n, C\n0\n, B\n1\n,\nA\n0\n, D\n0\n, H\n0\n, I\n0\n}. To allow sensor node G to verify its contribution\nto the aggregate, the sensor network delivers labels of each off-path\nvertex to G\n0\n. Sensor node G then recomputes the sequence of\ncomputations and hashes and verifies that they lead to the correct\nroot commitment value.\nConsider the congestion on the naive scheme. Let h be the height\nof the aggregation tree and\nbe the maximum degree of any node\ninside the tree. Each leaf vertex has O\n(h) off-path vertices, and it\nneeds to receive all their labels to verify its contribution to the aggregate\n, thus leading to O\n(h) congestion at the leaves of the commitment\ntree. For an aggregation tree constructed with TaG, the\nheight h of the aggregation tree depends on the diameter (in number\nof hops) of the network, which in turn depends on the node density\nand total number of nodes n in the network. In a 2-dimensional\ndeployment area with a constant node density, the best bound on\nthe diameter of the network is O\n(n) if the network is regularly\nshaped. In irregular topologies the diameter of the network may be\n(n).\n4.2.2\nAggregation-Commit: Improved Approach\nWe present an optimization to improve the congestion cost. The\nmain observation is that, since the aggregation trees are a sub-graph\nof the network topology, they may be arbitrarily unbalanced.\nHence, if we decouple the structure of the commitment tree from\nthe structure of the aggregation tree, then the commitment tree\ncould be perfectly balanced.\nIn the naive commitment tree, each sensor node always computes\nthe aggregate sum of all its inputs. This can be considered\na strategy of greedy aggregation. Consider instead the benefit of\ndelayed aggregation at node C\n1\nin Figure 1(b). Suppose that C,\ninstead of greedily computing the aggregate sum over its own reading\n(C\n0\n) and both its child nodes E\n0\nand F\n1\n, instead computes the\nsum only over C\n0\nand E\n0\n, and passes F\n1\ndirectly to A along with\nC\n1\n= C\n0\n+ E\n0\n. In such a commitment tree, F\n1\nbecomes a child of\nA\n1\n(instead of C\n1\n), thus reducing the depth of the commitment tree\nby 1. Delayed aggregation thus trades off increased communication\nduring the aggregation phase in return for a more balanced\ncommitment tree, which results in lower verification overhead in\nthe result-checking phase. 
4.2.2 Aggregation-Commit: Improved Approach
We present an optimization to improve the congestion cost. The main observation is that, since the aggregation trees are a sub-graph of the network topology, they may be arbitrarily unbalanced. Hence, if we decouple the structure of the commitment tree from the structure of the aggregation tree, the commitment tree can be perfectly balanced.
In the naive commitment tree, each sensor node always computes the aggregate sum of all its inputs. This can be considered a strategy of greedy aggregation. Consider instead the benefit of delayed aggregation at node C_1 in Figure 1(b). Suppose that C, instead of greedily computing the aggregate sum over its own reading (C_0) and both its child vertices E_0 and F_1, computes the sum only over C_0 and E_0, and passes F_1 directly to A along with C_1 = C_0 + E_0. In such a commitment tree, F_1 becomes a child of A_1 (instead of C_1), thus reducing the depth of the commitment tree by 1. Delayed aggregation thus trades increased communication during the aggregation phase for a more balanced commitment tree, which results in lower verification overhead in the result-checking phase.
Greenwald and Khanna [6] used a form of delayed aggregation in their quantile summary algorithm. Our strategy for delayed aggregation is as follows: we perform an aggregation operation (along with the associated commit operation) if and only if it results in a complete, binary commitment tree.
We now describe our delayed aggregation algorithm for producing balanced commitment trees. In the naive commitment tree, each sensor node passes to its parent a single message containing the label of the root vertex of its commitment subtree T_s. In the delayed aggregation algorithm, each sensor node instead passes on the labels of the root vertices of a set of commitment subtrees F = {T_1, ..., T_q}. We call this set a commitment forest, and we enforce the condition that the trees in the forest must be complete binary trees and that no two trees have the same height. These constraints are enforced by continually combining equal-height trees into complete binary trees of greater height.
Definition 5 A commitment forest is a set of complete binary commitment trees such that there is at most one commitment tree of any given height.
A commitment forest has at most n leaf vertices (one for each sensor node included in the forest, up to a maximum of n). Since all the trees are complete binary trees, the tallest tree in any commitment forest has height at most log n. Since no two trees have the same height, any commitment forest has at most log n trees.
In the following discussion, we will for brevity make reference to \"communicating a vertex\" to another sensor node, or \"communicating a commitment forest\" to another sensor node. The actual data communicated is the label of the vertex and the labels of the roots of the trees in the commitment forest, respectively.
The commitment forest is built as follows. Leaf sensor nodes in the aggregation tree originate a single-vertex commitment forest, which they then communicate to their parent sensor nodes. Each internal sensor node s originates a similar single-vertex commitment forest. In addition, s also receives commitment forests from each of its children. Sensor node s keeps track of which root vertices were received from which of its children. It then combines all the forests to form a new forest as follows.
Suppose s wishes to combine q commitment forests F_1, ..., F_q. Note that since all commitment trees are complete binary trees, tree heights can be determined by inspecting the count field of the root vertex. We let the intermediate result be F = F_1 ∪ ··· ∪ F_q, and repeat the following until no two trees in F have the same height: let h be the smallest height such that more than one tree in F has height h, find two commitment trees T_1 and T_2 of height h in F, and merge them into a tree of height h + 1 by creating a new vertex that is the parent of the roots of both T_1 and T_2, according to the inductive rule in Definition 3. Figure 3 shows an example of the process for node A based on the topology in Figure 1.
The algorithm terminates in O(q log n) steps since each step reduces the number of trees in the forest by one, and there are at most q log n + 1 trees in the forest. Hence, each sensor node creates at most q log n + 1 = O(log n) vertices in the commitment forest (treating q, which is bounded by the degree of the aggregation tree, as a constant). When F is a valid commitment forest, s sends the root vertices of each tree in F to its parent sensor node in the aggregation tree.
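The merging rule just described behaves much like binary addition on tree heights. The sketch below assumes complete binary commitment trees whose height is implied by the count field, and it reuses the internal_label helper from the earlier sketch; both are illustrative assumptions rather than an actual implementation.

```python
def merge_forests(nonce, forests):
    # F := F_1 U ... U F_q; repeatedly merge two equal-height trees into one tree
    # of height h+1 until all heights are distinct. Trees are represented by their
    # root labels (count, value, complement, commitment); for complete binary trees
    # the count field (a power of two) determines the height.
    roots = [root for forest in forests for root in forest]
    while True:
        by_height = {}
        for root in roots:
            by_height.setdefault(root[0], []).append(root)
        duplicated = [c for c, rs in sorted(by_height.items()) if len(rs) > 1]
        if not duplicated:
            return roots
        t1, t2 = by_height[duplicated[0]][:2]        # smallest duplicated height first
        roots.remove(t1)
        roots.remove(t2)
        roots.append(internal_label(nonce, [t1, t2]))  # helper from the earlier sketch
```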
Figure 3: Process of node A (from Figure 1) deriving its commitment forest from the commitment forests received from its children. (a) Inputs: A generates A_0 and receives D_0 from D, C_2 from C, and (B_1, K_0) from B. (b) First merge: vertex A_1 is created from A_0 and D_0. (c) Second merge: vertex A_2 is created from A_1 and B_1. (d) Final merge: vertex A_3 is created from A_2 and C_2; A_3 and K_0 are sent to the parent of A in the aggregation tree.
The sensor node s also keeps track of every vertex that it created, as well as all the inputs that it received (i.e., the labels of the root vertices of the commitment forests that were sent to s by its children). This takes O(d log n) memory per sensor node.
Consider the communication costs of the entire process of creating the final commitment forest. Since there are at most log n commitment trees in each of the forests presented by any sensor node to its parent, the per-node communication cost for constructing the final forest is O(log n). This is greater than the O(1) congestion cost of constructing the naive commitment tree. However, no path in the forest is longer than log n hops. This will eventually enable us to prove a bound of O(log² n) edge congestion for the result-checking phase in Section 5.2.
Once the querier has received the final commitment forest from the base station, it checks that none of the SUM or COMPLEMENT aggregates of the roots of the trees in the forest are negative. If any aggregates are negative, the querier rejects the result and raises an alarm: a negative aggregate is a sure sign of tampering, since all the data values (and their complements) are non-negative. Otherwise, the querier computes the final pair of aggregates SUM and COMPLEMENT. The querier verifies that SUM + COMPLEMENT = nr, where r is the upper bound on the range of allowable data values on each node. If this verifies correctly, the querier then initiates the result-checking phase.
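A hedged sketch of the querier's acceptance test at the end of the aggregation-commit phase: reject on any negative root aggregate, otherwise check SUM + COMPLEMENT = nr. The tuple layout of the root labels follows the earlier sketches and is an assumption.

```python
def querier_accepts_commitment(roots, n, r):
    # roots: labels of the roots of the final commitment forest, as (count, value, complement, commitment).
    if any(root[1] < 0 or root[2] < 0 for root in roots):
        return False                      # negative aggregate: a sure sign of tampering
    total_sum = sum(root[1] for root in roots)
    total_complement = sum(root[2] for root in roots)
    return total_sum + total_complement == n * r

# Example with n = 4 sensor nodes and r = 100: a single root covering all four leaves.
print(querier_accepts_commitment([(4, 120, 280, "h")], n=4, r=100))   # True
```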
4.3 Result-Checking Phase
The purpose of the result-checking phase is to enable each sensor node s to independently verify that its data value a_s was added into the SUM aggregate, and that the complement (r - a_s) of its data value was added into the COMPLEMENT aggregate. The verification is performed by inspecting the inputs and aggregation operations in the commitment forest on the path from the leaf vertex of s to the root of its tree; if all the operations are consistent, then the root aggregate value must have increased by a_s due to the incorporation of the data value.
If each legitimate node performs this verification, then it ensures that the SUM aggregate is at least the sum of all the data values of the legitimate nodes. Similarly, the COMPLEMENT aggregate is at least the sum of all the complements of the data values of the legitimate nodes. Since the querier enforces SUM + COMPLEMENT = nr, these two inequalities form lower and upper bounds on an adversary's ability to manipulate the final result. In Section 5 we show that they are in fact the tightest bounds possible.
A high-level overview of the process is as follows. First, the aggregation results from the aggregation-commit phase are sent using authenticated broadcast to every sensor node in the network. Each sensor node then individually verifies that its contributions to the respective SUM and COMPLEMENT aggregates were indeed counted. If so, it sends an authentication code to the base station. The authentication code is also aggregated for communication efficiency. When the querier has received all the authentication codes, it is then able to verify that all sensor nodes have checked that their contribution to the aggregate has been correctly counted.
For simplicity, we describe each step of the process with reference to the commitment tree visualised as an overlay network over the actual aggregation tree. Hence, we will refer to vertices in the commitment tree sending information to each other; in the physical world, the sensor node that created a vertex is the physical entity responsible for performing communications and computations on behalf of that vertex. Each edge in the commitment tree may involve multiple hops in the aggregation tree; the routing on the aggregation tree is straightforward.
Dissemination of final commitment values. After the querier has received the labels of the roots of the final commitment forest, the querier sends each of these labels to the entire sensor network using authenticated broadcast.
Dissemination of off-path values. To enable verification, each leaf vertex must receive all its off-path values. Each internal vertex t in the commitment forest has two children u_1 and u_2. To disseminate off-path values, t sends the label of u_1 to u_2, and vice versa (t also attaches relevant information tagging u_1 as the right child and u_2 as the left child). Vertex t also sends any labels (and left/right tags) received from its parent to both its children. See Figure 4 for an illustration of the process. The correctness of this algorithm in delivering all the necessary off-path vertex labels to each vertex is proven in Theorem 14 in Section 5.2. Once a vertex has received all the labels of its off-path vertices, it can proceed to the verification step.
Figure 4: Dissemination of off-path values: t sends the label of u_1 to u_2 and vice versa; each vertex then forwards it to all the vertices in its subtree.
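The following sketch simulates the top-down dissemination just described for a single binary commitment tree. The children dictionary is an assumed in-memory representation; in the real protocol each transfer is a message between the sensor nodes that created the vertices.

```python
def disseminate(children, root):
    # Top-down pass: each internal vertex t sends one child's label to the other child,
    # together with everything t itself received from its parent.
    # Returns, for every vertex, the list of labels it receives.
    received = {root: []}
    stack = [root]
    while stack:
        t = stack.pop()
        kids = children.get(t, [])
        if len(kids) == 2:                       # commitment trees are binary
            u1, u2 = kids
            received[u1] = received[t] + [u2]    # u1 gets its sibling's label ...
            received[u2] = received[t] + [u1]    # ... plus what t received from its parent
            stack.extend(kids)
    return received

children = {"root": ["x", "y"], "x": ["u", "w"]}
print(disseminate(children, "root")["u"])   # ['y', 'w'] -- exactly u's off-path vertices
```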
Verification of inclusion. When the leaf vertex u_s of a sensor node s has received all the labels of its off-path vertices, it may then verify that no aggregation result-tampering has occurred on the path between u_s and the root of its commitment tree. For each vertex t on the path from u_s to the root of its commitment tree, u_s derives the label of t (via the computations in Definition 3); it is able to do so since the off-path labels provide all the necessary data to perform the label computation. During the computation, u_s inspects the off-path labels: for each vertex t on the path from u_s to the root, u_s checks that the input values fed into the aggregation operation at t are never negative. Negative values should never occur since the data and complement values are non-negative; hence, if a negative input is encountered, the verification fails. Once u_s has derived the label of the root of its commitment tree, it compares the derived label against the label with the same count that was disseminated by the querier. If the labels are identical, then u_s proceeds to the next step. Otherwise, the verification fails and u_s may either immediately raise an alarm (for example, using broadcast), or it may simply do nothing and allow the aggregation algorithm to fail due to the absence of its confirmation message in the subsequent steps.
Collection of confirmations. After each sensor node s has successfully performed the verification step for its leaf vertex u_s, it sends an authentication code to the querier. The authentication code for sensor node s is MAC_{K_s}(N||OK), where OK is a unique message identifier and K_s is the key that s shares with the querier. The collation of the authentication codes proceeds as follows (note that we are referring to the aggregation tree at this point, not the commitment tree). Leaf sensor nodes in the aggregation tree first send their authentication codes to their parents in the aggregation tree. Once an internal sensor node has received authentication codes from all its children, it computes the XOR of its own authentication code with all the received codes, and forwards it to its parent. At the end of the process, the querier receives a single authentication code from the base station that consists of the XOR of all the authentication codes in the network.
Verification of confirmations. Since the querier knows the key K_s for each sensor node s, it verifies that every sensor node has released its authentication code by computing the XOR of the authentication codes of all the sensor nodes in the network, i.e., MAC_{K_1}(N||OK) ⊕ ··· ⊕ MAC_{K_n}(N||OK). The querier then compares the computed code with the received code. If the two codes match, the querier accepts the aggregation result. Otherwise, the querier rejects the result. A rejection may indicate the presence of the adversary in some unknown nodes in the network, or it may be due to natural factors such as node death or message loss. The querier may either retry the query or attempt to determine the cause of the rejection. For example, it could directly request the leaf values of every sensor node: if rejections due to natural causes are sufficiently rare, the high cost of this direct query is incurred infrequently and can be amortised over the other successful queries.
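A minimal sketch of the confirmation aggregation, assuming HMAC-SHA256 as the MAC (the text does not fix a concrete MAC): each node computes MAC_{K_s}(N||OK), interior nodes XOR the codes they receive, and the querier recomputes the same XOR from its knowledge of all node keys.

```python
import hmac, hashlib

def confirmation_code(key, nonce):
    # MAC_Ks(N || OK): the per-node confirmation code, instantiated here with HMAC-SHA256.
    return hmac.new(key, nonce + b"||OK", hashlib.sha256).digest()

def xor_codes(codes):
    # In-network aggregation of confirmations: bytewise XOR of all codes.
    out = bytes(len(codes[0]))
    for c in codes:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

keys = [b"k1", b"k2", b"k3"]          # keys shared between each node and the querier (toy values)
nonce = b"N"
received = xor_codes([confirmation_code(k, nonce) for k in keys])   # convergecast result
expected = xor_codes([confirmation_code(k, nonce) for k in keys])   # querier's recomputation
print(hmac.compare_digest(received, expected))   # the querier accepts only if the XORs match
```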
ANALYSIS OF SUM
In this section we prove the properties of the SUM algorithm. In Section 5.1 we prove the security properties of the algorithm, and in Section 5.2 we prove bounds on its congestion.
5.1 Security Properties
We assume that the adversary is able to freely choose any arbitrary topology and set of labels for the final commitment forest. We then show that any such forest which passes all the verification tests must report an aggregate result that is (optimally) close to the actual result. First, we define the notion of an inconsistency, or evidence of tampering, at a given vertex in the commitment forest.
Definition 6 Let t = ⟨c_t, v_t, v̄_t, H_t⟩ be an internal vertex in a commitment forest. Let its two children be u_1 = ⟨c_1, v_1, v̄_1, H_1⟩ and u_2 = ⟨c_2, v_2, v̄_2, H_2⟩. There is an inconsistency at vertex t in a commitment tree if either (1) v_t ≠ v_1 + v_2 or v̄_t ≠ v̄_1 + v̄_2, or (2) any of {v_1, v_2, v̄_1, v̄_2} is negative.
Informally, an inconsistency occurs at t if the sums don't add up at t, or if any of the inputs to t is negative. Intuitively, if there are no inconsistencies on a path from a vertex to the root of the commitment tree, then the aggregate value along the path should be non-decreasing towards the root.
Definition 7 Call a leaf vertex u accounted-for if there is no inconsistency at any vertex on the path from the leaf vertex u to the root of its commitment tree, including at the root vertex.
Lemma 8 Suppose there is a set of accounted-for leaf vertices with distinct labels u_1, ..., u_m and committed data values v_1, ..., v_m in the commitment forest. Then the total of the aggregation values at the roots of the commitment trees in the forest is at least ∑_{i=1}^m v_i.
Lemma 8 can be rigorously proven by induction on the height of the subtrees in the forest (see Appendix A). Here we present a more intuitive argument.
Proof. (Sketch) We show the result for m = 2; similar reasoning applies for arbitrary m. Case 1: Suppose u_1 and u_2 are in different trees. Then, since there is no inconsistency at any vertex on the path from u_1 to the root of its tree, the root of the tree containing u_1 must have an aggregation value of at least v_1. By similar reasoning, the root of the tree containing u_2 must have an aggregation value of at least v_2. Hence the total aggregation value of the two trees containing u_1 and u_2 is at least v_1 + v_2.
Case 2: Now suppose u_1 and u_2 are in the same tree. Since they have distinct labels, they must be distinct vertices, and they must have a lowest common ancestor t in the commitment tree. The vertices between u_1 and t (including u_1) must have aggregation value at least v_1, since there are no inconsistencies on the path from u_1 to t and so the aggregation value could not have decreased. Similarly, the vertices between u_2 and t (including u_2) must have aggregation value at least v_2. Hence, one of the children of t has aggregation value at least v_1 and the other has aggregation value at least v_2. Since there is no inconsistency at t, vertex t must have aggregation value at least v_1 + v_2. Since there are no inconsistencies on the path from t to the root of the commitment tree, the root also must have aggregation value at least v_1 + v_2.
Negative root aggregate values are detected by the querier at the end of the aggregate-commit phase, so the total sum of the aggregate values of the roots of all the trees is thus at least v_1 + v_2.
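Definitions 6 and 7 translate directly into checks over the labels. A sketch follows, under the same assumed tuple and dictionary representations used in the earlier sketches.

```python
def inconsistent(t, u1, u2):
    # Definition 6: labels are (count, value, complement, commitment) tuples.
    # Inconsistency if the child sums don't add up at t, or any child input is negative.
    _, vt, vbt, _ = t
    _, v1, vb1, _ = u1
    _, v2, vb2, _ = u2
    return vt != v1 + v2 or vbt != vb1 + vb2 or min(v1, v2, vb1, vb2) < 0

def accounted_for(leaf, parent, children, labels):
    # Definition 7: a leaf is accounted-for if no vertex on its root path is inconsistent.
    v = leaf
    while parent[v] is not None:
        p = parent[v]
        u1, u2 = children[p]
        if inconsistent(labels[p], labels[u1], labels[u2]):
            return False
        v = p
    return True
```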
The following is a restatement of Lemma 8 for the COMPLEMENT aggregate; its proof follows an identical structure and is thus omitted.
Lemma 9 Suppose there is a set of accounted-for leaf vertices with distinct labels u_1, ..., u_m and committed complement values v̄_1, ..., v̄_m in the commitment forest. Then the total COMPLEMENT aggregation value of the roots of the commitment trees in the forest is at least ∑_{i=1}^m v̄_i.
Lemma 10 A legitimate sensor node will only release its confirmation MAC if it is accounted-for.
Proof. By construction, each sensor node s only releases its confirmation MAC if (1) s receives an authenticated message from the querier containing the query nonce N and the root labels of all the trees in the final commitment forest; (2) s receives all labels of its off-path vertices (the sibling vertices of the vertices on the path from the leaf vertex of s to the root of the commitment tree containing that leaf vertex); (3) s is able to recompute the root commitment value that it received from the base station and correctly authenticated; and (4) s has verified that all the computations on the path from its leaf vertex u_s to the root of its commitment tree are correct, i.e., there are no inconsistencies on the path from u_s to the root of the commitment tree containing u_s. Since the hash function is collision-resistant, it is computationally infeasible for an adversary to provide s with false labels that also happen to compute to the correct root commitment value. Hence, it must be that s was accounted-for in the commitment forest.
Lemma 11 The querier can only receive the correct final XOR check value if all the legitimate sensor nodes replied with their confirmation MACs.
Proof. To compute the correct final XOR check value, the adversary needs to know the XOR of the MACs of all the legitimate sensor nodes that did not release their MAC. Since we assume that each of the distinct MACs is unforgeable (and not correlated with the others), the adversary has no information about this XOR value. Hence, the only way to produce the correct XOR check value is for all the legitimate sensor nodes to have released their relevant MACs.
Theorem 12 Let the final SUM aggregate received by the querier be S. If the querier accepts S, then S_L ≤ S ≤ S_L + μr, where S_L is the sum of the data values of all the legitimate nodes, μ is the total number of malicious nodes, and r is the upper bound on the range of allowable values on each node.
Proof. Suppose the querier accepts the SUM result S. Let the COMPLEMENT sum received by the querier be S̄. The querier accepts S if and only if it receives the correct final XOR check value in the result-checking phase and S + S̄ = nr. Since the querier received the correct XOR check value, each legitimate sensor node must have released its confirmation MAC (Lemma 11), and so the leaf vertices of all legitimate sensor nodes must be accounted-for (Lemma 10). The labels of the leaf vertices of the legitimate nodes are distinct since each label contains the (unique) node ID of the corresponding legitimate node. Since all the leaf vertices of the legitimate sensor nodes are distinct and accounted-for, by Lemma 8, S ≥ S_L, where S_L is the sum of the data values of all the legitimate nodes. Furthermore, by Lemma 9, S̄ ≥ S̄_L, where S̄_L is the sum of the complements of the data values of all the legitimate nodes. Let L be the set of legitimate sensor nodes, with |L| = l = n - μ. Observe that S̄_L = ∑_{i∈L} (r - a_i) = lr - S_L = (n - μ)r - S_L = nr - (S_L + μr). We have S + S̄ = nr and S̄ ≥ nr - (S_L + μr). Substituting, S = nr - S̄ ≤ S_L + μr.
Hence, S_L ≤ S ≤ S_L + μr.
Note that nowhere was it assumed that the malicious nodes are constrained to reporting data values in [0, r]: in fact it is possible to have malicious nodes with data values above r or below 0 without risking detection, as long as S_L ≤ S ≤ S_L + μr.
Theorem 13 The SUM algorithm is optimally secure.
Proof. Let the sum of the data values of all the legitimate nodes be S_L. Consider an adversary with μ malicious nodes which only performs direct data injection attacks. Recall that in a direct data injection attack, an adversary only causes the nodes under its control to each report a data value within the legal range [0, r]. The lowest result the adversary can induce is obtained by setting all its malicious nodes to have data value 0; in this case the computed aggregate is S_L. The highest result the adversary can induce is obtained by setting all the nodes under its control to report the highest value r; in this case the computed aggregate is S_L + μr. Clearly any aggregation value between these two extremes is also achievable by direct data injection. The bound proven in Theorem 12 falls exactly on the range of results achievable by direct data injection, hence the algorithm is optimal by Definition 2.
The optimal security property holds regardless of the number or fraction of malicious nodes; this is significant since the security property holds in general, and not just for a subclass of attacker multiplicities. For example, we do not assume that the attacker is limited to some fraction of the nodes in the network.
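A small illustration of why the bound of Theorem 12 is optimal in the sense of Theorem 13: the querier's acceptance window and the range reachable by direct data injection coincide. The example values are hypothetical.

```python
def acceptance_window(legit_values, mu, r):
    # Theorem 12: any accepted SUM S satisfies S_L <= S <= S_L + mu*r.
    S_L = sum(legit_values)
    return (S_L, S_L + mu * r)

def direct_injection_extremes(legit_values, mu, r):
    # Theorem 13: direct data injection reaches exactly the same interval,
    # by having each of the mu malicious nodes report 0 or r.
    S_L = sum(legit_values)
    return (S_L + mu * 0, S_L + mu * r)

legit = [10, 20, 30]    # hypothetical data values of the legitimate nodes
print(acceptance_window(legit, mu=2, r=100))         # (60, 260)
print(direct_injection_extremes(legit, mu=2, r=100)) # (60, 260): the bounds coincide
```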
5.2 Congestion Complexity
We now consider the congestion induced by the secure SUM algorithm. Recall that node congestion is defined as the communication load on the most heavily loaded sensor node in the network, and edge congestion is the heaviest communication load on any given link in the network. We only need to consider the case where the adversary is not performing an attack: if the adversary attempts to send more messages than the proven congestion bound, legitimate nodes can easily detect this locally and either raise an alarm or refuse to respond with their confirmation values, thus exposing the presence of the adversary. Recall that when we refer to a vertex sending and receiving information, we are referring to the commitment-tree overlay network that lies over the actual physical aggregation tree.
Theorem 14 Each vertex u receives the labels of its off-path vertices and no others.
Proof. Since, when the vertices are disseminating their labels in the result-checking phase, every vertex always forwards any labels received from its parent to both its children, it is clear that when a label is forwarded to a vertex u', it is eventually forwarded to the entire subtree rooted at u'. By definition, every off-path vertex u_1 of u has a parent p which is a vertex on the path between u and the root of its commitment tree. By construction, p sends the label of u_1 to its sibling u_2, which is on the path to u (i.e., either u_2 is an ancestor of u, or u_2 = u). Hence, the label of u_1 is eventually forwarded to u. Every vertex u_1 that is not an off-path vertex of u has a sibling u_2 which is not on the path between u and the root of its commitment tree. Hence, u is not in the subtree rooted at u_2. Since the label of u_1 is only forwarded to the subtree rooted at its sibling and nowhere else, the label of u_1 never reaches u.
Theorem 15 The SUM algorithm induces O(log² n) edge congestion (and hence O(Δ log² n) node congestion) in the aggregation tree.
Proof. Every step in the algorithm except the label-dissemination step involves either broadcast or convergecast of messages of at most O(log n) size. The label-dissemination step is the dominating factor. Consider an arbitrary edge in the commitment tree between parent vertex x and child vertex y. In the label-dissemination step, messages are only sent from parent to child in the commitment tree. Hence the edge xy carries exactly the labels that y receives. From Theorem 14, y receives O(log n) labels, hence the total number of labels passing through xy is O(log n), and the edge congestion in the commitment tree is O(log n). Now consider an arbitrary aggregation-tree edge with parent node u and child node v. The child node v presents (i.e., sends) at most log n commitment-tree vertices to its parent u, and hence the edge uv is responsible for carrying traffic on behalf of at most log n commitment-tree edges (the edges incident on the commitment-tree vertices that v presented to u). Note that v may not be responsible for creating all the vertices that it presents to u, but v is nonetheless responsible for forwarding the relevant messages down to the sensor nodes which created those vertices. Since each edge in the commitment tree has O(log n) congestion, and each edge in the aggregation tree carries traffic for at most log n commitment-tree edges, the edge congestion in the aggregation tree is O(log² n). The node-congestion bound of O(Δ log² n) follows from the O(log² n) edge congestion and the definition of Δ as the greatest degree in the aggregation tree.
OTHER AGGREGATION FUNCTIONS
In this section we briefly discuss how to use the SUM algorithm as a primitive for the COUNT, AVERAGE and QUANTILE aggregates.
The COUNT Aggregate. The query COUNT is generally used to determine the total number of nodes in the network with some property; without loss of generality it can be considered a SUM aggregation where each node has value either 1 (the node has the property) or 0 (otherwise). More formally, each sensor node s has a data value a_s ∈ {0, 1}, and we wish to compute f(a_1, ..., a_n) = a_1 + a_2 + ··· + a_n. Since COUNT is a special case of SUM, we can use the basic algorithm for SUM without modification.
The AVERAGE Aggregate. The AVERAGE aggregate can be computed by first computing the SUM of the data values over the nodes of interest, then the COUNT of the number of nodes of interest, and then dividing the SUM by the COUNT.
The φ-Quantile Aggregate. In the φ-QUANTILE aggregate, we wish to find the value that is in the φn-th position in the sorted list of data values. For example, the median is a special case with φ = 0.5. Without loss of generality we can assume that all the data values are distinct; ties can be broken using unique node IDs. To verify the correctness of a proposed φ-quantile q, we can perform a COUNT computation where each node s presents a value a'_s = 1 if its data value a_s ≤ q, and a'_s = 0 otherwise. If q is the φ-quantile, then the computed sum should be equal to φn. Hence, we can use any insecure approximate φ-quantile aggregation scheme to compute a proposed φ-quantile, and then securely test whether the result truly is within the approximation bounds of the φ-quantile algorithm.
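A sketch of the quantile test just described; phi is the quantile fraction, and epsilon stands for the approximation bound of whatever insecure quantile scheme is used, a parameter assumed here purely for illustration.

```python
def quantile_indicator(data_value, q):
    # Each node s reports a'_s = 1 if a_s <= q, else 0; the secure SUM/COUNT
    # algorithm is then run over these 0/1 values.
    return 1 if data_value <= q else 0

def verify_proposed_quantile(count_result, n, phi, epsilon):
    # Accept the proposed quantile q if the securely computed COUNT lies within the
    # approximation bounds of the (insecure) phi-quantile scheme.
    return abs(count_result - phi * n) <= epsilon * n

values = [3, 9, 14, 27, 31, 40, 52, 60]   # hypothetical sensor readings
q = 31                                    # proposed median from an insecure approximate scheme
count = sum(quantile_indicator(a, q) for a in values)
print(verify_proposed_quantile(count, n=len(values), phi=0.5, epsilon=0.125))  # True
```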
CONCLUSION
In-network data aggregation is an important primitive for sensor network operation. The strong standard threat model of multiple Byzantine nodes in sensor networks requires the use of aggregation techniques that are robust against malicious result-tampering by covert adversaries.
We present the first optimally secure aggregation scheme for arbitrary aggregator topologies and multiple malicious nodes. This contribution significantly improves on prior work, which requires strict limitations on aggregator topology or malicious node multiplicity, or which only yields a probabilistic security bound. Our algorithm is based on a novel method of distributing the verification of aggregation results onto the sensor nodes, combined with a technique for balancing commitment trees to achieve sublinear congestion bounds. The algorithm induces O(Δ log² n) node congestion (where Δ is the maximum degree in the aggregation tree) and provides the strongest security bound that can be proven for any secure aggregation scheme without making assumptions about the distribution of data values.
REFERENCES
[1] H. Cam, S. Ozdemir, P. Nair, D. Muthuavinashiappan, and H. O. Sanli. Energy-efficient secure pattern based data aggregation for wireless sensor networks. Computer Communications, 29:446-455, 2006.
[2] C. Castelluccia, E. Mykletun, and G. Tsudik. Efficient aggregation of encrypted data in wireless sensor networks. In Proceedings of the Second Annual International Conference on Mobile and Ubiquitous Systems, 2005.
[3] J.-Y. Chen, G. Pandurangan, and D. Xu. Robust computation of aggregates in wireless sensor networks: distributed randomized algorithms and analysis. In Proceedings of the Fourth International Symposium on Information Processing in Sensor Networks, 2005.
[4] W. Du, J. Deng, Y. Han, and P. K. Varshney. A witness-based approach for data fusion assurance in wireless sensor networks. In Proceedings of the IEEE Global Telecommunications Conference, 2003.
[5] J. Girao, M. Schneider, and D. Westhoff. CDA: Concealed data aggregation in wireless sensor networks. In Proceedings of the ACM Workshop on Wireless Security, 2004.
[6] M. B. Greenwald and S. Khanna. Power-conserving computation of order-statistics over sensor networks. In Proceedings of the Twenty-Third ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, 2004.
[7] I. Gupta, R. van Renesse, and K. P. Birman. Scalable fault-tolerant aggregation in large process groups. In Proceedings of the International Conference on Dependable Systems and Networks, 2001.
[8] L. Hu and D. Evans. Secure aggregation for wireless networks. In Workshop on Security and Assurance in Ad hoc Networks, 2003.
[9] C. Intanagonwiwat, D. Estrin, R. Govindan, and J. Heidemann. Impact of network density on data aggregation in wireless sensor networks. In Proceedings of the 22nd International Conference on Distributed Computing Systems, 2002.
[10] P. Jadia and A. Mathuria. Efficient secure aggregation in sensor networks. In Proceedings of the 11th International Conference on High Performance Computing, 2004.
[11] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: a tiny aggregation service for ad-hoc sensor networks. SIGOPS Oper. Syst. Rev., 36(SI):131-146, 2002.
[12] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. The design of an acquisitional query processor for sensor networks. In Proceedings of the 2003 ACM International Conference on Management of Data, 2003.
[13] A. Mahimkar and T. Rappaport. SecureDAV: A secure data aggregation and verification protocol for sensor networks. In Proceedings of the IEEE Global Telecommunications Conference, 2004.
[14] A. Manjhi, S. Nath, and P. B. Gibbons. Tributaries and deltas: efficient and robust aggregation in sensor network streams. In Proceedings of the ACM International Conference on Management of Data, 2005.
[15] S. Nath, P. B. Gibbons, S. Seshan, and Z. R. Anderson. Synopsis diffusion for robust aggregation in sensor networks. In Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, 2004.
[16] A. Perrig, R. Szewczyk, J. D. Tygar, V. Wen, and D. E. Culler. SPINS: Security protocols for sensor networks. Wireless Networks, 8(5):521-534, 2002.
[17] B. Przydatek, D. Song, and A. Perrig. SIA: Secure information aggregation in sensor networks. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems, 2003.
[18] D. Wagner. Resilient aggregation in sensor networks. In Proceedings of the 2nd ACM Workshop on Security of Ad-hoc and Sensor Networks, 2004.
[19] Y. Yang, X. Wang, S. Zhu, and G. Cao. SDAP: A secure hop-by-hop data aggregation protocol for sensor networks. In Proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2006.
[20] Y. Yao and J. Gehrke. The COUGAR approach to in-network query processing in sensor networks. SIGMOD Record, 31(3):9-18, 2002.
APPENDIX
A. PROOF OF LEMMA 8
We first prove the following:
Lemma 16 Let F be a collection of commitment trees of height at most h. Suppose there is a set U of accounted-for leaf vertices with distinct labels u_1, ..., u_m and committed values v_1, ..., v_m in F. Let the set of trees that contain at least one member of U be T_F. Define val(X) for any forest X to be the total of the aggregation values at the roots of the trees in X. Then val(T_F) ≥ ∑_{i=1}^m v_i.
Proof. By induction on h.
Base case: h = 0. Then all the trees are singleton trees. The total aggregation value of all the singleton trees that contain at least one member of U is exactly ∑_{i=1}^m v_i.
Induction step: Assume the lemma holds for h, and consider an arbitrary collection F of commitment trees of height at most h + 1 for which the premise holds. If there are no trees of height h + 1 then we are done. Otherwise, let the set R be all the root vertices of the trees of height h + 1. Consider F' = F \ R, i.e., remove all the vertices in R from F. The result is a collection of trees of height at most h. Let T_{F'} be the set of trees in F' containing at least one member of U. The induction hypothesis holds for F', so val(T_{F'}) ≥ ∑_{i=1}^m v_i. We now show that replacing the vertices from R cannot produce a T_F such that val(T_F) < val(T_{F'}). Each vertex r in R is the root of two subtrees of height h in F. We have three cases:
Case 1: Neither subtree contains any member of U. Then the new tree contains no member of U, and so is not a member of T_F.
Case 2: One subtree t_1 contains members of U. Since all the members of U are accounted-for, there is no inconsistency at r. Hence, the subtree without a member of U must have a non-negative aggregate value. We know that r performs the aggregate sum correctly over its inputs, so it must have aggregate value at least equal to the aggregate value of t_1.
Case 3: Both subtrees contain members of U. Since all the members of U are accounted-for, there is no inconsistency at r. The aggregate result of r is exactly the sum of the aggregate values of the two subtrees.
In cases 2 and 3, the aggregate value of the root of each tree of height h + 1 that is in T_F is no less than the sum of the aggregate values of its constituent subtrees in T_{F'}. Hence, val(T_F) ≥ val(T_{F'}) ≥ ∑_{i=1}^m v_i.
Let the commitment forest in Lemma 8 be F. Let the set of trees in F that contain at least one of the accounted-for leaf vertices be T. By the above lemma, val(T) ≥ ∑_{i=1}^m v_i. We know that there are no root labels with negative aggregation values in the commitment forest, otherwise the querier would have rejected the result. Hence, val(F) ≥ val(T) ≥ ∑_{i=1}^m v_i.", "keywords": "algorithm;Secure aggregation;commitment forest;in-network data aggregation;commitment tree;Sensor Networks;secure hierarchical data aggregation protocol;sensor network;aggregation commit;result checking;query dissemination;congestion complexity;Data aggregation"} {"name": "176", "title": "SensorBus: A Middleware Model for Wireless Sensor Networks", "abstract": "The use of middleware eases the development of distributed applications by abstracting the intricacies (communication and coordination among software components) of the distributed network environment. In wireless sensor networks, this is even harder because of their specific issues such as addressing, mobility, the number of sensors and energy-limited nodes. This paper describes SensorBus, a message-oriented middleware (MOM) model for wireless sensor networks based on the publish-subscribe paradigm that allows the free exchange of the communication mechanism among sensor nodes, making it possible to use more than one communication mechanism to address the requirements of a larger number of applications. We intend to provide a platform which addresses the main characteristics of wireless sensor networks and also allows the development of energy-efficient applications. SensorBus incorporates constraint and query languages which will aid the development of interactive applications, and it uses filters to reduce data movement, minimizing the energy consumption of nodes.", "fulltext": "INTRODUCTION
Recent advances in wireless networking technology, low-power digital circuits, sensing materials and Micro Electro-Mechanical Systems (MEMS) have opened up the possibility of building small sensor devices capable of data processing, remote sensing and wireless communication. When several small sensors are scattered and linked over an area we may call this arrangement a \"Sensor Network\". These networks can be used for collecting and analyzing data from the physical environment.
More specifically,\nsensor networks are comprised of hundreds or even thousands of\nheterogeneous sensor nodes exchanging information to perform\ndistributed sensing and collaborative data processing [1].\nFrom a functional perspective sensor networks behave like\ndistributed systems with many different types of sensor nodes.\nGiven the diversity of node functionality and the size of these\nnetworks it is important for a user to be able to program and\nmanage the distributed applications that perform the information\ngathering. A programmer may develop these applications using\noperating system primitives. This kind of procedure, however,\nbrings another level of complexity to the programmer, in which\nhe not only has to deal with low-level primitives but he will also\nhave to treat issues concerning communication and coordination\namong software components distributed over the network. A\nmuch friendlier approach is the utilization of a middleware in\norder to provide higher-level primitives to hide issues concerning\nthe distributed environment.\nTraditional middleware is not suited to this task because of the\ncharacteristics of wireless networks. For example, conventional\nmiddleware designed for wired networks raises exceptions when\nthey do not find a specific component, but this situation is much\nmore like the standard than the exception in wireless\nenvironments. The lower bandwidth available for wireless\nnetworks requires optimizing the transport of data and this is not\nconsidered in conventional middleware. The coordination\nprimitives of these middleware products do not take into account\nthe frequent disconnections that happen in wireless networks.\nAnother problem is the size and computing requirements of these\nmiddleware products; they are often too large and too heavy to be\nrunning in a device with so few resources. Finally, the\ntransparency level provided is not sufficient enough because the\napplication running in such devices needs information about the\nexecution context to better adapt itself.\nA series of new middleware environments were proposed to deal\nwith the requirements imposed by the wireless environment [2].\nMiddleware products based on computing reflection are designed\nto be light (concerning the computing power required to run) and\neasily configurable. Middleware based on tuple space were\nproposed to address the problem of frequent disconnections, and\npresent a more natural way to deal with asynchronous\ncommunication. Context-aware middleware includes the ability of\nan application to access its information context (context-awareness\n). These proposals addressed adequately the issues\nbrought by the mobile networks, but are not well suited to support\nthe specific requirements of the target applications used or to be\nused in wireless sensor networks because they are designed to\nsupport traditional client-server applications used in regular\n(wired) environments.\nWireless sensor networks are very similar to conventional\nwireless networks; including energy-limited nodes, low\nbandwidth and communication channels more prone to errors.\nHowever, communication in wireless sensor nets differs from the\nend-to-end connections often necessary in usual networks [1]. In\nother words, the function of the network is to report information\nconsidering the phenomenon to the observer who is not\nnecessarily aware that the sensor infrastructure is being used as a\nmeans of communication. 
In addition, energy is much more\nlimited in sensor networks than in other types of wireless nets due\nto the nature of the sensor devices and the difficulty of reloading\nbatteries in hostile regions. Some works have shown that the\nexecution of 3000 instructions costs the same amount of energy\nnecessary to send 1-bit of data over 100 meters via radio [3].\nThose studies indicate that we must prioritize computing over\ncommunications.\nThe communication issues are addressed in the several routing\nprotocols proposed for wireless sensor nets. The communication\nmodel allows other ways of addressing the sensor nodes besides\nsingle addressing. The sensor nodes can be addressed by their\nown attributes or by attributes extracted from the physical\nenvironment (attribute-based naming). The sharp limitation of\nenergy demands that sensor nodes actively take part in the\nprocessing and dissemination of information in order to save as\nmuch energy as possible. Although the majority of the protocols\nreviewed are efficient in saving energy, they differ in addressing\ncapabilities. Some of them utilize single addressing [4] while\nothers utilize attribute-based naming [5]. Thus, each type of\napplication requires an adaptation of the communication\nmechanism to address specific application issues.\nTrying to overcome these problems this paper proposes\nSensorBus, a message oriented middleware for sensor networks\nallowing the free exchanging of the communication mechanism\namong sensor nodes. We propose a platform that provides\nfacilities for the development of energy-efficient applications and\nthat also addresses the key characteristics of sensor networks.\nThis type of middleware should be suited to perform\nenvironmental monitoring where single addressing is demanded\n(small areas) as well as where attribute-based naming is necessary\n(large areas).\nThe remainder of this paper is organized as follows: Section 2\ndescribes the type of sensor networks considered in this research;\nSection 3 presents the target application and explains its\nrequirements; Section 4 broaches the abstractions and\nmechanisms needed to address the requirements listed on the\nprevious section; Section 5 describes the components of the\nSensorBus architecture; Section 6 presents the communication\narchitecture and explains the steps needed to develop an\napplication using SensorBus; Section 7 broaches implementing\nand coding issue; Section 8 presents the related works, and finally\nSection 9 concludes the paper.\nASSUMPTIONS\nMost of the algorithms catalogued in the sensor networks\nliterature are hypothetical [1], i.e., they were proposed as an\nexperiment and were not tested in real networks (although many\nof them were deployed in testbed environments). This research is\nno different. When we speak about wireless sensor networks, we\nare referring to the projected and experimental designs and\ndeployments discussed in the literature and not to actual instances\nof wireless sensor networks deployed in the field.\nDifferently from the real settings, the testbed environments are\nbuilt and organized focusing on the network features one wants to\nobserve and test. This organization involves three main\ncomponents: infrastructure, network protocols and applications\n[6]. 
The infrastructure is formed by the sensor nodes and their topology, that is, the way they are scattered over a given region. The network protocol is responsible for the creation and maintenance of communication links between sensor nodes and the applications. The applications extract information about a given phenomenon through the sensor nodes. The following topics introduce in more detail the assumptions made regarding these aspects.
2.1 Applications
The way in which the applications gather data from the sensor nodes depends on the network design. In the literature, there are four data transfer modes between sensor nodes and applications: continuous, event-oriented, query-oriented and hybrid [6]. In the continuous model, sensor nodes transfer their data continuously at a predefined rate. In the event-oriented model, nodes transfer data only when an event of interest occurs. In the query-oriented model, the application is responsible for deciding which events are of interest and requests data about the phenomenon accordingly. Lastly, in the hybrid model, the three approaches may coexist. In this research, we adopt a hybrid approach in that it uses both the query-oriented and the event-oriented models, as will be shown in the target application presented in Section 3.
2.2 Network Protocol
The performance of the network protocol is influenced by the communication model adopted, the packet data transfer mode and the network mobility. In order to evaluate how a network protocol behaves, it is important to take these aspects into account.
Communication in sensor networks is classified into two major categories [6]: application and infrastructure. Application communication consists of the transfer of data obtained by the sensor nodes to the observer. This kind of communication is of two types: cooperative and non-cooperative. In cooperative mode, sensor nodes exchange data among themselves before transmitting the gathered data. In non-cooperative mode, however, sensor nodes do not exchange any kind of information; each one is solely responsible for reporting its collected data. The infrastructure data refers to the information needed to set up, maintain and optimize the running network. As the network protocol must support both categories, the SensorBus architecture will not address those issues.
This kind of network is further classified, according to the mobility of its components, into dynamic networks with a mobile observer, dynamic networks with mobile sensors, and dynamic networks with mobile phenomena. In the first, the observer is mobile in relation to the sensors and phenomena; in the second, the sensors move with respect to each other and the observer; and in the last, the phenomenon itself is in motion. The routing protocol is also responsible for treating mobility issues, relieving SensorBus of these concerns.
2.3 Infrastructure
As for the infrastructure, the issues to take into consideration are location, access point and the sensor node's computing power. The nodes have well-known locations and are scattered over a well-defined area. We will assume that all information is transmitted and received by means of a unique access point called the sink node. Although, for this model, we consider all nodes to be identical, there is nothing to prevent one node from having more memory, more energy or more computing power available.
ENVIRONMENTAL MONITORING APPLICATIONS
Environmental monitoring applications are used to evaluate, qualitatively and quantitatively, the natural resources of a given area. These applications collect data and analyze and follow environmental variables continuously and systematically in order to identify current patterns and predict future trends [7]. Environmental monitoring provides information about the factors influencing conservation, preservation, degradation and environmental recovery. One might consider it a tool of evaluation and control.
Wireless sensor networks can be used for performing environmental monitoring in indoor locations such as a building or a house, or in outdoor locations such as forests, lakes, deserts and rivers. Internal monitoring might be described as tracking the variables in an indoor location. For example, one might deploy an infrared camera to track motion in a room that is supposed to be secure; if motion is detected, an internal device might trigger an alarm. Sometimes, in order to detect and identify an event, information from more than one sensor might be required. These results are processed and compared with the signature of the event of interest. In outdoor monitoring, there may be thousands of sensors scattered over an area, and when an event of interest occurs, such as a temperature change, a moisture change or a CO2 increase, the sensor might trigger the event management module, which in turn sends the observer a signal to notify him or her of the event. Wireless sensors can also save money in deploying a sensor infrastructure, as described in [8], where the authors were able to decrease the number of sensors needed to monitor forest fires in comparison with a wired model.
In summary, the value of a wireless sensor network lies in its ability to provide information over a large area in reply to the questions posed by its users. The query mode is the most common approach used. Another approach is the mode in which sensors remain waiting for some event to happen. By observing these aspects we draw the first requirement (R1) of our middleware model: the system must be able to function in two modes, query-driven and event-oriented.
Depending on the application, it may be more convenient to access a specific node or a specific property.
For example, in internal environmental monitoring, if one wants to know the temperature of a particular room, one must access the information collected by a specific sensor, thus requiring unique node addressing and identification. On the other hand, in external environmental monitoring, sensor nodes do not need to be uniquely identified, as in this kind of application the purpose is to collect the value of a certain variable over a given area. From that observation we extract the second requirement (R2): the system must be able to address sensor nodes both uniquely and by attribute (the property to be observed).
In some applications the mobility of sensor nodes must be taken into account. For example, sensors scattered over a forest for collecting humidity and temperature data are static, i.e., they do not change their geographical location, whereas placing sensors on a river's surface to collect data about its contamination levels characterizes a mobile environment. Thus, the third requirement (R3) of our middleware model is to take mobility issues into consideration.
The sensing coverage area of a given wireless node is smaller than its radio coverage. Besides, sensors operate in noisy environments. To achieve a trustworthy sensory resolution, a high density of sensors is required. In some applications the size of the coverage area leads to a great number of sensor nodes. A simple application in the field of environmental monitoring, such as surveillance of oceans and forests, requires from hundreds to thousands of nodes. In other applications, like internal environmental monitoring, the number of nodes is limited by the size of the area. Therefore, the fourth requirement (R4) is to take into account the size of the network.
In external environmental monitoring, the nodes are spread over a hostile region, where it is not possible to access them for maintenance. The lifetime of each sensor node depends exclusively on the little energy available to the node. To conserve energy, the speed of the CPU and the bandwidth of the RF (Radio Frequency) channel must be limited. This requirement adds restrictions on CPU performance, memory size, RF bandwidth and battery size. In applications where the sensors are not spread over a hostile region, it is possible to access them for maintenance and the battery lifetime of each sensor is not a critical aspect. Finally, the fifth requirement (R5) is to take into consideration the limited energy resources of each sensor node.
MECHANISMS AND ABSTRACTIONS
This middleware model is comprised of three mechanisms and one abstraction. The publish-subscribe paradigm is employed, as well as constraint and query languages and application filters, to meet requirements R1, R2 and R5. The design patterns abstraction is used to meet requirements R2, R3 and R4.
4.1 Publish-Subscribe Paradigm
SensorBus is a Message Oriented Middleware (MOM) that employs the publish-subscribe paradigm. In this approach, a component that generates events (producer) publishes the types of events that will be available to other components (consumers) [9]. A consumer interested in a given event \"subscribes\" to that event, and from then on receives notifications about the event \"subscribed\" to. These notifications are sent asynchronously from producers to all interested consumers.
The MOM performs the functions of collecting producers' messages, filtering and transforming such messages (when necessary) and routing them to the appropriate consumers.
Publish-subscribe communication is anonymous, asynchronous and multicast. Data are sent and received in asynchronous broadcast messages, based on subject and independent of the identity and location of producers and consumers. This kind of communication has desirable properties for sensor networks; for example, it saves energy, since a given node does not need to wait for a synchronous response to proceed, as it would in networks that implement end-to-end connections, thus increasing the lifetime of the network. Furthermore, as it also implements multicast, a group of sensors might be formed for a specific application.
As a consequence, the adoption of the publish-subscribe paradigm meets the R1 requirement concerning the need for events and the R2 requirement pertaining to attribute addressing. In addition, it also meets the R5 requirement related to energy saving.
4.2 Constraint and Query Languages
Constraint and query languages are used to filter collected data by specifying restrictions on attribute values and preferences. A statement in these languages is a string that represents an expression. The constraint language only includes constants (values) and operations over values. Values and operations over integers, floats, booleans and strings are allowed. The language admits several types of expressions.
The expressions can be comparative: == (equality), != (inequality), >, >=, <, <=. For instance, Temperature < 36.6 means to consider only data where the attribute Temperature is less than 36.6 degrees Celsius.
The expressions can be boolean: AND, OR, NOT. For example, Temperature >= 26.6 AND Temperature <= 36.6 means to consider only data where the value of the attribute Temperature is between 26.6 and 36.6 degrees Celsius.
The expressions can be numerical, with the mathematical operators + (addition), - (subtraction), * (multiplication) and / (division).
The query language has its syntax based on a subset of the SQL92 conditional expression syntax [10]. It extends the constraint language with new functions. This language includes identifiers that can hold a constant value, so a mapping between identifiers and values is required; in the evaluation of an expression, the occurrence of an identifier is replaced by its associated value. The addition of new operators (between, like, in, is, escape) allows submitting queries similar to those used in databases compliant with SQL92. For example, queries of the type Temperature between 26.6 and 36.6 are possible.
The constraint and query languages are intended to ease the programming of online applications. This type of application accesses the information sent in real time by the sensor nodes. Thus, this completes the fulfillment of the R1 requirement regarding the query mode of operation.
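As an illustration of the kind of expressions the constraint and query languages admit, the sketch below evaluates comparative and boolean constraints against a single sensor reading. The tuple-based expression encoding is an assumption made for illustration only and does not reproduce the SensorBus grammar.

```python
import operator

OPS = {"==": operator.eq, "!=": operator.ne, ">": operator.gt,
       ">=": operator.ge, "<": operator.lt, "<=": operator.le}

def evaluate(constraint, reading):
    # constraint: ("AND", c1, c2) | ("OR", c1, c2) | ("NOT", c) | (attribute, op, value)
    # reading: dict mapping attribute names to sampled values.
    tag = constraint[0]
    if tag == "AND":
        return evaluate(constraint[1], reading) and evaluate(constraint[2], reading)
    if tag == "OR":
        return evaluate(constraint[1], reading) or evaluate(constraint[2], reading)
    if tag == "NOT":
        return not evaluate(constraint[1], reading)
    attribute, op, value = constraint
    return OPS[op](reading[attribute], value)

# "Temperature between 26.6 and 36.6" expressed with the basic comparison operators.
between = ("AND", ("Temperature", ">=", 26.6), ("Temperature", "<=", 36.6))
print(evaluate(between, {"Temperature": 30.2}))   # True
```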
4.3 Application Filters
Filters are application-specific software modules that deal with diffusion and data processing [11]. Filters are provided before a sensor network is deployed, and each filter is specified using a list of attributes so that it can be matched against incoming data. Filters are used for internal data aggregation, collaborative signal processing, caching, and tasks that control the data flow in the sensor network [11]. In SensorBus, filters are used to limit the data flow in the network. A filter can be designed to restrict the range of values of a given attribute; for example, if the application requires the attribute Temperature to have values between 20 and 30 degrees Celsius, values outside this range are of no interest. The filtering process discards the unnecessary data, reducing the flow between the nodes and, consequently, the energy consumption of the sensor nodes. This completes the fulfillment of requirement R5, concerning energy saving in the sensor nodes.

4.4 Design Patterns
Design patterns are descriptions of objects and communicating classes that are customized to solve a general design problem within a particular context [12]. A pattern describes a commonly recurring design architecture extracted from the experience of one or more domain specialists. A design pattern names, abstracts, and identifies the key aspects of a common design structure that make it useful for creating a reusable object-oriented design [12]. We make use of design patterns in the SensorBus project; the patterns we have used are as follows:

The Observer pattern: defines a one-to-many dependency between objects so that when one object changes state, all of its dependents are notified and updated automatically. We use this pattern to implement the publish-subscribe mechanism (a code sketch of this use appears at the end of this subsection).

The Interpreter pattern: given a language, defines a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language. We use this pattern to implement the constraint and query languages.

The Facade pattern: defines a unified (higher-level) interface to a set of interfaces in a subsystem, making the subsystem easier to use. We use this pattern to implement the high-level middleware primitives made available to developers.

The Mediator pattern: defines an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets their interactions vary independently.

The Adapter pattern: converts the interface of a class into another interface that clients expect. Adapter lets classes work together that otherwise could not because of incompatible interfaces.

The Router pattern: decouples multiple sources of input from multiple sources of output so that data is routed correctly without blocking.

The Mediator, Adapter and Router patterns are used to implement the middleware message bus. The exchangeable communication mechanism was written using these patterns; it allows the use of any routing protocol designed for sensor networks, thereby meeting requirements R2, R3 and R4.
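The sketch below illustrates, with purely illustrative class names, how the Observer pattern can realize the publish-subscribe mechanism: consumers register with a channel and are notified whenever a producer publishes a data item. It is a minimal, synchronous stand-in for what the middleware does asynchronously.

```java
import java.util.ArrayList;
import java.util.List;

// Observer-based publish-subscribe sketch (illustrative names only).
interface Subscriber {                       // the "observer"
    void onEvent(String subject, Object value);
}

class EventChannel {                         // the observed "subject"
    private final List<Subscriber> subscribers = new ArrayList<>();

    void subscribe(Subscriber s) { subscribers.add(s); }

    // A producer publishes an event; all interested subscribers are notified.
    void publish(String subject, Object value) {
        for (Subscriber s : subscribers) {
            s.onEvent(subject, value);
        }
    }
}

class ObserverDemo {
    public static void main(String[] args) {
        EventChannel temperature = new EventChannel();
        temperature.subscribe((subject, value) ->
                System.out.println(subject + " = " + value));
        temperature.publish("Temperature", 28.4);   // producer side
    }
}
```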
SENSORBUS ARCHITECTURE
SensorBus comprises the following elements: an application service, a message service and a context service, as shown in Figure 1.

Figure 1. Middleware architecture.

The following sections present each of these services by means of UML (Unified Modeling Language) component diagrams [13].

5.1 Application Service
The application service provides an Application Programming Interface (API) that simplifies application development. This service comprises three components, as shown in Figure 2:

Figure 2. Application Service Architecture.

DataBus: component providing a set of operations relating to bus communication for consumers and producers. These operations include: announcement of a data item (producer); finding a data item (consumer); announcement of a data change (consumer); and exclusion of a data item (producer).

Filter: component providing a set of operations relating to data filtering.

Language: component that implements the commands and the constraint and query language interpreter.

5.2 Message Service
The message service is responsible for providing communication and coordination for the distributed components, abstracting these issues away from the developer. This service also comprises three components, as shown in Figure 3:

Figure 3. Message Service Architecture.

Channel: component designed to deal with the specific transport implementations. Each instance of Channel represents a single system channel. The Channel component maintains global state information about the availability of channels and is also responsible for handing the channel's messages to the transport implementation and vice versa.

Transport: the communication among the nodes is made through a specific transport implementation, such as sockets. Each transport implementation communicates through a channel with a message exchange server called the Sinker. All transport implementations share a common interface, called ITransport.

Sinker: component responsible for routing messages among instances of the transport implementation, each instance corresponding to an instance of Channel.

5.3 Context Service
An application running on a wireless sensor network inherently needs to capture information from its execution context, for example battery level, memory availability, bandwidth, location, and application-specific information such as temperature and pressure. The middleware obtains this information by interacting with several heterogeneous sensors; for example, the level of energy remaining in the batteries can be obtained by executing an operating system primitive, while location can be acquired from various communication technologies such as GPS, infrared and RF. This work does not address how context sensing is performed; it is assumed that each sensor provides an interface the middleware can use to obtain the value of the resource of interest.

The context service manages the heterogeneous sensors that collect information from the environment. For each resource the middleware manages, there is an adapter that interacts with the physical sensor and processes its information, thereby obtaining the information demanded by the application. Only the resource adapters needed by the running application are loaded, to avoid unnecessary spending of the node's scarce computing power. Figure 4 shows an energy adapter interacting with an energy sensor (an operating system primitive, in this example).

Figure 4. Context Service Architecture.
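The interfaces below sketch, in Java, how the components just described might be exposed to application code. The operation names are inferred from the textual descriptions above (announce, find, exclude, send, receive) and are hypothetical; the paper does not give the actual SensorBus signatures.

```java
// Hypothetical Java interfaces for the components of Section 5; the method
// names are inferred from the descriptions above, not taken from the
// actual SensorBus source code.

interface IDataBus {                        // Application Service: DataBus
    void announce(String dataItem);         // producer announces a data item
    Object find(String dataItem);           // consumer looks up a data item
    void announceChange(String dataItem);   // announcement of a data change
    void exclude(String dataItem);          // producer removes a data item
}

interface IChannel {                        // Message Service: Channel
    boolean isAvailable();                  // channel availability information
    void forward(byte[] message);           // hand a message to the transport
}

interface ITransport {                      // Message Service: Transport
    void send(byte[] message);              // deliver a message to the Sinker
    byte[] receive();                       // obtain a message from the Sinker
}

interface IAdapter {                        // Context Service: resource adapter
    String resourceName();                  // e.g. "energy" or "temperature"
    Object currentValue();                  // value obtained from the sensor
}
```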
MIDDLEWARE ARCHITECTURE EXAMPLE
Figure 5 shows the sensor network communication architecture. Each node in the field has the ability to collect data and send it to the next sink node. The sink node can be a mobile node acting as a data source or a fixed host computer (a PC). The user node connects to the sink node through a conventional wireless LAN (e.g., IEEE 802.11x).

Figure 5. Communication Architecture.

The services and components of SensorBus are distributed across three distinct types of nodes. The DataBus, Language, Channel and Transport components reside in the user node. The Sinker component resides in the sink node. The sensor nodes contain the Channel and Transport components, while the Filter component and the context service are only loaded if the application requires energy management or the management of other resources such as memory and bandwidth.

Developing an application with SensorBus consists of coding the producer and consumer parts. The consumer code runs on the user machine, while the producer code runs on the sensor nodes. The minimum steps required to use SensorBus are as follows (they are illustrated by the code sketch at the end of this section):

1. Create a new DataBus instance. A new transport implementation is created by identifying a specific Sinker;
2. Instantiate a producer or a consumer;
3. Instantiate a Channel entity;
4. Register the newly created producer or consumer with the channel; and
5. The producer generates data items and places them into the Channel, while the consumer finds and processes ("crunches") those data items.

SensorBus offers other functions that might be implemented, such as listing the available channels, adding new channels and stopping the reception from channels.

The producer sensor code has to be implemented before the network is set up. If it is not possible to retrieve a sensor for maintenance, the attributes of the data it sends would always remain the same. To overcome this obstacle, the constraint and query languages are used to add new queries that had not been initially foreseen. These queries are sent by the interested consumer (client) in the form of messages.

Figure 6. Middleware architecture example.

Filters are implemented in the nodes as well. Soon after a producer is instantiated, a filter is also instantiated and registered for a new channel.

Figure 6 shows the components that may be active at a given moment. Although most of the components are the same for a given application, different configurations may occur in the context service. The figure shows only two adapters running at the same time and interacting with their associated sensors (temperature and battery). Distinct sensors can be used depending on the physical measurement to be taken and the type of computing resource to be managed.
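The following self-contained sketch walks through the five steps above. DataBus, Channel, Producer and Consumer are hypothetical stand-ins for the SensorBus API, reduced to an in-memory simulation so that the example runs on its own; the real middleware would carry out these interactions through the message service.

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch of the five steps above (hypothetical API names).
class Channel {
    private final List<Consumer> consumers = new ArrayList<>();
    void register(Consumer c) { consumers.add(c); }               // step 4
    void put(String attribute, double value) {                    // producer side
        for (Consumer c : consumers) c.deliver(attribute, value);
    }
}

class DataBus {
    DataBus(String sinkerAddress) { /* would create the transport here */ }
    Channel createChannel(String subject) { return new Channel(); }
}

class Producer {
    private final Channel channel;
    Producer(Channel channel) { this.channel = channel; }
    void announce(String attribute, double value) { channel.put(attribute, value); }
}

class Consumer {
    void deliver(String attribute, double value) {
        System.out.println(attribute + " = " + value);            // "crunch" the data
    }
}

public class SensorBusUsage {
    public static void main(String[] args) {
        DataBus bus = new DataBus("sink-node:4444");          // step 1
        Consumer consumer = new Consumer();                   // step 2
        Channel channel = bus.createChannel("Temperature");   // step 3
        channel.register(consumer);                           // step 4
        Producer producer = new Producer(channel);            // producer side
        producer.announce("Temperature", 28.4);               // step 5
    }
}
```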
IMPLEMENTATION ISSUES
The testbed for the SensorBus evaluation consists of Intel-based equipment fitted with 802.11b cards. The sink node is a Centrino-based Dell Latitude notebook, and the sensor nodes are HP iPAQ handheld computers running the Linux operating system on an Intel XScale processor. The sensor nodes are placed at various locations in the Electrical Engineering Department building (about 40 m x 60 m) at the Federal University of Pará. Linksys wireless LAN cards are used, working in DCF mode with a channel bandwidth of 11 Mbps. In the building, there is interference from IEEE 802.11 access points (APs) and other electronic devices.

7.1 Working Prototype
For our working prototype, we have chosen the Java platform as our implementation technology because of its broad installed base and to ensure compatibility with most hardware platforms. The KVM (Kilobyte Virtual Machine) [14] is used because its source code is freely available and because it targets small, resource-limited devices similar to the sensor nodes of this work. The SensorBus API and its constraint and query language are being coded as Java classes. For efficiency reasons, the code that will run on the sensor nodes is being implemented as native code.

An object serialization mechanism was implemented because KVM does not support this facility. Serialization converts an object and its state into a byte stream, allowing the object to be moved over a network or persisted in a local file system; object recovery is performed through the reverse mechanism, deserialization. Other Java technologies, such as J2SE (Java 2 Standard Edition), provide this kind of facility to encode objects into a stream of bytes while protecting private and transient data, and serialization is used in distributed programming through sockets or Remote Method Invocation (RMI). We have coded a semiautomatic serialization mechanism in order to store the state of objects; to achieve this, we had to define a series of new interfaces and classes.

One of the most critical problems with serialization is security: when an object is converted into a byte stream, any attacker equipped with suitable sniffer software can intercept and access it, and in this case even private attributes can be read without any special technique. To tackle this issue, secure protocols such as HTTPS, or encryption of the serialized stream, can be used.
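A minimal sketch of such a hand-rolled mechanism is shown below, assuming a hypothetical Streamable interface and SensorReading class; it only illustrates the kind of interfaces and classes the prototype needs, not the actual SensorBus implementation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of a hand-rolled (de)serialization mechanism of the kind described
// above; Streamable and SensorReading are hypothetical names.
interface Streamable {
    void writeTo(DataOutputStream out) throws IOException;
    void readFrom(DataInputStream in) throws IOException;
}

class SensorReading implements Streamable {
    String attribute = "";
    double value;

    public void writeTo(DataOutputStream out) throws IOException {
        out.writeUTF(attribute);
        out.writeDouble(value);
    }

    public void readFrom(DataInputStream in) throws IOException {
        attribute = in.readUTF();
        value = in.readDouble();
    }

    // Serialization: object state -> byte stream (to be sent over a socket).
    byte[] serialize() throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeTo(new DataOutputStream(bytes));
        return bytes.toByteArray();
    }

    // Deserialization: byte stream -> object state.
    static SensorReading deserialize(byte[] data) throws IOException {
        SensorReading r = new SensorReading();
        r.readFrom(new DataInputStream(new ByteArrayInputStream(data)));
        return r;
    }
}
```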
7.2 Simulation
Having implemented this working prototype, we now intend to carry out a performance evaluation using the well-known NS Network Simulator [15]. We plan to integrate the SensorBus middleware with NS so that the sensor nodes plug into NS and provide real data to feed the simulation model. To do so, an execution environment will be added to the simulator. This environment will run as a single UNIX process and will be plugged into the NS protocol stack through a sensor agent.

The sensor agent is an NS agent responsible for connecting an execution environment instance to the NS protocol stack. The communication takes place through a pair of UDP (User Datagram Protocol) sockets. Incoming packets are encapsulated in NS packets and transmitted through the simulated sensor network. Parameters that need to be known to the protocol stack are placed in the header of the NS packet, while the rest of the information is added to the payload. Similarly, outgoing packets are retrieved from the NS packets and sent to the execution environment to be processed. It will be necessary to provide a mechanism to synchronize the execution and simulation environments, since they run on distinct time scales. Simulations will be performed using an NS Directed Diffusion transport implementation [5] for wireless sensor networks.

RELATED WORKS
In [16], an overall description of the challenges involved in middleware for wireless sensor networks is presented, focusing on the constraints of these systems.

Cougar [17] and SINA (Sensor Information Networking Architecture) [18] provide a distributed database interface for wireless sensor networks that uses a query language to allow applications to run monitoring functions. Cougar manages power by distributing queries among the sensor nodes so as to minimize the energy required for data gathering. SINA adds low-level mechanisms to build hierarchical clusterings of sensors aimed at efficient data aggregation, and also provides protocols that limit the rebroadcast of similar information to neighboring nodes.

AutoSec (Automatic Service Composition) [19] manages the resources of sensor networks by providing access control for applications in order to ensure quality of service. This approach is very similar to conventional middleware technology, but its techniques for collecting resource information are suited to wireless sensor networks.

DSWare [20] provides a service abstraction similar to AutoSec, but instead of having a service provided by a single sensor node, the service is supplied by a group of neighboring nodes.

The Smart Messages Project [21] proposes a distributed computing model based on the migration of execution units. Smart messages are migratory units containing data and code. The goal of the project is to develop a computing model and system architecture for networks of embedded systems (NES).

EnviroTrack [22] is a middleware for object-based distributed systems that raises the abstraction level for programming environmental monitoring applications. It contains mechanisms that abstract groups of sensors into logical objects.

Impala [23] exploits mobile code techniques to alter the middleware functionality running on a sensor node. The key to energy-efficient management in Impala is keeping applications as modular and concise as possible, so that small changes demand few energy resources.

MiLAN [24] was developed to allow dynamic network configuration in order to meet the performance requirements of applications. Applications represent their requirements by means of specialized graphs that incorporate changes in the applications' needs.

In [25], an adaptive middleware is proposed that explores the trade-off between resource consumption and quality during information collection. The main goal is to decrease the number of transmissions among sensor nodes without compromising the overall result.

Each of these middleware proposals is designed to make efficient use of wireless sensor networks, but none of them supports free exchange of the transport mechanism. More specifically, most of these approaches are not capable of changing the routing protocol to meet different application requirements.

CONCLUDING REMARKS
As was demonstrated, application development is closely tied to wireless sensor network design. Each communication mechanism provided by a given routing protocol is application specific, i.e., it is designed to meet some specific application requirement.
We suggest that the utility of the\nmiddleware for wireless sensor networks is supported by\ndecoupling the communication mechanism from the programming\ninterfaces and also by capability of using more than one\ncommunication mechanism to address the requirements of larger\nnumber of applications. We have shown that SensorBus, a sensor\nnetwork middleware that we are developing to meet these goals,\ncan aid the development of different types of sensor network\napplications.\n\nREFERENCES\n[1]\n\nP. Rentala, R. Musunuri, S. Gandham and U. Saxena, Survey\non Sensor Networks, Technical Report, University of Texas,\nDept. of Computer Science, 2002.\n[2]\n\nG. -C. Roman, A. L. Murphy, and G. P. Picco, Software\nEngineering for Mobility: A Roadmap. In The Future of\nSoftware Engineering 22\nnd\nInt. Conf. On Software\nEngineering (ICSE2000), pages 243-258. ACM Press, May\n2000.\n[3]\n\nJ. Pottie and W. J. Kaiser, Embedding the internet wireless\nintegrated network sensors, Communications of the ACM,\nvol. 43, no. 5, pp. 51-58, May 2000.\n[4]\n\nW. Heinzelman, A. Chandrakasan and H. Balakrishnan,\nEnergy-efficient communication protocol for wireless micro\nsensor networks. Proceedings of the 33\nrd\nAnnual Hawaii\nInternational Conference on System Sciences, Pages 3005-3014\n, 2000.\n[5]\n\nC. Intanagonwiwat, R. Govindan, and D. Estrin, Directed\ndiffusion: A scalable and robust communication paradigm\nfor sensor networks. In Proceedings of the ACM/IEEE\nInternational Conference on Mobile Computing and\nNetworking, pages 56-67, Boston, MA, USA, Aug. 2000.\n[6]\n\nT. Sameer, N. B. Abu-Ghazaleh and Heinzelman W. A\nTaxonomy of Wireless Micro-Sensor Network Models.\nMobile Computing and Communications Review, Volume 1,\nNumber 2, 2003.\n[7]\n\nGuia de Chefe Brazilian Institute of Environment\n(IBAMA)\nhttp://www2.ibama.gov.br/unidades/guiadechefe/guia/t-1corpo\n.htm. December, 2004.\n[8]\n\nB. C. Arrue, A. Ollero e J. R. M. de DIOS, An intelligent\nsystem for false alarm reduction in infrared forest-fire\ndetection, IEEE Intelligent Systems, vol. 15, pp. 64-73, 2000.\n[9]\n\nG. Couloris, J. Dollimore, e T. Kindberg Distributed\nSystems: Concepts and Design. Third edition. Addison-Wesley\n, 2001.\n[10]\n\nSQL92 Database Language SQL July 30, 1992.\nhttp://www.cs.cmu.edu/afs/andrew.cmu.edu/usr/shadow/www\n/sql/sql1992.txt\n[11]\n\nJ. Heidemann, F. Silva, C. Intanagonwiwat, R. Govindan, D.\nEstrin and D. Ganesan. Building efficient wireless sensor\nnetworks with low-level naming. In Proceedings of the\nSymposium on Operating Systems Principles, pages 146-159,\nChateau Lake Louise, Banff, Alberta, Canada, October 2001.\n[12]\n\nE. Gamma, R. Helm, R. Johnson e J. Vlissides, Design\nPatterns. Addison-Wesley, 1995.\n[13]\n\nJ. Rumbaugh, I. Jacobson and G. Booch. The Unified\nModeling Language Reference Manual. Addison Wesley,\n1998.\n[14]\n\nKVM The K Virtual Machine Specification.\nhttp://java.sun.com/products/kvm/\n, August 2004.\n[15]\n\nUCB/LBNL/VINT Network Simulator NS (Version 2).\nhttp//www.isi.edu./nsnam/ns/, August 2004.\n[16]\n\nK. Rmer, O. Kasten and F. Mattern. Middleware\nChallenges for Wireless Sensor Networks. Mobile\nComputing and Communications Review, volume 6, number\n2, 2002.\n[17]\n\nP. Bonnet, J. Gehrke and P. Seshadri. Querying the Physycal\nWorld. IEEE Personal Communication, 7:10-15, October\n2000.\n[18]\n\nC. Srisathapornphat, C. Jaikaeo and C. Shen. 
Sensor\nInformation Networking Architecture, International\nWorkshop on Pervasive Computing (IWPC00), Toronto\nCanada, August 2000.\n[19]\n\nQ. Han and N. Venkatasubramanian. AutoSec: An integrated\nmiddleware framework for dynamic service brokering. IEEE\nDistributed Systems Online, 2(7), 2001.\n9\n[20]\n\nS. Li, S. Son, and J. Stankovic. Event detection services\nusing data service middleware in distributed sensor\nnetworks. In Proceedings of the 2\nnd\nInternational Workshop\non Information Processing in Sensor Networks, April 2003.\n[21]\n\nSmart Messages project. March, 2003.\nhttp://discolab.rutgers.edu/sm.\n[22]\n\nT. Abdelzaher, B. Blum, Q. Cao, D. Evans, J. George, S.\nGeorge, T. He, L. Luo, S. Son, R. Stoleru, J. Stankovic and\nA. Wood. EnviroTrack: Towards an Environmental\nComputing Paradigm for Distributed Sensor Networks.\nTechnical report, Department of Computer Science,\nUniversity of Virginia, 2003.\n[23]\n\nT. Liu and M. Martonosi. Impala: A middleware system for\nmanaging autonomic, parallel sensor systems. In ACM\nSIGPLAN Symposium on Principles and Practice of Parallel\nProgramming (PPoPP03), June 2003\n[24]\n\nA. Murphy and W. Heinzelman, MiLan: Middleware linking\napplications and networks, Technical Report TR-795,\nUniversity of Rochester, 2002.\n[25]\n\nX. Yu, K. Niyogi, S. Mehrotra and N. Venkatasubramanian,\nAdaptive middleware for distributed sensor networks, IEEE\nDistributed Systems Online, May 2003.", "keywords": "message service;publish-subscribe paradigm;message-oriented middleware model;environmental monitoring applications;application filters;context service;Middleware;constraint and query languages;design pattern;wireless sensor networks;application service;wireless sensor network"} {"name": "177", "title": "Seven Cardinal Properties of Sensor Network Broadcast Authentication", "abstract": "We investigate the design space of sensor network broadcast authentication . We show that prior approaches can be organized based on a taxonomy of seven fundamental proprieties, such that each approach can satisfy at most six of the seven proprieties. An empirical study of the design space reveals possibilities of new approaches, which we present in the following two new authentication protocols : RPT and LEA. Based on this taxonomy, we offer guidance in selecting the most appropriate protocol based on an application's desired proprieties. Finally, we pose the open challenge for the research community to devise a protocol simultaneously providing all seven properties.", "fulltext": "INTRODUCTION\nDue to the nature of wireless communication in sensor networks,\nattackers can easily inject malicious data messages or alter the content\nof legitimate messages during multihop forwarding. Sensor\nnetwork applications thus need to rely on authentication mechanisms\nto ensure that data from a valid source was not altered in\ntransit. Authentication is thus arguably the most important security primitive in sensor network communication. Source authentication\nensures a receiver that the message originates from the\nclaimed sender, and data authentication ensures that the data from\nthat sender was unchanged (thus also providing message integrity).\nWhen we use the term authentication we mean both source and\ndata authentication.\nBroadcast authentication is a challenging problem. Furthermore,\nit is of central importance as broadcasts are used in many applications\n. 
For example, routing tree construction, network query, software\nupdates, time synchronization, and network management all\nrely on broadcast. Without an efficient broadcast authentication algorithm\n, the base station would have to resort to per-node unicast\nmessages, which does not scale to large networks. The practical-ity\nof many secure sensor network applications thus hinges on the\npresence of an efficient algorithm for broadcast authentication.\nIn point-to-point authentication, authentication can be achieved\nthrough purely symmetric means: the sender and receiver would\nshare a secret key used to compute a cryptographic message authentication\ncode (MAC) over each message [15, 23]. When a message\nwith a valid MAC is received, the receiver can be assured that\nthe message originated from the sender. Researchers showed that\nMACs can be efficiently implemented on resource-constrained sensor\nnetwork nodes [31], and find that computing a MAC function\nrequires on the order of 1ms on the computation-constrained Berkeley\nmote platform [11, 14].\nAuthentication of broadcast messages in sensor networks is much\nharder than point-to-point authentication [1]. The symmetric approach\nused in point-to-point authentication is not secure in broadcast\nsettings, where receivers are mutually untrusted. If all nodes\nshare one secret key, any compromised receiver can forge messages\nfrom the sender.\nIn fact, authenticated broadcast requires an asymmetric mechanism\n[1]. The traditional approach for asymmetric mechanisms\nis to use digital signatures, for example the RSA signature [34].\nUnfortunately, asymmetric cryptographic mechanisms have high\ncomputation, communication, and storage overhead, making their\nusage on resource-constrained devices impractical for many applications\n.\nThe property we need is asymmetry, and many approaches had\nbeen suggested for sensor network broadcast authentication. However\n, objectively comparing such approaches and selecting the most\nappropriate one for a given application is a non-trivial process, especially\nfor an engineer not specialized in security. The goal of\nthis work is to provide guidance for sensor network broadcast authentication\nby presenting a systematic investigation of the design\nspace. We arrive at a taxonomy of seven fundamental properties,\nand present protocols that satisfy all but one property. The list of\nthe desired properties are:\n147\n1. Resistance against node compromise,\n2. Low computation overhead,\n3. Low communication overhead,\n4. Robustness to packet loss,\n5. Immediate authentication,\n6. Messages sent at irregular times,\n7. High message entropy.\nIf we remove any one of the above requirements, a viable protocol\nexists. Table 1 gives an overview of the seven approaches for\naddressing each case. We show that existing protocols, or small\nmodifications thereof, make up for five of the seven possible cases.\nWe also introduce novel approaches for addressing the final two\ncases: the RPT protocol to authenticate messages sent at regular\ntimes, and the LEA protocol to authenticate low-entropy messages.\nFinally, we pose the open challenge to the research community to\ndesign a broadcast authentication mechanism that satisfies all seven\nproperties.\nOutline.\nThe paper is organized as follows. We introduce the\ntaxonomy of seven properties and discuss how current approaches\ncan be organized based on our taxonomy in Section 2. 
Section 3 describes\nthe TESLA broadcast authentication protocol and presents\nseveral extensions to increase its efficiency and robustness to DoS\nattacks. In Section 3.3, we introduce RPT, a novel protocol that\nauthenticates synchronous messages. In Section 4, we introduce\nLEA, a novel protocol for efficient network broadcast authentication\nfor low-entropy messages. Implementation and evaluation is\ndiscussed in Section 5. Finally, we present related work in Section\n6 and our conclusions and future work in Section 7.\nTAXONOMY OF EXISTING PROTOCOLS\nIn this section, we discuss the seven properties of broadcast authentication\nand describe possible approaches if we were to leave\nout one of the seven requirements.\nNode Compromise.\nSince sensor nodes are not equipped with\ntamper-proof or tamper-resistant hardware, any physical attacker\nwould be able to physically compromise a node and obtain its cryptographic\nkeys [5]. Since it is unlikely that tamper-proof hardware\nwill be deployed on sensor motes in the near future, secure sensor\nnetwork protocols need to be resilient against compromised nodes.\nHowever, if the nodes are deployed in a physically secured area\n(such as an attended army base), or if the application itself is resilient\nagainst malicious nodes, node compromise might not be an\nissue.\nIf we assume no compromised nodes, all parties could maintain a\nnetwork-wide key that is used to generate and verify a single Message\nAuthentication Code (MAC) per message. If instead one can\nassume a low number of compromised nodes, a simple approach\nexists which uses a different key for each receiver and adds one\nMAC per receiver to each message. Unfortunately, this approach\ndoes not scale to large networks since a 10-byte MACs per receiver\nwould result in prohibitively large messages. To trade off communication\noverhead with security, researchers propose a multi-MAC\napproach [3]. In their scheme, the sender chooses some number of\nrandom MAC keys, and distributes a subset of keys to each node.\nEvery message carries one MAC with each key (assuming 10 bytes\nper MAC),\n1\nwhich adds a substantial overhead. If an attacker compromises\na node, it can only forge a subset of MACs, thus with\nhigh probability, other nodes will be able to detect the forgery with\ntheir subset of keys. A variant of this approach was used to prevent\nmalicious injection of messages in sensor networks [36, 37].\nComputation Overhead. Sensor nodes have limited computation\nresources, so an ideal protocol would have low computation overhead\nfor both sender and receiver. However, there exist scenarios\nwhere computation might not be a particularly critical issue. For\nexample, it is conceivable that certain applications would only require\nauthenticated broadcasts for a small number of packets. In\nsuch a case, the application engineer might be willing to allow for\na small number of intensive computations.\nIf we admit a high computation overhead, we can use digital signatures\n. RSA today requires at least a 1024-bit modulus to achieve\na reasonable level of security, and a 2048-bit modulus for a high\nlevel of security [18]. ECC can offer the same level of security\nusing 160-bit keys and 224-bit keys, respectively. Recent advancement\nin ECC signature schemes on embedded processors can perform\nsignature verification using 160-bit ECC keys in about 1 second\n[10]. 
Although this represents a dramatic improvement over\nearlier public key cryptographic schemes [2, 4, 21], signature verification\nis still 3 orders of magnitude slower than MAC verification\n, while signature generation is 4 orders of magnitude slower.\nWhile we expect future sensor nodes to have more powerful processors\n, the energy constraints dictated by the limited battery resources\nwill always favor the use of more efficient symmetric cryptographic\nprimitives.\nCommunication Overhead.\nEnergy is an extremely scarce resource\non sensor nodes, and as a result, heavily influences the design\nof sensor network protocols. In particular, radio communication\nconsumes the most amount of energy, and thus protocols with\nhigh communication overhead are avoided if possible. However, in\nsome settings (e.g., powered nodes) energy consumption is not an\nissue. Thus an authentication protocol that requires high communication\noverhead would be acceptable.\nIf we admit a high communication overhead, we can leverage\nefficient one-time signatures that are fast to compute, but require\non the order of 100200 bytes per signature. Examples include the\nMerkle-Winternitz (MW) signature which requires 230 bytes per\nsignature [25, 26, 35] (we describe the MW signature in detail in\nSection 4.1), or the HORS signature, which requires around 100\nbytes per signature [33]. The MW signature requires around 200\none-way function computations to verify a signature (which corresponds\nto roughly 200 ms computation time on a sensor node),\nwhile the HORS signature only requires 11 one-way function computations\n. The disadvantage of the HORS signature is that the public\nkey is about 10 Kbytes,\n2\nwhereas the public key for the MW\nsignature is only 10 bytes. Signature generation is very efficient\nfor both mechanisms, and can be reduced to a single hash function\ncomputation assuming a lookup table for the cryptographic values.\nWe leverage the MW signature to construct the LEA broadcast authentication\nmechanism, which we present in Section 4.\nMessage Reliability. Our fourth property is message reliability.\nReliable message delivery is the property of a network such that\nvalid messages are not dropped. Ultimately, message reliability is\nan applications issue - some applications require message reliability\n, while others do not.\n1\nAn 80-bit MAC value achieves security comparable to a 1024-bit\nRSA signature [18].\n2\nThis is prohibitively large, since each public key of a one-time\nsignature can be used to authenticate only a single message.\n148\nDesired property\nApproach if property is relaxed\nResistance to node compromise\nNetwork-wide key\nLow computation overhead\nDigital signatures\nLow communication overhead\nOne-time signatures\nRobustness to packet loss\nHORS + chaining of public keys\nImmediate authentication\nTESLA\nMessages sent at irregular times\nRPT, described in Section 3.3\nHigh message entropy\nLEA, described in Section 4.2\nTable 1: Overview of desired properties of broadcast authentication and approaches. The left column presents the desired property,\nand the right column presents the approach that achieves all properties but relaxes the property in its left column. The text describes\neach approach in more detail.\nIf we have perfect message reliability, we can achieve efficient\nand immediate authentication by using the HORS signature in a\nspecial construction that combines multiple public keys [28]. 
In\nthis construction, a public key is still 10 Kbytes, but a single public\nkey can be used to authenticate almost arbitrarily many messages,\nas the public values are incrementally updated as signed messages\nare sent. The communication and computation costs are the same\nas for the HORS signature: 1 ms for signature generation, 11 ms\nfor signature verification, and 100 bytes for the signature. Note that\nin such a scheme, an attacker can start forging HORS signatures if\nmany packets are dropped.\nAuthentication Delay.\nDepending on the application, authentication\ndelay may influence the design of the sensor network protocol\n. For time-critical messages such as fire alarms, the receiver\nwould most likely need to authenticate the message immediately.\nHowever, authentication delay is typically acceptable for non-time-critical\nmessages.\nIf we admit an authentication delay and assume that the receivers\nare loosely time synchronized with the sender, the TESLA broadcast\nauthentication protocol only adds a 10 byte MAC and an optional\n10 byte key to each message [31]. We review the TESLA\nprotocol in detail in Section 3.1. To achieve a low computation\noverhead in the case of infrequent messages sent at unpredictable\ntimes, we need to extend the TESLA protocol to enable fast authentication\nof the keys in the one-way key chain. In Section 3.2\nwe present a more efficient key chain construction that enables efficient\nauthentication in this case. Simultaneously, our approach\nprotects TESLA against denial-of-service attacks by sending bogus\nkey chain values.\nSynchronous Messages.\nSome applications send synchronous\nmessages at regular and predictable times. For example, a key revocation\nlist might be sent to the entire network everyday at noon.\nWe extend the TESLA protocol to provide efficient and immediate\nauthentication for synchronous messages sent at regular and\npredictable times. We name the protocol RPT (Regular-Predictable\nTesla), and we present its details in Section 3.3.\nMessage Entropy. So far, all schemes we describe authenticate\nunpredictable messages with high entropy. However, in practice,\nmany protocols might only communicate with low-entropy messages\n. For example, in many applications, there are only a handful\nof valid commands that a base station can send to a sensor node.\nTherefore, these command packets could be considered as low-entropy\nmessages.\nIf we can assure a low upper bound on message entropy, we can\nleverage one-time signatures in constructions that provide message\nrecovery, where the message is not hashed but directly encoded in\nthe signature. We describe our new LEA protocol in Section 4.\nFor messages with merely a single bit of entropy, we could employ\nthe following optimization using two hash chains. One hash\nchain would correspond to messages of '1', while another would\ncorrespond to messages of '0'. The sender first sends the last value\nof both chains to the receivers in an authenticated manner (e.g., using\none-time signatures or digital signatures). Next, whenever the\nsender wishes to send a '0', it would reveal the next value in the\nhash chain corresponding to '0'. The same is done for the hash\nchain corresponding to '1'. The receiver needs to keep state of the\nmost recent value it received for each hash chain. 
Consequently, the receiver can easily verify the authenticity of new values by hashing them and comparing the result against the most recent value of each hash chain.

BROADCAST AUTHENTICATION WITH THE TESLA PROTOCOL
In this section, we first present a brief overview of the TESLA protocol [29], the recommended broadcast authentication protocol if immediate authentication is not required. We improve the TESLA broadcast authentication protocol to provide efficient authentication for infrequent messages sent at unpredictable times (Section 3.2). In Section 3.3, we describe RPT, a further modification of TESLA that provides immediate authentication for synchronous messages sent at regular and predictable times.

3.1 TESLA Overview
The TESLA protocol provides efficient broadcast authentication over the Internet which can scale to millions of users, tolerate packet loss, and support real-time applications [30]. Currently, TESLA is in the process of being standardized in the MSEC working group of the IETF for multicast authentication.

TESLA has been adapted for broadcast authentication in resource-constrained sensor networks [30, 31]; the adapted protocol is used to secure routing information [17], data aggregation messages [12, 32], and other services.

We now give an overview of the TESLA protocol; a detailed description is available in our earlier paper [31]. Broadcast authentication requires a source of asymmetry, such that the receivers can only verify the authentication information, but not generate valid authentication information. TESLA uses time for asymmetry. TESLA assumes that all receivers are loosely time synchronized with the sender: up to some maximum time synchronization error, all parties agree on the current time. Recent research in sensor network time synchronization protocols has made significant progress, resulting in time synchronization accuracy in the range of microseconds [6, 7], which is much more accurate than the loose time synchronization required by TESLA. By using only symmetric cryptographic primitives, TESLA is very efficient and provides practical solutions for resource-constrained sensor networks.

Figure 1: At the top of the figure is the one-way key chain (using the one-way function F). Time advances left-to-right. At the bottom of the figure, we can see the messages that the sender sends in each time interval. For each message, the sender uses the current time interval key to compute the MAC of the message.

Figure 1 shows an example of TESLA authentication; here is a sketch of the basic approach:
For example, assuming\na disclosure delay of 2 time intervals, key K\ni\nwill be used to\ncompute MACs of broadcast messages sent during time interval\ni, but disclosed during time interval i\n+ 2. The sender\ndefines a disclosure delay for keys, usually on the order of a\nfew time intervals. The sender publishes the keys after the\ndisclosure time.\nThe sender attaches a MAC to each message, computed over\nthe data, using the key for the current time interval. Along\nwith the message, the sender also sends the most recent key\nthat it can disclose. In the example of Figure 1, the sender\nuses key K\ni\n+1\nto compute the MAC of message M\nj\n+3\n, and\npublishes key K\ni\n-1\nassuming a key disclosure delay of two\ntime intervals.\nEach receiver that receives the message performs the following\noperation. It knows the schedule for disclosing keys and,\nsince the clocks are loosely synchronized, can check that the\nkey used to compute the MAC is still secret by determining\nthat the sender could not have yet reached the time interval\nfor disclosing it. If the MAC key is still secret, then the receiver\nbuffers the message. In the example of Figure 1, when\nthe receiver gets message M\nj\n+3\n, it needs to verify that the\nsender did not yet publish key K\ni\n+1\n, by using the loose time\nsynchronization and the maximum time synchronization error\n. If the receiver is certain that the sender did not yet\nreach interval i\n+ 3, it knows that key K\ni\n+1\nis still secret, and\nit can buffer the packet for later verification.\nEach receiver also checks that the disclosed key is correct\n(using self-authentication and previously released keys) and\nthen checks the correctness of the MAC of buffered messages\nthat were sent in the time interval of the disclosed key.\nAssuming the receiver knows the authentic key K\ni\n-2\n, it can\nverify the authenticity of key K\ni\n-1\nby checking that F\n(K\ni\n-1\n)\nequals K\ni\n-2\n. If K\ni\n-1\nis authentic, the receiver can verify\nthe authenticity of buffered packets sent during time interval\ni\n- 1, since they were authenticated using key K\ni\n-1\nto\ncompute the MAC.\nOne-way chains have the property that if intermediate keys are\nlost, they can be recomputed using later keys. So, even if some\ndisclosed keys are lost due to packet loss or jamming attacks, a\nreceiver can recover the key from keys disclosed later and check\nthe authenticity of earlier messages.\nAlong with each message M\ni\n, the sender broadcasts the TESLA\nauthentication information. The broadcast channel may be lossy,\nbut the sender would need to retransmit with an updated MAC key.\nDespite loss, each receiver can authenticate all the messages it receives\n.\n3.2\nReducing Verification Overhead of\n\nTESLA\nEven though TESLA provides a viable solution for broadcast\nauthentication in sensor networks, many challenges still remain.\nWe describe the remaining challenges below and propose extensions\nand new approaches to address these challenges.\nSome applications broadcast messages infrequently at unpredictable\ntimes and the receivers may need to authenticate messages\nimmediately. For example, a fire alarm event is infrequent and\nneeds to be quickly distributed and authenticated. Unfortunately,\nwhen messages are infrequent, due to the one-way chain approach\nto verify the authenticity of keys, a receiver may need to compute\na long chain of hash values in order to authenticate the key which\ncould take several tens of seconds for verification. 
Such verification\ndelays the message authentication significantly and may consume\nsignificant computation and energy resources. This approach also\nintroduces a Denial-of-Service (DoS) attack: an attacker sends a\nbogus key to a receiver, and the receiver spends several thousands\nof one-way function computations (and several seconds) to finally\nnotice that the sent key was incorrect.\nOne approach is to periodically release TESLA keys and hence\nthe work for verification of an infrequent message would be distributed\nover time. However, this approach wastes energy for periodic\nbroadcast of TESLA keys. In the same vein, a sender can\npublish several keys in a packet to reduce the effect of DoS attacks\nby requiring a receiver to perform a small number of one-way\nfunction computations to incrementally authenticate each key of the\none-way chain. An advantage of this approach is that it makes the\nDoS attack described above less attractive to an attacker, as a receiver\nwould need to follow the one-way chain for a short interval\nonly to detect a bogus key.\nAnother approach to counteract the slow and expensive verification\nproblem is to use a Merkle hash tree [24] instead of a one-way\nchain to authenticate TESLA keys. This approach has been suggested\nin another context [13]. For N keys, the tree has height\nd\n= log\n2\n(N) and along with each message, the sender sends d values\nto verify the key. Despite the logarithmic communication cost,\nthis is still too large for most sensor networks: consider a network\nwhere we switch to a different hash tree every day, and we need a\n150\nk\n2\nk\n5\nk\n8\nk\n11\nk\n14\nk\n17\nk\n20\nk\n23\nk\n1\nk\n4\nk\n7\nk\n10\nk\n13\nk\n16\nk\n19\nk\n22\nk\n0\nk\n3\nk\n6\nk\n9\nk\n12\nk\n15\nk\n18\nk\n21\nF\nv\n0\n-7\n= F(v\n0\n-3\n|| v\n4\n-7\n)\nv\n0\n-3\nv\n4\n-7\nv\n01\nv\n23\nv\n45\nv\n67\nv\n0\nv\n1\nv\n2\nv\n3\nv\n4\nv\n5\nv\n6\nv\n7\nFigure 2: Hash tree constructed over one-way chains of TESLA keys.\nkey resolution of 1 second. The 86,400 keys that we need in one\nday require a tree of height 17. Assuming a hash output of 10 bytes,\nthe sender would need to consequently add 170 bytes to each message\nfor authentication (17 nodes at 10 bytes each). This is far too\nmuch for most sensor networks, where nodes typically communicate\nwith messages shorter than 100 bytes. Splitting the load up into\ntwo messages is not a viable approach, because of the usually high\npacket loss rates in sensor networks. The receiver would only need\nto compute O\n(log(N)) operations for verification, 17 hash function\ncomputations in our example which requires around 17ms on current\nsensor nodes.\nTo reduce the bandwidth overhead, we design a different approach\nthat achieves lower message size at the cost of higher verification\ncomputation. our approach is to combine one-way chains\nwith hash trees. Consider the structure that Figure 2 shows. We\nconstruct a hash tree over short one-way chains. If each one-way\nchain has a length of k, the verification cost is expected to be k\n/2+\nlog\n(N/k) (it is at most k + log(N/k)), and the communication cost\nis log\n(N/k). For a given upper bound on the verification time, we\ncan thus minimize the communication overhead. Consider an upper\nbound on the verification time of approximately 500ms. 
We can\nset k\n= 2\n9\n= 512, thus the hash tree will have 8 levels, requiring 80\nbytes per packet, making this an attractive approach for many applications\n.\nAn alternative approach would be to construct a hash tree over\nthe one-way key chain, where the every k'th key will be a leaf node\nof the hash tree (for example, in Figure 2, the value k\n0\nwould be\nderived from the previous leaf node k\n0\n= F(v\n1\n)). The advantage\nof this approach is that a sender would not need to send the hash\ntree values along with a message, as a value can be authenticated\nby following the one-way chain to the last known value. However,\nif the sender did not send out any message during an extended time\nperiod, that authentication would be computationally expensive and\nthus the sender can choose to also send the hash tree nodes along\nfor fast verification. This approach would also prevent DoS attacks\nsince the verification is very efficient.\nM\ni\nM\ni\nK\ni\n-1\nK\ni\nK\ni\n+1\nF\n(Ki)\nF\n(Ki+1)\nInterval i\n- 1\nInterval i\ntime\nT\ni\n-1\nT\ni\nT\ni\n+1\nFigure 3: This figure shows authentication of one message in\nthe RPT protocol. Message M\ni\n= MAC\nK\ni\n(M\ni\n) , and message\nM\ni\n= M\ni\n,K\ni\n.\n3.3\nRPT: Authenticating Messages Sent at Regular\nand Predictable Times\nAs described in our taxonomy in Section 2, one additional property\nin the design space of broadcast authentication is to authenticate\nasynchronous messages sent at irregular and unpredictable\ntimes. All protocols described so far can achieve this property.\nHowever, if we were to remove this requirement, new possible approaches\nexist that can only authenticate messages sent at regular\nand predictable times, yet satisfy all of the other cardinal properties\ndefined in our taxonomy. In this section, we introduce our design\nof one such protocol called RPT, a modification of the TESLA\nprotocol.\nIn practice, many protocols send synchronous messages at regular\nand predictable times. The plaintext of these messages are often\nknown by the sender a priori. In particular, messages containing\nmeta-data are especially well-suited for this type of communication\n. For example, a base-station often performs key update or time\nre-synchronization at a preset time of day. In these examples, the\nsender knows exactly what message needs to be sent at a particular\ntime, but the protocol dictates that such messages cannot be sent\nuntil a pre-specified time.\nConsider an application that broadcasts a message every day at\nnoon to all nodes. If we use standard TESLA with one key per\n151\nday, it would take one day to authenticate the message, since the\nreceivers would need to wait for the disclosed key one day later.\nOn the other hand, if we use many keys, for example, one key per\nsecond, it would require 86\n,400 keys per day (not using the optimization\nwe presented in the previous section), and a sensor node\nwould require an expected time of 43 seconds to verify the authenticity\nof the key. Hence, if messages are sent at very regular time\nintervals, we can streamline TESLA to immediately authenticate\nthese messages.\nThe RPT protocol (Regular-Predictable TESLA) achieves immediate\nauthentication for messages sent at regular and predictable\ntimes. Consider a message that needs to be sent at times T\ni\n=\nT\n0\n+ i D. The sender creates a one-way key chain, and assigns\none key to each time interval of duration D. 
We assume that the\nsender knows the content of the message M\ni\nto be broadcast at time\nT\ni\nby time T\ni\n-, where is the maximum network broadcast propagation\ndelay plus the maximum time synchronization error. At\ntime T\ni\n- , the sender broadcasts message MAC\nK\ni\n(M\ni\n) , and at\ntime T\ni\nthe sender broadcasts M\ni\n,K\ni\n. As soon as the receiver receives\nthe first message, it needs to verify the safety condition that\nkey K\ni\nis still secret, given its current time and the maximum time\nsynchronization error. When receiving the second message, the receiver\nfirst verifies the key K\ni\n. If the key is correct it verifies the\nMAC, and if the MAC is correct it is assured that M\ni\nis authentic.\nNote that this approach does not exhibit any authentication delay,\nas the receiver can immediately authenticate M\ni\nimmediately after\nreception.\nAt first glance, it may appear that RPT is susceptible to a denial-of\n-broadcast attack, where an attacker sends a large number of\nforged MACs around the time the legitimate is sent out. This problem\nhad been studied and addressed in previous work [16]. However\n, it is not easy to evaluate how well this works in practice.\nBROADCAST AUTHENTICATION WITH ONE-TIME SIGNATURES\nAnother way to achieve asymmetric authentication is through the\nuse of one-time signatures. A one-time signature is much faster to\ngenerate and verify than general purpose signatures, but the private\nkey associated with the signature can be used to sign only a single\nmessage, otherwise the security degrades and an attacker could\nforge signatures. Unlike TESLA, time synchronization is not necessary\nand authentication is immediate. Moreover, one-time signatures\nachieve non-repudiation in addition to authentication, which\nenables a node to buffer a message and retransmit it later. The receiver\nof the retransmitted message can still authenticate the message\n.\nOne-time signatures are advantageous in applications with infrequent\nmessages at unpredictable times, as they do not add computation\nto the receiver based upon the time at which the message\nis received. This makes them resilient to many forms of DoS attacks\n. We now present an overview of one-time signatures, and\nthen present our LEA broadcast authentication protocol for authentication\nof low-entropy messages in Section 4.2.\n4.1\nOne-Time Signatures Overview\nThe Merkle-Winternitz signature was first drafted by Merkle [25,\n26], and was later also used by Even, Goldreich, and Micali [8],\nand more recently also by Rohatgi for efficient stream authentication\n[35]. We briefly describe the basic principle of the Merkle-Winternitz\nsignature.\nA Merkle-Winternitz signature relies on efficient one-way functions\nto construct a DAG (directed acyclic graph) to encode a signature\n. Each edge between two vertices (v\n1\nv\n2\n) in the graph\nrepresents an application of the one-way function, where the value\nof the end node is the result of the one-way function applied to the\nbeginning node (v\n2\n= F(v\n1\n), where F represents the one-way function\n). End nodes with multiple incoming edges take on the value\nof the hash of the concatenation of predecessor nodes. 
BROADCAST AUTHENTICATION WITH ONE-TIME SIGNATURES
Another way to achieve asymmetric authentication is through the use of one-time signatures. A one-time signature is much faster to generate and verify than a general-purpose signature, but the private key associated with the signature can be used to sign only a single message; otherwise the security degrades and an attacker could forge signatures. Unlike TESLA, no time synchronization is necessary and authentication is immediate. Moreover, one-time signatures achieve non-repudiation in addition to authentication, which enables a node to buffer a message and retransmit it later; the receiver of the retransmitted message can still authenticate it.

One-time signatures are advantageous in applications with infrequent messages at unpredictable times, as they do not add computation at the receiver that depends on the time at which the message is received. This makes them resilient to many forms of DoS attacks. We now present an overview of one-time signatures, and then present our LEA broadcast authentication protocol for the authentication of low-entropy messages in Section 4.2.

4.1 One-Time Signatures Overview
The Merkle-Winternitz signature was first drafted by Merkle [25, 26], was later also used by Even, Goldreich, and Micali [8], and more recently by Rohatgi for efficient stream authentication [35]. We briefly describe its basic principle.

A Merkle-Winternitz signature relies on efficient one-way functions to construct a DAG (directed acyclic graph) that encodes a signature. Each edge between two vertices (v_1 -> v_2) in the graph represents an application of the one-way function, where the value of the end node is the result of the one-way function applied to the beginning node (v_2 = F(v_1), where F denotes the one-way function). End nodes with multiple incoming edges take on the value of the hash of the concatenation of their predecessor nodes. The initial values of the graph represent the private key, and the final value represents the public key.

To achieve a secure one-time signature, the signature encoding must have the property that an attacker would have to invert at least one one-way function in order to sign any other value (i.e., to forge a signature).

We now discuss an example of a signature graph and signature encoding; Figure 4(a) depicts the one-time signature. A one-way hash chain of length 4 can be used to encode the values 0 to 3. For this signature chain, we use the convention that the first value s_3 in the chain encodes the value 3, the second encodes 2, and so on.

The signer derives the value s_3 from a randomly generated private key K_priv by using a Pseudo-Random Function (PRF), e.g., s_3 = PRF_{K_priv}(0). (We use a block cipher to implement the PRF efficiently; a block cipher is a good PRF as long as we do not use it to compute more than O(2^n) operations with the same key, where n is the block size in bits, and since we only perform a few operations here, the block cipher is a secure and efficient PRF.) To prevent signature forgery (as we will explain later), the sender also creates a checksum chain c_0, ..., c_3, deriving the value c_0 likewise from the private key, e.g., c_0 = PRF_{K_priv}(1), and again using the one-way function to derive the other values, e.g., c_1 = F(c_0). The application of the one-way function to s_0 and c_3 forms the public key: K_pub = F(s_0 || c_3). To sign a value i, where 0 <= i <= 3, the signer uses the values s_i and c_i as the signature.

To verify the signature (s_i, c_i), the receiver follows the one-way chains and recomputes the public key as follows, with F^0(x) = x:

K_pub = F(F^i(s_i) || F^{3-i}(c_i))

A signature is correct if the recomputed value matches the public key. For example, consider a signature on the value 2, i.e., (s_2, c_2). To verify it, the receiver checks that K_pub = F(F(F(s_2)) || F(c_2)).

An attacker who wishes to forge a signature is forced to invert at least one one-way function (since the indices of the checksum chain run in the direction opposite to the signature chain). Assuming the one-way function is secure, an attacker cannot invert the function to forge a signature; hence, the signature is secure. In practice, we can use a secure cryptographic hash function as our one-way function, but for increased efficiency we use a block cipher in hash mode, for example the commonly used Matyas-Meyer-Oseas mode [22].

Using two chains achieves a secure one-time signature, but it does not scale well to signing a large number of bits: with two chains, a signature on 32 bits would require a chain 2^32 values long, which has a very high overhead to generate and verify. Instead, if more than one chain is used, each chain can encode some number of bits of the signature. For example, one could encode an 8-bit number by using four chains of length 4, encoding two bits in each chain; the public key is derived from the last value on all the chains. However, in this scheme we would still need an additional 4 chains of length 4 to encode the values in the opposite direction to prevent forgeries.

The Merkle-Winternitz signature reduces the number of checksum chains: the redundant checksum chains do not encode the actual value, but instead encode the sum of the values on the signature chains. As explained in detail by Merkle [25, 26], the checksum chain encodes the sum of all values in the signature chains. Assuming k signature chains that sign m bits each, the maximum sum is k(2^m - 1), so the checksum chains need to encode log_2(k(2^m - 1)) bits, providing for a significant savings.
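To make the two-chain example of Figure 4(a) concrete, the sketch below signs and verifies a value in {0, ..., 3} in Java. SHA-1 stands in for the one-way function F, and hashing the private key together with an index stands in for the PRF; this is a simplification of the block-cipher-based construction described above, intended only to illustrate the chain arithmetic.

```java
import java.io.ByteArrayOutputStream;
import java.security.MessageDigest;
import java.util.Arrays;

// Sketch of the two-chain one-time signature of Figure 4(a), signing a value in 0..3.
public class OneTimeSig2Bit {
    static byte[] F(byte[] x) throws Exception {                 // one-way function
        return MessageDigest.getInstance("SHA-1").digest(x);
    }
    static byte[] Fn(byte[] x, int n) throws Exception {         // F applied n times
        for (int i = 0; i < n; i++) x = F(x);
        return x;
    }
    static byte[] concat(byte[] a, byte[] b) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(a); out.write(b);
        return out.toByteArray();
    }
    // PRF stand-in: hash of (private key || index); a real implementation
    // would use a block cipher as discussed in the text.
    static byte[] prf(byte[] kPriv, byte index) throws Exception {
        return F(concat(kPriv, new byte[] { index }));
    }

    static byte[] publicKey(byte[] kPriv) throws Exception {
        byte[] s3 = prf(kPriv, (byte) 0);      // start of signature chain
        byte[] c0 = prf(kPriv, (byte) 1);      // start of checksum chain
        byte[] s0 = Fn(s3, 3);                 // s2 = F(s3), s1 = F(s2), s0 = F(s1)
        byte[] c3 = Fn(c0, 3);
        return F(concat(s0, c3));              // K_pub = F(s0 || c3)
    }

    // Signature on value i (0..3) is the pair (s_i, c_i).
    static byte[][] sign(byte[] kPriv, int i) throws Exception {
        byte[] sI = Fn(prf(kPriv, (byte) 0), 3 - i);
        byte[] cI = Fn(prf(kPriv, (byte) 1), i);
        return new byte[][] { sI, cI };
    }

    // Verify by hashing forward: K_pub ?= F(F^i(s_i) || F^(3-i)(c_i)).
    static boolean verify(byte[] kPub, int i, byte[] sI, byte[] cI) throws Exception {
        return Arrays.equals(kPub, F(concat(Fn(sI, i), Fn(cI, 3 - i))));
    }
}
```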
A block cipher is a good PRF as long as we do not use the PRF to compute more than O(2^n) operations with the same key, where n is the block size in bits. Since we only perform a few operations, the block cipher is a secure and efficient PRF.
[Figure 4: Illustration of the Merkle-Winternitz one-time signature. (a) A simple one-time signature that signs 2 bits, built from the signature chain s_3, ..., s_0 and the checksum chain c_0, ..., c_3, with private key K_priv and public key K_pub. (b) A Merkle-Winternitz one-time signature with four signature chains and two checksum chains, which can sign 8 bits.]
log_2(k(2^m − 1)) bits, providing a significant savings. This approach still ensures that an attacker would have to invert at least one one-way function to forge a signature.
Using signature chains with 4 values, a signature on n bits will then require n/2 signature chains. Since each chain encodes up to the value 3, the checksum chain at most needs to encode the value (n/2) · 3 as the total sum; thus, the checksum chains need to sign log_2((n/2) · 3) bits. If we also use checksum chains with 4 values, each checksum chain can again sign 2 bits and we need log_2((n/2) · 3)/2 checksum chains. Figure 4(b) shows an example of such a signature for signing 8 bits. Since each of the four signature chains can at most encode the number 3, the total sum is at most 4 · 3 = 12. Thus we only need 2 additional checksum chains to encode the 4 bits. Again, the indices in the checksum chains run opposite to the indices in the signature chains, to ensure that an attacker would have to invert at least one one-way function to forge a different signature.
For the specific case of signing 80 bits, researchers suggest using chains of length 16 to encode 4 bits per chain [35]. Thus, we need 20 = 80/4 signature chains, and the checksum chains would need to encode at most the values 0 ... 300 (= 20 · 15), which requires 9 bits and hence 3 checksum chains (where the third chain only requires 2 values to sign a single bit). (Footnote 4: We could also encode the checksum with 2 chains of 18 values each, as 18^2 = 324, saving one checksum chain.)
We now compute the computation overhead of signature verification. On average, signature verification requires following half of each signature chain, which requires 8 one-way function computations per chain. In the case of signing 80 bits with 20 signature chains, this results in 160 one-way function computations. On average, the checksum chains require 16 one-way function computations, adding up to a total of 176 computations.
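To make the basic two-chain construction of Figure 4(a) concrete, here is a minimal Python sketch of key generation, signing, and verification for a 2-bit value. It is our own illustration, not the authors' implementation: SHA-256 stands in for both the one-way function F and the PRF (the paper instead uses a block cipher, e.g., in Matyas-Meyer-Oseas mode, for efficiency), and all names are ours.

```python
import hashlib
import os

def F(data: bytes) -> bytes:
    """One-way function; SHA-256 stands in for the paper's block-cipher hash."""
    return hashlib.sha256(data).digest()

def PRF(key: bytes, index: int) -> bytes:
    """Keyed derivation; a hash of key||index stands in for the block-cipher PRF."""
    return hashlib.sha256(key + index.to_bytes(4, "big")).digest()

def keygen():
    k_priv = os.urandom(16)
    s = [None] * 4                     # signature chain; s[3] is the secret end
    s[3] = PRF(k_priv, 0)
    for j in (2, 1, 0):                # s[j] = F(s[j+1]), so s[0] is the public-side end
        s[j] = F(s[j + 1])
    c = [None] * 4                     # checksum chain; c[0] is the secret end
    c[0] = PRF(k_priv, 1)
    for j in (1, 2, 3):                # c[j] = F(c[j-1]): indices run the opposite way
        c[j] = F(c[j - 1])
    k_pub = F(s[0] + c[3])             # public key ties both chain ends together
    return s, c, k_pub

def sign(s, c, i):
    """Sign a value 0 <= i <= 3 by releasing s_i and c_i."""
    return s[i], c[i]

def verify(k_pub, i, sig):
    s_i, c_i = sig
    x, y = s_i, c_i
    for _ in range(i):                 # F^i(s_i) should reach s_0
        x = F(x)
    for _ in range(3 - i):             # F^(3-i)(c_i) should reach c_3
        y = F(y)
    return F(x + y) == k_pub

s, c, k_pub = keygen()
assert verify(k_pub, 2, sign(s, c, 2))       # a legitimate signature verifies
assert not verify(k_pub, 3, sign(s, c, 2))   # claiming a different value fails
```

The checksum chain is what prevents a forger from reusing a released signature value for a smaller claimed value; the signature chain prevents the opposite direction.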
4.2 LEA: Authentication of Low-Entropy Messages
If messages have high entropy, the one-time signature is still quite large. For example, if messages have 80 bits or more of entropy, the signer can hash the message before signing it. Using the construction we discussed in Section 4.1, signing an 80-bit hash value would yield a 230-byte signature (or 184 bytes if we assume 8-byte hash chain values). Unfortunately, this is still too large for current sensor networks.
However, for messages with lower entropy, one-time signatures can be very effective. We thus present the LEA (Low-Entropy Authentication) protocol. The LEA protocol is based on Merkle-Winternitz one-time signatures: one-time public keys are periodically pre-distributed to the receivers, and the sender uses the corresponding private keys to sign messages.
The Merkle-Winternitz one-time signature is efficient for signing small numbers of bits. For example, assuming chains of length 16, to sign a message of n bits we would need n/4 signature chains. Thus we need to encode log_2((n/4) · 15) bits in the checksum chains, hence requiring log_2((n/4) · 15)/4 additional checksum chains. For signing 8 bits, the signature would require 2 signature chains and 2 additional checksum chains to encode the sum ranging from 0 ... 30, which would require 32 bytes assuming 8-byte values. Since communication cost is at a premium, we could instead use a single checksum chain of length 30 to encode the checksum, thus saving 8 bytes. Hence, the total size of the authentication information would be 24 bytes.
Since the size of the signature depends on the number of bits being signed, this method is preferable for situations where the message is a simple time-critical command, such as an alarm or a preset command. For example, to sign 128 different commands, we would only need one signature chain with 16 values, one signature chain with 8 values, and one checksum chain with 22 values. Assuming 8-byte values, the total signature length is 24 bytes.
In some applications it may be possible to use a lossy compression algorithm to compress and quantize the data for the signature. This would allow the message to contain uncompressed data, while the attacker would only be able to change the message to a small degree. This could be helpful for commands that set the sensitivity of a motion sensor, where the administrator is willing to allow a small error in the sensitivity that is actually received on the device.
One of the main challenges of using one-time signatures is to distribute one authentic public key for each signature to the receivers. Without an authentic public key, an attacker could inject its own public key and one-time signatures. This problem is easier than the original problem of general broadcast authentication, because the public keys can be distributed far ahead of time, at a predictable time.
There are several methods by which this may be achieved. The simplest would be to distribute a set of k public keys to each receiver at bootstrap; these keys would then be usable for the first k messages. If the lifetime of the devices is small compared to k, the devices will not have to be re-bootstrapped.
In general, the total number of messages is unknown. Thus, we design a mechanism to efficiently replenish authentic public keys after their use. We leverage the RPT protocol for this purpose. Nodes store a number of authentic public keys. The sender uses up one one-time signature (or one private key) per message it broadcasts. With this approach, all receivers can immediately authenticate the message. Periodically, the sender sends an RPT message at a regular time with new one-time public keys to replenish the used-up public keys at the receivers. Since each public key is only 10 bytes long, this is an efficient approach.
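The sizing arguments above reduce to a few lines of arithmetic. The helper below is our own sketch, not code from the paper: for chains of 16 values and 8-byte chain elements it reproduces the counts quoted above, e.g., 20 signature chains and 3 checksum chains (184 bytes) for an 80-bit hash, and 32 bytes, or 24 bytes with a single long checksum chain, for an 8-bit command.

```python
import math

def lea_signature_size(msg_bits: int, chain_values: int = 16, value_bytes: int = 8,
                       single_checksum_chain: bool = False):
    """Rough Merkle-Winternitz/LEA sizing following the counting argument above.

    Our own illustrative helper; it ignores the small optimizations mentioned
    in the text (e.g., a shorter final chain). Returns
    (signature_chains, checksum_chains, total_bytes).
    """
    bits_per_chain = int(math.log2(chain_values))
    sig_chains = math.ceil(msg_bits / bits_per_chain)

    # The checksum encodes the sum of all signature-chain values, at most
    # sig_chains * (chain_values - 1).
    max_sum = sig_chains * (chain_values - 1)
    if single_checksum_chain:
        chk_chains = 1   # one chain long enough to encode 0..max_sum
    else:
        chk_chains = math.ceil(math.ceil(math.log2(max_sum + 1)) / bits_per_chain)

    total_bytes = (sig_chains + chk_chains) * value_bytes
    return sig_chains, chk_chains, total_bytes

print(lea_signature_size(80))                              # (20, 3, 184)
print(lea_signature_size(8))                               # (2, 2, 32)
print(lea_signature_size(8, single_checksum_chain=True))   # (2, 1, 24)
```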
4.3 Chaining Merkle-Winternitz Public Keys
The above scheme illustrates an effective way to use TESLA in conjunction with Merkle-Winternitz signatures to provide fast and efficient authentication. The only drawback of using the Merkle-Winternitz one-time signature is that the public key can only be used once. Therefore, when a TESLA-authenticated message is sent at the beginning of the day authenticating k Merkle-Winternitz public keys, the sender and receiver are limited to authenticating only k messages that day. The tradeoff is that choosing a large k uses up receiver memory resources.
To circumvent this problem, rather than sending a fixed number of messages per interval, the public keys can be chained together in such a way that, if more messages are needed, they can be sent to the receiver and authenticated immediately.
In this approach, the sender generates a large number of public and private keys for one-time signatures, labeling the public keys P_0, P_1, ..., P_n. These public keys are then combined, such that verification of one signature will automatically authenticate the public key of the next signature:
V_0 = P_0
V_1 = H(P_1 || V_0)
...
V_i = H(P_i || V_{i-1})
...
V_n = H(P_n || V_{n-1})
In this approach, the sender only needs to send the value V_n authenticated with TESLA. The sender subsequently uses the private key that corresponds to the public key P_n to sign a message, and sends the value V_{n-1} along with the message. From the signature, the receiver can compute the public key P_n, and together with the value V_{n-1} the receiver can authenticate the public key and V_{n-1} based on the trusted value V_n. Now that the receiver trusts the value V_{n-1}, the next public key P_{n-1} can be authenticated in the same way.
This approach has the drawback that the message to be authenticated also needs to carry the value V_{n-1}, increasing the message size by 8-10 bytes, and that message loss prevents later messages from being authenticated. We propose to use a hybrid approach: send k public keys authenticated with RPT each day, along with one value V_n. If the sender needs to send more than k authenticated messages, it can then use the chained public keys after the first k messages.
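As a small illustration of this chaining, the sketch below (our own, not the paper's code) builds the chain V_i = H(P_i || V_{i-1}) over placeholder public keys and shows how a receiver that trusts only V_n authenticates each newly revealed key; SHA-256 stands in for H.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Placeholder one-time public keys P_0 .. P_n (in practice, Merkle-Winternitz keys).
P = [os.urandom(10) for _ in range(8)]

# Build the authentication chain V_0 = P_0, V_i = H(P_i || V_{i-1}).
V = [P[0]]
for i in range(1, len(P)):
    V.append(H(P[i] + V[i - 1]))

# The receiver is bootstrapped only with the trusted value V_n (e.g., via RPT/TESLA).
trusted = V[-1]

def accept_next(trusted_V, recovered_P, claimed_prev_V):
    """Check H(P_i || V_{i-1}) against the currently trusted V_i.

    On success, the receiver starts trusting V_{i-1} for the next message.
    recovered_P is the public key recomputed from the one-time signature itself."""
    if H(recovered_P + claimed_prev_V) == trusted_V:
        return claimed_prev_V          # new trust anchor
    raise ValueError("public key not authenticated")

trusted = accept_next(trusted, P[7], V[6])   # authenticates P_7, then trusts V_6
trusted = accept_next(trusted, P[6], V[5])   # authenticates P_6, and so on
```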
[Figure 5: Power consumed (µA) versus signature chain length (2^2 to 2^6), with curves for 32-bit, 60-bit, and 97-bit signatures. Caption: The power consumption for an MSP430 sensor node receiving and validating Merkle-Winternitz signatures for varying signature chain lengths.]
IMPLEMENTATION AND PERFORMANCE EVALUATION
Figure 5 illustrates the amount of energy required for using a Merkle-Winternitz signature for signing 32 bits, 60 bits, and 97 bits. In this example, the sensor is a 16-bit TI MSP430 processor running at 1 MHz, which can compute an 8-byte hash in approximately 5 ms using RC5. This processor draws 0.28 µA per ms of computation and 3.8 µA per byte received. Shown is the overall power consumption for different chain lengths (2^2, 2^3, 2^4, 2^5, 2^6, and 2^7). Table 2 shows the power consumption, validation times, and communication overhead for signing 60 bits with varying chain lengths.
Table 2: Efficiency for signing a 60-bit value using the Merkle-Winternitz one-time signature.
Chain length              2^2      2^3     2^4     2^5      2^6      2^7
Power consumption (µA)    1126.7   823.1   707.2   717.5    858.3    1163.2
Authentication time (ms)  332.5    442.5   680.0   1042.5   1762.5   2960.0
Overhead (bytes)          272      184     136     112      96       88
We implemented the PRF using the Helix stream cipher [9]. Unlike RC5, this cipher is not patented. It also features an efficient MAC construction, which we use in our implementation of TESLA. The PRF is computed by using the input to the PRF as the key in encryption mode and using the keystream as the output of the PRF. In this implementation, it takes about 8 ms to compute an 8-byte PRF. Since signature generation requires a comparable amount of computation to verification, generation of a 64-bit signature takes about 1.2 seconds and verification takes about 1 second in our unoptimized implementation. However, in this scheme the public keys are generated in advance, so the sender must compute twice as many hashes, because it must recompute the hashes when it wishes to actually compute a signature instead of simply generating the public key. This still makes it feasible for a sensor node to act as the base station in our implementation, but generating a large number of public keys becomes costly. The implementation is about 4k in size: 2k for the Helix assembly code and 2k for the Merkle-Winternitz code (with code for both generation and validation).
RELATED WORK
The TESLA protocol is a viable mechanism for broadcast authentication in sensor networks [31]. Unfortunately, this approach introduces an authentication delay and thus does not provide immediate authentication of messages, which is necessary in applications with real-time requirements. Moreover, the TESLA approach has some denial-of-service vulnerabilities, which we address in this paper.
Liu and Ning subsequently improved the efficiency of bootstrapping new clients, using multiple levels of one-way key chains [20]. This work also discussed the DoS attack explained in Section 3.2. Liu et al. also outline a potential approach to authenticate commitment messages with Merkle hash trees [19].
Several researchers have investigated the use of asymmetric cryptographic techniques in sensor networks. Unfortunately, the overhead is too high to warrant the use of such techniques for per-packet broadcast authentication. Such schemes were discussed in Section 2 in the context of protocols with high computation overhead.
CONCLUSION
We have studied viable and efficient solutions for broadcast authentication in sensor networks. This problem is challenging due to the highly constrained nature of the devices and the unpredictable nature of communication in many environments. Since the authentication of broadcast messages is one of the most important security properties in sensor networks, we need to study viable approaches for a variety of settings. We establish a set of properties of broadcast authentication: security against compromised nodes, low computation and communication cost, immediate authentication (with no receiver delay), authentication of unpredictable messages with high entropy, and robustness to packet loss. We present a viable protocol for each case where we relax one property, and pose the open challenge to find a protocol that satisfies all properties.
REFERENCES
[1] D. Boneh, G. Durfee, and M. Franklin. Lower bounds for multicast message authentication. In Advances in Cryptology -- EUROCRYPT '01, pages 434-450, 2001.
[2] M. Brown, D. Cheung, D. Hankerson, J. Lopez Hernandez, M. Kirkup, and A. Menezes. PGP in constrained wireless devices. In Proceedings of USENIX Security Symposium, August 2000.
[3] R. Canetti, J. Garay, G. Itkis, D. Micciancio, M. Naor, and B. Pinkas. Multicast security: A taxonomy and some efficient constructions.
In INFOCOMM'99, pages 708716,\nMarch 1999.\n[4] J. Deng, R. Han, and S. Mishra. A performance evaluation of\nintrusion-tolerant routing in wireless sensor networks. In\nProceedings of IEEE Workshop on Information Processing in\nSensor Networks (IPSN), April 2003.\n[5] J. Deng, C. Hartung, R. Han, and S. Mishra. A practical\nstudy of transitory master key establishment for wireless\nsensor networks. In Proceedings of the First IEEE/CreateNet\nConference on Security and Privacy for Emerging Areas in\nCommunication Networks (SecureComm), 2005.\n[6] Jeremy Elson, Lewis Girod, and Deborah Estrin.\nFine-grained network time synchronization using reference\nbroadcasts. In Proceedings of Symposium on Operating\nSystems Design and Implementation (OSDI), December\n2002.\n[7] Jeremy Elson and Kay Romer. Wireless sensor networks: A\nnew regime for time synchronization. In Proceedings of\nWorkshop on Hot Topics In Networks (HotNets-I), October\n2002.\n[8] S. Even, O. Goldreich, and S. Micali. On-line/off-line digital\nsignatures. In Advances in Cryptology -- CRYPTO '89,\nvolume 435, pages 263277, 1990.\n[9] Niels Ferguson, Doug Whiting, Bruce Schneier, John Kelsey,\nStefan Lucks, and Tadayoshi Kohno. Helix: Fast encryption\nand authentication in a single cryptographic primitive. In\nProceedings of the International Workshop on Fast Software\nEncryption (FSE 2003), 2003.\n[10] V. Gupta, M. Millard, S. Fung, Y. Zhu, N. Gura, H. Eberle,\nand S. C. Shantz. Sizzle: A standards-based end-to-end\nsecurity architecture for the embedded internet. In\nProceedings of the Third IEEE International Conference on\nPervasive Computing and Communication (PerCom), 2005.\n[11] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar,\nDavid E. Culler, and Kristofer S. J. Pister. System\narchitecture directions for networked sensors. In Proceedings\nof Architectural Support for Programming Languages and\nOperating Systems (ASPLOS IX), pages 93104, 2000.\n[12] Lingxuan Hu and David Evans. Secure aggregation for\nwireless networks. In Workshop on Security and Assurance\nin Ad hoc Networks, January 2003.\n[13] Yih-Chun Hu, Adrian Perrig, and David B. Johnson. Packet\nleashes: A defense against wormhole attacks in wireless\nnetworks. In Proceedings of IEEE INFOCOM, April 2003.\n[14] J. M. Kahn, R. H. Katz, and K. S. Pister. Mobile networking\nfor smart dust. In Proceedings of ACM/IEEE Conference on\nMobile Computing and Networking (MobiCom), August\n1999.\n[15] C. Karlof, N. Sastry, and D. Wagner. TinySec: A link layer\nsecurity architecture for wireless sensor networks. In ACM\nSenSys, November 2004.\n[16] Chris Karlof, Naveen Sastry, Yaping Li, Adrian Perrig, and\nJ. D. Tygar. Distillation codes and applications to dos\nresistant multicast authentication. In Proceedings of the\nSymposium on Network and Distributed Systems Security\n(NDSS), November 2004.\n[17] Chris Karlof and David Wagner. Secure routing in wireless\nsensor networks: Attacks and countermeasures. In\nProceedings of First IEEE International Workshop on Sensor\nNetwork Protocols and Applications, May 2003.\n[18] A. Lenstra and E. Verheul. Selecting cryptographic key sizes.\nJournal of Cryptology, 14(4):255293, 2001.\n[19] D. Liu, P. Ning, S. Zhu, and S. Jajodia. Practical broadcast\nauthentication in sensor networks. In Proceedings of The 2nd\nAnnual International Conference on Mobile and Ubiquitous\nSystems: Networking and Services\n, November 2005.\n[20] Donggang Liu and Peng Ning. 
Efficient distribution of key\nchain commitments for broadcast authentication in\n155\ndistributed sensor networks. In Proceedings of Network and\nDistributed System Security Symposium (NDSS), pages\n263276, February 2003.\n[21] David Malan, Matt Welsh, and Michael Smith. A public-key\ninfrastructure for key distribution in TinyOS based on elliptic\ncurve cryptography. In Proceedings of IEEE International\nConference on Sensor and Ad hoc Communications and\nNetworks (SECON), October 2004.\n[22] S. Matyas, C. Meyer, and J. Oseas. Generating strong\none-way functions with cryptographic algorithm. IBM\nTechnical Disclosure Bulletin, 27:56585659, 1985.\n[23] A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of\nApplied Cryptography. CRC Press, 1997.\n[24] R. Merkle. Protocols for public key cryptosystems. In\nProceedings of the IEEE Symposium on Research in Security\nand Privacy, pages 122134, April 1980.\n[25] R. Merkle. A digital signature based on a conventional\nencryption function. In Advances in Cryptology -- CRYPTO\n'87, pages 369378, 1988.\n[26] R. Merkle. A certified digital signature. In Advances in\nCryptology -- CRYPTO '89, pages 218238, 1990.\n[27] National Institute of Standards and Technology (NIST),\nComputer Systems Laboratory. Secure Hash Standard.\nFederal Information Processing Standards Publication (FIPS\nPUB) 180-2, February 2004.\n[28] A. Perrig. The BiBa one-time signature and broadcast\nauthentication protocol. In Proceedings of ACM Conference\non Computer and Communications Security (CCS), pages\n2837, November 2001.\n[29] A. Perrig, R. Canetti, J. D. Tygar, and D. Song. Efficient\nauthentication and signature of multicast streams over lossy\nchannels. In Proceedings of the IEEE Symposium on\nResearch in Security and Privacy, pages 5673, May 2000.\n[30] A. Perrig, R. Canetti, J. D. Tygar, and D. Song. The TESLA\nbroadcast authentication protocol. RSA CryptoBytes,\n5(Summer), 2002.\n[31] Adrian Perrig, Robert Szewczyk, Victor Wen, David Culler,\nand J. D. Tygar. SPINS: Security protocols for sensor\nnetworks. In Proceedings of ACM Conference on Mobile\nComputing and Networks (MobiCom), pages 189199, 2001.\n[32] Bartosz Przydatek, Dawn Song, and Adrian Perrig. SIA:\nSecure information aggregation in sensor networks. In\nProceedings of the First ACM International Conference on\nEmbedded Networked Sensor Systems (SenSys 2003), pages\n255265, November 2003.\n[33] Leonid Reyzin and Natan Reyzin. Better than BiBa: Short\none-time signatures with fast signing and verifying. In\nProceedings of Conference on Information Security and\nPrivacy (ACISP), July 2002.\n[34] R. Rivest, A. Shamir, and L. Adleman. A method for\nobtaining digital signatures and public-key cryptosystems.\nCommunications of the ACM, 21(2):120126, February\n1978.\n[35] P. Rohatgi. A compact and fast hybrid signature scheme for\nmulticast packet. In Proceedings of the 6th ACM Conference\non Computer and Communications Security, pages 93100.\nACM Press, November 1999.\n[36] F. Ye, H. Luo, S. Lu, and L. Zhang. Statistical en-route\nfiltering of injected false data in sensor networks. In\nProceedings of IEEE INFOCOM, March 2004.\n[37] S. Zhu, S. Setia, S. Jajodia, and P. Ning. An interleaved\nhop-by-hop authentication scheme for filtering false data in\nsensor networks. 
In Proceedings of IEEE Symposium on\nSecurity and Privacy, pages 259271, May 2004.\n156\n", "keywords": "Sensor Network;Broadcast Authentication;Taxonomy"} {"name": "178", "title": "Significance of gene ranking for classification of microarray samples", "abstract": "Many methods for classification and gene selection with microarray data have been developed. These methods usually give a ranking of genes. Evaluating the statistical significance of the gene ranking is important for understanding the results and for further biological investigations, but this question has not been well addressed for machine learning methods in existing works. Here, we address this problem by formulating it in the framework of hypothesis testing and propose a solution based on resampling. The proposed r-test methods convert gene ranking results into position p-values to evaluate the significance of genes. The methods are tested on three real microarray data sets and three simulation data sets with support vector machines as the method of classification and gene selection. The obtained position p-values help to determine the number of genes to be selected and enable scientists to analyze selection results by sophisticated multivariate methods under the same statistical inference paradigm as for simple hypothesis testing methods.", "fulltext": "INTRODUCTION\nAN important application of DNA microarray technologies\nin functional genomics is to classify samples\naccording to their gene expression profiles, e.g., to classify\ncancer versus normal samples or to classify different types\nor subtypes of cancer. Selecting genes that are informative\nfor the classification is one key issue for understanding the\nbiology behind the classification and an important step\ntoward discovering those genes responsible for the distinction\n. For this purpose, researchers have applied a number of\ntest statistics or discriminant criteria to find genes that are\ndifferentially expressed between the investigated classes\n[1], [2], [3], [4], [5], [6], [7]. This category of gene selection\nmethods is usually referred to as the filtering method since\nthe gene selection step usually plays the role of filtering the\ngenes before doing classification with some other methods.\nAnother category of methods is the so-called wrapper\nmethods, which use the classification performance itself as\nthe criterion for selecting the genes and genes are usually\nselected in a recursive fashion [8], [9], [10], [11], [12]. A\nrepresentative method of this category is SVM-RFE based\non support vector machines (SVM), which uses linear SVM\nto classify the samples and ranks the contribution of the\ngenes in the classifier by their squared weights [10].\nAll these selection methods produce rankings of the\ngenes. When a test statistic, such as the t-test, F-test, or\nbootstrap test, is used as the criterion, the ranking is\nattached by p-values derived from the null distribution of\nthe test statistic, which reflects the probability of a gene\nshowing the observed difference between the classes simply\ndue to chance. Such p-values give biologists a clear\nunderstanding of the information that the genes probably\ncontain. 
The availability of the p-value makes it possible to\ninvestigate the microarray data under the solid framework\nof statistical inference and many theoretical works have\nbeen built based on the extension of the concept of p-value,\nsuch as the false discovery rate (FDR) study [13].\nExisting gene selection methods that come with p-values\nare of the filtering category and are all univariate methods.\nTo consider possible combinatorial effects of genes, most\nwrapper methods adopt more sophisticated multivariate\nmachine learning strategies such as SVMs and neural\nnetworks. These have been shown in many experiments\nto be more powerful in terms of classification accuracy.\nHowever, for gene selection, the gene rankings produced\nwith these methods do not come with a measure of\nstatistical significance. The ranking is only a relative order\nof genes according to their relevance to the classifier. There\nis no clear evaluation of a gene's contribution to the\nclassification. For example, if a gene is ranked 50th\naccording to its weight in the SVM classifier, it is only\nsafe to say that this gene is perhaps more informative\nthan the gene ranked at 51st. However, there is no way to\ndescribe how significant it is and there is no ground to\ncompare the information it contains with a gene also\nranked as 50th by the same method in another experiment\n. This nature of relative ranking makes it hard to\ninterpret and further explore the gene selection results\nachieved with such advanced machine learning methods.\nFor example, it is usually difficult to decide on the proper\nnumber of genes to be selected in a specific study with\nsuch machine learning methods. Most existing works\nusually select a subgroup of genes with some heuristi-cally\ndecided numbers or thresholds [6], [8], [10]. The\nadvanced estimation techniques, such as FDR, based on\nsignificance measures do not apply for such methods.\nEvaluating the statistical significance of the detected\nsignal is the central idea in the paradigm of statistical\ninference from experimental data. There should be an\nequivalent study on those machine-learning-based multivariate\ngene selection methods which produce ranks\naccording to their own criteria. Strategies such as permutation\ncan be utilized to assess the significance of the\nclassification accuracy, but they do not measure the\nsignificance of the selected genes directly. Surprisingly, this\nquestion has not been addressed by the statistics or\nbioinformatics community in existing literature. We therefore\npropose that the question be asked in this way: For an\nobserved ranking of genes by a certain method, what is the\nprobability that a gene is ranked at or above the observed\nposition due to chance (by the same method) if the gene is,\nin fact, not informative to the classification? (\"Being\ninformative\" is in the sense of the criteria defined or\nimplied by the classification and ranking method. It may\nhave different meanings for different methods.) We call this\nproblem the significance of gene ranking or feature ranking.\nWe raise this problem in this paper and describe our\nstrategy toward a solution. The problem is discussed in the\ncontext of microarray classification of cancer samples, but\nthe philosophy and methodology is not restricted to this\nscenario.\nTHE SIGNIFICANCE OF RANKING PROBLEM\nSuppose a\nmicroarray\ndata\nset\ncontains\nm\ncases\nX\nfx\ni\n; i\n1;\n; m\ng. 
Each case is characterized by a vector of the expression values of n genes, x_i = (x_{i1}, x_{i2}, ..., x_{in})^T ∈ R^n, i = 1, ..., m. Each gene is a vector of its expression values across the cases, g_j = (x_{1j}, x_{2j}, ..., x_{mj})^T, and we denote the set of all genes as G = {g_j, j = 1, ..., n}. Each case has a label y_i ∈ {-1, 1}, i = 1, ..., m, indicating the class it belongs to among the studied two classes, e.g., normal versus cancer, or two subtypes of a cancer, etc. Among the n genes, usually some are informative to the classification and some are not, but we do not know which genes are informative and which are not. For the convenience of description, we denote the set of informative genes as I_G and that of the uninformative genes as U_G. To simplify the problem, we assume that
$I_G \cap U_G = \emptyset \quad \text{and} \quad I_G \cup U_G = G. \qquad (1)$
The goal is to build a classifier that can predict the classes ŷ_i of the cases from x_i and, at the same time, to identify the genes that most likely belong to I_G. The former task is called classification and the latter is called gene selection. In the current study, we assume that there is already a ranking method RM which produces a ranking position for each gene according to some criterion assessing the gene's relevance to the classification:
$r_j = \mathrm{rank}(g_j \mid \{x_i, y_i,\ i = 1, \ldots, m\}), \qquad j = 1, \ldots, n, \qquad (2)$
and we do not distinguish the specific types of the RM. The ranking is obtained based on the samples, thus r_j is a random variable. The significance-of-ranking problem is to calculate the following probability:
$p(r_j) \triangleq P(\mathrm{rank}(g_j) \le r_j \mid g_j \in U_G), \qquad (3)$
i.e., given that a gene is uninformative to the classification (according to RM's criterion), what is the probability that it is ranked at or above the observed ranking position by the ranking method? We call this probability the p-value of a gene's ranking position or, simply, the position p-value.
This significance-of-ranking problem is distinct from existing statistics for testing differentially expressed genes in several aspects. It applies to more complicated multivariate classification and gene selection methods. Even when it is applied to gene ranking methods based on univariate hypothesis tests like the t-test, the position p-value differs from the t-test p-value by definition. The t-test p-value of a gene is calculated from the expression values of this gene in the two sample sets by comparing with the null distribution model assumed when the gene is not differentially expressed in the two classes. The position p-value of a single gene, however, is defined on its context, in the sense that its value depends not only on the expression of this gene in the samples, but also on the other genes in the same data set. A gene with the same expression values may have different position p-values in different data sets. The null distributions of ranks of uninformative genes are different in different data sets and, therefore, the foremost challenge in solving the problem is that the null distribution has to be estimated from the specific data set under investigation.
THE R-TEST SCHEME
The significance-of-ranking problem is formulated as a hypothesis testing problem.
The null hypothesis is that the\ngene is not informative or g\nj\n2 U\nG\n, the alternative hypothesis\nis g\nj\n2 I\nG\n(the gene is informative) since we have\nassumption (1) and the statistic to be used to test the\nhypothesis is the ranking position. As in standard hypothesis\ntesting, the key to solving the problem is to obtain the\ndistribution of the statistic under the null hypothesis, i.e.,\nthe distribution of the ranks of uninformative genes:\nP r\njg 2 U\nG\n\n:\n4\nFor the extreme case when I\nG\n\n(all the genes are\nuninformative) and the ranking method is not biased, it is\nobvious that the null distribution is uniform. In a real\nmicroarray data set, however, usually some genes are\ninformative and some are not, thus the uniform null\ndistribution is not applicable. The null rank distribution in\na practical investigation depends on many factors, including\nthe separability of the two classes, the underlying\nnumber of informative genes, the power of the ranking\nmethod, the sample size, etc. The characteristics of these\nfactors are not well understood in either statistics or biology\nand, therefore, we have to estimate an empirical null\ndistribution from the data set itself.\nWe propose to tackle this problem in two steps. First, we\nidentify a set of putative uninformative genes (PUGs) which\nare a subset of U\nG\n. This is possible in practice because,\nalthough we do not know U\nG\n, discovering a number of genes\nthat are irrelevant to the classification is usually not hard in\nmost microarray data sets. We denote the identified subset as\nZHANG ET AL.: SIGNIFICANCE OF GENE RANKING FOR CLASSIFICATION OF MICROARRAY SAMPLES\n313\nU\n0G\n. The next step is to estimate the null distribution of ranks\nwith the ranking positions of these PUGs.\nFrom the original data set, we resample L new data sets\nand apply the ranking method on each of them, producing,\nfor each gene L, ranks r\nl\nj\n; l\n1;\n; L\n. In our implementa-tion\n, we randomly resample half of the cases in the original\ndata set each time. Other resampling schemes such as\nbootstrapping can also be used to obtain similar results\naccording to our experiments (data not shown). Since,\nusually, the size of U\nG\nis much larger than that of I\nG\n(i.e.,\nmost genes are uninformative), if a gene tends to always be\nranked at the bottom in the L rankings, it is very likely that\nthe gene is an uninformative one. Thus, we define r\nj\nas the\naverage position of gene j in the L rankings,\nr\nj\n1\nL\nX\nL\nl\n1\nr\nl\nj\n; j\n1;\n; n\n5\nand select the bottom k genes with the largest r\nj\nas the\nPUGs to form U\n0G\n, where k is a preset number. We rewrite\nU\n0G\nas U\n0k\nG\nwhen we need to emphasize the role of k in this\nprocedure. We assume that U\n0k\nG\nis a random sample of U\nG\nand r\nl\nj\n, l 1;\n; L\nfor g\nj\n2 U\n0k\nG\nis a random sample from the\nunderlying null distribution of the ranks of uninformative\ngenes. Thus, we have k L observations of the null\ndistribution of ranks from which we estimate the null\ndistribution using a histogram. More sophisticated non-parametric\nmethods can be adopted to fit the distribution if\nnecessary. 
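As a concrete illustration of the procedure just described, the following Python sketch resamples half of the cases L times, ranks the genes in each resampled set, takes the bottom k genes by average rank as the PUGs, and pools their resampled ranks as the empirical null; the observed whole-data ranks are then compared against this null to obtain position p-values, as formalized in the next paragraphs. This is our own simplified sketch, not the authors' code: the values of L and k and the ranking machine (scikit-learn's linear SVM with the squared-weight criterion the paper adopts later in Section 5) are placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

def rank_genes(X, y):
    """Rank genes 1..n (1 = most informative) by squared linear-SVM weights.

    scikit-learn's LinearSVC stands in for the paper's linear SVM."""
    w = LinearSVC(C=1.0, max_iter=10000).fit(X, y).coef_.ravel()
    order = np.argsort(-(w ** 2))            # gene indices, best first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(w) + 1)
    return ranks

def pr_test(X, y, L=100, k=50, seed=0):
    """Basic r-test (pr-test): resample, rank, pick bottom-k PUGs, build the null."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    all_ranks = np.empty((L, n), dtype=int)
    for l in range(L):
        # Resample half of the cases (no stratification here; real data may
        # require both classes to be present in every resample).
        idx = rng.choice(m, size=m // 2, replace=False)
        all_ranks[l] = rank_genes(X[idx], y[idx])

    avg_rank = all_ranks.mean(axis=0)         # average position over the L rankings
    pugs = np.argsort(avg_rank)[-k:]          # bottom k genes = putative uninformative
    null_ranks = all_ranks[:, pugs].ravel()   # k*L observations of the null

    observed = rank_genes(X, y)               # ranks on the whole data set
    # Position p-value: fraction of null ranks at or above (numerically <=) each
    # observed rank.
    p = np.array([(null_ranks <= r).mean() for r in observed])
    return observed, avg_rank, all_ranks, p
```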
We denote the estimated null distribution as\n^\nP r\njg 2 U\nG\n\nP\nhistogram\nr\nl\nj\njg\nj\n2 U\n0k\nG\n:\n6\nWith this estimated null distribution, the calculation of\nthe position p-value is straightforward: For gene i with\nranking position r\ni\n,\n^\np\nr\ni\n^\nP\nr\nr\ni\njg 2 U\nG\nP\nhistogram\nr\nl\nj\nr\ni\njg\nj\n2 U\n0k\nG\n: 7\nApplying this on all the genes, we convert the ranking list to\na list of position p-values reflecting the significance of the\ngenes' being informative to the classification.\nThis whole procedure for estimating the p-value of a\nranking is illustrated in Fig. 1. We name this scheme the\nr-test and call the position p-value thus calculated the r-test\np-value.\nCOMPENSATION FOR BIAS IN THE ESTIMATED PUGs\nOne important problem with the r-test scheme is the\nselection of the PUGs. Ideally, the ranks used to select\nPUGs and the ranks used to estimate null distribution\nshould be independent. However, this is impractical in that\n314\nIEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS,\nVOL. 3,\nNO. 3,\nJULY-SEPTEMBER 2006\nFig. 1. The diagram showing the principle of r-test (pr-test). (a) A number (L) of new data sets is resampled from the original data set. A ranking is\ngenerated for each new data set with the ranking method, resulting in a total of L rankings. (b) The genes are ordered by their average positions of\nthe L rankings. The horizontal axis is genes by this order and the vertical axis is ranking position in the resample experiments. For each gene, its\nranking positions in the L experiments are drawn in a box plot, with a short dash in the middle showing the median. (c) From the bottom (rightmost) of\nthe ordered gene list, k genes are selected as putative uninformative genes or PUGs. The box-plots of the ranks of the k PUGs are illustrated in this\nenlarged image. The null distribution of ranks of uninformative genes is to be estimated from these ranks. (d) An example null distribution estimated\nfrom PUGs. For each gene on the microarray, its actual ranking is compared with the null distribution to calculate the position p-value of the gene\nbeing noninformative. For mr-test and tr-test, the average position in the L rankings is used in the calculation of the p*-value. In tr-test, the PUGs are\nnot selected from the bottom of the ranking, but rather from the genes with the largest t-test p-values.\nthere is actually only one data set available. In our strategy,\nthe same ranks are used to estimate both PUGs and the\ndistribution of their ranks. This is an unplanned test in the\nsense that the PUGs are defined after the ranks are observed\n[14]. The PUGs in U\n0k\nG\nare not an unbiased estimate of U\nG\n. In\nthe extreme case when k is small, uninformative genes that\nare ranked higher are underrepresented and the ranks of\nU\n0k\nG\nmight represent only a tail of the ranks of U\nG\non the\nright. If this happens, it will cause an overoptimistic\nestimation of the r-test p-values and result in more genes\nbeing claimed significant. Therefore, we propose two\nmodified strategies to compensate for the possible bias.\n4.1\nModified r-Test with Average Ranks\nIn (7), the position p-value is calculated by comparing the\nrank r\ni\nof gene g\ni\nobtained from the whole data set with the\nestimated null distribution. Intuitively, when the sample size\nis small, one single ranking based on a small sample set can\nhave a large variance, especially when all or most of the genes\nare uninformative. 
We propose replacing the rank r_i by the average rank r̄_i defined in (5), i.e., using the average position of gene g_i over the L resampling experiments as the estimate of the true rank, and calculating the position p-value with this estimated rank rather than with the single observed rank:
$p^*(\bar{r}_i) = \hat{P}(r \le \bar{r}_i \mid g \in U_G) = P_{\mathrm{histogram}}(r^l_j \le \bar{r}_i \mid g_j \in U'^k_G). \qquad (8)$
The estimated null distribution is the distribution of single ranks of the putative uninformative genes, but the r̄_i compared against it is an averaged rank, so (8) is no longer a p-value in the strict sense. Therefore, we name it the p*-value instead and, for convenience, call this modified r-test the mr-test. Ideally, if a gene is informative to the classification and the ranking method can consistently rank the gene according to this information both on the whole data set and on the resampled subsets, we have
$\bar{r}_i = r_i \quad \text{for } g_i \in I_G, \qquad (9)$
in which case the p*-value will be equivalent to the original r-test p-value for these genes. In practice, when the sample size is small and the signal in some informative genes is not so strong, we always have r̄_i ≥ r_i when r_i is small; therefore, the estimated ranks move toward the right of the rank distribution compared with the single-run ranks, which, in effect, compensates for the bias in the estimated null distribution. (For the genes ranked in the lower half of the list, the averaged rank will move leftward, but these genes are not of interest in this study, since we assume only a minority of the genes can be informative.)
4.2 Independent Selection of PUGs
The ultimate reason that may cause biased estimation of the null distribution is that the PUGs in the above r-test scheme are estimated from the same ranking information as that used for calculating the test statistics. A solution is to select a group of PUGs that is an unbiased sample from U_G. This is a big challenge, because estimating the rank position distribution of U_G is the question itself.
When the ranking method RM is a multivariate one, such as SVM-based methods, the ranking of the genes will not directly depend on the differences of single genes between the classes. We can therefore use a univariate statistic such as the t-test to select a group of nondifferentially expressed genes as the PUGs, since these genes have a high probability of not being informative, as they are basically the same in the two classes. This selection will be less correlated with the ranking by RM. Applying a threshold θ on the t-test p-value p_t, we select the PUG set U'_G as
$U'_G \triangleq \{ g_j \mid g_j \in G,\ p_t(g_j) \ge \theta \} \qquad (10)$
and estimate the null position distribution according to the ranking of the U'_G genes by RM in the resampled data:
$\hat{P}_t(r \mid g \in U_G) = P_{\mathrm{histogram}}(r^l_j \mid g_j \in U'_G). \qquad (11)$
The position p*-value of a gene ranked on average at r̄_i is calculated as
$\hat{p}^*(\bar{r}_i) = \hat{P}_t(r \le \bar{r}_i \mid g \in U_G) = P_{\mathrm{histogram}}(r^l_j \le \bar{r}_i \mid g_j \in U'_G). \qquad (12)$
For the convenience of discussion, we call this strategy the tr-test and call the primary r-test defined by (7) the pr-test. We view the pr-test, mr-test, and tr-test as three specific methods under the general r-test scheme.
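Continuing the sketch from Section 3, the two variants differ only in which rank is compared to the null and in how the PUGs are chosen. Below is a minimal illustration reusing the all_ranks matrix and avg_rank vector returned by the pr_test sketch above. It is our own code, not the authors': the labels are assumed to be ±1, scipy's two-sample t-test stands in for the t-test of (10), and the threshold value is a placeholder.

```python
import numpy as np
from scipy.stats import ttest_ind

def mr_test_pvalues(all_ranks, avg_rank, k=50):
    """mr-test, eq. (8): compare average ranks to the bottom-k null histogram."""
    pugs = np.argsort(avg_rank)[-k:]
    null_ranks = all_ranks[:, pugs].ravel()
    return np.array([(null_ranks <= r).mean() for r in avg_rank])

def tr_test_pvalues(X, y, all_ranks, avg_rank, theta=0.5):
    """tr-test, eqs. (10)-(12): PUGs are genes with t-test p-value >= theta."""
    _, t_p = ttest_ind(X[y == 1], X[y == -1], axis=0)
    pugs = np.where(t_p >= theta)[0]          # genes showing no univariate difference
    null_ranks = all_ranks[:, pugs].ravel()
    return np.array([(null_ranks <= r).mean() for r in avg_rank])
```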
It should be noted that if the ranking produced by RM is highly correlated with the t-test ranking, the result of the tr-test will be close to that of the original pr-test. On the other hand, since insignificant genes evaluated individually may not necessarily be uninformative when combined with certain other genes, the PUGs selected by (10) may include informative genes for RM. Therefore, the estimated null distribution may be biased toward the left end in some situations, making the results overconservative. However, in the experiments described below, it is observed that the tr-test results are not sensitive to changes in the p-value cut-offs used for selecting PUGs in (10), which is an indication that the method is not very biased.
EXPERIMENTS WITH SVM ON REAL AND SIMULATED DATA
5.1 r-Test with SVM Gene Ranking
Due to the good generalization ability of support vector machines (SVM) [15], they are regarded as one of the best multivariate algorithms for classifying microarray data [9], [10], [16]. In the experiments for the r-test in this work, we adopted the linear SVM as the ranking machine RM. The linear SVM is trained with all genes in the data set, producing the discriminant function
$f(\mathbf{x}) = \mathbf{w} \cdot \mathbf{x} + b, \qquad (13)$
where $\mathbf{w} = \sum_{i=1}^{m} \alpha_i y_i \mathbf{x}_i$ and the $\alpha_i$ are the solutions of the following quadratic programming problem:
$L_p = \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{i=1}^{m} \alpha_i y_i (\mathbf{x}_i \cdot \mathbf{w} + b) + \sum_{i=1}^{m} \alpha_i. \qquad (14)$
Following [10], the contribution of each gene in the classifier can be evaluated by
$\Delta L_p = \frac{1}{2} \frac{\partial^2 L_p}{\partial w_i^2} (\Delta w_i)^2 = (w_i)^2, \qquad (15)$
and, thus, the genes are ranked by (w_i)^2. There are other ways of assessing the relative contribution of the genes in an SVM classifier [17], but, since the scope of this paper is not to discuss the ranking method, we adopt the ranking criterion given in (15) here. The ranking only reflects the relative importance of the genes in the classifier, but cannot reveal how important each gene is. The r-test converts the ranking to position p-values (or p*-values) to evaluate the significance.
5.2 Data Sets
Experiments were done on six microarray data sets: three real data sets and three simulated data sets. The leukemia data set [1] contains the expression of 7,129 genes (probe sets) in 72 cases, 47 of which are of the ALL class and 25 of the AML class. The colon cancer data set [18] contains 2,000 genes in 62 cases, among which 40 are from colon cancers and 22 from normal tissues. These two data sets have been widely used as benchmark sets in many methodology studies. Another data set used in this study is a breast cancer data set [19] containing 12,625 genes (probe sets) in 85 cases. The data set is used to study the classification of two subclasses of breast cancer. Forty-two of the cases are of class 1 and 43 are of class 2.
Simulated data sets were generated to investigate the properties of the methods in different situations. The first case is an extreme situation where none of the genes is informative. The simulated data set contains 1,000 genes and 100 cases. The expression values of the genes are independently generated from normal distributions with randomized means and variances in a given range. The 100 cases are generated with the same model, but are assigned arbitrarily to two fake classes (50 cases in each class). So, the two classes are, in fact, not separable and all the genes are uninformative.
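For illustration, the squared-weight ranking of (13)-(15) can be applied to a no-signal simulation in the style of the data set just described (named the "fake-class" data set below). This is our own sketch, using scikit-learn's linear SVM as a stand-in for the paper's SVM implementation and placeholder simulation parameters.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# No-signal data in the spirit of the simulation above: 1,000 genes, 100 cases,
# randomized per-gene means, arbitrary labels (exact parameters are placeholders).
n_genes, n_cases = 1000, 100
means = rng.uniform(-1, 1, size=n_genes)
X = rng.normal(means, 1.0, size=(n_cases, n_genes))
y = np.array([1] * 50 + [-1] * 50)

# Rank genes by their squared weights in the linear SVM classifier, as in eq. (15).
svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
scores = svm.coef_.ravel() ** 2
ranking = np.argsort(-scores)          # gene indices, highest-weight genes first

# Every gene still receives a rank; only the r-test p*-values can tell us that
# none of these top-ranked genes is actually significant.
print(ranking[:10])
```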
We refer to this data set as the\n\"fake-class\" data set in the following description.\nEach of the other two simulated data sets also contains\n1,000 genes and 100 cases of two classes (50 cases in each\nclass). In one data set (we call it \"simu-1\"), 700 of the genes\nfollow N(0, 1) for both classes and the 300 genes follow\nN(0.25, 1) for class 1 and N(-0.25, 1) for class 2. In the other\ndata set (we call it \"simu-2\"), 700 of the genes follow N(0, 1)\nfor both classes and the 300 genes follow N(0.5, 1) for class 1\nand N(-0.5, 1) for class 2. With these two simulated data\nsets, we hope to mimic situations where there are weak and\nstrong classification signals in the data.\nAll the data sets except simu-1 and simu-2 were\nstandardized to 0-mean and standard deviation 1 first\nacross the cases and then across the genes. This is to prevent\npossible bias in the ranking affected by the scaling. In\npractical investigations, this step might not be needed or\nmight need to be done in some other way according to the\nspecific situation of the data and the specific ranking\nmethods to be adopted.\nThe six data sets used in our experiments represent\ndifferent levels of separability of the investigated classes.\nFor the leukemia data set, almost perfect classification\naccuracy has been achieved [1], [9], [10], so it represents a\nrelatively easy classification task. For the colon cancer data\nset, the samples can still be well separated, but with some\nerrors [10], [18]. The two subclasses studied in the breast\ncancer data set are hardly separable as observed in this data\nset, but it is believed that there could be some degree of\nseparability [20], [21]. The fake-class simulation represents a\nsituation where the two classes are completely nonseparable\nand the simu-1 and simu-2 simulation represents an\nideal situation where separation is defined on a subset of\nthe genes and the uninformative genes are i.i.d. To check\nthe classification accuracy that can be achieved on these\ndata sets, we randomly split them into independent training\nand test sets and applied linear SVM on them. These\nexperiments were done 200 times for each data set and the\nclassification accuracy obtained at different gene selection\nlevels is summarized in Table 1. It can be seen that the\naccuracies are consistent with the reports in the literature\nand with the design of the simulations. (Note that the error\nrates reported here are independent test results based on\nonly half of the samples for training, so they are larger than\nthe cross-validation errors reported elsewhere. The scope of\nthis paper is not to improve or discuss classification\naccuracy.)\n316\nIEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS,\nVOL. 3,\nNO. 3,\nJULY-SEPTEMBER 2006\nTABLE 1\nSeparability of the Classes of the Six Data Sets\n5.3\nNumber of Significant Genes According to the\nmr-Test and tr-Test\nWe systematically experimented with the SVM-based\npr-test, mr-test, and tr-test methods on the six data sets\nand studied the number of genes claimed as significantly\ninformative with each method at various significant levels.\nThe results of the pr-test are affected by different choices of\nthe number k of selected PUGs (data not shown), indicating\nthat the pr-test can be very biased unless we know the\naccurate number of informative genes. 
Therefore, we focus\non the mr-test and tr-test in the following discussion.\nTable 2 shows the number of significant genes according\nto the mr-test at different p*-value levels, with different\nchoices of ks on the six data sets. Comparing with the pr-test\nresults, the mr-test results are less sensitive to changes in\nthe number k. This is especially true when there are ideal\nclassification signals, as in the simu-2 data, where we can\nsee a more than 10-fold change of k causes only little\nvariance in the estimated gene numbers. With p*-value\nlevels from 0.001 to 0.1, the estimated significant gene\nnumbers are all around the correct number (300). The\nclaimed significant genes are all those true informative\ngenes in the model when the estimated genes are less than\n300. For the situations where the number of estimated\ninformative genes is larger than 300, all the true informative\ngenes are discovered. When the data are less ideal, we see\nthat the results are stable within a smaller variation of k.\nMore experiment results with larger variations in the choice\nof k are provided in the supplemental material, which can\nbe found on the Computer Society Digital Library at http://\ncomputer.org/tcbb/archives.htm. From Table 2, it can also\nbe observed that the number of significant genes is not\ndirectly correlated with the classification accuracy. For\nexample, the breast cancer data and fake-class data both\nlook nonseparable according to the classification errors\n(Table 1), However, for the breast cancer data, more than\n200 genes are identified as significantly informative among\nthe 12,625 genes (\n1:6%\n) at the p*-value = 0.01 level, but,\nfor the fake-class data, this number is only about 0.4 percent\nof the 1,000 genes.\nResults of the tr-test with different t-test p-value cut-offs\nare shown in Table 3. It can be seen that different cut-offs\nresult in different numbers of PUGs, but the variation in\nestimated position p*-values due to PUG number difference\nis even smaller than in the mr-test. This implies that the\ntr-test results are not biased by the selection of PUGs since,\nif the PUG selection was biased, different numbers of PUGs\nat t-test p-value cut-offs would have caused different\ndegrees of bias and the results would have varied greatly.\nComparing between Table 2 and Table 3, as well as the\nresults in the supplemental material, which can be found on\nthe Computer Society Digital Library at http://computer.\norg/tcbb/archives.htm, we observe that, for the mr-test,\nalthough there is a range of k for each data set in which the\nresults are not very sensitive to variations of k, this range\ncan be different with different data sets. On the other hand,\nfor the tr-test, within the same ranges of cut-off t-test p-values\n, results on all the data sets show good consistency\nwith regard to variations in the cut-off value. This makes\nthe tr-test more applicable since users do not need to tune\nthe parameter specifically to each data set.\nComparing the number of genes selected by the tr-test\nand mr-test (Table 2 and Table 3), it is obvious that the\ntr-test is more stringent and selects much fewer genes than\nZHANG ET AL.: SIGNIFICANCE OF GENE RANKING FOR CLASSIFICATION OF MICROARRAY SAMPLES\n317\nTABLE 2\nThe Number of Genes Selected at Various r-Test p*-Value Levels with SVM\nthe mr-test on the real data sets. The differences are smaller\non the simulated data. 
Similarly to the results of the mr-test,\nalmost all the informative genes in simu-2 data can be\nrecovered at p*-value levels from 0.001 to 0.05 and there are\nonly a very few false-positive genes (e.g., the 307 genes\nselected at p*-value = 0.05 contains all the 300 true-positive\ngenes and seven false-positive genes). This shows that the\nSVM method is good in both sensitivity and specificity in\nselecting the true informative genes for such ideal case, and\nboth of the two r-test methods can detect the correct\nnumber of informative genes at a wide range of significance\nlevels. For simu-1 data, not all the informative genes can be\nrecovered in the experiments. This reflects the fact of the\nlarge overlap of the two distributions in this weak model.\nMany of original 300 \"informative\" genes are actually not\nstatistically significant in the contexts of both univariate\nmethods and multivariate methods.\nFor the real data sets, there are no known answers for the\n\"true\" number of informative genes. The mr-test uses the tail\nin the ranking list to estimate the null distribution for\nassessing the significance of the genes on the top of the list,\ntherefore there is a higher possibility of the p*-values being\nunderestimated, although this has been partially compen-sated\nby using the average rank positions. Thus, the number\nof genes being claimed significant by mr-test might be\noverestimated. In this sense, the tr-test scheme provides a\nmore unbiased estimation of the null distribution, which is\nsupported by the decreased sensitivity to PUG numbers.\nWith the tr-test, at the 0.05 p*-value level, we get about\n410 significant genes from the 7,129 genes (5.75 percent) in the\nleukemia data. On the other two real data sets, the results tend\nto be too conservative: about 13 out of 2,000 genes\n(0.65 percent) in the colon cancer data and 50 out of\n12,625 genes (0.4 percent) in the breast cancer data are\nclaimed as significant at this level.\nIt should be noted that the PUGs selected according to\nthe t-test may contain informative genes for SVM, which\nconsiders the combined effects of genes. This will cause the\nnumber of genes called significant by the tr-test to be\nunderestimated. This is especially true for data sets in\nwhich the major classification signal exists in the combinatorial\neffects of genes instead of differences in single genes.\nThe correct answer may be somewhere between the two\nestimations of the tr-test and mr-test. When the signal is\nstrong, the two estimations will be close as we see in the\nsimu-2 data. In practice, one can choose which one to use\naccording to whether the purpose is to discover more\npossibly informative genes or to discover a more manageable\nset of significant genes for follow-up investigations.\nDISCUSSION\nStatistical hypothesis testing is a fundamental framework for\nscientific inference from observations. Unfortunately, existing\nhypothesis testing methods are not sufficient to handle\nhigh-dimensional multivariate analysis problems arising\nfrom current high-throughput genomic and proteomic\nstudies. Many new data mining techniques have been\ndeveloped both in statistics and in the machine learning\nfield. These methods are powerful in analyzing complicated\nhigh-dimensional data and helped greatly in functional\ngenomics and proteomics studies. However, the analysis of\nthe statistical significance of data mining results has not been\npaid enough attention. 
One reason might be that many\nmethods are rooted in techniques aimed at solving problems\nin engineering and technological applications rather than in\nscientific discoveries. As an example, many machine-learning\n-based gene selection and classification methods may\nachieve very good performance in solving the specific\nclassification problems, but the results are usually of a\n318\nIEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS,\nVOL. 3,\nNO. 3,\nJULY-SEPTEMBER 2006\nTABLE 3\nThe Number of Genes Selected at Various Position p-Value Levels by tr-Test with SVM\n\"black-box\" type and judging the significance of the features\nbeing used for the classification was usually not deemed\nimportant. This fact compromises their further contribution\nin helping biologists to understand the mechanisms underlying\nthe investigated disease classification.\nThis paper raises the problem of the significance of gene\nrankings in microarray classification study and proposes a\nsolution strategy called the r-test that converts the ranking\nof genes obtained with any method to position p-values\n(p*-values) that reflect the significance of the genes being\ninformative. The concept of this question is important and\nthe formulation and solution are challenging for several\nreasons as addressed in the paper. First of all, the definition\nof a gene being informative to the classification may not yet\nbe completely clear for many classification methods. Even\nunder the same criterion, there may not be a clear boundary\nbetween informative and uninformative genes. A biological\nstatus may be affected by several genes with different levels\nof contribution and it may affect the expression of many\nother genes. Differences between individuals and instrumental\nnoises may make the genes that have no relation\nwith the studied biological process show some relevancy in\nthe limited samples. All these (and other) complexities\nmake it hard to mathematically model microarray data. We\npropose the r-test methods based on intuitive reasoning\nunder certain assumptions about the nature of the data. As\nshown in the experiments, the methods provide reasonable\nsolutions, but the decision by the mr-test and tr-test method\ncan be very different for some situations. Theoretically,\nrigorous methods are still to be developed.\nUnder the proposed r-test framework, the key issue is\nthe choice of putative uninformative genes or PUGs. Since\nthe null distribution has to be estimated from the data\nthemselves, avoiding bias in the estimation is the most\nchallenging task. Besides the methods used by the mr-test\nand tr-test, we have also tried several other ways to tackle\nthe problem, including selecting the PUGs according to the\ndistribution of the ranks of all genes in the resample\nexperiments, deciding the number of PUGs recursively\naccording to the rank with an EM-like strategy, selection of\nan independent set of PUGs by fold-change, etc. Different\nresampling strategy has also been experimented with.\nAmong these efforts, the reported mr-test and tr-test give\nthe most satisfactory results. They both perform perfectly\non ideal simulations. For practical cases, the mr-test has a\ntendency to be overoptimistic by claiming more significant\ngenes and the tr-test has a tendency to be conservative by\napproving only a small number of significant genes. 
Note\nthat both r-test schemes do not change the ranking itself;\ntherefore, it is the role of the classification and gene\nselection method (the RM) to guarantee that the ranking\nitself is reasonable for the biological investigation. The r-test\nonly helps to decide on the number of genes to be selected\nfrom the list at given significance levels. Since there is\ncurrently no theoretical solution to completely avoid\nestimation bias, one can make a choice between mr-test\nand tr-test results by balancing between the two opposite\ntrends of possible biases according to the particular\nbiological problem at hand.\nACKNOWLEDGMENTS\nThe authors would like to thank Drs. J.D. Iglehart and A.L.\nRichardson for providing them with their microarray data\nfor the experiments. They would also like to thank the\neditor and reviewers for their valuable suggestions that\ncontributed a lot to the work. They thank Dustin Schones\nfor helping to improve their writing. This work is supported\nin part by NSFC projects 60275007, 60234020, and the\nNational Basic Research Program (2004CB518605) of China.\nThis work was performed while Chaolin Zhang was with\nthe MOE Key Laboratory of Bioinformatics/Department of\nAutomation, Tsinghua University, Beijing. Chaolin Zhang\nand Xuesong Lu contributed equally in this work and\nshould be regarded as joint authors. The corresponding\nauthor is Xuegong Zhang.\n\nREFERENCES\n[1]\nT.R. Golub, D.K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek,\nJ.P. Mesirov, H. Coller, M.L. Loh, J.R. Downing, M.A. Caligiuri,\nC.D. Bloomfield, and E.S. Lander, \"Molecular Classification of\nCancer: Class Discovery and Class Prediction by Gene Expression\nMonitoring,\" Science, vol. 286, no. 5439, pp. 531-537, 1999.\n[2]\nC.-A. Tsai, Y.-J. Chen, and J.J. Chen, \"Testing for Differentially\nExpressed Genes with Microarray Data,\" Nuclear Acids Research,\nvol. 31, no. 9, p. e52, 2003.\n[3]\nM.S. Pepe, G. Longton, G.L. Anderson, and M. Schummer,\n\"Selecting Differentially Expressed Genes from Microarray Experiments\n,\" Biometrics, vol. 59, no. 1, pp. 133-142, 2003.\n[4]\nP. Broberg, \"Statistical Methods for Ranking Differentially\nExpressed Genes,\" Genome Biology, vol. 4, no. 6, p. R41, 2003.\n[5]\nW. Pan, \"A Comparative Review of Statistical Methods for\nDiscovering Differentially Expressed Genes in Replicated Microarray\nExperiments,\" Bioinformatics, vol. 18, no. 4, pp. 546-554, 2002.\n[6]\nS. Ramaswamy, K.N. Ross, E.S. Lander, and T.R. Golub, \"A\nMolecular Signature of Metastasis in Primary Solid Tumors,\"\nNature Genetics, vol. 33, no. 1, pp. 49-54, 2003.\n[7]\nL.J. van 't Veer, H. Dai, M.J. van de Vijver, Y.D. He, A.A.M. Hart,\nM. Mao, H.L. Peterse, K. van der Kooy, M.J. Marton, A.T.\nWitteveen, G.J. Schreiber, R.M. Kerkhoven, C. Roberts, P.S.\nLinsley, R. Bernards, and S.H. Friend, \"Gene Expression Profiling\nPredicts Clinical Outcome of Breast Cancer,\" Nature, vol. 415,\nno. 6871, pp. 530-536, 2002.\n[8]\nM. Xiong, X. Fang, and J. Zhao, \"Biomarker Identification by\nFeature Wrappers,\" Genome Research, vol. 11, no. 11, pp. 1878-1887,\n2001.\n[9]\nX. Zhang and H. Ke, \"ALL/AML Cancer Classification by Gene\nExpression Data Using SVM and CSVM Approach,\" Proc. Conf.\nGenome Informatics, pp. 237-239, 2000.\n[10]\nI. Guyon, J. Weston, S. Barnhill, and V. Vapnik, \"Gene Selection\nfor Cancer Classification Using Support Vector Machines,\"\nMachine Learning, vol. 46, pp. 389-422, 2002.\n[11]\nC. Furlanello, M. Serafini, S. Merler, and G. 
Jurman, \"Entropy-Based\nGene Ranking without Selection Bias for the Predictive\nClassification of Microarray Data,\" BMC Bioinformatics, vol. 4,\nno. 1, p. 54, 2003.\n[12]\nH. Yu, J. Yang, W. Wang, and J. Han, \"Discovering Compact and\nHighly Discriminative Features or Feature Combinations of Drug\nActivities Using Support Vector Machines,\" Proc. 2003 IEEE\nBioinformatics Conf. (CSB '03), 2003.\n[13]\nJ.D. Storey and R. Tibshirani, \"Statistical Significance for Genome\nWide Studies,\" Proc. Nat'l Academy of Science USA, vol. 100, no. 16,\npp. 9440-9445, 2003.\n[14]\nR.R. Sokal and F.J. Rohlf, Biometry. San Francisco: Freeman, 1995.\n[15]\nV. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag\n, 1995.\n[16]\nS. Ramaswamy, P. Tamayo, R. Rifkin, S. Mukherjee, C.-H. Yeang,\nM. Angelo, C. Ladd, M. Reich, E. Latulippe, J.P. Mesirov, T.\nPoggio, W. Gerald, M. Loda, E.S. Lander, and T.R. Golub,\n\"Multiclass Cancer Diagnosis Using Tumor Gene Expression\nSignatures,\" Proc. Nat'l Academy of Sciences, vol. 98, no. 26,\npp. 15149-15154, 2001.\nZHANG ET AL.: SIGNIFICANCE OF GENE RANKING FOR CLASSIFICATION OF MICROARRAY SAMPLES\n319\n[17]\nX. Zhang and W.H. Wong, \"Recursive Sample Classification and\nGene Selection Based on SVM: Method and Software Description\n,\" technical report, Dept. of Biostatistics, Harvard School of\nPublic Health, 2001.\n[18]\nU. Alon, N. Barkai, D.A. Notterman, K. Gish, S. Ybarra, D. Mack,\nand A.J. Levine, \"Broad Patterns of Gene Expression Revealed by\nClustering Analysis of Tumor and Normal Colon Tissues Probed\nby Oligonucleotide Arrays,\" Proc. Nat'l Academy of Sciences, vol. 96,\nno. 12, pp. 6745-6750, 1999.\n[19]\nZ.C. Wang, M. Lin, L.-J. Wei, C. Li, A. Miron, G. Lodeiro, L.\nHarris, S. Ramaswamy, D.M. Tanenbaum, M. Meyerson, J.D.\nIglehart, and A. Richardson, \"Loss of Heterozygosity and Its\nCorrelation with Expression Profiles in Subclasses of Invasive\nBreast Cancers,\" Cancer Research, vol. 64, no. 1, pp. 64-71, 2004.\n[20]\nE. Huang, S.H. Cheng, H. Dressman, J. Pittman, M.H. Tsou, C.F.\nHorng, A. Bild, E.S. Iversen, M. Liao, and C.M. Chen, \"Gene\nExpression Predictors of Breast Cancer Outcomes,\" The Lancet,\nvol. 361, no. 9369, pp. 1590-1596, 2003.\n[21]\nM. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R.\nSpang, H. Zuzan, J.A. Olson Jr., J.R. Marks, and J.R. Nevins,\n\"Predicting the Clinical Status of Human Breast Cancer by Using\nGene Expression Profiles,\" Proc. Nat'l Academy of Sciences, vol. 98,\nno. 20, pp. 11462-11467, 2001.\nChaolin Zhang received the BE degree from\nthe Department of Automation at Tsinghua\nUniversity, Beijing, China, in 2002. From 2002\nto 2004, he worked as a graduate student on\nmachine learning applications in microarray data\nanalysis and literature mining at the MOE Key\nLaboratory of Bioinformatics, Tsinghua University\n. He is now a PhD student at Cold Spring\nHarbor Laboratory and the Department of\nBiomedical Engineering, the State University of\nNew York at Stony Brook.\nXuesong Lu received the BE degree from the\nDepartment of Automation, Tsinghua University,\nBeijing, China, in 2001. He is currently a PhD\ncandidate in the Department of Automation and\nthe MOE Key Laboratory of Bioinformatics at\nTsinghua University, Beijing, China. His research\ninterests include microarray data mining,\ngene network modeling, and literature mining.\nXuegong Zhang received the PhD degree in\npattern recognition and intelligent systems from\nTsinghua University, Beijing, China, in 1994. 
He\nis currently a professor in the Department of\nAutomation and the MOE Key Laboratory of\nBioinformatics at Tsinghua University. His research\ninterests include machine learning and\npattern recognition, bioinformatics, computa-tional\ngenomics, and systems biology.\n.\nFor more information on this or any other computing topic,\nplease visit our Digital Library at www.computer.org/publications/dlib.\n320\nIEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS,\nVOL. 3,\nNO. 3,\nJULY-SEPTEMBER 2006", "keywords": "Significance of gene ranking;gene selection;microarray data analysis;classification"} {"name": "179", "title": "Simplifying Flexible Isosurfaces Using Local Geometric Measures", "abstract": "The contour tree, an abstraction of a scalar field that encodes the nesting relationships of isosurfaces, can be used to accelerate isosurface extraction, to identify important isovalues for volume-rendering transfer functions, and to guide exploratory visualization through a flexible isosurface interface. Many real-world data sets produce unmanageably large contour trees which require meaningful simplification. We define local geometric measures for individual contours, such as surface area and contained volume, and provide an algorithm to compute these measures in a contour tree. We then use these geometric measures to simplify the contour trees, suppressing minor topological features of the data. We combine this with a flexible isosurface interface to allow users to explore individual contours of a dataset interactively.", "fulltext": "Introduction\nIsosurfaces, slicing, and volume rendering are the three main techniques\nfor visualizing three-dimensional scalar fields on a two-dimensional\ndisplay. A recent survey [Brodlie and Wood 2001] describes\nthe maturation of these techniques since the mid 1980s. For\nexample, improved understanding of isosurfaces has produced robust\ndefinitions of watertight surfaces and efficient extraction methods\n. We believe that the same improved understanding and structuring\nleads to new interfaces that give the user better methods to\nselect isosurfaces of interest and that provide a rich framework for\ndata-guided exploration of scalar fields.\nAlthough key ideas in this paper apply to both isosurfaces and volume\nrendering, the immediate application is to isosurface rendering\n. An isosurface shows the surface for a fixed value (the isovalue\n) of the scalar field and is the 3D analogue of equal-height\ncontour lines on a topographic map. The contour tree represents\nthe nesting relationships of connected components of isosurfaces,\nwhich we call contours, and is thus a topological abstraction of a\nscalar field. Since genus changes to surfaces do not affect the nesting\nrelationship, they are not represented in the contour tree. Our\ncontribution is to combine the flexible isosurface interface [Carr\nand Snoeyink 2003] with online contour tree simplification guided\nby geometric properties of contours to produce a tool for interactive\nexploration of large noisy experimentally-sampled data sets.\nAn additional contribution is to draw attention to other potential\napplications of simplified contour trees, such as detail-preserving\ndenoising, automated segmentation, and atlasing.\nFigure 1 shows a comparison between a conventional isosurface\nand a flexible isosurface extracted from the same data set after contour\ntree simplification. 
On the left, the outermost surface (the\nskull) occludes other surfaces, making it difficult to study structures\ninside the head. Moreover, the contour tree for this data set has over\n1 million edges, making it impractical as a visual representation.\nFigure 2: The topographic map (2-d scalar field), surface rendering, and contour tree for a volcanic crater lake with a central island. A: a\nmaximum on the crater edge; B: maximum of island in the lake; F: lake surface; C and D: saddle points.\nOn the right is a flexible isosurface constructed using a simplified\ncontour tree, laid out and coloured to emphasize the structure of the\ndata set. Of particular interest is that there are no \"magic numbers\"\nembedded in the code. Instead, the surfaces shown were chosen\ndirectly from the simplified contour tree during exploration of this\ndata set, with the level of simplification being adjusted as needed.\nThe remainder of this paper is as follows. Section 2 reviews work\non contour trees in visualization. Section 3 shows how to simplify\nthe contour tree, and the effects on the data. Section 4 shows how to\ncompute local geometric measures efficiently to guide simplification\n. Section 5 gives implementation details, and Section 6 reports\nresults. Finally, Section 7 gives possible future extensions.\nRelated Work\nMost of the relevant work deals with a topological structure called\nthe contour tree that is becoming increasingly important in visualization\n. Section 2.1 reviews the contour tree and algorithms to\ncompute it. Section 2.2 then reviews visualization tools that use the\ncontour tree, while Section 2.3 reviews work on topological simplification\nand on efficient computation of geometric properties.\n2.1\nThe Contour Tree\nFor a scalar field f : IR\n3\nIR, the level set of an isovalue h is the\nset L\n(h) = {(x,y,z) | f (x,y,z) = h}. A contour is a connected component\nof a level set. As h increases, contours appear at local minima\n, join or split at saddles, and disappear at local maxima of f .\nShrinking each contour to a single point gives the contour tree,\nwhich tracks this evolution. It is a tree because the domain IR\n3\nis simply-connected; in more general domains we obtain the Reeb\ngraph [Reeb 1946], which is used in Morse theory [Matsumoto\n2002; Milnor 1963] to study the topology of manifolds.\nFigure 2 shows a 2-dimensional scalar field describing a volcanic\ncrater lake with a central island. The contour tree of this field is\nan abstract, but meaningful, depiction of the structure of all local\nmaxima, minima, and saddle points, and gives clues to interesting\ncontours. Individual contours are represented uniquely as points on\nthe contour tree. For example, the isolines c\n1\n, c\n2\n, and c\n3\nare all at\n2000m, but each has a unique location on the contour tree.\nThe contour tree has been used for fast isosurface extraction [van\nKreveld et al. 1997; Carr and Snoeyink 2003], to guide mesh simplification\n[Chiang and Lu 2003], to find important isovalues for\ntransfer function construction [Takahashi et al. 2004b], to compute\ntopological parameters of isosurfaces [Kettner et al. 2001], as an\nabstract representation of scalar fields [Bajaj et al. 1997], and to\nmanipulate individual contours [Carr and Snoeyink 2003].\nAlgorithms to compute the contour tree efficiently in three or more\ndimensions have been given for simplicial meshes [van Kreveld\net al. 1997; Tarasov and Vyalyi 1998; Carr et al. 2003; Chiang\net al. 2002; Takahashi et al. 
2004b] and for trilinear meshes [Pascucci and Cole-McLaughlin 2002]. Much of this work focusses on "clean" data from analytic functions or numerical simulation; see for example [Bajaj et al. 1997; Takahashi et al. 2004b]. All of the topology in this data is assumed to be important, and significant effort is expended on representing it accurately using trilinear interpolants [Pascucci and Cole-McLaughlin 2002] and topology-preserving simplifications [Chiang and Lu 2003].\nIn contrast, we are interested in noisy experimentally-acquired data such as medical datasets. We expect to discard small-scale topological features so that we can focus on large-scale features. We have therefore chosen to work with the well-known Marching Cubes cases [Lorenson and Cline 1987; Montani et al. 1994], and with approximate geometric properties. This paper does not turn on these choices, however, and can also be applied to trilinear interpolants and exact geometric properties.\n2.2\nFlexible Isosurfaces\nThe contour spectrum [Bajaj et al. 1997] uses the contour tree to represent the topology of a field, alongside global measures of level sets such as surface area and enclosed volume. In contrast, the flexible isosurface interface [Carr and Snoeyink 2003] uses the contour tree actively instead of passively. The user selects an individual contour from the contour tree or from the isosurface display, then manipulates it. Operations include contour removal and contour evolution as the isovalue is changed, using the contour tree to track which contours to display. This interface depends on attaching isosurface seeds called path seeds to each edge of the contour tree so that individual contours can be extracted on demand.\nA major disadvantage of both these interfaces is that contour trees with more than a few tens of edges make poor visual abstractions. A principal contribution of this paper is to simplify the contour tree while preserving the exploratory capabilities of the flexible isosurface. This requires that each point in a simplified contour tree represents an extractable contour. Moreover, extracted contours must evolve as smoothly as possible when the isovalue is adjusted.\nWe satisfy these constraints with simplifications that have predictable effects on the scalar field and geometric measures that identify unimportant contour tree edges for simplification.\n2.3\nSimplification and Geometric Measures\nThe distinction between this paper and other work that simplifies contour trees or Reeb graphs is our emphasis on using tree structure for local exploration. [Takahashi et al. 2004a] simplify the contour tree by replacing three edges at a saddle point with a single new edge, based on the height of the edge. [Takahashi et al. 2004b] use the approximate volume of the region represented by the subtree that is discarded. Saddles are processed until only a few remain, then a transfer function is constructed that emphasizes the isovalues of those saddles. Our simplification algorithm extends this work to preserve local information such as isosurface seeds and to compute arbitrary geometric measures of importance. We also describe the effects of simplification on the scalar field.\nSince removing a leaf of the contour tree cancels out a local extremum with a saddle, this form of simplification can be shown to be equivalent to topological persistence [Edelsbrunner et al. 2003; Edelsbrunner et al. 2002; Bremer et al. 2003] if the geometric measure used is height.
For other measures, such as volume or hypervolume\n, the method described in this paper is necessary to define\nthese properties, but thereafter, the process can optionally be described\nin terms of persistence.\nMoreover, work on persistence has focussed on the Morse complex\n, which is difficult to compute and segments data according to\nthe gradient of the field. When the boundary of an object such as\nan organ is better described by a contour than by drainage, contour\ntrees are more directly applicable than Morse complexes, and the\nadditional overhead of working with the Morse complex is unnecessary\n.\n[Hilaga et al. 2001] have shown how to simplify the Reeb graph by\nprogressive quantization of the isovalue to obtain a multi-resolution\nReeb graph. This suffers from several drawbacks, in particular that\nit is strictly tied to a function value which is treated as height (or\npersistence). Extension to geometric measures of importance such\nas volume or hypervolume is therefore problematic. Moreover, the\nquantization used imposes serious restrictions on isosurface generation\nand the level of simplification, as well as generating artifacts\nrelated to the quantization. In particular, we note that this quantization\nprocess limits potential simplification to at most as many\nlevels as there are bits in each input sample. Finally, this method\nis relatively slow: 15s is claimed for a 2-manifold input mesh with\n10,000 vertices: extensions to 10,000,000+ sample volumetric data\nhave not yet been published.\nWork also exists on computing geometric measures efficiently in\nlarge data sets. [Bentley 1979] defined problems to be decomposable\nif their solution could be assembled from the solutions of an\narbitrary decomposition into subproblems. Decomposability has\nbeen used for a variety of problems, including computation of geometric\nproperties of level sets [Bajaj et al. 1997] and extraction of\nisosurfaces [Lorenson and Cline 1987]. We use decomposability in\nSection 4 to compute local geometric measures.\nContour Tree Simplification\nGiven a contour tree and a scalar field, we apply graph simplification\nto the contour tree. This simplification can then be carried\nback to simplify the input data. Alternately, we can use the simplified\ncontour tree to extract the reduced set of isosurfaces that would\nresult if we had simplified the data. In this section, we describe\nthe contour tree structure, the simplification operators, and the algorithms\nfor simplification and isosurface extraction.\n3.1\nContour Tree Structure\nA contour tree is the result of contracting every contour to a point.\nWe use a simple tree structure in which every vertex is assigned a\ny-coordinate, and every edge is associated with the set of contours\nbetween its vertices. We store path seeds for generating individual\ncontours, as in [Carr and Snoeyink 2003]. That is, we store\na pointer to a monotone path that intersects all contours along the\nedge, which then serves as a seed to generate any given contour. In\nthis section, we assume that each edge has a simplification value\n(weight) that indicates the edge's priority. Low priority edges are\ngood candidates for simplification.\n3.2\nBasic Simplification Operations\nWe simplify the contour tree with two operations: leaf pruning and\nvertex reduction. 
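Before detailing these two operations, a minimal sketch of the Section 3.1 structure they act on may help fix ideas. The field names below (isovalue, path_seed, up_cost, down_cost, deleted) are our own illustrative choices, not the paper's implementation; only their roles (an isovalue per node, and a path seed plus up/down pruning costs per edge) come from the text.

# Illustrative sketch of the contour tree structure described in Section 3.1.
class Node:
    def __init__(self, isovalue):
        self.isovalue = isovalue      # y-coordinate of the vertex in the contour tree
        self.up_edges = []            # edges to higher-isovalue neighbours
        self.down_edges = []          # edges to lower-isovalue neighbours

class Edge:
    def __init__(self, upper, lower, path_seed):
        self.upper = upper            # Node at the higher isovalue
        self.lower = lower            # Node at the lower isovalue
        self.path_seed = path_seed    # monotone path meeting every contour on this edge
        self.up_cost = None           # cost of pruning the edge upwards (collapse to upper vertex)
        self.down_cost = None         # cost of pruning the edge downwards
        self.deleted = False          # lazy-deletion flag used during simplification
        upper.down_edges.append(self)
        lower.up_edges.append(self)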
Leaf pruning removes a leaf of the tree, reducing the complexity of the tree, as shown in Figure 3, where vertex 80 is pruned from the tree on the left to produce the tree in the middle. Vertex reduction chooses a vertex with one neighbor above and one below, and deletes the vertex without changing the essential structure of the contour tree. This is also illustrated in Figure 3, where vertex 50 has been removed from the tree in the middle to produce the tree on the right. Since vertex reductions do not change the essential structure of the contour tree, we prefer them to leaf prunes. Also, pruning the only up- or down- edge at a saddle is prohibited to preserve the edge for a later vertex reduction. It is clear that these operations can simplify the tree to any desired size.\nWe can also think of these operations as having well-defined effects on the underlying scalar field: pruning a leaf corresponds to levelling off a maximum or minimum, while vertex reduction requires no changes.\nAs an example, in Figure 3 we show the result of leaf-pruning vertex 80 and edge 80-50 from the tree. Since 80-50 represents the left-hand maximum, pruning it flattens out the maximum, as shown in the middle terrain. Similarly, the right-hand image shows the results of reducing vertex 50 after the leaf prune. The edges incident to vertex 50 in the tree correspond to the regions above and below the contour through vertex 50. Removing vertex 50 merely combines these two regions into one.\nFigure 3: Leaf Pruning Levels Extrema; Vertex Reduction Leaves Scalar Field Unchanged\nThe fact that simplification operations can be interpreted as modifying the scalar field suggests that one way to assess the cost of an operation is to measure geometric properties of the change. We show how this can be done efficiently in Section 4.\n3.3\nSimplification Algorithm\nTo simplify the contour tree, we apply the following rules:\n1. Always perform vertex reduction where possible.\n2. Always choose the least important leaf to prune.\n3. Never prune the last up- or down- leaf at an interior vertex.\nWe implement this with a priority queue to keep track of the leaves of the tree with their associated pruning cost. We assume that for each edge e of the tree, we know two costs: up(e) for the cost of pruning the edge from the bottom up, i.e. collapsing the edge to its upper vertex, and down(e) for the cost of pruning the edge from the top downwards. We add each leaf to the priority queue, with priority up(e) for a lower leaf and down(e) for an upper leaf. We then repeatedly remove the lowest cost leaf edge from the priority queue and prune it. If this pruning causes a vertex to become reducible, we do so immediately.\nWhen a vertex is reduced, two edges e1 and e2 are merged into a simplified edge d. The cost of pruning d is based on the costs of the two reduced edges. Since up(d) is the cost of pruning d upwards, we set it to up(e1), the cost of pruning the upper edge upwards. Similarly, we set down(d) to down(e2), the cost of pruning the lower edge downwards. If d is a leaf edge, we add it to the priority queue. To simplify queue handling, we mark the reduced edges for lazy deletion. When a marked edge reaches the front of the priority queue, we discard it immediately.
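A compact sketch of this queue-driven loop is given below, using the Node and Edge sketch above. It is our own illustration of the procedure just described, not the authors' code: helper names are hypothetical, the up_cost/down_cost values are assumed precomputed (e.g. by Algorithm 1 in Section 4), path-seed handling is simplified, and the corner case discussed next (the last up- or down- edge at an interior vertex) appears as the rule-3 check.

# Illustrative sketch (not the authors' code) of the Section 3.3 simplification loop.
import heapq, itertools

def live(edges):
    return [d for d in edges if not d.deleted]

def leaf_end(e):
    # return an endpoint of e that has no other live edge, or None
    for v in (e.lower, e.upper):
        if len(live(v.up_edges)) + len(live(v.down_edges)) == 1:
            return v
    return None

def leaf_cost(e, leaf):
    # priority is up(e) for a lower leaf and down(e) for an upper leaf
    return e.up_cost if leaf is e.lower else e.down_cost

def simplify(edges, target_size):
    tie = itertools.count()
    queue = []
    for e in edges:
        leaf = leaf_end(e)
        if leaf is not None:
            heapq.heappush(queue, (leaf_cost(e, leaf), next(tie), e))
    size = len(edges)
    while queue and size > target_size:
        _, _, e = heapq.heappop(queue)
        if e.deleted:
            continue                                   # lazy deletion of merged edges
        leaf = leaf_end(e)
        if leaf is None:
            continue
        v = e.upper if leaf is e.lower else e.lower    # interior endpoint of the leaf edge
        ups, downs = live(v.up_edges), live(v.down_edges)
        if (e in ups and len(ups) == 1) or (e in downs and len(downs) == 1):
            continue                                   # rule 3: keep the last up-/down- edge at v
        e.deleted = True                               # rule 2: prune the cheapest leaf first
        size -= 1
        ups, downs = live(v.up_edges), live(v.down_edges)
        if len(ups) == 1 and len(downs) == 1:          # rule 1: v is reducible, reduce at once
            e1, e2 = ups[0], downs[0]                  # e1 ascends from v, e2 descends from v
            d = Edge(e1.upper, e2.lower, e1.path_seed) # seed handling simplified for the sketch
            d.up_cost, d.down_cost = e1.up_cost, e2.down_cost
            e1.deleted = e2.deleted = True
            size -= 1
            leaf = leaf_end(d)
            if leaf is not None:
                heapq.heappush(queue, (leaf_cost(d, leaf), next(tie), d))
        elif len(ups) + len(downs) == 1:               # v itself became a leaf
            rest = (ups + downs)[0]
            heapq.heappush(queue, (leaf_cost(rest, v), next(tie), rest))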
Similarly, when the edge\nremoved from the queue is the last up- or down- edge at its interior\nvertex, we discard it, preserving it for a later vertex reduction.\nA few observations on this algorithm: First, any desired level of\nsimplification of the tree can be achieved in a number of queue\noperations linear in t, the size of the original tree. Since at least half\nthe nodes are leaves, this bound is tight. And if the contour tree is\nstored as nodes with circular linked lists of upwards and downwards\nedges, every operation except (de)queueing takes constant time. As\na result, the asymptotic cost of this algorithm is dominated by the\nO\n(t log(t)) cost of maintaining the priority queue.\nSecond, the simplified contour tree can still be used to extract isosurface\ncontours. Vertex reductions build monotone paths corresponding\nto the simplified edges, while leaf prunes discard entire\nmonotone paths. Thus, any edge in a simplified contour tree corresponds\nto a monotone path through the original contour tree. To\ngenerate the contour at a given isovalue on a simplified edge, we\nperform a binary search along the contour tree edges that make up\nthe monotone path for that simplified edge. This search identifies\nthe unique contour tree edge that spans the desired isovalue, and we\nuse the path seed associated with that edge to generate the contour.\nThird, we extract contours from seeds as before. Instead of simplifying\nindividual contours, we reduce the set of contours that can be\nextracted. Surface simplification of contours is a separate task.\nFinally, up\n(e) and down(e) actually need not be set except at leaves\nof the tree. As a leaf is pruned and vertex reduced, new values can\nbe computed using information from the old nodes and edges. It is\nnot hard to show by induction that any desired level of simplification\nof the tree can be achieved. And, since leaf pruning and vertex\nreduction are the only two operations, the net result can also be a\nmeaningful simplification of the underlying scalar field, assuming\nthat a reasonable geometric measure is used to guide the simplification\n. We therefore next discuss geometric measures.\nLocal Geometric Measures\n[Bajaj et al. 1997] compute global geometric properties, and display\nthem alongside the contour tree in the contour spectrum. [Pascucci\nand Cole-McLaughlin 2002] propagate topological indices called\nthe Betti numbers along branches of the contour tree, based on previous\nwork by [Pascucci 2001]. We bring these two ideas together\nto compute local geometric measures for individual contours.\nIn 2D scalar fields, the geometric properties we could compute\ninclude the following contour properties: line length (perimeter),\ncross-sectional area (area of region enclosed by the contour), volume\n(of the region enclosed), and surface area (of the function over\nthe region). In 3D scalar fields, there are analogous properties that\ninclude isosurface area, cross-sectional volume (the volume of the\nregion enclosed by the isosurface), and hypervolume (the integral\nof the scalar field over the enclosed volume).\nFigure 4: Contours Sweeping Past a Saddle Point\nConsider a plane sweeping through the field in Figure 2 from high\nto low isovalues. At any isovalue h, the plane divides the field into\nregions above and below the plane. As the isovalue decreases, the\nregion above the plane grows, sweeping past the vertices of the\nmesh one at a time. Geometric properties of this region can be\nwritten as functions of the isovalue h. 
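As a toy illustration of such a function of h (ours, not from the paper), the size of the region above the sweep plane can be tabulated with the vertex-count approximation that Section 4.1 adopts later: sort the sample values and sweep from high to low.

# Toy illustration (ours): tabulate a global geometric property as a function of the
# isovalue h by sweeping from high to low values. The property here is the vertex
# count of the region above the sweep plane.

def region_size_above(samples):
    """Return (h, count) pairs, where count = number of samples with value > h."""
    steps = []
    count = 0
    for h in sorted(set(samples), reverse=True):
        steps.append((h, count))                      # samples already swept past lie above h
        count += sum(1 for s in samples if s == h)
    return steps

# Example: a tiny "terrain" with maxima 80 and 90 separated by a saddle at 50, as in Figure 3.
# print(region_size_above([0, 80, 50, 90, 0]))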
Such properties are decomposable\nover the cells of the input data for each cell we compute\na piecewise polynomial function, and sum them to obtain a piecewise\npolynomial function for the entire region. [Bajaj et al. 1997]\ncompute these functions by sweeping through the isovalues, altering\nthe function as each vertex is passed. Figure 4 illustrates this\nprocess, showing the contours immediately above and below a vertex\ns. As the plane sweeps past s, the function is unchanged in cells\noutside the neighbourhood of s, but changes inside the neighbourhood\nof s. This sweep computes global geometric properties for\nthe region above the sweep plane. Reversing the direction of the\nsweep computes global geometric properties for the region below\n500\nthe sweep plane.\nIn Figure 2, the region above the sweep plane at 2000m consists\nof two connected components, one defined by contours c\n1\nand c\n2\n,\nthe other by c\n3\n. To compute properties for these components, we\nsweep along an edge of the contour tree, representing a single contour\nsweeping through the data. This lets us compute functions for\nthe central maximum at B. For the crater rim defined by contours\nc\n1\nand c\n2\n, we use inclusion/exclusion. We sweep one contour at\na time, computing properties for the region inside the contour, including\nregions above and below the isovalue of the contour. The\narea of the crater rim can then be computed by subtracting the area\ninside contour c\n2\nfrom the area inside contour c\n1\n.\nWe define local geometric measures to be geometric properties of\nregions bounded by a contour. We compute these measures in a\nmanner similar to the global sweep of [Bajaj et al. 1997], but by\nsweeping contours along contour tree edges.\n4.1\nLocal Geometric Measures\nTo define local geometric measures attached to contour tree edges,\nwe must be careful with terminology. Above and below do not apply\nto the region inside c\n1\nin Figure 2, since part of the region is above\nthe contour and part is below. Nor do inside and outside, which lose\ntheir meaning for contours that intersect the boundary. We therefore\ndefine upstart and downstart regions of a contour. An upstart region\nis a region reachable from the contour by paths that initially ascend\nfrom the contour and never return to it. For contour c\n1\n, there is\none upstart region (inside) and one downstart region (outside). At\nsaddles such as D, there may be several upstart regions. Since each\nsuch region corresponds to an edge in the contour tree, we refer, for\nexample, to the upstart region at D for arc CD.\nWe now define upstart and downstart functions: functions computed\nfor upstart or downstart regions. Note that the upstart and\ndownstart functions do not have to be the same. For example, the\nlength of a contour line is independent of sweep direction, so the\nupstart and downstart functions for contour length in 2D are identical\n. But the area enclosed by a contour depends on sweep direction,\nso the upstart and downstart functions will be different.\nSince upstart and downstart functions describe geometric properties\nlocal to a contour, we refer to them collectively as local geometric\nmeasures. These measures are piecewise polynomial since they\nare piecewise polynomial in each cell. Because we need to track\nconnectivity for inclusion/exclusion, they are not strictly decomposable\n. Stated another way, in order to make them decomposable,\nwe need to know the connectivity during the local sweep. 
We are fortunate that the contour tree encodes this connectivity.\nFor regular data, we approximate region size with vertex count as in [Takahashi et al. 2004b]. For the integral of f over region R, we sum the sample values to get Σ_{x∈R} f(x): the correct integral is the limit of this sum as sample spacing approaches zero. When we prune a leaf to a saddle at height h, the integral over the region flattened is Σ_{x∈R} (f(x) - h) = (Σ_{x∈R} f(x)) - Ah, where A is the area of region R.\nIn three dimensions, vertex counting measures volume, and summing the samples gives hypervolume. This geometric measure is quite effective on the data sets we have tested in Section 6.\n4.2\nCombining Local Geometric Measures\nTo compute local geometric measures, we must be able to combine upstart functions as we sweep a set of contours past a vertex. In Figure 4, we must combine the upstart functions for contours c1, c2 and c3 before sweeping past s. We must then update the combined upstart function as we sweep past the vertex.\nAfter sweeping past s, we know the combined upstart function d for contours d1, d2 and d3. We remove the upstart functions for d1 and d2 from d to obtain the upstart function for d3.\nWe assume that we have recursively computed the upstart functions for d1 and d2 by computing the downstart functions and then inverting them. Let us illustrate inversion, combination and removal for two local geometric measures in two dimensions.\nContour Length: Contour length is independent of sweep direction, so these operations are simple: inversion is the identity operation, combination sums the lengths of the individual contours, and contours are removed by subtracting their lengths.\nArea: Area depends on sweep direction, so inversion subtracts the function from the area of the entire field. Combining upstart functions at a saddle depends on whether the corresponding edges ascend or descend from the saddle. For ascending edges the upstart regions are disjoint, and the upstart functions are summed. For descending edges the upstart regions overlap, and the upstart functions are combined by inverting to downstart functions, summing, and re-inverting. Removing upstart functions reverses combination.\nConsider Figure 4 once more. The upstart region of d1 contains s, as well as contours c1, c2 and c3. Similarly, the upstart regions of d2 and d3 contain s and contours c1, c2 and c3. However, the downstart regions of d1, d2 and d3 are disjoint, and can be summed, then inverted to obtain the combination of the upstart regions.\nIn general, measures of contour size are independent of sweep direction and their computation follows the pattern of 2D contour length. Such measures include surface area in three dimensions, and hypersurface volume in four dimensions. Measures of region size depend on sweep direction and their computation follows the pattern of 2D cross-sectional area.
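As a concrete (and simplified) instance of the operations that the algorithm below will assume, the following sketch specializes them to the vertex-count and sample-sum measures introduced above. It is our own illustration: a measure value is tracked at the current sweep isovalue rather than as a full piecewise-polynomial function of h, only the disjoint (ascending-edge) case of Combine is shown, and the function names, the pairing of the two quantities, and the explicit totals passed to invert are assumptions of this sketch rather than the paper's interface.

# Illustrative sketch (ours) of measure operations specialized to Section 4.1's measures:
# a measure value is a pair (vertex_count, sample_sum), i.e. approximate volume and hypervolume.

def combine(*measures):
    # disjoint regions on the same side of the sweep: the measures simply add
    return (sum(m[0] for m in measures), sum(m[1] for m in measures))

def update(measure, value):
    # sweeping past a vertex moves one sample of the given value into the region
    return (measure[0] + 1, measure[1] + value)

def invert(measure, total_count, total_sum):
    # region-size measures depend on sweep direction: the complementary region
    # contains whatever the whole data set contains, minus this region
    return (total_count - measure[0], total_sum - measure[1])

def remove(measure, *parts):
    # inclusion/exclusion: subtract parts that are accounted for separately
    return (measure[0] - sum(p[0] for p in parts), measure[1] - sum(p[1] for p in parts))

The same pattern carries over to the higher-dimensional region-size measures listed next.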
Such measures include surface area and volume in two dimensions, and isosurface cross-sectional volume and hypervolume in three dimensions.\nInput: fully augmented contour tree C, and a local geometric measure f with operations Combine(f1, ..., fm) of local geometric measures, Update(f, v) that updates f for the sweep past v, Remove(f, f1, ..., fm) of local geometric measures, and Invert() from down(e) to up(e) or vice versa.\nOutput: down(e) and up(e) for each edge e in C.\n1. Make a copy C' of C\n2. for each vertex v do\n3. if v is a leaf of C', enqueue v\n4. while NumberOfArcs(C') > 0 do\n5. Dequeue v and retrieve edge e = (u, v) from C'\n6. Without loss of generality, assume e ascends from v\n7. Let d1, ..., dk be downward arcs at v in C\n8. Let upBelow = Combine(down(d1), ..., down(dk))\n9. Let upAbove = Update(upBelow, v)\n10. Let e1, ..., em be upward arcs at v in C, with e1 = e\n11. Let fi = Invert(down(ei)) for i = 2, ..., m\n12. Let up(e) = Remove(upAbove, f2, ..., fm)\n13. Let down(e) = Invert(up(e))\n14. Delete e from C'\n15. if u is now a leaf of C', enqueue u\nAlgorithm 1: Computing Local Geometric Measures\n(a) Reduced by Height (Persistence) (b) Reduced by Volume (Vertex Count) (c) Reduced by Hypervolume (Riemann Sum)\nFigure 5: Comparison of Simplification Using Three Local Geometric Measures. In each case, the UNC Head data set has been simplified to 92 edges using the specified measure. Each tree was laid out using the dot tool, with no manual adjustment.\n4.3\nComputing Local Geometric Measures\nAlgorithm 1 shows how to compute edge priorities up(e) and down(e) for a given local geometric measure. This algorithm relies on Combine(), Update(), Invert(), and Remove() having been suitably defined, and can be integrated into the merge phase of the contour tree algorithm in [Carr et al. 2003].\nThe algorithm builds a queue of leaf edges in Steps 2-3, then works inwards, pruning edges as it goes. At each vertex, including regular points, the computation described in Section 4.2 is performed, and the edge is deleted from the tree. In this way, an edge is processed only when one of its vertices is reduced to a leaf: i.e. when all other edges at that vertex have already been processed.\nUnlike simplification, Algorithm 1 requires the fully augmented contour tree, which is obtained by adding every vertex in the input mesh to the contour tree. This makes the algorithm linear in the input size n rather than the tree size t: it cannot be used with the algorithms of [Pascucci and Cole-McLaughlin 2002] and [Chiang et al. 2002], which reduce running time by ignoring regular points.\n4.4\nComparison of Local Geometric Measures\nIn Figure 5, we show the results of simplifying the UNC Head data set with three different geometric measures: height (persistence), volume, and hypervolume. In each case, the contour tree has been reduced to 92 edges and laid out using dot with no manual intervention.\nIn the left-hand image, height (persistence) is used as the geometric measure. All of the edges shown are tall as a result, but on inspection, many of these edges are caused by high-intensity voxels in the skull or in blood vessels. Most of the corresponding objects are quite small, while genuine objects of interest such as the eyes, ventricular cavities and nasal cavity have already been suppressed, because they are defined by limited ranges of voxel intensity.
Also,\non the corresponding simplification curve, we observe that there\nare a relatively large number of objects with large intensity ranges:\nagain, on further inspection, these tended to be fragments of larger\nobjects, particularly the skull.\nIn comparison, the middle image shows the results of using volume\n(i.e. vertex count) as the geometric measure. Not only does this\nfocus attention on a few objects of relatively large spatial extent,\nbut the simplification curve shows a much more rapid drop-off, implying\nthat there are fewer objects of large volume than there are of\nlarge height. Objects such as the eyeballs are represented, as they\nhave relatively large regions despite having small height. However,\nwe note that there are a large number of small-height edges at the\nbottom of the contour tree. These edges turn out to be caused by\nnoise and artifacts outside the skull in the original CT scan, in which\nlarge regions are either slightly higher or lower in isovalue than the\nsurrounding regions.\nFinally, the right-hand image shows the results of using hypervolume\n(the sum of sample values, as discussed above). In this case,\nwe see a very rapid dropoff of importance in the simplification\ncurve, with only 100 or so regions having significance. We note that\nthis measure preserves small-height features such as the eyeballs,\nwhile eliminating most of the apparent noise edges at the bottom\nof the tree, although at the expense of representing more skull fragments\nthan the volume measure. In general we have found that this\nmeasure is better for data exploration than either height or volume,\nsince it balances representation of tall objects with representation\nof large objects.\nWe do not claim that this measure is universally ideal: the choice\nof simplification measure should be driven by domain-dependent\ninformation. However, no matter what measure is chosen, the basic\nmechanism of simplification remains.\nImplementation\nWe have combined simplification with the flexible isosurface interface\nof [Carr and Snoeyink 2003], which uses the contour tree\nas a visual index to contours. The interface window, shown in Figures\n1, 6, and 7, is divided into data, contour tree, and simplification\ncurve panels. The data panel displays the set of contours marked in\nthe contour tree panel. Contours can be selected in either panel,\nthen deleted, isolated, or have their isovalue adjusted. The simplification\ncurve panel shows a log-log plot of contour tree size against\n\"feature size\": the highest cost of any edge pruned to reach the\ngiven level of simplification. Selecting a point on this curve determines\nthe detail shown in the contour tree panel.\nFor efficiency, we compute contour trees for the surfaces given by\nthe Marching Cubes cases of [Montani et al. 1994] instead of a sim-502\nplicial or trilinear mesh, because these surfaces generate roughly\n60% fewer triangles than even a minimal simplicial subdivision\nof the voxels, with none of the directional biases identified by\n[Carr et al. 2001], and because they are significantly simpler to\ncompute than the trilinear interpolant used by [Pascucci and Cole-McLaughlin\n2002]. There is a loss of accuracy, but since our simplification\ndiscards small-scale details of the topology anyway, little\nwould be gained from more complex interpolants.\nFinally, as in [Carr et al. 2003; Pascucci and Cole-McLaughlin\n2002; Chiang et al. 
2002], we use simulation of simplicity [Edelsbrunner\nand Mucke 1990] to guarantee uniqueness of isovalues,\nthen collapse zero-height edges in the tree. Implementation details\ncan be found in [Carr 2004].\nResults and Discussion\nWe used a variety of data sets to test these methods, including results\nfrom numerical simulations (Nucleon, Silicium, Fuel, Neghip,\nHydrogen), analytical methods (ML, Shockwave), CT-scans (Lobster\n, Engine, Statue, Teapot, Bonsai), and X-rays (Aneurysm, Foot,\nSkull). Table 1 lists the size of each data set, the size of the unsimplified\ncontour tree, the time for constructing the unsimplified contour\ntree, and the simplification time. Times were obtained using a\n3 GHz Pentium 4 with 2 GB RAM, and the hypervolume measure.\nData Set\nData\nTree\nSize\nSize\nCT (s)\nST (s)\nNucleon\n41\n41 41\n49\n0.28\n0.01\nML\n41\n41 41\n695\n0.25\n0.01\nSilicium\n98\n34 34\n225\n0.41\n0.01\nFuel\n64\n64 64\n129\n0.72\n0.01\nNeghip\n64\n64 64\n248\n0.90\n0.01\nShockwave\n64\n64512\n31\n5.07\n0.01\nHydrogen\n128\n128128\n8\n5.60\n0.01\nLobster\n301\n324 56\n77,349\n19.22\n0.10\nEngine\n256\n256128\n134,642\n31.51\n0.18\nStatue\n341\n341 93\n120,668\n32.20\n0.15\nTeapot\n256\n256178\n20,777\n33.14\n0.02\nAneurysm\n256\n256256\n36,667\n41.83\n0.04\nBonsai\n256\n256256\n82,876\n49.71\n0.11\nFoot\n256\n256256\n508,854\n67.20\n0.74\nSkull\n256\n256256\n931,348\n109.73\n1.47\nCT Head\n106\n256256\n92,434\n21.30\n0.12\nUNC Head\n109\n256256\n1,573,373\n91.23\n2.48\nTooth\n161\n256256\n338,300\n39.65\n0.48\nRat\n240\n256256\n2,943,748\n233.33\n4.97\nTable 1: Data sets, unsimplified contour tree sizes, and contour tree\nconstruction time (CT) and simplification time (ST) in seconds.\nThe size of the contour tree is proportional to the number of local\nextrema in the input data. For analytic and simulated data sets, such\nas the ones shown in the upper half of Table 1, this is much smaller\nthan the input size. For noisy experimentally acquired data, such as\nthe ones shown in the lower half of Table 1, the size of the contour\ntree is roughly proportional to the input size. The time required to\nsimplify the contour tree using local geometric measures is typi-cally\nless than one percent of the time of constructing the original\ncontour tree, plus the additional cost of pre-computing these measures\nduring contour tree construction.\n6.1\nExamples of Data Exploration\nFigure 1 shows the result of exploring of the UNC Head data set\nusing simplified contour trees. An appropriate level of simplification\nwas chosen on the simplification curve and individual contours\nexplored until the image shown was produced. Surfaces identifiable\nas part of the skull were not chosen because they occluded the view\nof internal organs, although two contours for the ventricular system\nwere chosen despite being occluded by the brain surrounding them.\nThe flexible isosurface interface is particularly useful in this context\nbecause it lets one manipulate a single contour at a time, as shown\nin the video submitted with this paper.\nembryo\ngut?\nlungs\neyes\nbrain\nwindpipe?\nshoulder\nblades\nbreastbone\nFigure 6: A Pregnant Rat MRI (240\n256256). Despite low quality\ndata, simplifying the contour tree from 2,943,748 to 125 edges\nallows identification of several anatomical features.\nspinal column\nspinal cord\nventricles\nspinal cord\nspinal column\nventricles\nFigure 7: CT of a Skull (256\n256 106). 
Simplification of the\ncontour tree from 92,434 to 20 edges isolates the ventricular cavity,\nspinal cord and spinal column.\nSimilarly, Figure 6 shows the result of a similar exploration of a\n240\n256 256, low-quality MRI scan of a rat from the Whole\nFrog Project at http://www-itg.lbl.gov/ITG.hm.pg.docs/\nWhole.Frog/Whole.Frog.html. Again, simplification reduces\nthe contour tree to a useful size. Figure 7 shows a spinal column,\nspinal cord and ventricular cavity identified in a 256\n256 106\nCT data set from the University of Erlangen-Nuremberg. Other examples\nmay be seen on the accompanying video.\nEach of these images took less than 10 minutes to produce after\nall pre-processing, using the dot tool from the graphviz package\n(http://www.research.att.com/sw/tools/graphviz/)\nto lay out the contour tree: we generally then made a few adjustments\nto the node positions for clarity. Although dot produces reasonable\nlayouts for trees with 100 200 nodes, it is slow, sometimes\ntaking several minutes, and the layout computed usually becomes\nunsatisfactory as edges are added or subtracted from the tree.\n503\nNote that in none of these cases was any special constant embedded\nin the code the result is purely a function of the topology of the\nisosurfaces of the input data.\nConclusions and Future Work\nWe have presented a novel algorithm for the simplification of contour\ntrees based on local geometric measures. The algorithm is online\n, meaning that simplifications can be done and undone at any\ntime. This addresses the scalability problems of the contour tree\nin exploratory visualization of 3D scalar fields. The simplification\ncan also be reflected back onto the input data to produce an on-line\nsimplified scalar field. The algorithm is driven by local geometric\nmeasures such as area and volume, which make the simplifications\nmeaningful. Moreover, the simplifications can be tailored to a particular\napplication or data set.\nWe intend to explore several future directions. We could compute a\nmulti-dimensional feature vector of local geometric measures, and\nallow user-directed simplification of the contour tree, with different\nmeasures being applied in different regions of the function.\nThe simplified contour tree also provides a data structure for\nqueries. With local feature vectors one could efficiently answer\nqueries such as \"Find all contours with volume of at least 10 units\nand an approximate surface-area-to-volume ratio of 5.\" If information\nabout spatial extents (e.g., bounding boxes) is computed, then\nspatial constraints can also be included. Inverse problems could\nalso be posed given examples of a feature (e.g., a tumor), what\nshould the query constraints be to find such features?\nSome interface issues still need resolution, such as finding a fast\ncontour tree layout that is clear over a wide range of levels of simplification\nbut which also respects the convention that the y-position\ndepends on the isovalue. We would also like to annotate contours\nusing the flexible isosurface interface, rather than after the fact as\nwe have done in Figure 1 and Figures 6 7, and to enable local\nsimplification of the contour tree rather than the single-parameter\nsimplification presented here.\nIsosurfaces are not the only way of visualizing volumetric data.\nOther methods include boundary propagation using level set methods\nor T-snakes. We believe that simplified contour trees can provide\nseeds for these methods, either automatically or through user\ninteraction. 
We are adapting the flexible isosurface interface to generate\ntransfer functions for volume rendering. These transfer functions\nwould add spatial locality to volume rendering, based on the\nregions corresponding to edges of the simplified contour tree.\nAnother possible direction is to develop more local geometric measure\nfor multilinear interpolants. Lastly, the algorithms we describe\nwork in arbitrary dimensions, but special consideration should be\ngiven to simplification of contour trees for time-varying data.\nAcknowledgements\nAcknowledgements are due to the National Science and Engineering\nResearch Council of Canada (NSERC) for support in the\nform of post-graduate fellowships and research grants, and to\nthe U.S. National Science Foundation (NSF) and the Institute for\nRobotics and Intelligent Systems (IRIS) for research grants. Acknowledgements\nare also due to those who made volumetric data\navailable at volvis.org and other sites.\nReferences\nB\nAJAJ\n, C. L., P\nASCUCCI\n, V.,\nAND\nS\nCHIKORE\n, D. R. 1997. The Contour Spectrum.\nIn Proceedings of IEEE Visualization 1997, 167173.\nB\nENTLEY\n, J. L. 1979. Decomposable searching problems. Inform. Process. Lett. 8,\n244251.\nB\nREMER\n, P.-T., E\nDELSBRUNNER\n, H., H\nAMANN\n, B.,\nAND\nP\nASCUCCI\n, V. 2003. A\nMulti-resolution Data Structure for Two-dimensional Morse-Smale Functions. In\nProceedings of IEEE Visualization 2003, 139146.\nB\nRODLIE\n, K.,\nAND\nW\nOOD\n, J. 2001. Recent advances in volume visualization. Computer\nGraphics Forum 20, 2 (June), 125148.\nC\nARR\n, H.,\nAND\nS\nNOEYINK\n, J. 2003. Path Seeds and Flexible Isosurfaces: Using\nTopology for Exploratory Visualization. In Proceedings of Eurographics Visualization\nSymposium 2003, 4958, 285.\nC\nARR\n, H., M\nOLLER\n, T.,\nAND\nS\nNOEYINK\n, J. 2001. Simplicial Subdivisions and\nSampling Artifacts. In Proceedings of IEEE Visualization 2001, 99106.\nC\nARR\n, H., S\nNOEYINK\n, J.,\nAND\nA\nXEN\n, U. 2003. Computing Contour Trees in All\nDimensions. Computational Geometry: Theory and Applications 24, 2, 7594.\nC\nARR\n, H. 2004. Topological Manipulation of Isosurfaces. PhD thesis, University of\nBritish Columbia, Vancouver, BC, Canada.\nC\nHIANG\n, Y.-J.,\nAND\nL\nU\n, X. 2003. Progressive Simplification of Tetrahedral Meshes\nPreserving All Isosurface Topologies. Computer Graphics Forum 22, 3, to appear.\nC\nHIANG\n, Y.-J., L\nENZ\n, T., L\nU\n, X.,\nAND\nR\nOTE\n, G. 2002. Simple and Output-Sensitive\nConstruction of Contour Trees Using Monotone Paths. Tech. Rep. ECG-TR\n-244300-01, Institut f ur Informatik, Freie Universtat Berlin.\nE\nDELSBRUNNER\n, H.,\nAND\nM\nUCKE\n, E. P. 1990. Simulation of Simplicity: A technique\nto cope with degenerate cases in geometric algorithms. ACM Transactions\non Graphics 9, 1, 66104.\nE\nDELSBRUNNER\n, H., L\nETSCHER\n, D.,\nAND\nZ\nOMORODIAN\n, A. 2002. Topological\npersistence and simplification. Discrete Comput. Geom. 28, 511533.\nE\nDELSBRUNNER\n, H., H\nARER\n, J.,\nAND\nZ\nOMORODIAN\n, A. 2003. Hierarchical Morse-Smale\ncomplexes for piecewise linear 2-manifolds. Discrete Comput. Geom. 30,\n87107.\nH\nILAGA\n, M., S\nHINAGAWA\n, Y., K\nOHMURA\n, T.,\nAND\nK\nUNII\n, T. L. 2001. Topology\nmatching for fully automatic similarity estimation of 3d shapes. In SIGGRAPH\n2001, 203212.\nK\nETTNER\n, L., R\nOSSIGNAC\n, J.,\nAND\nS\nNOEYINK\n, J. 2001. 
The Safari Interface for\nVisualizing Time-Dependent Volume Data Using Iso-surfaces and Contour Spectra.\nComputational Geometry: Theory and Applications 25, 1-2, 97116.\nL\nORENSON\n, W. E.,\nAND\nC\nLINE\n, H. E. 1987. Marching Cubes: A High Resolution\n3D Surface Construction Algorithm. Computer Graphics 21, 4, 163169.\nM\nATSUMOTO\n, Y. 2002. An Introduction to Morse Theory. AMS.\nM\nILNOR\n, J. 1963. Morse Theory. Princeton University Press, Princeton, NJ.\nM\nONTANI\n, C., S\nCATENI\n, R.,\nAND\nS\nCOPIGNO\n, R. 1994. A modified look-up table\nfor implicit disambiguation of Marching Cubes. Visual Computer 10, 353355.\nP\nASCUCCI\n, V.,\nAND\nC\nOLE\n-M\nC\nL\nAUGHLIN\n, K. 2002. Efficient Computation of the\nTopology of Level Sets. In Proceedings of IEEE Visualization 2002, 187194.\nP\nASCUCCI\n, V. 2001. On the Topology of the Level Sets of a Scalar Field. In Abstracts\nof the 13th Canadian Conference on Computational Geometry, 141144.\nR\nEEB\n, G.\n1946.\nSur les points singuliers d'une forme de Pfaff compl`etement\nintegrable ou d'une fonction numerique. Comptes Rendus de l'Acad\n`emie des Sciences\nde Paris 222, 847849.\nT\nAKAHASHI\n, S., F\nUJISHIRO\n, I.,\nAND\nT\nAKESHIMA\n, Y. 2004. Topological volume\nskeletonization and its application to transfer function design. Graphical Models\n66, 1, 2449.\nT\nAKAHASHI\n, S., N\nIELSON\n, G. M., T\nAKESHIMA\n, Y.,\nAND\nF\nUJISHIRO\n, I. 2004.\nTopological Volume Skeletonization Using Adaptive Tetrahedralization. In Geometric\nModelling and Processing 2004.\nT\nARASOV\n, S. P.,\nAND\nV\nYALYI\n, M. N. 1998. Construction of Contour Trees in 3D\nin O\n(nlogn) steps. In Proceedings of the 14th ACM Symposium on Computational\nGeometry, 6875.\nVAN\nK\nREVELD\n, M.,\nVAN\nO\nOSTRUM\n, R., B\nAJAJ\n, C. L., P\nASCUCCI\n, V.,\nAND\nS\nCHIKORE\n, D. R. 1997. Contour Trees and Small Seed Sets for Isosurface Traver-sal\n. In Proceedings of the 13th ACM Symposium on Computational Geometry,\n212220.\n504", "keywords": "Isosurfaces;topological simplification;contour trees"} {"name": "18", "title": "A Resilient Packt-Forwarding Scheme against Maliciously Packet-Dropping Nodes in Sensor Networks", "abstract": "This paper focuses on defending against compromised nodes' dropping of legitimate reports and investigates the misbehavior of a maliciously packet-dropping node in sensor networks . We present a resilient packet-forwarding scheme using Neighbor Watch System (NWS), specifically designed for hop-by-hop reliable delivery in face of malicious nodes that drop relaying packets, as well as faulty nodes that fail to relay packets. Unlike previous work with multipath data forwarding, our scheme basically employs single-path data forwarding, which consumes less power than multipath schemes. As the packet is forwarded along the single-path toward the base station, our scheme, however, converts into multipath data forwarding at the location where NWS detects relaying nodes' misbehavior. 
Simulation experiments show that, with the help of NWS, our forwarding scheme achieves a high success ratio in face of a large number of packet-dropping nodes, and effectively adjusts its forwarding style, depending on the number of packet-dropping nodes en-route to the base station.", "fulltext": "INTRODUCTION\nWireless sensor networks consist of hundreds or even thousands of small devices each with sensing, processing, and communicating capabilities to monitor the real-world environment. They are envisioned to play an important role in a wide variety of areas ranging from critical military-surveillance applications to forest fire monitoring and the building security monitoring in the near future. In such a network, a large number of sensor nodes are distributed to monitor a vast field where the operational conditions are harsh or even hostile. To operate in such environments, security is an important aspect for sensor networks and security mechanisms should be provided against various attacks such as node capture, physical tampering, eavesdropping, denial of service, etc [23, 33, 38].\nPrevious research efforts against outsider attacks in key-management schemes [4, 13, 32] and secure node-to-node communication mechanisms [24, 32] in sensor networks are well-defined. Those security protections, however, break down when even a single legitimate node is compromised. It turns out to be relatively easy to compromise a legitimate node [14], which is to extract all the security information from the captured node and to make malicious code running for the attacker's purpose.\nEven a small number of compromised nodes can pose severe security threats on the entire part of the network, launching several attacks such as dropping legitimate reports, injecting bogus sensing reports, advertising inconsistent routing information, eavesdropping in-network communication using exposed keys, etc. Such disruption by the insider attacks can be devastating unless proper security countermeasures against each type of attacks are provided.\nIn reality, detecting all of the compromised nodes in the network is not always possible, so we should pursue graceful degradation [35], with a small number of compromised nodes. The fundamental principle for defense against the insider attacks is to restrict the security impact of a node compromise as close to the vicinity of the compromised node as possible.\nWhen the attacker compromises a legitimate node, it may first try to replicate the captured node indefinitely with the same ID and spread them over the network. Against such attacks, a distributed detection mechanism (based on emergent properties [11]) has been proposed by Parno et al. [31]. In addition, Newsome et al.
[30] have presented the techniques\nthat prevent the adversary from arbitrarily creating\nnew IDs for nodes.\nUsing cryptographic information obtained from a captured\nnode, attackers can establish pairwise keys with any\nlegitimate nodes in order to eavesdrop communication any-59\nwhere in the network. Localized key-establishment scheme\nby Zhu et al. [46] is a good solution against such an insider\nattack. Since the scheme does not allow a cloned node\n(by inside-attackers) to establish pairwise keys with any legitimate\nnodes except the neighbors of the compromised\nnodes, the cryptographic keys extracted from the compromised\nnode are of no use for attackers.\nCompromised nodes can also inject false sensing reports\nto the network (i.e. report fabrication attacks [39]), which\ncauses false alarms at the base station or the aggregation\nresult to far deviate from the true measurement. Proposed\nen-route filtering mechanisms [8, 39, 41, 44, 47] that detect\nand drop such false reports effectively limit the impact\nof this type of attacks. Also, proposed secure aggregation\nprotocols [34, 40] have addressed the problem of false data\ninjection, and they ensure that the aggregated result is a\ngood approximation to the true value in the presence of a\nsmall number of compromised nodes.\nAdvertising inconsistent routing information by compromised\nnodes can disrupt the whole network topology. Hu et\nal. [19, 20] have proposed SEAD, a secure ad-hoc network\nrouting protocol that uses efficient one-way hash functions\nto prevent any inside attackers from injecting inconsistent\nroute updates. A few secure routing protocols [6, 27] in sensor\nnetworks have been proposed to detect and exclude the\ncompromised nodes injecting inconsistent route updates.\nCompromised nodes also can silently drop legitimate reports\n(i.e. selective forwarding attacks [23]), instead of forwarding\nthem to the next-hop toward the base station. Since\ndata reports are delivered over multihop wireless paths to\nthe base station, even a small number of strategically-placed\npacket-dropping nodes can deteriorate the network throughput\nsignificantly. In order to bypass such nodes, most work\non secure routing and reliable delivery in sensor networks relies\non multipath forwarding scheme [5, 6, 7, 10], or interleaved-mesh\nforwarding scheme [26, 29, 39, 42].\nAmong the insider attacks described above, this paper focuses\non defense against compromised nodes' dropping of legitimate\nreports and we present a resilient packet-forwarding\nscheme using Neighbor Watch System (NWS) against maliciously\npacket-dropping nodes in sensor networks. We investigate\nthe misbehavior of a maliciously packet-dropping\nnode and show that an acknowledgement (ACK) that its\npackets were correctly received at the next-hop node does\nnot guarantee reliable delivery from the security perspective.\nNWS is specifically designed for hop-by-hop reliable delivery\nin face of malicious nodes that drop relaying packets,\nas well as faulty nodes that fail to relay packets. Unlike previous\nwork [10, 29, 42] with multipath data forwarding, our\nscheme basically employs single-path data forwarding, which\nconsumes less power than multipath schemes. 
As the packet\nis forwarded along the single-path toward the base station,\nour scheme, however, converts into multipath data forwarding\nat the location where NWS detects relaying nodes' misbehavior\n.\nNWS exploits the dense deployment of large-scale static\nsensor networks and the broadcast nature of communication\npattern to overhear neighbors' communication for free.\nThe contribution of this paper is two-fold. First, we investigate\nthe misbehavior of a maliciously packet-dropping\nnode and propose a resilient packet-forwarding scheme, which\nbasically employs single-path data forwarding, in face of\nsuch nodes, as well as faulty nodes. Second, our scheme\ncan work with any existing routing protocols. Since it is\ndesigned not for securing specific protocols but for universal\nprotocols, it can be applied to any existing routing protocols\nas a security complement.\nThe rest of paper is organized as follows. Background is\ngiven in Section 2. We present our resilient packet-forwarding\nscheme in Section 3. An evaluation of the scheme is given\nand discussed in Section 4. We present conclusions and future\nwork in Section 5.\nBACKGROUND\nSensor networks typically comprise one or multiple base\nstations and hundreds or thousands of inexpensive, small,\nstatic, and resource-constrained nodes scattered over a wide\narea.\nAn inexpensive sensor node cannot afford tamper-resistant\npackaging. We assume that a large number of sensor\nnodes are deployed in high density over a vast field, such\nthat the expected degree of a node is high; each sensor has\nmultiple neighbors within its communication range. Sensing\ndata or aggregated data are sent along the multihop route\nto the base station. We assume that each sensor node has\na constant transmission range, and communication links are\nbidirectional.\nOur sensor network model employs a key-establishment\nscheme that extends the one in LEAP [46] where the impact\nof a node compromise is localized in the immediate\nneighborhood of the compromised node, and our scheme is\nbased on it. To evolve from LEAP, we will describe it briefly\nin Section 2.4.\n2.2\nThreat Model\nThe attacks launched from outsiders hardly cause much\ndamage to the network, since the rouge node, which does not\npossesses the legitimate credentials (e.g. the predistributed\nkey ring from the key pool [13]), fails to participate in the\nnetwork. On the other hand, there may be multiple attacks\nfrom insiders (e.g.\ndropping legitimate reports, injecting\nfalse sensing reports, advertising inconsistent route information\n, and eavesdropping in-network communication using\nexposed keys, etc), and the combination of such attacks\ncan lead to disruption of the whole network. Thus, proper\nsecurity countermeasures (specifically designed to protect\nagainst each type of the attacks) should be provided.\nAmong them, in this paper, we focus on defending against\ncompromised nodes' dropping of legitimate reports; Other\nattacks mentioned above are effectively dealt with by several\nproposed schemes as described in the previous section.\nWe consider a packet-dropping node as not merely a faulty\nnode, but also an arbitrarily malicious node. Some previous\nwork [3, 29, 36] on reliable delivery uses an acknowledgement\n(ACK) that its packets were correctly received at the\nnext-hop node, in order to find out unreliable links. 
However\n, in the presence of maliciously packet-dropping nodes,\nsimply receiving ACK from a next-hop node does not guarantee\nthat the packet will be really forwarded by the next-hop\nnode. For example, node u forwards a packet to compromised\nnode v, and node u waits for ACK from node v.\nNode v sends back ACK to node u, and then node v silently\ndrops the packet. This simple example shows that receiving\nACK is not enough for reliable delivery in face of maliciously\npacket-dropping nodes.\n60\nFor more reliability, we should check whether the next-hop\nnode really forwards the relaying packet to its proper\nnext-hop node. Fortunately, due to the broadcast nature of\ncommunication pattern in sensor networks, we can overhear\nneighbors' communication for free (for now per-link encryption\nis ignored). After forwarding a packet to next-hop node\nv and buffering recently-sent packets, by listening in on node\nv's traffic, we can tell whether node v really transmits the\npacket. Watchdog [28] mechanism (extension to DSR [22]),\nimplicit ACK in M\n2\nRC [29], and local monitoring in DICAS\n[25] detect misbehaving nodes in this way. However,\nthis kind of simple overhearing schemes does not guarantee\nreliable delivery, either.\nWith arbitrarily malicious nodes, we should be assured\nthat the node, to which the next-hop node forwards the\nrelaying packet, is really a neighbor of the next-hop node.\nFor example, node u forwards a packet to compromised node\nv, and node u listens in on node v's traffic to compare each\noverheard packet with the packet in the buffer.\nNode v\ntransmits the relaying packet whose intended next-hop id\nmarked with any id in the network such as x that is not a\nneighbor of v. Then node u overhears this packet from node\nv, and considers it forwarded correctly despite the fact that\nnone actually receives the packet. The packet is eventually\ndropped without being detected. We refer to this attack as\nblind letter attack.\nWe consider packet-dropping attacks to be addressed in\nthis paper as ones ranging from the naive case (e.g. a faulty\nnode) to the most malicious one (e.g.\na node launching\nblind letter attack). We focus on developing a solution to\nsuch attacks.\n2.3\nNotation\nWe use the following notation throughout the paper:\nu, v are principals, such as communicating nodes.\nR\nu\nis a random number generated by u.\nf\nK\nis a family of pseudo-random function [12].\nMAC(K, M\n1\n|M\n2\n) denotes the message authentication\ncode (MAC) of message - concatenation of M\n1\nand M\n2\n,\nwith MAC key K.\n2.4\nKey-Establishment Scheme in LEAP\nLEAP supports the establishment of four types of keys for\neach sensor node - an individual key shared with the base\nstation, a pairwise key shared with its neighbor, a cluster\nkey shared with its surrounding neighbors, and a group key\nshared by all the nodes in the networks.\nIt assumes that the time interval T\nest\nfor a newly deployed\nsensor node to complete the neighbor discovery phase (e.g.\ntens of seconds) is smaller than the time interval T\nmin\nthat is\nnecessary for the attacker to compromise a legitimate node\n(i.e. T\nmin\n> T\nest\n). Some existing work [1, 39] has made\nsimilar assumptions, which are believed to be reasonable.\nThe four steps for a newly added node u to establish a\npairwise key with each of its neighbors are as follows:\n1. Key Pre-distribution. Each node u is loaded with\na common initial key K\nI\n, and derives its master key\nK\nu\n= f\nK\nI\n(u).\n2. Neighbor Discovery. 
Once deployed, node u sets\nup a timer to fire after time T\nmin\n, broadcasts its id,\nand waits for each neighbor v's ACK. The ACK from\nv is authenticated using the master key K\nv\nof node v.\nSince node u knows K\nI\n, it can derive K\nv\n= f\nK\nI\n(v).\nu - :\nu, R\nu\n.\nv - u :\nv, M AC(K\nv\n, R\nu\n|v).\n3. Pairwise Key Establishment. Node u computes its\npairwise key with v, K\nuv\n, as K\nuv\n= f\nK\nv\n(u). Node v\nalso computes K\nuv\nin the same way. K\nuv\nserves as\ntheir pairwise key.\n4. Key Erasure. When its timer expires, node u erases\nK\nI\nand all the master keys of its neighbors. Every\nnode, however, keeps its own master key, in order to\nestablish pairwise keys with later-deployed nodes.\nOnce erasing K\nI\n, a node will not be able to establish a\npairwise key with any other nodes that have also erased K\nI\n.\nWithout K\nI\n, a cloned node (by an attacker compromising a\nlegitimate node after T\nmin\n) fails to establish pairwise keys\nwith any nodes except the neighbors of the compromised\nnode. In such a way, LEAP localizes the security impact of\na node compromise.\nA RESILIENT PACKET-FORWARDING SCHEME USING NEIGHBOR WATCH SYSTEM\nIn this section, we present our resilient packet-forwarding\nscheme using Neighbor Watch System (NWS). NWS works\nwith the information provided by Neighbor List Verification\n(NLV) to be described in Section 3.2.\n3.1\nNeighbor Watch System\nOur scheme seeks to achieve hop-by-hop reliable delivery\nin face of maliciously packet-dropping nodes, basically employing\nsingle-path forwarding. To the best of our knowledge\n, proposed works so far rely on multipath forwarding\nor diffusion-based forwarding, exploiting a large number of\nnodes in order to deliver a single packet. ACK-based technique\nis not a proper solution at all as explained in the\nprevious section.\nWith NWS, we can check whether the next-hop node really\nforwards the relaying packet to the actual neighbor of\nthe next-hop node. The basic idea of our scheme is as follows\n:\n1. Neighbor List Verification. After deployment, during\nneighbor discovery phase, every node u gets to\nknow of not only its immediate neighbors, but also the\nneighbors' respective neighbor lists (i.e. u's neighbors'\nneighbor lists). The lists are verified using Neighbor\nList Verification to be described in Section 3.2. Every\nnode stores its neighbors' neighbor lists in the neighbor\ntable.\n2. Packet Forwarding to Next-hop. If node u has\na packet to be relayed, it buffers the packet and forwards\nthe packet (encrypted with cluster key of node\nu so that neighbors of node u can overhear it) to its\nnext-hop node v. As in LEAP, a cluster key is a key\nshared by a node and all its neighbors, for passive participation\n.\n61\nu\nv\n?\nw\ny\nFigure 1:\nNeighbor Watch System.\nSub-watch\nnodes w and y, as well as primary-watch node u listen\nin on v's traffic.\n3. Designation of Watch Nodes.\nOverhearing the\npacket from node u to node v, among neighbors of\nnode u, the nodes that are also neighbors of node v (in\nFigure 1, nodes w and y) are designated as sub-watch\nnodes and store the packet in the buffer. Other nodes\n(that are not neighbors of node v) discard the packet.\nNode u itself is a primary-watch node. A primary-watch\nnode knows which nodes are sub-watch nodes,\nsince every node has the knowledge of not only its\nneighbors but also their respective neighbor lists.\n4. Neighbor Watch by Sub-Watch Node. 
Sub-watch\nnodes w and y listen in on node v's traffic to compare\neach overheard packet with the packet in the buffer.\nTo defend against blind letter attack, each of them\nalso checks whether the packet's intended next-hop is\na verified neighbor of node v, by looking up the neighbor\ntable. If all correct, the packet in the buffer is\nremoved and the role of the sub-watch node is over.\nIf the packet has remained in the buffer for longer\nthan a certain timeout, sub-watch nodes w and y forward\nthe packet (encrypted with their respective cluster\nkeys) to their respective next-hop nodes other than\nnode v. Then the role of a sub-watch node is over (each\nof them is now designated as a primary-watch node for\nthe packet it has forwarded).\n5. Neighbor Watch by Primary-Watch Node. Primary-watch\nnode u does the same job as sub-watch nodes.\nThe only difference, however, is that it listens in on\nnot only node v's traffic, but also sub-watch nodes w's\nand y's. If the packet is correctly forwarded on by at\nleast one of them (nodes v, w, or y), primary-watch\nnode u removes the packet in the buffer and the role\nof the primary-watch node is over.\nOtherwise, after a certain timeout, primary-watch node\nu forwards the packet (encrypted with its cluster key)\nto its next-hop other than node v.\nAs the packet is forwarded on, this procedure (except for\nNeighbor List Verification) of NWS is performed at each\nhop so that hop-by-hop reliable delivery can be achieved\nwith mainly depending on single-path forwarding. On the\nother hand, in the previous approaches [29, 39, 42], when\nforwarding a packet, a node broadcasts the packet with no\ndesignated next-hop, and all neighbors with smaller costs\n1\n1\nThe cost at a node is the minimum energy overhead to\nBase\nStation\nu\nv\n?\n?\nFigure 2:\nAn example of our packet-forwarding\nscheme. Only the nodes that relay the packet are\npresented. With the help of sub-watch nodes (grey\nones), our scheme bypasses two packet-dropping\nnodes en-route to the base station.\nor within a specific geographic region continue forwarding\nthe packet anyway. For example, in Figure 1, if nodes v,\nw, and y have smaller costs than node u in the previous\napproaches, they all forward\n2\nthe packet from node u. In\nour scheme, however, sub-watch nodes w and y are just on\nwatch in designated next-hop node v, instead of uncondi-tionally\nforwarding the packet. If no packet-dropping occurs\nen-route to the base station, the packet may be forwarded\nalong single-path all the way through.\nHowever, a packet-dropping triggers the multipath forwarding\nfor the dropped packet. If the designated next-hop\nnode v in Figure 1 has not forwarded the relaying packet to\nits certified neighbor by a certain timeout, sub-watch nodes\nw and y forward the packet to their respective next-hop.\nAt the point, the packet is sent over multiple paths. Since\nthe location where the packet-dropping occurs is likely in\nan unreliable region, this prompt reaction of the conversion\nto multipath forwarding augments the robustness in our\nscheme. The degree of multipath depends on the number of\nthe sub-watch nodes. Figure 2 shows an example of our\npacket-forwarding scheme, bypassing two packet-dropping\nnodes en-route to the base station. 
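The watch-node behaviour in steps 3-5 above (buffering, overhearing, the blind-letter-attack check against the forwarder's verified neighbor list, and the timeout that converts single-path into multipath forwarding) can be summarised in code. The following Python sketch is only an illustration of that logic under simplifying assumptions; the names (NWSNode, Packet, radio_send) are hypothetical, and cluster-key encryption, routing costs and real hardware timers are abstracted away.

import time
from dataclasses import dataclass

@dataclass
class Packet:
    pkt_id: int
    payload: bytes

def radio_send(pkt, next_hop):
    # stand-in for the link-layer transmission (cluster-key encrypted in the paper,
    # so that all neighbors of the sender can overhear and decrypt it)
    print(f"send pkt {pkt.pkt_id} -> node {next_hop}")

class NWSNode:
    """Watch-node logic of steps 3-5 (primary- and sub-watch roles collapsed into one class)."""
    def __init__(self, node_id, neighbor_table, candidate_next_hops, timeout=2.0):
        self.node_id = node_id
        self.neighbor_table = neighbor_table          # {nbr: set of nbr's verified neighbors}
        self.candidates = list(candidate_next_hops)   # maintained by the underlying routing protocol
        self.timeout = timeout
        self.watch_buffer = {}                        # pkt_id -> (packet, watched_hop, deadline)

    def forward(self, pkt, next_hop):
        # buffer the packet and start watching the designated next hop
        self.watch_buffer[pkt.pkt_id] = (pkt, next_hop, time.time() + self.timeout)
        radio_send(pkt, next_hop)

    def on_overhear(self, pkt, sender, claimed_next_hop):
        entry = self.watch_buffer.get(pkt.pkt_id)
        if entry is None or entry[1] != sender:
            return
        # blind-letter-attack check: the claimed next hop must be a
        # *verified* neighbor of the forwarding node
        if claimed_next_hop in self.neighbor_table.get(sender, set()):
            del self.watch_buffer[pkt.pkt_id]         # relayed correctly; stop watching

    def on_timer(self):
        now = time.time()
        for pkt_id, (pkt, bad_hop, deadline) in list(self.watch_buffer.items()):
            if now >= deadline:
                del self.watch_buffer[pkt_id]
                for alt in self.candidates:
                    if alt != bad_hop:
                        self.forward(pkt, alt)        # single-path forwarding converts to multipath here
                        break

In the paper's terms, forward() plays the primary-watch role for the packets a node sends itself, while on_overhear() and on_timer() capture what both primary- and sub-watch nodes do for packets they only overhear.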
If a node utilizes a cache\n[16, 21] for recently-received packets, it can suppress the\nsame copy of previously-received one within a certain timeout\n, as nodes u and v in Figure 2.\nOur scheme requires that a relaying packet should be encrypted\nwith a cluster key of a forwarding node, in order\nthat all its neighbors can decrypt and overhear it. In fact,\nper-link encryption provides better robustness to a node\ncompromise, since a compromised node can decrypt only\nthe packets addressed to it. Thus, there exists a tradeoff\nbetween resiliency against packet-dropping and robustness\nto a node compromise. However, encryption with a cluster\nkey provides an intermediate level of robustness to a node\ncompromise [24] (a compromised node can overhear only\nits immediate neighborhood), and also supports local broadcast\n(i.e. resiliency against packet-dropping), so that we can\nachieve graceful degradation in face of compromised nodes.\nforward a packet from this node to the base station.\n2\nIt is the broadcast transmission with no designated next-hop\n, and, if needed, the packet should be encrypted with a\ncluster key in order for all neighbors to overhear it.\n62\nTo make our scheme work (against blind letter attack), we\nmust address the problem of how a node proves that it really\nhas the claimed neighbors. It is the identical problem of\nhow a node verifies the existence of its neighbors' neighbors.\nApparently, a node has the knowledge of its direct neighbors\nby neighbor discovery and pairwise key establishment\nphases. However, in the case of two-hop away neighbors,\nas in Figure 1, malicious node v can inform its neighbor u\nthat it also has neighbor node x (any possible id in the network\n) which in fact is not a neighbor of node v. Node u has\nto believe it, since node x is not a direct neighbor of node\nu, and only the node v itself knows its actual surrounding\nneighbors. Then, how do we verify the neighbors' neighbors\n? The answer to this critical question is described in\nthe next subsection.\n3.2\nNeighbor List Verification\nTo verify neighbors' neighbors, we present Neighbor List\nVerification (NLV) which extends the pairwise key establishment\nin LEAP. During neighbor discovery in LEAP, two\nmessages are exchanged between neighbors to identify each\nother. On the other hand, NLV adopts three-way handshaking\nneighbor discovery, in order to identify not only communicating\nparties but also their respective neighbors.\nNLV has two cases of neighbor discovery. One is that\nneighbor discovery between two nodes that are both still\nwithin the initial T\nmin3\n(referred as pure nodes). The other\nis that neighbor discovery between a newly-deployed node\nwithin the initial T\nmin\nand an existing node over the initial\nT\nmin\n(referred as an adult node).\nNeighbor Discovery between Pure Nodes. Neighbor\nlist verification process between pure nodes is quite simple.\nIf a pure node broadcasts its neighbor list before the elapse of\nits initial T\nmin\n, we can accept the list as verifiable. Thus, the\nkey point here is to keep track of each other's T\nmin\n, and to\nmake sure that both broadcast their respective neighbor lists\nbefore their respective T\nmin\n. 
The following shows the three-way\nhandshaking neighbor discovery between pure node u\nand v:\nu - : u, R\nu\n.\nv - u : v, T\nv\n, R\nv\n\nM\nv\n, M AC(K\nv\n, R\nu\n|K\nu\n|M\nv\n).\nu - v : u, T\nu\n\nM\nu\n, M AC(K\nuv\n, R\nv\n|M\nu\n).\nwhere T\nv\nand T\nu\nare the amount of time remaining until\nT\nmin\nof v and T\nmin\nof u, respectively. Once deployed, node\nu sets up a timer to fire after time T\nmin\n. Then, it broadcasts\nits id, and waits for each neighbor v's ACK. The ACK from\nevery neighbor v is authenticated using the master key K\nv\nof\nnode v. Since node u knows K\nI 4\n, it can derive K\nv\n= f\nK\nI\n(v).\nThe ACK from node v contains T\nv\n, the amount of time\nremaining until T\nmin\nof node v. If T\nv\nis a non-zero value,\nnode v claims to be a pure node. K\nu\nin MAC proves node\nv to be a pure node, since pure node v should know K\nI\nand derive K\nu\n= f\nK\nI\n(u). Node u records\nT\nv\n(T\nv\nadded\n3\nT\nmin\nis the time interval, necessary for the attacker to compromise\na legitimate node as in LEAP [46].\n4\nEach node u is loaded with a common initial key K\nI\n, and\nderives its master key K\nu\n= f\nK\nI\n(u). After time T\nmin\n, node\nu erases K\nI\nand all the master keys of its neighbors.\nu\nv\nw\nx\nt\nz\nr\nq\nFigure 3: Neighbor Discovery between Pure node x\nand Adult node u. Grey and white nodes represent\nadult and pure nodes, respectively.\nto the current time of node u) in the entry for node v in\nthe neighbor table. Node u computes its pairwise key with\nv, K\nuv\n= f\nK\nv\n(u).\n5\nNode u also generates M AC(K\nv\n, v|u)\n(which means that v certifies u as an immediate neighbor),\nand stores it as a certificate.\nThe ACK from node u also contains T\nu\n, the amount of\ntime remaining until T\nmin\nof u. This ACK is authenticated\nusing their pairwise key K\nuv\n, which proves node u a pure\nnode and u's identity. Node v then records\nT\nu\n(T\nu\nadded\nto the current time of v) in the entry for u in the neighbor\ntable. It also generates M AC(K\nu\n, u|v) and stores it as a\ncertificate. Then, the three-way handshaking is done.\nEvery pure node u broadcasts its neighbor list just prior\nto T\nmin\nof u. Each receiving neighbor v checks whether the\nreceiving time at v is prior to\nT\nu\nin the neighbor table. If\nyes, the neighbor list of u is now certified by each neighbor v.\nNeighbor Discovery between A Pure Node and An\nAdult node. After most nodes have completed bootstrapping\nphase, new nodes can be added in the network. Consider\nFigure 3. The issue here is how adult node u can assure\nits existing neighbors (v and w) of the existence of its\nnewly-added neighbor x. This is a different situation from\nthe above neighbor list verification case between two pure\nnodes. Thus, the messages exchanged during the three-way\nhandshaking are somewhat different in this case. The following\nshows the three-way handshaking neighbor discovery\nbetween pure node x and adult node u:\nx- :\nx, R\nx\n.\nu- x :\nu, T\nu\n, R\nu\n, v,\ncertif icate\n\n\nM AC(K\nv\n, v|u), w,\ncertif icate\n\n\nM AC(K\nw\n, w|u)\n\nM\nu\n, M AC(K\nu\n, R\nx\n|M\nu\n).\nx- u :\nx, T\nx\n,\ncertif icate\n\n\nM AC(K\nx\n, x|u), v,\none-time cert.\n\n\nM AC(K\nv\n, x|u), w,\none-time cert.\n\n\nM AC(K\nw\n, x|u)\n\nM\nx\n, M AC(K\nxu\n, R\nu\n|M\nx\n).\nNewly-added node x sets up a timer to fire after time T\nmin\n.\nThen, it broadcasts its id, and waits for each neighbor u's\n5\nNode v also computes K\nuv\nin the same way. K\nuv\nserves as\ntheir pairwise key.\n63\nACK. 
The ACK from every neighbor u is authenticated using\nthe master key K\nu\nof node u. Since node x knows K\nI\n,\nit can derive K\nu\n= f\nK\nI\n(u). The ACK from node u contains\nT\nu\n, the amount of time remaining until T\nmin\nof u. If T\nu\nis\nzero, node u is an adult node that may already have multiple\nneighbors as in Figure 3. Node u reports its certified\nneighbor list (v and w) to x by including their respective\ncertificates in the ACK. Node x verifies u's neighbor list by\nexamining each certificate, since x can generate any certificate\nwith K\nI\n. If all correct, x computes its pairwise key with\nu, K\nxu\n= f\nK\nu\n(x). Node x also generates M AC(K\nu\n, u|x) and\nstores it as a certificate.\nThe ACK from x also contains T\nx\n, the amount of time\nremaining until T\nmin\nof x. This ACK is authenticated using\ntheir pairwise key K\nxu\n, which proves node x a pure node\nand x's identity. Node u then records\nT\nx\n(T\nx\nadded to the\ncurrent time of u) in the entry for x in the neighbor table.\nSince adult node u cannot generate M AC(K\nx\n, x|u) by itself,\npure node x provides the certificate for u in the ACK. Node\nx also provides one-time certificates\n6\nfor each of u's certified\nneighbors (v and w). Then, the three-way handshaking is\ndone.\nAfter that, adult node u broadcasts one-time certificates\n(from newly-discovered pure node x), in order to assure u's\nexisting neighbors (v and w) of the discovery of new neighbor\nx. The packet containing one-time certificates is as follows:\nu- :\nu, x, v,\none-time cert.\n\n\nM AC(K\nv\n, x|u), w,\none-time cert.\n\n\nM AC(K\nw\n, x|u), K\nA\nu\n\nM\nu\n, M AC(K\nc\nu\n, M\nu\n).\nwhere x is a new neighbor of u, K\nA\nu\nis a local broadcast authentication\nkey in u's one-way key chain, K\nc\nu\nis the cluster\nkey of u. Each receiving neighbor v of u verifies u's new\nneighbor x by examining the one-time certificate designated\nfor v, M AC(K\nv\n, x|u)\n6\n. If ok, node x is now certified by each\nneighbor v of u. Then, one-time certificates can be erased,\nsince they are of no use any more.\nBroadcast authentication only with symmetric keys such\nas cluster key K\nc\nu\nfails to prevent an impersonation attack,\nsince every neighbor of u shares the cluster key of u. Thus,\nwe employ the reverse disclosure of one-way key chain K\nA\nu\nas in LEAP.\nJust prior to T\nmin\nof x, pure node x broadcasts its neighbor\nlist. Each receiving neighbor u of x checks whether the\nreceiving time at u is prior to\nT\nx\nin the neighbor table. If\nyes, the neighbor list of x is now certified by each neighbor u.\nIn summary, through the proposed three-way handshaking\nneighbor discovery process, pure node u identifies each\nimmediate neighbor v and v's certified neighbor list (if v is\nan adult node), and keeps track of T\nmin\nof v. Just prior\nto T\nmin\nof u, node u broadcasts its direct neighbor list so\nthat every neighbor of u accepts the list as verifiable. Then,\nnode u becomes an adult node. After that, if newly-added\nnode x initiates neighbor discovery with adult node u, node\nu identifies pure node x, keeps track of T\nmin\nof x, provides\nu's certified neighbor list to x, and, in return, takes one-time\ncertificates from x. Node u then broadcasts these one-time\n6\nOne-time certificate, for instance M AC(K\nv\n, x|u), assures\nv that x is an immediate neighbor of u. 
It is generated by\npure node x with master key of v.\nTable 1: An example of the Neighbor Table of u.\nNeighbor ID\nCertificate\nVerified Neighbor List\nv\nM AC(K\nv\n, v|u)\nu, w, t\nw\nM AC(K\nw\n, w|u)\nu, v, z\nx\nM AC(K\nx\n, x|u)\nu, r, q\ncertificates, in order to assure u's existing neighbors of the\ndiscovery of new neighbor x. Thus, every time adult node u\ndiscovers newly-added node x through three-way handshaking\n, node u informs (by broadcasting) its existing neighbors\nof the discovery of new neighbor x. Also, whenever receiving\nneighbor list information from pure neighbor x, node u\nchecks whether the receiving time at u is prior to\nT\nx\nin the\nneighbor table. If yes, u now accepts the neighbor list of x\nas verifiable.\nThrough the above neighbor list verification in the bootstrapping\nphase, every node gets the knowledge of its neighbors'\ncertified neighbors. Our Neighbor Watch System makes\nuse of this information to prevent blind letter attack. With\nthis knowledge, watch nodes are able to check whether the\nrelaying packet's intended next-hop is a verified neighbor of\nthe forwarding node.\n3.3\nNeighbor Table Maintenance\nThe information obtained through neighbor list verification\n(e.g. its direct neighbors, corresponding certificates,\nneighbors' neighbor lists, etc) is stored in the neighbor table\nof each node. Table 1 shows an example of the neighbor\ntable of node u. In densely-deployed sensor networks, the\nexpected degree of a node is high. However, in this example,\nfor simplicity, node u has only three neighbors v, w, and x\nas in Figure 3.\nThe entries in the neighbor table are accessed and maintained\nwith immediate neighbor IDs. For example, if node\nu overhears the packet sent from w to v, node u begins to\nlisten in on v's traffic as a sub-watch node (since the neighbor\ntable of u has both v's and w's entries in it). Unless v\nforwards the packet to a node of the Verified Neighbor List\nin v's entry by a certain timeout, sub-watch node u will forward\nthe packet to its next-hop other than v; many existing\nrouting protocols [5, 18, 21, 27, 37, 43] enable each node to\nmaintain multiple potential next-hop. Once forwarding the\npacket, sub-watch node u becomes a primary-watch node\nand begins to listen in on its next-hop's traffic as described\nabove.\nIf newly-added node y initiates the three-way handshaking\nwith u, node u provides its neighbor list to y by sending\ncertificates in the neighbor table. Node u, in return from\nnode y, takes the certificate for y and one-time certificates\nfor u's existing neighbors. Then, node u stores the certificate\nin the new entry for y. However, node u does not store the\none-time certificates but broadcasts them to its neighbors.\nIf new neighbor y broadcasts its neighbor list within T\nmin\n,\nnode u stores the list in the entry for y.\nIf node u is compromised, not only cryptographic key\ninformation but also certificates in the neighbor table are\nexposed. However, the attacker cannot misuse these certificates\nfor other purposes. Since a certificate only attests\nneighborship between two specific nodes, it cannot be applied\nto any other nodes. In fact, it can be made even public.\nHowever, colluding nodes can deceive a pure node anyway,\n64\nby fabricating a bogus certificate. 
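To make the key and certificate notation used above concrete, here is a minimal Python sketch that instantiates the pseudo-random function family f and the MAC with HMAC-SHA256. The key material, identifier encoding and helper names are illustrative assumptions only, not the actual LEAP or NLV implementation.

import hmac, hashlib

def prf(key: bytes, data: bytes) -> bytes:
    """f_K(data): pseudo-random function family, instantiated here with HMAC-SHA256."""
    return hmac.new(key, data, hashlib.sha256).digest()

mac = prf  # MAC(K, M) is written the same way in this sketch

K_I = b"common-initial-key"  # pre-loaded initial key; erased after T_min

def master_key(node_id: str) -> bytes:
    # K_u = f_{K_I}(u)
    return prf(K_I, node_id.encode())

def pairwise_key(u: str, v: str) -> bytes:
    # K_uv = f_{K_v}(u); both u and v can compute it during neighbor discovery
    return prf(master_key(v), u.encode())

def certificate(v: str, u: str) -> bytes:
    # MAC(K_v, v|u): "v certifies u as an immediate neighbor"
    return mac(master_key(v), f"{v}|{u}".encode())

def one_time_certificate(v: str, x: str, u: str) -> bytes:
    # MAC(K_v, x|u): generated by pure node x (which still holds K_I and hence K_v)
    # to assure v that x is an immediate neighbor of u
    return mac(master_key(v), f"{x}|{u}".encode())

if __name__ == "__main__":
    # u stores MAC(K_v, v|u) as proof that v certified it; any node that can derive K_v
    # (e.g. a pure node still holding K_I) can recompute and verify it.
    cert = certificate("v", "u")
    assert cert == mac(master_key("v"), b"v|u")
    print("pairwise key K_uv:", pairwise_key("u", "v").hex()[:16], "...")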
We will describe this limitation\nin Section 4.4.\nEVALUATION\nIn this section, we evaluate the communication and storage\ncost, and analyze the security of our resilient forwarding\nscheme (Neighbor Watch System) as well as Neighbor List\nVerification. We then present the simulation results of our\nforwarding scheme.\n4.1\nCommunication Cost\nUnlike the previously proposed diffusion-based reliable-forwarding\nschemes [21, 29, 39, 42] that exploit a large number\nof nodes to deliver a single packet, our scheme requires\nonly the designated next-hop node to relay the packet, under\nthe supervision of watch nodes. We note that, like overhearing\nby watch nodes in our scheme, those diffusion-based\nschemes require each node to listen to all its neighbors, since\nthey forward a packet by broadcasting with no designated\nnext-hop. With a smaller number of relaying nodes, our\nscheme makes a report successfully reach the base station.\nThus, the average communication cost of our forwarding\nscheme for delivery of a single packet is smaller than those\nof the previous schemes.\nOur neighbor list verification during the bootstrapping\nphase requires the three-way handshaking neighbor discovery\n. Unlike the neighbor discovery between two pure nodes,\nthe size of the messages exchanged between a pure and an\nadult node varies with the degree of the adult node. A large\nnumber of certificates caused by the high degree can be overburdensome\nto a single TinyOS packet which provides 29\nbytes for data. Considering 8-byte certificates and a 4-byte\n7\nmessage authentication code (MAC), the adult node is able\nto include at most two neighbors' information in a single\nTinyOS packet. Thus, when the entire neighbor list cannot\nbe accommodated within a single packet, the node should\nallot the list to several packets and send them serially. In a\nnetwork of size N with the expected degree d of each node,\nthe average number of packets invoked by a newly-added\nnode per each node is nearly (d - 1)\n2\n/2(N - 1).\nTherefore, as node density d grows, the total number\nof packets transmitted from adult nodes to a newly-added\nnode increases. However, neighbor discovery between a pure\nand an adult node occurs much less than between two pure\nnodes, since most neighbor discoveries throughout the network\nare between two pure nodes in the early stage of the\nnetwork. Neighbor discovery between a pure and an adult\nnode occurs generally when a new node is added to the network\n.\n4.2\nStorage Overhead\nIn LEAP, each node keeps four types of keys and a manageable\nlength of hash chain, which is found to be scalable.\nIn our scheme, each node needs to additionally store its direct\nneighbors' certificates and their respective neighbor lists\nas in Table 1. Thus, for a network of the expected degree\nd and the byte size l of node ID, the additional storage requirement\nfor each node is d (8 + ld) bytes.\nAlthough our storage requirement for these neighbor lists\nis O(d\n2\n), for a reasonable degree d, memory overhead does\n7\n4-byte MAC is found to be not detrimental in sensor networks\nas in TinySec [24] which employs 4-byte MAC.\nu\nv\n?\nu\nv\n?\nC\n1\nC\n2\nFigure 4: Examples of critical area C\n1\nand C\n2\n.\nnot exceed 1 KB (a Berkeley MICA2 Mote with 128 KB\nflash memory and 4 KB SRAM). 
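As a quick numeric check of the two cost expressions just given (per-node storage of d(8 + ld) bytes, and roughly (d - 1)^2 / (2(N - 1)) packets invoked per existing node by one newly added node), the short sketch below simply evaluates them for the parameter values used in the text.

def storage_bytes(d: int, l: int) -> int:
    # d neighbors, an 8-byte certificate each, plus d neighbor lists of d ids of l bytes
    return d * (8 + l * d)

def packets_per_node(d: int, N: int) -> float:
    # average number of packets invoked by a newly-added node, per existing node
    return (d - 1) ** 2 / (2 * (N - 1))

if __name__ == "__main__":
    print(storage_bytes(20, 2))              # 960  -- matches the example that follows
    print(storage_bytes(30, 2))              # 2040 -- the Bloom-filter example below
    print(round(packets_per_node(20, 600), 4))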
For example, when d = 20\nand l = 2, a node needs 960 bytes of memory to store such\ninformation.\nIf node density of a network is so high that the required\nspace for those neighbor lists significantly increases and the\nstorage utilization becomes an issue, we can employ a storage-reduction\ntechnique such as Bloom filter [2]. For example,\nwhen d = 30 and l = 2, a node requires 2,040 bytes of additional\nspace mainly for the neighbor lists. Instead of storing\nneighbors' neighbor lists, applying each of the neighbor lists\n(480 bits) to a Bloom filter (of 5 hash functions mapping to\na 256 bit vector), a node needs the reduced space of 1,200\nbytes for such information (with the false positive probability\n= 0.02).\n4.3\nResilience to Packet-Dropping Attacks\nIn face of maliciously packet-dropping nodes, the higher\ndegree of multipath we provide, the more resiliency our\nscheme achieves against such attacks. The average degree\nof multipath depends on the number of sub-watch nodes\naround a packet-dropping node. Sub-watch nodes should\nbe located in the region within the communication range of\nboth forwarding node u and designated next-hop v. We refer\nto such a region as critical area. As in Figure 4, if nodes\nu and v are located farther away, the size of critical area C\n2\ngets smaller than that of C\n1\n, and the probability (p\nc\n) that\nat least one sub-watch node exists in the critical area goes\ndown. The probability (p\nc\n) is\np\nc\n= 1\n- (1 - c)\nd-1\n,\nwhere c is the ratio of the critical area size to the node's communication\nrange, and the expected degree d of the node.\nTo determine the appropriate degree d, we set the smallest\ncritical area C\n2\nin Figure 4 as a lower bound case (c = 0.4).\nFigure 5 shows that, even in the lower bound critical area,\nwith d = 6 and d = 10, probability p\nc\nis above 0.9 and above\n0.99, respectively.\nSince, in a network of degree d, the probability that there\nexist m sub-watch nodes in the critical area of the ratio c is\np(m) =\nd - 1\nm\nc\nm\n(1\n- c)\nd-m-1\n,\nthe expected number of sub-watch nodes, m, in the critical\narea is given by\nE[m] = (d - 1)c.\nThus, in the lower bound (c = 0.4) critical area, when d =\n10, 15, 20, the number of sub-watch nodes (i.e. the degree\nof multipath) is 3.6, 5.6, 7.6 on average, respectively. This\n65\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n1\n5\n10\n15\n20\nDegree of a node\nP\nr\nob\nab\ni\nl\ni\nt\ny\n\np\nc\nFigure 5:\nProbability (p\nc\n) that at least one sub-watch\nnode exists in the lower bound (c = 0.4) critical\narea.\nshows that the higher degree of each node has, our scheme\nhas the higher degree of multipath and resiliency against\npacket-dropping nodes.\n4.4\nThe Security of Neighbor List Verification\nOur Neighbor List Verification(NLV) keeps the nice properties\nof LEAP. Adult nodes fail to establish pairwise keys\nwith any adult nodes in arbitrary locations, so that the impact\nof a node compromise is localized. NLV performs the\nthree-way handshaking neighbor discovery, instead of two-message\nexchange in LEAP. The three-way handshaking enables\neach node to verify not only its direct neighbors but\nalso their respective neighbor lists.\nMoreover, this this three-way handshaking can be a potential\nsolution to deal with irregularity of radio range [15,\n37, 45]. In reality, due to the noise and some environmen-tal\nfactors, radio range of each node is not exactly circular\n. 
So, communication links among nodes are asymmetric;\nnode u can hear node v which is unable to hear u. With\ntwo-message exchange, only the node initiating the neighbor\ndiscovery is assured of the link's bidirectionality. By the\nthree-way handshaking, both of neighbors can be assured of\ntheir symmetric connectivity.\nWith NLV, only the verified lists are stored and utilized\nfor our packet-forwarding scheme. NLV verifies the neighbor\nlist of an adult node with certificates.\nThese certificates\nmerely attest neighborship between two specific nodes. Even\nif a node is compromised, the attacker fails to abuse the\ncertificates of the captured node for other purpose.\nHowever, collusion among compromised nodes can fabricate\nbogus certificates in order to deceive a newly-added\nnode. For example, consider two colluding nodes u and v at\nthe different locations. When compromised node u discovers\nnewly-added node x, node u provides x with u's neighbor\nlist (maliciously including v in it). Even though node v is\nnot an actual neighbor of u, colluding node v can generate\nthe bogus certificate for u, M AC(K\nv\n, v|u). Then, x falsely\nbelieves that v is a direct neighbor of u. This attack, however\n, affects only the one newly-added node x. Thus, when\ncompromised node u tries to launch the blind letter attack\n8\n,\n8\nCompromised node u transmits the relaying packet with its\nother surrounding adult neighbors of u can still detect it\nanyway.\nThe more serious case is that colluding nodes exploit a\nnewly-added node to generate bogus one-time certificates.\nFor example, consider two colluding nodes u and v that\nshare all their secret information as well as all their certificates\n. When newly-added node x initiates the three-way\nhandshaking with u, compromised node u pretends to be\nv and provides x with v' neighbor list. Then, x in return\nprovides u with one-time certificates for each neighbor of\nv; these one-time certificates falsely attest that v has new\nneighbor x. Node u sends this information to v over the\ncovert channel. Then, v broadcasts these one-time certificates\n, and neighbors of v falsely believe that x is a direct\nneighbor of v.\nUnfortunately, we do not provide a proper countermeasure\nto defend against this type of man-in-the-middle attacks\n. However, we point out that this type of attacks has\nto be launched in the passive manner. The adversary has\nto get the chance of discovery of a newly-added node. In\nother words, compromised nodes wait for the initiation of\nthe three-way handshaking from a newly-added node. Since\nthe attacker does not know where the new nodes will be\nadded, it has to compromise a sufficient number of legitimate\nnodes in order to increase the probability of discovery\nof newly-added nodes.\nAs an active defense against such man-in-the-middle attacks\n, we can apply a node replication detection mechanism\nsuch as Randomized or Line-Selected Multicast [31], which\nrevokes the same ID node at the different location claims.\nTo successfully launch such man-in-the-middle attacks, two\ncolluding nodes should pretend to be each other so that each\nof them claims to be at two different locations with the same\nID. Location-binding key-assignment scheme by Yang et al.\n[39] with a little modification also can be a good solution\nto such attacks. Since it binds secret keys with nodes' geographic\nlocations, the key bound to the particular location\ncannot be used at any arbitrary locations. 
Adopting this,\nNLV can check whether the claimed neighbors are really located\nwithin geographically two hops away.\n4.5\nSimulations\nTo further evaluate the performance of our resilient forwarding\nscheme, we run simulations of our scheme in the\npresence of packet-dropping nodes on a network simulator,\nns-2 [9].\n4.5.1\nSimulation Model\nIn our simulations, we deploy N sensor nodes uniformly at\nrandom within 500\n500m\n2\ntarget field, with N = 300 and\n600. Each sensor node has a constant transmission range of\n30m, so that the degree of each node is approximately 10\n(N = 300) and 20 (N = 600) on average. We position a base\nstation and a source node in opposite corners of the field, at\na fixed point (50, 50) and (450, 450), respectively. They are\nlocated approximately 18 hops away from each other.\nWe distribute compromised nodes over an inner square\narea with 200m each side (from 150m to 350m of each side\nof the 500\n500m\n2\ntarget area). Thus, compromised nodes\nare strategically-placed in between the base station and the\nsource node. In the simulations, those compromised nodes\ndrop all the relaying packets.\nnext-hop id as v, so that x considers it forwarded correctly.\n66\n( 300 nodes )\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\n50\nNumber of Packet-dropping Nodes\nS\nu\ncc\ness R\na\nt\ni\no\nSingle Path Forwarding\nwith NWS\n(a) Success ratio (N = 300, x = 0 50)\n( 600 nodes )\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n0\n10\n20\n30\n40\n50\n60\n70\n80\n90\n100\nNumber of Packet-dropping Nodes\nS\nu\ncc\ness R\na\nt\ni\no\nSingle Path Forwarding\nwith NWS\n(b) Success ratio (N = 600, x = 0 100)\n( 300 nodes )\n0\n10\n20\n30\n40\n50\n60\n70\n80\n90\n100\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\n50\nNumber of Packet-Dropping Nodes\nN\nu\nm\nb\ne\nr\nof\nR\ne\nl\na\nyi\nn\ng\nN\no\nd\ne\ns\n.\nSingle Path Forwarding\nwith NWS\n(c) The number of relaying nodes with N = 300\n( 600 nodes )\n0\n10\n20\n30\n40\n50\n60\n70\n80\n90\n100\n0\n10\n20\n30\n40\n50\n60\n70\n80\n90\n100\nNumber of Packet-dropping Nodes\nN\nu\nm\nb\ne\nr\nof\nR\ne\nl\na\nyi\nn\ng\nN\no\nd\ne\ns\n.\nSingle Path Forwarding\nwith NWS\n(d) The number of relaying nodes with N = 600\nFigure 6: Simulation Results (averaged over 100 runs).\nWe use the typical TinyOS beaconing [17] with a little\nmodification as a base routing protocol in our simulations.\nWe add a hop count value in a beacon message\n9\n. To have\nmultiple potential next-hops, when receiving a beacon with\nthe same or better hop count than the parent node's, each\nnode marks the node sending the beacon as a potential next-hop\n.\nEach simulation experiment is conducted using 100 different\nnetwork topologies, and each result is averaged over 100\nruns of different network topologies.\n4.5.2\nSimulation Results\nIn the presence of compromised node dropping all the relaying\npackets, we measure the success ratio (i.e. the percentage\nof the packets that successfully reach the base station\nfrom the source) and the number of relaying nodes by\nthe primitive single-path forwarding and with NWS in a\nnetwork of size N, with N = 300 and 600.\n9\nThe base station initiates the beacon-broadcasting, which\nfloods through the network, in order to set up a routing tree.\nFigure 6(a) shows the success ratio in face of x packet-dropping\nnodes (varying x=0 to 50) in a 300-sensor-node\nnetwork with the approximate degree d = 10. 
Although\nthe success ratio gently decreases with x, it keeps up above\n0.8 even with x = 30, with the help of NWS. This tendency\nof decreasing success ratio can be attributed to the\ndegree d = 10 (3.6 sub-watch nodes on average) as well as\nan increasing number of packet-dropping nodes. Due to the\nstrategically-placement of compromised nodes in our simulations\n, as x increases on, it is likely that a forwarding\nnode's all potential sub-watch nodes themselves are packet-dropping\nnodes.\nFigure 6(c) shows the number of nodes\nthat relay the packet from the source to the base station\nin the same experiments. Since the source is located about\n18 hops away from the base station, the number of relaying\nnodes only with the single-path forwarding remains at 18.\nWith NWS, the number of relaying nodes increases with x,\nin order to bypass an increasing number of packet-dropping\nnodes. In face of such nodes, our scheme converts single-path\nforwarding into multipath data forwarding, with the\n67\nhelp of sub-watch nodes around such packet-dropping nodes.\nUtilizing a cache for recently-received packets can suppress\nthe same copy within a certain timeout, which reduces the\nnumber of relaying nodes.\nFigure 6(b) shows the success ratio in a 600-sensor-node\nnetwork with the approximate degree d = 20 with x packet-dropping\nnodes (varying x=0 to 100). Unlike that with N =\n300, the success ratio stays constantly at around 0.99 even\nwith x = 100, with the help of NWS. This tendency of high\nsuccess ratio can be mainly attributed to the degree d = 20\n(7.6 sub-watch nodes on average in the lower bound case),\nwhich is found to be high enough to bypass a large number\nof packet-dropping nodes. Figure 6(d) shows the number\nof relaying nodes from the source to the base station in the\nsame experiments. With NWS, the increase in the number\nof relaying nodes with x is more conspicuous than that with\nN = 300, since more than twice as many as sub-watch nodes\nhelp forward the packets so that it can bypass a large number\nof packet-dropping nodes anyway.\nIn the simulation results, we note that our forwarding\nscheme dynamically adjusts its forwarding style, depending\non the number of packet-dropping nodes en-route to the base\nstation. As in Figures 6(c) and 6(d), while there exist none\nor a small number of packet-dropping nodes on the way, our\nscheme works almost like the single-path forwarding with\nthe help of a few additional relaying nodes. On the other\nhand, when confronting a large number of packet-dropping\nnodes, our scheme makes full use of the help from additional\nrelaying nodes, in order to successfully deliver the packet to\nthe base station at any cost to the best efforts.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we focus on defending against compromised\nnodes' dropping of legitimate reports. We have presented\na resilient packet-forwarding scheme using Neighbor Watch\nSystem (NWS) against maliciously packet-dropping nodes in\nsensor networks. In face of such nodes, NWS is specifically\ndesigned for hop-by-hop reliable delivery, and the prompt\nreaction of the conversion from single-path to multipath forwarding\naugments the robustness in our scheme so that the\npacket successfully reach the base station.\nIn future work, we plan on further improving NLV to defend\nagainst the man-in-the-middle attacks, collusion among\ncompromised nodes. Such attacks can be prevented by using\na master key derived with not only a node ID but also its\ngeographic information. 
We will also seek to address O(d\n2\n)\nstorage requirement for the neighbors' neighbor lists. Finally\n, we would like to perform an intensive experimental\nevaluation to compare our scheme with other reliable delivery\nprotocols [10, 29, 42].\nACKNOWLEDGMENTS\nThis work was supported by grant No.R01-2006-000-10073-0\nfrom the Basic Research Program of the Korea Science and\nEngineering Foundation.\nREFERENCES\n[1] R. Anderson, H. Chan, and A. Perrig, Key Infection:\nSmart Trust for Smart Dust, IEEE ICNP 2004\n[2] Burton H. Bloom, Space/Time Trade-offs in Hash\nCoding with Allowable Errors, Communication of the\nACM, vol. 13, 422-426, 1970\n[3] B. Carbunar, I. Ioannidis, and C. Nita-Rotaru,\nJANUS: Towards Robust and Malicious Resilient\nRouting in Hybrid Wireless Networks, ACM workshop\non Wireless security (WiSe'04), Oct. 2004\n[4] H. Chan, A. Perrig, and D. Song, Random Key\nPredistribution Schemes for Sensor Networks, IEEE\nSymposium on Security and Privacy, pp. 197-213, May\n2003.\n[5] B. Deb, S. Bhatnagar, and B. Nath, ReInForM:\nReliable Information Forwarding Using Multiple Paths\nin Sensor Networks, IEEE Local Computer Networks\n(LCN 2003), pp. 406-415, Oct. 2003.\n[6] J. Deng, R. Han, and S. Mishra, A Performance\nEvaluation of Intrusion- Tolerant Routing in Wireless\nSensor Networks, 2nd International Workshop on\nInformation Processing in Sensor Networks (IPSN 03),\npp. 349-364, Apr. 2003.\n[7] J. Deng, R. Han, and S. Mishra, Intrusion Tolerance\nand Anti-Traffic Analysis Strategies for Wireless\nSensor Networks, IEEE International Conference on\nDependable Systems and Networks (DSN), pp.\n594-603, 2004.\n[8] J. Deng, R. Han, and S. Mishra, Defending against\nPath-based DoS Attacks in Wireless Sensor Networks,\nACM Workshop on Security of Ad-Hoc and Sensor\nNetworks (SASN'05) , Nov, 2005.\n[9] K. Fall and K. Varadhan (editors), NS notes and\ndocumentation, The VINT project, LBL, Feb 2000,\nhttp://www.isi.edu/nsnam/ns/\n[10] D. Ganesan, R. Govindan, S. Shenker, and D. Estrin,\nHighly Resilient, Energy-Efficient Multipath Routing\nin Wireless Sensor Networks, Computing and\nCommunications Review (MC2R) Vol 1., pp. 11-25,\n2002.\n[11] V. D. Gligor, Security of Emergent Properties in\nAd-Hoc Networks, International Workshop on Security\nProtocols, Apr. 2004.\n[12] O. Goldreich, S. Goldwasser, and S. Micali, How to\nConstruct Random Functions, Journal of the ACM,\nVol. 33, No. 4, 210-217, 1986\n[13] L. Eschenauer and V. D. Gligor, A Key-Management\nScheme for Distributed Sensor Networks, 9th ACM\nConference on Computer and Communication\nSecurity (CCS), pp. 41-47, Nov. 2002.\n[14] C. Hartung, J. Balasalle, and R. Han, Node\nCompromise in Sensor Networks: The Need for Secure\nSystems, Technical Report CU-CS-990-05,\nDepartment of Computer Science University of\nColorado at Boulder, Jan. 2005\n[15] T. He, S. Krishnamurthy, J. A. Stankovic, T. F.\nAbdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui,\nand B. Krogh, An Energy-Efficient Surveillance\nSystem Using Wireless Sensor Networks, ACM\nMobiSys'04, June, 2004\n[16] W.R. Heinzelman, J. Kulik, H. Balakrishnan, Adaptive\nProtocols for Information Dissemination in Wireless\nSensor Networks, ACM MobiCom99, pp. 174.185,\n1999.\n[17] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and\nK. Pister, System Architecture Directions for\nNetworked Sensors, ACU ASPLOS IX, November\n2000.\n68\n[18] X. Hong, M. Gerla, W. Hanbiao, and L. 
Clare, Load\nBalanced, Energy-Aware Communications for Mars\nSensor Networks, IEEE Aerospace Conference, vol.3,\n1109-1115, 2002.\n[19] Y.-C. Hu, D. B. Johnson, and A. Perrig, SEAD:\nSecure Efficient Distance Vector Routing for Mobile\nWireless Ad Hoc Networks, IEEE Workshop on Mobile\nComputing Systems and Applications, pp. 3-13, Jun.\n2002.\n[20] Y.-C. Hu, A. Perrig, and D. B. Johnson, Efficient\nSecurity Mechanisms for Routing Protocols, NDSS\n2003, pp. 57-73, Feb. 2003.\n[21] C. Intanagonwiwat, R. Govindan and D. Estrin,\nDirected Diffusion: A Scalable and Robust\nCommunication Paradigm for Sensor Networks,\nMobiCom'00, Aug. 2000.\n[22] D. Johnson, D.A. Maltz, and J. Broch, The Dynamic\nSource Routing Protocol for Mobile Ad Hoc Networks\n(Internet-Draft), Mobile Ad-hoc Network (MANET)\nWorking Group, IETF, Oct. 1999.\n[23] C. Karlof and D. Wagner, Secure Routing in Wireless\nSensor Networks: Attacks and Countermeasures, The\nFirst IEEE International Workshop on Sensor Network\nProtocols and Applications, pp. 113-127, May 2003\n[24] C. Karlof, N. Sastry, and D. Wagner, TinySec: A Link\nLayer Security Architecture for Wireless Sensor\nNetworks, ACM SensSys'04, pp. 162-175, Nov. 2004.\n[25] I. Khalil, S. Bagchi, and C. Nina-Rotaru, DICAS:\nDetection, Diagnosis and Isolation of Control Attacks\nin Sensor Networks, IEEE SecureComm 2005, pp. 89 100\n, Sep. 2005\n[26] Y. Liu and W. K.G. Seah, A Priority-Based\nMulti-Path Routing Protocol for Sensor Networks,\n15th IEEE International Symposium on Volume 1, 216\n- 220, 2004\n[27] S.-B. Lee and Y.-H. Choi, A Secure Alternate Path\nRouting in Sensor Networks, Computer\nCommunications (2006),\ndoi:10.1016/j.comcom.2006.08.006.\n[28] S. Marti, T.J. Giuli, K. Lai, and M. Baker, Mitigating\nRouting Misbehavior in Mobile Ad Hoc Networks,\nACM/IEEE International Conference on Mobile\nComputing and Networking, pp. 255-265, 2000\n[29] H. Morcos, I. Matta, and A. Bestavros, M\n2\nRC:\nMultiplicative-Increase/Additive-Decrease Multipath\nRouting Control for Wireless Sensor Networks, ACM\nSIGBED Review, Vol. 2, Jan 2005.\n[30] J. Newsome, E. Shi, D. Song, and A. Perrig, The Sybil\nAttack in Sensor Networks: Analysis and Defenses,\nIEEE IPSN'04, pp. 259-268, Apr. 2004.\n[31] B. Parno, A. Perrig, and V. D. Gligor, Distributed\nDetection of Node Replication Attacks in Sensor\nNetworks, the 2005 IEEE Symposium on Security and\nPrivacy, pp. 49-63, May 2005.\n[32] A. Perrig, R. Szewczyk, V. Wen, D. Culler, and\nJ. Tygar, SPINS: Security Protocols for Sensor\nNetworks, ACM MobiCom'01, pp. 189-199, 2001.\n[33] A. Perrig, J. Stankovic, and D. Wagner, Security in\nWireless Sensor Networks, Communications of the\nACM, 47(6), Special Issue on Wireless sensor\nnetworks, pp.53- 57, Jun. 2004\n[34] B. Przydatek, D. Song, and A. Perrig, SIA: Secure\nInformation Aggregation in Sensor Networks, 1st\nInternational Conference on Embedded Networked\nSensor Systems, 255-256, 2003\n[35] E. Shi and A. Perrig, Designing Secure Sensor\nNetworks, Wireless Communications, IEEE Volume\n11, Issue 6, pp. 38-43, Dec. 2004.\n[36] D. Tian and N.D. Georganas, Energy Efficient\nRouting with Guaranteed Delivery in Wireless Sensor\nNetworks, IEEE Wireless Communications and\nNetworking (WCNC 2003), IEEE Volume 3, 1923 1929\n, March 2003\n[37] A. Woo, T. Tong, and D. Culler, Taming the\nUnderlying Challenges of Reliable Multhop Routing in\nSensor Networks, ACM SenSys03, Nov, 2003\n[38] A. Wood and J. Stankovic, Denial of Service in Sensor\nNetworks, IEEE Computer, Vol.35, 54-62, Oct. 
2002\n[39] H.Yang, F. Ye, Y. Yuan, S. Lu and W. Arbough,\nToward Resilient Security in Wireless Sensor\nNetworks, ACM MobiHoc'05, 34-45, May 2005\n[40] Y. Yang, X. Wang, S. Zhu, and G. Cao SDAP: A\nSecure Hop-by-Hop Data Aggregation Protocol for\nSensor Networks, ACM MobiHoc'06 May 2006\n[41] F. Ye, H. Luo, S. Lu and L. Zhang, Statictial En-route\nFiltering of Injected False Data in Sensor Networks,\nIEEE INFOCOM, 2004\n[42] F. Ye, G. Zhong, S. Lu and L. Zhang, GRAdient\nBroadcast: A Robust Data Delivery Protocol for Large\nScale Sensor Networks, ACM Wireless Networks\n(WINET), March 2005\n[43] Y. Yu, R. Govindan, and D. Estrin, Geographical and\nEnergy Aware Routing: a recursive data dissemination\nprotocol for wireless sensor networks, UCLA\nComputer Science Department Technical Report\nUCLA/CSD-TR-01-0023, May 2001.\n[44] W. Zhang and G. Cao, Group Rekeying for Filtering\nFalse Data in Sensor Networks: A Predistribution and\nLocal Collaboration-Based Approach, IEEE\nINFOCOM'05. Vol. 1, 503-514, March 2005\n[45] G. Zhou, T. He, S. Krishnamurthy, and J. A.\nStankovic, Impact of radio irregularity on wireless\nsensor networks, the 2nd International Conference on\nMobile Systems, Applications, and Services\n(MobiSys04), June, 2004\n[46] S. Zhu, S. Setia, and S. Jajodia, LEAP: Efficient\nSecurity Mechanisms for Large-Scale Distributed\nSensor Networks, The 10th ACM Conference on\nComputer and Communications Security (CCS '03),\n62-72, 2003\n[47] S.Zhu, S. Setia, S. Jajodia, and P. Ning, An\nInterleaved Hop-by-Hop Authentication Scheme for\nFiltering False Data in Sensor Networks, IEEE\nSymposium on Security and Privacy, 2004\n69", "keywords": "Neighbor Watch System;legitimate node;Reliable Delivery;Packet-dropping Attacks;aggregation protocols;malicious node;robustness;critical area;single-path forwarding;Sensor Network Security;cluster key;secure ad-hoc network routing protocol;Secure Routing;degree of multipath"} {"name": "180", "title": "SIMULATING OPTION PRICES AND SENSITIVITIES BY HIGHER RANK LATTICE RULES", "abstract": "In this paper we introduce the intermediate rank or higher rank lattice rule for the general case when the number of quadrature points is n t m, where m is a composite integer, t is the rank of the rule, n is an integer such that (n, m) = 1. Our emphasis is the applications of higher rank lattice rules to a class of option pricing problems. The higher rank lattice rules are good candidates for applications to finance based on the following reasons: the higher rank lattice rule has better asymptotic convergence rate than the conventional good lattice rule does and searching higher rank lattice points is much faster than that of good lattice points for the same number of quadrature points; furthermore, numerical tests for application to option pricing problems showed that the higher rank lattice rules are not worse than the conventional good lattice rule on average.", "fulltext": "Introduction\nIt is well known in scientific computation that Monte Carlo\n(MC) simulation method is the main method to deal with\nhigh dimensional ( 4) problems. The main drawback for\nthis method is that it converges slowly with convergence\nrate O(\n1\nN\n), where N is the number of points (or samples\nor simulations), even after using various variance reduction\nmethods. To speed it up, researchers use quasi-random\nor low-discrepancy point sets, instead of using pseudo-random\npoint sets. 
This is the so-called quasi-Monte Carlo (QMC) method.
There are two classes of low-discrepancy sequences (LDS). The first is constructive LDS, such as Halton's sequence, Sobol's sequence, Faure's sequence, and Niederreiter's (t, m, s)-nets and (t, s)-sequences. This kind of LDS has convergence rate O((log N)^s / N), where s is the dimension of the problem and N is, again, the number of points. The second class is the integration lattice points, for example, good lattice points (GLP). This type of LDS has convergence rate O((log N)^s / N^α), where α > 1 is a parameter related to the smoothness of the integrand, and s and N are the same as above. The monograph by Niederreiter [1] gives very detailed information on constructive LDS and good lattice points, while the monographs by Hua and Wang [2] and Sloan and Joe [3] describe good lattice rules in detail.
Unlike the constructive sequences, the construction of good lattice points is not constructive in the sense that they can be found only by computer searches (except in the 2-dimensional case, where good lattice points can be constructed using the Fibonacci numbers). Such searches are usually very time consuming, especially when the number of points is large or the dimension is high, or both. Therefore, developing algorithms that can find good lattice points quickly is of practical importance.
This paper discusses the applications of the intermediate rank or higher rank lattice rules (HRLR) to option pricing problems. The motivations for using higher rank lattice points are as follows. For a class of finance problems, we found that using randomized good lattice points (GLP) can reach much better convergence than the randomized constructive quasi-random sequences (such as the Sobol sequence), let alone pseudo-random point sets; see [4] (in that paper, the lattice points were taken from [2]). The theory given in Section 2 shows that the error bound of a higher rank lattice rule is smaller than that of a good lattice rule, at least asymptotically. Also, searching for higher rank lattice points is much faster than searching for good lattice points; our extensive numerical results confirmed this fact, and some results are listed in Section 2. Furthermore, the results in Section 3 show that the standard errors of the randomized higher rank lattice points are smaller than those of the randomized good lattice points (most of the time), which in turn are much smaller than the standard errors of the randomized Sobol sequence, when these quasi-random point sets are applied to some financial derivative pricing problems in simulating option values and sensitivities.
Higher Rank Lattice Rules
Detailed information about lattice rules can be found in the literature, such as [1], [2] and [3]. We start by introducing lattice rules briefly, considering an integral
If = ∫_{C^s} f(x) dx,    (1)
where C^s = [0, 1]^s is the s-dimensional unit hypercube, f(x) is one-periodic in each component of x, i.e. f(x) = f(x + z) for all z ∈ Z^s (the set of s-dimensional integer points), x ∈ R^s (the s-dimensional real space). An s-dimensional integration lattice L is a discrete subset of R^s that is closed under addition and subtraction and contains Z^s as a subset.
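As a point of reference for the convergence rates quoted above, the following is a minimal Monte Carlo baseline for an integral of the form (1). This sketch is an illustration added here, not code from the paper; the test integrand, dimension and sample sizes are arbitrary choices.

import random

def mc_estimate(f, s, N, seed=0):
    # Crude Monte Carlo estimate of the integral of f over the unit cube [0,1]^s.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(N):
        x = [rng.random() for _ in range(s)]
        total += f(x)
    return total / N

def test_integrand(x):
    # Illustrative smooth integrand (not from the paper); its exact integral over [0,1]^s is 1,
    # since each factor integrates to 1 over [0,1].
    prod = 1.0
    for xi in x:
        prod *= 1.0 + 0.5 * (xi - 0.5)
    return prod

# Example: errors abs(mc_estimate(test_integrand, 5, N) - 1.0) for N = 1024, 4096, 16384
# shrink roughly like 1/sqrt(N), the rate mentioned in the introduction.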
A lattice rule for (1) is a rule of the\nform\nQf = 1\nN\nN -1\nj=0\nf (x\nj\n),\n(2)\nwhere {x\n0\n, , x\nN -1\n} L U\ns\nwith U\ns\n= [0, 1)\ns\n, N is\ncalled the order of the rule.\nNow we consider the intermediate rank or higher rank\nlattice rules, i.e. rules of the form\nQ\nt\nf =\n1\nn\nt\nm\nn-1\nk\nt\n=0\n\nn-1\nk\n1\n=0\nm-1\nj=0\nf ({ j\nm g+\nk\n1\nn y\n1\n++ k\nt\nn y\nt\n})\n(3)\nfor 1 t s, where (m, n) = 1 and g, y\n1\n, , y\nt\n\nZ\ns\n. Notice that t = 0 or t = 1 and n = 1 in (3) is just\nthe conventional good lattice points rule (we refer it to the\nrank-1 rule in this paper). Under some conditions (see, for\nexample, Theorem 7.1, [3]) on g, y\n1\n, , y\nt\n, the points in\n(3) are distinct, so that Q\nt\nis a lattice rule of order N =\nn\nt\nm, and it has rank t.\nKorobov (1959) gave the first existence of good lattice\npoints in the case where N is a prime number. Niederreiter\n(1978) extended the existence to general number N . Disney\nand Sloan proved the existence and obtained the best\nasymptotic convergence rate for general N in good lattice\npoints case. The existence of good rank t rules can be established\n, but much more complicated. We introduce\nDefinition 1. For any integer N 2, let G = G(N ) =\n{g = (g\n1\n, , g\ns\n) Z\ns\n, (g\nj\n, N ) = 1 and -N/2 < g\nj\n\nN/2, 1 j s}. Let y\n1\n, , y\nt\nZ\ns\nbe fixed. The mean\nof P\n\n(Q\nt\n) over G is\nM\n(n)\n,t\n(m) =\n1\nCard(G)\ngG\nP\n\n(Q\nt\n), > 1\n(4)\nFor the sake of simplicity, Sloan et al chose the special\nform of y\nj\nwith all the components 0 except the jth which\nis 1 - the so-called copying rule. Thus (3) becomes\nQ\nt\nf =\n1\nn\nt\nm\nn-1\nk\nt\n=0\n\nn-1\nk\n1\n=0\nm-1\nj=0\nf ({ j\nm g+\n(k\n1,\n, k\nt\n, 0, , 0)\nn\n}).\n(5)\nWith this choice, P\n\n(Q\nt\n) is easily calculated as follows.\nFor > 1, 1 t s and n 2, define\nf\n(n)\n,t\n(x) = (\nt\nj=1\nF\n(n)\n\n(x\nj\n))\ns\nk=t+1\nF\n\n(x\nk\n),\n(6)\nwhere\nF\n(n)\n\n(x) = 1 + 1\nn\n\nhZ\n\n| h |\ne\n(hx),\nand\nF\n\n(x) = 1 +\nhZ\n\n| h |\ne\n(hx).\nIf Q\n(n)\nt\nf is the m-point lattice rule defined by\nQ\n(n)\nt\nf = 1\nm\nm-1\nj=0\nf ({ jn\nm g\n1\n, , jn\nm g\nt\n, j\nm g\nt+1\n, , j\nm g\ns\n}),\n(7)\nthen\nP\n\n(Q\nt\n) = Q\n(n)\nt\nf\n(n)\n,t\n- 1.\n(8)\nFor 1\n\nt\n\ns and g = (g\n1\n, , g\ns\n)G,\ndenote\nw = (ng\n1\n, , ng\nt\n, g\nt+1\n, , g\ns\n),\nand\nr\nt\n(h) =(\nt\nj=1\nr(nh\nj\n))\n\ns\nk=t+1\nr(h\nk\n),\nh =(h\n1,\n, h\ns\n). Then applying the rank-1 lattice rule with\ngenerating vector w, we have\nP\n\n(Q\nt\n) =\nhw0 (mod m)\nr\nt\n(h)\n\n-1.\n(9)\nThe existence of good rank-t rules and the error\nbounds for prime m was established by Joe and Sloan (Theorem\n7.4, [3]). The corresponding results for general m\nwere discovered and proved in [5], and is stated below.\nTheorem 1. For > 1, 1 t s, n 1 integer, m > 0\nany integer with (n, m) = 1, then\nM\n(n)\n,t\n(m) = 1\nm\nt\nk=0\ns-t\nl=0\n(\nt\nk\n)(\ns-t\nl\n) (2())\nn\nk\nk+l\n\np|m\nF\n,k+l\n(p ) - 1,\n(10)\nwhere the product is over all prime factors p of m, p is the\nhighest power of p dividing m,\np|m\nF\n,0\n(p ) = m, and\nfor k 1, F\n,k\n(p ) is given by\nF\n,k\n(p ) = 1 + (-1)\nk\n(1 - 1/p\n-1\n)\nk\n(1 - 1/p\n(k-1)\n)\n(p - 1)\nk-1\n(1 - 1/p\nk-1\n)\n.\n(11)\nRemark:\nUsing the Binomial Theorem, we can obtain\nthe result of Theorem 7.4 in [3] (the case when m is\nprime) from Theorem 1, since the assumption that m is\nprime and n is not a multiple of m implies that (n, m) = 1.\nMoreover, the result of Theorem 1 also holds for n = 1\nor t = 0. 
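To make the copying construction concrete, here is a minimal sketch, added for illustration, of a brute-force evaluation of the rank-t rule in (5); it assumes the braces denote the componentwise fractional part, as in the text, and it is not the authors' implementation.

import itertools

def copy_rule(f, g, m, n, t):
    # Evaluate the rank-t copying rule of equation (5): average f over the n^t shifted
    # copies of the m-point rank-1 rule with generating vector g (illustrative sketch).
    s = len(g)
    total = 0.0
    for shifts in itertools.product(range(n), repeat=t):      # the shifts k_1, ..., k_t
        for j in range(m):
            x = [(j * g[i] / m + (shifts[i] / n if i < t else 0.0)) % 1.0
                 for i in range(s)]
            total += f(x)
    return total / (n ** t * m)

With t = 0 (or n = 1) the routine reduces to the conventional m-point good lattice (rank-1) rule, matching the remark above that Theorem 1 also covers n = 1 and t = 0.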
In either case, the right hand side of (10) is just\n259\nthe case of rank-1 in [3], and if n 2 and t = s, then we\nobtain the result of maximal rank case in [3].\nNow as in the case of rank-1, we give an upper bound\nfor M\n(n)\n,t\n(m) and hence P\n\n(Q\nt\n).\nCorollary 1. Under the conditions of Theorem 1, we have\nM\n(n)\n,t\n(m) 4()\n2\n(m) [(\ns-t\n2\n) + 1\nn\n2\n(\nt\n2\n) + (s - t)t\nn\n\n]\n+ 1\nm {[a(1 + 2())\ns-t\n+ b(1 - 2())\ns-t\n]\n+[a(1 + 2()\nn\n\n)\nt\n+ b(1 - 2()\nn\n\n)\nt\n]\n+ 1\nn\n\n[a(1 + 2())\ns\n+ b(1 - 2())\ns\n]},\n(12)\nwhere (\ns-t\n2\n) = 1 for s - t < 2, (\nt\n2\n) = 1, for t < 2; a =\n(3)\n(6)\n+\n1\n2\n1.68, and b = a - 1. Hence\nM\n(n)\n,t\n(m) = O( log log m\nm\n), asm .\n(13)\nTheorem 2. Let (m) = (1 - s/ log m)\n-1\n. If m >\ne\ns/(-1)\nthen there is a g G such that\nP\n\n(Q\nt\n) M\n(n)\n(m),t\n(m)\n/(m)\n.\n(14)\nIf s 3, then\nM\n(n)\n(m),t\n(N )\n/(m)\n1\nn\nt\n( 2e\ns )\ns\n(log m)\ns\nm\n\n(15)\nas m , where f (x) h(x) as x means\nlim\nx f (x)\nh(x)\n= 1.\nIt is hard to obtain a precise comparison result between\nthe mean for the case of t = 0, i.e., M\n(n)\n,t\n(m), and\nthe corresponding mean for the case of t = 0 rule, i.e.,\nM\n\n(n\nt\nm), even when m is prime, as pointed out in [3].\nNotice that the number of points for rank t rule is n\nt\nm, and\nwe should use the same number of points when comparing\nefficiency or convergence rate among different methods\n. We give an approximate result on this direction based\non (15) and a result in [3] similar to (15).\nCorollary 2.\nFor > 1, 1 t s, n 1 integer\n, m > 0 any integer with (n, m) = 1, let\n1\n(m) =\n(1 - s/ log m)\n-1\n,\n2\n(n\nt\nm) = (1 - s/ log(n\nt\nm))\n-1\n. If\nm > e\ns/(-1)\nthen\nM\n(n)\n\n1\n(m),t\n(m)\n/\n1\n(m)\nM\n\n2\n(n\nt\nm)\n(n\nt\nm)\n/\n2\n(n\nt\nm)\n\nlog m\nlog m + t log n\ns\n< 1.\n(16)\nFrom Corollary 2, we can roughly see that P\n\n(Q\nt\n) <\nP\n\n(Q\n1\n) for t > 1 with the same order (number of points)\nfor both rules, at least asymptotically. Our numerical tests\nshowed that it is true even for small number of points.\nFurthermore, higher rank good lattice points can also be\nfound by computer search via minimizing P\n\n(Q\nt\n) based\non (8), but using m instead of using n\nt\nm (as in the case\nof good lattice points). Therefore, searching higher rank\nlattice points is much faster than searching rank-1 lattice\npoints.\nUsually, the good lattice points were found by searching\nKorobov type g = (1, b, b\n2\n, ..., b\ns-1\n) mod m (compo-nentwise\n) with (b, m) = 1. Sloan and Reztsov proposed\na new searching algorithm - the component-by-component\nmethod. We searched extensively for both types of points.\nBased on the search results we found that these two types\nof lattice points are comparable in both errors and searching\ntimes for the same rank and the same number of points.\nWe only report the Korobov type lattice points here limited\nto space.\nSo far as we know, the theory of copying higher rank\nlattice rule is valid under the assumption that (n, m) = 1.\nWe conjecture that this restriction can be relaxed. We are\nunable to prove this yet so far. But our numerical results\nstrongly support our conjecture, see Table 1 (only partial\nresults are listed). In this table, the comparison is based on\nthe same number of points, where Kor t0 stands for the Korobov\ntype lattice points with t = 0 (i.e., rank-1 case), sim-ilarly\nfor Kor t4. Time is measured in seconds. 
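The search just described can be illustrated, for the rank-1 (t = 0) case with α = 2, by the following sketch; it relies on the classical closed form of P_2 built from B_2(x) = x^2 - x + 1/6 and is an added illustration rather than the authors' search code. For the rank-t points, the paper instead minimises P_2(Q_t) computed through (8), which only requires the m-point rule (7) rather than all n^t m points, and that is the source of the speed-up.

import math

def p2_rank1(g, N):
    # Classical closed form of P_2 for the N-point rank-1 rule with generator g:
    # P_2 = -1 + (1/N) * sum_j prod_i [1 + 2*pi^2 * B_2({j*g_i/N})], with B_2(x) = x^2 - x + 1/6.
    total = 0.0
    for j in range(N):
        prod = 1.0
        for gi in g:
            x = (j * gi / N) % 1.0
            prod *= 1.0 + 2.0 * math.pi ** 2 * (x * x - x + 1.0 / 6.0)
        total += prod
    return total / N - 1.0

def korobov_search(N, s):
    # Brute-force Korobov-type search: try g = (1, b, b^2, ..., b^{s-1}) mod N for every
    # b with gcd(b, N) = 1 and keep the minimiser of P_2 (illustrative sketch).
    best_b, best_p2 = None, float("inf")
    for b in range(1, N // 2 + 1):
        if math.gcd(b, N) != 1:
            continue
        g = [pow(b, i, N) for i in range(s)]
        p2 = p2_rank1(g, N)
        if p2 < best_p2:
            best_b, best_p2 = b, p2
    return best_b, best_p2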
In Table 1, whenever the time is zero, it just means that the search took less than 0.5 seconds. The reported CPU times may be machine dependent. The programs were written in C++ using the Dev-C++ environment and run on a laptop under Windows. In order to measure CPU time as precisely as possible, all the programs were run on the same machine, and only one program was run at a time, with no other programs running. Our search results showed that, within the same type of lattice points, the higher the rank, the smaller the P_2 and the faster the search. The search time for rank = 4 with 32768 points is about 1 second; all the other cases took less than 0.5 seconds.

Table 1: Computer search results of t = 0 and t = 4, with (n, m) = 1, n = 2, m a power of 2, dimension = 5.

2^t m  | Kor t0: b | Kor t0: P_2 | Kor t0: Time | Kor t4: b | Kor t4: P_2
1024   | 189       | 0.735       | 0            | 5         | 0.373
2048   | 453       | 0.264       | 1            | 27        | 0.164
4096   | 1595      | 0.121       | 3            | 21        | 0.067
8192   | 2099      | 0.048       | 10           | 61        | 0.026
16384  | 2959      | 0.018       | 43           | 35        | 0.010
32768  | 1975      | 0.007       | 169          | 131       | 0.004

Applications to Option Pricing
Under the Black-Scholes framework, many European options can be expressed in terms of multivariate normal distributions. Examples are options on the maximum and minimum of n assets, discrete lookback options, discrete shout options, discrete partial barrier options, reset options, etc.; see [6] and the references therein.
In this section, we apply both the Monte Carlo and the quasi-Monte Carlo methods to an area of applied finance, option pricing, and compare the efficiencies among the different methods. For the quasi-Monte Carlo methods, we use the Sobol sequence and both rank-1 and higher rank lattice points. The Sobol sequence is usually the best among the constructive LDS based on our tests.
To compare the efficiencies of different methods, we need a benchmark for fair comparisons. If the exact value of the quantity to be estimated can be found, then we use the absolute error or relative error for comparison. Otherwise, we use the standard error (stderr) for comparison. Here stderr = σ/√N, where σ^2 is the unbiased sample variance and N is the sample size. For LDS sequences, we define the standard error by introducing random shifts as follows. Assume that we estimate θ = E[f(x)], where x is an s-dimensional random vector. Let {x_i}_{i=1}^m ⊂ C^s be a finite LDS sequence and {r_j}_{j=1}^n ⊂ C^s be a finite sequence of random vectors. For each fixed j, we have a sequence {y_i^(j)}_{i=1}^m with y_i^(j) = {x_i + r_j}. It can be shown that such a sequence still has the same convergence rate as the original one. Denote θ_j = (1/m) Σ_{i=1}^m f(y_i^(j)) and θ̄ = (1/n) Σ_{j=1}^n θ_j. The unbiased sample variance is σ^2 = Σ_{j=1}^n (θ_j - θ̄)^2 / (n-1) = [n Σ_{j=1}^n θ_j^2 - (Σ_{j=1}^n θ_j)^2] / ((n-1)n). Then the standard error is defined by stderr = σ/√n. The efficiency of a QMC method (after randomization) over the MC method is defined as the ratio of the standard error of the MC method to the standard error of the QMC method (both methods use the same number of points; otherwise the comparison is not fair).
As an example, let us consider the computation of call options on the maximum of s assets.
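Before developing that example, the random-shift procedure and standard error just defined can be summarised in the following sketch, added for illustration; it assumes the shift is applied modulo 1, consistent with the fractional-part notation used earlier.

import random

def shifted_estimates(f, points, n_shifts, seed=0):
    # Apply n_shifts independent random shifts (mod 1) to a fixed QMC point set and
    # return the per-shift estimates theta_j = (1/m) * sum_i f({x_i + r_j}).
    rng = random.Random(seed)
    s = len(points[0])
    estimates = []
    for _ in range(n_shifts):
        r = [rng.random() for _ in range(s)]
        total = 0.0
        for x in points:
            y = [(xi + ri) % 1.0 for xi, ri in zip(x, r)]
            total += f(y)
        estimates.append(total / len(points))
    return estimates

def standard_error(estimates):
    # stderr = sample standard deviation of the shifted estimates divided by sqrt(n).
    n = len(estimates)
    mean = sum(estimates) / n
    var = sum((e - mean) ** 2 for e in estimates) / (n - 1)
    return (var / n) ** 0.5

The efficiency figures reported in Tables 2-4 are then ratios of the MC standard error to the QMC standard error at the same total number of points.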
Using martingale method,\nDufresne et al derived in [7] that the value of a call option\ncan be expressed in terms of multivariate normal distributions\n:\nV = V\ns\nmax\n({S\ni\n}, {\ni\n},\n0\n, r, q, ) =\ns\ni=1\nS\ni\ne\n-q\ni\nT\n\nN\ns\n(e\ni1\n, ..., e\ni,i-1\n, d\n(i)\ni\n(K, T ), e\ni,i+1\n, e\nis\n;\ni\n)Ke\n-rT\n1 - N\ns\n(-d\nQ\n1\n(K, T ), ..., -d\nQ\ns\n(K, T );\n0\n) (17)\nwhere\ne\nik\n= log(S\ni\n/S\nk\n) + T\n2\nik\n/2\n\nik\nT\n,\n\nik\n=\n\n2\ni\n- 2\ni\n\nk\n+\n2\nk\n,\nd\n(i)\ni\n(K, T ) = log(S\ni\n/K) + (r +\n2\ni\n/2)T\n\ni\nT\n,\nd\nQ\ni\n(K, T ) = log(S\ni\n/K) + (r 2\ni\n/2)T\n\ni\nT\n,\n\n0\n= (\njk\n)\nss\nand for i = 1, ..., s,\ni\n= (\n(i)\njk\n)\nss\nwith\n\n(i)\njk\n=\n2\ni\n+\njk\n\nj\n\nk\nij\n\ni\n\nj\nik\n\ni\n\nk\n\nij\n\nik\n, j, k = i;\n\n(i)\nik\n= i ik\n\nk\n\nik\n, i = k;\nii\n= 1.\nThus,\nin order to estimate the option values,\nwe need to estimate the following s-variate normal\ndistribution H(a, )\n=\n1\n\ndet()(2)\ns\na\n1\n\n\n\na\ns\nexp\n(1\n2\nx\nt\n\n-1\nx)dx, where a = (a\n1\n, a\n2\n, ..., a\ns\n),\n- a\ni\n+ ( i = 1, 2, ..., s), x R\ns\n,\ndx =dx\n1\n...dx\ns\n, = (\nij\n)\nss\nis a positive definite correlation\nmatrix. Details about the computation of multivariate\nnormal distributions can be found in [8]. Notice\nthat after the transformation, the s-dimensional integral\nfor H(a, ) is transformed into an s - 1 dimensional integral\n.\nFor the numerical demonstration, we consider a call\noption on maximum of 6 stocks. In our simulations, each\nmethod was randomly shifted, including the MC method,\nso that each method has the same number of points. We\ntook the number of random shifts to be 10, other parameters\nare s = 6, K {$90, $100, $110}, r = 10%, S\ni\n= $100,\n\ni\n= 0.2,\nij\n= 0.5, i = j, i, j = 1, ..., 6. Besides the\noption values, the option sensitivities or Greek letters\ni\n=\nV\nS\ni\n,\nij\n=\n\n2\nV\nS\ni\nS\nj\n, V\ni\n=\nV\n\ni\n, =\nV\nT\nand =\nV\nr\nare\nvery important quantities in financial risk management and\ntrading. They are usually harder to obtain than the option\nvalues themselves. The results where K = $100 are listed\nin the following Tables 2, 3 and 4, and the results where\nK = $90 and K = $110 are similar and are omitted here.\nIn these tables, column 1 contains the numbers of\npoints, numbers in the MC column are the standard errors,\nthose in the columns of quasi-Monte Carlo methods are\nefficiencies of the corresponding methods over the Monte\nCarlo method. Here we do not include the CPU times for\ndifferent methods since these programs were run on a main-frame\nusing UNIX system, and there were many other programs\nwere also running at the time I ran these programs.\nAnd I think that the CPU times measured in this way are\nnot precise.\nTable 2\n: Comparison of estimated call option values and\nefficiencies, the option value is $28.81 with standard error\n1.5099e-06 obtained by higher rank lattice rule (rank=4)\nusing 2\n14\n=16384 points with 10 random shifts. 
The standard error is zero in my simulation by the same rule using 2^15 = 32768 points with 10 random shifts.

N     | MC     | Sobol | Kor t0   | Kor t4
2^10  | 0.6832 | 10.8  | 219.9    | 200.5
2^11  | 0.4834 | 27.8  | 666.8    | 1194.9
2^12  | 0.3413 | 39.6  | 2104.7   | 371.8
2^13  | 0.2409 | 41.2  | 18129.2  | 22758.4
2^14  | 0.1703 | 110.2 | 30156.0  | 112804.8
2^15  | 0.1206 | 101.3 | 253540.7 | *

From Table 2, we observe that the randomized lattice rules achieve much better results than the randomized Sobol sequence, which is itself about 10 to 110 times more efficient than the MC method. The randomized Korobov-type higher rank lattice points beat the randomized rank-1 lattice points, except when N = 2^10 = 1024 and 2^12 = 4096.

Table 3: Comparison of estimated option sensitivity (the Greek letter Δ_1 in this table) values and efficiencies; the value of Δ_1 is 0.1898 with standard error 4.4427E-09, obtained by the higher rank lattice rule (rank = 4) using 2^15 = 32768 points with 10 random shifts.

N     | MC     | Sobol | Kor t0   | Kor t4
2^10  | 0.0151 | 7.3   | 135.5    | 98.1
2^11  | 0.0107 | 10.7  | 330.8    | 1226.6
2^12  | 0.0077 | 9.9   | 900.7    | 110.7
2^13  | 0.0054 | 19.7  | 8861.8   | 9833.5
2^14  | 0.0038 | 30.7  | 24742.9  | 70692.9
2^15  | 0.0027 | 42.8  | 137025.0 | 605473.8

Again, the randomized lattice rules are much more efficient than the randomized Sobol sequence, which is about 8 to 43 times more efficient than the MC method. Kor t4 is more efficient than Kor t0 except when N = 2^10 = 1024 and 2^12 = 4096.

Table 4: Comparison of estimated gamma (Γ_11 = ∂^2 V / ∂S_1^2) values and efficiencies; the value of Γ_11 is 0.01631 with standard error 1.2418e-09, obtained by the higher rank lattice rule (rank = 4) using 2^15 = 32768 points with 10 random shifts.

N     | MC     | Sobol | Kor t0   | Kor t4
2^10  | 4.6E-4 | 8.0   | 107.6    | 12.8
2^11  | 3.3E-4 | 14.0  | 118.6    | 223.7
2^12  | 2.3E-4 | 19.6  | 484.3    | 30.3
2^13  | 1.7E-4 | 22.6  | 4217.8   | 4457.5
2^14  | 1.2E-4 | 24.1  | 3108.7   | 28713.5
2^15  | 8.2E-5 | 40.0  | 108047.3 | 66385.4

The conclusion is similar to that of Table 3. The randomized Kor t4 is more efficient than the randomized Kor t0 except when N = 2^10 = 1024, 2^12 = 4096 and 2^15 = 32768.
In our simulations, the pseudo-random number generator used is ran2() from [9]. The periodizing function used is φ(x) = (1/(2π))(2πx - sin(2πx)).
Conclusion
In this paper, we introduced the higher rank lattice rules and gave a general expression for the average of P_α(Q_t) for the higher rank lattice rule over a subset of Z^s, together with an upper bound and an asymptotic rate for the higher rank lattice rule. The results recover the cases of the good lattice rule and the maximal rank rule. Computer search results showed that the P_2 values obtained by the higher rank lattice rule were smaller than those obtained by the good lattice rule, while searching for higher rank lattice points was much faster than searching for good lattice points with the same number of quadrature points. Numerical tests on an option pricing problem showed that the higher rank lattice rules (t > 0) usually beat the conventional good lattice rule (the t = 0 case). Both of these rules showed significant superiority over the Sobol sequence.
Our tests (not\nlisted here) on other types of options showed similar efficiency\ngains of higher rank lattice rules over good lattice\nrules, though the gains may vary.\nSince searching higher rank lattice points is much\nfaster than that of rank - 1 lattice points (say the rank is\nlarger than 2); the search algorithm is simple; and the values\nof P\n2\nfor higher rank lattice points are smaller than that\nfor the rank - 1 points; furthermore, (standard) errors obtained\nby higher rank lattice rules to practical problems are\nnot worse than those by the rank - 1 rules on average, the\nhigher rank lattice rules are good candidates for applications\n. One unsolved problem in lattice rules (whether high\nrank or not) is the periodizing seems not work well in high\ndimensions. It needs futher exploration.\nAcknowledgements\nThis research was partially supported by an Natural\nSciences and Engineering Research Council of Canada\n(NSERC) grant.\n\nReferences\n[1] H. Niederreiter, Random Number Generation and\nQuasi-Monte Carlo Methods, SIAM, Philadelphia,\n1992.\n[2] L. Hua and Y. Wang, Applications of Number Theory\nin Numerical Analysis, Springer-Verlag, 1980.\n[3] I. H. Sloan and S. Joe, Lattice Methods for Multiple\nIntegration, Oxford University Press, New York, 1994.\n[4] P. Boyle, Y. Lai and K. S. Tan, Pricing Options Using\nlattice rules, North American Actuarial Journal, 9(3),\n2005, 50-76.\n[5] Y. Lai, Monte Carlo and Quasi-Monte Carlo Methods\nand Their Applications, Ph. D Dissertation, Department\nof Mathematics, Claremont Graduate University,\nCalifornia, USA, 2000.\n[6] P. Zhang, Exotic Options, 2nd edition, World Scientific\n, 1998.\n[7] P.C. Dufresne, W. Keirstead and M. P. Ross, Pricing\nDerivatives the Martingale Way, working paper, 1996.\n[8] Y. Lai, Effcient Computations of Multivariate Normal\nDistributions with Applications to Finance, working\npaper, Departmetn of Mathematics, Wilfrid Laurier\nUniversity, Waterloo, Ontario, Canada, 2005.\n[9] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P.\nFlannery, Numerical recipes in C: The Art of Scientific\nComputing, Cambridge University Press, 1992.\n262", "keywords": "Monte Carlo and Quasi-Monte Carlo methods;Simulation of multivariate integrations;Lattice rules;Option Pricing"} {"name": "181", "title": "SmartCrawl: A New Strategy for the Exploration of the Hidden Web", "abstract": "The way current search engines work leaves a large amount of information available in the World Wide Web outside their catalogues. This is due to the fact that crawlers work by following hyperlinks and a few other references and ignore HTML forms. In this paper, we propose a search engine prototype that can retrieve information behind HTML forms by automatically generating queries for them. We describe the architecture, some implementation details and an experiment that proves that the information is not in fact indexed by current search engines.", "fulltext": "INTRODUCTION\nThe gigantic growth in content present in the World Wide\nWeb has turned search engines into fundamental tools when\nthe objective is searching for information. A study in 2000\n[11] discovered that they are the most used source for finding\nanswers to questions, positioning themselves above books,\nfor example.\nHowever, a great deal of relevant information is still hidden\nfrom general-purpose search engines like AlltheWeb.com\nor Google. 
This part of the Web, known as the Hidden Web\n[7], the Invisible Web [6, 9] or the Deep Web [1] is growing\nconstantly, even more than the visible Web, to which we are\naccustomed [6].\nThis happens because the crawler (the program that is\nresponsible for autonomous navigating the web, fetching\npages) used by current search engines cannot reach this information\n.\nThere are many reasons for this to occur. The Internet's\nown dynamics, for example, ends up making the index of\nsearch engines obsolete because even the quickest crawlers\nonly manage to access only a small fraction each day of the\ntotal information available on the Web.\nThe cost of interpretation of some types of files, as for example\nMacromedia Flash animations, compressed files, and\nprograms (executable files) could be high, not compensating\nfor the indexing of the little, or frequently absent, textual\ncontent. For this reason, that content is also not indexed\nfor the majority of search engines.\nDynamic pages also cause some problems for indexing.\nThere are no technical problems, since this type of page generates\nordinary HTML as responses for its requests. However\n, they can cause some challenges for the crawlers, called\nspider traps [8], which can cause, for example, the crawler to\nvisit the same page an infinite number of times. Therefore,\nsome search engines opt not to index this type of content.\nFinally, there are some sites that store their content in\ndatabases and utilize HTML forms as an access interface.\nThis is certainly the major barrier in the exploration of the\nhidden Web and the problem that has fewer implemented\nsolutions. Nowadays none of the commercial search engines\nthat we use explore this content, which is called the Truly\nInvisible Web [9].\nTwo fundamental reasons make crawling the hidden Web\na non-trivial task [7]. First is the issue of scale. Another\nstudy shows that the hidden content is actually much greater\nthan what is currently publicly indexed [1]. As well as this,\nthe interface for access to this information serves through\nthe HTML forms are projected to be manipulated and filled\nby humans, creating a huge problem for the crawlers.\nIn this paper, we propose a search engine prototype called\nSmartCrawl, which is capable of automatically attaining\npages that are not actually recoverable by current search\nengines, and that are \"secreted\" behind HTML forms.\nThe rest of this article is organised in the following manner\n. Section 2 shows related work and in the sequence, we\nexplain the construction of HTML forms and how they can\nbe represented. In sections 4 and 5, we describe the prototype\n. In Section 6 the experimental results are highlighted\nand finally, in Section 7, we conclude the paper.\n9\nRELATED WORK\nThere are some proposals for the automatic exploration\nof this hidden content. Lin and Chen's solution [6] aims to\nbuild up a catalogue of small search engines located in sites\nand, given the user searching terms, choose which ones are\nmore likely to answer them. Once the search engines are\nchosen by a module called Search Engine Selector, the user\nquery is redirected by filling the text field of the form. The\nsystem submits the keywords and waits for the results that\nare combined subsequently and sent to the users' interface.\nThe HiWE [7] is a different strategy which aims to test\ncombinations of values for the HTML forms at the moment\nof the crawling (autonomous navigation), making the indexing\nof the hidden pages possible. 
Once a form in a HTML\npage is found, the crawler makes several filing attempts,\nanalyses and indexes the results of the obtained pages.\nMoreover, the HiWE has a strategy to extract the labels\nof HTML forms by rendering the page. This is very useful\nto obtain information and classify forms and helps to fill in\nits fields.\nThere are other approaches that focus on the data extraction\n. Lage et al. [4] claims to automatically generate agents\nto collect hidden Web pages by filling HTML forms.\nIn addition to this, Liddle et al. [5] perform a more comprehensive\nstudy about form submissions and results processing\n. This study focus on how valuable information can\nbe obtained behind Web forms, but do not include a crawler\nto fetches them.\nEXTRACTING DATA FROM BEHIND THE FORM\nHTML forms are frequently found on the web and are\ngenerally used for filtering a large amount of information.\nAs shown in Figure 1, from a page with one form the user\ncan provide several pieces of data which will be passed on\nto a process in the server, which generates the answer page.\nThe current crawlers do not fill in form fields with values,\nmaking them the major barrier for exploration of the hidden\nWeb. In order to achieve this, it is vital to extract several\npieces of information from the form.\nAn HTML form can be built on different manners, including\nvarious types of fields such as comboboxes, radio buttons,\ncheckboxes, text fields, hidden fields and so on. However,\nthe data sent to the server through the Common Gateway\nInterface (CGI) is represented by proper codified pairs\n(name, value). This way, we can characterise a form with\nwhich has n fields as a tuple:\nF = {U, (N\n1\n, V\n1\n), (N\n2\n, V\n2\n), ..., (N\nn\n, V\nn\n)\n}\n(1)\nwhere U is the URL for the data that has been submitted,\nand (N\nn\n, V\nn\n) are the pairs (name, value) [5].\nHowever, this is a simplification, since there are much\nmore information associated with HTML forms. An example\nis the method by which the form data will be sent to the\nserver, that is, by HTTP GET or POST. Moreover, some\nfields possess domain limitations (e.g.\ntext fields with a\nmaximum size, comboboxes).\nTo do an analysis of the form and extract relevant information\nis not an easy task, but the most difficult step\nsurely is to extract the field's labels. This is because generally\nthere is not a formal relationship between them in the\nHTML code. For example, the label for a text field can be\nplaced above it, separated by a BR tag, it can be beside it,\nor it can be inserted inside table cells.\nAll these pieces of data are absolutely necessary to be extracted\nfor surpassing HTML forms and fetching the results\npage.\nTHE SMARTCRAWL\nThe aim of SmartCrawl is to bring a strategy that allows a\nmore complete exploration of the hidden content. To achieve\nthis, it managed to generate values for a largest number of\nforms.\nFurthermore, it has an architecture very similar to current\ncommercial search engines, which means it permits an easier\nimplantation of strategies vastly used to gain performance\nand scalability as presented by [10] or [2].\n4.1\nExecution of the Prototype\nSmartCrawl is above all a search engine and, therefore,\ncontains all its essential components. The difference is in the\nfact that each component has adaptations and some extra\nfeatures which enable them to explore the hidden content\nof the Web. 
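As a concrete reading of the form representation of Section 3, that is, the tuple of equation (1) together with the request method and the field domain information discussed there, here is a minimal sketch; the class and attribute names are illustrative and are not SmartCrawl's actual data structures.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FormField:
    name: str                     # the N_i in equation (1)
    value: str = ""               # the V_i in equation (1)
    label: Optional[str] = None   # extracted label, e.g. "Artist:"
    options: List[str] = field(default_factory=list)  # finite domain (combobox/checkbox)
    max_length: Optional[int] = None                   # text-field size limit, if any

@dataclass
class HtmlForm:
    action_url: str               # U in equation (1)
    method: str = "GET"           # HTTP GET or POST
    fields: List[FormField] = field(default_factory=list)

    def parameters(self):
        # The (name, value) pairs that would be sent through the CGI interface.
        return [(f.name, f.value) for f in self.fields if f.name]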
The main goal is to index only the pages that\npotentially are in the non-explorable part of the Web.\nTo extract the content from behind these forms, SmartCrawl\ngenerates values for its fields and submits them. These values\nare chosen in two different moments: in the indexing\nand when an user performs a search.\nIn the indexing phase, once it finds a form, the SmartCrawl\nextracts a set of pieces of information from it that allow\nqueries (combinations of possible values for the form) to be\ncreated.\nNew queries are also generated when the user performs a\nsearch. For this, the forms that are more likely to answer to\nthe search receive the supplied keywords. However, contrary\nto the implementation of Lin and Chen [6] the obtained\nresults are also scheduled for indexing and not only returned\nto the user interface.\nThe process of execution of SmartCrawl is constituted in\nthe following steps: (1) finding the forms, (2) generating\nqueries for them, (3) going to the results and (4) searching\ncreated indexes.\n4.1.1\nFinding forms\nThe first step in the execution process of the SmartCrawl\nis the creation of a number of crawlers that work in parallel\nsearching for pages that include HTML forms. Every page\nfound is then compressed and stored for further analysis. At\nthis moment, it acts like a common crawler, following only\nlinks and references to frames.\nThe pages stored by the crawler are decompressed afterwards\nby a indexing software component which extracts\npieces of information from each of the forms found and catalogues\nthem. Beyond this, every page is indexed and associated\nwith the forms. If the same form is found in distinct\npages, all of them are indexed. Nevertheless, there will be\nonly one representation of the form.\n4.1.2\nGenerating queries for the forms\nAnother component of the indexing software is in charge of\ngenerating values for the encountered forms. The generation\nof queries is based on the collected information about the\nform and its fields.\n10\nSearch Results\n1. CD:\nKinks Face To Face\n2. CD:\nKinks Muswell Hillbillies\n3. CD:\nKinks Misfits\n...\nProcessing\n(Server)\nCDs\nCDs\nDVDs\nDVDs\n:\n:\n:\nKinks\nCDs\nCDs\nDVDs\nDVDs\n:\nCDs\nCDs\nDVDs\nDVDs\nMedia:\nTitle:\nArtist:\nKinks\nFigure 1: A form processing\nThe first generated query is always the default, that is,\nthe one which uses all the defined values in the HTML code\nof the form. Next, a pre-defined number k of other possible\ncombinations is generated. To generate values for the text\nfields (which possess an infinite domain) a table that stores\na list of values for a data category is consulted based on the\nfield label.\nFor every generated query, a further visit is scheduled for\nthe crawler. The parameters (set of field names and values)\nare stored and a new item is added to the queue of URLs\nthat must be visited by crawlers.\n4.1.3\nVisiting the results\nThe crawler is in charge of executing its second big goal\nwhich is to submit the scheduled queries. To accomplish this\nit needs an extra feature: the capacity to send parameters\nin HTTP requests using both the GET and POST methods\n. If it perceives that an item in the queue of URLs is\na query, it submits the parameters and analyses the HTML\ncode obtained as a result in the same way that it does to\nothers. The page is then compressed, stored and associated\nwith the information of the original query.\nThe indexing software decompress pages that contain results\nof form submissions and indexes them. 
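The extra capability described in 4.1.3, sending the stored parameters with either HTTP method, can be sketched as follows; this illustration uses the widely available requests library and is not the Crawler Downloader's actual code.

import requests

def submit_query(action_url, method, params, timeout=10):
    # Send the (name, value) pairs of a scheduled query and return the result page's HTML.
    if method.upper() == "GET":
        response = requests.get(action_url, params=params, timeout=timeout)
    else:  # POST: the parameters travel in the request body
        response = requests.post(action_url, data=params, timeout=timeout)
    response.raise_for_status()
    return response.text

# Example, mirroring the query representation discussed later in the text:
# submit_query("http://search.cnn.com/cnn/search", "GET", {"q": "brazil"})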
From this index\n, the classification and search software finds results that\ncontain all the search terms formulated by the user.\n4.1.4\nSearching for stored pages\nAs soon as the user performs a search, two steps are exe-cuted\nby the classification and search software. Firstly, the\nindexes created by the indexing software are consulted to\nfind the keywords formed by the user and the results are returned\nin an organised form (the most relevant come first)\nin an HTML page.\nSubsequently, based on these indexed pages which are associated\nwith the forms, SmartCrawl selects forms that are\nmore likely to answer to the user's search and generate new\nqueries which will be visited afterwards by the crawlers and\nindexed in the same way by the indexing software.\nARCHITECTURE AND IMPLEMENTA-TION\nThe architecture at the high level of this application is\ndivided into: crawler, indexing software, ranking and search\nsoftware and storage components.\nAs we have seen, the crawler is responsible for obtaining\nWeb pages, submitting queries and storing the results. The\nindexing software, on the other hand, indexes the obtained\npages and generates form queries. The ranking and search\nsoftware uses the indexes to answer searches made by the\nuser or redirects other forms and storage components take\ncare of the storage of all the information used by other components\n.\nFigure 2 shows how the main components are available\nand how they interact with the storage components, that\nare represented by the rounded boxes. They are the Form\nParser, Form Inquirer and Form Result Indexer of the indexing\nsoftware, the Document Seeker of the ranking and\nsearch software and the Crawler Downloader of the crawler.\nTwo storage components support the crawling: URL Queue\nand URL List. The first is responsible for storing the line of\nURLs that the Crawlers Downloaders need to visit, in this\nway, the URLs which will serve as seeds to the autonomous\nare also added in it. As soon as a Crawler Downloader extracts\nlinks from an HTML page, it is in this component\nthat the new URLs will also be inserted for a further visit.\nThe URL list, on the other hand, stores the URLs that have\nalready been visited, allowing the Crawler Downloader keep\ntrack of them.\nWhen the page includes a form, or a result to a query,\nit needs to be stored for subsequent indexing. The crawler\nstores the compressed content in a storage component named\nWarehouse, where it is given a number called storeId.\nTwo components of the indexing software are responsible\nfor decompressing these pages that have been stored in the\nWarehouse. The first, called the Form Parser, extracts information\nfrom all the forms contained within the page, and\nsends them to the storage component Form List, where a\nnumber, called formId, is associated with every form. The\nForm Parser is also responsible for indexing the page which\ncontains forms and associating it to each formId of the forms\ncontained within.\nTo index these documents, SmartCrawl uses a technique\ncalled inverted index or inverted file. The Document List,\nWordmatch and Lexicon are the three components that carry\nout storing an indexed page. The Document List stores the\ntitle, a brief description of each indexed page and its docId.\nThe Lexicon and Wordmatch store the inverted index itself.\nThe first contains a list of pairs (wordId, word) for each one\nof the words used in indexed documents. 
The second contains a list of occurrences of the words in the indexed documents and their position (offset) in the text. Wordmatch is therefore formed by the values docId, wordId and offset.
The Form Inquirer is the indexing software component whose objective is to generate queries for the stored forms in the Form List. To generate values for the text fields, the Form Inquirer consults the list of categories and values through the Categories component. Each generated query is sent to the Query List, where a queryId is associated with it, and a new URL is added to the URL Queue.
The second component that extracts compressed pages from the Warehouse is the Form Result Indexer. Its job is much easier than that of the others, as its aim is only to index the pages that contain results of submitted queries and to associate the proper queryId.
From the indexes created and stored in the Lexicon and Wordmatch, the Document Seeker answers user searches. Every document stored in the Document List has a formId associated with it and, optionally, a queryId when the document is the response to a query. With these two numbers, the Document Seeker consults the Form List and the Query List to obtain the information needed to locate an indexed page on the World Wide Web. For example, a query to a form that points to the URL http://search.cnn.com/cnn/search using the HTTP GET method can be represented by http://search.cnn.com/cnn/search?q=brazil, if it has only one parameter with name q and value brazil.
The Document Seeker should return the result set in order, so that the most relevant documents come first. A simple solution for this is to take into consideration only the position of the words in the text and the number of occurrences. Letting o_i be the offset of the i-th occurrence, in the document, of a word that is among the user's search terms, the relevance is given by:

r = Σ_{i=0}^{n} 1000 / (o_i + 1)    (2)

In equation 2, we compute the rank of a page from the offsets of all the user's search terms that appear in the document, scaled by an arbitrary number (in this case 1000), so that the most relevant entry receives the smallest rank number.
Another important role of the Document Seeker is to redirect the search terms supplied by the user to some of the forms catalogued in the Form List. To accomplish this, it looks in the Document List for pages that contain search engines (forms with one, and only one, text field) and whose text contains words related to the terms sought by the user. New queries are then added to the Query List, and new URLs are added to the URL Queue to be visited by the Crawler Downloader.
5.1 Labels extraction algorithm
A very important task is performed by a secondary component called the Form Extractor. It is in charge of extracting the various pieces of form information present in an HTML page.
To facilitate the analysis of a page's content, the HTML code is converted into a DOM (Document Object Model) tree provided by the CyberNeko HTML Parser [3]. From the DOM tree, the Form Extractor looks for nodes which represent forms and separates them from the rest of the tree. Each of these sub-trees, which encompasses all the tags positioned between the <form> and the </form>, is submitted for processing.
Amongst the data which should be obtained, the fields and their labels are undoubtedly the most challenging. In spite of the HTML specification having a label tag for the declaration of a label, it is almost never used and, therefore, there is no formal declaration of labels in the HTML code.
The solution adopted was to establish a standard that the labels must follow. For the Form Extractor, labels are continuous segments of text which use the same format and have at most n words and k characters. These values can be defined in the configuration file.
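The two ingredients of this subsection, the segment test for what may count as a label (at most n words and k characters in a single formatting run) and the two-pass left-then-above lookup over the position grid that is described below after Figures 3 and 4, can be sketched as follows. The function names, default limits and grid encoding are illustrative choices, not the Form Extractor's code.

def is_label_candidate(text, max_words=3, max_chars=30):
    # Segment test: a continuous piece of text, in a single formatting run, with at most
    # max_words words and max_chars characters (both limits configurable, as in the text).
    text = text.strip()
    return bool(text) and len(text) <= max_chars and len(text.split()) <= max_words

def assign_labels(grid):
    # Two-pass lookup over a cell grid like the one in Figure 3(b).
    # grid[row][col] is a list of elements, each a dict with a 'kind' key
    # ('label', 'textfield', 'checkbox', ...) and a 'text' or 'name' key.
    def first_label(cell):
        for element in cell:
            if element["kind"] == "label" and is_label_candidate(element["text"]):
                return element["text"]
        return None
    assignments = {}
    for row in range(len(grid)):
        for col in range(len(grid[row])):
            for element in grid[row][col]:
                if element["kind"] not in ("textfield", "checkbox", "combobox"):
                    continue
                label = first_label(grid[row][col - 1]) if col > 0 else None   # pass 1: left
                if label is None and row > 0:
                    label = first_label(grid[row - 1][col])                    # pass 2: above
                assignments[element["name"]] = label
    return assignments

The real Form Extractor additionally treats checkbox and combobox groups specially, taking the item labels from the right; that refinement is omitted in this sketch.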
Each of these sub-trees, which\nencompass all the tags which are positioned between the\n<form> and the </form>, is submitted for processing.\nAmongst the data which should be obtained, undoubtedly\nthe fields and their labels are the most challenging ones. In\nspite of having a tag in the HTML specification called label\nfor the declaration of a label, it is almost not used and,\ntherefore, we do not possess a formal declaration of labels\nin the HTML code.\nThe solution encountered was to establish a standard that\nthe labels must have. For the Form Extractor, labels are\ncontinuous segments of texts which use the same format\nand have the maximum of n words and k characters. These\nvalues can be defined in the configuration file.\n1\nDocument Object Model\n12\nDVDs\nCDs\nDVDs\nMedia:\nTitle:\nArtist:\nKinks\n(a) HTML form example\n1\n2\n1\n2\n3\n4\n{\n\"Artist:\"}\nLabel\n{\n\"Title:\"}\nLabel\n{\n\"Media:\"}\nLabel\n{\n\"Submit\",\n\"Reset\"}\nButton\nButton\n{\n\"artist\"}\nTextfield\n{\n\"title\"}\nTextfield\n{\n\"cds\",\n\"CDs\",\n\"dvds\",\n\"DVDs\"}\nCheckbox\nLabel\nCheckbox\nLabel\n(b) Table representing the position of the elements of the form\nFigure 3: Representing the positions of the components of an HTML form\nCheckboxField\nLabel: \"Media:\"\nName: \"media\"\nOptions:\nTextField\nLabel: \"Artist:\"\nName: \"artist\"\nValue: \"\"\nForm\nAction: \"Search.jsp\"\nRequest Method: POST\nFields:\nTextField\nLabel: \"Title:\"\nName: \"title\"\nValue: \"\"\nSubmitButtonField\nName: \"Submit\"\nValue: \"submit\"\nOption\nLabel: \"CDs\"\nValue: \"cd\"\nOption\nLabel: \"DVDs\"\nValue: \"dvd\"\nFigure 4: An example of a HTML form representation\nFrom the sub-tree which contains a certain form information\n, the Form Extractor generates a table which represents\nthe positioning of the elements contained in it. Figure 3\nshows an example of the table generated by a simple form.\nThe table is generated considering the nodes in the DOM\ntree which represent a common HTML table. If there are\nmore than one defined table in the HTML code, similar representations\nare created. Each cell has a collection of the\nform's elements. The third step of the process is to extract\nthe labels of each one of the fields in the form (except for\nhidden fields and buttons) and generate an object-orientated\nrepresentation for the forms.\nTo extract the labels, the Form Extractor passes twice by\nthe generated table to the form. The first time, for each field\nin the form which require a label, it is verified exactly what\nexists on the left side of the field (even in one adjacent cell).\nIf a label is found in this position, this label is immediately\nassociated to the field. In the second passage, the fields\nwhich still do not have association with labels are observed\nagain, however this time the search for the label is done in\nthe above cell.\nFor the fields of the checkbox and combobox kind, the\ntreatment is special, because apart from the conventional\nlabel, the items which represent their domain also have labels\n.\nAs in the case of \"DVDs\" and \"CDs\" labels in Figure\n3(a). The domain labels are extracted from the right, and\nthe label of the set of items is obtained from the left. 
In\nthe example, the checkboxes are grouped by their names,\nwhich in this case is \"media.\" The labels of each item are\nextracted from the right (\"CDs\" and \"DVDs\") and the label\nof the set of checkboxes is extracted from the left of the first\nfield (\"Media:\").\nFigure 4 shows how the object-oriented structure for the\nexample above would be.\n5.2\nThe list of categories and values\nThe Categories component is in charged of controlling a\nlist of categories and values which helps the Form Inquirer\nto generate values for the text fields. In order to guarantee\nbetter results, the name of the category is normalized before\ncomparing to the field label. The normalization aims to:\n(1) remove punctuation (leaving just the words), (2) convert\ngraphic signing and other special characters into simple ones\nand (3) remove stop words.\nStop words is a concept given to the words which can be\ntaken from sentences without changing its meaning, being\nlargely used in normalization and some search engines even\nextinguish these words from their indexes. Great part of the\nStop Words are prepositions, articles and auxiliary verbs,\nsuch as \"the\", \"of\" or \"is.\"\nThe list of categories is automatically built by the Form\nParser. Once it finds a field with finite domain (e.g. comboboxes\n), the values are extracted and added to the categories\nlist associated to the field's label.\nWhen the same value is added more than once to a category\n, it gets more priority in relation to the others. This\nis obtained by using a number which means the relevance\nof this value in the category. It is based on this number\nthat the set of values is put in order before it is repassed to\nthe Query Inquirer. Therefore, the most relevant values are\ntested first in text fields.\n13\n5.3\nRedirecting queries\nAs mentioned before, associated to each form, there is a\nset of indexed pages where it has been found. These pages\nallow the Document Seeker to choose the forms to which the\nuser's search terms will be redirected to.\nIn order to choose amongst several forms, two steps are\ntaken: (1) finding which words or sequences of words are\nrelated to the terms of the user's search and (2) looking for\nthese words on the pages which have small search engines.\nTo solve the first problem, we could use the stored index\nitself. However, the volume of indexed information is not so\nlarge as to provide a good set of words. It was used, therefore\n, the catalogue of a general purpose search engine called\nGigablast\n2\n, since it implements data mining techniques and\nprovides, for the searching terms, a list of words or sentences\nwhich frequently appear in the returning documents.\nFrom this set of words, the Document Seeker performs\nsearch on pages which have search engines looking for these\nterms, also putting them in order according to equation 2.\nFor the first n selected search engines, the Document Seeker\ncreates new queries and add them to the queue so that they\ncan be submitted afterwards in the same way done by the\nForm Inquirer. The queries are created by filling in form's\nonly text field with the searching terms of the user and the\nother fields with default values.\nSearch form\nResults\nNew queries\nFigure 5: Interface for the search of indexed documents\n5.4\nThe searching interface\nA searching interface was build for the purpose of carrying\nout the tests. As shown in Figure 5, it is divided into three\nparts.\nThe searching form allows the user to provide the terms\nof the search. 
Furthermore, it is possible to choose what\nthe target of the search is: all the pages, only pages which\nhave forms or pages containing the results of forms. The\nresults are shown on a list of documents that contain all the\nsearching terms offered by the user. On pages that contain a\n2\nhttp://www.gigablast.com/\nquery associated to them, it is possible to visualise also the\nparameters. In the area called new queries the generated\nqueries which have been scheduled for the crawler's visit are\ndisplayed.\nEXPERIMENTAL RESULTS\nThe tests which have been carried out aim to test some\nstrategies used in the implementation of the prototype, hence\nsome important aspects of the system were tested separately.\nBesides that, an analysis of the indexed content regarding its\nabsence or not in the current searching engines was carried\nout.\nIn order to support the tests, we started up the crawlers\nand kept them up until we have 15 thousands indexed pages\n(including only pages with HTML forms and form results).\nIt worth repeat that our strategy does not index pages that\ndo not offer any challenge to regular crawlers.\n6.1\nLabel extraction algorithm evaluation\nThis phase aims to evaluate the algorithm used for the\nextraction of the labels from the form fields that was de-scribed\nin section 5.1. To do so, 100 forms were manually\nobserved and compared to the information extracted by the\nForm Extractor. For each one of these fields it was verified\nwhether the choice made by the algorithm was correct or\nnot. From the 100 forms evaluated, 5 of them (5%) were\nnot extracted.\nThe reason for this is that the HTML was malformed and\nthe API used for the extraction of the DOM tree (NekoHtml\n[3]) did not manage to recover the error. This way, 189 fields\nfrom the 95 remaining forms were verified. For 167 of them\n(88%), the algorithm extracted the label correctly, making\nmistakes only in 22 labels (see Figure 6).\nLabels\nextracted (88%)\ncorrectly\nLabels\nextracted (12%)\nincorrectly\nFigure 6: Fraction of labels extracted correctly\nSome labels were not extracted correctly because they did\nnot fit within the restrictions defined in section 5.1. Another\nproblem faced was when the labels were not defined inside\nthe tag FORM. In this case they were not present in the sub-tree\nanalysed by the Form Extractor, making its extraction\nimpossible.\nAlthough our solution did not reach the HiWE [7] accuracy\nto extract labels, we prove that it is possible to get\nvery close results without rendering the page (that consumes\nmuch computing resources). Moreover, many of the problems\nfaced in this experiment can be fixed without much\neffort.\n14\n6.2\nRelevances of the queries generated by the\nDocument Seeker\nIn order to analyse the results obtained by the new generated\nqueries from the searches of users, 80 search queries\nwere submitted to the prototype by using arbitrary terms\nthat are commonly used in general purpose search engines,\nsuch as \"World Cup\" or \"Music Lyrics\".\nFor each list of queries generated by the Document Seeker,\nthe first five ones were submitted and analysed manually,\ntotalizing 155 pages with results. Each page was verified\nwhether the query was successful or not. A successful query\nis one which has one or more results in it, in contrast to pages\nwith no result or pages that was not considered a search\nengine results page (e.g. 
mailing list registration form).\nErrors (10%)\nQueries with no results (24%)\nSuccessful queries (66%)\nFigure 7: Utilizations of the new queries generated\nby\nDocument Seeker\nThe result obtained was that 66% of the submitted queries\nbrought some results back, 24% were not successful and 10%\nof the pages, for some reason, could not be recovered. Figure\n7 illustrates better the obtained utilization.\nOnce the most relevant queries are returned taking first\nplace, it is probable that, with a larger number of indexed\npages, we will get better results.\n6.3\nVisibility of the indexed content\nThe implementation of SmartCrawl has as aim the pages\ngenerated from the filling of the HTML forms and which are\npotentially part of the hidden web. Despite the fact that the\ncurrent commercial search engines do not have an automatic\nmechanism which fills in the forms fields and obtain these\ndata, as the SmartCrawl does, through common links, part\nof this information can be explored.\nFor instance, a page with the results of a form that utilizes\nthe HTTP method GET can be accessed through an usual\nlink because all its parameters can be passed in the URL\nstring itself. In addition to this, once the content is stored\nin databases, it is not so difficult to find pages that offer the\nsame information through different interfaces.\nThis phase aimed to verify how much of the indexed content\ncan also be accessed by Google. In order to do this,\n300 pages with results of GET and POST forms were observed\nif, for one of the reasons stated before, they were not\nindexed by this general purpose search engine.\nWe found out that 62% of these pages are not indexed\nby Google. When only queries which use the method HTTP\nPOST (which are 59% from the total) are observed, this\nnumber becomes even greater, leaving just 14% reachable\nby Google.\nCONCLUSIONS AND FUTURE WORK\nThis work proposed a search engine prototype which is capable\nof handling with HTML forms as well as filling them\nin automatically in order to obtain information which is un-reachable\nby the current search engines. When compared to\nother solutions, as the HiWE [7] and the solution that redirects\nqueries by Lin and Chen [6], the SmartCrawl brings a\nbig differential which is the ability of surpassing great number\nof forms. The mentioned solutions have severe restrictions\nwhich directly affect the number of forms that receives\nqueries.\nThere is a great deal of work still to be attached to the solution\nfor a better exploration of the recovered content. An\nexample of this is that SmartCrawl does not make any analysis\nof the pages obtained as results of the queries, therefore\nindexing pages which contain errors and no results. The implementation\nof an algorithm which recognizes these pages\nwould increase the quality of the indexed data.\nBesides, a high performance structure was not used for\nthe storage of the indexes. This resulted in slow searching\nand indexing. A future work will be the implementation of\na new indexing module.\nREFERENCES\n[1] M. K. Bergman. The Deep Web: Surfacing Hidden\nValue. 2001.\n[2] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual Web search engine. Computer Networks\nand ISDN Systems, 30(17):107117, 1998.\n[3] A. Clark. Cyberneko html parser, 2004.\nhttp://www.apache.org/ andyc/.\n[4] J. Lage, A. Silva, P. Golgher, and A. Laender.\nCollecting hidden web pages for data extraction. 
In\nProceedings of the 4th ACM International Workshop\non Web Information and Data Management, 2002.\n[5] S. Liddle, D. Embley, D. Scott, and S. H. Yau.\nExtracting data behind web forms. In Proceedings of\nthe Workshop on Conceptual Modeling Approaches for\ne-Business, pages 3849, 2002.\n[6] K.-I. Lin and H. Chen. Automatic information\ndiscovery from the invisible web. In Proceedings of the\nThe International Conference on Information\nTechnology: Coding and Computing (ITCC'02), pages\n332337, 2002.\n[7] S. Raghavan and H. Garcia-Molina. Crawling the\nhidden web. In Proceedings of the 27th International\nConference on Very Large Databases, pages 129138,\n2001.\n[8] A. Rappoport. Checklist for search robot crawling and\nindexing, 2004.\nhttp://www.searchtools.com/robots/robot-checklist\n.html.\n[9] C. Sherman and G. Price. The Invisible Web:\nUncovering Information Sources Search Engines Can't\nSee. CyberAge Books, 2001.\n[10] V. Shkapenyuk and T. Suel. Design and\nimplementation of a high-performance distributed web\ncrawler. In Proceedings of the 18th International\nConference on Data Engineering, pages 357368.\n[11] D. Sullivan. Internet Top Information Resource, Study\nFinds, 2001.\n15", "keywords": "implementation;architecture;Label Extraction;experimentation;html form;SmartCrawl;web crawler;hidden web content;information retrieval;search engine;Search Engine;extraction algorithm;Hidden Web"} {"name": "182", "title": "Sparsha: A Comprehensive Indian Language Toolset for the Blind", "abstract": "Braille and audio feedback based systems have vastly improved the lives of the visually impaired across a wide majority of the globe. However, more than 13 million visually impaired people in the Indian sub-continent could not benefit much from such systems. This was primarily due to the difference in the technology required for Indian languages compared to those corresponding to other popular languages of the world. In this paper, we describe the Sparsha toolset. The contribution made by this research has enabled the visually impaired to read and write in Indian vernaculars with the help of a computer.", "fulltext": "INTRODUCTION\nThe advent of computer systems has opened up many avenues for\nthe visually impaired. They have benefited immensely from\ncomputer based systems like automatic text-to-Braille translation\nsystems and audio feedback based virtual environments. Automatic\ntext-to-Braille translation systems are widely available for languages\nlike English, French, Spanish, Portuguese, and Swedish [7, 26, 18,\n16]. Similarly audio feedback based interfaces like screen readers\nare available for English and other languages [ref c, 8, 20]. These\ntechnologies have enabled the visually impaired to communicate\neffectively with other sighted people and also harness the power of\nthe Internet.\n\nHowever, most of these technologies remained unusable to the large\nvisually impaired population in the Indian sub-continent [17]. This\ncrisis can be attributed to primarily two reasons. First, the languages\nin the mentioned region differ widely from other popular languages\nin the world, like English.\nThese languages or vernaculars also use relatively complex scripts\nfor writing. Hence, the technologies used for English and other such\nlanguages cannot be easily extended to these languages. 
Secondly,\nthe development of these technologies for Indian languages, right\nfrom scratch, is not trivial as the various Indian languages also differ\nsignificantly amongst themselves.\nThe Sparsha toolset uses a number of innovative techniques to\novercome the above mentioned challenges and provides a unified\nframework for a large number of popular Indian languages. Each of\nthe tools of Sparsha will be discussed in detail in the following\nsections. Apart from English the languages supported by Sparsha\ninclude Hindi, Bengali, Assamese, Marathi, Gujarati, Oriya, Telugu\nand Kannada. The motivation for this work is to enable the visually\nimpaired to read and write in all Indian languages. The toolset set\nhas been named Sparsha since the word \"Sparsha\" means \"touch\" in\nHindi, something which is closely associated with how Braille is\nread.\n\nBHARATI BRAILLE TRANSLITERATION\nBharati Braille is a standard for writing text in Indian languages\nusing the six dot format of Braille. It uses a single script to represent\nall Indian languages. This is done by assigning the same Braille cell\nto characters in different languages that are phonetically equivalent.\nIn other words, the same combination of dots in a cell may represent\ndifferent characters in each of the different Indian languages.\nHowever, a single character in an Indian language may be\nrepresented by more than one Braille cell.\nThe above mentioned characteristics of Bharati Braille code is\nillustrated in Figure 1. There are many other issues and rules related\nto Bharati Braille. These will be discussed in the following sections\nalong with the methods used for implementing them.\n\nFigure 1. Examples of characters in Indian languages and their\ncorresponding Bharati Braille representation\n2.1 Transliteration to Bharati Braille\nAs shown in Figure 1, characters from different Indian languages\ncan be mapped to the same Braille representation. Thus, in order to\nimplement this, the system uses separate code tables for each of the\nlanguages and depending on the users choice of input language the\ncorresponding code table is used. The said method of\nimplementation also makes the system highly scalable and allows\nthe inclusion of more languages in future if required. For instance\nthis technique is being used successfully to extend the system to\ninclude Urdu and Sinhala. This work is expected to be completed in\nthe near future.\n\nFigure 2. Formation of Conjugates\n\nAnother important aspect of Indian languages is the formation of\nconsonant clusters or conjugates. In traditional hand written text this\nmay be expressed conceptually as the first consonant followed by a\nspecial character called halanth which in turn is followed by the\nsecond character. The consonant cluster may again be followed by a\nvowel. However, the visual representation of such a consonant\ncluster or conjugate may be quite different from the visual\nrepresentation of each of the individual consonants included in it, as\nshown in Figure 2. However, while translating the same text into\nBharati Braille the special character halanth must precede both the\nconsonants to be combined into a single conjugate.\nThe above constraints necessarily mean that the Braille translation\nfor a particular character also depends on the sequence of characters\npreceding and following it.\nHence, in order to perform the tasks efficiently the system uses a\nfinite state machine based approach similar to that of lexical\nanalyzers [3, 6]. 
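As a rough illustration of the code-table lookup and the halanth reordering described above, the following Python sketch translates a short Devanagari string into a sequence of Braille cells. The table, the dot patterns and the function name are our own and purely illustrative; the real Bharati Braille tables are far larger, and Sparsha layers the finite state machine mentioned above on top of such lookups to handle the context-dependent rules.

# Minimal sketch of code-table based text-to-Bharati-Braille translation.
# The tiny table and the dot patterns below are placeholders, not the
# actual Bharati Braille assignments or the Sparsha implementation.

HALANTH = "\u094d"  # Devanagari virama ("halanth")

# Hypothetical per-language code tables: character -> list of Braille cells,
# where a cell is the set of raised dot numbers (1..6). Phonetically
# equivalent characters in other languages would map to the same cells.
CODE_TABLES = {
    "hindi": {
        "\u0915": [frozenset({1, 3})],     # letter KA (illustrative cell)
        "\u092e": [frozenset({1, 3, 4})],  # letter MA (illustrative cell)
        "\u093e": [frozenset({2, 4, 5})],  # vowel sign AA (illustrative cell)
        HALANTH:  [frozenset({4})],        # halanth (illustrative cell)
    },
}

def to_bharati_braille(text, language):
    """Translate text to a list of Braille cells for the chosen language."""
    table = CODE_TABLES[language]
    cells = []
    i = 0
    while i < len(text):
        ch = text[i]
        # A consonant followed by halanth starts a conjugate: in Braille the
        # halanth must precede both consonants, so emit it first.
        if i + 1 < len(text) and text[i + 1] == HALANTH and ch in table:
            cells.extend(table[HALANTH])
            cells.extend(table[ch])
            i += 2          # the second consonant is handled on the next pass
            continue
        cells.extend(table.get(ch, []))  # characters not in the table are skipped
        i += 1
    return cells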
The mentioned approach also proves to be suitable\nfor handling other issues associated\nwith standard Braille\ntranslation like detection of opening and closing quotation\nmarks, string of uppercase characters.\nApart from Indian languages the Sparsha system supports the\ntranslation of English language texts into grade 1 and grade 2\nBraille. The system maintains a database of all standard Braille\ncontractions which is used for generating grade 2 Braille.\nFurthermore, the system allows the user to add new contractions to\nthe existing database.\nThe Sparsha system also supports the proper translation of a\ndocument containing text both in English as well as an Indian\nLanguage. According to standard Braille notations [11] the change\nin language is indicated through the proper use of the letter sign.\nHowever, a single document containing text in more than one Indian\nlanguage cannot be translated into Braille, such that the reader is\nable to distinguish each of the languages correctly. This is due to the\nfollowing reason. As mentioned previously the same Braille\nrepresentation can refer to different characters in different Indian\nlanguages, this leads to the inherent ambiguity.\n\n\nFigure 3. A screenshot of the interface for translating and\nediting Braille in the Sparsha system\n\n2.2 Reverse transliteration\nThe Sparsha system allows reverse transliteration of Braille to text\nboth for Indian languages as well as English. This allows the\nvisually impaired to communicate seamlessly with other sighted\npeople. The Braille code to be translated may be entered into the\ncomputer using a standard six key Braille keyboard. After\ntranslating the Braille code into text, the visually readable text may\nthen be checked for correctness using a file reading system which\nwill be described in later section.\nIn order to achieve reverse translation from Braille to text, the\nsystem uses a finite state machine based approach similar to that\nused for translating text to Braille as described previously. The task\nof reverse translation also uses the code tables corresponding to the\nlanguage to which the text is being translated. Thus the system can\neasily be extended to other languages just by adding the\ncorresponding code tables to achieve both forward and\nreverse translation.\n2.3 Methods of Input - Output\nSparsha can accept English text, for translation, in the form of plain\ntext files, HTML (hyper text markup language) files and Microsoft\nWord documents. Apart from English the Sparsha Braille translation\nsystem, as described, can take input text in Indian languages. This\ninput can be given to the system in a number of forms as follows:\n\nISCII (Indian Script Code for Information Interchange)\n[24] documents generated by applications like iLeap [10]\n\nLP2 documents generated by iLeap [10]\n\nUnicode text generated by any standard editor\nsupporting Unicode [25]. This technique will be\ndiscussed in detail in a later section\nThe output of the Braille translation can be obtained on a large\nvariety of commercial Braille embossers [23]. The Sparsha system\nhas been tested on the following Braille embossers:\n\nIndex Basic-S\n\nIndex Basic-D\n\nIndex 4X4 PRO\n+ + + =\n\n115\n\nBraillo 400\n\nModified Perkins Brailler [15]\nAlternatively the output may be obtained on tactile Braille displays\n[1].\n\nBRAILLE MATHEMATICS\nAt the time of this development there existed a few translators for\nconverting mathematics to Braille [b]. 
However, these were found to\nbe unsuitable for the visually impaired in the Indian sub-continent\ndue to a number of reasons. Firstly the Braille code used for\nmathematics in India is slightly different from those used in other\nparts of the world [4], however, it bears close resemblance to the\nNemeth code [5]. Secondly the interleaving of Braille mathematics\nwith text in Indian languages was also not possible with the\navailable systems. Thirdly many of these systems require a working\nknowledge of LaTex [13]. This cannot be expected from every user.\nFinally, most of these systems are unaffordable to the visually\nimpaired in the Indian sub-continent.\nThe above mentioned reasons warranted the development of a\nmathematics-to-Braille translation system for the Indian subcontinent\n. The system thus developed can translate almost all\nmathematic and scientific notations. It also allows the user to\ninterleave mathematic and scientific expressions with text in both\nIndian languages and English.\nIn order to allow the user to write complex mathematic and\nscientific expressions, the system provides a special editor for the\npurpose. The above mentioned editor is named \"Nemeth editor\"\nafter Abraham Nemeth [t]. Thus the user is exempted from the task\nof learning LaTex. The editor provides a GUI (Graphic User\nInterface) as shown in Figure 4 for writing a mathematic or scientific\nexpression in a form similar to that used by LaTex. This string can\nthen be readily converted into Braille by the translation engine.\nHowever, the mathematical expression formed by the editor must be\nenclosed within a pair of special character sequences. This needs to\nbe done so that when the mathematic or scientific expression is\nembedded within another English or Indian language text, it is\nproperly translated to Braille using the standard for mathematic and\nscientific notation.\n\n\nFigure 4. Screenshot of the Nemeth Editor\n\nThe selection of mathematical symbols and notations is done by the\nuser in a menu driven fashion using the GUI. The set of all\nmathematic and scientific notations is partitioned in to separate\ncollections, each consisting of similar notations. Alternatively the\ntext may be entered by the user in a LaTex like format using any\nstandard text editor.\nSPARSHA CHITRA\nElementary tactile graphics is one of the best methods for\nintroducing certain subjects, like geometry, to visually impaired\nstudents. However, such tactile graphics have remained outside the\nreach of the common man. This is due to the fact that sophisticated\nBraille embossers and expensive image conversion software are\nnecessary for the purpose. Sparsha Chitra aims to provide relatively\nsimple tactile graphics which can be obtained even by using low\ncost Braille embossers like the modified Perkins Brailler [15]. In\nother words no assumptions have been about any special feature of\nthe Braille embosser being used. This allows tactile graphics to be\nembossed using just the Braille embossing capability of the\nembosser. The tactile graphics obtained from any image may be\nviewed and edited before finally being embossed. The image may\nalso be scaled up or down to a size suitable for embossing. The\nsystem also allows the image color to be inverted in order to\nimprove the contrast.\nSparsha Chitra takes its input in HTML format such that additional\ntext can be included along with the tactile representation of the\nimage. 
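The core of such a tool can be pictured as mapping dark pixels to raised dots in ordinary six-dot cells. The sketch below is our own simplification rather than the Sparsha Chitra code: it assumes the image is already available as a grid of grayscale values, uses Unicode Braille characters merely as an on-screen preview of the dot pattern, and leaves out the HTML handling, interactive editing and embosser output described above.

# Sketch: grayscale image -> rows of plain 6-dot Braille cells.
# All names and defaults here are illustrative.

def image_to_braille_cells(pixels, threshold=128, invert=False, scale=1.0):
    """pixels: 2D list of grayscale values (0 = black .. 255 = white)."""
    if invert:                      # contrast inversion, as offered by the tool
        pixels = [[255 - p for p in row] for row in pixels]

    if scale != 1.0:                # crude nearest-neighbour scaling
        h, w = len(pixels), len(pixels[0])
        pixels = [[pixels[int(y / scale)][int(x / scale)]
                   for x in range(int(w * scale))]
                  for y in range(int(h * scale))]

    # A dot is raised wherever the pixel is darker than the threshold.
    dots = [[p < threshold for p in row] for row in pixels]

    # Braille dot numbering within the 2x3 cell:  1 4
    #                                             2 5
    #                                             3 6
    weights = {(0, 0): 0x01, (1, 0): 0x02, (2, 0): 0x04,
               (0, 1): 0x08, (1, 1): 0x10, (2, 1): 0x20}
    rows = []
    for top in range(0, len(dots) - 2, 3):
        line = ""
        for left in range(0, len(dots[0]) - 1, 2):
            mask = 0
            for (dy, dx), bit in weights.items():
                if dots[top + dy][left + dx]:
                    mask |= bit
            line += chr(0x2800 + mask)  # U+2800 block encodes the dot pattern
        rows.append(line)
    return rows

Because only standard cells are produced, the resulting pattern can be sent to any embosser capable of ordinary Braille, which is the constraint the tool is designed around.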
Sparsha is the feeling of touch and \"Chitra\" in Hindi means\n\"picture\" and thus this tool is named Sparsha Chitra.\nThe primary limitation of this tool is that complex images cannot be\nrepresented very clearly. However, the effect of this drawback is\nmitigated by the fact that the amount of detail that can be observed\nthrough touch is also limited. Furthermore the size of the tactile\nimage is restricted by the bounds imposed by the sheet on which it is\nembossed. The functions for scaling the tactile image may prove to\nbe useful in such a case.\n\nFigure 5. Screenshot of Spasha Chitra\n\nFILE READER\nIn order to enter text into the computer, in English, a visually\nimpaired user can take the help of any standard screen reader [12, 8,\n20 ]. Screen readers have proved to be vital to visually impaired\ncomputer users [19].Such screen readers are commercially available.\nHowever, such screen readers are not available for Indian languages.\n116\nThis was primarily due to the reasons mentioned at the beginning of\nthis paper.\nThe file reader which will be described in this section will redeem\nthe situation and allow the user to type in text in Indian languages\nusing Microsoft Word. For performing other tasks related to the\noperating system the user may use any of the standard screen\nreaders. The construction of such a file reader requires a number of\nvital components [2]. These include text-to-speech engines for\nIndian languages, fonts for Indian languages, keyboard layouts for\nthem, proper rendering engines and a text editor which can support\nIndian languages. Each of these components will be described\nbriefly in the following sections. This will be followed by a\ndescription of the overall architecture of the system and its\nfunctioning.\n5.2 Speech synthesis system\nA speech synthesis system is vital for the functioning of any screen\nreader. It is responsible for producing human voice rendition of the\ntext provided to it by the screen reader. In case of screen readers the\nspeech synthesis system should be able to deliver the voice in real-time\n. This is necessary for visually impaired users to get\ninstantaneous audio feedback.\nA multilingual screen reader necessarily needs a speech synthesis\nsystem for each of the languages that it supports.\nThe mentioned file reader uses a speech synthesis engine for Indian\nlanguages called Shruti [22]. Shruti support two popular Indian\nlanguages namely Hindi and Bengali. It uses a method of di-phone\nconcatenation for speech synthesis. This allows the speech synthesis\nsystem to produce reasonable real-time performance, at the same\ntime maintaining a low memory space requirement.\n5.3 Fonts and Rendering\nThere are number issues involved with Indian language fonts and\ntheir rendering. This is due to the fact that Indian language scripts\nare generally complex in nature. The Microsoft Windows system\ncan be configure for correctly rendering these complex Indian\nlanguage scripts. Correct rendering of fonts is achieved through the\nuse of Uniscribe (Unicode Script Processor) and OTLS (OpenType\nLayout Services) libraries [9, 14]. Furthermore glyph substitution\nand glyph repositioning, as shown in Figure 2, are closely associated\nwith the rendering of text in Indian languages. For this reason\nOpenType fonts have been found to be suitable for Indian languages\nas they carry, within the font file, explicit information about glyph\nsubstitution and glyph positioning. 
This maintained in the form of\ntwo tables namely GSUB (Glyph Substitution) and GPOS (Glyph\nPositioning).\n5.4\nEditor for Indian Languages\nA number of text editor are available for Indian languages. Many of\nthese editors are difficult to use and are non-intuitive. On the other\nhand it has been observed that Microsoft Word XP (Word 2002)\nperforms reasonably well for Indian languages when proper fonts\nand rendering engines are used. Thus, Microsoft Word has been\nused instead of creating a new editor for the file reading application\nas shown in Figure 7. Microsoft Word also provides certain\nadditional features which have been used extensively for the\ndevelopment of the file reader. These features have been discussed\nin detail in the following paragraphs. The use of Microsoft Word\nalso motivates visually impaired users to switch to main stream\napplications and also eliminates the effort of learning another\nsystem.\nMicrosoft Word supports Unicode [25], hence it can accept text in\nany Indian language. However, in order to enter text in an Indian\nlanguage in the Windows system a keyboard layout or IME (Input\nMethod Editor) [21] for that language is required. Keyboard layouts\nare available for some popular Indian languages like Hindi. For\nother Indian languages it may have to be created. In our case a\nkeyboard layout had to be created for Bengali.\nFigure 6. File reader System Architecture\n117\nThe capabilities of the Microsoft Word can be extended using COM\n(Component Object Model) Add-Ins. Such Add-Ins are basically\nprograms that run within the framework provided by Microsoft\nWord. The file reader has been developed in the form of such an\nAdd-In. It interacts closely with the editor to provide necessary\naudio feedback for text in Indian languages. Such interaction takes\nplace through the object model exposed by Microsoft Word. The file\nreader may be configured to start up every time Microsoft Word is\nused.\n5.5 Overall System Structure and Operation\nThe overall architectural structure of the file reader system is shown\nin Figure 6. Most of the components of the system shown in the\nfigure have been discussed in the last few sections. The interaction\nbetween the different components and how they operate as a system\nwill be discussed in this section.\nKeyboard hooks are placed within the operating system by the file\nreader Add-In. The keyboard hooks are responsible for trapping the\nkeystrokes entered by the user through the keyboard. A copy of the\nentered keystrokes is passed to the file reader Add-In. The\nkeystrokes are then passed to the keyboard layout or IME which is\nintegrated with the operating system. The keyboard layout translates\nthe keystrokes into Unicode characters and passes them to the\neditor.\n\n\nFigure 7. Screenshot of the file reader in operation using\nMicrosoft Word\nIn the mean while the file-reader Add-In, on receiving the\nkeystrokes, provides appropriate audio feedback by invoking the\nspeech synthesis engine. Again, certain combinations of keystrokes\nare recognized by the file reader Add-In as special commands.\nThese request the file reader to read out a certain portions of the\ntext. This selected text is then passed to the speech synthesis system\nfor producing human voice rendition. Thus providing an audio\nfeedback based virtual environment for Indian languages. 
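The flow just described can be summarized in a few lines. The sketch below is schematic and entirely our own: speak, layout_to_unicode, read_selection and the command chord stand in for Shruti, the keyboard layout/IME and the Word object model, none of whose actual interfaces are reproduced here.

# Schematic keystroke handling in the file reader Add-In (illustrative only).

READ_COMMANDS = {("ctrl", "shift", "r")}  # hypothetical "read selection" chord

def on_keystroke(keys, editor, layout_to_unicode, speak, read_selection):
    """Called by the keyboard hook for every trapped keystroke."""
    if tuple(keys) in READ_COMMANDS:
        # Special key combinations ask the file reader to read out text.
        speak(read_selection(editor))
        return

    # The keyboard layout/IME turns the raw keys into Unicode characters
    # and hands them to the editor...
    char = layout_to_unicode(keys)
    editor.insert(char)

    # ...while the Add-In echoes the same character as audio feedback.
    speak(char)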
The file reader can be further extended to provide full screen reading functionality by using Microsoft Active Accessibility.
SYSTEM EVALUATION
A subset of the Sparsha system, known as the Bharati Braille Transliteration System, has been deployed by Webel Mediatronics Limited in a number of organizations for the visually impaired all over India as a part of a project sponsored by the Ministry of Communication and Information Technology, Government of India. As a result of these field tests the system underwent an iterative process of refinement to reach its current form. A plethora of requests and suggestions from visually impaired users led to the development and inclusion of a number of additional features and tools in the toolset. These include Sparsha Chitra and the file readers for Indian languages. The process of continuous feedback helped the Sparsha toolset mature over the years. It also helped in weeding out many bugs and shortcomings of the initial versions of the system.
6.2 Obtained Results
The Sparsha system is under a continuing process of use and evaluation. This feedback is being used to make the system more usable to the visually impaired and to enhance the features provided by the system. Training and deployment of the system have also been carried out at a number of premier organizations for the visually impaired. These include:
The National Association for the Blind, Delhi
Blind Peoples' Association, Ahmedabad
National Institute for the Visually Handicapped, Dehradun
The Braille translation system has been tested on a large number of computers in these organizations. The typical performance characteristics of the Sparsha Braille translation system are as shown in Figure 8. The performance characteristics have been measured for two different personal computer systems.
Figure 8. Graphs showing the computation time (in milliseconds) against word count during Braille translation for (a) Grade II English and (b) Grade I English and Indian languages
The computers have the following specifications (processor type, primary memory, hard disk):
Intel Pentium 4 3GHz, 512MB, 80GB, referred to as P4
Intel Pentium III 550MHz, 256MB, 40GB, referred to as P3
The Sparsha Chitra tool was tested by visually impaired users and the obtained results are given in Figure 9. This was done by handing them sheets of Braille paper with tactile diagrams created by Sparsha Chitra and asking them to guess the image on the sheets.
Figure 9. User response to tactile images generated by Sparsha Chitra (correct guess: 40%; close guess: 35%; cannot guess: 20%; wrong guess: 5%)
Most of these images were geometric figures. The majority of the guesses were correct while a large percentage of the guesses were very close, like identifying a rectangle as a square or a triangle as a mountain.
Such misinterpretations often occur due to lack of color information or misjudged dimensions, which are indeed quite difficult to estimate from tactile representations.
The file reader tool needs extensive training before a naive user can use it efficiently. Visually impaired users who are already familiar with Jaws or other screen readers can adapt to this system very quickly. This tool was primarily tested by visually impaired users having reasonable experience with Jaws.
Figure 10. Comparison of the typing speed (in words per minute) of a visually impaired user using Jaws and the Indian language file reader
The Indian language file reader could not be tested with a large number of users since a good level of expertise with screen readers is required for using the file reader efficiently. The experimental results shown in Figures 10 and 11 pertain to a particular visually impaired user having some experience with Jaws. The experiments were carried out by dictating a paragraph of about a hundred words to the user while he typed it into the computer using the Indian language file reader. However, this is only a preliminary experiment. It was also found that both the typing speed and the error rates improved significantly with practice.
Figure 11. Number of words with errors for every ten words (no errors: 10%; one error: 30%; two errors: 40%; more than two errors: 20%)
LIMITATIONS AND FUTURE WORK
The Sparsha toolset is in the process of being extended to a number of other languages in the Indian subcontinent. These include Urdu and Sinhala. The file reader in the Sparsha toolset is also limited by the availability of text-to-speech synthesis engines for all Indian languages. Hopefully these will be available in the near future and allow the system to be extended to more languages. It is also envisioned that the Sparsha system will be ported to mobile handheld systems. This will enable the visually impaired to communicate on the move. As of now, the required text-to-speech synthesis engines have been ported onto the Microsoft Pocket PC platform as well as on an ARM-Linux platform. It is only a matter of time before the file reader becomes functional on such mobile platforms.
CONCLUSION
The Sparsha system, named after the feeling of touch, has been the first attempt to help visually impaired users in the Indian subcontinent read and write in their native tongues. In this paper the various aspects of Indian languages, and how they differ from other languages in the world, have been explained. It has also been discussed how these issues have been tackled in the Sparsha system.
The paper describes in depth the various tools included in the Sparsha toolset. These tools form a comprehensive toolset for Indian languages. It can be hoped that the Sparsha system will help increase the literacy rates among the 13 million visually impaired in the Indian subcontinent.
ACKNOWLEDGMENTS
The authors would like to thank Media Lab Asia for sponsoring a part of the work related to the file reader. The authors would also like to thank the National Association for the Blind, Delhi, and many other organizations for the blind for their sustained help and cooperation during the entire development process. The authors owe special thanks to Mr.
Samit Patra, Director, Electrosoft Consultants\nfor his enormous help with many technical aspects of the work.\n119\n\nREFERENCES\n[1] Basu Anupam, Roy S., Dutta P. and Banerjee S., \"A PC\nBased Multi-user Braille.Reading System for the Blind\nLibraries\", IEEE Transactions on Rehabilitation Engineering,\nVol. 6, No. 1, March 1998, pp.60--68\n[2] Blenkhorn, P. \"Requirements for Screen Access Software\nusing Synthetic Speech\". Journal of Microcomputer\nApplications, 16, 243-248, 1993.\n[3] Blenkhorn Paul, \"A System for Converting Braille to Print\",\nIEEE Transactions on Rehabilitation Engineering, Vol. 3, No.\n2, June 1995, pp. 215-221\n[4] Braille Mathematics Code for India Manual, Prepared under\nthe project \"Adoption and Introduction of an Appropriate\nBraille Mathematics Code for India\", sponsored by UNICEF,\nPublished by National Institute for Visually Handicapped,\nDehra Dun and National Association for the Blind, Bombay,\nIndia\n[5] Cranmer T. V. and Abraham Nemeth, A Uniform Braille\nCode, memo to the members of the BANA Board (January 15,\n1991); Available at: http://www.nfb.org or\nhttp://world.std.com/~iceb/\n[6] Das, P.K.; Das, R.; Chaudhuri, A., \"A computerised Braille\ntranscriptor for the visually handicapped\". Engineering in\nMedicine and Biology Society, 1995 and 14th Conference of\nthe Biomedical Engineering Society of India. An International\nMeeting, Proceedings of the First Regional Conference, IEEE.\n15-18 Feb. 1995 Page(s):3/7 - 3/8\n[7] Duxbury Braille Translator, 2000,\nhttp://www.duxburysystems.com/products.asp\n[8] HAL. Dolphin Computer Access,\nhttp://www.dolphinuk.co.uk/products/hal.htm\n[9] Hudson, John for Microsoft Typography, \"Windows Glyph\nProcessing : an Open Type Primer\", November 2000,\nhttp://www.microsoft.com/typography/glyph%20processing/i\nntro.mspx\n[10] iLeap. Centre for Development of Advanced Computing.\nhttp://www.cdacindia.com/html/gist/products/ileap.asp\n[11] International Council on English Braille (ICEB), Unified\nEnglish Braille Code (UEBC) Research Project,\nhttp://www.iceb.org/ubc.html\n[12] JAWS for Window. Freedom Scientific.\nhttp://www.freedomscientific.com/fs_products/software_jaws.\nasp\n[13] Lamport L. LaTeX - A Document Preparation System,\nAddison-Wesley, 1985, ISBN 0-201-15790-X.\n[14] Microsoft Typography, \"Specifications : overview\",\nhttp://www.microsoft.com/typography/SpecificationsOvervie\nw.mspx\n[15] Modified Perkins Brailler, Webel Mediatronics Limited.\nhttp://www.braille-aids.com/emboss.htm\n[16] MONTY, VisuAide. http://www.visuaide.com/monty.html\n[17] National Association for the Blind, India, 2002. Available at\nhttp://www.nabindia.org/sited/infor06.htm\n[18] NFBTRANS. National Federation of the Blind, 2004,\nhttp://www.nfb.org/nfbtrans.htm\n[19] Pennington C.A. and McCoy K.F., Providing Intelligent\nLanguage Feedback or Augmentative Communication Users,\nSpringer-Verlag, 1998.\n[20] Raman T.V. (1996). \"Emacspeak a speech interface\".\nProceedings of CHI96, April 1996\n[21] Rolfe, Russ \"What is an IME (Input Method Editor) and how\ndo I use it?\" http://www.microsoft.com/globaldev/handson\n[22] Shruti, Media Lab Asia Research Laboratory, Indian Institute\nof Technology, Kharagpur.\nhttp://www.mla.iitkgp.ernet.in/projects/shruti.html\n[23] Taylor Anne, \"Choosing your Braille Embosser\", Braille\nMonitor, October 200. 
Available at\nhttp://www.nfb.org/bm/bm01/bm0110/bm011007.htm\n[24] Technology Development for Indian Languages, Department\nof Information Technology, Ministry of Communication &\nInformation Technology, Government of India. Available at\nhttp://tdil.mit.gov.in/standards.htm\n[25] Unicode. http://www.unicode.org\n[26] WinBraille. Index Braille.\nhttp://www.braille.se/downloads/winbraille.htm\n\n\n120\n", "keywords": "audio feedback;Indian languages;Braille;Visual impairment"} {"name": "183", "title": "StyleCam: Interactive Stylized 3D Navigation using Integrated Spatial & Temporal Controls", "abstract": "This paper describes StyleCam, an approach for authoring 3D viewing experiences that incorporate stylistic elements that are not available in typical 3D viewers. A key aspect of StyleCam is that it allows the author to significantly tailor what the user sees and when they see it. The resulting viewing experience can approach the visual richness and pacing of highly authored visual content such as television commercials or feature films. At the same time, StyleCam allows for a satisfying level of interactivity while avoiding the problems inherent in using unconstrained camera models. The main components of StyleCam are camera surfaces which spatially constrain the viewing camera; animation clips that allow for visually appealing transitions between different camera surfaces; and a simple, unified, interaction technique that permits the user to seamlessly and continuously move between spatial-control of the camera and temporal-control of the animated transitions. Further, the user's focus of attention is always kept on the content, and not on extraneous interface widgets. In addition to describing the conceptual model of StyleCam, its current implementation, and an example authored experience, we also present the results of an evaluation involving real users.", "fulltext": "INTRODUCTION\nComputer graphics has reached the stage where 3D models\ncan be created and rendered, often in real time on\ncommodity hardware, at a fidelity that is almost\nindistinguishable from the real thing. As such, it should be\nfeasible at the consumer level to use 3D models rather than\n2D images to represent or showcase various physical\nartifacts. Indeed, as an example, many product\nmanufacturers' websites are beginning to supply not only\nprofessionally produced 2D images of their products, but\nalso ways to view their products in 3D. Unfortunately, the\nvisual and interactive experience provided by these 3D\nviewers currently fall short of the slick, professionally\nproduced 2D images of the same items. For example, the\nquality of 2D imagery in an automobile's sales brochure\ntypically provides a richer and more compelling\npresentation of that automobile to the user than the\ninteractive 3D experiences provided on the manufacturer's\nwebsite. If these 3D viewers are to replace, or at the very\nleast be at par with, the 2D imagery, eliminating this\nr,\nviewpoint in the scene\ndifference in quality is critical.\nThe reasons for the poor quality of these 3D viewers fall\nroughly into two categories. First, 2D imagery is usually\nproduced by professional artists and photographers who are\nskilled at using this well-established artform to convey\ninformation, feelings, or experiences, whereas creators of\n3D models do not necessarily have the same established\nskills and are working in an evolving medium. Howeve\nthis problem will work itself out as the medium matures.\nThe second issue is more troublesome. 
In creating 2D\nimages a photographer can carefully control most of the\nelements that make up the shot including lighting and\nviewpoint, in an attempt to ensure that a viewer receives the\nintended message. In contrast, 3D viewers typically allow\nthe user to interactively move their\nto view any part of the 3D model.\n\nFigure 1. StyleCam authored elements\n\n\nThis results in a host of problems: a user may \"get lost\" in\nthe scene, view the model from awkward angles that\npresent it in poor light, miss seeing important features,\nexperience frustration at controlling their navigation, etc.\nAs such, given that the author of the 3D model does not\nhave control over all aspects of what the user eventually\nsees, they cannot ensure that 3D viewing conveys the\nintended messages. In the worse case, the problems in 3D\nviewing produce an experience completely opposite to the\nauthors intentions!\nThe goal of our present research is to develop a system,\nwhich we call StyleCam (Figure 1), where users viewing\n3D models can be guaranteed a certain level of quality in\nterms of their visual and interactive experience. Further, we\nintend that the system should not only avoid the problems\nsuggested earlier, but also have the capability to make the\ninteractive experience adhere to particular visual styles. For\nexample, with StyleCam one should be able to produce an\ninteractive viewing experience for a 3D model of an\nautomobile \"in the style of\" the television commercial for\nthat same automobile. Ultimately, a high-level goal of our\nresearch is to produce interactive 3D viewing experiences\nwhere, to use an old saying from the film industry, \"every\nframe is a Rembrandt\".\n1.1. Author vs. User Control\nCentral to our research is differentiating between the\nconcept of authoring an interactive 3D experience versus\nauthoring a 3D model which the user subsequently views\nusing general controls. If we look at the case of a typical\n3D viewer on the web, in terms of interaction, the original\nauthor of the 3D scene is limited to providing somewhat\nstandard camera controls such as pan, tumble and zoom.\nEssentially, control of the viewpoint is left up to the user\nand the author has limited influence on the overall\nexperience.\nFrom an author's perspective this is a significant\nimbalance. If we view an interactive experience by\ncinematic standards, an author (or director) of a movie has\ncontrol over several major elements: content/art direction,\nshading/lighting, viewpoint, and pacing. It is these elements\nthat determine the overall visual style of a movie. However,\nin the interactive experience provided by current 3D\nviewers, by placing control of the viewpoint completely in\nthe hands of the user, the author has surrendered control of\ntwo major elements of visual style: viewpoint and pacing.\nThus we desire a method for creating 3D interactive\nexperiences where an author can not only determine the\ncontent and shading but also the viewpoints and pacing.\nHowever, intrinsic in any interactive system is some degree\nof user control and therefore, more accurately, our desire is\nto allow the author to have methods to significantly\ninfluence the viewpoints and pacing in order to create\nparticular visual styles. Thus, we hope to strike a better\nbalance between author and user control. 
In order to\nachieve this end, StyleCam incorporates an innovative\ninteraction technique that seamlessly integrates spatial\ncamera control with the temporal control of animation\nplayback.\n\nCONCEPTUAL MODEL\nIn order to provide author control or influence over\nviewpoints and pacing, we need a way for an author to\nexpress the viewpoints and the types of pacing they are\ninterested in. Thus we have developed three main elements\nupon which our StyleCam approach is based.\n1. Camera surfaces an author-created surface used to\nconstrain the users' movement of the viewpoint\n2. Animation clips an author-created set of visual\nsequences and effects whose playback may be\ncontrolled by the user. These can include:\nsophisticated camera movements.\nSlates 2D media such as images, movies,\ndocuments, or web pages.\nvisual effects such as fades, wipes, and edits.\nanimation of elements in the scene.\n3. Unified UI technique The user utilizes a single\nmethod of interaction (dragging) to control the\nviewpoint, animation clips, and the transitions between\ncamera surfaces.\n2.1. Camera Surfaces\nIn the motion picture industry a money-shot is a shot with a\nparticular viewpoint that a director has deemed \"important\"\nin portraying a story or in setting the visual style of a\nmovie. Similarly, in advertising, money-shots are those\nwhich are the most effective in conveying the intended\nmessage. We borrow these concepts of a money-shot for\nour StyleCam system. Our money-shots are viewpoints that\nan author can use to broadly determine what a user will see.\nFurther, we use the concept of a camera surface as\nintroduced by Hanson and Wernert\n[19, 36]\n. When on a\ncamera surface, the virtual camera's spatial movement is\nconstrained to that surface. Further, each camera surface is\ndefined such that they incorporate a single money-shot.\nFigure 2 illustrates this notion.\nCamera surfaces can be used for various purposes. A small\ncamera surface can be thought of as an enhanced money-shot\nwhere the user is allowed to move their viewpoint a bit\nin order to get a sense of the 3-dimensionality of what they\nare looking at. Alternatively, the shape of the surface could\nbe used to provide some dramatic camera movements, for\nexample, sweeping across the front grill of a car. The key\nidea is that camera surfaces allow authors to conceptualize,\nvisualize, and express particular ranges of viewpoints they\ndeem important.\nIntrinsic in our authored interactions is the notion that\nmultiple camera surfaces can be used to capture multiple\nmoney-shots. Thus authors have the ability to influence a\nuser's viewpoint broadly, by adding different camera\nsurfaces, or locally by adjusting the shape of a camera\n102\nVolume 4, Issue 2\nsurface to allow a user to navigate through a range of\nviewpoints which are similar to a single particular money-shot\n. For example, as shown in Figure 2, camera surfaces at\nthe front and rear of the car provide two authored\nviewpoints of these parts of the car in which a user can\n\"move around a bit\" to get a better sense of the shape of the\nfront grille and rear tail design.\n\nFigure 2. Camera surfaces. The active camera is at the\nmoney-shot viewpoint on the first camera surface.\n\nThe rate at which a user moves around on a camera surface\n(Control-Display gain) can dramatically affect the style of\nthe experience. 
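To make the camera-surface constraint concrete, the sketch below stands a simple bilinear patch in for an authored surface; it is our own illustration, not the StyleCam plugin, which constrains a MAYA camera to an authored NURBS surface, and all class, method and parameter names are invented. Mouse drags update the surface coordinates through a gain factor, the camera is aimed at an optional look-at point, and leaving the [0,1] parameter range is the signal that a transition to another surface should begin.

# Toy surface-constrained camera (illustrative names and geometry).
import numpy as np

class CameraSurface:
    def __init__(self, corners, look_at=None, gain=0.005):
        # corners: four 3D points bounding the patch (p00, p10, p01, p11).
        self.p00, self.p10, self.p01, self.p11 = [np.asarray(c, float) for c in corners]
        self.look_at = None if look_at is None else np.asarray(look_at, float)
        self.gain = gain              # control-display gain for mouse drags
        self.u, self.v = 0.5, 0.5     # money-shot placed at the patch centre here

    def position(self):
        u, v = self.u, self.v         # bilinear interpolation across the patch
        return ((1 - u) * (1 - v) * self.p00 + u * (1 - v) * self.p10 +
                (1 - u) * v * self.p01 + u * v * self.p11)

    def view_direction(self):
        if self.look_at is not None:  # aim at the author-specified look-at point
            d = self.look_at - self.position()
        else:                         # otherwise orient along the surface normal
            d = np.cross(self.p10 - self.p00, self.p01 - self.p00)
        return d / np.linalg.norm(d)

    def drag(self, dx, dy):
        """Apply a mouse drag; returns False once the user leaves the surface."""
        self.u += self.gain * dx
        self.v += self.gain * dy
        return 0.0 <= self.u <= 1.0 and 0.0 <= self.v <= 1.0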
In order to allow an author some control\nover visual pacing, we provide the author with the ability to\ncontrol the rate at which dragging the mouse changes the\ncamera position as it moves across a camera surface. The\nintention is that increasing/decreasing this gain ratio results\nin slower/faster camera movement and this will influence\nhow fast a user moves in the scene, which contributes to a\nsense of pacing and visual style. For example, if small\nmouse movements cause large changes in viewpoint this\nmay produce a feeling of fast action while large mouse\nmovement and slow changes in movement produce a slow,\nflowing quality. Figure 3 illustrates an example of variable\ncontrol-display gain, where the gain increases as the camera\ngets closer to the right edge of the camera surface.\n\nFigure 3. Variable control-display gain on a camera surface\n2.2. Animation Clips\nTo support transitions between two camera surfaces, we use\nanimation clips as illustrated in Figure 4. An animation clip\ncan be thought of as a \"path\" between the edges of camera\nsurfaces. When a user navigates to the edge of a camera\nsurface, this triggers an animation. When the animation\nends, they resume navigating at the destination camera\nsurface. One obvious type of animation between the camera\nsurfaces would simply be an automatic interpolation of the\ncamera moving from its start location on the first camera\nsurface to its end location on the second camera surface\n(Figure 4a). This is similar to what systems such as VRML\ndo. While our system supports these automatic interpolated\nanimations, we also allow for authored, stylized,\nanimations. These authored animations can be any visual\nsequence and pacing, and are therefore opportunities for\nintroducing visual style. For example, in transitioning from\none side of the car to the other, the author may create a\nstylized camera animation which pans across the front of\nthe car, while closing in on a styling detail like a front grille\nemblem (Figure 4b).\nThe generality of using animation clips allows the author\nthe stylistic freedom of completely abandoning the camera-movement\nmetaphor for transitions between surfaces and\nexpressing other types of visual sequences. Thus animation\nclips are effective mechanisms for introducing slates -- 2D\nvisuals which are not part of the 3D scene but are\nmomentarily placed in front of the viewing camera as it\nmoves from one camera surface to another (Figure 4c). For\nexample, moving from a view of the front of the car to the\nback of the car may be accomplished using a 2D image\nshowing the name of the car. This mechanism allows the\nuse of visual elements commonly found in advertising such\nas real action video clips and rich 2D imagery. In the\ncomputer realm, slates may also contain elements such as\ndocuments or webpages.\n\nFigure 4. Three example animated transitions between\ncamera surfaces. (a) automatic transition, (b) authored\nstylized transition, (c) slate transition.\n\nThe use of animation clips also allows for typical visual\ntransitions effects such as cross fades, wipes etc.\nIn addition to using animation clips for transitions between\ncamera surfaces, StyleCam also supports the animation of\nelements in the 3D scene. These scene element animations\ncan occur separately or concurrently with transition\nanimations. 
For example, while the animation clip for the\nvisual transition may have the camera sweeping down the\nside of the car, an auxiliary animation may open the trunk\nto reveal cargo space.\nVolume 4, Issue 2\n103\nThe animation of scene elements can also be used to affect\nextremely broad changes. For example, entire scene\ntransitions (similar to level changes in video games) may\noccur when a user hits the edge of particular camera\nsurface.\nAt the author's discretion, temporal control of animation\nclips can either be under user control or uninterruptable.\nOverall, in terms of visual expression, these varying types\nof animation clips allow an author to provide rich visual\nexperiences and therefore significantly influence the pacing\nand style of a user's interaction.\n2.3. Unified User Interaction Technique\nWhile animation clips are effective for providing a means\nto move between camera surfaces and introduce visual\nstyling elements, they also highlight the fundamental issue\nof arbitrating between user control and system control. At\nthe heart of our system are two distinct types of behavior:\n1) user control of the viewpoint, and 2) playback of\nanimation clips. In other systems these two types of\nbehavior are treated as distinct interactions. Specifically,\nthe user must stop dragging the camera viewpoint, then\nclick on something in the interface to trigger the animation,\ndividing their attention and interrupting the visual flow. In\nour system we wanted to use animations as a seamless way\nof facilitating movement between camera surfaces. Thus we\nneeded a mechanism for engaging these animations that did\nnot require an explicit mouse click to trigger animation.\nIdeally we wanted to leave the user with the impression that\nthey \"dragged\" from one camera surface to another even\nthough the transition between the surfaces was\nimplemented as an authored animation.\nThese two behaviors are fundamentally different in that\nviewpoint control is spatial navigation and animation\ncontrol is temporal navigation. From a user interaction\nstandpoint, spatial behavior can be thought of as \"dragging\nthe camera\" while temporal control is \"dragging a time\nslider\" or \"scrubbing\". Given this we required an\ninteraction model which allowed these two types of drags\nto be combined together in a way that was well defined,\ncontrollable, and corresponded to user's expectations.\nFigure 5, which uses the finite-state-machine model to\ndescribe interaction as introduced by\n[5, 26]\n, shows the\ninteraction model we developed. The key feature of this\nmodel is the ability to transition back and forth from spatial\nto temporal control during a contiguous drag. As a user\ndrags the camera across a camera surface (State 1, Spatial\nNavigation) and hits the edge of the surface, a transition is\nmade to dragging an invisible time slider (State 2,\nTemporal Navigation). As the user continues to drag, the\ndrag controls the location in the animation clip, assuming\nthat the author has specified the clip to be under user\ncontrol. Upon reaching the end of the animation, a\ntransition is made back to dragging the camera, however,\non a different, destination camera surface (State 1).\nButton Up\nClip Finished\nButton Up\nState\n0\nState\n1\nState\n2\nState\n3\nButton\nDown\nEnter\nSurface\nExit\nSurface\nButton\nDown\nTracking\nDragging\nin Space\nDragging\nin Time\nTracking\nduring\nAutomatic\nPlayback\nStop\nPlayback\nSpatial Navigation\nTemporal Navigation\n\nFigure 5. 
StyleCam interaction model.\n\nThe interaction model also handles a variety of reasonable\nvariations on this type of dragging behavior. A user may\nstop moving when dragging an animation clip, thus pausing\nthe animation. If, however, when in State 2 the user\nreleases the mouse button during a drag, automatic\nplayback is invoked to carry the user to the next camera\nsurface (State 3). Should the user press the mouse button\nduring this automatic playback, playback is stopped and\ntemporal control by the user is resumed (return to State 2).\nWe found in practice that this interaction design enhanced\nthe user's feeling of being in control throughout the entire\nexperience.\nDESIGN RATIONALE\nAt first glance, it may appear that the incorporation of\nanimation clips into StyleCam unnecessarily complicates its\nauthoring and use. After all, without animated transitions,\nwe would not have had to develop an interaction technique\nthat blended between spatial and temporal control. Indeed,\nwhen we first began our research, our hope was to create a\nsystem that simply involved spatial control of a constrained\ncamera.\nOur first variation used a single camera surface that\nsurrounded the 3D object of interest. The camera was\nconstrained to remain normal to this single camera surface\nat all times. While this gave the author more control than\nusing a simple unconstrained camera, we found that it was\ndifficult to author a single camera surface that encompassed\nall the desirable viewpoints and interesting transitions\nbetween those viewpoints. In order to guarantee desirable\nviewpoints, we introduced the concept of money-shots that\nwere placed on the single camera surface. The parameters\nof the camera were then determined based on its location on\nthe camera surface and a weighted average of the\nsurrounding money-shots. At this point, it was still difficult\nto author what the user would see when not directly on a\nmoney-shot. In other words, while money-shots worked\nwell, the transitions between them worked poorly.\nTo address this problem of unsatisfactory transitions, we\nfirst replaced the concept of a single global camera surface\nwith separate local camera surfaces for each money-shot.\n104\nVolume 4, Issue 2\nThen, to define transitions between these local camera\nsurfaces, we introduced the idea of animating the camera.\nThis led to the use of the three types of animation clips as\ndescribed earlier. Simply playing back the animation clips\nbetween camera surfaces gave users the sense that they lost\ncontrol during this period. To maintain the feeling of\ncontinuous control throughout, we developed our integrated\nspatial-temporal interaction technique.\n\nAN EXAMPLE EXPERIENCE\nWe illustrate how StyleCam operates by an example.\nFigure 6 illustrates the system components and how they\nreact to user input, as well as screen shots of what the user\nactually sees. The user starts by dragging on a camera\nsurface (position A). The path A-B shows the camera being\ndragged on the surface (spatial navigation). At B, the user\nreaches the edge of the camera surface and this launches an\nanimation that will transition the user from B to E. The zigzag\npath from B to D indicates that the user is scrubbing\ntime on the animation (temporal navigation). Position C\nsimply illustrates an intermediate point in the animation\nthat gets seen three times during the interaction. 
At position\nD, the user releases the mouse button, whereupon the\nsystem automatically completes playing back the remainder\nof the animation at the authored pacing. At position E, the\nuser enters another camera surface and resumes spatial\nnavigation of the camera as shown by path E-F. When the\nuser exits this camera surface at position F, another\nanimation is launched that will transition the user to\nposition J. Since the user releases the mouse button at\nposition F, the animation from F to J is played back at the\nauthored pacing. Since this animation is a slate animation,\nthe intermediate shots at positions G, H, and I along the\npath F to J are of slates containing information on the car\nfading in and out as the camera pans over the top of the car.\nThe net result of this StyleCam experience is a view of the\ncar that is far more visually rich and influenced by an\nauthor who intends to convey a certain message, rather than\nusing simple camera controls as is typical in current 3D\nviewers.\nRELATED WORK\nMuch prior research has explored camera techniques for 3D\nvirtual environments. Many of the techniques use a 2D\nmouse or stylus as an input device and introduce metaphors\nto assist the user. Perhaps the most ubiquitous metaphor,\nthe cinematic camera, enables users to tumble, track and\ndolly a viewpoint. Various other metaphors have been\nexplored by researchers, including orbiting and flying\n[32]\n,\nthrough-the-lens control\n[18]\n, points and areas of interests\n\nFigure 6. Example StyleCam experience. Top: system components and their reaction to user input. Bottom: what the user sees.\nVolume 4, Issue 2\n105\n[22]\n, using constraints [24, 29], drawing a path\n[21]\n, two-handed\ntechniques\n[1, 38]\n, and combinations of techniques\n[30, 37]. Bowman et. al. present taxonomies and\nevaluations of various schemes\n[3, 4]\n.\nOther techniques involve automatic framing of the areas of\ninterest as typically found in game console based adventure\ngames which use a \"chase airplane\" metaphor for a third\nperson perspective. Systems that utilize higher degree-of-freedom\ninput devices offer additional control and\nalternative metaphors have been investigated, including\nflying\n[7, 34]\n, eyeball-in-hand\n[35]\n, and worlds in miniature\n[31]\n. The major difference between this body of prior\nresearch and our work is that we attempt to give the author\nsubstantially more influence over the types of views and\ntransitions between them as the user navigates in the virtual\nspace.\nBeyond techniques for navigating the scene, extra\ninformation can also be provided to aid navigation. These\ninclude global maps in addition to local views\n[12, 14]\n, and\nvarious landmarks\n[9, 33]\n. Others have investigated\nintegrating global and local views, using various distorted\nspaces including \"fisheye\" views\n[6, 15]\n. At present, in an\nattempt to keep the visual space uncluttered, our work does\nnot have mechanisms for providing global information to\nthe user, however, this is something we may incorporate as\nour system progresses.\nApproaches which give the author more influence include\nguided tours where camera paths are prespecified for the\nend user to travel along. Galyean [17] proposes a \"river\nanalogy\" where a user, on a metaphorical boat, can deviate\nfrom the guided path, the river, by steering a conceptual\n\"rudder\". 
Fundamental work by Hanson and Wernert\n[19,\n36]\nproposes \"virtual sidewalks\" which are authored by\nconstructing virtual surfaces and specifying gaze direction,\nvistas, and procedural events (e.g., fog and spotlights) along\nthe sidewalk. Our system builds upon the guided tour and\nvirtual sidewalk ideas but differs by providing authoring\nelements that enable a much more stylized experience.\nSpecifically, we offer a means of presenting 3D, 2D, and\ntemporal media experiences through a simple, unified,\nsingular user interaction technique that supports both\nspatial and temporal navigation.\nRobotic planning algorithms have been used to assist or\nautomatically create a guided tour of a 3D scene, in some\ncases resulting in specific behaviors trying to satisfy goals\nand constraints\n[10, 11]\n. Individual camera framing of a\nscene has been used to assist in viewing or manipulation\ntasks\n[27]\n. Rules can be defined for cameras to\nautomatically frame a scene that follow cinematic\nprinciples such as keeping the virtual actors visible in the\nscene; or following the lead actor\n[20]\n. Yet another system\n[2]\nallows authors to define storyboard frames and the\nsystem defines a set of virtual cameras in the 3D scene to\nsupport the visual composition. This previous work assists\nin the authoring aspects by ceding some control to the\nsystem. Our work too involves some automatic system\ncontrol, but we emphasize author control.\nImage based virtual reality environments such as\nQuicktimeVR\n[8]\nutilize camera panning and zooming and\nallow users to move to defined vista points. The driving\nmetaphor has also been used for navigating interactive\nvideo, as seen in the Movie-Maps system\n[23]\n. More\nrecently, the Steerable Media project\n[25]\nfor interactive\ntelevision aims to retain the visual aesthetic of existing\ntelevision but increase the level of user interactivity. The\nuser is given the ability to control the content progression\nby seamlessly integrating video with augmented 2D and 3D\ngraphics. While our goals are similar in that we hope to\nenhance the aesthetics of the visual experience, we differ in\nthat our dominant media type is 3D graphics with\naugmented temporal media (animations and visual effects)\nand traditional 2D media (video, still images).\nLastly, we note that widely available 3D viewers or\nviewing technologies such as VRML, Cult3D, Shockwave,\nViewpoint, Virtools, and Pulse3D, are becoming very\npopular but offer the standard camera controls of vista\npoints, track, tumble, and zoom. We hope our explorations\nwill ultimately assist in offering new experience and\ninteraction approaches for future incarnations of these 3D\nviewers.\nIMPLEMENTATION\nStyleCam is implemented using Alias|wavefront's MAYA\n3D modeling and animation package. We use MAYA to\nauthor the 3D content to be visualized, the required camera\nsurfaces, animation clips, and required associations\nbetween them. A custom written MAYA plugin allows the\nuser to control their view of the 3D content based on their\nmouse input and the authored camera surfaces, animation\nclips, and associations.\nThe following description of our implementation assumes\nsome knowledge of MAYA, although we have endeavoured\nto be as general as possible without sacrificing accuracy.\n6.1. Authoring\nFirst, money-shots are created by defining a MAYA camera\nwith specific position, orientation, and other camera\nparameters. 
Then, a camera surface which intersects the\nposition of the money-shot camera is defined by creating an\nappropriate non-trimmed NURBS surface within MAYA.\nTo include an optional camera look-at point, the author\nsimply defines a point in 3D space (using a MAYA\nlocator). Finally, to make these components easily locatable\nby the plugin, they are grouped under a named MAYA\nnode within its dependency graph.\nThen, StyleCam animation clips are created as one would\nnormally create animations in MAYA, using its TRAX\nnon-linear animation editor. Animation clips at this stage\nare given meaningful, consistent, names in order to\nfacilitate their identification later when associating them\nwith events.\n106\nVolume 4, Issue 2\nStyleCam allows the author to create scripts and associate\nthem with events. Supported events are session startup,\ncamera surface entry, camera surface exit, and camera\nsurface timeout (Figure 7).\nWe implement variable control-display gain on a camera\nsurface (Figure 3) by varying the separation between the\nisoparms on the NURBS surface.\nAs shown in Figure 4, StyleCam supports three types of\ntransitions: automatic, authored, and slate.\n\nAutomatic transitions are those that smoothly move the\ncamera from one camera surface to another without\nrequiring any authored animation clips. This is done by\nhaving the system perform quaternion\n[28]\ninterpolation of\ncamera orientation, combined quaternion and linear\ninterpolation of camera position, and linear interpolation of\nother camera properties such as focal length. Using\nquaternion interpolation ensures smooth changes in\norientation while defining a smooth arcing path for the\nposition. At each time step in the transition, two\nquaternions representing the required fractional rotations of\nthe position and orientation vectors of the camera are\ncalculated and applied to the source vectors. In addition, the\nmagnitude of the position vector is adjusted by linear\ninterpolation between the source and destination position\nvector magnitudes. The result is a series of intermediate\ncamera positions and orientations as Figure 8 illustrates.\n\nFigure 7. StyleCam events\n\nThe session startup event is triggered only once when the\nuser initially begins using StyleCam to view a scene. Exit\nevents are triggered when the user leaves a camera surface\nfrom one of four directions. Associated scripts can specify\ndestination camera surfaces and types of transitions to be\nperformed. Time-out events are triggered when the mouse\nis idle for a given duration while on a particular camera\nsurface, and can be used to launch an automatic\npresentation. StyleCam's event and script mechanism\nprovides for the use of logic to dynamically alter the\npresentation. For example, scripts can ensure that some\nsurfaces are only visited once, while others are shown only\nafter certain surfaces have already been visited.\n6.2. Interaction\nWhen the StyleCam plugin is activated, the first money-shot\nof the first camera surface is used as the initial view. If\na look-at point is defined for this camera surface, the\norientation of the user camera is set such that the camera\npoints directly at the look-at point. Otherwise, the\norientation is set to the normal of the camera surface at the\nmoney-shot viewpoint's position.\nFigure 8. Combined quaternion and linear interpolation\n\nAuthored transitions involve the playback of preauthored\nanimation clips. 
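The automatic transitions just described can be sketched with a single spherical interpolation routine applied both to the orientation quaternion and to the direction of the position vector, with the position's magnitude and the remaining camera parameters interpolated linearly. This is our own illustration rather than the plugin code: the dictionary keys are invented, positions are assumed to be expressed relative to the point the transition pivots around, and a direct vector slerp stands in for the fractional rotation quaternions applied at each time step.

# Illustrative interpolation for an automatic transition between two cameras.
import numpy as np

def slerp(v0, v1, t):
    """Spherical linear interpolation between two unit vectors
    (used for orientation quaternions and for position directions)."""
    v0, v1 = np.asarray(v0, float), np.asarray(v1, float)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    if dot < 0.0 and len(v0) == 4:    # quaternions: take the shorter arc
        v1, dot = -v1, -dot
    if dot > 0.9995:                  # nearly parallel: plain lerp suffices
        v = v0 + t * (v1 - v0)
        return v / np.linalg.norm(v)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

def transition_step(cam0, cam1, t):
    """Camera parameters at fraction t in [0, 1] of an automatic transition."""
    p0 = np.asarray(cam0["position"], float)
    p1 = np.asarray(cam1["position"], float)
    m0, m1 = np.linalg.norm(p0), np.linalg.norm(p1)

    orientation = slerp(cam0["orientation"], cam1["orientation"], t)
    # The direction of the position vector swings along an arc while its
    # magnitude changes linearly, giving the smooth arcing path described above.
    direction = slerp(p0 / m0, p1 / m1, t)
    magnitude = (1 - t) * m0 + t * m1
    # Other parameters such as focal length are interpolated linearly.
    focal = (1 - t) * cam0["focal"] + t * cam1["focal"]
    return {"orientation": orientation,
            "position": magnitude * direction,
            "focal": focal}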
This gives the author complete control\nover the user experience during the transition including the\npacing, framing and visual effects.\nUser's mouse movements and button presses are monitored\nby the StyleCam plugin. Mouse drags result in the camera\nmoving along the current camera surface. Specifically, for a\ngiven mouse displacement (dx, dy), the new position of the\ncamera on the camera surface (in uv-coordinates local to\nthe camera surface) is given by\nSlate transitions are a special case of authored transitions.\nUsed to present 2D media, slate transitions are authored by\nplacing an image plane in front of the camera as it\ntransitions between camera surfaces. Various visual effects\ncan be achieved by using multiple image planes\nsimultaneously and by animating transparency and other\nparameters of these image planes. While the slate transition\nis in progress, the camera is simultaneously being smoothly\ninterpolated towards the destination camera surface. This\nessentially allows for a \"soft\" fade from a camera view, to a\nslate, and back, as Figure 9 illustrates.\n(u1,v1) = (u0,v0) + c*(dx, dy)\nwhere (u0, v0) is the last position of the camera, and c is the\ngain constant. If either the u or v coordinate of the resulting\nposition is not within the range [0,1], the camera has left\nthe current camera surface. At this point, the author-scripted\nlogic is executed to determine the next step. First,\nthe destination money-shot is resolved. Next, an\nappropriate transition is performed to move to the next\ncamera surface.\nVolume 4, Issue 2\n107\n\n\nFigure 9. Slate transitions\n\nStyleCam supports temporal control or \"scrubbing\" of\nanimations. During navigation mode, the user's mouse\ndrags control the camera's position on the camera surface.\nHowever, when the user moves off a camera surface into an\nanimated transition, mouse drags control the (invisible)\ntimeslider of the animation. Time is advanced when the\nmouse is dragged in the same direction that the camera\nexited the camera surface and reversed if the directions are\nalso reversed. When the mouse button is released, the\nsystem takes over time management and smoothly ramps\nthe time steps towards the animation's original playback\nrate.\n\nOur present implementation supports scrubbing only for\nautomatic transitions. Authored and slate transitions are\ncurrently uninterruptible. There is however no technical\nreason why all transitions cannot support scrubbing. In\nfuture versions we intend to give the author the choice of\ndetermining whether or not any given transition is\nscrubable. This is important since in some cases it may be\ndesirable to force the animation to playback uninterrupted\nat a certain rate.\nEVALUATION\nWe conducted an informal user study to get a sense of\nusers' initial reactions to using StyleCam. Seven\nparticipants, three of whom had experience with 3D\ngraphics applications and camera control techniques, and\nfour who had never used a 3D application or camera\ncontrols, were asked to explore a 3D car model using\nStyleCam. In order to ensure the study resembled our\nintended casual usage scenario, we gave participants only\nminimal instructions. We explained the click-and-drag\naction required to manipulate the camera, a brief rationale\nfor the study, and to imagine they were experiencing an\ninteractive advertisement for that car. We did not identify\nthe various components (camera surfaces, animated\ntransitions, etc) nor give any details on them. 
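The drag-to-surface mapping, surface-exit test and time scrubbing described in the implementation section above can be summarised in a short sketch. This is illustrative Python rather than the actual plugin code, and the gain and rate constants are arbitrary placeholder values.

def drag_on_surface(u0, v0, dx, dy, c=0.005):
    """Update the camera's uv position on the camera surface for a drag (dx, dy).

    Returns (u1, v1, exited): the new position and whether the camera has left
    the surface, i.e. whether u or v falls outside [0, 1], in which case the
    author-scripted logic resolves the destination money-shot and transition.
    """
    u1, v1 = u0 + c * dx, v0 + c * dy
    exited = not (0.0 <= u1 <= 1.0 and 0.0 <= v1 <= 1.0)
    return u1, v1, exited

def scrub_time(t, dx, dy, exit_dx, exit_dy, rate=0.01):
    """During an animated transition, map drags to the (invisible) time slider.

    Time advances when the drag points the same way the camera exited the
    surface, and is reversed when the drag direction is reversed.
    """
    forward = dx * exit_dx + dy * exit_dy >= 0.0
    step = rate * (abs(dx) + abs(dy))
    return t + step if forward else t - step

# Example: a rightward drag near the surface's right edge leaves the surface.
u, v, exited = drag_on_surface(0.98, 0.50, dx=10, dy=0)   # exited == True

As noted, none of these mechanics were explained to the study participants.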
This was\ndeliberately done so that the participants could experience\nthese components in action for themselves and give us\nfeedback without knowing in advance of their existence.\nOne very promising result was that none of the participants\nrealized that they were switching between controlling the\ncamera and controlling the time slider on the animations.\nThey felt that they had the same type of control throughout,\nindicating that our blending between spatial and temporal\ncontrol worked remarkably well. Also the simplicity of the\ninteraction technique essentially a single click and drag\naction was immediately understood and usable by all our\nusers.\nAnother reaction from all the participants was that, to\nvarying degrees, they sometimes felt that they were not in\ncontrol of the interaction when the uninterruptable\nanimations occurred. This was particularly acute when the\ninformation in the animations seemed unrelated to their\ncurrent view. In these cases, participants indicated that they\nhad no idea what triggered these animations and were often\nannoyed at the sudden interruptions. However when the\ninformation was relevant the interruptions were not as\nannoying and often actually appreciated. In some cases\nparticipants indicated that they would have liked to be able\nto replay the animation or to have it last longer. This\nhighlights the importance of carefully authoring the\nintermingling of uninterruptable animations with the rest of\nthe interaction experience.\nParticipants also indicated that they would have liked the\nability to click on individual parts of the car model in order\nto inspect them more closely. This request is not surprising\nsince we made no effort in our current implement to\nsupport pointing. However, we believe that in future\nresearch StyleCam could be extended to include pointing.\nAs we expected, all the participants with prior 3D graphics\ncamera experience stated that they at times would have\nliked full control of the camera, in addition to the\nconstrained control we provided. Participants without this\nprior experience, however, did not ask for this directly\nalthough they indicated that there were some areas of the\ncar model that they would have liked to see but could not\nget to. However, this does not necessarily imply full control\nof the camera is required. We believe that this issue can be\nlargely alleviated at the authoring phase by ascertaining\nwhat users want to see for a particular model and ensuring\nthat those features are accessible via the authored camera\nsurfaces. Interestingly, the participant with the most 3D\ngraphics experience commented that the automatic\ntransitions and smooth camera paths during those\ntransitions were very good and that \"for those who don't\nknow 3D and stuff, this would be very good\"!\n\nDISCUSSION & CONCLUSIONS\nCentral to our StyleCam system is the integration of spatial\nand temporal controls into a single user interaction model.\nThe implications of this interaction model go far beyond a\nsimple interaction technique. The blending of spatial and\ntemporal control presents a completely new issue that an\nauthor needs to understand and consider when creating\nthese interactive visual experiences. 
As evident from the\ncomments of our users, temporal control can feel very\nmuch like spatial control even when scrubbing backwards\n108\nVolume 4, Issue 2\nin an animation when the animation consists of moving the\nviewing camera around the central object of interest.\nHowever, if the animation is not around the central object\nof interest, for example in some of our slate animations,\ntemporal control can produce very different sensations.\nThese include the feeling of moving backwards in time,\ninterruption of a well paced animation, jarring or ugly\nvisuals, and sometimes even nonsensical content.\nAs a result, the author needs to be extremely cognizant of\nthese artefacts and make design decisions as to when and\nwhere to relinquish control - and how much control - to the\nuser. At one extreme, the author can specify that certain\nanimations are completely uninterruptible by the user. In\nthe experience we authored for our user study, we included\nseveral of these types of transitions. As discussed earlier,\nwhether users favored this depended heavily on the content.\nIn other words, in some cases, as authors, we did not make\nthe right decision. Further improvements could include\npartially interruptable animations. For example, we may not\nallow movement backwards in time but allow the user to\ncontrol the forward pacing. This will largely solve the\nnonsensical content problem but may still result in\noccasionally jarring visuals.\nIf we intend to support these various types of control, we\nmust also be able to set the users' expectations of what type\nof control they have at any given time. It is clear that the\ncurrent StyleCam switching between spatial and temporal\ncontrol without any explicit indication to the user that a\nswitch is happening works in most cases. In the cases\nwhere it fails, either the visual content itself should indicate\nwhat control is possible, or some explicit mechanism is\nrequired to inform the user of the current or upcoming\ncontrol possibilities. In addition to the obvious solution of\nusing on-screen visual indicators (e.g., changing cursors) to\nindicate state, future research could include exploring \"hint-ahead\"\nmechanisms that indicate upcoming content if the\nuser chooses to stay on their current course of travel. For\nexample, as the user reaches the edge of a camera surface, a\n\"voice-over\" could say something like \"now we're heading\ntowards the engine of the car\". Alternatively, a visual\n\"signpost\" could fade-in near the cursor location to convey\nthis information. These ideas coincide with research that\nstates that navigation routes must be discoverable by the\nuser\n[16]\n.\nIt is very clear from our experiences with StyleCam that the\nuser's viewing experience is highly dependent on the talent\nand skill of the author. It is likely that skills from movie\nmaking, game authoring, advertising, and theme park\ndesign would all assist in authoring compelling\nexperiences. However, we also realize that authoring skills\nfrom these other genres do not necessarily directly translate\ndue to the unique interaction aspects of StyleCam.\nWhile StyleCam has the appropriate components for\ncreating compelling visual experiences, it is still currently a\nresearch prototype that requires substantial skills with\nMAYA. 
We envision a more author-friendly tool that is\nbased on the conceptual model of StyleCam.\nSome future avenues that we intend to explore include\nsupporting soundtracks, extensions to enable pointing to\nelements in the 3D scene, and mechanisms for authoring\nanimation paths using alternate techniques such as\nChameleon\n[13]\n.\nFinally, it is important to note that StyleCam is not limited\nto product or automobile visualization. Other domains such\nas visualization of building interiors and medical\napplications could also utilize the ideas presented in this\npaper. Figures 10, 11, and 12 illustrate some examples.\nACKNOWLEDGEMENTS\nWe thank Scott Guy and Miles Menegon for assistance in\nfigure and video creation.\n\nREFERENCES\n1. Balakrishnan, R., & Kurtenbach, G. (1999). Exploring\nbimanual camera control and object manipulation in 3D\ngraphics interfaces. ACM CHI 1999 Conference on\nHuman Factors in Computing Systems. p. 56-63.\n2. Bares, W., McDermott, S., Boudreaux, C., & Thainimit,\nS. (2000). Virtual 3D camera composition from frame\nconstraints. ACM Multimedia. p. 177-186.\n3. Bowman, D.A., Johnson, D.B., & Hodges, L.F. (1997).\nTravel in immersive virtual environments. IEEE\nVRAIS'97 Virtual Reality Annual International\nSymposium. p. 45-52.\n4. Bowman, D.A., Johnson, D.B., & Hodges, L.F. (1999).\nTestbed environment of virtual environment interaction.\nACM VRST'99 Symposium on Virtual Reality Software\nand Technologies. p. 26-33.\n5. Buxton, W., ed. Three-state model of graphical input.\nHuman-computer interaction - INTERACT'90, ed. D.\nDiaper. 1990, Elsevier Science Publishers B. V. (North-Holland\n): Amsterdam. 449-456.\n6. Carpendale, M.S.T., & Montagnese, C.A. (2001). A\nframework for unifying presentation space. ACM\nUIST'2001 Symposium on User Interface Software and\nTechnology. p. 61-70.\n7. Chapman, D., & Ware, C. (1992). Manipulating the\nfuture: predictor based feedback for velocity control in\nvirtual environment navigation. ACM I3D'92\nSymposium on Interactive 3D Graphics. p. 63-66.\n8. Chen, S.E. (1995). QuickTime VR: An image-based\napproach to virtual environment navigation. ACM\nSIGGRAPH'95 Conference on Computer Graphics and\nInteractive Techniques. p. 29-38.\n9. Darken, R., & Sibert, J. (1996). Wayfinding strategies\nand behaviours in large virtual worlds. ACM CHI'96\nConference on Human Factors in Computing Systems.\np. 142-149.\nVolume 4, Issue 2\n109\n10. Drucker, S.M., Galyean, T.A., & Zeltzer, D. (1992).\nCINEMA: A system for procedural camera movements.\nACM Symposium on Interactive 3D Graphics. p. 67-70.\n11. Drucker, S.M., & Zeltzer, D. (1994). Intelligent camera\ncontrol in a virtual environment. Graphics Interface. p.\n190-199.\n12. Elvins, T., Nadeau, D., Schul, R., & Kirsh, D. (1998).\nWorldlets: 3D thumbnails for 3D browsing. ACM\nCHI'98 Conf. on Human Factors in Computing Systems.\np. 163-170.\n13. Fitzmaurice, G.W. (1993). Situated information spaces\nand spatially aware palmtop computers.\nCommunications of the ACM, 36(7). p. 38-49.\n14. Fukatsu, S., Kitamura, Y., Masaki, T., & Kishino, F.\n(1998). Intuitive control of bird's eye overview images\nfor navigation in an enormous virtual environment.\nACM VRST'98 Sympoisum on Virtual Reality Software\nand Technology. p. 67-76.\n15. Furnas, G. (1986). Generalized fisheye views. ACM\nCHI 1986 Conference on Human Factors in Computing\nSystems. p. 16-23.\n16. Furnas, G. (1997). Effective view navigation. ACM\nCHI'97 Conference on Human Factors in Computing\nSystems. p. 367-374.\n17. Galyean, T.A. 
(1995). Guided navigation of virtual\nenvironments. ACM I3D'95 Symposium on Interactive\n3D Graphics. p. 103-104.\n18. Gliecher, M., & Witkin, A. (1992). Through-the-lens\ncamera control. ACM SIGGRAPH' Conf. on Computer\nGraphics and Interactive Techniques. p. 331-340.\n19. Hanson, A.J., & Wernet, E. (1997). Constrained 3D\nnavigation with 2D controllers. p. 175-182.\n20. He, L., Cohen, M.F., & Salesin, D. (1996). The virtual\ncinematographer: a paradigm for automatic real-time\ncamera control and directing. ACM SIGGRAPH'96\nConference on Computer Graphics and Interactive\nTechniques. p. 217-224.\n21. Igarashi, T., Kadobayashi, R., Mase, K., & Tanaka, H.\n(1998). Path drawing for 3D walkthrough. ACM UIST\n1998 Symposium on User Interface Software and\nTechnology. p. 173-174.\n22. Jul, S., & Furnas, G. (1998). Critical zones in desert\nfog: aids to multiscale navigation. ACM Symposium on\nUser Interface Software and Technology. p. 97-106.\n23. Lippman, A. (1980). Movie-maps: an application of the\noptical videodisc to computer graphics. ACM\nSIGGRAPH'80 Conference on Computer Graphics and\nInteractive Techniques. p. 32-42.\n24. Mackinlay, J., Card, S., & Robertson, G. (1990). Rapid\ncontrolled movement through a virtual 3D workspace.\nACM SIGGRAPH 1990 Conference on Computer\nGraphics and Interactive Techniques. p. 171-176.\n25. Marrin, C., Myers, R., Kent, J., & Broadwell, P. (2001).\nSteerable media: interactive television via video\nsynthesis. ACM Conference on 3D Technologies for the\nWorld Wide Web. p. 7-14.\n26. Newman, W. (1968). A system for interactive graphical\nprogramming.\nAFIPS Spring Joint Computer\nConference. p. 47-54.\n27. Phillips, C.B., Badler, N.I., & Granieri, J. (1992).\nAutomatic viewing control for 3D direct manipulation.\nACM Symposium on Interactive 3D Graphics. p. 71-74.\n28.\nShoemake, K. (1985). Animating rotation with\nquartenion curves. ACM SIGGRAPH Conf Computer\nGraphics & Interactive Techniques. p. 245-254.\n29. Smith, G., Salzman, T., & Stuerzlinger, W. (2001). 3D\nScene manipulation with 2D devices and constraints.\nGraphics Interface. p. 135-142.\n30. Steed, A. (1997). Efficient navigation around complex\nvirtual environments. ACM VRST'97 Conference on\nVirtual Reality Software and Technology. p. 173-180.\n31. Stoakley, R., Conway, M., & Pausch, R. (1995). Virtual\nreality on a WIM: Interactive worlds in miniature. ACM\nCHI 1995 Conference on Human Factors in Computing\nSystems. p. 265-272.\n32. Tan, D., Robertson, G., & Czerwinski, M. (2001).\nExploring 3D navigation: combining speed-coupled\nflying with orbiting. ACM CHI'2001 Conference on\nHuman Factors in Computing Systems. p. 418-425.\n33. Vinson, N. (1999). Design guidelines for landmarks to\nsupport navigation in virtual environments. ACM\nCHI'99 Conference on Human Factors in Computing\nSystems. p. 278-285.\n34. Ware, C., & Fleet, D. (1997). Context sensitve flying\ninterface. ACM I3D'97 Symposium on Interactive 3D\nGraphics. p. 127-130.\n35. Ware, C., & Osborne, S. (1990). Exploration and virtual\ncamera control in virtual three dimensional\nenvironments. ACM I3D'90 Symposium on Interactive\n3D Graphics. p. 175-183.\n36. Wernert, E.A., & Hanson, A.J. (1999). A framework for\nassisted exploration with collaboration. IEEE\nVisualization. p. 241-248.\n37. Zeleznik, R., & Forsberg, A. (1999). UniCam - 2D\nGestural Camera Controls for 3D Environments. ACM\nSymposium on Interactive 3D Graphics. p. 169-173.\n38. Zeleznik, R., Forsberg, A., & Strauss, P. (1997). Two\npointer input for 3D interaction. 
ACM I3D Symposium\non Interactive 3D Graphics. p. 115-120.\n110\nVolume 4, Issue 2", "keywords": "3D viewers;camera controls;3D navigation;3D visualization;interaction techniques"} {"name": "185", "title": "Tactons: Structured Tactile Messages for Non-Visual Information Display", "abstract": "Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate messages non-visually. A range of different parameters can be used for Tacton construction including : frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices . . This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.", "fulltext": "Introduction\nThe area of haptic (touch-based) human computer interaction\n(HCI) has grown rapidly over the last few years. A\nrange of new applications has become possible now that\ntouch can be used as an interaction technique (Wall et al.,\n2002). However, most current haptic devices have scant\nprovision for tactile stimulation, being primarily pro-grammable\n, constrained motion force-feedback devices\nfor kinaesthetic display. The cutaneous (skin-based)\ncomponent is ignored even though it is a key part of our\nexperience of touch (van Erp, 2002). It is, for example,\nimportant for recognising texture, and detecting slip,\ncompliance and direction of edges. As Tan (1997) says\n\"In the general area of human-computer interfaces ... the\ntactual sense is still underutilised compared with vision\nand audition\". One reason for this is that, until recently,\nthe technology for tactile displays was limited.\nTactile displays are not new but they have not received\nmuch attention from HCI researchers as they are often\nengineering prototypes or designed for very specific applications (Kaczmarek et al., 1991). They have been used\nin areas such as tele-operation or displays for blind people\nto provide sensory substitution where one sense is\nused to receive information normally received by another\n(Kaczmarek et al.). Most of the development of these\ndevices has taken place in robotics or engineering labs\nand has focused on the challenges inherent in building\nlow cost, high-resolution devices with realistic size,\npower and safety performance. Little research has gone\ninto how they might actually be used at the user interface.\nDevices are now available that allow the use of tactile\ndisplays so the time is right to think about how they\nmight be used to improve interaction.\nIn this paper the concept of Tactons, or tactile icons, is\nintroduced as a new communication method to complement\ngraphical and auditory feedback at the user interface\n. Tactons are structured, abstract messages that can be\nused to communicate messages non-visually. Conveying\nstructured messages through touch will be very useful in\nareas such as wearable computing where screens are limited\n. The paper gives some background to the perception\nand use of tactile stimuli and then describes the design of\nTactons. 
It finishes with examples of potential uses for\nTactons.\nBackground and previous work\nThe skin is the largest organ in the body, about 2 m\n2\n\nin\nthe average male (Montagu, 1971). Little direct use is\nmade of it for displaying information in human-computer\ninterfaces (Tan and Pentland, 1997, van Erp, 2002), yet a\ntouch on the hand or other parts of the body is a very rich\nexperience. The skin can therefore potentially be used as\na medium to communicate information. As a receiving\ninstrument the skin combines important aspects of the eye\nand the ear, with high acuity in both space and time\n(Gunther, 2001) giving it good potential as a communication\nmedium.\nThe human sense of touch can be roughly split in to two\nparts: kinaesthetic and cutaneous. \"Kinaesthetic\" is often\nused as catch-all term to describe the information arising\nfrom forces and positions sensed by the muscles and\njoints. Force-feedback haptic devices (such as the\nPHANToM from SensAble) are used to present information\nto the kinaesthetic sense. Cutaneous perception refers\nto the mechanoreceptors contained within the skin, and\nincludes the sensations of vibration, temperature, pain\nand indentation. Tactile devices are used to present feedback\nto the cutaneous sense.\n15\nCurrent haptic devices use force-feedback to present kinaesthetic\nstimuli. This works well for some aspects of\ntouch (e.g. identifying the geometric properties of objects\n) but is poor for features such as texture (normally\nperceived cutaneously). Oakley et al. (2000) found that\ntrying to use texture in a user interface with a force-feedback\ndevice actually reduced user performance. One\nreason for this is that the textures had to be made large so\nthat they could be perceived kinaesthetically, but they\nthen perturbed users' movements. The use of a tactile\nhaptic device to present texture would not have this problem\nas small indentations in the fingertip would not affect\nhand movements. At present, however, there are no haptic\ndevices that do a good job of presenting both tactile\nand force-feedback cues to users.\nCurrent force-feedback devices use a point interaction\nmodel; the user is represented by a single point of contact\ncorresponding to the tip of a stylus. This is analogous to\nexploring the world by remote contact through a stick\nthus depriving the user of the rich, spatially varying cutaneous\ncues that arise on the finger pad when contacting a\nreal object (Wall and Harwin, 2001). Users must integrate\ntemporally varying cues as they traverse the structure of\nvirtual objects with the single point of contact, which\nplaces considerable demands on short-term memory\n(Jansson and Larsson, 2002). Even when exploring simple\ngeometric primitives, performance is greatly reduced\ncompared to natural touch. Lederman and Klatzky (1999)\nhave shown that such removal of cutaneous input to the\nfingertip impedes perception of edge direction, which is\nan essential component of understanding haptic objects. It\ncan therefore be seen that tactile feedback and cutaneous\nperception are key parts of touch that must be incorporated\ninto haptic displays if they are to be effective and\nusable.\n2.1 Vibrotactile actuators\nThere are two basic types of vibrotactile display device.\nThese evoke tactile sensations using mechanical vibration\nof the skin (usually in the range 10-500Hz) (Kaczmarek\net al., 1991). 
This is commonly done by vibrating a small\nplate pressed against the skin or via a pin or array of pins\non the fingertip. These are very easy to control from standard\nPC hardware. Other types of actuator technology\nare available, including pneumatic and electrotactile\n(Stone, 2000), but these tend to be bulkier and harder to\ncontrol so are less useful in many situations.\n\nFigure 1: The pins arrays on the VirTouch tactile\nmouse (www.virtouch.com).\n\nThe first type of vibrotactile display uses a pin or array of\nsmall pins (e.g. the VirTouch mouse in Figure 1 or those\nproduced by Summers et al. (2001)) to stimulate the fingertip\n. Such devices can present very fine cues for surface\ntexture, edges, lines, etc. The second type uses larger\npoint-contact stimulators (e.g. Figure 2 or alternatively\nsmall loudspeaker cones playing tones, or other simple\nvibrating actuators placed against the skin as used by Tan\n(1997) and in devices such as the CyberTouch glove\nwww.immersion.com). The cues here are much lower\nresolution but can exert more force; they can also be distributed\nover the body to allow multiple simultaneous\ncues (often mounted in a vest on the user's back or in a\nbelt around the waist). These devices are both easy to\ncontrol and use. For a full review see Kaczmarek et al.\n(1991).\n\nFigure 2: Audiological Engineering Corp. VBW32\ntransducers (www.tactaid.com).\n2.2 Previous work on tactile display\nOne common form of tactile output is Braille, and dynamic\nBraille cells are available. A display is made up of\na line of `soft' cells (often 40 or 80), each with 6 or 8 pins\nthat move up and down to represent the dots of a Braille\ncell. The user can read a line of Braille cells by touching\nthe pins of each cell as they pop up (for more information\nsee www.tiresias.org). The focus of the work reported\nhere is not on Braille as it tends to be used mainly for\nrepresenting text (although other notations are used, e.g.\nmusic) and the cells are very low resolution (8 pins\nmaximum). These displays are also very expensive with\nan 80 cell display costing around 4000. There have been\nmany other tactile devices for blind people, such as the\nOptacon (TeleSensory Inc.), which used an array of 144\npins to display the input from a camera to the fingertip,\nbut again these are mainly used for reading text. Pin arrays\nproduce Braille but can do much more, especially the\nhigher resolution displays such as shown in Figure 1.\nOur research also builds on the work that has been done\non tactile graphics for blind people (this mainly takes the\nform of raised lines and dots on special `swell' paper).\nKurze (1997, 1998) and Challis (2001) have developed\nguidelines which allow images and objects to be presented\nthat are understandable through touch by blind\nusers.\nTwo other examples show that the cutaneous sense is\nvery effective for communication. Firstly, Tadoma is a\ntactile language used by deaf/blind people. The transmitter\nspeaks normally and the receiver puts a hand on the\nface of the speaker, covering the mouth and neck (Tan\nand Pentland, 2001). Tadoma users can listen at very high\n16\nspeeds (normal speaking speed for experts) and pick up\nsubtleties of the speech such as accent. In the second example\n, Geldard (1957) taught participants a simple tactile\nlanguage of 45 symbols, using three intensities, three\ndurations and five locations on the chest. 
Participants\nwere able to learn the alphabet quickly and could recognise\nup to 38 words per minute in some cases. Other sensory\nsubstitution systems convert sound into vibration for\nhearing-impaired people (e.g. the TactAid system from\nAudiological Engineering). Again this shows that cutaneous\nperception is very powerful and if we can make use\nof it at the user interfaces we will have a rich new way to\npresent information to users.\nResearch and existing applications have shown that the\ncutaneous sense is a very powerful method of receiving\ninformation. Other work has shown that it can be used in\nuser interfaces and wearable computers (Gemperle et al.,\n1998). Tan has begun to investigate the use of tactile displays\non wearable computers (Tan and Pentland, 1997).\nShe used a 3x3 grid of stimulators on a user's back to\nprovide navigation information. Informal results suggested\nit was useful but no formal evaluation has taken\nplace. Other relevant work has taken place in aircraft\ncockpits to provide pilots with navigation information\n(van Veen and van Erp, 2001, Rupert, 2000). In these\nexamples only simple tactile cues for direction have been\nprovided. For example, an actuator maybe vibrated on\none side of the body to indicate the direction to turn.\nMore sophisticated cues could be used to provide much\nmore information to users without them needing to use\ntheir eyes.\nGunther et al. have used tactile cues to present `musical'\ncompositions to users (Gunther, 2001, Gunther et al.,\n2002). They say: \"The approach taken ... views haptic\ntechnologies in particular the vibrotactile stimulator\nas independent output devices to be used in conjunction\nwith the composition and perception of music. Vibrotactile\nstimuli are viewed not as signals carrying information\nper se, but as aesthetic artifacts themselves\". He used an\narray of 13 transducers across the body of a `listener' so\nthat he/she could experience the combined sonic/tactile\npresentation. Gunther created a series of compositions\nplayed to listeners who appeared to enjoy them. This\nwork was artistic in nature so no formal usability assessments\nwere made but the listeners all liked the experience\n.\nIn order to create a tactile composition (the same is true\nfor the Tactons described below) a good understanding of\nthe experience of touch is needed. However, as Gunther\net al. suggest: \"It is indeed premature to hammer out the\ndetails of a language for tactile composition. It seems\nmore productive at this point in time to identify the underpinnings\nof such a language, specifically those dimensions\nof tactile stimuli that can be manipulated to form\nthe basic vocabulary elements of a compositional lan-guage\"\n. Research is needed to gain a more systematic\nunderstanding of cutaneous perception for use in the\npresentation of such messages.\nEnriquez and MacLean (2003) recently proposed `haptic\nicons', which they define as \"brief programmed forces\napplied to a user through a haptic interface, with the role\nof communicating a simple idea in a manner similar to\nvisual or auditory icons\". The problem they are trying to\naddress is different to that of Tactons, as they say \"With\nthe introduction of \"active\" haptic interfaces, a single\nhandle e.g. a knob or a joystick can control several\ndifferent and perhaps unrelated functions. These multi-function\ncontrollers can no longer be differentiated from\none another by position, shape or texture... 
Active haptic\nicons, or \"hapticons\", may be able to solve this problem\nby rendering haptically distinct and meaningful sensations\nfor the different functions\". These use one degree-of\n-freedom force-feedback devices, rather than tactile\ndisplays, so encode information very differently to Tactons\n. They report the construction of a tool to allow a user\nto create and edit haptic icons. This is early work and\nthey do not report results from the use of hapticons in any\ninterfaces. Their results, however, will be directly relevant\nto Tactons.\nTactons\nGiven that the cutaneous sense is rich and a powerful\ncommunication medium currently little utilised in HCI,\nhow can we make effective use of it? One approach is to\nuse it to render objects from the real world more realisti-cally\nin virtual environments, for example in improving\nthe presentation of texture in haptic devices. It could also\nbe used to improve targeting in desktop interactions along\nthe lines suggested by Oakley et al. (2000). In this paper\nit is suggested that it can additionally be used to present\nstructured informational messages to users.\nTactons are structured, abstract messages that can be used\nto communicate complex concepts to users non-visually.\nShneiderman (1998) defines an icon as \"an image, picture\nor symbol representing a concept\". Tactons can represent\ncomplex interface concepts, objects and actions very con-cisely\n. Visual icons and their auditory equivalent earcons\n(Blattner et al., 1989, Brewster et al., 1994) are very\npowerful ways of displaying information but there is currently\nno tactile equivalent. In the visual domain there is\ntext and its counterpart the icon, the same is true in sound\nwith synthetic speech and the earcon. In the tactile domain\nthere is Braille but it has no `iconic' counterpart.\nTactons fill this gap. Icons/Earcons/Tactons form a simple\n, efficient language to represent concepts at the user\ninterface.\nTactons are similar to Braille in the same way that visual\nicons are similar to text, or earcons are similar to synthetic\nspeech. For example, visual icons can convey complex\ninformation in a very small amount of screen space,\nmuch smaller than for a textual description. Earcons convey\ninformation in a small amount of time as compared to\nsynthetic speech. Tactons can convey information in a\nsmaller amount of space and time than Braille. Research\nwill also show which form of iconic display is most suitable\nfor which type of information. Visual icons are good\nfor spatial information, earcons for temporal. One property\nof Tactons is that they operate both spatially and\ntemporally so they can complement both icons and earcons\n. Further research is needed to understand how these\ndifferent types of feedback work together.\n17\nUsing speech as an example from the auditory domain:\npresenting information in speech is slow because of its\nserial nature; to assimilate information the user must hear\na spoken message from beginning to end and many words\nmay have to be comprehended before the message can be\nunderstood. With earcons the messages are shorter and\ntherefore more rapidly heard, speeding up interactions.\nThe same is true of Tactons when compared to Braille.\nSpeech suffers from many of the same problems as\ngraphical text in text-based computer systems, as this is\nalso a serial medium. 
Barker & Manji (1989) claim that\nan important limitation of text is its lack of expressive\ncapability: It may take many words to describe a fairly\nsimple concept. Graphical iconic displays were introduced\nthat speeded up interactions as users could see a\npicture of the thing they wanted instead of having to read\nits name from a list (Barker and Manji, 1989). In the\nsame way, an encoded tactile message may be able to\ncommunicate its information in fewer symbols. The user\nfeels the Tacton then recalls its meaning rather than having\nthe meaning described in Braille (or speech or text).\nThe icon is also (in principle) universal: it means the\nsame thing in different languages and the Tacton would\nhave similar universality.\nDesigning with Tactons\nTactons are created by encoding information using the\nparameters of cutaneous perception. The encoding is\nsimilar to that of earcons in sound (Blattner et al., 1989,\nBrewster et al., 1994) where each of the musical parameters\n(e.g. timbre, frequency, amplitude) is varied to encode\ninformation. Similar parameters can be used for\nTactons (although their relative importance is different).\nAs suggested by Blattner, short motifs could be used to\nrepresent simple objects or actions and these can then be\ncombined in different ways to represent more complex\nmessages and concepts. As Tactons are abstract the mapping\nbetween the Tacton and what it represents must be\nlearned, but work on earcons has shown that learning can\ntake place quickly (Brewster, 1998b).\nThe properties that can be manipulated for Tactons are\nsimilar to those used in the creation of earcons. The parameters\nfor manipulation also vary depending on the\ntype of transducer used; not all transducers allow all types\nof parameters. The general basic parameters are:\nFrequency: A range of frequencies can be used to differentiate\nTactons. The range of 20 1000 Hz is perceivable\nbut maximum sensitivity occurs around 250 Hz (Gunther\net al., 2002). The number of discrete values that can be\ndifferentiated is not well understood, but Gill (2003) suggests\nthat a maximum of nine different levels can be used.\nAs in audition, a change in amplitude leads to a change in\nthe perception of frequency so this has an impact on the\nuse of frequency as a cue. The number of levels of frequency\nthat can be discriminated also depends on whether\nthe cues are presented in a relative or absolute way. Making\nrelative comparisons between stimuli is much easier\nthan absolute identification, which will lead to much\nfewer discriminable values, as shown in the work on earcon\ndesign (Brewster et al., 1994).\nAmplitude: Intensity of stimulation can be used to encode\nvalues to present information to the user. Gunther (2002)\nreports that the intensity range extends to 55 dB above the\nthreshold of detection; above this pain occurs. Craig and\nSherrick (1982) indicate that perception deteriorates\nabove 28 dB so this would seem to be a useful maximum.\nGunther (2001) reports that various values, ranging from\n0.4dB to 3.2dB, have been reported for the just noticeable\ndifference (JND) value for intensity. Gill states that that\nno more than four different intensities should be used\n(Gill, 2003). Again the number of useful discriminable\nvalues will depend on absolute or relative presentation of\nstimuli. 
Due to the interactions between this and frequency\nseveral researchers have suggested that they be\ncombined into a single parameter to simplify design\nWaveform: The perception of wave shape is much more\nlimited than with the perception of timbre in sound. Users\ncan differentiate sine waves and square waves but more\nsubtle differences are more difficult (Gunther, 2001).\nThis limits the number of different values that can be encoded\nand makes this a much less important variable than\nit is in earcon design (where it is one of the key variables\n).\nDuration: Pulses of different durations can encode information\n. Gunther (2001) investigated a range of subjective\nresponses to pulses of different durations. He found that\nstimuli lasting less than 0.1 seconds were perceived as\ntaps or jabs whereas stimuli of longer duration, when\ncombined with gradual attacks and decays, may be perceived\nas smoothly flowing tactile phrases. He suggests\ncombining duration with alterations in the envelope of a\nvibration, e.g. an abrupt attack feels like a tap against the\nskin, a gradual attack feels like something rising up out of\nthe skin.\nRhythm: Building on from duration, groups of pulses of\ndifferent durations can be composed into rhythmic units.\nThis is a very powerful cue in both sound and touch.\nGunther (2001) suggests that differences in duration can\nbe used to group events when multiple events occur on\nthe same area of skin.\nSpecific transducer types allow other parameters to be\nused:\nBody location: Spatially distributed transducers can encode\ninformation in the position of stimulation across the\nbody. The choice of body location for vibrotactile display\nis important, as different locations have different levels of\nsensitivity and spatial acuity. A display may make use of\nseveral body locations, so that the location can be used as\nanother parameter, or can be used to group tactile stimuli.\nThe fingers are often used for vibrotactile displays because\nof their high sensitivity to small amplitudes and\ntheir high spatial acuity (Craig and Sherrick, 1982). However\n, the fingers are often required for other tasks, so\nother body locations may be more suitable. Craig and\nSherrick suggest the back, thigh and abdomen as other\nsuitable body locations. They report that, once subjects\nhave been trained in vibrotactile pattern recognition on\nthe back, they can almost immediately recognise the same\npatterns when they are presented to the thigh or abdomen.\nThis transfer also occurs to some extent when patterns are\n18\npresented to different fingers after training on one finger,\nbut is not so immediate.\nCertain body locations are particularly suitable, or particularly\nunsuitable, for certain types of vibrotactile displays\n. For example, transducers should not be placed on\nor near the head, as this can cause leakage of vibrations\ninto the ears, resulting in unwanted sounds (Gunther et\nal., 2002). An example of a suitable body location is in\nGunther's Skinscape display, where he positions low frequency\ntransducers on the torso as this is where low frequencies\nare felt when loud music is heard.\nThe method of attaching the transducers to a user's body\nis also important. The pressure of the transducer against\nthe body has a significant effect on the user's perception\nof the vibrations. 
Transducers should rest lightly on the\nskin, allowing the user to feel the vibration against the\nskin, and to isolate the location of the vibration with ease.\nExerting too much pressure with the transducer against\nthe user's body will cause the vibrations to be felt in the\nbone structure, making them less isolated due to skeletal\nconduction. In addition, tightening the straps holding the\ntransducer to achieve this level of pressure may impede\ncirculation (Gunther, 2001).\nRupert (2000) suggests using the full torso for displaying\n3D information, with 128 transducers distributed over the\nbody. His system displays information to pilots about the\nlocation of objects around them in 3D space, by stimulating\nthe transducers at the part of their body corresponding\nto the location of the object in 3D space around them.\nThis could be used to indicate horizons, borders, targets,\nor other aircraft.\nSpatiotemporal patterns: Related to position and rhythm,\nspatial patterns can also be \"drawn\" on the user's body.\nFor example, if a user has a 3x3 array of stimulators lo-cated\non his/her back, lines and geometric shapes can be\n\"drawn\" on the back, by stimulating, in turn, the stimulators\nthat make up that shape. In Figure 3, an `L' shaped\ngesture can be drawn by activating the stimulators: 1-4-78\n-9 in turn. Patterns can move about the body, varying in\ntime and location to encode information. Cholewiak\n(1996) and Sherrick (1985) have also looked at low-level\nperception of distributed tactile cues.\n.\n\nFigure 3: \"Drawing\" an L-shaped gesture.\nNow that the basic parameters for Tactons have been described\n, we will give some examples of how they might\nbe designed to convey information. The fundamental design\nof Tactons is similar to that of earcons.\n4.1 Compound Tactons\nA simple set of Tactons could be created as in Figure 4. A\nhigh-frequency pulse that increases in intensity could\nrepresent `Create', a lower frequency pulse that decreases\nin intensity could represent `Delete'. A two note falling\nTacton could represent a file and a two rising notes a\nfolder. The mapping is abstract; there is no intuitive link\nbetween what the user feels and what it represents.\n\n\nCreate\nDelete\nFile\nFolder\nCreate File\nDelete Folder\n\nFigure 4: Compound Tactons (after Blattner et al.,\n1989).\nThese Tactons can then be combined to create compound\nmessages. For example, `create file' or `delete folder'.\nThe set of basic elements could be extended and a simple\nlanguage of tactile elements created to provide feedback\nin a user interface.\n4.2 Hierarchical Tactons\nTactons could also be combined in a hierarchical way, as\nshown in Figure 5. Each Tacton is a node in a tree and\ninherits properties from the levels above it. Figure 5\nshows a hierarchy of Tactons representing a hypothetical\nfamily of errors. The top of the tree is a family Tacton\nwhich has a basic rhythm played using a sinewave (a different\nfamily of errors would use a different rhythm so\nthat they are not confused). The rhythmic structure of\nLevel 2 inherits the Tacton from Level 1 and adds to it. In\nthis case a second, higher frequency Tacton played with a\nsquarewave. At Level 3 the tempo of the two Tactons is\nchanged. In this way a hierarchical structure can be presented\n. 
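A hypothetical sketch of how the hierarchical composition of Figure 5 might be represented in software is given below. The paper does not prescribe any implementation; the data structure, class names and parameter values are purely illustrative. Each node inherits its parent's motif and adds to or re-times it: Level 1 is a rhythm of sinewave pulses, Level 2 adds a higher-frequency squarewave pulse, and Level 3 changes only the tempo.

from dataclasses import dataclass, field, replace
from typing import List, Optional

@dataclass(frozen=True)
class Pulse:
    frequency_hz: float        # e.g. near 250 Hz, where sensitivity peaks
    waveform: str              # "sine" and "square" are reliably distinguishable
    duration_s: float
    amplitude: float = 1.0     # relative intensity

@dataclass
class Tacton:
    name: str
    pulses: List[Pulse] = field(default_factory=list)
    tempo: float = 1.0                      # playback-rate multiplier
    parent: Optional["Tacton"] = None

    def motif(self) -> List[Pulse]:
        """Inherit the parent's motif, append this node's pulses, apply tempo."""
        inherited = self.parent.motif() if self.parent else []
        # A node's tempo re-times everything it inherits as well as its own pulses.
        return [replace(p, duration_s=p.duration_s / self.tempo)
                for p in inherited + self.pulses]

# Level 1: family Tacton -- a basic rhythm played with a sinewave.
error_family = Tacton("error", pulses=[Pulse(250, "sine", 0.2),
                                       Pulse(250, "sine", 0.1)])

# Level 2: inherits the family rhythm and adds a higher-frequency squarewave pulse.
os_error = Tacton("operating system error", parent=error_family,
                  pulses=[Pulse(400, "square", 0.1)])

# Level 3: same structure, distinguished only by tempo.
overflow = Tacton("overflow", parent=os_error, tempo=2.0)    # fast tempo
underflow = Tacton("underflow", parent=os_error, tempo=0.5)  # slow tempo

print([p.duration_s for p in overflow.motif()])   # the family rhythm, played faster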
The other parameters discussed above could be\nused to add further levels.\n4.3 Transformational Tactons\nA third type of Tacton is the Transformational Tacton.\nThese have several properties, each represented by a different\ntactile parameter. For example, if Transformational\nTactons were used to represent files in a computer interface\n, the file type could be represented by rhythm, size by\nfrequency, and creation date by body location. Each file\ntype would be mapped to a unique rhythm. Therefore,\ntwo files of the same type, and same size, but different\ncreation date would share the same rhythm and frequency\n, but would be presented to a different body location\n. If two files were of different types but the same size\nthey would be represented by different rhythms with the\nsame frequency.\n19\nUses for Tactons\nWe are interested in three areas of use for Tactons, although\nthere are many others where they have potential to\nimprove usability.\n5.1 Enhancements of desktop interfaces\nThe first, and simplest, area of interest is in the addition\nof Tactons to desktop graphical interfaces. The addition\nof earcons to desktops has shown many advantages in\nterms of reduced errors, reduced times to complete tasks\nand lowered workload (Brewster, 1998a). One problem\nwith audio is that users believe that it may be annoying to\nuse (although no research has actually shown this to be\nthe case) and it has the potential to annoy others nearby\n(for a discussion see (Brewster, 2002)). The addition of\nTactons to widgets has the same potential to indicate usability\nproblems but without the potential to annoy.\nOne reason for enhancing standard desktop interfaces is\nthat users can become overloaded with visual information\non large, high-resolution displays. In highly complex\ngraphical displays users must concentrate on one part of\nthe display to perceive the visual feedback, so that feedback\nfrom another part may be missed. This becomes\nvery important in situations where users must notice and\ndeal with large amounts of dynamic data or output from\nmultiple applications or tasks. If information about secondary\ntasks was presented through touch then users\ncould concentrate their visual attention on the primary\none but feel information about the others.\nAs a simple example, the display of a progress bar widget\ncould be presented tactually. Two sets of tactile pulses\ncould be used to indicate the current and end points of a\ndownload. The time between the two pulses would indicate\nthe amount of time remaining, the closer the two\npulses the nearer the download is to finishing. The two\npulses could use different waveforms to ensure they were\nnot confused. Different rhythms for each pulse could be\nused to indicate different types of downloads. If a more\nsophisticated set of transducers on a belt around the waist\nwas available then the position of a pulse moving around\nthe body in a clockwise direction (starting from the front)\nwould give information about progress: when the pulse\nwas at the right side of the body the download would be\n25% of the way through, when it was on the left hand\nside 75%, and when it got back around to the front it\nwould be finished. There would be no need for any visual\npresentation of the progress bar, allowing users to focus\ntheir visual attention on the main task they are involved\nwith.\nTactons could also be used to enhance interactions with\nbuttons, scrollbars, menus, etc. 
to indicate when users are\non targets and when certain types of errors occur. Others\nhave shown that basic tactile feedback can improve\npointing and steering type interactions (Akamatsu et al.,\n1995, Campbell et al., 1999). There are some commercial\nsystems that give simple tactile feedback in desktop user\ninterfaces, e.g. the software that comes with the Logitech\niFeel mouse (www.logitech.com). This provides basic\ntargeting: a brief pulse is played, for example, when a\nuser moves over a target. We believe there is much more\nthat can be presented with tactile feedback.\n5.2 Visually impaired users\nTactons will be able to work alongside Braille in tactile\ndisplays for blind and visually impaired users, in the same\nway as earcons work alongside synthetic speech. They\nwill allow information to be delivered more efficiently. In\naddition, hierarchical Tactons could help users navigate\nSine\nSine\nSquare\nError\nOperating system error\nExecution error\nSine\nSquare\nOverflow\nSine\nSquare\nUnderflow\nSine\nSquare\nFast tempo\nSlow tempo\nFigure 5: Hierarchical Tacton composition.\nLevel 1\nLevel 2\nLevel 3\n20\naround Braille media by providing navigation information\n(Brewster, 1998b).\nOne of our main interests is in using Tactons to improve\naccess to graphical information non-visually. Text can be\nrendered in a relatively straightforward manner by speech\nor Braille, but graphics are more problematic. One area\nthat we and others have focused on is visualisation for\nblind people. Understanding and manipulating information\nusing visualisations such as graphs, tables, bar charts\nand 3D plots is very common for sighted people. The\nskills needed are learned early in school and then used\nthroughout life, for example, in analysing information or\nmanaging home finances. The basic skills needed for creating\nand manipulating graphs are necessary for all parts\nof education and employment. Blind people have very\nrestricted access to information presented in these visual\nways (Edwards, 1995). As Wise et al. (2001) say \"Inac-cessibility\nof instructional materials, media, and technologies\nused in science, engineering, and mathematics\neducation severely restricts the ability of students with\nlittle or no sight to excel in these disciplines\". To allow\nblind people to gain the skills needed for the workplace\nnew technologies are necessary to make visualisations\nusable. Tactons provide another route through which information\ncan be presented.\nResearch has shown that using haptic devices is an effective\nway of presenting graphical information non-visually\n(Yu and Brewster, 2003, Wies et al., 2001, Van Scoy et\nal., 2000). The most common approach has been to use\nhaptic devices to present graphs, tables or 3D plots that\nusers can feel kinaesthetically by tracing a line or shape\nwith a finger using a device like the PHANToM\n(www.sensable.com). Lederman and Klatzky (1999)\nhave shown that removal of cutaneous input to the fingertip\nimpedes perception of edge direction, which is an essential\ncomponent of tracing a haptic line graph. 
This lack\nof cutaneous stimulation leads to problems with navigation\n(exploring using a single point of contact means it is\ndifficult to locate items as there is no context, which can\nbe given in a tactile display), exploring small scale features\n(these would be perceived cutaneously on the finger\npad in real life), and information overload (all haptic information\nis perceived kinaesthetically rather than being\nshared with cutaneous perception). Incorporating a tactile\ndisplay into a force-feedback device will alleviate many\nof these problems and potentially increase user efficiency\nand comprehension of visualisations.\nTactons could be presented as the user moves the force-feedback\ndevice over the visualisation. Dimensions of the\ndata can be encoded into a Tacton to give information\nabout the current point, using the parameters described in\nSection 4. This would allow more data to be presented\nmore efficiently. For example, with multidimensional\ndata one dimension might be mapped to the frequency of\na pulse in a Tacton, another might map to rhythm and\nanother to body locatoin. As the user moves about the\ndata he/she would feel the different parameters. In addition\nto the finger pad, we can also include tactile displays\nto other parts of the body (e.g. to the back) using spatially\ndistributed transducers to provide even more display area.\nAs long as this is done in a comprehensible manner users\nwill be able to gain access to their data in a much more\neffective way than with current force-feedback only visualisation\ntools.\n5.3 Mobile and wearable devices\nOur other main application area is mobile and wearable\ndevice displays (for both sighted and blind people). Mobile\ntelephones and handheld computers are currently one\nof the fastest growth areas of computing and this growth\nwill extend into more sophisticated, fully wearable computers\nin the future. One problem with these devices is\ntheir limited output capabilities. Their small displays easily\nbecome cluttered with information and widgets and\nthis makes interface design difficult. In addition, users are\nnot always looking at the display of a device as they must\nwalk or navigate through their environment which requires\nvisual attention. One way to solve this problem is\nto use other display modalities and so reduce demands on\nvisual display, or replace it if not available. Work has\ngone into using speech and non-speech sounds to overcome\nthe display bottleneck. Tactile displays have great\npotential here too but are much less well investigated.\nSound has many advantages but it can be problematic; in\nloud environments it can be impossible to hear auditory\noutput from a device, in quiet places the audio may be\ndisturbing to others nearby. Blind people often do not like\nto wear headphones when outdoors as they mask important\nenvironmental sounds. Tactile displays do not suffer\nfrom these problems (although there may be other problems\nfor example, perceiving tactile stimuli whilst running\ndue to the difficulties of keeping the transducers in\ncontact with the skin). Mobile telephones commonly have\na very simple point-contact tactile stimulator built-in that\ncan alert the user to a call. These are often only able to\nproduce pulses of different durations. A pin array would\nbe possible on such a device as the user will be holding it\nin a hand when in use. Such a sophisticated tactile display\ncould do much more, e.g. 
it could give information on the\ncaller, replace or enhance items on the display (like icons,\nprogress indicators, games) or aid in the navigation of the\ndevices' menus so that the user does not need to look at\nthe screen.\nIn a wearable device users could have body mounted\ntransducers so that information can be displayed over\ntheir body. In the simplest case this could be used to give\ndirectional information by vibrating one side of the body\nor other to indicate which way to turn (Tan and Pentland,\n1997). A belt of transducers around the waist could give a\ncompass-like display of direction; a pulse could be played\ncontinuously at north so the user can maintain orientation\nafter turning (useful when navigating in the dark) or at the\nposition around the waist corresponding to the direction\nin which to head. A more sophisticated display might\ngive information about the user's context. For example,\npresenting Tactons describing information such as the\ntype of building (shop, bank, office-block, house), the\ntype of shop (clothes, phones, food, furniture) the price-bracket\nof a shop (budget, mid-range, expensive), or information\nmore related to the concerns of visually impaired\npeople, such as the number of stairs leading up to\nthe entrance (for firefighters, whose vision is impaired\n21\ndue to smoke and flames, a tactile display could also provide\ninformation on the location of rooms and exits in a\nburning building). A tactile display could also present\ninformation on stock market data (building on from the\nwork on tactile visualisation in the section above) so that\nusers could keep track of trades whilst away from the\noffice. Such tactile displays could also work alongside\nauditory or visual ones.\nFuture work and conclusions\nThis paper has laid out some of the foundations of information\ndisplay through Tactons. There is still much work\nto be done to fully understand how they should be designed\nand used. There are many lower level perceptual\nquestions to be addressed before higher level design issues\ncan be investigated. Many of the parameters of touch\ndescribed in Section 4 are not fully understood and the\nfull usable ranges of the parameters are not known. Studies\nneed to be undertaken to explore the parameter space\nso that the relative importance of the different parameters\ncan be discovered.\nOnce the range of parameters is understood then the construction\nof Tactons can be examined. Basic studies are\nneeded to understand how the parameters can be combined\nto construct Tactons. Parameters which work well\nalone may not work well when combined with others into\na Tacton. For example, one parameter may mask another.\nWhen the basic design of Tactons is understood the composition\nof simple Tactons into more complex messages,\nencoding hierarchical information into Tactons, and their\nlearnability and memorability can be investigated. The\nconcurrent presentation of multiple Tactons must also be\nstudied. These studies will answer some of the main questions\nregarding the usability of Tactons and a good understanding\nof their design and usability will have been a-chieved\n.\nAnother important task is to investigate the strong relationships\nbetween hearing and touch by examining cross-modal\nuses of audio and tactile multimodal displays\n(Spence and Driver, 1997), e.g. combined audio and tactile\ncues, redundant tactile and audio cues, and moving\nfrom an audio to a tactile presentation of the same information\n(and vice versa). 
This is important in a mo-bile/wearable\ncontext because at different times different\ndisplay techniques might be appropriate. For example,\naudio might be inappropriate in a very noisy environment\n, or tactile cues might be masked when the user is\nrunning. One important issue is to identify the types of\ninformation best presented in sound and those best presented\ntactually. For example, the range of the vibrotactile\nfrequency response is roughly 20 times less than that\nof the auditory system. Such discrepancies must be accounted\nfor when performing cross-modal mappings from\nhearing to touch.\nIn conclusion, this paper has proposed a new form of tactile\noutput called Tactons. These are structured tactile\nmessages that can be used to communicate information.\nTactile output is underused in current interfaces and Tactons\nprovide a way of addressing this problem. The basic\nparameters have been described and design issues discussed\n. A technique is now available to allow tactile display\nto form a significant part of the set of interaction and\ndisplay techniques that can be used to communicate with\nusers at the interface.\nAcknowledgements\nThis research was conducted when Brewster was on sabbatical\nin the Department of Computer Science at the\nUniversity of Canterbury, Christchurch, New Zealand.\nThanks to Andy Cockburn for his thoughts and comments\non this work. The sabbatical was funded by an Erskine\nFellowship from the University of Canterbury. The work\nwas part funded by EPSRC grant GR/S53244. Brown is\nfunded by an EPSRC studentship.\nReferences\nAkamatsu, M., MacKenzie, I. S. and Hasbrouq, T.\n(1995): A comparison of tactile, auditory, and visual\nfeedback in a pointing task using a mouse-type device\n. Ergonomics, 38, 816-827.\nBarker, P. G. and Manji, K. A. (1989): Pictorial dialogue\nmethods. International Journal of Man-Machine Studies\n, 31, 323-347.\nBlattner, M., Sumikawa, D. and Greenberg, R. (1989):\nEarcons and icons: Their structure and common design\nprinciples. Human Computer Interaction, 4, 11-44\n.\nBrewster, S. A. (1998a): The design of sonically-enhanced\nwidgets. Interacting with Computers, 11,\n211-235.\nBrewster, S. A. (1998b): Using Non-Speech Sounds to\nProvide Navigation Cues. ACM Transactions on\nComputer-Human Interaction, 5, 224-259.\nBrewster, S. A. (2002): Chapter 12: Non-speech auditory\noutput. In The Human Computer Interaction Handbook\n(Eds, Jacko, J. and Sears, A.) Lawrence Erlbaum\nAssociates, pp. 220-239.\nBrewster, S. A., Wright, P. C. and Edwards, A. D. N.\n(1994): A detailed investigation into the effectiveness\nof earcons. In Auditory Display (Ed, Kramer, G.) Addison\n-Wesley, Reading, MA, pp. 471-498.\nCampbell, C., Zhai, S., May, K. and Maglio, P. (1999):\nWhat You Feel Must Be What You See: Adding Tactile\nFeedback to the Trackpoint. Proceedings of IFIP\nINTERACT'99, Edinburgh, UK, 383-390, IOS Press\nChallis, B. and Edwards, A. D. N. (2001): Design principles\nfor tactile interaction. In Haptic Human-Computer\nInteraction, Vol. 2058 (Eds, Brewster, S.\nA. and Murray-Smith, R.) Springer LNCS, Berlin,\nGermany, pp. 17-24.\nCholewiak, R. W. and Collins, A. (1996): Vibrotactile\npattern discrimination and communality at several\nbody sites. Perception and Psychophysics, 57, 724-737\n.\nCraig, J. C. and Sherrick, C. E. (1982): Dynamic Tactile\nDisplays. In Tactual Perception: A Sourcebook (Ed,\nFoulke, E.) Cambridge University Press, pp. 209-233.\n22\nEdwards, A. D. N. (Ed.) 
(1995) Extra-Ordinary Human-Computer\nInteraction, Cambridge University Press,\nCambridge, UK.\nEnriquez, M. J. and Maclean, K. (2003): The Hapticon\neditor: A tool in support of haptic communication research\n. Haptics Symposium 2003, Los Angeles, CA,\n356-362, IEEE Press\nGeldard, F. A. (1957): Adventures in tactile literacy. The\nAmerican Psychologist, 12, 115-124.\nGemperle, F., Kasabach, C., Stivoric, J., Bauer, M. and\nMartin, R. (1998): Design for wearability. Proceedings\nof Second International Symposium on Wearable\nComputers, Los Alamitos, CA, 116-122, IEEE Computer\nSociety\nGill, J. (2003), Vol. 2003 Royal National Institute of the\nBlind, UK.\nGunther, E. (2001): Skinscape: A Tool for Composition in\nthe Tactile Modality. Massachusetts Institute of Technology\n. Masters of Engineering.\nGunther, E., Davenport, G. and O'Modhrain, S. (2002):\nCutaneous Grooves: Composing for the Sense of\nTouch. Proceedings of Conference on New Instruments\nfor Musical Expression, Dublin, IR, 1-6,\nJansson, G. and Larsson, K. (2002): Identification of\nHaptic Virtual Objects with Differing Degrees of\nComplexity. Proceedings of Eurohaptics 2002, Edinburgh\n, UK, 57-60, Edinburgh University\nKaczmarek, K., Webster, J., Bach-y-Rita, P. and Tomp-kins\n, W. (1991): Electrotacile and vibrotactile displays\nfor sensory substitution systems. IEEE Transaction\non Biomedical Engineering, 38, 1-16.\nKurze, M. (1997): Rendering drawings for interactive\nhaptic perception. Proceedings of ACM CHI'97, Atlanta\n, GA, 423-430, ACM Press, Addison-Wesley\nKurze, M. (1998): TGuide: a guidance system for tactile\nimage exploration. Proceedings of ACM ASSETS '98,\nMarina del Rey, CA, ACM Press\nLederman, S. J. and Klatzky, R. L. (1999): Sensing and\nDisplaying Spatially Distributed Fingertip Forces in\nHaptic Interfaces for Teleoperator and Virtual Environment\nSystems. Presence: Teleoperators and Virtual\nEnvironments, 8, 86-103.\nMontagu, A. (1971): Touching: The Human Significance\nof the Skin, Columbia University Press, New York.\nOakley, I., McGee, M., Brewster, S. A. and Gray, P. D.\n(2000): Putting the feel in look and feel. Proceedings\nof ACM CHI 2000, The Hague, Netherlands, 415-422,\nACM Press, Addison-Wesley\nRupert, A. (2000): Tactile situation awareness system:\nproprioceptive prostheses for sensory deficiencies.\nAviation, Space and Environmental Medicine, 71, 92-99\n.\nSherrick, C. (1985): A scale for rate of tactual vibration.\nJournal of the Acoustical Society of America, 78.\nShneiderman, B. (1998): Designing the user interface, 3\nrd\n\nEd. Addison-Wesley, Reading (MA).\nSpence, C. and Driver, J. (1997): Cross-modal links in\nattention between audition, vision and touch: implications\nfor interface design. International Journal of\nCognitive Ergonomics, 1, 351-373.\nStone, R. (2000): Haptic feedback: A potted history, from\ntelepresence to virtual reality. The First International\nWorkshop on Haptic Human-Computer Interaction,\nGlasgow, UK, 1-7, Springer-Verlag Lecture Notes in\nComputer Science\nSummers, I. R., Chanter, C. M., Southall, A. L. and\nBrady, A. C. (2001): Results from a Tactile Array on\nthe Fingertip. Proceedings of Eurohaptics 2001, Birmingham\n, UK, 26-28, University of Birmingham\nTan, H. Z. and Pentland, A. (1997): Tactual Displays for\nWearable Computing. Proceedings of the First International\nSymposium on Wearable Computers, IEEE\nTan, H. Z. and Pentland, A. (2001): Chapter 18: Tactual\ndisplays for sensory substitution and wearable computers\n. 
In Fundamentals of wearable computers and\naugmented reality (Eds, Barfield, W. and Caudell, T.)\nLawrence Erlbaum Associates, Mahwah, New Jersey,\npp. 579-598.\nvan Erp, J. B. F. (2002): Guidelines for the use of active\nvibro-tactile displays in human-computer interaction.\nProceedings of Eurohaptics 2002, Edinburgh, UK,\n18-22, University of Edinburgh\nVan Scoy, F., Kawai, T., Darrah, M. and Rash, C. (2000):\nHaptic Display of Mathematical Functions for Teaching\nMathematics to Students with Vision Disabilities:\nDesign and Proof of Concept. Proceedings of the First\nWorkshop on Haptic Human-Computer Interaction,\nGlasgow, UK, University of Glasgow\nvan Veen, H. and van Erp, J. B. F. (2001): Tactile information\npresentation in the cockpit. In Haptic Human-Computer\nInteraction (LNCS2058), Vol. 2058 (Eds,\nBrewster, S. A. and Murray-Smith, R.) Springer, Berlin\n, Germany, pp. 174-181.\nWall, S. A. and Harwin, W. S. (2001): A High Bandwidth\nInterface for Haptic Human Computer Interaction.\nMechatronics. The Science of Intelligent Machines.\nAn International Journal, 11, 371-387.\nWall, S. A., Riedel, B., Crossan, A. and McGee, M. R.\n(Eds.) (2002) Eurohaptics 2002 Conference Proceedings\n, University of Edinburgh, Edinburgh, Scotland.\nWies, E., Gardner, J., O'Modhrain, S., Hasser, C. and\nBulatov, V. (2001): Web-based touch display for accessible\nscience education. In Haptic Human-Computer\nInteraction, Vol. 2058 (Eds, Brewster, S.\nA. and Murray-Smith, R.) Springer LNCS, Berlin, pp.\n52-60.", "keywords": "tactile displays;multimodal interaction;Tactons;non-visual cues"} {"name": "186", "title": "TCP/IP Performance over 3G Wireless Links with Rate and Delay Variation", "abstract": "Wireless link losses result in poor TCP throughput since losses are perceived as congestion by TCP, resulting in source throttling. In order to mitigate this effect, 3G wireless link designers have augmented their system with extensive local retransmission mechanisms. In addition, in order to increase throughput, intelligent channel state based scheduling have also been introduced. While these mechanisms have reduced the impact of losses on TCP throughput and improved the channel utilization, these gains have come at the expense of increased delay and rate variability. In this paper, we comprehensively evaluate the impact of variable rate and variable delay on long-lived TCP performance. We propose a model to explain and predict TCP's throughput over a link with variable rate and/or delay. We also propose a network-based solution called Ack Regulator that mitigates the effect of variable rate and/or delay without significantly increasing the round trip time, while improving TCP performance by up to 40%.", "fulltext": "INTRODUCTION\nThird generation wide-area wireless networks are currently\nbeing deployed in the United States in the form of 3G1X\ntechnology [10] with speeds up to 144Kbps. Data-only enhancements\nto 3G1X have already been standardized in the\n3G1X-EVDO standard (also called High Data Rate or HDR)\nwith speeds up to 2Mbps [6]. UMTS [24] is the third generation\nwireless technology in Europe and Asia with deploy-ments\nplanned this year. 
As these 3G networks provide pervasive\ninternet access, good performance of TCP over these\nwireless links will be critical for end user satisfaction.\nWhile the performance of TCP has been studied extensively\nover wireless links [3, 4, 15, 20], most attention has\nbeen paid to the impact of wireless channel losses on TCP.\nLosses are perceived as congestion by TCP, resulting in\nsource throttling and very low net throughput.\nIn order to mitigate the effects of losses, 3G wireless link\ndesigners have augmented their system with extensive local\nretransmission mechanisms. For example, link layer retransmission\nprotocols such as RLP and RLC are used in\n3G1X [22] and UMTS [21], respectively. These mechanisms\nensure packet loss probability of less than 1% on the wireless\nlink, thereby mitigating the adverse impact of loss on TCP.\nWhile these mechanisms mitigate losses, they also increase\ndelay variability. For example, as we shall see in Section 3,\nping latencies vary between 179ms to over 1 second in a\n3G1X system.\nIn addition, in order to increase throughput, intelligent\nchannel state based scheduling have also been introduced.\nChannel state based scheduling [7] refers to scheduling techniques\nwhich take the quality of wireless channel into account\nwhile scheduling data packets of different users at the\nbase station.\nThe intuition behind this approach is that\nsince the channel quality varies asynchronously with time\ndue to fading, it is preferable to give priority to a user with\nbetter channel quality at each scheduling epoch.\nWhile\nstrict priority could lead to starvation of users with inferior\nchannel quality, a scheduling algorithm such as proportional\nfair [6] can provide long-term fairness among different\nusers. However, while channel-state based scheduling improves\noverall throughput, it also increases rate variability.\nThus, while the impact of losses on TCP throughput have\nbeen significantly reduced by local link layer mechanisms\nand higher raw throughput achieved by channel-state based\nscheduling mechanisms, these gains have come at the expense\nof increased delay and rate variability. This rate and\ndelay variability translates to bursty ack arrivals (also called\nack compression) at the TCP source. Since TCP uses ack\nclocking to probe for more bandwidth, bursty ack arrival\nleads to release of a burst of packets from the TCP source.\nWhen this burst arrives at a link with variable rate or delay\n, it could result in multiple packet losses. These multiple\nlosses significantly degrade TCPs throughput.\nIn this paper, we make three main contributions. First,\n71\nwe comprehensively evaluate the impact of variable rate and\nvariable delay on long-lived TCP performance. Second, we\npropose a model to explain and predict TCP's throughput\nover a link with variable rate and/or delay. Third, we propose\na network-based solution called Ack Regulator that mitigates\nthe effect of variable rate and/or delay without significantly\nincreasing the round trip time, thereby improving\nTCP performance.\nThe remaining sections are organized as follows. In Section\n2, we discuss related work. In Section 3, we present the\nmotivation for our work using traces from a 3G1X system.\nIn Section 4, we describe a model for computing the throughput\nof a long-lived TCP flow over links with variable rate\nand variable delay. We then present a simple network-based\nsolution, called Ack Regulator, to mitigate the effect of variable\nrate/delay in Section 5. 
In Section 6, we present extensive\nsimulation results that compare TCP performance with\nand without Ack Regulator, highlighting the performance\ngains using the Ack Regulator when TCP is subjected to\nvariable rate and delay. Finally, in Section 7, we present\nour conclusions.\nRELATED WORK\nIn this section, we review prior work on improving TCP\nperformance over wireless networks. Related work on the\nmodeling of TCP performance is presented in Section 4.\nA lot of prior work has focused on avoiding the case of\na TCP source misinterpreting packet losses in the wireless\nlink as congestion signals. In [4], a snoop agent is introduced\ninside the network to perform duplicate ack suppression and\nlocal retransmissions on the wireless link to enhance TCP\nperformance. In [3], the TCP connection is split into two\nseparate connections, one over the fixed network and the\nsecond over the wireless link. The second connection can\nrecover from losses quickly, resulting in better throughput.\nLink-layer enhancements for reducing wireless link losses including\nretransmission and forward error correction have\nbeen proposed in [20]. Link layer retransmission is now part\nof both the CDMA2000 and UMTS standards [10, 24]. In\norder to handle disconnections (a case of long-lived loss),\nM-TCP has been proposed [8]. The idea is to send the last\nack with a zero-sized receiver window so that the sender can\nbe in persist mode during the disconnection. Link failures\nare also common in Ad Hoc networks and techniques to improve\nTCP performance in the presence of link failures have\nbeen proposed in [11]. Note that none of these approaches\naddress specifically the impact of delay and rate variation\non TCP, which is the focus of this paper.\nSeveral generic TCP enhancements with special applica-bility\nto wireless links are detailed in [12, 13]. These include\nenabling the Time Stamp option, use of large window\nsize and window scale option, disabling Van Jacobson\nheader compression, and the use of Selective Acknowledgments\n(Sack). Large window size and window scaling are\nnecessary because of the large delay of wireless link while\nSack could help TCP recover from multiple losses without\nthe expensive timeout recovery mechanism.\nAnother issue with large delay variation in wireless links\nis spurious timeouts where TCP unnecessarily retransmits\na packet (and lowers its congestion window to a minimum)\nafter a timeout, when the packet is merely delayed. In [13],\nthe authors refer to rate variability due to periodic allocation\nand de-allocation of high-speed channels in 3G networks\nas Bandwidth Oscillation. Bandwidth Oscillation can\nalso lead to spurious timeouts in TCP because as the rate\nchanges from high to low, the rtt value increases and a low\nRetransmission Timeout (RTO) value causes a spurious retransmission\nand unnecessarily forces TCP into slow start.\nIn [15], the authors conduct experiments of TCP over GSM\ncircuit channels and show that spurious timeouts are extremely\nrare. 
However, 3G wireless links can have larger variations than GSM due to processing delays and rate variations caused by channel state based scheduling. Given the increased variability on 3G packet channels, the use of the TCP time stamp option for finer tracking of TCP round trip times, and possibly the use of the Eifel retransmission timer [16] instead of the conventional TCP timer, can help avoid spurious timeouts.

As mentioned earlier, the effect of delay and rate variability is ack compression, which results in increased burstiness at the source. Ack compression can also be caused by bidirectional flows over regular wired networks or by a single flow over networks with large asymmetry. This phenomenon has been studied and several techniques have been proposed to tackle the burstiness of ack compression. In order to tackle burstiness, the authors in [18] propose several schemes that withhold acks such that there is no packet loss at the bottleneck router, resulting in full throughput. However, the round trip time is unbounded and can be very large. In [23], the authors implement an ack pacing technique at the bottleneck router to reduce burstiness and ensure fairness among different flows. In the case of asymmetric channels, solutions proposed [5] include ack congestion control and ack filtering (dropping of acks), reducing source burstiness by sender adaptation, and giving priority to acks when scheduling inside the network. However, the magnitude of asymmetry in 3G networks is not large enough and can be tolerated by TCP without ack congestion control or ack filtering, according to [12].

Note that, in our case, ack compression occurs because of link variation and not due to asymmetry or bidirectional flows. Thus, we require a solution that specifically adapts to link variation. Moreover, the node at the edge of the 3G wireless access network is very likely to be the bottleneck router (given rates of 144Kbps to 2Mbps on the wireless link) and is the element that is exposed to varying delays and service rates. Thus, this node is the ideal place to regulate the acks in order to improve TCP performance. This is discussed in more detail in the next section.

MOTIVATION

[Figure 1: 3G network architecture. BS: Base Station; MD: Mobile Device; RNC: Radio Network Controller; PDSN: Packet Data Service Node; SGSN: Serving GPRS Service Node; HA: Home Agent; GGSN: Gateway GPRS Service Node; link layer retransmission (RLP/RLC) runs between the RNC and the mobile device.]

[Figure 2: CDF of Ping Latencies (ping latency in ms vs. cumulative probability).]

A simplified architecture of a 3G wireless network is shown in Figure 1. The base stations are connected to a node called the Radio Network Controller (RNC). The RNC performs CDMA specific functions such as soft handoffs, encryption, power control etc. It also performs link layer retransmission using RLP(RLC) in the 3G1X(UMTS) system. In the 3G1X system, the RNC is connected to a PDSN using a GRE tunnel (one form of IP in IP tunnel) and the PDSN terminates PPP with the mobile device. If Mobile IP service is enabled, the PDSN also acts as a Foreign Agent and connects to a Home Agent. In the UMTS system, the RNC is connected to a SGSN using a GTP tunnel (another form of IP in IP tunnel); the SGSN is connected to a GGSN, again through a GTP tunnel.
Note that the tunneling between the various nodes allows these nodes to be connected directly or through IP/ATM networks.

In this architecture, the RNC receives a PPP/IP packet through the GRE/GTP tunnel from the PDSN/SGSN. The RNC fragments this packet into a number of radio frames and then performs transmission and local retransmission of these radio frames using the RLP(RLC) protocol. The base station (BS) receives the radio frames from the RNC and then schedules the transmission of the radio frames on the wireless link using a scheduling algorithm that takes the wireless channel state into account. The mobile device receives the radio frames and, if it discovers loss of radio frames, it requests local retransmission using the RLP(RLC) protocol. Note that, in order to implement RLP(RLC), the RNC needs to keep a per-user queue of radio frames. The RNC can typically scale up to tens of base stations and thousands of active users.

In order to illustrate the variability seen in a 3G system, we obtained some traces from a 3G1X system. The system consisted of an integrated BS/RNC, a server connected to the RNC using a 10Mbps Ethernet, and a mobile device connected to the BS using a 3G1X link with 144Kbps downlink in infinite burst mode and 8Kbps uplink. The infinite burst mode implies that the rate is fixed, and so the system only had delay variability.

Figure 2 plots the cumulative distribution function (cdf) of ping latencies from a set of 1000 pings from the server to the mobile device (with no observed loss). While about 75% of the latency values are below 200ms, the latency values go all the way to over 1s, with about 3% of the values higher than 500ms.

In the second experiment, a TCP source at the server using Sack with the timestamp option transferred a 2MB file to the mobile device. The MTU was 1500 bytes with a user data size of 1448 bytes. The buffer at the RNC was larger than the TCP window size (we did not have control over the buffer size at the RNC in our system), and thus the transfer resulted in no TCP packet loss and a maximal throughput of about 135Kb/s.

[Figure 3: 3G Link Delay Variability — (a) cdf of TCP ack inter-arrival time; (b) TCP rtt over time.]

The transmission time at the bottleneck link is 1.448 × 8/135 = 86ms. If the wireless link delay were constant, the TCP acks arriving at the source would be evenly spaced with a duration of 172ms because of the delayed ack feature of TCP (every 2 packets are acked rather than every packet). Figure 3(a) plots the cdf of the TCP ack inter-arrival time (time between two consecutive acks) at the server. As can be seen, there is significant ack compression, with over 10% of the acks arriving within 50ms of the previous ack. Note that the ack packet size is 52 bytes (40 + timestamp) and the ack transmission time on the uplink is 52 × 8/8 = 52ms; an inter-ack spacing of less than 52ms is a result of uplink delay variation.

Note that the delay variability and the resulting ack compression did not cause any throughput degradation in our system. This was due to the fact that the buffering in the system was greater than the TCP window size, resulting in no buffer overflow loss. Figure 3(b) depicts the TCP round trip time (rtt) values over time.
Since the buffer at the RNC is able to accommodate the whole TCP window, the rtt increases to over 3s, representing a case of over 30 packets in the buffer at the RNC (30 × 0.086 ≈ 2.6s). Given an average ping latency of 215ms and a transmission time of 86ms for a 1500 byte packet, the bandwidth delay product of the link is approximately (0.215 + 0.086) × 135 Kb/s ≈ 5KB, or about 3 packets. Thus, the system had a buffer of over 10 times the bandwidth delay product. Given that we had only one TCP flow in the system, a buffer of over 64KB is not a problem. But, if every TCP flow is allocated a buffer of 64KB, the buffer requirements at the RNC would be very expensive, since the RNC supports thousands of active users.

Even discounting the cost of large buffers, the inflated rtt value due to the excessive buffering has several negative consequences, as identified in [15]. First, an inflated rtt implies a large retransmission timeout value (rto). In the case of multiple packet losses (either on the wireless link or in a router elsewhere in the network), a timeout-based recovery would cause excessive delay, especially if exponential backoff gets invoked. Second, if the timestamp option is not used, the rtt sampling rate is reduced and this can cause spurious timeouts. Third, there is a higher probability that the data in the queue becomes obsolete (e.g., due to the user aborting the transmission), but the queue will still have to be drained, resulting in wasted bandwidth.

Thus, while excessive buffering at the RNC can absorb the variability of the wireless links without causing TCP throughput degradation, it has significant negative side effects, making it an undesirable solution.

MODEL

In this section, we model the performance of a single long-lived TCP flow over a network with a single bottleneck server that exhibits rate variation based on a given general distribution and a single wireless link attached to the bottleneck server that exhibits delay variation based on another given distribution.

We use a general distribution of rate and delay values for the discussion in this section since we would like to capture the inherent variation in rate and delay that is a characteristic of the 3G wireless data environment. Given that the wireless standards are constantly evolving, the actual rate and delay distribution will vary from one standard or implementation to another and is outside the scope of this paper. Later, in Section 6, we will evaluate TCP performance over a specific wireless link, the 3G1X-EVDO (HDR) system, using simulation.

We would like to model TCP performance in the case of variable rate and delay for two reasons. One, we would like to understand the dynamics so that we can design an appropriate mechanism to improve TCP performance. Two, we would like to have a more accurate model that specifically takes into account the burstiness caused by ack compression due to rate/delay variability.

TCP performance modeling has been extensively studied in the literature [1, 2, 9, 14, 17, 19]. Most of these models assume constant delay and service rate at the bottleneck router and calculate TCP throughput in terms of packet loss probability and round trip time. In [19], the authors model TCP performance assuming deterministic time between congestion events [1]. In [17], the authors improve the throughput prediction of [19] assuming exponential time between congestion events (loss indications as Poisson).
In our case, ack compression and link variation cause bursty losses, and the deterministic or Poisson loss models are not likely to be as accurate. In [9], the authors model a UMTS wireless network by extending the model from [19] and inflating the rtt value to account for the average additional delay incurred on the wireless link. However, we believe this will not result in an accurate model because 1) the rtt value in [19] is already an end-to-end measured value and 2) the loss process is much more bursty than the deterministic loss assumption in [19]. In [2], the authors observe that mean values are not sufficient to predict the throughput when routers have varying bandwidth, and show that increasing variance for the same mean service rate decreases TCP throughput. However, the approach is numerical, and provides little intuition in the case of delay variance.

Our approach starts with the model in [14], which describes how TCP functions in an "ideal" environment with constant round trip time and constant service rate, suffering loss only through buffer overflow. A brief summary of the result from [14] is presented here before we proceed to our model, which can be seen as an extension. We chose to extend the model in [14] since it makes no assumption about the nature of the loss event process (which is highly bursty in our case) and explicitly accounts for link delay and service rate (which are variable in our case). For simplicity, we will only discuss the analysis of TCP Reno. TCP Sack can be analyzed similarly. We also assume that the sender is not limited by the maximum receiver window; simple modifications can be made to the analysis to handle this case.

[Figure 4: TCP Congestion Window Evolution over time — (a) constant delay ("Ideal TCP"); (b) variable delay.]

Figure 4(a) shows how the TCP congestion window varies in a constant rate and delay setting. The initial phase, where TCP tries to probe for available bandwidth, is the slow start phase. After slow start, TCP enters the congestion avoidance phase. In the case of a long-lived TCP flow, one can focus only on the congestion avoidance phase. Let μ be the constant service rate, τ the constant propagation delay, T the minimum round trip time (τ + 1/μ), and B the buffer size. The congestion window follows a regular saw-tooth pattern, going from W_0 to W_max, where W_0 = W_max/2 and W_max = μτ + B + 1. Due to the regularity of each saw-tooth, consider one such saw-tooth. Within a single saw-tooth, the congestion avoidance phase is divided into two epochs. In the first epoch, say epoch A, the congestion window increases from W_0 to μT, in time t_A, with number of packets sent n_A. In the second epoch, say epoch B, the congestion window increases from μT to W_max, in time t_B, with number of packets sent n_B. TCP throughput (ignoring slow start) is simply given by (n_A + n_B)/(t_A + t_B), where

t_A = T (μT - W_0)    (1)
n_A = (W_0 t_A + t_A^2/(2T)) / T    (2)
t_B = (W_max^2 - (μT)^2) / (2μ)    (3)
n_B = μ t_B    (4)

This model, while very accurate for constant μ and T, breaks down when the constant propagation and service rate assumptions are not valid.
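To make the saw-tooth model concrete, the following minimal Python sketch evaluates Eqns (1)-(4); the function name and the example parameter values (which mirror the constant-rate, constant-delay configuration of item 1 in Tables 1 and 2 below) are ours, and the snippet is an illustration of the formulas above rather than the authors' code.

```python
def sawtooth_throughput(mu, tau, B):
    """Throughput (packets/s) of the classic saw-tooth model, Eqns (1)-(4).
    mu: constant service rate (packets/s), tau: propagation delay (s),
    B: buffer size (packets). Illustrative sketch only."""
    T = tau + 1.0 / mu               # minimum round trip time
    W_max = mu * tau + B + 1         # window at which the buffer overflows
    W_0 = W_max / 2.0                # window after halving on a single loss

    # Epoch A: window grows linearly from W_0 to mu*T (link not yet saturated).
    t_A = max(T * (mu * T - W_0), 0.0)           # Eqn (1); vanishes for large B
    n_A = (W_0 * t_A + t_A ** 2 / (2 * T)) / T   # Eqn (2)

    # Epoch B: link saturated; window grows from mu*T to W_max.
    t_B = (W_max ** 2 - (mu * T) ** 2) / (2 * mu)   # Eqn (3)
    n_B = mu * t_B                                   # Eqn (4)

    return (n_A + n_B) / (t_A + t_B)

# Example: 200 Kb/s link (25 packets/s with 1000-byte packets), 400 ms delay,
# buffer of one bandwidth-delay product (10 packets).
print(sawtooth_throughput(mu=25.0, tau=0.4, B=10))   # ~25 pkt/s, i.e. ~200 Kb/s
```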
Figure 4(b) shows how the congestion window becomes much more irregular when there is substantial variation in the wireless link delay. This is because the delay variation and ack compression result in multiple packet losses.

There are three main differences in the TCP congestion window behavior under variable rate/delay from the traditional saw-tooth behavior. First, while the traditional saw-tooth behavior always results in one packet loss due to buffer overflow, we have possibilities for multiple packet losses due to link variation. To account for this, we augment our model with parameters p1, p2, p3, representing respectively the conditional probability of a single packet loss, a double packet loss, and three or more packet losses. Note that p1 + p2 + p3 = 1 by this definition. Second, while the loss in the traditional saw-tooth model always occurs when the window size reaches W_max = μτ + B + 1, in our model losses can occur at different values of the window size, since τ and μ are now both variables instead of constants. We capture this by a parameter W_f = sqrt(Σ_{i=1..N} W_max,i^2 / N), that is, the square root of the second moment of the W_max values of each cycle. The reason we do this instead of taking a simple mean of the W_max values is that throughput is related to W_f quadratically (since it is the area under the curve in the congestion window graph). Third, because we have multiple packet losses in our model, we need to consider timeouts and slow starts in our throughput calculation. We represent the timeout duration by the T_0 parameter, which represents the average timeout value, similar to the timeout parameter in [19].

[Figure 5: Congestion Window with multiple losses — (a) two packet loss; (b) three packet loss (cwnd in packets vs. time in s).]

We now model the highly variable congestion window behavior of a TCP source under rate/delay variation. We first use W_f instead of W_max. We approximate τ (the propagation delay) by τ̂, the average link delay in the presence of delay variability. We replace μ (the service rate) by μ̂, the average service rate in the presence of rate variability. Thus, T becomes T̂ = (τ̂ + 1/μ̂). Now consider three different congestion window patterns: with probability p1, a single loss followed by congestion avoidance; with probability p2, a double loss followed by congestion avoidance; and with probability p3, a triple loss and timeout followed by slow start and congestion avoidance (we assume that three or more packet losses result in a timeout; this is almost always true if the source is TCP Reno).

First, consider the single loss event in the congestion avoidance phase. This is the classic saw-tooth pattern with two epochs as identified in [14]. Let's call these the A1 and B1 epochs. In epoch A1, the window size grows from W_01 to μ̂T̂ in time t_A1, with number of packets transmitted n_A1. In epoch B1, the window size grows from μ̂T̂ to W_f in time t_B1, with number of packets transmitted n_B1. Thus, with probability p1, n_A1 + n_B1 packets are transmitted in time t_A1 + t_B1, where

W_01 = (int) W_f/2    (5)
t_A1 = T̂ (μ̂T̂ - W_01)    (6)
n_A1 = (W_01 t_A1 + t_A1^2/(2T̂)) / T̂    (7)
t_B1 = (W_f^2 - (μ̂T̂)^2) / (2μ̂)    (8)
n_B1 = μ̂ t_B1    (9)

Next, consider the two loss event. An example of this event is shown in Figure 5(a).
The trace is obtained using the ns-2 simulation described in Section 6. In this case, after the first fast retransmit (around 130s), the source receives another set of duplicate acks that triggers the second fast retransmit (around 131s). This fixes the two losses and the congestion window starts growing from W_02. The second retransmit is triggered by the new set of duplicate acks sent in response to the first retransmission. Thus, the duration between the first and second fast retransmit is the time required for the first retransmission to reach the receiver (with a full buffer) plus the time for the duplicate ack to return to the sender. In other words, this duration can be approximated by the average link delay with a full buffer, T̂ + B/μ̂ = t_R. We have three epochs now: epoch t_R (time 130-131s) with one retransmission and zero new packets, epoch A2 (131-137s) with the window size growing from W_02 to μ̂T̂ in time t_A2 with number of packets transmitted n_A2, and epoch B1 (137-143s) as before. Thus, with probability p2, n_A2 + n_B1 packets are transmitted in time t_R + t_A2 + t_B1, where

W_02 = (int) W_01/2    (10)
t_R = T̂ + B/μ̂    (11)
t_A2 = T̂ (μ̂T̂ - W_02)    (12)
n_A2 = (W_02 t_A2 + t_A2^2/(2T̂)) / T̂    (13)

Finally, consider the three loss event. An example of this event is shown in Figure 5(b). In this case, after the first fast retransmit, we receive another set of duplicate acks that triggers the second fast retransmit. This does not fix the three losses and TCP times out. Thus, we now have five epochs: first is the retransmission epoch (100-101s) with time t_R and zero new packets; second is the timeout epoch (101-103s) with time T_0 and zero new packets; third is the slow start epoch (103-106s) where the window grows exponentially up to the previous ssthresh value of W_03 in time t_ss (Eqn. 15) with number of packets transmitted n_ss (Eqn. 16), using analysis similar to [14] and assuming an adequate buffer so that there is no loss in slow start; fourth is epoch A3 (106-111s) where the window size grows from W_03 to μ̂T̂ in time t_A3 (Eqn. 17) with number of packets transmitted n_A3 (Eqn. 18); and fifth is epoch B1 (111-118s) as before. Thus, with probability p3, n_ss + n_A3 + n_B1 packets are transmitted in time t_R + T_0 + t_ss + t_A3 + t_B1, where

W_03 = (int) W_02/2    (14)
t_ss = T̂ log_2(W_03)    (15)
n_ss = W_03 / T̂    (16)
t_A3 = T̂ (μ̂T̂ - W_03)    (17)
n_A3 = (W_03 t_A3 + t_A3^2/(2T̂)) / T̂    (18)

Given that the different types of packet loss events are independent, and using p1 + p2 + p3 = 1, the average TCP throughput can now be approximated by a weighted combination of the three types of loss events as

[p3 (n_ss + n_A3) + p2 n_A2 + p1 n_A1 + n_B1] / [p3 (t_R + T_0 + t_ss + t_A3) + p2 (t_R + t_A2) + p1 t_A1 + t_B1]    (19)

If any of the t values are less than 0, the respective epochs do not occur and we can still use the above equation while setting the respective n and t to zero.

In this paper, we infer parameters such as p1, p2, p3, W_f, and T_0 from the traces. Models such as [19] also infer the loss probability, round trip time, and timeout durations from traces.
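To keep the bookkeeping of Eqns (5)-(19) straight, the short Python sketch below transcribes them directly; the function and variable names are ours, and the example inputs are taken from row 4 of Table 1 below (with τ̂ derived as T̂ - 1/μ̂). It is an illustrative calculator under those assumptions, not the authors' code.

```python
import math

def epoch_A(W_start, mu, T):
    """Linear-growth epoch from window W_start up to mu*T (Eqns 6-7, 12-13, 17-18)."""
    t = max(T * (mu * T - W_start), 0.0)
    n = (W_start * t + t ** 2 / (2 * T)) / T if t > 0 else 0.0
    return t, n

def model_throughput(mu, tau, B, Wf, p1, p2, p3, T0):
    """Average throughput (packets/s) from Eqn (19)."""
    T = tau + 1.0 / mu                         # average round trip time T-hat
    W01 = int(Wf) // 2                          # Eqn (5)
    W02 = W01 // 2                              # Eqn (10)
    W03 = W02 // 2                              # Eqn (14)

    tA1, nA1 = epoch_A(W01, mu, T)              # single-loss pattern
    tA2, nA2 = epoch_A(W02, mu, T)              # double-loss pattern
    tA3, nA3 = epoch_A(W03, mu, T)              # timeout pattern

    tB1 = max((Wf ** 2 - (mu * T) ** 2) / (2 * mu), 0.0)   # Eqn (8)
    nB1 = mu * tB1                                          # Eqn (9)

    tR = T + B / mu                                         # Eqn (11)
    tss = T * math.log2(W03) if W03 > 1 else 0.0            # Eqn (15)
    nss = W03 / T                                           # Eqn (16), as stated

    num = p3 * (nss + nA3) + p2 * nA2 + p1 * nA1 + nB1      # Eqn (19)
    den = p3 * (tR + T0 + tss + tA3) + p2 * (tR + tA2) + p1 * tA1 + tB1
    return num / den

# Row 4 of Table 1: mu-hat = 25 pkt/s, T-hat = 517 ms, so tau-hat = 477 ms.
pkt_rate = model_throughput(mu=25.0, tau=0.517 - 1 / 25.0, B=10,
                            Wf=18.95, p1=0.339, p2=0.279, p3=0.382, T0=1.92)
print(pkt_rate * 8.0)   # ~138 Kb/s for 1000-byte packets
```

The result (about 138 Kb/s) is within roughly 1 Kb/s of the Model 3 entry for that row in Table 2; the small gap is attributable to rounding of the tabulated inputs.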
Table 1 lists the various parameters used by the different models for the simulations with rate and delay variability. We use a packet size of 1000 bytes and a buffer of 10 packets, which represents the product of the average bandwidth and the average delay, and we ensure that the source is not window limited. TD and TO denote the number of loss events of the triple duplicate and timeout type respectively; these values are used by the models in [19] and [17]. The simulation is run for 3600 seconds. We simulate delay and rate variability with exponential and uniform distributions respectively (u(a, b) in the table represents a uniform distribution with mean a and standard deviation b, while e(a) represents an exponential distribution with mean a). The details of the simulation are presented in Section 6.

Item | Rate (Kb/s) | Delay (ms) | pkts  | TD  | TO  | T_0 (s) | rtt (ms) | p1    | p2    | W_f   | T̂ (ms) | μ̂ (pkt/s)
1    | 200         | 400        | 89713 | 401 | 1   | 1.76    | 616.2    | 0.998 | 0.000 | 22.00 | 440     | 25.0
2    | 200         | 380+e(20)  | 83426 | 498 | 1   | 1.71    | 579.3    | 0.639 | 0.357 | 21.38 | 442     | 25.0
3    | 200         | 350+e(50)  | 78827 | 489 | 12  | 1.79    | 595.8    | 0.599 | 0.367 | 21.24 | 461     | 25.0
4    | 200         | 300+e(100) | 58348 | 496 | 114 | 1.92    | 606.0    | 0.339 | 0.279 | 18.95 | 517     | 25.0
5    | u(200,20)   | 400        | 82180 | 504 | 1   | 1.75    | 578.1    | 0.535 | 0.460 | 21.61 | 400     | 24.74
6    | u(200,50)   | 400        | 74840 | 517 | 29  | 1.80    | 579.9    | 0.510 | 0.403 | 20.52 | 400     | 23.34
7    | u(200,75)   | 400        | 62674 | 516 | 81  | 1.86    | 585.9    | 0.398 | 0.348 | 19.05 | 400     | 20.93
8    | u(200,50)   | 350+e(50)  | 70489 | 507 | 43  | 1.81    | 595.7    | 0.496 | 0.377 | 20.15 | 459     | 23.34
9    | u(200,75)   | 300+e(100) | 53357 | 497 | 93  | 2.03    | 635.7    | 0.404 | 0.298 | 17.78 | 511     | 20.93

Table 1: Simulation and Model parameters

Item | Simulator Goodput (Kb/s) | Model 1 [19] (accu.) | Model 2 [17] (accu.) | Model 3 [Eqn. 19] (accu.)
1    | 199.8 | 228.5 (0.86) | 201.9 (0.99) | 199.8 (1.0)
2    | 185.4 | 208.0 (0.88) | 186.0 (1.0)  | 186.0 (1.0)
3    | 175.1 | 195.5 (0.88) | 177.2 (0.99) | 180.9 (0.97)
4    | 129.4 | 145.3 (0.88) | 153.7 (0.81) | 137.0 (0.94)
5    | 182.5 | 205.2 (0.88) | 184.6 (0.99) | 181.3 (0.99)
6    | 166.2 | 186.0 (0.88) | 174.6 (0.95) | 165.2 (0.99)
7    | 139.2 | 158.4 (0.86) | 163.4 (0.83) | 137.2 (0.99)
8    | 156.5 | 174.6 (0.88) | 166.5 (0.94) | 160.2 (0.97)
9    | 118.4 | 134.0 (0.87) | 142.6 (0.80) | 125.0 (0.94)

Table 2: Simulation and Model throughput values (Kb/s)

Table 2 compares the throughput of the simulation, for the different rate and delay distributions at the server, with the throughput predicted by the exact equation of the model in [19], by the Poisson model in [17], and by Equation 19. The accuracy of the prediction, defined as 1 minus the ratio of the difference between the model and simulation throughput values to the simulation throughput value, is listed in parentheses. As the last column shows, the match between our model and the simulation is extremely accurate when the delay/rate variation is small, and the match is still well over 90% even when the variation is large. The Poisson loss model used in [17] performs very well when the variability is low but, understandably, does not predict well when variability increases. The deterministic loss model seems to consistently overestimate the throughput.
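As a quick check of the accuracy figures quoted in Table 2, the snippet below applies the stated definition (one minus the relative error) to row 4; the helper name is ours and the snippet is purely illustrative.

```python
def accuracy(model_kbps, sim_kbps):
    """Accuracy as defined above: 1 - |model - simulation| / simulation."""
    return 1.0 - abs(model_kbps - sim_kbps) / sim_kbps

# Row 4 of Table 2: simulator goodput 129.4 Kb/s.
for name, pred in [("Model 1 [19]", 145.3), ("Model 2 [17]", 153.7), ("Model 3 (Eqn. 19)", 137.0)]:
    print(name, round(accuracy(pred, 129.4), 2))   # 0.88, 0.81, 0.94
```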
From Table 1, one can clearly see the impact of delay and rate variability. As the variability increases, the probability of double loss, p2, and of three or more losses, p3 = (1 - p2 - p1), start increasing while the goodput of the TCP flow starts decreasing. For example, comparing case 1 to case 4, p1 decreases from 0.998 to 0.339 while p3 increases. Increases in p2 and p3 come about because, when the product μ̂T̂ decreases, a pipe that used to accommodate more packets suddenly becomes smaller, causing additional packet losses. Given that n_A1/t_A1 > n_A2/(t_R + t_A2) > (n_ss + n_A3)/(t_R + T_0 + t_ss + t_A3), any solution that improves TCP performance must reduce the occurrence of multiple packet losses, p2 and p3. We present a solution that tries to achieve this in the next section.

ACK REGULATOR

In this section, we present our network-based solution for improving TCP performance in the presence of varying bandwidth and delay. The solution is designed for improving the performance of TCP flows towards the mobile host (for downloading-type applications), since links like HDR are designed for such applications. The solution is implemented at the wireless edge, specifically at the RNC, at the layer just above RLP/RLC. Note that, in order to implement the standard-based RLP/RLC, the RNC already needs to maintain a per-user queue. Our solution requires a per-TCP-flow queue, which should not result in significant additional overhead given the low bandwidth nature of the wireless environment. We also assume that the data and ack packets go through the same RNC; this is true in the case of 3G networks, where the TCP flow is anchored at the RNC because of the presence of soft handoff and RLP.

We desire a solution that is simple to implement and remains robust across different implementations of TCP. To this end, we focus only on the congestion avoidance phase of TCP and aim to achieve the classic saw-tooth congestion window behavior, even in the presence of varying rates and delays, by controlling the buffer overflow process in the bottleneck link. We also assume for this discussion that every packet is acknowledged (the discussion can be easily modified to account for delayed acks, where single ack packets acknowledge multiple data packets).

Our solution is called the Ack Regulator since it regulates the flow of acks back to the TCP source. The intuition behind the regulation algorithm is to avoid any buffer overflow loss until the congestion window at the TCP source reaches a pre-determined threshold and, beyond that, to allow only a single buffer overflow loss. This ensures that the TCP source operates mainly in the congestion avoidance phase, with the congestion window exhibiting the classic saw-tooth behavior.

[Figure 6: Ack Regulator Implementation. Per-flow data and ack queues at the RNC between the wireline and wireless networks; DataSeqNoLast (DL): largest sequence number of the last data packet received; DataSeqNoFirst (DF): largest sequence number of the last data packet sent; AckSeqNoLast (AL): largest sequence number of the last ack packet received; AckSeqNoFirst (AF): largest sequence number of the last ack packet sent; QueueLength and QueueLim refer to the data queue.]

Before we present our solution, we describe two variables that will aid in the presentation of our solution.

ConservativeMode: Mode of operation during which, each time an ack is sent back towards the TCP source, there is buffer space for at least two data packets from the source.

Note that if TCP operates in the congestion avoidance phase, there would be no buffer overflow loss as long as the algorithm operates in conservative mode. This follows from the fact that, during the congestion avoidance phase, TCP increases its window size by at most one on reception of an ack.
This implies that, on reception of an ack, the TCP source sends either one packet (no window increase) or two packets (window increase). Therefore, if there is space for at least two packets in the buffer at the time an ack is sent back, there can be no packet loss.

AckReleaseCount: The sum of the total number of acks sent back towards the source and the total number of data packets from the source in transit towards the RNC due to previously released acks, assuming the TCP source window is constant.

AckReleaseCount represents the number of packets that can be expected to arrive in the buffer at the RNC assuming that the source window size remains constant. Thus, buffer space equal to AckReleaseCount must be reserved whenever a new ack is sent back to the source if buffer overflow is to be avoided.

On Enque of Ack / Deque of data packet:
1.   AcksSent = 0;
2.   BufferAvail = QueueLim - QueueLength;
3.   BufferAvail -= (AckReleaseCount + ConservativeMode);
4.   if (BufferAvail >= 1)
5.     if (AckSeqNoLast - AckSeqNoFirst < BufferAvail)
5.1      AcksSent += (AckSeqNoLast - AckSeqNoFirst);
5.2      AckSeqNoFirst = AckSeqNoLast;
       else
5.3      AckSeqNoFirst += BufferAvail;
5.4      AcksSent += BufferAvail;
5.5    Send acks up to AckSeqNoFirst;

Figure 7: Ack Regulator processing at the RNC

Figure 6 shows the data and ack flow and the queue variables involved in the Ack Regulator algorithm, which is presented in Figure 7. We assume for now that the AckReleaseCount and ConservativeMode variables are as defined earlier. The Ack Regulator algorithm runs on every transmission of a data packet (deque) and every arrival of an ack packet (enque). The instantaneous buffer availability in the data queue is maintained by the BufferAvail variable (line 2). BufferAvail is then reduced by the AckReleaseCount and ConservativeMode variables (line 3).

Depending on the value of the ConservativeMode variable (1 or 0), the algorithm operates in two modes, a conservative mode or a non-conservative mode, respectively. In the conservative mode, an extra buffer space is reserved in the data queue to ensure that there is no loss even if the TCP congestion window is increased by 1, while, in the non-conservative mode, a single packet loss occurs if TCP increases its congestion window by 1. Now, after taking the AckReleaseCount and ConservativeMode variables into account, if there is at least one buffer space available (line 4) and the number of acks present in the ack queue (AckSeqNoLast - AckSeqNoFirst) is less than BufferAvail, all those acks are sent to the source (lines 5.1, 5.2); otherwise, only BufferAvail acks are sent to the source (lines 5.3, 5.4).

Note that the actual transmission of acks (line 5.5) is not presented here. The transmission of AcksSent acks can be performed one ack at a time, or acks can be bunched together due to the cumulative nature of TCP acks. However, care must be taken to preserve the duplicate acks, since the TCP source relies on the number of duplicate acks to adjust its congestion window. Also, whenever three or more duplicate acks are sent back, it is important that one extra buffer space be reserved to account for the fast retransmission algorithm. Additional buffer reservations of two packets to account for the Limited Transmit algorithm [12] can also be provided for, if necessary.

1.   Initialize ConservativeMode = 1; α = 2
2.   On Enque of ack packet:
       if ((DataSeqNoLast - AckSeqNoFirst) > α*QueueLim)
         ConservativeMode = 0;
3.   On Enque and Drop of data packet:
       ConservativeMode = 1;
4.   On Enque/Deque of data packet:
       if (((DataSeqNoLast - AckSeqNoFirst) < α*QueueLim/2)
           OR (DataQueueLength == 0))
         ConservativeMode = 1;

Figure 8: ConservativeMode updates

We now present the algorithm (Figure 8) for updating the ConservativeMode variable, which controls the switching of the Ack Regulator algorithm between the conservative and the non-conservative modes. The algorithm starts in conservative mode (line 1). Whenever a targeted TCP window size is reached (in this case, α*QueueLim = 2*QueueLim), the algorithm is switched into non-conservative mode (line 2). The TCP window size is approximated here by the difference between the largest sequence number in the data queue and the sequence number in the ack queue. This is a reasonable approximation in our case, since the wireless link is likely the bottleneck and most (if not all) of the queuing is done at the RNC. When operating in the non-conservative mode, no additional buffer space is reserved. This implies that there will be a single loss the next time the TCP source increases its window size. On detection of the packet loss, the algorithm switches back to the conservative mode (line 3). This ensures that losses are of the single loss variety as long as the estimate of AckReleaseCount is conservative. Line 4 in the algorithm results in a switch back into conservative mode whenever the data queue length goes to zero or whenever the TCP window size is halved. This handles the case when TCP reacts to losses elsewhere in the network and the Ack Regulator can go back to being conservative. Note that, if the TCP source is ECN capable, instead of switching to non-conservative mode, the Ack Regulator can simply mark the ECN bit to signal the source to reduce its congestion window, resulting in no packet loss.

1.   Initialize AckReleaseCount = 0;
2.   On Enque of Ack / Deque of data packet (after processing in Fig 7):
       AckReleaseCount += AcksSent;
3.   On Enque of data packet:
       if (AckReleaseCount > 0)
         AckReleaseCount--;
4.   On Deque of data packet:
       if (DataQueueLength == 0)
         AckReleaseCount = 0;

Figure 9: AckReleaseCount updates

We finally present the algorithm for updating the AckReleaseCount variable in Figure 9. Since AckReleaseCount estimates the expected number of arriving data packets and reserves buffer space for them, it is important to get an accurate estimate. An overestimate of AckReleaseCount would result in unnecessary reservation of buffers that won't be occupied, while an underestimate of AckReleaseCount can lead to buffer overflow loss(es) even in conservative mode due to inadequate reservation.

With knowledge of the exact version of the TCP source and the round trip time from the RNC to the source, it is possible to compute an exact estimate of AckReleaseCount. However, since we would like to be agnostic to the TCP version as far as possible and also be robust against varying round trip times on the wired network, our algorithm tries to maintain a conservative estimate of AckReleaseCount. Whenever we send acks back to the source, we update AckReleaseCount by that many acks (line 2).
Likewise, whenever a data packet arrives into the RNC from the source, we decrement the variable while ensuring that it does not go below zero (line 3).

While maintaining a non-negative AckReleaseCount in this manner avoids underestimation, it can also result in unbounded growth of AckReleaseCount, leading to significant overestimation as errors accumulate. For example, we increase AckReleaseCount whenever we send acks back to the source; however, if TCP is reducing its window size due to loss, we cannot expect any data packets in response to the acks being released. Thus, over time, AckReleaseCount can grow in an unbounded manner. In order to avoid this scenario, we reset AckReleaseCount to zero (line 4) whenever the data queue is empty. Thus, while this reset operation is necessary for synchronizing the real and estimated AckReleaseCount after a loss, it is not a conservative mechanism in general, since an AckReleaseCount of zero implies that no buffer space is currently reserved for any incoming data packets that are unaccounted for. However, by doing the reset only when the data queue is empty, we significantly reduce the chance of the unaccounted data packets causing a buffer overflow loss.

Finally, we assume that there is enough buffer space for the ack packets in the RNC. The maximum number of ack packets is the maximum window size achieved by the TCP flow (α*QueueLim in our algorithm). Ack packets do not have to be buffered as is, since storing the sequence numbers is sufficient (however, care should be taken to preserve duplicate ack sequence numbers as is). Thus, the memory requirement for ack storage is very minimal.
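To tie Figures 7-9 together, the following compact Python sketch consolidates the per-flow state machine described above. The class and method names are ours, send_acks() is a stand-in for line 5.5 of Figure 7, sequence numbers are treated as packet counts, and refinements mentioned in the text (preserving duplicate acks, the extra reservation for fast retransmit or Limited Transmit, ECN marking) are omitted. It is a sketch of the published pseudocode under these simplifications, not the authors' implementation.

```python
class AckRegulator:
    """Per-TCP-flow Ack Regulator state kept at the RNC (sketch of Figures 7-9)."""

    def __init__(self, queue_lim, alpha=2, send_acks=print):
        self.queue_lim = queue_lim        # data-queue limit (packets)
        self.alpha = alpha                # window threshold factor (alpha = 2)
        self.send_acks = send_acks        # forwards acks up to a sequence number
        self.conservative = 1             # Figure 8, line 1
        self.ack_release_count = 0        # Figure 9, line 1
        self.queue_length = 0             # current data-queue occupancy
        self.data_seq_last = 0            # DL: highest data sequence number seen
        self.ack_seq_last = 0             # AL: highest ack sequence number received
        self.ack_seq_first = 0            # AF: highest ack sequence number forwarded

    def _regulate(self):
        """Figure 7: run on every ack enque and every data-packet deque."""
        avail = self.queue_lim - self.queue_length
        avail -= self.ack_release_count + self.conservative
        if avail >= 1:
            pending = self.ack_seq_last - self.ack_seq_first
            acks_sent = min(pending, avail)
            if acks_sent > 0:
                self.ack_seq_first += acks_sent
                self.send_acks(self.ack_seq_first)      # line 5.5
                self.ack_release_count += acks_sent     # Figure 9, line 2

    def on_ack_enque(self, ack_seq):
        self.ack_seq_last = max(self.ack_seq_last, ack_seq)
        # Figure 8, line 2: past the target window, allow one overflow loss.
        if self.data_seq_last - self.ack_seq_first > self.alpha * self.queue_lim:
            self.conservative = 0
        self._regulate()

    def on_data_enque(self, data_seq, dropped=False):
        self.data_seq_last = max(self.data_seq_last, data_seq)
        if dropped:
            self.conservative = 1                       # Figure 8, line 3
            return
        self.queue_length += 1
        if self.ack_release_count > 0:
            self.ack_release_count -= 1                 # Figure 9, line 3
        self._check_conservative()                      # Figure 8, line 4

    def on_data_deque(self):
        self.queue_length -= 1
        if self.queue_length == 0:
            self.ack_release_count = 0                  # Figure 9, line 4
        self._check_conservative()                      # Figure 8, line 4
        self._regulate()

    def _check_conservative(self):
        if (self.data_seq_last - self.ack_seq_first < self.alpha * self.queue_lim / 2
                or self.queue_length == 0):
            self.conservative = 1
```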
SIMULATION RESULTS

In this section, we present detailed simulation results comparing the performance of TCP Reno and TCP Sack, in the presence and absence of the Ack Regulator. First, we study the effect of variable bandwidth and variable delay, using different distributions, on the throughput of a single long-lived TCP flow. Next, we present a model for the 3G1X-EVDO (HDR) system (which exhibits both variable rate and variable delay) and evaluate the performance of a single TCP flow in the HDR environment. Then, we present the performance of multiple TCP flows sharing a single HDR wireless link. Finally, we briefly discuss the impact of different parameters affecting the behavior of the Ack Regulator.

[Figure 10: Simulation Topology — TCP sources S1..Sn, the RNC, a virtual node V and sinks M1..Mn; wired links of 100Mb/s, 1ms and L Mb/s, D ms; forward wireless channel of FR Mb/s, FD ms and reverse wireless channel of RR Mb/s, RD ms.]

All simulations are performed using ns-2. The simulation topology used is shown in Figure 10. S_i, i = 1..n, corresponds to the set of TCP source nodes sending packets to the set of TCP sink nodes M_i, i = 1..n. Each S_i, M_i pair of nodes forms a TCP pair. The RNC is connected to the M_i nodes through a V (virtual) node for simulation purposes. L, the bandwidth between S_i and the RNC, is set to 100Mb/s and D is set to 1ms, except in cases where D is explicitly varied. The forward wireless channel is simulated as having rate FR and delay FD, and the reverse wireless channel has rate RR and delay RD.

Each simulation run lasts for 3600s (1hr) unless otherwise specified, and all simulations use a packet size of 1KB. The TCP maximum window size is set to 500KB. Using such a large window size ensures that TCP is never window limited in all experiments except in cases where the window size is explicitly varied.

6.1 Variable Delay

In this section, the effect of delay variation is illustrated by varying FD, the forward link delay. Without modification, the use of a random link delay in the simulation will result in out-of-order packets, since a packet transmitted later with a lower delay can overtake packets transmitted earlier with a higher delay. However, since delay variability in our model is caused by factors that will not result in packet reordering (e.g. processing time variation) and RLP delivers packets in sequence, the simulation code is modified such that packets cannot reach the next hop until the packet transmitted earlier has arrived. This modification applies to all simulations with variable link delay.

[Figure 11: Throughput with Variable Delay e(x)+400-x — (a) TCP throughput (Kb/s) vs. delay variance; (b) throughput (Kb/s) vs. buffer size (packets), BDP = 10; curves for Reno and Sack with and without the Ack Regulator.]

Figure 11(a) shows the throughput for a single TCP flow (n = 1) for FR = 200Kb/s and RR = 64Kb/s. FD has an exponential distribution with a mean that varies from 20ms to 100ms, and RD = 400ms - mean(FD), so that the average FD + RD is maintained at 400ms. The buffer size on the bottleneck link for each run is set to 10, the product of the mean throughput (200Kb/s or 25pkt/s) and the mean link delay (0.4s). This product will be referred to as the bandwidth-delay product (BDP) in later sections. Additional delay distributions like uniform, normal, lognormal, and Poisson were also experimented with. Since the results are similar, only plots for an exponential delay distribution are shown.

As expected, when the delay variation increases, throughput decreases for both TCP Reno and TCP Sack. On increasing the delay variance from 20 to 100, the throughput of TCP Reno decreases by 30% and that of TCP Sack decreases by 19%. On the other hand, TCP Reno and TCP Sack flows which are Ack Regulated are much more robust, and their throughput decreases by only 8%. Relative to one another, the Ack Regulator performs up to 43% better than TCP Reno and 19% better than TCP Sack. Another interesting result is that the Ack Regulator delivers the same throughput irrespective of whether the TCP source is Reno or Sack. This is understandable given that the Ack Regulator tries to ensure that only a single buffer overflow loss occurs and, in this regime, Reno and Sack are known to behave similarly. This property of the Ack Regulator is extremely useful since, for a flow to use TCP Sack, both the sender and receiver need to be upgraded. Given that there are still a significant number of web servers that have not yet been upgraded to TCP Sack [28], deployment of the Ack Regulator would ensure excellent performance irrespective of the TCP version running.

Figure 11(b) shows how throughput varies with buffer size with the same set of parameters, except that FD is now exponentially distributed with a fixed mean of 50ms. Even with a very small buffer of 5 packets (0.5 BDP), the Ack Regulator is able to maintain a throughput of over 80% of the maximum throughput of 200Kb/s.
Thus, Ack Regulator delivers\nrobust throughput performance across different buffer\nsizes. This property is very important in a varying rate and\ndelay environment of a wireless system, since it is difficult to\nsize the system with an optimal buffer size, given that the\nBDP also varies with time. For a buffer of 4 packets, the\nimprovement over TCP Reno and Sack is about 50% and\n24% respectively. As buffer size increases, the throughput\ndifference decreases. With buffer size close to 20 packets\n(2 BDP), TCP Sack performs close to Ack Regulated flows,\nwhile improvement over TCP Reno is about 4%.\nFinally, in Table 3, we list parameter values from the simulation\nfor delay variance of 100. First, consider Reno and\nItem\nRate,\nTD\nTO\np1\np2\np3\nW\nf\nKb/s\nReno\n129\n496\n114\n0.34\n0.3\n0.38\n19\nReno+AR\n184\n302\n8\n0.98\n0.0\n0.02\n24\nSack\n160\n434\n4\n0.99\n0.0\n0.01\n19\nSack+AR\n184\n302\n8\n0.97\n0.0\n0.03\n24\nTable 3:\nParameters from simulation for variance=100\n140\n150\n160\n170\n180\n190\n200\n0\n10\n20\n30\n40\n50\n60\n70\n80\nThroughput (kb/s)\nRate Variance\nReno\nReno, w/AR\nSack\nSack, w/AR\n80\n100\n120\n140\n160\n180\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\nThroughput (kb/s)\nBuffer Size (packet)\nReno, No AR\nReno, w/AR\nSack, No AR\nSack, w/ AR\nBDP=9\n(a)\nBandwidth Variability\n(b) Different Buffer Size\nFigure 12:\nThroughput\nwith Variable Bandwidth\nu(200,x)\nReno with Ack Regulator (first two rows). It is clear that\nAck Regulator is able to significantly reduce the conditional\nprobability of multiple losses p2 and p3 as well as absolute\nnumber of loss events (T D and T O) resulting in substantial\ngains over Reno. Next, consider Sack and Sack with\nAck Regulator (last two rows). In this case, we can see that\nSack is very effective in eliminating most of the timeout occurrences\n. However, Ack Regulator is still able to reduce\nthe absolute number of loss events by allowing the congestion\nwindow to grow to higher values (24 vs 19), resulting\nin throughput gains.\n6.2\nVariable Bandwidth\nIn this section, we vary the link bandwidth, F R. Figure\n12(a) shows throughput for a single TCP flow. F R is\nuniformly distributed with a mean of 200 Kb/s and the variance\nis varied from 20 to 75. F D = 200ms, RR = 64Kb/s\nand RD = 200ms. The buffer size on the bottleneck link\nfor each run is 10. Again, we have experimented with other\nbandwidth distributions, but, due to lack of space, only uniform\ndistribution is shown. Note that, with variable rate,\nthe maximum throughput achievable is different from the\nmean rate. For uniform distribution, a simple closed form\nformula for the throughput is simply 1/\n\nb\na\n1/xdx = 1/(ln b-ln\na) where b is the maximum rate and a is the minimum\nrate.\nWhen the rate variance increases, throughput of TCP\nReno decreases as expected. Compared to TCP Reno, Ack\nRegulator improves the throughput by up to 15%. However\n, TCP Sack performs very well and has almost the same\nthroughput as Ack Regulated flows. Based on the calculations\nfor maximum throughput discussed before, it can be\nshown that all flows except Reno achieve maximum throughput\n. This shows that if rate variation is not large enough,\nTCP Sack is able to handle the variability. However, for\nvery large rate variations (e.g. rate with lognormal distribution\nand a large variance), the performance of TCP Sack\nis worse than when Ack Regulator is present.\nFigure 12(b) shows how the throughput varies with buffer\nsize. 
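To make the closed-form expression above concrete: if each packet's transmission time is its size divided by the rate in effect at that moment, the long-run throughput is the harmonic mean of the rate, 1/E[1/R], which for a rate uniform on [a, b] evaluates to (b - a)/(ln b - ln a) once the uniform density 1/(b - a) is included in the expectation. The snippet below is a numeric check under the assumption that the "50" in a u(200,50) rate is a standard deviation, so the endpoints are 200 +/- 50*sqrt(3); this reproduces the 186.7 Kb/s maximum achievable throughput quoted in the next subsection.

```python
import math

def max_throughput_uniform(a, b):
    """Harmonic-mean throughput for a rate uniform on [a, b]:
    1 / E[1/R] = (b - a) / (ln b - ln a)."""
    return (b - a) / (math.log(b) - math.log(a))

half_width = 50 * math.sqrt(3)                      # uniform std dev of 50
print(max_throughput_uniform(200 - half_width,
                             200 + half_width))     # ~186.7 Kb/s
```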
Note that with a lower throughput, bandwidth delay\nproduct is smaller than 10 packets. Again, Ack Regulated\n79\n140\n145\n150\n155\n160\n165\n170\n175\n180\n185\n190\n6\n8\n10\n12\n14\n16\n18\nThroughput (kb/s)\nBuffer Size (packet)\nReno, No AR\nReno, w/AR\nSack, No AR\nSack, w/ AR\nBDP=9\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n1.1\n1.2\n1.3\n6\n8\n10\n12\n14\n16\n18\nRTT (sec)\nBuffer Size (packet)\nReno, No AR\nReno, w/AR\nSack, No AR\nSack, w/ AR\nBDP=9\n(a)\nThroughput vs Buffer Size\n(b) rtt vs Buffer Size\nFigure 13:\nThroughput and rtt for u(200,50),350+e(50)\nTCP flows perform particularly well when the buffer size is\nsmall. With buffer size of 5, the improvement over TCP\nSack is 40%.\n6.3\nVariable Delay and Bandwidth\nIn this section, we vary both the bandwidth and delay of\nthe wireless link. F R is uniformly distributed with a mean of\n200 Kb/s and variance of 50, DR is exponentially distributed\nwith a mean of 50ms, RR = 64Kb/s and RD = 350ms. The\nmaximum achievable throughput is 186.7 Kb/s. The BDP\nis therefore about 9 packets.\nFigure 13(a) shows the throughput for a single TCP flow\nwith the buffer size ranging from 7 to 20. The combination\nof variable rate and delay has a large negative impact on the\nperformance of TCP Reno and it is only able to achieve 70%\nto 80% of the bandwidth of Ack Regulated flows when the\nbuffer size is 6 packets. Even with a buffer size of 18 packets,\nthe throughput difference is more than 5%. Throughput of\nTCP Sack is about 5% to 10% lower than Ack Regulator,\nuntil the buffer size reaches 18 packets (about 2 BDP).\nOne of the cost of using the Ack Regulator is the increase\nin average round trip time (rtt). The average rtt values for\nall 4 types of flows are shown in 13(b) for different buffer\nsizes. TCP Reno has the lowest rtt followed by TCP Sack\nand the rate of rtt increase with buffer size is comparable.\nWith Ack Regulator, rtt increase is comparable with unreg-ulated\nflows for buffer size less than 9 (1 BDP). For larger\nbuffer sizes, since Ack Regulator uses = 2 times buffer\nsize to regulate the acks in conservative mode, rtt increases\nfaster with buffer size than regular TCP, where only the\ndata packet buffer size contributes to rtt. For example, with\nbuffer size of 9, Ack Regulated flows have a rtt 15% larger\nand with buffer size of 18, the rtt is 48% larger compared\nto TCP Sack. This effect can be controlled by varying the\nparameter of the Ack Regulator.\n6.4\nSimulation with High Data Rate\nHigh Data Rate (HDR) [6] is a Qualcomm proposed CDMA\nair interface standard (3G1x-EVDO) for supporting high\nspeed asymmetrical data services. One of the main ideas\nbehind HDR is the use of channel-state based scheduling\nwhich transmits packets to the user with the best signal-to-noise\nratio. The actual rate available to the selected user\ndepends on the current signal-to-noise ratio experienced by\nthe user. The higher the ratio, the higher the rate available\nto the user. In addition, in order to provide some form\nof fairness, a Proportional Fair scheduler is used which provides\nlong-term fairness to flows from different users. 
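The proportional fair rule itself is simple: in each time slot, serve the user with the largest ratio of instantaneous achievable rate to exponentially averaged past throughput, then update the averages. The sketch below is a generic illustration of that rule, not Qualcomm's actual scheduler implementation; the averaging-window parameter plays the role of the window mentioned next.

```python
def proportional_fair_schedule(inst_rates, avg_tput, t_c=1000):
    """One scheduling decision: serve user i maximizing R_i / T_i, then
    update the exponentially averaged throughputs T_i over a window of
    t_c slots."""
    # Pick the user whose current rate is largest relative to its history.
    chosen = max(range(len(inst_rates)),
                 key=lambda i: inst_rates[i] / max(avg_tput[i], 1e-9))
    # Update the moving averages: the chosen user adds its rate, others add 0.
    for i in range(len(avg_tput)):
        served = inst_rates[i] if i == chosen else 0.0
        avg_tput[i] = (1.0 - 1.0 / t_c) * avg_tput[i] + served / t_c
    return chosen
```

Over many slots this rule favors each user when its channel is relatively good, while keeping long-term throughputs balanced across users, which is the fairness property referred to above.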
We use\nQualcomm's Proportional Fair scheduler in our simulation\nwith an averaging window of 1000 time slots, where each\nRate(Kb/s)\nProb.\nRate(Kb/s)\nProb.\n38.4\n0.033\n614.4\n0.172\n76.8\n0.015\n921.6\n0.145\n102.6\n0.043\n1228.8\n0.260\n153.6\n0.023\n1843.2\n0.042\n204.8\n0.060\n2457.6\n0.011\n307.2\n0.168\nTable 4:\nHDR Data Rates for a one user system\n250\n300\n350\n400\n450\n500\n550\n600\n0\n5\n10\n15\n20\n25\n30\n35\n40\nThroughput (Kb/s)\nBuffer Size (packet)\nReno, No AR\nReno, w/AR\nSack, No AR\nSack, w/ AR\nBDP=15\n200\n300\n400\n500\n600\n700\n800\n0\n5\n10\n15\n20\n25\n30\n35\n40\nAverage RTT (ms)\nBuffer Size (Packet)\nReno\nReno, w/AR\nSack\nSack, w/AR\nBDP=15\n(a)\nThroughput\n(b) rtt\nFigure 14:\nThroughput/rtt with HDR, Single Flow\nslot is 1.67 ms. While the HDR system results in higher raw\nthroughput, the rate and delay variation seen is substantial.\nIn this section, we model a simplified HDR environment\nin ns-2, focusing on the layer 3 scheduling and packet fragmentation\n. The fading model for the wireless link used is\nbased on Jake's Rayleigh fading channel model [25]. This\ngives us the instantaneous signal-to-noise ratio. Using Table\n2 in [6] which lists the rate achievable for a given signal-to-noise\nratio assuming a frame error rate of less than 1%, the\nachievable bandwidth distribution (with one user) for our\nsimulation is shown in Table 4.\nThe simulation settings are as follows. F R is a variable\nthat has a bandwidth distribution of Table 4, due to the variations\nof the fading conditions of the channel. Based on the\nguidelines from [26], F D is modeled as having a uniform distribution\nwith mean 75ms and variance 30 and RD is modeled\nas having a uniform distribution with mean 125ms and\nvariance 15. These are conservative estimates. We expect\ndelay variations in actual systems to be higher (for example,\nnote the ping latencies from our experiment in Section 3).\nThe uplink in a HDR system is circuit-based and RR is set\nto be 64Kb/s.\nFigure 14(a) shows how throughput for a single TCP flow\nvaries with buffer size. Assuming an average bandwidth of\n600Kb/s and a link delay of 200ms, BDP is 15 packets.\nAgain, the performance of TCP Reno flows that are Ack\nRegulated is significantly better than plain TCP Reno over\nthe range of buffer size experimented, with improvements\nfrom 4% to 25%. TCP Sack flows also performs worse than\nAck Regulated flows up to buffer size of 20. The improvement\nof Ack Regulator over TCP Sack ranges from 0.5% to\n18%.\nAs mentioned earlier, one of the costs of using the Ack\nRegulator is increase in average rtt. The average rtt for all\n4 types of flows are shown in 14(b) with buffer size varying\nfrom 5 to 40. The effect is similar to the rtt variation with\nbuffer size seen in Section 6.3.\n6.5\nMultiple TCP Flows\nIn this simulation, the number of flows (n) sharing the\nbottleneck link is increased to 4 and 8. Per-flow buffering is\n80\n300\n400\n500\n600\n700\n800\n2\n4\n6\n8\n10\n12\n14\n16\n18\nTotal Throughput (Kb/s)\nPer Flow Buffer Size (Packet)\n4 Reno Flows\n4 Reno Flows, w/AR\n4 Sack Flows\n4 Sack Flows, w/AR\nBDP=5\n450\n500\n550\n600\n650\n700\n750\n800\n850\n900\n950\n2\n4\n6\n8\n10\n12\n14\n16\n18\nTotal Throughput (kb/s)\nPer Flow Buffer Size (Packet)\n8 Reno Flows\n8 Reno Flows, w/AR\n8 Sack Flows\n8 Sack Flows, w/AR\nBDP=3\n(a)\n4 TCP Flows\n(b) 8 TCP Flows\nFigure 15:\nThroughput with HDR, Multiple Flows\nprovided for each TCP flow. 
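For reference, the achievable-rate distribution of Table 4 can be used directly to drive a variable-rate channel in a simulation. The sketch below only illustrates that idea; it is not the ns-2 channel model used here, which derives the rates from the fading process as described above.

```python
import random

# (rate in Kb/s, probability) pairs from Table 4; random.choices normalizes
# the weights, so rounding in the listed probabilities is harmless.
hdr_rates = [(38.4, 0.033), (76.8, 0.015), (102.6, 0.043), (153.6, 0.023),
             (204.8, 0.060), (307.2, 0.168), (614.4, 0.172), (921.6, 0.145),
             (1228.8, 0.260), (1843.2, 0.042), (2457.6, 0.011)]

def sample_hdr_rate():
    """Draw one achievable rate according to the Table 4 distribution."""
    rates, probs = zip(*hdr_rates)
    return random.choices(rates, weights=probs, k=1)[0]

print(sample_hdr_rate())
```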
For 4 flows, using mean rate of\n200Kb/s, 1KB packet and rtt of 0.2s, BDP is 5 packets per\nflow. For 8 flows, using mean rate of 120Kb/s, 1KB packet\nand rtt of 0.2s, BDP is 3 packets per flow.\nAs the number of TCP flows increases, the expected rate\nand delay variation seen by individual flows also increases.\nThus, even though the total throughout of the system increases\nwith more users due to channel-state based scheduling\n, the improvement is reduced by the channel variability.\nFigure 15(a) shows the throughput for 4 TCP flows. The\nimprovement of Ack Regulator over TCP Sack increases\ncompared to the single TCP case. For example, the gain\nis 17% with per-flow buffer size of 5 (BDP). For Reno the\ngain is even greater. With per-flow buffer size of 5, the improvement\nis 33%. Similar result can also be observed for\nthe case of 8 TCP flows as shown in Figure 15(b). For both\nTCP Reno and Sack, the gain is about 31% and 29% respectively\nfor per-flow buffer size of 3. From the figure, it\ncan seen that, for TCP Sack and Reno to achieve close to\nmaximum throughput without Ack Regulator, at least three\ntimes the buffer requirements of Ack Regulator is necessary\n(buffer requirements for acks in the Ack Regulator is negligible\ncompared to the 1KB packet buffer since only the\nsequence number needs to be stored for the acks). This not\nonly increases the cost of the RNC, which needs to support\nthousands of active flows, it also has the undesirable side-effects\nof large rtt's that was noted in Section 3.\nWith multiple TCP flows, the issue of throughput fairness\nnaturally arises.\nOne way to quantify how bandwidth is\nshared among flows is to use the fairness indexdescribed in\n[27]. This indexis computed as the ratio of the square of the\ntotal throughput to n times the square of the individual flow\nthroughput. If all flows get the same allocation, then the\nfairness indexis 1. As the differences in allocation increases,\nfairness decreases. A scheme which allocates bandwidth to\nonly a few selected users has a fairness indexnear 0.\nComputation of this indexis performed for all multiple\nflows simulation and the indexis greater than 0.99 in all\ncases. This result is expected since with per flow buffering,\nand proportional fair scheduling, the long term throughput\nof many TCP long-lived flows sharing the same link should\nbe fair.\n6.6\nParameters affecting the performance of\nAck Regulator\nDue to lack of space, we will only briefly present the results\nof varying parameters such as wired network latency and .\nAs the network latency is varied from 20ms to 100ms,\nthroughput decreases by 1.63% and 2.62% for Reno and\nReno with Ack Regulator flows, respectively. Most of the\ndecrease can be accounted for by the impact of increase\nin latency on TCP throughput. The result shows that the\nAckReleaseCount estimation algorithm is effective and hence\nthe Ack Regulator is able to reserve the appropriate amount\nof buffer for expected packet arrivals even with substantial\nwireline delay.\nIn another experiment, the parameter in an Ack Regulated\nTCP flow is varied from 1 to 4. When is increased\nfrom 1 to 3, the TCP flow is able to achieve its maximum\nthroughput at a smaller buffer size.\nAs increases, the\nrtt also increases and when is increased to 4, throughput\ndecreases for larger buffer sizes (> 15). 
The decrease\nin throughput is caused by the accumulation of sufficiently\nlarge amount of duplicate acks that are sent to the TCP\nsender.\nA value of = 2 appears to be a good choice,\nbalancing throughput and rtt for reasonable buffer sizes.\n6.7\nSummary of Results and Discussion\nIn this section, we first summarize the results from the\nsimulation experiments and then briefly touch upon other\nissues.\nWe first started with experiments using a wireless link\nwith variable delay. We showed that Ack Regulator delivers\nperformance up to 43% better than TCP Reno and 19%\nbetter than TCP Sack when the buffer size was set to one\nBDP. We then examined the impact of a wireless link with\nvariable rate. We saw that when the rate variance increases,\nthroughput of TCP Reno decreases as expected. Compared\nto TCP Reno, Ack Regulator improves the throughput by\nup to 15%. However, TCP Sack performs very well and has\nalmost the same throughput as Ack Regulated flows as long\nas the rate variation is not extremely large.\nWe next considered the impact of a wireless link with\nvariable delay and variable rate. We found that this combination\nhad a large negative impact on the performance of\nboth TCP Reno and Sack (up to 22% and 10% improvement\nrespectively for Ack Regulated flows). We then considered\na specific wireless link standard called HDR which exhibits\nboth variable delay and variable rate. The results were as\nexpected, with Ack Regulator improving TCP Reno performance\nby 5% to 33% and TCP Sack by 0.5% to 24%. We\nthen evaluated the impact of multiple TCP flows sharing\nthe HDR link.\nThe gains of Ack Regulator over normal\nTCP flows were even greater in this case (with 32% to 36%\nimprovements) when the buffer size is set to one BDP.\nIn general, we showed that Ack Regulator delivers the\nsame high throughput irrespective of whether the TCP flow\nis Reno or Sack. We further showed that Ack Regulator delivers\nrobust throughput performance across different buffer\nsizes with the performance improvement of Ack Regulator\nincreasing as buffer size is reduced.\nWe only considered TCP flows towards the mobile host\n(for downloading-type applications) since links like HDR are\ndesigned for such applications. In the case of TCP flows in\nthe other direction (from the mobile host), Ack Regulator\ncan be implemented, if necessary, at the mobile host to optimize\nthe use of buffer on the wireless interface card.\nFinally, Ack Regulator cannot be used if the flow uses\nend-to-end IPSEC. This is also true for all performance enhancing\nproxies. However, we believe that proxies for performance\nimprovement are critical in current wireless networks.\nIn order to allow for these proxies without compromising\nsecurity, a split security model can be adopted where the\n81\nRNC, under the control of the network provider, becomes a\ntrusted element. In this model, a VPN approach to security\n(say, using IPSEC) is used on the wireline network between\nthe RNC and the correspondent host and 3G authentication\nand link-layer encryption mechanisms are used between the\nRNC and mobile host. This allows the RNC to support\nproxies such as the Ack Regulator to improve performance\nwithout compromising security.\nCONCLUSION\nIn this paper, we comprehensively evaluated the impact\nof variable rate and variable delay on TCP performance.\nWe first proposed a model to explain and predict TCP's\nthroughput over a link with variable rate and delay. 
Our\nmodel was able to accurately (better than 90%) predict\nthroughput of TCP flows even in the case of large delay\nand rate variation. Based on our TCP model, we proposed\na network based solution called Ack Regulator to mitigate\nthe effect of rate and delay variability. The performance of\nAck Regulator was evaluated extensively using both general\nmodels for rate and delay variability as well as a simplified\nmodel of a 3\nrd\nGeneration high speed wireless data air interface\n. Ack Regulator was able to improve the performance\nof TCP Reno and TCP Sack by up to 40% without significantly\nincreasingly the round trip time. We also showed\nthat Ack Regulator delivers the same high throughput irrespective\nof whether the TCP source is Reno or Sack. Furthermore\n, Ack Regulator also delivered robust throughput\nperformance across different buffer sizes. Given the difficulties\nin knowing in advance the achievable throughput and\ndelay (and hence the correct BDP value), a scheme, like\nAck Regulator, which works well for both large and small\nbuffers is essential. In summary, Ack Regulator is an effective\nnetwork-based solution that significantly improves TCP\nperformance over wireless links with variable rate and delay.\nAcknowledgements\nThe authors would like to thank Lijun Qian for providing\nthe fading code used in the HDR simulation, Clement Lee\nand Girish Chandranmenon for providing the 3G1xtrace\nand Sandy Thuel for comments on earlier versions of this\npaper.\n\nREFERENCES\n[1] E. Altman, K. Avrachenkov and C. Barakat, \"A\nStochastic Model of TCP/IP with Stationary\nRandom Loss,\" in Proceedings of SIGCOMM 2000.\n[2] F. Baccelli and D. Hong,\"TCP is Max-Plus Linear,\"\nin Proceedings of SIGCOMM 2000.\n[3] A. Bakre and B.R. Badrinath, \"Handoff and System\nSupport for Indirect TCP/IP,\" in proceedings of\nSecond UsenixSymposium on Mobile and\nLocation-Independent Computing, Apr 1995.\n[4] H. Balakrishnan et al., \"Improving TCP/IP\nPerformance over Wireless Networks,\" in proceedings\nof ACM Mobicom, Nov 1995.\n[5] H. Balakrishnan, V.N. Padmanabhan, R.H. Katz,\n\"The Effects of Asymmetry on TCP Performance,\"\nProc. ACM/IEEE Mobicom, Sep. 1997.\n[6] P. Bender et al., \"A Bandwidth Efficient High Speed\nWireless Data Service for Nomadic Users,\" in IEEE\nCommunications Magazine, Jul 2000.\n[7] P. Bhagwat at al, \"Enhancing Throughput over\nWireless LANs Using Channel State Dependent\nPacket Scheduling,\" in Proc. IEEE INFOCOM'96.\n[8] K. Brown and S.Singh, \"M-TCP: TCP for Mobile\nCellular Networks,\" ACM Computer\nCommunications Review Vol. 27(5), 1997.\n[9] A. Canton and T. Chahed, \"End-to-end reliability in\nUMTS: TCP over ARQ,\" in proceedings of\nGlobecomm 2001.\n[10] TIA/EIA/cdma2000, \"Mobile Station - Base Station\nCompatibility Standard for Dual-Mode Wideband\nSpread Spectrum Cellular Systems\", Washington:\nTelecommunication Industry Association, 1999.\n[11] G. Holland and N. H. Vaidya, \"Analysis of TCP\nPerformance over Mobile Ad Hoc Networks,\" in\nProceedings of ACM Mobicom'99.\n[12] H. Inamura et al., \"TCP over 2.5G and 3G Wireless\nNetworks,\" draft-ietf-pilc-2.5g3g-07, Aug. 2002.\n[13] F. Khafizov and M. Yavuz, \"TCP over CDMA2000\nnetworks,\" Internet Draft,\ndraft-khafizov-pilc-cdma2000-00.txt.\n[14] T. V. Lakshman and U. Madhow, \"The Performance\nof Networks with High Bandwidth-delay Products\nand Random Loss,\" in IEEE/ACM Transactions on\nNetworking, Jun. 1997.\n[15] R. 
Ludwig et al., \"Multi-layer Tracing of TCP over a\nReliable Wireless Link,\" in Proceedings of ACM\nSIGMETRICS 1999.\n[16] Reiner Ludwig and Randy H. Katz \"The Eifel\nAlgorithm: Making TCP Robust Against Spurious\nRetransmissions,\" in ACM Computer\nCommunications Review, Vol. 30, No. 1, 2000.\n[17] V. Misra, W. Gong and D. Towsley, \"Stochastic\nDifferential Equation Modeling and Analysis of TCP\nWindowsize Behavior,\" in Proceedings of\nPerformance'99.\n[18] P. Narvaez and K.-Y. Siu, \"New Techniques for\nRegulating TCP Flow over Heterogeneous\nNetworks,\" in LCN'98.\n[19] \"Modeling TCP Throughput: a Simple Model and its\nEmpirical Validation,\" in Proceedings of SIGCOMM\n1998.\n[20] S. Paul et al., \"An Asymmetric Link-Layer Protocol\nfor Digital Cellular Communications,\" in proceedings\nof INFOCOM 1995.\n[21] Third Generation Partnership Project, \"RLC\nProtocol Specification (3G TS 25.322:)\", 1999.\n[22] TIA/EIA/IS-707-A-2.10, \"Data Service Options for\nSpread Spectrum Systems: Radio Link Protocol\nType 3\", January 2000.\n[23] S. Karandikar et al., \"TCP rate control,\" in ACM\nComputer Communication Review, Jan 2000.\n[24] 3G Partnership Project, Release 99.\n[25] \"Microwave mobile communications,\" edited by W.\nC. Jakes, Wiley, 1974.\n[26] \"Delays in the HDR System,\" QUALCOMM, Jun.\n2000.\n[27] R. Jain, \"The Art of Computer Systems Performance\nAnalysis,\" Wiley, 1991.\n[28] J. Padhye and S. Floyd, \"On Inferring TCP\nBehavior,\" in Proceedings of SIGCOMM'2001.\n82\n", "keywords": "algorithm;architecture;TCP;wireless communication;performance evaluation;3G wireless links;prediction model;design;Link and Rate Variation;3G Wireless;simulation result;congestion solution;Network"} {"name": "188", "title": "The Forest and the Trees: Using Oracle and SQL Server Together to Teach ANSI-Standard SQL", "abstract": "Students in a sophomore-level database fundamentals course were taught SQL and database concepts using both Oracle and SQL Server. Previous offerings of the class had used one or the other database. Classroom experiences suggest that students were able to handle learning SQL in the dual environment, and, in fact, benefited from this approach by better understanding ANSI-standard versus database-specific SQL and implementation differences in the two database systems.", "fulltext": "INTRODUCTION\nA problem arises in many technology classes. The instructor\nwishes to teach principles and concepts of a technology. To give\nstudents hands-on experience putting those theories to work, a\nspecific product that implements that technology is selected for a\nlab component. Suddenly the students are learning more about the\nspecific product than they are about the technology concepts.\nThey may or may not realize what is specific to that product and\nwhat is general to the technology. Students may even start\nreferring to the course as a VB course, a PHP course, or an Oracle\ncourse when what you wanted to teach was programming, web\nscripting, or database principles.\nThis paper presents the experiences from a database fundamentals\ncourse that used both Oracle and SQL Server so that students\nwould better understand ANSI-standard SQL. Though each\ndatabase is ANSI SQL compliant, there are definite differences in\nimplementation (Gorman, 2001; Gulutzan, 2002). By learning\neach implementation and how each departs from ANSI-standard\nSQL, students can be better prepared to work with any database\nand better understand general concepts of databases and SQL. 
The\npaper discusses the observed results from this approach and how\nwell the approach met learning objectives.\n\nCOURSE CONTEXT AND LEARNING OBJECTIVES\nCPT 272, Database Fundamentals, is a sophomore-level database\nprogramming and design class taught primarily to computer\ntechnology majors in their fourth semester. Students will have\npreviously taken a freshman-level course that introduces them to\ndatabases as a tool for learning general information system\ndevelopment terms and concepts. The freshman-level course uses\nMicrosoft Access because it is easy to use for quickly developing\na small personal information system. That course also introduces\nboth SQL and Query By Example methods for querying a\ndatabase as well basic database design concepts, which are\napplied for simple data models.\nStudents then move into two programming courses, the second of\nwhich uses single-table SQL statements for providing data to\n(formerly) Visual Basic or (currently) web-programming\napplications. So by the time the students take the Database\nFundamentals course they have a concept of what a database is\nand how it is used as the back-end for programming. The\nDatabase Fundamentals course is the course where students learn\nSQL in depth, database concepts, and basic database design. It\ndoes not teach stored procedure programming, triggers, or\nenterprise or distributed database design, which are covered in\nmore advanced courses.\nThe learning objectives for the Database Fundamentals course\nare:\n\nTo understand the fundamentals of a relational database.\n\nTo understand the fundamentals of client-server and multi-tiered\napplications.\n\nTo understand the principles and characteristics of good\nrelational database design.\n\nTo design entity relationship models for a business problem\ndomain verified by the rules of normalization (through third\nnormalized form).\n\nTo build simple to moderately complex data models.\n\nTo write simple to moderately complex SQL to query a\nmultiple-table database.\n\nTo write data manipulation language (DML) SQL to insert\nrows, delete rows, and update rows.\n\nTo understand the concept of database transactions and\ndemonstrate the proper use of commits and rollbacks.\n\nTo write data definition language (DDL) SQL to create and\ndrop tables, indexes, and constraints.\n\nTo understand and be able to implement the fundamentals of\nsecurity and permissions in a database.\n\nTo explain the benefits of using views and write SQL\nstatements to create views.\n\nTo create and use SQL scripts and use SQL to build scripts.\n\nTo gain a working knowledge of query optimization,\nperformance tuning, and database administration.\n\nTo apply team skills to build a client-server database\napplication.\nCONSIDERATIONS FOR CROSS-ENGINE SQL EDUCATION\nCPT 272, Database Fundamentals, is taught in a multi-campus\nuniversity. It was initially taught on the main campus using\nOracle. When the course was rolled out to the regional campuses,\nSQL Server was first used because of administration\nconsiderations involved with Oracle. Now WAN connections\nhave been established that allow the use of either database engine\nor both.\nDuring the spring 2003 semester one regional campus\nexperimented with the use of both databases. 
The reasons for\ndoing this were:\n\nTo accomplish the course learning objectives in SQL\nnecessitates going beyond ANSI-standard SQL into\ndatabase-specific functions, sub-queries, and other aspects of\nSQL that are implemented differently in different databases.\nIf the students learn only Oracle or SQL Server (or any other\ndatabase) they are likely to confuse ANSI-standard SQL with\nthe database-specific implementation, which can hinder them\nwhen they enter the job market. By using both databases, it\nwas hoped that students would learn and understand the\ndifferences among ANSI-standard SQL, Oracle SQL, SQL\nServer T-SQL.\n\nSome design considerations, such as Identities/Sequences\nand datatypes, are implemented differently in different\ndatabases. Again, students will enter the job market with a\nstronger understanding if they understand the difference\nbetween the concept and how it is implemented.\n\nNeither Oracle nor SQL Server commands a majority of\nmarket share. However, the two together make up about fifty\npercent of the current market share, positioning students well\nfor the job market (Wong, 2002).\n\nStudying two databases together opens the door for\ndiscussing the pros and cons of these and other databases,\nincluding DB2, MySQL, and Sybase.\n\nFinally, students often want to install a database engine on\ntheir personal computer and work on lab assignments at\nhome. Both Oracle and SQL Server have licensing and\nhardware requirement issues that on any given computer may\npreclude one or the other. Using both allowed most students\nto do at least some of their work at home.\nTo implement a cross-engine approach, an SQL text would be\nneeded that taught both databases. A special textbook was created\nby two of the instructors, a draft of which was used in CD format.\nIn addition to covering SQL essentials in Oracle and SQL Server,\nit also covered Microsoft Access and MySQL in hopes that it\nmight also be used in the freshman course and be a good reference\nfor real world web programming. The text has since been picked\nup by a publisher.\n\nCOURSE DESIGN AND ASSIGNMENTS\nCPT 272, Database Fundamentals, consists of a both a lecture and\na lab component. The lectures cover fundamental database\nconcepts, including SQL concepts, query optimization, and\ndatabase design and normalization. The lab component focuses on\nmastery of SQL. Table 1 shows the labs that were assigned and\nthe database used for each.\nTable 1. Course Lab Schedule\nLab\nLab DBMS\nSingle Table Select\nOracle\nAggregates & Sub Queries\nSQL Server\nJoining Tables\nBoth Oracle & SQL Server\nDBMS Specific Functions\nBoth Oracle & SQL Server\nAdvanced Queries\nOracle\nData Manipulation\nStudent Choice\nDatabase Definition\nStudent Choice\nPrivileges Student\nChoice\n\nThe first three labs concentrated as much as possible on ANSI-standard\nSQL. Of course, implementation of sub queries and joins\nis in some cases different between Oracle and SQL Server. These\ndifferences were taught and discussed. However, those labs\navoided all DBMS specific functionality, such as concatenation,\ndate manipulation, and datatype conversion. These things were\ncovered in the DBMS Specific Functions lab. This approach\nlimited some of what could be done in the first labs but provided a\nsolid distinction between what was ANSI-standard SQL and what\nwas database-specific. 
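To make the distinction concrete, the strings below illustrate the kind of vendor-specific functions covered in the DBMS Specific Functions lab. This is a hedged sketch: the table and column names are hypothetical, and syntax details vary by product version.

```python
# ANSI-standard vs. vendor-specific SQL for three of the functions mentioned
# above; "employees" and its columns are hypothetical example names.
dialect_examples = {
    "concatenation": {
        "ansi/oracle": "SELECT first_name || ' ' || last_name FROM employees",
        "sql_server":  "SELECT first_name + ' ' + last_name FROM employees",
    },
    "current date": {
        "oracle":      "SELECT SYSDATE FROM dual",
        "sql_server":  "SELECT GETDATE()",
    },
    "date-to-string conversion": {
        "oracle":      "SELECT TO_CHAR(hire_date, 'MM/DD/YYYY') FROM employees",
        "sql_server":  "SELECT CONVERT(VARCHAR(10), hire_date, 101) FROM employees",
    },
}

for concept, variants in dialect_examples.items():
    print(concept)
    for engine, statement in variants.items():
        print(f"  {engine:<12} {statement}")
```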
Later labs used these database-specific\nfunctions in various lab exercises so that these functions, which\nare crucial in the real world, were mastered.\nIn addition to the use of both databases in labs, lecture material\nconstantly referred to how various design, administration, and\noptimization concepts would be applied in both databases and in\nother databases, such as MySQL. In addition, exam questions\nasked students to compare the capabilities of each database. Other\n235\n\n\nassignments led students into an exploration of the pros and cons\nof various database engines. Two of these were for students to\nwrite short papers on the following:\n1. Research one of the following databases: DB2, Sybase,\nInformix, MySQL, PostgreSQL, or SQLWindows Solo. Write\na 2-3 page paper comparing it to Oracle and SQL Server.\nInclude your recommendations regarding the circumstances in\nwhich this database should be used.\n2. Research what people on the Internet say comparing SQL\nServer and Oracle. Based on their perspectives and your own\nexperience with these two databases, write 1-2 pages\ncomparing them.\nDISCUSSION\nWith all the course objectives listed above, the Database\nFundamentals course makes for a full semester. Using two\ndatabases instead of one definitely adds to the challenge of fitting\nin all the material. This first attempt led to the realization of\nseveral \"kinks\" that would need to be worked out before it could\nbe attempted again. These are listed below.\n\nIn most cases any given SQL lecture had barely enough time\nfor covering the SQL concepts and implementation in both\ndatabases, forcing the instructor to scrimp on in-class\nexamples. This meant that students had a more shaky\nfoundation going into the lab assignment. However, it should\nbe noted that students did about as well on lab assignments\nas prior classes. One possible solution would be to focus\neach SQL lecture on only one database, but to revisit that\nconcept in the following lecture when the other database is\nused.\n\nIn past semesters using only one database, it was possible to\ninclude a discussion of datatypes along with the DDL\nlecture. Using two databases the DDL lecture had to be\nexpanded, which did not leave enough time for thorough\ncoverage of datatypes in both databases. A solution would be\nto move the datatype material to one of the design lectures,\nemphasizing datatyping as a design step.\n\nBy switching between databases, database-specific syntax\nnever clicked in students' minds. They seemed less able than\nprior classes to apply concatenation and datatype conversions\nwithout looking up the syntax in reference material. While\nthis is regrettable, it may be a worthwhile trade-off for\ngaining an understanding of what is ANSI-standard vs.\ndatabase-specific SQL. It is likely that when students move\nto the real world and settle in with one database, they will\nquickly be able to internalize the syntax.\n\nSome students expressed that they would have preferred\ngoing through all labs with one database and then looking at\ndifferences with a second database. However, other students\nliked working with both databases side by side. This suggests\nthat the alternating structure may need to be tweaked. But\nwhatever one does, it will probably not work with every\nlearning style.\n\nAfter using both databases, almost all students, when given a\nchoice, used SQL Server. This was solely because of a\nperceived superiority in the user interface of Query Analyzer\nversus SQL Plus. 
In itself this is not a bad thing because one\nof the goals was to understand the differences in the two\ndatabases. But in future semesters the instructor may want to\nforce a choice more often to insure that students will be\nexposed more equally to both.\nThese \"kinks\" aside, both instructor and students considered the\nexperiment a success. Their papers indicated a mature\nappreciation of the differences between Oracle and SQL Server\nand how they compared to other databases. Their comments in\nclass indicated that they understood what was ANSI-standard\nSQL and what was not. In the SQL lab exam these students\nperformed as well as previous classes of students, indicating that\nthe cross-engine approach did not hinder their learning. Students\nalso indicated enthusiasm for being able to list experience in both\ndatabases on their resume. Compared to prior semesters, students\nleft the course more comfortable and able to use either of these\nmajor database engines to accomplish the goals of a given\ninformation system.\n\nREFERENCES\n[1]\nGorman, Michael M. (2001). Is SQL a real standard\nanymore? The Data Administration Newsletter (TDAN.com)\nhttp://www.tdan.com/i016hy01.htm (17 Apr. 2003).\n[2]\nGulutzan, Peter. (2002). Standard SQL.\nhttp://www.dbazine.com/gulutzan3.html (17 Apr. 2003).\n[3]\nWong, Wylie. (2002). IBM passes Oracle in database\nmarket.\nhttp://techupdate.zdnet.com/techupdate/stories/main/0,14179\n,2864350,00.html (19 June 2003).\n\n\n236\n", "keywords": "SQL;SQL Server;training vs. education;database systems;Oracle;database;educational fundamentals;student feedbacks;ANSI-Standard SQL;teaching in IT;dual environment;SQL Language;course design;practical results"} {"name": "189", "title": "The Maximum Entropy Method for Analyzing Retrieval Measures", "abstract": "We present a model, based on the maximum entropy method, for analyzing various measures of retrieval performance such as average precision, R-precision, and precision-at-cutoffs. Our methodology treats the value of such a measure as a constraint on the distribution of relevant documents in an unknown list, and the maximum entropy distribution can be determined subject to these constraints. For good measures of overall performance (such as average precision), the resulting maximum entropy distributions are highly correlated with actual distributions of relevant documents in lists as demonstrated through TREC data; for poor measures of overall performance, the correlation is weaker. As such, the maximum entropy method can be used to quantify the overall quality of a retrieval measure. Furthermore, for good measures of overall performance (such as average precision), we show that the corresponding maximum entropy distributions can be used to accurately infer precision-recall curves and the values of other measures of performance, and we demonstrate that the quality of these inferences far exceeds that predicted by simple retrieval measure correlation, as demonstrated through TREC data.", "fulltext": "INTRODUCTION\nThe efficacy of retrieval systems is evaluated by a number\nof performance measures such as average precision, R-precision\n, and precisions at standard cutoffs. Broadly speaking\n, these measures can be classified as either system-oriented\nmeasures of overall performance (e.g., average precision and\nR-precision) or user-oriented measures of specific performance\n(e.g., precision-at-cutoff 10) [3, 12, 5]. 
Different measures\nevaluate different aspects of retrieval performance, and\nmuch thought and analysis has been devoted to analyzing\nthe quality of various different performance measures [10, 2,\n17].\nWe consider the problem of analyzing the quality of various\nmeasures of retrieval performance and propose a model\nbased on the maximum entropy method for evaluating the\nquality of a performance measure. While measures such as\naverage precision at relevant documents, R-precision, and\n11pt average precision are known to be good measures of\noverall performance, other measures such as precisions at\nspecific cutoffs are not. Our goal in this work is to develop\na model within which one can numerically assess the overall\nquality of a given measure based on the reduction in uncertainty\nof a system's performance one gains by learning\nthe value of the measure. As such, our evaluation model\nis primarily concerned with assessing the relative merits of\nsystem-oriented measures, but it can be applied to other\nclasses of measures as well.\nWe begin with the premise that the quality of a list of\ndocuments retrieved in response to a given query is strictly\na function of the sequence of relevant and non-relevant documents\nretrieved within that list (as well as R, the total number\nof relevant documents for the given query). Most standard\nmeasures of retrieval performance satisfy this premise.\nOur thesis is then that given the assessed value of a \"good\"\noverall measure of performance, one's uncertainty about the\nsequence of relevant and non-relevant documents in an unknown\nlist should be greatly reduced. Suppose, for example\n, one were told that a list of 1,000 documents retrieved in\nresponse to a query with 200 total relevant documents contained\n100 relevant documents. What could one reasonably\ninfer about the sequence of relevant and non-relevant documents\nin the unknown list? From this information alone,\none could only reasonably conclude that the likelihood of\nseeing a relevant document at any rank level is uniformly\n1/10. Now suppose that one were additionally told that the\naverage precision of the list was 0.4 (the maximum possi-27\nble in this circumstance is 0.5). Now one could reasonably\nconclude that the likelihood of seeing relevant documents at\nlow numerical ranks is much greater than the likelihood of\nseeing relevant documents at high numerical ranks. One's\nuncertainty about the sequence of relevant and non-relevant\ndocuments in the unknown list is greatly reduced as a consequence\nof the strong constraint that such an average precision\nplaces on lists in this situation. Thus, average precision\nis highly informative. On the other hand, suppose that one\nwere instead told that the precision of the documents in\nthe rank range [100, 110] was 0.4. One's uncertainty about\nthe sequence of relevant and non-relevant documents in the\nunknown list is not appreciably reduced as a consequence\nof the relatively weak constraint that such a measurement\nplaces on lists. Thus, precision in the range [100, 110] is not\na highly informative measure. In what follows, we develop\na model within which one can quantify how informative a\nmeasure is.\nWe consider two questions: (1) What can reasonably be\ninferred about an unknown list given the value of a measurement\ntaken over this list? (2) How accurately do these\ninferences reflect reality? 
We argue that the former question\nis properly answered by considering the maximum entropy\ndistributions subject to the measured value as a constraint,\nand we demonstrate that such maximum entropy models\ncorresponding to good overall measures of performance such\nas average precision yield accurate inferences about underlying\nlists seen in practice (as demonstrated through TREC\ndata).\nMore specifically, we develop a framework based on the\nmaximum entropy method which allows one to infer the\nmost \"reasonable\" model for the sequence of relevant and\nnon-relevant documents in a list given a measured constraint.\nFrom this model, we show how one can infer the most \"reasonable\"\nmodel for the unknown list's entire precision-recall\ncurve. We demonstrate through the use of TREC data that\nfor \"good\" overall measures of performance (such as average\nprecision), these inferred precision-recall curves are accurate\napproximations of actual precision-recall curves; however,\nfor \"poor\" overall measures of performance, these inferred\nprecision-recall curves do not accurately approximate actual\nprecision-recall curves. Thus, maximum entropy modeling\ncan be used to quantify the quality of a measure of overall\nperformance.\nWe further demonstrate through the use of TREC data\nthat the maximum entropy models corresponding to \"good\"\nmeasures of overall performance can be used to make accurate\npredictions of other measurements. While it is well\nknown that \"good\" overall measures such as average precision\nare well correlated with other measures of performance,\nand thus average precision could be used to reasonably predict\nother measures of performance, we demonstrate that\nthe maximum entropy models corresponding to average precision\nyield inferences of other measures even more highly\ncorrelated with their actual values, thus validating both average\nprecision and maximum entropy modeling.\nIn the sections that follow, we first describe the maximum\nentropy method and discuss how maximum entropy\nmodeling can be used to analyze measures of retrieval performance\n.\nWe then describe the results of applying our\nmethodology using TREC data, and we conclude with a\nsummary and future work.\nTHE MAXIMUM ENTROPY METHOD\nThe concept of entropy as a measure of information was\nfirst introduced by Shannon [20], and the Principle of Maximum\nEntropy was introduced by Jaynes [7, 8, 9]. Since its\nintroduction, the Maximum Entropy Method has been applied\nin many areas of science and technology [21] including\nnatural language processing [1], ambiguity resolution [18],\ntext classification [14], machine learning [15, 16], and information\nretrieval [6, 11], to name but a few examples. In\nwhat follows, we introduce the maximum entropy method\nthrough a classic example, and we then describe how the\nmaximum entropy method can be used to evaluate measures\nof retrieval performance.\nSuppose you are given an unknown and possibly biased\nsix-sided die and were asked the probability of obtaining any\nparticular die face in a given roll. What would your answer\nbe? This problem is under-constrained and the most seemingly\n\"reasonable\" answer is a uniform distribution over all\nfaces. Suppose now you are also given the information that\nthe average die roll is 3.5. The most seemingly \"reasonable\"\nanswer is still a uniform distribution. What if you are told\nthat the average die roll is 4.5? 
There are many distributions\nover the faces such that the average die roll is 4.5; how\ncan you find the most seemingly \"reasonable\" distribution?\nFinally, what would your answer be if you were told that\nthe average die roll is 5.5? Clearly, the belief in getting a\n6 increases as the expected value of the die rolls increases.\nBut there are many distributions satisfying this constraint;\nwhich distribution would you choose?\nThe \"Maximum Entropy Method\" (MEM) dictates the\nmost \"reasonable\" distribution satisfying the given constraints.\nThe \"Principle of Maximal Ignorance\" forms the intuition\nbehind the MEM; it states that one should choose the distribution\nwhich is least predictable (most random) subject\nto the given constraints. Jaynes and others have derived numerous\nentropy concentration theorems which show that the\nvast majority of all empirical frequency distributions (e.g.,\nthose corresponding to sequences of die rolls) satisfying the\ngiven constraints have associated empirical probabilities and\nentropies very close to those probabilities satisfying the constraints\nwhose associated entropy is maximal [7].\nThus, the MEM dictates the most random distribution\nsatisfying the given constraints, using the entropy of the\nprobability distribution as a measure of randomness. The\nentropy of a probability distribution p = {p\n1\n, p\n2\n, . . . , p\nn\n} is\na measure of the uncertainty (randomness) inherent in the\ndistribution and is defined as follows\nH(p) = n\nX\ni=1\np\ni\nlg p\ni\n.\nThus, maximum entropy distributions are probability distributions\nmaking no additional assumptions apart from the\ngiven constraints.\nIn addition to its mathematical justification, the MEM\ntends to produce solutions one often sees in nature. For\nexample, it is known that given the temperature of a gas, the\nactual distribution of velocities in the gas is the maximum\nentropy distribution under the temperature constraint.\nWe can apply the MEM to our die problem as follows.\nLet the probability distribution over the die faces be p =\n{p\n1\n, . . . , p\n6\n}. Mathematically, finding the maximum entropy\ndistribution over die faces such that the expected die roll is\n28\n1\n2\n3\n4\n5\n6\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\ndie face\nprobability\n1\n2\n3\n4\n5\n6\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\ndie face\nprobability\n1\n2\n3\n4\n5\n6\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\ndie face\nprobability\nFigure 1: Maximum entropy die distributions with mean die rolls of 3.5, 4.5, and 5.5, respectively.\nd corresponds to the following optimization problem:\nMaximize: H(p)\nSubject to:\n1.\n6\nP\ni=1\np\ni\n= 1\n2.\n6\nP\ni=1\ni p\ni\n= d\nThe first constraint ensures that the solution forms a distribution\nover the die faces, and the second constraint ensures\nthat this distribution has the appropriate expectation. This\nis a constrained optimization problem which can be solved\nusing the method of Lagrange multipliers. Figure 1 shows\nthree different maximum entropy distributions over the die\nfaces such that the expected die roll is 3.5, 4.5, and 5.5,\nrespectively.\n2.1\nApplication of the Maximum Entropy\nMethod to Analyzing Retrieval Measures\nSuppose that you were given a list of length N corresponding\nto the output of a retrieval system for a given query,\nand suppose that you were asked to predict the probability\nof seeing any one of the 2\nN\npossible patterns of relevant\ndocuments in that list. 
In the absence of any information\nabout the query, any performance information for the system\n, or any a priori modeling of the behavior of retrieval\nsystems, the most \"reasonable\" answer you could give would\nbe that all lists of length N are equally likely. Suppose now\nthat you are also given the information that the expected\nnumber of relevant documents over all lists of length N is\nR\nret\n. Your \"reasonable\" answer might then be a uniform\ndistribution over all `\nN\nR\nret\ndifferent possible lists with R\nret\nrelevant documents. But what if apart from the constraint\non the number of relevant documents retrieved, you were\nalso given the constraint that the expected value of average\nprecision is ap? If the average precision value is high,\nthen of all the `\nN\nR\nret\nlists with R\nret\nrelevant documents,\nthe lists in which the relevant documents are retrieved at\nlow numerical ranks should have higher probabilities. But\nhow can you determine the most \"reasonable\" such distribution\n? The maximum entropy method essentially dictates the\nmost reasonable distribution as a solution to the following\nconstrained optimization problem.\nLet p(r\n1\n, ..., r\nN\n) be a probability distribution over the\nrelevances associated with document lists of length N , let\nrel(r\n1\n, ..., r\nN\n) be the number of relevant documents in a list,\nand let ap(r\n1\n, ..., r\nN\n) be the average precision of a list. Then\nthe maximum entropy method can be mathematically formulated\nas follows:\nMaximize: H(p)\nSubject to:\n1.\nP\nr\n1\n,...,r\nN\np(r\n1\n, . . . , r\nN\n) = 1\n2.\nP\nr\n1\n,...,r\nN\nap(r\n1\n, . . . , r\nN\n) p(r\n1\n, . . . , r\nN\n) = ap\n3.\nP\nr\n1\n,...,r\nN\nrel(r\n1\n, . . . , r\nN\n) p(r\n1\n, . . . , r\nN\n) = R\nret\nNote that the solution to this optimization problem is a\ndistribution over possible lists, where this distribution effectively\ngives one's a posteriori belief in any list given the\nmeasured constraint.\nThe previous problem can be formulated in a slightly different\nmanner yielding another interpretation of the problem\nand a mathematical solution. Suppose that you were given\na list of length N corresponding to output of a retrieval system\nfor a given a query, and suppose that you were asked\nto predict the probability of seeing a relevant document at\nsome rank. Since there are no constraints, all possible lists\nof length N are equally likely, and hence the probability of\nseeing a relevant document at any rank is 1/2. Suppose now\nthat you are also given the information that the expected\nnumber of relevant documents over all lists of length N is\nR\nret\n. The most natural answer would be a R\nret\n/N uniform\nprobability for each rank.\nFinally, suppose that you are\ngiven the additional constraint that the expected average\nprecision is ap. Under the assumption that our distribution\nover lists is a product distribution (this is effectively\na fairly standard independence assumption), we may solve\nthis problem as follows. Let\np(r\n1\n, . . . , r\nN\n) = p(r\n1\n) p(r\n2\n) p(r\nN\n)\nwhere p(r\ni\n) is the probability that the document at rank i is\nrelevant. We can then solve the problem of calculating the\nprobability of seeing a relevant document at any rank using\nthe MEM. 
For notational convenience, we will refer to this\nproduct distribution as the probability-at-rank distribution\nand the probability of seeing a relevant document at rank i,\np(r\ni\n), as p\ni\n.\nStandard results from information theory [4] dictate that\nif p(r\n1\n, . . . , r\nN\n) is a product distribution, then\nH(p(r\n1\n, . . . , r\nN\n)) =\nN\nX\ni=1\nH(p\ni\n)\nwhere H(p\ni\n) is the binary entropy\nH(p\ni\n) = -p\ni\nlg p\ni\n- (1 - p\ni\n) lg(1 - p\ni\n).\nFurthermore, it can be shown that given a product distribution\np(r\n1\n, . . . , r\nN\n) over the relevances associated with docu-29\nMaximize: P\nN\ni=1\nH(p\ni\n)\nSubject to:\n1.\n1\nR\nN\nP\ni=1\n`\np\ni\ni\n`1 +\ni-1\nP\nj=1\np\nj\n= ap\n2.\nN\nP\ni=1\np\ni\n= R\nret\nFigure 2: Maximum entropy\nsetup for average precision.\nMaximize: P\nN\ni=1\nH(p\ni\n)\nSubject to:\n1.\n1\nR\nR\nP\ni=1\np\ni\n= rp\n2.\nN\nP\ni=1\np\ni\n= R\nret\nFigure 3: Maximum entropy\nsetup for R-precision.\nMaximize: P\nN\ni=1\nH(p\ni\n)\nSubject to:\n1.\n1\nk\nk\nP\ni=1\np\ni\n= PC (k)\n2.\nN\nP\ni=1\np\ni\n= R\nret\nFigure 4: Maximum entropy\nsetup for precision-at-cutoff.\nment lists of length N , the expected value of average precision\nis\n1\nR\nN\nX\ni=1\np\ni\ni\n\n1 +\ni-1\nX\nj=1\np\nj\n!!\n.\n(1)\n(The derivation of this formula is omitted due to space constraints\n.) Furthermore, since p\ni\nis the probability of seeing\na relevant document at rank i, the expected number of relevant\ndocuments retrieved until rank N is P\nN\ni=1\np\ni\n.\nNow, if one were given some list of length N , one were told\nthat the expected number of relevant documents is R\nret\n, one\nwere further informed that the expected average precision is\nap, and one were asked the probability of seeing a relevant\ndocument at any rank under the independence assumption\nstated, one could apply the MEM as shown in Figure 2.\nNote that one now solves for the maximum entropy product\ndistribution over lists, which is equivalent to a maximum entropy\nprobability-at-rank distribution. Applying the same\nideas to R-precision and precision-at-cutoff k, one obtains\nanalogous formulations as shown in Figures 3 and 4, respectively\n.\nAll of these formulations are constrained optimization problems\n, and the method of Lagrange multipliers can be used\nto find an analytical solution, in principle. When analytical\nsolutions cannot be determined, numerical optimization\nmethods can be employed. The maximum entropy distributions\nfor R-precision and precision-at-cutoff k can be obtained\nanalytically using the method of Lagrange multipliers\n. However, numerical optimization methods are required\nto determine the maximum entropy distribution for average\nprecision. In Figure 5, examples of maximum entropy\nprobability-at-rank curves corresponding to the measures\naverage precision, R-precision, and precision-at-cutoff 10 for\na run in TREC8 can be seen. Note that the probability-at\n-rank curves are step functions for the precision-at-cutoff\nand R-precision constraints; this is as expected since, for\nexample, given a precision-at-cutoff 10 of 0.3, one can only\nreasonably conclude a uniform probability of 0.3 for seeing\na relevant document at any of the first 10 ranks.\nNote,\nhowever, that the probability-at-rank curve corresponding\nto average precision is smooth and strictly decreasing.\nUsing the maximum entropy probability-at-rank distribution\nof a list, we can infer the maximum entropy precision-recall\ncurve for the list. 
Given a probability-at-rank distribution\np, the number of relevant documents retrieved until\nrank i is REL(i) = P\ni\nj=1\np\nj\n. Therefore, the precision\nand recall at rank i are PC (i) = REL(i)/i and REC (i) =\nREL(i)/R. Hence, using the maximum entropy probability-0\n100\n200\n300\n400\n500\n600\n700\n800\n900\n1000\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nrank\nprobability\nTREC8 System fub99a Query 435 AP = 0.1433\nap maxent dist.\nrp maxent dist.\npc-10 maxent dist.\nFigure 5: Probability-at-rank distributions.\nat-rank distribution for each measure, we can generate the\nmaximum entropy precision-recall curve of the list. If a measure\nprovides a great deal of information about the underlying\nlist, then the maximum entropy precision-recall curve\nshould approximate the precision-recall curve of the actual\nlist.\nHowever, if a measure is not particularly informative\n, then the maximum entropy precision-recall curve need\nnot approximate the actual precision-recall curve. Therefore\n, noting how closely the maximum entropy precision-recall\ncurve corresponding to a measure approximates the\nprecision-recall curve of the actual list, we can calculate how\nmuch information a measure contains about the actual list,\nand hence how \"informative\" a measure is. Thus, we have a\nmethodology for evaluating the evaluation measures themselves\n.\nUsing the maximum entropy precision-recall curve of a\nmeasure, we can also predict the values of other measures.\nFor example, using the maximum entropy precision-recall\ncurve corresponding to average precision, we can predict\nthe precision-at-cutoff 10. For highly informative measures,\nthese predictions should be very close to reality. Hence, we\nhave a second way of evaluating evaluation measures.\nEXPERIMENTAL RESULTS\nWe tested the performance of the evaluation measures average\nprecision, R-precision, and precision-at-cutoffs 5, 10,\n15, 20, 30, 100, 200, 500 and 1000 using data from TRECs\n3, 5, 6, 7, 8 and 9. For any TREC and any query, we chose\nthose systems whose number of relevant documents retrieved\nwas at least 10 in order to have a sufficient number of points\non the precision-recall curve. We then calculated the maximum\nentropy precision-recall curve subject to the given measured\nconstraint, as described above. The maximum entropy\nprecision-recall curve corresponding to an average precision\n30\nconstraint cannot be determined analytically; therefore, we\nused numerical optimization\n1\nto find the maximum entropy\ndistribution corresponding to average precision.\nWe shall refer to the execution of a retrieval system on\na particular query as a run. Figure 6 shows examples of\nmaximum entropy precision-recall curves corresponding to\naverage precision, R-precision, and precision-at-cutoff 10 for\nthree different runs, together with the actual precision-recall\ncurves. We focused on these three measures since they are\nperhaps the most commonly cited measures in IR. We also\nprovide results for precision-at-cutoff 100 in later plots and\ndetailed results for all measures in a later table. As can be\nseen in Figure 6, using average precision as a constraint, one\ncan generate the actual precision-recall curve of a run with\nrelatively high accuracy.\nIn order to quantify how good an evaluation measure is\nin generating the precision-recall curve of an actual list,\nwe consider two different error measures: the root mean\nsquared error (RMS) and the mean absolute error (MAE).\nLet {\n1\n,\n2\n, . . . 
,\nR\nret\n} be the precisions at the recall levels\n{1/R, 2/R, . . . , R\nret\n/R} where R\nret\nis the number of relevant\ndocuments retrieved by a system and R is the number of\ndocuments relevant to the query, and let {m\n1\n, m\n2\n, . . . , m\nR\nret\n}\nbe the estimated precisions at the corresponding recall levels\nfor a maximum entropy distribution corresponding to a\nmeasure. Then the MAE and RMS errors are calculated as\nfollows.\nRMS\n=\nv\nu\nu\nt\n1\nR\nret\nR\nret\nX\ni=1\n(\ni\n- m\ni\n)\n2\nMAE\n=\n1\nR\nret\nR\nret\nX\ni=1\n|\ni\n- m\ni\n|\nThe points after recall R\nret\n/R on the precision-recall curve\nare not considered in the evaluation of the MAE and RMS\nerrors since, by TREC convention, the precisions at these\nrecall levels are assumed to be 0.\nIn order to evaluate how good a measure is at inferring\nactual precision-recall curves, we calculated the MAE and\nRMS errors of the maximum entropy precision-recall curves\ncorresponding to the measures in question, averaged over all\nruns for each TREC. Figure 7 shows how the MAE and RMS\nerrors for average precision, R-precision, precision-at-cutoff\n10, and precision-at-cutoff 100 compare with each other for\neach TREC. The MAE and RMS errors follow the same\npattern over all TRECs. Both errors are consistently and\nsignificantly lower for average precision than for the other\nmeasures in question, while the errors for R-precision are\nconsistently lower than for precision-at-cutoffs 10 and 100.\nTable 1 shows the actual values of the RMS errors for all\nmeasures over all TRECs. In our experiments, MAE and\nRMS errors follow a very similar pattern, and we therefore\nomit MAE results due to space considerations. From this\ntable, it can be seen that average precision has consistently\nlower RMS errors when compared to the other measures.\nThe penultimate column of the table shows the average RMS\nerrors per measure averaged over all TRECs. On average,\nR-precision has the second lowest RMS error after average\nprecision, and precision-at-cutoff 30 is the third best measure\nin terms of RMS error. The last column of the table\n1\nWe used the TOMLAB Optimization Environment for\nMatlab.\nshows the percent increase in the average RMS error of a\nmeasure when compared to the RMS error of average precision\n. As can be seen, the average RMS errors for the other\nmeasures are substantially greater than the average RMS\nerror for average precision.\nWe now consider a second method for evaluating how informative\na measure is. A highly informative measure should\nproperly reduce one's uncertainty about the distribution of\nrelevant and non-relevant documents in a list; thus, in our\nmaximum entropy formulation, the probability-at-rank distribution\nshould closely correspond to the pattern of relevant\nand non-relevant documents present in the list. One\nshould then be able to accurately predict the values of other\nmeasures from this probability-at-rank distribution.\nGiven a probability-at-rank distribution p\n1\n, p\n2\n, . . . , p\nN\n, we\ncan predict average precision, R-precision and precision-at-cutoff\nk values as follows:\nap = 1\nR\nN\nX\ni=1\np\ni\ni\n\n1 +\ni-1\nX\nj=1\np\nj\n!!\nrp = 1\nR\nR\nX\ni=1\np\ni\nPC (k) = 1\nk\nk\nX\ni=1\np\ni\nThe plots in the top row of Figures 8 and 9 show how average\nprecision is actually correlated with R-precision, precision-at\n-cutoff 10, and precision-at-cutoff 100 for TRECs 6 and 8,\nrespectively. 
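The error and prediction computations described above can be sketched in a few lines of Python (assuming numpy; the function names are illustrative). The sketch reads the inferred precision-recall points off a probability-at-rank distribution via REL(i), computes MAE and RMS against the actual precisions at the recall levels 1/R, ..., R_ret/R, and predicts ap, rp and PC(k) with the formulas above; linear interpolation between ranks is an assumption made here for simplicity.

import numpy as np

def inferred_precision_at_recall(p, R, R_ret):
    # REL(i) = p_1 + ... + p_i, PC(i) = REL(i)/i, REC(i) = REL(i)/R
    rel = np.cumsum(p)
    ranks = np.arange(1, len(p) + 1)
    precision, recall = rel / ranks, rel / R
    levels = np.arange(1, R_ret + 1) / R
    return np.interp(levels, recall, precision)

def mae_rms(actual, inferred):
    # MAE and RMS between actual precisions and maximum entropy estimates
    d = np.asarray(actual, float) - np.asarray(inferred, float)
    return np.mean(np.abs(d)), np.sqrt(np.mean(d ** 2))

def predict_measures(p, R, k):
    # Predicted ap, rp and PC(k) from a probability-at-rank distribution p
    prior = np.cumsum(p) - p
    ranks = np.arange(1, len(p) + 1)
    ap = np.sum((p / ranks) * (1 + prior)) / R
    return ap, np.sum(p[:R]) / R, np.sum(p[:k]) / k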
Each point in the plot corresponds to a system\nand the values of the measures are averaged over all\nqueries. Using these plots as a baseline for comparison, the\nplots in the bottom row of the figures show the correlation\nbetween the actual measures and the measures predicted\nusing the average precision maximum entropy probability-at\n-rank distribution. Consider predicting precision-at-cutoff\n10 values using the average precision maximum entropy distributions\nin TREC 6. Without applying the maximum entropy\nmethod, Figure 8 shows that the two measures are\ncorrelated with a Kendall's value of 0.671. However, the\nprecision-at-cutoff 10 values inferred from the average precision\nmaximum entropy distribution have a Kendall's\nvalue of 0.871 when compared to actual precisions-at-cutoff\n10. Hence, the predicted precision-at-cutoff 10 and actual\nprecision-at-cutoff 10 values are much more correlated than\nthe actual average precision and actual precision-at-cutoff 10\nvalues. Using a similar approach for predicting R-precision\nand precision-at-cutoff 100, it can be seen in Figures 8 and 9\nthat the measured values predicted by using average precision\nmaximum entropy distributions are highly correlated\nwith actual measured values.\nWe conducted similar experiments using the maximum\nentropy distributions corresponding to other measures, but\nsince these measures are less informative, we obtained much\nsmaller increases (and sometimes even decreases) in inferred\ncorrelations. (These results are omitted due to space considerations\n.) Table 2 summarizes the correlation improvements\npossible using the maximum entropy distribution corresponding\nto average precision. The row labeled\nact\ngives\nthe actual Kendall's correlation between average precision\nand the measure in the corresponding column.\nThe row\nlabeled\ninf\ngives the Kendall's correlation between the\n31\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nrecall\nprecision\nTREC8 System fub99a Query 435 AP = 0.1433\nactual prec-recall\nap maxent prec-recall\nrp maxent prec-recall\npc-10 maxent prec-recall\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nrecall\nprecision\nTREC8 System MITSLStd Query 404 AP = 0.2305\nactual prec-recall\nap maxent prec-recall\nrp maxent prec-recall\npc-10 maxent prec-recall\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nrecall\nprecision\nTREC8 System pir9At0 Query 446 AP = 0.4754\nactual prec-recall\nap maxent prec-recall\nrp maxent prec-recall\npc-10 maxent prec-recall\nFigure 6: Inferred precision-recall curves and actual precision-recall curve for three runs in TREC8.\nTREC3\nTREC5\nTREC6\nTREC7\nTREC8\nTREC9\n0.08\n0.1\n0.12\n0.14\n0.16\n0.18\n0.2\n0.22\nMean Absolute Error\nap maxent prec-recall\nrp maxent prec-recall\npc-10 maxent prec-recall\npc-100 maxent prec-recall\nTREC3\nTREC5\nTREC6\nTREC7\nTREC8\nTREC9\n0.1\n0.15\n0.2\n0.25\nRMS Error\nap maxent prec-recall\nrp maxent prec-recall\npc-10 maxent prec-recall\npc-100 maxent prec-recall\nFigure 7: MAE and RMS errors for inferred precision-recall curves over all 
TRECs.\nTREC3\nTREC5\nTREC6\nTREC7\nTREC8\nTREC9\nAVERAGE\n%INC\nAP\n0.1185\n0.1220\n0.1191\n0.1299\n0.1390\n0.1505\n0.1298\nRP\n0.1767\n0.1711\n0.1877\n0.2016\n0.1878\n0.1630\n0.1813\n39.7\nPC-5\n0.2724\n0.2242\n0.2451\n0.2639\n0.2651\n0.2029\n0.2456\n89.2\nPC-10\n0.2474\n0.2029\n0.2183\n0.2321\n0.2318\n0.1851\n0.2196\n69.1\nPC-15\n0.2320\n0.1890\n0.2063\n0.2132\n0.2137\n0.1747\n0.2048\n57.8\nPC-20\n0.2210\n0.1806\n0.2005\n0.2020\n0.2068\n0.1701\n0.1968\n51.6\nPC-30\n0.2051\n0.1711\n0.1950\n0.1946\n0.2032\n0.1694\n0.1897\n46.1\nPC-100\n0.1787\n0.1777\n0.2084\n0.2239\n0.2222\n0.1849\n0.1993\n53.5\nPC-200\n0.1976\n0.2053\n0.2435\n0.2576\n0.2548\n0.2057\n0.2274\n75.2\nPC-500\n0.2641\n0.2488\n0.2884\n0.3042\n0.3027\n0.2400\n0.2747\n111.6\nPC-1000\n0.3164\n0.2763\n0.3134\n0.3313\n0.3323\n0.2608\n0.3051\n135.0\nTable 1: RMS error values for each TREC.\nTREC3\nTREC5\nTREC6\nRP\nPC-10\nPC-100\nRP\nPC-10\nPC-100\nRP\nPC-10\nPC-100\n\nact\n0.921\n0.815\n0.833\n0.939\n0.762\n0.868\n0.913\n0.671\n0.807\n\ninf\n0.941\n0.863\n0.954\n0.948\n0.870\n0.941\n0.927\n0.871\n0.955\n%Inc\n2.2\n5.9\n14.5\n1.0\n14.2\n8.4\n1.5\n29.8\n18.3\nTREC7\nTREC8\nTREC9\nRP\nPC-10\nPC-100\nRP\nPC-10\nPC-100\nRP\nPC-10\nPC-100\n\nact\n0.917\n0.745\n0.891\n0.925\n0.818\n0.873\n0.903\n0.622\n0.836\n\ninf\n0.934\n0.877\n0.926\n0.932\n0.859\n0.944\n0.908\n0.757\n0.881\n%Inc\n1.9\n17.7\n3.9\n0.8\n5.0\n8.1\n0.6\n21.7\n5.4\nTable 2: Kendall's correlations and percent improvements for all TRECs.\n32\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual RP\nActual AP\nTREC 6 Actual RP vs Actual AP\nKendall's\n= 0.913\n0\n0.2\n0.4\n0.6\n0.8\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nActual PC-10\nActual AP\nTREC 6 Actual PC-10 vs Actual AP\nKendall's\n\n= 0.671\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual PC-100\nActual AP\nTREC 6 Actual PC-100 vs Actual AP\nKendall's\n\n= 0.807\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual RP\nInferred RP\nTREC 6 Actual RP vs Inferred RP\nKendall's\n= 0.927\n0\n0.2\n0.4\n0.6\n0.8\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nActual PC-10\nInferred PC-10\nTREC 6 Actual PC-10 vs Inferred PC-10\nKendall's\n\n= 0.871\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual PC-100\nInferred PC-100\nTREC 6 Actual PC-100 vs Inferred PC-100\nKendall's\n\n= 0.955\nFigure 8: Correlation improvements, TREC6.\nmeasure inferred from the maximum entropy distribution\ncorresponding to average precision and the measure in the\ncorresponding column. The row labeled %Inc gives the percent\nincrease in correlation due to maximum entropy modeling\n. As can be seen, maximum entropy modeling yields\ngreat improvements in the predictions of precision-at-cutoff\nvalues. 
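The correlation comparison itself can be reproduced with a short Python sketch (assuming scipy, and assuming one value per run or per system, averaged over queries as in the tables; the function name is illustrative):

from scipy.stats import kendalltau

def correlation_improvement(actual_ap, actual_other, inferred_other):
    # Baseline: Kendall's tau between actual AP and the actual other measure.
    # Inferred: tau between the measure predicted from the AP maximum entropy
    # distribution and the actual measure.  The percent increase corresponds
    # to the %Inc rows of Table 2.
    tau_act, _ = kendalltau(actual_ap, actual_other)
    tau_inf, _ = kendalltau(inferred_other, actual_other)
    return tau_act, tau_inf, 100.0 * (tau_inf - tau_act) / tau_act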
The improvements in predicting R-precision are no-ticeably\nsmaller, though this is largely due to the fact that\naverage precision and R-precision are quite correlated to begin\nwith.\nCONCLUSIONS AND FUTURE WORK\nWe have described a methodology for analyzing measures\nof retrieval performance based on the maximum entropy\nmethod, and we have demonstrated that the maximum entropy\nmodels corresponding to \"good\" measures of overall\nperformance such as average precision accurately reflect underlying\nretrieval performance (as measured by precision-recall\ncurves) and can be used to accurately predict the values\nof other measures of performance, well beyond the levels\ndictated by simple correlations.\nThe maximum entropy method can be used to analyze\nother measures of retrieval performance, and we are presently\nconducting such studies. More interestingly, the maximum\nentropy method could perhaps be used to help develop and\ngain insight into potential new measures of retrieval performance\n. Finally, the predictive quality of maximum entropy\nmodels corresponding to average precision suggest that if\none were to estimate some measure of performance using an\nincomplete judgment set, that measure should be average\nprecision--from the maximum entropy model corresponding\nto that measure alone, one could accurately infer other\nmeasures of performance.\nNote that the concept of a \"good\" measure depends on\nthe purpose of evaluation. In this paper, we evaluate measures\nbased on how much information they provide about\nthe overall performance of a system (a system-oriented evaluation\n). However, in different contexts, different measures\nmay be more valuable and useful, such as precision-at-cutoff\n10 in web search (a user-oriented evaluation). R-precision\nand average precision are system-oriented measures, whereas\nprecision-at-cutoff k is typically a user-oriented measure.\nAnother important conclusion of our work is that one can accurately\ninfer user-oriented measures from system-oriented\nmeasures, but the opposite is not true.\nApart from evaluating the information captured by a single\nmeasure, we could use the MEM to evaluate the information\ncontained in combinations of measures. How much does\nknowing the value of precision-at-cutoff 10 increase one's\nknowledge of a system's performance beyond simply knowing\nthe system's average precision? Which is more informative\n: knowing R-precision and precision-at-cutoff 30, or\nknowing average precision and precision-at-cutoff 100? Such\nquestions can be answered, in principle, using the MEM.\nAdding the values of one or more measures simply adds one\nor more constraints to the maximum entropy model, and\none can then assess the informativeness of the combination.\nNote that TREC reports many different measures. Using\nthe MEM, one might reasonably be able to conclude which\nare the most informative combinations of measures.\nREFERENCES\n[1] A. L. Berger, V. D. Pietra, and S. D. Pietra. A\nmaximum entropy approach to natural language\nprocessing. Comput. Linguist., 22:3971, 1996.\n[2] C. Buckley and E. Voorhees. Evaluating evaluation\nmeasure stability. In SIGIR '00: Proceedings of the\n23rd annual international ACM SIGIR conference on\nResearch and development in information retrieval,\npages 3340. ACM Press, 2000.\n[3] W. S. Cooper. On selecting a measure of retrieval\neffectiveness. part i. In Readings in information\nretrieval, pages 191204. 
Morgan Kaufmann\nPublishers Inc., 1997.\n33\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual RP\nActual AP\nTREC 8 Actual RP vs Actual AP\nKendall's\n= 0.925\n0\n0.2\n0.4\n0.6\n0.8\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nActual PC-10\nActual AP\nTREC 8 Actual PC-10 vs Actual AP\nKendall's\n\n= 0.818\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual PC-100\nActual AP\nTREC 8 Actual PC-100 vs Actual AP\nKendall's\n\n= 0.873\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual RP\nInferred RP\nTREC 8 Actual RP vs Inferred RP\nKendall's\n= 0.932\n0\n0.2\n0.4\n0.6\n0.8\n1\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nActual PC-10\nInferred PC-10\nTREC 8 Actual PC-10 vs Inferred PC-10\nKendall's\n\n= 0.859\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\n0.5\nActual PC-100\nInferred PC-100\nTREC 8 Actual PC-100 vs Inferred PC-100\nKendall's\n\n= 0.944\nFigure 9: Correlation improvements, TREC8.\n[4] T. M. Cover and J. Thomas. Elements of Information\nTheory. John Wiley & sons, 1991.\n[5] B. Dervin and M. S. Nilan. Information needs and use.\nIn Annual Review of Information Science and\nTechnology, volume 21, pages 333, 1986.\n[6] W. R. Greiff and J. Ponte. The maximum entropy\napproach and probabilistic ir models. ACM Trans. Inf.\nSyst., 18(3):246287, 2000.\n[7] E. Jaynes. On the rationale of maximum entropy\nmethods. In Proc.IEEE, volume 70, pages 939952,\n1982.\n[8] E. T. Jaynes. Information theory and statistical\nmechanics: Part i. Physical Review 106, pages\n620630, 1957a.\n[9] E. T. Jaynes. Information theory and statistical\nmechanics: Part ii. Physical Review 108, page 171,\n1957b.\n[10] Y. Kagolovsky and J. R. Moehr. Current status of the\nevaluation of information retrieval. J. Med. Syst.,\n27(5):409424, 2003.\n[11] P. B. Kantor and J. Lee. The maximum entropy\nprinciple in information retrieval. In SIGIR '86:\nProceedings of the 9th annual international ACM\nSIGIR conference on Research and development in\ninformation retrieval, pages 269274. ACM Press,\n1986.\n[12] D. D. Lewis. Evaluating and optimizing autonomous\ntext classification systems. In SIGIR '95: Proceedings\nof the 18th annual international ACM SIGIR\nconference on Research and development in\ninformation retrieval, pages 246254. ACM Press,\n1995.\n[13] R. M. Losee. When information retrieval measures\nagree about the relative quality of document rankings.\nJ. Am. Soc. Inf. Sci., 51(9):834840, 2000.\n[14] K. Nigam, J. Lafferty, and A. McCallum. Using\nmaximum entropy for text classification. In IJCAI-99\nWorkshop on Machine Learning for Information\nFiltering, pages 6167, 1999.\n[15] D. Pavlov, A. Popescul, D. M. Pennock, and L. H.\nUngar. Mixtures of conditional maximum entropy\nmodels. In T. Fawcett and N. Mishra, editors, ICML,\npages 584591. AAAI Press, 2003.\n[16] S. J. Phillips, M. Dudik, and R. E. Schapire. A\nmaximum entropy approach to species distribution\nmodeling. In ICML '04: Twenty-first international\nconference on Machine learning, New York, NY, USA,\n2004. ACM Press.\n[17] V. Raghavan, P. Bollmann, and G. S. Jung. A critical\ninvestigation of recall and precision as measures of\nretrieval system performance. ACM Trans. Inf. Syst.,\n7(3):205229, 1989.\n[18] A. Ratnaparkhi and M. P. Marcus. Maximum entropy\nmodels for natural language ambiguity resolution,\n1998.\n[19] T. Saracevic. 
Evaluation of evaluation in information\nretrieval. In SIGIR '95: Proceedings of the 18th\nannual international ACM SIGIR conference on\nResearch and development in information retrieval,\npages 138146. ACM Press, 1995.\n[20] C. E. Shannon. A mathematical theory of\ncommunication. The Bell System Technical Journal\n27, pages 379423 & 623656, 1948.\n[21] N. Wu. The Maximum Entropy Method. Springer, New\nYork, 1997.\n34\n", "keywords": "Average Precision;Evaluation;Maximum Entropy"} {"name": "19", "title": "A Similarity Measure for Motion Stream Segmentation and Recognition", "abstract": "Recognition of motion streams such as data streams generated by different sign languages or various captured human body motions requires a high performance similarity measure . The motion streams have multiple attributes, and motion patterns in the streams can have different lengths from those of isolated motion patterns and different attributes can have different temporal shifts and variations. To address these issues, this paper proposes a similarity measure based on singular value decomposition (SVD) of motion matrices . Eigenvector differences weighed by the corresponding eigenvalues are considered for the proposed similarity measure . Experiments with general hand gestures and human motion streams show that the proposed similarity measure gives good performance for recognizing motion patterns in the motion streams in real time.", "fulltext": "INTRODUCTION\nMotion streams can be generated by continuously performed\nsign language words [14] or captured human body\nmotions such as various dances. Captured human motions\ncan be applied to the movie and computer game industries\nby reconstructing various motions from video sequences [10]\nor images [15] or from motions captured by motion capture\nsystems [4]. Recognizing motion patterns in the streams\nwith unsupervised methods requires no training process, and\nis very convenient when new motions are expected to be\nadded to the known pattern pools. A similarity measure\nwith good performance is thus necessary for segmenting and\nrecognizing the motion streams. Such a similarity measure\nneeds to address some new challenges posed by real world\nWork supported partially by the National Science Foundation\nunder Grant No. 0237954 for the project CAREER:\nAnimation Databases.\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nCopyright 200X ACM X-XXXXX-XX-X/XX/XX ...\n$\n5.00.\nmotion streams: first, the motion patterns have dozens of attributes\n, and similar patterns can have different lengths due\nto different motion durations; second, different attributes of\nsimilar motions have different variations and different temporal\nshifts due to motion variations; and finally, motion\nstreams are continuous, and there are no obvious \"pauses\"\nbetween neighboring motions in a stream. 
A good similarity\nmeasure not only needs to capture the similarity of complete\nmotion patterns, but also needs to capture the differences\nbetween complete motion patterns and incomplete motion\npatterns or sub-patterns in order to segment a stream for\nmotion recognition.\nAs the main contribution of this paper, we propose a similarity\nmeasure to address the above issues. The proposed\nsimilarity measure is defined based on singular value decomposition\nof the motion matrices. The first few eigenvectors\nare compared for capturing the similarity of two matrices,\nand the inner products of the eigenvectors are given different\nweights for their different contributions. We propose to\nuse only the eigenvalues corresponding to the involved eigenvectors\nof the two motion matrices as weights. This simple\nand intuitive weighing strategy gives the same importance to\neigenvalues of the two matrices. We also show that the 95%\nvariance rule for choosing the number of eigenvectors [13] is\nnot sufficient for recognizing both isolated patterns and motion\nstreams. Our experiments demonstrate that at least the\nfirst 6 eigenvectors need to be considered for motion streams\nof either 22 attribute or 54 attributes, and the first 6 eigenvalues\naccounts for more than 99.5% of the total variance in\nthe motion matrices.\nRELATED WORK\nMulti-attribute pattern similarity search, especially in continuous\nmotion streams, has been widely studied for sign\nlanguage recognition and for motion synthesis in computer\nanimation. The recognition methods usually include template\nmatching by distance measures and hidden Markov\nmodels (HMM).\nTemplate matching by using similarity/distance measures\nhas been employed for multi-attribute pattern recognition.\nJoint angles are extracted in [11] as features to represent different\nhuman body static poses for the Mahalanobis distance\nmeasure of two joint angle features. Similarly, momentum,\nkinetic energy and force are constructed in [2, 5] as activity\nmeasure and prediction of gesture boundaries for various\nsegments of the human body, and the Mahalanobis distance\nfunction of two composite features are solved by dynamic\nprogramming.\n89\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are not\nmade or distributed for profit or commercial advantage and that copies bear\nthis notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nMDM/KDD 2005 Chicago, August 21, Chicago, Illinois, USA\nCopyright 2005 ACM -- MDM 2005 - 1-59593-216-X...$5.00.\nSimilarity measures are defined for multi-attribute data\nin [6, 12, 16] based on principal component analysis (PCA).\nInner products or angular differences of principal components\n(PCs) are considered for similarity measure definitions\n, with different weighted strategies for different PCs.\nEqual weights are considered for different combinations of\nPCs in [6], giving different PCs equal contributions to the\nsimilarity measure. 
The similarity measure in [12] takes the minimum of two weighted sums of PC inner products, and the two sums are respectively weighted by different weights. A global weight vector is obtained by taking into account all available isolated motion patterns in [16], and this weight vector is used for specifying different contributions from different PC inner products to the similarity measure Eros. The dominating first PC and a normalized eigenvalue vector are considered in [7, 8] for pattern recognition. In contrast, this paper proposes to consider the first few PCs, and the angular differences or inner products of different PCs are weighted by different weights which depend on the data variances along the corresponding PCs.

The HMM technique has been widely used for sign language recognition, and different recognition rates have been reported for different sign languages and different feature selection approaches. Starner et al. [14] achieved 92% and 98% word accuracy respectively for two systems; the first of the systems used a camera mounted on a desk and the second one used a camera in a user's cap for extracting features as the input of HMM. Similarly, Liang and Ouhyoung [9] used HMM for postures, orientations and motion primitives as features extracted from continuous Taiwan sign language streams, and an average 80.4% recognition rate was achieved. In contrast, the approach proposed in this paper is an unsupervised approach, and no training, as required for HMM recognizers, is needed.

SIMILARITY MEASURE FOR MOTION STREAM RECOGNITION
The joint positional coordinates or joint angular values of a subject in motion can be represented by a matrix: the columns or attributes of the matrix are for different joints, and the rows or frames of the matrix are for different time instants. Similarity of two motions is the similarity of the resulting motion matrices, which have the same number of attributes or columns, and yet can have different numbers of rows due to different motion durations. To capture the similarity of two matrices of different lengths, we propose to apply singular value decomposition (SVD) to the motion matrices in order to capture the similarity of the matrix geometric structures. Hence we briefly present SVD and its associated properties below before proposing the similarity measure based on SVD in this section.

3.1 Singular Value Decomposition
The geometric structure of a matrix can be revealed by the SVD of the matrix. As shown in [3], any real m × n matrix A can be decomposed into A = UΣV^T, where U = [u_1, u_2, ..., u_m] ∈ R^(m×m) and V = [v_1, v_2, ..., v_n] ∈ R^(n×n) are two orthogonal matrices, and Σ is a diagonal matrix whose diagonal entries are the singular values of A: σ_1 ≥ σ_2 ≥ ... ≥ σ_min(m,n) ≥ 0. Column vectors u_i and v_i are the i-th left and right singular vectors of A, respectively.

It can be shown that the right singular vectors of the symmetric n × n matrix M = A^T A are identical to the corresponding right singular vectors of A, referred to as eigenvectors of M. The singular values of M, or eigenvalues of M, are squares of the corresponding singular values of A. The eigenvector with the largest eigenvalue gives the first principal component. The eigenvector with the second largest eigenvalue is the second principal component, and so on.

3.2 Similarity Measure
Since SVD exposes the geometric structure of a matrix, it can be used for capturing the similarity of two matrices.
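The relationship between the SVD of A and the eigen-decomposition of M = A^T A stated in Section 3.1 can be checked with a short numpy sketch (the helper name and the random test matrix are illustrative):

import numpy as np

def eigen_of(A):
    # Eigenvalues (descending) and eigenvectors of M = A^T A; the eigenvectors
    # equal the right singular vectors of A and the eigenvalues are the
    # squared singular values of A.
    values, vectors = np.linalg.eigh(A.T @ A)     # eigh: M is symmetric
    order = np.argsort(values)[::-1]
    return values[order], vectors[:, order]

A = np.random.rand(300, 22)                       # a 300-frame, 22-joint motion
w, V = eigen_of(A)
_, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(w, s ** 2)                     # eigenvalues = squared singular values
assert np.allclose(np.abs(np.sum(V * Vt.T, axis=0)), 1.0)   # same vectors up to sign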
We\ncan compute the SVD of M = A\nT\nA\ninstead of computing\nthe SVD of A to save computational time. The reasons are\nthat the eigenvectors of M are identical to the corresponding\nright singular vectors of A, the eigenvalues of M are the\nsquares of the corresponding singular values of A, and SVD\ntakes O(n\n3\n) time for the n n M and takes O(mn\n2\n) time\nwith a large constant for the m n A, and usually m > n.\nIdeally, if two motions are similar, their corresponding\neigenvectors should be parallel to each other, and their corresponding\neigenvalues should also be proportional to each\nother. This is because the eigenvectors are the corresponding\nprincipal components, and the eigenvalues reflect the\nvariances of the matrix data along the corresponding principal\ncomponents. But due to motion variations, all corresponding\neigenvectors cannot be parallel as shown in Figure\n1. The parallelness or angular differences of two eigenvectors\nu and v can be described by the absolute value of\ntheir inner products: | cos | = |u v|/(|u||v|) = |u v|, where\n|u| = |v| = 1. We consider the absolute value of the inner\nproducts because eigenvectors can have different signs\nas shown in [8].\nSince eigenvalues are numerically related to the variances\nof the matrix data along the associated eigenvectors, the importance\nof the eigenvector parallelness can be described by\nthe corresponding eigenvalues. Hence, eigenvalues are to be\nused to give different weights to different eigenvector pairs.\nFigure 2 shows that the first eigenvalues are the dominating\ncomponents of all the eigenvalues, and other eigenvalues\nbecome smaller and smaller and approach zero. As the\neigenvalues are close to zero, their corresponding eigenvectors\ncan be very different even if two matrices are similar.\nHence not all the eigenvectors need to be incorporated into\nthe similarity measure.\nSince two matrices have two eigenvalues for the corresponding\neigenvector pair, these two eigenvalues should have\nequal contributions or weights to the eigenvector parallelness\n. In addition, the similarity measure of two matrices\nshould be independent to other matrices, hence only eigenvectors\nand eigenvalues of the two matrices should be considered\n.\nBased on the above discussions, we propose the following\nsimilarity measure for two matrices Q and P :\n(Q, P ) = 1\n2\nk\n\ni\n=1\n((\ni\n/\nn\n\ni\n=1\n\ni\n+\ni\n/\nn\n\ni\n=1\n\ni\n)|u\ni\nv\ni\n|)\nwhere\ni\nand\ni\nare the i\nth\neigenvalues corresponding to the\ni\nth\neigenvectors u\ni\nand v\ni\nof square matrices of Q and P ,\nrespectively, and 1 < k < n. Integer k determines how many\neigenvectors are considered and it depends on the number\nof attributes n of motion matrices. Experiments with hand\ngesture motions (n = 22) and human body motions (n =\n90\n2\n4\n6\n8\n10\n12\n14\n16\n18\n20\n22\n-0.7\n-0.6\n-0.5\n-0.4\n-0.3\n-0.2\n-0.1\n0\n0.1\n0.2\nComponent of First Eigenvector\nComponent Value of First Eigenvector\nMotion341\nMotion342\n2\n4\n6\n8\n10\n12\n14\n16\n18\n20\n22\n-0.4\n-0.2\n0\n0.2\n0.4\n0.6\nComponent Value of Second Eigenvector\nComponent of Second Eigenvector\nMotion341\nMotion342\nFigure 1: Eigenvectors of similar patterns. The first\neigenvectors are similar to each other, while other\neigenvectors, such as the second vectors shown in\nthe bottom, can be quite different.\n54) in Section 4 show that k = 6 is large enough without\nloss of pattern recognition accuracy in streams. 
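A direct Python rendering of this measure may look as follows (a sketch assuming numpy; kwas and the helper are illustrative names, and the eigenpairs are taken from the square matrices Q^T Q and P^T P as described above):

import numpy as np

def _eigenpairs(A):
    # eigenvalues (descending) and eigenvectors of the square matrix A^T A
    values, vectors = np.linalg.eigh(A.T @ A)
    order = np.argsort(values)[::-1]
    return values[order], vectors[:, order]

def kwas(Q, P, k=6):
    # Weighted angular similarity of two motion matrices with the same number
    # of columns (joints) but possibly different numbers of rows (frames).
    lam, U = _eigenpairs(Q)
    mu, V = _eigenpairs(P)
    cos = np.abs(np.sum(U[:, :k] * V[:, :k], axis=0))   # |u_i . v_i|
    weights = lam[:k] / lam.sum() + mu[:k] / mu.sum()    # eigenvalue weights
    return 0.5 * np.sum(weights * cos)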
We refer to\nthis non-metric similarity measure as k Weighted Angular\nSimilarity (kWAS) , which captures the angular similarities\nof the first k corresponding eigenvector pairs weighted by\nthe corresponding eigenvalues.\nIt can be easily verified that the value of kWAS ranges over\n[0,1]. When all corresponding eigenvectors are normal to\neach other, the similarity measure will be zero, and when two\nmatrices are identical, the similarity measure approaches the\nmaximum value one if k approaches n.\n3.3\nStream Segmentation Algorithm\nIn order to recognize motion streams, we assume one motion\nin a stream has a minimum length l and a maximum\nlength L. The following steps can be applied to incremen-tally\nsegment a stream for motion recognition:\n1. SVD is applied to all isolated motion patterns P to\nobtain their eigenvectors and eigenvalues. Let be\nthe incremented stream length for segmentation, and\nlet L be the location for segmentation. Initially L = l.\n2. Starting from the beginning of the stream or the end of\nthe previously recognized motion, segment the stream\nat location L. Compute the eigenvectors and eigenvalues\nof the motion segment Q.\n3. Compute kWAS between Q and all motion patterns\nP\n. Update\nmax\nto be the highest similarity after the\nprevious motion's recognition.\n4. If L+ < L, update L = L+ and go to step 2. Otherwise\n, the segment corresponding to\nmax\nis recognized\nto be the motion pattern which gives the highest similarity\nmax\n, update L = l starting from the end of the\nlast recognized motion pattern and go to step 2.\n1\n2\n3\n4\n5\n6\n7\n8\n85\n87\n89\n91\n93\n95\n97\n99\n100\nNumber of Eigenvalues\nAccumulated Eigenvalue Percentage (%)\nCyberGlove Data\nMoCap Data\nFigure 2: Accumulated eigenvalue percentages in\ntotal eigenvalues for CyberGlove data and captured\nhuman body motion data. There are 22 eigenvalues\nfor the CyberGlove data and 54 eigenvalues for the\ncaptured motion data. The sum of the first 2 eigenvalues\nis more than 95% of the corresponding total\neigenvalues, and the sum of the first 6 eigenvalues is\nalmost 100% of the total eigenvalues.\nPERFORMANCE EVALUATION\nThis section evaluates experimentally the performances\nof the similarity measure kWAS proposed in this paper. It\nhas been shown in [16] that Eros [16] outperforms other\nsimilarity measures mentioned in Section 2 except MAS [8].\nHence in this section, we compare the performances of the\nproposed kWAS with Eros and MAS for recognizing similar\nisolated motion patterns and for segmenting and recognizing\nmotion streams from hand gesture capturing CyberGlove\nand human body motion capture system.\n4.1\nData Generation\nA similarity measure should be able to be used not only\nfor recognizing isolated patterns with high accuracy, but also\nfor recognizing patterns in continuous motions or motion\nstreams. Recognizing motion streams is more challenging\nthan recognizing isolated patterns. 
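Referring back to the segmentation steps of Section 3.3, a rough Python sketch of the loop is given below. It assumes numpy-style slicing of the stream matrix; sim is any similarity function over two motion matrices (for example the kwas sketch above), and all names are illustrative.

def segment_stream(stream, patterns, l, L_max, delta, sim):
    # stream: (frames x attributes) matrix; patterns: dict of isolated motions.
    # Grow a candidate segment from length l to L_max in steps of delta and
    # keep the cut that maximizes the similarity to any known pattern.
    recognized, start = [], 0
    while start + l <= len(stream):
        best_sim, best_end, best_name = -1.0, start + l, None
        L = l
        while L <= L_max and start + L <= len(stream):
            segment = stream[start:start + L]
            for name, pattern in patterns.items():
                s = sim(segment, pattern)
                if s > best_sim:
                    best_sim, best_end, best_name = s, start + L, name
            L += delta
        recognized.append((start, best_end, best_name, best_sim))
        start = best_end                      # continue after the recognized motion
    return recognized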
This is because many\nvery similar motion segments or sub-patterns needs to be\ncompared in order to find appropriate segmentation locations\n, and a similarity measure should capture the difference\nbetween a complete motion or pattern and its sub-patterns.\nHence, both isolated motion patterns and motion streams\nwere generated for evaluating the performance of kWAS.\nTwo data sources are considered for data generation: a CyberGlove\nfor capturing hand gestures and a Vicon motion\ncapture system for capturing human body motions.\n4.1.1\nCyberGlove Data\nA CyberGlove is a fully instrumented data glove that provides\n22 sensors for measuring hand joint angular values to\ncapture motions of a hand, such as American Sign Language\n(ASL) words for hearing impaired. The data for a hand gesture\ncontain 22 angular values for each time instant/frame,\none value for a joint of one degree of freedom. The motion\ndata are extracted at around 120 frames per second.\nData matrices thus have 22 attributes for the CyberGlove\nmotions.\nOne hundred and ten different isolated motions were generated\nas motion patterns, and each motion was repeated\nfor three times, resulting in 330 isolated hand gesture motions\n. Some motions have semantic meanings. For example,\n91\nthe motion for BUS as shown in Table 1 is for the ASL sign\n\"bus\". Yet for segmentation and recognition, we only require\nthat each individual motion be different from others,\nand thus some motions are general motions, and do not have\nany particular semantic meanings, such as the THUMBUP\nmotion in Table 1.\nThe following 18 motions shown in Table 1 were used to\ngenerate continuous motions or streams. Twenty four different\nmotion streams were generated for segmentation and\nrecognition purpose. There are 5 to 10 motions in a stream\nand 150 motions in total in 24 streams, with 6.25 motions in\na stream on average. It should be noted that variable-length\ntransitional noises occur between successive motions in the\ngenerated streams.\nTable 1: Individual motions used for streams\n35 60 70 80 90 BUS GOODBYE\nHALF IDIOM JAR JUICE KENNEL KNEE\nMILK TV SCISSOR SPREAD THUMBUP\n4.1.2\nMotion Capture Data\nThe motion capture data come from various motions captured\ncollectively by using 16 Vicon cameras and the Vicon\niQ Workstation software. A dancer wears a suit of non-reflective\nmaterial and 44 markers are attached to the body\nsuit. After system calibration and subject calibration, global\ncoordinates and rotation angles of 19 joints/segments can\nbe obtained at about 120 frames per second for any motion\n. Similarity of patterns with global 3D positional data\ncan be disguised by different locations, orientations or different\npaths of motion execution as illustrated in Figure 3(a).\nSince two patterns are similar to each other because of similar\nrelative positions of corresponding body segments at\ncorresponding time, and the relative positions of different\nsegments are independent of locations or orientations of the\nbody, we can transform the global position data into local\nposition data as follows.\nLet X\np\n, Y\np\n, Z\np\nbe the global coordinates of one point on\npelvis, the selected origin of the \"moving\" local coordinate\nsystem, and , , be the rotation angles of the pelvis segment\nrelative to the global coordinate system axes, respectively\n. 
The translation matrix is T as follows:\nT\n=\n\n\n\n1\n0\n0\n0\n0\n1\n0\n0\n0\n0\n1\n0\n-X\np\n-Y\np\n-Z\np\n1\n\n\n\nThe rotation matrix R = R\nx\nR\ny\nR\nz\n, where\nR\nx\n=\n\n\n\n1\n0\n0\n0\n0\ncos - sin 0\n0\nsin\ncos\n0\n0\n0\n0\n1\n\n\n\nR\ny\n=\n\n\n\n\ncos\n0\nsin\n0\n0\n1\n0\n0\n- sin\n0 cos\n0\n0\n0\n0\n1\n\n\n\n\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n-1500\n-1000\n-500\n0\n500\n1000\n1500\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n0\n500\n1000\n1500\n2000\nMotion Capture Frames\nGlobal Coordinates of Joints(mm)\nGlobal Coordinates of Joints(mm)\n(a)\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n-1000\n-500\n0\n500\n1000\nTransformed Coordinates of Joints (mm)\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n-1000\n-500\n0\n500\n1000\nMotion Capture Frames\nTransformed Coordinates of Joints (mm)\n(b)\nFigure 3: 3D motion capture data for similar motions\nexecuted at different locations and in different orientations\n: (a) before transformation; (b) after transformation\n.\nR\nz\n=\n\n\n\n\ncos\n- sin\n0 0\nsin\ncos\n0 0\n0\n0\n1 0\n0\n0\n0 1\n\n\n\n\nLet X, Y, Z be the global coordinates of one point on any\nsegments, and x, y, z be the corresponding transformed local\ncoordinates. x, y and z can be computed as follows:\n[x y z 1] = [X Y Z 1] T R\nThe transformed data are positions of different segments\nrelative to a moving coordinate system with the origin at\nsome fixed point of the body, for example the pelvis. The\nmoving coordinate system is not necessarily aligned with\nthe global system, and it can rotate with the body. So data\ntransformation includes both translation and rotation, and\nthe transformed data would be translation and rotation invariant\nas shown in Figure 3(b). The coordinates of the\norigin pelvis are not included, thus the transformed matrices\nhave 54 columns.\nSixty two isolated motions including Taiqi, Indian dances,\nand western dances were performed for generating motion\ncapture data, and each motion was repeated 5 times, yielding\n310 isolated human motions. Every repeated motion has\na different location and different durations, and can face\ndifferent orientations. Twenty three motion streams were\ngenerated for segmentation. There are 3 to 5 motions in\na stream, and 93 motions in total in 23 streams, with 4.0\nmotions in a stream on average.\n4.2\nPerformance of\nk\nWAS for Capturing Similarities\nand Segmenting Streams\nWe first apply kWAS to isolated motion patterns to show\nthat the proposed similarity measure kWAS can capture the\nsimilarities of isolated motion patterns. Then kWAS is applied\nto motion streams for segmenting streams and recognizing\nmotion patterns in the streams. We experimented\nwith different k values in order to find out the smallest k\nwithout loss of good performance.\nFigure 2 shows the accumulated eigenvalue percentages\naveraged on 330 hand gestures and 310 human motions, respectively\n. Although the first two eigenvalues account for\n92\n1\n2\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\nNumber of Nearest Neighbors (Most Similar Patterns)\nPattern Recognition Rate (%)\nkWAS (k = 22)\nkWAS (k = 5)\nkWAS (k = 3)\nkWAS (k = 2)\nMAS\nEROS\nFigure 4: Recognition rate of similar CyberGlove\nmotion patterns. 
When k is 3, kWAS can find the\nmost similar motions for about 99.7% of 330 motions\n, and can find the second most similar motions\nfor 97.5% of the them.\n1\n2\n3\n4\n95\n95.5\n96\n96.5\n97\n97.5\n98\n98.5\n99\n99.5\n100\nNumber of Nearest Neighbors (Most Similar Patterns)\nPattern Recognition Rate (%)\nkWAS (k = 54)\nkWAS (k = 5)\nkWAS (k = 4)\nkWAS (k = 3)\nMAS\nEROS\nFigure 5: Recognition rate of similar captured motion\npatterns. When k is 5, by using kWAS, the most\nsimilar motions of all 310 motions can be found, and\nthe second most similar motions of 99.8% of the 310\nmotions can also be found.\nmore than 95% of the respective sums of all eigenvalues,\nconsidering only the first two eigenvectors for kWAS is not\nsufficient as shown in Figure 4 and Figure 5. For CyberGlove\ndata with 22 attributes, kWAS with k = 3 gives the\nsame performance as kWAS with k = 22, and for motion\ncapture data with 54 attributes, kWAS with k = 5 gives the\nsame performance as kWAS with k = 54. Figure 4 and Figure\n5 illustrate that kWAS can be used for finding similar\nmotion patterns and outperforms MAS and Eros for both\nhand gesture and human body motion data.\nThe steps in Section 3.3 are used for segmenting streams\nand recognizing motions in streams. The recognition accuracy\nas defined in [14] is used for motion stream recognition.\nThe motion recognition accuracies are shown in Table 2. For\nboth CyberGlove motion and captured motion data, k = 6\nis used for kWAS, which gives the same accuracy as k = 22\nfor CyberGlove data and k = 54 for motion capture data,\nrespectively.\nFigure 6 shows the time taken for updating the candidate\nsegment, including updating the matrix, computing the\nSVD of the updated matrix, and computing the similarities\nof the segment and all motion patterns. The code implemented\nin C++ was run on one 2.70 GHz Intel processor\nof a GenuineIntel Linux box. There are 22 attributes for\nthe CyberGlove streams, and 54 attributes for the captured\nCyberGlove Streams\nMotion Capture Streams\n0\n2\n4\n6\n8\n10\n12\n14\n16\n18\n20\nTime (milliseconds)\nMAS\nkWAS (k = 6)\nEROS\nFigure 6: Computation time for stream segment update\nand similarity computation.\nTable 2: Stream Pattern Recognition Accuracy (%)\nSimilarity\nCyberGlove\nMotion Capture\nMeasures\nStreams\nStreams\nEros\n68.7\n78.5\nMAS\n93.3\n78.5\nkWAS (k=6)\n94.0\n94.6\nmotion streams. Hence updating captured motion segments\ntakes longer than updating CyberGlove motion segments as\nshown in Figure 6. The time required by kWAS is close to\nthe time required by MAS, and is less than half of the time\ntaken by using Eros.\n4.3\nDiscussions\nk\nWAS captures the similarity of square matrices of two\nmatrices P and Q, yet the temporal order of pattern execution\nis not revealed in the square matrices. As shown in [7],\ntwo matrices with the identical row vectors in different orders\nhave identical eigenvectors and identical eigenvalues. If\ndifferent temporal orders of pattern execution yield patterns\nwith different semantic meanings, we need to further consider\nthe temporal execution order, which is not reflected in\nthe eigenvectors and eigenvalues and has not been considered\npreviously in [6, 12, 16].\nSince the first eigenvectors are close or parallel for similar\npatterns, we can project pattern A onto its first eigenvector\nu\n1\nby Au\n1\n. 
Then similar patterns would have similar projections\n(called projection vectors hereafter), showing similar\ntemporal execution orders while the projection variations\nfor each pattern can be maximized. The pattern projection\nvectors can be compared by computing their dynamic time\nwarping (DTW) distances, for DTW can align sequences\nof different lengths and can be solved easily by dynamic\nprogramming [1]. Incorporating temporal order information\ninto the similarity measure can be done as for MAS in [7]\nif motion temporal execution orders cause motion pattern\nambiguity to kWAS.\nCONCLUSIONS\nThis paper has proposed a similarity measure kWAS for\nmotion stream segmentation and motion pattern recognition\n. kWAS considers the first few k eigenvectors and computes\ntheir angular similarities/differences, and weighs contributions\nof different eigenvector pairs by their correspond-93\ning eigenvalues. Eigenvalues from two motion matrices are\ngiven equal importance to the weights. Experiments with\nCyberGlove hand gesture streams and captured human body\nmotions such as Taiqi and dances show that kWAS can recognize\n100% most similar isolated patterns and can recognize\n94% motion patterns in continuous motion streams.\nREFERENCES\n[1] D. Berndt and J. Clifford. Using dynamic time\nwarping to find patterns in time series. In AAAI-94\nWorkshop on Knowledge Discovery in Databases,\npages 229248, 1994.\n[2] V. M. Dyaberi, H. Sundaram, J. James, and G. Qian.\nPhrase structure detection in dance. In Proceedings of\nthe ACM Multimedia Conference 2004, pages 332335,\nOct. 2004.\n[3] G. H. Golub and C. F. V. Loan. Matrix Computations.\nThe Johns Hopkins University Press,\nBaltimore,Maryland, 1996.\n[4] L. Ikemoto and D. A. Forsyth. Enriching a motion\ncollection by transplanting limbs. In Proceedings of the\n2004 ACM SIGGRAPH/Eurographics symposium on\nComputer animation, pages 99 108, 2004.\n[5] K. Kahol, P. Tripathi, S. Panchanathan, and\nT. Rikakis. Gesture segmentation in complex motion\nsequences. In Proceedings of IEEE International\nConference on Image Processing, pages II 105108,\nSept. 2003.\n[6] W. Krzanowski. Between-groups comparison of\nprincipal components. J. Amer. Stat. Assoc.,\n74(367):703707, 1979.\n[7] C. Li, B. Prabhakaran, and S. Zheng. Similarity\nmeasure for multi-attribute data. In Proceedings of the\n2005 IEEE International Conference on Acoustics,\nSpeach, and Signal Processing (ICASSP), Mar. 2005.\n[8] C. Li, P. Zhai, S.-Q. Zheng, and B. Prabhakaran.\nSegmentation and recognition of multi-attribute\nmotion sequences. In Proceedings of the ACM\nMultimedia Conference 2004, pages 836843, Oct.\n2004.\n[9] R. H. Liang and M. Ouhyoung. A real-time continuous\ngesture recognition system for sign language. In\nProceedings of the 3rd. International Conference on\nFace and Gesture Recognition, pages 558565, 1998.\n[10] K. Pullen and C. Bregler. Motion capture assisted\nanimation: texturing and synthesis. In SIGGRAPH,\npages 501508, 2002.\n[11] G. Qian, F. Guo, T. Ingalls, L. Olson, J. James, and\nT. Rikakis. A gesture-driven multimodal interactive\ndance system. In Proceedings of IEEE International\nConference on Multimedia and Expo, June 2004.\n[12] C. Shahabi and D. Yan. Real-time pattern isolation\nand recognition over immersive sensor data streams.\nIn Proceedings of the 9th International Conference on\nMulti-Media Modeling, pages 93113, Jan 2003.\n[13] A. Singhal and D. E. Seborg. Clustering of\nmultivariate time-series data. 
In Proceedings of the\nAmerican Control Conference, pages 39313936, 2002.\n[14] T. Starner, J. Weaver, and A. Pentland. Real-time\namerican sign language recognition using desk and\nwearable computer based video. IEEE Transactions\non Pattern Analysis and Machine Intelligence,\n20(12):13711375, 1998.\n[15] C. J. Taylor. Reconstruction of articulated objects\nfrom point correspondences in a single image.\nComputer Vision and Image Understanding,\n80(3):349363, 2000.\n[16] K. Yang and C. Shahabi. A PCA-based similarity\nmeasure for multivariate time series. In Proceedings of\nthe Second ACM International Workshop on\nMultimedia Databases, pages 6574, Nov. 2004.\n94\n", "keywords": "motion stream;segmentation;data streams;eigenvector;singular value decomposition;gesture;recognition;eigenvalue;similarity measure;Pattern recognition"} {"name": "190", "title": "The Model, Formalizing Topic Maps", "abstract": "This paper presents a formalization for Topic Maps (TM). We first simplify TMRM, the current ISO standard proposal for a TM reference model and then characterize topic map instances. After defining a minimal merging operator for maps we propose a formal foundation for a TM query language. This path expression language allows us to navigate through given topic maps and to extract information. We also show how such a language can be the basis for a more industrial version of a query language and how it may serve as foundation for a constraint language to define TM-based ontologies.", "fulltext": "Introduction\nTopic Maps (TM (Pepper 1999)), a knowledge representation\ntechnology alternative to RDF (O. Lassila\nand K. Swick 1993), have seen some industrial\nadoption since 2001. Concurrently, the TM community\nis taking various efforts to define a more fundamental\n, more formal model to capture the essence\nof what Topic Maps are (Newcomb, Hunting, Algermissen\n& Durusau 2003, Kipp 2003, Garshol 2004-07-22\n, Bogachev n.d.). While the degree of formality and\nthe extent of TM machinery varies, all models tend\nto abstract away from the sets of concepts defined in\n(Pepper 2000) and use assertions (and topics) as their\nprimitives.\nAfter giving an overview over the current state of\naffairs, we start with an attempt to conceptually simplify\nthe TMRM (Newcomb et al. 2003) model. From\nthat, a mathematically more rigorous formalization\nof TMs follows in section 4. Based on maps and elementary\nmap composition we define a path expression\nlanguage using a postfix notation. While low-level, it\nforms the basis for querying and constraining topic\nmaps as we point out in section 6. The last section\ncloses with future research directions.\nRelated Work\nHistorically, Topic Maps, being a relatively new technology\n, had some deficits in rigor in terms of a defining\nmodel. This may be due to the fact that it was more\n\nParadoxically, the standardization efforts started\nout with the syntax (XTM) with only little, informal\ndescription of the individual constructs. TMDM (formerly\nknown as SAM) was supposed to fill this role\nby precisely defining how XTM instances are to be\ndeserialized into a data structure. This is done by\nmapping the syntax into an infoset model (comparable\nto DOM) whereby UML diagrams help to illustrate\nthe intended structure as well as the constraints\nput on it. 
While such an approach to model definition\nhas a certain appeal for (Java) developers, its given\ncomplexity puts it well outside the reach for a more\nmathematical formalization.\nIn parallel a fraction within the TM community ar-gued\nthat the TM paradigm can be interpreted on a\nmuch more fundamental level if one considers assertions\nas the basic building blocks, abstracting from\nthe TAO-level which mainly sees topics with their\nnames, occurrences and involvements in associations.\nThis group has developed several generations of the\nTMRM (Newcomb et al. 2003), the reference model.\nThe model therein is mainly based on graph theory\nmixed with informal descriptions of constraints which\ncover the resolution of subject identity.\nSeveral attempts to suggest an alternative founda-tional\nmodel (Garshol 2004-07-22, Bogachev n.d.) or\nto formalize TMRM have been made. (Kipp 2003) is\nsuccessfully using a purely set-theoretic approach to\ndefine topic map instances. As all TMRM concepts\nhave been faithfully included, this resulted in a significant\nset of constraints to be used when reasoning\nabout map instances.\nThe contribution of this paper we see threefold:\nFirstly, we believe that TMRM can be reasonably\nsimplified without any loss of generality by the steps\noutlined in section 3. This is under the assumption\nthat all questions of subject identity are handled outside\nthe model. Secondly, the assertion model seems\nto be general enough to host conceptually not only\nTMRM, but also serve as basis for TMDM.\nAs the TM community now moves to ontology definition\nlanguages, retrieval and transformation languages\n, we contend that the path language which is\nbased on the model can serve as semantic fundament\nConceptual Simplification\nTMRM's main building blocks are properties which\nare attached to topics and assertions which connect\ntopics in various ways.\n3.1\nProperties\nFor properties TMRM distinguishes between subject\nidentifying properties and other properties. The for-37\nmer can be stand-alone or a combination of other\nproperties; they control--for a given application-under\nwhich conditions two topics should be regarded\nthe same.\nWith the assumption that all identity inducing\nconstraints are best covered by a proper ontology language\n, we drop this distinction. Also conferred properties\ncan be handled much more flexibly with an ontology\nlanguage, which allows us to let conferred and\nbuiltin properties collapse.\nWe abstract further by regarding properties just\nas a special form of binary assertions where the topic\nplays a role object and the property forms the other\nmember of the assertion.\n3.2\nAssertions\nA TMRM assertion stands for a statement between\nsubjects whereby these subjects play certain roles.\nSuch an assertion consists of the subject it is about\nand a type. Additionally, the players are cast into\ntheir respective roles. To be able to reify the fact\nthat a certain topic plays a certain role in an assertion\n, also this substatement is represented by a another\ntopic (casting).\nWe observe that any type information for an assertion\na can be represented by a second, dedicated\nassertion b where a plays the instance and that type\nplays the role class. 
A similar consideration applies\nto casting topics: again, a second, dedicated assertion\ncan be used where the role, the assertion and the\nplayer are playing appropriate roles.\nScoping--the restriction of an assertion to a certain\ncontext--is clearly a statement about an assertion\n, so we can represent scoping relations via a further\nassertion, one which connects the original assertion\nwith the scope itself, again via some predefined\nroles.\nAt the end of this process we only have to deal\nwith assertions containing role-player pairs. Assertions\nhave an identity which allows us to use them in\nother assertions. Topics only exist as focal points and\nhave no explicit property except an identifier.\n3.3\nReification\nThe term reification has a long tradition (Sowa 2000)\nin the knowledge representation community. It has\nchanged its meaning over the years, but it is usu-ally\nused to describe how humans form concepts and\nthen connect them with the `real world'. To fully\ncapture the term formally, we would have to adopt\na philosophical approach, something which we prefer\nto avoid for obvious reasons. The question, though,\nis whether any formalization of TMs can completely\nignore reification.\nWhenever a statement S is about another assertion\nA then one of two things could be intended by\nthe author: either (a) S is a statement about the relationship\nin the `real world' A is supposed to represent.\nAs an example consider that A is about an employment\nof a person within an organisation and that we\nwant to qualify in such that \"the employment only\nstarted in year 2000\". Alternatively (b), a statement\ncan be about the assertion within the map itself, such\nas \"this assertion was commented on by user X\". In\nthe latter case we treat A as if it were in the `real\nworld' (inverting somehow the notion of reification\nby pulling something abstract from a concept space\nand making it `real').\nOur--pragmatic--approach is that this distinction\ncan (and should) be indicated by the proper form of\nidentifiers. If a topic is supposed to reify a real world\nconcept, then its identifier should be a URI (a locator\nor a name), in case that the `real world thing' has\none. If that thing is a topic in a topic map, then\nthe author must have a way to address the map as\nwell as the topic within it. If a direct reification is\nnot possible, then the topic's identifier will simply\nnot be a URI. Indirect identification can be achieved\nvia subject indicators attached to the topic or more\ngenerally speaking by the context the topic is in.\nFor assertions we assume that they--as a whole-implicitly\nreify the relationship they describe. If another\nassertion makes a reference to an assertion then\nusing the assertion's identifier may thus automatically\ncover case (a) above. Like with topics, case (b) can\nbe handled by using an identifier which addresses the\nmap and then the assertion within it.\nHow eventually maps as `real world' objects are\nto be addressed is again a matter how identifiers are\nformed; but this is outside the scope of our model.\nFormal Maps\nIn this section we first prepare the grounds by defining\nidentifiers, then we build members and assertions and\nthen finally maps. For presentation, the text here has\ntwo layers, one for the formal part and an informal\none, shaded grey. The latter is to justify design issues\nor present examples.\n4.1\nIdentifiers\nThe set of identifiers, I, contains two sets of objects\n: names and literals. 
Literals may be numbers or quoted strings. The set of names, N, is an enumerable collection of atomic objects. Atomic means that objects have no other properties than being distinguishable from each other.

In practical situations names may be strings such as URIs. They also may be more complex like XLink or even HyTime pointers. The model only uses the property that they are distinct from each other.

The reason we chose literals to be numbers or strings is simply one of convenience. First, these two basic data types are the most frequently used, and secondly, both have naturally defined an ordering a ≤ b on which we can later base sorting.

One issue with selecting a particular set of primitive data types is that of how to represent others, like composite types as one would need for, say, spatial coordinates. We see two approaches: One way is to model the content explicitly with assertions themselves. The other option can be used if the structure of the data is not specifically relevant to a particular application, but has to be kept in a map for archiving purposes. In these situations data can be serialized into a string and treated as such.

Further we assume that I also contains a small set of predefined identifiers: id, instance, class, subclass, superclass. By themselves, they are not special. We only single them out to be able to define additional semantics later.

4.2 Members and Assertions
As we are mainly interested in expressing associative relations, we first define a member to be a pair ⟨r, p⟩ ∈ (N × I), with r being the role and p the player of the member. An assertion a is a finite (possibly empty) set of members. The set of all assertions is denoted by A.
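The structures defined so far are small enough to be written down directly; a minimal Python sketch follows (the names Identifier, Member, Assertion and the example assertion are illustrative, not part of the model):

from typing import FrozenSet, Tuple, Union

Identifier = Union[str, int, float]          # names plus the two literal kinds
Member = Tuple[str, Identifier]              # (role, player)
Assertion = FrozenSet[Member]                # a finite set of members

def assertion(*members: Member) -> Assertion:
    return frozenset(members)

# the assertion a00 from the cluster example of Section 4.3
a00 = assertion(("instance", "macy"), ("class", "server"))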
Still, the chosen structure seems to incorporate enough of the TM paradigm, in that any number of concepts can be bound together into an assertion and topics -- as TMRM mandates -- can function as the sole aggregation point for information.

4.3 Maps

We now consider assertions to be atoms from which maps can be constructed. A map is a finite (possibly empty) set of assertions. The set of all maps is denoted by M.

To build bigger maps, we define the elementary composition, denoted by ⊕, of two maps m, m' ∈ M as the set union m ⊕ m' = m ∪ m'. We say that m' is a submap of m if m' ⊆ m.

Note that we have no special merging operation; only exactly identical assertions will be identified. In our setting special-purpose merging, such as TNC (topic name constraint), is split into two phases: first maps are combined using elementary composition and then a second operator is applied to the composite map. That operator will perform a -- more or less sophisticated -- transformation where all the appropriate merging is done.

As an example we consider a network which hosts several servers, organized into clusters (Table 1). At a particular point in time, servers may be "up" or "down". Accordingly, macy, lacy and stacy are the servers, the first two being in clusterA, the other in clusterB. While lacy is down, clusterA is still functional; not so clusterB, as its only machine is down.

4.4 Primitive Navigation Operators

To navigate through maps and to extract information out of them, we first need to define basic navigation operations within a given map.

In our model we can navigate along roles. One way is to follow a role outwards in a given assertion a ∈ m. Given additionally a name r, we define the role-out operator a → r = {p | ⟨r, p⟩ ∈ a}. It returns all players of a given role in an assertion.

Looking at a00 in the above example, the expression a00 → class returns the set containing server only.

Another option to navigate is to follow a role inwards, seen from an assertion's point of view. Given a map m, a name r and an identifier p, we define the role-in operator p ←m r = {a ∈ m | ⟨r, p⟩ ∈ a}. We omit the reference to m if it is clear from the context.

To find all assertions in which clusterA plays the role whole, we can write clusterA ← whole, which evaluates to {a02, a11}.

The role-in operator does not respect the type of assertions. It simply finds all assertions where a particular player plays the given role. However, for practical reasons a refined version of the operator will be defined in section 4.6.

4.5 Subclassing and Instances

To describe (and query) topic maps, we need to express relationships between concepts. While the variety of such relations itself is huge, two special relationships stand out as being fundamental: The subclass-superclass relationship is used between classes to form taxonomies (type systems). The instance-class relationship is established between an object and the class (or set) the object can be classified into.

Given a map m and names b, c ∈ N, we define the predicate subclasses_m(b, c) to be true if there exists an a ∈ m such that both conditions, a → subclass = {b} and a → superclass = {c}, hold.
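A sketch of these operators, reusing the frozenset encoding from the previous sketch and a handful of assertions from the cluster example of Table 1; the arrow notation of the text corresponds to role_out and role_in below.

```python
# Elementary composition and the role-out / role-in operators of
# Sections 4.3-4.4, applied to a subset of the cluster example (Table 1).
# Assertions are modelled as frozensets of (role, player) pairs as before.

a00 = frozenset({("instance", "macy"), ("class", "server")})
a02 = frozenset({("part", "macy"), ("whole", "clusterA")})
a11 = frozenset({("part", "lacy"), ("whole", "clusterA")})
a30 = frozenset({("subclass", "server"), ("superclass", "machine")})

m = {a00, a02} | {a11, a30}          # elementary composition is plain set union

def role_out(a, r):
    """a -> r : all players of role r in assertion a."""
    return {p for (role, p) in a if role == r}

def role_in(p, m, r):
    """p <-m r : all assertions in m in which p plays role r."""
    return {a for a in m if (r, p) in a}

def subclasses(m, b, c):
    """Direct subclass relation, expressed via the predefined roles."""
    return any(role_out(a, "subclass") == {b} and role_out(a, "superclass") == {c}
               for a in m)

print(role_out(a00, "class"))                          # {'server'}
print(role_in("clusterA", m, "whole") == {a02, a11})   # True
print(subclasses(m, "server", "machine"))              # True
```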
As the usual interpretation of subclassing is that it is transitive, we build the transitive closure subclasses_m+ and the transitive, reflexive closure subclasses_m*.

Another relationship is instance of, abbreviated as is-a, which holds if there exists an a ∈ m such that a → instance = {b} and a → class = {c}.

Mostly we are interested in an instance-of relationship which includes the transitive version of subclassing above: is-a_m*(b, c) holds if there exists an a ∈ m such that for some name c' we have a → instance = {b}, a → class = {c'} and subclasses_m*(c', c).

According to our cluster map the relations subclasses_m(server, machine), is-a_m(macy, server) and is-a_m*(macy, machine) are all true.

The difference between is-a_m(b, c) and is-a_m*(b, c) is that the former only reiterates the information which is already explicit in the map. When querying a map, though, queries should be built more robustly: if we ask for "all machines" in a map, then most likely one is also interested in instances of all (direct and indirect) subclasses of "machine".

Table 1: An example map about a computer network

a00 = { ⟨instance, macy⟩,     ⟨class, server⟩ }
a01 = { ⟨instance, a00⟩,      ⟨class, isInstance⟩ }
a02 = { ⟨part, macy⟩,         ⟨whole, clusterA⟩ }
a03 = { ⟨instance, a02⟩,      ⟨class, isPartOf⟩ }
a04 = { ⟨object, macy⟩,       ⟨status, "up"⟩ }
a05 = { ⟨instance, a04⟩,      ⟨class, hasStatus⟩ }
...
a10 = { ⟨instance, lacy⟩,     ⟨class, server⟩ }
a11 = { ⟨part, lacy⟩,         ⟨whole, clusterA⟩ }
a12 = { ⟨object, lacy⟩,       ⟨status, "down"⟩ }
...
a20 = { ⟨instance, stacy⟩,    ⟨class, server⟩ }
a21 = { ⟨part, stacy⟩,        ⟨whole, clusterB⟩ }
a22 = { ⟨object, stacy⟩,      ⟨status, "down"⟩ }
a30 = { ⟨subclass, server⟩,   ⟨superclass, machine⟩ }
a40 = { ⟨instance, clusterA⟩, ⟨class, cluster⟩ }
a41 = { ⟨instance, clusterB⟩, ⟨class, cluster⟩ }

4.6 Typed Navigation

We can use the relation is-a_m*(b, c) to specialize the role-in navigation. Given a map m, names r and t and an identifier p, the typed role-in operator additionally honors an assertion type:

    p ←m r [t] = { a ∈ p ←m r | is-a_m*(id(a), t) }    (1)

The obvious difference to the original role-in navigation is that we now only consider assertions of the given type to be part of the resulting set.

The expression clusterA ←m whole [hasStatus] is supposed to find all assertions of type hasStatus in which clusterA is the whole. Since there is no such assertion, the result is empty.

A further way to generalize the navigation is to also allow all subclasses of a role:

    a →m r* = { p | ∃ ⟨r', p⟩ ∈ a : subclasses_m*(r', r) }    (2)

    p ←m r* = { a ∈ m | ∃ ⟨r', p⟩ ∈ a : subclasses_m*(r', r) }    (3)

Map Path Language

The topic map path language can be used to extract information out of a given map. The language will be defined via postfix operators which are applied to (sets of) assertions (or identifiers).

Before we can formally define the individual postfixes and chains of postfixes (path expressions), we have to characterize the results of applying postfixes to a set of assertions, such as a map. This is done with a simple algebra based on tuples.

5.1 Tuple Algebra

Our final result of applying a path expression will be a bag of tuples. The advantage of tuples is that they can hold composite results. Every tuple represents one possible result, and all of them are organized into a bag.
Bags are like sets except that a particular element may appear any number of times. This is convenient if we later want to sort or count the tuples. Otherwise all the usual set operations can be used on bags.

Assertion tuples are elements from the cartesian product Aⁿ, with A being the set of assertions. Similarly, identifier tuples are elements from Iⁿ. We call n the dimension of the tuple.

When we organize tuples t1, . . . , tn into a bag, then we denote this as [t1, . . . , tn].

A map m = {a1, . . . , an} can be represented as the tuple bag [⟨a1⟩, . . . , ⟨an⟩]. Conversely, we can also interpret a tuple bag as a map when the tuples it contains are single assertions.

If a bag contains other bags, then the structure can be flattened out:

    [b1, b2, . . . , bn] = [ bij | bij ∈ bi (1 ≤ i ≤ n) ]    (4)

During the application of path expressions, tuples of bags may also be created. These too can be reduced, by building tuples of all combinations of bag elements:

    ⟨b1, b2, . . . , bn⟩ = b1 × b2 × · · · × bn    (5)

Finally, if a tuple only contains a single component, then it is equivalent to that component:

    ⟨b⟩ = b    (6)

As we have covered all possible constellations which can occur when evaluating path expressions, we can always reduce every result to a bag of tuples. We call this set B_I.

5.2 Postfixes and Path Expressions

Individual postfixes (as detailed below) can be combined to form chains. The set of path expressions P_M is defined as the smallest set satisfying the following conditions:

1. The projection postfix πi is in P_M for any non-negative integer i.
2. Every identifier from I is in P_M.
3. The role-out and role-in postfixes → r and ← r for a name r are in P_M.
4. The positive predicate postfix [ p = q ] and the negative predicate postfix [ p != q ] are both in P_M for two path expressions p and q. As special cases we also include [ p ] and [ !p ].
5. For two path expressions p and q also the concatenation p ∘ q is in P_M. If, from the context, it is clear that two path expressions are to be concatenated, we omit the infix.
6. For two path expressions p and q the alternation p ∥ q is in P_M.

The application of a path expression p to a map m is denoted by m ∘ p.

For this process, first we reinterpret the map as a tuple bag. Then each of the postfixes in p is applied to it. Each such step results in a new bag which will be flattened according to the tuple algebra above. The final bag will be the overall result.

5.2.1 Projection and Identifiers

For both assertion and identifier tuples, we use the projection postfix to extract a particular component j:

    ⟨u1, . . . , un⟩ ∘ πj = [ uj ]    (7)

Projection here plays a similar role as in query languages like SQL, except that we use an index for selection instead of names.

We drop the index 1 in π1 if it is applied to a tuple with only a single component, where then obviously it holds that ⟨u⟩ = u.
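The reductions (4)-(7) can be mirrored directly in a few lines; in this sketch bags are Python lists and tuples are Python tuples, which is an illustrative encoding rather than part of the model.

```python
# A sketch of the tuple-algebra reductions (4)-(7).

from itertools import product

def flatten_bag(bag):
    """(4): a bag of bags becomes one bag of all inner elements."""
    return [x for inner in bag
              for x in (inner if isinstance(inner, list) else [inner])]

def distribute(tup):
    """(5): a tuple of bags becomes a bag of tuples (all combinations)."""
    bags = [x if isinstance(x, list) else [x] for x in tup]
    return list(product(*bags))

def simplify(tup):
    """(6): a 1-tuple is identified with its single component."""
    return tup[0] if isinstance(tup, tuple) and len(tup) == 1 else tup

def project(tup, j):
    """(7): the projection postfix, 1-based as in the text."""
    return [tup[j - 1]]

print(flatten_bag([[1, 2], [2, 3]]))     # [1, 2, 2, 3]
print(distribute(([1, 2], ["a"])))       # [(1, 'a'), (2, 'a')]
print(simplify(("macy",)))               # 'macy'
print(project(("macy", "up"), 2))        # ['up']
```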
Such a projection also serves as the empty postfix.

In case the path expression is simply an identifier i ∈ I, then for any u the result is always this identifier:

    u ∘ i = [ i ]    (8)

5.2.2 Concatenation and Alternation

We define the concatenation ∘ of path expressions p and q (given any u) as

    u ∘ (p ∘ q) = (u ∘ p) ∘ q    (9)

The syntactic structure of path expressions ensures that u is always a structure for which such an evaluation is defined.

The alternation of two path expressions p and q is defined as the union of the result tuple bags of the individual evaluations:

    u ∘ (p ∥ q) = u ∘ p ∪ u ∘ q    (10)

5.2.3 Navigation Postfix

Next we define how role-out and role-in navigation postfixes can be applied to an assertion tuple. We simply apply the navigation to every assertion in the tuple:

    ⟨a1, . . . , an⟩ ∘ → r = ⟨a1 → r*, . . . , an → r*⟩    (11)

    ⟨p1, . . . , pn⟩ ∘ ← r = ⟨p1 ← r*, . . . , pn ← r*⟩    (12)

Note that we have used the typed navigation from section 4.6. While not absolutely necessary, it helps to keep path expressions more concise. Note also that the individual elements of the resulting tuples are bags. Again, the transformation rules of the tuple algebra have to be used to reduce this into a bag of tuples.

5.2.4 Filtering Postfixes

From tuple bags we can filter out specific tuples using predicates. Given a tuple bag B = [t1, . . . , tk] and two path expressions p and q, applying the positive predicate postfix [ p = q ] to B is defined as

    B ∘ [ p = q ] = [ t ∈ B | t ∘ p ∩ t ∘ q ≠ ∅ ]    (13)

If p and q are identical, then we can abbreviate [ p = p ] with [ p ].

The result of the positive predicate postfix is the sub-bag of B containing those elements for which the evaluations of p and q give at least one common result.

Note that this implements an exists semantics, as B ∘ [ p = p ] is reducible to [ t ∈ B | t ∘ p ≠ ∅ ]. A tuple of B will only be part of the result tuple bag if there exists at least one result when p is applied to that tuple.

By introducing negation in predicate postfixes, we can also implement forall semantics. Given a tuple bag B and two path expressions p and q, we define the negative predicate postfix as

    B ∘ [ p != q ] = [ t ∈ B | t ∘ p ∩ t ∘ q = ∅ ]    (14)

If p and q are identical, then we can abbreviate [ p != p ] with [ !p ]. In this case the result tuple bag becomes [ t ∈ B | t ∘ p = ∅ ].

A particular tuple will thus only be part of the result tuple bag if p applied to it does not render a single value, i.e. all evaluations return no result.

Implicit in the formalism are the logical conjunction and disjunction of predicate postfixes. A logical and is provided by concatenating two predicate postfixes ([ .. ] ∘ [ .. ]), as the result of the first postfix will be further tested for the second predicate. The logical or between predicate postfixes is implicitly given by alternating them ([ .. ] ∥ [ .. ]).

5.3 Evaluation Example

Let us assume that we are looking for the status of the servers in clusterA:

    [ → class = isPartOf ] ∘ → instance ∘ [ → whole = clusterA ] ∘ → part ∘ ← object ∘ ⟨ → object, → status ⟩

The first predicate selects all those assertions in the map which have a class role where one of the players happens to be isPartOf.
If we are then looking\nat these assertions and the player(s) of the role\ninstance\n, then we have effectively selected the assertions\nof type isPartOf from the map.\nWe consider each of these assertions (in our case these\nare a02, a11 and a21) and filter out those of them\nwhich have a whole role where one player is clusterA.\nWhen we continue with a02 and a11, and then follow\nthe part, this leads to a bag containing only the\nnames macy and lacy.\n41\nIn the next step we investigate where these names\nare players of the role object, so we find a bag with\nassertions a04 and a12. Here our path splits into two\ncomponents: the first one navigates to the name of\nthat object, the other to its status. The result is\nthen [ macy, \"up\" , lacy, \"down\" ].\nIn second example we look at all clusters which are\ndown, i.e. where all machines in that cluster are\ndown. As result we get [ clusterB ]: [\nclass\n=\ncluster\n]\ninstance\n[\nwhole\npart\nobject\nstatus\n! = \"up\"]\nQuerying, Filtering and Constraining of Maps\nMaps and path expressions, as presented here, can\nserve as a basis for more high-level concepts, as they\nare needed for ontology and knowledge engineering\n(Fensel, Hendler, Lieberman & Wahlster 2003). The\nuse of path expressions to extract information out of\nmaps leads to the following observations:\nObviously, P\nM\nis a (primitive) language to query\ntopic maps. Note, though, that P\nM\nlacks all facilities\nto newly create content, such as XML or TM\ncontent as described in (Garshol & Barta 2003). A\nmore industrial topic map query language (TMQL)\nwill have to offer content generation language constructs\n. While it will also provide more concise syntax\ndue to high-level concepts, P\nM\ncan (and probably\nwill) act as a semantic foundation.\nMore formally, we can identify a subset of P\nM\n,\nthe filters F\nM\n, which contains all those queries which\nreturn maps:\nF\nM\n= {q P\nM\n| m M, mq = [ a\n1\n. . . a\nn\n| a\ni\nm]}\n(15)\nClearly, the filtered maps are always submaps of\nthe queried map: m f m, for f F\nM\n.\nInterestingly, P\nM\ncan also be regarded as primitive\nconstraint language: only when the application\nof a path expression c to a map m renders any result,\nthen the map conforms to the expectations we have\nset up in c.\nIf, for instance, we had set up a query which asks\nfor all weapons of mass destruction in our running\nexample, then the result would have been the empty\nbag. Only if the query follows the structure and the\nvocabulary of the map, then there will be a non-empty\nresult. Equivalently, this is also true the other way\nround.\nConsequently, we can define a satisfaction relation\n|= P\nM\nM between a path expression c and a map\nm, such that\nc |= m\n\nm c =\n(16)\nBased on this, logical connectives between constraints\ncan be defined.\nFuture Work\nWhile we concentrate in this work on formalizing the\nstructure of topic maps (at least our understanding\nthereof) and of an expression language to extract information\nfrom them, we have not yet studied any\nproperties of P\nM\n. Specifically, we are interested how\npath expressions relate to formulas in description logics\n(Baader, Calvanese, McGuinness, Nardi & Patel-Schneider\n2003, Description Logics Home Page n.d.),\nespecially in the light that both can be used to model\nan ontology. 
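As a concrete rendering of the predicate semantics of Section 5.2.4 and the satisfaction relation (16), the following sketch reduces path expressions to plain callables that map a tuple to a bag of results; this is a deliberate simplification of the full postfix grammar, with illustrative data.

```python
# Sketch of the predicate postfixes (13)-(14) and the satisfaction relation (16).
# "Path expressions" are simplified to callables returning bags (lists).

def positive_predicate(bag, p, q):
    """B o [p = q]: keep tuples whose p- and q-results share at least one value."""
    return [t for t in bag if set(p(t)) & set(q(t))]

def negative_predicate(bag, p, q):
    """B o [p != q]: keep tuples whose p- and q-results have no value in common."""
    return [t for t in bag if not (set(p(t)) & set(q(t)))]

def satisfies(constraint, map_as_bag):
    """(16): c |= m  iff  evaluating c on m yields a non-empty bag."""
    return bool(constraint(map_as_bag))

# Example: tuples are (machine, status) pairs; `status` extracts the status,
# `up` is the constant path expression "up".
servers = [("macy", "up"), ("lacy", "down"), ("stacy", "down")]
status  = lambda t: [t[1]]
up      = lambda t: ["up"]

print(positive_predicate(servers, status, up))   # [('macy', 'up')]
print(negative_predicate(servers, status, up))   # [('lacy', 'down'), ('stacy', 'down')]
print(satisfies(lambda b: positive_predicate(b, status, up), servers))  # True
```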
A related question is how a path language\ncan be used to express identity (apart from the\nexplicit identity given by the topic's identifier).\nFinally, in a larger picture, we are interested\nin connecting maps, constraints, queries and even\nmaybe updates for topic maps in an algebra. When\nconnecting maps, merging as defined by the XTM\nstandard is an issue.\nReferences\nBaader, F., Calvanese, D., McGuinness, D., Nardi, D.\n& Patel-Schneider, P., eds (2003), The Description\nLogic Handbook.\nURL:\nhttp:// books.cambridge.org/ 0521781760.\nhtm\nBogachev, D. (n.d.), `TMAssert'.\nURL:\nhttp:// homepage.mac.com/ dmitryv/\nTopicMaps/ TMRM/ TMAssert.pdf\nDescription Logics Home Page (n.d.).\nURL:\nhttp:// dl.kr.org/\nFensel, D., Hendler, J. A., Lieberman, H. & Wahlster,\nW., eds (2003), Spinning the Semantic Web, The\nMIT Press.\nURL:\nhttp:// mitpress.mit.edu/ catalog/ item/\ndefault.asp? tid=9182\nGarshol, L. M. (2004-07-22), `A proposed founda-tional\nmodel for Topic Maps'.\nURL:\nhttp:// www.jtc1sc34.org/ repository/\n0529.htm\nGarshol, L. M. & Barta, R. (2003), `JTC1/SC34:\nTMQL requirements'.\nURL:\nhttp:// www.isotopicmaps.org/ tmql/\ntmqlreqs.html\nKipp, N. A. (2003), `A mathematical formalism\nfor the Topic Maps reference model'.\nhttp://www.isotopicmaps.org/tmrm/0441.htm.\nURL:\nhttp:// www.isotopicmaps.org/ tmrm/\n0441.htm\nNewcomb, S. R., Hunting, S., Algermissen, J. & Durusau\n, P. (2003), `ISO/IEC JTC1/SC34, Topic\nMaps - reference model, editor's draft, revision\n3.10'.\nURL:\nhttp:// www.isotopicmaps.org/ tmrm/\nO. Lassila and K. Swick (1993), Resource Description\nFramework (RDF) model and syntax specification\n, Technical report, W3C, Camo AS.\nURL:\nhttp:// www.w3.org/ TR/ 1999/\nREC-rdf-syntax-19990222.html\nPepper, S. (1999), `Navigating haystacks, discovering\nneedles', Markup Languages: Theory and Practice\n, Vol. 1 No. 4 .\nPepper, S. (2000), `The TAO of Topic Maps'.\nURL:\nhttp:// www.gca.org/ papers/\nxmleurope2000/ papers/ s11-01.html\nSowa, J. (2000), Knowledge Representation: Logical,\nPhilosophical and Computational Foundations,\nBrooks-Cole, Pacific Grove.\n42", "keywords": "Semantic Web;Topic Maps;Knowledge Engineering"} {"name": "191", "title": "The Potential of the Cell Processor for Scientific Computing", "abstract": "The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists . As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations , and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on the Cell full system simulator. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. 
Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture . Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.", "fulltext": "INTRODUCTION\nOver the last decade the HPC community has moved towards\nmachines composed of commodity microprocessors as\na strategy for tracking the tremendous growth in processor\nperformance in that market. As frequency scaling slows,\nand the power requirements of these mainstream processors\ncontinues to grow, the HPC community is looking for alternative\narchitectures that provide high performance on scientific\napplications, yet have a healthy market outside the\nscientific community. In this work, we examine the potential\nof the forthcoming STI Cell processor as a building block for\nfuture high-end computing systems, by investigating performance\nacross several key scientific computing kernels: dense\nmatrix multiply, sparse matrix vector multiply, stencil computations\non regular grids, as well as 1D and 2D FFTs.\nCell combines the considerable floating point resources required\nfor demanding numerical algorithms with a power-efficient\nsoftware-controlled memory hierarchy. Despite its\nradical departure from previous mainstream/commodity processor\ndesigns, Cell is particularly compelling because it\nwill be produced at such high volumes that it will be cost-competitive\nwith commodity CPUs. The current implementation\nof Cell is most often noted for its extremely high performance\nsingle-precision (SP) arithmetic, which is widely\nconsidered insufficient for the majority of scientific applications\n. Although Cell's peak double precision performance\nis still impressive relative to its commodity peers (~14.6\nGflop/s@3.2GHz), we explore how modest hardware changes\ncould significantly improve performance for computationally\nintensive DP applications.\nThis paper presents several novel results.\nWe present\nquantitative performance data for scientific kernels that compares\nCell performance to leading superscalar (AMD Opteron),\nVLIW (Intel Itanium2), and vector (Cray X1E) architectures\n. We believe this study examines the broadest array\nof scientific algorithms to date on Cell. We developed both\nanalytical models and lightweight simulators to predict kernel\nperformance that we demonstrated to be accurate when\ncompared against published Cell hardware result, as well as\nour own implementations on the Cell full system simulator.\nOur work also explores the complexity of mapping several\nimportant scientific algorithms onto the Cell's unique architecture\nin order to leverage the large number of available\nfunctional units and the software-controlled memory. Additionally\n, we propose modest microarchitectural modifications\nthat could increase the efficiency of double-precision\narithmetic calculations, and demonstrate significant performance\nimprovements compared with the current Cell implementation\n.\nOverall results demonstrate the tremendous potential of\nthe Cell architecture for scientific computations in terms of\nboth raw performance and power efficiency. 
We also conclude that Cell's heterogeneous multi-core implementation is inherently better suited to the HPC environment than homogeneous commodity multicore processors.

RELATED WORK

One of the key limiting factors for computational performance is off-chip memory bandwidth. Since increasing the off-chip bandwidth is prohibitively expensive, many architects are considering ways of using available bandwidth more efficiently. Examples include hardware multithreading or more efficient alternatives to conventional cache-based architectures such as software-controlled memories. Software-controlled memories can potentially improve memory subsystem performance by supporting finely controlled prefetching and more efficient cache-utilization policies that take advantage of application-level information -- but do so with far less architectural complexity than conventional cache architectures. While placing data movement under explicit software control increases the complexity of the programming model, prior research has demonstrated that this approach can be more effective for hiding memory latencies (including cache misses and TLB misses) -- requiring far smaller cache sizes to match the performance of conventional cache implementations [17, 19]. The performance of software-controlled memory is more predictable, thereby making it popular for real-time embedded applications where guaranteed response rates are essential.

Over the last five years, a plethora of alternatives to conventional cache-based architectures have been suggested, including scratchpad memories [9, 16, 30], paged on-chip memories [12, 17], and explicit three-level memory architectures [18, 19]. Until recently, few of these architectural concepts made it into mainstream processor designs, but the increasingly stringent power/performance requirements for embedded systems have resulted in a number of recent implementations that have adopted these concepts. Chips like the Sony Emotion Engine [20, 23, 29] and Intel's MXP5800 both achieved high performance at low power by adopting three levels (registers, local memory, external DRAM) of software-managed memory. More recently, the STI Cell processor has adopted a similar approach where data movement between these three address spaces is explicitly controlled by the application. For predictable data access patterns the local store approach is highly advantageous, as it can be very efficiently utilized through explicit software-controlled scheduling. Improved bandwidth utilization through deep pipelining of memory requests requires less power, and has a faster access time, than a large cache due in part to its lower complexity. If, however, the data access pattern lacks predictability, then the advantages of software-managed memory are lost. This more aggressive approach to memory architecture was adopted to meet the demanding cost/performance and real-time responsiveness requirements of Sony's upcoming video game console.

Figure 1: Overview of the Cell processor -- eight SPEs (each with a 256 KB local store) and the PowerPC core (512 KB L2), together with the memory controller and I/O interfaces, connected by the four-ring EIB (8 bytes/core cycle) to 25.6 GB/s of external memory.
However, to date, an in-depth study\nto evaluate the potential of utilizing the Cell architecture in\nthe context of scientific computations does not appear in the\nliterature.\nCELL BACKGROUND\nCell [8,27] was designed by a partnership of Sony, Toshiba,\nand IBM (STI) to be the heart of Sony's forthcoming PlayStation3\ngaming system. Cell takes a radical departure from\nconventional multiprocessor or multi-core architectures. Instead\nof using identical cooperating commodity processors,\nit uses a conventional high performance PowerPC core that\ncontrols eight simple SIMD cores, called synergistic processing\nelements (SPEs), where each SPE contains a synergistic\nprocessing unit (SPU), a local memory, and a memory flow\ncontroller. An overview of Cell is provided in Figure 1.\nAccess to external memory is handled via a 25.6GB/s\nXDR memory controller. The cache coherent PowerPC core,\nthe eight SPEs, the DRAM controller, and I/O controllers\nare all connected via 4 data rings, collectively known as the\nEIB. The ring interface within each unit allows 8 bytes/cycle\nto be read or written. Simultaneous transfers on the same\nring are possible. All transfers are orchestrated by the PowerPC\ncore.\nEach SPE includes four single precision (SP) 6-cycle pipelined\nFMA datapaths and one double precision (DP) half-pumped\n(the double precision operations within a SIMD\noperation must be serialized) 9-cycle pipelined FMA datapath\nwith 4 cycles of overhead for data movement [22]. Cell\nhas a 7 cycle in-order execution pipeline and forwarding network\n[8]. IBM appears to have solved the problem of inserting\na 13 (9+4) cycle DP pipeline into a 7 stage in-order machine\nby choosing the minimum effort/performance/power\nsolution of simply stalling for 6 cycles after issuing a DP\n10\ninstruction. The SPE's DP throughput [14] of one DP instruction\nevery 7 (1 issue + 6 stall) cycles coincides perfectly\nwith this reasoning.\nThus for computationally intense algorithms like dense\nmatrix multiply (GEMM), we expect SP implementations to\nrun near peak whereas DP versions would drop to approximately\none fourteenth the peak SP flop rate [10]. Similarly,\nfor bandwidth intensive applications such as sparse matrix\nvector multiplication (SpMV) we expect SP versions to be\nbetween 1.5x and 4x as fast as DP, depending on density\nand uniformity.\nUnlike a typical coprocessor, each SPE has its own local\nmemory from which it fetches code and reads and writes\ndata. All loads and stores issued from the SPE can only\naccess the SPE's local memory. The Cell processor depends\non explicit DMA operations to move data from main memory\nto the local store of the SPE. The limited scope of loads\nand stores allows one to view the SPE as having a two-level\nregister file. The first level is a 128 x 128b single cycle register\nfile, where the second is a 16K x 128b six cycle register\nfile. Data must be moved into the first level before it can be\noperated on by instructions. Dedicated DMA engines allow\nmultiple concurrent DMA loads to run concurrently with the\nSIMD execution unit, thereby mitigating memory latency\noverhead via double-buffered DMA loads and stores. The\nselectable length DMA operations supported by the SPE\nare much like a traditional unit stride vector load. 
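The peak-rate arithmetic implied by these issue rules is summarized in the short sketch below; the constants are the figures stated above, and the breakdown into lanes, flops per operation, and issue interval is our own framing.

```python
# Peak-rate arithmetic implied by the SP/DP issue rules described above.

CLOCK = 3.2e9     # modeled clock rate (Hz)
SPES  = 8

def peak_gflops(lanes, flops_per_op, issue_interval_cycles):
    """lanes * flops/op * issue rate, summed over all SPEs."""
    return SPES * lanes * flops_per_op * (CLOCK / issue_interval_cycles) / 1e9

print(peak_gflops(4, 2, 1))   # SP: 4-wide FMA every cycle        -> 204.8 Gflop/s
print(peak_gflops(2, 2, 7))   # DP: 2-wide FMA every 7 cycles     -> ~14.6 Gflop/s
print(peak_gflops(2, 2, 2))   # DP issued every other cycle (the
                              # Cell+ variant of Section 5.1)     -> 51.2 Gflop/s
```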
We exploit\nthese similarities to existing HPC platforms to select\nprogramming models that are both familiar and tractable\nfor scientific application developers.\nPROGRAMMING MODELS\nThe Cell architecture poses several challenges to programming\n: an explicitly controlled memory hierarchy, explicit\nparallelism between the 8 SPEs and the PowerPC, and a\nquadword based ISA. Our goal is to select the programming\nparadigm that offers the simplest possible expression of an\nalgorithm while being capable of fully utilizing the hardware\nresources of the Cell processor.\nThe memory hierarchy is programmed using explicit DMA\nintrinsics with the option of user programmed double buffering\nto overlap data movement with computation on the\nSPEs. Moving from a hardware managed memory hierarchy\nto one controlled explicitly by the application significantly\ncomplicates the programming model, and pushes it towards\na one sided communication model. Unlike MPI, the intrinsics\nare very low level and map to half a dozen instructions.\nThis allows for very low software overhead and good performance\n, but requires the user to be capable and either ensure\ncorrect usage or provide an interface or abstraction.\nFor programming the parallelism on Cell, we considered\nthree possible programming models: task parallelism with\nindependent tasks scheduled on each SPE; pipelined parallelism\nwhere large data blocks are passed from one SPE to\nthe next; and data parallelism, where the processors perform\nidentical computations on distinct data. For simplicity, we\ndo not consider parallelism between the PowerPC and the\nSPEs, so we can treat this as a homogeneous parallel machine\n. Data pipelining may be suitable for certain classes\nof algorithms and will be the focus of future investigation.\nWe adopt the data-parallel programming model, which is a\ngood match to many scientific applications and offers the\nsimplest and most direct method of decomposing the problem\n. Data-parallel programming is quite similar to loop-level\nparallelization afforded by OpenMP or the vector-like\nmultistreaming on the Cray X1E and the Hitachi SR-8000.\nThe focus of this paper is Cell architecture and performance\n; we do not explore the efficacy of the IBM SPE XLC\ncompiler. Thus, we heavily rely on SIMD intrinsics and do\nnot investigate if appropriate SIMD instructions are gener-ated\nby the compiler. Although the produced Cell code may\nappear verbose -- due to the use of intrinsics instead of C\noperators -- it delivers readily understandable performance.\nOur first Cell implementation, SpMV, required about a\nmonth of learning the programming model, the architecture,\nthe compiler, the tools, and deciding on a final algorithmic\nstrategy. The final implementation required about 600 lines\nof code. The next code development examined two flavors\nof double precision stencil-based algorithms. These implementations\nrequired one week of work and are each about\n250 lines, with an additional 200 lines of common code. The\nprogramming overhead of these kernels on Cell required significantly\nmore effort than the scalar version's 15 lines, due\nmainly to loop unrolling and intrinsics use. 
Although the\nstencils are a simpler kernel, the SpMV learning experience\naccelerated the coding process.\nHaving become experienced Cell programmers, the single\nprecision time skewed stencil -- although virtually a complete\nrewrite from the double precision single step version\n-- required only a single day to code, debug, benchmark,\nand attain spectacular results of over 65 Gflop/s. This implementation\nconsists of about 450 lines, due once again to\nunrolling and the heavy use of intrinsics.\nSIMULATION METHODOLOGY\nThe simplicity of the SPEs and the deterministic behavior\nof the explicitly controlled memory hierarchy make Cell\namenable to performance prediction using a simple analytic\nmodel. Using this approach, one can easily explore multiple\nvariations of an algorithm without the effort of programming\neach variation and running on either a fully cycle-accurate\nsimulator or hardware. With the newly released cycle accurate\nsimulator (Mambo), we have succesfully validated our\nperformance model for SGEMM, SpMV, and Stencil Computations\n, as will be shown in the subsequent sections.\nOur modeling approach is broken into two steps commensurate\nwith the two phase double buffered computational\nmodel. The kernels were first segmented into code-snippets\nthat operate only on data present in the local store of the\nSPE. We sketched the code snippets in SPE assembly and\nperformed static timing analysis. The latency of each operation\n, issue width limitations, and the operand alignment requirements\nof the SIMD/quadword SPE execution pipeline\ndetermined the number of cycles required. The in-order nature\nand fixed local store memory latency of the SPEs makes\nthe analysis deterministic and thus more tractable than on\ncache-based, out-of-order microprocessors.\nIn the second step, we construct a model that tabulates\nthe time required for DMA loads and stores of the operands\nrequired by the code snippets. The model accurately reflects\nthe constraints imposed by resource conflicts in the\nmemory subsystem. For instance, concurrent DMAs issued\nby multiple SPEs must be serialized, as there is only a single\nDRAM controller. The model also presumes a conservative\nfixed DMA initiation latency of 1000 cycles.\nThe model computes the total time by adding all the per-11\nCell\nX1E\nAMD64\nIA64\nSPE\nChip\n(MSP)\nSIMD\nMulti-Multi\nSuper\nVLIW\nArchitecture\ncore\nchip\nscalar\nSIMD Vector\nClock (GHz)\n3.2\n3.2\n1.13\n2.2\n1.4\nDRAM (GB/s)\n25.6\n25.6\n34\n6.4\n6.4\nSP Gflop/s\n25.6\n204.8\n36\n8.8\n5.6\nDP Gflop/s\n1.83\n14.63\n18\n4.4\n5.6\nLocal Store\n256KB\n2MB\n\n\n-L2\nCache\n-512KB\n2MB\n1MB\n256KB\nL3 Cache\n\n\n\n-3MB\nPower (W)\n3\n~40\n120\n89\n130\nYear\n-2006\n2005\n2004\n2003\nTable 1: Architectural overview of STI Cell, Cray\nX1E MSP, AMD Opteron, and Intel Itanium2. Es-timated\ntotal Cell power and peak Gflop/s are\nbased on the active SPEs/idle PowerPC programming\nmodel.\niteration (outer loop) times, which are themselves computed\nby taking the maximum of the snippet and DMA transfer\ntimes. In some cases, the per-iteration times are constant\nacross iterations, but in others it varies between iterations\nand is input-dependent. For example, in a sparse matrix, the\nmemory access pattern depends on the nonzero structure of\nthe matrix, which varies across iterations. 
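In outline, this model amounts to a few lines of arithmetic. The sketch below is an illustrative rendering of it (parameter names and the example iteration profile are ours), not the actual estimator used for the results in this paper.

```python
# Sketch of the two-phase analytic model described above: per-iteration time is
# the maximum of the statically counted compute cycles and the DMA transfer
# time, summed over all outer-loop iterations.

CLOCK_HZ   = 3.2e9        # modeled clock rate
DRAM_BYTES = 25.6e9       # aggregate DRAM bandwidth (bytes/s)
DMA_LAT    = 1000         # conservative DMA initiation latency (cycles)

def dma_cycles(bytes_moved):
    """Cycles to move a double-buffered block over the shared DRAM interface."""
    return DMA_LAT + bytes_moved * CLOCK_HZ / DRAM_BYTES

def model_time(iterations):
    """iterations: list of (compute_cycles, bytes_moved) per outer-loop step."""
    total = sum(max(c, dma_cycles(b)) for (c, b) in iterations)
    return total / CLOCK_HZ

# e.g. 64 identical iterations, each computing for 40K cycles on a 64KB block:
print("%.2e s" % model_time([(40_000, 65_536)] * 64))
```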
Some algorithms\nmay also require separate stages which have different execution\ntimes; e.g., the FFT has stages for loading data, loading\nconstants, local computation, transpose, local computation,\nbit reversal, and storing the results.\nFor simplicity we chose to model a 3.2GHz, 8 SPE version\nof Cell with 25.6GB/s of memory bandwidth. This version\nof Cell is likely to be used in the first release of the Sony\nPlayStation3 [28]. The lower frequency had the simplifying\nbenefit that both the EIB and DRAM controller could deliver\ntwo SP words per cycle. The maximum flop rate of\nsuch a machine would be 204.8 Gflop/s, with a computational\nintensity of 32 FLOPs/word. For comparison, we ran\nthese kernels on actual hardware of several leading processor\ndesigns: the vector Cray X1E MSP, superscalar AMD\nOpteron 248 and VLIW Intel Itanium2. The key architectural\ncharacteristics are detailed in Table 1.\n5.1\nCell+ Architectural Exploration\nThe Double Precision (DP) pipeline in Cell is obviously\nan afterthought as video games have limited need for DP\narithmetic.\nCertainly a redesigned pipeline would rectify\nthe performance limitations, but would do so at a cost of\nadditional design complexity and power consumption. We\noffer a more modest alternative that can reuse most of the\nexisting circuitry. Based on our experience designing the VI-RAM\nvector processor-in-memory chip [12], we believe these\n\"Cell+\" design modifications are considerably less complex\nthan a redesigned pipeline, consume very little additional\nsurface area on the chip, but show significant DP performance\nimprovements for scientific kernels.\nIn order to explore the limitations of Cell's DP issue bandwidth\n, we propose an alternate design with a longer forwarding\nnetwork to eliminate the all but one of the stall cycles\n-- recall the factors that limit DP throughput as described\nin Section 3. In this hypothetical implementation, called\nCell+, each SPE would still have the single DP datapath,\nbut would be able to dispatch one DP SIMD instruction\nevery other cycle instead of one every 7 cycles. The Cell+\ndesign would not stall issuing other instructions and would\nachieve 3.5x the DP throughput of the Cell (51.2 Gflop/s) by\nfully utilizing the existing DP datapath; however, it would\nmaintain the same SP throughput, frequency, bandwidth,\nand power as the Cell.\nDENSE MATRIX-MATRIX MULTIPLY\nWe begin by examining the performance of dense matrix-matrix\nmultiplication, or GEMM. This kernel is character-ized\nby high computational intensity and regular memory\naccess patterns, making it an extremely well suited for the\nCell architecture. We explored two storage formats: column\nmajor and block data layout [26] (BDL). BDL is a two-stage\naddressing scheme (block row/column, element sub\nrow/column).\n6.1\nAlgorithm Considerations\nFor GEMM, we adopt what is in essence an outer loop\nparallelization approach. Each matrix is broken into 8n x\nn element tiles designed to fit into the memory available on\nthe Cell chip, which are in turn split into eight n x n element\ntiles that can fit into the 8 SPE local stores. For the column\nlayout, the matrix will be accessed via a number of short\nDMAs equal to the dimension of the tile -- e.g. 64 DMAs\nof length 64. BDL, on the other hand, will require a single\nlong DMA of length 16KB.\nSince the local store is only 256KB, and must contain\nboth the program and stack, program data in the local\nstore is limited to about 56K words. 
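As a quick sanity check on this budget, the following sketch tests which square tile sizes fit once the tiles are double buffered (the 6n²-word requirement is derived in the next paragraph) and how many FLOPs per word of traffic they provide. All constants are figures quoted in the text; the traffic estimate of roughly 3n² words per tile update is a deliberate simplification.

```python
# Back-of-the-envelope check of candidate GEMM tile sizes.

BUDGET_WORDS = 56 * 1024          # usable local-store words per SPE (approx.)
MACHINE_BALANCE = 32              # FLOPs per word needed to reach peak

def fits(n, words_per_element=1):
    """Double-buffered tiles need 6*n^2 words (one n x n tile per matrix, x2)."""
    return 6 * n * n * words_per_element <= BUDGET_WORDS

def intensity(n):
    """FLOPs per word for an n x n tile update: 2n^3 flops over ~3n^2 words."""
    return 2 * n**3 / (3 * n**2)

for n in (32, 64, 96, 128):
    print(n, fits(n), fits(n, words_per_element=2), round(intensity(n), 1))
# 96 is the largest SP tile that fits (128 does not), 64 the largest DP tile,
# and only tiles of roughly 48 or larger reach the 32 FLOPs/word balance point.
```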
The tiles, when double buffered, require 6n² words of local store (one from each matrix) -- thus making 96² the maximum square tile size in SP. Additionally, in column layout, there is added pressure on the maximum tile size for large matrices, as each column within a tile will be on a different page, resulting in TLB misses. The minimum size of a tile is determined by the FLOPs to word ratio of the processor. In the middle, there is a tile-size "sweet spot" that delivers peak performance.

The loop order was therefore chosen to minimize the average number of pages touched per phase for a column major storage format. The BDL approach, as TLB misses are of little concern, allows us to structure the loop order to minimize memory bandwidth requirements.

A possible alternate approach is to adapt Cannon's algorithm [3] for parallel machines. Although this strategy could reduce the DRAM bandwidth requirements by transferring blocks via the EIB, for a column major layout it could significantly increase the number of pages touched. This will be the subject of future work. Note that for small matrix sizes, it is most likely advantageous to choose an algorithm that minimizes the number of DMAs. One such solution would be to broadcast a copy of the first matrix to all SPEs.

6.2 Single Precision GEMM Results

The Cell performance of GEMM based on our performance model (referred to as Cell_pm) for large matrices is presented in Table 2.

                 Cell_pm+   Cell_pm    X1E    AMD64   IA64
DP (Gflop/s)       51.1       14.6     16.9    4.0     5.4
SP (Gflop/s)        --       204.7     29.5    7.8     3.0

Table 2: GEMM performance (in Gflop/s) for large square matrices on Cell, X1E, Opteron, and Itanium2. Only the best performing numbers are shown. Cell data based on our performance model is referred to as Cell_pm.

SGEMM simulation data show that 32² blocks do not achieve sufficient computational intensity to fully utilize the processor. The choice of loop order and the resulting increase in memory traffic prevent column major 64² blocks from achieving a large fraction of peak (over 90%) for large matrices. Only 96² block sizes provide enough computational intensity to overcome the additional block loads and stores, and thus achieve near-peak performance -- over 200 Gflop/s. For BDL, however, 64² blocks effectively achieve peak performance. Whereas we assume a 1000 cycle DMA startup latency in our simulations, if the DMA latency were only 100 cycles, then the 64² column major performance would reach parity with BDL.

At 3.2GHz, each SPE requires about 3W [8]. Thus, with a nearly idle PPC and L2, Cell_pm achieves over 200 Gflop/s for approximately 40W of power -- nearly 5 Gflop/s/Watt. Clearly, for well-suited applications, Cell is extremely power efficient.

6.3 Double Precision GEMM Results

A similar set of strategies and simulations were performed for DGEMM. Although the time to load a DP 64² block is twice that of the SP version, the time required to compute on a 64² DP block is about 14x as long as the SP counterpart (due to the limitations of the DP issue logic). Thus it is far easier for DP to reach its peak performance -- a mere 14.6 Gflop/s. However, when using our proposed Cell+ hardware variant, DGEMM performance jumps to an impressive 51 Gflop/s.

6.4 Performance Comparison

Table 2 shows a performance comparison of GEMM between Cell_pm and the set of modern processors evaluated in our study.
Note the impressive performance characteristics\nof the Cell processors, achieving 69x, 26x, and 7x speed\nup for SGEMM compared with the Itanium2, Opteron, and\nX1E respectively. For DGEMM, the default Cell processor\nis 2.7x and 3.7x faster than the Itanium2 and Opteron. In\nterms of power, the Cell performance is even more impressive\n, achieving over 200x the efficiency of the Itanium2 for\nSGEMM!\nOur Cell\npm\n+\nexploration architecture is capable, for large\ntiles, of fully exploiting the DP pipeline and achieving over\n50 Gflop/s. In DP, the Cell+ architecture would be nearly\n10 times faster than the Itanium2 and nearly 30 times more\npower efficient. Additionally, traditional micros (Itanium2,\nOpteron, etc) in multi-core configurations would require either\nenormous power saving innovations or dramatic reductions\nin performance, and thus would show even poorer performance/power\ncompared with the Cell technology. Com-pared\nto the X1E, Cell+ would be 3 times as fast and 9\ntimes more power efficient.\nThe decoupling of main memory data access from the\ncomputational kernel guarantees constant memory access\nlatency since there will be no cache misses, and all TLB accesses\nare resolved in the communication phase. Matrix multiplication\nis perhaps the best benchmark to demonstrate\nCell's computational capabilities, as it achieves high performance\nby buffering large blocks on chip before computing\non them.\n6.5\nModel Validation\nIBM recently released their in-house performance evaluation\nof their prototype hardware [4]. On SGEMM, they\nachieve about 201 Gflop/s, which is within 2% of our pred-icated\nperformance.\nSPARSE MATRIX VECTOR MULTIPLY\nAt first glance, SpMV would seem to be a poor application\nchoice for the Cell since the SPEs have neither caches\nnor word-granularity gather/scatter support. Furthermore,\nSpMV has a relatively low O(1) computational intensity.\nHowever, these considerations are perhaps less important\nthan the Cell's low functional unit and local store latency\n(<2ns), the task parallelism afforded by the SPEs, the eight\nindependent load store units, and the ability to stream nonzeros\nvia DMAs.\n7.1\nAlgorithmic Considerations\nTwo storage formats are presented in this paper: Compressed\nSparse Row (CSR) and Blocked Compressed Sparse\nRow (BCSR). Only square BCSR was explored, and only\n2x2 BCSR numbers will be presented here.\nFuture Cell\nSpMV work will examine the entire BCSR space. Because\nof the quadword nature of the SPEs, all rows within a CSR\ntile are padded to a multiple of 4. This greatly simplifies\nthe programming model at the expense of increasing memory\ntraffic. Note that this is very different than 1x4 BCSR..\nTo perform a stanza gather operation the Cell utilizes the\nMFC \"get list\" command, where a list of addresses/lengths\nis created in local store. The MFC then gathers these stanzas\nfrom the global store and packs them into the local store.\nIt is possible to make every stanza a single quadword, however\n, without an accurate performance model of the MFC\n\"get list\" command, one must resort to tiling to provide\na reasonable estimate for performance. For simplicity all\nbenchmarks were run using square tiles. The data structure\nrequired to store the entire matrix is a 2D array of tiles,\nwhere each block stores its nonzeros and row pointers as if\nit were an entire matrix. We chose not to buffer the source\nand destination vector tiles as this would result in a smaller\nblock size. 
These tradeoffs will be examined in future work.\nCollectively the blocks are chosen to be no larger than ~36K\nwords in SP (half that in DP).\nThe inner loop of CSR SpMV either requires significant\nsoftware pipelining, hefty loop unrolling, or an approach al-gorithmically\nanalogous to a segmented scan [1]. As there\nare no conditional stores in the SPE assembly language, we\nchose to partially implement a segmented scan, where the\ngather operations are decoupled from the dot products. This\ndecoupled gather operation can be unrolled and software\npipelined, thereby completing in close to three cycles per\nelement (the ISA is not particularly gather friendly). It is\nimportant to note that since the local store is not a write\nback cache, it is possible to overwrite its contents without\nfear of consuming DRAM bandwidth or corrupting the actual\narrays.\nAs the nonzeros are stored contiguously in arrays, it is\n13\n#\nName\nN\nNNZ\nComments\n15\nVavasis\n40K 1.6M\n2D PDE Problem\n17\nFEM\n22K\n1M\nFluid Mechanics Problem\n18\nMemory\n17K 125K\nMotorolaMemory Circuit\n36\nCFD\n75K 325K Navier-Stokes, viscous flow\n06 FEM Crystal\n14K 490K\nFEM stiffness matrix\n09\n3D Tube\n45K 1.6M\n3D pressure Tube\n25\nPortfolio\n74K 335K\nFinancial Portfolio\n27\nNASA\n36K 180K\nPWT NASA Matrix\n28 Vibroacoustic 12K 177K\nFlexible box structure\n40 Linear Prog.\n31K\n1M\nAA\nT\n-7pt\n64\n256K 1.8M\n64\n3\n7pt stencil\nTable 3: Suite of matrices used to evaluate SpMV\nperformance.\nMatrix numbers as defined in the\nSPARSITY suite are shown in the first column.\nstraightforward to stream them in via DMA. Here, unlike\nthe source and destination vectors, it is essential to double\nbuffer in order to maximize the SPEs computational\nthroughput. Using buffers of 16KB for SP allows for 2K\nvalues and 2K indices for CSR, and 1K tiles for 2x2 BCSR.\nNote that for each phase -- loading nonzeros and indices\n-- there is the omnipresent 1000 cycle DMA latency overhead\nin addition to the startup and finalize penalties (as in\ntraditional pipelining).\nTo partition the work among the SPEs, we implemented\na cooperative blocking model. By forcing all SPEs to work\non the same block, it is possible to broadcast the blocked\nsource vector and row pointers to minimize memory traffic.\nOne approach, referred to as PrivateY, was to divide work\namong SPEs within a block by distributing the nonzeros\nas evenly as possible. This strategy necessitates that each\nSPE contains a private copy of the destination vector, and\nrequires an inter-SPE reduction at the end of each blocked\nrow.\nThe alternate method, referred to as PartitionedY,\npartitions the destination vector evenly among the SPEs.\nHowever there is no longer any guarantee that the SPEs'\ncomputations will remain balanced, causing the execution\ntime of the entire tile to be limited by the most heavily\nloaded SPE. Thus for load balanced blocks, the PartitionedY\napproach is generally advantageous; however, for matrices\nexhibiting irregular (uneven) nonzero patterns, we expect\nhigher performance using PrivateY.\nNote that there is a potential performance benefit by writing\na kernel specifically optimized for symmetric matrices.\nFor these types of matrices, the number of operations can\neffectively double relative to the memory traffic. 
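The load-balance trade-off between PrivateY and PartitionedY can be illustrated with a small sketch; the model below counts only nonzeros per SPE, which is a deliberate simplification of the real cost.

```python
# Sketch of the two SpMV partitioning strategies described above. Nonzeros are
# given per row; both functions return, per SPE, the number of nonzeros it must
# process (the quantity that bounds its compute time).

def private_y(row_nnz, n_spes=8):
    """PrivateY: hand out rows so nonzeros are as even as possible; each SPE
    keeps a private destination vector (an inter-SPE reduction follows)."""
    loads = [0] * n_spes
    for nnz in sorted(row_nnz, reverse=True):   # greedy balancing
        loads[loads.index(min(loads))] += nnz
    return loads

def partitioned_y(row_nnz, n_spes=8):
    """PartitionedY: split the destination vector (the rows) evenly; balance
    then depends entirely on the nonzero distribution."""
    chunk = (len(row_nnz) + n_spes - 1) // n_spes
    return [sum(row_nnz[i:i + chunk]) for i in range(0, len(row_nnz), chunk)]

# A deliberately skewed matrix: a few dense rows followed by many sparse ones.
rows = [512] * 8 + [4] * 1016
print(max(private_y(rows)), max(partitioned_y(rows)))   # 1020 vs 4576 nonzeros
```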
However,\nthe algorithm must block two tiles at a time -- thus the symmetric\nmatrix kernel divides memory allocated for blocking\nthe vector evenly among the two submatrices, and performs\na dot product and SAXPY for each row in the lower triangle.\n7.2\nEvaluation Matrices\nIn order to effectively evaluate SpMV performance, we examine\na synthetic stencil matrix, as well as ten real matrices\nused in numerical calculations from the BeBop SPARSITY\nsuite [11, 31] (four nonsymmetric and six symmetric). Table\n3 presents an overview of the evaluated matrices.\n7.3\nSingle Precision SpMV Results\nSingle and double precision tuned SpMV results for the\nSPARSITY matrices are show in Tables 4 and 5. Surpris-ingly\n, given Cell's inherent SpMV limitations, the SPARSITY\nnonsymmetric matrices average over 4 Gflop/s, while\nthe symmetric matrices average nearly 8 Gflop/s. Unfortunately\n, many of these matrices are so small that they utilize\nonly a fraction of the default tile size. Unlike the synthetic\nmatrices, the real matrices, which contain dense sub-blocks,\ncan exploit BCSR without unnecessarily wasting memory\nbandwidth on zeros. As memory traffic is key, storing BCSR\nblocks in a compressed format (the zeros are neither stored\nnor loaded) would allow for significantly higher performance\nif there is sufficient support within the ISA to either decompress\nthese blocks on the fly, or compute on compressed\nblocks. This is an area of future research.\nOverall results show that the PrivateY approach is generally\na superior partitioning strategy compared with PartitionedY\n. In most cases, the matrices are sufficiently unbalanced\nthat the uniform partitioning of the nonzeros coupled\nwith a reduction requires less time than the performing a\nload imbalanced calculation.\nWhen using the PartionedY approach, the symmetric kernel\nis extremely unbalanced for blocks along the diagonal.\nThus, for matrices approximately the size of a single block,\nthe imbalance between SPEs can severely impair the performance\n-- even if the matrix is uniform. In fact, symmetric\noptimizations show only about 50% performance improvement\nwhen running the nonsymmetric kernel on the symmetric\nmatrices.\nOnce again DMA latency plays a relatively small role in\nthis algorithm.\nIn fact, reducing the DMA latency by a\nfactor of ten results in only a 5% increase in performance.\nThis is actually a good result. It means than the memory\nbandwidth is highly utilized and the majority of bus cycles\nare used for transferring data rather than stalls.\nOn the whole, clock frequency also plays a small part in\nthe overall performance.\nSolely increasing the clock frequency\nby a factor of 2 (to 6.4GHz) provides only a 1%\nincrease in performance on the SPARSITY nonsymmetric\nmatrix suite. Similarly, cutting the frequency in half (to\n1.6GHz) results in only a 20% decrease in performance. Simply\nput, for the common case, more time is used in transferring\nnonzeros and the vectors rather than computing on\nthem.\n7.4\nDouble Precision SpMV Results\nResults from our performance estimator show that single\nprecision SPMV is almost twice as fast as double precision,\neven though the nonzero memory traffic only increases by\n50%. This discrepancy is due to the reduction in the number\nof values contained in a tile, where twice as many blocked\nrows are present. For example, when using 16K\n2\nSP tiles on\na 128K\n2\nmatrix, the 512KB source vector must be loaded 8\ntimes. 
However, in DP, the tiles are only 8K\n2\n-- causing the\n1MB source vector to be loaded 16 times, and thus resulting\nin a much higher volume of memory traffic. Future work\nwill investigate caching mega blocks across SPEs to reduce\ntotal memory traffic.\n7.5\nPerformance Comparison\nTable 4 compares Cell's estimated performance (the best\npartitioning and blocking combination) for SpMV with re-14\nSPARSITY nonsymmetric matrix suite\nDouble Precision (Gflop/s)\nSingle Precision (Gflop/s)\nMatrix\nCell\nF SS\nCell\npm\n+\nCell\npm\nX1E\nAMD64\nIA64\nCell\npm\nAMD64\nIA64\nVavasis\n3.79\n3.17\n3.06\n0.84\n0.44\n0.46\n6.06\n0.70\n0.49\nFEM\n4.28\n3.44\n3.39\n1.55\n0.42\n0.49\n5.14\n0.59\n0.62\nMem\n2.21\n1.69\n1.46\n0.57\n0.30\n0.27\n2.79\n0.45\n0.31\nCFD\n1.87\n1.52\n1.44\n1.61\n0.28\n0.21\n2.33\n0.38\n0.23\nAverage\n3.04\n2.46\n2.34\n1.14\n0.36\n0.36\n4.08\n0.53\n0.41\nSPARSITY symmetric matrix suite\nDouble Precision (Gflop/s)\nSingle Precision (Gflop/s)\nMatrix\nCell\nF SS\nCell\npm\n+\nCell\npm\nX1E\nAMD64\nIA64\nCell\npm\nAMD64\nIA64\nFEM\n-6\n.79\n6.32\n3.12\n0.93\n1.14\n12.37\n1.46\n1.37\n3D Tube\n-6\n.48\n6.06\n2.62\n0.86\n1.16\n11.66\n1.36\n1.31\nPortfolio\n-1\n.83\n1.60\n2.99\n0.37\n0.24\n3.26\n0.42\n0.32\nNASA\n-1\n.92\n1.66\n3.30\n0.42\n0.32\n3.17\n0.46\n0.40\nVibro\n-3\n.90\n3.47\n2.54\n0.57\n0.56\n7.08\n0.56\n0.64\nLP\n-5\n.17\n4.87\n1.27\n0.47\n0.63\n8.54\n0.55\n0.92\nAverage\n-4\n.35\n4.00\n2.64\n0.60\n0.67\n7.68\n0.80\n0.83\nSynthetic Matrices\nDouble Precision (Gflop/s)\nSingle Precision (Gflop/s)\nMatrix\nCell\nF SS\nCell\npm\n+\nCell\npm\nX1E\nAMD64\nIA64\nCell\npm\nAMD64\nIA64\n7pt 64 Stencil\n2.20\n1.44\n1.29\n-0\n.30\n0.29\n2.61\n0.51\n0.32\nTable 4: SpMV performance in single and double precision on the SPARSITY (top) nonsymmetric and\n(bottom) symmetric matrix suites. Note: Cell\nF SS\nrepresents the actual implementation and runs on the\ncycle accurate full system simulator\nsults from the Itanium2 and Opteron using SPARSITY,\na highly tuned sparse matrix numerical library, on nonsymmetric\n(top) and symmetric matrix suites. X1E results\nwhere gathered using a high-performance X1-specific SpMV\nimplementation [6].\nConsidering that the Itanium2 and Opteron each have a\n6.4GB/s bus compared to the Cell's 25.6GB/s DRAM bandwidth\n-- one may expect that a memory bound application\nsuch as SpMV would perform only four times better on the\nCell. Nonetheless, on average, Cell\npm\nis more than 6x faster\nin DP and 10x faster in SP. This is because in order to\nachieve maximum performance, the Itanium2 must rely on\nthe BCSR storage format, and thus waste memory bandwidth\nloading unnecessary zeros. However, the Cell's high\nFLOP to byte ratio ensures that the regularity of BCSR is\nunnecessary allowing it to avoid loading many of the superfluous\nzeros. For example, in matrix #17, Cell uses more\nthan 50% of its bandwidth loading just the DP nonzero values\n, while the Itanium2 utilizes only 33% of its bandwidth.\nThe rest of Itanium2's bandwidth is used for zeros and meta\ndata. It should be noted that where simulations on Cell involve\na cold start to the local store, the Itanium2's have the\nadditional advantage of a warm cache.\nCell's use of on-chip memory as a buffer is advantageous in\nboth power and area compared with a traditional cache. In\nfact, Cell is 20 times more power efficient than the Itanium2\nand 15 times more efficient than the Opteron for SpMV. 
For a memory bound application such as this, multicore commodity processors will see little performance improvement unless they also scale memory bandwidth.
Comparing results with an X1E MSP is far more difficult. For nonsymmetric matrices, the Cell_pm performance on average is twice that of the X1E. For symmetric matrices, Cell_pm performs somewhere between half and triple the performance of the X1E, but on average is 50% faster. The fact that the X1E consumes about three times the power of Cell guarantees that Cell, in double precision, is at least as power efficient as the X1E.

7.6 Model Validation
Some might claim that matrix-matrix multiplication performance can be easily predictable. Most, however, would agree that SpMV is very difficult to predict. As seen in Table 4, we tested our implementation of the DP SpMV kernel on the cycle accurate IBM full system simulator, referred to as Cell_FSS. The actual implementation makes dynamic blocking and partitioning decisions at run time, based on the lessons learned while exploring optimization strategies for the performance model; however, the current version does not include the BCSR approach, and only pads rows to the nearest even number.
The cycle accurate simulations with a superior implementation proved to be about 30% faster than the initial performance estimate, averaging impressive results of more than 3 Gflop/s for nonsymmetric matrices. The 30% discrepancy disappears when static partitioning and blocking strategies are used. We can clearly see how the actual implementation's run time search for structure boosted performance of the heat equation from about 1.3 Gflop/s to 2.2 Gflop/s -- achieving a 7x speedup over the Itanium2. Cell_FSS, for double precision nonsymmetric matrices, is more than 8 times faster than the Itanium2, and 27 times more power efficient. These results confirm our performance model's predictive abilities on complex kernels, and clearly demonstrate Cell's performance superiority when compared with leading microarchitectural approaches.

X_next[i,j,k,t+1] = X[i-1,j,k,t] + X[i+1,j,k,t] +
                    X[i,j-1,k,t] + X[i,j+1,k,t] +
                    X[i,j,k-1,t] + X[i,j,k+1,t] +
                    X[i,j,k,t]

X[i,j,k,t+1] = (dt^2/dx^2) (X[i-1,j,k,t] + X[i+1,j,k,t]) +
               (dt^2/dy^2) (X[i,j-1,k,t] + X[i,j+1,k,t]) +
               (dt^2/dz^2) (X[i,j,k-1,t] + X[i,j,k+1,t]) +
               X[i,j,k,t] - X[i,j,k,t-1]

Figure 2: Stencil kernels used in evaluation. Top: the Chombo heattut equation requires only the previous time step. Bottom: the Cactus WaveToy equation requires the two previous time steps.

STENCIL COMPUTATIONS
Stencil-based computations on regular grids are at the core of a wide range of important scientific applications. In these applications, each point in a multidimensional grid is updated with contributions from a subset of its neighbors. The numerical operations are then used to build solvers that range from simple Jacobi iterations to complex multigrid and block structured adaptive methods.
In this work we examine two flavors of stencil computations derived from the numerical kernels of the Chombo [5] and Cactus [2] toolkits. Chombo is a framework for computing solutions of partial differential equations (PDEs) using finite difference methods on adaptively refined meshes. Here we examine a stencil computation based on Chombo's demo application, heattut, which solves a simple heat equation without adaptivity.
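For reference, a straightforward scalar C rendering of the two updates of Figure 2 is given below. It is not the SIMDized SPE code -- array extents, halo handling and data layout are simplified -- but the arithmetic per point is the same as in the equations above.

#define NX 256
#define NY 256
#define NZ 256
#define IDX(i,j,k) ((i) + NX*((j) + NY*(k)))

/* Chombo heattut: next step computed from the previous step only. */
void heat_step(const double *x, double *x_next)
{
    for (int k = 1; k < NZ-1; k++)
      for (int j = 1; j < NY-1; j++)
        for (int i = 1; i < NX-1; i++)
          x_next[IDX(i,j,k)] =
              x[IDX(i-1,j,k)] + x[IDX(i+1,j,k)] +
              x[IDX(i,j-1,k)] + x[IDX(i,j+1,k)] +
              x[IDX(i,j,k-1)] + x[IDX(i,j,k+1)] +
              x[IDX(i,j,k)];
}

/* Cactus WaveToy: needs the two previous steps (t and t-1). */
void wavetoy_step(const double *xt, const double *xtm1, double *xtp1,
                  double dt, double dx, double dy, double dz)
{
    const double cx = dt*dt/(dx*dx), cy = dt*dt/(dy*dy), cz = dt*dt/(dz*dz);
    for (int k = 1; k < NZ-1; k++)
      for (int j = 1; j < NY-1; j++)
        for (int i = 1; i < NX-1; i++)
          xtp1[IDX(i,j,k)] =
              cx*(xt[IDX(i-1,j,k)] + xt[IDX(i+1,j,k)]) +
              cy*(xt[IDX(i,j-1,k)] + xt[IDX(i,j+1,k)]) +
              cz*(xt[IDX(i,j,k-1)] + xt[IDX(i,j,k+1)]) +
              xt[IDX(i,j,k)] - xtm1[IDX(i,j,k)];
}

The heattut loop reads one time level and writes another; the WaveToy loop also reads the t-1 level, which is what is meant below by requiring the two previous time steps.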
Cactus is a modular open source framework for computational science, successfully used in many areas of astrophysics. Our work examines the stencil kernel of the Cactus demo, WaveToy, which solves a 3D hyperbolic PDE by finite differencing. The heattut and WaveToy equations are shown in Figure 2.
Notice that both kernels solve 7 point stencils in 3D for each point. However, the heattut equation only utilizes values from the previous time step, while WaveToy requires values from the two previous time steps. Additionally, WaveToy has a higher computational intensity, and can more readily exploit the FMA pipeline.

8.1 Algorithmic Considerations
The algorithm used on Cell is virtually identical to that used on traditional architectures except that the ISA forces main memory loads and stores to be explicit, rather than caused by cache misses and evictions. The basic algorithmic approach to update the 3D cubic data array is to sweep across the domain, updating one plane at a time. Since a stencil requires both the next and previous plane, a minimum of 4 planes must be present in the local stores: (z-1,t), (z,t), (z+1,t), and (z,t+1). Additionally, bus utilization can be maximized by double buffering the previous output plane (z-1,t+1) with the next input plane (z+2,t).

[Figure 3: Flow diagram for the heat equation. Left: queues implemented within each SPE perform only one time step. Right: the time skewing version requires an additional circular queue to hold intermediate results.]

In order to parallelize across SPEs, each plane of the 3D domain is partitioned into eight overlapping blocks. Due to the finite size of the local store memory, a straightforward stencil calculation is limited to planes of 256^2 elements plus ghost regions. Thus each SPE updates the core 256x32 points from a 258x34 slab (as slabs also contain ghost regions).
To improve performance of stencil computations on cache-based architectures, previous research has shown that multiple time steps can be combined to increase performance [13, 21, 32]. This concept of time skewing can also be effectively leveraged in our Cell implementation. By keeping multiple planes from multiple time steps in the SPE simultaneously, it is possible to double or triple the number of stencils performed with almost no increase in memory traffic, thus increasing computational intensity and improving overall performance. Figure 3 details a flow diagram for the heat equation, showing both the simple and time skewed implementations.
Note that the neighbor communication required by stencils is not well suited for the aligned quadword load requirements of the SPE ISA -- i.e. unaligned loads must be emulated with permute instructions. In fact, for SP stencils with extensive unrolling, after memory bandwidth, the permute datapath is the limiting factor in performance -- not the FPU. This lack of support for unaligned accesses highlights a potential bottleneck of the Cell architecture; however, we can partially mitigate this problem for the stencil kernel via data padding.

8.2 Stencil Kernel Results
The performance estimation for the heattut and WaveToy stencil kernels is shown in Table 5. Results show that as the number of time steps increases, a corresponding decrease in the grid size is required due to the limited memory footprint of the local store.
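Relating these results back to the algorithm of Section 8.1, the sketch below shows the simple (single time step) sweep: a rolling window of four planes is kept resident in the local store, as in the left half of Figure 3. The fetch_plane()/store_plane() routines are placeholders for the explicit DMA transfers, double buffering is omitted, and all names and sizes are illustrative.

enum { W = 258, H = 34, PLANE = W * H };   /* one SPE slab incl. ghost cells */

/* Placeholders standing in for the explicit DMA get/put of one plane. */
void fetch_plane(double *dst, int z)       { (void)dst; (void)z; }
void store_plane(const double *src, int z) { (void)src; (void)z; }

/* 7-point update of the 256x32 core of one plane. */
static void stencil_plane(const double *zm1, const double *z0,
                          const double *zp1, double *out)
{
    for (int y = 1; y < H - 1; y++)
        for (int x = 1; x < W - 1; x++) {
            int c = y * W + x;
            out[c] = z0[c - 1] + z0[c + 1] + z0[c - W] + z0[c + W]
                   + zm1[c] + zp1[c] + z0[c];
        }
}

void sweep(int nz)
{
    static double buf[4][PLANE];   /* (z-1,t), (z,t), (z+1,t), (z,t+1) */
    double *zm1 = buf[0], *z0 = buf[1], *zp1 = buf[2], *out = buf[3];

    fetch_plane(zm1, 0);
    fetch_plane(z0, 1);
    for (int z = 1; z < nz - 1; z++) {
        fetch_plane(zp1, z + 1);            /* next input plane (z+1,t) */
        stencil_plane(zm1, z0, zp1, out);   /* compute (z,t+1)          */
        store_plane(out, z);                /* previous output plane    */
        double *tmp = zm1; zm1 = z0; z0 = zp1; zp1 = tmp;  /* slide window */
    }
}

The time skewed variant keeps an additional circular queue of intermediate planes, so that two or three updates are applied before a plane is written back.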
In SP, the heat equation on the Cell_pm is effectively computationally bound with two steps of time skewing, resulting in over 41 Gflop/s. More specifically, the permute unit becomes fully utilized as discussed in Section 8.1. In DP, however, the heat equation is truly computationally bound for only a single time step, achieving 8.2 Gflop/s. Analysis also shows that in the Cell+ approach, the heat equation is memory bound when using a single time step, attaining 10.6 Gflop/s; with time skewing, the performance of Cell+ DP jumps to over 21 Gflop/s.

Double Precision (Gflop/s)
Stencil   Cell_FSS  Cell_pm+ (2 step)  Cell_pm+  Cell_pm  X1E   AMD64  IA64
Heat      7.25      21.1               10.6      8.2      3.91  0.57   1.19
WaveToy   9.68      16.7               11.1      10.8     4.99  0.68   2.05

Single Precision (Gflop/s)
Stencil   Cell_FSS (4 step)  Cell_pm (2 step)  Cell_pm  X1E   AMD64  IA64
Heat      65.8               41.9              21.2     3.26  1.07   1.97
WaveToy   --                 33.4              22.3     5.13  1.53   3.11

Table 5: Performance for the heat equation and WaveToy stencils. X1E and Itanium2 experiments use 256^3 grids. The Opteron uses a 128^3 grid. Cell uses the largest grid that would fit within the local stores. The (n step) versions denote a time skewed version where n time steps are computed.

We believe the temporal recurrence in the Cactus WaveToy example will allow more time skewing in single precision at the expense of far more complicated code, and will be the subject of future investigation.

8.3 Performance Comparison
Table 5 presents a performance comparison of the stencil computations across our evaluated set of leading processors. Note that stencil performance has been optimized for the cache-based platforms as described in [15].
In single precision, for this memory bound computation, even without time skewing, Cell_pm achieves a 6.5x, 11x, and 20x speedup compared with the X1E, the Itanium2 and the Opteron respectively. Recall that the Cell has only four times the memory bandwidth of the scalar machines, and 75% of the bandwidth of the X1E, indicating that Cell's potential to perform this class of computations in a much more efficient manner is due to the advantages of software controlled memory for algorithms exhibiting predictable memory accesses.
In double precision, with 1/14th the floating point throughput, Cell_pm achieves a 2x, 7x, and 14x speedup compared to the X1E, the Itanium2, and the Opteron for the heat equation -- a truly impressive result. Additionally, unlike the Opteron and Itanium2, simple time skewing has the potential to at least double the performance in SP (either version of Cell) or in DP on the Cell+ variant.
Finally, recall that in Section 7 we examined Cell SpMV performance using 7-point stencil matrices. We can now compare those results with the structured grid approach presented here, as the numerical computations are equivalent in both cases. Results show that for two time step calculations, the single precision structured grid approach achieves a 23x advantage compared with the sparse matrix method. This impressive speedup is attained through the regularity of memory accesses, the reduction of memory traffic (constants are encoded in the equation rather than the matrix), the ability to time skew (increased computational intensity), and the fact that stencils on a structured grid don't require multiplications by 1.0 like a sparse matrix would.
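A rough, back-of-the-envelope traffic estimate -- ours, not one taken from the measurements above -- illustrates where this advantage comes from. Expressed as a single precision SpMV, each updated point of the 7-point problem moves on the order of

  7 x (4 B value + 4 B column index) + source and destination vector accesses  >  60 B,

whereas the structured grid kernel streams roughly

  4 B (read) + 4 B (write)  =  8 B

per point, with neighbors reused out of the local store and the stencil constants encoded in the code rather than stored; time skewing then divides even that traffic by the number of fused steps. These factors do not multiply out to exactly the measured 23x, but they indicate why the structured grid formulation is so much cheaper.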
For double precision, the stencil algorithm advantage is diminished to approximately 12x, due mainly to the lack of time skewing.

8.4 Model Validation
As with SpMV, we implemented an actual double precision kernel on the full system simulator, with Cell_FSS results shown in Table 5. At first, we were surprised that measured performance fell short of our prediction by 13%. However, upon closer examination it was discovered that the actual Cell implementation prohibits dual issuing of DP instructions with loads or permutes, even though it allows SP with loads or permutes to be dual issued. Thus for kernels with streaming behavior, it is realistic to assume that one double precision SIMD instruction can be executed every 8 cycles -- instead of every 7 as we had predicted previously. This discrepancy results in a 14% architectural performance reduction, which corresponds very well to the 13% difference observed in Table 5 between the predicted (Cell_pm) and simulated (Cell_FSS) DP data.
Nonetheless, the actual DP Cell_FSS implementation of our evaluated stencil kernel is about 13x faster, and nearly 30x more power efficient, than the Opteron. We also developed a SP version of the heat equation that allowed four time-skewed stencil steps. (Our original performance estimation assumed one or two time steps.) Results show spectacular SP Cell_FSS performance of nearly 66 Gflop/s -- more than 60x faster and 136x more power efficient than the Opteron, even though Cell has only four times the bandwidth and 20 times the single precision throughput.

FAST FOURIER TRANSFORMS
The FFT presents us with an interesting challenge: its computational intensity is much less than matrix-matrix multiplication and standard algorithms require a non-trivial amount of data movement. Extensive work has been performed on optimizing this kernel for both vector [24] and cache-based [7] machines. In addition, implementations for varying precisions appear in many embedded devices using both general and special purpose hardware. In this section we evaluate the implementation of a standard FFT algorithm on the Cell processor.

9.1 Methods
We examine both the 1D FFT cooperatively executed across the SPEs, and a 2D FFT whose 1D FFTs are each run on a single SPE. In all cases the data appears in a single array of complex numbers. Internally (within the local stores) the data is unpacked into separate arrays, and a table lookup is used for the roots of unity so that no runtime computation of roots is required. As such, our results include the time needed to load this table. Additionally, all inputs are presented to the FFT algorithm, and all results returned, in natural order (i.e. a bit reversal was required to unwind the permutation process in all cases). Note that these requirements have the potential to severely impact performance.
For simplicity we evaluated a naive FFT algorithm (no double buffering and with barriers around computational segments) for the single 1D FFT. The data blocks are distributed cyclically to SPEs, 3 stages of local work are performed, the data is transposed (basically the reverse of the cyclic allocation), and then 9 to 13 stages of local computation are performed (depending on the FFT size). At that point the indices of the data on chip are bit-reversed to unwind the permutation process and the naturally ordered result is copied back into main memory. Once again, we presume a large DMA initiation overhead of 1000 cycles.
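The local stages referred to here are ordinary radix-2 butterfly stages. The sketch below shows a textbook iterative radix-2 FFT with an explicit bit-reversal pass -- the structure that, on Cell, is split between the pre- and post-transpose local phases. It is not the actual SPE implementation: in practice the roots of unity come from the precomputed table, and the DMA, data partitioning and barriers are omitted.

#include <complex.h>
#include <math.h>

/* Textbook iterative radix-2 FFT, n a power of two (not the SPE code). */
void fft(double complex *a, int n)
{
    const double PI = 3.14159265358979323846;

    /* Bit-reversal permutation ("unwinding" pass mentioned in the text). */
    for (int i = 1, j = 0; i < n; i++) {
        int bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j |= bit;
        if (i < j) { double complex t = a[i]; a[i] = a[j]; a[j] = t; }
    }

    /* log2(n) butterfly stages; on Cell these are split into the
       3 pre-transpose and 9 to 13 post-transpose local stages.      */
    for (int len = 2; len <= n; len <<= 1) {
        double complex wlen = cexp(-2.0 * PI * I / len); /* table lookup in practice */
        for (int i = 0; i < n; i += len) {
            double complex w = 1.0;
            for (int k = 0; k < len / 2; k++) {
                double complex u = a[i + k];
                double complex v = a[i + k + len / 2] * w;
                a[i + k]           = u + v;
                a[i + k + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
}

A full transform of length n performs log2(n) such stages, which is where the 5N log N operation count used in the performance discussion below comes from.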
However, a Cell implementation with a smaller DMA initiation overhead would allow the possibility of much larger FFT calculations (including out of core FFTs) using smaller block transfers, with little or no slowdown, by using double buffering to hide the DMA latency.
Before exploring the 2D FFT, we briefly discuss simultaneous FFTs. For sufficiently small FFTs (<4K points in SP) it is possible to both double buffer and round robin allocate a large number of independent FFTs to the 8 SPEs. Although there is lower computational intensity, the sheer parallelism and double buffering allow for extremely high performance (up to 76 Gflop/s).
Simultaneous FFTs form the core of the 2D FFT. In order to ensure long DMAs, and thus validate our assumptions on effective memory bandwidth, we adopted an approach that requires two full element transposes. First, N 1D N-point FFTs are performed on the rows, storing the data back to DRAM. Second, the data stored in DRAM is transposed (columns become rows) and stored back to DRAM. Third, the 1D FFTs are performed on the columns, whose elements are now sequential (because of the transpose). Finally, a second transpose is applied to the data to return it to its original layout. Instead of performing an N point bit reversal for every FFT, entire transformed rows (not the elements of the rows) are stored in bit-reversed order (in effect, bit reversing the elements of the columns). After the first transpose, a decimation in frequency FFT is applied to the columns. The columns are stored back in bit-reversed order -- in doing so, the row elements are bit reversed. With the final transpose, the data is returned to memory in natural order and layout, at lower cost than performing an explicit bit reversal for every FFT.

9.2 Single Precision FFT Performance
Table 6 presents performance results for the Cell 1D and 2D FFT. For the 1D case, more than half of the total time is spent just loading and storing points and roots of unity from DRAM. If completely memory bound, peak performance is approximately (25.6 GB/s / 8 bytes) x 5N log2 N / 3N flops per second, or approximately 5.3 log2 N Gflop/s. This means performance is limited to 64 Gflop/s for a 4K point SP FFT regardless of CPU frequency. A clear area for future exploration is hiding computation within the communication and minimizing the overhead involved in loading the roots of unity.
Unfortunately, the two full element transposes, used in the 2D FFT to guarantee long sequential accesses, consume nearly 50% of the time. Thus, although 8K simultaneous 4K point FFTs achieve 76 Gflop/s (after optimizing away the loading of roots of unity), a 4K^2 2D FFT only reaches 46 Gflop/s -- an impressive figure nonetheless. Without the bit reversal approach, the performance would have further dropped to about 40 Gflop/s.
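For concreteness, the memory-bound ceiling just quoted works out as follows, using the standard radix-2 operation count of 5N log2 N flops and assuming roughly 3N single precision complex words (8 bytes each) of DRAM traffic for the input, output and roots of unity:

  word rate  = 25.6 GB/s / 8 B          = 3.2 Gwords/s
  time       = 3N / (3.2e9 words/s)
  rate       = 5N log2 N / time         = (5/3) x 3.2 x log2 N  =  5.3 log2 N Gflop/s

  N = 4K  ->  log2 N = 12  ->  about 64 Gflop/s, independent of clock frequency.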
The smaller FFT's shown in\nthe table show even poorer performance.\n9.3\nDouble Precision FFT Performance\nWhen DP is employed, the balance between memory and\ncomputation is changed by a factor of 7.\nThis pushes a\nDouble Precision (Gflop/s)\nN\nCell\npm\n+\nCell\npm\nX1E\n\nAMD64\nIA64\n4K\n12.6\n5.6\n2.92\n1.88\n3.51\n1D\n16K\n14.2\n6.1\n6.13\n1.34\n1.88\n64K\n\n-7\n.56\n0.90\n1.57\n1K\n2\n15.9\n6.6\n6.99\n1.19\n0.52\n2D\n2K\n2\n16.5\n6.7\n7.10\n0.19\n0.11\nSingle Precision (Gflop/s)\nN\nCell\npm\n+\nCell\npm\nX1E\n\nAMD64\nIA64\n4K\n-29\n.9\n3.11\n4.24\n1.68\n1D\n16K\n-37\n.4\n7.48\n2.24\n1.75\n64K\n-41\n.8\n11.2\n1.81\n1.48\n1K\n2\n-35\n.9\n7.59\n2.30\n0.69\n2D\n2K\n2\n-40\n.5\n8.27\n0.34\n0.15\nTable 6: Performance of 1D and 2D FFT in DP (top)\nand SP (bottom). For large FFTs, Cell is more than\n10 times faster in SP than either the Opteron or\nItanium2. The Gflop/s number is calculated based\non a naive radix-2 FFT algorithm. For 2D FFTs the\nnaive algorithm computes 2N N-point FFTs.\nslightly memory bound application strongly into the computationally\nbound domain. The SP simultaneous FFT is\n10 times faster than the DP version. On the upside, the\ntransposes required in the 2D FFT are now less than 20% of\nthe total time, compared with 50% for the SP case. Cell\npm\n+\nfinds a middle ground between the 4x reduction in computational\nthroughput and the 2x increase in memory traffic -increasing\nperformance by almost 2.5x compared with the\nCell for all problem sizes.\n9.4\nPerformance Comparison\nThe peak Cell FFT performance is compared to a number\nof other processors in the Table 6. These results are conservative\ngiven the naive 1D FFT implementation we used\non Cell whereas the other systems in the comparison used\nhighly tuned FFTW [7] or vendor-tuned FFT implementations\n[25]. Nonetheless, in DP, Cell\npm\nis at least 12x faster\nthan the Itanium2 for a 1D FFT, and Cell\npm\n+\ncould be as\nmuch as 30x faster for a large 2D FFT. Cell+ more than\ndoubles the DP FFT performance of Cell for all problem\nsizes. Cell performance is nearly at parity with the X1E in\ndouble precision; however, we believe considerable headroom\nremains for more sophisticated Cell FFT implementations.\nIn single precision, Cell is unparalleled.\nNote that FFT performance on Cell improves as the number\nof points increases, so long as the points fit within the\nlocal store. In comparison, the performance on cache-based\nmachines typically reach peak at a problem size that is far\nsmaller than the on-chip cache-size, and then drops precip-itously\nonce the associativity of the cache is exhausted and\ncache lines start getting evicted due to aliasing. Elimination\nof cache evictions requires extensive algorithmic changes for\nthe power-of-two problem sizes required by the FFT algorithm\n, but such evictions will not occur on Cells software-managed\nlocal store. 
Furthermore, we believe that even for\nproblems that are larger than local store, 1D FFTs will con\nX1E FFT numbers provided by Cray's Bracy Elton and\nAdrian Tate.\n18\ntinue to scale much better on Cell than typical cache-based\nsuperscalar processors with set-associative caches since local\nstore provides all of the benefits as a fully associative cache.\nThe FFT performance clearly underscores the advantages\nof software-controlled three-level memory architecture over\nconventional cache-based architectures.\nCONCLUSIONS AND FUTURE WORK\nThe Cell processor offers an innovative architectural approach\nthat will be produced in large enough volumes to be\ncost-competitive with commodity CPUs. This work presents\nthe broadest quantitative study Cell's performance on scientific\nkernels and directly compares its performance to tuned\nkernels running on leading superscalar (Opteron), VLIW\n(Itanium2), and vector (X1E) architectures. We developed\nan analytic framework to predict Cell performance on dense\nand sparse matrix operations, stencil computations, and 1D\nand 2D FFTs. Using this approach allowed us to explore\nnumerous algorithmic approaches without the effort of implementing\neach variation. We believe this analytical model\nis especially important given the relatively immature software\nenvironment makes Cell time-consuming to program\ncurrently; the model proves to be quite accurate, because\nthe programmer has explicit control over parallelism and\nfeatures of the memory system.\nFurthermore, we propose Cell+, a modest architectural\nvariant to the Cell architecture designed to improve DP behavior\n. Overall results demonstrate the tremendous potential\nof the Cell architecture for scientific computations in\nterms of both raw DP and SP performance and power efficiency\n. In addition, we show that Cell+ significantly out-performs\nCell for most of our evaluated DP kernels, while\nrequiring minimal microarchitectural modifications to the\nexisting design.\nAnalysis shows that Cell's three level software-controlled\nmemory architecture, which completely decouples main memory\nload/store from computation, provides several advantages\nover mainstream cache-based architectures. First, kernel\nperformance can be extremely predictable as the load\ntime from local store is constant. Second, long block transfers\ncan achieve a much higher percentage of memory bandwidth\nthan individual loads in much the same way a hardware\nstream prefetch engine, once engaged, can fully consume\nmemory bandwidth. Finally, for predictable memory\naccess patterns, communication and computation can be\noverlapped more effectively than conventional cache-based\napproaches. Increasing the size of the local store or reducing\nthe DMA startup overhead on future Cell implementations\nmay further enhance the scheduling efficiency by enabling\nmore effective overlap of communication and computation.\nThere are also disadvantages to the Cell architecture for\nkernels such as SpMV. With its lack of unaligned load support\n, Cell must issue additional instructions simply to permute\ndata, yet still manages to outperform conventional\nscalar processor architectures.\nEven memory bandwidth\nmay be wasted since SpMV is constrained to use tiling to\nremove the indirectly indexed accesses to the source vector\n. The ability, however, to perform a decoupled gather,\nto stream nonzeros, and Cell's low functional unit latency,\ntends to hide this deficiency. 
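The communication/computation overlap credited above to the software-controlled memory is usually expressed as a double-buffered DMA loop. The sketch below is a generic illustration only: dma_get_async() and dma_wait() are placeholders standing in for the SPE's MFC get and tag-status operations, and the block size and names are ours.

enum { CHUNK = 16 * 1024 };            /* bytes per block transfer          */

/* Placeholders for the MFC get and tag-status wait on the SPE. */
void dma_get_async(void *ls, unsigned long long ea, int size, int tag);
void dma_wait(int tag);
void compute(const char *block, int size);

void stream(unsigned long long ea, long total)
{
    static char buf[2][CHUNK];
    int cur = 0;

    dma_get_async(buf[cur], ea, CHUNK, cur);          /* prime the pipeline */
    for (long off = 0; off < total; off += CHUNK) {
        int nxt = cur ^ 1;
        if (off + CHUNK < total)                      /* prefetch next block */
            dma_get_async(buf[nxt], ea + off + CHUNK, CHUNK, nxt);
        dma_wait(cur);                                /* wait for current    */
        compute(buf[cur], CHUNK);                     /* overlaps with DMA   */
        cur = nxt;
    }
}

While one block is being computed on, the next is already in flight, which is how long, predictable transfers keep the memory bus busy.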
Additionally, we see stencil\ncomputations as an example of an algorithm that is heavily\ninfluenced by the performance of the permute pipeline.\nHere, the lack of support for an unaligned load instruction\nSpeedup vs.\nPower Efficiency vs.\nCell+\nX1E\nAMD64 IA64\nX1E\nAMD64\nIA64\nGEMM\n3x\n12.7x\n9.5x\n9x\n28.3x\n30.9x\nSpMV\n>2.7x\n>8.4x\n>8.4x >8.0x >18.7x >27.3x\nStencil\n5.4x\n37.0x\n17.7x\n16.2x\n82.4x\n57.5x\n1D FFT\n2.3x\n10.6x\n7.6x\n6.9x\n23.6x\n24.7x\n2D FFT\n2.3x\n13.4x\n30.6x\n6.9x\n29.8x\n99.5x\nSpeedup vs.\nPower Efficiency vs.\nCell\nX1E\nAMD64 IA64\nX1E\nAMD64\nIA64\nGEMM\n0.8x\n3.7x\n2.7x\n2.4x\n8.2x\n8.78x\nSpMV\n2.7x\n8.4x\n8.4x\n8.0x\n18.7x\n27.3x\nStencil\n1.9x\n12.7x\n6.1x\n5.7x\n28.3x\n19.8x\n1D FFT\n1.0x\n4.6x\n3.2x\n3.0x\n10.2x\n10.4x\n2D FFT\n0.9x\n5.5x\n12.7x\n2.7x\n12.2x\n41.3x\nTable 7: Double precision speedup and increase in\npower efficiency of (Top) Cell+ and (Bottom) Cell,\nrelative to the X1E, Opteron, and Itanium2 for our\nevaluated suite of scientific kernels. Results show an\nimpressive improvement in performance and power\nefficiency.\nis a more significant performance bottleneck than either the\nSP execution rate or the memory bandwidth.\nFor dense matrix operations, it is essential to maximize\ncomputational intensity and thereby fully utilize the local\nstore.\nHowever, if not done properly, the resulting TLB\nmisses adversely affect performance. For example, in the\nGEMM kernel we observe that the BDL data storage format,\neither created on the fly or before hand, can ensure that\nTLB misses remain a small issue even as on-chip memories\nincrease in size.\nTable 7 compares the advantage of Cell and Cell+ based\non the better of performance model or actual implementation\n(where available) in terms of DP performance and\npower efficiency for our suite of evaluated kernels and architectural\nplatforms. Observe that Cell+ has the potential to\ngreatly increase the already impressive performance characteristics\nof Cell.\nBy using the insight gained in the development of our estimation\nmodel, we developed an optimized SpMV version\nthat outperformed our initial predictions by 25% 70%. If a\nfull system simulator could model the modest improvements\nof our Cell+ variant, we feel confident that we could demonstrate\ncomparable improvements to DP performance as well.\nWe also note that DP stencil performance fell short of our\nmodel by 13% due to previously unknown microarchitectural\nlimitations. However, time skewing showed a huge benefit\nin SP and we believe a similar benefit would be present in\nDP on Cell+ variant.\nIt is important to consider these performance differences\nin the context of increasingly prevalent multi-core commodity\nprocessors. The first generation of this technology will\ninstantiate at most two cores per chip, and thus will deliver\nless than twice the performance of today's existing architectures\n. This factor of 2x is trivial compared with Cell+'s\npotential of 10-20x improvement.\nWhile peak Cell DP performance is impressive relative to\nits commodity peers, a fully utilizable pipelined DP floating\npoint unit would boost Cell (i.e. Cell+) performance and\nefficiency significantly.\n19\nAcknowledgments\nThis work was supported by the Director, Office of Science,\nof the U.S. Department of Energy under Contract No. DE-AC02\n-05CH11231. 
The authors gratefully thank Bracy Elton\nand Adrian Tate for their assistance in obtaining X1E\nFFT performance data, and Eduardo D'Azevedo for providing\nus with an optimized X1E SpMV implementation.\nREFERENCES\n[1] G. Blelloch, M. Heroux, and M. Zagha. Segmented\noperations for sparse matrix computation on vector\nmultiprocessors. Technical Report CMU-CS-93-173,\nCMU, 1993.\n[2] Cactus homepage. http://www.cactuscode.org.\n[3] L. Cannon. A Cellular Computer to Implement the\nKalman Filter Algorithm. PhD thesis, Montana State\nUniversity, 1969.\n[4] Cell broadband engine architecture and its first\nimplementation. http://www-128.ibm.com/\ndeveloperworks/power/library/pa-cellperf/.\n[5] Chombo homepage.\nhttp://seesar.lbl.gov/anag/chombo.\n[6] E. D'Azevedo, M. R. Fahey, and R. T. Mills.\nVectorized sparse matrix multiply for compressed row\nstorage format. In International Conference on\nComputational Science (ICCS), pages 99106, 2005.\n[7] FFTW speed tests. http://www.fftw.org.\n[8] B. Flachs, S. Asano, S. Dhong, et al. A streaming\nprocessor unit for a cell processor. ISSCC Dig. Tech.\nPapers, pages 134135, February 2005.\n[9] P. Francesco, P. Marchal, D. Atienzaothers, et al. An\nintegrated hardware/software approach for run-time\nscratchpad management. In Proceedings of the 41st\nDesign Automation Conference, June 2004.\n[10] Ibm cell specifications.\nhttp://www.research.ibm.com/cell/home.html.\n[11] E.-J. Im, K. Yelick, and R. Vuduc. Sparsity:\nOptimization framework for sparse matrix kernels.\nInternational Journal of High Performance Computing\nApplications, 2004.\n[12] The Berkeley Intelligent RAM (IRAM) Project.\nhttp://iram.cs.berkeley.edu.\n[13] G. Jin, J. Mellor-Crummey, and R. Fowlerothers.\nIncreasing temporal locality with skewing and\nrecursive blocking. In Proc. SC2001, 2001.\n[14] J. Kahle, M. Day, H. Hofstee, et al. Introduction to\nthe cell multiprocessor. IBM Journal of R&D, 49(4),\n2005.\n[15] S. Kamil, P. Husbands, L. Oliker, et al. Impact of\nmodern memory subsystems on cache optimizations\nfor stencil computations. In ACM Workshop on\nMemory System Performance, June 2005.\n[16] M. Kandemir, J. Ramanujam, M. Irwin, et al.\nDynamic management of scratch-pad memory space.\nIn Proceedings of the Design Automation Conference,\nJune 2001.\n[17] P. Keltcher, S. Richardson, S. Siu, et al. An equal area\ncomparison of embedded dram and sram memory\narchitectures for a chip multiprocessor. Technical\nreport, HP Laboratories, April 2000.\n[18] B. Khailany, W. Dally, S. Rixner, et al. Imagine:\nMedia processing with streams. IEEE Micro, 21(2),\nMarch-April 2001.\n[19] M. Kondo, H. Okawara, H. Nakamura, et al. Scima: A\nnovel processor architecture for high performance\ncomp uting. In 4th International Conference on High\nPerformance Computing in the Asia Pacific Region,\nvolume 1, May 2000.\n[20] A. Kunimatsu, N. Ide, T. Sato, et al. Vector unit\narchitecture for emotion synthesis. IEEE Micro, 20(2),\nMarch 2000.\n[21] Z. Li and Y. Song. Automatic tiling of iterative stencil\nloops. ACM Transactions on Programming Language\nSystems, 26(6), 2004.\n[22] S. Mueller, C. Jacobi, C. Hwa-Joon, et al. The vector\nfloating-point unit in a synergistic processor element\nof a cell processor. In 17th IEEE Annual Symposium\non Computer Arithmetic (ISCA), June 2005.\n[23] M. Oka and M. Suzuoki. Designing and programming\nthe emotion engine. IEEE Micro, 19(6), November\n1999.\n[24] L. Oliker, R. Biswas, J. Borrill, et al. 
A performance\nevaluation of the Cray X1 for scientific applications. In\nProc. 6th International Meeting on High Performance\nComputing for Computational Science, 2004.\n[25] Ornl cray x1 evaluation.\nhttp://www.csm.ornl.gov/\n\ndunigan/cray.\n[26] N. Park, B. Hong, and V. Prasanna. Analysis of\nmemory hierarchy performance of block data layout.\nIn International Conference on Parallel Processing\n(ICPP), August 2002.\n[27] D. Pham, S. Asano, M. Bollier, et al. The design and\nimplementation of a first-generation cell processor.\nISSCC Dig. Tech. Papers, pages 184185, February\n2005.\n[28] Sony press release. http://www.scei.co.jp/\ncorporate/release/pdf/050517e.pdf.\n[29] M. Suzuoki et al. A microprocessor with a 128-bit cpu,\nten floating point macs, four floating-point dividers,\nand an mpeg-2 decoder. IEEE Solid State Circuits,\n34(1), November 1999.\n[30] S. Tomar, S. Kim, N. Vijaykrishnan, et al. Use of local\nmemory for efficient java execution. In Proceedings of\nthe International Conference on Computer Design,\nSeptember 2001.\n[31] R. Vuduc. Automatic Performance Tuning of Sparse\nMatrix Kernels. PhD thesis, University of California\nat Berkeley, 2003.\n[32] D. Wonnacott. Using time skewing to eliminate idle\ntime due to memory bandwidth and network\nlimitations. In International Parallel and Distributed\nProcessing Symposium (IPDPS), 2000.\n20\n", "keywords": "GEMM;FFT;Cell processor;three level memory;SpMV;Stencil;sparse matrix"} {"name": "192", "title": "The Use of Mediation and Ontology Technologies for Software Component Information Retrieval", "abstract": "Component Based Development aims at constructing software through the inter-relationship between pre-existing components. However, these components should be bound to a specific application domain in order to be effectively reused. Reusable domain components and their related documentation are usually stored in a great variety of data sources. Thus, a possible solution for accessing this information is to use a software layer that integrates different component information sources. We present a component information integration data layer, based on mediators. Through mediators, domain ontology acts as a technique/formalism for specifying ontological commitments or agreements between component users and providers, enabling more accurate software component information search.", "fulltext": "INTRODUCTION\nComponent Based Development (CBD) [1] aims at constructing\nsoftware through the inter-relationship between pre-existing\ncomponents, thus reducing the complexity, as well as the cost of\nsoftware development, through the reuse of exhaustively tested\ncomponents. Building new solutions by combining components\nshould improve quality and support rapid development, leading to\na shorter time-to-market. At the same time, nimble adaptation to\nchanging requirements can be achieved by investing only in key\nchanges of a component-based solution, rather than undertaking a\nmajor release change. For these reasons, component technology is\nexpected by many to be the cornerstone of software production in\nthe years to come.\nAccording to Jacobson, Griss and Jonsson [1], the effectiveness of\ncomponent reuse depends on the connectiveness among them and\ntheir binding to specific application domains. The connectiveness\nis one of the most discussed problems in CBD [6, 12]. Approaches\nthat deal with component interfaces\n(one of premises for\nconnection between components) focus on their capability to\nprovide and request services. 
Although this interface aspect is important in a CBD approach, other problems arise when trying to connect components. The connectiveness also depends on the execution environment, the heterogeneity of components, the distance between them and the architecture that controls their connections [1, 5, 12]. The architecture that governs the connections between components also depends on the application domain. Therefore, reuse possibilities increase when components are bound to domain concepts. As stated by Krueger [1], while retrieving reusable software components, it is advantageous to use a terminology that is familiar to the domain. This approach diverges from other software component retrieval proposals, such as the Agora System [6], which bases the search only on the component interfaces, covering solely the component connectiveness problem, and the RIG initiative [10], which presents an approach for domain repository integration with little user transparency and without web access.
Suppose that, in a typical component retrieval scenario, a software developer wants to find software components to use in the construction of an application under development. If he does not know any other specialized service that provides information about components, the natural search space will be the Internet. Now, consider that this developer has no knowledge about the available components. Thus, the following actions are necessary to discover software components that satisfy his needs:
1. To locate information about components that can be stored in distributed repositories. This could typically be done through an Internet search engine that takes as input a few keywords and returns a list of relevant resource descriptions that might be available in these repositories. The success of this task directly depends on the interest of the repository administrators in publicizing their data and on the precision of the user while providing the keywords.
2. To determine the usability of the search results. This analysis is far from trivial, due to the complexity of assessing component usefulness (i.e., considering the component domain, functionality and connection possibilities based on architecture decisions).
(An interface of a component can be seen as a component's part that defines its access points. These points allow clients (components themselves) to access services provided by the component [12].)
Thus, a naive Internet approach will not cope with the complexity of software component retrieval. The previous actions require an engine that combines the following three characteristics: (i) Distribution and Heterogeneity - software components can be distributed and use different kinds of storage; (ii) Domain Ontology - to organize component repositories within a domain in order to ease the search; (iii) Software Component Information Evolution - to insert new information (including legacy information).
However, as stated before, current software component retrieval proposals lack either heterogeneity support or domain ontology. On the other hand, many database projects [3, 4, 7, 8, 9] are particularly concerned with the distribution, heterogeneity (and ontology) found in legacy databases. These projects are known as "multi-database" or Heterogeneous and Distributed Data Base Systems (HDDS) [4]. One solution found in HDDS is the use of mediators [2] combined with ontology [14] to integrate, identify and retrieve related legacy databases.
Ontology in this context can be defined\nas a vocabulary of terms and the relationship between them. For\neach term, a definition must be created, using an informal\ndescription, some concrete examples in the domain, and also a\nformal specification of the relationships between terms, thus\nforming a semantic network.\nWe believe that the HDDS technologies can be adapted to handle\nsoftware component repositories in the place of legacy databases.\nMediators can represent and integrate domain information\nrepositories (distributed and/or heterogeneous). Metadata found in\nmediators can describe the repositories of components, presenting\nthe domain, their semantics, software architecture and interfaces.\nUsually a query engine is available in HDDS and therefore ad hoc\nqueries over this metadata can be used to analyze the available\ncomponents. The organization of mediators with ontology drives\nthe user search along heterogeneous vocabulary.\nTherefore, our main objective is to present a software component\ninformation retrieval engine named Odyssey Mediation Layer\n(OML) that combines the connectiveness of components and the\ndomain concept approach. We address both issues through the\nadoption of mediation [2] and ontology [14] technologies,\nrespectively.\nOur approach is motivated by a project that is being conducted in\nthe Legislative House domain. There are several applications that\ncan benefit from reusable information within this domain and\nfrom other related ones, such as justice domain, criminal domain,\namong others. Our users are not specialists in the latter domains,\nonly in the Legislative domain. However, it is important that\nrelevant reusable information from all related domains can be\npresented to them, particularly when they are not aware of its\nexistence. Most components of legislative process applications\ncan be reused from the legislative domain (e.g., Proposal Creation,\nLegislature Evaluation, Council Members referee, among others),\nbut sometimes it is worth looking at components from other\nrelated domains such as justice domain. Our retrieval engine is\nable to identify and suggest components from other related\n\n2\n\nAccording to Wiederhold [2], mediators are modules that encompass\nlayers of mediation services, connecting bases of heterogeneous and\ndistributed data (producers) to information systems (consumers).\ndomains in the same way as it suggests components from the\nLegislative domain.\nWith our retrieval engine, the user search can rely on a controlled\nvocabulary (ontology) composed of domain terms that are familiar\nto him. Thus, the search is more focused and executed over\nrelevant available component information repositories. Besides,\nthe usefulness of the retrieved components is supported by the\nbindings between ontology terms and related components. This\nbinding is accomplished by domain specialists together with\ndomain engineers during a domain engineering process [5, 17]\nthus enforcing the biding precision. In order to address these\nissues, the retrieval engine organizes component information\nrepositories within domain ontologies while preserving its original\ncharacteristics of distribution and heterogeneity, all in a flexible\nway.\nThe main contribution of our proposal is to provide an approach\nfor accessing software components through the use of ontologies\nand mediators. 
Our innovative aspect is to provide flexibility,\ntransparency and accuracy in software component information\nretrieval.\nIn order to present our approach, the paper is organized as\nfollows: Section 2 discusses the novelties of our proposal with\nrelated works; Section 3 details the architecture of Odyssey\nMediation Layer; Section 4 shows a component retrieval example;\nand Section 5 presents our concluding remarks.\n\nRELATED WORKS\nIn a broader approach, several information retrieval systems use\nsemantic brokering with respect to their resources. These systems\ninclude SIMS [3], TSIMMIS [4], InfoMaster [7], and Information\nManifold [8]. These systems work in the definition of some sort of\ncommon vocabulary (similar to an ontology) to define objects in\ntheir domain. Individual information sources that contain these\nobjects describe constraints on objects that they can provide, in\nterms of this common vocabulary. The broker then uses these\nconstraints to determine how to process queries. These projects\ndeal with generic designs for database retrieval. Our approach\ncombines the mediation technology with specific domain ontology\n[2] to integrate different software components data sources. The\nmain difference between these projects and ours is that our\napproach is particularly concerned with software components.\nThus, we use an ontology, which is specifically constructed for\nthat. This ontology is specified during a Domain Engineering\nprocess [5] tailored for this purpose. Hence, the ontology accuracy\nis more efficient, and consequently the usefulness of the retrieved\ncomponents.\nAnother work worth mentioning is the InfoSleuth system[9]. It is\na large project, conducted by MCC, which uses an ontology\napproach to retrieve information from distributed and\nheterogeneous databases. InfoSleuth can be seen as a framework\nthat can be tailored for a given purpose. One interesting\napplication is the EDEN project in the environmental domain [9].\nIn this case, some tools of InfoSleuth were customized for this\nproject. Our project adopts a similar approach, where our\nconstructs are specific for software component information\nretrieval.\nThe Agora System [6] describes a search engine for retrieving\nreusable code components, such as JavaBeans and CORBA\ncomponents. Agora uses an introspection mechanism for\n20\nregistering code components, through its interface. As a result, this\ninformation may not be available at a certain time because the\nrepository is not running, or the information cannot be located. In\neach of these cases, the interface information cannot be\nsuccessfully retrieved and indexed, and the component is not\nregistered in the AGORA index database. In our proposal, the use\nof mediators provides the flexibility to access remote component\nrepositories. There is no need for an index phase, and the mediator\nis able to capture updates that may occur in a remote repository\n(i.e., the repository, using the translator\n3\nservices, sends a message\nwith these updates to the mediator). Moreover, the mediator\nmetadata manager has access to all the ontological terms of a\ngiven domain, facilitating the identification of the existence of a\ncomponent, even if its repository is out of service. In this case, the\nuser knows that the component exists and that he can retrieve it\nlatter. 
Moreover, new information is always associated to domain\nterms within a given domain ontology, improving its accessibility\nand reuse.\nYe and Fischer [18] presents an approach that provides an active\nrepository for components. The work emphasizes active delivery\nof reuse information, helping on the identification of components\nthat developers did not even know that existed. Regarding this last\naspect, it is similar to our approach. Our component information\nretrieval system also provides this functionality too, once it\naccesses components from other domains based on semantic\nsimilarity. The active repository functionality, although not\ndescribed in this paper, is also part of our work [16]. One aspect\nthat is different in our proposal is the retrieval of distributed\ninformation, which is not mentioned in Ye and Fischer's work.\nAnother important work to mention is the RIG initiative [9], which\ndescribes a reuse library interoperability approach. The idea of the\nasset library interoperability is based on the storage of domain\ninformation in several databases. These databases are static and\nbased on a unique global model. The integration requires that\ninformation is stored according to this unique model. Therefore, if\nany reuse database is to be integrated, it has to be translated to the\nRIG model. Alternatively, the mediation approach creates a new\nlevel of abstraction above the database model, allowing the insertion\nand/or removal of repositories from the mediation structure without\nthe need for updates on the whole structure. Moreover, RIG lacks a\nmore effective search engine that provides searches based on\ndomain concepts and filtering of relevant information, including\nInternet access, as we do in our work.\n\nSPECIALIZING A MEDIATION ARCHITECTURE TO A COMPONENT INFORMATION RETRIEVAL ENGINE\nThe effectiveness of a software component information retrieval\nengine is associated to its capacity to handle the distribution and\nheterogeneity of software components, to organize component\nrepositories within a domain, and to enable software component\nevolution. These requirements can be accomplished through a\nsoftware layer that is seen as a particular case of HDDS.\nAs stated before, mediators are modules that encompass layers of\nmediation services, connecting bases of heterogeneous and\ndistributed data (producers) to information systems (consumers).\nHence, in order to be really useful in software component\ninformation search this solution has to be tailored to component\n\n3\nA translator in this context is the same as a wrapper or adapter.\ninformation retrieval, considering the component domain, its\nsemantics, architecture, and interfaces.\nIn a mediation architecture, as new sources of information are\naggregated to the mediation structure, the amount of information\nto be modeled increases, frequently generating inconsistencies,\nambiguities, and conflicts in the represented information. One way\nto deal with this problem is to partition the consumers and\nproducers' models and the structure of mediation by domain. The\ndescription of these models, partitioned by domains, forms the so-called\ndomain ontology [14].\nIn the context of component information retrieval, the use of\nmediators allows the information access to be carried\nindependently of the format and the operational platform where it\nis stored. 
Therefore, the structure of a retrieval engine as a whole\ncan be flexible, since existing component data sources can be\nadded to the architecture in an easy way, with no need to convert\nfrom the original information format (format form of\ndata/information source) to the format used by the reuse\nenvironment. Another interesting feature of mediators within a\ncomponent information retrieval engine is that reusable\ninformation is naturally organized by domain (Figure 1), which\nfacilitates the search for domain concepts, since specific domain\ndata is accessed in a search. Moreover, the use of mediators allows\nthe aggregation of information already stored in legacy databases,\nwithout the need to transform the original database format.\nIn order to help on the correct choice of mediators for a given\ndomain, the mediator layer provides a specific ontology for each\ndomain. Therefore, this ontology must be specified by domain\nspecialists, facilitating the search for specific components, since\nthe ontology definition is directly connected to software\ncomponents within the domain.\nThe use of this layer in the Legislative Domain is particularly\ninteresting, since in Brazil as in other countries [15], there are\nsome legislative houses that are more up to date with software\ntechnology than others. The former represents a reference source\nof software components to several Legislative Houses. Without\nthis kind of layer, there exist some barriers for reusing\ncomponents among these houses, such as the distance, scarce\nfinancial resources, and semantic conflicts among components (a\ncommon component functionality can be identified differently in\neach legislative house).\nFigure 1 presents an example of a mediation layer configuration\nfor this specific application domain. Several mediators are\npresented as sub-domains, such as State Legislative (SL) and\nMunicipal Legislative (ML) domains. The SL Mediator is\naggregated (P1) to ML, generating a more generic mediator that\ncombines the two domains. The latter can be used in cases where\ninformation concerning the two domains is necessary. Each\nmediator is connected to the related domain data sources that\ncontain reuse component information. The Justice Domain\nMediator may be accessed in cases where the user wants\ncomponents related to the Justice domain.\nIn order to provide an architecture that is able to handle the\nrequirements of component information search and retrieval, we\nspecified and implemented OML (Odyssey Mediation Layer) was\nspecified and implemented, based on mediation and ontology\ntechnologies. 
OML is derived from a HDDS mediator, the HIMPAR architecture [12], adding to it more precision and semantics, using ontologies tailored to software component information retrieval.

[Figure 1 - An example of a mediation layer for the Legislative domain (P1: aggregation, P2: association)]

[Figure 2 - Architecture of the Odyssey Mediation Layer]

The OML engine is part of a reuse environment, named Odyssey, that deals with component based development of applications within a given domain. Specifically, OML is part of a system of agents that helps users in their search for reusable components [16]. It uses intelligent mechanisms such as learning techniques and user preferences in order to present in advance components that the user does not even know exist. The agent system infers this information and presents the components to the user. It is important to notice that although OML was built in the context of the Odyssey project, it can be used standalone or integrated with other tools, since OML offers a user interface (as seen in Figures 3 through 8) and a CORBA IDL interface for its communication with other tools.
Figure 2 presents OML, which comprises four levels: Interface, Mediation Layer, ORB bus, and Translators. The Interface level is implemented by the Service Manager (SM), which stores metadata about available mediators and is capable of creating ontological bindings between related ontologies in order to query several mediation layers. Also, SM is responsible for the creation and modification of mediators. The Mediation Layer provides the management of each mediator through the Metadata Manager (MM), and provides access to mediators through the Query Manager. At the ORB level, communication between the mediation layer and translators is established through CORBA standard services.
Finally, the Translator level provides one\ntranslator for each component repository in such a way that it can\nparticipate in the Mediation Layer integration model.\n3.1 Service Manager\n\nThe Service Manager (SM) stores metadata about mediators,\ntranslators and data sources availability, and deals with\nontological commitments between related mediators (domains).\nSchema 1 presents an overview of SM Metadata in ODL\nnotation, and Schema 2 provides the IDL interface to access\nmediators through the CORBA bus.\n\n\nclass Object_Himpar\n( extent Himpares)\n{\nattribute string Name;\n}\n\nclass Mediator extends Object_Himpar\n( extent Mediadores)\n{\nrelationship list<Container> AssociatedDataSources\ninverse DataSource::Medis;\nattribute string Description;\nattribute string KeyWords;\nrelationship list<Mediador> Super\ninverse Mediador::*;\nrelationship list<Mediador> Spec\ninverse Mediador::*;\nrelationship list<Mediador> Assoc\ninverse Mediador::*;\nattribute string BaseName;\nrelationship list<OntologyTerm> TermRel\ninverse OntologyTerm::MediadorRel;\nattribute String password;\n}\n\n\nclass Wrapper extends Object_Himpar\n( extent Wrappers)\n{\nattribute string description;\nattribute string type;\nrelationship list<DataSource> Repositories\ninverse DataSource::Trad;\n}\n\nclass DataSource extends Object_Himpar\n( extent DataSources)\n{\nattribute string owner;\nrelationship list<Mapping> Structure\ninverse Mapping::Cont;\nattribute string AbstractionLevel;\nrelationship Wrapper Trad\ninverse Wrapper::Repositories;\nrelationship list<Mediator> Medis.\ninverse Mediator:: AssociatedDataSources;\nattribute String password;\n}\n\nclass Mapping\n{\nattribute string DataSourceName;\nattribute string map;\nrelationship DataSource Cont\ninverse DataSource::Structure;\n}\n\nclass OntologyTerm\n( extent Terms)\n{\nattribute string Name;\nrelationship list<OntologyTerm> Synonym\ninverse OntologyTerm::*;\nrelationship list< OntologyTerm > Hipernym\ninverse OntologyTerm::*;\nrelationship list< OntologyTerm > Hiponym\ninverse OntologyTerm::*;\nrelationship Mediator MediadorRel\ninverse Mediator: TermRel;\n}\n\nclass Component\n( extent Components)\n{\nattribute string type;\n}\n\nSchema 1 Metadata of Service Manager\nThe ontological commitments between related mediators are all\ndone at SM level. SM provides the necessary metadata for this,\nusing the Mediator, DataSource, Mapping, OntologyTerm and\nComponent classes described in Schema2. The decision to\nconcentrate all ontological commitments at SM level was mainly\nbased on the SM available information. It knows the availability\nof all OML components and thus is able to indicate which domain\nthe user could access. 
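A simplified, in-memory analogue of the OntologyTerm metadata of Schema 1 helps to make the role of the semantic links concrete. The C sketch below is illustrative only -- the real metadata lives in the SM repository and is reached through the IDL of Schema 2 -- and all names are ours.

#define MAX_LINKS 8

/* In-memory analogue of the OntologyTerm class of Schema 1. */
typedef struct term {
    const char  *name;
    struct term *synonym[MAX_LINKS];   /* same concept in other domains */
    struct term *hypernym[MAX_LINKS];  /* more general terms            */
    struct term *hyponym[MAX_LINKS];   /* more specific terms           */
} term_t;

/* Collect the terms a query on `t` may be expanded to: the term
   itself, its synonyms, and the more general or more specific
   terms reachable through the semantic links.                     */
int expand(const term_t *t, const term_t *out[], int max)
{
    int n = 0;
    if (n < max) out[n++] = t;
    for (int i = 0; i < MAX_LINKS && t->synonym[i];  i++) if (n < max) out[n++] = t->synonym[i];
    for (int i = 0; i < MAX_LINKS && t->hypernym[i]; i++) if (n < max) out[n++] = t->hypernym[i];
    for (int i = 0; i < MAX_LINKS && t->hyponym[i];  i++) if (n < max) out[n++] = t->hyponym[i];
    return n;
}

Each expanded term belongs to some mediator, so the expansion also determines which domains (and hence which repositories) a query should be routed to.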
In order to use a given mediator within\nOML, the administrator has to register the mediator, its related\ndata sources, and translators used by these data sources into SM.\nFigures 3 and 4 present examples of the interfaces for doing this.\nmodule Mediator\n{\ninterface Access {\nstruct OntologyTerm {\nstring\nName;\nstring\nDescription;\n};\nstruct object {\nstring\ntype;\nstring\ndefinition;\n};\ntypedef sequence<Ontology> ListOntology;\ntypedef sequence<object> ListObjects;\n\n// Functions for the management of bases\n\nstring open_base (in string basename);\n\n\nvoid close_base (in string basename);\n// Functions for the management of ontologies\n\nListOntology retrieve_Ontology (in string\nmediator-name);\n\n// Functions for retrieve components\nListObjects queryMediator(in string query);\n\n\n\n};\n};\nSchema 2 SM IDL interface to access mediators\nFigure 3 shows some information about the Municipal Legislative\nMediator within SM. Some basic metadata are: i) the mediator\nname, ii) the executable file that has to be loaded (if it is not\nalready loaded) by ORB, in order to respond to some request to\nthis specific mediator, iii) the keywords related to the mediator\n23\n(this information provides a fast and limited knowledge about the\ncontents of the mediator), iv) the password required by the\nmediator to attend the request (if necessary), among others. Figure\n4 presents a data source registration, associating a specific data\nsource, File System2, to the Municipal Legislative Mediator. We\nmay also register the available types of components in order to\nknow to which phase of the application development the\ncomponent belongs (analysis, architectural or implementation\nsee Figure 8).\nOne important characteristic of OML is to use domain ontology to\nsearch for domain terms and its ontological relationship, within or\namong various domains at different levels of abstraction. Thus,\nSM has to capture the ontological model\n4\nof each mediator and\nassociate terms among them. For capturing each ontological\nmodel, SM uses the ORB bus, through IDL retrieve_Ontology()\ninterface method (Schema 2), to access the specific mediator,\nretrieving its ontological terms. Therefore, the ontological model\nprovides the main structure for dealing with domain ontology\nrelationships. Relationships involve semantic links such as\nHypernyms, Hyponyms, and Synonyms. A Synonym link\nassociates ontological terms in several domains that represent\nsynonyms for a particular ontological term. Hypernyms and\nHyponyms links relate ontological terms from various domains\nthat can be either more general or more specific than the current\none. Thus, it is possible to associate ontological terms from\nmultiple domains, providing accessibility for domain information.\nFigure 5 presents the interface for the association of ontological\nterms.\nIn a query formulation, SM accesses and retrieves all related\nmediators, searching for component information that fulfills the\nquery semantics. The ontological information about requested\n\n4\nEach ontological term is specified as a domain term and a detailed\ndescription of it, and relationships with other ontological terms, at\ndifferent levels of abstraction, are created (see section 3.2)\n\ndomains is transferred by the Broker (ORB) to the proper domain\n(mediator), using the method queryMediator (in string query), and\ncorrect ontological domain terms.\nIn the example shown, SM will query all mediators that are related\nto the Municipal Legislative mediator. 
Thus, SM will transmit the\nquery to the Legislative Domain Mediator and Justice Domain\nMediator (see Figure 2). The retrieved components and their\ncorresponding match levels are shown, if each retrieved\ncomponent exactly matches the query or if the component\nFigure 3 A Mediator Registration Example\n\nFigure 4 A data source registration example\n\n24\npartially attends the request. OML registers this information and\npresents how well the component attends the request (total or\nsome percentage in the latter case). In section 4, we present a\nmore concrete example of this kind of query.\n3.2 Mediator Manager\nEach mediator has its own metadata. This metadata represents the\nontological model of the domain. The Mediator manager (MM)\nalso stores relationships among the ontological terms and\ncomponents stored in data sources. This metadata provides the\ncapability to retrieve components related to this mediator\n(domain). In order to provide this feature, MM also stores the\nontology metadata related to this domain. For each ontological\ndomain term, it is necessary to register its name, its type (in this\nspecific case a term that represents a functionality in the domain),\nits importance within the domain and the related domain terms.\nThese terms permit the expansion of the query range within the\ndomain, i.e., if there is no component information on data sources\nrelated to this specific ontological term, OML could query related\nontological terms in the same domain. Of course, this \"shift\" must\nbe reported to the user.\nIn order to relate each ontological term with its counterparts in\ndata sources, MM retrieves related information of data sources\nfrom SM, using the ORB bus. The retrieved information is used\nby MM to locate and retrieve software components from data\nsources. Thus, we can associate each ontological term with its\nrelated components, with the help of a specific translator.\nA RETRIEVAL EXAMPLE USING OML\nConsider a user who is developing an application to handle new\nlegislative proposals, among other characteristics, in the legislative\ndomain. He wants to know if he can use pre-existing software\ncomponents in his application. Thus, he can use the OML user\ninterface to know about the availability of this kind of\ncomponents, and to retrieve some candidates.\nIn our example (Figure 6), data source 2 has a binary software\ncomponent called \"New Subject\", and data source 1 has a Java\npackage (set of related classes) named \"Proposal Creation\". Both\ndata sources were mapped into the Legislative Municipal Domain\nMediator. Therefore, the Justice Domain mediator has an ontology\nterm named Justice Code that is mapped to a component named\nCode Database.\nThese components are made available to the Legislative\nMunicipal Domain through the ontological term in the mediator\ncalled \"Creation of New Proposals\", and are mapped to the above\ncomponents in data sources, i.e., data sources 1 and 2.\nDuring the creation of new proposals within a Municipal\nLegislative House, there are some cases when it is necessary to\nconsult justice database rules. This justice database can impose\nsome restrictions on a new proposal creation.\nFigure 5 Multiple Ontology Association\n\n25\n\nWhen the Municipal Legislative Mediator was registered, the SM\nadministrator associated this mediator with the Justice Mediator.\nThis Justice Mediator provides software components used for the\ndevelopment of applications in the Justice domain. 
Thus, when\nour user accesses the SM interface in order to retrieve components\nrelated to the creation of new proposals, he can choose to access\ninformation from all related mediators, i.e., generic mediators,\nspecific mediators, associated mediators or all of them. Suppose\nour user decides to retrieve information from the Legislative\nMediator and associated mediators, then he will access\ncomponents from the Legislative Mediator and Justice Mediator\n(see Figure 2). The formulation of the query (Figure 7), selecting\nthe type of component to be retrieved (components belong to\nanalysis, architectural, codification or all phases of development),\nand the result of this query is presented in Figure 8. Note that for\neach component, a description of the retrieval is presented (in\n\nData Source 1\nData Source 2\n\nMediator Manager\n\nORB\n\nJustice Rules\nProposal Creation\nNew Subject\nData Source 3\n\nRule Database\n\nMediator Manager\n\nCreation of New\nProposals\nService Manager\n\nCreation of\nNew Proposals\nHyponym\nJustice\nRules\n\nFigure 6 Retrieval Schema Example\n\n\nFigure 7 Query Formulation\n\n26\nFigure 8, the description presented is related to the Rule Database\nAccess component) and the user can select one or more\ncomponents to retrieve.\nThrough the mediation structure, OML users can search for\ncomponents in a transparent and uniform way. In the above\nexample, users of OML do not have to know where components\nare stored. Moreover, users do not have to query all component\nrepositories, using each specific repository query language format\n(when a query language exists) to find where the needed\ncomponents are stored. They do not even have do know how to\naccess data sources.\nThe complexity for dealing with these heterogeneous repositories\nis treated by OML. Without this layer, users would have to handle\nthese repositories individually, increasing the complexity of the\nquery and access. By using mediators, users can query specifically\nthe mediation metadata, using one single model. The mappings\nbetween mediator metadata and translators redirect and\ndecompose the query to data sources 1, 2, and 3. Also, the\nidentification of components of the same domain that are in\ndifferent repositories can be detected at the time of their\nregistration in the mediation layer. Afterwards this is all\ntransparent to users.\n\nCONCLUSIONS\nThis work addresses the interoperability problem between\ncomponent information repositories. An integration layer was\ndeveloped to help searching and identifying suitable reuse\ncomponents. This layer is based on mediators and ontologies to\nprovide the binding of different components to their domain\nconcepts. To assist the identification of related components and\ntheir appropriate domain organization, each mediator encloses one\ndomain ontology and provides the mapping to their respective\nrepository of components.\nMediators provide a uniform view of the available components\norganized in domain taxonomy. Domain ontologies are used to\nhelp searching for reusable components information through the\nrepresentation of domain semantic concepts. Therefore, this\nmediation layer promotes domain information integration and\nprovides mechanisms to translate component requests across\nontologies. 
The important aspect of our proposal is the use of\ndomain ontologies, for reusable component retrieval, in a concrete\nsituation, allowing users to express component requests at a higher\nlevel of abstraction when compared to keyword based access or\ncomponent interface based access used in other proposals.\nWithout OML, users would have to access directly various\nrepositories, dealing with specific characteristics of each\nrepository. Therefore, the main contribution of this paper is to\nshow the potential of the technology of mediators, together with\nontology models, for dealing with components repositories\ncomplexities, and organizing the manipulation of different\ncomponents within a domain ontology. Although, the mediation\ntechnology is quite popular within HDDS, its adaptation using\ndomain ontologies for component information retrieval is\ninnovative.\nOML is an operational interoperability architecture based on the\nuse of mediators, translators, and a CORBA communication\nprotocol, which is responsible for the connection among\ntranslators and mediators in a distributed and heterogeneous\nenvironment. It was constructed using the C++ language together\nwith the Visibroker ORB for C++. Currently, OML is being\nextended in order to publish and search for components on the\nInternet [19], based on XML standard.\n\nReferences\n[1]\nJacobson, I.; Griss, M.; Jonsson, P. : \"Software Reuse:\nArchitecture, Process and Organization for Business\nSuccess;\" Addison Wesley Longman, May 1997.\n\nFigure 8 Example of component information retrieval in OML\n\n27\n[2] Wiederhold, Gio; Jannink, Jan: \"Composing Diverse\nOntologies;\" 8th Working Conference on Database\nSemantics (DS-8), Rotorua, New Zealand (DS-8) January\n1999 (Final version to be published by\nIFIP/Kluwer/Chapman&Hall).\n[3]\nArens Y., Knoblock C.A., and Shen W.: Query\nreformulation for dynamic information integration.\nJournal of Intelligent Information Systems, 6(2):99130,\n1996.\n[4]\nMolina, Garcia and et.al. :The TSIMMIS approach to\nmediation: Data models and languages. Journal of\nIntelligent Information System, 8(2), 1997.\n[5] Braga, R.; Mattoso, M.; Werner, C.: \"The Use of\nMediators for Component Retrieval in a Reuse\nEnvironment,\" In: Proc. Technology of Object-Oriented\nLanguages and Systems Conference (TOOLS-30\nUSA'99), IEEE CS Press, Santa Barbara, pp.542-546,\nAugust 1999.\n[6]\nSeacord, R.; Hissan, S.; Wallnau, K,: \"Agora: A Search\nEngine for Software Components,\" Technical Report\nCMU/SEI-98-TR-011, August 1998.\n[7]\nGenesereth M.R., Keller A., and Duschka O.M.:\nInfomaster: An Information Integration System. In\nSIGMOD RECORD, Proceedings of the 97 ACM\nSIGMOD International Conference on Management of\nData, pp. 539542, Tucson-Arizona, 1997.\n[8]\nLevy, Alon Y., Rajaraman, Anand, and Ordille, Joann J.:\nQuerying heterogeneous information sources using source\ndescriptions. In Proceedings of the 22nd VLDB\nConference, pp. 251262, Mumbai (Bombay), India,\n1996.\n[9]\nFowler, Jerry, Perry, Brad, Nodine, Marian, and\nBargmeyer, Bruce: Agent-Based Semantic Interoperability\nin InfoSleuth , SIGMOD Record 28(1): pp. 
60-67, 1999.\n[10]\nRIG; \"Reusable Library Interoperability Group\" at\nhttp://www.asset.com/rig/, 1996.\n[11] Pires, P.; Mattoso, M.: \"A CORBA based architecture for\nheterogeneous information source interoperability;\"\nProceedings of Technology of Object-Oriented Languages\nand Systems - TOOLS'25, IEEE CS Press, pp.33-49,\nNovember 1997.\n[12] Szyperski, C.: Component Software: Beyond Object\nOriented Programming, Addison Wesley, 1998\n[13] Ram, S.: \"Guest Editor's Introduction: Heterogeneous\nDistributed Database Systems,\"; IEEE Computer, Vol. 24\nNo.12, December 1991.\n[14] Nieto, E. M.: OBSERVER: An Aproach for Query\nProcessing in Global Information Systems based on\nInteroperation across Pre-existing Ontologies, Doctoral\nThesis, Universidade de Zaragoza, November 1998.\n[15] Weinstein, P. C.: Ontology-based Metadata:\nTransforming the MARC Legacy, Proceedings of the 1998\nACM 7\nth\nInternacional Conference on Information and\nKnowledge Management, pp. 52-59, 1998.\n[16] Braga, R.; Mattoso, M.; Werner, C.: \"Using Ontologies\nfor Domain Information Retrieval,\" in DEXA 2000 DomE\nWorkshop, pp.100-104, September 2000.\n[17] Braga, R.; Werner, C.; Mattoso, M.: \"Odyssey: A Reuse\nEnvironment based on Domain Models\"; In: Proceedings\nof IEEE Symposium on Application-Specific Systems and\nSoftware Engineering Technology(ASSET'99), IEEE CS\nPress, Richardson, Texas, pp.50-57, March 1999.\n[18]\nYe, Y.; Fischer, G.: \"Promoting Reuse with Active Reuse\nRepository Systems,\" IEEE ICSR 2000, Vienna, pp.302-317\n, June 2000\n[19]\nPinheiro, R.; Costa, M.; Braga, R.; Mattoso, M; Werner,\nC.; \"Software Components Reuse Through Web Search\nand Retrieval\", Proceedings of the International\nWorkshop on Information Integration on the Web -\nTechnologies and Applications, Rio de Janeiro, Brazil,\n2001 (to appear).\n\n\n\n\n\n5\nEach ontological term is specified as a domain term and a detailed\ndescription of it, and relationships with other ontological terms, at\ndifferent levels of abstraction, are created (see section 3.2)\n\n28", "keywords": "Domain Engineering;Software Classification and Identification;Component Repositories;Component Based Engineering"} {"name": "193", "title": "Through Different Eyes Assessing Multiple Conceptual Views for Querying Web Services", "abstract": "We present enhancements for UDDI / DAML-S registries allowing cooperative discovery and selection of Web services with a focus on personalization. To find the most useful service in each instance of a request, not only explicit parameters of the request have to be matched against the service offers. Also user preferences or implicit assumptions of a user with respect to common knowledge in a certain domain have to be considered to improve the quality of service provisioning. In the area of Web services the notion of service ontologies together with cooperative answering techniques can take a lot of this responsibility. However, without quality assessments for the relaxation of service requests and queries a personalized service discovery and selection is virtually impossible. This paper focuses on assessing the semantic meaning of query relaxation plans over multiple conceptual views of the service ontology, each one representing a soft query constraint of the user request. Our focus is on the question what constitutes a minimum amount of necessary relaxation to answer each individual request in a cooperative manner. 
Incorporating such assessments as early as possible we propose to integrate ontology-based discovery directly into UDDI directories or query facilities in service provisioning portals. Using the quality assessments presented here, this integration promises to propel today's Web services towards an intuitive user-centered service provisioning. Categories and Subject Descriptors", "fulltext": "INTRODUCTION\nWeb services are expected to provide an open platform not only\nfor electronic B2B interaction, but also for the provisioning of so-called\nuser-centered services, i.e. B2C services that can provide\nuseful information and a variety of service offers to support users\nin a modern mobile lifestyle. Though the capabilities of such services\nare still relatively simple, their sophistication will grow with\nthe improvement of (wireless) networks, bandwidths, and client\ndevice capabilities. However, finding the adequate service for\nsubsequent use of each individual user becomes a more and more\ndemanding problem. Given the convergence of networks in forthcoming\n(mobile) environments and the evolving innovative business\nmodels for third party service deployment (e.g. NTT\nDoCoMo's i-mode service certification/licensing model for mobile\nservice portals [19]) the variety of services is even expected\nto grow. Making an informed choice of the `right' service will\ntherefore include matching individual users' preferences or dislikes\nagainst the concepts and capabilities of the services offered.\nUsually the interaction process for Web services consists of three\ndistinct phases: a discovery of possible services, the selection of\nthe most useful, and the subsequent execution. In understanding\nwhat a service actually offers the first two phases are crucial and\nthe general acceptance of user-centered services will depend on\nthe solutions of still demanding problems in interaction like cooperative\nquerying. As shown in [4] and [5] the discovery and selection\nprocesses of user-centered Web services involves a high degree\nof respect for user preferences to be flexible enough for real\nworld use. In that respect providing user-centered services\nstrongly differs from the well-defined capabilities of traditional\nB2B services. As a running example of a typical user-centered\nservice we will use an extension of the cooperative restaurant\nbooking Web service presented in [4]: restaurant booking services\nsubscribe to the least general applicable node along a complex\nservice ontology for a number of characteristics. A service request\ncan then be performed including a choice of various individual\ncategories. However, the individual services offered will usually\nonly more or less match all the user's expectations. Ranking services\nwith respect to requests is thus an ongoing challenge, as is\nalso evident from the research areas of IR or Web search engines\nfor information provisioning.\nService providers almost always can anticipate some typical interactions\nwith their services. For our example typical tasks are\nfor instance booking a certain restaurant for a specific evening,\nfinding a suitable restaurant in the vicinity for lunch, etc. 
The\ncharacteristics and input parameters for Web services for restaurant\nbooking thus usually contain a number of general input values\nthat can be specified in a service request/query: the name of\nthe restaurant, its location, its specific address, the type of cuisine,\nthe date and time for a booking, its price range or even third party\ncontent like recommendations (e.g. the Zagat reviews). However,\nfrom a service provisioning point of view the nature of these parameters\nstrongly differs. A user expecting to book a certain restaurant\non a specific evening will expect that the request may be\ngranted or may fail depending on current reservations of that restaurant\nfor the given date, but relaxing the constraints of the date\ngiven or booking a different restaurant for the evening might simply\nnot do. In contrast a user simply wishing for a close-by restaurant\nto have lunch will rarely provide such fixed terms as a restaurants\nname, but rather use descriptive terms like a preferred cuisine\nand an approximate location.\n\nFigure 1: Concept of enhanced UDDI service registries\nDistinguishing such query stereotypes like `book a table at the\n`Chez Panisse' for the 12/3/03 8:00 pm' and `give me the name\nand address of a Chinese restaurant in the commercial district of\nSan Francisco with medium price range' and the subsequent personalization\nof service provisioning also needs different types of\ninput parameters. Whereas simple variables like the restaurant's\nname or a certain category in a clear request can be handled in an\nexact match fashion, more fuzzy attributes in a somewhat tentative\nrequest like an approximate location or the choice of cuisine\nhave to be understood as a user`s preferences with respect to certain\nconcepts (soft constraints). In the area of the Semantic Web\nthe management of such concepts is usually done by the very\npowerful tool of ontologies that describe a generalization hierarchy\nof such concepts. In the course of this paper we will show\nhow to open up service provisioning to the better understanding,\nadequate handling and quality assessment of each individual\nuser's intentions and preferences. The contribution of the paper\nthus is twofold:\n\n\nOn one hand we relate the use of ontologies and the handling\nof conceptual views like given by the Semantic Web to co-operatively\nevaluating preferences for each specific user\n\n\nOn the other hand we show how to effectively deal with the\nproblem of relaxing multiple conceptual views for more\ncomplex queries and give quality measures to assess the\nmost useful results for each specific user.\nBoth contributions can be expected to improve the service provisioning\nof user-centered Web services and help to boost their\nusability and thus subsequently their acceptance.\nSEMANTIC REGISTRY ENHANCE-MENTS\nToday Web services are usually provided via an Internet wide\nnetwork of services registries given by the Universal Description\nDiscovery and Integration (UDDI) [21]. UDDI builds on the Web\nService Definition Language WSDL [7] which features basic\ninformation about providers of a service and technical service\ninvocation details. Even though UDDI has become the de facto\nstandard in the field it suffers from a major shortcoming: the information\noffered on individual services is rather limited. A yellow\n-page-style lookup mechanism provides the service interface\ntogether with a short verbal description of what task the service\nperforms. 
Mainly targeted a human Web service experts and developers\n, advanced query capabilities and cooperative matchmaking\n, however, are still lacking.\nResearch in the area of the Semantic Web seeks a solution to this\nunsatisfying situation, e.g. [20][4]. Generally speaking, the Semantic\nWeb fosters a population of the Web with content and\nservices having formal semantics and rich service descriptions.\nSeveral semantic frameworks for Web services are currently\nemerging with DAML-S [2] and W3C's recently established\nOWL-S [9], [10] initiative as the most prominent approaches. We\nhave built our previous work on DAML-S as a relatively mature\nontology-based approach to the description of Web services that\ntries to provide a common ontology of services for the Semantic\nWeb. Building on top of DAML+OIL [8] the Web service representations\nin DAML-S consist of a service profile for advertising\nand discovering services, a process model giving a detailed description\nof a service's operation and a service grounding providing\ndetails on how to interoperate with services via message exchange\n.\nFigure 1 shows a schematic view of our semantically enriched\nWeb service provisioning concept. A user states personal needs\nand preferences in an enhanced service request. The enhanced\nUDDI registry matches this request against the descriptions of all\nregistered services. The actual matching can be carried out using\ncooperative database technology like shown in [4]. The query is\nsplit in hard and soft constraints where the hard constraints are\nprocessed as filter conditions, whereas the soft conditions can be\nrelaxed if necessary. If no user-specific preferences are given with\nthe service request, the relaxation follows the domain-specific\nconceptual views of the service ontology given by the service\nproviders or portal operators. To distinguish between several possible\nrelaxations the quality assessment, which is the main aspect\nof this paper, will evaluate the degree of match for each service\nwith respect to the original user query and offer all best matches.\nAfter a certain implementation has been chosen, the service provider\nwill execute the service and deliver the result.\nFrom a service provisioning viewpoint centralized and publicly\navailable service ontologies can be understood as a default service\nconceptualization or the most common service concept hierarchy,\ni.e. encoding common and widely accepted knowledge or\nworld/domain knowledge. Due to the hierarchical nature of ontologies\na user asking for a specific service will also be served\nwith any more specific service concept subsumed by his request.\nOn the other hand, in the case where a best match to his initial\nrequest is not available he/she might also be satisfied with more\ngeneral services from a super-class of the requested one. This is\ndetermined by a relaxation step in the service ontology, i.e. 
a generalization of concepts along the lines of the ontology.

ONTOLOGY-BASED WEB SERVICE DISCOVERY AND SELECTION
In this section we provide a brief overview of the use of ontologies for the relaxation of soft query constraints and show how common knowledge serves as a default for cooperative retrieval.

[Figure 2 sketches the service ontology: below 'Thing', a 'Restaurant' branch with 'American' cuisines such as 'Cajun', 'Texmex' and 'California' (the latter refined into 'Seafood', 'Organic' and 'Fusion'), and a 'Location' branch with 'City' split into 'Center' (with the 'Shopping', 'Commercial' and 'Cultural' districts) and 'Suburb'. Registered services include 'Akasaka', 'Chez Panisse', 'Avocado Garden', 'Cesar' and 'The Walrus'; the grey-shaded clipping labeled 'Cuisine' marks an example conceptual view.]
Figure 2: Service Ontology for restaurant booking.

3.1 Service Ontologies, User Preferences and Usage Patterns
The purpose of a service ontology is to describe the kinds of entities available in a service repository and how they are related. To this end service ontologies may include descriptions of service classes, properties and their instances (the actual services that are eventually selected for execution). A basic service ontology is depicted in Figure 2. Here restaurant booking services are classified according to their cuisine and their location in a city. For instance the restaurant 'Chez Panisse' serves 'Organic' food, with 'Organic' being a specialization of the 'Californian' cuisine (as well as of 'American'). Furthermore, the restaurant 'Chez Panisse' is located in the 'Shopping' district, which itself is part of the city 'Center'. We have used W3C's Web Ontology Language (OWL) and its predecessor DAML+OIL to enrich DAML-S service profiles in Web service repositories [4][5]. Modeled in OWL, the most general 'Restaurant' and 'Location' concepts are anchored in 'owl:Thing', the most general concept of any ontology.

A restaurant booking service using the service ontology from Figure 2 might on one hand assume that a user asking for 'a restaurant featuring American cuisine' will be well served by all restaurants with e.g. Cajun, Californian or Texmex cuisine, since they are all specializations of the American cuisine. On the other hand, if a user asks for 'a Californian fusion cuisine restaurant' and no such service is registered, implicitly relaxing the query to all Californian restaurants and offering restaurants with organic cuisine or Californian seafood to our user will be more helpful than just stating the empty result. And even if a user has different conceptions (e.g. cuisines being related based on their flavors) or explicit preferences (a Chinese restaurant rather than an Italian one), an ontology-based discovery/selection model is still useful. [4] shows in detail how to deal with these cases by overwriting the default ontology with an explicitly provided (or implicitly derived) generalization hierarchy of a user, somewhat similar to the view definitions proposed by [14]. Since for our assessment framework the exact kind and the classes/values of an ontology matter less than its actual structure in each instance, such overwritings by user-specified conceptions can always be accommodated.

We advocate the use of service ontologies together with a proprietary notion of basic user preferences and typical service usage patterns for the stepwise refinement of service requests in a cooperative service provisioning environment.
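To make the discussion of figure 2 more concrete, the following small Python sketch models a service ontology as a concept tree in which each service is registered at its most specific node. The concept and restaurant names are taken from figure 2, while the class and method names (Concept, all_services) are purely illustrative and not part of the DAML-S/OWL machinery.

class Concept:
    # one node of the service ontology; services are registered at their most specific concept
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children, self.services = name, parent, [], []
        if parent is not None:
            parent.children.append(self)

    def all_services(self):
        # a request for this concept is also satisfied by every more specific sub-concept
        result = list(self.services)
        for child in self.children:
            result.extend(child.all_services())
        return result

thing = Concept("Thing")
restaurant = Concept("Restaurant", thing)    # root of the 'Cuisine' view
american = Concept("American", restaurant)
californian = Concept("California", american)
fusion = Concept("Fusion", californian)
organic = Concept("Organic", californian)
fusion.services += ["The Walrus", "Cesar"]
organic.services += ["Chez Panisse"]

print(californian.all_services())  # ['The Walrus', 'Cesar', 'Chez Panisse']

A request for Californian cuisine thus already covers the fusion and organic restaurants, which is exactly the subsumption behavior described above.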
While the basic approach and the combination of ontologies, preferences and patterns are published elsewhere [5], we will now concentrate on enhancements to the relaxation along the lines of ontologies alone: unlike the specialization of a request, a generalization can result in severe changes of the initial query semantics. This is especially true if several relaxation steps have to be performed until a match can be found. At the point of relaxing a constraint to the root of an ontology, the respective constraint can even be considered as entirely dropped. Nevertheless, since an ontology resembles common knowledge (and thus implicit preferences), for high quality service provisioning offering somewhat related features is usually still a better default than just returning an empty result set.

3.2 Multiple Conceptual Views
Individual users might have quite specific ideas about differing domain concepts (conceptions) or very clear expectations of how to be served that differ from the usual domain assumptions (explicit preferences), but implicit preferences also play an important part. Consider for instance location-based services, e.g. for restaurant booking. If a user asks to book 'a Chinese restaurant for dinner', common domain knowledge tells us that this restaurant should be in the vicinity (e.g. a 30 mile area) of his current or usual whereabouts, and we can add this information as an implicit constraint for better provisioning quality. A user in San Francisco would usually be annoyed by offers of Chinese restaurants in Hong Kong, no matter how good their actual quality or rating is. If this general assumption does not hold (e.g. if a user wants to fly to Hong Kong in the morning and then have dinner there), he or she would have stated this unusual detail already within the query and asked for a 'Chinese restaurant in Hong Kong for dinner'. Such explicit information within a service request is provided due to the psychological notion that though users want a service to know what is sensible (as they expect to be served in human-human interaction), no user expects a service to be clairvoyant. Thus, not only when further (explicit) knowledge of a user is available, but also when assuming typical behavior, concept hierarchies given by ontologies can be used as good default relaxation hierarchies for user preferences. Should, however, some preferences or a specific conception be given, the underlying ontology has to be exchanged for the user-provided terms or concepts.

We introduce the notion of conceptual views on a service ontology to account for all the different interests a user wants to express in a service request. Such a conceptual view is modeled as a clipping from the full service ontology that starts with the most general concept associable with a specific user interest. For this paper we will, for ease of understanding, assume conceptual views to be non-overlapping, tree-shaped clippings from the full service ontology, where each service is registered with the node that describes its value with respect to the most specific characteristic. An example conceptual view is indicated as a grey shading in figure 2: the view named 'Cuisine' is basically a sub-ontology only concerned with the classification of restaurants according to the offered type of food.
Whereas the restaurants 'The Walrus', 'Chez Panisse', 'Avocado Garden' and 'Cesar' are classified as being 'Fusion' or 'Organic' places, the restaurant 'Akasaka' is not reachable in this view, as it is only classified as located in the cultural district. As we will discuss in the remainder of this paper, multiple conceptual views can be used to account for the different interests a user wants to express in a service request and the relative importance between them. In the case of the restaurant booking example it is conceivable that a user values the fulfillment of location constraints over cuisine constraints if he/she is only up for a quick work lunch. However, for the ambitious 'hobby gourmet' this might be just the other way round on the weekend. Thus we will need ways to assess the respective quality of different relaxation schemes to allow users to make an informed choice.

RELAXING MULTIPLE ONTOLOGIES
Let us now focus on the problems that arise when multiple soft constraints have to be relaxed over the service ontology. We will first look at our sample scenario, then investigate the relaxation of conceptual views and finally discuss some quality considerations.

4.1 Relaxation Plans for Multiple Selection Predicates
Let us consider our restaurant booking service from above. A typical query would be "Find a Californian fusion cuisine restaurant in the commercial district of Berkeley". Here we have to deal with two soft constraints: the type of cuisine and the location. Assuming that we do not have more specific information about the user's preferences, both constraints could be relaxed along the two default conceptual views of the service ontology of figure 2.

[Figure 3 shows the cuisine view: 'californian' subsumes 'fusion' ('The Walrus', 'Cesar'), 'organic' ('Chez Panisse', 'Avocado Garden', 'Cucina Calabrese') and 'seafood' ('The Mediterraneum').]
Figure 3: The cuisine ontology with respective instances

Figures 3 and 4 show the full first two concept levels of the respective conceptual views with some instances. In figure 3 we can for instance see that there is a service for a restaurant called 'The Walrus' which is classified as offering fusion cuisine and, as such, also Californian cuisine. The relaxation of query predicates over such a view is straightforward. If the query predicate specifies 'fusion cuisine' restaurants, we would have the choice between the respective instances, here 'The Walrus' and 'Cesar'. If no fusion cuisine restaurants should be registered, or the instances cannot satisfy some other constraints (like a booking for a certain date), we relax along the ontology to the more general concept of Californian cuisine and can also consider the restaurants that are registered under the 'organic' and 'seafood' characterizations of Californian cuisine.

The technical problem of how to relax single conceptual views with adequate query languages over cooperative database systems is addressed in detail e.g. in [4] and [5]. The beauty of the design is that all details of a UDDI or DAML-S style description of each service, together with some further characteristics (e.g. taken from RDF statements on a restaurant's homepage), can be stored in a classic relational database by the service provider and searched using a declarative query language extended by preference constructors as shown e.g. in [13]; a small sketch of such a single-step relaxation is given below.
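The following fragment sketches such a single-constraint relaxation over one conceptual view: starting from the requested concept, the query is generalized step by step towards the root of the view until at least one registered service also satisfies the remaining hard constraints. It reuses the illustrative Concept class from the sketch in section 3.1; the helper name and the example predicate are hypothetical, not part of the actual query language extension of [13].

def relax_until_match(concept, satisfies_hard_constraints):
    # walk up the conceptual view until some registered service passes the hard filter
    node = concept
    while node is not None:
        hits = [s for s in node.all_services() if satisfies_hard_constraints(s)]
        if hits:
            return node.name, hits      # concept actually used and the matching services
        node = node.parent              # one relaxation step towards the root
    return None, []

# e.g. if no fusion restaurant can take the booking, the request is relaxed to 'California':
print(relax_until_match(fusion, lambda s: s == "Chez Panisse"))
# ('California', ['Chez Panisse'])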
Thus an added-value service using semantically meaningful content can be provided quite easily.

[Figure 4 shows the location view: 'city center' subsumes 'commercial' ('Tom's Grill', 'Sushi and Soul', 'Avocado Garden'), 'cultural' ('The Mediterraneum', 'Akasaka') and 'shopping' ('Chez Panisse', 'Cesar', 'Pizza My Heart').]
Figure 4: The location ontology with respective instances

A more serious problem arises when several soft query constraints over various conceptual views have to be evaluated. Consider for instance the query for a 'fusion cuisine restaurant in the commercial district'. We can easily verify that in our example, even though 'The Walrus' and 'Cesar' are fusion cuisine, they are not registered in the commercial district. Likewise 'Tom's Grill', 'Sushi and Soul' and 'Avocado Garden' are in the commercial district, but they do not offer fusion cuisine. Aiming at a cooperative retrieval behavior, we are left with three choices: relaxing the constraint on the cuisine, relaxing the constraint on the location, or relaxing both constraints. When relaxing the cuisine to 'californian' we would retrieve the service of 'Avocado Garden', the only Californian cuisine restaurant within the commercial district. Relaxing the location-based ontology would result only in the service of 'Cesar', the only fusion cuisine restaurant in the city center. Finally, relaxing both constraints would result in all Californian cuisine restaurants within the city center, i.e. 'Avocado Garden', 'The Mediterraneum', 'Chez Panisse' and 'Cesar'. Thus different kinds of relaxation will usually result in essentially differing answer sets. This problem remains if more relaxation steps have to be taken: if we have relaxed only one constraint, we could e.g. decide to relax this property even further or relax another property of our service during the next step. Let us first define the task of finding the 'best' service under relaxation:

The Problem of Best-Matching Service Provisioning:
Given various characteristics that describe all services in a service ontology, a concept hierarchy (conceptual view) for each characteristic and a user request stating a number of hard and soft constraints, the best-matching service is given by all services that:
- fulfill all the hard constraints
- fulfill all soft constraints with a minimum amount of relaxation of characteristics with respect to a suitable quality measure

So given that each service is registered in one concept node for each characteristic given by the respective conceptual view, and all services not fulfilling the hard constraints have been filtered out, the problem of selection over multiple constraints comes down to deciding what 'a minimum amount of relaxation' is. Obviously the basic task is finding a service that is registered with all concepts (or any of their respective sub-concepts) as specified by the query, a 'perfect match'. But if no such service is registered, the decision which soft constraints to relax, and how far they are relaxed, is paramount for the quality of provisioning.

4.2 Basic Service Quality Considerations
Since the decision about the relaxation scheme is important for the output, some way of deciding which scheme to follow is needed. Usually ontologies are of a qualitative nature: a superclass / subclass taxonomy is established, but there is no knowledge of the 'degree' or the relative distance between different concepts.
However, such knowledge could crucially change the\nutility of certain relaxation plans. Relaxing more refined ontologies\nor views will generally hurt the user preferences less than\nrelaxing already coarse views or ontologies. The less general the\nconcept, the more refined are the sets of objects that will be offered\nto the user, and flooding the user with too much too general\ncontent is avoided. Let us first take a closer look at merely qualitative\nviews on ontologies of comparable granularity, etc. (i.e.\nrelaxing one constraint is introducing the same amount of generalization\nas relaxing any other) and then investigate ways to deal\nwith quantitative measures in the following section.\nFor scenarios of merely qualitative preferences and their relaxation\nfor the restricted class of ceteris paribus preferences [16]\nproposes a scheme of ordering different objects in the result set\naccording to the count of necessary relaxation steps from the top\nor the bottom of the hierarchy or simply the relative distance to all\nviolated query constraints. Since we assumed symmetrical views\non ontologies, also in our relaxation problem a similar concept\nwill help us to understand what should be relaxed preferably and\nwhat this means for the objects in the result set. Let us first show\nthe approach of simply counting the relaxation steps from the\nviolated constraints. We will label the services we found in each\nstep by the number of necessary steps to find them. But first we\nneed to define the necessary concept of relaxation paths.\nGiven tree-shaped conceptual views of a service ontology that\narranges concepts or values with respect to certain service characteristics\nusing a generalization semantics, a relaxation path is a\npath along the edges of each conceptual view that leads from a\nbase concept to the respective root of the view. Usually this base\nconcept is specified in a service request or user query and relaxing\nalong the relaxation path leads to an increasing generalization of\nthis concept. Assuming that all services have been assigned to the\nnode of their first appearance along the relaxation path (i.e. they\nare registered to any of the respective node's sub-trees of concepts\n, but not to an earlier node of the relaxation path) we get a\nchain of concepts with all services registered under the aspect of\nleast necessary level of generalization.\nAn example for a relaxation path can be easily derived from figures\n2 and 4. If a user is primarily interested in the commercial\ndistrict, the appropriate nodes of the relaxation path would be\n`commercial district', `city center', `city' and `location'. The\nservices registered in the nodes are e.g. `Tom's Grill', `Sushi and\nSoul' and `Avocado Garden' for `commercial district'. The node\n`city center' would also contain all services registered to its sub-concepts\n, (i.e. `The Mediterraneum', `Akasaka', `Chez Panisse',\n`Cesar' and `Pizza My Heart') and so on. Figure 5 shows the first\ntwo steps of respective relaxation paths for both conceptual views\nin figures 3 and 4 focusing only on the services registered in both\nviews. As we pointed out we will always assume tree-shape views\nfor the course of this paper. Please note that all the concepts easily\ncan be transferred to the case where sub-concepts can have\nmultiple parent nodes. 
In this case the node along the relaxation path would consist of the intersection of the different parent concepts, or of the intersection of their registered services respectively. This generalization has already been successfully employed in a similar fashion for mapping queries between differing ontologies by [18]. Distances can then simply be measured by the minimum distance if multiple paths for relaxation are available.

Let us now see how relaxation can be done using an unlabeled relaxation graph (see figures 3 and 4). If we begin by either relaxing the cuisine or the location constraint of our query, we would have to assign a quality value of 1 to both the 'Avocado Garden' and 'Cesar'. Thus in terms of quality they are incomparable, which closely resembles our missing knowledge of what kind of relaxation the individual user would prefer. If there are several ways to relax constraints to encounter a service, we always count the minimum number of relaxation steps necessary. Relaxing both constraints again leads to quality values of 1 for 'Avocado Garden' and 'Cesar' and a value of 2 for 'The Mediterraneum' and 'Chez Panisse', because they have only been seen after relaxing both constraints and thus are probably less desirable for the user. However, relaxing the cuisine ontology two steps (i.e. to all American cuisines) and sticking to the commercial district constraint might result in 'Tom's Grill' also turning up with a value of 2. This is because in the naive model the semantic difference between 'deep' relaxation and 'broad' relaxation is considered the same. To be sensitive to the differing implications of broad and deep relaxation with respect to generality, we will use our concept of relaxation paths and show how to use labels along these paths to get to a more sophisticated relaxation paradigm.

[Figure 5 depicts the first two relaxation steps of the cuisine path (fusion to californian to american) and of the location path (commercial to city center to city), with edges labeled by their distances 1 and 2 and the services 'Avocado Garden', 'Tom's Grill', 'Cesar', 'Chez Panisse' and 'The Mediterraneum' assigned to the nodes of their least necessary generalization.]
Figure 5: Relaxation paths and assigned services

Usually the generalization throughout ontologies quickly becomes rather unspecific with decreasing distance to the root. Hence a broad relaxation strategy (breadth first relaxation) is often preferred to deep relaxation steps. Summing up the distances like before, but weighting each relaxation step with the relative distance to the original query term, can implement this. Consider our query for 'fusion cuisine' restaurants in the 'commercial district'. For example, 'Avocado Garden' and 'Cesar' each need only one relaxation step with a distance of one to the original constraint (cf. labels in figure 5), so their quality value is 1. 'Chez Panisse' and 'The Mediterraneum' both need two relaxation steps with a distance of 1 each, resulting in a value of 2. In contrast, 'Tom's Grill' also needs only two relaxation steps, but whereas the first step has a distance of 1, the second step already has a distance of 2. Thus the final value for 'Tom's Grill' is 3 (i.e. 1 + 2*1), and we can now effectively distinguish between deep and broad relaxation. Depending on the nature and granularities of the ontology or views we can of course also use higher weightings for the deep relaxations, for instance 10^(distance-1).
So the first step will be weighted by 1, the second deep step by 10, the third deep step by 100, and so on. Since the broad relaxation steps are still simply added up, this will 'punish' deep relaxation and avoid too broad generalizations of constraints. If we always want to punish deep steps symmetrically until all constraints in turn are relaxed at least to the same level, a factor of (number_of_ontologies)^(distance-1) will be adequate, as shown in the following lemma.

Lemma 1: Weightings to Foster Broad Ontology Relaxation
Given n soft query constraints with their respective relaxation hierarchies. To always prefer a broad relaxation scheme, label each object by summing up the numbers of edges relaxed to find this object in each hierarchy, and weigh every edge by n^(d-1), using the number of soft constraints n and the relative distance d to the original query constraint.

Proof: Since within each depth all weightings are the same, it is obvious that within a certain depth of the hierarchy any object seen with fewer relaxation steps has a smaller label than an object that needs more steps. Thus if we e.g. have to relax two constraints within a level, this object will always be labeled with a higher weight than an object that needed only one relaxation, independently of which constraints have been relaxed.

We still have to show that if we do a step with a deeper distance, an object O encountered there always gets a higher label than any object P encountered in all hierarchies only with relaxations up to a lower distance. We do that by showing that the minimum label for object O is higher than the maximum label for object P. Let us assume that in order to encounter object O we have to relax at least one constraint to a distance of k. The minimum label for O is thus given by relaxing only a single constraint to distance k and not having to relax any other constraint. Hence object O's label is given by n^0 + n^1 + ... + n^(k-2) + n^(k-1). The maximum possible label for object P, on the other hand, is given by having to relax every one of the n constraints (k-1) times, i.e. to the maximum distance smaller than k. Thus the maximum label for P is n*(n^0 + n^1 + ... + n^(k-2)) = n^1 + ... + n^(k-2) + n^(k-1), and so P's label is, even when relaxing all constraints to that maximum, at least 1 smaller than the best possible label for O.

Thus, in relaxing constraints for equally important conceptual views, an adequate algorithm processes decreasing levels of quality, i.e. finds services with increasing weightings. Starting with the minimum possible relaxation, the algorithm will always work over an entire sequence of services with the same quality index and return all the discovered services of the lowest level found, together with their quality estimation. This is important for being able to restart the algorithm at the previous point of termination if the services discovered so far should not have been sufficient and the evaluation of lesser quality levels becomes necessary. Similarly, knowing the labeling technique and the views involved, a user can also specify a maximum quality value up to which he/she is willing to accept more general services.
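The effect of the weighting of lemma 1 can be illustrated with a few lines of code. The helper below is only a numerical check of the broad-before-deep property under the stated weighting n^(d-1), not part of the formal argument, and all names are hypothetical.

def label(distances, n):
    # total weight of an object that required relaxing constraint i by distances[i] steps;
    # the step at relative distance d along a path costs n**(d-1)
    return sum(sum(n ** (d - 1) for d in range(1, steps + 1)) for steps in distances)

n, k = 3, 4
broad = label([k - 1] * n, n)            # all n constraints relaxed to distance k-1
deep = label([k] + [0] * (n - 1), n)     # a single constraint relaxed to distance k
print(broad, deep, broad < deep)         # 39 40 True: broad relaxation is always preferred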
For\nour algorithm we will assume a declarative query mechanism on\nUDDI directories enhanced by all the feature characteristics described\nby their respective conceptual views like presented in [4].\nHowever, symmetrically relaxing just the same number of steps\neven when assuming views of the same granularity and importance\nwith respect to the user query, will generally not lead to our\ndesired broad relaxation scheme with as little generalization as\npossible. Imagine a query that specifies two soft predicates of\nwhich one is a leaf node of a view, whereas the other predicate\nspecifies a direct descendant node of the root in the respective\nview. If no perfect match should be given, our strategy would\nresult in relaxing either of the two constraints a single step. But\nwhereas the relaxation by a single step from our leaf node usually\nleads to a slight generalization allowing a few more services for\nselection, the relaxation step in our second constraint would relax\nto the root node and thus offer the total number of services registered\nin the entire ontology for the respective characteristic, i.e.\nentirely drop the second constraint. Obviously that would not be a\nsensible behavior. So we do not only have to punish deep relaxation\nsteps but even more severely refrain from relaxations the\ncloser we are to the root.\nThis concept will be implemented in our relaxation algorithm by\nletting the longest possible relaxation path determine the weightings\nfor all constraints (again assuming a comparable level of\ndetail throughout our ontology). The weightings for edges along\nthe relaxation path will then be assigned in each conceptual view\nin descending order starting from the root down to the concept\nspecified by the query. Thus the relaxation of all concepts at least\nto the same level of generalization is enforced before having to\nrelax already more general concepts. In our example from above\nthe relaxation path for a leaf node concept would be assigned\nweightings as given by the height of the conceptual view, whereas\nthe relaxation path for a concept node right below the root would\nbe assigned the highest possible weighting. Thus (following\nlemma 1), our leaf node concept will have been relaxed to a generalization\nlevel of the respective concept right below the root\nbefore the second constraint is relaxed for the first time. 
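A small sketch of this root-down labeling rule (a hypothetical helper, assuming tree-shaped views of possibly different lengths) is given below; it reproduces the edge weights used in the example of figure 6 further down, where three views with a maximum relative depth of three are labeled with powers of n = 3.

def label_relaxation_path(path_length, n, maxdepth):
    # weights n**(maxdepth-1), ..., n**0 are assigned starting at the root; a shorter
    # path keeps only the heavy, root-near weights, so near-root concepts relax late
    weights_from_root = [n ** d for d in range(maxdepth - 1, -1, -1)]
    return weights_from_root[:path_length]

n, maxdepth = 3, 3
print(label_relaxation_path(2, n, maxdepth))  # e.g. view X in figure 6: [9, 3]
print(label_relaxation_path(1, n, maxdepth))  # view Y: [9]
print(label_relaxation_path(3, n, maxdepth))  # view Z: [9, 3, 1]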
Now we are ready to present an algorithm for the relaxation of symmetrical constraints incorporating a breadth first paradigm and a minimum level of generalization strategy.

Algorithm: Symmetrical Constraints Breadth First Paradigm
1. Pose the query containing only the hard constraints against the enhanced UDDI / DAML-S directory.
1.1. If an empty result is returned, terminate the algorithm outputting the empty result set.
1.2. Repose the query with all soft constraints included.
1.3. If a non-empty result is returned for the expanded query, terminate the algorithm and output the respective services as perfect matches with a relaxation level of 0.
The algorithm would terminate at the latest after relaxing\nall views to the top level with a quality value of 34.\nZ2\nZ1\nZ4\nY5\nY4\nY3\nX1\nX4\nX5\nX6\nZ6\n1\n9\n3\n9\n3\n9\nX3\nX2\nY2\nZ3\nY1\n\nFigure 6: Conceptual views with labeled relaxation paths\n4.3\n\nQuantitative Service Quality Measures\nIn the last section we have seen an effective scheme for the case\nof symmetrical relaxations under the assumption of equal useful-ness\n, i.e. one broad step was as useful as any other broad step of a\ncomparable level. But in real world applications query constraints\nare not always only of a qualitative, incomparable nature. Deeper\nknowledge of individual user's preferences or knowledge of\nstereotypical usage can become interesting parameters in assessing\nthe quality of different relaxation schemes. Tuning the factor\nto a certain ratio for each application (x broad steps equal 1 deep\nstep) will express the desired semantics in each instance. The\nexact coefficient used for the discrimination of deep relaxation in\neach instance, however, will typically strongly depend on the\ndomain, the total number of soft query constraints and the respective\ngranularity of the views / ontology (i.e. the semantic level of\ndetail) used. Views that are modeled with a very fine granularity\ncan be relaxed introducing a smaller degree of generalization to\nthe service request results than would be introduced by those\nviews that are modeled rather coarsely anyway. Hence, a deep\nrelaxation step in a very detailed ontology might be worth only\nthree or four broad relaxations of other user constraints, whereas a\ndeep step in a coarser ontology used within the same query might\nadd up to the worth of ten broad relaxation steps or even several\ndeep relaxation steps of other constraints. Likewise very flat hierarchies\nwith many subclasses to each node are not suited too well\nfor deep relaxation. So the discrimination will in each application\ndepend on:\n\n\nThe relative semantic importance of a view with respect\nto the user request\n\n\nThe maximum depth of each conceptual view,\n\n\nIts total number of (sub-) concepts and\n\n\nThe (average) number of instances in each concept.\nThe relative semantic importance of a view can usually only be\ndetermined by directly consulting the user. But in the following\nwe will give an overview of techniques that will generally help to\ndeal with problems of relaxing views with different granularities.\nAs a rule of thumb we can state that the relaxation of views having\na low maximum depth and rather high numbers of services\nattached to each node should be delayed as long as possible. Our\ntechnique of starting the view graph labeling from the root node\nalready helps facilitating this rule. If a shallow conceptual view is\nused (usually an indication for coarse modeling) together with a\nmore detailed view, even the edges to leaf nodes in the shallow\nontology will be assigned rather high weightings unlike the leaf\nnodes in ontologies with a rather high depth. This behavior is,\nhowever, not always the best choice. If unlike in figure 6 the respective\ndepths of conceptual views differ by a considerable\namount, we should not simply delay the relaxation of shallow\nviews, but have to insert several intermediate steps in the more\ndetailed views before relaxing the next step in a coarse view.\nFigure 7 shows pairs of views X, Y and X', Y'. 
In both cases the depth of X, X' is only two, whereas the depth of Y, Y' is four. That means that in terms of relaxation we can assume that for some reason the more shallow ontologies X and X' are modeled rather coarsely. On the left hand side of figure 7 we can see the labeling scheme from our algorithm. Ontology Y would be relaxed to Y3 or even Y2 before a single step in X would be relaxed. On the right hand side we can see a better labeling scheme with interleaved relaxation steps (two steps in Y' for one step in X').

[Figure 7 contrasts the two labelings: on the left the original scheme assigns the weights 8 and 4 to the two edges of the shallow view X and 8, 4, 2, 1 to the four edges of Y; on the right the re-labeled view X' carries the aggregated weights 12 (= 8+4) and 3 (= 2+1) next to the unchanged weights 8, 4, 2, 1 of Y'.]
Figure 7: Differently labeled relaxation paths

Since the interleaving of relaxation steps with respect to the maximum depth generally seems a fairer approach, we incorporate this behavior into our algorithm. If the maximum depths of some conceptual views differ severely, we assume a coarser level of detail and find out how many steps in the most detailed view represent a single step in the coarser view. We then re-label the coarse view, beginning from the root, by adding as many of the appropriate sequence of weightings as steps in the detailed view are necessary. For instance, in figure 7 we can easily see that a single step in X (depth 2) represents two steps in Y (depth 4), and thus we would have to re-label the first edge by the sum of the two highest weights of Y (8+4) and its second edge by the sum of the next two weights (2+1). Following this scheme we obtain the more suitable relaxation weights of view X'. In terms of our quality assessment algorithm this means that we have to reconsider the labeling of the relaxation paths in step two and replace the respective section by the following:

2.1. Among the n conceptual views find the longest possible relaxation path, multiplying the possible steps in each view relative to the class specified in the service request by the view's respective factor q, where q is given as the integer part of dividing the maximum view depth by the maximum depth of the current view (i.e. q steps in the most detailed view represent one step in this view). Set the maximum value for maxdepth.
2.2. For every conceptual view label the relaxation path starting from the root by n^d + n^(d-1) + ... + n^(d-q+1) (i.e. the sum of the first q weights in terms of the most detailed relaxation path) down to n^(q-1) + ... + n^0, with d := (maxdepth - 1).

A second possibility to control the relaxation properties of multiple conceptual views is the incorporation of user preferences giving a preferred relaxation order. Incorporating such preferences into the weights along the ontology, however, is a very difficult problem in its own right and therefore beyond the scope of this paper. For the case that a simple relaxation ordering of the views is given by the user, we can use double-labeled relaxation paths similar to those for the tree patterns in [1]. The second label for each node is e.g. the respective rank of the view in a specified relaxation ordering, or the number of the node's respective sub-concepts. In the case that more than one query is possible for the execution of step 3 of our algorithm, the relaxations can then be executed minimizing the sum of the second labels, and retrieval can be terminated whenever a result occurs. Also for the case that a prioritization of relaxations is given (cf.
[13]), the respective relaxation\nscheme is straightforward: The prioritized view is successively\nrelaxed until a first result set is retrieved. Then the second\n, third, etc. views are used to break ties. However, when it\ncomes to integrating weights from preferences into relaxation\nweights deeper research is still needed.\nRELATED WORK\nWhile in the above discussions we assumed the existence of different\nconceptual views to a given ontology, the actual creation of\nthese views is beyond the scope of this paper. Multiple views as\nabstractions of data sources are a well understand concept in classical\ndatabases systems. Yet the concept of such views has only\nrecently been addressed in the context of the Semantic Web\nthrough the proposal of the view definition language RVL for the\nlow level ontology language RDFS [14]. RVL uses a declarative\nquery language for the creation of virtual schemas which in turn\nserve as views on existing complex ontologies. Please note that\nalthough we merely focused on tree-shaped clippings from OWL\nontologies as simple relaxation hierarchies in our examples, the\npresented concepts are general enough to be used with only the\nslightest adaptations together with other types of ontologies and\nmore complex views, e.g. virtual RVL views.\nChoosing the `right' Web service for execution has been considered\nin several ways. For legal or economical points of view especially\nassurance structures guaranteeing that a service performs\nthe desired task like [11] or [17] have been addressed. However,\nwhen negotiating about execution guarantees or costs the semantic\ncontent of a service has to be understood and its specific capabilities\nhave already to be agreed on. Taking a more user-centered\nview the notion of services' reputation for subsequent selection\n[15] or the quality assessment for the negotiation of service level\nagreements [3] have been proposed. However, these approaches\nfocus on conceptual designs omitting algorithms how to assess the\nquality in each instance. The most complete framework with respect\nto heterogeneous environments featuring multiple ontologies\nis given by [18] where the notion of information loss for\nquery reformulation is defined. Unlike our work presented here,\nwhere multiple conceptual views of a service ontology occur in a\nsingle request, this work, however, deals with the loss of information\nwhen a query has to be translated from one into another ontology\n(e.g. in order to pose it to a different data source). Thus it\nis rather concerned with the problem of ontology mappings.\nThe area of service request relaxation over ontologies also shows\nsome similarities with database query relaxation frameworks like\ngiven in [6] or [13] and especially recent work on querying semi-structured\ndata like in XML databases. In the case of XML the\nDTD of a document defines its structure together with the (semantic\n) type of data within each node. The main focus of querying in\nthat area is on building queries without perfect knowledge of\ndocuments structure or the exact data it contains. Exploiting the\nset of labels given by a XML document's DTD as ontology in\nterm of the documents' structure [12] uses the result sets of queries\nto define the semantic equivalence of alternative query expressions\n, however without relaxing concepts within queries. 
The\narea of relaxation for tree-shaped queries not only on a structural\nlevel (`relax to any descendant node instead of child node'), but\nalso on a limited semantic level (`find author of document instead\nof book') is in detail addressed in [1]. Here also weightings along\nthe edges of trees comparable to our user preference-driven quality\nassessments in section 4.3 are discussed. Our work differs\nmainly in that we can rely on fine-granular concept ontologies\nthat are custom made by domain experts and used in central service\nprovisioning portals or UDDI / DAML-S directories. Not\nonly are we able to exploit semantics by a far larger extent than\nprevious work, but we also provide means to derive sensible\nweightings for edges within conceptual views based on general\nuser preferences and a fair relaxation paradigm.\n203\nThe general area of enhancement of UDDI goes back to describing\nthe capabilities of Web services on a more detailed level by\nusing ontology languages like DAML+OIL [8]. An example for\nthe efficient mapping of DAML+OIL capability descriptions onto\nUDDI records is given in [20]. Our approach for result quality\nassessment here is facilitated by the database-based approach for\nUDDI enhancement featured in [4]. Using cooperative database\ntechnology and an extended declarative query language like the\none given in [13] this framework features a good implementation\nframework for our quality assessment. Queries using hard and soft\nconstraints can be automatically rewritten using the relaxed concepts\nand posed against a database of service descriptions using\nsuitable relaxation ontologies. However, in terms of quality these\nlanguages feature only a qualitative result set under the notion of\nPareto-optimality. Quantitative quality constraints that limit the\nflood of incomparable results delivered by the exponential growth\nof Pareto sets with the number of soft constraints in the request\nare not considered. A first study of such quality measures is given\nin [16] for the restricted class of ceteris paribus preferences like\ndiscussed in section 4.2.\nSUMMARY AND OUTLOOK\nIn this paper we presented a framework for the discovery and\nselection of Web Services based on personalized quality assessments\nfor individual service users. Starting with a set of conceptual\nviews over a service ontology that express a generalization\nhierarchy of concepts (conceptual views) for different services'\ncapabilities and characteristics, we proposed to enhance today's\nUDDI / DAML-S registries by a matching component that will\nnot only perform a filtering of services according to user specified\nterms, but also allows for cooperative matchmaking between service\ndescriptions and the individual user's preferences. Focusing\non the quality assessment component of such an enhancement we\ndescribed in detail how to deal with different kinds and multiple\ninstances of conceptual views. The views in our framework contain\nthe domain-specific understanding of concepts or the common\nknowledge that users typically will expect when trying to\nfind an adequate service for execution. If no service that offers all\nrequired capabilities can be found, relaxing along these views will\nstep by step generalize the services' requested features until a best\npossible match can be found. 
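As a rough illustration of this relax-until-match loop, a minimal sketch under our own assumptions follows (the `query`, `relaxation_paths` and `execute` helpers are hypothetical placeholders for the UDDI/DAML-S matching machinery, and answer quality is tracked simply by summing the edge labels of the traversed relaxation steps):

```python
# Sketch: cooperative matchmaking loop that relaxes soft constraints step by
# step (cheapest labeled relaxation first) until a non-empty result is found.
# All helpers are hypothetical stand-ins for the registry machinery.

def cooperative_match(query, relaxation_paths, execute):
    """query: {view: concept} soft constraints;
    relaxation_paths: {view: [(more_general_concept, edge_label), ...]} towards the root;
    execute: callable returning the services matching the (relaxed) query."""
    quality_penalty = 0
    while True:
        results = execute(query)
        if results:                      # best possible match under current relaxation
            return results, quality_penalty
        # pick the soft constraint whose next relaxation step is cheapest
        candidates = [(path[0][1], view)
                      for view, path in relaxation_paths.items() if path]
        if not candidates:               # everything already relaxed to the top level
            return [], quality_penalty
        _, view = min(candidates)
        concept, label = relaxation_paths[view].pop(0)
        query[view] = concept            # generalize this constraint by one step
        quality_penalty += label         # accumulate the quality value of the answer
```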
Thus cooperative behavior can be\nintroduced for improved service provisioning.\nWe focused on controlling these relaxation steps implementing a\nbreadth first strategy control flow to delay far-reaching generalizations\nfor as long as possible. We also discussed the influence of\nrelaxation plans for various views, which may differ in their\ngranularity or accuracy of discrimination and the influence of\nindividual user preferences for relaxation orders. For the case of\nviews with differing level of details we gave an adequate scheme\nto balance the control flow. Nevertheless, the exact instantiation\nof the views and the adequate weightings chosen still will usually\ndiffer between application areas. Anticipating stereotype interaction\npatterns or experiences of past interactions, however, the\nservice provider usually is able to also provide some suitable default\nontologies for cooperative matchmaking within UDDI /\nDAML-S registries or managed service portals. In any case the\nprovisioning of suitable facilities for the assessment of Web service\nquality can be expected to become a central part of service\nprovisioning and will essentially influence the future acceptance\nof Web service offers by individual clients.\nIn this paper we have restricted conceptual views to tree-shaped\nclippings of ontologies with view elements being exclusively\nrelated through `is-a' relationships (stating an explicit generalization\nof concepts in a superclass/subclass fashion). Of course also\nother relationships within ontologies might be available for relaxation\ntasks in service requests; on the other hand their semantic\nmeaning will usually be somewhat more difficult. Since complex\nontologies commonly contain named relationships between entities\nthat might be used to make views more flexible and query\nrelaxation more meaningful, an important future work item will\nbe to break down this restriction on relationships of the `is-a'\ntype. With our algorithms' focus on relaxation paths as an abstraction\nof the underlying views, our general framework for quality\nassessment can be expected to be extensible also to these new\ntypes of views in a straightforward manner independently of the\nexact type of view the relaxation path was derived from. If a relaxation\nof a constraint along a certain relationship is backed by\nsensible relaxation semantics (i.e. is meaningful), however, has to\nbe checked in each individual instance.\nFurthermore, our future work will focus on a tighter integration of\nindividual user preferences into the quality assessment process\nlike addressed in section 4.3. Choosing the adequate weightings\ndoes not have an obvious semantics. The meaning of `relaxing\none constraint is two-times better than relaxing another constraint'\ncan only be guessed, what about three-times, etc.? The area between\nquantitative quality assessments like e.g. re-weighting techniques\nor relevance feedback as known from the area of IR, and\nthe purely qualitative approaches like Pareto optimality of solu-tions like given in [13] offers a vast variety of possibilities to\nexplore for real world applications. Also here the notion of stereotypical\nusage of services and the grouping of users with similar\nintensions might lead to improved service provisioning. We believe\nthat using and extending our framework is a vital step towards\ngetting a better understanding of these topics. 
In any case\nassessing quality of service request results in a semantically sensible\nway promises to pave the road to cooperative provisioning\nfor user-centered services.\nACKNOWLEDGMENTS\nWe would like to thank Achim Leubner and Anthony Tarlano for\nhelpful comments and suggestions. This work was partially\nfunded by an Emmy-Noether-Grant of the German Research\nFoundation (DFG).\n\nREFERENCES\n[1]\n\nS. Amer-Yahia, S. Cho, D. Srivastava. Tree Pattern Relaxation\n. In Proc. of the Int. Conf. on Extending Database Technology\n(EDBT'02), Prague, Czech Republic, 2002.\n[2]\n\nA. Ankolenkar, M. Burstein, J. Hobbs, et. al. DAML-S: Web\nService Description for the Semantic Web. In Proc. of the\nInt. Semantic Web Conf. (ISWC'02), Sardinia, Italy, LNCS\n2342, Springer, 2002.\n[3]\n\nW.-T. Balke, A. Badii. Assessing Web Services Quality for\nCall-by-Call Outsourcing. In Proc. of the Int Workshop on\nWeb Services Quality (WQW'03), Rome, Italy, 2003.\n[4]\n\nW.-T. Balke, M. Wagner. Cooperative Discovery for User-centered\nWeb Service Provisioning. In Proceedings of the\nFirst International Conference on Web Services (ICWS'03),\nLas Vegas, USA, 2003.\n204\n[5]\n\nW.-T. Balke, M. Wagner. Towards Personalized Selection of\nWeb Services. In Proceedings of the 12th International\nWorld Wide Web Conference (WWW 2003) Alternate Track\non Web Services, Budapest, Hungary, 2003.\n[6]\n\nS. Chaudhuri. Generalization and a Framework for Query\nModification. In Proc. of the Int. Conf. on Data Engineering\n(ICDE'90), Los Angeles, USA, 1990.\n[7]\n\nE. Christensen, F. Curbera, G. Meredith, S. Weerawarana.\nWeb Services Description Language (WSDL) 1.1.\n\nhttp://www.w3.org/TR/2001/NOTE-wsdl-20010315, 2001.\n[8]\n\nD. Connolly et al. DAML+OIL Reference Description. W3C\nNote, December 2001.\n[9]\n\nDAML. OWL-S: Semantic Markup for Web Services.\nhttp://www.daml.org/services/owl-s/1.0/owl-s.html#foot29\n[10]\n\nDAML. OWL-S 1.0 Release.\n\nhttp://www.daml.org/services/owl-s/1.0/\n[11]\n\nM. Jakobsson, M. Yung. On Assurance Structures for WWW\nCommerce. In Proc. of Int. Conf. on Financial Cryptography\n(FC'98), Springer LNCS 1465, Anguilla, British West Indies,\n1998\n[12]\n\nY. Kanza, Y. Sagiv. Flexible Queries over Semistructured\nData. In Proc. of the ACM Symp. on Principles of Database\nSystems (PODS'02), Santa Barbara, USA, 2001.\n[13]\n\nW. Kieling, G. Kstler. Preference SQL - Design, Imple-mentation\n, Experiences. In Proc. of the Int. Conf. on Very\nLarge Databases (VLDB'02), Hong Kong, China, 2002.\n[14]\n\nA. Magkanaraki, V. Tannen, V. Christophides, D.\nPlexousakis. Viewing the Semantic Web Through RVL\nLenses. In Proc. of the Int. Semantic Web Conf. (ISWC'03),\nLNCS 2870, Sanibel Island, USA, 2003.\n[15]\n\nE. M. Maximilien, M.Singh. Conceptual Model of Web Service\nReputation. In SIGMOD Records 31(4), 2002.\n[16]\n\nM. McGeachie, J. Doyle. Efficient Utility Functions for Ce-teris\nParibus Preferences. In Proc. of Conf. on Artificial Intelligence\nand Conf. on Innovative Applications of Artificial\nIntelligence (AAAI/IAAI'02), Edmonton, Canada, 2002.\n[17]\n\nG. Medvinsky, C. Lai, B. Neuman. Endorsements, Licensing\n, and Insurance for Distributed System Services. In Proc.\nof the ACM Conf. on Computer and Communications Security\n, Fairfax, USA, 1994\n[18]\n\nE. Mena, V. Kashyap, A. Illarramendi, A. Sheth. Imprecise\nAnswers in Distributed Environments: Estimation of Information\nLoss for Multi-Ontology based Query Processing. 
In\nInternational Journal of Cooperative Information Systems\n(IJCIS), 9 (4), 2000.\n[19]\n\nNTT DoCoMo home page.\n\nhttp://www.nttdocomo.com/home.html, 2003.\n[20]\n\nM. Paolucci, T. Kawamura, T. Payne, K. Sycara. Importing\nthe Semantic Web in UDDI. In Proc. of the Int. Workshop on\nWeb Services, e-Business and the Semantic Web (WES'02),\nToronto, Canada, 2002\n[21]\n\nUDDI. The UDDI Technical White Paper.\nhttp://www.uddi.org.\n\n205", "keywords": "selection of the most useful;Web Service Definition Language;Web services;Tree-shaped clipping of ontologies;subsequent execution;Semantic Web;user profiling;The generalization throughout ontologies;ontology resembles common knowledge;Universal Description Discovery and Integration;discovery of possible services;generalization hierarchy of concepts;cooperative service discovery;personalization;preference-based service provisioning;Domain-specific understanding of concepts;Relaxing multiple ontologies"} {"name": "194", "title": "Topic Modeling in Fringe Word Prediction for AAC", "abstract": "Word prediction can be used for enhancing the communication ability of persons with speech and language impair-ments . In this work, we explore two methods of adapting a language model to the topic of conversation, and apply these methods to the prediction of fringe words.", "fulltext": "INTRODUCTION\nAlternative and Augmentative Communication (AAC) is\nthe field of research concerned with finding ways to help\nthose with speech difficulties communicate more easily and\ncompletely. Today there are approximately 2 million people\nin the United States with some form of communication\ndifficulty. One means to help ease communication is the\nuse of an electronic communication device, which may have\nsynthetic speech as output. However, one issue in using an\nAAC device is communication rate. Whereas speaking rate\nis estimated at 180 words per minute (wpm), many AAC\nusers' communication rates are lower than 15 wpm [3, 7,\n16]. Thus one goal of developers is to find ways to increase\nthe rate of communication, by making AAC devices easier\nto use and more intelligent.\nSome researchers have attempted to speed communication\nrate by providing quick access to the core vocabulary\nthe relatively small set of frequently used words. Methods\nfor doing this include abbreviation expansion and iconic\nmethods such as semantic compaction [1]. In contrast, in\nthis work we attempt to speed access to the much larger\nset of words often called fringe vocabulary. This set is of\ninterest because although each individual word occurs less\nfrequently, the set of fringe words on the whole is very significant\n.\nSuppose that the user wants to enter \"I want a home\nin the country.\" After typing, \"I want a h\", they might\nsee something like shown below. The system has created a\nprediction window\ncontaining the five words that it thinks\nthe user may be trying to type. In this example, the user can\npress F5 to complete the word \"home\" and the system will\nenter the word with a space afterwards. So in this example,\nthe user needed 2 keystrokes to enter what would normally\ntake 5 keystrokes.\nIt is difficult to judge how much word prediction can speed\ncommunication rate. Much of this determination is dependent\non the accuracy of the prediction method, the characteristics\nof the user, such as their physical and cognitive\nabilities, and the characteristics of the user interface, such\nas where the prediction list is displayed and how a word in\nthe list is selected. 
Here, the prediction method is evaluated\nseparately from the rest of a word prediction system by simulating\nwhat a user would type in a conversation if he/she\nwere taking full advantage of the prediction list. This theoretical\nevaluation measures the percentage of keystrokes\nthat were saved by word prediction over typing out every\ncharacter.\nIn this paper we first describe related work and give some\nbackground in statistical approaches to word prediction. We\npresent approaches to topic modeling and compare the results\nof topic modeling to a baseline method. For a more\nthorough account of this work, visit\nhttp://www.cis.udel.edu/fringe/.\nRELATED WORK\nSeveral previous researchers have used n-gram models in\nword prediction for AAC [4, 5, 12, 18]. For example, Lesher\net al. [12] show how increasing training set size and unigrams\nto bigrams (going from 47% to 54.7%) to trigrams (another\n.8%). These evaluations used a window size of 6.\nOther researchers have integrated grammatical information\ninto n-gram word prediction systems. Garay-Vitoria\nand Gonzalez-Abascal [10] integrated a statistical chart parser,\nwhile Fazly and Hirst [8] and Copestake [7] used part-of-speech\n(POS) tagging. These yielded improvements of 1-5%\nkeystroke savings.\nThere have been several attempts at topic modeling in\nthe language modeling community, particularly for speech\nrecognition [2, 14, 17, 6, 9, 13]. Some of the evaluations\nof topic modeling have found different variants of it to be\nvery beneficial [2, 14, 9]. Lesher and Rinkus [13] is an attempt\nat topic modeling for word prediction, but does not\nuse dynamic topic modeling like [9, 2] and this work.\nTable 1: The keystroke savings of topic modeling is\nshown compared to a bigram and trigram baseline.\nMETHODS\nLike several of the aforementioned word prediction researchers\n, we use n-gram methods for language modeling.\nOur baseline word prediction methods use bigram and trigram-based\nn-gram models with backoff with Good-Turing smoothing\n, the current best practice in statistical language modeling\naccording to Manning and Sch\nutze [15]. Additionally,\nwe incorporate a special unigram model for the first word\nof each sentence. In word prediction, these language models\nour used to rank all the words that the user could possibly\nbe typing. The top W words are presented to the user,\nwhere W is the prediction window size.\nStatistical approaches require a collection of text to construct\na language model. Ideally, our corpus would be a large\ncollection of conversations involving one or more people using\nan AAC system. Such a corpus is unavailable, so we\nfollow [13] in using the Switchboard corpus, which is a collection\nof telephone conversations and their transcriptions.\n1\nThe training section contains a randomly pre-selected 2217\nconversations and the testing section contains the remaining\n221 conversations. We perform preprocessing to remove\nsome speech repairs in accordance with Hindle [11]. These\nediting rules bring the Switchboard conversations closer to\nwhat we envision an AAC user would type.\n3.1\nEvaluation\nWe compare the number of keystrokes required for a user\ntaking full advantage of our word prediction system to the\nnumber of keystrokes required to enter each character of the\nconversation. We use immediate prediction for our evaluations\n, which allows use of the prediction list before the first\ncharacter of a word has been entered. 
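A minimal sketch of this per-word simulation (our own illustration, not the authors' code) is given below; `predict` is a hypothetical ranked predictor returning candidate words for a history and prefix, and the counting reflects the assumptions spelled out next, such as the automatically inserted space after a selection:

```python
# Sketch: simulate keystrokes needed with immediate word prediction and
# compute keystroke savings; the per-turn "speak" keystroke is omitted here.

def keystrokes_with_prediction(words, predict, window=5):
    keys = 0
    history = []
    for word in words:
        prefix = ""
        for ch in word + " ":            # typing letter by letter, space ends the word
            if word in predict(history, prefix)[:window]:
                keys += 1                # one keystroke selects the word from the list;
                break                    # a trailing space is inserted automatically
            keys += 1                    # otherwise type this character
            prefix += ch
        history.append(word)
    return keys

def keystroke_savings(words, predict, window=5):
    normal = sum(len(w) + 1 for w in words)      # every character plus a space
    with_pred = keystrokes_with_prediction(words, predict, window)
    return 100.0 * (normal - with_pred) / normal
```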
We assume that one keystroke is required to "speak" each turn of input and that a space is automatically inserted after a word is selected from the prediction list. Keystroke savings is then

KS = (keys_normal - keys_with_prediction) / keys_normal x 100%

Because we are interested in the prediction of fringe words, our evaluations are measured on fringe words only. Core words are excluded from the list of predictions. The particular core vocabulary we chose is available from the AAC Centers at the University of Nebraska at Lincoln (http://aac.unl.edu/). We used the "Young Adult Conversation Regular" core vocabulary list, as it is the most similar to the type of conversations in the Switchboard corpus.

(Footnote 1: The Switchboard transcriptions were available from http://www.isip.msstate.edu/projects/switchboard/.)

TOPIC MODELING

The goal of topic modeling is to identify the current topic of conversation, then increase the probability of related words and decrease the probability of unrelated words. Some words will be unaffected by topic modeling, such as function words, which are used similarly in all topics. It is for this reason that we chose to improve fringe word prediction with topic modeling: we feel that topic modeling specifically improves fringe word prediction.

Researchers are consistent in representing a topic by a collection of text representative of that topic. However, researchers differ on the best way to organize a collection of topics. Some have created a hierarchical collection of topics [9], while others have created a disjoint set of topics [14, 2, 17]. We feel that the primary lure of a hierarchical approach, the ability to generalize, can be captured in the set approach as well, by giving varying weight to all topics and not just the most likely topic. For this reason, we represent topics as disjoint sets of conversations.

The current topic of conversation must be identified from the part of the conversation that has taken place so far, and updated periodically in the conversation. Thus, we must devise a representation of a partial conversation for assessing the similarity of the conversation to each topic. In representing the conversation so far, we choose to implement an exponentially decayed cache, like [2], using TF-IDF values rather than raw frequencies. This follows the work of Mahajan et al. [14] in considering the inverse document frequency of a word as proportional to its utility in identifying the current topic. Because our approach is aimed at topic identification, we ignore words that occur in 85% or more of the topics, with the intuition that such words are irrelevant to the selection of a topic. As a step to convert our model of the current conversation to a model of the current topic, we compute the document similarity between the cache and the unigram model of each topic. We chose to use the cosine metric, following [9].

Given that we have computed a similarity score between each topic and the current conversation, there are two main variations on how to construct a new language model. Mahajan et al. [14] implemented a k-nearest solution, constructing the topic model from the most similar k topics. Each topic's language model was weighted equally in their experiments. Instead, we chose to follow Florian and Yarowsky's approach [9].
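Before giving their formulation, here is a small sketch of the cache-and-similarity machinery just described (ours, not the paper's implementation; the decay factor, the in-memory per-topic unigram counts and the class name are assumptions):

```python
# Sketch: exponentially decayed TF-IDF cache and cosine similarity to each
# topic's unigram model; the normalized scores play the role of P(t | h).
import math
from collections import defaultdict

def _cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TopicScorer:
    def __init__(self, topic_unigrams, decay=0.95, df_cutoff=0.85):
        # topic_unigrams: {topic: {word: count}}
        n_topics = len(topic_unigrams)
        df = defaultdict(int)
        for counts in topic_unigrams.values():
            for w in counts:
                df[w] += 1
        # drop words appearing in >= 85% of topics; IDF weights the rest
        self.idf = {w: math.log(n_topics / d) for w, d in df.items()
                    if d / n_topics < df_cutoff}
        self.topics = {t: {w: c * self.idf[w] for w, c in counts.items() if w in self.idf}
                       for t, counts in topic_unigrams.items()}
        self.decay = decay
        self.cache = defaultdict(float)

    def observe(self, word):
        """Add one conversation word to the exponentially decayed cache."""
        for w in self.cache:
            self.cache[w] *= self.decay
        if word in self.idf:
            self.cache[word] += self.idf[word]

    def topic_weights(self):
        """Cosine similarity S(t, h) to each topic, normalized to sum to one."""
        sims = {t: _cosine(self.cache, vec) for t, vec in self.topics.items()}
        total = sum(sims.values()) or 1.0
        return {t: s / total for t, s in sims.items()}
```

The normalized weights returned by topic_weights can then stand in for the topic probabilities in the mixture below.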
Florian and Yarowsky expand the probability of a word w given a history h as follows:

P(w | h) = sum over t in topics of P(t | h) P(w | t, h)

P(w | t, h) is simply the probability of w taken from the language model constructed for topic t. The probability of the topic is estimated as

P(t | h) ~ S(t, h) / sum over t' in topics of S(t', h)

where S(t, h) is the cosine similarity of the topic to the current part of the conversation.

4.1 Method A

Our first method of topic modeling is most similar in spirit to the work of Mahajan et al. [14] and Florian and Yarowsky [9]. In training, a bigram model is computed for each topic in Switchboard. In testing, the cache representation of the current conversation is compared against the unigram representation of each topic and similarity scores are computed. The similarity scores are then used to weight the frequencies obtained from each topic in a linear interpolation. This interpolated bigram model is then used to compute the probabilities used for word prediction.

Topic modeling shows a sizable improvement over the bigram baseline: 1.6% to 1.7%. We include the comparison to a bigram baseline because it is the most natural baseline in terms of language understanding. However, a trigram baseline is also a natural comparison when considering that it can run with the same or less computational resources than topic modeling. When compared against the trigram baseline, the topic model gives a 0.8% to 1.5% improvement.

4.2 Method B

Our second method of topic modeling is more similar to the work of Bellegarda [2]. Like Bellegarda, we compute topic-dependent unigram probabilities. These topic-dependent probabilities are multiplied with probabilities from a trigram backoff model. Additionally, we weight the topic component with a tuning parameter. After manual tuning on two conversations, we found that a value of 0.15 worked well.

Method B is an improvement over a trigram baseline, but only a minor one. We feel that the problem is that a low value of the tuning parameter was necessary to avoid overriding the word preference that is due to context, but that this also reduced the ability of the overall model to adapt to a particular topic.

4.3 Comparison

Method A offers an additional 1% or more keystroke savings over Method B for most window sizes. This is due to the low weight of the tuning parameter for Method B. However, as previously mentioned, the low weight was necessary. Additionally, notice that Method A becomes comparatively better as the window size is increased. The trigram model component in Method B can be thought of as a stronger source of knowledge than the interpolated bigram model of Method A. Because of this, when the trigram history exists in the language model, Method B's predictions are more accurate. However, because the trigram model is sparse, it can only contribute to the top few predictions. Thus, it has a much greater effect on the top few window sizes.

For real-world systems, however, absolute performance is not the only factor. The computational demands of each approach are often considered when selecting a practical solution. The trigram baseline processed 1,325 words per minute (wpm). Method A processed conversations in testing at 32 wpm, and Method B processed 1,267 words per minute. Method B uses barely more processing time than the trigram baseline model.

CONCLUSIONS

Topic modeling can be implemented in many different ways.
We've demonstrated two such methods for topic modeling\n: one for computationally limited devices and another\nfor computationally rich devices. Both methods show a clear\nimprovement over a trigram model with backoff. Before the\nadvent of word prediction, a user would've pressed 6.4 keys\nper fringe word on average. Now, with topic modeling for\nword prediction, only 2.5 keys per word are required.\nACKNOWLEDGMENTS\nWe would like to thank the US Department of Education\nfor funding this research under grant H113G040051, under\nthe National Institute on Disability and Rehabilitation Research\nprogram. We would also like to thank Dr. Gregory\nLesher for correspondence regarding his work and Dr. David\nSaunders for lending us a compute server.\nREFERENCES\n[1] B. Baker. Minspeak. Byte, pages 186202, 1982.\n[2] J. Bellegarda. Large vocabulary speech recognition with\nmultispan language models. IEEE Trans. On Speech and\nAudio Processing\n, 8(1), 2000.\n[3] D. R. Beukelman and P. Mirenda. Augmentative and\nalternative communication: Management of severe\ncommunication disorders in children and adults\n. P.H. Brookes\nPub. Co., 1998.\n[4] L. Boggess. Two simple prediction algorithms to facilitate text\nproduction. In Proceedings of the second conference on\nApplied natural language processing\n, pages 3340, Morristown,\nNJ, USA, 1988. Association for Computational Linguistics.\n[5] A. Carlberger, J. Carlberger, T. Magnuson, M. S. Hunnicutt,\nS. Palazuelos-Cagigas, and S. A. Navarro. Profet, a new\ngeneration of word prediction: An evaluation study. In\nProceedings of Natural Language Processing for\nCommunication Aids\n, 1997.\n[6] S. Chen, K. Seymore, and R. Rosenfeld. Topic adaptation for\nlanguage modeling using unnormalized exponential models. In\nProc. Int'l Conf. on Acoustics, Speech and Signal Processing\n,\n1998.\n[7] A. Copestake. Augmented and alternative nlp techniques for\naugmentative and alternative communication. In Proceedings\nof the ACL workshop on Natural Language Processing for\nCommunication Aids\n, 1997.\n[8] A. Fazly and G. Hirst. Testing the efficacy of part-of-speech\ninformation in word completion. In Proceedings of the 10th\nConference of the European Chapter of the Association for\nComputational Linguistics\n, 2003.\n[9] R. Florian and D. Yarowsky. Dynamic nonlocal language\nmodeling via hierarchical topic-based adaptation. In\nProceedings of ACL'99\n, pages 167174, 1999.\n[10] N. Garay-Vitoria and J. Gonz\nalez-Abascal. Intelligent\nword-prediction to enhance text input rate. In Proceedings of\nthe second international conference on Intelligent User\nInterfaces\n, 1997.\n[11] D. Hindle. Deterministic parsing of syntactic non-fluencies. In\nProceedings of the 21st Annual Meeting of the Association for\nComputational Linguistics\n, 1983.\n[12] G. Lesher, B. Moulton, and J. Higgonbotham. Effects of ngram\norder and training text size on word prediction. In Proceedings\nof the RESNA '99 Annual Conference\n, 1999.\n[13] G. Lesher and G. Rinkus. Domain-specific word prediction for\naugmentative communication. In Proceedings of the RESNA\n'01 Annual Conference\n, 2001.\n[14] M. Mahajan, D. Beeferman, and X. D. Huang. Improved\ntopic-dependent language modeling using information retrieval\ntechniques. In Proceedings of the International Conference on\nAcoustics, Speech, and Signal Processing\n, 1999.\n[15] C. Manning and H. Sch\nutze. Foundations of Statistical\nNatural Language Processing\n. MIT Press, 2000.\n[16] A. Newell, S. Langer, and M. Hickey. 
The r^\nole of natural\nlanguage processing in alternative and augmentative\ncommunication. Natural Language Engineering, 4(1):116,\n1996.\n[17] K. Seymore and R. Rosenfeld. Using story topics for language\nmodel adaptation. In Proceedings of Eurospeech '97, pages\n19871990, Rhodes, Greece, 1997.\n[18] A. L. Swiffin, J. A. Pickering, J. L. Arnott, and A. F. Newell.\nPal: An effort efficient portable communication aid and\nkeyboard emulator. In Proceedings of the 8th Annual\nConference on Rehabilitation Techonology\n, pages 197199,\n1985.\n278\n", "keywords": "core vocabulary;identify current topic of conversation;AAC;language modeling;accuracy of prediction method;fringe vocabulary;prediction of fringe words;conversations in the Switchboard corpus;Word prediction;immediate prediction;decrease probability of unrelated words;increase probability of related words;prediction window size;communication rate;construct a new language model;topic modeling"} {"name": "195", "title": "Topic Transition Detection Using Hierarchical Hidden Markov and Semi-Markov Models", "abstract": "In this paper we introduce a probabilistic framework to exploit hierarchy, structure sharing and duration information for topic transition detection in videos. Our probabilistic detection framework is a combination of a shot classification step and a detection phase using hierarchical probabilistic models. We consider two models in this paper: the extended Hierarchical Hidden Markov Model (HHMM) and the Coxian Switching Hidden semi-Markov Model (S-HSMM) because they allow the natural decomposition of semantics in videos, including shared structures, to be modeled directly, and thus enabling efficient inference and reducing the sample complexity in learning. Additionally, the S-HSMM allows the duration information to be incorporated, consequently the modeling of long-term dependencies in videos is enriched through both hierarchical and duration modeling . Furthermore, the use of the Coxian distribution in the S-HSMM makes it tractable to deal with long sequences in video. Our experimentation of the proposed framework on twelve educational and training videos shows that both models outperform the baseline cases (flat HMM and HSMM) and performances reported in earlier work in topic detection . The superior performance of the S-HSMM over the HHMM verifies our belief that duration information is an important factor in video content modeling.", "fulltext": "INTRODUCTION\nThe ultimate goal of the video segmentation problem is to\ncharacterize the temporal dynamics of the video whereby it\ncan be segmented into coherent units, possibly at different\nlevels of abstraction. Seeking abstract units to move beyond\nthe shots has thus been an active topic of much recent research. While the problem of shot transition is largely solved\nat a satisfactory level [7], the `abstract units' or scene detection\nproblem is much harder, partially due to the following\nthree challenges identified in [29]: (a) the variety in directional\nstyles, (b) the semantic relationship of neighbouring\nscenes, and (c) the knowledge of the viewer about the world.\nWhile the last aspect is beyond the scope of this work, the\nfirst two clearly imply that effective modeling of high-level\nsemantics requires the domain knowledge (directional style)\nand the modeling of long-term, multiple-scale correlations\nof the video dynamics (neighboring semantic relationship).\nModeling temporal correlations over a long period is generally\na challenging problem. 
As we shall review in the\nsubsequent section, this problem is usually solved in a specific\ndomain setting so that the expert knowledge about the\ndomain can be utilised. While organization of content in\ngeneric videos (e.g., movies) is too diverse to be fully char-acterized\nby statistical models, the hierarchy of semantic\nstructure in the class of education-oriented videos is more\ndefined, exposing strong temporal correlation in time, and\nthus make it more desirable to probabilistic modeling. In\nthis paper, we concentrate on this video genre and develop\nan effective framework to segment these videos into topi-cally\ncorrelated units. This problem is an important step\nto enable abstraction, summarization, and browsing of educational\ncontent a rich class of film genre that has an\nincreasing important role in building e-services for learning\nand training.\nProbabilistic modeling of temporal correlations in video\ndata is however a difficult problem. It is complicated because\nthe underlying semantics naturally possess a hierarchical\ndecomposition with possible existence of tight structure\nsharing between high-level semantics. In addition, the\ntypical duration for these structures usually varies for each\nof its higher semantic. As an example, assisted narration a\nsection that involves the narrator talking to the audience is\nusually used in both the introduction and in the main body\nof a topic in an educational video. However while one, or\nrarely two, shots of assisted narration (AN) are considered\nsufficient for the introduction, the body typically requires\nmany AN shots. Thus it is important to exploit and fuse\nhierarchical decomposition, structure sharing and duration\ninformation in a unified framework to effectively address the\nproblem of topic transition detection.\nThe most widely used pobabilistic model is the hidden\nMarkov model (HMM). However, in many cases, the HMM\nis unsuitable for video analysis since the strong Markov assumption\nmakes it too restrictive to capture correlations\n11\nover long periods. This limitation is usually overcome in the\nliterature by the use of a series of HMMs in a hierarchic manner\n. The underlying problem in these approaches still is the\nmanual combination of HMMs at the higher levels which results\nin the excessive expense of preparing the training data\nand, more importantly, the interaction across higher semantic\nlevels is not incorporated during model training. One\nrigorous approach to overcome this limitation is the use of\nthe Hierarchical Hidden Markov Model (HHMM), first introduced\nin [6] and later extended to handle structure sharing\nin [3]. The sophisticated model in [3] allows natural hierarchical\norganization of the videos, including any existing\nstructure sharing, to be modeled rigorously. Practically this\nwill result in computational savings and a reduction in sample\ncomplexity for learning. Given its advantages, we use\nthis model in this paper to model educational video content\nfor topic transition detection.\nIt is natural to see that durative properties play an important\nrole in human perception. An excessively long lecture\nwould bore the students. As such, education-oriented videos\n(e.g., news, documentaries, lectures, training videos, etc.)\nexhibit strong duration information in their content. We\nthus propose an alternative approach towards handling temporal\ndependencies over long periods through the explicit\nmodeling of duration information captured in semi-Markov\nmodels. 
In these models, a state is assumed to remain un-changed\nfor some duration of time before it transits to a\nnew state, and thus it addresses the violation of the strong\nMarkov assumption from having states whose duration distributions\nare non-geometric.\nExisting semi-Markov models commonly model duration\ndistributions as multinomials. Video data is however typically\nvery long, thus making a multinomial semi-Markov\nmodel unsuitable for video analysis since it would result in\nboth excessive computation and the number of parameters\nrequired. Continuous modeling of duration does exist such\nas in the use of the Gamma distribution, or more generally\nthe exponential family, described in [12, 16] to provide\nmore compact parameterization. However, there are still\ntwo limitations applied to these models for video analysis:\n(a) learning these distributions requires numerical optimiza-tion\nand the time complexity still depends on the maximum\nduration length, and (b) no hierarchical modeling has been\nattempted. Fortunately, in [5], a Switching Hidden Semi-Markov\nModel (S-HSMM) is introduced in which the duration\nis modeled as a discrete M -phase Coxian distribution\n. This model is particularly interesting for video analysis\nsince: (1) it can model hierarchical decomposition, and\n(2) the Coxian duration modeling results in fast learning\nand inference, the number of parameters is small and close-formed\nestimation exists. Parameterizing long-term temporal\ncorrelations existing in video is thus enriched by both\nthe hierarchical architecture and the duration modeling at\nthe bottom level of the S-HSMM.\nTo model video content, we argue that it is beneficial to\nexploit both the hierarchical organization of the videos, their\nsemantically shared substructures and typical durations of\nimportant semantics. These aspects are all addressed in\nthis paper in a unified and coherent probabilistic framework\n. We use the HHMM and the S-HSMM and propose\na two-phase architecture for detecting topical transition in\neducational videos. In the first phase, shots are classified\ninto meaningful labels. Using classified shot labels, the second\nphase trains a hierarchical probabilistic model (HHMM\nor S-HSMM) which is then used at a later stage for segmentation\nand annotation. Prior knowledge about the domain,\nincluding shared structures, is incorporated into the topo-logical\nstructure during training.\nOur cross-validation on a dataset including a mix of twelve\nvideos demonstrates promising results. The performances\nfrom the baseline cases (HMM and HSMM) have shown\nthat they are too restrictive and unsuitable in our detection\nscheme, proving the validity of hierarchical modeling.\nThe performances of the hierarchical models, including the\nHHMM and S-HSMM, are shown to surpass all results reported\nin earlier work in topic detection [23, 20, 4]. The\nsuperior performance of the S-HSMM over the HHMM has\nalso demonstrated our belief that duration information is\nindeed an important element in the segmentation problem.\nExploiting the hierarchy, structure sharing and duration\nin a unified probabilistic framework, our contributions are\ntwofold: (1) we provide a coherent hierarchical probabilistic\nframework for topic detection. 
Although the current report\nconcentrates on the educational genre, this framework can\nclearly generalize to other genres such as news and documentaries\n, and (2) to our knowledge we are the first to investigate\nduration and hierarchical modeling for video segmentation\n1\nin a unified framework.\nThe remainder of this paper is organized as follows. In\nthe next section, we provide related background to this work.\nThis is followed by a detailed section on the detection framework\nincluding the description of the HHMM and S-HSMM.\nWe detail the shot classification phase in Section 4. Experimental\nresults are then reported in Section 5. Finally, the\nconclusion follows in Section 6.\nRELATED BACKGROUND\nSeeking high-level semantics to move beyond the shots has\nbeen the central theme of much recent research. Attempts\ntowards this problem have resulted in a fast growing body\nof work, and depending on the investigating domain, the abstracting\nunits appear under different names such as scene,\nstory, episode for motion pictures; topic, subtopic, macro\nsegments, story units for information-oriented videos (news,\ndocumentaries, training and educational videos), or general\nterm like logical story units used in [8, 32]. Otherwise stated,\nwe shall the term `scene' in this section to mean all of the\naforementioned names.\nEarly attempts have targeted extracting scene-level concepts\nin broadcast programs, in particular news videos (e.g., [9,\n14, 26]). In these attempts, the semantic extraction problem\nis usually cast as the classification problem. The authors\nin [26], for example, combine a number of visual and\naural low-level features together with shot syntax presented\nin news videos to group shots into different narrative structures\nand label them as anchor-shot, voice-over, or inter-1\nSince topic change coincides with a shot transition, the shot\nboundary provides crucial information in detecting topic\ntransitions, therefore the term `duration' in this work is calculated\nin terms of the number of shots. This drastically\nsimplifies the modeling process. An alternative way of modeling\nduration is to uniformly replicate a shot label based on\nits length. However, doing this would require an extra modeling\nof shot transition knowledge. In this work, we avoid\nthis complication and concentrate on duration information\nbased on the shot counts.\n12\nview. Liu et al. [14] propose a video/audio fusion approach\nto segment news reports from other categories in broadcast\nprograms with different types of classifiers (simple threshold\nmethod, Gaussian mixture classifier, and support vector machine\n). Ide et al. [9] propose an automatic indexing scheme\nfor news video where shots are indexed based on the image\ncontent and keywords into five categories: speech/report,\nanchor, walking, gathering, and computer graphics. Caption\ntext information is then used with classified shots to\nbuild the indices.\nSegmentation of the news story is the second major theme\nexplored in the broadcast domain. The common underlying\napproach used in these works is the use of explicit `rules'\nabout the structure of news programs to locate the transitions\nof a news story. Commonly accepted heuristics are for\nexample: a news story often starts and finishes with anchor-person\nshots [31]; the start of a news story is usually coupled\nwith music [2]; or a relative long silence period is the indication\nof the boundary between two news stories [33]. 
More\ncomplicated rules via temporal analysis are also exploited\nsuch as the work of [37] which utilises detection results of\nanchor-persons and captions to form a richer set of rules\n(i.e., if the same text caption appears in two consecutive\nanchor-person shots, then they belong to the same news\nstory). There is also a body of work which casts the segmentation\nproblem of news story in a HMM framework [10,\n4]. The authors in [10], for example, propose the news segmentation\nproblem as problem of decoding the maximum\nstate sequence of a trained HMM whose transition matrix is\ntailored by explicit rules about the news program. A somewhat\nsimilar approach to the work in this paper is [4] (whose\nresults came first in the TRECVID2003 story segmentation\nbenchmark). Shots in [4] are first classified into a set common\nlabels in news (e.g., anchor, 2anchor, text-scene, etc.).\nThese labels are then input to a HMM for the segmentation\ntask. They report best performances of 74.9% recall and\n80.2% precision for the TRECVID dataset. The work of [4]\nhowever remains limited due to the flat structure HMM, and\nit is not clear how the set of `transition' states were chosen.\nIn an effort to move beyond flat structure, the authors of [4]\nhave raised the need for high-order statistical techniques,\nwhich will be addressed in this paper through the HHMM\nand S-HSMM.\nMore recent approaches towards scene extraction have\nshifted to motion pictures (e.g., [30, 34, 1, 31]). Detecting\nscenes in motion pictures is in general a challenging problem\nand there are three main existing approaches as outlined\nin [31]: temporal clustering-based, rule-based and memory-based\ndetection. In the clustering-based approach, shots are\ngrouped into scenes based on visual similarity and temporal\ncloseness (e.g., [8, 13]). Scene breaks in the rule-based\ndetection approach are determined based on the semantic\nand syntactic analysis of audiovisual characteristics and in\nsome cases further enhanced with more rigorous grammars\nfrom film theory (e.g., [34, 1]). The authors in [30] propose a\nmemory-based scene detection framework. Visual shot similarity\nin these works is determined based on the consistency\nin color chromaticality, and the soundtrack is partitioned\ninto `audio scenes'. Visual and aural data are then fused\nwithin a framework of memory and attention span model to\nfind likely scene breaks or singleton events. Further related\nbackground on scene detection can be found in many good\nsurveys (e.g., [30, 28, 31]).\nExisting HMM-based approaches towards modeling long-term\ntemporal dependencies typically use pre-segmented training\ndata at multiple levels, and hierarchically train a pool\nof HMMs, in which HMMs at the lower levels are used as\ninput to the HMMs at the upper levels. In principle, some\nfundamental units are recognised by a sequence of HMMs,\nand then likelihood values (or labels) obtained from these\nHMMs are combined to form a hierarchy of HMMs\n2\nto capture\nthe interactions at higher semantic levels (e.g., [11,\n18]). Analysing sports videos, Kijak et al. [11] propose a\ntwo-tiered classification of tennis videos using two layers\nof HMMs. At the bottom level, four HMMs are used to\nmodel four shot classes (`first missed serve',`rally', `replay',\nand `break'). Each HMM is trained separately and subse-quently\ntopped up by another HMM at the top level which\nrepresents the syntax of the tennis video with three states\nof the game: `sets', `games', and `points'. 
Parameters for\nthe top HMM are, however, all manually specified. In [18],\na generic two-level hierarchy of HMMs is proposed to detect\nrecurrent events in movies and talk shows. Their idea\nis to use an ergodic HMM at the top level, in which each\nstate is another (non-ergodic) sub-HMM representing a type\nof signal stationary properties. For the case of movies, the\ntop HMM has six states, and each is in turn another three-state\nnon-ergodic HMM. The observations are modelled as\na mixture of Gaussians. After training, the authors claim\nthat interesting events can be detected such as `explosion',\n`male speech', and so on. While being able to overcome the\nlimitation of the flat HMM in modeling long-term dependencies\n, approaches that use HMMs at multiple levels still\nsuffer from two major problems: (1) pre-segmented and an-notated\ndata are needed at all levels for training, and (2)\nin most existing work parameterization at higher levels has\nto be manually specified. In many cases, preparing training\ndata at multiple levels is extremely tedious and at worst,\nmay not be possible. With respect to the second problem,\nsince each semantic level has to be modeled separately, the\nunderlying problem is that the interactions across semantic\nlayers are not modeled and thus do not contribute to the\nlearning process.\nOne framework that integrates the semantics across layers\nis the Hierarchical Hidden Markov Model (HHMM) proposed\nrecently in [6]. The hierarchical HMM extends the\nstandard HMM in a hierarchic manner to allow each state\nto be recursively generalised as another sub-HMM, and thus\nenabling the ability to handle hierarchical modeling of complex\ndynamic processes, in particular \"the ability to infer\ncorrelated observations over long periods in the observation\nsequence via the higher levels of hierarchy\" [6]. The original\nmotivation in [6] was to seek better modeling of different stochastic\nlevels and length scales presented in language (e.g.,\nspeech, handwriting, or text). However, the model introduced\nin [6] considers only state hierarchies that have tree\nstructures, disallowing the sharing of substructures among\nthe high-level states. Recognizing this need, the authors\nin [3] have extended the strict tree-form topology in the\noriginal HHMMs of [6] and allowed it to be a general lattice\nstructure. The extension thus permits a state at any arbitrary\nlevel of the HHMMs to be shared by more than one\nparental state at its higher level (i.e., resulting in a compact\nform of parameter typing at multiple levels). This extended\n2\nNot to be confused with the Hierarchical HMMs.\n13\nform is very attractive for video content modeling since it\nallows the natural organization of the video content to be\nmodeled not only in terms of multiple scales but also in\nterms of shared substructures existing in the decomposition.\nFurther details on the HHMM are provided in Section 3.1.\nEarly application of the HHMM for video analysis is found\nin [36] and later extended in [35]. In these works, the authors\nuse the HHMM to detect the events of `play' and\n`break' in soccer videos. For inference and learning, the\nHHMM is `collapsed' into a flat HMM with a very large\nproduct state space, which can then be used in conjunction\nwith the standard forward/backward passes as in a normal\nHMM. 
Four methods are compared in [36] to detect `play'\nand `break': (1) supervised HMMs, in which each category\nis trained with a separate HMM, (2) supervised HHMMs,\nin which bottom level HMMs are learned separately and\nparameters for the upper levels are manually specified, (3)\nunsupervised HHMMs without model adaptation, and (4)\nsupervised HHMMs with model adaptation. In (3) and (4),\ntwo-level HHMMs are used. Their results have shown a very\nclose match between unsupervised and supervised methods\nin which the completely unsupervised method with model\nadaptation performs marginally better. These figures are\n75.5%, 75.0%, 75.0% and 75.7% respectively for those four\nmethods. While presenting a novel contribution to the feature\nselection and model selection procedure, the application\nof the HHMMs in this work is still limited both for learning\nand for exploitation of the hierarchical structure. Flattening\na HHMM into a flat HMM as done in [36, 35] suffers from\nmany drawbacks as criticized in [17]: (a) it cannot provide\nmulti-scale interpretation, (b) it loses modularity since the\nparameters for the flat HMM get constructed in a complex\nmanner, and (c) it may introduce more parameters, and\nmost importantly it does not have the ability to reuse parameters\n, in other words parameters for the shared sub-models\nare not `tied' during the learning, but have to be replicated\nand thus lose the inherent strength of hierarchical modeling.\nBeing able to model shared structures, the extended HHMMs\nof [3] allows us to build more compact models, which\nfacilitates more efficient inference and reduces the sample\ncomplexity in learning. This model is applied in [20] and [22]\nfor the problem of topic transition detection and video structure\ndiscovery respectively. The authors in [20] use a three-level\nHHMM for the detection of topic transitions in educational\nvideos. Differing from our experiments in this paper\n, the HHMM in [20] is modified to operate directly with\ncontinuous-valued observed data via the use of Gaussian\nmixture models as the emission probabilities. Each shot-based\nobserved vector consists of seven features extracted\nfrom visual and audio streams. They report a 77.3% recall\nrate and 70.7% precision for the detection task. In another\napplication, with the help of prior knowledge about educational\nvideos, a topology for a three-level HHMM is used\nin [22] to automatically discover meaningful narrative units\nin the educational genre. Their experiments have shown encouraging\nresults in which many meaningful structures are\nhierarchically discovered such as `on-screen narration with\ntexts', `expressive linkage', `expressive voice-over', etc. The\nwork of [22] is somewhat similar to that of [18] reviewed\nearlier in this section, except the model in [22] allows more\ndomain knowledge to be encoded and the parameters are all\nlearned automatically.\nTHE PROBABILISTIC TOPIC DETECTION FRAMEWORK\nOur topic detection framework consists of two phases.\nThe first phase performs shot detection and low level feature\nextraction and then classifies a shot in a meaningful label set\n. This phase is described in Section 4. In the next phase,\nwe train a HHMM or S-HSMM over the alphabet space\nfrom the training data and then use it in conjunction with\nthe Viterbi to perform segmentation and annotation. 
The architecture of the framework is depicted in Figure-1.

[Figure 1 shows video and audio signals feeding shot detection and classification together with feature extraction; on top sits a two-level topology with top-level states 'Intro' and 'main body' over the bottom-level labels Direct Narration, Assisted Narration, Voice-Over, Expressive Linkage and Functional Linkage, each with an M-phase Coxian duration model, plus an END state.]
Figure 1: The architecture for the topic detection framework.

The two-level HHMM and the S-HSMM (whose topology is shown at the top of Figure-1) are special cases of the hierarchical model with two layers. For the S-HSMM (HHMM), the top layer is a Markov sequence of switching variables, while the bottom layer is a sequence of concatenated HSMMs (HMMs) whose parameters are determined by the switching variables at the top. Thus, the dynamics and duration parameters of the HSMM (HMM) at the bottom layer are not time invariant, but are `switched' from time to time, similar to the way the linear Gaussian dynamics are switched in a switching Kalman filter. When mapping to the topic modeling problem, the bottom layer is used to capture `atomic' semantics such as voice-over, expressive linkage or assisted narration. Combinations of these atomic semantics then form higher-level semantics, each of which is represented by a hidden state at the top layer of our model.

3.1 The Hierarchical HMM

Assuming knowledge of the flat HMM (e.g., see [24]), we now briefly describe the HHMM. A hierarchical HMM is formally defined by a three-tuple (ζ, θ, Σ): a topological structure ζ, parameterized by θ, and an emission alphabet space Σ. The topology ζ specifies the model depth D, the state space S^d available at each level d, and the parent-children relationship between two consecutive levels. For example, the two-level topology shown at the top of
For inference and learning, the HHMM\nis represented as a dynamic Bayesian network (DBN) and\ncan be learned by the Asymmetric Inside-Outside algorithm\nin [3] or by the forward/backward passes in [17]. Figure-3\nshows on its left the DBN representation of the HHMM\nwith two levels, i.e., D = 2. We refer readers to [6, 17, 3]\nfor further information on the HHMMs.\n3.2 The Switching-Hidden Semi Markov Model\nTo provide an intuitive view to the S-HSMM, starting\nfrom the description of the HHMMs from the previous section\n, let us consider the case of a two-layer HHMM (D = 2)\ndefined as follows. The state space is divided into the set of\nstates at the top level Q\n\n= S\n1\n= {1, . . . , |Q\n\n|} and states\nat the bottom level Q = S\n2\n= {1, . . . , |Q|}. This model is\nparameterized by = {\n\n, A\n\n, , A, A\nend\n, B}.\nAt the top level,\n\np\nand A\n\npq\nare respectively the initial\nprobability and the transition matrix of a Markov chain defined\nover the state space Q\n\n. For each p Q\n\n, ch(p) Q is\nused to denote the set of children of p. As in the case of the\nextended HHMM in [3], it is possible that different parent\nstates may share common children, i.e., ch(p) ch(q) = for\np, q Q\n\n. A transition to p at the top level Markov chain\nwill initiate a sub-Markov chain at the lower level over the\nstate space ch(p) parameterized by {\np\n, A\np\n, A\np\nend\n} where\nq\ni\nand A\np\nij\nare the initial and transition probabilities as in the\nnormal HMM setting, and A\np\ni,end\nis the probability that this\nchain will terminate after a transition to i. At each time\npoint t, a discrete symbol y\nt\nis generated with a probability\nof B\nv|i\nwhere i is the current state at the bottom\nlevel. In the description of this two-level HHMM, the duration\nd for which a bottom state i remains the same clearly\nhas a geometric distribution parameterized by its non-self-transition\nprobability (1 - A\np\nii\n), i.e., d Geom(1 - A\np\nii\n).\nIn many cases, the geometric distributions are often too\nrestricted to model realistic data. The Switching Hidden\nSemi-Markov Models (S-HSMMs) proposed in [5] overcomes\nthis restriction and allows the duration d of state i at the\nbottom level to follow a more general discrete distribution\nd D\np,i\nd\n. More precisely, the p-initiated chain at the bottom\nlevel is now a semi-Markov sequence parameterized by\n{\np\ni\n, A\np\nij\n, D\np,i\nd\n} as opposed to the normal Markov chain in the\nHHMM case. The authors in [5] consider two families of distributions\nfor modeling the duration: the multinomial and\nthe Coxian. However for the multinomial case, the complexity\nof the learning algorithm is proportional to the maximum\nduration length, thus making it unsuitable for the problem\nof modeling video data which is usually very long in nature.\nApart from the disadvantage of assuming a maximum duration\n, our empirical testing on the multinomial case with\nthe maximum length of 50 has also shown that it is about\n20 times slower than its Coxian counterpart reported in this\npaper, thus making it impractical in our settings. We will\ntherefore omit the multinomial case and will consider exclu-sively\nthe Coxian parameterization in this paper.\nA discrete M -phase Coxian distribution Cox(; ), parameterized\nby = {\n1\n, . . . ,\nM\n} (P\ni\n\ni\n= 1) and =\n{\n1\n, . . . ,\nM\n}, is defined as a mixture of P\nM\ni=1\n\ni\nS\ni\nwhere\nS\ni\n(X\ni\n+ . . . 
+ X\nM\n), in which X\ni\nare independent random\nvariables having geometric distributions X\ni\nGeom(\ni\n).\nThis distribution is a member of the phase-type distribution\nfamily and has the following very appealing interpretation\n. Let us construct a Markov chain with M + 1 states\nnumbered sequentially with the self transition parameter\nA\nii\n= 1 i\nas shown in Figure-2. The first M states rep-1\nabsorbing\nstate\n2\nM\n1\nM\n\n2\n\nM\n\n1\n\nM\n\n2\n\n1\n\nFigure 2: The phase diagram of an M -phase Coxian.\nresent M phases, while the last is the absorbing state which\nacts like an end state. The duration of each individual state\n(phase) i is X\ni\nGeom(\ni\n). If we start from state i, the\nduration of Markov chain before the end state reached is\nS\ni\n= X\ni\n+ . . . + X\nM\n. Thus, Cox(, ) is indeed the distribution\nof the duration of this constructed Markov chain with\nas the initial state (phase) distribution. The discrete Coxian\nis much more flexible than the geometric distribution:\nits probability mass function is no longer monotonically decreasing\nand it can have more than one mode.\nUsing the Coxian distribution, the duration for the states\nat the bottom level in the S-HSMM is modeled as follows.\nFor each p-initiated semi-Markov sequence, the duration of a\nchild state i is distributed according to D\np,i\nd\n= Cox(d;\np,i\n,\np,i\n).\nThe parameter\np,i\nand\np,i\nare M -dimensional vectors\nwhere M is a fixed number representing the number of phases\nin the discrete Coxian. It is easy to verify that for M = 1,\nthe model reduces identically to a two-layer HHMM.\n15\n3.3 Inference and Learning in the S-HSMM\nFor inference and learning, the S-HSMM is represented as\na dynamic Bayesian network as shown in Figure-3 and then\nforward/backward passes are applied to compute the filtering\nand smoothing distributions required for EM learning.\nt\n+1\nt\nz\nt\nz\nt\n+1\nm\nt\nm\nt\n+1\ny\nt\ny\nt\n+1\nx\nt\n+1\nx\nt\ne\nt\n+1\ne\nt\nz\nt\nz\nt\n+1\ny\nt\ny\nt\n+1\nx\nt\n+1\nx\nt\nt\nt\n+1\ne\nt\ne\nt\n+1\nFigure 3: Two-slice DBN representation of a two-level\nHHMM (left) and the (Coxian) S-HSMM (right).\nAt each time-slice t, an amalgamated hidden state S\nt\n=\n{z\nt\n,\nt\n, x\nt\n, e\nt\n, m\nt\n} together with the observation y\nt\nare maintained\n. The top level state is updated via z\nt\nand\nt\nis a\nboolean-valued variable set to 1 when the z\nt\n-initiated semi-Markov\nsequence ends at t. At the bottom level, x\nt\nis the\ncurrent child state in the z\nt\n-initiated chain, m\nt\nrepresents\nthe current phase of x\nt\nand e\nt\nis a boolean-valued variable\nset to 1 when x\nt\nreaches the end of its duration. The forward\nand backward procedures in the general DBN are then used\nto compute the filtering distribution Pr(S\nt\n|y\n1:t\n) and two\nsmoothing distributions Pr(S\nt\n|y\n1:T\n) and Pr(S\nt\n, S\nt+1\n|y\n1:T\n).\nWith these smoothing distributions, it is sufficient to derive\nall expected sufficient statistics required during EM learning\n. The overall complexity for the forward pass (and also\nfor the EM) is O(|Q|\n2\n|Q\n\n|\n2\nM T ). Further information can\nbe found in [5].\n3.4 Viterbi decoding for segmentation\nTo compute the best sequence state, that is to find:\nS\n\n1:T\n= argmax\nS\n1:T\nPr(S\n1:T\n|y\n1:T\n)\nViterbi decoding algorithms for the HHMM and S-HSMM\nare developed. 
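As an aside on the duration model, the discrete M-phase Coxian of Figure-2 can be illustrated with the sampling sketch below: start in phase i with probability mu[i], stay there a geometric number of steps, then move to the next phase until the absorbing state is reached. The parameter values are arbitrary examples; with M = 1 the draw reduces to a plain geometric, matching the HHMM case.

# Sampling from a discrete M-phase Coxian Cox(mu, lam); parameter values are arbitrary.
import random

def sample_coxian(mu, lam, rng=random):
    """Draw one duration d ~ Cox(mu, lam)."""
    M = len(mu)
    # choose the starting phase according to mu
    u, phase = rng.random(), 0
    while phase < M - 1 and u > mu[phase]:
        u -= mu[phase]
        phase += 1
    # accumulate the geometric sojourn times of phases phase..M-1
    d = 0
    for i in range(phase, M):
        k = 1                                   # Geom(lam[i]) on {1, 2, ...}
        while rng.random() > lam[i]:
            k += 1
        d += k
    return d

mu, lam = [0.6, 0.3, 0.1], [0.5, 0.2, 0.7]      # a 3-phase example, as M = 3 is used in this paper
samples = [sample_coxian(mu, lam) for _ in range(10000)]
print(sum(samples) / len(samples))              # empirical mean duration
print(sum(sample_coxian([1.0], [0.5]) for _ in range(10000)) / 10000)   # M = 1: plain geometric, mean ~2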
These algorithms are similar to the one used\nin the standard HMM outlined in [24] except we replace\nthe normal state in the HMM setting by our amalgamated\nstate S\nt\nwhich\n{z\nt\n, x\nt\n,\nt\n, m\nt\n, e\nt\n} for the S-HSMM and\n{z\nt\n, x\nt\n,\nt\n, e\nt\n} for the HHMM (cf. Figure-3).\nSHOT-BASED SEMANTIC CLASSIFICATION\nIn this section, we detail the first phase in the detection\nframework. This includes the formulation of an alphabet\nset for shot labeling, low-level feature extraction and shot\nclassification.\n4.1 Shot labels set:\n\nExisting work on the educational videos analysis (e.g., [21,\n19]) has studied the nature of this genre carefully. As noted\nin [21], the axiomatic distinction of the educational genre is\nin its purpose of teaching and training; and as such a well-crafted\nsegment that moves viewers to actions or retains\na long-lasting message requires elaborative directing skills\n3\n.\nBased on a narrative analysis used in the educational domain\nand observed rules and conventions in the production of this\nmedia, the authors in [21] propose a hierarchy of narrative\nstructures at the shot level as shown in Figure-4.\nIn this paper, we select the five most meaningful structures\nfrom this hierarchy for experimentation. This set\nincludes: direct-narration (DN), assisted-narration (AN),\nvoice-over (VO), expressive-linkage (EL), and functional-linkage\n(FL). We shall now briefly describe these narratives.\nDirect-narration (DN) and assisted-narration (AN) are re-ferred\nto jointly as on-screen narration, which refer to the\nsegments with the appearance of the narrator. The purpose\nof these sections is to speak to the viewers with the\nvoice of authority, and is commonly used to demarcate a\nnew topic or subtopic, to clarify a concept or to lead the\nviewers through a procedure with examples. DN is a more\nstrict form of on-screen narration. It involves eye-to-eye\ncontact where the narrator speaks to the viewers directly.\nAn analogy from news video is the anchor-shot. AN refers\nto parts of the video when a narrator appears in a more\ndiverse style, and the attention of the viewers is not necessarily\nfocused on him or her. Here, the purpose is not only\nto talk to the viewers, but also to emphasize a message by\nmeans of text captions and/or to convey an experience via\nbackground scenes. A similar structure from news for AN\nis the reporting shot. Assisted narration can be used both\nin the introduction of a topic or in the main body, and thus\nthis structure should be shared\n4\nby both higher semantics\n`introduction' and `main body'. As we see later, this knowledge\nis explicitly modeled and incorporated in the design of\nthe topology for the S-HSMM. An important feature is that\nalthough the semantics of AN is shared, the typical durations\nare different when it is used in the introduction or the\nmain body respectively. An AN section used to demarcate\na new topic usually contains only one, and sometimes two\nshots, while an AN section used in the main body is typically\nlong, spanning a number of shots. Conditioning on the\nparent (i.e., introduction or main body), the typical duration\ndistribution of the AN section is learned automatically\nfor each case by our model.\nThe voice-over (VO) structure is identified as sections\nwhere the audiotrack is dominated by the voice of the narrator\n, but without his or her appearance. The purpose of\nthese segments is to communicate with the viewers via the\nnarrator's voice. 
Additional pictorial illustration is usually\nfurther shown in the visual channel.\nExpressive linkage (EL) and Functional linkage (FL) belong\nto the same broader linkage group in the hierarchy in\nFigure-4. The purpose of the linkage structure is to maintain\nthe continuity of a story line but there is neither on-screen\nnor voice-over narration involved. Functional linkage\ncontains transition shots encountered in switching from one\nsubject to the next. Usually, large superimposed text captions\nare used and the voice narration is completely stopped\n3\nWe note that the two closest video genre to educational\nvideos is news and documentaries. In the description of what\nfollows on educational genre, we can spot several similarities\nacross these genre.\n4\nIn terms of parameterization, it is a form of parameter tying\n.\n16\nLinkage\nNarration\nOn-screen\nNarration\nS\nu\np\np\no\nr\nt\ni\nv\ne\nN\na\nr\nr\na\nt\ni\no\nn\nN\na\nr\nr\na\nt\ni\no\nn\nV\no\ni\nc\ne\nO\nv\ne\nr\nvo\neducational videos\non\nlk\nf\nu\nlk\ne\nx\nlk\nEx\npre\nssi\nve\nLin\nkag\ne\nFu\nnct\nion\nal\nLin\nkag\ne\nsn\nSupportive Narration\nDi\nr\nect\ni\non\nNa\nr\nrat\ni\non\nan\nw\nt\ndn\na\nn\nw\ns\nvo\nw. Texts/Scenes\nvo w\nith Sc\nenes\nv\no\nt\ns\nvo\nw\ns\nvo\nw\nt\nAss\nNa\nrrw\n.S\ncen\nes\nAs\nsN\narr\nw.\nTe\nxts\nvo with Texts\nFigure 4: The hierarchy of narrative structures in educational\nvideos proposed in [21].\nwith possibly music played in the background. Expressive\nlinkage, on the other hand, is used to create `mood' for the\nsubject being presented. For example, in the video presenting\nthe fire safety topic, there is a segment in which the\nnarration is completely stopped and then a sequence of pictures\nof the house on fire is shown. These scenes obviously\ndo not give any direct instruction, rather they create a sense\nof `mood' that helps the video to be more appealing and interesting\n.\n4.2 Feature extraction and shot classification\nThe feature set and method for shot classification described\nin [21] is employed in this paper. The feature set\nis extracted from both visual and audio streams at the shot-based\nlevel. From the image sequence, we choose to detect\nthe frontal faces to reflect the appearance of the narrator\nusing the CMU face detection algoritm [25]; and captioned\ntexts as one of the common means of conveying information\nin educational videos using the algorithm described in [27].\nIn order to classify a shot into direct-narration, voice-over,\nlinkage, etc., further information is sought from the audio\nstream. Audio features are computed as the percentage of\nthe following audio classes within a shot: vocal speech, music\n, silence, and non-literal sound. A shot is then classified\ninto one of the elements of = {DN, AN, V O, EL, F L} using\nthe classification framework reported in [21]. Since we\nclaim no contribution at this stage, we shall refer readers\nto [21] for full details on this classification scheme.\nEXPERIMENTAL RESULTS\nOur dataset D consists of 12 educational and training\nvideos containing different types of subjects and presentational\nstyles, and thus this constitutes a relatively noisy set\nof data. We manually provide groundtruth for these videos\nwith topic transitions. In some cases, the groundtruth for\ntopic transitions comes directly from the hardcopy guidelines\nsupplied by the producer.\nAt the pre-processing stage, Webflix [15] is used to perform\nshot transition detection and all detection errors are\ncorrected manually. 
Since our contribution from this paper\nis at the semantic level, the latter step is to ensure an error\nat the shot detection does not influence the performance of\nthe system at higher levels. Since educational videos mainly\ncontain cut and dissolve transitions, the shot detection accuracy\nis found to be very high with rare cases being erroneous.\nGiven shot indices, each video is processed as described in\nSection 4, and then each shot S is labeled as one of the\nelements of = {DN, AN, V O, EL, F L}.\n5.2 Model topology and parameterization\nWe will use four models in this experiments: the flat HMM\nand HSMM (as the baseline cases), the HHMM and the S-HSMM\n. For the flat HMM and HSMM, we range the number\nof states from 2 to 5 with the observation space , where 2 is\nintended to be the minimum number of states required (like\n`intro' and `main body') and 5 is the number of alphabets\n(i.e., in the relaxed way that the number of states equates to\nthe number of alphabets). The semi-Markov version HSMM\nis further parameterized by 3-phase Coxian distributions as\nthe duration distributions of the states. The choice of M = 3\nphases is hinted by the results reported in [5] where M = 3\nhas resulted in best performances.\nFor the HHMM and the S-HSMM, the topology shown in\nthe top of Figure-1 is used to construct the S-HSMM in this\nexperiment. This topology specifies Q\n\n= 2 states at the top\nlevel where state 1 and 2 correspond to the introduction and\nthe main body of the topic respectively. The Markov chain\nat this level is similar to the flat HMM used in [4] for news\nstory segmentation\n5\nreviewed in Section 2. We incorporate\nthe assumed prior knowledge that a topic usually starts with\neither direct-narration, assisted-narration or functional linkage\n, thus state 1 has {1, 2, 5} as its children set. Similarly,\nthe main body can contain assisted-narration, voice-over or\nexpressive linkage, hence its children set is {2, 3, 4}. Here\nstate 2 (assisted narration) has been shared by both parent\nstate 1 (`intro') and 2 (`main body'). The bottom level\nhas 5 states corresponding to 5 shot labels. To map the\nlabels to the bottom states, we construct a diagonal-like B\nobservation matrix and fix it, i.e., we do not learn B. The\ndiagonal entries of B are set to 0.99 to relax the uncertainty\nduring the classification stage. The duration models in the\nS-HSMM are used with M = 3 phases Coxian.\n5.3 Detection Results\nGiven the dataset D, our evaluation employs a leave-one-out\nstrategy to ensure an objective cross-validation. We sequentially\npick out a video V and use the remainder set\n{D \\ V } to train the model, and then use V for testing. In\nthe results that follow, this method is used for all cases including\nthe flat HMM, the flat HSMM, hierarchical HMM,\nand the S-HSMM. A topic transition is detected when the introduction\nstate at the top level is reached during the Viterbi\ndecoding. 
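The detection rule in the last sentence can be written down directly: given the decoded top-level state sequence (state 1 = introduction, state 2 = main body), a topic boundary is reported at every shot where the decoder enters state 1. A minimal sketch, with an invented decoded sequence:

# Reading topic transitions off a decoded top-level state sequence (1 = intro, 2 = main body).
def topic_transitions(z):
    return [t for t, state in enumerate(z) if state == 1 and (t == 0 or z[t - 1] != 1)]

print(topic_transitions([1, 1, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2]))   # -> [0, 5, 9]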
Examples of Viterbi decoding with the S-HSMM\nand HHMM are shown in Figure-5.\nTo measure the performance, in addition to the well-known\n5\nThey called `transition' and `internal' states instead of `introduction'\nand `main body'.\n17\nrecall (recall) and precision (prec) metrics, we include the\nF-score (f-score) metric defined as:\nf-score = 2 recall prec\nrecall + prec = 2\n\n1\nrecall +\n1\nprec\n\n-1\nWhile the recall rate measures how well the system can recover\nthe true topic transitions, and high precision ensures\nthat it does not over-segment the video, the F-score shows\nthe overall performance of the system. In the ideal case\nwhen recall=prec=100%, clearly f-score = 1, i.e., the\nhighest performance the system can achieve.\nThe baseline cases: flat HMM and HSMM\nSince initialization is crucial during EM learning, we apply\nmultiple random restart points when conducting the experiments\n, including the uniform initialization. Although\nseveral restarts were used, the flat HMM is found to yield\nextremely poor results in all cases. Even when we train and\ntest on the same dataset, the flat HMM still produces poor\ndetection results, proving to be unsuitable in our topical\ntransition detection settings.\nThe flat HSMM produces slightly better results than the\nflat HMM, but still in all ten runs, the performance is still\nvery low (recall= 7.74% and prec= 48% in the best case).\nThe poor performance of the HMM and HSMM is of no\nsurprise, since their forms are too strict to model a rather\nhigh concept - the `topic'. Furthermore, with the flat structures\n, they offer no mechanism to incorporate prior domain\nknowledge such as those that we use in the topology of the\nS-HSMM and HHMM. This clearly shows that hierarchical\nmodels are much more suitable for video analysis than the\nflat ones. Given the poor results in the flat structure cases,\nwe will omit the HMM and HSMM in the discussion of what\nfollows below.\nDetection with the S-HSMM and HHMM\nThe recall rate, precision and F-score for representative runs\nare reported in Table 1, in which the best performance are\nhighlighted in bold. The detection results for each individual\nvideo for the best cases are shown in Table 2. With different\nrandom restarting points, including the uniform initialization\n, the performance of the HHMM ranges from poor\nto very good (41.29% 83.23% for recall and 80.00%\n84.47% for precision), whereas the S-HSMM consistently\nyields good results (83.87% 84.52% for recall and 87.92%\n88.51% for precision).\nSince during training there is nothing exposed to the testing\nexamples, we also report (in the second part of Table\n1) the performances of the HHMM and S-HSMM in a\nlikelihood-based `best model selection' scheme. This scheme\nworks as follows. As in the leave-one-out strategy, let V be\na video selected from D, and N is the number of times we\ntrain the model using the dataset {D \\ V } (i.e., without\nV ). Let\ni\n(V ) and L\ni\n(V ) (i = 1 . . . N ) respectively be the\nlearned model and the likelihood (at convergence) obtained\nfor i-th run. We then use the model\ni\n\nto test on the\nunseen video V where i\n\n= argmax\ni=1...N\nL\ni\n(V ). Simply speaking\n, we sequentially `throw away' a video V , then select the\nbest model (i.e., highest likelihood) among all runs to test\non V . For the HHMM, the result stays the same as when\nwe choose the best performance based on the F-score. 
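The F-score and the likelihood-based best-model-selection scheme used in these comparisons can be stated compactly as follows; the likelihood values in the example are invented.

def f_score(recall, prec):
    return 2 * recall * prec / (recall + prec)

print(round(f_score(0.8452, 0.8851), 3))        # ~0.865, the best S-HSMM run in Table 1

def select_best_model(runs):
    """runs: (model, likelihood-at-convergence) pairs from N trainings on D \\ V;
    pick the highest-likelihood run to test on the held-out video V."""
    return max(runs, key=lambda r: r[1])[0]

print(select_best_model([("run-1", -1520.3), ("run-2", -1498.7), ("run-3", -1510.0)]))   # -> run-2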
For\nthe S-HSMM, the recall stays the same, while the precision\nslightly decreases from 88.51% to 87.92%. Nevertheless, the\nS-HSMM is still superior to the HHMM.\nrecall (%)\nprec (%)\nf-score\nresults for best performance selection\nUniform\n42.58\n81.48\n0.559\nRand. 1\n83.23\n84.47\n0.840\nHHMM\nRand. 2\n83.23\n84.87\n0.840\nRand. 3\n83.23\n84.87\n0.840\nRand. 3\n41.29\n80.00\n0.545\nRand. 4\n83.87\n83.87\n0.839\nUniform\n84.52\n87.92\n0.862\nRand. 1\n84.52\n88.51\n0.865\nS-HSMM\nRand. 2\n83.87\n87.25\n0.855\nRand. 3\n84.52\n88.51\n0.865\nRand. 4\n83.87\n87.25\n0.855\nRand. 5\n84.52\n88.51\n0.865\nresults for best model selection\nHHMM\n83.23\n84.87\n0.840\nS-HSMM\n84.52\n87.92\n0.862\nTable 1: Detection Performances for the S-HSMM and the\nHHMM. Best performance for each case is highlighted in\nbold (we note that best performances are attained in multiple\ncases and we select one of them to highlight).\nTable 1 and 2 show that modeling with the S-HSMM results\nin better performances than the HHMM in both recall\nand precision rates. And as a result, the F-score improves\nfrom 0.840 to 0.865. While the recall rate improves only\nslightly, the 4% improvement in the precision indicates\nthat the HHMM tends to over-segment the video more frequently\nthan the S-HSMM. This has confirmed our belief\nthat duration information is an important factor in our topic\ntransition detection settings. The semi-Markov modeling\nhas effectively overcome the limitation of the strict Markov\nassumption of {future past | present}\n6\nin the flat HMM,\nallowing longer temporal dependency to be captured via the\nduration of the state. Nevertheless, given a somewhat more\ncontained set of data used in this experiment, the results\nfrom both the S-HSMM and HHMM are better than the previous\ndetection results of news story reported in [4] (which\ncame first in TRECVIC2003 testbed) and the heuristics and\nBayesian approaches on topic detection in [23, 21]. These results\ndo not only imply the advantages of the S-HSMM over\nthe HHMM, but also show the contribution of the HHMM\nin its own right.\nCONCLUSION\nIn this paper we explore the difficult problem of detecting\ntopic transitions through the use of two probabilistic models\n, the HHMM and the S-HSMM. Both allow the modeling\nof hierarchy and the sharing of substructures within the hierarchy\n, whilst the S-HSMM additionally allows the explicit\nmodeling of durative properties. Coupled with the use of the\nCoxian model, we show how this unified framework performs\nbetter than the baseline cases (the flat HMM and HSMM)\nand previous results reported. 
In particular the use of the\nS-HSMM demonstrates that the modeling of duration is a\n6\ni.e., the future is conditionally independent of the past\ngiven the present.\n18\nVideo\nTP\nFP\nMiss\nGT\n1 - \"EESafety\"\n10\n8\n1\n3\n3\n5\n13\n2 - \"SSFall\"\n4\n4\n1\n1\n2\n2\n6\n3 - \"ElectS\"\n6\n6\n2\n1\n2\n2\n8\n4 - \"TrainHaz\"\n18\n20\n2\n2\n3\n1\n21\n5 - \"EyeS\"\n10\n10\n0\n1\n0\n0\n10\n6 - \"FootS\"\n10\n10\n1\n1\n1\n1\n11\n7 - \"HKeeping\"\n11\n11\n3\n3\n1\n1\n12\n8 - \"Maintn\"\n9\n8\n1\n3\n4\n5\n13\n9 - \"HandS\"\n9\n9\n1\n1\n1\n1\n10\n10 - \"SBurning\"\n19\n19\n1\n1\n2\n2\n21\n11 - \"HeadProt\"\n6\n5\n1\n3\n1\n2\n7\n12 - \"WeldingS\"\n19\n19\n3\n3\n4\n4\n23\nSum\n131\n129\n17\n23\n24\n26\n155\nTable 2: Detection results for each video in the best performance\ncases of the S-HSMM and the HHMM (TP: True\nPositive, FP: False Positive, GT: Ground-Truth).\npowerful tool in the extraction of higher level semantics.\nThe results demonstrate the promise of the approach and\nalthough the results are demonstrated with the educational\nand training film genre, the method can easily be applied to\nother genres. We believe that the promise of the approach\nlies in its unified probabilistic handling of durative properties\nand shared hierarchical structure, allowing it to handle\nlong video sequences with inherent variability and complicated\nsemantics.\nAcknowledgement\nHung Bui is supported by the Defense Advanced Research\nProjects Agency (DARPA), through the Department of the\nInterior, NBC, Acquisition Services Division, under Contract\nNo. NBCHD030010.\nREFERENCES\n[1] B. Adams, C. Dorai, and S. Venkatesh. Automated\nfilm rhythm extraction for scene analysis. In IEEE\nInternational Conference on Multimedia and Expo,\npages 10561059, Tokyo, Japan, August 2001.\n[2] P. Aigrain, P. Jolly, and V. Longueville. Medium\nknowledge-based macro-segmentation of video into\nsequences. In M. Maybury, editor, Intelligent\nMultimedia Information Retrieval, pages 159174.\nAAAI Press/MIT Press, 1998.\n[3] H. H. Bui, D. Q. Phung, and S. Venkatesh.\nHierarchical hidden markov models with general state\nhierarchy. In D. L. McGuinness and G. Ferguson,\neditors, Proceedings of the Nineteenth National\nConference on Artificial Intelligence, pages 324329,\nSan Jose, California, USA, 2004. AAAI Press / The\nMIT Press.\n[4] L. Chaisorn, T.-S. Chua, C.-H. Lee, and Q. Tian. A\nhierarchical approach to story segmentation of large\nbroadcast news video corpus. In IEEE International\nConference on Multimedia and Expo, Taipei, Taiwan,\nJune 2004.\n[5] T. V. Duong, H. H. Bui, D. Q. Phung, and\nS. Venkatesh. Activity recognition and abnormality\ndetection with the Switching Hidden Semi-Markov\nModel. In IEEE Int. Conf. on Computer Vision and\nPattern Recognition, volume 1, pages 838845, San\nDiego, 20-26 June 2005. IEEE Computer Society.\n[6] S. Fine, Y. Singer, and N. Tishby. The hierarchical\nhidden markov model: Analysis and applications.\nMachine Learning, 32(1):4162, 1998.\n[7] A. Hanjalic. Shot-boundary detection: Unraveled and\nresolved? IEEE Transaction in Circuits and Systems\nfor Video Technology, 12(2):90105, 2002.\n[8] A. Hanjalic, R. L. Lagendijk, and J. Biemond.\nAutomated high-level movie segmentation for\nadvanced video retrieval systems. IEEE Transactions\nin Circuits and Systems for Video Technology,\n9(4):580588, 1999.\n[9] I. Ide, K. Yamamoto, and H. Tanaka. Automatic video\nindexing based on shot classification. 
In First\nInternational Conference on Advanced Multimedia\nContent Processing, pages 99114, Osaka, Japan,\nNovember 1998.\n[10] U. Iurgel, R. Meermeier, S. Eickeler, and G. Rigoll.\nNew approaches to audio-visual segmentation of TV\nnews for automatic topic retrieval. In IEEE Int. Conf.\non Acoustics, Speech, and Signal Processing, volume 3,\npages 13971400, Salt Lake City, Utah, 2001.\n[11] E. Kijak, L. Oisel, and P. Gros. Hierarchical structure\nanalysis of sport videos using HMMs. In Int. Conf. on\nImage Processing, volume 2, pages II10258 vol.3,\n2003.\n[12] S. E. Levinson. Continuously variable duration hidden\nmarkov models for automatic speech recognition.\nComputer Speech and Language, 1(1):2945, March\n1986.\n[13] T. Lin and H. J. Zhang. Automatic video scene\nextraction by shot grouping. Pattern Recognition,\n4:3942, 2000.\n[14] Z. Liu and Q. Huang. Detecting news reporting using\naudio/visual information. In International Conference\non Image Processing, pages 2428, Kobe, Japan,\nOctober 1999.\n[15] Mediaware-Company. Mediaware solution webflix\nprofessional V1.5.3, 1999.\nhttp://www.mediaware.com.au/webflix.html.\n[16] C. D. Mitchell and L. H. Jamieson. Modeling duration\nin a hidden markov model with the exponential\nfamily. In Proc. of IEEE Int. Conf. on Acoustics,\nSpeech, and Signal Processing, pages II.331II.334,\nMinneapolis, Minnesota, April 1993.\n[17] K. Murphy and M. Paskin. Linear-time inference in\nhierarchical HMMs. In T. G. Dietterich, S. Becker,\nand Z. Ghahramani, editors, Advances in Neural\nInformation Processing Systems, Cambridge, MA,\n2001. MIT Press.\n[18] M. R. Naphade and T. S. Huang. Discovering\nrecurrent events in video using unsupervised methods.\nIn Int. Conf. om Image Processing, volume 2, pages\n1316, Rochester, NY, USA, 2002.\n[19] D. Q. Phung. Probabilistic and Film Grammar Based\nMethods for Video Content Analysis. PhD thesis,\nCurtin University of Technology, Australia, 2005.\n[20] D. Q. Phung, H. H. Bui, and S. Venkatesh. Content\nstructure discovery in educational videos with shared\n19\n2 2 2 2 1 1 2 2 2 2 2 2 1 1 1 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2\n4 4 4 4 5 2 3 3 3 3 3 2 5 2 1 3 3 3 3 3 5 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 2 5 1 3 3 3\n1 1 2 3 3 3 2 2 2 2 3 3 3 3 3 2 2 2 2 3 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 2 2 2\n2 2 2 1 2 1 2 2 2 2 2 1 2 1 1 2 2 2 2 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 2 2 1 2 2 2\n2 2 2 1 1 1 2 2 2 2 1 1 1 1 1 2 2 2 2 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 2 2 2\nDetected Topic Transitions\nz\nt\nx\nt\nm\nt\n\nt\ne\nt\nmain body\nintro\n13\n2 2 2 2 1 1 2 2 2 2 2 1 1 1 1 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2\n4 4 4 4 5 2 3 3 3 3 3 2 5 2 1 3 3 3 3 3 5 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 2 5 1 3 3 3\n1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\nshot number\nz\nt\nx\nt\n\nt\ne\nt\nS\nH\nS\nM\nM\nH\nH\nM\nM\n39\nDetected Topic Transitions\n5\n21\nGround-Truth\nFigure 5: Example of Viterbi decoding for the S-HSMM and the HHMM for the first 45 shots of video `EESafety'. These\nresults should be read together with Figure-3 to see the semantics of the DBN structure.\nstructures in the hierarchical HMMs. In Joint Int.\nWorkshop on Syntactic and Structural Pattern\nRecognition, pages 11551163, Lisbon, Portugal,\nAugust 1820 2004.\n[21] D. Q. Phung and S. Venkatesh. 
Structural unit\nidentification and segmentation of topical content in\neducational videos. Technical report, Department of\nComputing, Curtin University of Technology, 2005.\nTR-May-2005.\n[22] D. Q. Phung, S. Venkatesh, and H. H. Bui.\nAutomatically learning structural units in educational\nvideos using the hierarchical HMMs. In International\nConference on Image Processing, Singapore, 2004.\n[23] D. Q. Phung, S. Venkatesh, and C. Dorai. High level\nsegmentation of instructional videos based on the\ncontent density function. In ACM International\nConference on Multimedia, pages 295298, Juan Les\nPins, France, 1-6 December 2002.\n[24] L. R. Rabiner. A tutorial on hidden markov models\nand selected applications in speech recognition. In\nProcs. IEEE, volume 77, pages 257286, February\n1989.\n[25] H. A. Rowley, S. Baluja, and T. Kanade. Neutral\nnetwork-based face detection. IEEE Transactions on\nPattern Analysis and Machine Intelligence,\n20(1):2338, January 1998.\n[26] K. Shearer, C. Dorai, and S. Venkatesh. Incorporating\ndomain knowlege with video and voice data analysis.\nIn Workshop on Multimedia Data Minning, Boston,\nUSA, August 2000.\n[27] J.-C. Shim, C. Dorai, and R. Bolle. Automatic text\nextraction from video for content-based annotation\nand retrieval. In International Conference on Pattern\nRecognition, volume 1, pages 618620, Brisbane,\nAustralia, August 1998.\n[28] C. G. Snoek and M. Worring. Multimodal video\nindexing: A review of the state-of-the-art. Multimedia\nTools and Applications, 2004. In Press.\n[29] H. Sundaram. Segmentation, Structure Detection and\nSummarization of Multimedia Sequences. PhD thesis,\nColumbia University, 2002.\n[30] H. Sundaram and S.-F. Chang. Computable scenes\nand structures in films. IEEE Transactions in\nMultimedia, 4(4):482491, 2002.\n[31] B. T. Truong. An Investigation into Structural and\nExpressive Elements in Film. PhD thesis, Curtin\nUniversity of Technology, 2004.\n[32] J. Vendrig and M. Worring. Systematic evaluation of\nlogical story unit segmentation. IEEE Transactions on\nMultimedia, 4(4):492499, 2002.\n[33] C. Wang, Y. Wang, H. Liu, and Y. He. Automatic\nstory segmentation of news video based on\naudio-visual features and text information. In Int.\nConf. on Machine Learning and Cybernetics,\nvolume 5, pages 30083011, 2003.\n[34] J. Wang, T.-S. Chua, and L. Chen. Cinematic-based\nmodel for scene boundary detection. In The Eight\nConference on Multimedia Modeling, Amsterdam,\nNetherland, 5-7 November 2001.\n[35] L. Xie and S.-F. Chang. Unsupervised mining of\nstatistical temporal structures in video. In\nA. Rosenfield, D. Doreman, and D. Dementhons,\neditors, Video Mining. Kluwer Academic Publishers,\nJune 2003.\n[36] L. Xie, S.-F. Chang, A. Divakaran, and H. Sun.\nLearning hierarhical hidden markov models for\nunsupervised structure discovery from video.\nTechnical report, Columbia University, 2002.\n[37] X. Zhu, L. Wu, X. Xue, X. Lu, and J. Fan. Automatic\nscene detection in news program by integrating visual\nfeature and rules. 
In IEEE Pacific-Rim Conference on\nMultimedia, pages 837842, Beijing, China, 2001.\n20", "keywords": "domain knowledge;Topic Transition Detection;A variety in directional styles;semantic relationship of neighborhood scenes;coxian switching hidden semi-markov model;natural hierarchical organization of videos;model educational video content;extended Hierarchical Hidden Markov Model;unified and coherent probabilistic framework;Educational Videos;shot-based semantic classification;their semantically shared substructures;topic transition detection;probabilistic framework;Coxian Switching Hidden semi-Markov Model;Coxian;Hierarchical Markov (Semi-Markov) Models;typical durations of important semantics;modeling temporal correlation;hierarchical hidden markov model"} {"name": "197", "title": "Towards Content-Based Relevance Ranking for Video Search", "abstract": "Most existing web video search engines index videos by file names, URLs, and surrounding texts. These types of video metadata roughly describe the whole video in an abstract level without taking the rich content, such as semantic content descriptions and speech within the video, into consideration. Therefore the relevance ranking of the video search results is not satisfactory as the details of video contents are ignored. In this paper we propose a novel relevance ranking approach for Web-based video search using both video metadata and the rich content contained in the videos. To leverage real content into ranking, the videos are segmented into shots, which are smaller and more semantic-meaningful retrievable units, and then more detailed information of video content such as semantic descriptions and speech of each shots are used to improve the retrieval and ranking performance. With video metadata and content information of shots, we developed an integrated ranking approach, which achieves improved ranking performance. We also introduce machine learning into the ranking system, and compare them with IRmodel (information retrieval model) based method. The evaluation results demonstrate the effectiveness of the proposed ranking methods.", "fulltext": "INTRODUCTION\nMultimedia search has become an active research field due to the\nrapid increase of online-available content and new practical\napplications. Search technology is considered the key to\nnavigating the Internet's growing media (video, audio and image)\ncollections. Google Yahoo, Blinkx and other search companies\nhave provided elementary video search engines. However,\nexisting video search engines are all based on the text information\nrelated to the video which can be retrieved from web pages, such\nas file names, URLs, and surrounding texts. These types of textual\ninformation can be considered as \"metadata\" of the video since\nthey only roughly describe the video. There is no doubt that text\nsearching is the most efficient way to retrieve information (even\nwhen searching for videos), because it well matches the manner of\nhuman thinking. However, only using metadata is far form\npeople's expectation in video searching, because even the best\ncase scenario, the metadata is only the highly concentrated\noverview of a video, with many losses on details.\nIn general, a video consists of many shots and sub-events with a\ntemporal main thread. The video should be segmented into smaller\nretrievable units that are directly related to what users perceive as\nmeaningful. Much research has concentrated on segmenting video\nstreams into \"shots\" using low level visual features [1]. 
Each\nsegment has its own scenes and meanings. In many cases, when\nusers query a video, they intend to find some desired clips in the\nvideo instead of viewing it thoroughly. However, this can seldom\nbe achieved by searching the surrounding text which is related to\nthe whole video.\nMuch content information can be used to search videos and\nshots. In content-based video retrieval systems, video shots can be\nclassified into or annotated by several semantic concepts. The\nmost substantial works in this field are presented in the TREC\nVideo Retrieval Evaluation (TRECVID) community [2]. In\naddition, speech is also significant information which has close\nconnection to video contents. Some videos are associated with\ntranscripts/closed captions which are provided by content provider.\nUsing ASR (automatically speech recognition) to generate speech\ntext is another practical solution.\nIn this paper, with the video metadata and content information\nof video shots, we index and rank the videos in a way similar to\ngeneral text-based web page search engines. The IR-model, which\nis widely employed in text information retrieval and web page\nsearch, will be applied to rank the search results by examining\nrelevance between query and indexed information (including both\nmetadata and content information). To fully utilize the content\ninformation and get a better ranking performance, we integrate the\n\"shot relevance\" into \"video relevance\". That is, the ranking is\ndecided not only by the relevance of video (metadata of the entire\nvideo), but also by all the relevant shots within the video.\nWe also apply learning based method to rank the search results\nbased on a set of features extracted from the corresponding query,\nvideo metadata, and content information.\nThe rest of this paper is organized as follows. Section 2\nintroduces the IR-model based ranking, including extraction of\nvideo metadata and content information, and a ranking method\nintegrating these two types of information. In section 3, a learning\nbased ranking approach is presented. Section 4 compares the\nranking performance evolution results, and Section 5 concludes\nthe paper.\n\nIR-MODEL BASED RANKING\nIn the traditional text retrieval and web page search, IR\n(Information Retrieval) models are usually used to evaluate the\nrelevance between a query and a document. BM25 [3] is one of\nthe frequently used evaluation methods. Given a query Q and a\ndocument D, the relevance between Q and D can be modeled as\nthe summation of the relevance between each query term (word) t\nin Q and D:\nand b are parameters. tf(t,D) is term frequency, means\nthe frequency of term t appears in document D. df(t) is document\nfrequency, means the frequency of document which contains term\nt within all documents. |D| stands for the length of document D.\nThe basic idea of this IR model can be explained as, if the\nquery term appears in document more frequently (higher tf), and\nthe query term is more unique that less documents contain it\n(lower df), the query will be more relevant to the document.\n2.2 Index and Rank the Video Information\nThe video data used in our experimental system are from MSN\nVideo (http://video.msn.com/), which contains 7230 videos.\n2.2.1 Metadata of the video\nBecause the videos in our data set are made by professional\ncontent provider, there is rich meta information that describes\neach entire video with brief text. 
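For later reference, the BM25 relevance computation of Section 2.1 can be sketched in its standard Okapi form as below; the exact variant of Equations (1)-(3) and the parameter values used in our experiments may differ in details such as the idf smoothing.

# Okapi-style BM25 over one indexed text field; k1 and b are the usual free parameters.
import math

def bm25(query_terms, doc_terms, df, n_docs, avg_len, k1=1.2, b=0.75):
    score = 0.0
    for t in set(query_terms):
        tf = doc_terms.count(t)                 # term frequency in this field
        if tf == 0 or t not in df:
            continue
        idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc_terms) / avg_len))
        score += idf * norm
    return score

# toy example: a two-word query against a short 'headline' field (df values invented)
df = {"space": 12, "shuttle": 7, "launch": 20}
print(bm25(["shuttle", "launch"], ["discovery", "launches", "space", "shuttle"],
           df, n_docs=7230, avg_len=6.0))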
Each video has the following\nmetadata fields: headline, caption, keywords, source, video URL,\nthumbnail image URL, related web page anchor text, page URL,\netc. Besides these types of textual information, some format\ninformation of the video, such as video length, frame size, bit rate,\nand creation date are also extracted. Some selected information\nfields of video metadata are listed in Table 1.\nTable 1. Video metadata\nField\nExample Value\nHeadline\nDiscovery launches\nCaption\nJuly 26: Watch the entire launch of space shuttle D...\nSource\nMSNBC\nKeywords\nTechnology, science, Space, Partner Codes ...\nVideo URL\nhttp://www.msnbc.msn.com/default.cdnx/id/871313...\nLink anchor\nMSNBC.com's Technology and Science front\nLink URL\nhttp://www.msnbc.msn.com/id/3032118\ndate\n7/26/2005 4:40:48 PM\nvideo length\n609.72 seconds\nFrame size\n320 x 240\nBit rate\n180 Kbps\nFor the videos contained in general web pages, some attributes\nmentioned above may not be obtained directly, but the\nsurrounding texts, URL, filename can be extracted as the metadata\nfields of the video.\nThese information fields correspond to document D in Section\n2.1. Different fields can be represented by different type of D (D\ni\n\nin Equation 4). The overall relevance can be calculated by the\nweighted summation of the relevance of all fields. The weight of\nthe fields (DW\ni\nin Equation 4) can be determined by their\nimportance, significance, and representativeness to the video.\n\n\n=\n)\n,\n(\n)\n,\n(\nQ\nD\nR\nDW\nQ\nVideo\nR\ni\ni\n(4)\nIn our system, four major information fields from video\nmetadata are selected to be indexed: headline, caption, keywords,\nand source. Headline is a highly representative description of the\nvideo content. Keywords are also good recapitulative terms. For\nthese two fields, higher weights are set. Caption is a more general\nand detail depicts for the video; Source provided a higher level\nand less relevant information, they will be set lower weights for\nranking. Table 2 gives out the weights of fields in our\nexperimental system.\nTable 2. Weights for relevance evaluation\nFields weight\nHeadline\n10\nKeywords\n10\nCaption\n5\nsource\n1\n2.2.2 Content information of the video shots\nThere is plenty of information in the visual/audio content of the\nvideo sequence, which can not be sufficiently presented by the\naforementioned textual video metadata. We can build a set of\nmodels that can be applied to automatically detect a corresponding\nset of concepts such that each video shot can be annotated with a\ndetection confidence score for each concept. Successful concept\nmodeling and detection approaches have been introduced in\nTRECVID, relying predominantly on visual/aural analysis and\nstatistical machine learning methods [4]. The LSCOM-lite\nLexicon [5] designed for the TRECVID 2005 Benchmark consists\nof more than 40 concepts spread across multiple concept-types\nsuch as object, events, site etc. Though the size of the lexicon is\nstill far from practical application for general Web-based video\nsearch, this semantic information is promising to enable real\ncontent-based video search, and therefore it is applied in our\nranking system.\nBesides visual contents, information from audio channel,\nespecially the speech, is also very useful for searching videos.. In\nour experimental system, we use Microsoft speech recognition\nengine (with a large vocabulary). 
This engine gives recognized\nwords with a start timestamp, length, and a recognition confidence\nvalue, which are very useful for later indexing and ranking. The\nspeech texts are allotted and assigned into video shots, according\nto the timestamp of words and video shots.\nThe content information is associated with individual video shot,\nwhich consist of semantic keywords (with corresponding detection\nconfidences), and speech words (with recognition confidences).\nThe confidences of words will act as weights of term frequencies\ntf to calculate the relevance in Equation (2).\n628\n2.3 Integrated Ranking with Metadata and\nContent Information\nTo combine metadata and content to rank the videos, we index the\nvideos by metadata and index the video shots by content\ninformation separately, and then integrated these two rank lists,\nnamed video list and shot list, to form a final ranking. The\nintegrated ranking returns search result by video, but taking all the\nrelevance shots within this video into consideration.\nFor video list, each item is a video. Let item\niv\n.vid denotes the\nvideo ID of the i\nth\nitem, item\niv\n.score denotes the ranking score of\nthe i\nth\nitem. For shot list, each item is a shot from a video. Let\nitem\nis\n.vid, item\nis\n.sid, item\nis\n.score donote the video ID which the\nshot belong to, the shot ID within the video, and the ranking score\nof the i\nth\nitem respectively.\nThe integrating process is presented in Algorithm 1. The basic\nidea is that, all the ranking score of the relevance shots within the\nvideo are accumulated to the ranking score of the video, with\ncorresponding weights. The relevant shots in the video will be\nhighlight when displaying the video as search result.\nnew a integrated result list (item denotes as item\niI\n)\nfor each item\niv\nin video list{\nnew item\nic\n;\nitem\niI\n.vid = item\nis\n.vid;\nitem\niI\nscore = item\niv\n. score * Weight_v;\nfor each item item\nis\nin shot list{\nif(item\niv\n.vid == item\nis\n.vid) {\nitem\niI\naddshot(item\nis\n.sid);\nitem\niI\nscore += item\nis\n. score * Weight_s;\nremove item\nis\nfrom shot list\n}\n}\nremove item\niv\nfrom video list\nadd item\niI\nto integrated list\n}\nadd the remaining video list and shot list into the integrated list\nsort the integrated list by item\niI\n.score.\n// Weight_v and Weight_s are weights for score accumulating\nAlgorithm 1. Generate integrated rank list.\nLEARNING BASED RANKING\nIR-model based ranking just consider some basic features such as\nterm frequency tf, document frequency df, and document length,\netc. In the learning based approach, more features are extracted\nfrom the query, metadata, and content information. 
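Returning briefly to Section 2.3 before detailing the learned features, Algorithm 1 can be restated as running code. The Weight_v and Weight_s values below are arbitrary illustrations; the algorithm leaves them as tunable accumulation weights.

# A runnable restatement of Algorithm 1 (integrating the video list and the shot list).
def integrate(video_list, shot_list, weight_v=1.0, weight_s=0.5):
    """video_list: [(vid, score)]; shot_list: [(vid, sid, score)].
    Returns [(vid, [relevant shot ids], integrated score)] sorted by score."""
    integrated, remaining_shots = [], list(shot_list)
    for vid, vscore in video_list:
        shots, score, still_left = [], vscore * weight_v, []
        for svid, sid, sscore in remaining_shots:
            if svid == vid:
                shots.append(sid)
                score += sscore * weight_s      # accumulate every relevant shot's score
            else:
                still_left.append((svid, sid, sscore))
        remaining_shots = still_left
        integrated.append((vid, shots, score))
    # shots whose video did not appear in the metadata-based list still enter the result
    integrated += [(svid, [sid], sscore * weight_s) for svid, sid, sscore in remaining_shots]
    return sorted(integrated, key=lambda item: item[2], reverse=True)

print(integrate([("v1", 2.0), ("v2", 1.0)],
                [("v1", 3, 1.5), ("v1", 7, 0.5), ("v3", 2, 4.0)]))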
To be clear,\nsuppose the query contains three terms \"a b c\", we compute the\nfollowing features from each document field:\nOrdered match: the frequency that both \"a\" and \"b\" appeared in\nthe indexed text, and \"b\" appears after \"a\".\nPartly exact match: the frequency that \"a b\" or \"b c\" appeared in\nthe indexed text.\nExact match: the frequency that \"a b c\" appeared in the indexed\ntext.\nQuery length: number of query terms\nFor the content information, each word has a confidence value,\nwe also consider:\nWeighted tf: Term frequency with confidence weighted,\nHigh confident match: query term match with words with high\nconfidence.\nHigh confident words: words with high confidence in the\nindexed text.\nSome non-textual, query-independent features, such as shot\nlength, video length, frame size, bit rate, etc, are also taken into\naccount.\nBy counting in the combinations of several document fields and\nquery terms (or part of query), we have about 50 dimensional\nfeatures in total for a query and search result to form a sample..\nThe GroundTruth of sample is the relevance judgments between a\nquery and a result, which is collected by a user labeling system\nintroduced in the next section.\n3.2 Neutral Network based Ranking\nTraditionally the learning algorithms are used for classification\nproblems. For ranking problems, one possible way is organize the\nsamples by pairs. Pair sample (x\n1\n, x\n2\n) are considered as a positive\nsample, if x\n1\nare ranked ahead of x\n2\n, and vice versa. The loss\nfunction is also formulated in pair wise. RankNET [6] is used in\nour implementation to train the ranking model and to validate the\nperformance.\nAbout half of the labeled data are used in training, and the\nsecond half are used for validation.\n\nEVALUATIONS\nTo evaluate the ranking performance of our proposed methods, we\ndeveloped a user labeling tool to collect some query-result\nrelevance judgments.\nThe video data set we used in our experiment includes news\nvideo, TV programs, movie trailers, advertisements, etc.\nAccording to the characteristics of the content of these videos, we\nselected some news related queries, such as hot events, hot place,\nand hot person names, to evaluate the ranking performance. For\neach query, we use the IR-model based ranking describe in\nSection 2 to generate a result list, and randomly select some\nresults form the list to label. Considering the labeling workload,\nfor each labeler and each query, 9 results are select from the list.\nTo make the selected query-result samples have a good uniformity\non distribution, 3 results are randomly selected from the first 1/3\npart of the list, 3 are from the second 1/3 part, and the other 3 are\nfrom the last 1/3 part. The order of these 9 selected results is\nshuffled and then provided to users to do relevance labeling.\nIn the labeling tool, for a query and a result, user can see all the\ninformation of the result, including the file format information\n(frame size, bit rate, video length, etc), description (headline,\ncaption, keywords), video thumbnail, video (in a video player),\nthumbnails of video shots, the speech text of the relevant shots.\nThe words matched with query terms are highlighted. See Figure\n1. If there are relevant shots in the result, the thumbnails of them\nare displayed with doubled size. The shot number, time\n629\ninformation, and the speech are also shown in the interface. 
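The graded judgments collected with this tool (described next) feed the pairwise sample organization of Section 3.2; the sketch below shows how labels for one query become ordered training pairs. The feature vectors are stand-ins for the roughly 50-dimensional vectors described above.

# Turning graded relevance judgments into ordered pairs for RankNet-style training.
from itertools import combinations

def make_pairs(judged_results):
    """judged_results: list of (feature_vector, grade) for one query.
    Returns (x_better, x_worse) pairs; ties generate no pair."""
    pairs = []
    for (xa, ga), (xb, gb) in combinations(judged_results, 2):
        if ga > gb:
            pairs.append((xa, xb))
        elif gb > ga:
            pairs.append((xb, xa))
    return pairs

judged = [([0.9, 0.1], 5), ([0.4, 0.7], 3), ([0.2, 0.2], 1)]
print(make_pairs(judged))   # three ordered pairs, the more relevant result first in each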
Users\nare asked to read the displayed information, browse the\nthumbnails, and play the video and shots (a tool button is provided\nto play from one shot) to give a relevance judgment from 1, 2, 3, 4,\nand 5, which represent bad, fair, good, excellent, and perfect,\nrespectively.\n\nFigure 1. Relevance labeling tool\nIn our experiment, ten users are invited to do labeling, and about\n2,000 relevance judgments of query-result samples are collected.\n4.2 Precision Performance of Ranking\nWe have conducted a comparison between the 4 approaches listed\nbelow:\nMR: Ranking only based on video metadata (Section 2.2.1).\nCR: Ranking only by content information (Section 2.2.2).\nRI: Integrated Ranking described in Section 2.3\nRN: RankNET based ranking described in Section 3.2.\nThe precision in top N of the rank lists of all the labeled queries\nis used to evaluate the performance of ranking method.\nN\ntop\nin\nresults\nlabeled\ntotal\nN\ntop\nin\nresults\nlabeled\nrelevant\nN\nPrecision\n=\n@\n(5)\nIn our implementation, the judgment Perfect or Excellent are\nconsidered as relevant results, while other judgments are treated as\nirrelevant results. The Presicion@N (N=1 to 5) of the 4 ranking\nmethods are shown in Table 3.\nFrom the results, we can see that:\n1) Precisions of MR are very low. Only using video metadata\nwill result in a poor performance, since details of the video\ncontent are ignored. The content information is more\neffective to search and rank video than metadata, as\nprecisions of CR are higher than that of MR.\n2) Precisions of RI are much higher than that of MR and CR.\nBy combining video metadata and content information, the\nperformance is significantly improved and reaches an\nacceptable level, which shows that content-based relevance\nranking is a promising approach.\n\n3) RN has a good performance, even better than RI. Comparing\nto IR-model based ranking, more features are included to\nlearning the relevance. The result implies that the learning\nmethod can organize the information for ranking in a more\neffective way.\nTable 3. Precision of the ranking approaches\nPrecision@?\n1 2 3 4 5\nMR\n0.305 0.326 0.299 0.259 0.228\nCR\n0.544 0.526 0.571 0.550 0.522\nRI\n0.796 0.727 0.684 0.634 0.606\nRN\n0.805 0.746 0.763 0.717 0.669\n\nDISCUSSIONS AND CONCLUSION\nWe have presented a novel content-based approach to rank video\nsearch results\n.\nIn addition to the video metadata, more detailed\ncontent information in the video is used to improve the relevance\nranking of video search results. The videos are segmented into\nshots, which can carry rich content information such as semantic\nconcept keywords and speech. With the video metadata and\ncontent information, we proposed an IR-model based ranking\nmethod and a learning-based ranking method. Evaluation of the\ntop ranked results shows that the proposed ranking methods have\nsignificantly improved performance comparing to the approach\nuse video metadata only, which is frequently used in existing web\nvideo search engines.\nIn future work, more types of content information can be\nintegrated into our ranking scheme, such as content-based quality\nmetric, user comments and rating for videos shared in web\ncommunities. Moreover, how to define effective semantic\nconcepts, i.e., video semantic ontology, that facilitate video\nsearching and ranking is also a challenging problem., which is\nalso one of our future works.\n\nREFERENCES\n[1] Hong-Jiang Zhang, A. Kankanhalli, and S. 
Smoliar,\n\"Automatic Partitioning of Full-motion Video,\" A Guided\nTour of Multimedia Systems and Applications, IEEE\nComputer Society Press, 1995.\n[2] http://www-nlpir.nist.gov/projects/trecvid\n[3] S. E. Robertson, S. Walker, and M. Beaulieu. Okapi at\nTREC7: automatic ad hoc, filtering, VLC and filtering\ntracks. In Proceedings of TREC'99.\n[4] M. Naphade, J.R. Smith, F. Souvannavong, \"On the\nDetection of Semantic Concepts at TRECVID,\" ACM\nMultimedia, ACM Press, New York, NY, pp. 660-667, Oct.\n10-16, 2004\n[5] M. Naphade, L. Kennedy, J.R. Kender, S.F. Chang, J.R.\nSmith, P. Over, A. Hauptmann, \"LSCOM-lite: A Light Scale\nConcept Ontology for Multimedia Understanding for\nTRECVID 2005,\" IBM Research Tech. Report, RC23612\n(W0505-104), May, 2005.\n[6] Chris Burges, et.al, \"Learning to Rank using Gradient\nDescent\", ICML 2005, Bonn, Germany, pp.89-96, August 7-11\n, 2005.\n\n630", "keywords": "video index, relevance ranking;content-based relevance ranking;video retrieval;metadata;learning based ranking;neutral network based ranking;Relevance ranking;content information;content-based approach;ranking method;integrated ranking;video metadata;IR-model;segmented;Content-based ranking;machine learning model;video segmentation;IR model based ranking;Video search;video search"} {"name": "198", "title": "Towards Reasonability Properties for Access-Control Policy Languages", "abstract": "The growing importance of access control has led to the definition of numerous languages for specifying policies. Since these languages are based on different foundations, language users and designers would benefit from formal means to compare them. We present a set of properties that examine the behavior of policies under enlarged requests, policy growth, and policy decomposition. They therefore suggest whether policies written in these languages are easier or harder to reason about under various circumstances. We then evaluate multiple policy languages, including XACML and Lithium, using these properties.", "fulltext": "INTRODUCTION\nAccess-control policies should not be write-only. Because\nthey govern both the containment and availability of critical\ninformation, they must be highly amenable to analysis by\nboth humans and by reasoning software such as verifiers.\nAn access-control policy dictates a function from requests\nfor access to decisions about whether or not to grant access\n. The competing requirements of expressive power and\ncomputational speed makes the design of policy languages a\ndelicate balancing act. Contemporary policy languages have\nlargely followed one of two routes. Some are based on logics,\nrestricting first-order logic (e.g., Lithium [9]) or augmenting\nDatalog (e.g., Cassandra [2]). Others are custom languages\nsuch as XACML [12] and EPAL [13], which behave roughly\nby rule-evaluation and do not depend on theorem-proving\ncapabilities to determine a response to a query.\nThe custom language approach often produces fairly limited\nlanguages. For example, to express hierarchical role-based\naccess-control (RBAC) [14] in XACML requires a\nfairly cumbersome encoding [1]. 
On the other hand, its more\ndirect request evaluation strategy suggests that policies written\nin XACML are more transparent than policies written\nin languages based on first-order logic (as we motivate in\nSection 2).\nHow, then, do we distinguish different policy languages?\nStudies of complexity and expressive power may ensure tractable\nverification and the ability to capture certain policies,\nbut do not directly classify the ease of reasoning about policies\nin a language. In this paper we take a step towards formalizing\nreasonability properties that make languages more\namenable to reasoning. We then apply these properties to\nactual policy languages.\nSuch properties are useful even\nwhen verification is computationally tractable because they\nprovide a guide to where and how to edit a policy for a\ndesired effect.\nConcretely, our properties study three main questions:\nhow decisions change as requests include more information,\nhow decisions change as policies grow, and how amenable\npolicies are to compositional reasoning. The last of these is\nespecially important for two reasons. First, organizations in-creasingly\nhave different divisions creating policy fragments\nthat must be combined into a whole while preserving the intent\nof each division; second, to mirror these use cases, and\nto scale better as policies grow in size, it becomes important\nfor analysis and verification tools to function modularly.\nThese properties codify our observations made while writing\nand studying policies for non-trivial systems. (We do\nnot, however, presume to make broad statements about the\nimpact of these properties for manual reasoning.) They are\nmeant to be descriptive rather than prescriptive: which ones\na language should satisfy depends on the context of its use.\nWe do expect these properties to help both language designers\nand policy authors, the former to set goals and the latter\nto evaluate languages.\nWe first motivate the work with an example. Section 3\npresents background on policy languages. Section 4 presents\nthe heart of our formalism. Section 5 applies this framework\nto XACML, and Section 6 to logical approaches such as\n160\nLithium. The remainder discusses related work and offers\nconcluding remarks.\nMOTIVATING EXAMPLE\nConsider the following natural-language policy:\n1\n1. If the subject is a faculty member, then permit that\nsubject to assign grades.\n2. If the subject is a student, then do not permit that\nsubject to assign grades.\n3. If the subject is not a faculty member, then permit\nthat subject to enroll in courses.\nWe might represent this policy as follows:\nfaculty(\ns) = Permit(s, grades, assign)\n(\np\n1\n)\nstudent(\ns) = Permit(s, grades, assign)\n(\np\n2\n)\nfaculty(s) = Permit(s, courses, enroll)\n(\np\n3\n)\nLet the above formalization be\np and the first line of the\npolicy be sub-policy\np\n1\n, the second\np\n2\n, and the third\np\n3\n.\nConsider the following natural-language request:\nA student requests to enroll in courses.\nAssume that requests list the subject, resource, and action\nby name if possible and by variable if the name is unknown,\nalong with any other known facts. In this representation,\nthe request becomes:\n(s, courses, enroll) with student(s)\n(\nq\n1\n)\nShould the policy grant access? At least three interpretations\nof the policy are possible:\n1.\np grants access due to p\n3\n. The request does not show\nthe subject being a faculty member; thus,\np\n3\napplies\nand\np produces the decision to permit access. 
This\nrelies on the assumption that since the request does\nnot show the subject being faculty, that the subject is\nin fact not faculty. One could drop this assumption.\n2. The policy does not apply to the request. One would\nreason that\np\n1\nand\np\n2\ndo not apply since they are dealing\nwith assigning grades and not enrolling in courses.\nFurthermore, one could conclude that\np\n3\ndoes not apply\nsince the request does not prove that the subject\nis not faculty. To do so, the request would have been\n(s, courses, enroll) with student(s) faculty(s)\nSince the policy does not apply to the request, the\nsystem should have and enact some default behavior.\n3. By reasoning different than that used in the first interpretation\n,\np could still grant the request. As in the\nsecond interpretation, one could conclude that the request\nalone fails to establish that the subject is not a\nfaculty member. However, if the subject were a faculty\nmember, then the first two lines together would\nyield a contradiction:\np\n1\nwould imply that the subject\ncould enroll in courses and\np\n2\nwould imply that the\nsubject could not. Thus, student-faculty members do\nnot exist. Since the subject of the request is clearly\na student, he must not be faculty member. Thus,\np\n3\napplies to grant access.\n1\nThis example is adapted from Halpern and Weissman [9].\nIn the first two interpretations the user may limit his reasoning\nto each sub-policy independent of one another. However\n, under the third interpretation (which, in fact, is the\none chosen by Halpern and Weissman), the user must reason\nabout all three sub-policies at once. Furthermore, under\nInterpretation 2, the user must reason about both positive\nand negative attributes, unlike under Interpretation 1.\nThese semantic differences drastically affect a reader's ability\nto comprehend policies. For example, Interpretation 3\nrequires both global analysis and demands rich reasoning\npower to deduce the contradiction. This paper formalizes\nthese differences and their burdens.\nBACKGROUND\nDespite the differences between access-control policy languages\n, we can still identify many common elements. First\nwe describe features common to most languages, and then\nwe discuss in detail two areas in which many languages differ\n: the available decisions and policy combinators.\n3.1\nCommon Features\nA policy language must provide a way of describing the\ndifferent forms of access and the environment in which they\ncould occur. This information forms a request. Many languages\nbreak requests into four different parts:\nSubject the person or process making the request,\nResource the object, subsystem, person, or process that\nthe subject would like to affect (e.g., a file name or a\nprocess id),\nAction the command or change that the subject would like\nto execute on the resource, and\nEnvironment describes any other relevant information including\nthe time of day, location, or the previous actions\nof the subject.\nThe first three make up the form of access requested while\nthe last gives the context in which this access would occur.\nEach of these parts lists attributes associated with its respective\ntopic. In some languages, the absence or negation of an\nattribute might also be explicitly listed (see Section 4.2).\nLanguages must also provide a set of decisions. Such a set\nmust include some decisions that grant access and some that\nprohibit access. 
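To ground the request structure just described and the Section 2 example, here is a minimal sketch (Python; the function names, the attribute-set encoding of requests, and the first-applicable combination of the sub-policies are illustrative assumptions, not part of any language studied here) of the policy p = (p1, p2, p3) evaluated on q1 under Interpretation 1, where an unlisted attribute is treated as absent, and Interpretation 2, where absence must be stated explicitly.

```python
# A minimal sketch of the Section 2 policy p = (p1, p2, p3). Requests carry a
# resource, an action, and the known subject attributes; implicit=True follows
# Interpretation 1 (unlisted means absent), implicit=False follows
# Interpretation 2 (absence must be stated explicitly).
PERMIT, DENY, NA = "permit", "deny", "not applicable"

def p1(resource, action, attrs, implicit):
    # faculty(s) => Permit(s, grades, assign)
    if (resource, action) == ("grades", "assign") and "faculty" in attrs:
        return PERMIT
    return NA

def p2(resource, action, attrs, implicit):
    # student(s) => not Permit(s, grades, assign)
    if (resource, action) == ("grades", "assign") and "student" in attrs:
        return DENY
    return NA

def p3(resource, action, attrs, implicit):
    # not faculty(s) => Permit(s, courses, enroll)
    not_faculty = ("faculty" not in attrs) if implicit else ("not faculty" in attrs)
    if (resource, action) == ("courses", "enroll") and not_faculty:
        return PERMIT
    return NA

def p(resource, action, attrs, implicit):
    # One possible reading: the first sub-policy that applies decides.
    for sub in (p1, p2, p3):
        d = sub(resource, action, attrs, implicit)
        if d != NA:
            return d
    return NA

# q1: a student requests to enroll in courses.
print(p("courses", "enroll", {"student"}, implicit=True))   # permit (Interpretation 1)
print(p("courses", "enroll", {"student"}, implicit=False))  # not applicable (Interpretation 2)
```

Interpretation 3 cannot be captured by such per-sub-policy evaluation, since it requires deduction across all three sub-policies at once; a corresponding entailment check appears with the first-order treatment in Section 6.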
A policy will associate with each request a\ndecision (or in the case of nondeterministic policies, a policy\nwill relate each request with some number of decisions).\nDefinition 3.1. An access-control policy language is a\ntuple\nL = (P, Q, G, N, ) with\nP a set of policies,\nQ a family of sets of requests indexed by policies,\nG the set of decisions that stipulate that the system should\ngrant access (granting decisions),\nN the set of decisions that stipulate that the system should\nnot grant access (non-granting decisions),\na function taking a policy p P to a relation between\nQ\np\nand G N,\nwhere G N = .\n161\nWhen clear from context, the above symbols will be ref-erenced\nwithout explicitly relating them to\nL, and D will\nrepresent G N. The function\ngives the meaning of\npolicy p, and we write q p d for p P , q Q\np\n, and d D,\nwhen p assigns a decision of d to the request q. If for a\nlanguage Q\np\n= Q\np\nfor all p and p in P , then we drop the\nsubscript on Q and treat it as a set of requests common to\nall policies. Given\nL define the partial order on D to be\nsuch that d d if either d, d N, d, d G, or d N and\nd G.\n3.2\nDecisions\nPolicy languages must provide decisions to indicate a pol-icy's\nintent to grant or not to grant a request. Some languages\nmight just provide two decisions: permit for granting\naccess and deny for not granting access. A policy in such a\nlanguage associates various subsets of requests with one of\nthese two decisions. For example,\np\n1\nexplicitly identifies a\nsubset of permitted requests and\np\n2\ngives denied requests.\nHowever, a policy might assign some requests to neither\npermit nor deny (e.g.,\nq\n1\nunder Interpretation 2 of\np). To err\non the side of safety, the policy language should provide for\nsuch requests a default decision that does not imply a grant\nof access creating a closed policy [10]. However, assigning\nthem the decision of deny may limit the ability to compose\npolicies. For example, while combining the policies of two\ndepartments, one would like to distinguish between those\nrequests that each department really would like to prohibit\nand those about which they do not care [3]. The decision of\nnot applicable serves this purpose.\nWith a decision of not applicable sufficing to prevent access\n, some languages elect not to include statements associating\nrequests with deny. This leaves only statements permitting\nsome set of requests. The uniformity of statements\nin such languages might make the policy easier to read and\ncompose (see Section 3.3). However, allowing for the explicate\ndenial of requests can quickly rule out exceptional cases\nand provides a means to determine when a policy does not\ngrant access by desire rather than by default.\nSome requests might not have a logical interpretation under\na given policy. For example, a request of\n(s, grades, assign) with faculty(s) student(s) (q\n2\n)\nunder Interpretation 3 of\np contradicts the policy itself. A\nrequest might even contain illogical values or require undefined\ncomputation (such as division by zero). For generality,\na system might like to assign a decision to such inputs rather\nthan excluding them from the set of requests and leaving the\npolicy undefined on them. In such cases, a decision of error\nor some refinement of it might be appropriate.\nOne may view the fact that an error state is reached given\na request to be a weakness in the policy. 
However, one may\nalso take it to be a statement about the world in which the\npolicy is to function: that no such requests may logically\nexist.\nError decisions can enforce these preconditions or\nassumptions that the policy has made.\n3.3\nPolicy Combinators\nThe policy of an organization often consists of the composition\nof policy fragments, or sub-policies, from a variety of\ninternal units (e.g., legal, accounting, and execute departments\nof a corporation).\nThus, policy languages provide\ncombinators to create a single policy from these fragments.\nUnder\np, the request given above (q\n2\n) is permitted by\np\n1\nbut denied\np\n2\n. The method used to combine the three sub-policies\nof\np into one policy determines how to resolve this\nconflict. Some languages, like the hypothetical language in\nwhich\np is written, might have only one policy combinator\nthat is implicitly applied. Other languages provide multiple\ncombinators. If a policy has sub-policies nested inside of\nsub-policies, the different layers may be resolved differently.\nSome of the possible policy combinators are:\nPermit Overrides If any of the sub-policies produces a\npermit, return only permit. Otherwise, if any produces\na deny, return only deny. Else, return not applicable.\nDeny Overrides If any of the sub-policies produces a deny,\nreturn only deny. Otherwise, if any produces a permit,\nreturn only permit. Else, return not applicable.\nFirst Applicable Return the decision reached by the first\nsub-policy to reach one other than not applicable.\nAll Seen Return a set containing the decisions reached by\nall the sub-policies.\nEither Permit or Deny Nondeterministically return one\nof the produced decisions.\nError Return a error if the sub-policies produces both permit\nand deny. Otherwise return permit if produced,\nor deny if produced. Else, return not applicable.\nAnd Conjoin the sub-policies together by logical And and\nreturn the implied decision(s).\nDe Capitani di Vimercati et al. [4] list additional combinators\n. The nature of the combinators available in a language\ncan greatly impact the clarity of policies written in it.\nNotice that many of the above combinators behave the\nsame in the absence of the decision of deny. One might conclude\nfrom this observation that allowing the explicit denial\nof a request is an undesirable complication in a language.\nTo formalize the role of policy combinators, let policies be\neither an atomic policy or a set of sub-policies combined by\nsome policy combinator. Let p be a policy that consists of\nsub-policies p\ni\nwith 1\ni n. Then p = (p\n1\n, p\n2\n, . . . , p\nn\n)\nrepresents the composition of the sub-policies using\n.\n2\nSince sub-policies are themselves policies, one may apply\nto them.\n3\nThe relationship between\n(p\n1\n, p\n2\n, . . . , p\nn\n)\nand meaning of each sub-policy p\ni\naffects the clarity of the\npolicy and is studied in the next section.\nPOLICY LANGUAGE PROPERTIES\nHaving formalized policy languages, we are now ready to\ndescribe properties of them.\n2\nWe assume that the set of combinators in a given language\nL = (Q, P, G, N, ) is clear from the structure of P and\n. If this is not the case for a language, one could explicitly\nadd it to the definition of an policy language.\n3\nSome languages may permit contextual information from\nenclosing policies to affect the meaning of the sub-policies.\nFor example, a language might have a notation of variable\nbinding. 
For such a language,\nmight be extended to\ntake a second argument that carries such contextual information\n. All the following definitions could be extended, e.g.,\nmonadically, to deal with such an extended\n.\n162\n4.1\nDeterminism and Totality\nDefinition 4.1. A language L = (P, Q, G, N, ) is deterministic\nif\np P, q Q\np\n, d, d D, q p d q p d = d = d\nFor a deterministic language, we can define a function\n\nwhich takes a policy p P and returns a function from Q\np\nto\nD as p . q . d D s.t. q p d. For a deterministic language,\nmay be given instead of to define the language. (We\nonly even mention nondeterministic languages due to the\nexistence of one: XACML with obligations.)\nDefinition 4.2. A language L = (P, Q, G, N, ) is total\nif\np P, q Q\np\n, d D s.t. q p d\nThe policies of total languages will always make a decision.\n4.2\nSafety\nUnder Interpretation 2, the request contained too little\ninformation to determine which of the sub-policies of\np applied\n. Interpretation 1 avoids such indecision by having requests\nimplicitly refute the presence of any attribute not\nlisted. These two interpretations produce different meanings\nfor statements like\nfaculty(s) found in p\n3\n. Under\nInterpretation 1,\nfaculty(s) holds if faculty(s) is not in\nthe request, while under the second, the request must explicitly\nlist\nfaculty(s) for it to hold. We call the former\nimplicit and the latter explicit.\nThe explicit approach permits distinguishing between unknown\ninformation and attributes known to be absent. The\nexplicit interpretation, however, incurs the cost of listing a\npossibly large set of absent attributes and can lead to indecision\nas shown above.\nSuch indecision, however, allows the system to recognize\nwhen the policy requires more information to yield a decision\n. In contrast, the implicit interpretation can grant undue\naccess. If, for example, a request does not list faculty(s)\nsimply because the system did not determine whether s was\na faculty member or not, then the system might erroneously\nallow s to enroll in courses. Thus, the sub-system producing\nrequests must be sure to include all the relevant facts in\neach request.\nFor large scale systems, collecting or even determining\nthe germane information might consume large amounts of\ntime. For such systems, the explicate approach might prove\nbetter since requests may leave out information safely and\nbe refined until the policy yields a decision. Furthermore,\noverzealous optimizations and other coding errors might result\nin the system producing requests that do not contain\nall the relevant facts.\nHaving a policy drive which information requests include\nallows for the system to collect only the information really\nneeded to reach a decision from the policy. Under this approach\n, the sub-system evaluating the policy starts with a\nrequest that contains only the readily available information.\nIf this sub-system needs additional information to reach a\ndecision from the policy, it requests the necessary additional\ninformation. Thus, the system does not need to know what\ninformation the policy requires at the time of generating the\ninitial request. This approach may allow for more efficient\nimplementations.\nOnce a datum has been published, it cannot easily be retracted\n. Therefore, preventing unwanted access is usually\npreferable to granting it. As a result, such incomplete requests\nshould only result in a grant of access if the complete\none would have. 
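Definition 4.3 below makes this precise. As a purely operational reading, one can probe a concrete policy function on pairs of inclusion-ordered requests and look for pairs where the smaller request is granted but the larger one is not. The following sketch (Python; the attribute-set encoding of requests and the helper names are assumptions for illustration) does exactly that.

```python
# A sketch of an empirical safety probe. Requests are modeled as attribute sets
# ordered by inclusion; decisions are compared by the granting/non-granting
# order, so a violation is a pair q <= q' where q is granted but q' is not.
from itertools import combinations

GRANTING = {"permit"}

def leq_decision(d1, d2):
    # d1 is below d2 unless d1 grants and d2 does not.
    return (d1 not in GRANTING) or (d2 in GRANTING)

def safety_violations(policy, requests):
    """Return pairs (q, q') with q a subset of q' whose decisions step down."""
    pairs = []
    for q, q2 in combinations(requests, 2):
        for small, big in ((q, q2), (q2, q)):
            if small <= big and not leq_decision(policy(small), policy(big)):
                pairs.append((small, big))
    return pairs

# Example: a policy that permits exactly when "faculty" is NOT listed is unsafe,
# since adding the faculty fact withdraws the grant.
toy = lambda attrs: "permit" if "faculty" not in attrs else "deny"
print(safety_violations(toy, [frozenset(), frozenset({"faculty"})]))
# [(frozenset(), frozenset({'faculty'}))]
```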
We can formally state this safety concern:\nDefinition 4.3. Let be a family of partial ordering on\nrequests of a language\nL = (P, Q, G, N, ) indexed by the\npolicies of\nL. L is safe with respect to\niff\np P, q, q Q\np\n, q\np\nq = p (q) p (q ).\nDue to differences in the contents of a requests, for each language\na different family of partial orderings\nwill interest\nusers. The relation should be such that if q\np\nq , then\nq contains all the information contained in q and possibly\nmore. Often one partial ordering may serve every policy.\nFor example, consider a language in which requests are\nsets of non-contradictory facts and the set of decisions is\n{permit, deny}. Then using the subset partial ordering for\np\n(for every policy p) will make sense since it matches the\nintuition of information content. If the language is safe with\nrespect to such a defined\n, then one may omit facts from\na request without causing undue access.\nInformally, in a safe language, undue access is impossible\nprovided that requests tell no lies; whereas, in an unsafe\nlanguage, the requests must additionally tell the whole truth.\nThe choice between these is a function of trust in the program\ngenerating requests, comprehensiveness of analysis to\ngenerate requests, efficiency, and so on. Nevertheless, the\nability to conclude, given a request that will yield access,\nthat all requests with more information will also yield access\n, can potentially be a great boon to policy reasoning.\nSome languages might choose to avoid the complications\nintroduced by a policy testing for the absence of an attribute\nall together. In some contexts, such as certificate passing\nsystems in which a certificate may be withheld, negated attributes\nmay not make sense.\n4\nIn such a context, requests\nwould not list negated attributes and the policy would not\ntest for the absence of an attributes at all.\n4.3\nIndependent Composition\nConsider the third interpretation of\np. Under this interpretation\n, the meaning of\np can only be determined by looking\nat the interactions of the different sub-policies as a whole.\nNotice that any one of these sub-policies would produce a\ndecision of not applicable in isolation, and yet together they\ninteract to produce a permit decision. The third interpretation\nthus inhibits the easy use of local reasoning to reach\nconclusions about the policy as a whole. This increases the\npossibility of unintended results from combining sub-policies\ninto a policy.\nThe alternative, as found in the first two interpretations,\nis for the sub-policies to be combined in such a way that\nonly the result of each in isolation matters. This property\nis formalized as follows:\nDefinition 4.4. A policy combinator of a language\nL = (P, Q, G, N, ) independently composes its sub-4\nOne may argue that certificate passing systems may use\nnegative certificates to achieve the checking of attribute absence\n. Whether this captures the notion of the absence of\nan attribute or just the presence of another related attribute\nis unclear. For example, one could conceivably hold both a\npositive and a negative certificate for an attribute.\n163\npolicies iff\np\n1\n, p\n2\n, . . . , p\nn\nP, i,\n1\ni n = Q\n(p\n1\n,p\n2\n,...,p\nn\n)\nQ\np\ni\n(1)\nand there exists a function\n: Q D\n\nD such that\np\n1\n, p\n2\n, . . . , p\nn\nP, q Q\n(p\n1\n,p\n2\n,...,p\nn\n)\n,\n(p\n1\n, p\n2\n, . . . , p\nn\n) (q)\n=\n(q)( p\n1\n(q), p\n2\n(q), . . . 
, p\nn\n(q)) (2)\nIf all the combinators of\nL independently compose, then L\nhas the independent composition property.\nThe first requirement forces a request defined for a policy\nto also be defined on each of its sub-policies. This is necessary\nfor the second requirement to be well defined. The\nsecond requirement ensures that one can determine the decision\nof the whole policy from the request and decisions of\nits sub-policies on that request; no other properties of the\nsub-policies matter.\nOne might alternatively be tempted to define independent\ncomposition thus:\nDefinition 4.5. A policy combinator of a language\nL = (P, Q, G, N, ) semantically composes its sub-policies\niff\n: (Q D)\n\n(Q D), p\n1\n, p\n2\n, . . . , p\nn\nP,\n(p\n1\n, p\n2\n, . . . , p\nn\n) =\n( p\n1\n, p\n2\n, . . . , p\nn\n)\n(3)\nIf all the combinators of\nL semantically compose, then L has\nthe semantic composition property.\nSemantic composition ensures that all sub-policies with the\nsame meaning in isolation will behave the same under the\ncombinator.\nA language with the semantic composition\nproperty is arguably more clear than one without it, since\nonly the isolated meaning of the sub-policy must known to\nreason about its use under the combinator.\nTheorem 4.6. If a policy combinator of an policy language\nL has independent composition, then it has semantic\ncomposition.\nProof. To prove that has semantic composition,\n:\n(Q D)\n\n(Q D) required for Equation 3 will be\nconstructed from the\n: Q D\n\nD known to exist\nsince\nindependently composes. Let\n(f\n1\n, f\n2\n, . . . , f\nn\n) = q . (q)(f\n1\n(q), f\n2\n(q), . . . , f\nn\n(q))\nThen\n( p\n1\n, p\n2\n, . . . , p\nn\n)\n= q . (q)( p\n1\n(q), p\n2\n(q), . . . , p\n3\n(q))\n= q . (p\n1\n, p\n2\n, . . . , p\nn\n) (q)\n=\n(p\n1\n, p\n2\n, . . . , p\nn\n)\nTheorem 4.7. The semantic composition of a policy combinator\ndoes not imply that it independently composes.\nProof. Consider a rather odd language that has only one\nunary policy combinator,\n, atomic policies that are sets of\nvalues, G = {permit}, N = {deny}, and requests that are\nsets of values. Let the semantics be\n(p\n1\n) = q .\n(permit p\n1\n(\n{v }) = permit\ndeny\np\n1\n(\n{v }) = deny\nfor some distinguished value v , and for atomic policies p,\np (q) equals permit iff p q = and equals deny otherwise.\nThe language has semantic composition: for\nsuch that\n(f\n1\n) =\n(permit f\n1\n(\n{v }) = permit\ndeny\nf\n1\n(\n{v }) = deny\nclearly,\n(p\n1\n) =\n( p\n1\n).\nTo show that the language does not have independent\ncomposition, assume that it does. Then there exists such a\n: Q D\n\nD to satisfy Equation 2. Let p\n1\n=\n{v} and\np\n2\n=\n{v, v } for some value v such that v = v . Then,\ndeny = ({v}) ({v})\n=\n(\n{v})( {v} ({v})) = ({v})(permit)\n=\n(\n{v})( {v, v } ({v})) = ({v, v }) ({v}) = permit\nA contradiction is reached since permit = deny.\nOnly with independent composition can a policy reader\nwith a specific request in mind know the decision of the\nwhole policy from each of the component policies. This enables\na reader to ask what-if questions like \"What if Bob\nrequests to write the log?\" and determine the answer from\nrecursively asking that question of the sub-policies. Such an\nability is particularly helpful to readers interested in only a\nsubset of the possible requests or already familiar with some\nof the sub-policies.\n4.4\nMonotonicity\nAs noted at the end of Section 3.3, the decision of deny\ncomplicates the policy combinators. 
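Before looking at that complication, the sketch below (Python; names are illustrative) writes three of the Section 3.3 combinators in exactly the shape Definition 4.4 asks for: a function of the tuple of decisions the sub-policies reach on the request (these particular combinators happen not to need the request itself, although Definition 4.4 permits it). It also previews how appending a decision can flip Deny Overrides from granting to non-granting.

```python
# Three combinators from Section 3.3 as functions of the sub-policy decisions
# alone, i.e., candidates for the "hat" function of Definition 4.4.
PERMIT, DENY, NA = "permit", "deny", "not applicable"

def permit_overrides(decisions):
    if PERMIT in decisions:
        return PERMIT
    if DENY in decisions:
        return DENY
    return NA

def deny_overrides(decisions):
    if DENY in decisions:
        return DENY
    if PERMIT in decisions:
        return PERMIT
    return NA

def first_applicable(decisions):
    for d in decisions:
        if d != NA:
            return d
    return NA

# Appending one more sub-policy decision can move deny_overrides from granting
# to non-granting, the back-and-forth pattern behind the monotonicity property.
print(deny_overrides([PERMIT]))          # permit
print(deny_overrides([PERMIT, DENY]))    # deny
print(permit_overrides([PERMIT, DENY]))  # permit: a grant cannot be lost here
```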
One of the reasons for\nthis is that, under combinators like Deny Overrides, a back-and\n-forth pattern can arise when considering the decision of\nthe whole policy from the sub-policies. Consider each sub-policy\nin\np with the request q\n2\n. Under a reasonable interpretation\np\n1\nyields a decision of permit,\np\n2\na decision of deny,\nand\np\n3\nnot applicable. Thus, if the order of\np was changed\nto\np\n3\n,\np\n1\n,\np\n2\nand we assume a Deny Overrides policy combinator\n, the apparent decision would go from non-granting\nto granting to non-granting.\nNote that Permit Overrides does not exhibit this pattern\nsince it is impossible to go from a granting decision to a\nnon-granting one under it. Thus, the formalization of this\npattern focuses on the transition from a granting to a non-granting\ndecision.\nDefinition 4.8. A policy combinator of a language\nL = (P, Q, G, N, ) is monotonic iff\np\n1\n, . . . , p\nn\n, p P, q Q ,\n(p\n1\n, . . . , p\nn\n) (q) (p\n1\n, . . . , p\ni\n, p , p\ni+1\n, . . . , p\nn\n) (q)\nwhere Q = Q\n(p\n1\n,...,p\nn\n)\nQ\n(p\n1\n,...,p\ni\n,p ,p\ni+1\n,...,p\nn\n)\n. We say\nL is monotonic if every combinator is monotonic.\nAdding another sub-policy to a monotonic combinator cannot\nchange the decision from granting to non-granting.\nHaving motivated and established these criteria, we now\napply them to concrete access control languages.\n164\nCORE XACML\nIn its entirety, XACML [12] exceeds the bounds of the\ndefinitions given in Section 3. Full XACML includes obligations\n, which act as annotations on the decisions of permit\nand deny. These annotations specify actions that the system\nenforcing the access controls must preform before granting\naccess or upon prohibiting access. Thus, an XACML policy\nmay have effects beyond just granting or prohibiting access\nthat the model presented fails to address.\nHandling all of XACML is beyond the scope of this paper\n. For illustrative purposes, we employ a formalized subset\nof XACML, which we will call Core XACML (CXACML),\nwhich corresponds to the input of the tool Margrave [7, 8].\nThis subset is expressive enough to capture RBAC\n0\n[14].\n5.1\nSyntax\nCXACML has two syntaxes: one for policies and one for\nrequests. We present the policy syntax first, with the start\nnon-terminal P. For syntactic brevity, we use a Lisp-like\nparenthetical syntax in place of XML notation.\nP\n::=\n(Policy C T P\n\n)\n| (Rule T F)\nC\n::=\nFirstApp\n| DenyOver | PermitOver\nT\n::=\n( (L\n\n) (L\n\n) (L\n\n) (L\n\n) )\nL\n::=\n(A\n+\n)\nA\n::=\n(id val)\nF\n::=\nPermit\n| Deny\nThose policies formed by using solely the right choice of\nthe production rule for P are called rules. XACML does not\nconsider rules to be policies. However, since the semantics\nassigned to rules allows them to behave as policies, we will\nconsider them policies. The elements of the syntax category\nT are called targets. The four parts of the target are the\nrequirements placed on the subject, resource, action, and\nenvironment, respectively, for the policy to apply to a request\n. The non-terminals id and val are strings representing\nthe attribute IDs and values.\nExample 5.1. The following is a CXACML policy:\n(Policy FirstApp\n((()) (((name log))) (()) (()))\n(Rule (((role dr)) (()) (()) (())) Deny)\n(Rule ((()) (()) (()) (())) Permit))\nThis policy permits all requests for access to a log except\nthose made by doctors, which it denies. 
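As a quick operational reading of Example 5.1, the simplified evaluator below (Python; illustrative only, not a CXACML implementation, and hard-coded to this one policy shape while anticipating the match relation and First Applicable semantics given in Section 5.2) reproduces that behavior. It also previews the safety failure of Theorem 5.3: dropping the (role dr) fact from a doctor's request flips the decision from deny to permit.

```python
# A request is four attribute sets (subject, resource, action, environment); a
# target section matches if at least one of its attribute groups is contained in
# the corresponding request set, and an empty group matches everything.
PERMIT, DENY, NA = "permit", "deny", "not applicable"

def matches(target, request):
    return all(any(group <= req_part for group in section)
               for section, req_part in zip(target, request))

ANY = [set()]  # the target section (()) places no requirement

# (Policy FirstApp ((()) (((name log))) (()) (()))
#   (Rule (((role dr)) (()) (()) (())) Deny)
#   (Rule ((()) (()) (()) (())) Permit))
policy_target = [ANY, [{("name", "log")}], ANY, ANY]
rules = [([[{("role", "dr")}], ANY, ANY, ANY], DENY),
         ([ANY, ANY, ANY, ANY], PERMIT)]

def evaluate(request):
    if not matches(policy_target, request):
        return NA
    for rule_target, effect in rules:      # First Applicable
        if matches(rule_target, request):
            return effect
    return NA

doctor = [{("role", "dr")}, {("name", "log")}, set(), set()]
partial = [set(),           {("name", "log")}, set(), set()]
print(evaluate(doctor))   # deny
print(evaluate(partial))  # permit: omitting the (role dr) fact grants access,
                          # the safety failure exhibited in Theorem 5.3
```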
In detail, the policy\nis composed of two sub-policies using the combinator First\nApplicable and applies only to requests where the resource\nhas the name of log. The first sub-policy denies requests\nwhere the subject has the role of dr regardless of the resource,\naction, or environment. The second permits all requests.\nThe syntax for requests is (with start non-terminal Q):\nQ\n::=\n( (A\n\n) (A\n\n) (A\n\n) (A\n\n) )\nThey simply list the attributes possessed by the subject,\nresource, action, and environment in turn.\n5.2\nSemantics\nIn the following natural semantics, we will use the convention\nthat a lower-case letter represents an element of the set\nor syntactic category represented by the upper-case equivalent\n. For example, P is the set of all policies and p is a policy.\nLet D be the set of all decisions (D = {permit, deny, na}).\nThe core of the semantics of CX ACML compares requests\nto targets. We will denote this relation by qt for request\nq and target t. The natural semantics of Table 1 defines .\nNext we define\n. Table 2 gives the result of evaluating\nrules. The following tables defines\nover policies. Table 3\ndeals with two cases where a policy does not apply to a\nrequest.\nFinally, we must define the policy combinators:\nPermit Overrides in Table 4, Deny Overrides in Table 5,\nand First Applicable in Table 6.\n5.3\nAnalysis\nThe syntax and semantics of CXACML defines\nL\nCXACML\n=\n(P, Q, G, N, ). The syntax determines P and Q where\nthe same set of requests is used for every policy (and thus,\nwe treat Q as a set of requests). From the semantics, G =\n{permit}, N = {na, deny}. CXACML allows for explicit\ndenials and the checking of the implicit absence of attributes.\nTheorem 5.2. L\nCXACML\nis deterministic.\nProof. Inspection of the inference rules for atomic policies\n(Table 2) shows that only one of them can hold at a\ntime. Thus, atomic policies are deterministic.\nTable 4 combined with Table 3 gives the semantics of the\npolicy combinator Permit Overrides.\nThe antecedents of\nall these inference rules are disjoint, that is, at most one\nthem can hold for any policy and request. Thus, Permit\nOverrides is deterministic. The same argument holds for\nDeny Overrides and First Applicable using Tables 5 and 6.\nThus, all the combinators are deterministic.\nThus, one may view a CXACML policy as a function from\nrequests to decisions with\nin place of . Further inspection\nestablishes that\nis a total function.\nFor two requests q = (s r a e) and q = (s r a e ), let\nq\np\nq (for every policy p) if s\ns , r\nr , a\na , and\ne\ne where\nis defined in Table 1.\nTheorem 5.3. CXACML is not safe with respect to .\nProof. Consider the policy p shown in Example 5.1 and\nthe requests q = (() ((name log)) () ()) and\nq = (((role dr)) ((name log)) () ()). Clearly q\np\nq .\nYet p (q) = permit\ndeny = p (q )\nTheorem 5.4. L\nCXACML\nhas independent composition.\nProof. A combination algorithm c and target t together\ndetermine a policy combinator. For each pair of values for c\nand t, the needed function\nt\nc\n: Q D\n\nD exists to provide\nthe meaning of the policy (Policy c t p p\n\n) and satisfy\nEquation 2. 
For Permit Overrides (when c = PermitOver,\nor PO for short), the function\nt\nPO\n(q)(d d\n\n) is equal to\nna\nif\nq /\nt\npermit\nelse if\nd = permit\nt\nPO\n(q)(d\n\n) = permit\ndeny\nelse if\nd = deny\nt\nPO\n(q)(d\n\n) = deny\nna\notherwise\nThe function\nt\nDO\n(q)(d d\n\n) for Deny Overrides is equal to\nna\nif\nq /\nt\ndeny\nelse if\nd = deny\nt\nDO\n(q)(d\n\n) = deny\npermit\nelse if\nd = permit\nt\nDO\n(q)(d\n\n) = permit\nna\notherwise\nFor First Applicable,\nt\nFA\n(q)(d d\n\n) is\n165\n(\na\n1\n)\n(l\n\n1\n)\n(\na\n2\n)\n(l\n\n2\n)\n(\na\n3\n)\n(l\n\n3\n)\n(\na\n4\n)\n(l\n\n4\n)\n((\na\n1\n) (\na\n2\n) (\na\n3\n) (\na\n4\n))\n((l\n\n1\n) (\nl\n\n2\n) (\nl\n\n3\n) (\nl\n\n4\n))\ni s.t. l\ni\n(\na\n\n)\n(\na\n\n)\n(l\n1\nl\n2\n. . . l\nn\n)\ni j s.t. a\ni\n= a\nj\n(\na\n1\na\n2\n. . . a\nn\n)\n(\na\n1\na\n2\n. . . a\nm\n)\nTable 1: The Match Relationship\nq /\nt\nq (Rule t f) na\nqt\nq (Rule t Permit) permit\nqt\nq (Rule t Deny) deny\nTable 2:\non Rules\nq (Policy c t) na\nq /\nt\nq (Policy c t p\n\n)\nna\nTable 3: Default na Inference Rules\nqt\ni s.t. q p\ni\npermit\nq (Policy PermitOver t p\n1\np\n2\n. . . p\nn\n)\npermit\nqt\ni s.t. q p\ni\ndeny\nj, (q p\nj\npermit)\nq (Policy PermitOver t p\n1\np\n2\n. . . p\nn\n)\ndeny\nqt\ni, q p\ni\nna\nq (Policy PermitOver t p\n1\np\n2\n. . . p\nn\n)\nna\nTable 4: Inference Rules for Permit Overrides\nqt\ni s.t. q p\ni\ndeny\nq (Policy DenyOver t p\n1\np\n2\n. . . p\nn\n)\ndeny\nqt\ni s.t. q p\ni\npermit\nj, (q p\nj\ndeny)\nq (Policy DenyOver t p\n1\np\n2\n. . . p\nn\n)\npermit\nqt\ni, q p\ni\nna\nq (Policy DenyOver t p\n1\np\n2\n. . . p\nn\n)\nna\nTable 5: Inference Rules for Deny Overrides\nqt\nq p\n1\npermit\nq (Policy FirstApp t p\n1\np\n2\n. . . p\nn\n)\npermit\nqt\nq p\n1\ndeny\nq (Policy FirstApp t p\n1\np\n2\n. . . p\nn\n)\ndeny\nqt\nq p\n1\nna\nq (Policy FirstApp t p\n2\n, . . . , p\nn\n)\nd\nq (Policy FirstApp t p\n1\np\n2\n. . . p\nn\n)\nd\nTable 6: Inference Rules for First Applicable\nna\nif\nq /\nt\nd\nelse if\nd = permit d = deny\nt\nFA\n(q)(d\n\n)\notherwise\nwhere\nt\nPO\n(\n) =\nt\nDO\n(\n) =\nt\nFA\n(\n) = na for the empty sequence\n.\nTheorem 5.5. L\nCXACML\nis not monotonic.\nProof. Consider the policy p in Example 5.1 and the\npolicy p that would be p without the first rule. Let the request\nq be (((role dr)) ((name log)) () ()). p (q) =\npermit, but p (q) = deny. Thus, adding a rule to p results\nin a request going from being granted to not being\ngranted.\nADAPTATIONS OF FIRST-ORDER LOGIC\nWhereas XACML is an attempt to create a policy language\nfrom whole cloth, other languages are adaptations\nof first-order logic.\nHalpern and Weissman present several\nschemata for such languages [9]. Here we present and\nanalyze the languages produced by two of their schemata,\nLithium and\nL\n5\n.\nFor the ease presentation, first, we define the language\nschemata FOL, a more readily identifiable restriction of first-order\nlogic. To ensure efficiency (and decidability!), the languages\nof\nL\n5\nand Lithium use additional context-sensitive\nconstraints to restrict FOL. We discuss these restrictions\nafter giving a semantics to FOL. (The semantics of L\n5\nand\nLithium will be the same as that of FOL, restricted to subsets\nof the language.)\nThe schemata FOL is a restriction of many-sorted first-order\nlogic. Each language of FOL corresponds to giving\nthe logic a different vocabulary (the parameters including\nquantifier symbols, predicate symbols, constant symbols,\nand function symbols). 
We assume that includes the sorts\nS for subjects, R for resources, A for actions, and the predicate\nsymbol Permit of the sort S R A {T, F}.\n5\n\nmay also include sorts to represent environmental data such\nas the current time and location.\n6.1\nSyntax of\nFOL\nA standard policy under the vocabulary is an expression\nwith one of the following forms:\n(\n\ny\n1\nx\n1\n,\n. . .,\ny\nm\nx\nm\n(\n1\n. . .\nn\nPermit(s, r, a)))\n(\n\ny\n1\nx\n1\n,\n. . .,\ny\nm\nx\nm\n(\n1\n. . .\nn\nPermit(s, r, a)))\nwhere each x\ni\nnames a variable over the sort identified by\ny\ni\n, s is a term over the sort S, r is a term over the sort R,\na is a term over the sort A, and each\nj\nis a literal over\nthat may include the variables x\n1\n, . . . , x\nm\n.\nThe policies of the language FOL() are the standard policies\nunder and conjunctions of policies:\nP ::= StandardPolicy | (and P\n\n)\nExample 6.1. Let the vocabulary contain\n1. the sorts S = {amy, bob, joe},\nR = {grades, courses}, and A = {assign, enroll};\n5\nHalpern and Weissman treat the Permit predicate as taking\ntwo arguments, a subject and a resource-action, instead\nof three.\n166\n2. the predicates Permit : S R A {T, F}, faculty :\nS {T, F}, and student : S {T, F}.\nFOL( ) includes the following policy:\n(and (\n\nS\nx (faculty(x)\nPermit(x, grades, assign)))\n(\n\nS\nx (student(x)\nPermit(x, grades, assign)))\n(\n\nS\nx (\nfaculty(x) Permit(x, courses, enroll))))\nwhere S identifies the sort S. As the semantics will soon\nshow, this policy has the same meaning as policy\np from\nSection 2 does under Interpretation 3.\nThe requests of FOL() have the form (s, r, a, e) where\ns S is the subject making the request; r R is the requested\nresource; a A is the action the subject would like\nto preform on the resource; and e is a conjunction of ground\nliterals and universal formulas of the form\n\ny\n1\nx\n1\n,\n. . . ,\ny\nm\nx\nm\n(\n1\n. . .\nn\n\nn+1\n)\nwhere each x\ni\nnames a variable over the sort identified by y\ni\nand each\ni\nis a literal over that may include the variables\nx\nl\n, . . . , x\nm\n. The expression e provides information about s,\nr, a, and the environment.\nExample 6.2. The four-tuple\n(bob, courses, enroll,\nstudent(bob)\nfaculty(amy) student(amy))\nis a request of FOL( ) where is defined in Example 6.1.\n6.2\nSemantics of\nFOL\nThe semantics of a policy follows from interpreting it as a\nformula in many-sorted first-order logic. The policy combinator\nand becomes conjunction. The standard policies and e\nare interpreted as the corresponding logic formulas. 
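The decision relation defined next is entailment-based. The sketch below (Python, assuming the third-party z3-solver package; all identifiers are illustrative) grounds the quantified sub-policies of Example 6.1 over the finite sort S and checks the request of Example 6.2 by testing unsatisfiability, reproducing the Interpretation 3 reasoning of Section 2; only the student(bob) conjunct of that request's environment is needed for the query.

```python
# The entailment-based reading: p and e prove Permit(s, r, a) exactly when
# p /\ e /\ not-Permit(s, r, a) is unsatisfiable. The quantified policies of
# Example 6.1 are grounded over the finite sort S = {amy, bob, joe}.
from z3 import Bool, Solver, Implies, Not, unsat

subjects = ["amy", "bob", "joe"]
faculty = {s: Bool(f"faculty_{s}") for s in subjects}
student = {s: Bool(f"student_{s}") for s in subjects}
assign_grades = {s: Bool(f"Permit_{s}_grades_assign") for s in subjects}
enroll_courses = {s: Bool(f"Permit_{s}_courses_enroll") for s in subjects}

policy = []
for s in subjects:
    policy += [Implies(faculty[s], assign_grades[s]),        # first sub-policy
               Implies(student[s], Not(assign_grades[s])),   # second sub-policy
               Implies(Not(faculty[s]), enroll_courses[s])]  # third sub-policy

# The conjunct of Example 6.2's environment that drives the query below.
e = student["bob"]

def entails(premises, goal):
    solver = Solver()
    solver.add(*premises)
    solver.add(Not(goal))
    return solver.check() == unsat

print(entails(policy + [e], enroll_courses["bob"]))       # True: Permit is provable
print(entails(policy + [e], Not(enroll_courses["bob"])))  # False: its negation is not
```

The two checks correspond to the permit case of the relation defined next: Permit(bob, courses, enroll) is provable from the policy and the environment, while its negation is not.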
A policy\np defines a relation p between requests and {permit, deny}\nas follows:\n(\ns, r, a, e) p permit\niff\np e\nPermit(\ns, r, a)\n(\ns, r, a, e) p deny\niff\np e\nPermit(s, r, a)\nwhere\nis interpreted as the standard \"proves\" relation for\nmany-sorted first-order logic over .\nTo define a deterministic and total version of FOL, we\nexpand the set of decisions to D = {na, permit, deny, error}\nand define p ((s, r, a, e)) to be\nerror\nif pe\nPermit(\ns, r, a) and pe Permit(s, r, a)\npermit if pe\nPermit(\ns, r, a) and pe Permit(s, r, a)\ndeny if pe\nPermit(\ns, r, a) and pe Permit(s, r, a)\nna\nif pe\nPermit(\ns, r, a) and pe Permit(s, r, a)\nSince a policy composed of sub-policies, each composed of\nstandard policies, is semantically equivalent to a policy composed\nof all the standard policies without the intermediate\nsub-policies, we will henceforth treat all policies as either a\nstandard policy or a conjunction of standard policies.\n6.3\nAnalysis of\nFOL\nThe language FOL() defines the deterministic and total\npolicy language (P, Q, G, N, ). The syntax determines P\nand Q where Q may be treated as a set of requests since for\nall policies p and p , Q\np\n= Q\np\n. The semantics requires that\nG = {permit} and N = {na, deny, error}. The languages of\nFOL has the policy combinator and. FOL allows for explicit\ndenials and checking for the explicit absence of attributes.\nGiven two requests q = (s, r, a, e) and q = (s , r , a , e ),\nif s = s , r = r , or a = a , we consider the two requests incomparable\n. If s = s , r = r , and a = a , then we would like\nto order requests according to their information content.\nOne might conclude that q\np\nq if e\n=\ne. However,\nsuppose e = , where is logical contradiction. Then\ne contains no information and yet it implies e. Similarly, if\np e =\n, then e contains no information with respect\nto p. Thus, we define\np\nas follows:\nLet (s, r, a, e)\np\n(\ns , r , a , e ) iff\n1. s = s , r = r , and a = a ; and\n2. p e implies p e but not , or p e implies .\nTheorem 6.3. FOL() is safe with respect to\nfor any\nvocabulary .\nProof. Assume FOL() is not safe. Then there must\nexist p P and q, q\nQ such that q\np\nq and p (q)\np (q ). Let q = (s, r, a, e) and q = (s, r, a, e ). Since\npermit is the only granting decision, p (q) = permit and\nthus p e\nPermit(\ns, r, a). Since N = {na, deny, error},\np (q ) must be either na, deny, or error.\nSince q\np\nq , two cases arise:\n1. pe implies pe but not : Since pe = pe and\npe\nPermit(\ns, r, a), pe\nPermit(\ns, r, a). Thus,\np (q ) is either permit or error. However, if p (q ) =\nerror, then pe = , a contradiction. Furthermore,\np (q ) = permit /\nN is also a contradiction.\n2. p e implies : In this case, p (q) = error = permit,\na contradiction.\nWe can thus conclude that FOL() must be safe.\nTheorem 6.4. FOL() does not have independent composition\nfor some .\nProof. Consider the policy p\na\n:\n(and (\n\nS\nx, student(x)\nPermit(x, log, read))\n(\n\nS\nx,\nstudent(x) Permit(x, log, read)))\nthe policy p\nb\n:\n(and (\n\nS\nx, student(x)\nPermit(x, log, edit))\n(\n\nS\nx,\nstudent(x) Permit(x, log, edit)))\nand request q = (bob, log, read, T). On q, p\na\nproduces the\ndecision of permit while its sub-policies yield na. However, p\nb\nproduces the decision of na while its sub-policies also yield\nna on q. Thus, the required function\nand\nwould have to\nsatisfy\npermit =\nand\n(q)(na na) = na\nA contradiction, and hence\nand\ncannot exist.\nTheorem 6.5. 
FOL() is not monotonic for some .\nProof. Consider the policy p\nc\n:\n(and (\n\nS\nx, student(x)\nPermit(x, log, read))\n(\n\nS\nx, student(x)\nPermit(x, log, read)))\nwith and without the second sub-policy, and the request\n(bob, log, read, student(bob)). In the absence of the second\nsub-policy, the decision is permit, whereas p\nc\nproduces\nerror.\n167\n6.4\nAnalysis of Lithium\nHalpern and Weissman restrict FOL to create the language\nthey dub Lithium.\n6\nA slightly modified form follows.\nLithium relies heavily on the notion of \"bipolarity\". A\nliteral\nof f is labeled bipolar in f relative to the equality\nstatements in e if the following holds: there exists a term\nin f and variable substitutions and such that it follows\nfrom e that = .\nLithium also makes use of the notion of equality-safety.\n(p, e\n0\n, e\n1\n) is equality-safe if\n1. e\n1\np when written in CNF (i.e., of the form c\n1\n. . .\nc\nn\nwhere each c\nj\nhas the form\n\ny\n1\nx\n1\n,\n. . . ,\ny\nm\nx\nm\n(\n)\nwhere is a qualifier-free disjunction of literals) has\nno clause with a disjunct of the form t = t , and\n2. it is not the case that f\n0\nt = t where f\n0\nis the\nconjunction of the equality statements in e\n0\n,\nwhere t and t are closed terms such that (1) they both\nappear in e\n0\n; and (2) either t is a sub-term of t , or both t\nand t mention function symbols.\nLike FOL, Lithium is a set of languages each with a different\nvocabulary. Let Li() be the instance of Lithium using\nthe vocabulary .\nLi() has the same set of policies as\nFOL(). However, each policy p of Li() has a different set\nof requests for which it is defined (a different value for Q\np\n).\nA request (s, r, a, e\n0\ne\n1\n) of\nFOL() is in Q\np\niff:\n1. e\n0\nis a basic environment (a conjunction of ground\nterms),\n2. e\n1\nis a conjunction of universally quantified formulas,\n3. (p, e\n0\n, e\n1\n) is equality-safe, and\n4. every conjunct of e\n1\np has at most one literal that is\nbipolar in e\n1\np relative to the equality statements in\ne\n0\n.\nLithium is safe since its requests are a subset of those of\nFOL.\nTheorem 6.6. Lithium does not have independent composition\nfor some .\nProof. Consider the policies p\na\nand p\nb\nand the request\ngiven with them in the proof of Theorem 6.4. The request\nis in Q\np\na\n. To show this, we check that the request satisfies\nall four of the requirements for a request to be in Q\np\na\ngiven above.\nSince e\n0\ne\n1\n= T, the first three requirements\nhold. The last requirement holds since student(x)\nand\nstudent(x) are the only bipolars and they are each in\na different conjunct.\nBy similar reasoning, the request is also in Q\np\nb\n. Thus, the\nproof follows as before.\n6.5\nAnalysis of\nL\n5\nIn their work, Halpern and Weissman define a further restriction\nof FOL, which they call L\n5\n.\nLike Lithium,\nL\n5\n() includes all the policies of FOL()\nwith each policy having a different set of requests for which\nit is defined. For a policy p of L\n5\n(), Q\np\nconsists of all the\nrequests (s, r, a, e) of FOL() such that:\n6\nThe name Lithium only appears in the 2006 version of their\nwork [9].\n1. e is a basic environment,\n2. equality is not used in e or p,\n3. for every atomic policy p in p, all variables appearing\nin p appears as an argument to Permit in p , and\n4. 
there are no bipolars in p relative to the empty set of\nequality statements.\nAs with Lithium,\nL\n5\n() is safe since it is a subset of a safe\nlanguage.\nHalpern and Weissman have proven the following theorem\n(Proposition 4.2 in their updated document [9]):\nTheorem 6.7. Let p be a compound policy and (s, r, a, e)\nbe a request of\nL\n5\n. Then e p\nPermit(\ns, r, a) iffthere is\na sub-policy p of p such that e p\nPermit(\ns, r, a).\nUsing the same approach as given in their proof, one can\ngeneralize this proof to include statements of the form ep\nPermit(s, r, a) also.\nTheorem 6.8. L\n5\n() has independent composition for all\n.\nProof. Allowing p\ni\nto range over the sub-policies of p,\nthe above result yields:\np (q) =\n8\n>\n>\n>\n<\n>\n>\n>\n:\nerror\ni, j, p\ni\n(q) = permit p\nj\n(q) = deny\npermit else if i, p\ni\n(q) = permit\ndeny\nelse if\ni, p\ni\n(q) = deny\ndeny\notherwise\nFrom this, it is easy to construct an appropriate value for\nand\n(q)(d\n1\n, d\n2\n, . . . , d\nn\n):\nerror\nif\ni, j, d\ni\n= permit d\nj\n= deny\npermit\nelse if\ni, d\ni\n= permit\ndeny\nelse if\ni, d\ni\n= deny\ndeny\notherwise\nNotice that\nand\ndoes not use the value of q: it merely composes\nthe results from its sub-policies.\nA policy author concerned solely with expressive power\nwould select Lithium over\nL\n5\n. However, the choice becomes\nmore complicated when concerned about the ability to reason\nabout policies, because only\nL\n5\nfeatures independent\ncomposition. We hope that elucidating this trade-off with a\ncombination of proof and illustrative examples, as we have\ndone above, will help authors choose better between the policy\nlanguages they use, even when the languages are within\nthe same family.\nRELATED WORK\nDe Capitani di Vimercati et al. discuss explicit denial\nand how it introduces the need for policy combinators that\nreduce the clarity of the language [4]. The authors list various\npolicy combinators that are possible, many of which are\nmore complex than those we present. The paper includes\ndiscussion of a few policy languages, including XACML and\na language grounded in first-order logic. The paper does\nnot, however, attempt to systemically compare them.\nThe work of Mark Evered and Serge B\nogeholz concerns\nthe quality of a policy language [5].\nAfter conducting a\ncase study of the access-control requirements of a health\n168\ninformation system, they proposed a list of criteria for policy\nlanguages.\nThey state that languages should be concise,\nclear, aspect-oriented (i.e., separate from the application\ncode), fundamental (i.e., integrated with the middleware,\nnot an ad hoc addition), positive (i.e., lists what is allowed,\nnot what is prohibited), supportive of needs-to-know, and\nefficient. Although they compare four languages based on\nthese criteria, they do not formalize the criteria.\nSome authors have considered formal treatments of programming\nlanguage expressiveness [6, 11]. Felleisen's is the\nclosest in spirit to ours. His framework examines the ability\nto translate the features of one language in the other with\nonly local transformations. That work does not, however,\ndirectly address reasoning.\nDISCUSSION\nThis paper presents our analysis framework and its findings\n. Some differences between languages lie in the realms\nof decision sets, policy combinators, and checking for the\nabsence of attributes, but these are clear from the language\ndefinitions. 
Our framework highlights the following more\nsubtle, semantic differences:\nIndependence Core XACML and\nL\n5\nfeature independent\ncomposition of polices into compound policies, and\nthus allow for reasoning about a policy by reasoning\nabout the sub-policies separately.\nLithium, in contrast\n, does not exhibit this property, and therefore potentially\nrequires reasoning about a policy all at once.\nSafety\nL\n5\nand Lithium provide safety for the most natural\ndefinition of the \"contains more information\" ordering.\nCore XACML, in contrast, does not, which implies\nthat information missing from a Core XACML request\ncould result in unintended access being granted.\nThese differences are not orthogonal. Clearly, the combinators\nselected determine whether the language will have\nindependent composition.\nFurthermore, implicit checking\nof attributes will result in the loss of safety.\nThese properties may guide policy language designers.\nFor example, suppose a designer wishes to create a safe\nvariant of XACML. One way to achieve this would be to\neliminate rules that deny access and thus the decision of\ndeny. (We provide further details in an extended version of\nthis work [15].)\nAs noted in Section 5, the comparison framework must be\ngeneralized to treat language with more exotic constructs\nlike obligations. More importantly, we need to perform user\nstudies to determine whether, and how well, our properties\ncorrelate with policy comprehension by humans. Lastly, this\nframework should be coupled with one for measuring the\nexpressive power of a policy language before fair judgment\nmay be passed on languages.\nACKNOWLEDGMENTS\nWe thank Joe Halpern and Vicky Weissman for useful conversations\nand for sharing their ongoing work. We also thank\nKonstantin Beznosov, Kathi Fisler, and Steve Reiss. This\nwork was partially supported by NSF grant CPA-0429492 to\nBrown University and by the Army Research Office through\ngrant number DAAD19-02-1-0389 (\"Perpetually Available\nand Secure Information Systems\") to Carnegie Mellon Uni-versity's\nCyLab.\nREFERENCES\n[1] A. Anderson. Core and hierarchical role based access\ncontrol (RBAC) profile of XACML, version 2.0.\nTechnical report, OASIS, Sept. 2004.\n[2] M. Y. Becker and P. Sewell. Cassandra: Flexible trust\nmanagement, applied to electronic health records. In\nIEEE Computer Security Foundations Workshop,\npages 139154, 2004.\n[3] E. Bertino, P. Samarati, and S. Jajodia.\nAuthorizations in relational database management\nsystems. In ACM Conference on Computer and\nCommunications Security, pages 130139, 1993.\n[4] S. De Capitani di Vimercati, P. Samarati, and\nS. Jajodia. Policies, models, and languages for access\ncontrol. In Databases in Networked Information\nSystems: 4th International Workshop, volume 3433 of\nLecture Notes in Computer Science. Springer-Verlag,\nMar. 2005.\n[5] M. Evered and S. B\nogeholz. A case study in access\ncontrol requirements for a health information system.\nIn Workshop on Australasian Information Security,\nData Mining and Web Intelligence, and Software\nInternationalisation, pages 5361, 2004.\n[6] M. Felleisen. On the expressive power of programming\nlanguages. Science of Computer Programming,\n17:3575, 1991.\n[7] K. Fisler, S. Krishnamurthi, L. A. Meyerovich, and\nM. C. Tschantz. Verification and change-impact\nanalysis of access-control policies. In International\nConference on Software Engineering, pages 196205,\nMay 2005.\n[8] M. M. Greenberg, C. Marks, L. A. Meyerovich, and\nM. C. Tschantz. 
The soundness and completeness of\nMargrave with respect to a subset of XACML.\nTechnical Report CS-05-05, Department of Computer\nScience, Brown University, Apr. 2005.\n[9] J. Halpern and V. Weissman. Using first-order logic to\nreason about policies. In IEEE Computer Security\nFoundations Workshop, pages 187201, 2003. Updated\n2006 version available at http://www.citebase.org/\ncgi-bin/citations?id=oai:arXiv.org:cs/0601034.\n[10] S. Jajodia, P. Samarati, V. S. Subrahmanian, and\nE. Bertino. A unified framework for enforcing multiple\naccess control policies. In ACM SIGMOD\nInternational Conference on Management of Data,\npages 474485, 1997.\n[11] J. C. Mitchell. On abstraction and the expressive\npower of programming languages. Science of\nComputer Programming, 212:141163, 1993.\n[12] OASIS. eXtensible Access Control Markup Language\n(XACML) version 2.0. OASIS Standard, Feb. 2006.\n[13] C. Powers and M. Schunter. Enterprise privacy\nauthorization language (EPAL 1.2). W3C Member\nSubmission, Nov. 2003.\n[14] R. S. Sandhu, E. J. Coyne, H. L. Feinstein, and C. E.\nYouman. Role-based access control models. IEEE\nComputer, 29(2):3847, 1996.\n[15] M. C. Tschantz and S. Krishnamurthi. Towards\nreasonability properties for access-control policy\nlanguages with extended XACML analysis. Technical\nReport CS-06-04, Department of Computer Science,\nBrown University, Apr. 2006.\n169\n", "keywords": "common features;Access control;lithium;modularity;reasonability property;policy decomposition;properties;access control;policy combinator;XACML;comtemporary policy;access-control policy;policy language property;first order logic;xacml;multiple policy language;policy language;policy;security;formalize;policy languague"} {"name": "199", "title": "Tracking Dynamics of Topic Trends Using a Finite Mixture Model", "abstract": "In a wide range of business areas dealing with text data streams, including CRM, knowledge management, and Web monitoring services, it is an important issue to discover topic trends and analyze their dynamics in real-time.Specifically we consider the following three tasks in topic trend analysis: 1)Topic Structure Identification; identifying what kinds of main topics exist and how important they are, 2)Topic Emergence Detection; detecting the emergence of a new topic and recognizing how it grows, 3)Topic Characterization ; identifying the characteristics for each of main topics. For real topic analysis systems, we may require that these three tasks be performed in an on-line fashion rather than in a retrospective way, and be dealt with in a single framework. 
This paper proposes a new topic analysis framework which satisfies this requirement from a unifying viewpoint that a topic structure is modeled using a finite mixture model and that any change of a topic trend is tracked by learning the finite mixture model dynamically.In this framework we propose the usage of a time-stamp based discounting learning algorithm in order to realize real-time topic structure identification .This enables tracking the topic structure adaptively by forgetting out-of-date statistics.Further we apply the theory of dynamic model selection to detecting changes of main components in the finite mixture model in order to realize topic emergence detection.We demonstrate the effectiveness of our framework using real data collected at a help desk to show that we are able to track dynamics of topic trends in a timely fashion.", "fulltext": "INTRODUCTION\nIn a wide range of business areas dealing with text streams,\nincluding CRM, knowledge management, and Web monitoring\nservices, it is an important issue to discover topic trends\nand analyze their dynamics in real-time.For example, it is\ndesired in the CRM area to grasp a new trend of topics in\ncustomers' claims every day and to track a new topic as soon\nas it emerges.A topic is here defined as a seminal event or\nactivity.Specifically we consider the following three tasks\nin topic analysis:\n1) Topic Structure Identification; learning a topic structure\nin a text stream, in other words, identifying what kinds\nof main topics exist and how important they are.\n2) Topic Emergence Detection; detecting the emergence of\na new topic and recognizing how rapidly it grows, similarly,\ndetecting the disappearance of an existing topic.\n3) Topic Characterization; identifying the characteristics for\neach of main topics.\nFor real topic analysis systems, we may require that these\nthree tasks be performed in an on-line fashion rather than in\na retrospective way, and be dealt with in a single framework.\nThe main purpose of this paper is to propose a new topic\nanalysis framework that satisfies the requirement as above,\nand to demonstrate its effectiveness through its experimental\nevaluations for real data sets.\nOur framework is designed from a unifying viewpoint that\na topic structure in a text stream is modeled using a finite\nmixture model (a model of the form of a weighted average\nof a number of probabilistic models) and that any change\nof a topic trend is tracked by learning the finite mixture\nmodel dynamically.Here each topic corresponds to a single\nmixture component in the model.\nAll of the tasks 1)-3) are formalized in terms of a finite\nmixture model as follows: As for the task 1), the topic structure\nis identified by statistical parameters of a finite mixture\nmodel.They are learned using our original time-stamp\nbased discounting learning algorithm, which incrementally\nand adaptively estimates statistical parameters of the model\nby gradually forgetting out-of-date statistics, making use of\ntime-stamps of data.This makes the learning procedure\nadaptive to changes of the nature of text streams.\nAs for the task 2), any change of a topic structure is rec-ognized\nby tracking the change of main components in a\nmixture model.We apply the theory of dynamic model selection\n[7] to detecting changes of the optimal number of\nmain components and their organization in the finite mixture\nmodel.We may recognize that a new topic has emerged\nif a new mixture component is detected in the model and remains\nfor a 
while.Unlike conventional approaches to statistical\nmodel selection under the stationary environment, dynamic\nmodel selection is performed under the non-stationary\none in which the optimal model may change over time.Further\nnote that we deal with a complicated situation where\nthe dimension of input data, i.e., the number of features of\na text vector, may increase as time goes by.\nAs for the task 3), we classify every text into the cluster\nfor which the posterior probability is largest, and then we\ncharacterize each topic using feature terms characterizing\ntexts classified into its corresponding cluster.These feature\nterms are extracted as those of highest information gain,\nwhich are computed in real-time.\nWe demonstrate the validity of the topic trend analysis\nframework, by showing experimental results on its applications\nto real domains.Specifically we emphasize that it is\nreally effective for discovering trends in questions at a help\ndesk.\n1.2\nRelated Work\nThe technologies similar to 1)-3) have extensively been ex-plored\nin the area of topic detection and tracking (TDT) (see\n[1]).Actually 1) and 2) are closely related to the subprob-lems\nin TDT called topic tracking and new event detection,\nrespectively.Here topic tracking is to classify texts into one\nof topics specified by a user, while new event detection, formerly\ncalled first story detection, is to identify texts that\ndiscuss a topic that has not already been reported in earlier\ntexts.The latter problem is also related to work on topic-conditioned\nnovelty detection by Yang et.al.[16]. In most of\nrelated TDT works, however, topic tracking or new event\ndetection is conducted without identifying main topics or\na topic structure, hence the tasks 1)-3) cannot be unified\nwithin a conventional TDT framework.Further topic timeline\nanalysis has not been addressed in it.\nSwan and Allen [12] addressed the issue of how to auto-matically\noverview timelines of a set of news stories.They\nused the\n-method to identify at each time a burst of feature\nterms that more frequently appear than at other times.\nSimilar issues are addressed in the visualization community\n[3].However, all of the methods proposed there are\nnot designed to perform in an on-line fashion.\nKleinberg [4] proposed a formal model of \"bursts of ac-tivity\"\nusing an infinite-state automaton.This is closely\nrelated to topic emergence detection in our framework.A\nburst has a somewhat different meaning from a topic in\nthe sense that the former is a series of texts including a\nspecific feature, while the latter is a cluster of categorized\ntexts.Hence topic structure identification and characterization\ncannot be dealt with in his model.Further note that\nKleinberg's model is not designed for real-time analysis but\nfor retrospective one.\nRelated to our statistical modeling of a topic structure,\nLiu et.al. 
[2] and Li and Yamanishi [6] also proposed methods\nfor topic analysis using a finite mixture model.Specifically\n, Liu et.al.\nconsidered the problem of selecting the\noptimal number of mixture components in the context of\ntext clustering.In their approach a single model is selected\nas an optimal model under the assumption that the optimal\nmodel does not change over time.Meanwhile, in our\napproach, a sequence of optimal models is selected dynamically\nunder the assumption that the optimal model may\nchange over time.\nRelated to topic emergence detection, Matsunaga and Yamanishi\n[7] proposed a basic method of dynamic model selection\n, by which one can dynamically track the change of\nnumber of components in the mixture model.However, any\nof all of these technologies cannot straightforwardly be applied\nto real-time topic analysis in which the dimension of\ndata may increase as time goes by.\nRelated to topic structure identification, an on-line discounting\nlearning algorithm for estimating parameters in a\nfinite mixture model has been proposed by Yamanishi et.\nal.[14]. The main difference between our algorithm and\ntheirs is that the former makes use of time-stamps in order\nto make the topic structure affected by a timeline of topics\nwhile the latter considers only the time-order of data\nignoring their time-stamps.\nThe rest of this paper is organized as follows: Section 2\ndescribes a basic model of topic structure.Section 3 gives\na method for topic structure identification.Section 4 gives\na method for topic emergence detection.Section 5 gives a\nmethod for topic characterization.Section 6 gives experimental\nresults.Section 7 gives concluding remarks.\nMODEL\nWe employ a probabilistic model called a finite mixture\nmodel for the representation of topic generation in a text\nstream.Let\nW = {w\n1\n, , w\nd\n} be the complete vocabulary\nset of the document corpus after the stop-words removal\nand words stemming operations.For a given document x,\nlet tf(w\ni\n) be the term frequency of word w\ni\nin x.Let idf(w\ni\n)\nbe the idf value of w\ni\n, i. e. , idf(w\ni\n) = log(N/df(w\ni\n)) where\nN is the total number of texts for reference and df(w\ni\n) is\nthe frequency of texts in which w\ni\nappears.Let tf-idf(w\ni\n)\nbe the tf-idf value of w\ni\nin x, i. e. , tf-idf(w\ni\n) = tf(w\ni\n)\n\nlog(N/df(w\ni\n)). We may represent a text x of the form:\nx = (tf(w\n1\n), ..., tf (w\nd\n))\nor\nx = (tf-idf(w\n1\n), ..., tf -idf(w\nd\n)).\nWe may use either type of the representation forms.\nLet K be a given positive integer representing the number\nof different topics.We suppose that a text has only\none topic and a text having the i-th topic is distributed\naccording to the probability distribution with a density:\np\ni\n(x|\ni\n) (i = 1, 2, , K), where\ni\nis a real-valued parameter\nvector.We suppose here that x is distributed according\nto a finite mixture distribution (see e.g., [8]) with K components\ngiven by\np(x| : K) =\nK\nX\ni=1\n\ni\np(x|\ni\n),\n(1)\nwhere\ni\n> 0 (i = 1, , K) and P\nK\ni=1\n\ni\n= 1. 
We set θ = (π_1, ..., π_{K-1}, θ_1, ..., θ_K). Here π_i denotes the degree to which the i-th topic is likely to appear in a text stream. Note that each component in the mixture defines a single cluster in the sense of soft clustering.
Throughout this paper we suppose that each p_i(x|θ_i) takes the form of a Gaussian density: letting d be the dimension of each datum,

p_i(x|θ_i) = N(x | μ_i, Σ_i) = (1 / ((2π)^{d/2} |Σ_i|^{1/2})) exp( -(1/2) (x - μ_i)^T Σ_i^{-1} (x - μ_i) ),    (2)

where μ_i is a d-dimensional real-valued vector, Σ_i is a d x d matrix, and we set θ_i = (μ_i, Σ_i). In this case (1) is a so-called Gaussian mixture. Note that the Gaussian density may be replaced with any other form of probability distribution, such as a multinomial distribution.
In terms of a finite mixture model, a topic structure is identified by A) the number of components K (how many topics exist), B) the weight vector (π_1, ..., π_K) indicating how likely each topic is to appear, and C) the parameter values θ_i (i = 1, ..., K) indicating how each topic is distributed. A topic structure in a text stream must be learned in an on-line fashion. Topic emergence detection is conducted by tracking the change of main components in the mixture model. Topic characterization is conducted by classifying each text into the component for which the posterior is largest and then extracting feature terms characterizing the classified texts. Topic drift may be detected by tracking changes of the parameter value θ_i for each topic i. These tasks will be described in detail in the sections to follow.
The overall flow of the tasks is illustrated in Figure 1.

Figure 1: Topic Trend Analysis System (a text data stream feeds finite mixture models 1, ..., K learned in parallel by discounting learning; dynamic model selection, topic emergence detection, and topic characterization then produce a timeline of topics)

A text is sequentially input to the system. We prepare a number of finite mixture models, for each of which we learn statistical parameters using the time-stamp based learning algorithm to perform topic identification. These tasks are performed in parallel. On the basis of the input data and the learned models, we conduct dynamic model selection for choosing the optimal finite mixture model. We then compare the new optimal model with the last one to conduct topic emergence detection. Finally, for each component of the optimal model, we conduct topic characterization.
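As a concrete illustration of the representation and of equations (1)-(2), the following minimal Python sketch computes a tf-idf vector and evaluates the Gaussian mixture density together with the posterior used later for clustering. This is our own illustration, not the authors' implementation; all names are hypothetical and covariances are taken diagonal for simplicity.

import numpy as np

def tfidf_vector(tf, df, n_texts):
    # x = (tf-idf(w_1), ..., tf-idf(w_d)) with tf-idf(w_i) = tf(w_i) * log(N / df(w_i))
    return tf * np.log(n_texts / df)

def mixture_posterior(x, pi, mu, var):
    # p(x | theta : K) = sum_i pi_i * p_i(x | theta_i), Gaussian components with
    # diagonal covariances (variances in `var`); returns the density and p(i | x)
    d = x.shape[0]
    comps = np.array([
        np.exp(-0.5 * np.sum((x - mu[i]) ** 2 / var[i]))
        / np.sqrt((2.0 * np.pi) ** d * np.prod(var[i]))
        for i in range(len(pi))
    ])
    density = float(pi @ comps)
    return density, pi * comps / density

A text would thus be mapped to a vector once, and the second function gives both the mixture likelihood and the soft cluster assignment of that vector.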
TOPIC STRUCTURE IDENTIFICATION WITH DISCOUNTING LEARNING
In this section we propose an algorithm for learning a topic structure, which we call the time-stamp based discounting topic learning algorithm. The algorithm is basically designed as a variant of the incremental EM algorithm for learning a finite mixture model (see, e.g., Neal and Hinton [9]). Our proposed algorithm is distinguished from existing ones with regard to the following three main features:
1) Adaptive to changes of the topic structure. The parameters are updated by forgetting out-of-date statistics as time goes on. This is realized by putting a larger weight on the statistics for more recent data.
2) Making use of time stamps for texts. Not only the time order of texts but also their time stamps are utilized to make the topic structure depend on the timeline. For example, for two text data x_{t_1}, x_{t_2} (t_1 < t_2), if the length t_2 - t_1 is larger, the topic structure learned at time t_2 will be less affected by that at time t_1.
3) Normalizing data of different dimensions. We consider the on-line situation where the dimension of a datum may increase as time goes by. This situation actually occurs because new words may be added to the list of words every time a new text is input. Hence it is necessary to normalize data of different dimensions.
We suppose that text data x_1, x_2, ... are given in this order, and each has a time stamp indicating when it appeared. Here is a description of the algorithm, in which λ is a discounting parameter, γ_i denotes the posterior density of the i-th component, and m is introduced for the calculation of weights for old statistics.

Time-stamp Based Discounting Learning Algorithm

Initialization:
Set initial values of π_i^(0), μ_i^(0), Σ_i^(0), m^(0) (i = 1, ..., k). Let α > 0 and 0 < λ < 1 be given.

Iteration:
For t = 1, 2, ..., do the following procedure. Let the t-th datum be x_t and its time stamp be t_new, and let the time stamp of the (t-1)-th datum be t_old. For i = 1, ..., k, update the parameters according to the following rules:

p(i|x_t) := π_i^(t-1) p_i(x_t | μ_i^(t-1), Σ_i^(t-1)) / Σ_{l=1}^{k} π_l^(t-1) p_l(x_t | μ_l^(t-1), Σ_l^(t-1)),
γ_i^(t) := WA(p(i|x_t), 1/k | 1, α),
π_i^(t) := WA(π_i^(t-1), γ_i^(t) | m^(t-1), λ^{-(t_new - t_old)}),
μ_i^(t) := WA(μ_i^(t-1), x_t | π_i m^(t-1), λ^{-(t_new - t_old)} γ_i^(t)),
Λ_i^(t) := WA(Λ_i^(t-1), x_t x_t^T | π_i m^(t-1), λ^{-(t_new - t_old)} γ_i^(t)),
Σ_i^(t) := Λ_i^(t) - μ_i^(t) (μ_i^(t))^T,
m^(t) := λ^(t_new - t_old) m^(t-1) + 1,

where WA denotes the operation such that

WA(X, Y | A, B) = (A / (A + B)) X + (B / (A + B)) Y.

Generally, we set the initial values π_i^(0) = 1/K and m^(0) = 0, set Σ_i^(0) to a small value, and set the μ_i^(0) to the first x_t's that are different from each other. This algorithm updates π_i, μ_i, and Σ_i as the weighted average of the latest parameter value and the new statistics. The weight ratio is m^(t-1) : λ^{-(t_new - t_old)} for π_i, and π_i m^(t-1) : λ^{-(t_new - t_old)} γ_i^(t) for μ_i and Σ_i, respectively. Note that Yamanishi et al.'s sequentially discounting learning algorithm [14] can be thought of as a special case of this algorithm in which the time interval t_{l+1} - t_l is independent of l. In that case, if we further let λ = 1, the algorithm becomes an ordinary incremental EM algorithm.
In the real implementation, we supposed that Σ_i is a diagonal matrix for the sake of computational complexity. The scalability issue of dealing with a general matrix Σ_i remains for future study.
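To make one update step concrete, here is a minimal Python sketch of the discounting update. It is our illustration rather than the authors' code; the symbol choices (λ, α, γ) follow the notation above, the state layout is hypothetical, and covariances are kept diagonal as in the implementation described.

import numpy as np

def wa(x, y, a, b):
    # WA(X, Y | A, B) = A/(A+B) * X + B/(A+B) * Y
    return (a * x + b * y) / (a + b)

def gaussian_diag(x, mu, var):
    # d-dimensional Gaussian density with a diagonal covariance (variances in `var`)
    return np.exp(-0.5 * np.sum((x - mu) ** 2 / var)) / np.sqrt(np.prod(2.0 * np.pi * var))

def discounting_step(state, x, t_new, lam=0.99, alpha=1.0):
    # state holds pi (k,), mu (k, d), var (k, d), the statistic m, and the last time stamp t
    pi, mu, var, m, t_old = state["pi"], state["mu"], state["var"], state["m"], state["t"]
    k = len(pi)
    decay = lam ** (-(t_new - t_old))                        # weight on the new statistics
    dens = np.array([pi[i] * gaussian_diag(x, mu[i], var[i]) for i in range(k)])
    post = dens / dens.sum()                                 # p(i | x_t)
    gamma = wa(post, np.full(k, 1.0 / k), 1.0, alpha)        # smoothed posterior gamma_i
    new_pi = wa(pi, gamma, m, decay)
    new_mu = np.array([wa(mu[i], x, pi[i] * m, decay * gamma[i]) for i in range(k)])
    # second-moment statistic kept per dimension because covariances are diagonal
    lam2 = np.array([wa(var[i] + mu[i] ** 2, x ** 2, pi[i] * m, decay * gamma[i]) for i in range(k)])
    new_var = np.maximum(lam2 - new_mu ** 2, 1e-8)           # keep variances positive
    return {"pi": new_pi, "mu": new_mu, "var": new_var,
            "m": lam ** (t_new - t_old) * m + 1.0, "t": t_new}

Feeding texts to this step in time-stamp order reproduces the intended behavior: the larger the gap t_new - t_old, the more strongly the old statistics are discounted.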
TOPIC EMERGENCE DETECTION WITH DYNAMIC MODEL SELECTION
In this section we are concerned with the issue of topic emergence detection, i.e., tracking the emergence of a new topic. We reduce this issue to that of selecting the optimal components in the mixture model dynamically. We call this statistical issue dynamic model selection (see also [7]).
The key idea of dynamic model selection is to first learn a finite mixture model with a relatively large number of components, and then to select main components dynamically from among them on the basis of Rissanen's predictive stochastic complexity [10].
The procedure of dynamic model selection is described as follows:

Initialization:
Let K_max (the maximum number of mixture components) and W (the window size) be given positive integers. Set initial values of π_i^(0), θ_i^(0) = (μ_i^(0), Σ_i^(0)) (i = 1, ..., K_max).

Iteration:
For t = 1, 2, ..., do the following procedures 1 to 4:

1. Model Class Construction:
Let G_i^(t) be the window average of the posterior probability, (γ_i^(t-W) + ... + γ_i^(t))/W. For k = 1, ..., K_max, do the following procedure: let i_1, ..., i_k be the indices of the k highest scores such that G_{i_1}^(t-1) >= ... >= G_{i_k}^(t-1). Construct the following mixture model with k components: for s = t - W, ..., t,

p^(t-1)(x | i_1, ..., i_k) = Σ_{j=1}^{k-1} π_{i_j}^(t-1) p_{i_j}(x | θ_{i_j}^(t-1)) + (1 - Σ_{j=1}^{k-1} π_{i_j}^(t-1)) U,

where U is a uniform distribution over the domain.

2. Predictive Stochastic Complexity Calculation:
When the t-th input datum x_t with dimension d_t is given, compute

S^(t)(k) = Σ_{s=t-W}^{t} ( -log p^(s)(x_s | i_1, ..., i_k) / d_s ).    (3)

3. Model Selection:
Select k*_t minimizing S^(t)(k). Let p_{i_j}(x | θ_{i_j}^(t-1)) (j = 1, ..., k*_t) be the main components at time t, which we write as {C_1^(t), ..., C_{k*_t}^(t)}.

4. Estimation of Parameters:
Learn a finite mixture model with K_max components using the time-stamp based discounting learning algorithm. Let the estimated parameters be (π_1^(t), ..., π_{K_max}^(t), θ_1^(t), ..., θ_{K_max}^(t)).

Note that S^(t)(k) can be thought of as a variant of Rissanen's predictive stochastic complexity [10] normalized by the dimension of each datum, which can be interpreted as the total code length required for encoding a data stream x_{t-W}, ..., x_t into a binary string sequentially.
Once the main components C_1^(t), ..., C_{k*_t}^(t) are obtained, we compare them with C_1^(t-1), ..., C_{k*_{t-1}}^(t-1) to check the emergence of a new topic or the disappearance of an existing topic in the following way. If a new component is selected at some point and remains for a longer time than a specified threshold, we may determine that a new topic has emerged. Specifically, if the optimal number k*_t of components becomes larger than k*_{t-1}, we can recognize that a new topic has emerged. Similarly, if an existing component is not selected at some time and does not appear any longer, then we may determine that the topic has disappeared.
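Steps 1-3 above can be sketched compactly as follows. This is again our own sketch under the notation used above; the arrays dens, u, G, and d are hypothetical precomputed quantities over the current window.

import numpy as np

def dynamic_model_selection(dens, u, pi, G, d, K_max):
    # dens[s, i]: component density p_i(x_s | theta_i) over the window s = t-W, ..., t
    # u[s]: uniform density U(x_s); d[s]: dimension d_s of x_s
    # pi[i]: current mixture weights; G[i]: window-averaged posteriors used for ranking
    order = np.argsort(-G)                       # components with the highest G first
    best_k, best_S = 1, np.inf
    for k in range(1, K_max + 1):
        head = order[:k - 1]                     # first k-1 selected components
        mix = dens[:, head] @ pi[head] + (1.0 - pi[head].sum()) * u
        S = np.sum(-np.log(mix) / d)             # S^(t)(k), eq. (3)
        if S < best_S:
            best_k, best_S = k, S
    return order[:best_k], best_S                # indices of C_1, ..., C_{k*} and S^(t)(k*)

Comparing the returned index set with the one obtained at the previous step is exactly the emergence/disappearance check described in the text.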
TOPIC CHARACTERIZATION WITH INFORMATION GAIN
Once the optimal finite mixture model is obtained, we are concerned with the issue of how to characterize each topic. We address this issue by extracting terms characterizing each topic and by observing the growth or decay of each topic component. Details are given below.
A) Extracting terms characterizing each topic. We attempt to characterize each topic by extracting characteristic words for it. We perform this task by computing the information gain of possible words.
In the time-stamp based discounting topic learning algorithm, the posterior probability distribution over the set of clusters is estimated every time a text datum is input. According to that posterior distribution, an input text is categorized into the component for which the posterior probability is largest. This clustering task can be performed in an on-line fashion.
After observing the t-th datum x_t, for i = 1, ..., k, let S_t(i) be the set of texts in x^t = x_1, ..., x_t classified into the i-th component, and let t_i be the size of S_t(i). Let S_t = ∪_{i=1}^{k} S_t(i).
Below we show the method for computing the information gain of a term w for each topic component. For any term w, let S(w) be the set of vectors in S_t such that the frequency of w is larger than a given threshold, and let m_w be the size of S(w). Let S(w̄) be the set of vectors in S_t such that the frequency of w is not larger than the threshold, and let m_w̄ be the size of S(w̄).
For a specified topic component, say the i-th component, let m_w^+ be the number of vectors in S(w) that are also included in S_t(i). Let m_w̄^+ be the number of vectors in S(w̄) that are also included in S_t(i).
Then we define the information gain of w for the i-th topic component as follows:

IG(w|i) = I(t, t_i) - ( I(m_w, m_w^+) + I(m_w̄, m_w̄^+) ),

where I(x, y) is an information measure such as the stochastic complexity [10] or the extended stochastic complexity [13][5]. The stochastic complexity [10] is given as follows:

I(x, y) = x H(y/x) + (1/2) log(x/2),

where H(p) = -p log p - (1 - p) log(1 - p) is the binary entropy function, and log's base is 2. A special case of the extended stochastic complexity is given as follows [13][5]:

I(x, y) = min{y, x - y} + c sqrt(x) log x,

where c is a constant.
We select a specified number of terms w with the largest information gains. We can think of them as the set of terms characterizing the i-th topic component.
The statistics m_w, m_w^+, m_w̄, m_w̄^+ needed for computing the information gain can be calculated in an on-line fashion. Hence the topic characterization task is conducted in real time.
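For the term-scoring step, here is a small Python sketch of the stochastic-complexity-based information gain. It is our own illustration; counts are assumed positive, and the measure could be swapped for the extended stochastic complexity.

import math

def sc(x, y):
    # Stochastic complexity I(x, y) = x * H(y/x) + 0.5 * log2(x/2), base-2 logarithms
    p = y / x
    h = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p) if 0.0 < p < 1.0 else 0.0
    return x * h + 0.5 * math.log2(x / 2.0)

def information_gain(t, t_i, m_w, m_w_pos, m_wbar, m_wbar_pos, measure=sc):
    # IG(w | i) = I(t, t_i) - ( I(m_w, m_w^+) + I(m_wbar, m_wbar^+) )
    return measure(t, t_i) - (measure(m_w, m_w_pos) + measure(m_wbar, m_wbar_pos))

Within each component, terms would be ranked by this score and the top few kept as its characteristic terms.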
B) Observing the growth or decay of each cluster. Let G_i^(t) be the window average of the posterior probability of the i-th topic component, that is, G_i^(t) = (γ_i^(t-W) + ... + γ_i^(t))/W. G_i^(t) increases when texts corresponding to the i-th topic are input, and decreases when other texts are input. We can see how rapidly a topic grows by observing G_i^(t) as t goes by.
EXPERIMENTAL RESULTS
We conducted an experiment on real data: contact data of a help desk for an internal e-mail service. An example of a data record is presented in Table 1. It has fields for contact date/time, question/request, answered date/time, answer, and so on. The number of records is 1202. The dates of the first and last contacts are Feb 21 2004 and May 20, respectively.

Table 1: Examples of help desk data records

Contact date/time | Question/Request | Answered date/time | Answer | ...
Feb 26 2004 14:05 | I forgot my password. How can I ... | Feb 26 2004 14:48 | You can get a new ... | ...
Feb 26 2004 14:08 | Until what time is an account for a ... | Feb 26 2004 14:25 | It is valid for 14 days after retirement. | ...
Feb 26 2004 14:09 | Is it possible to forward mails from ... | Feb 26 2004 15:09 | Yes. You can set up by ... | ...
... | ... | ... | ... | ...

We input the contact dates as the time stamps and the questions/requests as the text data to our system. We set K_max to 50 and λ to 0.99. Our system ran on an NEC Express5800 with a 1GHz Pentium III and 1GB of memory. The system was implemented in C, and the OS was Windows 2000 Server. Processing the 1202 records of data took about five minutes.
Figure 2 shows the number of components k*_t selected by our system as main topics.

Figure 2: Number of main topics (k*_t plotted against date)

The number increases at the beginning of March, and has a peak in the middle of April. Since a fiscal year begins in April in Japan, we can suppose that the number of topics at the help desk increases around the first day of April.
Let us look into a few of the components, because we do not have enough space for all of them. Here, we observe Components 27 and 42 in detail. Figure 3 shows the window averages G_27, G_42 of the posterior probabilities and the periods where the components are selected as main topics.

Figure 3: G_i and the periods in which Components 27 and 42 are main

G_27 increases at the beginning of April and has its first peak at April 12. Then it repeats increasing and decreasing until the middle of May. The corresponding component is selected as main from the first week of April, and remains main until the middle of May (with short discontinuances). G_42 is positive during April, and the corresponding topic is also main during April. The lines of the G_i's indicate how important the corresponding topics are at each time. Moreover, we can observe from the figure how the emerged topics grow and disappear. The topic corresponding to Component 42 emerges at the beginning of April, grows for two weeks, is attenuated, and then drops out from the main topics at the end of April.
Term \"transfer\" was extracted as a characteristic word
for Component 27. Texts classified into this component are questions like \"Is it possible to use Service XXX after I am transferred to YYY?\". That kind of question may increase around the beginning of a fiscal year. \"Service ZZZ\" and \"failure\" were extracted as characteristic words for Component 42. Actually, Service ZZZ failed at the beginning of April, and the topic then consists of related complaints and questions.
In this way we can recognize the emergence, growth, and decay of each topic from the system. Through this example it has turned out that our framework for topic trend analysis is very effective for tracking the dynamics of topic trends in contact data at a help desk.
CONCLUSION AND FUTURE STUDY
In this paper we have proposed a framework for tracking the dynamics of topic trends using a finite mixture model. In this framework the three main tasks, topic structure identification, topic emergence detection, and topic characterization, are unified within a single framework. Topic structure identification has been realized by our unique time-stamp based learning algorithm. It enables tracking topic structures adaptively by forgetting out-of-date statistics. Topic emergence detection has been realized on the basis of the theory of dynamic model selection. It enables detecting changes of the optimal number of components in the finite mixture model to check whether a new topic has appeared or not. Topic characterization has been realized by on-line text clustering and feature extraction based on information gain. Through the experiments using real data collected at a help desk, it is demonstrated that our framework works well in the sense that the dynamics of topic trends can be tracked in a timely fashion.
The following issues remain open for future study:
Context-based topic trend analysis: In this paper we have proposed an approach to word-based topic trend analysis. However, we need to further analyze contexts, i.e., relations among words, in order to more deeply analyze the semantics of topics.
Multi-topic analysis: We supposed that one text comes from a single mixture component corresponding to a single topic. It is our future study how to deal with texts having multiple topics.
REFERENCES
[1] J.Allen, R.Papka, and V.Lavrenko: On-line new event detection and tracking, in Proceedings of SIGIR International Conference on Information Retrieval, pp:37-45, 1998.
[2] X.Liu, Y.Gong, W.Xu, and S.Zhu: Document clustering with cluster refinement and model selection capabilities, in Proceedings of SIGIR International Conference on Information Retrieval, pp:191-198, 2002.
[3] S.Harve, B.Hetzler, and L.Norwell: ThemeRiver: Visualizing theme changes over time, in Proceedings of IEEE Symposium on Information Visualization, pp:115-123, 2000.
[4] J.Kleinberg: Bursty and hierarchical structure in streams, in Proceedings of KDD2002, pp:91-101, ACM Press, 2003.
[5] H.Li and K.Yamanishi: Text classification using ESC-based decision lists, Information Processing and Management, vol.38/3, pp:343-361, 2002.
[6] H.Li and K.Yamanishi: Topic analysis using a finite mixture model, Information Processing and Management, vol.39/4, pp:521-541, 2003.
[7] Y.Matsunaga and K.Yamanishi: An information-theoretic approach to detecting anomalous behaviors, in Information Technology Letters vol.2 (Proc.
of the 2nd Forum on Information\nTechnologies), pp:123-124, (in Japanese) 2003.\n[8] G.McLahlan and D.Peel: Finite Mixture Models,\nWiley Series in Probability and Statistics, John\nWiley and Sons, 2000.\n[9] R.M.Neal and G.E.Hinton: A view of the EM\nalgorithm that justifies incremental sparse, and other\nvariants, Learning in Graphical Models, M. Jordan\n(editor), MIT Press, Cambridge MA, USA.\n[10] J.Rissanen: Universal coding, information, and\nestimation, IEEE Trans. on Inform. Theory,\n30:629-636, 1984.\n[11] R.Swan and J.Allen: Extracting significant\ntime-varying features from text, in Proceedings of 8th\nInternational Conference on Information Knowledge\nManagement, pp:38-45, 1999.\n[12] R.Swan and J.Allen: Automatic generation of\noverview timelines, in Proceedings of SIGIR\nInternational Conference on Information Retrieval,\npp:49-56, 2000.\n[13] K.Yamanishi: A Decision-theoretic Extension of\nStochastic Complexity and Its Applications to\nLearning, IEEE Trans. on Inform. Theory, vol.44/4,\npp:1424-1439, 1998.\n[14] K.Yamanishi, J.Takeuchi, G.Williams, and P.Milne:\nOn-line unsupervised outlier detection using finite\nmixtures with discounting learning algorithms,\" in\nProceedings of KDD2000, ACM Press, pp:320324\n2000.\n[15] Y.Yang, T.Pierce, J.G.Carbonell: A study on\nretrospective and on-line event detection, in\nProceedings of SIGIR International Conference on\nInformation Retrieval, pp:28-30, 1998.\n[16] Y.Yang, J.Zang, J.Carbonell, and C.Jin:\nTopic-conditioned novelty detection, in Proceedings\nof KDD 2002, pp:688-693, 2002.\n816\nIndustry/Government Track Poster\n", "keywords": "finite mixture model;CRM;time-stamp based discounting learning algorithm;topic structure identification;topic characterization;topic detection and tracking;time-stamp based learning algorithm;Topic Structure Identification;topic emergence detection;text mining;Topic Emergence Detection;tracking dynamics;dynamic model selection;Data Mining;information gain;topic trends;Topic Characterization;text data streams;model selection;topic trend;topic analysis"} {"name": "2", "title": "A Case Study on How to Manage the Theft of Information", "abstract": "This paper shows the importance that management plays in the protection of information and in the planning to handle a security breach when a theft of information happens. Recent thefts of information that have hit major companies have caused concern. These thefts were caused by companies' inability to determine risks associated with the protection of their data and these companies lack of planning to properly manage a security breach when it occurs. It is becoming necessary, if not mandatory, for organizations to perform ongoing risk analysis to protect their systems. Organizations need to realize that the theft of information is a management issue as well as a technology one, and that these recent security breaches were mainly caused by business decisions by management and not a lack of technology.", "fulltext": "INTRODUCTION\nAfter counter-terrorism and counter-intelligence, cyber crime is\nthe third highest priority for the U.S. Federal Bureau [4]. With\nthe rise of the theft of information and the lure of big profits for\nthis stolen information, it is necessary for information systems to\nhave the ability to protect this valuable asset. It is estimated that a\ncredit card number unsupported by any other documentation is\nworth $10, and a credit history report retails for $60 [2]. 
Recent\nbreaches of information systems that have lead to thefts of\ninformation have shown that management practices and not\ntechnology was part of the issue and in some cases the primary\ncause of the theft of the information. With each of these thefts,\nthere is a third party committing a crime, but in each case, risk\nanalysis could have been used to avoid or to help mitigate the\ntheft. It is becoming a necessity that companies examine their\nbusiness practices and company policies to avoid risks associated\nwith information stealing. The solution to information stealing\ndoes not reside in technology alone but also requires an\nunderstanding by management of the business and the risks\nassociated with it. This paper examines the theft of information\nfrom different companies in order to explain the short coming of\nmanagement practices that lead to the theft. .\nCASE STUDIES\nIn May of 2005, Citigroup lost computer tapes that were being\nsent to the credit bureau via UPS that included Social Security\nnumbers and payment history information for 3.9 million\ncustomers. After this event, this New York based company has\ndecided that it will start sending its data to the credit bureau\nelectronically using encryption [8].\nCitigroup should have learned a lesson from Time Warner who\nlost a shipment of backup tapes that contained personal\ninformation of 600,000 employees that was being sent to an\noffsite data storage company in March of 2005 [9]. But the\nquestion remains, why was Citigroup sending sensitive\ninformation unsecured? Why did they not encrypt the data in the\nfirst place, and why did they realize that these tapes could get lost\nor stolen as evident to what happened with Time Warner? The\nanswer is because they did not correctly identify the risk.\nCitigroup believed that UPS was a secure method for sending this\ninformation and that the data would be difficult to retrieve off the\ntapes because of the hardware needed to read the tapes. Citigroup\nneeded to evaluate this risk of properly protecting confidential\ninformation while in transmission. Now, Citigroup has the issue\nof dealing with the negative public associated with this event, and\nthe loss of any potential customers/revenue it will lose because of\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies\nare not made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nInformation Security Curriculum Development (InfoSecCD)\n\nConference '05, September 23-24, 2005, Kennesaw, GA, USA.\nCopyright 2005 ACM 1-59593-261-5/05/0009...$5.00.\n\n135\nit. This issue would have been avoided if Citigroup would have\nproperly identified this risk and taken the steps to protect this\ninformation. If the tapes were lost and the data was encrypted,\nthen this story would have never happened.\n2.2\n\nCase II: ChoicePoint\nChoicepoint has made more than 50 acquisitions since 1997 to\nmake it one of the largest collections of personal data in the\nUnited States. Choicepoint sells data \"to clients doing\nbackground checks on job and loan applicants and conducting\ncriminal investigations\" [10]. 
On February 16, 2005,\nChoicePoint went public to tell 145,000 people that identity\nthieves may have gained access to their personal information\nincluding their Social Security numbers and credit reports.\n\"Authorities believe it was the work of a group of people who\nused IDs stolen from legitimate business people to set up phony\nbusinesses that contracted with ChoicePoint for ID checks,\nBernknopf (ChoicePoint's spoke person) said\" [5].\n\nWith ChoicePoint's security incident, there was no firewall\nhacked, or an IDS fooled. This was a deceptive scheme that took\nadvantage of security holes in the business process.\n\nChoicePoint's CISO, Rich Baich, stated \"The mislabeling of this\nevent as a hack is killing ChoicePoint. It's such a negative\nimpression that suggests we failed to provide adequate protection.\nFraud happens everyday. Hacks don't\" [10]. ChoicePoint\nseemed to push that they were the victims of fraud, and not at\nfault. The bottom line is that confidential information was stolen,\nand the individuals who had their information stolen do not care if\nit was hacker or if the company was a victim of fraud.\nChoicePoint failed to identify holes in the business process to\nallow this event to happen. Which if someone hacked into their\nsystem, it would have lead to the same result, the theft of\ninformation. ChoicePoint needs to recognize that identifying\nrisks with their business process is just as important as securing\ntheir information system from an external hacker.\n\n2.3\n\nCase III: Egghead.com\nEgghead Software was a company that opened in 1984 to sell\ncomputer hardware and software that grew to have more than 205\nstores worldwide. Then in 1998 the company moved its business\nto the internet as Egghead.com.\nIn December of 2000, Egghead.com stated that \"a hacker has\nbreached its computer system and may have gained access to its\ncustomer database\" [6]. Jerry Kaplan, Egghead.com's co-chairman\n, stated that there was \"no evidence\" to support that the\ndatabase with the credit card numbers for its customer was stolen\nbut, he also could not give confirmation that they were not stolen.\n\"Egghead's inability to determine how many of it's customers\ncredit cards had been compromised may mean that the company\ndoes not have a real-time auditing system in place, said Paul\nRobertson, senior developer for security service firm TruSecure\nCorp. `If you don't know how many credit-card numbers you lost,\nyou are giving a quick, blanket, worst-case answer--and then\nfinding out what happened afterwards,' he said.\" [1]. The way\nthat Egghead.com handled its security incident showed that they\ndid not have a good plan to manage the theft of information, and\nit appeared as if they made the plan to handle this situation as it\nhappened. This lack of planning and risk analysis by\nmanagement caused Egghead.com's business to suffer\ntremendously. Shortly thereafter this event, Egghead.com went\ninto bankruptcy, and on November 26, 2001, Amazon.com\nacquired Egghead.com's assets in the Bankruptcy Court [6].\nIt appears the inability for Egghead.com to successful determine\nwith certainty the extent of information stolen caused more\ndamage to the company's reputation then the actual event itself.\nIf Egghead.com had a well developed incident response plan in\nplace to handle this security breach and a way to handle the media\nthat followed, Egghead.com may have been able to weather the\nstorm and stay in business. 
But all customer confidence was lost\nand Egghead.com was not able to recover.\n2.4\n\nCase IV: New Jersey Crime Ring\nBank employees for Wachovia Corporation, Bank of America\nCorporation, Commerce Bancorp Inc., and PNC Bank stole\ninformation on 676,000 customer accounts that are all New Jersey\nresidents. It is considered the largest banking security breach in\nhistory by the U.S. Department of the Treasury. \"The suspects\npulled up the account data while working inside their banks, then\nprinted out screen captures of the information or wrote it out by\nhand, Lomia (a New Jersey Police Detective) said. The data was\nthen provided to a company called DRL Associates Inc., which\nhad been set up as a front for the operation. DRL advertised itself\nas a deadbeat-locator service and as a collection agency, but was\nnot properly licensed for those activities by the state, police said\"\n[13].\n\nWith this security breach, there was no technology involved. No\nhackers breached the information system. This was completely\nan inside job. The question becomes of how this could have been\nprevented? The answer is that in some cases the theft of\ninformation can not be prevented. The only the thing that\nmanagement can do is be prepared for when it does happen.\nBecause of incidents like this, it is becoming a duty of\nmanagement to have an incident response plan in place long\nbefore a security breach happens. From a risk analysis viewpoint,\nan incident like this is difficult to detect and almost impossible to\nstop before it happens. But when it does happen and the criminals\nare caught, it becomes a necessity to punish the ones responsible\nto the full extent of the law to deter others from following suit.\n2.5\n\nCase V: LexisNexis\nLexisNexis is provider of legal and business data. In March of\n2005, LexisNexis announced that the information on 32,000\npeople was stolen. These breaches occurred at one of the\nsubsidiary companies, Seisint Inc. Seisnt Inc. was the company\nwho was the provider of data to the Multistate Anti-Terrorism\nInformation Exchange (MATRIX) system. \"LexisNexis, which\nacquired Seisint of Boca Raton, Florida, in September for $775\nmillion, expressed regret over the incident and said that it is\nnotifying the individuals whose information may have been\naccessed and will provide them with credit-monitoring services\"\n[12]. In this incident, hackers stole username and passwords of\nlegitimate users to access the confidential information. In a\nstatement, \"Kurt Sanford, president and CEO of LexisNexis\nCorporate and Federal Markets, said that the company will\nimprove the user ID and password administration procedures that\nits customers use and will devote more resources to protecting\nuser's privacy and reinforcing the importance of privacy\" [12].\nThis security breach is very similar to the incident that happened\nat ChoicePoint who is one of LexisNexis's competitors.\n\n\n136\nThere are several policies that should have been implemented that\ncould have reduced the risk of this security breach. Since\nLexisNexis gives third parties access to its confidential\ninformation, there becomes a need to educate these organizations\non certain practices to protect the data. Where was this education,\nand was there a lack of education due to the possible effect that it\ncould have on business? Also, what was the password policy for\nits customers? 
LexisNexis has not elaborated on the details of the\nsecurity breach, but considering the statement of the CEO of\nLexisNexis after the incident, there clearly seems that there was a\nfailure to detect the risk associated with their customer's\npassword policy that could result in a theft of information.\nLexisNexis inability to properly assess this risk caused the\nsecurity breach. Through education and a secure password\nadministration policy, this event could have been avoided.\nRESULTS AND DISCUSSION\nWhen analyzing these case studies, an important thing to ponder\nis that for every security breach reported, how many go\nunreported? These security breaches could have been avoided\nwith proper risk assessment and risk analysis, or at least the\nprobability of a security breach could have been reduced greatly.\nFor all security breaches, the prevention or at least the reduction\nof the probability of the security breach begins and ends with\ndecisions that management makes.\n\nIn an organization, when a security breach occurs it causes a\ncompany to re-evaluate their policies that guide their information\nsecurity. With this rash of security incidents that have recently\ntaken place, companies do not need to wait until a security breach\nhappens to evaluate their security policies and analyze their risks.\nCompanies need to have an ongoing risk analysis that is\ncontinually developed and re-developed. They need policies that\nare ever changing to meet new threats and new security\nweaknesses from a both business practices and technology\nviewpoints. Looking at the incidents that happened at\nChoicePoint, LexisNexis, and Citigroup, these companies have\ntechnological solutions to protect their data from being stolen, but\nthey failed at weighing equal importance the security of the data\nfrom a business issue perspective. This showed in their inability\nto properly evaluate the risk in the business practices. In several\nof the cases, the theft of information occurred because of the\nbusiness practices of the company, and technology was not even\ninvolved.\n\nAlso, companies need to learn from the mistakes of others\nbecause history will repeat itself if the lesson is not learned.\nThere is an age old saying that is a wise person learns from their\nmistakes, but an even wiser person learns from the mistakes of\nothers. Citigroup needed this advice. With Citigroup's loss of\ntheir backup tapes, they should have learned from the mistake that\nTime Warner made just months earlier, but they did not. Security\npolicies and practices need the flexibility to change, and\nmanagement has a responsibility to make these changes when\nnew threats or new weakness surface so that they can protect their\ndata.\nCompanies and organizations need to realize the importance of\nmaking information security a business issue as well as a\ntechnological one. With the issue that happened with\nEgghead.com, they did have security systems in place to protect\ntheir data from being stolen, \"but it lacked the kind of coordinated\norganizational response necessary to convince customers and\nshareholders that their sensitive data were actually secure.\"\nEgghead.com lost 25% of its stock value when their customer\ndata was stolen [7]. Egghead.com was not ready for the media\nstorm that followed the security breach which ultimately caused\ntheir collapse. 
By making information security a business issue,\nas well as a technological one, companies can add strategic,\noperational, and organizational defenses to protect their data.\n\nCONCULSIONS\nAs more identity thefts occur, companies that make their money\nfrom storing this information are going to become liable. \" `The\nChoicePoint scandal has been a wake up call for how vulnerable\nconsumers are to identity theft because of the lack of security\nstandards for the largely unregulated information broker\nindustry,' said Gail Hillebrand, Senior Attorney for Consumers\nUnion's West Coast Office. `This bill will ensure that information\nbrokers are held accountable for enforcing tough security\npractices to prevent thieves from gaining access to sensitive\nconsumer data. And it gives consumers important new rights to\nexamine the information maintained about them and to correct\nany errors they may find' [3].\n\nCompanies need to find the importance of protecting their data\nfrom both technology and business practices weaknesses.\n\nCompanies view the protection of their data from a technology\nissue, but fail to realize the importance that management plays in\nprotecting their systems with the creation of policies and\nunderstanding the risks that face their information systems.\n\nFrom a consumer standpoint, if a company is making profit from\nsomeone's personal information and they fail to protect this data,\nshould they not give some sort of reputation? Companies own\nand manage consumer information, and individuals have little\npower over their information that is controlled by these\norganizations. As identity theft continues and companies fail at\nprotecting their data, legislation will be passed that will force\ncompanies to comply with regulator standards that may force\ncompanies to give this reputation to individuals who have their\nidentity stolen.\n\nToday, there are only laws to protect data in certain industries.\nThis includes the Health Insurance Portability and Accountability\nAct for healthcare and the Gramm-Leach-Bliley Act for financial\nservices. With consumer groups voicing their opinions regarding\nthe theft of information from companies, the US Congress and\nother state legislators are getting prepared to pass broader data\nprivacy protection to protect consumers [11].\n\nThere are steps that companies and organizations need to take to\nprotect themselves from the theft of information. First,\ncompanies need to be prepared when a security breach occurs\nbecause a risk to an asset is never zero percent. Organizations\nneed to establish policies and risk assessments that protect their\ndata from both technology risks and business practices well\nbefore a security breach occurs. This is achieved by companies\nhaving the organizational structure that allows management to\nfully understand the business processes and technology that\nexpose their information systems to threats. Also, companies\nneed the ability to change and adapt to new threats that oppose\ntheir information. It is not possible to prevent all security\nbreaches that lead to a theft of information, but companies will\nneed to have policies and practices in place to better protect the\n\n137\ndata. Companies will need not only to weigh technology risk to\ntheir information, but also understand business issues that expose\ntheir information to theft. 
It no longer matters how the\ninformation stolen, whether it was a hacker or a social engineer\nthat committed the crime; companies need to protect their\ninformation from all threats and minimize their risks from all\naspects.\n\n\nREFERENCES\n[1] Charny, Ben and Lemos, Robert. December 22, 2000.\nEgghead Scrambles to Guage Damage. Retrieved\n06/19/2005 from\nhttp://seclists.org/lists/isn/2000/Dec/0134.html\n\n[2] Crawford, Michael. June 16, 2005. Criminals Grasp the\nMetrics of Information Value. Retrieved 06/20/2005 from\nhttp://www.computerworld.com.au/index.php?id=550545875\n&eid=-255\n\n[3] ConsumersUnion.org. Consumers Union applauds Nelson\n(FL) bill to extend federal oversight to information brokers\nlike ChoicePoint. Retrieved 06/28/2005 from\nhttp://www.consumersunion.org/pub/core_financial_services\n/002027.html\n\n[4] Easen, Nick. April 21, 2004. Cyber Crime is Right Under\nYour Nose. Retrieved 06/25/2005 from\nhttp://www.cnn.com/2004/BUSINESS/04/20/go.cyber.securi\nty/index.html\n\n[5] Gross, Grant. February 23, 2005. ChoicePoint's Error Sparks\nTalk of ID Theft Law. Retrieved 06/22/2005 from\nhttp://pcworld.com/news/article/0,aid,119790,00.asp\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n[6] Liu, Bob. December 3, 2001. Eggheacd.com Becomes\nAmazon.com Property. Retrieved 06/22/2005 from\nhttp://www.internetnews.com/ec-news/article.php/932871\n[7] McKinsey & Company, Inc. June 6, 2002. Managing\nInformation Security. Retrieved 06/22/2005 from\nhttp://news.com.com/2009-1017-933185.html\n\n[8] McMillian, Robert. June 7, 2005. Citigroup to Encrypt Data\nSent to Credit Bureaus. Retrieved 06/20/2005 from\nhttp://www.computerworld.com/hardwaretopics/hardware/st\nory/0,10801,102315,00.html\n\n[9] Mearian, Lucas. May 2, 2005. Time Warner Says Data of\n600,000 Workers Lost. Retrieved 06/21/2005 from\nhttp://www.computerworld.com/databasetopics/data/story/0,\n10801,101500,00.html\n\n[10] Mimoso, Michael. April 2005. Damage Control. Retrieved\n06/21/2005 from\nhttp://informationsecurity.techtarget.com/magItem/1,291266,\nsid42_gci1073914,00.html\n\n[11] Rasmussen, Michael. March 3, 2005. ChoicePoint Security\nBreach Will Lead to Increased Regulation. Retrieved\n06/25/2005 from\nhttp://www.csoonline.com/analyst/report3416.html\n\n[12] Robert, Paul. March 9, 2005. Hackers Grab LexisNexis Info\non 32,000 People. Retrieved 06/24/2005 from\nhttp://www.pcworld.com/resource/article/0,aid,119953,pg,1,\nRSS,RSS,00.asp\n\n[13] Weiss, Todd. May 20, 2005. Scope of Bank Data Theft\nGrows to 676,000 Customers. Retrieved 06/24/2005 from\nhttp://www.computerworld.com/securitytopics/security/cybe\nrcrime/story/0,10801,101903,00.html\n\n\n\n\n\n138\n", "keywords": "security breach;risk analysis;Information Security;business practises and policy;information system;cases of information theft;privacy;management issue;Information Security Management;theft of information;human factor;data protection procedure;Security Management;information security;cyber crime;confidential information;incident response plan;encryption;data protection;personal information"} {"name": "20", "title": "A Survey of Collaborative Information Seeking Practices of Academic Researchers", "abstract": "Information seeking and management practices are an integral aspect of people's daily work. However, we still have little understanding of collaboration in the information seeking process. 
Through a survey of collaborative information seeking practices of academic researchers, we found that researchers reported that (1) the lack of expertise is the primary reason that they collaborate when seeking information; (2) traditional methods, including face-to-face, phone, and email are the preferred communication mediums for collaboration; and (3) collaborative information seeking activities are usually successful and more useful than individually seeking information. These results begin to highlight the important role that collaborative information seeking plays in daily work.", "fulltext": "INTRODUCTION\nInformation seeking and management practices are an\nintegral aspect of people's daily work. In organizational\nwork, information is vital for making decisions and\ncoordinating activities. Therefore, organizations have\ndeveloped a wide variety of processes and technologies to\nsupport their workers' information seeking activities. Much\nof this support has been for the individual information\nseeker; in most organizations, information seeking has been\ntraditionally viewed as an individual activity [1, 2].\nYet, collaboration is becoming an increasingly important\ncomponent of work in organizations. Multidisciplinary\nteams are a common feature of modern organizations [3,\n4]. To successfully accomplish their work, team members\nmust collaborate with each other efficiently and effectively.\nOne important aspect of the team's work is seeking\ninformation [5]. Yet, we have little understanding of\ncollaborative information seeking practices [6, 7].\nTherefore, to help team members work together effectively\nand to design information systems that support their work,\nwe must understand the collaborative information seeking\npractices of team members.\nTo examine collaborative information seeking (CIS)\npractices, we conducted a survey of academic researchers\nin a small technology-focused research university.\nResearchers have traditionally collaborated with each other\non research projects because of the often cross-disciplinary\nnature of the work. This collaboration has increased in\nrecent years as information and communication\ntechnologies have improved. Although the survey asked a\nvariety of questions, in this paper, we focus on three\nparticular areas of interest:\nWhat triggers are most likely to lead to CIS activities?\nWhen engaging in CIS, what media or channel of\ncommunication is most likely used to collaborate?\nHow successful are these CIS activities?\nIn a previous study, we identified three triggers that cause\nteam members to collaborate when seeking information.\nThese triggers are (1) lack of expertise (2) complex\ninformation need and (3) information not easily accessible\n[8]. In this study, we were interested in identifying which\nof these triggers researchers reported to be the most\nimportant reason for them to collaborate when seeking\ninformation. We also wanted to identify what were the\nprimary mechanisms of collaboration (e.g., e-mail, face-to-face\n, etc.). We were also interested in determining the\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. 
To copy otherwise,\nor republish, to post on servers or to redistribute to lists, requires prior\nspecific permission and/or a fee.\nGROUP'05, November 69, 2005, Sanibel Island, Florida, USA.\nCopyright 2005 ACM 1-59593-223-2/05/0011...$5.00.\n85\ndegree to which researchers found collaborative\ninformation seeking to be successful, particularly in\ncomparison to individual information seeking.\nCOLLABORATIVE INFORMATION SEEKING\nAlthough there is limited research on collaborative\ninformation seeking, researchers are beginning to explore\nthis phenomena in various domains [9].\nIn a study of two design teams, Poltrock et al [10] found\nthat each team had different communication and\ninformation seeking practices. Interestingly, they did not\nexamine an individual's role in the information seeking\nprocess but rather how team members actively worked\ntogether to identify information needs. They argue that an\nunderstanding of collaborative information retrieval will\nallow for the informed design of technologies meant to\nsupport such work, and will also allow teams to work more\neffectively with these sources of information. In a study of\ninformation behavior in a hierarchical work environment\na military command and control environment\nSonnenwald and Pierce [11] described information seeking\nas a dynamic activity in which \"individuals must work\ntogether to seek, synthesize and disseminate information.\"\nThey examined how team members maintained awareness\nof each other's information activities and how this\nawareness influenced their information sharing with each\nother. Finally, in a study of collaborative information\nseeking in the medical domain, Reddy and Dourish [12]\nargue that work rhythms play a role in healthcare\nproviders' collaborative information seeking practices.\nAlthough a few studies have examined collaborative\ninformation seeking in small group settings through\nethnographic field studies, there have been, to the best of\nour knowledge, no studies that have used surveys to gather\ndata on CIS from a larger population sample.\n\nMETHODS\nSeventy researchers at a small a small technology-focused\nresearch university participated in this study. The majority\nwere faculty researchers and a small percentage were\ngraduate research assistants. Most participants were from\nscience and technology disciplines.\n3.2 Materials and Procedures\nOne-hundred and fifty potential participants were emailed a\nrequest to participate, which included an email link to an\nonline survey. The response rate was 47%.\nThe survey included the following items:\n1. What causes you to work together when looking for\ninformation?\n(a) The information needed is complex.\n(b) The information needed requires a different\nexpertise.\n(c) The information is not immediately accessible.\n2. What medium are you most likely to use when\ncollaborating with your teammates to look for\ninformation?\n(a) Electronic forum; (b) Email; (c) Face-to-face;\n(d) Fax; (e) Instant message; (f) Telephone; (f) Web\nconferencing\n3. When collaborating with teammates to look for\ninformation, we usually find the information for which\nthe team is searching.\n4. Participating in collaborative information seeking is\neasier than individual information seeking.\n5. Participating in collaborative information seeking leads\nto more relevant information being found than when\nindividually seeking information.\n6. 
Participating in collaborative information seeking leads\nto information being found more quickly than when\nindividually seeking information.\nParticipants responded to each phrase under item 1 and to\nitems 3 6 on a scale ranging from 1 (strongly disagree) to\n10 (strongly agree) and to item 2 on a scale ranging from 1\n(not at all likely) to 10 (very likely). The survey also\nincluded free-text opportunities for the respondents to\nprovide more information about their answers, if they\nchose to do so.\n\nRESULTS & DISCUSSION\nIn order to determine the triggers that are most likely to\nlead to collaborative information seeking, the responses to\nquestionnaire items 1a, 1b, and 1c where considered. A\none-way within-subjects analyses of variance (ANOVA)\nwas computed with trigger serving as the independent\nvariable with three levels (complexity, expertise, and\naccessibility) and rating as the dependent variable. This\nANOVA was statistically significant F(2, 132) = 16.878\nand p < .001. Bonferroni's post hoc tests indicated that\nexpertise was rated significantly higher (M = 8.17) than\nboth complexity (M = 6.80) and accessibility (M = 6.73),\nwhile complexity did not significantly differ from\naccessibility.\nThe findings indicate that academic researchers will most\noften collaborate because they find the information requires\na different expertise than their own. Many academic\nresearch projects are multidisciplinary in nature and require\nparticular knowledge that a researcher may not have. As\none researcher stated, \"The basic reason is that frequently a\nwide range of expertise is needed and no one person can\npossibly have all the skills needed to be successful.\"\n86\nAlthough the complexity of the information need and\naccessibility of information could lead to collaboration, they\nare not viewed as strongly as expertise. In regards to\ninformation accessibility, one researcher points out,\n\"information is usually accessible; however, someone else\nwill likely understand it better.\" For this researcher the\ndifficulty was not in accessing the information but rather in\nunderstanding its relevance which may require different\nexpertise.\nDuring the CIS process, different researchers bring their own\nparticular expertise and perspective to the team. When a\nresearcher seeks information outside her domain of expertise,\nshe will often turn to another researcher for help. These\ndifferent expertises play an important role in the\ncollaborative information seeking activities of the research\nteam.\n4.2 Communication Mediums for CIS\nActivities\nIn order to examine the relationship between communication\nmediums, and to reduce the number of variables for\nsubsequent analysis, a principal component factor analyses\nwith a Varimax rotation was computed on the responses to\nquestionnaire item 2. A four factor solution was selected\nbecause all Eigen values were above 1, and a logical\ngrouping of sources emerged. We labeled the first factor\n\"traditional\", and it included: email, face-to-face, and\ntelephone. We labeled the second factor \"web\" and it\nincluded: instant messenger, web conferencing, and web\nsites. The third and fourth factor each included one item,\n\"electronic forum\" and \"fax\", and were, thus labeled\naccordingly. 
Factor scores were created by using the mean of\nall the items that loaded on a given factor, and these factor\nscores were used in subsequent analyses.\nIn order to identify the media that are most likely used for\ncollaborate information seeking, a one-way within-subjects\nanalyses of variance (ANOVA) was computed with medium\nfactor scores serving as the independent variable with four\nlevels (traditional, web, electronic forum and fax) and rating\nas the dependent variable. This ANOVA was statistically\nsignificant as F(3, 195) = 84.709 and p < .001. Bonferroni's\npost hoc tests indicated that traditional media (M= 8.10)\nsignificantly outscored all other types; both web media\n(M=3.64) and electronic forum (M=4.58) significantly\noutscored fax (M=2.70) but did not significantly differ one\nfrom the other.\nResearchers preferred traditional media for their\ncommunication. Within this category, we included e-mail.\nAlthough e-mail may not seem to fit in the same category as\nface-to-face and telephone, it has become such a ubiquitous\ncommunication medium that respondents viewed it as being\nsimilar to face-to-face and the telephone. Furthermore, email\nhas been in existence much longer than other types of\nelectronic mediums such as web conferences. People are\nmore comfortable and experienced with email and personal\nconversations, whether these conversations are in person or\non the phone. The other media were not as strongly\nembraced. For instance, we had anticipated that web-based\nmedia such as web-conferencing and instant messaging\nwould have higher rating than it did. One possible\nexplanation is \"newness\" of the technology. For instance,\ninstant messenger tools are still relatively new and have not\npermeated to all groups and ages. Furthermore, some of the\nweb-based media take time to set-up. Web conferences and\nweb sites require time and effort unlike picking up the phone\nto talk to someone. Interestingly, although not included as a\nmedium to rate, some participants added campus mail and\n\"snail mail\" as a medium for communication.\nWhether collaborators are physically co-located or\ngeographically dispersed, communication is an essential\ncomponent of collaborative information seeking. The\nresearchers orient towards the mediums that are familiar to\nthem.\n4.3 Success of Collaborative Information\nSeeking Activities\nIn order to address the question of whether collaborations are\nsuccessful when engaging in CIS, a dichotomous variable\nwas created for each success item (3 6), whereby a rating\nof 0 to 5 was considered \"disagree\", and a rating of 6 to 10\nwas considered \"agree\". We initially used a 10- point scale in\norder to be consistent with the rest of the survey questions.\nWe then made the decision to reduce the scale to a\ndichotomous variable in order to evaluate this question with\na test of statistical significance. Using this dichotomous\nvariable, a chi-square analysis was performed on the\nfrequencies for each success item. The results of these\nanalyses as well as the mean rating for each item, with mean\nrepresenting degree of agreement from 1 to 10 (10\nrepresenting \"strongly agree\"), is displayed in Table 1.\n\nTable 1. 
Means and Chi-Square for Success Factors

Success Factor | Mean | Agreement: Agree | Agreement: Disagree | Chi-Square
Usually find info | 8.0152 | 64 | 2 | X^2 = 58.242, p < .001
Easier than individual info seeking | 7.1061 | 50 | 16 | X^2 = 17.515, p < .001
Find more relevant info than individual info seeking | 7.3788 | 55 | 11 | X^2 = 29.333, p < .001
Quicker than individual info seeking | 6.9394 | 48 | 10 | X^2 = 24.897, p < .001

Success is often subjective and difficult to define, particularly with ill-defined tasks such as information seeking. Therefore, we asked four questions related to success to gain a better understanding of this important area. Most researchers agreed that when collaborating with colleagues to look for information, they usually found the needed information. They also thought that collaboratively seeking information was easier and led to more relevant information than individually seeking information. Collaborative information seeking allows researchers to rely on other colleagues for help and guidance, thereby allowing them to focus on their own area of expertise. This could be one possible reason why researchers strongly believe that CIS allows them to quickly find more relevant information when compared to individual information seeking. At the same time, one researcher provided a note of caution, stating that the success \"depends on your team of seekers.\" As in many collaborative activities, the success depends on how well the team of information seekers can work together when looking for information.
CONCLUSIONS
Collaborative information seeking is an important aspect of the work done by teams. The findings presented here raise issues that are important to consider when conceptualizing collaborative information seeking and how to best support this activity.
One important issue is how to support information seeking in geographically dispersed teams. Physically co-located team members can have face-to-face interaction. However, for \"virtual\" teams technical support becomes even more important because they do not have the advantages of face-to-face interaction. This technical support could include features that allow individuals to exchange ideas, or share searches while collaboratively searching for information [9].
For the next stages of this study, we plan on conducting a field study of academic research teams to better understand the actual interaction of team members during the collaborative information seeking process.

ACKNOWLEDGMENTS
We would like to thank the anonymous participants who answered the survey. This research was supported in part by Missouri Research Board grant 1734.

REFERENCES
1. Ellis, D. (1989). A behavioral model for information retrieval system design. Journal of Information Science, 15: p. 237-247.
2. Ellis, D. and M. Haugan. (1997) Modeling the Information Seeking Patterns of Engineers and Research Scientists in an Industrial Environment. The Journal of Documentation. 53(4): p. 384-403.
3. Hackman, R. ed. (1990) Groups that Work (and Those That Don't): Creating Conditions for Effective Teamwork. Jossey-Bass Publications: San Francisco.
4. Mankin, D., S. Cohen, and T. Bikson. (1996). Teams and Technology. Boston, MA: Harvard Business School Press.
5. Bruce, H., et al. (2002). A comparison of the collaborative information retrieval (CIR) behaviors of two design teams. 
in\nInformation Seeking In Context: The Fourth International\nConference on Information Needs, Seeking and Use in\nDifferent Contexts. Lisbon, Portugal.\n6. Sonnenwald, D.H. and L.A. Lievrouw. (1996). Collaboration\nduring the Design Process: A Case Study of Communication,\nInformation Behavior, and Project Performance. in Proc Int\nConf on Research in Information Needs, Seeking, and Use in\nDifferent Contexts. Tampere, Finland: London: Taylor\nGraham.\n7. Haythornthwaite, C., B. Wellman, and M. Mantei. (1995).\nWork Relationships and Media Use: A Social Network\nAnalysis. Group Decision and Negotiation. 4(3): p. 193-211.\n8. Reddy, M. (In submission) Collaborative Information\nSeeking: Supporting the work of multi-disciplinary patient\ncare teams. Journal of American Medical Informatics\nAssociation (JAMIA).\n9. Twidale, M. and D.M. Nichols. (1998). Designing Interfaces\nto Support Collaboration in Information Retrieval.\nInteracting with Computers. 10(2): p. 177-193.\n10. Poltrock, S., et al. (2003). Information Seeking and Sharing\nin Design Teams. in Proceedings of the 2003 International\nACM SIGGROUP Conference on Supporting Group Work.\n11. Sonnenwald, D.H. and L.G. Pierce. (2000). Information\nbehavior in dynamic group work contexts: interwoven\nsituational awareness, dense social networks and contested\ncollaboration in command and control. Information\nProcessing and Management.36: p. 461-479.\n12. Reddy, M. and P. Dourish. (2002). A Finger on the Pulse:\nTemporal Rhythms and Information Seeking in Medical\nCare. In Proc. of ACM Conf. on Computer Supported\nCooperative Work (CSCW'02). New Orleans, LA: New\nYork: ACM. p. 344-353.\n88", "keywords": "Academic Researchers;communication media;information seeking;Group Work;Survey;collaboration;Collaborative Information Seeking"} {"name": "200", "title": "Transactional Agent Model for Fault-Tolerant Object Systems", "abstract": "A transactional agent is a mobile agent which manipulates objects in multiple computers by autonomously finding a way to visit the computers. The transactional agent commits only if its commitment condition like atomicity is satisfied in presence of faults of computers. On leaving a computer , an agent creates a surrogate agent which holds objects manipulated. A surrogate can recreate a new incarnation of the agent if the agent itself is faulty. If a destination computer is faulty, the transactional agent finds another operational computer to visit. After visiting computers, a transactional agent makes a destination on commitment according to its commitment condition. We discuss design and implementation of the transactional agent which is tolerant of computer faults.", "fulltext": "INTRODUCTION\nA transaction manipulates multiple objects distributed in\ncomputers through methods. Objects are encapsulations of\ndata and methods for manipulating the data. A transaction\nis modeled to be a sequence of methods which satisfies\nthe ACID (atomicity, consistency, isolation, and dura-bility\n) properties [8, 9]. Huge number and various types\nof peer computers are interconnected in peer-to-peer (P2P)\nnetworks [3]. Personal computers easily get faulty not only\nby crash but also by hackers and intrusions. A mobile agent\ncan autonomously escape from faulty computers by moving\nto another operational computer. 
Mobile agents [5, 19] are programs which move to remote computers and then locally manipulate objects on the computers.
An ACID transaction initiates a subtransaction on each database server, which can be realized with mobile agents [16, 9, 13]. In this paper, a transactional agent is a mobile agent which autonomously decides in which order it visits computers in the presence of computer faults, and locally manipulates objects in the current computer with not only atomicity but also other types of commitment conditions like the at-least-one condition [6]. After manipulating all or some objects in the computers, an agent makes a decision on commit or abort. For example, an agent atomically commits only if all objects in the computers are successfully manipulated [4]. An agent commits under the at-least-one condition if objects in at least one of the computers are successfully manipulated. In addition, an agent negotiates with another agent which would like to manipulate the same object in a conflicting manner. Through the negotiation, each agent autonomously makes a decision on whether it holds or releases the objects [6, 14].
If an agent leaves a computer, objects locked by the agent are automatically released. Hence, once leaving a computer, an agent cannot abort. An agent therefore creates a surrogate agent on leaving a computer. A surrogate agent still holds locks on objects in a computer on behalf of the agent after the agent leaves.
A transactional agent autonomously finds another destination computer if a destination computer is faulty. An agent and a surrogate are faulty if their current computer is faulty. Some surrogate of the agent which exists on another computer recreates a new incarnation of the agent. Similarly, if a surrogate is faulty, another surrogate detects the fault and takes an action to recover from it. For example, if an agent takes the at-least-one commitment condition, a fault of a surrogate can be neglected as long as at least one surrogate is operational.
In Section 2, we present the system model. In Section 3, we discuss transactional agents. In Section 4, we discuss the fault-tolerant mechanism. In Sections 5 and 6, we discuss the implementation and evaluation of transactional agents.
SYSTEM MODEL
A system is composed of computers interconnected in reliable networks. Each computer is equipped with a class base (CB) where classes are stored and an object base (OB) which is a collection of persistent objects. A class is composed of attributes and methods. An object is an instantiation of a class, which is an encapsulation of data and methods. If the result obtained by performing a pair of methods op1 and op2 on an object depends on the computation order, op1 and op2 conflict with one another. For example, a pair of methods increment and reset conflict on a counter object. On the other hand, increment and decrement do not conflict, i.e., they are compatible.
A transaction is modeled to be a sequence of methods which satisfies the ACID properties [4]. Especially, a transaction can commit only if all the objects are successfully manipulated. If a method op1 from a transaction T1 is performed before a method op2 from another transaction T2 which conflicts with op1, every method op3 from T1 has to be performed before every method op4 from T2 conflicting with the method op3. This is the serializability property [2, 4].
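To make the conflict relation concrete, here is a minimal Python sketch of our own (an illustration, not part of the paper's system model): it encodes the counter example as a conflict table, in which increment and reset conflict while increment and decrement are compatible.

# Illustrative sketch (assumed example): a conflict table for a counter
# object and a simple compatibility check between two requested methods.
CONFLICTS = {
    ("increment", "reset"), ("reset", "increment"),
    ("decrement", "reset"), ("reset", "decrement"),
}

def conflict(op1: str, op2: str) -> bool:
    """Two methods conflict if the result depends on their execution order."""
    return (op1, op2) in CONFLICTS

def compatible(op1: str, op2: str) -> bool:
    """Compatible methods may be interleaved freely."""
    return not conflict(op1, op2)

# increment and decrement commute, so they are compatible;
# increment and reset do not commute, so they conflict.
assert compatible("increment", "decrement")
assert conflict("increment", "reset")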
Locking protocols [2, 4, 7] are used to realize the serializability of transactions. Here, a transaction locks an object before manipulating the object.
A mobile agent is a program which moves around computers and locally manipulates objects in each computer [5, 18, 19]. A mobile agent is composed of classes. A home computer home(c) of a class c is a computer where the class c is stored. For example, each class c is identified by a pair of the IP address of its home computer home(c) and a local path to the directory where the class c is stored. A home computer home(A) of a mobile agent A is a home computer of the class of the agent A.
TRANSACTIONAL AGENTS
A transactional agent is a mobile agent which satisfies the following properties:
1. autonomously decides on which computer to visit.
2. manipulates objects on multiple computers.
3. commits only if some commitment condition of the agent is satisfied, otherwise aborts.
For simplicity, the term agent means a transactional agent in this paper. Target objects are objects to be manipulated by an agent. Target computers have the target objects. An agent A is composed of a routing agent RC(A), a commitment agent CC(A), and manipulation agents MC(A, D1), ..., MC(A, Dn), where Di stands for a target computer of the agent A. Here, let Dom(A) be the set of target computers D1, ..., Dn of an agent A. First, an agent A on a current computer has to move to a computer in Dom(A). A computer Dj to which an agent A on Di moves is a destination computer. An agent A has to autonomously make a decision on which computer to visit. In the routing agent RC(A), a destination computer is selected. Then, the agent A moves to the destination computer. Here, an agent first finds a candidate set of possible destination computers. Then, the agent selects one target computer among the candidate computers and moves to that computer.
Secondly, a transactional agent A manipulates objects in a current computer D. The agent A initiates a manipulation agent MC(A, D), loaded from the home computer, for manipulating objects in the current computer D. If an object base is realized in a relational database system [11], objects are manipulated by issuing SQL commands in MC(A, D).
Lastly, a transactional agent makes a decision on whether it can commit or abort after visiting the target computers. A traditional transaction [2] atomically commits only if objects in all the target computers are successfully manipulated. In this paper, we consider other types of commitment conditions [6]. For example, in the at-least-one commitment, a transaction can commit only if objects in at least one target computer are successfully manipulated.
3.2 Routing agent
A transactional agent A locally manipulates objects in a computer Di through the manipulation agent MC(A, Di) and then outputs intermediate objects OUT(A, Di). In the meanwhile, the agent A visits another computer Dj. Here, objects in Dj are manipulated through the manipulation agent MC(A, Dj) by using the intermediate objects In(A, Dj) (= OUT(A, Di)). Thus, the manipulation classes are related by an input-output relation. Here, Di ->x Dj shows that the manipulation agent MC(A, Di) outputs an intermediate object x which is used by MC(A, Dj). If Di ->x Dj, the agent A has to visit Di before Dj and the intermediate object x has to be delivered to Dj.
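As an aside, the precedence constraints implied by this input-output relation can be represented very simply. The following Python sketch is our own illustration (the computer names and objects are hypothetical placeholders, and the paper formalizes this idea as the map Map(A) described below): given (producer, object, consumer) triples, the computers that may be visited next are those all of whose producers have already been visited.

from collections import defaultdict

# Hypothetical input-output relation as (producer, object, consumer) triples.
IO_RELATION = [("D1", "w", "D3"), ("D2", "x", "D4"),
               ("D3", "y", "D4"), ("D4", "z", "D5")]

def next_candidates(visited: set[str], relation=IO_RELATION) -> set[str]:
    """Computers whose producing predecessors have all been visited."""
    preds = defaultdict(set)
    computers = set()
    for producer, _obj, consumer in relation:
        preds[consumer].add(producer)
        computers.update((producer, consumer))
    return {d for d in computers
            if d not in visited and preds[d] <= visited}

print(next_candidates(set()))    # {'D1', 'D2'}: no incoming edges yet
print(next_candidates({"D1"}))   # {'D2', 'D3'}: D3 becomes reachable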
The input-output relation is shown in an input-output graph (Figure 1).
Figure 1: Input-output graph (computer nodes D1-D5 and temporary-object nodes w, x, y, z).
There are computer and object nodes. Directed edges Di -> x and x -> Di show that the manipulation agent MC(A, Di) outputs and inputs an object x, respectively. In Figure 1, the agent A outputs an intermediate object w in D1. The agent A uses w in D3, D4, and D5. This means the agent A is required to visit D3, D4, and D5 after D1.
From the input-output graph, a transactional agent A decides in which order it visits the computers. A directed acyclic graph (DAG) Map(A), named a map, is created from the input-output graph [Figure 2]. Here, a node D shows a computer D with a manipulation agent MC(A, D). A directed edge D1 -> D2 shows that a computer D2 is required to be manipulated after D1. D1 => D2 if and only if (iff) D1 -> D2 or D1 -> D3 => D2 for some computer D3. D1 and D2 are independent (D1 || D2) if neither D1 => D2 nor D2 => D1. Here, a transactional agent A can visit the computers D1 and D2 in any order, even in parallel. Figure 2 shows an example of a map Map(A) obtained from the input-output graph of Figure 1. Here, an agent A is required to visit a computer D3 after D1, D4 after D2 and D3, and D5 after D4. On the other hand, an agent A can visit D1 and D2 in any order, even in parallel.
Figure 2: Map over the computers D1-D5.
In Figure 1, the intermediate object w has to be delivered to D3, D4, and D5. There are the following ways to bring an intermediate object x obtained in Di to Dj:
1. A transactional agent A carries the intermediate object x to Dj.
2. x is transferred from Di to Dj before A arrives at Dj.
3. x is transferred from Di to Dj after A arrives at Dj.
A routing agent RC(A) of a transactional agent A with a map Map(A) moves around the computers [Figure 3]. First, a collection I of computers which do not have any incoming edge is found in Map(A). For example, I = {D1, D2} in Figure 2. One computer Di is selected in I so as to satisfy some condition, e.g., the Di nearest to the current computer is selected. For example, an agent takes the computer D1 in Figure 2. The agent A moves to Di. Here, a manipulation agent MC(A, Di) is loaded to Di from the home computer. After manipulating objects in Di, Di is removed from Map(A). Another destination Dj is selected and A moves to Dj.
Initially, a routing agent RC(A) of the agent A is loaded and started on a computer. That computer is the base computer base(A) of the agent A. An agent A leaves the base computer for a computer Di. Here, Di is the current computer current(A) of A. If the agent A invokes a method t of a class c on Di, the class c is searched for as follows:
1. The cache of the current computer Di is first searched for the class c. If c is found in the cache, the method t in the cache is invoked.
2. If not, the class base (CBi) of Di is locally searched. If found, the class c in CBi is taken to invoke t.
3. Otherwise, the class c is transferred from the home computer home(c) into Di.
A history H(A) shows the sequence of computers which an agent A has visited.
Figure 3: Mobile agent (a routing agent with its map moving between computers, each holding a class base CB and a manipulation agent).
3.3 Manipulation agent
A manipulation agent is composed of not only application-specific classes but also library classes like JDBC [17] and Java classes [18]. Each computer is assumed to support a platform to perform a mobile agent on an object base (OB). A platform includes a cache and a class base (CB). The routing, manipulation, and commitment agents of a transactional agent A are stored in the class base (CB) of the home computer home(A). If an agent A invokes a method t of a class c in a computer Di, the class c is loaded from the home computer home(c) to the cache in Di. Then, the method t of the class c is performed in Di. If a method u of another class d is invoked in the method t, the class d is loaded from the home computer home(d) as well as the class c. Meanwhile, if another agent B invokes a method t of the class c in Di, the class c in the cache is used to invoke the method t without loading the class c. Thus, if classes are cached in a computer Di, methods in the classes are locally invoked in Di without any communication. Otherwise, it takes a longer time to invoke methods since classes with the methods are transferred from the home computers over the network. Here, the class c is loaded, i.e., cached, to Di, and the method t of the class c is performed on Di. If another agent B comes to Di after A has left Di, B can make use of the class c in the cache.
3.4 Commitment agent
If a transactional agent A finishes manipulating objects in each computer, the following commitment condition is checked by the commitment agent CC(A):
1. Atomic commitment: an agent is successfully performed on all the computers in the domain Dom(A), i.e., the all-or-nothing principle used in the traditional two-phase commitment protocol [4, 15].
2. Majority commitment: an agent is successfully performed on more than half of the computers in Dom(A).
3. At-least-one commitment: an agent is successfully performed on at least one computer in Dom(A).
4. (n, r) commitment: an agent is successfully performed on at least r out of n computers (r <= n) in Dom(A).
5. Application specific commitment: a condition specified by the application is satisfied.
3.5 Resolution of confliction
Suppose an agent A moves to a computer Dj from another computer Di. The agent A cannot be performed on Dj if there is an agent or surrogate B conflicting with A. Here, the agent A can take one of the following ways:
1. Wait: The agent A in the computer Di waits until the agent A can land at the computer Dj.
2. Escape: The agent A finds another computer Dk which has objects to be manipulated before Dj.
3. Negotiate: The agent A negotiates with the agent B in Dj. After the negotiation, B releases the objects or aborts.
4. Abort: The agent A aborts.
Deadlock among agents may occur. If the timer expires, the agent A takes the following way:
1. The agent A retreats to a computer Dj in the history H(A). All surrogates preceding Dj are aborted.
2. Then, the surrogate agent Aj on Dj recreates a new incarnation of the agent A.
The agent A finds another destination computer Dh.
The surrogate Aj to which the agent A retreats plays the role of a checkpoint [12].
Suppose a surrogate agent B holds an object in a computer Dj. An agent A would like to manipulate the object but conflicts with B in Dj. The surrogate B makes the following decision:
1. Atomic commitment: The agent A waits until the surrogate B finishes.
2. At-least-one commitment: If the surrogate B knows that at least one sibling surrogate of B is committable, B releases the object and aborts after informing the other sibling surrogates of this abort.
3. Majority commitment: If the surrogate B knows that more than half of the sibling surrogates are committable, B releases the object and aborts after informing the other surrogates.
4. (n, r) commitment: If the surrogate B knows that at least r sibling surrogate agents are committable, the surrogate B releases the object and aborts.
FAULT-TOLERANT AGENT
We assume computers may stop by fault and networks are reliable. A transactional agent is faulty only if the current computer of the agent is faulty. Suppose an agent A finishes manipulating objects on a computer Di. The agent A selects one computer Dj from the map Map(A, Di). The agent A detects by a timeout mechanism that Dj is faulty. The agent A tries to find another destination computer Dk [Figure 4]. If found, A moves to Dk as presented here. If A cannot find another destination computer in Map(A, Di), the agent A backs to the preceding computer Dk [Figure 5]. Di is removed from Map(A, Dk). Then, the agent in Dk tries to find another destination computer in Map(A, Dk).
Figure 4: Forwarding recovery.
Figure 5: Backwarding recovery.
An agent A leaves its surrogate agent Ai on a computer Di. The surrogate Ai holds objects even after the agent A leaves Di. An agent A and a surrogate agent Ai stop if their current computers are faulty. First, suppose an agent A stops on the current computer Dj. Suppose that the agent A came from Di to Dj. The surrogate Ai on Di detects that the agent A has stopped on Dj. Here, Ai takes one of the following actions:
1. Find a succeeding surrogate Ak of Ai and skip Aj.
2. Recreate a new incarnation of the agent A.
If the commitment condition is not atomic, the surrogate Ai takes the first one, i.e., skips the fault of Aj. For the atomic condition, Ai recreates a new incarnation of the agent A. The agent A takes another destination computer Dk in Map(A, Di). If found, the agent A moves to Dk. Otherwise, A waits until the computer Di is recovered or backs from Dj to the preceding computer.
A surrogate Ai on a computer Di may be faulty as well. A preceding surrogate Aj on Dj detects the fault of Ai. Suppose a surrogate agent Ai of A exists on Di. Ai+1 and Ai-1 show the succeeding and preceding surrogate agents of Ai, respectively [Figure 6]. Ai periodically sends an enquiry message AYL (are you alive) to Ai+1 and Ai-1 to check whether Ai+1 and Ai-1 are alive. On receipt of the AYL message, a surrogate sends back a response message IAL (I am alive). Thus, a faulty surrogate is detected by the succeeding and preceding surrogates with a timeout mechanism.
If Ai detects the stop of Ai+1, Ai does the following:
1. A new incarnation of the agent A is recreated on Di.
2. From the map Map(A, Di), a new destination D different from Di-1 is detected.
Figure 6: Fault detection (surrogates Ai-1, Ai, and Ai+1 exchanging AYL and IAL messages).
3. If detected, the agent A moves to D. Otherwise, Ai informs Ai-1 of abort and then aborts. Ai-1 does the procedure from step 1.
If the surrogate Ai detects the stop of the preceding surrogate Ai-1 or receives an abort message for Ai-1, Ai informs the succeeding surrogate Ai+1 of abort. On receipt of the abort message from Ai, Ai+1 forwards the abort message to Ai+2 and then aborts. Thus, abort messages are eventually forwarded up to the agent A. In Figure 7, suppose A2 stops. A pair of surrogates A1 and A3 detect the stop of A2. A1 creates a new incarnation A' of the agent A. The obsolete incarnation A is still moving to D6. The succeeding surrogate A3 of A2 sends an abort message to A4. If the abort message catches up with the agent A, A can be aborted. Otherwise, the obsolete incarnation A cannot stop. Thus, there might exist multiple incarnations of an agent.
Figure 7: Incarnations of an agent (surrogates A0-A4 on computers D1-D5, the obsolete agent A moving to D6, and the new incarnation A').
On receipt of an AYL message from the preceding surrogate Ai-1, Ai sends back an IAL message with the address information of the surrogates that Ai knows, so that this information propagates back to the preceding surrogates. If the surrogate Ai finds Ai-1 to be faulty, Ai sends an abort message not only to Ai+1 but also to a surrogate whose address Ai knows and which is nearest to the current computer of A. By this method, an abort message can more easily catch up with the agent, so that the agent can be aborted.
IMPLEMENTATION
We discuss how to implement transactional agents in Aglets. A transactional agent A is composed of routing, manipulation, and commitment subagents. A routing agent RC(A) with a map Map(A) is transferred from one computer to another. When an agent A, i.e., its routing agent RC(A), arrives at a computer Di, a manipulation agent MC(A, Di) is created by loading the manipulation class.
An object base (OB) is realized in a relational database system, Oracle [11]. A transactional agent manipulates table objects by issuing SQL commands, i.e., select, insert, delete, and update, in a current computer Di. The computation of each agent A on a computer Di is realized as a local transaction on the database system. If the agent A leaves Di, the transaction for A commits or aborts. That is, objects manipulated by A are released. Even if the agent A leaves Di, the objects manipulated by A are required to still be held because A may abort after leaving Di. If the objects are released, the agent is unrecoverable. Therefore, a surrogate agent is created on Di. The surrogate agent is composed of a manipulation agent MC(A, Di) and an object agent OBAi. OBAi behaves as follows:
1. On arrival at a computer Di, the routing agent RC(A) of an agent A initiates a manipulation agent MC(A, Di) and an object agent OBAi on Di, i.e., the MC(A, Di) and OBA classes are loaded. OBAi initiates a transaction on an object base OBi.
2. If MC(A, Di) issues a method for manipulating objects, OBAi issues SQL commands to the database system in Di.
3. If the agent A finishes, A leaves Di.
However, OBAi is still operational and holds the objects in Di.
4. OBAi commits or aborts when the agent A sends a commit or abort request to the surrogate Ai, respectively.
An object agent OBAi stays on a computer Di while holding objects even if the agent A leaves Di. OBAi is a local transaction on an object base OBi. On completion of the agent A, OBAi and MC(A, Di) are terminated.
Figure 8: Object agent (OBA): RC(A) and MC(A, Di) interact with OBAi, which issues SQL to the object base OBi through the XA interface.
An OBA class can be loaded to a computer with any type of database system. If a transactional agent comes to Di from another home computer, an OBA class is loaded to Di from that home computer. Thus, OBA instances are accumulated in the cache. In order to resolve this problem, an OBA class is loaded as follows:
1. If the OBA class is not cached in the current computer, the OBA class is loaded from home(OBA).
2. If the OBA class could not be loaded from home(OBA), an OBA class in the home computer of the agent is loaded to the computer.
The routing agent RC(A) leaves a computer Di if the manipulation agent MC(A, Di) finishes manipulating objects. MC(A, Di) recreates a new incarnation of the routing agent RC(A) if the agent A stops due to a computer fault.
A transactional agent A can commit if all or some of the surrogates commit, depending on the commitment condition. For example, a transactional agent commits if all the surrogate agents successfully exist. Communication among an agent and its surrogate agents is realized by using the XA interface [20], which supports the two-phase commitment protocol [15] [Figure 8]. Each surrogate agent issues a prepare request to a computer on receipt of a prepare message from A. If prepare is successfully performed, the surrogate agent sends a prepared message to A. Here, the surrogate agent is committable. Otherwise, the surrogate agent aborts after sending aborted to A. The agent A receives responses from the surrogate agents after sending prepare to the surrogates. On receipt of the responses from the surrogate agents, the agent A makes a decision on commit or abort based on the commitment condition. For example, if the atomic condition holds, A sends commit only if prepared is received from every surrogate. The agent A sends abort to all committable agents if an aborted message is received from at least one surrogate. On receipt of abort, a committable surrogate aborts. In the at-least-one commitment condition, A sends commit to all committable surrogates only if prepared is received from at least one surrogate.
Next, we discuss how to support robustness against faults of computers. Suppose a surrogate agent Ai of a transactional agent A stops after sending prepared. Here, Ai is committable. On recovery of the committable surrogate Ai, Ai unilaterally commits if the at-least-one commitment condition is used. In the atomic condition, Ai asks the other surrogates whether they have committed.
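The commit decision described above can be summarized in a short sketch. This is our own illustration of the voting logic under the stated commitment conditions, not the actual Aglets/XA implementation:

def decide_commit(votes: dict[str, str], condition: str, r: int = 1) -> bool:
    """Decide commit (True) or abort (False) from surrogate votes.

    votes maps a computer name to its response, 'prepared' or 'aborted'.
    Supported conditions follow the text: 'atomic', 'majority',
    'at-least-one', and '(n, r)' (at least r committable surrogates).
    """
    n = len(votes)
    prepared = sum(1 for v in votes.values() if v == "prepared")
    if condition == "atomic":
        return prepared == n
    if condition == "majority":
        return prepared > n / 2
    if condition == "at-least-one":
        return prepared >= 1
    if condition == "(n, r)":
        return prepared >= r
    raise ValueError(f"unknown commitment condition: {condition}")

# Example: one surrogate failed to prepare.
votes = {"D1": "prepared", "D2": "aborted", "D3": "prepared"}
print(decide_commit(votes, "atomic"))        # False: every surrogate must be prepared
print(decide_commit(votes, "at-least-one"))  # True
print(decide_commit(votes, "(n, r)", r=2))   # True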
Suppose Ai is abortable, i.e., faulty before receiving prepared. On recovery, Ai unilaterally aborts.
EVALUATION
Figure 9: Evaluation model (a client creates the routing agent at the home computer; the agent moves among the database servers D1, D2, and D3, where manipulation agents M1-M3 and object agents manipulate the object bases and return results).
We evaluate the transactional agent which is implemented in Aglets. In the evaluation, there are three server computers D1, D2, and D3. A transactional agent is created in a computer C by loading classes from the home computer h. The servers D1, D2, and D3 are realized on personal computers (Pentium 3) with Oracle database systems, which are interconnected in a 1Gbps Ethernet.
First, a transactional agent A is initiated in a base computer C. The agent A finds in which order D1, D2, and D3 are to be visited. Here, the agent A visits D1, D2, and D3 in this order as shown in Figure 9. On arrival of the agent A at Di, the manipulation agent MC(A, Di) and object agent OBAi are loaded to Di [Figure 9].
We consider the following types of transactional agents:
A. The manipulation agent MC(A, D1) derives an intermediate object I from the object base. The object bases in D2 and D3 are updated by using the object I, i.e., objects in I are added to the object base.
B. MC(A, D1) and MC(A, D2) derive objects into intermediate objects I1 and I2, respectively. Then, the object base in D3 is manipulated by using I1 and I2.
There are three ways to deliver the derived intermediate objects to another computer:
1. The transactional agent A carries the intermediate objects to a destination computer Dj from Di.
2. After the agent A arrives at a computer Dj, the agent A requests Di to send the intermediate objects.
3. The agent A transfers the intermediate object I to a computer Dj before leaving Di.
The total response time of a transactional agent is measured against the number of intermediate objects, i.e., the number of tuples derived in the computers. Figures 10 and 11 show the response time for the types of transactional agents A and B, respectively. The second and third ways to deliver intermediate objects to destination computers imply shorter response time than the first way.
Figure 10: Response A
Figure 11: Response B
CONCLUDING REMARKS
The authors discussed a transactional agent model to manipulate objects in multiple computers with various types of commitment constraints in the presence of computer faults. A transactional agent autonomously finds a destination computer, moves to the computer, and then locally manipulates objects. We discussed how to implement transactional agents in Aglets and Oracle. We evaluated the transactional agent in terms of response time.
REFERENCES
[1] American National Standards Institute. The Database Language SQL, 1986.
[2] P. A. Bernstein, V. Hadzilacos, and N. Goodman. Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.
[3] L. Gong. JXTA: A Network Programming Environment, pages 88-95. IEEE Internet Computing, 2001.
[4] J. Gray and A. Reuter. Transaction Processing: Concepts and Techniques. Morgan Kaufmann, 1993.
[5] IBM Corporation. Aglets Software Development Kit Home. http://www.trl.ibm.com/aglets/.
[6] T. Komiya, T. Enokido, and M. Takizawa. Mobile agent model for transaction processing on distributed objects. Information Sciences, 154:23-38, 2003.
[7] F. H. Korth.
Locking primitives in a database system.\nJournal of ACM, 30(1):5579, 1989.\n[8] N. A. Lynch, M. Merritt, A. F. W. Weihl, and R. R.\nYager. Atomic Transactions. Morgan Kaufmann, 1994.\n[9] K. Nagi. Transactional Agents : Towards a Robust\nMulti-Agent System. Springer-Verlag, 2001.\n[10] A. Omicini, F. Zambonelli, M. Klusch, and\nR. Tolksdorf. Coordination of Internet Agents.\nSpringer-Verlag, 2001.\n[11] Oracle Corporation. Oracle8i Concepts Vol. 1 Release\n8.1.5, 1999.\n[12] R. S. Pamula and P. K. Srimani. Checkpointing\nstrategies for database systems. Proc. of the 15th\nAnnual Conf. on Computer Science, IEEE Computer\nSociety, pages 8897, 1987.\n[13] S. Pleisch. State of the Art of Mobile Agent\nComputing - Security, Fault Tolerance, and\nTransaction Support. IBM Corporation, 1999.\n[14] M. Shiraishi, T. Enokido, and M. Takizawa.\nFault-tolerant mobile agent in distributed objects\nsystems. Proc. of the 9th IEEE International\nWorkshop on Future Trends of Distributed Computing\nSystems (FTDCS 2003), pages 145151, 2003.\n[15] D. Skeen. Nonblocking commitment protocols. Proc.\nof ACM SIGMOD, pages 133147, 1982.\n[16] A. D. Stefano, L. L. Bello, and C. Santoro. A\ndistributed heterogeneous database system based on\nmobile agents. Proc. of the 7th Workshop on Enabling\nTechnologies (WETICE'98), IEEE Computer Society,\npages 223229, 1998.\n[17] Sun Microsystems Inc. JDBC Data Access API.\nhttp://java.sun.com/products/jdbc/.\n[18] Sun Microsystems Inc. The Source for Java (TM)\nTechnology. http://java.sun.com/.\n[19] J. E. White. Telescript Technology : The Foundation\nfor the Electronic Marketplace. General Magic Inc.,\n1994.\n[20] X/Open Company Ltd. X/Open CAE Specification\nDistributed Transaction Processing: The XA\nSpecification, 1991.\n1138\n", "keywords": "fault-tolerant agent;transactional agent;Transaction;ACID transaction;surrogate agent;Mobile agent;Fault-Tolerant;fault-tolerant;computer fault;mobile agent;transaction processing"} {"name": "201", "title": "Translating Unknown Cross-Lingual Queries in Digital Libraries Using a Web-based Approach", "abstract": "Users' cross-lingual queries to a digital library system might be short and not included in a common translation dictionary (unknown terms). In this paper, we investigate the feasibility of exploiting the Web as the corpus source to translate unknown query terms for cross-language information retrieval (CLIR) in digital libraries. We propose a Web-based term translation approach to determine effective translations for unknown query terms by mining bilingual search-result pages obtained from a real Web search engine. This approach can enhance the construction of a domain-specific bilingual lexicon and benefit CLIR services in a digital library that only has monolingual document collections. Very promising results have been obtained in generating effective translation equivalents for many unknown terms, including proper nouns, technical terms and Web query terms.", "fulltext": "INTRODUCTION\nWith the development of digital library technologies, large\namounts of library content and cultural heritage material are being\ndigitized all over the world. As digital library systems become\ncommonly constructed and digitized content becomes widely\naccessible on the Web, digital libraries that cross language and\nregional boundaries will be in increasingly high demand globally.\nUnfortunately, most of existing digital library systems only\nprovide monolingual content and search support in certain target\nlanguages. 
To facilitate a cross-language information retrieval\n(CLIR) service in digital library systems, it is important to\ndevelop a powerful query translation engine. This must be able to\nautomatically translate users' queries from multiple source\nlanguages to the target languages that the systems accept.\nConventional approaches to CLIR incorporate parallel texts [16]\nas the corpus. These texts contain bilingual sentences, from which\nword or phrase translations can be extracted with appropriate\nsentence alignment methods [7]. The basic assumption of such an\napproach is that queries may be long so query expansion methods\ncan be used to enrich query terms not covered in parallel texts.\nHowever, this approach presents some fundamental difficulties for\ndigital libraries that wish to support practical CLIR services. First,\nsince most existing digital libraries contain only monolingual text\ncollections, there is no bilingual corpus for cross-lingual training.\nSecond, real queries are often short, diverse and dynamic so that\nonly a subset of translations can be extracted through the corpora\nin limited domains. How to efficiently construct a domain-specific\ntranslation dictionary for each text collection has become a major\nchallenge for practical CLIR services in digital libraries. In this\npaper, we propose a Web-based approach to deal with this\nproblem. We intend to exploit the Web as the corpus to find\neffective translations automatically for query terms not included\nin a dictionary (unknown terms). Besides, to speedup online\ntranslation process of unknown terms, we extract possible key\nterms from the document set in digital libraries and try to obtain\ntheir translations in advance.\nFor some language pairs, such as Chinese and English, as well as\nJapanese and English, the Web offers rich texts in a mixture of\nlanguages. Many of them contain bilingual translations of proper\nnouns, such as company names and personal names. We want to\nrealize if this positive characteristic makes it possible to\nautomatically extract bilingual translations of a large number of\nquery terms. Real search engines, such as Google\nand AltaVista\n,\nallow us to search English terms for pages in a certain language,\ne.g. Chinese or Japanese. This has motivated us to develop the\nproposed approach for mining bilingual search-result pages,\nwhich are normally returned in a long, ordered list of snippets of\nsummaries to help users locate interesting documents. The\nproposed approach uses the bilingual search-result pages of\nunknown queries as the corpus for extracting translations by\nutilizing the following useful techniques: (1) Term extraction\nmethods that extract translation candidates with correct lexical\nboundaries. (2) Term translation methods that determine correct\ntranslations based on co-occurrence and context similarity\nanalysis.\nSeveral preliminary experiments have been conducted to test the\nperformance of the proposed approach. For example, very\npromising translation accuracy has been obtained in generating\neffective translation equivalents for many unknown terms,\nincluding proper nouns, technical terms and Web query terms.\nAlso, it has been shown that the approach can enhance bilingual\nlexicon construction in a very efficient manner and thereby benefit\nCLIR services in digital libraries that only have monolingual\ndocument collections. In Section 2 of this paper, we examine the\npossibility of using search-result pages for term translation. 
The\ntechnical details of the proposed approach, including the term\nextraction and term translation methods, are presented with some\nexperiments in Sections 3 and 4 respectively. An application of\nthe proposed approach to bilingual lexicon construction is\ndescribed in Section 5. Finally, in Section 6 we list our\nconclusions.\n\nOBSERVATIONS AND THE PROPOSED APPROACH\nA large number of Web pages contain a mixture of multiple\nlanguages. For example, Chinese pages on the Web consist of rich\ntexts in a mixture of Chinese (main language) and English\n(auxiliary language), many of which contain translations of proper\nnouns and foreign terms. In fact, in the Chinese writing style, the\nfirst time a foreign term appears in the text, we might also write\nits original word, e.g., \"\" (Yahoo). In our research, we are\nseeking to determine if the percentage of correct translations for\nreal queries is high enough in the top search-result pages. If this is\nthe case, search-result-based methods can be useful in alleviating\nthe difficulty of term translation. According to our observations,\nmany query terms are very likely to appear simultaneously with\ntheir translations in search-result pages. Figure 1 illustrates the\nsearch-result page of the English query \"National Palace\nMuseum\", which was submitted to Google to search Chinese\npages. Many relevant results were obtained, including both the\nquery itself and its Chinese aliases, such as \"\"\n(National Palace Museum), \"\" (an abbreviation of National\nPalace Museum) and \"\" (Palace Museum), which\nmight not be covered in general-purpose translation dictionaries.\n\nFigure 1. An illustration showing translation equivalents, such\nas National Palace Museum/\"\" (\"\"),\nwhich co-occur in search results returned from Google.\nAlthough search-result pages might contain translations, the\ndifficulties in developing a high-performance search-result-based\nterm translation approach still remain. For example, it is not\nstraightforward to extract translation candidates with correct\nlexical boundaries and minimum noisy terms from a text. It is also\nchallenging to find correct translations for each unknown term\nwithin an acceptable number of search-result pages and an\nacceptable amount of network access time. To deal with these\nproblems, the proposed approach contains three major modules:\nsearch-result collection, term extraction and term translation, as\nshown in Figure 2 (a). In the search-result collection module, a\ngiven source query (unknown term) is submitted to a real-world\nsearch engine to collect top search-result pages. In the term\nextraction module, translation candidates are extracted from the\ncollected search-result pages using the term extraction method.\nFinally, the term translation module is used to determine the most\npromising translations based on the similarity estimation between\nsource queries and target translations.\nIn fact there are two scenarios to which the proposed approach can\nbe applied. Except online translation of unknown queries, another\napplication is offline translation of key terms as in Figure 2 (b).\nTo reduce unnecessary online translation processes, the proposed\napproach can be used to augment the bilingual lexicon via\ntranslating key terms extracted from the document set in a digital\nlibrary. These extracted key terms are likely to be similar to terms\nthat users may use in real user queries. 
The proposed approach can be applied to those unknown key terms to obtain their translations with an offline batch process (the extracted translations might be edited by indexers). Furthermore, the constructed bilingual lexicon can be incrementally updated with the input of unknown queries from users and the performing of online translation processes. To facilitate the above scenarios, the proposed term extraction and term translation techniques are required, which will be further described in the following sections.
Figure 2. (a) An abstract diagram showing the concept of the proposed approach for translating an unknown query. (b) Two application scenarios of the proposed Web-based term translation approach: online translation of unknown queries and offline translation of key terms extracted from the document set.
TERM EXTRACTION
The first challenge of the proposed approach is how to efficiently and effectively extract translation candidates for an unknown source term from a set of search-result pages. Other challenging issues include whether all possible translations can be extracted and whether their lexical boundaries can be correctly segmented. Conventionally, there are two types of term extraction methods that can be employed. The first is the language-dependent linguistics-based method that relies on lexical analysis, word segmentation and syntactic analysis to extract named entities from documents. The second type is the language-independent statistics-based method that extracts significant lexical patterns without length limitation, such as the local maxima method [19] and the PAT-tree-based method [3]. Considering the diverse applications in digital library and Web environments, we have adopted the second approach. Our proposed term extraction method, i.e., the PAT-tree-based local maxima method, is a hybrid of the local maxima method [19] and the PAT-tree-based method [3], which has been found more efficient and effective. First, we construct a PAT tree data structure for the corpus, in this case, a set of search-result pages retrieved using the source term as the query. (The same term extraction method will be applied to extract key terms from digital libraries in Section 5, where the corpus is the documents in digital libraries.) By utilizing the PAT tree, we can efficiently calculate the association measurement of every character or word n-gram in the corpus and apply the local maxima algorithm to extract the terms. The association measurement is determined not only by the symmetric conditional probability [19] but also by the context independency ratio [3] of the n-gram. We detail the proposed method in the following subsections.
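Before presenting the measures, it may help to see what statistics they consume. The following Python sketch is our own simplified illustration: it gathers, for every character n-gram in a set of snippets, the frequency freq and the numbers LC and RC of unique left and right adjacent characters used in Section 3.1. The paper obtains the same statistics from a PAT tree rather than the plain dictionaries used here.

from collections import defaultdict

def ngram_statistics(snippets, max_n=6):
    """Collect freq, LC, RC for every character n-gram (length 1..max_n)."""
    freq = defaultdict(int)
    left, right = defaultdict(set), defaultdict(set)
    for text in snippets:
        for n in range(1, max_n + 1):
            for i in range(len(text) - n + 1):
                gram = text[i:i + n]
                freq[gram] += 1
                if i > 0:
                    left[gram].add(text[i - 1])
                if i + n < len(text):
                    right[gram].add(text[i + n])
    # LC/RC fall back to the n-gram frequency when no adjacent character exists,
    # following the definition of the context independency ratio below.
    LC = {g: len(left[g]) or freq[g] for g in freq}
    RC = {g: len(right[g]) or freq[g] for g in freq}
    return freq, LC, RC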
3.1 Association Measurement
The proposed association measurement, called SCPCD, combines the symmetric conditional probability (SCP) [19] with the concept of context dependency (CD) [3]. SCP is the association estimation of the correlation between an n-gram's composed sub n-grams, which is defined below:

SCP(w_1 \ldots w_n) = \frac{p(w_1 \ldots w_n)^2}{\frac{1}{n-1} \sum_{i=1}^{n-1} p(w_1 \ldots w_i) \, p(w_{i+1} \ldots w_n)} = \frac{freq(w_1 \ldots w_n)^2}{\frac{1}{n-1} \sum_{i=1}^{n-1} freq(w_1 \ldots w_i) \, freq(w_{i+1} \ldots w_n)}    (1)

where w_1...w_n is the n-gram to be estimated, p(w_1...w_n) is the probability of the occurrence of the n-gram w_1...w_n, and freq(w_1...w_n) is the frequency of the n-gram.
To a certain degree, SCP can measure the cohesion holding the words together within a word n-gram, but it cannot determine the lexical boundaries of the n-gram. An n-gram with complete lexical boundaries implies that it tends to have free association with other n-grams appearing in the same context. Therefore, to further ensure that an n-gram has complete lexical boundaries, the concept of context dependency is introduced. Moreover, we consolidate the concept with SCP to form one association measurement. In order to achieve this goal, a refined measure, the context independency ratio - which is a ratio value between 0 and 1 - is extended from [3]. It is defined as follows:

CD(w_1 \ldots w_n) = \frac{LC(w_1 \ldots w_n) \cdot RC(w_1 \ldots w_n)}{freq(w_1 \ldots w_n)^2}    (2)

where LC(w_1...w_n) is the number of unique left adjacent words (in western languages) or characters (in oriental languages) for the n-gram in the corpus, or is equal to the frequency of the n-gram if there is no left adjacent word/character. Similarly, RC(w_1...w_n) is the number of unique right adjacent words/characters for the n-gram, or is equal to the frequency of the n-gram if there is no right adjacent word/character. Using this ratio we are able to judge whether the appearance of an n-gram is dependent on a certain string containing it. For example, if w_1...w_n is always a substring of a string x w_1...w_n y in the corpus, then CD(w_1...w_n) is close to 0.
Combining formulae (1) and (2), the proposed association measure SCPCD is as follows:

SCPCD(w_1 \ldots w_n) = SCP(w_1 \ldots w_n) \cdot CD(w_1 \ldots w_n) = \frac{LC(w_1 \ldots w_n) \cdot RC(w_1 \ldots w_n)}{\frac{1}{n-1} \sum_{i=1}^{n-1} freq(w_1 \ldots w_i) \, freq(w_{i+1} \ldots w_n)}    (3)

Note that the difference between the formulae of SCPCD and SCP is in their numerator items. For SCP, those n-grams with low frequency tend to be discarded, which is prevented in the case of SCPCD. The proposed new measure determines a highly cohesive term because of the frequencies of its substrings and the number of its unique left and right adjacent words/characters.
3.2 Local Maxima Algorithm
The local maxima algorithm, called LocalMaxs in [18], is based on the idea that each n-gram has a kind of cohesion that holds the words together within the n-gram. This is a heuristic algorithm used in combination with the previous association measurements to extract n-grams, which are supposed to be key terms, from the text. We know different n-grams usually have different cohesion values. Given that:
- An antecedent (in size) of the n-gram w_1 w_2...w_n, ant(w_1...w_n), is a sub-n-gram of the n-gram w_1...w_n, having size n - 1,
i.e., the (n-1)-gram w_1...w_{n-1} or w_2...w_n.
- A successor (in size) of the n-gram w_1 w_2...w_n, succ(w_1...w_n), is an (n+1)-gram N such that the n-gram w_1...w_n is an ant(N), i.e., succ(w_1...w_n) contains the n-gram w_1...w_n and an additional word before (on the left) or after (on the right) it.
The local maxima algorithm extracts each term whose cohesion, i.e., association measure, is a local maximum. That is, it extracts each term whose association measure is greater than, or equal to, the association measures of its antecedents and is greater than the association measures of its successors. (A small illustrative sketch combining this filter with the SCPCD measure is given at the end of this section.)
3.3 The PAT-Tree Based Local Maxima Algorithm
Despite the usefulness of the local maxima algorithm, without a suitable data structure the time complexity of the algorithm is high. The main time complexity problems occur in two areas. One is calculating the context independency ratio (CD) for each unique n-gram in the corpus, and the other is finding the successors of an n-gram. The two problems can be treated as one, i.e., finding the successors of an n-gram. An intuitive way to do this is to find all (n+1)-grams and then compare the n-gram with them sequentially to see if they are its successors. As this is time-consuming, we use the PAT tree, which is a more efficient data structure. It was developed by Gonnet [8] from Morrison's PATRICIA algorithm (Practical Algorithm to Retrieve Information Coded in Alphanumeric) [15] for indexing a continuous data stream and locating every possible position of a prefix in the stream. The PAT tree structure is conceptually equivalent to a compressed digital search tree, but smaller. The superior feature of this structure mostly results from its use of semi-infinite strings [14] to store the substream values in the nodes of the tree. This also makes it easier and more efficient to find the successors of an n-gram. More details on the PAT tree can be found in [3].
By utilizing the constructed PAT tree as the corpus, we can efficiently retrieve all n-grams from the corpus, obtain their frequencies and context dependency values, and then calculate the association measure, SCPCD, of all of them.
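As promised above, here is a small illustrative sketch of our own (a simplification, not the PAT-tree-based implementation): it computes SCPCD of formula (3) from the n-gram statistics gathered in the earlier sketch and keeps the n-grams that are local maxima with respect to their antecedents and successors. The linear scan for successors is exactly the inefficiency the PAT tree is introduced to avoid.

def scpcd(gram, freq, LC, RC):
    """SCPCD of formula (3): LC*RC over the average split product."""
    n = len(gram)
    if n < 2:
        return 0.0
    avg = sum(freq[gram[:i]] * freq[gram[i:]] for i in range(1, n)) / (n - 1)
    return (LC[gram] * RC[gram]) / avg if avg else 0.0

def local_maxima_terms(freq, LC, RC, min_len=2, max_len=6):
    """Keep n-grams whose SCPCD is >= their antecedents' and > their successors'."""
    score = {g: scpcd(g, freq, LC, RC) for g in freq if len(g) >= 2}
    terms = []
    for g, s in score.items():
        if not (min_len <= len(g) <= max_len):
            continue
        ants = [score.get(g[1:], 0.0), score.get(g[:-1], 0.0)]
        # Successor lookup by linear scan; the PAT tree makes this efficient.
        succs = [v for h, v in score.items()
                 if len(h) == len(g) + 1 and (h[1:] == g or h[:-1] == g)]
        if all(s >= a for a in ants) and all(s > c for c in succs):
            terms.append(g)
    return terms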
3.4 Experiments on Term Extraction
To determine the effectiveness of the proposed association measure SCPCD and the efficiency of the PAT-tree data structure, we conducted several experiments on Web search-result pages using the proposed PAT-tree-based local maxima algorithm.
First, to test whether SCPCD can perform better than SCP and CD, we randomly selected 50 real queries in English from a Chinese search engine called Openfind (http://www.openfind.com/). We then submitted each of them to Google to search Chinese result pages. Most of these query terms, such as proper nouns and technical terms, were not covered in the common translation dictionary. After using the term extraction method, the top 30 extracted Chinese translation candidates were examined and the extraction accuracy of each candidate to the source query was manually determined. We applied this test mainly to determine whether the SCPCD measurement can extract more relevant translation candidates and segment them with correct lexical boundaries. A translation candidate was taken as correctly extracted only if it was correctly segmented and contained meanings relevant to the source term. A relevant translation candidate was not necessarily a correct translation. The whole relevant set was determined by examining the terms extracted by all of the test methods, e.g., CD, SCP, and SCPCD. Table 1 clearly shows that the method based on the SCPCD measurement achieves the best performance.

Table 1. The obtained extraction accuracy, including precision, recall, and average recall-precision, of auto-extracted translation candidates using different methods.
Association Measure | Precision | Recall | Avg. R-P
CD                  | 68.1 %    |  5.9 % | 37.0 %
SCP                 | 62.6 %    | 63.3 % | 63.0 %
SCPCD               | 79.3 %    | 78.2 % | 78.7 %

In order to determine the efficiency of the PAT-tree data structure, we compared the speed performance of the local maxima method and the PAT-tree-based local maxima method. As Table 2 shows, the PAT-tree data structure is more efficient in term extraction. Although the PAT-tree construction phase took a little more time in a small corpus, in a real-world case with a large corpus - where 1,367 and 5,357 scientific documents were tested (refer to Section 5.2 for the details) - the PAT-tree-based local maxima method performed much better than the local maxima method.

Table 2. The obtained average speed performance of different term extraction methods.
Term Extraction Method           | Time for Preprocessing | Time for Extraction
LocalMaxs (Web Queries)          |       0.87 s           |       0.99 s
PATtree+LocalMaxs (Web Queries)  |       2.30 s           |       0.61 s
LocalMaxs (1,367 docs)           |      63.47 s           |   4,851.67 s
PATtree+LocalMaxs (1,367 docs)   |     840.90 s           |      71.24 s
LocalMaxs (5,357 docs)           |  47,247.55 s           | 350,495.65 s
PATtree+LocalMaxs (5,357 docs)   |  11,086.67 s           |     759.32 s

TERM TRANSLATION
In the term translation module, we utilize the co-occurrence relation and the context information between source queries and target translations to estimate their semantic similarity and determine the most promising translations. Several similarity estimation methods based on co-occurrence analysis were investigated. These included mutual information, the DICE coefficient, and statistical tests including the chi-square test and the log-likelihood ratio test [17, 20], where the chi-square test and the context vector analysis achieved the best performance. These will be introduced below.
4.1 The Chi-Square Test
The chi-square test (χ2) was adopted as the major method of co-occurrence analysis in our study. One major reason is that the required parameters for the chi-square test can be effectively computed using the search-result pages, which alleviates the data sparseness problem. It also makes good use of all relations of co-occurrence between the source and target terms, especially the information that they do not co-occur.
For source term s and target term t, the conventional chi-square test can be transformed into the similarity measure defined below [6]:

S_{\chi^2}(s, t) = \frac{N \times (a \times d - b \times c)^2}{(a+b) \times (a+c) \times (b+d) \times (c+d)}    (4)

where
a: the number of pages containing both terms s and t;
b: the number of pages containing term s but not t;
c: the number of pages containing term t but not s;
d: the number of pages containing neither term s nor t;
N: the total number of pages, i.e., N = a+b+c+d.
Since most search engines accept Boolean queries and can report the number of pages matched, the required parameters for the chi-square test can be obtained by submitting Boolean queries such as `st', `~st', `s~t' (where ~ denotes negation) to search engines and utilizing the returned page counts. On the other hand, it is easy to get the number N using some search engines (e.g., Google), which indicate the total number of their collected Web pages. The number d may not be directly available from the search engine, but it can be calculated using the formula N = a+b+c+d, i.e., d = N - a - b - c.
4.2 Context Vector Analysis
Co-occurrence analysis is applicable to higher frequency terms since they are more likely to appear with their translation candidates. On the other hand, lower frequency terms have little chance of appearing with candidates on the same pages. The context vector method (CV) is therefore adopted to deal with this problem. As translation equivalents may share similar terms, for each query term, we take the co-occurring feature terms as the feature vector. The similarity between query terms and translation candidates can be computed based on their feature vectors. Thus, lower frequency query terms still have a chance to extract correct translations.
The context vector-based method has been used to extract translations from comparable corpora, such as the use of Fung et al.'s seed word [5]. In our method, real users' popular query terms are used as the feature set, which should help to avoid many inappropriate feature terms. Like Fung et al.'s vector space model, we also use the TF-IDF weighting scheme to estimate the significance of context features. This is defined as follows:

w_{t_i} = \frac{f(t_i, d)}{\max_j f(t_j, d)} \times \log\frac{N}{n}    (5)

where f(t_i, d) is the frequency of term t_i in search-result page d, N is the total number of Web pages in the collection of the search engines, and n is the number of pages containing t_i. Given the context vectors of a source query term and each target translation candidate, their similarity is estimated with the cosine measure as follows:

S_{cv}(s, t) = \frac{\sum_{i=1}^{m} w_{s_i} \times w_{t_i}}{\sqrt{\sum_{i=1}^{m} (w_{s_i})^2 \times \sum_{i=1}^{m} (w_{t_i})^2}}    (6)

It is not difficult to construct context vectors for source query terms and their translation candidates. For a source query term, we can use a fixed number of the top search results to extract translation candidates. The co-occurring feature terms of each query can also be extracted, and their weights calculated, which together form the context vector of the query. The same procedure is used to construct a context vector for each translation candidate.
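A compact sketch of the measures defined in formulas (4)-(6); this is our own illustration, and in practice the page counts a, b, c, d would be obtained from Boolean queries to a search engine, which is not shown here.

import math

def chi_square_sim(a, b, c, d, N):
    """Formula (4): co-occurrence similarity from page counts a, b, c, d."""
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return N * (a * d - b * c) ** 2 / denom if denom else 0.0

def tfidf_weight(freq_in_page, max_freq_in_page, N, n):
    """Formula (5): TF-IDF weight of one context feature term."""
    return (freq_in_page / max_freq_in_page) * math.log(N / n)

def cosine_sim(vec_s, vec_t):
    """Formula (6): cosine similarity of two TF-IDF context vectors."""
    feats = set(vec_s) | set(vec_t)
    dot = sum(vec_s.get(f, 0.0) * vec_t.get(f, 0.0) for f in feats)
    norm = math.sqrt(sum(v * v for v in vec_s.values()) *
                     sum(v * v for v in vec_t.values()))
    return dot / norm if norm else 0.0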
Although context vector analysis can deal with difficulties encountered by the chi-square test, it is not difficult to see that the feature selection issue needs to be carefully handled. Intuitively, a more complete solution is to integrate the above two methods. Considering the various ranges of similarity values in the two methods, we use a linear combination weighting scheme to compute the similarity measure as follows:

S_{all}(s,t) = \sum_{m} \alpha_m \times \frac{1}{R_m(s,t)}    (7)

where \alpha_m is an assigned weight for each similarity measure S_m, and R_m(s,t) - which represents the similarity ranking of each target candidate t with respect to source term s - is assigned to be from 1 to k (the number of candidates) in decreasing order of similarity measure S_m(s,t). (A small illustrative sketch of this combination scheme is given below, following the description of the test bed.)

4.4 Experiments on Term Translation
4.4.1 The Test Bed
To determine the effectiveness of the proposed approach, we conducted several experiments to extract translation pairs for Chinese and English terms in different domains.
Web Queries: We collected query terms and the logs from two real-world Chinese search engines in Taiwan, i.e., Dreamer and GAIS. The Dreamer log contained 228,566 unique query terms for a period of over 3 months in 1998, while the GAIS log contained 114,182 unique query terms for a period of two weeks in 1999. We prepared two different test query sets based on these logs. The first, called the popular-query set, contained a set of 430 frequent Chinese queries in the logs. These queries were obtained from the Chinese translations of 1,230 English terms out of the most popular 9,709 query terms (with frequencies above 10 in both logs), which co-occurred with their English counterparts in the logs. The popular-query set was further divided into two types: type Dic (the terms covered in the dictionary), consisting of about 36% (156/430) of the test queries, and type OOV (out of vocabulary; the terms not in the dictionary), consisting of about 64% (274/430) of the test queries.
The second set, called the random-query set, contained 200 Chinese query terms, which were randomly selected from the top 20,000 queries in the Dreamer log, where 165 (about 82.5%) were not included in general-purpose translation dictionaries.
Proper Names and Technical Terms: To further investigate the translation effectiveness for proper names and technical terms, we prepared two other query sets containing 50 scientists' names and 50 disease names in English. These were randomly selected from the 256 scientists (Science/People) and 664 diseases (Health/Diseases and Conditions) in the Yahoo! Directory. It should be noted that 76% (38/50) of the scientists' names and 72% (36/50) of the disease names were not included in the general-purpose translation dictionary, which contained 202,974 entries collected from the Internet.
To evaluate the search-result-based methods, we obtained search-result pages of the source query terms by submitting them to real-world Chinese search engines, such as Google Chinese and Openfind (http://www.openfind.com/). Basically, we used only the first 100 retrieved results (snippets) to extract translation candidates. The context vector of each source query and the required parameters (page counts) for the chi-square test were also extracted from the retrieved search-result pages.
To evaluate the performance of translation extraction, we used the average top-n inclusion rate as a metric.
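Returning to the combined method of Section 4.3, the rank-based linear combination (7) can be sketched in Haskell as follows. The names are our own, each measure is assumed to assign a score to every candidate, and ties are broken arbitrarily; this is only an illustrative sketch.

import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- Formula (7): every measure m contributes alpha_m / R_m(s,t), where
-- R_m(s,t) is the rank of candidate t under measure m (rank 1 is the
-- candidate with the highest similarity score).
combinedScore :: Eq t => [(Double, [(t, Double)])] -> t -> Double
combinedScore measures t =
    sum [ alpha / rankOf scores t | (alpha, scores) <- measures ]
  where
    rankOf scores cand =
      let ranked = map fst (sortBy (comparing (Down . snd)) scores)
      in fromIntegral (1 + length (takeWhile (/= cand) ranked))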
For a set of test queries, the top-n inclusion rate was defined as the percentage of queries whose translations could be found in the first n extracted translations. Also, we wished to know if the coverage rate of translations, i.e. the percentage of queries whose translations could be found in the whole extracted candidate set, was high enough in the top search-result pages for real queries.

4.4.2 Performance
Web Queries
We carried out experiments to determine the performance of the proposed approach by extracting translations for the popular-query set. Tables 3 and 4 show the results in terms of top 1-5 inclusion rates and coverage rates for Chinese and English queries respectively. In these tables, "CV", "χ2" and "Combined" represent the context-vector analysis, the chi-square test, and the combined method, respectively. In addition, "Dic", "OOV" and "All" represent the terms covered in a dictionary, the terms not in a dictionary, and the total test query set, respectively. The coverage rates we obtained were promising, which shows that the Web contains rich mixed texts in both languages. The performance of the English query set was not as good as the Chinese query set. The reason for this was that the English queries suffered from more noise in Chinese translation candidates since the search-result pages in the Chinese Web generally contain much more Chinese than English content. We also conducted an experiment for random queries. As Table 5 shows, the coverage rates were encouraging.

Table 3. Coverage and inclusion rates for popular Chinese queries using different methods.
Method      Query Type    Top-1    Top-3    Top-5    Coverage
CV          Dic           56.4%    70.5%    74.4%    80.1%
CV          OOV           56.2%    66.1%    69.3%    85.0%
CV          All           56.3%    67.7%    71.2%    83.3%
χ2          Dic           40.4%    61.5%    67.9%    80.1%
χ2          OOV           54.7%    65.0%    68.2%    85.0%
χ2          All           49.5%    63.7%    68.1%    83.3%
Combined    Dic           57.7%    71.2%    75.0%    80.1%
Combined    OOV           56.6%    67.9%    70.9%    85.0%
Combined    All           57.2%    68.6%    72.8%    83.3%

Table 4. Coverage and inclusion rates for popular English queries using different methods.
Method      Top-1    Top-3    Top-5    Coverage
CV          50.9%    60.1%    60.8%    80.9%
χ2          44.6%    56.1%    59.2%    80.9%
Combined    51.8%    60.7%    62.2%    80.9%

Table 5. Coverage and inclusion rates for random queries using the different methods.
Method      Top-1    Top-3    Top-5    Coverage
CV          25.5%    45.5%    50.5%    60.5%
χ2          26.0%    44.5%    50.5%    60.5%
Combined    29.5%    49.5%    56.5%    60.5%

Proper Names, Technical Terms and Common Terms
To further determine the effectiveness of the proposed approach in dealing with the translation of proper names and technical terms, we conducted an experiment on the test sets of scientists' names and medical terms using the combined method. As the results in Table 6 show, the top-1 inclusion rates for the scientists' and disease names were 40% and 44% respectively. Some examples of the extracted correct translations are shown in Table 7.

Table 6. 
Inclusion rates for proper names and technical terms using the combined method.
Query Type        Top-1    Top-3    Top-5
Scientist Name    40.0%    52.0%    60.0%
Disease Name      44.0%    60.0%    70.0%

Although the achieved performance for real queries looked promising, we wished to know if it was equally effective for common terms. We randomly selected 100 common nouns and 100 common verbs from a general-purpose Chinese dictionary. Table 8 shows the results obtained using the combined method. It is easy to see that the proposed approach is less reliable in extracting translations of such common terms. One possible reason is that the usages of common terms are diverse on the Web and the retrieved search results are not highly relevant. It is fortunate that many of these common words can be found in general-purpose translation dictionaries.

Table 8. Top 1, 3, 5 inclusion rates obtained using the combined method for extracting translations of common nouns and verbs.
Query Type          Top-1    Top-3    Top-5
100 Common Nouns    23.0%    33.0%    43.0%
100 Common Verbs    6.0%     8.0%     10.0%

BILINGUAL LEXICON CONSTRUCTION
To enhance CLIR services in a digital library that only has monolingual document collections, the proposed approach can be used to construct a domain-specific bilingual lexicon. We take the document set in digital libraries into consideration. The document set in the target language is first analyzed and possible key terms that are representative of the document set are extracted, using the proposed term extraction method. These extracted key terms are likely to be similar to terms that users may use in real user queries, since they are relatively more significant than other terms in the documents. The proposed term translation method can then be applied to those key terms not included in common translation dictionaries to obtain the translation of key terms in the source language. Therefore, a bilingual lexicon can then be constructed where the mappings between key terms and relevant terms in the source and target languages are maintained.
As we have already indicated, the constructed bilingual lexicon can benefit CLIR services. For a given source query, the similarity with candidate source relevant terms can be calculated using the context vector method presented in Section 4. Also, the top-ranked relevant terms can be extracted using the constructed bilingual lexicon. After the corresponding translations of relevant terms are obtained, relevant documents in the target language can be retrieved, using these relevant translations. The source query can then be expanded with the relevant translations and conventional CLIR methods can be used to retrieve documents in the target language.

5.2 An Application
We tested the STICNET Database (http://sticnet.stic.gov.tw/), which is a government-supported Web-accessible digital library system providing a search service for scientific documents collected in Taiwan. The system contained documents in either English or Chinese, but no cross-language search was provided. To test the performance of bilingual lexicon construction, we selected 1,367 Information Engineering documents and 5,357 Medical documents respectively from the STICNET Database for the period 1983 to 1997 as the test bed. Using the PAT-tree-based term extraction method, key terms were automatically extracted from each document collection and their relevant translations were extracted by the proposed term translation approach.
In the collection of Information Engineering documents, 1,330 key terms (with a threshold of 2 to 6-gram character strings, a term frequency > 10, and an association value > 0.1) were automatically extracted. Meanwhile, 5,708 key terms (with a threshold of 2 to 6-gram character strings and a term frequency > 40) were automatically extracted from the Medical document collection.
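The lexicon-construction process described in Section 5 can be outlined roughly as follows. Every function name below (extractKeyTerms, inDictionary, translateCandidates) is a hypothetical placeholder for the corresponding component described in the text, so this is only a sketch of the overall data flow.

import Data.List (nub)

type Term = String

-- Pair every auto-extracted key term that is missing from the common
-- dictionary with its ranked translation candidates from Section 4.
buildBilingualLexicon
  :: (doc -> [Term])     -- PAT-tree-based key term extraction
  -> (Term -> Bool)      -- membership test in a common translation dictionary
  -> (Term -> [Term])    -- ranked translation candidates (Section 4)
  -> [doc]               -- target-language document collection
  -> [(Term, [Term])]
buildBilingualLexicon extractKeyTerms inDictionary translateCandidates docs =
  [ (t, translateCandidates t)
  | t <- nub (concatMap extractKeyTerms docs)
  , not (inDictionary t) ]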
Among the 1,330 auto-extracted key terms from the Information Engineering documents, 32% were not included in KUH Chinese Dictionary (http://www.edu.tw/mandr/clc/dict/) (unknown terms) - one of the largest Chinese dictionaries, with 158,239 term entries - where 75% of these unknown terms were found useful. In the case of Medical documents, 71% of the 5,708 auto-extracted key terms were not included in KUH Chinese Dictionary, where 36.6% of these unknown terms were found useful. Table 9 shows the accuracy of the extracted translations for these useful unknown terms. The promising result shows the potential of the proposed approach to assist bilingual lexicon construction.

Table 7. Some examples of the test English proper names and technical terms, and their extracted Chinese translations (in Traditional Chinese).
Scientist Name: Galilei, Galileo (Astronomer); Crick, Francis (Biologists); Kepler, Johannes (Mathematician); Dalton, John (Physicist); Feynman, Richard (Physicist)
Disease Name: Hypoplastic Left Heart Syndrome; Legionnaires' Disease; Shingles; Stockholm Syndrome; Sudden Infant Death Syndrome (SIDS)

Table 9. The top-n inclusion rates of translations for auto-extracted useful unknown terms.
Query Type                                                Top-1    Top-3    Top-5
Auto-extracted useful terms in Information Engineering    33.3%    37.5%    50.0%
Auto-extracted useful terms in Medicine                   34.6%    46.2%    50.0%

RELATED WORK
Many effective retrieval models have been developed for CLIR. For example, the Latent Semantic Indexing (LSI) method [4] has been utilized to model inter-term relationships, instead of exact term matching. Other methods include the cross-lingual relevance model [11], which integrates popular techniques of disambiguation and query expansion. However, translation of queries not covered in a bilingual dictionary remains one of the major challenges in practical CLIR services [9].
To deal with the translation of out-of-dictionary terms, conventional research on machine translation has generally used statistical techniques to automatically extract translations from domain-specific, sentence-aligned parallel bilingual corpora [20]. However, a large parallel corpus is difficult to obtain. Some work has been done on term translation extraction from comparable texts, such as bilingual newspapers [5], which are easier to obtain. Using a non-parallel corpus is more difficult than a parallel one, due to the lack of alignment correspondence for sentence pairs. On the other hand, research on digital libraries has made the same endeavor. Larson et al. [10] proposed a method for translingual vocabulary mapping using multilingual subject headings of book titles in online library catalogs - a kind of parallel corpus. However, book titles are still limited in coverage, compared to the rich resources on the Web.
A new potential research direction is to perform query translation directly, through mining the Web's multilingual and wide-range resources [16]. Web mining is a new research area that focuses on finding useful information from large amounts of semi-structured hypertexts and unstructured texts [1]. Chen et al. [2] proposed a dictionary-based approach in which the search results returned from Yahoo China search engine were utilized to extract translations for terms not covered in the dictionary.
In their work\nonly an English term appearing (maybe in parenthesis)\nimmediately or closely after a Chinese term was considered a\npossible translation. In our previous research, we proposed an\napproach for extracting translations of Web queries through the\nmining of anchor texts and link structures and obtained very\npromising results [12, 13]. Previous experiments showed that the\nanchor-text-based approach can achieve a good precision rate for\npopular queries. Its major drawback is the very high cost of the\nhardware and software required to collect sufficient anchor texts\nfrom Web pages. Collecting anchor texts requires a powerful Web\nspider and takes cost of network bandwidth and storage. Because\nof the practical needs of digital libraries, search-result pages,\nwhich are easier to obtain are, therefore, investigated in this paper.\n\nCONCLUSION\nIn this paper, we have introduced a Web-based approach for\ndealing with the translation of unknown query terms for\ncross-language information retrieval in digital libraries. With the\nproposed term extraction and translation methods, it is feasible to\ntranslate unknown terms and construct a bilingual lexicon for key\nterms extracted from documents in a digital library. With the help\nof such bilingual lexicons, it would be convenient for users to\nformulate cross-lingual queries. The simplicity of the approach\nnot only makes it very suitable for digital library systems, but\nwould also facilitate the implementation of CLIR services.\n\nREFERENCES\n[1]\n\nChakrabarti, S. Mining the Web: Analysis of Hypertext and\nSemi Structured Data, Morgan Kaufmann, 2002.\n[2]\n\nChen, A., Jiang, H., and Gey, F. Combining Multiple Sources\nfor Short Query Translation in Chinese-English\nCross-Language Information Retrieval. In Proceedings of the\n5th International Workshop on Information Retrieval with\nAsian Languages (IRAL 2000), 2000, 17-23.\n[3]\n\nChien, L.F. PAT-Tree-based Keyword Extraction for\nChinese Information Retrieval. In Proceedings of the 20\nth\n\nAnnual International ACM Conference on Research and\nDevelopment in Information Retrieval (SIGIR 1997), 1997,\n50-58.\n[4]\n\nDumais, S. T., Landauer, T. K., and Littman, M. L.\nAutomatic Cross-Linguistic Information Retrieval Using\nLatent Semantic Indexing. In Proceedings of ACM-SIGIR\nWorkshop on Cross-Linguistic Information Retrieval (SIGIR\n1996), 1996, 16-24.\n[5]\n\nFung, P. and Yee, L. Y. An IR Approach for Translating\nNew Words from Nonparallel, Comparable Texts. In\nProceedings of the 36th Annual Conference of the\nAssociation for Computational Linguistics (ACL 1998), 1998,\n414-420.\n[6]\n\nGale, W. A. and Church, K. W. Identifying Word\nCorrespondences in Parallel Texts. In Proceedings of DARPA\nSpeech and Natural Language Workshop, 1991, 152-157.\n[7]\n\nGale, W.A. and Church, K.W. A Program for Aligning\nSentences in Bilingual Corpora. Computational Linguistics,\n19, 1 (1993), 75-102.\n[8]\n\nGonnet, G.H., Baeza-yates, R.A. and Snider, T. New Indices\nfor Text: Pat Trees and Pat Arrays. Information Retrieval\nData Structures & Algorithms, Prentice Hall, 1992, 66-82.\n[9]\n\nKwok, K. L. NTCIR-2 Chinese, Cross Language Retrieval\nExperiments Using PIRCS. In Proceedings of NTCIR\nworkshop meeting, 2001, 111-118.\n[10]\n\nLarson, R. R., Gey, F., and Chen, A. Harvesting Translingual\nVocabulary Mappings for Multilingual Digital Libraries. 
In\nProceedings of ACM/IEEE Joint Conference on Digital\nLibraries (JCDL 2002), 2002, 185-190.\n[11]\n\nLavrenko, V., Choquette, M., and Croft, W. B. Cross-Lingual\nRelevance Models. In Proceedings of ACM Conference on\nResearch and Development in Information Retrieval (SIGIR\n2002), 2002, 175-182.\n[12]\n\nLu, W. H., Chien, L. F., and Lee, H. J. Translation of Web\nQueries using Anchor Text Mining. ACM Transactions on\nAsian Language Information Processing, 1 (2002), 159-172.\n[13]\n\nLu, W. H., Chien, L. F., and Lee, H. J. Anchor Text Mining\nfor Translation of Web Queries: A Transitive Translation\nApproach. ACM Transactions on Information Systems, 22\n(2004), 128.\n[14]\n\nManber, U. and Baeza-yates, R. An Algorithm for String\nMatching with a Sequence of Don't Cares. Information\nProcessing Letters, 37 (1991), 133-136.\n[15]\n\nMorrison, D. PATRICIA: Practical Algorithm to Retrieve\nInformation Coded in Alphanumeric. JACM, 1968, 514-534.\n115\n[16]\n\nNie, J. Y., Isabelle, P., Simard, M., and Durand, R.\nCross-language Information Retrieval Based on Parallel\nTexts and Automatic Mining of Parallel Texts from the Web.\nIn Proceedings of ACM Conference on Research and\nDevelopment in Information Retrieval (SIGIR 1999), 1999,\n74-81.\n[17]\n\nRapp, R. Automatic Identification of Word Translations from\nUnrelated English and German Corpora, In Proceedings of\nthe 37th Annual Conference of the Association for\nComputational Linguistics (ACL 1999), 1999, 519-526.\n\n[18]\n\nSilva, J. F., Dias, G., Guillore, S., and Lopes, G. P. Using\nLocalMaxs Algorithm for the Extraction of Contiguous and\nNon-contiguous Multiword Lexical Units. Lecture Notes in\nArtificial Intelligence, 1695, Springer-Verlag, 1999, 113-132.\n[19]\n\nSilva, J. F. and Lopes, G. P. A Local Maxima Method and a\nFair Dispersion Normalization for Extracting Multiword\nUnits. In Proceedings of the 6\nth\nMeeting on the Mathematics\nof Language, 1999, 369-381.\n[20]\n\nSmadja, F., McKeown, K., and Hatzivassiloglou, V.\nTranslating Collocations for Bilingual Lexicons: A Statistical\nApproach, Computational Linguistics, 22, 1 (1996), 1-38.\n116\n", "keywords": "Information Search and Retrieval;Web Mining;Term Translation;translation dictionary;Context Vector Analysis;Unknown Cross-Lingual Queries;Web-based term translation approach;Cross-Language Information Retrieval;BILINGUAL LEXICON CONSTRUCTION;Digital Library;PAT-Tree Based Local Maxima Algorithm;CLIR services;Term Extraction;Digital Libraries"} {"name": "202", "title": "TypeCase: A Design Pattern for Type-Indexed Functions", "abstract": "A type-indexed function is a function that is defined for each member of some family of types. Haskell's type class mechanism provides collections of open type-indexed functions, in which the indexing family can be extended by defining a new type class instance but the collection of functions is fixed. The purpose of this paper is to present TypeCase: a design pattern that allows the definition of closed type-indexed functions, in which the index family is fixed but the collection of functions is extensible. It is inspired by Cheney and Hinze's work on lightweight approaches to generic programming. We generalise their techniques as a design pattern . 
Furthermore, we show that type-indexed functions with type-indexed types, and consequently generic functions with generic types, can also be encoded in a lightweight manner, thereby overcoming one of the main limitations of the lightweight approaches.", "fulltext": "Introduction\nA type-indexed function is a function that is defined for each member\nof a family of types. One of the most popular mechanisms\nimplementing this notion is the Haskell [31] type class system. A\ntype class consists of a collection of related type-indexed functions;\nthe family of index types is the set of instances of the type class.\nType classes provide just one possible interpretation of the notion\nof type-indexed functions. In particular, they assume an open-world\nperspective: the family of index types is extensible, by defining a\nnew type class instance for that type, but the collection of type-indexed\nfunctions is fixed in the type class interface so needs to\nbe known in advance. For some applications -- particularly when\nproviding a framework for generic programming -- the family of\nindex types is fixed (albeit large) and the collection of type-indexed\nfunctions is not known in advance, so a closed-world perspective\nwould make more sense.\nThe original concept of a design pattern has its origins in Christopher\nAlexander's work in architecture, but it has been picked up\nwith enthusiasm by the object-oriented programming community.\nThe idea of design patterns is to capture, abstract and record beneficial\nrecurring patterns in software design. Sometimes those patterns\ncan be captured formally, as programming language constructs\nor software library fragments. Often, however, the appropriate\nabstraction cannot be directly stated, either because of a lack\nof expressiveness in the language, or because there is inherent ambiguity\nin the pattern -- Alexander describes a pattern as a solution\n`you can use [. . . ] a million times over, without ever doing it the\nsame way twice' [1]. In this case, one must resort to an informal\ndescription. Even if the abstraction itself can be captured formally,\none might argue that a complete description of the pattern includes\nnecessarily informal information: a name, motivation, examples,\nconsequences, implementation trade-offs, and so on.\nIn this paper, we present a technique that allows the definition of\nclosed type-indexed functions, as opposed to the open type-indexed\nfunctions provided by type classes; we do so in the format of a\ndesign pattern. Our inspiration comes from previous research on\nlightweight approaches to generic programming (LAGP). In particular\n, Hinze's two papers \"A Lightweight Implementation of Generics\nand Dynamics\" [4] (LIGD, with James Cheney) and \"Generics\nfor the Masses\" [19] (GM) provide our motivation and basis.\nThose two papers focus on the particular context of generic\nprogramming, and provide a number of techniques that can be used\nto encode first-class generic functions in Haskell. However, those\ntechniques have a wider applicability, not addressed by Hinze. We\npropose a generalisation of the technique, and demonstrate its use\nin a variety of applications. Our specific contributions are:\nGeneralisation of the lightweight approaches. We provide templates\nfor designing closed type-indexed functions, abstracting\naway from generic programming. The techniques in LIGD and\nGM are instances of these templates.\nA design pattern for type-indexed functions. 
We document this\ngeneralisation as a design pattern.\nType-indexed functions with type-indexed types. We show that\nwith our more general interpretation of the design pattern, type-indexed\nfunctions with type-indexed types are also instances of\nthe design pattern. As a consequence, generic functions with\ngeneric types can also be encoded in a lightweight manner.\nThus, we remove one of the main limitations of the lightweight\napproaches.\nOther applications. We present two other interesting applications\nof the pattern: PolyP in Haskell 98, and a very flexible printf\nfunction.\nThe remainder of this paper is structured as follows. In Section 2\nwe review the lightweight approaches to generic programming. In\nSection 3 we abstract the essence of the technique as a design pattern\n. Section 4 presents two other small applications of the design\npattern, and Section 5 uses it to model type-indexed functions with\ntype-indexed types. Section 6 concludes.\nLightweight generic programming\nWe start by summarising the earlier work on lightweight approaches\nto generic programming underlying our generalisation.\n2.1\n\"A Lightweight Implementation of Generics and\nDynamics\"\nCheney and Hinze [4] show how to do a kind of generic programming\n, using only the standard Hindley-Milner type system extended\nwith existential types. The index family consists of hierarchical\nsums and products of integers and characters. This family is enough\nto represent a large subset of Haskell 98 datatypes (including mutually\nrecursive and nested datatypes).\ndata Sum a b\n= Inl a | Inr b\ndata Prod a b\n= Prod a b\ndata Unit\n= Unit\nThis style of generic programming requires a representation of\ntypes as values in order to support typecase analysis. The key idea\nof the LIGD paper is to use a parametrised type as the type representation\n, ensuring that the type parameter reflects the type being\nrepresented. Some Haskell implementations have recently been extended\nwith generalised algebraic datatypes (GADTs) [32], which\ncan be used for this purpose; but LIGD predates that extension, and\ndepends only on existential quantification.\ndata Rep t\n=\nRUnit\n(t Unit)\n| RInt\n(t Int)\n| RChar\n(t Char)\n| a b. RSum (Rep a) (Rep b) (t (Sum a b))\n| a b. RProd (Rep a) (Rep b) (t (Prod a b))\ndata a\nb = EP{from :: a b,to :: b a}\n(Note that the universal quantifications are in contravariant positions\n, so act existentially.)\nThe intention is that the equivalence type a\nb represents embedding/projection\npairs witnessing to an isomorphism between\ntypes a and b, thereby enforcing a correspondence between types t\nand Rep t. Of course, within Haskell, it is not possible to automatically\nverify the isomorphisms (from\nto = id and to from = id), so\nthese laws should be externally checked. 
Furthermore, we follow\nthe convention of ignoring the `ugly fact' of bottom values destroying\nthe `beautiful theory' of many such isomorphisms [8].\nA common case is with the trivial embedding/projections.\nself :: a\na\nself\n= EP{from = id,to = id}\nUsing self , we can provide a set of smart constructors for the Rep\ntype, yielding representations of types by themselves.\nrUnit :: Rep Unit\nrUnit\n= RUnit self\nrInt :: Rep Int\nrInt\n= RInt self\nrChar :: Rep Char\nrChar\n= RChar self\nrSum :: Rep a\nRep b Rep (Sum a b)\nrSum ra rb\n= RSum ra rb self\nrProd :: Rep a\nRep b Rep (Prod a b)\nrProd ra rb\n= RProd ra rb self\nUsing these smart constructors, we can build representations for\nrecursive datatypes, by making explicit the structure isomorphism\nof the datatype. For instance, the isomorphism defining lists is\n[a]\n= 1 + a [a], and so the corresponding type representation is\nas follows.\nrList ::\na. Rep a Rep [a]\nrList ra\n= RSum rUnit (rProd ra (rList ra)) (EP from to)\nwhere from\n[ ]\n= Inl Unit\nfrom\n(x : xs)\n= Inr (Prod x xs)\nto\n(Inl Unit)\n= [ ]\nto\n(Inr (Prod x xs)) = x : xs\nNote that the representation of a recursive datatype is an infinite\nvalue; but, because of laziness, this poses no problem.\nHaving constructed representation values for arbitrary types, the\nfinal step is to define generic functions. Using the representation\nas a basis for structural case analysis, it is possible to simulate a\ntypecase [16]. For example, here is a definition of generic equality:\neq ::\nt. Rep t t t Bool\neq\n(RInt ep)\nt\n1\nt\n2\n= from ep t\n1\nfrom ep t\n2\neq\n(RChar ep)\nt\n1\nt\n2\n= from ep t\n1\nfrom ep t\n2\neq\n(RUnit ep)\n= True\neq\n(RSum ra rb ep) t\n1\nt\n2\n= case (from ep t\n1\n,from ep t\n2\n) of\n(Inl x,Inl y) eq ra x y\n(Inr x,Inr y) eq rb x y\nFalse\neq\n(RProd ra rb ep) t\n1\nt\n2\n= case (from ep t\n1\n,from ep t\n2\n) of\n(Prod x y,Prod x y )\neq ra x x\neq rb y y\nUsing Haskell type classes, it is possible to make the use of generic\nfunctions even more convenient: the class TypeRep can be used to\nbuild values of type Rep t implicitly.\nclass TypeRep t where\nrep :: Rep t\ninstance TypeRep Unit where\nrep\n= rUnit\ninstance TypeRep Int where\nrep\n= rInt\ninstance TypeRep Char where\nrep\n= rChar\ninstance\n(TypeRep a,TypeRep b) TypeRep (Sum a b) where\nrep\n= rSum rep rep\ninstance\n(TypeRep a,TypeRep b) TypeRep (Prod a b) where\nrep\n= rProd rep rep\ninstance TypeRep a\nTypeRep [a] where\nrep\n= rList rep\nFor example, we can now express generic equality with an implicit\nrather than explicit dependence on the representation.\nceq ::\nt. TypeRep t t t Bool\nceq t\n1\nt\n2\n= eq rep t\n1\nt\n2\n2.2\n\"Generics for the Masses\"\nHinze's later GM approach [19] has a very similar flavour to LIGD;\nhowever, somewhat surprisingly, Hinze shows how to do generic\nprogramming strictly within Haskell 98, which does not support\nrank-n types or even existential types. Nevertheless, there is a close\nrelationship between type classes and polymorphic records (for\nexample, one possible translation of type classes into System F uses\npolymorphic records), and these require something like existential\ntypes for their encoding. Thus, type class instances can be seen\nas implicitly-passed records. Hinze uses this observation to deliver\ntwo implementations of generics.\n2.2.1\nGeneric functions on types\nThe first implementation of generics in GM (\"GM1\", from now\non) can be seen as a direct descendent of LIGD. 
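As a brief aside, here is a small usage example (our own) of the LIGD-style generic equality just defined; the list representation is built implicitly by the TypeRep class.

-- With the TypeRep instances above, ceq can be used directly at any
-- representable type; this example compares two lists of Ints.
eqExample :: Bool
eqExample = ceq [1, 2, 3 :: Int] [1, 2, 3]   -- evaluates to True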
Instead of using a\ndatatype with an existential quantification, Hinze uses a type class\nGeneric.\n99\nclass Generic g where\nunit\n:: g Unit\nsum\n::\n(TypeRep a,TypeRep b) g (Sum a b)\nprod\n::\n(TypeRep a,TypeRep b) g (Prod a b)\ndatatype :: TypeRep a\n(b a) g b\nchar\n:: g Char\nint\n:: g Int\nThe parameter g of the type class represents the generic function,\nand each of the member functions of the type class encodes the\nbehaviour of that generic function for one structural case. Generic\nfunctions over user-defined types can also be defined using the\ndatatype type case. In this case, the isomorphism between the\ndatatype and its structural representation must be provided.\nThe type class TypeRep is used to select the appropriate behaviour\nof the generic function, based on the type structure of its argument\n. The role of this type class is somewhat analogous to the\nsynonymous one in Section 2.1. One contrast with LIGD is that\nTypeRep for GM1 is not optional, because the type representations\nare always implicitly passed.\nclass TypeRep a where\ntypeRep :: Generic g\ng a\ninstance TypeRep Unit where\ntypeRep\n= unit\ninstance\n(TypeRep a,TypeRep b) TypeRep (Sum a b) where\ntypeRep\n= sum\ninstance\n(TypeRep a,TypeRep b) TypeRep (Prod a b) where\ntypeRep\n= prod\ninstance TypeRep Char where\ntypeRep\n= char\ninstance TypeRep Int where\ntypeRep\n= int\nFor GM, the type class TypeRep directly selects the appropriate\nbehaviour for a particular structural case from the generic function.\nIn contrast, for LIGD, the corresponding type class TypeRep builds\na value as a type representation for a particular structural case,\nand this representation is then used by a generic function to select\nthe appropriate behaviour. The effect is the same, but GM is more\ndirect.\nA new generic function is defined via an instance of Generic,\nproviding an implementation for each structural case. For instance,\nthe generic function gSize that counts all the elements of type Int\nand Char in some structure could be encoded as follows.\nnewtype GSize a\n= GSize{appGSize :: a Int}\ninstance Generic GSize where\nunit\n= GSize ( 0)\nsum\n= GSize (t case t of\nInl x\ngSize x\nInr y\ngSize y)\nprod\n= GSize (t case t of\nProd x y\ngSize x + gSize y)\ndatatype iso\n= GSize (t gSize (from iso t))\nchar\n= GSize ( 1)\nint\n= GSize ( 1)\ngSize :: TypeRep a\na Int\ngSize\n= appGSize typeRep\nA record of type GSize a contains a single function appGSize of\ntype a\nInt, which can be used to compute the number of elements\nin some structure of type a. The function gSize, which is the actual\ngeneric function, simply extracts the sole appGSize field from a\nrecord of the appropriate type, built automatically by typeRep.\n2.2.2\nGeneric functions on type constructors\nThe second implementation of generics in GM (\"GM2\") permits\nparametrisation by type constructors rather than by types. For example\n, whereas the generic function gSize of the previous section\nhas type a\nInt for all first-order types a in the type class TypeRep,\nin this section we show a generic function gSize with type f a\nInt\nfor all type constructors f in the constructor class FunctorRep.\nLifting in this fashion introduces the possibility of ambiguity:\na type g\n(f a) may be considered a type constructor g applied\nto a type f a, or the composition of constructors g and f applied\nto type a. Therefore we must explicitly pass type representations,\nincreasing flexibility but decreasing brevity. 
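As a small aside before turning to the details of the second implementation, the GM1 gSize function defined above can be used as follows; the example value is our own.

-- The representation for Prod Int Char is assembled implicitly by the
-- TypeRep instances of GM1: one Int plus one Char gives size 2.
sizeExample :: Int
sizeExample = gSize (Prod (1 :: Int) 'a')   -- evaluates to 2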
This is reflected in the\nanalogous type class Generic, where the implicitly-passed TypeRep\ncontexts are now changed to explicitly-passed functions.\nclass Generic g where\nunit\n:: g Unit\nsum\n:: g a\ng b g (Sum a b)\nprod\n:: g a\ng b g (Prod a b)\ndatatype ::\n(b a) g a g b\nchar\n:: g Char\nint\n:: g Int\nHowever, this modification of the type class restricts expressivity,\nsince the only generic function we can call is the one being defined,\nrecursively. Consequently, generic functions that perform calls to\nother generic functions (as when defining generic membership in\nterms of generic equality) become harder to define.\nWith the new Generic class it is also possible to build the\nvalues for type representations automatically, using another type\nclass TypeRep. Just as with LIGD, this class now becomes optional.\nAlternatively, we can use a type class FunctorRep to capture the\nnotion of unary type constructor or functor.\nclass FunctorRep f where\nfunctorRep :: Generic g\ng a g (f a)\nWe have to define similar classes for each arity of type constructor.\nGeneric functions are defined in a very similar fashion to GM1.\nFor instance, the type Count a below represents a generic function\nthat counts zero for each occurrence of a value of type Int or Char\nin some structure of type a.\nnewtype Count a\n= Count{applyCount :: a Int}\ninstance Generic Count where\nunit\n= Count ( 0)\nsum a b\n= Count (x case x of\nInl l\napplyCount a l\nInr r\napplyCount b r)\nprod a b\n= Count ((Prod x y)\napplyCount a x\n+ applyCount b y)\ndatatype iso a\n= Count (x\napplyCount a\n(from iso x))\nchar\n= Count ( 0)\nint\n= Count ( 0)\nWhile this function by itself approximates const 0, it is the basis\nfor other more useful functions that really count the number of elements\nin some structure in some way, by overriding the behaviour\nof the basic generic function for occurrences of the type parameter:\ngSize :: FunctorRep f\nf a Int\ngSize\n= applyCount (functorRep (Count ( 1)))\nThe payback of using FunctorRep is that we can define the\nbehaviour of the generic function for its parameters. For instance,\nwe could sum all the integers in some integer-parametrised datatype\nby using the identity function to define the behaviour of the generic\nfunction for the type parameter.\ngSum :: FunctorRep f\nf Int Int\ngSum\n= applyCount (functorRep (Count id))\n100\nClosed type-indexed functions\nIn LIGD and GM, we are shown three methods for implementing\nclosed type-indexed functions. Those three variations give us different\nexpressive power, and impose different constraints on the\ntype system. A choice of implementation techniques, together with\ntechnical trade-offs making no one method superior in all circumstances\n, is characteristic of design patterns.\nIn this section, we introduce the TypeCase design pattern,\ncapturing the different techniques for implementing closed type-indexed\nfunctions.\nThe TypeCase design pattern\nIntent:\nAllowing the definition of closed type-indexed functions.\nMotivation:\nThe typecase design pattern captures a closed-world\nview of ad-hoc polymorphism. In Haskell, the type class system\nis a mechanism that supports ad-hoc polymorphism, but from an\nopen-world point of view: they can be extended with cases for\nnew datatypes, at the cost of a non-extensible set of functions.\nUnder the closed-world assumption, there is a fixed set of type-structural\ncases but arbitrarily many type-indexed functions ranging\nover those cases. 
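To make the contrast concrete, the following is a small sketch of the open-world side; the class and instances are our own illustration and not part of the pattern.

-- Open-world ad-hoc polymorphism: new index types can be added by giving
-- new instances, but the set of functions (here just osize) is fixed in
-- the class declaration and cannot grow without editing the class itself.
class OpenSize a where
  osize :: a -> Int

instance OpenSize Int where
  osize _ = 1

instance OpenSize Char where
  osize _ = 1

instance (OpenSize a, OpenSize b) => OpenSize (a, b) where
  osize (x, y) = osize x + osize y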
An example where the closed-world perpective\nworks better than the open-world one is generic programming, in\nwhich we take a structural perspective on types as opposed to the\nmore traditional nominal one. Using just a few operations on types,\nit is possible to represent the whole family of structural definitions\nof interest. For instance, here is a possible definition for a generic\nfunction that counts all the elements of some structure t:\ngsize t ::\n:: t\nInt\ngsize Unit\n= 0\ngsize Int\n= 1\ngsize Sum\n(Inl x)\n= gsize x\ngsize Sum\n(Inr y)\n= gsize y\ngsize Prod\n(Prod x y) = gsize x + gsize y\nWith an open-world perspective, we can present a fixed number\nof type-indexed definitions that range over those few cases; but\nwe cannot easily introduce new definitions. This is clearly not\nappropriate for generic programming. In fact, what we expect from\na generic programming facility is the ability to a introduce new\ngeneric definition without affecting the surrounding context. This\nis precisely what the closed-world perspective provides us.\nApplicability:\nUse this pattern:\n\nto encode collections of definitions that are indexed by some\nfixed family of types, while allowing new definitions to be added\nto the collection without affecting modularity;\n\nwhen a definition is variadic, that is, it has a variable number of\narguments (see Section 4.2 for an example);\n\nto try to avoid type-class trickery, such as multiple-parameter\ntype classes, functional dependencies, overlapping instances or\neven duplicate instances (just consider a direct encoding of the\nexamples presented in the paper into type classes [30]);\n\nto capture some shape invariants, like the ones captured by\nsome nested types or phantom types [29, 18].\nStructure:\nSee Figure 1.\nParticipants:\n\nStructural Cases: a set of datatypes which represent the possible\nstructural cases for the type-indexed function;\n\nTypecase: representing the structure of a type-indexed function;\n\nDispatcher: a type class, containing a single function, that is\nresponsible for dispatching a value of one of the structural cases\ninto the corresponding branch of the typecase, based on the type\nof the value;\n\nType-indexed function: defining the type-indexed function using\nan instance of the typecase.\nCollaborations:\n\nThe typecase uses the structural cases in order to create a\ncorresponding number of cases that can be used to define the\ntype-indexed function.\n\nThe dispatcher uses the structural cases in order to create\na corresponding number of instances that will forward some\nvalue of that family of structural cases into the corresponding\ncase in the typecase component.\n\nThe type-indexed function (TIF) uses an instance of the typecase\nin order to implement the desired functionality for the type-indexed\nfunction.\nImplementation:\nTypically, a typecase component is created using\nthe structural cases. There are three main variations for the implementation\nof a typecase: two of them are based on type classes\nand the other one on a smart datatype. A smart datatype is a parametrised\ntype where the type parameters are dependent on the constructors\n. The idea of a smart datatype can be represented in various\nforms: existential datatypes with an equivalence type ( la LIGD),\nGADTs, phantom types, among others.\nThe goal of this design pattern is to simulate a closed type-indexed\nfunction. In general, a type-indexed function f has the\nfollowing structure.\nf t ::\n| d\n1\n... d\nk\n::\nf t\n1\na\n1\n... 
a\ni\n= x\n11\n... x\n1n\ne\n1\n.\n.\n.\nf t\nm\nz\n1\n... z\nj\n= x\nm1\n... x\nmn\ne\nm\nThe type signature tells us that f has one type parameter t and\noptional type parameters d\n1\n... d\nk\nwith the same structure and kind\nas t. The type of the TIF may depend on t and d\n1\n... d\nk\n.\nWe should note that this is not the same as having a TIF with\nmultiple type arguments. There is no problem, in principle, in having\nmultiple-parameter type arguments, but it would lead to an explosion\nin the number of typecases. This would be a generalisation\nof this design pattern. For simplicity, we will only consider type\nparameters with the same structure. The usefulness of this simpler\ncase is reflected in applications such as generic map where the input\nand output structures of the generic map function are the same.\nThe body of f contains (at least) m branches, providing the\nbehaviour of the TIF for each member of the family of types t\n(that is, t\n1\na\n1\n... a\ni\n,...,t\nm\nz\n1\n... z\nj\n). This family of types corresponds\nto the structural cases participant of the design pattern\n. For each branch of the definition, we bind possible variables\nx\n11\n... x\n1n\n,...,x\nm1\n... x\nmn\nand define each typecase of f with\ne\n1\n,...,e\nm\n.\nWe now discuss the three main variations of the design pattern.\n1. Smart datatypes: This variation is inspired by the LIGD approach\n. Hindley-Milner typing extended with existential datatypes\n(supported in most Haskell compilers) is enough to encode\nit. However, with extensions such as GADTs (supported\nby GHC 6.4) the encoding becomes much more direct. Unfortu-nately\n, neither of those extensions conforms to Haskell 98. We\nwill present this version of the design pattern using a GADT\nsyntax for simplicity.\nUsing the structural cases given by t\n1\na\n1\n... a\ni\n,...,t\nm\nz\n1\n... z\nj\n,\nwe can derive the typecase and dispatcher seen in Figure 1.\nSince there are m structural cases in a standard instance of the\ndesign pattern, one would create m constructors c\nt\n1\n,...,c\nt\nm\nand\nalso m instances for Rep\n\n. TIFs can now be defined using those\ncomponents, by creating some function f that takes a first argument\nof type Rep\n\nand returns a value of type .\n101\nSmart Datatype\nImplicit/Explicit Representations\nTypecase\ndata t d\n1\n... d\nk\nwhere\nc\nt\n1\n::\n(a\n1\n... a\ni\n)\n\n(t\n1\na\n1\n... a\ni\n) d\n11\n... d\n1k\n.\n.\n.\nc\nt\nm\n::\n(z\n1\n... z\nj\n)\n\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmk\nclass\n(g ::\nk\n+1\n) where\ncase\nt\n1\n::\n(a\n1\n... a\ni\n)\ng\n(t\n1\na\n1\n... a\ni\n) d\n11\n... d\n1k\n.\n.\n.\ncase\nt\nm\n::\n(z\n1\n... z\nj\n)\ng\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmk\nDispatcher\nclass Rep\n\nt d\n1\n... d\nk\nwhere\nrep :: Rep\n\nt d\n1\n... d\nk\ninstance\n(a\n1\n... a\ni\n)\n\nRep\n\n(t\n1\na\n1\n... a\ni\n) d\n11\n... d\n1k\nwhere\nrep\n= c\nt\n1\nrep\ni\n.\n.\n.\ninstance\n(z\n1\n... z\nj\n)\n\nRep\n\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmk\nwhere\nrep\n= c\nt\nm\nrep\nj\nclass Rep\n\nt d\n1\n... d\nk\nwhere\nrep :: g\ng t d\n1\n... d\nk\ninstance\n(a\n1\n... a\ni\n)\n\nRep\n\n(t\n1\na\n1\n... a\ni\n) d\n11\n... d\n1k\nwhere\nrep\n= case\nt\n1\n{rep\ni\n}\n.\n.\n.\ninstance\n(z\n1\n... z\nj\n)\n\nRep\n\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmk\nwhere\nrep\n= case\nt\nm\n{rep\nj\n}\nType-indexed\nfunction\nf :: t d\n1\n... d\nk\n\nf\n(c\nt\n1\nr\na\n1\n... r\na\ni\n) = x\n11\n... x\n1n\n[[e\n1\n]]\n.\n.\n.\nf\n(c\nt\nm\nr\nz\n1\n... r\nz\nj\n) = x\nm1\n... 
x\nmn\n[[e\nm\n]]\nf :: Rep\n\nt d\n1\n... d\nk\n\nf\n= f rep\nnewtype F t d\n1\n... d\nk\n= F{f :: }\nf :: Rep\n\nt d\n1\n... d\nk\n\nf\n= f rep\ninstance Rep\n\nF where\ncase\nt\n1\n{r\na\n1\n... r\na\ni\n} = x\n11\n... x\n1n\n[[e\n1\n]]\n.\n.\n.\ncase\nt\nm\n{r\nz\n1\n... r\nz\nj\n} = x\nm1\n... x\nmn\n[[e\nm\n]]\nFigure 1. The structure of the TypeCase design pattern.\nThe dispatcher component is optional in this variation. The\nTIFs created with this variation are fully closed to extension;\nno customisation is possible. This means that if we want to add\nextra functionality we need to modify the smart datatype (and\nthe dispatcher if we have one). However, TIFs that call other\nTIFs are trivial to achieve; there is no need for tupling.\n2. Implicit representations: The implicit representation version\nof the design pattern is inspired by GM1. Perhaps surprisingly,\nsome implementations of this instance require only Haskell 98.\nHowever, if we need to have structurally-dependent variables,\nthen we also require multiple-parameter type classes.\nProceeding in a similar fashion to the smart datatype approach\n, we use the structural cases to derive the typecase and\ndispatcher seen in Figure 1. Again, because we have m structural\ncases, we create m functions case\nt\n1\n,...,case\nt\nm\nand m instances\nof Rep\n\n.\nThe dispatcher is not an optional component: it always\nneeds to be defined in this variation. As with the smart datatype\nvariation, TIFs defined in this way are fully closed to extension,\nand calls to other TIFs are trivial.\n3. Explicit representations: The explicit representation variation\nof the design pattern is inspired by GM2. Like the implicit\napproach, Haskell 98 is enough to handle the simpler forms\n(one type parameter). However, if we discard the optional dispatcher\n, then Haskell 98 can handle all forms.\nUsing the structural cases to derive the typecase and dispatcher\nseen in Figure 1, we would obtain a very similar structure\nto the implicit representation version. The most noticeable\ndifference is that, with the explicit representation, the definition\nof rep needs to provide the corresponding case function with\nthe representations for each of its type parameters. The second\ndifference is that , which corresponds to the representations\nof the type parameters, reflects the fact that we are providing\nexplicit representations. Thus, corresponds in this instance\nto explicit arguments of the function, while with the implicit\nrepresentation it corresponds to (implicitly passed) type class\nconstraints. The dispatcher is an optional component.\nVariations of this instance of the design pattern can also be\nfound in the literature [10, 37], as described in Section 4.2. TIFs\ndefined in this fashion are not fully closed to extension: it is possible\nto override default behaviour. However, the extra flexibility\ncomes at a cost: recursive calls to other TIFs are not possible.\nOne common solution for this problem is to tuple together into\na record the mutually-dependent functions. Another possibility\nwould be to have a notion of dependencies: if a TIF f requires\ncalls to another TIF g, then the record that defines f has a field\nthat is an instance of g. Although this work is quite tedious, Lh\n[26] shows how a type system can lighten the burden.\nAn associated problem for TIFs in this setting is the issue\nof composability. 
If two TIFs are defined using different instances\n(this is, they are not tupled together), then we cannot, in\na straightforward manner, use the same representation to compose\nthem. To illustrate the problem, consider:\nnewtype F v\n1\n... v\nn\n= F{f :: }\nnewtype G v\n1\n... v\nn\n= G{g :: }\ninstance Generic F where\n...\ninstance Generic G where\n...\nNow let us suppose that we define a type-indexed abstraction\n(that is, a function that uses one or more TIFs and is not defined\nover the structure of types):\nh rep\n= ... f rep ... g rep ...\nThe interpretation of this definition as a type-indexed function\ncould be thought of as: h a\n= ... f a ... g a .... While this\nis a perfectly reasonable interpretation, in practice f requires\ninconsistent types F v\n1\n... v\nn\nand G v\n1\n... v\nn\nfor rep: F and\nG are two different type constructors, so in a Hindley-Milner\ntype system, unification obviously fails. However, F and G\ndo have something in common. In particular, they are both\n102\ninstances of Generic. So, in Haskell extended with higher-order\npolymorphism, we can capture this relation with a rank-2\ntype, thus providing a possible solution for the problem of\ncomposability.\nh ::\n( g. Generic g g v\n1\n... v\nn\n)\nh rep\n= ... f rep ... g rep ...\nWe should note that even though we have presented three main\nvariations of the design pattern, the concept of a design pattern is,\nby itself, quite informal and thus prone to different interpretations.\nFor instance, as we will see later, applications of the pattern (such\nas GM) can have more type cases than there are datatype variants,\nbecause some cases overlap. It is important to note that, depending\non the context of a problem, a design pattern can be adapted to\nbetter fit that problem.\nApplications\nWe present two applications of the design pattern. In Section 4.1,\nstill within the context of generic programming, we show how\none can build a library inspired by PolyP [21, 22] but working in\nHaskell 98. In Section 4.2, we present a very flexible version of a\nC-style printf function.\n4.1\nLight PolyP\nIt probably comes as no surprise to the reader that the technique\nintroduced in GM and LIGD can be applied to other generic programming\napproaches as well. PolyP was one of the first attempts to\nproduce a generic programming language. It is a simpler language\nthan Generic Haskell, working in a much more restricted family\nof datatypes, namely one-parameter regular types. But this restriction\nallows stronger properties to be stated: its simplicity and strong\ntheoretical background make it an appropriate language for teaching\nboth the theory [3] and practice of generic programming. Our\nproposal Light PolyP encourages this, because no external PolyP\ncompiler is required (although one might still be desirable, for a\nmore convenient syntax).\nNorell [30] shows how to use the Haskell type class system (extended\nwith multiple-parameter type classes and functional dependencies\n) to obtain first-class PolyP generic functions in Haskell. In\nthis section, we will present a \"lighter\" version of PolyP, requiring\nonly Haskell 98 (without extensions such as multiple-parameter\ntype classes and functional dependencies) but with the same expressive\npower.\nInstead of using sums of products like LAGP or Generic\nHaskell, PolyP uses lifted pattern functors as structural cases. 
The\npattern functors Empty, Plus and Prod have counterparts in LAGP.\nThe pattern functors Rep and Par correspond respectively to the recursive\nargument and the parameter of the unary regular datatype.\nThe pattern functor Const t for some type t represents the constant\nfunctor, and Comp handles the composition of functors required\nfor regular types.\ndata Empty p r\n= Empty\ndata Plus g h p r\n= Inl (g p r) | Inr (h p r)\ndata Prod g h p r\n= Prod (g p r) (h p r)\nnewtype Par p r\n= Par{unPar ::p}\nnewtype Rec p r\n= Rec{unRec :: r}\nnewtype Comp d h p r\n= Comp{unComp :: d (h p r)}\nnewtype Const t p r\n= Const{unConst :: t}\nThe equivalence type is used to establish the isomorphism\nbetween a regular datatype and its top-level structure. The embedding/projection\nfunctions are traditionally called inn and out.\ndata Iso a b\n= Iso{inn :: a b,out :: b a}\nlistIso\n= Iso inL outL\nwhere\ninL\n(Inl Empty)\n= [ ]\ninL\n(Inr (Prod (Par x) (Rec xs))) = x : xs\noutL\n[ ]\n= Inl Empty\noutL\n(x : xs) = Inr (Prod (Par x) (Rec xs))\nIn PolyP no generic customisation is allowed, thus we can use\nan implicit representation version of the design pattern and consequently\n, it is possible for one generic function to use other generic\nfunctions in its definition. The typecase component corresponds to:\nclass Generic f where\nempty\n:: f Empty\nplus\n::\n(Rep g,Rep h) f (Plus g h)\nprod\n::\n(Rep g,Rep h) f (Prod g h)\npar\n:: f Par\nrec\n:: f Rec\ncomp\n::\n(Functor d,Rep h) f (Comp d h)\nconstant :: f\n(Const t)\nThe dispatcher simply selects the corresponding case based on\nthe type of the argument of the generic function g.\nclass Rep g where\nrep :: Generic f\nf g\ninstance Rep Empty where\nrep\n= empty\ninstance\n(Rep g,Rep h) Rep (Plus g h) where\nrep\n= plus\ninstance\n(Rep g,Rep h) Rep (Prod g h) where\nrep\n= prod\ninstance Rep Par where\nrep\n= par\ninstance Rep Rec where\nrep\n= rec\ninstance\n(Functor d,Rep h) Rep (Comp d h) where\nrep\n= comp\ninstance Rep\n(Const t) where\nrep\n= constant\nLike GM, defining a generic function is a matter of declaring\na record with a single field, a function of the appropriate type. As\nan example, we could define fmap2, the map operation for binary\nfunctors, as follows.\nnewtype FMap2 a b c d f\n= FMap2{\nappFMap2 ::\n(a c) (b d) f a b f c d}\ninstance Generic\n(FMap2 a b c d) where\nempty\n= FMap2 (\nEmpty)\nplus\n= FMap2 (f g t case t of\nInl x\nInl (fmap2 f g x)\nInr y\nInr (fmap2 f g y))\nprod\n= FMap2 (f g t case t of\nProd x y\nProd (fmap2 f g x) (fmap2 f g y))\npar\n= FMap2 (f g (Par t) Par (f t))\nrec\n= FMap2 (f g (Rec t) Rec (g t))\ncomp\n= FMap2 (f g (Comp t)\nComp\n(fmap (fmap2 f g) t))\nconstant\n= FMap2 (\n(Const t) (Const t))\nfmap2 :: Rep f\n(a c) (b d) f a b f c d\nfmap2\n= appFMap2 rep\nWith fmap2 it is now possible to define several widely-applicable\nrecursion operators [28, 14] using PolyP. For example, the cata-morphism\noperator could be defined as:\ncata iso f\n= f fmap2 id (cata iso f ) out iso\nNote that one must give explicitly the isomorphism that converts\nbetween the datatype and its representation. This contrasts\nwith the original PolyP approach, in which that translation is inferred\n. This is the common trade-off of brevity for flexibility; being\nforced to state the isomorphism allows the programmer to choose a\ndifferent one, giving something analogous to Wadler's ideas about\n103\nviews [34]. 
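As a small usage sketch (our own example, assuming the definitions above with the usual function composition in cata), the length of a list can be computed with cata and listIso as follows.

-- The algebra pattern-matches on the lifted pattern functors: the Inl
-- case is the empty list, and the Inr case carries the head (Par) and
-- the already-computed length of the tail (Rec).
lengthCata :: [a] -> Int
lengthCata = cata listIso alg
  where
    alg (Inl Empty)                  = 0
    alg (Inr (Prod (Par _) (Rec n))) = 1 + n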
We might say that this style of generic programming is\nisomorphism-parametrised instead of datatype-parametrised.\nIn the original PolyP, the polytypic construct provides a convenient\nsyntax for encoding generic functions. Furthermore, combinators\nfor pointfree programming may be provided, making generic\ndefinitions even more compact. These combinators are just normal\nHaskell functions, and so there is no problem in implementing them\nin pure Haskell; but to keep the example short, we have stuck with\npointwise definitions.\nThe advantages of this translation when compared with the one\nproposed in [30] are that it requires only Haskell 98, and that the\ntypes of the generic functions are much closer to what one would\nexpect. In Norell's translation, the type class constraints posed\nsome problems because both the two-parameter class FunctorOf\nand the classes for the generic functions propagated throughout\nthe code. With the Light PolyP approach, only instances of Rep\npropagate, leading usually to just one type class constraint.\n4.2\nPrintf\nThe C-style printf function, which takes a variable number of\nparameters, has always been a challenge for programmers using\nstrongly and statically typed languages. The problem with printf is\nthat, in its true essence, it requires dependent types. This happens\nbecause the value of the format string determines the type of the\nfunction. However, it has been shown by Danvy [10] that by changing\nthe representation of the control string it is possible to encode\nprintf in any language supporting a standard Hindley-Milner type\nsystem.\n4.2.1\nA solution using explicit representations\nIn this section, we will demonstrate that Danvy's solution is another\ninstance of the TypeCase design pattern, using an explicit representation\n. Furthermore, we will show a new use of the printf function\nby making use of the fact that we can (in some cases) infer the\nformat string.\nDanvy's original solution had the following combinators:\nlit\n:: String\n(String a) String a\nlit x k s\n= k (s ++ x)\neol\n::\n(String a) String a\neol k s\n= k (s ++ "\\n")\nint\n::\n(String a) String Int a\nint k s x\n= k (s ++ show x)\nstr\n::\n(String a) String String a\nstr k s x\n= k (s ++ x)\neod\n:: String\nString\neod\n= id\nIf we capture all the occurrences of the form String\nt with a\nnewtype Printf , and modify the definitions in order to reflect this\nnewtype, we obtain the following code.\nnewtype Printf t\n= Printf {printfApp :: String t}\nlit\n:: String\nPrintf a Printf a\nlit x k\n= Printf (s printfApp k (s ++ x))\neol\n:: Printf a\nPrintf a\neol k\n= Printf (s printfApp k (s ++ "\\n"))\nint\n:: Printf a\nPrintf (Int a)\nint k\n= Printf (s x printfApp k (s ++ show x))\nstr\n:: Printf a\nPrintf (String a)\nstr k\n= Printf (s x printfApp k (s ++ x))\neod\n:: Printf String\neod\n= Printf id\nTaking one step further, we can now abstract over Printf and\ncreate a type class that replaces it with some functor f .\nclass Format f where\nlit :: String\nf r f r\neol :: f r\nf r\nint :: f r\nf (Int r)\nstr :: f r\nf (String r)\neod :: f String\nWith this last transformation, we can start seeing an instance of the\nTypeCase design pattern. The structural cases participant consists\nof functions of the form Int\nr or String r, or a String -- lit\nand eol are overlapping cases. The class Format constitutes the\ntypecase participant. 
Because the dispatcher is optional in explicit versions of the design pattern, there is no obligation to define it. Now, using the newtype Printf, we can define an instance of Format that implements the functionality of printf.

instance Format Printf where
  lit x k = Printf (\s -> printfApp k (s ++ x))
  eol k   = Printf (\s -> printfApp k (s ++ "\n"))
  int k   = Printf (\s x -> printfApp k (s ++ show x))
  str k   = Printf (\s x -> printfApp k (s ++ x))
  eod     = Printf id

The final touch is provided by the definition of printf in terms of printfApp. The printf function is expected to receive the formatting argument of type Printf t as its first parameter. The parameter t defines the type of printf, which can involve a variable number of arguments. Analysing the type of printfApp, we see that the first parameter is the formatting argument, the resulting type is the type that we expect for printf, and there is a second argument which is a String. Now, what does that String represent? Danvy's solution uses a continuation-passing style, and the second argument of printfApp corresponds to the value fed to the initial continuation. Thus using the string "" for that argument does the trick.

printf :: Printf t -> t
printf p = printfApp p ""
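As a quick sanity check of these definitions (a sketch; pairFmt and the expected result are our own illustration, not from the original development):

-- Assuming the Format Printf instance and printf above are in scope.
pairFmt :: Printf (Int -> Int -> String)
pairFmt = lit "(" (int (lit ", " (int (lit ")" eod))))

-- printf pairFmt 1 2  evaluates to  "(1, 2)"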
We have shown, informally, that Danvy's solution is indeed an instance of the TypeCase design pattern. However, some questions might be asked at this point. Do we really need to create a class in order to implement printf? What other instances of the class would we be able to provide? In fact there are not many other uses for the type class; printf seems to be the only natural instance. Perhaps we could consider scanf, another C function that uses the same format string; but the derived type for scanf would be different, and so it is not possible to reuse the same type class. Another possibility would be considering other versions of printf, such as one for the IO monad. However, if we think that printf is really the only useful instance of the type class, why not get rid of the type class altogether?

A design pattern is a flexible design, and depending on the context of the problem, it can be adapted to fit the problem. If a type-indexed function is used at just one type index, it is reasonable to simplify the pattern and eliminate the type class. The result would be the specialised solution using the newtype Printf t presented before. We could go even further and argue that Danvy's original solution is already an instance of the design pattern, corresponding to one further simplification of the design pattern, namely getting rid of the newtype.

4.2.2 An alternative solution using smart datatypes

In the previous section, we have argued that Danvy's version of printf is an instance of the TypeCase design pattern. However, Danvy's solution and explanation for printf is not, perhaps, very intuitive to understand. In this section, we take a different perspective and look at the formatting parameter of printf as a special kind of list. This perspective corresponds to an instance of the design pattern using a smart datatype. The datatype (the typecase participant) encodes a list, which has an empty case that corresponds to the combinator eod, and a number of recursive cases that correspond to lit, eol, int and str.

data Printf t where
  Lit :: String -> Printf t -> Printf t
  Eol :: Printf t -> Printf t
  Int :: Printf t -> Printf (Int -> t)
  Str :: Printf t -> Printf (String -> t)
  Eod :: Printf String

Informally speaking, we have reused the types from the newtype solution and lifted the functions to constructors. However, using a datatype instead of a number of functions makes it easier to view the format parameter of printf as a list. For instance, the Lit constructor takes the literal string that we wish to print and also the list corresponding to the rest of the format parameter of printf.

The printfApp from the previous section would, in this setting, correspond to a dependently-typed function (in the sense that the types of its branches are determined by the constructors used to perform pattern matching).

printfApp :: Printf t -> String -> t
printfApp (Lit x k) s = printfApp k (s ++ x)
printfApp (Eol k)   s = printfApp k (s ++ "\n")
printfApp (Int k)   s = \x -> printfApp k (s ++ show x)
printfApp (Str k)   s = \x -> printfApp k (s ++ x)
printfApp Eod       s = s

The final step is to define printf. Little effort is required; we just need to copy the definition of printf from the previous section. The only apparent difference between the two versions is that, where the first version uses functions like lit and int, this version uses constructors like Lit and Int. However, despite the similarity of the two solutions, their expressive power is not the same. The smart datatype solution in this section is fully closed to extension. That is, in order to add another case to the formatting list, such as a constructor Chr that handles characters, we would need to modify the GADT itself. On the other hand, the solution in the previous section, using the explicit version of the design pattern, allows some form of extensibility: adding a new case for printf that handles characters corresponds to adding a new function, which could even be in a different module.
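For comparison with the explicit version, here is an equally small usage sketch of the GADT-based printf (greeting is our own example name, not from the original development); the format value is literally a list-like term built from the constructors above.

-- Assuming printf p = printfApp p "" as before.
greeting :: Printf (String -> String)
greeting = Lit "Hello, " (Str (Lit "!" Eod))

-- printf greeting "world"  evaluates to  "Hello, world!"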
4.2.3 Making use of a dispatcher

The two solutions that we presented did not make any use of a dispatcher. In this section we will show how the dispatcher can be useful. The version of the dispatcher presented here is for the explicit representation solution in Section 4.2.1, but could be easily adapted to the smart datatype solution in Section 4.2.2.

Suppose that we want to define a function that prints a pair of integers. Equipped with printf, we could try to encode that with either one of the following two functions.

printPair x y = printf fmt "(" x ", " y ")"
  where fmt = str $ int $ str $ int $ str $ eod

printPair2 x y = printf fmt x y
  where fmt = lit "(" $ int $ lit ", " $ int $ lit ")" $ eod

The function printPair tackles the problem using a printf that takes a format argument expecting five arguments: three strings and two integers. The function printPair2, on the other hand, makes use of the fact that the string arguments are constants, and uses lit instead. Thus, in this case, printf takes the format argument and two integer arguments. Although relatively compact, the format argument is not as convenient to use as it would be in C, where one would write something like "(%d, %d)".

The role of the dispatcher is to infer automatically the corresponding type representation for some type t. In the case of printf, it is not possible to infer all possible representations. Consider, for instance, the end-of-line case eol :: f r -> f r, which takes an existing format with some type r, adds a newline and returns a format of the same type. Clearly, there is no way to deduce that there is an occurrence of eol based on the type alone. Similarly, the lit case has no effect on the type. Nevertheless, the other, more type-informative, cases of printf can be inferred.

class Rep t where
  rep :: Format f => f t

instance Rep String where
  rep = eod
instance Rep r => Rep (Int -> r) where
  rep = int rep
instance Rep r => Rep (String -> r) where
  rep = str rep

We should note that these instance declarations are outside the scope of Haskell 98 -- types are used where type variables should occur. However, this is a quite mild extension, and is supported by most Haskell compilers.

Making use of the fact that now we can infer some cases of the string format, we could define:

printPair :: Int -> Int -> String
printPair x y = printf rep "(" x ", " y ")"

printTrio :: Int -> Int -> Int -> String
printTrio x y z = printf rep "(" x ", " y ", " z ")"

The function printPair does the same as before. However, with this new definition, the format directive is automatically inferred. The function printTrio does the same as printPair, except that it does it for triples. We should emphasise that the occurrences of printf in those two functions use different numbers of arguments. We should also mention that, in some situations, we will need to provide explicit types, otherwise the type checker would not be able to infer the correct instances of the type class Rep.

This use of printf seems to be practical, and for this simple version of it we might even argue that everything that we could do with a manually-provided parameter could be done with an automatically-inferred one. We simply do not need lit and eol, because those can be simulated using str (with, of course, extra String arguments). Nevertheless, if we decided to go for a more powerful version of printf, this might not be the case. Consider, for instance, the formatting directive "%2d". In this case the number 2 specifies the minimum width of the string that represents that number. If we wanted to allow this kind of behaviour, we could add an extra parameter of type Int to the int case. However, the problem now is to choose a value for that parameter when we automatically build the format directive. In this case we need to use some default value (for instance 1). However, we are then no longer able, in all possible cases, to simulate the functionality of printf with manual format strings using only automatically-built ones.

5 Type-indexed types

Until now we have been discussing type-indexed functions, that is, families of functions indexed by types. We turn now to type-indexed types, that is, families of types indexed by types. In the context of generic programming, we call these generic types. Generic functions with generic types are functions that have different result types for each structural case. In this section, we will show how to implement type-indexed types as another variation of the TypeCase design pattern.
We do\nthis by translating a standard example of Generic Haskell [20],\nnamely generic tries [17], into our approach.\n105\n5.1\nEncoding type-indexed types\nSection 3 presents templates for encoding type-indexed functions.\nIn this section, we show how to translate a type-indexed type into\nan instance of the TypeCase design pattern.\nIn general, a type-indexed type has the form\nt ::\n::\nt\n1\na\n1\n... a\ni\n= d\n11\n... d\n1n\n\n1\n.\n.\n.\nt\nm\nz\n1\n... z\nj\n= d\nm1\n... d\nmn\n\nm\nwhere is the type-level function that defines the type-indexed\ntype; t is the family of types (or type constructors)\n(t\n1\na\n1\n... a\ni\n),...,\n(t\nm\nz\n1\n... z\nj\n) of kind that corresponds to the structural cases\nof the design pattern; and, finally, is the kind of t :: . For\neach type that is member of that family, we have a corresponding\nbranch for . The type-level lambda abstraction on the right side of\neach branch is optional, and corresponds to possible parametrically\npolymorphic variables d\n1\n... d\nn\nthat the type-indexed type might\ndepend on. Finally,\n1\n...\nm\ncorresponds to the family of types (or\ntype constructors) that defines the type-indexed type.\n5.1.1\nType class translation\nWe can now derive an instance of the TypeCase design pattern\nto capture type-indexed functions with type-indexed types. The\ntypecase participant, for instances of the design pattern using either\nimplicit or explicit representations, could be defined as follows.\nclass\n(g :: ) where\ncase\nt\n1\n::\n(a\n1\n... a\ni\n)\ng\n(t\n1\na\n1\n... a\ni\n) d\n11\n... d\n1n\n\n1\n.\n.\n.\ncase\nt\nm\n::\n(z\n1\n... z\nj\n)\ng\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmn\n\nm\nWe reuse the name for the name of the type class that encodes\nthe typecase component. The parameter g is a type constructor\nwith kind\n, where is the literal occurrence\nof (if we were to use instead of its literal occurrence\n, we would obtain the wrong kind). There are m functions\ncase\nt\n1\n,...,case\nt\nm\nthat correspond to the typecases for each type\n(t\n1\na\n1\n... a\ni\n),...,(t\nm\nz\n1\n... z\nj\n). Each case of the typecase function\nis defined by providing the type constructor g with the corre-ponding\ntypes. Finally,\n(a\n1\n... a\ni\n)\n,...,\n(z\n1\n... z\nj\n)\ncorresponds to the\nrepresentations for the types\n(a\n1\n... a\ni\n),...,(z\n1\n... z\nj\n).\nThe only difference between explicit and implicit versions of the\ndesign pattern for the typecase component is that in the explicit version\nthe occurrences of are expanded into explicitly-passed representations\nof the form g a\n\n..., whereas with the implicit representations\nthose occurrences are replaced by type class constraints\nof the form Rep\n\na\n\n....\nThe dispatcher can also be derived; but to do so requires extensions\nto Haskell 98 -- specifically, multiple-parameter type classes\nwith functional dependencies. The problem is that, even in its\nsimplest form, a type-indexed type requires at least two type arguments\n: the first one corresponding to the index type, and the\nsecond one that is the resulting type-indexed type for that index,\nand thus depending on the index. This problem is not too serious if\nwe use the explicit representations variant of the pattern, since the\ndispatcher is optional, but using implicit representations forces us\noutside Haskell 98.\nclass Rep\n\nt d\n1\n... d\nn\n\n| t d\n1\n... d\nn\nwhere\nrep :: g\ng t d\n1\n... d\nn\n\ninstance\n(a\n1\n... a\ni\n)\n\nRep\n\n(t\n1\na\n1\n... a\ni\n) d\n11\n... 
d\n1n\n\n1\nwhere\nrep\n= case\nt\n1\n{rep\ni\n}\n.\n.\n.\ninstance\n(z\n1\n... z\nj\n)\n\nRep\n\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmn\n\nm\nwhere\nrep\n= case\nt\nm\n{rep\nj\n}\nThe type class Rep\n\nhas at least two type arguments: t and . If\nthere are parametric types that depends on, then the type class\nalso needs to account for those types (d\n1\n... d\nn\n). The class contains\njust one member function, rep, used to build representations for\n. The function rep has a type class constraint ensuring that g\nis an instance of . There are, at least, m instances of Rep\n\n, and\nthose instances define rep with the corresponding case\nt\n\nfunction.\nIf we are implementing an implicit version of the design pattern,\nthen the definition of rep is complete; otherwise, for an explicit\nversion, we need to apply case\nt\n\nto a number i of rep functions\n(where i is the number of type parameters of t\n\n). The constraints\n\n(a\n1\n... a\ni\n)\n,...,\n(z\n1\n... z\nj\n)\nare very similar to the constraints , and in\nfact for implicit representations they coincide: they correspond to\nrepresentations for the types a\n1\n... a\ni\n,...,z\n1\n... z\nn\n.\n5.1.2\nSmart datatype translations\nEncoding type-indexed functions with smart datatypes proceeds\nin a similar fashion to the encoding with type classes. We will\ndemonstrate how to do this translation using a GADT syntax (as\nfound in the new GHC 6.4 Haskell compiler).\nA type-indexed type generates a smart datatype of the following\nform.\ndata t d\n1\n... d\nn\nwhere\nc\nt\n1\n::\n(a\n1\n... a\ni\n)\n\n(t\n1\na\n1\n... a\ni\n) d\n11\n... d\n1n\n\n1\n.\n.\n.\nc\nt\nm\n::\n(z\n1\n... z\nj\n)\n\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmn\n\nm\nInstead of being parametrised by a \"function\" (like the type class\napproach), a smart datatype is parametrised by all the types on\nwhich it depends. Another difference from the type class approach\nis that the functions that represent each case are now replaced\nby constructors c\nt\n1\n,...,c\nt\nm\nthat can just be pattern matched (in a\ndependent manner) by functions defined over those datatypes. A\nfinal difference is that\n(a\n1\n... a\ni\n)\n,...,\n(z\n1\n... z\nj\n)\nneed to reflect the fact\nthat we are now using a smart datatype.\nThe changes to Rep\n\nare minimal; the only change to the type\nclass version is that in the definition of rep we now use the constructors\nc\nt\n1\n,...,c\nt\nm\ninstead of the functions case\nt\n1\n,...,case\nt\nm\n.\nclass Rep\n\nt d\n1\n... d\nn\n\n| t d\n1\n... d\nn\nwhere\nrep :: t d\n1\n... d\nn\n\ninstance\n(a\n1\n... a\ni\n)\n\nRep\n\n(t\n1\na\n1\n... a\ni\n) d\n11\n... d\n1n\n\n1\nwhere\nrep\n= c\nt\n1\nrep\ni\n.\n.\n.\ninstance\n(z\n1\n... z\nj\n)\n\nRep\n\n(t\nm\nz\n1\n... z\nj\n) d\nm1\n... d\nmn\n\nm\nwhere\nrep\n= c\nt\nm\nrep\nj\n5.2\nTries\nTries or digital search trees are a traditional example of a generic\ntype. Tries make use of the structure of search keys in order\nto organise information, which can then be efficiently queried. In\nthis section we will show how to implement generic tries using a\nvariation of the LAGP type representations. 
For a more theoretical presentation of tries, see [20, 17]; the implementation of tries presented here follows closely the implementations found in those papers. In [20], the generic type for tries is given as follows.

FMap<t :: *> :: * -> *
FMap<Unit>       v = Maybe v
FMap<Int>        v = MapInt v
FMap<Plus t1 t2> v = OptPair (FMap<t1> v) (FMap<t2> v)
FMap<Prod t1 t2> v = FMap<t1> (FMap<t2> v)

It is clear that the type-indexed function FMap takes a type parameter t :: * and another type of kind * and returns another type of kind *. Only the shape of the parameter t is analysed; the other parameter v needs to be used in the definition because the resulting type is parametrically polymorphic in relation to v. We encode this characterisation of FMap as follows.

class FMap g where
  unit :: g Unit v Maybe
  plus :: g a v c -> g b v d -> g (Plus a b) v (PlusCase c d)
  prod :: g a (d v) c -> g b v d -> g (Prod a b) v (ProdCase c d)
  data :: g a v c -> Iso b a -> Iso (d v) (c v) -> g b v d
  int  :: g Int v MapInt

This class forms the typecase participant of an explicit representation variant of the TypeCase pattern. The class FMap is a variation of the Generic class from Section 2.2.2. The functor g :: * -> * -> (* -> *) -> * takes the necessary information to rebuild the type-indexed type. The three parameters of the functor correspond, respectively, to the type parameter t, the second parameter and the resulting type of FMap. (The kind of the resulting type is now * -> *. We could have used kind * as in FMap, but we believe this version is slightly more readable.) The function unit just reflects the change of the functor g and adds the information for the parametric type v and the functor Maybe that is used to define the trie for the Unit case. The cases for plus and prod have explicit arguments that correspond to the recursive calls of the function; and the functors PlusCase c d and ProdCase c d correspond to the respective cases of the type-indexed type. The data function handles user-defined datatypes, having a recursive case and two isomorphisms: the first between the structural cases and a second between the tries corresponding to those cases. Finally, we could also define some extra base cases to handle primitive types such as Int and Char.

The auxiliary definitions for the newtypes PlusCase a b v and ProdCase a b v are defined as follows.

data    OptPair a b    = Null | Pair a b
newtype PlusCase a b v = PlusCase { unPlus :: OptPair (a v) (b v) }
newtype ProdCase a b v = ProdCase { unProd :: a (b v) }

The introduction of OptPair a b is for efficiency reasons [20].
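To make the connection to the FMap specification concrete, one can read off the trie type for a small structural key type directly from these newtypes (a hedged illustration; EitherUnitIntTrie is our own name, not from the original development).

-- For keys of type Plus Unit Int, the generic trie amounts to
--   PlusCase Maybe MapInt v  ~  OptPair (Maybe v) (MapInt v)
-- i.e. an optional pair of a trie for the Unit summand and a trie for the Int summand.
type EitherUnitIntTrie v = PlusCase Maybe MapInt v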
In order to use a user-defined type (or a built-in type that does not have a special case for it), we need to do much the same work as for GM2 in Section 2.2.2. As an example, we show what to do for Haskell's built-in lists.

list :: FMap g => g a (FList c v) c -> g [a] v (FList c)
list ra = data (plus unit (prod ra (list ra))) listEP (Iso unFList FList)

listEP :: Iso [a] (Plus Unit (Prod a [a]))
listEP = Iso fromList toList
  where
    fromList []       = Inl Unit
    fromList (x : xs) = Inr (Prod x xs)
    toList (Inl Unit)        = []
    toList (Inr (Prod x xs)) = x : xs

newtype FList c v = FList {
  unFList :: (PlusCase Maybe (ProdCase c (FList c))) v }

The function list defines the encoding for the representation of lists. Because lists are a parametrised datatype with one type parameter, list is a function that takes one argument; this argument corresponds to the representation of the list type argument, and list returns the representation for lists. The definition is nearly the same as the equivalent for GM, but it takes an extra isomorphism describing the mapping between the structural representation of a list trie and a newtype FList c v that is introduced to represent the resulting list trie. The function listEP is just the isomorphism [a] ≅ 1 + a × [a]. This means that listEP can be shared with other versions of generics that use the same structural cases. However, list and FList c v still have to be introduced for each type-indexed datatype. Nevertheless, that is boilerplate code and, with compiler support, it should be possible to avoid writing it.

Having set up the main components of the design pattern, we can now move on to define our first function over tries. The function empty creates a new empty trie and can be defined as follows.

newtype EmptyTrie a v t = EmptyTrie { empty :: t v }

instance FMap EmptyTrie where
  unit             = EmptyTrie Nothing
  int              = EmptyTrie (MapInt [])
  plus ra rb       = EmptyTrie (PlusCase Null)
  prod ra rb       = EmptyTrie (ProdCase (empty ra))
  data ra iso iso2 = EmptyTrie (to iso2 (empty ra))

This function is very simple but, nonetheless, it has a type-indexed type: the unit case returns Nothing; the int case returns a value of a user-defined type for integer tries; the cases for prod and plus return, respectively, values of the previously defined ProdCase and PlusCase types; finally, the data case returns a value of the newtype used to represent the trie of some user-defined datatype.

Another function that we will probably want to have in a library for tries is the lookUp function which, given a key, returns the corresponding value stored in the trie.

newtype LUp a v t = LUp { lookUp :: a -> t v -> Maybe v }

instance FMap LUp where
  unit       = LUp (\_ fm -> fm)
  int        = LUp (\i fm -> lookUpInt i fm)
  plus ra rb = LUp (\t fm ->
                 case unPlus fm of
                   Null         -> Nothing
                   Pair fma fmb -> case t of
                     Inl l -> lookUp ra l fma
                     Inr r -> lookUp rb r fmb)
  prod ra rb = LUp (\t (ProdCase fma) ->
                 case t of
                   Prod x y -> (lookUp ra x >=> lookUp rb y) fma)
  data ra iso iso2 = LUp (\t r -> lookUp ra (from iso t) (from iso2 r))

(The operator >=> represents monadic composition.)
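These generic definitions can be instantiated at concrete key types simply by assembling the corresponding representations (a hedged sketch; the names below are our own illustration, not from the original development).

-- Tries keyed by lists of Ints, built from the pieces above.
emptyIntListTrie :: FList MapInt v
emptyIntListTrie = empty (list int)

lookUpIntList :: [Int] -> FList MapInt v -> Maybe v
lookUpIntList = lookUp (list int)

-- lookUpIntList [1,2,3] emptyIntListTrie  evaluates to  Nothing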
The functions empty and lookUp have definitions that only make generic calls to themselves. However, that is not the case for all generic functions. One such function is the generic function that creates a trie containing a single element; a possible definition makes use of the generic function empty. We discussed in Section 3 that, using an explicit version of the design pattern, there are some issues with generic functions calling generic functions other than themselves. One solution for this problem is using tupling. Just as one does with a type class, we would choose a fixed set of functions and group them together in a record. For instance, in the case of tries, we could have the following.

data Tries a v t = Tries {
  empty   :: t v,
  isempty :: t v -> Bool,
  single  :: a -> v -> t v,
  lookup  :: a -> t v -> Maybe v,
  insert  :: (v -> v -> v) -> a -> v -> t v -> t v,
  merge   :: (v -> v -> v) -> t v -> t v -> t v,
  delete  :: a -> t v -> t v }

With our definition we could, for any function in the record, make mutual generic calls. Whilst we could have used a multiple-parameter type class with functional dependencies in order to implement this library of functions over tries, there would be one important disadvantage in doing so (apart from the fact that we need to leave Haskell 98): we can only have functions on types of kind *. With type classes, contexts are implicitly passed, and there is no way to redefine those implicit behaviours. In other words, type classes have the same limitation as implicit representations as a version of the TypeCase design pattern, in that they can only work on types. On the other hand, because we use external representations, with this implementation we can define generic functions over type constructors.

Tupling is not the only option to solve the problem of generic function calls. Another possibility is to have the notion of dependencies: instead of tupling all functions together, we can, for each generic function that we need to use, include one instance of that function. Here is a possible definition of single using this strategy.

data Single a v t = Single {
  emptyT :: EmptyTrie a v t,
  single :: a -> v -> t v }

instance FMap Single where
  unit       = Single unit (\_ v -> Just v)
  int        = Single int (\i v -> MapInt [(i, v)])
  plus ra rb = Single (plus (emptyT ra) (emptyT rb))
                 (\i v -> case i of
                    Inl l -> PlusCase (Pair (single ra l v) (empty (emptyT rb)))
                    Inr r -> PlusCase (Pair (empty (emptyT ra)) (single rb r v)))
  prod ra rb = Single (prod (emptyT ra) (emptyT rb))
                 (\i v -> case i of
                    Prod x y -> ProdCase (single ra x (single rb y v)))
  data ra iso iso2 = Single (data (emptyT ra) iso iso2)
                      (\i v -> to iso2 (single ra (from iso i) v))

The idea of dependencies is motivated by Dependency-Style Generic Haskell [26, 27]. In this version of Generic Haskell, the type system reflects the uses of generic functions in the definitions by keeping track of constraints that identify such uses. With this definition, we have to manually introduce those dependencies by adding extra fields to the record that keep track of all the functions on which the definition depends. That change is also reflected in the instance that defines the generic function, where we need to provide values for the extra fields; the values for those fields just reconstruct the dependent functions with their values for those fields.

6 Discussion and conclusions

The goal of design patterns is not to come up with a miraculous solution for a problem. Instead, design patterns capture good techniques that appear in the literature or in practice, in a variety of contexts, and document them to make them easier to identify and implement. In this paper we have generalised the technique found in LIGD and GM to a design pattern, and presented a number of applications of the pattern. Furthermore, we have identified other occurrences of the design pattern in the literature.

6.1 Related work

The technique used by Danvy [10] and generalised by Yang [37] allows us to encode type-indexed values in a Hindley-Milner type system.
This encoding is directly related to the explicit representation\nversion of the TypeCase pattern. This technique influenced\nmany other works, ranging from type-directed partial evaluation\n[37, 9, 12], through embedded interpreters [2], to a generalisation\nof families of functions like zipWith [13] -- these are all possible\napplications of the TypeCase design pattern. Our paper revises that\ntechnique and shows how slightly richer type systems can be used\nto improve it. In particular, the use of a dispatcher makes it possible\nto automatically built the values encoding types. Moreover,\nthe issue of composability (identified by Yang), while still a problem\n, can benefit from stronger type systems: the use of rank-two\ntypes combined with type classes provides a good solution.\nThe work on extensional polymorphism [11] presents an approach\nthat allows functions to implicitly bind the types of their\narguments in a modified version of ML. Furthermore, using a\ntypecase construct it is possible to support generic programming.\nHarper and Morrisett's work on intensional type analysis [16]\npresents an intermediate language where run-time type analysis is\npermitted, using typecase and Typecase constructs to define type-indexed\nfunctions and type-indexed types, respectively. However,\napproaches based on run-time type analysis have important drawbacks\n; for instance, they cannot support abstract datatypes, and\nthey do not respect the parametricity theorem [35, 33]. Subsequent\napproaches to intensional type analysis by Crary and others [7, 6]\nuse a type-erasure semantics that does not suffer from those problems\n. Still, those approaches were limited to first-order type analysis\n. More recently, Weirich [36] proposed a version of intensional\ntype analysis covering higher-order types with a type-erasure semantics\n. Furthermore, she presented an implementation in Haskell\n(augmented with rank-two types). This work inspired Hinze's implementation\nof GM, which shows, in essence, how to avoid rank-two\ntypes by using Haskell's class system. Our work makes use\nof those results and explains how to simulate typecase constructs.\nFurthermore, we show that the limitation of GM that generic functions\nwith generic types cannot be defined can be lifted with our\nmore general interpretation.\nGeneric programming (or perhaps datatype-generic programming\n[15]) is about defining functions and types that depend on\nthe structure of other types. One of the first attempts to produce\na generic programming language was PolyP [21]. This language\nallowed the definition of generic functions over regular datatypes\nwith one type parameter. In Section 4.1 we show that, using our\ndesign pattern, it is possible to define PolyP-like generic functions\njust using Haskell 98. A previous attempt [30] to define first-class\nPolyP functions in Haskell required extensions to the language.\nThe Generic Haskell [26, 5] project is more ambitious than PolyP,\nand aims at defining generic functions for nearly all types defin-able\nin Haskell 98. Furthermore, Generic Haskell features generic\ntypes and generic function customisation (which were not present\nin PolyP). Dependency-Style Generic Haskell [26, 27] introduces\na rather complex type system that keeps track of dependencies on\ngeneric function calls. The need for this sophisticated type system\nis a consequence of a model for generic programming that allows\ngeneric function customisation. 
The approach presented in [24] is\nanother kind of lightweight approach to generic programming, relying\non a run-time type-safe cast operator. With that operator it is\npossible to define a number of traversals that allow a very interesting\nmodel of generic programming based on nominal typing. Our\ndesign pattern can be used to encode many of the generic definitions\nthat these generic programming techniques allow. However,\nit can be less practical than approaches providing a special-purpose\ncompiler. Nevertheless, the advantage of our technique is that we\ndo not need to commit in advance to a model of generic programming\n: we have the freedom to choose our own model of generic\nprogramming.\nDesign patterns in the object-oriented programming community\nhave been given a great deal of attention. Whilst amongst the\n108\nfunctional programming community there has been some work\non -- or, at least, involving the concept of -- design patterns\n[24, 23, 25], the concept is still much less popular than in the object-oriented\ncommunity. Moreover, most of this work presents patterns\nthat are really more like algorithmic patterns rather than design\npatterns. Perhaps the reason why this happens is that functional\nlanguages are very expressive, and often natural features of those\nlanguages, such as laziness or higher-order functions, can be used\nto remove the need for complex designs. Nevertheless, we believe\nthat our design pattern is more related to the OO concept of a design\npattern with type classes/datatypes taking the role of OO interfaces\nand class instances taking the role of OO concrete classes. One\ndifficulty found in this work had to do with the fact that, unlike\nOO design patterns which are documented using informal notations\nsuch as UML, we do not have a notation to \"talk\" about the design\nof Haskell programs. The notation that we used is quite ad-hoc and\nit can be difficult to read.\n6.2\nFuture work\nWe mentioned that this design pattern seems to be very similar to\nOO design patterns. It would be interesting to explore the applicability\nof this design pattern in an OO environment.\nDesign patterns are useful to overcome the lack of certain features\nin programming languages. In our case, we overcome the\nlack of a typecase construct. The work on intensional type analysis\ninvestigates the possibility of languages supporting typecase constructs\ndirectly in the language. Combining these results in order to\nextend Haskell with a more natural support for typecase programming\nis something we would like to try in the future.\nProblems that use multiple instances of the design pattern are\nnot composable. For instance, in a generic programming context,\nwe could have a class Generic that allowed us to define generic\nfunctions with one type parameter; and we could also have a class\nFMap for working with tries. Although, those classes are structured\nin a similar way, they require two distinct representations of types,\none for each of the classes; we hope to address this impracticality.\nAcknowledgements\nWe would like to thank Ralf Hinze for the discussion that inspired\nthis paper. Stefan Holdermans, the anonymous referees and the\nmembers of the Algebra of Programming group at Oxford and\nthe EPSRC-funded Datatype-Generic Programming project made\na number of helpful suggestions.\nReferences\n[1] C. Alexander. A Pattern Language. Oxford University Press, 1977.\n[2] N. Benton. Embedded interpreters. Microsoft Research, Cambridge,\nJan. 2005.\n[3] R. Bird and O. 
de Moor. Algebra of Programming. International\nSeries in Computer Science. Prentice Hall, 1997.\n[4] J. Cheney and R. Hinze. A lightweight implementation of generics\nand dynamics. In Haskell Workshop, pages 90104, 2002.\n[5] D. Clarke and A. Lh. Generic Haskell, specifically. In Generic\nProgramming, pages 2147. Kluwer, B.V., 2003.\n[6] K. Crary and S. Weirich. Flexible type analysis. In International\nConference on Functional Programming, pages 233248, 1999.\n[7] K. Crary, S. Weirich, and J. G. Morrisett. Intensional polymorphism\nin type-erasure semantics. In International Conference on Functional\nProgramming, pages 301312, 1998.\n[8] N. A. Danielsson and P. Jansson. Chasing bottoms: A case study in\nprogram verification in the presence of partial and infinite values. In\nD. Kozen, editor, LNCS 3125: Mathematics of Program Construction,\npages 85109. Springer-Verlag, 2004.\n[9] O. Danvy. Type-directed partial evaluation.\nIn Principles of\nProgramming Languages, 1996.\n[10] O. Danvy. Functional unparsing. Journal of Functional Programming\n, 8(6):621625, 1998.\n[11] C. Dubois, F. Rouaix, and P. Weis. Extensional polymorphism. In\nPrinciples of Programming Languages, pages 118129, 1995.\n[12] P. Dybjer and A. Filinski. Normalization and partial evaluation. In\nLNCS 2395: Applied Semantics, pages 137192. Springer, 2002.\n[13] D. Fridlender and M. Indrika. Do we need dependent types? Journal\nof Functional Programming, 10(4):409415, 2000.\n[14] J. Gibbons. Calculating functional programs. In Algebraic and\nCoalgebraic Methods in the Mathematics of Program Construction,\npages 149202, 2000.\n[15] J. Gibbons. Patterns in datatype-generic programming. In Declarative\nProgramming in the Context of Object-Oriented Languages, 2003.\n[16] R. Harper and G. Morrisett.\nCompiling polymorphism using\nintensional type analysis. In Principles of Programming Languages,\npages 130141, San Francisco, California, 1995.\n[17] R. Hinze. Generalizing generalized tries. Journal of Functional\nProgramming, 10(4):327351, 2000.\n[18] R. Hinze. Fun with phantom types. In J. Gibbons and O. de Moor,\neditors, The Fun of Programming, pages 245262. Palgrave, 2003.\n[19] R. Hinze. Generics for the masses. In International Conference on\nFunctional Programming, pages 236243. ACM Press, 2004.\n[20] R. Hinze, J. Jeuring, and A. Lh. Type-indexed data types. Science\nof Computer Programming, 51(1-2):117151, 2004.\n[21] P. Jansson. Functional Polytypic Programming. PhD thesis, Chalmers\nUniversity of Technology, May 2000.\n[22] J. Jeuring and P. Jansson. Polytypic programming. In J. Launchbury,\nE. Meijer, and T. Sheard, editors, LNCS 1129: Advanced Functional\nProgramming, pages 68114. Springer-Verlag, 1996.\n[23] T. Khne. A Functional Pattern System for Object-Oriented Design.\nVerlag Dr. Kovac, ISBN 3-86064-770-9, Hamburg, Germany, 1999.\n[24] R. Lmmel and S. Peyton Jones. Scrap your boilerplate: a practical\ndesign pattern for generic programming. In Types in Language Design\nand Implementation, 2003.\n[25] R. Lmmel and J. Visser. Design patterns for functional strategic\nprogramming. In Workshop on Rule-Based Programming, 2002.\n[26] A. Lh. Exploring Generic Haskell. PhD thesis, Utrecht University,\n2004.\n[27] A. Lh, D. Clarke, and J. Jeuring. Dependency-style Generic Haskell.\nIn International Conference on Functional Programming, pages 141\n152, 2003.\n[28] E. Meijer, M. Fokkinga, and R. Paterson. Functional programming\nwith bananas, lenses, envelopes and barbed wire. 
In LNCS 523: Functional Programming Languages and Computer Architecture, pages 124–144. Springer-Verlag, 1991.
[29] D. Menendez. Fixed-length vectors in Haskell. http://www.haskell.org/pipermail/haskell/2005-May/015815.html.
[30] U. Norell and P. Jansson. Polytypic programming in Haskell. In Implementing Functional Languages, 2003.
[31] S. Peyton Jones, editor. Haskell 98 Language and Libraries: The Revised Report. Cambridge University Press, 2003.
[32] S. Peyton Jones, G. Washburn, and S. Weirich. Wobbly types: Type inference for generalised algebraic data types. Microsoft Research, Cambridge, 2004.
[33] J. C. Reynolds. Types, abstraction and parametric polymorphism. In Information Processing 83, pages 513–523. Elsevier, 1983.
[34] P. Wadler. Views: a way for pattern matching to cohabit with data abstraction. In Principles of Programming Languages, pages 307–313. ACM Press, 1987.
[35] P. Wadler. Theorems for free! In Functional Programming and Computer Architecture, 1989.
[36] S. Weirich. Higher-order intensional type analysis in type-erasure semantics. http://www.cis.upenn.edu/~sweirich/papers/erasure/erasure-paper-july03.pdf, 2003.
[37] Z. Yang. Encoding types in ML-like languages. In International Conference on Functional Programming, pages 289–300, 1998.

Keywords: generic programming; type-indexed functions; type classes

UML-Based Service Robot Software Development: A Case Study

Abstract: The research field of Intelligent Service Robots, which has become more and more popular over the last years, covers a wide range of applications from climbing machines for cleaning large storefronts to robotic assistance for disabled or elderly people. When developing service robot software, it is a challenging problem to design the robot architecture by carefully considering user needs and requirements, implement robot application components based on the architecture, and integrate these components in a systematic and comprehensive way for maintainability and reusability. Furthermore, it becomes more difficult to communicate among development teams and with others when many engineers from different teams participate in developing the service robot. To solve these problems, we applied the COMET design method, which uses the industry-standard UML notation, to developing the software of an intelligent service robot for the elderly, called T-Rot, under development at Center for Intelligent Robotics (CIR). In this paper, we discuss our experiences with the project in which we successfully addressed these problems and developed the autonomous navigation system of the robot with the COMET/UML method.

INTRODUCTION

Robots have been used in several new applications. In recent years, both academic and commercial research has been focusing on the development of a new generation of robots in the emerging field of service robots. Service robots are individually designed to perform tasks in a specific environment for working with or assisting humans and must be able to perform services semi- or fully automatically [1]. Examples of service robots are those used for inspection, maintenance, housekeeping, office automation and aiding senior citizens or physically challenged individuals [2].
A\nnumber of commercialized service robots have recently been\nintroduced such as vacuum cleaning robots, home security robots,\nrobots for lawn mowing, entertainment robots, and guide robots\n[3, 4].\nIn this context, Public Service Robot (PSR) systems have been\ndeveloped for indoor service tasks at Korea Institute of Science\nand Technology (KIST) [5, 6]. The PSR is an intelligent service\nrobot, which has various capabilities such as navigation,\nmanipulation, etc. Up to now, three versions of the PSR systems,\nthat is, PSR-1, PSR-2, and a guide robot Jinny have been built.\nThe worldwide aging population and health care costs of aged\npeople are rapidly growing and are set to become a major problem\nin the coming decades. This phenomenon could lead to a huge\nmarket for service robots assisting with the care and support of\nthe disabled and elderly in the future [8]. As a result, a new\nproject is under development at Center for Intelligent Robotics\n(CIR) at KIST, i.e. the intelligent service robot for the elderly,\ncalled T-Rot.\nIn our service robot applications, it is essential to not only\nconsider and develop a well-defined robot software architecture,\nbut also to develop and integrate robot application components in\na systematic and comprehensive manner. There are several\nreasons for this:\n\nFirst, service robots interact closely with humans in a wide\nrange of situations for providing services through robot\napplication components such as vision recognition, speech\nrecognition, navigation, etc. Thus, a well-defined robot\ncontrol architecture is required for coherently and\nsystematically combining these services into an integrated\nsystem.\n\nSecond, in robot systems, there are many-to-many relations\namong software components as well as hardware\ncomponents. For instance, a local map module requires\nrange data from a laser scanner, ultrasonic sensors, and\ninfrared sensors, as well as prior geometrical descriptions of\nthe environment. On the other hand, the laser scanner should\nprovide its data to a path planner, localizer, and a local map\nbuilding module. These relationships, as well as interactions\namong software or hardware modules, must be carefully\nanalyzed and systematically managed from an early stage of\ndevelopment in order to understand the big picture.\n\nThird, the functional performance of each software and\nhardware module becomes highly dependent on the\narchitecture, as the number of robot platforms increases [6],\nand new services are added, or existing services are removed\nor updated to address changes in user needs.\n\nFourth, previously developed software modules like maps,\nlocalization, and path planners can be directly reused for\nnew tasks or services by service robot developers. Thus, a\nrobot architecture, as well as systematic processes or\nmethods, are required to support the implementation of the\nsystem, to ensure modularity and reusability.\nAs a consequence, in the previous work [5,6], the Tripodal\nschematic control architecture was proposed to tackle the\nproblems. Many related research activities have been done.\nHowever, it is still a challenging problem to develop the robot\narchitecture by carefully taking into account user needs and\nrequirements, implement robot application components based on\nthe architecture, and integrate these components in a systematic\nand comprehensive way. The reason is that the developers of\nservice robots generally tend to be immersed in technology\nspecific components, e.g. 
vision recognizer, localizer and path\nplanner, at an early stage of product development without\ncarefully considering architecture to integrate those components\nfor various services [9]. Moreover, engineers and developers are\noften grouped into separate teams in accordance with the specific\ntechnologies (e.g., speech processing, vision processing), which\nmakes integration of these components more difficult [7, 9]. In\nsuch a project like T-Rot, particularly, several engineers and\ndevelopers (i.e., approximately, more than 150 engineers) from\ndifferent organizations and teams participate in the\nimplementation of the service robot. Each separate team tends to\naddress the specific technologies such as object recognition,\nmanipulation, and navigation and so on. Engineers who come\nfrom different teams are concerned with different characteristics\nof the system. Thus, a common medium is required to create\nmutual understanding, form consensus, and communicate with\neach other for successfully constructing the service robot. Without\nsuch a medium or language, it is difficult to sufficiently\nunderstand the service robot system and interact between teams to\nintegrate components for services.\nWithin the domain of software engineering, many approaches\nhave been suggested for a systematic and complete system\nanalysis and design, and for the capture of specifications. The\nobject-oriented paradigm [10,11] is a widely-accepted approach\nto not only cover the external and declarative view of a system,\nbut also at the same time bridge seamlessly with the internal\nimplementation view of a system [13]. Object-oriented concepts\nare crucial in software analysis and design because they focus on\nfundamental issues of adaptation and evolution [14]. Therefore,\ncompared with the traditional structured software development\nmethods, object-oriented methods are a more modular approach\nfor analysis, design, and implementation of complex software\nsystems, which leads to more self-contained and hence modifiable\nand maintainable systems. More recently, the Unified Modeling\nLanguage (UML) [15,16] has captured industry-wide attention for\nits role as a general-purpose language for modeling software\nsystems, especially for describing object-oriented models. The\nUML notation is useful to specify the requirements, document the\nstructure, decompose into objects, and define relationships\nbetween objects in a software system. Certain notations in the\nUML have particular importance for modeling embedded systems\n[17,18], like robot systems. By adopting the UML notation,\ndevelopment teams thus can communicate among themselves and\nwith others using a defined standard [14,17,18]. More importantly,\nit is essential for the UML notation to be used with a systematic\nobject-oriented analysis and design method in order to be\neffectively applied [14].\nAs a result, our aim is to develop the intelligent service robot\nbased on the systematic software engineering method, especially\nfor real-time, embedded and distributed systems with UML. To\ndo so, we applied the COMET method, which is a UML based\nmethod for the development of concurrent applications,\nspecifically distributed and real-time applications [14]. 
By using\nthe COMET method, it is possible to reconcile specific\nengineering techniques with the industry-standard UML and\nfurthermore to fit such techniques into a fully defined\ndevelopment process towards developing the service robot\nsystems.\nIn this paper, we describe our experience of applying the COMET\n/UML method into developing the intelligent service robot for the\nelderly, called T-Rot, developed at CIR. In particular, we focused\non designing an autonomous navigation system for the service\nrobot, which is one of the most challenging issues for the\ndevelopment of service robots.\nSection 2 describes the hardware configuration and services of the\nT-Rot, and discusses the related work. Section 3 illustrates how to\napply the COMET method into designing and developing the\nautonomous navigation system for the service robot, and\ndiscusses the results of experiments. The lessons learned from the\nproject are summarized in section 4, and section 5 concludes the\npaper with some words on further work.\n\nBACKGROUD ON T-Rot\nFig. 1. KIST service robots\nAt KIST, intelligent service robots have been developed in large-scale\nindoor environments since 1998. So far, PSR-1 and PSR-2,\nwhich performs delivery, patrol, and floor cleaning jobs, and a\nguide robot Jinny, which provides services like exhibition guide\nand guidance of the road at a museum, have been built [5,6] (see\nFig. 1). The service robot T-Rot is the next model of the PSR\nsystem under development for assisting aged persons.\nDevelopment of T-Rot, in which our role is developing and\nintegrating robot software, started in 2003 by mainly CIR with\n535\nmore than 10 groups consisting of more than 150 researchers and\nengineers from academia and industry. This project is based on\nthe needs and requirements of elderly people through the studies\nand analysis of the commercial health-care market for providing\nuseful services to them. Thus, the aim of this project is to develop\nthe intelligent service robot for the elderly by cooperating and\nintegrating the results of different research groups. This project\nthat is divided into three stages will continue until 2013 and we\nare now in the first stage for developing the service robot\nincrementally to provide various services.\n2.2\n\nHardware of T-Rot\nThe initial version of T-Rot, as shown in Fig. 2, has three single\nboard computer (SBC), that is, mobile Pentium 4 (2.2GHz) and\n1GB SDRAM on each SBC. In terms of software environment,\nLinux Red hat 9.0 and RTAI (Real-Time Application Interface)\n[12] are used as operating system. Fig. 3 shows hardware\nconfiguration as a whole. As mentioned earlier, development of\nT-Rot is conducted incrementally for various services and thus the\nplatform will be extended with manipulators and robot hands later.\nIn our project, we developed the robot software based on the\ninitial version of the platform. The details of the hardware\nplatform are described in Table 1.\n\nFig. 2. T-Rot robot hardware platform\n\nFig. 3. T-Rot robot hardware platform configuration\nTable 1. 
T-Rot hardware platform devices\nIntel Mobile Pentium 4 (2.2 GHz)\n1GB SDRAM\nSBC\n30GB Hard Disk\n16 microphones for speaker localization\n1 microphone for speech recognition\nVoice\n1 speaker for speech generation\nVision\n2 stereo vision cameras for recognizing users and object\ns (1288 H x 1032 V maximum resolution and 7Hz fram\ne rates)\nPan/Tilt for controlling the vision part\n2 laser scanners (front and back)\n2 IR scanners (front and back)\n12 Ultrasonic sensors\nSensor\n1 Gyroscope sensor for measuring balance\n2 actuators for two drive wheels (right and left)\n2 free wheels (the support wheels)\n2 Servo Motors (100 [w])\n2 encoders (2048 ppr)\nActuator\n2 bumpers\n1 TFT LCD & Touch (10.4\" 1024x768, 26000 colors)\nKVM (Keyboard/Mouse)\nInterface\nWireless LAN for communications\n2.3\n\nRobot Services\nSome of the primary services under-developed that the initial\nversion for T-Rot provides for the elderly are described as below.\n\nVoice-based Information Services: The robot T-Rot can\nrecognize voice commands from a user (i.e., an aged person)\nvia microphones equipped with the robot and can synthesize\nvoices for services. While a user is watching TV, the user\ncan ask some questions about the specific TV program or\nrequest a task to open an Internet homepage by speaking the\nTV program name.\n\nSound Localization and Voice Recognition: A user can call\na robot's predefined name, to let the robot recognize the call\nwhile the robot knows the direction to move to the user. This\nservice analyzes audio data from 3 microphones on the\nshoulder for sound localization and 16 mic array on the head\nfor speech recognition to recognize the command from the\nuser.\n\nAutonomous navigation: A user can command the robot to\nmove to a specific position in the map to perform some task.\nFor instance, the robot can navigate to its destination in the\nhome environment via its sensors, which include laser\nscanners and ultrasonic sensors. The robot plans a path to\nthe specified position, executes this plan, and modifies it as\nnecessary for avoiding unexpected obstacles. While the\nrobot is moving, it constantly checks sensor data from its\nsensors every 200 ms.\n\nAn errand service: The robot can carry objects that a user\n(i.e., an aged person) usually uses, like a plate, books, a cane\na cup of tea, beverages, etc according to the user's\ninstructions. For instance, the user can order the robot to\nbring a cup of tea or beverage by speaking the name of the\ndrink.\nOf these T-Rot services, our emphasis was on the autonomous\nnavigation service, which is one of the most challenging issues\nand is essential in developing service robots, particularly mobile\nservice robots to assist elderly people. It includes hardware\nintegration for various sensors and actuators, and the development\nof crucial navigation algorithms like maps, path planners, and\n536\nlocalizers as well as software integration of software modules like\na path planner, a localizer, and a map building module.\n2.4\n\nControl Architecture of PSR\nUp to now, there have been many related research activities to\ndevelop efficient and well-defined control architectures and\nsystem integration strategies for constructing service robots. A\nrecent trend is that many control architectures are converging to a\nsimilar structure based on a hybrid approach that integrates\nreactive control and deliberation [6]. 
At KIST, for developing\nservice robots, that is PSR-1, PSR-2, and Jinny in the previous\nwork [5,6], the Tripodal schematic control architecture was\nproposed as the solution to the problem.\nOne important point of Tripodal schematic design is to integrate\nrobot systems by using a layered functionality diagram. The\nlayered functionality diagram is a conceptual diagram of three\nlayers for arrangement of various hardware and software modules\nand functions. It also shows the connectivity and the information\nflow between components. Those layers are composed of\ndeliberate, sequencing, and reactive layers based on the hybrid\napproach. The purposes of the deliberate layer are to interface\nwith a user and to execute a planning process. The sequencing\nlayer is classified into two groups, that is, the controlling part that\nexecutes the process by managing the components in the reactive\nlayer and the information part that extracts highly advanced\ninformation from sensor data. The reactive layer controls the real-time\ncommand and hardware-related modules for sensors and\nactuators. The detailed description of whole control architecture\nof the PSR is introduced in [5].\nHowever, as described earlier, in order to effectively apply this\napproach and the UML notation to developing service robots, it is\nessential to use a systematic software engineering process or\nmethods like object-oriented analysis and design methods,\nespecially for real-time and embedded systems. We believe that\nonly a systematic and comprehensive software development\nprocess and method will be able to resolve the issues discussed\nbefore and will be vital for success in developing service robots.\n2.5\n\nThe COMET method\nCOMET [14] is a method for designing real-time and distributed\napplications, which integrates object-oriented and concurrent\nprocessing concepts and uses the UML notation [15,16]. The\nCOMET object- oriented software life cycle model is a highly\niterative software development process based around the use case\nconcept. Therefore, in this project, the COMET method with\nUML was used to develop a system for autonomous navigation by\nthe intelligent service robot, T-Rot. The method separates\nrequirements activities, analysis activities and design activities,\nand these activities are briefly described as below. The details are\ndescribed in section 3 with the case study.\n\nRequirements modeling - A use case model is developed in\nwhich the functional requirements of the system are defined\nin terms of actors and use cases.\n\nAnalysis modeling - Static and dynamic models of the\nsystem are developed. The static model defines the\nstructural relationships among problem domain classes. A\ndynamic model is then developed in which the use cases\nfrom the requirements model are refined to show the objects\nthat participate in each use case and how they interact with\neach other.\n\nDesign modeling The software architecture of the system\nis designed, in which the analysis model is mapped to an\noperational environment. 
For distributed applications, a component-based development approach is taken, in which each subsystem is designed as a distributed self-contained component.

APPLYING THE COMET/UML METHOD TO T-ROT

In this section, we explain how to develop robot software for the autonomous navigation system with the COMET/UML method. In our project, the UML notation conforms to UML 1.3 and the Rational Rose tool is used.

3.1 Requirements Modeling

Capturing the functional requirements of the system is the first phase in software development, which defines what the system should do or provide for the user. In our approach, developers can capture the functional requirements or services by using the use case model in terms of use cases and actors (see Fig. 4). To identify and define the requirements of the system more clearly, the system has to be considered like a black box. In the service robot, the actor can be a human user as well as an external I/O device or an external timer.

Fig. 4. Use case diagram for Navigation (actors: Commander, Clock; the Obstacle Avoidance use case extends the Navigation use case)

Table 2 shows a specification for the Navigation use case. In our navigation system, we identified a Commander and a Clock as actors. While the robot is moving, if it recognizes obstacles, it should avoid them and continue to the destination. Even when humans or objects suddenly appear, the robot must be able to stop to avoid crashing into them. However, in order to do this, the robot has to check for obstacles by using sensor data more often (e.g., every 50 ms) than the normal navigation system does (e.g., every 200 ms). As a result, the Obstacle Avoidance use case is extended from the Navigation use case. While the Navigation use case is executing, if obstacles are recognized, then the Obstacle Avoidance use case is triggered to perform the emergency stop of the robot. If the obstacles disappear, the robot moves again to the destination.

Table 2. Navigation use case
Summary: The Commander enters a destination and the robot system moves to the destination.
Actor: Commander
Precondition: The robot system has the grid map and the current position is known.
Description:
1. The use case begins when the commander enters a destination.
2. The system calculates an optimal path to the destination.
3. The system commands the wheel actuator to start moving to the destination.
4. The wheel actuator notifies the system that it has started moving.
5. The system periodically reads sensor data and calculates the current position.
6. The system determines that it arrives at the destination and commands the wheel actuator to stop.
7. The wheel actuator notifies the system that it has stopped moving and the use case is finished.
Alternative: 6.1. If the system doesn't arrive at the destination, it keeps moving.
Postcondition: The robot system is at the destination and waiting for the next destination.

3.2 Analysis Modeling

3.2.1 Static Modeling

The objective of static modeling is to understand the interface between the system and the external environment and to describe the static structure of the system under development by developing a system context class diagram. It is specifically important for real-time and embedded systems like robot systems [14].
The system context class diagram can be determined by static modeling of the external classes that connect to the system.

[Fig. 5. Robot Navigation System context class diagram]

The system context class diagram of the Robot Navigation System is shown in Fig. 5, which illustrates the external classes to which the system has to interface. In our navigation system, a commander enters a destination via a command line, to which the robot should move. The system uses sensor data from various sensors such as laser scanners, IR scanners, ultrasonic sensors, etc., and it controls the wheels of the robot via the wheel actuator. Therefore, the external classes correspond to the users (i.e., a Commander who interacts with the system via a Command Line) and the I/O devices (i.e., a Sensor and Wheel Actuator). A Clock actor needs an external timer class called Clock to provide timer events to the system. This external timer class is needed to periodically check sensor data via those sensors for avoiding obstacles (i.e., doing the emergency stop) while the robot is moving.

Next, to structure the Robot Navigation System into objects, object structuring needs to be considered in preparation for dynamic modeling. The objective of object structuring is to decompose the problem into objects within the system. We identified the internal objects according to the object structuring criteria in COMET (see Fig. 6). In our system, the interface objects, i.e. a Command Line Interface, Sensor Interface and Wheel Actuator Interface, are identified from the external classes that interface to the system, i.e. the Command Line, Sensor, and Wheel Actuator, respectively. There are four entity objects identified, that is, a Current Position, Destination, Navigation Path and Navigation Map, which are usually long-living objects that store information. In addition to those objects, there is a need for control objects, which provide the overall coordination for objects in a use case and may be coordinator, state-dependent control, or timer objects. The Navigation System has a state-dependent control object called Navigation Control that controls the wheel actuator and sensors. The states of the Navigation Control object are shown on a Navigation Control statechart (this will be discussed in the dynamic modeling). There are two timer objects, i.e. a Navigation Timer and an Obstacle Avoidance Timer. The Obstacle Avoidance Timer is activated by a timer event from an external timer to periodically check whether there is any obstacle around the robot. On the other hand, the Navigation Timer is started by the Navigation Control object and generates a timer event for navigation.
Also, a Localizer algorithm object and a Path Planner algorithm object are identified, which encapsulate an algorithm used in the problem domain, namely the autonomous navigation.

[Fig. 6. Object structuring class diagram for Navigation System]
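To make the object structuring concrete, the sketch below outlines the identified objects as plain Python class stubs grouped by their COMET stereotypes (interface, entity, state-dependent control, timer, and algorithm objects). It is only an illustration of the decomposition described above, not code from the project; all class and method names are our own.

# Hypothetical stubs mirroring the object structuring of Fig. 6.
class SensorInterface:            # <<input device interface>>
    def read(self): ...           # returns raw sensor data

class WheelActuatorInterface:     # <<output device interface>>
    def start(self, path): ...
    def move(self, path): ...
    def stop(self): ...

class Destination: ...            # <<entity>> long-living information objects
class CurrentPosition: ...        # <<entity>>
class NavigationMap: ...          # <<entity>>
class NavigationPath: ...         # <<entity>>

class Localizer:                  # <<algorithm>> updates the current position
    def localize(self, sensor_data, nav_map): ...

class PathPlanner:                # <<algorithm>> computes a path to the destination
    def plan(self, destination, position, nav_map): ...

class NavigationControl: ...      # <<state dependent control>> executes the statechart
class NavigationTimer: ...        # <<timer>> started/stopped by Navigation Control
class ObstacleAvoidanceTimer: ... # <<timer>> driven by the external Clock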
3.2.2 Dynamic Modeling
Dynamic modeling emphasizes the dynamic behavior of the system and plays an important role for distributed, concurrent and real-time system analysis. The dynamic model defines the object interactions that correspond to each use case and thus is based on the use cases and the objects identified during object structuring. In our case, collaboration diagrams are developed to show the sequence of object interactions for each use case. Additionally, if the collaboration involves a state-dependent object, which executes a statechart, the event sequence is shown on a statechart.

[Fig. 7. Collaboration diagram for Navigation use case]

In the navigation system, the Localizer has the algorithm which can calculate the current position based on sensor data via sensors. So, the role of the Localizer is to update the current position of the service robot. In the Path Planner object, there is a method for calculating a path to arrive at the destination based on both sensor information and the current position that is calculated at the Localizer. The Navigation Timer is an internal timer that is controlled by the Navigation Control. After the destination is entered by the external user, the Navigation Control starts the Navigation Timer; the timer then generates a timer event periodically (i.e., every 200 ms) until the Navigation Control stops the timer.

The Navigation use case starts with the commander entering the destination into the navigation system. The message sequence number starts at 1, which is the first external event initiated by the actor. Subsequent numbering within this sequence runs from 1.1 to 1.18, as shown in Fig. 7. The next message sequence, activated by the Navigation Timer, is numbered 2, followed by the events 2.1, 2.2, and so forth. The following message sequences are illustrated in the collaboration diagram (see Fig. 7).

The collaboration diagram for the Obstacle Avoidance use case is shown in Fig. 8. When activated by the Obstacle Avoidance Timer every 50 ms, the Sensor Interface object reads sensor data via the various sensors (Events 4.1, 5.1, 6.1). If an obstacle is recognized, the Obstacle Avoidance Timer sends the emergency stop message to the Wheel Actuator Interface (Event 4.5). Afterwards, the timer also sends a suspend event to the Navigation Control. If the obstacle disappears, the timer sends a restart event to the Navigation Control so that the robot can move again.

[Fig. 8. Collaboration diagram for Obstacle Avoidance use case]

With COMET, the software architecture can be based on a software architectural style (pattern) such as client/server or layers of abstraction. In our project, the layered strategy of the Tripodal schematic design described in section 2 is applied for design and modeling of the robot system, which provides a conceptual diagram of three layers (i.e., deliberate, sequencing, and reactive layers) for the arrangement of various hardware and software modules and functions. Therefore, in the collaboration diagrams (see Fig. 7 and 8), the Command Line Interface is located in the deliberate layer and the Sensor Interface, Wheel Actuator Interface, and Obstacle Avoidance Timer are in the reactive layer. The others are positioned in the sequencing layer.

In our navigation system, after drawing the collaboration diagrams for the Navigation and Obstacle Avoidance use cases, which include the Navigation Control state-dependent object, we develop a Navigation Control statechart, which is executed by the Navigation Control object. The statechart needs to be considered in connection with the collaboration diagram. Specifically, it is required to take into account the messages that are received and sent by the control object, which executes the statechart [14]. An input event (e.g., 1.1: destination entered) into the Navigation Control object on the collaboration diagram should be consistent with the same event shown on the statechart.
The output event, which causes an action or enables or disables an activity, such as 1.2: Read Map (which causes an action), on the statechart must be consistent with the output event depicted on the collaboration diagram. Because the statechart modeling involves two state-dependent use cases in the navigation system, it is also required to consolidate the two partial statecharts to create a complete statechart. The complete statechart for both the Navigation and Obstacle Avoidance use cases is shown in Fig. 9.

[Fig. 9. Statechart for Navigation Control]
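As an illustration of how the consolidated statechart can be realized, the following minimal sketch encodes a subset of the Navigation Control states and transitions as a table-driven state machine in Python. The state, event and action names follow Fig. 9, but guarded transitions (the [Start]/[Move] conditions) are collapsed, and the processEvent interface only anticipates the detailed design of section 3.3; everything else (dictionary layout, return type) is our own simplification, not the project's implementation.

class NavigationControlStatechart:
    """Partial, table-driven sketch of the Fig. 9 Navigation Control statechart."""

    # (current state, input event) -> (next state, list of output actions)
    TRANSITIONS = {
        ("Idle", "DestinationEntered"):    ("ReadingMap", ["ReadMap", "StoreDestination"]),
        ("ReadingMap", "Map"):             ("Localizing", ["ReadCurrentPosition"]),
        ("Localizing", "CurrentPosition"): ("UpdatingMap", ["UpdateMap"]),
        ("UpdatingMap", "UpdatedMap"):     ("PlanningAPath", ["ReadAPath"]),
        ("PlanningAPath", "PlannedPath"):  ("Starting", ["Start"]),
        ("Starting", "Started"):           ("Moving", ["StartTimer"]),
        ("Moving", "AfterElapsedTime"):    ("ReadingSensors", ["ReadSensors"]),
        ("ReadingSensors", "SensorData"):  ("ReadingMap", ["ReadMap"]),
        ("Moving", "Suspend"):             ("Suspending", ["StopTimer"]),
        ("Suspending", "Restart"):         ("Starting", ["Start"]),
        ("CheckingDestination", "Yes"):    ("Stopping", ["Stop"]),
        ("Stopping", "Stopped"):           ("Idle", ["StopTimer"]),
    }

    def __init__(self):
        self.state = "Idle"

    def process_event(self, event):
        """processEvent(in event, out action): advance the state and return the actions."""
        next_state, actions = self.TRANSITIONS[(self.state, event)]
        self.state = next_state
        return actions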
3.3 Design Modeling
3.3.1 Software Architecture
In this phase, all collaboration diagrams developed for use cases in the analysis model are merged into the consolidated collaboration diagram. The consolidated collaboration diagram is thus intended to be a complete description of all objects and their interactions.

The consolidation of the two collaboration diagrams respectively supporting the two use cases is shown in Fig. 10. Some objects and message interactions appear on more than one collaboration diagram. For instance, the Navigation Control, Navigation Timer, Sensor Interface and Wheel Actuator Interface objects participate in both the Navigation and Obstacle Avoidance use cases. For those objects, their message interactions are only shown once in the consolidated collaboration diagram.

3.3.2 Architectural Design of Distributed Real-time Systems
The robot system is a distributed embedded system and executes on distributed nodes using communication methods such as TCP/IP, CAN (Controller Area Network), and wired/wireless LAN. With COMET, a distributed real-time system is structured into distributed subsystems. Tasks in different subsystems may communicate with each other via several types of message communication, such as asynchronous, synchronous with reply, synchronous without reply, and client/server communication. Hence, we should define the distributed nodes and the messages sent to each node.

The overall distributed software architecture for the robot navigation system is depicted in Fig. 11. In the robot system, objects that are part of the navigation are located in the robot navigation system. The robot navigation system communicates with the external I/O devices via synchronous message without reply communication and with the external timer via asynchronous message communication.

[Fig. 10. Consolidated collaboration diagram for Navigation System]

[Fig. 11. Distributed software architecture for Navigation System]

3.3.3 Task Structuring
During the task structuring phase, a task architecture can be developed in which the system is structured into concurrent tasks, and the task interfaces and interconnections are defined. A task is an active object and has its own thread of control. In this sense, the term "object" will be used to refer to a passive object in this paper. In COMET, task structuring criteria are provided to help in mapping an object-oriented analysis model of the system to a concurrent tasking architecture. At the end of this phase, a task behavior specification (TBS) is developed.

The task architecture for the Navigation System is shown in Fig. 12. In order to determine the tasks in the system, it is necessary to understand how the objects in the application interact with each other based on the collaboration diagrams. In the collaboration diagram of Fig. 7, the Localizer object reads sensor data and the map from the Current Position object, calculates a new current position, and sends the current position to the Current Position object for updating it. Thus, the Localizer object is structured as an asynchronous algorithm task called Localizer. There are two asynchronous algorithms, i.e. Localizer and Path Planner, which are internal asynchronous tasks. There are four passive entity objects, i.e. Destination, Current Position, Navigation Map, and Navigation Path, which do not need a separate thread of control and are all further categorized as data abstraction objects.
The Sensor and Wheel Actuator are a passive input device and a passive output device, respectively, because they do not generate an interrupt on completion of the input or output operation.

[Fig. 12. Task architecture for Navigation System]

The Navigation Control is a state-dependent control object that executes the Navigation Control statechart and is structured as a control task because it needs to have a separate thread of control. The Navigation Control object can be combined with the Command Line Interface, Navigation Timer, Sensor Interface, and Wheel Actuator Interface objects into one task, Navigation Controller, based on the control clustering task structuring criterion, because it is not possible for them to execute concurrently (see the middle of Fig. 12). The Obstacle Avoidance Timer object is structured as a periodic task, activated periodically to read sensor data. It can be grouped with the Sensor Interface and Wheel Actuator Interface into one sequentially clustered task, Obstacle Avoidance Controller, based on sequential clustering, since those operations are carried out in a sequential order. The design of these composite tasks, the Navigation Controller and the Obstacle Avoidance Controller, is considered in the next section (i.e., detailed software design).
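To illustrate the sequential clustering of the Obstacle Avoidance Controller, the sketch below shows one plausible shape for the periodic task: on every timer event it reads the sensors through the Sensor Interface, commands an emergency stop through the Wheel Actuator Interface, and suspends or restarts the Navigation Control, mirroring events 4-6 of Fig. 8. The 50 ms period comes from the use case description; the method names, threading details, and obstacle test are our own assumptions, not project code.

import threading
import time

class ObstacleAvoidanceController(threading.Thread):
    """Hypothetical periodic task clustering the timer, sensor and actuator interfaces."""

    def __init__(self, sensors, wheels, navigation_control, period_s=0.05):
        super().__init__(daemon=True)
        self.sensors = sensors          # SensorInterface
        self.wheels = wheels            # WheelActuatorInterface
        self.nav = navigation_control   # Navigation Controller task
        self.period_s = period_s        # 50 ms obstacle-check period
        self.suspended = False

    def run(self):
        while True:                               # one iteration per timer event
            data = self.sensors.read()            # events 4.1/5.1/6.1: read sensors
            if self.obstacle_detected(data):
                if not self.suspended:
                    self.wheels.stop()            # event 4.5: emergency stop
                    self.nav.suspend()            # event 4.9: suspend navigation
                    self.suspended = True
            elif self.suspended:
                self.nav.restart()                # event 5.5: restart navigation
                self.suspended = False
            time.sleep(self.period_s)

    def obstacle_detected(self, data):
        # Placeholder test; the real criterion depends on the specific sensors.
        return bool(data and data.get("obstacle_range_m", float("inf")) < 0.5)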
After developing the task architecture, a task behavior is described for specifying the characteristics of each task based on COMET. During the task structuring, the TBS focuses on the task inputs and outputs. One part of the TBS, i.e. the task's event sequencing logic, is defined in the detailed software design phase.

3.3.4 Detailed Software Design
The internals of composite tasks which have passive objects nested inside them are designed, detailed task synchronization issues are addressed, and each task's internal event sequencing logic is defined in this phase. Before this is done, the information hiding classes (from which the passive objects are instantiated) are designed. In particular, the operations of each class and the design of the class interfaces are determined and specified in a class interface specification (because of space limitations, the detailed TBS and the class interface specification have not been included).

Let us consider the internal design of the Navigation Controller, which is a composite task designed as a control clustering task, to show the nested information hiding objects (see Fig. 13). The information hiding objects are the Navigation Control state-dependent control object, the Sensor Interface and Wheel Actuator Interface objects, the Navigation Timer object, and the user interface object, the Command Line Interface. In addition, the Navigation Controller contains one coordinator object called Navigation Coordinator, which receives incoming messages and coordinates the execution of the other objects. That is, the Navigation Coordinator extracts the event from the request and calls Navigation Control.processEvent (in event, out action) (see Fig. 13). The Navigation Control returns the action to be performed, such as store, check, start, etc., according to the state transition table. Afterwards, the Navigation Coordinator initiates the action.

[Fig. 13. Detailed software design for Navigation Controller]

In our system, communication between tasks such as the Navigation Controller, Localizer, and Path Planner is through data abstraction classes like the Current Position and Navigation Path. As a result, connector objects [14] are not used for the message communication interface between tasks.
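The coordinator logic just described can be pictured with the following minimal sketch: the Navigation Coordinator receives a request, extracts the event, asks the Navigation Control state machine for the actions to perform, and dispatches each action to the nested objects. Only the processEvent(in event, out action) contract is taken from the design above; the action names, dispatch table, and parameter passing are illustrative assumptions.

class NavigationCoordinator:
    """Hypothetical coordinator object nested inside the Navigation Controller task."""

    def __init__(self, control, sensors, wheels, timer, entities):
        self.control = control    # Navigation Control state machine (see the statechart sketch)
        self.sensors = sensors    # SensorInterface
        self.wheels = wheels      # WheelActuatorInterface
        self.timer = timer        # NavigationTimer
        self.entities = entities  # dict of entity objects: destination, map, path, position

    def handle(self, request):
        event = request["event"]                        # extract the event from the request
        for action in self.control.process_event(event):  # processEvent(in event, out action)
            self.dispatch(action, request)              # coordinator initiates each action

    def dispatch(self, action, request):
        if action == "StoreDestination":
            self.entities["destination"].store(request["destination"])
        elif action == "ReadSensors":
            self.last_sensor_data = self.sensors.read()
        elif action == "Start":
            self.wheels.start(self.entities["path"].read())
        elif action == "StartTimer":
            self.timer.start()
        elif action == "Stop":
            self.wheels.stop()
        elif action == "StopTimer":
            self.timer.stop()
        # ... further actions (read map, update map, plan a path, check) follow the same pattern.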
[Fig. 14. The task event diagram for Navigation Controller]

Lastly, the task's event sequencing logic is specified, which describes how the task responds to each of its message or event inputs. However, instead of informally using pseudocode as in COMET, in this project task event diagrams were developed for the tasks by using UML sequence diagrams, for understanding and readability, which turned out to be very useful when implementing the tasks. Fig. 14 illustrates the task event diagram for the Navigation Controller.

LESSONS LEARNED
This section summarizes the lessons learned from the project, where we successfully applied the object-oriented method with UML to developing the service robot.

4.1 UML for Service Robot Domain
Through the case study, we found that the UML standard was very useful as a notation for specifying the requirements, documenting the structure, decomposing into objects, and defining relationships between objects, especially in a service robot system. Certain diagrams and notations were particularly important for analyzing, designing, and modeling service robot systems, as follows.

Use case diagrams: With the use case model, services or functions (i.e., functional requirements), which a service robot performs or provides, can be defined in terms of actors who are users of the robot system and use cases. Each use case defines the behavior of some aspect of the robot system without revealing its internal structure.

Class diagrams: The class diagram notation is used to depict the static model, which focuses on the static structure of the robot system. The class diagram shows the classes of objects in the system, their internal structure including attributes, their operations, and their relationships to other classes (such as associations and generalization/inheritance).

Collaboration diagrams: This diagram shows how the objects that participate in each use case interact with each other by sending and receiving messages in the dynamic model. It defines a specific way to use objects in the robot system by showing the possible interactions between them, especially to satisfy the needs described in the use case, namely to provide the services. Compared to a sequence diagram, this diagram is particularly useful for synthesizing the collaboration diagrams to create the software architecture of the system, as discussed in section 3.3.

Sequence diagrams: This diagram shows object interactions arranged in time sequence and, in particular, can be used to describe the task event sequencing logic, which describes how the task responds to each of its message or event inputs. In COMET, the event sequencing logic is usually described informally in pseudocode. We found that the sequence diagram can help the engineers describe the task event sequencing logic and implement the tasks by showing the order in which messages are passed between tasks and objects.

Statechart diagrams: The service robot system is highly state-dependent, like real-time embedded systems.
This\ndiagram describes how state-dependent aspects of the\nsystem are defined by a finite state machine and can help\ndesign and developing highly state-dependent systems. It is\n541\nalso possible for this diagram to model object behavior over\nseveral use cases with the collaboration diagrams.\nIn addition, by using the UML notation as a defined standard,\ndifferent research groups and development teams can\ncommunicate among themselves and with others to develop and\nintegrate specific components for providing various services.\n4.2\n\nImportance of Systematic Process/Method\nfor Service Robot Domain\nIn order to effectively apply the UML notation and the robot\ncontrol architecture like the Tripodal schematic control\narchitecture to developing service robots, it is essential to use\nthem with a systematic software engineering process or method,\nlike an object-oriented analysis and design method, especially for\nreal-time and embedded systems. It is not possible to resolve the\nissues in integrating and developing the service robots discussed\nbefore without systematic and comprehensive software\ndevelopment methods, particularly for service robots.\nIn our case study, we applied COMET/UML method to\ndeveloping the service robot. The COMET object-oriented\nsoftware life cycle model is a highly iterative software\ndevelopment process based around the use case concept. In the\nrequirements model, the service or functions (i.e., the function\nrequirements) of the robot system are defined in terms of actors\nand use cases. In the analysis model, the use case is refined to\ndescribe the objects that participate in the use case, and their\ninteractions. In the design model, the robot software architecture\nis developed, emphasizing issues of distribution, concurrency, and\ninformation hiding. This project showed that this was a viable\napproach because applying the COMET method with UML led to\ndeveloping an effective service robot architecture by carefully\ntaking into account user needs and requirements, implementing\ntechnical components based on the architecture, and integrating\nthese components in a systematic and comprehensive fashion.\n4.3\n\nCustomizing the COMET Method for\nService Robot Domain\nService robots like PSR-1, PSR-2, and Jinny have been built at\nKIST based on the Tripodal schematic control architecture. The\nTripodal schematic design addressed on developing efficient and\nwell-defined control architecture and a system integration strategy\nfor constructing service robots. T-Rot is the next model of the\nPSR system under development for assisting aged persons. One of\nour aims is to develop the intelligent service robot for the elderly\nby cooperating and integrating the results of different research\ngroups in accordance with the Tripodal schematic control\narchitecture that has already been implemented on the PSR and\nsuccessfully tested. Thus, the layered strategy of the Tripodal\nschematic design has been applied for design and modeling of the\nT-Rot. In the collaboration diagrams of the analysis modeling,\nand the consolidated collaboration diagram and the task\narchitecture of the design modeling, the Command Line Interface\nis located in the deliberate layer for interfacing with a user, while\nthe Sensor Interface, Wheel Actuator Interface, and Obstacle\nAvoidance Timer are in the reactive layer for controlling and\nmanaging the components in the reactive layer. 
The Navigation\nControl, Navigation Timer, Destination, Current Position,\nNavigation Path, Navigation Map, Localizer, and Path Planer are\npositioned in the sequencing layer for controlling the robot\nmotion by executing relatively simple computations in real-time.\nAs a result, the Tripodal schematic control architecture was\nhelpful in arranging various hardware and software modules and\nfunctions.\nAdditionally, as stated in section 4.1, in COMET, the event\nsequencing logic is usually described informally in Pseudo code.\nWe found that the sequence diagram can help the engineers\ndescribe the task event sequencing logic and implement the tasks\nby showing the order in which messages are passed between tasks\nand objects. Hence, instead of using informal Pseudo code, task\nevent diagrams were developed for tasks by using the UML\nsequence diagrams to improve understanding and readability. It\nturned out that these task event diagrams are very useful when\nimplementing these tasks.\n4.4\n\nHuman Communication\nHuman communication to understand and develop what is desired\nof the service robot is likely to be more difficult than expected. In\nour case study, most engineers who are involved in the project\ncome from the mechanical or robotics engineering field. The\ndifferent research groups and teams tend to focus on their own\ntechnology and components and thus it is not easy to realize how\nmuch knowledge they have and how much information will need\nto be made explicit and communicated to integrate those\ncomponents for the service robot. Several things can be done to\nimprove the situation. One is for engineers from different teams,\nespecially software engineers and mechanical engineers to work\ntogether for analyzing, designing, and developing the robot\nsystem during the project. It is very important that all engineers\nand developers from different groups and teams interact directly.\nAlso, in order to develop a common ground for understanding the\ndomain, technology, process and method, a common medium or\nlanguage such as UML is critical. In addition to the standard\nnotation like UML, guidelines about what notation to use, when\nto use it, and how to use the notation comprehensively and\nsystematically are required. This is why the method like COMET\nis needed. Domain knowledge and experiences in each area will\nmake it much easier to communicate what is desired, e.g. service\nrobot domain, the autonomous robot navigation, vision processing,\nand so on for software engineers, and object-oriented concepts,\nsoftware development process, and UML, etc for mechanical\nengineers. If there is relatively little domain knowledge and\nexperience, to have one day or half-day technical workshop is\nneeded. This has proved useful in a variety of settings in the\ndevelopment of the robot system, such as developing and\nincreasing background knowledge of the domain and technology.\n4.5\n\nNecessity of Multi-Aspect Integration\nMethod for Service Robot Domain\nA service robot should be able to perform several tasks\nautonomously to provide various services for human beings in a\ndynamic and partially unknown environment by applying both\ntechnology and knowledge. In order to be able to achieve\ncomplex tasks, perform new tasks, and integrate data learned from\nexperience for the robot services, it is required to consider not\nonly the robot's behavior, but also other robot's characteristics\nsuch as learning, planning, decision-making, and knowledge\nrepresentation. 
It is necessary to allow existing robot behaviors to\nbe used in new ways, to plan for accomplishing more complex\ntasks, to reuse the knowledge of one task in other tasks, and to\n542\ncomplete tasks more efficiently by learning various action\nsequences.\nIn the case study, we focused on designing and modeling the\nrobot's behavioral aspect, which is related to the sequencing and\nreactive layers in the Tripodal layered design, by applying the\nCOMET/UML method. However, it is clear that planning and\nlearning abilities have to also be considered when designing and\ndeveloping a service robot, which correspond to the deliberate\nlayer that is responsible for interfacing with a user and executing\nthe planning process. As a consequence, a task manager, which is\nlocated in the deliberate layer, has been in charge of these robotic\nabilities in the project. Because the planning process is knowledge\nbased and not reactive, a different analysis and design approach is\nneeded for the task manager. Hence, we are convinced that\nmethods to model the robot's learning, planning and decision\nmaking aspects as well as to incorporate, use and maintain task\nknowledge are necessary. Furthermore, it is essential to integrate\nthese methods with the COMET method into a multi-aspect\nintegration method for developing service robot software.\n\nCONCLUSIONS AND FUTHER WORK\nService robots have been suggested for a growing number of\napplications. A service robot is a complex system as it includes\nvarious technical components (i.e., hardware and software) to be\nintegrated correctly and many different research groups to\ndevelop the components. As a result, it is not only essential to\ndevelop complex algorithms or technical components, but also to\nintegrate them adequately and correctly to provide the various\nrobot services.\nIn the paper, we have presented our case study where we\ndeveloped the autonomous navigation system for the intelligent\nservice robot for the elderly, T-Rot. The object-oriented method\nfor real-time embedded systems, COMET has been applied to the\nservice robot T-Rot with the industry standard UML. It makes it\npossible to reconcile specific engineering techniques like robot\ntechnologies with the UML notation and furthermore to fit such\ntechniques into a fully defined development process towards\ndeveloping the service robot system. In this way, we contribute to\ndeveloping service robot software with UML in a systematic\nmanner.\nThe service robot T-Rot is still under development (at this point,\nwe are at the first stage of total three stages). Thus, the current\nstatus of our work is to extend applications that include vision\nprocessing, speech processing and manipulation for providing\nvarious robot services. Also, we work on designing the\nknowledge-based task manager for improving the robot's ability.\n\n\nACKNOWLEDGMENTS\nThis research (paper) was performed for the Intelligent Robotics\nDevelopment Program, one of the 21st Century Frontier R&D\nPrograms funded by the Ministry of Commerce, Industry and\nEnergy of Korea.\n\nREFERENCES\n[1]\n\nK. Kawamura and M. Iskarous, Trends in service robots for\nthe disabled and the elderly, Proc. of the 1994 IEEE/RSJ Int.\nConf. on Intelligent Robots and Systems, Vol. 3 (1994) 1674.\n[2]\n\nR. D. Schraft, \"Mechatronics and robotics for service\napplications,\" in IEEE Robotics and Automation Magazine,\nno. 4, pp. 31 - 35, Dec. 1994.\n[3]\n\nRofer T., Lankenau A. 
and Moratz R., Service Robotics-Applications\nand Safety Issues in an Emerging Market,\nWorkshop W20 proc. ECAI2000, Berlin, 2000.\n[4]\n\nB. You et al., \"Development of a Home Service Robot\n`ISSAC'\", Proc. of the 1994 IEEE/RSJ Int. Conf. on\nIntelligent Robots and Systems, Las Vegas, Nevada, 2003,\npp. 2630-2635.\n[5]\n\nG. Kim, W. Chung, M. Kim, and C. Lee, \"Tripodal\nSchematic Design of the Control Architecture for the Service\nRobot PSR,\" in Proc. of the IEEE Conf. on Robotics and\nAutomation, Taipei, Taiwan, pp.2792-2797, 2003.\n[6]\n\nG. Kim, W. Chung, M. Kim, and C. Lee, "Implementation of\nMulti-Functional Service Robots Using Tripodal Schematic\nControl Architecture", in Proc. of IEEE Conf. on Robotics\nand Automation, New Orleans, LA, USA, 2004\n[7]\n\nA. C. Dominguez-Brito, D.Hernandez-Sosa, J. Isern-Gonzalez\n, and J. Cabrera-Games. Integrating robotics\nsoftware. IEEE International Conference on Robotics and\nAutomation, 2004.\n[8]\n\nQ. Meng and M.H. Lee, \"Learning and Control in Assistive\nRobotics for the Elderly\", Proc. Of the 2004 IEEE Conf. on\nRobotics, Automation and Mechartonics, Singapore, Dec.,\n2004, pp. 71-76.\n[9]\n\nM. Kim, J. Lee, K. Kang, Y. Hong, and S. Bang, \"Re-engineering\nSoftware Architecture of Home Service Robots:\nA Case Study\", Proc. Of 27th Int. Conf. on Software\nEngineering (ICSE2005), St. Louis, USA, May, 2005,\npp.505-513.\n[10]\n\nG. Booch, Object-Oriented Analysis and Design with\nApplications, 2nd ed. Redwood City, CA: Benjamin\nCummings, 1994.\n[11]\n\nI. Jacobson, Object-Oriented Software Engineering, Addison\nWesley, 1992.\n[12]\n\nReal-Time Application Interface, 2004. Available at: http://\nwww.rtai.org\n[13]\n\nGjalt de Jong, \"A UML-Based Design Methodology for\nReal-Time and Embedded Systems\", DATE 2002, March,\n2002.\n[14]\n\nH. Gomaa, Designing Concurrent, Distributed, and Real-Time\nApplication with UML, Addison-Wesley, 2000.\n[15]\n\nOMG Unified Modeling Language, Version 1.5, March 2003.\nAvailable at:http://www.uml.org\n[16]\n\nM. Fowler and K. Scott, UML Distilled 2nd Edition,\nAddison Wesley, 2000.\n[17]\n\nG. Martin, L. Lavagno, and J. Louis-Guerin, \"Embedded\nUML: a merger of real-time UML and codesign\", CODES\n2001, Copenhagen, April 2001, pp.23-28.\n[18]\n\nG. Martin, "UML for Embedded Systems Specification and\nDesign: Motivation and Overview", DATE 2002, March,\n2002.\n\n543", "keywords": "Software engineering;object-oriented analysis and design methods;service robot development;UML"} {"name": "204", "title": "Unified Utility Maximization Framework for Resource Selection", "abstract": "This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of high-recall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. 
Empirical studies show that it is at least as effective as other state-of-the-art algorithms.", "fulltext": "INTRODUCTION\nConventional search engines such as Google or AltaVista use\nad-hoc information retrieval solution by assuming all the\nsearchable documents can be copied into a single centralized\ndatabase for the purpose of indexing. Distributed information\nretrieval, also known as federated search [1,4,7,11,14,22] is\ndifferent from ad-hoc information retrieval as it addresses the\ncases when documents cannot be acquired and stored in a single\ndatabase. For example, \"Hidden Web\" contents (also called\n\"invisible\" or \"deep\" Web contents) are information on the Web\nthat cannot be accessed by the conventional search engines.\nHidden web contents have been estimated to be 2-50 [19] times\nlarger than the contents that can be searched by conventional\nsearch engines. Therefore, it is very important to search this type\nof valuable information.\n\nThe architecture of distributed search solution is highly\ninfluenced by different environmental characteristics. In a small\nlocal area network such as small company environments, the\ninformation providers may cooperate to provide corpus statistics\nor use the same type of search engines. Early distributed\ninformation retrieval research focused on this type of\ncooperative environments [1,8]. On the other side, in a wide\narea network such as very large corporate environments or on\nthe Web there are many types of search engines and it is difficult\nto assume that all the information providers can cooperate as\nthey are required. Even if they are willing to cooperate in these\nenvironments, it may be hard to enforce a single solution for all\nthe information providers or to detect whether information\nsources provide the correct information as they are required.\nMany applications fall into the latter type of uncooperative\nenvironments such as the Mind project [16] which integrates\nnon-cooperating digital libraries or the QProber system [9]\nwhich supports browsing and searching of uncooperative hidden\nWeb databases. In this paper, we focus mainly on uncooperative\nenvironments that contain multiple types of independent search\nengines.\n\nThere are three important sub-problems in distributed\ninformation retrieval. First, information about the contents of\neach individual database must be acquired (resource\nrepresentation) [1,8,21]. Second, given a query, a set of\nresources must be selected to do the search (resource selection)\n[5,7,21]. Third, the results retrieved from all the selected\nresources have to be merged into a single final list before it can\nbe presented to the end user (retrieval and results merging)\n[1,5,20,22].\n\nMany types of solutions exist for distributed information\nretrieval. Invisible-web.net\nresource selection components. This solution is useful when the\nusers want to browse the selected databases by themselves\ninstead of asking the system to retrieve relevant documents\nautomatically. Distributed document retrieval is a more\nsophisticated task. It selects relevant information sources for\nusers' queries as the database recommendation system does.\nFurthermore, users' queries are forwarded to the corresponding\nselected databases and the returned individual ranked lists are\nmerged into a single list to present to the users.\nThe goal of a database recommendation system is to select a\nsmall set of resources that contain as many relevant documents\nas possible, which we call a high-recall goal. 
On the other side,\nthe effectiveness of distributed document retrieval is often\nmeasured by the Precision of the final merged document result\nlist, which we call a high-precision goal. Prior research\nindicated that these two goals are related but not identical [4,21].\nHowever, most previous solutions simply use effective resource\nselection algorithm of database recommendation system for\ndistributed document retrieval system or solve the inconsistency\nwith heuristic methods [1,4,21].\n\nThis paper presents a unified utility maximization framework to\nintegrate the resource selection problem of both database\nrecommendation and distributed document retrieval together by\ntreating them as different optimization goals.\n\nFirst, a centralized sample database is built by randomly\nsampling a small amount of documents from each database with\nquery-based sampling [1]; database size statistics are also\nestimated [21]. A logistic transformation model is learned off\nline with a small amount of training queries to map the\ncentralized document scores in the centralized sample database\nto the corresponding probabilities of relevance.\n\nSecond, after a new query is submitted, the query can be used to\nsearch the centralized sample database which produces a score\nfor each sampled document. The probability of relevance for\neach document in the centralized sample database can be\nestimated by applying the logistic model to each document's\nscore. Then, the probabilities of relevance of all the (mostly\nunseen) documents among the available databases can be\nestimated using the probabilities of relevance of the documents\nin the centralized sample database and the database size\nestimates.\n\nFor the task of resource selection for a database\nrecommendation system, the databases can be ranked by the\nexpected number of relevant documents to meet the high-recall\ngoal. For resource selection for a distributed document retrieval\nsystem, databases containing a small number of documents with\nlarge probabilities of relevance are favored over databases\ncontaining many documents with small probabilities of\nrelevance. This selection criterion meets the high-precision goal\nof distributed document retrieval application. Furthermore, the\nSemi-supervised learning (SSL) [20,22] algorithm is applied to\nmerge the returned documents into a final ranked list.\n\nThe unified utility framework makes very few assumptions and\nworks in uncooperative environments. Two key features make it\na more solid model for distributed information retrieval: i) It\nformalizes the resource selection problems of different\napplications as various utility functions, and optimizes the utility\nfunctions to achieve the optimal results accordingly; and ii) It\nshows an effective and efficient way to estimate the probabilities\nof relevance of all documents across databases. Specifically, the\nframework builds logistic models on the centralized sample\ndatabase to transform centralized retrieval scores to the\ncorresponding probabilities of relevance and uses the centralized\nsample database as the bridge between individual databases and\nthe logistic model. The human effort (relevance judgment)\nrequired to train the single centralized logistic model does not\nscale with the number of databases. 
This is a large advantage\nover previous research, which required the amount of human\neffort to be linear with the number of databases [7,15].\n\nThe unified utility framework is not only more theoretically\nsolid but also very effective. Empirical studies show the new\nmodel to be at least as accurate as the state-of-the-art algorithms\nin a variety of configurations.\n\nThe next section discusses related work. Section 3 describes the\nnew unified utility maximization model. Section 4 explains our\nexperimental methodology. Sections 5 and 6 present our\nexperimental results for resource selection and document\nretrieval. Section 7 concludes.\n\nPRIOR RESEARCH\nThere has been considerable research on all the sub-problems of\ndistributed information retrieval. We survey the most related\nwork in this section.\n\nThe first problem of distributed information retrieval is resource\nrepresentation. The STARTS protocol is one solution for\nacquiring resource descriptions in cooperative environments [8].\nHowever, in uncooperative environments, even the databases are\nwilling to share their information, it is not easy to judge whether\nthe information they provide is accurate or not. Furthermore, it\nis not easy to coordinate the databases to provide resource\nrepresentations that are compatible with each other. Thus, in\nuncooperative environments, one common choice is query-based\nsampling, which randomly generates and sends queries to\nindividual search engines and retrieves some documents to build\nthe descriptions. As the sampled documents are selected by\nrandom queries, query-based sampling is not easily fooled by\nany adversarial spammer that is interested to attract more traffic.\nExperiments have shown that rather accurate resource\ndescriptions can be built by sending about 80 queries and\ndownloading about 300 documents [1].\nMany resource selection algorithms such as gGlOSS/vGlOSS\n[8] and CORI [1] have been proposed in the last decade. The\nCORI algorithm represents each database by its terms, the\ndocument frequencies and a small number of corpus statistics\n(details in [1]). As prior research on different datasets has shown\nthe CORI algorithm to be the most stable and effective of the\nthree algorithms [1,17,18], we use it as a baseline algorithm in\nthis work. The relevant document distribution estimation\n(ReDDE [21]) resource selection algorithm is a recent algorithm\nthat tries to estimate the distribution of relevant documents\nacross the available databases and ranks the databases\naccordingly. Although the ReDDE algorithm has been shown to\nbe effective, it relies on heuristic constants that are set\nempirically [21].\n\nThe last step of the document retrieval sub-problem is results\nmerging, which is the process of transforming database-specific\n33\ndocument\nscores\ninto\ncomparable\ndatabase-independent\ndocument scores. The semi supervised learning (SSL) [20,22]\nresult merging algorithm uses the documents acquired by query-based\nsampling as training data and linear regression to learn the\ndatabase-specific, query-specific merging models. These linear\nmodels are used to convert the database-specific document\nscores into the approximated centralized document scores. The\nSSL algorithm has been shown to be effective [22]. 
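As a rough illustration of the SSL merging idea (and not the exact algorithm of [20,22]), the sketch below fits a per-database linear model from pairs of (database-specific score, centralized sample score) for documents that appear both in a database's result list and in the centralized sample database, then uses that model to map the remaining database-specific scores onto the centralized scale before merging. Function names and the simple least-squares fit are our own; degenerate cases (too few overlap documents) are glossed over.

def fit_linear_map(pairs):
    """Least-squares fit of centralized_score ~= a * db_score + b from overlap documents."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 0.0
    b = (sy - a * sx) / n if n else 0.0
    return a, b

def merge(results_per_db, overlap_pairs_per_db):
    """Map database-specific scores to approximated centralized scores and merge the lists."""
    merged = []
    for db, results in results_per_db.items():
        a, b = fit_linear_map(overlap_pairs_per_db[db])
        merged.extend((a * score + b, doc_id) for doc_id, score in results)
    return [doc_id for _, doc_id in sorted(merged, reverse=True)]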
It serves as\nan important component of our unified utility maximization\nframework (Section 3).\n\nIn order to achieve accurate document retrieval results, many\nprevious methods simply use resource selection algorithms that\nare effective of database recommendation system. But as\npointed out above, a good resource selection algorithm\noptimized for high-recall may not work well for document\nretrieval, which targets the high-precision goal. This type of\ninconsistency has been observed in previous research [4,21].\nThe research in [21] tried to solve the problem with a heuristic\nmethod.\n\nThe research most similar to what we propose here is the\ndecision-theoretic framework (DTF) [7,15]. This framework\ncomputes a selection that minimizes the overall costs (e.g.,\nretrieval quality, time) of document retrieval system and several\nmethods [15] have been proposed to estimate the retrieval\nquality. However, two points distinguish our research from the\nDTF model. First, the DTF is a framework designed specifically\nfor document retrieval, but our new model integrates two\ndistinct applications with different requirements (database\nrecommendation and distributed document retrieval) into the\nsame unified framework. Second, the DTF builds a model for\neach database to calculate the probabilities of relevance. This\nrequires human relevance judgments for the results retrieved\nfrom each database. In contrast, our approach only builds one\nlogistic model for the centralized sample database. The\ncentralized sample database can serve as a bridge to connect the\nindividual databases with the centralized logistic model, thus the\nprobabilities of relevance of documents in different databases\ncan be estimated. This strategy can save large amount of human\njudgment effort and is a big advantage of the unified utility\nmaximization framework over the DTF especially when there\nare a large number of databases.\nUNIFIED UTILITY MAXIMIZATION FRAMEWORK\nThe Unified Utility Maximization (UUM) framework is based\non estimating the probabilities of relevance of the (mostly\nunseen) documents available in the distributed search\nenvironment. In this section we describe how the probabilities of\nrelevance are estimated and how they are used by the Unified\nUtility Maximization model. We also describe how the model\ncan be optimized for the high-recall goal of a database\nrecommendation system and the high-precision goal of a\ndistributed document retrieval system.\n3.1 Estimating Probabilities of Relevance\n\nAs pointed out above, the purpose of resource selection is high-recall\nand the purpose of document retrieval is high-precision. In\norder to meet these diverse goals, the key issue is to estimate the\nprobabilities of relevance of the documents in various databases.\nThis is a difficult problem because we can only observe a\nsample of the contents of each database using query-based\nsampling. Our strategy is to make full use of all the available\ninformation to calculate the probability estimates.\n3.1.1 Learning Probabilities of Relevance\nIn the resource description step, the centralized sample database\nis built by query-based sampling and the database sizes are\nestimated using the sample-resample method [21]. At the same\ntime, an effective retrieval algorithm (Inquery [2]) is applied on\nthe centralized sample database with a small number (e.g., 50)\nof training queries. 
For each training query, the CORI resource selection algorithm [1] is applied to select some number (e.g., 10) of databases and retrieve 50 document ids from each database. The SSL results merging algorithm [20,22] is used to merge the results. Then, we can download the top 50 documents in the final merged list and calculate their corresponding centralized scores using Inquery and the corpus statistics of the centralized sample database. The centralized scores are further normalized (divided by the maximum centralized score for each query), as this method has been suggested to improve estimation accuracy in previous research [15]. Human judgment is acquired for those documents and a logistic model is built to transform the normalized centralized document scores to probabilities of relevance as follows:

R(d) = P(rel | d) = exp(a_c + b_c \overline{S}_c(d)) / (1 + exp(a_c + b_c \overline{S}_c(d)))    (1)

where \overline{S}_c(d) is the normalized centralized document score and a_c and b_c are the two parameters of the logistic model. These two parameters are estimated by maximizing the probabilities of relevance of the training queries. The logistic model provides us the tool to calculate the probabilities of relevance from centralized document scores.

3.1.2 Estimating Centralized Document Scores
When the user submits a new query, the centralized document scores of the documents in the centralized sample database are calculated. However, in order to calculate the probabilities of relevance, we need to estimate centralized document scores for all documents across the databases instead of only the sampled documents. This goal is accomplished using the centralized scores of the documents in the centralized sample database and the database size statistics.

We define the database scale factor for the i-th database as the ratio of the estimated database size and the number of documents sampled from this database as follows:

SF_{db_i} = \hat{N}_{db_i} / N_{db_i\_samp}    (2)

where \hat{N}_{db_i} is the estimated database size and N_{db_i\_samp} is the number of documents from the i-th database in the centralized sample database. The intuition behind the database scale factor is that, for a database whose scale factor is 50, if one document from this database in the centralized sample database has a centralized document score of 0.5, we may guess that there are about 50 documents in that database which have scores of about 0.5. Actually, we can apply a finer non-parametric linear interpolation method to estimate the centralized document score curve for each database. Formally, we rank all the sampled documents from the i-th database by their centralized document scores to get the sampled centralized document score list {S_c(ds_{i1}), S_c(ds_{i2}), S_c(ds_{i3}), .....} for the i-th database; we assume that, if we could calculate the centralized document scores for all the documents in this database and get the complete centralized document score list, the top document in the sampled list would have rank SF_{db_i}/2, the second document in the sampled list would have rank SF_{db_i} * 3/2, and so on. Therefore, the data points of the sampled documents in the complete list are: {(SF_{db_i}/2, S_c(ds_{i1})), (SF_{db_i} * 3/2, S_c(ds_{i2})), (SF_{db_i} * 5/2, S_c(ds_{i3})), .....}.
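A minimal sketch of these two steps, using the notation of Equations 1 and 2: the logistic transform that maps a normalized centralized score to a probability of relevance, and the placement of the m-th best sampled score at estimated rank SF_{db_i}*(2m-1)/2 of the (mostly unseen) complete score list. Parameter values and function names are illustrative assumptions, not the authors' code.

import math

def prob_relevance(norm_centralized_score, a_c, b_c):
    """Equation 1: logistic map from a normalized centralized score to P(rel | d)."""
    z = a_c + b_c * norm_centralized_score
    return math.exp(z) / (1.0 + math.exp(z))

def scale_factor(estimated_db_size, num_sampled_docs):
    """Equation 2: database scale factor SF_db = N_db_hat / N_db_samp."""
    return estimated_db_size / num_sampled_docs

def sampled_data_points(sampled_scores, sf):
    """Place the m-th highest sampled score at estimated rank sf * (2m - 1) / 2."""
    ranked = sorted(sampled_scores, reverse=True)
    return [(sf * (2 * m - 1) / 2.0, s) for m, s in enumerate(ranked, start=1)]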
Piecewise linear interpolation is applied to estimate the centralized document score curve, as illustrated in Figure 1. The complete centralized document score list can then be estimated by reading the values at the different ranks off the centralized document score curve, giving \hat{S}_c(d_{ij}), j \in [1, \hat{N}_{db_i}].

Figure 1. Linear interpolation construction of the complete centralized document score list (database scale factor is 50).

It can be seen from Figure 1 that more sample data points produce more accurate estimates of the centralized document score curves. However, for databases with large database scale ratios, this kind of linear interpolation may be rather inaccurate, especially for the top ranked (e.g., [1, SF_{db_i}/2]) documents. Therefore, an alternative solution is proposed to estimate the centralized document scores of the top ranked documents for databases with large scale ratios (e.g., larger than 100). Specifically, a logistic model is built for each of these databases. The logistic model is used to estimate the centralized document score of the top 1 document in the corresponding database from the two sampled documents of that database with the highest centralized scores:

\hat{S}_c(d_{i1}) = \frac{\exp(\beta_{i0} + \beta_{i1} S_c(ds_{i1}) + \beta_{i2} S_c(ds_{i2}))}{1 + \exp(\beta_{i0} + \beta_{i1} S_c(ds_{i1}) + \beta_{i2} S_c(ds_{i2}))}    (3)

\beta_{i0}, \beta_{i1} and \beta_{i2} are the parameters of the logistic model. For each training query, the top retrieved document of each database is downloaded and the corresponding centralized document score is calculated. Together with the scores of the top two sampled documents, these parameters can be estimated.

After the centralized score of the top document is estimated, an exponential function is fitted for the top part ([1, SF_{db_i}/2]) of the centralized document score curve as:

\hat{S}_c(d_{ij}) = \exp(\alpha_{i0} + \alpha_{i1} \cdot j),  j \in [1, SF_{db_i}/2]    (4)

\alpha_{i0} = \log(\hat{S}_c(d_{i1})) - \alpha_{i1}    (5)

\alpha_{i1} = \frac{\log(S_c(ds_{i1})) - \log(\hat{S}_c(d_{i1}))}{SF_{db_i}/2 - 1}    (6)

The two parameters \alpha_{i0} and \alpha_{i1} are fitted to make sure that the exponential function passes through the two points (1, \hat{S}_c(d_{i1})) and (SF_{db_i}/2, S_c(ds_{i1})). The exponential function is only used to adjust the top part of the centralized document score curve; the lower part of the curve is still fitted with the linear interpolation method described above. The adjustment of the top ranked documents by fitting an exponential function has been shown empirically to produce more accurate results.

From the centralized document score curves we can estimate the complete centralized document score lists for all the available databases. After the estimated centralized document scores are normalized, the complete lists of probabilities of relevance can be constructed out of the complete centralized document score lists by Equation 1. Formally, for the i-th database the complete list of probabilities of relevance is \hat{R}(d_{ij}), j \in [1, \hat{N}_{db_i}].
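The whole of Section 3.1.2 can be summarized in a short score-curve estimator. The sketch below is an illustrative reading of the method rather than the original code: the function name and arguments are ours, and the per-database logistic model of Equation 3 is assumed to have been fitted elsewhere, with its output passed in as top1_score.

```python
import numpy as np

def estimate_complete_scores(sampled_scores, scale_factor, db_size,
                             top1_score=None, large_ratio=100):
    """Estimate the complete centralized document score curve for one database:
    piecewise linear interpolation over the sampled points, plus the exponential
    adjustment of Equations 4-6 for ranks [1, SF/2] when the scale factor is large."""
    scores = np.sort(np.asarray(sampled_scores, dtype=float))[::-1]
    ranks = scale_factor * (2 * np.arange(len(scores)) + 1) / 2.0   # SF/2, 3SF/2, ...
    all_ranks = np.arange(1, int(db_size) + 1)

    # np.interp needs increasing x; beyond the sampled range it extrapolates flat,
    # which is exactly the region the exponential adjustment below takes over.
    curve = np.interp(all_ranks, ranks, scores)

    if scale_factor > large_ratio and top1_score is not None:
        half_sf = scale_factor / 2.0
        a1 = (np.log(scores[0]) - np.log(top1_score)) / (half_sf - 1.0)   # Equation 6
        a0 = np.log(top1_score) - a1                                      # Equation 5
        top = all_ranks[all_ranks <= half_sf]
        curve[: len(top)] = np.exp(a0 + a1 * top)                         # Equation 4
    return curve

# Probabilities of relevance then follow from Equation 1 applied to the
# normalized curve, e.g. probs = to_prob(curve / curve.max()).
```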
3.2 The Unified Utility Maximization Model

In this section we formally define the new unified utility maximization model, which optimizes the resource selection problems for the two goals of high recall (database recommendation) and high precision (distributed document retrieval) in the same framework.

In the task of database recommendation, the system needs to decide how to rank databases. In the task of document retrieval, the system not only needs to select the databases but also needs to decide how many documents to retrieve from each selected database. We generalize the database recommendation selection process, which implicitly recommends all documents in every selected database, as a special case of the selection decision for the document retrieval task. Formally, we denote by d_i the number of documents we would like to retrieve from the i-th database and by \vec{d} = \{d_1, d_2, \ldots\} a selection action for all the databases.

The database selection decision is made based on the complete lists of probabilities of relevance for all the databases. The complete lists of probabilities of relevance are inferred from all the available information, specifically R_s, which stands for the resource descriptions acquired by query-based sampling and the database size estimates acquired by sample-resample, and S_c, which stands for the centralized document scores of the documents in the centralized sample database.

If the method of estimating centralized document scores and probabilities of relevance in Section 3.1 is acceptable, then the most probable complete lists of probabilities of relevance can be derived; we denote them as \theta^* = \{(\hat{R}(d_{1j}), j \in [1, \hat{N}_{db_1}]), (\hat{R}(d_{2j}), j \in [1, \hat{N}_{db_2}]), \ldots\}. The random vector \theta denotes an arbitrary set of complete lists of probabilities of relevance, and P(\theta | R_s, S_c) is the probability of generating this set of lists. Finally, to each selection action \vec{d} and set of complete lists of probabilities of relevance \theta we associate a utility function U(\theta, \vec{d}), which indicates the benefit from making the selection \vec{d} when the true complete lists of probabilities of relevance are \theta.

Therefore, the selection decision defined by the Bayesian framework is:

\vec{d}^* = \arg\max_{\vec{d}} \int U(\theta, \vec{d}) \, P(\theta | R_s, S_c) \, d\theta    (7)

One common approach to simplify the computation in the Bayesian framework is to calculate the utility function only at the most probable parameter values instead of calculating the whole expectation. In other words, we only need to calculate U(\theta^*, \vec{d}) and Equation 7 is simplified as follows:

\vec{d}^* = \arg\max_{\vec{d}} U(\theta^*, \vec{d})    (8)

This equation serves as the basic model for both the database recommendation system and the document retrieval system.
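Read as code, the plug-in rule of Equation 8 is simply a maximization of the utility, evaluated at \theta^*, over candidate selection actions. The toy sketch below makes that structure explicit by enumerating candidates; it is only meant to expose the shape of the model (the names are ours), since the algorithms derived next avoid enumeration altogether: UUM/HR and UUM/HP-FL reduce to closed-form rankings and UUM/HP-VL to dynamic programming.

```python
def best_selection(candidate_actions, utility, theta_star):
    """Equation 8: keep the selection action d with the largest U(theta*, d).

    candidate_actions : iterable of selection vectors d = (d_1, d_2, ...)
    utility           : a task-specific function U(theta*, d), e.g. Equation 9 or 12
    theta_star        : the most probable complete lists of probabilities of relevance
    """
    return max(candidate_actions, key=lambda d: utility(theta_star, d))
```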
3.3 Resource Selection for High-Recall
High recall is the goal of the resource selection algorithm in federated search tasks such as database recommendation. The goal is to select a small set of resources (e.g., less than N_{sdb} databases) that contain as many relevant documents as possible, which can be formally defined as:

U(\theta^*, \vec{d}) = \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})    (9)

I(d_i) is the indicator function, which is 1 when the i-th database is selected and 0 otherwise. Plugging this utility into the basic model of Equation 8 and adding the constraint on the number of selected databases yields the following:

\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})
Subject to: \sum_i I(d_i) = N_{sdb}    (10)

The solution of this optimization problem is very simple. We can calculate the expected number of relevant documents for each database as follows:

\hat{N}_{Rd_i} = \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij})    (11)

The N_{sdb} databases with the largest expected numbers of relevant documents can be selected to meet the high-recall goal. We call this the UUM/HR algorithm (Unified Utility Maximization for High-Recall).

3.4 Resource Selection for High-Precision
High precision is the goal of the resource selection algorithm in federated search tasks such as distributed document retrieval. It is measured by the Precision at the top of the final merged document list. This high-precision criterion is realized by the following utility function, which measures the Precision of the retrieved documents from the selected databases:

U(\theta^*, \vec{d}) = \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})    (12)

Note that the key difference between Equation 12 and Equation 9 is that Equation 9 sums up the probabilities of relevance of all the documents in a database, while Equation 12 only considers a much smaller part of the ranking. Specifically, we can calculate the optimal selection decision by:

\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})    (13)

Different kinds of constraints, caused by different characteristics of the document retrieval task, can be associated with the above optimization problem. The most common one is to select a fixed number (N_{sdb}) of databases and retrieve a fixed number (N_{rdoc}) of documents from each selected database, formally defined as:

\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})
Subject to: \sum_i I(d_i) = N_{sdb};  d_i = N_{rdoc} if d_i \neq 0    (14)

This optimization problem can be solved easily by calculating the number of expected relevant documents in the top part of each database's complete list of probabilities of relevance:

\hat{N}_{Top\_Rd_i} = \sum_{j=1}^{N_{rdoc}} \hat{R}(d_{ij})    (15)

Then the databases can be ranked by these values and selected. We call this the UUM/HP-FL algorithm (Unified Utility Maximization for High-Precision with Fixed Length document rankings from each selected database).
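Both of these closed-form selections take only a few lines of code. The sketch below assumes that the estimated complete lists of probabilities of relevance are available as descending-sorted arrays, one per database; the function and variable names are ours, not the authors'.

```python
import numpy as np

def uum_hr_rank(prob_lists):
    """Rank databases by the expected number of relevant documents (Equation 11)."""
    expected = [float(np.sum(p)) for p in prob_lists]            # N^_Rdi
    return sorted(range(len(prob_lists)), key=lambda i: expected[i], reverse=True)

def uum_hp_fl_rank(prob_lists, n_rdoc=50):
    """Rank databases by the expected relevant documents among the top n_rdoc
    entries of each complete list (Equation 15)."""
    expected_top = [float(np.sum(np.asarray(p)[:n_rdoc])) for p in prob_lists]
    return sorted(range(len(prob_lists)), key=lambda i: expected_top[i], reverse=True)

# UUM/HR selects uum_hr_rank(prob_lists)[:n_sdb]; UUM/HP-FL selects
# uum_hp_fl_rank(prob_lists, n_rdoc)[:n_sdb] and retrieves n_rdoc documents from each.
```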
A more complex situation is to vary the number of retrieved documents from each selected database. More specifically, we allow different selected databases to return different numbers of documents. For simplification, the result list lengths are required to be multiples of a baseline number, 10. (This value can also be varied, but for simplification it is set to 10 in this paper.) This restriction is set to simulate the behavior of commercial search engines on the Web. (Search engines such as Google and AltaVista return only 10 or 20 document ids for every result page.) This restriction also saves computation time when calculating the optimal database selection, by allowing the step of the dynamic programming to be 10 instead of 1 (more detail is discussed later). For further simplification, we restrict the selection to at most 100 documents from each database (d_i <= 100). Then, the selection optimization problem is formalized as follows:

\vec{d}^* = \arg\max_{\vec{d}} \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})
Subject to: \sum_i I(d_i) = N_{sdb};  \sum_i d_i = N_{Total\_rdoc};  d_i = k \cdot 10, k \in [0, 1, 2, \ldots, 10]    (16)

N_{Total\_rdoc} is the total number of documents to be retrieved. Unfortunately, there is no simple solution for this optimization problem as there is for Equations 10 and 14. However, a dynamic programming algorithm can be applied to calculate the optimal solution. The basic steps of this dynamic programming method are described in Figure 2. As this algorithm allows retrieving result lists of varying lengths from each selected database, it is called the UUM/HP-VL algorithm.

Input: Complete lists of probabilities of relevance for all the |DB| databases.
Output: Optimal selection solution for Equation 16.
i) Create the three-dimensional array Sel(1..|DB|, 1..N_{Total\_rdoc}/10, 1..N_{sdb}). Each Sel(x, y, z) is associated with a selection decision \vec{d}_{xyz}, which represents the best selection decision under the condition that only databases from number 1 to number x are considered for selection, y*10 documents in total will be retrieved, and only z databases are selected out of the x database candidates. Sel(x, y, z) is the corresponding utility value of choosing the best selection.
ii) Initialize Sel(1, 1..N_{Total\_rdoc}/10, 1..N_{sdb}) with only the estimated relevance information of the 1st database.
iii) Iterate the current database candidate i from 2 to |DB|. For each entry Sel(i, y, z), find k^* such that:
k^* = \arg\max_k \big( Sel(i-1, y-k, z-1) + \sum_{j=1}^{10k} \hat{R}(d_{ij}) \big), subject to 1 \leq k \leq \min(y, 10).
If Sel(i-1, y-k^*, z-1) + \sum_{j=1}^{10k^*} \hat{R}(d_{ij}) > Sel(i-1, y, z), this means that we should retrieve 10k^* documents from the i-th database; otherwise we should not select this database and the previous best solution Sel(i-1, y, z) should be kept. Then set the values of \vec{d}_{iyz} and Sel(i, y, z) accordingly.
iv) The best selection solution is given by \vec{d}_{|DB|, N_{Total\_rdoc}/10, N_{sdb}} and the corresponding utility value is Sel(|DB|, N_{Total\_rdoc}/10, N_{sdb}).

Figure 2. The dynamic programming optimization procedure for Equation 16.
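One possible implementation of the Figure 2 procedure is sketched below. It follows the recurrence stated in the figure (for each state, either skip database i or take k*10 of its top documents and recurse on the remaining budget and database quota); the array names, the feasibility guard and the backtracking bookkeeping are our own additions, not part of the original description.

```python
import numpy as np

def uum_hp_vl_select(prob_lists, n_sdb, n_total_rdoc, step=10, max_per_db=100):
    """Dynamic-programming sketch for Equation 16 / Figure 2.

    prob_lists   : per-database complete lists of probabilities of relevance,
                   each sorted in descending order
    n_sdb        : number of databases to select
    n_total_rdoc : total number of documents to retrieve (a multiple of `step`)
    Returns {database index: number of documents to retrieve}; assumes a feasible
    instance (n_sdb*step <= n_total_rdoc <= n_sdb*max_per_db and n_sdb <= |DB|).
    """
    n_db, Y, K, NEG = len(prob_lists), n_total_rdoc // step, max_per_db // step, -1e18

    # prefix[i][m] = expected relevant documents in the top m*step documents of database i
    prefix = []
    for probs in prob_lists:
        probs = np.asarray(probs, dtype=float)
        prefix.append(np.array([probs[: m * step].sum() for m in range(K + 1)]))

    # Sel[i, y, z]: best utility using databases 0..i with y*step documents and z databases
    Sel = np.full((n_db, Y + 1, n_sdb + 1), NEG)
    choice = np.zeros((n_db, Y + 1, n_sdb + 1), dtype=int)   # steps taken from database i

    Sel[0, 0, 0] = 0.0
    for y in range(1, min(Y, K) + 1):                        # initialise with database 0 alone
        Sel[0, y, 1], choice[0, y, 1] = prefix[0][y], y

    for i in range(1, n_db):
        for y in range(Y + 1):
            for z in range(n_sdb + 1):
                best, best_k = Sel[i - 1, y, z], 0           # option: skip database i
                for k in range(1, min(y, K) + 1):            # option: take k*step documents
                    if z >= 1 and Sel[i - 1, y - k, z - 1] > NEG / 2:
                        cand = Sel[i - 1, y - k, z - 1] + prefix[i][k]
                        if cand > best:
                            best, best_k = cand, k
                Sel[i, y, z], choice[i, y, z] = best, best_k

    selection, y, z = {}, Y, n_sdb                           # backtrack the optimal d*
    for i in range(n_db - 1, -1, -1):
        k = int(choice[i, y, z])
        if k > 0:
            selection[i] = k * step
            y, z = y - k, z - 1
    return selection
```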
After the selection decisions are made, the selected databases are searched and the corresponding document ids are retrieved from each database. The final step of document retrieval is to merge the returned results into a single ranked list with the semi-supervised learning algorithm. It was pointed out before that the SSL algorithm maps the database-specific scores into centralized document scores and builds the final ranked list accordingly, which is consistent with all our selection procedures, where documents with higher probabilities of relevance (thus higher centralized document scores) are selected.

EXPERIMENTAL METHODOLOGY
It is desirable to evaluate distributed information retrieval algorithms with testbeds that closely simulate real world applications.

The TREC Web collections WT2g and WT10g [4,13] provide a way to partition documents by Web server. In this way, a large number (O(1000)) of databases with rather diverse contents could be created, which may make this testbed a good candidate to simulate operational environments such as the open domain hidden Web. However, two weaknesses of this testbed are: i) each database contains only a small number of documents (259 documents on average for WT2g) [4]; and ii) the contents of WT2g and WT10g are arbitrarily crawled from the Web. It is not likely for a hidden Web database to provide personal homepages or web pages indicating that the pages are under construction and contain no useful information at all, yet these types of web pages are contained in the WT2g/WT10g datasets. Therefore, the noisy Web data is not similar to high-quality hidden Web database contents, which are usually organized by domain experts.

Another choice is the TREC news/government data [1,15,17,18,21]. TREC news/government data is concentrated on relatively narrow topics. Compared with the TREC Web data: i) the news/government documents are much more similar to the contents provided by a topic-oriented database than an arbitrary web page is, and ii) a database in this testbed is larger than one of TREC Web data; on average a database contains thousands of documents, which is more realistic than a database of TREC Web data with about 250 documents. As the contents and sizes of the databases in the TREC news/government testbed are more similar to those of topic-oriented databases, it is a good candidate to simulate the distributed information retrieval environments of large organizations (companies) or domain-specific hidden Web sites, such as West, which provides access to legal, financial and news text databases [3]. As most current distributed information retrieval systems are developed for the environments of large organizations (companies) or the domain-specific hidden Web rather than the open domain hidden Web, the TREC news/government testbed was chosen in this work.

Trec123-100col-bysource is one of the most frequently used TREC news/government testbeds [1,15,17,21] and was chosen in this work. Three testbeds from [21] with skewed database size distributions and different types of relevant document distributions were also used to give a more thorough simulation of real environments.

Trec123-100col-bysource: 100 databases were created from TREC CDs 1, 2 and 3. They were organized by source and publication date [1]. The sizes of the databases are not skewed. Details are in Table 1.

Table 1. Testbed statistics.
Testbed   Size (GB)   Number of documents (Min / Avg / Max)   Size in MB (Min / Avg / Max)
Trec123   3.2         752 / 10782 / 39713                     28 / 32 / 42

Table 2. Query set statistics.
Name      TREC Topic Set   TREC Topic Field   Average Length (Words)
Trec123   51-150           Title              3.1

The three testbeds built in [21] were based on the trec123-100col-bysource testbed. Each testbed contains many "small" databases and two large databases created by merging about 10-20 small databases together.

Trec123-2ldb-60col ("representative"): The databases in trec123-100col-bysource were sorted in alphabetical order. Two large databases were created by merging 20 small databases with the round-robin method.
Thus, the two large databases have more relevant documents due to their large sizes, even though the densities of relevant documents are roughly the same as in the small databases.

Trec123-AP-WSJ-60col ("relevant"): The 24 Associated Press collections and the 16 Wall Street Journal collections in the trec123-100col-bysource testbed were collapsed into two large databases, APall and WSJall. The other 60 collections were left unchanged. The APall and WSJall databases have higher densities of documents relevant to TREC queries than the small databases. Thus, the two large databases have many more relevant documents than the small databases.

Trec123-FR-DOE-81col ("nonrelevant"): The 13 Federal Register collections and the 6 Department of Energy collections in the trec123-100col-bysource testbed were collapsed into two large databases, FRall and DOEall. The other 80 collections were left unchanged. The FRall and DOEall databases have lower densities of documents relevant to TREC queries than the small databases, even though they are much larger.

100 queries were created from the title fields of TREC topics 51-150. Queries 101-150 were used as training queries and queries 51-100 were used as test queries (details in Table 2).

4.2 Search Engines
In the uncooperative distributed information retrieval environments of large organizations (companies) or the domain-specific hidden Web, different databases may use different types of search engine. To simulate the multiple-engine environment, three different types of search engines were used in the experiments: INQUERY [2], a unigram statistical language model with linear smoothing [12,20] and a TFIDF retrieval algorithm with "ltc" weights [12,20]. All these algorithms were implemented with the Lemur toolkit [12]. These three kinds of search engines were assigned to the databases of the four testbeds in a round-robin manner.

RESULTS RESOURCE SELECTION OF DATABASE RECOMMENDATION
All four testbeds described in Section 4 were used in the experiments to evaluate the resource selection effectiveness of the database recommendation system.

The resource descriptions were created using query-based sampling. About 80 queries were sent to each database to download 300 unique documents. The database size statistics were estimated by the sample-resample method [21]. Fifty queries (101-150) were used as training queries to build the logistic model of relevance and to fit the exponential functions of the centralized document score curves for large-ratio databases (details in Section 3.1). Another 50 queries (51-100) were used as test data.

Resource selection algorithms of database recommendation systems are typically compared using the recall metric R_n [1,17,18,21]. Let B denote a baseline ranking, which is often the RBR (relevance based ranking), and E a ranking provided by a resource selection algorithm. Let B_i and E_i denote the number of relevant documents in the i-th ranked database of B or E, respectively. Then R_k is defined as follows:

R_k = \frac{\sum_{i=1}^{k} E_i}{\sum_{i=1}^{k} B_i}    (17)

Usually the goal is to search only a few databases, so our figures only show results for selecting up to 20 databases.
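As a small illustration, the metric can be computed directly from the per-database relevant-document counts of the two rankings; the function below and its argument names are ours:

```python
def recall_metric_rk(baseline_counts, evaluated_counts, k):
    """R_k of Equation 17. baseline_counts[i] and evaluated_counts[i] are the numbers
    of relevant documents in the (i+1)-th ranked database of the baseline ranking B
    (relevance based ranking) and of the evaluated ranking E."""
    return sum(evaluated_counts[:k]) / float(sum(baseline_counts[:k]))

# e.g. the values plotted against k: [recall_metric_rk(B, E, k) for k in range(1, 21)]
```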
The experiments summarized in Figure 3 compared the effectiveness of three resource selection algorithms, namely CORI, ReDDE and UUM/HR (the UUM/HR algorithm is described in Section 3.3). It can be seen from Figure 3 that the ReDDE and UUM/HR algorithms are more effective than the CORI resource selection algorithm on the representative, relevant and nonrelevant testbeds, and as good as it on the Trec123-100Col testbed. The UUM/HR algorithm is more effective than the ReDDE algorithm on the representative and relevant testbeds and is about the same as the ReDDE algorithm on the Trec123-100Col and nonrelevant testbeds. This suggests that the UUM/HR algorithm is more robust than the ReDDE algorithm. It can be noted that when selecting only a few databases on the Trec123-100Col or the nonrelevant testbeds, the ReDDE algorithm has a small advantage over the UUM/HR algorithm. We attribute this to two causes: i) the ReDDE algorithm was tuned on the Trec123-100Col testbed; and ii) although the difference is small, it may suggest that our logistic model for estimating probabilities of relevance is not accurate enough. More training data or a more sophisticated model may help to solve this minor puzzle.

Figure 3. Resource selection experiments on the four testbeds (four panels: Trec123-100Col, representative, relevant and nonrelevant testbeds; x-axis: number of collections selected).

RESULTS DOCUMENT RETRIEVAL EFFECTIVENESS
For document retrieval, the selected databases are searched and the returned results are merged into a single final list. In all of the experiments discussed in this section the results retrieved from individual databases were combined by the semi-supervised learning results merging algorithm. This version of the SSL algorithm [22] is allowed to download a small number of returned document texts "on the fly" to create additional training data in the process of learning the linear models which map database-specific document scores into estimated centralized document scores. It has been shown to be very effective in environments where only short result lists are retrieved from each selected database [22]. This is a common scenario in operational environments and was the case for our experiments.

Document retrieval effectiveness was measured by Precision at the top part of the final document list. The experiments in this section were conducted to study the document retrieval effectiveness of five selection algorithms, namely the CORI, ReDDE, UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms. The last three algorithms were proposed in Section 3. The first four algorithms selected 3 or 5 databases, and 50 documents were retrieved from each selected database. The UUM/HP-VL algorithm also selected 3 or 5 databases, but it was allowed to adjust the number of documents to retrieve from each selected database; the number retrieved was constrained to be between 10 and 100, and a multiple of 10.

The Trec123-100Col and representative testbeds were selected for document retrieval as they represent two extreme cases of resource selection effectiveness; in one case the CORI algorithm is as good as the other algorithms and in the other case it is quite

Table 5. Precision on the representative testbed when 3 databases were selected.
(The first baseline is CORI; the second baseline for\nUUM/HP methods is UUM/HR.)\nPrecision at\nDoc Rank\nCORI\nReDDE\nUUM/HR\nUUM/HP-FL\nUUM/HP-VL\n5 docs\n0.3720\n0.4080 (+9.7%)\n0.4640 (+24.7%)\n0.4600 (+23.7%)(-0.9%)\n0.5000 (+34.4%)(+7.8%)\n10 docs\n0.3400\n0.4060 (+19.4%)\n0.4600 (+35.3%)\n0.4540 (+33.5%)(-1.3%)\n0.4640 (+36.5%)(+0.9%)\n15 docs\n0.3120\n0.3880 (+24.4%)\n0.4320 (+38.5%)\n0.4240 (+35.9%)(-1.9%)\n0.4413 (+41.4%)(+2.2)\n20 docs\n0.3000\n0.3750 (+25.0%)\n0.4080 (+36.0%)\n0.4040 (+34.7%)(-1.0%)\n0.4240 (+41.3%)(+4.0%)\n30 docs\n0.2533\n0.3440 (+35.8%)\n0.3847 (+51.9%)\n0.3747 (+47.9%)(-2.6%)\n0.3887 (+53.5%)(+1.0%)\n\nTable 6.\nPrecision on the representative testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for\nUUM/HP methods is UUM/HR.)\nPrecision at\nDoc Rank\nCORI\nReDDE\nUUM/HR\nUUM/HP-FL\nUUM/HP-VL\n5 docs\n0.3960\n0.4080 (+3.0%)\n0.4560 (+15.2%)\n0.4280 (+8.1%)(-6.1%)\n0.4520 (+14.1%)(-0.9%)\n10 docs\n0.3880\n0.4060 (+4.6%)\n0.4280 (+10.3%)\n0.4460 (+15.0%)(+4.2%)\n0.4560 (+17.5%)(+6.5%)\n15 docs\n0.3533\n0.3987 (+12.9%)\n0.4227 (+19.6%)\n0.4440 (+25.7%)(+5.0%)\n0.4453 (+26.0%)(+5.4%)\n20 docs\n0.3330\n0.3960 (+18.9%)\n0.4140 (+24.3%)\n0.4290 (+28.8%)(+3.6%)\n0.4350 (+30.6%)(+5.1%)\n30 docs\n0.2967\n0.3740 (+26.1%)\n0.4013 (+35.3%)\n0.3987 (+34.4%)(-0.7%)\n0.4060 (+36.8%)(+1.2%)\n\nTable 3.\nPrecision on the trec123-100col-bysource testbed when 3 databases were selected. (The first baseline is CORI; the second\nbaseline for UUM/HP methods is UUM/HR.)\nPrecision at\nDoc Rank\nCORI\nReDDE\nUUM/HR\nUUM/HP-FL\nUUM/HP-VL\n5 docs\n0.3640\n0.3480 (-4.4%)\n0.3960 (+8.8%)\n0.4680 (+28.6%)(+18.1%)\n0.4640 (+27.5%)(+17.2%)\n10 docs\n0.3360\n0.3200 (-4.8%)\n0.3520 (+4.8%)\n0.4240 (+26.2%)(+20.5%)\n0.4220 (+25.6%)(+19.9%)\n15 docs\n0.3253\n0.3187 (-2.0%)\n0.3347 (+2.9%)\n0.3973 (+22.2%)(+15.7%)\n0.3920 (+20.5%)(+17.1%)\n20 docs\n0.3140\n0.2980 (-5.1%)\n0.3270 (+4.1%)\n0.3720 (+18.5%)(+13.8%)\n0.3700 (+17.8%)(+13.2%)\n30 docs\n0.2780\n0.2660 (-4.3%)\n0.2973 (+6.9%)\n0.3413 (+22.8%)(+14.8%)\n0.3400 (+22.3%)(+14.4%)\n\nTable 4.\nPrecision on the trec123-100col-bysource testbed when 5 databases were selected. (The first baseline is CORI; the second\nbaseline for UUM/HP methods is UUM/HR.)\nPrecision at\nDoc Rank\nCORI\nReDDE\nUUM/HR\nUUM/HP-FL\nUUM/HP-VL\n5 docs\n0.4000\n0.3920 (-2.0%)\n0.4280 (+7.0%)\n0.4680 (+17.0%)(+9.4%)\n0.4600 (+15.0%)(+7.5%)\n10 docs\n0.3800\n0.3760 (-1.1%)\n0.3800 (+0.0%)\n0.4180 (+10.0%)(+10.0%)\n0.4320 (+13.7%)(+13.7%)\n15 docs\n0.3560\n0.3560 (+0.0%)\n0.3720 (+4.5%)\n0.3920 (+10.1%)(+5.4%)\n0.4080 (+14.6%)(+9.7%)\n20 docs\n0.3430\n0.3390 (-1.2%)\n0.3550 (+3.5%)\n0.3710 (+8.2%)(+4.5%)\n0.3830 (+11.7%)(+7.9%)\n30 docs\n0.3240\n0.3140 (-3.1%)\n0.3313 (+2.3%)\n0.3500 (+8.0%)(+5.6%)\n0.3487 (+7.6%)(+5.3%)\n39\na lot worse than the other algorithms. Tables 3 and 4 show the\nresults on the Trec123-100Col testbed, and Tables 5 and 6 show\nthe results on the representative testbed.\n\nOn the Trec123-100Col testbed, the document retrieval\neffectiveness of the CORI selection algorithm is roughly the\nsame or a little bit better than the ReDDE algorithm but both of\nthem are worse than the other three algorithms (Tables 3 and 4).\nThe UUM/HR algorithm has a small advantage over the CORI\nand ReDDE algorithms. 
One main difference between the\nUUM/HR algorithm and the ReDDE algorithm was pointed out\nbefore: The UUM/HR uses training data and linear interpolation\nto estimate the centralized document score curves, while the\nReDDE algorithm [21] uses a heuristic method, assumes the\ncentralized document score curves are step functions and makes\nno distinction among the top part of the curves. This difference\nmakes UUM/HR better than the ReDDE algorithm at\ndistinguishing documents with high probabilities of relevance\nfrom low probabilities of relevance. Therefore, the UUM/HR\nreflects the high-precision retrieval goal better than the ReDDE\nalgorithm and thus is more effective for document retrieval.\n\nThe UUM/HR algorithm does not explicitly optimize the\nselection decision with respect to the high-precision goal as the\nUUM/HP-FL and UUM/HP-VL algorithms are designed to do.\nIt can be seen that on this testbed, the UUM/HP-FL and\nUUM/HP-VL algorithms are much more effective than all the\nother algorithms. This indicates that their power comes from\nexplicitly optimizing the high-precision goal of document\nretrieval in Equations 14 and 16.\n\nOn the representative testbed, CORI is much less effective than\nother algorithms for distributed document retrieval (Tables 5 and\n6). The document retrieval results of the ReDDE algorithm are\nbetter than that of the CORI algorithm but still worse than the\nresults of the UUM/HR algorithm. On this testbed the three\nUUM algorithms are about equally effective. Detailed analysis\nshows that the overlap of the selected databases between the\nUUM/HR, UUM/HP-FL and UUM/HP-VL algorithms is much\nlarger than the experiments on the Trec123-100Col testbed,\nsince all of them tend to select the two large databases. This\nexplains why they are about equally effective for document\nretrieval.\n\nIn real operational environments, databases may return no\ndocument scores and report only ranked lists of results. As the\nunified utility maximization model only utilizes retrieval scores\nof sampled documents with a centralized retrieval algorithm to\ncalculate the probabilities of relevance, it makes database\nselection decisions without referring to the document scores\nfrom individual databases and can be easily generalized to this\ncase of rank lists without document scores. The only adjustment\nis that the SSL algorithm merges ranked lists without document\nscores by assigning the documents with pseudo-document scores\nnormalized for their ranks (In a ranked list of 50 documents, the\nfirst one has a score of 1, the second has a score of 0.98 etc)\n,which has been studied in [22]. The experiment results on\ntrec123-100Col-bysource testbed with 3 selected databases are\nshown in Table 7. The experiment setting was the same as\nbefore except that the document scores were eliminated\nintentionally and the selected databases only return ranked lists\nof document ids. It can be seen from the results that the\nUUM/HP-FL and UUM/HP-VL work well with databases\nreturning no document scores and are still more effective than\nother alternatives. Other experiments with databases that return\nno document scores are not reported but they show similar\nresults to prove the effectiveness of UUM/HP-FL and UUM/HP-VL\nalgorithms.\n\nThe above experiments suggest that it is very important to\noptimize the high-precision goal explicitly in document\nretrieval. 
The new algorithms based on this principle achieve\nbetter or at least as good results as the prior state-of-the-art\nalgorithms in several environments.\nCONCLUSION\nDistributed information retrieval solves the problem of finding\ninformation that is scattered among many text databases on local\narea networks and Internets. Most previous research use\neffective\nresource\nselection\nalgorithm\nof\ndatabase\nrecommendation system for distributed document retrieval\napplication. We argue that the high-recall resource selection\ngoal of database recommendation and high-precision goal of\ndocument retrieval are related but not identical. This kind of\ninconsistency has also been observed in previous work, but the\nprior solutions either used heuristic methods or assumed\ncooperation by individual databases (e.g., all the databases used\nthe same kind of search engines), which is frequently not true in\nthe uncooperative environment.\n\nIn this work we propose a unified utility maximization model to\nintegrate the resource selection of database recommendation and\ndocument retrieval tasks into a single unified framework. In this\nframework, the selection decisions are obtained by optimizing\ndifferent objective functions. As far as we know, this is the first\nwork that tries to view and theoretically model the distributed\ninformation retrieval task in an integrated manner.\n\nThe new framework continues a recent research trend studying\nthe use of query-based sampling and a centralized sample\ndatabase. A single logistic model was trained on the centralized\nTable 7.\nPrecision on the trec123-100col-bysource testbed when 3 databases were selected (The first baseline is CORI; the second\nbaseline for UUM/HP methods is UUM/HR.) (Search engines do not return document scores)\nPrecision at\nDoc Rank\nCORI\nReDDE\nUUM/HR\nUUM/HP-FL\nUUM/HP-VL\n5 docs\n0.3520\n0.3240 (-8.0%)\n0.3680 (+4.6%)\n0.4520 (+28.4%)(+22.8%)\n0.4520 (+28.4%)(+22.8)\n10 docs\n0.3320\n0.3140 (-5.4%)\n0.3340 (+0.6%)\n0.4120 (+24.1%)(+23.4%)\n0.4020 (+21.1%)(+20.4%)\n15 docs\n0.3227\n0.2987 (-7.4%)\n0.3280 (+1.6%)\n0.3920 (+21.5%)(+19.5%)\n0.3733 (+15.7%)(+13.8%)\n20 docs\n0.3030\n0.2860 (-5.6%)\n0.3130 (+3.3%)\n0.3670 (+21.2%)(+17.3%)\n0.3590 (+18.5%)(+14.7%)\n30 docs\n0.2727\n0.2640 (-3.2%)\n0.2900 (+6.3%)\n0.3273 (+20.0%)(+12.9%)\n0.3273 (+20.0%)(+12.9%)\n\n40\nsample database to estimate the probabilities of relevance of\ndocuments by their centralized retrieval scores, while the\ncentralized sample database serves as a bridge to connect the\nindividual databases with the centralized logistic model.\nTherefore, the probabilities of relevance for all the documents\nacross the databases can be estimated with very small amount of\nhuman relevance judgment, which is much more efficient than\nprevious methods that build a separate model for each database.\nThis framework is not only more theoretically solid but also\nvery effective. One algorithm for resource selection (UUM/HR)\nand two algorithms for document retrieval (UUM/HP-FL and\nUUM/HP-VL) are derived from this framework. Empirical\nstudies have been conducted on testbeds to simulate the\ndistributed search solutions of large organizations (companies)\nor domain-specific hidden Web. Furthermore, the UUM/HP-FL\nand UUM/HP-VL resource selection algorithms are extended\nwith a variant of SSL results merging algorithm to address the\ndistributed document retrieval task when selected databases do\nnot return document scores. 
Experiments have shown that these\nalgorithms achieve results that are at least as good as the prior\nstate-of-the-art, and sometimes considerably better. Detailed\nanalysis indicates that the advantage of these algorithms comes\nfrom explicitly optimizing the goals of the specific tasks.\n\nThe unified utility maximization framework is open for different\nextensions. When cost is associated with searching the online\ndatabases, the utility framework can be adjusted to automatically\nestimate the best number of databases to search so that a large\namount of relevant documents can be retrieved with relatively\nsmall costs. Another extension of the framework is to consider\nthe retrieval effectiveness of the online databases, which is an\nimportant issue in the operational environments. All of these are\nthe directions of future research.\n\nACKNOWLEDGEMENT\nThis research was supported by NSF grants EIA-9983253 and\nIIS-0118767.\nAny\nopinions,\nfindings,\nconclusions,\nor\nrecommendations expressed in this paper are the authors', and\ndo not necessarily reflect those of the sponsor.\n\nREFERENCES\n[1]\n\nJ. Callan. (2000). Distributed information retrieval. In W.B.\nCroft, editor, Advances in Information Retrieval. Kluwer\nAcademic Publishers. (pp. 127-150).\n[2]\n\nJ. Callan, W.B. Croft, and J. Broglio. (1995). TREC and\nTIPSTER experiments with INQUERY. Information\nProcessing and Management, 31(3). (pp. 327-343).\n[3]\n\nJ. G. Conrad, X. S. Guo, P. Jackson and M. Meziou.\n(2002). Database selection using actual physical and\nacquired logical collection resources in a massive domain-specific\noperational environment. Distributed search over\nthe hidden web: Hierarchical database sampling and\nselection. In Proceedings of the 28\nth\nInternational\nConference on Very Large Databases (VLDB).\n[4]\n\nN. Craswell. (2000). Methods for distributed information\nretrieval. Ph. D. thesis, The Australian Nation University.\n[5]\n\nN. Craswell, D. Hawking, and P. Thistlewaite. (1999).\nMerging results from isolated search engines. In\nProceedings of 10th Australasian Database Conference.\n[6]\n\nD. D'Souza, J. Thom, and J. Zobel. (2000). A comparison\nof techniques for selecting text collections. In Proceedings\nof the 11th Australasian Database Conference.\n[7]\n\nN. Fuhr. (1999). A Decision-Theoretic approach to\ndatabase selection in networked IR. ACM Transactions on\nInformation Systems, 17(3). (pp. 229-249).\n[8]\n\nL. Gravano, C. Chang, H. Garcia-Molina, and A. Paepcke.\n(1997). STARTS: Stanford proposal for internet meta-searching\n. In Proceedings of the 20th ACM-SIGMOD\nInternational Conference on Management of Data.\n[9]\n\nL. Gravano, P. Ipeirotis and M. Sahami. (2003). QProber:\nA System for Automatic Classification of Hidden-Web\nDatabases. ACM Transactions on Information Systems,\n21(1).\n[10]\n\nP. Ipeirotis and L. Gravano. (2002). Distributed search over\nthe hidden web: Hierarchical database sampling and\nselection. In Proceedings of the 28th International\nConference on Very Large Databases (VLDB).\n[11]\n\nInvisibleWeb.com. http://www.invisibleweb.com\n[12]\n\nThe lemur toolkit. http://www.cs.cmu.edu/~lemur\n[13]\n\nJ. Lu and J. Callan. (2003). Content-based information\nretrieval in peer-to-peer networks. In Proceedings of the\n12th International Conference on Information and\nKnowledge Management.\n[14]\n\nW. Meng, C.T. Yu and K.L. Liu. (2002) Building efficient\nand effective metasearch engines. ACM Comput. Surv.\n34(1).\n[15]\n\nH. Nottelmann and N. Fuhr. (2003). 
Evaluating different\nmethod of estimating retrieval quality for resource\nselection. In Proceedings of the 25th Annual International\nACM SIGIR Conference on Research and Development in\nInformation Retrieval.\n[16]\n\nH., Nottelmann and N., Fuhr. (2003). The MIND\narchitecture for heterogeneous multimedia federated digital\nlibraries. ACM SIGIR 2003 Workshop on Distributed\nInformation Retrieval.\n[17]\n\nA.L. Powell, J.C. French, J. Callan, M. Connell, and C.L.\nViles. (2000). The impact of database selection on\ndistributed searching. In Proceedings of the 23rd Annual\nInternational ACM SIGIR Conference on Research and\nDevelopment in Information Retrieval.\n[18]\n\nA.L. Powell and J.C. French. (2003). Comparing the\nperformance of database selection algorithms. ACM\nTransactions on Information Systems, 21(4). (pp. 412-456).\n[19]\n\nC. Sherman (2001). Search for the invisible web. Guardian\nUnlimited.\n[20]\n\nL. Si and J. Callan. (2002). Using sampled data and\nregression to merge search engine results. In Proceedings\nof the 25th Annual International ACM SIGIR Conference\non Research and Development in Information Retrieval.\n[21]\n\nL. Si and J. Callan. (2003). Relevant document distribution\nestimation method for resource selection. In Proceedings of\nthe 26th Annual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval.\n[22]\n\nL. Si and J. Callan. (2003). A Semi-Supervised learning\nmethod to merge search engine results. ACM Transactions\non Information Systems, 21(4). (pp. 457-491).\n41", "keywords": "resource selection;distributed information retrieval"} {"name": "205", "title": "Unwanted Traffic in 3G Networks", "abstract": "The presence of \"unwanted\" (or background) traffic in the Internet is a well-known fact. In principle any network that has been engineered without taking its presence into account might experience troubles during periods of massive exposure to unwanted traffic, e.g. during large-scale infections. A concrete example was provided by the spreading of Code-Red-II in 2001, which caused several routers crashes worldwide. Similar events might take place in 3G networks as well, with further potential complications arising from their high functional complexity and the scarcity of radio resources. For example, under certain hypothetical network configuration settings unwanted traffic, and specifically scanning traffic from infected Mobile Stations, can cause large-scale wastage of logical resources, and in extreme cases even starvation. Unwanted traffic is present nowdays also in GPRS/UMTS, mainly due to the widespread use of 3G connect cards for laptops. We urge the research community and network operators to consider the issue of 3G robustness to unwanted traffic as a prominent research area.", "fulltext": "INTRODUCTION\nPublic wide-area wireless networks are now migrating to\nthird-generation systems (3G), designed to support packet-switched\ndata services and Internet access. Several UMTS\nnetworks became operational since 2003 while early GPRS\ndeployments date back to 2000. Since then, the growing\npopularity of 3G terminals and services has extended the\ncoverage of Internet wireless access to the geographic area,\nand 3G networks are becoming key components of the global\nInternet. 
In a recent CCR contribution Keshav [17] foresees\nthat cell phones will become the dominant component of future\nInternet population, while Kleinrock expects this role\nto be played by \"small pervasive devices ubiquitously embedded\nin the physical world\" (quoted from [14, p. 112]).\nBoth scenarios underlay that the main access mode in the future\nInternet will be wide-area wireless. Currently deployed\n3G networks, along with their future evolutions, are in pole-position\nface to concurrent technologies (e.g. WIMAX) to\nprovide such access connectivity in the large-scale.\nGenerally speaking, the 3G network being essentially a\nmixture of two paradigms, namely mobile cellular and IP, it\nis exposed to the security and reliability issues affecting each\ncomponent, plus the new risks emerging from their combination\n. The 3G environment inherits from the cellular\nparadigm a number of features like terminal personalization\nand geolocalization that make privacy and information security\nparticularly critical. When coupled with the IP world,\nmarkedly the \"openess\" of its applications and accessibility,\nthe concerns of privacy and security from the user perspective\nbecome even more critical than in legacy 2G networks.\nBecause of that - and of some \"lessons learned\" from past\nmistakes in 2G security [5] - privacy and information security\naspects have received a thorough treatment in the 3G specifications\n(see [7] for an exhaustive overview). Nevertheless,\nthe specific topic of 3G network security in relation to the robustness\nand availability of the network infrastructure itself\nhas not received adequate attention by the research community\nto date. The problem can be condensed in the following\nquestion: What is the level of robustness of a 3G network\nagainst deliberate attacks or other unanticipated stimuli?\nThe problem of network security involves issues related to\nnetwork resilience and stability, and can not be addressed\nwithout a deep understanding of the detailed structure and\norganization of the real network. Considered the relative\nrecent deployment of 3G, and the very limited access that\nresearch groups have to these networks, it should be no surprise\nthat the work in this area has been sporadic. Some\nexploits against 3G network are known and documented in\nindustry reports (e.g. [15] [2]), while the fact that a limited\namount of malicious traffic can cause large-case troubles to\na wireless cellular network has been \"unveiled\" in the recent\npaper [18] with reference to a 2G network supporting open\nSMS service. But at this stage what is still missing is an\nexhaustive and systematic recognition of the potential risks,\nthreats and problems to 3G network security, from which a\nresearch agenda can be drawn.\nWe provide here a novel contribution towards this goal\nby introducing an issue that has passed unrecognized so far:\nthe impact onto 3G networks of unwanted traffic, and specifically\nlarge-scale worm infections. Remarkably, all the cited\nprevious works consider deliberate DoS attack against the\nnetwork. Instead here we focus on a slightly more subtle\nissue, namely the (side-)effects onto the network of (unwanted\n) traffic, whose intended target is typically not the\nnetwork but rather its terminals. 
Our work was inspired\nby the consequences of the Code-Red-II infection onto the\nrouters of the wired Internet, reported in [3] and [4].\nWe claim that under certain conditions and for certain\nnetwork configuration scenarios large-scale worm infections\ncan cause sensible degradation and risks for the network\nperformances and availability. We urge the research community\nand network operators to consider the issue of 3G\nrobustness to unwanted traffic as a prominent research area.\nThe goal of this contribution is to trigger interest and at the\nsame time move the first pioneering steps in such direction.\nThe following discussion is based on empirical observations\nfrom an operational GPRS/UMTS network collected\nduring an ongoing research project in traffic monitoring and\nmodeling in 3G, the DARWIN project [1], carried out in collaboration\nwith mobilkom austria AG&CoKG (the leading\nmobile operator in Austria, EU) and Kapsch CarrierCom\n(provider of equipments and network engineering services).\nOVERVIEW OF 3G NETWORKS\nNetwork structure.\nA 3G network includes two main sections\n: a Packet-Switched Core Network (CN), which is based\non IP, and one or more Radio Access Network (RAN). Along\nwith the UMTS RAN (UTRAN) based on W-CDMA, several\noperators maintain a parallel GPRS RAN evolved from\nthe legacy GSM radio. This structure is sketched in Figure\n1. It is also possible to connect additional separate RANs\nto the same CN, typically WLAN [13] and perhaps in the\nfuture also WIMAX. Each RAN can evolve independently\nfrom the CN: for example in several networks GPRS has\nbeen upgraded to EDGE [10, p. 152], while UMTS upgrade\ntowards HSDPA [8, p. 351] is ongoing. Each RAN is connected\nto the legacy 2G Circuit-Switched Core-Network (not\nshown in Figure 1) for traditional services like voice calls,\nand to the Packet-Switched Core-Network (CN for short)\nfor data services. The CN embeds several elements: SGSN,\nGGSN, and a number of information servers. Some of the\nlatter are shared with the Circuit-Switched Core-Network\nof the legacy 2G system\n, e.g. the HLR/AuC. The SGSNs\nperform functions such as access control, location management\n, paging, route management [10]. The GGSN is the\nlogical gateway between the CN and external packet networks\n(Internet and private networks), is endowed with a\nfull IP-stack and handles the IP-level connectivity with the\nMS. The SGSN and GGSN of the same operator communicate\nthrough the Gn interface. The CNs of different opera-1\nNotably the close coupling between the circuit- (GSM) and\npacket-switched (GPRS/UMTS) sections is a source of concern\nsince in principle troubles originated in the latter might\ncause impairments or side-effect to the former as well.\ntors are interconnected through the Gp interface for support\nof roaming. The Gn protocol stack [10, p. 94] shows that a\nlower UDP/IP layer is used to carry the user data packets\nacross Gn, with an intermediate encapsulation into a 3G-specific\nprotocol (GPRS Tunnelling Protocol, GTP). In fact,\nthe Gn interface is basically a wide-area IP network interconnecting\nthe different SGSN/GGSN sites, and as such it\nembeds routers, IP subnets etc. Besides that, the CN is rich\nin IP-based elements, including servers supporting control\nand management functions (e.g. DNS, DHCP, RADIUS, see\n[10]) and application elements (e.g. WAP gateway, proxies,\ninternal servers). The latter are always located behind the\nGGSN, on the Gi side (ref. 
Figure 1) as they operate directly\non the data-plane. Note also that packet filtering and other\nrestiction policies can be located on separate dedicated elements\n(NAT, IDS, firewalls) at the network boundaries (Gi,\nGp) and/or directly configured into the GGSNs.\n3G terminals.\nThe population of 3G terminals is highly\nheterogeneous and includes very different types of device:\nhand-held phones and PDA, connect-card pluggable into\nlaptops, blackberry, etc. Additionally, a broad range of automatic\ndevices with no human interaction is emerging, taking\nadvantage of the ubiquity of the GPRS/UMTS coverage\n(e.g. sensors, alarms, presence indicators, remote cameras).\nPresently the most numerous 3G terminals are hand-held\nphones. They span a broad range of technological platforms,\na major point of difference (for the moment) from the wired\nInternet that is essentially a monoculture. The last aspect\nis critical when considering malware infections: such a \"bi-ological\nvariety\" intrinsically limits the potential infection\nscope, which in turn reduces somehow the very appeal for\nprogramming new pieces of malware. As a result, large-scale\ninfections of cellular phones have not yet been observed, despite\na growing number of exploits and pieces of malicious\ncode targeting GPRS/UMTS phones have already appeared\nin the wild (e.g. Cabir, Mosquito, Comwarrior\n2\n).\n3G datacards for laptop.\nMany 3G datacards for laptop\nwere sold starting in 2004, often coupled with flat-rate offers.\nMost of these laptops are equipped with Microsoft Windows\n- note that for some datacards drivers are not available for\nother operating systems. This introduced into the 3G environment\na sub-population of homogeneous terminals, i.e.\nWindows laptops, that are intrinsically exposed to all kinds\nof exploits and infections that are found in the wired Internet\n. In case of active infection (e.g. a scanning worm) they\nintroduce into the 3G network the same \"unwanted\" traffic\npatterns (e.g. probe SYN packets) that are found in wired\nLANs and in the Internet.\nPROBLEM STATEMENT\nUnwanted traffic.\nThe term \"unwanted traffic\" has been\nused in [16] to refer cumulatively to those traffic components\noriginated directly or indirectly by malicious or anyway \"non\nproductive\" activities. It includes backscatter traffic asso-ciated\nto remote DoS attacks, scanning probes, spam, exploit\nattempts etc. Unwanted traffic might have a negative\nimpact onto the underlying network, and in extreme cases\ndrive the network or at least some of its elements to crash.\nA bright example was provided by the spreading of Code-Red\n-II in 2001 [3]. Once installed on a victim host, the\nworm started to scan for new potential victims by sending\na high rate of probing TCP SYN packets to random\naddresses. This caused troubles to the packet forwarding\nmodules of several edge routers all over the Internet, some\nof which eventually crashed [4]. In simple words, the problem\nis that route caching mechanisms were designed (and\noptmized) to operate under \"normal\" (i.e. expected) traffic\nconditions, where most of the packets are directed to a\nrelativelly small subset of popular subnets. In such nominal\ncondition, route caching can be very effective. But during\nthe infection probing SYN packet were massively generated\nand sent to randomly chosen IP addresses, thus driving the\ncache access mechanisms to explode. 
In other words, the\nworm infection built-up a traffic aggregate macroscopically\ndifferent from the \"normal\" pattern, and the network proved\nto be not robust enough to sustain such different conditions.\nThe lesson to be learned is that in terms of the characteristics\nof the macroscopic traffic aggregate (entropy of the\ndestination IP address distribution, packet size, etc.) large\ninfections or other unwanted traffic components can expose\nthe network to a different \"operating point\" from what the\nnetwork was engineered and optmized for, with potentially\ndramatic effects\n3\n.\nPotential impact on 3G.\nIn principle, the 3G network is\nexposed to the same type of incidents, and perhaps even\nmore given the higher functional complexity inherited by the\nwireless cellular paradigm. The 3G network is ultimately an\nIP network, but with important peculiarities. First, the underlying\ntransport stratum, specifically the 3G-specific lower\nprotocols in the RAN, are endowed with very high functional\ncomplexity and signaling interactions - mainly for the sake\nof mobility management and efficient resource management.\nSecond, the population of internal \"hosts\" is extremely large\n(from thousands to millions of MSs) and highly dynamic (activity\nperiods can be as short as few seconds). The potential\nimpact of large-scale infections and unwanted traffic in such\na system is an intriguing point for research, that has not yet\nbeen addressed by the research community. The existence of\nthe problem has been conjectured in a previous work [9, p.\n447-448]. In lack of past empirical events, it is not possible\nto claim that 3G network are exposed to serious damages\nfrom large infections. On the other hand, without a systematic\nrisk assessment it is neither possible to provide a priori\nguarantees about their robustness. Empirical evidence of\nthe very existence of unwanted traffic in a real 3G network\nhas been reported in [6] along with initial but technically-detailed\nspeculations on the potential impact that the observed\ntraffic would have under certain hypothetical conditions\nand configuration setting. The actual impact, if any,\ndepends on a combination of factors related to the network\nconfiguration and equipment features. In the following we\nillustrate the problem by discussing a few examplary forms\nof impact that might take place in a real network.\nStateful elements.\nThe presence of massive amounts of\nTCP SYN packets might cause troubles to those stateful\nelements designed to reserve resources for each TCP con-3\nIn this regard, this is another example of (lack of) robusteness\nto unanticipated types of events in HOT systems [11].\nnection (e.g. application layer proxies, servers, NATs). Note\nthat some stateful operations might be enabled also on the\nGGSNs.. In this cases the GGSN logic should be robust to\nhigh rates of SYN packets coming from the MSs.\nLarge volumes of SYN packets might be originated by deliberate\nDoS/DDoS or from large-scale infections of scanning\nworms. In both cases, the source(s) can be hosts in the Internet\n(exogenous traffic) or other MS in the RAN (endogenous\ntraffic). In general, exogenous traffic can be blocked at\nthe external firewall as for any other private network. The\nfirst element to inspect the IP packets sent by the MSs is\nthe GGSN. The latter generally embeds full router capabilities\n, therefore it can be configured with the same stateless\n/ stateful firewalling policies and/or throttling mechanisms\n(see e.g. 
[12]) to filter endogenous uplink traffic. For an\nimproved robusteness against residual unblocked SYNs, all\nstateful elements should be designed to resist massive SYN\nstorms rather than just rely on external filtering elements.\nWastage of logical resources.\nThe UMTS radio bearer\nchannels (called Dedicated Channel, DCH) are assigned dy-namically\nto active MSs. The assignment policy is implemented\nin the RNC and is generally based on a combination\nof timeouts from the last data packet and thresholds on\nthe recent sending / receving rates. The exact algorithm is\nvendor-dependent, with parameters configurable by the operator\n. Let us consider here the simplest case of a purely\ntimeout-based DCH assignment policy: the DCH is assigned\nto the MS at the time of the first packet (sent or received),\nand is released after T\nDCH\nseconds from the last packet,\nT\nDCH\nbeing the holding timeout for DCH. Note that when\nthe MS does not have an assigned DCH, packets are ex-changed\non the common channels FACH or RACH (see [8,\nCh. 7]). Note also that each channel switch operation involves\na signaling procedure at the radio interface, contributing\nto the total transfer delay for the arriving packet. The\nvalue of T\nDCH\nmust be tuned carefully. Too short values\ncauses a high frequency of channel switch cycles, and consequently\n(i) a higher consumption of signaling resources on\nthe radio link and (ii) longer packet delays and hence worse\nuser experience. On the other hand, too long values will\nlead to wastage of logical resources, i.e. DCHs, whose available\nnumber if limited in each cell. Therefore, the optimal\nvalue of T\nDCH\nmust be chosen according to the distribution\nof idle-period duration for \"typical users\".\nGiven such framework, consider what happen when a number\nof infected terminals are scanning the local address space.\nEach active MS (not necessarely infected) will be visited by\nscanning probes at an average rate of R\nv\npkt/sec. The exact\nvalue of R\nv\ndepends on several factors like number of\nscanning MSs, scanning rate, etc. (see [6] for more details)\nand can typically be in the order of few seconds or below.\nIn case that the average probe interarrival time is smaller\nthan the DCH holding timer, i.e.\nv\n= (R\nv\n)\n-1\n< T\nDCH\n, the\nincoming unwanted traffic will keep the DCH channel assigned\nto the target MSs indefinitely, until the user switches\noff the terminal or explicitely close the PDP-context\n4\n. Note\nthat the volume in byte count of such incoming background\ntraffic is extremely low and would pass unnoticed by the\nuser. No assumption is made about the vulnerability of the\n4\nThe \"PDP-context\" is the logical connection to the 3G\nnetwork, conceptually similar to a wired modem dial-up.\nACM SIGCOMM Computer Communication Review\n55\nVolume 36, Number 2, April 2006\ntarget MS to the specific exploit, the only condition being\nthat it is reachable by probing packets, i.e. it has an active\nPDP-context. Such always-on \"spurious\" DCH waste\nresources on the radio interface. Notably, wastage is limited\nto the logical resources, i.e. DCH, since the physical\nbandwidth is left largely unused as only sporadic and small\npackets (probe SYNs) are transmitted over the air. 
Such\nphenomenon might lead to logical congestion of some radio\ncells as soon as the number of active MSs in the cell reaches\nthe number of available DCHs.\nSignaling overhead.\nOne key assumption in the above scenario\nis that the average interarrival of background packets\nis smaller than the DCH holding timeour, i.e.\nv\n< T\nDCH\n.\nOther problems arise in case that\nv\nis higher but close to\nT\nDCH\n, i.e.\nv\n= T\nDCH\n+\nfor small , particularly in the\ncase of low T\nDCH\n. In this case, a DCH reassignment follows\nimmediately a DCH release at rate 1/T\nDCH\n, thus wasting\nsignaling bandwidth in the radio section. Again, the more\n\"victims\" are present in the same cell the higher the impact.\nCONCLUSIONS\nWe warn that unwanted (or \"background\") traffic can\nhave an impact onto the functionally-complex 3G network,\nat least under certain conditions of network configuration\nand setting. Real measurements [6] provide evidence of the\npresence of such traffic inside a real GPRS/UMTS network.\nWe have speculated on its potential impact under hypothetical\nnetwork conditions (e.g. MS-to-MS communication enabled\n, no firewalling set in the GGSNs). The extent to which\nsuch conditions are effectively found in a real network is unknown\n, as mobile operators do not disclose details about the\ndeployment and configuration of their networks. Since the\nactual impact, if any, depends pointedly on a combination\nof factors related to the network configuration and equipment\nfeatures, in many cases the relevant countermeasures\nand fixes are obvious or anyway simple to implement once\nthat the potential risk has been identified. Often preventive\nactions are as simple as a careful and informed network engineering\nand equipment configuration. For instance, stateful\nfirewalling at the GGSN prevents probe packets to reach\nthe target MS thus avoding DCH channels to be \"spuri-ously\"\nkept alive by background traffic. Alternatively, a\nmore sophisticated DCH assignment strategy (e.g. based on\nthresholds on the packet rate) would alleviate the problem.\nHowever, such features might never be activated unless an\nexplicit recognition of the problem of unwanted traffic and\nits consequences. In summary, the very first problem is to\nrecognize and assess the potential risks, which might be hidden\nin the intricate web of interactions and dependencies\nembedded within the functionally-complex 3G network.\nThe potential risks due to the presence of unwanted traffic\nmust be taken into account in the design of the network setting\n, so as to avoid the emergence of hazardous conditions.\nA coherent process of risk assessment should be considered\nas a natural component of the network engineering process.\nIn turn, risk recognition must be based on a thorough understanding\nof the specific traffic environment, which is conti-nously\nevolving following the emerging of new services, new\ntypes of terminals, new forms of infections, new attacks, etc.\nAutomatic or semi-automatic methods can be implemented\nto detect drifts in the macroscopic composition of the traffic,\nincluding the raise of new components of unwanted traffic,\nborrowing concepts and tools from the recent achievements\nin the field of anomaly detection in the Internet. The prerequisite\nfor all that is a continuous (always-on) process of\nlarge-scale traffic monitoring and analysis from inside the\nnetwork, i.e. 
on the internal interfaces like Gn.\nREFERENCES\n[1] DARWIN home page\nhttp://userver.ftw.at/\n\nricciato/darwin.\n[2] A. Bavosa. Attacks and Counter Measures in 2.5G\nand 3G Cellular IP Networks. Juniper White Paper,\nJune 2004. Online at www.juniper.net/solutions/lit-erature/white\npapers/200074.pdf.\n[3] C.C. Zou, W. Gong, D. Towsley. Code Red Worm\nPropagation Modeling and Analysis. 9th ACM Conf.\non Computer and Comm. Security (CCS'02), 2002.\n[4] Cisco. Dealing with mallocfail and High CPU\nUtilization Resulting From the \"Code Red\" Worm.\nwww.cisco.com/warp/public/117/ts codred worm.pdf.\n[5] E. Barkan, E. Biham, N. Keller. Instant Ciphertext-Only\nCryptanalysis of GSM Encrypted Communications\n. Crypto 2003, Santa Barbara, CA, August 2003.\n[6] F. Ricciato, P. Svoboda, E. Hasenleithner, W.\nFleischer. On the Impact of Unwanted Traffic onto a\n3G Network. Technical Report FTW-TR-2006-006,\nFebruary 2006. Available online from [1].\n[7] G. M. Koien. An Introduction ro Access Security in\nUMTS. IEEE Wireless Communications, 11(1), 2004.\n[8] H. Holma, A. Toskala. WCDMA for UMTS. Wiley.\n[9] H. Yang, F. Ricciato, S. Lu, L. Zhang. Securing a\nWireless World. Proceedings of the IEEE, 94(2), 2006.\n[10] J. Bannister, P. Mather, S. Coope. Convergence\nTechnologies for 3G Networks. Wiley, 2004.\n[11] J. M. Carlson, J. Doyle. HOT: Robustness and design\nin complex systems. Phys. Rev. Let., 84(11), 2000.\n[12] J. Twycross, M. M. Williamson. Implementing and\ntesting a virus throttle. Tech. Report HPL-2003-103,\nMay 2003. Online www.hpl.hp.com/techreports/2003.\n[13] K. Ahmavaara, H. Haverinen, R. Pichna. Interworking\nArchitecture Between 3GPP and WLAN systems.\nIEEE Communications Magazine, November 2003.\n[14] L. Kleinrock. The Internet: History and Future. Lectio\nMagistralis at Politecnico di Torino, October 2005.\nOnline at www.tlc.polito.it/\n\nnordio/seminars.\n[15] O. Whitehouse. GPRS Wireless Security: Not Ready\nFor Prime Time. Research Report by stake, June 2002.\nOnline at www.atstake.com/research/reports.\n[16] R. Pang et al. Characteristics of Internet Background\nRadiation. IMC'04, Taormina, Italy, October 2004.\n[17] S. Keshav. Why Cell Phones Will Dominate the\nFuture Internet. Computer Communication Review,\n35(2), April 2005.\n[18] W. Enck, P. Traynor, P. McDaniel, T. La Porta.\nExploiting Open Functionality in SMS Capable\nCellular Networks. 12th ACM Conf. on Computer and\nComm. Security (CCS'05), November 2005.\nACM SIGCOMM Computer Communication Review\n56\nVolume 36, Number 2, April 2006", "keywords": "Unwanted traffic;Cellular networks;3G"} {"name": "206", "title": "Use of Contextualized Attention Metadata for Ranking and Recommending Learning Objects", "abstract": "The tools used to search and find Learning Objects in different systems do not provide a meaningful and scalable way to rank or recommend learning material. This work propose and detail the use of Contextual Attention Metadata, gathered from the different tools used in the lifecycle of the Learning Object, to create ranking and recommending metrics to improve the user experience. Four types of metrics are detailed: Link Analysis Ranking, Similarity Recommendation, Personalized Ranking and Contextual Recommendation. 
While designed for Learning Objects, it is shown that these metrics could also be applied to rank and recommend other types of reusable components like software libraries.", "fulltext": "INTRODUCTION\nOne of the main reasons to capture and analyze the information\nabout the interaction between a user and a tool is to improve the\nuser experience. For example, a online library system could\nrecord the subject of the books that a client has bought before in\norder to recommend him new books about a similar subject the\nnext time he/she logs in, saving him/her the hassle to search for\nthem [1]. A news web site could record the topic of the news\narticles that a user normally read in order to filter out news that do\nnot interest such user [2]. A collaborative browser could use the\ninformation recollected from the browsing patterns of a given\ncommunity to improve the rank of different pages on the searches\nof an individual user, member of that community [3]. The\ngeneric name of Attention Metadata[4] has been applied to\ndescribe the information about these interactions.\nWhen the information stored does not only contain the reference\nto the user and the action that it performs, but also register when\nthe action took place, through which tool the action was\nperformed, what others thing was doing the user at the same time,\nwhat is the profile of the user performing the action, to what\ncommunity he/she belongs, etc, it leads to an improved and more\nuseful form of record, called Contextualized Attention Metadata\n[5] (CAM). AttentionXML [6] and its extensions [5] are an effort\nto standardize the way in which CAM is stored. This\nstandardization will lead to the opportunity to share attention\nrecords between different applications. For example, a second\ngeneration of an Attention-Sharing online library could know\nwhich news topics the user is interested in and it could\nrecommend him/her books related to those topics.\nThe authors believe that one group of applications that could\ngreatly benefit from CAM information is the search and find of\nLearning Objects. These applications have suffered from an\nunder-par performance compared to similar applications in other\nfields [7] [8]. The main reason for this is the lack of a meaningful\nand scalable way to rank or recommend the objects to the users.\nCurrently, two main methods are used to rank (not even\nrecommend) Learning Objects: Manual Rating or Metadata\nContent Rating. In the first approach, Manual Rating, each\nLearning Objects should be rated by a group of experts and/or the\nuser community. For each search, the returned objects are ranked\nbased on their average rate. While this is bound to provide\nmeaningful ordering, it does not scale. For example MERLOT\nuse this approach, but only 10% of the total content of the\ndatabase has ever be rated [9]. The other approach, use only the\ninformation contained in the metadata record to perform ranking\nbased on the similarity with the query terms. The most common\nmethod used for this is the TFIDF metric[10] that measure in a\nVector Space the distance between the query vector and the\nvector composed from the text contained in the metadata record.\nGiven than TFIDF was designed to work over full text documents\nand that metadata records contain very few textual descriptions\n[11], normally the ordering is not meaningful for the user. SILO\n(Search and Indexing Learning Objects) tools from the\nARIADNE [12] repository use this approach. 
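For concreteness, the following is a toy sketch of the TF-IDF vector-space matching described above (an illustrative re-implementation, not the actual SILO code); with only a few words per metadata record, the resulting scores give little basis for a meaningful ordering.

import math
from collections import Counter

def tfidf(texts):
    """Build a TF-IDF weighted term vector for each short text."""
    docs = [Counter(t.lower().split()) for t in texts]
    df = Counter(term for d in docs for term in d)      # document frequency
    n = len(docs)
    return [{t: f * math.log(n / df[t]) for t, f in d.items()} for d in docs]

def cosine(a, b):
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    return dot / (norm(a) * norm(b)) if norm(a) and norm(b) else 0.0

records = ["introduction to java programming", "java exercises for beginners", "history of ancient rome"]
query = "java programming tutorial"
vectors = tfidf(records + [query])
for score, rec in sorted(zip((cosine(vectors[-1], v) for v in vectors[:-1]), records), reverse=True):
    print(round(score, 3), rec)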
CAM could be\nused to generate a third approach, one in which the human\nattention (meaningful) is processed to construct an automated\n(scalable) rating and recommending procedure.\n\nThe following sections of this work describe in detail what\ninformation should be stored in the CAM record of Learning\nObject Applications (Section 2) and the mechanisms by which\nsuch information could be used to generate rating and\nrecommending metrics (Section 3). It is also discussed how these\nmechanisms and metrics could be applied to related contexts\n(Section 4) and which research questions need to be addressed in\nfurther work (Section 5). The work finalize with an overview of\nrelated research (Section 6)\n\nCAM FOR LEARNING OBJECTS APPLICATIONS\nUsers interact with a Learning Object through the object's whole\nlifecycle. CAM recorders capture and timestamp all those\ninteractions in order to provide the information needed to\ncalculate useful metrics to be used in a next generation breed of\nlearning object management tools. According to the\nAttentionXML extension proposed by Najjar et al at [5], these\ninteractions are stored inside an Action record. This work will\nbriefly list the different Actions that should be recorded through\nthe Learning Object lifecycle. The lifecycle phases are taken\nfrom the enumeration done by Collins and Strijker in [13]. Also,\nit is suggested which applications should generate the attention\nrecords.\nCreation: In this phase the author creates or assembles the\nlearning object in its digital form using some sort of authoring\ntool. The Creating Action should be captured and it must include\nthe identity of the created object, its author(s), the authoring tool\nused and the list of component-objects [14] reused through the\ncreation process. This record should be created by the authoring\ntool, for example Microsoft Power Point.\nLabeling: At this stage the author, an indexer or even an\nautomated system could add a metadata record that describes the\nLearning Object. The Labeling Action must include information\nthat identify the object, the labeler, the origin of the metadata\n(Automated, Semi-Automated, Manual), the metadata format\nused, the level of confidence of the information (how sure the\nautor is that metadata values are correct) and a unique identifier\nfor the metadata record. Normally this record should be also\ncreated by the authoring tool at the end of the creation of the\nobjects, but could also be created by metadata editors as [15] or\nautomated metadata generators as [16].\nOffering: At this stage the author or indexer inserts the object in\na repository or other system that allow the object to be shared\nwith others. The Inserting Action must include the following\ninformation: Object Unique Identifier, Inserter, Tool Used and\nLearning Object Unique Identifier inside the sharing tool. This\nrecord should be created by the sharing tool, being it a Learning\nObject Repository or a Peer to Peer sharing application.\nSelecting: In this stage the user search, find and select Learning\nObjects in the Sharing System. Several Actions should be\ncaptured during this phase. A Searching Action when a query is\nperformed to find relevant objects. It must include information\nthat describe the query performed and the objects returned. A\nRecommending Action when the system suggests relevant objects\nwithout the user performing a query. 
It must contain information\na list of the object(s) recommended, the user action that trigger\nthe recommendation and the tool used to perform the\nrecommendation. A Browsing Action when the user reviews the\nmetadata or description of an object. It must store information\nthat identifies the metadata record browsed and the time expend\nin the review. Finally, A Selecting Action when the user chooses\nan object by downloading it or accepting the recommendation. It\nmust contain information that identifies the selected object. All\nthis actions should also contain information about the user that\nperforms the action. These records should be generated by the\nsharing or recommending tool.\nUsing: This stage comprehends all the actions that the final user\n(instructor or learner) performs with the learning object during its\nnormal utilization in a learning environment. There are several\nactions to be registered. A Publicating Action when the instructor\ninserts the object into a Course belonging to some kind of\nLearning Management System. It must contain information that\nidentifies the published object and the context (course, lesson)\nwhere it was published. A Sequencing Action when one or more\nobjects are included in an instructional design or sequenced\npackage. It must contain information about the identification (in\nan ordered form) of the integrated objects. A Viewing Action, the\nobject is read or viewed by learners. It must contain information\nabout the time spent reviewing the material. An Annotating\nAction when the instructor or the learner add a comment or rate\nthe learning object. It must include information about the\ncomment or the rate given and the identifier of the object. All\nthese action should also store information about the user that\nperforms them. Different tools should be in charge of the\ngeneration of the attention records, a LMS for the Publishing\nAction, a Learning Activity Management System [17] or SCORM\n[18] Packager for the Sequencing Action, a Web browser or\ndocument reader for the Viewing Action and a Rating or Review\nsystem for the Annotating Action.\nLifecycle\nActions\nMain Information\nSource\nCreation\nCreating\nauthor, components\nAuthoring tool,\nComponents\nLabeling\nLabeling\nmetadata format,\norigin, confidence\nAuthoring tool\nor Metadata\ngenerator\nOffering\nInserting\ninserter LOR\nor\nSharing app.\nSearching\nquery, results\nLOR's search\ntool\nRecommending\nobjects recommended\nRecommender\nBrowsing\nTime LOR\nor\nRecommender\nSelecting\nSelecting\nobject identifier\nLOR or\nRecommender\nPublicating\nLMS context\nLMS\nSequencing\nlist of sequenced\nobjects\nID tool or\nPackager\nViewing\nTime, tool used\nBrowser or\nReading app.\nUsing\nAnnotating\nrate or review\nLMS\nRetaining\nRetaining\ndecision to keep or\ndelete\nLMS\nTable 1. Proposed CAM information to be stored for\nLearning Object Applications\n10\nRetaining: In this phase, the instructor check for the validity of\nthe learning object and decides if it is still useful or if it should be\nreplaced / updated. The Retaining Action should contain\ninformation that identifies the object and the decision taken (keep,\nupdate, delete). This attention record will normally be generated\nby the LMS where the object has been published.\nA summary of the Actions that CAM should record is presented\nin Table 1. Some of these CAM Actions (Creating, Inserting,\nSelecting, Viewing) are already produced and stored in different\ntools [5]. 
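As an illustration of the kind of record summarized in Table 1, the sketch below gives one possible in-memory encoding of a Selecting Action (the field names follow the proposal above; the class itself and the example values are hypothetical and are not the AttentionXML schema).

from dataclasses import dataclass
from datetime import datetime

@dataclass
class SelectingAction:
    """One CAM record for the Selecting phase of the Learning Object lifecycle."""
    user_id: str          # user who selected (downloaded) the object
    object_id: str        # identifier of the selected Learning Object
    tool: str             # sharing tool that produced the record (LOR, recommender, ...)
    timestamp: datetime   # when the selection took place

record = SelectingAction(user_id="u042", object_id="lo:1337",
                         tool="example-LOR", timestamp=datetime(2006, 3, 1, 10, 30))
print(record)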
The others are easy to implement in existing tools\ntaking in account that most of them (LMS, Metadata Generators,\netc) already produce a log with the user's interactions. In the next\nsection, metrics to exploit this Action records to improve tools to\nsearch and find Learning Objects are proposed.\n\nRANKING AND RECOMMENDING METRICS USING CAM\nSeveral ranking and recommending metrics will be proposed.\nThese metrics will use only two sources of information to be\ncalculated: the first one is the Learning Object Metadata (LOM)\n[19] record that describe each Learning Object; the second one is\nthe CAM Actions described in the previous section.\n3.1 Link Analysis Based Ranking\nOne of the most famous and successful ranking algorithms at the\npresent is PageRank [20]. PageRank use the information\ncontained in the network of links between web pages to calculate\nthe relative \"importance\" of a page. It could be summarized as: a\npage is important if it is linked by a high number of pages. Also,\nthe importance increases if the pages linking to it have also a high\nimportance rank. Unfortunately, this algorithm could not be\napplied directly to Learning Objects. While LOM records have a\nlinking field, it is rarely populated [11]. Also, LOM linking\nreflect just a semantic relationship; it does not imply a \"vote\" for\nthat object as it is assumed for Web pages.\nAs an alternative to the explicit linking structure that the web\nposses, CAM allow us to create an implicit linking between\nLearning Objects and other entities related to them: Authors,\nUsers, Courses, Learners, etc. For example: Creating Actions can\nbe converted into a link between an author and an object,\nSelecting Actions can be converted into a link between a user and\nan object, Publicating Actions can be converted into a link\nbetween a course and an object and also between a user and the\nsame object. Viewing Actions can be converter into a link\nbetween a learner and an object. As result of this conversion of\nCAM to links between different entities, a K-partite graph is\ncreated (a graph with different partitions, where there are not links\nbetween nodes of the same partition). In this graph each type of\nentity (Learning Object, User, Course, and Learner) is considered\na partition Figure 1 present diagram of an example of such a\ngraph.\nOnce CAM information is represented as a graph, it is easy to use\nbasic graph algorithms to calculate ranking metrics. Following\nthere are some metrics that could be developed this way:\n\nPopularity Rank (PR): Using the information contained in\nthe Selecting Action (converted already in a 2-partite graph),\nit is easy to obtain the number of times that an object has\nbeen downloaded. To calculate just count the number of\nincident links that each Learning Object node has from nodes\nin the User Partition. This metric is a just a basic way to put\nmost downloaded objects first in the result list.\n)\n(\n)\n(\nobject\ninDegree\nobject\nPR\n=\n\n\n\nFigure 1. K-Partite Graph representation of CAM\n\nAuthor-Corrected Popularity Rank (ACP): Combining\nthe Creating and Selecting Actions, it could be calculated\nhow popular an object is based on the number of downloads\nand the popularity of the Author. The first step is to create a\n3-partite graph with Users, Objects and Author partitions.\nThen the PopularityRank (PC) is calculated for all the\nobjects. 
Next, the Author Popularity (AP) is calculated by adding this rank over all the Learning Object nodes linked to the author node. Finally, the AP is multiplied by a weighting factor and added to the object's own (also weighted) popularity rank. This metric enables new objects (that do not yet have any downloads) from a frequently downloaded author to appear higher in the list.

AP(author) = Σ_i PR(object_i), for every object_i linked to the author

ACP(object) = α · PR(object) + β · AP(author)

Weighted Popularity (WP): Selecting, Publicating and Retaining Actions can be combined to generate a 2-partite graph between Users and Learning Objects. The links of this graph are weighted: a link made from the Retaining information (inDegree_R) weighs more than a link made from the Publication information (inDegree_P), and, in the same way, Publication links weigh more than Selection links (inDegree_S). The rationale behind this metric is that different actions express different levels of "preference" for an object. If the instructor has used the object and is happy enough with it to keep it for the next semester, that is a stronger vote of support than using it for the first time or merely downloading it. That difference of importance is represented in the weight given to each kind of link.

WP(object) = α · inDegree_S(object) + β · inDegree_P(object) + γ · inDegree_R(object), with γ > β > α

Rate of Reuse Rank (RRR): Using the Selecting Action (or also the Publicating and Retaining Actions, as in the previous metric), the number of times that an object has been downloaded during a given period of time P (last week, month, year) can be calculated. The 2-partite graph (User and Object partitions) is constructed taking into account only the Actions that occurred in the given period of time. For example, if the last week is selected as P, this rank measures how often the object has been downloaded (inserted or retained) in the last 7 days. This value can be normalized by the age of the object, obtained from the related Creation Action. This metric helps to rank higher objects that have been reused frequently and are relatively new.

RRR(object) = inDegree(object) / age(object); counting only links inside period P

Manual Rank (MR): Using the information stored in the Annotation Action, the number of times that an object has been positively (or negatively) rated or reviewed can be used to calculate a metric. A 2-partite graph (User and Object partitions) is created. The procedure weights a link as 1 if it corresponds to a positive rate or review and as -1 if it is negative. The actual value of the rate is only used to decide whether it is a positive or a negative "vote", because different users and systems grade on different scales. Reviews can only be considered if their positiveness or negativeness is included in the Annotation Action or can be automatically inferred from the text.

MR(object) = inDegree_Positive(object) - inDegree_Negative(object)

These metrics can be calculated off-line because they are not user or query specific. They capture an average importance or relevance of the learning objects based on the aggregation of attention information.
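A minimal sketch of how two of these measures could be computed directly from lists of CAM action records (plain dictionaries here; the record layout and the weights α < β < γ are illustrative assumptions).

from collections import Counter

def popularity_rank(select_actions):
    """PR: in-degree of each object in the User-Object download graph."""
    return Counter(a["object"] for a in select_actions)

def weighted_popularity(select_actions, publish_actions, retain_actions,
                        alpha=1.0, beta=2.0, gamma=3.0):
    """WP: selections, publications and retentions weighted by increasing importance."""
    wp = Counter()
    for weight, actions in ((alpha, select_actions), (beta, publish_actions), (gamma, retain_actions)):
        for a in actions:
            wp[a["object"]] += weight
    return wp

selects = [{"user": "u1", "object": "o1"}, {"user": "u2", "object": "o1"}, {"user": "u2", "object": "o2"}]
publishes = [{"user": "u2", "object": "o1", "course": "c1"}]
retains = []
print(popularity_rank(selects))                           # Counter({'o1': 2, 'o2': 1})
print(weighted_popularity(selects, publishes, retains))   # o1 -> 1+1+2 = 4.0, o2 -> 1.0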
These metrics, and others that can be developed afterwards, could be integrated into a final ranking metric. This Compound Popularity metric (CP) can be calculated as the weighted sum of the values of the individual metrics. For example, Google integrates more than 100 different simple metrics in order to produce its results [21].

CP = α · PR + β · ACP + γ · WP + δ · RRR + ε · MR

The weighting coefficients (α, β, etc.) should be estimated (not a trivial procedure) to provide an optimal result ordering. Methods to make these estimates are described in [22] and [23]. Also, manual rates should be used carefully because the Annotation information is optional and may not exist for all the objects involved in the calculation.

3.2 Similarity Metrics for Recommendation
One property of a 2-partite graph is that it can be folded over one of its partitions, generating a normal graph with a single type of entity and links between its nodes. For example, if we have a 2-partite graph of Users that have downloaded Learning Objects, we can fold over the Learning Object partition and we end up with a graph where the Users are linked among themselves. Each link means that the two users have downloaded the same object at least once. This new graph can be used to calculate similarity between the users based on their download patterns. Figure 2 shows a representation of the folding result. The first part of the figure represents a 2-partite graph with the User and Object partitions. The graph shows, for example, that User 5 has downloaded Object 2 and Object 3 and that User 1 has only downloaded Object 1. The second part of the figure illustrates the folded version of the graph. In this new graph, two users have a link between them if they linked to the same object in the unfolded graph. The more objects the users have in common, the thicker the line. For example, User 1 and User 4 are linked because they both have downloaded Object 1. User 2 and User 5 have a stronger link because they both have downloaded Object 2 and Object 3. This technique is similar to the one applied in scientometrics to obtain relations between different authors, based on the papers that they have co-authored [24].

Figure 2. Unfolded and Folded 2-Partite Graph (left panel: 2-partite graph of Users and Objects; right panel: folded graph of Users)

We present several similarity metrics that can be calculated using the information contained in the CAM Actions detailed in the previous section.

Object Similarity based on Number of Downloads: Create a 2-partite graph from the information of the Select Actions (when a User downloads a Learning Object), and fold over the User Partition. A link between two Objects in the final graph means that those objects have been downloaded by the same user. The strength of the similarity is the number of users that have downloaded both objects.

Object Similarity based on Re-Use: Create a 2-partite graph from the information of the Publish Actions (when a Learning Object is inserted into a Course), and fold over the Course Partition. A link between two Objects in the final graph means that the two objects have been inserted in the same course. The strength of the similarity is the number of courses that include both objects.

User Similarity based on Downloads: Create a 2-partite graph from the information of the Select Actions, and fold over the Object Partition.
A link between two Users means\nthat they have downloaded the same object. The strength of\nthe similarity is the number objects that the users have in\ncommon.\n\nAuthor similarity based on Re-Use of Components: The\nCreation Action information could be use to identify re-use\nof learning object components. For example several authors\ncould use the same picture or diagram inside they\npresentations. As the Creation Action store information\nabout which existing components have been reused (see\nSection 2), a 2-partite graph between Authors and\nComponents can be created and then folded over the\n12\nComponents partition. The new graph will represent\nrelationship between different authors. More components\nthose authors have used in common, the stronger their\nsimilarity.\nThe similarity metric obtained from the graph could be then\napplied in recommendation tools. For example: If a user finds an\nobject useful, a link to similar objects could be provided (similar\nto what Amazon does with books [1]). Also, the similarity\nbetween users can be exploited to recommend Learning Objects\nto a user, based on what other users that are in the same\ncommunity have recently download (similar to collaborative\nbrowsing applications [3]). To automatically extract the\ncommunities from the graph, an algorithm like EdgeBetweeness\n[25] can be applied. The same procedure could be applied to the\nAuthor Similarity graph. The communities of authorship can be\nautomatically extracted from the graph. The author then can be\nrecommended with components that have been created by other\nauthors in the same authorship community.\nBeside recommendation systems for Learning Objects, these\nsimilarity metrics could be considered as distance metrics. The\ndistance metric can be used inside clustering algorithms to\nautomatically find groups of similar objects. These clusters could\nbe used to improve the presentation of results of a search, much as\nVivisimo [26] does for Web Pages.\n3.3 Personalized Ranking\nTo be able to personalize the search result order for a given user,\nthe application should have a representation of such user in a\nprofile. While this profile could be created explicitly by the user,\nCAM information could help the application to learn it form the\nuser interaction with the tool. For example, the information about\nstored in the Select, Publish and Retain from a user could help us\nto determine in which objects is he/she interested, and rank higher\nobjects that are similar to those.\nThis work proposes the creation of a fuzzy profile that could\neasily account for the evolving and not fixed behavior of an\ninstructor downloading learning objects. Instead of having a crisp\npreference for one type of object, this profile will provide\ndifferent grades of likeness for several characteristics of the\nlearning object. This profile is constructed with several Fields.\nThe Fields could be a subset of the fields considered in the LOM\nstandard, especially the ones that use a vocabulary or represent a\nclassification. Each field will contain 2 or more fuzzy sets that\nrepresent the values that the field could have (from the vocabulary\nor the classification values). A user could \"prefer\" in different\ndegrees 1 or more of the values of a Field. The preference of the\nuser for each one of the values is calculated based on the number\nof objects that the user have download before that contained that\nvalue in the corresponding LOM field. 
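A minimal sketch of how such a profile could be built from a user's download history and used to score candidate objects (the choice of LOM fields and the field weights are illustrative; the numbers reproduce the worked example given below).

from collections import Counter

def build_profile(downloads, fields=("topic", "language")):
    """Fuzzy profile: for each field, the fraction of past downloads carrying each value."""
    profile = {}
    for f in fields:
        counts = Counter(obj[f] for obj in downloads)
        total = sum(counts.values())
        profile[f] = {value: c / total for value, c in counts.items()}
    return profile

def personal_rank(obj, profile, weights):
    """Weighted sum of the user's preference for the values found in the object's metadata."""
    return sum(weights[f] * profile[f].get(obj[f], 0.0) for f in weights)

history = [{"topic": "ComputerScience"} for _ in range(16)] + [{"topic": "Physics"} for _ in range(4)]
languages = ["English"] * 12 + ["French"] * 4 + ["Spanish"] * 4
for obj, lang in zip(history, languages):
    obj["language"] = lang

profile = build_profile(history)
weights = {"topic": 0.9, "language": 0.6}
o1 = {"topic": "ComputerScience", "language": "Spanish"}
o2 = {"topic": "Physics", "language": "English"}
print(round(personal_rank(o1, profile, weights), 2),   # 0.84
      round(personal_rank(o2, profile, weights), 2))   # 0.54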
This fuzzy profile has been derived from research done to produce automatic TV recordings for PVRs [27] like TiVo. The fuzzy profile can easily be operationalized to provide a personalized rank for Learning Objects. First, each field is given a weighting value that expresses how important that field is. That value can be assigned by an expert or calculated automatically from the entropy of the distribution of the field values for that user. For example, if a user downloads objects on a wide variety of topics, the weight of topic as a ranking criterion is low. Conversely, if the user only downloads objects in one language, the weight of the language field should be high. Second, each LOM record from the result list is converted to a similar representation, using the same fields and a preference value of 1.0 for the value found in the metadata. Finally, the object representation is combined with the profile in order to obtain how well the object fits the preferences of the user. This operation is described in the following equation:

PersonalRank(object, user) = Σ_i w_i · Σ_j Field_i(user).value_j × Field_i(object).value_j

where w_i is the weighting value of field i. For example, let us consider a user who has downloaded 20 objects, 16 with topic Computer Science and 4 with topic Physics. Of those 20, 12 are in English, 4 are in French and 4 are in Spanish. A fuzzy profile that represents this user can be expressed as:

U1 = {(0.8/ComputerScience + 0.2/Physics), (0.6/English + 0.2/Spanish + 0.2/French)}

The field weighting terms are 0.9 for Topic and 0.6 for Language. Let us now consider two objects, also represented as fuzzy sets:

O1 = {(1.0/ComputerScience), (1.0/Spanish)}
O2 = {(1.0/Physics), (1.0/English)}

The calculated rank for both objects is:

O1 = 0.9*0.8 + 0.6*0.2 = 0.84
O2 = 0.9*0.2 + 0.6*0.6 = 0.54

O1 will be ranked higher than O2 as it is more similar to the user profile. The personalized calculation can be combined with the popularity ranking described before to create a better ranking algorithm, in the same way that Google Personalized Search mixes the standard popularity measure with information from the user profile to order the results.

3.4 Contextual Recommending
If CAM is considered not only as a source of historic data, but also as a continuous stream of contextualized attention information, we can use very recent CAM (in the order of seconds or minutes) to generate recommendations based on what the user is focusing his/her attention on at the moment. For example, the recommender system could use the information stored in the Publicating Action: if the user has inserted an object inside a Course in a Learning Management System (LMS), the LMS will generate a CAM record with contextual information about which object was inserted and in which lesson of the course. The recommending system can use that information to present the user with objects similar to the one inserted, or with others that have been used in similar courses, based on the topic of the course or on similarity metrics such as the ones explained in Section 3.2.
The recommending system could also present objects that suit the application that the user is using at a given moment, based on the information about the object (its LOM record). For example, if the user is working with the Microsoft PowerPoint authoring tool, presentations, slides, small texts, images and diagrams will be recommended.
If he/she is working with a SCORM Packager,\ncomplete learning objects will be presented instead.\nContextual recommending techniques have been tried before in\nseveral fields [28] [29]. Blinkx [30] is an example of this kind of\napplications. It recommends web pages, videos and news based\n13\non the present content of the screen of the user. A similar\napplication could be developed in a LMS for example, where the\nsystem could recommend to the instructor materials to add to each\nlesson, or could recommend the learner with similar or\ncomplementary materials to the one that the instructor has added\nto the course.\nAPPLICATION IN OTHER CONTEXTS\nWhile the CAM based metrics proposed in this work were\ndesigned for Learning Objects, they could be easily extended or\nadapted to work for other kind of reusable components where\nCAM could be collected. For example, given the exponential\ngrow of open source software libraries that could be reused inside\nsoftware projects, programmers are sometimes overwhelmed with\nthe amount of available choices. It makes sense to develop some\nkind of ranking or recommending system that could help the\ndevelopers to select the right tools.\nTo construct the ranking application we can use the same methods\nproposed for learning objects. The k-partite graph used to\ncalculate the popularity metric could be constructed using the\nmetadata information about the library (who is the author of a\nsoftware library) and contextual attention information about how\nand when the programmers interact with the library (which\nprogrammers have download it, in which software project they\nhave been used). Most of this information could be obtained from\nopen source project repositories like SourceFourge [31]. The\nrationale behind the ranking would be: A library that have been\ndownloaded more often / at a higher rate is more useful. A library\nproduced by authors with highly useful libraries could also be\nuseful. A library re-used in many projects is probable highly\nuseful. This metrics are parallel to the ones described for learning\nobjects.\nRecommending systems for software libraries could also be\nconstructed in a similar way to the ones proposed for learning\nobjects. For example, we can fold the Libraries-Programmers 2-partite\ngraph over the Libraries Partition, creating a graph that\nrelate Programmers between them based on the Libraries that they\nhave downloaded/used. Communities could be extracted form the\nresulting graph and could be used for recommending a\nprogrammer with new libraries that other members of his/her\ncommunity have used in their projects.\nThe precaution to have when applying this metrics to other\ndomains is the semantics of the relations that are created with the\ngraphs. For example, if two learning objects are used in the same\ncourse, those two learning objects must have something in\ncommon (same topic for example), while if two libraries are used\ninside the same project, that does not mean that the libraries are\nrelated (you could use a database access library and graphical\ninterface library inside the same project).\nOther contexts where CAM information could be exploited to\nrank and recommend elements with a similar strategy as the one\npresented in this work are music mixes (component songs or\nloops) and news aggregators.\nFURTHER WORK\nThis work is just an introduction to how CAM information could\nbe used to rank and recommend Learning Objects. 
Several topics\nshould be solved before a big scale application that use the\nmetrics presented could be built:\n\nCollection and Integration of the different CAM sources:\nWhile today exist several applications that generate CAM,\nthere is not an established multi-application CAM repository\nthat could be used to collect and integrate attention\ninformation.\n\nCombination of different ranking strategies: When\ndifferent ranking strategies are combined, some weighting\ncoefficient must be applied. The calculation of those\ncoefficients is not trivial and should be made using extensive\nuser feedback.\n\nCritical mass vs. Closed Community: To be useful, the\nmetrics should be calculated over a significant amount of\nCAM data. But if we integrate data from different\ncommunities to obtain a bigger amount of CAM (for\nexample attention from different LORs), there will probably\nnot exist common objects, users or courses that could be\nused to generate relations between the communities.\n\nRELATED WORK\nBroisin et al in [32] propose a framework to capture usage\ninformation about Learning Object from different Learning\nManagement Systems and Repositories in order to analyze the\nusage patterns of the users through a Management Application.\nThe approach of this paper goes a step further, using the attention\ninformation to calculate metrics that could be used to improve\nexisting tools. Broisin's work also uses a simplified format of\nattention (basically usage information) in a non-standard format,\nlimiting the possible use of the information by other systems,\nbecause existing applications should be reprogrammed to produce\nthat format. This work proposes the use of an extension of\nAttentionXML standard to be able to capture the CAM from a\nvariety of systems that already produce it.\nIn a related area, digital libraries, Nicholson in [33] propose the\nfusion of bibliometrics analysis with user-related data mining to\ngenerate a new field of study, bibliomining. His proposal could\nbe compared with the one presented in this work: Using the\ninformation about the book and the usage information generated\nby the interaction of the users with the digital library system to\nimprove the user experience. While Nicholson mentions several\nways in which the attention metadata could be used, he does not\ndetail any specific metric to improve digital library systems.\n\nCONCLUSIONS\nThe current immaturity of the tools to search and find Learning\nObjects could be overcome if CAM information is store through\nthe lifecycle of the Learning Object and used to compute metrics\nfor ranking and recommendation. These metrics should generate\na meaningful and automated way in which Learning Object could\nbe ranked. This work presented detailed methods to calculate\nvarious metrics and propose several uses for those metrics. The\nproposed calculations could also be applied to rank and\nrecommend other reusable components from which CAM could\nbe gathered, as it was shown for the case of open source software\nlibraries example.\nWhile the metrics are easy to calculate, and some initial data is\nalso present, more research is needed to be able to assemble a\nlarge scale system that could gather the necessary amount of\nCAM in order to render the calculations meaningful.\n14\n\nREFERENCES\n[1] Linden, G.; Smith, B. and York, J. Amazon.com\nRecommendations: Item-to-Item Collaborative Filtering.\nIEEE Internet Computing, 7, 1 (2003), 76-80.\n[2] Shepherd, M.; Watters, C. and Marath, A. 
Adaptive User\nModeling for Filtering Electronic News. In Proceedings of\nthe 35th Annual Hawaii International Conference on System\nSciences, 2002. HICSS. (2002), 1180- 1188.\n[3] James, S. Outfoxed Collaborative Browsing,\nhttp://www.getoutfoxed.com. Retrieved on May, 2006.\n[4] Najjar, J., Meire, M. and Duval, E. Attention Metadata\nManagement: Tracking the use of Learning Objects through\nAttention.XML. In Proceedings of World Conference on\nEducational Multimedia, Hypermedia and\nTelecommunications. (2005). 1157-1161.\n[5] Najjar, J., Wolpers, M. and Duval, E., Attention\nMetadata:Collection and Management\", WWW2006\nworkshop on Logging Traces of Web Activity, Edinburgh,\nScotland, (2006).\n[6] AttentionXML, AttentionXML specifications,\nhttp://developers.technorati.com/wiki/attentionxml.\nRetrieved on June, 2006\n[7] Duval, E. and Hodgins, W., A LOM research agenda. In\nProceedings of WWW2003: Twelfth International World\nWide Web Conference, (2003), 659-667.\n[8] Ochoa, X. Learning Object Repositories are Useful, but are\nthey Usable? In Proceedings of IADIS International\nConference Applied Computing. (2005), 138-144\n[9] Duval, E. LearnRank: the Real Quality Measure for\nLearning Materials. Policy and Innovation in Education -\nQuality Criteria, (2005)\n[10] Aizawa, A. An information-theoretic perspective of tfidf\nmeasures. Information Processing and Management, 39,\n(2003), 45-65.\n[11] ISO/IEC JTC1 SC36. International LOM Survey: Report.\nhttp://mdlet.jtc1sc36.org/doc/SC36_WG4_N0109.pdf\n(2004).\n[12] Ariadne Foundation. Ariadne Foundation.\nhttp://www.ariadne-eu.org (2005).\n[13] Collis, B. and Strijker, A. Technology and Human Issues in\nReusing Learning Objects. Journal of Interactive Media in\nEducation, 4, (2004).\n[14] Verbert, K. Jovanovic, J. Gasevic, D. and Duval, E.\nRepurposing Learning Object Components. OTM 2005\nWorkshop on Ontologies, Semantics and E-Learning, (2005).\n[15] IEEE-LOM Editor, http://www-i5.informatik.rwth-aachen\n.de/i5new/staff/chatti/LOMEditor/index.html.\nRetrieved June 2006.\n[16] Cardinels, K., Meire, M., and Duval, E. Automating\nmetadata generation: the simple indexing interface. In\nProceedings of the 14th WWW conference, (2005), 548-556\n[17] Dalziel, J. Implementing Learning Design: The Learning\nActivity Management System (LAMS), ASCILITE (2003)\n[18] ADL, SCORM Standard, http://www.adlnet.gov/index.cfm,\nRetrieved March, 2006\n[19] IEEE. IEEE Standard for Learning Object Metadata.\nhttp://ltsc.ieee.org/doc/wg12/ (2002).\n[20] Page, L., Brin, S., Motwani, R. and Winograd, T. The\nPageRank Citation Ranking: Bringing order to the Web.\nTechnical Report, Computer Science Department, Stanford\nUniversity (1998)\n[21] Google Technology, http://www.google.com/technology/.\nRetrieved, August 2006.\n[22] Radlinski, F. and Joachims, T. Query Chains: Learning to\nRank from Implicit Feedback, Proceedings of the ACM\nConference on Knowledge Discovery and Data Mining.\n(2005).\n[23] Fan, W., Gordon, M. and Pathak, P. A generic ranking\nfunction discovery framework by genetic programming for\ninformation retrieval, Information Processing and\nManagement. 40 (2004) 587602\n[24] Nascimento, M., Sander, J. and Pound, J. Analysis of\nSIGMOD's co-authorship graph. ACM SIGMOD Record,\n32, 3. (2003). 8-10\n[25] Girvan, M. and Newman, M. Community structure in social\nand biological networks. Proc. Natl. Acad. Sci. 11. (2002).\n[26] Vivisimo Clustering Engine. 
http://www.vivisimo.com.\nRetrieved August 2006.\n[27] Pigeau, A., Raschia, G., Gelgon, M., Mouaddib, N. and\nSaint-Paul, R. A fuzzy linguistic summarization technique\nfor TV recommender systems. The 12th IEEE International\nConference on Fuzzy Systems, 2003. FUZZ '03. 1 (2003)\n743-748.\n[28] Google AdSense, https://www.google.com/adsense/.\nRetrieved August 2006.\n[29] Fan, W., Gordon, M. and Pathak, P. Incorporating contextual\ninformation in recommender systems using a\nmultidimensional approach. Information Processing and\nManagement. 40, 4. (2004). 587-602.\n[30] Blinkx Contextual Search. http://www.blinkx.com.\nRetrieved August 2006.\n[31] Sourceforge, Open Software Repository.\nhttp://www.sourceforge.net. Retrieved August 2006.\n[32] Broisin, J., Vidal, P. and Sibilla, M. A Management\nFramework Based On A Model Driven Approach For\nTracking User Activities In A Web-Based Learning\nEnvironment. EDMEDIA, (2006) 896-903\n[33] Nicholson, S. The basis for bibliomining: Frameworks for\nbringing together sage-based data mining and bibliometrics\nthrough data arehousing in digital library services.\nInformation Processing and Management. 42, 3 (2006). 785-804\n.\n\n15", "keywords": "Learning Objects;Ranking;Attention Metadata;Recommending"} {"name": "207", "title": "Use of Relative Code Churn Measures to Predict System Defect Density", "abstract": "Software systems evolve over time due to changes in requirements, optimization of code, fixes for security and reliability bugs etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change. We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn. Using statistical regression models, we show that while absolute measures of code churn are poor predictors of defect density, our set of relative measures of code churn is highly predictive of defect density. A case study performed on Windows Server 2003 indicates the validity of the relative code churn measures as early indicators of system defect density. Furthermore, our code churn metric suite is able to discriminate between fault and not fault-prone binaries with an accuracy of 89.0 percent.", "fulltext": "INTRODUCTION\nA \"reliability chasm\" often separates the quality of a software\nproduct observed in its pre-release testing in a software\ndevelopment shop and its post-release use in the field. That is,\ntrue field reliability, as measured by the number of failures found\nby customers over a period of time, cannot be measured before a\nproduct has been completed and delivered to a customer. Because\ntrue reliability information is available late in the process,\ncorrective actions tend to be expensive [3]. Clearly, software\norganizations can benefit in many ways from an early warning\nsystem concerning potential post-release defects in their product\nto guide corrective actions to the quality of the software.\nWe use code churn to predict the defect density in software\nsystems. Code churn is a measure of the amount of code change\ntaking place within a software unit over time. It is easily extracted\nfrom a system's change history, as recorded automatically by a\nversion control system. 
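As the next sentences note, these change histories are typically derived with a file-comparison utility; the following is a minimal sketch of that idea using Python's standard difflib (a simplification: it only counts added and deleted lines, whereas real version control systems also attribute changed lines and timestamps).

import difflib

def added_deleted(old_lines, new_lines):
    """Count lines added and deleted between two versions of a file."""
    added = deleted = 0
    for line in difflib.unified_diff(old_lines, new_lines, lineterm=""):
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            deleted += 1
    return added, deleted

old = ["int f(int x) {", "  return x;", "}"]
new = ["int f(int x) {", "  return x + 1;", "}", "int g() { return 0; }"]
print(added_deleted(old, new))   # (2, 1): the edited line counts once as added and once as deleted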
Most version control systems use a file\ncomparison utility (such as diff) to automatically estimate how\nmany lines were added, deleted and changed by a programmer to\ncreate a new version of a file from an old version. These\ndifferences are the basis of churn measures.\nWe create and validate a set of relative code churn measures as\nearly indicators of system defect density. Relative churn measures\nare normalized values of the various measures obtained during the\nchurn process. Some of the normalization parameters are total\nlines of code, file churn, file count etc. Munson et al. [17] use a\nsimilar relative approach towards establishing a baseline while\nstudying code churn. Studies have shown that absolute measures\nlike LOC are poor predictors of pre- and post release faults [7] in\nindustrial software systems. In general, process measures based on\nchange history have been found be better indicators of fault rates\nthan product metrics of code [9]. In an evolving system it is highly\nbeneficial to use a relative approach to quantify the change in a\nsystem. As we show, these relative measures can be devised to\ncross check each other so that the metrics do not provide\nconflicting information.\nOur basic hypothesis is that code that changes many times pre-release\nwill likely have more post-release defects than code that\nchanges less over the same period of time. More precisely, we\naddress the hypotheses shown in Table 1.\nOur experiments on Windows Server 2003 (W2k3) support these\nfour hypotheses with high statistical significance. We analyzed the\ncode churn between the release of W2k3 and the release of the\nW2k3 Service Pack 1 (W2k3-SP1) to predict the defect density in\nW2k3-SP1. The relative code churn measures are statistically\nbetter predictors of defect density than the absolute measures.\nThey also they are indicative of increase in system defect density\nand can accurately predict the system defect density with a high\ndegree of sensitivity. Our metric suite is able to discriminate\nbetween fault and not fault-prone binaries in W2k3-SP1 with an\naccuracy of 89.0 percent.\nTable 1. Research Hypotheses\nHypothesis\nH\n1\n\nIncrease in relative code churn measures is\naccompanied by an increase in system defect\ndensity\nH\n2\n\nUsing relative values of code churn predictors is\nbetter than using direct (absolute) values to explain\nthe system defect density\nH\n3\n\nRelative code churn measures can be used as\nefficient predictors of system defect density.\nH\n4\n\nRelative code churn measures can be used to\ndiscriminate between fault and not fault-prone\nbinaries.\n\nThe organization of this paper is as follows. Section 2 describes\nthe related work. Section 3 explains data collection and section 4\nthe relative code churn measures. Section 5 presents the case\nstudy and the observed results. Section 6 discusses our\nconclusions and future work.\nRELATED WORK\nPrior analyses on predicting defect density used code churn\nmeasures as part of a larger set of metrics. Code churn measures\nhave not been studied in isolation as predictors of software defect\ndensity. The background work presented below is from studies\nthat involved industrial software systems. The source code base of\nW2k3 is two orders of magnitude larger than the largest example\nconsidered below.\nMunson et al. [17] observe that as a system is developed, the\nrelative complexity of each program module that has been altered\n(or churned) also will change. 
The rate of change in relative\ncomplexity serves as a good index of the rate of fault injection.\nThey studied a 300 KLOC (thousand lines of code) embedded real\ntime system with 3700 modules programmed in C. Code churn\nmetrics were found to be among the most highly correlated with\nproblem reports [17].\nKhoshgoftaar et al.[13] define debug churn as the number of lines\nof code added or changed for bug fixes. Their objective was to\nidentify modules where debug code churn exceeds a threshold, in\norder to classify the modules as fault-prone. They studied two\nconsecutive releases of a large legacy system for\ntelecommunications. The system contained over 38,000\nprocedures in 171 modules. Discriminant analysis identified fault-prone\nmodules based on 16 static software product metrics. Their\nmodel when used on the second release showed a type I and II\nmisclassification rate of 21.7%, 19.1% respectively and an overall\nmisclassification rate of 21.0%.\nOhlsson et al. [19] identify fault-prone modules by analyzing\nlegacy software through successive releases. They use a total of\n28 measures, twelve of which are based on size and change\nmeasures. These measures were used to identify 25 percent of the\nmost fault-prone components successfully.\nKarunanithi [12] uses a neural network approach for software\nreliability growth modeling in the presence of continuous code\nchurn, which he shows improves over the traditional time-domain\nbased models. Similarly Khoshgoftaar et al. [15] use code churn\nas a measure of software quality in a program of 225,000 lines of\nassembly language. Using eight complexity measures, including\ncode churn, they found neural networks and multiple regression to\nbe an efficient predictor of software quality, as measured by gross\nchange in the code. They suggest that using neural networks may\nnot work in all environments and the results obtained are\nenvironment specific. Neural networks can be used for improving\nsoftware maintenance [15].\nOstrand et al. [20] use information of file status such as new,\nchanged, unchanged files along with other explanatory variables\nsuch as lines of code, age, prior faults etc. as predictors in a\nnegative binomial regression equation to predict the number of\nfaults in a multiple release software system. The predictions made\nusing binomial regression model were of a high accuracy for\nfaults found in both early and later stages of development. [20]\nClosely related to our study is the work performed by Graves et al.\n[9] on predicting fault incidences using software change history.\nSeveral statistical models were built based on a weighted time\ndamp model using the sum of contributions from all changes to a\nmodule in its history. The most successful model computes the\nfault potential by summing contributions from changes to the\nmodule, where large and/or recent changes contribute the most to\nfault potential [9]. This is similar to our approach of using relative\nmeasures to predict fault potential.\nDrawing general conclusions from empirical studies in software\nengineering is difficult because any process depends to a large\ndegree on a potentially large number of relevant context variables.\nFor this reason, we cannot assume a priori that the results of a\nstudy generalize beyond the specific environment in which it was\nconducted [2]. Researchers become more confident in a theory\nwhen similar findings emerge in different contexts [2]. 
Towards\nthis end we intend that our case study contributes towards\nstrengthening the existing empirical body of knowledge in this\nfield [7, 9, 13, 15, 17, 19, 20].\nDATA COLLECTION\nThe baseline used for measuring the code churn and other\nmeasures described below is Windows Server 2003 (W2k3). We\nmeasured churn between this baseline and Windows Server 2003\nService Pack 1 (W2k3-SP1). We sometimes refer to W2k3-SP1 as\nthe \"new version\" of the code. Service packs are a means by\nwhich product updates are distributed\n1\n. Service packs contain\nupdates for system reliability, program compatibility, security, etc.\nthat are conveniently bundled for easy downloading.\nThe size of the code base analyzed is 44.97 million LOC (44,970\nKLOC). This consisted of 2465 binaries which were compiled\nfrom 96,189 files. Some files contribute to more than one binary.\nAs defects for W2k3-SP1 are reported at the binary level, we\nrelate churn to defects at the level of binaries.\n\n\n1\n\nhttp://support.microsoft.com/\n\n285\nThe absolute measures and methods of data collection are\ndescribed below:\n\nTotal LOC is the number of lines of non-commented\nexecutable lines in the files comprising the new version\nof a binary. Internal Microsoft tools were used to\ncompute this measure.\n\nChurned LOC is the sum of the added and changed\nlines of code between a baseline version and a new\nversion of the files comprising a binary.\n\nDeleted LOC is the number of lines of code deleted\nbetween the baseline version and the new version of a\nbinary. The churned LOC and the deleted LOC are\ncomputed by the version control systems using a file\ncomparison utility like diff.\n\nFile count is the number of files compiled to create a\nbinary.\n\nWeeks of churn is the cumulative time that a file was\nopened for editing from the version control system.\n\nChurn count is the number of changes made to the files\ncomprising a binary between the two versions (W2k3\nand W2k3-SP1).\n\nFiles churned is the number of files within the binary\nthat churned.\n\nRELATIVE CODE CHURN MEASURES\nIn this section we describe our relative code churn measures. The\nchurn measures are denoted by the elements M1-M8. The\nelements and their relationship to defect density are explained\nbelow (these relationships are verified in section 5.1):\n\nM1: Churned LOC / Total LOC. We expect the larger\nthe proportion of churned (added + changed) code to the\nLOC of the new binary, the larger the magnitude of the\ndefect density for that binary will be.\n\nM2: Deleted LOC / Total LOC. We expect the larger\nthe proportion of deleted code to the LOC of the new\nbinary, the larger the magnitude of the defect density for\nthat binary will be.\n\nM3: Files churned / File count. We expect the greater\nthe proportion of files in a binary that get churned, the\ngreater the probability of these files introducing defects.\nFor e.g. suppose binaries A and B contain twenty files\neach. If binary A has five churned files and binary B has\ntwo churned files, we expect binary A to have a higher\ndefect density.\n\nM4: Churn count / Files churned. Suppose binaries A\nand B have twenty files each and also have five churned\nfiles each. If the five files in binary A are churned\ntwenty times and the five files in binary B are churned\nten times, then we expect binary A to have a higher\ndefect density. M4 acts as a cross check on M3.\n\nM5: Weeks of churn / File count. M5 is used to\naccount for the temporal extent of churn. 
A higher value\nof M5 indicates that it took a longer time to fix a smaller\nnumber of files. This may indicate that the binary\ncontains complex files that may be hard to modify\ncorrectly. Thus, we expect that an increase in M5 would\nbe accompanied by an increase in the defect density of\nthe related binary.\n\nM6: Lines worked on / Weeks of churn: The measure\n\"Lines worked on\" is the sum of the churned LOC and\nthe deleted LOC. M6 measures the extent of code churn\nover time in order to cross check on M5. Weeks of\nchurn does not necessarily indicate the amount of churn.\nM6 reflects our expectation that the more lines are\nworked on, the longer the weeks of churn should be. A\nhigh value of M6 cross checks on M5 and should\npredict a higher defect density.\n\nM7: Churned LOC / Deleted LOC. M7 is used in order\nto quantify new development. All churn is not due to\nbug fixes. In feature development the lines churned is\nmuch greater than the lines deleted, so a high value of\nM7 indicates new feature development. M7 acts as a\ncross check on M1 and M2, neither of which accurately\npredicts new feature development.\n\nM8: Lines worked on / Churn count: We expect that\nthe larger a change (lines worked on) relative to the\nnumber of changes (churn count), the greater the defect\ndensity will be. M8 acts as a cross check on M3 and\nM4, as well as M5 and M6. With respect to M3 and M4,\nM8 measures the amount of actual change that took\nplace. M8 cross checks to account for the fact that files\nare not getting churned repeatedly for small fixes. M8\nalso cross checks on M5 and M6 to account for the fact\nthat the higher the value of M8 (more lines per churn),\nthe higher is the time (M5) and lines worked on per\nweek (M6). ). If this is not so then a large amount of\nchurn might have been performed in a small amount of\ntime, which can cause an increased defect density.\nFigure 1 illustrates the cross check relationships of these relative\ncode churn measures. As discussed above M1, M2 and M7 cross\ncheck on each other and M8 cross checks on the set of M3, M4\nand M5, M6. All these measures triangulate on their respective\ndependent measures with the goal of providing the best possible\nestimate of defect density with a minimum inflation in the\nestimation.\n\nCASE STUDY\nWe now describe the case study performed at Microsoft. Section\n5.1 presents the correlation analysis between the relative code\nchurn measures and system defect density. Section 5.2 details the\nmodel building activities and Section 5.3 the predictive ability of\nthe models. Section 5.4 discusses the discriminative power of the\nrelative code churn measures and Section 5.5 the limitations of the\nstudy.\n\n\n286\n\nFigure 1. Relative Churn Measure Cross Check Relationships\n\nTable 2. Cross Correlations. All correlations are significant at the 0.01 (99%) level (2-tailed).\n\n\nM1\nM2\nM3\nM4\nM5\nM6\nM7\nM8\nDefects\n/KLOC\nM1\n\n\n1.000\n.834\n.795\n.413\n.707\n.651\n.466\n.588\n.883\nM2\n\n\n\n1.000\n.645\n.553\n.747\n.446\n.219\n.492\n.798\nM3\n\n\n\n\n1.000\n.186\n.749\n.434\n.445\n.269\n.868\nM4\n\n\n\n\n\n1.000\n.531\n.429\n.210\n.631\n.288\nM5\n\n\n\n\n\n\n1.000\n.263\n.201\n.390\n.729\nM6\n\n\n\n\n\n\n\n1.000\n.701\n.843\n.374\nM7\n\n\n\n\n\n\n\n\n1.000\n.507\n.288\nM8\n\n\n\n\n\n\n\n\n\n1.000\n.262\nDefects/\nKLOC\n\n\n\n\n\n\n\n\n\n\n1.000\nAs mentioned before, the system defect density for W2k3-SP1\nwas collected at the level of binaries. 
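To make the construction of Table 2 concrete, the following sketch shows how the eight relative measures could be derived from the absolute measures of Section 3 and how a Spearman rank correlation matrix of the same shape could be produced. It is illustrative only: the column names and the three example rows are hypothetical placeholders, not the Microsoft data, and the paper's own analysis was not necessarily computed this way.

import pandas as pd

# Hypothetical per-binary table of the absolute measures from Section 3.
# Column names and values are illustrative, not the internal tool output.
absolute = pd.DataFrame({
    "total_loc":      [120_000, 45_000, 300_000],
    "churned_loc":    [8_000,   1_200,  30_000],
    "deleted_loc":    [1_500,   300,    9_000],
    "file_count":     [400,     90,     1_100],
    "files_churned":  [60,      10,     250],
    "churn_count":    [300,     25,     1_400],
    "weeks_of_churn": [35.0,    4.0,    220.0],
    "defects_kloc":   [1.2,     0.3,    2.8],   # dependent variable
})

rel = pd.DataFrame({
    "M1": absolute.churned_loc / absolute.total_loc,
    "M2": absolute.deleted_loc / absolute.total_loc,
    "M3": absolute.files_churned / absolute.file_count,
    "M4": absolute.churn_count / absolute.files_churned,
    "M5": absolute.weeks_of_churn / absolute.file_count,
    "M6": (absolute.churned_loc + absolute.deleted_loc) / absolute.weeks_of_churn,
    "M7": absolute.churned_loc / absolute.deleted_loc,
    "M8": (absolute.churned_loc + absolute.deleted_loc) / absolute.churn_count,
})
rel["defects_kloc"] = absolute.defects_kloc

# Spearman rank correlation matrix, analogous in shape to Table 2.
print(rel.corr(method="spearman").round(3))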
That is, for each binary we\nhave a count of the number of defects assigned to that binary.\nThroughout the rest of the paper we assume a statistical\nsignificance at 99% confidence (level of significance ( = 0.01)).\n5.1 Correlation Analysis\nOur goal is to verify that with an increase in the code churn\nmeasures (M1-M8) there is a statistically significant increase in\nthe defects/KLOC. Table 2 shows the Spearman rank correlation\n() among the defects/KLOC and the relative code churn\nmeasures. Spearman rank correlation is a commonly-used robust\ncorrelation technique [8] because it can be applied even when the\nassociation between elements is non-linear.\nTable 2 shows that there exists a statistically significant (at 99%\nconfidence) positive relationship between the measures and the\ndefects/KLOC (shown in bold). Thus, with an increase in the\nrelative churn measures there is a corresponding positive increase\nin the defects/KLOC. This is indicated by the statistically\nsignificant positive Spearman rank correlation coefficient . From\nthe above observations we conclude that an increase in relative\ncode churn measures is accompanied by an increase in system\ndefect density (H\n1\n).\nIn order to illustrate the cross checks better consider the measures\nM1, M2 and M7 in Figure 2 with their Spearman rank correlation\ncoefficients from Table 2.\n\nFigure 2: Cross Correlation Relationships\nThe Spearman correlation coefficient of 0.834 between M1 and\nM2 indicates that there is a very strong correlation between the\ntwo measures. But this might not be the case when there is a\nhigher proportion of churned code compared to deleted code (as\nmeasured by M7 for new feature development). Since this cannot\nbe measured by M1 or M2, M7 acts as a cross check on them. The\ncorrelation between M1 and M7 (0.466) indicates when there is a\nM1\nM2\nM7\n0.834\n0.466\n0.219\nM7\nM2\nM1\nM6\nM3\nM4\nM5\nM8\nM1: Churned LOC / Total LOC\nM2: Deleted LOC / Total LOC\nM3: Files churned / File count\nM4: Churn count / Files churned\nM5: Weeks of churn/ File count\nM6: Lines worked on / Weeks of churn\nM7: Churned LOC / Deleted LOC\nM8: Lines worked on / Churn count\nCross check\n287\nnew feature addition there is a corresponding increase in the\nchurned code. For M2 and M7 this correlation is not as strong (but\nis statistically significant) because there were relatively fewer new\nfeature additions compared to other changes in the W2k3-SP1\nsource base.\n5.2 Model Fitting\nWe now compare predictive models built using absolute measures\nagainst those built using the relative churn measures. For the\nabsolute model, defects/KLOC is the dependent variable and the\npredictors are the absolute measures described in Section 3. For\nthe relative model, defects/KLOC is the dependent variable and\nthe predictors are the relative measures described in Section 4.\nR\n2\nis a measure of variance in the dependent variable that is\naccounted for by the model built using the predictors [4]. R\n2\nis a\nmeasure of the fit for the given data set. (It cannot be interpreted\nas the quality of the dataset to make future predictions). The\nadjusted R\n2\nmeasure also can be used to evaluate how well a\nmodel will fit a given data set [5]. Adjusted R\n2\nexplains for any\nbias in the R\n2\nmeasure by taking into account the degrees of\nfreedom of the predictor variables and the sample population. 
The adjusted R^2 tends to converge to the R^2 measure for large population samples.
The multiple regression model fit for the absolute measures using all the predictors has an R^2 value of 0.052 (F=16.922, p<0.0005). (The F-ratio is used to test the hypothesis that all regression coefficients are zero.) This is a poor fit of the data, and irrespective of other transformations (for example, log) we cannot get a marked improvement in R^2. The adjusted R^2 value for the absolute measures is 0.049. Throughout the rest of this paper we present the adjusted R^2 values in addition to the R^2 measures in order to eliminate any bias in model building. With respect to the large sample size (2465 binaries), however, the adjusted R^2 and R^2 values show only minor variation, not sufficient to drop the R^2 value and employ the adjusted R^2 value.
There are different ways in which regression models [16] can be built. Three common regression methods [16] are forward, backward and step-wise regression. In forward regression, one adds a single predictor at a time to the model based on the strength of its correlation with the dependent variable. The effect of adding each predictor is evaluated based on the results of an F-ratio test [16]. Variables that do not significantly add to the success of the model are excluded. In backward regression, a model is built using all the predictors. The weakest predictor variable is removed and the strength of the overall model is assessed in a similar way to the forward regression procedure. If this significantly weakens the model then the predictor is put back (and otherwise removed). Step-wise regression [16] is the more robust of these methods. The initial model consists of the predictor having the single largest correlation with the dependent variable. Subsequently, new predictors are selected for addition into the model based on their partial correlation with the predictors already in the model. With each new set of predictors, the model is evaluated, and predictors that do not contribute significantly in terms of the F-ratio are removed so that, in the end, the best set of predictors explaining the maximum possible variance is left.
A step-wise regression analysis using the absolute set of predictors does not lead to any significant change in the R^2 value (0.051; adjusted R^2 = 0.050). Only the LOC and the number of times a file is churned are kept as predictors. This further confirms that using the absolute measures is not an appropriate method for assessing the system defect density.
Several empirical studies use Principal Component Analysis (PCA) [10] to build regression models [6]. In PCA, a smaller number of uncorrelated linear combinations of the metrics, which account for as much sample variance as possible, are selected for use in regression. PCA is not a possible solution when using the absolute measures because the correlation matrix is not positive definite. We still use the two principal components generated to build a multiple regression equation.
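As an aside, the forward/step-wise selection procedure described above can be sketched as follows. This is a minimal illustration using statsmodels with an adjusted-R^2 criterion and an overall F-test; it is not the statistical package procedure used in the study, and the data generated here are synthetic.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_select(X, y, alpha=0.01):
    """Greedy forward selection: at each step add the candidate predictor that
    gives the largest increase in adjusted R^2, keeping it only if the overall
    model F-test stays significant at level alpha."""
    chosen = []
    best_adj = -np.inf
    improved = True
    while improved:
        improved = False
        for cand in [c for c in X.columns if c not in chosen]:
            model = sm.OLS(y, sm.add_constant(X[chosen + [cand]])).fit()
            if model.rsquared_adj > best_adj and model.f_pvalue < alpha:
                best_adj, best_cand = model.rsquared_adj, cand
                improved = True
        if improved:
            chosen.append(best_cand)
    final = sm.OLS(y, sm.add_constant(X[chosen])).fit()
    return chosen, final

# Synthetic demonstration: only M2 and M3 actually drive the response.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((200, 8)), columns=[f"M{i}" for i in range(1, 9)])
y = 3 * X["M2"] + 2 * X["M3"] + rng.normal(0, 0.1, 200)
chosen, fit = forward_select(X, y)
print(chosen, round(fit.rsquared, 3), round(fit.rsquared_adj, 3))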
The multiple regression\nequation constructed has an even lower value of R\n2\n=0.026,\n(F=33.279, p<0.0005).\nBased on the three results discussed above (multiple regression\nusing all the predictors, step-wise regression and PCA) we\nconclude that the absolute measures are not good predictors of\nsystem defect density.\nAs outlined in Section 3 we calculate the relative code churn\nmeasures (M1-M8) and build regression models using all the\nmeasures, step-wise regression and PCA. Table 3 shows the R\n2\n\nvalue of the regression equation built using all the measures. We\nalso present the adjusted R\n2\nvalue and the root MSE (Mean\nSquared Error).\n\nTable 3. Regression Fit Using All Measures\nModel\nR\n2\n\nAdjusted R\n2\n\nRoot MSE\nAll Measures\n.811\n.811\n1.301215\n\nTable 4 shows how the R\n2\nvalue changes in step-wise regression\nfor all the models built during that process. In the step-wise\nregression model the measure M7 is dropped. The best R\n2\nvalue in\nTable 4 (without M7) is the same as that of Table 3 (.811) but\nthere is a change in the third decimal place of the standard error of\nthe estimate. M7 probably was dropped because there were\nrelatively fewer new feature additions compared to other changes\nin the W2k3-SP1 source base. The adjusted R\n2\nvalues are also\nshown but are not significantly different from the R\n2\nvalues due to\nthe large sample size used to build the models.\n\n\n\n288\nTable 4. Step-wise Regression Models\nModel\nR-Square\n\nAdjusted\nR-Square Root MSE\n(a)\n.592\n.592\n1.908727\n(b)\n.685\n.685\n1.677762\n(c)\n.769\n.769\n1.437246\n(d)\n.802\n.801\n1.331717\n(e)\n.808\n.807\n1.312777\n(f)\n.810\n.809\n1.305817\n(g)\n.811\n.811\n1.300985\na Predictors: (Constant), M2\nb Predictors: (Constant), M2, M3\nc Predictors: (Constant), M2, M3, M8\nd Predictors: (Constant), M2, M3, M8, M1\ne Predictors: (Constant), M2, M3, M8, M1, M6\nf Predictors: (Constant), M2, M3, M8, M1, M6, M5\ng Predictors: (Constant), M2, M3, M8, M1, M6, M5, M4.\nThe PCA of the eight relative code churn measures yields three\nprincipal components. PCA can account for the multicollinearity\namong the measures, which can lead to inflated variance in the\nestimation of the defect density.\n\nBut for PCA to be applicable the KMO (Kaiser-Meyer-Olkin)\nmeasure[11] of sampling adequacy should be greater than 0.6 [4].\nThe KMO measure of sampling adequacy is a test of the amount\nof variance within the data that can be explained by the measures.\nThe KMO measure of the eight relative code churn measures is\n0.594 which indicates that PCA might not be an appropriate\nmethod to apply.\nWe still perform the analysis to investigate and present those\nresults as well on a comparative basis. The results for all three\nmodels are summarized in Table 5.\nTable 5. Relative Measures Model Fits\nModel R\n2\nAdjusted\nR\n2\nF-Test\nsig.\nAll measures\n0.811\n0.811\n1318.44,\n(p<0.0005)\nStep-wise\nregression\n0.811 0.811 1507.31,\n(p<0.0005)\nPCA 0.749\n0.748\n2450.89,\n(p<0.0005)\nFrom the above results we can see that using relative values of\ncode churn predictors is better than using absolute values to\nexplain the system defect density (H\n2\n).\n\n\n\nFigure 3: Actual vs. Estimated System Defect Density\n289\n5.3 Defect Density Prediction\nWe use the technique of data splitting [18] to measure the ability\nof the relative code churn measures to predict system defect\ndensity. 
The data splitting technique was employed to get an\nindependent assessment of how well the defect density can be\nestimated from a population sample. We randomly select two\nthirds of the binaries (1645) to build the prediction model and use\nthe remaining one third (820) to verify the prediction accuracy.\nWe constructed models using all the measures, step-wise\nregression and PCA (for purpose of completeness). Table 6 shows\nthe results for these models.\nTable 6. Regression Data Fit\nModel R\n2\nAdjusted\nR\n2\n\nF-Test sig.\nAll measures\n0.821\n0.820\n938.304,\n(p<0.0005)\nStep-wise\nregression\n(M7 dropped)\n0.821 0.820 1072.975,\n(p<0.0005)\nPCA 0.762\n0.761\n1749.113,\n(p<0.0005)\nUsing the fitted regression equation we estimate the system defect\ndensity for the remaining 820 binaries. Figure 3 shows the\nestimated and actual defect density using the regression equation\nconstructed using all the measures (sorted by estimated defect\ndensity). The estimated defect density is shown by the thicker\ncontinuous line. From the graph we can see that the estimated\ndefect density is similar to the actual defect density. The axes on\nthe graphs are removed in order to protect proprietary data\nTo quantify the sensitivity of prediction, we run a correlation\nanalysis between the estimated and actual values. A high positive\ncorrelation coefficient indicates that with an increase in the actual\ndefect density there is a corresponding positive increase in the\nestimated defect density. We perform Pearson and Spearman\ncorrelations to indicate their sensitivity. The Pearson correlation\nindicates a linear relationship. The Spearman correlation is a more\nrobust correlation technique.\nTable 7 shows that the correlations are all positive and statistically\nsignificant. The magnitude of the correlations indicates the\nsensitivity of the predictions (the stronger the correlations the\nmore sensitive are the predictions). The models built using all the\nmeasures and the step-wise method have the same sensitivity and\nare better than the model built using PCA.\nTable 7. Correlation Results\nModel\nPearson (sig.)\nSpearman (sig.)\nAll measures\n0.889 (p<0.0005)\n0.929 (p<0.0005)\nStep-wise\nregression\n0.889 (p<0.0005)\n0.929 (p<0.0005)\nPCA\n0.849 (p<0.0005)\n0.826 (p<0.0005)\nAnalyses that are based on a single dataset that use the same data\nto both estimate the model and to assess its performance can lead\nto unreasonably negative biased estimates of sampling variability.\nIn order to address this we repeat the random sampling with 3\ndifferent random samples to verify if the above results are\nrepeatable. For each sample the model is fit with 1645 binaries to\nbuild the model. Table 8 shows the fit of the various models built\nfor each sample.\nTable 8. Random Splits Data Fit\nModel R\n2\nAdjusted\nR\n2\n\nF-Test (Sig.)\nRandom 1: All\n0.836\n0.835\n1045.07,\n(p<0.0005)\nRandom 1:\nStepwise (drop\nnone)\n0.836 0.835 1045.07,\n(p<0.0005)\nRandom 1: PCA\n0.757\n0.756\n1701.98,\n(p<0.0005)\nRandom 2: All\n0.822\n0.821\n941.86,\n(p<0.0005)\nRandom 2:\nStepwise (drop\nM4)\n0.821 0.820 1074.05,\n(p<0.0005)\nRandom 2: PCA\n0.765\n0.764\n1776.87,\n(p<0.0005)\nRandom 3: All\n0.799\n0.798\n813.12,\n(p<0.0005)\nRandom 3:\nStepwise (drop\nM7)\n0.799 0.798 927.54,\n(p<0.0005)\nRandom 3: PCA\n0.737\n0.736\n1529.25,\n(p<0.0005)\nUsing each of the above predictive models we calculate the\nestimated defect density for the remaining 820 binaries. 
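The data-splitting procedure just described (fit on roughly two thirds of the binaries, estimate the defect density for the remaining third, then correlate actual against estimated values) could be sketched as below. The data frame, column names and synthetic data are placeholders; the intent is only to show the shape of the computation.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr, spearmanr

def split_fit_and_check(df, predictors, target="defects_kloc",
                        train_frac=2/3, seed=0):
    """Randomly split binaries ~2/3 for fitting and ~1/3 for checking, fit an
    OLS model on the fitting part and report how well the estimated defect
    density tracks the actual values on the held-out part."""
    train = df.sample(frac=train_frac, random_state=seed)
    test = df.drop(train.index)
    model = sm.OLS(train[target], sm.add_constant(train[predictors])).fit()
    estimated = model.predict(sm.add_constant(test[predictors]))
    return {
        "r2_fit": model.rsquared,
        "pearson": pearsonr(test[target], estimated)[0],
        "spearman": spearmanr(test[target], estimated)[0],
    }

# Synthetic data standing in for the 2465 binaries.
rng = np.random.default_rng(1)
demo = pd.DataFrame(rng.random((2465, 8)), columns=[f"M{i}" for i in range(1, 9)])
demo["defects_kloc"] = demo[["M1", "M2", "M3"]].sum(axis=1) + rng.normal(0, 0.2, 2465)
print(split_fit_and_check(demo, [f"M{i}" for i in range(1, 9)]))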
Table 9 shows the correlation between the estimated and the actual defect density.

Table 9. Correlation Between Actual and Estimated Defects/KLOC
Model                 Pearson Correlation (sig.)   Spearman Correlation (sig.)
Random 1: All         0.873 (p<0.0005)             0.931 (p<0.0005)
Random 1: Stepwise    0.873 (p<0.0005)             0.931 (p<0.0005)
Random 1: PCA         0.858 (p<0.0005)             0.836 (p<0.0005)
Random 2: All         0.878 (p<0.0005)             0.917 (p<0.0005)
Random 2: Stepwise    0.876 (p<0.0005)             0.906 (p<0.0005)
Random 2: PCA         0.847 (p<0.0005)             0.825 (p<0.0005)
Random 3: All         0.899 (p<0.0005)             0.892 (p<0.0005)
Random 3: Stepwise    0.901 (p<0.0005)             0.893 (p<0.0005)
Random 3: PCA         0.880 (p<0.0005)             0.818 (p<0.0005)

Based on the consistent, positive and statistically significant correlations in Table 9, indicating the sensitivity of the predictions, we can say that relative code churn measures can be used as efficient predictors of system defect density (H3).
Our results demonstrate that it is effective to use all eight measures rather than dropping any of them from the predictive equation. Each of these measures cross checks on the others, and any abnormal behavior in one of the measures (for example, a file getting churned too many times) would be immediately highlighted.
By interchanging the measures in a model equation we can get estimated values for all the relative measures independently. For example, in order to determine the maximum allowable code churn with respect to the file size (i.e. M1) for a particular software model, we fix the maximum allowable system defect density. We can then build a regression model with M2-M8 and defect density as predictors and M1 as the dependent variable.
5.4 Discriminant Analysis
Discriminant analysis is a statistical technique used to categorize programs into groups based on the metric values. It has been used as a tool for the detection of fault-prone programs [13, 14, 18]. The ANSI-IEEE Std. [1] defines a fault as an accidental condition that causes a functional unit to fail to perform its required function. We use discriminant analysis to identify binaries as fault-prone or not fault-prone. To classify whether a binary is fault-prone or not, we use the system defect density in a normal confidence interval calculation as shown in equation 1:

LB = x̄ − z(α/2) × (standard deviation of defect density / √n)    ... (1)

where LB is the lower bound on system defect density, x̄ is the mean of the defect density, z(α/2) is the upper α/2 quantile of the standard normal distribution, and n is the number of observations.
We conservatively classify all binaries that have a defect density less than LB as not fault-prone and the remaining as fault-prone.
Table 10 shows the eigenvalue and overall classification ability of using the eight measures and the three principal components. The eigenvalue is a measure of the discriminative ability of the discriminant function; the higher the eigenvalue, the better the discriminative ability. For all measures, the function correctly classifies nearly nine out of every ten binaries.

Table 10. Overall Discriminant Function Fit
Model          Eigenvalue   Classification ability
All Measures   1.025        2188/2465 (88.8%)
PCA            0.624        2195/2465 (89.0%)

As before, we split the data set into 1645 binaries to build the discriminant function and the remaining 820 binaries to verify its classification ability. We perform this analysis using all the measures and the principal components.
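A sketch of this classification step is given below: binaries are labelled using the lower bound of equation (1), and a linear discriminant function is then built on 1645 binaries and tested on the remaining 820. The data are synthetic, and scikit-learn's linear discriminant analysis is used here as a stand-in for the tool used in the study.

import numpy as np
from scipy.stats import norm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fault_prone_labels(defect_density, alpha=0.01):
    """Label binaries using the lower bound of equation (1):
    LB = mean - z_(alpha/2) * std / sqrt(n).
    Density < LB -> not fault-prone (0), otherwise fault-prone (1)."""
    n = len(defect_density)
    lb = (defect_density.mean()
          - norm.ppf(1 - alpha / 2) * defect_density.std(ddof=1) / np.sqrt(n))
    return (defect_density >= lb).astype(int)

# Synthetic stand-in for the per-binary relative measures and defect densities.
rng = np.random.default_rng(2)
X = rng.random((2465, 8))                               # M1..M8 per binary
density = X[:, :3].sum(axis=1) + rng.normal(0, 0.2, 2465)
y = fault_prone_labels(density)

lda = LinearDiscriminantAnalysis().fit(X[:1645], y[:1645])  # build on 1645 binaries
accuracy = lda.score(X[1645:], y[1645:])                    # classify the other 820
print(f"classification ability on held-out binaries: {accuracy:.3f}")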
The\nresults of this fit and classification are shown below in table 11.\nTable 11. Discriminant Analysis\n\nFor Model Fit (for 1645\nbinaries to build the model)\nFor Test Data\n(820 binaries)\nModel Eigen\nvalue\nClassification\nability\nClassification\nability\nAll\nMeasures\n1.063 1464/1645\n(90.0%)\n735/820\n(89.6%)\nPCA 0.601 1461/1645\n(88.8%)\n739/820\n(90.1%)\nTable 11 shows that the relative code churn measures have\neffective discriminant ability (comparable to prior studies done on\nindustrial software [13]). We conclude that relative code churn\nmeasures can be used to discriminate between fault and not fault-prone\nbinaries (H\n4\n).\n5.5 Limitations of Study\nInternal validity. Internal validity issues arise when there are\nerrors in measurement. This is negated to an extent by the fact that\nthe entire data collection process is automated via the version\ncontrol systems. However, the version control systems only\nrecords data upon developer check-out or check-in of files. If a\ndeveloper made many overlapping edits to a file in a single check-out/check\n-in period then a certain amount of churn will not be\nvisible. A developer also might have a file checked out for a very\nlong period of time during which few changes were made,\ninflating the \"weeks of churn\" measure.\nThese concerns are alleviated to some extent by the cross check\namong the measures to identify abnormal values for any of the\nmeasures and the huge size and diversity of our dataset.\nIn our case study we provide evidence for using all the relative\nchurn measures rather than a subset of values or principal\ncomponents. This is case study specific and should be refined\nbased on further results.\nExternal validity. External validity issues may arise from the fact\nthat all the data is from one software system (albeit one with many\ndifferent components) and that the software is very large (some 44\nmillion lines of code) as other software systems used for a similar\nanalysis might not be of comparable size.\nCONCLUSIONS AND FUTURE WORK\nWe have shown how relative code churn metrics are excellent\npredictors of defect density in a large industrial software system.\nOur case study provides strong support for the following\nconclusions:\n\nIncrease in relative code churn measures is accompanied\nby an increase in system defect density;\n\nUsing relative values of code churn predictors is better\nthan using absolute values to explain the system defect\ndensity;\n\nRelative code churn measures can be used as efficient\npredictors of system defect density; and\n\nRelative code churn measures can be used to\ndiscriminate between fault and not fault-prone binaries.\n\nWe plan to validate our approach on other products developed\ninside Microsoft like SQL Server and Office. We also plan to\ndevelop standards for all the measures to provide guidance to the\ndevelopers on the maximum allowable change. We also plan to\ninvestigate how testing can more effectively be directed towards\nchurned code.\n\nACKNOWLEDGEMENTS\nWe would like to express our appreciation to Brendan Murphy of\nMicrosoft Research for providing the Windows Server 2003 SP1\ndata set. We would like to thank Madan Musuvathi of Microsoft\nResearch, for critical feedback on the relative churn measures. 
We\nwould like to thank Jim Larus of Microsoft Research, Laurie\nWilliams, Jason Osborne of North Carolina State University for\n291\nreviewing initial drafts of this paper and the anonymous referees\nfor their thoughtful comments on an earlier draft of this paper.\n\nREFERENCES\n[1] ANSI/IEEE, "IEEE Standard Glossary of Software\nEngineering Terminology, Standard 729," 1983.\n[2] Basili, V., Shull, F.,Lanubile, F., "Building Knowledge\nthrough Families of Experiments," IEEE Transactions on\nSoftware Engineering, Vol. Vol. 25, No.4, No., 1999.\n[3] Boehm, B. W., Software Engineering Economics.\nEnglewood Cliffs, NJ: Prentice-Hall, Inc., 1981.\n[4]\nBrace, N., Kemp, R., Snelgar, R., SPSS for Psychologists:\nPalgrave Macmillan, 2003.\n[5]\nBrito e Abreu, F., Melo, W., "Evaluating the Impact of\nObject-Oriented Design on Software Quality," Proceedings\nof Third International Software Metrics Symposium, 1996,\npp. 90-99.\n[6]\nDenaro, G., Pezze, M., "An Empirical Evaluation of Fault-Proneness\nModels," Proceedings of International\nConference on Software Engineering, 2002, pp. 241 - 251.\n[7]\nFenton, N. E., Ohlsson, N., "Quantitative analysis of faults\nand failures in a complex software system," IEEE\nTransactions on Software Engineering, Vol. 26, No. 8, pp.\n797-814, 2000.\n[8]\nFenton, N. E., Pfleeger, S.L., Software Metrics. Boston,\nMA: International Thompson Publishing, 1997.\n[9]\nGraves, T. L., Karr, A.F., Marron, J.S., Siy, H., "Predicting\nFault Incidence Using Software Change History," IEEE\nTransactions on Software Engineering, Vol. 26, No. 7, pp.\n653-661, 2000.\n[10] Jackson, E. J., A User's Guide to Principal Components:\nJohn Wiley & Sons, Inc., 1991.\n[11] Kaiser, H. F., "An Index of Factorial Simplicity,"\nPsychometrika, Vol. 39, No., pp. 31-36, 1974.\n[12] Karunanithi, N., "A Neural Network approach for Software\nReliability Growth Modeling in the Presence of Code\nChurn," Proceedings of International Symposium on\nSoftware Reliability Engineering, 1993, pp. 310-317.\n[13] Khoshgoftaar, T. M., Allen, E.B., Goel, N., Nandi, A.,\nMcMullan, J., "Detection of Software Modules with high\nDebug Code Churn in a very large Legacy System,"\nProceedings of International Symposium on Software\nReliability Engineering, 1996, pp. 364-371.\n[14] Khoshgoftaar, T. M., Allen, E.B., Kalaichelvan, K.S.,\nGoel, N., Hudepohl, J.P., Mayrand, J., "Detection of fault-prone\nprogram modules in a very large telecommunications\nsystem," Proceedings of International Symposium Software\nReliability Engineering, 1995, pp. 24-33.\n[15] Khoshgoftaar, T. M., Szabo, R.M., "Improving Code\nChurn Predictions During the System Test and\nMaintenance Phases," Proceedings of IEEE International\nConference on Software Maintainence, 1994, pp. 58-67.\n[16] Kleinbaum, D. G., Kupper, L.L., Muller, K.E., Applied\nRegression Analysis and Other Multivariable Methods.\nBoston: PWS-KENT Publishing Company, 1987.\n[17] Munson, J. C., Elbaum, S., "Code Churn: A Measure for\nEstimating the Impact of Code Change," Proceedings of\nIEEE International Conference on Software Maintenance,\n1998, pp. 24-31.\n[18] Munson, J. C., Khoshgoftaar, T.M., "The Detection of\nFault-Prone Programs," IEEE Transactions on Software\nEngineering, Vol. 18, No. 5, pp. 423-433, 1992.\n[19] Ohlsson, M. C., von Mayrhauser, A., McGuire, B., Wohlin,\nC., "Code Decay Analysis of Legacy Software through\nSuccessive Releases," Proceedings of IEEE Aerospace\nConference, 1999, pp. 69-81.\n[20] Ostrand, T. 
J., Weyuker, E.J, Bell, R.M., "Where the Bugs\nAre," Proceedings of the 2004 ACM SIGSOFT\nInternational Symposium on Software Testing and\nAnalysis (ISSTA), 2004, pp. 86-96.\n\n\n\n292", "keywords": "principal component analysis;Relative code churn;defect density;fault-proneness;multiple regression"} {"name": "208", "title": "Using Case-Based Reasoning in Traffic Pattern Recognition for Best Resource Management in 3G Networks", "abstract": "With the underlying W-CDMA technique in 3G networks, resource management is a very significant issue as it can directly influence the system capacity and also lead to system QoS. However, the resource can be dynamically managed in order to maintain the QoS according to the SLA. In this paper, CBR is used as part of an intelligent-based agent management system. It uses information from previously managed situations to maintain the QoS in order to meet the SLA. The results illustrate the performance of an agent in traffic pattern recognition in order to identify the specific type of problem and finally propose the right solution.", "fulltext": "INTRODUCTION\nThe third generation (3G) cellular system has been developed to\nsatisfy increasing customer demands for higher bit-rate access in\norder to provide wireless Internet access anytime and anywhere. In\naddition, 3G networks will integrate different type of services like\nvoice, data, and video.\nWith W-CDMA, all users share the same spectrum and use codes\nto identify themselves. Hence the whole bandwidth can be reused\nin every cell. The system is considered a soft capacity system as all\nusers simultaneously transmit so increasing the interference seen by\nothers. The system capacity is, therefore, limited by the total\ninterference that occurs from other users (in the case of the network\nbeing uplink-capacity limited) or other base stations (in the case of\nthe network being downlink-capacity limited) and the background\nnoise. The benefit of this technique is therefore providing the\nflexible, higher bandwidth services, and maintaining the best\nsystem capacity. On the other hand, it leads to more complexity in\nresource management.\nPrevious work [1] introduced the use of intelligent agents in\nmanaging the resources to meet the service level agreement (SLA)\nwhen congestion occurs. It shows that by using intelligent agents\ntogether with the assignment and admission scheme, the system\nenvironment can be monitored and the policy that is suitable for\nthat particular situation will be selected and applied to the system.\nAlso the quality of service (QoS) for each particular class of\ncustomer can be monitored and controlled according to the SLA. In\n[2], Case-Based Reasoning (CBR) is introduced as a mean of\ngiving the agent more \"intelligence\". The aim of using CBR is so\nthat the problem can be automatically solved by referring to a\nsimilar traffic pattern that the system has seen before and kept in\nthe case library. The end solution from the previous case can then\nbe applied immediately to give a fast and efficient response. In this\npaper, a wider range of traffic situations will be illustrated, which\nwill also show the benefit of using CBR in order to identify\ndifferent traffic patterns and to propose the best solution. In\naddition, results show the outcome of system flexibility in giving\ndifferent priority patterns to customers according to the system\nrequirements.\nThe paper is organised as follows. 
The agent system and\narchitecture for the multi-agent system are described in section 2.\nIn section 3, the implementation of CBR in SLA-based control by\nthe agent will be introduced. The assignment and admission\nscheme is presented in section 4 and section 5 covers the simulation\nmodel. Traffic pattern recognition and numerical results are\nillustrated and discussed in section 6. Lastly, the conclusions of the\npaper are in section 7.\n\nAGENT SYSTEM AND ARCHITECTURE\nCritical in a radio network is the allocation of bandwidth to radio\ncells in order to avoid local congestion or degradation of the QoS\nand it is generally the capacity of the wireless link to the user that\n\nlimits the overall system capacity, rather than any back-haul part of\nthe network.\nIn [3], an agent approach for a distributed resource management\nsystem is introduced. The main reason for using intelligent agents\nis to give greater autonomy to the base stations; this gives an\nincrease in flexibility to deal with new situations in traffic load and\nto decrease the information load (the messaging resulting from\ntaking, or determining control actions) on the network.\nIn the past, mobile network operators have generally restricted the\ncustomer to only one service provider. With the influence of the\nInternet, more widespread choice of service providers (SPs) will be\navailable to 3G users. By using an agent, it would be possible to\nallow selection of SP by offering on price, QoS, or value added\nservice.\nIn this work, each agent uses three layers taking action and\ndecisions on different timescales: reactive, local planning and cooperative\nplanning.\nAs an individual connection must have the decision made in real-time\n, the reactive layer is designed for a very fast response. More\ncomplex functions have been implemented at the planning layers.\nGenerally the local planning layer is concerned with long-term\nactions within its own instance, whereas the co-operative layer is\nconcerned with long-term actions between peer agents, or with\nother types of agent. The reactive layer is, therefore very simple,\nimplementing policies being passed down by the higher layer. This\nis discussed in more detail later in the paper.\nCBR IN SLA-BASED CONTROL\nCBR is an Artificial intelligence (AI) approach that can allow the\nagent to learn from past successes. It is a method that finds the\nsolution to the new problem by analysing previously solved\nproblems, called cases, or adapting old solutions to meet new\ndemands.\n\nFigure 1 Case-based reasoning process model\n(Based on the CBR cycle in [4])\nFigure 1 shows the process model of the case-based reasoning. The\nprocess of CBR starts when there is a new problem or new case\nhappening. The first step is case retrieval, which uses the\ncharacterizing indexes of the event to find the best-match solved\ncase(s) from the case library. The solution from the retrieved\ncase(s) will be reused.\nHowever, the solution might need to be modified to fit the new\nsituation as the new situation will rarely match the old one exactly:\nthis step is called \"revising\". Once the new solution is proposed,\nthe next step is to test it with the real environment. The result is\neither success or failure. 
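The retrieve-reuse-revise-retain cycle described above can be summarised in a small skeleton such as the one below. The case representation, the distance used for retrieval and the apply_and_test/revise hooks are illustrative assumptions, not the implementation used in this work.

from dataclasses import dataclass, field

@dataclass
class Case:
    indexes: tuple          # characterizing indexes describing the situation
    solution: dict          # e.g. the policy that resolved it

@dataclass
class CaseLibrary:
    cases: list = field(default_factory=list)

    def retrieve(self, new_indexes):
        """Return the stored case whose indexes are closest to the new ones."""
        if not self.cases:
            return None
        return min(self.cases,
                   key=lambda c: sum(abs(a - b) for a, b in zip(c.indexes, new_indexes)))

    def retain(self, case):
        self.cases.append(case)

def solve(library, new_indexes, apply_and_test, revise):
    """One pass of the retrieve-reuse-revise-retain cycle."""
    best = library.retrieve(new_indexes)
    proposal = dict(best.solution) if best else revise(None, new_indexes)
    if not apply_and_test(proposal):                   # reuse failed -> revise and retry
        proposal = revise(proposal, new_indexes)
        apply_and_test(proposal)
    library.retain(Case(tuple(new_indexes), proposal))  # retain the confirmed solution
    return proposal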
If the solution fails, a monitoring process will analyse the failure, repair the working solution, and test again. If the solution succeeds, this new solution will be indexed and retained in the case library for use in future problem solving.
The work shown in [5] gives an example of using CBR in network traffic control, using it to control traffic flow in the standard public switched telephone network of the Ile de France. In another work, [6], CBR is used to correct the error estimation of the required bandwidth computed by conventional connection admission control schemes.
In the work described in this paper, CBR is used to recognise traffic patterns as congestion occurs in a 3G network and to define the policies to respond to that congestion in the reactive layer of the resource agent. Congestion here means the situation where the system cannot maintain the QoS required by the SLA. (This is explained in more detail in section 5.)
3.2 Resource agent
In this work the resource agent is the focus of attention as it is an important agent in managing the resource within the network. The architecture of the resource agent is illustrated in figure 2.

Figure 2 Resource agent internal architecture
[The figure shows the three layers of the agent - the co-operative planning layer (actions between cells), the local planning layer (acting on changing QoS within the cell, where the CBR model sits) and the reactive layer (assignment, CAC and exception handling for each connection request) - with policies passed downwards and status reports passed upwards.]

The reactive layer is designed to be fast, performing the same function that would be found in a conventional RNC (Radio Network Controller), assigning the connection a Node B and performing CAC (Connection Admission Control), but it does this according to policies assigned by the planning layer.
The connection request (containing information about the service provider, QoS and type of connection) is first considered for assignment to a Node B using an algorithm or set of rules passed down from the planning layer. As a result, the system performance can be monitored at all times. Any congestion occurring can be detected and reported to the planning layer, which will then find the best solution using the CBR approach in order to maintain the SLA.
ASSIGNMENT AND ADMISSION SCHEME
Assignment and admission control together determine which base station will have power control over a mobile, which means that base station must have available bandwidth to support the new call, and also must make sure that none of the existing connections will be dropped.
A great deal of work has been done in this area. In [7], a comparison is made between a transmitted power-based call admission control (TPCAC) that protects the ongoing calls and a received power-based call admission control (RPCAC) that blocks new calls when the total received power at a base station exceeds a threshold. The result shows that the RPCAC scheme offers significant performance benefits. In [8], the number-based CAC and interference-based CAC are compared.
SIR-based CAC\n(signal-to-interference based CAC) has been proposed in [9], the\nbenefit being in improving the system performance at traffic hot-spots\n.\nIn this paper, a combination between the ideal scheme and SIR-based\nCAC has been chosen with uplink capacity limitation (which\nmeans the signal-to-interference of the received signal from mobile\nto base station is calculated) As for the ideal scheme, the system\nhas to make sure none of the existing connections will be dropped\nwhen accepting a new connection request. Hence, two perfect\npower control loops are run to verify that the new request can really\nbe accepted; otherwise it would be blocked or put into the buffer.\nThe admission process is as follow :\n- With the new connection request, the new mobile's\ntransmitted power is estimated in order to get the target SIR.\n(the open power control in section 5.4)\n- If the estimated transmitted power is in the accepted range, it\nmeans the new mobile can make a connection. Otherwise, it\nwill be blocked or held in the buffer.\n- Set up the new connection and perform the first perfect power\ncontrol loop. With this, the new transmitted power that is\nsupposed to give each connection the target SIR can be\ndetermined.\n- The second perfect power control loop is performed to achieve\nthe actual SIR for each connection as a result of accepting a\nnew connection request.\n- If any existing connection would be dropped (by having SIR\nless than the threshold), the new connection is still rejected\notherwise it is accepted and the connection can be made.\nThe rejected connection request will be put into a queue until the\nnext calculation or new call arrival and it will be blocked at the\nexpiry of a timer: setting the timer to zero means that a request is\nimmediately accepted or rejected. Furthermore, the base station\nserving the mobile can be reassigned at anytime during the\nconnection if the current base station cannot provide the required\nlink quality.\nSIMULATION MODEL\nThe simulation model has been implemented in MatLab. The\nsystem used for the results in this paper consists of 9 hexagonal\ncells (25 cells have been used for other work but the large model\nsuffers from an excessively long run time) and each cell has its own\nbase station with an omni-directional antenna placed at the centre\nof the cell. A number of mobiles have been generated randomly\naccording to the input traffic. When considering different classes of\nuser, it is quite common to use three classes: bronze, silver, and\ngold. In the results described here, 50% of the users are bronze,\n30% are silver and 20% are gold. It is assumed that the gold\ncustomers will pay the highest service charge followed by silver\nand bronze customers, so that the gold customer is paying for the\nbest service and more flexibility than the others.\n5.1 Radio Propagation Model\nIn cellular systems, radio propagation is crucially influenced by the\npath loss according to the distance, log-normal shadowing, and\nmultipath fading. 
The relationship between the transmitted power and received power can be expressed as [9]:

P(r) = P0 · r^(−γ) · 10^(ξ/10)    (1)

where P(r) is the received power, P0 is the transmitted power, r is the distance from the base station to the mobile, ξ in decibels has a normal distribution with zero mean and standard deviation σ (typical value 8 dB), and r^(−γ) represents the propagation gain, with typical values of γ in a cellular environment of 2.7-4.0.
5.2 Traffic Model
The model consists of two traffic types, voice and video. The model has been simplified from the three-type traffic model, which also included data traffic, used in [2]. The reason for simplifying the traffic model is that modelling data traffic at the packet level results in unrealistically long simulation times.
5.2.1 Voice traffic
Voice traffic is considered to be real-time traffic. The common model for a single voice source is the ON-OFF process. It consists of two stages, an active (ON) and a silent (OFF) stage, with one transition rate from ON to OFF and another from OFF to ON.
Figure 3 illustrates the ON-OFF model. The silent period is assumed to be a period that cannot be used to transmit a data message or voice call.

Figure 3 Traffic model for voice call (ON-OFF model)
[The figure shows the two states, Active and Silent, with the duration in each state being exponentially distributed.]

To simplify the simulation, the approach of [9] is used, with an activity factor of 0.45. The transmission rate for voice traffic is assumed to be 8 kbit/s and the mean holding time is 180 seconds.
5.2.2 Video traffic
Video traffic is also considered as real-time traffic. The common model for a video source is the discrete-state continuous-time Markov process illustrated in figure 4. The bit rate of the video traffic is quantized into finite discrete levels (0, A, 2A, ..., MA). Transitions between levels occur with exponential transition rates that may depend on the current level [11].

Figure 4 Video source model (Discrete-state continuous time Markov process)

The state transition rates α and β are obtained, following [11], from the constants 3.9 and 5.04458 together with N and M (equations (2) and (3)), where N is the number of aggregated video sources (typical assumption 1) and M is the number of quantization levels (typical assumption 8).
Implementing this video traffic model in the simulation causes simulation times to be very long. Many authors therefore simplify this by using an activity factor [9][12]. Here, an activity factor of 1 has been assumed for the real-time video source, as used in [9].
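For illustration, the two traffic sources could be generated along the following lines: an ON-OFF voice source with exponentially distributed state durations, and a video source that walks over the quantized rate levels 0, A, 2A, ..., MA. The mean durations, rates and the discrete-time approximation are placeholder assumptions, not the values or the exact model used in the simulation.

import random

def voice_on_off_periods(mean_on=0.45, mean_off=0.55, total=180.0):
    """Generate alternating ON/OFF period lengths (seconds) for one voice source;
    both durations are exponentially distributed, as in Figure 3.  The means are
    placeholders chosen only so that the long-run ON fraction is 0.45."""
    t, periods, active = 0.0, [], True
    while t < total:
        d = random.expovariate(1.0 / (mean_on if active else mean_off))
        periods.append(("ON" if active else "OFF", d))
        t += d
        active = not active
    return periods

def video_level_walk(M=8, A=48.0, alpha=0.1, beta=0.4, steps=100):
    """Random walk over the quantized video bit-rate levels 0, A, 2A, ..., MA.
    From level k the rate moves up with weight (M-k)*alpha and down with weight
    k*beta - a discrete-time approximation of the continuous-time chain of
    Figure 4; alpha and beta here are illustrative values only."""
    k, levels = 0, []
    for _ in range(steps):
        up, down = (M - k) * alpha, k * beta
        r = random.random() * (up + down) if (up + down) > 0 else 0.0
        k = k + 1 if r < up else max(k - 1, 0)
        levels.append(k * A)
    return levels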
The transmission rate for video traffic is assumed to be 64, 144 or 384 kbit/s and the mean holding time is 300 seconds.
5.3 Receiver Model
As the system is uplink-capacity limited, the SIR of each transmission is calculated at the base station; based on [13], it can be expressed as:

SIR = (W/R) · Pr / (I_intra + I_inter + N_thermal)    (4)

where (W/R) is the processing gain, Pr is the received signal strength, I_intra is the sum of the received signal powers of other transmissions within the same cell, I_inter is the sum of the received signal powers from the other cells, and N_thermal is the thermal noise power.
5.4 Power Control Model
Power control is a crucial part of the system since it is necessary to minimise the interference in the system by minimising the transmitted power to the optimum level, which means just enough to maintain the link quality. Power control in UMTS consists of three main functions: (i) open-loop power control, (ii) inner-loop power control, and (iii) outer-loop power control [14]. As the simulation focuses on the uplink-limited capacity, power control for the uplink is applied. In this work, the first two types are applied in the simulation since they have the major effect on the simulation result. Without outer-loop power control, the target SIR has to be fixed; here, it is assumed to be 6 dB and the threshold is 4 dB [10]. The power control step is assumed to be 1 dB at each power control cycle [15][16].
5.4.1 Open-Loop Power Control
Open-loop power control is applied when a new connection request arrives in the system, as the initial step of the admission process (section 4). The total interference at the base station is calculated, as it is the parameter that the User Equipment (UE) needs in order to estimate its initial transmit power. According to this parameter and the target SIR, the UE estimates the transmit power and uses it as its initial transmit power.
5.4.2 Inner-Loop Power Control
This is done periodically to allow the transmitted power of each connection to be kept as low as possible while maintaining the target SIR. Firstly, the base station calculates the received SIR from the UE. If the SIR is less than the target SIR, the TPC (Transmit Power Control) command "up" is sent to the UE, which increases the transmitted power by one step. If the SIR is more than the target SIR+1, the TPC command "down" is sent to the UE, which decreases the transmitted power by one step. Otherwise, the UE maintains the same transmitted power. After the power control cycle has been performed, the new SIR for each mobile can be calculated. Any mobile that has an SIR less than the threshold will not be dropped immediately; instead the system will try to reallocate that mobile to another base station nearby that still has available bandwidth and can provide the link quality. If this is possible, the mobile will be handed over to the other base station, otherwise the mobile will be dropped. The transmitted power has a maximum of 21 dBm; if the calculated transmit power is more than this maximum, the maximum transmitted power will be applied.
In this simulation, the inner-loop power control is performed every 10 ms. Experiments have been done by varying the power control time step, and the results show that the blocking rate becomes erratic as the timing gets too high.
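A sketch of the uplink SIR calculation of equation (4) and of one inner-loop iteration (1 dB steps towards a 6 dB target, a 4 dB drop threshold and a 21 dBm cap) is given below; it is a simplified illustration rather than the MatLab implementation used for the results.

import math

def sir_db(pr, i_intra, i_inter, n_thermal, processing_gain):
    """Uplink SIR of one connection, following equation (4):
    SIR = (W/R) * Pr / (I_intra + I_inter + N_thermal), returned in dB."""
    return 10 * math.log10(processing_gain * pr / (i_intra + i_inter + n_thermal))

def inner_loop_step(tx_power_dbm, measured_sir_db,
                    target_db=6.0, step_db=1.0, max_dbm=21.0):
    """One 10 ms inner-loop iteration: the base station compares the measured SIR
    with the target and the UE moves its transmit power one step up or down."""
    if measured_sir_db < target_db:
        tx_power_dbm += step_db            # TPC "up"
    elif measured_sir_db > target_db + 1:
        tx_power_dbm -= step_db            # TPC "down"
    return min(tx_power_dbm, max_dbm)      # UE power is capped at 21 dBm

def check_connection(measured_sir_db, threshold_db=4.0):
    """Below the threshold the mobile is first offered to a neighbouring base
    station with spare capacity; only if that fails is it dropped."""
    return "ok" if measured_sir_db >= threshold_db else "try_handover"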
From the experiment, the time\nstep of 10ms has been chosen as it is the highest value that gives\nconsistent results. From the simulation point of view, it is\npreferable to use the highest time as this reduces the length of the\nsimulation.\nThe power control is essentially that used in 3G without including\nouter-loop power control.\n5.5 Verification and Validation\nOne of the most important aspects in developing the simulation\nmodel is its credibility; therefore the validation and verification of\nany simulation model are essential. The simulation model was\nvalidated by comparing the result with the relevant result from [8]:\nthis is discussed in [1]\n.\n0\nMA\n(M-1)A\n(M-2)A\nA\n2A\n\n\n2\n\n\nM\n\nM\n\n)\n1\n(\nM\n\n2\n\n)\n1\n(\nM\n255\n5.6 CBR Model\nAccording to [17], there are several proposed schemes of\norganizational structure and retrieval algorithms for CBR. In this\nwork, the hierarchical memory with parallel search is used as it\nprovides an efficient retrieval that is less time consuming, as the\nmatching and retrieving happen in one step, which also give less\ncomplexity.\nThe monitoring process of the system performed every 10 seconds.\nThis means the monitoring parameters will be collected for 10\nseconds and sent to the local planning layer of an agent where the\nCBR model is located as shown in figure 1. The parameters will\nthen be compared with the SLA requirements and any deviation\nfrom the SLA can be reported. The CBR model will then be used\nto find the best solution for the situation. Base on the process model\nin figure 1, a solution will be proposed, or where the best matched\ncase cannot be found or the evaluating process fails, a calculation\nmight be used instead in order to find the solution according to\ncertain rules.\nAs the parallel search has been chosen for the CBR model, the\nwhole library will be searched for each characterizing index in one\nstep. If the new case is to be retained in the library, the library\nindexes have to be re-sorted according to the priority of the\ncharacterizing index of the new case.\n5.7 Monitoring and case matching process\nAs explained above, the monitoring process is done every 10\nseconds. The call blocking, call dropping and the accumulative\nvalue of blocking rate are calculated and by comparing them with\nthe SLA requirements, the error can be detected. If the error being\nreported is significant, the CBR model will be called.\nThere are seven characterising indexes used to describe the case at\nthe moment. Currently, they are obtained by matching the actual\nmonitoring factors into a suitable range where the value belongs.\nTherefore, the characterising indexes will be in form of small\ninteger numbers. The seven monitoring factors are as follows-:\nTotal throughput for the whole system\nOffered traffic for the whole system\nOffered traffic for silver customer for the whole system\nOffered traffic for gold customer for the whole system\nCell identity where offered traffic exceeds limit\nAccumulative blocking rate for silver class\nAccumulative blocking rate for gold class\nIn case that there is not an exact match, there needs be a way to\nidentify whether the closest match is acceptable for the situation.\nHowever, in the current model, only the best match will be chosen.\nIn future work, the acceptable level for each case will be\ndetermined by the distance to the seven-dimensional coordinates\ndefining the individual point of the case. 
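A possible form of the matching step is sketched below: every stored case is scored in one pass against the seven characterizing indexes of the new situation and the closest case is returned together with its distance. The weighting and the tolerance test are illustrative assumptions, since the current model simply takes the best match.

def index_distance(case_indexes, new_indexes, weights=None):
    """Distance between two cases in the seven-dimensional space of
    characterizing indexes (small integers).  The optional weights are purely
    illustrative - the exact metric is left to future work."""
    weights = weights or [1] * len(new_indexes)
    return sum(w * abs(a - b)
               for w, a, b in zip(weights, case_indexes, new_indexes))

def best_match(library, new_indexes, tolerance=None):
    """Parallel search over the whole library: score every stored case in one
    pass and return the closest one (and whether it is within tolerance)."""
    if not library:
        return None, float("inf"), False
    scored = [(index_distance(c["indexes"], new_indexes), c) for c in library]
    distance, case = min(scored, key=lambda s: s[0])
    acceptable = tolerance is None or distance <= tolerance
    return case, distance, acceptable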
If it is within the tolerable range, the case will be used.
TRAFFIC PATTERN RECOGNITION AND NUMERICAL RESULTS
The previous work [1] was done to support the basic idea that the reactive layer of the agent system can be controlled by the planning layer in order to ensure system compliance with the SLA. Here, the SLA assumptions made for the maximum acceptable level of the call blocking rate are:
- Maximum acceptable call blocking rate for gold : 0.03
- Maximum acceptable call blocking rate for silver : 0.05
The gold customer pays the highest rate for the least elasticity of service level. These rates can either be instantaneous values or measured over a period of time; naturally the numerical limits would be different.
In [1], the random overload situation was tested with the traffic load being increased after the system reached stability. The call dropping rate is acceptable before the high traffic load is applied; after changing the load, the call dropping rate increases, then slowly declines to about the same level as before, because of the implementation of ideal assignment and admission control. On the other hand, the call blocking rate increases as a greater number of mobiles attempt to get into the system and the system tries not to drop any existing connection, so more will be blocked. The call buffering time for all classes of customer and all types of service has been set to zero to give immediate accept or reject decisions.
Figure 5 shows the comparison between the result from the conventional system that does not change the policy (dashed lines) and the result of the policy change (solid lines). Without the SLA-based control, the call blocking rate for all customer classes rises as the traffic load increases.
For the SLA-based control, the implementation here uses a buffer mechanism so that a call request that cannot be served immediately is held for a short time in case resources become available. The buffering time is configurable. From the solid lines, at the point when the call blocking rate for gold customers reaches the maximum level, policy 2 is applied, which allocates a short buffering time to call requests from gold and silver customers, with that for bronze customers still being set to zero. The result shows that the call blocking rate of gold and silver customers stabilises, but does not go below the limit set for gold. After waiting for a short period (here 2 minutes) to ensure the trend is stable, a further change in policy is applied; this gives a longer buffering time for gold customers and a slightly longer buffering time for silver customers, so increasing still further the probability of gold and silver customers (especially gold) being accepted at the expense of bronze.

Figure 5 Comparison between the result from the conventional system and the SLA-based control system
[The figure plots the rate of call blocking over total active calls against simulation time (ms.) for gold, silver and bronze customers, with and without SLA control, marking the gold and silver maximum levels, the change of mean interarrival time from 100 ms to 25 ms, and the points at which policies 1, 2 and 3 are applied.]

A simple case library has been generated partly from this previous work and partly from the knowledge gained in the work under current study.
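The case library and the policies it points to can be pictured as simple records of the following kind; the index values and buffering times shown are illustrative placeholders rather than the values used in the experiments.

# Each policy simply sets per-class buffering times (seconds) used by the
# reactive layer when a request cannot be served immediately; the numbers
# below are illustrative only.
POLICIES = {
    1: {"gold": 0,  "silver": 0, "bronze": 0},   # immediate accept/reject
    2: {"gold": 5,  "silver": 3, "bronze": 0},   # protect gold first
    3: {"gold": 10, "silver": 5, "bronze": 0},   # protect gold more strongly
}

# A case ties a recognised traffic pattern (the characterizing indexes) to the
# policy that resolved it; new cases of this shape are retained after testing.
CASE_LIBRARY = [
    {"indexes": (3, 4, 2, 2, 0, 1, 2), "policy": 2},   # random overload, gold at its limit
    {"indexes": (3, 4, 3, 2, 0, 2, 1), "policy": 3},   # overload, gold and silver at their limits
]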
By\nusing the simulation model mentioned before, a few traffic patterns\ncan be implemented to test the system performance.\nThe two main environments being tested here are the random\noverload situation and the hot spot situation. As the system detects\nthe congestion, the CBR model is called to analyse the situation and\nthe simulation results will be divided in two sections.\n6.1 Random overload case\nFor this case, the simulation repeated the previous work explained\nbefore by adding the CBR model and also use the simulation model\nillustrated in section 5. (In previous work [1] the less detailed\nsimulation model has been used.)\nFigure 6 shows the simulation results of the call blocking across the\nsimulation time as the traffic load increases in a conventional\nsystem that does not change policy. The call buffering time for all\nclasses of customers and all types of services has been set to zero to\ngive immediate accept or reject decisions.\n\n\n\n\n\n\n\n\n\n\n\nFigure 6 The simulation result from conventional system for\nthe random overload situation\nFigure 7, 8 and 9 show the effect of using the CBR approach to\nidentify the current traffic pattern and manage the reactive layer\npolicies accordingly.\nIt might be thought that these results are simply the normal result of\napplying priorities, but the technique is more powerful. In many\nSLAs, it is not short-term violations that are important: an SLA\nmight specify for instance that the blocking rate must not exceed a\ncertain value during a day or a month.\nThe new policy has been applied to the reactive layer as soon as the\nsystem recognises congestion, in this example using the\naccumulative error rate over a period of 10s.\nThe implementation here again uses a buffer mechanism to give\nshort buffering time to call request that cannot be served\nimmediately, especially for the higher priority customer. The\nbuffering time is configurable. It can be seen from the result that\nCBR keeps the call blocking rate for gold and/or silver customers\nwithin the SLA bounds, according to the congestion pattern.\nIn figure 7, the traffic reaches overload when the accumulative call\nblocking rate for gold exceeds the limit, at that point silver is still in\nan acceptable range. In this case the chosen policy gives the highest\nbuffering time to gold and lower value for silver with that for\nbronze still at zero.\n\n\n\n\n\n\n\n\n\n\nFigure 7 Simulation results showing the effect of SLA-based\ncontrol by CBR approach for the first random overload case\nIt can be seen that the system detects the overload situation at the\npoint where the traffic load increases and generates the appropriate\npolicy. As the new policy gives priority to gold, the call blocking\nfor gold customer is maintained within an acceptable range at the\nexpense of both silver and bronze.\nFigure 8 shows the result from the second case, where the traffic is\noverloaded with the accumulative call blocking rate for gold and\nsilver exceeding the maximum value. In this case, both silver and\ngold QoS need to be handled. By giving highest buffering time to\nsilver and slightly lower for gold, the blocking for both can be kept\nwithin the range. 
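The buffering mechanism referred to above could be sketched as a small priority queue in which blocked requests are held for a class-dependent time taken from the current policy; the structure below is an illustration of the idea, not the simulator code.

import heapq, itertools

PRIORITY = {"gold": 0, "silver": 1, "bronze": 2}   # lower number = served first
_counter = itertools.count()

def buffer_request(queue, request, now, policy):
    """Hold a blocked request for the class-specific buffering time given by the
    current policy; a zero buffering time means it is rejected immediately."""
    hold = policy[request["class"]]
    if hold == 0:
        return "rejected"
    heapq.heappush(queue, (PRIORITY[request["class"]], next(_counter),
                           now + hold, request))
    return "buffered"

def next_buffered(queue, now):
    """Serve the highest-priority request whose holding time has not expired;
    expired requests encountered on the way are discarded."""
    while queue:
        prio, _, deadline, request = heapq.heappop(queue)
        if deadline >= now:
            return request
    return None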
As the buffer in this implementation uses a priority arrangement, gold customers are always at the top of the queue, so, in order to also give priority to silver customers, their buffering time has to be higher.

Figure 8 Simulation results showing the effect of SLA-based control by the CBR approach for the second random overload case
[Figures 6-8 plot the call blocking rate for gold, silver and bronze customers against simulation time (s.), marking the gold and silver maximum levels and the change from normal to overload traffic.]

In Figure 9 the situation is that the long-term value for gold customers has been met, but that for silver is at the limit. When congestion occurs, silver customers then have to be given priority in order that their long-term blocking is not exceeded, but gold customers can be allowed to have worse service since there is still "slack" in their SLA.

Figure 9 Simulation results showing the effect of SLA-based control by the CBR approach for the last random overload case

The SLA monitoring here is looking at the long-term blocking; it has detected that silver needs priority and has applied that priority. These results show the flexibility of the control system, which assigns different policies to different scenarios, and also show that the highest-priority class can itself be sacrificed in order to maintain the customer class whose long-term SLA values are at risk.
In fact any SLA that can be evaluated numerically can be used as the basis for controlling the policy: the system is that flexible.
6.2 Hot spot case
With hot spots, the monitoring process is able to identify the congestion from the individual blocking and dropping parameters of each cell. The CBR model will then match the pattern with the cases in the library. The proposed mechanism here can be seen in figure 10.
The bronze and silver users near the boundary will be transferred to neighbouring cells that have normal traffic, effectively controlling the cell size in a more comprehensive manner than simple cell breathing from power control. By doing this, some of the capacity will be released in the hot spot cell in order to maintain the users nearer to the centre and the high priority users.

Figure 10 Hot spot situation and the proposed solution

In the initial work for this hot spot case, the transferring process or handover will be done every 10 s, which is the frequency of the monitoring process.
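A sketch of this periodic hot-spot relief step is given below: each monitoring period, bronze and silver users flagged as being near the boundary of the congested cell are handed over to a neighbouring cell with spare capacity. The data shapes and the capacity check are illustrative assumptions.

def relieve_hot_spot(hot_cell_users, neighbour_loads, capacity=64):
    """Every monitoring period, reassign bronze and silver users flagged as being
    near the cell boundary to the least-loaded neighbouring cell that still has
    spare capacity.  The data shapes here are illustrative placeholders."""
    moved = []
    for user in list(hot_cell_users):
        if user["class"] == "gold" or not user["near_boundary"]:
            continue
        target, load = min(neighbour_loads.items(), key=lambda kv: kv[1])
        if load < capacity:
            hot_cell_users.remove(user)
            neighbour_loads[target] = load + 1
            moved.append((user["id"], target))
    return moved

# Example: one silver boundary user is moved, the gold user and the
# non-boundary user stay in the hot-spot cell.
users = [{"id": 1, "class": "silver", "near_boundary": True},
         {"id": 2, "class": "gold",   "near_boundary": True},
         {"id": 3, "class": "bronze", "near_boundary": False}]
print(relieve_hot_spot(users, {"cell_B": 40, "cell_C": 55}))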
Examples of results from the initial work on this case are shown in figures 11 and 12.
In figure 11, after the traffic load has increased in the hot spot cell, the call blocking rate for the hot spot cell rises while the other cells still have a low blocking rate, as their traffic remains at the normal level.
Figure 11 Result from the conventional system for the hot spot situation
Figure 12 shows the result of using the CBR model, which instructs the system to perform the handover of bronze and silver users near the boundary to the neighbouring cells every 10 s. The blocking rate for the hot spot cell still increases after the traffic load has increased, but compared with the result in figure 11 the call blocking rate is lower. Further work is being done on evaluating more complex scenarios.
Figure 12 Result from SLA based control system with CBR model for the hot spot situation
CONCLUSIONS
This paper has introduced the concept of combining CBR with an intelligent agent layered architecture to manage SLAs in W-CDMA networks. The simulation results show that the CBR system has been able to detect congestion occurring and then apply the appropriate policy to manage the behaviour of the CAC to block those customers who, at that time, are perceived as less important to the operator.
The scenarios illustrated are fairly simple, but further work is evaluating the approach over a much more complex range of situations.
REFERENCES
[1] Chantaraskul, S. and Cuthbert, L.G. SLA Control for Congestion Management in 3G Networks, in Proceedings of the IASTED International Conference on Wireless and Optical Communications (WOC2003), Banff, Alberta, Canada, 2003, pp. 447-452.
[2] Chantaraskul, S. and Cuthbert, L.G. Introducing Case-Based Reasoning in SLA Control for Congestion Management in 3G Networks, in Proceedings of the IEEE Wireless Communications and Networking Conference 2004 (IEEE WCNC2004), Atlanta, Georgia, USA, 2004.
[3] Cuthbert, L.G., Ryan, D., Tokarchuk, L., Bigham, J., Bodanese, E. Using intelligent agents to manage resource in 3G Networks, Journal of IBTE, vol. 2 part 4, Oct.-Dec. 2001, pp. 1-6.
[4] Aamodt, A. and Plaza, E. Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches, AI Communications, The European Journal of Artificial Intelligence, vol. 7:1, pp. 39-59, 1994.
[5] Caulier, P. and Houriez, B. A Case-Based Reasoning Approach in Network Traffic Control, in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century, Volume 2, 1995, pp. 1430-1435.
[6] Hassanein, H., Al-Monayyes, A. and Al-Zubi, M. Improving Call Admission Control in ATM Networks Using Case-Based Reasoning, in Proceedings of the IEEE International Conference on Performance, Computing, and Communications, 2001, pp. 120-127.
[7] Huang, C. Y. and Yates, R. D.
Call Admission in Power Controlled CDMA Systems, in Proceedings of the IEEE Vehicular Technology Conference, 1996, pp. 227-231.
[8] Capone, A., Redana, S. Call Admission Control Techniques for UMTS, in Proceedings of the IEEE Vehicular Technology Conference, 2001, pp. 925-929.
[9] Liu, Z. and Zarki, M. E. SIR Based Call Admission Control for DS-CDMA Cellular System, IEEE Journal on Selected Areas in Communications, vol. 12, issue 4, May 1994, pp. 638-644.
[10] Kuri, J. and Mermelstein, P. Call Admission on the Uplink of a CDMA System based on Total Received Power, in Proceedings of the IEEE International Conference on Communications, vol. 3, 1999, pp. 1431-1436.
[11] So, J.W. and Cho, D.H. Access Control of Data in Integrated Voice/Data/Video CDMA Systems, in Proceedings of the VTC Spring 2002, IEEE 55th, vol. 3, 2002, pp. 1512-1516.
[12] Angelou, E.S., Koutsokeras, N.Th., Kanatas, A.G. and Constantinou, Ph. SIR-Based Uplink Terrestrial Call Admission Control Scheme with Handoff for Mixed Traffic W-CDMA Networks, in Proceedings of the 4th International Workshop on Mobile and Wireless Communications Network, 2002, pp. 83-87.
[13] Radio Frequency (RF) system scenarios, 3GPP TR 25.942, available: http://www.3gpp.org
[14] Laiho, J., Wacker, A. and Novosad, T. Radio Network Planning and Optimisation for UMTS, John Wiley & Sons, Ltd., 2002.
[15] Baker, M.P.J., Moulsley, T.J. Power Control in UMTS Release '99, 3G Mobile Communication Technologies, 2000. First International Conference on (IEE Conf. Publ. No. 471), 27-29 March 2000, pp. 36-40.
[16] Thong, W.S., Bigham, J. Hierarchical Management of CDMA Network Resources, in Proceedings of the Third International Conference on 3G Mobile Communication Technologies, 2002 (Conf. Publ. No. 489), 8-10 May 2002, pp. 216-220.
[17] Kolodner, J. Case-Based Reasoning, Morgan Kaufmann Publishers, Inc., 1993, pp. 289-320.", "keywords": "Service Level Agreement;Intelligent agent and Case-based reasoning;3G Resource Management"} {"name": "209", "title": "Using Roles and Business Objects to Model and Understand Business Processes", "abstract": "Business process modeling focuses on describing how activities interact with other business objects while sustaining the organization's strategy. Business objects are object-oriented representations of organizational concepts, such as resources and actors, which collaborate with one another in order to achieve business goals. These objects exhibit different behavior according to each specific collaboration context. This means the perception of a business object depends on its collaborations with other objects. Business process modeling techniques do not clearly separate the multiple collaborative aspects of a business object from its internal aspects, making it difficult to understand objects which are used in different contexts, thus hindering reuse. To cope with such issues, this paper proposes using role modeling as a separation of concerns mechanism to increase the understandability and reusability of business process models. The approach divides a business process model into a business object model and a role model.
The business object model deals with specifying the structure and intrinsic behavior of business objects, while the role model specifies their collaborative aspects.", "fulltext": "INTRODUCTION
Representing and keeping the alignment between the multiple elements of an organization is fundamental to understand how it operates and how it can be adapted to cope with a changing business environment [5]. This requires understanding how business activities interact and are aligned with other organizational elements while supporting the operation of the business.
In the past years, significant work, particularly in the area of business process modeling, has been proposed, ranging from general modeling concepts to business automation languages [10, 16, 17, 18]. Business process modeling can be used for multiple purposes, such as facilitating human understanding and communication [29], supporting process improvement and re-engineering through business process analysis and simulation [8, 17] and automating the execution of business processes [1, 22].
A business process model captures the relationships that are meaningful to the business between different organizational concepts, such as activities, the resources used by activities and the human or automated actors who perform these activities. Identifying the properties and relationships of these concepts is fundamental to help understanding and evolving the business, since it facilitates the communication between stakeholders, business specialists and support system specialists.
We model business concepts as classes of business objects in a consistent object-oriented glossary of business concepts from where objects can be composed, specialized and reused.
However, fully characterizing the type of a business object, its properties and relationships is not straightforward. This results from a business object generally being used in different contexts and relating to several other business objects in the organization. For example, a business object modeling a Product may be brought into play in several processes, such as Manufacturing, Logistics and Selling. In each of these contexts, it relates with different activities and resources, displaying different and possibly overlapping properties and behavior that are context-dependent. This means the object acts as a multi-dimensional concept.
If business objects are modeled as one-dimensional concepts, i.e. without their properties and behavior being described as context-dependent, then the objects will not have explicit information on how to guide the design of a business support system that is able to cope with evolution. For example, if the Manufacturing process changes, there may be changes to the Product object. However, if the Product object does not explicitly represent the aspects related to its manufacture, then there will be no information on the properties requiring modifications.
This paper focuses on describing how to break up the universe of process modeling and its business objects into different aspects or areas of concern, each of which can then be handled independently and later composed to synthesize a complete model. To do so, we propose defining two complementary conceptual models, a role model and a business object model. The role model describes business object collaborations and the properties of business objects that are concerned with each role, each role being a type in its own right.
The business object model describes the structure and the properties of a business object that are independent of a specific context. The relationships between business objects are specified by the roles the objects play while collaborating. We argue that using roles and business objects to model business processes improves the understandability of the individual business objects and of the process model. It also improves model reengineering, since it promotes reuse and makes explicit the dependencies between the model elements.
The remainder of this paper is structured as follows: the next section reviews some of the research on business process modeling. Section 3 reviews role modeling, describes how roles can be identified and defines the concepts of business object and role. Section 4 presents how the business object and the role model can describe a business process, followed by an example of application in section 5. Finally, section 6 sets out the conclusions and future work.
MODELING BUSINESS PROCESSES
The Workflow Reference Model [31] defines a business process as a set of one or more connected activities, which collectively realize a business objective or policy goal, normally within the context of an organizational structure defining functional responsibilities and relationships. This definition extends the definition proposed by Davenport and Short [7], stating that a business process is a set of logically related tasks performed to achieve a defined business outcome. Most approaches to business process modeling concentrate on some sort of process map or diagram, which shows how activities are scheduled in the course of a business process. Indeed, there is little disagreement about the key elements of process diagrams. There are usually ways to represent decision points and to express various activity coordination patterns, such as sequential flow, branching and parallel execution. Some techniques introduce swim-lanes to indicate the responsibilities of participants, such as departments or individuals. This allows representing the activities performed by actors in the context of a process.
Two representative coordination-oriented business process modeling techniques that make use of actors, activities and swim-lanes are Role Interaction Networks [24] and Role Activity Diagrams [21]. Role Activity Diagrams provide the means to identify roles and interactions. Roles organize a process' activities into sets of operations associated with a given participant in the process. Interactions show the dependencies between those participants. While this approach improves the understandability of a process model, since it depicts what a participant does in a process, it falls short of explaining the behavior of the business objects in a specific context of interaction. Additionally, roles are defined as groups of activities and not as types, so they cannot be explicitly composed or specialized.
Business process modeling is not limited to process diagrams. The focus of this paper is not on process diagrams but on describing the roles that are used to specify the responsibilities of business objects. A business object is the model of a concept in the business universe of discourse. It plays roles in a business process by means of participating in different activities. Business objects participate in different business processes in different contexts, thus playing multiple roles.
It is important to note that process diagrams do not fully describe the business object structure and relationships, and do not emphasize why activities are performed or roles are enacted. Besides, they only identify actor roles, i.e. the roles of the performer of an activity. This means, for example, that the properties of a resource that is used by multiple activities are not separated according to its usage context. The next section introduces the fundamental concepts behind role theory and role modeling.
ROLE MODELING
In the late 1920s, role theory started to generate interest among social scientists from many backgrounds, such as psychology and sociology. Its central concern has been with patterns of human conduct, context and social structure as well as with individual response. The motivation for roles is to allow particular viewpoints regarding the factors presumed to be influential in governing behavior. It rests on a theatrical analogy of actors playing parts or roles in a play. As Biddle and Thomas [4] have stated: "When actors portray a character in a play, their performance is determined by the script, the director's instructions, the performances of fellow actors, and reactions of the audience as well as by the acting talents of the players. Apart from differences between actors in the interpretation of their parts, the performance of each actor is programmed by all of these external factors; consequently, there are significant similarities in the performances of actors taking the same part, no matter who the actual actors are."
There are many complementary definitions for the concept of role, but still there is no consensus on the properties to represent it. In the late 1970s, sociological role theorists defined a role as "a comprehensive pattern for behavior and attitude" [26] or as "behavioral repertoire characteristic of a person or a position" [3]. Nonetheless, the concept of role is used in computer science and software engineering as a modeling technique that deals with separation of concerns, i.e. the separation of the behavioral repertoire characteristics of some concept. It is used in methodologies such as RM-ODP [14] and in several object-oriented frameworks [10, 12, 14, 15, 25].
3.1 Business Objects and Roles
Modeling is an abstraction technique that consists of identifying concepts of interest in some universe of discourse and representing their essential features for a specific purpose in a model. In business modeling, the universe of discourse corresponds to what is perceived of an organization as being reality by business domain experts.
Ontologies typically distinguish entities (nouns) from activities (verbs). Entities are things that exist in the business, either concrete (e.g. a person) or abstract (e.g. an organization). Activities are things that happen in the business. Activities make use of the business entities. We model both of these concepts as business objects. A business object is then the super type of all objects that represent business concepts with a well-defined boundary and identity. It encapsulates the definition, attributes, behavior and relationships to other business objects [20]. The state of a business object is characterized by the values of its attributes.
The behavior is given by the actions that the business object is capable of performing to fulfill its purpose, including changing its intrinsic attributes and collaborating with other business objects.
Business objects have intrinsic and extrinsic features. Intrinsic features describe the object in isolation, while extrinsic features arise from the relationships or collaborations with other business objects. For example, a Person has intrinsic features such as Age and Sex, and extrinsic features such as Job Position and Salary, which derive from a transitory relationship between the Person and some Organization or Company. Intrinsic features may change over time (e.g. Age) but always characterize the object. However, extrinsic features may become inappropriate (e.g. the Job Position property is not relevant when characterizing an unemployed person).
One way to separate the intrinsic features from the extrinsic features of an object is by means of roles [4, 15, 23]. Roles, as a modeling construct, aim at separating the concerns that arise from business object collaborations. We define a role as the observable behavior of a business object in a specific collaboration context. Thus, a role represents the extrinsic features of a business object when it collaborates with other business objects.
3.2 Identifying Roles
To distinguish roles from entities, Guarino et al. proposed two criteria [11]. A role is a type that (1) is founded and (2) lacks semantic rigidity. Something is founded if it is defined in terms of relationships with other things in a given context. For instance, the concept of Reader is founded, since for a Person to be a Reader there must be something being read. Conversely, a Person is not founded, for the reason that its intrinsic properties are defined on their own regardless of the collaborations with other things.
Something is semantically rigid if its identity directly depends on being a kind of some class. A Book is semantically rigid since its identity is still that of a Book regardless of whether someone is reading it or not. In contrast, Reader is not rigid because an entity filling the role of Reader retains its identity outside the context of that role. For example, a Person is a Reader while reading a Book, but when it stops reading, it is still a Person.
Therefore, roles are founded, semantically non-rigid types, while entities are non-founded, semantically rigid types.
ROLE-BASED PROCESS MODELING
The proposed approach deals with decomposing the business process modeling universe into two complementary models, the business object model and the role model, and later binding these two models into an integrated specification of the business process. The business object model deals with the structure and intrinsic properties of business objects. Here, a process is modeled as a network of business objects. However, business objects relate to other business objects in specific contexts and are often used in more than one context, where they may play different roles. So, the roles for a business object only need to be included in its definition when the object acts in the collaboration contexts described by the roles. It is also impossible to forecast all of the possible roles of a business object. Thus, adding superfluous roles to the object impairs several design quality attributes such as understandability, maintainability and reusability.
To deal with such a concern, roles and business objects should be dealt with separately and later bound together.
The concept of role allows a system to be decomposed into a set of business objects capable of clearly separating core parts and collaboration-dependent parts, and then to abstract and compose such objects. Consequently, a set of roles helps business objects to be defined in a more reusable and extensible way. Roles may also be reused as independent units encapsulating specific collaborations. Roles are organized into role models, which deal with specifying the network of related roles required for a collaboration to happen.
We propose defining and representing both of these models using the Unified Modeling Language [19], since its graphical syntax and semantics are well known by software specialists and, although to a lesser scale, by business specialists. However, standard UML does not have explicit constructs to represent the required business domain concepts. We make use of the UML extensibility package to define such concepts. The extensibility package specifies how UML model elements can be extended and customized with new graphical representations and new semantics by specifying stereotypes, tagged values and constraints. A coherent set of such extensions defined for a specific purpose makes up a UML profile [2, 19]. The next subsections describe how the business object models and role models are represented.
4.1 The Business Object Model
The business object model specifies the structure and intrinsic properties of business objects. Business objects are coordinated towards the achievement of goals that describe why actions occur. A business process describes how objects are coordinated.
Figure 1. Classes in the business object model profile.
Figure 1 is a class diagram describing the UML stereotypes (classes in white) that are used in the business object model. A Business Object is a UML Class and it is specialized as a noun or verb by means of the Entity and Activity class stereotypes. Business object models are represented as UML class diagrams and the intrinsic behavior of their objects is represented using UML's behavioral diagrams. Note that collaborations between business objects are not represented in this model but in the role model. The stereotypes within the business object model can be summarized as follows:
Business Object: an abstraction of a concept of interest in the organization. It is a UML Class.
Activity: a specialization of Business Object. It is a verb describing how a piece of work is performed. Activities are performed by Actors, and operate over Business Objects, especially those acting as Resources.
Entity: a specialization of Business Object. It is a noun describing a concrete or abstract business concept.
Resource: a specialization of Entity, which is the input or output of an Activity. It represents things such as materials or information.
Actor: a specialization of Entity.
It is someone (a human actor) or something (an automated actor, such as an information system or a production machine) that can perform the actions required by an Activity.
Goal: a specialization of Entity that represents a measurable state that the organization intends to achieve. Goals are achieved by Business Objects, especially Activities.
A business process is composed of Activities that use input Resources, such as materials or information, to produce output Resources. Nevertheless, the input of an Activity may be any other Business Object or a composition of Business Objects. For instance, changing or reengineering a business process is in itself a process. This process takes as input a business object model (i.e. a network of relationships between business objects) and produces a modified model. Therefore, the composed business object model is being used as a resource in this context.
Activities are performed to achieve specific business Goals. Analyzing Goals and their relationships with the Activities produces an alignment measure between the processes and the organization's operational strategy. The Activities of a business process are not autonomous, in the sense that they require one or more Actors or Business Support Systems to perform them. Actors represent people, systems (mechanical or computer based) or a combination of both. At a large scale, business processes are aggregated into value chains (which are also business processes) that produce a measurable value that is visible to external customers.
Figure 2. Example of activity composition and specialization.
Business objects are classes conforming to a type. They can be specialized and composed just like ordinary objects. Figure 2 shows an example of a class diagram depicting composition and specialization. Each chevron icon represents an activity or process as previously defined. The Sell Product activity is composed of a set of sub-activities such as Identify Customers and Handle Order. These activities can be further decomposed into actions that are more refined. The activity Sell Product is specialized as Sell by Mail Order and Sell Online. Note that composition and specialization do not imply any collaboration constraints between the activities.
4.2 The Role Model
Roles are a separation of concerns mechanism that allows business objects to be observed from different perspectives. Role models identify roles as types and describe the network of roles required for a specific collaboration to happen.
As a player of a collaboration, a role defines the set of extrinsic properties and behavior necessary to realize its participating collaborations.
Figure 3. Representation of a role model package (left). Pair of related roles (right).
Role models are represented as UML packages with two compartments (see Figure 3, left). The bottom compartment of the role model is a standard UML activity or interaction diagram describing how the roles are orchestrated. The top compartment of the package depicts the roles within the role model. Roles are represented by rounded rectangles, connected by a navigable collaboration relationship between the roles. The representation of a role always shows its name. Optionally, it also depicts in parentheses the name of the role model to which the role belongs, so that its scope is clearly defined (see
Figure 3, right).
Figure 4 shows an example of three role collaborations contained in two role models. The Tutorship role model defines a collaboration pattern between two roles, Tutor and Student. The Course role model defines two pairs of collaborations: Participant/Taken Course and Lecturer/Given Course.
Figure 4. Example of role collaborations.
Roles are modeled as classes and represented in class diagrams. Methods and attributes concerning the specific collaboration context can be specified in the class diagram. Roles can also be constrained. A constraint asserts conditions between the roles in a role model. It can be expressed informally or formally (e.g. in plain text or OCL). An example of a constraint is disallowing two roles to be played simultaneously by the same player, such as forbidding an object from playing the role of Tutor and that of Student simultaneously and in the same context.
Figure 5. Example of role specialization.
Figure 5 is a class diagram that shows how the Teacher role is specialized as Tutor and Lecturer. Role specialization means that if a business object is able to play a child role, then it is also able to play the super role. We have not yet found the need to define abstract roles, i.e., a role that may only have its non-abstract specializations instantiated.
4.3 Binding Roles to Objects
Roles are bound to business objects pertaining to a given business object model. The binding is accomplished via the play relationship stereotype, which links a business object to a role. It means the business object is able to exhibit or play the behavior specified by the target role.
Figure 6. Binding roles to business objects.
Figure 6 shows a class diagram where the pairs of roles Tutor/Student, Lecturer/Given Course and Participant/Taken Course, defined earlier in Figure 4 and Figure 5, are bound to two different business objects, Person and Course. The binding between objects and roles is depicted as a strong arrow. The light arrow represents the relationships between roles. The model also defines a constraint in the Tutorship role model. It asserts that the instances actually playing the Tutor and Student roles must be distinct. In this example, it means the Person acting as a Tutor and the Person acting as Student must be different objects, as expected.
EXAMPLE
Figure 7 shows two base role models, Supply and Pay, and a composed role model, Purchase. Each role is a class and has methods and attributes concerning the specific collaboration context (e.g. the Supplier role in the Supply role model has the inquire and order methods). The Purchase role model describes the collaborations between a Client and a Supplier, while the Pay role model specifies Payer and Payee.
Figure 7. Supply, Pay and Purchase role models. Purchase is composed of the role model Supply and the role model Pay.
The Purchase role model is a composition of the Supply and Pay role models. A purchase results from supplying a product and paying for it. Figure 8 shows the binding of the roles within the Purchase role model to a set of business objects. In the first case, a Retailer acts as the Client and Payer to a Producer, who is a Supplier and a Payee to the Retailer (a minimal sketch of this first binding is given below).
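As an illustration only (this sketch is ours, not code from the paper; the class names simply mirror the example above, and the play method is a deliberately simplified stand-in for the UML play relationship stereotype), the first binding could be expressed as follows.

# Illustrative sketch of binding Purchase roles to business objects.

class Role:
    def __init__(self, model):
        self.model = model      # role model that scopes this role, e.g. "Purchase"
        self.player = None      # business object bound via 'play'

class Client(Role): pass
class Supplier(Role): pass
class Payer(Role): pass
class Payee(Role): pass

class BusinessObject:
    def __init__(self, name):
        self.name = name
        self.roles = []

    def play(self, role):
        """Bind a role to this object: the object now exhibits the
        extrinsic behavior described by the role."""
        role.player = self
        self.roles.append(role)
        return role

retailer = BusinessObject("Retailer")
producer = BusinessObject("Producer")

# First case: the Retailer is Client and Payer towards the Producer,
# who is Supplier and Payee towards the Retailer.
retailer.play(Client("Purchase"))
retailer.play(Payer("Purchase"))
producer.play(Supplier("Purchase"))
producer.play(Payee("Purchase"))

Because the roles live outside the objects, further bindings can be added for other collaboration contexts without touching the intrinsic definition of Retailer or Producer.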
However, the Retailer also acts as a Supplier (and a Payee) to a customer.
Figure 8. Binding roles to business objects.
CONCLUSIONS
This paper has presented the fundamental concepts towards a conceptual object-oriented framework for role-based business process modeling. It relies on defining two distinct models. The business object model focuses on describing the components of a business process (activities, goals, resources and actors) as business objects. This model depicts the type of each business object, its intrinsic behavior and properties, but does not address the representation of the object's features that are related to its collaborations with other objects.
The role model depicts the collaborative behavior between roles and the constraints that regulate them. Roles are bound to business objects in a specific business object model, thus defining their usage context. This model describes roles as types on their own that can be specialized and aggregated. Role reuse is possible whenever the semantics of the interaction pattern is the same, regardless of the interaction context.
The proposed approach separates the specification of the intrinsic features of a business object from its extrinsic features, meaning that the properties and behavior that arise from the collaborations with other objects are separated from the properties concerning the object. This separation results in an increase of the understandability of the business process, since each different aspect of the business object may be discussed, analyzed and dealt with separately.
Additionally, roles also contribute to keep the alignment between the multiple organizational levels where a business process is defined. When a business object specified at business level is mapped to a component at the business process support systems level, roles provide information on how to design the component so that changes to other levels can be traced and managed. Since the collaborative aspects of a business object are specified outside the object as roles, changes to a business process only interfere with the roles which derive from the corresponding activities, leaving the intrinsic properties of the object and its remaining roles unchanged. This means that only the implementation of the concerned roles needs modifications. The same reasoning applies the other way around. When the implementation of a specific role or business object is changed due to technical modifications or to the evolution of the software, these changes can be traced up to the processes and goals depending on it.
The value of using role modeling increases with the need of making explicit the patterns of interaction between business objects. This is the case of processes whose business objects relate to several other business objects. In this case, understanding and reengineering such a process is often difficult due to the number of dependencies between objects, which are not separated or organized according to the interaction context.
This also makes it difficult to abstract common behavior patterns so that the business process elements may be reused in other contexts.
We are currently extending this framework to enhance the representation of the interaction between business objects and the corresponding business support systems. The goal is to analyze the gap between the existing human skills and information system services of an organization and the requirements imposed by the as-is and to-be business models, so that the alignment between these two levels may be improved.
REFERENCES
1. W. Aalst, K. Hee, Workflow Management, MIT Press, 2002.
2. S. Alhir, Unified Modeling Language Extension Mechanisms, Distributed Computing, 1998.
3. C. Bachman, M. Daya. The role concept in data models, Proceedings of the 3rd International Conference on VLDB, 1977.
4. B. Biddle, E. Thomas, Role Theory, Concepts and Research, Kluwer Publishers, 1979.
5. Y. Chan, Why Haven't We Mastered Alignment?: The Importance of the Informal Organization Structure, MISQ Executive, Vol. 1, No. 2, 2002.
6. B. Curtis, M. Kelner, J. Over, Process Modeling, Communications of the ACM, Vol. 35, No. 9, 1992.
7. T. Davenport, J. Short, The New Industrial Engineering: Information Technology and Business Process Redesign. Sloan Management Review, 1990.
8. H. Eertink, W. Janssen, P. Luttighuis, W. Teeuw, C. Vissers, A Business Process Design Language, World Congress on Formal Methods, Springer, 1999, pp. 76-95.
9. H. Eriksson, M. Penker, Business Modeling with UML, OMG Press, 2001.
10. G. Gottlob, M. Schrefl, B. Röck, Extending Object-Oriented Systems with Roles, ACM Transactions on Information Systems, Vol. 14, 1996, pp. 268-296.
11. N. Guarino, M. Carrara, and P. Giaretta. An ontology of meta-level categories. Proceedings of the Fourth International Conference on Knowledge Representation and Reasoning, pages 270-280. Morgan Kaufmann, 1994.
12. T. Halpin, Augmenting UML with Fact-orientation, 34th Hawaii International Conference on System Sciences, IEEE Press, Hawaii, USA, 2001.
13. ISO, ISO/IEC 10746 ODP Reference Model, International Standards Organization, 1995.
14. E. Kendall, Agent Roles and Role Models, New Abstractions for Multiagent System Analysis and Design, International Workshop on Intelligent Agents, 1998.
15. B. Kristiansen, Object-Oriented Modeling with Roles, 1st Conference on Object Information Systems, 1996.
16. M. Madhavji, The Process Cycle, Software Engineering Journal, Vol. 6, No. 5, 1991.
17. C. McGowan, L. Bohmer, Model-based business process improvement, 15th International Conference on Software Engineering, IEEE Computer Society Press, 1993.
18. D. Miers, Business Process Engineering, C-T Colin, Kogan Page, London, 1996.
19. OMG, Unified Modeling Language Specification, Version 1.5, formal/03-03-01, 2003.
20. OMG, Business Object Management Special Interest Group (BOMSIG) Glossary of Terms, 1995.
21. M. Ould, Business Processes, Modeling and Analysis for Reengineering and Improvement, John Wiley & Sons, 1995.
22. A. Scheer, ARIS Business Process Modeling, 2nd edition, Springer, 1999.
23. T. Reenskaug et al., Working With Objects: The OOram Software Engineering Method. Manning Publications Co., 1996.
24. B. Singh, G. Rein. Role Interaction Nets (RINs): A Process Description Formalism, MCC, 1992.
25. D. Taylor, Business Engineering with Object Technology, John Wiley & Sons, 1995.
26. R.
Turner, Strategy for Developing an Integrated Role Theory. Humboldt Journal of Sociology and Religion 7: 123-139, 1979.
27. M. Uschold, M. King, S. Moralee, Y. Zorgios, The Enterprise Ontology, The Knowledge Engineering Review, Vol. 13, 1998.
28. E. Verharen, A Language-Action Perspective on the Design of Cooperative Information Agents, CIP-Gegevens Koninklijke Bibliotheek, 1997.
29. T. Walford, Business Process Implementation for IT Professionals and Managers, Artech House, MA, 1999.
30. E. Yourdon, Modern Structured Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1989.
31. Workflow Management Coalition, The Workflow Reference Model, 1995.", "keywords": "Business Object;Business Process Modeling;Role Modeling;Organizational Engineering"} {"name": "21", "title": "A Taxonomy of Ambient Information Systems: Four Patterns of Design", "abstract": "Researchers have explored the design of ambient information systems across a wide range of physical and screen-based media. This work has yielded rich examples of design approaches to the problem of presenting information about a user's world in a way that is not distracting, but is aesthetically pleasing, and tangible to varying degrees. Despite these successes, accumulating theoretical and craft knowledge has been stymied by the lack of a unified vocabulary to describe these systems and a consequent lack of a framework for understanding their design attributes. We argue that this area would significantly benefit from consensus about the design space of ambient information systems and the design attributes that define and distinguish existing approaches. We present a definition of ambient information systems and a taxonomy across four design dimensions: Information Capacity, Notification Level, Representational Fidelity, and Aesthetic Emphasis. Our analysis has uncovered four patterns of system design and points to unexplored regions of the design space, which may motivate future work in the field.", "fulltext": "INTRODUCTION
From the very first formulation of Ubiquitous Computing, the idea of a calmer and more environmentally integrated way of displaying information has held intuitive appeal. Weiser called this "calm computing" [35] and described the area through an elegant example: a small, tangible representation of information in the world, a dangling string that would wiggle based on network traffic. When information can be conveyed via calm changes in the environment, users are more able to focus on their primary work tasks while staying aware of non-critical information that affects them. Research in this sub-domain goes by various names including "ambient displays", "peripheral displays", and "notification systems". The breadth of the systems in these broad categories is quite large. We seek to disentangle the terminology used to describe and categorize the wide array of systems in order to provide a common language for discussing research therein.
An ambient display can represent many types of data, from stock prices, to weather forecasts, to the presence or absence of colleagues. Maintaining awareness of co-located and distant work and social groups has been a long-term research thread in the area of Computer Supported Cooperative Work (CSCW) [5, 8]. The Tangible Media Group at the MIT Media Lab, directed by Ishii, also helped shape the field of ambient computation.
They coined the term "tangible media," citing inspiration from Weiser's vision [35] and from Pederson and Sokoler's AROMA system [29], and developed AmbientROOM [17] and Ambient Fixtures [6, 18]. These systems use ambient displays to make people aware of both group activity and other information such as network traffic. Recent work in Ambient Intelligence has brought techniques from Artificial Intelligence to ambient systems, spearheaded by the Disappearing Computer initiative of the European Union [31]. This research thrust seeks to imbue ambient systems with contextual knowledge about the environment. The Roomware project has resulted in smart architectural spaces that support information conveyance (and group collaboration) [33].
Researchers have developed systems that use a multitude of everyday objects to display information. Examples include lights of various sorts [2, 17], sounds [25], shadows [8], artificial flowers [18], mobiles [24], and office-décor water fountains [12, 16]. Further research has sought to use framed photographs [26] and larger artistic pictures to represent information from the world in an art-like manner [14, 30, 32]. There are also peripheral display "modes" of a user's main desktop, including screensavers like What's Happening [36], information bars and menus such as those leveraged in Sideshow and Irwin [6, 22], and alternate panes, like Apple's Dashboard [3]. As one can see, the design space is large.
All these systems provide a rich history of system design principles, approaches, and decisions, but accumulating theoretical and craft knowledge has been stymied by the lack of a unified vocabulary to define and describe these systems. In this paper we propose a set of design choices that developers of ambient information systems must confront to build successful and compelling systems. First we set out a definition of an ambient information system that is a synthesis of the varied definitions given in published research. We hone the intuitive set of characteristics that distinguish ambient systems from other ubiquitous computing research systems. Next, we propose a set of design dimensions for ambient information systems. The four dimensions of system design elucidate the main decisions one confronts when designing an effective ambient system. Finally, we explore the clusters across dimensions to uncover four coherent combinations of system designs, which work as design patterns for the field. The results also identify new ways of combining the design attributes to explore new possibilities for ambient information systems.
AMBIENT INFORMATION SYSTEMS
Many different terms have been used to describe the types of systems we discuss in this paper. Three of the most commonly used terms are "ambient display," "peripheral display," and "notification system." But how does one differentiate these terms?
Based on general understandings, we claim that:\n\nall ambient displays are peripheral displays,\n\nsome notification systems are peripheral displays\n(some notification systems are not peripheral but are\ninstead the object of focused work and attention)\n\nThe words of researchers themselves likely best explain their\nconceptions of the systems that they have built. Below, we present\ngermane definitional quotes.\n\n\nIshii et al: \"[In Ambient Displays] information is moved off\nthe screen into the physical environment, manifesting itself as\nsubtle changes in form, movement, sound, color, smell,\ntemperature, or light. Ambient displays are well suited as a\nmeans to keep users aware of people or general states of large\nsystems, like network traffic and weather.\" [17]\n\n\nMatthews et al: Peripheral displays, then, are displays that\nshow information that a person is aware of, but not focused on.\n[24]\n\n\nMatthews et al: \"Ambient displays might be defined as those\nthat are "minimally attended" (e.g. just salient enough for\nconscious perception) while alerting displays are "maximally\ndivided" (e.g. slightly less salient than focal tasks). [24]\n\n\nStasko et al: Ambient displays typically communicate just one,\nor perhaps a few at the most, pieces of information and the\naesthetics and visual appeal of the display is often paramount.\nPeripheral displays refer to systems that are out of a person's\nprimary focus of attention and may communicate one or more\npieces of information.\" [32]\n\n\nMankoff et al: \"Ambient displays are abstract and aesthetic\nperipheral displays portraying non-critical information on the\nperiphery of a user's attention... They generally support\nmonitoring of non-critical information.\" \"Ambient displays\nhave the ambitious goal of presenting information without\ndistracting or burdening the user.\" [20]\n\n\nRounding and Greenberg: \"The [notification collage] is\ndesigned to present info[rmation] as lightweight and peripheral\nobjects. It does not demand the full attention of its users: rather\nit can be attended to in passing, where people collaborate should\nthe need or desire arise.\" [14]\n\n\nMcCrickard et al: \"Often implemented as ubiquitous systems or\nwithin a small portion of the traditional desktop, notification\nsystems typically deliver information of interest in a parallel,\nmultitasking approach, extraneous or supplemental to a user's\nattention priority.\" [21]\n\n\nMcCrickard et al: Notification systems are defined as\ninterfaces that are typically used in a divided-attention,\nmultitasking situation, attempting to deliver current, valued\ninformation through a variety of platforms and modes in an\nefficient and effective manner [21].\n\nThe easiest way to explain the differences between systems is\nto look at the design motivations that informed them. Ambient\ndisplays are those that have pointed aesthetic goals and present a\nvery small number of information elements. These systems are a\nproper subset of peripheral displays, which can appear either in the\nenvironment or on secondary or even primary computer displays.\nNotification systems' design motivation results from divided\nattention situations. As such, they can be equal to a primary work\ntask in their attentional needs or be secondary. 
When notification systems are designed to be secondary to a primary task, the systems are appropriately defined as peripheral.
In this paper, we propose the term ambient information system as the unit of study and define the behavioral characteristics of such systems as follows:
Display information that is important but not critical.
Can move from the periphery to the focus of attention and back again.
Focus on the tangible; representations in the environment.
Provide subtle changes to reflect updates in information (should not be distracting).
Are aesthetically pleasing and environmentally appropriate.
PREVIOUS TAXONOMIES
A small number of research papers that describe ambient information systems also include extended discussions of the design dimensions that motivate and contextualize their work. The authors provide dimensions to compare and contrast their systems to others in order to explain their design rationales.
Matthews et al use the dimensions notification level, transition, and abstraction to characterize systems in this space [24]. They developed the Peripheral Display Toolkit [23] that helps people to develop ambient information displays more easily. Their concept of notification level means the relative importance of a particular data stream. Transitions are the programmatic changes to the display, based on the data. Transitions include fading, scrolling, or animation effects. They define abstraction as the mapping that takes a piece of numerical or ordinal data and turns it into something that the ambient display can use, something "more easily interpreted with less [user] attention."
Matthews et al segregate notification level into five levels: Ignore, Change Blind, Make Aware, Interrupt, and Demand Attention. The gradations run from low, a system ignoring the change in the data, to high, a system demanding attention in a way that must also be explicitly dismissed. They propose categories of transition: interrupt, make aware, and change blind. Finally, they bifurcate abstraction into feature abstraction or degradation.
McCrickard et al introduce a different set of three dimensions to classify notification systems: interruption, reaction, and comprehension [21]. Interruption is defined psychologically, similar to Matthews' notion, "as an event prompting transition and reallocation of attention focus from a [primary] task to the notification." Reaction is defined as the rapid response to a given stimulus, while comprehension is the long-term notion of remembering and sense-making.
McCrickard et al then plot the design space as a 3-tuple of interruption, reaction, and comprehension (IRC). Each dimension is assigned a rating of high (1) or low (0), creating models like 0-1-0. They label these models with meaningful names like "Ambient Media, 0-0-1," "Indicator, 0-1-0" and "Critical Activity Monitor, 1-1-1." Eight models serve as the corners of a design space (a minimal sketch of this encoding is given below). The resulting space, it should be noted, is larger than the design space of ambient information systems as we discuss in this paper because it contains games, secondary displays, and critical activity monitors (which by our definition are notification systems that are not also peripheral systems).
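The following fragment (our sketch, not an artifact of McCrickard et al's work; only the three corner labels quoted above are filled in) shows how compact the IRC encoding is in practice.

# Illustrative sketch of the IRC encoding: each model is a 3-tuple of
# (interruption, reaction, comprehension), each rated high (1) or low (0).

IRC_MODELS = {
    (0, 0, 1): "Ambient Media",
    (0, 1, 0): "Indicator",
    (1, 1, 1): "Critical Activity Monitor",
    # ...the remaining five corners of the cube carry their own labels.
}

def describe(interruption, reaction, comprehension):
    """Return the named model for a given corner of the IRC space."""
    key = (interruption, reaction, comprehension)
    return IRC_MODELS.get(key, "unlabelled corner of the IRC design space")

print(describe(0, 0, 1))   # -> Ambient Media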
McCrickard also classifies a set of 14 extant systems in the design space on the three dimensions.
Both of these taxonomies deal thoroughly with interruption and detail some of the criteria for categorizing systems along this design dimension. We extend this analysis to other dimensions of data representation, flexibility, and aesthetics. This more holistic view points out design trade-offs between aesthetic emphasis and flexibility, and between a system's information display style and display capacity.
Mankoff et al proposed a set of heuristics for evaluating ambient systems [20], which may also assist system builders. The heuristics attempt to give guidance for the formative evaluation of ambient systems, but they also can be viewed as high-level design guidelines, such as "The display should be designed to give 'just enough' information. Too much information cramps the display, and too little makes the display less useful."
DESIGN DIMENSIONS OF AMBIENT SYSTEMS
Designers of ambient information systems make decisions about how much information to display, what specific aspects to depict, and how exactly to display it, transparently or abstractly, on a monitor or via a decorative sculpture. We present four design dimensions that capture the space of ambient information systems. The dimensions can be thought of as design choices or design questions that system builders must answer. The dimensions are:
information capacity
notification level
representational fidelity
aesthetic emphasis
We rank 19 research systems and three consumer ambient information systems on each of the four axes. Each axis is divided into 5 bands, from low to high. We place systems into groups based on information from published conference and journal proceedings, including images and videos of systems in use if available. The 19 systems we chose are not intended to be an exhaustive list of all ambient information systems in the research literature. The 19 systems are representative of the breadth of the field and we feel that attempting an exhaustive list, while amplifying completeness, would not significantly alter the design dimensions.
Research systems that we analyzed include: Bus Mobile [24], Dangling String [35], Digital Family Portrait [26], InfoCanvas [33], Informative Art [30], Information Percolator [16], Irwin [22], Kandinsky [11], Kimura [19], Lumitouch [5], Notification Collage [14], Scope [34], Sideshow [7], Table Fountain [12], Water Lamp [8], and What's Happening [36]. We include three consumer systems that fit our definition of ambient information systems: Ambient Devices' Ambient Orb [2], the My Yahoo! web portal [27] and Apple's Dashboard [3].
Figure 1 shows the four dimensions for our analysis, and each of the 19 systems placed into a group along each. Thin colored lines trace the rankings of systems on each axis, similar to a parallel coordinates plot. Each axis has values that range from low to high through five grades. The dimensions of notification level and representational fidelity have more descriptive axis labels that will be explained in detail below.
4.1 Information Capacity
Ambient information systems are created to convey information to users--information that typically is important to a user's sense of wellbeing and general awareness, but not critical to their work or personal life. Information capacity represents the number of discrete information sources that a system can represent.
Some systems are capable of displaying a single piece of data, such as the current price of a stock index. Others can display the value of 20 (or more) different information elements on one screen. We rank systems from "Low" to "High" on this design dimension.
Information elements are discrete information "nuggets". For example, if a system monitors campus shuttle buses, each bus is a single nugget. If the system can represent both the time to a location and a direction of travel, then there are two nuggets of information for each bus that is monitored.
Information capacity makes visible the design trade-off between space and time. A designer can increase the information capacity of a display by increasing the space for information to be presented or by creating a display that transitions through a set of views over time. If a system is designed with multiple views or uses scrolling, we rank it in the top tier, since the number of pieces of information that it could display is arbitrarily large.
A further caveat about information capacity is necessary. Some of the analyzed systems, such as InfoCanvas, Sideshow, and Dashboard, are user-configured and user-customizable. This means that these and other systems could potentially be made to display hundreds of elements. Instead of attempting to calculate a theoretical maximum throughput for the display in these cases, we use the system designer's naturalistic portrayal in their published work to determine the "everyday maximum." Each of these systems is also in the top tier of information capacity.
The design dimension of information capacity has a barbell distribution. Five of the 19 systems display a single information element and are ranked "Low". Conversely, there are eight systems that display from ten to 20 information elements, with some systems having the potential to display more, and these are ranked "High." Only a few systems take a middle-ground approach, attempting to display a small number (from two to ten) of information elements.
The systems with low ratings on the attribute of information conveyance are those that are physical displays. Fountains, glowing lights, and office-decoration sculptures afford designers only so much flexibility for changes. Since the number of changes possible is small, the total number of information nuggets that can be represented is correspondingly small. The systems with high information conveyance are those that are presented on LCD screens. The systems that run at full screen (instead of as a small section of a focused main monitor) are ranked the highest.
Figure 1: Parallel Coordinate plot of 19 existing ambient information systems across four design dimensions. Colored lines trace each system's ranking along the design dimensions. Different colors are used to denote groups of systems which are similar, as explained more fully in Section 5.
4.2 Notification Level
Notification level is the degree to which system alerts are meant to interrupt a user. Notification level is a design attribute that is present in the two taxonomies of ambient and peripheral information systems we reviewed earlier. Matthews et al subdivide notification level into five categories: ignore, change blind, make aware, interrupt, and demand attention. For our analysis we adopt those categories but replace the lowest level of system alert function, ignore (a degenerate case), with user poll. Systems such as Apple Dashboard and My Yahoo!
do not\nalways appear in a user's environment and must be explicitly\ncalled to the fore.\nNotification level can be thought of as the \"ambience\" of\nthe systems in question. Some systems in the ambient space are\nquiet, and afford opportunistic glances to the information, while\nothers provide more strident alerts by blinking, flashing,\nbeeping, or even opening dialog windows. Systems that provide\nunobtrusive change blind or make aware notifications to the user\nare at the core of the ambient information system design space.\nSystems that interrupt users with alarms or that demand\nattention (by launching system dialog windows) are not subtle,\nso are further from the core concept of ambient information\nsystems, though, as Matthews et al argues, the smooth transition\nfrom more subtle to more jarring is an interesting design\ndirection for ambient system designers.\nNotification level is the designer-intended level of alert.\nWe do not take pains to distinguish between systems that are\nproven to be \"change blind\" through user experimentation\nversus those that merely claim change blindness. We remain\nagnostic here about the techniques used for ensuring subtlety\nincluding slow animation, scrolling, and fading (these\nimplementation details are at a lower level of design rationale).\nOnce the decision has been made to produce a system with\nchange blind transitions, the designer must then produce system\ntransitions that meet the goal in the specifics of the system. Our\nanalysis focuses on the high level decision on the part of the\ndesigner or design team.\nThe distribution of systems here shows a good fit to our\ndefinition of ambient information systems. It is apparent that\nmost ambient information systems adhere to the central notion\nof subtle visual or representational changes. The vast majority of\nambient information systems fall into the change blind and make\naware transition categories (somewhat low and medium). Few\nsystems are designed to interrupt users or demand attention.\n70\n\n\nTwo that do however are Scope and Sideshow. Note that most\nsystems that are physical displays do not have make-aware or\ninterruption-level alerts, much less demand attention alerts. The\nBus Mobile does enable make-aware transitions, when, for\nexample, the last bus of the day approaches.\n4.3 Representational Fidelity\nRepresentational fidelity describes a system's display\ncomponents and how the data from the world is encoded into\npatterns, pictures, words, or sounds. Some systems reproduce\nthe information being monitored in a very direct way, while\nothers are much more abstract in their representation. Matthews\net al's taxonomy characterizes this design choice as abstraction,\nbut only distinguishes two sub-types, feature degradation and\nfeature abstraction. We consider this design dimension to be rich\nand complex, so we will try to tease apart the many different\ntypes of abstraction that appear in ambient information systems.\nRepresentational fidelity can be described in the language\nof Semiotics, the branch of Philosophy that deals with signs, sign\nsystems (such as natural languages) and their meanings. As such\nit has an accepted vocabulary for the elements of a symbolic\nrepresentation. Semiotics can help analyze the way that\nparticular signifiers--words, pictures, sounds, and other\nthings--stand for the things they represent.\nA semiotic sign is made up of three parts [28]. 
The object is called the signified; it is the physical thing or idea that the sign stands for. The signifier is the representation of the object, which could be a word, a picture, or a sound. The sense is the understanding that an observer gets from seeing or experiencing either the signified or its signifier. The signifier and the signified need not have any direct relationship. However, both the signified and the signifier create the same sense in the head of an observer; seeing a log aflame and seeing the word "fire" create the same meaning for a person.

Ambient information systems, in the vocabulary of semiotics, contain one or more signs. Each sign has its object, information in the world, and its representation, the lights, pictures, or sounds used to signify that information. Many ambient information systems contain multiple signs--each picture element standing for a different piece of information.

The theory of Semiotics also helps to explain the notion that some signs are transparent, easily understood, while others are metaphorical and still others are abstract. Signs can be symbolic, iconic, or indexical. Symbolic signs are those that are completely arbitrary. For example, languages are arbitrary: the word "bachelor" has no more natural relation to an unmarried man than does the word "foobar." Symbolic signs are those signs for which a code, or rule-following convention, is required for understanding. Language characters and numbers are all symbolic, as are abstract visual representations (the color red standing for "danger"). Iconic signs are those signs that have an intermediate degree of transparency to the signified object. Iconic signs include metaphors as well as doodles, drawings, and caricatures. Icons represent their objects by having some similarity or resemblance to the object or to an essential aspect of the object. Indexical signs are those that are directly connected to the signified. Examples include measuring instruments, maps, and photographs.

We have subdivided the three main categories of representational fidelity to distinguish between ambient information systems. We propose five groups, ranked from indexical (high) to symbolic (low):
- INDEXICAL: measuring instruments, maps, photographs
- ICONIC: drawings, doodles, caricatures
- ICONIC: metaphors
- SYMBOLIC: language symbols (letters and numbers)
- SYMBOLIC: abstract symbols

Some ambient information systems have displays that do not afford representational flexibility, because of the constraints of the display. For example, the LiveWire system and the Ambient Orb cannot represent language symbols, nor can they convey indexical forms like photographs. However, some flexibility is present. The systems might map information in an arbitrary way, remaining fully abstract (representing stock increases with the color green and losses with the color red), or they could map information more metaphorically, as would be the case if LiveWire were connected to information from a seismograph or ocean tides. As one can see, the question concerning representational flexibility requires one to consider both the display and the information that is displayed.

The InfoCanvas is a very flexible system when considering representational fidelity. The InfoCanvas uses all five types of representational fidelity.
It uses abstract symbols (such as the color red standing for traffic being stopped), metaphors (like a cartoon drawing of a cloud representing cloudy conditions), and also photographs and words of news stories, which are fully indexical. We show this ability for a system to straddle multiple representational forms by duplicating the system in each category and noting them with an asterisk (see Figure 1). Systems which are designed to represent information at multiple levels of fidelity are: Apple's Dashboard, InfoCanvas, Informative Art, Notification Collage, Sideshow, and What's Happening. In these cases, we draw the parallel coordinate plot to the top-most tier of representational fidelity for each system.

The majority of systems, however, afford only a single level of representational fidelity. Many of the sculptural displays afford only symbolic, that is abstract, representations, while a smaller number afford text and photographic representations.

4.4 Aesthetic Emphasis
The final dimension concerns the relative importance of the aesthetics of the display. Some system designers seek to build displays and artifacts with sculptural or artistic conventions. For these systems, being visually pleasing is a primary objective. Others, however, place relatively little focus on aesthetics and typically focus more on information communication ability. Since aesthetic judgment is at its core a subjective phenomenon, we do not judge systems on their relative artistic merits. Instead we attempt to rank ambient information systems by our perception of the importance given to aesthetics. There is often a tradeoff made between communication capacity, representational fidelity, and aesthetics, a relationship that we explore in this section.

Ambient information systems are intended to be visible; positioned on a shelf, hung on the wall, or placed as a small sculpture on a desk, the systems are seen not just by a user, but also by co-workers, colleagues, or family members. There are a multitude of approaches when it comes to building aesthetically pleasing devices. One approach is to build systems that mirror existing artworks by a particular artist, as is the case in Kandinsky and Informative Art. A second approach is to design a display that is representative of a particular style or art movement. InfoCanvas, through its use of themes, allows the display to take on characteristics of Asian water-color paintings, for example.

We rank systems on the design dimension of aesthetic emphasis as low, somewhat low, medium, somewhat high, and high. Note again that we are not assessing the degree to which the systems are successful as art. We are providing a subjective measure of how much the system designers focused on aesthetics and how much they emphasized aesthetic considerations in their research and design decisions.

Most systems that we analyzed had medium or somewhat high degrees of aesthetic emphasis (12 of 19). The decisions of designers to strive for visually pleasing displays are most clear in the cases where the display is intended to leverage the work of existing artists. The physical ambient information displays are often sculptural in their design decisions. They attempt to set themselves off from the rest of the environment, often on pedestals or stands. Their capability to display much information (information capacity) is often limited by their design clarity and austerity.
We consider this design trade-off in the next section.

Systems that we ranked at the middle of the spectrum of aesthetic emphasis are those which are not intended by their designers to be art worthy of contemplation as art objects, but which are explicitly intended to be viewed as calm, pleasing objects and displays. Apple's Dashboard widgets have a clean design sense about them, as do Kimura, What's Happening, and the Information Percolator. The systems that are ranked low on aesthetic emphasis are Scope, Sideshow, Bus Mobile, Elvin, and My Yahoo!. These systems put information conveyance at a higher priority than being aesthetically pleasing. They are still calm and environmentally appropriate, but their designers did not emphasize their aesthetic qualities. Clearly, some systems that are early-stage prototypes, like Bus Mobile, may not have the aesthetic polish of more finished systems.

FOUR DESIGN PATTERNS
In this section, we introduce four design patterns for ambient information systems, after Alexander's pattern language for architectural studies [1]. The design patterns illustrate four coherent combinations of the four design dimensions previously presented. We have already pointed out trends and clusters that are present in each particular design dimension. However, there are fruitful conclusions for system designers as we consider the interaction between the design dimensions to form design patterns.

Considering the clusters of systems in each dimension and the correspondences that are visible in the parallel coordinate plot, we find four main archetypes in existing ambient information system design: Symbolic Sculptural Display, Multiple-Information Consolidators, Information Monitor Display, and High Throughput Textual Display. Figure 2 shows the pattern of each archetype across the dimensions.

Figure 2 (a-d): System design archetypes shown in the context of the design space. Heavy boxes indicate core design decisions, while light boxes show alternate choices.

Symbolic Sculptural Displays are ambient information systems that display very few pieces of information, usually a single element. They represent information in an abstract sculptural way with light, water, or moving objects. They are intended to be decorative objects for a home or office setting and as such are highly aesthetic in their design (see Figure 2a). This design pattern is a core of ambient system design, and accounts for seven of our analyzed systems: Ambient Orb, Dangling String, Digital Family Portrait, Information Percolator, Lumitouch, Table Fountain, and Water Lamp. The Digital Family Portrait combines multiple information sources and so truly represents more information than the other members of this type.

Multiple Information Consolidators are ambient systems that display many individual pieces of information in a consolidated manner. They are typically screen-based in order to convey much information and make users aware of changes to that information (usually by blinking the visual representation of a certain element). They are reasonably aesthetically motivated, but all clearly demonstrate the trade-off between aesthetics and customization and information capacity (see Figure 2b).
Systems which illustrate this design pattern are Kandinsky, Kimura, InfoCanvas, Notification Collage, and What's Happening. Kandinsky departs from the other systems in that it is explicitly modeled on the fine art of Kandinsky, and as such is highly stylized and design-focused. It does so at the expense of flexibility, since it can only display photographs in its slots.

Information Monitor Displays are displays that are a peripheral part of a user's computer desktop. As such, they afford different interactions and design choices. They display multiple sources of information, and do so usually by visual metaphors. They are capable of notifying users in multiple ways about changes in the source data, including subtle awareness, interrupting, and even demanding user attention when necessary (i.e., requiring the user to switch focus to dismiss a notification). These systems attend to aesthetics, but their primary purpose is not good looks (see Figure 2c). Examples of this design archetype include Scope and Sideshow.

High Throughput Textual Display systems are those that use text and very simple graphics (icons) to denote information. They are capable of representing voluminous information, but do not draw attention with interruption-level notifications. These systems are not as concerned with aesthetics as they are with information conveyance (see Figure 2d). These systems are simple but efficient for certain types of tasks. Examples of this design archetype are Elvin and My Yahoo!.

The four design archetypes cover nearly all of the analyzed systems, but do not cleanly categorize three systems. Apple's Dashboard system is most similar to a Multiple Information Consolidator. It is not a pure example of this archetype because of its inability to alert users to changes in information; it requires users to poll the system by calling up the transparent pane via a hot key. The Bus Mobile is an early-stage prototype, and as such is not concerned with aesthetics to a large degree. With a higher degree of aesthetic emphasis, it might be closer to an Information Monitor Display (albeit a physical rather than a screen-based system). Informative Art is quite unlike the four design archetypes. Informative Art has high aesthetic emphasis, but low information capacity (e.g., weather forecast information for five or six cities). It is metaphorical and abstract in its information mapping fidelity.

EXTENDING THE PATTERNS
The four patterns for system design can help designers to make appropriate choices as they develop new ambient information systems. The design patterns can be used as models, so a designer can decide to build "an information monitor display for a home health awareness application", or "a set of symbolic sculptural displays for work-group collaboration". Further, the designer may depart from the pattern, by building up a system's range of possible notification levels, or by choosing to trade aesthetics for increased information capacity.

However, our analysis also points to what has not yet been explored. The four design patterns show four coherent combinations, but they are not the only possibilities for building useful ambient systems. Combined with longer-term trends in the fields of Ambient Intelligence and Ubiquitous Computing, new archetypes for system design are emerging.
We note possibilities here, which change both the dimensions and the four design patterns.

We do not expect the information capacity of ambient systems to increase dramatically. Though scrolling or time-divided ambient systems (What's Happening, Elvin) can already display data elements numbering in the hundreds, simultaneous visual displays are usually limited to 25 or 30 elements by readability and user learnability. Ambient information systems will not turn into information visualization systems showing thousands of data points. However, contextual sets of information may be useful for ambient systems in specialized environments. Systems which display contextual sets of information like that of the Bus Mobile (all of the buses on a college campus) or Scope (email and calendar data) would increase the number of systems in the middle portion of this design dimension.

We also expect to see changes to the design dimension of representational flexibility. Designers have begun to explore the affordances of abstract and symbolic mappings between information sources and their representations. We see this continuing, with new systems focusing on personally relevant symbolic representations, and utilizing metaphors from the natural and built worlds. Another shift that we foresee is designers creating systems where multiple information sources and aspects interact to affect a single part of the representation. This is already apparent in Digital Family Portrait, where the size of the butterflies represents "activity," even though activity is not the reading from a single sensor, but is instead a reading from multiple sensors in a home. Informative Art also has aspects of this approach, changing both the color and dimensions of squares based on two different aspects of weather.

As regards aesthetic emphasis, we foresee a more radical change. We predict further exploration of the space of truly artistically motivated ambient information systems. These generative artworks use information from the world to drive their behavior and ask (and answer) art questions as well as technology questions. Though most of these works are outside the academy (they are shown in galleries instead of computer science conferences), Bohlen and Mateas' Office Plant #1 [4] is a sculpture that characterizes the mood of a user's email stream and conveys it via transformations of a robotic plant. These systems are going to create a new design space above the top tier that we depict in this work.

CONCLUSIONS
In this work we synthesize a definition that distinguishes research in ambient information systems from that of notification systems and peripheral displays. We propose four design dimensions, rank systems to show clusters, and uncover four design patterns on which system developers may model their system designs. Future work will expand the four dimensions to include aspects of the social interaction and impact that systems have on the behavior of individuals and groups.

In this work we point toward open areas in the design space, and we point to new design directions that may fill these gaps. Future work may also turn this taxonomy into an evaluation framework for ambient information systems.

REFERENCES
1. Alexander, C. A Pattern Language: Towns, Buildings, Construction. Oxford University Press, 1977.
2. Ambient Orb. http://www.ambientdevices.com/
3. Apple Mac OS X Dashboard.
http://www.apple.com/\nmacosx/features/dashboard/index.htm\n\n4.\n\nBohlen, M., and Mateas, M. Office Plant #1. Leonardo 31:5. pp.\n345-349.\n\n5.\n\nChang, A., Resner, B., Koerner B., Wang, X and Ishii, H.,\n\nLumitouch: An emotional communication device. Extended\nAbstracts of CHI 2001, pp. 371-372.\n6.\n\nCadiz, J\n.,\n\nFussell, S., Kraut, R., Lerch, J., and Scherlis, W.\nThe\nAwareness\nMonitor:\nA\nCoordination\nTool\nfor\nAsynchronous, Distributed Work Teams. Unpublished\nmanuscript. Demonstrated at CSCW 1998.\n7.\n\nCadiz, J., Venolia, G., Janke, G., ans Gupta, A. Designing\nand deploying an information awareness interface.\nProceedings of CSCW 2002, pp. 314 - 323.\n8.\n\nDahley, A., Wisneski, C., and Ishii, H. Water lamp and\npinwheels: Ambient projection of digital information into\narchitectural space. CHI Conference Summary 1998, pp.\n269270.\n9.\n\nEspinosa, A., Cadiz, J., Rico-Gutierrez L., Kraut, R., Sherlis,\nW., and Lautenbacher, G. Coming to the Wrong Decision\nQuickly: Why Awareness Tools Must be Matched with\nAppropriate Tasks. Proceedings of CHI 2000, pp. 392-399.\n10.\n\nFitzpatrick, G., Kaplan, S., Arnold, D., Phelps, T., and\nSegall, B. Augmenting the Workaday World with Elvin.\nProceedings of ECSCW 1999, pp. 431-450.\n11.\n\nFogarty, J., Forlizzi, J., and Hudson, S. Aesthetic\nInformation Collages: Generating Decorative Displays that\nContain Information. Proceedings of the UIST 2001,\npp. 141-150.\n12.\n\nGellersen, H.-W., Schmidt, A. and Beigl. M. Ambient Media\nfor Peripheral Information Display. Personal Technologies\n3, 4 : 199-208. 1999.\n13.\n\nGreenberg, S., and Fitchett, C. Phidgets: Easy development\nof physical interfaces through physical widgets. Proceedings\nof UIST 2001. pp 209-218.\n14.\n\nGreenberg, S., and Rounding, M. The Notification Collage:\nPosting Information to Public and Personal Displays.\nProceedings of CHI 2001, pp. 515-521.\n15.\n\nDe Guzman, E., Yau M, Park, A., and Gagliano, A.\nExploring the Design and Use of Peripheral Displays of\nAwareness Information. Extended Abstracts of CHI 2004,\npp. 1247-1250.\n16.\n\nHeiner, J. M., Hudson, S., and Kenichiro, T. The\nInformation Percolator: Ambient information display in a\ndecorative object. In Proc. of UIST 1999, pp. 141-148.\n17.\n\nIshii, H.,Wisenski, C., Brave, S., Dahley, A., Gorbet, M.,\nUllmer, B., and Yarin, P. AmbientROOM: Integrating\nAmbient Media with Architectural Space. Summary of CHI\n1998, pp.173-174.\n18.\n\nIshii, H., Ren, S., and Frei, P. Pinwheels: visualizing\ninformation flow in an architectural space. Extended\nAbstracts of CHI 2001, pp. 111-112.\n19.\n\nMacIntyre, B., Mynatt, E., Voida, S., Hansen, K., Tullio, J.,\nand Corso, G. Support For Multitasking and Background\nAwareness\nUsing\nInteractive\nPeripheral\nDisplays.\nProceedings of UIST 2001, pp. 41-50.\n20.\n\nMankoff, J., Dey, A., Heish, G., Kientz, J., Lederer, S., and\nAmes, M. Heuristic evaluation of ambient displays.\nProceedings of CHI 2003, pp. 169-176.\n21.\n\nMcCrickard, D. S., Chewar, C., Somervell, J., and\nNdiwalana, A. A Model for Notification Systems\nEvaluation--Assessing User Goals for Multitasking\nActivity. ACM Transactions on CHI 10,4 : 312 338. 2002\n22.\n\nMcCrickard, D.S., Catrambone, R., and Stasko, J. Evaluating\nanimation in the periphery as a mechanism for maintaining\nawareness. Proceedings of INTERACT 2001, pp. 148-156.\n23.\n\nMatthews, T., Dey, A.., Mankoff, J., Carter S., and\nRattenbury, T. A Toolkit for Managing User Attention in\nPeripheral Displays. 
Proceedings of UIST 2004, pp. 247-256\n.\n24.\n\nMatthews ,T., Rattenbury, T., Carter, S., Dey, A., and\nMankoff, J. A Peripheral Display Toolkit. Tech Report IRB-TR\n-03-018. Intel Research Berkeley. 2002.\n25.\n\nMynatt, E.D., Back, M., Want, R., and Ellis, J.B. Designing\naudio aura. Proceedings of CHI 1998, pp. 566-573.\n26.\n\nMynatt, E.D., Rowan, J., Jacobs, A., and Craighill, S. Digital\nFamily Portraits: Supporting Peace of Mind for Extended\nFamily Members. Proceedings of CHI 2001, pp. 333-340.\n27.\n\nMy Yahoo!. http://my.yahoo.com/index.html\n28.\n\nOgden, C., and Richards I. The Meaning of Meaning.\nRoutledge & Kegan. London, England. 1923.\n29.\n\nPederson, E. R., and Sokoler, T. AROMA: Abstract\nRepresentation of Presence Supporting Mutual Awareness.\nProceedings of CHI 1997, pp.51-58.\n30.\n\nRedstrom, J., Skog, T., and Hallanas, L. Informative Art:\nUsing Amplified Artworks as Information Displays.\nProceedings of DARE 2000, pp. 103-114.\n31.\n\nRussel, D., Streitz, N., and Winograd, T. Building\nDisappearing Computers. Communications of the ACM.\n48(3):42-48. 2005.\n32.\n\nStasko, J., Miller, T., Pousman Z., Plaue, C., and Ullah, O.\nPersonalized Peripheral Information Awareness through\nInformation Art. Proceedings of UbiComp 2004, pp. 18-35.\n33.\n\nStreitz, N., Tandler, P., Muller-Tomfelde, C., and Konomi,\nS. Roomware: Towards the Next Generation of Human-Computer\nInteraction based on an Integrated Design of Real\nand Virtual Worlds. In: J. Carroll (Ed.): Human-Computer\nInteraction in the New Millennium, Addison-Wesley. pp.\n553-578. 2001.\n34.\n\nVan Dantzich, M., Robbins, D., Horvitz, E., and Czerwinski,\nM. Scope: Providing Awareness of Multiple Notifications at\na Glance. Proceedings of AVI 2002. pp. 157-166.\n35.\n\nWeiser, M. and Brown, J.S. Designing Calm Technology.\nPowerGrid Journal, 1:1, 1996.\n36.\n\nZhao, A., and Stasko, J. What's Happening?: Promoting\nCommunity Awareness through Opportunistic, Peripheral\nInterfaces. Proceedings of AVI 2002, pp. 69-74.\n\n74", "keywords": "Peripheral Display;four main design patterns;calm computing;symbolic Sculptural display;high throughput textual display;Notification System;information monitor display;ambient information system;Taxonomy;framework to understand design attributes;user interface;notification systems and peripheral displays;Design Guidelines;multiple-information consolidators;Ambient Display;definition and characteristics of ambient systems;Ubiquitous Computing"} {"name": "210", "title": "Using Web Helper Agent Profiles in Query Generation", "abstract": "ABSTRACT Personalized information agents can help overcome some of the limitations of communal Web information sources such as portals and search engines. Two important components of these agents are: user profiles and information filtering or gathering services. Ideally, these components can be sep-arated so that a single user profile can be leveraged for a variety of information services. Toward that end, we are building an information agent called SurfAgent;in previous studies, we have developed and tested methods for automatically learning a user profile [20]. In this paper, we evaluate alternative methods for recommending new documents to a user by generating queries from the user profile and submitting them to a popular search engine. Our study focuses on three questions: How do different algorithms for query generation perform relative to each other? Is positive relevance feedback adequate to support the task? 
Can a user profile be learned independent of the service? We found that three algorithms appear to excel and that using only positive feedback does degrade the results somewhat. We conclude with the results of a pilot user study for assessing interaction of the profile and the query generation mechanisms.", "fulltext": "INTRODUCTION\nThe Web has become an indispensable source of information\nfor many people. Based on surveys of the most popular\nWeb sites [14], users deal with the overwhelming amount and\nconstantly updating nature of the information by routinely\nvisiting hub sites (e.g., Netscape, Yahoo, CNN) and making\ncopious use of search engines (e.g., AltaVista, Excite, Magellan\n). Users have derived tremendous leverage from shared\ninformation resources such as those just mentioned. Hubs or\nportals provide communally useful information about perennial\n(e.g., financial management, child rearing) and timely\n(e.g., September 11 events, stock quote) topics. Search engines\nsatisfy specific, spontaneous information needs.\nAs our appetite for information increases, so does its availability\non the Web. Studies (e.g., [21, 12]) have identified\nlimitations with these tools for satisfying users' needs;for\nexample, users appear to lack the motivation to learn how to\nformulate complex queries or to view long lists of potential\nmatches. Meta-search engines, such as Meta-Crawler [18],\nSavvySearch [6], and NECI [11], propose to overcome the\nWeb coverage problem by combining the indexing power of\nmultiple stand-alone search engines. However, because they\nleverage the capabilities of many search engines, they tend\nto generalize the search task: limiting the access to search-engine\n-specific advanced search capabilities and, perhaps,\nintroducing even more noise into the return results.\nOne promising approach to compensating for the limitations\nis to personalize the tools. Pretschner and Gauch\ndivide personalization into two types: personalized access\nto resources and filtering/ranking [15].\nFor example, My\nYahoo (http://my.yahoo.com) provides personalized access\nby allowing users to design their own Yahoo page with pertinent\ninformation;many search and meta-search engines\nsupport some customization (e.g., types of search, return\namount, and search engine selection). \"Softbot\"s or information\nagents augment searching and information gathering\n(filtering/ranking). Personalized information agents, such\nas Letizia [13], WebWatcher [1, 10], and WebMate [5], can\nprovide a range of services from automatically retrieving\nWeb pages to assisting in formulating queries.\nThese agents generally comply with the architecture presented\nin Figure 1. The agent intercedes between the user\nand their Web access, monitoring the user's activities to\nconstruct a model of user requests (the profile) to be used\nfor specific information services (e.g., modifying requests\nand/or filtering results). In our research, we adopt the principle\nthat user overhead needs to be minimized:\nThe profile should be learned by asking the user to\nsimply click on a feedback button positioned on the\nbottom of each page to indicate interest.\nLearning should track changes in user interests.\nThe profile should support multiple information services\n.\nIn previous papers, we have assessed some alternative approaches\nto learning user profiles [20, 19]. 
In this paper,\nwe examine alternative approaches to one of the services:\nquery generation to find new documents (i.e., automatically\nretrieving Web pages that match a user's interests by submitting\nqueries to a Web search engine). In particular, we\nare interested in answering the following questions:\n1. How do different algorithms for query generation perform\nrelative to each other? For our case, query generation\ninvolves constructing queries from a user profile\nthat are submitted to a search engine for the purpose\nof harvesting documents.\n2. Is positive relevance feedback adequate to support the\ntask?\nTo minimize user overhead, we have solicited\nonly positive relevance feedback. Obviously, this provides\nrelatively weak knowledge, requiring the profiling\nmechanism to self-organize the categories of interest\nand to trade-off precision.\n3. Can a user profile be learned independent of the service\n? If so, then user overhead can be minimized and\nmultiple services provided as part of the same agent.\nThis paper describes a framework for evaluating alternative\nalgorithms for information gathering agents and a study that\nwas designed to address the three questions above. In summary\n, we found: Three algorithms perform best in terms of\nextracting sufficient numbers of documents from the search\nengine and in terms of the relevance of the links returned.\nWe did find evidence that soliciting only positive feedback\nhampers query generation;however, it is not clear that the\ndegradation in performance is worth the cost of obtaining\nthe negative feedback. As often happens, the study raised\nsome issues that are still to be resolved (particularly about\nthe evaluation criteria and the interaction of profiling and\nquery generation);we conclude with a pilot study in which\nwe investigate how to resolve these issues.\nSURFAGENT\nSurfAgent [19] is a personalized Web information agent,\nwhich follows the basic architecture in Figure 1. It is designed\nas a testbed for expediting plug-and-play and evaluation\nof alternative algorithms, front-ends, and service tasks.\nIts two primary components are the user profile and the\nmodule which generates requests for document streams. Monitoring\nshould be simple and unobtrusive. Filtering depends\non the representation and construction of the user profile,\nforcing a relatively tight coupling of those two components.\nThis section provides an overview of its user profiling and\ndocument stream generation.\n2.1\nBuilding User Profiles and Filtering\nThe user profile maintained by a Web helper agent is a\nmodel of what documents the user finds relevant. Like most\nother personal Web helper agents, SurfAgent uses TF-IDF\nvectors [17] as the basis of its user profile representation.\nOne such vector is used to represent each of the several different\ntopics of interest associated with each user.\nOver time, topic descriptions are learned from user-supplied\nexamples of relevant documents, which are added to the existing\nTF-IDF vectors in what is known as relevance feedback\n[16]. Associated with each vector is a dissemination threshold\n, which is used when new documents are filtered by the\nagent: if the similarity between the new document's vector\nand a vector in the profile exceeds the associated dissemination\nthreshold, the document is considered relevant and\nshown to the user. 
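To make the threshold test concrete, here is a minimal sketch of how such filtering could be implemented. It is only an illustration under our own assumptions (sparse term-to-weight dictionaries, cosine similarity, and the FilteringQuery/disseminate names are ours), not SurfAgent's actual code.

```python
import math

class FilteringQuery:
    """One profile topic: a TF-IDF vector plus a dissemination threshold."""
    def __init__(self, vector, threshold):
        self.vector = vector        # dict: term -> TF-IDF weight
        self.threshold = threshold  # learned dissemination threshold

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def disseminate(doc_vector, profile):
    """Return the profile topics whose threshold the document clears."""
    return [q for q in profile if cosine(doc_vector, q.vector) >= q.threshold]
```

In this sketch, a document would be shown to the user whenever it clears the threshold of at least one topic in the profile.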
We found that learning the appropriate dissemination threshold was critical to filtering performance and that one could be learned with relatively little feedback (i.e., 10 relevance judgments) [19].

TF-IDF vectors and their associated dissemination thresholds are known in the Information Retrieval (IR) literature as filtering queries. This type of query is distinguished from a typical retrieval query (used with search engines or at a library) by a few characteristics. Filtering queries tend to be used repeatedly over a long period of time, during which they can be improved and maintained through learning and relevance feedback, whereas retrieval queries are typically used only once. Also, filtering queries typically contain many terms with complex weighting schemes, whereas retrieval queries tend to be a boolean combination of relatively few terms, with no weighting at all.

Each filtering query in SurfAgent's user profile corresponds to a distinct topic of interest. User profiles are learned in one of two ways. First, relevant documents provided as training by the user can be explicitly associated with the topic of interest. Alternatively, to minimize overhead to the user, incremental clustering can be used by the agent to automatically classify relevant examples into internally learned topics [20, 5]. In the latter situation, the user only needs to prompt the agent when a relevant document has been encountered, without having to associate it with a particular topic of interest. To minimize user disruption, we request only positive examples. We augmented existing IR clustering techniques to accommodate Web needs (i.e., avoid storing the documents themselves, require minimal user overhead, and be associated with a user). In our earlier study, we found that a tuned version of the Doubling algorithm [4] achieved high recall, without a great sacrifice on precision.

2.2 Incoming Document Streams
Personal information agents use a wide range of techniques to generate incoming streams. Letizia pre-fetches Web pages by exploring the links in the Web page currently being viewed. Similarly, WebWatcher analyzes text in and around links in the current page to predict relevant links worth following. Fab builds a list of likely-to-be-relevant documents through best-first search; documents that pass the filtering phase are then included in a list of recommended links. Finally, WebMate filters articles found on a list of well-known news sources in order to compile a personal newspaper for its user.

Our goals are to maximize the quality of the incoming document stream generated for SurfAgent, while at the same time minimizing effort. For this purpose, a promising technique appears to be the construction of queries that are suitable for a large-scale search engine such as Google [3]. Well-formulated queries have the potential to off-load significant portions of the filtering task to the search engine, which should provide the agent with a document stream consisting of more relevant documents.

In this paper, we explore several methods of generating search engine queries from the user profile of a personal Web helper agent. We wish to find both the method and the query length that would optimize the relevance of the documents returned by the search engine with respect to the user profile; a minimal sketch of this query-and-filter pipeline follows.
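The sketch below is a hypothetical illustration of that pipeline: take the highest-weighted profile terms as a query, submit it, and keep only hits that pass the profile filter. The search(query) and passes_filter(url) callables are placeholders for a search-engine client and for the threshold test sketched earlier; they are assumptions of this example, not part of SurfAgent.

```python
def extract_query(profile_vector, length):
    """Build a short retrieval query from the `length` highest-weighted
    terms of a TF-IDF profile vector (a dict mapping term -> weight)."""
    ranked = sorted(profile_vector.items(), key=lambda kv: kv[1], reverse=True)
    return " ".join(term for term, _ in ranked[:length])

def gather(profile_vector, search, passes_filter, length=4):
    """Query a search engine with profile terms and keep only the hits
    that pass the profile filter.  Both callables are stand-ins supplied
    by the caller in this sketch."""
    query = extract_query(profile_vector, length)
    return [url for url in search(query) if passes_filter(url)]
```

Selecting the top-weighted TF-IDF terms is only one of the generation methods compared in the sections that follow.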
This process is illustrated in Figure 2.

TECHNIQUES FOR QUERY GENERATION
Filtering queries are not directly suitable for being submitted to a search engine. They are complex models representing a possibly large collection of documents; they contain a large number of terms with associated weights, which would overly restrict the range of documents a search engine might return.

Query generation techniques have evolved from the more general query refinement mechanism of relevance feedback [16]. For instance, in [9, 7] search engine queries are extended with features extracted from positive and negative examples in order to bias them toward a more relevant sub-topic. Several other researchers have been concerned with extracting only a few highly representative terms from the representation of a large document cluster. For WebACE [2], the authors propose a mechanism for generating queries from a cluster of documents based on the average word count (term frequency, TF) of the terms contained in the documents of the cluster, and the document frequency (DF), i.e., the number of documents in the cluster that contain a specified term. A set of k terms with the highest TF, and another set of k terms with the highest DF, are selected from the cluster. The intersection of these two sets was submitted to Yahoo search as a query, yielding a total of 2280 results, which were later refined to 372 by augmenting the query with terms resulting from the difference between the TF and DF sets.

CorpusBuilder [8] uses automatic query generation to collect documents in a specified language (Slovenian) from the Web, with the purpose of building a corpus in that language. The point is to save computation by avoiding a brute-force crawl of the Web in which an expensive classifier would have to be run on each encountered document. By generating queries from the already existing corpus, the authors hope to significantly increase the likelihood that the resulting documents are already in Slovenian, thus speeding up document collection. Several methods for generating queries are used interchangeably:
- uniform: select n terms from the relevant documents with equal probability;
- term-frequency: select the n most frequent terms from the relevant documents;
- probabilistic term-frequency: select n terms from the relevant documents with probability proportional to their term frequency;
- odds-ratio: select the n terms with the highest odds-ratio scores, as given by the formula

    OR_t = log_2 [ P(t|relevant) * (1 - P(t|nonrelevant)) / ( P(t|nonrelevant) * (1 - P(t|relevant)) ) ]

  where t is the term, and P(t|relevant) and P(t|nonrelevant) are the probabilities of t appearing in a relevant and a non-relevant document, respectively;
- probabilistic odds-ratio: select n terms with probability proportional to their odds-ratio score.

The authors report best results with n = 4 and the simple odds-ratio method. However, this method is not necessarily applicable to our task, because identifying relevance with respect to a query cluster is somewhat more subtle than determining whether a returned document is in a particular language such as Slovenian.

OUR STUDY OF QUERY GENERATION
The purpose of this study is to examine the role of the query generation technique for a Web information agent: what techniques work well, how much user overhead is warranted, and how query generation interacts with profiling.
These\nthree factors correspond to the three questions articulated\nin the Introduction.\n4.1\nExperiment Design\nThe basic protocol for our study was as follows:\n1. construct user profiles,\n2. generate queries from those profiles using each of the\nquery generation methods,\n3. submit queries of different lengths to Google,\n4. evaluate the results.\nThis led to a factorial experiment in which the independent\nvariables were query generation method and query length\nand the dependent variables were two evaluation metrics:\nreturn count and relevance.\n814\n4.1.1\nConstructing the profiles\nTo expedite experimental control and data collection, we\nconstructed two user profiles from TREC\n1\ndisk #5. TREC\ndata consist of a large number of articles (over 250,000), of\nwhich a large portion have been judged as either relevant\nor non-relevant with respect to a set of 450 different topics.\nOur first topic is on airport security measures, and was constructed\nbased on 46 relevant documents which appeared in\nthe Los Angeles Times over the course of two years (1989-1990\n);this topic will be referred to as LATIMES. The second\ntopic is on genetic research, and was constructed based on\n55 relevant documents from the Foreign Broadcasting Information\nService appeared in 1996;this topic will be referred\nto as FBIS. One topic was picked randomly from each of the\ntwo document collection on the TREC disk.\nWe used synthetically generated topics in order to test the\nhypothetical scenario in which negative feedback is available.\nBy default, SurfAgent does not collect negative feedback in\nthe form of documents which are non-relevant to a given\ntopic. Thus, we are interested in how much performance\nmight be sacrificed by restricting feedback to only positive\nexamples. 
The number of positive documents used in the construction of each topic (46 and 55, respectively) is realistic compared to what a human user would have been capable of providing while building her profile in real life.

[Footnote 1: Text REtrieval Conference. TREC benchmark disks are publicly available and can be ordered from the conference homepage at http://trec.nist.gov]

4.1.2 Generating Queries
We implemented several methods, so that methods which use such negative examples (e.g., odds-ratio) could be compared against methods which do not (e.g., Boley's method [2] and term-frequency based methods). In addition to the methods mentioned in Section 3, we add two methods: deterministic extraction of the highest-weight terms from SurfAgent's TF-IDF profile vectors, and probabilistic weighted extraction from the TF-IDF profile vectors. The complete list of methods used is given below (a short code sketch of the scoring and selection rules appears at the end of Section 4.1):
- Uniform (Unif): the baseline case; select n terms with uniform probability;
- Boley: select the intersection of the k top-ranking terms according to term frequency in one set, and document frequency in the other;
- TF: select the n top-ranking terms according to term frequency;
- Probabilistic TF (P-TF): select n terms with probability proportional to their term frequency;
- OR: select the n top-ranking terms according to their odds-ratio score;
- Probabilistic OR (P-OR): select n terms with probability proportional to their odds-ratio score;
- TFIDF: select the n terms with the highest TF-IDF weight from the profile vector;
- Probabilistic TF-IDF (P-TFIDF): select n terms with probability proportional to their TF-IDF weights.

The probabilistic versions were included because injection of small amounts of randomness has been found to be helpful in other search problems.

Code from SurfAgent was modified to collect the data required by all query generation methods employed. For each topic, we collected the following data:
- average term frequencies for terms in relevant documents;
- document frequencies for terms in relevant documents;
- the TF-IDF vector built from relevant documents;
- odds-ratio scores for terms in relevant documents (odds-ratio scores are based on both relevant and non-relevant documents related to the topic).

From these data, we generated queries of seven lengths (two to eight terms) for each of the eight methods. For the four probabilistic methods, we generated 10 queries of each length, which means their reported results will be averages over those 10 queries.

For Boley's method, we repeatedly computed the intersection of the sets consisting of the top k ranking terms w.r.t. TF and DF, while incrementing k. We retained all distinct queries of length between 2 and 8. For FBIS, no value of k generated a query of length 6. Similarly, for LATIMES, there was no value of k resulting in a query of length 7.

4.1.3 Submit the Queries
The previous step resulted in 614 queries (307 for each topic). We submitted these queries to the Google search engine and collected back the first page of results. By default, a page can contain up to 10 responses.

4.1.4 Collect the Results
The results of the queries (the URLs returned) were parsed out of the page returned by Google, and their corresponding documents were retrieved from the Web and stored locally. We discarded (and decremented our hit count accordingly) all dead links and all hits that were in a format other than ASCII or PDF, where "ASCII" includes all variants and versions of HTML, XML, etc.: a total of 312 out of 2917 hits were discarded. The PDF documents were converted into ASCII using the pdftotext utility.
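For reference, the sketch below illustrates the scoring and selection rules from Section 4.1.2 over a generic term-score table: the odds-ratio score, deterministic top-n selection (as used by TF, OR, and TFIDF), and probability-proportional sampling (as used by the probabilistic variants). The function names, the smoothing constant, and the assumption of non-negative scores are our own illustrative choices, not SurfAgent's implementation.

```python
import math
import random

def odds_ratio(p_rel, p_nonrel, eps=1e-6):
    """Odds-ratio score for a term, given P(t|relevant) and P(t|nonrelevant);
    eps clamps the probabilities away from 0 and 1 to keep the log finite."""
    p_rel = min(max(p_rel, eps), 1 - eps)
    p_nonrel = min(max(p_nonrel, eps), 1 - eps)
    return math.log2((p_rel * (1 - p_nonrel)) / (p_nonrel * (1 - p_rel)))

def top_n(scores, n):
    """Deterministic selection: the n highest-scoring terms."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [term for term, _ in ranked[:n]]

def sample_n(scores, n, rng=random):
    """Probabilistic selection: n distinct terms drawn with probability
    proportional to their (assumed non-negative) scores."""
    pool, chosen = dict(scores), []
    while pool and len(chosen) < n:
        total = sum(pool.values())
        r, acc = rng.uniform(0, total), 0.0
        for term, s in pool.items():
            acc += s
            if r <= acc:
                chosen.append(term)
                del pool[term]
                break
    return chosen
```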
4.2 Results
For each valid hit, we computed the similarity between the document's TF-IDF vector and the TF-IDF vector of the appropriate topic, which is a measure of the document's relevance. For each combination of query generation method and query length, we recorded the number of hits received, the relevance of the best hit, and the average relevance over all hits received. For the probabilistic methods, these measurements represent average values obtained over the ten repetitions for each combination of parameters. The results are summarized in Table 1 for the FBIS topic, and in Table 2 for the LATIMES topic. The three rows corresponding to each method indicate average relevance (top), maximum relevance, and number of returned hits (bottom).

All methods return at least seven documents with query length 2, but most taper off in the number returned with longer query lengths.

Table 1: Average relevance, maximum relevance, and count of returned hits for the FBIS topic on genetic technology

                         query length
method            2      3      4      5      6      7      8
Unif      avg:  .022   .046   .059   .018    -      -     .011
          max:  .051   .077   .101   .019    -      -     .011
          cnt:   7.7    4.3    3.2    0.4     0      0     0.1
P-TF      avg:  .026   .054   .044   .053   .069   .091   .082
          max:  .059   .096   .079   .102   .120   .192   .138
          cnt:   8.9    7.7    7.9    5.2    6.5    7.2    6.3
P-OR      avg:  .039   .047    -     .019    -      -      -
          max:  .099   .090    -     .019    -      -      -
          cnt:   9.0    3.2     0     0.1     0      0      0
P-TFIDF   avg:  .045   .058   .088   .069   .035   .034   .030
          max:  .100   .110   .178   .097   .054   .035   .055
          cnt:   9.1    6.1    8.4    2.4    2.7    0.6    1.4
Boley     avg:  .053   .077   .090   .081    -     .111   .088
          max:  .120   .112   .136   .168    -     .239   .239
          cnt:    9      9     10      7     0      8      9
TF        avg:  .036   .031   .048   .082   .081   .087   .083
          max:  .065   .059   .129   .134   .103   .130   .135
          cnt:   10      9     10      9     9     10      9
OR        avg:  .123   .186   .102    -      -      -      -
          max:  .155   .361   .190    -      -      -      -
          cnt:    9      8      2     0      0      0      0
TFIDF     avg:  .100   .144   .160   .176   .214   .278   .242
          max:  .377   .377   .377   .279   .399   .404   .242
          cnt:    9     10     10      7    10      4      1

Table 2: Average relevance, maximum relevance, and count of returned hits for the LATIMES topic on airport security

                         query length
method            2      3      4      5      6      7      8
Unif      avg:  .012   .012   .012   .013   .004    -      -
          max:  .024   .024   .028   .019   .006    -      -
          cnt:   8.0    5.3    3.9    1.0    0.6     0      0
P-TF      avg:  .017   .026   .025   .028   .032   .024   .010
          max:  .042   .073   .062   .061   .046   .042   .011
          cnt:   9.1    9.5    6.0    6.5    2.0    4.0    0.7
P-OR      avg:  .017   .018   .016   .011    -     .007    -
          max:  .052   .039   .029   .013    -     .007    -
          cnt:   8.2    8.3    4.0    0.9     0     0.1     0
P-TFIDF   avg:  .026   .036   .064   .063   .059   .020   .010
          max:  .058   .103   .125   .156   .106   .036   .014
          cnt:   9.2    8.1    8.1    5.7    5.3    1.3    0.2
Boley     avg:  .040   .098   .135   .193   .199    -     .167
          max:  .086   .199   .299   .343   .359    -     .299
          cnt:    8      9      8      8     8      0      7
TF        avg:  .107   .058   .030   .048   .041   .069    -
          max:  .222   .093   .051   .075   .069   .069    -
          cnt:    7     10     10      7     6      1      0
OR        avg:  .048   .036   .348    -      -      -      -
          max:  .122   .096   .402    -      -      -      -
          cnt:    9      9      2     0      0      0      0
TFIDF     avg:  .115   .144   .155   .171   .144   .153   .143
          max:  .331   .331   .357   .299   .276   .349   .349
          cnt:    9      7      8      8     9      9      9
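The per-cell entries of Tables 1 and 2 amount to a small aggregation over the scored hits. The sketch below assumes each valid hit has already been reduced to a (method, query_length, relevance) record, with relevance computed as the cosine similarity described above; averaging over the ten repetitions of the probabilistic methods is omitted for brevity.

```python
from collections import defaultdict

def summarize(records):
    """records: iterable of (method, query_length, relevance), one per valid hit.

    Returns {(method, length): (avg_relevance, max_relevance, hit_count)},
    i.e., the three values shown in each table cell."""
    cells = defaultdict(list)
    for method, length, relevance in records:
        cells[(method, length)].append(relevance)
    return {key: (sum(rels) / len(rels), max(rels), len(rels))
            for key, rels in cells.items()}
```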
For the deterministic methods, the relevance increases as the query length increases (until 7 or 8), but the relevance for the probabilistic methods tends to plateau early.

All methods consistently outperform the baseline uniform term selection. Probabilistic methods are outperformed by the non-probabilistic ones, which is consistent with the observations in [8]. The best results for the FBIS topic were obtained using TFIDF at query length 7: a total of 4 hits were returned, with an average relevance of .278 and a maximum relevance of .404. The best results for the LATIMES topic were obtained using OR at query length 4: two hits were returned, with an average relevance of .348 and a maximum relevance of .402.

Query lengths 2 and 3 were the only ones where all methods led to non-empty returns for both topics. To test whether the differences between the methods were significant, we ran an analysis of variance (ANOVA) study on each topic at query lengths 2 and 3, with the query generation method as the independent variable and relevance as the dependent variable (a minimal code sketch of this test appears below). The effects were significant in each case: for FBIS, we obtained F = 14.007, p < 0.001 at query length 2, and F = 8.692, p < 0.001 at query length 3; for LATIMES, we obtained F = 24.027, p < 0.001 at query length 2, and F = 20.277, p < 0.001 at query length 3.

Box plots of relevance by method for query lengths 2 and 3 are given in Figures 3 and 4 for FBIS, and in Figures 5 and 6 for LATIMES. Note that medians rather than means are marked on the graphs. These plots illustrate several points obscured by the previous tables. First, while TFIDF returns the best match and the second best median in all cases, overall better results are obtained using OR for FBIS, and TF and Boley for LATIMES. Second, each method can show high variance in the results, although TFIDF tends generally to higher variance. Additionally, the results for query length 3 have higher variance than those for query length 2. Finally, the distributions are often skewed. For example, Boley consistently has a higher median than mean.

Figure 3: Box plot of relevance by method for FBIS topic at query length 2
Figure 4: Box plot of relevance by method for FBIS topic at query length 3
Figure 5: Box plot of relevance by method for LATIMES topic at query length 2
Figure 6: Box plot of relevance by method for LATIMES topic at query length 3

Because relevance for all returned documents is measured against the TF-IDF vector of the corresponding topic, the experiments are slightly biased in favor of the TFIDF query generation method. Our experiments cannot prove that TFIDF query generation is sufficient, but its good performance, coupled with the inconsistent performance of OR, suggests that we do not incur a significant loss by leaving out negative feedback. Collecting other information based on positive feedback in addition to TF-IDF topic vectors may be required with SurfAgent: e.g., straight TF vectors and topic-specific document frequency information would allow us to use TF and Boley query generation in addition to the TFIDF method.
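For completeness, the one-way ANOVA reported above (relevance as the dependent variable, query generation method as the independent variable, at a fixed topic and query length) can be run with a standard statistics package. The sketch below assumes scipy is available and that the per-method relevance scores have already been collected into lists; it is an illustration of the test, not the analysis script used in the study.

```python
from scipy.stats import f_oneway

def relevance_anova(relevance_by_method):
    """relevance_by_method: dict mapping method name -> list of relevance
    scores for one topic at one query length.
    Returns the F statistic and p-value of a one-way ANOVA."""
    groups = [scores for scores in relevance_by_method.values() if scores]
    f_stat, p_value = f_oneway(*groups)
    return f_stat, p_value
```

Each such call corresponds to one of the four F and p comparisons quoted for query lengths 2 and 3.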
Indeed, as the results show, TF and Boley sometimes perform better than OR and TFIDF.

Both Boley and TFIDF consistently result in many links being returned, even for long query lengths. Many hits are desirable when the agent will be pruning out previously visited sites and filtering the returned links according to a filtering threshold.

The computation time required for query generation by each of the studied methods is negligible when compared to the network-related delays incurred while downloading the actual documents.

PILOT USER STUDY
To gain an understanding of the performance of TFIDF query generation without the bias present in our experiments with synthetically generated topics, we have also performed a pilot study with a user-generated topic, containing 34 documents on stress relief. Since SurfAgent only collects TF-IDF information at this time, query generation was limited to the TFIDF and P-TFIDF methods. We followed the same protocol as with the synthetically generated topics: query lengths were between 2 and 8, and results were averaged over ten trials for the probabilistic version P-TFIDF. A total of 343 distinct URLs were returned by Google. We shuffled these URLs and asked the user to examine each one of them and mark the ones she found to be relevant to her topic. 56 documents out of the total of 343 were found relevant. Table 3 presents the number of relevant documents and the number of hits returned for each parameter combination.

Table 3: Number of relevant documents and count of returned hits for the user-generated topic on stress relief

                         query length
method             2      3      4      5      6      7      8
TFIDF     nrel:    8      3      4      4      4      -      -
          cnt:    10     10     10     10      9      0      0
P-TFIDF   nrel:   1.0    1.8    0.7    0.4    0.0    0.5     -
          cnt:    9.5    8.1    7.7    4.1    1.3    1.3    0.0

This pilot study supports the hypothesis that TFIDF-based queries will generate an adequate incoming stream: queries of length up to six returned at least nine hits from Google. Unlike the previous study, the shorter queries yielded lower relevance, which could be due to the way the user was judging relevance or to the nature of the topic.

As a followup, we will be designing a user study that includes the three apparently best methods (TF, TFIDF, and Boley). We will focus on four issues: Does method performance vary among users and topics (as is suggested by our current study)? Should profile construction incorporate more information? Does relevance assessment change as profiles become more mature? Can the best query length be determined a priori?

CONCLUSIONS
We studied several methods for generating queries from a user profile to answer three questions related to the design of Web information agents. First, how do different query generation algorithms perform relative to each other? In fact, we observed significantly different performance among the eight methods tested. Overall, Boley, TFIDF, and to a lesser extent TF provided a good number of hits and relatively high relevance.

Second, is positive relevance feedback adequate to support the task? We found that leaving out negative training examples does not incur a significant performance loss. Odds-Ratio was found to excel on one topic, but its competitive advantage does not appear to be worth the additional overhead expected from the user. TFIDF and Boley, requiring only positive relevance feedback, generated queries that resulted in relevant hits.

Third, can user profiles be learned independently of the service? The results from TFIDF and the pilot experiment do suggest it. However, the pilot study also suggests either that user relevance judgments may be somewhat harsher than the automated measure, or that the profile may not adequately reflect the users' interests.
In fact, the good performance\nof Boley and TF indicates that in some cases it\nmight be worthwhile to collect more than TF-IDF information\nfrom the user-supplied positive training examples. This\nlast question will be examined in more detail in the future.\nOur study confirmed that additional user burden in the\nform of negative feedback appears unwarranted to support\ndocument generation and that queries generated based on\nautomatically learned profiles can guide harvesting of new\ndocuments of interest. This last result is excellent news for\nthe development of agents that leverage a single learned profile\nto personalize a multitude of web information services.\nACKNOWLEDGMENTS\nThis research was supported in part by National Science\nFoundation Career Award IRI-9624058. The United States\nGovernment is authorized to reproduce and distribute reprints\nfor governmental purposes notwithstanding any copyright\nnotation herein.\n\nREFERENCES\n[1] R. Armstrong, D. Freitag, T. Joachims, and\nT. Mitchell. WebWatcher: A Learning Apprentice for\nthe World Wide Web. In Proceedings of the AAAI\nSpring Symposium on Information Gathering from\nHeterogeneous, Distributed Resources, Stanford, CA,\n1995.\n[2] D. Boley, M. Gini, R. Gross, E. Han,\nK. Hastingsand G. Karypis, V. Kumar, M. Bamshad,\nand J. Moore. Document Categorization and Query\nGeneration on the World Wide Web Using WebAce.\nAI Review, 13(5-6):365391, 1999.\n[3] S. Brin and L. Page. The Anatomy of a Large-scale\nHypertextual Web Search Engine. Computer Networks\nand ISDN Systems, pages 107117, 1998.\n[4] M. Charikar, C. Chekuri, T. Feder, and R. Motwani.\nIncremental Clustering and Dynamic Information\nRetrieval. Proceedings of the 29th ACM Symposium on\nTheory of Computing, 1997.\n[5] L. Chen and Katia Sycara. WebMate: A Personal\nAgent for Browsing and Searching. In Proceedings of\nthe Second International Conference on Autonomous\nAgents, Minneapolis, MN, 1998.\n[6] D. Dreilinger and A.E. Howe. Experiences with\nselecting search engines using meta-search. ACM\nTransactions on Information Systems, 15(3):195222,\n1997.\n[7] G.W. Flake, E.J. Glover, S. Lawrence, and C.L. Giles\nExtracting Query Modifications from Nonlinear\nSVMs. In Proceedings of the Eleventh International\nWorld Wide Web Conference (WWW 2002),\nHonolulu, HI, U.S.A., 2002.\n[8] R. Ghani, R. Jones, and D. Mladenic. On-line learning\nfor query generation: Finding documents matching a\nminority concept on the web. In Proc. of the First\nAsia-Pacific Conference on Web Intelligence, 2001.\n[9] E.J. Glover, G.W. Flake, S. Lawrence,\nW.P. Birmingham, A. Kruger, C.L. Giles, and\nD. Pennock. Improving Category Specific Web Search\nby Learning Query Modifications. In Proceedings of\nthe IEEE Symposium on Applications and the Internet\n(SAINT 2001), San Diego, CA, U.S.A., 2001.\n[10] T. Joachims, D. Freitag, and T. Mitchell.\nWebWatcher: A Tour Guide for the World Wide Web.\nIn Proc. of the 15th International Joint Conference on\nArtificial Intelligence, Nagoya, Japan, 1997.\n[11] S. Lawrence and C.L. Giles. Context and page\nanalysis for improved web search. IEEE Internet\nComputing, 2(4):3846, 1998.\n[12] S. Lawrence and C.L. Giles. Searching the world wide\nweb. Science, 280:98100, April 3 1998.\n[13] H. Lieberman. Letizia: An agent that assists web\nbrowsing. In Proceedings of the 14th International\nJoint Conference on Artificial Intelligence (IJCAI-95),\nMontreal, Canada, 1995.\n[14] Inc. Netmation. 100 most popular web sites.\nhttp://netmation.com/list100.htm.\n[15] A. 
Pretschner and S. Gauch. Personalization on the\nweb. Technical Report ITTC-FY2000-TR-13591-01,\nDept. of Electrical Engineering and Computer Science,\nUniversity of Kansas, December 1999.\n[16] J.J. Rocchio. Relevance feedback in information\nretrieval. In G. Salton, editor, The SMART Retrieval\nSystem: Experiments in Automatic Document\nProcessing. Prentice-Hall, 1971.\n[17] G. Salton. Automatic Text Processing: The\nTransformation, Analysis, and Retrieval of\nInformation by Computer. Addison-Wesley, 1988.\n[18] E. Selberg and O. Etzioni. The metacrawler\narchitecture for resource aggregation on the web.\nIEEE Expert, 12(1):814, 1997.\n[19] G.L. Somlo and A.E. Howe. Adaptive lightweight text\nfiltering. In Proceedings of the 2001 Conference on\nIntelligent Data Analysis (IDA '01), Lisbon, Portugal,\nSeptember 2001.\n[20] Gabriel L. Somlo and Adele E. Howe. Incremental\nclustering for profile maintenance in information\ngathering web agents. In Proceedings of the 2001\nInternational Conference on Autonomous Agents\n(AGENTS'01), Montreal, Canada, May 2001.\n[21] A. Spink, J. Bateman, and B.J. Jansen. Searching\nheterogeneous collections on the web: Behavior of\nexcite users. Information Research: An Electronic\nJournal, 5(2), 1998.\nhttp://www.shef.ac.uk/~is/publications/infers\n818\n", "keywords": "query generation;information agents;user modeling"} {"name": "211", "title": "Very Low Complexity MPEG-2 to H.264 Transcoding Using Machine Learning", "abstract": "This paper presents a novel macroblock mode decision algorithm for inter-frame prediction based on machine learning techniques to be used as part of a very low complexity MPEG-2 to H.264 video transcoder. Since coding mode decisions take up the most resources in video transcoding, a fast macro block (MB) mode estimation would lead to reduced complexity. The proposed approach is based on the hypothesis that MB coding mode decisions in H.264 video have a correlation with the distribution of the motion compensated residual in MPEG-2 video. We use machine learning tools to exploit the correlation and derive decision trees to classify the incoming MPEG-2 MBs into one of the 11 coding modes in H.264. The proposed approach reduces the H.264 MB mode computation process into a decision tree lookup with very low complexity. The proposed transcoder is compared with a reference transcoder comprised of a MPEG-2 decoder and an H.264 encoder. Our results show that the proposed transcoder reduces the H.264 encoding time by over 95% with negligible loss in quality and bitrate.", "fulltext": "INTRODUCTION\nDuring the past few years, technological developments, such as\nnovel video coding algorithms, lower memory costs, and faster\nprocessors, are facilitating the design and development of highly\nefficient video encoding standards. Among the recent works in\nthis area, the H.264 video encoding standard, also known as\nMPEG-4 AVC occupies a central place [1].\nThe H.264 standard is highly efficient by offering perceptually\nequivalent video quality at about 1/3 to 1/2 of the bitrates offered\nby the MPEG-2 format. However, these gains come with a\nsignificant increase in encoding and decoding complexity [2].\nThough H.264 is highly efficient compared to MPEG-2, the wide\nand deep penetration of MPEG-2 creates a need for co-existence\nof these technologies and hence creates an important need for\nMPEG-2 to H.264 transcoding technologies. 
However, given the\nsignificant differences between both encoding algorithms, the\ntranscoding process of such systems is much more complex\ncompared to the other heterogeneous video transcoding processes\n[3-6]. The main elements that require to be addressed in the\ndesign of an efficient heterogeneous MPEG-2 to H.264 transcoder\nare [7]: the inter-frame prediction, the transform coding and the\nintra-frame prediction. Each one of these elements requires to be\nexamined and various research efforts are underway. In this\npaper, we focus our attention on a part of the inter-frame\nprediction: the macroblock mode decision, one of the most\nstringent tasks involved in the transcoding process.\n\nA video transcoder is comprised of a decoding stage followed by\nan encoding stage. The decoding stage of a transcoder can\nperform full decoding to the pixel level or partial decoding to the\ncoefficient level. Partial decoding is used in compressed domain\ntranscoding where the transform coefficients in the input format\nare directly transcoded to the output format. This transformation\nis straightforward when the input and output formats of the\ntranscoder use the same transform (e.g., MPEG-2 to MPEG-4\ntranscoding) [5]. When these transforms differ substantially, the\ncompressed domain transcoding becomes computationally\nexpensive. The utility of this compressed domain transcoding is\nlimited to intra MB transcoding. For predicted MBs, the\ntranscoding in compressed domain becomes prohibitively\nexpensive. The substantial differences in MPEG-2 and H.264\nmake even intra transcoding in the compressed domain relatively\nexpensive [8]; pixel domain transcoding is shown to produce\nbetter results [9].\n\nPixel domain transcoders have a full decoding stage followed by a\nreduced complexity encoding stage. The complexity reduction is\nachieved by reusing the information gathered from the decoding\nstage. It is assumed that the input video is encoded with\nreasonable RD optimization. The MPEG-2 to H.264 complexity\nreduction techniques reported in the literature fall into two\ncategories: 1) MB mode mapping in H.264 based on the MB\nmodes of the incoming video [10] and 2) Selective evaluation of\nMB modes in H.264 based on heuristics [11]. Because of the large\nnumber of inter and intra MB coding modes supported by H.264,\nthere is no one-to-one mapping between MPEG-2 and H.264 MB\nmodes. A direct mapping leads to either a sub-optimal decision if\nthe mapped mode is the final MB mode or an increase on\ncomplexity if additional evaluations have to be made to improve\nthe mode decision. Selective evaluation is based on the\nobservation that certain MB modes are less likely to occur for a\nclass of videos and bitrates. If the selective evaluation is\naggressive in limiting the number of allowed modes, the\nperformance is sub-optimal. On the contrary, increasing the\nnumber of allowed modes increases the complexity.\nWe have developed an innovative approach that is not limited by\nthe inefficiencies of mode mapping or selective evaluation\napproaches. The proposed approach is based on the hypothesis\nthat MB coding mode decisions in H.264 video have a correlation\nwith the distribution of the motion compensated residual in\nMPEG-2 video. Exploiting this correlation together with the MB\ncoding modes of MPEG-2 could lead to a very low complexity\ntranscoder. 
Figure 1 shows a plot of the mean and variance of the\nMPEG-2 MB residual in the input video and the H.264 MB\ncoding mode of the corresponding MB in the transcoded video.\nAs the coding mode changes, the shift in the mean and variance of\nthe corresponding MB can be clearly seen. This correlation can be\neffectively exploited using machine learning approaches. Thus,\nthe H.264 MB mode computation problem is posed as a data\nclassification problem where the MPEG-2 MB coding mode and\nresidual have to be classified into one of the several H.264 coding\nmodes. The proposed transcoder is developed based on these\nprinciples and reduces the H.264 MB mode computation process\ninto a decision tree lookup with very low complexity.\n\n\n\n\n\n\n\n\n\n\n\nFigure 1. Relationship between MPEG-2 MB residual and\nH.264 MB coding mode.\n\nThe rest of the paper is organized as follows. Section 2 reviews\nthe principles of operation of the prediction of inter-coded\nmacroblocks in p-slices used by the H.264 encoding standard.\nSection 3 introduces our macroblock mode decision algorithm for\ninter-frame prediction based on machine learning techniques,\nspecifically designed for MPEG-2 to H.264 transcoders. In\nSection 4, we carry out a performance evaluation of the proposed\nalgorithm in terms of its computational complexity and rate-distortion\nresults. We compare the performance of our proposal to\nthe reference transcoder with the encoding stage using the H.264\nreference implementation. Finally, Section 5 draws our\nconclusions and outlines our future research plans.\nMACROBLOCK MODE DECISION AND MOTION ESTIMATION IN H.264\nIn the H.264 standard, the macroblock decision mode and motion\nestimation are the most computationally expensive processes.\nH.264 uses block-based motion compensation, the same principle\nadopted by every major coding standard since H.261. Important\ndifferences from earlier standards include the support for a range\nof block sizes (down to 4x4) and fine sub-pixel motion vectors\n(1/4 pixel in the luma component). H.264 supports motion\ncompensation block sizes ranging from 16x16 to 4x4 luminance\nsamples with many options between the two. The luminance\ncomponent of each macroblock (16x16 samples) may be split up\nin 4 ways: 16x16, 16x8, 8x16 or 8x8. Each of the sub-divided\nregions is a macroblock partition. If the 8x8 mode is chosen, each\nof the four 8x8 macroblock partitions within the macroblock may\nbe further split in 4 ways: 8x8, 8x4, 4x8 or 4x4 (known as sub-macroblock\npartitions). These partitions and sub-partitions give\nrise to a large number of possible combinations within each\nmacroblock (see Figure 2). This method of partitioning\nmacroblocks into motion compensated sub-blocks of varying size\nis known as tree structured motion compensation.\n\nFigure 2. Macroblock partitions, sub-macroblock partitions\nand partition scans.\nA separate motion vector (previously calculated in the motion\nestimation module) is required for each partition or sub-partition.\nEach motion vector must be coded and transmitted; in addition,\nthe choice of partition(s) must be encoded in the compressed\nbitstream. Choosing a large partition size (e.g. 16x16, 16x8, 8x16)\nmeans that a small number of bits are required to signal the choice\nof motion vector(s) and the type of partition; however, the motion\ncompensated residual may contain a significant amount of energy\nin areas with high detail. Choosing a small partition size (e.g. 8x4,\n4x4, etc.) 
may give a lower-energy residual after motion\ncompensation but requires a larger number of bits to signal the\nmotion vectors and choice of partition(s). The choice of partition\nsize therefore has a significant impact on compression\nperformance. In general, a large partition size is appropriate for\nhomogeneous areas of the frame and a small partition size may be\nbeneficial for areas with high detail\n.\n\nVa\nri\na\nn\nce\nMPEG-2 Res. MB Var.\nH.264 MB Mode\n\nMB Number\nMea\nn\nMPEG-2 Res. MB Mean\nH.264 MB Mode\n932\nThe resolution of each chroma component in a macroblock (Cr\nand Cb) is half that of the luminance (luma) component. Each\nchroma block is partitioned in the same way as the luma\ncomponent, except that the partition sizes have exactly half the\nhorizontal and vertical resolution (an 8x16 partition in luma\ncorresponds to a 4x8 partition in chroma; an 8x4 partition in luma\ncorresponds to 4x2 in chroma; and so on). The horizontal and\nvertical components of each motion vector (one per partition) are\nhalved when applied to the chroma blocks.\nEach partition in an inter-coded macroblock is predicted from an\narea of the same size in a reference picture. The offset between\nthe two areas (the motion vector) has -pixel resolution (for the\nluma component). If the video source sampling is 4:2:0, 1/8 pixel\nsamples are required in the chroma components (corresponding to\n-pixel samples in the luma). The luma and chroma samples at\nsub-pixel positions do not exist in the reference picture and so it is\nnecessary to create them using interpolation from nearby image\nsamples. Sub-pixel motion compensation can provide\nsignificantly better compression performance than integer-pixel\ncompensation, at the expense of increased complexity. Quarter-pixel\naccuracy outperforms half-pixel accuracy.\nEncoding a motion vector for each partition can take a significant\nnumber of bits, especially if small partition sizes are chosen.\nMotion vectors for neighboring partitions are often highly\ncorrelated and so each motion vector is predicted from vectors of\nnearby, previously coded partitions. The method of forming the\nprediction MVp depends on the motion compensation partition\nsize and on the availability of nearby vectors.\nIn H.264, the macroblock mode decision is the most\ncomputationally expensive process. Mode decision is a process\nsuch that for each possible block-size a cost is evaluated. The\nencoder selects the coding-modes for the macroblock, including\nthe best macroblock partition (sub-macroblock partition) and\nmode of prediction for each macroblock partition, such that the\ncost is optimized. In the JM reference code (version 10.2) [12],\nthe motion estimation and the mode decision are executed\ntogether. This implies that for each macroblock partition (sub-macroblock\npartition) within the MB, motion estimation is done\nfirst and the resulting cost is used for the mode decision.\nIn the H.264, two methods have been defined to evaluate the cost\nfor MB mode decision: RD-cost and SAE-cost. In the following,\nwe describe these two methods.\n2.1 The RD-Cost\nThe Rate-Distortion (RD) optimization method is based on a\nLagrange multiplier [13] [14]. The H.264 standard can make use\nof this optimization method to choose the best macroblock mode\ndecision. Different from evaluating the cost of coding a\nmacroblock on a pixel by pixel basis (SAE cost), the RD-cost\nconsists of making the selection based on a Lagrange function. 
In\nthis way, the H.264 standard selects the macroblock mode\nexhibiting the minimum Lagrange cost. This implies that for each\nexisting macroblock partition (sub-partition) within the MB, bitrate\nand distortion are calculated by actually encoding and\ndecoding the video. Therefore, the encoder can achieve the best\nRate-Distortion performance results, at the expense of calculation\ncomplexity.\nFor evaluating the RD-cost, the standard has to obtain the\nencoding rate, R, and the distortion, D, of each macroblock\npartition (sub-macroblock partition). The former is obtained by\nfirst computing the difference between the original macroblock\nand its predictor. Thereafter, a 4x4 Hadamard Transform (HT)\nhas to be applied followed by a quantization process. The\ndistortion, D, is obtained by performing an inverse quantization\nprocess followed by its inverse HT and then comparing the\noriginal macroblock to the reconstructed one. The H.264 standard\nchooses then the decision mode having the minimum cost, J. The\ncost is evaluated using the Lagrange function J=D + x R, where\nis the Lagrange multiplier. Figure 3 depicts the overall process.\nOne of the main drawbacks of this method is its excessive\ncomputational cost. On the contrary, the encoder can achieve the\nbest Rate-Distortion performance results. However, for many\napplications, the use of the Lagrange multiplier may be\nprohibitive. This is the case when developing a transcoding\narchitecture aimed to work in real-time.\nHT\n+\nQP\n\nEncoder H.264/AVC with\nloop Rate-Distorsion\nQP\n-1\nIHT\n\n+\nCompute\nrate\nPrediction\nFrame\n+\nDetermine\ndistorsion\n+\n+\nCompute cost\n(J = D+\n\nx R)\nDecision\nR\nD\n\nFigure 3. RD-cost method in the H.264 encoder.\n2.2 The SAE-Cost\nIn this method, the H.264 encoder selects the best macroblock\nmode by using the Sum of Absolute Errors (SAE). This implies\nthat for each existing macroblock partition (sub-partition) within\nthe MB, a predictor within the pixel-domain is created from the\nmotion estimation of the current partition and the SAE costs is\nevaluated. For each MB and for each color component (Y,Cr,Cb),\none prediction mode have to be obtained. The best mode is\ndetermined corresponding to the mode exhibiting the minimum\nSAE cost. One of the main advantages of this method is its low\ncomputational cost. On the contrary, the Rate-Distortion\n\nperformance results are sub-optimal.\n2.3 The Fast Motion Estimation Option\nMotion estimation is one of the most important tools in H.264\nencoder for exploiting the high temporal redundancy between\nsuccessive frames to improve video coding efficiency. And\nmotion estimation is also the most time consuming part in the\nH.264 encoder (since it is also used for mode decision). Generally\nmotion estimation is conducted into two steps: first is integer pel\nmotion estimation; and the second is fractional pel motion\nestimation around the position obtained by the integer pel motion\nestimation.\nAlgorithms on Fast Motion Estimation (FME) are always hot\nresearch spot, especially fast integer pel motion estimation has\nachieved much more attention because traditional fractional pel\n933\nmotion estimation only take a very few proportion in the\ncomputation load of whole motion estimation. 
Fast motion\nestimation algorithms such as EPZS [15], UMHexagonS [16], and\nSEA [17] have been proposed to reduce the number of searching\npoints in motion estimation.\nThe UMHexagonS algorithm proposed by Tsinghua University\nwas adopted by the H.264/MPEG-4 Part 10 (AVC) reference\nsoftware implementation [12]. This algorithm uses the hybrid and\nhierarchical motion search strategies. It includes four steps with\ndifferent kinds of search pattern: 1) Predictor selection and\nprediction mode reordering; 2) Unsymmetrical-cross search; 3)\nUneven multi-hexagon-grid search; 4) Extended hexagon-based\nsearch. With the second and third step, the motion estimation\naccuracy can be nearly as high as that of full search. But the\ncomputation load and operations can be reduced even more.\nUnsymmetrical-cross search uses prediction vector as the search\ncenter and extends in the horizontal and vertical directions\nrespectively. Uneven multi-hexagon-grid search includes two sub-steps\n: first a full search is carried out around the search center.\nAnd then a 16-HP multi-hexagon-grid search strategy is taken.\nExtended hexagon-based search is used as a center based search\nalgorithm, including hexagon search and diamond search in a\nsmall range.\nIn the H.264 reference software, the Fast Motion Estimation\n(FME) algorithm (based in the UMHexagonS algorithm) can be\nemployed for the motion estimation in addition to the original\nFull Search (FS) algorithm.\nMACHINE LEARNING\nMachine learning refers to the study of algorithms and systems\nthat \"learn\" or acquire knowledge from experiences. Deductive\nmachine learning deduces new rules/knowledge from existing\nrules and inductive machine learning uses the analysis of data sets\nfor creating a set of rules to take decisions. These rules can be\nused, in the machine learning, to build a tree decision using a set\nof experiments or examples, named the training data set. This set\nof data must have the following properties [18]:\n1. Each attribute or variable can take nominal or numerical\nvalues, but the number of attributes cannot vary from an\nexample to another. This is to say, all the samples in the\ntraining data set used for training the model must have\nthe same number of variables.\n2. The set of categories that the examples can be assigned\nto must a priori be known to enable supervised learning.\n3. The set of categories must be finite and must be\ndifferent from one another.\n4. Since the inductive learning consists of obtaining\ngeneralization from examples, it is supposed the\nexistence of a sufficiently great number of examples.\nMachine learning uses statistics with different kinds of algorithms\nto solve a problem by studying and analyzing the data. Machine\nlearning has been used in an extensive range of applications\nincluding search engines, medical diagnosis, stock market\nanalysis, classifying DNA sequences, speech and handwriting\nrecognition, object recognition in computer vision, game playing\nand robot motion, etc.\nIn this paper, we describe the process of using machine learning\nto build a decision tree for very low complexity transcoding. The\ndecision tree will be used to determine the coding mode of an MB\nin P frames of the output H.264 video, based on the information\ngathered during the MPEG-2 decoding stage. Figure 4 depicts the\nprocess for building the decision trees to be used in the MPEG-2\nto H.264 transcoding process. 
The incoming MPEG-2 video is\ndecoded and during the decoding stage, the MB coding mode, the\ncoded block pattern (CBPC), and the mean and variance of the\nresidual information for this MB (calculated for its 4x4 sub-blocks\nresulting in 16 means and 16 variances for each MB) are\nsaved. The decoded MPEG-2 video is then encoded using the\nstandard H.264 encoder. The coding mode of the corresponding\nMBs in H.264 is also saved. Based on the MPEG-2 data and the\ncorresponding H.264 coding mode decision for each MB, a\nmachine learning algorithm is used to create decision trees that\nclassify an MB into one of the 11 H.264 MB coding modes.\n\nFigure 4. Process for building decision trees for MPEG-2 to\nH.264 transcoding.\n3.1 Creating the Training Files\nA decision tree is made by mapping the observations about a set\nof data to a tree made of arcs and nodes. The nodes are the\nvariables and the arcs the possible values for that variable. The\ntree can have more than one level; in that case, the nodes (leafs of\nthe tree) represent the decisions based on the values of the\ndifferent variables that drive the decision from the root to the leaf.\nThese types of trees are used in the machine learning processes\nfor discovering the relationships in a set of data. The tree leafs are\nthe classifications and the branches are the features that lead to a\nspecific classification. A tree decision is a classifier based on a set\nof attributes allowing us to determine the category of an input\ndata sample.\nThe decision tree for the transcoder was made using the WEKA\ndata mining tool [18]. The files that are used for the WEKA data\nmining program are known as Attribute-Relation File Format\n(ARFF) files. An ARFF file is written in ASCII text and shows\nthe relationship between a set of attributes. Basically, this file has\ntwo different sections:1) the header which contains the name of\nthe relation, the attributes that are used, and their types; and 2) the\nsection containing the data.\nThe training sets were made using MPEG-2 sequences encoded at\nhigher than the typical broadcast encoding rates for the same\nquality, since the B frames are not used. The H.264 decisions in\nthe training set were obtained from encoding the MPEG-2\n934\ndecoded sequence with a quantization parameter of 25 and RD\noptimization enabled. After extensive experimentation, we found\nthat sequences that contain regions varying from homogenous to\nhigh-detail serve as good training sets. Good sample sequences\ncould be Flower and Football. The goal is to develop a single,\ngeneralized, decision tree that can be used for transcoding any\nMPEG-2 video.\nFigure 5 shows the decision trees built using the process depicted\nin Figure 4. As shown in Figure 4, the Decision Tree for the\nproposed transcoder is a hierarchical decision tree with three\ndifferent WEKA trees: 1) classifier for Intra, Skip, Inter 16x16,\nand Inter 8x8, 2) classifier to classify inter 16x16 into one of\n16x16, 16x8, and 8x16 MBs and 3) classifier to classify inter 8x8\ninto one of 8x8, 8x4, 4x8, or 4x4. 
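As an illustration of how one training instance could be produced (a minimal Python sketch with illustrative function names, not the authors' tooling), the per-MB features just described — the sixteen 4x4 sub-block means and variances of the MPEG-2 residual, the MPEG-2 MB mode, and the coded block pattern — can be computed and written as a single comma-separated ARFF data line, with the H.264 mode chosen by the reference encoder appended as the class label:

import numpy as np

# Sketch only: assemble one ARFF instance for the top-level training set.
# The residual MB is assumed to be available as a 16x16 array of MPEG-2
# motion-compensated residual samples; the label encodings follow the paper
# (MPEG-2 modes 0, 1, 2, 4, 8 and H.264 classes 0, 1, 8, 9).
def sub_block_stats(residual_mb):
    means, variances = [], []
    for by in range(4):                      # 4x4 grid of 4x4 sub-blocks
        for bx in range(4):
            blk = residual_mb[4*by:4*by+4, 4*bx:4*bx+4]
            means.append(float(blk.mean()))
            variances.append(float(blk.var()))
    return means, variances

def arff_instance(residual_mb, mpeg2_mode, cbpc_bits, h264_class):
    means, variances = sub_block_stats(residual_mb)
    fields = []
    for m, v in zip(means, variances):       # mean0,variance0,...,mean15,variance15
        fields += ["%.3f" % m, "%.3f" % v]
    fields.append(str(mpeg2_mode))           # MPEG-2 MB mode label
    fields += [str(b) for b in cbpc_bits]    # coded block pattern flags
    fields.append(str(h264_class))           # class: H.264 reference decision
    return ",".join(fields)

# Example with a synthetic residual MB labelled as Inter 16x16 (class 1):
rng = np.random.default_rng(0)
print(arff_instance(rng.integers(-64, 64, (16, 16)), 2, [1, 0, 1, 0, 0, 0, 1], 1))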
This paper focuses on the Inter\nMB mode computation and the further classification and\nprocessing for Intra MBs is not discussed in this paper.\nFor creating the first WEKA tree (Figure 5 node 1), the first\ntraining data set uses the mean and variance of each one of the\nsixteen 4x4 residual sub-blocks, the MB mode in MPEG-2 (skip,\nintra, and three non-intra modes, labeled as 0, 1, 2, 4 and 8 in the\ncode shown below), the coded block pattern (CBPC) in MPEG-2,\nand the corresponding H.264 MB coding mode decision for that\nMB as determined by the standard reference software. The header\nsection of the ARFF files has the attribute declaration depicted\nherein:\nThe supposed dependent variable, namely class in the example, is\nthe variable that we are trying to understand, classify, or\ngeneralize. The other attributes are the variables that determine\nthe classification. The ARFF data section has the instance lines,\nwhich are the samples used to train our model. Each macroblock\nsample is represented on a single line. In this case the variable\nclass can take four values (skip, 16x16, 8x8 or Intra labeled as 0,\n1, 8 and 9 in the code).\nThe second training data set, used for creating the second WEKA\ntree (Figure 5 node 2), was made using the samples (MBs) that\nwere encoded as 16x16 MBs in the H.264 reference encoder. It\nuses the mean and variances of each one of the sixteen 4x4\nresidual sub-blocks, the MB mode in MPEG-2 (in this case only\nthe three non-intra modes), the coded block pattern (CBPC) in\nMPEG-2, and the corresponding H.264 MB coding sub-mode\ndecision in the 16x16 mode, as determined by the standard\nreference software: 16x16, 16x8 or 8x16. This tree determines the\nfinal coding mode of the MBs classified as inter 16x16 by the first\ntree.\nThe third and last training data set, was used to create the third\nWEKA tree (Figure 5 node 3) and was made using the samples\n(MBs) that were encoded as inter 8x8 MBs in the H.264 reference\nencoder. It uses four means and four variances of 4x4 residual\nsub-blocks, the MB mode in MPEG-2 (the three non-intra\nmodes), the coded block pattern (CBPC) in MPEG-2, and the\ncorresponding H.264 MB sub-partition decision in the 8x8 mode,\nas determined by the standard reference software: 8x8, 8x4, 4x8\nor 4x4. Since this decision is made separately for each 8x8 sub-block\n, only the four means and four variances of 4x4 residual sub-blocks\nare used in each sample for training the model.\nBased on these training files, the J48 algorithm implemented in\nthe WEKA data mining tool was used to create the three decision\ntrees. The J48 algorithm is an implementation of the C4.5\nalgorithm proposed by Ross Quinlan [19]: the algorithm widely\nused as a reference for building decision trees.\nThe decision tree, that is proposed to solve the inter-prediction\nproblem, is a model of the data that encodes the distribution of the\nclass label in terms of the attributes. The final goal of this\ndecision tree is to help find a simple structure to show the\npossible dependences between the attributes and the class.\n3.2 The Decision Tree\nThis sub-section discusses the proposed macroblock mode\ndecision algorithm aiming to accelerate the inter-frame prediction.\nThis goal is achieved by making use of the MPEG-2 MB coding\nmodes, the coded block pattern (CBPC), and the mean and\nvariance of the residual information for this MB calculated for its\n4x4 sub-blocks. 
MPEG-2 uses 16x16 motion compensation (MC)\nand does not temporally decorrelate an image fully. The MC\nresidual can thus be exploited to understand the temporal\ncorrelation of variable block sizes in H.264. The open source\nWEKA data mining tool is used to discover a pattern of the mean,\nvariance, MPEG-2 coding modes, and the coded block pattern in\nMPEG-2 (CBPC) for H.264 coding mode decisions. Figure 5\nshows the decision tree used in the proposed transcoder.\nThe decision tree consists of three WEKA decision trees, shown\nin Figure 5 with grey balls. The first WEKA tree is used to check\nfor the skip, Intra, 8x8 and 16x16 MBs modes. If an MB is 8x8 or\n16x16, a second and a third decision tree is used for selecting the\nfinal coding mode of the MB. The WEKA tool determined the\nmean and variance thresholds for each of the three WEKA trees in\nthe decision tree. Due to space constraints we cannot show all the\nrules being evaluated in the WEKA decision nodes. The process\ndescribed in herein should be sufficient for interested people to\ndevelop the decision trees and repeat these experiments. The\ndecision tree works as follows:\nNode 1. The inputs for this node are all the MPEG-2 coded MBs.\nIn this node a tree decision generated with WEKA is used to\ndecide whether the MB should be coded in H.264. This tree\nexamines whether the MB has a very high residual or a medium\nresidual. The output of this node is a first level decision mode that\nshould be used for coding the MB: skip, Intra, 8x8 or 16x16. The\nintra decision process is not discussed in this paper. In the other\ncases, the algorithm has to make a second level decision based in\nthe first decision. For example, the following rules were given by\nWEKA:\n\nIf the MPEG-2 MB was \"MC not coded\", (non-zero MV\npresent, none of the 8x8 block has coded coefficients), then\n@RELATION mean-variance_4x4\n\n@ATTRIBUTE mean0 Numeric\n@ATTRIBUTE variance0 Numeric\n@ATTRIBUTE mean1 Numeric\n@ATTRIBUTE variance1 Numeric\n.................................................................................\n@ATTRIBUTE mean15 Numeric\n@ATTRIBUTE variance15 Numeric\n@ATTRIBUTE mode_mpeg2 {0,1,2,4,8}\n@ATTRIBUTE CBPC0 {0,1}\n.................................................................................\n@ATTRIBUTE CBPC6 {0,1}\n@ATTRIBUTE class {0,1,8,9}\n\n935\nthe MB will be coded as 16x16 in H.264. Again, a second\ndecision level will be made to select the best choice in this\ncase (see node 2).\n\nIf the MPEG-2 MB was coded in intra mode, the MB will be\ncoded as intra or inter 8x8 mode in H.264. In some cases the\nalgorithm will propose Intra, and the algorithm will end, and\nin other cases the algorithm will propose 8x8 mode, so a\nsecond level decision will be done (see node 3).\n\nIf the MPEG-2 MB was coded in skip mode, then the H.264\ndecision mode should be skip. The decision will be made in\nnode 4.\n\nFigure 5. The Decision Tree.\nNode 2. The inputs for this node are the 16x16 MBs classified by\nthe node 1. In this node we use again a decision tree generated\nwith WEKA to decide whether the MB should be coded in H.264\n(16x16, 16x8 or 8x16). This tree examines if there are continuous\n16x8 or 8x16 sub-blocks that might result in a better prediction.\nThe output of this node is the 16x16 sub-mode decision mode that\nshould be used for coding the MB: 16x16, 16x8 or 8x16. When\nthe node decision is 16x8 or 8x16 the coding mode is finalized. 
In\nthe other case, the evaluation continues in node 4, where the final\ndecision will be made.\nNode 3. The inputs for this node are the MBs classified by the\nnode 1 as 8x8. This node evaluates only the H.264 8x8 modes\nusing the third WEKA tree and selects the best option: 8x8, 8x4,\n4x8 or 4x4. As explained in the previous section, this tree is run 4\ntimes, once for each of the four sub-macroblocks in the MB. This\ntree is different from the others because this one only uses four\nmeans and four variances to make the decision.\nNode 4. The inputs for this node are skip-mode MBs in the\nMPEG-2 bitstream classified by the node 1, or the 16x16 MBs\nclassified by the node 2. This node evaluates only the H.264\n16x16 mode (without the sub-modes 16x8 or 8x16). Then, the\nnode selects the best option, skip or inter 16x16.\nSince the MB mode decision, and hence the thresholds, depend on\nthe quantization parameter (QP) used in the H.264 encoding\nstage, the mean and variance threshold will have to be different at\neach QP. The two solutions here are: 1) develop the decision trees\nfor each QP and use the appropriate decision tree depending on\nthe QP selected and 2) develop a single decision tree and adjust\nthe mean and variance threshold used by the trees based on the\nQP. The first option is complex as we have to develop and switch\nbetween 52 different decision trees resulting in 156 WEKA trees\nin a transcoder. Since the QP used by H.264 is designed to change\nthe quantization step size and the relationship between the QPs is\nwell defined, this relationship can be used to adjust the mean and\nvariance thresholds. The proposed transcoder uses a single\ndecision tree developed for a mid-QP of 25 and then adjusted for\nother QPs. Since the quantization step size in H.264 doubles when\nQP increases by 6, the thresholds are adjusted by 2.5% for a\nchange in QP of 1. For QP values higher than 25, the thresholds\nare decreased and for QP values lower than 25 thresholds are\nproportionally increased.\nFigure 6 shows an example of the results obtained by applying\nour proposed algorithm. Figure 6a illustrates the residual for the\nMPEG-2 encoded Tempete sequence. Figures 6b and 6c show the\nmean and variance of the residual. Figures 6.e and 6.f show the\ndifferences between the inter mode selection made by the H.264\nstandard (with the RD-optimized option enabled), and the\nproposed algorithm, with a value of 10 for QP. From these\nfigures, it is clear that our algorithm obtains very similar results to\nthose obtained using the full estimation of the H.264 standard.\n\n\n\n(a) MPEG-2 residual (+128)\n\n\n\n\n(b) Mean of the MPEG-2 residual (+128)\n\n\n\n(c) Variance of the MPEG-2 residual\n\n\n(d) Different kinds of Macroblocks in the grid pictures\n\n\n\n(e) H.264 Rd opt, first frame P, Tempete (CIF)\nQP= 10. Inter mode selected by H.264\n\n\n\n\n(f ) H.264 Rd opt, first frame P, Tempete (CIF)\nQP= 10. Inter mode selected by our proposal\nInter 16x16 Macroblock\nSkip Macroblock\nIntra Macroblock\nInter 8x16 Macroblock\nInter 16x8 Macroblock\nInter 8x8 Macroblock\nInter 4x8 Sub-macroblock\nInter 8x4 Sub-macroblock\nInter 4x4 Sub-macroblock\nInter 8x8 Sub-macroblock\n\nFigure 6. Macroblock partitions generated by the proposed\nalgorithm for the first P-frame in the Tempete sequence.\nPERFORMANCE EVALUATION\nThe proposed low complexity MB coding mode decision\nalgorithm is implemented in the H.264/AVC reference software,\nversion JM 10.2 [12]. 
Figure 7 shows the overall operation of the\nproposed transcoder. The MPEG-2 video is decoded and the\ninformation required by the decision trees is gathered in this\nstage. The additional computation here is the computation of the\nmean and variance of the 4x4 sub-blocks of the residual MBs. The\nMB coding mode decision determined by the decision trees is\nused in the low complexity H.264 encoding stage. This is an\n936\nH.264 reference encoder with the MB mode decision replaced by\nsimple mode assignment from the decision tree. The H.264 video\nencoder takes as input the decoder MPEG-2 video (pixel data)\nand the MB mode decision from the decision tree and encodes the\nH.264 video. The MPEG-2 motion vectors are not used and the\nencoder performs the motion estimation just for the final MB\nmode determined by the decision tree.\nMPEG-2\nVideo\nH.264\nVideo\n\nFigure 7. Proposed transcoder.\nThe performance of the proposed very low complexity transcoder\nis compared with a reference transcoder comprised of a full\nMPEG-2 decoder followed by a full H.264 encoder. We compare\nthe performance of our proposal to the full H.264 encoder when\nthe RD-cost (with and without FME option enabled) and the SAE-cost\n(with and without FME option enabled) are used. The metrics\nused to evaluate the performance are the reduction in the\ncomputational cost and rate distortion function. The time results\nreported are for the H.264 encoding component as the MPEG-2\ndecoding cost is the same for both the proposed and reference\nencoders.\nWe have conducted an extensive set of experiments with videos\nrepresenting wide range of motion, texture, and color.\nExperiments were conducted to evaluate the performance of the\nproposed algorithm when transcoding videos at commonly used\nresolutions: CCIR-601, CIF, and QCIF. The input to the\ntranscoder is a high quality MPEG-2 video. Since the proposed\ntranscoder addresses transcoding P frames in MPEG-2 to H.264 P\nframes, MPEG-2 bitstreams were created without B frames. Since\nthe B frames, which are much smaller than P frames, are not used\nin the input video, the video has to be encoded at higher than the\ntypical encoding rates for equivalent broadcast quality. Table 1\nshows the bitrates used for the input MPEG-2 video. The\nexperiments have shown that the proposed approach performs\nextremely well across all bitrates and resolutions.\nTable 1. Bitrates for the input sequences\nFormat Bitrate\nCCIR-601 (720x480)\n5 Mbps\nCIF (352x288)\n1.15 Mbps\nQCIF (176x144)\n0.768 Mbps\nThe sequences have been encoded with H.264 using the QP\nfactors ranging from 5 up to 45 in steps of 5. This corresponds to\nthe H.264 QP range used in most practical applications. The size\nof the GOP is 12 frames; where the first frame of every GOP was\nencoded as I-frame, and the rest of the frames of the GOP were\nencoded as a P-frames. The rate control was disabled for all the\nsimulations. The ProfileIDC was set to High for all the\nsimulations, with the FRExt options enabled. The simulations\nwere run on a P4 HT at 3.0 GHz Intel machine with 512 MB\nRAM. 
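Before turning to the results, note that sweeping the QP from 5 to 45 exercises the threshold adaptation of Section 3.2 over its whole range. A minimal sketch of that scaling rule follows; the base threshold value is invented for illustration, and the 2.5% per QP step is applied here linearly around the base QP of 25 (a compounding interpretation would also be consistent with the text):

# Sketch only: scale a mean/variance threshold trained at QP 25 by 2.5% per
# unit of QP, decreasing it for QP > 25 and increasing it for QP < 25.
def adjust_threshold(base_threshold, qp, base_qp=25, step=0.025):
    return base_threshold * (1.0 - step * (qp - base_qp))

for qp in (5, 25, 45):                       # hypothetical variance threshold of 100
    print(qp, round(adjust_threshold(100.0, qp), 1))   # 5 -> 150.0, 25 -> 100.0, 45 -> 50.0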
The results are reported for six different sequences: two for each of the three resolutions shown in Table 1.

Figure 8. RD results for RD-cost without FME option: PSNR [dB] versus bit rate [kbits/s] for the CCIR-601 sequences Ayersroc and Martin, the CIF sequences Paris and Tempete, and the QCIF sequences Foreman and News (H.264 reference encoder vs. proposed transcoder, both with RD optimization).

Figure 8 shows the RD results for the reference and proposed transcoder with RD optimization enabled and fast motion estimation (FME) disabled. Figure 9 shows the RD results for the reference and proposed transcoder with RD optimization enabled and fast motion estimation (FME) enabled. As seen from the figures, the PSNR obtained with the proposed transcoder deviates slightly from the results obtained when applying the considerably more complex reference transcoder. Compared with the reference transcoder, the proposed transcoder has a PSNR drop of at most 0.3 dB for a given bitrate and a bitrate increase of at most 5% for a given PSNR. This negligible drop in RD performance is more than offset by the reduction in computational complexity. Tables 2 and 3 show the average encoding time per frame given in milliseconds. As shown in Table 2 and Table 3, the transcoding time is reduced by more than 80% with RD optimization, and by more than 90% with FME enabled, for both the reference and proposed transcoders.

Figure 9. RD results for RD-cost with FME option: PSNR [dB] versus bit rate [kbits/s] for the same six sequences (H.264 reference encoder vs. proposed transcoder, both with RD optimization and fast motion estimation).

Figure 10 shows the RD results for the reference and proposed transcoder with SAE-cost (RD optimization disabled) and fast motion estimation (FME) disabled. Figure 11 shows the RD results for the reference and proposed transcoder with SAE-cost (RD optimization disabled) and fast motion estimation (FME) enabled. As seen from the figures, in some cases the proposed transcoder has better results than the reference transcoder. This happens because the best solution is obtained by enabling the RD optimization, and in the experiments reported in these figures we are comparing the faster configuration of an H.264 encoder (SAE cost) with our proposed reduced-complexity transcoder. With SAE-based encoding (RD optimization disabled), the proposed transcoder continues to outperform the reference transcoder computationally (Tables 2 and 3).
The transcoder still maintains a PSNR drop of less than 0.3 dB and a bitrate increase of less than 5%. The computational cost is reduced by over 38% for the SAE case and by over 82% with FME enabled for both the reference and proposed transcoders.

Table 2. Mean encoding time (milliseconds) per frame with the reference transcoder

  Sequence    RD Opt    RD Opt + FME    SAE    SAE + FME
  Martin        7370            6420   2110          940
  Ayersroc      7650            6820   2095         1030
  Paris         2305            2020    590          235
  Tempete       2360            2050    605          290
  Foreman        565             495    155           68
  News           550             470    150           55

Table 3. Mean encoding time (milliseconds) per frame with the proposed transcoder

  Sequence    RD Opt    RD Opt + FME    SAE    SAE + FME
  Martin        1460             470   1190          170
  Ayersroc      1620             670   1160          190
  Paris          415              95    360           45
  Tempete        445             135    360           53
  Foreman        102              24     93           12
  News           103              21     92           11

Table 4. Mean time reduction (%) per frame with the proposed transcoder

  Sequence    RD Opt    RD Opt + FME     SAE    SAE + FME
  Martin       80.19           92.68   43.60        81.91
  Ayersroc     78.82           90.18   44.63        81.55
  Paris        82.00           95.30   38.98        80.85
  Tempete      81.14           93.41   40.50        81.72
  Foreman      81.95           95.15   40.00        82.35
  News         81.27           95.53   38.67        80.00

Based on the results shown in Tables 2 and 3, the proposed transcoder with SAE and FME has the lowest complexity. The proposed transcoder with RD optimization and FME is still faster than the fastest case of the reference transcoder (SAE + FME). Using FME reduces the complexity substantially. Selecting RD optimization with the proposed transcoder doubles the complexity compared with the SAE + FME case. The decision to enable RD optimization can be based on the operating bitrates and the sensitivity to the PSNR drop. At higher bitrates, the RD Opt + FME option gives about 0.6 dB better quality than the SAE + FME option; this is doubling the complexity for a gain of 0.6 dB. However, at lower bitrates, the PSNR gain reduces to about 0.3 dB.

Figure 10. RD results for SAE-cost without FME option: PSNR [dB] versus bit rate [kbits/s] for the CCIR-601, CIF, and QCIF test sequences (H.264 reference encoder vs. proposed transcoder, both with SAE cost).

Table 4 summarizes the reduction in the computational cost due to the proposed machine learning based mode decision algorithm in the proposed transcoder. With RD optimization and FME, the computational cost is reduced by over 90%. The cost reduction reaches as high as 95.5% for QCIF sequences. With SAE and FME, the computational cost is reduced by over 80%. The computational cost reduction comes at the cost of reduced quality. The quality reduction, however, is very small and negligible for most video applications. Table 5 shows the quality variation versus time reduction of the proposed transcoder with respect to the reference transcoder for the same input bitrates shown in Table 1, showing over 96% reduction in the computational complexity characterizing our proposed scheme. As shown in the table, using the proposed transcoder reduces the PSNR by at most 0.3 dB with RD optimization enabled and by at most 0.1 dB with the SAE-cost based transcoder.
Our results show that the proposed algorithm is able to maintain a good picture quality while considerably reducing the number of operations to be performed in all the scenarios.

Figure 11. RD results for SAE-cost with FME option: PSNR [dB] versus bit rate [kbits/s] for the CCIR-601, CIF, and QCIF test sequences (H.264 reference encoder vs. proposed transcoder, both with SAE cost and fast motion estimation).

Table 5. Quality variation vs. time reduction (for transcoding rate)

                MPEG-2      Quality variation from            Time reduction from
                bit rate    reference transcoder (dB)         reference transcoder (%)
  Sequence      (Mbps)      RD Opt  RD FME   SAE   SAE FME    RD Opt  RD FME   SAE   SAE FME
  Ayersroc      5.0           -0.3    -0.3   0.0      -0.1      80.0    90.5  43.3      82.3
  Martin        5.0           -0.2    -0.2  -0.1      -0.1      80.5    92.8  42.1      82.0
  Tempete       1.15          -0.2    -0.2   0.0       0.0      80.0    93.8  41.1      82.5
  Paris         1.15          -0.3    -0.3   0.0      -0.1      81.6    95.6  38.5      80.7
  Foreman       0.768         -0.3    -0.3   0.0       0.0      83.5    95.5  37.4      82.6
  News          0.768         -0.2    -0.2   0.0       0.0      84.1    96.0  35.1      81.1

CONCLUSIONS
In this paper, we proposed a novel macroblock partition mode decision algorithm for inter-frame prediction to be used as part of a highly efficient MPEG-2 to H.264 transcoder. The proposed algorithm uses machine learning techniques to exploit the correlation between the MPEG-2 MC residual and the H.264 coding modes. The WEKA tool was used to develop decision trees for the H.264 coding mode decision. The proposed algorithm has very low complexity, as it only requires the mean and variance of the MPEG-2 residual and a set of rules to compare the mean and variance against a threshold. The proposed transcoder uses a single decision tree with adaptive thresholds based on the quantization parameter selected in the H.264 encoding stage. The proposed transcoder was evaluated using MPEG-2 videos at CCIR, CIF, and QCIF resolutions. Our results show that the proposed algorithm is able to maintain a good picture quality while considerably reducing the computational complexity, by as much as 95%. The reduction in computational cost has negligible impact on the quality and bitrate of the transcoded video. The results show that the proposed transcoder maintains its performance across all resolutions and bitrates. The proposed approach to transcoding is novel and can be applied to develop other transcoders as well.
Our future plans will focus on further reducing the complexity of the proposed transcoder by reusing the MPEG-2 motion vectors followed by a motion vector refinement. By reusing the motion vectors, we believe real-time transcoding of CIF resolution video at 30 FPS is within reach.

REFERENCES
[1] ITU-T RECOMMENDATION H.264, "Advanced Video Coding for Generic Audiovisual Services". May 2003.
[2] Implementation Studies Group, "Main Results of the AVC Complexity Analysis". MPEG Document N4964, ISO/IEC JTC11/SC29/WG11, July 2002.
[3] T. Shanableh and M.
Ghanbari, \"Heterogeneous Video\nTranscoding to Lower Spatio-Temporal Resolutions and\nDifferent Encoding Formats,\" IEEE Transactions on\nMultimedia, vol.2, no.2, June 2000.\n[4]\nA. Vetro, C. Christopoulos, and H.Sun \"Video Transcoding\nArchitectures and Techniques: An Overview\". IEEE Signal\nProcessing Magazine, vol. 20, no. 2, pp.18-29, March. 2003.\n[5]\nH. Kalva, A. Vetro, and H. Sun, \"Performance Optimization of\nthe MPEG-2 to MPEG-4 Video Transcoder\". Proceeding of\nSPIE Conference on Microtechnologies for the New Millennium,\nVLSI Circuits and Systems, May 2003.\n\n\n[6]\nS. Dogan, A.H. Sadka and A.M. Kondoz, \"Efficient MPEG-4/H\n.263 Video Transcoder for Interoperability of Heterogeneous\nMultimedia Networks,\" IEE Electronics Letters, Vol. 35, No.11.\npp. 863-864.\n[7]\nH. Kalva. "Issues in H.264/MPEG-2 Video Transcoding".\nProceedings of Consumer Communications and Networking\nConference, January 2004.\n[8]\nY. Su, J. Xin, A. Vetro, and H. Sun, \"Efficient MPEG-2 to\nH.264/AVC Intra Transcoding in Transform-Domain,\" IEEE\nInternational Symposium on Circuits and Systems, 2005. ISCAS\n2005. pp. 1234- 1237 Vol. 2, 23-26 May 2005.\n[9]\nB. Petljanski and H. Kalva, "DCT Domain Intra MB Mode\nDecision for MPEG-2 to H.264 Transcoding" Proceedings of the\nICCE 2006. January 2006. pp. 419-420.\n[10]\nY.-K. Lee, S.-S. Lee, and Y.-L. Lee, \"MPEG-4 to H.264\nTranscoding using Macroblock Statistics,\" Proceedings of the\nICME 2006, Toronto, Canada, July 2006.\n[11]\nX. Lu, A. M. Tourapis, P. Yin, and J. Boyce, \"Fast Mode\nDecision and Motion Estimation for H.264 with a Focus on\nMPEG-2/H.264 Transcoding,\" Proceedings of 2005 IEEE\nInternational Symposium on Circuits and Systems (ISCAS),\nKobe, Japan, May 2005.\n[12]\nJoint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG,\nReference Software to Committee Draft. JVT-F100 JM10.2.\n2006.\n[13]\nG. Sullivan and T. Wiegand, \"Rate-Distortion Optimization for\nVideo Compression,\" IEEE Signal Processing Magazine, vol.\n15, no. 6, pp. 74-90, November. 1998.\n[14]\nT. Wiegand et al., \"Rate-Constrained Coder Control and\nComparison of Video Coding Standards,\" IEEE Transactions on\nCircuits Systems and Video Technology, vol. 13, no. 7, pp. 688-703\n, July 2003.\n[15]\nA.M. Tourapis, O.C. Au, M.L. Liou, \"Highly Efficient\nPredictive Zonal Algorithms for Fast Block-Matching Motion\nEstimation,\" IEEE Transactions on Circuits and Systems for\nVideo Technology, Vol. 12, Issue 10, Oct. 2002.\n[16]\nZ. Chen, P. Zhou, and Y. He, \"Fast Integer Pel and Fractional\nPel Motion Estimation for JVT\", 6th Meeting. Awaji, December\n2002\n[17]\nM. Yang, H. Cui, K. Tang, \"Efficient Tree Structured Motion\nEstimation using Successive Elimination,\" IEE Proceedings-Vision\n, Image and Signal Processing, Vol. 151, Issue 5, Oct.\n2004.\n[18]\nIan H. Witten and Eibe Frank, \"Data Mining: Practical Machine\nLearning Tools and Techniques\", 2nd Edition, Morgan\nKaufmann, San Francisco, 2005.\n[19]\nJ.R. Quinlan, \"C4.5: Programs for Machine Learning\", Morgan\nKaufmann, 1993.\n940", "keywords": "H.264;Inter-frame;Machine Learning;Transcoding;MPEG-2"} {"name": "212", "title": "Video-Streaming for Fast Moving Users in 3G Mobile Networks", "abstract": "The emergence of third-generation (3G) mobile networks offers new opportunities for the effective delivery of data with rich content including multimedia messaging and video-streaming. 
Provided that streaming services have proved highly successful over stationary networks in the past, we anticipate that the same trend will soon take place in 3G networks. Although mobile operators currently make available pertinent services, the available resources of the underlying networks for the delivery of rich data remain in-herently constrained. At this stage and in light of large numbers of users moving fast across cells, 3G networks may not be able to warrant the needed quality-of-service requirements. The support for streaming services necessitates the presence of content or media servers properly placed over the 3G network; such servers essen-tially become the source for streaming applications. Evidently, a centralized approach in organizing streaming content might lead to highly congested media-nodes which in presence of moving users will certainly yield increased response times and jitter to user requests . In this paper, we propose a workaround that enables 3G networks to offer uninterrupted video-streaming services in the presence of a large number of users moving in high-speed. At the same time, we offer a distributed organization for the network's media-servers to better handle over-utilization.", "fulltext": "INTRODUCTION\nThe third generation (\n3G) mobile phone system UMTS enables\nbetter quality and allows for more convenient use of multimedia\nmessaging and video-services by offering higher bandwidth and\nlower latency than its GSM and GPRS predecessors [15, 1].\nUMTS\nfurnishes upto 2Mbps rates for indoor and upto 384Kbps for outdoor\nenvironments. Clearly, much improvement in terms of allo-cated\nresources has been made for the handling of \"rich\" data including\nmultimedia messages and video-services. Nevertheless, the\navailable resources still present significant limitations for the scale\nup of services when very large number of clients are present in a\ncell of a network. Perhaps, the most daunting challenge comes from\nmoving users who access multimedia data and video feeds with the\nhelp of their mobile phones and PDAs while traveling on either\nprivate vehicles or mass transportation means such as commuter\ntrains and public busses. Evidently, a large number of concurrent\nconnections soliciting data resources in a cell and being handled\nin real-time pose significant capacity problems for the underlying\n3G network. The situation becomes even more challenging when\nusers attempt to follow streaming sources while on the move. We\nconsider streaming services to be of key importance as they will\nultimately offer information on demand for the moving user in any\ngeographic position and at any time. In order to facilitate video\nstreaming over\nUMTS networks a number of issues have to be addressed\nso that users do not experience delays and discontinuities\nduring playback. The two core aspects that require attention are\nthe variations of the available bandwidth as users enter and leave\ncells as well as the effective management of handovers as roaming\nusers attach to different base-stations along their trajectory. The\nproblem of graceful transition when moving between base-stations\nbecomes more critical when the users are on high-speed motorways\n. In this case, handovers become more frequent and traffic\nload at successive base-stations may vary. In this paper, we outline\nthe above emerging problem and propose a scheme that allows for\nnot only improved network resource sharing but also for enhanced\nmanagement of streaming sources to the mobile user. 
It is expected\nthat base-stations are to transmit in different bitrates throughout the\njourney of an individual as cells will undoubtedly present diverse\nlevels of congestion and availability of connections.\nWhen considering vehicular users in general, one can exploit the\nfact that the user's trajectory can be predicted in a satisfactory manner\n. An early method to attain this goal is to keep an aggregate history\nof observations made regarding the movement of users within\neach cell [9]. Based on this information, probability density functions\nfor the prediction for the next-cell-to-move can be derived and\nused. Traffic authorities imposed speed limits and road signals can\n65\nalso assist in the more accurate estimation of a user's average speed.\nIn addition, the user's direction can be predicted reasonably well by\nkeeping track of her trajectory thus far. Although a model for precise\nprediction is beyond the scope of this paper, we can assume\nthat there are already techniques that can offer a good estimation\nof a moving user path. For instance, an individual's geographic\nlocation could be tracked with the assistance of UMTS\nLocation\nService (LCS) [2] that can identify the cell that a user presently\nappears in. If a user is moving along a highway, one could easily\nestimate not only the direction of his movement but also his average\nspeed along a given trajectory. Finally, the soon anticipated incorporation\nof Global Positioning System (GPS) receivers into mobile\nphones through\nA-GPS features [30] will help in the very accurate\nuser positioning and extraction of their movement characteristics.\nIt is our conjecture that at this stage, simply knowing the overall\ndirection of a user's trajectory in conjuction with the highway that\nshe travels on can ensure timely video-streaming and playout continuity\nfor users.\nIn our streaming environment, there exist three distinct types of\nsynergistic computing systems: media-servers, base-stations, and\nuser equipment. These systems are organized in three functional\nlayers as Figure 1 depicts. The role of the media-servers is to predominantly\nmanage content in a highly distributed fashion where\nfast content retrieval and data resilience can be guaranteed. Base-stations\nhandle all user initiated connections to the\n3G network and\nthrough their channels offer users requested data. The last tier of\nFigure 1 consists of cellular phones and PDAs equipped with appropriate\nvideo-players and featuring minimal buffer capabilities to\nsupport streaming.\nBase Station\nMedia Server\nBase Station\nBase Station\nMedia Server\nBase Station\nWCDMA radio interface\nFigure 1: Three-Tier Organization for Streaming Provision.\nMedia and base-stations are internetworked via high-speed wired\nlinks while\nUMTS offer wireless connections between user equipment\nand base-stations. This distributed media-server architecture\nprovides for dividing of large video streams into multiple segments\n[34, 14]. A media-server initially retrieves a solicited video\nobject either from its storage units or from another remote media-server\n. In this paper, we take the approach that instead of transmitting\nthe entire object to a single base-station, we first segment it\nand then forward its segments to the base-stations along the user's\npath. Our rationale is that an individual base-station handles only a\nsection of the video file; the size of the section in discussion is commensurate\nto the duration of a user's trip inside the cell. 
Clearly,\nvideo-object segmentation reduces both network transmission costs\nbetween media and base-stations and start-up latencies that users\nexperience upon cell entrance. On the other hand, segmentation\nmight get more complex if a user remains longer in a cell than\nher estimated time and so she may face delays in the reception of\nframes. We address this issue by continual monitoring of both user\nspeed and position and by doing so giving the base-station the option\nto receive additional video increments and sufficiently feed a\nuser at all times.\nOur work requires minimal buffer capabilities for mobile stations\nso that a sufficient number of frames can be accommodated.\nBuffer presence assures that the playout does not stop if the base-station\nemits at lower bitrate due to\n3G network congestion. We\npropose a rate adaptation scheme which allows a base-station to\nadjust its transmitting bitrate at any time according to base-station\nload and the states of the client-side buffer. The\nUMTS streaming\nclass defines that the bitrate assigned to a moving user is guaranteed\neven though it might be less than the maximum video bitrate [15].\nThe base-station's decision of accepting a video streaming session\nhas an impact on all subsequent base-stations that follow up on the\nvideo delivery process. While a streaming session is in progress,\nthe load of base-stations-to-service may dynamically change and\npotentially lead to session drops. Such drops are highly undesirable\nand we adopt a policy to address this issue. Our proposed scheme\ngives a base-station the opportunity to appropriately alter the transmission\nbitrate by taking into account the current base-station load\nand simultaneously ensuring that the client buffer does not starve.\nThe rest of the paper is organized as follows: Section 2 presents\nthe overall system architecture and examines the interaction between\nmedia-servers and base-stations. Section 3 proposes our bitrate\nadaptation scheme and Section 4 discusses the results of our\npreliminary experimentation. Finally, related work and conclusions\ncan be found in Sections 5 and 6 respectively.\nMEDIA-SERVERS/BASE-STATIONS INTERACTION\nThe fast movement of users via different cells of the\n3G network\nimposes a set of new requirements for the entire video delivery\nsystem. As the user relocates rapidly, she faces a high number\nof handovers during her journey and, as a consequence, a large\nvideo stream has to be fetched from different base-stations in order\nto warrant continuous playback. As suggested earlier, we assume\nthe deployment of dedicated media-servers which undertake both\nthe storage and distribution of video-streams to underline base-stations\n. It is imperative that media-servers, base-stations, and user-equipment\ninvolved in a streaming session must cooperate in order\nto guarantee QoS for the video reception of the moving individual\n. In this section, we outline our overall architecture, discuss the\ncontent delivery process that media-servers carry out, and present\nspecific algorithms for video segmentation and content distribution.\n2.1\nArchitecture\nThe three distinct types of cooperative computing systems\n(shown in Figure 1) organized in a multi-tier architecture constitute\nour proposed streaming environment. We assume that base-stations\ncommunicate with the mobile stations through the\nWCDMA radio\ninterface [15]. 
Each cell of the UMTS network is served by a different base-station whose responsibility is to deliver the video streams to its constituent mobile users. A streaming service necessitates the use of media servers that handle the storage and delivery of video files [3]. Although we could adopt a centralized approach to accommodate the media delivery, high contention and resource over-utilization would greatly impact the response times to user requests. Clearly, a distributed approach that webs media-servers together is required. High-speed wired links connect these servers, and all of them share the required meta-data structures.
Due to the incurred costs and the fact that users move at high speeds, having a dedicated media server for each base-station would be a poor decision. If the mobile user is traveling at a speed of 100 km/h and the cell radius is 0.7 km, then he will pass through the given cell in 50.4 seconds at maximum, assuming a hexagonal cell shape. This implies that the frequent handovers taking place increase the interaction among the media-servers that have to be involved throughout the streaming session. Furthermore, in order to avoid under-utilization of media-servers and strike a balance in their aggregate use for facilitating streaming, we group 3G cells into groups as Figure 2 depicts. This assignment is expected to happen in a static manner, although it could be modified to reflect emerging new realities in the core network. In this regard, Figure 2 shows a network layout in which sets of sixteen cells are configured to function as a group. In this example, the mobile user is currently in a cell of group A and is heading towards group F. The media-servers that can be involved in the delivery of video objects are A, D, E and F. Server A is expected to interact with server D, D with E, and E with F. In this chained fashion, we anticipate that the media servers notify each other about the streaming session of the oncoming mobile user. In addition, the media-servers send and receive, in pipelined fashion, the video object under transmission.

Figure 2: Grouping of Cells

There may be other interactions as well. For instance, there must be cooperation between media servers A and F if the requested video object is initially located at F. A media server accepts video requests from base-stations residing in its group; if it does not currently have the object, it is responsible for locating it using the meta-data structure and fetching it. Due to the location awareness of our approach, we assume that each server predominantly stores streams specific to its own geographic area. For instance, if the route drawn in Figure 2 crosses a county, video clips showing traffic conditions at specific points ahead may be requested. Similarly, in a city setting, such requests may entail multimedia location-based virtual presentations.
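To make the coordination above concrete, the following minimal Python sketch computes the per-cell dwell time along a predicted trajectory and the ordered chain of media-server groups to notify. It only illustrates the scheme described in this section; the Cell structure, the function names and the numeric values are assumptions, not part of the paper's system.

```python
# Minimal sketch of trajectory-driven media-server coordination.
# All names (Cell, plan_notifications) and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    group: str        # media-server group (e.g. "A", "D", "E", "F")
    path_km: float    # road length the trajectory covers inside the cell

def dwell_time_s(path_km: float, speed_kmh: float) -> float:
    """Upper bound on the time a user spends inside one cell."""
    return path_km / speed_kmh * 3600.0

def plan_notifications(trajectory: list[Cell], speed_kmh: float):
    """Per-cell dwell times plus the chain of media-server groups to notify."""
    dwell = [(c.cell_id, dwell_time_s(c.path_km, speed_kmh)) for c in trajectory]
    chain = []
    for c in trajectory:                 # preserve order, drop repeats
        if not chain or chain[-1] != c.group:
            chain.append(c.group)
    return dwell, chain

if __name__ == "__main__":
    # 1.4 km across a hexagonal cell of radius 0.7 km at 100 km/h -> 50.4 s,
    # matching the worked example in the text.
    route = [Cell("c1", "A", 1.4), Cell("c2", "A", 1.2), Cell("c3", "D", 1.4)]
    print(plan_notifications(route, speed_kmh=100.0))
```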
2.2 Content Distribution
Provided that a media server has a video clip in place, a straightforward approach would be to transmit the object to all base-stations operating in cells located on or near the user's trajectory. This is not only wasteful in both network and base-station resources but also increases the user-perceived playout latency. Therefore, we resort to using video segmentation [14], [21] in order to reduce the network transmission costs between the video-holding media server and its subordinate base-stations. Segmentation also decreases the start-up latency that users experience upon cell entrance.
The length of a video-segment, denoted S_t, sent to each base-station is proportional to the average time that the user is expected to stay in the specific cell. The process of segmenting the video streams into chunks of frames of specific duration assumes that the media servers are aware of the underlying cell configuration. In particular, a media-server has to be aware of the precise manner in which a motorway cuts across its subordinate cells, as well as the direction and the speed of moving users. With this data available, the media-server in question can approximate the time that a user spends in a cell. For example, if a motorist moving at 100 km/h has just entered a cell and departs after traversing a 1 km route portion, the server can compute the duration of the user's stay to be approximately 36 seconds. The media-server can capitalize on this very information to appropriately segment the streamed video; it only dispatches enough frames for a playout period of 36 seconds.
The duration of a user's presence within a cell may vary according to speed changes, with lower speeds clearly leading to elongated stays in the cell and vice versa. Should the speed be decreasing, the base-station will ultimately require more frames from the media server than the number predicted when the user appeared in the cell. Such a request constitutes a "cache miss", which will not be noticed by the user if it is detected on time and acted upon by the coordinating base-station. Imposing a minimum threshold on the number of frames always available for delivery at a base-station may help overcome such "cache misses". Therefore, when the number of frames awaiting transmission at a base-station falls below the abovementioned threshold, the base-station signals its need for additional frames to its overseeing media-server; should the latter act upon this request, additional frames arrive on time at the base-station for delivery. On the other hand, as soon as a user increases speed, she will depart the cell earlier than initially anticipated. The drawback here is that the media-server has provided the base-station with more frames than those eventually needed. During the handover process, the coordinating media-server has to generate a video-fragment whose opening contains frames that have already been transmitted to the previous base-station but are not yet seen by the user.
Our approach allows a base-station to dynamically alter the transmission bitrate according to its current load. Under light load, a base-station may opt to increase the transmission rate for a specific video-stream, thus leading to potential frame shortage. To easily avoid such shortage, we use the minimum allowed vehicle speed to compute the duration of the video-segment S_t to be transported to base-stations:

S_t = Distance / MinimumSpeed   (1)

In most freeways there are authority-posted limits for the minimum allowed speed on each road segment.
As media-servers are aware of the geographic area that they serve, such minimum speed rates are statically designated for each cell in their jurisdiction. Evidently, the video-segment duration that we potentially use as a safety margin against frame shortage satisfies:

S_t = Distance / MinimumSpeed >= Distance / AverageSpeed   (2)

Algorithms 1 and 2 depict the video segmentation and distribution that we follow in our media distribution. Upon a new video-streaming request, we assume that the media-server can efficiently retrieve the corresponding video-file either from local storage or from remote servers via its low-latency/high-bandwidth wired networking infrastructure. The identification of the user's current location and the precompiled knowledge of the traveled distance within a cell, in conjunction with the minimum allowed speed on the pertinent highway segments, permit the estimation of the maximum user stay S_t in a specific cell. Subsequently, the media-server can create the first segment of frames needed for transmission via the base-station to the requesting client. The size of a video-segment is given by V = Σ_{i=1}^{N} F_i, where F_i is the size of the i-th frame and N is the number of frames in the segment; we can easily compute N by multiplying the frame rate (frames/second) by the duration of stay in a cell, S_t.

Algorithm 1 Video Segmentation at Media-Server
1: Get minimum user speed MinSpeed
2: Get PathLength in cell range
3: p <- last frame transmitted
4: if (New Session) then
5:   S_t = PathLength / MinSpeed
6: else
7:   // Shortage of frames
8:   S_t = δ · PathLength / MinSpeed
9:   // with δ << 1
10: end if
11: V <- Σ_{i=p+1}^{p+N} F_i, where F_i is the size of the i-th frame and N = FrameRate · S_t

In light of frame shortage, our video-segmentation algorithm dispatches an increment of frames to the base-station in need. This increment is defined as a fixed fraction δ of the length of S_t in the current cell (lines 7-9 of Algorithm 1). Requests for such increments may occur multiple times before a motorist leaves a cell, due to congestion.
A handover might find a moving user either serviced by a base-station in the realm of the current media-server or under the authority of a completely new media-server along the motorist's path. In the first case, the media-server initiates the delivery of the next video-fragment to the next base-station encountered. The just-departed base-station can help in determining the appropriate stream position p from which the segmentation will have to resume. The duration of the video-segment is computed anew using the same algorithm, which now takes into consideration the data points of the new cell. Clearly, the length of the route as well as the designated minimum speed limits may differ from those encountered in the previous cell.
In the second scenario, a handover may force a user to operate in an entirely new group of cells supported by a new media-server. In general, the portion of the "not-yet-seen" stream has to be forwarded from the previous to the new media-server, unless the latter already maintains its own copy. If we are not interested in reducing the transmission costs, we can transport the entire video object to the new media-server using the assumed high-speed link. The media-server now in charge takes over the session identifier of the moving user and, along with user state data from the previous location, can help coordinate the delivery of the video in the new cell.
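The short Python sketch below mirrors the segmentation rule of Algorithm 1, including the resume position p used when segmentation continues after a handover. The function names, the value of δ and the toy frame sizes are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of the segmentation rule in Algorithm 1. Frame sizes, the
# increment fraction delta, and all function names are illustrative assumptions.

def segment_duration_s(path_length_km: float, min_speed_kmh: float,
                       new_session: bool, delta: float = 0.1) -> float:
    """S_t for a new session, or a small increment (delta << 1) on a frame shortage."""
    s_t = path_length_km / min_speed_kmh * 3600.0
    return s_t if new_session else delta * s_t

def build_segment(frame_sizes_bytes: list, p: int, frame_rate: float, s_t: float):
    """Frames p+1 .. p+N (as a 0-based slice) and the segment size V = sum of F_i."""
    n = int(round(frame_rate * s_t))
    segment = frame_sizes_bytes[p:p + n]
    return segment, sum(segment)

if __name__ == "__main__":
    frames = [1500] * 10000                                  # toy constant-size frames
    s_t = segment_duration_s(1.0, 100.0, new_session=True)   # 36 s, as in the text
    seg, v = build_segment(frames, p=0, frame_rate=25.0, s_t=s_t)
    print(len(seg), v)                                       # 900 frames, 1.35 MB
```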
To enhance coordination among media-servers in the highest level of Figure 1, prefetching could be used [7, 34]. We could deploy prefetching of video-segments to media-servers and/or base-stations to facilitate playout continuity and minimize the start-up latencies.

Algorithm 2 Video Distribution from Media-Server to Base-Station(s)
1: while (OutstandingRequests) do
2:   if New Session then
3:     Start session (user's location, video stream, cell ID)
4:     if Video Stream Not in Storage then
5:       Get Video Stream from corresponding Media-Server
6:     end if
7:   end if
8:   if (not(Handover)) then
9:     Apply Video Segmentation Algorithm
10:    Send Video-Segment V to Base-Station
11:  else
12:    if (new Base-Station within Media-Server realm) then
13:      Apply Video Segmentation Algorithm
14:      Send Segment V to Base-Station
15:      Send playback position p to new Base-Station
16:    else
17:      Send Video Stream to next Media-Server
18:      Send playback position p to next Media-Server
19:    end if
20:  end if
21: end while

Users moving with similar speed and close to the streaming user can benefit from the already segmented video stream and start the playout immediately. Caching efficiency is limited by the fact that only users with similar traveling behavior may use the video segments. We can overcome this limitation if the starting point of the video segment at the next cell corresponds to users traveling at the maximum speed within the current cell. At the same time, the total size of the segment caters for users that travel at the minimum speed within the next cell, thus remaining longer in the cell's range. This ensures that successive base-stations hold a sufficient amount of frames to serve users traveling at different speeds.

RATE ADAPTATION
In this section, we propose a rate adaptation scheme whose objective is to better serve the overall needs of fast-roaming users. More specifically, we present a mechanism used by base-stations to control the rate at which they transmit video to each user when the cell becomes overloaded and the transmission bitrate eventually needs to be decreased. In light of this reduction, we seek ways to avoid discontinuities in user playback and to lower cell bandwidth over-utilization, thus reducing the probability of a session drop.
While focusing on bitrate management between the second and last tiers of Figure 1, we assume that the pertinent video-segment data is available at a base-station. Upon session initiation, the size Q of the mobile device buffer becomes known to the managing base-station. In general, we assume that a video-object is divided into frames of constant duration. Frames that belong to the same file vary in size depending on the encoding rate and the scene content. A time-domain perspective allows us to control the transmission rate by examining the time interval between transmissions of successive frames rather than their respective sizes. If the inter-departure time corresponds to the rate instructed by the file's frame rate (typical frame rate values are 25 frames/sec for the PAL color system, which corresponds to an inter-departure time of 40 msec, and 30 fps for the NTSC system, which corresponds to 33.3 msec), the transmission bitrate corresponds to the file's encoding bitrate. Alterations in the inter-departure times result in inversely proportional changes in the transmission bitrate.
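The sketch below simply illustrates the inverse relationship just stated between the frame inter-departure interval and the effective transmission bitrate; the bitrate value and function names are illustrative assumptions.

```python
# Minimal illustration of the interval/bitrate relationship stated above.
# Values and names are illustrative assumptions.

def nominal_interval_ms(frame_rate_fps: float) -> float:
    """Inter-departure time that matches the file's frame rate (e.g. 40 ms for PAL)."""
    return 1000.0 / frame_rate_fps

def effective_bitrate(encoding_bitrate_bps: float, nominal_ms: float,
                      actual_ms: float) -> float:
    """Stretching the interval by a factor f scales the delivered bitrate by 1/f."""
    return encoding_bitrate_bps * (nominal_ms / actual_ms)

if __name__ == "__main__":
    t = nominal_interval_ms(25.0)                   # 40 ms (PAL)
    print(effective_bitrate(384_000, t, 1.25 * t))  # 25% longer gaps -> 307200 bps
```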
Let {X^i_k}_{k>=1} denote the departure process of the video frames from the base-station for the i-th user. If τ^i_k is the departure time of frame k, then X^i_k = τ^i_{k+1} − τ^i_k is the inter-departure interval for the k-th frame. In the absence of buffering capabilities on the mobile device, the smoothness of {X^i_k} is critical for the smoothness of playback at the user's end. Thus, the following should hold:

P{X^i_k = T} ≈ 1   (3)

where T is the inter-departure interval specified by the video-object frame rate. The buffer support that we assume available at the user end enables the modification of the {X_k} process, reflecting modifications to the actual transmitting bitrate of the base-station.
Video Streaming modules are integral parts of the base-station configuration, and each such module handles the transmission of one video-stream. Hence, a segment of a video-stream is assigned to an instance of a Video Streaming module for final delivery to the user's equipment. A Rate Adaptation (RA) element is assigned to each user session for the specified video stream. An RA element is aware of the user's buffer size and is responsible for forwarding video frames to the actual Transmitter of the base-station. As information about the station's load is fed back by the Transmitter, the Rate Adaptation element regulates the inter-departure process of video frames from the station to a user in a way that preserves playback continuity. Figure 3 depicts the interaction among these two elements and the Transmitter at a base-station that serves k concurrent sessions for the same video-object.

Figure 3: Rate Adaptation Modules within a base-station

The operation of the Rate Adaptation element is governed by periodic time intervals of constant duration, termed Control Cycles. Operating in the time domain, the module is aware of the exact number of frames the media player at the user end will need over a specific period of time to ensure smooth playback. Let A denote the duration of the control cycle. Also, let Q_A >= 0 be the occupancy (i.e., number of frames) of the buffer at the beginning of the control cycle and N_A the number of frames that will be reproduced at the user end during the control cycle. Since N_A frames are requested by the media player and Q_A frames are already accumulated, the rate adapter needs to transmit only N_A − Q_A frames at minimum over the control cycle.
The video frame rate instructs that a frame be transmitted every T = A / N_A msec. Each one of the N_A − Q_A frames transmitted at minimum during the control cycle will depart the base-station at longer intervals equal to T' = A / (N_A − Q_A). The initial inter-departure time has been spread by a tolerance factor ε (ε >= 0), where:

A / (N_A − Q_A) = (1 + ε) · A / N_A   =>   ε = Q_A / (N_A − Q_A)   (4)

The tolerance factor ε is a parameter of the control cycle; the expression above may turn negative only when Q_A > N_A. Thus, a more specific definition of ε would be

ε = Q_A / (N_A − Q_A) · 1{Q_A <= N_A} + ∞ · 1{Q_A > N_A}   (5)

If the RA element forwards frames at the rate instructed by the tolerance factor, the transmission bitrate over the control cycle will be equal to B / (1 + ε), where B is the encoding bitrate of the video-stream. A control cycle during which the base-station transmits at the minimum bitrate instructed by ε is called a degraded cycle. A degraded cycle will lead to zero buffer occupancy at the end of the control cycle, and the tolerance factor for the next control cycle will be equal to zero. Therefore, no two successive degraded cycles may occur.
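As a worked illustration of Eq. (4)-(5), the following sketch computes the tolerance factor and the longest admissible inter-departure interval for one control cycle. The symbol names follow the text; everything else (function names, numeric example) is an assumption.

```python
# Minimal sketch of the tolerance factor in Eq. (4)-(5); names other than
# epsilon, A, N_A, Q_A are illustrative assumptions.
import math

def tolerance_factor(q_a: int, n_a: int) -> float:
    """epsilon = Q_A / (N_A - Q_A) when Q_A <= N_A, unbounded otherwise."""
    if q_a > n_a:
        return math.inf          # buffer already covers the whole cycle
    return q_a / (n_a - q_a)

def degraded_interval_ms(cycle_ms: float, n_a: int, q_a: int) -> float:
    """Longest admissible inter-departure time, (1 + epsilon) * A / N_A."""
    eps = tolerance_factor(q_a, n_a)
    nominal = cycle_ms / n_a
    return math.inf if math.isinf(eps) else (1.0 + eps) * nominal

if __name__ == "__main__":
    # 5 s cycle, 125 frames to play (25 fps), 25 frames already buffered:
    # epsilon = 0.25, so the interval may stretch from 40 ms to 50 ms.
    print(tolerance_factor(25, 125), degraded_interval_ms(5000.0, 125, 25))
```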
Non-zero buffer occupancy at the beginning of a control cycle will be present only if the overall transmission rate over the previous control cycles exceeded B. This can be achieved if the RA element forwards frames at a higher rate when the station is under-utilized. Let σ denote the speed-up factor, i.e., the factor by which the bitrate increases in this case. An expression for the speed-up factor may be obtained if we consider that the maximum transmission bitrate will lead to a full user buffer at the end of the control cycle. If Q_A is the buffer occupancy at the beginning of the cycle, then the station may transmit at most Q − Q_A frames over the control cycle. In this case, each frame will be transmitted every A / (Q − Q_A) msec, with the inter-departure interval having been decreased by σ:

A / (Q − Q_A) = (1 − σ) · A / N_A   =>   σ = 1 − N_A / (Q − Q_A)   (6)

An upgraded cycle will transmit at a rate of B / (1 − σ). The speed-up factor may turn negative only when the free buffer space is less than the number of frames that will be played back during the control cycle. In this case, the cycle is forced to operate in degraded mode, so that we can avoid buffer overflow.
It is clear that the n-th control cycle may forward frames at a rate B_n in the range

B / max{(1 − σ), (1 + ε)} <= B_n <= B / (1 − σ)   (7)

The respective inter-departure process {X_n} will be in the range

(1 − σ) · T <= X_{n,k} <= max{(1 − σ) · T, (1 + ε) · T}   (8)

The RA element knows at any time the exact number of frames that have been transmitted to the user, and it also knows the time that has passed since session initiation, which corresponds to the number of frames the playback process has consumed. The difference between the two values denotes the user buffer occupancy, so no feedback mechanism is required as far as the user buffer occupancy is concerned. The algorithm followed by each Rate Adaptation element in the Video Streaming module is outlined in Algorithm 3.

Algorithm 3 Rate Adaptation element operation
1: // Executed at the beginning of every control cycle
2: // for user i
3: Q_A^i = FramesTransmit_i − FramesPlayed_i
4: ε = Q_A^i / (FramesCycle − Q_A^i)
5: σ = 1 − FramesCycle / (Q − Q_A^i)
6: MinInterval_i = (1 − σ) · T
7: if σ < 0 then
8:   MaxInterval_i = MinInterval_i
9: else
10:  MaxInterval_i = (1 + ε) · T
11: end if
12: Interval_i = MinInterval_i + (CellLoadPerc / 100) · (MaxInterval_i − MinInterval_i)

Since the duration of the control cycle is constant, multiple control cycles may occur during a user's presence in the range of a single cell, depending on the size of the cell and the user's speed. We assume that each cell handover always initiates a new control cycle.
Algorithm 3 allows for alterations of the transmission bitrate by providing upper and lower bounds (i.e., MinInterval_i and MaxInterval_i) to ensure the smoothness of the playout process. The choice of the actual bitrate within the specified range, at which the base-station transmits during a control cycle, is ultimately a function of the station's load at the time. This load is continually estimated with the help of the Transmitter module. This feedback enables each Rate Adaptation element to cater for buffer occupancy increases by taking advantage of periods of low system load. At the same time, by detecting high system load, the Rate Adaptation element lowers the transmission bandwidth, allowing more sessions to be accommodated while the playback process is not distorted.
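A minimal Python sketch of one pass of Algorithm 3 is given below, combining Eq. (4) and Eq. (6) to pick the inter-departure interval as a function of cell load. The parameter values and function name are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of Algorithm 3 (one control cycle for one user); all
# parameter values and names are illustrative assumptions.

def control_cycle_interval(frames_transmitted: int, frames_played: int,
                           frames_cycle: int, buffer_capacity: int,
                           nominal_interval_ms: float,
                           cell_load_pct: float) -> float:
    """Pick the frame inter-departure interval for this control cycle."""
    q_a = frames_transmitted - frames_played              # user buffer occupancy
    eps = q_a / (frames_cycle - q_a)                      # tolerance factor, Eq. (4)
    sigma = 1.0 - frames_cycle / (buffer_capacity - q_a)  # speed-up factor, Eq. (6)
    min_interval = (1.0 - sigma) * nominal_interval_ms
    if sigma < 0:                                         # avoid buffer overflow
        max_interval = min_interval
    else:
        max_interval = (1.0 + eps) * nominal_interval_ms
    # Heavier cell load pushes the interval towards its maximum (lower bitrate).
    return min_interval + (cell_load_pct / 100.0) * (max_interval - min_interval)

if __name__ == "__main__":
    # 5 s cycle at 25 fps (125 frames), 250-frame buffer, 25 frames buffered.
    print(control_cycle_interval(1025, 1000, 125, 250, 40.0, cell_load_pct=80.0))
```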
EVALUATION RESULTS
In order to reproduce and experiment with the behavior of our proposed architecture and bitrate adaptation scheme, we have set up a simulation testbed. We have assumed a user trajectory with a duration of 200 seconds. The user traverses numerous cells of different sizes. Each base-station is equipped with the Video Streaming module as described earlier. A Control Cycle of 5 seconds is adopted by all elements. The buffer size at the user equipment is assumed to be large enough to store 10 seconds of video, which is readily met by modern cellular phones and/or PDAs. We designate ten levels of base-station load, with the load changing at random times. The maximum duration of each load state is 30 seconds. The PAL color system is assumed for the video being transmitted, so the default inter-departure time for each frame is set at 40 msec.
At the beginning of each control cycle, the Rate Adaptation element applies the proposed Algorithm 3. The testbed initially computes the tolerance and speed-up factors, thus generating the acceptable range of inter-departure times. The actual inter-departure time for the control cycle is proportional to the station's load at the time. If the station is lightly loaded, the minimum inter-departure time (maximum bandwidth) is applied. Conversely, when the base-station is heavily loaded, the frames are forwarded to the transmitter at the minimum rate instructed by the maximum inter-departure time. For intermediate load levels, an appropriate value from the inter-departure time range is selected according to Algorithm 3 in a uniform fashion.
Figure 4 shows the evolution of the base-stations' load during the user's trajectory, over all cells that the individual travels in. Having defined a load of value 5 as "normal" load, the base-station load remains relatively high throughout the user trajectory, with a few very short periods of low load.

Figure 4: Base-station load during user trajectory

Figure 5 shows the applied transmission bitrate, along with the minimum and maximum bitrates allowed for every control cycle. The y-axis represents the percentage of the actual transmission bitrate with respect to the video encoding bitrate. The bitrate is inversely proportional to the inter-departure times, which are illustrated in Figure 6. If we compare the curves of Figures 5 and 4, we can easily establish that the actual bitrate applied is a function of the base-station load and the calculated allowed bitrate range. At times of high load, the applied bitrate is closer to the minimum acceptable bitrate and, conversely, at times of light load, the applied bitrate is closer to the maximum acceptable bitrate.
Figure 6 shows that the inter-departure times are indeed proportional to the base-station load.

Figure 5: Allowed transmission bitrate limits and applied transmission bitrate.

Figure 6: Applied inter-departure interval.

We show the user buffer occupancy throughout the trajectory in Figure 7. The buffer does not starve at any time, suggesting the usefulness of our proposed scheme. The occupancy increases at times where the base-station load falls below the normal load.
Under overall higher-than-normal base-station load settings, our tests show that playback continuity is preserved by only taking advantage of relatively small periods of station under-utilization to increase the transmission bitrate. The range of the acceptable transmission rate includes, at all times, the bandwidth already guaranteed by the network upon session acceptance. Therefore, the network was never forced to transmit at a higher-than-agreed bitrate. At times of station over-utilization, the decreased transmission bitrate allows for more calls to be accommodated, significantly decreasing the probability of a session drop.

Figure 7: User buffer occupancy.

A system that adopts no rate adaptation scheme would constantly require a transmission bitrate equal to 100% of the video encoding bitrate throughout the session. Although the bitrate of real-time video streaming sessions is guaranteed by the UMTS specifications, at near-capacity situations the network would have to either drop a session or be forced to transmit at a lower bitrate. The former case is clearly undesirable and the latter generates jitter effects for the end-user. This would happen even if the station gets overloaded only for a period of time equal to the proposed scheme's control cycle duration. Discontinuities in the playback process may be observed in a system adopting the proposed rate adaptation scheme as well, however only in the case where the base-station is constantly load-saturated, thus not allowing for any upgraded cycles to take place.

RELATED WORK
There has been a large amount of reported work in related areas that include caching for video systems, management of moving objects, and rate adaptation for streaming systems on wired networks. Data caching and mirroring has been proposed as a way to help the scalability of video delivery systems [32]. By placing content closer to the consuming clients, not only can network costs be curtailed but the load of streaming servers can also be reduced. Various aspects of the use of proxy servers for video objects have been examined in [14, 26, 23, 10]. In [14], the segmentation and caching of streaming objects is proposed and the merging of requests that are temporally related is investigated. This merging idea has been used in [8, 18, 11] to save bandwidth in light of requests for the same video object that arrive closely in time.
The partial caching of two successive intervals of a video stream is proposed in [10] as a way to speed up the serving of follow-up requests for the same video object. In [26], the caching of initial frames of a video object is used in a proxy setting to reduce start-up latency.
In the same direction, the storage of the bursty parts of a\nvideo-stream in a proxy is advocated in [34]; the remaining parts of\nthe video are directly retrieved from the source helping significantly\nreduce peak bandwidth requirements in the backbone. A caching\nmechanism for layered encoded multimedia streams is suggested in\n[23]; the objective of the technique is to selectively deliver stream\nquality by differentiating on the client network connection. Stream\nquality differentiation is also addressed in [24], in conjuction with\na seamless handoff mechanism for mobile users.\nA formal spatiotemporal framework for modeling moving objects\nand a query language is discussed in [27]. Efficient techniques\nfor indexing moving objects in one and two dimensions are\nproposed in [12] while in [5] the trade-offs for indexing schemes to\nanswer interval, rectangle, approximate nearest-neighbor, approximate\nfarthest-neighbor and convex-hull queries are examined. The\nindexing of current and anticipated positions of moving objects in\nthe context of location based services is examined in [25]. Much\nwork has been also reported in data broadcast and dissemination\nover wireless networks during the last decade [4, 22, 33, 19, 6].\nRate adaptation for wired streaming systems has been exten-sively\nstudied in [28, 20, 31, 29, 16, 17]. These studies assume\nan adaptable video encoding system that changes the encoding parameters\non the fly based on feedback information about the channel\nstate. The notion of cycle-based operation is used in [13] with\ncycles being successively alternating between good and bad cycles.\nOur work differs in that it does not require a prefetching period so\nthat an initial occupancy is built up in the buffer before the playback\nbegins. Also, our algorithm functions in a graceful manner when\nthe base-station load does not allow for aggressive use of channel\nresources.\nCONCLUSIONS\nIn this paper, we address the problem of efficient video delivery\nin real-time to high-speed users roaming a\n3G network. We propose\na network of media servers handling the content distribution\non top of the mobile environment that closely cooperates with the\n71\nbase-stations and user-equipment for the provision of continuous\nvideo playout. We segment video streams into variable-sized parts\naccording to the user's speed and traversal path through different\ncells. In this manner, we minimize the transmission costs between\nmedia-servers and base-stations as well as the start-up latency experienced\nby users during handover. We adopt the use of Video\nStreaming modules along with their Rate Adaptation elements with\nthe infrastructure of base-stations to ensure smoothness of the playout\nprocess. Preliminary experimentation results through simulation\nshow that the proposed scheme rapidly adapts to changes in\nload conditions at base-stations, thus minimizing the probability of\nbuffer starvation or even session drops. The low complexity of the\nproposed mechanism makes it suitable for real-time applications.\nREFERENCES\n[1] 3rd Generation Partnership Project. Universal Mobile\nTelecommunication System/IMT2000 Spectrum. Technical\nReport 6, UMTS Forum, 1998.\n[2] 3rd Generation Partnership Project. Stage 2 Functional Specification\nof Location Services in URAN. Technical Report (3G TR 25.923\nversion 1.4.0), UMTS Forum, 1999. Technical Specification\nGroup(TSG) RAN, Working Group 2 (WG2).\n[3] 3rd Generation Partnership Project. 
Transparent End-to-End Packet\nSwitched Streaming Service (PSS) General Description (Release 4).\nTechnical Report 3GPP-TS-26.233-V4.0.0, UMTS Forum, 2000.\nTechnical Specification Group Services and System Aspects.\n[4] S. Acharya, M.J. Franklin, and S. B. Zdonik. Balancing Push and\nPull for Data Broadcast. In Proceedimgs of SIGMOD 1997, Tucson,\nAZ, May 1997.\n[5] P.K. Agarwal, L. Arge, J. Erickson, and H. Yu. Efficient Tradeoff\nSchemes in Data Structures for Querying Moving Objects. In 12th\nAnnual European Symposium on Algorithms (ESA), pages 415,\nBergen, Norway, September 2004.\n[6] D. Aksoy, M. Altinel, R. Bose, U. Cetintemel, M. Franklin, J. Wang,\nand S. Zdonik. Research in Data Broadcast and Dissemination. In\nProceedings of International Conf. on Advanced Multimedia Content\nProcessing (AMCP), Osaka, Japan, November 1998.\n[7] P. Cao, E. W. Felten, A. R. Karlin, and K. Li. A Study of Integrated\nPrefetching and Caching Strategies. In Proceedings of ACM\nSIGMETRICS Conf., pages 188197, Ottawa, Canada, May 1995.\n[8] S. Chan and F. Tobagi. Caching schemes for distributed video\nservices. In Proceedings of the 1999 IEEE International Conference\non Communications (ICC'99), Vancouver, Canada, June 1999.\n[9] S. Choi and K. G. Shin. Predictive and Adaptive Bandwidth\nReservation for Hand-Offs in QoS-Sensitive Cellular Networks. In\nProceedings of ACMSIGCOMM, pages 155166, 1998.\n[10] A. Dan and D. Sitaram. A Generalized Interval Caching Policy for\nMixed Interactive and Long Video Environments. In Proceedings of\nIST/SPIE Multimedia Computing and Networking Conference, San\nJose, CA, January 1996.\n[11] A. Dan, D. Sitaram, and P. Shahabuddin. Dynamic Batching Policies\nfor an On-demand Video Server. Multimedia Systems, 4(3):112121,\n1996.\n[12] G. Kollios and D. Gunopulos and V.J. Tsotras. On Indexing Mobile\nObjects . In Proceedimgs of 1999 ACM SIGACT-SIGMOD-SIGART\nSymposium on Principles of Database Systems (PODS), Philadephia,\nPA, 1999.\n[13] M. Hassan, L. Atzori, and M. Krunz. Video Transport over Wireless\nChannels: A Cycle-based Approach for Rate Control. In Proceedings\nof the ACM Multimedia 2004 Conference. ACM Press, October 2004.\n[14] M. Hofmann, E. Ng, K. Guo, S. Paul, and H. Zhang. Caching\nTechniques for Streaming Multimedia over the Internet. Technical\nreport, Bell Laboratories, April 1999. BL011345-990409-04TM.\n[15] H. Holma and A. Toskala. WCDMA for UMTS Radio Access for\nThird Generation Mobile Communications. John Wiley & Sons Inc.,\nNew York, NY, 2nd edition, 2002.\n[16] C.-Y. Hsu, A. Ortega, and A.R. Reibman. Joint Selection of Source\nand Channel Rate for VBR Video Transmission Under ATM Policing\nConstraints. IEEE Journal of Selected Areas in Communications,\n15(6):10161028, 1997.\n[17] P.-C. Hu, Z-L. Zhang, and M. Kaveh. Channel Condition ARQ Rate\nControl for Real-time Wireless Video Under Buffer Constraints. In\nProceedings of the IEEE International Conf. on Image Processing,\nVancouver BC, Canada, September 2000.\n[18] K.A. Hua, Y. Cai, and S. Sheu. Patching: a Multicast Technique for\nTrue Video-on-demand Services. In Proceedings of the 6th ACM\nInternational Conference on Multimedia, pages 191200. ACM\nPress, 1998.\n[19] T. Imielinski, S. Viswanathan, and B. R. Badrinath. Data on Air:\nOrganization and Access. IEEE Transactions on Knowledge and\nData Engineering, (3):353372, 1997.\n[20] H.-J. Lee, T. Chiang, and Y.-Q. Zhang. Scalable Rate Control for\nMPEG-4 Video. 
IEEE Transactions On Circuits and Systems for\nVideo Technology, 10(9):878894, September 2000.\n[21] S.-J. Lee, W.-Y. Ma, and B. Shen. An Interactive Video Delivery and\nCaching System Using Video Summarization. Computer\nCommunications, 25(4):424435, March 2002.\n[22] E. Pitoura and P.K. Chrysanthis. Multiversion Data Broadcast. IEEE\nTransactions on Computers, 51(10):12241230, 2002.\n[23] R. Rejaie, H. Yu, M. Handley, and D. Estrin. Multimedia Proxy\nCaching Mechanism for Quality Adaptive Streaming Applications in\nthe Internet. In Proceedings of INFOCOM(2), pages 980989, 2000.\n[24] S. Roy and B. Shen abd V. Sundaram. Application Level Handoff\nSupport for Mobile Media Transcoding Sessions. In 12th\nInternational Workshop on Network and Operating System Support\nfor Digital Audio and Video, Miami, FL, 2002.\n[25] S. Saltenis and C.S. Jensen. Indexing of Moving Objects for\nLocation-Based Services. In Proceedings of the IEEE International\nConference on Data Engineering (ICDE), pages 463472, 2002.\n[26] S. Sen, J. Rexford, and D. F. Towsley. Proxy Prefix Caching for\nMultimedia Streams. In Proceedings of INFOCOM(3), pages\n13101319, New York, NY, 1999.\n[27] A.P. Sistla, O. Wolfson, S. Chamberlain, and S. Dao. Modeling and\nQuerying Moving Objects. In Proceedings of the 13th IEEE\nInternational Conf. on Data Engineering, Birmingham, UK, April\n1997.\n[28] H. Song and C.-C.-J. Kuo. Rate control for Low Bit Rate Video via\nVariable Encoding Frame Rates. IEEE Transactions On Circuits and\nSystems for Video Technology, 11(4):512521, April 2001.\n[29] W. Tawbi, F. Horn, E. Horlait, and J.-B. Stefani. Video Compression\nStandards and Quality of Service. The Computer Journal,\n36(1):4354, 1993.\n[30] Texas Instruments. Mobile Connectivity: Assisted-GPS.\nhttp://focus.ti.com/general/docs/wtbu, 2004.\n[31] T. Wiegand, M. Lightstone, T. Campbell, D. Mukherjee, and\nS. Mitra. Rate-Distortion Optimized Mode Selection for Very Low\nBit Rate Video Coding and the Emerging H.263 Standard. URL:\nciteseer.ist.psu.edu/wiegand95ratedistortion.html, 1999.\n[32] D. Wu, Y.T. Hou, W. Zhu, Y.-Q. Zhang, and J.M. Peha. Streaming\nVideo over the Internet: Approaches and Directions. IEEE\nTransactions on Circuits and Systems for video Technology,\n11(1):120, February 2001.\n[33] X. Yang and A. Bouguettaya. Broadcast-Based Data Access in\nWireless Environments. In Proceedings of the EDBT International\nConference, Prague, Czech Republic, 2002.\n[34] Z.-L. Zhang, Y. Wang, D.H.C. Du, and D. Shu. Video staging: a\nproxy-server-based approach to end-to-end video delivery over\nwide-area networks. IEEE/ACM Transactions on Networking,\n8(4):429442, 2000.\n72\n", "keywords": "mobile multimedia services;rate adaptation;real-time streaming;Streaming for moving users"} {"name": "213", "title": "Web Taxonomy Integration through Co-Bootstrapping", "abstract": "We address the problem of integrating objects from a source taxonomy into a master taxonomy. This problem is not only currently pervasive on the web, but also important to the emerging semantic web. A straightforward approach to automating this process would be to learn a classifier that can classify objects from the source taxonomy into categories of the master taxonomy. The key insight is that the availability of the source taxonomy data could be helpful to build better classifiers for the master taxonomy if their categorizations have some semantic overlap. 
In this paper, we propose a new approach, co-bootstrapping , to enhance the classification by exploiting such implicit knowledge. Our experiments with real-world web data show substantial improvements in the performance of taxonomy integration.", "fulltext": "INTRODUCTION\nA taxonomy, or directory or catalog, is a division of a set of\nobjects (documents, images, products, goods, services, etc.) into\na set of categories. There are a tremendous number of\ntaxonomies on the web, and we often need to integrate objects\nfrom a source taxonomy into a master taxonomy.\n\nThis problem is currently pervasive on the web, given that many\nwebsites are aggregators of information from various other\nwebsites [2]. A few examples will illustrate the scenario. A web\nmarketplace like Amazon\nmay want to combine goods from\nmultiple vendors' catalogs into its own. A web portal like\nNCSTRL\nmay want to combine documents from multiple\nlibraries' directories into its own. A company may want to merge\nits service taxonomy with its partners'. A researcher may want to\nmerge his/her bookmark taxonomy with his/her peers'.\nSingapore-MIT Alliance\n, an innovative engineering education\nand research collaboration among MIT, NUS and NTU, has a\nneed to integrate the academic resource (courses, seminars,\nreports, softwares, etc.) taxonomies of these three universities.\nThis problem is also important to the emerging semantic web [4],\nwhere data has structures and ontologies describe the semantics\nof the data, thus better enabling computers and people to work in\ncooperation. On the semantic web, data often come from many\ndifferent ontologies, and information processing across\nontologies is not possible without knowing the semantic\nmappings between them. Since taxonomies are central\ncomponents of ontologies, ontology mapping necessarily involves\nfinding the correspondences between two taxonomies, which is\noften based on integrating objects from one taxonomy into the\nother and vice versa [10, 14].\nIf all taxonomy creators and users agreed on a universal standard,\ntaxonomy integration would not be so difficult. But the web has\nevolved without central editorship. Hence the correspondences\nbetween two taxonomies are inevitably noisy and fuzzy. For\nillustration, consider the taxonomies of two web portals Google\nand Yahoo\n: what is \"Arts/Music/Styles/\" in one may be\n\"Entertainment/Music/Genres/\" in the other, category\n\"Computers_and_Internet/Software/Freeware\" and category\n\n\"Computers/Open_Source/Software\" have similar contents but\nshow non-trivial differences, and so on. It is unclear if a\nuniversal standard will appear outside specific domains, and\neven for those domains, there is a need to integrate objects from\nlegacy taxonomy into the standard taxonomy.\nManual taxonomy integration is tedious, error-prone, and clearly\nnot possible at the web scale. A straightforward approach to\nautomating this process would be to formulate it as a\nclassification problem which has being well-studied in machine\nlearning area [18]. Normally the classifier would be constructed\nusing objects in the master taxonomy as training examples, and\nthe source taxonomy would be completely ignored during\nlearning. 
However, the availability of the source taxonomy data could be helpful to build better classifiers for the master taxonomy if their categorizations have some semantic overlap, particularly when the number of training examples is not very large.
Possible useful semantic relationships between a master category C and a source category S include:
- C = S (identical): an object belongs to C if and only if it belongs to S;
- C ∩ S = ∅ (mutual exclusion): if an object belongs to S it cannot belong to C;
- C ⊇ S (superset): any object belonging to S must also belong to C;
- C ⊆ S (subset): any object not belonging to S also cannot belong to C;
- C and S overlap but neither is a superset of the other.
In addition, semantic relationships may involve multiple master and source categories. For example, a master category C may be a subset of the union of two source categories S_a and S_b, so if an object does not belong to either S_a or S_b, it cannot belong to C. The real-world semantic relationships are noisy and fuzzy, but they can still provide valuable information for classification. For example, knowing that most (80%) objects in a source category S belong to one master category C_a and the rest (20%) belong to another master category C_b is obviously helpful. The difficulty is that knowledge about those semantic relationships is not explicit but hidden in the data.
In this paper, we propose a new approach, co-bootstrapping, to enhance the classification by exploiting such implicit knowledge. Our experiments with real-world web data show substantial improvements in the performance of taxonomy integration.
The rest of this paper is organized as follows. In Section 2, we give the formal problem statement. In Section 3, we describe a state-of-the-art solution. In Section 4, we present our approach in detail. In Section 5, we conduct experimental evaluations. In Section 6, we review the related work. In Section 7, we make concluding remarks.

PROBLEM STATEMENT
Taxonomies are often organized as hierarchies. In this work, we assume for simplicity that any object assigned to an interior node really belongs to a leaf node which is an offspring of that interior node. Since we now have all objects only at leaf nodes, we can flatten the hierarchical taxonomy to a single level and treat it as a set of categories [2].
Now we formally define the taxonomy integration problem that we are solving. Given two taxonomies:
- a master taxonomy M with a set of categories C_1, C_2, ..., C_M, each containing a set of objects, and
- a source taxonomy N with a set of categories S_1, S_2, ..., S_N, each containing a set of objects,
we need to find the categories in M for each object in N.
To formulate taxonomy integration as a classification problem, we take C_1, C_2, ..., C_M as classes, the objects in M as training examples, and the objects in N as test examples, so that taxonomy integration can be automatically accomplished by predicting the classes of each test example. Such a classification problem is multi-class and multi-label, in the sense that there are usually more than two possible classes and one object may be relevant to more than one class.

A STATE-OF-THE-ART SOLUTION
Agrawal and Srikant recently proposed an elegant approach to taxonomy integration by enhancing the Naive Bayes algorithm [2].
The Naive Bayes (NB) algorithm is a well-known text classification technique [18].
NB tries to fit a generative model for documents using training examples and applies this model to classify test examples. The generative model of NB assumes that a document is generated by first choosing its class according to a prior distribution of classes, and then producing its words independently according to a (typically multinomial) distribution of terms conditioned on the chosen class [15]. Given a test document d, NB predicts its class to be arg max_C Pr[C|d]. The posterior probability Pr[C|d] can be computed via Bayes's rule:

Pr[C|d] = Pr[C,d] / Pr[d] = Pr[C] Pr[d|C] / Pr[d] ∝ Pr[C] Pr[d|C] = Pr[C] ∏_{w∈d} Pr[w|C]^{n(d,w)},

where n(d,w) is the number of occurrences of w in d. The probability Pr[C] can be estimated by the proportion of training documents in C. The probability Pr[w|C] can be estimated by (n(C,w) + λ) / Σ_{w_i∈V} (n(C,w_i) + λ), where n(C,w) is the number of occurrences of w in training documents in C, V is the vocabulary of terms, and 0 < λ <= 1 is the Lidstone smoothing parameter [1]. Taking logs, we see that NB is actually a linear classifier (up to an additive constant that does not depend on C):

log Pr[C|d] ∝ log ( Pr[C] ∏_{w∈d} Pr[w|C]^{n(d,w)} ) = log Pr[C] + Σ_{w∈d} n(d,w) log Pr[w|C].

The enhanced Naive Bayes (ENB) algorithm [2] uses the categorization of the source taxonomy to get better probability estimations. Given a test document d that is known to be in category S in N, ENB predicts its category in M to be arg max_C Pr[C|d,S]. The posterior probability Pr[C|d,S] can be computed as

Pr[C|d,S] = Pr[C,d,S] / Pr[d,S] = Pr[S] Pr[C,d|S] / Pr[d,S] ∝ Pr[C,d|S].

ENB invokes a simplification that assumes d and S are independent given C, therefore

Pr[C,d|S] = Pr[C|S] Pr[d|S,C] = Pr[C|S] Pr[d|C] = Pr[C|S] ∏_{w∈d} Pr[w|C]^{n(d,w)}.

The probability Pr[w|C] can be estimated in the same way as in NB. For the probability Pr[C|S], ENB estimates it by

( |C| · |C∩S|^ω ) / Σ_i ( |C_i| · |C_i∩S|^ω ),

where |C| is the number of documents in C, |C∩S| is the number of documents in S classified into C by the NB classifier, and ω >= 0 is a parameter reflecting the degree of semantic overlap between the categorizations of M and N. The optimal value of ω can be found using a tune set (a set of objects whose categories in both taxonomies are known). The tune set can be made available via random sampling or active learning [2]. Taking logs, we see that ENB is still a linear classifier:

log Pr[C|d,S] ∝ log ( Pr[C|S] ∏_{w∈d} Pr[w|C]^{n(d,w)} ) = log Pr[C|S] + Σ_{w∈d} n(d,w) log Pr[w|C].

Comparing the classification functions of NB and ENB, it is obvious that all ENB does is to shift the classification threshold of its base NB classifier, no more and no less.
To achieve the multi-class multi-label classification that is required by taxonomy integration, we use the "one-vs-rest" method to create an ensemble of binary (yes/no) NB or ENB classifiers, one for each category C in M.
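A compact sketch of the two log-linear scores above is given below. The data structures, the smoothing constant and the example value of ω are illustrative assumptions; only the scoring formulas follow the text.

```python
# Compact sketch of the NB and ENB log-linear scores above.
# Data structures and constants are illustrative assumptions.
import math
from collections import Counter

def nb_log_score(doc_words, prior_c, word_counts_c, vocab, lam=0.1):
    """log Pr[C] + sum_w n(d,w) * log Pr[w|C], with Lidstone smoothing."""
    total = sum(word_counts_c.values()) + lam * len(vocab)
    score = math.log(prior_c)
    for w, n_dw in Counter(doc_words).items():
        score += n_dw * math.log((word_counts_c.get(w, 0) + lam) / total)
    return score

def enb_log_score(doc_words, c_size, c_overlap_s, all_sizes, all_overlaps,
                  word_counts_c, vocab, omega=5.0, lam=0.1):
    """Same term model, but the prior is replaced by Pr[C|S] ~ |C| * |C&S|^omega."""
    num = c_size * (c_overlap_s ** omega)
    den = sum(sz * (ov ** omega) for sz, ov in zip(all_sizes, all_overlaps))
    prior_cs = max(num / den, 1e-12) if den > 0 else 1e-12
    return nb_log_score(doc_words, prior_cs, word_counts_c, vocab, lam)
```

Note that with ω = 0 the ENB prior reduces to the plain NB prior |C| / Σ_i |C_i|, which is consistent with the observation that ENB only shifts the threshold of its base NB classifier.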
In 4.3, we discuss the advantages of our approach.\n4.1\n\nBoosting\nIn our approach to taxonomy integration, we utilize a powerful\nmachine learning method, boosting [17, 23], to build classifiers.\nThe main idea of boosting is to combine many weak hypotheses\n(simple and moderately accurate classification rules), into a\nhighly accurate classifier. In this paper, we focus on boosting for\ntext classification. Generalization to other kinds of data and\nlearning algorithms would be straightforward.\n4.1.1\n\nTerm-Features\nText objects (documents) can be represented using a set of term-features\n1\n2\n{\n,\n,...\n}\nT\nT\nT\nT n\nF\nf\nf\nf\n=\n. The term-feature\nTh\nf\n(1\n)\nh\nn\n\n\nof a given object x is a binary feature indicating the presence or\nabsence of\nh\nw\n(the h-th distinct word in the document collection)\nin x , i.e.,\n1 if\n0 if\nh\nTh\nh\nw\nx\nf\nw\nx\n\n\n=\n\n\n\n.\n4.1.2\n\nWeak Hypotheses\nLet\nX denote the domain of possible objects, and let Y be a set\nof k possible classes. A labeled example is a pair ( , )\nx Y\nwhere\nx\n\nX is an object and Y\n\nY is the set of classes which x\nbelongs to. We define [ ]\nY l\nfor l\n\nY to be\n1 if\n[ ]\n1 if\nl\nY\nY l\nl\nY\n+\n\n\n=\n\n\n\n.\nA hypothesis is a real-valued function :\nh\n\nR\nX Y\n. The sign\nof ( , )\nh x l\nis a prediction of [ ]\nY l\nfor x , i.e., whether object x is\ncontained in class l . The magnitude of ( , )\nh x l\nis interpreted as\na measure of confidence in the prediction.\nBased on a binary feature f , we are interested in weak\nhypotheses h which are simple decision stumps of the form\n1\n0\nif\n1\n( , )\nif\n0\nl\nl\nc\nf\nh x l\nc\nf\n=\n\n=\n\n=\n\n, where\n1\n0\n,\nl\nl\nc\nc\n\n.\n4.1.3\n\nAdaBoost Algorithm\nThe most popular boosting algorithm is AdaBoost introduced in\n1995 by Freund and Schapire [12]. Our work is based on a multi-class\nmulti-label version of AdaBoost, AdaBoost.MH [24, 25],\nwhich is described in Figure 1.\n\nGiven m training examples\n1\n1\n( , ),...,(\n,\n)\nm\nm\nx Y\nx\nY\nwhere each\ni\nx\n\nX ,\ni\nY\n\nY , AdaBoost.MH dynamically maintains a\ndistribution\nt\nD\n\nover all objects and classes. Initially this\ndistribution\n1\nD\nis uniform. In the t-th round, the optimal weak\nhypothesis\nt\nh\nis selected based on the set of training examples\nand the current distribution\nt\nD\n. Then a parameter\nt\n\nis chosen,\nand the distribution\nt\nD\nis updated in a manner that puts more\nweights on \"difficult\" examples (object-class pairs) that are\nmisclassified by\nt\nh\n. Please be referred to [24, 25] for the details\non computing optimal\nt\nh\nand\nt\n\n. This procedure repeats for T\nrounds. 
4.1.3 AdaBoost Algorithm
The most popular boosting algorithm is AdaBoost, introduced in 1995 by Freund and Schapire [12]. Our work is based on a multi-class multi-label version of AdaBoost, AdaBoost.MH [24, 25], which is described in Figure 1.

Given m training examples (x_1, Y_1), ..., (x_m, Y_m) where each x_i ∈ 𝒳, Y_i ⊆ 𝒴, AdaBoost.MH dynamically maintains a distribution D_t over all objects and classes. Initially this distribution D_1 is uniform. In the t-th round, the optimal weak hypothesis h_t is selected based on the set of training examples and the current distribution D_t. Then a parameter α_t is chosen, and the distribution D_t is updated in a manner that puts more weight on "difficult" examples (object-class pairs) that are misclassified by h_t. Please refer to [24, 25] for the details of computing the optimal h_t and α_t. This procedure repeats for T rounds. The final hypothesis H(x, l) is actually a weighted vote of weak hypotheses, H(x, l) = Σ_{t=1}^{T} α_t h_t(x, l), and the final prediction can be computed according to the sign of H(x, l).

  Given: (x_1, Y_1), ..., (x_m, Y_m) where each x_i ∈ 𝒳, Y_i ⊆ 𝒴.
  Initialize D_1(i, l) = 1/(mk).
  for t = 1, ..., T do
    Pass distribution D_t to the weak learner.
    Get weak hypothesis h_t: 𝒳 × 𝒴 → ℝ.
    Choose α_t ∈ ℝ.
    Update: D_{t+1}(i, l) = D_t(i, l) exp(-α_t Y_i[l] h_t(x_i, l)) / Z_t,
            where Z_t is the normalization factor.
  end for
  Output the final hypothesis: H(x, l) = Σ_{t=1}^{T} α_t h_t(x, l).

Figure 1: The boosting algorithm AdaBoost.MH.
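The loop in Figure 1 translates almost directly into code. The sketch below is our own and leaves the weak learner abstract (it must return the round's h_t and α_t), since the paper defers those details to [24, 25].

```python
import math

def adaboost_mh(xs, ys, label_space, weak_learner, T):
    # ys[i] is the set of labels of xs[i]; Y_i[l] = +1 if l in ys[i] else -1
    m, k = len(xs), len(label_space)
    D = {(i, l): 1.0 / (m * k) for i in range(m) for l in label_space}
    rounds = []
    for _ in range(T):
        h, alpha = weak_learner(xs, ys, D)            # h_t and alpha_t
        for (i, l) in D:
            y_il = 1.0 if l in ys[i] else -1.0
            D[(i, l)] *= math.exp(-alpha * y_il * h(xs[i], l))
        Z = sum(D.values())                           # normalization factor Z_t
        for key in D:
            D[key] /= Z
        rounds.append((alpha, h))
    return lambda x, l: sum(a * h(x, l) for a, h in rounds)   # H(x, l)
```

Classification then reduces to checking the sign of H(x, l) for each candidate label l.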
4.2 Co-Bootstrapping
Thus far we have completely ignored the categorization of N. Although M and N are usually not identical, their categorizations often have some semantic overlap. Therefore the categorization of N contains valuable implicit knowledge about the categorization of M. Hereby we propose a new approach, co-bootstrapping, to enhance the classification by exploiting such implicit knowledge.

4.2.1 Category-Features
If we have indicator functions for each category in N, we can imagine taking those indicator functions as features when we learn the classifier for M. This allows us to exploit the semantic relationship among the categories of M and N without explicitly figuring out what the semantic relationships are. More specifically, for each object in M, we augment the ordinary term-features with a set of category-features F_N = {f_N1, f_N2, ..., f_N|N|} derived from N. The category-feature f_Nj (1 ≤ j ≤ |N|) of a given object x is a binary feature indicating whether x belongs to category S_j (the j-th category of N), i.e.,

f_Nj = 1 if x ∈ S_j, and f_Nj = 0 if x ∉ S_j.

In the same way, we can get a set of category-features F_M = {f_M1, f_M2, ..., f_M|M|} derived from M to be used for supplementing the features of objects in N. The remaining problem is to obtain these indicator functions, which are initially not available.

4.2.2 Co-Bootstrapping Algorithm
When building the classifier for M, the training examples are the objects in M and the test examples are the objects in N. To leverage the categorization of N to reinforce classification, our classifier uses term-features F_T as well as category-features F_N. However, we do not know the exact values of F_N for the training examples.

Our proposed algorithm overcomes the above obstacle by utilizing the bootstrapping idea. Let B^r_T(F) denote a boosting-classifier for taxonomy T's categorization based on feature set F at step r. Initially we build a classifier B^0_N(F_T) based on only term-features, then use it to classify the objects in M (the training examples) into the categories of N; thus we can predict the value of each category-feature f_Nj ∈ F_N for each object x ∈ M. At the next step we will be able to build B^1_M(F_T ∪ F_N) using the predicted values of F_N for the training examples. Similarly we can build B^0_M(F_T) and B^1_N(F_T ∪ F_M). The new classifier B^1_N(F_T ∪ F_M) ought to be better than B^0_N(F_T) because B^1_N(F_T ∪ F_M) leverages more knowledge. Hence we can predict the value of each category-feature f_Nj ∈ F_N for each object x ∈ M more accurately using B^1_N(F_T ∪ F_M) instead of B^0_N(F_T), and afterwards we can build B^2_M(F_T ∪ F_N). Also B^2_M(F_T ∪ F_N) is very likely to be better than B^1_M(F_T ∪ F_N) because B^2_M(F_T ∪ F_N) is based on a more accurate prediction of F_N. This process can be repeated iteratively in a "ping-pong" manner. We name this approach co-bootstrapping since the two classifiers B^r_M(F_T ∪ F_N) and B^r_N(F_T ∪ F_M) collaborate to bootstrap themselves together. Figure 2 presents the co-bootstrapping algorithm, and Figure 3 depicts its process.

  Given: two taxonomies M and N.
  Build classifier B^0_M(F_T), then use it to predict the value of each
  category-feature f_Mi ∈ F_M for each object x ∈ N.
  Build classifier B^0_N(F_T), then use it to predict the value of each
  category-feature f_Nj ∈ F_N for each object x ∈ M.
  for r = 1, ..., R do
    Build classifier B^r_M(F_T ∪ F_N), then use it to predict the value of each
    category-feature f_Mi ∈ F_M for each object x ∈ N.
    Build classifier B^r_N(F_T ∪ F_M), then use it to predict the value of each
    category-feature f_Nj ∈ F_N for each object x ∈ M.
  end for
  For each object x ∈ N, if the value of its category-feature f_Mi ∈ F_M is
  positive, then we classify it into C_i ∈ M.
  For each object x ∈ M, if the value of its category-feature f_Nj ∈ F_N is
  positive, then we classify it into S_j ∈ N.

Figure 2: The co-bootstrapping algorithm.

Figure 3: The co-bootstrapping process.
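The algorithm in Figure 2 can be sketched as follows. This is our own illustrative rendering: `train` and `predict` stand in for AdaBoost.MH training and prediction, the feature dictionaries and the "M:" / "N:" prefixes are assumptions of ours, and each classifier is applied to the other taxonomy's objects using their exactly known category-features.

```python
def with_cat_features(x, cats, prefix):
    # add binary category-features such as "N:Comedy" on top of the term-features
    return {**x, **{prefix + c: 1 for c in cats}}

def co_bootstrap(objs_M, labels_M, objs_N, labels_N, train, predict, R=8):
    # objs_*: term-feature dicts; labels_*: known category sets in the object's
    # own taxonomy; train(examples, labels) -> classifier; predict -> label sets
    clf_N = train(objs_N, labels_N)                      # B^0_N(F_T)
    clf_M = train(objs_M, labels_M)                      # B^0_M(F_T)
    FN_on_M = predict(clf_N, objs_M)                     # predicted F_N for objects in M
    FM_on_N = predict(clf_M, objs_N)                     # predicted F_M for objects in N
    for _ in range(R):
        clf_M = train([with_cat_features(x, c, "N:") for x, c in zip(objs_M, FN_on_M)],
                      labels_M)                          # B^r_M(F_T U F_N)
        clf_N = train([with_cat_features(x, c, "M:") for x, c in zip(objs_N, FM_on_N)],
                      labels_N)                          # B^r_N(F_T U F_M)
        FM_on_N = predict(clf_M, [with_cat_features(x, c, "N:")
                                  for x, c in zip(objs_N, labels_N)])
        FN_on_M = predict(clf_N, [with_cat_features(x, c, "M:")
                                  for x, c in zip(objs_M, labels_M)])
    return FM_on_N, FN_on_M      # final category predictions for N's and M's objects
```

In the Figure 2 notation, `clf_M` after r iterations corresponds to B^r_M(F_T ∪ F_N) and `clf_N` to B^r_N(F_T ∪ F_M).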
4.3 Discussion
4.3.1 Why Choose Boosting
We have chosen to employ the boosting technique to build classifiers in our co-bootstrapping approach to taxonomy integration because of the following virtues.
- Boosting has shown outstanding classification performance on many kinds of data such as text documents [17, 23, 24].
- Boosting finds the optimal combination of heterogeneous weak hypotheses automatically, and therefore alleviates the problem of how to weight ordinary features (e.g., term-features) and category-features appropriately. In contrast, approaches based on other machine learning algorithms like Support Vector Machines (SVMs) [9] would require adjusting the relative combination weights, which is a non-trivial problem.
- Boosting generates descriptive and human-readable hypotheses as the final classifier, and the learned classifier is usually sparse despite the large feature set.
Although boosting looks like an ideal choice, other machine learning algorithms can also be utilized in the co-bootstrapping approach. We have not investigated this issue yet.

4.3.2 Comparison with ENB
Although ENB [2] has been shown to work well for taxonomy integration, we think that a more general approach is still attractive. It has been experimentally shown that AdaBoost is more promising than NB for text classification [24]. The co-bootstrapping approach allows more powerful machine learning algorithms like AdaBoost to be utilized.
Both ENB and our co-bootstrapping approach exploit the categorization of N to enhance classification. While all ENB does is shift the classification threshold of its base NB classifier (see Section 3), co-bootstrapping has the ability to achieve more complex adjustments to the classification function of its base classifier.
Furthermore, ENB needs a stand-alone tune set to find the optimal value of the parameter ω which controls the influence of source categorization information on classification, whereas co-bootstrapping based on boosting does not have such burdens. Although co-bootstrapping looks more effective, ENB still holds an advantage in efficiency.

EXPERIMENTS
We have collected 5 datasets from Google and Yahoo. Each dataset includes the slice of Google's taxonomy and the slice of Yahoo's taxonomy about websites on one specific topic, as shown in Table 1.
In each slice of taxonomy, we take only the top level directories as categories, e.g., the "Movie" slice of Google's taxonomy has categories like "Action", "Comedy", "Horror", etc. For each dataset, we show in Table 2 the number of categories occurring in Google and Yahoo respectively.
In each category, we take all items listed on the corresponding directory page and its sub-directory pages as its objects. An object (list item) corresponds to a website on the world wide web, which is usually described by its URL, its title, and optionally a short annotation about its content. Here each object is considered as a text document composed of its title and annotation. All documents are pre-processed by removal of stop-words and stemming.
For each dataset, we show in Table 3 the number of objects occurring in Google (G), Yahoo (Y), either of them (G∪Y), and both of them (G∩Y) respectively. The set of objects in G∩Y covers only a small portion (usually less than 10%) of the set of objects in Google or Yahoo alone, which suggests the great benefit of automatically integrating them. This observation is consistent with [2].

Table 1: The datasets.
  Dataset  Google                                Yahoo
  Book     /Top/Shopping/Publications/Books/     /Business_and_Economy/Shopping_and_Services/Books/Bookstores/
  Disease  /Top/Health/Conditions_and_Diseases/  /Health/Diseases_and_Conditions/
  Movie    /Top/Arts/Movies/Genres/              /Entertainment/Movies_and_Film/Genres/
  Music    /Top/Arts/Music/Styles/               /Entertainment/Music/Genres/
  News     /Top/News/By_Subject/                 /News_and_Media/

Table 2: The number of categories.
  Dataset  Google  Yahoo
  Book     49      41
  Disease  30      51
  Movie    34      25
  Music    47      24
  News     27      34

Table 3: The number of objects.
  Dataset  Google  Yahoo   G∪Y     G∩Y
  Book     10,842  11,268  21,111  999
  Disease  34,047  9,785   41,439  2,393
  Movie    36,787  14,366  49,744  1,409
  Music    76,420  24,518  95,971  4,967
  News     31,504  19,419  49,303  1,620

The number of categories per object in these datasets is 1.54 on average.
This observation justifies the necessity of building multi-class multi-label classifiers.

5.2 Tasks
For each dataset, we pose 2 symmetric taxonomy integration tasks: G←Y (integrating objects from Yahoo into Google) and Y←G (integrating objects from Google into Yahoo).
As described in Section 2, we formulate each task as a classification problem. The objects in G∩Y can be used as test examples, because their categories in both taxonomies are known to us [2]. We hide the test examples' master categories but expose their source categories to the learning algorithm in the training phase, and then compare their hidden master categories with the predictions of the learning algorithm in the test phase. Suppose the number of test examples is n. For G←Y tasks, we randomly sample n objects from the set G-Y as training examples. For Y←G tasks, we randomly sample n objects from the set Y-G as training examples. This is to simulate the common situation that the sizes of M and N are roughly of the same magnitude. For each task, we do such random sampling 5 times, and report the classification performance averaged over these 5 random samplings.

5.3 Measures
As stated in Section 2, it is natural to accomplish taxonomy integration tasks via building multi-class multi-label classifiers. To measure the classification performance for each class (category in M), we use the standard F-score (F1 measure) [3]. The F-score is defined as the harmonic average of precision (p) and recall (r), F = 2pr / (p + r), where precision is the proportion of correctly predicted positive examples among all predicted positive examples, and recall is the proportion of correctly predicted positive examples among all true positive examples. The F-scores can be computed for the binary decisions on each individual category first and then averaged over categories, or they can be computed globally over all the |M| × n binary decisions, where |M| is the number of categories in consideration (the number of categories in M) and n is the number of total test examples (the number of objects in N). The former way is called macro-averaging and the latter is called micro-averaging [27]. It is understood that the micro-averaged F-score (miF) tends to be dominated by the classification performance on common categories, and that the macro-averaged F-score (maF) is more influenced by the classification performance on rare categories [27]. Providing both kinds of scores is more informative than providing either alone.
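For concreteness, the macro- and micro-averaged F-scores described above can be computed as in the following sketch (our own helper, operating on predicted and true label sets per test example):

```python
def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_micro_f1(y_true, y_pred, categories):
    # y_true / y_pred: one set of categories per test example
    counts = {c: [0, 0, 0] for c in categories}        # per-category tp, fp, fn
    for truth, pred in zip(y_true, y_pred):
        for c in categories:
            t, p = c in truth, c in pred
            if t and p:
                counts[c][0] += 1
            elif p:
                counts[c][1] += 1
            elif t:
                counts[c][2] += 1
    maF = sum(f1(*counts[c]) for c in categories) / len(categories)
    tp, fp, fn = (sum(col) for col in zip(*counts.values()))
    miF = f1(tp, fp, fn)
    return maF, miF
```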
5.4 Settings
We use our own implementation of NB and ENB. The Lidstone smoothing parameter λ is set to an appropriate value, 0.1 [1]. The performance of ENB would be greatly affected by its parameter ω. We run ENB with a series of exponentially increasing values of ω, (0, 1, 3, 10, 30, 100, 300, 1000) [2], for each taxonomy integration task, and report the best experimental results. We use BoosTexter [24] for the implementation of AdaBoost, taking single words as terms. We set the number of boosting rounds T = 1000 and the co-bootstrapping iteration number R = 8 (see Figures 1 and 2). In the following sections, we denote the normal AdaBoost approach by AB, and denote the co-bootstrapping approach based on the AdaBoost algorithm by CB-AB.

5.5 Results
The experimental results of NB and ENB are shown in Table 4. We see that ENB really can achieve much better performance than NB for taxonomy integration.

Table 4: Experimental results of NB and ENB.
  Task  Dataset  NB maF  NB miF  ENB maF  ENB miF
  G←Y   Book     0.1286  0.2384  0.1896   0.5856
  G←Y   Disease  0.4386  0.5602  0.5230   0.6895
  G←Y   Movie    0.1709  0.3003  0.2094   0.5331
  G←Y   Music    0.2386  0.3881  0.2766   0.5408
  G←Y   News     0.2233  0.4450  0.2578   0.5987
  Y←G   Book     0.1508  0.2107  0.2227   0.5471
  Y←G   Disease  0.2746  0.4812  0.3415   0.6370
  Y←G   Movie    0.2319  0.4046  0.2884   0.5534
  Y←G   Music    0.3124  0.5359  0.3572   0.6824
  Y←G   News     0.2966  0.4219  0.3639   0.6007

The experimental results of AB and CB-AB are shown in Table 5. Obviously AB beats NB, which is consistent with the conclusion of [24]. Also we find that CB-AB works better than AB for taxonomy integration, which suggests that co-bootstrapping makes effective use of the categorization of N to enhance classification for M.

Table 5: Experimental results of AB and CB-AB.
  Task  Dataset  AB maF  AB miF  CB-AB maF  CB-AB miF
  G←Y   Book     0.1740  0.4499  0.2540     0.6030
  G←Y   Disease  0.5375  0.6674  0.6533     0.7703
  G←Y   Movie    0.1930  0.4892  0.3172     0.6716
  G←Y   Music    0.3316  0.5025  0.4851     0.6826
  G←Y   News     0.2150  0.4625  0.3083     0.6218
  Y←G   Book     0.2436  0.3853  0.3516     0.6341
  Y←G   Disease  0.3719  0.6350  0.4371     0.7287
  Y←G   Movie    0.2559  0.5214  0.3922     0.7154
  Y←G   Music    0.4369  0.6397  0.5799     0.7994
  Y←G   News     0.3774  0.4942  0.4340     0.6421

Figure 4 shows that the taxonomy integration performance increases along with the number of co-bootstrapping iterations, on the Book dataset. This implies that the two boosting-classifiers learned from the two taxonomies do mutually boost each other until they become stable.

Figure 4: The taxonomy integration performance increases along with the number of co-bootstrapping iterations, on the Book dataset.

The experimental results of ENB and CB-AB are compared in Figures 5 and 6. It is clear that CB-AB outperforms ENB consistently and significantly.

RELATED WORK
Most of the recent research efforts related to taxonomy integration are in the context of ontology mapping on the semantic web. An ontology specifies a conceptualization of a domain in terms of concepts, attributes, and relations [11]. The concepts in an ontology are usually organized into a taxonomy: each concept is represented by a category and associated with a set of objects (called the extension of that concept). The basic goal of ontology mapping is to identify (typically one-to-one) semantic correspondences between the taxonomies of two given ontologies: for each concept (category) in one taxonomy, find the most similar concept (category) in the other taxonomy. Many works in this field use a variety of heuristics to find mappings [7, 16, 19, 21]. Recently machine learning techniques have been introduced to further automate the ontology mapping process [10, 13, 14, 20, 26]. Some of them derive similarities between concepts (categories) based on their extensions (objects) [10, 13, 14], therefore they need to first integrate objects from one taxonomy into the other and vice versa (i.e., taxonomy integration).
So our work can be utilized as a basic component of an ontology mapping system.
As stated in Section 2, taxonomy integration can be formulated as a classification problem. The Rocchio algorithm [3, 22] has been applied to this problem in [14], and the Naïve Bayes (NB) algorithm [18] has been applied to this problem in [10], without exploiting information in the source taxonomy. To our knowledge, the most advanced approach to taxonomy integration is the enhanced Naïve Bayes (ENB) algorithm proposed by Agrawal and Srikant [2], which we have reviewed and compared with our approach.
In [6], AdaBoost is selected as the framework to combine term-features and automatically extracted semantic-features in the context of text categorization. We also choose AdaBoost to combine heterogeneous features (term-features and category-features), but it is for a different problem (taxonomy integration) and it works in a more complex way (through co-bootstrapping).
In [8], an approach called co-boosting is proposed for named entity classification. Essentially co-boosting is a co-training [5] method that attempts to utilize unlabeled data to help classification by exploiting a particular form of redundancy in data: each instance is described by multiple views (disjoint feature sets) which are both compatible and uncorrelated (conditionally independent). However, the multi-view assumption does not hold in the context of taxonomy integration: the set of category-features should not be considered as a view because category-features alone are not sufficient for classification and they are strongly correlated with term-features. In contrast to co-boosting (co-training), co-bootstrapping works with two taxonomies, not two views.

CONCLUSION
Our main contribution is to propose a new approach, co-bootstrapping, that can effectively exploit the implicit knowledge in the source taxonomy to improve taxonomy integration.
Future work may include: theoretical analysis of the co-bootstrapping approach, incorporating commonsense knowledge and domain constraints into the taxonomy integration process, and so forth.

ACKNOWLEDGMENTS
We would like to thank the anonymous reviewers for their helpful comments and suggestions.

Figure 5: Comparing the macro-averaged F-scores of ENB and CB-AB.

Figure 6: Comparing the micro-averaged F-scores of ENB and CB-AB.

REFERENCES
[1] Agrawal, R., Bayardo, R. and Srikant, R. Athena: Mining-based Interactive Management of Text Databases. in Proceedings of the 7th International Conference on Extending Database Technology (EDBT), Konstanz, Germany, 2000, 365-379.
[2] Agrawal, R. and Srikant, R. On Integrating Catalogs. in Proceedings of the 10th International World Wide Web Conference (WWW), Hong Kong, 2001, 603-612.
[3] Baeza-Yates, R. and Ribeiro-Neto, B. Modern Information Retrieval. Addison-Wesley, New York, NY, 1999.
[4] Berners-Lee, T., Hendler, J. and Lassila, O. The Semantic Web. Scientific American, 2001.
[5] Blum, A. and Mitchell, T. Combining Labeled and Unlabeled Data with Co-Training.
in Proceedings of the\n11th Annual Conference on Computational Learning Theory\n(COLT), Madison, WI, 1998, 92-100.\n[6] Cai, L. and Hofmann, T. Text Categorization by Boosting\nAutomatically Extracted Concepts. in Proceedings of the\n26th Annual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval\n(SIGIR), Toronto, Canada, 2003, 182-189.\n[7] Chalupsky, H. OntoMorph: A Translation System for\nSymbolic Knowledge. in Proceedings of the 7th\nInternational Conference on Principles of Knowledge\nRepresentation and Reasoning (KR), Breckenridge, CO,\n2000, 471-482.\n[8] Collins, M. and Singer, Y. Unsupervised Models for Named\nEntity Classification. in Proceedings of the Joint SIGDAT\nConference on Empirical Methods in Natural Language\nProcessing and Very Large Corpora (EMNLP), College\nPark, MD, 1999, 189-196.\n[9] Cristianini, N. and Shawe-Taylor, J. An Introduction to\nSupport Vector Machines. Cambridge University Press,\nCambridge, UK, 2000.\n[10] Doan, A., Madhavan, J., Domingos, P. and Halevy, A.\nLearning to Map between Ontologies on the Semantic Web.\nin Proceedings of the 11th International World Wide Web\nConference (WWW), Hawaii, USA, 2002.\n[11] Fensel, D. Ontologies: A Silver Bullet for Knowledge\nManagement and Electronic Commerce. Springer-Verlag,\n2001.\n[12] Freund, Y. and Schapire, R.E. A Decision-theoretic\nGeneralization of On-line Learning and an Application to\nBoosting. Journal of Computer and System Sciences, 55 (1).\n119-139.\n[13] Ichise, R., Takeda, H. and Honiden, S. Rule Induction for\nConcept Hierarchy Alignment. in Proceedings of the\nWorkshop on Ontologies and Information Sharing at the\n17th International Joint Conference on Artificial\nIntelligence (IJCAI), Seattle, WA, 2001, 26-29.\n[14] Lacher, M.S. and Groh, G. Facilitating the Exchange of\nExplicit Knowledge through Ontology Mappings. in\nProceedings of the Fourteenth International Florida\nArtificial Intelligence Research Society Conference\n(FLAIRS), Key West, FL, 2001, 305-309.\n[15] McCallum, A. and Nigam, K. A Comparison of Event\nModels for Naive Bayes Text Classification. in AAAI-98\nWorkshop on Learning for Text Categorization, Madison,\nWI, 1998, 41-48.\n[16] McGuinness, D.L., Fikes, R., Rice, J. and Wilder, S. The\nChimaera Ontology Environment. in Proceedings of the\n17th National Conference on Artificial Intelligence (AAAI),\nAustin, TX, 2000, 1123--1124.\n[17] Meir, R. and Ratsch, G. An Introduction to Boosting and\nLeveraging. in Mendelson, S. and Smola, A.J. eds.\nAdvanced Lectures on Machine Learning, LNCS, Springer-Verlag\n, 2003, 119-184.\n[18] Mitchell, T. Machine Learning. McGraw Hill, 1997.\n[19] Mitra, P., Wiederhold, G. and Jannink, J. Semi-automatic\nIntegration of Knowledge Sources. in Proceedings of The\n2nd International Conference on Information Fusion,\nSunnyvale, CA, 1999.\n[20] Noy, N.F. and Musen, M.A. Anchor-PROMPT: Using Non-Local\nContext for Semantic Matching. in Proceedings of the\nWorkshop on Ontologies and Information Sharing at the\n17th International Joint Conference on Artificial\nIntelligence (IJCAI), Seattle, WA, 2001, 63-70.\n[21] Noy, N.F. and Musen, M.A. PROMPT: Algorithm and Tool\nfor Automated Ontology Merging and Alignment. in\nProceedings of the National Conference on Artificial\nIntelligence (AAAI), Austin, TX, 2000, 450-455.\n[22] Rocchio, J.J. Relevance Feedback in Information Retrieval.\nin Salton, G. ed. 
The SMART Retrieval System: Experiments\nin Automatic Document Processing, Prentice-Hall, 1971,\n313-323.\n[23] Schapire, R.E. The Boosting Approach to Machine\nLearning: An Overview. in MSRI Workshop on Nonlinear\nEstimation and Classification, Berkeley, CA, 2002.\n[24] Schapire, R.E. and Singer, Y. BoosTexter: A Boosting-based\nSystem for Text Categorization. Machine Learning,\n39 (2/3). 135-168.\n[25] Schapire, R.E. and Singer, Y. Improved Boosting\nAlgorithms Using Confidence-rated Predictions. Machine\nLearning, 37 (3). 297-336.\n[26] Stumme, G. and Maedche, A. FCA-MERGE: Bottom-Up\nMerging of Ontologies. in Proceedings of the 17th\nInternational Joint Conference on Artificial Intelligence\n(IJCAI), Seattle, WA, 2001, 225-230.\n[27] Yang, Y. and Liu, X. A Re-examination of Text\nCategorization Methods. in Proceedings of the 22nd ACM\nInternational Conference on Research and Development in\nInformation Retrieval (SIGIR), Berkeley, CA, 1999, 42-49.\n\n417", "keywords": "Taxonomy Integration;Bootstrapping;Semantic Web;Classification;Ontology Mapping;Machine Learning;Boosting"} {"name": "214", "title": "WebKhoj: Indian language IR from Multiple Character Encodings", "abstract": "Today web search engines provide the easiest way to reach information on the web. In this scenario, more than 95% of Indian language content on the web is not searchable due to multiple encodings of web pages. Most of these encodings are proprietary and hence need some kind of standardization for making the content accessible via a search engine. In this paper we present a search engine called WebKhoj which is capable of searching multi-script and multi-encoded Indian language content on the web. We describe a language focused crawler and the transcoding processes involved to achieve accessibility of Indian langauge content. In the end we report some of the experiments that were conducted along with results on Indian language web content.", "fulltext": "INTRODUCTION\nIndia is a multi-language, multi-script country with 22 official\nlanguages and 11 written script forms. About a billion\npeople in India use these languages as their first language.\nEnglish, the most common technical language, is the lingua\nfranca of commerce, government, and the court system, but\nis not widely understood beyond the middle class and those\nwho can afford formal, foreign-language education. Not only\nis there a large societal gap between the rich and poor, but\nthat gap appears to be widening due the dominance of English\nin the society. About 5% of the population (usually the\neducated class) can understand English as their second language\n. Hindi is spoken by about 30% [5] of the population,\nbut it is concentrated in urban areas and north-central India\n, and is still not only foreign but often unpopular in many\nother regions. Computability of Indian languages could help\nbridge the societal gaps in education, economy and health-care\n. However the research and development, availability\nof standards, support from operating systems and applications\nin these directions moved very slow due to language\nheterogeneity.\nToday this phenomenon can also be observed on the world\nwide web. The percentage of Indian language content is\nvery less compared to the official languages of United Nations\n[7]. Even within the available content, majority is not\nsearchable and hence not reachable due to multiple encodings\nused while authoring such websites. 
Web publishers of\nsuch content were hesitant to use any available standards\nsuch as Unicode due to very delayed support from operating\nsystems and browsers in rendering Indic scripts. Even\ntoday Hindi is rendered properly only on Windows XP and\nbeyond. Linux has very little support for these languages.\nIndian languages had barely any support till Windows 2000\noperating system. This creates a major bottleneck for web\npublishers in these languages to get viewership.\nDespite all these issues, we found considerable amount of\ncontent being published on the web. However such content\ngets unnoticed or gets very less viewership since most of such\ncontent is not accessible through search engines due to nonstandard\nencodings being rendered using proprietary fonts.\nThis paper is organized into seven sections. In the next\nsub-section we give an introduction to characters, glyphs\nand fonts in order to appreciate the complexity involved in\nrendering complex scripts. We then introduce to the complexity\nof Indic scripts in the sub-section 1.2. In Section 2 we\nmake the problem statement and explain an implementation\nto solve this problem in Section 3. We report some experiments\nand results in Section 4, followed by a conclusion in\nSection 5.\n1.1\nFonts, characters and glyphs\nIn the history of mankind the act of writing has always\nbeen considered as an activity producing visual results, namely\ntext. The computer has brought a more abstract layer to\n801\nit, by storing and transmitting textual data. The atomic\nunit of this abstract representation of text, as defined in\nthe Unicode standard [8], is called a character. And indeed,\ncharacters prove to be useful for obtaining alternative (non-visual\n) representations of text such as Braille, speech synthesis\n, etc. The visual representation of a character is called\na glyph [8]. Displaying textual contents, whether on screen\nor on paper, involves translating characters into glyphs, a\nnon-trivial operation for many writing systems. Going in\nthe opposite direction (from glyphs to characters) is known\nas OCR when done by a machine, or as reading when done\nby a human [8]. The technology trend over the last few years\nhas been to use characters for most of the text processing\nand to limit glyph issues to the last stage, namely rendering.\nAt that level, character to glyph translation is handled by\nincreasingly \"intelligent\" (cf. OpenType and AAT technologies\n) fonts and font encodings. Unicode is an effort in this\ndirection. At the same time, restoring the original character\nstream from a rendered electronic document output for\noperations such as searching, indexing, or copy-pasting, no\ngeneral solution exists in today's popular document formats\nyet. Despite the problems involved, web authors tend to use\nproprietary encodings due to the complex characteristics of\nIndic scripts as described in the following section.\n1.2\nCharacteristics of Indic Scripts\nIndic scripts are phonetic in nature. There are vowels and\nconsonant symbols. The consonants become a syllable after\nthe addition of a vowel sound to it. Further to compound\nthe problem there are `compound syllables' also referred as\nligatures. 
For instance, if we consider `tri' in `triangle', there\nare three letters corresponding to three sounds `ta', `ra', `yi'.\nBut in the case of Indic Scripts the three are built together\nto make a single compound consonant having a non-linear\nstructure unlike Latin based languages.\nThe main problem with display of Indic scripts is dealing\nwith their non-linear structures. Glyphs have variable\nwidths and have positional attributes. Vowel signs can be\nattached to the top, bottom, left and right sides of the base\nconsonant. Vowel signs may also combine with consonants\nto form independent glyphs. Consonants frequently combine\nwith each other to form complex conjunct glyphs. Although\nthe encoding encapsulates only the basic alphabetic characters\n, the number of glyphs and their combinations required\nfor the exhaustive rendering of these scripts can be quite\nlarge [11].\nSince the character to glyph mappings have to be achieved\nusing a 256 character address space, web authors come up\nwith an intelligent way of representing all the characters in\nthe language using some 256 glyphs. Most of these glyphs\ndo not have any semantic significance in the language by\nthemselves. However when displayed together using some\npositional parameters, they achieve human readable characters\n. This situation makes the Indian language web content\ninaccessible for machine processing.\nPROBLEM STATEMENT\nMany information seekers use a search engine to begin\ntheir Web activity. In this case, users submit a query, typically\na list of keywords, and receive a list of Web pages that\nmay be relevant, typically pages that contain the keywords.\nToday though considerable amount of content is available\nin Indian languages, users are unable to search such\ncontent. Because Indian language websites rely on unique\nencodings or proprietary extensions of existing standard encodings\n[11]. This plurality of encodings creates a problem\nfor information retrieval to function as desired. Also\nmany research groups in information retrieval and natural\nlanguage processing feel the need to collect corpora in these\nlanguages from the web in the same way they obtain corpora\nfor other languages [14], [7], [1], [10]. Therefore in order to\nsearch or process Indian language websites, we should be\nable to transliterate all the encodings into one standard encoding\nand accept the user's queries in the same encoding\nand build the search index.\nThis task involves many steps. First step is to be able\nto identify the various encodings in Indian languages on the\nweb. Since these encodings are non-standard, there is no one\ncomprehensive list of such possible encodings. Therefore\nwe need to somehow identify all such encodings and also\nbe able to classify these encodings into the existing types.\nSecond step is to build a transliteration mapping for the\ngiven encoding into a standard encoding which is UTF-8\nand hence convert any page into a standard and index it.\nThird step is to be able to accept user's queries in the same\nstandard as that of the transliterated documents which is\nUTF-8.\nWEBKHOJ ARCHITECTURE\nIn this paper we report a search engine called WebKhoj\nwhich can search web pages in the top 10 Indian languages\naccording to the number of native speakers. WebKhoj cur-rently\nsupports Hindi, Telugu, Tamil, Malayalam, Marathi,\nKannada, Bengali, Punjabi, Gujarati and Oriya. 
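Of the three steps identified in Section 2, the second, converting a page from a proprietary font encoding into UTF-8 once a byte-sequence mapping table is available, amounts to a longest-match replacement over each word (Section 3.3 later describes how WebKhoj builds such tables and applies them word by word). The sketch below is our own illustration; the mapping table, the function name, and the maximum sequence length are hypothetical.

```python
def transliterate_word(word, mapping, max_seq_len=4):
    # Greedily consume the longest glyph/byte sequence found in the mapping
    # table and emit its UTF-8 equivalent; unmapped units pass through as-is.
    out, i = [], 0
    while i < len(word):
        for n in range(min(max_seq_len, len(word) - i), 0, -1):
            if word[i:i + n] in mapping:
                out.append(mapping[word[i:i + n]])
                i += n
                break
        else:
            out.append(word[i])
            i += 1
    return "".join(out)
```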
Before we\ndescribe the architecture of WebKhoj, it is useful to understand\nhow a Web search engine is typically put together and\nthen see its extensions for our task.\n3.1\nGeneral web search engine\nFigure 1 shows a general web search engine schematically\n[2]. The major modules of a web search engine are a Crawler,\nan Indexer, a Query Engine and a Ranking Engine. Every\nengine relies on a crawler module to provide the grist for\nits operation (shown on the left in Figure 1). Crawlers are\nsmall programs that browse the Web on the search engine's\nbehalf, similar to how a human user follows links to reach\ndifferent pages. The programs are given a starting set of\nURLs whose pages they retrieve from the Web. The crawler\nextracts URLs appearing in the retrieved pages and give this\ninformation to the crawler control module. This module determines\nwhat links to visit next and feeds these links back\nto the crawler. (Some of the functionality of the crawler\ncontrol module may be implemented by the crawlers themselves\n.) The crawlers also pass the retrieved pages into a\npage repository. Crawlers continue visiting the Web until\nlocal resources, such as storage, are exhausted. The indexer\nmodule extracts all the words from each page and records\nthe URL where each word occurred. The result is a generally\nvery large \"lookup table\" that can provide all the URLs\nthat point to pages where a given word occurs (the text index\nin Figure 1). The table is of course limited to the pages\nthat were covered in the crawling process. As mentioned\nearlier, text indexing of the Web poses special difficulties,\ndue to its size and its rapid rate of change. In addition to\nthese quantitative challenges, the Web calls for some special,\nless common, kinds of indexes. For example, the indexing\n802\nFigure 1: General web search engine architecture\nmodule may also create a structure index, which reflects the\nlinks between pages. Such indexes would not be appropriate\nfor traditional text collections that do not contain links.\nThe collection analysis module is responsible for creating a\nvariety of other indexes. During a crawling and indexing\nrun, search engines must store the pages they retrieve from\nthe Web. The page repository in Figure 1 represents this\npossibly temporary collection. Search engines sometimes\nmaintain a cache of the pages they have visited beyond the\ntime required to build the index. This cache allows them\nto serve out result pages very quickly, in addition to providing\nbasic search facilities. Some systems, such as the\nInternet Archive, have aimed to maintain a very large number\nof pages for permanent archival purposes. Storage at\nsuch a scale again requires special consideration. The query\nengine module is responsible for receiving and filling search\nrequests from users. The engine relies heavily on the indexes\n, and sometimes on the page repository. Due to the\nWeb's size and the fact that users typically only enter one\nor two keywords, result sets are usually very large. Hence\nthe ranking module has the task of sorting the results such\nthat results near the top are the most likely to be what the\nuser is looking for. In the rest of this section we describe the\nadditional modules that were used in a general web search\nengine to make it work for Indian languages.\n3.2\nLanguage focused crawling\nSince our goal is to be able to search web sites of specific\nlanguages, we are looking for a relatively narrow segment of\nthe web. 
Crawlers that fetch pages related to a particular\ntopic of interest are called topic focused crawlers [6]. While\nour crawler is very similar to the one mentioned in [6], we\nuse a language identification module instead of a classifier\nand hence call it as language focused crawling. The language\nidentification module returns the name of the language for a\ngiven web page. This module is aware of all the proprietary\nencodings and also uses a bag of words to recognize unknown\nencodings from meta-tag information that might be found\nin an HTML page. In many cases web pages contain more\nthan one language, especially one of the languages being\nEnglish. This happens since many of the website organzi-ation\ninformation such as menu items, or disclaimers and\nother such formatting information. In some websites such\nas blogs or forums majority of the content might be English\n, with Indian language content being a minority. The\nlanguage identifier module returns a language only if the\nnumber of words in a web page are above a given threshold\nvalue.\n3.3\nTranscoding\nSince Indian language content is being published in multiple\nencodings on the web, transliteration of encodings to\na popular standard such as Unicode [15] is needed. In order\nto transliterate a non-UTF-8 encoding into UTF-8 which is\na Unicode based encoding one has to come up with byte\nsequence mappings between source and target encodings.\nSuch mappings are rarely one to one mappings, and involve\nmany to one, one to many and many to many mappings of\nbyte sequences. As it was explained in the beginning of this\npaper, a sequence of bytes represent a sequence of glyphs\nof a font, which in turn could render a single character or\na ligature in the Indic script. Ideally mappings are to be\ncreated to all the unique characters in the language, which\ncould be a large number in the order of tens of thousands.\nSince it would be tedious to list out all the characters and\nligatures, we make use of the large number of documents\ncollected by the crawler to come up with a semi-automatic\nprocess of generating mappings.\nWe use a simple heuristic to identify the potential character\nboundaries from byte sequences. First the text from the\ncollected web pages is divided into words using a suitable\nword tokenizer. Then the algorithm lists all the possible\nword beginning bytes in both the source and target font encodings\n. Now each word is scanned from left to right until\none such byte occurs in the word. Whenever a valid word\nbeginner occurs in the word, we tokenize at that point, and\nthe byte sequence till that point is treated as a potential\ncharacter. For example in a given encoding if all the possible\nword beginning bytes are `a', `b' and `c', a new word\n`cars' is tokenized as `c', `ars', since neither `r' nor `s' are\n803\nFigure 2: Transcoding from Jagran encoding to UTF-8\nvalid word beginners. The byte sequences thus obtained by\nsegmentation are potential characters or ligatures in that\nlanguage.\nOnce such segmentation is done, the frequency of such\nbyte sequences (or potential characters) is calculated. It was\nfound from our experiments that the ranks based on the normalized\nfrequency of such potential characters is highly cor-related\n(we present more details in our experiments section).\nTherefore we use this algorithm to come up initial suggested\nmappings for transcoding, and then the user would manually\ncorrect any errors by going through the font mappings\nas shown in the Figure 2. 
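The segmentation heuristic just described, splitting each word wherever a valid word-beginning byte occurs, can be sketched as follows. This is our own illustration; it treats words as strings of code units and assumes `word_beginners` is the set of first units observed across the corpus.

```python
from collections import Counter

def candidate_characters(words, word_beginners):
    # e.g. with beginners {'a', 'b', 'c'}, "cars" segments into "c" and "ars";
    # the segments are candidate characters/ligatures, ranked by normalized frequency.
    counts = Counter()
    for word in words:
        start = 0
        for i in range(1, len(word)):
            if word[i] in word_beginners:
                counts[word[start:i]] += 1
                start = i
        counts[word[start:]] += 1
    total = sum(counts.values())
    return sorted(((seg, c / total) for seg, c in counts.items()),
                  key=lambda item: -item[1])
```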
The transcoding tool sorts the potential characters according to their ranks, so that the user would find the equivalent match in the target encoding among the top few possibilities. Also, since the mappings are ordered based on the normalized frequency found in the corpus, mapping source and target bytes in this order ensures the optimal precision that can be obtained from a set of mappings.

Once such transcoder mappings are generated for all possible encodings in Indian languages, a transcoding module is called during indexing of the web documents. If a web document is encoded in an encoding other than UTF-8, the transcoder module is called to transliterate the encoding of the given web page into the UTF-8 standard. In order to do this, the HTML page is parsed to obtain its document object model (DOM) using the JTidy utility (JTidy is a Java implementation of Dave Raggett's HTML Tidy and can be found at http://jtidy.sourceforge.net). All the nodes of type "font" are extracted from the DOM and the font encoding is checked against a known set of encodings on the web. Based on the font encoding, the appropriate transcoder mappings are used to transliterate the relevant text into UTF-8. One word is transcoded at a time. In order to transcode, the maximum byte sequence available in the mapping table is used to transliterate the encodings, and the process is repeated on the remaining substring of the word. This transliterated document is then sent to the indexer to build the inverted index.

3.4 Retrieval Algorithm
The score of query q for document d is defined in terms of the TFIDF [13] metric as shown below:

score(q, d) = c(q, d) · q_n(q) · Σ_{t in q} tf(t in d) · idf(t)

3.4.1 tf (term frequency)
`tf' (also known as term frequency) is a score factor based on a term or phrase's frequency in a document. Terms and phrases repeated in a document indicate the topic of the document, so implementations of this score usually return larger values when the frequency is large, and smaller values when the frequency is small.

3.4.2 idf (inverse document frequency)
`idf' is a score factor based on a term's document frequency (the number of documents which contain the term). Terms that occur in fewer documents are better discriminators of topic, so implementations of this method usually return larger values for rare terms, and smaller values for common terms.

3.4.3 c (coverage of query terms)
`c' is a score factor based on the fraction of all query terms that a document contains. This value is multiplied into scores. The presence of a large portion of the query terms indicates a better match with the query, so implementations of this function usually return larger values when the ratio between these parameters is large and smaller values when it is small.

3.4.4 q_n (query normalization)
This is the normalization value for a query, given the sum of the squared weights of each of the query terms. This value is then multiplied into the weight of each query term. It does not affect ranking, but rather just attempts to make scores from different queries comparable.

3.5 User Interface
Currently there is no easy means to key in UTF-8 queries to the search engine using the normal keyboard. So WebKhoj is provided with a soft keyboard which displays the UTF-8 character set of the language on the screen.
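Before describing the interface further, the scoring function of Section 3.4 can be written compactly as below. This is only an illustrative sketch of the formula as stated, with our own simple choices for the four factors (raw term counts for tf, a standard logarithmic idf, and one over the square root of the sum of squared query-term weights for q_n); the actual engine may weight these differently.

```python
import math

def score(query_terms, doc_counts, doc_freq, n_docs):
    # score(q, d) = c(q, d) * q_n(q) * sum_t tf(t in d) * idf(t)
    idf = {t: math.log(n_docs / (1 + doc_freq.get(t, 0))) + 1 for t in query_terms}
    coverage = sum(t in doc_counts for t in query_terms) / len(query_terms)   # c(q, d)
    q_norm = 1.0 / math.sqrt(sum(w * w for w in idf.values()))                # q_n(q)
    return coverage * q_norm * sum(doc_counts.get(t, 0) * idf[t] for t in query_terms)
```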
The\nlayout of the keys is very similar to the Keylekh layout [9].\nWe also tried providing a roman to local language transliteration\nkeyboard which dynamically renders Indian language\ntext when its phonetic equivalent is typed using roman characters\n. We had student volunteers from a near by village to\ntry out the keyboards. However, we found that the students\nwho are taught in the local language in schools are\nnot comfortable with English symbols. Also within the local\nlanguage, the way symbols are taught in schools is much\n804\nFigure 3: Hindi soft keyboard user interface for WebKhoj search engine\nFigure 4: Search results being displayed for a Hindi query in UTF-8\n805\ndifferent from the way UTF-8 characters need to be typed\nin. However, with some training these students were able to\nadapt to the soft keyboard.\nCurrently soft keyboards for 10 Indian languages are provided\nin the searching interface. One language is shown to\nthe user at any given instance. The user can change the\nkeyboard to a different language by clicking on the desired\nlanguage hyperlink displayed on the interface as shown in\nFigure 3. After thus framing the query, the user can search\nfor the web documents, and the results are ranked and displayed\nmuch like Google as shown in Figure 4.\n3.6\nWord spelling normalization\nIndian language words face standardization issues in spelling,\nthereby resulting in multiple spelling variants for the same\nword. For example we found widely used spelling variations\nfor the hindi word `angrezi' as shown below\nThe major reasons for this phenomenon can be attributed\nto unavailability of proper website authoring tools equipped\nwith spell checkers for Indian languages and multiple dialects\nof spoken language, transliteration of proper names\nand words borrowed from foreign languages whose spellings\nare not standardized. While we have to handle Indian language\nwords with spelling variations and errors, we also\nshowed that a considerable percentage of foreign language\nwords mainly English have entered into Indian language usage\nwhich cannot be ignored. While such words are being\nfrequently used by people, there is no standardization in\nspelling for such words thereby resulting in huge variations\ndue to transliteration. Given such variations in spelling it\nbecomes difficult for web Information Retrieval applications\nbuilt for Indian languages, since finding relevant documents\nwould require more than performing an exact string match.\nIt was shown that normalization rules for specific languages\nwork best with spelling normalization problems. We make\nuse of a set of rules [12] to normalize the words before indexing\nthem or looking them up from the index. These rules\nare language specific and we describe the rules for Hindi in\nthe next sub-sections. We achieve normalization of word\nspellings by mapping the alphabet of the given language L\ninto another alphabet L where L L. We use the following\nrules to achieve such a normalized mapping.\n3.6.1\nMapping\nchandrabindu\nto\nbindu\nOften people tend to use chandrabindu (a half-moon with\na dot) and bindu (a dot on top of alphabet) interchangeably.\nLots of confusion exists in common language usage on which\nto use when. In order to equate all such words we convert all\noccurrences of chandrabindu to bindu, which would equate\nall the words shown below.\n3.6.2\nnukta\ndeletion\nUnicode contains 10 consonant characters with nukta (a\ndot under consonant) and one nukta character itself. 
We\ndelete all\noccurrences of nukta character and replace all consonants\nwith nuktas with their corresponding consonant character\n. This would equate words like the ones shown below.\n3.6.3\nhalanth\ndeletion\nHindi and many other Indian languages face the problems\nof 'schwa' (the default vowel 'a' that occurs with every\nconsonant) deletion. Lots of spelling variations occur due to\n'schwa' deletion. In order to normalize such words we delete\nall the halanth characters in the given word before making\na string match. This operation would normalize words as\nshown in the example below.\n3.6.4\nvowel shortening\nMany times in written script people use shorter vowels instead\nof longer ones or vice versa. Therefore in our application\nwe convert all the longer vowels to their corresponding\nshorter ones. Using this feature we can normalize words as\nshown in this example.\n3.6.5\nchandra\ndeletion\n'chandra' (half-moon) is used for vowel rounding. Usually\nwords borrowed from English at times require vowel rounding\noperation. For example the word \"documentary\". But\nthis character is used inconsistently many times. Therefore\ndeleting such a character would normalize the words where\nvowel rounding has been used.\nThese rules were compared with many approximate string\nmatching algorithms are were found to result in a better f-measure\n[12].\nEXPERIMENTS AND DISCUSSION\nWe report here some experiments that were conducted\nin transcoding the proprietary encodings and present some\nstatistics from our language focused crawl about the Indian\nlanguage web.\nThe transcoding tool was designed to generate mappings\nbetween two encodings in a semi-automatic fashion. In order\nto achieve this the tool automatically gives some mapping\nsuggestions based on the rank correlation of the two\nencodings in question. We found that the byte sequences\nfrom two encodings of same language correlate very well, by\nlooking at the Spearman's rank correlation coefficient. In-tuitively\nthis phenomenon can be understood as the convergence\nof unique lexicon from two encodings from sufficiently\nlarge corpus, since they both belong to the same language.\nTo find the amount of correlation, we experimented with\ntwo different encodings from Hindi. We ran the character\nsegmentation algorithm and computed the normalized\nfrequencies as mentioned above and ranked the character\nsequences in both the encodings from a corpus of 2,000 documents\nfrom each of these encodings. We manually marked\n806\nthe corresponding frequency based rank positions of a given\ncharacter or a ligature from these encodings and calculated\nthe Spearman's rank correlation coefficient. We then plotted\na graph with the Spearman's correlation coefficient on\ny-axis and the number of mappings on x-axis as shown in\nFigure 5. 
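For reference, the Spearman coefficient used here can be computed from the two rank lists (the frequency-based rank of each manually matched character in the two encodings) with the standard no-ties formula; the small helper below is ours, not part of the described tool.

```python
def spearman_rho(ranks_a, ranks_b):
    # ranks_a[k], ranks_b[k]: rank of the k-th matched character/ligature in the
    # two encodings; assumes no tied ranks.
    n = len(ranks_a)
    d_sq = sum((ra - rb) ** 2 for ra, rb in zip(ranks_a, ranks_b))
    return 1 - 6 * d_sq / (n * (n * n - 1))
```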
We observed that the rank correlation is 100% for the first 25 mappings that were automatically generated, and is close to 90% for the first 200 mappings, which can achieve a transcoding precision of above 90%.

Figure 5: Spearman's rank correlation for number of byte sequences between Jagran and Webdunia font encodings.

Since these byte sequences are an ordered set, ordered by their normalized frequency, the precision of transliteration obtained by providing mappings between encodings in the order provided in the ordered set is optimal. We have observed that with about 2,000 encoding mappings per encoding on average, one can achieve around 99% precision. However this number also depends on the language complexity. For instance, the number of encodings required in Telugu transliteration is more than the number of encodings required in Hindi to obtain the same amount of precision.

We now report some of our experiments on the Indian language focused crawling. We ran a daily crawl for a 6 month period. Our crawler was focused to fetch content in the top 10 spoken languages in India, namely Hindi, Telugu, Tamil, Bengali, Marathi, Gujarati, Kannada, Malayalam, Oriya and Punjabi. In another experiment, in order to find the effectiveness of language focused crawling, we executed the crawler in two modes with a set of 100 seed URLs which constitute popular Indian based web portals, news sites and home pages of people of Indian origin. In the first mode it was executed without language focus restriction using a pure FIFO crawl queue, while in the second mode it was run with language focus restriction using a priority queue from which the crawler fetched the next crawl URL. We plotted the number of relevant pages fetched in the first 50,000 URLs in both the runs, as shown in Figure 6. The relevance of the fetched pages was calculated by checking the encoding on that page. It can be clearly seen that language focus restriction on the crawler helps in downloading more relevant pages.

Figure 6: Crawl with and without language focus.

From the 6 month crawl, about half a million unique documents were collected from all the languages. Unique web pages were picked after eliminating approximate duplicate pages using the shingling technique [4]. These half a million pages were distributed across the 10 languages as shown in Figure 7. Figure 8 shows the population of people speaking the various Indian languages [3]. It can be observed that even within India there is a divide in the web publishing activity in various languages. For instance, it can be observed that content is actively getting published in south Indian languages like Telugu, Tamil and Malayalam when compared to the northern languages such as Marathi, Gujarati, Oriya, Bengali and Punjabi. Hindi has the majority of content published on the web, but Hindi is also the language spoken by the majority of the Indian population.

Figure 7: Languages on x-axis and number of unique web pages on y-axis.

It can be seen from Figure 10 that very few websites publish content using a global standard such as Unicode. This explains why most of the Indian language content is not indexed or searchable by the present day popular web search engines. On the other hand, it can be seen from Figure 9 and Figure 11 that the number of unique encodings found on the web for these languages is almost equivalent to the number of websites. 
This observation suggests that\nevery web publisher is coming up with their own proprietary\nencodings to publish web content. We did not consider the\nwebsites that publish using images in this study, but our\npreliminary study suggests that there are a large number of\nwebsites that publish content as images as well.\nFigure 6: Crawl with and without language focus\nFigure 7: Languages on x-axis and number of unique\nweb pages on y-axis\n807\nFigure 8: Languages on x-axis and number of native\nspeakers on y-axis\nFigure 9: Languages on x-axis and number of encodings\nfound on web including UTF-8 on y-axis\nFigure 10: Languages on x-axis and number of UTF-8\nwebsites on y-axis\nCONCLUSIONS\nIn this paper we discussed the importance of being able\nto search the Indian language web content and presented a\nweb search engine which takes the UTF-8 queries from a\nsoft keyboard and capable of searching 10 most spoken Indian\nlanguages' web pages encoded in multiple encodings.\nWe presented a language focussed crawler which can fetch\nweb pages of specific languages and also the distribution of\nFigure 11: Languages on x-axis and number of websites\n(web servers) on y-axis\nthe Indian language content on web based on the pages that\nwere crawled. This distribution clearly shows the need for\nprocesses and algorithms to transcode non-Unicode encodings\nto Unicode. Hence we have discussed a semi-automatic\nalgorithm to generate the mappings between different encodings\n. This shows that transcoding of proprietary encodings\ninto a standard encoding makes Indian language web\ncontent accessible through search engines.\nACKNOWLEDGMENTS\nWe would like to thank the Department of Science and\nTechnology, Ministry of Communications and IT, Government\nof India for funding this project.\nREFERENCES\n[1] J. Allan, J. Aslam, N. Belkin, C. Buckley, J. Callan,\nB. Croft, S. Dumais, N. Fuhr, D. Harman, D. J.\nHarper, D. Hiemstra, T. Hofmann, E. Hovy,\nW. Kraaij, J. Lafferty, V. Lavrenko, D. Lewis,\nL. Liddy, R. Manmatha, A. McCallum, J. Ponte,\nJ. Prager, D. Radev, P. Resnik, S. Robertson,\nR. Rosenfeld, S. Roukos, M. Sanderson, R. Schwartz,\nA. Singhal, A. Smeaton, H. Turtle, E. Voorhees,\nR. Weischedel, J. Xu, and C. Zhai. Challenges in\nInformation Retrieval and Language Modeling:\nReport of a Workshop held at the Center for\nIntelligent Information Retrieval, University of\nMassachusetts Amherst, September 2002. SIGIR\nForum, 37(1):3147, 2003.\n[2] A. Arasu, J. Cho, H. Garcia-Molina, A. Paepcke, and\nS. Raghavan. Searching the Web. ACM Trans. Inter.\nTech., 1(1):243, 2001.\n[3] G. B. 14th ed. Ethnologue: Languages of the World.\nSIL International, Dallas, TX, 2003.\n[4] S. Brin, J. Davis, and H. Garcia-Molina. Copy\nDetection Mechanisms for Digital Documents. In\nSIGMOD '95: Proceedings of the 1995 ACM SIGMOD\nInternational Conference on Management of Data,\npages 398409, New York, NY, USA, 1995. ACM\nPress.\n[5] G. E. Burkhart, S. E. Goodman, A. Mehta, and\nL. Press. The Internet in India: Better times ahead?\nCommun. ACM, 41(11):2126, 1998.\n808\n[6] S. Chakrabarti, K. Punera, and M. Subramanyam.\nAccelerated Focused Crawling through Online\nRelevance Feedback. In WWW '02: Proceedings of the\n11th International Conference on World Wide Web,\npages 148159, New York, NY, USA, 2002. ACM\nPress.\n[7] F. Gey, N. Kando, and C. Peters. Cross Language\nInformation Retrieval: A Research Roadmap. SIGIR\nForum, 36(2):7280, 2002.\n[8] Y. Haralambous and G. Bella. Injecting Information\ninto Atomic Units of Text. 
In DocEng '05:\nProceedings of the 2005 ACM Symposium on\nDocument Engineering, pages 134142, New York,\nNY, USA, 2005. ACM Press.\n[9] A. Joshi, A. Ganu, A. Chand, V. Parmar, and\nG. Mathur. Keylekh: a Keyboard for Text Entry in\nIndic Scripts. In CHI '04: CHI '04 Extended Abstracts\non Human Factors in Computing Systems, pages\n928942, New York, NY, USA, 2004. ACM Press.\n[10] L. S. Larkey, M. E. Connell, and N. Abduljaleel. Hindi\nCLIR in thirty days. ACM Transactions on Asian\nLanguage Information Processing (TALIP),\n2(2):130142, 2003.\n[11] D. P. Madalli. Unicode for Multilingual\nRepresentation in Digital Libraries from the Indian\nPerspective. In JCDL '02: Proceedings of the 2nd\nACM/IEEE-CS Joint Conference on Digital Libraries,\npages 398398, New York, NY, USA, 2002. ACM\nPress.\n[12] P. Pingali and V. Varma. Word Normalization in\nIndian Languages. In ICON05: Proceedings of the\n2005 International Conference on Natural Language\nProcessing, 2005.\n[13] G. Salton and C. Buckley. Term-weighting Approaches\nin Automatic Text Retrieval. Information Process.\nManagement, 24(5):513523, 1988.\n[14] S. Strassel, M. Maxwell, and C. Cieri. Linguistic\nResource Creation for Research and Technology\nDevelopment: A Recent Experiment. ACM\nTransactions on Asian Language Information\nProcessing (TALIP), 2(2):101117, 2003.\n[15] F. Yergeau. UTF-8, a transformation format of ISO\n10646. RFC Editor, United States, 2003.\n809\n", "keywords": "web search;Indian languages;non-standard encodings"} {"name": "215", "title": "What's There and What's Not? Focused Crawling for Missing Documents in Digital Libraries", "abstract": "Some large scale topical digital libraries, such as CiteSeer, harvest online academic documents by crawling open-access archives, university and author homepages, and authors' self-submissions. While these approaches have so far built reasonable size libraries, they can suffer from having only a portion of the documents from specific publishing venues. We propose to use alternative online resources and techniques that maximally exploit other resources to build the complete document collection of any given publication venue. We investigate the feasibility of using publication metadata to guide the crawler towards authors' homepages to harvest what is missing from a digital library collection. We collect a real-world dataset from two Computer Science publishing venues, involving a total of 593 unique authors over a time frame of 1998 to 2004. We then identify the missing papers that are not indexed by CiteSeer. Using a fully automatic heuristic-based system that has the capability of locating authors' homepages and then using focused crawling to download the desired papers, we demonstrate that it is practical to harvest using a focused crawler academic papers that are missing from our digital library. Our harvester achieves a performance with an average recall level of 0.82 overall and 0.75 for those missing documents. Evaluation of the crawler's performance based on the harvest rate shows definite advantages over other crawling approaches and consistently outperforms a defined baseline crawler on a number of measures.", "fulltext": "INTRODUCTION\nDigital libraries that are based on active crawling methods such as\nCiteSeer often have missing documents in collections of archived\npublications, such as ACM and IEEE. How do such digital\nlibraries find and obtain those missing? 
We propose using\nexternal resources of publication metadata and focused crawlers\nto search the Web for those missing.\nThe basic concept of a focused crawler (also known as a topical\ncrawlers) [1], is based on a crawling strategy that relevant Web\npages contain more relevant links, and these relevant links should\nbe explored first. Initially, the measure of relevancy was based on\nkeywords matching; connectivity-based metrics were later\nintroduced [2]. In [3] the concept of a focused crawler was\nformally introduced: a crawler that seeks, acquires, indexes, and\nmaintains pages on a specific set of topics that represent a\nrelatively narrow segment of the Web.\nToday, focused crawling techniques have become more important\nfor building specialty and niche (vertical) search engines While\nboth the sheer volume of the Web and its highly dynamic content\nincreasingly challenge the task of document collection, digital\nlibraries based on crawling benefit from focused crawlers since\nthey can quickly harvest a high-quality subset of the relevant\nonline documents.\nCurrent approaches to harvesting online academic documents\nnormally consist of focused crawling of open-access archives,\nauthor and institution web sites and directories of authors' self-submissions\n. A random sample of 150 journals and conferences in\nComputer Science show that less than 10% have websites that are\nopen to crawlers. Many of the top publishing venues that have\ntheir documents electronically available to subscribers such as the\nACM Digital Library, the IEEE Digital, Library or the Springer-Verlag\nDigital Library, normally use access permission\ntechniques and robots.txt to ban crawlers. A recent study indicates\nthat CiteSeer indexes 425, 000 unique research documents related\nto Computer Science, DBLP contains 500,464 records and there\nare 141,345 records in the Association for Computing Machinery\n(ACM) Digital Library and 825,826 records in the more\ncomprehensive ACM Guide [4]. The study also shows that in\nCiteSeer there is an overlapping portion of 86, 467 documents\n(20.2% of CiteSeer's total archive) comprising 17.3% of the\nDigital Bibliography & Library Project (DBLP) archive.\nThis research investigates alternative online resources and\nfocused crawling techniques to build a complete document\ncollection for any given publication venue. We propose to answer\nthe following:\nQ1 - What are the best focused crawling techniques to maximally\nexploit online resources, in order to harvest the desired papers\neffectively and efficiently?\nQ2 Is it effective to use authors' homepages as alternative\nonline resources to find the missing documents?\nQ3 How can the above methods be automated to effectively\nobtain missing documents?\nThe rest of the paper is organized as follows. In section 2 we\npresent a review of related work. In Section 3 we cover in much\ndetail the design rationale of the system. In Section 4 we describe\nhow we collect data and perform the evaluation, and present the\nresults with discussion. Finally, we conclude the paper with future\nwork proposed in Section 5.\nRELATED WORK\nThe focused crawling literature shows that much has been focused\non enhancing the dynamic performance, scalability, effectiveness,\nand efficiency of the crawler, namely, harvesting higher-quality\ndocuments in a shorter period of time.\nBreadth-first searching is probably the simplest strategy for\ncrawling, i.e. 
traversing the Web in a way that a directed graph is\ntraveled using a breadth-first search algorithm. Interestingly, a\nbreadth-first crawler is found to be capable of yielding high-quality\ndocuments at an early stage of the crawl [5]. Although\nmore sophisticated crawlers tend to retrieve even higher quality\npages than their breadth-first counterparts, they are usually\ncomputationally more expensive. In our study, we use a multi-threaded\nbreadth-first crawler as a baseline to compare to our own\ncrawling method.\nBest-first crawling attempts to direct the crawler towards the best\n(i.e. most relevant in terms of topic relevance) documents.\nDifferent heuristics, such as link-based criteria, lexical similarity\nmeasures, contextual knowledge, and fine-tuned combinations of\nsuch have been explored in a number of studies over the years. In\n[2], the authors find that PageRank [6] can yield the best\nperformance when ordering seed URLs. However, a more recent\nstudy [7] shows that PageRank metrics may just be too general in\ncontext without regard to the specific target topic. An updated\nversion of PageRank algorithm which reflects the importance with\nrespect to a particular topic has been proposed [8].\nIn [3], a Bayesian classifier is used to estimate the probability that\na page belongs to the target topic, in a way that a node belongs to\na certain position in an existing taxonomy hierarchy. In [9], a\nkeyword-based vector space model is used to calculate the\nsimilarity of Web pages to the seed URLs, and if the similarity is\nabove a certain threshold, the pages are downloaded and indexed,\nand their out-going links are followed.\nA focused crawler [10] based on context graphs is proposed by so\nthat the crawler can extract information about the context within\nwhich desired documents are usually found. A set of classifiers\nare then trained to classify in-degree Web pages according to an\nestimation of their relevance to the topic. The relevance\nestimation then navigates the crawler towards desired documents.\nCrawlers with a probability model are used for calculating\npriorities, which combines Web page content-based learning,\nURL token-based learning, and link-based learning [11]. In a later\nwork, [12] takes into account the users' access behavior and re-tunes\nthe previous model to connect this behavior with the\npredicate satisfaction probability of the candidate Web pages\nwaiting to be crawled.\nAn interesting \"reversed\" approach is proposed in [13], which\nsuggests a given scientific document from a digital library be used\nas an input to the focused crawler. The main title and reference\ntitles of the document are extracted and used to train a classifier to\nlearn topical knowledge. The crawler is then guided by such\nknowledge to discover other topic-relevant documents on the Web.\nMore up-to-date reviews of focused crawling algorithms are\npresented in [14] and [15]. In [14], five different methods are\nimplemented and evaluated within a unified evaluation\nframework on small and large datasets.\nHere we discuss two studies that bear similarities to ours. The\nHPSearch and Mops presented in [16] support the search for\nresearch papers close to the homepages of certain scientists.\nHowever, their system does not investigate the issues of document\nharvesting for digital libraries for different publishing venues.\nFurthermore, our system outperforms theirs in terms of the\npercentage of correct homepages returned. 
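The best-first strategies surveyed above share a common skeleton: score each discovered link, place it on a priority queue, and always expand the most promising link next, on the assumption that relevant pages lead to further relevant links. The sketch below is a generic illustration of that loop rather than a reimplementation of any cited system; the keyword-overlap relevance function and the topic vocabulary are placeholders for the classifiers, context graphs and similarity measures those systems actually use.

import heapq
import re
from urllib.request import urlopen

TOPIC_TERMS = {"crawler", "retrieval", "digital", "library"}   # illustrative topic

def relevance(text):
    # Fraction of topic terms present on the page; a stand-in for a real classifier.
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(words & TOPIC_TERMS) / len(TOPIC_TERMS)

def best_first_crawl(seed_urls, max_pages=100):
    frontier = [(-1.0, url) for url in seed_urls]      # max-priority via negated scores
    heapq.heapify(frontier)
    seen, fetched = set(seed_urls), []
    while frontier and len(fetched) < max_pages:
        _, url = heapq.heappop(frontier)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue
        fetched.append(url)
        score = relevance(html)
        # Links found on relevant pages inherit the parent's score and are
        # therefore explored earlier than links found on irrelevant pages.
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))
    return fetched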
In a more recent study\n[17], a Paper Search Engine (PaSE) is proposed, which uses\ncitation information to locate online copies of scientific\ndocuments. While their study addresses a different research\nquestion, the PaSE system employs similar heuristics as we do to\nfavor certain out-going links in order to quickly locate academic\npapers.\nSYSTEM DESIGN\nWe develop an automated system in which document metadata is\nused to automatically locate the homepages of the authors and\nfocused crawl these homepages with the intent of finding missing\ndocuments. Our system, shown in Figure 1, consists of a\nHomepage Aggregator and a smart Focused Crawler.\nThe system accepts a user's request to harvest the desired papers\npublished in a specific venue (e.g. a conference or a journal). The\nHomepage Aggregator will query a Public Metadata Repository\nand extract useful metadata heuristics to assist in quickly and\naccurately locating URLs of the authors' homepages. A list of\nsuch URLs will be inserted into the Homepage URL Database.\nThe Crawler uses focused crawling techniques to search the\ndomains for desired publications. It accepts the seed URLs as an\ninput and uses them as starting points for the crawl. The Crawler\nuses anchor text to determine link priorities and quickly navigates\nthrough the websites using to get to the desired academic papers.\nThe harvested documents will be stored in the Document\nDatabase.\n302\nFigure 1. System Architecture\n3.2 Using Metadata to Locate Homepages\nCrawling authors' homepages first requires the system to be able\nto locate such websites quickly and accurately. A study of the\nliterature indicates that personal website and homepage finding\nhave been studied a lot since the birth of WWW. In [18], the\nauthors present AHOY! as the first working system for personal\nhomepage finding, which can filter irrelevant pages based on\npattern matching heuristics. Later, the TREC (Text REtrieval\nConference) hosted the task of Web homepage finding in 2001\nand its subsequent years, and algorithms based on link analysis,\nlinguistic cues, and machine learning etc. are proposed [19, 20,\n21]. Examples of current working systems include\nHomePageSearch (hpsearch.uni-trier.de) which is a Homepage\nAggregator mainly for computer scientists, and compiled\ndirectories (e.g. Google Directory)\nSee Figure 2 for the architecture of the Homepage Aggregator\ncomponent.\n\nFigure 2. Architecture of the Homepage Aggregator\nThe goal of the Homepage Aggregator is to look for homepages\nof the authors and save them as seed URLs to feed the Focused\nCrawler. First it queries the Metadata Repository and retrieves the\ndocument metadata. For each author, it extracts from metadata a\nvalue pair of (N, P), where N is the name of the author and P is\nthe name of the venue (with a number of variations) in which the\npaper is published. A list of such pairs is then submitted to a Web\nsearch engine. Pages returned by the search engine will go\nthrough a Homepage Filter where we use metadata heuristics to\nremove false positives (pages that are not likely to be the\nhomepages of the authors) and disambiguate among namesakes, if\nthere is any. Different priority weights are assigned to the\nremaining pages according to their likelihood of being the\nhomepage of the author. The more likely it's the homepage of the\nauthor, the higher priority it receives. 
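Table 1, which follows, summarises the filter's rules. As a rough, hypothetical sketch of how such rules might be applied to the (URL, title) pairs returned by the search engine, consider the function below; the hint lists, file-extension list and numeric weights are our own assumptions rather than values taken from the system.

PUBLISHER_HINTS = ("acm.org", "ieee.org", "springer")            # assumed hint lists
DIGITAL_LIBRARY_HINTS = ("citeseer", "dblp", "digital library")
HOMEPAGE_WORDS = ("homepage", "home page", "website", "research",
                  "publication", "papers")
NON_HTML_EXTENSIONS = ("pdf", "ps", "doc", "ppt", "txt")

def filter_and_rank(author_name, candidates, coauthor_domains=()):
    """candidates: (url, title) pairs returned by the Web search engine."""
    scored = []
    for url, title in candidates:
        u, t = url.lower(), title.lower()
        # Remove false positives: publisher sites, digital libraries, non-HTML files.
        if any(h in u or h in t for h in PUBLISHER_HINTS + DIGITAL_LIBRARY_HINTS):
            continue
        if u.rsplit(".", 1)[-1] in NON_HTML_EXTENSIONS:
            continue
        # High priority if the title carries the author's name and a homepage
        # keyword, medium if it carries a homepage keyword alone, low otherwise.
        score = 0
        if any(w in t for w in HOMEPAGE_WORDS):
            score = 2 if author_name.lower() in t else 1
        # Disambiguation: prefer pages in the same domain as a co-author's homepage.
        if any(u.startswith(d) for d in coauthor_domains):
            score += 1
        scored.append((score, url))
    scored.sort(reverse=True)
    return [url for _, url in scored]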
Eventually the page with\nthe highest priority weights will be inserted into the Homepage\nURL Database, and will be crawled later.\nRecall that we extract from metadata a pair value of (N, P). Now\nlet U be the URL and T be the title of a Web page P returned by\nthe Web search engine. When there are more than two authors for\nthe same paper, assume Ui are the URLs of the homepages of\nother authors already found by the system. We have incorporated\nthe findings in [16] about major characteristics of personal\nhomepages. The metadata heuristics employed in the Homepage\nFilter are explained in Table 1.\nTable 1. Heuristics Employed in Homepage Filter\nFunction Heuristic\nRules\nRemove false\npositives\n\n\nRemove U if U or T indicates a\npublisher's website.\n\nRemove U if U or T indicates a\ndigital library.\n\nRemove U if U points to a file other\nthan .htm/.html\nDisambiguate\nbetween\nnamesakes\n\n\nChoose U among the candidates if U\nis in the same domain as Ui.\n\nRemove U if its parent-domain is\nalready found by the system.\n\nAssign priority\n\n\nU receives high priority if T contains\nN and any of the following:\nhomepage (home page), web\n(website), research, publication,\npapers.\n\nU receives medium priority if T\ncontains any of the following:\nhomepage (home page), web\n(website), research, publication,\npapers.\n\nU receives low priority when neither\none of the above two rules is fired.\n\n3.3 Crawler Architecture\nThe Focused Crawler crawls web pages, using heuristics to\nquickly navigate to the publications. The architecture of the\ncomponent is shown in Figure 3.\nThe crawler accepts two primary sets of inputs that vary for each\ncrawl. The first is a set of seed URLs that are the starting points of\nthe crawl. These are added to the crawl queue at low priority. The\nsecond set of inputs is a collection of domain names that the\ncrawler is permitted to crawl.\nOnce the seed URLs are entered into the queue, the crawler\nthreads are started. Each thread gets one URL from the priority\nqueue, and downloads the page that it points to.\nAfter a page is downloaded, the out-going links are examined and\nthose matched with the ignored list are removed, either because\nthey are out of the target domain or because their MIME types are\nnot processed by the crawler. At this point, if a PDF/PostScript\ndocument is found, it will be inserted into the Document Database.\nThe rest of the out-going links will each be classified as high,\nmedium, or low priority, and inserted into different priority\nqueues.\nMetadata\nExtractor\nWeb\nSearch Engine\nHomepage Filter\nHomepage\nAggregator\nPublic\nMetadata\nRepository\n\nHomepage\nURL\nDatabase\nFocused\nCrawler\nMetadata\nHeuristics\nPublic\nMetadata\nRepository\n\nDocument\nDatabase\nHomepage\nURL\nDatabase\nHomepage\nAggregator\n303\nFigure 3. Architecture of the Focused Crawler\nIn order to concentrate or limit the crawls towards only desirable\ncontent, the crawler is provided with three lists for reference. The\ncontents of the lists may be changed depending on the types of\ndomains being crawled.\nThe Ignore List is a set of file types that are to be ignored by the\ncrawler. The most common types of URLs that are ignored by the\ncrawler are links to image files. The list can also include parts of\nthe domain(s) being crawled, which the crawler is not supposed to\nvisit. Table 2 shows a sample Ignore List.\nTable 2. 
Sample Ignore List\nFile Types\n.jpg, .bmp, .gif, .png, .jpeg, .mpg, .mpeg, .avi\nhttp://clgiles.ist.psu.edu/picture.html\n\nDomains\nhttp://clgiles.ist.psu.edu/courses.html\nFiles of type JPG, BMP etc will be ignored during the crawl. Also\nany outgoing links to pages within the ignored domains will not\nbe considered for crawling.\nThe Allow List on the other hand is a collection of domain names\nthat make up the crawl space of the crawler. Links pointing\noutside the specified domains are ignored by the crawler (unless\nthey are determined to be research documents). This list is useful\nto limit the breadth of the crawl to only those domains that are of\ninterest. Table 3 shows a sample Allow List.\nTable 3. Sample Allow List\nDomains\nhttp://clgiles.ist.psu.edu\nSo the link http://clgiles.ist.psu.edu will be considered for\ncrawling if it's discovered.\nPriority lists contain a set of keywords and their assigned weights\nthat are used to determine the priorities of the extracted links. The\nlinks will be visited by the crawler in the order of their assigned\npriority.\nThe Crawl Queue holds the discovered URLs that are yet to be\ncrawled. This queue consists of three sub-queues: High-priority,\nMedium-priority and Low-priority queue. The Low-priority queue\nis the default queue. The seed URLs are entered into this queue.\nWe adopt a simple yet very effective heuristics to make the\npriority classification based upon the likelihood of the link\neventually leading to academic publications.\nWe first train a classifier with data collected from two publishing\nvenues: the Very Large Data Bases (VLDB) Conference and the\nText REtrieval Conference (TREC). Several crawls are carried\nout with a breadth-first policy. The logs of the crawls are\nanalyzed and a traverse tree is generated for each of the crawl that\nindicates the URLs visited and the link path that is followed by\nthe crawler to reach the desired publications.\nConsider a small website having 11 pages as shown in Figure 4.\n\nFigure 4. Sample Website\nThe circles represent URL's in the website and the arrows are the\nhyperlinks from one page to another. The link structure shown is\nthat which is followed by the breadth-first crawler to visit each\nURL. All other links such as those that may point outside the\ndomain are ignored in the above diagram.\nThe node marked with `S' is the seed or start URL. The nodes\nmarked with `P' are research document files that are detected by\nthe crawler. Now the links that are of interest to us are S A P\nand S C D P. The anchor text contained in these links `SA',\n`AP', `SC', `SD', `DP' is extracted and marked as `interesting'.\nThe text in the remainder of the links is also noted, but goes in\n`not interesting' set.\nSimilar analysis is done on all the logs that are generated by the\nbreadth-first crawl. All the keywords that are commonly\noccurring in the \"interesting\" class and not so commonly\noccurring in the \"non-interesting\" class are extracted. Weights are\nassigned to each of these keywords depending on their placement\nin the link structure. The keywords closer to the documents are\ngiven more weight that those closer to the seed URL. 
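A rough sketch of this training step is given below (the input format and the way weights are combined across paths are assumptions, not the authors' code); the formula itself, W(OT) = D(Q) / D(P), is stated next in the text.

from collections import defaultdict

# Each path lists the (anchor_text, depth) hops from the seed S to a detected
# research document P, as recovered from the breadth-first crawl logs; the two
# paths mirror Figure 4 (S->A->P and S->C->D->P) and the anchor texts are invented.
paths_to_documents = [
    [("publications", 1), ("paper", 2)],
    [("people", 1), ("research group", 2), ("conf paper", 3)],
]

def train_keyword_weights(paths):
    weights = defaultdict(float)
    for path in paths:
        doc_depth = path[-1][1]                   # D(P): depth of the document
        for anchor, depth in path:
            for word in anchor.lower().split():
                # W = D(Q) / D(P): keywords nearer the document weigh more.
                # Keeping the maximum across paths is an assumption.
                weights[word] = max(weights[word], depth / doc_depth)
    return dict(weights)

print(train_keyword_weights(paths_to_documents))
# e.g. {'publications': 0.5, 'paper': 1.0, 'people': 0.33, ...} (values rounded)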
For e.g.\nkeyword `SA' has a lesser weight than keyword `DP' as `DP' is\ncloser to P than to S as opposed to `SA'.\nThe formula for calculating keyword weight is:\nW (OT\noq\n) = D (Q) / D (P)\n\n(I)\nwhere OT\noq\nis the anchor text of the out-going link from page O\nto page Q; P is the desired academic paper found by following the\nlink from O to Q; D(P) denotes the distance (number of hops)\nbetween P and the starting URL S on the path\nS\nA\nB\nC\nP\nE\nD\nP\nDepth 1\nDepth 2\nDepth 3\nDepth 0\nHomepage\nURL\nDatabase\nDocument\nDatabase\nIgnore\nList\nPriority Queues\nPriority\nHeuristics\nDownload Pages\nLink Extractor\nLink Filter\nLink\nPriority\nAnalyzer\nPDF/PS\nDocuments\nCrawler\nThread\nAllow\nList\n304\nS...OQ...P; D(Q) denotes the distance between Q\nand the starting URL S on the path S...OQ.\nNow that a list of anchor texts and their corresponding priority\nweights has been compiled during the training process, we can\nclassify each of them into different priority categories according\nto the weights. Table 4 shows a few samples extracted from our\nlist.\nTable 4. Sample Anchor Texts\nPriority Anchor\nTexts\np_High\nvolume, pub, paper, conf, journal, content,\nprogram, research, list\np_Medium\ntopic, faculty, people, group, lab\nWe now need to consider how to prioritize out-going links that\nare more likely to lead to desired academic publications. The\nanchor text in these links is compared against the weighted\nkeywords. If any of the weighted keywords are present in the text,\nthe comparison is considered to be successful. There are no\nkeywords having more than one weight. The final priority of the\nlink is calculated by the following function.\nThe priority of a link may also depend on the priority of its parent.\nThis is mainly due to the fact that not all the links that emerge\nfrom a page with a medium or high priority may lead to a research\ndocument. For e.g. in Figure 4 the node `C' will be crawled with a\nmedium priority, however only node `D' leads to a research\ndocument. The priority of the node `E' is thus reduced to low as it\nwill not have a weighted keyword attached to it and that of `D' is\nincreased to high. The priorities of links thus established are used\nto insert the link in the proper priority queue for crawling. In\norder to achieve high efficiency, the crawler spawns multiple\nthreads which will be fed with URLs on the descending order of\npriority. When there is no URL left in the priority queues and no\ncrawler thread is currently running, the crawling task is finished.\n\nRESULTS AND DISCUSSION\nWe have collected data from two Computer Science publication\nvenues: the ACM SIGMOD International Workshop on the Web\nand Databases (WebDB), first held in 1998 and then each year in\nconjunction with the annual ACM SIGMOD Conference, and\nJournal of Artificial Intelligence Research (JAIR), which was\nestablished in 1993 both as an electronic scientific journals and a\nhard-copy semiyearly published by AAAI Press. We choose these\ntwo venues because both of them are highly selective venues with\nless than a 25% acceptance rate and we want to observe if there is\na major difference of performance between conferences and\njournals.\nWe have extracted the metadata of WebDB and JAIR from the\nDBLP repository. By analyzing these metadata, we successfully\nidentify the 593 unique authors who have in total published 289\npapers in either one of these two venues during the period from\n1998 to 2004. 
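The missing-document check described in the next paragraphs reduces to exact title matching between these DBLP records and the titles indexed by CiteSeer. The following is a minimal, hypothetical sketch of that comparison; the record format and the normalisation applied before matching are assumptions.

import re

def normalise(title):
    # Lower-case and collapse punctuation/whitespace so that exact matching is
    # not defeated by formatting differences (an assumed normalisation).
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def find_missing(venue_records, indexed_titles):
    """venue_records: metadata entries for one venue, each with a 'title' field;
    indexed_titles: titles already held by the digital library."""
    indexed = {normalise(t) for t in indexed_titles}
    return [r for r in venue_records if normalise(r["title"]) not in indexed]

# Illustrative use with one real WebDB 2000 title and one invented record.
webdb_2000 = [
    {"title": "Using Metadata to Enhance a Web Information Gathering System"},
    {"title": "A Hypothetical WebDB 2000 Paper"},
]
citeseer_titles = ["Using Metadata to Enhance a Web Information Gathering System"]
print([r["title"] for r in find_missing(webdb_2000, citeseer_titles)])
# -> ['A Hypothetical WebDB 2000 Paper']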
Please see Table 5 for more details of the dataset.\nTable 5. Statistics of the collected data\nWebDB JAIR\nYear\nUnique\nAuthors\nPublication Unique\nAuthors\nPublication\n1998\n32 13 40 20\n1999\n51 17 50 28\n2000\n61 20 33 20\n2001\n51 18 45 25\n2002\n47 17 64 27\n2003\n56 17 72 30\n2004\n51 16 57 21\nTotal\n285 118 308 171\nIn order to examine whether our approach is effective in\nrecovering those missing documents from a digital library, we use\nthe CiteSeer Scientific Digital Library as another data source.\nCross-referencing the metadata of each of the two venues from\nDBLP, we successfully identified 30 out of 118 (25.42%) WebDB\npapers and 46 out of 171 (26.90%) JAIR papers that are not\nindexed by CiteSeer (see Figure 5 for details). This is done by\nexact title-matching between the records in the DBLP metadata\nrepository and the CiteSeer document archive.\nWebDB\n8\n12\n17\n13\n10\n14\n14\n5\n5\n3\n5\n7\n3\n2\n0%\n10%\n20%\n30%\n40%\n50%\n60%\n70%\n80%\n90%\n100%\n1998\n1999\n2000\n2001\n2002\n2003\n2004\nNOT Indexed\nIndexed\n\nJAIR\n17\n25\n20\n21\n17\n16\n15\n3\n3\n0\n4\n10\n14\n6\n0%\n10%\n20%\n30%\n40%\n50%\n60%\n70%\n80%\n90%\n100%\n1998\n1999\n2000\n2001\n2002\n2003\n2004\nNOT Indexed\nIndexed\n\nFigure 5. Coverage of the two venues by CiteSeer\nThe metadata extracted from DBLP are also used as heuristics to\nlocate the homepages of the 593 authors. The name of the author\n// Get_Priority(): Returns the priority for link L\nT\nwith anchor\ntext T which has weight W\nT\n.\n// Low=0, Medium=1, High=2 (for weight and priority)\nGet_Priority {\nIf W\nT\n= 0 and (Priority(Parent(L\nT\n)) > 0 then\nPriority(L\nT\n)\n\n= Priority(Parent(L\nT\n)) -1;\nElse if W\nT\n> 0\nPriority(L\nT\n)\n\n= W\nT\n;\nEnd IF\nReturn (Priority(L\nT\n));\n}\n305\nand the corresponding venue (with a number of variations) are\nsubmitted to Google API and the first 10 URLs returned are\nparsed automatically by the Homepage Filter component. Using\nthe heuristics discussed in the previous section, we assign priority\nweights to each of the URLs. For each author, URLs with the\nhighest priority weights are inserted into the URL Database and\ncrawled by the Focused Crawler at a later stage.\nWe have manually examined the records in the URL Database in\norder to evaluate the effectiveness of the Homepage Aggregator.\nIn total, homepages of 539 authors (90.89%) have been found.\nDetails about the 54 authors whose homepages cannot be found\nby the system are shown in Table 6. Here we define Non-U.S.\nauthors to be those whose affiliations are not currently in the\nStates.\nTable 6. Number of authors whose homepages are not found\n\nWebDB JAIR\nU.S. Authors\n13 6\nNon-U.S. Authors\n25 10\nTotal (Percentage)\n38 (13.33%)\n16 (5.19%)\nThere are only 2 papers ([22], [23]) of which all the authors'\nhomepages are not found by the system, which account for less\nthan 1% of the 289 papers in our data set. In other words,\nalthough the system fails to locate the homepages of about 9% of\nthe authors, it is not a major performance impact on the document\nrecall and the crawler should still be able to find 99.31% of all the\npapers.\nFor the cases where the system fails to locate some of the\nhomepages, we notice that most of the 19 U.S. authors whose\nhomepages are not found were actually in their graduate programs\nwhen they co-authored the paper, and their Web presences seem\nto have disappeared after graduation. In addition, there's a\nsignificant difference between the numbers of U.S. 
and non-U.S.\nauthors whose homepages cannot be found, with non-U.S. almost\ntwice the number of U.S. authors. Since this is our initial attempt\nlimited to only the domain of computer science, whether this\ndifference holds true for other disciplines and the reason behind\nremain an open question. Finally, there are several cases where\nthe homepages of those with famous names actually show up\ninstead of the desired authors. For example, a search via Google\nAPI for the first author in [24] returns the homepage of a comic\nartist. The top 5 websites for George Russell, the first author of\n[25], happen to belong to that of a famous Jazz musician. There\nare also a few cases where the search engine actually returns the\nhomepage of the co-author instead of the author himself, because\nthe author's name is listed on the co-author's page as a\ncollaborator and the co-author's page receives a higher page\nranking. All these indicate that the disambiguation capability\nneeds to be improved.\n4.1 Finding Desired Academic Publications\nWhen the crawl is finished, we manually examine the\ndownloaded PDF/PostScript documents in order to evaluate the\nperformance of the crawler. In total, the crawler has acquired 236\nout of the 289 papers (81.66%) published in WebDB (100 out of\n118, 84.75%) and JAIR (136 out of 171, 79.53%) from 1998 to\n2004. For details of the results for each venue, please see Figure 6\nand 7.\nWebDB, 1998 - 2004\n0\n5\n10\n15\n20\n25\n1998\n1999\n2000\n2001\n2002\n2003\n2004\nNumb\ner\nPapers found\nPapers published\nFigure 6. Number of WebDB Papers\nJAIR,1998 - 2004\n0\n5\n10\n15\n20\n25\n30\n35\n1998\n1999\n2000\n2001\n2002\n2003\n2004\nNumber\nPapers found\nPapers published\nFigure 7. Number of JAIR Papers\nHere we adopt one of the performance metrics, recall level, first\nproposed in [16] and used in [17]. Recall level is defined as:\n(i) = | S(i)\nT | / | T |\nwhere S(i) is the set of documents downloaded by the crawler\nduring a crawl on the dataset of a calendar year i; T is the set of\ndesired documents, which in this study are the papers published\nby a specific venue in the same calendar year. This measure\nrepresents the capability of the system to capture desired\nacademic papers.\nOverall, our system has achieved a recall level of 0.8475 for\nWebDB and 0.7953 for JAIR documents. See Figure 8 for more\ndetails.\nIt's interesting to note that while the recall level of WebDB is\nconstantly increasing until reaching 1.0 in the last two years, the\nrecall level of JAIR seems to fluctuate around 0.8 over the 7-years\nperiod. We find that 29 out of the 35 (82.86%) JAIR papers\nnot found by the system are actually downloadable via a link from\nthe authors' homepages to the publisher's website. Yet we miss\nthese papers simply because we limit our crawler not to go\nbeyond the domain of authors' homepages. We believe that a\nmore sophisticated domain restriction for the crawler can be\neasily employed in order to achieve an even higher recall level.\n306\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1998\n1999\n2000\n2001\n2002\n2003\n2004\nReca\nll Level\nWebDB\nJAIR\nFigure 8. Overall Recall Level, 1998 - 2004\nWe calculate the recall level for the documents published in\nWebDB and JAIR yet missing from CiteSeer's collection (see\nFigure 9). In this case, S(i) is the set of missing documents\ndownloaded by the crawler, and T is the set of the papers not\nindexed by CiteSeer and missing from the collection. 
On average,\nthe recall level has achieved 0.78 for WebDB and 0.72 for JAIR.\nEspecially WebDB's recall level is constantly increasing,\nreaching 1.0 for the last three years. This proves that it's practical\nto harvest the missing documents for a given publishing venue.\n0\n0.2\n0.4\n0.6\n0.8\n1\n1.2\n1998\n1999\n2000\n2001\n2002\n2003\n2004\nReca\nl\nl\nL\ne\nv\ne\nl\nWebDB\nJAIR\n\nFigure 9. Recall Level for the Missing Documents\nThe trends shown in Figure 8 and 9 seem to indicate that a rising\nnumber of academic papers have been put online, especially in\nand after the year 2000. However, it's interesting to note that it\nseems conference/workshop authors favor putting their\npublications on their homepages, while journal authors don't. Due\nto the limited size of our sample, we feel this is an open question\nto be answered with more data across multiple venues.\n4.2 Crawler Comparison: BF Crawler\nIn order to further evaluate the performance of our system, we\nalso compare our work to other crawling approaches. First we\ncrawled three conference websites using our system and a\nbreadth-first (BF) crawler. Figures 10, 11 and 12 show the results\nof crawls on different conference websites. The BF crawls are\nshown by the dashed line while the results of the focused crawler\nare shown by the solid line on the figures. The horizontal axis\nindicates the number of pages crawled and the vertical axis\nrepresents the number of research documents found by searching\nthose pages. The number of documents found is a cumulative sum\nof all PDF, PS and GZ files found on those sites. Since they may\ncontain duplicate files or the same content in different file types,\nthe numbers shown do not indicate unique papers. The number of\npages crawled does not include academic papers. The same crawl\nrestrictions applied to both the crawlers.\n0\n200\n400\n600\n800\n1000\n1200\n1400\n1600\n1800\n0\n10\n20\n30\nPages Crawled\nNu\nm\nb\ne\nr\no\nf\nDo\nc\nu\nm\ne\nn\nt\ns\nFC\nBF\n\nFigure 10. ACL Conference Crawl\nFigure 10 shows the crawls done on parts of the Association for\nComputational Linguistics (ACL) conference website. The total\nnumber of pages crawled on this site were less than 30. Both\ncrawls overlap which indicates that there is virtually no difference\nbetween the document detection rate of the BF crawler and our\nfocused crawler. For such a small website, both crawlers detect\nthe same number of documents after crawling the same number of\npages on the website.\n0\n1000\n2000\n3000\n4000\n5000\n6000\n7000\n0\n100\n200\n300\n400\nPages Crawled\nNu\nm\nb\ne\nr\no\nf\n\nDo\nc\nu\nm\ne\nn\nt\ns\nFC\nBF\n\nFigure 11. TREC Conference Crawl\nFigure 11 shows the crawls done on the Text Retrieval\nConference (TREC) pages. Here the total of pages crawled is\nabout 1000 (only first half of the crawl is shown in the graph).\nBoth crawlers start detecting documents at the same rate. After\ndetecting around 1393 documents (35 pages crawled) the\ndocument detection rate of the focused crawler becomes slightly\nbetter than the BF crawler. Although the difference is not very\nsignificant, the focused crawler does detect the research\ndocuments slightly earlier in the crawl as compared to the BF\ncrawler. The BF crawler detects the same amount of documents\n(4800 documents) as the focused crawler but after crawling 20-30\n307\npages more than the focused crawler. 
The total number of\ndocuments found by both the crawlers is around 6000.\n0\n500\n1000\n1500\n2000\n2500\n3000\n3500\n4000\n0\n500 1000 1500 2000 2500 3000 3500\nPages Crawled\nNu\nm\nb\ne\nr\no\nf\nDo\nc\nu\nm\ne\nn\nt\ns\nFC\nBF\n\nFigure 12. VLDB Conference Crawl\nThe crawls performed on the Very Large Database (VLDB)\nconference pages as shown in Figure 12 indicate that the focused\ncrawler detects the documents much earlier in the crawl. Here the\ntotal number of pages crawled is about 3500. Approximately 28%\n(1000 out of 3500) of the documents are located by both the\ncrawlers after crawling around 8.5% (300 out of 3500) of the\ndomain. At this point the focused crawler continues to locate\nmore documents while the BF crawler does not uncover any new\ndocuments until 28% (1000 out of 3500) of the total crawl. 85%\n(3000 out of 3500) of the documents are located by the focused\ncrawler after completing just 33% (1189 out of 3500) of the total\ncrawl, while the breadth first crawler locates the same amount of\ndocuments after completing 50% (1781 out of 3500) of the total\ncrawl. Towards the end of the crawl the breadth-first crawler\ndetects more papers as compared to the focused crawler. It takes\nthe focused crawler around 1000 more pages of crawls until it\nmakes up the difference. This seems to be due to the lack of\nkeywords associated with the links that eventually led to the\ndocuments. The focused crawler evaluates other papers that have\na higher priority values before eventually discovering the\nremaining documents.\nThe behavior of the BF crawler is consistent for all the three\ncrawls. Most of the documents located were in crawl depths 2, 3,\n4 and 5. The BF crawler detects them after completing search of\nthe previous crawl depths. As the focused crawler prioritizes the\nlinks for crawling, the higher depths with more priority are\ncrawled before the lower depths with less priority.\nThe above experiment indicates that the document harvest rate is\nalmost the same for smaller websites. The difference becomes\napparent when the size of the website being crawled is large. The\nfocused crawler is able to detect the documents much earlier in\nthe crawl as compared to the BF crawler. Since the crawls are not\nterminated early for the focused crawler, the number of\ndocuments found and the relevance of documents are same for\nboth the crawlers. Therefore as the size of websites being crawled\nincreases, the focused crawler detects more documents earlier\nduring the crawls as compared to the BF crawler.\nWe assess the crawler's capability of harvesting academic\npublications in a more general sense which is not only limited to a\nspecific venue. We have manually examined the first 500\nPDF/PostScript documents found by the two crawlers, classified\nthe documents into academic publications which are desirable\n(papers published in conferences and journals; technical reports;\ndegree thesis, etc.), and non-publication documents which are\nconsidered noise for a publication collection (course material;\npresentation slides; project schedule; etc.) Percentage of both\ncategories is compared side-by-side and shown in Figure 13. Our\ncrawler has outperformed the breadth-first counterpart by having\nmuch less of this noise.\n423\n480\n77\n22\n0%\n10%\n20%\n30%\n40%\n50%\n60%\n70%\n80%\n90%\n100%\nBF\nFC\nTotal: 50\n0\nNon-Publication\nDocuments\nAcademic\nPublications\nFigure 13. 
Composition of the First 500 PDF/PS Documents\n4.3 Crawler Comparison: Nutch Crawler\nWe compare the performance of our system with Nutch\n(http://www.nutch.org/docs/en/), an open source Web crawler and\nsearch engine built upon Lucene. In our experiment, we run the\nNutch crawler on the official websites of WebDB and JAIR, and\nidentify those papers published between 1998 and 2004 from the\ndownloaded documents. We then compare the number of papers\nharvested by Nutch and FC crawler (see Figure 14 for details).\nResults show that guided by certain heuristics, crawling authors'\nhomepages can actually achieve almost the same recall level as\ncrawling publishers' websites.\n0\n50\n100\n150\n200\nWebDB\nJAIR\nNumber of\nDocuments\nNutch\nFC\nTotal\nPublications\nFigure 14. Comparison between Nutch and Focused Crawler\nFigure 15\n\nindicates the progress of the crawls conducted by both\nthe Focused Crawler and the Nutch Crawler on the ACL\nconference website. The documents found are of PDF and PS\nonly. The focused crawler starts discovering documents earlier in\nthe crawl and the process continues gradually. Nutch on the other\nhand discovers most of the documents after crawling around 84%\n(22 out of 26) of the website.\n308\nFigure 6. ACL Conference Crawl\n0\n200\n400\n600\n800\n1000\n1200\n1400\n1600\n1800\n0\n10\n20\n30\nPages Crawled\nNu\nm\nb\ne\nr\no\nf\n\nDo\nc\nu\nm\ne\nn\nt\ns\nNutch\nFC\n\nFigure 15. Crawling ACL Conference Websites\nDocuments found during the ACL conference crawl are classified\ninto two categories: relevant (i.e. academic publications) and non-relevant\n(non-publication). Figure 16 shows the number of\ndocuments in each category. Note that determining documents'\nrelevancy is an offline process. Here R indicates relevant and NR\nindicated non-relevant documents.\nR, 1588\nR, 1404\nNR, 0\nNR, 0\n0\n200\n400\n600\n800\n1000\n1200\n1400\n1600\n1800\nFC\nNutch\nCrawler\nNu\nm\nb\ne\nr\no\nf\n\nDo\nc\nu\nm\ne\nn\nt\ns\nNR\nR\n\nFigure 16. Relevancy of the ACL Conference Crawl\nFigure 16 indicates that all the documents (PDF and PS) found by\nboth the crawlers are academic publications (thus NR = 0).\nHowever, the 184 documents Nutch failed to detect are\ndetermined to be all relevant research publications.\nThe same comparison is also conducted by crawling the official\nWebDB conference websites. Figure 17 shows that the Focused\nCrawler starts detecting desired documents at an earlier stage as\ncompared to the Nutch crawler. Yet due to the small number of\npages crawled, a rigorous comparison cannot be made in this case.\nFigure 18 shows that the focused crawler locates two more\nacademic publications than the Nutch crawler, both of which are\nmarked as relevant documents.\nCONCLUSION AND FUTURE WORK\nWe have shown the feasibility of using authors' homepages as\nalternative online resources to harvest the academic papers\nmissing from a collection of digital libraries, as well as the\ntechniques to maximize the crawler's performance in doing so.\nWe have designed and implemented a heuristic-based system\nwhich utilizes document metadata to accurately locate authors'\nhomepages and performs a focused crawling to quickly navigate\nto the desired publications. Evaluation has been conducted using a\nlarge dataset collected from several publishing venues in the\nComputer Science domain, and detailed results are presented and\ndiscussed.\nFigure 10. 
WEDB Conference Crawl\n0\n20\n40\n60\n80\n100\n120\n0\n10\n20\n30\n40\nPages Crawled\nNu\nm\nb\ne\nr\no\nf\nDo\nc\nu\nm\ne\nn\nt\ns\nNutch\nFC\n\nFigure 17. Crawling WebDB Conference Websites\nR, 104\nR, 106\nNR, 1\nNR, 1\n0\n20\n40\n60\n80\n100\n120\nFC\nNutch\nCrawler\nNu\nm\nb\ne\nr\no\nf\nDo\nc\nu\nm\ne\nn\nt\ns\nNR\nR\n\nFigure 18. Relevancy of the WebDB Conference Crawl\nFor the academic venues investigated in this study, we are able to\nfill many of the missing documents in the CiteSeer digital library.\nThe designed focused crawling technique efficiently locates\ndesired publications on authors' homepages as well as conference\nwebsites. The Homepage Aggregator detects homepages well and\nthe Focused Crawler outperforms the baseline crawler in a\nnumber of measures.\nFuture work includes a more rigorous disambiguation scheme for\nthe Homepage Aggregator and a more sophisticated weighting\nscheme for the Focused Crawler. In addition, we are now\ndeveloping a training process for the crawler to learn the URL\npatterns of alternative resources other than author homepages,\nsuch as institutional archives. Also, the automation of the process\ncycle of crawling, log analysis, and heuristics generation can help\nsearch engine based digital libraries scale and significantly reduce\ncosts. The actual URL of the web pages can also be used to assist\nin priority assignment instead of just using the anchor text of the\nlink. A comparison of this approach to techniques other than a\nBreadth-first crawl is currently underway. Furthermore, we plan\nto evaluate the validity of this approach by expanding our\nexperiment on to disciplines other than the Computer Science\n309\ndomain. We believe our study and its consequents will shed lights\non the question of finding missing papers for our digital library,\nor \"what's there and what's not\".\n\nACKNOWLEDGEMENTS\nWe gratefully acknowledge P. Mitra and the anonymous\nreviewers for their comments, I. Councill and P. Teregowda for\ntheir work on the CiteSeer metadata, and E. Maldonado and D.\nHellar for the crawl list. This work is partially supported by\nMicrosoft.\n\nREFERENCES\n[1] De Bra, P., Houben, G., Kornatzky, Y., and Post, R\nInformation Retrieval in Distributed Hypertexts. In\nProceedings of the 4th RIAO (Computer-Assisted Information\nRetrieval) Conference, pp. 481-491, 1994.\n[2] Cho J., Garcia-Molina, H., and Page, L. Efficient Crawling\nThrough URL Ordering. In Proceedings of the 7th World Wide\nWeb Conference, Brisbane, Australia, pp. 161-172. April 1998.\n[3] Chakrabarti, S., Van den Berg, M., and Dom, B. Focused\nCrawling: A New Approach to Topic-Specific Web Resource\nDiscovery. In Proceedings of the 8th International WWW\nConference, pp. 545-562, Toronto, Canada, May 1999.\n[4] Giles, C. L. and Councill, I. G. Who gets acknowledged:\nMeasuring scientific contributions through automatic\nacknowledgement indexing. In Proceedings of the National\nAcademy of Sciences 101(51) pp. 17599-17604, Dec. 21, 2004.\n[5] Najork, M. and Wiener, J. L. Breadth-First Search Crawling\nYields High-Quality Pages. In Proceedings of the 10th\nInternational World Wide Web Conference, pp. 114-118, 2001.\n[6] Page, L., Brin, S., Motwani, R., and Winograd, T. The\npagerank citation ranking: Bringing order to the web.\nTechnical report, Stanford University Database Group, 1998.\nAvailable at http://dbpubs.stanford.edu: 8090/pub/1999-66\n[7] Menczer, F., Pant, G., Ruiz, M., and Srinivasan, P. Evaluating\nTopic-Driven Web Crawlers.' 
In Proceedings of the 2001\nAnnual Conference of the Association of Computing\nMachinery, Special Interest Group in Information Retrieval,\n241-249. New Orleans, September 2001.\n[8] Haveliwala, T. H. Topic-Sensitive PageRank. In Proceedings\nof the 11th International World Wide Web Conference, pp.\n517-526. Honolulu, Hawaii, USA. May 2002.\n[9] Mukherjea, S. WTMS: a system for collecting and analyzing\ntopic-specific Web information. Computer Networks 33(1-6):\n457-471, 2000.\n[10] Diligenti, M., Coetzee, F.M., Lawrence, S., Giles, C. L., and\nGori, M. Focused Crawling Using Context Graphs. In\nProceedings of the 26th International Conference on Very\nLarge Data Bases, pp. 527-534, 2000.\n[11] Aggarwal, C. C., Al-Garawi, F., and Yu, P. S. Intelligent\nCrawling on the World Wide Web with Arbitary Predicates. In\nProceedings of the Tenth International Conference on World\nWide Web, pp. 96-105, 2001.\n[12] Aggarwal, C. C. On Learning Strategies for Topic Specific\nWeb Crawling. Next Generation Data Mining Applications,\nJanuary 2004.\n[13] Pant, G., Tsjoutsiouliklis, K., Johnson, J., and Giles, C. L.\nPanorama: Extending Digital Libraries with Topical Crawlers.\nIn Proceedings of the 2004 Joint ACM/IEEE Conference on\nDigital Libraries, pp. 142-150, 2004.\n[14] Menczer, F., Pant, G., and Srinivasan, P. Topical Web\nCrawlers: Evaluating Adaptive Algorithms. ACM TOIT 4(4):\n378-419, 2004.\n[15] Pant, G., Srinivasan, P., and Menczer, F. Crawling the Web. In\nM. Levene and A. Poulovassilis, eds.: Web Dynamics, Springer,\n2004.\n[16] Hoff, G. and Mundhenk, M. Finding scientific papers with\nhomepagesearch and MOPS. In Proceedings of the Nineteenth\nAnnual International Conference of Computer Documentation,\nCommunicating in the New Millennium, pp. 201-207. October\n21-24, 2001, Santa Fe, New Mexico, USA.\n[17] On, B. and Lee, D. PaSE: Locating Online Copy of Scientific\nDocuments Effectively. In Proceedings of the 7th International\nConference of Asian Digital Libraries (ICADL), pp. 408-418.\nShanghai, China, December 2004.\n[18] Shakes, J., Langheinrich, M., and Etzioni, O. Dynamic\nReference Sifting: a Case Study in the Homepage Domain. In\nProceedings of the Sixth International World Wide Web\nConference, pp. 189-200, 1997.\n[19] Xi, W. and Fox, E. A. Machine Learning Approach for\nHomepage Finding Task. In Proceedings of the Tenth Text\nREtrieval Conference (TREC 2001), pp. 686-698, 2001.\n[20] Anh, V. N. and Moffat, A. Homepage Finding and Topic\nDistillation using a Common Retrieval Strategy. In\nProceedings of the Eleventh Text REtrieval Conference (TREC\n2002), 2002.\n[21] Ogilvie, P. and Callan, J. Combining Structural Information\nand the Use of Priors in Mixed Named-Page and Homepage\nFinding. In Proceedings of the Twelfth Text REtrieval\nConference (TREC 2003), pp. 177-184, 2003.\n[22] Sundaresan, N., Yi, J., and Huang, A. W. Using Metadata to\nEnhance a Web Information Gathering System. In Proceedings\nof the Third International Workshop on the Web and\nDatabases (WebDB 2000), pp. 11-16, 2000.\n[23] Flesca, S., Furfaro, F., and Greco, S. Weighted Path Queries on\nWeb Data. In Proceedings of the Fourth International\nWorkshop on the Web and Databases (WebDB 2001), pp. 7-12,\n2001.\n[24] Ruiz, A., Lpez-de-Teruel, P. E., and Garrido, M. C.\nProbabilistic Inference from Arbitrary Uncertainty using\nMixtures of Factorized Generalized Gaussians. Journal of\nArtificial Intelligence Research (JAIR), Volume 9, pp. 167-217,\n1998.\n[25] Russell, G., Neumller, M., and Connor, R. C. H. 
TypEx: A\nType Based Approach to XML Stream Querying. In\nProceedings of the Sixth International Workshop on the Web\nand Databases (WebDB 2003), pp. 55-60, 2003.\n310\n", "keywords": "Digital libraries;CiteSeer;focused crawler;DBLP;harvesting;ACM"} {"name": "22", "title": "A Two-Phase Sampling Technique for Information Extraction from Hidden Web Databases", "abstract": "Hidden Web databases maintain a collection of specialised documents, which are dynamically generated in response to users' queries. However, the documents are generated by Web page templates, which contain information that is irrelevant to queries. This paper presents a Two-Phase Sampling (2PS) technique that detects templates and extracts query-related information from the sampled documents of a database. In the first phase, 2PS queries databases with terms contained in their search interface pages and the subsequently sampled documents. This process retrieves a required number of documents. In the second phase, 2PS detects Web page templates in the sampled documents in order to extract information relevant to queries. We test 2PS on a number of real-world Hidden Web databases. Experimental results demonstrate that 2PS effectively eliminates irrelevant information contained in Web page templates and generates terms and frequencies with improved accuracy.", "fulltext": "INTRODUCTION\nAn increasing number of databases on the Web maintain a\ncollection of documents such as archives, user manuals or news\narticles. These databases dynamically generate documents in\nresponse to users' queries and are referred to as Hidden Web\ndatabases [5]. As the number of databases proliferates, it has\nbecome prohibitive for specialised search services (such as\nsearch.com) to evaluate databases individually in order to answer\nusers' queries.\nCurrent techniques such as database selection and categorisation\nhave been employed to enhance the effectiveness of information\nretrieval from databases [2, 5, 10, 11, 15]. In the domain of the\nHidden Web, knowledge about the contents of databases is often\nunavailable. Existing approaches such as in [2, 10, 15] acquire\nknowledge through sampling documents from databases. For\ninstance, query-based sampling [2] queries databases with terms\nthat are randomly selected from those contained in the sampled\ndocuments. The techniques in [10, 15] sample databases with\nterms obtained from Web logs to retrieve additional topic terms.\nA major issue associated with existing techniques is that they also\nextract information irrelevant to queries. That is, information\nextracted is often found in Web page templates, which contain\nnavigation panels, search interfaces and advertisements.\nConsequently, the accuracy of terms and frequencies generated\nfrom sampled documents has been reduced.\nIn addition, approximate string matching techniques are adopted\nby [13] to extract information from Web pages, but this approach\nis limited to textual contents only. Alternatively, the approaches\nproposed in [3, 4] analyse Web pages in tree-like structures.\nHowever, such an approach requires Web pages with well-conformed\nHTML tag trees. Furthermore, [3] discovers\ndynamically generated objects from Web pages, which are\nclustered into groups of similar structured pages based on a set of\npre-defined templates, such as exception page templates and\nresult page templates.\nIn this paper, we propose a sampling and extraction technique,\nwhich is referred to as Two-Phase Sampling (2PS). 
2PS aims to\nextract information relevant to queries in order to acquire\ninformation contents of underlying databases. Our technique is\napplied in two phases. First, it randomly selects a term from those\nfound in the search interface pages of a database to initiate the\nprocess of sampling documents. Subsequently, 2PS queries the\ndatabase with terms randomly selected from those contained in\nthe sampled documents. Second, 2PS detects Web page templates\nand extracts query-related information from which terms and\nfrequencies are generated to summarise the database contents.\nOur approach utilises information contained in search interface\npages of a database to initiate the sampling process. This differs\nfrom current sampling techniques such as query-based sampling,\nwhich performs an initial query with a frequently used term.\nFurthermore, 2PS extracts terms that are relevant to queries thus\ngenerating statistics (i.e., terms and frequencies) that represent\ndatabase contents with improved accuracy. By contrast, the\napproaches in [2, 10, 15] extract all terms from sampled\ndocuments, including those contained in Web page templates.\nConsequently, information that is irrelevant to queries is also\nextracted.\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nWIDM'04, November 1213, 2004, Washington, DC, USA.\nCopyright 2004 ACM 1-58113-978-0/04/0011...$5.00.\n1\n\nFigure 1. The Two-Phase Sampling (2PS) technique.\n\n2PS is implemented as a prototype system and tested on a number\nof real-world Hidden Web databases, which contain computer\nmanuals, healthcare archives and news articles. Experimental\nresults show that our technique effectively detects Web page\ntemplates and generates terms and frequencies (from sampled\ndocuments) that are relevant to the queries.\nThe remainder of the paper is organised as follows. Section 2\nintroduces current approaches to the discovery of information\ncontents of Hidden Web databases. Related work on the\ninformation extraction from Web pages or dynamically generated\ndocuments is also discussed. Section 3 describes the proposed\n2PS technique. Section 4 presents experimental results. Section 5\nconcludes the paper.\nRELATED WORK\nA major area of current research into the information retrieval of\nHidden Web databases focuses on the automatic discovery of\ninformation contents of databases, in order to facilitate their\nselection or categorisation. For instance, the technique proposed\nin [6] analyses the hyperlink structures of databases in order to\nfacilitate the search for databases that are similar in content. The\napproach adopted by [10, 15] examines the textual contents of\nsearch interface pages maintained by data sources to gather\ninformation about database contents.\nA different approach is to retrieve actual documents to acquire\nsuch information. 
However, in the domain of Hidden Web\ndatabases, it is difficult to obtain all documents from a database.\nTherefore, a number of research studies [2, 10, 15] obtain\ninformation by retrieving a set of documents through sampling.\nFor instance, query-based sampling [2] queries databases with\nterms that are randomly selected from those contained in the\nsampled documents. The techniques in [10, 15] sample databases\nwith terms extracted from Web logs to obtain additional topic\nterms. These techniques generate terms and frequencies from\nsampled documents, which are referred to as Language Models\n[2], Textual Models [10, 15] or Centroids [11].\nA key issue associated with the aforementioned sampling\ntechniques is that they extract information that is often irrelevant\nto queries, since information contained in Web page templates\nsuch as navigation panels, search interfaces and advertisements is\nalso extracted. For example, a language model generated from the\nsampled documents of the Combined Health Information\nDatabase (CHID) contains terms (such as `author' and `format')\nwith high frequencies. These terms are not relevant to queries but\nare used for descriptive purposes. Consequently, the accuracy of\nterms and frequencies generated from sampled documents has\nbeen reduced. The use of additional stop-word lists has been\nconsidered in [2] to eliminate irrelevant terms - but it is\nmaintained that such a technique can be difficult to apply in\npractice.\nExisting techniques in information extraction from Web pages are\nof varying degrees of complexity. For instance, approximate\nstring matching techniques are adopted by [13] to extract texts\nthat are different. This approach is limited to finding textual\nsimilarities and differences. The approaches proposed in [3, 4]\nanalyse textual contents and tag structures in order to extract data\nfrom Web pages. However, such an approach requires Web pages\nthat are produced with well-conformed HTML tag-trees.\nComputation is also needed to convert and analyse Web pages in\na tree-like structure. Moreover, [3] identifies Web page templates\nbased on a number of pre-defined templates, such as exception\npage templates and result page templates.\nOur technique examines Web documents based on textual\ncontents and the neighbouring tag structures rather than analysing\ntheir contents in a tree-like structure. We also detect information\ncontained in different templates through which documents are\ngenerated. Therefore, it is not restricted to a pre-defined set of\npage templates.\nFurthermore, we focus on databases that contain documents such\nas archives and new articles. A distinct characteristic of\ndocuments found in such a domain is that the content of a\ndocument is often accompanied by other information for\nsupplementary or navigation purposes. The proposed 2PS\ntechnique detects and eliminates information contained in\ntemplates in order to extract the content of a document. This\ndiffers from the approaches in [1, 4], which attempt to extract a\nset of data from Web pages presented in a particular pattern. For\nexample, the Web pages of a bookstore Web site contain\ninformation about authors followed by their associated list of\npublications. 
However, in the domain of document databases,\ninformation contained in dynamically generated Web pages is\noften presented in a structured fashion but irrelevant to queries.\nOther research studies [9, 8, 12] are specifically associated with\nthe extraction of data from query forms in order to further the\nretrieval of information from the underlying databases.\n\nTWO-PHASE SAMPLING\nThis section presents the proposed technique for extracting\ninformation from Hidden Web document databases in two phases,\nwhich we refer to as Two-Phase Sampling (2PS). Figure 1 depicts\nthe process of sampling a database and extracting query-related\n2\ninformation from the sampled documents. In phase one, 2PS\nobtains randomly sampled documents. In phase two, it detects\nWeb page templates. This extracts information relevant to the\nqueries and then generates terms and frequencies to summarise\nthe database content. The two phases are detailed in section 3.1\nand 3.2.\n3.1 Phase One: Document Sampling\nIn the first phase we initiate the process of sampling documents\nfrom a database with a randomly selected term from those\ncontained in the search interface pages of the database. This\nretrieves top N documents where N represents the number of\ndocuments that are the most relevant to the query. A subsequent\nquery term is then randomly selected from terms extracted from\nthe sampled documents. This process is repeated until a required\nnumber of documents are sampled. The sampled documents are\nstored locally for further analysis.\nFigure 2 illustrates the algorithm that obtains a number of\nrandomly sampled documents. t\nq\ndenotes a term extracted from\nthe search interface pages of a database, D. qt\np\nrepresents a query\nterm selected from a collection of terms, Q, qt\np\n\nQ, 1 p m;\nwhere m is the distinct number of terms extracted from the search\ninterface pages and the documents that have been sampled. R\nrepresents the set of documents randomly sampled from D. t\nr\nis a\nterm extracted from d\ni\n. d\ni\nrepresents a sampled document from D,\nd\ni\n\nD, 1 i n, where n is the number of document to sample.\n\n\nFigure 2. The algorithm for sampling documents from a\ndatabase.\n\n2PS differs from query-based sampling in terms of selecting an\ninitial query. The latter selects an initial term from a list of\nfrequently used terms. 2PS initiates the sampling process with a\nterm randomly selected from those contained in the search\ninterface pages of the database. This utilises a source of\ninformation that is closely related to its content. Moreover, 2PS\nanalyses the sampled documents in the second phase in order to\nextract query-related information. By contrast, query-based\nsampling does not analyse their contents to determine whether\nterms are relevant to queries.\n3.2 Phase Two: Document Content Extraction\nand Summarisation\nThe documents sampled from the first phase are further analysed\nin order to extract information relevant to the queries. This is then\nfollowed by the generation of terms and frequencies to represent\nthe content of the underlying database. This phase is carried out\nthrough the following processes.\n3.2.1 Generate Document Content Representations\nThe content of each sampled document is converted into a list of\ntext and tag segments. Tag segments include start tags, end tags\nand single tags specified in HyperText Markup Language\n(HTML). Text segments are text that resides between two tag\nsegments. 
The document content is then represented by text\nsegments and their neighbouring tag segments, which we refer to\nas Text with Neighbouring Adjacent Tag Segments (TNATS). The\nneighbouring adjacent tag segments of a text segment are defined\nas the list of tag segments that are located immediately before and\nafter the text segment until another text segment is reached. The\nneighbouring tag segments of a text segment describe how the\ntext segment is structured and its relation to the nearest text\nsegments. Assume that a document contains n segments, a text\nsegment, txs, is defined as: txs = (tx\ni\n, tg-lst\nj\n, tg-lst\nk\n), where tx\ni\nis\nthe textual content of the i\nth\ntext segment, 1\ni n; tg-lst\nj\n\nrepresents p tag segments located before tx\ni\nand tg-lst\nk\nrepresents\nq tag segments located after tx\ni\nuntil another text segment is\nreached. tg-lst\nj\n= (tg\n1\n, ..., tg\np\n), 1\nj p and tg-lst\nk\n= (tg\n1\n, ..., tg\nq\n),\n1\nk q.\nAlgorithm SampleDocument\nExtract t\nq\nfrom search interface pages of D, Q = t\nq\n\nFor i = 1 to n\nRandomly select qt\np\nfrom Q\nIf (qt\np\nhas not been selected previously)\nExecute the query with qt\np\non D\nj = 0\nWhile j <= N\nIf (d\ni\n\nR)\nRetrieve d\ni\nfrom D\nExtract t\nr\nfrom d\ni\n,\nR = d\ni\nQ = t\nr\n\nIncrease j by 1\nEnd if\nEnd while\nEnd if\nEnd for\n\nFigure 3. A template-generated document from CHID.\nFigure 3 shows a template-generated document retrieved from the\nCHID database. The source code for this document is given in\nFigure 4. For example, text segment, `1. Equipos Mas Seguros:\nSi Te Inyectas Drogas.', can be identified by the text (i.e., `1.\nEquipos Mas Seguros: Si Te Inyectas Drogas.') and its\nneighbouring tag segments. These include the list of tags located\nbefore the text (i.e., </TITLE>, </HEAD>, <BODY>, <HR>,\n<H3>, <B> and <I>) and the neighbouring tags located after the\ntext (i.e., </I>, </B>, </H3>, <I> and <B>). Thus, this segment is\nthen represented as (`1. Equipos Mas Seguros: Si Te Inyectas\nDrogas.', (</TITLE>, </HEAD>, <BODY>, <HR>, <H3>, <B>\n,<I>), (</I>, </B>, </H3>, <I>, <B>)). Figure 5 shows the content\n3\nrepresentation of the CHID document (given in Figure 3)\ngenerated based on TNATS. Given a sampled document, d, with n\ntext segments, the content of d is then represented as: Content(d)\n= {txs\n1\n, ..., txs\nn\n}, where txs\ni\nrepresents a text segment, 1\ni n.\n\n\n\nFigure 4. The source code for the CHID document.\n\n\nFigure 5. The content representation of the CHID document\nusing TNATS.\n\n3.2.2 Detect Templates\nIn the domain of Hidden Web databases, documents are often\npresented to users through one or more templates. Templates are\ntypically employed in order to describe document contents or to\nassist users in navigation. For example, information contained in\nthe document (as shown in Figure 3) can be classified into the two\nfollowing categories:\n(i) Template-Generated Information. This includes information\nsuch as navigation panels, search interfaces and\nadvertisements. In addition, information may be given to\ndescribe the content of a document. Such information is\nirrelevant to a user's query. For example, navigation links\n(such as `Next Doc' and `Last Doc') and headings (such\n`Subfile' and `Format') are found in the document.\n(ii) Query-Related Information. This information is retrieved in\nresponse to a user's query, i.e., `1. Equipos Mas Seguros:\nSi Te Inyectas Drogas. 
...'.
The 2PS technique detects the Web page templates employed by databases to generate documents in order to extract information that is relevant to queries. Figure 6 describes the algorithm that detects information contained in Web page templates from n sampled documents. d_i represents a sampled document from the database D, d_i ∈ D, 1 ≤ i ≤ n. Content(d_i) denotes the content representation of d_i.

Source code of the CHID document (Figure 4):
...
<HTML><HEAD><TITLE>CHID Document</TITLE></HEAD>
<BODY>
<HR><H3><B><I> 1. Equipos Mas Seguros: Si Te Inyectas Drogas. </I></B></H3>
<I><B>Subfile: </B></I>
AIDS Education<BR>
<I><B>Format (FM): </B></I>
08 - Brochure.
<BR>
...

Content representation of the CHID document using TNATS (Figure 5):
...
`CHID Document', (<HTML>, <HEAD>, <TITLE>), (</TITLE>, </HEAD>, <BODY>, <HR>, <H3>, <B>, <I>);
`1. Equipos Mas Seguros: Si Te Inyectas Drogas.', (</TITLE>, </HEAD>, <BODY>, <HR>, <H3>, <B>, <I>), (</I>, </B>, </H3>, <I>, <B>);
`Subfile:', (</I>, </B>, </H3>, <I>, <B>), (</B>, </I>);
`AIDS Education', (</B>, </I>), (<BR>, <I>, <B>);
`Format (FM):', (<BR>, <I>, <B>), (</B>, </I>);
...

Algorithm DetectTemplate
For i = 1 to n
  If T = ∅
    If S = ∅
      S = S ∪ {d_i}
    Else if S ≠ ∅
      While l <= s AND T = ∅
        Compare(Content(d_i), Content(d_l))
        If Content(d_i) ≈ Content(d_l)
          wpt_k = Content(d_i) ∩ Content(d_l)
          Store wpt_k, T = T ∪ {wpt_k}
          Delete (Content(d_i) ∩ Content(d_l)) from Content(d_i), Content(d_l)
          G_k = G_k ∪ {d_i}, G_k = G_k ∪ {d_l}
          Delete d_l from S
        End if
      End while
      If T = ∅
        S = S ∪ {d_i}
      End if
    End if
  Else if T ≠ ∅
    While k <= r AND d_i ∉ G_k
      Compare(Content(wpt_k), Content(d_i))
      If Content(wpt_k) ≈ Content(d_i)
        Delete (Content(wpt_k) ∩ Content(d_i)) from Content(d_i)
        G_k = G_k ∪ {d_i}
      End if
    End while
    If S ≠ ∅ AND d_i ∉ G_k
      While l <= s AND d_i ∉ G_k
        Compare(Content(d_i), Content(d_l))
        If Content(d_i) ≈ Content(d_l)
          wpt_k = Content(d_i) ∩ Content(d_l)
          Store wpt_k, T = T ∪ {wpt_k}
          Delete (Content(d_i) ∩ Content(d_l)) from Content(d_i), Content(d_l)
          G_k = G_k ∪ {d_i}, G_k = G_k ∪ {d_l}
          Delete d_l from S
        End if
      End while
    End if
    If d_i ∉ G_k
      S = S ∪ {d_i}
    End if
  End if
End for
Figure 6. The algorithm for detecting and eliminating the information contained in Web page templates.

Similar to the representation of the contents of sampled documents, the content of a Web page template, wpt, is represented as Content(wpt) = {txs_1, ..., txs_q}, where q is the number of text segments, txs_j, 1 ≤ j ≤ q. T represents the set of templates detected, T = {wpt_1, ..., wpt_r}, where r is the distinct number of templates, wpt_k, 1 ≤ k ≤ r. G_k represents the group of documents generated from wpt_k. Furthermore, S represents the sampled documents from which no templates have yet been detected. Thus, S = {d_1, ..., d_s}, where s is the number of temporarily stored documents, d_l, 1 ≤ l ≤ s.
The process of detecting templates is executed until all sampled documents have been analysed. This results in the identification of one or more templates. For each template, two or more documents are assigned to the group associated with the template from which the documents are generated. Each document retains the text segments that are not found in its respective template. These text segments are partially related to the queries. 
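To make the detection step concrete, the following minimal Python sketch illustrates template detection over TNATS triples (text, tags-before, tags-after). It assumes documents have already been parsed into such triples; the helper names, the term weighting and the similarity threshold are illustrative choices, not prescribed by 2PS.

from collections import Counter
from math import sqrt

# A TNATS segment is a triple (text, tags_before, tags_after); Content(d) is a list of such triples.

def cosine(text_a, text_b):
    # Cosine similarity of two text segments using term-occurrence weights.
    wa, wb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    norm = sqrt(sum(v * v for v in wa.values())) * sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def similar(seg_a, seg_b, threshold=0.9):
    # Segments match only if their neighbouring tag segments are identical
    # and their textual contents are near-duplicates.
    return seg_a[1:] == seg_b[1:] and cosine(seg_a[0], seg_b[0]) >= threshold

def detect_template(content_a, content_b):
    # Template candidate: the segments of one document that also occur in the other.
    return [seg for seg in content_a if any(similar(seg, other) for other in content_b)]

def remove_template(content, template):
    # Delete the template segments from a document's content representation.
    return [seg for seg in content if not any(similar(seg, t) for t in template)]

Applied pairwise to the sampled documents, detect_template plays the role of Content(d_i) ∩ Content(d_l) in the algorithm of Figure 6, and remove_template corresponds to the subsequent Delete step.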
In addition to a set of templates, the content representations of zero or more documents in which no matched patterns are found are stored.
3.2.3 Extract Query-Related Information
This process analyses the group of documents associated with each template from which the documents are generated. It further identifies any repeated patterns in the remaining text segments of the documents in order to extract query-related information.
We compute the cosine similarity [14] given in (1) to determine the similarities between text segments of different documents that are associated with the template from which the documents are generated. The textual content of each text segment is represented as a vector of terms with weights. The weight of a term is given by its number of occurrences in the segment.

COSINE(txs_i, txs_j) = \frac{\sum_{k=1}^{t} tw_{ik} \, tw_{jk}}{\sqrt{\sum_{k=1}^{t} (tw_{ik})^2} \, \sqrt{\sum_{k=1}^{t} (tw_{jk})^2}}   (1)

where txs_i and txs_j represent two text segments in a document; tw_ik is the weight of term k in txs_i, and tw_jk is the weight of term k in txs_j. Cosine similarity is only applied to text segments with identical adjacent tag segments. Two segments are considered to be similar if their similarity exceeds a threshold value. The threshold value is determined experimentally.
The algorithm that extracts information relevant to queries is illustrated in Figure 7. d_a and d_b represent sampled documents from the database D, d_a, d_b ∈ G_k, where G_k denotes a group of documents associated with the template, wpt_k, from which the documents are generated. tx_m represents the textual content of a text segment, txs_m, contained in d_i, d_i ∈ G_k. tx_n represents the textual content of a text segment, txs_n, contained in d_l, d_l ∈ S. S represents the sampled documents from which no templates are detected.

Algorithm ExtractQueryInfo
For each (d_a ∈ G_k)
  For each (d_b ∈ G_k), d_a ≠ d_b
    Compare(Content(d_a), Content(d_b))
    If Content(d_a) ≈ Content(d_b)
      Delete (Content(d_a) ∩ Content(d_b)) from Content(d_a), Content(d_b)
    End if
  End for
End for
For each (d_i ∈ G_k)
  Extract tx_m of txs_m from Content(d_i)
End for
For each (d_l ∈ S)
  Extract tx_n of txs_n from Content(d_l)
End for
Figure 7. The algorithm for extracting query-related information from template-generated documents.

The algorithm extracts text segments with different tag structures. It also extracts text segments that have identical adjacent tag structures but differ significantly in their textual contents. Figure 8 shows the information extracted from the document content (given in Figure 4) as a result of eliminating the information contained in the Web page template.
3.2.4 Generate Content Summary
Frequencies are computed for the terms extracted from the randomly sampled documents. These summarise the information content of a database, which we refer to as the Content Summary.

1. Equipos Mas Seguros: Si Te Inyectas Drogas.
AIDS Education
...

Figure 8. 
The query-related information extracted from the\nCHID document.\n\nPrevious experiments in [2] demonstrate that a number of\nrandomly sampled documents (i.e., 300 documents) sufficiently\nrepresent the information content of a database.\nIn the domain of Hidden Web databases, the inverse document\nfrequency (idf), used in traditional information retrieval, is not\napplicable, since the total number of documents in a database is\noften unknown. Therefore, document frequency (df), collection\nterm frequency (ctf) and average term frequency (avg_tf) initially\nused in [2] are applied in this paper. We consider the following\nfrequencies to compute the content summary of a Hidden Web\ndatabase.\nDocument frequency (df): the number of documents in the\ncollection of documents sampled that contain term t,\nwhere d is the document and f is the frequency\nCollection term frequency (ctf): the occurrence of a term\nin the collection of documents sampled, where c is the\ncollection, t is the term and f is the frequency\nAverage term frequency (avg_tf): the average frequency\nof a term obtained from dividing collection term\nfrequency by document frequency (i.e., avg_tf = ctf / df)\n5\nTable 1. 3 Hidden Web databases used in the experiments\nDatabase URL\nSubject Content Template\nHelp Site\nwww.help-site.com\nComputer manuals Homogeneous Multiple\ntemplates\nCHID www.chid.nih.gov\nHealthcare\narticles Homogeneous Single\ntemplate\nWired News\nwww.wired.com\nGeneral news articles\nHeterogeneous Single\ntemplate\n\nThe content summary of a document database is defined as\nfollows. Assume that a Hidden Web database, D, is sampled with\nN documents. Each sampled document, d, is represented as a\nvector of terms and their associated weights [14]. Thus d = (w\n1\n,\n..., w\nm\n), where w\ni\nis the weight of term t\ni\n, and m is the number of\ndistinct terms in d\nD, 1 i m. Each w\ni\nis computed using term\nfrequency metric, avg_tf (i. e., w\ni\n= ctf\ni\n/df\ni\n). The content summary\nis then denoted as CS(D), which is generated from the vectors of\nsampled documents. Assume that n is the number of distinct terms\nin all sampled documents. CS(D) is, therefore, expressed as a\nvector of terms: CS(D)= {w\n1\n, ..., w\nn\n}, where w\ni\nis computed by\nadding the weights of t\ni\nin the documents sampled from D and\ndividing the sum by the number of sampled documents that\ncontain t\ni\n, 1\ni n.\n\nEXPERIMENTAL RESULTS\nThis section reports on a number of experiments conducted to\nassess the effectiveness of the 2PS technique in terms of: (i)\ndetecting Web page templates, and (ii) extracting relevant\ninformation from the documents of a Hidden Web databases\nthrough sampling. The experimental results are compared with\nthose from query-based sampling (abbreviated as QS). We\ncompare 2PS with QS as it is a well-established technique and has\nalso been widely adopted by other relevant studies [5, 10, 11, 15].\nExperiments are carried out on three real-world Hidden Web\ndocument databases including Help Site, CHID and Wired News,\nwhich provide information about user manuals, healthcare\narchives and news articles, respectively. Table 1 summarises\nthese databases in terms of their subjects, contents and templates\nemployed. For instance, Help Site and CHID contain documents\nrelating to subjects on computing and healthcare, respectively.\nTheir information contents are homogeneous in nature. 
By\ncontrast, Wired News contains articles that relate to different\nsubjects of interest.\nWhere the number of templates is concerned, CHID and Wired\nNews generate documents from one Web page template. Help\nSite maintains a collection of documents produced by other\ninformation sources. Subsequently, different Web page templates\nare found in Help Site sampled documents.\nThe experiment conducted using QS initiates the first query to a\ndatabase with a frequently used term to obtain a set of sampled\ndocuments. Subsequent query terms are randomly selected from\nthose contained in the sampled documents. It extracts terms\n(including terms contained in Web page templates) and updates\nthe frequencies after each document is sampled. By contrast, 2PS\ninitiates the sampling process with a term contained in the search\ninterface pages of a database. In addition, 2PS analyses the\nsampled documents in the second phase in order to extract query-related\ninformation, from which terms and frequencies are\ngenerated.\nExperimental results in [2] conclude that QS obtains\napproximately 80% of terms from a database, when 300\ndocuments are sampled and top 4 documents are retrieved for\neach query. These two parameters are used to obtain results for\nour experiments in which terms and frequencies are generated for\nQS and 2PS after 300 documents have been sampled. The results\ngenerated from QS provide the baseline for the experiments.\nThree sets of samples are obtained for each database and 300\ndocuments are retrieved for each sample. First, we manually\nexamine each set of sampled documents to obtain the number of\nWeb page templates used to generate the documents. This is then\ncompared with the number of templates detected by 2PS. The\ndetection of Web page templates from the sampled documents is\nimportant as this determines whether irrelevant information is\neffectively eliminated.\nNext, we compare the number of relevant terms (from top 50\nterms) retrieved using 2PS with the number obtained by QS.\nTerms are ranked according to their ctf frequencies to determine\ntheir relevancy to the queries. This frequency represents the\noccurrences of a term contained in the sampled documents. Ctf\nfrequencies are used to demonstrate the effectiveness of\nextracting query-related information from sampled documents\nsince the terms extracted from Web page templates are often\nranked with high ctf frequencies.\n\nTable 2. The number of templates employed by databases and\nthe number detected by 2PS\nNumber of templates\nDatabases\nEmployed Detected\nSample 1\n17\n15\nSample 2\n17\n16\nHelp Site\nSample 3\n19\n17\nSample 1\n1\n1\nSample 2\n1\n1\nCHID\nSample 3\n1\n1\nSample 1\n1\n1\nSample 2\n1\n1\nWired News\nSample 3\n1\n1\nExperimental results for QS and 2PS are summarised as follows.\nFirstly, Table 2 gives the number of Web page templates\nemployed by the databases and the number detected by 2PS. It\nshows that 2PS effectively identifies the number of templates\nfound in the sampled documents. However, a small number of\ntemplates are not detected from Help Site. For instance, 2PS does\nnot detect two of the templates from the first set of sampled\ndocuments, since the two templates are very similar in terms of\ncontent and structure.\n6\nTable 3 summarises the number of relevant terms (from top 50\nterms ranked according to their ctf frequencies) obtained for the\nthree databases. These terms are retrieved using 2PS and QS. 
We\ndetermine the relevancy of a term by examining whether the term\nis found in Web page templates. Table 3 gives the number of\nretrieved terms that do not appear in Web page templates. The\nresults show that 2PS obtains more relevant terms. For instance,\nin the first set of documents sampled from CHID using 2PS, the\nnumber of relevant terms retrieved is 47. By comparison, the\nnumber of terms obtained for QS is 20.\nThe results generated from CHID and Wired News demonstrate\nthat 2PS retrieves more relevant terms, as a large number of terms\ncontained in the templates have been successfully eliminated from\nthe top 50 terms. However, the elimination of template terms is\nless noticeable for Help Site. Our observation is that template\nterms attain high frequencies since the CHID and Wired News\ndatabases generate documents using a single Web page template.\nBy comparison, a larger number of Web page templates are found\nin the documents sampled from Help Site. As a result, terms\ncontained in the templates do not attain high frequencies as those\nfound in the templates employed by CHID and Wired News.\nTable 4 and 5 show the results of the top 50 terms ranked\naccording to their ctf frequencies retrieved from the first set of\nsampled documents of the CHID database. Table 4 shows the top\n50 terms retrieved for QS whereby terms contained in Web page\ntemplates are not excluded. As a result, a number of terms (such\nas `author', `language' and `format') have attained much higher\nfrequencies. By contrast, Table 5 lists the top 50 terms retrieved\nusing 2PS. Our technique eliminates terms (such as `author' and\n`format') and obtains terms (such as `treatment', `disease' and\n`immunodeficiency') in the higher rank.\nTable 3. The number of relevant terms retrieved (from top 50\nterms) according to ctf frequencies\nNumber of relevant terms\nDatabases\nQS 2PS\nSample 1\n46\n48\nSample 2\n47\n48\nHelp Site\nSample 3\n46\n48\nSample 1\n20\n47\nSample 2\n19\n47\nCHID\nSample 3\n20\n47\nSample 1\n14\n42\nSample 2\n10\n43\nWired News\nSample 3\n11\n39\nCONCLUSION\nThis paper presents a sampling and extraction technique, 2PS,\nwhich utilises information that is contained in the search interface\npages and documents of a database in the sampling process. This\ntechnique extracts information relevant to queries from the\nsampled documents in order to generate terms and frequencies\nwith improved accuracy. Experimental results demonstrate that\nour technique effectively eliminates information contained in\nWeb page templates, thus attaining terms and frequencies that are\nof a higher degree of relevancy. This can also enhance the\neffectiveness of categorisation in which such statistics are used to\nrepresent the information contents of underlying databases.\nWe obtain promising results by applying 2PS in the experiments\non three databases that differ in nature. However, experiments on\na larger number of Hidden Web databases are required in order to\nfurther assess the effectiveness of the proposed technique.\n\nTable 4. 
Top 50 terms and frequencies ranked according to ctf generated from CHID when QS is applied\nRank Term Rank Term Rank\nTerm\n1\nhiv\n18 document 35\nlg\n2 aids 19 disease 36\nve\n3 information 20 published 37\nyr\n4 health 21\nphysical\n38\nac\n5 prevention 22 subfile 39\ncorporate\n6 education 23 audience 40\nmj\n7 tb 24\nupdate\n41 description\n8 accession 25 verification 42\nwww\n9 number 26 major 43\ncn\n10 author 27\npamphlet\n44\npd\n11 persons 28 chid 45\nenglish\n12 language 29 human 46\nnational\n13 sheet 30 date 47\npublic\n14 format 31 abstract 48\nimmunodeficiency\n15\ntreatment\n32 code 49\nvirus\n16\ndescriptors\n33 ab 50\norg\n17 availability 34\nfm\n\n\n7\nTable 5. Top 50 terms and frequencies ranked according to ctf generated from CHID when 2PS is applied\nRank Term Rank\nTerm\nRank\nTerm\n1 hiv 18\neducation\n35\ntesting\n2 aids 19\nvirus\n36\nprograms\n3 information 20 org 37\nservices\n4 health 21\nnotes\n38\nclinical\n5 prevention 22 nt 39\npeople\n6 tb 23\ncdc\n40\nhepatitis\n7 persons 24\nservice\n41\ncommunity\n8\nsheet\n25 box 42\nworld\n9 treatment 26\nresearch\n43\nlisted\n10 disease 27\ndepartment\n44\nprofessionals\n11 human 28\npositive\n45\ntraining\n12 pamphlet 29\ntuberculosis\n46\ndiseases\n13 www 30\ncontrol\n47\naccession\n14 http 31\ndrug\n48\nnetwork\n15 national 32\ndiscusses\n49\ngeneral\n16 public 33 ill 50\nstd\n17 immunodeficiency 34 organizations\n\n\n\nREFERENCES\n[1]\nArasu, A. and Garcia-Molina, H. Extracting Structured Data\nfrom Web Pages. In Proceedings of the 2003 ACM SIGMOD\nInternational Conference on Management, 2003, 337-348.\n[2]\nCallan, J. and Connell, M. Query-Based Sampling of Text\nDatabases. ACM Transactions on Information Systems\n(TOIS), Vol. 19, No. 2, 2001, 97-130.\n[3]\nCaverlee, J., Buttler, D. and Liu, L. Discovering Objects in\nDynamically-Generated Web Pages. Technical report,\nGeorgia Institute of Technology, 2003.\n[4]\nCrescenzi, V., Mecca, G. and Merialdo, P. ROADRUNNER:\nTowards Automatic Data Extraction from Large Web Sites,\nIn Proceedings of the 27th International Conference on Very\nLarge Data Bases (VLDB), 2001, 109-118.\n[5]\nGravano, L., Ipeirotis, P. G. and Sahami, M. QProber: A\nSystem for Automatic Classification of Hidden-Web\nDatabases. ACM Transactions on Information Systems\n(TOIS), Vol. 21, No. 1, 2003.\n[6]\nHe, M. and Drobnik, O. Clustering Specialised Web-databases\nby Exploiting Hyperlinks. In Proceedings of the\nSecond Asian Digital Library Conference, 1999.\n[7]\nHedley, Y.L., Younas, M., James, A. and Sanderson M.\nQuery-Related Data Extraction of Hidden Web Documents.\nIn Proceedings of the 27th Annual International ACM SIGIR\nConference, 2004, 558-559.\n[8]\nLage, J. P., da Silva, A. S., Golgher, P. B. and Laender, A.\nH. F. Automatic Generation of Agents for Collecting Hidden\n\n\nWeb Pages for Data Extraction. Data & Knowledge\nEngineering, Vol. 49, No. 2, 2004, 177-196.\n[9]\nLiddle, S.W., Yau, S.H. and Embley, D. W. On the\nAutomatic Extraction of Data from the Hidden Web. In\nProceedings of the 20th International Conference on\nConceptual Modeling, (ER) Workshops, 2001, 212-226.\n[10]\nLin, K.I. and Chen, H. Automatic Information Discovery\nfrom the Invisible Web. International Conference on\nInformation Technology: Coding and Computing (ITCC),\n2002, 332-337.\n[11]\nMeng, W., Wang, W., Sun, H. and Yu, C. Concept\nHierarchy Based Text Database Categorization.\nInternational Journal on Knowledge and Information\nSystems, Vol. 4, No. 2, 2002, 132-150.\n[12]\nRaghavan, S. 
and Garcia-Molina, H. Crawling the Hidden\nWeb. In Proceedings of the 27th International Conference on\nVery Large Databases (VLDB), 2001, 129-138.\n[13]\nRahardjo, B. and Yap, R. Automatic Information Extraction\nfrom Web Pages, In Proceedings of the 24th Annual\nInternational ACM SIGIR Conference, 2001, 430-431.\n[14]\nSalton, G. and McGill, M. Introduction to Modern\nInformation Retrieval. New York, McCraw-Hill, 1983.\n\n[15]\nSugiura, A. and Etzioni, O. Query Routing for Web Search\nEngines: Architecture and Experiments. In Proceedings of\nthe 9th International World Wide Web Conference: The\nWeb: The Next Generation, 2000, 417-430.\n\n8", "keywords": "Hidden Web Databases;search interface pages;Information Extraction;hypertext markup langauges;hidden web databases;2-phase sampling technique;neighbouring adjacent tag segments;string matching techniques;information extraction;web page templates;Document Sampling;query-based sampling;irrelavant information extraction"} {"name": "23", "title": "A Unified Approach for Improving QoS and Provider Revenue in 3G Mobile Networks", "abstract": "In this paper, we introduce a unified approach for the adaptive control of 3G mobile networks in order to improve both quality of service (QoS) for mobile subscribers and to increase revenue for service providers. The introduced approach constantly monitors QoS measures as packet loss probability and the current number of active mobile users during operation of the network. Based on the values of the QoS measures just observed, the system parameters of the admission controller and packet scheduler are controlled by the adaptive performance management entity. Considering UMTS, we present performance curves showing that handover failure probability is improved by more than one order of magnitude. Moreover, the packet loss probability can be effectively regulated to a predefined level and provider revenue is significantly increased for all pricing policies.", "fulltext": "Introduction\nThe third generation (3G) of mobile networks is expected\nto complete the worldwide globalization process of mobile\ncommunication. Since different parts of the worlds emphasize\ndifferent issues, the global term 3G has regional synonyms\n: In the US and Japan, 3G often carries the name International\nMobile Telephony 2000 (IMT2000). In Europe,\n3G has become Universal Mobile Telecommunications System\n(UMTS) following the ETSI perspective. The European\nindustrial players have created the 3rd Generation Partnership\nProject (3GPP) [1] for the standardization of UMTS.\n3G mobile networks provide the foundation for new services\nwith high-rate data not provided by current second generation\nsystems [26]. While the standardization of 3G is still ongoing\nthe discussion of technical issues beyond 3G has already\nstarted [23,28]. Recently, Aretz et al. reported a vision for\nthe future of wireless communication systems beyond 3G that\nconsists of a combination of several optimized access systems\non a common IP-based medium access and core network platform\n[5].\nCharging and pricing are essential issues for network operations\nof 3G mobile networks. A primary target of differen-tiated\npricing of Internet services is the prevention of system\noverload and an optimal resource usage according to different\ndaytimes and different traffic intensities [12]. 
Among the\nproposed pricing proposals, flat-rate pricing [11] is the most\ncommon mode of payment today for bandwidth services.\nFlat-rate pricing is popular because of its minimal accounting\noverhead. A flat-rate encourages usage but does not offer\nany motivation for users to adjust their demand. Dynamic\npricing models that take the state of the network into account\nin the price determination have been proposed as being more\n\nCorresponding author.\nresponsive. Usage-based pricing regulates usage by imposing\na fee based on the amount of data actually sent, whereas\ncongestion-sensitive pricing uses a fee based on the current\nstate of congestion in the network. Thus, a unified approach\nconsidering both dynamic pricing and controlling quality of\nservice (i.e., performance management) provides an effective\ntool for the operation of 3G mobile networks. However, in\nprevious work [8,13,19,21,25] the improvement of Quality of\nService (QoS) in 3G mobile networks and the optimization\nof mobile service provider revenue has been considered sepa-rately\n.\nThe Quality of Service (QoS) concept and architecture for\nUMTS networks specified in [2] provides means for sharing\nradio resources among different groups of users according\nto their individual QoS demands. Furthermore, the concept\nof UMTS management and control functions such as\nadmission controller and resource manager is roughly outlined\n. Das et al. proposed a framework for QoS provisioning\nfor multimedia services in 3G wireless access networks [8].\nThey developed an integrated framework by combining various\napproaches for call admission control, channel reservation\n, bandwidth degradation, and bandwidth compaction.\nIn [19], we introduced a framework for the adaptive control\nof UMTS networks, which utilizes online monitoring of QoS\nmeasures (e.g., handover failure and call blocking probabilities\n) in order to adjust system parameters of the admission\ncontroller and the packet scheduler. The presented approach\nis based on a lookup table called the Performance Management\nInformation Base (P-MIB). Entries of the P-MIB have\nto be determined using extensive off-line simulation experiments\nto determine optimal parameter configuration for the\nconsidered scenarios. Given the entries of the P-MIB, we\nshowed how to improve QoS for mobile users by periodi-cally\nadjusting system parameters. The practical applicability\nof this approach is limited if the P-MIB comprises many en-210\nC. LINDEMANN ET AL.\ntries (i.e., many scenarios have to be considered) because of\nthe high computational effort for determining these entries by\nsimulation.\nThis paper introduces a unified approach for the adaptive\nperformance management for 3G mobile networks. As the\nmain result of the paper, the introduced approach is based on\na mathematical framework for the proposed update schemes\nrather than a lookup table. As a consequence, the adaptive\ncontrol mechanism can be adjusted in an intuitive way and\noptimal system parameter configuration can efficiently be determined\n. We effectively utilize adaptive performance management\nfor improving not only QoS for mobile users but\nalso increase revenue earned by service providers. 
As in [19], controlled system parameters comprise queueing weights for packet scheduling, a threshold value of the access queue for admission of non real-time traffic, and a portion of the overall available bandwidth reserved for handover calls. Beyond [19], we propose a scheme for adjusting the queueing weights both for improving QoS for higher priority users that suffer from a high population of users with lower priority and for increasing the revenue earned by the service provider. For the analysis of the update strategy of the queueing weights, we consider a usage-based and a usage-/throughput-based pricing policy according to [11,12,21]. Furthermore, we introduce a hybrid pricing policy combining the notion of flat-rate and usage-based pricing according to current policies of GSM networks. Performance curves derived by simulation clearly illustrate the gain of the unified approach for adaptive performance management. In fact, for UMTS networks, simulation results show that handover failure probability can be improved by more than one order of magnitude. Moreover, packet loss probability can be effectively regulated to a predefined level and the provider revenue is significantly increased for all considered pricing policies.
The paper is organized as follows. Section 2 introduces the unified approach for adaptive performance management and describes its embedding in the system architecture of 3G mobile networks. Section 3 introduces strategies for controlling the parameters of an admission controller in order to improve QoS. Section 4 describes the parameter control of a packet scheduler for the combined improvement of both QoS and provider revenue. In section 5, we present simulation results that illustrate the benefit of employing the proposed approach for adaptive performance management. Finally, concluding remarks are given.
Adaptive performance management for 3G mobile networks
This section introduces the unified approach for regularly adjusting system parameters to changing traffic load, packet arrival pattern or population of users, etc. We consider a cellular mobile network in which a different transceiver station serves each cell. The purpose of the transceiver station is the modulation of carrier frequencies and demodulation of signals. Furthermore, a base station controller (BSC) is considered that is responsible for a cluster of cells, i.e., several transceiver stations. The BSC manages the radio resources, i.e., schedules data packets, and controls handovers inside the cell cluster as well as handovers towards and from neighboring cell clusters.

Figure 1. System architecture for adaptive performance management.

To improve QoS for mobile users as well as to increase revenue earned by service providers, an entity for Adaptive Performance Management (APM) is included in a BSC. Furthermore, a BSC has to be extended by an online performance monitoring component that derives QoS measures in a certain time window (e.g., handover failure probabilities of mobile users or packet loss probabilities). These QoS measures form a system pattern that is submitted in fixed time intervals (i.e., a control period) to the APM entity, which subsequently updates corresponding system parameters (i.e., parameters of traffic controlling components like the admission controller and packet scheduler). Thus, the proposed approach closes the loop between network operation and network control. 
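To illustrate this closed loop, the following Python sketch shows how an online monitoring component could maintain sliding windows of relevant events and hand a system pattern to the APM entity at the end of each control period. The class and method names are illustrative only and do not correspond to any standardised interface.

from collections import deque

class SlidingWindow:
    # Keeps the last `size` relevant events (1 = failure/loss, 0 = success) for one QoS measure.
    def __init__(self, size):
        self.events = deque(maxlen=size)

    def record(self, failed):
        self.events.append(1 if failed else 0)

    def probability(self):
        return sum(self.events) / len(self.events) if self.events else 0.0

class OnlineMonitor:
    # Collects relevant events per QoS measure and emits a system pattern once per control period.
    def __init__(self, window_sizes):
        self.windows = {name: SlidingWindow(size) for name, size in window_sizes.items()}
        self.period_counts = {name: 0 for name in window_sizes}

    def record(self, measure, failed):
        self.windows[measure].record(failed)
        self.period_counts[measure] += 1

    def end_of_control_period(self):
        # System pattern: (current QoS value, number of relevant events in the last control period).
        pattern = {m: (w.probability(), self.period_counts[m]) for m, w in self.windows.items()}
        self.period_counts = {m: 0 for m in self.period_counts}
        return pattern  # e.g. {'PLP': (0.002, 5400), 'HFP': (0.01, 37)}

The returned pattern corresponds to the system pattern defined in the following subsections; the APM entity would use it to decide whether enough relevant events have occurred to update the admission-control threshold, the handover bandwidth fraction and the queueing weights.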
Figure\n1 shows the system architecture for performance management\nembedded in a BSC.\n2.1.1. Online performance monitoring\nSystem parameters of a BSC can be effectively updated by\nmonitoring QoS measures, which are immediately affected by\nthese parameters. A current value for a QoS measure is determined\nonline based on a set of relevant events corresponding\nto this QoS measure (e.g., packet arrivals are relevant events\nfor computing packet loss probabilities). The online monitoring\nof QoS measures is done by a sliding window technique\nas introduced in [19]. The width of the sliding window over\ntime depends on the number of relevant events that are occurred\naccording to a QoS measure. Upon arrival of a new\nrelevant event the sliding window moves in time. At the end\nof a control period the QoS measures are derived for each\nsliding window (e.g., packet loss probability can be derived\nfrom number of lost packets divided by number of all packet\narrivals in the sliding window). These QoS measures and the\nnumber of events occurred in the last control period form the\nsystem pattern that is transferred to the adaptive performance\nmanagement entity (see figure 1).\nIMPROVING QoS AND PROVIDER REVENUE IN 3G MOBILE NETWORKS\n211\nNote that an accurate online monitoring of QoS measures\nrequires a specific width for the sliding window. A certain\nnumber of events representing the history of the QoS measure\nhave to be considered to get an expressive measure. On\nthe other hand considering a big sliding window prevents the\nAPM entity from fast reaction on changing traffic conditions.\nA bigger sliding window contains more history and, thus,\nmore events have to be collected to cause a significant change\nin the online monitored QoS measure. This tradeoff between\naccurate online monitoring and fast reaction of the APM to\nchanging traffic conditions has to be studied carefully in several\nexperiments to get the optimal width of the sliding window\nfor each QoS measure.\n2.1.2. Adaptive performance management\nWhenever a system pattern S\n= {(P\n1\n, n\n1\n), . . . , (P\nm\n, n\nm\n)\n},\nconsisting of online monitored QoS measures P\n1\n, . . . , P\nm\nand the numbers of relevant events n\n1\n, . . . , n\nm\noccurred in\nthe last control period is transmitted to the APM an update\nof the system parameters can be performed. In general\n, an update of a system parameter is made according\nto a function f depending on a subset of the QoS measures\nP\n1\n, . . . , P\nm\nand the previous value\n(\nold)\nof the system parameter\n. Let P\n(\n1)\n, . . . , P\n(k)\n, k\nm\n, be the QoS measures\ncorresponding to system parameter , then the update is made\nif a certain minimum number n( ) of relevant events occurred\nin the last control period. That is:\n\n(\nnew)\n= f P\n(\n1)\n, . . . , P\n(k)\n,\n(\nold)\n,\nif min\n{n\n(\n1)\n, . . . , n\n(k)\n}\nn( ).\n(1)\nWe classify update functions in relative functions, that perform\na parameter update relative to the old parameter value\nand absolute functions that set the new parameter value independent\nof the old value, i.e., f is independent of\n(\nold)\nin (1). With relative update functions strong fluctuations of\nthe corresponding system parameter in one update step can be\navoided. In section 3, we study a special class of relative update\nfunctions in order to set the parameters of an admission\ncontroller. 
Furthermore, we develop in section 4 an absolute\nupdate function for adjusting the weights of a weighted fair\nqueueing packet scheduler.\n2.2. Economics and pricing policies in 3G mobile networks\nThere are multiple requirements, which should be fulfilled\nfor any viable pricing mechanism in multi-service class data\ncommunication networks [12]. A primary target of differen-tiated\npricing of Internet services is the prevention of system\noverload and an optimal resource usage according to different\ndaytimes and different traffic intensities. Furthermore, the\npricing scheme should be implemented in a completely de-centralized\nmanner and there should be multiple priorities in\norder to take into account the different QoS required by different\napplications and users.\nIn general, pricing policies can be partitioned into usage-based\n(pay-as-you-go) pricing, flat-rate (all-you-can-eat)\npricing, and dynamic pricing. In usage-based pricing policies\na user is charged according to a connection time or traffic\nvolume. Whereas connection based calls (e.g., in GSM) are\ncharged by connection time, packet-switched services (e.g., in\nUMTS) are charging the transferred data volume. Dynamic\npricing models take into account the state of the mobile radio\nnetwork for determining the current price of a service.\nCongestion-sensitive pricing as a particular dynamic pricing\nmodel has been shown to be more responsive. MacKie-Mason\nand Varian introduced the concept of congestion-sensitive\npricing in their smart market scheme [21]. Under this model,\nthe actual price for each packet is determined based on the\ncurrent state of network congestion. In [25], Rao and Petersen\ndiscussed the optimal pricing of priority services. Analogously\nto the smart market approach, Gupta et al. presented\na pricing scheme that uses priorities on the packet-level [13].\nThey proposed to differentiate Internet traffic according to delay\nand loss requirements.\nFor the analysis of the update strategy of the queuing\nweights, we consider in section 4 a usage-based and a usage-/\nthroughput-based pricing policy according to [11,12,21]. Furthermore\n, we introduce a hybrid pricing policy combining the\nnotion of flat-rate and a usage-based pricing according to current\npolicies of GSM networks.\nStrategies for improving Quality of Service\nThe proposed approach distinguishes three different types\nof services: circuit-switched services, packet-switched real-time\nservices (RT), and packet-switched non real-time services\n(NRT). Typically, circuit-switched services are voice\ncalls from a GSM mobile station. As proposed by 3GPP, RT\nservices belong to the conversational and streaming classes\nand NRT services fall into the interactive and background\nclasses [2]. The bandwidth available in a cell must be shared\nby calls of these different service classes and the different service\nrequirements have to be met. Before a mobile session\nbegins, the user needs to specify its traffic characteristics and\ndesired performance requirements by a QoS profile. Then, an\nadmission controller decides to accept or reject the users request\nbased on the QoS profile and the current network state\nas, e.g., given by queueing length. The purpose of the admission\ncontroller is to guarantee the QoS requirements of the\nuser who requested admission while not violating the QoS\nprofiles of already admitted users. The call admission criteria\nwill be different for each service class. 
The QoS profile for\nRT sessions specifies a guaranteed bandwidth to be provided\nfor the application in order to meet its QoS requirements. If\nthe network cannot satisfy the desired bandwidth, the corresponding\nadmission request is rejected.\nData packets arriving at the BSC are queued until they are\nscheduled to be transmitted over the radio link. For NRT sessions\n, we consider an admission controller taking into account\nfree buffer space in the NRT queue [8]. In order to prevent\nbuffer overflow once a call is admitted, the current queueing\nlength is set against certain buffer availability threshold\n212\nC. LINDEMANN ET AL.\nof the capacity, denoted by . The admission criteria for\nvoice and RT handovers are the same as for new voice calls\nand RT sessions except that additional handover bandwidth\ncan be utilized. The analysis of several admission control\nschemes for cellular systems presented in [24] showed that\nthe simple reservation scheme (i.e., reserving bandwidth for\nhandover calls) performs remarkably well. For simple cellular\nnetworks, the optimal amount of bandwidth reserved for\nhandover calls can be determined by analytical models [14].\nIn the model presented here, we denote with b\nh\nthe portion\nof the overall bandwidth that is exclusively reserved for handover\ncalls from neighboring cells. The considered admission\ncontroller does not prioritize NRT handovers over new NRT\nsessions. Further details of the admission controller are given\nin [19].\n3.2. Adjusting the admission controller for QoS improvement\nIn this section, we show how to utilize equation (1) for setting\nthe parameters and b\nh\nof the admission controller in order to\nreduce packet loss probability and handover failure probability\n. For updating the system parameters, we split the general\nfunction introduced in section 2.1 into separate functions each\ndepending only on one QoS measure. Let P\n1\n, . . . , P\nk\nbe the\nQoS measures corresponding to a system parameter . Then,\nequation (1) can be simplified to\n\n(\nnew)\n= f\n1\n(P\n1\n)\n+ + f\nk\n(P\nk\n)\nk\n\n(\nold)\n,\nL\n\n(\nnew)\nR.\n(2)\nThe interpretation of (2) is the following. Each update\nfunction f\ni\ndescribes the influence that the QoS measure\nP\ni\nshould have on the system parameter . Subsequently,\nthe overall update is performed by computing the arithmetic\nmean of the functions f\ni\nmultiplied with the old value of the\nsystem parameter. Note that the value\n(\nnew)\nmust be truncated\nat a certain lower bound L and an upper bound R in\norder to guarantee that the computation of\n(\nnew)\nresults in a\nvalid value of the system parameter. As basic update function\nwe consider a logarithmic linear function of the form:\nf\ni\n(P\ni\n)\n= m\ni\nlog P\ni\n+ b\ni\n.\n(3)\nThe reason for this choice is that we want to consider QoS\nmeasures like loss probabilities and failure/blocking probabilities\n, which are in the range of 10\n-5\nto 1. Therefore, a\nlogarithmic shape is more suitable. In previous work [19],\nwe have studied update schemes of system parameters of\nan admission controller and a packet scheduler based on a\nlookup table. In order to determine the optimal entries of this\nlookup table extensive off-line simulation experiments have\nbeen conducted. Applying regression statistics to the entries\nof this lookup table shows that these entries are well represented\nby functions with logarithmic shape. 
Thus, besides the\nmotivation of the update functions given here, their choice\nis to a large extend originated from regression statistics conducted\nin earlier work. The strength of the influence of f\ni\non\n\n(\nnew)\ncan be adjusted with the gradient m\ni\n. The parameter b\ni\ncan be determined by the following interpretation: suppose\nthe desired level of the QoS measure P\ni\nis\ni\n(e.g., the desired\npacket loss probability is 0.001). That is, if the online\nmeasured value of P\ni\nis\ni\nthe system parameter should\nnot be changed in the update step from the point of view of\nmeasure P\ni\n. Therefore, we chose f\ni\n(\ni\n)\n= 1 and from this\nrelation we get b\ni\n= 1 - m\ni\nlog\ni\n. Inserting in equation (3)\nresults in the final form of the update function:\nf\ni\n(P\ni\n)\n= m\ni\nlog P\ni\n\ni\n+ 1.\n(4)\nFor ease of notation, we abbreviate the QoS measures handover\nfailure probability and new call/session blocking probability\ncorresponding to voice calls and RT sessions by HFP\nand CBP, respectively. The probability of a packet loss due\nto buffer overflow in the NRT queue is abbreviated by PLP.\nThe update strategy according to equations (2)(4) is justified\nby its intuitive understanding and the performance results presented\nin section 5. The suitability of update functions other\nthan (2)(4), is subject for further study and out of the scope\nof this paper.\n3.2.1. Update of non real-time queue threshold\nRecall that a system parameter update is performed each time\na system pattern arrives at the APM entity and the minimum\nnumber of relevant events corresponding to this system parameter\nis reached. Determining the update for the system parameter\n, i.e., determining\n(\nnew)\n, is performed corresponding\nto the old value\n(\nold)\nand the actually observed QoS measure\nPLP. That is:\n\n(\nnew)\n= f (PLP)\n(\nold)\n,\n0.001\n\n(\nnew)\n1.\n(5)\nThe truncation of\n(\nnew)\nat the lower bound guaranties that\nthe value does not accumulate near zero for long periods of\nlow traffic load. The minimum number of relevant events required\nfor an update of is counted in data volume rather than\nin packet arrivals (in the experiments this number is 5 MB).\nThe setting of the gradient m of the corresponding update\nfunction is derived from a couple of experiments for different\nvalues of the gradient. We found m\n= -0.02 to be suitable.\nChoosing a suitable value for the gradient is a similar tradeoff\nas explained for the sliding window size. A large gradient results\nin a fast update of the system parameter in a few number\nof update steps, but also introduces higher fluctuations of the\nsystem parameter over time. We demonstrate the speed of the\nparameter adjustment in an experiment in section 5. Furthermore\n, several experiments for different desired loss values\nare presented.\n3.2.2. Update of fraction of bandwidth reserved for handover\nThe update for the system parameter b\nh\n, i.e., determining\nb\n(\nnew)\nh\n, is performed based on the old value and the actually\nobserved QoS measures HFP and CBP. That is:\nb\n(\nnew)\nh\n= f\n1\n(\nHFP)\n+ f\n2\n(\nCBP)\n2\nb\n(\nold)\nh\n,\n0.001\nb\n(\nnew)\nh\nR.\n(6)\nIMPROVING QoS AND PROVIDER REVENUE IN 3G MOBILE NETWORKS\n213\nThe value b\n(\nnew)\nh\nis truncated at a lower bound of 0.1%\nand a certain upper bound R which is a fraction of the overall\nbandwidth available (in the experiments we fix R\n= 0.7).\nThe truncation at the lower bound is for the same reason as\nexplained above. 
In fact, for computing b\n(\nnew)\nh\ntwo QoS measures\ncorresponding to the actually observed HFP and CBP\nare taken into account. A high HFP should increase b\n(\nnew)\nh\nbut this obviously also increases the CBP because less bandwidth\nis available for new voice calls and RT sessions. Therefore\n, the HFP and the CBP influence the handover bandwidth\nb\n(\nnew)\nh\n. In fact, m\n1\n= -m\n2\nholds in the update functions f\n1\nand f\n2\n. From a couple of experiments for different gradients,\nwe found m\n1\n= 0.08 to be suitable. A common assumption\nin cellular networks is to prioritize handover calls over new\ncalls. Therefore, the desired handover failure level\n1\nshould\nbe smaller than the desired call blocking level\n2\n. According\nto these values the handover bandwidth is slightly increased,\nif HFP is equal to CBP.\nWith the presented strategy the parameters of the update\nfunctions can be chosen in an intuitive way and optimal parameter\nconfiguration can efficiently be determined. This is the\nmajor advantage over the approach based on a Performance\nManagement Information Base introduced in [19] which requires\nextensive off-line simulation experiments.\nStrategies for improving both QoS and provider revenue\nAt a BSC responsible for a cluster of cells, data packets from\nvarious connections arrive and are queued until bandwidth\nfor transmission is available. In order to distinguish different\npriorities for NRT traffic corresponding to the traffic handling\npriority defined by 3GPP [2], scheduling algorithms\nlike Weighted Round Robin (WRR), Weighted Fair Queueing\n(WFQ [9]) or Class Based Queueing (CBQ [10]) have to\nbe implemented. An overview of queueing issues for guaranteed\nperformance services can be found in [27]. In WFQ,\nthe weights control the amount of traffic a source may deliver\nrelative to other active sources during some period of\ntime. From the scheduling algorithm's point of view, a source\nis considered to be active, if it has data queued in the NRT\nqueue. Let B be the overall bandwidth available for NRT\nsessions at time t. For an active source i with weight w\ni\n, the\nbandwidth B\ni\nthat is allocated to this transfer at time t is given\nby\nB\ni\n=\nw\ni\nj\nw\nj\nB.\n(7)\nIn (7) the sum is taken over all active NRT sources j . A class\nbased version of WFQ serves packets of each priority class\naccording to the weights rather than every active source.\n4.2. Adjusting the packet scheduler for QoS and revenue\nimprovement\nThis section utilizes the proposed approach for the adaptive\ncontrol of the weights of a weighted fair queueing packet\nscheduler in order to improve QoS as well as to increase the\nrevenue. The strategy for adjusting the weights combined\nwith the introduction of several pricing policies constitutes\na further contribution of the paper. Recall that the revenue\nearned by a mobile service provider is determined by the\nmonthly payment of mobile users as well as by the additional\nusage-based pricing after the monthly amount of data volume\nis consumed. Note, that the monthly subscription rate is only\nrelevant for monthly revenue calculations. In this section, we\nconsider the revenue improvement in a certain small time period\nregardless the monthly subscription rates. In section 5,\nwe briefly discuss monthly revenue calculation. Let P denote\nthe number of different priority classes, i.e., weights of the\nweighted fair queueing scheduler. 
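Before turning to the revenue function, the bandwidth sharing rule of equation (7) can be stated in a few lines of Python; the sketch simply normalises the weights of the currently active NRT sources (or, for the class-based variant, of the priority classes). Function and variable names as well as the numbers in the example are illustrative.

def wfq_share(weights, active, total_bandwidth):
    # Equation (7): B_i = w_i / (sum over active sources j of w_j) * B.
    denom = sum(weights[j] for j in active)
    return {i: weights[i] / denom * total_bandwidth for i in active}

# Example: three active NRT sources with weights 4, 2 and 1 sharing 384 kbit/s.
shares = wfq_share({'a': 4, 'b': 2, 'c': 1}, active={'a', 'b', 'c'}, total_bandwidth=384.0)
# shares = {'a': 219.4, 'b': 109.7, 'c': 54.9} (approximately)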
Define by b\ni\n(t)\nthe transferred\ndata volume in time t of users of priority i and by r\ni\n(t)\nthe payment of users of priority i at time t, i.e., the user pays\nfor the transferred data volume. We distinguish a pure usage-based\nand a usage-/throughput-based pricing policy:\n(a) A user of priority i has a fixed payment p\ni\nper kbit during\nhis session, i.e., r\ni\n(t)\n= p\ni\n.\n(b) The payment of a user of priority i consists of a fixed part\np\ni\nthat is increased proportional to the additional throughput\ni\n(t)\nhe received due to the update of the queueing\nweights, i.e., r\ni\n(t)\n= p\ni\n\ni\n(t)\n.\nAccording to the proposed data volume based pricing with\nrespect to different priority classes the revenue function\n(t)\nis given by\n(t)\n=\nP\ni\n=1\nr\ni\n(t)b\ni\n(t).\n(8)\nThe revenue function of equation (8) is utilized in section 5\nfor evaluating the strategies for revenue improvement presented\nbelow.\n4.2.1. Update of WFQ weights\nRecall that packets of NRT users arriving at the BSC are first\nqueued until they are scheduled for transfer by a weighted\nfair queueing discipline. Let w\ni\nw\ni\n+1\n, i\n= 1, . . . , P - 1,\nbe the basic weights of the WFQ scheduler. The update of\nthe queueing weights, i.e., determining w\n(\nnew)\ni\nis made according\nto an absolute update function depending on the basic\nweights w\ni\nand the current number of NRT sessions belonging\nto priority i. Therefore, every system pattern that is transmitted\nfrom the online monitoring component to the adaptive\nperformance management entity contains the current number\nof active NRT sessions with priority i in the cell. For ease\nof notation, the number of active non real-time sessions with\npriority i is abbreviated by NRT\ni\n.\nThe idea behind the strategy for revenue improvement is\nto shift the overall utilization of bandwidth for NRT traffic\n214\nC. LINDEMANN ET AL.\ntowards higher priority users, which pay more for the transferred\ndata volume. Note that the update strategy should be\nconservative in a way that the transfer of packets of low priority\nis not simply blocked if packets of higher priorities are\narriving, i.e., priority queueing. Assuming that the majority\nof users will buy a cheaper low priority service class, priority\nqueueing will leave most users unsatisfied. Therefore, the update\nstrategy also considers the QoS aspect. The update strategy\nconcerning the queueing weights is developed according\nto the following premises:\n(i) If the number of active NRT users in the cell is the same\nfor each priority class, i.e., NRT\ni\n= NRT\nj\n, i\n= j,\nthe weights w\n(\nnew)\ni\nshould be set according to the basic\nweights w\ni\nfor i\n= 1, . . . , P .\n(ii) Priority classes with low population of users compared\nto other classes should be prioritized, i.e., the corresponding\nweights should be increased.\n(iii) The relative ordering of the weights should be preserved\nin a strong way, i.e., w\n(\nnew)\ni\n(w\ni\n/w\ni\n+1\n)\nw\n(\nnew)\ni\n+1\nfor i\n=\n1, . . . , P\n- 1.\nPremise (i) constitutes the key of the update strategy. If all\npriority classes have the same population of users the scheduling\nshould work as in the case without adaptive control of\nthe weights. The rationale behind premise (ii) is to prioritize\nusers that are consuming less bandwidth (relative to their\nweights) than users belonging to other classes, i.e., users of\nlow population should be made more independent from the\ninfluence of user classes with higher population. 
This premise\nconstitutes the basic idea for QoS improvement and is demon-strated\nby the following example that considers two priority\nclasses, i.e., a high and low priority class. In WFQ the available\nbandwidth is shared among all active users according to\ntheir weights. That is, if the minority are high priority users,\nthe overall bandwidth consumed by these users will suffer\nfrom a strong influence of low priority users that hold the\nmajority. Therefore, increasing the weights for high priority\nusers will result in a higher QoS for this user class. Updating\nthe weights according to this strategy will result in a scheduling\nalgorithm somewhere between a WFQ and a class based\nqueueing scheduler. In fact, the benefit of both is utilized: the\nfair sharing of the bandwidth of WFQ and the higher bandwidth\nguarantees for each priority class provided by a class\nbased queueing scheduler.\nPreserving the relative ordering of the weights (i.e.,\npremise (iii)) guarantees that QoS for higher priority users\nand, therefore, the provider revenue can only be improved\ndue to the adaptive control of the weights. If the intention\nof the update strategy is not primary on improving provider\nrevenue the weights can be also set in a weak relation, i.e.,\nw\n(\nnew)\ni\nw\n(\nnew)\ni\n+1\n. This might be useful to increase QoS for\nusers of low population independent of their priority class.\nWith the following algorithm the computation of the weights\nw\n(\nnew)\n1\n, . . . , w\n(\nnew)\nP\ncan be performed iteratively in P\n-1 minimum\ncalculations. The iteration is given by\nw\n(\nnew)\n1\n= w\n1\n(NRT\n1\n)\n\n,\n(9)\nw\n(\nnew)\ni\n= min\nw\ni\nw\ni\n-1\nw\n(\nnew)\ni\n-1\n, w\ni\n(NRT\ni\n)\n\n,\ni\n= 2, . . . , P.\n(10)\nIn order to smooth the influence of the number of NRT\nusers on the queueing weights, an exponent\n0 is considered\n(e.g.,\n= 1/2). It is easy to show that premises (i), (ii)\nand (iii) hold for the weights set according to equations (9)\nand (10). The iteration starts with setting w\n(\nnew)\n1\naccording to\nNRT\n1\nand continues up to w\n(\nnew)\nP\n. Note that this is only one\npossibility to set the new weights. Any other starting position\nfor the iteration is possible and results in a slightly different\nupdate of the weights. Nevertheless, the algorithms work in\na similar way, and therefore, we consider only the iteration\nof (9) and (10). If currently no users of priority i are in the\ncell, i.e., NRT\ni\n= 0, the algorithm skips the setting of the\ncorresponding weight w\n(\nnew)\ni\nand the next iteration step i\n+ 1\nis related to step i\n- 1. Subsequently, these weights were set\nto zero. For other scheduling disciplines like weighted round\nrobin or a class based queueing corresponding update strategies\ncan be derived in a similar way.\n4.2.2. Considering advanced pricing policies\nIn pricing policy (b) introduced above, users have to pay an\nadditional fee depending on the throughput improvement due\nto the update of the queueing weights. This concept of pricing\nindicates strong similarities to the congestion-sensitive\npricing of the smart market scheme [21], where the actual\nprice for each packet is determined based on the current state\nof network congestion. Similarly, in our throughput-based\npricing policy the throughput of users is determined by their\nwillingness-to-pay additional costs (according to their choice\nof priority class) for transmission of packets in a congested\nnetwork. 
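Stepping back to the weight update of section 4.2.1, equations (9) and (10) can be sketched as follows. This is only an illustration: the exponent applied to NRT_i is not legible in the text above, so the smoothing factor pow(NRT_i, -alpha) with alpha >= 0 is an assumption chosen so that premise (ii) holds; the min() term enforces premise (iii), and empty classes are skipped and left at weight zero as described.

// Illustrative sketch of the weight update of equations (9) and (10).
// ASSUMPTION: the smoothing factor is modeled as pow(NRT_i, -alpha), alpha >= 0 (e.g. 0.5);
// the exact form of the exponent is not recoverable from the extracted text.
public final class WfqWeightUpdate {
    public static double[] updateWeights(double[] basicWeights, int[] nrtUsers, double alpha) {
        int p = basicWeights.length;
        double[] w = new double[p];                // new weights, left at 0 for empty classes
        int prev = -1;                             // index of the previous non-empty class
        for (int i = 0; i < p; i++) {
            if (nrtUsers[i] == 0) continue;        // skip classes without active users
            double target = basicWeights[i] * Math.pow(nrtUsers[i], -alpha);  // assumed form of (9)
            if (prev < 0) {
                w[i] = target;                     // first non-empty class, equation (9)
            } else {
                // equation (10): preserve the relative ordering of the basic weights, premise (iii)
                double bound = basicWeights[i] / basicWeights[prev] * w[prev];
                w[i] = Math.min(bound, target);
            }
            prev = i;
        }
        return w;
    }
    public static void main(String[] args) {
        // basic weights 4, 2, 1 with 2, 6 and 12 active users per class, alpha = 0.5
        double[] w = updateWeights(new double[] {4, 2, 1}, new int[] {2, 6, 12}, 0.5);
        System.out.printf("%.2f %.2f %.2f%n", w[0], w[1], w[2]);
    }
}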
The additional payment is justified because the\nthroughput for users of higher priority will be maintained,\neven if more and more users of lower priority attend the cell,\ni.e., the network is currently congested. We describe the relative\nthroughput increase of priority class i with the function\n\ni\n(t)\n=\n(\nnew)\ni\n= w\nP\nTHR\ni\nw\ni\nTHR\nP\n\n.\n(11)\nIn equation (11), THR\ni\nis the current throughput of class i derived\nfrom the corresponding sliding window and 0\n\n1\nis a scaling exponent (e.g.,\n= 1/4) that has to be adjusted\nby the service provider for appropriate revenue dimensioning\n. In order to guarantee that revenue will be only improved,\ni\n(t)\nhas to be truncated, i.e.,\ni\n(t)\n1.\nNext, we adjust the weights according to an advanced pricing\npolicy that adopts ideas, which have been successful in\nexisting GSM networks. In GSM networks, the pricing of\na provided service is as follows: the proposed service is offered\nbased on a monthly payment for a dedicated amount\nIMPROVING QoS AND PROVIDER REVENUE IN 3G MOBILE NETWORKS\n215\nof call time. If a user has consumed this amount of time\nbefore the end of the month, he has to pay for any further\nuse of this service based on a time-dependent accounting.\nThis idea can be generalized and extended towards packet-switched\nservices in 3G networks. Analogously, a user has\nto pay a monthly charge for a dedicated amount of data volume\n, which can be transferred without further pricing. After\nusing up this monthly amount of data, the user has to pay for\nthe desired services according to the transferred data volume\n(byte-based). Moreover, analogous to GSM networks a user\ncan utilize \"unused\" data volume, i.e., the unused fraction of\nthe prepaid monthly amount of data volume, in subsequent\nmonths. If the monthly amount of data is unrestricted, this\npricing would become a flat-rate pricing and if there is no\nmonthly payment, the pricing follows a usage-based policy.\nThus, our pricing policy constitutes a hybrid approach of flat-rate\nand usage-based pricing.\nThe update of the queueing weights can now be extended\nin a way that users consuming their monthly amount of data\nare served with a lower priority than users currently paying for\ntheir data transfer. Therefore, we introduce a new weight w\ncorresponding to the not paying users. The weight w must\nbe sorted in the weights w\n1\n, . . . , w\nP\nand the iterative update\nalgorithm (9)(10) can be applied to the P\n+ 1 weights as\ndescribed above. In order to distinguish not paying users\nwith different priorities these users are served by the WFQ\nscheduler with weights w\n1\n, . . . , w\nP\nrelative to w .\nThat\nis, WFQ is applied to 2\nP weights, i.e., w\n1\n, . . . , w\nP\nand\n(w /w)\nw\n1\n, . . . , (w /w)\nw\nP\nwith w\n= w\n1\n+ + w\nP\n.\n4.3. Implementation issues\nAs outlined in section 2.1, the controlled system parameters\nfor QoS and revenue improvement, i.e., , b\nh\n, w\ni\n,\ni\n, constitute\nan integral component of the proposed extension to a\nBSC. The adjustment of system parameters is only based on\nimplicit information that is directly measured by the online\nmonitoring component. Therefore, no additional signaling\nwith other BSCs is necessary for updating system parameters.\nThe online monitored QoS measures, i.e., PLP, HFP, CBP,\nNRT\ni\n, and THR\ni\n, can easily be derived and stored within\nthe BSC (see figure 1). 
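The system pattern exchanged between the online monitoring component and the adaptive performance management entity can be pictured with a small sketch. The record below is only an illustration of the monitored measures just listed (PLP, HFP, CBP, NRT_i, THR_i); the class and field names are assumptions, not the actual data structures of the BSC extension:

// Illustrative sketch of the "system pattern" handed from the online monitoring component
// to the adaptive performance management entity inside the BSC.
public final class SystemPattern {
    double packetLossProbability;       // PLP, measured at the NRT queue
    double handoverFailureProbability;  // HFP, from non-admitted handover calls
    double callBlockingProbability;     // CBP, from non-admitted new calls
    int[] activeNrtSessions;            // NRT_i per priority class i
    double[] nrtThroughput;             // THR_i per priority class i, from a sliding window

    public SystemPattern(double plp, double hfp, double cbp, int[] nrt, double[] thr) {
        this.packetLossProbability = plp;
        this.handoverFailureProbability = hfp;
        this.callBlockingProbability = cbp;
        this.activeNrtSessions = nrt;
        this.nrtThroughput = thr;
    }
}

// The management entity consumes such patterns and adjusts the NRT queue threshold,
// the handover bandwidth b_h and the WFQ weights w_i according to the update functions.
interface AdaptivePerformanceManager {
    void onPattern(SystemPattern pattern);
}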
The PLP can directly be determined\nby counting the number of IP packets, which are lost due to\nbuffer overflow in the NRT queue. HFP as well as the CBP is\ndetermined by the non-admitted handover calls and new calls\nin the admission controller, respectively. Admission, termination\n, and handover of NRT calls enable the profiling of NRT\ni\n,\nthe number of non real-time sessions with priority i. Moreover\n, the packet scheduler allows the throughput computation\nof NRT users according to their individual priorities. Furthermore\n, no time consuming signaling is needed to transfer\nthe system pattern inside the BSC because the online performance\nmonitoring component and the performance management\nentity both reside in the BSC.\nThe question arises how call charging can be accomplished\nfor the considered pricing policies in 3G mobile networks.\nFor pricing policy (a), i.e., a fixed payment per kbit, call\ncharging can easily be processed by the subscription management\ncomponent of the operation subsystem (OSS) by means\nof the call charging mechanism using the home location register\n(HLR) [1]. Similarly, the hybrid pricing scheme can be\nrealized except that the remaining amount of prepaid data volume\nhas to be stored in the HLR for charging the transferred\ndata volume. Utilizing these existing charging mechanisms,\nno additionally signaling overhead arises for charging data\nservices. The throughput-based pricing policy (pricing policy\n(b)) just slightly changes the situation and can easily be\nimplemented within the BSC using a local copy of the user's\nHLR charging data fields. This local data minimizes signaling\noverhead of individual user charging. According to the\ntransferred data volume and current throughput of the user's\nbandwidth class, this local charging profile is continuously\nupdated. Handovers with changing BSC of response induce\nthe transfer of this local charging profile to the new BSC of\nresponse. Subsequently, these local data have to be updated\nin the HLR for individual user accounting after termination of\nthe call. Note, that this transfer of local charging profiles can\nnaturally be embedded in the OSS functionality.\nEvaluation of the adaptive performance management strategies\nFor traffic modeling of RT applications we utilize the approach\nproposed in [18], where variable bit rate video traffic\nis modeled in terms of time-discrete M/G/\ninput processes.\nThis model is based on measured video streams and efficiently\ncaptures the correlation structure of the considered\nvideo traffic applying the time-discrete M/G/\ninput\nprocess. The generated traffic is transformed utilizing a hybrid\nGamma/Pareto numerical transformation in order to capture\nthe marginal distribution of the measured traffic stream.\nSubsequently, the synthetically generated traffic is broken\ndown to IP packets of a maximum size of 1500 bytes, which\nare uniformly distributed within a given frame-duration of the\nMPEG video sequence comprising of 1/30 s. Note that this\ntraffic model does not propose information for modeling RT\nsession durations. Therefore, we assume session durations to\nbe exponentially distributed (see section 5.2).\nRecent recommendations for modeling NRT traffic and analytical\ntraffic models for 3G mobile networks are proposed\nin [15,16], respectively. 
The traffic model is based on real measurements conducted at an Internet service provider dial-in link, which exhibits characteristics comparable to those of future mobile networks [17], i.e., different access speeds, the influence of user behavior due to different tariff limits, as well as asymmetric up- and downlink traffic. Based on these measurements, an NRT traffic model is constructed, applying the idea of the single user traffic model, which describes traffic characteristics on session level, connection level (i.e., application level), and packet level, respectively. The key insight of this modeling approach lies in an appropriate scaling procedure of the measured trace data towards typical bandwidth classes of 3G mobile networks, i.e., 64 kbps, 144 kbps, and 384 kbps. In this context, a bandwidth class denotes the maximum bandwidth capability of future handheld devices. We refer to [15] for details of the NRT traffic model, especially for the parameterization of the traffic characteristics.

Table 1
Characteristics for different UMTS session types.

                               Circuit-switched    Streaming real time (RT)     Interactive non real time (NRT)
                               voice service       Audio        Video           high priority   normal priority   low priority
Portion of arriving requests   25%                 12%          3%              6%              18%               36%
Session duration               120 s               180 s                        determined by session volume distribution
Session dwell time             60 s                120 s                        120 s

5.2. The simulation environment

In order to evaluate the proposed approach for adaptive control, we developed a simulation environment for a UMTS access network, i.e., a UMTS Terrestrial Radio Access Network (UTRAN [3]). The simulator considers a cell cluster comprising seven hexagonal cells with corresponding transceiver stations (i.e., Node B elements) that are managed by a base station controller (i.e., a Radio Network Controller, RNC). We assume that a mobile user requests a new session in a cell according to a Poisson process. When a mobile user starts a new session, the session is classified as a voice, RT, or NRT session, i.e., within the session the user utilizes voice, RT, or NRT services mutually exclusively. RT sessions consist of streaming downlink traffic corresponding to the UMTS streaming class specified by 3GPP [2], and NRT sessions consist of elastic traffic and correspond to the UMTS interactive class or background class, respectively. For the year 2010 a share of about 50% voice calls is anticipated [26]. We assume that one half of the voice calls are served over the frequency spectrum for traditional GSM services (i.e., 890-915 and 935-960 MHz) and the second half is served over the new frequency spectrum allocated for UMTS. Nevertheless, the simulator considers only the new frequency spectrum. Therefore, we assume that 25% of the call requests are voice calls whereas RT and NRT sessions constitute 15% and 60% of the overall arriving requests (see table 1).

Subsequently, we have to specify the QoS profiles for RT and NRT sessions. For RT sessions the simulator considers two QoS profiles, i.e., a low bandwidth profile with a guaranteed bit rate of 64 kbps corresponding to streaming audio and a high bandwidth profile with a guaranteed bit rate of 192 kbps corresponding to streaming video. According to the RT traffic model presented in section 5.1, we assume that 80% of the RT sessions utilize the low bandwidth profile whereas the remaining 20% utilize the high bandwidth profile.
Following the single user traffic model, NRT sessions\nare partitioned according to different bandwidth classes\nas follows: 60% for 64 kbps, 30% for 144 kbps, and 10%\nfor 384 kbps, comprising of different priorities (see table 1),\nrespectively.\nThe amount of time that a mobile user with an ongoing\nsession remains within the cell is called dwell time. If the session\nis still active after the dwell time, a handover toward an\nadjacent cell takes place. The call/session duration is defined\nas the amount of time that the call will be active, assuming it\ncompletes without being forced to terminate due to handover\nfailure. We assume the duration of voice calls and RT sessions\nto be exponentially distributed. As proposed in [6], the\ndwell time is modeled by a lognormal distribution. All corresponding\nmean values are shown in table 1. A NRT session\nremains active until a specific data volume drawn according to\na bandwidth-dependent lognormal distribution is transferred.\nTo distinguish between NRT traffic classes, the UMTS simulator\nimplements a WFQ scheduler with three packet priorities\n: 1 (high), 2 (normal), and 3 (low) with weights w\n1\n= 4,\nw\n2\n= 2, and w\n3\n= 1. These priorities correspond to the\ntraffic handling priority specified by 3GPP. To model the user\nbehavior in the cell, the simulator considers the handover flow\nof active mobile users from adjacent cells. The iterative procedure\nintroduced in [4] is employed for balancing the incoming\nand outgoing handover rates. The iteration is based on the\nassumption that the incoming handover rate of a user class at\nstep i\n+ 1 is equal to the corresponding outgoing handover\nrate computed at step i.\n5.2.1. UMTS system model assumptions\nThe simulator exactly mimics UMTS system behavior on the\nIP level. The focus is not on studying link level dynamics\n. Therefore, we assume a reliable link layer as provided\nby the automatic repeat request (ARQ) mechanism of the\nRadio Link Control (RLC) protocol. As shown in [22] for\nthe General Packet Radio Service (GPRS), the ARQ mechanism\nis fast enough to recover from packet losses before reliable\nprotocols on higher layers (e.g., TCP) recognize these\nlosses due to timer expiration. Thus, a reliable link level\ncan be assumed when considering higher layer protocol actions\n(see, e.g., [20]). To accurately model the UMTS radio\naccess network, the simulator represents the functionality of\none radio network controller and seven Node B transceiver\nstations, one for each of the considered cells. Since in the\nend-to-end path, the wireless link is typically the bottleneck,\nand given the anticipated traffic asymmetry, the simulator focuses\non resource contention in the downlink (i.e., the path\nRNC\nNode B MS) of the radio interface.\nThe simulator considers the UTRAN access scheme based\non Wideband-Code Division Multiple Access (W-CDMA) in\nFrequency Division Duplex mode (FDD) proposed by 3GPP\n[1]. In FDD downlink, a division of the radio frequencies into\nIMPROVING QoS AND PROVIDER REVENUE IN 3G MOBILE NETWORKS\n217\nfour physical code channels with data rates of 1,920 kbps each\nup to 512 physical code channels with 15 kbps data rates each\nis possible. Therefore, the overall bandwidth that is available\nin one cell is 7,680 kbps. For the channel coding, we assume\na convolution-coding scheme with coding factor 2. 
In the experiments\nwithout adaptive control the handover bandwidth\nportion b\nh\nis 5% and the NRT queue threshold is set to 95%.\nThe simulation environment was implemented using the simulation\nlibrary CSIM [7]. In a presimulation run the handover\nflow is balanced, for each cell at the boundary of the seven-cell\ncluster. All simulation results are derived with confidence\nlevel of 95% using the batch means method. The execution\nof a single simulation run requires about 4060 min of CPU\ntime (depending on the call arrival rate) on a dual processor\nSun Sparc Enterprise with one GByte main memory.\n5.2.2. Implementation of the hybrid pricing policy in the\nsimulator\nAccording to the hybrid pricing policy as introduced in section\n4.2.2, the user's overall remaining amount of prepaid data\nvolume d out of the user's monthly data volume D is determined\nat the beginning of a session. Moreover, the remaining\namount of data volume of previous months r is determined\n. For simulation study purposes, this is accomplished\nby choosing the random value d uniformly out of the interval\n[0, kD]. kD captures the monthly amount of data a user typically\ntransfers, i.e., a user typically transfers a multiple k of\nthe data volume D that is available for a fixed monthly payment\n. The random value r, is sampled according to a uniform\ndistribution out of the interval\n[0, 0.1D], where 0.1D measures\nthe maximum amount of \"unused\" data volume of previous\nmonths. If d exceeds D\n+ r the user has no remaining\nprepaid data volume, including the data volume of the current\nand the previous months. Otherwise, there is a remaining\namount of prepaid data volume D\n+ r - d for the considered\nuser and additional pricing arises only, if the transferred data\nvolume of the user session exceeds D\n+ r - d. Thus, during\nthe user session the remaining data volume has to be updated\naccording to the actually transferred data.\nIn the simulation studies we utilize the proposed hybrid-pricing\nscheme with a prepaid monthly data volume of\n150 MB. According to the different priority classes 1, 2,\nand 3, the volume-based pricing for transferred data exceeding\nthe prepaid monthly data volume comprises of 20, 15,\nand 10 cost-units per MB, respectively.\nConsidering the\nchanging traffic loads according to the daytime, this approach\ncan be refined, by the notion of different pricing for daily periods\nof time. For the parameterization of the typically monthly\ntransferred data volume, we assume k\n= 2. Note that the\nparameterization of the pricing scheme is chosen for demonstration\npurposes only. Due to the high flexibility of the hybrid\npricing scheme, it can be easily extended towards multiple\n, concurrent pricing schemes comprising of, e.g., different\nmonthly amounts of prepaid data volumes, different payments\nfor the individual priority classes, or a pure usage-based pricing\nas well as pure flat-rate pricing.\n(a)\n(b)\nFigure 2. Impact of adaptive performance management on non real-time\ntraffic.\n5.3. Performance results\nUsing simulation experiments, we illustrate the benefit of the\nproposed unified approach for adaptive performance management\nof UMTS systems. In particular, we show the improvement\nof QoS measures and the increase in revenue earned by\nservice providers. 
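Before discussing the performance results, the hybrid pricing bookkeeping of section 5.2.2 can be summarized in a short sketch (an illustration only; the method names and the session-level accounting are assumptions consistent with the description above):

import java.util.Random;

// Illustrative sketch of the hybrid pricing bookkeeping of section 5.2.2.
// D: prepaid monthly volume; d ~ U[0, k*D]: volume already consumed this month;
// r ~ U[0, 0.1*D]: unused volume carried over from previous months.
public final class HybridPricing {
    public static double remainingPrepaid(double D, double d, double r) {
        return Math.max(0.0, D + r - d);          // no prepaid volume left once d exceeds D + r
    }
    // part of the current session volume that actually has to be paid for
    public static double chargeableVolume(double sessionVolume, double remainingPrepaid) {
        return Math.max(0.0, sessionVolume - remainingPrepaid);
    }
    public static void main(String[] args) {
        Random rng = new Random(42);
        double D = 150.0, k = 2.0;                // 150 MB prepaid, typical monthly usage k*D
        double d = rng.nextDouble() * k * D;      // sampled as in the simulator
        double r = rng.nextDouble() * 0.1 * D;
        double remaining = remainingPrepaid(D, d, r);
        // 40 MB session of a priority 2 user at 15 cost units per MB
        double charge = chargeableVolume(40.0, remaining) * 15.0;
        System.out.printf("remaining=%.1f MB, charge=%.1f cost units%n", remaining, charge);
    }
}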
The presented curves plot the mean values\nof the confidence intervals for the considered QoS measures.\nIn almost all figures, the overall call/session arrival rate of\nnew mobile users is varied to study the cell under increasing\nload conditions. For ease of notation, results with and without\nadaptive performance management (APM) are abbreviated by\nAPM on and APM off, respectively.\nIn a first experiment, we investigate the effect of adaptive\ncontrol on the threshold for the buffer size of the NRT\nqueue denoted by . Figure 2 shows the NRT packet loss\nprobability (a) and the average number of NRT users in the\ncell (b) for the UMTS system with and without adaptive control\n. Furthermore, the figures distinguish between different\ndesired loss levels as introduced in section 3.2. We observe\nthat the APM achieves a substantially decrease in packet loss\nprobability. Moreover, the packet loss probability can be kept\nbelow a constant level for increasing arrival rates of mobile\nusers. Note, that this level slightly differs from the desired\nlevel of the QoS measure. This is due to the fact that the update\nfunction only decreases the NRT threshold if the online\n218\nC. LINDEMANN ET AL.\nFigure 3. Number of packet losses for a half day window of a weekly usage\npattern.\nmeasured packet loss probability is greater than . Therefore\n, the packet loss probability is in steady state also slightly\ngreater than . Nevertheless, figure 2 shows that the resulting\npacket loss probability can be adjusted quite well. For very\nlow arrival rates, the packet loss probability is increased compared\nto the case without adaptive control. This is because\nthe packet loss probability is below the desired level and is\nadjusted towards 100%.\nFigure 2(b) shows the average number of NRT users admitted\nin the cell. For all curves, the number of NRT users in\nthe cell first increases up to about 70 users for an arrival rate of\n1.0 arrivals per second. For higher arrival rates the admission\ncontroller decides to reject requests depending on the choice\nof the NRT threshold. In the case without APM the number\nof NRT users approaches 100 whereas in the cases with\nadaptive control less users are admitted in the cell because\nthe threshold parameter is decreased (e.g., about 80 users\nfor\n= 0.001). For high arrival rates a slightly decrease\nof the average number of NRT users can be observed. This is\ndue to the fact that with increasing arrival rate the competition\nbetween voice, RT and NRT traffic decreases the bandwidth\ncapacity available for NRT traffic. Therefore, less NRT users\nare admitted.\nIn the experiment presented in figure 3, we study the absolute\nnumber of packet losses observed in one hour for a\ntransient scenario, i.e., the arrival rate of new calls is changing\nevery hour according to a half day window of a weekly\nusage pattern [15]. The purpose of this experiment is to show\nthat the adaptive performance management is fast enough to\nreact on changing traffic conditions, i.e., to effectively adjust\nthe NRT threshold in order to reduce packet losses. The bars\nshown in figure 3 correspond to the number of packet losses\nfor experiments with and without adaptive control. Furthermore\n, the figure distinguishes between a desired loss level\nof 0.01 and 0.001, respectively. The new call arrival rates considered\nin one hour are depicted above the bars. 
We conclude\nfrom figure 3 that for a real-life pattern of changing arrival\nrates the packet losses can be effectively controlled by the\nAPM. This justifies the choice of the gradient m\n= -0.02 in\nthe update function for the NRT threshold.\n(a)\n(b)\nFigure 4. Impact of adaptive performance management on handover traffic.\nNext, we study the effect of the APM on the handover\ntraffic. Figure 4 shows the handover failure probability (a)\nand the new call blocking probability (b) for the UMTS system\nwith and without APM. Similar to figure 2, we distinguish\nbetween different desired levels for the handover failure\nprobability. The desired level for new call blocking is\nfixed to 0.1. Note, that for controlling the handover bandwidth\nthe desired level can be used only to adjust the degree\nof prioritization of handover failure over new call blocking\n. Distinct from the packet loss probability, it cannot be\nexpected to keep the handover failure probability at a constant\nlevel for increasing traffic load. That is for two reasons\n: (1) the handover bandwidth is adjusted according to\ntwo QoS measures that have a contrary influence and (2) the\nincrease of the handover bandwidth must be limited by a\ncertain portion of the overall available bandwidth (see section\n3.2). If this limit is reached handover failures occur\nmore frequently for further increasing call arrival rate. These\ntwo effects can be observed in the curves of figure 4. Nevertheless\n, the handover failure probability is improved more\nthan one order of magnitude for call arrival rates between\n0.75 and 1.25 call requests per second and a desired loss\nlevel\n= 0.001. When studying the blocking probability\nof new voice calls and RT sessions (see figure 4(b)),\nwe surly observe a higher blocking probability of new calls\nin the case with adaptive control and high arrival rate. In\nIMPROVING QoS AND PROVIDER REVENUE IN 3G MOBILE NETWORKS\n219\nFigure 5. Improving QoS for high priority non real-time users.\nFigure 6. Effect of adjusting WFQ weights on bandwidth utilization of NRT\ntraffic.\nfact, almost all call requests are blocked if system load is\nhigh.\nIn a next experiment, we study the impact of NRT users on\nQoS by the adaptive control of the queueing weights as introduced\nin section 4.2. Figure 5 plots the average throughput\nper user for each priority class of NRT traffic. As shown in\ntable 1, we assume 10% NRT users with high priority, 30%\nwith normal priority, and 60% with low priority. Recall, that\nhigher priority service is more expensive and, hence, more\nusers choose low priority service. If the overall load in the\ncell is very low (i.e., less than 0.3 call arrivals per second)\neach NRT user receives the maximal throughput independent\nof the priority class. However, when the cell load is further\nincreased (arrival rates of more than 0.5 arrivals per second),\nthroughput for users of all priority classes decreases. The intention\nof adaptively controlling the queueing weights is to\nreduce heavy throughput degradation of high priority users\nin this case. The performance increase of high priority users\nand the decrease of low priority users are shown in figure 5.\nFigure 6 plots the bandwidth portion utilized for each priority\nclass of NRT traffic. For low arrival rate (i.e., less than\n0.5 call arrivals per second) NRT users with low priority utilize\nthe greatest portion of the NRT bandwidth because most\nNRT users have priority low. 
When the cell load is increased\n(arrival rates of more than 0.5 arrivals per second), the band\n(a)\n(b)\nFigure 7. Revenue improvement for usage-based pricing policy.\nwidth will be utilized more and more by high priority users.\nThe adaptive control of the WFQ weights decides to intensify\nthis effect because users belonging to priority high suffer from\nthe high population of low priority users. Figures 5 and 6 are\nderived from simulation runs with\n= 1/2 (see section 4.2).\nIn the following experiments, we study the impact of controlling\nthe queueing weights on the revenue function (see\nequation (8)) for the three proposed pricing policies, i.e.,\nusage-based, usage-/throughput-based, and the hybrid pricing\npolicy. From the revenue function the average (steady state)\nprovider revenue in the considered cell can be derived. Recall\nthat the available bandwidth for NRT traffic is variable for\ndifferent call arrival rates. Therefore, we consider the revenue\nearned by the provider in one hour per available bandwidth\nunit, i.e., per available kbit, for NRT traffic. Figure 7 shows\nthe provider revenue for the usage-based pricing policy (i.e.,\n= 0) and different values of the exponent . As discussed\nin section 4.2, the best revenue improvement will be achieved\nwith priority queueing. From the curves we conclude that the\nupdate strategy increases the revenue in one cell successfully\nfor the considered traffic assumptions. Recall that the revenue\nimprovement stems from a shift in bandwidth utilization towards\nhigher priority users (see figure 6) if the population of\nhigh priority users is low compared to users of lower priority.\nFigure 7(b) shows the revenue improvement for different\nuser populations. In the experiment the percentage of high\n220\nC. LINDEMANN ET AL.\nFigure 8. Revenue improvement for usage-/throughput-based pricing policy.\nFigure 9. Revenue improvement for hybrid pricing policy.\npriority users among the arriving user requests is varied. The\nremaining users are assumed to be low priority users. Normal\npriority users are not considered in this experiment (i.e., 0%\nnormal priority users). This figure shows how the adaptive\ncontrol of the queueing weights works. As expected, for a low\npercentage of high priority users the corresponding weight\nis increased. Therefore, QoS for high priority users and the\nprovider revenue is also increased. For more than 50% high\npriority users the revenue is the same as in the case without\nadaptive control. No further revenue improvement is allowed\nbecause degradation of QoS for low priority users would be\nunacceptable. Considering a weak relation among the weights\nas introduced in section 4.2 would decrease the revenue compared\nto the case without adaptive control for more than 50%\nhigh priority users. This might be useful to increase QoS for\nusers of low population independent of their priority class.\nFigure 8 shows the revenue improvement for the usage-/\nthroughput-based pricing policy and scaling exponents\n=\n1/4 and\n= 1/16. In the last experiment we studied the\nrevenue improvement for the hybrid pricing policy (see figure\n9). We assume that half of the arriving users start their\nsession in non-paying mode (i.e., k\n= 2). The curves distinguish\nbetween weights w\n= 1 and w = 2 for the non-paying\nusers. Furthermore, the revenue for the case with and without\nadaptive control is compared. The curves are derived from\nsimulations with\n= 0 and = 1/2. 
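The revenue values plotted in figures 7-9 are obtained from the revenue function (8) under the respective pricing policy. A minimal sketch of its evaluation is given below; it is an illustration only, the per-class prices and throughput factors are example values, and the policy (b) factor corresponds to the truncated throughput increase of equation (11):

// Illustrative sketch of the revenue function (8): revenue = sum over priority classes i
// of r_i(t) * b_i(t). Under policy (a) r_i(t) = p_i; under policy (b) r_i(t) = p_i * beta_i(t),
// where beta_i(t) is the truncated throughput increase of equation (11) (at least 1, so that
// revenue can only improve).
public final class Revenue {
    // p[i]: fixed price per data unit for class i; beta[i]: throughput factor (1.0 under policy (a));
    // b[i]: data volume transferred by class i in the considered interval
    public static double revenue(double[] p, double[] beta, double[] b) {
        double phi = 0.0;
        for (int i = 0; i < p.length; i++) {
            phi += (p[i] * beta[i]) * b[i];
        }
        return phi;
    }
    public static void main(String[] args) {
        double[] price = {20, 15, 10};          // example prices for classes 1..3
        double[] usage = {100, 300, 600};       // example transferred volumes per class
        double[] flat = {1.0, 1.0, 1.0};        // policy (a)
        double[] boosted = {1.2, 1.1, 1.0};     // policy (b), example beta values
        System.out.println(revenue(price, flat, usage));    // 12500.0
        System.out.println(revenue(price, boosted, usage)); // 13350.0
    }
}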
From the revenue\ncurves of figures 79 the average monthly revenue can be\ncomputed considering a daily/weekly usage-pattern and different\nsplits of call arrival rates of users requesting different\nservices (i.e., voice, RT, NRT with different priorities). Comparing\nthe monthly revenue for the pricing policies used in\nfigures 79 with the monthly revenue for the hybrid pricing\npolicy a provider can determine values such as the monthly\nfree data volume and monthly payment per user.\nConclusions\nWe introduced a unified approach based on a mathematical\nframework for the adaptive performance management of 3G\nmobile networks. Opposed to previous work [8,13,19,21,25],\nthe improvement of quality of service (QoS) and the optimization\nof mobile service provider revenue was considered in an\nintegrated way. The unified approach aims at improving both\nQoS for mobile subscribers and increasing revenue earned by\nservice providers. System parameters controlled by adaptive\nperformance management constitute the portion of bandwidth\nreserved for handovers, the buffer threshold of the queue for\nnon real-time traffic, and the weights of a weighted fair queueing\npacket scheduler.\nUsing the UMTS traffic model of [15] and a simulator on\nthe IP level for the UMTS system, we presented performance\ncurves for various QoS measures to illustrate the benefit of\nthe unified approach for adaptive performance management.\nWe introduced update functions that effectively control the\npacket loss probability and the handover failure probability.\nConsidering usage-based, usage-/throughput-based, and hybrid\npricing policies, we showed that the provider revenue in\none cell can be significantly increased by the adaptive control\nof the queueing weights.\nThroughout the paper, we considered the services and QoS\nprofiles standardized for UMTS. Thus, the proposed approach\nfor adaptive control is tailored to UMTS networks. However\n, by considering other services and QoS profiles, the basic\nideas underlying the unified approach for adaptive performance\nmanagement can also be applied for the adaptive control\nof other kinds of multi-service IP networks.\nReferences\n[1] 3GPP, http://www.3gpp.org\n[2] 3GPP, QoS concept and architecture, Technical Specification TS\n23.107 (September 2001).\n[3] 3GPP, UTRAN overall description, Technical Specification TS 25.401\n(September 2001).\n[4] M. Ajmone Marsan, S. Marano, C. Mastroianni and M. Meo, Performance\nanalysis of cellular mobile communication networks supporting\nmultimedia services, Mobile Networks and Applications 5 (2000) 167\n177.\n[5] K. Aretz, M. Haardt, W. Konhuser and W. Mohr, The future of wireless\ncommunications beyond the third generation, Computer Networks\n37 (2001) 8392.\n[6] F. Barcel and J. Jordn, Channel holding time distribution in public\ncellular telephony, in: Proceedings of the 16th International Teletraffic\nCongress, Edinburgh, Scotland (1999) pp. 107116.\nIMPROVING QoS AND PROVIDER REVENUE IN 3G MOBILE NETWORKS\n221\n[7] CSIM18 The Simulation Engine, http://www.mesquite.com\n[8] S.K. Das, R. Jayaram, N.K. Kakani and S.K. Sen, A call admission and\ncontrol scheme for Quality-of-Service provisioning in next generation\nwireless networks, Wireless Networks 6 (2000) 1730.\n[9] A. Demers, S. Keshav and S. Shenker, Analysis and simulation of a fair\nqueueing algorithm, in: Proceedings of the International Symposium\non Communications Architectures and Protocols (SIGCOMM), Austin,\nTX (1989) pp. 112.\n[10] S. Floyd and V. 
Jacobson, Link-sharing and resource management models\nfor packet networks, IEEE/ACM Transactions on Networking 3\n(1995) 365386.\n[11] X. Geng and A.B. Whinston, Profiting from value-added wireless services\n, IEEE Computer 34 (August 2001) 8789.\n[12] A. Gupta, D.O. Stahl and A.B. Whinston, The economics of network\nmanagement, Communications of the ACM 42 (1999) 5763.\n[13] A. Gupta, D.O. Stahl and A.B. Whinston, Priority pricing of integrated\nservices networks, in: Internet Economics, eds. L. McKnight\nand J. Bailey (MIT Press, 1995) pp. 323378.\n[14] G. Haring, R. Marie and K.S. Trivedi, Loss formulas and their application\nto optimization for cellular networks, IEEE Transactions on Vehicular\nTechnology 50 (2001) 664673.\n[15] A. Klemm, C. Lindemann and M. Lohmann, Traffic modeling and\ncharacterization for UMTS networks, in: Proceedings of GLOBECOM\n2001, San Antonio, TX (November 2001) pp. 17411746.\n[16] A. Klemm, C. Lindemann and M. Lohmann, Traffic modeling of IP\nnetworks using the batch Markovian arrival process, in: Proceedings of\nTools 2002, London, Great Britain (April 2002) pp. 92110.\n[17] J. Kilpi and I. Norros, Call level traffic analysis of a large ISP, in: Proceedings\nof the 13th ITC Specialist Seminar on Measurement and Modeling\nof IP Traffic, Monterey, CA (2000) pp. 6.16.9.\n[18] M. Krunz and A. Makowski, A source model for VBR video traffic\nbased on M/G/\ninput processes, in: Proceedings of the 17th Conference\non Computer Communications (IEEE INFOCOM), San Francisco,\nCA (1998) pp. 14411449.\n[19] C. Lindemann, M. Lohmann and A. Thmmler, Adaptive performance\nmanagement for UMTS networks, Computer Networks 38 (2002) 477\n496.\n[20] R. Ludwig, A. Konrad and A.D. Joseph, Optimizing the end-to-end\nperformance of reliable flows over wireless links, in: Proceedings\nof the 5th Conference on Mobile Computing and Networking (ACM\nMobiCom), Seattle, WA (1999) pp. 113119.\n[21] J.K. MacKie-Mason and H.R. Varian, Pricing the Internet, in: Public\nAccess to the Internet, eds. B. Kahin and J. Keller (MIT Press, 1995)\npp. 269314.\n[22] M. Meyer, TCP performance over GPRS, in: Proceedings of the First\nWireless Communications and Networking Conference (IEEE WCNC),\nNew Orleans, MS (1999) pp. 12481252.\n[23] Mobile Wireless Internet Forum (MWIF), OpenRAN architecture in\n3rd generation mobile systems, Technical report MTR-007 (September\n2001) http://www.mwif.org\n[24] J.M. Peha and A. Sutivong, Admission control algorithms for cellular\nsystems, Wireless Networks 7 (2001) 117125.\n[25] S. Rao and E.R. Petersen, Optimal pricing of priority services, Operations\nResearch 46 (1998) 4656.\n[26] UMTS-Forum, UMTS/IMT-2000 Spectrum, Report No. 6 (1999).\n[27] H. Zhang, Service disciplines for guaranteed performance service in\npacket-switched networks, Proceedings of the IEEE 83 (1995) 1374\n1396.\n[28] Wireless\nWorld\nResearch\nForum\n(WWRF),\nhttp://www.\nwireless-world-research.org\nChristoph Lindemann is an Associate Professor in\nthe Department of Computer Science at the University\nof Dortmund and leads the Computer Systems\nand Performance Evaluation group. From 1994 to\n1997, he was a Senior Research Scientist at the GMD\nInstitute for Computer Architecture and Software\nTechnology (GMD FIRST) in Berlin. In the summer\n1993 and during the academic year 1994/1995,\nhe was a Visiting Scientist at the IBM Almaden Research\nCenter, San Jose, CA. Christoph Lindemann\nis a Senior Member of the IEEE. 
He is author of the monograph Performance\nModelling with Deterministic and Stochastic Petri Nets (Wiley, 1998). Moreover\n, he co-authored the survey text Performance Evaluation Origins and\nDirections (Springer-Verlag, 2000). He served on the program committees of\nvarious well-known international conferences. His current research interests\ninclude mobile computing, communication networks, Internet search technology\n, and performance evaluation.\nE-mail: cl@cs.uni-dortmund.de\nWWW: http://www4.cs.uni-dortmund.de/\nLindemann/\nMarco Lohmann received the degree Diplom-Infor-matiker\n(M.S. in computer science) with honors\nfrom the University of Dortmund in March 2000.\nPresently, he is a Ph.D. student in the Computer Systems\nand Performance Evaluation group at the University\nof Dortmund. He is a student member of the\nIEEE and the ACM. His research interests include\nmobile computing, Internet search technology, and\nstochastic modeling.\nE-mail: ml@ls4.cs.uni-dortmund.de\nAxel Thmmler received the degree Diplom-Infor-matiker\n(M.S. in computer science) from the University\nof Dortmund in April 1998. Presently, he is a\nPh.D. student in the Computer Systems and Performance\nEvaluation group at the University of Dortmund\n. His research interests include mobile computing\n, communication networks, and performance\nevaluation.\nE-mail: at@ls4.cs.uni-dortmund.de", "keywords": "QoS;packet loss probability;Quality of Service in mobile systems;provider revenue;performance evaluation of next generation mobile systems;packet scheduler;adaptive performance management;admission control in mobile system;pricing policy;admission control;3G mobile networks;pricing and revenue optimization"} {"name": "24", "title": "A WEIGHTED RANKING ALGORITHM FOR FACET-BASED COMPONENT RETRIEVAL SYSTEM", "abstract": "Facet-based component retrieval techniques have been proved to be an effective way for retrieving. These Techniques are widely adopted by component library systems, but they usually simply list out all the retrieval results without any kind of ranking. In our work, we focus on the problem that how to determine the ranks of the components retrieved by user. Factors which can influence the ranking are extracted and identified through the analysis of ER-Diagram of facet-based component library system. In this paper, a mathematical model of weighted ranking algorithm is proposed and the timing of ranks calculation is discussed. Experiment results show that this algorithm greatly improves the efficiency of component retrieval system.", "fulltext": "Motivations\nA high efficiency retrieval system for software\ncomponent library is important for the reuse of software\ncomponents. The point of high efficiency is not that the\ntime performance in one matching or retrieving process\nwhich can be measured by how many seconds or how\nmany milliseconds elapsed, but that the efficiency to\nmake the component consumers be able to find what they\nneed as soon as possible, even though the former is the\nbasis of the latter.\nNo matter accuracy matching or fuzzy matching, our\ncomponent retrieval system usually simply lists out all the\nretrieval results without any kind of ranking, or at least\nwithout a systematic ranking. 
Users have to view the\ndetail information of all the retrieval results one by one to\nfind out which is the best to fit their requirements, or else\nthey have to adjust their query conditions to retrieve again.\nIf there are a large number of components retrieved from\nthe component library, it could be a tough and torturous\nexperience to find a proper component. However, it's a\nfact that there's a matching degree between the query\nconditions and retrieval results. The matching degree is\njust the similarity and relevancy between the query\ncondition and its retrieval results. Only when we rank the\nretrieval results by the matching degree as the Web search\nengines can component consumers easily find what they\nneed. They only have to compare the first several retrieval\nresults but not all of them.\nAccording to the discussion above, it's clear that a\nformula to calculate the matching degree and its\ncorresponding ranking algorithm, which can greatly\nimprove the retrieval efficiency for software component\nlibrary, are needed. In this paper, we propose a weighted\nranking algorithm for facet-based component retrieval\nsystem. This algorithm has been implemented in a\nsoftware component library, called DLCL, and greatly\nimproves the efficiency of the retrieval system.\n\nIntroduction to Retrieval Methods for Component Library\n2.1 Existing Retrieval Methods for Component\nLibrary\n\nRetrieval of software components is a core technique of\ncomponent library. Today there are lots of retrieval\nmethods for software component library. The main are as\nfollows [1, 2]: (1) Specification matching method; (2) AI\nBased method; (3) Information science method; (4)\nHypertext browsing method. As to the four methods, each\nhas its own features and there's no a general formula to\ncalculate the matching degree. For example, specification\nmatching method uses formal specifications to describe\nthe behavior of software components\n\nand relies on\ntheorem proving to determine match and mismatch. AI\nBased method relies on the use of AI planning techniques\nto automatically search software components in\ncomponent library. So we have to use different\ncalculating strategies to calculate the matching degree of\neach retrieval method.\nAmong the retrieval methods discussed above,\ninformation science method is widely used in practice.\nInformation science method usually comprises several\ndifferent retrieval methods which are attribute-value,\nenumerated, faceted, and keyword method. Of the four\nmethods, facet-based component retrieval method has\nbeen proved to be an effective way for retrieving and has\nbeen widely adopted by component library systems. In the\nfollowing section, we'll discuss the facet-based retrieval\nmethod.\n505-049\n274\n2.2\n\nFacet-based Retrieval Method\nA component classification is a set of {facet, facet term}\npairs, also called descriptors [3]. Reusable software\ncomponents (RSC) are classified by assigning appropriate\nfacet terms for all applicable facets. The objective in\nclassifying the RSC is to make it possible for all users\nwho might use the RSC to retrieve it as a result of their\nrequests. Faceted classification scheme is an effective\nway for classifying the components and widely adopted\nby component library systems.\nCorrespondingly, there are several retrieving\nalgorithms for faceted classification scheme. Some\nsystems use the traditional database query techniques in\nfacet-based retrieval. 
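Such a faceted classification, i.e., a set of {facet, facet term} descriptors attached to each component, can be pictured with a short sketch. The map-based representation below is an illustration only and not the data model of any of the systems cited:

import java.util.Map;
import java.util.Set;

// Illustrative sketch: a component classified by a set of {facet, facet term} descriptors.
public final class FacetedComponent {
    private final String name;
    private final Map<String, Set<String>> descriptors;   // facet -> terms assigned to this component

    public FacetedComponent(String name, Map<String, Set<String>> descriptors) {
        this.name = name;
        this.descriptors = descriptors;
    }
    // true if this component carries the given term under the given facet
    public boolean matches(String facet, String term) {
        return descriptors.getOrDefault(facet, Set.of()).contains(term);
    }
    public static void main(String[] args) {
        FacetedComponent c = new FacetedComponent("QuickSort",
                Map.of("Function", Set.of("sorting"), "Environment", Set.of("Java")));
        System.out.println(c.matches("Function", "sorting"));   // true
    }
}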
Wang YF proposed a tree matching algorithm in his Ph.D. dissertation [4]. This algorithm maps the component facets into a facet tree and maps the query conditions into a query tree. The matching algorithm deals with the matching of the facet tree against the query tree and calculates the matching cost. The algorithm is based on tree matching theories such as tree embedding, tree inclusion, and tree containment. These three tree matching methods are increasingly elastic, in order to improve the retrieving recall while maintaining the precision to a certain extent. The matching cost of the tree matching is calculated to measure the degree of approximation between the facet trees of the components and the query tree. The data structure of a tree is represented by a three-tuple T = (V, E, root(T)), where V represents a limited set of vertices, E represents the set of edges, and root(T) represents the root of the tree.

Weighted Ranking Algorithm for Facet-based Component Retrieval System

There is no general formula to calculate the matching degree, due to the different features of each retrieval method. The facet-based retrieval method has been widely adopted by existing component library systems, such as REBOOT, Proteus, Asset Library, and JBCL [5]. It has been proved to be an effective method for the retrieval of component library systems. Therefore, it makes great sense to propose a component ranking algorithm for facet-based retrieval systems.

3.1 ER-Diagram of Software Component Library

The extraction and identification of the influential factors which are used to calculate the matching degree is the first step in establishing a mathematical model. Analyzing the ER-Diagram of a software component library is an effective way to extract the factors. An ER-diagram of a facet-based component library is given below:

[Figure: ER-diagram of the component library, relating the entities Producer, Consumer, Component, Facet, Term, Summary, and Feedback through the relationships Provide, Reuse, Describe, Include, Relate, and Feedback.]
Fig. 1. ER-Diagram of Component Library

Entities list:

Component: the component is the basic and primary entity in the component library. Besides the attributes, there are facet-term pairs and an information summary to describe a component.

User Feedback: an opinion, a comment or a score provided by users after they have used a component.

Component Summary: an information summary describing a component which enables users to get to know the component quickly.

Facet: facets and their terms are used to classify and represent the components.

3.2 Factors of Weighted Ranking Algorithm

In a facet-based component library system, the facet is the most important means to classify and represent the components. Correspondingly, facet-based retrieval methods, such as the facet tree matching method, are important for the component retrieval system. The matching degree between the facet tree and the query tree is of much importance for ranking. However, the matching degree of facets is not the only factor which is able to influence the ranking.

The retrieval system of a component library usually has two search modes: simple query and complex query. A simple query just uses the traditional database query method to match the Attribute-Valued pairs. In contrast, a complex query is a much more effective way which combines several query methods together to match different kinds of component information.
And therefore, we should take other factors into account besides the facets for ranking the retrieval results. According to the analysis of the ER-Diagram above, we can extract some other factors which are able to influence the ranking of component retrieval results when using the complex query.

Attributes of a component, such as the component name, can be used to match the keywords in the query conditions. The matching degree of the Attribute-Valued pairs should be an influential factor for ranking.

The summary of a component can also be used to match the query conditions. Query conditions usually consist of several keywords. The density, prominence, and position of keywords within the component summary will influence the ranking of components. The keyword density is simply the number of occurrences of the keywords within the component summary divided by the total number of words. Keyword prominence is related to the location of keywords in the summary. For example, keywords placed at the beginning of the summary may carry more weight than those towards the end of it.

User feedback on a component is very useful for other users who want to use it to evaluate the quality and other features of the component. They can acquire a much more objective description and useful information about the component beyond the component attributes and summary.

How many times the component information has been visited and how many times the component has been downloaded for reuse should also be taken into account as factors to calculate the matching degree for ranking. They reflect the popularity and reusability of the component from another aspect.

3.3 Mathematical Model

Retrieval results consist of a collection of components matching the query conditions:
Definition 1: Components (C1, C2, ..., Ci, ..., Cn); (n ∈ N, n ≥ 1)
It makes no sense to discuss the circumstance of empty retrieval results, since we are going to discuss the ranking of component lists.
Accordingly, each component has a rank value:
Definition 2: Ranks (R1, R2, ..., Ri, ..., Rn); (n ∈ N, n ≥ 1)
The query condition consists of a collection of keywords:
Definition 3: Keywords (K1, K2, ..., Ki, ..., Kn0); (n0 ∈ N, n0 ≥ 1)
Each component is described by a set of Attribute-Valued pairs:
Definition 4: Attributes (A1, A2, ..., Ai, ..., An1); (n1 ∈ N, n1 ≥ 1)
Besides the Attribute-Valued pairs, components are also classified and represented by a set of facets and their terms:
Definition 5: Facets (F1, F2, ..., Fi, ..., Fn2); (n2 ∈ N, n2 ≥ 1)
The summary of the component information differs from the Attribute-Valued pairs. It provides a comprehensive description of a component in context.
Definition 6: Summary (S);
User feedback includes all the comments and feedback on a specific component:
Definition 7: User Feedback (U1, U2, ..., Ui, ..., Un3); (n3 ∈ N, n3 ≥ 1)
User feedback must be analyzed and evaluated to a relative number.
We use E to represent the Evaluation number of user feedback.
Definition 8: E = Evaluate (User Feedback).
Definition 9: Visited times of a component: Visited times (V);
Definition 10: Downloaded times of a component: Downloaded times (D);
We have listed all the influential factors above, which constitute a six-tuple:
Factors (A, F, S, U, V, D);
Their influential weights differ from each other according to their feature and importance:
Definition 11: Weights (WA, WF, WS, WU, WV, WD); (0 ≤ WA, WF, WS, WU, WV, WD ≤ 1, WA + WF + WS + WU + WV + WD = 1)
WA represents the weight of the Attributes factor, WF represents the weight of the Facets factor, and WS, WU, WV, and WD likewise represent the weights of the corresponding factors discussed above.
There are several functions for calculating the matching degree of some factors. The core calculating formula of each function relies on its corresponding matching algorithm.

Functions and formulas:
Summary:    FS(Keywords, Summary)    = Σ (i=1..n0) match(Ki, S)
Facets:     FF(Keywords, Facets)     = Σ (i=1..n0) Σ (j=1..n2) match(Ki, Fj)
Attributes: FA(Keywords, Attributes) = Σ (i=1..n0) Σ (j=1..n1) match(Ki, Aj)

The match function of the component summary uses the content-based similarity measurement algorithm of search engine techniques. A Best-First algorithm was proposed by Cho [6]. This algorithm uses a vector space model to calculate the similarity between the keywords and the content. Its formula is given as follows:

sim(q, p) = Σ (k ∈ q ∩ p) Wkq · Wkp / sqrt( Σ (k ∈ q) Wkq^2 · Σ (k ∈ p) Wkp^2 )

The variable q represents the collection of keywords, p represents the content, and Wkp represents the importance of k to a specific topic. In our mathematical model, the variable q represents the query condition, and the variable p represents the summary of the component information.

The facet-based retrieving method usually adopts facet tree matching. Therefore, its match function calculates the matching degree between the facet tree of the component and the query tree. A formula to calculate the matching cost of tree containment matching was given by Xu [7]: Q = (V, E, root(Q)) and D = (W, F, root(D)) are two unordered label trees, and TCostM(Q, D) represents the tree containment matching cost from tree Q to tree D:

TCostM(Q, D) = min{ γ(f) | f is a tree containment matching from Q to D }

where the cost γ(f) of a matching f accumulates a relabeling cost for every matched vertex v ∈ domain(f) whose label(v) differs from label(f(v)), a cost for every vertex v ∈ V − domain(f) of Q left unmatched, and a cost for every vertex w of spectrum(f) outside Range(f). If f is a tree containment matching from tree Q to tree D and γ(f) = TCostM(Q, D), then f is the tree containment matching which obtains the minimum matching cost from tree Q to tree D. This definition can also be applied to the containment matching between a tree and a forest or between two forests.

As to the match function of the component attributes, we just use the traditional database query methods to deal with it.
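The vector space similarity used for the summary match can be illustrated with the following sketch; it is only an illustration, and the term weighting (plain term frequencies) is an assumption, since only the similarity formula itself is specified above:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of sim(q, p): the sum over shared terms of Wkq*Wkp, divided by
// sqrt(sum of Wkq^2 times sum of Wkp^2), with plain term frequencies as the weights.
public final class SummarySimilarity {
    static Map<String, Integer> termFrequencies(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String t : text.toLowerCase().split("\\W+")) {
            if (!t.isEmpty()) tf.merge(t, 1, Integer::sum);
        }
        return tf;
    }
    public static double similarity(String query, String summary) {
        Map<String, Integer> q = termFrequencies(query);
        Map<String, Integer> p = termFrequencies(summary);
        double dot = 0.0, qNorm = 0.0, pNorm = 0.0;
        for (Map.Entry<String, Integer> e : q.entrySet()) {
            qNorm += e.getValue() * e.getValue();
            dot += e.getValue() * p.getOrDefault(e.getKey(), 0);
        }
        for (int w : p.values()) pNorm += w * w;
        return (qNorm == 0 || pNorm == 0) ? 0.0 : dot / Math.sqrt(qNorm * pNorm);
    }
    public static void main(String[] args) {
        System.out.println(similarity("sort array component",
                "A reusable component that sorts an integer array in place."));
    }
}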
E, V, and D are three ranking factors without any relation to the query keywords. Even though they are numbers, we cannot use them directly for ranking. Functions should be provided to transform them:

Evaluation of Feedback: FE(E)
Visited Times:          FV(V)
Downloaded Times:       FD(D)

According to the discussion above, we finally draw out a very simple formula to calculate the rank for each component:

Rank = FA·WA + FF·WF + FS·WS + FE·WE + FV·WV + FD·WD

We can use a matrix operation to represent the calculation of the rank value for each component in the retrieval results. There are n components and 6 influential factors, so F · W = R, where

F = [ FA1 FF1 FS1 FE1 FV1 FD1 ; ... ; FAi FFi FSi FEi FVi FDi ; ... ; FAn FFn FSn FEn FVn FDn ] is the n x 6 factor matrix,
W = [ WA; WF; WS; WE; WV; WD ] is the 6 x 1 weight vector, and
R = [ R1; ...; Ri; ...; Rn ] is the n x 1 vector of component ranks.

We specify the weights with empirical values at the very beginning, and then use data mining technology to analyze the user logs in order to dynamically and iteratively adjust those values.

3.4 Timing of Ranks Calculation

Having designed how to calculate the ranks, we now have to determine when to calculate them. There are two possible times for the calculation: calculating after the retrieving process has finished, or calculating during the process of retrieving. Both possibilities have their own advantages and disadvantages.

If we calculate the ranks after the retrieving process has finished, we can deal with the retrieving and ranking separately. It will be much easier for us to design and maintain the system, since the retrieving and ranking processes are independent. However, it costs much more time and space to calculate the ranks. A lot of memory space is needed to store a large number of retrieval results temporarily before they are ranked, and much more time is spent managing the transmission of data between storage devices and the CPU. On the contrary, if we calculate the ranks during the process of retrieving, we have to combine the retrieving with the ranking process completely or partly. Undoubtedly, it will be harder for us to implement and maintain the system, but it can greatly improve the time and space performance.

According to the discussion above, we can choose a proper time to calculate the ranks. Which solution we should choose depends on the requirements of the system. The solution which calculates the ranks during the retrieving process should be adopted if the time and space performances are rigorously required.

Implementation

Our component library system, named DLCL, is implemented on the J2EE platform. The mathematical model and its algorithm discussed above have been implemented in this system with Java. Java is an object-oriented language. The implementation consists of several core interfaces and classes. The core interfaces and classes and the relationships between them are demonstrated in the UML class diagram:

Fig. 2. Class Diagram of Ranking Module

We calculate the ranks during the retrieving process to improve the time and space performance, and therefore we have to combine the ranking module partly with the retrieving module. In order to lower the degree of coupling between these two modules, a callback mechanism was adopted.
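The weighted sum defined above is exactly what this callback has to compute for every retrieved component. The following sketch is an illustration only: it borrows the class names described next, but the method signature and the example weights are assumptions rather than the actual DLCL code:

// Illustrative sketch of Rank = FA*WA + FF*WF + FS*WS + FE*WE + FV*WV + FD*WD,
// written as the single calculation method exposed through the callback interface.
interface Rank {
    double calculateRank(double[] factorValues);    // FA, FF, FS, FE, FV, FD for one component
}

public final class RankComponentImpl implements Rank {
    private final double[] weights;                 // WA, WF, WS, WE, WV, WD; must sum to 1

    public RankComponentImpl(double[] weights) {
        this.weights = weights;
    }
    @Override
    public double calculateRank(double[] factorValues) {
        double rank = 0.0;
        for (int i = 0; i < weights.length; i++) {
            rank += factorValues[i] * weights[i];   // one row of the F (n x 6) times W (6 x 1) product
        }
        return rank;
    }
    public static void main(String[] args) {
        Rank rank = new RankComponentImpl(new double[] {0.2, 0.3, 0.2, 0.1, 0.1, 0.1});
        // factor values of one retrieved component, already normalized by their match functions
        System.out.println(rank.calculateRank(new double[] {0.8, 0.9, 0.5, 0.7, 0.3, 0.4}));
    }
}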
In the actual Java implementation, we define an interface Rank, which consists of a single method for calculating ranks. RankComponentImpl implements this interface and calculates the ranks of the components retrieved by users; the searcher can invoke its concrete object during the retrieving process. The class Component encapsulates the methods that provide component information. ComponentMatchingDegree provides the methods for calculating the matching degree between the query keywords and a component, with a separate calculation strategy for each influential factor. Finally, the class Weight provides the methods for obtaining the influential weight of each factor.

Experiment and its Results

To verify the efficiency of a component retrieving system that adopts our weighted ranking algorithm, we designed and carried out an experiment in our component library system, DLCL, which contains more than 1000 components. The retrieving system of DLCL splits the retrieval results into several pages when many components are retrieved, listing 10 components per page.
The experiment separated the users into two groups, group 1 and group 2, of 10 persons each. All users were familiar with component reuse to some extent. Both groups used the facet-based component retrieving method. The retrieval results for group 1 were listed without any ranking, while those for group 2 were ranked by our weighted ranking algorithm.
The efficiency of each group was measured in several ways: how many pages they turned, how many times they had to adjust the query condition, and, most importantly, how much time elapsed during the whole retrieving process. The experimental results are given in the following table:

                                  Group 1   Group 2
Average turned pages (pages)        2.7       1.4
Average adjusted times (times)      2.3       1.1
Average time elapsed (minutes)     26.6       9.5

The results show that the efficiency of group 2 is much higher than that of group 1. With the weighted ranking algorithm applied to the retrieving system of DLCL, users do not need to turn many pages to view and compare component information, nor to adjust the query condition to improve query precision; viewing the first page of retrieval results is enough most of the time. This greatly reduces time and retrieval costs.

Related Works

The idea of Component Rank comes from computing fair impact factors of published papers [8]. Google's method can be considered an HTML extension of the method proposed for counting the impact of publications, called influence weight in [8]; Google computes ranks (called PageRanks) for HTML documents on the Internet [9, 10]. In reference [11], the authors present the Component Rank model for ranking software components and show a system for computing Component Rank. In this model, a collection of software components is represented as a weighted directed graph whose nodes correspond to the components and whose edges correspond to usage relations. Similar components are clustered into one node so that the effect of simply duplicated nodes is removed.
The nodes in the graph are ranked by their weights, which are defined as the elements of the eigenvector of an adjacency matrix for the directed graph. A major distinction of the Component Rank model in [11] from PageRank and the influence weight in [9, 10] is that the Component Rank model explores similarity between components before the weight computation.
In this paper, we also propose a weighted ranking algorithm for a component retrieval system. Our algorithm uses different calculation strategies according to the features of facet-based retrieval methods, whereas in [11] the authors employed only static use relations.

Conclusion

In this paper, a mathematical model of a weighted ranking algorithm is proposed and the timing of rank calculation is discussed. We have applied this ranking algorithm in our component library system, DLCL. The experiment we carried out shows that the algorithm greatly improves the efficiency of the component retrieving system, saving time and retrieval costs for component reuse.

Acknowledgement

This research is partially supported by the National High Technology Development 863 Program under Grant No. 2004AA116010.

References

[1] Frakes WB, Pole TP. An empirical study of representation methods for reusable software components. IEEE Transactions on Software Engineering, 1994, 20(8):617-630.
[2] Mili H, Rada R, Wang W, Strickland K, Boldyreff C, Olsen L, Witt J, Heger J, Scherr W, Elzer P. Practitioner and SoftClass: A comparative study of two software reuse research projects. Journal of Systems and Software, 1994, 27(5).
[3] NEC Software Engineering Laboratory. NATO Standard for Management of a Reusable Software Component Library. NATO Communications and Information Systems Agency, 1991.
[4] Wang YF. Research on retrieving reusable components classified in faceted scheme [Ph.D. Thesis]. Shanghai: Fudan University, 2002.
[5] Chang JC, et al. Representation and retrieval of reusable software components. Computer Science, 1999, 26(5):41-48.
[6] Cho J, Garcia-Molina H, Page L. Efficient crawling through URL ordering. Computer Networks, 1998, 30(1-7):161-172.
[7] Xu RZ, et al. Research on matching algorithm for XML-based software component query. Journal of Software, 2003, 14(7):1195-1202.
[8] Pinski G, Narin F. Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics. Information Processing and Management, 1976, 12(5):297-312.
[9] Page L, Brin S, Motwani R, Winograd T. The PageRank citation ranking: Bringing order to the Web. Technical Report, Stanford Digital Library Technologies Project, 1998. http://www-db.stanford.edu/~backrub/pageranksub.ps
[10] Kleinberg J. Authoritative sources in a hyperlinked environment. Journal of the ACM, 1999, 46(5):604-632.
[11] Inoue K, Yokomori R, Fujiwara H, Yamamoto T, Matsushita M, Kusumoto S. Component Rank: Relative significance rank for software component search.
ICSE 2003: 14-24\n279", "keywords": "retrieval system;facet;component rank;component retrieval;and component library;ranking algorithm;Weighted ranking algorithm;matching degree;facet-based component retrieval;component library"} {"name": "25", "title": "Accelerated Focused Crawling through Online Relevance Feedback", "abstract": "The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded. We show that there is indeed a great deal of usable information on a HREF source page about the relevance of the target page. This information, encoded suitably, can be exploited by a supervised apprentice which takes online lessons from a traditional focused crawler by observing a carefully designed set of features and events associated with the crawler. Once the apprentice gets a sufficient number of examples, the crawler starts consulting it to better prioritize URLs in the crawl frontier. Experiments on a dozen topics using a 482-topic taxonomy from the Open Directory (Dmoz) show that online relevance feedback can reduce false positives by 30% to 90%.", "fulltext": "Introduction\nKeyword search and clicking on links are the dominant\nmodes of accessing hypertext on the Web.\nSupport for\nkeyword search through crawlers and search engines is very\nmature, but the surfing paradigm is not modeled or assisted\n\n(Note: The HTML version of this paper is best viewed using\nMicrosoft Internet Explorer. To view the HTML version using\nNetscape, add the following line to your ~/.Xdefaults or\n~/.Xresources file:\nNetscape*documentFonts.charset*adobe-fontspecific: iso-8859-1\nFor printing use the PDF version, as browsers may not print the\nmathematics properly.)\n\nContact author, email soumen@cse.iitb.ac.in\nCopyright is held by the author/owner(s).\nWWW2002, May 711, 2002, Honolulu, Hawaii, USA.\nACM 1-58113-449-5/02/0005\nBaseline learner\nDmoz\ntopic\ntaxonomy\nClass models\nconsisting of\nterm stats\nFrontier URLS\npriority queue\nCrawler\nPick\nbest\nNewly fetched\npage u\nSubmit page for classification\nIf Pr(c*|u) is large enough\nthen enqueue all outlinks v of u\nwith priority Pr(c*|u)\nCrawl\ndatabase\nSeed\nURLs\nFigure 1: A basic focused crawler controlled by one topic\nclassifier/learner.\nas well. Support for surfing is limited to the basic interface\nprovided by Web browsers, except for a few notable research\nprototypes.\nWhile surfing, the user typically has a topic-specific\ninformation need, and explores out from a few known\nrelevant starting points in the Web graph (which may be\nquery responses) to seek new pages relevant to the chosen\ntopic/s. While deciding for or against clicking on a specific\nlink (u, v), humans use a variety of clues on the source\npage u to estimate the worth of the (unseen) target page\nv, including the tag tree structure of u, text embedded in\nvarious regions of that tag tree, and whether the link is\nrelative or remote. 
\"Every click on a link is a leap of faith\"\n[19], but humans are very good at discriminating between\nlinks based on these clues.\nMaking an educated guess about the worth of clicking\non a link (u, v) without knowledge of the target v is\ncentral to the surfing activity. Automatic programs which\ncan learn this capability would be valuable for a number\nof applications which can be broadly characterized as\npersonalized, topic-specific information foragers.\nLarge-scale, topic-specific information gatherers are\ncalled focused crawlers [1, 9, 14, 28, 30]. In contrast to giant,\nall-purpose crawlers which must process large portions of\nthe Web in a centralized manner, a distributed federation of\nfocused crawlers can cover specialized topics in more depth\nand keep the crawl more fresh, because there is less to cover\nfor each crawler.\nIn its simplest form, a focused crawler consists of a\nsupervised topic classifier (also called a `learner') controlling\nthe priority of the unvisited frontier of a crawler (see\nFigure 1). The classifier is trained a priori on document\nsamples embedded in a topic taxonomy such as Yahoo!\nor Dmoz.\nIt thereby learns to label new documents as\nbelonging to topics in the given taxonomy [2, 5, 21]. The\ngoal of the focused crawler is to start from nodes relevant\nto a focus topic c\n\nin the Web graph and explore links to\nselectively collect pages about c\n\n, while avoiding fetching\npages not about c\n\n.\nSuppose the crawler has collected a page u and\n148\nencountered in u an unvisited link to v. A simple crawler\n(which we call the baseline) will use the relevance of u\nto topic c\n\n(which, in a Bayesian setting, we can denote\nPr(c\n\n|u)) as the estimated relevance of the unvisited page\nv.\nThis reflects our belief that pages across a hyperlink\nare more similar than two randomly chosen pages on the\nWeb, or, in other words, topics appear clustered in the\nWeb graph [11, 23]. Node v will be added to the crawler's\npriority queue with priority Pr(c\n\n|u). This is essentially a\n\"best-first\" crawling strategy. When v comes to the head\nof the queue and is actually fetched, we can verify if the\ngamble paid off, by evaluating Pr(c\n\n|v). The fraction of\nrelevant pages collected is called the harvest rate.\nIf V\nis the set of nodes collected, the harvest rate is defined\nas (1/\n|V |)\nv\nV\nPr(c\n\n|v). Alternatively, we can measure\nthe loss rate, which is one minus the harvest rate, i.e., the\n(expected) fraction of fetched pages that must be thrown\naway.\nSince the effort on relevant pages is well-spent,\nreduction in loss rate is the primary goal and the most\nappropriate figure of merit.\nFor focused crawling applications to succeed, the \"leap\nof faith\" from u to v must pay off frequently. In other words,\nif Pr(c\n\n|v) is often much less than the preliminary estimate\nPr(c\n\n|u), a great deal of network traffic and CPU cycles\nare being wasted eliminating bad pages. Experience with\nrandom walks on the Web show that as one walks away\nfrom a fixed page u\n0\nrelevant to topic c\n0\n, the relevance of\nsuccessive nodes u\n1\n, u\n2\n, . . . to c\n0\ndrops dramatically within\na few hops [9, 23]. This means that only a fraction of outlinks\nfrom a page is typically worth following. The average\nout-degree of the Web graph is about 7 [29]. 
Therefore, a large number of page fetches may result in disappointment, especially if we wish to push the utility of focused crawling to topic communities which are not very densely linked. Even w.r.t. topics that are not very narrow, the number of distracting outlinks emerging from even fairly relevant pages has grown substantially since the early days of Web authoring [4]. Template-based authoring, dynamic page generation from semi-structured databases, ad links, navigation panels, and Web rings contribute many irrelevant links which reduce the harvest rate of focused crawlers. Topic-based link discrimination will also reduce these problems.
1.1 Our contribution: Leaping with more faith
In this paper we address the following questions: How much information about the topic of the HREF target is available and/or latent in the HREF source page, its tag-tree structure, and its text? Can these sources be exploited for accelerating a focused crawler?
Our basic idea is to use two classifiers. Earlier, the regular baseline classifier was used to assign priorities to unvisited frontier nodes. This no longer remains its function. The role of assigning priorities to unvisited URLs in the crawl frontier is now assigned to a new learner called the apprentice, and the priority of v is specific to the features associated with the (u, v) link which leads to it. (If many u's link to a single v, it is easiest to freeze the priority of v when the first-visited u linking to v is assessed, but combinations of scores are also possible.) The features used by the apprentice are derived from the Document Object Model or DOM (http://www.w3.org/DOM/) of u. Meanwhile, the role of the baseline classifier becomes one of generating training instances for the apprentice, as shown in Figure 2. We may therefore regard the baseline learner as a critic or a trainer, which provides feedback to the apprentice so that it can improve "on the job."
Figure 2: The apprentice is continually presented with training cases (u, v) with suitable features. The apprentice is interposed where new outlinks (u, v) are registered with the priority queue, and helps assign the unvisited node v a better estimate of its relevance.
The critic-apprentice paradigm is related to reinforcement learning and AI programs that learn to play games [26, Section 1.2].
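For intuition, here is a minimal Python sketch of this division of labor. It is our illustration only; the classifier objects (baseline, apprentice), the fetch function, and the DOM feature extraction are hypothetical stand-ins, not the paper's actual implementation.

import heapq
import itertools

def focused_crawl(seed_urls, baseline, apprentice, fetch, extract_links, budget=10000):
    """Best-first focused crawl with a critic (baseline) and an apprentice.
    The apprentice prioritizes unvisited links from features of the source page;
    the baseline classifier judges each fetched page and labels the training
    instance for the link that led to it."""
    order = itertools.count()                 # tie-breaker so feature dicts are never compared
    frontier = [(-1.0, next(order), url, None) for url in seed_urls]
    heapq.heapify(frontier)
    seen = set(seed_urls)
    fetched, expected_loss = 0, 0.0
    while frontier and fetched < budget:
        _, _, url, link_features = heapq.heappop(frontier)
        page = fetch(url)
        fetched += 1
        relevance = baseline.prob_focus_topic(page)        # Pr(c*|v): the critic's verdict
        expected_loss += 1.0 - relevance
        if link_features is not None:
            # The completed "leap of faith" (u, v) becomes a training case for the apprentice.
            apprentice.add_training_example(link_features, relevance)
        for out_url, out_features in extract_links(page):  # DOM-derived features of (u, v)
            if out_url not in seen:
                seen.add(out_url)
                # Before it has seen any training data, predict() can fall back to a fixed prior.
                priority = apprentice.predict(out_features)
                heapq.heappush(frontier, (-priority, next(order), out_url, out_features))
    return expected_loss / max(fetched, 1)                 # loss rate; harvest rate = 1 - this

In the system described later, the apprentice is retrained in batches of fetched pages rather than after every page, but the flow of information between critic and apprentice is the same.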
We argue that this division of labor is natural\nand effective.\nThe baseline learner can be regarded as\na user specification for what kind of content is desired.\nAlthough we limit ourselves to a generative statistical model\nfor this specification, this can be an arbitrary black-box\npredicate.\nFor rich and meaningful distinction between\nWeb communities and topics, the baseline learner needs\nto be fairly sophisticated, perhaps leveraging off human\nannotations on the Web (such as topic directories).\nIn\ncontrast, the apprentice specializes in how to locate pages\nto satisfy the baseline learner.\nIts feature space is more\nlimited, so that it can train fast and adapt nimbly to\nchanging fortunes at following links during a crawl.\nIn\nMitchell's words [27], the baseline learner recognizes \"global\nregularity\" while the apprentice helps the crawler adapt\nto \"local regularity.\"\nThis marked asymmetry between\nthe classifiers distinguishes our approach from Blum and\nMitchell's co-training technique [3], in which two learners\ntrain each other by selecting unlabeled instances.\nUsing a dozen topics from a topic taxonomy derived\nfrom the Open Directory, we compare our enhanced crawler\nwith the baseline crawler. The number of pages that are\nthrown away (because they are irrelevant), called the loss\nrate, is cut down by 3090%. We also demonstrate that\nthe fine-grained tag-tree model, together with our synthesis\nand encoding of features for the apprentice, are superior to\nsimpler alternatives.\n1.2\nRelated work\nOptimizing the priority of unvisited URLs on the crawl\nfrontier for specific crawling goals is not new. FishSearch\nby De Bra et al. [12, 13] and SharkSearch by Hersovici\net al. [16] were some of the earliest systems for localized\nsearches in the Web graph for pages with specified keywords.\n149\nIn another early paper, Cho et al. [10] experimented with a\nvariety of strategies for prioritizing how to fetch unvisited\nURLs.\nThey used the anchor text as a bag of words to\nguide link expansion to crawl for pages matching a specified\nkeyword query, which led to some extent of differentiation\namong out-links, but no trainer-apprentice combination was\ninvolved. No notion of supervised topics had emerged at\nthat point, and simple properties like the in-degree or the\npresence of specified keywords in pages were used to guide\nthe crawler.\nTopical locality on the Web has been studied for a few\nyears.\nDavison made early measurements on a 100000-node\nWeb subgraph [11] collected by the DiscoWeb system.\nUsing the standard notion of vector space TFIDF similarity\n[31], he found that the endpoints of a hyperlink are much\nmore similar to each other than two random pages, and that\nHREFs close together on a page link to documents which are\nmore similar than targets which are far apart. Menczer has\nmade similar observations [23]. The HyperClass hypertext\nclassifier also uses such locality patterns for better semi-supervised\nlearning of topics [7], as does IBM's Automatic\nResource Compilation (ARC) and Clever topic distillation\nsystems [6, 8].\nTwo important advances have been made beyond the\nbaseline best-first focused crawler: the use of context graphs\nby Diligenti et al. [14] and the use of reinforcement learning\nby Rennie and McCallum [30].\nBoth techniques trained\na learner with features collected from paths leading up to\nrelevant nodes rather than relevant nodes alone. Such paths\nmay be collected by following backlinks.\nDiligenti et al. 
used a classifier (learner) that regressed\nfrom the text of u to the estimated link distance from u to\nsome relevant page w, rather than the relevance of u or an\noutlink (u, v), as was the case with the baseline crawler.\nThis lets their system continue expanding u even if the\nreward for following a link is not immediate, but several\nlinks away.\nHowever, they do favor links whose payoffs\nare closest. Our work is specifically useful in conjunction\nwith the use of context graphs: when the context graph\nlearner predicts that a goal is several links away, it is crucial\nto offer additional guidance to the crawler based on local\nstructure in pages, because the fan-out at that radius could\nbe enormous.\nRennie and McCallum [30] also collected paths leading\nto relevant nodes, but they trained a slightly different\nclassifier, for which:\nAn instance was a single HREF link like (u, v).\nThe features were terms from the title and headers\n(<h1>...</h1> etc.)\nof u, together with the text\nin and `near' the anchor (u, v).\nDirectories and\npathnames were also used.\n(We do not know the\nprecise definition of `near', or how these features were\nencoded and combined.)\nThe prediction was a discretized estimate of the\nnumber of relevant nodes reachable by following (u, v),\nwhere the reward from goals distant from v was\ngeometrically discounted by some factor < 1/2 per\nhop.\nRennie and McCallum obtained impressive harvests of\nresearch papers from four Computer Science department\nsites, and of pages about officers and directors from 26\ncompany Websites.\nLexical proximity and contextual features have been\nused extensively in natural language processing for disambiguating\nword sense [15]. Compared to plain text, DOM\ntrees and hyperlinks give us a richer set of potential features.\nAggarwal et al. have proposed an \"intelligent crawling\"\nframework [1] in which only one classifier is used, but similar\nto our system, that classifier trains as the crawl progresses.\nThey do not use our apprentice-critic approach, and do not\nexploit features derived from tag-trees to guide the crawler.\nThe \"intelligent agents\" literature has brought forth\nseveral systems for resource discovery and assistance to\nbrowsing [19].\nThey range between client- and site-level\ntools. Letizia [18], Powerscout, and WebWatcher [17] are\nsuch systems.\nMenczer and Belew proposed InfoSpiders\n[24], a collection of autonomous goal-driven crawlers without\nglobal control or state, in the style of genetic algorithms. A\nrecent extensive study [25] comparing several topic-driven\ncrawlers including the best-first crawler and InfoSpiders\nfound the best-first approach to show the highest harvest\nrate (which our new system outperforms).\nIn all the systems mentioned above, improving the\nchances of a successful \"leap of faith\" will clearly reduce\nthe overheads of fetching, filtering, and analyzing pages.\nFurthermore, whereas we use an automatic first-generation\nfocused crawler to generate the input to train the apprentice,\none can envisage specially instrumented browsers being used\nto monitor users as they seek out information.\nWe distinguish our work from prior art in the following\nimportant ways:\nTwo classifiers:\nWe use two classifiers. The first one is\nused to obtain `enriched' training data for the second one.\n(A breadth-first or random crawl would have a negligible\nfraction of positive instances.) The apprentice is a simplified\nreinforcement learner. 
It improves the harvest rate, thereby\n`enriching' the data collected and labeled by the first learner\nin turn.\nNo manual path collection:\nOur two-classifier framework\nessentially eliminates the manual effort needed to\ncreate reinforcement paths or context graphs. The input\nneeded to start off a focused crawl is just a pre-trained topic\ntaxonomy (easily available from the Web) and a few focus\ntopics.\nOnline training:\nOur apprentice trains continually, acquiring\never-larger vocabularies and improving its accuracy\nas the crawl progresses. This property holds also for the\n\"intelligent crawler\" proposed by Aggarwal et al., but they\nhave a single learner, whose drift is controlled by precise\nrelevance predicates provided by the user.\nNo manual feature tuning:\nRather than tune ad-hoc\nnotions of proximity between text and hyperlinks, we encode\nthe features of link (u, v) using the DOM-tree of u, and\nautomatically learn a robust definition of `nearness' of a\ntextual feature to (u, v).\nIn contrast, Aggarwal et al\nuse many tuned constants combining the strength of text-and\nlink-based predictors, and Rennie et al. use domain\nknowledge to select the paths to goal nodes and the word\nbags that are submitted to their learner.\n150\nMethodology and algorithms\nWe first review the baseline focused crawler and then\ndescribe how the enhanced crawler is set up using the\napprentice-critic mechanism.\n2.1\nThe baseline focused crawler\nThe baseline focused crawler has been described in detail\nelsewhere [9, 14], and has been sketched in Figure 1. Here\nwe review its design and operation briefly.\nThere are two inputs to the baseline crawler.\nA topic taxonomy or hierarchy with example URLs\nfor each topic.\nOne or a few topics in the taxonomy marked as the\ntopic(s) of focus.\nAlthough we will generally use the terms `taxonomy' and\n`hierarchy', a topic tree is not essential; all we really need is\na two-way classifier where the classes have the connotations\nof being `relevant' or `irrelevant' to the topic(s) of focus.\nA topic hierarchy is proposed purely to reduce the tedium\nof defining new focused crawls. With a two-class classifier,\nthe crawl administrator has to seed positive and negative\nexamples for each crawl. Using a taxonomy, she composes\nthe `irrelevant' class as the union of all classes that are not\nrelevant. Thanks to extensive hierarchies like Dmoz in the\npublic domain, it should be quite easy to seed topic-based\ncrawls in this way.\nThe baseline crawler maintains a priority queue on the\nestimated relevance of nodes v which have not been visited,\nand keeps removing the highest priority node and visiting it,\nexpanding its outlinks and checking them into the priority\nqueue with the relevance score of v in turn.\nDespite its\nextreme simplicity, the best-first crawler has been found to\nhave very high harvest rates in extensive evaluations [25].\nWhy do we need negative examples and negative classes\nat all? Instead of using class probabilities, we could maintain\na priority queue on, say, the TFIDF cosine similarity\nbetween u and the centroid of the seed pages (acting as an\nestimate for the corresponding similarity between v and the\ncentroid, until v has been fetched). 
Experience has shown\n[32] that characterizing a negative class is quite important to\nprevent the centroid of the crawled documents from drifting\naway indefinitely from the desired topic profile.\nIn this paper, the baseline crawler also has the implicit\njob of gathering instances of successful and unsuccessful\n\"leaps of faith\" to submit to the apprentice, discussed next.\n2.2\nThe basic structure of the apprentice\nlearner\nIn estimating the worth of traversing the HREF (u, v), we\nwill limit our attention to u alone. The page u is modeled\nas a tag tree (also called the Document Object Model or\nDOM). In principle, any feature from u, even font color and\nsite membership may be perfect predictors of the relevance\nof v. The total number of potentially predictive features will\nbe quite staggering, so we need to simplify the feature space\nand massage it into a form suited to conventional learning\nalgorithms. Also note that we specifically study properties\nof u and not larger contexts such as paths leading to u,\nmeaning that our method may become even more robust and\nuseful in conjunction with context graphs or reinforcement\nalong paths.\nInitially, the apprentice has no training data, and passes\njudgment on (u, v) links according to some fixed prior\nobtained from a baseline crawl run ahead of time (e.g., see\nthe statistics in\n3.3). Ideally, we would like to train the\napprentice continuously, but to reduce overheads, we declare\na batch size between a few hundred and a few thousand\npages. After every batch of pages is collected, we check if any\npage u fetched before the current batch links to some page\nv in the batch. If such a (u, v) is found, we extract suitable\nfeatures for (u, v) as described later in this section, and add\n(u, v), Pr(c\n\n|v) as another instance of the training data for\nthe apprentice. Many apprentices, certainly the simple naive\nBayes and linear perceptrons that we have studied, need not\nstart learning from scratch; they can accept the additional\ntraining data with a small additional computational cost.\n2.2.1\nPreprocessing the DOM tree\nFirst, we parse u and form the DOM tree for u.\nSadly,\nmuch of the HTML available on the Web violates any\nHTML standards that permit context-free parsing, but\na variety of repair heuristics (see, e.g., HTML Tidy,\navailable at http://www.w3.org/People/Raggett/tidy/)\nlet us generate reasonable DOM trees from bad HTML.\na\nHREF\nTEXT\nfont\nTEXT\nli\nli\nli\nul\nli\nTEXT\nTEXT\nem\nTEXT\ntt\nTEXT\nTEXT\n@0\n@0\n@1\n@2\n@3\n@-1\n@-2\nFigure 3: Numbering of DOM leaves used to derive offset\nattributes for textual tokens. `@' means \"is at offset\".\nSecond, we number all leaf nodes consecutively from left\nto right. For uniformity, we assign numbers even to those\nDOM leaves which have no text associated with them. The\nspecific <a href...> which links to v is actually an internal\nnode a\nv\n, which is the root of the subtree containing the\nanchor text of the link (u, v). There may be other element\ntags such as <em> or <b> in the subtree rooted at a\nv\n. Let\nthe leaf or leaves in this subtree be numbered (a\nv\n) through\nr(a\nv\n)\n(a\nv\n). We regard the textual tokens available from\nany of these leaves as being at DOM offset zero w.r.t. the\n(u, v) link. Text tokens from a leaf numbered , to the left of\n(a\nv\n), are at negative DOM offset\n- (a\nv\n). Likewise, text\nfrom a leaf numbered to the right of r(a\nv\n) are at positive\nDOM offset\n- r(a\nv\n). 
See Figure 3 for an example.\n2.2.2\nFeatures derived from the DOM and text\ntokens\nMany related projects mentioned in\n1.2 use a linear notion\nof proximity between a HREF and textual tokens. In the\nARC system, there is a crude cut-off distance measured\n151\nin bytes to the left and right of the anchor.\nIn the\nClever system, distance is measured in tokens, and the\nimportance attached to a token decays with the distance.\nIn reinforcement learning and intelligent predicate-based\ncrawling, the exact specification of neighborhood text is not\nknown to us. In all cases, some ad-hoc tuning appears to be\ninvolved.\nWe claim (and show in\n3.4) that the relation between\nthe relevance of the target v of a HREF (u, v) and the\nproximity of terms to (u, v) can be learnt automatically. The\nresults are better than ad-hoc tuning of cut-off distances,\nprovided the DOM offset information is encoded as features\nsuitable for the apprentice.\nOne obvious idea is to extend the Clever model: a page\nis a linear sequence of tokens. If a token t is distant x from\nthe HREF (u, v) in question, we encode it as a feature t, x .\nSuch features will not be useful because there are too many\npossible values of x, making the t, x space too sparse to\nlearn well. (How many HREFS will be exactly five tokens\nfrom the term `basketball' ?)\nClearly, we need to bucket x into a small number of\nranges. Rather than tune arbitrary bucket boundaries by\nhand, we argue that DOM offsets are a natural bucketing\nscheme provided by the page author.\nUsing the node\nnumbering scheme described above, each token t on page u\ncan be annotated w.r.t. the link (u, v) (for simplicity assume\nthere is only one such link) as t, d , where d is the DOM\noffset calculated above.\nThis is the main set of features\nused by the apprentice. We shall see that the apprentice\ncan learn to limit\n|d| to less than d\nmax\n= 5 in most cases,\nwhich reduces its vocabulary and saves time.\nA variety of other feature encodings suggest themselves.\nWe are experimenting with some in ongoing work (\n4),\nbut decided against some others. For example, we do not\nexpect gains from encoding specific HTML tag names owing\nto the diversity of authoring styles.\nAuthors use <div>,\n<span>, <layer> and nested tables for layout control in\nnon-standard ways; these are best deflated to a nameless\nDOM node representation.\nSimilar comments apply to\nHREF collections embedded in <ul>, <ol>, <td> and\n<dd>.\nFont and lower/upper case information is useful\nfor search engines, but would make features even sparser\nfor the apprentice.\nOur representation also flattens two-dimensional\ntables to their \"row-major\" representation.\nThe features we ignore are definitely crucial for other\napplications, such as information extraction. We did not\nsee any cases where this sloppiness led to a large loss rate.\nWe would be surprised to see tables where relevant links\noccurred in the third column and irrelevant links in the fifth,\nor pages where they are rendered systematically in different\nfonts and colors, but are not otherwise demarcated by the\nDOM structure.\n2.2.3\nNon-textual features\nLimiting d may lead us to miss features of u that may be\nuseful at the whole-page level. One approach would be to use\n\"d =\n\" for all d larger in magnitude than some threshold.\nBut this would make our apprentice as bulky and slow to\ntrain as the baseline learner.\nInstead, we use the baseline learner to abstract u for\nthe apprentice. 
Specifically, we use a naive Bayes baseline\nlearner to classify u, and use the vector of class probabilities\nreturned as features for the apprentice. These features can\nhelp the apprentice discover patterns such as\n\"Pages about /Recreation/Boating/Sailing often\nlink to pages about /Sports/Canoe_and_Kayaking.\"\nThis also covers for the baseline classifier confusing between\nclasses with related vocabulary, achieving an effect similar\nto context graphs.\nAnother kind of feature can be derived from co-citation.\nIf v\n1\nhas been fetched and found to be relevant and HREFS\n(u, v\n1\n) and (u, v\n2\n) are close to each other, v\n2\nis likely to\nbe relevant. Just like textual tokens were encoded as t, d\npairs, we can represent co-citation features as , d , where\nis a suitable representation of relevance.\nMany other features can be derived from the DOM tree\nand added to our feature pool. We discuss some options\nin\n4. In our experience so far, we have found the t, d\nfeatures to be most useful. For simplicity, we will limit our\nsubsequent discussion to t, d features only.\n2.3\nChoices of learning algorithms for the\napprentice\nOur feature set is thus an interesting mix of categorical,\nordered and continuous features:\nTerm tokens t, d have a categorical component t and\na discrete ordered component d (which we may like to\nsmooth somewhat). Term counts are discrete but can\nbe normalized to constant document length, resulting\nin continuous attribute values.\nClass names are discrete and may be regarded as\nsynthetic terms. The probabilities are continuous.\nThe output we desire is an estimate of Pr(c\n\n|v), given all the\nobservations about u and the neighborhood of (u, v) that\nwe have discussed. Neural networks are a natural choice\nto accommodate these requirements. We first experimented\nwith a simple linear perceptron, training it with the delta\nrule (gradient descent) [26]. Even for a linear perceptron,\nconvergence was surprisingly slow, and after convergence,\nthe error rate was rather high.\nIt is likely that local\noptima were responsible, because stability was generally\npoor, and got worse if we tried to add hidden layers or\nsigmoids.\nIn any case, convergence was too slow for use\nas an online learner. All this was unfortunate, because the\ndirect regression output from a neural network would be\nconvenient, and we were hoping to implement a Kohonen\nlayer for smoothing d.\nIn contrast, a naive Bayes (NB) classifier worked very\nwell. A NB learner is given a set of training documents,\neach labeled with one of a finite set of classes/topic.\nA\ndocument or Web page u is modeled as a multiset or bag\nof words,\n{ , n(u, ) } where is a feature which occurs\nn(u, ) times in u. In ordinary text classification (such as\nour baseline learner) the features are usually single words.\nFor our apprentice learner, a feature is a t, d pair.\nNB classifiers can predict from a discrete set of classes,\nbut our prediction is a continuous (probability) score. To\nbridge this gap, We used a simple two-bucket (low/high\nrelevance) special case of Torgo and Gama's technique of\nusing classifiers for discrete labels for continuous regression\n[33], using \"equally probable intervals\" as far as possible.\n152\nTorgo and Gama recommend using a measure of centrality,\nsuch as the median, of each interval as the predicted value of\nthat class. Rennie and McCallum [30] corroborate that 23\nbins are adequate. 
As will be clear from our experiments, the medians of our `low' and `high' classes are very close to zero and one respectively (see Figure 5). Therefore, we simply take the probability of the `high' class as the prediction from our naive Bayes apprentice.
The prior probability of class c, denoted Pr(c), is the fraction of training documents labeled with class c. The NB model is parameterized by a set of numbers $\theta_{c,\tau}$, roughly the rate of occurrence of feature $\tau$ in class c; more exactly,
$\theta_{c,\tau} = \dfrac{1 + \sum_{u \in V_c} n(u, \tau)}{|T| + \sum_{u \in V_c} \sum_{\tau'} n(u, \tau')}$,   (1)
where $V_c$ is the set of Web pages labeled with c and T is the entire vocabulary. The NB learner assumes independence between features, and estimates
$\Pr(c|u) \propto \Pr(c)\Pr(u|c) \propto \Pr(c) \prod_{\tau \in u} \theta_{c,\tau}^{\,n(u,\tau)}$.   (2)
Nigam et al. provide further details [22].
Experimental study
Our experiments were guided by the following requirements. We wanted to cover a broad variety of topics, some `easy' and some `difficult' in terms of the harvest rate of the baseline crawler. Here is a quick preview of our results.
The apprentice classifier achieves high accuracy in predicting the relevance of unseen pages given <t, d> features. It can determine the best value of d_max to use, typically 4-6.
Encoding DOM offsets in features improves the accuracy of the apprentice substantially, compared to a bag of ordinary words collected from within the same DOM offset window.
Compared to a baseline crawler, a crawler that is guided by an apprentice (trained offline) has a 30% to 90% lower loss rate. It finds crawl paths never expanded by the baseline crawler.
Even if the apprentice-guided crawler is forced to stay within the (inferior) Web graph collected by the baseline crawler, it collects the best pages early on.
The apprentice is easy to train online. As soon as it starts guiding the crawl, loss rates fall dramatically.
Compared to <t, d> features, topic- or cocitation-based features have a negligible effect on the apprentice.
To run so many experiments, we needed three highly optimized and robust modules: a crawler, an HTML-to-DOM converter, and a classifier.
We started with the w3c-libwww crawling library from http://www.w3c.org/Library/, but replaced it with our own crawler because we could effectively overlap DNS lookup, HTTP access, and disk access using a select over all socket/file descriptors, and prevent memory leaks visible in w3c-libwww. With three caching DNS servers, we could achieve over 90% utilization of a 2Mbps dedicated ISP connection.
We used the libxml2 HTML parser library to extract the DOM from HTML, but this library has memory leaks and does not always handle poorly written HTML well. We had some stability problems with HTML Tidy (http://www.w3.org/People/Raggett/tidy/), the well-known HTML cleaner which is very robust to bad HTML. At present we are using libxml2 and are rolling our own HTML parser and cleaner for future work.
We intend to make our crawler and HTML parser code available in the public domain for research use.
For both the baseline and apprentice classifiers we used the public domain BOW toolkit and the Rainbow naive Bayes classifier created by McCallum and others [20].
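For concreteness, here is a minimal Python sketch (ours, not code from the Bow/Rainbow toolkit) of the apprentice's naive Bayes scoring following equations (1) and (2) over <t, d> features; the class labels and data layout are illustrative assumptions.

import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (label, Counter mapping (term, dom_offset) -> count)."""
    prior_counts, term_counts, class_totals = Counter(), defaultdict(Counter), Counter()
    vocab = set()
    for label, feats in docs:
        prior_counts[label] += 1
        for f, n in feats.items():
            term_counts[label][f] += n
            class_totals[label] += n
            vocab.add(f)
    return prior_counts, term_counts, class_totals, vocab

def log_posterior(feats, label, model):
    """Unnormalized log Pr(label | doc) per equations (1) and (2)."""
    prior_counts, term_counts, class_totals, vocab = model
    logp = math.log(prior_counts[label] / sum(prior_counts.values()))
    for f, n in feats.items():
        theta = (1 + term_counts[label][f]) / (len(vocab) + class_totals[label])  # eq. (1)
        logp += n * math.log(theta)                                               # eq. (2)
    return logp

def predict_high_prob(feats, model):
    """Probability of the `high' relevance class, used as the link priority."""
    scores = {c: log_posterior(feats, c, model) for c in model[0]}
    m = max(scores.values())
    exp = {c: math.exp(s - m) for c, s in scores.items()}
    return exp.get("high", 0.0) / sum(exp.values())

# Example <t, d> features: term "sailing" at DOM offset -1, "kayak" at offset 0, etc.
docs = [("high", Counter({("sailing", -1): 2, ("kayak", 0): 1})),
        ("low",  Counter({("casino", 1): 3}))]
model = train_nb(docs)
print(predict_high_prob(Counter({("kayak", 0): 1}), model))

With only two relevance buckets, predict_high_prob plays the role of the priority that the apprentice hands back to the crawler.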
Bow\nand Rainbow are very fast C implementations which let us\nclassify pages in real time as they were being crawled.\n3.1\nDesign of the topic taxonomy\nWe downloaded from the Open Directory (http://dmoz.\norg/) an RDF file with over 271954 topics arranged in a\ntree hierarchy with depth at least 6, containing a total of\nabout 1697266 sample URLs. The distribution of samples\nover topics was quite non-uniform. Interpreting the tree as\nan is-a hierarchy meant that internal nodes inherited all\nexamples from descendants, but they also had their own\nexamples. Since the set of topics was very large and many\ntopics had scarce training data, we pruned the Dmoz tree\nto a manageable frontier by following these steps:\n1. Initially we placed example URLs in both internal and\nleaf nodes, as given by Dmoz.\n2. We fixed a minimum per-class training set size of k =\n300 documents.\n3. We iteratively performed the following step as long\nas possible: we found a leaf node with less than k\nexample URLs, moved all its examples to its parent,\nand deleted the leaf.\n4. To\neach\ninternal\nnode\nc,\nwe\nattached\na\nleaf\nsubdirectory called Other.\nExamples associated\ndirectly with c were moved to this Other subdirectory.\n5. Some topics were populated out of proportion, either\nat the beginning or through the above process. We\nmade the class priors more balanced by sampling\ndown the large classes so that each class had at most\n300 examples.\nThe resulting taxonomy had 482 leaf nodes and a total\nof 144859 sample URLs. Out of these we could successfully\nfetch about 120000 URLs. At this point we discarded the\ntree structure and considered only the leaf topics. Training\ntime for the baseline classifier was about about two hours\non a 729MHz Pentium III with 256kB cache and 512MB\nRAM. This was very fast, given that 1.4GB of HTML text\nhad to be processed through Rainbow. The complete listing\nof topics can be obtained from the authors.\n3.2\nChoice of topics\nDepending on the focus topic and prioritization strategy,\nfocused crawlers may achieve diverse harvest rates.\nOur\n153\nearly prototype [9] yielded harvest rates typically between\n0.25 and 0.6.\nRennie and McCallum [30] reported recall\nand not harvest rates. Diligenti et al. [14] focused on very\nspecific topics where the harvest rate was very low, 46%.\nObviously, the maximum gains shown by a new idea in\nfocused crawling can be sensitive to the baseline harvest\nrate.\nTo avoid showing our new system in an unduly positive\nor negative light, we picked a set of topics which were fairly\ndiverse, and appeared to be neither too broad to be useful\n(e.g., /Arts, /Science) nor too narrow for the baseline\ncrawler to be a reasonable adversary.\nWe list our topics\nin Figure 4. We chose the topics without prior estimates of\nhow well our new system would work, and froze the list\nof topics.\nAll topics that we experimented with showed\nvisible improvements, and none of them showed deteriorated\nperformance.\n3.3\nBaseline crawl results\nWe will skip the results of breadth-first or random crawling\nin our commentary, because it is known from earlier work\non focused crawling that our baseline crawls are already\nfar better than breadth-first or random crawls. Figure 5\nshows, for most of the topics listed above, the distribution\nof page relevance after running the baseline crawler to\ncollect roughly 15000 to 25000 pages per topic.\nThe\nbaseline crawler used a standard naive Bayes classifier on\nthe ordinary term space of whole pages. 
We see that the\nrelevance distribution is bimodal, with most pages being\nvery relevant or not at all. This is partly, but only partly, a\nresult of using a multinomial naive Bayes model. The naive\nBayes classifier assumes term independence and multiplies\ntogether many (small) term probabilities, with the result\nthat the winning class usually beats all others by a large\nmargin in probability. But it is also true that many outlinks\nlead to pages with completely irrelevant topics. Figure 5\ngives a clear indication of how much improvement we can\nexpect for each topic from our new algorithm.\n3.4\nDOM window size and feature selection\nA key concern for us was how to limit the maximum window\nwidth so that the total number of synthesized t, d features\nremains much smaller than the training data for the baseline\nclassifier, enabling the apprentice to be trained or upgraded\nin a very short time. At the same time, we did not want\nto lose out on medium- to long-range dependencies between\nsignificant tokens on a page and the topic of HREF targets\nin the vicinity. We eventually settled for a maximum DOM\nwindow size of 5. We made this choice through the following\nexperiments.\nThe easiest initial approach was an end-to-end cross-validation\nof the apprentice for various topics while\nincreasing d\nmax\n.\nWe observed an initial increase in the\nvalidation accuracy when the DOM window size was\nincreased beyond 0.\nHowever, the early increase leveled\noff or even reversed after the DOM window size was\nincreased beyond 5. The graphs in Figure 6 display these\nresults.\nWe see that in the Chess category, though the\nvalidation accuracy increases monotonically, the gains are\nless pronounced after d\nmax\nexceeds 5. For the AI category,\naccuracy fell beyond d\nmax\n= 4.\nTopic\n#Good #Bad\n/Arts/Music/Styles/Classical/Composers\n24000 13000\n/Arts/Performing_Arts/Dance/Folk_Dancing\n7410\n8300\n/Business/Industries.../Livestock/Horses...\n17000\n7600\n/Computers/Artificial_Intelligence\n7701 14309\n/Computers/Software/Operating_Systems/Linux\n17500\n9300\n/Games/Board_Games/C/Chess\n17000\n4600\n/Health/Conditions_and_Diseases/Cancer\n14700\n5300\n/Home/Recipes/Soups_and_Stews\n20000\n3600\n/Recreation/Outdoors/Fishing/Fly_Fishing\n12000 13300\n/Recreation/Outdoors/Speleology\n6717 14890\n/Science/Astronomy\n14961\n5332\n/Science/Earth_Sciences/Meteorology\n19205\n8705\n/Sports/Basketball\n26700\n2588\n/Sports/Canoe_and_Kayaking\n12000 12700\n/Sports/Hockey/Ice_Hockey\n17500 17900\nFigure 4: We chose a variety of topics which were neither\ntoo broad nor too narrow, so that the baseline crawler\nwas a reasonable adversary.\n#Good (#Bad) show the\napproximate number of pages collected by the baseline\ncrawler which have relevance above (below) 0.5, which\nindicates the relative difficulty of the crawling task.\n0\n0\n.\n2\n0\n.\n4\n0\n.\n6\n0\n.\n8\n1\nAI\nAstronomy\nBasketball\nCancer\nChess\nComposers\nFlyFishing\nFolkDance\nHorses\nIceHockey\nKayaking\nLinux\nMeteorology\nSoups\nTobacco\n10\n100\n1000\n10000\n100000\nExpected #pages\nRelevance probability\nFigure 5: All of the baseline classifiers have harvest rates\nbetween 0.25 and 0.6, and all show strongly bimodal\nrelevance score distribution: most of the pages fetched are\nvery relevant or not at all.\nIt is important to notice that the improvement in\naccuracy is almost entirely because with increasing number\nof available features, the apprentice can reject negative\n(low relevance) instances more accurately, although 
the accuracy for positive instances decreases slightly. Rejecting unpromising outlinks is critical to the success of the enhanced crawler, so we would rather lose a little accuracy on positive instances than do poorly on the negative instances. We therefore chose d_max to be either 4 or 5 for all the experiments.
We verified that adding offset information to text tokens was better than simply using plain text near the link [8]. One sample result is shown in Figure 7: the apprentice accuracy decreases with d_max if only text is used, whereas it increases if offset information is provided. This highlights the importance of designing proper features.
Figure 6 (accuracy vs. d_max for the Chess and AI categories, with separate curves for negative, positive, and average accuracy): There is visible improvement in the accuracy of the apprentice if d_max is made larger, up to about 5-7 depending on topic. The effect is more pronounced on the ability to correctly reject negative (low relevance) outlink instances. `Average' is the microaverage over all test instances for the apprentice, not the arithmetic mean of `Positive' and `Negative'.
Figure 7 (accuracy vs. d_max for the AI category, text-only vs. text-with-offset features): Encoding DOM offset information with textual features boosts the accuracy of the apprentice substantially.
To corroborate the useful ranges of d_max above, we compared the average mutual information gain of terms found at various distances from the target HREF. The experiments revealed that the information gain of terms found further away from the target HREF was generally lower than that of terms found closer, but this reduction was not monotonic. For instance, the average information gain at d = -2 was higher than that at d = -1; see Figure 8. For each DOM window size, we observe that the information gain varies in a sawtooth fashion; this intriguing observation is explained shortly. The average information gain settled to an almost constant value beyond a distance of 5 from the target URL. We were initially concerned that, to keep the computation cost manageable, we would need some cap on d_max even while measuring information gain, but luckily the variation of information gain is insensitive to d_max, as Figure 8 shows. These observations made our final choice of d_max easy.
Figure 8 (information gain vs. offset d for the Chess and AI categories, for d_max = 3, 4, 5, 8): Information gain variation plotted against distance from the target HREF for various DOM window sizes. We observe that the information gain is insensitive to d_max.
In a bid to explain the unexpected sawtooth form in Figure 8, we measured the rate theta_{t,d} at which term t occurred at offset d, relative to the total count of all terms occurring at offset d. (These are roughly the multinomial naive Bayes term probability parameters.) For fixed values of d, we calculated the sum of theta values of terms found at those offsets from the target HREF.
Figure 9(a) shows the plot of these sums against the distance d for various categories. The values showed a general decrease as the distance from the target HREF increased, but this decrease, like that of the information gain, was not monotonic. The values for terms at odd-numbered distances from the target HREF were found to be lower than those for terms at even positions. For instance, the sum of theta values of terms occurring at distance -2 was higher than that of terms at position -1. This observation is explained by the HTML tags present at various distances from the target HREF. We observed that tags located at odd d are mostly non-text tags, thanks to authoring idioms such as <li><a...><li><a...> and <a...><br><a...><br> etc. A plot of the frequency of HTML tags against the distance from the HREF at which they were found is shown in Figure 9(b). (The <a...> tag obviously has the highest frequency and has been removed for clarity.)
Figure 9 (panel (a): theta_{t,d} vs. offset d for several topics; panel (b): number of occurrences of tags such as font, td, img, b, br, p, tr, li vs. offset d): Variation of (a) relative term frequencies and (b) frequencies of HTML tags plotted against d.
These were important DOM idioms, spanning many diverse Web sites and authoring styles, that we did not anticipate ahead of time. Learning to recognize these idioms was valuable for boosting the harvest of the enhanced crawler. Yet, it would be unreasonable for the user-supplied baseline black-box predicate or learner to capture crawling strategies at such a low level. This is the ideal job of the apprentice. The apprentice took only 3-10 minutes to train on its (u, v) instances from scratch, despite a simple implementation that wrote a small file to disk for each instance. Contrast this with the several hours taken by the baseline learner to learn general term distributions for topics.
3.5 Crawling with the apprentice trained off-line
In this section we subject the apprentice to a "field test" as part of the crawler, as shown in Figure 2. To do this we follow these steps:
1. Fix a topic and start the baseline crawler from all example URLs available for the given topic.
2. Run the baseline crawler until roughly 20000-25000 pages have been fetched.
3. For all pages (u, v) such that both u and v have been fetched by the baseline crawler, prepare an instance from (u, v) and add it to the training set of the apprentice.
4. Train the apprentice. Set a suitable value for d_max.
5. Start the enhanced crawler from the same set of pages that the baseline crawler started from.
6. Run the enhanced crawler to fetch about the same number of pages as the baseline crawler.
7.
Compare the loss rates of the two crawlers.
Unlike with the reinforcement learner studied by Rennie and McCallum, we have no predetermined universe of URLs which constitute the relevant set; our crawler must go forth into the open Web and collect relevant pages from an unspecified number of sites. Therefore, measuring recall w.r.t. the baseline is not very meaningful (although we do report such numbers, for completeness, in Section 3.6). Instead, we measure the loss (the number of pages fetched which had to be thrown away owing to poor relevance) at various epochs in the crawl, where time is measured as the number of pages fetched (to elide fluctuating network delay and bandwidth). At epoch n, if the pages fetched are v_1, ..., v_n, then the total expected loss is $(1/n)\sum_i (1 - \Pr(c^*|v_i))$.
Figure 10 shows the loss plotted against the number of pages crawled for two topics: Folk dancing and Ice hockey. The behavior for Folk dancing is typical; Ice hockey is one of the best examples. In both cases, the loss goes up substantially faster with each crawled page for the baseline crawler than for the enhanced crawler. The reduction of loss for these topics is 40% and 90% respectively; typically, this number is between 30% and 60%. In other words, for most topics, the apprentice reduces the number of useless pages fetched by one-third to two-thirds.
Figure 10 (expected #pages lost vs. #pages fetched for Folk Dancing and Ice Hockey, baseline vs. apprentice): Guidance from the apprentice significantly reduces the loss rate of the focused crawler.
In a sense, comparing loss rates is the most meaningful evaluation in our setting, because the network cost of fetching relevant pages has to be paid anyway, and can be regarded as a fixed cost. Diligenti et al. show significant improvements in harvest rate, but for their topics, the loss rates for both the baseline crawler and the context-focused crawler were much higher than ours.
3.6 URL overlap and recall
The reader may feel that the apprentice crawler has an unfair advantage because it is first trained on DOM-derived features from the same set of pages that it has to crawl again. We claim that the set of pages visited by the baseline crawler and the (off-line trained) enhanced crawler have small overlap, and the superior results for the crawler guided by the apprentice are in large part because of generalizable learning. This can be seen from the examples in Figure 11.

Topic        Baseline   Apprentice   Intersect
Basketball     27220      26280        2431    (49% / 47% / 4%)
FolkDance      14011       8168        2199    (57% / 34% / 9%)
IceHockey      34121      22496        1657    (58% / 39% / 3%)
FlyFishing     19252      14319        6834    (48% / 35% / 17%)

Figure 11: The apprentice-guided crawler follows paths which are quite different from the baseline crawler because of its superior priority estimation technique. As a result there is little overlap between the URLs harvested by these two crawlers.
Given that the overlap between the baseline and the enhanced crawlers is small, which is `better'? As per the verdict of the baseline classifier, clearly the enhanced crawler is better. Even so, we report the loss rate of a different version of the enhanced crawler which is restricted to visiting only those pages which were visited by the baseline learner. We call this crawler the recall crawler. This means that in the end, both crawlers have collected exactly the same set of pages, and therefore have the same total loss.
The test then is how long the enhanced learner can prevent the loss from approaching the baseline loss. These experiments are a rough analog of the `recall' experiments done by Rennie and McCallum. We note that for these recall experiments, the apprentice does get the benefit of not having to generalize, so the gap between baseline loss and recall loss could be optimistic. Figure 12 compares the expected total loss of the baseline crawler, the recall crawler, and the apprentice-guided crawler (which is free to wander outside the baseline collection) plotted against the number of pages fetched, for a few topics. As expected, the recall crawler has loss generally somewhere between the loss of the baseline and the enhanced crawler.

[Figure 12: Recall for a crawler using the apprentice but limited to the set of pages crawled earlier by the baseline crawler. Expected #pages lost vs. #pages fetched for Ice Hockey and Kayaking, with curves for the Baseline, Recall and Apprentice crawlers.]

3.7 Effect of training the apprentice online

Next we observe the effect of a mid-flight correction when the apprentice is trained some way into a baseline crawl and switched into the circuit. The precise steps were:

1. Run the baseline crawler for the first n page fetches, then stop it.
2. Prepare instances and train the apprentice.
3. Re-evaluate the priorities of all unvisited pages v in the frontier table using the apprentice.
4. Switch in the apprentice and resume an enhanced crawl.

We report our experience with "Folk Dancing." The baseline crawl was stopped after 5200 pages were fetched. Re-evaluating the priority of frontier nodes led to radical changes in their individual ranks as well as in the priority distributions. As shown in Figure 13(a), the baseline learner is overly optimistic about the yield it expects from the frontier, whereas the apprentice already abandons a large fraction of frontier outlinks, and is less optimistic about the others, which appears more accurate from the Bayesian perspective.

[Figure 13: The effect of online training of the apprentice, for Folk Dancing. (a) The apprentice makes sweeping changes in the estimated promise of unvisited nodes in the crawl frontier (frequency of outlinks vs. estimated relevance, Baseline vs. Apprentice). (b) Resuming the crawl under the guidance of the apprentice immediately shows a significant reduction in the loss accumulation rate (expected loss vs. #pages crawled, annotated with the "collect instances", "train apprentice" and "apprentice guides crawl" phases).]

Figure 13(b) shows the effect of resuming an enhanced crawl guided by the trained apprentice. The new (u, v) instances are all guaranteed to be unknown to the apprentice now. It is clear that the apprentice's prioritization immediately starts reducing the loss rate. Figure 14 shows an even more impressive example. There are additional mild gains from retraining the apprentice at later points. It may be possible to show a more gradual online learning effect by retraining the classifier at a finer interval, e.g., every 100 page fetches, similar to Aggarwal et al.
In our context, however, losing a thousand pages at the outset because of the baseline crawler's limitation is not a disaster, so we need not bother.

3.8 Effect of other features

We experimented with two other kinds of features, which we call topic and cocitation features.

Our limiting d_max to 5 may deprive the apprentice of important features in the source page u which are far from the link (u, v). One indirect way to reveal such features to the apprentice is to classify u, and to add the names of some of the top-scoring classes for u to the instance (u, v). Section 2.2.3 explains why this may help. This modification resulted in a 1% increase in the accuracy of the apprentice. A further increase of 1% was observed if we added all prefixes of the class name. For example, the full name for the Linux category is /Computers/Software/Operating_Systems/Linux. We added all of the following to the feature set of the source page: /, /Computers, /Computers/Software, /Computers/Software/Operating_Systems and /Computers/Software/Operating_Systems/Linux. We also noted that various class names and some of their prefixes appeared amongst the best discriminants of the positive and negative classes.

[Figure 14: Another example of training the apprentice online followed by starting to use it for crawl guidance (Classical Composers; cumulative expected loss vs. #pages fetched, annotated with the "collect instances", "train apprentice" and "apprentice guides crawl" phases). Before guidance, the loss accumulation rate is over 30%; after, it drops to only 6%.]

Cocitation features for the link (u, v) are constructed by looking for other links (u, w) within a DOM distance of d_max such that w has already been fetched, so that Pr(c*|w) is known. We discretize Pr(c*|w) to two values, high and low, as in Section 2.3, and encode the feature as ⟨low, d⟩ or ⟨high, d⟩. The use of cocitation features did not improve the accuracy of the apprentice to any appreciable extent.

For both kinds of features, we estimated that random variations in crawling behavior (because of fluctuating network load and tie-breaking frontier scores) may prevent us from measuring an actual benefit to crawling under realistic operating conditions. We note that these ideas may be useful in other settings.

Conclusion

We have presented a simple enhancement to a focused crawler that helps assign better priorities to the unvisited URLs in the crawl frontier. This leads to a higher rate of fetching pages relevant to the focus topic and fewer false positives which must be discarded after spending network, CPU and storage resources processing them. There is no need to manually train the system with paths leading to relevant pages. The key idea is an apprentice learner which can accurately predict the worth of fetching a page using DOM features on pages that link to it. We show that the DOM features we use are superior to simpler alternatives. Using topics from Dmoz, we show that our new system can cut down the fraction of false positives by 30-90%.

We are exploring several directions in ongoing work. We wish to revisit continuous regression techniques for the apprentice, as well as more extensive features derived from the DOM.
For example, we can associate with a token t the\nlength\nof the DOM path from the text node containing t to\n158\nthe HREF to v, or the depth of their least common ancestor\nin the DOM tree. We cannot use these in lieu of DOM offset,\nbecause regions which are far apart lexically may be close\nto each other along a DOM path.\nt, , d features will be\nmore numerous and sparser than t, d features, and could\nbe harder to learn. The introduction of large numbers of\nstrongly dependent features may even reduce the accuracy\nof the apprentice. Finally, we wish to implement some form\nof active learning where only those instances (u, v) with the\nlargest\n| Pr(c\n\n|u) - Pr(c\n\n|v)| are chosen as training instances\nfor the apprentice.\nAcknowledgments:\nThanks to the referees for suggesting\nthat we present Figure 7.\nReferences\n[1] C. C. Aggarwal, F. Al-Garawi, and P. S. Yu.\nIntelligent\ncrawling on the World Wide Web with arbitrary predicates. In\nWWW2001, Hong Kong, May 2001. ACM.\nOnline at http:\n//www10.org/cdrom/papers/110/.\n[2] C. Apte, F. Damerau, and S. M. Weiss.\nAutomated learning\nof decision rules for text categorization. ACM Transactions on\nInformation Systems, 1994. IBM Research Report RC18879.\n[3] A. Blum and T. M. Mitchell. Combining labeled and unlabeled\ndata with co-training.\nIn Computational Learning Theory,\npages 92100, 1998.\n[4] S. Chakrabarti.\nIntegrating the document object model with\nhyperlinks for enhanced topic distillation and information\nextraction.\nIn WWW 10, Hong Kong, May 2001.\nOnline at\nhttp://www10.org/cdrom/papers/489.\n[5] S. Chakrabarti,\nB. Dom,\nR. Agrawal,\nand P. Raghavan.\nScalable feature selection, classification and signature generation\nfor organizing large text databases into hierarchical topic\ntaxonomies.\nVLDB Journal, Aug. 1998.\nOnline at http:\n//www.cs.berkeley.edu/~soumen/VLDB54_3.PDF.\n[6] S. Chakrabarti, B. Dom, D. Gibson, J. Kleinberg, P. Raghavan,\nand S. Rajagopalan.\nAutomatic resource compilation by\nanalyzing hyperlink structure and associated text. In 7th Worldwide\nweb conference (WWW7), 1998. Online at http://www7.\nscu.edu.au/programme/fullpapers/1898/com1898.html.\n[7] S. Chakrabarti, B. Dom, and P. Indyk.\nEnhanced hypertext\ncategorization using hyperlinks. In SIGMOD Conference. ACM,\n1998. Online at http://www.cs.berkeley.edu/~soumen/sigmod98.\nps.\n[8] S.\nChakrabarti,\nB.\nE.\nDom,\nD.\nA.\nGibson,\nR.\nKumar,\nP. Raghavan, S. Rajagopalan, and A. Tomkins. Topic distillation\nand spectral filtering.\nArtificial Intelligence Review, 13(5\n6):409435, 1999.\n[9] S. Chakrabarti, M. van den Berg, and B. Dom.\nFocused\ncrawling:\na new approach to topic-specific web resource\ndiscovery.\nComputer Networks, 31:16231640, 1999.\nFirst\nappeared in the 8th International World Wide Web Conference,\nToronto,\nMay\n1999.\nAvailable\nonline\nat\nhttp://www8.org/\nw8-papers/5a-search-query/crawling/index.html.\n[10] J. Cho, H. Garcia-Molina, and L. Page.\nEfficient crawling\nthrough URL ordering. In 7th World Wide Web Conference,\nBrisbane, Australia, Apr. 1998. Online at http://www7.scu.edu.\nau/programme/fullpapers/1919/com1919.htm.\n[11] B. D. Davison.\nTopical locality in the Web.\nIn Proceedings\nof the 23rd Annual International Conference on Research and\nDevelopment in Information Retrieval (SIGIR 2000), pages\n272279, Athens, Greece, July 2000. ACM. Online at http://\nwww.cs.rutgers.edu/~davison/pubs/2000/sigir/.\n[12] P. M. E. De Bra and R. D. J. 
Post.\nInformation retrieval\nin the world-wide web: Making client-based searching feasible.\nIn Proceedings of the First International World Wide Web\nConference, Geneva, Switzerland, 1994. Online at http://www1.\ncern.ch/PapersWWW94/reinpost.ps.\n[13] P. M. E. De Bra and R. D. J. Post.\nSearching for arbitrary\ninformation in the WWW: The fish search for Mosaic. In Second\nWorld Wide Web Conference '94:\nMosaic and the Web,\nChicago, Oct. 1994.\nOnline at http://archive.ncsa.uiuc.edu/\nSDG/IT94/Proceedings/Searching/debra/article.html and http:\n//citeseer.nj.nec.com/172936.html.\n[14] M. Diligenti, F. Coetzee, S. Lawrence, C. L. Giles, and M. Gori.\nFocused crawling using context graphs. In A. E. Abbadi, M. L.\nBrodie, S. Chakravarthy, U. Dayal, N. Kamel, G. Schlageter,\nand\nK.-Y.\nWhang,\neditors,\nVLDB\n2000,\nProceedings\nof\n26th International Conference on Very Large Data Bases,\nSeptember 10-14, 2000, Cairo, Egypt, pages 527534. Morgan\nKaufmann, 2000. Online at http://www.neci.nec.com/~lawrence/\npapers/focus-vldb00/focus-vldb00.pdf.\n[15] W. A. Gale, K. W. Church, and D. Yarowsky. A method for\ndisambiguating word senses in a large corpus. Computer and\nthe Humanities, 26:415439, 1993.\n[16] M. Hersovici, M. Jacovi, Y. S. Maarek, D. Pelleg, M. Shtalhaim,\nand S. Ur. The shark-search algorithm--an application: Tailored\nWeb site mapping. In WWW7, 1998. Online at http://www7.scu.\nedu.au/programme/fullpapers/1849/com1849.htm.\n[17] T. Joachims, D. Freitag, and T. Mitchell. WebWatcher: A tour\nguide for the web. In IJCAI, Aug. 1997. Online at http://www.\ncs.cmu.edu/~webwatcher/ijcai97.ps.\n[18] H. Leiberman. Letizia: An agent that assists Web browsing. In\nInternational Joint Conference on Artificial Intelligence (IJCAI\n), Montreal, Aug. 1995. See Website at http://lieber.www.\nmedia.mit.edu/people/lieber/Lieberary/Letizia/Letizia.html.\n[19] H. Leiberman, C. Fry, and L. Weitzman.\nExploring the Web\nwith reconnaissance agents.\nCACM, 44(8):6975, Aug. 2001.\nhttp://www.acm.org/cacm.\n[20] A. McCallum. Bow: A toolkit for statistical language modeling,\ntext retrieval, classification and clustering. Software available\nfrom http://www.cs.cmu.edu/~mccallum/bow/, 1998.\n[21] A. McCallum and K. Nigam. A comparison of event models for\nnaive Bayes text classification. In AAAI/ICML-98 Workshop\non Learning for Text Categorization, pages 4148. AAAI Press,\n1998. Online at http://www.cs.cmu.edu/~knigam/.\n[22] A. McCallum and K. Nigam. A comparison of event models for\nnaive Bayes text classification. In AAAI/ICML-98 Workshop\non Learning for Text Categorization, pages 4148. AAAI Press,\n1998.\nAlso technical report WS-98-05, CMU; online at http:\n//www.cs.cmu.edu/~knigam/papers/multinomial-aaaiws98.pdf.\n[23] F.\nMenczer.\nLinks\ntell\nus\nabout\nlexical\nand\nsemantic\nWeb content.\nTechnical Report Computer Science Abstract\nCS.IR/0108004, arXiv.org, Aug. 2001. Online at http://arxiv.\norg/abs/cs.IR/0108004.\n[24] F. Menczer and R. K. Belew.\nAdaptive retrieval agents:\nInternalizing local context and scaling up to the Web. Machine\nLearning, 39(2/3):203242, 2000.\nLonger version available as\nTechnical Report CS98-579, http://dollar.biz.uiowa.edu/~fil/\nPapers/MLJ.ps, University of California, San Diego.\n[25] F. Menczer, G. Pant, M. Ruiz, and P. Srinivasan. Evaluating\ntopic-driven Web crawlers. In SIGIR, New Orleans, Sept. 2001.\nACM.\nOnline at http://dollar.biz.uiowa.edu/~fil/Papers/\nsigir-01.pdf.\n[26] T. Mitchell. Machine Learning. McGraw Hill, 1997.\n[27] T. Mitchell. 
Mining the Web. In SIGIR 2001, Sept. 2001. Invited\ntalk.\n[28] S. Mukherjea.\nWTMS: a system for collecting and analyzing\ntopic-specific Web information. WWW9/Computer Networks,\n33(16):457471, 2000. Online at http://www9.org/w9cdrom/293/\n293.html.\n[29] S. RaviKumar, P. Raghavan, S. Rajagopalan, D. Sivakumar,\nA. Tomkins, and E. Upfal. Stochastic models for the Web graph.\nIn FOCS, volume 41, pages 5765. IEEE, nov 2000. Online at\nhttp://www.cs.brown.edu/people/eli/papers/focs00.ps.\n[30] J. Rennie and A. McCallum. Using reinforcement learning to\nspider the web efficiently. In ICML, 1999. Online at http://\nwww.cs.cmu.edu/~mccallum/papers/rlspider-icml99s.ps.gz.\n[31] G.\nSalton\nand\nM.\nJ.\nMcGill.\nIntroduction\nto\nModern\nInformation Retrieval. McGraw-Hill, 1983.\n[32] M. Subramanyam, G. V. R. Phanindra, M. Tiwari, and M. Jain.\nFocused crawling using TFIDF centroid.\nHypertext Retrieval\nand Mining (CS610) class project, Apr. 2001. Details available\nfrom manyam@cs.utexas.edu.\n[33] L. Torgo and J. Gama. Regression by classification. In D. Borges\nand C. Kaestner, editors, Brasilian AI Symposium, volume 1159\nof Lecture Notes in Artificial Intelligence, Curitiba, Brazil,\n1996. Springer-Verlag. Online at http://www.ncc.up.pt/~ltorgo/\nPapers/list_pub.html.\n159", "keywords": "focused crawlers;Reinforcement learning;URLs;Focused crawling;taxonomy;DOM;HREF link;classifiers;Document object model"} {"name": "26", "title": "Accelerating 3D Convolution using Graphics Hardware", "abstract": "Many volume filtering operations used for image enhancement, data processing or feature detection can be written in terms of three-dimensional convolutions. It is not possible to yield interactive frame rates on todays hardware when applying such convolutions on volume data using software filter routines. As modern graphics workstations have the ability to render two-dimensional convoluted images to the frame buffer, this feature can be used to accelerate the process significantly. This way generic 3D convolution can be added as a powerful tool in interactive volume visualization toolkits.", "fulltext": "Introduction\nDirect volume rendering is a very important technique for visualizing\nthree dimensional data. Several fundamental different methods\nhave been introduced [2, 4, 5, 6, 8, 12]. Hardware-based volume\ntexturing [9, 14] is one of the most prominent variants for interactive\nvisualization due to the high frame rates that can be achieved\nwith this technique.\nThe basic principle of texture based volume rendering is depicted\nin Figure 1. According to the sampling theorem a 3D view of the\nvolume is generated by drawing an adequate number of equidistant,\nsemi-transparent polygons parallel to the image plane with respect\nto the current viewing direction (\"volume slicing\").\nFiltering on the other hand is a major part of the visualization\npipeline. It is broadly used for improving images, reducing noise,\nand enhancing detail structure. Volume rendering can benefit from\nfilter operations, as low pass filters reduce the noise e.g. of sam-pled\nmedical volume images and high pass filters can be used for\nedge extraction, visualizing prominent data features. Multiscale approaches\nas [13] regularly use disjunct filtering and downsampling\nsteps and can benefit from any speedups of the filtering process.\n\nFilters can be classified as linear or non-linear. Discrete linear filters\ncan be written as convolutions with filter kernels that completely\nspecify the filtering operation. 
Non-linear filters, as for instance morphological operators, were recently used for volume analysis and visualization [7]. Segmentation and classification depend heavily on filtering operations as well. Bro-Nielson [1] already thought about using convolution hardware for accelerating the registration process.

For texture based volume rendering the data set has to be loaded into special texture memory, which can be addressed by the graphics pipe very fast. The loading process itself is relatively slow, taking several seconds for big data sets even on the fastest available graphics workstations. As the data set has to be reloaded after a filter operation has been performed in software, interactive filtering will benefit a lot from convolution algorithms that work directly on the texture hardware. Additionally, we will show in the following that computing the convolution with graphics hardware is much faster than software solutions.

3D Convolution

The general three-dimensional discrete convolution can be written as

$\tilde{f}(x, y, z) = \sum_{i_1, i_2, i_3} k(i_1, i_2, i_3) \, f(x + i_1, y + i_2, z + i_3)$

with $f$ being the input data function and $k$ being the filter kernel, resulting in the convolved data $\tilde{f}$.

In the following examination we will assume without loss of generality that $k(i_1, i_2, i_3)$ is given for $0 \le i_1, i_2, i_3 < n$ and vanishes outside this interval. Also, we will assume that the input data function vanishes for $(x, y, z)$ outside the interval $[0, m)^3$.

In the special case of $k(i_1, i_2, i_3) = k_1(i_1) \, k_2(i_2) \, k_3(i_3)$ the kernel is called separable. In this case the number of operations necessary for the convolution can be reduced to $O(m^3 n)$, from $O(m^3 n^3)$ in the non-separable case:

$\tilde{f}_1(x, y, z) = \sum_{i_1} k_1(i_1) \, f(x + i_1, y, z)$    (1)

$\tilde{f}_2(x, y, z) = \sum_{i_2} k_2(i_2) \, \tilde{f}_1(x, y + i_2, z)$    (2)

$\tilde{f}(x, y, z) = \sum_{i_3} k_3(i_3) \, \tilde{f}_2(x, y, z + i_3)$    (3)

Of course special care has to be taken near the boundaries of the input data function, as convolution routines are generally written at a very low language level for speed purposes.

Figure 2 shows two well known convolution filters, the Gauß filter and its second derivative, both in their continuous and discrete forms. They can be used for noise reduction and edge detection, respectively. An example image that was filtered with these kernels can be seen in Figure 3.

[Figure 2: The Gauß filter function and its second derivative, each shown in continuous and discrete form.]
[Figure 3: Example image; original, filtered with the Gauß filter, and filtered with its second derivative.]

Hardware Acceleration

In order to accelerate the convolution process, special purpose hardware can be used.
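For reference, the three-pass separable scheme of equations (1)-(3) can be written down directly in software; the timings reported later compare the hardware approach against such a well tuned software filter. The following is only a minimal Python/numpy sketch with zero padding at the volume boundaries, not the optimized low-level routine used for the measurements, and the binomial kernel in the usage example is merely an illustrative stand-in for a discrete Gauß filter.

```python
import numpy as np

def separable_convolve_3d(f, k1, k2, k3):
    """Separable 3D convolution as three 1D passes, following (1)-(3).

    f          -- 3D array holding the volume; values outside it are treated as zero
    k1, k2, k3 -- 1D kernels k_1, k_2, k_3, defined for offsets 0 <= i < n
    """
    out = f.astype(np.float64)
    for axis, k in enumerate((k1, k2, k3)):
        acc = np.zeros_like(out)
        for i, coeff in enumerate(k):
            # f(x + i, ...) is a shift by -i along the current axis; slabs that
            # wrap around the end of the array are zeroed to emulate zero padding.
            shifted = np.roll(out, -i, axis=axis)
            if i > 0:
                border = [slice(None)] * 3
                border[axis] = slice(-i, None)
                shifted[tuple(border)] = 0.0
            acc += coeff * shifted
        out = acc
    return out

# Usage: smooth a random 64^3 volume with a separable 5-tap low pass filter.
volume = np.random.rand(64, 64, 64).astype(np.float32)
lowpass = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
smoothed = separable_convolve_3d(volume, lowpass, lowpass, lowpass)
```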
On systems that have built-in Digital Signal Processors (DSPs), for example for multimedia acceleration, a specialized convolution subroutine could be downloaded to the signal processor. On the other hand, most of the time these DSPs are not well documented or the run-time system cannot be modified by the user. In general they are not faster than the main processor either. Additionally, there exists a wide range of different DSP systems, all of which are incompatible with each other.

[Figure 4: The first pass of the hardware filtering algorithm (a 2D convolution applied to every slice of the volume).]
[Figure 5: Texture coordinates used for exact texel hits. The diagram marks the used texture coordinates (e.g. 1/16, ..., 15/16) against the texel boundaries (1/8, 1/4, ..., 1), so that each coordinate addresses the exact position of the data value inside its texel.]

The approach we have implemented in our system is to combine a 2D and a 1D convolution kernel in order to calculate three-dimensional separable convolutions. Several vendors of the graphics API OpenGL -- as for example Silicon Graphics Inc. [10] -- have included extensions for one- and two-dimensional filtering. In contrast to most implementations that emulate these extensions only in software, the SGI graphics pipes MXE and BasicReality calculate the convolutions on-board, boosting performance by an order of magnitude already for reasonably sized filters. The CRM graphics system of the O2 is capable of rendering convolutions in hardware as well, but it does not support volume textures, which are also crucial for the algorithm.

Recall that the volume data is already stored in texture memory for visualization using texture hardware. Now (1) and (2) are combined into one 2D convolution that is applied to every plane of the volume data perpendicular to the z-axis. Therefore, the volume is drawn plane by plane by rendering textured triangle strips (two triangles per strip) into the frame buffer, as sketched in Figure 4. The texture coordinates assigned to the vertices of the triangle strips are specified in such a way that no interpolation of the texture is necessary (see Figure 5). In order to increase the potential speedup and to avoid rounding problems, nearest neighbor interpolation is activated during the rendering process. Each plane is then read back with the OpenGL routine glCopyTexSubImage3DEXT, which replaces one plane of the active volume texture orthogonal to the z-axis by data that is read directly from the frame buffer. While transferring the data to the texture memory, the separable 2D convolution filter is activated using glSeparableFilter2DEXT. After this first pass, the volume texture contains data filtered along the x- and y-axes.

[Figure 6: The second pass (a 1D convolution applied while copying planes back into texture memory).]
[Figure 7: The OpenGL graphics pipeline. Pixel data passes through the pixel transfer stage (scale and bias, convolution, post-convolution scale and bias, clamping) before it reaches the frame buffer or the texture memory.]

Applying the convolution to the third axis is more complicated and is depicted in Figure 6.
In this second pass planes are rendered perpendicular to one of the other axes. Assume that the y-axis has been chosen. As glCopyTexSubImage3DEXT cannot write planes orthogonal to any other axis than the z-axis to the texture memory, they have to be transferred to a second volume texture. OpenGL's texture objects are used by calling glBindTexture for switching between them, which implies only a very small overhead. While copying the data from the frame buffer to the texture memory, a one-dimensional convolution filter is activated. As we are dealing with two-dimensional image data, we specify a 2D convolution filter with glConvolutionFilter2DEXT, using a filter kernel that is exactly one pixel wide.

After this second pass the convolved volume data has been mirrored at the plane y - z = 0. For texture based volume renderers this does not impose any restrictions, as they only have to swap coordinates during texture coordinate calculations. When the data order is crucial for the application, the algorithm of pass two can be used for both passes, thus restoring the data order in the second pass. However, the textured planes then have to be drawn two times perpendicular to the y-axis. Cache misses are much more likely in this case compared to planes rendered orthogonal to the z-axis. This can increase the convolution times on big volumes by up to 50% even on fast graphics hardware.

Figure 7 depicts the relevant part of the OpenGL pipeline. It reveals that pixel fragments read from the frame buffer are clamped to [0, 1) before they can be written back to the frame buffer or into the texture memory. Filter kernels with negative coefficients can compute negative intermediary values during the two-dimensional convolution pass, which will then not contribute to the final 1D convolution. Additionally, no negative results can be stored in the output volume. These values are especially needed when the filter kernel is not symmetrical.

The strategy for avoiding these effects depends on the particular application. For edge detection the absolute maxima of the filtered volume data are of interest. In this case, calculating the absolute value can be performed in hardware as well, further reducing the necessary computations on the CPU. In most other cases post-convolution scaling and bias can be used to map the expected results to the interval [0, 1) just before the clamping takes place. OpenGL provides the GL_POST_CONVOLUTION_c_SCALE_EXT and GL_POST_CONVOLUTION_c_BIAS_EXT parameters, which are applied to pixel color values after convolution and before clamping, as depicted in Figure 7.

Results

The used data sets are presented on the color plate. Figures 8 to 12 show slices of a head data set of size 128^3. Figure 8 reveals the unfiltered data set, whereas Figures 9 and 10 present slices of the software and hardware low pass filtered volume data, respectively. Here, a Gauß filter kernel of size 5^3 has been used. Almost no differences can be detected. Figures 11 and 12 depict the results for high pass filtering using the second derivative of the Gauß filter, again computed in software and in hardware. The hardware-convolved volume displays noticeable artifacts that occur due to the already mentioned clamping step in the OpenGL pipeline. By using post-convolution scaling and bias the artifacts disappear completely.

Figures 13 to 18 picture another data set that has been used for testing the implementation.
They have been visualized with the hardware based volume rendering toolkit TiVOR [11], again with the first picture being rendered from the original data set. While the noise reduction effect of the Gauß filter is rather bothersome in Figure 14, smearing volume details, it has remarkably positive effects on ISO surface generation (compare Figures 17 and 18). Note that the ISO surfaces are rendered in real time using a hardware accelerated volume rendering approach described in [14].

Noise interferes with high pass filters, which can be seen in Figure 15. Using a high pass filter on the already low pass filtered data set reveals far more and better separable details (see Figure 16) compared to the directly filtered volume.

We have tested the speed of our implementation against a well tuned software convolution filter. Unsurprisingly, the software convolution is almost completely memory bound. Even extremely fast workstations such as the Octane are limited by the main memory bandwidth, as today's caches are far too small for the values needed for convolution along the z-axis. High end machines such as the Onyx2 perform huge 3D convolutions three times faster than the Octane, even when equipped with slower CPUs. Standard PCs cannot cope with the memory bandwidth of the Onyx2 system, and multiprocessor options will not accelerate the process because it is not CPU bound.

Table 1: Convolution times in seconds using hardware / software

Filter size   2^3          3^3          5^3          7^3
head (*)      0.33 / 0.72  0.33 / 1.02  0.33 / 1.56  0.48 / 2.0
angio (**)    2.5 / 6.0    2.5 / 8.7    2.5 / 14.7   3.7 / 21.3

(*) Data set created by computer tomography, 128^3.
(**) Data set created by MR angiography, 256^3.
All times were measured on a Silicon Graphics Onyx2 equipped with a BasicReality graphics pipe. The system has two R10000/195 MHz processors and 640 MB main memory.

Table 1 shows convolution times for different data sets and filter sizes, using software and hardware convolution. All times have been measured on an Onyx2 equipped with a BasicReality graphics pipe. The maximum filter size supported by the graphics system is 7^2. Therefore, the maximum 3D convolution that can be performed in hardware on this system is 7^3. Noteworthy is the fact that the BasicReality graphics system is optimized for filter kernels of size 5^2. Convolutions with smaller kernels need exactly the same computation time. Filters of size 6^2 and 7^2 share their timing results as well. The x and y coordinates of the volume are swapped during the hardware based convolution process, which is a side effect of the presented 3D convolution algorithm.
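To make the clamping workaround concrete: one simple way to choose the post-convolution scale and bias is to map the full signed response range of the kernel into [0, 1] before clamping. The sketch below assumes input data in [0, 1) and is meant only to illustrate the arithmetic; the actual values used by the authors for the figures above are not specified here.

```python
def post_convolution_scale_bias(kernel_coefficients):
    """Scale and bias mapping the possible filter response into [0, 1].

    For input data in [0, 1), the most negative response is the sum of the
    negative coefficients and the most positive response is the sum of the
    positive coefficients.  OpenGL applies r' = r * scale + bias after the
    convolution and before clamping.
    """
    lo = sum(c for c in kernel_coefficients if c < 0.0)
    hi = sum(c for c in kernel_coefficients if c > 0.0)
    scale = 1.0 / (hi - lo)
    bias = -lo * scale
    return scale, bias

# Example: a 1D second-derivative-like kernel with negative side lobes.
scale, bias = post_convolution_scale_bias([-0.1, -0.25, 0.7, -0.25, -0.1])
# A response of -0.7 now maps to 0.0 and a response of +0.7 maps to 1.0.
```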
By using\npost-convolution scaling and bias extensions these problems can be\neasily overcome.\nNon-separable convolutions are not possible right now with this algorithm\n. However, by applying several two-dimensional filter kernels\nand blending convoluted images in the frame buffer the use of\nnon-separable 3D kernels will be a possibility for the future as well.\nReferences\n[1] M Bro-Nielson. Medical Image Registration and Surgery Simulation.\nPhD thesis, IMM, Technical University of Denmark, 1996.\n[2] R. A. Crawfis and N. L. Max. Texture Splats for 3D Scalar and Vector\nField Visualization. In G. M. Nielson and Bergeron D., editors, Visualization\n93, pages 261265, Los Alamitos, CA, 1993. IEEE Computer\nSociety, IEEE Computer Society Press.\n[3] M. Hopf and T. Ertl. Hardware Based Wavelet Transformations. In\nErlangen Workshop '99 on Vision, Modeling and Visualization, Erlangen\n, November 1999. IEEE. accepted for publication, November 17\n19.\n[4] A. Kaufman. Volume Visualization. IEEE Computer Society Press,\n1991.\n[5] P. Lacroute and M. Levoy. Fast Volume Rendering Using a Shear-Warp\nFactorization of the Viewing Transformation.\nIn Computer\nGraphics Proceedings, Annual Conference Series, pages 451457,\nLos Angeles, California, July 1994. ACM SIGGRAPH, Addison-Wesley\nPublishing Company, Inc.\n[6] L. Lippert, M. H. Gross, and C. Kurmann. Compression Domain\nVolume Rendering for Distributed Environments. In D. Fellner and\nL. Szirmay-Kalos, editors, EUROGRAPHICS '97, volume 14, pages\nC95C107. Eurographics Association, Blackwell Publishers, 1997.\n[7] C. L urig and T. Ertl. Hierarchical Volume Analysis and Visualization\nBased on Morphological Operators. In Proc. IEEE Visualization '98,\npages 335341, 1998.\n[8] P. Schroder and G. Stoll. Data Parallel Volume Rendering as Line\nDrawing. In 1992 Workshop on Volume Visualization. ACM SIGGRAPH\n, October 1992.\n[9] Silicon Graphics Inc., Mountain View, California. Volume Rendering\nusing RE2 Technology, 1994.\n[10] Silicon Graphics Inc., Mountain View, California. OpenGL on Silicon\nGraphics Systems, 1996.\n[11] O. Sommer, A. Dietz, R. Westermann, and T. Ertl. An Interactive Visualization\nand Navigation Tool for Medical Volume Data. In N. M.\nThalmann and V. Skala, editors, WSCG '98, The Sixth International\nConference in Central Europe on Computer Graphics and Visualization\n'98, volume II, pages 362371, Plzen, Czech Republic, February\n1998. University of West Bohemia Press.\n[12] T. Totsuka and M. Levoy. Frequency Domain Volume Rendering.\nComputer Graphics, 27(4):27178, August 1993.\n[13] R. Westermann and T. Ertl.\nA Multiscale Approach to Integrated\nVolume Segmentation and Rendering.\nIn Computer Graphics Forum\n16(3) (Proc. EUROGRAPHICS '97), number 3, pages 117129.\nBlackwell, 1997.\n[14] R. Westermann and T. Ertl. Efficiently Using Graphics Hardware in\nVolume Rendering Applications. 
In Computer Graphics Proceedings,\nAnnual Conference Series, pages 169177, Orlando, Florida, 1998.\nACM SIGGRAPH.\n474\nFigure 8: The unfiltered\nhead data set\nFigure 9: Head, low pass\nfiltered in software\nFigure 10:\nHead, low\npass filtered in hardware\nFigure 11:\nHead, high\npass filtered in software\nFigure 12:\nHead, high\npass filtered in hardware\nFigure 13: The original angiography\ndata set\nFigure 14: The Gau filtered\ndata set\nFigure 15:\nData,\nafter direct\nfiltering with Gau' second\nderivative\nFigure 16: First low pass, then\nhigh pass filtered data\nFigure 17: ISO surfaces on the original angiography data set\nFigure 18: ISO surfaces on the Gau filtered data set", "keywords": "3D convolution;Convolution;visualization;filtering;Volume Visualization;Hardware Acceleration;volume rendering"} {"name": "27", "title": "Agent Technology for Personalized Information Filtering: The PIA-System", "abstract": "As today the amount of accessible information is overwhelming, the intelligent and personalized filtering of available information is a main challenge. Additionally, there is a growing need for the seamless mobile and multi-modal system usage throughout the whole day to meet the requirements of the modern society (\"anytime, anywhere, anyhow\"). A personal information agent that is delivering the right information at the right time by accessing, filtering and presenting information in a situation-aware matter is needed. Applying Agent-technology is promising, because the inherent capabilities of agents like autonomy, pro- and reactiveness offer an adequate approach. We developed an agent-based personal information system called PIA for collecting, filtering, and integrating information at a common point, offering access to the information by WWW, e-mail, SMS, MMS, and J2ME clients. Push and pull techniques are combined allowing the user to search explicitly for specific information on the one hand and to be informed automatically about relevant information divided in pre-, work and recreation slots on the other hand. In the core of the PIA system advanced filtering techniques are deployed through multiple filtering agent communities for content-based and collaborative filtering. Information-extracting agents are constantly gathering new relevant information from a variety of selected sources (internet, files, databases, web-services etc.). A personal agent for each user is managing the individual information provisioning, tailored to the needs of this specific user, knowing the profile, the current situation and learning from feedback.", "fulltext": "Introduction\nNowadays, desired information often remains unfound,\nbecause it is hidden in a huge amount of unnecessary and\nirrelevant data. On the Internet there are well maintained search\nengines that are highly useful if you want to do full-text\nkeyword-search [1], but they are not able to support you in a\npersonalized way and typically do not offer any \"push-services\"\nor in other words no information will be sent to you when you are\nnot active. Also, as they normally do not adapt themselves to\nmobile devices, they cannot be used throughout a whole day\nbecause you are not sitting in front of a standard browser all the\ntime and when you return, these systems will treat you in the\nvery same way as if you have never been there before (no\npersonalization, no learning). Users who are not familiar with\ndomain-specific keywords won't be able to do successful\nresearch, because no support is offered. 
Predefined or auto-generated\nkeywords for the search-domains are needed to fill that\ngap. As information demands are continuously increasing today\nand the gathering of information is time-consuming, there is a\ngrowing need for a personalized support. Labor-saving\ninformation is needed to increase productivity at work and also\nthere is an increasing aspiration for a personalized offer of\ngeneral information, specific domain knowledge, entertainment,\nshopping, fitness, lifestyle and health information. Existing\ncommercial \"personalized\" systems are far away from that\nfunctionality, as they usually do not offer much more than\nallowing to choose the kind of the layout or collecting some of\nthe offered information channels (and the information within is\nnot personalized).\nTo overcome that situation you need a personal information\nagent (PIA) who \"knows\" the way of your thinking. and can\nreally support you throughout the whole day by accessing,\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nSAC'05, March 13-17, 2005, Santa Fe, New Mexico, USA.\nCopyright 2005 ACM 1-58113-964-0/05/0003...$5.00.\n\n54\n2005 ACM Symposium on Applied Computing\nfiltering and presenting information to you in a situation-aware\nmatter (figure 1). Some systems exist (FAB [2], Amalthaea [3],\nWAIR [4], P-Tango [5], TripMatcher [6], PIAgent [7], Letizia\n[8], Let's Browse [9], Newt [10], WebWatcher [11], PEA [12],\nBAZAR [13]) that implement advanced algorithmic technology,\nbut did not become widely accepted, maybe because of real\nworld requirements like availability, scalability and adaptation to\ncurrent and future standards and mobile devices.\nIn this paper we present an agent-based approach for the\nefficient, seamless and tailored provisioning of relevant\ninformation on the basis of end-users' daily routine. As we\nassume the reader will be familiar with agent-technology (see\n[14], [15] for a good introduction), we will concentrate on the\npractical usage and the real-world advantages of agent-technology\n. We describe the design and architecture in the\nfollowing section and afterwards depict the system in section\nthree.\nDesign of PIA The Personal Information Agent\nTo meet the discussed requirements and to support the user in\nthat matter, we designed a multi-agent system composed of four\nclasses of agents: many information extracting agents, agents that\nimplement different filtering strategies, agents for providing\ndifferent kinds of presentation and one personal agent for each\nuser. Logically, all this can be seen as a classical three tier\napplication (figure 2). Concerning the information extraction,\ngeneral search engines on the one hand but also domain-specific\nportals on the other hand have to be integrated. Additional\ninformation sources (Databases, Files, Mailinglists etc.) should\nalso be integrated easily at run-time.\nSeveral agents realize different filtering strategies (content-based\nand collaborative filtering [16], [5]) that have to be\ncombined in an intelligent matter. Also agents for providing\ninformation actively via SMS, MMS, Fax, e-mail (push-services)\nare needed. 
A Multi-access service platform has to manage the\npresentation of the results tailored to the used device and the\ncurrent situation. The personal agent should constantly improve\nthe knowledge about \"his\" user by learning from the given\nfeedback, which is also taken for collaborative filtering, as\ninformation that has been rated as highly interesting might be\nuseful for a user with a similar profile as well. As users usually\nare not very keen on giving explicit feedback (ratings), implicit\nfeedback like the fact that the user stored an article can also be\ntaken into account [18].\nA \"keywordassistant\" should support the user to be able to\ndefine queries even if he is not familiar with a certain domain.\nKeywords predefined by experts should be offered and also the\npossibility to point to a \"basis-paper\" serving as an example. The\nkeywordassistant will extract automatically the most important\nkeywords of that paper and will provide them for searching. The\nwhole system is designed to be highly scalable, easy to modify, to\nadapt and to improve and therefore an agent-based approach that\nallows to integrate or to remove agents even at run-time is a\nsmart choice. The different filtering techniques are needed to\nprovide accurate results, because the weakness of individual\ntechniques should be compensated by the strengths of others.\nDocuments should be logically clustered by their domains to\nallow fast access, and for each document several \"models\" [19]\nwill be built, all including stemming and stop-word elimination,\nbut some tailored for very efficient retrieval at run-time and\nothers to support advanced filtering algorithms for a high\naccuracy.\nPIA\n\n\n\n\n\nDemand for information is increasing continuously\n\n\n\nInformation gathering is time consuming\n\n\n\nUsers have different devices\n\nFacts\nPortals\n\nDatabases\n\nGeneral\n\nInformation\n\n\nNews\n\n\nWeather\n\nInformation to\nincrease\nproductivity /\nLabor-saving\ninformation\n\nDomain knowledge\n\nRecreation\n\n\nEntertainment\n\n\nShopping\n\nHealth\n\n/ Fitness /\nLifestyle\nTechnologies\n\n\nEnd Devices\n\n\nSoftware\n\n\nNetworks\n\nwww\n\n/ Internet\n\nNot\n\nstructured\ncontent\n\nStructured\n\nDedicated\ncontent\n\nStructured\ncontent\n\nProfile\n\nDemand\n\nfor\n\ninformation\n\nFigure 1: Demand for a personal information agent\n\n55\n\nFigure 2: The PIA System seen as a three tier application\n\nIf the system notices that the content-based filtering is not\nable to offer sufficient results, additional information should be\noffered by collaborative filtering, i.e. information that was rated\nas interesting by a user with a similar profile will be presented.\nWith the \"push-services\", the user can decide to get new\nintegrated relevant information immediately and on a mobile\ndevice, but for users who do not want to get new information\nimmediately, a personalized newsletter also has to be offered.\nThis newsletter is collecting new relevant information to be\nconveniently delivered by e-mails, allowing users to stay\ninformed even if they are not actively using the system for some\ntime.\nDeployment and Evaluation\nWe implemented the system using Java and standard open\nsource database and web-technology and based on the JIAC IV\n1\n\nagent framework [20]. 
JIAC IV is FIPA 2000 compliant [21], that\nis it is conforming to the latest standards.\nIt consists of a communication infrastructure as well as\nservices for administrating and organizing agents (Agent\nManagement Service, AMS and Directory Facilitator, DF). The\nJIAC IV framework provides a variety of management and\nsecurity functions, management services including configuration,\nfault management and event logging, security aspects including\nauthorization, authentication and mechanisms for measuring and\nensuring trust and therefore has been a good choice to be used\nfrom the outset to the development of a real world application.\nWithin JIAC IV, agents are arranged on platforms, allowing\nthe arrangement of agents that belong together with the control of\nat least one \"manager\". A lot of visual tools are offered to deal\nwith administration aspects. Figure 3 shows a platform, in this\ncase with agents for the building of different models specialized\nfor different retrieval algorithms.\nThe prototypical system is currently running on Sun-Fire-880\n, Sun-Fire-480R and Sun Fire V65x, whereas the main\nfiltering computation, database- and web-server and information-extraction\nis placed on different machines for performance\nreasons.\n\n1\nJIAC IV is funded by T-Systems Nova GmbH\n\n\nFigure 3: Several Agents are building different models specialized for different retrieval algorithms.\n56\nNew content is stored, validated and consolidated in a\ncentral relational database (update-driven). Information can be\naccessed by WWW, e-mail, SMS, MMS, and J2ME Clients,\nwhere the system adapts the presentation accordingly, using the\nCC/PP (Preferences Profile) with a tailored layout for a mobile\nphone and a PDA (see section 3.5). The personalized newsletter\nand the push-services are sent via e-mail, SMS or MMS. The\nuser can use self-defined keywords for a request for information\nor choose a category and therefore the system will use a list of\nkeywords predefined by experts and updated smoothly by\nlearning from collaborative filtering. A combination of both is\nalso possible. The keyword assistant is able to extract the most\nimport keywords of a given article using the term frequency\ninverse document frequency (TFIDF)-algorithm [22].\n3.2\n\nGathering new Information\nNew information is constantly inserted in the system by\ninformation extraction agents, e.g. web-reader agents or agents\nthat are searching specified databases or directories. Additional\nagents for the collection of new content can easily be integrated\neven at runtime, as all that is necessary for a new agent is to\nregister himself at the system, store the extracted information at\na defined database table and inform the modeling-manager agent\nabout the insertion. As a file reader-agent is constantly observing\na special directory, manual insertion of documents can be done\nsimply by drag-and-drop and an e-mail and upload-interface also\nexists. Source can also be integrated by Web services. New\nReaders can be created using a easy-to-handle tool and another\ntool is enabling to conveniently observe the extraction-agents, as\nthis is the interface to the outside that might become critical if\nfor example the data-format of a source is changed.\n3.3\n\nPre processing for efficient retrieval\nThe first step of pre processing information for efficient retrieval\nis the use of distinct tables in the global database for different\ndomains like e.g. news, agent-related papers, etc. 
Depending on\nthe filtering request, tables with no chance of being relevant can\ntherefore be omitted. The next step is the building of several\nmodels for each document. Stemming and stop-word elimination\nis implemented in every model but different models are built by\ncomputing a term importance either based only on local\nfrequencies, or based on term frequency inverse document\nfrequency (TFIDF) approach. Furthermore number of words that\nshould be included in models is different which makes models\neither more accurate or more efficient. Created models are\nindexed either on document or word level, which facilitate their\nefficient retrieval. The manager agent is assigning the\nappropriate modeling agents to start building their models but\nmight decide (or the human system administrator can tell him) at\nruntime to delay latest time-consuming modeling activity for a\nwhile if system load is critical at a moment. This feature is\nimportant for a real-world application, as overloading has been a\nmain reason for the unusability of advanced academic systems.\n3.4\n\nFiltering technology\nAs the quality of results to a particular filtering request might\nheavily depend on the information domain (news, scientific\npapers, conference calls), different filtering communities are\nimplemented. For each domain, there is at least one community\nwhich contains agents being tailored to do specific filtering and\nmanaging tasks in an efficient way. Instead of having only\nfiltering agents, each and every community has also one so called\nmanager agent that is mainly responsible for doing coordination\nof filtering tasks. The coordination is based on quality, CPU, DB\nand memory fitness values, which are the measures being\nassociated to each filtering agent [23]. These measures\nrespectively illustrate filtering agent successfulness in the past,\nits efficiency in using available CPU and DB resources, and the\namount of memory being required for filtering. A higher CPU,\nDB or memory fitness value means that filtering agent needs a\nparticular resource in a smaller extent for performing a filtering\ntask. This further means that an insufficiency of a particular\nresource has a smaller influence on filtering agents with a higher\nparticular fitness value.\nThe introduced different fitness values together with the\nknowledge about the current system runtime performance can\nmake coordination be situation aware (see also [23]) in the way\nthat when a particular resource is highly loaded a priority in\ncoordination should be given to filtering agents for which a\nparticular resource has a minor importance. This situation aware\ncoordination provides a way to balance response time and\nfiltering accuracy, which is needed in overcoming the problem of\nfinding a perfect filtering result after few hours or even few days\nof an expensive filtering.\nInstead of assigning filtering task to the agent with the best\ncombination of fitness values in the current runtime situation,\nmanager is going to employ a proportional selection principle\n[24] where the probability for the agent to be chosen to do actual\nfiltering is proportional to the mentioned combination of its\nfitness values. By not always relying only on the most promising\nagents, but also sometimes offering a job to other agents,\nmanager gives a chance to each and every agent to improve its\nfitness values. 
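The proportional selection principle sketched above amounts to a roulette-wheel draw over the agents' combined fitness values. The snippet below is only an illustration: how the quality, CPU, DB and memory fitness values are actually combined (and re-weighted under load) is a design decision of the manager agent and is not fully specified here, so the weighting function and agent representation are assumptions.

```python
import random

def combined_fitness(agent, weights):
    """Illustrative weighted combination of the four fitness values."""
    return sum(weights[key] * agent[key] for key in ("quality", "cpu", "db", "memory"))

def select_filtering_agent(agents, weights):
    """Roulette-wheel selection: an agent is picked with probability proportional
    to its combined fitness, so weaker agents still occasionally get a job."""
    scores = [combined_fitness(a, weights) for a in agents]
    pick = random.uniform(0.0, sum(scores))
    running = 0.0
    for agent, score in zip(agents, scores):
        running += score
        if pick <= running:
            return agent
    return agents[-1]  # guard against floating point round-off

agents = [
    {"name": "F1", "quality": 0.9, "cpu": 0.4, "db": 0.6, "memory": 0.7},
    {"name": "F2", "quality": 0.5, "cpu": 0.9, "db": 0.8, "memory": 0.9},
]
# When CPU is the scarce resource, its weight can be raised so that CPU-frugal
# agents are preferred (situation-aware coordination).
chosen = select_filtering_agent(agents, {"quality": 0.5, "cpu": 0.3, "db": 0.1, "memory": 0.1})
```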
While the adaptation of quality fitness value can\nbe accomplished after the user feedback became available, the\nother fitness values can be changed immediately after the\nfiltering was finished through the response time analyses. The\nadaptation scheme has a decreasing learning rate that prevents\nalready learnt fitness values of being destroyed, which further\nmeans that proven agents pay smaller penalties for bad jobs than\nthe novice ones [17].\nIn the case where the received filtering task cannot be\nsuccessfully locally accomplished usually because of belonging to\nunsupported information domain, manager agent has to cooperate\nwith other communities. While coordination takes place inside\neach community between manager and filtering agents,\ncooperation occurs between communities among manager agents.\nCooperation is based on either finding a community which\nsupports given domain or in splitting received task on sub-tasks\nwhere for each sub-task a community with good support exists.\nFigure 4 presents a high level overview of the filtering\nframework being composed of three different filtering\ncommunities (FC), where each community has one filter manager\nagent (M) and different number of specialized filtering agents\n(F). There are two different databases (DB) with information\nfrom different domains, and they are accessed at least by one\ncommunity. On the figure cooperation is illustrated as a circle\nwith arrows which connect manager agents.\n57\n\n\nFigure 4: Filtering framework with three different\ncommunities\n\n3.5\n\nPresentation\nAs one of the main design principles has been to support the user\nthroughout the whole day, the PIA system provides several\ndifferent access methods and adapts its interfaces to the used\ndevice (Figure 5). To fulfill these requirements an agent platform\n("Multi Access Service Platform") was developed that optimizes\nthe graphical user interface for the access by Desktop PCs, PDAs\nand smart phones.\nIf the user wants to use the PIA system, the request is received by\nthe Multi Access Service Platform (MAP). The MAP delegates\nthe request to an agent, providing the logic for this service. In the\nPIA system the requests are forwarded either to login agent or\nthe personal agent. The chosen agent performs the service\nspecific actions and sends the MAP an abstract description of the\nformular that should be presented to the user. For this purpose\nthe XML based Abstract Interaction Description Language\n(AIDL) has been defined. Based on the abstract description and\nthe features of the used device the MAP generates an optimized\ninterface presented to the user. The conversion is implemented as\na XSLT transformation in which the optimal XSLT style sheet is\nselected based on the CC/PP information about the user's device.\nThis approach simplifies the creation of optimized user\ninterfaces for different devices. The abstract interface description\ncan be easily transformed into HTML, PDA optimized HTML or\nWML. If the user wants to have a voice interface, a style sheet\nfor converting the abstract user interface description into\nVoiceXML has to be added to the MAP. Additional changes at\nthe PIA system are not needed.\nBeside of the features provided by MAP the design of the user\ninterface must create an easy to use system even on devices with\na tiny screen and without a keyboard. That is why the PIA\ninterface provides additional navigation elements on complex\nforms and minimizes the use of text input fields. 
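To make the device adaptation step of the MAP more tangible, the stylesheet selection and XSLT transformation described above can be pictured roughly as follows. AIDL and the CC/PP handling are not specified here in enough detail to reproduce them, so the file names, device classes and form document are purely illustrative; the sketch also uses Python with lxml for the XSLT step, whereas the actual system is implemented in Java on JIAC IV.

```python
from lxml import etree

# One stylesheet per device class; the file names are invented for illustration.
STYLESHEETS = {
    "desktop": "aidl_to_html.xsl",
    "pda": "aidl_to_pda_html.xsl",
    "phone": "aidl_to_wml.xsl",
}

def render_for_device(aidl_document, device_class):
    """Transform an abstract interface description into device-specific markup.

    aidl_document -- parsed XML tree with the abstract form description
    device_class  -- coarse device class derived from the CC/PP profile
    """
    xslt_doc = etree.parse(STYLESHEETS[device_class])
    transform = etree.XSLT(xslt_doc)
    return transform(aidl_document)

# aidl = etree.parse("login_form.aidl.xml")
# markup = render_for_device(aidl, "phone")   # -> WML for a mobile phone
```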
New results\nmatching a defined request are presented first as a list of short\nblocks containing only title, abstract and some meta-information\n(as this is familiar to every user from well-known search-engines\n). This information is also well readable on PDAs or even\nmobile phones. Important articles can be stored in a repository.\nThis allows the user to choose the articles on his PDA he wants\nto read later at his desktop PC.\nDepending on the personal options specified by the user, old\ninformation found for a specific query may be deleted\nautomatically step by step after a given time, that is, there is\nalways up to date information that is presented to the user (we\ncall this "smart mode"). This is for example convenient for\ngetting personalized filtering news. The other option is to keep\nthat information unlimited ("global mode") for a query for e.g.\nbasic scientific papers.\nFor the "push-services" (that is, the system is becoming active\nand sending the user information without an explicit request), the\nuser specifies his working time (e.g. 9 am to 5 pm). This divides\nthe day in a pre-, work, and a recreation slot, where the PIA\nsystem assumes different demands of information. For each slot\nan adequate delivering technology can be chosen (e-mail, sms,\nmms, fax or Voice). If you decide to subscribe to the\npersonalized newsletter, new relevant information for you will be\ncollected and sent by e-mail or fax once a day with a similar\nlayout and structure for convenient reading if you have not seen it\nalready by other pull- or push services. Therefore you can also\nstay informed without having to log into the system and if you do\nnot want to get all new information immediately.\n\nFigure 5: Information accessed by browser or tailored for\npresentation on a PDA or a mobile phone\n58\nConclusion and future work\nThe implemented system has an acceptable runtime performance\nand shows that it is a good choice to develop a personal\ninformation system using agent-technology based on a solid\nagent-framework like JIAC IV. Currently, PIA system supports\nmore than 120 different web sources, grabs daily around 3.000\nnew semi-structured and unstructured documents, has almost\n500.000 already pre-processed articles, and actively helps about\nfifty scientists related to our laboratory in their information\nretrieval activities. Their feedback and evaluation is a valuable\ninput for the further improvement of PIA. In the near future we\nplan to increase the number of users to thousands, and therefore\nwe plan to work on the further optimization of the filtering\nalgorithms to be able to simultaneously respond to multiple\nfiltering requests. Also, we think about integrating additional\nservices for the user that provide information tailored to his\ngeographical position (GPS), a natural speech interface and\ninnovative ways to motivate the user to give precise explicit\nfeedback, as the learning ability of the system is depending on\nthat information.\n\nREFERENCES\n[1]\n\nBrin, S.; Page, L.: The anatomy of a large-scale hyper\ntextual (Web) search engine, Proc. 7th International World\nWide Web Conference on Computer Networks, 30(1-7), pp.\n107-117, 1998.\n[2]\n\nBalabanovic, M.; Yoav, S.: FAB: Content-Based\nCollaborative Recommendation, Communication of the\nACM, Volume 40, Number 3, pp. 
66-72, 1997.\n[3]\n\nMoukas, A.: Amalthaea: Information Discovery and\nFiltering using a Multi agent Evolving Ecosystem, Practical\nApplication of Intelligent Agents & Multi-Agent\nTechnology, London 1998.\n[4]\n\nZhang, B.; Seo, Y.: Personalized Web-Document Filtering\nUsing Reinforcement Learning, Applied Artificial\nIntelligence, Volume 15 Number 7, pp. 665-685, 2001.\n[5]\n\nClaypool, M.; Gokhale, A.; Miranda, T.; Murnikov, P.;\nNetes, D.; Sartin, N.: Combining Content-Based and\nCollaborative Filters in an Online Newspaper, ACM SIGIR\nWorkshop on Recommender Systems, Berkeley, CA, August\n19, 1999.\n[6]\n\nDelgado, J.; Davidson, R.: Knowledge bases and user\nprofiling in travel and hospitality recommender systems, In\nProceedings of the ENTER 2002 Conference, Innsbruck,\nAustria, January 22-25 2002, Springer Verlag, pp. 1-16.\n[7]\n\nKuropka, D.; Serries, T.: Personal Information Agent,\nInformatik Jahrestagung 2001, pp. 940-946.\n[8]\n\nLieberman, H.: Letizia: An Agent That Assists Web\nBrowsing, International Joint Conference on Artificial\nIntelligence, Montreal, August 1995.\n[9]\n\nLieberman, H.; Van Dyke, N,; Vivacqua, A.: Let's Browse:\nA Collaborative Browsing Agent, Knowledge-Based\nSystems, 12(8), 1999, pp. 427431.\n[10]\n\nSheth, B.: A Learning Approach to Personalized\nInformation Filtering, M.S. Thesis, MIT- EECS dept, USA,\n1994.\n[11]\n\nJoachims, T.; Freitag, D.; Mitchell, T.: WebWatcher: A Tour\nGuide for the World Wide Web, In IJCAI (1), 1997, pp. 770-777\n.\n[12]\n\nWiniwarter, W.: PEA - A Personal Email Assistant with\nEvolutionary Adaptation, International Journal of\nInformation Technology, Vol. 5, No. 1, 1999.\n[13]\n\nThomas, C.; Fischer, G.: Using agents to improve the\nusability and usefulness of the world wide web. In\nProceedings of the Fifth International Conference on User\nModelling, pages 5--12, 1996.\n[14]\n\nJennings, N; Wooldridge, M: Agent-oriented software\nengineering, Handbook of Agent Technology (ed. J.\nBradshaw). AAAI/MIT Press, 2000.\n[15]\n\nSesseler, R.; Albayrak, S.: Agent-based Marketplaces for\nElectronic Commerce, International Conference on Artificial\nIntelligence, IC-AI 2001.\n[16]\n\nResnick, P.; Neophytos, J.; Suchak, M.; Bergstrom, P.;\nRiedl, J.: GroupLens: An open architecture for collaborative\nfiltering of net news, Proceedings ACM Conference on\nComputer-Supported Cooperative Work, pp. 175-186, 1994.\n[17]\n\nAlbayrak, S.; Milosevic, D.: Self Improving Coordination in\nMulti Agent Filtering Framework, IEEE/WIC/ACM\nInternational Joint Conference on Intelligent Agent\ntechnology (IAT 04) and Web Intelligence (WI 04), Beijing,\nChina, September 2004., (to appear).\n[18]\n\nNichols, D.: Implicit Rating and Filtering, Proc. Fifth\nDELOS Workshop on Filtering and Collaborative Filtering,\nBudapest, Hungary, 10-12 November, ERCIM, pp. 31-36,\n1997.\n[19]\n\nTauritz, D.: Adaptive Information Filtering: concepts and\nalgorithms, Ph.D. dissertation, Leiden University, 2002.\n[20]\n\nFricke, S.; Bsufka, K.; Keiser, J.; Schmidt, T.; Sesseler, R.;\nAlbayrak, S.: Agent-based Telematic Services and Telecom\nApplications, Communications of the ACM, April. 2001.\n[21]\n\nFoundation for Intelligent Physical Agents, www.fipa.org,\n2004.\n[22]\n\nJing, L.; Huang, H.; Shi, H.: Improved Feature Selection\nApproach TFIDF in Text Mining, Proc. 
1\nst\nInternat.\nConference on Machine Learning and Cybernetics, Beijing,\n2002.\n[23]\n\nAlbayrak, S; Milosevic D.: Situation-aware Coordination in\nMulti Agent Filtering Framework, The 19th International\nSymposium on Computer and Information Sciences (ISCIS\n04), Antalya, Turkey, October 2004., (to appear).\n[24]\n\nZbigniew, M.; Fogel, D.: How to Solve It: Modern\nHeuristics, Springer-Verlag New York, Inc., New York,\nNY, 2000.\n\n\n59", "keywords": "Adaptation and Learning;filtering;Recommendation systems;Agent-based deployed applications;Evolution;Intelligent and personalized filtering;Agents and complex systems;personal information agent;agent technology;Ubiquitous access"} {"name": "28", "title": "An Adaptive Information Retrieval System based on Associative Networks", "abstract": "In this paper we present a multilingual information retrieval system that provides access to Tourism information by exploiting the intuitiveness of natural language. In particular, we describe the knowledge representation model underlying the information retrieval system. This knowledge representation approach is based on associative networks and allows the definition of semantic relationships between domain-intrinsic information items. The network structure is used to define weighted associations between information items and augments the system with a fuzzy search strategy. This particular search strategy is performed by a constrained spreading activation algorithm that implements information retrieval on associative networks. Strictly speaking, we take the relatedness of terms into account and show, how this fuzzy search strategy yields beneficial results and, moreover, determines highly associated matches to users' queries. Thus, the combination of the associative network and the constrained spreading activation approach constitutes a search algorithm that evaluates the relatedness of terms and, therefore, provides a means for implicit query expansion.", "fulltext": "Introduction\nProviding easy and intuitive access to information\nstill remains a challenge in the area of information system\nresearch and development. Moreover, as Van Rijsbergen\n(1979) points out, the amount of available\ninformation is increasing rapidly and offering accurate\nand speedy access to this information is becoming\never more difficult. This quote, although about\n20 years old, is still valid nowadays if you consider the\namount of information offered on the Internet. But\nhow to address these problems? How to overcome\nthe limitations associated with conventional search interfaces\n? Furthermore, users of information retrieval\nsystems are often computer illiterate and not familiar\nwith the required logic for formulating appropriate\nqueries, e.g. the burdens associated with Boolean\nCopyright c 2004, Australian Computer Society, Inc. This paper\nappeared at First Asia-Pacific Conference on Conceptual\nModelling (APCCM 2004), Dunedin, New Zealand. Conferences\nin Research and Practice in Information Technology, Vol.\n31. Sven Hartmann and John Roddick, Ed. Reproduction for\nacademic, not-for profit purposes permitted provided this text\nis included.\nlogic. This goes hand in hand with the urge to understand\nwhat users really want to know from information\nretrieval systems.\nStandard information retrieval interfaces consist of\ncheck boxes, predefined option sets or selection lists\nforcing users to express her or his needs in a very restricted\nmanner. 
Therefore, an approach leaving the\nmeans of expression in users' hands, narrows the gap\nbetween users' needs and interfaces used to express\nthese needs. An approach addressing this particular\nproblem is to allow query formulation in natural\nlanguage. Natural language interfaces offer easy and\nintuitive access to information sources and users can\nexpress their information needs in their own words.\nHence, we present a multilingual information retrieval\nsystem allowing for query formulation in natural\nlanguage. To reduce word sense ambiguities the\nsystem operates on a restricted domain. In particular,\nthe system provides access to tourism information,\nlike accommodations and their amenities throughout\nAustria.\nHowever, the core element of the information retrieval\nsystem remains the underlying knowledge representation\nmodel.\nIn order to provide a knowledge\nrepresentation model allowing to define relations\namong information items, an approach based on a\nnetwork structure, namely an associative network, is\nused.\nMore precisely, this associative network incorporates\na means for knowledge representation allowing\nfor the definition of semantic relationships of\ndomain-intrinsic information. Processing the network\nand, therefore, result determination is accomplished\nby a technique refereed to as spreading activation.\nSome nodes of the network act as sources of activation\nand, subsequently, activation is propagated to\nadjacent nodes via weighted links. These newly activated\nnodes, in turn, transmit activation to associated\nnodes, and so on.\nWe introduce a knowledge representation approach\nbased on an associative network consisting\nof three layers. Moreover, a constrained spreading\nactivation algorithm implements a processing technique\nthat operates on the network. Due to the network\nstructure of the knowledge representation model\nand the processing technique, implicit query expansion\nenriches the result set with additional matches.\nHence, a fuzzy search strategy is implemented.\nThe remainder of the paper is organized as follows\n. In Section 2 we review the architecture of the\ninformation retrieval system that acts as a basis for\nthe redeveloped approach presented herein.\nMoreover\n, Section 3 gives an overview about associative\nnetworks and we present an algorithm for processing\nsuch networks, i.e. spreading activation. In Section 4\nwe describe our approach based on associative net-27\nworks and finally, some conclusions are given in Section\n5.\nAD.M.IN A Natural Language Information Retrieval System\nCrestani (1997) points out that information retrieval\nis a science that aims to store and allow fast access\nto a large amount of data. In contrast to conventional\ndatabase systems, an information retrieval system\ndoes not provide an exact answer to a query but\ntries to produce a ranking that reflects the intention\nof the user. More precisely, documents are ranked according\nto statistical similarities based on the occurrence\nfrequency of terms in queries and documents.\nThe occurrence frequency of a term provides an indicator\nof the significance of this term. Moreover, in order\nto get a measure for determining the significance\nof a sentence, the position of terms within a sentence\nis taken into account and evaluated. 
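As a concrete, if greatly simplified, illustration of such frequency-based ranking (not the scheme of any particular system discussed here), the following sketch scores documents by how often query terms occur in them; practical systems add length normalization and term weighting such as TF-IDF.

```python
# Minimal sketch of occurrence-frequency ranking (illustrative only).
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def score(query: str, document: str) -> int:
    """Sum of query-term occurrence counts in the document."""
    doc_counts = Counter(tokenize(document))
    return sum(doc_counts[term] for term in tokenize(query))

def rank(query: str, documents: dict[str, str]) -> list[tuple[str, int]]:
    """Return (doc_id, score) pairs, best match first."""
    scored = [(doc_id, score(query, text)) for doc_id, text in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    docs = {
        "d1": "hotel with sauna and steam bath",
        "d2": "pension with sauna",
    }
    print(rank("sauna steam bath", docs))  # d1 ranks above d2
```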
For comprehensive\nreports about information retrieval see Salton\n& McGill (1983), Salton (1989) and Baeza-Yates &\nRibeiro-Neto (1999).\nIn order to adapt information retrieval systems to\nthe multilingual demands of users, great efforts have\nbeen made in the field of multilingual information retrieval\n. Hull & Grafenstette (1996) subsume several\nattempts to define multilingual information retrieval,\nwhere Harman (1995) formulates the most concise\none: \"multilingual information retrieval is information\nretrieval in any language other than English\".\nMultilingual information retrieval systems have to\nbe augmented by mechanisms for query or document\ntranslation to support query formulation in multiple\nlanguages. Information retrieval is such an inexact\ndiscipline that it is not clear whether or not query\ntranslation is necessary or even optimal for identifying\nrelevant documents and, therefore, to determine\nappropriate matches to the user query. Strictly speaking\n, the process of translating documents or queries\nrepresents one of the main barriers in multilingual information\nretrieval.\nDue to the shortness of user queries, query translation\nintroduces ambiguities that are hard to overcome\n. Contrarily, resolving ambiguity in document\ntranslation is easier to handle because of the quantity\nof text available. Nevertheless, state-of-the-art\nmachine translation systems provide only an insufficient\nmeans for translating documents. Therefore,\nresolving ambiguities associated with translations remains\na crucial task in the field of multilingual information\nretrieval. Ballesteros & Croft (1998), for\ninstance, present a technique based on co-occurrence\nstatistics from unlinked text corpora which can be\nused to reduce the ambiguity associated with translations\n. Furthermore, a quite straightforward approach\nin reducing ambiguities is to restrict the domain a\nmultilingual information retrieval system operates on.\nXu, Netter & Stenzhorn (2000) describe an information\nretrieval system that aims at providing\nuniform multilingual access to heterogeneous data\nsources on the web.\nThe MIETTA (Multilingual\nTourist Information on the World Wide Web) system\nhas been applied to the tourism domain containing\ninformation about three European regions,\nnamely Saarland, Turku, and Rome. The languages\nsupported are English, Finnish, French, German, and\nItalian. Since some of the tourism information about\nthe regions were available in only one language, machine\ntranslation was used to deal with these web\ndocuments. Due to the restricted domain, automatic\ntranslation should suffice to understand the basic\nmeaning of the translated document without having\nknowledge of the source language. Users can query\nthe system in various ways, such as free text queries,\nform-based queries, or browsing through the concept\nhierarchy employed in the system. MIETTA makes\nit transparent to the users whether they search in a\ndatabase or a free-form document collection.\n2.1\nThe Architecture of the Original System\nThe software architecture of the natural language information\nretrieval system is designed as a pipeline\nstructure. Hence, successively activated pipeline elements\napply transformations on natural language\nqueries that are posed via arbitrary client devices,\nsuch as, for instance, web browsers, PDAs or mobile\nphones. Due to the flexibility of this approach, different\npipeline layouts can be used to implement different\nprocessing strategies. 
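A pipeline of successively activated elements of this kind might be assembled along the following lines; the element names and their internal logic are invented for illustration and do not correspond to the actual Ad.M.In. classes.

```python
# Rough sketch of a query-processing pipeline (hypothetical element names).
# Each element transforms a shared context dict and passes it on, so that
# different pipeline layouts can be assembled from the same elements.
from typing import Callable

PipelineElement = Callable[[dict], dict]

def identify_language(ctx: dict) -> dict:
    ctx["language"] = "de" if "Hotel" in ctx["query"] else "en"  # placeholder logic
    return ctx

def correct_spelling(ctx: dict) -> dict:
    ctx["query"] = ctx["query"].replace("Sauan", "Sauna")  # placeholder logic
    return ctx

def map_to_query(ctx: dict) -> dict:
    ctx["terms"] = ctx["query"].lower().split()
    return ctx

def run_pipeline(query: str, layout: list[PipelineElement]) -> dict:
    ctx = {"query": query}
    for element in layout:  # elements are activated successively
        ctx = element(ctx)
    return ctx

if __name__ == "__main__":
    layout = [identify_language, correct_spelling, map_to_query]
    print(run_pipeline("Hotel mit Sauan", layout))
```

Because the layout is just a list of elements, alternative processing strategies amount to passing a different list.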
Figure 1 depicts the layout\nof the software architecture and illustrates the way of\ninteraction of the pipeline elements.\nIn a first step, the natural language query is evaluated\nby an automatic language identification module\nto determine the language of the query. Next, the system\ncorrects typographic errors and misspellings to\nimprove retrieval performance. Before adding grammar\nrules and semantic information to the query\nterms, a converter transforms numerals to their nu-meric\nequivalents. Depending on the rules assigned to\nthe query terms, a mapping process associates these\nterms with SQL fragments that represent the query\nin a formal way. Due to the fact that the system\nuses a relational database as backend this mapping\nprocess is crucial. In a next step the SQL fragments\nare combined according to the modifiers (e.g. \"and\",\n\"or\", \"near\", \"not\") identified in the query and a single\nSQL statement that reflects the intention of the\nquery is obtained. Then the system determines the\nappropriate result and generates an XML representation\nfor further processing. Finally, the XML result\nset is adapted to fit the needs of the client device.\nThe remainder of this section gives a brief outline\nof the system.\n2.1.1\nThe Knowledge Base\nA major objective of the Ad.M.In.(Adaptive Multilingual\nInterfaces) system was to separate the program\nlogic from domain dependent data. In particular\n, language, domain and device dependent portions\nare placed in the knowledge base. Thus, the knowledge\nbase represents the backbone of the system and\nconsists of a relational database and a set of ontologies\n. The database stores information about domain\nentities, as, for instance, amenities of accommodations\n. The ontologies store synonyms, define semantic\nrelations and grammar rules.\nBasically, the knowledge base consists of separate\nXML files, whereas the synonym ontology is used to\nassociate terms having the same semantic meaning,\ni.e. describes linguistic relationships like synonymy.\nThe synonym ontology is based on a flat structure,\nallowing to define synonymy. Taking a look at the\ntourism domain, \"playground\" represents a concept\npossessing several semantic equivalents, as, for instance\n, \"court\".\nUnfortunately, the synonym ontology provides no\nmeans to associate concepts. Consider, for example,\nthe three concepts \"sauna\", \"steam bath\" and \"vegetarian\nkitchen\". Straightforward, someone might derive\na stronger degree of relatedness between the concepts\n\"sauna\" and \"steam bath\" as between \"sauna\"\nand \"vegetarian kitchen\".\nThe second component of the knowledge base\nstores a set of grammar rules.\nMore precisely, a\nlightweight grammar describes how certain concepts\n28\nFigure 1: Software Architecture\nmay be modified by prepositions, adverbial or adjectival\nstructures that are also specified in the synonym\nontology. For a more detailed description we refer\nto Berger (2001).\n2.1.2\nLanguage Identification\nTo identify the language of a query, an n-gram-based\ntext classification approach (cf. Cavnar & Trenkle\n(1994)) is used. An n-gram is an n-character slice of\na longer character string. As an example, for n = 3,\nthe trigrams of the string \"language\" are: { la, lan,\nang, ngu, gua, uag, age, ge }.\nDealing with multiple\nwords in a string, the blank character is usu-ally\nreplaced by an underscore \" \" and is also taken\ninto account for the construction of an n-gram document\nrepresentation. 
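The n-gram construction just described can be sketched as follows; padding with the underscore character at the string boundaries is assumed here, which reproduces the boundary trigrams of the "language" example above.

```python
# Sketch of character n-gram extraction in the spirit of Cavnar & Trenkle (1994).
# Blanks are replaced by underscores and one padding character is assumed at
# each end, so word boundaries contribute their own n-grams.

def char_ngrams(text: str, n: int = 3, pad: str = "_") -> list[str]:
    s = pad + text.replace(" ", pad) + pad
    return [s[i:i + n] for i in range(len(s) - n + 1)]

if __name__ == "__main__":
    print(char_ngrams("language"))
    # ['_la', 'lan', 'ang', 'ngu', 'gua', 'uag', 'age', 'ge_']
```

Building a language's frequency profile then amounts to counting such n-grams (here for n = 1...5) over sample text and sorting them by descending frequency, as described next.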
This language classification approach\nusing n-grams requires sample texts for each\nlanguage to build statistical models, i.e. n-gram frequency\nprofiles, of the languages. We used various\ntourism-related texts, e.g. hotel descriptions and holiday\npackage descriptions, as well as news articles both\nin English and German language. The n-grams, with\nn ranging from 1...5, of these sample texts were an-alyzed\nand sorted in descending order according to\ntheir frequency, separately for each language. These\nsorted histograms are the n-gram frequency profiles\nfor a given language. For a comprehensive description\nsee Berger, Dittenbach & Merkl (2003).\n2.1.3\nError Correction\nTo improve retrieval performance, potential orthographic\nerrors have to be considered in the web-based\ninterface.\nAfter identifying the language, a spell-checking\nmodule is used to determine the correctness\nof query terms. The efficiency of the spell checking\nprocess improves during the runtime of the system\nby learning from previous queries. The spell checker\nuses the metaphone algorithm (cf. Philips (1990)) to\ntransform the words into their soundalikes. Because\nthis algorithm has originally been developed for the\nEnglish language, the rule set defining the mapping of\nwords to the phonetic code has to be adapted for other\nlanguages. In addition to the base dictionary of the\nspell checker, domain-dependent words and proper\nnames like names of cities, regions or states, have\nto be added to the dictionary. For every misspelled\nterm of the query, a list of potentially correct words\nis returned. First, the misspelled word is mapped to\nits metaphone equivalent, then the words in the dictionary\n, whose metaphone translations have at most\nan edit distance (cf. Levenshtein (1966)) of two, are\nadded to the list of suggested words. The suggestions\nare ranked according to the mean of first, the edit\ndistance between the misspelled word and the suggested\nword, and second, the edit distance between\nthe misspelled word's metaphone and the suggested\nword's. The smaller this value is for a suggestion, the\nmore likely it is to be the correct substitution from\nthe orthographic or phonetic point of view. However,\nthis ranking does not take domain-specific information\ninto account. Because of this deficiency, correctly\nspelled words in queries are stored and their respective\nnumber of occurrences is counted. The words\nin the suggestion list for a misspelled query term are\nlooked up in this repository and the suggested word\nhaving the highest number of occurrences is chosen\nas the replacement of the erroneous original query\nterm. In case of two or more words having the same\nnumber of occurrences the word that is ranked first\nis selected. If the query term is not present in the\nrepository up to this moment, it is replaced by the\nfirst suggestion, i.e. the word being phonetically or\northographically closest. Therefore, suggested words\nthat are very similar to the misspelled word, yet make\nno sense in the context of the application domain,\nmight be rejected as replacements. Consequently, the\nword correction process described above is improved\nby dynamic adaptation to past knowledge.\nAnother important issue in interpreting the natural\nlanguage query is to detect terms consisting of\nmultiple words. Proper names like \"Bad Kleinkirch-heim\"\nor nouns like \"parking garage\" have to be\ntreated as one element of the query. 
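The ranking of spelling suggestions described above, based on the mean of the orthographic edit distance and the edit distance between the metaphone codes, refined by how often candidates occurred in earlier correct queries, can be sketched as follows. The Levenshtein distance is written out; the metaphone encoding is only stubbed, since the real rule set (and its adaptation to German) is beyond this sketch, and the occurrence repository is reduced to a plain dictionary.

```python
# Sketch of the suggestion ranking used for error correction: candidates are
# ordered by the mean of (a) the edit distance between the misspelled word and
# the candidate and (b) the edit distance between their phonetic codes; counts
# from past correct queries can then override this order.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def metaphone(word: str) -> str:
    """Stub for a phonetic code; a real system would use a metaphone variant."""
    return "".join(c for c in word.lower() if c not in "aeiou")

def rank_suggestions(misspelled: str, candidates: list[str]) -> list[str]:
    def key(candidate: str) -> float:
        d_ortho = levenshtein(misspelled, candidate)
        d_phono = levenshtein(metaphone(misspelled), metaphone(candidate))
        return (d_ortho + d_phono) / 2.0
    return sorted(candidates, key=key)

def choose_replacement(misspelled, candidates, seen_counts):
    """Prefer the suggestion observed most often in past correct queries;
    fall back to the first-ranked suggestion when counts do not decide."""
    ranked = rank_suggestions(misspelled, candidates)
    return max(ranked, key=lambda w: (seen_counts.get(w, 0), -ranked.index(w)))

if __name__ == "__main__":
    cands = ["sauna", "saunas", "fauna"]
    print(rank_suggestions("sauan", cands))
    print(choose_replacement("sauan", cands, {"sauna": 42}))
```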
Therefore, all\nmulti-word denominations known to the system are\nstored in an efficient data structure allowing to identify\nsuch cases. More precisely, regular expressions\nare used to describe rules applied during the identification\nprocess.\n2.1.4\nSQL Mapping\nWith the underlying relational database management\nsystem PostgreSQL, the natural language query has\nto be transformed into a SQL statement to retrieve\n29\nthe requested information. As mentioned above the\nknowledge base describes parameterized SQL fragments\nthat are used to build a single SQL statement\nrepresenting the natural language query. The query\nterms are tagged with class information, i.e. the relevant\nconcepts of the domain (e.g. \"hotel\" as a type\nof accommodation or \"sauna\" as a facility provided\nby a hotel), numerals or modifying terms like \"not\",\n\"at least\", \"close to\" or \"in\". If none of the classes\nspecified in the ontology can be applied, the database\ntables containing proper names have to be searched.\nIf a noun is found in one of these tables, it is tagged\nwith the respective table's name, such that \"Tyrol\"\nwill be marked as a federal state. In the next step,\nthis class information is used by the grammar to select\nthe appropriate SQL fragments. Finally, the SQL\nfragments have to be combined to a single SQL statement\nreflecting the natural language query of the\nuser. The operators combining the SQL fragments\nare again chosen according to the definitions in the\ngrammar.\nAssociative Networks\nQuillian (1968) introduced the basic principle of a\nsemantic network and it played, since then, a central\nrole in knowledge representation. The building\nblocks of semantic networks are, first, nodes that express\nknowledge in terms of concepts, second, concept\nproperties, and third,the hierarchical sub-super class\nrelationship between these concepts.\nEach concept in a semantic network represents a\nsemantic entity. Associations between concepts describe\nthe hierarchical relationship between these semantic\nentities via is-a or instance-of links.\nThe\nhigher a concept moves up in the hierarchy along is-a\nrelations, the more abstract is its semantic meaning\n. Properties are attached to concepts and, therefore\n, properties are also represented by concepts and\nlinked to nodes via labeled associations. Furthermore,\na property that is linked to a high-level concept is inherited\nby all descendants of the concept. Hence, it is\nassumed that the property applies to all subsequent\nnodes. An example of a semantic network is depicted\nin Figure 2.\nSemantic networks initially emerged in cognitive\npsychology and the term itself has been used in the\nfield of knowledge representation in a far more general\nsense than described above. In particular, the\nterm semantic network has been commonly used to\nrefer to a conceptual approach known as associative\nnetwork.\nAn associative network defines a generic\nnetwork which consists of nodes representing information\nitems (semantic entities) and associations between\nnodes, that express, not necessarily defined or\nlabeled, relations among nodes. Links between particular\nnodes might be weighted to determine the\nstrength of connectivity.\n3.1\nSpreading Activation\nA commonly used technique, which implements information\nretrieval on semantic or associative networks,\nis often referred to as spreading activation.\nThe\nspreading activation processing paradigm is tight-knit\nwith the supposed mode of operation of human memory\n. 
It was introduced to the field of artificial intelligence to obtain a means of processing semantic or associative networks. The algorithm underlying the spreading activation (SA) paradigm is based on a quite simple approach and operates on a data structure that reflects the relationships between information items. Thus, nodes model real-world entities and links between these nodes define the relatedness of entities. Furthermore, links may possess, first, a specific direction, second, a label and, third, a weight that reflects the degree of association. This conceptual approach allows for the definition of a more general, more generic network than the basic structure of a semantic network demands. Nevertheless, it can be used to model a semantic network as well as a more generic one, for instance an associative network.\nThe idea underlying spreading activation is to propagate activation starting from source nodes via weighted links over the network. More precisely, the process of propagating activation from one node to adjacent nodes is called a pulse. The SA algorithm is based on an iterative approach that is divided into two steps: first, one or more pulses are triggered and, second, a termination check determines whether the process has to continue or to halt.\nFurthermore, a single pulse consists of a pre-adjustment phase, the spreading process and a post-adjustment phase. The optional pre- and post-adjustment phases might incorporate a means of activation decay, or avoid reactivation from previous pulses. Strictly speaking, these two phases are used to gain more control over the network. The spreading phase implements the propagation of activation over the network. Spreading activation works according to the formula:\nI_j(p) = \sum_{i=1}^{k} O_i(p-1) \cdot w_{ij}    (1)\nEach node j determines the total input I_j at pulse p over all linked nodes. Therefore, the output O_i(p-1) at the previous pulse p-1 of node i is multiplied with the associated weight w_{ij} of the link connecting node i to node j, and the grand total over all k connected nodes is calculated. Inputs or weights can be expressed by binary values (0/1), by inhibitory or reinforcing values (-1/+1), or by real values defining the strength of the connection between nodes. More precisely, the first two options are used in the application of semantic networks, while the latter is commonly used for associative networks. This is due to the fact that the type of association does not necessarily have some exact semantic meaning; the weight rather describes the relationship between nodes. Furthermore, the output value of a node has to be determined. In most cases, no distinction is made between the input value and the activation level of a node, i.e. the input value of a node and its activation level are equal. Before firing the activation to adjacent nodes, a function calculates the output depending on the activation level of the node:\nO_i = f(I_i)    (2)\nVarious functions can be used to determine the output value of a node, for instance the sigmoid function or a linear activation function, but the most commonly used is the threshold function. The threshold function determines whether a node is considered to be active or not, i.e. the activation level of each node is compared to the threshold value. If the activation level exceeds the threshold, the state of the node is set to active.
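A single spreading pulse over a weighted network, following formula (1) and a thresholded variant of formula (2), might be implemented as in the following generic sketch; the node names and weights are invented and the code is not taken from any of the systems discussed.

```python
# Sketch of one spreading-activation pulse: each node sums the weighted
# outputs of its neighbours from the previous pulse (formula 1) and fires
# an output through a threshold function (formula 2).

def pulse(weights: dict, output_prev: dict, threshold: float = 0.1) -> dict:
    """weights[(i, j)] is the link weight from node i to node j."""
    # Formula (1): total input of node j at this pulse.
    input_now = {}
    for (i, j), w_ij in weights.items():
        input_now[j] = input_now.get(j, 0.0) + output_prev.get(i, 0.0) * w_ij

    # Formula (2) with a threshold function: inactive nodes emit no output.
    return {j: (I if I >= threshold else 0.0) for j, I in input_now.items()}

if __name__ == "__main__":
    weights = {("sauna", "steam bath"): 0.8, ("sauna", "solarium"): 0.5,
               ("steam bath", "sauna"): 0.8}
    activation = {"sauna": 1.0}   # activation source
    for _ in range(2):            # two pulses
        activation = pulse(weights, activation)
    print(activation)
```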
Subsequent to the calculation of the\nactivation state, the output value is propagated to\nadjacent nodes. Normally, the same output value is\nsent to all adjacent nodes.\nThe process described\nabove is repeated, pulse after pulse, and activation\nspreads through the network and activates more and\nmore nodes until a termination condition is met. Finally\n, the SA process halts and a final activation state\nis obtained. Depending on the application's task the\n30\naccomodation\nhotel\nanimal\nfarm\npension\npig\nsheep\nfacility\nsteam bath\nsauna\nhot\nis_a\nis_a\nis_a\nis_a\nis_a\nis_a\nis_a\nhas\noffers\nis\nis\nFigure 2: A semantic network example of tourism-related terms\nactivation levels are evaluated and interpreted accordingly\n.\n3.2\nTaming Spreading Activation\nUnfortunately, the basic approach of spreading activation\nentails some major drawbacks. Strictly speaking\n, without appropriate control, activation might be\npropagated all over the network. Furthermore, the semantics\nof labeled associations are not incorporated\nin SA and it is quite difficult to integrate an inference\nmechanism based on the semantics of associations\n. To overcome these undesired side-effects the\nintegration of constraints helps to tame the spreading\nprocess (cf. Crestani (1997)). Some constraints\ncommonly used are described as follows.\nFan-out constraint: Nodes with a broad semantic\nmeaning possess a vast number of links to\nadjacent nodes. This circumstance implies that\nsuch nodes activate large areas of the network.\nTherefore, activation should diminish at nodes\nwith a high degree of connectivity to avoid this\nunwanted effect.\nDistance constraint: The basic idea underlying\nthis constraint is, that activation ceases\nwhen it reaches nodes far away from the activation\nsource. Thus, the term far corresponds\nto the number of links over which activation was\nspread, i.e. the greater the distance between two\nnodes, the weaker is their semantic relationship.\nAccording to the distance of two nodes their relation\ncan be classified. Directly connected nodes\nshare a first order relation. Two nodes connected\nvia an intermediate node are associated by a second\norder relation, and so on.\nActivation constraint: Threshold values are\nassigned to nodes (it is not necessary to apply\nthe same value to all nodes) and are interpreted\nby the threshold function. Moreover, threshold\nvalues can be adapted during the spreading process\nin relation to the total amount of activity in\nthe network.\nPath constraint: Usually, activation spreads\nover the network using all available links. The integration\nof preferred paths allows to direct activation\naccording to application-dependent rules.\nAnother enhancement of the spreading activation\nmodel is the integration of a feedback process. The\nactivation level of some nodes or the entire network\nis evaluated by, for instance, another process or by\na user. More precisely, a user checks the activation\nlevel of some nodes and adapts them according to\nher or his needs.\nSubsequently, activation spreads\ndepending on the user refinement. Additionally, users\nmay indicate preferred paths for spreading activation\nand, therefore, are able to adapt the spreading process\nto their own needs.\nRecommendation via Spreading Activation\nOne of the first information retrieval systems using\nconstrained spreading activation was GRANT. Kjeldsen\n& Cohen (1987) developed a system that handles\ninformation about research proposals and potential\nfunding agencies. 
GRANT's domain knowledge is stored in a highly associated semantic network. The search process is carried out by constrained spreading activation over the network. In particular, the system extensively uses path constraints in the form of path endorsement. GRANT can be considered an inference system that repeatedly applies the same inference schema:\nIF x AND R(x, y) THEN y    (3)\nHere, R(x, y) represents a path between two nodes x and y. This inference rule can be interpreted as follows: \"if a funding agency is interested in topic x and there is a relation between topic x and topic y, then the funding agency might be interested in the related topic y.\"\nCroft, Lucia, Crigean & Willet (1989) developed an information retrieval system initially intended to study the possibility of retrieving documents by plausible inference. In order to implement plausible inference, constrained spreading activation was chosen accidentally. The I3R system acts as a search intermediary (cf. Croft & Thompson (1987)). To accomplish this task the system uses domain knowledge to refine user queries, determine the appropriate search strategy, assist the user in evaluating the output and reformulate the query. In its initial version, the domain knowledge was represented using a tree structure of concepts. The design was later refined to meet the requirements of a semantic network.\nBelew (1989) investigated the use of connectionist techniques in an information retrieval system called Adaptive Information Retrieval (AIR). The system handles information about scientific publications, such as the publication title and the author. AIR uses a weighted graph as its knowledge representation paradigm. For each document, author and keyword (keywords are words found in publication titles) a node is created, and associations between nodes are constructed from an initial representation of documents and attributes. A user's query causes initial activity to be placed on some nodes of the network. This activity is propagated to other nodes until certain conditions are met. Nodes with the highest level of activation represent the answer to the query by the AIR system. Furthermore, users are allowed to assign a degree of relevance to the results (++, +, -, --). This causes new links to be created and the adaptation of weights between existing links. Moreover, feedback is averaged across the judgments of many users.\nA noteworthy aspect of the AIR system is that no provision is made for the traditional Boolean operators like AND and OR. Rather, AIR emulates these logical operations because \"the point is that the difference between AND and OR is a matter of degree\". This insight goes back to Von Neumann (as pointed out by Belew (1989)).\nA system based on a combination of an ostensive approach with the associative retrieval approach is described in Crestani & Lee (2000). In the WebSCSA (Web Searching by Constrained Spreading Activation) approach a query does not consist of keywords. Instead, the system is based on an ostensive approach and assumes that the user has already identified relevant Web pages that act as a basis for the following retrieval process. Subsequently, relevant pages are parsed for links, and these links are followed to search for other relevant associated pages. The user does not explicitly refine the query.
More precisely, users point to\na number of relevant pages to initiate a query and the\nWebSCSA system combines the content of these pages\nto build a search profile. In contrast to conventional\nsearch engines WebSCSA does not make use of extensive\nindices during the search process. Strictly speaking\n, it retrieves relevant information only by navigating\nthe Web at the time the user searches for information\n. The navigation is processed and controlled by\nmeans of a constrained spreading activation model.\nIn order to unleash the power of WebSCSA the system\nshould be used when users already have a point\nto start for her or his search. Pragmatically speaking,\nthe intention of WebSCSA is to enhance conventional\nsearch engines, use these as starting points and not\nto compete with them.\nHartmann & Strothotte (2002) focus on a spreading\nactivation approach to automatically find associations\nbetween text passages and multimedia material\nlike illustrations, animations, sounds, and videos.\nMoreover, a media-independent formal representation\nof the underlying knowledge is used to automatically\nadapt illustrations to the contents of small text segments\n. The system contains a hierarchical representation\nof basic anatomic concepts such as bones, muscles\n, articulations, tendons, as well as their parts and\nregions.\nNetwork structures provide a flexible model for\nadaptation and integration of additional information\nitems. Nevertheless, Crestani (1997) points out that\n\"... the problem of building a network which effec-tively\nrepresents the useful relations (in terms of the\nIRs aims) has always been the critical point of many\nof the attempts to use SA in IR. These networks are\nvery difficult to build, to maintain and keep up to date.\nTheir construction requires in depth application domain\nknowledge that only experts in the application\ndomain can provide.\"\nDittenbach, Merkl & Berger (2003) present an approach\nbased on neural networks for organizing words\nof a specific domain according to their semantic relations\n. A two-dimensional map is used to display\nsemantically similar words in spatially regions. This\nrepresentation can support the construction and enrichment\nof information stored in the associative network\n.\n4.1\nThe Redeveloped System Architecture\nTo overcome the limitations of the knowledge base of\nthe original system, the development of an alternative\napproach to model domain knowledge was necessary\n.\nBasically, the unassociated, non-hierarchic\nknowledge representation model inhibits the power\nof the system. Strictly speaking, the original system\nfailed to retrieve results on a fuzzy basis, i.e. the results\ndetermined by the system provide exact matches\nonly, without respect to first, interactions users made\nduring past sessions, second, personal preferences of\nusers, third, semantic relations of domain intrinsic information\n, and fourth, locational interdependencies.\nIn order to adapt the system architecture accordingly\n, an approach based on associative networks was\ndeveloped. This associative network replaces the flat\nsynonym ontology used in the original system. 
Moreover\n, both the grammar rules and the SQL fragments\nhave been removed from the knowledge base.\nMore precisely, the functionality and logic is now covered\nby newly developed pipeline elements or implic-itly\nresolved by the associative network.\nIn analogy\nto the original pipeline, the first three processing\nsteps are accomplished.\nNext, a newly implemented\ninitializationelement associates concepts extracted\nfrom the query with nodes of the associative\nnetwork. These nodes act as activation sources. Subsequently\n, the newly designed spreading element implements\nthe process of activation propagation. Finally\n, the new evaluationelement analyzes the activation\nlevel of the associative network determined\nduring the spreading phase and produces a ranking\naccording to this activation level.\n4.1.1\nThe Knowledge Representation Model\nBasically, the knowledge base of the information retrieval\nsystem is composed of two major parts: first,\na relational database that stores information about\ndomain entities and, second, a data structure based\non an associative network that models the relationships\namong terms relevant to the domain. Each domain\nentity is described by a freely definable set of attributes\n. To provide a flexible and extensible means\nfor specifying entity attributes, these attributes are\norganized as <name, value> pairs. An example from\nthe tourism domain is depicted in Table 1.\nHotel Wellnesshof\ncategory\n4\nfacility\nsauna\nfacility\nsolarium\nfacility\n...\nTable 1: <name,value>-pair example for entity \"Hotel\nWellnesshof \"\nThe associative network consists of a set of nodes\nand each node represents an information item. Moreover\n, each node is member of one of three logical layers\ndefined as follows:\nAbstraction layer: One objective of the rede-velopment\nof the knowledge base was to integrate\ninformation items with abstract semantic meaning\n. More precisely, in contrast to the knowledge\nbase used in the original system which only\nsupported modeling of entity attributes, the new\napproach allows the integration of a broader set\nof terms, e.g. terms like \"wellness\" or \"summer\nactivities\" that virtually combine several information\nitems.\n32\nConceptual layer: The second layer is used to\nassociate entity attributes according to their semantic\nrelationship. Thus, each entity attribute\nhas a representation at the conceptual layer. Furthermore\n, the strengths of the relationships between\ninformation items are expressed by a real\nvalue associated with the link.\nEntity layer: Finally, the entity layer associates\nentities with information items (entity attributes\n) of the conceptual layer, e.g. an entity\npossessing the attribute \"sauna\" is associated\nwith the saunanode of the conceptual layer.\nThe building blocks of the network are concepts.\nA concept represents an information item possessing\nseveral semantically equivalent terms, i.e. synonyms,\nin different languages. Each concept possesses one of\nthree different roles:\nConcrete concepts are used to represent information\nitems at the conceptual layer. More\nprecisely, concrete concepts refer to entity attributes\n.\nConcepts with an abstract role refer to terms at\nthe abstraction layer.\nFinally, the modifier role is used to categorize\nconcepts that alter the processing rules for abstract\nor concrete concepts. 
A modifier like, for\ninstance, \"not\" allows the exclusion of concepts\nby negation of the assigned initialization value.\nMoreover, concepts provide, depending on their\nrole, a method for expressing relationships among\nthem.\nThe connectedTo relation defines a bidirec-tional\nweighted link between two concrete concepts,\ne.g. the concrete concept \"sauna\" is linked to \"steam\nbath\". The second relation used to link information\nitems is the parentOf association. It is used to express\nthe sub-super class relationship between abstract concepts\nor concrete and abstract concepts.\nA set of concepts representing a particular domain\nis described in a single XML file and acts as input\nsource for the information retrieval system. During\ninitialization, the application parses the XML file, in-stantiates\nall concepts, generates a list of synonyms\npointing at corresponding concepts, associates concepts\naccording to their relations and, finally, links the\nentities to concrete concepts. Currently, the associative\nnetwork consists of about 2,200 concepts, 10,000\nlinks and more than 13,000 entities. The concept network\nincludes terms that describe the tourism domain\nas well as towns, cities and federal states throughout\nAustria.\nTo get a better picture of the interdependencies\nassociated with the layers introduced above see Figure\n3. Each layer holds a specific set of concepts.\nAbstract concepts associate concepts at the same or\nat the conceptual layer. Concepts at the conceptual\nlayer define links between entity attributes and associate\nthese attributes with entities at the entity layer.\nFinally, entities are placed at the lowest layer, the\nentity layer. Concepts at the entity layer are not associated\nwith items at the same layer. Consider, for\nexample, the abstract concept \"indoor sports\" and\nthe concept \"sauna\" as concepts from which activation\noriginates from. First, activation is propagated\nbetween the abstraction layer to the conceptual layer\nvia the dashed line from \"indoor sports\" to \"table tennis\"\n. We shall note, that dashed lines indicate links\nbetween concepts of different layers. Thus, \"sauna\"\nand \"table tennis\" act as source concepts and, moreover\n, activation is spread through the network along\nlinks at the conceptual layer. Activation received by\nconcepts at the conceptual layer is propagated to the\nentities at the entity layer stimulating, in this particular\ncase, the entities \"Hotel Stams\", \"Hotel Thaya\"\nas well as \"Wachauerhof \". Moreover, a fraction of\nactivation is propagated to adjacent concept nodes at\nthe conceptual layer, i.e. \"solarium\", \"whirlpool\" as\nwell as \"tennis\", and to entities, i.e. \"Hotel Wiental\"\nand \"Forellenhof \", respectively.\n4.1.2\nProcessing the Associative Network\nDue to the flexibility and adaptivity of the original\nsystem, the integration of the redesigned parts has\nbeen accomplished with relatively little effort. In particular\n, the existing knowledge base has been replaced\nby the associative network and additional pipeline elements\nto implement spreading activation have been\nincorporated.\nFigure 4 depicts the redeveloped knowledge base\non which the processing algorithm operates.\nThe\nconceptual layer stores concrete concepts and the\nweighted links among them.\nAssociating abstract\nconcepts with concrete concepts is done at the abstraction\nlayer. Each entity has a unique identifier\nthat is equivalent to the entity identifier stored in the\nrelational database. 
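The structure described here, with concepts carrying a role and synonyms, weighted connectedTo links at the conceptual layer, parentOf links from abstract concepts, and entities attached to the concrete concepts they possess as attributes, could be represented along the following lines. Class and field names as well as the sample weights are illustrative; the actual system builds this structure by parsing its XML concept definitions.

```python
# Illustrative in-memory model of the three-layer associative network
# (abstraction layer: abstract concepts; conceptual layer: concrete concepts
# with weighted connectedTo links; entity layer: entities linked to the
# concrete concepts they possess as attributes). Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    role: str                                           # "abstract", "concrete" or "modifier"
    synonyms: set = field(default_factory=set)
    connected_to: dict = field(default_factory=dict)    # concept name -> link weight
    children: list = field(default_factory=list)        # parentOf targets

@dataclass
class Entity:
    entity_id: int                                      # matches the id in the relational database
    attributes: list = field(default_factory=list)      # concrete concept names

sauna      = Concept("sauna", "concrete", {"Sauna"}, {"steam bath": 0.8})
steam_bath = Concept("steam bath", "concrete", {"Dampfbad"}, {"sauna": 0.8})
wellness   = Concept("wellness", "abstract", children=["sauna", "steam bath"])

hotel_stams = Entity(4711, ["sauna", "steam bath", "solarium"])  # placeholder id

concepts = {c.name: c for c in (sauna, steam_bath, wellness)}
print(concepts["wellness"].children, hotel_stams.attributes)
```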
Furthermore, entities are connected\nto concepts at the conceptual layer.\nMore\nprecisely, an entity is connected to all attributes it\npossesses. As an example consider the entity \"Hotel\nStams\" as depicted in Figure 4. This hotel offers\na \"sauna\", a \"steam bath\" and a \"solarium\" and is,\ntherefore, linked to the corresponding concepts at the\nconceptual layer.\nFirst, a user's query, received by the information\nretrieval system, is decomposed into single terms. After\napplying an error correction mechanism and a\nphrase detection algorithm to the query, terms found\nin the synonym lexicon are linked to their corresponding\nconcept at the abstraction or conceptual layer.\nThese concepts act as activation sources and, subsequently\n, the activation process is initiated and activation\nspreads according to the algorithm outlined\nbelow.\nAt the beginning, the role of each concept is evaluated\n. Depending on its role, different initialization\nstrategies are applied:\nModifier role: In case of the \"not\" modifier,\nthe initialization value of the subsequent concept\nis multiplied with a negative number. Due to the\nfact that the \"and\" and \"or\" modifiers are im-plicitly\nresolved by the associative network, they\nreceive no special treatment. More precisely, if,\nfor instance, somebody is searching for an accommodation\nwith a sauna or solarium, those accommodations\noffering both facilities will be ranked\nhigher than others, providing only one of the desired\nfacilities. Furthermore, the \"near\" modifier\nreflecting geographic dependencies, is automatically\nresolved by associating cities or towns\nwithin a circumference of 15km. Depending on\nthe distance, the weights are adapted accordingly\n, i.e. the closer they are together, the higher\nis the weight of the link in the associative network\n.\nAbstract role: If a source concept is abstract,\nthe set of source concepts is expanded by resolving\nthe parentOf relation between parent and\nchild concepts.\nThis process is repeated until\nall abstract concepts are resolved, i.e. the set of\nsource concepts contains members of the conceptual\nlayer only. The initial activation value is\npropagated to all child concepts, with respect to\nthe weighted links.\n33\nFigure 3: Network layer interdependencies\nConcrete role: The initial activation level of\nconcrete concepts is set to initialization value\ndefined in the XML source file. The spreading\nactivation process takes place at the conceptual\nlayer, i.e. the connectedTo relations between adjacent\nconcepts are used to propagate activation\nthrough the network.\nAfter the initialization phase has completed, the iterative\nspreading process is activated. During a single\niteration one pulse is performed, i.e. the number of\niterations equals the number of pulses. Starting from\nthe set of source concepts determined during initialization\n, in the current implementation activation is\nspread to adjacent nodes according to the following\nformula:\nO\ni\n(p) =\n0\nif I\ni\n(p) < ,\nF\ni\np+1\nI\ni\n(p)\nelse, with F\ni\n= (1 C\ni\nC\nT\n) (4)\nThe output, O\ni\n(p), sent from node i at pulse p,\nis calculated as the fraction of F\ni\n, which limits the\npropagation according to the degree of connectivity\nof node i (i.e. fan-out constraint, cf. Section 3.2), and\np + 1, expressing the diminishing semantic relationship\naccording to the distance of node i to activation\nsource nodes (i.e. distance constraint, cf. 
Section 3.2).\nMoreover, F\ni\nis calculated by dividing the number of\nconcepts C\ni\ndirectly connected to node i by the total\nnumber of nodes C\nT\nbuilding the associative network.\nNote, represents a threshold value.\nSimultaneous to calculating the output value for\nall connected nodes, the activation level I\ni\n(p) of node\ni is added to all associated entities. More precisely,\neach entity connected to node i receives the same\nvalue and adds it to an internal variable representing\nthe total activation of the entity. As an example,\nif the concept node \"sauna\" is activated, the activation\npotential is propagated to the entities \"Hotel\nStams\" and \"Hotel Thaya\" (cf. Figure 4). Next, all\nnewly activated nodes are used in the subsequent iteration\nas activation sources and the spreading process\ncontinues until the maximum number of iterations is\nreached.\nAfter the spreading process has terminated, the\nsystem inspects all entities and ranks them according\nto their activation. Figure 5 depicts the results\ndetermined for the example query\nIch und meine Kinder m\nochten in einem Hotel in\nKitzb\nuhel Urlaub machen.\nEs sollte ein Dampfbad\nhaben.\n1\nIn this particular case, the entities \"Schwarzer\nAdler Kitzb\nuhel\" and \"Hotel Schloss Lebenberg\nKitzb\nuhel\" located in \"Kitzb\nuhel\" are suggested to\nbe the best matching answers to the query. Moreover,\nthe result set includes matches that are closely related\nto the user's query. Thus, depending on the relations\nstored in the associative network, entities offering related\nconcepts are activated accordingly. More precisely\n, not only the attributes \"hotel\", \"steam bath\"\nand \"kids\" are taken into account, but also all other\nrelated entity attributes (e.g. \"sauna\", \"whirlpool\",\n\"solarium\", etc.) have some influence on the ranking\nposition. Furthermore, accommodations in cities in\nthe vicinity of \"Kitzb\nuhel\" providing the same or even\nbetter offers are also included in the result set. Thus,\nthe associative network provides a means for exact\ninformation retrieval and incorporates a fuzzy search\nstrategy that determines closely related matches to\nthe user's query.\n1\nMe and my kids would like to spend our holidays in a hotel in\nKitzbhel. It should have a steam bath.\n34\nFigure 4: Knowledge base architecture\nConclusion\nA natural language system based on an approach\ndescribed in Berger (2001) and Berger, Dittenbach,\nMerkl & Winiwarter (2001) has been reviewed in this\npaper and, furthermore, provided the basis for the research\npresented herein. The reviewed system offers\nmultilingual access to information on a restricted domain\n. In this particular case the system operates on\nthe tourism domain. Moreover, users of the search interface\nare encouraged to formulate queries in natural\nlanguage, i.e. they are able to express their intentions\nin their own words.\nWe developed a knowledge representation model\nthat facilitates the definition of semantic relations between\ninformation items exemplified by terms of the\ntourism domain.\nIn particular, an associative network\nbased on a three layered structure was introduced\n. First, the abstraction layer allows modelling\nof terms with a subjective or broader semantic meaning\n, second, the conceptual layer is used to define\nrelations via weighted links between terms, and, finally\n, the entity layer provides a means to associate\nelements stored in a relational database with information\nitems in the associative network. 
Moreover,\na constrained spreading activation algorithm implements\na processing technique operating on the network\n. Generally, the combination of the associative\nnature of the knowledge representation model and the\nconstrained spreading activation approach constitutes\na search algorithm that evaluates the relatedness of\nterms and, therefore, provides a means for implicit\nquery expansion.\nThe flexible method of defining relationships between\nterms unleashes the ability to determine highly\nassociated results as well as results that are predefined\ndue to personal preferences. Moreover, especially designed\nassociative networks can be used to model scenarios\n, as, for instance, a winter holiday scenario that\nfavors accommodations offering winter sports activities\nby adapting the weights of links accordingly.\nOne important task for further enhancement is the\npossibility to express the relevance of query terms.\nUsers should be able to assign a degree of significance\nto terms. Consider, for example, a user searching for\nan accommodation with several amenities in the capital\ncity of Austria. Moreover, the user is a vegetarian.\nTherefore, a means for expressing the importance of\nvegetarian kitchen is needed. In order to accomplish\nthis requirement, the system might be extended to\nunderstand words that emphasis terms, e.g. in analogy\nto modifiers like \"and\", \"or\", \"near\", etc. the\nword \"important\" is handled like a modifier and influences\nthe activation level of the following query\nterm. Additionally, an interface providing a graphical\ninstrument to express relevance by means of a\nslide controller might be considered.\nFurthermore, an associative network might act as\na kind of short term memory. More precisely, during\na user session a particular network is used to store\nthe activation level determined during past user interactions\n. A user, for instance, is searching for a\nhotel in Vienna. Thus, the associative network stores\nthe activation level for further processing. Next, the\nuser might restrict the results to accommodations offering\na sauna. This spreading process is carried out\nusing the associative network determined during the\nprevious interaction.\nReferences\nBaeza-Yates, R. A. & Ribeiro-Neto, B. (1999),\nModern Information Retrieval, Addison-Wesley,\nReading, MA.\nBallesteros, L. & Croft, W. B. (1998), Resolving\nambiguity for cross-language retrieval, in `Re-search\nand Development in Information Retrieval'\n, pp. 6471.\nBelew, R. K. (1989), Adaptive information retrieval:\nUsing a connectionist representation to retrieve\nand learn about documents, in N. J. Nicholas\nJ. Belkin & C. J. Van Rijsbergen, eds, `Proceedings\nof the 12th International Conference on\nResearch and Development in Information Retrieval\n(SIGIR'89)', ACM, pp. 1120.\nBerger, H. (2001), Adaptive multilingual interfaces,\nMaster's thesis, Vienna University of Technology\n.\nBerger, H., Dittenbach, M. & Merkl, D. (2003),\nQuerying tourism information systems in natural\nlanguage, in `Proceedings of the 2nd International\nConference on Information System\nTechnology and its Applications (ISTA 2003)',\nKharkiv, Ukraine.\nBerger, H., Dittenbach, M., Merkl, D. & Winiwarter,\nW. (2001), Providing multilingual natural language\naccess to tourism information, in W. Winiwarter\n, S. Bressan & I. K. 
Ibrahim, eds, `Proceedings\nof the 3rd International Conference on\n35\nFigure 5: Weighted result set determined by constrained spreading activation\nInformation Integration and Web-based Applications\nand Services (IIWAS 2001)', Austrian\nComputer Society, Linz, Austria, pp. 269276.\nCavnar, W. B. & Trenkle, J. M. (1994), N-gram-based\ntext categorization, in `International Symposium\non Document Analysis and Information\nRetrieval', Las Vegas, NV.\nCrestani, F. (1997), `Application of spreading activation\ntechniques in information retrieval', Artificial\nIntelligence Review 11(6), 453582.\nCrestani, F. & Lee, P. L. (2000), `Searching the web\nby constrained spreading activation', Information\nProcessing and Management 36(4), 585\n605.\nCroft, W., Lucia, T., Crigean, J. & Willet, P. (1989),\n`Retrieving documents by plausible inference: an\nexperimental study', Information Processing &\nManagement 25(6), 599614.\nCroft, W. & Thompson, R. H. (1987), `I\n3\nR: A New\nApproach to the Design of Document Retrieval\nSystems', Journal of the American Society for\nInformation Science 38(6), 389404.\nDittenbach, M., Merkl, D. & Berger, H. (2003), Using\na connectionist approach for enhancing domain\nontologies: Self-organizing word category\nmaps revisited, in `Proceedings of the 5th International\nConference on Data Warehousing and\nKnowledge Discovery - (DaWaK 2003)'.\nAccepted\nfor publication.\nHarman, D. K. (1995), Overview of the 3rd Text\nRetrieval Conference (TREC-3), in D. K. Harman\n, ed., `Proceedings of the 3rd Text Retrieval\nConference (TREC-3)', NIST Special Publication\n500225, pp. 119.\nHartmann, K. & Strothotte, T. (2002), A spreading\nactivation approach to text illustration, in `Proceedings\nof the 2nd International Symposium on\nSmart graphics', ACM Press, pp. 3946.\nHull, D. A. & Grafenstette, G. (1996), Querying\nacross languages: A dictionary-based approach\nto multilingual information retrieval, in `Proceedings\nof ACM SIGIR Conference on Research\nand Development in Information Retrieval (SIGIR\n1996)', pp. 4957.\nKjeldsen, R. & Cohen, P. (1987), `The evolution and\nperformance of the GRANT system', IEEE Expert\npp. 7379.\nLevenshtein, V. I. (1966), `Binary codes capable of\ncorrecting deletions, insertions and reversals',\nSoviet Physics Doklady 10(8), 707710.\nPhilips, L. (1990), `Hanging on the metaphone', Computer\nLanguage Magazine 7(12).\nQuillian, M. R. (1968), Semantic memory, in M. Min-sky\n, ed., `Semantic Information Processing', MIT\nPress, pp. 227270.\nSalton, G. (1989), Automatic Text Processing: The\nTransformation, Analysis, and Retrieval of Information\nby Computer, Addison-Wesley, Reading\n, MA.\nSalton, G. & McGill, M. J. (1983), Introduction\nto Modern Information Retrieval, McGraw-Hill,\nNew York.\nVan Rijsbergen, C. J. (1979), Information Retrieval,\nDepartment of Computer Science, University of\nGlasgow.\nXu, F., Netter, K. & Stenzhorn, H. 
", "keywords": "natural language information retrieval;constrained spreading activation;query expansion;spreading activation;multilingual information retrieval system;knowledge representation model;associative networks;knowledge representation;natural language query"} {"name": "29", "title": "An Analytical Model Based on G/M/1 with Self-Similar Input to Provide End-to-End QoS in 3G Networks", "abstract": "The dramatic increase in demand for wireless Internet access has led to the introduction of new wireless architectures and systems including 3G, Wi-Fi and WiMAX. 3G systems such as UMTS and CDMA2000 are leaning towards an all-IP architecture for transporting IP multimedia services, mainly due to its scalability and promising capability of inter-working heterogeneous wireless access networks. During the last ten years, substantial work has been done to understand the nature of wired IP traffic and it has been proven that IP traffic exhibits self-similar properties and burstiness over a large range of time scales. Recently, because of the large deployment of new wireless architectures, researchers have focused their attention towards understanding the nature of traffic carried by different wireless architectures, and early studies have shown that wireless data traffic also exhibits strong long-range dependency. Thus, the classical tele-traffic theory based on a simple Markovian process cannot be used to evaluate the performance of wireless networks. Unfortunately, the area of understanding and modeling of different kinds of wireless traffic is still immature, which constitutes a problem since it is crucial to guarantee tightly bounded QoS parameters to heterogeneous end users of the mobile Internet. In this paper, we make several contributions to the accurate modeling of wireless IP traffic by presenting a novel analytical model that takes into account four different classes of self-similar traffic. The model consists of four queues and is based on a G/M/1 queueing system. We analyze it on the basis of priority with no preemption and find exact packet delays. To date, no closed-form expressions have been presented for G/M/1 with priority.", "fulltext": "INTRODUCTION
During the past decade, researchers have made significant efforts to understand the nature of Internet traffic and it has been proven that Internet traffic exhibits self-similar properties. The first study, which stimulated research on self-similar traffic, was based on measurements of Ethernet traffic at Bellcore [1]. Subsequently, the self-similar feature has been discovered in many other types of Internet traffic including studies on Transmission Control Protocol (TCP) [2, 3], WWW traffic [4], VBR video [5] and Signaling System No. 7 [6]. Deeper studies into the characteristics of Internet traffic have discovered and investigated various properties such as self-similarity [7], long-range dependence [8] and scaling behavior at small time scales [9].
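As a concrete point of reference for what self-similarity means operationally, the following sketch (ours, not part of the cited studies; all names and parameters are illustrative) estimates the Hurst parameter of a packet-count series with the standard aggregated-variance method: for a self-similar series the variance of the block-averaged counts decays like m^(2H-2) in the block size m, so H can be read off the slope of a log-log fit. Short-range-dependent, Poisson-like traffic gives H close to 0.5, whereas measured Internet traffic typically yields H well above 0.5.

import numpy as np

def hurst_aggregated_variance(counts, block_sizes):
    # Aggregated-variance estimate of H: the variance of the block-averaged
    # series scales like m**(2H - 2) for an (asymptotically) self-similar series.
    counts = np.asarray(counts, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(counts) // m
        if n_blocks < 2:
            continue
        blocks = counts[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)   # slope = 2H - 2
    return 1.0 + slope / 2.0

# Sanity check: i.i.d. Poisson counts (no long-range dependence) give H near 0.5.
rng = np.random.default_rng(0)
poisson_counts = rng.poisson(lam=10.0, size=2 ** 16)
print(hurst_aggregated_variance(poisson_counts, [2 ** k for k in range(1, 11)]))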
The references [10, 11] provide two extensive bibliographies on self-similarity and long-range dependence research covering both theoretical and applied papers on the subject.
Concurrently, over the past few years, we have witnessed a growing popularity of Third Generation Systems (3G), which have been designed to provide high-speed data services and multimedia applications over mobile personal communication networks. The Universal Mobile Telecommunication System (UMTS) is the predominant global standard for 3G developed by the Third Generation Partnership Project (3GPP) [12]. The UMTS architecture is shown in Fig. 1. It consists of two service domains, a Circuit Switched (CS) service domain and a Packet Switched (PS) service domain, which is of interest in this paper. In the PS service domain, a UMTS network connects to a public data network (PDN) through the Serving GPRS Support Node (SGSN) and the Gateway GPRS Support Node (GGSN). 3GPP has defined four different QoS classes for UMTS: (1) Conversational, (2) Interactive, (3) Streaming and (4) Background, conversational being the most delay-sensitive and background the least delay-sensitive class [12].
Fig. 1: A Simplified UMTS Network Architecture
With the increasing demand of Internet connectivity and the flexibility and wide deployment of IP technologies, there has emerged a paradigm shift towards IP-based solutions for wireless networking [13]. Several Wireless IP architectures have been proposed [17-23] based on three main IP QoS models, IntServ [14], DiffServ [15] and MPLS [16]. 3GPP has also recently introduced a new domain called IP Multimedia Subsystem (IMS) for UMTS. The main objective of IMS is to deliver innovative and cost-effective services such as IP telephony, media streaming and multiparty gaming by providing IP connectivity to every mobile device [24].
In the light of this, researchers have recently focused on understanding the nature of wireless IP traffic and early studies have shown that wireless data traffic also exhibits self-similarity and long-range dependency [25-28]. Much of the current understanding of wireless IP traffic modeling is based on the simplistic Poisson model, which can yield misleading results and hence poor wireless network planning. Since the properties and behavior of self-similar traffic is very different from traditional Poisson or Markovian traffic, several issues need to be addressed in modeling wireless IP traffic to provide end-to-end QoS to a variety of heterogeneous applications.
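The service discipline assumed for these heterogeneous classes throughout the rest of the paper is non-preemptive priority across four traffic classes, analyzed exactly later in the paper. As a concrete, executable reference for that discipline only (this sketch is ours and, for brevity, feeds the server Poisson arrivals rather than the paper's self-similar streams; mapping class 1 to conversational and class 4 to background is one natural choice, not a 3GPP prescription):

import heapq, random

def simulate_nonpreemptive_priority(arrivals, service_rates, rng):
    # Single server, K classes; class 1 (index 0) has the highest priority and a
    # packet in service is never preempted. arrivals[k] is a list of arrival
    # times of class k; service times are exponential with rate service_rates[k].
    # Returns the mean waiting time (time spent in queue) per class.
    K = len(arrivals)
    pending = sorted((t, k) for k in range(K) for t in arrivals[k])
    queue, waits = [], [[] for _ in range(K)]
    busy_until, i = 0.0, 0
    while i < len(pending) or queue:
        # admit every packet that has arrived by the time the server frees up
        while i < len(pending) and (not queue or pending[i][0] <= busy_until):
            t, k = pending[i]
            heapq.heappush(queue, (k, t))     # ordered by class, then FIFO
            i += 1
        k, t = heapq.heappop(queue)
        start = max(t, busy_until)            # no preemption: wait for the server
        waits[k].append(start - t)
        busy_until = start + rng.expovariate(service_rates[k])
    return [sum(w) / len(w) if w else 0.0 for w in waits]

rng = random.Random(0)
def poisson_arrivals(rate, horizon):
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

arrivals = [poisson_arrivals(0.2, 50_000.0) for _ in range(4)]  # four classes
print(simulate_nonpreemptive_priority(arrivals, [1.0, 1.0, 1.0, 1.0], rng))

As expected, the reported mean waits increase from class 1 to class 4 even though all classes share the same arrival and service rates.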
We begin by giving an\noverview of related work on wired and wireless IP traffic\nmodeling along with a comparison of our model with previous\nwork.\nRELATED WORK\nIn this section, we first discuss the related work which has been\ndone in the area of performance evaluation of wired IP and\nWireless IP networks under self-similar input and then we\ncompare our model with the previous ones.\n2.1\n\nPrevious Work on IP Traffic Modeling\nThere has been much work done on Internet traffic modeling based\non queueing theory in the presence of self-similar traffic [29-34].\nIn [33], a Matrix Geometric (analytical) method is used to compute\nnumerical results for a two class DiffServ link being fed by a\nMarkovian Modulated Poisson Process (MMPP) input. A\nweakness of this model is that MMPP may require an estimation of\na large number of parameters. An OPNET based simulation\napproach was adopted in [34] to see the impact of self-similarity\non the performance evaluation of DiffServ networks. As a result,\nan idea of expected queue length was given in relation to the Hurst\nparameter and server utilization. It is difficult to offer guaranteed\nQoS parameters on the basis of such analysis. The major weakness\nof the majority of available queueing based results is that only the\nFIFO queueing discipline has been considered for serving the\nincoming traffic and thus differential treatment to different kinds\nof traffic can not be provided. In addition, the previous results are\nasymptotic. We also refer the readers to [35-39] for an overview of\nprevious work that has been carried out to evaluate the\nperformance of IP networks. The major drawback of the existing\nwork is that, the queueing models considered are not able to\ncapture the self-similar characteristics of Internet traffic.\nFurthermore, it is important to note that most of the previous work\nis focused on the analysis of one type of traffic only without\ndiscussing its affect on the performance of other kinds of network\ntraffic.\n2.2\n\nPrevious Work on Wireless IP Traffic\nModeling\nFew studies have focused on wireless traffic modeling and here we\ndiscuss the most relevant work. As shown in Fig. 1, the principle\nof allocation of data flows between end users and GGSN leads to\nincreasing load on the network elements when moving closer to\nthe GGSN. Hence, GGSN is the node most exposed to self-similar\ninfluence in UMTS [40]. The influence of self-similar input on\nGGSN performance in the UMTS Release 5 IM-subsystem has\nbeen analyzed on the basis of a FBM/D/1/W queueing system\n(FBM-Fractional Brownian Motion) in [40]. In this work, different\nprobabilistic parameters of GGSN such as average queue length\nand average service rate were also found. The work in [41]\npresents modeling and a simulation study of the Telus Mobility (a\ncommercial service provider) Cellular Digital Packet Data (CDPD)\nnetwork. The collected results on average queueing delay and\nbuffer overflow probability indicated that genuine traffic traces\n181\nproduce longer queues as compared to traditional Poisson based\ntraffic models. To get an overview of the analysis done in wireless\nIP traffic modeling with self-similar input, we refer the readers to\n[42-45]. These studies are merely based on characterization of\nwireless traffic. 
To provide differential treatment to multiple\ntraffic classes with different QoS demands, there is a need to\naccurately determine end-to-end QoS parameters such as delay,\njitter, throughput, packet loss, availability and per-flow sequence\npreservation.\n2.3\n\nComparison of our Model with Prior\nWork\nTo overcome the limitations of the previous work in traffic\nmodeling (wired and wireless IP traffic), we present a realistic and\nnovel analytical model by considering four different classes of\ntraffic that exhibit long-range dependence and self-similarity. Our\nmodel implements four queues based on a G/M/1 queueing system\nand we analyze it on the basis of priority with no preemption. The\ntraffic model considered is parsimonious with few parameters and\nhas been studied in [46]. The model is furthermore similar to\non/off processes, in particular to its variation N-Burst model\nstudied in [47] where packets are incorporated. However, only a\nsingle type of traffic is considered in [47]. We present a novel\nanalytical approach and make the following contributions to\nWireless IP traffic modeling.\nInterarrival Time Calculations: We calculate the packet\ninterarrival time distributions for the particular self-similar traffic\nmodel [46] for the first time in this paper. The distribution of cross\ninterarrival time between different types of packets is derived on\nthe basis of single packet results.\nPacket Delays for Multiple Self-Similar Traffic Classes: We\nconsider a G/M/1 queueing system which takes into account four\ndifferent classes of self-similar input traffic denoted by SS/M/1\nand analyze it on the basis of non preemptive priority and find\nexact packet delays. To date, no closed form expressions have\nbeen presented for G/M/1 with priority.\nEmbedded Markov Chain Formulation: We also formulate the\nembedded Markov chain of G/M/1 by considering all possible\nstates and derive the corresponding transition probabilities.\nThe rest of the paper is organized as follows. Section 3 and 4 are\ndevoted to explaining the self-similar traffic model with multiple\nclasses and the calculation of interarrival times respectively.\nSection 5 explains the procedure of formulating the embedded\nMarkov Chain along with the derivation of packet delays. The\napplications of the model are discussed in section 6. Finally,\nconclusion and future work is given in Section 7.\n\nTRAFFIC MODEL\nThe traffic model considered here [46] belongs to a particular class\nof self-similar traffic models also called telecom process in [48],\nrecently. The model captures the dynamics of packet generation\nwhile accounting for the scaling properties of the traffic in\ntelecommunication networks. Such models, also called infinite\nsource models, are similar to on/off processes with heavy tailed on\nand/or off times. What is more, our model abstracts the packet\narrival process in particular and facilitates queueing analysis by\nthe approaches developed in the sequel.\nIn the framework of a Poisson point process, the model represents\nan infinite number of potential sources. 
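To make the model concrete before the formal development, the following simulation sketch (ours, not taken from [46]) draws one sample of the packet count Y(T) from this infinite-source model, using the ingredients made precise in the next paragraph: session arrivals of rate λ, a per-session packet rate α, and Pareto(δ, b) session durations with H = (3 − δ)/2. For simplicity it starts the system empty at time 0 rather than using the stationary version.

import numpy as np

def simulate_packet_count(T, lam, alpha, delta, b, rng):
    # Sample Y(T): total packets injected in [0, T]. Sessions arrive as a Poisson
    # process of rate lam; each session lasts a Pareto(delta, b) time and emits
    # packets as a local Poisson process of rate alpha. Sessions started before
    # time 0 are ignored, so this is the non-stationary version of the model.
    n_sessions = rng.poisson(lam * T)                              # sessions arriving in [0, T]
    starts = rng.uniform(0.0, T, size=n_sessions)                  # session arrival times S_i
    durations = b * (1.0 - rng.random(n_sessions)) ** (-1.0 / delta)  # Pareto(delta, b) durations R_i
    active = np.minimum(durations, T - starts)                     # each session counts until it ends or until T
    return int(rng.poisson(alpha * active).sum())                  # Poisson(alpha * active_i) packets per session

rng = np.random.default_rng(1)
H = 0.8                        # target Hurst parameter
delta = 3.0 - 2.0 * H          # from H = (3 - delta)/2
samples = [simulate_packet_count(T=1000.0, lam=2.0, alpha=5.0, delta=delta, b=1.0, rng=rng)
           for _ in range(100)]
print(np.mean(samples), np.std(samples))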
The traffic is found by aggregating the number of packets generated by such sources. Each source initiates a session whose duration has a heavy-tailed distribution, in particular a Pareto distribution whose density is given by
g(r) = δ b^{δ} r^{-δ-1}, r > b,
where δ is related to the Hurst parameter by H = (3 - δ)/2. The sessions arrive according to a Poisson process with rate λ. The packets arrive according to a Poisson process with rate α, locally, over each session.
For each class, the traffic Y(t), measured as the total number of packets injected in [0, t], is found by
Y(t) = Σ_{i: S_i ≤ t} U_i(R_i ∧ (t - S_i)),
where U_i, R_i and S_i denote the local Poisson process, the duration and the arrival time of session i, respectively. Hence, Y(t) corresponds to the sum of packets generated by all sessions initiated in [0, t] until the session expires if that happens before t, and until t if it does not. The stationary version of this model based on an infinite past is considered in the calculations below. The packet sizes are assumed to be fixed because each queue corresponds to a certain type of application where the packets have fixed size or at least a fixed service time distribution.
The traffic model Y is long-range dependent and almost second-order self-similar; the autocovariance function of its increments is that of fractional Gaussian noise. Three different heavy traffic limits are possible depending on the rate of increase in the traffic parameters [46, 48]. Two of these limits are well-known self-similar processes, fractional Brownian motion and the Levy process, which do not account for packet dynamics in particular.
INTERARRIVAL TIMES
Packet interarrival time distributions for the particular self-similar traffic model are calculated for the first time in this paper. We consider a single type of packet first. The distributions of cross interarrival times between different types of packets are derived on the basis of the single-packet results.
4.1 Interarrival Times for a Single Class
Although the packet arrival process itself is long-range dependent and shows self-similarity, the number of alive sessions over a period of time, say of length t, has a stationary distribution and is Poisson distributed. The alive sessions at any time can be further split into independent components as those sessions that last longer than t and those that expire before t. Such results are well known [49, p. 273] and will be used to derive the interarrival time distribution of the packets.
Given that there is a packet arrival at an instant in time, we aim to find the distribution of the time until the next arrival, denoted by T. We will find F(t) = P{T > t}, for t ≥ 0. When the event {T > t} is considered, the information that there is a packet arrival is equivalent to the information that there is at least one session alive at the given instant. This follows from the assumption that the local packet generation process is Poisson over each session. The probability that the next interarrival is greater than t
That is,\n=\n> }\n{\nt\nT\nP\nP {Next packet interarrival is greater than t | there\nis a packet arrival}\n=\n\n1\nP {Next packet interarrival is greater than t, there is at least\none alive session}\nwhere\n\nis the probability that there is at least one alive session,\nin other words the utilization of an\nsystem. The\nevent that next packet interarrival is greater than t can be split as\nfollows:\n\n/\n/ G\nM\n\n\nThe active sessions that expire before t do not incur\nany new arrivals.\n\n\nThe active sessions that expire after t do not incur any\nnew arrivals\n\n\nNo new session arrivals in t or at least one session\narrival with no packet arrival in t.\nWe find the probability that all three events occur at the same\ntime by using the independence of a Poisson point process over\ndisjoint sets. The result is\n}\n{\nt\nT\nP\n>\n=\n)]\n1\n(\nexp[\n{\n1\n)\n(\n)\n(\nt\nB\nv\nA\nv\ne\nt\ne\ne\nt\nt\n\n\n\n\n\n\n\n\n)\n1\n]\n/\n)\n1\n)(\n(\nexp[\n)\n1\n]\n)\n(\n(exp[\n\n\n\n\nt\ne\nB\nv\ne\nA\nv\nt\nt\nt\nt\n\n\n\n\n)\n1\n]\ne\n)\n(\n)](exp[\n1\n(\nexp[\ne\ne\n)\n(\n)\n(\n\n\n+\n\n\n\nt\nt\nt\nB\nA\nA\ne\nt\nt\nt\n\n\n\n\n\n\n\n}\n)\n1\n]\n/\n)\ne\n1\n)(\n(\n)](exp[\n1\n(\nexp[\ne\ne\n)\n(\n)\n(\n\n\n\n+\n\n\n\nt\nB\ne\nt\nt\nt\nt\nB\nA\nt\nt\n\n\n\n\n\n\n\n\nwhere\n=\n)\n(\nt\nA\n\n\n\nt\ndy\ny\ng\nt\ny\n)\n(\n)\n(\n\n\n(1)\n\n\n\n\n\n+\n=\n\nt\nt\nt\nG\nt\ndy\ny\nyg\nB\n0\n)\n(\n)\n(\n)\n(\n\n\n\n(2)\n\nand\n)\n]\nduration\nsession\n[\nE\nexp(\n1\n\n\n\n=\n\n]\n)\n1\n/(\nexp[\n1\n\n\n=\n\n\n\nb\n\nbecause the steady state number in the system in\n\n/\n/ G\nM\n\nqueue is Poisson distributed with mean\n\nE[Session duration]\n[50], and\n\nand b are the parameters of the session duration with\ncomplementary distribution function\nG\nand density\n1\n)\n(\n\n=\n\n\n\nr\nb\nr\ng\n\n\nb\nr\n>\nwhich is Pareto.\n4.2\n\nInterarrival Times for Multiple Classes\nHere we explain the detailed procedure to find out the Interarrival\ntimes for two classes, the Interarrival times for more than two\nclasses can be found in a similar way. Let\ndenote the\ninterarrival time between a class i packet that comes first and a\nclass j packet that follows,\nThe analysis, which can\nbe extended to\n, provides a method for other self-similar\nmodels as well provided that the distribution of interarrivals\n\nare available.\nij\nT\n.\n2\n,\n1\n,\n=\nj\ni\n3\n,\n\nj\ni\ni\nT\nFor the consecutive packet 1 arrival time\n, we have\n11\nT\n,\n{\n}\n{\n1\n11\nt\nT\nP\nt\nT\nP\n>\n=\n>\nno arrivals of class 2 in\n\n}\n1\nT\n\n\n=\nt\nT\nds\ns\nf\nP\n)\n(\n}\ns\nin\n2\nclass\nof\narrivals\nno\n{\n1\n\n\n\n=\nt\nT\nds\ns\nf\ns\nF\n)\n(\n)\n(\n1\n0\n2\n\nwhere\n)\n(\n)\n(\n0\n2\n2\n2\n)\n(\nt\nt\nA\nv\nB\nv\ne\ne\nt\nF\n\n=\n.\n]\n)\n(\nexp[\n)]\n1\n(\nexp[\n2\n2\n2\n2\nt\nt\nt\ne\nA\nv\ne\nt\n\n\n\n\n\n\n\n(3)\n]\n/\n)\n1\n)(\n(\nexp[\n2\n2\n2\nt\ne\nB\nv\nt\nt\n\n\n\n\n)\n(\n2\nt\nA\n\nand\n)\n(\n2\nt\nB\n\nare defined analogously as in (1) and (2),\nand we used the independence of class 1 and 2 packet inputs.\nHere,\n0\n2\nF\nis found through similar arguments used for P {T>t}\nin the last subsection, without assuming that there is an alive\nsession of type 2 . As a result, by differentiation we find\n)\n(\n)\n(\n)\n(\n0\n2\n1\n11\nt\nF\nt\nf\nt\nf\nT\nT\n=\n\nNow consider the interarrival time T\n12\noccurring between a class 1\npacket followed by a class 2 packet. 
For T\n12\n, we get\n=\n}\n{\n12\nt\nT\nP\n\nt\nds\ns\nP\ns\nf\n0\n0\n2\n}\narrived\npacket\nclass1\na\n|\nin\n1\nclass\nof\narrivals\nno\n{\n)\n(\n\n\n=\nt\nT\nds\ns\nF\ns\nf\n0\n0\n2\n)\n(\n)\n(\n1\n\nwhere\nis the density function corresponding to the event\nthat there is an arrival of class 2 packet at time s, and we used\nindependence of class 1 and 2 packet streams. As a matter of fact,\ncan be obtained by taking the derivative of the\ncomplementary distribution function\n)\n(\n0\n2\ns\nf\n)\n(\n0\n2\ns\nf\n0\n2\nF\ngiven in (3). As a\nresult, we get\n)\n(\n)\n(\n)\n(\n1\n12\n0\n2\nt\nF\nt\nf\nt\nf\nT\nT\n=\n\n183\nSimilarly, it can be shown that\n)\n(\n)\n(\n)\n(\n0\n1\n2\n22\nt\nF\nt\nf\nt\nf\nT\nT\n=\n,\n)\n(\n)\n(\n)\n(\n2\n21\n0\n1\nt\nF\nt\nf\nt\nf\nT\nT\n=\nQUEUEING MODEL\nWe consider a model of four queues based on G/M/1 by\nconsidering four different classes of self-similar input traffic\ndenoted by SS/M/1, and analyze it on the basis of priority with no\npreemption. Let the service time distribution have rate\n1\n\n,\n2\n\n3\n,\n\nand\n4\n\nfor type 1, type 2, type 3 and type 4 packets,\nrespectively, and let type 1 packets have the highest priority and\ntype 4 packets have the lowest priority.\n5.1\nWith Four Classes\n1\n/\n/ M\nSS\nThe usual embedded Markov chain [51] formulation of\nis based on the observation of the queueing system at\nthe time of arrival instants, right before an arrival. At such\ninstants, the number in the system is the number of packets that\narriving packet sees in the queue plus packets in service, if any,\nexcluding the arriving packet itself. We specify the states and the\ntransition probability matrix P of the Markov chain with the self-similar\nmodel for four types of traffic.\n1\n/\n/ M\nG\nLet\ndenote the embedded Markov chain at the\ntime of arrival instants. As the service is based on priority, the\ntype of packet in service is important at each arrival instant of a\ngiven type of packet to determine the queueing time. Therefore,\nwe define the state space as:\n}\n0\n:\n{\n\nn\nX\nn\n}\n,\n,\n,\n},\n,\n,\n,\n,\n{\n},\n,\n,\n,\n{\n:\n)\n,\n,\n,\n,\n,\n{(\n4\n3\n2\n1\n4\n3\n2\n1\n4\n3\n2\n1\n4\n3\n2\n1\n+\n\n\n\n=\nZ\ni\ni\ni\ni\nI\ns\ns\ns\ns\ns\na\na\na\na\na\ns\na\ni\ni\ni\ni\nS\n(4)\nwhere\nare labels to denote the type of\narrival,\nare labels to denote the type of packet in\nservice,\nare the number of packets in each queue\nincluding a possible packet in service, I denotes the idle state in\nwhich no packet is in service or queued and\nis the set of\nnonnegative integers. Some of the states in the state space S given\nin (4) have zero probability. For example,\n\nis impossible. The particular notation in (4) for S is chosen for\nsimplicity, although the impossible states could be excluded from\nS. Each possible state, the reachable states from each and the\ncorresponding transition probabilities will be calculated.\n4\n3\n2\n1\n,\n,\n,\na\na\na\na\n4\n3\n2\n1\n,\n,\n,\ns\ns\ns\ns\n4\n3\n2\n1\n,\n,\n,\ni\ni\ni\ni\n+\nZ\n)\n,\n,\n,\n,\n0\n,\n(\n2\n1\n4\n3\n1\ns\na\ni\ni\ni\n5.2 States of the Embedded Markov Chain\nThe states of the Markov chain and the possible transitions with\nrespective probabilities can be enumerated by considering each\ncase. We will only analyze the states with non-empty queues in\nthis paper.\n5.2.1 States\nwith\n)\n,\n,\n,\n,\n,\n(\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\n0\n,\n,\n,\n4\n3\n2\n1\n\ni\ni\ni\ni\n\nWe can divide the states and transitions into 256 groups. 
Because\n(a, s) can occur 4x4=16 different ways, and the next state (p, q)\ncan be composed similarly in 16 different ways as\n}\n,\n,\n,\n{\n,\n4\n3\n2\n1\na\na\na\na\np\na\n\nand\n. We\nwill analyze only the first one in detail; the others follow\nsimilarly.\n}\n,\n,\n,\n{\n,\n4\n3\n2\n1\ns\ns\ns\ns\nq\ns\n\n\n5.2.2 Transition from\n\n)\n,\n,\n,\n,\n,\n(\n)\n,\n,\n,\n,\n,\n(\n2\n2\n4\n3\n2\n1\n1\n1\n4\n3\n2\n1\ns\na\nj\nj\nj\nj\ns\na\ni\ni\ni\ni\n\nThis is the case where a transition occurs from an arrival of type 1\nto an arrival of type 2 such that the first arrival has seen a type 1\npacket in service, packets of type 1 (equivalently, total of\nqueue 1 and the packet in service) and\npackets of type 2 (in\nthis case only queue 2), packets of type 3 and\npackets of\ntype 4 in the system. The transition occurs to\npackets of type\n1,\npackets of type 2, with a type 2 packet in service,\n\npackets of type 3 and\npackets of type 4 in the system. Due to\npriority scheduling, an arrival of type 2 can see a type 2 packet in\nservice in the next state only if all type 1 packets including the\none that arrived in the previous state are exhausted during the\ninterarrival time. That is why\ncan take only the value 0 and\nexactly\n1\ni\n2\ni\n3\ni\n4\ni\n1\nj\n2\nj\n3\nj\n4\nj\n1\nj\n1\n1\n+\ni\npackets of type 1 are served. In contrast, the\nnumber of packets served from queue 2, say k, can be anywhere\nbetween 0 and\n1\n2\ni\nas at least one type 2 packet is in the\nsystem, one being in service, when a new arrival occurs. The\ntransition probabilities are\n)}\n,\n,\n,\n,\n,\n(\n|\n)\n,\n,\n,\n,\n,\n0\n(\n{\n1\n1\n4\n3\n2\n1\n2\n2\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\nX\ns\na\ni\ni\nk\ni\nX\nP\nn\nn\n=\n=\n+\n1\n{\n1\n+\n=\ni\nP\nserved from type 1, k served from type 2 and a type\n2 packet remains in service during\n}\n12\nT\nwhere we use the fact that the remaining service time of a type 1\npacket in service has the same exponential distribution Exp(\n1\n\n),\ndue to the memory-less property of a Markovian service.\nTherefore, for\n1\n,\n,\n0\n2\n=\ni\nk\nK\n\n)}\n,\n,\n,\n,\n,\n(\n|\n)\n,\n,\n,\n,\n,\n0\n(\n{\n1\n1\n4\n3\n2\n1\n2\n2\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\nX\ns\na\ni\ni\nk\ni\nX\nP\nn\nn\n=\n=\n+\n\n\n\n+\n+\n=\n0 0\n)\n(\n)\n(\n)\n(\n12\n2\n1\n1\n1\n2\nt\nx\nt\nT\nS\nS\nS\ndt\ndx\nds\nt\nf\nx\nf\ns\nf\nk\ni\n\nwhere\n: sum of l independent service times of type m\npackets, m=1, 2,\nl\nm\nS\n+\nZ\nl\n. Note that\nhas an Erlang\ndistribution with parameters\nl\nm\nS\n)\n,\n(\nm\nl\n\nas each service time has an\nexponential distribution, and the sum\nbeing the sum\nof several exponentially distributed random variables has a\nhypoexponential distribution. The density functions of all these\ndistributions can easily be evaluated numerically. Similarly, we\ncan enumerate all 256 cases. The results for first 64 cases are\ngiven in Table 1 in the Appendix.\n2\n1\n2\n1\nl\nl\nS\nS\n+\n184\n\n5.3 Limiting Distribution and Waiting Times\nSteady state distribution\n\nas seen by an arrival can be found by\nsolving\n\n\n=\nP\nusing the transition matrix\nP\nof the Markov\nchain analyzed above. In practice, the queue capacity is limited in\na router. 
So, the steady state distribution exists.\nTo the best of our knowledge, no previous analytical expressions\nare available for the waiting time of a G/M/1 queue with priority.\nOur analysis relies on the limiting distribution of the state of the\nqueue at the arrival instances, which can be computed using the\nanalysis given above for our self-similar traffic model. In general,\nthe following analysis is valid for any G/M/1 queueing system\nwhere the limiting distribution\n\nat the arrival instances can be\ncomputed. The expected waiting time for the highest priority\nqueue can be found as\n+\n+\n+\n=\n\n\n=\n=\n=\n=\n=\n=\n=\n=\n)\n,\n,\n,\n,\n,\n(\n)\n1\n(\n)\n,\n,\n,\n,\n,\n(\n]\n[\n2\n1\n4\n3\n2\n1\n1\n0\n1\n0\n0\n2\n1\n1\n1\n1\n4\n3\n2\n1\n1\n1\n0\n0\n0\n1\n1\n1\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\ns\na\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nW\nE\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n\n\n\n\n\n)\n,\n,\n,\n,\n,\n(\n)\n1\n(\n)\n,\n,\n,\n,\n,\n(\n)\n1\n(\n4\n1\n4\n3\n2\n1\n1\n0\n0\n0\n1\n4\n1\n1\n3\n1\n4\n3\n2\n1\n3\n1\n0\n0\n1\n0\n1\n1\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\ns\na\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n\n\n\n\n\n\n\n\n=\n=\n=\n=\n=\n=\n=\n=\n+\n+\n+\nwhere\nand\nare the respective capacities of each\nqueue. This follows clearly from the fact that an arriving packet\nof higher priority will wait until all packets of the same priority as\nwell as the packet in service are served. Depending on the type of\nthe packet in service, we have the constituent expressions in the\nsum.\n,\n,\n2\n1\nJ\nJ\n3\nJ\n4\nJ\nOn the other hand, we obtain the expected waiting time for the\nlow priority queues by analyzing the events that constitute this\ndelay. The amount of work in the system at any time is defined as\nthe (random) sum of all service times that will be required by the\npackets in the system at that instant. The waiting time of a type 2\npacket (which is 2\nnd\nhighest priority queue) can be written as\n....\n3\n2\n1\n2\n+\n+\n+\n=\nZ\nZ\nZ\nW\n(5)\nwhere Z\n1\nis the amount of work seen by the arriving packet in\nqueue 1 and queue 2 (i.e, higher priority and equal priority), Z\n2\nis\nthe amount of work associated with higher priority (i.e.type 1)\npackets arriving during Z\n1\n, Z\n3\nis the amount of work associated\nwith type 1 packets arriving during Z\n2\n, and so on. As illustrated in\nFig.2, the waiting time of an arriving packet of type 2 is indeed\ngiven by the total workload building in front of it. The arrows in\nthe figure denote the arrival times of type 1 packets, and all the\noblique lines have 45 degrees angle with the time axis. In this\nfigure the waiting time is\n4\n3\n2\n1\n2\nZ\nZ\nZ\nZ\nW\n+\n+\n+\n=\nfor example.\nLet M\nj\ndenote the number of type j arrivals over Z\ni\n, j=1,2,....Then\n\n\nL\n+\n+\n+\n=\n2\n1\n1\n1\n1\n2\nM\nM\nS\nS\nZ\nW\n\nwhere\ndenotes the random sum of M\nj\nM\nS\n1\nj\nindependent service\ntimes of type 1 packets. Then,\nL\n+\n+\n+\n=\n]\n[\n]\n[\n]\n[\n]\n[\n]\n[\n[\n2\n1\n1\n1\n1\n2\n]\nM\nE\nS\nE\nM\nE\nS\nE\nZ\nE\nW\nE\n\nsince the service times and the arrival process are independent.\nFor a stationary packet arrival process, we get\n]\n[\n]\n[\n]]\n|\n[\n[\n]\n[\n1\n1\nj\nj\nj\nj\nj\nZ\nE\nc\nZ\nc\nE\nZ\nM\nE\nE\nM\nE\n=\n=\n=\n\ndue to mentioned independence, where\nis a constant\nparticular to the arrival process. 
That is, expectation of the\nnumber of arrivals in any period of time is proportional to the\nlength of that period because of stationarity in time and linearity\nof expectation. In our stationary self-similar traffic input process,\nc\n0\n1\n>\nc\n1\nis the expected number of arrivals per unit time which can be\ncalled the arrival rate, given by the product of the arrival rate of\nsession arrivals, the arrival rate of packets over a session, and the\nexpected session length [46]. Explicitly,\n.\nHence, the expected waiting time reduces to\n)\n1\n/(\n1\n=\n\n\nb\nc\nL\n+\n+\n+\n=\n]\n[\n]\n[\n]\n[\n]\n[\n]\n[\n[\n2\n1\n1\n1\n1\n1\n1\n2\n]\nZ\nE\nc\nS\nE\nZ\nE\nc\nS\nE\nZ\nE\nW\nE\n\n\n]\n[\n]\n[\n)\n]\n[\n]\n[\n(\n]\n[\n2\n1\n1\n1\n2\n1\n1\n1\n1\nW\nE\nc\nZ\nE\nZ\nE\nZ\nE\nc\nZ\nE\n\n\n+\n=\n+\n+\n+\n=\nL\n\nIn view of (5), therefore we get\nfrom\n]\n[\n2\nW\nE\n+\n+\n+\n+\n=\n\n\n=\n=\n=\n=\n=\n=\n=\n=\n)\n,\n,\n,\n,\n,\n(\n)\n(\n)\n,\n,\n,\n,\n,\n(\n)\n(\n]\n[\n2\n2\n4\n3\n2\n1\n2\n2\n0\n1\n1\n0\n0\n1\n1\n1\n2\n4\n3\n2\n1\n2\n2\n1\n1\n0\n0\n0\n1\n1\n2\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\ns\na\nj\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nj\nW\nE\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n\n\n\n\n\n\n1\n2\n1\n4\n2\n4\n3\n2\n1\n4\n0\n1\n0\n0\n1\n2\n2\n1\n1\n3\n2\n4\n3\n2\n1\n3\n0\n1\n0\n1\n0\n2\n2\n1\n1\n]\n[\n)\n,\n,\n,\n,\n,\n(\n)\n1\n(\n)\n,\n,\n,\n,\n,\n(\n)\n1\n(\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\n\n\n\n\n\n\n\n\n\nW\nE\nc\ns\na\nj\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n+\n+\n+\n+\n+\n+\n\n\n=\n=\n=\n=\n=\n=\n=\n=\n\nZ\n4\nZ\n3\nZ\n2\nZ\n1\ntime\nWork (time)\nFig.2 Waiting time of a type 2 packet in terms of Z\nj\n's.\n185\nSimilarly, we can directly write down the expected waiting\ntime for a packet of type 3 (3\nrd\npriority queue) and type 4\n(lowest priority queue). 
The expected waiting time for a\npacket of type 3 can be found from:\n+\n+\n+\n+\n+\n+\n=\n\n\n=\n=\n=\n=\n=\n=\n=\n=\n)\n,\n,\n,\n,\n,\n(\n)\n(\n)\n,\n,\n,\n,\n,\n(\n)\n(\n]\n[\n2\n3\n4\n3\n2\n1\n3\n3\n0\n1\n1\n0\n0\n2\n2\n1\n1\n1\n3\n4\n3\n2\n1\n3\n3\n2\n2\n1\n0\n1\n0\n0\n1\n1\n3\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\ns\na\nj\nj\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nj\nj\nW\nE\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n\n\n\n\n\n\n\n\n]\n[\n)\n(\n)\n,\n,\n,\n,\n,\n(\n)\n1\n(\n)\n,\n,\n,\n,\n,\n(\n)\n(\n3\n2\n2\n1\n1\n4\n3\n4\n3\n2\n1\n4\n0\n0\n1\n0\n1\n3\n3\n2\n2\n1\n1\n3\n3\n4\n3\n2\n1\n3\n3\n2\n2\n0\n0\n1\n1\n0\n1\n1\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\nW\nE\nc\nc\ns\na\nj\nj\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nj\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n\n\n\n\n\n\n\n\n\n\n\n+\n+\n+\n+\n+\n+\n+\n+\n\n\n=\n=\n=\n=\n=\n=\n=\n=\nand\ncan be determined from\n]\n[\n4\nW\nE\n+\n+\n+\n+\n+\n+\n+\n+\n=\n\n\n=\n=\n=\n=\n=\n=\n=\n=\n)\n,\n,\n,\n,\n,\n(\n)\n(\n)\n,\n,\n,\n,\n,\n(\n)\n(\n]\n[\n2\n4\n4\n3\n2\n1\n4\n4\n3\n3\n0\n1\n0\n1\n0\n2\n2\n1\n1\n1\n4\n4\n3\n2\n1\n4\n4\n3\n3\n2\n2\n1\n0\n0\n1\n0\n1\n1\n4\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\ns\na\nj\nj\nj\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nj\nj\nj\nW\nE\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n\n\n\n\n\n\n\n\n\n\n+\n+\n+\n+\n+\n+\n+\n+\n\n\n=\n=\n=\n=\n=\n=\n=\n=\n)\n,\n,\n,\n,\n,\n(\n)\n(\n)\n,\n,\n,\n,\n,\n(\n)\n(\n4\n4\n4\n3\n2\n1\n4\n4\n0\n0\n0\n1\n1\n3\n3\n2\n2\n1\n1\n3\n4\n4\n3\n2\n1\n4\n4\n0\n0\n1\n1\n0\n3\n3\n2\n2\n1\n1\n1\n1\n2\n2\n3\n3\n4\n4\n1\n1\n2\n2\n3\n3\n4\n4\ns\na\nj\nj\nj\nj\nj\nj\nj\nj\ns\na\nj\nj\nj\nj\nj\nj\nj\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\nJ\nj\n\n\n\n\n\n\n\n\n\n\n]\n[\n)\n(\n4\n3\n3\n2\n2\n1\n1\nW\nE\nc\nc\nc\n\n\n\n+\n+\nAPPLICATIONS OF THE MODEL\nHere we give an overview of the prime application of the model.\n3G systems such as UMTS and CDMA2000 are leaning towards\nan all-IP network architecture for transporting IP multimedia\nservices [52]. An all-IP DiffServ platform is the currently most\npromising architecture to interwork the heterogeneous wireless\naccess networks and the Internet to provide broadband access,\nseamless global roaming and QoS guarantees for various IP\nmultimedia services [53]. To transport UMTS services through IP\nnetworks without loosing end-to-end QoS provisioning, a\nconsistent and efficient QoS mapping between UMTS services and\nIP QoS classes is required. According to 3GPP, UMTS-to-IP QoS\nmapping is performed by a translation function in the GGSN\nrouter that classifies each UMTS packet flow and maps it to a\nsuitable IP QoS class [52]. In order to make accurate mappings and\nto ensure guaranteed QoS parameters to the end user of mobile\nInternet, it is essential to being able to accurately model the end-to\n-end behavior of different classes of wireless IP traffic\n(conversational, interactive, streaming and background) passing\nthrough a DiffServ domain. Several queueing tools have been\ndeveloped that can be implemented in IP routers within different\nQoS domains including Priority Queueing (PQ), Custom Queueing\n(CQ), Weighted Fair Queueing (WFQ), Class Based Weighted Fair\nQueueing (CBWFQ) and Low-Latency Queueing (LLQ) [54].\nOur model is directly applicable to the problem of determining the\nend-to-end queueing behavior of IP traffic through both Wired and\nwireless IP domains, but modeling accuracy is more crucial in\nresource constrained environments such as wireless networks. 
For\nexample, our model is directly able to analyze the behavior of four\ndifferent QoS classes of UMTS traffic passing through a DiffServ\ndomain, in which routers are implemented with priority queueing.\nThus, the model enables tighter bounds on actual behavior so that\nover-provisioning can be minimized. It also enables translations of\ntraffic behavior between different kinds of QoS domains so that it\nis possible to map reservations made in different domains to\nprovide session continuity.\nCONCLUSION AND FUTURE WORK\nIn this paper, we have presented a novel analytical model based on\nG/M/1 queueing system for accurate modeling of wireless IP\ntraffic behavior under the assumption of four different classes of\nself-similar traffic. We have analyzed it on the basis of non-preemptive\npriority and explicit expressions of expected waiting\ntime for the corresponding classes have been derived. The model\nrepresents an important step towards the overall aim of finding\nrealistic (under self-similar traffic assumptions) end-to-end QoS\nbehavior (in terms of QoS parameters such as delay, jitter and\nthroughput) of multiple traffic classes passing through\nheterogeneous wireless IP domains (IntServ, DiffServ and MPLS).\nAt the moment, we are working on the numerical analysis to find\nsolutions to the suggested imbedded Markov Chain in order to find\nexact QoS parameter bounds for a given system. Our future work\nwill focus to analyze the performance of different QoS domains\nimplemented with different queueing disciplines.\nREFERENCES\n[1]\n\nW. Leland, M. Taqqu, W. Willinger and D. Wilson, \"On the self-similar\nnature of Ethernet traffic (extended version)\", IEEE/ACM\nTransactions on Networking, vol. 2. no. 1, pp. 1-15, Feb. 1994.\n[2]\n\nV. Paxon, \"Empirically derived analytical models of wide-area TCP\nconnections\", IEEE/ACM Transactions on Networking, vol. 2, pp.\n316-336, Aug. 1994\n[3]\n\nV. Paxon and S. Floyd , \"Wide-area traffic: the failure of Poisson\nmodeling\", in Proc. ACM SIGCOMM 94, London, U.K., Aug.\n1994, pp. 257-268\n[4]\n\nM. Crovella and A. Bestavros, \"Explaining World Wide Web\nTraffic Self-Similarity\", Tech. Rep. TR-95-015, Boston University,\nCS Dept, Boston, MA 02215, Aug. 1995\n[5]\n\nM. W. Garrett and W. Willinger, \"Analysis, Modeling and\nGeneration of Self-Similar VBR Video Traffic\", ACM Computer\nCommunication Review, vol. 24, Oct. 1994, SIGCOMM 94\nSymposium\n[6]\n\nW. Willinger et al, \"Statistical analysis of CCSN/SS7 traffic data\nfrom working CCS subnetworks\", IEEE. Journal on Selected Areas\nof Communication, vol. 12, no. 3, pp. 544-551, Apr. 1994\n[7]\n\nM. Crovella and A. Bestavros, \"Self-Similarity in World Wide Web\nTraffic: Evidence and Possible Causes\", in ACM Sigmetrics, May\n1996\n[8]\n\nJ.C Bolot and M. Grossglauser, \"On the Relevance of Long-Range\nDependence in Network Traffic\", Computer Communication\nReview, vol. 26, no. 4, pp. 15-24, October 1996.\n186\n[9]\n\nZ. L. Zhang, V. Ribeiro, S. Moon and C. Diot, \"Small-Time\nScaling behavior of internet backbone traffic: An Empirical Study\",\nin IEEE INFOCOM, march 2003\n[10]\n\nM. S. Taqqu, \"Self-Similar processes\". In S. Kotz and N. Johnson,\neditors, Encyclopedia of Statistical Sciences, vol. 8, pp. 352-357.\nWiley, New York, 1988.\n[11]\n\nW. Willinger, M.S Taqqu and A. Erramilli, \"A bibliographical\nguide to self-similar traffic and performance modeling for modern\nhigh speed networks\", In F. P. Kelly, S. Zachary and I. 
Ziedins,\neditors, Stochastic Networks: Theory and Applications, pp. 339-366\n, Claredon Press, Oxford, 1996\n[12]\n\nH. Holma and A. Taskala, \"WCDMA for UMTS, Radio Access for\nThird Generation Mobile Communications, 2\nnd\nEdition\", John\nWiley & Sons, Ltd. 2002, pp. 1-5\n[13]\n\nJ. Yang and I. Kriaras, \"Migration to all-IP based UMTS networks,\n\" IEEE 1\nst\nInternational Conference on 3G Mobile Communication\nTechnologies , 27-29 March, 2000, pp. 19-23\n[14]\n\nW. Stallings, \"Integrated Services Architecture: The Next-Generation\nInternet\", International Journal of Network\nManagement, 9, 1999, pp. 38-43\n[15]\n\nS. Blake et al., \"An Architecture for Differentiated Services\", IETF\nRFC 2475, Dec. 1998\n[16]\n\nRosen E. et al., \"Multiprotocol Label Switching (MPLS)\nArchitecture\", RFC 3031, Jan. 2001\n[17]\n\nK. Venken, J. De Vriendt and D. De Vleeschauwer, \"Designing a\nDiffServ-capable IP-backbone for the UTRAN\", IEEE 2\nnd\n\nInternational Conference on 3G Mobile Communication\nTechnologies, 26-28 March 2001, pp. 47-52\n[18]\n\nS. Maniatis, C. Grecas and I. Venieris, \"End-to-End QoS Issues\nOver Next Generation Mobile Internet\", IEEE Symposium on\nCommunication and Vehicular Technology, 2000, SVCT-2000, 19\nOct, 2000, pp. 150-154\n[19]\n\nP. Newman, Netillion Inc. \"In Search of the All-IP Mobile\nNetwork\", IEEE Communication Magazine, vol. 42, issue 12, Dec.\n2004, pp. S3-S8\n[20]\n\nG. Araniti, F. Calabro, A. Iera, A. Molinaro and S. Pulitano,\n\"Differentiated Services QoS Issues in Next Generation Radio\nAccess Network: a New Management Policy for Expedited\nForwarding Per-Hop Behavior\", IEEE Vehicular Technology\nConference, VTC 2004-Fall, vol. 4, 26-29 Sept. 2004, pp. 2693-2697\n[21]\n\nS. Uskela, \"All IP Architectures for Cellular Networks\", 2\nnd\n\nInternational Conference on 3G Mobile Communication\nTechnologies, 26-28 March 2001, pp. 180-185\n[22]\n\nJeong-Hyun Park, \"Wireless Internet Access for Mobile\nSubscribers Based on GPRS/UMTS Network\" IEEE\nCommunication Magazine, vol. 40, issue 4, April 2002, pp. 38-39\n[23]\n\nK. Daniel Wong and Vijay K. Varma, \"Supporting Real-Time IP\nMultimedia Services in UMTS\", IEEE Communication Magazine,\nvol. 41, issue 11, Nov. 2003, pp. 148-155\n[24]\n\n3GPP, \"Universal Mobile Telecommunication System (UMTS);\nQoS Concepts and Architecture\", TS23.107V6, March 2004\n[25]\n\nR. Chakravorty, J. Cartwright and I. Pratt, \"Practical Experience\nwith TCP over GPRS\", in IEEE GlobeCom, Nov. 2002\n[26]\n\nD. Schwab and R. Bunt, \"Characterizing the use of a Campus\nWireless Network\", in IEEE INFOCOM, March 2004\n[27]\n\nX. Meng, S. Wong, Y. Yuan and S.Lu, \"Characterizing Flows in\nLarge Wireless Data Networks\", in ACM Mobicom, Sep 2004\n[28]\n\nA. Balachandran, G. M. Voelker, P. Bahl and P. Venkat Rangan,\n\"Characterizing user behavior and network performance in a public\nWireless LAN\", Sigmetrics Performance Evaluation. Review, vol.\n30. no. 1, 2002, pp. 195-205\n[29]\n\nA. Adas and A. Mukherjee, \"On Resource Management and QoS\nguarantees for long-range dependant traffic\", In Proc IEEE\nINFOCOM, 1995, pp. 779-787\n[30]\n\nM. Parulekar and A. Makowski, \"Tail Probabilities for a\nMultiplexer with self-similar input\", In proc IEEE INFOCOM,\n1996, pp. 1452-1459\n[31]\n\nI. Norros, \"A Storage Model with self-similar input\", Queueing\nSystem, 16, 1994, pp. 387-396\n[32]\n\nB. Tsybakov and N. D. 
Georganas, \"Self-Similar traffic and upper\nbounds to buffer overflow in ATM queue\", Performance\nEvaluation, 36, 1998, pp. 57-80\n[33]\n\nM. Zukerman et al, \"Analytical Performance Evaluation of a Two\nClass DiffServ link\", IEEE ICS, 25-28 Nov. 2002, vol. 1, pp. 373-377\n[34]\n\nJ. M. Chung, Z. Quan, \"Impact of Self-Similarity on Performance\nEvaluation in DiffServ Networks\", IEEE MWSCAS, 4-7 Aug. 2002,\nvol. 2, pp. 326-329\n[35]\n\nC. F. Chou et al, \"Low Latency and efficient packet scheduling for\nstreaming applications\", IEEE ICC, 20-24 June, 2004, vol. 4, pp.\n1963-1967\n[36]\n\nA. Kos and B. Klepec, \"Performance of VoIP applications in a\nsimple Differentiated Services network architecture\", IEEE\nEUROCON, 4-7 July, 2001, vol. 1, pp. 214-217\n[37]\n\nJ. M. Chung and H. M. Soo, \"Analysis of non preemptive priority\nqueueing of MPLS networks with Bulk arrivals\", IEEE MWSCAS,\n4-7 Aug. 2002, vol. 3. pp. 81-84\n[38]\n\nSalil S. Kanhere and Harish Sethu, \"Fair, Efficient and Low-Latency\nPacket Scheduling using Nested Deficit Round Robin\",\nProceedings of the IEEE Workshop on High Performance\nSwitching and Routing (HSPR), May 2001\n[39]\n\nN. F. MIR and A. Chien, \"Simulation of Voice over MPLS\ncommunication networks\", IEEE ICCS, 25-28 Nov. 2002, vol. 1,\npp. 389-393\n[40]\n\nA. Krendzel, Y. Koucheryavy, J. Harju and S. Lopatin, \"Traffic and\nQoS management in Wireless Multimedia Networks\" COST 290::\nWi-QoST, Working group N3 http://www.cost290.org\n[41]\n\nM. Jiang, M. Nikolic, S. Hardy and L. Trajkovic, \"Impact of Self-Similarity\non Wireless Data Network Performance\", IEEE ICC,\n2001, vol. 2, pp. 477-481\n[42]\n\nJ. Ridoux, A. Nucci and D. Veitch, \"Characterization of Wireless\nTraffic based on Semi-Experiments\", Technical Report-LIP6,\nDecember 2005\n[43]\n\nZ. Sahinoglu and S. Tekinay, \"On Multimedia Networks: Self-Similar\nTraffic and Network Performance\", IEEE Communication\nMagazine, vol. 37, issue 1, Jan. 1999, pp. 48-52\n[44]\n\nI. Norros, \"On the use of Fractional Brownian Motion in theory of\nconnectionless networks\", IEEE Journal on Selected Areas in\nCommunications, vol. 13. no. 6, August 1995, pp. 953-962\n[45]\n\nP. Benko, G. Malicsko and A. Veres, \"A Large-scale, passive\nanalysis of end-to-end TCP Performances over GPRS\", in IEEE\nINFOCOM, March 2004\n[46]\n\nM. Caglar, \"A Long-Range Dependant Workload Model for Packet\nData Traffic\", Mathematics of Operations Research, 29, 2004, pp.\n92-105\n187\n[47]\n\nH. P. Schwefel, L. Lipsky, \"Impact of aggregated self-similar\nON/OFF traffic on delay in stationary queueing models (extended\nversion)\", Performance Evaluation, 43, 2001, pp. 203-221\n[51]\n\nE. Cinlar, \"Introduction to Stochastic Processes\", 1975, pp.\n178\n[52]\n\nR. Ben Ali, Y Lemieux and S. Pierre, \"UMTS-to-IP QoS Mapping\nfor Voice and Video Telephony Services, IEEE Network , vol. 19,\nissue 2, March/April 2005, pp. 26-32\n[48]\n\nI. Kaj, \"Limiting fractal random processes in heavy-tailed systems\",\nIn Fractals in Engineering, New Trends in Theory and\nApplications, Eds.J. Levy-Lehel, E. Lutton, Springer-Verlag\nLondon, 2005, pp. 199-218\n[53]\n\nY. Cheng, H, Jiang, W, Zhuang, Z. Niu and C. Lin, \"Efficient\nResource Allocation for China's 3G/4G Wireless Networks, IEEE\nCommunication Magazine, vol. 43, issue 1, Jan 2005, pp. 76-83\n[49]\n\nS.M. Ross, \"Introduction to Probability Models\" Academic Press,\n1997.\n[54]\n\nW. Odom and M. J. 
Cavanaugh, \"IP Telephony Self-Study Cisco\nDQoS Exam Certification Guide\", Cisco Press, 2004, pp. 3-314\n\n\n[50]\n\nK.S. Trivedi, Probability and statistics with reliability, queueing,\nand computer science applications Wiley, New York, 2002.\nAPPENDIX\nTable: 1 The States of the Markov Chain and Transition Probabilities\nInitial State\nReachable States (\n4\n,\n3\n,\n2\n,\n1\n=\nm\n)\nTransition Probability\n)\n,\n,\n,\n,\n,\n1\n(\n1\n4\n3\n2\n1\ns\na\ni\ni\ni\nk\ni\nm\n+\n\n,\n1\n,\n,\n0\ni\nk\nK\n=\n\n\n\n\n0\n0\n)\n(\n)\n(\n)\n(\n1\n1\n1\nt\nx\nt\nT\nS\nS\ndt\ndx\nds\nt\nf\nx\nf\ns\nf\nm\nk\n\n)\n,\n,\n,\n,\n,\n0\n(\n2\n4\n3\n2\ns\na\ni\ni\nk\ni\nm\n\n,\n1\n,\n,\n0\n2\n=\ni\nk\nK\n\n\n\n\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n2\n1\n1\n1\n2\nt\nx\nt\nT\nS\nS\nS\ndt\ndx\nds\nt\nf\nx\nf\ns\nf\nm\nk\ni\n\n1\n,.......\n1\n,\n0\n)\n,\n,\n,\n,\n0\n,\n0\n(\n3\n3\n4\n3\n=\ni\nk\ns\na\ni\nk\ni\nm\n\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\nT\nS\nS\nS\nt\nx\nt\nS\n)\n(\n)\n(\n)\n(\n1\n3\n2\n2\n1\n1\n1\n3\n0 0\n+\n+\n\n\n+\n\n\n)\n,\n,\n,\n,\n,\n(\n1\n1\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\n\n1\n.....\n,.........\n1\n,\n0\n)\n,\n,\n,\n0\n,\n0\n,\n0\n(\n4\n4\n4\n=\ni\nk\ns\na\nk\ni\nm\n\n\n\n\n+\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n4\n3\n3\n2\n2\n1\n1\n1\n4\nt\nx\nt\nT\nS\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\ni\n\n)\n,\n,\n,\n,\n1\n,\n1\n(\n1\n4\n3\n2\n1\ns\na\ni\ni\ni\nk\ni\nm\n+\n\n,\n1\n,\n,\n0\ni\nk\nK\n=\n\n\n\n\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n1\n2\n1\n1\nt\nx\nt\nT\nS\nS\nS\ndt\ndx\nds\nt\nf\nx\nf\ns\nf\nm\nk\n\n)\n,\n,\n,\n,\n,\n1\n(\n2\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\nm\n+\n\n\n\n0\n)\n(\n)\n(\n1\n2\nt\nT\nS\ndt\nds\nt\nf\ns\nf\nm\n\n)\n,\n,\n,\n,\n,\n0\n(\n2\n4\n3\n2\ns\na\ni\ni\nk\ni\nm\n\nK\n,\n3\n,\n2\n2\n=\ni\nand\n1\n,\n,\n1\n2\n=\ni\nk\nK\n\n\n\n\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n2\n1\n1\n1\n2\nt\nx\nt\nT\nS\nS\nS\ndt\ndx\nds\nt\nf\nx\nf\ns\nf\nm\nk\ni\n\n)\n,\n,\n,\n,\n,\n(\n2\n1\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\n\n1\n...\n,.........\n1\n,\n0\n)\n,\n,\n,\n,\n0\n,\n0\n(\n3\n3\n4\n3\n=\ni\nk\ns\na\ni\nk\ni\nm\n\n\n\n\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n3\n2\n2\n1\n1\n1\n3\nt\nx\nt\nT\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\n\n188\n\n1\n...\n,.........\n2\n,\n1\n,\n0\n)\n,\n,\n,\n0\n,\n0\n,\n0\n(\n4\n4\n4\n=\ni\nk\ns\na\nk\ni\nm\n\n\n\n\n+\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n4\n3\n3\n2\n2\n1\n1\n1\n4\nt\nx\nt\nT\nS\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\ni\n\n1\n1\n4\n3\n2\n1\n...\n,.........\n1\n,\n0\n)\n,\n,\n,\n1\n,\n,\n1\n(\ni\nk\ns\na\ni\ni\ni\nk\ni\nm\n=\n+\n\n\n\n\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n1\n3\n1\n1\nt\nx\nt\nT\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\n\n1\n...\n,.........\n1\n,\n0\n)\n,\n,\n,\n1\n,\n,\n0\n(\n2\n2\n4\n3\n2\n=\n\ni\nk\ns\na\ni\ni\nk\ni\nm\n\n\n\n\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n1\n3\n2\n1\n1\n1\n2\nt\nx\nt\nT\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\n\n)\n,\n,\n,\n,\n,\n1\n(\n3\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\nm\n+\n\n\n\n0\n)\n(\n)\n(\n1\n3\nt\nT\nS\ndt\nds\nt\nf\ns\nf\nm\n\n)\n,\n,\n,\n,\n0\n,\n0\n(\n3\n4\n3\ns\na\ni\nk\ni\nm\n\n,......\n3\n,\n2\n3\n=\ni\n1\n,\n,\n1\n3\n=\ni\nk\nK\n\n\n\n\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n3\n2\n2\n1\n1\n1\n3\nt\nx\nt\nT\nS\nS\nS\nS\ndt\ndx\nds\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\n\n)\n,\n,\n,\n,\n,\n(\n3\n1\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\n\n1\n.....\n,.........\n1\n,\n0\n)\n,\n,\n,\n0\n,\n0\n,\n0\n(\n4\n4\n4\n=\ni\nk\ns\na\nk\ni\nm\n\n\n\n\n+\n+\n+\n+\n0 
0\n)\n(\n)\n(\n)\n(\n1\n4\n3\n3\n2\n2\n1\n1\n1\n4\nt\nx\nt\nT\nS\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\ni\n\n1\n1\n4\n3\n2\n1\n..\n,.........\n1\n,\n0\n)\n,\n,\n1\n,\n,\n,\n1\n(\ni\nk\ns\na\ni\ni\ni\nk\ni\nm\n=\n+\n\n\n\n\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n1\n4\n1\n1\nt\nx\nt\nT\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\n\n1\n..\n,.........\n1\n,\n0\n)\n,\n,\n1\n,\n,\n,\n0\n(\n2\n2\n4\n3\n2\n=\n\ni\nk\ns\na\ni\ni\nk\ni\nm\n\n\n\n\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n1\n4\n2\n1\n1\n1\n2\nt\nx\nt\nT\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\n\n1\n....\n,.........\n1\n,\n0\n)\n,\n,\n1\n,\n,\n0\n,\n0\n(\n3\n3\n4\n3\n=\n\ni\nk\ns\na\ni\nk\ni\nm\n\n\n\n\n+\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n1\n4\n3\n2\n2\n1\n1\n1\n3\nt\nx\nt\nT\nS\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\n\n)\n,\n,\n,\n,\n,\n1\n(\n4\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\nm\n+\n\n\n\n0\n)\n(\n)\n(\n1\n4\nt\nT\nS\ndsdt\nt\nf\ns\nf\nm\n\n)\n,\n,\n,\n,\n,\n(\n4\n1\n4\n3\n2\n1\ns\na\ni\ni\ni\ni\n\n..\n,.........\n3\n,\n2\n)\n,\n,\n,\n0\n,\n0\n,\n0\n(\n4\n4\n4\n=\ni\ns\na\nk\ni\nm\n1\n,...\n2\n,\n1\n4\n=\ni\nk\n\n\n\n\n+\n+\n+\n+\n0 0\n)\n(\n)\n(\n)\n(\n1\n4\n3\n3\n2\n2\n1\n1\n1\n4\nt\nx\nt\nT\nS\nS\nS\nS\nS\ndsdxdt\nt\nf\nx\nf\ns\nf\nm\nk\ni\ni\ni\n\n\n189", "keywords": "QoS;3G networks;traffic modelling;3G;Self-Similar;GGSN;self-similar traffic;wireless IP traffic;UMTS;queuing model"} {"name": "3", "title": "A Computational Approach to Reflective Meta-Reasoning about Languages with Bindings", "abstract": "We present a foundation for a computational meta-theory of languages with bindings implemented in a computer-aided formal reasoning environment. Our theory provides the ability to reason abstractly about operators, languages, open-ended languages, classes of languages, etc. The theory is based on the ideas of higher-order abstract syntax, with an appropriate induction principle parameterized over the language (i.e. a set of operators) being used. In our approach , both the bound and free variables are treated uniformly and this uniform treatment extends naturally to variable-length bindings . The implementation is reflective, namely there is a natural mapping between the meta-language of the theorem-prover and the object language of our theory. The object language substitution operation is mapped to the meta-language substitution and does not need to be defined recursively. Our approach does not require designing a custom type theory; in this paper we describe the implementation of this foundational theory within a general-purpose type theory. This work is fully implemented in the MetaPRL theorem prover, using the pre-existing NuPRL-like Martin-Lof-style computational type theory. Based on this implementation, we lay out an outline for a framework for programming language experimentation and exploration as well as a general reflective reasoning framework. This paper also includes a short survey of the existing approaches to syntactic reflection.", "fulltext": "Introduction\n1.1\nReflection\nVery generally, reflection is the ability of a system to be \"self-aware\"\nin some way. More specifically, by reflection we mean the\nproperty of a computational or formal system to be able to access\nand internalize some of its own properties.\nThere are many areas of computer science where reflection\nplays or should play a major role. 
When exploring properties of\nprogramming languages (and other languages) one often realizes\nthat languages have at least two kinds of properties -- semantic\nproperties that have to do with the meaning of what the language's\nconstructs express and syntactic properties of the language itself.\nSuppose for example that we are exploring some language that\ncontains arithmetic operations. And in particular, in this language\none can write polynomials like x\n2\n+ 2x + 1. In this case the number\nof roots of a polynomial is a semantic property since it has to do\nwith the valuation of the polynomial. On the other hand, the degree\nof a polynomial could be considered an example of a syntactic\nproperty since the most natural way to define it is as a property of\nthe expression that represents that polynomial. Of course, syntactic\nproperties often have semantic consequences, which is what makes\nthem especially important. In this example, the number of roots of\na polynomial is bounded by its degree.\nAnother area where reflection plays an important role is run-time\ncode generation -- in most cases, a language that supports\nrun-time code generation is essentially reflective, as it is capable\nof manipulating its own syntax. In order to reason about run-time\ncode generation and to express its semantics and properties, it is\nnatural to use a reasoning system that is reflective as well.\nThere are many different flavors of reflection. The syntactic\nreflection we have seen in the examples above, which is the ability\nof a system to internalize its own syntax, is just one of these\nmany flavors. Another very important kind of reflection is logical\nreflection, which is the ability of a reasoning system or logic to\ninternalize and reason about its own logical properties. A good\nexample of a logical reflection is reasoning about knowledge -since\nthe result of reasoning about knowledge is knowledge itself,\nthe logic of knowledge is naturally reflective [Art04].\nIn most cases it is natural for reflection to be iterated. In the\ncase of syntactic reflection we might care not only about the syntax\nof our language, but also about the syntax used for expressing the\nsyntax, the syntax for expressing the syntax for expressing the\nsyntax and so forth. In the case of the logic of knowledge it is\nnatural to have iterations of the form \"I know that he knows that\nI know . . .\".\nWhen a formal system is used to reason about properties of programming\nlanguages, iterated reflection magnifies the power of the\n2\nsystem, making it more natural to reason not just about individual\nlanguages, but also about classes of languages, language schemas,\nand so on. More generally, reflection adds a lot of additional power\nto a formal reasoning system [GS89, Art99]. In particular, it is\nwell-known [God36, Mos52, EM71, Par71] that reflection allows\na super-exponential reduction in the size of certain proofs. In addition\n, reflection could be a very useful mechanism for implementing\nproof search algorithms [ACU93, GWZ00, CFW04]. See also\n[Har95] for a survey of reflection in theorem proving.\n1.2\nUniform Reflection Framework\nFor each of the examples in the previous section there are many\nad-hoc ways of achieving the specific benefits of a specific flavor\nof reflection. This work aims at creating a unifying reflective\nframework that would allow achieving most of these benefits in a\nuniform manner, without having to reinvent and re-implement the\nbasic reflective methodology every time. 
We believe that such a\nframework will increase the power of the formal reasoning tools,\nand it may also become an invaluable tool for exploring the properties\nof novel programming languages, for analyzing run-time code\ngeneration, and for formalizing logics of knowledge.\nThis paper establishes a foundation for the development of this\nframework -- a new approach to reflective meta-reasoning about\nlanguages with bindings. We present a theory of syntax that:\n\nin a natural way provides both a higher-order abstract syntax\n(HOAS) approach to bindings and a de Bruijn-style approach\nto bindings, with easy and natural translation between the two;\n\nprovides a uniform HOAS-style approach to both bound and\nfree variables that extends naturally to variable-length \"vectors\"\nof binders;\n\npermits meta-reasoning about languages -- in particular, the\noperators, languages, open-ended languages, classes of languages\netc. are all first-class objects that can be reasoned about\nboth abstractly and concretely;\n\ncomes with a natural induction principle for syntax that can be\nparameterized by the language being used;\n\nprovides a natural mapping between the object syntax and meta-syntax\nthat is free of exotic terms, and allows mapping the\nobject-level substitution operation directly to the meta-level one\n(i.e. -reduction);\n\nis fully derived in a pre-existing type theory in a theorem\nprover;\n\nis designed to serve as a foundation for a general reflective\nreasoning framework in a theorem prover;\n\nis designed to serve as a foundation for a programming language\nexperimentation framework.\nThe paper is structured as follows. Our work inherits a large\nnumber of ideas from previous efforts and we start in Section 2\nwith a brief survey of existing techniques for formal reasoning\nabout syntax. Next in Section 3 we outline our approach to reasoning\nabout syntax and in Section 4 we present a formal account\nof our theory based on a Martin-Lof style computational type theory\n[CAB\n+\n86, HAB\n+\n] and the implementation of that account in\nthe MetaPRL theorem prover [Hic97, Hic99, Hic01, HNC\n+\n03,\nHNK\n+\n, HAB\n+\n]. Then in Section 5 we outline our plan for building\na uniform reflection framework based on the syntactic reflection.\nFinally, in Section 6 we resume the discussion of related work that\nwas started in Section 2.\n1.3\nNotation and Terminology\nWe believe that our approach to reasoning about syntax is fairly\ngeneral and does not rely on any special features of the theorem\nprover we use. However, since we implement this theory in\nMetaPRL, we introduce some basic knowledge about MetaPRL\nterms.\nA MetaPRL term consists of:\n1. An operator name (like \"sum\"), which is a unique name indicating\nthe logic and component of a term;\n2. A list of parameters representing constant values; and\n3. A set of subterms with possible variable bindings.\nWe use the following syntax to describe terms, based on the NuPRL\ndefinition [ACHA90]:\nopname\noperator name\n[ p\n1\n; ; p\nn\n]\nparameters\n{v\n1\n.t\n1\n; ; v\nm\n.t\nm\n}\nsubt er ms\nIn addition, MetaPRL has a meta-syntax somewhat similar to\nthe higher-order abstract syntax presented in Pfenning and Elliott\n[PE88]. MetaPRL uses the second-order variables in the style of\nHuet and Lang [HL78] to describe term schemas. 
For example,\nx.V [x], where V is a second-order variable of arity 1, is a schema\nthat stands for an arbitrary term whose top-level operator is .\nThis meta-syntax requires that every time a binding occurrence\nis explicitly specified in a schema, all corresponding bound occurrences\nhave to be specified as well. This requirement makes it very\neasy to specify free variable restrictions -- for example, x.V ,\nwhere V is a second-order meta-variable of arity 0, is a schema\nthat stands for an arbitrary term whose top-level operator is and\nwhose body does not have any free occurrences of the variable\nbound by that . In particular, the schema x.V matches the term\ny.1, but not the term x.x.\nIn addition, this meta-language allows specifying certain term\ntransformations, including implicit substitution specifications. For\nexample, a beta reduction transformation may be specified using\nthe following schema:\n(x.V\n1\n[x]) V\n2\nV\n1\n[V\n2\n]\nHere the substitution of V\n2\nfor x in V\n1\nis specified implicitly.\nThroughout this paper we will use this second-order notation to\ndenote arbitrary terms -- namely, unless stated otherwise, when we\nwrite \"x.t [x]\" we mean an arbitrary term of this form, not a term\ncontaining a concrete second-order variable named \"t\".\nAs in LF [HHP93] we assume that object level variables (i.e.\nthe variables of the language whose syntax we are expressing)\nare directly mapped to meta-theory variables (i.e. the variable of\nthe language that we use to express the syntax). Similarly, we\nassume that the object-level binding structure is mapped to the\nmeta-level binding structure. In other words, the object-level notion\nof the \"binding/bound occurrence\" is a subset of that in the metalanguage\n. We also consider -equal terms -- both on the object\nlevel and on the meta-level -- to be identical and we assume that\nsubstitution avoids capture by renaming.\nThe sequent schema language we use [NH02] contains a number\nof more advanced features in addition to those outlined here.\nHowever, for the purposes of this presentation, the basic features\noutlined above are sufficient.\nPrevious Models of Reflection\nIn 1931 Godel used reflection to prove his famous incompleteness\ntheorem [God31]. To express arithmetic in arithmetic itself, he\nassigned a unique number (a Godel number) to each arithmetic\n3\nformula. A Godel number of a formula is essentially a numeric\ncode of a string of symbols used to represent that formula.\nA modern version of the Godel's approach was used by Aitken\net al. [ACHA90, AC92, ACU93, Con94] to implement reflection\nin the NuPRL theorem prover [CAB\n+\n86, ACE\n+\n00]. A large part\nof this effort was essentially a reimplementation of the core of the\nNuPRL prover inside NuPRL's logical theory.\nIn Godel's approach and its variations (including Aitken's one),\na general mechanism that could be used for formalizing one logical\ntheory in another is applied to formalizing a logical theory in itself.\nThis can be very convenient for reasoning about reflection, but for\nour purposes it turns out to be extremely impractical. First, when\nformalizing a theory in itself using generic means, the identity\nbetween the theory being formalized and the one in which the\nformalization happens becomes very obfuscated, which makes it\nalmost impossible to relate the reflected theory back to the original\none. 
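As a toy illustration of the numbering idea (a sketch of our own, not Godel's actual coding), a formula viewed as a string of symbols can be packed into a single integer and unpacked again:

  (* Pack a formula's symbols into one integer, low symbol first; a toy coding
     (it overflows for long formulas), but it shows the idea of a numeric code
     for a string of symbols. *)
  let base = 256

  let godel_number (formula : string) : int =
    let n = ref 0 in
    for i = String.length formula - 1 downto 0 do
      n := Char.code formula.[i] + (base * !n)
    done;
    !n

  let rec decode (n : int) : string =
    if n = 0 then "" else String.make 1 (Char.chr (n mod base)) ^ decode (n / base)

  let () = assert (decode (godel_number "0=0") = "0=0")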
Second, when one has a theorem proving system that already\nimplements the logical theory in question, creating a completely\nnew implementation of this logical theory inside itself is a very\ntedious redundant effort. Another practical disadvantage of the\nGodel numbers approach is that it tends to blow up the size of\nthe formulas; and iterated reflection would cause the blow-up to\nbe iterated as well, making it exponential or worse.\nA much more practical approach is being used in some programming\nlanguages, such as Lisp and Scheme. There, the common\nsolution is for the implementation to expose its internal syntax\nrepresentation to user-level code by the quote constructor (where\nquote (t) prevents the evaluation of the expression t). The problems\noutlined above are solved instantly by this approach: there is\nno blow-up, there is no repetition of structure definitions, there is\neven no need for verifying that the reflected part is equivalent to the\noriginal implementation since they are identical. Most Scheme implementations\ntake this even further: the eval function is the internal\nfunction for evaluating a Scheme expression, which is exposed\nto the user-level; Smith [Smi84] showed how this approach can\nachieve an infinite tower of processors. A similar language with the\nquotation and antiquotation operators was introduced in [GMO03].\nThis approach, however, violates the congruence property with\nrespect to computation: if two terms are computationally equal then\none can be substituted for the other in any context. For instance,\nalthough 2 2 is equal to 4, the expressions \"2*2\" and \"4\" are\nsyntactically different, thus we can not substitute 2*2 by 4 in\nthe expression quote(2*2). The congruence property is essential\nin many logical reasoning systems, including the NuPRL system\nmentioned above and the MetaPRL system [HNC\n+\n03, HNK\n+\n,\nHAB\n+\n] that our group uses.\nA possible way to expose the internal syntax without violating\nthe congruence property is to use the so-called \"quoted\" or\n\"shifted\" operators [AA99, Bar01, Bar05] rather than quoting the\nwhole expression at once. For any operator op in the original language\n, we add the quoted operator (denoted as op) to represent a\nterm built with the operator op. For example, if the original language\ncontains the constant \"0\" (which, presumably, represents the\nnumber 0), then in the reflected language, 0 would stand for the\nterm that denotes the expression \"0\". Generally, the quoted operator\nhas the same arity as the original operator, but it is defined on\nsyntactic terms rather than on semantic objects. For instance, while\nis a binary operator on numbers, is a binary operator on terms.\nNamely, if t\n1\nand t\n2\nare syntactic terms that stand for expressions\ne\n1\nand e\n2\nrespectively, then t\n1\nt\n2\nis a new syntactic term that stands\nfor the expression e\n1\ne\n2\n. Thus, the quotation of the expression 12\nwould be 1 2.\nIn general, the well-formedness (typing) rule for a quoted operator\nis the following:\nt\n1\nTerm\n. . .\nt\nn\nTerm\nop{t\n1\n; . . . ; t\nn\n} Term\n(1)\nwhere Term is a type of terms.\nNote that quotations can be iterated arbitrarily many times,\nallowing us to quote quoted terms. For instance, 1 stands for the\nterm that denotes the term that denotes the numeral 1.\nProblems arise when quoting expressions that contain binding\nvariables. For example, what is the quotation of x.x? There are\nseveral possible ways of answering this question. 
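Before turning to binders, the quoted-operator idea for the binder-free fragment is easy to picture. The OCaml sketch below is our own illustration with invented names: values and quoted syntax live in separate types, so 2*2 and 4 are equal as values while their quotations remain distinct pieces of syntax, and iterated quotation would amount to quoting the syntax type itself.

  (* The object language and, alongside it, one "shifted" constructor per
     operator; quote maps an expression to the syntax that denotes it. *)
  type exp  = Num of int | Times of exp * exp
  type code = QNum of int | QTimes of code * code

  let rec quote (e : exp) : code =
    match e with
    | Num n        -> QNum n
    | Times (a, b) -> QTimes (quote a, quote b)

  let rec eval (e : exp) : int =
    match e with
    | Num n        -> n
    | Times (a, b) -> eval a * eval b

  let () =
    (* 2*2 and 4 denote the same number, but quote(2*2) and quote(4) differ. *)
    assert (eval (Times (Num 2, Num 2)) = eval (Num 4));
    assert (quote (Times (Num 2, Num 2)) <> quote (Num 4))

Because quote is a meta-level map into a separate syntax type, equal values may still have different quotations without any context of the object language becoming opaque to computation.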
A commonly\nused approach [PE88, DH94, DFH95, ACM02, ACM03] in logical\nframeworks such as Elf [Pfe89], LF [HHP93], and Isabelle [PN90,\nPau94] is to construct an object logic with a concrete operator\nthat has a type like\n(Term Term) Term or (Var Term) Term.\nIn this approach, the quoted x.x might look like (x.x) and the\nquoted x.1 might look like (x.1). Note that in these examples\nthe quoted terms have to make use of both the syntactic (i.e. quoted)\noperator and the semantic operator .\nExotic Terms.\nNaive implementations of the above approach\nsuffer from the well-known problem of exotic terms [DH95,\nDFH95]. The issue is that in general we can not allow applying\nthe operator to an arbitrary function that maps terms to terms (or\nvariables to terms) and expect the result of such an application to\nbe a \"proper\" reflected term.\nConsider for example the following term:\n(x. if x = 1 then 1 else 2)\nIt is relatively easy to see that it is not a real syntactic term and\ncan not be obtained by quoting an actual term. (For comparison,\nconsider (x. if x = 1 then 1 else 2), which is a quotation of\nx. if x = 1 then 1 else 2).\nHow can one ensure that e denotes a \"real\" term and not an\n\"exotic\" one? That is, is it equal to a result of quoting an actual\nterm of the object language? One possibility is to require e to be\na substitution function; in other words it has to be equal to an\nexpression of the form x.t [x] where t is composed entirely of term\nconstructors (i.e. quoted operators) and x, while using destructors\n(such as case analysis, the if operator used in the example above,\netc) is prohibited.\nThere are a number of approaches to enforcing the above restriction\n. One of them is the usage of logical frameworks with restricted\nfunction spaces [PE88, HHP93], where -terms may only contain\nconstructors. Another is to first formalize the larger type that\ndoes include exotic terms and then to define recursively a predicate\ndescribing the \"validity\" or \"well-formedness\" of a term [DH94,\nDFH95] thus removing the exotic terms from consideration. Yet\nanother approach is to create a specialized type theory that combines\nthe idea of restricted function spaces with a modal type operator\n[DPS97, DL99, DL01]. There the case analysis is disallowed\non objects of \"pure\" type T , but is allowed on objects of a special\ntype\nT . This allows expressing both the restricted function space\n\"T\n1\nT\n2\n\" and the unrestricted one \"( T\n1\n) T\n2\n\" within a single\ntype theory.\nAnother way of regarding the problem of exotic terms is that it\nis caused by the attempt to give a semantic definition to a primarily\nsyntactic property. A more syntax-oriented approach was used by\nBarzilay et al. [BA02, BAC03, Bar05]. In Barzilay's approach, the\nquoted version of an operator that introduces a binding has the\nsame shape (i.e. the number of subterms and the binding structure)\nas the original one and the variables (both the binding and the\n4\nbound occurrences) are unaffected by the quotation. For instance,\nthe quotation of x.x is just x.x.\nThe advantages of this approach include:\n\nThis approach is simple and clear.\n\nQuoted terms have the same structure as original ones, inheriting\na lot of properties of the object syntax.\n\nIn all the above approaches, the -equivalence relation for\nquoted terms is inherited \"for free\". 
For example, x.x and\ny.y are automatically considered to be the same term.\n\nSubstitution is also easy: we do not need to re-implement the\nsubstitution that renames binding variables to avoid the capture\nof free variables; we can use the substitution of the original\nlanguage instead.\nTo prune exotic terms, Barzilay says that x.t [x] is a valid term\nwhen x.t [x] is a substitution function. He demonstrates that it is\npossible to formalize this notion in a purely syntactical fashion. In\nthis setting, the general well-formedness rule for quoted terms with\nbindings is the following:\nis subst\nk\n{x\n1\n, , x\nk\n.t[x]}\n\nis subst\nl\n{z\n1\n, , z\nl\n.s[z]}\nop{x\n1\n, , x\nk\n.t[x];\n;\nz\n1\n, , z\nl\n.s[z]} Term\n(2)\nwhere is subst\nn\n{x\n1\n, , x\nn\n.t[x]} is the proposition that t is a substitution\nfunction over variables x\n1\n, , x\nn\n(in other words, it is a\nsyntactic version of the Valid predicate of [DH94, DFH95]). This\nproposition is defined syntactically by the following two rules:\nis subst\nn\n{x\n1\n, , x\nn\n. x\ni\n}\nand\nis subst\nn+k\n{x\n1\n, , x\nn\n, y\n1\n, , y\nk\n.t[x; y]}\n.\n.\n.\nis subst\nn+l\n{x\n1\n, , x\nn\n, z\n1\n, , z\nl\n.s[x; z]}}\nis subst\nn\n{x\n1\nx\nn\n.op{y\n1\ny\nk\n.t[x; y]; ; z\n1\nz\nl\n.s[x; z]}}\nIn this approach the is subst\nn\n{} and operators are essentially\nuntyped (in NuPRL type theory, the computational properties of\nuntyped terms are at the core of the semantics; types are added on\ntop of the untyped computational system).\nRecursive Definition and Structural Induction Principle.\nA\ndifficulty shared by both the straightforward implementations of\nthe (Term Term) Term approach and by the Barzilay's one\nis the problem of recursively defining the Term type. We want to\ndefine the Term type as the smallest set satisfying rules (1) and (2).\nNote, however, that unlike rule (1), rule (2) is not monotonic in the\nsense that is subst\nk\n{x\n1\n, , x\nk\n.t[x]} depends non-monotonically\non the Term type. For example, to say whether x.t [x] is a term, we\nshould check whether t is a substitution function over x. It means at\nleast that for every x in Term, t [x] should be in Term as well. Thus\nwe need to define the whole type Term before using (2), which\nproduces a logical circle. Moreover, since has type (Term\nTerm) Term, it is hard to formulate the structural induction\nprinciple for terms built with the term constructor.\nVariable-Length Lists of Binders.\nIn Barzilay's approach, for\neach number n, is subst\nn\n{} is considered to be a separate operator\n-- there is no way to quantify over n, and there is no way to\nexpress variable-length lists of binders. This issue of expressing the\nunbounded-length lists of binders is common to some of the other\napproaches as well.\nMeta-Reasoning.\nAnother difficulty that is especially apparent\nin Barzilay's approach is that it only allows reasoning about concrete\noperators in concrete languages. This approach does not provide\nthe ability to reason about operators abstractly; in particular,\nthere is no way to state and prove meta-theorems that quantify over\noperators or languages, much less classes of languages.\nHigher-Order Abstract Syntax with Inductive Definitions\nAlthough it is possible to solve the problems outlined in the previous\nSection (and we will return to the discussion of some of those\nsolutions in Section 6), our desire is to avoid these difficulties from\nthe start. 
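A small OCaml sketch (our own illustration, with invented names) makes these difficulties concrete: when the binder case of a reflected-term type carries an arbitrary function, case-splitting functions produce exotic inhabitants, and the negative occurrence of the type leaves no useful structural induction principle.

  (* A HOAS-style "term" type whose binder case takes an arbitrary function. *)
  type tm =
    | Const of int
    | App of tm * tm
    | Lam of (tm -> tm)            (* note the negative occurrence of tm *)

  (* A genuine term: built from constructors only (the quotation of \x. x x). *)
  let proper : tm = Lam (fun x -> App (x, x))

  (* An exotic term: its body inspects the argument by case analysis, so it is
     not the quotation of any object-language term. *)
  let exotic : tm =
    Lam (fun x -> match x with Const 1 -> Const 1 | _ -> Const 2)

  (* With Lam carrying a function there is no subterm to recurse on; a "size"
     function must invent an argument, and its termination no longer follows
     from the structure of the datatype. *)
  let rec size (t : tm) : int =
    match t with
    | Const _ -> 1
    | App (a, b) -> 1 + size a + size b
    | Lam f -> 1 + size (f (Const 0))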
We propose a natural model of reflection that manages to\nwork around those difficulties. We will show how to give a simple\nrecursive definition of terms with binding variables, which does\nnot allow the construction of exotic terms and does allow structural\ninduction on terms.\nIn this Section we provide a conceptual overview of our approach\n; details are given in Section 4.\n3.1\nBound Terms\nOne of the key ideas of our approach is how we deal with terms\ncontaining free variables. We extend to free variables the principle\nthat variable names do not really matter. In fact, we model free\nvariables as bindings that can be arbitrarily -renamed. Namely,\nwe will write bterm{x\n1\n, , x\nn\n.t[x]} for a term t over variables\nx\n1\n, , x\nn\n. For example, instead of term xy we will use the\nterm bterm{x, y.xy} when it is considered over variables x and\ny and bterm{x, y, z.xy} when it is considered over variables x,\ny and z. Free occurrences of x\ni\nin t [x] are considered bound\nin bterm{x\n1\n, , x\nn\n.t[x]} and two -equal bterm{} expressions\n(\"bterms\") are considered to be identical.\nNot every bterm is necessarily well-formed. We will define the\ntype of terms in such a way as to eliminate exotic terms. Consider\nfor example a definition of lambda-terms.\nE\nXAMPLE\n1. We can define a set of reflected lambda-terms as the\nsmallest set such that\n\nbterm{x\n1\n, , x\nn\n.x\ni\n}, where 1 i n, is a lambda-term (a\nvariable);\n\nif bterm x\n1\n, , x\nn\n, x\nn+1\n.t[x] is a lambda-term, then\nbterm x\n1\n, , x\nn\n.x\nn+1\n.t[x]\nis also a lambda-term (an abstraction);\n\nif bterm{x\n1\n, , x\nn\n.t\n1\n[x]} and bterm{x\n1\n, , x\nn\n.t\n2\n[x]} are\nlambda-terms, then\nbterm{x\n1\n; ; x\nn\n.apply{t\n1\n[x]; t\n2\n[x]}}\nis also a lambda-term (an application).\nIn a way, bterms could be understood as an explicit coding for\nBarzilay's substitution functions. And indeed, some of the basic\ndefinitions are quite similar. The notion of bterms is also very\nsimilar to that of local variable contexts [FPT99].\n3.2\nTerminology\nBefore we proceed further, we need to define some terminology.\nD\nEFINITION\n1. We change the notion of subterm so that the subterms\nof a bterm are also bterms. For example, the immediate subterms\nof bterm{x , y.x y} are bterm{x , y.x } and bterm{x , y.y}; the\nimmediate subterm of bterm{x .y.x } is bterm{x, y.x }.\nD\nEFINITION\n2. We call the number of outer binders in a bterm\nexpression its binding depth. Namely, the binding depth of the\nbterm bterm{x\n1\n, , x\nn\n.t[x]} is n.\nD\nEFINITION\n3. Throughout the rest of the paper we use the notion\nof operator shape. The shape of an operator is a list of natural numbers\neach stating how many new binders the operator introduces on\n5\nthe corresponding subterm. The length of the shape list is therefore\nthe arity of the operator. For example, the shape of the + operator\nis [0; 0] and the shape of the operator is [1].\nThe mapping from operators to shapes is also sometimes called\na binding signature of a language [FPT99, Plo90].\nD\nEFINITION\n4. Let op be an operator with shape [d\n1\n; ; d\nN\n],\nand let btl be a list of bterms [b\n1\n; ; b\nM\n]. We say that btl is\ncompatible with op at depth n when,\n1. N = M;\n2. the binding depth of bterm b\nj\nis n + d\nj\nfor each 1 j N .\n3.3\nAbstract Operators\nExpressions of the form bterm{x.op{ }} can only be used to express\nsyntax with concrete operators. In other words, each expression\nof this form contains a specific constant operator op. 
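Definitions 2-4 are directly computable. The sketch below is a first-order approximation of our own in OCaml (the record fields and the hard-coded shape table are invented for illustration); it checks the "compatible with op at depth n" condition of Definition 4.

  (* A crude picture of a bterm: its binding depth plus its operator/subterm
     structure, which is all that Definitions 2-4 refer to. *)
  type bterm = { depth : int; op : string; subs : bterm list }

  (* shape(op): for each subterm, how many new binders op introduces there. *)
  let shape = function
    | "+"      -> [0; 0]
    | "lambda" -> [1]
    | "apply"  -> [0; 0]
    | other    -> failwith ("unknown operator " ^ other)

  (* btl is compatible with op at depth n iff the lists have equal length and
     the j-th bterm has binding depth n + d_j. *)
  let compatible (op : string) (n : int) (btl : bterm list) : bool =
    let ds = shape op in
    List.length ds = List.length btl
    && List.for_all2 (fun d b -> b.depth = n + d) ds btl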
However,\nwe would like to reason about operators abstractly; in particular,\nwe want to make it possible to have variables of the type \"Op\" that\ncan be quantified over and used in the same manner as operator\nconstants. In order to address this we use explicit term constructors\nin addition to bterm{x.op{ }} constants.\nThe expression mk bterm{n; \"op\"; btl}, where \"op\" is some encoding\nof the quoted operator op, stands for a bterm with binding\ndepth n, operator op and subterms btl. Namely,\nmk bterm{n; op; bterm{x\n1\n, , x\nn\n, y\n1\n.t\n1\n[x; y\n1\n]} :: ::\nbterm{x\n1\n, , x\nn\n, y\nk\n.t\nk\n[x; y\nk\n]} :: nil}\nis bterm{x\n1\n, , x\nn\n.op {y\n1\n.t\n1\n[x; y\n1\n]; ; y\nk\n.t\nk\n[x; y\nk\n]}}. Here,\nnil is the empty list and :: is the list cons operator and therefore\nthe expression b\n1\n:: :: b\nn\n:: nil represents the concrete list\n[b\n1\n; ; b\nn\n].\nNote that if we know the shape of the operator op and we know\nthat the mk bterm expression is well-formed (or, more specifically,\nif we know that btl is compatible with op at depth n), then it\nwould normally be possible to deduce the value of n (since n is\nthe difference between the binding depth of any element of the list\nbtl and the corresponding element of the shape(op) list). There are\ntwo reasons, however, for supplying n explicitly:\n\nWhen btl is empty (in other words, when the arity of op is 0),\nthe value of n can not be deduced this way and still needs to be\nsupplied somehow. One could consider 0-arity operators to be a\nspecial case, but this results in a significant loss of uniformity.\n\nWhen we do not know whether an mk bterm expression is\nnecessarily well-formed (and as we will see it is often useful\nto allow this to happen), then a lot of definitions and proofs\nare greatly simplified when the binding depth of mk bterm\nexpressions is explicitly specified.\nUsing the mk bterm constructor and a few other similar constructors\nthat will be introduced later, it becomes easy to reason abstractly\nabout operators. Indeed, the second argument to mk bterm\ncan now be an arbitrary expression, not just a constant. This has a\ncost of making certain definitions slightly more complicated. For\nexample, the notion of \"compatible with op at depth n\" now becomes\nan important part of the theory and will need to be explicitly\nformalized. However, this is a small price to pay for the ability to\nreason abstractly about operators, which easily extends to reasoning\nabstractly about languages, classes of languages and so forth.\n3.4\nInductively Defining the Type of Well-Formed Bterms\nThere are two equivalent approaches to inductively defining the\ngeneral type (set) of all well-formed bterms. The first one follows\nthe same idea as in Example 1:\n\nbterm{x\n1\n, , x\nn\n.x\ni\n} is a well-formed bterm for 1 i n;\n\nmk bterm{n; op; btl} is a well-formed bterm when op is a well-formed\nquoted operator and btl is a list of well-formed bterms\nthat is compatible with op at some depth n.\nIf we denote bterm{x\n1\n, , x\nl\n, y, z\n1\n, , z\nr\n.y} as var{l; r},\nwe can restate the base case of the above definition as \"var{l; r },\nwhere l and r are arbitrary natural numbers, is a well-formed\nbterm\". Once we do this it becomes apparent that the above definition\nhas a lot of similarities with de Bruijn-style indexing of\nvariables [dB72]. 
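Reading var{l; r} as bterm{x1, ..., xl, y, z1, ..., zr . y}, the two numbers pin the variable down completely, much as a de Bruijn index does. A quick sketch of our own (with invented helper names) spells out the correspondence:

  (* var (l, r) picks out, among its l + r + 1 outer binders, the one with l
     binders to its left and r binders to its right. *)
  let var_as_bterm (l : int) (r : int) : string list * string =
    let binders =
      List.init l (fun i -> Printf.sprintf "x%d" (i + 1))
      @ ["y"]
      @ List.init r (fun i -> Printf.sprintf "z%d" (i + 1))
    in
    (binders, "y")                       (* (outer binders, body) *)

  (* l counts binders from the outside (a de Bruijn "level"), while r counts
     inwards from the innermost binder (a de Bruijn "index"). *)
  let binding_depth (l : int) (r : int) : int = l + r + 1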
Indeed, one might call the numbers l and r the\nleft and right indices of the variable var{l; r }.\nIt is possible to provide an alternate definition that is closer to\npure HOAS:\n\nbnd{x .t [x]}, where t is a well-formed substitution function, is\na well-formed bterm (the bnd operation increases the binding\ndepth of t by one by adding x to the beginning of the list of t's\nouter binders).\n\nmk term{op; btl}, where op is a well-formed quoted operator,\nand btl is a list of well-formed bterms that is compatible with\nop at depth 0, is a well-formed bterm (of binding depth 0).\nOther than better capturing the idea of HOAS, the latter definition\nalso makes it easier to express the reflective correspondence\nbetween the meta-syntax (the syntax used to express the theory of\nsyntax, namely the one that includes the operators mk bterm, bnd,\netc.) and the meta-meta-syntax (the syntax that is used to express\nthe theory of syntax and the underlying theory, in other words, the\nsyntax that includes the second-order notations.) Namely, provided\nthat we define the subst{bt; t } operation to compute the result of\nsubstituting a closed term t for the first outer binder of the bterm\nbt, we can state that\nsubst{bnd{x .t\n1\n[x]} ; t\n2\n} t\n1\n[t\n2\n]\n(3)\n(where t\n1\nand t\n2\nare literal second-order variables). In other words,\nwe can state that the substitution operator subst and the implicit\nsecond-order substitution in the \"meta-meta-\" language are equivalent\n.\nThe downside of the alternate definition is that it requires defining\nthe notion of \"being a substitution function\".\n3.5\nOur Approach\nIn our work we try to combine the advantages of both approaches\noutlined above. In the next Section we present a theory that includes\nboth the HOAS-style operations (bnd, mk term) and the de Bruijn-style\nones (var, mk bterm). Our theory also allows deriving the\nequivalence (3). In our theory the definition of the basic syntactic\noperations is based on the HOAS-style operators; however, the\nrecursive definition of the type of well-formed syntax is based on\nthe de Bruijn-style operations. Our theory includes also support for\nvariable-length lists of binders.\nFormal Implementation in a Theorem Prover\nIn this Section we describe how the foundations of our theory are\nformally defined and derived in the NuPRL-style Computational\nType Theory in the MetaPRL Theorem Prover. For brevity, we\nwill present a slightly simplified version of our implementation;\nfull details are available in the extended version of this paper\n[NKYH05, Appendix].\n4.1\nComputations and Types\nIn our work we make heavy usage of the fact that our type theory\nallows us to define computations without stating upfront (or even\nknowing) what the relevant types are. In NuPRL-style type theo-6\nries (which some even dubbed \"untyped type theory\"), one may define\narbitrary recursive functions (even potentially nonterminating\nones). Only when proving that such function belongs to a particular\ntype, one may have to prove termination. See [All87a, All87b] for\na semantics that justifies this approach.\nThe formal definition of the syntax of terms consists of two\nparts:\n\nThe definition of untyped term constructors and term operations\n, which includes both HOAS-style operations and de\nBruijn-style operations. As it turns out, we can establish most\nof the reduction properties without explicitly giving types to all\nthe operations.\n\nThe definition of the type of terms. 
We will define the type of\nterms as the type that contains all terms that can be legitimately\nconstructed by the term constructors.\n4.2\nHOAS Constructors\nAt the core of our term syntax definition are two basic HOAS-style\nconstructors:\n\nbnd{x .t [x]} is meant to represent a term with a free variable x.\nThe intended semantics (which will not become explicit until\nlater) is that bnd{x.t [x]} will only be considered well-formed\nwhen t is a substitution function.\nInternally, bnd{x.t [x]} is implemented simply as the pair\n0, x.t [x] . This definition is truly internal and is used only\nto prove the properties of the two destructors presented below;\nit is never used outside of this Section (Section 4.2).\n\nmk term{op; ts} pairs op with ts. The intended usage of this\noperation (which, again, will only become explicit later) is that\nit represents a closed term (i.e. a bterm of binding depth 0) with\noperator op and subterms ts. It will be considered well-formed\nwhen op is an operator and ts is a list of terms that is compatible\nwith op at depth 0. For example, mk term{; bnd{x.x}} is x.x.\nInternally, mk term{op; ts} is implemented as the nested pair\n1, op, ts . Again, this definition is never used outside of this\nSection.\nWe also implement two destructors:\n\nsubst{bt; t } is meant to represent the result of substituting term\nt for the first variable of the bterm bt. Internally, subst{bt; t }\nis defined simply as an application (bt.2) t (where bt.2 is the\nsecond element of the pair bt ).\nWe derive the following property of this substitution operation:\nsubst{bnd{x.t\n1\n[x]} ; t\n2\n} t\n1\n[t\n2\n]\nwhere \"\" is the computational equality relation\n1\nand t\n1\nand\nt\n2\nmay be absolutely arbitrary, even ill-typed. This derivation\nis the only place where the internal definition of subst{bt; t} is\nused.\nNote that the above equality is exactly the \"reflective property\nof substitution\" (3) that was one of the design goals for our\ntheory.\n\nweak dest {bt; bcase; op, ts.mkt case[op; ts]} is designed to\nprovide a way to find out whether bt is a bnd{} or a mk term{op; ts}\n1\nIn NuPRL-style type theories the computational equality relation (which\nis also sometimes called \"squiggle equality\" and is sometimes denoted\nas \"\" or \"\") is the finest-grained equality relation in the theory.\nWhen a b is true, a may be replaced with b in an arbitrary context.\nExamples of computational equality include beta-reduction x.a[x] b\na[b], arithmetical equalities (1 + 2 3), and definitional equality (an\nabstraction is considered to be computationally equal to its definition).\nand to \"extract\" the op and ts in the latter case. In the rest of\nthis paper we will use the \"pretty-printed\" form for weak dest\n-- \"match bt with bnd{ } bcase | mk term{op; ts}\nmkt case[op; ts]\". Internally, it is defined as\nif bt.\n1 = 0 then bcase else mkt case[bt.2.1; bt.2.2].\nFrom this internal definition we derive the following properties\nof weak dest:\n\nmatch bnd{x.t[x]} with\nbnd{ } bcase\n| mk term{op; ts} mkt case[op; ts]\n\nbcase\n\nmatch mk term{op; ts} with\nbnd{ } bcase\n| mk term{o; t} mkt case[o; t]\n\nmkt case[op; ts]\n4.3\nVector HOAS Operations\nAs we have mentioned at the end of Section 2, some approaches to\nreasoning about syntax make it hard or even impossible to express\narbitrary-length lists of binders. In our approach, we address this\nchallenge by allowing operators where a single binding in the metalanguage\nstands for a list of object-level bindings. 
In particular, we\nallow representing bnd{x\n1\n.bnd{x\n2\n. bnd{x\nn\n.t[x\n1\n; . . . ; x\nn\n]} }}\nas\nvbnd{n; x .t [nth{1; x} ; . . . ; nth{n; x}]}, where \"nth{i ; l}\" is the \"i th\nelement of the list l\" function.\nWe define the following vector-style operations:\n\nvbnd{n; x .t [x]} represents a \"telescope\" of nested bnd operations\n. It is defined by induction\n2\non the natural number n as\nfollows:\nvbnd{0; x.t [x]}\n:=\nt [nil]\nvbnd{n + 1; x.t [x]}\n:=\nbnd{v.vbnd{n; x .t [v :: x ]}}\nWe also introduce vbnd{n; t } as a simplified notation for\nvbnd{n; x .t } when t does not have free occurrences of x.\n\nvsubst{bt; ts} is a \"vector\" substitution operation that is meant\nto represent the result of simultaneous substitution of the terms\nin the ts list for the first |ts| variables of the bterm bt (here |l| is\nthe length of the list l). vsubst{bt; ts} is defined by induction on\nthe list ts as follows:\nvsubst{bt; nil}\n:=\nbt\nvsubst{bt; t :: ts}\n:=\nvsubst{subst{bt; t } ; ts}\nBelow are some of the derived properties of these operations:\nbnd{v.t [v]} vbnd{1; hd(v)}\n(4)\nm, n N.\nvbnd{m + n; x .t [x]} vbnd{m; y.vbnd{n; z.t [y@z]}}\n(5)\nl List. (vsubst{vbnd{|l|; v.t[v]} ; l} t[l])\n(6)\nl List.n N. (n |l|)\n(vsubst{vbnd{n; v.t[v]} ; l} vbnd{n - |l|; v.bt[l@v]})\n(7)\nn N.\nvbnd{n; l.vsubst{vbnd{n; v.t [v]} ; l}} vbnd{n; l.t [l]}\n(8)\nwhere \"hd\" is the list \"head\" operation, \"@\" is the list append\noperation, \"List\" is the type of arbitrary lists (the elements of a list\ndo not have to belong to any particular type), N is the type of natural\nnumbers, and all the variables that are not explicitly constrained to\na specific type stand for arbitrary expressions.\n2\nOur presentation of the inductive definitions is slightly simplified by\nomitting some minor technical details. See [NKYH05, Appendix]\nfor\ncomplete details.\n7\nEquivalence (5) allows the merging and splitting of vector bnd\noperations. Equivalence (6) is a vector variant of equivalence (3).\nEquivalence (8) is very similar to equivalence (6) applied in the\nvbnd{n; l. } context, except that (8) does not require l to be a\nmember of any special type.\n4.4\nDe Bruijn-style Operations\nBased on the HOAS constructors defined in the previous two sections\n, we define two de Bruijn-style constructors.\n\nvar{i ; j } is defined as vbnd{i ; bnd{v.vbnd{ j ; v}}}. It is easy to\nsee that this definition indeed corresponds to the informal\nbterm{x\n1\n, , x\nl\n, y, z\n1\n, , z\nr\n.y}\ndefinition given in Section 3.4.\n\nmk bterm{n; op; ts} is meant to compute a bterm of binding\ndepth n, with operator op, and with ts as its subterms. This operation\nis defined by induction on natural number n as follows:\nmk bterm{0; op; ts}\n:=\nmk term{op; ts}\nmk bterm{n + 1; op; ts}\n:=\nbnd{v.mk bterm{n; op; map t.subst{t ; v} ts}}\nNote that, if ts is a list of bnd expressions (which is the intended\nusage of the mk bterm operation), then the\nbnd{v. map t.subst{t ; v} ts }\nhas the effect of stripping the outer bnd from each of the members\nof the ts list and \"moving\" them into a single \"merged\" bnd\non the outside.\nWe also define a number of de Bruijn-style destructors, i.e., operations\nthat compute various de Bruijn-style characteristics of a\nbterm. Since the var and mk bterm constructors are defined in terms\nof the HOAS constructors, the destructors have to be defined in\nterms of HOAS operations as well. 
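The constructors of Sections 4.2-4.4 can be mimicked in a small typed OCaml sketch. This is only an approximation of our own (the actual definitions are untyped pairs inside the type theory, and the names here are ours), but it shows how subst is just application and how vbnd, var and mk_bterm arise from the recursions above.

  type op = string

  (* HOAS-style constructors (Section 4.2), as a typed stand-in for the pairs. *)
  type tm =
    | Bnd of (tm -> tm)              (* bnd{x.t[x]}     *)
    | MkTerm of op * tm list         (* mk_term{op; ts} *)

  (* subst{bt; t} is application of the carried function. *)
  let subst (bt : tm) (t : tm) : tm =
    match bt with Bnd f -> f t | MkTerm _ -> invalid_arg "subst: not a bnd"

  (* vbnd{n; x.t[x]} nests n binders and hands the body the list of bound values. *)
  let rec vbnd (n : int) (t : tm list -> tm) : tm =
    if n = 0 then t [] else Bnd (fun v -> vbnd (n - 1) (fun vs -> t (v :: vs)))

  (* De Bruijn-style constructors (Section 4.4). *)
  let var (i : int) (j : int) : tm =
    vbnd i (fun _ -> Bnd (fun v -> vbnd j (fun _ -> v)))

  let rec mk_bterm (n : int) (o : op) (ts : tm list) : tm =
    if n = 0 then MkTerm (o, ts)
    else Bnd (fun v -> mk_bterm (n - 1) o (List.map (fun t -> subst t v) ts))

  (* One destructor, for flavor: strip binders with a dummy argument and count
     them (cf. bdepth in the text below). *)
  let rec bdepth (t : tm) : int =
    match t with
    | Bnd _ -> 1 + bdepth (subst t (MkTerm ("dummy", [])))
    | MkTerm _ -> 0

On this toy encoding one can check, for instance, that bdepth (var i j) evaluates to i + j + 1, matching the derived property stated for var{l; r}.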
Because of this, these definitions\nare often far from straightforward.\nIt is important to emphasize that the tricky definitions that we\nuse here are only needed to establish the basic properties of the\noperations we defined. Once the basic theory is complete, we can\nraise the level of abstraction and no usage of this theory will\never require using any of these definitions, being aware of these\ndefinitions, or performing similar tricks again.\n\nbdepth{t } computes the binding depth of term t . It is defined\nrecursively using the Y combinator as\nY\n\nf.b.match b with\nbnd{ } 1 + f subst{b; mk term{0; 0}}\n| mk term{ ; } 0\n\nt\nIn effect, this recursive function strips the outer binders from a\nbterm one by one using substitution (note that here we can use\nan arbitrary mk bterm expression as a second argument for the\nsubstitution function; the arguments to mk bterm do not have\nto have the \"correct\" type) and counts the number of times it\nneeds to do this before the outermost mk bterm is exposed.\nWe derive the following properties of bdepth:\nl, r N. bdepth{var{l; r}} (l + r + 1) ;\nn N. bdepth{mk bterm{n; op; ts}} n .\nNote that the latter equivalence only requires n to have the\n\"correct\" type, while op and ts may be arbitrary. Since the\nbdepth operator is needed for defining the type of Term of well-formed\nbterms, at this point we would not have been able to\nexpress what the \"correct\" type for ts would be.\n\nleft{t } is designed to compute the \"left index\" of a var expression\n. It is defined as\nY\n\n\n\n\n\nf.b.l.\nmatch b with\nbnd{ }\n1 + f subst{b; mk term{l; 0}} (l + 1)\n| mk term l ;\nl\n\n\n\n\nt 0\nIn effect, this recursive function substitutes mk term{0; 0}\nfor the first binding of t , mk term{1; 0} for the second one,\nmk term{2; 0} for the next one and so forth. Once all the binders\nare stripped and a mk term{l; 0} is exposed, l is the index\nwe were looking for. Note that here we intentionally supply\nmk term with an argument of a \"wrong\" type (N instead of\nOp); we could have avoided this, but then the definition would\nhave been significantly more complicated.\nAs expected, we derive that\nl, r N.(left{var{l; r}} l).\n\nright{t } computes the \"right index\" of a var expression. It\nis trivial to define in terms of the previous two operators:\nright{t } := bdepth{t } - left{t } - 1.\n\nget op{t ; op} is an operation such that\nn N. get op mk bterm{n; op; ts} ; op op ,\nl, r N. (get op{var{i; j} ; op} op .\nIts definition is similar to that of left{}.\n\nsubterms{t } is designed to recover the last argument of a\nmk bterm expression. The definition is rather technical and\ncomplicated, so we omit it; see [NKYH05, Appendix C] for\ndetails. The main property of the subterms operation that we\nderive is\nn N.btl List. subterms{mk bterm{n; op; btl}}\nmap b.vbnd{n; v.vsubst{b; v}} btl\nThe right-hand side of this equivalence is not quite the plain\n\"btl\" that one might have hoped to see here. 
However, when\nbtl is a list of bterms with binding depths at least n, which is\nnecessarily the case for any well-formed mk bterm{n; op; btl},\nequivalence (8) would allow simplifying this right-hand side to\nthe desired btl.\n4.5\nOperators\nFor this basic theory the exact representation details for operators\nare not essential and we define the type of operators Op abstractly.\nWe only require that operators have decidable equality and that\nthere exist a function of the type Op N List that computes\noperators' shapes.\nUsing this shape function and the bdepth function from Section\n4.4, it is trivial to formalize the \"ts is compatible with op at\ndepth n\" predicate of Definition 4. We denote this predicate as\nshape compat{n; op; ts} and define it as\n|shape{op}| = |btl|\ni 1..|btl|.bdepth{nth{btl; i}} = n + nth{shape{op}; i}\n4.6\nThe Type of Terms\nIn this section we will define the type of terms (i.e. well-formed\nbterms), Term, as the type of all terms that can be constructed by\nthe de Bruijn constructors from Section 4.4. That is, the Term type\ncontains all expressions of the forms:\n\nvar{i ; j } for all natural numbers i, j ; or\n8\n\nmk bterm{n; op; ts} for any natural number n, operator op, and\nlist of terms ts that is compatible with op at depth n.\nThe Term type is defined as a fixpoint of the following function\nfrom types to types:\nIter(X ) := Image(dom(X ); x .mk(x )),\nwhere\n\nImage is a type constructor such that Image(T ; x. f [x]) is the\ntype of all the f [t ] for t T (for it to be well-formed, T must\nbe a well-formed type and f must not have any free variables\nexcept for x);\n\ndom(X ) is a type defined as\n(NN)+ n:Nop:Op{ts:X List | shape compat{n; op; ts}} ;\n\nand mk(x) (where x is presumably a member of the type\ndom(X )) is defined as\nmatch x with\ninl (i, j) var{i; j}\n| inr (n, op, ts) mk bterm{n; op; ts} .\nThe fixpoint of Iter is reached by defining\n\nTerm\n0\n:= Void (an empty type)\n\nTerm\nn+1\n:= Iter(Term\nn\n)\n\nTerm :=\nn\nN\nTerm\nn\nWe derive the intended introduction rules for the Term type:\ni N\nj N\nvar{i ; j } Term\nand\nn N\nop Op\nts Term List\nshape compat{n; op; ts}\nmk bterm{n; op; ts} Term\n.\nAlso, the structural induction principle is derived for the Term\ntype. Namely, we show that to prove that some property P[t ] holds\nfor any term t , it is sufficient to prove\n\n(Base case) P holds for all variables, that is, P[var{i ; j }] holds\nfor all natural numbers i and j ;\n\n(Induction step) P[mk bterm{n; op; ts}] is true for any natural\nnumber n, any operator op, and any list of terms ts that is\ncompatible with op at depth n, provided P[t ] is true for any\nelement t of the list ts.\nNote that the type of \"terms over n variables\" (where n = 0 corresponds\nto closed terms) may be trivially defined using the Term\ntype and the \"subset\" type constructor -- {t : Term | bdepth{t } =\nn}.\nConclusions and Future Work\nIn Sections 3 and 4 we have presented a basic theory of syntax\nthat is fully implemented in a theorem prover. As we mentioned in\nthe introduction, the approach is both natural and expressive, and\nprovides a foundation for reflective reasoning about classes of languages\nand logics. 
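Looking back at the construction in Section 4.6, the net effect of the fixpoint is that Term is generated by exactly the two introduction rules, with structural induction as its elimination principle. A first-order OCaml mirror (our own sketch, independent of the encoding above, with a hard-coded shape table) makes this concrete:

  (* Term is generated by the two introduction rules; ordinary structural
     recursion over this mirror plays the role of the induction principle. *)
  type term =
    | Var of int * int                      (* var{i; j}            *)
    | MkBterm of int * string * term list   (* mk_bterm{n; op; ts}  *)

  let shape = function "lambda" -> [1] | "apply" -> [0; 0] | _ -> []

  let bdepth = function Var (i, j) -> i + j + 1 | MkBterm (n, _, _) -> n

  (* The two introduction rules, read as a well-formedness check. *)
  let rec wf = function
    | Var (i, j) -> i >= 0 && j >= 0
    | MkBterm (n, op, ts) ->
        let ds = shape op in
        n >= 0
        && List.length ds = List.length ts
        && List.for_all2 (fun d t -> wf t && bdepth t = n + d) ds ts

  (* Structural induction packaged as a fold over the two constructors. *)
  let rec fold ~var ~mk (t : term) =
    match t with
    | Var (i, j) -> var i j
    | MkBterm (n, op, ts) -> mk n op (List.map (fold ~var ~mk) ts)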
However, we consider this theory to be only\nthe first step towards building a user-accessible uniform reflection\nframework and a user-accessible uniform framework for programming\nlanguage reasoning and experimentation, where tasks similar\nto the ones presented in the P\nOPL\nM\nARK\nchallenge [ABF\n+\n05] can\nbe performed easily and naturally. In this section we provide an outline\nof our plans for building such frameworks on top of the basic\nsyntactic theory.\n5.1\nHigher-Level User Interface\nOne obvious shortcoming of the theory presented in Sections 3\nand 4 is that it provides only the basic low-level operations such\nas bnd, var, subterms, etc. It presents a very low-level account of\nsyntax in a way that would often fail to abstract away the details\nirrelevant to the user.\nTo address this problem we are planning to provide user interface\nfunctionality capable of mapping the high-level concepts\nto the low-level ones. In particular, we are going to provide an\ninterface that would allow instantiating general theorems to specific\ncollections of operators and specific languages. Thus, the user\nwill be able to write something like \"reflect language [x.;\napply{; }]\" and the system will create all the components outlined\nin Example 1:\n\nIt will create a definition for the type\nLanguage[x.; apply{; }]\nof reflected lambda-terms (where Language[l] is a general definition\nof a language over a list of operators l);\n\nIt will state and derive the introduction rules for this type;\n\nIt will state and derive the elimination rule for this type (the\ninduction principle).\nMoreover, we are planning to support even more complicated language\ndeclarations, such as\nt := int | t t ;\ne := v | x : t.e[x ] | apply{e; e}\nthat would cause the system to create mutually recursive type\ndefinitions and appropriate rules.\nFinally, we are also planning to support \"pattern bindings\" that\nare needed for a natural encoding of ML-like pattern matching\n(such as the one sketched in the P\nOPL\nM\nARK\nchallenge [ABF\n+\n05]).\nAs far as the underlying theory goes, we believe that the mechanisms\nvery similar to the \"vector bindings\" presented in Section 4.3\nwill be sufficient here.\n5.2\n\"Dereferencing\" Quoted Terms\nAs in Barzilay's work, the quoted operator approach makes it easy\nto define the \"unquoting\" (or \"dereferencing\") operator [[]]\nunq\n. If t\nis a syntactic term, then [[t ]]\nunq\nis the value represented by t . By\ndefinition,\n[[op{t\n1\n; . . . ; t\nn\n}]]\nunq\n= op{[[t\n1\n]]\nunq\n; . . . ; [[t\nn\n]]\nunq\n}.\nFor instance, [[2 3]]\nunq\nis 2 3 (i.e. 6).\nIn order to define unquoting on terms with bindings, we need to\nintroduce the \"guard\" operation\nsuch that [[ t ]]\nunq\nis t for an\narbitrary expression t . Then [[]]\nunq\ncan be defined as follows:\n[[op{x\n1\n, , x\nk\n.t[x\n1\n; . . . ; x\nk\n]; ;z\n1\n, , z\nl\n.s[z\n1\n; . . . ; z\nl\n]}]]\nunq\n=\nop{x\n1\n, , x\nk\n.[[t[ x\n1\n; . . . ; x\nk\n]]]\nunq\n;\n;\nz\n1\n, , z\nl\n.[[s[ z\n1\n; . . . ; z\nl\n]]]\nunq\n}.\nFor example, [[x.2 x]]\nunq\n= x.[[2 x ]]\nunq\n= x.[[2]]\nunq\n\n[[ x ]]\nunq\n= x.2 x.\nThe unquote operation establishes the identity between the original\nsyntax and the reflected syntax, making it a \"true\" reflection.\nNote that the type theory (which ensures, in particular, that\nonly terminating functions may be shown to belong to a function\ntype) would keep the [[ ]]\nunq\noperation from introducing logical\nparadoxes.\n3\n3\nThis is, obviously, not a proper argument. 
While a proper argument can be\nmade here, it is outside of the scope of this particular paper.\n9\nAlso, since the notion of the quoted operators is fully open-ended\n, each new language added to the system will automatically\nget to use the [[ ]]\nunq\noperation for all its newly introduced operators\n.\n5.3\nLogical Reflection\nAfter defining syntactic reflection, it is easy to define logical reflection\n. If we consider the proof system open-ended, then the logical\nreflection is trivial -- when P is a quotation of a proposition, we\ncan regard \"[[P]]\nunq\n\" as meaning \" P is true\". The normal modal\nrules for the [[]]\nunq\nmodality are trivially derivable. For example\nmodus ponens\n[[P Q]]\nunq\n[[P]]\nunq\n[[Q]]\nunq\nis trivially true because if we evaluate the first [[]]\nunq\n(remember,\n[[P Q]]\nunq\n= [[P]]\nunq\n[[Q]]\nunq\nby definition of [[]]\nunq\n), we get an obvious tautology\n([[P]]\nunq\n[[Q]]\nunq\n) [[P]]\nunq\n[[Q]]\nunq\n.\nIn order to consider a closed proof system (in other words, if\nwe want to be able to do induction over derivations), we would\nneed to define a provability predicate for that system. We are\nplanning to provide user interface functionality that would allow\nusers to describe a set of proof rules and the system would generate\nappropriate proof predicate definitions and derive appropriate rules\n(in a style similar to the one outlined in Section 5.1 for the case of\nlanguage descriptions).\nRelated Work\nIn Section 2 we have already discussed a number of approaches\nthat we consider ourselves inheriting from. Here we would like to\nrevisit some of them and mention a few other related efforts.\nOur work has a lot in common with the HOAS implemented in\nCoq by Despeyroux and Hirschowitz [DH94]. In both cases, the\nmore general space of terms (that include the exotic ones) is later\nrestricted in a recursive manner. In both cases, the higher-order\nanalogs of first-order de Bruijn operators are defined and used as a\npart of the \"well-formedness\" specification for the terms. Despeyroux\nand Hirschowitz use functions over infinite lists of variables\nto define open terms, which is similar to our vector bindings.\nThere are a number of significant differences as well. Our approach\nis sufficiently syntactical, which allows eliminating all exotic\nterms, even those that are extensionally equal to the well-formed\nones, while the more semantic approach of\n[DH94,\nDFH95] has to accept such exotic terms (their solution to this problem\nis to consider an object term to be represented by the whole\nequivalence class of extensionally equal terms); more generally\nwhile [DH94] states that \"this problem of extensionality is recurrent\nall over our work\", most of our lemmas establish identity and\nnot just equality, thus avoiding most of the issues of extensional\nequality. In our implementation, the substitution on object terms is\nmapped directly to -reduction, while Despeyroux et al. [DFH95]\nhave to define it recursively. In addition, we provide a uniform approach\nto both free and bound variables that naturally extends to\nvariable-length \"vector\" bindings.\nWhile our approach is quite different from the modal -calculus\none [DPS97, DL99, DL01], there are some similarities in the intuition\nbehind it. Despeyroux et al. [DPS97] says \"Intuitively, we\ninterpret\nB as the type of closed objects of type B. 
We can iterate\nor distinguish cases over closed objects, since all constructors\nare statically known and can be provided for.\" The intuition behind\nour approach is in part based on the canonical model of the\nNuPRL type theory [All87a, All87b], where each type is mapped\nto an equivalence relations over the closed terms of that type.\nGordon and Melham [GM96] define the type of -terms as a\nquotient of the type of terms with concrete binding variables over\n-equivalence. Michael Norrish [Nor04] builds upon this work by\nreplacing certain variable \"freshness\" requirements with variable\n\"swapping\". This approach has a number of attractive properties;\nhowever, we believe that the level of abstraction provided by the\nHOAS-style approaches makes the HOAS style more convenient\nand accessible.\nAmbler, Crole, and Momigliano [ACM02] have combined the\nHOAS with the induction principle using an approach which in\nsome sense is opposite to ours. Namely, they define the HOAS\noperators on top of the de Bruijn definition of terms using higher\norder pattern matching. In a later work [ACM03] they have de-scribed\nthe notion of \"terms-in-infinite-context\" which is quite similar\nto our approach to vector binding. While our vector bindings\npresented in Section 4.3 are finite length, the exact same approach\nwould work for the infinite-length \"vectors\" as well.\nAcknowledgments\nThe authors are grateful to Eli Barzilay whose ideas were an inspiration\nfor some of the work that lead to this paper. We are also\ngrateful for his comments on an early draft of this paper.\nWe are grateful to the anonymous reviewers for their very thorough\nand fair feedback and many helpful suggestions.\nReferences\n[AA99]\nEric Aaron and Stuart Allen. Justifying calculational logic\nby a conventional metalinguistic semantics. Technical Report\nTR99-1771, Cornell University, Ithaca, New York, September\n1999.\n[ABF\n+\n05]\nBrian E. Aydemir, Aaron Bohannon, Matthew Fairbairn,\nJ. Nathan Foster, Benjamin C. Pierce, Peter Sewell, Dimitrios\nVytiniotis, Geoffrey Washburn, Stephanie Weirich, and Steve\nZdancewic. Mechanized metatheory for the masses: The\nPOPLmark challenge. Available from http://www.cis.\nupenn.edu/group/proj/plclub/mmm/, 2005.\n[AC92]\nWilliam Aitken and Robert L. Constable. Reflecting on\nNuPRL : Lessons 14. Technical report, Cornell University,\nComputer Science Department, Ithaca, NY, 1992.\n[ACE\n+\n00]\nStuart Allen, Robert Constable, Richard Eaton, Christoph\nKreitz, and Lori Lorigo. The NuPRL open logical environment\n. In David McAllester, editor, Proceedings of the\n17\nt h\nInternational Conference on Automated Deduction, volume\n1831 of Lecture Notes in Artificial Intelligence, pages\n170176. Springer Verlag, 2000.\n[ACHA90]\nStuart F. Allen, Robert L. Constable, Douglas J. Howe,\nand William Aitken. The semantics of reflected proof. In\nProceedings of the 5\nt h\nSymposium on Logic in Computer\nScience, pages 95197. IEEE Computer Society Press, June\n1990.\n[ACM02]\nSimon Ambler, Roy L. Crole, and Alberto Momigliano.\nCombining higher order abstract syntax with tactical theorem\nproving and (co)induction. In TPHOLs '02: Proceedings\nof the 15th International Conference on Theorem Proving\nin Higher Order Logics, pages 1330, London, UK, 2002.\nSpringer-Verlag.\n[ACM03]\nS. J. Ambler, R. L. Crole, and Alberto Momigliano. A\ndefinitional approach to primitive recursion over higher\norder abstract syntax. 
In Proceedings of the 2003 workshop\non Mechanized reasoning about languages with variable\nbinding, pages 111. ACM Press, 2003.\n[ACU93]\nWilliam Aitken, Robert L. Constable, and Judith Underwood.\nMetalogical Frameworks II: Using reflected decision procedures\n. Journal of Automated Reasoning, 22(2):171221,\n1993.\n10\n[All87a]\nStuart F. Allen. A Non-type-theoretic Definition of Martin-Lof's\nTypes. In D. Gries, editor, Proceedings of the 2\nnd\nIEEE\nSymposium on Logic in Computer Science, pages 215224.\nIEEE Computer Society Press, June 1987.\n[All87b]\nStuart F. Allen. A Non-Type-Theoretic Semantics for Type-Theoretic\nLanguage. PhD thesis, Cornell University, 1987.\n[Art99]\nSergei Artemov. On explicit reflection in theorem proving\nand formal verification. In Ganzinger [Gan99], pages 267\n281.\n[Art04]\nSergei Artemov.\nEvidence-based common knowledge.\nTechnical Report TR-2004018, CUNY Ph.D. Program in\nComputer Science Technical Reports, November 2004.\n[BA02]\nEli Barzilay and Stuart Allen. Reflecting higher-order abstract\nsyntax in NuPRL. In Victor A. Carre~no, Cezar A. Mu~noz,\nand Sophi`ene Tahar, editors, Theorem Proving in Higher\nOrder Logics; Track B Proceedings of the 15\nt h\nInternational\nConference on Theorem Proving in Higher Order Logics\n(TPHOLs 2002), Hampton, VA, August 2002, pages 2332.\nNational Aeronautics and Space Administration, 2002.\n[BAC03]\nEli Barzilay, Stuart Allen, and Robert Constable. Practical\nreflection in NuPRL. Short paper presented at 18th Annual\nIEEE Symposium on Logic in Computer Science, June 22\n25, Ottawa, Canada, 2003.\n[Bar01]\nEli Barzilay. Quotation and reflection in NuPRL and Scheme.\nTechnical Report TR2001-1832, Cornell University, Ithaca,\nNew York, January 2001.\n[Bar05]\nEli Barzilay. Implementing Reflection in NuPRL. PhD thesis,\nCornell University, 2005. In preparation.\n[CAB\n+\n86]\nRobert L. Constable, Stuart F. Allen, H. M. Bromley, W. R.\nCleaveland, J. F. Cremer, R. W. Harper, Douglas J. Howe,\nT. B. Knoblock, N. P. Mendler, P. Panangaden, James T.\nSasaki, and Scott F. Smith. Implementing Mathematics with\nthe NuPRL Proof Development System. Prentice-Hall, NJ,\n1986.\n[CFW04]\nLuis Crus-Filipe and Freek Weidijk. Hierarchical reflection.\nIn Slind et al. [SBG04], pages 6681.\n[Con94]\nRobert L. Constable. Using reflection to explain and enhance\ntype theory. In Helmut Schwichtenberg, editor, Proof and\nComputation, volume 139 of NATO Advanced Study Institute\n, International Summer School held in Marktoberdorf,\nGermany, July 20-August 1, NATO Series F, pages 65100.\nSpringer, Berlin, 1994.\n[dB72]\nN. G. de Bruijn. Lambda calculus notation with nameless\ndummies, a tool for automatic formula manipulation, with\napplication to the Church-Rosser theorem. Indagaciones\nMathematische, 34:381392, 1972. This also appeared in the\nProceedings of the Koninklijke Nederlandse Akademie van\nWetenschappen, Amsterdam, series A, 75, No. 5.\n[DFH95]\nJoelle Despeyroux, Amy Felty, and Andre Hirschowitz.\nHigher-order abstract syntax in Coq.\nIn M. Dezani-Ciancaglini\nand G. Plotkin, editors, Proceedings of the\nInternational Conference on Typed Lambda Calculus and\nits Applications, volume 902 of Lecture Notes in Computer\nScience, pages 124138. Springer-Verlag, April 1995. Also\nappears as INRIA research report RR-2556.\n[DH94]\nJoelle Despeyroux and Andre Hirschowitz. 
Higher-order\nabstract syntax with induction in Coq.\nIn LPAR '94:\nProceedings of the 5th International Conference on Logic\nProgramming and Automated Reasoning, volume 822\nof Lecture Notes in Computer Science, pages 159173.\nSpringer-Verlag, 1994. Also appears as INRIA research\nreport RR-2292.\n[DH95]\nJames Davis and Daniel Huttenlocher. Shared annotations for\ncooperative learning. In Proceedings of the ACM Conference\non Computer Supported Cooperative Learning, September\n1995.\n[DL99]\nJoelle Despeyroux and Pierre Leleu. A modal lambda\ncalculus with iteration and case constructs. In T. Altenkirch,\nW. Naraschewski, and B. Reus, editors, Types for Proofs\nand Programs: International Workshop, TYPES '98, Kloster\nIrsee, Germany, March 1998, volume 1657 of Lecture Notes\nin Computer Science, pages 4761, 1999.\n[DL01]\nJoelle Despeyroux and Pierre Leleu. Recursion over objects\nof functional type. Mathematical Structures in Computer\nScience, 11(4):555572, 2001.\n[DPS97]\nJoelle Despeyroux, Frank Pfenning, and Carsten Schurmann.\nPrimitive recursion for higherorder abstract syntax. In\nR. Hindley, editor, Proceedings of the Third International\nConference on Typed Lambda Calculus and Applications\n(TLCA'97), volume 1210 of Lecture Notes in Computer\nScience, pages 147163. Springer-Verlag, April 1997. An\nextended version is available as Technical Report CMU-CS-96\n-172, Carnegie Mellon University.\n[EM71]\nAndrzej Ehrenfeucht and Jan Mycielski.\nAbbreviating\nproofs by adding new axioms. Bulletin of the American\nMathematical Society, 77:366367, 1971.\n[F\n+\n86]\nSolomon Feferman et al., editors. Kurt Godel Collected\nWorks, volume 1.\nOxford University Press, Oxford,\nClarendon Press, New York, 1986.\n[FPT99]\nMarcelo Fiore, Gordon Plotkin, and Daniele Turi. Abstract\nsyntax and variable binding. In Proceedings of 14\nt h\nIEEE\nSymposium on Logic in Computer Science, pages 193+. IEEE\nComputer Society Press, 1999.\n[Gan99]\nHarald Ganzinger, editor. Proceedings of the 16\nt h\nInternational\nConference on Automated Deduction, volume 1632\nof Lecture Notes in Artificial Intelligence, Berlin, July 710\n1999. Trento, Italy.\n[GM96]\nA. D. Gordon and T. Melham.\nFive axioms of alpha-conversion\n. In J. von Wright, J. Grundy, and J. Harrison,\neditors, Theorem Proving in Higher Order Logics: 9th\nInternational Conference, Turku, Finland, August 1996:\nProceedings, volume 1125 of Lecture Notes in Computer\nScience, pages 173190. Springer-Verlag, 1996.\n[GMO03]\nJim Grundy, Tom Melham, and John O'Leary. A reflective\nfunctional language for hardware design and theorem\nproving. Technical Report PRG-RR-03-16, Oxford Univerity,\nComputing Laboratory, 2003.\n[God31]\nKurt Godel.\nUber formal unentscheidbare satze der principia\nmathematica und verwandter systeme I. Monatshefte fur\nMathematik und Physik, 38:173198, 1931. English version\nin [vH67].\n[God36]\nK. Godel.\n\nUber die Lange von beweisen. Ergebnisse\neines mathematischen Kolloquiums, 7:2324, 1936. English\ntranslation in [F\n+\n86], pages 397399.\n[GS89]\nF. Giunchiglia and A. Smaill. Reflection in constructive\nand non-constructive automated reasoning. In H. Abramson\nand M. H. Rogers, editors, Meta-Programming in Logic\nProgramming, pages 123140. MIT Press, Cambridge,\nMass., 1989.\n[GWZ00]\nH. Geuvers, F. Wiedijk, and J. Zwanenburg. Equational reasoning\nvia partial reflection. In J. Harrison and M. 
Aagaard,\neditors, Theorem Proving in Higher Order Logics: 13\nt h\nInternational\nConference, TPHOLs 2000, volume 1869 of Lecture\nNotes in Computer Science, pages 162178. Springer-Verlag,\n2000.\n[HAB\n+\n]\nJason J. Hickey, Brian Aydemir, Yegor Bryukhov, Alexei\nKopylov, Aleksey Nogin, and Xin Yu. A listing of MetaPRL\ntheories. http://metaprl.org/theories.pdf.\n[Har95]\nJ. Harrison. Metatheory and reflection in theorem proving:\nA survey and critique. Technical Report CRC-53, SRI\nInternational, Cambridge Computer Science Research\nCentre, Millers Yard, Cambridge, UK, February 1995.\n11\n[HHP93]\nRobert Harper, Furio Honsell, and Gordon Plotkin.\nA\nframework for defining logics. Journal of the Association\nfor Computing Machinery, 40(1):143184, January 1993. A\nrevised and expanded verion of '87 paper.\n[Hic97]\nJason J. Hickey.\nNuPRL-Light: An implementation\nframework for higher-order logics. In William McCune,\neditor, Proceedings of the 14\nt h\nInternational Conference\non Automated Deduction, volume 1249 of Lecture Notes in\nArtificial Intelligence, pages 395399. Springer, July 1317\n1997. An extended version of the paper can be found at\nhttp://www.cs.caltech.edu/~jyh/papers/cade14_\nnl/default.html.\n[Hic99]\nJason J. Hickey. Fault-tolerant distributed theorem proving.\nIn Ganzinger [Gan99], pages 227231.\n[Hic01]\nJason J. Hickey. The MetaPRL Logical Programming\nEnvironment. PhD thesis, Cornell University, Ithaca, NY,\nJanuary 2001.\n[HL78]\nGerard P. Huet and Bernard Lang. Proving and applying\nprogram transformations expressed with second-order\npatterns. Acta Informatica, 11:3155, 1978.\n[HNC\n+\n03]\nJason Hickey, Aleksey Nogin, Robert L. Constable,\nBrian E. Aydemir, Eli Barzilay, Yegor Bryukhov, Richard\nEaton, Adam Granicz, Alexei Kopylov, Christoph Kreitz,\nVladimir N. Krupski, Lori Lorigo, Stephan Schmitt, Carl\nWitty, and Xin Yu. MetaPRL -- A modular logical environment\n. In David Basin and Burkhart Wolff, editors,\nProceedings of the 16\nt h\nInternational Conference on Theorem\nProving in Higher Order Logics (TPHOLs 2003), volume\n2758 of Lecture Notes in Computer Science, pages 287303.\nSpringer-Verlag, 2003.\n[HNK\n+\n]\nJason J. Hickey, Aleksey Nogin, Alexei Kopylov, et al.\nMetaPRL home page. http://metaprl.org/.\n[Mos52]\nAndrzej Mostowski. Sentences undecidable in formalized\narithmetic: an exposition of the theory of Kurt Godel.\nAmsterdam: North-Holland, 1952.\n[NH02]\nAleksey Nogin and Jason Hickey. Sequent schema for\nderived rules. In Victor A. Carre~no, Cezar A. Mu~noz,\nand Sophi`ene Tahar, editors, Proceedings of the 15\nt h\nInternational Conference on Theorem Proving in Higher\nOrder Logics (TPHOLs 2002), volume 2410 of Lecture Notes\nin Computer Science, pages 281297. Springer-Verlag, 2002.\n[NKYH05]\nAleksey Nogin, Alexei Kopylov, Xin Yu, and Jason Hickey.\nA computational approach to reflective meta-reasoning\nabout languages with bindings.\nTechnical Report CaltechCSTR\n:2005.003, California Institure of Technology,\n2005. Available at http://resolver.caltech.edu/\nCaltechCSTR:2005.003.\n[Nor04]\nMichael Norrish. Recursive function definition for types with\nbinders. In Slind et al. [SBG04], pages 241256.\n[Par71]\nR. Parikh. Existence and feasibility in arithmetic. The Journal\nof Symbolic Logic, 36:494508, 1971.\n[Pau94]\nLawrence C. Paulson. Isabelle: A Generic Theorem Prover,\nvolume 828 of Lecture Notes in Computer Science. Springer-Verlag\n, New York, 1994.\n[PE88]\nFrank Pfenning and Conal Elliott. Higher-order abstract\nsyntax. 
In Proceedings of the ACM SIGPLAN '88 Conference\non Programming Language Design and Implementation\n(PLDI), volume 23(7) of SIGPLAN Notices, pages 199208,\nAtlanta, Georgia, June 1988. ACM Press.\n[Pfe89]\nFrank Pfenning. Elf: a language for logic definition and\nverified metaprogramming. In Proceedings of the 4\nt h\nIEEE\nSymposium on Logic in Computer Science, pages 313322,\nAsilomar Conference Center, Pacific Grove, California, June\n1989. IEEE Computer Society Press.\n[Plo90]\nGordon Plotkin. An illative theory of relations. In R. Cooper,\nK. Mukai, and J. Perry, editors, Situation Theory and Its\nApplications, Volume 1, number 22 in CSLI Lecture Notes,\npages 133146. Centre for the Study of Language and\nInformation, 1990.\n[PN90]\nL. Paulson and T. Nipkow. Isabelle tutorial and user's manual\n. Technical report, University of Cambridge Computing\nLaboratory, 1990.\n[SBG04]\nKonrad Slind, Annette Bunker, and Ganesh Gopalakrishnan,\neditors. Proceedings of the 17\nt h\nInternational Conference\non Theorem Proving in Higher Order Logics (TPHOLs\n2004), volume 3223 of Lecture Notes in Computer Science.\nSpringer-Verlag, 2004.\n[Sch01]\nCarsten Schurmann. Recursion for higher-order encodings.\nIn L. Fribourg, editor, Computer Science Logic, Proceedings\nof the 10\nt h\nAnnual Conference of the EACSL, volume 2142\nof Lecture Notes in Computer Science, pages 585599.\nSpringer-Verlag, 2001.\n[Smi84]\nB.C. Smith. Reflection and semantics in Lisp. Principles of\nProgramming Languages, pages 2335, 1984.\n[vH67]\nJ. van Heijenoort, editor. From Frege to Godel: A Source\nBook in Mathematical Logic, 18791931. Harvard University\nPress, Cambridge, MA, 1967.\n12", "keywords": "system reflection;programming language;High order abstract syntax;formal languages;Theorem prover;NuPRL;Meta-syntax;MetaPRL theorem prover;Languages with bindings;Uniform reflection framework;Higher-Order Abstract Syntax;Bruijn-style operations;HOAS-style operations;NuPRL-like Martin-Lof-style computational type theory;higher-order abstract syntax;Type Theory;formal definition and theory;computer aided reasoning;Meta-reasoning;Recursive definition;Reflective reasoning;Reflection;Languages with Bindings;Substitution;MetaPRL;Runtime code generation;Meta-language;uniform reflection framework;Theory of syntax;Programming Language Experimentation"} {"name": "30", "title": "An Architectural Style for High-Performance Asymmetrical Parallel Computations", "abstract": "Researchers with deep knowledge of scientific domains are now developing highly-adaptive and irregular (asymmetrical ) parallel computations, leading to challenges in both delivery of data for computation and mapping of processes to physical resources. Using software engineering principles, we have developed a new communications protocol and architectural style for asymmetrical parallel computations called ADaPT. Utilizing the support of architecturally-aware middleware, we show that ADaPT provides a more efficient solution in terms of message passing and load balancing than asymmetrical parallel computations using collective calls in the Message-Passing Interface (MPI) or more advanced frameworks implementing explicit load-balancing policies. 
Additionally , developers using ADaPT gain significant windfall from good practices in software engineering, including implementation-level support of architectural artifacts and separation of computational loci from communication protocols", "fulltext": "INTRODUCTION\nIn recent years, as the cost-to-performance ratio of consumer\nhardware has continued to decrease, computational\nclusters consisting of fast networks and commodity hardware\nhave become a common sight in research laboratories. A\nCopyright is held by the author/owner.\nICSE'06, May 2028, 2006, Shanghai, China.\nACM 1-59593-085-X/06/0005.\ngrowing number of physicists, biologists, chemists, and computer\nscientists have developed highly-adaptive and irregular\nparallel applications that are characterized by computational\nintensity, loosely-synchronous parallelism and dynamic\ncomputation. Because the computation time of each\nparallel process varies significantly for this class of computation\n, we shall refer to them as asymmetrical parallel computations\n.\nAdaptive mesh refinements for the simulation\nof crack growth, combinatorial search applications used in\nartificial intelligence, and partial differential equation field\nsolvers [2] are examples of asymmetrical computations.\nWhile supercomputing platforms available to us continue\nto increase performance, our ability to build software capable\nof matching theoretical limits is lacking [8]. At the\nsame time, researchers with significant depth of knowledge\nin a scientific domain but with limited software experience\nare confounded by the interface bloat of libraries such the\nMessage-Passing Interface (MPI), which has 12 different routines\nfor point-to-point communications alone [5].\nWould-be practitioners of high-performance computing are\nintroduced early to the mantra of optimization. The myth\nthat high-level concepts inherent to software engineering\nprinciples, such as \"separation of concerns,\" result in inefficiencies\nat the performance level has caused these researchers\nto eschew best practices of traditional software\ndevelopment in favor of highly-optimized library routines.\nWe contend that a sound software engineering solution to\nasymmetrical parallel computations provides decoupling of\nconnectors from computational loci and reduces the complexity\nof development for the programmer while still providing\nan efficient solution both in terms of load-balancing\nand message-delivery. In this paper, we present such a solution\n.\nIn the next section, we will discuss our motivations for creating\nthe ADaPT protocol and architecture, including the\nload-balancing inefficiencies of \"optimized\" communications\nlibraries when computing asymmetrical parallel computations\n. We will then present ADaPT, a communications protocol\nand associated software architecture for asymmetrical\ncomputations. Additionally, we will present analysis which\nshows ADaPT's ability to outperform both MPI and other\nload-balancing frameworks using traditional work-sharing\nstrategies. 
We conclude with an overview of future research opportunities afforded by ADaPT.

MOTIVATION
This work has been motivated by our experience with two key classes of existing approaches: use of optimized communications libraries such as MPI [4], and message-passing frameworks which implement load-balancing strategies based on work sharing.

2.1 Message-Passing Interface
High-performance communications libraries such as MPI are optimized to reduce the bandwidth needed to communicate a large amount of data to subprocesses. In order to accomplish this reduction, collective calls in MPI are synchronous, causing barriers at data-distribution points in the software. When used to compute uniform parallel computations, barriers are unobtrusive. In asymmetrical computations, however, an effective mapping of processes to physical resources contributes more significantly to wall-clock time to completion than efficient communications. For these computations, asynchronous communications are needed, despite increased bandwidth.
To illustrate this phenomenon, let us consider a mapping of a large normalized population of computation times with a high level of variance onto a significantly smaller number of physical nodes (a strategy known as overaggregation). The MPI library offers developers efficient use of bandwidth via collective scatter and gather commands. While bandwidth is conserved using these collective calls, analyses made by Gropp et al. and Skjellum [10, 4] suggest that most implementations of MPI's scatter are built on top of MPI's rendezvous protocol and result in a synchronous barrier at each subsequent distribution of data.
Since each process has variable computation time, a number of subprocesses will remain idle until the longest process completes during each of the scatters. In [1] we have shown that the smallest contribution to overall wall-clock time to completion made by this idle time is given as nμ, where n is the number of subprocesses and μ is the mean of the computation times. In comparison, the wall-clock time saved using the collective calls to reduce bandwidth is negligible. While these collective calls only consider bandwidth optimizations, it is clear that in asymmetrical parallel computations, process load-balancing across subprocesses is a more important optimization to pursue.

2.2 Load-Balancing Frameworks
Attempts to develop message-passing frameworks that can assist computational scientists in the development of asymmetrical parallel computations can be divided into two groups: static load-balancing frameworks and dynamic load-balancing frameworks. Because a priori knowledge of the computation involved in asymmetrical parallel computations is required of static load balancers, such frameworks are inapplicable to this class of problems.
Unlike static load balancers, dynamic load-balancing frameworks do not require information a priori and are able to redeploy balanced distributions of data during program execution. Notable examples of parallel development frameworks which provide dynamic load-balancing are PREMA [2] and Charm++ [6]. Unfortunately, these frameworks often incur significant performance losses due to the introduction of barriers for load-balancing. Additionally, these frameworks do not provide explicit support for consistency of structure and development.
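To make the synchronization cost described in Section 2.1 concrete, the following minimal sketch shows one common way the scatter/compute/gather pattern just described is coded with MPI collectives. It is illustrative only and is not taken from [1] or from any of the frameworks discussed here: do_computation() is a hypothetical routine, one work item is assumed per process per round, the total number of items is assumed to be a multiple of the number of processes, and the work and results arrays are assumed to be allocated on every rank (only the root reads and writes meaningful data in them).

#include <mpi.h>

extern double do_computation(double item);   /* hypothetical, variable-duration work */

void scatter_gather_all(double *work, int total_items, double *results, int root)
{
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    for (int sent = 0; sent < total_items; sent += nprocs) {
        double item, result;
        /* Root hands out one item per process; the collective completes
           only once every process has received its item. */
        MPI_Scatter(work + sent, 1, MPI_DOUBLE, &item, 1, MPI_DOUBLE,
                    root, MPI_COMM_WORLD);
        result = do_computation(item);
        /* The gather acts as a barrier for the round: processes that finish
           early sit idle until the slowest computation completes. */
        MPI_Gather(&result, 1, MPI_DOUBLE, results + sent, 1, MPI_DOUBLE,
                   root, MPI_COMM_WORLD);
    }
}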
A software architectural solution can provide a number of benefits in addition to load balancing. Employing a sound software engineering principle, the separation of communication from computational elements shields the developer from the need to optimize communications and provides enforcement of architectural constraints. An added benefit is that architectural components reified as explicit implementation-level artifacts allow for easy reconfiguration of software in principle. We will revisit this point below.

A NOVEL PROTOCOL
Two overlooked aspects of performance optimizations that must be addressed in order to provide a truly efficient solution are asynchronous load-balancing and event pattern optimization. In addition to simply providing a load-balanced distribution, asynchronous load-balancing provides a best-effort redistribution of processes without introducing a barrier to computation.
Event pattern optimization suggests that a protocol is capable of utilizing the predictability of future messages given analysis of past messages. During overaggregated parallel computations, a number of computations need to be distributed to each of the subprocesses over the course of the parallel computation, causing a pattern to emerge.
In order to incorporate each of these optimizations into a high-performance communications protocol, we have developed ADaPT, an Adaptive Data-parallel Publication/Subscription Transport protocol and software architecture. The thesis of ADaPT is that it is possible to exploit the sequence that emerges from sending multiple messages to each parallel process in order to reduce the overall wall-clock time to completion of the computation while still making a best effort to avoid sending messages to each subprocess too quickly for the process to buffer.

3.1 ADaPT Defined
We feel that for the purposes of this paper it is most helpful to define ADaPT's protocol, architectural elements, and implementation.

3.1.1 Protocol
ADaPT views each parallel process as an independent software component (Worker) residing on a physical node capable of performing computations on data. Each Worker initiates computation by subscribing to a coordination component (Master).
An important distinction between ADaPT and traditional publication/subscription systems is that unlike traditional pub/sub systems, ADaPT does not duplicate messages to service multiple downstream requests. Rather, it distributes messages uniquely from a queue in a round-robin fashion.
Upon receipt of a subscription, the Master publishes a message to the Worker. There is another divergence from traditional pub/sub systems at this point. The Master waits for another request from the subscribed Worker before publishing another message to that Worker. Using the data reported by each subscribed Worker on its computation times, the Master tracks an average processing time per Worker.
Because the protocol is adaptive, when a predetermined number of messages have been sent to the Workers and an average processing time has been calculated, the Master switches from this conservative phase to an aggressive phase during which it sends messages of the requested type to the process at the regular interval dictated by this average.
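The switch between the two phases can be summarized by the following pseudo-C sketch of the Master's per-Worker delivery decision. This is our own illustrative reconstruction rather than the Prism-MW-based implementation described below; worker_t, publish() and the warm-up threshold are hypothetical names.

/* Illustrative sketch (hypothetical API) of the conservative/aggressive delivery phases. */
typedef struct {
    int    subscribed;        /* Worker currently subscribed to this Master            */
    int    messages_sent;     /* messages published to this Worker so far              */
    double avg_time_ms;       /* running average of the Worker's reported times        */
    double last_publish_ms;   /* timestamp of the last message published to the Worker */
} worker_t;

void publish(worker_t *w);    /* hypothetical: dequeue and send the next message       */

#define WARMUP_MESSAGES 16    /* assumed threshold before the phase switch             */

void dispatch(worker_t *w, double now_ms, int worker_requested)
{
    if (!w->subscribed)       /* Worker unsubscribed because its buffer filled up      */
        return;
    if (w->messages_sent < WARMUP_MESSAGES) {
        /* Conservative phase: publish only in response to a Worker request. */
        if (worker_requested) {
            publish(w); w->messages_sent++; w->last_publish_ms = now_ms;
        }
    } else if (now_ms - w->last_publish_ms >= w->avg_time_ms) {
        /* Aggressive phase: publish at the regular interval given by the
           average processing time tracked from the Worker's reports. */
        publish(w); w->messages_sent++; w->last_publish_ms = now_ms;
    }
}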
The protocol exploits the emerging event pattern to reduce the overall processing time at each physical node.
Similar to MPI's eager protocol, this phase of ADaPT can be too aggressive, flooding the process's buffer (datasets in high-performance computing tend to be very large, causing memory limitations to surface frequently). If the number of messages in the Worker's buffer reaches a maximum, the Worker unsubscribes from the Master. After the Worker has computed each of the messages in its buffer, it re-subscribes to the Master, starting once again with the conservative phase of delivery as described above.

Figure 1: Monte Carlo simulations of overhead for asymmetrical computations exhibiting multiple variances. (Four panels, one per computation-time variance of 10, 50, 100 and 200; each plots % overhead against the number of Workers for the load-balancing framework (LBF), MPI and ADaPT.)

3.1.2 Architectural Model and Implementation
We have further codified ADaPT in a software architectural style [9]. In addition to Master and Worker components, the ADaPT connector utilizes an adaptive dispatcher to deliver messages to each subscribed Worker using the ADaPT protocol. The dispatcher uses a priority-based round-robin algorithm which utilizes the calculated average processing time and attempts to saturate each Worker's computation load without flooding the Worker's buffer. This handler automatically switches between the conservative and aggressive phases. The key contribution of this connector is the encapsulation of underlying protocols, allowing the developer to focus instead on the computations to be performed.
Similar to the C2 software architecture [3], messages triggering computation travel downstream from one or more Masters to the ADaPT connector. Messages typed as results originating at Workers travel upstream through the ADaPT connector back to the Masters.
We have implemented these architectural rules through extensions to the Prism framework [7]. Prism-MW, a middleware designed to enforce architectural rules at the level of software artifacts, is a light-weight event-based framework consisting of a core set of functionality with handles to extensible components, connectors, and event handlers. Topological rules for each architectural style are also enforced through overloaded methods for connecting artifacts.

3.2 Performance Analysis
In analyzing ADaPT's performance in comparison to load-balancing frameworks as well as synchronous scatters and gathers using MPI, it is first important to define a base metric with which to compare protocols. This metric, the "natural rate" of parallel computation, is the sum of all individual computations to be completed divided by the number of nodes in the parallel computation.
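For reference, the natural rate and the percentage-overhead metric used in the comparison that follows can be computed directly from these definitions; the small helper below is only a sketch of the metric, not code from the ADaPT implementation or the simulation.

/* natural rate = sum of all individual computation times / number of nodes */
double natural_rate(const double *times, int num_computations, int num_nodes)
{
    double total = 0.0;
    for (int i = 0; i < num_computations; i++)
        total += times[i];
    return total / num_nodes;
}

/* percentage overhead = (wall-clock completion time - natural rate) / natural rate */
double percent_overhead(double wall_clock, double nat_rate)
{
    return 100.0 * (wall_clock - nat_rate) / nat_rate;
}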
In this section we will present comparisons of protocols as measured by percentage overhead (calculated as the wall-clock time for the parallel process to complete minus the natural rate, divided by the natural rate).
In order to properly compare ADaPT's ability to reduce message traffic as well as to efficiently map asymmetrical computations to physical resources, we developed a Monte Carlo simulation in which a normalized population of computations was delivered to virtual processors via three different communications policies/architectures and the percentage overhead was calculated for each. All message-passing costs were uniform across the network for each policy implemented.
MPI (collective calls) - Costs of synchronous scatters and gathers using MPI were modeled using equations from [10, 4]. In this policy each worker receives a computation via a scatter and returns via a gather before scattering the next subset until all computations are completed. This process is known as a multi-part scatter [1].
Load-balancing framework - The Monte Carlo simulation of the load-balancing framework uses work-sharing methods. All events are delivered to workers before they begin processing and a barrier is periodically introduced. At this barrier, the workload is redistributed evenly between all processors. In order to idealize load balancing, the cost of this calculation was treated as negligible.
ADaPT implementation - Using the routing policies of ADaPT, this implementation assumes that workers are capable of buffering only two events and each worker is homogeneous. We made each of these assumptions in order to conservatively profile ADaPT's performance.
In each of four simulations, a normalized population of 1000 computations was generated with a mean computation time of 100 milliseconds and a variance of 10, 50, 100, and 200 milliseconds², respectively. For each simulation, the aggregation of the parallel computation (i.e., the ratio of workers to computations) was varied from 1:500 to 1:5.
Results of this comparison are shown in Figure 1.

DISCUSSION
It can be seen in these plots that while ADaPT performs uniformly at all aggregations smaller than 1:100 (i.e., >= 10 workers), MPI collective commands and load-balancing frameworks decrease performance as the aggregation is reduced. For load-balancing frameworks, this is due to the increased volume of messaging required in order to re-balance the load across all processors at each barrier. In the presence of load-balancing, the idle time is significantly reduced, but the cost of rerouting messages to new processors makes ADaPT the better performer, especially in high-variance computations.
From these initial results, we feel that our implementation of ADaPT outperforms collective calls via MPI as well as load-balancing frameworks employing a full work-sharing scheme for significantly varied aggregations and computation time variances. In Monte Carlo simulations, ADaPT produced a better mapping of computations to resources, reducing computational overhead to under 1% for aggregations less than 1:100. In the simulations of aggregations greater than 1:100, ADaPT does not perform as well as MPI or other load-balancing frameworks due to the increased percentage of time each worker's buffer remains empty before another event is pushed to the Worker at the calculated rate of computation.
This situation seems of little consequence,\nhowever, in that data sets are seldom overagreggated to this\nextreme.\nADaPT offers a significant decrease in overhead for event\ndelivery in parallel computations and also outperforms established\nload-balancing techniques for use with asymmetrical\nparallel computations. Additionally, ADaPT, through\nits implementation in Prism, offers developers architectural\nartifacts at the level of implementation, clear division between\nthe computation loci (in the form of extensible Workers\n) and communications algorithms, and reduction of communications\nknowledge needed by the developer in order to\nimplement asymmetrical parallel computations.\n4.2\nFuture Work\nWhile ADaPT is clearly an applicable architectural style\nto high-performance computing, we make no claim as to its\nmonopoly of the field. In future work, we hope to build a\nmore substantial architectural framework for high-performance\ncomputing which provides more underlying protocol choices\nand further assists developers in code migration to new platforms\nincluding SMP and other shared-memory machines.\nWe hope to demonstrate the ease of system design and implementation\nvia architectures to the high-performance community\nwithout serious performance degradation, as is cur-rently\nthe prevalent though.\nFurther enhancements to the ADaPT protocol and architecture\nwill include refinement of its topological constraints\nto encapsulate both data-parallel stages of computation and\nhigher-level workflow stages using multiple layers of masters\nand workers connected between more advanced ADaPT\nconnectors (themselves perhaps distributed across multiple\nphysical nodes). Also, we hope to further investigate the\ntradeoffs associated with alternate unsubscription policies\nand the effects of \"pumping\" the parallel computation by\nmodifying delivery rates to be faster than average computation\nrates.\nThis work was supported by the NSF 0312780 grant. Any\nopinions, findings and conclusions or recommendations expressed\nin this material are those of the authors and do not\nnecessarily reflect those of the National Science Foundation.\nThe authors also wish to thank the anonymous reviewers for\ntheir helpful comments.\nREFERENCES\n[1] D. Woollard et. al. Adapt: Event-passing protocol for\nreducing delivery costs in scatter-gather parallel\nprocesses. In Proceeding of the Workshop on Patterns\nin High Performance Computing, Urbana, Illinois,\nMay 2005.\n[2] K. Barker et. al. A load balancing framework for\nadaptive and asynchronous applications. Parallel and\nDistributed Systems, IEEE Transactions on,\n15:183192, 2004.\n[3] R. Taylor et. al. A component- and message-based\narchitectural style for gui software. IEEE Transactions\non Software Engineering, June, 1996.\n[4] W. Gropp, E. Lusk, and A. Skjellum. Using MPI:\nPortable Programming with the Message Passing\nInterface. MIT Press, 1999.\n[5] S. Guyer and C. Lin. Broadway: A software\narchitecture for scientific computing. In Proceedings of\nthe IFIP TC2/WG2.5 Working Conference on the\nArchitecture of Scientific Software, pages 175192,\nDeventer, The Netherlands, The Netherlands, 2001.\nKluwer, B.V.\n[6] L. Kale and S. Krishnan. CHARM++: A Portable\nConcurrent Object Oriented System Based on C++.\nIn A. Paepcke, editor, Proceedings of OOPSLA'93,\npages 91108. ACM Press, September 1993.\n[7] S. Malek, M. Mikic-Rakic, and N. Medvidovic. A\nstyle-aware architectural middleware for\nresource-constrained, distributed systems. 
IEEE\nTransactions on Software Engineering, March, 2005.\n[8] D. Post and L. Votta. Computational science demands\na new paradigm. Physics Today, 58(1):3541, 2005.\n[9] M. Shaw and D. Garlan. Software Architecture:\nPerspectives on an Emerging Discipline. Prentice-Hall,\n1996.\n[10] A. Skjellum. High performance mpi: Extending the\nmessage passing interface for higher performance and\nhigher predictability, 1998.\n860\n", "keywords": "collective calls;ADaPT;MPI;software engineering;asymamtrical parallel computations;load balancing;communication protocols;high-performance computing;High-Performance Computing;Asymmetrical Parallel Computations"} {"name": "31", "title": "An empirical comparison of supervised machine learning techniques in bioinformatics", "abstract": "Research in bioinformatics is driven by the experimental data. Current biological databases are populated by vast amounts of experimental data. Machine learning has been widely applied to bioinformatics and has gained a lot of success in this research area. At present, with various learning algorithms available in the literature, researchers are facing difficulties in choosing the best method that can apply to their data. We performed an empirical study on 7 individual learning systems and 9 different combined methods on 4 different biological data sets, and provide some suggested issues to be considered when answering the following questions: (i) How does one choose which algorithm is best suitable for their data set? (ii) Are combined methods better than a single approach? (iii) How does one compare the effectiveness of a particular algorithm to the others?", "fulltext": "Introduction\nIn the post-genome era, research in bioinformatics has\nbeen overwhelmed by the experimental data. The\ncomplexity of biological data ranges from simple strings\n(nucleotides and amino acids sequences) to complex\ngraphs (biochemical networks); from 1D (sequence data)\nto 3D (protein and RNA structures). Considering the\namount and complexity of the data, it is becoming\nimpossible for an expert to compute and compare the\nentries within the current databases. Thus, machine\nlearning and artificial intelligence techniques have been\nwidely applied in this domain to discover and mine the\nknowledge in the databases. Quoting from Baldi and\nBrunak (Baldi and Brunak, 2001) \"As a result, the need for\ncomputer / statistical / machine learning techniques is\ntoday stronger rather than weaker.\"\nShavlik et al. (Shavlik et al., 1995) described the field of\nmolecular biology as tailor-made for machine learning\napproaches. This is due to the nature of machine learning\napproaches that performs well in domains where there is a\nvast amount of data but little theory this is exactly the\nsituation in bioinformatics. Since the introduction of\nmachine learning to this field, various algorithms and\nmethods have been produced and applied to study different\ndata sets. Most of these studies compare a `new' algorithm\nwith the conventional ones, asserting the effectiveness and\nefficiencies of their methods in particular data sets. The\nvariety of learning algorithms currently available for the\nresearchers are enormous and the main problems faced by\nresearchers are: (i) How does one choose which algorithm\nis best suitable for their data set? (ii) Are combined\nmethods better than a single approach? (iii) How does one\ncompare the effectiveness of a particular algorithm to the\nothers?\n\nCopyright 2003, Australian Computer Society, Inc. 
This paper appeared at First Asia-Pacific Bioinformatics Conference, Adelaide, Australia. Conferences in Research and Practice in Information Technology, Vol. 19. Yi-Ping Phoebe Chen, Ed. Reproduction for academic, not-for profit purposes permitted provided this text is included.

The objective of this study is to provide some suggestions for the community by answering the above questions. This paper is organised as follows. Section 2 presents a brief summary of machine learning. Section 3 outlines the materials and methods used in this study. Section 4 presents the results and discussion, and the final section summarises this work.

Machine Learning Background
A machine learning algorithm is one that can learn from experience (observed examples) with respect to some class of tasks and a performance measure (Mitchell, 1997). Machine learning methods are suitable for molecular biology data due to the learning algorithm's ability to construct classifiers/hypotheses that can explain complex relationships in the data. The classifiers or hypotheses can then be interpreted by a domain expert who suggests some wet-lab experiments to validate or refute the hypotheses. This feedback loop between in silico and in vivo / in vitro experiments accelerates the knowledge discovery process over the biological data. This feedback is an important characteristic of machine learning in bioinformatics.
Generally, there are two types of learning schemes in machine learning: supervised learning, where the output has been labelled a priori or the learner has some prior knowledge of the data; and unsupervised learning, where no prior information is given to the learner regarding the data or the output. The overall tasks for the learner are to classify, characterise, and cluster the input data. Classification is the most common task in biological problems where, given two different sets of examples, namely positive examples E+ and negative examples E- (with E+ ∩ E- = ∅), the learner needs to construct a classifier to distinguish between the positive examples and the negative set. This classifier can then be used as the basis for classifying as yet unseen data in the future. Usually, for a supervised classification problem, the training examples are in the form of a set of tuples {(x1, y1j), ..., (xn, ynj)} where xi is the class label and yij is the set of attributes for the instances. The task of the learning algorithm is to produce a classifier (hypothesis, function) to classify the instances into the correct class. In this study, we only consider supervised machine learning applied to classification.

Materials and Methodologies
We performed an empirical comparison of rule-based learning systems (Decision trees, One Rule, Decision rules), statistical learning systems (Naïve Bayes, Instance Based, SVM and neural networks) and ensemble methods (Stacking, Bagging and Boosting) on the data listed in Table 1, based on the accuracy, positive predicted value, specificity and sensitivity of the learning algorithms.
All\nthe learning methods used in this study were obtained from\nthe WEKA machine learning package\n(http://www.cs.waikato.ac.nz/~ml/weka/).\n3.2 Data\nset\nIn this study we used the following data sets obtained from\nUCI machine learning repository (Blake and Merz, 1998).\nWe briefly describe the biological motivation for the data\nsets; interested readers should refer to the cited papers for\ndetails.\nE.coli data set The objective of this data set is to predict\nthe cellular localisation sites of E.coli proteins (Horton and\nNakai, 1996). There are 8 different cellular sites, which\nare cytoplasm (cp), inner membrane without signal\nsequence (im), periplasm (pp), inner membrane with\nuncleavable signal sequence (imU), outer membrane (om),\nouter membrane lipoprotein (omL), inner membrane\nlipoprotein (imL) and inner membrane with cleavable\nsignal sequence (imS). The attributes are signal sequence\nrecognition methods (specifically those of McGeoch and\nvon Heijne), the presence of charge on N-terminus of\npredicted lipoproteins and 3 different scoring functions on\nthe amino acid contents whether predicted as a outer\nmembrane or inner membrane, cleavable or uncleavable\nsequence signal.\nYeast data set The objective is similar to the E.coli data,\nwhich is to determine the cellular localisation of the yeast\nproteins (Horton and Nakai, 1996). There are 10 different\nsites, which include: CYT (cytosolic or cytoskeletal);\nNUC (nuclear); MIT (mitochondrial); ME3 (membrane\nprotein, no N-terminal signal); ME2 (membrane protein,\nuncleaved signal); ME1 (membrane protein, cleaved\nsignal); EXC (extracellular); VAC (vacuolar); POX\n(peroxisomal) and ERL (endoplasmic reticulum lumen).\nThe attributes are similar to the E.coli data set with the\naddition of nuclear localisation information.\nPromoter data set. The task of the classifier is to predict\nwhether a DNA sequence from E.coli is either a promoter\nor not (Towell et al., 1990). The input data is a\n57-nucleotide sequence (A, C, T or G).\nHIV data set The data set contains 362 octamer protein\nsequences each of which needs to be classified as an HIV\nprotease cleavable site or uncleavable site (Cai and Chou,\n1998).\nData set\nE.coli\nYeast\nPromoters\nHIV\nContinuous Attribute\n2 0 57\n8\nDiscrete Attribute\n5 8 0\n0\nClasses\n8 10 2\n2\nData Size\n336 1484 106\n362\nTable 1: Data sets used in this study.\n3.3 Evaluation\nWe constructed a confusion matrix (contingency table) to\nevaluate the classifier's performance. Table 2 shows a\ngeneric contingency table for a binary class problem. True\npositives (TP) denote the correct classifications of positive\nexamples. True negatives (TN) are the correct\nclassifications of negative examples. False positives (FP)\nrepresent the incorrect classifications of negative\nexamples into class positive and False negatives (FN) are\nthe positive examples incorrectly classified into class\nnegative.\nPredicted\n\nPositive Negative\nPositive\nTP FN\nActual\nNegative\nFP TN\nTable 2: A contingency table for a binary class\nproblem.\nBased on the contingency table, several measurements can\nbe carried out to evaluate the performance of the induced\nclassifier. 
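As a quick illustration of the measurements defined next, the four counts of Table 2 translate directly into the evaluation measures used in this study. The sketch below only restates those definitions in code; the counts are assumed to be collected from the classifier's predictions on the test set.

/* Evaluation measures derived from a binary contingency table (Table 2). */
typedef struct { double tp, tn, fp, fn; } confusion_t;

double accuracy(confusion_t c)    { return (c.tp + c.tn) / (c.tp + c.tn + c.fp + c.fn); }
double ppv(confusion_t c)         { return c.tp / (c.tp + c.fp); }   /* positive predictive value */
double sensitivity(confusion_t c) { return c.tp / (c.tp + c.fn); }
double specificity(confusion_t c) { return c.tn / (c.tn + c.fp); }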
The most popular performance evaluation measure used in prediction or classification learning is classifier accuracy, which measures the proportion of correctly classified instances: Acc = (TP + TN) / (TP + TN + FP + FN). Positive Predictive Accuracy (PPV, or the reliability of positive predictions of the induced classifier) is computed by PPV = TP / (TP + FP). Sensitivity (Sn) measures the fraction of actual positive examples that are correctly classified, Sn = TP / (TP + FN); while specificity (Sp) measures the fraction of actual negative examples that are correctly classified, Sp = TN / (TN + FP).

3.4 Cross-validation
To evaluate the robustness of the classifier, the normal methodology is to perform cross validation on the classifier. Ten-fold cross validation has been proved to be statistically good enough in evaluating the performance of the classifier (Witten and Frank, 2000). In ten-fold cross validation, the training set is equally divided into 10 different subsets. Nine out of ten of the training subsets are used to train the learner and the tenth subset is used as the test set. The procedure is repeated ten times, with a different subset being used as the test set.

Results and Discussion
We summarise our experimental results in Figures 1 and 2. The full analysis of this study is available in http://www.brc.dcs.gla.ac.uk/~actan/APBC2003.

Figure 1. Accuracy vs Positive Predictive Value

Figure 2. Specificity vs Sensitivity

From the results, we observed that most of the individual learners tend to perform well either in accuracy or specificity. Probably this is due to the induced classifier being able to characterise the negative examples (most of the training sets have a large ratio of negative examples compared to positive examples). Furthermore, the results suggest that combination approaches are in general better at minimising overfitting of the training data. We also observed from this experiment that boosting performs better than bagging. This is because attributes which are highly important in discriminating between classes are randomly removed by bagging; however they are preserved in boosting and thus contribute to the final voting scheme. The only individual learning system that performs better than the combined methods is Naïve Bayes learning. This may suggest that Naïve Bayes is capable of classifying instances based on simple prior probabilistic knowledge. In this study SVM does not perform well compared to other methods, probably due to the fact that training data are not separable in the vector space.

4.1 Rules-of-thumb
In this section, we address the following questions by providing some suggested issues (rules-of-thumb) to be considered when answering them.
(i) How does one choose which algorithm is best suitable for their data set?
Ratio of the training data. From these experiments, we observed that the division of the training data plays a crucial role in determining the performance of the algorithms. If the training TPs and TNs are almost equal in size, the algorithms tend to construct much better classifiers. This observation suggested that the classifier induced from equal sizes of TP and TN tends to be more robust in classifying the instances. Furthermore, the classifiers generated consider all the discriminative attributes that distinguish between two different classes.
If\nthe size of the TP set is small compared to that of TN, most\nprobably the classifier will overfit the positive examples\nand thus perform poorly in the cross validation stages.\nAttributes Another factor that must be taken into\nconsideration when choosing a learning method is the\nnature of the attributes. Generally, statistical methods (e.g.\nSVM, neural networks) tend to perform much better over\nmulti-dimensions and continuous attributes. This is\nbecause the learning strategy embedded in these\nalgorithms enables the learners to find a maximal margin\nthat can distinguish different classes in the vector space.\nBy contrast, rule-based systems (e.g. Decision trees,\nPART) tend to perform better in discrete / categorical\nattributes. The algorithms of these methods operate in a\ntop-down manner where the first step is to find the most\ndiscriminative attribute that classifies different classes.\nThe process is iterated until most of the instances are\nclassified into their class.\nCredibility vs. Comprehensibility When choosing a\nmachine learning technique, users need to ask themselves\nwhat they really want to \"discover\" from the data. If they\nare interested in generating understandable hypotheses,\nthen a rule-base learning algorithm should be used instead\nof statistical ones. Most machine learning algorithms\nfollow Occam's principle when constructing the final\nhypothesis. According to this principle, the algorithm\ntends to find the simplest hypotheses by avoiding\noverfitting the training data. But does this principle still\nhold in bioinformatics? In bioinformatics we often wish\nto explore data and explain results, and hence we are\ninterested in applying intelligent systems to provide an\ninsight to understand the relations between complex data.\nThe question then arises as to whether we prefer a simple\nclassifier or a highly comprehensible model. In general,\nthere is a trade off between the credibility and\ncomprehensibility of a model. Domingos (1999)\nsuggested applying domain constraints as an alternative\nfor avoiding overfitting the data. We agree with\nMuggleton et al. (1998) that when comparing the\nperformance of learning systems in a bioinformatics\ncontext, the hypothesis with better explanatory power is\npreferable when there exist more than one hypotheses with\nstatistical equivalent predictive accuracy.\n(ii) Are combined methods better than a single approach?\nFrom the experiments most of the combined methods\nperform better than the individual learner. This is because\nnone of the individual methods can claim that they are\nsuperior to the others due to statistical, computational and\nrepresentational reasons (Dietterich, 2000). Every\nlearning algorithm uses a different search strategy. If the\ntraining data is too small, the individual learner can induce\ndifferent hypotheses with similar performances from the\nsearch space. Thus, by averaging the different hypotheses,\nthe combined classifier may produce a good\napproximation to the true hypotheses. The computational\nreason is to avoid local optima of individual search\nstrategy. By performing different initial searches and\ncombining the outputs, the final classifier may provide a\nbetter approximation to the true hypotheses. Lastly, due to\nthe limited amount of training data, the individual\nclassifier may not represent the true hypotheses. 
Thus,\nthrough considering different classifiers, it may be\npossible to expand the final classifier to an approximate\nrepresentation of the true hypotheses. Ensemble learning\nhas been an active research topic in machine learning but\nnot in the bioinformatics community. Since most of the\nhypotheses induced are from incomplete biological data, it\nis essential to generate a good approximation by\ncombining individual learners.\n(iii) How does one compare the effectiveness of a\nparticular algorithm to the others?\nPredictive accuracy Most of the time, we can find in the\nliterature reports that a learning scheme performs better\nthan another in term of one model's accuracy when\napplied to a particular data set. From this study, we found\nthat accuracy is not the ultimate measurement when\ncomparing the learner's credibility. Accuracy is just the\nmeasurement of the total correctly classified instances.\nThis measurement is the overall error rate, but there can be\nother measures of the accuracy of a classifier rule. If the\ntraining data set has 95 TNs and 5 TPs, by classifying all\nthe instances into a negative class, the classifier still can\nachieve a 95% accuracy. But the sensitivity and the\npositive predicted value is 0% (both measurements\nevaluate the performance in classifying TPs). This means\nthat although the accuracy of the classifier is 95% it still\ncannot discriminate between the positive examples and the\nnegatives. Thus, when comparing the performance of\ndifferent classifiers, accuracy as a measure is not enough.\nDifferent measures should be evaluated depending on\nwhat type of question that the user seeks to answer. See\nSalzberg (Salzberg, 1999) for a tutorial on comparing\nclassifiers.\n\nConclusions\nMachine learning has increasingly gained attention in\nbioinformatics research. With the availability of different\ntypes of learning methods, it has become common for the\nresearchers to apply the off-shelf systems to classify and\nmine their databases. In the research reported in this paper,\nwe have performed a comparison of different supervised\nmachine learning techniques in classifying biological data.\nWe have shown that none of the single methods could\nconsistently perform well over all the data sets. The\nperformance of the learning techniques is highly\ndependant on the nature of the training data. This study\nalso shows that combined methods perform better than the\nindividual ones in terms of their specificity, sensitivity,\npositive predicted value and accuracy. We have suggested\nsome rules-of-thumb for the reader on choosing the best\nsuitable learning method for their dataset.\n\nAcknowledgements\nWe would like to thank colleagues in the Bioinformatics\nResearch Centre for constructive discussions. We would\nalso like to thank the anonymous reviewers for their useful\ncomments. The University of Glasgow funded AC Tan's\nstudentship.\n\nReferences\nBALDI, P. AND BRUNAK, S. (2001) Bioinformatics:\nThe Machine Learning Approach, 2\nnd\nEd., MIT Press.\nBlake, C.L. AND Merz, C.J. (1998) UCI Repository of\nmachine learning databases\n[http://www.ics.uci.edu/~mlearn/MLRepository.html]\nCAI, Y.-D. AND CHOU, K.-C. (1998) Artificial neural\nnetwork model for predicting HIV protease cleavage\nsites in protein. Advances in Engineering Software, 29:\n119-128.\nDIETTERICH, T.G. (2000) Ensemble methods in\nmachine learning. In Proceedings of the First\nInternational Workshop on MCS, LNCS 1857: 1-15.\nDOMINGOS, P. 
(1999) The role of Occam's razor in\nknowledge discovery. Data Mining and Knowledge\nDiscovery, 3: 409-425.\nHORTON, P. AND NAKAI, K. (1996) A probabilistic\nclassification system for predicting the cellular\nlocalization sites of proteins. In Proceedings of Fourth\nInternational Conference on ISMB, p.109-115. AAAI /\nMIT Press.\nMITCHELL, T. (1997) Machine Learning. McGraw-Hill.\nMUGGLETON, S., SRINIVASAN, A., KING, R.D. AND\nSTERNBERG, M.J.E. (1998) Biochemical knowledge\ndiscovery using inductive logic programming. In H.\nMotoda (Ed.) Proceedings of the First Conference on\nDiscovery Science, Springer-Verlag.\nSALZBERG, S. (1999). On comparing classifiers: a\ncritique of current research and methods. Data mining\nand knowledge discovery, 1: 1-12.\nSHAVLIK, J., HUNTER, L. & SEARLS, D. (1995).\nIntroduction. Machine Learning, 21: 5-10.\nTOWELL, G.G., SHAVLIK, J.W. AND NOORDEWIER,\nM.O. (1990) Refinement of approximate domain\ntheories by knowledge-based neural networks. In\nProceedings of the Eighth National Conference on\nArtificial Intelligence, p. 861-866. AAAI Press.\nWITTEN, I.H. AND FRANK, E. (2000) Data Mining:\nPractical machine learning tools and techniques with\njava implementations. Morgan Kaufmann.\n", "keywords": "classification;Supervised machine learning;cross validation;performance evaluation;training data;biological data;supervised machine learning;machine learning;ensemble methods;bioinformatics"} {"name": "32", "title": "An expressive aspect language for system applications with Arachne", "abstract": "C applications, in particular those using operating system level services, frequently comprise multiple crosscutting concerns : network protocols and security are typical examples of such concerns. While these concerns can partially be addressed during design and implementation of an application, they frequently become an issue at runtime, e.g., to avoid server downtime. A deployed network protocol might not be efficient enough and may thus need to be replaced. Buffer overflows might be discovered that imply critical breaches in the security model of an application. A prefetching strategy may be required to enhance performance. While aspect-oriented programming seems attractive in this context, none of the current aspect systems is expressive and efficient enough to address such concerns. This paper presents a new aspect system to provide a solution to this problem. While efficiency considerations have played an important part in the design of the aspect language, the language allows aspects to be expressed more concisely than previous approaches. In particular, it allows aspect programmers to quantify over sequences of execution points as well as over accesses through variable aliases. We show how the former can be used to modularize the replacement of network protocols and the latter to prevent buffer overflows. We also present an implementation of the language as an extension of Arachne, a dynamic weaver for C applications. Finally, we present performance evaluations supporting that Arachne is fast enough to extend high performance applications , such as the Squid web cache.", "fulltext": "INTRODUCTION\nReal-world applications typically comprise multiple crosscutting\nconcerns. This applies, in particular, to C applications\nusing operating system level services. We have exam-ined\nthree concerns which are typical for this domain in the\ncontext of a large application, the open source web cache\nSquid [36]. 
More concretely, we have considered translation\nof network protocols (which may be necessary for efficiency\nreasons), insertion of checks for buffer overflows (which are\nat the heart of many of today's security issues), and introduction\nof prefetching strategies within the cache (which\ncan be used to enhance efficiency of the web cache). We\nhave found that all these concerns are scattered over large\nportions of the code of Squid.\nHence, the three concerns are crosscutting in the sense\nof Aspect-Oriented Programming (AOP) [24] and aspects\nshould therefore be a means of choice for their modularization\n. The concerns have three important characteristics.\nFirst, they must frequently be applied at runtime, e.g., in\norder to rapidly fix a buffer overflow and thus prevent security\nbreaches without incurring server downtime. A dynamic\naspect weaver is therefore needed. Second, they expose intricate\nrelationships between execution points, e.g., network\nprotocols are most concisely expressed in terms of sequences\nof execution points, not individual ones. The aspect system\nmust therefore support expressive means for the definition of\naspects, in particular pointcuts. Third, efficiency is crucial\nin the application domain we consider.\nTo our knowledge, none of the current aspect systems for\nC meet these three requirements and is suitable for the modularization\nof such concerns. Moreover, requirements for\ndynamic weaving and efficiency often trade off with expressivity\n. Squid should be as efficient as possible and therefore\nexploit any suitable operating system and hardware particularity\n. Its code base is therefore difficult to understand and\nmanipulate, thus hindering in particular modularization efforts\n. It is therefore highly questionable that the considered\nmodularization problems can be solved without aspects.\nIn this paper we propose a solution to the aspectization of\nsuch concerns of C applications. More concretely, we provide\nthree main contributions. First, we provide a new expressive\naspect language featuring a construct for quantification over\nsequences of execution points as well as over accesses to local\naliases of global variables. We show how this aspect lan-27\nguage permits concise expression of the considered concerns\nas aspects. Second, we present how the aspect language can\nbe implemented efficiently through runtime weaving into binary\ncode. Concretely, this is done by integrating the aspect\nlanguage into our tool Arachne, a dynamic weaver for C applications\n. Furthermore, we present how Arachne improves\non our previous work Dyner [32]. Finally, we give evidence\nthat our approach meets strong efficiency requirements by\nshowing performance evaluations in the context of Squid.\nThe paper is structured as follows. Section 2 presents the\nmotivating concerns we identified within Squid. Section 3\nshows how to modularize these concerns as aspects and defines\nour aspect language. Section 4 describes its implementation\nwithin Arachne. Section 5 assesses the performance\nof our implementation. Section 6 describes related work.\nSection 7 concludes and suggests futures work.\nMOTIVATIONS\nLegacy C applications involve multiple crosscutting concerns\n. Many of them remain challenging, both in terms\nof expressiveness required to handle them properly in an\naspect-oriented language and in terms of constraints posed\non the weaver. 
This section describes three such concerns\nin C applications: switching the network protocol, buffer\noverflows and prefetching. The network protocol concern is\ntypically scattered through the entire application. It is an\nissue when administrators discover at runtime that the retained\nprotocol is not efficient enough. Likewise the security\nthreats posed by buffer overflows is a real concrete problem\nfor administrators. While guarding all buffers against overflows\nmight decrease performance considerably, administrators\nare left with no other option than accepting the trade-off\nbetween security and performance chosen at application's\ndesign time. Prefetching is another well-known crosscutting\nconcern [12]. Since prefetching aims at increasing performance\n, prefetching aspects make only sense with an efficient\nweaver. Yet, it is still difficult to modularize these three concerns\nin today's aspect-oriented language. In this section,\nwe first describe the context in which the concerns arise before\nshowing their crosscutting nature and finally explaining\nthe lack in current aspect-oriented languages to handle them\nproperly.\n2.1\nTCP to UDP protocol\nHTTP was essentially designed as a file transfer protocol\nrunning on top of TCP, a connection-oriented protocol\nensuring communication reliability. While the average Web\npage size does not exceed 8 KB [4], the cost of retrieving\na Web page is often dominated by data exchanged for control\npurposes of TCP rather than by the page content itself.\nThis is not a new problem, many researches have already\npointed out that TCP is not suitable for short-lived connections\n. While HTTP 1.1 has introduced persistent connections\nallowing a client to retrieve multiple pages from the\nsame server through the same TCP connection, the number\nof simultaneous TCP connections is limited by operating\nsystems. Servers have a strong incentive to close HTTP\nconnections as soon as possible. Hence, despite the persistent\nconnection mechanism, many studies conclude that\nTCP should be replaced by UDP to retrieve short pages [10,\n29, 7]. In spite of its performance improvements, the number\nof legacy Web applications has prevented a wide adoption\nof this solution. Typical legacy Web applications have to be\nlisten\naccept\nread\nwrite\nclose\nwrite\nread\nclose\nconnect\nsocket\nServer\nClient\nTCP Protocol\nsocket\nbind\nclose\nclose\nsocket\nServer\nClient\nUDP Protocol\nrecvfrom\nsendto\nrecvfrom\nsocket\nbind\nNetwork\nNetwork\nsendto\nTime\nFigure 1: Typical usage of the TCP and UDP APIs.\nstopped to switch the protocol. The traditional approach\nto avoid depriving a subnetwork from Internet connectivity\nwhile stopping the cache is to swap the application between\ndifferent machines. This approach is not only expensive in\nterms of hardware, it complicates the administrative task of\nthe Web cache administrator and poses the problem of con-sistently\ntransferring the runtime state of the application\nbefore restarting it. Stopping an e-commerce Web server\nmeans a loss of money and many small companies can not\nafford the cost of redundant servers. For a wide acceptance,\na HTTP dialect using UDP as transport protocol should\nthus be deployable on demand at runtime.\nIn addition, replacing TCP by UDP in an application is\nrelatively difficult. The choice of a transport protocol is\nusually based on standards believed to be ever-lasting and\nmade at an early design stage. 
Hence no particular effort is\nmade to localize this design decision in a single piece of code.\nFor example, despite a modularization effort, the TCP API\nprovided by the operating system is used directly in 7 of the\n104 \".c\" source files of the Squid Web cache.\nAs shown in Fig. 1, the TCP API is built around a set of\nC functions to be invoked sequentially by the application. In\na properly written program, TCP functions are first used to\nestablish the connection (typically with socket, connect,\nbind and listen), exchange data through the connection\n(typically with read and write) and then close it (typically\nclose). UDP uses similar but less functions. UDP applications\nfirst direct the operating system to dedicate the appropriate\nresources to exchange data (typically with socket and\nbind), then exchange data through these resources (typically\nwith sendto and recvfrom) before releasing them (typically\nwith close). Hence, the problem is not only difficult because\nTCP-related function invocations are scattered but\nbecause the relative order of each invocation is important in\norder to map it onto the appropriate UDP function.\nThis example is typical of protocol based APIs. When\nsuch an API is used in an undisciplined way, it becomes\nquickly impossible to replace it by another one. Today,\naspect-oriented systems lack an appropriate sequencing construct\nin their language. Moreover, many do not provide the\nability to weave aspects dynamically.\n2.2\nBuffer overflows\nIn C, the size of an array is fixed at allocation time. According\nto ISO and ANSI standards [2], an invalid array\naccess does not result in an immediate error but leads to\nan implementation-dependent behavior. Such behavior is\nincreasingly exploited by hackers to circumvent security re-28\nstrictions [37]. It is therefore crucial for C programmers to\nensure every access to an array to be valid. On the other\nhand, bound checking code is error prone: it is easy to forget\nto check an access and even when the access is checked,\nit is easy to compare the index locating the access with an\ninappropriate bound. Therefore, researchers have proposed\nto make compilers responsible for enforcing proper array access\n[22, 31]. The problem is that even the most efficient\nsystem (CRED [31]) slows down an application up to 130%.\nMoreover, most frequently used compilers like gcc do not\nsupport bound checking.\nToday, administrators discovering a buffer overflow in production\nsoftware are left with no other option than stopping\nthe application and restarting a bug free version. This was\nthe solution chosen when a buffer overflow was discovered\nin Squid in [6]. While widely used, this solution suffers from\nthree major drawbacks. First, it does not enforce continuous\nservicing since the service delivered by the application is not\navailable during the update. Second, this solution entails an\nimportant information loss: an administrator has no means\nto learn whether the buffer overflow has been exploited by\na hacker or not. Third, it misunderstands the performance\ntrade-off, i.e. it is not necessary to check every array access,\nit is only necessary to perform enough checking to discourage\nhackers. Therefore, bound checking code should only\nrun when an environment becomes hostile [23].\nBound checking code tends to crosscut the entire application\n. 
For example, properly written C functions accepting\nan array argument commonly take a second argument holding\nthe array size: the first one allows the function to access\nthe array while the second is used to ensure correctness of\naccesses. In Squid, bound checking code can be found in\nany of the 104 \".c\" files of its source code. On the 57635\nlines composing these \".c\" files, at least 485 check bounds.\nThis problem fails to be handled properly in current aspect\nlanguages as they lack the ability to trigger advices\nupon access made through the alias of a variable. Again,\nmany aspect-oriented systems offer only static weaving capabilities\npreventing the administrator to choose the trade-off\nsecurity/performance suiting his needs.\n2.3\nFrom fetching to prefetching\nOperations like retrieving a file on a local disk or over the\nWeb can be sped up if the underlying software anticipates\nuser requests and start to fetch documents beforehand. Such\nprefetching schemes distinguish themselves from each other\nin the way they predict future user requests. These \"ora-cles\"\nactually prevent a clean encapsulation of prefetching\nin a single module communicating with the rest of the application\nthrough well-defined interfaces since predictions are\nbased on information meant to be private to other modules.\nIn addition, it is very likely that there is no universal perfect\noracle [19]. A statically linked prefetching module is\ntherefore inappropriate, but prefetching modules along with\nthe necessary oracles should be loaded and unloaded on the\nfly. Due to their crosscutting nature, prefetching modules\nincluding such oracles are better written with aspects [32].\nCoady et al. have already pointed out the crosscutting\nnature of prefetching in the FreeBSD OS [12]. In our previous\nwork considering the Squid Web cache, we reached a\nsimilar conclusion [32]. We have previously shown that this\nconcern can be addressed with cflow-like constructs.\nDespite potential performance improvements, prefetching\nalso increases resource consumption (e.g. network prefetching\nconsumes local storage and bandwidth). When the pressure\non resources is too high, prefetching computation competes\nfor them against regular user requests, and slows down\ntheir treatment instead of speeding it up. In such cases,\nprefetching should therefore be, temporarily, disabled. Squid\nessentially manages file descriptors, a resource only available\nin a limited quantity. A file descriptor is used between the\nunderlying operating system and applications to describe a\nnetwork connection or a file on the disk. Squid's file descriptor\nmanagement is based on a global variable that tracks the\nnumber of file descriptors currently in use. By comparing\nits value with the maximum number of file descriptors allowed\nby the operating system, it is possible to estimate that\nprefetching should be disabled or resumed.\nFor this problem of file descriptor consumption, the current\npractice of checking if prefetching should be disabled or\nnot within the advice, is a bad practice that impedes both\nreadability and maintainability. 
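Concretely, the criticized style looks roughly like the following sketch, in which the resource test is tangled into the advice body. The counter and threshold mirror the Number_Of_Fd and Squid_MaxFd variables used in Fig. 4 below; the advice signature and the startPrefetching entry point are our assumptions.

    /* Illustrative sketch of the criticized style: the advice itself decides,
       on every invocation, whether prefetching should run at all. */
    extern int Number_Of_Fd;   /* file descriptors currently in use       */
    extern int Squid_MaxFd;    /* maximum allowed by the operating system */

    extern void startPrefetching(const char *url);  /* assumed entry point */

    void prefetching_advice(const char *url) {
        /* Resource-management policy tangled into the advice body. */
        if (Number_Of_Fd * 100 >= Squid_MaxFd * 75)
            return;            /* too much pressure: silently skip */
        startPrefetching(url);
    }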
A mechanism is needed\nwithin the aspect language to restraint the advice execution\nat times where the pressure on resources is too high.\nThis problem were not addressed in our previous work.\nAN EXPRESSIVE ASPECT LANGUAGE FOR SYSTEM PROGRAMMING IN C\nWhile AOP seems to be the obvious choice to tackle the\ncrosscutting concerns introduced above, none of the existing\nAO systems provides explicit support for some of their essential\nelements, in particular, join point sequences for protocols\n, and references to aliases which are local to a function.\nIn this section we introduce a new aspect language for\nsystem programming in C that allows such crosscutting concerns\nto be expressed concisely. In order to make this point,\nwe first revisit the examples by concisely aspectizing them\nusing our language. (Note that our aspect language is expressive\nin the sense of enabling the concise definition of certain\ntypes of aspects, especially compared to other tools for\nsystem-level manipulations, but not necessarily more expressive\nthan existing approaches in a language-theoretic sense.)\nWe then define the join point model underlying our language\nprecisely, followed by the definition of its syntax and informal\nsemantics. Finally, we illustrate how its semantics can\nbe formally defined in terms of a small-step operational semantics\nusing the framework introduced in [14].\n3.1\nExample crosscutting concerns revisited\nWe now revisit the concerns discussed in section 2 in order\nto show our language in action and give evidence that it\nallows such concerns to be concisely modularized.\nThe aspect shown in Fig. 2 translates transport protocols\nfrom TCP to UDP. A protocol defines a sequence of function\ncalls, so the top-level operator of this aspect is seq.\nThe sequence aspect syntactically consists of a list of pairs\nof pointcut and advice (separated by then). In the example\n, the TCP protocol starts with a call to socket() with\nthree constant arguments: AF INET, SOCK STREAM and\n0. When such a call is matched, the second parameter is\nreplaced by SOCK DGRAM as required by the UDP protocol\n. The result of this transformed call, the file descriptor,\nis bound to fd by return(fd). Then the next call to connect\n() with the same file descriptor fd as its first parameter\nis matched. In this case the values of the other parameters\n29\nseq( call(int socket(int, int, int)) && args(AF INET, SOCK STREAM, 0) && return(fd)\nthen socket(AF INET, SOCK DGRAM, 0);\ncall(int connect(int, struct socketaddr, socklen t)) && args(fd, address, length)\nthen returnZero();\n// where int returnZero() { return 0; }\n( call(size t read(int, void, size t)) && args(fd, readBuffer, readLength)\nthen recvfrom(fd, readBuffer, readLength, 0, address, length);\n|| call(size t write(int, void, size t)) && args(fd, writeBuffer, writeLength)\nthen sendto(fd, writeBuffer, writeLength, 0, address, length); )\ncall(int close(int)) && args(fd) ; )\nFigure 2: An Aspect for Switching Transport Protocols, from TCP to UDP\nseq( call(void malloc(size t))\n&& args(allocatedSize) && return(buffer) ;\nwrite(buf f er) && size(writtenSize)\n&& if(writtenSize > allocatedSize)\nthen reportOverflow ();\ncall(void free(void)) )\nFigure 3: An Aspect for Detecting Buffer Overflow\nare bound to arguments address and length, and the original\ncall is replaced by returnZero(). Indeed, there is no connect\nstep in the UDP protocol. 
After that, calls to read() and\nwrite() (using the `or' on aspects: ||) on the same file descriptor\nfd are translated to UDP recvfrom() and sendto(),\nrespectively. Note that sequences of such access are potentially\ntranslated (due to use of the repetition operator ).\nFinally, a call to close() on fd terminates the TCP protocol\nas well as the UDP protocol and thus is not modified (i.e.,\nthere is no then clause). This last step is required to free\nthe variables used in the sequence (here, fd, address and\nlength). Indeed, this aspect can use numerous (instances of\nthese) variables when it deals with interleaved sequences, as\neach call to socket() creates a new instance of the sequence.\nThe aspect shown in Fig. 3 detects buffer overflows. The\ncorresponding sequence starts when the function malloc()\nreturns the buffer address which is then bound to buffer.\nThen, each time this address is accessed (through a global\nvariable or a local alias) the size of the data to be written is\ncompared with the size of the initially allocated memory. If\nthe former exceeds the latter, an overflow is indicated. The\nsequence ends when the memory is deallocated using free().\nThe aspect in Fig. 4 introduces prefetching in a web cache.\nThe first controlflow phrase initializes prefetching when\nan HTTP response is built (clientBuildReply()) within the\ncontrol flow of a client request (clientSendMoreData()). The\nuntil clause stops prefetching when the number of connection\nbecomes too large, a situation where prefetching would\neffectively degrade performance. The second controlflow\nphrase analyzes hyperlinks in a page being transmitted (i.e.,\nwhen comm write mbuf() is called within the control flow\nof clientSendMoreData()). Finally, the last call phrase pre-fetches\nhyperlinks analyzed by the second aspect. It does so\nby replacing the method call to clientWriteComplete() with\nretrieveHyperlinks(). Finally, note that the two require\nclauses at the top of the aspect declare the types of the\nglobal variables of the base program used in the aspects.\n3.2\nJoin points\nA join point model defines the points in the execution\nof the base program to which pointcuts may refer. In our\nJP\n::= callJP(val funId(val\n))\n|\nreadGlobalJP(varId, val)\n|\nreadJP(@, val)\n|\nwriteGlobalJP(varId, val, size)\n|\nwriteJP(@, val, size)\n|\ncontrolflowJP(---f\nunId, cfEnd)\n|\ncontrolflowstarJP(---f\nunId, cfEnd)\ncf End ::= callJP(val funId(val\n))\n|\nreadGlobalJP(varId, val)\n|\nwriteGlobalJP(varId, val, size)\nval\n::= 0 | 1 | 2 | ...\n// int\n|\n@0 | @1 | @2 | ... // int*\n|\n... // values of other C types\nFigure 5: Join point model\ncase, join points are defined by JP in the grammar shown\nin Fig. 5. 
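Read as plain C data, the join point categories of Fig. 5 can be pictured as a tagged union. The sketch below is only a reading aid of ours; the paper does not describe Arachne's internal representation at this level, and the value types are simplified.

    #include <stddef.h>

    /* Our rendering of the join point grammar of Fig. 5 (simplified types). */
    enum jp_kind { CALL_JP, READ_GLOBAL_JP, READ_JP,
                   WRITE_GLOBAL_JP, WRITE_JP, CONTROLFLOW_JP, CONTROLFLOWSTAR_JP };

    struct join_point {
        enum jp_kind kind;
        union {
            struct { const char *fun_id; long ret; long *args; size_t nargs; } call;
            struct { const char *var_id; long value; } read_global;
            struct { void *addr; long value; } read;
            struct { const char *var_id; long value; size_t size; } write_global;
            struct { void *addr; long value; size_t size; } write;
            struct { const char **fun_ids; size_t depth;       /* stack of names   */
                     struct join_point *cf_end; } controlflow; /* also controlflow* */
        } u;
    };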
A join point is either:\nA call of a function callJP(v\n1\nfunId(\nv\n2\n)) with function\nname funId, return value v\n1\nand a vector of arguments\nv\n2\n.\nA\nread\naccess\nwhich\ncomes\nin\ntwo\nvariants:\nreadGlobalJP(varId, v) denotes reading a global variable\nwith name varId holding the value v ; readJP(@, v)\ndenotes reading a global variable or a local alias with\naddress @ holding the value v .\nWrite access which also comes in two variants:\nwriteGlobalJP(varId, v, size) denotes assignment to a global\nvariable with name varId of the value v of size size.\nwriteJP(@, v, size) denotes assignment to a global variable\nor a local alias with address @ of the value v of size size.\nA cflow expression controlflowJP(---f\nunId, c), where\n---f\nunId = [funId\n1\n, .., funId\nn\n] is a stack of function names, and\nc (either a function call or an access to a global variable) occurs\nwithin the body of function f unId\nn\n. Such a join point\nrequires a call to f unId\ni+1\nwithin the body of f unId\ni\n.\nA cflow expression controlflowstarJP(---f\nunId, c), where\n---f\nunId = [funId\n1\n, .., funId\nn\n] is a partial stack of function\nnames, and c (either a function call or an access to a global\nvariable) occurs within the control flow of function f unId\nn\n.\nSuch a join point requires a call to f unId\ni+1\nwithin the\ncontrol flow of (i.e., not necessarily in the body of) f unId\ni\n.\nTwo features of this join point model may be surprising\nat first sight: distinction of accesses to aliases from those to\nglobal variables and explicit representation of control flow\n30\nrequire N umber Of F d as int;\nrequire Squid M axF d as int;\ncontrolflow(call(void clientSendMoreData(void, char, size t)),\ncall(HttpReply clientBuildReply(clientHttpRequest, char, size t))\n&& args( request, buf f er, buf f erSize ))\nthen startPrefetching(request, buffer, bufferSize);\n&& until(writeGlobal(int N umber Of F d) && if((N umber Of F d) 100/(Squid M axF d) 75) ; )\ncontrolflow( call(void clientSendMoreData(void, char, size t)),\ncall(void comm write mbuf(int, MemBuf, void, void))\n&& args(fd, mb, handler, handlerData) && if(! isP ref etch(handler)) )\nthen parseHyperlinks(fd, mb, handler, handlerData);\ncall(void clientWriteComplete(int, char, size t, int, void))\n&& args(fd, buf, size, error, data) && if(! isP ref etch(handler))\nthen retrieveHyperlinks(fd, buf, size, error, data);\nFigure 4: An Aspect for Prefetching\nexpressions. Both are motivated by our quest for efficiency\nand are grounded in strong implementation constraints in\nthe context of dynamic weaving of binary C code: an access\nto a local alias is several magnitudes slower than that to a\nglobal variable and matching of control flow join points can\nbe done using an atomic test on the implementation level.\n3.3\nPointcuts\nWe now present a pointcut language (see Fig. 6) that provides\nconstructs to match individual join points.\nPrimitive pointcuts are defined by PPrim and comprise\nthree basic pointcuts matching calls, global variable accesses,\nand control flow join points. 
Primitive pointcuts can also be\ncombined using a logical \"or\" noted ||.\nA call pointcut PCall selects all function call join points\ncallJP(val funId(val\n)), i.e., all calls to a function matching\nthe signature type funId(-type\n), where the arguments of the\nfunction can be bound to pointcut variables using argument\nbinder args( ----pattern\n) and the return value can be bound to\na pointcut variable using a return clause return( pattern ).\nThe two constructs args( ----pattern\n) and return( pattern )\ncan also provide pattern matching by using values (or already\nbound pointcut variables) in pattern. Pointcuts can\nalso depend on a boolean condition using the if-constructor.\nA global access pointcut PAccGlobal selects either all read\njoin points readGlobalJP(varId, val) or all write join points\nwriteGlobalJP(varId, val, size) on the global base program\nvariable varId. In these cases, the read or written value can\nbe bound to a variable using value(pattern); in addition, the\nsize of the written value can be bound with size(varName).\nPattern matching can also be used for variable access.\nA control flow pointcut PCf of the form controlflow(\nPCallSig\n1\n, ..., PCallSig\nn\n, PCfEnd) matches all join points\nof the form controlflowJP(funId\n1\n, ..., funId\nn\n, cfEnd), where\nthe function identifier in P CallSig\ni\nis f unId\ni\n. Similarly, a\ncontrol flow pointcut may match a global variable access\nfor a given stack configuration. The pointcuts of the form\ncontrolflowstar(. . . ) select calls or global variable accesses\nin a stack context allowing for calls that are not directly\nnested within one another.\nFinally, P Acc, an access pointcut for a global variable or\nall of its local aliases, matches all join points of the form\nreadJP or writeJP.\nAsp\n::= AspP rim [ && until( AspP rim ) ]\n|\nAspSeq [ && until( AspP rim ) ]\nAspP rim\n::= P P rim Advice\nAspSeq\n::= seq( AspP rim\nAspSeqElts\nAspSeqElt )\nAspSeqElts ::= [AspSeqElts] AspSeqElt [ ]\nAspSeqElt ::= AspP rim\n|\nP Acc Advice\n|\n(AspSeqElt || AspSeqElt)\nAdvice\n::= [ then f unId(----pattern\n) ] ;\nFigure 7: Aspect language\n3.4\nAspect Language\nThe aspect language we propose is defined in Fig. 7. Aspects\nAsp are either primitive AspP rim, or sequences of\nprimitive aspects AspSeq.\nA primitive aspect AspPrim combines a primitive pointcut\nwith an advice that will be applied to all join points\nselected by the pointcut. If the primitive pointcut has the\nform p\n1\n|| p\n2\n, then all variables used in the advice have to\nbe bound in both, p\n1\nand p\n2\n.\nAn advice (Advice) is a C function call that replaces a join\npoint in the base program execution (similarly to around in\nAspectJ). It must have the same return type as the join\npoint it replaces: the type of the global variable in case of a\nread access, void for a write access and the return type of\nthe function for a call. When the advice is empty (no then\nclause), the original join point is executed. The original join\npoint can be skipped by calling an empty C function.\nA sequence aspect is composed of a sequence of primitive\naspects. A sequence starts when the first primitive aspect\nmatches. Then the second primitive aspect becomes active\ninstead of the first one. When it matches, the third aspect\nbecomes active instead of the second one. And so on, until\nthe last primitive aspect in the sequence. 
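Coming back to the advice side for a moment: advice bodies are ordinary C functions whose return type must match the join point they replace. The sketch below illustrates this for the aspects of Fig. 2 and Fig. 3; returnZero() is quoted from Fig. 2, while reportOverflow() is our own placeholder since its body is not given in the paper.

    #include <stdio.h>

    /* Replaces the connect() join point of Fig. 2: UDP has no connect step,
       so the advice simply pretends the call succeeded (same return type, int). */
    int returnZero(void) {
        return 0;
    }

    /* Replaces the faulty write join point of Fig. 3.  A write access has
       return type void, so the advice does as well. */
    void reportOverflow(void) {
        fprintf(stderr, "buffer overflow detected, write skipped\n");
    }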
All but the first\nand last primitive aspects can be repeated zero or multiple\ntimes by using : in this case, the primitive aspect is ac-31\nP P rim\n::= P Call\n|\nP AccGlobal\n|\nP Cf\n|\nP P rim || P P rim\nP Call\n::= P CallSig [ && args( ----pattern\n) ] [ && return( pattern ) ] [ && P If ]\nP CallSig\n::= call( type f unId(-type\n) )\nP If\n::= if( expr ) [ && P If ]\nP AccGlobal\n::= readGlobal( type varId ) [ && value( pattern ) ] [ && P If ]\n|\nwriteGlobal( type varId ) [ && value( pattern ) ] [ && size( pattern ) ] [ && P If ]\nP Cf\n::= controlflow( P CallSigList, P Cf End )\n|\ncontrolflowstar( P CallSigList, P Cf End )\nP CallSigList ::= P CallSig [ , P CallSigList ]\nP Cf End\n::= P Call | P AccGlobal\nP Acc\n::= read( var ) [ && value( pattern ) ] [ && P If ]\n|\nwrite( var ) [ && value( pattern ) ] [ && size( pattern ) ] [ && P If ]\npattern\n::= var | val\nFigure 6: Pointcut language\nA\n::= A\n|\nA || A\n; parallelism\nA\n::= a.A\n; recursive definition (a Rec)\n|\nC I; A\n; prefixing\n|\nC I; a\n; end of sequence (a Rec)\n|\nC I; STOP ; halting aspect\n|\nA P A\n; choice\nFigure 8: Tiny aspect language\ntive as long as the following one in the sequence does not\nmatch. Branching, i.e., a logical `or' between two primitive\naspects, can be introduced in a sequence by the operator ||.\nAn element of the sequence can also match a global variable\nof the base program and accesses to its local aliases, as\nsoon as its address is known (i.e., a previous primitive pointcut\nhas already bound its address to a pointcut variable).\nHence, an aspect matching accesses cannot start a sequence.\nEvery join point matching the first primitive pointcut of a\nsequence starts a new instance of the sequence. The different\ninstances are matched in parallel.\nA primitive or a sequence aspect a can be used in combination\nwith an expression until(a\n1\n), to restrict its scope. In\nthis case, once a join point has been matched by a, the execution\nof a proceeds as previously described until a\n1\nmatches.\nTo conclude the presentation of our language, note that it\ndoes not include some features, such as named pointcuts as\narguments to controlflows and conjunctive terms, which\nare not necessary for the examples we considered but which\ncould easily be added. (As an aside, note that such extensions\nof the pointcut language may affect the computability\nof advanced algorithmic problems, such as whether a pointcut\nmatches some part of any base program [25].)\n3.5\nTowards a formal semantics for expressive\naspects\nIn the previous sections, we have given an informal semantics\nof our aspect language. We now illustrate how the\naspect language could be formally defined by translating one\nof the example aspects into formal aspect language by extension\nof that used in the formal framework of [14].\nThe original formal language must be extended in order to\ndeal with halting aspects, an unbounded number of sequential\naspects and arbitrary join point predicates. The grammar\nof the extension, our tiny aspect language, is defined in\nFigure 8. 
In this language, aspect expressions A consists of\nparallel combinations of aspects, C is a join point predicate\n(similar to our pointcut language) expressed as a conjunction\nof a term pattern and possibly an expression from the\nconstraint logic programming language CLP(R) [20].\nAn aspect A is either:\nA recursive definition.\nA sequence formed using the prefix operation C I ; X,\nwhere X is an aspect or a recursion variable and I a piece\nof code (i.e., an advice).\nA choice construction A\n1\nP A\n2\nwhich chooses the first\naspect that matches a join point (the other is thrown away).\nIf both match the same join point, A\n1\nis chosen.\nA parallel composition of two aspects A\n1\n||\nA\n2\nthat\ncannot occur in choice construction.\nA halting aspect STOP.\nThe semantics of the protocol translation aspect (from\nTCP to UDP) is given in Fig. 9. A sequence can have several\ninstances. This is translated into the language A by the\nexpression a\n1\n|| ... which starts a new sequence a\n1\nonce\nthe first join point has been matched and continue to match\nthe rest of the sequence in progress. The repetition operator\nis translated into recursion on variable the a\n2\n. The\nbranching operator || is translated into the choice operator\n32\na\n1\n. callJP(fd socket(AF INET, SOCK STREAM, 0)) socket(AF INET, SOCK DGRAM, 0);\na\n1\n|| ( callJP(a connect(fd, address, length)) returnZero();\na\n2\n. callJP(b close(fd)) skip; STOP\nP callJP(c read(fd, readBuffer, readLength)) recvfrom(fd, readBuffer, readLength, 0, address, length); a\n2\nP callJP(d write(fd, writeBuffer, writeLength)) recvfrom(fd, writeBuffer, writeLength, 0, address, length); a\n2\nFigure 9: Definition of the protocol translation using the tiny aspect language\nP. Finally, the last primitive aspect of the sequence occurs\nas the first aspect of a choice to get priority over the join\npoints read and write because of the . Note that we use\npattern matching in A and that an overbar marks the first\noccurrence of a variable (i.e., its definition not a use).\nNote that formal definitions such as that of the protocol\ntranslation aspect precisely define several important issues,\nin particular, when new instances of the sequence aspect are\ncreated, and disambiguate of potentially non-deterministic\nsituations, e.g., when two pointcuts of consecutive primitive\naspects in the sequence match at the same time.\nDYNAMIC WEAVING WITH ARACHNE\nArachne is built around two tools, an aspect compiler and\na runtime weaver. The aspect compiler translates the aspect\nsource code into a compiled library that, at weaving time, directs\nthe weaver to place the hooks in the base program. The\nhooking mechanisms used in Arachne are based on improved\ntechniques originally developed for Dyner [32]. These techniques\nallow to rewrite the binary code of executable files\non the fly i.e.without pausing the base program, as long\nas these files conform to the mapping defined by the Unix\nstandard [35] between the C language and x86 assembly language\n. Arachne's implementation is structured as an open\nframework that allows to experiment with new kinds of join\npoints and pointcut constructs. Another important difference\nbetween Arachne and Dyner is, that Dyner requires\na compile time preparation of the base program, whereas\nArachne does not. 
Hence Arachne is totally transparent for\nthe base program while Dyner is not.\n4.1\nThe Arachne Open Architecture\nThe Arachne open architecture is structured around three\nmain entities: the aspect compiler, the instrumentation kernel\n, and the different rewriting strategies. The aspect compiler\ntranslates the aspect source code into C before compiling\nit. Weaving is accomplished through a command line\ntool weave that acts as a front end for the instrumentation\nkernel. weave relays weaving requests to the instrumentation\nkernel loaded in the address space of the program\nthrough Unix sockets. Upon reception of a weaving request,\nthe instrumentation kernel selects the appropriate rewriting\nstrategies referred by the aspects to be woven and instruments\nthe base program accordingly. The rewriting strategy\nconsults the pointcut analysis performed by the aspect\ncompiler to locate the places where the binary code of the\nbase program needs to be rewritten. It finally modifies the\nbinary code to actually tie the aspects to the base program.\nWith this approach, the Arachne core is independent of\na particular aspect, of the aspect language, of the particular\nprocessor architecture, and of a particular base program.\nIn fact, all dependencies to aspect language implementation\nare limited to the aspect compiler. All dependencies to the\noperating system are localized in the instrumentation kernel\nand finally all dependencies to the underlying hardware\narchitecture are modularized in the rewriting strategies.\n4.1.1\nThe Arachne aspect compilation process\nThe aspect compilation scheme is relatively straightforward\n: it transforms advices into regular C functions. Pointcuts\nare rewritten as C code driving hook insertions into\nthe base program at weaving time. There are however cases\nwhere the sole introduction of hooks is insufficient to determine\nwhether an advice should be executed. In this case,\nthe aspect compiler generates functions that complement\nthe hooks with dynamic tests on the state of the base program\n. These dynamic tests are called residues in AspectJ\nand the rewritten instructions within the base program the\nshadow [16]. Once the aspects have been translated into C,\nthe Arachne compiler uses a legacy C compiler to generate a\ndynamically linked library (DLL) for the compiled aspects.\n4.1.2\nThe Arachne weaving process\nFrom a user viewpoint, the Arachne weave and deweave\ncommand line programs the same syntax than Dyner's version\n. They both take two arguments. The first identifies the\nprocess to weave aspects in or deweave aspects from, and\nthe second indicates the aspect DLL. However, Arachne can\ntarget potentially any C application running on the machine\nwhile Dyner was limited to applications compiled with it\nrunning on the machine. When Arachne's weave receives a\nrequest to weave an aspect in a process that does not contain\nthe Arachne instrumentation kernel, it loads the kernel\nin the process address space using standard techniques [11].\nThe instrumentation kernel is transparent for the base\nprogram as the latter cannot access the resources (memory\nand sockets essentially) used by the former. Once injected\n, the kernel creates a thread with the Linux system\ncall: clone. This thread handles the different weaving requests\n. Compared to the POSIX pthread create function,\nthe usage of clone allows the instrumentation thread to prevent\nthe base program to access its sockets. 
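A minimal sketch of this idea follows: with clone the caller chooses which resources the new thread shares, whereas pthread_create always shares, among others, the file descriptor table. The flag set and stack size below are assumptions made for illustration; the paper does not list the exact flags Arachne passes.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/mman.h>

    #define KERNEL_STACK_SIZE (1 << 20)

    static int instrumentation_main(void *arg) {
        (void)arg;      /* ...accept weaving requests on a private socket... */
        return 0;
    }

    /* Spawn a service thread that shares the address space (CLONE_VM) but not
       the file descriptor table, so sockets it opens stay invisible to the
       base program. */
    static int spawn_instrumentation_thread(void) {
        void *stack = mmap(NULL, KERNEL_STACK_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (stack == MAP_FAILED)
            return -1;
        /* clone() expects the top of the stack on x86 (stacks grow downwards). */
        return clone(instrumentation_main, (char *)stack + KERNEL_STACK_SIZE,
                     CLONE_VM | CLONE_FS, NULL);
    }

Allocating the stack with mmap also keeps the thread's memory off the base program's own allocator.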
The instrumentation kernel allocates memory by using side-effect-free allocation routines (through the Linux mmap API). Because the allocation routines are side-effect free, Arachne's memory is totally invisible to the base program. It is up to the aspect to use Arachne's memory allocation routines or base-program-specific allocation functions. This transparency turned out to be crucial in our experiments: legacy applications such as Squid use dedicated resource management routines and expect any piece of code they run to use these routines; failures result in an application crash.

After loading an aspect, the instrumentation kernel rewrites the binary code of the base program. The rewriting strategies are not included in the kernel and must be fetched on demand by each loaded aspect.

4.2 Rewriting strategies

Rewriting strategies are responsible for transforming the binary code of the base program to effectively tie aspects to the base program at weaving time.

[Figure 10: Generic hook operations. At the join point shadow, the original x86 instruction is replaced by a jump to a hook generated at weaving time; the hook saves the registers, runs the residue (dynamic tests) and/or the advice compiled into the aspect DLL, executes the relocated, tailored instruction(s) updating the registers, restores the registers and returns into the base program.]

These strategies localize Arachne's main dependencies on the underlying hardware architecture. In general, rewriting strategies need to collect information about the base program. This information typically consists of the addresses of the different shadows, their size, the symbol (i.e., function or global variable name) they manipulate, their length, etc. In order to keep compiled aspects independent from the base program, this information is gathered on demand at runtime. The mapping between a symbol name in the base program source code and its address in memory is inferred from the linking information contained in the base program executable. However, because this information can be costly to retrieve, Arachne collects and stores it into meta-information DLLs. These DLLs behave as a kind of cache and lessen the cost of collecting the information required to instrument the base program. To implement our aspect language, Arachne provides a set of eight rewriting strategies that may use each other.

4.2.1 Strategies for call, readGlobal and writeGlobal

In Arachne, call, readGlobal and writeGlobal allow an advice to be triggered upon a function call, a read of a global variable or a write to it, respectively. While the implementation of readGlobal and writeGlobal in Arachne is close to the one in Dyner, Arachne implements the strategy for call by rewriting the function invocations found in the base program; Dyner instead rewrites the function body of the callee. On the Intel architecture, function calls benefit from the direct mapping to the x86 call assembly instruction, which is used by almost all, if not all, compilers. Write and read accesses to global variables are translated into instructions using immediate, hard-coded addresses within the binary code of the base program. By comparing these addresses with the linking information contained in the base program executable, Arachne can determine where a global variable is being accessed. Therefore these primitive pointcuts do not involve any dynamic tests.
The sole rewriting of the\nbinary base program code is enough to trigger advice and\nresidue\n1\nexecutions at all appropriate points.\nThe size of the x86 call instruction and the size of an x86\njump (jmp) instruction are the same. Since the instruction\nperforming an access to a global variable involves a hard\ncoded address, x86 instructions that read or write a global\n1\nResidues (i.e. dynamic tests on the base program state) are\nrequired when these primitive pointcuts are combined with\nconditional pointcuts or when pattern matching is involved.\nvariable have at least the size of a x86 jmp instruction. Hence\nat weaving time, Arachne rewrites them as a jmp instruction\nto a hook. Hooks are generated on the fly on freshly allocated\nmemory. As shown in figure 10, hooks contain a few\nassembly instructions that save and restore the appropriate\nregisters before and after an advice (or shadow) execution.\nA generic approach is to have hooks save the whole set of\nregisters, then execute the appropriate residue and/or advice\ncode before restoring the whole set of registers; finally\nthe instructions found at the join point shadow are executed\nto perform the appropriate side effects on the processor registers\n. This is accomplished by relocating the instructions\nfound at the join point shadow. Relocating the instructions\nmakes the rewriting strategies handling read and write access\nto global variable independent from the instruction generated\nby the compiler to perform the access\n2\n. The limited\nnumber of x86 instructions used to invoke a function allows\nArachne's rewriting strategy to exploit more efficient, relo-cation\nfree, hooks.\n4.2.2\nStrategies for\ncontrolflow\nand\ncontrolflowstar\nEvery time a C function is called, the Linux runtime\ncreates an activation record on the call stack [35]. Like\nDyner, Arachne's implementation of the rewriting strategy\nfor controlflow uses the most deeply nested function\ncall (or global read or write access) in the control flow pointcut\nas shadow. This shadow triggers a residue. This residue\nuses the activation record's chaining to check whether the\nremaining function calls of the control flow, are on the call\nstack maintained by the Linux runtime. An appropriate\nusage of hashtables that store the linking information contained\nin the base program executables can thereby decrease\nthe cost of determining if a specific function is the\ncaller of another to a pointer comparison. Therefore, the\nresidue for a controlflow with n directly nested functions\nimplies exactly n pointer comparisons. However, the residue\nworst case runtime for the indirect control flow operator\ncontrolflowstar that allows for not directly nested functions\n, is proportional to the base program stack depth.\n4.2.3\nStrategies for\nread\nand\nwrite\nread and write are new join points not included in Dyner\nthat have been added to the latest version of Arachne. Their\nimplementation relays on a page memory protection as allowed\nby the Linux operating system interface (i.e. mprotect)\nand the Intel processor specifications [18]. A read or write\npointcut triggers a residue to relocate the bound variable\ninto a memory page that the base program is not allowed\nto access and adds a dedicated signal handler. Any attempt\nmade by the base program to access the bound variable identified\nwill then trigger the execution of the previously added\nsignal handler. 
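The underlying trapping technique can be sketched in a few lines of C: protect the page holding the relocated variable, install a SIGSEGV handler, and let every access fault into that handler. The code below is a simplified illustration of the technique rather than Arachne's implementation; in particular, it merely unprotects the page so that the access can proceed, instead of decoding the faulting instruction and running an advice.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char *guarded;          /* page holding the relocated variable */
    static size_t page_size;

    static void on_access(int sig, siginfo_t *info, void *ctx) {
        (void)sig; (void)ctx; (void)info;   /* info->si_addr: faulting address */
        write(STDERR_FILENO, "guarded variable touched\n", 25);
        /* A weaver would decide here between a read and a write advice. */
        mprotect(guarded, page_size, PROT_READ | PROT_WRITE);
    }

    int main(void) {
        page_size = (size_t)sysconf(_SC_PAGESIZE);
        guarded = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = on_access;
        sigaction(SIGSEGV, &sa, NULL);

        mprotect(guarded, page_size, PROT_NONE);  /* arm the trap */
        guarded[0] = 42;   /* faults, the handler unprotects, the store retries */
        return 0;
    }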
This handler will then inspect the binary\ninstruction trying to access the protected page to determine\nwhether it was a read or a write access before eventually\nexecuting the appropriate advice.\n4.2.4\nStrategies for\nseq\nLike read and write, seq is a new language feature of\nArachne. Dyner offers no equivalent construct. Arachne's\nrewriting strategy of this operator associates a linked list to\n2\nAbout 250 x86 instruction mnemonics can directly manipulate\na global variable. This corresponds to more than one\nthousand opcodes.\n34\nevery stage inside the sequence except the last one. Each\nstage in a sequence triggers a residue that updates these\nlinked lists to reflect state transitions of currently matching\nexecution flows. Upon matching of the first pointcut\nof the first primitive aspect in the seq, a node is allocated\nand added to the associated linked list. This node contains\na structure holding variables shared among the different\npointcuts within the sequence. Once a join point\nmatches a pointcut of an primitive aspect denoting a stage\nin the sequence, Arachne consults every node in the linked\nlist associated with the previous stage and executes the corresponding\nadvice\n3\n. Arachne eventually updates the node\nand in the absence of a moves it to the list associated\nwith the currently matched pointcut.If the matching pointcut\ncorresponds to the end of the sequence, structures are\nnot moved into another list but freed. Our aspect compiler\nincludes an optimization where structures are allocated from\na resizable pool and upon a sequence termination, structures\nare not freed but returned to the pool.\n4.3\nArachne limitations\nAggressive optimizations of the base program might prevent\nArachne to seamlessly weave aspects. Two optimizations\nare not yet supported by Arachne. First if the compiler\ninlines a function in another one within the binary code of\nthe base program, the Arachne weaver will fail to properly\nhandle pointcuts referring to that function. Second, control\nflow pointcuts are based on the chaining of activation\nrecords. On the x86 architecture, in leaf functions, optimizing\ncompilers sometimes do not maintain this chaining\nto free one register for the rest of the computation. This\nhowever has not been a problem during our experiments\nas we used the open source C compiler gcc. Arachne supports\ntwo of the three optimization levels proposed by gcc.\nStripping that removes linking information and aggressive\noptimizations that break the interoperability between compilers\nand/or debuggers are incompatible with Arachne. In\npractice, Arachne can be used on applications compiled like\nsquid with two of the three gcc optimization level.\nPERFORMANCE EVALUATION\nAspect-oriented solutions will be used if the aspect sys-tem's\nlanguage is expressive enough and if the aspect system\noverhead is low enough, for the task at hand. The purpose\nof this section is to study Arachne's performance. We first\npresent the speed of each Arachne language construct and\ncompare it to similar C language constructs. We then study\nthe overhead of extending Squid with a prefetching policy.\nThis case study shows that even if the cost of some Arachne\naspect language constructs might be high compared to C\nlanguage constructs, this overhead is largely amortized in\nreal applications.\n5.1\nEvaluation of the language constructs\nThis performance evaluation focuses on studying the cost\nof each construct of our aspect language. 
To estimate the cost of each construct of our aspect language, we wrote an aspect using the construct that behaves as an interpreter of the base program. (Footnote 3, attached to the seq strategy of Section 4.2.4: in case the pointcut of the previous stage was used with a star, Arachne examines the nodes of the linked lists associated with the last two previous stages, and so on, until a non-starred primitive aspect in the sequence is reached.) For example, to study the performance of readGlobal, we wrote an aspect whose action returns the value of the global variable referred to in the pointcut, i.e., we wrote aspects behaving like the base program. For each of these aspects, we compare the time required to perform the operation matching the pointcut when the operation is interpreted by the woven aspect with the time required to carry out the operation natively (without the woven aspect). For example, to study the performance of readGlobal, we first evaluate the time needed to retrieve the value of a global variable through the code generated by the C compiler gcc without any aspect woven, and compare this value to the time needed to retrieve the same value through the aspect once it has been woven into the base program. We express our measurements as a ratio between these two durations to abstract from the experimentation platform.

This approach requires the ability to measure short periods of time. For instance, a global variable value is usually retrieved (readGlobal in our aspect language) in a single clock tick. Since standard time measurement APIs were not precise enough, our benchmarking infrastructure relies on the rdtsc assembly instruction [18]. This instruction returns the number of clock cycles elapsed since power up. The Pentium 4 processor has the ability to dynamically reorder the instructions it executes. To ensure the validity of our measurements, we therefore insert mfence instructions in the generated code whose execution speed is being measured; an mfence forces the preceding instructions to be fully executed before going on. The pipeline mechanism of the Pentium 4 processor entails that the speed of a piece of assembly code depends on the preceding instructions. To avoid such hidden dependencies, we place the operation whose execution time is being measured in a loop. We use gcc to unroll the loop at compile time and we measure the time needed to execute the complete loop. This measure, divided by the number of loop repetitions, yields an estimate of the time required to execute the operation. The number of times the loop is executed is chosen according to the relative variation of the measurements, i.e., we increased the number of repetitions until ten runs yielded an average relative variation not exceeding 5%. To check the correctness of our experimental protocol, we measured the time needed to execute a nop assembly instruction, which requires one processor cycle according to the Intel specification. The measures of nop presented a relative variation of 1.6%.

Table 1 summarizes our experimental results.

Table 1: Speed of each language construct used to interpret the base program compared to a native execution. Execution times are in cycles; the relative variation of the measurements is given in parentheses.
call:        Arachne 28 (2.3%),   native 21 (1.9%),  ratio 1.3
seq:         Arachne 201 (0.5%),  native 63 (1.7%),  ratio 3.2
cflow:       Arachne 228 (1.6%),  native 42 (1.8%),  ratio 5.4
readGlobal:  Arachne 2762 (4.3%), native 1 (0.2%),   ratio 2762
read:        Arachne 9729 (4.9%), native 1 (0.6%),   ratio 9729

Using the aspect language to replace a function that returns immediately is only 1.3 times slower than a direct, aspect-less call to that empty function.
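As an aside, the rdtsc/mfence idiom described above boils down to a harness like the following sketch (x86, GCC inline assembly). It is our illustration rather than the authors' actual harness; like theirs, it measures a whole loop and divides by the repetition count.

    #include <stdint.h>
    #include <stdio.h>

    /* Sample the time-stamp counter, fencing so that preceding instructions
       have completed before the counter is read. */
    static inline uint64_t ticks(void) {
        uint32_t lo, hi;
        __asm__ __volatile__("mfence\n\trdtsc" : "=a"(lo), "=d"(hi) : : "memory");
        return ((uint64_t)hi << 32) | lo;
    }

    #define REPEAT 1000000

    int main(void) {
        volatile int dummy = 0;
        uint64_t start = ticks();
        for (long i = 0; i < REPEAT; i++)
            dummy++;                       /* operation under measurement */
        uint64_t stop = ticks();
        printf("%.2f cycles per iteration\n", (double)(stop - start) / REPEAT);
        return 0;
    }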
Since the aspect compiler packages advices as regular C functions, and because a call pointcut involves no residue, this good result is not surprising.

[Figure 11: controlflow, seq, and read performances. The plots show the slowdown ratio as a function of the number of nested calls for controlflow and as a function of the number of matching instances for seq.]

When an access to a global variable is replaced by an advice execution, the hooks generated by the rewriting strategy need to prepare the processor to call the advice function. This increases the time spent in the hooks. In addition, while an access to a global variable is often performed by a single x86 instruction, an empty function is often composed of four instructions. Hence the ratio between an aspect triggered upon a global variable access and a direct, aspect-less access to a global variable is slightly higher than the corresponding ratio for functions. A seq of three invocations of empty functions is only 3.2 times slower than three direct, aspect-less, successive function calls: compared to the pointcuts used to delimit the different stages, the seq overhead is limited to a few pointer exchanges between the linked lists holding the bound variables.

On Intel x86, global variable accesses benefit from excellent hardware support. In the absence of aspects, a direct global variable read is usually carried out in a single cycle. To trigger the advice execution, the Arachne runtime has to save and restore the processor state to ensure execution coherency, as advices are packaged as regular C functions (see also 4.2.1). It is therefore not surprising that readGlobal appears to be 2762 times slower than a direct, aspect-less global variable read. The read performance can be explained in the same way: in the absence of aspects, local variables are accessed in a single cycle, whereas the signal mechanism used by read requires that the operating system detect the base program's attempt to read a protected memory page before locating and triggering the signal handler set up by Arachne, as described in 4.2.3. Such switches to and from kernel space remain slow. Using read to read a local variable is 9729 times slower than retrieving the local variable value directly, without aspects.

seq and controlflow can refer to several points in the execution of the base program (i.e., different stages for seq and different function invocations for controlflow). The runtime of these pointcuts grows linearly with the number of execution points they refer to and with the number of matching instances.
Figure 11 summarizes a few experimental results for controlflow and seq confirming these points.

5.2 Case Study on a real application

Since, depending on the aspect construct used, interpreting the base program with aspects can slow it down by a factor ranging between 1.3 and 9729, we studied Arachne's performance on a real-world application, the Web cache Squid.

Table 2: Performance comparison between manual modification and Arachne, for prefetching policy integration in Squid. Top1 and Top2 denote the two request-rate peaks; Diff is given in %.
Throughput (request/s):   Top1: Arachne 5.59, manual 5.59;                   Top2: Arachne 5.58, manual 5.59
Response time (ms):       Top1: Arachne 1131.42, manual 1146.07, diff 1.2;   Top2: Arachne 1085.31, manual 1074.55, diff -1
Miss response time (ms):  Top1: Arachne 2533.50, manual 2539.52, diff 0.2;   Top2: Arachne 2528.35, manual 2525.34, diff 1.8
Hit response time (ms):   Top1: Arachne 28.96, manual 28.76, diff -0.6;      Top2: Arachne 30.62, manual 31.84, diff 3.8
Hit ratio:                Top1: Arachne 59.76, manual 59.35, diff -0.6;      Top2: Arachne 61.77, manual 62.22, diff 0.7
Errors:                   Top1: Arachne 0.51, manual 0.50, diff -1.9;        Top2: Arachne 0.34, manual 0.34, diff 0

We extended Squid with a prefetching policy [9]. As described in section 3.1, we implemented this policy as a set of aspects, and we made a second implementation of the policy by editing the Squid source code and recompiling it. This section compares the performance of these two implementations using standard Web cache performance indicators: throughput, response time and hit ratio.

Obtaining access traces adequate for studying Web cache performance is difficult. The trace must be long enough to fill the cache. Due to privacy issues, traces are usually not publicly available. Since traces do not include the content of the accessed pages, these pages must be downloaded again; in the meantime the page contents may have changed and even the URLs may have disappeared. Instead of traces, we therefore based our evaluation on Web Polygraph [30]. Polygraph is a benchmarking tool developed by the Squid team featuring a realistic HTTP and SSL traffic generator and a flexible content simulator.

We filled up the cache and simulated a one-day workload with the two request-rate peaks observed in real-life environments [30]. Table 2 shows the results of our simulation. Measurements were made during the two request peaks. The hit time and the miss time, i.e., the time needed to deliver a document that is present, respectively not present, in the cache, are very similar: the differences between the version of Squid extended by Arachne and the one extended manually are imperceptible (less than 1%). Hence, even if the cost of Arachne's aspect language constructs might seem high, it is largely amortized in real applications. To give a typical example observed on our experimental platform: in the case of a cache hit, a 3.8 MB page was retrieved in a single second, the time spent in the prefetching advices amounted to 1801 µs, and the time spent within Arachne to execute the hooks and dynamic tests to 0.45 µs. In a miss case, on average, a client retrieved the same page in 1.3 seconds, 16679 µs were spent in the advices and 0.67 µs within Arachne itself.

RELATED WORK

Our work is directly related to other aspect weavers for C, to approaches for expressive aspect languages, and to dynamic weaving, in particular for C. In this section, we consider related work in each of these fields in turn.

Apart from Dyner and Arachne, there are few aspect weavers for C (or even C-like languages); some noteworthy exceptions are AspectC [12] (no available implementation) and AspectC++ [33].
All of these rely on source-code\ntransformation and thus cannot apply aspects to running\nC applications as required by the applications we consider.\nFurthermore, none of these systems provides explicit support\nfor aspects over join point sequences.\nThere is quite a large body of work now on the notion of\nexpressive aspect languages where \"more expressive\" typically\ncompares to w.r.t. AspectJ's pointcut and advice models\n. Our work has been inspired by Event-based AOP [15],\nwhich aims at the definition of pointcuts in terms of arbitrary\nrelations between events. Nevertheless, many other\napproaches to expressive aspect languages exist: e.g., data-flow\nrelations [26], logic programming [13], process algebras\n[3], graphs [5], and temporal logics [1], have all been proposed\nas a basis for the definition of expressive aspect languages\n. However, few of these encompass dynamic weaving\nand only the latter has been applied to C code under efficiency\nconsiderations similar to our setting.\nDynamic weaving is commonly realized in Java through\npreprocessing at load-time like [8] or through the JVM Debugging\nInterface [28]. These tools rely on bytecode rewriting\ntechniques, have typically limited expressivity (some do\nnot support field accesses) and incur a huge performance\noverhead. Dynamic weaving through modification at runtime\nis found infrequently for compiled languages. An exception\nfor Java is JasCo [21] whose most recent version (0.7)\nsupports dynamic weaving through the new instrumentation\nAPI of Java 5.\nMany instrumentation techniques have been proposed to\nrewrite binary code on the fly. In these approaches, difficulty\nissues range from the complexity to rewrite binary\ncode to the lack of a well-defined relationship between source\ncode and the compiler generated binary code. Hence many\napproaches work on an intermediate representation of the\nbinary code and source language [34]. Producing this representation\nfirst and then regenerating the appropriate binary\nexecutable code has proven to be costly both in terms of\nmemory consumption and in CPU time.\nA few other approaches have considered a direct rewriting\nof the binary code at runtime. Dyninst [17] and dynamic\nprobes [27] allow programmers to modify any binary instruction\nbelonging to an executable. Dyninst however relies on\nthe Unix debugging API: ptrace. ptrace allows a third\nparty process to read and write the base program memory.\nIt is however highly inefficient: before using ptrace, the\nthird party process has to suspend the execution of the base\nprogram and resume its execution afterwards. In comparison\n, Arachne uses ptrace at most once, to inject its kernel\nDLL into the base program process. In addition, Dyninst\ndoes not free the programmer from dealing with low level\ndetails. For example, it seems difficult to trigger an advice\nexecution upon a variable access with Dyninst: the translation\nfrom the variable identifier to an effective address is left\nto the user. Worse, Dyninst does not grant that the manipulation\nof the binary instructions it performs will succeed.\nDyninst uses an instrumentation strategy where several adjacent\ninstructions are relocated. This is unsafe as one of\nthe relocated instructions can be the target of branching\ninstructions. 
In comparison, Arachne join point model has\nbeen carefully chosen to avoid these kind of issues; if an aspect\ncan be compiled with Arachne, it can always be woven.\nCONCLUSION AND FUTURE WORK\nIn this paper we have discussed three different crosscutting\nconcerns which are typical for C applications using OS-level\nservices and which frequently need to be applied at\nruntime. We have motivated that such concerns can be expressed\nas aspects and have defined a suitable aspect language\n. This language is more expressive than those used in\nother aspect weavers for C in that it provides support for\naspects defined over sequences of execution points as well as\nfor variable aliases. We have presented an integration of this\nlanguage into Arachne, a weaver for runtime weaving of aspects\nin C applications. Finally, we have provided evidence\nthat the integration is efficient enough to apply such aspects\ndynamically to high-performance applications, in particular\nthe web cache \"squid.\"\nAs future work, we intend to investigate the suitability of\nthe proposed aspect language for other C-applications. We\nalso intend to investigate Arachne extension to the C++\nlanguage. Indeed, object-oriented programming heavily uses\nprotocol-based interfaces collaboration (hence sequence aspects\n). Along with its open architecture, extending Arachne\nto support C++, will pave the way to a relatively language\nindependent aspect and weaving infrastructure.\nFinally,\nArachne's toolbox should be extended with support for aspect\ninteractions (e.g., analyses and composition operators).\nREFERENCES\n[1] R. A.\nAberg, J. L. Lawall, M. S\nudholt, G. Muller, and\nA.-F. L. Meur. On the automatic evolution of an os\nkernel using temporal logic and AOP. In Proceedings\nof Automated Software Engineering (ASE'03), pages\n196204. IEEE, 2003.\n[2] American National Standards Institute.\nANSI/ISO/IEC 9899-1999: Programming Languages\n-- C. American National Standards Institute, 1430\nBroadway, New York, NY 10018, USA, 1999.\n[3] J. H. Andrews. Process-algebraic foundations of\naspect-oriented programming. In Proceedings of the\n3rd International Conference on Metalevel\nArchitectures and Separation of Crosscutting\nConcerns, volume 2192 of LNCS. Springer Verlag,\nSept. 2001.\n[4] M. Arlitt and T. Jin. A workload characterization\nstudy of the 1998 world cup web site. IEEE Network,\n14(3):3037, May 2000.\n[5] U. Amann and A. Ludwig. Aspect weaving by graph\nrewriting. In U. W. Eisenecker and K. Czarnecki,\neditors, Generative Component-based Software\nEngineering (GCSE), Erfurt, Oct. 1999.\n[6] CERT - Carnegie Mellon University. Vulnerability\nnote vu#613459, Feb. 2002. published on line:\nhttp://www.kb.cert.org/vuls/id/613459.\n[7] H. Chen and P. Mohapatra. Catp: A context-aware\ntransportation protocol for http. In International\nWorkshop on New Advances in Web Servers and\nProxy Technologies Held with ICDCS, 2003.\n[8] S. Chiba and K. Nakagawa. Josh: An open\nAspectJ-like language. In Proceedings of the third\n37\ninternational conference on Aspect-oriented software\ndevelopment, pages 102111. ACM Press, Mar. 2004.\n[9] K.-I. Chinen and S. Yamaguchi. An interactive\nprefetching proxy server for improvement of WWW\nlatency. In Seventh Annual Conference of the Internet\nSociety (INET'97), Kuala Lumpur, June 1997.\n[10] I. Cidon, A. Gupta, R. Rom, and C. Schuba. Hybrid\ntcp-udp transport for web traffic. 
In Proceedings of the\n18th IEEE International Performance, Computing,\nand Communications Conference (IPCCC'99), pages\n177184, Feb. 1990.\n[11] S. Clowes. Injectso: Modifying and spying on running\nprocesses under linux. In Black hat briefings, 2001.\n[12] Y. Coady, G. Kiczales, M. Feeley, and G. Smolyn.\nUsing AspectC to improve the modularity of\nPath-Specific customization in operating system code.\nIn V. Gruhn, editor, Proc. of the Joint 8th European\nSoftware Engeneering Conference and 9th ACM\nSIGSOFT Symposium on the Foundation of Software\nEngeneering (ESEC/FSE-01), volume 26, 5 of\nSOFTWARE ENGINEERING NOTES, pages 8898,\nNew York, Sept. 1014 2001. ACM Press.\n[13] K. de Volder. Aspect-oriented logic meta\nprogramming. In P. Cointe, editor, Meta-Level\nArchitectures and Reflection, 2nd International\nConference on Reflection, volume 1616 of LNCS,\npages 250272. Springer Verlag, 1999.\n[14] R. Douence, P. Fradet, and M. S\nudholt. A framework\nfor the detection and resolution of aspect interactions.\nIn Proceedings of the ACM SIGPLAN/SIGSOFT\nConference on Generative Programming and\nComponent Engineering (GPCE'02), volume 2487 of\nLLNCS, pages 173188. Springer-Verlag, Oct. 2002.\n[15] R. Douence, O. Motelet, and M. S\nudholt. A formal\ndefinition of crosscuts. In Proceedings of the 3rd\nInternational Conference on Metalevel Architectures\nand Separation of Crosscutting Concerns, volume 2192\nof LNCS, pages 170186. Springer Verlag, Sept. 2001.\n[16] E. Hilsdale and J. Hugunin. Advice weaving in\naspectj. In Proceedings of the 3rd international\nconference on Aspect-oriented software development,\npages 2635. ACM Press, 2004.\n[17] J. K. Hollingsworth, B. P. Miller, M. J. R. Goncalves,\nO. Naim, Z. Xu, and L. Zheng. MDL: A language and\ncompiler for dynamic program instrumentation. In\nIEEE Conference on Parallel Architectures and\nCompilation Techniques (PACT), pages 201213, Nov.\n1997.\n[18] Intel Corportation. IA-32 Intel Architecture Software\nDeveloper's Manual. Intel Corportation, 2001.\n[19] V. Issarny, M. Ban^atre, B. Charpiot, and J.-M.\nMenaud. Quality of service and electronic newspaper:\nThe Etel solution. Lecture Notes in Computer Science,\n1752:472496, 2000.\n[20] J. Jaffar, S. Michaylov, P. J. Stuckey, and R. H. C.\nYap. The clp( r ) language and system. ACM Trans.\nProgram. Lang. Syst., 14(3):339395, 1992.\n[21] JasCo home page. http://ssel.vub.ac.be/jasco/.\n[22] R. Jones and P. Kelly. Backwards-compatible bounds\nchecking for arrays and pointers in c programs. In\nM. Kamkar, editor, Proceedings of the Third\nInternational Workshop on Automatic Debugging,\nvolume 2, pages 1326, May 1997.\n[23] A. D. Keromytis. \"Patch on Demand\" Saves Even\nMore Time? IEEE Computer, 37(8):9496, 2004.\n[24] G. Kiczales, J. Lamping, A. Menhdhekar, C. Maeda,\nC. Lopes, J.-M. Loingtier, and J. Irwin.\nAspect-oriented programming. In M. Aksit and\nS. Matsuoka, editors, Proceedings European\nConference on Object-Oriented Programming, volume\n1241, pages 220242. Jyvaskyla, Finland, June 1997.\n[25] K. J. Lieberherr, J. Palm, and R. Sundaram.\nExpressiveness and complexity of crosscut languages.\nTechnical Report NU-CCIS-04-10, Northeastern\nUniversity, Sept. 2004.\n[26] H. Masuhara and K. Kawauchi. Dataflow pointcut in\naspect-oriented programming. In First Asian\nSymposium on Programming Languages and Systems\n(APLAS'03), 2003.\n[27] R. J. Moore. Dynamic probes and generalised kernel\nhooks interface for Linux. 
In USENIX, editor,\nProceedings of the 4th Annual Linux Showcase and\nConference, Atlanta, October 1014, 2000, Atlanta,\nGeorgia, USA, Berkeley, CA, USA, 2000. USENIX.\n[28] A. Popovici, G. Alonso, and T. Gross. Just-in-time\naspects: efficient dynamic weaving for Java. In\nProceedings of the 2nd international conference on\nAspect-oriented software development, pages 100109,\nBoston, Massachusetts, Mar. 2003. ACM Press.\n[29] M. Rabinovich and H. Wang. DHTTP: An efficient\nand cache-friendly transfer protocol for web traffic. In\nINFOCOM, pages 15971606, 2001.\n[30] A. Rousskov and D. Wessels. High-performance\nbenchmarking with Web Polygraph. Software Practice\nand Experience, 34(2):187211, Feb. 2004.\n[31] O. Ruwase and M. S. Lam. A practical dynamic buffer\noverflow detector. In Proceedings of the 11th Annual\nNetwork and Distributed System Security Symposium.\nInternet Society, Feb. 2004.\n[32] M. Segura-Devillechaise, J.-M. Menaud, G. Muller,\nand J. Lawall. Web cache prefetching as an aspect:\nTowards a dynamic-weaving based solution. In\nProceedings of the 2nd international conference on\nAspect-oriented software development, pages 110119,\nBoston, MA, USA, Mar. 2003. ACM Press.\n[33] O. Spinczyk, A. Gal, and W. Schroeder-Preikschat.\nAspectC++: an aspect-oriented extension to the C++\nprogramming language. In Proceedings of the Fortieth\nInternational Conference on Tools Pacific, pages\n5360. Australian Computer Society, Inc., 2002.\n[34] A. Srivastava and A. Edwards. Vulcan: Binary\ntransformation in a distributed environment. Microsoft\nResearch Tech. Rpt. MSR-TR-2001-50, 2001.\n[35] U. S. L. System Unix. System V Application Binary\nInterface Intel 386 Architecture Processor Supplement.\nPrentice Hall Trade, 1994.\n[36] D. Wessels. Squid: The Definitive Guide. O'Reilly and\nAssociates, Jan. 2004.\n[37] J. Wilander and M. Kamkar. A comparison of publicly\navailable tools for dynamic buffer overflow prevention.\nIn Proceedings of the 10th Network and Distributed\nSystem Security Symposium, pages 149162, San\nDiego, California, February 2003.\n38\n", "keywords": "aspect language;buffer overflows;prefetching;sequence pointcut;system applications;binary code;dynamic weaving;Arachne;web cache;operating system;network protocol;C applications"} {"name": "33", "title": "An Index System for Quality Synthesis Evaluation of BtoC Business Website", "abstract": "It is important for successful electronic business to have a hi-quality business website. So we need an accurate and effective index system to evaluate and analyses the quality of the business website. In this paper, the evaluation index system following the `grey box' principle is proposed which considers both efficiency of business website and performance of electronic business system. Using R-Hierarchical clustering method to extract the typical indexes from sub-indexes is theoretically proved to have a rationality and effectiveness. Finally, the evaluation method is briefly discussed.", "fulltext": "INTRODUCTION\nBusiness website is an online media between buyer and seller.\nA hi-quality website is crucial to a company for a successful\ne-business. What is a hi-quality business website? In terms of\nmaintaining the website, what do we focus on so that the quality\nmeets the users' needs? Apparently, using click-through rate to\nassess the popularity cannot objectively and accurately evaluate\nthe quality of the business websites. 
Instead, we need to rely on\nscientific evaluation index system and methods.\nAt present, there are many methods available for business\nwebsite comparison or ranking, such as Usage Ranking,\nPurchase Comparison, Expert Opinion and Synthesis\nEvaluation etc. You can find both official authority and\nnon-governmental organization that issue their power ranking.\nThe former one is to monitor and regulate the market, such as\nCNNIC, which organized the competition for the Top Ten\nWebsites in domestic. The latter one, such as Consumerreports\n(\nwww.consumerreports\n. org\n), BizRate(www.bizrate.com),\nForrester Research etc., is mainly to guide the web users'\nactivity. These kinds of comparison or ranking have special\nvalue in getting reputation and increasing recognition of the\nbusiness websites among the users, however, e-business\nenterprise can not improve the quality of their websites directly\nbased on the results of these kinds of assessments.\nThe main purpose of this paper is to develop an index system\nfor quantitative evaluation of the BtoC websites, which dose not\nemphasize the income of the website but focus on evaluating of\nits synthesis quality. We hope that the applying of this index\nsystem will provide the technique developers and maintainers\nsome references for designing, appraising and diagnosing their\ne-business system to improve its quality level, and to support\nmanagers to make decisions for operation of the websites.\n\nOVERVIEW OF PREVIOUS STUDIES\nComparing to the fast growing of e-business websites in the\nworld, currently we can rarely find the particular research on\nthe evaluation index system of business website. QEM (The\nwebsite quality evaluation method) proposed by Olsina and\nGodoy etc. in 1999 can be considered as one of the\nrepresentative approaches. It based on the main factors to\nevaluate the quality of the websites, including functionality\n(global search, navigability, and content relativity), usability\n(website map, addresses directory), efficiency and reliability. In\n2000, American researcher, Panla Solaman, presented\ne-SERVQUAL model based on the conventional service quality\nevaluation model SERVQUAL. It contains some factors like\nefficiency, deal completeness, reliability, privacy protection,\nresponsiveness, recompense and contact etc. In the same year,\nanother American researcher, Hauler, introduced an e-QUAL\nmodel which includes the factors of content, accessibility,\nnavigability, design and presentation, responsiveness,\nbackground, personalization and customization, etc. In 2004,\nF.J. Miranda Gonzalez and T.M.Banegil Palacios developed an\nuniversal evaluation index system WIS (Web Assessment Index)\nthat can be employed to assess websites by different\norganizations. It consists of four indexes of accessibility,\nnavigability, speed and content.\n[1]\nHowever, the universal index\nsystem cannot measure a website exactly and absolutely due to\nthe industry specialty, organizational characteristics and\ndifferent usages. One of the representative researches is Mr.\nZhongHai Li's paper about ergonomics standard of online store.\nIt assesses the business websites by testing if the design of the\nwebsite coincides with the shopping process of online\nconsumers. 
This standard has five factors, such as search and\nbrowse, merchandise information, shopping cart, register and\npay, service and support.\n[4]\nAnother index system for small and\nmedium business websites covers the factors of general features,\ndesign, promotion, information and the others.\n[5]\n\nHere we list our major findings from the previous researches:\n\n2.1 Unreasonable Selection of the Index\nSome research consider not only the original design but also the\nfactors such as promotion and income of business website.\n75\nSome evaluation systems have correlative or contradictive\nindexes. For example, it considers the download speed, at the\nsame time, it requires the web designers not to excessively use\nflash and sounds to slow down the speed.\n2.2 Unilateral Evaluation\nMost of the research takes the users' view to evaluate the\nfunction and design of website. It treats the business system as a\n`black box' and ignores the impact of system performance on\nthe websites quality. But considering the factors of system\nperformance alone is also not a complete evaluation for\nimproving service quality of website.\n2.3 Lack of a Complete Set of Quality\nSynthesis Evaluation System\nA complete set of tool to evaluate the websites must include the\nfollowing important elements: categories, factors, weights,\nrankings standard and assessment model. So far, we have not\nseen any literature discussing complete set of evaluation index\nsystem aiming at the quality of BtoC websites.\n\nPRINCIPLE FOR THE QUALITY SYNTHESIS EVALUATION\nFirst, the three fundamental principles we need to follow are to\nbe comprehensive, to be scientific and to be feasible. We should\nevaluate all the facets of the website from different dimensions\nand avoid missing value of important factors. Moreover, the\ndefinition of the evaluation index should be accurateobjective\nand logical so it can eliminate the impact on the evaluation\nresult brought by the correlative indexes. Concurrently, we need\nreduce the quantity of indexes or adopt the simple ones which\ndata is easier to be collected, and prevent from complicated\ncalculation due to the excessive indexes.\nThe main purpose of improving business websites is to serve\nthe users better. They are concerned only about the websites'\nexternal attributes, such as content, function, presentation and\nbrowse speed, etc. So, evaluating only by taking their views\ncannot directly guide to develop, maintain and administrate the\nwebsite. Just like treating the patient's symptom but not the\ndisease itself, the technique developer or maintainer cannot\nradically improve the quality of their websites by correcting\nsystem structure and web design according to the evaluation\nresult. Only after we adopt the `grey box' index system that\nconsiders both efficiency of business website and performance\nof e-business system, we can establish a quality synthesis\nevaluation index system to benefit the management of BtoC\nwebsites.\n\nQUALITY EVALUATION INDEXES FOR BUSINESS WEBSITE\nSelection of index items lays down the foundation for\nconstructing evaluation index system. 
After we thoroughly\nanalyze the evaluation objectives based on the characteristics of\nbusiness website, we propose an initial index system includes 5\ncategories and totally 28 index items shown in the following\nTable 1.\nTable 1 Quality evaluation indexes for business\nwebsites\nCategories Indexes\nFunction\nEffectiveness\n1\n\nIntegrative Function\n2\n\nInteractive Function\n3\n\nConvenience\n4\n\nService Personalization\n5\n\nWebsite Credibility\n6\n\nBusiness Authorization\nBusiness\nInformation\n7\n\nAccuracy\n8\n\nAuthoritativeness\n9\n\nVariety in Type\n10\n\nInclusiveness\n11\n\nUniqueness\n12\n\nOrderliness\n13\n\nTimeliness\n14\n\nVariety in Search Method\n15\n\nSearch Effectiveness\n16\n\nVersion Internationalization\nWebsite\nDesign\n17\n\nUser Interface Friendliness\n18\n\nDevelopment Standardization\n19\n\nWebsite Uniqueness\n20\n\nColumns Originality\n21\n\nWebsite Structure Clarity\n22\n\nPage Style Consistency\n23\n\nHarmonization\nSystem\nUsability\n24\n\nSystem Stableness\n25\n\nCompatibility\n26\n\nSystem Security\n27\n\nSelf-adaptability\nSystem\nEfficiency\n28\n\nSystem Speediness\n\nWebsite self-adaptability refers to capability of e-business\nsystem intelligently providing personalized service and\ndynamic optimizing system performance. System Efficiency\nrefers to the ability that the system response quickly to the\nrequests of numbers of web users. It can be measured through\nvalues of some quantitative indexes, such as response time,\nthroughput or utilization rate, etc.\n\nOPTIMIZING THE EVALUATION INDEXES\nIt is necessary for our initial evaluation system to optimize if it\ncan be applied in practice. First, the indexes are more or less\ncorrelative which will affects the objectiveness of the\nevaluation. Second, there are too more indexes that will result\nin lower efficiency. Therefore, we try to extract and simplify\nthe indexes by using R-Hierarchical clustering method.\nGenerally, R indicates the coefficient of correlation between\ntwo items. R-Hierarchical clustering method is usually applied\nto cluster the indexes. The steps are described as following.\n5.1 Calculate Coefficient of Correlation and\nClustering\nIt firstly treats every index as one cluster. So, we have 28\nclusters. Then, coefficient of correlation is calculated between\nevery two clusters by minimum-distance method. Next, the two\nclusters with the maximal coefficient of correlation are\nclustered into a new one. The same process is repeated until all\nthe indexes are clustered into one.\n76\n5.2 Analyze the Clustering Process and\nDetermine Clusters\nWe analyze the variation of minimum coefficient of correlation\nduring the clustering process to find the leap points. According\nto the number of leap points and the knowledge of special field,\nwe can eventually determine how many clusters we need. The\nwhole process is illustrated in the following Figure 1.\n\n\nFigure 1 The process of R-Hierarchical clustering\nFollowing the principle of simplification and feasibility and\nconsidering the characteristics of BtoC website, we cluster the\n28 index items into 10. 
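To make the two clustering steps above concrete, the following is a small Python sketch; it is not the authors' implementation. It assumes that the matrix of correlation coefficients between the candidate indexes is already available as corr, it reads the "minimum-distance method" as single-linkage agglomeration expressed on correlations (two clusters are as close as their most strongly correlated pair of members), and it anticipates the correlation-index criterion of Section 5.3 for picking each cluster's representative index. All function names and the toy data are hypothetical.

# Illustrative sketch (not the authors' code) of R-hierarchical clustering of
# evaluation indexes. `corr` is a symmetric matrix (list of lists) holding the
# correlation coefficients between the n candidate indexes.

def cluster_similarity(corr, a, b):
    # Single-linkage read on correlations: the similarity of two clusters is
    # the largest pairwise correlation between their members.
    return max(abs(corr[i][j]) for i in a for j in b)

def r_hierarchical_clustering(corr, n_clusters):
    clusters = [[i] for i in range(len(corr))]   # every index starts as its own cluster
    while len(clusters) > n_clusters:
        # merge the two clusters with the maximal coefficient of correlation
        a, b = max(((x, y) for x in range(len(clusters))
                           for y in range(x + 1, len(clusters))),
                   key=lambda p: cluster_similarity(corr, clusters[p[0]], clusters[p[1]]))
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

def representative(corr, cluster):
    # Correlation index of X_j: mean of the squared correlations between X_j
    # and every other index in its cluster; the member with the largest value
    # is kept as the cluster's representative (cf. Section 5.3).
    def corr_index(j):
        others = [k for k in cluster if k != j]
        return sum(corr[j][k] ** 2 for k in others) / max(len(others), 1)
    return max(cluster, key=corr_index)

if __name__ == "__main__":
    # Toy 4-index example with hypothetical correlations, not the paper's data.
    corr_matrix = [
        [1.0, 0.9, 0.2, 0.1],
        [0.9, 1.0, 0.3, 0.2],
        [0.2, 0.3, 1.0, 0.8],
        [0.1, 0.2, 0.8, 1.0],
    ]
    clusters = r_hierarchical_clustering(corr_matrix, 2)
    print(clusters)                                          # e.g. [[0, 1], [2, 3]]
    print([representative(corr_matrix, c) for c in clusters])

In the paper's setting the same loop would be run with the 28x28 correlation matrix and a target of 10 clusters, and the leap points in the minimum correlation observed during merging would guide the choice of that target.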
The precision rate is over 0.75.\n5.3 Calculate Correlation Index and Extract the\nRepresentative Indexes\nFirst, we calculate the correlation index that is the average of R\nbetween one index and every other index in the same cluster.\n1\n2\n2\n=\n\nj\nj\nm\nr\nR\n\nmi in this formula is the number of the indexes in the cluster\nthat index Xj belongs to.\nThen, we select the index with the maximal correlation index in\nthe total 10 clusters individually and identify 10 of them as the\nmost representative indexes.\nFinally, the weights of the indexes are derived by the expert\ngrade method. The final indexes and their weights are shown in\nthe following table 2.\n\nTable 2 The final indexes and their weights\nCategory Weight\nIndex\nWeight\n1.1 Service Personalization\n0.10\nFunction\nEffectiveness\n\n0.22 1.2 Website Credibility\n0.12\n2.1 Information Inclusiveness\n0.10\nBusiness\nInformation\n\n0.18 2.2 Version Internationalization 0.08\n3.1 Columns Originality\n0.09\nWebsite\nDesign\n\n0.28 3.2 Website Structure Clarity\n0.10\n3.3 Harmonization\n0.09\n4.1 System Stableness\n0.10\nSystem\nUsability\n\n0.22 4.2 System Security\n0.12\nSystem\nEfficiency\n0.10 5.1 System Speediness\n0.10\nCONCLUSION\nIn this paper, we have proposed an index system for quality\nsynthesis evaluation and diagnosis of the BtoC websites\nfollowing the `Grey Box' evaluation principle, and\nscientifically determined and simplified the index items.\nUsually, factor analysis or principal component analysis is used\nto solve the problem of common-factor and multiple indexes.\nBut these methods are only suitable for the quantitative indexes,\nand the evaluation process is not truly simplified. Because the\nnew index is the linear function of some original ones, it still\nneeds to calculate the value of new indexes by collecting all the\nvalues of the original ones.\nIn our index system, most of index is descriptive one. So we\nhave finalized the indexes by using the R-Hierarchical\nclustering method. It really has reduced the number of the\nevaluation indexes without losing the major information from\nthe original indexes. Furthermore, it has effectively avoided the\nimpact of common-factors on the evaluation result.\nOnly the index of system efficiency can be measured through\nquantitative sub-indexes such as response time, etc. Most of\ndepictive indexes are subjective and fuzzy. In view of this, we\nshould use fuzzy comprehensive analysis method to evaluate to\nget more efficiency result.\nIn our future work we are intended to propose an evaluation\nmodel and conduct evaluation to some famous domestic BtoC\nwebsites to prove if this index system is scientific and feasible.\nMoreover, we will improve this set of index system including\nevaluation model to make the whole set of index system more\nfeasibility.\n\nREFERENCES\n[1] F.J. Miranda Gonzalez, T.M. Banegil Palacios,\nQuantitative evaluation of commercial web sites: an\nempirical study of Spanish firms, International Journal of\nInformation Management, 24(2004)313-328\n[2] Chang Liu, Kirk P. Amett, Exploring the factors associated\nwith Web site success in the context of electronic\ncommerce, Information & Management , 38 (2000)\n23-33\n[3] Evans, J. R., & King, V. E.. Business-to-business\nmarketing and the World Wide Web: Planning, managing\nand assessing web sites. 
Industrial Marketing Management,\n28(1999)343358\n[4] Zhonghai Li, Jianqiao Liao, Hui Xiao; the analyze of work\nefficiency on webshop design in China; human work\nefficiency; 4(2002) 43-45\n[5] Research a index system for evaluation of on enterprise\nwebsite,\nhttp://www.365un.com/\nxmb/viewthread.php?tid=1\n998\n\n77\n", "keywords": "Business website;B2C websites;System performance;representitive indexes;fuzzy analysis;R-Hierarchical clustering;Evaluation system;quality synthesis evaluation;quality evaluation index;R-Hierarchical clustering method;index optimization;e-commerce;clustering;correlation index"} {"name": "34", "title": "An Integrated Environment to Visually Construct 3D Animations", "abstract": "In this paper, we present an expressive 3D animation environment that enables users to rapidly and visually prototype animated worlds with a fully 3D user-interface. A 3D device allows the specification of complex 3D motion, while virtual tools are visible mediators that live in the same 3D space as application objects and supply the interaction metaphors to control them. In our environment, there is no intrinsic difference between user interface and application objects. Multi-way constraints provide the necessary tight coupling among components that makes it possible to seamlessly compose animated and interactive behaviors. By recording the effects of manipulations, all the expressive power of the 3D user interface is exploited to define animations. Effective editing of recorded manipulations is made possible by compacting all continuous parameter evolutions with an incremental data-reduction algorithm, designed to preserve both geometry and timing. The automatic generation of editable representations of interactive performances overcomes one of the major limitations of current performance animation systems. Novel interactive solutions to animation problems are made possible by the tight integration of all system components. In particular, animations can be synchronized by using constrained manipulation during playback. The accompanying video-tape illustrates our approach with interactive sequences showing the visual construction of 3D animated worlds. All the demonstrations in the video were recorded live and were not edited.", "fulltext": "INTRODUCTION\nModern 3D graphics systems allow a rapidly growing user\ncommunity to create and animate increasingly sophisticated\nworlds. Despite their inherent three-dimensionality, these systems\nare still largely controlled by 2D WIMP user-interfaces. The lack\nof correlation between manipulation and effect and the high\ncognitive distance from users to edited models are the major\ndrawbacks of this solution [13]. The inadequacy of user-interfaces\nbased on 2D input devices and mindsets becomes particularly\nevident in the realm of interactive 3D animation. In this case, the\nlow-bandwidth communication between user-interface and\napplication and the restrictions in interactive 3D motion\nspecification capabilities make it extremely difficult to define\nanimations with straight-ahead actions. This inability to\ninteractively specify the animation timing is a major obstacle in all\ncases where the spontaneity of the animated object's behavior is\nimportant [21; 35; 4].\nIn this paper, we present an expressive 3D animation\nenvironment that enables users to rapidly and visually prototype\nanimated worlds with a fully 3D user-interface. 
A 3D device\nallows the specification of complex 3D motion, while virtual tools\nsupply the interaction metaphors to control application objects. In\nour environment, there is no intrinsic difference between user interface\nand application objects. Multi-way constraints provide\nthe necessary tight coupling among components that makes it\npossible to compose animated and interactive behaviors. By\nrecording the effects of manipulations, all the expressive power of\nthe 3D user interface is exploited to define animations. Effective\nediting of recorded manipulations is made possible by compacting\nall continuous parameter evolutions with our data-reduction\nalgorithm, designed to preserve both geometry and timing. Novel\ninteractive solutions to animation problems are made possible by\nthe tight integration of all system components. In particular,\nanimations can be synchronized using constrained manipulation\nduring playback.\nIn the following sections, we present an overview of the\nsystem, we make comparisons with related work, and we conclude\nwith a view of future directions. The accompanying video-tape\nillustrates our approach with interactive sequences showing the\nvisual construction of 3D animated worlds. All demonstrations in\nthe video were recorded live and were not edited.\nSYSTEM OVERVIEW\nOur animation environment is built on top of VB2 [17; 18], a\ngraphics architecture based on objects and constraints. During\ninteraction, the user is the source of a flow of information\npropagating from input device sensors to manipulated models.\nPermission to make digital/hard copy of part or all of this work\nfor personal or classroom use is granted without fee provided\nthat copies are not made or distributed for profit or commercial\nadvantage, the copyright notice, the title of the publication and\nits date appear, and notice is given that copying is by permission\nof ACM, Inc. To copy otherwise, to republish, to post on\nservers, or to redistribute to lists, requires prior specific\npermission and/or a fee.\n1995 ACM-0-89791-701-4/95/008...$3.50\n395\nVB2 applications are represented by a network of interrelated\nobjects, and the maintenance of relationships is delegated to a\nconstraint-based change propagation mechanism. Different\nprimitive elements represent the various aspects of the system's\nstate and behavior: active variables store the system's state,\ndomain-independent hierarchical constraints [9] maintain multi way\nrelations between active variables, daemons provide support\nfor discrete simulation tasks, and indirect expressions allow\nconstraints and daemons to dynamically locate their variables.\nConstraints are maintained using an efficient local propagation\nalgorithm based on Skyblue [27; 17; 18]. The solver is domain independent\nand can maintain a hierarchy of multi-way, multi output\ndataflow constraints. The fact that constraint solving\nconsists in performing method selection on the basis of constraint\npriorities and graph structure, without considering the variables'\nvalues, allows an effective application of a lazy evaluation\nstrategy [17; 18]. The main drawback of such a local propagation\nalgorithm is the limitation to acyclic constraint graphs. However,\nas noted by Sannella et al. [28], cyclic constraint networks are\nseldom encountered in the construction of user interfaces, and\nlimiting the constraint solver to graphs without cycles gives\nenough efficiency and flexibility to create highly responsive\ncomplex interactive systems. 
In VB2 , the objects' internal\nconstraint networks are designed so as to reduce the possibility of\ncreating cyclic constraint graphs. Runtime introduction of a\nconstraint that would create a cyclic graph causes an exception\nthat can be handled to remove the offending constraint\n1.\nThe state manager behavior and the constraint solving\ntechniques are detailed in [17; 18].\n2.2 Interaction\nThe system's desktop configuration uses keyboard commands to\ntrigger mode changes and animation playback, a Spaceball for\ncontinuous specification of spatial transformations, and a mouse\nfor picking. Both hands are thus used simultaneously to input\ninformation. LCD shutter glasses provide binocular perception of\nthe synthetic world. Since our main research goal is to explore the\npotentialities of 3D interaction, we do not provide a two dimensional\ngraphical user interface. A 3D cursor, controlled by\nthe Spaceball , is used to select and manipulate objects of the\nsynthetic world.\nDirect manipulation and virtual tools are the two techniques\nused to input information. Both techniques involve using mediator\nobjects that transform the cursor's movements into modifications\nof manipulated objects. Virtual tools are visible first class objects\nthat live in the same 3D space as application objects and offer the\ninteraction metaphor to control them. Their visual appearance is\ndetermined by a modeling hierarchy, while their behavior is\ncontrolled by an internal constraint network [18].\nAs in the real world, users configure their workspaces by\nselecting tools, positioning and orienting them in space, and\nbinding them to application objects. At the moment of binding, the\ntool decides whether to accept the connection by checking if the\napplication object contains all the needed information and by\nverifying that the constraint graph obtained by connecting the tool\nto the model can be handled by the underlying solver (i.e. it is\nacyclic). The binding mechanism is defined in a declarative way\nby using indirect constraints [18].\n1\nVB2's current constraint solver [17; 28] is unable to find acyclic\nsolutions of potentially cyclic constraint graphs. An algorithm that\nremoves this limitation is presented in [36].\nInformation control\nInformation display\nMODEL\nTOOL\nv1\nv2\nc1\nc2\nbound\nbound.v1\nbound.v2\nv1\nv2\nout_variable\nin_variable\nconstraint\nin_out_variable\ndirect reference\nindirect reference\nInstance\n(a)\n(b)\nFigure 1a.\nDesign notation\nFigure 1b.\nModel and virtual tool\nWhen bound, the tool changes its visual appearance to a shape that\nprovides information about its behavior and offers semantic\nfeedback. During manipulation, the tool's and the application\nobject's constraint networks remain continuously connected, so as\nto ensure information propagation. Multiple tools can be active\nsimultaneously in the same 3D environment in order to control all\nits aspects. The environment's consistency is continuously ensured\nby the underlying constraint solver. The bi-directionality of the\nrelationships between user-interface and application objects makes\nit possible to use virtual tools to interact with a dynamic\nenvironment, opening the door to the integration of animation and\ninteraction techniques.\n2.3 Animation\nBy recording the effects of manipulations, animations can be\nsketched. In order to be able to edit the captured performance, a\ncompact representation of continuous parameter evolution must be\nobtained. 
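(As an aside, readers who have not met multi-way constraints before may find the toy Python sketch below a useful picture of the propagation mechanism of Section 2.1. It is deliberately minimal and is not VB2's SkyBlue-based solver, which additionally handles constraint hierarchies, multi-output methods, method planning and cycle detection; the scaling relation used here is invented purely for illustration.)

# Minimal sketch of a multi-way dataflow constraint maintained by local
# propagation. Illustrative only; VB2 relies on the SkyBlue solver [27].

class Variable:
    def __init__(self, name, value):
        self.name, self.value = name, value

class MultiWayConstraint:
    """A relation with one method per variable it is able to recompute."""
    def __init__(self, methods):
        self.methods = methods          # {output variable: function of the others}

    def satisfy(self, edited):
        # Pick a method whose output is NOT the variable the user just edited,
        # so information flows away from the input device toward the model.
        for out_var, method in self.methods.items():
            if out_var is not edited:
                out_var.value = method()
                return out_var

# Example: keep  screen_x = model_x * scale  consistent in both directions.
model_x  = Variable("model_x", 2.0)
scale    = Variable("scale", 10.0)
screen_x = Variable("screen_x", 20.0)

c = MultiWayConstraint({
    screen_x: lambda: model_x.value * scale.value,
    model_x:  lambda: screen_x.value / scale.value,
})

model_x.value = 3.0          # the user drags the model with the 3D device ...
c.satisfy(edited=model_x)    # ... and propagation updates the screen position
print(screen_x.value)        # -> 30.0

screen_x.value = 50.0        # editing the other end drives the model instead
c.satisfy(edited=screen_x)
print(model_x.value)         # -> 5.0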
This representation must not only precisely approximate\nthe shape of the initial parameter curves but also their timing. The\ndata reduction algorithm must therefore treat the geometry and\ntime components simultaneously in order to avoid the introduction\nof errors that would be difficult to control. We have developed an\nalgorithm that incrementally builds, from the input sequence, a\nparametric B-spline preserving value and time of each input\nsample within a given tolerance. It is an incremental version of the\nLyche and Mrken algorithm [22] that works in parallel with the\ninteractive specification by considering only a small portion of the\ninput curve at any time. Latency time and memory requirements\nfor handling each portion of the curve are constant. Data reduction\nmay therefore be performed concurrently with interactive\nparameter input, and the responsiveness of the application can be\nensured when handling animations defined by any number of\nsamples. The algorithm is presented in detail in [2; 4]. This\nperformance-based approach complements key-framing by\nproviding the ability to create animations with straight-ahead\nactions. It provides complete control over the animation shape and\ntiming, while key-framing offers control only at a limited number\nof points.\nThe mediation of virtual tools makes it possible to sketch the\nevolution of non-geometric attributes, while constrained or free\nmotion can be specified with 3D devices. Since these devices offer\ncontinuous control of spatial transformations, subtle\nsynchronizations between position and orientation components\ncan be directly specified. In our environment, straight-ahead\nanimations are defined by expressing the desire to record\nparameter evolution during interaction. This is done simply by\npressing a different mouse button when starting an interaction\ntask. A controller object is connected to each animatable model\nand is responsible for monitoring model state changes. While\nrecording, all changes are handled by the controller to feed the\nanimation tracks. Continuous tracks apply the data reduction\n396\nalgorithm to the incoming information, while discrete tracks\nsimply store a change value event. During playback, information\npropagates from the animation tracks through the controllers and\ndown to the models. All connections are realized by bi-directional\nconstraints. Since playback constraints are weaker than interaction\nconstraints, the user can take control over animated models during\nplayback.\n\nAnimations involving synchronizations with the\nenvironment's evolution can thus be specified by interacting\nduring playback [5].\nDiscrete\nTrack\nInteraction\nMediator\nData reduction\nData sampling\nEditing\nAnimation recording\nAnimation playback\nContinuous\nTrack\nApplication\nObject\nAnimation\nController\nFigure 2.\nInteractive animation and playback\nRELATED WORK\nConstraint-based architectures have long been used for 2D\ngraphics systems (see [28] for a survey). In the 3D graphics world,\none-way constraints are commonly employed to maintain\ndependencies between components [20; 34; 37; 38]. This type of\nconstraint cannot easily model mutual relations between objects,\nthus hindering the tight coupling between user-interface and\napplication objects [28]. Our system uses instead multi-way local\npropagation constraints, which offer support for two-way\ncommunication between objects while remaining efficient enough\nto ensure the responsiveness of the system [17; 18; 27]. 
TBAG\n[14] also uses multi-way constraints maintained by Skyblue [27],\nbut its functional approach concentrates more on modeling time varying\nbehaviors than on creating interactive systems. Much\neffort has been spent in developing powerful numerical solvers for\ncomputer graphics (e.g. [7; 15; 16]). This work is complementary\nto ours, which focuses more on providing ways to interact with\nconstrained environments. Such advanced solvers could replace\nlocal propagation in our system for the maintenance of numerical\nrelationships.\n3.2 Three-dimensional User Interfaces\nMuch recent research has focused on obtaining rich interaction\nwith 3D environments by means of advanced devices and 3D\ninteraction metaphors [8; 10; 11; 13; 16; 19; 26; 30; 32]. 3D\nwidgets or manipulators, similar to our virtual tools, are presented\nin [13; 32]. These works focused on providing support for 3D\nwidget construction, while we concentrate more on the integration\nof multiple tools in a single dynamic environment. We are not\naware of any attempts to apply the results of 3D interaction\nresearch to enhance animation capabilities.\n3.3 Performance Animation\nA number of authors have proposed using live performances to\ndrive computer animations (e.g. [1; 23; 33; 35]). We strive to\nbring the expressiveness of these approaches to general purpose\nanimation systems running on graphics workstations. Instead of\nrelying on advanced motion capture devices, we exploit our fully\n3D user-interface to control the animated environment at a higher\nlevel of abstraction. The guiding approach proposed in [23] also\nseeks to provide better control of synthetic objects by raising the\nabstraction level of user interaction. That work concentrates on\nmodeling complex behaviors in a discrete simulation framework,\nwhile we focus on providing intuitive user interfaces. A major\nlimitation of current performance animation systems is the\ninability to build editable representations out of captured\nperformances [35].\n3.4 Data Reduction\nData reduction or curve fitting techniques have been successfully\napplied for the interactive specification of 2D or 3D curves or\nsurfaces (e.g. [12; 24; 25; 29]). These techniques cannot be easily\nadapted to sketching animations of multi-dimensional parameters\nbecause they all exhibit one or more of the following problems: (i)\nrestriction to 2D or 3D geometric constructions, (ii) lack of control\non parameterization errors, and (iii) need to consider the entire\ninput curve before reduction. An early attempt to use data reduction\nfor animation is described in [29]. In that system, path\ngeometry and path timing specifications were decoupled, loosing\nthus the advantages of performance approaches. Banks and Cohen\n[6] proposed for their drafting tool an incremental version of the\nLyche and Mrken algorithm [22] that does not have the\naforementioned drawbacks and could be used in a performance\nanimation context. Their method shares with ours the idea of\nprocessing successive portions of the input curve which are then\nspliced together, but is unable to ensure constant latency times and\nmemory needs [4].\nCONCLUSIONS AND FUTURE WORK\nIn this video-paper, we have presented an integrated environment\nfor the rapid and visual prototyping of 3D animated worlds. Using\nour fully 3D user-interface, non-professional users can swiftly\ncreate complex animations with pose-to-pose and straight-ahead\ntechniques. 
Thanks to automatic data-reduction, animations\ncreated by interactive performances can then be effectively edited.\nIn our future work, we intend to develop new virtual tools and\nvisualizations that will improve our 3D user interface for discrete\nand continuous track manipulation. To allow the system to adhere\nto timing requirements, we are developing time-critical techniques\nfor controlling rendering complexity and constraint evaluation.\n\nACKNOWLEDGMENTS\nThe authors would like to thank Ronan Boulic for providing the\nwalking engine used in the interactive sequences, Sally Kleinfeldt\nas well as Dean Allaman for helpful comments and suggestions,\nAngelo Mangili for technical help, and Michele Mller for doing\nthe voice on the video.\nThis research was conducted by the authors while at the Swiss\nFederal Institute of Technology in Lausanne.\n397\n\nREFERENCES\n[1]\nBaecker RM (1969) Picture-driven Animation. Proc. Spring\nJoint Computer Conference 34: 273-288.\n[2]\nBalaguer JF (1993) Virtual Studio: Un systme d'animation\nen environnement virtuel . PhD Thesis, Swiss Federal\nInstitute of Technology in Lausanne.\n[3]\nBalaguer JF, Gobbetti E (1995) Animating Spaceland. To\nappear in IEEE Computer Special Isssue on Real-world\nVirtual Environments 28(7).\n[4]\nBalaguer JF, Gobbetti E (1995) Sketching 3D Animations.\nTo appear in Proc. EUROGRAPHICS.\n[5]\nBalaguer JF, Gobbetti E (1995) Supporting Interactive\nAnimation using Multi-way Constraints. Submitted for\npublication.\n[6]\nBanks M, Cohen E (1990) Real-time Spline Curves from\nInteractively Sketched Data. Proc. SIGGRAPH Symposium\non Interactive 3D Graphics: 99-107\n[7]\nBarzel R, Barr A (1988) A Modeling System Based on\nDynamic Constraints. Proc. SIGGRAPH: 179-188.\n[8]\nBier EA (1990) Snap-Dragging in Three Dimensions. Proc.\nSIGGRAPH Symposium on Interactive 3D Graphics : 193 204\n.\n[9]\nBorning A, Freeman-Benson B, Wilson M (1992) Constraint\nHierarchies. Lisp and Symbolic Computation 5(3): 221-268.\n[10] Butterworth J, Davidson A, Hench S, Olano TM (1992)\n3DM: A Three Dimensional Modeler Using a Head-Mounted\nDisplay. Proc. SIGGRAPH Symposium on Interactive 3D\nGraphics: 135-138.\n[11] Card SK, Robertson GG, Mackinlay JD (1991) The\nInformation Visualizer: An Information Workspace. Proc.\nSIGCHI : 181-188.\n[12] Chou JJ, Piegl LA (1992) Data Reduction Using Cubic\nRational Splines. IEEE Computer Graphics and Applications\n12(3): 60-68.\n[13] Conner DB, Snibbe SS, Herndon KP, Robbins DC, Zeleznik\nRC, Van Dam A (1992) Three-Dimensional Widgets.\nSIGGRAPH Symposium on Interactive 3D Graphics : 183 188\n.\n[14] Elliott C, Schechter G, Yeung R, Abi-Ezzi S (1994) TBAG:\nA High Level Framework for Interactive, Animated 3D\nGraphics Applications. Proc. SIGGRAPH: 421-434.\n[15] Gleicher M (1993) A Graphics Toolkit Based on Differential\nConstraints. Proc. UIST : 109-120.\n[16] Gleicher M, Witkin A (1992) Through-the-Lens Camera\nControl. Proc. SIGGRAPH: 331-340.\n[17] Gobbetti E (1993) Virtuality Builder II: Vers une\narchitecture pour l'interaction avec des modes sysnthtiques.\nPhD Thesis, Swiss Federal Institute of Technology in\nLausanne.\n[18] Gobbetti E, Balaguer JF (1993) VB2: A Framework for\nInteraction in Synthetic Worlds. Proc. UIST: 167-178.\n[19] Herndon KP, van Dam A, Gleicher M (1994) Report:\nWorkshop on the Challenges of 3D Interaction, CHI\nBulletin, October.\n[20] Kass M (1992) CONDOR: Constraint-based Dataflow. 
Proc.\nSIGGRAPH: 321-330.\n[21] Lasseter J (1987) Principles of Traditional Animation\nApplied to 3D Computer Animation. Proc. SIGGRAPH: 35 44\n.\n[22] Lyche T, Mrken K (1987) Knot Removal for Parametric B spline\nCurves and Surfaces. Computer Aided Geometric\nDesign 4: 217-230.\n[23] McKenna M, Pieper S, Zeltzer D (1990) Control of a Virtual\nActor: The Roach. Proc. SIGGRAPH Symposium on\nInteractive 3D Graphics: 165-174.\n[24] Plass M, Stone M (1983) Curve Fitting with Piecewise\nParametric Cubics. Proc. SIGGRAPH: 229-239.\n[25] Pudet T (1994) Real Time Fitting of Hand Sketched Pressure\nBrushstrokes. Proc. EUROGRAPHICS: 205-220.\n[26] Sachs E, Roberts A, Stoops D (1990) 3-Draw: A Tool for\nDesigning 3D Shapes. IEEE Computer Graphics and\nApplications 11(6): 18-26.\n[27] Sannella M (1994) Skyblue: A Multi-Way Local Propagation\nConstraint Solver for User Interface Construction. Proc.\nUIST : 137-146.\n[28] Sannella M, Maloney J, Freeman-Benson B, Borning A\n(1992) Multi-way versus One-way Constraints in User Interfaces\n. Software Practice and Experience 23(5): 529-566.\n[29] Schneider PJ (1988) Phoenix: An Interactive Curve Design\nSystem Based on the Automatic Fitting of Hand-Sketched\nCurves. Master's Thesis, University of Washington.\n[30] Shaw C, Green M (1994) Two-Handed Polygonal Surface\nDesign. Proc. UIST : 212-215.\n[31] Shelley KL, Greenberg DP (1982) Path Specification and\nPath Coherence. Proc. SIGGRAPH: 157-166.\n[32] Strauss PS, Carey R (1992) An Object-Oriented 3D Graphics\nToolkit. Proc. SIGGRAPH: 341-347.\n[33] Tice S (1993) VActor Animation Creation System.\nSIGGRAPH Tutorial 1.\n[34] Upson C, Fulhauber T, Kamins D, Laidlaw D, Schlegel D,\nVroom J, Gurwitz R, van Dam A (1989) The Application\nVisualization System: A Computational Environment for\nScientific Visualization. IEEE CG&A 9(4): 30-42.\n[35] Walters G (1993) Performance Animation at PDI.\nSIGGRAPH Tutorial 1.\n[36] Vander Zanden B (1995) An Incremental Algorithm for\nSatisfying Hierarchies of Multi-way, Dataflow Constraints .\nTechnical Report, University of Tennessee, Knoxville.\n[37] Zeleznik RC, Conner DB, Wlocka MM, Aliaga DG, Wang\nNT, Hubbard PM, Knepp B, Kaufman H, Hughes JF, van\nDam A (1991) An Object-Oriented Framework for the\nIntegration of Interactive Animation Techniques. Proc.\nSIGGRAPH: 105-112.\n[38] Zeltzer D, Pieper S, Sturman DJ (1989) An Integrated\nGraphical Simulation Platform. Proc. Graphics Interface:\n266-274.\n398", "keywords": "animation synchronization;computer graphics;Object-Oriented Graphics;3d animation environment;data reduction;visualization;multi-way constrained architecture;human interaction;Data Reduction;3D Animation;Local Propagation Constraints;recording 3d manipulation;Virtual Tools;3D Widgets;3d user interface;3D Interaction;dynamic model"} {"name": "35", "title": "An Intensional Approach to the Specification of Test Cases for Database Applications", "abstract": "When testing database applications, in addition to creating in-memory fixtures it is also necessary to create an initial database state that is appropriate for each test case. Current approaches either require exact database states to be specified in advance, or else generate a single initial state (under guidance from the user) that is intended to be suitable for execution of all test cases. The first method allows large test suites to be executed in batch, but requires considerable programmer effort to create the test cases (and to maintain them). 
The second method requires less programmer effort, but increases the likelihood that test cases will fail in non-fault situations, due to unexpected changes to the content of the database. In this paper, we propose a new approach in which the database states required for testing are specified intensionally, as constrained queries, that can be used to prepare the database for testing automatically . This technique overcomes the limitations of the other approaches, and does not appear to impose significant performance overheads.", "fulltext": "INTRODUCTION\nModern information systems are typically organised as\ncollections of independent application programs that communicate\nwith one another by means of a central database.\nThe database records the state of the organisation that the\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nICSE'06,\nMay 2028, 2006, Shanghai, China.\nCopyright 2006 ACM 1-59593-085-X/06/0005 ...\n$\n5.00.\ninformation system supports, while the application programs\nimplement the business processes that manipulate the state.\nTo take a simple but ubiquitous example, a database system\nmight record details of customers, products and sales,\nwhile the application programs associated with it handle operations\nsuch as new product purchases and update of the\nproduct catalogue, as well as supporting decision making\nby generating reports regarding the most profitable product\nlines, names and addresses of loss-making customers, etc.\nIn order to test such application programs, it is necessary\nto create test fixtures that simulate the presence of the rest\nof the information system. Fixtures for traditional test cases\ntypically consist of in-memory objects and data structures\nthat provide the inputs to the program being tested. This\nkind of fixture is also needed when testing database applications\n(especially when performing unit testing); however,\nsince it is unrealistic (and often incorrect) to execute test\ncases against an empty database, we need to create additional\nfixture elements within the database itself.\nCurrent practice in the software industry is to maintain\none or more test databases that can be used for testing individual\nprograms. These databases can be artificially generated\n(e.g., using tools such as DBMonster\n1\nand DataFac-tory\n2\n) or they may be subsets of the live database, taken\nas a snapshot at some recent point in time. Copies of the\nlive data sets have the advantage that they are more likely\nto be representative of the patterns of data encountered in\npractice, while artificial data sets have the advantage that\nthey can be made to embody specific characteristics (such\nas particular data skew patterns or volumes), which may be\nuseful for load and stress testing.\nBoth approaches, however, suffer from several disadvantages\n. The most significant problem occurs when none of\nthe available test databases are suitable starting points for a\nparticular test case. 
For example, suppose a particular test\ncase executes a program which purges inactive customers,\nwith the aim of verifying that the business rule forbidding\ndeletion of customers with negative balances is correctly en-forced\n. If none of the test databases contains any inactive\ncustomers with negative balances, then the test case cannot\nbe executed successfully. For a one-off test run, testing\npersonnel can choose a database that is close to what is required\n, and manually update it so that it is suitable for use\nwith the test case. But if a complete test suite is to be executed\n(possibly including test cases which themselves make\nmodifications to the database state) then in the worst case\n1\nhttp://DBMonster.kernelpanic.pl\n2\nhttp://www.quest.com/datafactory\n102\nthis manual intervention will be required in between every\ntest case execution. This is clearly undesirable if test suites\nare large or time-consuming to execute, or if the test suite\nis to be run in batch (as in the case of overnight regression\ntesting, for example).\nCurrent research in testing for database systems proposes\ntwo approaches to this problem. One of these is to include\nwithin the test case description a full (extensional) specification\nof the database state against which it is to be run (and\nof the database state that should be produced if the test has\nexecuted successfully) [13, 14]. This solution is exemplified\nby DBUnit\n3\n, an extension of the JUnit testing framework\n4\nthat is designed for testing database applications written in\nJava. Each DBUnit test case is accompanied by an XML\nfile describing the data set required for the test. Before each\ntest run, DBUnit clears the database state and inserts the\ndata described by the XML file.\nThis approach has the advantage of simplicity, but it places\na considerable burden on testing personnel, especially when\ncomplex database states are required. It is also inefficient,\nsince the database must be continually destroyed and recre-ated\nbetween tests, even when significant parts of the database\nmight have been reused by the succeeding tests. Moreover,\nmaintenance of a large suite of such tests is extremely challenging\n, since any small change to the database schema may\nrequire corresponding changes to many test cases.\nThe second approach that has been explored in the literature\nis more efficient in that it requires the creation of only\none database state per test suite (rather than one per test\ncase). It is exemplified by the AGENDA database testing\ntoolkit [6, 7], which can automatically generate a database\nstate given information about the schema, some data generation\nfunctions for individual attributes and some user-selected\nheuristics describing the kind of database state required\n. The AGENDA tool also generates test cases from a\nsimple analysis of the program being verified. The user must\nthen add preconditions to each test case that are checked\njust before it is executed and that will prevent a case from\nbeing executed against an inappropriate database state. This\napproach successfully relieves the user of the need to specify\ncomplete database states in full detail, but at a cost. The\nuser must accept that some of the test cases may not be\nexecuted because the database state fails the precondition,\neven when it would require only a small change to bring the\ndatabase into a suitable state for the test. 
Since only one\ndatabase state is created per test suite, this problem of failed\ntests is likely to become more severe as the size of the test\nsuite grows. There is also a potential inefficiency involved\nin generating test descriptions and inputs, and in creating\nthe additional log tables and constraints/triggers needed by\nthe AGENDA tool, for test cases that are not in fact going\nto be executed.\nIdeally, we would prefer to be able to combine the advantages\nof both these approaches, to give a form of database\ntest case that is quick and natural to specify, and which\nmaximises the number of cases within the suite that can be\nexecuted while minimising the number of full test databases\nthat need to be maintained.\nOur thesis is that this can\nbe achieved by allowing testing personnel to describe the\ndatabase states involved in their test cases intensionally, in\n3\nhttp://www.dbunit.org\n4\nhttp://www.junit.org\nthe form of declarative conditions that the input database\nmust satisfy, and by providing a testing harness that can\nautomatically adjust the input database so that the test\nconditions are satisfied [19].\nIn this paper, we present a language for specifying such\nintensional database tests, and describe its semantics and\noperational behaviour (Section 2). We present an algorithm\nfor automatically modifying database states so that test preconditions\nare satisfied (Section 3), thus ensuring that all\ntest cases can be executed without requiring any human\nintervention. We further describe how we have extended the\nJUnit testing framework to allow intensional database tests\nto be specified and executed in practice (Section 4). Finally,\nwe present the results of an evaluation of the performance\nof the techniques (Section 5) and conclude (Section 6).\nSPECIFYING INTENSIONAL TESTS\nA conventional test case is typically modelled as a triple\n< p, i, o >, which denotes a test that executes program p\nwith inputs (e.g., parameters) denoted by i. If no faults are\nencountered during the test execution, the output that will\nbe produced is o. In the case of test cases for database applications\n, we must add two further elements--the specification\nof the database state against which p is to be executed,\nand some statement of the database state that should result\nfrom the execution of p if it is operating correctly according\nto its specification.\nFor example, consider the example program mentioned\nin Section 1 that prunes inactive customer details from the\ndatabase. For this test case, we require a database state that\ncontains at least one inactive customer. This could easily\nbe stated as a predicate logic condition over the database,\nassuming the obvious mapping between stored relations and\npredicates, e.g.:\n(custNo, lastOrderOn, a, b, c)\ncustomer (custNo, a, b, c, lastOrderOn)\nlastOrderOn < today - 90\nThe program in question does not access any parts of the\ndatabase other than the customer table. Therefore, we do\nnot care what values the other tables contain and need not\nmention them in the intensional specification of the test.\nThis approach works equally well for observing the results\nof the test. 
For example, when testing the customer pruning\nbehaviour, we might require that no inactive customer with\na non-negative balance should exist in the database after\nthe test:\n((custNum, lastOrderDate, a, b, c)\ncustomer (custNum, a, bal , c, lastOrderDate)\nlastOrderDate < today - 90 bal > 0)\nEffectively, the test case describes a set of valid (i.e., fault-free\n) state transition for the database, as a classic pre/post-condition\npair.\nThis first-order-logic style of database specification does\nnot work so well when we consider the testing problem in\nmore depth, however. The problem is that we need to do\nmore than test the input database for compliance with the\nrequirements of the test case; we also need to extract information\nfrom it to be used to instantiate other elements\n103\nof the test case. For example, suppose we wish to test a\nprogram that deletes details of individual customers. Such\nprograms typically require some input from the user, identifying\nthe specific customer record that is to be deleted (e.g.,\nby supplying the relevant customer code as a parameter).\nThis could be achieved by requiring the tester to embed the\ncustomer code into the test case elements, as literal values.\nAlternatively, we could search for a suitable customer that\nalready exists in the database, using a standard database\nquery, and use the values from that in specifying the inputs\nfor the test case. This would minimise the amount of work\nrequired to prepare the database for test execution (since we\nwould be using data already present in the database), and it\nwould also mean that test cases can be written very quickly,\nsince the user does not need to specify every last detail of\nthe data to be used.\nUnder this approach, the specification of the input database\nstate now has a dual role: it must state the condition that\ndetermines whether the database state is suitable for execution\nof the test case and it must also return bindings for the\nfree variables that appear in the remaining components of\nthe test case. For the latter purpose, we would prefer to use\na straightforward query language, while for the former we\nrequire the ability to place conditions on the data. With a\nsimple extension of a standard query language such as SQL,\nwe can combine both these purposes in a single statement.\nFor example, the following statement:\nANY :cn GENERATED BY\nSELECT custNo FROM customer\nWHERE lastOrderDate < today() - 90\nAND balance < 0\nretrieves the customer code of some record that meets the\ngiven conditions (an inactive customer with negative balance\n) from the database, and binds it to the variable :cn.\nIt also places a cardinality constraint on the result of the\nquery, that at least one such binding must exist (implied by\nthe use of the keyword ANY).\nThe variable :cn can then be used to specify other elements\nof the test case. The obvious usage in this example is\nin specifying the inputs to the program being tested, but it\ncan also be used in describing the expected outputs of the\nprogram. In this example test case, the correct behaviour\nof the DeleteCustomer program is to reject the deletion\nof :cn, since customers with a negative balance cannot be\npurged from the database. We might therefore give the following\nspecification of the desired output database state:\nAT LEAST 1 :cn2 GENERATED BY\nSELECT custNo FROM customer\nWHERE custNo = :cn\nOf course, not all test cases are best specified in terms of\nvalues retrieved from the database. 
For example, suppose\nthat we wish to write test cases for a program that adds new\ncustomers to the database. The inputs to this program are\nthe details of the new customer, and the precondition for one\nparticular test case states that no customer should exist that\nhas the same customer code as that of the customer being\ncreated. We cannot retrieve the customer details from the\ndatabase in this case, as they have not yet been stored in it.\nAgain, we could force the user to include the required values\nas literals in the test case, but ideally we would like to give\n<CONDITION> ::= <TYPE> <BINDINGLIST>\nGENERATED BY <SELECT>\n<TYPE> ::= ANY | NO | AT LEAST <i> |\nAT MOST <i> | EXACTLY <i> |\nALL | FIRST\n<i>\n::= {0-9}\n<BINDINGLIST>\n::=\n<BINDING> { `,' <BINDINGLIST> }\n<BINDING> ::= {A-Z | a-z}\n<SELECT>\n::= ...\nFigure 1: Simplified BNF Grammar for SQL Extensions\nmore support to the process of test case generation. One\nway to achieve this is to allow user-defined data generator\nfunctions to be incorporated within queries as though they\nwere relations. For example, the following expression states\nour requirements for this test case, while also binding the\nvariables needed for input to the program:\nANY :cn, :name, :addr, :bal GENERATED BY\nSELECT gc.custno, gc.name, gc.addr, 0\nFROM genCustomerDetails() AS gc\nWHERE gc.custno NOT IN (\nSELECT custno\nFROM customer\nWHERE balance > 0)\nHere, the data generator function getCustomerDetails()\nis used as if it were a normal relation, whereas in fact the\nresults it returns are computed on the fly. In fact, several\nof the main commercial database management systems already\nallow user-defined functions to be embedded in queries\nin this way, so this does not require a further extension of\nSQL. Figure 1 shows the minimal extensions that are needed\nto support all the kinds of constrained query shown above\nusing the SQL99 standard [17].\n2.1\nTest Case Semantics\nClearly, the semantics of these intensional database test\ncases is more complex than for traditional extensional tests.\nHowever, we can define their semantics formally in terms\nof a mapping from intensional tests to sets of equivalent\nextensional database test cases. We first present a formal\ndefinition of the structure of our intensional test cases:\nDefinition 1. 
An intensional database test case is a quintuple <p, i, DB_i, o, DB_o>, where:

p is the program to be executed in the test,

i is a tuple of n variables and literals that describes the inputs to be given to program p, where n is the number of parameters expected by p,

DB_i is a set of constrained queries that together specify the initial database state,

o is a tuple of m variables and literals that describes the expected outputs from the program p, and

DB_o is a set of constrained queries that together specify the conditions that must hold in the database state after execution of p if no fault has been encountered.

A constrained query has the form <Q, min, max, vars>, where Q is a standard relational algebra query, min and max describe the constraints on the cardinality of the query result set, and vars is the list of variables bound by the query result.

A database test case is well-formed for use with a particular database schema iff:

for every variable v that occurs free in i, DB_i, o and DB_o, there exists a query in DB_i that provides a binding for v,

for every query <q, n, m, vs> in DB_i ∪ DB_o, q is a well-formed query over the schema that returns k-tuples, where |vs| = k, and

there are no circular variable dependencies amongst the queries in DB_i.

We can now define a semantics for the intensional database test cases as follows. Every intensional test case is equivalent to a set of extensional test cases. An extensional test case defines a specific test run, in terms of actual inputs and outputs, rather than expressions denoting sets of inputs and outputs. The set of all possible extensional test cases is given by:

P × L^n × DB × L × DB

where P is the set of all programs, L is the set of all literals, L^n is the set of all n-tuples formed from L, and DB is the set of all database states (relative to all schemas).⁵ The components of each extensional test are the program to be tested, the input values, the initial database state, the expected output and the expected final database state, respectively.

An intensional test case is effectively a shorthand expression for a set of extensional test cases that are all derived from the same equivalence partition of the test case inputs. An intensional database test <p, i, DB_i, o, DB_o>, where DB_i = {<q_i, n_i, m_i, v_i>} and DB_o = {<q_o, n_o, m_o, v_o>}, is equivalent to the following set of extensional tests:

{ <p, i[v_i/v], db_i, o[v_i/v], db_o> |
  db_i ∈ DB ∧ (n_i ≤ |q_i(db_i)| ≤ m_i) ∧ v ∈ q_i(db_i) ∧
  db_o ∈ DB ∧ (n_o ≤ |(q_o[v_i/v])(db_o)| ≤ m_o) }

We use the notation exp[x_1/x_2] to express the substitution of the values in x_1 by the corresponding values in x_2 wherever they occur in exp. Therefore, this expression denotes the set of extensional tests where the input database satisfies the constraints imposed by the initial constrained query, and where the bindings from execution of that query (here expressed as the tuple of variables v) are substituted into the
⁵ For simplicity of presentation, we assume that all programs require the same number of inputs (n).
In practice, n can\nbe the largest number of inputs required by any program,\nand the unused values can be filled with nulls.\nexpressions defining the inputs, expected output and expected\nfinal database state before they too are evaluated\n6\n.\nThe idea underlying this notion of an intensional test is\nthat when any of its corresponding extensional sets are executed\n, the intensional test is itself deemed to have been\nexecuted. Thus, the use of intensional tests allows much\ngreater freedom at test execution time, since we may choose\nany of the possible extensional tests, depending on which is\nclosest to our starting environment. In the next section, we\nwill consider the practical ramifications of this approach to\ntesting, and describe how the semantics just described can\nbe implemented in practice.\nDATABASE PREPARATION\nThe execution of an intensional database test case consists\nof three distinct phases: 1) preparation of the environment\nfor test execution; 2) execution of the test with the\nprepared inputs; and 3) capture and storage of the results,\nfor later analysis.\nSince all the work of finding bindings\nfor the variables in the test case specification is done in the\npreparation phase, the final two phases are straightforward\nand differ little from standard testing procedures. When\nprogram execution is complete, the constrained query that\ndetermines whether the test has been successful or not is\nevaluated against the database, and the output from the\nprogram is checked against what is expected. In the case\nof test failure, the details of the actual extensional test that\nwas executed are recorded, for diagnosis purposes.\nThe first phase, however, is more complex. If we were\ncontent to execute only those test cases which happen to\nbe suitable for use with the initial database state, then the\npreparation phase would simply be a matter of executing\nthe input constrained queries against the database and, if\nthey are all successful, using the bindings thus produced\nto instantiate the remaining components of the test case.\nHowever, thanks to the declarative nature of our test case\nspecifications, the testing framework can be pro-active in\ncases where the given database is not suitable for use by\nthe test case, and can automatically generate a sequence of\nupdates that will cause the constrained queries to produce\nthe required number of bindings.\nIn fact, this problem is similar (though not identical) to\none that has been studied by the database and artificial intelligence\ncommunities for many years. It is known variously\nas the view update problem [9], the knowledge base update\nproblem [12], and the transaction repair problem [10]. Many\ndatabase systems have the capability to define views on top\nof the basic database. A view is a kind of virtual relation.\nTo the user, it appears to be a normal relation, but it contains\nno stored data. Instead, the contents of the view are\ndefined by a expression over other relations, and attempts\nto retrieve data from the view are converted into queries\nover these relations. To take a simple example for illustration\n, we might create a view called Debtors which appears\nto be a relation of the same name containing all customers\nwith a negative balance. Attempts to retrieve Debtors is\n6\nFor simplicity of presentation, we assume here that there\nis only one query in each of DB\ni\nand DB\no\n. 
In practice,\nit may be necessary to include several queries, each producing\ndifferent bindings and imposing different cardinality\nconstraints. In this case, the constraints must be conjoined,\nand the full set of bindings can be retrieved by performing\na natural join of all the queries, with join condition true.\n105\nconverted into a query against the customer table with an\nadded constraint on the balance.\nIf views are truly to act as normal relations then it should\nbe possible to update them as well query them. But what\ndoes it mean to update a virtual relation? In this case, the\nview update must be converted into a sequence of updates\non the stored relations that will cause the desired change in\nthe contents of the view itself. This is a non-trivial problem\nfor realistic view languages, and becomes even more difficult\nwhen we move into the context of knowledge bases, where\nvirtual relations can be defined using rules over other relations\n, and when we add integrity constraints that must be\nmaintained by all updates [1, 2, 3, 4, 5, 8, 11].\nOnly in very narrow circumstances does a view update\nhave a single translation into real updates [15, 18]. Various\nheuristics for selecting from amongst the possible translations\nhave been proposed (of which the most common is to\nchoose the update that results in the smallest change to the\nexisting data set [2]), but in real applications user input is\nneeded in order to identify the translation that corresponds\nmost closely to the real world state that the database should\nreflect [10].\nIn the case of intensional database tests, we have a query\n(the constrained query that describes our requirements for\nthe test) that does not produce the correct number of answers\nwhen executed against the test database. We need to\nfind a sequence of updates to the base data that will cause\nour query to produce the number of answers we need. However\n, in this case, there is no requirement to find the set of\nupdates that matches the state of reality -- any sensible update\nthat satisfies the query conditions will be acceptable.\nThis simplifies the problem considerably, removing the need\nfor complex search procedures and for any user input.\n3.1\nThe Preparation Algorithm\nOne of the advantages of using a query-based language\nfor test specification (as opposed to a predicate calculus-based\nlanguage) is that we can make use of a very common\nand easy-to-analyse internal form for (relational) database\nqueries, called relational algebra. This form provides a small\nnumber of operations on relations that can be combined to\nform complex queries. For example, the three most basic\n(and useful) relational algebra operators are:\nThe projection operator,\nAtts\nR, which creates a relation\nfrom R by deleting all attributes not in Atts.\nFor example,\n[Country]\nCustomer produces a relation\nthat contains just the countries that appear in the\nCustomer\nrelation.\nThe selection operator,\nc\nR, which creates a relation\nthat contains all the rows from relation R that satisfy\nthe condition c. For example,\nbal <0\nCustomer returns\na relation containing details of all customers with negative\nbalances.\nThe join operator, R\n1\nc\nS, which creates a relation\ncontaining rows from the cross product of R and S that\nsatisfy the join condition c. 
The query Debtor\n1\ndNo=iNo\nInactive returns details of all debtors who are also inactive\n.\nSince the result of each relational algebra operator is itself\na relation, together they form a closed algebra. This means\nthat we can form arbitrarily complex queries by applying\noperators to the results of other operators. For example, a\nquery which retrieves the customer number of all customers\nwith a negative balance would be written as:\n\n[custNo]\n(\nbalance<0\nCustomer )\nA common way to visualise such expressions is as a tree of\noperators. The tree for the above query is shown in Figure 2.\nFigure 2: Relational Algebra Tree for Negative Balance\nQuery.\nOur algorithm for preparing a database for testing is based\naround this notion of a relational algebra tree. We take the\ncardinality constraints from the test specification, and push\nthem down through the nodes of the input database query\ntree, collecting up additional conditions as we go. When we\nreach a leaf node (i.e. a base relation), we make updates\nto the database so that the pushed-down constraints are\nsatisfied for that relation.\nAt each stage, we collect up the different kinds of constraint\nand push them further down into the tree. These\nconstraint types are:\nMin and Max, the upper and lower bounds on the desired\ncardinality of the result set.\nSelC, the selection conditions on the relations that we\nare interested in.\nUAtts, the collection of attributes that are used in the\nconstrained query, and that must be populated in any\nnew data that we insert.\nWe also build up a collection of queries that describe the\ndata that has been prepared for testing so far, as we progress\nthrough the tree. We call these queries \"bindings\" (Bgs),\nsince they give us values for the variables that occur within\nthe selection and join conditions. At each stage, the bindings\nshould contain one query for each leaf node that has so far\nbeen prepared.\nIt is easiest to see how this works by considering a simple\nexample, such as that shown in Figure 2. Let us assume we\nhave a constrained query that requires at least one customer\nwith negative balance to exist, and that our database does\nnot currently contain any such customers. We begin at the\nroot node of the tree, with only the cardinality constraints\nextracted from the test specification:\nMin = 1, Max = null, SelC = true,\nUAtts = , Bgs =\nThe top node is a projection operator. Projection does not\naffect the cardinality of the result set, nor impose any conditions\n, but it does tell us something about the attributes used\n106\nFigure 3: Relational Algebra Tree Showing Multiple\nJoins\nby the query. We therefore add the projection attributes to\nUAtts and push the constraints down to the next node:\nMin = 1, Max = null, SelC = true,\nUAtts = {custNo}, Bgs =\nNext we must deal with the selection node. Selection nodes\nreduce the cardinality of their input, so we need to push\ndown the selection conditions to ensure that any updates\nwe may make affect the correct tuples. We also need to add\nany attributes appearing in the selection condition to UAtts:\nMin = 1, Max = null, SelC = balance < 0,\nUAtts = {custNo, balance}, Bgs =\nThe final node is the leaf node, representing the Customer\nrelation. We construct a query from the conditions on that\nrelation and execute it, to find out how many answers are\ncurrently in the database. 
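The paper does not show the query generated at this point, but given the pushed-down state (SelC = balance < 0, UAtts = {custNo, balance}), it would plausibly take a form such as:

SELECT c.custNo, c.balance
FROM Customer c
WHERE c.balance < 0

The size of this query's result set is then checked against the cardinality constraints (here Min = 1).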
In this case, there are none, so\nwe need to insert a new Customer record with at least\nthe custNo and balance attributes populated, and with\na negative balance. If there are any integrity constraints\non this relation, then we need to make sure they are also\nsatisfied by the new data.\nWe use the DBMonster data generator mentioned earlier\nto create the new data. It allows generation functions to\nbe specified for attributes, and additional constraints to be\nplaced on them. It will also maintain primary key, foreign\nkey, non-null and domain constraints if configured appro-priately\nusing the information present in the pushed-down\nconstraints.\nOf course, this is a very simple example. In general, we\ncan expect to have to deal with more complicated queries\ninvolving several joins, such as that shown in Figure 3. This\nrelational algebra tree is equivalent to the following constrained\nquery:\nANY :orderNo, :productNo GENERATED BY\nSELECT o.orderno, p.productno\nFROM Order o, Orderdetail d, Product p\nWHERE o.orderno = d.orderno AND\nd.productno = p.productno AND\np.price > 50\nwhich requires that at least one order must exist that involves\nthe purchase of at least one product that costs more\nthan 50. Joins complicate the process of preparing the\ndatabase, because they introduce dependencies between the\nupdates that take place at different leaf nodes. For example,\nimagine that we have processed the tree shown in Figure 3 as\nfar as the leaf node representing the OrderDetail relation.\nJoin operators further constrain the selection condition (by\nconjoining in their join condition), but add no other constraints\n. So, by the time we reach this leaf node, SelC will\nhave been set to:\no.orderno = d.orderno d.productno = p.productno\nWe need to find out whether a suitable OrderDetail record\nexists within the database. However, in order to do this,\nwe need to know something about what preparation actions\nwere performed when the Product leaf node was processed.\nMaybe there were already plenty of 50-plus products in\nthe catalogue, or maybe there were none and one had to\nbe created. How is this information passed through to the\nOrderDetail\nnode so that the correct tuple can be identified\nor created?\nIn the current version of our algorithm, we have chosen\nto use the database itself to communicate these values. If\nthere are many suitable Product records, then we can find\none by querying the database directly once again. If a new\nproduct had to be created, then it will now be present in\nthe database, so we can still retrieve it by querying. The\ninformation needed to construct these queries is present in\nthe selection conditions that have been considered during\nthe processing of the relational algebra tree up to this point.\nFor example, in order to search for an OrderDetail tuple\nthat is connected to a suitable Product, we need to issue\nthe following query:\nSELECT d.* FROM OrderDetail d, Product p\nWHERE d.productno = p.productno AND\np.price > 50\nThis query cannot be constructed from only the constraints\npushed-down from the parent nodes of the leaf node; instead,\nwe need to collect up the constraints imposed by all nodes\nvisited before the current node, so that they are available for\nquery formation. This is done using the Bgs data structure\nmentioned earlier.\nFigure 4 presents the complete algorithm, showing the behaviour\nrequired for each different type of operator. 
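Before the textual walkthrough that follows, the recursion of Figure 4 can be sketched in Java-like form; the type and helper names below (RANode, bindingQuery, generateTuples and so on) are ours and purely illustrative, not taken from the DOT-Unit implementation:

// Sketch of the preparation recursion of Figure 4; helper types and methods are assumed.
Bindings prepare(RANode node, int min, Integer max,
                 Set<String> uAtts, Condition selC, Bindings bgs) {
    if (node instanceof Projection p) {         // projection: remember the used attributes
        return prepare(p.child(), min, max, union(uAtts, p.attributes()), selC, bgs);
    } else if (node instanceof Selection s) {   // selection: conjoin its condition
        return prepare(s.child(), min, max, uAtts, and(selC, s.condition()), bgs);
    } else if (node instanceof Join j) {        // join: left subtree first, then right with the join condition added
        Bindings left = prepare(j.left(), min, max, uAtts, selC, bgs);
        return prepare(j.right(), min, max, uAtts, and(selC, j.condition()), left);
    } else {                                    // leaf: a base relation, updated if needed
        Query q = bindingQuery(node, selC, bgs);
        int found = execute(q).size();
        if (found < min) {
            generateTuples(node, min - found, q);           // e.g. via DBMonster
        } else if (max != null && found > max) {
            deleteTuples(q, found - max);
        }
        return bgs.plus(binding(node, q));
    }
}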
The algorithm\nis presented as a side-effecting function which takes\nthe constrained query that is to be satisfied by the database,\nand a set of initial conditions that state the required cardinality\nbounds and initialise SelC to true, UAtts to and Bgs\nto . The function returns a set of bindings, but these are\ndiscarded. The main task of the algorithm is carried out\nby the side-effecting updates that occur when leaf nodes are\nprocessed.\nDOT-UNIT TESTING FRAMEWORK\nThe intensional database test language and accompanying\npreparation algorithm have been implemented within a testing\ntool, called DOT-Unit. This tool is part of a larger Data-Oriented\nTesting\n7\nframework that is under development at\nthe University of Manchester [20]. DOT-Unit has been implemented\nas an extension to the JUnit testing framework\n7\nhttp://www.cs.man.ac.uk/\n\nwillmord/dot/\n107\nProjection operator\nprepare(\nAtts\nQ, Min, Max, UAtts, SelC, Bgs)\n= prepare(Q, Min, Max, UAtts Atts, SelC, Bgs)\nSelection operator\nprepare(\nc\nQ, Min, Max, UAtts, SelC, Bgs)\n= prepare(Q, Min, Max, UAtts, SelC c, Bgs)\nJoin operator\nprepare(Q\n1\n1\njc\nQ\n2\n, Min, Max, UAtts, SelC, Bgs)\n= prepare(Q\n2\n, Min, Max, UAtts, SelC jc,\nprepare(Q\n1\n, Min, Max, UAtts, SelC, Bgs))\nRelation (leaf node)\nprepare(Rasv , Min, Max, UAtts, SelC, Bgs)\nQ = bindingQuery(v, SelC, Bgs)\nExecute Q to produce result set RS\nif |RS| < Min then\nInvoke DBMonster to create (Min - |RS|) more\ninstances of R that satisfy the conditions in Q\nelse if |RS| > Max then\nDelete the first (|RS| - Max) tuples in RS\nelse\nNo preparation updates needed\nreturn (Bgs binding(v, Q))\nFigure 4: The Database Preparation Algorithm\nfor the unit testing of Java applications [16]. We have subclassed\nthe standard JUnit TestCase class, to create a dedicated\nDatabaseTestCase class for specifying and managing\nintensional database tests. DatabaseTestCase provides\nfacilities for specifying pre-conditions on database state,\ngenerating and manipulating the bindings that are produced\nby such pre-conditions, and evaluating post-conditions on\nthe database state after the test has been completed. The\nstandard JUnit methods for determining the results of test\nexecution on the in-memory fixture can also be used.\nFigure 5 shows an example DatabaseTestCase that includes\ntwo individual tests. The first verifies that when a\ncustomer with a non-negative balance is deleted, all customers\nwith that customer number really do disappear from\nthe database. The second uses a data generation function to\npropose attribute values for a new customer record (including\na unique customer number), and checks that after the\nprogram has executed only one customer with the generated\ncustomer number exists.\nWe use a prefixed colon to indicate variables that are\nshared amongst the test components -- a notation that will\nbe familiar to many database programmers, since it is commonly\nused in various forms of embedded SQL. The shared\nvariables acquire their values when the test harness evaluates\nthe precondition (and performs any necessary database\npreparation steps). These values can then be accessed using\nthe binding method, and can be used in arbitrarily\ncomplex assert conditions, as well as in instantiating the\npost-condition query.\nOne of the main advantages of using the JUnit framework\nas the basis for the implementation of DOT-Unit is that it\nallows us to integrate our tool seamlessly into existing development\nenvironments, such as Eclipse\n8\n. 
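The class itself is not listed in the paper, but the description above and the example in Figure 5 suggest a skeleton along the following lines (method bodies omitted; signatures are inferred and may differ from the actual implementation):

public abstract class DatabaseTestCase extends junit.framework.TestCase {
    // Evaluates a constrained query, preparing the database if necessary,
    // and stores the resulting variable bindings for use in the test body.
    protected void preCondition(String constrainedQuery) { /* ... */ }

    // Returns the value bound to a :variable by the pre-condition.
    protected Object binding(String variable) { /* ... */ }

    // Evaluates a constrained query against the final database state and
    // fails the test if its cardinality constraints are not satisfied.
    protected void postCondition(String constrainedQuery) { /* ... */ }
}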
Thus, DOT-Unit\ntests are executed in exactly the same way as a standard JUnit\ntest case, and the results are displayed using the same\ninterface components. This allows testing of database and\nnon-database components to be interleaved in a convenient\nand natural manner.\n8\nhttp://www.eclipse.org\nEVALUATION\nThe practicality of this intensional test case approach depends\nlargely on the performance overhead imposed by the\ndatabase preparation algorithm. If the time required to execute\neach individual test case is significantly higher using\nour approach than with DBUnit, say, then fewer tests will\nbe able to be executed in the time available and the benefits\nof faster test development and fewer spurious test failures\nwill be negated.\nTo gain a handle on the degree of performance overhead\nto be expected from DOT-Unit, we made use of an existing\nextensional DB test suite that we created for earlier\nwork [20]. This suite was designed for mp3cd browser\n9\n, an\nopen-source Java/JDBC program that stories information\nabout mp3 files in a MySQL 5.0 database\n10\n. The schema\nof the database consists of 6 relations with 22 attributes, 7\nprimary key constraints and 6 foreign key constraints. We\ncreated an equivalent intensional test suite, consisting of 20\ntest cases, from the extensional suite by converting each test\ncase into DOT-Unit pre- and post-conditions. We also re-placed\neach hard-coded test parameter in the original tests\ninto constrained query bindings.\nWe wanted to investigate two specific aspects of the performance\nof DOT-Unit. First, we wanted to compare its\nperformance with that of DBUnit over the equivalent test\ncases as the database size grows. Second, we wanted to gain\nsome idea of what aspects of DB preparation and testing\nwere dominating the performance of DOT-Unit. The results\nof the experiments we performed are presented below.\nAll experiments were run on a Pentium-M 2.0GHz machine,\nwith 1Gb RAM, running Ubuntu Linux.\n5.1\nComparison with DBUnit\nAt first sight, the extensional approach, as exemplified\nby DBUnit, would seem to be the more efficient method\nof the two, as the testing harness does not need to spend\nany time figuring out what updates need to be made prior\nto each test--it only needs to execute them.\nThis does\n9\nhttp://mp3cdbrowser.sourceforge.net/mp3cd/\n10\nhttp://www.mysql.com\n108\npublic class ProgramTest extends DatabaseTestCase {\npublic void testDeleteCustomer() {\npreCondition("ANY :cn GENERATED BY SELECT custNo FROM customer WHERE balance > 0;");\nProgram p = new Program();\np.deleteCustomer(binding(":cn"));\npostCondition("NO :cn2 GENERATED BY SELECT custno FROM customer WHERE custNo = :cn;");\n}\npublic void testNewCustomer() {\npreCondition("ANY :cn, :name, :addr GENERATED BY SELECT gc.custNo, gc.name, gc.addr FROM\ngenCustomerDetails() AS gc WHERE gc.custNo NOT IN (SELECT custNo FROM customer);");\nProgram p = new Program();\nboolean b = p.newCustomer(binding(":cn"), binding(":name"), binding(":addr"));\nassertTrue(b);\npostCondition("EXACTLY 1 :cn, :name, :addr GENERATED BY SELECT custno, name, addr\nFROM customer;");\n}\n}\nFigure 5: Example DOT-Unit Test Case\nnot happen by accident, but because a human programmer\nhas spent time earlier, deciding exactly what the database\nshould look like for each test case. However, when writing\nDBUnit tests, it is common to try to reuse database descriptions\nfor multiple test cases where possible, to reduce\nthe amount of programming and maintenance time. 
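For comparison, the database description used by a DBUnit test is an explicit listing of rows, typically supplied as a flat XML data set; a small illustrative example (table and column names invented for this sketch) might look like:

<dataset>
  <customer custNo="1" name="Smith" addr="1 High Street" balance="-25.00"/>
  <customer custNo="2" name="Jones" addr="5 Mill Lane" balance="150.00"/>
  <orderdetail orderNo="7" productNo="3"/>
</dataset>

Each such file pins down one concrete database state that is re-established before a test is run.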
In this\ncase, some redundant updates will be made before each test\ncase - updates that our extensional approach will not bother\nto make. It is also the case that DBUnit makes its updates\nblindly, whether they are needed or not, whereas the intensional\napproach will be able to reuse much of the existing\ndatabase state for each new test case.\nGiven this, it seems likely that the performance of DBUnit\nwill be better when the database state required for each\ntest case is relatively small, but that the situation will be\nreversed when the database state grows much larger.\nIn\norder to gauge the point at which this change occurs, we\nran our two test suites (extensional and intensional) with\ndatabases of varying sizes, and measured the execution time\ntaken to execute the whole test suite.\nIn each case, we generated initial database states of varying\nsizes at random - either populating the database directly\n(for the intensional test cases) or generating XML descriptions\nof the required state (for the extensional test cases).\nThe results are shown in Figure 6.\nFigure 6:\nComparison of Approaches as DB Size\nIncreases\nTo our surprise, although the performance of DOT-Unit was\ninitially worse than that of DBUnit, it overtook its competitor\nat a comparatively small database size of around 20\ntuples per relation. Obviously, this experiment is a little\nunfair to DBUnit, since programmers are unlikely to create\ndatabase descriptions consisting of 1000s of tuples per relation\n. However, tests of this scale will be needed at some\npoint in the development cycle, in order to verify the behaviour\nof the system on more realistic data sets.\nIn order to assess the behaviour of DOT-Unit more pre-cisely\n, consider the graph in Figure 7, which shows the results\nat small databases sizes in more detail. It can be ob-served\nthat the performance of DOT-Unit first improves and\nthen begins to degrade again at a database size of around\n50 tuples per relation.\nFigure 7: Detailed Comparison of Approaches\nOne possible explanation for this initial improvement in performance\nis that, as the database size rises, so does the\nprobability that the data needed for the test case is already\npresent in the database. For the very small states,\na lot of preparation work is required to create the needed\ndata, whereas less work is needed for a more fully populated\ndatabase. As the database size increases further, however,\nthe costs of making the queries needed to test the preconditions\nand formulate the preparation updates rises, pushing\nup the time required for the entire preparation step. This\n109\nbehaviour may be a peculiarity of the particular test suite\nused, of course, and further, more extensive studies will be\nrequired in order to completely characterise the performance\nof the DOT-Unit test harness.\nFrom these initial results, however, DOT-Unit appears to\nscale well relative to database size, and the execution times\nare of the same order of magnitude as those resulting from\nDBUnit. This suggests that the intensional approach may\nprovide a good compromise between saving expensive programmer\ntime in developing new test cases and expenditure\nof cheaper processing time in executing the test cases.\n5.2\nEffect of Constraint Complexity\nA further concern was the effect of increasing constraint\ncomplexity on the performance of DOT-Unit test cases. 
How\nmuch additional overhead is added for conditions involving\na higher number of selection conditions and (most impor-tantly\n) joins? In order to assess this, we grouped the test\ncases into three groups, according to their complexity:\nA: queries with one or more selections and no joins,\nB: queries with one or more selections and a join between\ntwo relations,\nC: queries with one or more selections and joins between\nthree relations.\nThis gave a test suite with 5 test cases in each of these\ncategories, which we executed against a randomly generated\ndatabase state with 500 tuples per relation that does not\nsatisfy any of the test case pre-conditions. Figure 8 shows\nthe results obtained for the three complexity categories. We\nmeasured the average time taken to execute the test cases\nin each category, including a breakdown of where the time\nis spent in each case:\nTest: the time required to execute the procedural aspects\nof the test case;\nQuery: the time required to execute the query aspect\nof the test case condition;\nPrepare the time required to execute the preparation\naspect of the test case condition.\nWhile the overall time required to execute the test cases rises\nas the complexity rises (unsurprisingly), the relative proportions\nof time spent in the various phases remains roughly the\nsame. The preparation phase seems to account for slightly\nmore than half of the time in each case, indicating that significant\nimprovements could be achieved with a less-naive\npreparation algorithm.\nCONCLUSIONS\nWe have presented a new approach to the specification\nof test cases for database systems that attempts to reduce\nthe amount of manual intervention required in between test\ncase runs while also minimising the number of spurious test\nfailures due to inappropriate input database states. The approach\nhas the further advantage that it sits naturally on top\nof test data sets taken from live databases, and this allows\ntesting to be carried out using realistic data sets without requiring\nsignificant programmer effort to tailor the data set to\nthe test cases. In effect, the intensional approach we have\nFigure 8: The Affect of Changing Constraint Complexity\ndescribed allows software developers to trade programmer\ntime for test execution time\nOur experience has indicated that intensional test cases\nare quick and natural to write for anyone who is familiar\nwith SQL and database programming, although a study\nwith an independent testing team would be necessary before\nwe can make any strong claims in this regard. However\n, compared with what is involved in writing pure JDBC\ndatabase test cases and DBUnit test cases, we found that\nthe self-contained nature of the intensional test cases was a\ndefinite advantage. Writing DBUnit test cases requires the\nprogrammer to continually check that the test case is compatible\nwith the database description. Moreover, since it is\ncommon to try to reuse database descriptions for multiple\ntest cases by combining their requirements into one database\nstate, it becomes very easy to break one test case by changing\nthe database description in order to ready it for another.\nThese problems do not arise with intensional testing, since\nall the information about the test case is present in a single\nfile (the Java class file).\nWe designed this first version of the preparation algorithm\nfor simplicity and correctness rather than efficiency, and as\nsuch it performs rather stupidly in many cases. 
We are currently\nexploring options for improving the algorithm, including\nmore intelligent selection of the order in which the relational\nalgebra tree is traversed, alternating between passing\nquery bindings and passing literal value bindings as is most\nefficient, and making use of modifications to existing tuples\nas well as simply adding and deleting tuples (both of which\nare comparatively expensive operations). The complexity of\nthe conditions we can handle is at present limited by the\ncapabilities of DBMonster, and can be expanded by development\nof a custom data generation facility. We also need\nto expand the range of queries that can be handled, beyond\nsimple select-project-join queries.\nFor example, standard\nSQL also allows aggregation and ordering within queries-both\nof which offer challenges in terms of automatic preparation\n.\nA further problem with our current algorithm is that it\nmay sometimes fail to find a solution to the database preparation\nproblem, even though one exists. This is due to the\nfact that updates are made at leaf nodes before the full set of\nconstraints on those nodes has been encountered. It should\n110\nbe possible to address the problem with more sophisticated\nquerying techniques (this is an example of a fairly standard\nconstrained search problem, after all), although this will add\nto the performance overhead. A thorough study of the trade-offs\nbetween spurious failures and more intelligent searching\nwill need to be carried out before any concrete recommendations\ncan be made.\nFinally, we note that where it is important to test large\nnumbers of frame constraints (i.e. aspects of the original\ndatabase state that are not affected by the execution of the\nprogram under test), it may be easier to express the test case\nusing DBUnit, rather than cluttering up the intensional test\nwith many such constraints.\nOur work presents a number of possible avenues for future\nwork beyond the improvements mentioned above, of which\nthe most urgent is the question of ordering of test cases\nwithin suites. This ordering can be in terms of reducing the\ncost of the modifications to database state or to maximise\nfault coverage. There is also the question of whether the\nmodifications to database state should always persist between\ntest cases or under certain conditions discarded. For\nexample, a test case may specify that a relation be empty\nand to satisfy the condition the content is discarded. However\n, this relation may be required by later test cases and so\nby discarding its contents we increase the divide between the\ntest state and the real world. This could be accomplished\nby either embedding the modifications inside of a transaction\nwhich can then be aborted or by using a hypothetical\ndatabase engine.\nACKNOWLEDGMENTS\nWe thank Leonardo Mariani and the anonymous reviewers\nfor comments on earlier drafts of this paper. David Willmor\nis supported by a research studentship from the UK Engineering\nand Physical Sciences Research Council.\nREFERENCES\n[1] M. Arenas, L. E. Bertossi, and J. Chomicki.\nConsistent query answers in inconsistent databases. In\nProceedings of the 18th ACM\nSIGACT-SIGMOD-SIGART Symposium on Principles\nof Database Systems (PODS), pages 6879. ACM\nPress, 1999.\n[2] L. E. Bertossi and J. Chomicki. Query answering in\ninconsistent databases. In J. Chomicki, R. van der\nMeyden, and G. Saake, editors, Logics for Emerging\nApplications of Databases, pages 4383. Springer,\n2003.\n[3] P. Bohannon, M. Flaster, W. 
Fan, and R. Rastogi. A\ncost-based model and effective heuristic for repairing\nconstraints by value modification. In Proceedings of\nthe SIGMOD Conference, pages 143154. ACM, 2005.\n[4] L. Bravo and L. E. Bertossi. Logic programs for\nconsistently querying data integration systems. In\nG. Gottlob and T. Walsh, editors, Proceedings of the\n18th International Joint Conference on Artificial\nIntelligence (IJCAI), pages 1015. Morgan Kaufmann,\nAugust 2003.\n[5] A. Cal`i, D. Lembo, and R. Rosati. On the decidability\nand complexity of query answering over inconsistent\nand incomplete databases. In Proceedings of the 22nd\nACM SIGACT-SIGMOD-SIGART Symposium on\nPrinciples of Database Systems (PODS), pages\n260271. ACM, June 2003.\n[6] D. Chays, S. Dan, P. G. Frankl, F. I. Vokolos, and\nE. J. Weber. A framework for testing database\napplications. In Proceedings of the International\nSymposium on Software Testing and Analysis\n(ISSTA), pages 147157, August 2000.\n[7] D. Chays, Y. Deng, P. G. Frankl, S. Dan, F. I.\nVokolos, and E. J. Weyuker. An AGENDA for testing\nrelational database applications. Software Testing,\nVerification and Reliability, 14(1):1744, 2004.\n[8] J. Chomicki and J. Marcinkowski. On the\ncomputational complexity of minimal-change integrity\nmaintenance in relational databases. In L. E. Bertossi,\nA. Hunter, and T. Schaub, editors, Inconsistency\nTolerance, volume 3300 of Lecture Notes in Computer\nScience, pages 119150. Springer, 2005.\n[9] S. S. Cosmadakis and C. H. Papadimitriou. Updates\nof relational views. Journal of the ACM,\n31(4):742760, 1984.\n[10] S. M. Embury, S. M. Brandt, J. S. Robinson,\nI. Sutherland, F. A. Bisby, W. A. Gray, A. C. Jones,\nand R. J. White. Adapting integrity enforcement\ntechniques for data reconciliation. Information\nSystems, 26(8):657689, 2001.\n[11] G. Greco, S. Greco, and E. Zumpano. A logical\nframework for querying and repairing inconsistent\ndatabases. IEEE Transactions on Knowledge and\nData Engineering, 15(6):13891408, 2003.\n[12] A. Guessoum and J. W. Lloyd. Updating knowledge\nbases. New Generation Computing, 8(1):7189, 1990.\n[13] F. Haftmann, D. Kossmann, and A. Kreutz. Efficient\nregression tests for database applications. In\nProceedings of the 2nd Biennial Conference on\nInnovative Data Systems Research (CIDR), pages\n95106. Online Proceedings, January 2005.\n[14] G. M. Kapfhammer and M. L. Soffa. A family of test\nadequacy criteria for database-driven applications. In\nProceedings of the 11th ACM SIGSOFT Symposium\non Foundations of Software Engineering, pages\n98107. ACM, September 2003.\n[15] R. Langerak. View updates in relational databases\nwith an independent scheme. ACM Transactions on\nDatabase Systems (TODS), 15(1):4066, 1990.\n[16] P. Louridas. Junit: Unit testing and coding in\ntandem. IEEE Software, 22(4):12 15, July-Aug 2005.\n[17] J. Melton and A. R. Simon. SQL:1999 Understanding\nRelational Language Components. Morgan Kaufmann,\n2002.\n[18] H. Shu. Using constraint satisfaction for view update.\nJournal of Intelligent Information Systems,\n15(2):147173, 2000.\n[19] D. Willmor and S. M. Embury. Exploring test\nadequacy for database systems. In Proceedings of the\n3rd UK Software Testing Research Workshop\n(UKTest), pages 123133. The University of Sheffield,\nSeptember 2005.\n[20] D. Willmor and S. M. Embury. A safe regression test\nselection technique for databasedriven applications.\nIn Proceedings of the 21st International Conference on\nSoftware Maintenance (ICSM), pages 421430. 
IEEE\nComputer Society, September 2005.\n111\n", "keywords": "database testing;Efficient Testing;software testing;Seamless Integration;Query Based Language;Improvement for the Intensional Test Cases;DOT-UNIT;databases;Lesser Programmer Effort for Test Cases;Intensional Test Cases;Testing Framework;Testing for Database Systems;Performance Testing"} {"name": "36", "title": "Analysis of Soft Handover Measurements in 3G Network", "abstract": "A neural network based clustering method for the analysis of soft handovers in 3G network is introduced. The method is highly visual and it could be utilized in explorative analysis of mobile networks. In this paper, the method is used to find groups of similar mobile cell pairs in the sense of handover measurements. The groups or clusters found by the method are characterized by the rate of successful handovers as well as the causes of failing handover attempts. The most interesting clusters are those which represent certain type of problems in handover attempts. By comparing variable histograms of a selected cluster to histograms of the whole data set an application domain expert may find some explanations on problems. Two clusters are investigated further and causes of failing handover attempts are discussed.", "fulltext": "INTRODUCTION\nMobility management is a great challenge in current and\nfuture radio access networks. In third generation (3G) networks\nuser experienced quality of service (QoS) under a\nmove of mobile station (MS) from one mobile cell to another\ncell has been improved by implementing soft handover\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nMSWiM'06,\nOctober 26, 2006, Torremolinos, Malaga, Spain.\nCopyright 2006 ACM 1-59593-477-4/06/0010 ...$5.00.\n(SHO). Soft handover makes it possible to have connections\non several base stations (BS) simultaneously.\nIn this paper, a set of measurements which can be used for\nsoft handover decision making are analyzed and compared\nwith other measurements in which statistics of successful-ness\nof handover attempts have been collected. We do not\nknow exactly the parameters of used SHO algorithm. SHOs\nare investigated only on basis of data set and some general\nknowledge of 3G systems. Mobile cell pairs with handovers\n(HO) are divided in groups using clustering algorithm. Cell\npairs in which SHOs are similar with each other fall in same\ngroup. Different types of SHO failures are analyzed using\nclustering information and distributions of measurements in\neach cluster.\nIn Section 2 the soft handover concept, the measurements\nand used neural network algorithm are shortly introduced.\nAnalysis methods which have been used are described in\nSection 3. Preliminary results are shown and discussed in\nSection 4. Finally, some conclusions are drawn in the last\nsection.\nBACKGROUND\nIn this section, the basics of soft handover in 3G network\nis explained and the available data set is introduced. Neural\nnetwork algorithm used in data clustering is also presented.\n2.1\nSoft handover\nSoft handover is a state of MS being connected to several\nBSs simultaneously. 
In GSM networks, a fixed threshold for\nhandover from one cell to another is used. In 3G networks,\neach MS is connected to a network via a set of BSs called\nactive set. Members of active set are updated on basis of\nmeasurements made by MS. The advantage of having connections\non several BS simultaneously is realized when MS\nis moving towards another BS, the MS should have a connection\nat least on one BS all the time. In GSM system, the\nolder connection has to be terminated before the new one\ncan be setup. The connection setup phases are the most\nvulnerable steps in a call. The connection between MS and\nBS is setup in a beginning of a call or later when handover\noccurs. If the setup is not successful, it is useful to have an\nexisting connection to another BS or otherwise the call will\nbe abnormally terminated.\nHandover can occur due to signal quality reasons or when\nthe traffic capacity in a cell has reached its maximum or is\napproaching it. In the latter case, traffic load in the network\ncan be distributed more uniformly by handing over some\nusers from the most crowded cells. The above method is\n330\ncalled cell breathing. Use of cell breathing without giving\nthe information to the analyzer increases the complexity of\nthe analysis and can mix up a lot in the analysis process.\nFor a user soft handover means power saving (in uplink)\nand less abnormally terminated calls. For an operator lower\nMS transmitting powers mean less interference. When MS\nis in SHO, several BSs listen the same uplink channel, but\nall BSs have their own downlink channel. The offered diversity\nis resource consuming in downlink direction. There is\na tradeoff between better QoS in mobility management and\nconsumption of resources.\nDecision of soft handover is made in mobile station by\ncomparing the signal-to-noise ratios of active and candidate\nBSs Common Pilot Channel (CPICH) [2]. Members of active\nset are selected on basis of powers of this pilot signal [5,\n12, 16].\nBSs which are not in the active set but next from it in the\nsense of measured quantity are in candidate set. Candidate\nset BSs are constantly monitored whether their offer better\nconnection than cells in active set. Cells not in active or\ncandidate set are monitored less frequently whether their\ncan enter the candidate set.\nCell is either added to the\nactive set if the maximum amount of cells in the active set\nis not reached or cell replaces the cell which offers the lowest\nquality connection. Cells which are no more able to offer\na connection which is good enough are removed from the\nactive set.\nThresholds are used in adding, replacing and removing\nBSs from active set by BSs in candidate set to avoid ping\npong effect. This means that a value of measured quantity\nshould be with a certain threshold better than the old one\nfor changing cells in active set. If measurement which is only\nslightly better (i.e. with zero threshold) is enough for changing\ncells in sets, it is quite possible that the same change is\nperformed in opposite direction very soon. Thus, the original\nupdate of the set was useless and resource consuming in\nthe sense of all required signaling.\n2.2\nData\nThree data sets of Key Performance Indicator (KPI) level\nmeasurements related on handover events are saved. Each\nset consists of measurements collected during one hour. KPI\nis considered as an important measure to be followed. It can\nbe a measurement by itself or it has been computed from a\ngroup of raw counters [10]. 
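As a concrete example of a KPI derived from raw counters, the failure ratio used later in the analysis (r_fail in Table 1) is simply

r_fail = pfail_total / SHO_total_att,

i.e. the number of failing soft handover attempts divided by the total number of attempts recorded for a source target cell pair.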
One data vector consists of probabilities\n, means, sums and counters computed over one hour\nof one source target cell pair. Here, source refers on cell\nin active set and target on another cell which is measured\nand possibly added in active or candidate set.\nMeasurements\nof target cell are compared with those of source cell.\nHandover decisions are made in MS on basis of measured\nand computed base stations received signal signal-to-noise\nratios (E\nc\n/N\n0\n). For each source and target cell pair mean\nof signal-to-noise ratio differences is computed using\nEcnoDiffMean = mean n[E\nc\n/N\n0\n]\ntarget\n- [E\nc\n/N\n0\n]\nsource\no .\nMean value and number of made comparisons (EcnoDiffNum)\nare saved. Four bin pdfs of these measurements are also\nstored with bin centers in -6, -3, 0 and 3dB, correspond-ingly\n.\nIn addition to E\nc\n/N\n0\nmeasurements, averages of received\npilot signal power ratios between BS pairs (av rscp ratio)\nhave been computed and stored in database. The time and\nprobability of being in SHO with each other have also been\nmeasured. Time of target and source cell being in SHO with\neach other simultaneously is counted in variable t act. Then,\nat least one MS is in SHO having both source and target cell\nin its active set. The measurement is symmetric for a switch\nof source and target cells. Time of target cell being in SHO\nwith source cell is stored in t act dir. Cell total time in\nSHO is saved in tot time sho. It has been counted over all\nthe targets of fixed source cell. Probability of target and\nsource being in same active set is stored in variable p act.\nTotal number of SHO attempts to add target to active\nset is stored in SHO total att. Ratio of successful SHO attempts\nwhich lead to addition of target cell in active set is\nsaved in add ratio. In addition to those above, the number\nof SHO failures is stored in pfail total and ratios of four\ndifferent failure causes are saved. Failure occurs in setup\nor active time phase of SHO and it is either radio channel\nproblem or not. Probability of cell being in monitored state\nis also measured (p4th 5th). All the measurements used in\nthe analysis are shortly described in Table 1.\nA lot of data has been saved in data sets, but also some\nvery important information is missing. Due to missing information\non cell capacities, their locations and performed\nmanual and automatic tuning operations on network configuration\nbetween successive data set saves, only preliminary\nanalysis can be performed. The rest of the analysis process\nis described on theoretical level.\n2.3\nSelf-Organizing Map\nSelf-Organizing Map (SOM) [8] is an unsupervised neural\nnetwork algorithm which adapts the codebook vectors\nof neurons so that they approximate the input data distribution\n. When the training has converged topological areas\nor domains corresponding to certain types of inputs can be\nfound from the map. The topology and the size of the network\nis fixed before adaptation.\nIn the SOM algorithm, the codebook vectors w\nj\nof the\nSOM are at first initialized. Then, the following steps are\nrepeated for each input vector x: Find the index of best-matching\nor nearest codebook vector using\ni(x) = argmin||x - w\nj\n||,\nin which j goes through all the neurons in the map. 
Next,\nthe codebook vectors of winner neuron and its neighbors are\nupdated using\nw\nj\n(t + 1) = w\nj\n(t) + h\nij\n(x)(x(t) - w\nj\n(t)).\nHere, is the learning rate and h\nij\n(x) is the neighborhood\nfunction centered around the winner neuron. Input sample\nx defines the winner neuron and the topological distance\nbetween indexes i and j defines how much the neuron is\nupdated. Neighborhood function is typically Gaussian or\nbubble function i.e. function which decrease monotonically\nand even goes to zero when the distance increases.\nIn this paper, a batch version of the SOM algorithm is\nused. In batch SOM, all codebook vectors of the SOM are\ncomputed after the best-matching units of all input data\nvectors have been found. The same data set is used several\ntimes.\nMETHODS\nHandover related measurement from 3G network can be\nanalyzed using standard data mining methods [1]. In this\n331\nTable 1: Measurements in the analysis. Data set has one sample vector for each source target cell pair.\nVariable\nExplanation\nType\nEcnoDiffNum\nComputed E\nc\n/N\n0\ndifferences\nnumber\nEcnoDiffMean\nComputed E\nc\n/N\n0\ndifferences\nmean\nEcnoDiffPdf-6.0\n-6 dB bin of E\nc\n/N\n0\ndifference pdf\nratio\nEcnoDiffPdf-3.0\n-3 dB bin of E\nc\n/N\n0\ndifference pdf\nratio\nEcnoDiffPdf0.0\n0 dB bin of E\nc\n/N\n0\ndifference pdf\nratio\nEcnoDiffPdf3.0\n3 dB bin of E\nc\n/N\n0\ndifference pdf\nratio\nt act\nTarget and source simultaneously in SHO\nmean\nt act dir\nTime of target being in SHO with source\nmean\ntot time sho\nCell total time in SHO\nsum\np act\nTarget in active set of source\nratio\nSHO total att\nSHO attempts to add Target to active set\nnumber\nadd ratio\nSuccessful attempts leading to addition\nratio\npfail total\nFailures\nnumber\npfail ini\nSetup phase failures due to non-radio\nratio\npfail ini radio\nSetup phase failures due to radio\nratio\npfail act\nActive time failures due to non-radio\nratio\npfail act radio\nActive time failures due to radio\nratio\np4th 5th\nCell is in monitored state (=4th or 5th)\nratio\nav rscp ratio\nTarget / Source Received power ratio\nmean\nr fail\nRatio pfail total / SHO total att\n\nratio\nr EcnoDNum\nRatio EcnoDiffNum / SHO total att\n\nratio\n\nVariable defined in the analysis.\nstudy, methods presented in Figure 1 are used. At first,\nthe miner have to decide what could be interesting in this\ndata. The analysis task has to be defined. On basis of that\nthe first choice of variables will be done. Next, the selected\nvariables are preprocessed, in order to be able to use them\nin later analysis.\nIn data mining tasks, variable selection and preprocessing\nare the most critical phases, because in this step the miner\ndecides which variables are important and how should they\nbe processed. The whole data mining process consists of several\ncycles performed repeatedly. The cycles include testing\nhow different variable selections and preprocessing methods\neffect on final results. The process has inner loops in which\nsome tasks or parameters are fixed on basis of selections\nmade in outer loop. The inner loops are performed more\nfrequently. Loops with more general task like the definition\nof mining task are repeated less frequently. When the\nmining task is defined the analyzer should be able to decide\nwhat is (s)he looking out for.\nNow, the analysis task is defined as finding groups of sim-ilarly\nbehaving cell pairs in SHO situations. Importance of\nmeasurements can also be highlighted using proper weighting\nof variables. 
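The paper does not spell out exactly how this weighting enters the algorithm; a common formulation (used, for instance, in SOM implementations that support variable masks and missing values) replaces the plain Euclidean distance in the best-matching-unit search by a weighted one,

d^2(x, w_j) = sum_k m_k (x_k - w_jk)^2,

where m_k >= 0 is the mask or weight of variable k, m_k = 0 removes the variable from the organization of the map, and components that are missing in x are simply left out of the sum.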
In addition to clustering, also other tasks\nfor data analysis can be defined. One possibility is to try to\nfind cells or cell pairs with anomalous behavior. Anomalies\ncan also be found by clustering, but expert knowledge in\nvariable selection and preprocessing steps are very important\n.\nUsing different variables, preprocessing methods and weighting\nof variables different clustering results can be found. To\nfind out which of them is useful, interpretation of clusters is\nneeded. This can be done using histograms or rules defined\nby data samples falling in clusters. The results which have\nbeen found using clustering methods should be visualized\ntogether with spatial locations to be able to understand the\nusefulness of results. Methods should be performed repeatedly\nto analyze successive data sets under the knowledge of\nperformed tuning operations. Thus, there is a possibility to\nfind explanations to changing results. In this study, results\nof only one data set are shown, because more information\non application domain is needed to be able to combine and\ncompare successive clustering results.\n3.1\nPreprocessing\nDifferent preprocessing methods have been tested. The\nfinal method was selected on basis of histograms and the\nclusters which were found using the selected method. At\nthe first step, the distributions are truncated. Outliers in\nthe selected variables were replaced by their maximum permitted\nvalues. Two variables, pfail total and EcnoDiffNum,\nwere scaled using the number of performed soft handover\nattempts (see Table 1). Logarithms of some of the variables\nwere taken, but finally only scaled EcnoDiffNum was preprocessed\nwith logarithmic function. Sample vectors with high\namount of undefined measurements were canceled.\nUsed\nclustering method (see section 3.2) allows using sample vectors\nin which some variables are undefined. However, they\nare not so useful when the rate of undefined values increases.\nHere, sample vectors with 15 or more missing values in 20\nvariables are canceled.\nIn Figure 2 the histograms of the most interesting variables\npreprocessed using selected methods are visualized.\nSome of the variables have quite high peaks in distributions,\nbut due to the origin of variables no other preprocessing\nhave been performed. For example, handover failure reasons\npfail ini, pfail ini radio, pfail act radio and pfail act sum up\nto unity. However, pfail act is not analyzed because it is zero\nall the time in the first data set.\n332\nInterpretation\nof clusters\nClustering\nPreprocessing\nParameter tuning\nTask definition\nVisualization\nwith locations\nVariable selection\nFigure 1: Used data analysis method.\nSteps connected\nwith solid arrows have been performed.\n3.2\nClustering\nCluster analysis is used to divide data vectors in groups.\nData vectors falling in same cluster are similar with each\nother.\nHere, clustering is performed using a two-phase\nmethod [15]. In this method, data vectors are at first used\nto train a Self-Organizing Map. Neurons of the SOM adapt\nto incoming data so that the input data can in later analysis\nbe represented by the codebook vectors of neurons. Number\nof these codebook vectors is much smaller than the number\nof original data vectors. Thus, computational complexity of\nthe final crisp clustering algorithm is decreased. 
Another\nadvantage of using a SOM based two-phase method instead\nof direct clustering of data vectors is the visualization capability\nof SOM.\nIn addition to preprocessing, SOM algorithm provides another\npossibility to emphasize important properties of data.\nLarger weights in distance computation are given to the\nmost important properties defined by the analyzer. Smaller\nor even zero weight can be given to those variables which\nare not used in organization of the SOM i.e. in building clusters\n. However, values of them can be compared to those with\nlarger weights using various visualization methods. Weighting\nby variable importance can also be built into SOM training\nalgorithm by utilizing learning distance metrics [7].\nFigure 2: Logarithmic histograms after distribution\ncuts, logarithmic preprocessing of r EcnoDNum and\nscaling of all variables between [0,1]\nThe codebook vectors are further clustered using k-means\nor some hierarchical clustering method. In this paper, Ward\nagglomerative clustering method has been used [4]. In the\nbeginning of hierarchical clustering, each codebook vector\nis a cluster of its own. In the next step, the most similar\nclusters are combined and this is continued until all vectors\nare in same cluster. The clustering results form a tree structure\ncalled dendrogram. In visualization of a dendrogram,\nthe clusters combined in each step and the distance between\nthem are shown. Final clustering is selected by cutting this\ntree at certain level. The number of clusters can be selected\nmanually or some cluster validation index can be utilized\nto find the optimum. In this paper, Davies-Bouldin validation\nindex has been used [3]. Similar clustering methods\nhave earlier been used in the analysis of both GSM and 3G\nnetwork BTSs [9, 11, 13].\nAs a result of clustering, each data vector is represented\nby index of one neuron or by the codebook vector stored in\nthat neuron. Furthermore, the neuron and the data vectors\nthe neuron represents belong to same cluster. On basis of\nthe clustering result, some clusters can be selected for more\nspecific analysis. Cluster selection is usually done on basis\nof found higher values of some critical variables. It is possible\nto build a system in which rules are found for clusters\n[14, 9] and these are used to select interesting clusters au-tomatically\n. Here, interesting clusters are selected manually\non basis of clusterwise variable mean values and histograms.\nRESULTS\nIn this section, handover measurement data is used to\ntrain a Self-Organizing Map of size 17 12. Then, the codebook\nvectors of the SOM are clustered using hierarchical\nWard method. Results of clustering are described and two\nclusters are then selected for more specific analysis. Characteristics\nof sample vectors falling in those clusters are studied\nusing histograms.\nOnly the most interesting variables are used to find the\n333\nnearest neuron of input data vector. These variables have\nnonzero mask which can also be considered as a weighting\nfactor in a search for the best-matching neuron. Rest of\nthe variables have zero mask, which means that they can\nbe visualized and updated using SOM algorithm, but they\ndo not have an effect on organization of the SOM and on\nselection of the cluster in which the sample belongs to.\nIn Figure 3 all other component planes of SOM with positive\nmask are shown, except E\nc\n/N\n0\ndifference distributions\nwhich are shown in Figure 6. 
In component plane visualization\n, distributions of components (or variables) of SOM\ncodebook vectors are shown. Component values of one codebook\nvector are visualized using grayscaling and their locate\nin the same position at each plane. For example, values of\none codebook vector are shown at upper right corner in each\nplane.\nFigure 3: Component planes of SOM with denor-malized\nscales. Shown variables have nonzero mask\nand they are not describing E\nc\n/N\n0\ndifference distributions\n.\nSome component values which were not used in SOM\ntraining (i.e. they were masked out) are shown in Figure\n4. Although, they have no effect on SOM organization, they\nare adapted to be able to compare their distributions even\nwith those used in organizing the SOM.\nBy visual comparison of variables in Figures 3 and 4, it can\nbe\nseen\nthat\nthe\ntotal\nnumber\nof\nSHO\nattempts\n(SHO total att)\nand\nE\nc\n/N\n0\ndifference\nmeasurements\n(EcnoDiffNum) is higher in upper part of the SOM. However\n, when the latter is scaled by total number of attempts,\nhigher rate of measurements (r EcnoDNum) is in lower part\nof the map. Also, the total number of failuring SHO attempts\n(pfail total) is high in upper right corner, but scaling\nthis by number of attempts tells us that the failure rate\n(r fail) in upper right corner is quite moderate. Instead,\nhigher failure rates exists in both lower corners i.e. in clusters\n5 and 8 (see Figure 5).\nTrained SOM codebook vectors are clustered using hierarchical\nWard algorithm. The clustering result selected by\nDavies-Bouldin index is shown in Figure 5. Four bin E\nc\n/N\n0\ndifference histograms are visualized on top of clustered SOM\nFigure 4: Denormalized component planes of variables\nwhich were not used in SOM training.\nin Figure 6. When component values of SOM (see Figures\n3, 4 and 6) are compared with clustering result (see Figure\n5) several types of source target pairs can be found. Most of\nthem are behaving as expected, but some of them represent\nhandover attempts with certain type of problems.\nFigure 5: SOM which is clustered using hierarchical\nWard method and Davies-Bouldin validation index.\nTo find out the most interesting clusters of the SOM for\nfurther investigations, distribution of data samples on SOM\nis visualized. In Figure 7a hits of all samples on SOM nodes\nare visualized and in Figure 7b hits of samples with SHO\nfailure rate (r fail) larger than 22% are shown. Samples are\ndistributed all over the map, only some edge nodes have\nslightly larger hit rate. Lower part of the map has more hits\nwhen samples with increased failure rate are considered.\nIn Figure 8 hits of samples which represent two differ-334\nFigure 6: EcnoDiff distributions on top of clustered\nSOM. In each SOM node a four bin E\nc\n/N\n0\nhistogram\nis shown.\nent types of SHO failures are shown. Samples are from cell\npairs in which the rate of selected type of failures is larger\nthan 75%. However, handover initialization failures due to\nsome other reason than radio channel resources (i.e. pfail ini\ntype failures) are obviously more frequent than failures due\nto radio channel initialization problems (pfail ini radio type\nfailures). Cell pairs with SHO failures originating mainly\nfrom these two reasons are mapped on separate clusters.\nAll SHO failures due to radio channel initialization are in\ncluster 9 (see Figures 5 and 8b) and most of all other initialization\nfailures are in cluster 5 (see Figures 5 and 8a). 
In\nthe following, these two clusters are studied in more detail.\nIn Figures 9 and 10 histograms of samples which belong\nto clusters 5 and 9 are shown.\nThese histograms should\nbe compared with histograms of whole data set which were\nshown in Figure 2. In histograms of cluster 5 (see Figure\n9), the average received signal power ratio (av rscp ratio) is\nslightly lower than in general. Distributions of three largest\nE\nc\n/N\n0\ndifference measurement bins are completely different\nthan corresponding distributions from the whole data set.\nIn cluster 5 most of the samples have about 3dB E\nc\n/N\n0\ndifference\n(EcnodiffPdf3.0) which means that at least this measurement\nmakes successful SHOs possible and SHO should\nbe performed. Exceptional E\nc\n/N\n0\ndifference measurements\n(a) All\n(b) SHO failure rate > 22%\nFigure 7: Sample vector hits on SOM nodes. Size of\nblack hexagonal on SOM node denotes the number\nof hits. Maximum number of hits per node is shown\nabove the plot.\n(a) pfail ini\n(b) pfail ini radio\nFigure 8: Hits of samples of two failure types. Samples\nof which more than 75% are failuring due to\nselected cause are counted.\nof this cluster can also be seen in Figure 6. All the failing\ncell pairs fail in initialization due to other than radio channel\nreasons (pfail ini). Total rate of failures is very high\n(r fail). One reason for high rate of failures can be that all\nthe capacity is in use.\nIn histograms of cluster 9 (see Figure 10), the average received\npower ratios are a bit higher than usual, but there are\nno samples with high rate of 3dB E\nc\n/N\n0\ndifferences (EcnoDiffPdf3\n.0). However, in such a situation it should be possible\nto perform successful SHOs. The rate of initialization failures\nin radio channels (pfail ini radio) is higher than usually,\nbut because only a small part of samples in this cluster have\nabove mentioned problems the total SHO failure rate is not\nhigher than usually. The total number of samples or cell\npairs with high rate of initialization failures in SHO is so\nsmall, that it is impossible to make any further inferences\nfrom these clusters. It is possible to check histograms of\nonly those samples which fulfill the failure rate criteria, but\nthe number of samples is anyway quite low.\n335\nFigure 9: Histograms of data vectors of cluster 5.\nCell pairs with high rate of radio channel initialization\nfailures in SHO attempts vary from data set to another,\nbut without any information on network topology and with\nuncomplete information on performed tuning operations, it\nis impossible to make any further inferences.\nFigure 10: Histograms of data vectors of cluster 9.\nCONCLUSIONS\nIn this paper, a data analysis method based on a neural\nnetwork has been presented. The method is utilized in\ndata visualization and clustering. The presented method is\nonly one possibility for finding data clusters. However, the\nbenefits of the proposed method are the decrease in computational\ncomplexity due to used two-phase clustering algorithm\nand the visualization capability of the method. Thus,\nit is well suitable for this kind of explorative data analysis.\nIt is desirable to find clusters with characteristics which\ndiffer from one cluster to another. In the presented method,\nselection of variables and variable weighting factors have\nbeen used to find interesting clusters. 
In the preprocessing\nphase, also the number of permitted undefined measurement\nvalues in sample vector has an effect on found clusters.\nSample vectors with high rate of missing values are not so\nusable and describable as samples without them. Vectors\nwith missing values can be used in the SOM training but\nthe benefit of using them decreases when the rate of undefined\nvalues increases.\nIn this study, histograms are used both when preprocessing\nmethods are decided and when an interpretation for the\nfound clusters are looked for. However, clusters can also be\ncompared using other visual methods, finding limiting rules\nfor variable values in clusters or comparing distributions of\nvariable values in clusters using more sophisticated distribution\ncomparison measures like Kullback-Leibler divergences\n[6].\nThe results which have been obtained using all available\ndata sets differ slightly from each other, but due to uncomplete\ninformation on network configuration and parameter\ntuning, further inferences cannot be made. However, adding\nthis information would offer interesting possibilities to continue\nthis study.\nREFERENCES\n[1] P. Chapman, J. Clinton, T. Khabaza, T. Reinartz,\nand R. Wirth. CRISP-DM 1.0 step-by-step data\nmining guide. Technical report, CRISM-DM\nconsortium, 2000. http://www.crisp-dm.org.\n[2] Y. Chen. Soft Handover Issues in Radio Resource\nManagement for 3G WCDMA Networks. PhD thesis,\nQueen Mary, University of London, 2003.\n[3] D. Davies and D. Bouldin. A cluster separation\nmeasure. IEEE Transactions on Pattern Analysis and\nMachine Intelligence, 1(2):224227, April 1979.\n[4] B. Everitt. Cluster Analysis. Arnold, 1993.\n[5] V. K. Garg. Wireless Network Evolution: 2G to 3G.\nPrentice-Hall, Inc., 2002.\n[6] S. Haykin. Neural Networks, a Comprehensive\nFoundation. Macmillan, 1999.\n[7] S. Kaski and J. Sinkkonen. Metrics that learn\nrelevance. In Proceedings of the International Joint\nConference on Neural Networks, volume 5, pages\n547552, 2000.\n[8] T. Kohonen. Self-Organizing Maps. Springer-Verlag,\nBerlin, 1995.\n[9] J. Laiho, K. Raivio, P. Lehtim\naki, K. H\nat\nonen, and\nO. Simula. Advanced analysis methods for 3G cellular\nnetworks. IEEE Transactions on Wireless\nCommunications, 4(3):930942, May 2005.\n[10] J. Laiho, A. Wacker, and T. Novosad, editors. Radio\nNetwork Planning and Optimisation for UMTS. John\nWiley & Sons Ltd., 2001.\n[11] P. Lehtim\naki and K. Raivio. A SOM based approach\nfor visualization of GSM network performance data.\nIn IEA/AIE, pages 588598, 2005.\n[12] R. Prakash and V. Veeravalli. Locally optimal soft\nhandoff algorithms. IEEE Transactions on Vehicular\nTechnology, 52(2):347356, March 2003.\n[13] K. Raivio, O. Simula, and J. Laiho. Neural analysis of\nmobile radio access network. In IEEE International\n336\nConference on Data Mining, pages 457464, San Jose,\nCalifornia, USA, November 29 - December 2 2001.\n[14] M. Siponen, J. Vesanto, O. Simula, and P. Vasara. An\napproach to automated interpretation of SOM. In\nN. Allinson, H. Yin, L. Allinson, and J. Slack, editors,\nAdvances in Self-Organizing Maps, pages 8994.\nSpringer, 2001.\n[15] J. Vesanto and E. Alhoniemi. Clustering of the\nself-organizing map. IEEE Transactions on Neural\nNetworks, 11(3):586600, May 2000.\n[16] J. Zander. Radio Resource Management for Wireless\nNetworks. 
Artech House, Inc., 2001.\n337\n", "keywords": "Two-Phase Clustering Algorithm;data mining;mobility management;Key Performance Indicator of Handover;soft handover;Data Mining;Soft Handover;Visualization Capability;Neural Network Algorithm;neural networks;hierarchical clustering;Self-Organizing Map;Cluster Analysis;Histograms;3G network;Decrease in Computational Complexity"} {"name": "37", "title": "Aspect Oriented Programming for a component-based real life application: A case study", "abstract": "Aspect Oriented Programming, a relatively new programming paradigm, earned the scientific community's attention . The paradigm is already evaluated for traditional OOP and component-based software development with remarkable results. However, most of the published work, while of excellent quality, is mostly theoretical or involves evaluation of AOP for research oriented and experimental software . Unlike the previous work, this study considers the AOP paradigm for solving real-life problems, which can be faced in any commercial software. We evaluate AOP in the development of a high-performance component-based web-crawling system, and compare the process with the development of the same system without AOP. The results of the case study mostly favor the aspect oriented paradigm.", "fulltext": "INTRODUCTION\nAspect Oriented Programming, a relatively new programming\nparadigm introduced by Kiczales ([2]), recently earned\nthe scientific community's attention.\nHaving around six\nyears of life, the paradigm was already presented in important\nconferences, and recently triggered the creation of\nseveral conferences and workshops to deal with it.\nThe paradigm is already evaluated for traditional OOP\nand component-based software development and is found\nvery promising. Several evaluations consider it to be the\ncontinuation of the OOP paradigm. However, most of the\npublished work while of excellent quality is mostly theoretical\nor involves evaluation of AOP for research oriented and\nexperimental software. Unlike previous works, this study\nconsiders the AOP paradigm for solving real-life problems,\nwhich need to be faced in any commercial software. We evaluated\nAspect Oriented Programming in the development of\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nSAC'04 March 14-17, 2004, Nicosia, Cyprus\nCopyright 2004 ACM 1-58113-812-1/03/04 ...\n$\n5.00.\na high-performance component-based web-crawling system,\nand compared the process with the development of the same\nsystem without AOP. The results of the case study, mostly\nfavoring the aspect oriented paradigm, are reported in this\nwork.\nThis introduction is followed by an introduction to the\nAOP approach. We then describe the application that was\nused for our evaluation and proceed with a description of\nour evaluation scenario. We then present and comment our\nevaluation results. We continue with references to similar\nevaluation attempts, and, finally, we summarize the conclusions\nfrom our evaluation, and report on future work.\nASPECT ORIENTED PROGRAMMING\nAspect Oriented Programming, as proposed by Kiczales\n([2]), is based on the aspectual decomposition. 
Aspectual\ndecomposition is somewhat complementary to functional decomposition\n, and tries to overcome the limitation of functional\ndecomposition to capture and represent crosscutting\nfunctionality. After separating the system to functional constructs\n, aspectual decomposition is applied to the design in\norder to catch the crosscutting concerns. Crosscutting functionality\nusually includes extra-functional requirements (e.g.\ntiming constraints or logging facility to all the system components\n). This functionality is usually replicated a number\nof times, spread over the whole system. There is no single\npoint of reference where the developer can say that the\naspectual functionality belongs and should be implemented.\nThe main purpose of AOP is to capture the crosscutting\nconcerns throughout the system, and promote them as first-class\ncitizens, in order to enable the modeling and reusing of\nthem. The high level goals of such an approach, as reported\nin various publications, follow:\n1. AOP makes programming easier and faster, closer to\nthe human perception ([2, 3, 7]). Developers understand\nthe concept of crosscutting concerns and crosscutting\nfunctionality, and they use it in understanding\nthe whole system. However, apart from AOP, there is\nno easy way to implement such a crosscutting concern\n.\nWith AOP, aspects are closer to the human\nperception for crosscutting concerns and simplify the\ndesign and implementation of systems with such requirements\n. Aspects can even allow code reuse for the\nextra-functional requirements they implement, which\nusually crosscut the whole system. Thus, they make\nsystem implementation easier and faster.\n1554\n2004 ACM Symposium on Applied Computing\n2. AOP makes programming less error-prone and easier\nto debug and maintain ([2, 3, 7, 6]). Not only the code\nbecomes more modular, thus, easier to maintain and\nenhance, but also the goal for debugging is more easily\ngained (offered from the AOP inherent ability of automatic\naspect invocation). Furthermore, AOP favors\nreusability and modular representation of crosscutting\nconcerns, which make the code more readable and prevent\ntangling code.\nThe AOP approach is already used in the implementation\nof several academic-oriented systems such as [4], but there\nis not much work reported on AOP relating with commercial\nenvironment. However, we strongly believe that AOP can\nenter the industrial environment, and that it has much to\noffer. We expect to witness that in the near future.\nTHE HIGH PERFORMANCE COMPONENT-BASED WEB CRAWLER\nTo evaluate the AOP paradigm, we chose a high performance\ncomponent-based web crawler, which would serve the\nneeds of our laboratory. However, it was important for us to\nmake the crawler easily extensible and changeable in order\nto be able to reuse it in different projects. Furthermore, the\ncrawler should not be characterized as experimental (e.g.\nunstable or with extremely complicated configuration) since\nit should be reusable in a number of different projects, and\nwithout needing to know the complete infrastructure. 
We\nalso needed the crawler to be easily adjustable to different\nconfigurations, hardware, and network situations, because\nof the variety of our hardware, as this would be desired in a\nreal-life application.\nThis application was found suitable for our AOP evaluation\n, since it was of respectable size, which would give\nus the opportunity for better results.\nFurthermore, the\nnon-experimental characterization of the current application\n, which is rarely the outcome in the academic environment\n, would ensure a more practical approach of our evaluation\n. For the same reason, the extra-functional requirements\nimplemented for the evaluation, were carefully selected. It\nwas important for us to keep the whole implementation and,\nconsequently, the AOP evaluation not far from the commercial\nfield, which we feel to be the important end-user of the\nprogramming paradigms.\nHaving these points in mind, we decided to use the following\ndesign, comprising three basic multi-threaded components\n: (i) the database component, (ii) the crawling component\n, and (iii) the processing component.\nFigure 1: The architecture of a high-performance\ncomponent-based web-crawling system.\nThe database component was responsible for two tasks:\n(a) updating the database with the processed information,\nreceived from the processing component, and (b) feeding the\ncrawling component with the necessary URLs to be crawled.\nFurthermore, as in all the components, a number of threads\nwere running in parallel in each component, so that the fast\ndevices like CPU and memory (as opposed to the usually\nslow devices like I/O and network) would be more efficiently\nutilized. The number of threads running in parallel in each\ncomponent could be selected from the user, and also adjusted\ndynamically from each component for optimal performance\n. Selecting a very small number of threads, the\nuser would let fast resources like processor and the memory\nrather unutilized, while selecting an overly large number of\nthreads would result to large context switching overhead.\nThe crawling component's responsibility was to download\nthe URLs from the web and provide the processing component\nwith the page information for further processing.\nPage information included the page's URL, IP address, and\npage text. Again, the crawling component ran a number of\nthreads to maximize resource utilization.\nFinally, the processing component was responsible for receiving\nthe page information from the crawling component\nand processing it, and passing the results to the database\ncomponent for permanent storage. As in the other components\n, this component was also multi-threaded, thus utilizing\nthe resources better.\nEVALUATION SCENARIO\nTo evaluate AOP in the crawling project, we ran the following\nscenario: First, we set our metrics for the evaluation\nof AOP, trying to keep them as objective as possible; then,\nwe designed the component-based web crawler and located\nthe different functionalities that could be modeled as aspects\n. Following that, we implemented and tested the three\ncomponents independently. The implementation up to that\npoint did not include any of the functionalities identified as\naspects in the earlier step. Finally, we tried to integrate the\nthree components, and also include the extra functionality,\nimplemented with and without AOP.\nOur selection for the metrics was mostly to favor (as much\nas possible) objective results. 
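Before listing those metrics, the three-component pipeline described above can be made concrete with a short sketch: each stage runs its own pool of worker threads and hands its output to the next stage through a bounded queue, so the number of threads per stage can be tuned (or adapted) independently. This is an illustrative Python analogue only — the stage bodies, queue sizes and thread counts are invented for the example and are not the authors' implementation.

# Illustrative sketch of the three-component pipeline: each stage has its own
# pool of worker threads and feeds the next stage through a bounded queue.
import threading, queue

def run_stage(name, n_threads, work, in_q, out_q):
    def worker():
        while True:
            item = in_q.get()
            result = work(item)
            if out_q is not None and result is not None:
                out_q.put(result)
            in_q.task_done()
    for i in range(n_threads):
        threading.Thread(target=worker, name=f"{name}-{i}", daemon=True).start()

url_q, page_q, result_q = queue.Queue(100), queue.Queue(100), queue.Queue(100)

# Hypothetical stage bodies -- the real components download, parse and store.
crawl   = lambda url:  {"url": url, "text": "<html>...</html>"}
process = lambda page: {"url": page["url"], "words": len(page["text"].split())}
store   = lambda rec:  print("stored", rec)

run_stage("crawler",   4, crawl,   url_q,    page_q)    # downloading threads
run_stage("processor", 2, process, page_q,   result_q)  # parsing threads
run_stage("database",  1, store,   result_q, None)      # storage threads

for u in ("http://example.org/a", "http://example.org/b"):
    url_q.put(u)
url_q.join(); page_q.join(); result_q.join()            # wait for the pipeline to drain

With the pipeline shape in mind, the next question was how to compare the two implementations objectively.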
Our goal, as Murphy in [5]\nsuggests, was to answer two important questions: (a) if AOP\nmakes it easier to develop and change a certain category of\nsoftware (usefulness), and (b) what is the effect of AOP in\nthe software development (usability). For these reasons, we\nselected the following metrics:\n1. We measure effectiveness of AOP for implementing the\nextra functionality, compared to traditional OOP.\n2. We measure the learning curve of AOP methodology.\n3. We measure time that took to complete the project\nwith the two approaches, AOP and traditional OOP.\n4. We measure complete lines of code for the added functionality\nwith AOP and with traditional OOP.\n5. We compare code tangling in the AOP and the traditional\nOOP model.\n6. We report on the stability of the AOP model for creating\ncomponent-based software.\nThe types of functionality that we identified as being best\nmodeled as aspects were the following ones:\n1555\nLogging : This functionality requires saving extended program\nexecution trace to a file or printing it to the\nscreen.\nThe trace should include entrance and exit\nmessages from the methods, exceptions thrown, and\ntime of each event.\nOverloading checks : Since the crawling function is expensive\nin resources, we must constantly check for overloading\nin any of the resources, in order to avoid driving\nthe machines to collapse. The two resources we\nhad to monitor were the DNS server that was serving\nour crawler and the machine that was hosting our\ncrawling database.\nDatabase optimizer : Even with the combination of the\nexpensive high performance hardware and software that\nwas used for the database server, we still needed to\nfollow some optimization techniques to minimize the\nneed for database connectivity. This was due to the\nheavy load that our database server experienced from\nthe crawling function.\nThe Logging aspect, the most common aspect in AOP,\nwas mostly to help debugging during the developing stage\nof the application, but it would also be used for identifying\nbottlenecks (profiling) and performing optimizations to\nthe components in a later stage. When the logging aspect\nwas enabled, entering or exiting a method would print (to\nstderr) the method's name, the exact time, and some other\nuseful information. Moreover, a method throwing an exception\nwould result in invoking the logging aspect to print the\nexception with the method's name in stderr.\nThe overloading checks were broken in two aspects, the\nDNS monitoring aspect and the database monitoring aspect\n. The DNS monitoring aspect was trying to adjust the\nnumber of active downloading threads according to the DNS\nserver status. More to the point, the problem we faced was\nthat the DNS server that was serving our crawler was shared\nwith other machines, some of them running experimental\nsoftware, doing extensive use of the DNS server for DNS\nresolution. This practically meant that the efficiency of the\nDNS server was dependent of the number of software clients\nusing it in parallel. Running more than the appropriate (for\neach moment) downloading threads in our crawler (that were\ndoing the DNS resolution) resulted in more DNS resolution\nrequests that our DNS server could handle, and eventually,\ncollapsing of our DNS server. On the other hand, underestimating\nour DNS server's abilities in low-usage hours would\nresult in significantly lower crawling speed. 
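Both of the concerns above — the method-trace logging and the load-dependent throttling — are crosscutting. The logging one is the simplest to illustrate. The sketch below is a Python-decorator analogue of what the logging aspect provides (entry and exit messages, exceptions and timestamps printed to stderr, attached in one place rather than repeated inside every traced method); it is not the authors' AspectJ code, and the Downloader class is a hypothetical stand-in for a crawler component.

# Python-decorator analogue of the logging aspect: entry/exit/exception
# traces go to stderr and are attached in one place instead of being
# repeated inside every traced method.  (The real system used AspectJ.)
import sys, time, functools, inspect

def logged(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"{time.strftime('%H:%M:%S')} ENTER {func.__qualname__}", file=sys.stderr)
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            print(f"{time.strftime('%H:%M:%S')} EXCEPTION in {func.__qualname__}: {exc!r}",
                  file=sys.stderr)
            raise
        finally:
            print(f"{time.strftime('%H:%M:%S')} EXIT {func.__qualname__}", file=sys.stderr)
    return wrapper

def log_all_methods(cls):
    """Rough stand-in for a pointcut: wrap every method of the class at once."""
    for name, member in inspect.getmembers(cls, predicate=inspect.isfunction):
        setattr(cls, name, logged(member))
    return cls

@log_all_methods
class Downloader:                          # hypothetical crawler component
    def fetch(self, url):
        return "<html>" + url + "</html>"

Downloader().fetch("http://example.org")

The DNS-load problem just described was handled in the same crosscutting spirit.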
For these reasons\nwe constructed and used the DNS monitoring aspect,\nwhich would adjust the number of the downloading threads\naccording to the running DNS load. Each DNS resolution\nwas timed, and when discovering latency higher than expected\n, we were temporarily pausing some of the downloading\nthreads (the pause time and the number of the threads\nthat we were pausing were analogous to the latency), thus,\ncausing less DNS lookups in a specific time.\nThe database monitoring aspect's goal was to disable overloading\nin the database machine.\nA similar approach to\nthe DNS monitoring aspect was used. We were monitoring\nthe responses from our database server and when we\nwere detecting overloading of the database we would pause\nsome of the downloading threads. The reason that we could\nnot predict the ideal number for the database component\nthreads from the beginning was because of the variety of\nthe web-pages. For example, a web-page with many new\nwords (words that are for first time parsed from the crawler)\nwould result in much database load, while words that are\nseen before from the crawler would result in much less (due\nto some optimizations, similar to those proposed in\n[1]).\nFor these reasons, we constructed the database monitoring\naspect to monitor database queries. The aspect would time\nevery interaction with the database server and try to detect\noverloading. When the time demanded for the query was\nbigger than a threshold (all the queries we were executing\nwere having the same average time for execution in normal\ncircumstances), we would pause some of the downloading\nthreads for some time, in order to allow the database server\nto complete its work without extra work added at the same\ntime. Later on, the downloading threads would resume their\nwork.\nThese two last aspects would not contradict each other,\nsince they were both doing the same action, pausing some of\nthe downloading threads. However, the pause time and the\nnumber of the downloading threads to pause were not the\nsame in the two cases. Each of the aspects was calculating\nthe time and the number of threads to pause with a different\nalgorithm.\nFinally, we also constructed the database optimizer aspect\nwhich acted as a database cache and released some of\nthe database load. More specifically, for the parsing function\nwe were making heavy usage of the crawling dictionary\ntable from the database. That dictionary was matching every\nword we found up to the moment with its id number.\nThe choice was to avoid needless and costly replication of\ndata and enable saving the page text as numbers (smaller\nin storing size and faster in seeking). By keeping a memory\ncache of that table as in Brin's implementation ([1]), we\nwould manage to get important workload off the database\nserver and speed things up. More to the point, prior addressing\nthe database for a word's serial number, we were\nquerying an indexed structure in the local memory. If the\nquery failed, we were then inserting the word in the database\nand in the RAM dictionary and continuing our work. This\nminimized the database interactions and boosted the complete\nprocess, since RAM access was enormously faster than\naccess to the database. 
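The word-dictionary optimization described above amounts to a memoizing cache in front of the database: look the word up in an in-memory table first, and only on a miss insert it into the database and record its serial number locally. A minimal sketch follows; the database insert is a hypothetical placeholder, since only the shape of the cache matters here.

# Sketch of the database optimizer: an in-memory word -> id table consulted
# before the database, so only previously unseen words cause a database
# round trip.  `db_insert` is a hypothetical placeholder for the real insert.
class WordDictionaryCache:
    def __init__(self, db_insert):
        self._ids = {}                   # word -> serial number, kept in RAM
        self._db_insert = db_insert      # called only on a cache miss

    def word_id(self, word):
        wid = self._ids.get(word)
        if wid is None:                  # unseen word: one database interaction
            wid = self._db_insert(word)
            self._ids[word] = wid
        return wid

_serial = 0
def db_insert(word):                     # toy stand-in returning a fresh serial number
    global _serial
    _serial += 1
    return _serial

cache = WordDictionaryCache(db_insert)
text = "the crawler stores the page text as numbers"
print([cache.word_id(w) for w in text.split()])   # repeated words reuse the cached id

How much this helps depends entirely on the hit rate of the in-memory table.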
Processing English language pages\nwith an average-size dictionary of 1 million words would result\nto around 99,9% success from the RAM table, thus, it\nwould prevent querying the database very efficiently.\nEVALUATION RESULTS\nAs already mentioned, these four aspects were implemented\nin two distinct ways: (a) injected in the program code, using\nstandard OOP approach, and (b) modeled and implemented\nas aspects. The two versions were then compared and evaluated\nin the described metrics. The results from the evaluation\nwere mostly in favor of the AOP methodology. While\nthe developers were not long experienced in AOP, the new\nmodel boosted the implementation speed and helped in more\nmodular software.\nRegarding effectiveness of the AOP approach compared\nto the traditional OOP approach, the two approaches were\nthe same. We managed to add the extra functionality in\nboth versions of the software (however, it was not always\ntrivial to do so). In short, for the presented aspects there\nwas always an AOP-oriented and an OOP-oriented solution\n1556\navailable, and there was not a noticeable performance difference\nbetween the two.\nRegarding the time demanded to learn the AOP methodology\n, this was not significant. Both the developers that\nwere working on the project were very experienced with\nOOP, but did not have previous practical experience with\nAOP. Fortunately for the project, they were able to learn\nAOP sufficiently without tutoring using only publicly available\nonline sources in a single week. There was also another\nshort overhead of one day for installing and getting familiar\nto an AOP-aware IDE (we used Eclipse with the AOP\nmodules).\nThe complete time that was required to finish the crawler\nwas shorter in the AOP version (this time did not include the\ntime spent for learning AOP however). Both the versions\nused the same core already developed (the three components\ndemonstrated earlier) but they were continued com-pletely\nindependently, without reusing knowledge or code\nfrom one version to the other (the nature of the two versions\nprohibited reusing knowledge or code anyway). The\ntime demanded for completing the crawler with the aspects\nin the AOP version was 7 man-hours, while the OOP version\ndemanded 10 man-hours in order to design and develop\nthe code. Most of this time, in the case of the OOP\nversion, was needed for locating the methods and putting\nthe necessary code to them. For implementing the logging\nfunctionality for instance, in the OOP version there were\n73 such methods counted, while AOP did not demand this\ntask since the pointcuts were found automatically from the\naspect definition. It was the developers' feeling that most of\nthe man-hours spent in the OOP version of the crawler were\nwasted, because they were repeating trivial code in the application\n. Furthermore, as they said, the result in the OOP\ncase was not satisfactory for them since, if they needed to\nchange something in an aspect, they should relocate the aspect\ncode from the beginning and this would be difficult to\nbe done.\nWe also measured the number of lines we needed to add\nin the two approaches to implement the extra functionality.\nFor the logging aspect with the AOP approach, we needed\nless than 20 lines in one single file, while the same functionality\nfor the traditional OOP version required 126 lines of\ncode spread in eight different files (the number of lines for\nthe AOP code also include the pointcuts definitions and the\njava include directives). 
This, apart from a time-demanding\napproach, also reveals important code tangling since we had\nto modify eight classes for a simple logging requirement.\nThe other two aspects, the DNS monitoring and the database\nmonitoring aspect, needed roughly the same number of lines\nfor the two versions. To implement both the DNS monitoring\nand the database monitoring functionality, we needed\naround 30 lines for the pure OOP solution: (a) four lines\nfor timing the DNS or the database query, (b) ten lines for\nchecking for overloading, and proceeding in alternate behavior\nif overloading occurs, and finally (c) one line for invoking\nthe check wherever needed. In the AOP solution, we\nwere able to join the two concerns in a single aspect - something\nthat we were unable to do in the OOP version - and\nreuse some of the code. The AOP version of the solution\ndemanded roughly the same number of lines, around forty\nfor both the concerns (the additional code was because of\naspect and advice headers and the pointcuts definition).\nFinally, the database optimizer needed the same number\nof lines in the two versions, that is forty lines. These lines\nin the OOP version were split in three different places in\nthe original database component file, while at the AOP approach\nthe original file was kept intact and all the new code\nwas in a single aspect-definition file.\nWe also tried to capture the code tangling that occurs in\nthe two versions, after the extra functionality is added. To\ndo that, we found the distribution of the added code in the\neight affected files. The OOP version of the logging aspect,\nas expected, was spread in all the eight files in seventy-three\ndifferent places. The OOP version of the DNS monitoring\naspect resulted in addition of code in one file only, the down-loader\ncomponent file, in three different places. Similarly,\nthe OOP version of the database monitoring aspect resulted\nin addition of code in the database component file, in three\ndifferent places. Finally, the database optimizer aspect implementation\n, without AOP, also resulted in code addition\nin three different places (again, in the database component\nfile).\nOn the other hand, implementation of the four aspects\nwith the AOP approach, as expected, created no code tangling\n. The complete code for the extra functionalities was\nincluded in the three (instead of four, since the DNS and the\ndatabase monitoring concerns were implemented as a single\naspect) aspect files. For the case of the database optimizer,\nthis offered us another important advantage since we often\nneeded to disable the database optimizer due to hardware\n(memory) limitations in weaker machines.\nWhile we did\nnot take any provision for that, the AOP version, unlike\nthe OOP version, enabled removal of the optimizer without\nchanging any code. In the OOP version, the developer had\nto remove or modify some of the original code.\nTable 1 summarizes the results for the code size and code\ntangling for the four aspects:\nFinally, the implementation of AOP we used, combined\nwith the IDE tool, were stable and did not cause us any unexpected\nproblems (such as bugs in the compiler). While the\ncrawling system was not extremely big, it did make extensive\nuse of the machines' resources, and AspectJ compiled\nfiles did not face any trouble with that. 
AspectJ compiled\nfiles proved to work fine under pressure with the standard\nvirtual machine, and aspects introduction was not causing\na noticeable overhead to the machines.\nRELATED WORK\nSeveral publications try to evaluate AOP. Almost all of\nthem report results similar to ours. However, while of excellent\nquality, most of the previous work we are aware of\nfollow a theoretical approach or limit their hands-on evaluation\nfor academic or experimental software. We will now\nbriefly comment on some of them.\nWalker in [8] constructs several experiments and a case-study\nto evaluate AOP. The outcome of the evaluation is\nthat AOP can help faster software development (programming\n, debugging, etc.) under certain conditions, while other\ncases make development of AOP less attractive. While of superb\nquality and significant importance, this work is limited\nto the evaluation of AOP based on a preliminary version\nof AspectJ, version 0.1.\nSince then, AOP and especially\nAspectJ, changed significantly confronting most of the limitations\ndetected in the evaluation, and also powering the\nusers with more functionalities. Furthermore, CASE tools\nand powerful IDE environments were developed to assist the\n1557\nAspect\n# Lines of code\n# Places to add\n# Files to add\nOOP\nAOP\nOOP\nAOP\nOOP\nAOP\nLogging\n126\n19\n73\n1\n8\n1\nDNS Monitoring\n15\n40\n3\n1\n1\n1\nDatabase Monitoring\n15\n3\n1\nDatabase optimizer\n45\n45\n3\n1\n1\n1\nTable 1: Size of added code and code tangling for implementation of the aspects. The two aspects, DNS and\nDatabase monitoring were easily joined to a single aspect in the AOP version\ndevelopers in the process.\nMendhekar in [4] also presents a case study, evaluating\nAOP in an image processing application. Although AOP\nwas then still in infancy, this case study presents results very\nsimilar to ours. However, Mendhekar, being in Xerox labs\nwhere AOP was born, follows a more research-oriented approach\nduring the evaluation. The evaluation uses an AOP\nimplementation that cannot easily be used from people outside\nthe Xerox environment. Also, being interested in performance\n, this work does not elaborate on various other important\nmeasures, such as the learning curve and the time\nthat took the developer to complete.\nSeveral other important publications ([2, 3, 7, 6]) evaluate\nAOP from mostly a theoretical approach. Most of them also\nreport results that favor AOP programming. Some of their\nresults are reported in section 3 of this report.\nCONCLUSIONS\nDuring the construction of the component-based high-performance\nweb crawler, we had the opportunity to evaluate\nthe relatively new aspect oriented paradigm for building\ncomponent-based systems. Having defined our extra functionality\n, we implemented and compared the two versions of\nthe web crawler, the AOP and the OOP one. For the required\nextra functionality, both the paradigms proved able\nto implement a correct solution. The quantity of code (number\nof lines) that the developer needed to implement in the\ntwo versions was not of much difference, with the only exception\nof the logging aspect where the OOP implementation\nwas much larger than the AOP one. Furthermore, in\nboth the versions of the application there was no apparent\nperformance difference. Both the versions were stable, even\nwhen working under high load and in varying system environments\n. 
The significant difference however between the\ntwo implementations was in the time required to develop and\ndebug each of them, and the quality of the produced code.\nThe AOP approach not only completed the system faster,\nbut it also produced modular high quality code, while the\ntraditional approach was creating the well-known spaghetti\ncode.\nMore specifically, the AOP version was having all\nthe extra functionality apart of the code implementing the\nstandard functional requirements. This not only kept the\noriginal components reusable in different implementations,\nbut also prevented tangling the code, thus, making future\nmaintenance easier. Furthermore, this enabled us to easily\nenable and disable the extra functionality, depending on the\nhardware resources available and on our requirements.\nConcluding, we have to report that the AOP model in general\nappears to favor the development of quality component-based\nsoftware. The AOP model itself is able to boost the\nimplementation speed without negatively affecting quality\nof the software. Moreover, the learning time of the model,\njudging from our experience, is not long. While not having\nmuch experience of AOP implementation languages, we\nwere able to produce AOP-based code in no time. Finally,\nwhile AOP cannot offer any solution to problems unsolv-able\nfrom traditional approaches, and while AOP does not\nalways target to less code, it can offer better and easier solutions\nto programs that are otherwise difficult to be implemented\n. Therefore, we can safely arrive to the conclusion\nthat AOP has much to offer in component-based software\ndevelopment. We strongly believe that integration of AOP\nwith component-based software is going to be the target of\nimportant research attempts in the near future and can produce\nsome very interesting results, and we await for the introduction\nof AOP software in commercial component-based\nsoftware products.\nREFERENCES\n[1] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual Web search engine. Computer Networks\nand ISDN Systems, 30(17):107117, 1998.\n[2] G. Kiczales, J. Lamping, A. Menhdhekar, C. Maeda,\nC. Lopes, J.-M. Loingtier, and J. Irwin.\nAspect-oriented programming. In Proceedings of the\nEuropean Conference on Object-Oriented Programming\n(ECOOP), LNCS 1241, pages 220242,\nSpringer-Verlag, 1997.\n[3] C. Lopes. D: A Language Framework for Distributed\nProgramming. PhD thesis, College of Computer\nScience, Northeastern University, November 1997.\n[4] A. Mendhekar, G. Kiczales, and J. Lamping. RG: A\ncase-study for aspect-oriented programming. Technical\nReport SPL97-009 P9710044, Xerox Palo Alto Research\nCenter, Palo Alto, CA, USA, February 1997.\n[5] G. C. Murphy, R. J. Walker, and E. L. Baniassad.\nEvaluating emerging software development\ntechnologies: Lessons learned from assessing\naspect-oriented programming. Technical Report\nTR-98-10, Department of Computer Science, University\nof British Columbia, 1998.\n[6] A. Navasa, M. A. Perez, J. Murillo, and J. Hernandez.\nAspect oriented software architecture: a structural\nperspective. In Proceedings of the Aspect-Oriented\nSoftware Development, 2002, The Netherlands.\n[7] D. Shukla, S. Fell, and C. Sells. Aspect-oriented\nprogramming enables better code encapsulation and\nreuse. MSDN Magazine,\nhttp://msdn.microsoft.com/msdnmag/, March 2002.\n[8] R. J. Walker, E. L. A. Baniassad, and G. C. 
Murphy.\nAn initial assessment of aspect-oriented programming.\nTechnical Report TR-98-12, Department of Computer\nScience, University of British Columbia, Sept. 1998.\n1558\n", "keywords": "case study;evaluation;Web Crawler Implementation;AOP;component-based application;Aspect Oriented Programming;development process experiment metrics;Software development process experiment;programming paradigm comparison;OOP;Object Oriented Programming;programming paradigms;Aspect Oriented Programming application"} {"name": "38", "title": "Attack-Resilient Hierarchical Data Aggregation in Sensor Networks", "abstract": "In a large sensor network, in-network data aggregation, i.e., combining partial results at intermediate nodes during message routing, significantly reduces the amount of communication and hence the energy consumed. Recently several researchers have proposed robust aggregation frameworks, which combine multi-path routing schemes with duplicate-insensitive algorithms, to accurately compute aggregates (e.g., Sum, Count, Average) in spite of message losses resulting from node and transmission failures. However, these aggregation frameworks have been designed without security in mind. Given the lack of hardware support for tamper-resistance and the unattended nature of sensor nodes, sensor networks are highly vulnerable to node compromises. We show that even if a few compromised nodes contribute false sub-aggregate values, this results in large errors in the aggregate computed at the root of the hierarchy. We present modifications to the aggregation algorithms that guard against such attacks, i.e., we present algorithms for resilient hierarchical data aggregation despite the presence of compromised nodes in the aggregation hierarchy. We evaluate the performance and costs of our approach via both analysis and simulation . Our results show that our approach is scalable and efficient.", "fulltext": "INTRODUCTION\nIn large sensor networks, computing aggregates in-network, i.e.,\ncombining partial results at intermediate nodes during message routing\n, significantly reduces the amount of communication and hence\nthe energy consumed [11, 23]. An approach used by several data\nacquisition systems for sensor networks is to construct a spanning\ntree rooted at the querying node, and then perform in-network aggregation\nalong the tree. Partial results propagate level-by-level up\nthe tree, with each node awaiting messages from all its children\nbefore sending a new partial result to its parent.\nTree-based aggregation approaches, however, are not resilient to\ncommunication losses resulting from node and transmission failures\n, which are relatively common in sensor networks [11, 22,\n23]. Because each communication failure loses an entire subtree\nof readings, a large fraction of sensor readings are potentially un-accounted\nfor at the querying node, leading to a significant error\nin the query answer. To address this problem, researchers have\nproposed the use of multi-path routing techniques for forwarding\nsub-aggregates [11]. For aggregates such as Min and Max which\nare duplicate-insensitive, this approach provides a fault-tolerant solution\n. For duplicate-sensitive aggregates such as Count and Sum,\nhowever, multi-path routing leads to double-counting of sensor readings\n, resulting in an incorrect aggregate being computed.\nRecently researchers [3, 12, 14] have presented clever algorithms\nto solve the double-counting problem associated with multi-path\napproaches. 
A robust and scalable aggregation framework called\nSynopsis Diffusion\nhas been proposed for computing duplicate- sensitive\naggregates such as Count and Sum. There are two primary\nelements of this approach - the use of a ring-based topology instead\nof a tree-based topology for organizing the nodes in the aggregation\nhierarchy, and the use of duplicate-insensitive algorithms for\ncomputing aggregates based on Flajolet and Martin's algorithm for\ncounting distinct elements in a multi-set [5].\nAs presented, the Synopsis Diffusion aggregation framework does\nnot include any provisions for security. Although we can easily prevent\nunauthorized nodes from launching attacks by augmenting the\naggregation framework with authentication and encryption protocols\n[15, 24], compromised nodes present an entirely new set of security\nchallenges. The lack of tamper-resistance and the unattended\nnature of many networks renders sensor nodes highly vulnerable to\ncompromise. Standard authentication mechanisms cannot prevent\na compromised node from launching attacks since all its keys are\nalso compromised. In this paper, we present novel mechanisms for\nmaking the synopsis diffusion aggregation framework resilient to\nattacks launched by compromised nodes.\nWe present counter-measures against attacks in which a compromised\nnode attempts to change the aggregate value computed at the\nroot of the hierarchy. In particular, we focus on an attack in which\n71\na sensor node that is not a leaf node in the aggregation hierarchy\nrelays a false sub-aggregate value to its parents. We refer to this\nattack as the falsified sub-aggregate attack.\nWe show that if the synopsis diffusion approach is used to compute\naggregates such as Count and Sum, an adversary can use the\nfalsified sub-aggregate attack to cause the answer computed at the\nbase station in response to a query to differ from the true value by\nan arbitrary amount. Moreover, we show that this attack can be\nlaunched with a high rate of success, even if only one or a small\nnumber of nodes are compromised.\nWe present an approach in which the synopsis diffusion aggregation\nframeork is augmented with a set of countermeasures that\nmitigate the effect of the falsified sub-aggregate attack. In our approach\n, a subset of the total number of nodes in the network include\nan authentication code (MAC) along with their response to a query.\nThese MACs are propagated to the base station along with the partial\nresults that are computed at each level in the hierarchy. By verifying\nthese MACs, the base station can estimate the accuracy of\nthe final aggregate value it computes, and can filter out the effect of\nany false sub-aggregates contributed by compromised nodes. Thus,\nour approach can be used in conjunction with synopsis diffusion to\ncompute basic aggregates such as Count and Sum despite the presence\nof compromised nodes in the aggregation hierarchy.\nThe communication overhead of our approach depends upon the\nnumber of contributing nodes which send a MAC to the base station\n. We evaluate the performance and costs of our approach via\nboth analysis and simulation. We show that our approach is scalable\nsince the number of contributing nodes (and hence the average\ncommunication overhead) do not increase with network size. 
To\nfurther reduce the communication overhead, we describe a variation\nof our basic approach that trades communication costs for\nlatency.\nBACKGROUND SYNOPSIS DIFFUSION FOR ROBUST AGGREGATION\nIn this section, we provide a brief overview of the synopsis diffusion\napproach for robust aggregation [3, 14]. Figure 1 illustrates\nhow the synopsis diffusion approach uses a rings topology for aggregation\n.\nR0\nR1\nR2\nq\nC\nB\nA\nD\nFigure 1: Synopsis Diffusion over a rings topology\nIn the query distribution phase, nodes form a set of rings around\nthe querying node q based on their distance in hops from q. During\nthe subsequent query aggregation period, starting in the outermost\nring each node generates a local synopsis s\n= SG(v) where v is the\nsensor reading relevant to the query, and broadcasts it. (SG\n() is\nthe synopsis generation function.) A node in ring R\ni\nwill receive\nbroadcasts from all the nodes in its range in ring R\ni\n+1\n. It will then\ncombine its own local synopsis with the synopses received from its\nchildren using a synopsis fusion function SF\n(), and then broadcast\nthe updated synopsis. Thus, the fused synopses propagate level-by\n-level until they reach the querying node, who first combines the\nreceived synopses with its local synopsis using SF\n() and then uses\nthe synopsis evaluation function SE\n() to translate the final synopsis\nto the answer to the query.\nThe functions SG\n(), SF(), and SE() depend upon the target aggregation\nfunction, e.g. Count, Sum, etc. We now describe the\nduplicate-insensitive synopsis diffusion algorithms for the Count\naggregate, i.e., the total number of nodes in the sensor network,\nand the Sum aggregate, i.e., the sum of the sensor readings of the\nnodes in the network. These algorithms are based on Flajolet and\nMartin's well-known probablistic algorithm for counting the number\nof distinct elements in a multi-set[5].\n2.1\nCOUNT\nIn this algorithm, each node generates a local synopsis which is\na bit vector ls of length k\n> log n, where n is an upper bound on\nthe nodes in the network. To generate its local synopsis, each node\nexecutes the function CT\n(X, k) given below, where X is the node's\nidentifier and k is the length of ls in bits. CT\n() can be interpreted\nas a coin-tossing experiment (with a cryptographic hash function\nh\n(), modeled as a random oracle whose output is 0 or 1, simulating\na fair coin-toss), which returns the number of coin tosses until the\nfirst heads occurs or k\n+ 1 if k tosses have occurred with no heads\noccurring. In the local synopsis ls of node X , a single bit i is set to\n1, where i is the output of CT\n(X, k). Thus ls is a bitmap of the form\n0\ni\n-1\n1\nwith probability 2\n-i\n.\nAlgorithm 1 CT\n(X, k)\ni=1;\nwhile i\n< k + 1 AND h(X, i) = 0 do\ni\n= i + 1;\nend while\nreturn i;\nThe synopsis fusion function SF\n() is simply the bitwise Boolean\nOR of the synopses being combined. Each node fuses its local\nsynopsis ls with the synopses it receives from its children by computing\nthe bit-wise OR of all the synopses. Let S denote the final\nsynopsis computed by the querying node by combining all the synopses\nreceived from its children and its local synopsis. We observe\nthat S will be a bitmap of length k of the form 1\nr\n-1\n0\n. The querying\nnode can estimate Count from S via the synopsis evaluation\nfunction SE\n(): if r is the lowest-order bit in S that is 0, the count\nof nodes in the network is 2\nr\n-1\n/0.7735. 
The synopsis evaluation\nfunction SE\n() is based on Property 2 below. Intuitively, the number\nof sensor nodes is proportional to 2\nr\n-1\nsince no node has set the rth\nbit while computing CT\n(X, k).\nWe now present a few important properties of the final synopsis S\ncomputed at the querying node that have been derived in [5, 3], and\nthat we will find useful in the rest of this paper. Let S\n[i], 1 i k\ndenote the ith bit of S, where bits are numbered starting at the left.\nProperty 1\nFor i\n< log\n2\nn\n-2log\n2\nlog\n2\nn\n, S\n[i] = 1 with probability\n1. For i\n3\n2\nlog\n2\nn\n, S\n[i] = 0 with probability 1.\nThis result implies that for a network of n nodes, we expect that\nS\nhas an initial prefix of all ones and a suffix of all zeros, while\nonly the bits around S\n[log\n2\nn\n] exhibit much variation. This provides\nan estimate of the number of bits, k, required for a node's local\nsynopsis. In practice, k\n= log\n2\nn\n+ 4 bits are sufficient to represent\nS\nwith high probability [5]. This result also indicates that the length\nof the prefix of all ones in S can be used to estimate n. Let r\n=\n72\nmin\n{i|S[i] = 0}, i.e., r is the location of the leftmost zero in S.\nThen R\n= r -1 is a random variable representing the length of the\nprefix of all ones in the sketch. The following results hold for R.\nProperty 2\nThe expected value of R, E\n(R) log\n2\n(n) where the\nconstant\nis approximately 0.7735.\nThis result implies that R can be used for an unbiased estimator\nof log\n2\n(n), and it is the basis for the synopsis evaluation function\nSE\n() which estimates n as 2\nR\n/.\nProperty 3\nThe variance of R, denoted as\n\n2\nR\nn\n, satisfies\n\n2\nR\nn\n=\n2\nR\n\n+ Q(log\n2\nn\n) + o(1),\nwhere constant\n\nR\n\nis approximately 1\n.1213 and Q(x) is a periodic\nfunction with mean value 0 and period 1.\nThis property implies that the standard deviation of R is approximately\n1\n.1213, i.e., the estimates of n derived from R will often\nbe off by a factor of two or more in either direction. To reduce\nthe standard deviation of R, Flajolet et al [5] proposed an algorithm\nnamed PCSA, where m synopses are computed in parallel and the\nnew estimator (\nR\n) is the average of all individual R's of these synopses\n. For PCSA, the standard error in the estimate of n, i.e.,\n\nn\n/n,\nis equal to 0\n.78/m [5].\nProperty 4\nIn a network of n nodes, the expected number of nodes\nthat will have the ith bit of their local synopsis ls\n[i] = 1 is n/2\ni\n. This\nresult implies that the expected number of nodes that contribute a 1\nto the ith bit of S and the bits to the right of the ith bit in S (i.e., bits\nj\n, where i\nj k) is n/2\ni\n-1\n.\n2.2\nSUM\nConsidine et al. [3] extended the Count algorithm described above\nfor computing the Sum aggregate. The synopsis generation function\nSG\n() for Sum is a modification of that for Count while the\nfusion function SF\n() and the evaluation function SE() for Sum are\nidentical to those for Count.\nTo generate its local synopsis for a sensor reading v, a node X\ninvokes the function CT\n() v times\n1\nand ORs the results. As a result,\nthe local synopsis of a node is a bitmap of length k\n= log\n2\nu\ns\n+ 4\nwhere u\ns\nis an upper bound on the value of Sum aggregate. Unlike\nthe local synopsis of a node for Count, more than one bit in the\nlocal synopsis of a node for Sum will be equal to 1. Count can\nbe considered as a special case of Sum where each node's sensor\nreading is equal to one unit.\nConsidine et al. 
[3] proposed an optimized version of SG\n() for\nSum to make it suitable for a low-end sensor node, even if the\nsensed value v is high. Moreover, they showed that Properties 14\ndescribed above for Count also hold for Sum (with appropriate\nmodifications). Similarly, as in the case of Count, the PCSA algorithm\ncan be used to reduce the standard deviation of the estimate\nfor Sum.\nATTACKS ON SYNOPSIS DIFFUSION\nThe Synopsis Diffusion aggregation framework does not include\nany provisions for security; as a result, it is vulnerable to many attacks\nthat can be launched by unauthorized or compromised nodes.\nTo prevent unauthorized nodes from eavesdropping on or participating\nin communications between legitimate nodes, we can augment\nthe aggregation framework with any one of several recently\nproposed authentication and encryption protocols [15, 24]. However\n, compromised nodes pose an entirely new set of security challenges\n.\nSensor nodes are often deployed in unattended environments, so\nthey are vulnerable to physical tampering. Current sensor nodes\n1\nEach sensor reading is assumed to be an integer\nlack hardware support for tamper-resistance. Consequently, it is\nrelatively easy for an adversary to compromise a node without being\ndetected. The adversary can obtain confidential information\n(e.g., cryptographic keys) from the compromised sensor and repro-gram\nit with malicious code.\nA compromised node can be used to launch multiple attacks\nagainst the sensor application. These attacks include jamming at\nphysical or link layer, other denial of service attacks like flooding,\nroute disruption, message dropping, message modification, false\ndata injection and many others. Standard authentication mechanisms\ncannot prevent these insider attacks since the adversary knows\nall the keying material possessed by the compromised nodes.\nIn this paper, we focus on defending against an important subclass\nof these insider attacks which can potentially corrupt the final\nresult of the aggregation query. Below we describe these attacks in\nthe context of the Count and Sum aggregates.\nA compromised node M can corrupt the aggregate value computed\nat the root (i.e., the sink) of the hierarchical aggregation\nframework in three ways. First, M can simply drop aggregation\nmessages that it is supposed to relay towards the sink. If M is located\nat a relatively high position in the aggregation hierarchy, this\nhas the effect of omitting a large fraction of the set of sensor readings\nbeing aggregated. Second, M can falsify its own sensor reading\nwith the goal of influencing the aggregate value. Third, M can\nfalsify the sub-aggregate which M is supposed to compute based\non the messages received from M's child nodes.\nThe effect of the first attack in which a node intentionally drops\naggregation messages is no different from the effect of transmission\nand node failures, which are common in sensor networks [7].\nThe synopsis diffusion approach employs multi-path routing for addressing\nthese failures, and thus it also addresses message losses\ndue to compromised nodes [3, 12, 14]. We refer to the second attack\nin which a sensor intentionally falsifies its own reading as the\nfalsified local value attack\n. This attack is similar to the behavior of\nnodes with faulty sensors and can be addressed by well-studied approaches\nfor fault tolerance such as majority voting and reputation-based\nframeworks [10, 6]. 
The third attack, however, in which a\nnode falsifies the aggregate value it is relaying to its parents in the\nhierarchy is much more difficult to address, and is the main focus\nof this paper. We refer to this attack as the falsified sub-aggregate\nattack\n.\nThe Falsified Sub-Aggregate Attack\nSince the sink estimates the\naggregate based on the lowest-order bit r that is 0 in the final fused\nsynopsis, a compromised node would need to falsify its own fused\nsynopsis such that it would affect the value of r. It can accomplish\nthis quite easily by simply inserting ones in one or more bits in positions\nj, where r\nj k, in its own fused synopsis which it broadcasts\nto its parents. Note that the compromised node does not need\nto know the true value of r; it can simply set some higher-order bits\nto 1 in the hope that this will affect the value of r computed by the\nsink. Since the synopsis fusion function is a bitwise Boolean OR,\nthe resulting synopsis computed at the sink will reflect the contributions\nof the compromised node.\nLet r\n\nbe the lowest-order bit that is 0 in the corrupted synopsis,\nwhereas r is the lowest-order bit that is 0 in the correct synopsis.\nThen the sink's estimate of the aggregate will be larger than the\ncorrect estimate by a factor of 2\nr\n\n-r\n. It is easy to see that, with the\nabove technique, the compromised node can inject a large amount\nof error in the final estimate of the sink.\nWe also observe that even a single node can launch this attack\nwith a high rate of success because the use of multi-path routing\nin the synopsis diffusion approach makes it highly likely that the\nfalsified synopsis will be propagated to the base station. If p is the\n73\npacket loss rate and if each node has\nparents in the aggregation\nhierarchy then the probability of success for this attack is\n(1 - p\n\n)\nh\n,\nwhere the compromised node is h hops away from the sink. As an\nexample, if p\n= 0.2, = 3, and h = 5 then the probability that the\nattack will succeed is 96%.\nOn the other hand, it is very hard to launch an attack which results\nin the aggregate estimated at the sink being lower than the true\nestimate. This is because setting a bit in the falsified synopsis to 0\nhas no effect if there is another node X that contributes a 1 to the\nsame position in the fused synopsis. To make this attack a success\nthe attacker has to compromise all the possible paths from node X\nto the sink so that X 's 1 cannot reach the sink, which is hard to\nachieve. If there is more than one node which contributes to the\nsame bit then it is even harder. As an example, in Count algorithm,\nhalf of the nodes are likely to contribute to the leftmost bit of the\nsynopsis, one-fourth nodes of contribute to the second bit, and so\non. There are bits in the synopsis to which only one or two nodes\ncontribute but it is very hard to predict in advance which nodes will\nbe contributing to these particular bits if the sink broadcasts along\nthe query request a random seed to be used with the hash function\nin the synopsis generation phase. Hence, we can safely assume that\nthis attack is extremely difficult to launch. 
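The inflation attack, by contrast, is easy to demonstrate. The sketch below simulates the Count synopsis of Section 2 — the hash-based coin-tossing function CT, bitwise-OR fusion, and the 2^(r−1)/0.7735 estimate — and then lets a single compromised node OR a run of ones into the synopsis it forwards. The hash construction, seed, network size and synopsis length are illustrative choices, not values from the paper.

# Simulation of the Count synopsis and the falsified sub-aggregate attack.
# Honest nodes set bit CT(X, k) of their local synopsis, synopses are fused
# with bitwise OR, and the sink estimates 2^(r-1)/0.7735 from the position r
# of the lowest-order zero in the fused synopsis.
import hashlib

def h(node_id, i, seed):
    """Hash-based fair coin toss (0 or 1) for node `node_id`, toss number `i`."""
    return hashlib.sha256(f"{seed}|{node_id}|{i}".encode()).digest()[0] & 1

def CT(node_id, k, seed):
    """Tosses until the first head, capped at k+1 (Algorithm 1 in Section 2.1)."""
    i = 1
    while i < k + 1 and h(node_id, i, seed) == 0:
        i += 1
    return i

def local_synopsis(node_id, k, seed):
    return 1 << (CT(node_id, k, seed) - 1)       # bitmap 0^(i-1) 1 as an integer

def estimate(fused):
    r = 1
    while (fused >> (r - 1)) & 1:                # r = position of the lowest-order zero
        r += 1
    return (2 ** (r - 1)) / 0.7735

n, k, seed = 1000, 14, 42                        # k is roughly log2(n) + 4
honest = 0
for node in range(n):
    honest |= local_synopsis(node, k, seed)      # OR-fusion of all local synopses
# A single synopsis is only accurate to within a factor of about two
# (Property 3); PCSA would average several synopses to tighten this.
print("honest estimate  :", round(estimate(honest)))

# Falsified sub-aggregate attack: a compromised node ORs a run of ones into
# the synopsis it forwards.  It does not need to know r; filling every bit up
# to position k pushes the lowest-order zero to k+1.
forged = honest | ((1 << k) - 1)
print("attacked estimate:", round(estimate(forged)))

Because fusion is a bitwise OR and the falsified synopsis travels over multiple paths, the injected ones are very likely to survive all the way to the base station, which is exactly what makes the attack effective.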
In the rest of this paper,\nwe restrict our discussion to the previous attack where the goal of\nthe attacker is only to increase the estimate.\nPROBLEM DESCRIPTION & ASSUMPTIONS\nIn a sensor network where some fraction of the nodes are potentially\ncompromised, there are three sources that contribute to the\nerror in the sink's estimate of the aggregate being computed: (i)\nerror due to packet losses, (ii) error due to the approximation algorithm\nused, e.g., Flajolet and Martin's probabilistic algorithm [5],\nand (iii) error injected by compromised nodes.\nThe first two types of error are already addressed by the synposis\ndiffusion aggregation framework. Our paper is complementary\nto this previous work; our objective is to filter out the third\ntype of error. In particular, we aim to make the synopsis diffusion\napproach resilient to the falsified local value attack and the falsified\nsub-aggregate attack\n, i.e., to enable the sink to get the \"true\"\nestimate of the aggregate being computed despite the presence of\ncompromised nodes in the aggregation hierarchy. By \"true\" estimate\nwe mean the estimate of the aggregate which the sink would\ncompute if there were no compromised nodes.\n4.2\nAssumptions\nWe now discuss our assumptions with respect to the sensor network\nand the adversary.\nSystem Assumptions\nWe assume that the base station is located at\nthe center of the sensor network, and nodes are deployed around\nthe base station. However, our approach for attack-resilient aggregation\ndoes not depend upon this assumption. We assume that\nsensor nodes are similar to the current generation of sensor nodes,\ne.g., Mica2 motes [13], in their computational and communication\ncapabilities and power resources, while the sink is a laptop class\ndevice supplied with long-lasting power.\nWe assume that the sink has an estimate of the upper bound on\nthe value of the Count aggregate. If the sink does not have any\nfurther knowledge, the upper bound of Count can be set to the total\nnumber of nodes deployed. We also assume that there exists an\nupper bound on the value of a sensor reading. The upper bound of\nSum can be conservatively set to be equal to product of the upper\nbound of Count and the upper bound of a sensor reading. Previous\nworks on the synopsis diffusion approach [3, 14] have made the\nsame assumptions regarding the upper bounds for Count and Sum;\nthese bounds provide an estimate of the length of the synopsis.\nSecurity Assumptions\nWe assume that the sink cannot be compromised\nand it uses a protocol such as Tesla [15]) to authenticate its\nbroadcast messages. We also assume that each node shares a pairwise\nkey with the sink, which is used to authenticate the messages\nit sends to the sink.\nWe assume that the adversary can compromise sensor nodes without\nbeing detected. If a node is compromised, all the information it\nholds will also be compromised. We use a Byzantine fault model,\nwhere the adversary can inject malicious messages into the network\nthrough the compromised nodes. We conservatively assume that all\ncompromised nodes can collude, or are under the control of a single\nattacker.\nNotations\nThe following notations are used in the description of\nour attack-resilient aggregation algorithms.\nBS refers to the base station, i.e., the sink. 
X is the identifier of a sensor node, whereas M represents a compromised node.
KX is the pair-wise key X shares with the sink.
m1|m2 denotes the concatenation of two message fields m1 and m2.
MAC(K, m) is the message authentication code (MAC) of the message m generated using the key K.
X → Y : m denotes a one-hop delivery of message m from X to Y, while X ⇒ : m denotes that X broadcasts message m to all of its one-hop neighbors, and X ⇒* : m denotes that X broadcasts message m to all nodes in the network.

ATTACK-RESILIENT AGGREGATION: THE BASIC APPROACH
In this section, we present an attack-resilient approach for computing the Count and Sum aggregates. In this approach we assume that the BS has an estimate of the lower bound and the upper bound of the aggregates. We will see that this approach is scalable only if the ratio of the upper bound to the lower bound is small. Despite this limitation, we discuss this approach in detail because it provides the background and motivation for our extended approach, which is discussed in Section 6. We first present the main idea underlying the basic approach and then present the detailed protocol for securing Count and Sum.

5.1 The Main Idea
In our approach, nodes execute the synopsis diffusion aggregation algorithm as specified in [3, 14]. However, a subset of the nodes include along with their synopses a message authentication code (MAC) that can be used by the sink to verify the validity of their contribution to the aggregate function.

The key observations behind the design of our approach are that (i) in order to derive the correct estimate from the final synopsis (say S) computed at the sink, we need only to figure out the correct lowest-order bit (say r) in S that is 0; and (ii) the number of nodes contributing a 1 to bit j decreases exponentially as we move from the lowest-order bit (j = 1) to higher-order bits of the synopsis. For example, in the case of Count, on average, half the nodes in the network will contribute to the leftmost bit of the synopsis, one-fourth of the nodes contribute to the second bit of the synopsis, and so on.

Thus, we expect that only a small number of nodes will contribute to the bits around the expected value of r. Each such node includes along with its response to an aggregation query a MAC computed using a pairwise key shared exclusively with the sink. We demonstrate that these MACs enable the sink to filter out the contributions of the falsified sub-aggregates injected by the compromised nodes into the final aggregate.

For our scheme to work, two issues need to be addressed. First, since the value of r is not known before the execution of the query, we need to specify a criterion whereby a node can determine if it needs to include a MAC along with its synopsis. Second, this criterion should be designed so that the number of nodes who include a MAC is minimized.

In our basic approach, we assume that the BS has an estimate of the lower bound and the upper bound of Count, which are denoted by lc and uc respectively. Based upon these bounds, the BS knows that bit r will lie between a and b, which are the bit positions in the synopsis S corresponding to lc and uc respectively, i.e., a = log2(lc) and b = log2(uc) (by Property 2 in Section 2).
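To make the criterion concrete, the sketch below shows, in Python, how a node could decide whether it must attach a MAC. The synopsis generation function SG() is defined in Section 2 of the paper (not reproduced here); the Flajolet-Martin style hash used below, the SHA-1 choice, the default synopsis length, and the helper names are our own illustrative assumptions, not the paper's.

import hashlib

def sg_count(node_id: str, seed: str, k: int = 32) -> int:
    # Flajolet-Martin style stand-in for SG(): return the 1-indexed bit position
    # this node sets to 1; position j is chosen with probability roughly 2^-j.
    value = int.from_bytes(hashlib.sha1(f"{node_id}|{seed}".encode()).digest(), "big")
    i = 1
    while value & 1 == 0 and i < k:   # position of the least-significant 1 bit
        value >>= 1
        i += 1
    return i

def must_attach_mac(node_id: str, seed: str, a: int, b: int) -> bool:
    # A node proves its contribution only if its bit falls in the synopsis-edge [a, b].
    return a <= sg_count(node_id, seed) <= b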
Thus, there is no need for the BS to verify the bits to the left of a; only nodes contributing to bits in the range a to b need to prove to the BS that their contribution to the synopsis S is valid. We refer to the collection of bits in the range a to b in synopsis S as the synopsis-edge, as shown in Figure 2. It is easy to see that the length of the synopsis-edge is (log2(uc/lc) + 1) bits. If we denote the number of nodes contributing to the synopsis-edge by θ, then, by Property 4 in Section 2, θ <= (uc/2^a + . . . + uc/2^b) <= (1/φ)(2uc/lc - 1), where φ ≈ 0.7735 is the constant from the Flajolet-Martin analysis in Section 2.

The upper bound for Count (uc) can be set to the total number of nodes deployed. The lower bound for Count (lc) can be guessed depending on the energy reserve of the sensor nodes and the rate of energy expenditure. As an example, if 2000 nodes are deployed, then uc = 2000 and lc = 1000 may be a safe estimate at the time of the Count query's execution. For this example, uc/lc = 2, the length of the synopsis-edge is 2 bits, and the expected number of nodes contributing to the synopsis-edge is less than 3.87.

Figure 2: Securing the Count synopsis. The synopsis-edge spans the bit positions corresponding to the lower bound and the upper bound; to securely compute the Count synopsis, the base station needs to verify only the bits in the synopsis-edge.

For ease of presentation, we present the basic approach assuming that only one synopsis is computed. We can easily extend this approach to compute m synopses in parallel as in algorithm PCSA.

5.2 Securing Count
To compute the Count aggregate securely, we extend the original Count algorithm discussed in Section 2 as follows. (For convenience, henceforth, we say that a node "contributes" to a position j in the synopsis S if bit j in its local synopsis is 1.) For the sake of completeness, we first briefly describe the query dissemination phase, and then we present the aggregation procedure in detail.

In the query dissemination phase, the BS broadcasts the name of the aggregation function, a random number (Seed), and the bit positions of the start and the end of the synopsis-edge, which are specified by a and b respectively. Each node will use the random number, Seed, as an input to the hash function in the synopsis generation procedure. In more concrete terms, the query packet that the BS broadcasts is as follows:

BS ⇒* : Fagg, Seed, a, b, s, t, h

where Fagg is the name of the aggregation function (i.e., 'Count'), s denotes the time when the aggregation phase will start, and t represents the duration of one round, i.e., t = T/h, where h is the total number of hops and T is the duration of the aggregation phase (also called the epoch). Note that, as in the original Count algorithm discussed in Section 2, the epoch is sub-divided into a series of rounds, one for each hop, starting from the farthest hop. μTESLA [15] can be used for authenticating the broadcast packet.

In the aggregation phase, each node executes the synopsis generation function SG() and the synopsis fusion function SF() for Count as discussed in Section 2. In addition, each node checks whether it contributes to the synopsis-edge, and if so, it generates a MAC and forwards the MAC along with its fused synopsis. Specifically, if node X contributes to bit i in the synopsis-edge, it generates a MAC, M = MAC(KX, m), over the message m whose format is [X|i|Seed], where Seed is the random number which was disseminated in the query distribution phase.
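A minimal sketch of this node-side step is given below, assuming HMAC-SHA1 as the concrete MAC primitive and a simple string encoding of [X|i|Seed]; the paper itself only specifies M = MAC(KX, m), so these choices are illustrative.

import hmac, hashlib

def build_authenticated_contribution(node_id: str, bit_i: int, seed: str, k_x: bytes):
    # Message format [X|i|Seed], authenticated with the pairwise key K_X shared with the sink.
    m = f"{node_id}|{bit_i}|{seed}".encode()
    tag = hmac.new(k_x, m, hashlib.sha1).digest()   # M = MAC(K_X, m)
    return m, tag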
Each node X forwards to its parents its fused synopsis along with the set of MACs (ℳ) it received from its child nodes, and its own MAC if it generated one. The format of the message a node X forwards to its parents is as follows:

X ⇒ : Sl | ℳ,

where Sl is the fused synopsis computed by X. If the message does not fit into one packet, node X breaks it into several packets and forwards them. In Appendix A, we formally describe the algorithm (SecureCount) executed by each node in response to an aggregation query.

After the BS receives the MACs, it checks their validity. In particular, for each message and MAC pair [m|MAC(KX, m)], where m is [X|i|Seed], the BS executes the synopsis generation function SG() of X and verifies whether node X really contributes to bit i in the synopsis-edge, and then checks whether the attached MAC is valid. If any of these tests fail, the corresponding MAC is discarded.

After this verification step, the BS checks whether it has received at least one valid MAC for each bit in the synopsis-edge. The bits in the synopsis-edge for which the BS has not received a valid MAC are reset to 0. The bits at positions to the left of the synopsis-edge are set to 1. Finally, the BS computes the Count using the synopsis evaluation function SE().

Security Analysis
The security of our approach follows from two facts:
(i) The sink can independently verify the output of SG() for a particular node X. This is because the output of SG() depends only upon the node id X and the random seed included in the query message.
(ii) Each bit that is set to 1 in the synopsis-edge has an associated MAC that can be verified by the sink. This MAC is computed using a pairwise key that is known only to the contributing node and the sink; thus the MAC cannot be fabricated by an attacker (as long as it is reasonably long).

Although a compromised node can falsely set some bits in its fused synopsis and forward false MACs corresponding to those bits, the sink will be able to discard any false MACs. This implies that the attacker cannot falsely increase the Count. On the other hand, the attacker may attempt to decrease the Count by dropping a genuine MAC (or by corrupting a genuine MAC) sent by a contributing node, but the genuine MAC is likely to reach the BS via an alternate path. If the BS receives at least one valid MAC for each 1 bit in the synopsis-edge, then the BS obtains the true estimate of Count, as discussed below.

As discussed in Section 2, the synopsis diffusion approach uses a multi-path routing scheme to reduce the aggregation error due to packet losses resulting from node and link failures. The effect of packets being dropped by compromised nodes is simply to increase the overall packet loss rate, and this can be countered by an appropriate choice of ε, the number of parents of a node in the synopsis diffusion ring-based aggregation hierarchy. Specifically, if each node has more than ε parents, the total number of rings in the rings topology is h, and the probability of a node being compromised is p, then, on average, a contributing node's MAC will reach the BS with probability q, where

q ≈ (1/h) Σ_{j=1}^{h} (1 - p^ε)^j.

Here we have assumed that the contributing nodes are uniformly distributed over the rings in the hierarchy.
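The expression for q can be evaluated directly; the short Python sketch below is only a numeric illustration of the formula above (with ε the number of parents and the contributing node assumed equally likely to lie in any of the h rings), and it agrees with the example that follows.

def delivery_probability(p: float, eps: int, h: int) -> float:
    # q ~ (1/h) * sum over j = 1..h of (1 - p^eps)^j, as given above.
    return sum((1.0 - p ** eps) ** j for j in range(1, h + 1)) / h

print(delivery_probability(0.05, 3, 10))  # ~0.9993, i.e., greater than 0.999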
As an example, if p = 0.05, ε = 3, and h = 10, then q is greater than 0.999, i.e., the impact of the compromised nodes on the communication error is negligible.

We also note that while deriving q we assumed that there is only one node which contributes to a particular bit in the synopsis. In reality, the expected number of nodes contributing to a bit increases exponentially as we move from the Rth bit, where R is the length of the prefix of all ones in the synopsis S, to the lower-order bits, thereby increasing the probability that at least one MAC corresponding to a bit position reaches the sink.

Computation and Communication Overhead
Each contributing node computes one MAC. The expected number of contributing nodes is θ <= (1/φ)(2uc/lc - 1), which is independent of network size. Thus, only a subset of nodes will incur any computational overhead. With respect to communication overhead, the maximum number of MACs that any node will need to forward is θ. Thus this approach is scalable, and can be used in large-scale sensor networks as long as the ratio uc/lc is reasonably small.

5.3 Securing Sum
We can extend the approach used for making the Count aggregate resilient to compromised nodes to the Sum aggregate. To derive the synopsis-edge for Sum, we need to assume upper and lower bounds for the value of a sensor reading in addition to the upper and lower bounds for the number of sensor nodes.

A node X sends to the BS a MAC, M = MAC(KX, m), only if it contributes to the synopsis-edge, as in SecureCount. The format of the message m sent by a node is [X|A|Seed|v], where X is the node id, Seed is the random seed included in the broadcast query, A represents the collection of bits in the synopsis to which X contributes, and v is X's sensed value.

Security Analysis
In the case of the Sum aggregate, the attacker could falsely set some bits in its synopsis not only by using a false node id but also by using a false sensor reading. Although MACs from the contributing nodes enable the BS to verify the node ids, the BS cannot verify the sensed value of a node. A compromised node can claim to have a large sensed value close to the upper bound uv to increase its chance of being able to contribute to the synopsis-edge. The following theorem (whose proof can be found in Appendix B) shows that this attack's impact is limited.

Theorem 1. Let β be the number of compromised nodes in a network of n nodes. Let uv and av denote the upper bound and the average value of the sensor reading, respectively. Let S be the final synopsis computed at the sink and let R be the length of the prefix of all ones in S. Let s denote the value of the Sum aggregate. If each compromised node claims that its sensed value is equal to the upper bound uv, and if β·uv < s, then the probability Pr[S[R + 1] = 1] is proportional to the product of the fraction of compromised nodes in the network, β/n, and the ratio uv/av.

Note that if a compromised node contributes to the (R + 1)th bit, the BS's estimate of Sum doubles. Thus, the theorem shows that, for a large network, as long as the number of compromised nodes grows sub-linearly with the network size, the probability of this attack succeeding is small. For smaller networks, the probability of this attack succeeding depends upon the ratio β/n and on the ratio uv/av.
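The example quoted immediately below can be reproduced, up to rounding, with the following back-of-the-envelope computation. It assumes that the expected estimator satisfies 2^R ≈ φ·s with the Flajolet-Martin constant φ ≈ 0.7735 (the simplified proof in Appendix B drops this constant); none of this code is from the paper.

def pr_extra_bit(beta: int, n: int, uv_over_av: float, phi: float = 0.7735) -> float:
    # Pr[S[R+1] = 1] ~ beta * uv / 2^(R+1), with 2^(R+1) ~ 2 * phi * s and s = n * av,
    # i.e. roughly (1/2) * (beta/n) * (uv/av) / phi.
    return 0.5 * (beta / n) * uv_over_av / phi

print(round(pr_extra_bit(25, 1000, 4.0), 3))  # ~0.065, in line with the 0.064 example below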
As an example, if n\n=\n1000,\n= 25, and u\nv\n/a\nv\n= 4, then Pr[S[R + 1] = 1] = 0.064.\nThe impact of the attack is further reduced if we employ the\nPCSA algorithm in which m independent synopses are computed\nand the final estimator\nR\nis calculated by averaging these m esti-mators\n. As an example, to add an error of 40% to the final Sum,\nthe attacker needs to set the R\n+ 1-th bit in at least\nm\n2\nsynopses. In\nthe example above where Pr[S[R\n+ 1] = 1] is 0.064, this probability\nis close to zero when m is 20. This example illustrates that this\nattack's impact is limited when\n(\nu\nv\nn a\nv\n) is small.\nOn the other hand, when\n(\nu\nv\nn a\nv\n) is large, we cannot neglect the\npossibility that the attacker will succeed in injecting a significant\nerror to the Sum computed at the sink. To address this scenario, we\ncan use a scheme in which a node that contributes to the synopsis-edge\nneeds an endorsement from at least\nneighbors attesting to\nthe validity of its sensed value. We assume that the sensed values\nof one-hop neighbors are correlated so that one node can verify\nthe reading of its neighbors. We assume that there are fewer than\ncompromised nodes among the one hop neighbors of any node.\nEach contributing node X collects at least\nendorsements from its\none-hop neighbors in the form of a MAC computed over the sensor\nreading using the pairwise key that the neighbor shares with the\nsink. Then X computes an XMAC [1] by XORing the collected\nMACs and X 's own MAC, and sends the XMAC to the BS. (Zhu\net al. [25] use an identical scheme to reduce the total size of the\nMACs.) We also assume that BS has the knowledge to verify if a\nset of nodes are one-hop neighbors, which prevents the collusion\nattack. (We refer to this scheme as the XMAC-based scheme.)\nComputation and Communication Overhead\nThe number of\ncontributing nodes\nis less than\n1\n\n(\n2u\ns\nl\ns\n-1), where u\ns\nand l\ns\nare\nthe upper bound and lower bound of Sum. As in the case of Count,\nis independent of the network size and thus this approach is scalable\n. With respect to worst case communication overhead, a node\nwill need to forward at most\nMACs.\nTHE EXTENDED APPROACH TRADING LATENCY FOR COMMUNICATION OVERHEAD\nWhen the ratio (\n) of the upper bound of the aggregate to the\nlower bound is high, the basic approach described in the previous\nsection is not scalable because the worst case communication cost\nincurred by a node is proportional to\n. In this section, we describe\nan approach which has lower communication costs in comparison\nto the basic approach at the expense of greater latency.\n76\n6.1\nProtocol Overview\nOur extended approach is based on the observation that the expected\nnumber of nodes that contribute to bits i, where R\n< i k in\nthe synopsis (k is the length of the synopsis) is very small. In fact,\nusing Property 2 and Property 4 from Section 2, we can show that\nexpected number of nodes contributing to the Rth and higher-order\nbits of S is less than 2\n/ 2.58.\nWe use a sliding-window based approach in which the aggregation\nphase is divided into multiple epochs\n3\n. Starting at the right-most\nbit k, we proceed from right to left within the synopsis S using\na bit window of w bits. In each epoch, only the nodes that contribute\na 1 to the bits in S corresponding to the current position of\nthe window, send MACs to the sink that can be used to verify their\ncontribution to S. 
In other words, in the first epoch, only nodes that\ncontribute a 1 to bits k to k\n-w+1 respond to the query. In epoch\ntwo, nodes that contribute to bits between k\n- w and k - 2w + 1\nrespond, and so on.\nThe algorithm terminates when the querying node has determined\nthat the remaining bits of S to the left of the current window are\nlikely to be 1 with high probability. The design of this termination\ncriterion is the main challenge in using this approach; we discuss\nthe termination criterion and its analytical underpinnings in detail.\nOnce the querying node determines that the algorithm can terminate\nit broadcasts a STOP message to the network to announce the\nend of the iterative aggregation procedure.\n6.2\nProtocol Operation\nThe operation of the protocol is similar to that of the protocol\nused in the basic approach with some minor differences as follows.\nThe query message broadcast to the network includes the window\nsize w in addition to the other parameters. As in the original synopsis\ndiffusion algorithm [3, 14], we assume that the time is syn-chronized\namong BS and the sensor nodes. Each node computes\nthe start and end time of the current epoch, based on the window w.\nFurther, although the MACs generated by nodes are sent to the\nBS over the course of multiple epochs, the fused synopsis computed\nby each node is forwarded to its parent in the first epoch.\nThus, the BS can compute the aggregate at the end of the first epoch\nitself, although this aggregate may be erroneous in the presence of\ncompromised nodes.\n6.3\nTermination Criterion\nThe goal of our algorithm is to find r, the lowest-order bit in S\nthat is 0. Further, recall that S is of the form 1\nr\n-1\n0\n, where the\nbits at positions i\n> r are highly likely to be 0. Thus, the intuition\nbehind our termination criterion is simple: as we examine the bits\nof S moving from right to left, if we observe two consecutive 1's,\ni.e., if we observe the string \"110\", it is highly likely that the 0\nis at the rth position. In fact, we can show analytically that the\nprobability of this event is greater than 90% which follows from\nthe following theorem.\nTheorem 2.\nLet F denote the event that the string \"0s\nl\n11\" where\ns\nl\nrepresents any string of length l, l\n0 appears in a synopsis S.\nThe probability of the event F is less than 10%. (The proof is given\nin the appendix.)\nFurther, we can take advantage of the fact that most applications\nwill use the PCSA algorithm to reduce the approximation error in\nestimating R\n= r -1. Recall that in the PCSA algorithm m synopses\nare computed in parallel. Let R\ni\ndenote the value of R estimated\nfrom the ith synopsis. Then, according to the PCSA algorithm,\n3\nThe original synopsis diffusion algorithm [3, 14] takes one epoch\nto complete.\nthe the expected value of the random variable R is estimated by\naveraging the individual values of R for each synopsis, i.e., E\n[R] =\n\nR\n=\ni\n=m\ni\n=1\nR\ni\n.\nAlthough there is likely to be some variation among the R\ni\n, we\nknow from Property 3 in Section 2 that the variation is expected\nto correspond to two bit position both to the left and the right of\nthe true value of R. This suggests that there is a high degree of\ncorrelation between the R\ni\nfor different synopses. Thus, in our\nwindow-based approach, we can increase our confidence that we\nhave found the correct position of R, if we observe the bit pattern\n\"11\" in multiple synopses among the m that are being computed in\nparallel. 
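As a rough sketch of how this scan could be realized, the Python below walks the window from the high-order end and counts how many of the m synopses show "11" inside the current window; the count would then be compared against the threshold introduced next. Bit indexing and helper names are illustrative choices, not taken from the paper.

def window_for_epoch(t: int, k: int, w: int):
    # Epoch t (t = 1, 2, ...) covers the w-bit window ending at bit k - (t-1)*w,
    # i.e. the synopsis is scanned from the high-order end k towards the low-order end.
    high = k - (t - 1) * w
    return max(1, high - w + 1), high

def synopses_showing_11(synopses, low: int, high: int) -> int:
    # Count how many synopses have two consecutive 1s inside the window [low, high].
    count = 0
    for S in synopses:                      # S is a list of k bits; S[0] holds bit 1
        window = S[low - 1:high]
        if any(window[i] == 1 and window[i + 1] == 1 for i in range(len(window) - 1)):
            count += 1
    return count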
Based on this intuition, our termination criterion consists of checking whether we have observed the string "11" in at least m* out of the m synopses.

Figure 3: Each synopsis is divided into several windows of width w = 2 bits. After the termination criterion is satisfied, the base station broadcasts a STOP message and the aggregation phase stops after the next epoch. In each epoch, nodes which contribute to the corresponding window send a MAC to the base station. The MACs which correspond to the crossed bits are never sent. (The figure marks the first position of the window, the window in which the termination test passes, and the point after which the aggregation process stops.)

Our goal in selecting the threshold m* is to reduce the likelihood of both a false positive, which means that the algorithm was terminated too early, and a false negative, which means that the algorithm terminated too late, i.e., after the sliding window had already crossed the true position of R. A false positive results in an over-estimate of R, whereas a false negative results in additional communication overhead. We now show that it is possible to find a suitable value for m* such that the probability of a false positive and the probability of a false negative are both low.

Theorem 3. Let Gi denote the event that both bit i and bit (i + 1) in a synopsis S are 1. Let μ denote the expected value of the averaged estimator R̄. Then, Pr[Gμ] = 0.3454, Pr[Gμ+1] = 0.1315, and Pr[Gμ+2] = 0.0412.

Because of space limitations, the proof of this theorem can be found in the appendix.

If the sliding window in our algorithm is two bits wide, i.e., w = 2, from the definition of the false positive (FP) we get that the probability Pr[FP] is the probability that the event Gμ+2 occurs in m* or more synopses. Similarly, the probability of a false negative, Pr[FN], is the probability that the event Gμ occurs in fewer than m* synopses. For m = 20 (which is the typical value used in previous work [3, 14]), we find that the best value of m* is 4, in which case Pr[FP] = 0.0082 and Pr[FN] = 0.0484. The same approach illustrated here can be used to derive the appropriate threshold m* for other window sizes.

Figure 3 illustrates the operation of our algorithm for w = 2. Assume that the termination criterion is satisfied in epoch e. The BS broadcasts a STOP message which directs all nodes to terminate the aggregation phase. Note that by the time each node in the network receives the broadcast STOP message, many of the nodes will have already sent MACs corresponding to their contributions to the next epoch e + 1 of the algorithm. Thus, the effect of the termination criterion being satisfied in epoch e is to terminate the aggregation after epoch e + 1.

We can take advantage of this extra epoch to further increase our level of confidence in the estimated value of R. Let the bit positions of the sliding window in epoch e correspond to bits λ and λ + 1. Instead of estimating R = λ + 1 because m* out of m synopses had both bits λ and λ + 1 equal to 1, we can now estimate R based on the observed value of Ri for all m synopses.
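The m* = 4 operating point above can be verified numerically. Assuming, as the text does, that the event G occurs independently in each of the m synopses (so the number of synopses showing "11" is binomially distributed), a few lines of Python reproduce the quoted probabilities up to rounding; this check is ours, not the paper's.

from math import comb

def binom_tail_ge(m: int, p: float, t: int) -> float:
    # P[X >= t] for X ~ Binomial(m, p)
    return sum(comb(m, x) * p**x * (1 - p)**(m - x) for x in range(t, m + 1))

m, m_star = 20, 4
pr_fp = binom_tail_ge(m, 0.0412, m_star)       # G_{mu+2} observed in >= m* synopses
pr_fn = 1 - binom_tail_ge(m, 0.3454, m_star)   # G_{mu} observed in fewer than m* synopses
print(round(pr_fp, 4), round(pr_fn, 4))        # ~0.008 and ~0.048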
Our simulations show that\nestimate of the aggregate computed using our extended approach is\nclose to the estimate computed using the original synopsis diffusion\n[3, 14] algorithm.\nLatency\nThe number of epochs taken by our sliding window approach\ndepends on the ratio (\n) of the upper bound of the aggregate\nto the actual value. If the upper bound is u and the actual value is\n, for a window of width w the number of epochs is equal to\n(log\n2\nu\nm\n-log\n2\n\nm\nw\n+2) = (log\n2\n\nw\n+2)\nCommunication Overhead\nTheorem 3 implies that it is highly\nlikely that the sliding window contains the position\nR\nwhen the termination\ncriterion is satisfied. As discussed above, if the termination\ncriterion is satisfied in epoch e, the aggregation completes after\nepoch e\n+ 1. Thus, by property 2 and property 4 in Section 2, if m\nsynopses are computed in parallel, the expected number of nodes\nwhich send a MAC varies in the range of\n(2.58 m) to (5.16m).\nEven if a sensor node contributes to more than one bits, it sends\njust one MAC validating all the bits. Note that the number of contributing\nnodes does not exceed this range even if the network size\nis increased. Our simulation results show that 85 MACs are sent on\naverage when m\n= 20.\nWe observe that the width of the window w determines a tradeoff\nbetween the communication overhead and the latency. If we divide\nthe synopses into wider windows, the number of MACs sent and\nhence the communication overhead will increase while the latency\nof the aggregation process will decrease, and vice versa.\n6.4\nDiscussion\nAn alternative approach to the sliding window-based approach\ndescribed above is one in which the base station computes the aggregate\nof interest in the first epoch using the original Synopsis\nDiffusion algorithm. It then broadcasts a message requesting only\nthe nodes that contribute to the bit window that contains R to send\nthe MACs authenticating their local synopses. If the BS success-fully\nverifies all the MACs it receives, then the protocol terminates\nat the end of the second epoch. However, if it does not receive the\nrequested MACs or if one or more MACs are invalid, the BS executes\nthe sliding-window protocol described above to compute the\ncorrect value of R. If the probability of compromised nodes being\npresent in the network is low, then this alternative approach is\npreferable to the extended approach since it will have much lower\nlatency on average.\nSIMULATION RESULTS\nIn this section, we report on a detailed simulation study that examined\nthe performance of our attack-resilient aggregation algorithms\ndiscussed in Sections 5 and 6. Our simulations were written\nusing the TAG simulator developed by Madden et al. [11]. We\nadded the attack-resilient functionality to the source code provided\nby Considine et al. [3] which simulates their multipath aggregation\nalgorithms in the TAG simulator environment.\n7.1\nSimulation Environment\nFor our basic experimental network topology, we used a regular\n30\n30 grid with 900 sensor nodes, where one sensor is placed at\neach grid point and the base station is at the center of the grid, as in\n[3]. 
The communication radius of each node is 2 unit allowing\nthe nearest eight grid neighbors to be reached.\nThe goal of our simulation experiments is to examine the communication\noverhead and accuracy of our scheme in the presence\nof packet losses, which are relatively frequent in sensor networks.\nWe use a simple packet loss model in which packets are dropped\nwith a fixed probability; this packet loss rate is assumed to include\npackets that are lost due to being dropped by compromised nodes.\nWe do not model any additional attacks by compromised nodes,\nspecifically the falsified subaggregate and the falsified local value\nattacks, in our simulation. This is because we have already shown\nthat these attacks cannot affect the estimate of the aggregate computed\nat the sink. Consequently, these attacks simply have the effect\nof increasing the communication and computation overhead; in effect\n, they become a form of DOS or resource consumption attacks.\nWe assign a unique id to each sensor, and we assume that the sensor\nreading is a random integer uniformly distributed in the range\nof 0 to 250 units. We compute 20 synopses in parallel using the\nPCSA algorithm as in the experiments reported in [3, 14]. We use\nthe method of independent replications as our simulation methodology\n. Each simulation experiment was repeated 200 times with a\ndifferent seed. The plots below show the 95% confidence intervals\nof the reported metric.\n7.2\nResults and Discussion\nDue to space constraints, we will only present the results of our\nextended approach for computing the Sum aggregate.\nAccuracy of our estimate\nIn the first set of experiments, we validate\nour claim that our attack-resilient approach has the same accuracy\nin computing the true value of the aggregate as the original\nsynopsis diffusion approach. Figure 4a plots the estimates of our\napproach and the synopsis diffusion approach as a function of the\npacket loss rate. We observe that the two estimates are indeed very\nclose in all loss rate conditions. We observe that the average value\nof the sensor reading is approximately 125, i.e., the accurate Sum\nis 900\n125 = 11250.\nCommunication overhead\nWe now compare the communication\noverhead of our approach to that of the original synopsis diffusion\napproach. Figure 4(b) plots the total number of bytes transmitted\nfor computing the Sum aggregate. As discussed in Section 5.3,\nfor preventing a node from using a false reading to generate its\nown local synopsis, we can adopt two approaches. In the first approach\n, we ignore the impact of the falsified local value attack; in\nthe figure, this approach is labeled as ARSD (attack-resilient synopsis\ndiffusion). The second approach requires the contributing\nnode to include a XMAC, which corresponds to an endorsement\nfrom its neighbors, in the message; in the figure, this approach is\nlabeled ARSD+XMAC.\nFor ARSD+XMAC, each contributing node sends an authentication\nmessage which has two parts: the first part contains the ID (2\n78\n0\n0.1\n0.2\n0.3\n0.6\n0.8\n1\n1.2\n1.4\nx 10\n5\nLink Loss Rate\nEstimated Sum\nARSD\nSD\nActual Sum\n0\n0.1\n0.2\n0.3\n1\n2\n3\n4\n5\n6\nx 10\n4\nLink Loss Rate\nTotal Byte Transmitted\nARSD\nARSD+XMAC\nSD\na. Accuracy of Sum\nb. Byte overhead\n0\n2\n4\n6\n8\n10\n12\n0\n2\n4\n6\n8\nLog (upper bound / actual Sum)\nNumber of epochs\nARSD (window width=2)\n1000\n2000\n3000\n4000\n5000\n40\n60\n80\n100\n120\nNumber of sensor nodes\nNumber of Contributing nodes\nARSD\nc. Latency\nd. 
Varying the network size\nFigure 4: Experimental Results\nbytes) of the contributing node and its sensed value (3 bytes), and\nthe second part includes the IDs of the k neighbors and a XMAC\n(4 bytes). If the value of k is not more than 4 then a node needs\n8 bytes to specify the identity of the neighbors whose MACs are\nused to generate the XMAC. Thus, the size of one authentication\nmessage is 17 bytes. For ARSD, the contributing node just needs\nto send its own MAC; no neighbor endorsement is needed, which\nreduces the authentication message size to 9 bytes.\nFigure 4(b) shows that the byte overhead of the ARSD+XMAC\nscheme is roughly 5 times larger than the original approach, whereas\nARSD is 2.5 times larger than the original approach. One might expect\nthat if loss rate is high our extended approach may take more\ntime to stop because some MACs could be lost en-route, and, as a\nresult, the communication overhead could increase. But Figure 4(b)\ndemonstrates that the overhead of the extended approach does not\nincrease with the loss rate.\nLatency\nAs discussed in Section 6, the latency of the extended\napproach depends on the looseness of the base station's estimate of\nthe upper bound of Sum. Figure 4(c) plots the number of epochs\ntaken by our extended approach as a function of the ratio of the\nupper bound to the actual value of the aggregate. The figure shows\nthat the number of epochs increases at logarithmic scale with the\nratio of the upper bound to the actual Sum. We note, however, that\nthe byte overhead of our scheme is independent of this ratio.\nEffect of network size\nIn this experiment, we study the impact of\nthe network size on the communication overhead of the extended\napproach. The communication overhead depends upon the number\nof contributing nodes that send a MAC to the base station, authenticating\ntheir synopsis. Recall from Section 6 that the expected\nnumber of contributing nodes is independent of the network size.\nFigure 4(d) confirms our analysis; we observe that the number of\ncontributing nodes is more or less constant as the network size increases\n4\n. This figure thus illustrates the scalability of our approach\nfor attack-resilient aggregation.\nRELATED WORK\nSeveral data aggregation protocols [11, 19, 23] have been proposed\nin the literature which efficiently fuse the sensed information\nen-route to the base station to reduce the communication overhead.\nSince packet losses and node failures are relatively common in sensor\nnetworks, several studies have investigated the design of robust\naggregation algorithms. Considine et al. [3] and Nath et al. [14,\n12] have presented robust aggregation approaches that combine the\nuse of multi-path routing with clever algorithms that avoid double-counting\nof sensor readings. Jelasity et al. [9] proposed a robust\ngossip-based protocol for computing aggregates over network components\nin a fully decentralized fashion. They assume that nodes\nform an overlay network where any pair of nodes are considered to\nbe neighbors, which makes this protocol impractical for sensor networks\n. We note that none of the above algorithms were designed\nwith security in mind.\nRecently several researchers have examined security issues in\naggregation. Wagner [17] examined the problem of resilient data\naggregation in presence of malicious nodes, and provided guidelines\nfor selecting aggregation functions in a sensor network. Buttyan\net al. 
[2] proposed a model of resilient aggregation and analyzed\nthe maximum deviation from the true value of the aggregate that an\nadversary could introduce while remaining undetected. The models\nused by by both Buttyan et al and Wagner assume that there is\nno in-network aggregation, that is, the aggregation is performed at\nthe sink. Przydatek et al [16] present protocols that can be used by\na trusted remote user to query a sensor network in which the base\n4\nThe link loss rate is held at 20% in this set of experiments.\n79\nstation may be compromised and the base station is the only aggregator\n. One of the protocols described by Przydatek et al is a robust\napproach for counting distinct elements in a data stream that can\nbe used for estimating the size of the network, i.e., the Count aggregate\n. Their approach for counting distinct elements is similar to\nour scheme for Count in the sense that in both cases only a subset\nof elements need to be verified.\nThe first secure in-network data aggregation protocol was designed\nby Hu and Evans [8]. Their protocol is effective only if no\nmore than one node is compromised. Recently, Yang et al. [18] proposed\nSDAP, a secure hop-by-hop data aggregation protocol which\ncan tolerate more than one compromised node. SDAP is a tree-based\naggregation protocol with communication cost comparable\nwith that of the ordinary aggregation protocols while it provides\ncertain level of assurance on the trustworthiness of the aggregation\nresult. As SDAP is a tree-based protocol, it is vulnerable to link\nloss and node failures which are relatively common in sensor networks\n, whereas our protocol is robust to this communication loss\nand, at the same time, secure against compromised nodes.\nWe note that our work is related to the general problem of preventing\nfalse data injection. Du et al. [4] proposed a mechanism\nthat allows the base station to check the aggregated values submit-ted\nby several designated aggregators, based on the endorsements\nprovided by a certain number of witness nodes around the aggregators\n. Their scheme does not provide per-hop aggregation. Several\nother works [20, 21, 25] have also proposed solutions to prevent\nfalse data injection attacks in sensor networks, but they do not involve\ndata aggregation.\nCONCLUSION\nIn this paper, we investigated the security issues of synopsis diffusion\nframework in presence of compromised nodes. We showed\nthat a compromised node can launch several simple attacks on the\nexisting aggregation algorithms, which could significantly deviate\nthe estimate of the aggregate. We also proposed modifications to\nthe aggregation algorithms that guard against these attacks. Our\nanalytical results and simulation results show that our approach is\neffective and it incurs minimal computation and communication\noverhead.\nIn this paper, we assume that a sensor node has a security association\nonly with the base station, and, as a result, the authentication\nmessages cannot be processed in-network in our approach. To further\nreduce the communication overhead, we plan to exploit other\nsecurity settings, e.g., local pairwise keys among nodes, as a part\nof our future work.\nREFERENCES\n[1] M. Bellare, R. Guerin, and P. Rogaway. XOR MACs: New\nmethods for message authentication using finite\npseudorandom functions. In Proc. of the 15th Annual\nInternational Cryptology Conference on Advances in\nCryptology - CRYPTO'95\n, pages 1528, 1995.\n[2] L. Buttyan, P. Schaffer, and I. Vajda. 
Resilient aggregation\nwith attack detection in sensor networks. In Proc. of 2nd\nIEEE Workshop on Sensor Networks and Systems for\nPervasive Computing\n, 2006.\n[3] J. Considine, F. Li, G. Kollios, and J. Byers. Approximate\naggregation techniques for sensor databases. In Proc. of\nIEEE Int'l Conf. on Data Engineering (ICDE)\n, 2004.\n[4] W. Du, J. Deng, Y. S. Han, and P. Varshney. A pairwise key\npre-distribution scheme for wireless sensor networks. In\nProc. of the 10th ACM Conference on Computer and\nCommunications Security (CCS '03).\n, 2003.\n[5] P. Flajolet and G. N. Martin. Probabilistic counting\nalgorithms for data base applications. Journal of Computer\nand System Sciences\n, 31(2):182209, 1985.\n[6] S. Ganeriwal and M. B. Sribastava. Reputation-based\nframework for highly integrity sensor networks. In Proc. of\nACM Workshop on Security of Sensor and Adhoc Networks\n(SASN)\n, Washington, DC, 2004.\n[7] D. Ganesan, R. Govindan, S. Shenker, and D. Estrin.\nHighly-resilient energy-efficient multipath routing in\nwireless sensor networks. Mobile Comuting and\nCommunication Review\n, 4(5):1125, 2001.\n[8] L. Hu and D. Evans. Secure aggregation for wireless\nnetworks. In Proc. of Workshop on Security and Assurance in\nAd hoc Networks.\n, 2003.\n[9] M. Jelasity, A. Montresor, and O. Babaoglu. Gossip-based\naggregation in large dynamic networks. ACM Transactions\non Computer Systems\n, 23(3):219252, 2005.\n[10] F. Koushanfar, M. Potkonjak, and\nA. Sangiovanni-Vincentelli. Fault tolerance techniques in\nwireless ad-hoc sensor networks. In Sensors 2002.\nProceedings of IEEE\n, pages 1491 1496.\n[11] S. Madden, M. J. Franklin, J.M. Hellerstein, and W. Hong.\nTAG: A tiny aggregation service for ad hoc sensor networks.\nIn Proc. of 5th USENIX Symposium on Operating Systems\nDesign and Implementation\n, 2002.\n[12] A. Manjhi, S. Nath, and P. Gibbons. Tributeries and deltas :\nEfficient and robust aggregation in sensor network streams.\nIn Proc. of ACM International Conference on Management\nof Data (SIGMOD)\n, 2005.\n[13] Mica Motes. http://www.xbow.com.\n[14] S. Nath, P. B. Gibbons, S. Seshan, and Z. Anderson.\nSynopsis diffusion for robust aggregation in sensor networks.\nIn Proc. of the 2nd international conference on Embedded\nnetworked sensor systems (SenSys)\n, 2004.\n[15] A. Perrig, R. Szewczyk, V. Wen, D. Culler, and J. D. Tygar.\nSPINS: Security protocols for sensor networks. In Seventh\nAnnual International Conference on Mobile Computing and\nNetworks (MobiCOM)\n, 2001.\n[16] B. Przydatek, D. Song, and A. Perrig. SIA: Secure\ninformation aggregation in sensor networks. In Proc. of the\n1st international conference on Embedded networked sensor\nsystems (SenSys)\n, 2003.\n[17] D. Wagner. Resilient aggregation in sensor networks. In\nProc. of ACM Workshop on Security of Sensor and Adhoc\nNetworks (SASN)\n, 2004.\n[18] Y. Yang, X. Wang, S. Zhu, and G. Cao. SDAP: A secure\nhop-by-hop data aggregation protocol for sensor networks.\nIn Proc. of ACM MOBIHOC, 2006.\n[19] Y. Yao and J. E. Gehrke. The cougar approach to in-network\nquery processing in sensor networks. ACM SIGMOD Record,\n31(2):918, September 2002.\n[20] Fan Ye, Haiyun Luo, Songwu Lu, and Lixia Zhang.\nStatistical en-route filtering of injected false data in sensor\nnetworks. In Proc. of IEEE Infocom, 2004.\n[21] W. Zhang and G. Cao. Group rekeying for filtering false data\nin sensor networks: A predistribution and local\ncollaboration-based approach. Proc. of IEEE Infocom, 2005.\n[22] J. Zhao and R. Govindan. 
Understanding packet delivery\nperformance in dense wireless sensor networks. In Proc. of\n80\nthe 1st international conference on Embedded networked\nsensor systems (SenSys)\n, 2003.\n[23] J. Zhao, R. Govindan, and D. Estrin. Computing aggregates\nfor monitoring sensor networks. In Proc. of the 2nd IEEE\nInternational Workshop on Sensor Network Protocols and\nApplications\n, 2003.\n[24] S. Zhu, S. Setia, and S. Jajodia. LEAP: Efficient security\nmechanisms for large-scale distributed sensor networks. In\nProc. of the 10th ACM Conference on Computer and\nCommunications Security (CCS '03).\n, 2003.\n[25] S. Zhu, S. Setia, S. Jajodia, and P. Ning. An interleaved\nhop-by-hop authentication scheme for filtering injected false\ndata in sensor networks. In Proc. of IEEE Symposium on\nSecurity and Privacy\n, 2004.\nAppendix\nA.\nBelow we describe the algorithm (SecureCount) executed by each\nnode in response to a Count query. X represents the node Id.\nAlgorithm 2\nSecureCount(X\n, Seed, a, b)\n1:\nM\n= {}; //\nM\nis initialized as an empty set\n2: i\n= SG(X, Seed); // X contributes to bit i\n3: if (a\ni b) then\n4:\nm\n= [X|i|Seed];\n5:\nM\n= MAC(K\nX\n, m);\n6:\nM\n=\nM\nM;\n7: end if\n8: S\nl\n= SF(); // S\nl\nis the fused synopsis at X\n9:\nM\n=\nM\n\nC\n; //\nC\nrepresents the set of MACs X received from\n// its child nodes\n10: X\n: S\nl\n|\nM\n;\nB. Proofs of Theorems\nWe provide the proofs for the theorems present in the paper.\nTheorem 1.\nLet there be n nodes in the sensor network among\nwhich\nnodes are compromised. Let u\nv\nand a\nv\ndenote the upper\nbound and the average value of the sensor reading respectively. Let\nS\nbe the final synposis computed at the sink and let R be the length\nof the prefix of all ones in S. Let s denote the value of the Sum\naggregate. If each compromised node claims that its sensed value is\nequal to the upper bound u\nv\n, and if\n( u\nv\n) < s, then the probability\nPr[S\n[R + 1] = 1] is proportional to the product of the fraction of\ncompromised nodes in the network,\n/n, and the ratio u\nv\n/a\nv\n.\nP\nROOF\n. By property 2 in Section 2, the expected value of the\nestimator R, for the Sum synopsis S, is log\n2\n(s), where s denotes\nthe Sum. As a node X with sensed value v invokes the function\nCT() v times (in the synopsis generation phase), the probability that\nX\ndoes not contribute to bit i in S is\n(1 1\n2\ni\n)\nu\nv\n. So, the probability\n(p) that a node with sensed value u\nv\nwill contribute to the\n(R + 1)th\nbit is (1\n-(11\n2\nR\n+1\n)\nu\nv\n). After simplifying, we get\np\n= 1 -(1- 1\n2\ns )\nu\nv\n1-(1- u\nv\n2\ns ) =\nu\nv\n2\ns\nThe above approximation is valid as u\nv\nis smaller than s. If there\nare\ncompromised nodes, then Pr (S[R + 1] = 1) is\nq\n= 1 -(1- p)\n\np = u\nv\n2\ns = (\n1\n2\n) (\nn ) (\nu\nv\na\nv\n)\nTo prove Theorem 2, we first prove the following results.\nLemma 1.\nLet E\ni\n, 1\ni k -2 denote the event that the string\n\"011\" appears in a synopsis S from bit i to bit\n(i + 2) (i.e., S[i] = 0,\nS\n[i + 1] = 1, and S[i + 2] = 1), where k is the length of S. The\nmaximum value of the probability (p\ni\n) of the event E\ni\nis 0.037 for\nany value of i and for any value of Count (or Sum) shared by S.\nP\nROOF\n. If function CT\n() is invoked once (ref. Section 2), then\nPr\n[S[ j] = 1] = q\nj\n=\n1\n2\nj\n, 1 j k. This probability increases if it is\ngiven that bit j\n\n, 1\nj\n\nk will remain 0. 
Specifically,\nPr\n[S[ j] = 1|S[j\n\n] = 0] =\nq\nj\n1\n-q\nj\n\n=\n1\n(2\nj\n) (1 1\n2\nj\n\n) .\nIf\nis the total Count (or Sum) shared by synopsis S, then Pr[S[i] =\n0\n] = (1 -q\ni\n)\n\n, and\nPr\n[S[i + 1] = 1, S[i + 2] = 1]\n= 1 -Pr[S[i+1] = 0]-Pr[S[i+2] = 0]\n+ Pr[S[i + 1] = 0, S[i + 2] = 0]\n(1)\n= 1 -(1-q\ni\n+1\n)\n\n-(1-q\ni\n+2\n)\n\n+ (1 -q\ni\n+1\n-q\ni\n+2\n)\n\n= 1 -(11\n2\ni\n+1\n)\n\n-(11\n2\ni\n+2\n)\n\n+ (1 1\n2\ni\n+1\n1\n2\ni\n+2\n)\n\nSo, the probability of the event E\ni\nis\np\ni\n= Pr[S[i] = 0]\nPr\n[S[i + 1] = 1, S[i + 2] = 1 | S[i] = 0]\n= (1 1\n2\ni\n)\n\n\n[1 -(11\n(2\ni\n+1\n) (11\n2i\n)\n)\n\n-(11\n(2\ni\n+2\n) (11\n2i\n)\n)\n\n+ (1 1\n(2\ni\n+1\n) (11\n2i\n)\n1\n(2\ni\n+2\n) (11\n2i\n)\n)\n\n]\n(2)\nNote that if i\n<< log\n2\n(), the 1st factor is close to 0 and second\nfactor is close to 1, making p\ni\nclose to 0. On the other hand, if\ni\n>> log\n2\n(), the 1st factor is close to 1, but the 2nd factor are\nclose to 0, again making p\ni\nclose to 0. p\ni\nattains the highest value\nwhen i is close to log\n2\n(). We have numerically found that the\nmaximum value of p\ni\nis 0.037, for any value of i or\n.\nLemma 2.\nLet E denote the event that the string \"011\" appears in\na synopsis S at any position. The probability of the event E is less\nthan 0.099.\nP\nROOF\n. E\ni\ndenotes the event that \"011\" appears in a synopsis\nS\nwhere 0 is at the ith bit. We observe that the events E\ni\n, E\ni\n+1\n,\nE\ni\n+2\nare mutually exclusive, for any value of i. Following the\nsame direction of Lemma 1, we can show that the probability that\n\" 011s\nl\n011\" appears in synopsis S is close to zero, where s\nl\nrepresents\nany string of length l, l\n0. So, the probability that two\nevents E\ni\nand E\nj\nwhere j\n(i+3) can occur together is negligible,\nfor any value of i. As a result, we can approximate that events E\ni\ns\nare mutually exclusive and hence the probability of event E is\np\n=\nk\n\ni\n=1\np\ni\n,\nwhere p\ni\nis given by expression (2) and k is the length of S. We\nhave numerically found that maximum value of p is 0.099.\nLemma 3.\nLet F\ni\ndenote the event that a string \"0s\ni\n11\" appears\nin a synopsis S, and let F denote the general event that a string\n81\n\"0s\nl\n11\", l\n0 appears in S, where s\nl\nrepresents any string of length\nl\n. Pr\n[F] = Pr[F\n0\n]\nP\nROOF\n. As the string \"011\" is a special case of string \"0s\nl\n11\"\nwhere l\n= 0, Pr[F] Pr[F\n0\n]. On the other hand, if string s\n\n=\n\"0s\nl\n11\", l\n0 appears in S, string \"011\" must also appear as a\nsubstring of s\n\n. As an example, if s\n\n= \"01011\" where s\nl\n= \"10\",\nwe can see \"011\" as a substring of s\n\n. Hence, Pr\n[F] Pr[F\n0\n]. So,\nwe get that Pr\n[F] = Pr[F\n0\n].\nTheorem 2.\nLet F denote the event that the string \"0s\nl\n11\" where\ns\nl\nrepresents any string of length l, l\n0 appears in a synopsis S.\nThe probability of the event F is less than 10%.\nP\nROOF\n. As the event F\n0\nin Lemma 3 is same as the event E\nin Lemma 2, we get that the probability of event F is less than\n10%.\nTheorem 3.\nLet G\ni\ndenote the event that both bit i and\n(i + 1) in\na synopsis S are 1. Let\ndenote the expected value of the estimator\n\nR\n. Then, Pr\n[G\n\n] = 0.3454, Pr[G\n+1\n] = 0.1315, and Pr[G\n+2\n] =\n0\n.0412.\nP\nROOF\n. The expected value of\nR\nis log\n2\n(\n\nm\n), which we denote\nby\n, where is the total Count (or Sum) shared by m synopses\nfollowing the algorithm PCSA. If function CT\n() is invoked once\n(ref. 
Section 2),\nPr\n[S[i] = 1] = q\ni\n= 1\nm\n1\n2\ni\n, 1 i k\nbecause synopsis S is selected with probability\n1\nm\namong m synopses\n. As\nis the total Count (or Sum) shared by all synopses, we\nget by using similar expression as (1) in Lemma 1 that\nPr\n[G\ni\n] = 1 -(11\nm\n2\ni\n)\n\n-(11\nm\n2\ni\n+1\n)\n\n+ (1 1\nm\n2\ni\n1\nm\n2\ni\n+1\n)\n\nSo,\nPr\n[G\n\n] = 1 -(11\nm\n\n\nm\n)\n\n-(11\nm\n2\n\nm\n)\n\n+ (1 1\nm\n\n\nm\n1\nm\n2\n\nm\n)\n\n1-e\n1\n-e\n1\n2\n\n+ e\n3\n2\n\n= 0.3454\nSimilarly, we find Pr[G\n+1\n]\n= 0.1315, and Pr[G\n+2\n]\n= 0.0412.\n82", "keywords": "sensor networks;node compromise prevention;falsified local value attack;in-network data aggregation;Attack resilient heirarchical data aggregation;Sum aggregate;falsified sub-aggregate attack;Attack-Resilient;Count aggregate;Sensor Network Security;robust aggregation;Synopsis Diffusion;Data Aggregation;synopsis diffusion aggregation framework;network aggregation algorithms;Hierarchical Aggregation"} {"name": "39", "title": "Automated Rich Presentation of a Semantic Topic", "abstract": "To have a rich presentation of a topic, it is not only expected that many relevant multimodal information, including images, text, audio and video, could be extracted; it is also important to organize and summarize the related information, and provide users a concise and informative storyboard about the target topic. It facilitates users to quickly grasp and better understand the content of a topic. In this paper, we present a novel approach to automatically generating a rich presentation of a given semantic topic. In our proposed approach, the related multimodal information of a given topic is first extracted from available multimedia databases or websites. Since each topic usually contains multiple events, a text-based event clustering algorithm is then performed with a generative model. Other media information, such as the representative images, possibly available video clips and flashes (interactive animates), are associated with each related event. A storyboard of the target topic is thus generated by integrating each event and its corresponding multimodal information. Finally, to make the storyboard more expressive and attractive, an incidental music is chosen as background and is aligned with the storyboard. A user study indicates that the presented system works quite well on our testing examples.", "fulltext": "INTRODUCTION\nIn the multimedia field, a major objective of content analysis is to\ndiscover the high-level semantics and structures from the low-level\nfeatures, and thus to facilitate indexing, browsing, searching,\nand managing the multimedia database. In recent years, a lot of\ntechnologies have been developed for various media types,\nincluding images, video, audio and etc. For example, various\napproaches and systems have been proposed in image content\nanalysis, such as semantic classification [1], content-based image\nretrieval [2] and photo album management [3]. There are also a lot\nof research focuses on video analysis, such as video segmentation\n[4], highlight detection [5], video summarization [6][7], and video\nstructure analysis [8], applied in various data including news\nvideo, movie and sports video. Since audio information is very\nhelpful for video analysis, many research works on audio are also\ndeveloped to enhance multimedia analysis, such as audio\nclassification [9], and audio effect detection in different audio\nstreams [10]. 
Most recently, there are more and more approaches\nand systems integrating multimodal information in order to\nimprove analysis performance [11][12].\nThe main efforts of the above mentioned research have focused on\nunderstanding the semantics (including a topic, an event or the\nsimilarity) from the multimodal information. That is, after the\nmultimedia data is given, we want to detect the semantics implied\nin these data. In this paper, we propose a new task, Rich\nPresentation, which is an inverse problem of the traditional\nmultimedia content analysis. That is, if we have a semantic topic,\nhow can we integrate its relevant multimodal information,\nincluding image, text, audio and video, to richly present the target\ntopic and to provide users a concise and informative storyboard?\nIn this paper, the so-called \"semantic topic\" is a generic concept.\nIt could be any keyword representing an event or events, a\nperson's name, or anything else. For example, \"World Cup 2002\"\nand \"US election\" could be topics, as well as \"Halloween\" and\n\"Harry Potter\". In this paper, our task is to find sufficient\ninformation on these topics, extract the key points, fuse the\ninformation from different modalities, and then generate an\nexpressive storyboard.\nRich presentation can be very helpful to facilitate quickly\ngrasping and better understanding the corresponding topic.\nPeople usually search information from (multimedia) database or\nthe Internet. However, what they get is usually a bulk of\nunorganized information, with many duplicates and noise. It is\ntedious and costs a long time to get what they want by browsing\nthe search results. If there is a tool to help summarize and\nintegrate the multimodal information, and then produce a concise\nand informative storyboard, it will enable users to quickly figure\nout the overview contents of a topic that they want to understand.\nRich presentation provides such a tool, and thus it could have\nmany potential applications, such as education and learning,\nmultimedia authoring, multimedia retrieval, documentary movie\nproduction, and information personalization.\nIn this paper, we will present the approach to rich presentation. In\norder to produce a concise and informative storyboard to richly\npresent a target topic, we need to answer the following questions.\n1) How to extract the relevant information regarding the target\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nMM'05, November 611, 2005, Singapore.\nCopyright 2005 ACM 1-59593-044-2/05/0011...$5.00.\n745\ntopic? 2) How to extract the key points from the relevant\ninformation and build a concise and informative storyboard? 3)\nHow to fuse all the information from different modality? 
and 4)\nhow to design the corresponding rendering interface?\nStoryboard\nRelevant Media\nMusic\nMultiple Events Clustering\nEvent summary (4w + time)\nGeographic information\nRelevant multimodal information Retrieval\nText\nA Target Topic\nRhythm Analysis\nOnset/Beat Sequence\nStrength confidence\nMedia Association\nRepresentative images\nRelevant video clips\nStoryboard Generation\nEvent presentation, multimodal information fusion, layout design\nMusic and storyboard synchronization\nRich Presentation\nUser\nInteraction\n\nFig. 1 The system framework of rich presentation of a target\nsemantic topic. It is mainly composed of three steps, relevant\nmultimodal information extraction, media analysis, and rich\npresentation generation.\n\nIn this paper, we propose a number of novel approaches to deal\nwith the above issues and also present an example system. Fig. 1\nillustrates the proposed system framework of rich presentation. It\nis mainly composed of three steps, relevant multimodal information\nextraction, media analysis including multiple events clustering\n, representative media detection and music rhythm analysis;\nand the final storyboard generation and music synchronization.\nIn the proposed system, given the semantic topic, the relevant\ninformation, including text, image, video and music, is first\nextracted from the available multimedia database or the web database\n. User interaction is also allowed to provide extra relevant\nmaterial or give relevant feedback. Then, the information is\nsummarized, with an event clustering algorithm, to give a concise\nrepresentation of the topic and figure out the overview of the\ncontents. Other multimedia materials, such as representative\nimages (or image sequences) and geographic information, are\nsubsequently associated with each event. In the next step, all the\nabove information is integrated to generate a storyboard, in which\neach event is presented as one or multiple slides. An incidental\nmusic, which is also possibly relevant to the topic, is finally\nsynchronized with the storyboard to improve its expressiveness\nand attractiveness. Thus, with these steps, a concise and\ninformative rich presentation regarding the target topic is generated\n.\nThe rest of the paper is organized as follows. Section 2 discusses\nthe relevant information extraction corresponding to the target\ntopic. Section 3 presents our approach to the topic representation,\nincluding multiple events clustering, event description, and\nrepresentative media selection. Section 4 describes the approach\nto rich presentation generation, including storyboard generation,\nincidental music analysis and synchronization. Experiments and\nevaluations are presented in the Section 5. Conclusions are given\nin the Section 6.\nOBTAINING RELEVANT INFORMATION\nTo obtain the multimodal information which is relevant to the\ninput topic (keyword), generally, we could search them from\nvarious databases which have been indexed with the \"state-of-the-art\"\nmultimedia analysis techniques. However, in current stage,\nthere is lack of such publicly available multimedia databases. The\npublic search engine like MSN or Google indexes all the Internet\nweb-pages and can return a lot of relevant information, but the\nsearch results usually contain much noise. We could also build a\nprivate database for this system to provide more relevant and\nclean results, but it will be too much expensive to collect and\nannotate sufficient multimedia data for various topics. 
In order to\nobtain relatively accurate and sufficient data for an arbitrary topic,\nin our system, we chose to collect the relevant multimodal\ninformation of the given topic from the news websites such as\nMSNBC, BBC and CNN, instead of building an available\ndatabase from the scratch. These news websites are usually well\norganized and managed; and contain various kinds of high quality\ninformation including text, image and news video clips. Although\nthe news websites are used as the information sources in our\nsystem, other various multimedia databases can be also easily\nincorporated into the system if they are available.\nInstead of directly submitting the topic as a query and getting the\nreturned results by using the search function provided by the\nwebsites, in our system, we crawled the news documents from\nthese websites in advance and then build a full-text index. It\nenables us to quickly obtain the relevant documents, and also enable\nus to use some traditional information retrieval technologies,\nsuch as query expansion [13], to remove the query ambiguousness\nand get more relevant documents.\nIn our approach, user interaction is also allowed to provide more\nmaterials relevant to the topic, or give relevant feedback on the\nreturned results. For example, from the above websites, we can\nseldom find a music clip relevant to the target topic. In this case,\nusers could provide the system a preferred music, which will be\nfurther used as incidental music to accompany with the storyboard\npresentation. Users could also give some feedbacks on the\nobtained documents. For example, if he gives a thumb-up to a\ndocument, the relevant information of the document needs to be\npresented in the final storyboard. On the other side, users could\nalso thumb-down a document to remove the related information.\nTOPIC REPRESENTATION\nA semantic topic is usually a quite broad concept and it usually\ncontains multiple events. For example, in the topic \"Harry Potter\",\nthe publication of each book and the release of each movie could\nbe considered as an event; while in the topic \"World Cup 2002\",\neach match could also be taken as an event. For each event, there\nare usually many documents reporting it. Therefore, in order to\ngenerate an informative and expressive storyboard to present the\ntopic, it would be better to decompose the obtained information\nand cluster the documents into different events.\nHowever, event definition is usually subjective, different\nindividuals may have different opinions. It is also confusing in\nwhich scale an event should be defined. Also take \"World Cup\"\nas an example, in a larger scale, \"World Cup 2002\" and \"World\nCup 2006\" could also be considered as a big event. Therefore,\ndue to the above vagueness, in this paper, we do not strictly define\n746\neach event of the target topic. Following our previous works on\nnews event detection [14], an event is assumed as some similar\ninformation describing similar persons, similar keywords, similar\nplaces, and similar time duration. Therefore, in our system, an\nevent is represented by four primary elements: who (persons),\nwhen (time), where (locations) and what (keywords); and event\nclustering is to group the documents reporting similar primary\nelements. 
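For concreteness, the following minimal Python sketch (illustrative only, not the system's code) shows one way to hold this four-element representation of a document. The entity counts are borrowed from the "US Election" example of Fig. 6; entity extraction itself is assumed to have been done already (the paper uses the BBN NLP tools for that step, as described in Section 3.1).

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    """A document x_i reduced to the four primary elements of an event:
    who (persons), where (locations), what (keywords) and when (time)."""
    persons: Counter    # person (and organization) entity counts
    locations: Counter  # location entity counts
    keywords: Counter   # remaining non-stop-words
    time: date          # publication date

# Counts borrowed from the "US Election" document shown in Fig. 6.
doc = Document(
    persons=Counter({"Bush": 3, "David Cobb": 1}),
    locations=Counter({"Ohio": 4, "US": 2}),
    keywords=Counter({"recount": 7, "elect": 3, "America": 3, "poll": 3}),
    time=date(2004, 12, 6),
)
```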
As for the scale of an event, in this paper it can be adaptively determined by the time range of the obtained documents or by the required number of events.
In this section, we present a novel clustering approach based on a generative model proposed in [14], instead of using traditional clustering methods such as K-means. After the event clusters are obtained, the corresponding event summary is extracted and other representative media are associated with each event.
3.1 Multiple Event Clustering
To group the documents into different events, we essentially need to calculate p(e_j | x_i), the probability that a document x_i belongs to an event e_j. Here, as mentioned above, an event e_j (and thus a document x_i describing the event) is represented by four primary elements: who (persons), when (time), where (locations) and what (keywords). That is,

Event / Document = {persons, locations, keywords, time}

Assuming that a document is always caused by an event [14] and that the four primary elements are independent, to calculate the probability p(e_j | x_i) we first determine the likelihood that the document x_i is generated from event e_j, p(x_i | e_j), which can be represented by the following generative model:

p(x_i | e_j) = p(name_i | e_j) \, p(loc_i | e_j) \, p(key_i | e_j) \, p(time_i | e_j)    (1)

where name_i, loc_i, key_i, and time_i are the feature vectors representing the persons, locations, keywords and time in the document x_i, respectively. In our approach, these entities are extracted by the BBN NLP tools [15]. The tool can extract seven types of entities, including persons, organizations, locations, date, time, money and percent. We also treat an extracted organization entity as a person entity, and all words other than persons, locations and stop-words are taken as keywords.
In more detail, name_i (and similarly loc_i and key_i) is a vector <c_{i1}, c_{i2}, ..., c_{iN_p}>, where c_{in} is the occurrence frequency of person_n in the document x_i, and person_n is the nth person in the person vocabulary, which is composed of all the persons appearing in all the obtained documents (the keyword and location vocabularies are defined similarly). With N_p the size of the person vocabulary, p(name_i | e_j) can be further expressed as

p(name_i | e_j) = \prod_{n=1}^{N_p} p(person_n | e_j)^{c_{in}}    (2)

Since the persons, locations and keywords are discrete variables represented by words, and the probabilities of locations and keywords can be defined in the same way as that of persons in (2), in the following sections we do not discriminate between them and uniformly write the probability p(person_n | e_j) (and correspondingly p(location_n | e_j) and p(keyword_n | e_j)) as p(w_n | e_j), which denotes the probability that the word w_n appears in the event e_j.
On the other hand, the time of an event usually spans a continuous duration. It is also observed, especially in the news domain, that the number of documents about an event usually increases at the beginning of the event and then decreases towards its end.
Therefore, in our approach a Gaussian model N(u_j, \sigma_j) is utilized to roughly represent the probability p(time_i | e_j), where u_j and \sigma_j are the mean and the standard deviation, respectively.
To estimate the probability p(e_j | x_i), we thus need to estimate the parameters \Theta = {p(w_n | e_j), u_j, \sigma_j, 1 \le j \le K}, where K is the number of events (the selection of K is discussed in Section 3.2). In our approach, Maximum Likelihood is used to estimate the model parameters:

\Theta^* = \arg\max_\Theta \log p(X | \Theta) = \arg\max_\Theta \sum_{i=1}^{M} \log p(x_i | \Theta) = \arg\max_\Theta \sum_{i=1}^{M} \log \Big( \sum_{j=1}^{K} p(e_j) \, p(x_i | e_j, \Theta) \Big)    (3)

where X represents the corpus of obtained documents, and M and K are the numbers of documents and events, respectively.
Since it is difficult to derive a closed-form solution for the parameters, an Expectation-Maximization (EM) algorithm is applied to maximize the likelihood by running the E-step and the M-step iteratively. A brief summary of the two steps follows; more details can be found in [14].
In the E-step, the posterior probability p(e_j | x_i) is estimated as

p^{(t+1)}(e_j | x_i) = \frac{p^{(t)}(x_i | e_j) \, p^{(t)}(e_j)}{p^{(t)}(x_i)}    (4)

where the superscript (t) indicates the t-th iteration.
In the M-step, the model parameters are updated as

p^{(t+1)}(w_n | e_j) = \frac{1 + \sum_{i=1}^{M} p^{(t+1)}(e_j | x_i) \, tf(i,n)}{N + \sum_{i=1}^{M} p^{(t+1)}(e_j | x_i) \sum_{s=1}^{N} tf(i,s)}    (5)

u_j^{(t+1)} = \frac{\sum_{i=1}^{M} p^{(t+1)}(e_j | x_i) \, time_i}{\sum_{i=1}^{M} p^{(t+1)}(e_j | x_i)}    (6)

\big(\sigma_j^{(t+1)}\big)^2 = \frac{\sum_{i=1}^{M} p^{(t+1)}(e_j | x_i) \, (time_i - u_j^{(t+1)})^2}{\sum_{i=1}^{M} p^{(t+1)}(e_j | x_i)}    (7)

where tf(i,n) is the term frequency of the word w_n in the document x_i and N is the corresponding vocabulary size. Note that in (5) Laplace smoothing [16] is applied to prevent zero probabilities for infrequently occurring words.
Finally, the prior of each event is updated as

p^{(t+1)}(e_j) = \frac{1}{M} \sum_{i=1}^{M} p^{(t+1)}(e_j | x_i)    (8)

The algorithm increases the log-likelihood consistently over the iterations and converges to a local maximum. Once the parameters are estimated, each document is simply assigned to an event as

y_i = \arg\max_j \, p(e_j | x_i)    (9)

where y_i is the event label of the document x_i.
The advantage of this generative approach is that it not only takes the temporal continuity of an event into account, but can also deal with events that overlap in time: in that case the Gaussian time models of the events overlap as well, through the data-driven parameter estimation. From this point of view, the event clustering resembles the estimation of a Gaussian mixture model (GMM) along the timeline.
3.2 Determining the Number of Events
In the above approach to event clustering, the event number K is assumed to be known (as in (3)-(8)). However, the event number is usually very difficult to determine a priori.
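Before continuing with the choice of K, the following numpy sketch summarizes the clustering procedure of Section 3.1. It is only an illustration of equations (1)-(9) under simplifying assumptions (a single merged person/location/keyword vocabulary, a scalar timestamp per document, and random initialization of the posteriors instead of the peak-based initialization described next); it is not the authors' implementation.

```python
import numpy as np

def em_event_clustering(tf, times, K, n_iter=50, eps=1e-9):
    """EM for the generative event model.
    tf    : (M, N) term-frequency matrix over the merged word vocabulary
    times : length-M array of document timestamps (e.g. days since the first doc)
    K     : assumed number of events"""
    tf = np.asarray(tf, dtype=float)
    times = np.asarray(times, dtype=float)
    M, N = tf.shape
    rng = np.random.default_rng(0)
    post = rng.dirichlet(np.ones(K), size=M)             # p(e_j | x_i), random start
    for _ in range(n_iter):
        # M-step: re-estimate parameters from the current posteriors (Eqs. 5-8)
        weight = post.sum(axis=0) + eps                   # sum_i p(e_j | x_i)
        pw = (1.0 + post.T @ tf) / \
             (N + (post * tf.sum(axis=1, keepdims=True)).sum(axis=0))[:, None]
        mu = (post * times[:, None]).sum(axis=0) / weight
        var = (post * (times[:, None] - mu) ** 2).sum(axis=0) / weight + eps
        prior = weight / M
        # E-step: posteriors from the updated parameters (Eqs. 1 and 4)
        log_like = tf @ np.log(pw).T                      # word part of log p(x_i | e_j)
        log_like += -0.5 * np.log(2 * np.pi * var) \
                    - 0.5 * (times[:, None] - mu) ** 2 / var   # Gaussian time part
        log_post = log_like + np.log(prior)
        log_post -= log_post.max(axis=1, keepdims=True)   # numerical stability
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1), (pw, mu, np.sqrt(var), prior)   # labels y_i (Eq. 9)
```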
In our approach, an intuitive way is adopted to roughly estimate the event number based on the document distribution along the timeline.
As mentioned above, it is assumed that each document is caused by an event, and that the number of documents of an event changes as the event develops. According to this property, each peak (or the corresponding contour) of the document distribution curve might indicate one event [14], as Fig. 2 shows. Thus, we can roughly estimate the event number by simply counting the number of peaks. However, the curve is quite noisy and inevitably contains some noisy peaks. To avoid them, in our approach only the salient peaks are assumed to be relevant to the event number.
To detect the salient peaks, we first smooth the document curve with a half-Hamming (raised-cosine) window, and then remove the very small peaks with a threshold. Fig. 2 illustrates a smoothed document distribution with the corresponding threshold, collected on the topic "US Election" over four months. In the experiments, the threshold is adaptively set as \mu_d - \sigma_d / 2, where \mu_d and \sigma_d are the mean and standard deviation of the curve, respectively.
After smoothing and tiny-peak removal, we further detect the valleys between every two adjacent peaks. The range of an event (which is correlated with the corresponding peak) can then be taken as the envelope between the two valleys. As shown in Fig. 2, the duration denoted by L_i + R_i is a rough range of the event correlated with the peak P_i. Assuming that an important event usually has more documents and lasts longer, the saliency of each peak is defined as

S_i = \frac{P_i}{P_{avr}} \cdot \frac{L_i + R_i}{D_{avr}}    (10)

where P_i is the ith peak, L_i and R_i are the durations from the ith peak to the previous and next valley, P_{avr} is the average peak value, and D_{avr} is the average duration between two valleys in the curve. S_i is the saliency value of the peak P_i. It can also be considered as the normalized area under peak P_i, and thus roughly represents the document number of the corresponding event.
In our approach, the top K salient peaks are selected to determine the event number:

K = \arg\min_k \Big\{ \sum_{i=1}^{k} S'_i \Big/ \sum_{i=1}^{N} S'_i \ge \alpha \Big\}    (11)

where S'_i are the saliency values sorted from large to small, N is the total number of detected peaks and \alpha is a threshold. In our experiments, \alpha is set to 0.9, which roughly means that at least 90% of the documents will be kept in the subsequent initialization of event clustering. This selection scheme is designed to guarantee that no important information is missed in the presentation. After the event number and the initial clusters (the most salient peaks with their corresponding ranges) are selected, the event parameters can be initialized and then updated iteratively.

[Fig. 2 here: the daily document-count curve (#Doc per day) with the removal threshold marked, and peaks P_{i-1}, P_i, P_{i+1} together with the left/right ranges L_i and R_i of peak P_i.]
Fig. 2 Peak saliency definition. It also illustrates the smoothed document distribution (document number per day) with the corresponding threshold for tiny peak removal.
Each peak P_i is assumed to be correlated with one event.
It is noted that techniques such as the Bayesian Information Criterion (BIC) or minimum description length (MDL) [17] could be used to estimate the optimal event number, by searching over a reasonable range of event numbers for the one that maximizes the likelihood in (3). However, these algorithms take a long time, and it is usually not necessary to estimate the exact event number in our rich presentation scenario. In fact, in our system the most important requirement of event clustering is that the clustered documents really represent the same event, rather than that the event number be exact, as observed in the experiments. Moreover, in the step that synchronizes the music and the storyboard (Section 4.2), the number of presented events may be further refined, based on the user's preference, in order to match the presentation duration with the music duration.
3.3 Event Description
After obtaining the events and the corresponding documents, we not only need a concise event summary, but also need to extract some representative media to describe each event.
3.3.1 Event Summary
A simple way to summarize an event is to choose some representative words for the persons, locations and keywords of the event. For example, for the event e_j, the "leading actor" could be chosen as the person with the maximum p(person_n | e_j), while the major location could be selected based on p(location_n | e_j). However, such a brief description may read poorly. Therefore, in order to increase the readability of the summary, our system also provides an alternative: we choose a candidate document to represent an event. For example, the document with the highest p(x_i | e_j) is a good candidate representative of the event e_j. However, a document might be too long to be shown on the storyboard. Therefore, in our system only the "title-brow" of the document (the text between the news title and the news body), which usually exists and is usually a good overview (summary) of the document based on our observation (especially in the case of news documents), is selected to describe the event.

[Fig. 3 here: the event template, with areas I-IV laid out as described in the caption.]
Fig. 3 The event template of the Storyboard, which illustrates (I) the representative media, (II) geographic information, (III) event summary, and (IV) a film strip giving an overview of the events in the temporal order.
3.3.2 Extracting Representative Media
The obtained documents describing an event usually contain many illustrational images, and possibly flashes and video clips. This media content is also a good representative of the corresponding event. However, since the obtained documents are crawled directly from the news websites, they usually contain many noisy multimedia resources, such as advertisements. Moreover, duplicate images may exist in different documents describing the same event. Therefore, to extract the representative media from the documents, we need to remove noisy media and possible duplicate images. Before this, we also perform a pre-filtering step to remove all images smaller than 50 pixels in height or width.
Noisy Media Detection. In our approach, a simple but efficient rule is used to remove the noisy media resources. We find that almost all advertisements are provided by other agencies rather than by the news websites themselves.
That is, the hosts of advertisement resources are on different websites. Thus, in our approach, we extract the host names from the URLs of all multimedia resources and remove those resources whose host name differs from that of the news site.
Duplicate Detection. A number of image signature schemes could be adopted to accomplish duplicate detection. In our implementation, each image is converted to grayscale and down-sampled to 8x8 pixels, giving a 64-byte signature per image. The Euclidean distance between the 64-byte signatures is then taken as the dissimilarity measure, and images with a sufficiently small distance are considered duplicates.
After removing the noisy resources and duplicate images, we simply select one to four large images from the most representative documents (those with the largest p(x_i | e_j)) and take them as the representative media of the corresponding event. The exact number of selected images depends on the document number (i.e., the importance) of the event and the total number of images the event has. Note that our current system only associates images with each event; however, other media such as video and flashes can be chosen in a similar way.
RICH PRESENTATION GENERATION
In the proposed system, the information obtained above, including event summaries and representative media, is fused to generate a concise and informative storyboard that richly presents the target topic. In this section, we first describe storyboard generation for the target topic, in which each event is presented with its multimodal information. Then we present the approach to synchronizing the storyboard with incidental music.
4.1 Storyboard Generation
In our approach, the storyboard of a target topic is generated by presenting each event of the topic slide by slide. To describe an event, we have obtained the corresponding information, including persons, time, location, event summary and relevant images. To present each event informatively, we therefore first need to design an event template (i.e., an interface) that integrates all of this information.
Fig. 3 illustrates the event template used in our proposed system, with an example event from the topic "US Election". First, the template presents the representative images in the largest area (part I), since pictures are more vivid than words. For each representative picture, the title and date of the document from which it was extracted are also shown; in Fig. 3, there are 4 pictures extracted from 3 documents. Then, the corresponding event summaries of these three documents are presented (part III), where each paragraph is the summary of one document. If a user is interested in one document, he can click on the corresponding title to read more details. Moreover, the geographic information of the event is shown with a map in the top-left corner (part II), to give users a view of the event location. The map is obtained from the "MapPoint Location" service [18], which returns a map corresponding to a location query. However, the mapping is sometimes difficult, especially when the event location is ambiguous and the representative location is not accurately detected. For example, the event shown in Fig. 3 is mapped to Washington D.C. rather than New York, where the Republican convention was held, since Washington is the most frequently mentioned place in the documents.
Finally, a film strip (part IV) is also presented, arranging each event in the temporal order, where each event is simply represented by a cluster of images, with the current event highlighted. It enables users to have a quick overview of the past and the future in the event sequence.
By connecting various events slide by slide, we could get an informative storyboard regarding the target topic. In order to catch the development process of a topic, the events are ordered by their timestamps in the generated storyboard.
4.2 Synchronizing with Music
To make the storyboard more expressive and attractive, and to provide a more relaxing way to read information, in the proposed system, we will accompany the storyboard with an incidental music and align the transitions between event slides with the music beats, following the idea in music video generation [19][20]. Sometimes, music could also provide extra information about the target topic. For example, when the target topic is a movie, the corresponding theme song could be chosen for the rich presentation. In this sub-section, we will present our approach to music analysis and synchronization with the storyboard.
4.2.1 Music Rhythm Analysis
In the proposed system, we detect the onset sequences instead of the exact beat series to represent music rhythm. This is because the beat information is sometimes not obvious, especially in light music which is usually selected as incidental music. The strongest onset in a time window could be assumed as a "beat". This is reasonable since there are some beat positions in a time window (for example, 5 seconds); thus, the most possible position of a beat is the position of the strongest onset.
The process of onset estimation is illustrated in Fig. 4. After FFT is performed on each frame of 16ms-length, an octave-scale filter-bank is used to divide the frequency domain into six sub-bands, including [0, \omega_0/2^6), [\omega_0/2^6, \omega_0/2^5), ..., [\omega_0/2^2, \omega_0/2], where \omega_0 refers to the sampling rate.

[Fig. 4 here: acoustic music data -> FFT -> sub-band 1 ... sub-band N -> envelope extractor -> difference curves -> summed onset curve.]
Fig. 4 The process of onset sequence estimation

After the amplitude envelope of each sub-band is extracted by using a half-Hamming window, a Canny operator is used for onset sequence detection by estimating its difference function,

D_i(n) = A_i(n) \otimes C(n)    (12)

where D_i(n) is the difference function in the ith sub-band, A_i(n) is the amplitude envelope of the ith sub-band, and C(n) is the Canny operator with a Gaussian kernel,

C(i) = \frac{i}{\sigma^2} e^{-i^2 / (2\sigma^2)}, \quad i \in [-L_c, L_c]    (13)

where L_c is the length of the Canny operator and \sigma is used to control the operator's shape, which are set as 12 and 4 in our implementation, respectively.
Finally, the sum of the difference curves of these six sub-bands is used to extract onset sequence. Each peak is considered as an onset, and the peak value is considered as the onset strength.
Based on the obtained onsets, an incidental music is further segmented into music sub-clips, where a strong onset is taken as the boundary of a music sub-clip.
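As a rough illustration of this rhythm analysis (a simplified numpy sketch only: the frame and band handling, the window choices and the parameter names are approximations rather than the authors' code), the onset curve of equations (12) and (13) and the music sub-clip boundaries can be computed along the following lines:

```python
import numpy as np

def onset_curve(x, sr, lc=12, sigma=4.0):
    """Per-frame onset strength: FFT frames -> six octave sub-bands ->
    half-Hamming envelopes -> Canny-style difference curves -> sum."""
    frame_len = int(0.016 * sr)                               # 16 ms frames
    n_frames = len(x) // frame_len
    frames = np.asarray(x[:n_frames * frame_len], dtype=float).reshape(n_frames, frame_len)
    spec = np.abs(np.fft.rfft(frames, axis=1))                # magnitude spectrum
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    # octave-scale bands [0, sr/2**6), [sr/2**6, sr/2**5), ..., [sr/2**2, sr/2]
    edges = [0.0] + [sr / 2 ** k for k in range(6, 0, -1)]
    half_hamming = np.hamming(2 * lc)[lc:]                    # decaying half window
    half_hamming /= half_hamming.sum()
    i = np.arange(-lc, lc + 1)
    canny = i / sigma ** 2 * np.exp(-i ** 2 / (2 * sigma ** 2))   # Eq. (13)
    onset = np.zeros(n_frames)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)  # sub-band amplitude
        env = np.convolve(band, half_hamming, mode="same")        # envelope A_i(n)
        onset += np.correlate(env, canny, mode="same")            # difference D_i(n), Eq. (12)
    return np.clip(onset, 0.0, None)                          # peaks = onsets, height = strength

def subclip_boundaries(onset, hop=0.016, t_min=12.0, t_max=18.0):
    """Cut at the strongest onset inside each [t_min, t_max]-second window."""
    bounds, pos = [0], 0
    while pos + int(t_max / hop) < len(onset):
        lo, hi = pos + int(t_min / hop), pos + int(t_max / hop)
        pos = lo + int(np.argmax(onset[lo:hi]))
        bounds.append(pos)
    return [b * hop for b in bounds]                          # boundary times in seconds
```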
These music sub-clips are then\nused as the basic timeline for the synchronization in the next step.\nThus, to satisfy the requirement that the event slide transitions of\nthe storyboard should occur at the music beats, we just need to\nalign the event slide boundaries and music sub-clip boundaries.\nTo give a more pleasant perception, the music sub-clip should not\nbe too short or too long, also it had better not always keep the\nsame length. In our implementation, the length of music sub-clips\nis randomly selected in a range of [t\nmin\n, t\nmax\n] seconds. Thus, the\nmusic sub-clips can be extracted in the following way: given the\nprevious boundary, the next boundary is selected as the strongest\nonset in the window which is [t\nmin\n, t\nmax\n] seconds away from the\nprevious boundary. In the proposed system, users can manually\nspecify the range of the length of the music sub-clip. The default\nrange in the system is set as [12, 18] seconds, in order to let users\nhave enough time to read all the information on each event slide.\n4.2.2 Alignment Scheme\nTo synchronize the transitions between different event slides and\nthe beats of the incidental music, as mentioned above, we actually\nneed to align the slide boundaries and music sub-clip boundaries.\nTo satisfy this requirement, a straightforward way is to set the\nlength of each event slide be equal to the corresponding length of\nthe sub-music clip.\nHowever, as Fig. 5 illustrates, the number of event slides is\nusually not equal to the number of music sub-clip. In this case, in\nour proposed system, we provide two schemes to solve this\nproblem.\n1) Music Sub-clip Based. In this scheme, only the top N important\nevents of the target topic are adaptively chosen and used in the\nrich presentation, where N is supposed as the number of music\nsub-clip in the corresponding incidental music, as the Fig.5 shows.\nAlthough a formal definition of event importance is usually hard\nand subjective, in our approach, the importance score of an event\nis simply measured by the number of documents reporting it,\nassuming that the more important the event, the more the\ncorresponding documents. The assumption is quite similar as that\nin the definition of (10).\n750\n2) Specified Event Number Based. In this scheme, users can\nspecify the number of the event he wants to learn. For example, a\nuser could choose to show the top 30 important events or all the\nevents. Thus, to accommodate all the events in the music duration,\nwe will repeat the incidental music if it is needed and then fade out\nthe music at the end.\nE1\nE2\nE3\nE4\nE5\nE6\nE8\nE7\nS2\nS1\nS3\n.......\nS4\nS5\n.......\nEvent\nSlide List\nMusic\nSub-Clip\n\nFig. 5 Music and storyboard synchronization: a music sub-slip\nbased scheme, that is, only the top important events are presented\nto match the number of music sub-clips.\n4.2.3 Rendering\nAfter the alignment between storyboard and incidental music, in\nour system, fifteen common transition effects, such as cross-fade,\nwipe and dissolve, are also randomly selected to connect the event\nslides, producing a better rich presentation in final rendering.\nEVALUATIONS\nIn this section, we evaluate the performance of the proposed\napproach to rich presentation and its key component, event\nclustering. 
In the experiments, we randomly select 8 topics of different types, including Earthquake, Halloween, Air Disaster, US Election, Nobel Prize, Britney Spears, David Beckham, and Harry Potter, from the hot news topics of late 2004 and early 2005. Once a topic is selected, the topic name is used as a query and the relevant documents are collected from CNN, MSNBC and BBC. More details about the selected topics and the corresponding documents are shown in Table 1, which lists the topic name, the time range of the collected documents, and the numbers of documents and of labeled events.

Table 1. A list of testing topics in the rich presentation evaluations
No.  Topic           Time        #doc   #event
1    Earthquake      1995-2004    976    17
2    Halloween       1995-2004    762     9
3    Air Disaster    1995-2004    210    13
4    US Election     1995-2004   2486     -
5    Britney Spears  2000-2004   1311     -
6    Nobel Prize     1995-2004    186     -
7    David Beckham   1995-2004    877     -
8    Harry Potter    2000-2004    841     -
     Total              --       7649    --

It is noted that only 3 topics in the table have labeled events, while the other 5 do not. This is because labeling a topic is very subjective, and it is usually hard for individuals to manually decide the event number of a given topic. Therefore, we only label the topics that are easy to annotate based on the criteria of the Topic Detection and Tracking (TDT) project [21]. For example, Halloween is a topic that is reported once a year, so each year's documents can be regarded as an event; as for Earthquake and Air Disaster, their event lists can be found on the corresponding official websites. In the annotation, we remove the events that have no or few (fewer than 4) relevant documents, and also remove the documents not belonging to any event.
After parsing the obtained documents, we obtain on average about 3.8 images per document for each topic. After duplicate detection, only 1.6 images per document remain. Moreover, from each document we also obtain about 3.0 unique location entities and 2.8 unique name entities. Words other than these entities are taken as keywords. Fig. 6 shows the actual representation of an example document with extracted entities in XML format, from which the event clustering is performed.
5.1 Event Clustering
As mentioned above, the approach to event clustering is evaluated on three topics, Earthquake, Halloween, and Air Disaster, for which the corresponding event numbers are determined and the documents are labeled using a method similar to that of the TDT project. However, the proposed approach does not actually estimate the optimal event number, but uses a much larger one. Therefore, in order to better evaluate the performance of the event clustering algorithm and compare it with its counterpart, we use the event number in the ground truth to initialize the cluster number in the proposed clustering algorithm.
Fig. 6. XML representation of a document on "US Election" with extracted entities
.....
<URL>http://news.bbc.co.uk/1/hi/world/americas/4071845.stm </URL>
<Abstract>The US battleground state of Ohio has certified the victory of President George W Bush's in last month's poll.
</Abstract>\n<Date> 2004/12/6 </Date>\n<NLPRESULT>\n<LOCATION>\n<entity> Ohio </entity> <freq>4</freq>\n<entity> US </entity> <freq> 2 </freq>\n</LOCATION>\n<PERSON>\n<entity> Bush </entity> <freq> 3 </freq>\n<entity>David Cobb</entity> <freq>1</freq>\n...\n</PERSON>\n...\n<DATE>\n<entity> 6 December, 200</entity> <freq> 1 </freq>\n<entity> Friday </entity> <freq> 2 </freq>\n...\n</DATE>\n<KEYWORDS>\n...\n<entity> recount </entity> <freq>7</freq>\n<entity> elect </entity> <freq>3</freq>\n<entity> America </entity> <freq>3</freq>\n<entity> poll </entity> <freq>3</freq>\n...\n</KEYWORDS>\n</NLPRESULT>\n751\nIn the experiments, K-means, which is another frequently used\nclustering algorithm (as well in TDT [22]), is adopted to compare\nwith the proposed approach. The comparison results of two\nclustering approaches are illustrated in Table 2, with precision and\nrecall for each topic.\nTable 2. The performance comparison between our approach and\nK-means on the event clustering\nPrecision Recall\n\nK-means\nOurs\nK-means\nOurs\nEarthquake 0.74\n0.87\n0.63\n0.74\nHalloween 0.88\n\n0.93 0.72 0.81\nAir Disaster\n0.57\n0.68\n0.55\n0.61\nAverage 0.73\n0.83\n0.63\n0.72\nFrom Table 2, it can be seen that the results of our approach are\nsignificantly better than those of K-means, both on precision and\nrecall. On the three testing topics, the average precision of our\napproach is up to 0.83 and the average recall achieves 0.72, which\nis 10% and 9% higher than those of K-means, respectively. By\ntracing the process of K-means, we find that K-means usually\nassigns documents far away from each other on the timeline into\nthe same cluster, since the time information affects little in K-means\n. It also indicates the advantages of our approach with time\nmodeling.\nThe algorithms also show different performance on different kind\ntopics. As for the \"Air disaster\", its performance is not as good as\nthat of the other two, since the features (words and time) of its\nevents are more complicated and intertwined in the feature space.\nAs for the topics (4-8 in Table I) which could not have an\nobjective evaluation, the clustering performance on these topics\ncould be indirectly reflected by the subjective evaluation of the\nrich presentation presented in section 5.2. This is because users\nwill be more satisfied when the grouped documents shown in each\nevent slide really belong to the same event; while users are not\nsatisfied if the documents from different events are mixed in one\nevent slide.\n5.2 Rich Presentation\nIt is usually difficult to find a quantitative measure for rich\npresentation, since the assessment of the goodness of rich presentation\nis a strong subjective task. In this paper, we carry out a preliminary\nuser study to evaluate the performance of the proposed\nrich presentation schemes.\nTo indicate the performance of rich presentation, we design two\nmeasures in the experiments, including `informativeness' and\n`enjoyablity', following the criteria used in the work [7]. Here, the\ninformativeness measures whether the subjects satisfy with the\ninformation obtained from the rich presentation; while enjoyablity\nindicates if users feel comfortable and enjoyable when they are\nreading the rich presentation. In evaluating the informativeness,\nwe also provide the documents from which the rich presentation is\ngenerated. 
They are used as baseline, based on which the subjects\ncan more easily evaluate if the important overview information\ncontained in the documents is conveyed by the rich presentation.\nMoreover, in order to reveal the subjects' opinion on the design of\nthe storyboard template, like the one shown in Fig 3, we also ask\nthe subjects to evaluate the `interface design'.\nIn the user study, 10 volunteered subjects including 8 males and 2\nfemales are invited. The subjects are around 20-35 years old, have\nmuch experience on computer manipulation, and usually read\nnews on web in their leisure time. We ask them to give a\nsubjective score between 1 and 5 for each measure of the rich\npresentation of each testing topic (an exception is `interface\ndesign', which is the same for each rich presentation). Here, the\nscore `1' to `5' stands for unsatisfied (1), somewhat unsatisfied (2),\nacceptable (3), satisfied (4) and very satisfied (5), respectively.\nIn experiments, we first check with the `interface design' measure.\nWe find 7 out of 10 subjects satisfy with the event template design\nand the left three also think it is acceptable. The average score is\nup to 3.9. An interesting observation is that, some subjects like\nthe template design very much at the first glance, but they feel a\nlittle boring after they finish all the user study since every slide in\nthe rich presentation of each topic has the same appearance. It\nhints us that we had better design different templates for different\ntopics to make the rich presentation more attractive.\nAs for the other two measures, we average the score across all the\nsubjects to represent the performance for each topic, and list the\ndetailed results in Table 3. It can be seen that the average score of\nboth enjoyablity and informativeness achieves 3.7, which indicates\nthat most subjects satisfy the provided overview information of the\ntarget topic, and they enjoy themselves when reading these rich\npresentations.\nTable 3. The evaluation results of rich presentation on each topic\nNo. Topic Informative\nEnjoyable\n1 Earthquake\n4.3\n3.2\n2\nHalloween 3.6 4.0\n3 Air\nDisaster\n4.0\n3.4\n4 US\nElection\n4.1\n4.0\n5 Britney\nSpears\n3.6\n4.1\n6 Nobel\nPrize\n3.3\n3.4\n7 David\nBeckham\n3.4\n4.0\n8 Harry\nPotter\n3.3\n3.4\nAverage 3.7\n3.7\nIn the experiments, we find informativeness is highly depended on\nthe correlation between the presented documents and the target\ntopic. If the presented information is consistent with the topic,\nsubjects usually give a high score for informativeness, such as\nthose on Earthquake and US Election; otherwise, they will give a\nlow score, like those on David Beckham and Nobel Prize. It\nindicates that it is quite important to provide users clean\ninformation of the target topic with less noise. However, in\ncurrent system, the documents are crawled from web and\ninevitably contain many noises. It affects much on the performance\nof informativeness in the current system. We need to consider\nhow to prone the information of the target topic in the future\nworks.\nWe also find that the enjoyablity score is usually related with\ninformativeness. If the subjects do not get enough information\nfrom the rich presentation, they will be not enjoyable as well, such\nas the topics of Nobel Prize and Harry Potter. Enjoyablity is also\ntopic-related, the subjects usually feel unconformable when they\nare facing with miserable topics, such as Earthquake and Air\nDisaster, although their informativeness is quite high. 
On the\n752\ncontrary, users give a high score for enjoyablity on the interesting\ntopics, such as Britney Spears and David Beckham, although their\ninformative score is not high. This is because that there are\nusually many funny and interesting pictures in the presentation of\nthese topics. Another finding is that users usually fell unenjoyable\nif the images and summaries in one event slide are not consistent\nwith each other. From this view, the high enjoyablity score in our\nexperiments also indicates that our event clustering algorithm\nworks promisingly\nCONCLUSIONS\nTo facilitate users to quickly grasp and go through the content of a\nsemantic topic, in this paper, we have proposed a novel approach\nto rich presentation to generate a concise and informative\nstoryboard for the target topic, with many relevant multimodal\ninformation including image, text, audio and video. In this\napproach, the related multimodal information of a given topic is\nfirst extracted from news databases. Then, the events are clustered,\nand the corresponding information, such as representative images,\ngeographic information, and event summary, is obtained. The\ninformation is composed into an attractive storyboard which is\nfinally synchronized with incidental music. A user study indicates\nthat the presented system works well on our testing examples.\nThere is still some room for improving the proposed approach.\nFirst, the proposed approach could be extended to other\nmultimedia databases or more general websites. For example,\nsome standard multimedia database like NIST TRECVID could\nprovide a nice platform for the implementation and evaluation of\nevent detection and rich presentation. Second, to integrate more\nrelevant multimedia information (such as video clips and flashes)\nand more accurate information regarding the target topic is highly\nexpected by users. Thus, more advanced information retrieval/\nextraction techniques and other multimedia analysis techniques are\nneeded to be exploited and integrated, such as relevance ranking,\nmapping schemes, important or representative video clips\ndetection and video clip summarization. We also need to design a\nmuch natural way to incorporate video clips in the event template.\nThird, we also consider designing various storyboard templates for\ndifferent kind of topics. For example, each topic may be belonging\nto different clusters such as politics, sports and entertainments,\neach of which can have a representative template. Forth,\nappropriate user interaction will be added to further make the\nstoryboard more interactive and easy to control. Finally, a\nthorough evaluation will be implemented to evaluate the effect of\neach component in the framework and storyboard template.\n\nREFERENCES\n[1] A. Vailaya, M.A.T. Figueiredo, A. K. Jain, and H.-J. Zhang.\n\"Image classification for content-based indexing\". IEEE\nTransactions on Image Processing, Vol.10, Iss.1, 2001\n[2] F. J., M.-J. Li, H.-J. Zhang, and B. Zhang. \"An effective\nregion-based image retrieval framework\". Proc. ACM\nMultimedia'02, pp. 456-465, 2002\n[3] J. Platt \"AutoAlbum: Clustering Digital Photographs using\nProbabilistic Model Merging\" Proc. IEEE Workshop on\nContent-Based Access of Image and Video Libraries, pp. 96\n100, 2000.\n[4] A. Hanjalic, R. L. Lagendijk, J. Biemond, \"Automated high-level\nmovie segmentation for advanced video-retrieval\nsystems\", IEEE Trans on Circuits and Systems For Video\nTechnology, Vol. 9, No. 4, pp. 580-588, 1999.\n[5] J. 
Assfalg and et al, \"Semantic annotation of soccer videos:\nautomatic highlights identification," CVIU'03, vol. 92, pp.\n285-305, 2003.\n[6] A. Ekin, A. M. Tekalp, and R. Mehrotra, "Automatic soccer\nvideo analysis and summarization," IEEE Trans. on Image\nProcessing, 12(7), pp. 796-807, 2003.\n[7] Y. -F. Ma, L. Lu, H. -J. Zhang, and M.-J Li. \"A User\nAttention Model for Video Summarization\". ACM\nMultimeida'02, pp. 533-542, 2002.\n[8] L. Xie, P. Xu, S.F. Chang, A. Divakaran, and H. Sun,\n"Structure analysis of soccer video with domain knowledge\nand hidden markov models," Pattern Recognition Letters,\nvol. 25(7), pp. 767-775, 2004.\n[9] L. Lu, H. Jiang, H. J. Zhang, \"A Robust Audio Classification\nand Segmentation Method,\" Proc. ACM Multimedia'01, pp.\n203-211, 2001\n[10] R. Cai, L. Lu, H.-J. Zhang, and L.-H. Cai, \"Highlight Sound\nEffects Detection in Audio Stream,\" Proc. ICME'03 Vol.3,\npp.37-40, 2003.\n[11] Y. Rui, A. Gupta, and A. Acero, \"Automatically Extracting\nHighlights for TV Baseball Programs\", Proc. ACM Multi-media'00\n, pp.105-115, 2000.\n[12] C. Snoek, and M. Worring. \"Multimodal Video Indexing: A\nReview of the State-of-the-art\". Multimedia Tools and\nApplications, Vol. 25, No. 1 pp. 5 35, 2005\n[13] E.M. Voorhees, \"Query expansion using lexical-semantic\nrelations\" Proc. ACM SIGIR Conference on Research and\nDevelopment in Information Retrieval , pp 61 - 69, 1994\n[14] Z.-W. Li, M.-J. Li, and W.-Y. Ma. "A Probabilistic Model for\nRetrospective News Event Detection\", Proc. SIGIR\nConference on Research and Development in Information\nRetrieval, 2005\n[15] D. M. Bikel, R. L. Schwartz, and R. M. Weischedel. \"An\nAlgorithm That Learns What's in a Name\". Machine\nLearning, 34(1-3), 1999\n[16] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. \"Text\nClassification from Labeled and Unlabeled Documents using\nEM\". Machine Learning, 39(2-3), 2000\n[17] T. Hastie, R. Tibshirani, and J. Friedman. \"The Elements of\nStatistical Learning: Data Mining, Inference and Prediction\".\nSpringer-Verlag, 2001\n[18] MapPoint Web Service http://www.microsoft.com/mappoint/\nproducts/ webservice/default.mspx\n[19] X.-S. Hua, L. Lu, H.-J. Zhang. "Automated Home Video\nEditing", Proc. ACM Multimedia'03, pp. 490-497, 2003\n[20] J. Foote, M. Cooper, and A. Girgensohn. \"Creating Music\nVideos Using Automatic Media Analysis\". ACM\nMultimedia'02, pp.553-560, 2002.\n[21] Topic Detection and Tracking (TDT) Project: http://www.\nnist.gov/speech/tests/tdt/\n[22] J. Allan, R. Papka, and V. Lavrenko. \"On-line New Event\nDetection and Tracking\". Proc. SIGIR Conference on\nResearch and Development in Information Retrieval 98,\npp.37-45, 1998\n\n753", "keywords": "documentary and movie;Rich presentation;events clustering;Communication and Multimedia;Representative Media;Images, videos and Audio Technologies;Rich video clips and flashes;Multi-modal information;Generate storyboard;storyboard;Subjective multiple events;multimedia fusion;High-level semantics;Event clustering;multimodality;multimedia authoring"} {"name": "4", "title": "A Database Security Course on a Shoestring", "abstract": "Database security has paramount importance in industrial, civilian and government domains. Despite its importance, our search reveals that only a small number of database security courses are being offered. In this paper, we share our experience in developing and offering an undergraduate elective course on database security with limited resources. 
We believe that database security should be considered in its entirety rather than being component specific. Therefore , we emphasize that students develop and implement a database security plan for a typical real world application . In addition to the key theoretical concepts, students obtain hands-on experience with two popular database systems . We encourage students to learn independently making use of the documentation and technical resources freely available on the Internet. This way, our hope is that they will be able to adapt to emerging systems and application scenarios.", "fulltext": "INTRODUCTION\nDatabase systems are designed to provide efficient access\nto large volumes of data. However, many application\ndomains require that the data access be restricted for security\nreasons.\nFor example, an unauthorized access to\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nSIGCSE'06, March 1\ufffd5, 2006, Houston, Texas, USA.\nCopyright 2006 ACM 1-59593-259-3/06/0003...\n$\n5.00.\na bank database can potentially cost millions of dollars.\nThe federal Health Insurance Portability and Accountability\nAct (HIPAA) regulates the disclosure of information from a\npatient database, allowing access to health care providers,\nhealth plans, and health care clearinghouses, simultaneously\nprotecting the privacy of patients. For obvious reasons, a\nDepartment of Defense (DoD) database needs to be protected\nfrom unauthorized access. Since many organizations\nincreasingly entrust their information resources with database\nsystems, in today's highly networked environment, the\nsensitive information can be at high risk unless there are security\nmechanisms in place to protect the data at the source\nitself. However, a large number of databases are incorrectly\ninstalled, configured, and maintained. This, in part, may be\nattributed to the lack of database security education in our\ncomputer science programs. We feel that a new undergraduate\ncourse on database security will help our students face\nthe ever increasing challenges in this field.\nOur search shows that, despite the importance, only a\nhandful database security courses are being offered. Most\nof the courses we found are graduate courses and are highly\ntheoretical. We also found a few extension program courses,\nwhich are product specific.\nAlthough a large number of\ndatabase courses exist at both undergraduate and graduate\nlevels, we feel that, one reason for not offering database\nsecurity courses may be the scarcity of textbooks, reference\nmaterials, and other resources.\nRealizing the importance of database security in computer\nscience curriculum, [8] proposes adding a new module to the\nbasic database course. Since the basic database course has\nalready many topics to cover, we feel that the addition of\nnew material will not completely serve the purpose. Further,\nwe find it difficult to incorporate hands-on component to\nsuch a course. Similarly, a computer security course is too\nbroad in scope, and rarely includes database security topics.\nTherefore, we decided to develop a new undergraduate level\nelective on database security. 
This paper is based on our\nexperience of offering a database security course in Spring\n2005. We have adjusted the contents and assignments in\nresponse to the feedback and course outcome. The modified\nversion is presented here.\nSince many of our students seek industrial positions after\ngraduation, we have designed our course to meet their needs\nwith the right blend of theory and practice. The course objective\nis to develop an understanding of security aspects\nof databases, database administration, and database supported\napplications. We collected information from alumni\nas well as potential employers before finalizing the contents.\n7\nAlthough students were expected to gain hands-on experience\nwith some popular databases in our course, we tried to\nfocus on concepts rather than just syntax or product specific\nfeatures. Often, students were asked to learn software packages\non their own by reading the product documentation.\nWe also offered an online feedback page for receiving anonymous\nstudent comments, which helped us know if students\nneeded additional assistance. By encouraging students to\nlearn and experiment on their own, we hope that they can\neasily apply the learned concepts to emerging application\nscenarios.\nThis is particularly needed since today's work\nenvironment expects agility from employees to quickly master\nand develop software systems. To facilitate participation\nfurther, we asked students to research and make presentations\nchoosing from a set of specified topics. Most impor-tantly\n, since many of our educational institutions are cash\nstrapped, we designed our course to execute with a small\nbudget.\nIn the next section, we detail topics, which may be included\nin a database security course, with references that,\nwe hope, will be useful for other instructors. We also discuss\nlabs and assignments in detail. Finally, we conclude the\npaper with an account of lessons learned and future possibilities\nDATABASE SECURITY TOPICS\nAlthough a large number of topics can be included, we try\nto focus on a few important ones that, in our judgment, are\nlikely to be immediately useful after graduation. We also include\ntopics on securing the data within a database, as well\nas the security of database systems and operating systems\nas suggested in [8]. Our position is that database security\nshould be considered in whole rather than adopting a piecemeal\napproach. However, we recognize that, in practice, it\nis often easy to overlook some aspects of database security.\nTherefore, we recommend that students develop a database\nsecurity plan. We also include other relevant topics such as\nstatistical database security, and security and privacy issues\nof data mining. Table 1 shows the schedule of topics for a\ntypical sixteen week semester course on database security.\nMajor labs and assignments are given in Table 2.\nThe course begins with an \"Introduction to database se-curity\"\n, where the objective is to highlight the importance\nof database security and to motivate students to learn the\nrest of the topics.\n2.1\nIntroducing Database Security\nOne way to emphasize the importance of database security\nwould be to reflect on the impact of not having security at\nall in application domains such as military, medical, financial\n, credit card, credit file, driving records, and insurance\ndatabases. 
Students may survey the incidents of database\nsecurity breaches and evaluate the efforts to ensure database\nsecurity by industry and government.\nSince database security is a combination of database technology\nand computer security, basics of both will be helpful\n. A discussion on security properties such as confiden-tiality\n, integrity, availability and non-repudiation should be\nincluded.\nAlthough an in-depth study of cryptography is not within\nthe scope of this course, basics of secret key cryptography\nand public key cryptography will benefit students. A good\nreference book we found is \"Data Security and Cryptogra-Week\nTopic\n1\nCourse Overview and Introduction to Database\nSecurity, Basics of Data Security and Cryptography\n2\nOverview of Security Models\n3\nAccess Control Models,\nCovert Channels and Inference Channels\n4\nMySQL Security\n5\nOracle Security\n6\nOracle Label Security\n7\nDeveloping a Database Security Plan\n8\nSpring Break\n9\nSQL Server Security\n10\nSecurity of Statistical Databases\n11\nSecurity and privacy issues of Data Mining\n12\nDatabase Applications Security,\nSQL Injection,\nDefensive Programming\n13\nDatabase Intrusion Prevention, Audit,\nFault Tolerance and Recovery\n14\nHippocratic Databases,\nXML Security\n15\nNetwork Security,\nBiometrics\n16\nFinal Examination Week\nTable 1: Course Schedule\nphy\" by Dorothy Denning [5]. Digital signatures, digital certificates\nand Public Key Infrastructure (PKI) [21] are other\ntopics to consider.\nAn overview of security and integrity models [4] will also\nbe helpful at this point.\nThis is the best time to introduce\nthe computer security lingo such as subjects and objects.\nThe difference between widely used access control techniques\nmay also be highlighted.\n2.2\nAccess Control\nDiscretionary Access Control (DAC) mechanisms such as\ncapabilities, profiles, access control lists, passwords, and permission\nbits may be discussed. Here we also introduce the\noperating system security aspects (using Windows\nR\nand\nLinux environments), and how they impact database security\nin general. Although details are not required until\nwe introduce Oracle security, overview of Role-Based Access\nControl (RBAC) [6, 18] may be discussed.\nUnlike the above access control techniques, in Mandatory\nAccess Control (MAC) the security is enforced by the system\nas dictated in the security policy, not by the owner of\nan object. Although there are many security models suggested\nfor providing Mandatory security, Bell-LaPadula [2]\nmodel is probably the simplest to learn. Even when a system\nenforces Mandatory Access Control, information leakage\nthrough covert channels [11] and inference channels [13]\nmay still be possible. A few examples will help students understand\nhow the information leakage can take place through\nsuch means.\nDatabases enforcing MAC often assign security classification\nlevels for objects and security clearance levels for\nsubjects. Access control is performed by the system based\non these levels. A lab may be developed, where students\n8\nsimulate a multilevel database on an ordinary database system\n. This means students will have to modify the schema to\nadd additional fields for storing security classification levels.\nThey also develop views for users having different clearance\nlevels. 
Further, to support poly-instantiation, the primary\nkey will have to be redefined to include security level to accommodate\nthe possibility of the same key values existing\nat multiple security levels.\nAnother topic of interest would be to explore how the Discretionary\nAccess Control and the Mandatory Access Control\ncan be combined and applied in some scenarios.\n2.3\nSecuring Real Life Databases\nThe candidate database systems we chose for hands-on\nexperience were MySQL\nTM\n, Oracle\nR\n, and Microsoft\nR\nSQL\nServer\nTM\n. Because of time constraints, students were able\nto focus only on the first two databases, but an overview of\nSQL server security was also provided.\n2.3.1\nMySQL Security\nWith more than six million installations worldwide [14],\nthe simplicity and open source architecture make MySQL,\nprobably, the first database to study. The primary source\nof information would be the MySQL manual itself (available\nfrom MySQL site [14]), particularly the section on \"MySQL\nAccess Privilege System\". Another source, MySQL Security\nHandbook [22], explains MySQL security system and\nprovides a few practical examples.\nLabs/Assignments\n1 Multilevel Security \ufffd Poly-instantiation\n2 MySQL Grant Privilege System\n3 SQL Injection\n4 Oracle Security \ufffd Basic Lab\n5 Database Security Plan Development\n6 Backend Development for B2C Application\n7 Probability Distributions, Sampling\n8 Statistical Databases - Breach of Security\n9 Statistical Databases - Inference Protection Techniques\n10 Data Mining Security - Reading and Presentation\nTable 2: Major Labs/Assignments\nMySQL Access Privilege System authenticates a user based\non user name, host name, and password. Further, it ensures\nthat users perform only permitted operations based on the\nprivileges specified in the grant tables (namely, user, db,\nand host). The format and contents of these tables, therefore\n, are of particular importance. Since most of the critical\ninformation including the grant tables are stored on a default\ndatabase named mysql, the security of mysql database\nis also crucial. Students should learn to apply the \"principle\nof least privilege\" when granting privileges in order to\nperform the task at hand.\nEach student was given a MySQL instance with root level\naccess. Students were asked to create users and assign privileges\nwhile monitoring the privilege tables for changes. Students\nalso experimented with the privilege system by man-ually\nmodifying privilege tables.\nWe created two person\nadministrator-user teams for enabling the students to experience\nthe system from both perspectives. Users were assigned\ncertain tasks to perform. Some of the tasks given\nwere specifically designed to understand the limitations of\nthe MySQL privilege system. The role of the administrators\nwas to grant privileges just sufficient for users to perform\nthe task. Users could access the system in any manner they\nwish \ufffd in fact, users will be encouraged to expose the weaknesses\nin the privilege assignments. The administrators, on\nthe other hand, controlled access based on need-to-know, at\nthe same time trying not to be too restrictive for users to\nperform the required tasks. We found the users very excited\nto expose security weaknesses in the privilege assignment.\nAlthough administrators were a little embarrassed, they too\nwere motivated by the exercise. 
For the next lab session, students switched roles, i.e., those who were administrators became users and vice versa.
MySQL supports data security by providing functions such as ENCRYPT, DES_ENCRYPT, AES_ENCRYPT, PASSWORD, OLD_PASSWORD and ENCODE. Since these functions may not be safe under all circumstances, it would be useful to highlight the unsafe scenarios.
Students may also learn how to use SSL for security, and simultaneously make sure that the system performance is not significantly impacted. Also useful would be to study how the authentication requirements may vary when using options such as REQUIRE SSL, REQUIRE ISSUER and REQUIRE X509. Even when using SSL, the data security can depend on the type of cipher and the key lengths used. Therefore, students may learn how to specify these parameters using the REQUIRE CIPHER option.
Some privileges in MySQL, if not carefully used, can expose the system to a high security risk. For example, the FILE privilege may be misused to gain access to the system. Hence, a comprehensive study of unsafe privileges will be extremely useful.
Even when the privilege system is correctly set up and maintained, the entire privilege system can be circumvented using a MySQL startup option such as --skip-grant-tables. On the other hand, some startup options make the server safer. Therefore, MySQL startup options and their security consequences must be discussed.
Many web applications have a MySQL database server deployed as the backend and an HTML-based form acting as the front end. Since user input is used to generate the SQL queries that interact with the database, malicious users or programs can inject unsafe SQL if the input goes unchecked. Basic concepts of preventing SQL injection may also be discussed (a short illustration is given below). Students may be asked to analyze a number of SQL queries for potential vulnerability.
Other topics which can be included are: using a MySQL network scanner to detect MySQL servers on the network with default passwords, MySQL resource control, data backup and recovery, auditing, and firewalls.
2.3.2 Planning Database Security
Since enforcing database security is an extremely complex task, with a large number of factors affecting the security of a database, the best way to approach the problem is to systematically develop and implement a comprehensive database security plan. Therefore, in our course, we required that students develop a database security plan for a small Business-to-Consumer (B2C) E-Commerce application. See [19], Ch. 7 (available online), for a detailed exposition on database security planning. Although the text is on Oracle security, the concepts can be applied to any database.
2.3.3 Oracle Security
For security reasons, the computer science department was reluctant to grant administrative privileges to students on our Oracle server. Therefore, we ended up creating a separate Oracle instance for the course. For each student, we created one administrative account with DBA privileges, and then the students were allowed to create user accounts as needed, provided they followed a naming convention to avoid conflicting names. In addition to the Oracle Security Handbook [12], we found the Oracle Database Administrator's Guide [15] also useful. The guide is available online from the Oracle Database Documentation Library.
First, we had an Oracle Security Basics Lab. Students were introduced to the Oracle security system through a series of tasks.
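The short Python sketch promised above illustrates the SQL injection point made for MySQL-backed web forms. It uses the standard-library sqlite3 module so it runs without a server (MySQL drivers differ mainly in placeholder syntax); the table, data and malicious input are invented.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"          # a classic injection attempt

# Unsafe: the attacker-controlled string becomes part of the SQL text, so the
# WHERE clause is always true and every row is returned.
unsafe = "SELECT * FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())

# Safe: the value is passed as a bound parameter and is never parsed as SQL,
# so the malicious string simply matches no user name.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())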
The next lab was more advanced, and built\nup on the database security plan developed in a previous\nassignment.\nThe task was to develop the backend for a\nsmall B2C E-Commerce application. Students were asked\nto create user accounts, roles, tables, views and triggers as\nrequired. The privileges were to be assigned by observing\nthe \"principle of least privilege\", as per the security plan.\nFurther, students may also be trained to perform some\nstandard checks for security such as checking for default user\naccounts, default passwords, users having excessive privileges\n(e.g., DBA, ALTER SYSTEM, CREATE LIBRARY,\nCREATE ANY TRIGGER), security impact of WITH AD-MIN\nand WITH GRANT options on privileges, EXTER-NALLY\nauthenticated users, and the existence of database\nlinks. Students may also learn how to display information on\nitems such as triggers, views and externally authenticated\nusers. A section on security issues of using default Oracle\nsupplied roles will be useful.\nOther topics to include are: Transparent Network Substrate\n(TNS) security and listener management from remote\nmachines and setting up listener passwords, buffer overflow\nattacks and prevention, auditing, and undocumented Oracle\nfeatures. Students may also be introduced to reading security\nadvisories and obtaining Oracle Critical Patch Updates\n(CPU).\nRecently, a large number of security breaches have been\nreported.\nInterestingly, however, many of these breaches\nwere incidents of missing or stolen backup storage devices.\nTherefore, we feel appropriate to include a session on security\nand protection needs of exports, cold backups, hot\nbackups, and disaster recovery sites.\n2.3.4\nOracle Label Security\nOracle Label Security provides built-in row level access\ncontrol for high security applications. Essentially, Oracle\nadds a new field to each row for storing the row's sensitivity\nlabels. Row access is granted or denied by comparing the\nuser's identity and security clearance label with the row's\nsensitivity labels. Earlier, in Assignment 1, students have\nsimulated a multilevel database. Therefore, the above concepts\nshould be easy to learn at this point.\nAs a source of information on Oracle Label Security Architecture\n, we used Oracle Label Security Administrator's\nGuide [16]. We covered levels, compartments, groups, session\nand row labels, label security algorithm, and management\nof label security using Oracle Internet Dictionary.\n2.3.5\nMicrosoft SQL Server Security\nAs we mentioned earlier at the beginning of this section,\ndue to time constraints, we could not provide an extensive\ncoverage of Microsoft SQL Server. We briefly discussed\nSQL Server security model, authentication mechanisms, authentication\nmodes, and good security practices for SQL\nservers. Students presented information they gathered on\nSQL server vulnerabilities, security breaches, and prevention\ntechniques. We found a few excellent articles on SQLServer-Central\n.com, an online community of DBAs, developers, and\nSQL server users.\nWe also found SQL Server Developer\nCenter (http://msdn.microsoft.com/sql) useful in providing\na large number of resources in this area.\n2.4\nStatistical Security\nAs for the rest of the course, this section is application\noriented, giving the students the gist of the concepts they\nneed to know and then putting them to work in the context\nof a real database. 
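Looking back at the standard security checks listed for Oracle in Section 2.3.3, such checks are easy to script. The sketch below is only illustrative: the data dictionary views queried (DBA_ROLE_PRIVS, DBA_SYS_PRIVS, DBA_USERS, DBA_DB_LINKS) are standard Oracle views, but column details can vary across versions, and the wrapper function and default-account list are our own.

# A few of the checks mentioned above, expressed as queries over the Oracle
# data dictionary.  Each query returns rows that deserve a closer look.
AUDIT_QUERIES = {
    "accounts granted the DBA role":
        "SELECT grantee FROM dba_role_privs WHERE granted_role = 'DBA'",
    "risky system privileges":
        "SELECT grantee, privilege FROM dba_sys_privs WHERE privilege IN "
        "('ALTER SYSTEM', 'CREATE LIBRARY', 'CREATE ANY TRIGGER')",
    "privileges granted WITH ADMIN OPTION":
        "SELECT grantee, privilege FROM dba_sys_privs WHERE admin_option = 'YES'",
    "well-known default accounts still present":
        "SELECT username, account_status FROM dba_users "
        "WHERE username IN ('SCOTT', 'OUTLN', 'DBSNMP')",
    "database links":
        "SELECT owner, db_link, host FROM dba_db_links",
}

def run_audit(cursor):
    """Run every audit query through a DB-API cursor connected with DBA rights
    and collect the findings into a simple report."""
    report = {}
    for name, sql in AUDIT_QUERIES.items():
        cursor.execute(sql)
        report[name] = cursor.fetchall()
    return report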
Thus, the first lab is a simulation based\nassignment designed as an introduction to probability distributions\n, expectation, spread, sampling methods, and sampling\ndistributions of relevant statistics. We find that, even\nfor students with prior coursework in probability and statistics\n, an assignment of this type is very beneficial. The second\nlab presents the task of setting up a sequence of queries, so\nthat students can extract from a database what should have\nbeen secure information.\nAt this point we introduce the\nmain conceptual techniques for inference protection such as\nthe lattice model and partitioning the database entities into\npopulations. See [4], Ch 5 for details. The third major assignment\naims at teaching inference protection techniques.\nGiven a database, the students are asked to answer queries\nwithout disclosing sensitive information by applying restriction\n, perturbation, and combined techniques.\n2.5\nSecurity Issues of Data Mining\nData mining may be misused to obtain confidential information\nfrom a database. So we believe, a course on database\nsecurity should include an overview of security and privacy\nconcerns of data mining. Organizations would like to share\nthe data for operational convenience, at the same time prevent\nthe mining of data for information they do not want to\ndisclose. Likewise, private individuals would like to submit\ntheir personal information for data mining without compromising\ntheir privacy while keeping the key association rules\nintact. Secure data mining techniques appear similar to statistical\nsecurity methods, however, their computational efficiency\nis a major concern. We found a number of interesting\npapers [3, 17, 20] that can be used for reading assignments\nand group discussions.\n2.6\nOther Topics\nMalicious users may bypass security mechanisms provided\nby an application by directly connecting to the database.\nTherefore, whenever possible stored procedures, and views\nmust be used for providing data access. Database application\nsecurity and defensive programming was briefly covered\n. Semi-structured nature of Extensible Markup Language\n(XML) documents make them ideal candidates for\nuse in many applications including E-Business. Therefore,\nXML [7] security was also discussed.\n10\nOther topics of interest are: database intrusion detection\nand prevention [17], database fault tolerance and recovery,\nHippocratic databases [1], network security [10], and biometrics\n[9].\nRELATED COURSES\nDepartment of Computer Science at University of Alberta\nhas offered an independent study on database security with\ntopics such as security models, security mechanisms, intrusion\ndetection systems, and statistical database protection.\nUniversity of Maryland University College has a graduate\nlevel course on database security with theory and applications\n, including frameworks for discretionary and mandatory\naccess control, data integrity, availability and performance,\nsecure database design, data aggregation, data inference, secure\nconcurrency control, and secure transaction processing.\nUniversity of South Carolina and George Mason University\noffer graduate level elective courses on Database Security.\nSimilar courses are offered at a few other institutions, but we\ndo not discuss them here due to space constraints. Among\nthe undergraduate courses we found, the closest one to what\nwe offered is taught at University of Arkansas Little Rock. 
It\nprovides database security theory and background on Oracle\nsecurity environment.\nCONCLUSIONS\nOur new undergraduate elective course on database security\ncovers basic concepts and provides practical experience\non two popular databases. We emphasized that students develop\na database security plan that, we hope, will encourage\nthem to view the problem of ensuring database security as a\ntask that needs to be carefully planned in whole rather than\nsomething that can be addressed in parts.\nThe initial offering of the course had \"Data Structures\"\nas the only pre-requisite, because we wanted to keep the\ncourse open to a larger audience. Students, in general, were\nfound be more motivated to follow through on course work\nthan other courses we have taught. We received excellent\nnumerical score as well as comments from students in the\ndepartmental student evaluations. We had a good mixture\nof students. All were computer science majors, and forty\nseven percent were honors students. Sixty seven percent of\nthe class completed the course with an overall score of 80%\nor higher with all honors students falling into this category.\nHowever, it was felt that a few students lacked basics to\nfully grasp the material. Therefore, having a basic database\ntechnology course as the pre-requisite will help cover more\nof the suggested topics in depth.\nIn closing, we hope that our experience shared herein will\nhelp other instructors develop and offer a similar course on\ndatabase security with limited resources.\nREFERENCES\n[1] R. Agrawal, J. Kiernan, R. Srikant, and Y. Xu.\nHippocratic Databases. In Proc. of the Very Large\nData Bases (VLDB) Conference, Hong Kong, China,\nAugust 2002.\n[2] D. Bell and L. LaPadula. Secure Computer Systems:\nMathematical Foundations. Technical Report\nESD-TR-73-278, MITRE Corporation, 1973.\n[3] C. Clifton and D. Marks. Security and Privacy\nImplications of Data Mining. In Workshop on Data\nMining and Knowledge Discovery, Montreal, Canada,\nFebruary 1996.\n[4] S. Castano, M. G. Fugini, G. Martella, and\nP. Samarati. Database Security. Addison-Wesley &\nACM Press, 1995.\n[5] D. E. Denning. Cryptography and Data Security.\nAddison-Wesley, 1982.\n[6] D. Ferraiolo and R. Kuhn. Role-Based Access\nControls. In Proc. 15th NIST-NCSC National\nComputer Security Conference, Baltimore, MD,\nOctober 1992.\n[7] B. Dournaee. XML Security. RSA Press, Berkeley,\nCA, USA, 2002.\n[8] M. Guimaraes, H. Mattord, and R. Austin.\nIncorporating Security Components into Database\nCourses. In Proc. of the InfoSecCD Conference'04,\nKennesaw, GA, September 2004.\n[9] A. Jain, L. Hong, and S. Pankanti. Biometric\nIdentification. Commun. ACM, 43(2), 2000.\n[10] C. Kaufman, R. Perlman, and M. Speciner. Network\nSecurity: Private Communication in a Public World,\nSecond Edition. Prentice-Hall, 2002.\n[11] B. W. Lampson. A Note on the Confinement Problem.\nCommun. ACM, 16(10), October 1973.\n[12] M. Theriault and A. Newman. Oracle Security\nHandbook : Implement a Sound Security Plan in Your\nOracle Environment. Osborne McGraw-Hill, 2001.\n[13] M. Morgenstern. Security and Inference in Multi-Level\nDatabase and Knowledge-Base Systems. In ACM\nSIGMOD Conf. on the Management of Data, San\nFrancisco, CA, May 1987.\n[14] http://www.mysql.com.\n[15] Oracle Database Administrator's Guide. Oracle\nCorporation, 2001.\n[16] Oracle Label Security Administrator's Guide. Oracle\nCorporation, 2003.\n[17] R. Agrawal and R. Srikant. Privacy-preserving Data\nMining. In Proc. 
of the ACM SIGMOD Conference on\nManagement of Data, Dallas, TX, May 2000.\n[18] R. Sandhu and Q. Munawer. How to do Discretionary\nAccess Control Using Roles. In RBAC '98:\nProceedings of the third ACM workshop on Role-based\naccess control, Fairfax, VA, 1998.\n[19] M. Theriault and W. Heney. Oracle Security. O'Reilly\n& Associates, Inc., 1998.\n[20] V. Verykios, E. Bertino, I. Fovino, L. Provenza, Y.\nSaygin and Y. Theodoridis. State-of-the-art in Privacy\nPreserving Data Mining. SIGMOD Record, 33(1),\n2004.\n[21] W. Ford and M. S. Baum. Secure Electronic\nCommerce: Building the Infrastructure for Digital\nSignatures and Encryption. Prentice Hall, 2000.\n[22] Wrox Author Team. MySQL Security Handbook. Wrox\nPress, 2003.\n11\n", "keywords": "Database security course;Statistical Database Security;Statistical Security;Database security;High Risk of Sensitive Information;Database security plan;Security breaches;Labs;Database system;Undergraduate Database Security Course;Real Life Database Security;Database Security Education;XML Security;Undergraduate students;Cryptography;Secure information;Hands-on experience;Database Security Plan;Data access;Security Plan;Database Privacy;Administrators;Hands-on;Database Security Course;Undergraduate course;Access Privilege System;Database Security;Privacy Issues;Laboratory/Active Learning;Right Blend of Theory and Practice;Assignments;Real Life Databases Hands-on;MySQL Security;Topics;Importance of Database Security;Few Database Security Courses;Oracle Security"} {"name": "40", "title": "Automatic Extraction of Titles from General Documents using Machine Learning", "abstract": "In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles.", "fulltext": "INTRODUCTION\nMetadata of documents is useful for many kinds of document\nprocessing such as search, browsing, and filtering. Ideally,\nmetadata is defined by the authors of documents and is then used\nby various systems. 
However, people seldom define document\nmetadata by themselves, even when they have convenient\nmetadata definition tools [26]. Thus, how to automatically extract\nmetadata from the bodies of documents turns out to be an\nimportant research issue.\nMethods for performing the task have been proposed. However,\nthe focus was mainly on extraction from research papers. For\ninstance, Han et al. [10] proposed a machine learning based\nmethod to conduct extraction from research papers. They\nformalized the problem as that of classification and employed\nSupport Vector Machines as the classifier. They mainly used\nlinguistic features in the model.\n1\n\nIn this paper, we consider metadata extraction from general\ndocuments. By general documents, we mean documents that may\nbelong to any one of a number of specific genres. General\ndocuments are more widely available in digital libraries, intranets\nand the internet, and thus investigation on extraction from them is\n\n1\nThe work was conducted when the first author was visiting\nMicrosoft Research Asia.\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, or\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nJCDL'05, June 711, 2005, Denver, Colorado, USA\nCopyright 2005 ACM 1-58113-876-8/05/0006...$5.00.\n\n145\nsorely needed. Research papers usually have well-formed styles\nand noticeable characteristics. In contrast, the styles of general\ndocuments can vary greatly. It has not been clarified whether a\nmachine learning based approach can work well for this task.\nThere are many types of metadata: title, author, date of creation,\netc. As a case study, we consider title extraction in this paper.\nGeneral documents can be in many different file formats:\nMicrosoft Office, PDF (PS), etc. As a case study, we consider\nextraction from Office including Word and PowerPoint.\nWe take a machine learning approach. We annotate titles in\nsample documents (for Word and PowerPoint respectively) and\ntake them as training data to train several types of models, and\nperform title extraction using any one type of the trained models.\nIn the models, we mainly utilize formatting information such as\nfont size as features. We employ the following models: Maximum\nEntropy Model, Perceptron with Uneven Margins, Maximum\nEntropy Markov Model, and Voted Perceptron.\nIn this paper, we also investigate the following three problems,\nwhich did not seem to have been examined previously.\n(1) Comparison between models: among the models above, which\nmodel performs best for title extraction;\n(2) Generality of model: whether it is possible to train a model on\none domain and apply it to another domain, and whether it is\npossible to train a model in one language and apply it to another\nlanguage;\n(3) Usefulness of extracted titles: whether extracted titles can\nimprove document processing such as search.\nExperimental results indicate that our approach works well for\ntitle extraction from general documents. Our method can\nsignificantly outperform the baselines: one that always uses the\nfirst lines as titles and the other that always uses the lines in the\nlargest font sizes as titles. 
Precision and recall for title extraction\nfrom Word are 0.810 and 0.837 respectively, and precision and\nrecall for title extraction from PowerPoint are 0.875 and 0.895\nrespectively. It turns out that the use of format features is the key\nto successful title extraction.\n(1) We have observed that Perceptron based models perform\nbetter in terms of extraction accuracies. (2) We have empirically\nverified that the models trained with our approach are generic in\nthe sense that they can be trained on one domain and applied to\nanother, and they can be trained in one language and applied to\nanother. (3) We have found that using the extracted titles we can\nsignificantly improve precision of document retrieval (by 10%).\nWe conclude that we can indeed conduct reliable title extraction\nfrom general documents and use the extracted results to improve\nreal applications.\nThe rest of the paper is organized as follows. In section 2, we\nintroduce related work, and in section 3, we explain the\nmotivation and problem setting of our work. In section 4, we\ndescribe our method of title extraction, and in section 5, we\ndescribe our method of document retrieval using extracted titles.\nSection 6 gives our experimental results. We make concluding\nremarks in section 7.\n\nRELATED WORK\nMethods have been proposed for performing automatic metadata\nextraction from documents; however, the main focus was on\nextraction from research papers.\nThe proposed methods fall into two categories: the rule based\napproach and the machine learning based approach.\nGiuffrida et al. [9], for instance, developed a rule-based system for\nautomatically extracting metadata from research papers in\nPostscript. They used rules like \"titles are usually located on the\nupper portions of the first pages and they are usually in the largest\nfont sizes\". Liddy et al. [14] and Yilmazel el al. [23] performed\nmetadata extraction from educational materials using rule-based\nnatural language processing technologies. Mao et al. [16] also\nconducted automatic metadata extraction from research papers\nusing rules on formatting information.\nThe rule-based approach can achieve high performance. However,\nit also has disadvantages. It is less adaptive and robust when\ncompared with the machine learning approach.\nHan et al. [10], for instance, conducted metadata extraction with\nthe machine learning approach. They viewed the problem as that\nof classifying the lines in a document into the categories of\nmetadata and proposed using Support Vector Machines as the\nclassifier. They mainly used linguistic information as features.\nThey reported high extraction accuracy from research papers in\nterms of precision and recall.\n2.2 Information Extraction\nMetadata extraction can be viewed as an application of\ninformation extraction, in which given a sequence of instances, we\nidentify a subsequence that represents information in which we\nare interested. Hidden Markov Model [6], Maximum Entropy\nModel [1, 4], Maximum Entropy Markov Model [17], Support\nVector Machines [3], Conditional Random Field [12], and Voted\nPerceptron [2] are widely used information extraction models.\nInformation extraction has been applied, for instance, to part-of-speech\ntagging [20], named entity recognition [25] and table\nextraction [19].\n2.3 Search Using Title Information\nTitle information is useful for document retrieval.\nIn the system Citeseer, for instance, Giles et al. 
managed to\nextract titles from research papers and make use of the extracted\ntitles in metadata search of papers [8].\nIn web search, the title fields (i.e., file properties) and anchor texts\nof web pages (HTML documents) can be viewed as `titles' of the\npages [5]. Many search engines seem to utilize them for web page\nretrieval [7, 11, 18, 22]. Zhang et al., found that web pages with\nwell-defined metadata are more easily retrieved than those without\nwell-defined metadata [24].\nTo the best of our knowledge, no research has been conducted on\nusing extracted titles from general documents (e.g., Office\ndocuments) for search of the documents.\n146\nMOTIVATION AND PROBLEM SETTING\nWe consider the issue of automatically extracting titles from\ngeneral documents.\nBy general documents, we mean documents that belong to one of\nany number of specific genres. The documents can be\npresentations, books, book chapters, technical papers, brochures,\nreports, memos, specifications, letters, announcements, or resumes.\nGeneral documents are more widely available in digital libraries,\nintranets, and internet, and thus investigation on title extraction\nfrom them is sorely needed.\nFigure 1 shows an estimate on distributions of file formats on\nintranet and internet [15]. Office and PDF are the main file\nformats on the intranet. Even on the internet, the documents in the\nformats are still not negligible, given its extremely large size. In\nthis paper, without loss of generality, we take Office documents as\nan example.\n\nFigure 1. Distributions of file formats in internet and intranet.\n\nFor Office documents, users can define titles as file properties\nusing a feature provided by Office. We found in an experiment,\nhowever, that users seldom use the feature and thus titles in file\nproperties are usually very inaccurate. That is to say, titles in file\nproperties are usually inconsistent with the `true' titles in the file\nbodies that are created by the authors and are visible to readers.\nWe collected 6,000 Word and 6,000 PowerPoint documents from\nan intranet and the internet and examined how many titles in the\nfile properties are correct. We found that surprisingly the accuracy\nwas only 0.265 (cf., Section 6.3 for details). A number of reasons\ncan be considered. For example, if one creates a new file by\ncopying an old file, then the file property of the new file will also\nbe copied from the old file.\nIn another experiment, we found that Google uses the titles in file\nproperties of Office documents in search and browsing, but the\ntitles are not very accurate. We created 50 queries to search Word\nand PowerPoint documents and examined the top 15 results of\neach query returned by Google. We found that nearly all the titles\npresented in the search results were from the file properties of the\ndocuments. However, only 0.272 of them were correct.\nActually, `true' titles usually exist at the beginnings of the bodies\nof documents. If we can accurately extract the titles from the\nbodies of documents, then we can exploit reliable title information\nin document processing. This is exactly the problem we address in\nthis paper.\nMore specifically, given a Word document, we are to extract the\ntitle from the top region of the first page. Given a PowerPoint\ndocument, we are to extract the title from the first slide. A title\nsometimes consists of a main title and one or two subtitles. 
We\nonly consider extraction of the main title.\nAs baselines for title extraction, we use that of always using the\nfirst lines as titles and that of always using the lines with largest\nfont sizes as titles.\n\nFigure 2. Title extraction from Word document.\n\n\nFigure 3. Title extraction from PowerPoint document.\n\nNext, we define a `specification' for human judgments in title data\nannotation. The annotated data will be used in training and testing\nof the title extraction methods.\nSummary of the specification: The title of a document should be\nidentified on the basis of common sense, if there is no difficulty in\nthe identification. However, there are many cases in which the\nidentification is not easy. There are some rules defined in the\nspecification that guide identification for such cases. The rules\ninclude \"a title is usually in consecutive lines in the same format\",\n\"a document can have no title\", \"titles in images are not\nconsidered\", \"a title should not contain words like `draft',\n147\n`whitepaper', etc\", \"if it is difficult to determine which is the title,\nselect the one in the largest font size\", and \"if it is still difficult to\ndetermine which is the title, select the first candidate\". (The\nspecification covers all the cases we have encountered in data\nannotation.)\nFigures 2 and 3 show examples of Office documents from which\nwe conduct title extraction. In Figure 2, `Differences in Win32\nAPI Implementations among Windows Operating Systems' is the\ntitle of the Word document. `Microsoft Windows' on the top of\nthis page is a picture and thus is ignored. In Figure 3, `Building\nCompetitive Advantages through an Agile Infrastructure' is the\ntitle of the PowerPoint document.\nWe have developed a tool for annotation of titles by human\nannotators. Figure 4 shows a snapshot of the tool.\n\nFigure 4. Title annotation tool.\n\n\nTITLE EXTRACTION METHOD\nTitle extraction based on machine learning consists of training and\nextraction. The same pre-processing step occurs before training\nand extraction.\nDuring pre-processing, from the top region of the first page of a\nWord document or the first slide of a PowerPoint document a\nnumber of units for processing are extracted. If a line (lines are\nseparated by `return' symbols) only has a single format, then the\nline will become a unit. If a line has several parts and each of\nthem has its own format, then each part will become a unit.\n\nEach\nunit will be treated as an instance in learning. A unit contains not\nonly content information (linguistic information) but also\nformatting information. The input to pre-processing is a document\nand the output of pre-processing is a sequence of units (instances).\nFigure 5 shows the units obtained from the document in Figure 2.\n\nFigure 5. Example of units.\n\nIn learning, the input is sequences of units where each sequence\ncorresponds to a document. We take labeled units (labeled as\ntitle_begin, title_end, or other) in the sequences as training data\nand construct models for identifying whether a unit is title_begin\ntitle_end, or other. We employ four types of models: Perceptron,\nMaximum Entropy (ME), Perceptron Markov Model (PMM), and\nMaximum Entropy Markov Model (MEMM).\nIn extraction, the input is a sequence of units from one document.\nWe employ one type of model to identify whether a unit is\ntitle_begin, title_end, or other. We then extract units from the unit\nlabeled with `title_begin' to the unit labeled with `title_end'. 
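To make the unit representation just described and the two baselines introduced earlier in this section concrete, here is a small Python sketch; the Unit fields, example values and function names are our own illustration, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Unit:
    text: str          # content information of the unit
    font_size: float   # formatting information kept alongside the content
    bold: bool = False

def first_line_title(units):
    """Baseline 1: always take the first unit as the title."""
    return units[0].text if units else ""

def largest_font_title(units):
    """Baseline 2: take the unit(s) set in the largest font size."""
    if not units:
        return ""
    biggest = max(u.font_size for u in units)
    return " ".join(u.text for u in units if u.font_size == biggest)

units = [Unit("White paper", 12.0),
         Unit("An Example Document Title", 20.0, bold=True),
         Unit("Prepared by J. Doe", 10.0)]
print(first_line_title(units))    # -> 'White paper' (the failure case noted later)
print(largest_font_title(units))  # -> 'An Example Document Title'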
The result of this procedure is the extracted title of the document.
The unique characteristic of our approach is that we mainly utilize formatting information for title extraction. Our assumption is that although general documents vary in style, their formats have certain patterns and we can learn and utilize the patterns for title extraction. This is in contrast to the work by Han et al., in which only linguistic features are used for extraction from research papers.
4.2 Models
The four models can actually be considered in the same metadata extraction framework, which is why we apply them together to our current problem.
Each input is a sequence of instances x_1 x_2 ... x_k together with a sequence of labels y_1 y_2 ... y_k, where x_i and y_i represent an instance and its label, respectively (i = 1, 2, ..., k). Recall that an instance here represents a unit. A label represents title_begin, title_end, or other. Here, k is the number of units in a document.
In learning, we train a model which can be generally denoted as a conditional probability distribution P(Y_1 ... Y_k | X_1 ... X_k), where X_i and Y_i denote random variables taking instance x_i and label y_i as values, respectively (i = 1, 2, ..., k).
Figure 6. Metadata extraction model. (The learning tool takes labeled unit sequences and outputs the conditional distribution P(Y_1 ... Y_k | X_1 ... X_k); the extraction tool applies it to a new sequence x_m1 x_m2 ... x_mk and outputs argmax P(y_m1 ... y_mk | x_m1 ... x_mk).)
We can make assumptions about the general model in order to make it simple enough for training.
For example, we can assume that Y_1, ..., Y_k are independent of each other given X_1, ..., X_k. Thus, we have

    P(Y_1 ... Y_k | X_1 ... X_k) = P(Y_1 | X_1) ... P(Y_k | X_k)

In this way, we decompose the model into a number of classifiers. We train the classifiers locally using the labeled data. As the classifier, we employ the Perceptron or Maximum Entropy model.
We can also assume that the first order Markov property holds for Y_1, ..., Y_k given X_1, ..., X_k. Thus, we have

    P(Y_1 ... Y_k | X_1 ... X_k) = P(Y_1 | X_1) P(Y_2 | Y_1, X_2) ... P(Y_k | Y_{k-1}, X_k)

Again, we obtain a number of classifiers. However, the classifiers are conditioned on the previous label. When we employ the Perceptron or Maximum Entropy model as a classifier, the models become a Perceptron Markov Model or Maximum Entropy Markov Model, respectively. That is to say, the two models are more precise.
In extraction, given a new sequence of instances, we resort to one of the constructed models to assign a sequence of labels to the sequence of instances, i.e., perform extraction.
For Perceptron and ME, we assign labels locally and combine the results globally later using heuristics. Specifically, we first identify the most likely title_begin. Then we find the most likely title_end within three units after the title_begin. Finally, we extract as a title the units between the title_begin and the title_end.
For PMM and MEMM, we employ the Viterbi algorithm to find the globally optimal label sequence.
In this paper, for Perceptron, we actually employ an improved variant of it, called Perceptron with Uneven Margin [13].
This version of Perceptron can work well especially when the number of positive instances and the number of negative instances differ greatly, which is exactly the case in our problem.
We also employ an improved version of the Perceptron Markov Model in which the Perceptron model is the so-called Voted Perceptron [2]. In addition, in training, the parameters of the model are updated globally rather than locally.
4.3 Features
There are two types of features: format features and linguistic features. We mainly use the former. The features are used for both the title-begin and the title-end classifiers.
4.3.1 Format Features
Font Size: There are four binary features that represent the normalized font size of the unit (recall that a unit has only one type of font).
If the font size of the unit is the largest in the document, then the first feature will be 1, otherwise 0. If the font size is the smallest in the document, then the fourth feature will be 1, otherwise 0. If the font size is above the average font size and not the largest in the document, then the second feature will be 1, otherwise 0. If the font size is below the average font size and not the smallest, the third feature will be 1, otherwise 0.
It is necessary to conduct normalization on font sizes. For example, in one document the largest font size might be '12pt', while in another the smallest one might be '18pt'.
Boldface: This binary feature represents whether or not the current unit is in boldface.
Alignment: There are four binary features that respectively represent the location of the current unit: 'left', 'center', 'right', and 'unknown alignment'.
The following format features with respect to 'context' play an important role in title extraction.
Empty Neighboring Unit: There are two binary features that represent, respectively, whether or not the previous unit and the current unit are blank lines.
Font Size Change: There are two binary features that represent, respectively, whether or not the font size of the previous unit and the font size of the next unit differ from that of the current unit.
Alignment Change: There are two binary features that represent, respectively, whether or not the alignment of the previous unit and the alignment of the next unit differ from that of the current one.
Same Paragraph: There are two binary features that represent, respectively, whether or not the previous unit and the next unit are in the same paragraph as the current unit.
4.3.2 Linguistic Features
The linguistic features are based on key words.
Positive Word: This binary feature represents whether or not the current unit begins with one of the positive words. The positive words include 'title:', 'subject:', and 'subject line:'. For example, in some documents the lines of titles and authors have the same formats. However, if lines begin with one of the positive words, then it is likely that they are title lines.
Negative Word: This binary feature represents whether or not the current unit begins with one of the negative words. The negative words include 'To', 'By', 'created by', 'updated by', etc.
There are more negative words than positive words. The above linguistic features are language dependent.
Word Count: A title should not be too long. We heuristically create four intervals: [1, 2], [3, 6], [7, 9] and [9, ∞) and define one feature for each interval. If the number of words in a title falls into an interval, then the corresponding feature will be 1; otherwise 0.
Ending Character: This feature represents whether the unit ends with ':', '-', or other special characters. A title usually does not end with such a character.
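As an illustration of how a few of these features could be computed, the sketch below derives a handful of the binary format and linguistic features for one unit given its neighbours; the dictionary-based unit representation, the positive-word list shown, and the exact feature subset are ours, for illustration only.

# Each unit is represented here as a dict: {"text": ..., "font_size": ..., "bold": ...}.
POSITIVE_WORDS = ("title:", "subject:", "subject line:")

def unit_features(units, i):
    """A subset of the binary features described above, for unit i."""
    sizes = [u["font_size"] for u in units]
    avg = sum(sizes) / len(sizes)
    u = units[i]
    prev_u = units[i - 1] if i > 0 else None
    next_u = units[i + 1] if i + 1 < len(units) else None
    words = len(u["text"].split())
    return {
        "font_is_largest":    u["font_size"] == max(sizes),
        "font_is_smallest":   u["font_size"] == min(sizes),
        "font_above_average": max(sizes) > u["font_size"] > avg,
        "font_below_average": min(sizes) < u["font_size"] < avg,
        "boldface":           u["bold"],
        "prev_font_differs":  prev_u is not None and prev_u["font_size"] != u["font_size"],
        "next_font_differs":  next_u is not None and next_u["font_size"] != u["font_size"],
        "positive_word":      u["text"].lower().startswith(POSITIVE_WORDS),
        "word_count_1_2":     1 <= words <= 2,
        "word_count_3_6":     3 <= words <= 6,
    }

units = [{"text": "Subject: Annual Report", "font_size": 16.0, "bold": True},
         {"text": "Prepared by the finance team", "font_size": 11.0, "bold": False}]
print(unit_features(units, 0))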
DOCUMENT RETRIEVAL METHOD
We describe our method of document retrieval using extracted titles.
Typically, in information retrieval a document is split into a number of fields including body, title, and anchor text. A ranking function in search can use different weights for different fields of the document. Also, titles are typically assigned high weights, indicating that they are important for document retrieval. As explained previously, our experiment has shown that a significant number of documents actually have incorrect titles in the file properties, and thus in addition to using them we use the extracted titles as one more field of the document. By doing this, we attempt to improve the overall precision.
In this paper, we employ a modification of BM25 that allows field weighting [21]. As fields, we make use of body, title, extracted title and anchor. First, for each term t in the query we count the term frequency tf_{t,f} in each field f of the document; each field frequency is then weighted according to the corresponding field weight parameter w_f:

    wtf_t = sum_f ( w_f * tf_{t,f} )

Similarly, we compute the document length as a weighted sum of the lengths dl_f of each field. The average document length in the corpus, avwdl, becomes the average of all weighted document lengths.

    wdl = sum_f ( w_f * dl_f )

The document is then scored with the BM25 formula applied to these weighted quantities, where N is the number of documents in the corpus and n_t is the number of documents containing term t:

    BM25F = sum_t [ wtf_t * (k_1 + 1) / ( k_1 * ( (1 - b) + b * wdl / avwdl ) + wtf_t ) ] * log( N / n_t )

In our experiments we used k_1 = 1.8 and b = 0.75. The weight for content was 1.0, title was 10.0, anchor was 10.0, and extracted title was 5.0.
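A compact Python sketch of this field-weighted scoring, using the weights and constants quoted above and the formula as reconstructed; the tokenisation and the corpus statistics passed in are stubbed with made-up values.

import math

FIELD_WEIGHTS = {"body": 1.0, "title": 10.0, "anchor": 10.0, "extracted_title": 5.0}
K1, B = 1.8, 0.75

def bm25f(query_terms, doc_fields, df, n_docs, avwdl):
    """doc_fields maps field name -> token list; df maps term -> document frequency."""
    wdl = sum(FIELD_WEIGHTS[f] * len(toks) for f, toks in doc_fields.items())
    score = 0.0
    for t in query_terms:
        wtf = sum(FIELD_WEIGHTS[f] * toks.count(t) for f, toks in doc_fields.items())
        if wtf == 0:
            continue  # term absent from every field of this document
        norm = K1 * ((1 - B) + B * wdl / avwdl) + wtf
        score += wtf * (K1 + 1) / norm * math.log(n_docs / df[t])
    return score

doc = {"body": "overview of win32 api differences".split(),
       "title": "win32 api".split(),
       "anchor": [],
       "extracted_title": "differences in win32 api implementations".split()}
print(bm25f(["win32", "api"], doc, df={"win32": 40, "api": 900},
            n_docs=1_300_000, avwdl=350.0))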
EXPERIMENTAL RESULTS
We used two data sets in our experiments.
First, we downloaded and randomly selected 5,000 Word documents and 5,000 PowerPoint documents from an intranet of Microsoft. We call it MS hereafter.
Second, we downloaded and randomly selected 500 Word and 500 PowerPoint documents from the DotGov and DotCom domains on the internet, respectively.
Figure 7 shows the distributions of the genres of the documents. We see that the documents are indeed 'general documents' as we define them.
Figure 7. Distributions of document genres.
Third, a data set in Chinese was also downloaded from the internet. It includes 500 Word documents and 500 PowerPoint documents in Chinese.
We manually labeled the titles of all the documents, on the basis of our specification.
Not all the documents in the two data sets have titles. Table 1 shows the percentages of the documents having titles. We see that DotCom and DotGov have more PowerPoint documents with titles than MS. This might be because PowerPoint documents published on the internet are more formal than those on the intranet.

Table 1. The portion of documents with titles
Type        MS      DotCom   DotGov
Word        75.7%   77.8%    75.6%
PowerPoint  82.1%   93.4%    96.4%

In our experiments, we conducted evaluations on title extraction in terms of precision, recall, and F-measure. The evaluation measures are defined as follows:
Precision: P = A / (A + B)
Recall: R = A / (A + C)
F-measure: F1 = 2PR / (P + R)
Here, A, B, C, and D are numbers of documents as defined in Table 2.

Table 2. Contingency table with regard to title extraction
               Is title   Is not title
Extracted      A          B
Not extracted  C          D

6.2 Baselines
We test the accuracies of the two baselines described in Section 4.2. They are denoted as 'largest font size' and 'first line', respectively.
6.3 Accuracy of Titles in File Properties
We investigate how many titles in the file properties of the documents are reliable. We view the titles annotated by humans as true titles and test how many titles in the file properties can approximately match with the true titles. We use Edit Distance to conduct the approximate match. (Approximate match is only used in this evaluation.) This is because human annotated titles can sometimes be slightly different from the titles in file properties on the surface, e.g., they may contain extra spaces.
Given string A and string B:
if ( (D == 0) or ( D / (La + Lb) < t ) ) then string A is considered to match string B,
where D is the Edit Distance between string A and string B, La is the length of string A, Lb is the length of string B, and the threshold t is 0.1.

Table 3. Accuracies of titles in file properties
File Type    Domain   Precision  Recall  F1
Word         MS       0.299      0.311   0.305
Word         DotCom   0.210      0.214   0.212
Word         DotGov   0.182      0.177   0.180
PowerPoint   MS       0.229      0.245   0.237
PowerPoint   DotCom   0.185      0.186   0.186
PowerPoint   DotGov   0.180      0.182   0.181

6.4 Comparison with Baselines
We conducted title extraction from the first data set (Word and PowerPoint in MS). As the model, we used Perceptron.
We conduct 4-fold cross validation. Thus, all the results reported here are those averaged over 4 trials. Tables 4 and 5 show the results. We see that Perceptron significantly outperforms the baselines. In the evaluation, we use exact matching between the true titles annotated by humans and the extracted titles.

Table 4. Accuracies of title extraction with Word
                              Precision  Recall  F1
Model      Perceptron         0.810      0.837   0.823
Baselines  Largest font size  0.700      0.758   0.727
Baselines  First line         0.707      0.767   0.736

Table 5. Accuracies of title extraction with PowerPoint
                              Precision  Recall  F1
Model      Perceptron         0.875      0.895   0.885
Baselines  Largest font size  0.844      0.887   0.865
Baselines  First line         0.639      0.671   0.655

We see that the machine learning approach can achieve good performance in title extraction. For Word documents both precision and recall of the approach are 8 percent higher than those of the baselines. For PowerPoint both precision and recall of the approach are 2 percent higher than those of the baselines.
We conduct significance tests. The results are shown in Table 6. Here, 'Largest' denotes the baseline of using the largest font size, and 'First' denotes the baseline of using the first line. The results indicate that the improvements of machine learning over the baselines are statistically significant (in the sense p-value < 0.05).

Table 6. Sign test results
Document Type  Sign test between       p-value
Word           Perceptron vs. Largest  3.59e-26
Word           Perceptron vs. First    7.12e-10
PowerPoint     Perceptron vs. Largest  0.010
PowerPoint     Perceptron vs. First    5.13e-40

We see, from the results, that the two baselines can work well for title extraction, suggesting that font size and position information are the most useful features for title extraction. However, it is also obvious that using only these two features is not enough. There are cases in which all the lines have the same font size (i.e., the largest font size), or cases in which the lines with the largest font size only contain general descriptions like 'Confidential', 'White paper', etc. For those cases, the 'largest font size' method cannot work well.
For similar reasons, the `first line' method alone\ncannot work well, either. With the combination of different\nfeatures (evidence in title judgment), Perceptron can outperform\nLargest and First.\nWe investigate the performance of solely using linguistic features.\nWe found that it does not work well. It seems that the format\nfeatures play important roles and the linguistic features are\nsupplements..\n\nFigure 8. An example Word document.\n\n\n\nFigure 9. An example PowerPoint document.\n\nWe conducted an error analysis on the results of Perceptron. We\nfound that the errors fell into three categories. (1) About one third\nof the errors were related to `hard cases'. In these documents, the\nlayouts of the first pages were difficult to understand, even for\nhumans. Figure 8 and 9 shows examples. (2) Nearly one fourth of\nthe errors were from the documents which do not have true titles\nbut only contain bullets. Since we conduct extraction from the top\nregions, it is difficult to get rid of these errors with the current\napproach. (3). Confusions between main titles and subtitles were\nanother type of error. Since we only labeled the main titles as\ntitles, the extractions of both titles were considered incorrect. This\ntype of error does little harm to document processing like search,\nhowever.\n6.5 Comparison between Models\nTo compare the performance of different machine learning models,\nwe conducted another experiment. Again, we perform 4-fold cross\n151\nvalidation on the first data set (MS). Table 7, 8 shows the results\nof all the four models.\nIt turns out that Perceptron and PMM perform the best, followed\nby MEMM, and ME performs the worst. In general, the\nMarkovian models perform better than or as well as their classifier\ncounterparts. This seems to be because the Markovian models are\ntrained globally, while the classifiers are trained locally. The\nPerceptron based models perform better than the ME based\ncounterparts. This seems to be because the Perceptron based\nmodels are created to make better classifications, while ME\nmodels are constructed for better prediction.\nTable 7. Comparison between different learning models for\ntitle extraction with Word\nModel Precision Recall F1\nPerceptron 0.810 0.837 0.823\nMEMM 0.797 0.824 0.810\nPMM 0.827 0.823 0.825\nME 0.801 0.621\n0.699\n\nTable 8. Comparison between different learning models for\ntitle extraction with PowerPoint\nModel Precision Recall F1\nPerceptron\n0.875\n0. 895\n0. 885\nMEMM 0.841 0.861 0.851\nPMM\n0.873 0.896 0.885\nME 0.753 0.766\n0.759\n6.6 Domain Adaptation\nWe apply the model trained with the first data set (MS) to the\nsecond data set (DotCom and DotGov). Tables 9-12 show the\nresults.\nTable 9. Accuracies of title extraction with Word in DotGov\n\nPrecision\nRecall\nF1\nModel Perceptron 0.716 0.759\n0.737\nLargest font size\n0.549 0.619\n0.582\nBaselines\n\nFirst line\n0.462 0.521\n0.490\n\nTable 10. Accuracies of title extraction with PowerPoint in\nDotGov\n\nPrecision\nRecall\nF1\nModel Perceptron 0.900 0.906\n0.903\nLargest font size\n0.871 0.888\n0.879\nBaselines\n\nFirst line\n0.554 0.564\n0.559\n\nTable 11. Accuracies of title extraction with Word in DotCom\n\n\nPrecisio\nn\nRecall F1\nModel Perceptron 0.832 0.880\n0.855\nLargest font size\n0.676 0.753\n0.712\nBaselines\n\nFirst line\n0.577 0.643\n0.608\n\nTable 12. 
Performance of PowerPoint document title\nextraction in DotCom\n\n\nPrecisio\nn\nRecall F1\nModel Perceptron 0.910 0.903\n0.907\nLargest font size\n0.864 0.886\n0.875\nBaselines\n\nFirst line\n0.570 0.585\n0.577\nFrom the results, we see that the models can be adapted to\ndifferent domains well. There is almost no drop in accuracy. The\nresults indicate that the patterns of title formats exist across\ndifferent domains, and it is possible to construct a domain\nindependent model by mainly using formatting information.\n\n6.7 Language Adaptation\nWe apply the model trained with the data in English (MS) to the\ndata set in Chinese.\nTables 13-14 show the results.\nTable 13. Accuracies of title extraction with Word in Chinese\n\nPrecision\nRecall\nF1\nModel Perceptron 0.817 0.805\n0.811\nLargest font size\n0.722\n0.755\n0.738\nBaselines\n\nFirst line\n0.743\n0.777\n0.760\n\nTable 14. Accuracies of title extraction with PowerPoint in\nChinese\n\nPrecision\nRecall\nF1\nModel Perceptron 0.766 0.812\n0.789\nLargest font size\n0.753\n0.813\n0.782\nBaselines\n\nFirst line\n0.627\n0.676\n0.650\n\nWe see that the models can be adapted to a different language.\nThere are only small drops in accuracy. Obviously, the linguistic\nfeatures do not work for Chinese, but the effect of not using them\nis negligible. The results indicate that the patterns of title formats\nexist across different languages.\nFrom the domain adaptation and language adaptation results, we\nconclude that the use of formatting information is the key to a\nsuccessful extraction from general documents.\n\n6.8 Search with Extracted Titles\nWe performed experiments on using title extraction for document\nretrieval. As a baseline, we employed BM25 without using\nextracted titles. The ranking mechanism was as described in\nSection 5. The weights were heuristically set. We did not conduct\noptimization on the weights.\nThe evaluation was conducted on a corpus of 1.3 M documents\ncrawled from the intranet of Microsoft using 100 evaluation\nqueries obtained from this intranet's search engine query logs. 50\nqueries were from the most popular set, while 50 queries other\nwere chosen randomly. Users were asked to provide judgments of\nthe degree of document relevance from a scale of 1to 5 (1\nmeaning detrimental, 2 bad, 3 fair, 4 good and 5 excellent).\n152\nFigure 10 shows the results. In the chart two sets of precision\nresults were obtained by either considering good or excellent\ndocuments as relevant (left 3 bars with relevance threshold 0.5), or\nby considering only excellent documents as relevant (right 3 bars\nwith relevance threshold 1.0)\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\n0.4\n0.45\nP@10\nP@5\nReciprocal\nP@10\nP@5\nReciprocal\n0.5\n1\nBM25 Anchor, Title, Body\nBM25 Anchor, Title, Body, ExtractedTitle\nName All\nRelevanceThreshold Data\nDescription\n\nFigure 10. Search ranking results.\n\nFigure 10 shows different document retrieval results with different\nranking functions in terms of precision @10, precision @5 and\nreciprocal rank:\n\nBlue bar BM25 including the fields body, title (file\nproperty), and anchor text.\n\nPurple bar BM25 including the fields body, title (file\nproperty), anchor text, and extracted title.\nWith the additional field of extracted title included in BM25 the\nprecision @10 increased from 0.132 to 0.145, or by ~10%. 
Thus,\nit is safe to say that the use of extracted title can indeed improve\nthe precision of document retrieval.\nCONCLUSION\nIn this paper, we have investigated the problem of automatically\nextracting titles from general documents. We have tried using a\nmachine learning approach to address the problem.\nPrevious work showed that the machine learning approach can\nwork well for metadata extraction from research papers. In this\npaper, we showed that the approach can work for extraction from\ngeneral documents as well. Our experimental results indicated that\nthe machine learning approach can work significantly better than\nthe baselines in title extraction from Office documents. Previous\nwork on metadata extraction mainly used linguistic features in\ndocuments, while we mainly used formatting information. It\nappeared that using formatting information is a key for\nsuccessfully conducting title extraction from general documents.\nWe tried different machine learning models including Perceptron,\nMaximum Entropy, Maximum Entropy Markov Model, and Voted\nPerceptron. We found that the performance of the Perceptorn\nmodels was the best. We applied models constructed in one\ndomain to another domain and applied models trained in one\nlanguage to another language. We found that the accuracies did\nnot drop substantially across different domains and across\ndifferent languages, indicating that the models were generic. We\nalso attempted to use the extracted titles in document retrieval. We\nobserved a significant improvement in document ranking\nperformance for search when using extracted title information. All\nthe above investigations were not conducted in previous work, and\nthrough our investigations we verified the generality and the\nsignificance of the title extraction approach.\n\nACKNOWLEDGEMENTS\nWe thank Chunyu Wei and Bojuan Zhao for their work on data\nannotation. We acknowledge Jinzhu Li for his assistance in\nconducting the experiments. We thank Ming Zhou, John Chen,\nJun Xu, and the anonymous reviewers of JCDL'05 for their\nvaluable comments on this paper.\n\nREFERENCES\n[1] Berger, A. L., Della Pietra, S. A., and Della Pietra, V. J. A\nmaximum entropy approach to natural language processing.\nComputational Linguistics, 22:39-71, 1996.\n[2] Collins, M. Discriminative training methods for hidden\nmarkov models: theory and experiments with perceptron\nalgorithms. In Proceedings of Conference on Empirical\nMethods in Natural Language Processing, 1-8, 2002.\n[3] Cortes, C. and Vapnik, V. Support-vector networks. Machine\nLearning, 20:273-297, 1995.\n[4] Chieu, H. L. and Ng, H. T. A maximum entropy approach to\ninformation extraction from semi-structured and free text. In\nProceedings of the Eighteenth National Conference on\nArtificial Intelligence, 768-791, 2002.\n[5] Evans, D. K., Klavans, J. L., and McKeown, K. R. Columbia\nnewsblaster: multilingual news summarization on the Web.\nIn Proceedings of Human Language Technology conference /\nNorth American chapter of the Association for\nComputational Linguistics annual meeting, 1-4, 2004.\n[6] Ghahramani, Z. and Jordan, M. I. Factorial hidden markov\nmodels. Machine Learning, 29:245-273, 1997.\n[7] Gheel, J. and Anderson, T. Data and metadata for finding and\nreminding, In Proceedings of the 1999 International\nConference on Information Visualization, 446-451,1999.\n[8] Giles, C. L., Petinot, Y., Teregowda P. B., Han, H.,\nLawrence, S., Rangaswamy, A., and Pal, N. eBizSearch: a\nniche search engine for e-Business. 
In Proceedings of the\n26th Annual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval, 413-414\n, 2003.\n[9] Giuffrida, G., Shek, E. C., and Yang, J. Knowledge-based\nmetadata extraction from PostScript files. In Proceedings of\nthe Fifth ACM Conference on Digital Libraries, 77-84, 2000.\n[10] Han, H., Giles, C. L., Manavoglu, E., Zha, H., Zhang, Z., and\nFox, E. A. Automatic document metadata extraction using\nsupport vector machines. In Proceedings of the Third\nACM/IEEE-CS Joint Conference on Digital Libraries, 37-48,\n2003.\n[11] Kobayashi, M., and Takeda, K. Information retrieval on the\nWeb. ACM Computing Surveys, 32:144-173, 2000.\n[12] Lafferty, J., McCallum, A., and Pereira, F. Conditional\nrandom fields: probabilistic models for segmenting and\n153\nlabeling sequence data. In Proceedings of the Eighteenth\nInternational Conference on Machine Learning, 282-289,\n2001.\n[13] Li, Y., Zaragoza, H., Herbrich, R., Shawe-Taylor J., and\nKandola, J. S. The perceptron algorithm with uneven margins.\nIn Proceedings of the Nineteenth International Conference\non Machine Learning, 379-386, 2002.\n[14] Liddy, E. D., Sutton, S., Allen, E., Harwell, S., Corieri, S.,\nYilmazel, O., Ozgencil, N. E., Diekema, A., McCracken, N.,\nand Silverstein, J. Automatic Metadata generation &\nevaluation. In Proceedings of the 25th Annual International\nACM SIGIR Conference on Research and Development in\nInformation Retrieval, 401-402, 2002.\n[15] Littlefield, A. Effective enterprise information retrieval\nacross new content formats. In Proceedings of the Seventh\nSearch Engine Conference,\nhttp://www.infonortics.com/searchengines/sh02/02prog.html,\n2002.\n[16] Mao, S., Kim, J. W., and Thoma, G. R. A dynamic feature\ngeneration system for automated metadata extraction in\npreservation of digital materials. In Proceedings of the First\nInternational Workshop on Document Image Analysis for\nLibraries, 225-232, 2004.\n[17] McCallum, A., Freitag, D., and Pereira, F. Maximum entropy\nmarkov models for information extraction and segmentation.\nIn Proceedings of the Seventeenth International Conference\non Machine Learning, 591-598, 2000.\n[18] Murphy, L. D. Digital document metadata in organizations:\nroles, analytical approaches, and future research directions.\nIn Proceedings of the Thirty-First Annual Hawaii\nInternational Conference on System Sciences, 267-276, 1998.\n[19] Pinto, D., McCallum, A., Wei, X., and Croft, W. B. Table\nextraction using conditional random fields. In Proceedings of\nthe 26th Annual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval, 235-242\n, 2003.\n[20] Ratnaparkhi, A. Unsupervised statistical models for\nprepositional phrase attachment. In Proceedings of the\nSeventeenth International Conference on Computational\nLinguistics. 1079-1085, 1998.\n[21] Robertson, S., Zaragoza, H., and Taylor, M. Simple BM25\nextension to multiple weighted fields, In Proceedings of\nACM Thirteenth Conference on Information and Knowledge\nManagement, 42-49, 2004.\n[22] Yi, J. and Sundaresan, N. Metadata based Web mining for\nrelevance, In Proceedings of the 2000 International\nSymposium on Database Engineering & Applications, 113-121\n, 2000.\n[23] Yilmazel, O., Finneran, C. M., and Liddy, E. D. MetaExtract:\nAn NLP system to automatically assign metadata. In\nProceedings of the 2004 Joint ACM/IEEE Conference on\nDigital Libraries, 241-242, 2004.\n[24] Zhang, J. and Dimitroff, A. 
Internet search engines' response\nto metadata Dublin Core implementation. Journal of\nInformation Science, 30:310-320, 2004.\n[25] Zhang, L., Pan, Y., and Zhang, T. Recognising and using\nnamed entities: focused named entity recognition using\nmachine learning. In Proceedings of the 27th Annual\nInternational ACM SIGIR Conference on Research and\nDevelopment in Information Retrieval, 281-288, 2004.\n[26] http://dublincore.org/groups/corporate/Seattle/\n\n154", "keywords": "Digital Copies;metadata extraction;Metadata processing;Search ranking results;File Formats extraction;Information search and Retrieval;PowerPoint documents;information extraction;Precision extraction;File extraction;search;generic languages;machine learning;Microsoft Office Automation"} {"name": "41", "title": "Autonomous and Distributed Node Recovery in Wireless Sensor Networks", "abstract": "Intrusion or misbehaviour detection systems are an important and widely accepted security tool in computer and wireless sensor networks. Their aim is to detect misbehaving or faulty nodes in order to take appropriate countermeasures, thus limiting the damage caused by adversaries as well as by hard or software faults. So far, however, once detected, misbehaving nodes have just been isolated from the rest of the sensor network and hence are no longer usable by running applications. In the presence of an adversary or software faults, this proceeding will inevitably lead to an early and complete loss of the whole network. For this reason, we propose to no longer expel misbehaving nodes, but to recover them into normal operation. In this paper, we address this problem and present a formal specification of what is considered a secure and correct node recovery algorithm together with a distributed algorithm that meets these properties. We discuss its requirements on the soft- and hardware of a node and show how they can be fulfilled with current and upcoming technologies. The algorithm is evaluated analytically as well as by means of extensive simulations, and the findings are compared to the outcome of a real implementation for the BTnode sensor platform. The results show that recovering sensor nodes is an expensive, though feasible and worthwhile task. Moreover , the proposed program code update algorithm is not only secure but also fair and robust.", "fulltext": "INTRODUCTION\nWireless sensor networks (WSNs) consist of many wireless\ncommunicating sensor nodes. Essentially, these are mi-crocontrollers\nincluding a communication unit and a power\nsupply, as well as several attached sensors to examine the\nenvironment. Sensor nodes typically have very limited computing\nand storage capacities and can only communicate\nwith their direct neighbourhood. In addition, WSNs have\nto work unattended most of the time as their operation area\ncannot or must not be visited. Reasons can be that the area\nis inhospitable, unwieldy, or ecologically too sensitive for human\nvisitation; or that manual maintenance would just be\ntoo expensive.\nMore and more, WSN applications are supposed to operate\nin hostile environments, where their communication\nmight be overheard and nodes can be removed or manipu-lated\n. Regarding attacks on sensor networks, one differentiates\nbetween so called outsider and insider attacks [21]. In\nthe former, a potential attacker tries to disclose or influence\na confidential outcome without participating in its computation\n; for instance, by intercepting, modifying, or adding\nmessages. 
In the latter, by contrast, an attacker appears as\nan adequate member of the WSN by either plausibly impersonating\nregular nodes or by capturing and compromising\nthem.\nCryptographic methods, such as encrypting or signing\nmessages, are an effective protection against attacks from\noutside the network, but are of only limited help against\ninsider attacks. Once an adversary possesses one or several\nvalid node identities (including the associated keys), it\ncan actively participate in the operations of the WSN and\ninfluence the computed results.\nIntrusion or misbehaviour detection systems (IDS), on the\nother hand, are an important and widely accepted security\ntool against insider attacks [18, 21].\nThey allow for the\ndetection of malicious or failed nodes and the application\nof appropriate countermeasures. So far, however, once detected\n, misbehaving nodes have just been isolated from the\nrest of the network and hence are no longer usable by running\napplications. In the presence of an adversary or software\nfaults, this proceding will inevitably result in an early\nand complete loss of the whole network. Therefore, not only\nthe detection of misbehaving nodes is important, but also\nthe selection and application of effective countermeasures.\nTheir aim must not be to simply expel suspected nodes but\nto recover them into correct operation. In combination, the\nadvantages of an IDS together with the appropriate recov-113\nery measures are manifold. Not only do they help in case of\nprogram faults (e.g., deadlocks or crashes) but even if an attacker\nmanages to capture a node and to abuse it for his own\npurposes, there is a chance that the aberrant behaviour of\nthis node will be detected and the node be recovered, thus\nnullifying the attack. However, due to the size of sensor\nnetworks, both the IDS functionality as well as the recovery\nmeasures should be autonomously executed by the involved\nnodes in a distributed and cooperative manner and without\nthe need for central instances with extended functionality.\nMotivated by the above mentioned insights, this paper focuses\non autonomous an distributed node recovery in wireless\nsensor networks and proposes three alternative countermeasures\nto node expelling; namely to switch a node off,\nto restart it, and to update its program code. We formally\nspecify what we consider a secure and correct recovery algorithm\n, present a distributed algorithm which meets these\nproperties, and reason why it can help to extend the overall\nlifetime of a sensor network. In addition, we discuss the limitations\nof the proposed countermeasures, show which hard-and\nsoftware parts of a corrupted node must still work correctly\nto make them applicable, and explain how this can\nbe achieved with current and upcoming technologies. 
More\nprecisely, the contributions of this paper are as follows:\nWe propose to no longer expel misbehaving nodes, but\nto either (i) switch them off, (ii) restart them, or (iii)\nupdate their program code.\nWe give a formal specification of a secure and correct\nrecovery algorithm.\nWe present a provably secure and robust distributed\nnode recovery algorithm.\nWe discuss the requirements on the soft- and hardware\nof a node in order to make the countermeasures applicable\nand show how they can be fulfilled with current\nand upcoming technologies.\nThe algorithm is evaluated analytically as well as by means\nof extensive simulations and the findings are compared to\nthe outcome of a real implementation for the BTnode sensor\nplatform. The results show that recovering sensor nodes\nis an expensive, though feasible and worthwhile task. Moreover\n, the proposed program code update algorithm is not\nonly provably secure but also fair and robust. It distributes\nthe update load equally over all participating nodes and terminates\nas long as at least one of them remains correct.\nThe rest of this paper is organised as follows. Section 1.1\npresents the related work in the area of intrusion detection\nand node recovery in wireless sensor networks. Section 2\nstates the required definitions and assumptions. Section 3\nspecifies the proposed recovery algorithm whose correctness\nis proven in section 4. The algorithm is evaluated in section\n5 and section 6 concludes the paper.\n1.1\nRelated Work\nIn this section, we present related work in the area of intrusion\ndetection in wireless sensor networks. Additionally,\nrelated work regarding program code updates in sensor networks\nis also discussed, as we propose program code updates\nas a mean to recover nodes.\nIntrusion Detection\nIn recent years, intrusion detection systems for wireless sensor\nnetworks have become a major research issue and several\napproaches have been proposed. However, to our best\nknowledge, the only countermeasure applied so far was to\n(logically) exclude malicious nodes.\nKhalil, Bagchi, and Nina-Rotaru present a distributed\nIDS where nodes monitor the communication among their\nneighbours [14]. For each monitored node a malignity counter\nis maintained and incremented whenever the designated node\nmisbehaves. Once a counter exceeds a predefined threshold,\nan according alert is sent to all neighbours and if enough\nalerts are received the accused node is revoked from the\nneighbourhood list. Hsin and Liu suggest a two-phase timeout\nsystem for neighbour monitoring which uses active probing\nto reduce the probability of false-positives [10].\nA rule-based IDS, which proceeds in three phases, is proposed\nby da Silva et al. [5]. In the first phase, messages are\noverheard and the collected information is filtered and ordered\n. In the second phase, the detection rules are applied\nto the gathered data and each inconsistency counted as a\nfailure. Finally, in the third phase, the number or failures is\ncompared to the expected amount of occasional failures and\nif too high an intrusion alert is raised.\nInverardi, Mostarda and Navarra introduce a framework\nwhich enables the automatic translation of IDS specifications\ninto program code [12]. The so generated code is then\ninstalled on the sensor nodes in order to locally detect violations\nof the node interaction policies. In the approach by\nHerbert et al. 
[6], predefined correctness properties (invariants\n) are associated with conditions of individual nodes or\nthe whole network and program code to verify these invariants\nis automatically inserted during compilation.\nA reputation-based IDS framework for WSNs where sensor\nnodes maintain a reputation for other nodes is presented\nby Ganeriwal and Srivastava [8]. It uses a Bayesian formula-tion\nfor reputation representation, update, and integration.\nProgram Code Update\nThe main difference between the already available reprogramming\nalgorithms and the proposed recovery measures\nare that the former focus on the propagation of new program\nreleases among all nodes of the network, whereas the\naim of the latter is the local and autonomous update of a\nsingle node. Furthermore, most reprogramming mechanisms\ndo not care about security at all, or rely on expensive public\nkey cryptography.\nKulkarni and Wang propose a multihop reprogramming\nservice for wireless sensor networks which uses a sender selection\nalgorithm to avoid collisions [16]. Impala, a middleware\nsystem for managing sensor systems is presented\nby Liu and Martonosi [17]. Its modular architecture supports\nupdates to the running system. An application consists\nof several modules which are independently transferred;\nan update is complete if all its modules have been received.\nJeong and Culler introduce an efficient incremental network\nprogramming mechanism [13]. Thanks to the usage of the\nRsync algorithm, only incremental changes to the new program\nmust be transferred.\nA secure dissemination algorithm to distribute new program\nreleases among nodes is presented by Dutta et al. [7].\nProgram binaries are propagated as a sequence of data blocks\nof which the first is authenticated with the private key of\n114\nthe base station and the subsequent ones by means of a\nhash chain. In order to improve the fault tolerance of the\nsensor network, nodes use a grenade timer to reboot period-ically\n. During the boot process neighboring nodes are asked\nwhether a new program release is available and if so, its\ndownload is initiated.\nDEFINITIONS\nIn this section, we define our assumptions regarding the\nobservation of nodes and the network communication model.\nWe specify the capabilities of a potential adversary, explain\nwhat we consider a correct recover algorithm, and discuss\nthe requirements on the hard- and software of a sensor node.\n2.1\nIntrusion and Misbehaviour Detection\nTroughout this paper, we assume that the network is divided\ninto N\nC\nso called observation clusters C\ni\n= (V\ni\n, E\ni\n),\n0 i < N\nC\nof size n, n = |V\ni\n|. Within a cluster each node\nis connected to each other (v\ni\n, v\nj\nV\nk\n, v\ni\n= v\nj\n: {v\ni\n, v\nj\n}\nE\nk\n) and observes the behaviour of its cluster neighbours.\nFor the actual monitoring of the neighbours an arbitrary\nIDS can be used, as long as each node ends up with an\n(individual) decision about whether a certain node behaves\ncorrect or malicious. The set of malicious nodes in a cluster\nis denoted by M\ni\nand their number by t, t = |M\ni\n| n.\n2.2\nNetwork Model\nIn the following, p\ns\n(p\nr\n) denotes the probability that the\nsending (receiving) of a message fails. 
Thus, for 0 p\ns\n, p\nl\n<\n1 the resulting probability for an unsuccessful transmission\n(packet loss ratio, PLR) is\np\nl\n:= 1 - (1 - p\ns\n)(1 - p\nr\n) = p\ns\n+ p\nr\n+ p\ns\np\nr\nAdditionally, we assume that there exists a constant upper\nbound\np\nO(1) on the transmission time of a message.\n2.3\nAdversary Model\nWe consider an omnipresent but computationally bounded\nadversary who can perform both outsider as well as insider\nattacks. This means that a potential adversary is able to intercept\nand create arbitrary messages but unable to decrypt\nor authenticate messages for which he does not possess the\nrequired keys. We further assume that nodes can be either\nlogically (i.e., by exploiting a software bug) or physically\ncaptured. However, the time to compromise a node physically\nis considered non-negligible (i.e., it takes some time to\nmove from node to node and to perform the physical manipulations\n) and to not significantly decrease with the number\nof already captured nodes.\n2.4\nHard- and Software Requirements\nTo all presented recovery measures applies that they are\nonly applicable if at least the therefore needed systems of\nthe corrupt node in the following denoted as the recovery\nsystem still work correctly. In order to achieve this, one\nhas to make sure that the recovery system is logically and,\nif feasible, physically protected.\nLogical Protection of the Recovery System\nLogical protection means that it should not be possible for a\nrunning application to prevent the execution of the recovery\nprocedures. That is, if the program code running on a node\nhas crashed or been corrupted by an adversary (e.g., by exploiting\na security hole), this should not affect the integrity\nand availability of the recovery system.\nOne mechanism to achieve this is to set up a hardware\ninterrupt which cannot be suppressed or redirected by the\napplication and by locating the dedicated interrupt routine\nin a write protected memory area. Consequently, on each\ninterrupt request, control is handed over to the immutable\ninterrupt routine an thus to the recovery system. A simple\nvariant of this mechanism in which a grenade timer period-ically\nreboots the system and the bootloader is located in\nread only memory (ROM) is used by Dutta et al. [7]. Another\napproach would be to misuse some additionally available\nMCUs [23], for example the ARM CPU on the ARM-based\nBluetooth module on the BTnode.\nSome of these\nMCUs are powerful enough to take on additional tasks like\nmonitoring the main MCUs activities or rewriting the application\nmemory. In case of the BTnode that extra MCU\nis directly responsible for communication and thus it would\nbe guaranteed that it has access to all received packets as\nwell.\nOn more advanced systems, mechanisms as provided by\nIntel's protected mode (e.g., isolated memory areas, privilege\nlevels, etc.) could be used to protect the recovery system\nmore efficiently. Current technologies such as ARM's\nTrustZone [1] for embedded devices or Intel's LaGrande technology\n[11] go even further and enable a comprehensive protection\nof the CPU, memory, and peripherals from software\nattacks.\nPhysical Protection of the Recovery System\nThe physical protection of current sensor node platforms\nis very poor because of their focus on simple maintenance\n[9]. 
However, although it is generally agreed that entirely\ntamper-proof sensor nodes would be too expensive, current\ntrends in the hardware development of embedded devices indicate\nthat some level of physical protection will be available\nin the near future [20, 15]. Security mechanisms regarding\nthe packaging of sensor nodes as, for instance, those proposed\nby FIPS 140-2 level 2 [19] could already significantly\nincrease the cost for an adversary. For integrity and not\nconfidentiality is the main concern with the recovery module\n, it has only to be protected against manipulations but\nnot against unintended disclosure or side-channel attacks.\nIn fact, it would be sufficient to have mechanisms which\nrender a node useless if the case of the recovery system was\nopened; complete tamper resistance is not required.\n2.5\nCorrect Node Recover Algorithms\nA node recovery algorithm for a cluster C\ni\n= (V\ni\n, E\ni\n) is\nconsidered correct if the following liveness and safety properties\nhold:\nL1 If all correct nodes (V\ni\n\\ M\ni\n) accuse a node m V\ni\nto\nbe faulty or malicious, its recovery process will finally\nbe initiated with high probability.\nL2 Once the recovery process for a node m V\ni\nhas been\ninitiated, it will eventually terminate as long as there\nremain at least k 1 correct nodes V\ni\n\\ M\ni\n.\nS1 If no more than\nn-1\n3\ncorrect nodes (i.e., a minority)\naccuse another correct node v V\ni\n\\ M\ni\nthe recovery\nprocess will not be initiated.\n115\nS2 After the recovery process, a node m V\ni\nmust either\n(i) be halted, (ii) contain the same program code as\nbefore, or (iii) contain the correct program code.\nThe two liveness properties L1 and L2 ensure that each malicious\nnode is recovered if its aberrant behaviour is detected\nby enough neighbours. Safety property S1 is required to\nmake sure that a node is only recovered if a majority of correct\nnodes accuses it and property S2 ensures that things\nare not worsened by applying the recovery process.\nDISTRIBUTED NODE RECOVERY\nIn this section, we present a distributed node recovery\nalgorithm which is autonomously executed within an observation\ncluster. The supported recovery measures are: node\nshutdown, node restart, and program code update. As long\nas the recovery module of an otherwise faulty or malicious\nnode is still intact, it is tried to recover it by restarting it or\nupdating its program code; or to at least eliminate its interfering\ninfluence by turning it off. If a node does not respond\nto any of these measures, it is still possible to logically expell\nit; preferably by means of a reliable majority decision [22]\nto avoid inconsistencies among the cluster members.\n3.1\nDescription of the Recovery Procedure\nThe proposed recovery algorithm consists of two phases.\nIn the first, so called accusation phase, nodes accuse all\nneighbours which are regarded as being malicious. If a node\nis accused by at least two third of its neighbours it initiates\nthe second, so called recovery phase, during which the actual\ncountermeasures are executed. To simplify the cooperative\nprogram code update, the program memory of a node is divided\ninto F frames f\ni\n, 0 i < F of size f s. 
Additionally,\nfor each frame f\ni\nits corresponding hash value h\ni\n:= h(f\ni\n) is\ncomputed.\nAccusation Phase\nRecovery Phase\nRound 1\nRound 2\nRound 3\nFigure 1: Schematic depiction of a recovery procedure which\nperforms a program update as the countermeasure.\nAccusation Phase\nNodes which conclude that one of their neighbours behaves\nmaliciously, send it an authenticated accusation message\n1\n.\nThe proposed countermeasure depends on the observed aberration\nand can be either of type shutdown, reset, or update,\nif the node should be halted, restarted, or its program code\nupdated, respectively. Accusation messages have to be ac-knowledged\nand are resent up to r times otherwise.\n1\nFor simplicity, it is assumed that nodes can accuse their\nneighbours at any time. However, if the recovery module\nis only active from time to time, nodes could of course also\nactively ask for (pending) accusations.\nIn case that a program update is requested, the accusation\nmessages also include a list of the sender's F frame hash\nvalues h\ni\n. They represent the current state of its program\nmemory and are required to deduce the correct program\ncode. Therefore, for each frame f\ni\nnot only its hash value h\ni\nbut also a counter c\ni\n, which is initialised with zero, is stored.\nUpon reception of a accusation message, each included hash\nvalue is compared to the already stored one and if they are\nequal, c\ni\nis incremented by one. If they differ and c\ni\n> 0 the\ncounter is decremented by one; otherwise (i.e., they are not\nequal and c\ni\n= 0) the stored hash value is replaced with the\nreceived value. This procedure ensures that, for 3t < n - 1,\nevery h\ni\nwill contain the hash value of the correct program\ncode frame after\n2(n-1)\n3\naccusations have been received\n(see Proof 4).\nRecovery Phase\nWhen a node m has received\n2(n-1)\n3\naccusations of a certain\nrecovery type, the corresponding measure is initiated.\nIn the non trivial case of a distributed program code update\n, the correct program code has therefore to be down-loaded\nfrom the neighboring nodes. Otherwise, the node is\njust rebooted or shutdown and no further communication or\ncoordination is required.\nThe autonomous program code transfer is performed in\nrounds of which each starts with the broadcasting of an authenticated\nupdate request message by the accused node m.\nEssentially, the message contains a list of so called frame\ndescriptors (u\ni\n, Q\ni\n), consisting of a node id u\ni\nand a set of\nrequested frame numbers Q\ni\n:= {r\n0\n, r\n1\n, . . . , r\n|Q|-1\n}. Upon\nreception of a valid request, a node v seeks for descriptors\nwhich contain its own id (i.e., u\ni\n= v). If present, for each\nrequested frame number r\nj\nQ\ni\nthe corresponding program\ncode frame is sent back to m with an update message. All\nreceived program code fragments f\ni\n, in turn, are verified by\nm using the stored hash values h\ni\n. Valid code fragments are\ncopied into the program memory\n2\nand the frame marked as\nupdated. If for a duration of\nround\nno update messages arrive\nalthough there are still some outstanding frames, a new\nupdate request message is broadcasted and the next round\ninitiated. 
As soon as all frames have been received, the node\nis rebooted and thus the new program code activated.\nIn order to distribute the transfer load equally among all\nparticipating nodes and to ensure that the update procedure\nterminates if at least one correct node is available, the\nframe descriptors are determined as follows: First, the n - 1\nparticipating nodes are ordered such that id(v\n0\n) < id(v\n1\n) <\n. . . < id(v\nn-2\n). Next, the F memory frames are divided into\nn - 1 sectors of length l :=\nF\nn-1\n. Finally, to each node one\nsuch fragment is assigned per update round in a round robin\nfashion. Thus, in round i node v\nj\n, is responsible for the segment\ns := j + i mod (n - 1), that is, for the frames sl to\nmin((s + 1)l - 1, F - 1). In the first round, for example,\nthe first node is responsible for the first l frames, the second\nnode for the second l frames and so on. In the second round,\nhowever, the assignment is rotated by one and thus the outstanding\nframes of the first sector are now requested from\nthe second node. This process has to be continued until all\nrequired frames have been received.\n2\nOn most sensor node platforms, new code is not directly\nwritten into program memory but into a therefore available\nFlash memory and installed during a subsequent reboot.\n116\nExtensions and Optimisations\nEven though not all but only the subset of modified program\ncode frames has to be requested, updating a node is\nstill a time consuming and expensive task. Consequently,\nthe amount of update load that a specific node can cause\nshould be restricted, for instance by limiting the number of\nupdate messages that are sent to it. To further reduce the\nload for the participating nodes, the F hash values h\ni\nin\nan accusation message can be replaced by the hash value\nh := h(h\n0\n||h\n1\n|| . . . ||h\nF -1\n).\nOnce the correct value h has\nbeen determinded using the corresponding counter c in analogy\nto the above mentioned algorithm, the actual hash values\ncan be requested from the neighbours in a second step\nand verified with h. In order to decrease the total number\nof required accusation messages, more than one recovery\nmeasure per message should be allowed. Alternatively, the\nmeasures could be hierarchically organised, having the type\nupdate also counting as a reboot or shutdown request.\n3.2\nAlgorithms\nListing 1: Algorithm for an accusing node v.\nv a r\na c c r e t r i e s [ n - 1 ] : = {0, . . . ,0}\na c c f a i l e d [ n - 1 ]\n: = {false, . . . ,false}\nn u m u p d a t e s [ n - 1 ] : = {0, . . . ,0}\nupon m i s b e h a v i o r\nd e t e c t i o n\no f n o d e m\nc h o o s e an a p p r o p r i a t e\na c c u s a t i o n -t y p e a\nm\ni f a\nm\n= acc update\ns e n d\na c c u s a t i o n , v, m, , a\nm\n, {h(f\n0\n), . . . , h(f\nF -1\n)}\nt o m\ne l s e\ns e n d\na c c u s a t i o n , v, m, , a\nm\nt o m\ns t a r t\nt i m e r A\nm\nupon r e c e p t i o n\no f\na c c u s a t i o n a c k , m, , a\nf r o m m\ns t o p\nt i m e r A\nm\na c c r e t r i e s [ m ] := 0\nupon t i m e o u t\no f\nt i m e r A\nm\ni f\na c c r e t r i e s [ m ] < max acc retries\na c c r e t r i e s [ m ] := a c c r e t r i e s [ m ] + 1\ns e n d\na c c u s a t i o n , v, m, , a\nm\n, {h(f\n0\n), . . . , h(f\nF -1\n)}\nt o m\ns t a r t\nt i m e r A\nm\ne l s e\na c c f a i l e d [ m ] := true\nupon r e c e p t i o n\no f\nu p d a t e r e q u e s t , m, , R\nf r o m m\ni f\nn u m u p d a t e s [ m ] < m a x u p d a t e s and\n(u, {r\n0\n, . . . 
, r\nk\n}) R\nn u m u p d a t e s [ m ] := n u m u p d a t e s [ m ] + 1\nr\ni\n, 0 i k s e n d\nu p d a t e , v, m, r\ni\n, f\nr\ni\nt o m\nListing 2: Algorithm for the accused node m.\nv a r\nu p d a t i n g\n:= false\nn u m a c c r e s e t\n:= 0\nn u m a c c u p d a t e\n:= 0\nn u m a c c s h u t d o w n := 0\ns t a r t n o d e\n:= 0\na c c r e s e t r e c v d [ n - 1 ]\n:= {0, . . . ,0}\na c c u p d a t e r e c v d [ n - 1 ]\n:= {0, . . . ,0}\na c c s h u t d o w r e c v d [ n - 1 ] := {0, . . . ,0}\nf r a m e u p d a t e d [ F - 1 ] := {false, . . . ,false}\nf r a m e d i g e s t [ F - 1 ]\n:= {h(f\n0\n), . . . , h(f\nF -1\n)}\nf r a m e c o u n t [ F - 1 ]\n:= {0, . . . ,0}\nupon r e c e p t i o n\no f\na c c u s a t i o n , v, m, ,acc reset\nf r o m v\ns e n d\na c c u s a t i o n a c k , m, ,acc reset\nt o v\ni f\nn o t u p d a t i n g and n o t\na c c r e s e t r e c v d [ v ]\na c c r e s e t r e c v d [ v ] := true\nn u m a c c r e s e t : = n u m a c c r e s e t + 1\ni f\nn o t u p d a t i n g and n u m a c c r e s e t\n2(n-1)\n3\nr e s e t\nn o d e\nupon r e c e p t i o n\no f\na c c u s a t i o n , v, m, ,acc shutdown\nf r o m v\ns e n d\na c c u s a t i o n a c k , m, ,acc shutdown\nt o v\ni f\nn o t u p d a t i n g and n o t a c c s h u t d o w n r e c v d [ v ]\na c c s h u t d o w n r e c v d [ v ] := true\nn u m a c c s h u t d o w n : = n u m a c c s h u t d o w n + 1\ni f\nn o t u p d a t i n g and n u m a c c s h u t d o w n\n2(n-1)\n3\nshutdown n o d e\nf u n c t i o n\ns e t u p u p d a t e r e q u e s t ( )\nk := 0\nR := {}\nf o r 0 i < n , i = m\nw := ( s t a r t n o d e + i) mod n\nQ := {}\nf o r 0 j <\nF\nn\ni f\nn o t f r a m e u p d a t e d [ k ]\nQ := Q {k}\nk := k + 1\ni f Q = {}\nR := R {(w, Q)}\ns t a r t n o d e := s t a r t n o d e + 1\nr e t u r n R\nupon r e c e p t i o n\no f\na c c u s a t i o n , v, m, ,acc update\n, {h\n0\n, . . . , h\nF -1\n}\nf r o m v\ns e n d\na c c u s a t i o n a c k , m, ,acc update\nt o v\ni f\nn o t u p d a t i n g and n o t\na c c u p d a t e r e c v d [ v ]\na c c u p d a t e r e c v d [ v ] := true\nn u m a c c u p d a t e : = n u m a c c u p d a t e + 1\nf o r 0 i < F\ni f\nf r a m e d i g e s t [ i ] = h\ni\nf r a m e c o u n t [ i ] := f r a m e c o u n t [ i ] + 1\ne l s e\ni f\nf r a m e c o u n t [ i ] > 0\nf r a m e c o u n t [ i ] := f r a m e c o u n t [ i ] - 1\ne l s e\nf r a m e d i g e s t [ i ] := h\ni\ni f\nn o t u p d a t i n g and n u m a c c u p d a t e\n2(n-1)\n3\nR : = s e t u p u p d a t e r e q u e s t ( )\nb r o a d c a s t\nu p d a t e r e q u e s t , m, , R\ns t a r t\nt i m e r U\nu p d a t i n g = true\nupon t i m e o u t\no f\nt i m e r U\nR : = s e t u p u p d a t e r e q u e s t ( )\nb r o a d c a s t\nu p d a t e r e q u e s t , m, , R\ns t a r t\nt i m e r U\nupon r e c e p t i o n\no f\nu p d a t e , v, m, i, f\nf r o m v\nr e s e t\nt i m e r U\n117\ni f h(f ) = f r a m e d i g e s t [ i ] and n o t\nf r a m e u p d a t e d [ i ]\nu p d a t e memory f r a m e i\nf r a m e u p d a t e d [ i ] := true\ni f i, 0 i < F\nf r a m e u p d a t e d [ i ]\nr e s e t\nn o d e\nPROOF OF CORRECTNESS\nIn this section, we proof the correctness of the proposed\nalgorithm with respect to the specifications of section 2.\nTheorem 1. 
Given the network and adversary model specified in section 2, the proposed recovery algorithm is correct and fulfils the properties L1, L2, S1, and S2 if the recovery module of the accused node m is intact, if h() is a secure hash function, and if less than one third of the participating nodes are malicious (i.e., 3t < n - 1).

In order to prove Theorem 1 we have to show that the properties L1, L2, S1, and S2 hold. We therefore first prove some helper Lemmas.

Lemma 1. If all correct nodes accuse a node m, its recovery process will be initiated with high probability.

Proof. The probability that less than 2(n-1)/3 accusations are received is equal to the probability that more than (n-1)/3 messages are either not sent or lost. Assuming that the t malicious nodes do not participate in the distributed update, at least (n-1)/3 - t + 1 accusations must get lost. Given 0 ≤ p_l < 1, the probability for this is

  (p_l^r)^((n-1)/3 - t + 1) + (p_l^r)^((n-1)/3 - t + 2) + ... + (p_l^r)^(n-1)  ≤  (2(n-1) + 3t)/3 * (p_l^r)^((n-1)/3 - t + 1).

It holds that for all c > 1 there exists an r ≥ 1 such that (2(n-1) + 3t)/3 * (p_l^((n-1)/3 - t + 1))^r < n^(-c). Thus, the node m gets 2(n-1)/3 accusations w.h.p. and the recovery process is initiated.

Lemma 2. Once the recovery process for a node m has been initiated it will eventually terminate as long as there remain at least k ≥ 1 correct nodes.

Proof. In order that a frame is updated in a specific round, the dedicated request as well as its actual transmission must succeed. The probability that this is the case is (1 - p_l)^2. With only one correct node (k = 1) the expected number of update rounds per frame a can be described as a Markov chain described by the expression a = (a + 1)(1 - (1 - p_l)^2) + (1 - p_l)^2 with the solution a = 1/(1 - p_l)^2. The overall expected number of rounds is thus aF = F/(1 - p_l)^2 ∈ O(1). In each round at most one request and F updates are transmitted, leading to an upper bound for its duration of (F + 1) * τ_p ∈ O(1). Altogether, the expected worst case duration is F(F + 1)/(1 - p_l)^2 * τ_p ∈ O(1).

Lemma 3. If no more than (n-1)/3 correct nodes accuse another correct node v ∉ M the recovery process will not be initiated.

Proof. From each node only one accusation is accepted and thus the number of valid accusations is at most (n-1)/3 + t < 2(n-1)/3.

Lemma 4. At the start of a program code update the target node m has stored the correct hash value h_i for all frames, given that all correct nodes have loaded the same program code.

Proof. Let's assume that there is a hash value h_i which is not correct when the program code update starts. As a stored hash value is only substituted if the dedicated counter c_i is zero, the node must have received at least as many wrong values as correct ones. From each node only one accusation is accepted, thus of the a ≥ 2(n-1)/3 received values at most t < (n-1)/3 are false. It follows that at least a - t > 2(n-1)/3 - (n-1)/3 = (n-1)/3 > t values must be correct, which contradicts the assumption that at least as many false as correct hash values were received.

The properties L1, L2, and S1 are proven by Lemma 1, 2, and 3, respectively. If the accused node is turned off or restarted, property S2 holds by definition.
Otherwise, if the program code is updated, Lemma 4 and property L2 guarantee that only correct code frames are installed and that the procedure finally terminates.

EVALUATION
In this section we provide an analytical evaluation of the proposed algorithm and present the findings of extensive simulations as well as of a real implementation for the BTnodes. The evaluated metrics are: (a) number of update rounds, (b) update load for the accused nodes, (c) update load for the other participating nodes, and (d) update duration.

5.1 Analytical Evaluation

Number of Update Rounds
In order to update a frame it is required that the dedicated request as well as the actual frame itself are successfully transmitted. The expected fraction of erroneous updates is therefore 1 - (1 - p_l)^2. If one further assumes that the t malicious nodes do not participate in the program update, the fraction increases to 1 - (n-1-t)/(n-1) * (1 - p_l)^2. Thus, the expected numbers of outstanding frames after the first and second update round are E_f(1) = F * (1 - (n-1-t)/(n-1) * (1 - p_l)^2) and E_f(2) = E_f(1) * (1 - (n-1-t)/(n-1) * (1 - p_l)^2) = F * (1 - (n-1-t)/(n-1) * (1 - p_l)^2)^2, respectively. In general, the expected number of outstanding frames after i > 0 rounds is:

  E_f(i) = E_f(i-1) * (1 - (n-1-t)/(n-1) * (1 - p_l)^2) = F * (1 - (n-1-t)/(n-1) * (1 - p_l)^2)^i

Consequently, the expected number of update rounds is

  E_r ≈ (log(0.5) - log(F)) / log(1 - (n-1-t)/(n-1) * (1 - p_l)^2)

For reliable connections (p_l = 0) we get E_r ≈ (log(0.5) - log(F)) / log(1/3) ∈ O(1). In a worst case scenario a continuous sequence of frames is assigned to the t malicious nodes and thus at least t + 1, that is O(t) = O(n), rounds are required. Moreover, for a fixed p_l, the expected number of rounds is (almost) independent of the cluster size n as 2/3 < (n-1-t)/(n-1) ≤ 1 for a fixed t, 0 ≤ t < (n-1)/3.

Update Load
The expected amount of data (in bytes) to transfer for the accused node is

  E_tm = Σ_{i=0}^{E_r-1} ( C_req + C_mac*(n-1) + C_sel*(n-1) * E_f(i)/F )  ≤  ( C_req + C_mac*(n-1) + C_sel*(n-1) ) * E_r

and

  E_tv = Σ_{i=0}^{E_r-1} E_f(i)/(n-1) * (1 - p_l) * (C_update + fs)  ≤  F/(n-1) * (1 - p_l) * (C_update + fs) * E_r

for the other participating nodes. In the above expressions (n-1) * E_f(i)/F is the expected number of addressed nodes and E_f(i)/(n-1) * (1 - p_l) expresses the expected number of successfully requested frames per node.

Update Duration
The total number of sent messages E_m is bounded by (F + 1) * E_r ∈ O(E_r) as there is only one request and no more than F update messages per round. Thus, the expected value of E_m is in O(1) for reliable connections and in O(n) in the worst case. More precisely, the expected number of messages is given by

  E_m = Σ_{i=0}^{E_r-1} ( 1 + (n-1-t)/(n-1) * E_f(i) * (1 - p_l) ) = E_r + (n-1-t) * E_tv / (C_update + fs)
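As a quick plausibility check, the expectations derived above can be evaluated numerically. The short Python sketch below simply transcribes E_f(i), E_r, E_tm, E_tv and E_m as reconstructed here, together with the packet loss ratio p_l = 1 - (1 - p_s)(1 - p_r) from section 2.2; it uses the parameter values of the parametrisation below and is not part of the original evaluation code.

import math

# Non-authoritative transcription of the analytical expectations derived above.

def expected_rounds(F, n, t, p_l):
    q = 1 - (n - 1 - t) / (n - 1) * (1 - p_l) ** 2   # expected fraction of frames still missing
    return (math.log(0.5) - math.log(F)) / math.log(q)

def expected_outstanding(F, n, t, p_l, i):
    q = 1 - (n - 1 - t) / (n - 1) * (1 - p_l) ** 2
    return F * q ** i                                 # E_f(i)

def expected_loads(F, n, t, p_l, C_req, C_mac, C_sel, C_update, fs):
    E_r = max(1, math.ceil(expected_rounds(F, n, t, p_l)))
    E_f = [expected_outstanding(F, n, t, p_l, i) for i in range(E_r)]
    E_tm = sum(C_req + C_mac * (n - 1) + C_sel * (n - 1) * e / F for e in E_f)
    E_tv = sum(e / (n - 1) * (1 - p_l) * (C_update + fs) for e in E_f)
    E_m = E_r + (n - 1 - t) * E_tv / (C_update + fs)
    return E_r, E_tm, E_tv, E_m

if __name__ == "__main__":
    p_s, p_r = 0.1, 0.1
    p_l = 1 - (1 - p_s) * (1 - p_r)                   # packet loss ratio of section 2.2
    print(expected_loads(F=100, n=10, t=2, p_l=p_l,
                         C_req=12, C_mac=20, C_sel=10, C_update=11, fs=1024))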
The expected time to\ntransfer all messages is\nE\ntm\n+(n-1-t)E\ntv\nB\n+ E\nm\n\nmac\nwhereas\nthe overhead of the round timer is given by (E\nr\n- 1)\nround\n,\nresulting in a total update duration of\nE\nd\n\nE\ntm\n+ (n - 1 - t)E\ntv\nB\n+ E\nm\n\nmac\n+ (E\nr\n- 1)\nround\nParametrisation\nFor the comparison of the analytical results with the simulation\nand implementation of the algorithm, the following\nparameters were used:\nBaudrate\nB\n19.2 kBit/s\nB-MAC preamble\n\nmac\n100 ms\nRound timeout\n\nround\n3 s\nRequest header size\nC\nreq\n12 Bytes\nUpdate header size\nC\nupdate\n11 Bytes\nFrame selector site\nC\nsel\n10 Bytes\nMAC size\nC\nmac\n20 Bytes\nFrame size\nf s\n1024 Bytes\n5.2\nSimulation\nThe simulation of the algorithm was carried out with the\nJava based JiST/SWANS simulator [2]. In order to make the\nresults comparable to the real BTnode implementation, the\nradio module was set up according to the characteristics of\n0\n10\n20\n30\n40\n50\n60\n70\n80\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\npacket loss ratio\nExpected number of update rounds\nF=100, n=10, t=0 (analytic)\nF=100, n=10, t=2 (analytic)\nF=100, n=10, t=0 (simulated)\nF=100, n=10, t=2 (simulated)\nFigure 2: Expected number of rounds to update a node.\n0\n50\n100\n150\n200\n250\n300\n350\n400\n450\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\npacket loss ratio\nExpected duration (in sec.) to update a node\nF=100, n=10, t=0 (analytic)\nF=100, n=10, t=2 (analytic)\nF=100, n=10, t=0 (simulated)\nF=100, n=10, t=2 (simulated)\nFigure 3: Expected duration to update a node.\nthe Chipcon CC1000 transceiver [4] and B-MAC was chosen\nas the data link layer protocol. The complete parametrisa-tion\nof the simulation is given in the table below:\nTransmission frequency\n868 MHz\nTransmission power\n5 dBm\nReceiver sensitivity\n-100 dBm\nMemory size\n100 kByte\nNumber of nodes\n10\nDeployment area\n20 x 20 m (u.r.d.)\n5.3\nImplementation\nIn addition to the above mentioned simulations, the algorithm\nwas also implemented for the BTnodes, a wireless\nsensor platform running NutOS [3]. A detailed description\nof the created software is omitted due to space reasons but\ncan be found in [22]. The implementation was evaluated by\nrandomly distributing a cluster of 10 nodes in a field of 20\nx 20 m, whereupon each node in turn initiated a complete\nprogram update. Altogether, over 100 recovery procedures\nwhere measured.\n5.4\nResults\nThe packet loss ratio has, as expected, a significant effect\non all evaluated metrics and each of them increases exponen-tially\nif the ratio worsens. The number of nodes, in contrast,\n119\n0\n2\n4\n6\n8\n10\n12\n14\n16\n18\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\npacket loss ratio\nExpected amount of data to transfer (in kBytes) for the updating node\nF=100, n=10, t=0 (analytic)\nF=100, n=10, t=2 (analytic)\nF=100, n=10, t=0 (simulated)\nF=100, n=10, t=2 (simulated)\nFigure 4: Expected update load for the accused node.\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\n50\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\npacket loss ratio\nExpected amount of data to transfer (in kBytes) for the participating nodes\nF=100, n=10, t=0 (analytic)\nF=100, n=10, t=2 (analytic)\nF=100, n=10, t=0 (simulated)\nF=100, n=10, t=2 (simulated)\nFigure 5: Expected update load for the participating nodes.\nhas for a fixed packet loss ratio almost no negative impact\non the evaluated metrics, showing that the algorithm itself\nscales well. 
Furthermore, the results show that the update algorithm is fair and equally distributes the update load over all participating nodes.

Update Rounds and Update Duration
Whilst the expected number of update rounds (see Figure 2) is only of secondary importance, the update duration (see Figure 3) is of major interest for the feasibility of the algorithm. The faster a node recovery is completed, the sooner the network is operable again. Even though the update duration almost triples from 50 to 150 s if the packet loss ratio increases from 0 to 40 percent, it is still in a range which most WSN applications should be able to cope with.

Update Load
In a cluster of 10 nodes the update load for the accused node (see Figure 4) is 0.5 to 3.5 kByte for 0 ≤ p_l ≤ 0.4 and thus considerably smaller than for the other participating nodes (see Figure 5) with a load of 12 to 24 kByte. However, the latter is, as expected, inversely proportional to the cluster size (see Figure 7): the larger a cluster and the lower the number of malicious nodes, the smaller the expected update load per participating node.

Figure 6: Influence of the cluster size on the update duration (update duration in sec. vs. number of nodes; F=100, plr=0.1 and 0.5, t=0 and t=n/3).
Figure 7: Influence of the cluster size on the expected update load for the participating nodes (data volume in kBytes vs. number of nodes; F=100, plr=0.1 and 0.5, t=0 and t=n/3).

Implementation
In the experiments conducted with the BTnode implementation, the average number of update rounds was five, the update duration about 100 s (± 10 s), and the load for the participating nodes about 15 kByte (± 1 kByte). Applied to the analytical model this would mean that the gross packet loss ratio was roughly 20%.

SUMMARY AND CONCLUSIONS
In this paper, we presented an autonomous and distributed recovery algorithm for sensor networks. The algorithm allows for bringing malicious or failed nodes back into normal operation or, at least, for securely shutting them down. Particularly in remote or unwieldy areas, such as deserts, the bottom of the sea, mountains, or even on planets in outer space, where redeployment is expensive and sensor nodes cannot easily be exchanged or maintained, the application of a node recovery system is most likely to extend the lifetime of the whole network.
The results of the simulation and analytical analysis were confirmed by the real BTnode implementation. They show that recovering sensor nodes is, as any form of reprogramming, an expensive, though feasible task. Moreover, the proposed program code update algorithm is not only provably secure but also fair and robust. It distributes the update load equally over all participating nodes and terminates as long as at least one of the nodes remains correct.
All presented recovery measures apply only if the subsystems of the corrupt node that they rely on still work correctly.
However, although it is\ngenerally agreed that entirely tamper-proof sensor nodes are\ntoo expensive, current trends in the hardware development\nof embedded devices indicate that at least some logical and\nphysical protection (e.g., CPUs which support isolated memory\nareas or automatic memory erason if a node is tempered\nwith) will be available in the near future. We discussed how\nthese upcoming technologies can be exploited to protect the\nrecovery mechanisms of a sensor node and what is already\nfeasible with existing systems.\nREFERENCES\n[1] T. Alves and D. Felton. TrustZone: Integrated\nHardware and Software Security. ARM Ltd, July 2004.\n[2] R. Barr, Z. J. Haas, and R. van Renesse. Jist: an\nefficient approach to simulation using virtual\nmachines: Research articles. Softw. Pract. Exper.,\n35(6):539576, 2005.\n[3] J. Beutel, O. Kasten, and M. Ringwald. Poster\nabstract: Btnodes a distributed platform for sensor\nnodes. In Proceedings of the 1st International\nConference on Embedded Networked Sensor Systems,\npages 292 293, Los Angeles, California, USA, Jan.\n2003. ACM Press. http://www.btnode.ethz.ch/.\n[4] Chipcon AS, Oslo, Norway. Single Chip Very Low\nPower RF Transceiver, Rev. 2.1, Apr. 2002.\nhttp://www.chipcon.com/.\n[5] A. P. R. da Silva, M. H. T. Martins, B. P. S. Rocha,\nA. A. F. Loureiro, L. B. Ruiz, and H. C. Wong.\nDecentralized intrusion detection in wireless sensor\nnetworks. In Q2SWinet '05: Proceedings of the 1st\nACM international workshop on Quality of service &\nsecurity in wireless and mobile networks, pages 1623,\nNew York, NY, USA, 2005. ACM Press.\n[6] S. B. Douglas Herbert, Yung-Hsiang Lu and Z. Li.\nDetection and repair of software errors in hierarchical\nsensor networks. To appear in IEEE conference on\nSensor Networks and Ubiquitous Trustworthy\nComputing (SUTC), June 2006.\n[7] P. K. Dutta, J. W. Hui, D. C. Chu, and D. E. Culler.\nTowards secure network programming and recovery in\nwireless sensor networks. Technical Report\nUCB/EECS-2005-7, Electrical Engineering and\nComputer Sciences University of California at\nBerkeley, Oct. 2005.\n[8] S. Ganeriwal and M. B. Srivastava. Reputation-based\nframework for high integrity sensor networks. In\nSASN '04: Proceedings of the 2nd ACM workshop on\nSecurity of ad hoc and sensor networks, pages 6677,\nNew York, NY, USA, 2004. ACM Press.\n[9] C. Hartung, J. Balasalle, and R. Han. Node\ncompromise in sensor networks: The need for secure\nsystems. Technical Report CU-CS-990-05, Department\nof Computer Science, University of Colorado, Jan.\n2005.\n[10] C. Hsin and M. Liu. A distributed monitoring\nmechanism for wireless sensor networks. In WiSE '02:\nProceedings of the 3rd ACM workshop on Wireless\nsecurity, pages 5766, New York, NY, USA, 2002.\nACM Press.\n[11] Intel Corporation. LaGrande Technology Architectural\nOverview, Sept. 2003.\n[12] P. Inverardi, L. Mostarda, and A. Navarra.\nDistributed IDSs for enhancing security in mobile\nwireless sensor networks. AINA, 2:116120, 2006.\n[13] J. Jeong and D. Culler. Incremental network\nprogramming for wireless sensors. In Proceedings of\nthe First IEEE Communications Society Conference\non Sensor and Ad-Hoc Communications and Networks\n(SECON), 2004.\n[14] I. Khalil, S. Bagchi, and C. Nina-Rotaru. Dicas:\nDetection, diagnosis and isolation of control attacks in\nsensor networks. securecomm, 00:89100, 2005.\n[15] P. Kocher, R. Lee, G. McGraw, and A. Raghunathan.\nSecurity as a new dimension in embedded system\ndesign. 
In DAC '04: Proceedings of the 41st annual\nconference on Design automation, pages 753760, New\nYork, NY, USA, 2004. ACM Press.\nModerator-Srivaths Ravi.\n[16] S. S. Kulkarni and L. Wang. Mnp: Multihop network\nreprogramming service for sensor networks. icdcs,\n00:716, 2005.\n[17] T. Liu and M. Martonosi. Impala: a middleware\nsystem for managing autonomic, parallel sensor\nsystems. SIGPLAN Not., 38(10):107118, 2003.\n[18] S. Northcutt and J. Novak. IDS: Intrusion\nDetection-Systeme. mitp Verlag Bonn, 2001.\n[19] N. B. of Standards. Security Requirements for\nCryptographic Modules. National Bureau of Standards,\nDec. 2002.\n[20] S. Ravi, A. Raghunathan, P. Kocher, and\nS. Hattangady. Security in embedded systems: Design\nchallenges. Trans. on Embedded Computing Sys.,\n3(3):461491, 2004.\n[21] E. Shi and A. Perrig. Designing secure sensor\nnetworks. IEEE Wireless Communication Magazine,\n11(6):3843, Dec. 2004.\n[22] M. Strasser. Intrusion detection and failure recovery in\nsensor networks. Master's thesis, Department of\nComputer Science, ETH Zurich, 2005.\n[23] H. Vogt, M. Ringwald, and M. Strasser. Intrusion\ndetection and failure recovery in sensor nodes. In\nTagungsband INFORMATIK 2005, Workshop\nProceedings, LNCS, Heidelberg, Germany, Sept. 2005.\nSpringer-Verlag.\n121\n", "keywords": "sensor networks;countermeasures;intrusion detection;Wireless Sensor Networks;Node Recovery;Intrusion Detection;node recovery;IDS;sensor nodes;security;distributed algorithm"} {"name": "42", "title": "Bayesian Online Classifiers for Text Classification and Filtering", "abstract": "This paper explores the use of Bayesian online classifiers to classify text documents. Empirical results indicate that these classifiers are comparable with the best text classification systems. Furthermore, the online approach offers the advantage of continuous learning in the batch-adaptive text filtering task.", "fulltext": "INTRODUCTION\nFaced with massive information everyday, we need automated\nmeans for classifying text documents. Since handcrafting\ntext classifiers is a tedious process, machine learning\nmethods can assist in solving this problem[15, 7, 27].\nYang & Liu[27] provides a comprehensive comparison of\nsupervised machine learning methods for text classification.\nIn this paper we will show that certain Bayesian classifiers\nare comparable with Support Vector Machines[23], one\nof the best methods reported in [27].\nIn particular, we\nwill evaluate the Bayesian online perceptron[17, 20] and the\nBayesian online Gaussian process[3].\nFor text classification and filtering, where the initial training\nset is large, online approaches are useful because they\nallow continuous learning without storing all the previously\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nSIGIR'02, August 11-15, 2002, Tampere, Finland.\nCopyright 2002 ACM 1-58113-561-0/02/0008 ...\n$\n5.00.\nseen data. This continuous learning allows the utilization\nof information obtained from subsequent data after the initial\ntraining. Bayes' rule allows online learning to be performed\nin a principled way[16, 20, 17]. 
We will evaluate the\nBayesian online perceptron, together with information gain\nconsiderations, on the batch-adaptive filtering task[18].\nCLASSIFICATION AND FILTERING\nFor the text classification taskdefined by Lewis[9], we\nhave a set of predefined categories and a set of documents.\nFor each category, the document set is partitioned into two\nmutually exclusive sets of relevant and irrelevant documents.\nThe goal of a text classification system is to determine whether\na given document belongs to any of the predefined categories\n. Since the document can belong to zero, one, or more\ncategories, the system can be a collection of binary classifiers\n, in which one classifier classifies for one category.\nIn Text REtrieval Conference (TREC), the above taskis\nknown as batch filtering. We will consider a variant of batch\nfiltering called the batch-adaptive filtering[18]. In this task,\nduring testing, if a document is retrieved by the classifier,\nthe relevance judgement is fed backto the classifier. This\nfeedbackcan be used to improve the classifier.\n2.1\nCorpora and Data\nFor text classification, we use the ModApte version of\nthe Reuters-21578 corpus\n1\n, where unlabelled documents are\nremoved. This version has 9,603 training documents and\n3,299 test documents. Following [7, 27], only categories that\nhave at least one document in the training and test set are\nretained. This reduces the number of categories to 90.\nFor batch-adaptive filtering, we attempt the taskof TREC-9\n[18], where the OHSUMED collection[6] is used. We will\nevaluate on the OHSU topic-set, which consists of 63 topics.\nThe training and test material consist of 54,710 and 293,856\ndocuments respectively. In addition, there is a topic statement\nfor each topic. For our purpose, this is treated as an\nadditional training document for that topic. We will only\nuse the title, abstract, author, and source sections of the\ndocuments for training and testing.\n2.2\nRepresentation\nThere are various ways to transform a document into a\nrepresentation convenient for classification. We will use the\n1\nAvailable via http://www.daviddlewis.com/resources/\ntestcollections/reuters21578.\n97\nbag-of-words approach, where we only retain frequencies\nof words after tokenisation, stemming, and stop-words removal\n. These frequencies can be normalized using various\nschemes[19, 6]; we use the ltc normalization:\nl\ni,d\n=\n1 + log\n2\nT F\ni,d\nt\ni\n=\nlog\n2\nN\nn\ni\nltc\ni,d\n=\nl\ni,d\nt\ni\n\n\nj\n{terms in d}\n(l\nj,d\nt\nj\n)\n2\n,\nwhere the subscripts i and d denote the ith term and the\ndth document respectively, T F\ni,d\nis the frequency of the ith\nterm in the dth document, n\ni\nis the document-frequency of\nthe ith term, and N is the total number of documents.\n2.3\nFeature Selection Metric\nGiven a set of candidate terms, we select features from\nthe set using the likelihood ratio for binomial distribution\nadvocated by Dunning[5]:\n=\nR\nt\n+R\n\nt\nN\nR\nt\n+R\n\nt\nN\nt\n+N\n\nt\nN\nN\nt\n+N\n\nt\nR\nt\nR\nt\n+N\nt\nR\nt\nN\nt\nR\nt\n+N\nt\nN\nt\nR\n\nt\nR\n\nt\n+N\n\nt\nR\n\nt\nN\n\nt\nR\n\nt\n+N\n\nt\nN\n\nt\n,\nwhere R\nt\n(N\nt\n) is the number of relevant (non-relevant) training\ndocuments which contain the term, R\n\nt\n(N\n\nt\n) is the number\nof relevant (non-relevant) training documents which do\nnot, and N is the total number of training documents.\nAsymptotically,\n-2 ln is\n2\ndistributed with 1 degree of\nfreedom. We choose terms with\n-2 ln more than 12.13,\ni.e. at 0.05% significance level. 
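To make the term weighting and the feature selection criterion concrete, the following small Python sketch implements the ltc weights and the -2 ln λ statistic as we read the formulas above. It is a non-authoritative illustration, not the authors' code; the helper names and the toy counts at the bottom are ours.

import math

def ltc_weights(tf, df, N):
    """ltc weights for one document; tf: term frequencies in the document,
    df: document frequencies of the terms, N: total number of documents."""
    l = [1 + math.log2(f) if f > 0 else 0.0 for f in tf]
    t = [math.log2(N / n_i) for n_i in df]
    lt = [li * ti for li, ti in zip(l, t)]
    norm = math.sqrt(sum(x * x for x in lt)) or 1.0
    return [x / norm for x in lt]

def _binom_ll(k, n):
    """Binomial log-likelihood evaluated at the maximum-likelihood estimate p = k/n."""
    if k == 0 or k == n:
        return 0.0
    p = k / n
    return k * math.log(p) + (n - k) * math.log(1 - p)

def neg_2_ln_lambda(Rt, Nt, Rbar, Nbar):
    """Likelihood ratio for one term; Rt/Nt: relevant/non-relevant training documents
    containing the term, Rbar/Nbar: those that do not contain it."""
    N = Rt + Nt + Rbar + Nbar
    alternative = _binom_ll(Rt, Rt + Nt) + _binom_ll(Rbar, Rbar + Nbar)
    null = _binom_ll(Rt + Rbar, N)
    return 2.0 * (alternative - null)

if __name__ == "__main__":
    # A term is kept as a feature if -2 ln(lambda) exceeds 12.13 (0.05% significance level).
    print(neg_2_ln_lambda(Rt=30, Nt=100, Rbar=20, Nbar=2000) > 12.13)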
More details on the feature\nselection procedures will be given in section 4.\n2.4\nPerformance Measures\nTo evaluate a text classification system, we use the F\n1\nmeasure introduced by van Rijsbergen[22]. This measure\ncombines recall and precision in the following way:\nRecall\n=\nnumber of correct positive predictions\nnumber of positive examples\nPrecision\n=\nnumber of correct positive predictions\nnumber of positive predictions\nF\n1\n=\n2\nRecall Precision\nRecall + Precision\n.\nFor ease of comparison, we summarize the F\n1\nscores over\nthe different categories using the micro- and macro-averages\nof F\n1\nscores[11, 27]:\nMicro-avg F\n1\n=\nF\n1\nover categories and documents\nMacro-avg F\n1\n=\naverage of within-category F\n1\nvalues.\nThe micro- and macro-average F\n1\nemphasize the performance\nof the system on common and rare categories respectively\n. Using these averages, we can observe the effect\nof different kinds of data on a text classification system.\nIn addition, for comparing two text classification systems,\nwe use the micro sign-test (s-test) and the macro sign-test\n(S-test), which are two significance tests first used for comparing\ntext classification systems in [27]. The s-test compares\nall the binary decisions made by the systems, while\nthe S-test compares the within-category F\n1\nvalues. Similar\nto the F\n1\naverages, the s-test and S-test compare the\nperformance of two systems on common and rare categories\nrespectively.\nTo evaluate a batch-adaptive filtering system, we use the\nT9P measure of TREC-9[18]:\nT9P =\nnumber of correct positive predictions\nMax(50, number of positive predictions) ,\nwhich is precision, with a penalty for not retrieving 50 documents\nBAYESIAN ONLINE LEARNING\nMost of this section is based on workby Opper[17], Solla\n& Winther[20], and Csat\no & Opper[3].\nSuppose that each document is described by a vector x,\nand that the relevance indicator of x for a category is given\nby label y {-1, 1}, where -1 and 1 indicates irrelevant\nand relevant respectively. Given m instances of past data\nD\nm\n=\n{(y\nt\n, x\nt\n), t = 1...m}, the predictive probability of the\nrelevance of a document described by x is\np(y|x, D\nm\n) =\nda p(y|x, a)p(a|D\nm\n),\nwhere we have introduced the classifier a to assist us in the\nprediction. In the Bayesian approach, a is a random variable\nwith probability density p(a|D\nm\n), and we integrate over all\nthe possible values of a to obtain the prediction.\nOur aim is to obtain a reasonable description of a. In\nthe Bayesian online learning framework[16, 20, 17], we begin\nwith a prior p(a|D\n0\n), and perform incremental Bayes'\nupdates to obtain the posterior as data arrives:\np(a|D\nt\n+1\n)\n=\np(y\nt\n+1\n|x\nt\n+1\n, a)p(a|D\nt\n)\n\nda p(y\nt\n+1\n|x\nt\n+1\n, a)p(a|D\nt\n) .\nTo make the learning online, the explicit dependence of\nthe posterior p(a|D\nt\n+1\n) on the past data is removed by approximating\nit with a distribution p(a|A\nt\n+1\n), where A\nt\n+1\ncharacterizes the distribution of a at time t + 1. 
For example\n, if p(a|A\nt\n+1\n) is a Gaussian, then A\nt\n+1\nrefers to its mean\nand covariance.\nHence, starting from the prior p\n0\n(a) = p(a|A\n0\n), learning\nfrom a new example (y\nt\n+1\n, x\nt\n+1\n) comprises two steps:\nUpdate the posterior using Bayes rule\np(a|A\nt\n, (y\nt\n+1\n, x\nt\n+1\n))\np(y\nt\n+1\n|x\nt\n+1\n, a) p(a|A\nt\n)\nApproximate the updated posterior by parameterisation\np(a|A\nt\n, (y\nt\n+1\n, x\nt\n+1\n))\np(a|A\nt\n+1\n),\nwhere the approximation step is done by minimizing the\nKullback-Leibler distance between the the approximating and\napproximated distributions.\nThe amount of information gained about a after learning\nfrom a new example can be expressed as the Kullback-Leibler\ndistance between the posterior and prior distributions\n[25]:\nIG(y\nt\n+1\n, x\nt\n+1\n|D\nt\n)\n=\nda p(a|D\nt\n+1\n) log\n2\np(a|D\nt\n+1\n)\np(a|D\nt\n)\n\nda p(a|A\nt\n+1\n) log\n2\np(a|A\nt\n+1\n)\np(a|A\nt\n) ,\nwhere instances of the data\nD are replaced by the summaries\nA in the approximation.\n98\nTo simplify notation henceforth, we use p\nt\n(a) and . . .\nt\nto\ndenote p(a|A\nt\n) and averages taken over p(a|A\nt\n) respectively.\nFor example, the predictive probability can be rewritten as\np(y|x, D\nt\n)\np(y|x, A\nt\n) =\nda p(y|x, a)p\nt\n(a) = p(y|x, a)\nt\n.\nIn the following sections, the scalar field h = a x will also\nbe used to simplify notation and calculation.\n3.1\nBayesian Online Perceptron\nConsider the case where a describes a perceptron. We then\ndefine the likelihood as a probit model\np(y|x, a) =\nya x\n\n0\n,\nwhere\n2\n0\nis a fixed noise variance, and is the cumulative\nGaussian distribution\n(u) =\n1\n2\nu\nd\ne\n2\n/\n2\n.\nIf p\n0\n(a) is the spherical unit Gaussian, and p\nt\n(a) is the\nGaussian approximation, Opper[16, 17] and Solla & Winther[20]\nobtain the following updates by equating the means and covariances\nof p(a|A\nt\n+1\n) and p(a|A\nt\n, (y\nt\n+1\n, x\nt\n+1\n)):\na\nt\n+1\n=\na\nt\n+ s\nt\n+1\n\nh\nt\nln p(y\nt\n+1\n|h)\nt\nC\nt\n+1\n=\nC\nt\n+ s\nt\n+1\ns\nT\nt\n+1\n\n2\nh\n2\nt\nln p(y\nt\n+1\n|h)\nt\n,\nwhere\ns\nt\n+1\n=\nC\nt\nx\nt\n+1\n,\np(y\nt\n+1\n|h)\nt\n=\n\ny\nt\n+1\nh\nt\n\nt\n+1\n,\n\n2\nt\n+1\n=\n\n2\n0\n+ x\nT\nt\n+1\nC\nt\nx\nt\n+1\nand\nh\nt\n=\na\nT\nt\nx\nt\n+1\n.\n3.1.1\nAlgorithm\nTraining the Bayesian online perceptron on m data involves\nsuccessive calculation of the means a\nt\nand covariances\nC\nt\nof the posteriors, for t {1, ..., m}:\n1. Initialize a\n0\nto be 0 and C\n0\nto be 1 (identity matrix),\ni.e. a spherical unit Gaussian centred at origin.\n2. For t = 0, 1, ..., m - 1\n3.\ny\nt\n+1\nis the relevance indicator for document x\nt\n+1\n4.\nCalculate s\nt\n+1\n,\nt\n+1\n, h\nt\nand p(y\nt\n+1\n|h)\nt\n5.\nCalculate u =\ny\nt+1\nh\nt\n\nt+1\nand\n\n=\n1\n2\nexp(\n1\n2\nu\n2\n)\n6.\nCalculate\n\nh\nt\nln p(y\nt\n+1\n|h)\nt\n=\ny\nt+1\n\nt+1\n\n1\np\n(y\nt+1\n|h)\nt\n\n\n7.\nCalculate\n\n2\nh\n2\nt\nln p(y\nt\n+1\n|h)\nt\n=\n- 1\n\n2\nt\n+1\nu\n\np(y\nt\n+1\n|h)\nt\n+\n\np(y\nt\n+1\n|h)\nt\n2\n8.\nCalculate a\nt\n+1\nand C\nt\n+1\nThe prediction for datum (y, x) simply involves the calculation\nof p(y|x, a)\nm\n= p(y|h)\nm\n.\n3.2\nBayesian Online Gaussian Process\nGaussian process (GP) has been constrained to problems\nwith small data sets until recently when Csat\no & Opper[3]\nand Williams & Seeger[24] introduced efficient and effective\napproximations to the full GP formulation. 
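Before turning to the sparse Gaussian process of [3], the perceptron updates enumerated in section 3.1.1 can be collected into a short sketch. The NumPy/SciPy code below is our reading of those steps, not the authors' implementation; it keeps the full covariance C and therefore scales quadratically in the number of selected features, and a fixed bias feature is assumed to be part of each input vector.

import numpy as np
from scipy.stats import norm

def train_perceptron(X, y, sigma0=0.5):
    """Bayesian online perceptron (sketch); X: ltc feature vectors, y: labels in {-1, +1}."""
    d = X.shape[1]
    a = np.zeros(d)                          # posterior mean <a>
    C = np.eye(d)                            # posterior covariance, spherical unit prior
    for x_t, y_t in zip(X, y):
        s = C @ x_t                          # s_{t+1} = C_t x_{t+1}
        sigma2 = sigma0 ** 2 + x_t @ s       # sigma_{t+1}^2
        sigma = np.sqrt(sigma2)
        h = a @ x_t                          # <h>_t
        u = y_t * h / sigma
        p = norm.cdf(u)                      # <p(y_{t+1}|h)>_t
        phi = norm.pdf(u)                    # Phi'(u)
        g1 = (y_t / sigma) * (phi / p)                  # first derivative of ln <p>
        g2 = -(u * phi / p + (phi / p) ** 2) / sigma2   # second derivative of ln <p>
        a = a + s * g1
        C = C + np.outer(s, s) * g2
    return a, C

def predict(a, C, x, sigma0=0.5):
    """Predictive probability that the document described by x is relevant (y = +1)."""
    sigma = np.sqrt(sigma0 ** 2 + x @ C @ x)
    return norm.cdf((a @ x) / sigma)

In the evaluation below, the online classifiers are run through the training data three times, which for this sketch simply means calling the training loop repeatedly on the same data.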
This section will\noutline the approach in [3].\nIn the GP framework, a describes a function consisting of\nfunction values\n{a(x)}. Using the probit model, the likelihood\ncan be expressed as\np(y|x, a) =\nya(x)\n\n0\n,\nwhere\n0\nand are described in section 3.1.\nIn addition, p\n0\n(a) is a GP prior which specifies a Gaussian\ndistribution with zero mean function and covariance/kernel\nfunction K\n0\n(x, x ) over a function space. If p\nt\n(a) is also a\nGaussian process, then Csat\no & Opper obtain the following\nupdates by equating the means and covariances of p(a|A\nt\n+1\n)\nand p(a|A\nt\n, (y\nt\n+1\n, x\nt\n+1\n)):\na\nt\n+1\n=\na\nt\n+ s\nt\n+1\n\nh\nt\nln p(y\nt\n+1\n|h)\nt\nC\nt\n+1\n=\nC\nt\n+ s\nt\n+1\ns\nT\nt\n+1\n\n2\nh\n2\nt\nln p(y\nt\n+1\n|h)\nt\n,\nwhere\ns\nt\n+1\n=\nC\nt\nk\nt\n+1\n+ e\nt\n+1\n,\np(y\nt\n+1\n|h)\nt\n=\n\ny\nt\n+1\nh\nt\n\nt\n+1\n,\n\n2\nt\n+1\n=\n\n2\n0\n+ k\n\nt\n+1\n+ k\nT\nt\n+1\nC\nt\nk\nt\n+1\nand\nh\nt\n=\na(x\nt\n+1\n)\nt\n=\na\nT\nt\nk\nt\n+1\nNotice the similarities to the updates in section 3.1. The\nmain difference is the `kernel trick' introduced into the equations\nthrough\nk\n\nt\n+1\n=\nK\n0\n(x\nt\n+1\n, x\nt\n+1\n)\nand\nk\nt\n+1\n=\n(K\n0\n(x\n1\n, x\nt\n+1\n), . . . , K\n0\n(x\nt\n, x\nt\n+1\n))\nT\nNew inputs x\nt\n+1\nare added sequentially to the system via\nthe (t + 1)th unit vector e\nt\n+1\n. This results in a quadratic\nincrease in matrix size, and is a drawbackfor large data\nsets, such as those for text classification. Csat\no & Opper\novercome this by introducing sparseness into the GP. The\nidea is to replace e\nt\n+1\nby the projection\n^\ne\nt\n+1\n= K\n-1\nt\nk\nt\n+1\n,\nwhere\nK\nt\n=\n{K\n0\n(x\ni\n, x\nj\n), i, j = 1 . . . t}.\nThis approximation introduces an error\nt\n+1\n= (k\n\nt\n+1\n- k\nT\nt\n+1\nK\n-1\nt\nk\nt\n+1\n)\n\nh\nt\nln p(y\nt\n+1\n|h)\nt\n,\nwhich is used to decide when to employ the approximation.\nHence, at any time the algorithm holds a set of basis vectors\n. It is usually desirable to limit the size of this set. To\naccommodate this, Csat\no & Opper describe a procedure for\nremoving a basis vector from the set by reversing the process\nof adding new inputs.\nFor lackof space, the algorithm for the Bayesian Online\nGaussian Process will not be given here. The reader is re-ferred\nto [3] for more information.\n99\nEVALUATION\nIn this evaluation, we will compare Bayesian online perceptron\n, Bayesian online Gaussian process, and Support Vector\nMachines (SVM)[23]. SVM is one of the best performing\nlearning algorithms on the Reuters-21578 corpus[7, 27].\nThe Bayesian methods are as described in section 3, while\nfor SVM we will use the SV M\nlight\npackage by Joachims[8].\nSince SVM is a batch method, to have a fair comparison,\nthe online methods are iterated through the training data 3\ntimes before testing.\n2\n4.1.1\nFeature Selection\nFor the Reuters-21578 corpus, we select as features for\neach category the set of all words for which\n-2 ln > 12.13.\nWe further prune these by using only the top 300 features.\nThis reduces the computation time required for the calculation\nof the covariances of the Bayesian classifiers.\nSince SVM is known to perform well for many features,\nfor the SVM classifiers we also use the set of words which\noccur in at least 3 training documents[7]. This gives us 8,362\nwords. Note that these words are non-category specific.\n4.1.2\nThresholding\nThe probabilistic outputs from the Bayesian classifiers can\nbe used in various ways. 
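Stepping back to the sparse GP for a moment: the projection and error term that decide whether a new input gets its own basis vector can be sketched as follows (a minimal sketch assuming numpy; names are illustrative).

import numpy as np

def basis_vector_test(K_basis, k_new, k_star, d1):
    """Quantities used by Csato & Opper's sparseness test (a sketch).

    K_basis : kernel matrix K_t over the current basis vectors
    k_new   : kernel values k_{t+1} between the basis vectors and x_{t+1}
    k_star  : k*_{t+1} = K_0(x_{t+1}, x_{t+1})
    d1      : d/dh ln <p(y_{t+1}|h)>_t for the new example
    """
    e_hat = np.linalg.solve(K_basis, k_new)     # projection K_t^{-1} k_{t+1}
    error = (k_star - k_new @ e_hat) * d1       # epsilon_{t+1}
    return e_hat, error

When the error is small enough, the projection replaces the unit vector e_{t+1} and the basis-vector set does not grow.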
The most direct way is to use the\nBayes decision rule, p(y = 1|x, D\nm\n) > 0.5, to determine\nthe relevance of the document described by x.\n3\nHowever,\nas discussed in [10, 26], this is not optimal for the chosen\nevaluation measure.\nTherefore, in addition to 0.5 thresholding, we also empir-ically\noptimise the threshold for each category for the F\n1\nmeasure on the training documents. This scheme, which we\nshall call MaxF1, has also been employed in [27] for thresholding\nkNN and LLSF classifiers. The difference from our\napproach is that the threshold in [27] is calculated over a\nvalidation set. We do not use a validation set because we\nfeel that, for very rare categories, it is hard to obtain a reasonable\nvalidation set from the training documents.\nFor the Bayesian classifiers, we also perform an analyti-cal\nthreshold optimisation suggested by Lewis[10]. In this\nscheme, which we shall call ExpectedF1, the threshold for\neach category is selected to optimise the expected F\n1\n:\nE [F\n1\n]\n\n\n\ni\nD\n(1\n- p\ni\n)\nif\n|D\n+\n| = 0\n2\n\niD+\np\ni\n|\nD\n+\n|\n+\n\niD\np\ni\notherwise,\nwhere is the threshold, p\ni\nis the probability assigned to\ndocument i by the classifier, D is the set of all test documents\n, and\nD\n+\nis the set of test documents with probabilities\nhigher than the threshold .\nNote that ExpectedF1 can only be applied after the probabilities\nfor all the test documents are assigned. Hence the\nclassification can only be done in batch. This is unlike the\nfirst two schemes, where classification can be done online.\n4.1.3\nResults and Discussion\n2\nSee section A.2 for discussion on the number of passes.\n3\nFor SVM, to minimise structural risks, we would classify\nthe document as relevant if w\nx + b > 0, where w is the\nhyperplane, and b is the bias.\n4\nSee section A.3 for discussion on the jitter terms\nij\n.\nTable 1: Description of Methods\nDescription\n4\nSVM-1\nK\n0\n= x\ni\nx\nj\n+ 1\nSVM-2\nK\n0\n= (x\ni\nx\nj\n+ 1)\n2\nSVM-R1\nK\n0\n= exp(\n1\n2\n|x\ni\n- x\nj\n|\n2\n)\nPerceptron\n\n0\n= 0.5, one fixed feature (for bias)\nGP-1\n\n0\n= 0.5, K\n0\n= x\ni\nx\nj\n+ 1 + 10\n-4\n\nij\nGP-2\n\n0\n= 0.5, K\n0\n= (x\ni\nx\nj\n+ 1)\n2\n+ 10\n-4\n\nij\nGP-R1\n\n0\n= 0.5, K\n0\n= exp(\n1\n2\n|x\ni\n- x\nj\n|\n2\n) + 10\n-4\n\nij\nTable 2: Micro-/Macro-average F\n1\n0.5\nMaxF1\nExpectedF1\nSVM\na\n-1\n86.15 / 42.63\n86.35 / 56.92\nSVM\na\n-2\n85.44 / 40.13\n86.19 / 56.42\nSVM\na\n-R1\n84.99 / 37.61\n86.63 / 53.14\nSVM\nb\n-1\n85.60 / 52.03\n85.05 / 52.43\nSVM\nb\n-2\n85.60 / 50.53\n84.50 / 50.49\nSVM\nb\n-R1\n85.75 / 50.52\n84.65 / 51.27\nPerceptron\n85.12 / 45.23\n86.69 / 52.16\n86.44 / 53.08\nGP-1\n85.08 / 45.20\n86.73 / 52.12\n86.54 / 53.12\nGP-2\n85.58 / 47.90\n86.60 / 52.19\n86.77 / 55.04\nGP-R1\n85.18 / 44.88\n86.76 / 52.61\n86.93 / 53.35\nTable 1 lists the parameters for the algorithms used in our\nevaluation, while Table 2 and 3 tabulate the results. There\nare two sets of results for SVM, and they are labeled SVM\na\nand SVM\nb\n. The latter uses the same set of features as the\nBayesian classifiers (i.e. using the\n-2 ln measure), while\nthe former uses the set of 8,362 words as features.\nTable 2 summarizes the results using F\n1\naverages. Table\n3 compares the classifiers using s-test and S-test. Here, the\nMaxF1 thresholds are used for the classification decisions.\nEach row in these tables compares the method listed in the\nfirst column with the other methods. 
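Since the MaxF1 thresholds drive the classification decisions behind Tables 2 and 3, a minimal sketch of that per-category threshold search over the training documents is given below (names are illustrative).

def f1_score(tp, fp, fn):
    """F1 from counts; taken as 0 when there are no correct positive predictions."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * recall * precision / (recall + precision)

def maxf1_threshold(probs, labels):
    """Pick the threshold that maximises F1 on the training documents.

    probs  : classifier outputs p(y = 1 | x, D_m) for the training documents
    labels : true relevance indicators (1 relevant, 0 irrelevant)
    """
    best_threshold, best_f1 = 0.5, -1.0
    for t in sorted(set(probs)):
        tp = sum(1 for p, l in zip(probs, labels) if p >= t and l == 1)
        fp = sum(1 for p, l in zip(probs, labels) if p >= t and l == 0)
        fn = sum(1 for p, l in zip(probs, labels) if p < t and l == 1)
        score = f1_score(tp, fp, fn)
        if score > best_f1:
            best_threshold, best_f1 = t, score
    return best_threshold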
The significance levels\nfrom [27] are used.\nSeveral observations can be made:\nGenerally, MaxF1 thresholding increases the performance\nof all the systems, especially for rare categories.\nFor the Bayesian classifiers, ExpectedF1 thresholding\nimproves the performance of the systems on rare categories\n.\nPerceptron implicitly implements the kernel used by\nGP-1, hence their similar results.\nWith MaxF1 thresholding, feature selection impedes\nthe performance of SVM.\nIn Table 2, SVM with 8,362 features have slightly lower\nmicro-average F\n1\nto the Bayesian classifiers. However,\nthe s-tests in Table 3 show that Bayesian classifiers\noutperform SVM for significantly many common categories\n. Hence, in addition to computing average F\n1\nmeasures, it is useful to perform sign tests.\nAs shown in Table 3, for limited features, Bayesian\nclassifiers outperform SVM for both common and rare\ncategories.\nBased on the sign tests, the Bayesian classifiers outperform\nSVM (using 8,362 words) for common categories,\nand vice versa for rare categories.\n100\nTable 3: s-test/S-test using MaxF1 thresholding\nSVM\na\n-1\nSVM\na\n-2\nSVM\na\n-R1\nSVM\nb\n-1\nSVM\nb\n-2\nSVM\nb\n-R1\nPptron\nGP-1\nGP-2\nGP-R1\nSVM\na\n-1\n/\n< /\n/\n/\n/\n/\n/\n/\n/\nSVM\na\n-2\n/\n/\n\n> /\n/\n/\n/\n/\n/\n\n/ >\nSVM\na\n-R1\n> /\n/\n\n/\n\n/\n/\n< / >\n< / >\n/\n< /\nSVM\nb\n-1\n/\n< /\n/\n\n/\n\n> / >\n/ <\n/\n\n/ <\n/ <\nSVM\nb\n-2\n/\n/\n/\n/\n\n/\n/ <\n/ <\n/\n/\nSVM\nb\n-R1\n/\n/\n/\n< / <\n/\n/ <\n/\n/\n/\nPerceptron\n/\n/\n> / <\n/ >\n/ >\n/ >\n/\n/\n/\nGP-1\n/\n/\n> / <\n/\n\n/ >\n/\n/\n/\n/\nGP-2\n/\n/\n\n/\n/ >\n/\n/\n/\n/\n/\nGP-R1\n/\n/ <\n> /\n/ >\n/\n/\n/\n/\n/\n\"\n\" or \"\n\" means P-value\n0.01;\n\">\" or \"<\" means 0.01 < P-value 0.05;\n\"\n\" means P-value > 0.05.\nThe last observation suggests that one can use Bayesian\nclassifiers for common categories, and SVM for rare ones.\n4.2\nFiltering on OHSUMED\nIn this section, only the Bayesian online perceptron will\nbe considered. In order to avoid numerical integration of\nthe information gain measure, instead of the probit model\nof section 3.1, here we use a simpler likelihood model in\nwhich the outputs are flipped with fixed probability :\np(y|x, a) = + (1 - 2) (ya x) ,\nwhere\n(x) =\n\n1\nx > 0\n0\notherwise.\nThe update equations will also change accordingly, e.g.\np(y\nt\n+1\n|h)\nt\n=\n+ (1 - 2)\ny\nt\n+1\nh\nt\n\nt\n+1\n,\n\n2\nt\n+1\n=\nx\nT\nt\n+1\nC\nt\nx\nt\n+1\nand\nh\nt\n=\na\nT\nt\nx\nt\n+1\n.\nUsing this likelihood measure, we can express the information\ngained from datum (y\nt\n+1\n, x\nt\n+1\n) as\nIG(y\nt\n+1\n, x\nt\n+1\n|D\nt\n)\n\nlog\n2\n+\ny\nt\n+1\nh\nt\n+1\n\nt\n+1\nlog\n2\n1\n\n-log\n2\np(y\nt\n+1\n|h)\nt\n,\nwhere\n\n2\nt\n+1\n=\nx\nT\nt\n+1\nC\nt\n+1\nx\nt\n+1\nand\nh\nt\n+1\n=\na\nT\nt\n+1\nx\nt\n+1\n.\nWe use = 0.1 in this evaluation. The following sections\nwill describe the algorithm in detail. To simplify presentation\n, we will divide the batch-adaptive filtering taskinto\nbatch and adaptive phases.\n4.2.1\nFeature Selection and Adaptation\nDuring the batch phase, words for which\n-2 ln > 12.13\nare selected as features.\nDuring the adaptive phase, when we obtain a feedback, we\nupdate the features by adding any new words with\n-2 ln >\n12.13. When a feature is added, the distribution of the perceptron\na is extended by one dimension:\na\n\n\na\n0\n\nC\n\n\nC\n0\n0\n1\n\n.\n4.2.2\nTraining the classifier\nDuring the batch phase, the classifier is iterated through\nthe training documents 3 times. 
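As an aside on the feature adaptation of section 4.2.1, extending the perceptron posterior when a new feature is added is a small operation; a minimal sketch, assuming numpy:

import numpy as np

def add_feature(a, C):
    """Extend the posterior by one new feature dimension: the new weight
    gets mean 0 and unit variance, uncorrelated with the existing ones."""
    a_ext = np.append(a, 0.0)
    C_ext = np.zeros((C.shape[0] + 1, C.shape[1] + 1))
    C_ext[:-1, :-1] = C
    C_ext[-1, -1] = 1.0
    return a_ext, C_ext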
In addition, the relevant\ndocuments are collected for use during the adaptive phase.\nDuring the adaptive phase, retrieved relevant documents\nare added to this collection. When a document is retrieved,\nthe classifier is trained on that document and its given relevance\njudgement.\nThe classifier will be trained on irrelevant documents most\nof the time. To prevent it from \"forgetting\" relevant documents\ndue to its limited capacity, whenever we train on an\nirrelevant document, we would also train on a past relevant\ndocument. This past relevant document is chosen succes-sively\nfrom the collection of relevant documents.\nThis is needed also because new features might have been\nadded since a relevant document was last trained on. Hence\nthe classifier would be able to gather new information from\nthe same document again due to the additional features.\nNote that the past relevant document does not need to be\nchosen in successive order. Instead, it can be chosen using\na probability distribution over the collection. This will be\ndesirable when handling topic-drifts.\nWe will evaluate the effectiveness of this strategy of retraining\non past retrieved relevant documents, and denote\nits use by +rel. Though its use means that the algorithm\nis no longer online, asymptotic efficiency is unaffected, since\nonly one past document is used for training at any instance.\n4.2.3\nInformation Gain\nDuring testing, there are two reasons why we retrieve\na document.\nThe first is that it is relevant, i.e.\np(y =\n1\n|x, D\nt\n) > 0.5, where x represents the document. The second\nis that, although the document is deemed irrelevant\nby the classifier, the classifier would gain useful information\nfrom the document. Using the measure IG(y, x|D\nt\n), we calculate\nthe expected information gain\nIG(x|D\nt\n) =\n\n{-1,1}\np(y = |x, D\nt\n)\nIG(y = , x|D\nt\n).\n101\n0\n10\n20\n30\n40\n50\n60\n70\n80\n90\n100\n0\n0.2\n0.4\n0.6\n0.8\n1\nN\nret\n\nTarget number of\ndocuments = 50\nFigure 1: versus N\nret\ntuned for T9P\nA document is then deemed useful if its expected information\ngain is at least . Optimizing for the T9P measure\n(i.e. targeting 50 documents), we choose to be\n= 0.999 1 + exp - N\nret\n- 50.0\n10\n-1\n+ 0.001,\nwhere N\nret\nis the total number of documents that the system\nhas retrieved. Figure 1 plots against N\nret\n. Note that this\nis a kind of active learning, where the willingness to tradeoff\nprecision for learning decreases with N\nret\n. The use of this\ninformation gain criteria will be denoted by +ig.\nWe will test the effectiveness of the information gain strategy\n, against an alternative one. The alternative, denoted by\n+rnd, will randomly select documents to retrieve based on\nthe probability\nU =\n\n0\nif N\nret\n>= 50\n50-N\nret\n293856\notherwise,\nwhere 293,856 is the number of test documents.\n4.2.4\nResults and Discussion\nTable 4 lists the results of seven systems. The first two are\nof Microsoft Research Cambridge and Fudan University respectively\n. These are the only runs in TREC-9 for the task.\nThe third is of the system as described in full, i.e. Bayesian\nonline perceptron, with retraining on past retrieved relevant\ndocuments, and with the use of information gain. The rest\nare of the Bayesian online perceptron with different combinations\nof strategies.\nBesides the T9P measure, for the sake of completeness, Table\n4 also lists the other measures used in TREC-9. 
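For reference, the retrieval rule of section 4.2.3 used in these runs can be sketched as follows; the expected information gain IG(x|D_t) is assumed to have been computed as described above.

import math

def theta(n_ret):
    """Information-gain threshold, tuned for T9P (a target of 50 documents);
    the willingness to query for information falls as n_ret grows."""
    return 0.999 / (1.0 + math.exp(-(n_ret - 50.0) / 10.0)) + 0.001

def retrieve(p_relevant, expected_ig, n_ret):
    """Retrieve if the classifier deems the document relevant, or if querying
    it is expected to be sufficiently informative."""
    return p_relevant > 0.5 or expected_ig >= theta(n_ret)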
Taken\ntogether, the measures show that Bayesian online perceptron\n, together with the consideration for information gain,\nis a very competitive method.\nFor the systems with +rel, the collection of past known\nrelevant documents is kept. Although Microsoft uses this\nsame collection for its query reformulation, another collection\nof all previously seen documents is used for threshold\nadaptation. Fudan maintains a collection of past retrieved\ndocuments and uses this collection for query adaptation.\n5\n[18] reports results from run ok9bfr2po, while we report\nresults from the slightly better run ok9bf2po.\n0\n2\n4\n6\n8\n10\n12\n14\n16\n18\n20\n40\n50\n60\n70\n80\n90\n100\n110\n120\n130\nAverage number of relevant documents retrieved\nAverage number of features\nPptron+rel+ig\nPptron+ig\nPptron+rnd\nPptron\nFigure 2: Variation of the number of features as\nrelevant documents are retrieved.\nThe plots for\nPptron+rel+ig and Pptron+ig are very close. So are\nthe plots for Pptron+rnd and Pptron.\nIn a typical operational system, retrieved relevant documents\nare usually retained, while irrelevant documents are\nusually discarded. Therefore +rel is a practical strategy to\nadopt.\nFigure 2 plots the average number of features during the\nadaptive phase.\nWe can see that features are constantly\nadded as relevant documents are seen. When the classifier\nis retrained on past documents, the new features enable the\nclassifier to gain new information from these documents. If\nwe compare the results for Pptron+rel and Pptron in Table\n4, we find that not training on past documents causes\nthe number of relevant documents retrieved to drop by 5%.\nSimilarly, for Pptron+rel+ig and Pptron+ig, the drop is\n8%.\nTable 5 breaks down the retrieved documents into those\nthat the classifier deems relevant and those that the classifier\nis actually querying for information, for Pptron+ig\nand Pptron+rnd. The table shows that none of the documents\nrandomly queried are relevant documents. This is\nnot surprising, since only an average of 0.017% of the test\ndocuments are relevant. In contrast, the information gain\nstrategy is able to retrieve 313 relevant documents, which is\n26.1% of the documents queried. This is a significant result.\nConsider Pptron+ig. Table 4 shows that for Pptron, when\nthe information gain strategy is removed, only 731 relevant\ndocuments will be retrieved. Hence, although most of the\ndocuments queried are irrelevant, information gained from\nthese queries helps recall by the classifier (i.e. 815 documents\nversus 731 documents), which is important for reaching\nthe target of 50 documents.\nMacKay[13] has noted the phenomenon of querying for\nirrelevant documents which are at the edges of the input\nspace, and suggested maximizing information in a defined\nregion of interest instead.\nFinding this region for batch-adaptive\nfiltering remains a subject for further research.\nComparing the four plots in Figure 2, we find that, on\naverage, the information gain strategy causes about 3% more\nfeatures to be discovered for the same number of relevant\ndocuments retrieved. 
A consequence of this is better recall.\n102\nTable 4: Results for Batch-adaptive filtering optimized for T9P measure.\nMicrosoft\n5\nFudan\nPptron+rel+ig\nPptron+ig\nPptron+rnd\nPptron+rel\nPptron\nTotal retrieved\n3562\n3251\n2716\n2391\n2533\n1157\n1057\nRelevant retrieved\n1095\n1061\n1227\n1128\n732\n772\n731\nMacro-average recall\n39.5\n37.9\n36.2\n33.3\n20.0\n20.8\n20.0\nMacro-average precision\n30.5\n32.2\n35.8\n35.8\n21.6\n61.9\n62.3\nMean T9P\n30.5\n31.7\n31.3\n29.8\n19.2\n21.5\n20.8\nMean Utility\n-4.397\n-1.079\n15.318\n15.762\n-5.349\n18.397\n17.730\nMean T9U\n-4.397\n-1.079\n15.318\n15.762\n-5.349\n18.397\n17.730\nMean scaled utility\n-0.596\n-0.461\n-0.025\n0.016\n-0.397\n0.141\n0.138\nZero returns\n0\n0\n0\n0\n0\n8\n0\nTable 5: Breakdown of documents retrieved for Pptron+ig and Pptron+rnd. The numbers for the latter are in\nbrackets.\nRelevant\nNot Relevant\nTotal\n# docs retrieved by perceptron classifier proper\n815\n(732)\n378\n(345)\n1193\n(1077)\n# docs retrieved by information gain (or random strategy)\n313\n(0)\n885\n(1456)\n1198\n(1456)\nTotal\n1128\n(732)\n1263\n(1801)\n2391\n(2533)\nCONCLUSIONS AND FURTHER WORK\nWe have implemented and tested Bayesian online perceptron\nand Gaussian processes on the text classification problem\n, and have shown that their performance is comparable\nto that of SVM, one of the best learning algorithms on\ntext classification in the published literature. We have also\ndemonstrated the effectiveness of online learning with information\ngain on the TREC-9 batch-adaptive filtering task.\nOur results on text classification suggest that one can use\nBayesian classifiers for common categories, and maximum\nmargin classifiers for rare categories. The partitioning of the\ncategories into common and rare ones in an optimal way is\nan interesting problem.\nSVM has been employed to use relevance feedbackby\nDrucker et al [4], where the retrieval is in groups of 10 documents\n. In essence, this is a form of adaptive routing. It\nwould be instructive to see how Bayesian classifiers perform\nhere, without storing too many previously seen documents.\nIt would also be interesting to compare the merits of incremental\nSVM[21, 1] with the Bayesian online classifiers.\nAcknowledgments\nWe would like to thank Lehel Csat\no for providing details\non the implementation of the Gaussian process, Wee Meng\nSoon for assisting in the data preparation, Yiming Yang\nfor clarifying the representation used in [27], and Loo Nin\nTeow for proof-reading the manuscript. We would also like\nto thankthe reviewers for their many helpful comments in\nimproving the paper.\n\nREFERENCES\n[1] G. Cauwenberghs and T. Poggio. Incremental and\ndecremental support vector machine learning. In T. K.\nLeen, T. G. Dietterich, and V. Tresp, editors, NIPS\n2000, volume 13. The MIT Press, 2001.\n[2] D. Cox and E. Snell. Analysis of Binary Data.\nChapman & Hall, London, 2nd edition, 1989.\n[3] L. Csat\no and M. Opper. Sparse representation for\nGaussian process models. In T. K. Leen, T. G.\nDietterich, and V. Tresp, editors, NIPS 2000,\nvolume 13. The MIT Press, 2001.\n[4] H. Drucker, B. Shahrary, and D. C. Gibbon.\nRelevance feedbackusing support vector machines. In\nProceedings of the 2001 International Conference on\nMachine Learning, 2001.\n[5] T. E. Dunning. Accurate methods for the statistics of\nsurprise and coincidence. Computational Linguistics,\n19(1):6174, 1993.\n[6] W. Hersh, C. Buckley, T. Leone, and D. 
Hickam.\nOHSUMED: An interactive retrieval evaluation and\nnew large test collection for research. In Proceedings of\nthe 17th Annual International ACM SIGIR\nConference on Research and Development in\nInformation Retrieval, pages 192201, 1994.\n[7] T. Joachims. Text categorization with support vector\nmachines: Learning with many relevant features. In\nProceedings of the European Conference on Machine\nLearning (ECML), pages 137142, 1998.\n[8] T. Joachims. Making large-scale SVM learning\npractical. In B. Sch\nolkopf, C. Burges, and A. Smola,\neditors, Advances in Kernel Methods -- Support\nVector Learning, chapter 11. The MIT Press, 1999.\n[9] D. D. Lewis. Representation and Learning in\nInformation Retrieval. PhD thesis, Department of\nComputer and Information Science, University of\nMassachusetts at Amherst, 1992.\n[10] D. D. Lewis. Evaluating and optimizing automomous\ntext classification systems. In Proceedings of the 18th\nAnnual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval,\npages 246254, 1995.\n[11] D. D. Lewis, R. E. Schapire, J. P. Callan, and\nR. Papka. Training algorithms for linear text\nclassifiers. In Proceedings of the 19th Annual\nInternational ACM SIGIR Conference on Research\nand Development in Information Retrieval, pages\n298306, 1996.\n[12] D. J. Mackay. Bayesian interpolation. Neural\nComputation, 4(3):415447, 1991.\n[13] D. J. Mackay. Information-based objective functions\nfor active data selection. Neural Computation,\n4(4):590604, 1992.\n103\n[14] R. M. Neal. Monte Carlo implementation of Gaussian\nprocess models for Bayesian regression and\nclassification. Technical Report CRG-TR-97-2,\nDepartment of Computer Science, University of\nToronto, January 1997.\n[15] H. T. Ng, W. B. Goh, and K. L. Low. Feature\nselection, perceptron learning, and a usability case\nstudy for text categorization. In Proceedings of the\n20th Annual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval,\npages 6773, 1997.\n[16] M. Opper. Online versus offline learning from random\nexamples: General results. Physical Review Letters,\n77:46714674, 1996.\n[17] M. Opper. A Bayesian approach to online learning. In\nD. Saad, editor, On-Line Learning in Neural\nNetworks. Combridge University Press, 1998.\n[18] S. Robertson and D. A. Hull. The TREC-9 filtering\ntrackfinal report. In Proceedings of the 9th Text\nREtrieval Conference (TREC-9), pages 2540, 2001.\n[19] G. Salton and C. Buckley. Term-weighting approaches\nin automatic text retrieval. Information Processing\nand Management, 24(5):513523, 1988.\n[20] S. A. Solla and O. Winther. Optimal perceptron\nlearning: an online Bayesian approach. In D. Saad,\neditor, On-Line Learning in Neural Networks.\nCombridge University Press, 1998.\n[21] N. A. Syed, H. Liu, and K. K. Sung. Incremental\nlearning with support vector machines. In Proceedings\nof the Workshop on Support Vector Machines at the\nInternational Joint Conference on Artificial\nIntelligence (IJCAI-99), 1999.\n[22] C. van Rijsbergen. Information Retrieval.\nButterworths, London, 1979.\n[23] V. N. Vapnik. The Nature of Statistical Learning\nTheory. Springer, New York, 1995.\n[24] C. K. Williams and M. Seeger. Using the Nystr\nom\nmethod to speed up kernel machines. In T. K. Leen,\nT. G. Dietterich, and V. Tresp, editors, NIPS 2000,\nvolume 13. The MIT Press, 2001.\n[25] O. Winther. Bayesian Mean Field Algorithms for\nNeural Networks and Gaussian Processes. 
PhD thesis,\nUniversity of Copenhagen, CONNECT, The Niels Bohr\nInstitute, 1998.\n[26] Y. Yang. A study on thresholding strategies for text\ncategorization. In Proceedings of the 24th Annual\nInternational ACM SIGIR Conference on Research\nand Development in Information Retrieval, pages\n137145, 2001.\n[27] Y. Yang and X. Liu. A re-examination of text\ncategorization methods. In Proceedings of the 22nd\nAnnual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval,\npages 4249, 1999.\nAPPENDIX\nA.\nON THE CHOICE OF PARAMETERS\nA.1\nLikelihood model\nMacKay[12] has suggested the evidence frameworkfor model\nselection. Here, we calculate the evidence on the training\nTable 6:\nMicro-/Macro-avg F\n1\n(MaxF1 thresholds)\nand Avg log-evidence on Reuters-21578 for different\nlikelihood models, using Bayesian online perceptron.\nMicro-/Macro-avg F\n1\nAvg log-evidence\nLogit\n86.48 / 52.75\n-45.02\nProbit\n86.69 / 52.16\n-34.32\nFlip\n85.94 / 53.00\n-368.8\nTable 7:\nMicro-/Macro-avg F\n1\n(MaxF1 thresholds)\nand Avg log-evidence on Reuters-21578 for different\npasses over the training data, using Bayesian online\nperceptron.\nPasses\nMicro-/Macro-avg F\n1\nAvg log-evidence\n1\n87.08 / 52.87\n-35.56\n2\n86.92 / 52.63\n-34.36\n3\n86.69 / 52.16\n-34.32\n4\n86.62 / 52.75\n-34.54\n5\n85.22 / 46.93\n-34.69\ndata using the final posterior for a:\np(D\nm\n) =\nm\nt\n=1\np(y\nt\n|x\nt\n, a)\nm\n.\nTable 6 illustrates this for selecting the likelihood measure\nfor the text classification task, using the Bayesian online\nperceptron. In the table, the probit model follows the\nformulation in section 3.1 with\n0\n= 0.5, logit model is esti-mated\nby the probit model with\n0\n= 1.6474[2], and the flip\nnoise model is as described in section 4.2. Although their\nF\n1\naverages are similar, the evidences show that the probit\nmodel with\n0\n= 0.5 is a more likely model. The small evidence\nfor the flip noise model is because much information\nis lost through the threshold function .\nA.2\nEffects of multiple passes over data\nUsing the evidence measure defined in section A.1, Table\n7 illustrates the effects of different number of passes over\ntraining data for Bayesian online perceptron. Treating the\nnumber of passes as a parameter for the algorithm, we see\nthat having 3 passes over the data gives the highest average\nevidence, although there is no significant difference between\n2, 3, or 4 passes. Similar results hold for the Gaussian process\nfor the 3 different kernels. Hence, in section 4.1, we\nchoose to use 3 passes for all the Bayesian algorithms.\nA.3\nJitter term\nThe addition of the jitter term 10\n-4\n\nij\n(where\nij\n= 1\nif i = j, and 0 otherwise) for Gaussian process for classification\nis recommended by Neal[14]. This term improves\nthe conditioning of the matrix computations while having\na small effect on the model. From our preliminary experiments\n, without the jitter term, the matrix operations in\nBayesian online Gaussian process become ill-conditioned.\nA.4\nSizes of the basis vectors sets\nThe sizes of the sets of basis vectors for GP in section 4.1\nare limited to less than or equal to the number of features\nselected. 
This is because, as noted by Csat\no & Opper[3],\nfor a feature space of finite dimension M, no more than M\nbasis vectors are needed, due to linear dependence.\n104", "keywords": "Text Classification;perceptron;text classification;filtering;information gain;Bayesian online classifiers;Online;Machine Learning;continous learning;machine learning;Text Filtering;Gaussian process;Bayesian"} {"name": "43", "title": "Beyond PageRank: Machine Learning for Static Ranking", "abstract": "Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random).", "fulltext": "INTRODUCTION\nOver the past decade, the Web has grown exponentially in size.\nUnfortunately, this growth has not been isolated to good-quality\npages. The number of incorrect, spamming, and malicious (e.g.,\nphishing) sites has also grown rapidly. The sheer number of both\ngood and bad pages on the Web has led to an increasing reliance\non search engines for the discovery of useful information. Users\nrely on search engines not only to return pages related to their\nsearch query, but also to separate the good from the bad, and\norder results so that the best pages are suggested first.\nTo date, most work on Web page ranking has focused on\nimproving the ordering of the results returned to the user (query-dependent\nranking, or dynamic ranking). However, having a good\nquery-independent ranking (static ranking) is also crucially\nimportant for a search engine. A good static ranking algorithm\nprovides numerous benefits:\n\n\nRelevance\n: The static rank of a page provides a general\nindicator to the overall quality of the page. This is a\nuseful input to the dynamic ranking algorithm.\n\n\nEfficiency\n: Typically, the search engine's index is\nordered by static rank. By traversing the index from high-quality\nto low-quality pages, the dynamic ranker may\nabort the search when it determines that no later page\nwill have as high of a dynamic rank as those already\nfound. The more accurate the static rank, the better this\nearly-stopping ability, and hence the quicker the search\nengine may respond to queries.\n\n\nCrawl Priority\n: The Web grows and changes as quickly\nas search engines can crawl it. Search engines need a way\nto prioritize their crawl--to determine which pages to re-crawl\n, how frequently, and how often to seek out new\npages. Among other factors, the static rank of a page is\nused to determine this prioritization. A better static rank\nthus provides the engine with a higher quality, more up-to\n-date index.\nGoogle is often regarded as the first commercially successful\nsearch engine. Their ranking was originally based on the\nPageRank algorithm [5][27]. 
Due to this (and possibly due to\nGoogle's promotion of PageRank to the public), PageRank is\nwidely regarded as the best method for the static ranking of Web\npages.\nThough PageRank has historically been thought to perform quite\nwell, there has yet been little academic evidence to support this\nclaim. Even worse, there has recently been work showing that\nPageRank may not perform any better than other simple measures\non certain tasks. Upstill et al. have found that for the task of\nfinding home pages, the number of pages linking to a page and the\ntype of URL were as, or more, effective than PageRank [32]. They\nfound similar results for the task of finding high quality\ncompanies [31]. PageRank has also been used in systems for\nTREC's \"very large collection\" and \"Web track\" competitions,\nbut with much less success than had been expected [17]. Finally,\nAmento et al. [1] found that simple features, such as the number\nof pages on a site, performed as well as PageRank.\nDespite these, the general belief remains among many, both\nacademic and in the public, that PageRank is an essential factor\nfor a good static rank. Failing this, it is still assumed that using the\nlink structure is crucial, in the form of the number of inlinks or the\namount of anchor text.\nIn this paper, we show there are a number of simple url- or page-\nbased features that significantly outperform PageRank (for the\npurposes of statically ranking Web pages) despite ignoring the\nCopyright is held by the International World Wide Web Conference\nCommittee (IW3C2). Distribution of these papers is limited to\nclassroom use, and personal use by others.\nWWW 2006, May 2326, 2006, Edinburgh, Scotland.\nACM 1-59593-323-9/06/0005.\n707\nstructure of the Web. We combine these and other static features\nusing machine learning to achieve a ranking system that is\nsignificantly better than PageRank (in pairwise agreement with\nhuman labels).\nA machine learning approach for static ranking has other\nadvantages besides the quality of the ranking. Because the\nmeasure consists of many features, it is harder for malicious users\nto manipulate it (i.e., to raise their page's static rank to an\nundeserved level through questionable techniques, also known as\nWeb spamming). This is particularly true if the feature set is not\nknown. In contrast, a single measure like PageRank can be easier\nto manipulate because spammers need only concentrate on one\ngoal: how to cause more pages to point to their page. With an\nalgorithm that learns, a feature that becomes unusable due to\nspammer manipulation will simply be reduced or removed from\nthe final computation of rank. This flexibility allows a ranking\nsystem to rapidly react to new spamming techniques.\nA machine learning approach to static ranking is also able to take\nadvantage of any advances in the machine learning field. For\nexample, recent work on adversarial classification [12] suggests\nthat it may be possible to explicitly model the Web page\nspammer's (the adversary) actions, adjusting the ranking model in\nadvance of the spammer's attempts to circumvent it. Another\nexample is the elimination of outliers in constructing the model,\nwhich helps reduce the effect that unique sites may have on the\noverall quality of the static rank. 
By moving static ranking to a\nmachine learning framework, we not only gain in accuracy, but\nalso gain in the ability to react to spammer's actions, to rapidly\nadd new features to the ranking algorithm, and to leverage\nadvances in the rapidly growing field of machine learning.\nFinally, we believe there will be significant advantages to using\nthis technique for other domains, such as searching a local hard\ndrive or a corporation's intranet. These are domains where the\nlink structure is particularly weak (or non-existent), but there are\nother domain-specific features that could be just as powerful. For\nexample, the author of an intranet page and his/her position in the\norganization (e.g., CEO, manager, or developer) could provide\nsignificant clues as to the importance of that page. A machine\nlearning approach thus allows rapid development of a good static\nalgorithm in new domains.\nThis paper's contribution is a systematic study of static features,\nincluding PageRank, for the purposes of (statically) ranking Web\npages. Previous studies on PageRank typically used subsets of the\nWeb that are significantly smaller (e.g., the TREC VLC2 corpus,\nused by many, contains only 19 million pages). Also, the\nperformance of PageRank and other static features has typically\nbeen evaluated in the context of a complete system for dynamic\nranking, or for other tasks such as question answering. In contrast,\nwe explore the use of PageRank and other features for the direct\ntask of statically ranking Web pages.\nWe first briefly describe the PageRank algorithm. In Section 3 we\nintroduce RankNet, the machine learning technique used to\ncombine static features into a final ranking. Section 4 describes\nthe static features. The heart of the paper is in Section 5, which\npresents our experiments and results. We conclude with a\ndiscussion of related and future work.\nPAGERANK\nThe basic idea behind PageRank is simple: a link from a Web\npage to another can be seen as an endorsement of that page. In\ngeneral, links are made by people. As such, they are indicative of\nthe quality of the pages to which they point when creating a\npage, an author presumably chooses to link to pages deemed to be\nof good quality. We can take advantage of this linkage\ninformation to order Web pages according to their perceived\nquality.\nImagine a Web surfer who jumps from Web page to Web page,\nchoosing with uniform probability which link to follow at each\nstep. In order to reduce the effect of dead-ends or endless cycles\nthe surfer will occasionally jump to a random page with some\nsmall probability\n\n, or when on a page with no out-links. If\naveraged over a sufficient number of steps, the probability the\nsurfer is on page j at some point in time is given by the formula:\n\n\n\n+\n=\nj\ni\ni\ni\nP\nN\nj\nP\nB\nF\n)\n(\n)\n1\n(\n)\n(\n\n\n\n(1)\n\nWhere F\ni\nis the set of pages that page i links to, and B\nj\nis the set of\npages that link to page j. The PageRank score for node j is defined\nas this probability: PR(j)=P(j). Because equation (1) is recursive,\nit must be iteratively evaluated until P(j) converges (typically, the\ninitial distribution for P(j) is uniform). The intuition is, because a\nrandom surfer would end up at the page more frequently, it is\nlikely a better page. An alternative view for equation (1) is that\neach page is assigned a quality, P(j). A page \"gives\" an equal\nshare of its quality to each page it points to.\nPageRank is computationally expensive. 
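To make equation (1) concrete, the sketch below evaluates it iteratively on a toy link graph; here alpha is the random-jump probability of equation (1), and every linked page is assumed to appear as a key of the outlinks dictionary.

def pagerank(outlinks, alpha=0.15, iterations=50):
    """Iteratively evaluate equation (1) on a small link graph (a sketch).

    outlinks : dict mapping each page to the list of pages it links to
    alpha    : probability of jumping to a random page, so a link is
               followed with probability 1 - alpha
    """
    pages = list(outlinks)
    n = len(pages)
    pr = dict.fromkeys(pages, 1.0 / n)          # uniform initial distribution
    for _ in range(iterations):
        new = dict.fromkeys(pages, alpha / n)   # random-jump term
        for i in pages:
            targets = outlinks[i] or pages      # dead ends jump uniformly
            share = (1.0 - alpha) * pr[i] / len(targets)
            for j in targets:
                new[j] += share
        pr = new
    return pr

Even this toy loop touches every link on every iteration, which is what makes the computation expensive at Web scale.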
Our collection of 5\nbillion pages contains approximately 370 billion links. Computing\nPageRank requires iterating over these billions of links multiple\ntimes (until convergence). It requires large amounts of memory\n(or very smart caching schemes that slow the computation down\neven further), and if spread across multiple machines, requires\nsignificant communication between them. Though much work has\nbeen done on optimizing the PageRank computation (see e.g.,\n[25] and [6]), it remains a relatively slow, computationally\nexpensive property to compute.\nRANKNET\nMuch work in machine learning has been done on the problems of\nclassification and regression. Let X={x\ni\n} be a collection of feature\nvectors (typically, a feature is any real valued number), and\nY\n={y\ni\n} be a collection of associated classes, where y\ni\nis the class\nof the object described by feature vector x\ni\n. The classification\nproblem is to learn a function f that maps y\ni\n=f(x\ni\n), for all i. When\ny\ni\nis real-valued as well, this is called regression.\nStatic ranking can be seen as a regression problem. If we let x\ni\n\nrepresent features of page i, and y\ni\nbe a value (say, the rank) for\neach page, we could learn a regression function that mapped each\npage's features to their rank. However, this over-constrains the\nproblem we wish to solve. All we really care about is the order of\nthe pages, not the actual value assigned to them.\nRecent work on this ranking problem [7][13][18] directly\nattempts to optimize the ordering of the objects, rather than the\nvalue assigned to them. For these, let Z={<i,j>} be a collection of\npairs of items, where item i should be assigned a higher value than\nitem j. The goal of the ranking problem, then, is to learn a\nfunction f such that,\n)\n(\n)\n(\n,\n,\nj\ni\nf\nf\nj\ni\nx\nx\nZ\n>\n\n\n\n708\nNote that, as with learning a regression function, the result of this\nprocess is a function (f) that maps feature vectors to real values.\nThis function can still be applied anywhere that a regression-learned\nfunction could be applied. The only difference is the\ntechnique used to learn the function. By directly optimizing the\nordering of objects, these methods are able to learn a function that\ndoes a better job of ranking than do regression techniques.\nWe used RankNet [7], one of the aforementioned techniques for\nlearning ranking functions, to learn our static rank function.\nRankNet is a straightforward modification to the standard neural\nnetwork back-prop algorithm. As with back-prop, RankNet\nattempts to minimize the value of a cost function by adjusting\neach weight in the network according to the gradient of the cost\nfunction with respect to that weight. The difference is that, while a\ntypical neural network cost function is based on the difference\nbetween the network output and the desired output, the RankNet\ncost function is based on the difference between a pair of network\noutputs. That is, for each pair of feature vectors <i,j> in the\ntraining set, RankNet computes the network outputs o\ni\nand o\nj\n.\nSince vector i is supposed to be ranked higher than vector j, the\nlarger is o\nj\n-o\ni\n, the larger the cost.\nRankNet also allows the pairs in Z to be weighted with a\nconfidence (posed as the probability that the pair satisfies the\nordering induced by the ranking function). In this paper, we used\na probability of one for all pairs. 
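With a target probability of one for every pair, the pairwise cross-entropy cost reduces to a standard form, sketched below; the specific functional form follows Burges et al. [7] and is stated here as an assumption rather than quoted from this paper.

import math

def ranknet_pair_cost(o_i, o_j):
    """Pairwise cross-entropy cost for a pair <i, j> in Z, where item i
    should be ranked above item j and the target probability is one.
    The larger o_j - o_i is, the larger the cost."""
    return math.log(1.0 + math.exp(o_j - o_i))

def ranknet_pair_gradient(o_i, o_j):
    """Derivative of the cost with respect to o_i (the negative of the
    derivative with respect to o_j), as used by the back-prop update."""
    return -1.0 / (1.0 + math.exp(o_i - o_j))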
In the next section, we will\ndiscuss the features used in our feature vectors, x\ni\n.\nFEATURES\nTo apply RankNet (or other machine learning techniques) to the\nranking problem, we needed to extract a set of features from each\npage. We divided our feature set into four, mutually exclusive,\ncategories: page-level (Page), domain-level (Domain), anchor text\nand inlinks (Anchor), and popularity (Popularity). We also\noptionally used the PageRank of a page as a feature. Below, we\ndescribe each of these feature categories in more detail.\nPageRank\nWe computed PageRank on a Web graph of 5 billion crawled\npages (and 20 billion known URLs linked to by these pages).\nThis represents a significant portion of the Web, and is\napproximately the same number of pages as are used by\nGoogle, Yahoo, and MSN for their search engines.\nBecause PageRank is a graph-based algorithm, it is important\nthat it be run on as large a subset of the Web as possible. Most\nprevious studies on PageRank used subsets of the Web that are\nsignificantly smaller (e.g. the TREC VLC2 corpus, used by\nmany, contains only 19 million pages)\nWe computed PageRank using the standard value of 0.85 for\n\n.\nPopularity\nAnother feature we used is the actual popularity of a Web page,\nmeasured as the number of times that it has been visited by\nusers over some period of time. We have access to such data\nfrom users who have installed the MSN toolbar and have opted\nto provide it to MSN. The data is aggregated into a count, for\neach Web page, of the number of users who viewed that page.\nThough popularity data is generally unavailable, there are two\nother sources for it. The first is from proxy logs. For example, a\nuniversity that requires its students to use a proxy has a record\nof all the pages they have visited while on campus.\nUnfortunately, proxy data is quite biased and relatively small.\nAnother source, internal to search engines, are records of which\nresults their users clicked on. Such data was used by the search\nengine \"Direct Hit\", and has recently been explored for\ndynamic ranking purposes [20]. An advantage of the toolbar\ndata over this is that it contains information about URL visits\nthat are not just the result of a search.\nThe raw popularity is processed into a number of features such\nas the number of times a page was viewed and the number of\ntimes any page in the domain was viewed. More details are\nprovided in section 5.5.\nAnchor text and inlinks\nThese features are based on the information associated with\nlinks to the page in question. It includes features such as the\ntotal amount of text in links pointing to the page (\"anchor\ntext\"), the number of unique words in that text, etc.\nPage\nThis category consists of features which may be determined by\nlooking at the page (and its URL) alone. We used only eight,\nsimple features such as the number of words in the body, the\nfrequency of the most common term, etc.\nDomain\nThis category contains features that are computed as averages\nacross all pages in the domain. For example, the average\nnumber of outlinks on any page and the average PageRank.\nMany of these features have been used by others for ranking Web\npages, particularly the anchor and page features. As mentioned,\nthe evaluation is typically for dynamic ranking, and we wish to\nevaluate the use of them for static ranking. Also, to our\nknowledge, this is the first study on the use of actual page\nvisitation popularity for static ranking. 
The closest similar work is\non using click-through behavior (that is, which search engine\nresults the users click on) to affect dynamic ranking (see e.g.,\n[20]).\nBecause we use a wide variety of features to come up with a static\nranking, we refer to this as fRank (for feature-based ranking).\nfRank uses RankNet and the set of features described in this\nsection to learn a ranking function for Web pages. Unless\notherwise specified, fRank was trained with all of the features.\n\nEXPERIMENTS\nIn this section, we will demonstrate that we can out perform\nPageRank by applying machine learning to a straightforward set\nof features. Before the results, we first discuss the data, the\nperformance metric, and the training method.\n5.1\n\nData\nIn order to evaluate the quality of a static ranking, we needed a\n\"gold standard\" defining the correct ordering for a set of pages.\nFor this, we employed a dataset which contains human judgments\nfor 28000 queries. For each query, a number of results are\nmanually assigned a rating, from 0 to 4, by human judges. The\nrating is meant to be a measure of how relevant the result is for\nthe query, where 0 means \"poor\" and 4 means \"excellent\". There\nare approximately 500k judgments in all, or an average of 18\nratings per query.\nThe queries are selected by randomly choosing queries from\namong those issued to the MSN search engine. The probability\nthat a query is selected is proportional to its frequency among all\n709\nof the queries. As a result, common queries are more likely to be\njudged than uncommon queries. As an example of how diverse\nthe queries are, the first four queries in the training set are \"chef\nschools\", \"chicagoland speedway\", \"eagles fan club\", and\n\"Turkish culture\". The documents selected for judging are those\nthat we expected would, on average, be reasonably relevant (for\nexample, the top ten documents returned by MSN's search\nengine). This provides significantly more information than\nrandomly selecting documents on the Web, the vast majority of\nwhich would be irrelevant to a given query.\nBecause of this process, the judged pages tend to be of higher\nquality than the average page on the Web, and tend to be pages\nthat will be returned for common search queries. This bias is good\nwhen evaluating the quality of static ranking for the purposes of\nindex ordering and returning relevant documents. This is because\nthe most important portion of the index to be well-ordered and\nrelevant is the portion that is frequently returned for search\nqueries. Because of this bias, however, the results in this paper are\nnot applicable to crawl prioritization. In order to obtain\nexperimental results on crawl prioritization, we would need\nratings on a random sample of Web pages.\nTo convert the data from query-dependent to query-independent,\nwe simply removed the query, taking the maximum over\njudgments for a URL that appears in more than one query. The\nreasoning behind this is that a page that is relevant for some query\nand irrelevant for another is probably a decent page and should\nhave a high static rank. Because we evaluated the pages on\nqueries that occur frequently, our data indicates the correct index\nordering, and assigns high value to pages that are likely to be\nrelevant to a common query.\nWe randomly assigned queries to a training, validation, or test set,\nsuch that they contained 84%, 8%, and 8% of the queries,\nrespectively. 
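A minimal sketch of such a query-level split; the exact assignment mechanism is not specified beyond the proportions, so the random draw below is illustrative.

import random

def split_queries(queries, seed=0):
    """Assign whole queries to train/validation/test (84% / 8% / 8%), so
    that all judgments for a query land in exactly one of the sets."""
    rng = random.Random(seed)
    splits = {"train": [], "validation": [], "test": []}
    for q in queries:
        r = rng.random()
        if r < 0.84:
            splits["train"].append(q)
        elif r < 0.92:
            splits["validation"].append(q)
        else:
            splits["test"].append(q)
    return splits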
Each set contains all of the ratings for a given query,\nand no query appears in more than one set. The training set was\nused to train fRank. The validation set was used to select the\nmodel that had the highest performance. The test set was used for\nthe final results.\nThis data gives us a query-independent ordering of pages. The\ngoal for a static ranking algorithm will be to reproduce this\nordering as closely as possible. In the next section, we describe\nthe measure we used to evaluate this.\n5.2\n\nMeasure\nWe chose to use pairwise accuracy to evaluate the quality of a\nstatic ranking. The pairwise accuracy is the fraction of time that\nthe ranking algorithm and human judges agree on the ordering of\na pair of Web pages.\nIf S(x) is the static ranking assigned to page x, and H(x) is the\nhuman judgment of relevance for x, then consider the following\nsets:\n)}\n(\n)\n(\n:\n,\n{\ny\nH\nx\nH\ny\nx\n>\n=\np\nH\n\nand\n)}\n(\n)\n(\n:\n,\n{\ny\nS\nx\nS\ny\nx\n>\n=\np\nS\n\nThe pairwise accuracy is the portion of H\np\nthat is also contained\nin S\np\n:\np\np\np\nH\nS\nH\n\n=\naccuracy\n\npairwise\n\nThis measure was chosen for two reasons. First, the discrete\nhuman judgments provide only a partial ordering over Web pages,\nmaking it difficult to apply a measure such as the Spearman rank\norder correlation coefficient (in the pairwise accuracy measure, a\npair of documents with the same human judgment does not affect\nthe score). Second, the pairwise accuracy has an intuitive\nmeaning: it is the fraction of pairs of documents that, when the\nhumans claim one is better than the other, the static rank\nalgorithm orders them correctly.\n5.3\n\nMethod\nWe trained fRank (a RankNet based neural network) using the\nfollowing parameters. We used a fully connected 2 layer network.\nThe hidden layer had 10 hidden nodes. The input weights to this\nlayer were all initialized to be zero. The output \"layer\" (just a\nsingle node) weights were initialized using a uniform random\ndistribution in the range [-0.1, 0.1]. We used tanh as the transfer\nfunction from the inputs to the hidden layer, and a linear function\nfrom the hidden layer to the output. The cost function is the\npairwise cross entropy cost function as discussed in section 3.\nThe features in the training set were normalized to have zero mean\nand unit standard deviation. The same linear transformation was\nthen applied to the features in the validation and test sets.\nFor training, we presented the network with 5 million pairings of\npages, where one page had a higher rating than the other. The\npairings were chosen uniformly at random (with replacement)\nfrom all possible pairings. When forming the pairs, we ignored the\nmagnitude of the difference between the ratings (the rating spread)\nfor the two URLs. Hence, the weight for each pair was constant\n(one), and the probability of a pair being selected was\nindependent of its rating spread.\nWe trained the network for 30 epochs. On each epoch, the\ntraining pairs were randomly shuffled. The initial training rate was\n0.001. At each epoch, we checked the error on the training set. If\nthe error had increased, then we decreased the training rate, under\nthe hypothesis that the network had probably overshot. The\ntraining rate at each epoch was thus set to:\nTraining rate =\n1\n+\n\n\n\nWhere\n\nis the initial rate (0.001), and\n\nis the number of times\nthe training set error has increased. 
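For reference, the pairwise accuracy of section 5.2, which is the quantity reported throughout this evaluation, can be computed as in the following sketch (names are illustrative).

def pairwise_accuracy(static_rank, human_judgment, pages):
    """Fraction of human-ordered pairs that the static ranking orders the
    same way, i.e. |S_p intersect H_p| / |H_p|; pairs the judges rate
    equally are ignored."""
    agree = total = 0
    for x in pages:
        for y in pages:
            if human_judgment[x] > human_judgment[y]:   # pair is in H_p
                total += 1
                if static_rank[x] > static_rank[y]:     # pair is also in S_p
                    agree += 1
    return agree / total if total else 0.0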
After each epoch, we\nmeasured the performance of the neural network on the validation\nset, using 1 million pairs (chosen randomly with replacement).\nThe network with the highest pairwise accuracy on the validation\nset was selected, and then tested on the test set. We report the\npairwise accuracy on the test set, calculated using all possible\npairs.\nThese parameters were determined and fixed before the static rank\nexperiments in this paper. In particular, the choice of initial\ntraining rate, number of epochs, and training rate decay function\nwere taken directly from Burges et al [7].\nThough we had the option of preprocessing any of the features\nbefore they were input to the neural network, we refrained from\ndoing so on most of them. The only exception was the popularity\nfeatures. As with most Web phenomenon, we found that the\ndistribution of site popularity is Zipfian. To reduce the dynamic\nrange, and hopefully make the feature more useful, we presented\nthe network with both the unpreprocessed, as well as the\nlogarithm, of the popularity features (As with the others, the\nlogarithmic feature values were also normalized to have zero\nmean and unit standard deviation).\n710\nApplying fRank to a document is computationally efficient, taking\ntime that is only linear in the number of input features; it is thus\nwithin a constant factor of other simple machine learning methods\nsuch as nave Bayes. In our experiments, computing the fRank for\nall five billion Web pages was approximately 100 times faster\nthan computing the PageRank for the same set.\n5.4\n\nResults\nAs Table 1 shows, fRank significantly outperforms PageRank for\nthe purposes of static ranking. With a pairwise accuracy of 67.4%,\nfRank more than doubles the accuracy of PageRank (relative to\nthe baseline of 50%, which is the accuracy that would be achieved\nby a random ordering of Web pages). Note that one of fRank's\ninput features is the PageRank of the page, so we would expect it\nto perform no worse than PageRank. The significant increase in\naccuracy implies that the other features (anchor, popularity, etc.)\ndo in fact contain useful information regarding the overall quality\nof a page.\n\nTable 1: Basic Results\nTechnique\nAccuracy (%)\nNone (Baseline)\n50.00\nPageRank\n56.70\nfRank\n67.43\n\nThere are a number of decisions that go into the computation of\nPageRank, such as how to deal with pages that have no outlinks,\nthe choice of\n\n, numeric precision, convergence threshold, etc.\nWe were able to obtain a computation of PageRank from a\ncompletely independent implementation (provided by Marc\nNajork) that varied somewhat in these parameters. It achieved a\npairwise accuracy of 56.52%, nearly identical to that obtained by\nour implementation. We thus concluded that the quality of the\nPageRank is not sensitive to these minor variations in algorithm,\nnor was PageRank's low accuracy due to problems with our\nimplementation of it.\nWe also wanted to find how well each feature set performed. To\nanswer this, for each feature set, we trained and tested fRank\nusing only that set of features. The results are shown in Table 2.\nAs can be seen, every single feature set individually outperformed\nPageRank on this test. Perhaps the most interesting result is that\nthe Page-level features had the highest performance out of all the\nfeature sets. This is surprising because these are features that do\nnot depend on the overall graph structure of the Web, nor even on\nwhat pages point to a given page. 
This is contrary to the common\nbelief that the Web graph structure is the key to finding a good\nstatic ranking of Web pages.\n\nTable 2: Results for individual feature sets.\nFeature Set\nAccuracy (%)\nPageRank\n56.70\nPopularity\n60.82\nAnchor\n59.09\nPage\n63.93\nDomain\n59.03\nAll Features\n67.43\n\nBecause we are using a two-layer neural network, the features in\nthe learned network can interact with each other in interesting,\nnonlinear ways. This means that a particular feature that appears\nto have little value in isolation could actually be very important\nwhen used in combination with other features. To measure the\nfinal contribution of a feature set, in the context of all the other\nfeatures, we performed an ablation study. That is, for each set of\nfeatures, we trained a network to contain all of the features except\nthat set. We then compared the performance of the resulting\nnetwork to the performance of the network with all of the features.\nTable 3 shows the results of this experiment, where the \"decrease\nin accuracy\" is the difference in pairwise accuracy between the\nnetwork trained with all of the features, and the network missing\nthe given feature set.\n\nTable 3: Ablation study. Shown is the decrease in accuracy\nwhen we train a network that has all but the given set of\nfeatures. The last line is shows the effect of removing the\nanchor, PageRank, and domain features, hence a model\ncontaining no network or link-based information whatsoever.\nFeature Set\nDecrease in\nAccuracy\nPageRank\n0.18\nPopularity\n0.78\nAnchor\n0.47\nPage\n5.42\nDomain\nAnchor, PageRank & Domain\n0.10\n0.60\n\nThe results of the ablation study are consistent with the individual\nfeature set study. Both show that the most important feature set is\nthe Page-level feature set, and the second most important is the\npopularity feature set.\nFinally, we wished to see how the performance of fRank\nimproved as we added features; we wanted to find at what point\nadding more feature sets became relatively useless. Beginning\nwith no features, we greedily added the feature set that improved\nperformance the most. The results are shown in Table 4. For\nexample, the fourth line of the table shows that fRank using the\npage, popularity, and anchor features outperformed any network\nthat used the page, popularity, and some other feature set, and that\nthe performance of this network was 67.25%.\n\nTable 4: fRank performance as feature sets are added. At each\nrow, the feature set that gave the greatest increase in accuracy\nwas added to the list of features (i.e., we conducted a greedy\nsearch over feature sets).\nFeature Set\nAccuracy (%)\nNone\n50.00\n+Page\n63.93\n+Popularity\n66.83\n+Anchor\n67.25\n+PageRank\n67.31\n+Domain\n67.43\n\n711\nFinally, we present a qualitative comparison of PageRank vs.\nfRank. In Table 5 are the top ten URLs returned for PageRank and\nfor fRank. PageRank's results are heavily weighted towards\ntechnology sites. It contains two QuickTime URLs (Apple's video\nplayback software), as well as Internet Explorer and FireFox\nURLs (both of which are Web browsers). fRank, on the other\nhand, contains more consumer-oriented sites such as American\nExpress, Target, Dell, etc. PageRank's bias toward technology can\nbe explained through two processes. First, there are many pages\nwith \"buttons\" at the bottom suggesting that the site is optimized\nfor Internet Explorer, or that the visitor needs QuickTime. 
These\ngenerally link back to, in these examples, the Internet Explorer\nand QuickTime download sites. Consequently, PageRank ranks\nthose pages highly. Though these pages are important, they are\nnot as important as it may seem by looking at the link structure\nalone. One fix for this is to add information about the link to the\nPageRank computation, such as the size of the text, whether it was\nat the bottom of the page, etc.\nThe other bias comes from the fact that the population of Web site\nauthors is different than the population of Web users. Web\nauthors tend to be technologically-oriented, and thus their linking\nbehavior reflects those interests. fRank, by knowing the actual\nvisitation popularity of a site (the popularity feature set), is able to\neliminate some of that bias. It has the ability to depend more on\nwhere actual Web users visit rather than where the Web site\nauthors have linked.\nThe results confirm that fRank outperforms PageRank in pairwise\naccuracy. The two most important feature sets are the page and\npopularity features. This is surprising, as the page features\nconsisted only of a few (8) simple features. Further experiments\nfound that, of the page features, those based on the text of the\npage (as opposed to the URL) performed the best. In the next\nsection, we explore the popularity feature in more detail.\n5.5\n\nPopularity Data\nAs mentioned in section 4, our popularity data came from MSN\ntoolbar users. For privacy reasons, we had access only to an\naggregate count of, for each URL, how many times it was visited\nby any toolbar user. This limited the possible features we could\nderive from this data. For possible extensions, see section 6.3,\nfuture work.\nFor each URL in our train and test sets, we provided a feature to\nfRank which was how many times it had been visited by a toolbar\nuser. However, this feature was quite noisy and sparse,\nparticularly for URLs with query parameters (e.g., http://search\n.msn.com/results.aspx?q=machine+learning&form=QBHP). One\nsolution was to provide an additional feature which was the\nnumber of times any URL at the given domain was visited by a\ntoolbar user. Adding this feature dramatically improved the\nperformance of fRank.\nWe took this one step further and used the built-in hierarchical\nstructure of URLs to construct many levels of backoff between the\nfull URL and the domain. We did this by using the set of features\nshown in Table 6.\n\nTable 6: URL functions used to compute the Popularity\nfeature set.\nFunction\nExample\nExact URL\ncnn.com/2005/tech/wikipedia.html?v=mobile\nNo Params\ncnn.com/2005/tech/wikipedia.html\nPage\nwikipedia.html\nURL-1\ncnn.com/2005/tech\nURL-2\ncnn.com/2005\n...\n\nDomain\ncnn.com\nDomain+1\ncnn.com/2005\n...\n\n\nEach URL was assigned one feature for each function shown in\nthe table. 
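As a rough illustration of how such backoff keys might be derived from a URL and used to accumulate toolbar visit counts, consider the following sketch; it is ours, not the paper's implementation, and the helper names and exact key formats are assumptions:

from collections import defaultdict
from urllib.parse import urlsplit

def backoff_keys(url):
    # Produce Table 6 style keys: the exact URL, the URL without query
    # parameters, the page name, each path prefix (URL-1, URL-2, ...),
    # and the domain.
    parts = urlsplit(url if "://" in url else "http://" + url)
    domain = parts.netloc
    segments = [s for s in parts.path.split("/") if s]
    keys = {"exact": url, "no_params": domain + parts.path}
    if segments:
        keys["page"] = segments[-1]
    for i in range(1, len(segments)):
        keys["url-%d" % (len(segments) - i)] = domain + "/" + "/".join(segments[:i])
    keys["domain"] = domain
    return keys

def aggregate_counts(visited_urls):
    # One counter per backoff level; every toolbar visit increments the
    # matching key at every level (e.g., its domain count and its page count).
    counts = {}
    for url in visited_urls:
        for level, key in backoff_keys(url).items():
            counts.setdefault(level, defaultdict(int))[key] += 1
    return counts

For example, backoff_keys("cnn.com/2005/tech/wikipedia.html?v=mobile") yields "cnn.com/2005/tech" for URL-1, "cnn.com/2005" for URL-2, and "cnn.com" for the domain, matching the examples in Table 6.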
The value of the feature was the count of the number of\ntimes a toolbar user visited a URL, where the function applied to\nthat URL matches the function applied to the URL in question.\nFor example, a user's visit to cnn.com/2005/sports.html would\nincrement the Domain and Domain+1 features for the URL\ncnn.com/2005/tech/wikipedia.html.\nAs seen in Table 7, adding the domain counts significantly\nimproved the quality of the popularity feature, and adding the\nnumerous backoff functions listed in Table 6 improved the\naccuracy even further.\n\nTable 7: Effect of adding backoff to the popularity feature set\nFeatures\nAccuracy (%)\nURL count\n58.15\nURL and Domain counts\n59.31\nAll backoff functions (Table 6)\n60.82\n\nTable 5: Top ten URLs for PageRank vs. fRank\nPageRank\nfRank\ngoogle.com\ngoogle.com\napple.com/quicktime/download\nyahoo.com\namazon.com\namericanexpress.com\nyahoo.com\nhp.com\nmicrosoft.com/windows/ie\ntarget.com\napple.com/quicktime\nbestbuy.com\nmapquest.com\ndell.com\nebay.com\nautotrader.com\nmozilla.org/products/firefox\ndogpile.com\nftc.gov\nbankofamerica.com\n\n712\nBacking off to subsets of the URL is one technique for dealing\nwith the sparsity of data. It is also informative to see how the\nperformance of fRank depends on the amount of popularity data\nthat we have collected. In Figure 1 we show the performance of\nfRank trained with only the popularity feature set vs. the amount\nof data we have for the popularity feature set. Each day, we\nreceive additional popularity data, and as can be seen in the plot,\nthis increases the performance of fRank. The relation is\nlogarithmic: doubling the amount of popularity data provides a\nconstant improvement in pairwise accuracy.\nIn summary, we have found that the popularity features provide a\nuseful boost to the overall fRank accuracy. Gathering more\npopularity data, as well as employing simple backoff strategies,\nimprove this boost even further.\n5.6\n\nSummary of Results\nThe experiments provide a number of conclusions. First, fRank\nperforms significantly better than PageRank, even without any\ninformation about the Web graph. Second, the page level and\npopularity features were the most significant contributors to\npairwise accuracy. Third, by collecting more popularity data, we\ncan continue to improve fRank's performance.\nThe popularity data provides two benefits to fRank. First, we see\nthat qualitatively, fRank's ordering of Web pages has a more\nfavorable bias than PageRank's. fRank's ordering seems to\ncorrespond to what Web users, rather than Web page authors,\nprefer. Second, the popularity data is more timely than\nPageRank's link information. The toolbar provides information\nabout which Web pages people find interesting right now,\nwhereas links are added to pages more slowly, as authors find the\ntime and interest.\nRELATED AND FUTURE WORK\nSince the original PageRank paper, there has been work on\nimproving it. Much of that work centers on speeding up and\nparallelizing the computation [15][25].\nOne recognized problem with PageRank is that of topic drift: A\npage about \"dogs\" will have high PageRank if it is linked to by\nmany pages that themselves have high rank, regardless of their\ntopic. In contrast, a search engine user looking for good pages\nabout dogs would likely prefer to find pages that are pointed to by\nmany pages that are themselves about dogs. 
Hence, a link that is\n\"on topic\" should have higher weight than a link that is not.\nRichardson and Domingos's Query Dependent PageRank [29]\nand Haveliwala's Topic-Sensitive PageRank [16] are two\napproaches that tackle this problem.\nOther variations to PageRank include differently weighting links\nfor inter- vs. intra-domain links, adding a backwards step to the\nrandom surfer to simulate the \"back\" button on most browsers\n[24] and modifying the jump probability (\n\n) [3]. See Langville\nand Meyer [23] for a good survey of these, and other\nmodifications to PageRank.\n6.2\n\nOther related work\nPageRank is not the only link analysis algorithm used for ranking\nWeb pages. The most well-known other is HITS [22], which is\nused by the Teoma search engine [30]. HITS produces a list of\nhubs and authorities, where hubs are pages that point to many\nauthority pages, and authorities are pages that are pointed to by\nmany hubs. Previous work has shown HITS to perform\ncomparably to PageRank [1].\nOne field of interest is that of static index pruning (see e.g.,\nCarmel et al. [8]). Static index pruning methods reduce the size of\nthe search engine's index by removing documents that are\nunlikely to be returned by a search query. The pruning is typically\ndone based on the frequency of query terms. Similarly, Pandey\nand Olston [28] suggest crawling pages frequently if they are\nlikely to incorrectly appear (or not appear) as a result of a search.\nSimilar methods could be incorporated into the static rank (e.g.,\nhow many frequent queries contain words found on this page).\nOthers have investigated the effect that PageRank has on the Web\nat large [9]. They argue that pages with high PageRank are more\nlikely to be found by Web users, thus more likely to be linked to,\nand thus more likely to maintain a higher PageRank than other\npages. The same may occur for the popularity data. If we increase\nthe ranking for popular pages, they are more likely to be clicked\non, thus further increasing their popularity. Cho et al. [10] argue\nthat a more appropriate measure of Web page quality would\ndepend on not only the current link structure of the Web, but also\non the change in that link structure. The same technique may be\napplicable to popularity data: the change in popularity of a page\nmay be more informative than the absolute popularity.\nOne interesting related work is that of Ivory and Hearst [19].\nTheir goal was to build a model of Web sites that are considered\nhigh quality from the perspective of \"content, structure and\nnavigation, visual design, functionality, interactivity, and overall\nexperience\". They used over 100 page level features, as well as\nfeatures encompassing the performance and structure of the site.\nThis let them qualitatively describe the qualities of a page that\nmake it appear attractive (e.g., rare use of italics, at least 9 point\nfont, ...), and (in later work) to build a system that assists novel\nWeb page authors in creating quality pages by evaluating it\naccording to these features. The primary differences between this\nwork and ours are the goal (discovering what constitutes a good\nWeb page vs. ordering Web pages for the purposes of Web\nsearch), the size of the study (they used a dataset of less than 6000\npages vs. 
our set of 468,000), and our comparison with PageRank.

[Figure 1: Relation between the amount of popularity data and the performance of the popularity feature set. Pairwise accuracy is plotted against days of toolbar data; the x-axis is on a logarithmic scale. The fitted trend is y = 0.577 ln(x) + 58.283, with R^2 = 0.9822.]

Nevertheless, their work provides insights into additional useful static features that we could incorporate into fRank in the future.
Recent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on. Craswell et al. [11] present a method for determining the best transformation to apply to query-independent features (such as those used in this paper) for the purposes of improving dynamic ranking. Other work, such as Boyan et al. [4] and Bartell et al. [2], applies machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking); they do not apply their techniques to the problem of static ranking.

6.3 Future work
There are many ways in which we would like to extend this work. First, fRank uses only a small number of features. We believe we could achieve even more significant results with more features. In particular, the existence, or lack thereof, of certain words could prove very significant (for instance, "under construction" probably signifies a low-quality page). Other features could include the number of images on a page, the size of those images, the number of layout elements (tables, divs, and spans), the use of style sheets, conformance to W3C standards (like XHTML 1.0 Strict), the background color of a page, etc.
Many pages are generated dynamically; their contents may depend on parameters in the URL, the time of day, the user visiting the site, or other variables. For such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features. The resulting grammar describing the page could itself be a source of additional features describing the complexity of the page, such as how many non-terminal nodes it has, the depth of the grammar tree, etc.
fRank allows one to specify a confidence in each pairing of documents. In the future, we will experiment with probabilities that depend on the difference in human judgments between the two items in the pair. For example, a pair of documents where one was rated 4 and the other 0 should have a higher confidence than a pair of documents rated 3 and 2.
The experiments in this paper are biased toward pages that have higher than average quality. Also, fRank with all of the features can only be applied to pages that have already been crawled. Thus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl. We would like to investigate a machine learning approach for crawl prioritization as well. It may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy.
Another interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself.
Work on biasing the PageRank jump vector\n[16], and transition matrix [29], have demonstrated the feasibility\nand advantages of such an approach. There is reason to believe\nthat a direct application of [29], using the fRank of a page for its\n\"relevance\", could lead to an improved overall static rank.\nFinally, the popularity data can be used in other interesting ways.\nThe general surfing and searching habits of Web users varies by\ntime of day. Activity in the morning, daytime, and evening are\noften quite different (e.g., reading the news, solving problems,\nand accessing entertainment, respectively). We can gain insight\ninto these differences by using the popularity data, divided into\nsegments of the day. When a query is issued, we would then use\nthe popularity data matching the time of query in order to do the\nranking of Web pages. We also plan to explore popularity features\nthat use more than just the counts of how often a page was visited.\nFor example, how long users tended to dwell on a page, did they\nleave the page by clicking a link or by hitting the back button, etc.\nFox et al. did a study that showed that features such as this can be\nvaluable for the purposes of dynamic ranking [14]. Finally, the\npopularity data could be used as the label rather than as a feature.\nUsing fRank in this way to predict the popularity of a page may\nuseful for the tasks of relevance, efficiency, and crawl priority.\nThere is also significantly more popularity data than human\nlabeled data, potentially enabling more complex machine learning\nmethods, and significantly more features.\n\nCONCLUSIONS\nA good static ranking is an important component for today's\nsearch engines and information retrieval systems. We have\ndemonstrated that PageRank does not provide a very good static\nranking; there are many simple features that individually out\nperform PageRank. By combining many static features, fRank\nachieves a ranking that has a significantly higher pairwise\naccuracy than PageRank alone. A qualitative evaluation of the top\ndocuments shows that fRank is less technology-biased than\nPageRank; by using popularity data, it is biased toward pages that\nWeb users, rather than Web authors, visit. The machine learning\ncomponent of fRank gives it the additional benefit of being more\nrobust against spammers, and allows it to leverage further\ndevelopments in the machine learning community in areas such as\nadversarial classification. We have only begun to explore the\noptions, and believe that significant strides can be made in the\narea of static ranking by further experimentation with additional\nfeatures, other machine learning techniques, and additional\nsources of data.\n\nACKNOWLEDGMENTS\nThank you to Marc Najork for providing us with additional\nPageRank computations and to Timo Burkard for assistance with\nthe popularity data. Many thanks to Chris Burges for providing\ncode and significant support in using training RankNets. Also, we\nthank Susan Dumais and Nick Craswell for their edits and\nsuggestions.\n\nREFERENCES\n[1]\n\nB. Amento, L. Terveen, and W. Hill. Does \"authority\" mean\nquality? Predicting expert quality ratings of Web documents.\nIn Proceedings of the 23\nrd\nAnnual International ACM SIGIR\nConference on Research and Development in Information\nRetrieval, 2000.\n[2]\n\nB. Bartell, G. Cottrell, and R. Belew. Automatic combination\nof multiple ranked retrieval systems. 
In Proceedings of the\n17th Annual International ACM SIGIR Conference on\nResearch and Development in Information Retrieval, 1994.\n[3]\n\nP. Boldi, M. Santini, and S. Vigna. PageRank as a function\nof the damping factor. In Proceedings of the International\nWorld Wide Web Conference, May 2005.\n714\n[4]\n\nJ. Boyan, D. Freitag, and T. Joachims. A machine learning\narchitecture for optimizing web search engines. In AAAI\nWorkshop on Internet Based Information Systems, August\n1996.\n[5]\n\nS. Brin and L. Page. The anatomy of a large-scale\nhypertextual web search engine. In Proceedings of the\nSeventh International Wide Web Conference, Brisbane,\nAustralia, 1998. Elsevier.\n[6]\n\nA. Broder, R. Lempel, F. Maghoul, and J. Pederson.\nEfficient PageRank approximation via graph aggregation. In\nProceedings of the International World Wide Web\nConference, May 2004.\n[7]\n\nC. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N.\nHamilton, G. Hullender. Learning to rank using gradient\ndescent. In Proceedings of the 22\nnd\nInternational Conference\non Machine Learning, Bonn, Germany, 2005.\n[8]\n\nD. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y.\nS. Maarek, and A. Soffer. Static index pruning for\ninformation retrieval systems. In Proceedings of the 24th\nAnnual International ACM SIGIR Conference on Research\nand Development in Information Retrieval, pages 43-50,\nNew Orleans, Louisiana, USA, September 2001.\n[9]\n\nJ. Cho and S. Roy. Impact of search engines on page\npopularity. In Proceedings of the International World Wide\nWeb Conference, May 2004.\n[10]\n\nJ. Cho, S. Roy, R. Adams. Page Quality: In search of an\nunbiased web ranking. In Proceedings of the ACM SIGMOD\n2005 Conference. Baltimore, Maryland. June 2005.\n[11]\n\nN. Craswell, S. Robertson, H. Zaragoza, and M. Taylor.\nRelevance weighting for query independent evidence. In\nProceedings of the 28\nth\nAnnual Conference on Research and\nDevelopment in Information Retrieval (SIGIR), August,\n2005.\n[12]\n\nN. Dalvi, P. Domingos, Mausam, S. Sanghai, D. Verma.\nAdversarial Classification. In Proceedings of the Tenth\nInternational Conference on Knowledge Discovery and Data\nMining (pp. 99-108), Seattle, WA, 2004.\n[13]\n\nO. Dekel, C. Manning, and Y. Singer. Log-linear models for\nlabel-ranking. In Advances in Neural Information Processing\nSystems 16. Cambridge, MA: MIT Press, 2003.\n[14]\n\nS. Fox, K S. Fox, K. Karnawat, M. Mydland, S. T. Dumais\nand T. White (2005). Evaluating implicit measures to\nimprove the search experiences. In the ACM Transactions on\nInformation Systems, 23(2), pp. 147-168. April 2005.\n[15]\n\nT. Haveliwala. Efficient computation of PageRank. Stanford\nUniversity Technical Report, 1999.\n[16]\n\nT. Haveliwala. Topic-sensitive PageRank. In Proceedings of\nthe International World Wide Web Conference, May 2002.\n[17]\n\nD. Hawking and N. Craswell. Very large scale retrieval and\nWeb search. In D. Harman and E. Voorhees (eds), The\nTREC Book. MIT Press.\n[18]\n\nR. Herbrich, T. Graepel, and K. Obermayer. Support vector\nlearning for ordinal regression. In Proceedings of the Ninth\nInternational Conference on Artificial Neural Networks, pp.\n97-102. 1999.\n[19]\n\nM. Ivory and M. Hearst. Statistical profiles of highly-rated\nWeb sites. In Proceedings of the ACM SIGCHI Conference\non Human Factors in Computing Systems, 2002.\n[20]\n\nT. Joachims. Optimizing search engines using clickthrough\ndata. In Proceedings of the ACM Conference on Knowledge\nDiscovery and Data Mining (KDD), 2002.\n[21]\n\nT. 
Joachims, L. Granka, B. Pang, H. Hembrooke, and G.\nGay. Accurately Interpreting Clickthrough Data as Implicit\nFeedback. In Proceedings of the Conference on Research and\nDevelopment in Information Retrieval (SIGIR), 2005.\n[22]\n\nJ. Kleinberg. Authoritative sources in a hyperlinked\nenvironment. Journal of the ACM 46:5, pp. 604-32. 1999.\n[23]\n\nA. Langville and C. Meyer. Deeper inside PageRank.\nInternet Mathematics 1(3):335-380, 2004.\n[24]\n\nF. Matthieu and M. Bouklit. The effect of the back button in\na random walk: application for PageRank. In Alternate track\npapers and posters of the Thirteenth International World\nWide Web Conference, 2004.\n[25]\n\nF. McSherry. A uniform approach to accelerated PageRank\ncomputation. In Proceedings of the International World\nWide Web Conference, May 2005.\n[26]\n\nY. Minamide. Static approximation of dynamically generated\nWeb pages. In Proceedings of the International World Wide\nWeb Conference, May 2005.\n[27]\n\nL. Page, S. Brin, R. Motwani, and T. Winograd. The\nPageRank citation ranking: Bringing order to the web.\nTechnical report, Stanford University, Stanford, CA, 1998.\n[28]\n\nS. Pandey and C. Olston. User-centric Web crawling. In\nProceedings of the International World Wide Web\nConference, May 2005.\n[29]\n\nM. Richardson and P. Domingos. The intelligent surfer:\nprobabilistic combination of link and content information in\nPageRank. In Advances in Neural Information Processing\nSystems 14, pp. 1441-1448. Cambridge, MA: MIT Press,\n2002.\n[30]\n\nC. Sherman. Teoma vs. Google, Round 2. Available from\nWorld Wide Web (http://dc.internet.com/news/article.php/\n1002061), 2002.\n[31]\n\nT. Upstill, N. Craswell, and D. Hawking. Predicting fame\nand fortune: PageRank or indegree?. In the Eighth\nAustralasian Document Computing Symposium. 2003.\n[32]\n\nT. Upstill, N. Craswell, and D. Hawking. Query-independent\nevidence in home page finding. In ACM Transactions on\nInformation Systems. 2003.\n\n\n\n715\n", "keywords": "anchor text;relevance;Web pages;pairwise accuracy;fRank;popularity data;dynamic ranking;search engines;PageRank;static ranking;Static ranking;static features;RankNet"} {"name": "44", "title": "Black-Box Constructions for Secure Computation", "abstract": "It is well known that the secure computation of non-trivial functionalities in the setting of no honest majority requires computational assumptions. We study the way such computational assumptions are used. Specifically, we ask whether the secure protocol can use the underlying primitive (e.g., one-way trapdoor permutation) in a black-box way, or must it be nonblack-box (by referring to the code that computes this primitive)? Despite the fact that many general constructions of cryptographic schemes (e.g., CPA-secure encryption ) refer to the underlying primitive in a black-box way only, there are some constructions that are inherently nonblack-box. Indeed, all known constructions of protocols for general secure computation that are secure in the presence of a malicious adversary and without an honest majority use the underlying primitive in a nonblack-box way (requiring to prove in zero-knowledge statements that relate to the primitive). In this paper, we study whether such nonblack-box use is essential. We present protocols that use only black-box access to a family of (enhanced) trapdoor permutations or to a homomorphic public-key encryption scheme. 
The result is a protocol whose communication complexity is independent of the computational complexity of the underlying primitive (e.g., a trapdoor permutation) and whose computational complexity grows only linearly with that of the underlying primitive. This is the first protocol to exhibit these properties.", "fulltext": "INTRODUCTION\nIt is a known fact that most cryptographic tasks require\nthe use of computational hardness assumptions. These assumptions\ntypically come in two types: specific assumptions\nlike the hardness of factoring, RSA, discrete log and others,\nand general assumptions like the existence of one-way functions\n, trapdoor permutations and others. In this paper, we\nrefer to general assumptions and how they are used. Specifically\n, we consider an intriguing question regarding how secure\nprotocols utilize a primitive that is assumed to carry\nsome hardness property. Here again, there is a clear distinction\nbetween two types of uses:\n1. Black-box usage: a protocol (or construction) uses\na primitive in a black-box way if it refers only to the\ninput/output behavior of the primitive.\n1\nFor example,\nif the primitive is a trapdoor permutation, then the\nprotocol may sample a permutation and its domain,\nand may compute the permutation and its inverse (if\nthe trapdoor is given). Beyond this, no reference is\nmade to the primitive. In particular, the code used to\ncompute the permutation (or carry out any other task)\nis not referred to by the protocol. The vast majority\nof constructions in cryptography are black-box.\n2. Nonblack-box usage: a protocol (or construction)\nuses a primitive in a nonblack-box way if it refers to\nthe code for computing its functionality. A typical example\nof a nonblack-box construction is where a Karp\nreduction is applied to the circuit computing the function\n, say, in order to prove an\nN P zero-knowledge\nproof, as in [14].\nA rich and fruitful body of work, initiated by [16], attempts\nto draw the borders between possibility and impossibility for\nblack-box constructions in cryptography. While many of the\nrelations between primitives are well understood, there are\nstill some important tasks for which the only constructions\nthat we have rely on nonblack-box access to the assumed\nprimitive, yet the existence of a black-box construction is\n1\nIt is typically also required that the security proof of the construction\nis black-box in the sense that an adversary breaking the protocol\ncan be used as an oracle in order to break the underlying primitive.\nSee, e.g., [11, 12, 29] for a comprehensive treatment of black-box reductions\nin cryptography.\n99\nnot ruled out. In particular, all known general constructions\nof multiparty protocols that are secure in the presence\nof malicious adversaries and without an honest majority\n, originating from [15], use nonblack-box access to the\nassumed primitive.\n2\n(We note that by \"general construc-tions\"\n, we mean constructions that can be used to securely\ncompute any functionality.)\nAnother notable example of\nthis phenomenon is the case of public-key encryption that\nis secure against chosen-ciphertext attacks [7, 30, 23]; here\ntoo, all known constructions are nonblack-box. 
The above\nphenomenon begs the following question:\nIs it possible to construct general protocols for\nsecure computation without an honest majority\nand with malicious adversaries, given only black-box\naccess to a \"low-level\" primitive?\nAnswering the above question is of interest for the following\nreasons. First, it is of theoretical interest to understand\nwhether or not nonblack-box access to a primitive is necessary\nfor these tasks. An answer to this question would\nenhance our understanding of how hardness assumptions\ncan (or must) be used. Second, as we have mentioned, the\nnonblack-box use of the underlying primitive is typically utilized\nin order to apply a Karp reduction for the purpose\nof using a (general) zero-knowledge proof. Such reductions\nare highly inefficient and are unlikely to be very useful in\npractice. Furthermore, in these protocols the communication\ncomplexity depends on the complexity of computing\nthe primitive and the computational complexity grows more\nthan linearly with that of the primitive. (An exception to\nthis rule is the communication-efficient compiler presented\nin [26], which relies on the communication-efficient arguments\nof [20, 25]. However, the computational complexity\nof the protocol of [26] is even worse than the GMW protocol\n[15].)\nTo illustrate the type of inefficiency resulting from current\nnonblack-box constructions, consider the following hypothetical\nscenario. Suppose that, due to major advances\nin cryptanalytic techniques, the security parameter must\nbe large enough so that all basic cryptographic primitives\nrequire a full second of computation on a fast CPU. In\nsuch a case, would it still be possible to carry out a distributed\ntask like oblivious transfer? Current nonblack-box\ntechniques (e.g., the GMW protocol [15]) require parties to\nprove in zero-knowledge statements that involve the computation\nof the underlying primitive, say a trapdoor permutation\n. These zero-knowledge protocols, in turn, invoke\ncryptographic primitives for any gate of a circuit computing\na trapdoor permutation. Since (by our assumption) a trapdoor\npermutation takes one second to compute, its circuit\nimplementation contains trillions of gates, thereby requiring\nthe protocol trillions of second to run. In contrast, a\nblack-box construction of oblivious transfer from the trapdoor\npermutation primitive would make the number of invocations\nof the primitive independent of the complexity of\n2\nWe stress that the above discussion is only true when considering\ngeneral assumptions. Furthermore, it is only true when considering\n\"low-level primitives\" like trapdoor permutations. Specifically, there\ndo exist constructions of secure multiparty protocols that use only\nblack-box access to an oblivious transfer primitive [18].\nHowever,\nsince it is not known how to construct oblivious transfer using only\nblack-box access to, say trapdoor permutations, the overall construction\nobtained does not use its \"low-level\" primitive in a black-box\nway.\nimplementing the primitive, thus making oblivious transfer\nfeasible even in the hypothetical scenario described above.\nWe conclude that the current nonblack-box use of the underlying\nprimitives constitutes an obstacle to efficiency. It is\ntherefore of great interest to know whether or not it is possible\nto obtain solutions to these tasks that do not suffer from\nthis obstacle. 
(We note that the inefficiency of nonblack-box\nconstructions here is quite ironic because in many areas of\ncryptography, black-box constructions have been shown to\nhave inherent computational limitations [21, 10].) Despite\nthe above, we stress that the focus of this paper is not on\nefficiency, but rather on the theoretical question of whether\nor not it is possible to obtain the aforementioned black-box\nconstructions. We believe this question to be interesting in\nits own right.\nOur results.\nWe show how to construct general secure\nmultiparty computation (for the case of no honest majority\nand malicious adversaries), given black-box access to either\nhomomorphic encryption schemes or enhanced trapdoor permutations\n(see [13, Appendix C.1] for the definition of enhanced\ntrapdoor permutations). We note that all known\ngeneral constructions for this task from \"low-level\" primitives\nrely on either enhanced trapdoor permutations or homomorphic\nencryption schemes. However, they all use them\nin an inherently nonblack-box way. This is the case even for\nprotocols that implement very simple functionalities, such\nas oblivious transfer. We prove the following:\nTheorem 1.1. There exist protocols for securely computing\nany multiparty functionality without an honest majority\nand in the presence of static malicious adversaries, that rely\nonly on black-box access to a family of enhanced trapdoor\npermutations or to a homomorphic encryption scheme.\nWe remark that nonblack-box access is not typically used\nwhen considering semi-honest adversaries [32, 15]. Rather,\nthe nonblack-box access is utilized in known protocols in order\nto have the parties prove (in zero-knowledge) that they\nare correctly following the protocol specification. This is\nnecessary for preventing a malicious adversary from effec-tively\ndeviating from the protocol instructions. We note also\nthat in the case of an honest majority, it is possible to securely\ncompute any functionality information-theoretically,\nand without any hardness assumption [2, 5]. Thus, no primitive\nat all is needed. For this reason, we focus on the case\nof no honest majority (including the important two-party\ncase) and malicious adversaries.\nTechniques.\nIn order to prove Theorem 1.1, we begin\nby constructing oblivious transfer protocols that use only\nblack-box access to enhanced trapdoor permutations or homomorphic\nencryption schemes, but provide rather weak security\nguarantees. We then \"boost\" the security of these\nprotocols in order to obtain protocols that are secure in the\npresence of malicious adversaries. Constructions until today\nthat have followed this paradigm work by first obtaining\nprotocols that are secure in the presence of semi-honest\nadversaries, and then boosting them so that they are secure\nin the presence of malicious adversaries. However, it is\nnot known how to carry out this \"boosting\" in a black-box\nway (and, indeed, it has been conjectured that malicious\noblivious transfer cannot be constructed from semi-honest\noblivious transfer in a black-box way [24]). 
Since we wish to make our construction black-box, we take a different route.

Table 1: The progression of our constructions: each protocol uses the previous one as a subprotocol.
  Protocol number   Security for corrupted sender   Security for corrupted receiver
  3.1, 3.3          Private for defensible sender   Private for defensible receiver
  4.1               Private for defensible sender   Secure for malicious receiver
  5.1               Secure for malicious sender     Private for defensible receiver
  In Theorem 6.1    Secure for malicious sender     Secure for malicious receiver

Specifically, we begin by introducing the notion of a defensible adversary. In order to describe this notion, we first describe what a defense is: a defense is an input and random-tape that is provided by the adversary after the protocol execution concludes. A defense is good if the honest party, upon that input and random-tape, would have sent the same messages as the adversary sent. Such a defense is a supposed "proof" of honest behavior. However, the adversary need not actually behave honestly and can construct its defense retroactively (after the execution concludes). A protocol is said to be private in the presence of defensible adversaries if privacy is preserved in the event that an adversary provides a good defense. However, in the case that the adversary does not provide a good defense, nothing is guaranteed, and the honest party's entire input may be learned. This notion is therefore rather weak. We note that the oblivious transfer protocol of [8] is not secure under this notion; however, it can be efficiently modified into one that is. It is also possible to efficiently construct such an oblivious transfer protocol from homomorphic encryption. Importantly, we show that it is possible to construct oblivious transfer that is secure in the presence of malicious adversaries from oblivious transfer that is private in the presence of defensible adversaries. Furthermore, this construction is black-box.
As we have mentioned, we start by constructing oblivious transfer protocols that are private in the presence of defensible adversaries. We present two such protocols: one that uses black-box access to a family of enhanced trapdoor permutations, and one that uses black-box access to a homomorphic public-key encryption scheme. Next, we construct from the above oblivious transfer protocol a new oblivious transfer protocol that is still private in the presence of defensible senders, but is secure in the presence of malicious receivers (where security is "full security" according to the ideal/real simulation paradigm). This is achieved using the so-called cut-and-choose technique. That is, many oblivious transfer executions (using random inputs) are run, and the receiver is asked to present a defense for its behavior in half of them. If it indeed presents a good defense, then we are guaranteed that it behaved somewhat honestly in most of the executions.
We stress that this step is novel, because the requirements on a protocol that is secure according to the ideal/real simulation paradigm are much stricter than when only privacy is guaranteed.
Indeed, some efficient protocols for oblivious\ntransfer from the literature [27, 1, 17] are private for both\n(malicious) parties, but are not fully secure for either party.\nNevertheless, we are able to boost both the resilience of the\nprotocol (from a defensible to a malicious adversary) and\nits security guarantee (from privacy to full simulation-based\nsecurity). Next, we \"reverse\" the oblivious transfer protocol\n(i.e., by switching the sender and receiver roles) in order to\nobtain a protocol with reversed security properties. Specifically\n, this next protocol is secure in the presence of malicious\nsenders and private in the presence of defensible receivers.\nAt this point, we reapply our security boosting technique in\norder to obtain a protocol that is \"fully secure\"; that is, a\nprotocol that is secure in the presence of malicious senders\nand receivers. See Table 1 for the series of oblivious transfer\nprotocols that we construct. Needless to say, each protocol\nuses its subprotocol in a black-box way.\nFinally, having constructed secure oblivious transfer protocols\nusing only black-box access to primitives, it suffices to\napply the well-known result of Kilian [18, 19] that shows that\nany functionality can be securely computed using black-box\naccess to a secure oblivious transfer protocol. This therefore\nyields Theorem 1.1, as desired.\nRelated work. Recently, in [6], it was shown that it is possible\nto construct constant-round protocols for the setting of\nan honest majority, that use only black-box access to the assumed\nprimitive. As we have mentioned, in the setting of\nan honest majority, it is possible to construct information-theoretically\nsecure protocols (which are, by triviality, black-box\n). Nevertheless, there are no known (general) constant-round\nprotocols for the information-theoretic setting, and\nso [6] relates to this issue. We remark that the techniques\nused in [6] and here are vastly different, due to the inherent\ndifferences between the setting of an honest majority and\nthat of no honest majority.\nOrganization.\nDue to lack of space in this abstract, we\npresent only brief sketches of the definitions and proofs.\nComplete details appear in the full version of the paper.\nWe often write OT as shorthand for oblivious transfer.\nDEFINITIONS\nWe denote by\nP\n1\n(1\nn\n, x\n1\n,\n1\n)\n, P\n2\n(1\nn\n, x\n2\n,\n2\n) the transcript\nof an execution between parties\nP\n1\nand\nP\n2\nwith a security\nparameter\nn, where P\ni\nhas input\nx\ni\nand random-tape\n\ni\n. For\nbrevity, we will sometimes omit the security parameter 1\nn\n.\nThe message sent by party\nP\ni\n(on the above inputs) after\nhaving received the series of incoming messages\nis denoted\nby\nP\ni\n(\nx\ni\n,\ni\n;\n). Stated otherwise, P\ni\n(\nx\ni\n,\ni\n;\n) denotes the\nnext message function of\nP\ni\n. Let\nt = P\n1\n(\nx\n1\n,\n1\n)\n, P\n2\n(\nx\n2\n,\n2\n) .\nThen, denote the\nth\nmessage sent by\nP\ni\nin\nt by sent\nP\ni\n(\nt) and\nthe first\nmessages received by\nP\ni\nin\nt by received\nP\ni\n1,...,\n(\nt).\nWe also denote the output of\nP\ni\nin an execution by\noutput\nP\ni\nP\n1\n(\nx\n1\n,\n1\n)\n, P\n2\n(\nx\n2\n,\n2\n) .\nIn our presentation, we assume familiarity with the standard\ndefinitions of secure computation; see [13, Chapter 7]\nfor a full treatment. 
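To make this notation, and the notion of a defense described above, concrete, the following toy Python sketch (ours, not part of the paper) models each party as a next-message function, records a transcript, and replays the honest program against a claimed input and random-tape:

def execute(next_msg_1, input_1, coins_1, next_msg_2, input_2, coins_2, rounds):
    # Run two next-message functions against each other and record who sent what.
    # transcript[i] = (sender, message); party 1 speaks first in every round.
    transcript = []
    received_1, received_2 = [], []
    for _ in range(rounds):
        m1 = next_msg_1(input_1, coins_1, received_1)
        transcript.append((1, m1))
        received_2.append(m1)
        m2 = next_msg_2(input_2, coins_2, received_2)
        transcript.append((2, m2))
        received_1.append(m2)
    return transcript

def is_good_defense(transcript, honest_next_msg, claimed_input, claimed_coins, party=1):
    # Informally, Definition 2.1 below: the defense (input, random-tape) is good
    # if the honest program, fed the same incoming messages, would have sent
    # exactly the messages that the (possibly cheating) party actually sent.
    received = []
    for sender, message in transcript:
        if sender == party:
            if honest_next_msg(claimed_input, claimed_coins, received) != message:
                return False
        else:
            received.append(message)
    return True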
In this work, we consider malicious adversaries\n(i.e., adversaries that may arbitrarily deviate from\nthe protocol specification), and static corruptions (meaning\nthat the set of corrupted parties is fixed before the protocol\nexecution begins).\nWe use a non-uniform formulation of adversaries here and\ntherefore, without loss of generality, assume that they are\n101\ndeterministic. However, this is not essential and all of our\nproofs hold for the uniform model of computation.\nBlack-box access to primitives. In this paper, we consider\nconstructions of protocols that use only black-box access\nto an underlying primitive. This can be easily formalized\nby defining oracles that provide the functionality of the\nprimitive. For example, a trapdoor permutation can be defined\nby an oracle that samples a function description along\nwith a trapdoor, an oracle that is given the function description\nand samples a random value from the domain, an\noracle that is given the function description and a point in\nthe domain and computes the permutation, and an oracle\nthat is given the trapdoor and a point in the domain and\ncomputes the permutation inverse. It is easy to see that\nour protocols rely on the underlying primitive in a black-box\nway. We will therefore not burden the presentation by\nformally defining these oracles. We remark that we also construct\nprotocols that use subprotocols in a black-box way.\nThis can be formalized by just looking at the input/output\nbehavior of the protocol. We will not formalize this. It suffices\nfor our result to note that if the subprotocol uses the\nunderlying primitive in a black-box way, then the protocol\n(that uses the subprotocol) also uses the underlying primitive\nin a black-box way. Again, this is easy to verify for\nall of our protocols. In addition to using the underlying\nprimitive in a black-box way, our proofs of security are also\nblack-box. Therefore, our reductions are what are typically\ncalled \"fully black-box\" [29].\n2.2\nDefensible Adversarial Behavior\nWe introduce the notion of defensible adversarial behavior\n. Loosely speaking, an adversary that exhibits defensible\nbehavior may arbitrarily deviate from the protocol specification\n. However, at the conclusion of the protocol execution,\nthe adversary must be able to justify or defend its behavior\nby presenting an input and a random-tape such that the\nhonest party (with this input and random-tape) would behave\nin the same way as the adversary did. A protocol is\n\"private\" under defensible adversarial behavior if it is \"private\"\nin the presence of such adversaries. We stress that if\nan adversary behaves maliciously and cannot provide a good\ndefense, then no security guarantees are given.\nWe now define the notion of a good defense. Intuitively,\na defense is an \"explanation\" of an adversary's behavior\nduring the protocol execution. Such an explanation consists\nof an input and random-tape, and the defense is \"good\" if\nan honest party, given that input and random-tape, would\nhave sent the same messages as the adversary did during the\nprotocol execution. The formal definition follows.\nDefinition 2.1. (good defense for t): Let t be the transcript\nof an execution of a protocol\n= (P\n1\n, P\n2\n) between an\nadversary\nA (say, controlling P\n1\n) and the honest party (say\nP\n2\n). 
Then, we say that the pair (\nx\n1\n,\n1\n) constitutes a good\ndefense by\nA for t in , denoted (x\n1\n,\n1\n) = defense\n\nA\n(\nt), if for\nevery\nit holds that sent\nA\n(\nt) = P\n1\n(\nx\n1\n,\n1\n; received\nA\n1,..., -1\n(\nt)).\nIn other words, every message sent by\nA in the execution\nis such that the honest party\nP\n1\nwith input (\nx\n1\n,\n1\n) would\nhave sent the same message.\n2.3\nSecurity of OT Protocols\nThe starting point of our constructions is an oblivious\ntransfer protocol [28, 8] that is private in the presence of a\ndefensible receiver or sender. Recall that an oblivious transfer\nprotocol involves a sender\nS with two input strings s\n0\nand\ns\n1\n, and a receiver\nR with an input bit r {0, 1}. Very\ninformally, an oblivious transfer protocol has the property\nthat the sender learns nothing about the receiver's bit\nr and\nthe receiver obtains\ns\nr\n, but learns nothing about\ns\n1-r\n. (The\nvariant of oblivious-transfer that we use here is usually referred\nto as \"1-out-of-2 OT\".) We begin by presenting the\nformal definition of oblivious transfer that is private in the\npresence of a defensible receiver and then proceed to define\nprivacy in the presence of a defensible sender.\nNon-trivial protocols.\nOne technicality that must be\ndealt with is that a protocol that does nothing is trivially\n\"private\" in that it does not reveal anything about the par-ties'\ninputs. Of course, such a protocol is also useless. In\norder to make sure that the oblivious transfer protocols that\nwe construct are \"useful\", we define the notion of a non-trivial\noblivious transfer protocol. Such a protocol has the\nproperty that if both the sender and receiver are honest,\nthen the receiver will receive its output as designated by\nthe oblivious transfer functionality\nf((s\n0\n, s\n1\n)\n, r) = (, s\nr\n)\n(where\ndenotes the empty output).\nPrivacy for random inputs in the presence of a defensible\nreceiver.\nWe now define privacy for defensible\nreceivers. Recall that the receiver in an oblivious transfer\nprotocol is supposed to obtain one of the pair (\ns\n0\n, s\n1\n) in the\nexecution. However, the other value must remain secret.\nWhen considering defensible adversaries, the requirement is\nthat, as long as the adversary can provide a good defense,\nit can only learn one of the values. Recall that, by Definition\n2.1, a party's defense includes its input (in this case, the\nbit\nr of the receiver, meaning that it wishes to obtain the\nvalue\ns\nr\n). We therefore require that a defensible receiver can\nlearn nothing about\ns\n1-r\nwhen its defense contains the input\nvalue\nr. Due to technical reasons in our proofs later on,\nwe define privacy only for the case that the sender's inputs\nare uniformly distributed bits. Fortunately, this will suffice\nfor our constructions.\nWe define an experiment for a protocol\nand an adversary\nA modelled by a polynomial-size family of circuits {A\nn\n}\nnN\n.\nInformally, the experiment begins by choosing a random pair\nof bits (\ns\n0\n, s\n1\n) to be used for the sender's input. The adversary's\naim is to guess the value of the input that it doesn't\nreceive as output.\nExperiment Expt\nrec\n\n(\nA\nn\n):\n1. Choose\ns\n0\n, s\n1\n\nR\n{0, 1} uniformly at random.\n2. Let\n\nS\nbe a uniformly distributed random tape for\nS\nand let\nt = S(1\nn\n, s\n0\n, s\n1\n,\nS\n)\n, A\nn\n.\n3. Let ((\nr,\nr\n)\n, ()) be the output of A\nn\n(\nt). (The pair\n(\nr,\nr\n) constitute\nA\nn\n's defense and\nis its guess for\ns\n1-r\n.)\n4. 
Output 1 if and only if (r, ρ_r) is a good defense by A_n for t in π, and the guess output by A_n equals s_{1−r}.

Notice that, by A's defense, it should have received s_r. The challenge for the adversary is therefore to guess the value of s_{1−r}; if it cannot do this, then the sender's privacy is preserved.

Definition 2.2. (privacy for random inputs in the presence of a defensible receiver): Let π = (S, R) be a non-trivial oblivious transfer protocol. We say that π is private for random inputs in the presence of a defensible receiver if for every polynomial-size family of circuits A = {A_n}_{n∈N} controlling R, for every polynomial p(·), and for all sufficiently large n's,

  Pr[Expt_π^rec(A_n) = 1] < 1/2 + 1/p(n).

Remark. The definition of Expt_π^rec only considers the case that the inputs of the sender are uniformly distributed. We stress that this is a very weak definition. However, the reasons that we make this restriction are that (a) it suffices for our construction of "fully secure" oblivious transfer (see Protocol 4.1), and, more importantly, (b) without this restriction we were unable to prove the privacy of Protocol 3.3 for defensible receivers (see Section 3.2). We stress that this restriction is not made when considering security in the presence of malicious parties.

Privacy in the presence of a defensible sender. In an oblivious transfer protocol, the sender is not supposed to learn anything about the receiver's input. When considering a defensible sender, this means that the sender should not be able to simultaneously present a good defense of its behavior and make a correct guess as to the value of the receiver's input. We stress that this privacy requirement only needs to hold when the sender outputs a good defense; in all other cases, there may be no privacy whatsoever. The exact definition is formulated in a similar way as above.

Security. The definitions above refer only to "privacy", meaning that the adversary can learn nothing more about the honest party's input than what is revealed by the output. However, these definitions say nothing about the simulatability of the protocols in question. In particular, a protocol that is private by one of the above definitions may not be secure according to the real/ideal simulation paradigm (see [13, Chapter 7] for these definitions). When we mention security in this paper, we refer to security according to the ideal/real model paradigm.

PRIVACY FOR DEFENSIBLE SENDERS AND DEFENSIBLE RECEIVERS
In this section we show how to construct oblivious transfer protocols that are private for defensible senders and receivers. We present two protocols: one based on homomorphic encryption and one based on enhanced trapdoor permutations. Importantly, both protocols access the underlying primitive in a black-box way.

3.1 Bit OT from Homomorphic Encryption
We assume the existence of a public-key encryption scheme (G, E, D) that is indistinguishable under chosen-plaintext attacks and has the following homomorphic property:
1. The plaintext is taken from a finite Abelian group determined by the public key. For notational convenience, we assume here that the group is an "additive" group Z_q; however, the same construction works for "multiplicative" groups as well.
2.
Given any public-key\npk generated by the key generation\nalgorithm\nG and any two ciphertexts c\n1\n=\nE\npk\n(\nm\n1\n) and\nc\n2\n=\nE\npk\n(\nm\n2\n), it is possible to efficiently\ncompute a random encryption of the sum\nE\npk\n(\nm\n1\n+\nm\n2\n).\nConsequently, it is also possible to efficiently\ncompute\nE\npk\n(\nm\n1\n) for any known integer\n.\nWe also assume that (\nG, E, D) has no decryption errors.\nSuch encryption schemes can be constructed under the quadratic\n-residuosity, decisional Diffie-Hellman and other assumptions\n; see [1, 17] for some references. The following protocol\nis implicit in [22].\nProtocol 3.1.\nInputs: The sender S has a pair of bits (s\n0\n, s\n1\n); the\nreceiver\nR has a bit r.\nThe protocol:\n1. The receiver\nR chooses a pair of keys (pk, sk)\nG(1\nn\n), computes\nc = E\npk\n(\nr) and sends c and p\nk\nto\nS.\n2. The sender\nS uses the homomorphic property and\nits knowledge of\ns\n0\nand\ns\n1\nto compute a random\nencryption\nc = E\npk\n((1\n- r)s\n0\n+\nrs\n1\n).\n3.\nR computes and outputs s\nr\n=\nD\nsk\n(\nc ).\nBefore proving security, note that if\nS and R are both\nhonest, then\nR receives the correct output. For example, if\nr = 0, then c = E\npk\n(1\ns\n0\n+ 0\ns\n1\n) =\nE\npk\n(\ns\n0\n) and so\nR\nreceives the correct value after decryption.\nClaim 3.2. Assume that the encryption scheme (G, E, D)\nis indistinguishable under chosen-plaintext attacks and has\nno decryption errors. Then, Protocol 3\n.1 is a non-trivial\noblivious transfer protocol that is private in the presence of\ndefensible senders and private for random inputs in the presence\nof defensible receivers.\nPrivacy in the presence of a defensible (or even malicious)\nsender follows from the fact that the sender's view consists\nonly of a single encryption under\nE, and this encryption\nis secure. Privacy with respect to a defensible receiver follows\nsince the existence of a proper defense implies that\nc\nis indeed an encryption of 0 or 1. This, in turn, guarantees\nthat\nc is a random encryption of s\nr\n. Hence, again, privacy\nfollows from the security of\nE.\n3.2\nBit OT from Enhanced Trapdoor Permutations\nThe following protocol is a modified version of [8] that is\nprivate in the presence of defensible adversaries. We stress\nthat the original protocol of [8] is completely insecure in the\npresence of defensible adversaries.\nThe construction uses\nany family of enhanced trapdoor permutations. Informally\nspeaking, a family of trapdoor permutations is comprised of\na function-sampling algorithm\nI, a domain-sampling algorithm\nD\nf\n, an algorithm\nF for computing the permutation\nand an algorithm\nF\n-1\nfor inverting the permutation (given\nthe trapdoor). Such a family is called enhanced if it is hard\nto invert a random value\ny even when given the coins used\nby the domain-sampling algorithm to sample\ny. See [13,\nAppendix C.1 and Section 7.3] for a full definition. In the\nsequel, we will abuse notation and refer to the random coins\nused by\nD\nf\nas its input. We note that the enhanced property\n103\nis used in all constructions of oblivious transfer from trapdoor\npermutations. Indeed it has been shown that black-box\nconstructions of oblivious transfer from plain trapdoor permutations\nis impossible [9].\nWe will require that\nI is errorless, meaning that for every\nseries of random coins provided to\nI, the description of\nthe function output is indeed a permutation. 
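As a concrete (and deliberately insecure) illustration of the black-box interface assumed here, the following sketch models the four oracles of a trapdoor permutation family with tiny RSA parameters; it is ours, not the paper's construction, and serves only to fix the input/output behavior that Protocol 3.3 below relies on (together with a hard-core bit B):

import random

class ToyTDP:
    # Toy stand-in for the oracles (I, D_f, F, F^{-1}) of an (enhanced) trapdoor
    # permutation family, instantiated with tiny fixed RSA parameters purely for
    # illustration; it is NOT secure.
    def sample_permutation(self):          # the oracle I: returns (index i, trapdoor t)
        p, q, e = 1009, 1013, 65537
        n = p * q
        d = pow(e, -1, (p - 1) * (q - 1))
        return (n, e), (n, d)
    def sample_domain(self, i, coins):     # the oracle D_f: coins -> domain element
        # "Enhanced" means inverting this output must be hard even given the coins.
        return coins % i[0]
    def evaluate(self, i, x):              # the oracle F: computes f_i(x)
        return pow(x, i[1], i[0])
    def invert(self, t, y):                # the oracle F^{-1}: uses the trapdoor
        return pow(y, t[1], t[0])

tdp = ToyTDP()
i, t = tdp.sample_permutation()
y = tdp.sample_domain(i, random.randrange(i[0]))
assert tdp.evaluate(i, tdp.invert(t, y)) == y

Protocol 3.3 interacts with the family only through these four calls (plus B), which is what makes the construction black-box.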
This requirement on the sampler I is called errorless function sampling, or just errorless sampling.
The protocol uses a perfectly binding commitment scheme C. We denote a commitment to a using randomness ρ by C(a; ρ). For simplicity, we assume that in order to commit to a string a of length n, it suffices to use a random string that is also of length n. Such a commitment scheme can be obtained using black-box access to any trapdoor permutation or homomorphic encryption scheme.

Protocol 3.3.
Inputs: The sender S has a pair of random bits (s_0, s_1); the receiver R has a bit r.
Auxiliary information: The description of a family of (enhanced) trapdoor permutations (I, D_f, F, F^{-1}) and a hard-core bit B for the family.
The protocol:
1. The receiver R chooses σ_1, ρ ∈_R {0,1}^n and sends c = C(σ_1; ρ) to the sender S.
2. S chooses a trapdoor permutation pair (i, t) ← I(1^n) and a random σ_2 ∈_R {0,1}^n, and sends i and σ_2 to R.
3. R computes y_{1−r} = D_f(σ_1 ⊕ σ_2); i.e., y_{1−r} is obtained by running the domain-sampling algorithm with coins σ_1 ⊕ σ_2. In addition, R chooses fresh coins ρ' ∈_R {0,1}^n, obtains x_r = D_f(ρ') and computes y_r = f_i(x_r). Finally, R sends (y_0, y_1) to S.
4. S uses t to compute β_0 = B(f_i^{−1}(y_0)) ⊕ s_0 and β_1 = B(f_i^{−1}(y_1)) ⊕ s_1. S sends (β_0, β_1) to R.
5. R computes and outputs s_r = B(x_r) ⊕ β_r.

Note that the only difference between Protocol 3.3 and the protocol of [8] is that in [8] the value y_{1−r} is chosen singlehandedly by the receiver, whereas here the value is chosen mutually using a (weak, non-simulatable) coin-tossing protocol. (Indeed, in the protocol of [8] a cheating receiver can just choose a value y_{1−r} for which it knows the preimage; the receiver will then learn both s_0 and s_1. Note also that a defensible receiver can easily cheat in the protocol of [8] because it can send any value y_{1−r}, and not the value that equals D_f(σ_1 ⊕ σ_2). In particular, it can send a value y_{1−r} for which it knows the preimage x_{1−r} under f_i, and can still claim in its defense that its coins are such that y_{1−r} was sampled directly.)

Claim 3.4. Assume that (I, D_f, F, F^{-1}) is a family of enhanced one-way trapdoor permutations and that the scheme C is perfectly binding and computationally hiding. Then, Protocol 3.3 is a non-trivial oblivious transfer protocol that is private in the presence of defensible receivers and private for random inputs in the presence of defensible senders.

Intuitively, a corrupted sender cannot guess the value of r from (y_0, y_1) because these values are identically distributed. This actually only holds as long as the function f_i chosen by the sender is really a permutation from the family. (Otherwise, it may be possible to distinguish y_r, which is generated by computing f_i(x_r), from y_{1−r}, which is randomly chosen from the domain.) The fact that the function is really a permutation is "proven" in the defense, and so if a good defense is provided, y_r and y_{1−r} are identically distributed. We therefore have that the only way a defensible sender can learn the value of r is from the commitments. However, this involves distinguishing between c = C(D_f^{−1}(y_0) ⊕ σ_2) and c = C(D_f^{−1}(y_1) ⊕ σ_2), which is hard due to the hiding property of commitments.
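The following rough sketch (ours) traces the honest message flow of Protocol 3.3; it plugs in toy, insecure stand-ins for the commitment scheme and the hard-core bit, and reuses the toy trapdoor-permutation interface sketched above:

import hashlib, random

def toy_commit(value, randomness):
    # Toy stand-in for the perfectly binding commitment C(a; rho); not the paper's.
    return hashlib.sha256(f"{value}|{randomness}".encode()).hexdigest()

def toy_hardcore_bit(x):
    # Placeholder for the hard-core bit B; a real instantiation would use, e.g.,
    # a Goldreich-Levin bit.  Parity is used here only to make the flow run.
    return x & 1

def protocol_3_3(tdp, s0, s1, r, n_bits=20):
    # One honest execution of Protocol 3.3, with both parties simulated locally.
    sigma1 = random.getrandbits(n_bits)                 # R, step 1
    rho = random.getrandbits(n_bits)
    c = toy_commit(sigma1, rho)                         # sent to S; pins down sigma1 for R's defense
    i, t = tdp.sample_permutation()                     # S, step 2
    sigma2 = random.getrandbits(n_bits)
    y = [None, None]                                    # R, step 3
    y[1 - r] = tdp.sample_domain(i, sigma1 ^ sigma2)
    x_r = tdp.sample_domain(i, random.getrandbits(n_bits))
    y[r] = tdp.evaluate(i, x_r)
    beta0 = toy_hardcore_bit(tdp.invert(t, y[0])) ^ s0  # S, step 4
    beta1 = toy_hardcore_bit(tdp.invert(t, y[1])) ^ s1
    return toy_hardcore_bit(x_r) ^ (beta1 if r else beta0)   # R, step 5

# e.g.  protocol_3_3(ToyTDP(), s0=0, s1=1, r=1)  ->  1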
(Notice that\ny\n1-r\n=\nD\nf\n(\n\n1\n\n2\n)\nand so\nc = C(\n1\n) =\nC(D\n-1\nf\n(\ny\n1-r\n)\n\n2\n). Therefore, the\nproblem of guessing\nr reduces to the problem of distinguishing\nsuch commitments.) As for privacy in the presence of\na defensible receiver\nR\n\n: intuitively, if\nR\n\nbehaves so that\nit can present a good defense, then it is unable to compute\nB(f\n-1\n(\ny\n1-r\n)) because it has no freedom in choosing\ny\n1-r\n.\nThat is,\nR\n\nmust choose\ny\n1-r\n=\n\n1\n\n2\nand so it cannot\nknow the preimage\nf\n-1\n(\ny\n1-r\n). This implies that it can only\nlearn the sender's bit\ns\nr\n.\nACHIEVING SECURITY AGAINST A MALICIOUS RECEIVER\nIn this section we construct a bit oblivious transfer protocol\nthat is secure in the presence of a malicious receiver\nand private in the presence of a defensible sender. We stress\nthat the security achieved for malicious receivers is according\nto the ideal/real model definition of security for secure\ncomputation. Our construction uses black-box access to an\noblivious transfer protocol that is private for defensible receivers\nand senders (like those constructed in the previous\nsection). Thus, in this section we show how to boost the\nsecurity guarantee from privacy in the presence of a defensible\nreceiver to security in the presence of a malicious receiver\n. The guarantee regarding a corrupted sender remains\nunchanged.\nProtocol 4.1.\nInputs: The sender S has a pair of bits (s\n0\n, s\n1\n); the\nreceiver\nR has a bit r.\nThe protocol:\n1. The receiver\nR chooses 2n uniformly distributed\nbits\nr\n1\n, . . . , r\n2n\n\nR\n{0, 1}.\n2. The sender\nS chooses 2n pairs of random bits\ns\n0\ni\n, s\n1\ni\n\nR\n{0, 1} for i = 1, . . . , 2n.\n3.\nS and R run 2n parallel executions of a bit oblivious\ntransfer protocol\nthat is private in the presence\nof defensible receivers and defensible senders.\nIn the\ni\nth\nexecution,\nS inputs (s\n0\ni\n, s\n1\ni\n) and\nR inputs\nr\ni\n. Let\nt\n1\n, . . . , t\n2n\nbe the transcripts that result\nfrom these executions.\n4.\nS and R run a secure two-party coin-tossing protocol\n(that accesses a one-way function in a black-box\nway) for generating a random string of length\nn: q = q\n1\n, . . . , q\nn\n.\n3\nThe string\nq is used to define\na set of indices\nQ {1, . . . , 2n} of size n in\nthe following way:\nQ = {2i - q\ni\n}\nn\ni=1\n. (Thus, for\nn = 3 and q = 010 we have that Q = {2, 3, 6}.)\n3\nSequential executions of the coin-tossing protocol of [3] can be used.\nThe security of this has been proven formally in [13].\n104\n5. For every\ni Q, the receiver R provides a defense\n(\nr\ni\n,\ni\nr\n).\n6.\nS checks that for every i Q, the pair (r\ni\n,\ni\nr\n)\nconstitutes a good defense by\nR for t\ni\n. If not,\nthen\nS aborts and halts. Otherwise, it continues\nto the next step.\n7. For every\nj / Q, the receiver R computes\nj\n=\nr r\nj\n(where\nr is R's initial input) and sends\n{\nj\n}\nj /\nQ\nto\nS.\n8.\nS computes\n0\n=\ns\n0\n\n\nj /\nQ\ns\n\nj\nj\nand\n\n1\n=\ns\n1\n\n\nj /\nQ\ns\n1-j\nj\n, and sends (\n\n0\n,\n1\n) to\nR.\n9.\nR computes and outputs s\nr\n=\n\nr\n\n\nj /\nQ\ns\nr\nj\nj\n.\nWe note that the sender's inputs to the executions of the\noblivious transfer subprotocol\nin Protocol 4.1 are uniformly\ndistributed. 
Therefore, it suffices to use Protocol 3.3,\neven though it has only been proven \"private\" for the case\nof uniformly distributed sender inputs.\nWe stress that our proof below of Protocol 4.1 relies on\nthe fact that the sender's inputs are single bits.\n4\nClaim 4.2. Assume that is a non-trivial oblivious transfer\nprotocol that is private for random inputs in the presence\nof defensible senders and receivers. Then, Protocol 4\n.1 is a\nnon-trivial oblivious transfer protocol that is secure in the\npresence of malicious receivers and private in the presence\nof defensible senders.\nProof Sketch: We first demonstrate the non-triviality\nproperty; that is, we show that if\nS and R are honest, then\nR receives s\nr\n, as required. To see this, first note that by the\nnon-triviality of\n, the receiver R obtains all of the bits s\nr\nj\nj\n,\nand in particular all\ns\nr\nj\nj\nfor\nj / Q. Now, if r = 0, then R sets\n\nj\n=\nr\nj\nfor every\nj / Q. Therefore, R will compute s\n0\n=\n\n0\n\n\nj /\nQ\ns\nr\nj\nj\n=\n\n0\n\n\nj /\nQ\ns\n\nj\nj\n. This computation\nis correct because\nS computed\n0\n=\ns\n0\n\n\nj /\nQ\ns\n\nj\nj\n. In\ncontrast, if\nr = 1, then\nj\n= 1\nr\nj\nfor every\nj, which is\nequivalent to\nr\nj\n= 1\nj\n. Thus, once again,\nR's computation\nof\n\nj /\nQ\ns\nr\nj\nj\nwhen computing\ns\n1\nequals\nS's computation of\n\nj /\nQ\ns\n1-j\nj\nwhen computing\n\n1\n, and\nR will obtain\n1\n.\nPrivacy in the presence of defensible senders.\nWe\npresent only the idea behind the proof that Protocol 4.1 is\nprivate in the presence of a defensible sender\nA. Intuitively,\nif protocol\nis private in the presence of a defensible sender,\nthen a defensible adversary here cannot learn any of the\nr\ni\nvalues in the execution (apart from those explicitly revealed\nby\nR when it provides its defenses). Therefore, the\nj\n=\nr\nj\nr values that it receives reveal nothing of the receiver's\ninput\nr, because for all j / Q, the value r\nj\nis not learned.\nSecurity in the presence of malicious receivers. We\npresent an almost full proof that Protocol 4.1 is secure in\nthe presence of malicious receivers. The intuition behind\n4\nThis is due to our definition of \"oblivious transfer that is private for\ndefensible adversaries\". It is possible to define a stronger notion of\ndefensible adversaries that is sufficient for proving that Protocol 4.1\nis secure even when the sender's inputs are strings of an arbitrary\nlength. However, we were not able to prove that Protocol 3.3 is private\nfor defensible adversaries under this stronger notion (in contrast to\nProtocol 3.1 that can be proven secure under the stronger notion).\nthis proof is that the cut-and-choose technique forces an adversarial\nreceiver\nA to be able to provide a good defense for\nmost of the oblivious transfer executions (or be caught with\nhigh probability). In particular, there must be at least one\nj / Q for which A could have provided a good defense. This\nimplies that there exists some\nj for which A cannot predict\nthe value of\ns\n1-r\nj\nj\nwith any non-negligible advantage. Since\ns\n1-r\nis masked by\ns\n1-r\nj\nj\n, it follows that\nA also learns nothing\nabout\ns\n1-r\n. We stress that the above intuition shows\nthat a malicious\nA cannot learn anything about s\n1-r\n. However\n, we actually need to prove a much stronger claim in\nthat the protocol is secure for a malicious\nR\n\n, as defined\nvia the ideal/real model simulation paradigm. 
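Before turning to the simulation-based argument, the purely combinatorial part of Protocol 4.1 (the derivation of Q in step 4 and the XOR masking of steps 7 to 9) can be checked mechanically. The following Python sketch is only an illustration of the correctness argument above: it models the 2n subprotocol executions as an ideal oracle that simply hands the receiver s_j^{r_j}, omits the defense checks of steps 5 and 6 since both parties are honest, and uses function and variable names of our own choosing.

import random

def derive_Q(q):
    # Step 4: q = q_1 ... q_n defines Q = {2i - q_i} for i = 1..n, a subset of {1,...,2n}.
    # For n = 3 and q = 010 this returns {2, 3, 6}, as in the example above.
    return {2 * (i + 1) - q_i for i, q_i in enumerate(q)}

def cut_and_choose_ot(r, s0, s1, n=3):
    # Steps 1-3: the receiver's random choice bits r_i and the sender's random pairs
    # (s0_i, s1_i); the i-th subprotocol execution hands the receiver s_{r_i} of pair i.
    r_bits = [random.randint(0, 1) for _ in range(2 * n)]
    pairs = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(2 * n)]
    received = [pairs[i][r_bits[i]] for i in range(2 * n)]

    # Step 4: coin-tossing output q and the opened index set Q.
    q = [random.randint(0, 1) for _ in range(n)]
    Q = derive_Q(q)
    unopened = [j for j in range(1, 2 * n + 1) if j not in Q]

    # Step 7: the receiver reveals alpha_j = r xor r_j for every unopened j.
    alpha = {j: r ^ r_bits[j - 1] for j in unopened}

    # Step 8: the sender masks its real inputs with the unopened pads.
    sigma0, sigma1 = s0, s1
    for j in unopened:
        sigma0 ^= pairs[j - 1][alpha[j]]
        sigma1 ^= pairs[j - 1][1 - alpha[j]]

    # Step 9: the receiver unmasks sigma_r with the pads it actually received.
    out = sigma1 if r else sigma0
    for j in unopened:
        out ^= received[j - 1]
    return out

# Exhaustive sanity check of non-triviality: the receiver always recovers s_r.
for _ in range(50):
    for r in (0, 1):
        for s0 in (0, 1):
            for s1 in (0, 1):
                assert cut_and_choose_ot(r, s0, s1) == (s1 if r else s0)

The check makes the masking argument explicit: when r = 0 the receiver's pads s_j^{r_j} are exactly the pads the sender folded into sigma_0, and when r = 1 they are exactly those folded into sigma_1, so the unwanted value stays masked by at least one pad the receiver never learns.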
We present\nour analysis in the so-called \"hybrid model\", where the honest\nparties use a trusted party to compute the coin-tossing\nfunctionality for them.\nWe now describe the simulator Sim for\nA = {A\nn\n}:\n1. For each\ni = 1, . . . , 2n, simulator Sim chooses random\npairs\ns\n0\ni\n, s\n1\ni\n\nR\n{0, 1} and plays the honest sender in\nwith these inputs, where\nA\nn\nplays the receiver.\n2. Sim chooses a random string\nq\nR\n{0, 1}\nn\nand hands\nit to\nA\nn\nas if it is the output of the coin-tossing functionality\n, as sent by the trusted party. Let\nQ be the\nindex set derived from\nq. Upon receiving back pairs\n(\nr\ni\n,\ni\nr\n) for\ni Q, simulator Sim checks that they all\nconstitute good defenses, respectively. If not, then it\naborts (just like the honest sender).\n3. Sim rewinds\nA\nn\nto the beginning of the previous step\nand chooses a new random string\nq with associated\nindex set\nQ . (We stress that q is independent of q.)\nSim hands\nq to A\nn\nand sees if it replies with pairs\n(\nr\ni\n,\ni\nr\n) that are good defenses, for all\ni Q . Sim\nrepeats this process with a new\nq until A\nn\nindeed\nreplies with pairs (\nr\ni\n,\ni\nr\n) that are good defenses, for\nall\ni Q . If Q = Q, then Sim outputs fail. Otherwise\nit proceeds to the next step.\n4. Given that\nQ = Q (and |Q | = |Q|), there exists at\nleast one index\nj such that j / Q but j Q. For\nsuch a\nj, Sim computes r = r\nj\n\nj\nand sends\nr to\nthe trusted party. (Note that\nr\nj\nis obtained from the\ndefense (\nr\nj\n,\nj\nr\n) that was received from\nA\nn\nafter it was\nsent the query set\nQ. In contrast,\nj\nis the value received\nfrom\nA\nn\nafter rewinding; i.e., when the query\nset was\nQ .)\n5. Upon receiving back a bit\ns\nr\nfrom the trusted party,\nSim computes\n\n0\nand\n\n1\nas follows:\n(a) If\nr = 0, then\n0\n=\ns\n0\n\n\nj /\nQ\ns\n\nj\nj\nand\n\n1\n\nR\n{0, 1}.\n(b) If\nr = 1, then\n0\n\nR\n{0, 1} and\n1\n=\ns\n1\n\n\nj /\nQ\ns\n1-j\nj\n.\nSim sends (\n\n0\n,\n1\n) to\nA\nn\nand output whatever\nA\nn\ndoes.\nWe proceed to prove that the joint output of Sim and the\nhonest sender\nS in the ideal model is computationally indistinguishable\nfrom the joint output of\nA\nn\nand\nS in the\nreal model. Actually, since the honest\nS has no output from\nthe protocol, it suffices here to show that the output of Sim\nin the ideal model is computationally indistinguishable from\nthe output of\nA\nn\nin the real model. We first claim that apart\nfrom the pair (\n\n0\n,\n1\n), the view of\nA\nn\nin the simulation with\n105\nSim is statistically close to its view in a real execution with\nS; the only difference being in the case that Sim outputs fail.\nThis can be seen as follows: if\nA\nn\ndoes not send good defenses\nafter receiving\nq, then Sim aborts, just as the honest\nS would (and in this case the simulation is perfect). If A\nn\ndoes send good defenses, then Sim continues until it finds another\n(independent)\nq for which A\nn\nalso replies with good\ndefenses. It is not hard to see that this yields a distribution\nthat is the same as in a real execution, except when\nq = q,\nin which case Sim outputs fail. However, this event (that\nit provides good defenses on\nq and then the next time that\nit provides good defenses is again on\nq) can happen with\nprobability only 2\n-n\n.\nWe therefore have that in the simulation by Sim, the adversary\nA\nn\n's partial view up until the point that it receives\n(\n\n0\n,\n1\n) is statistically close to its view in a real execution\nwith\nS. 
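The rewinding in step 3 is what makes Sim's running time delicate; the expected polynomial-time argument given at the end of the proof (the expected number of rewinding attempts is p times 1/p, i.e., 1) can be illustrated with a toy Monte Carlo. The model below, in which A_n answers each query set independently with probability p, is a simplifying assumption of ours and not part of the proof.

import random

def rewinds_in_one_simulation(p):
    # Model A_n as providing good defenses on a query set independently with
    # probability p (illustrative assumption only).
    if random.random() >= p:
        return 0                 # bad defenses on the initial q: Sim aborts, no rewinding
    count = 1
    while random.random() >= p:  # step 3: rewind with a fresh q' until good defenses
        count += 1
    return count

# Rewinding is only attempted when the initial q already succeeded, so the expected
# number of rewinds is p * (1/p) = 1, independently of p.
for p in (0.9, 0.1, 0.01):
    trials = 100000
    avg = sum(rewinds_in_one_simulation(p) for _ in range(trials)) / trials
    print("p =", p, " average rewinds ~", round(avg, 2))   # close to 1.0 in every case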
We now show that A\nn\n's full view is computationally\nindistinguishable. To do this, we consider a modified ideal-model\nsimulator Sim who receives the sender\nS's input pair\n(\ns\n0\n, s\n1\n). Simulator Sim works in exactly the same way as\nSim, except that it computes\n\n1-r\nas an honest sender would\ninstead of choosing it uniformly. By the above argument, it\nfollows that the distribution generated by Sim in the ideal\nmodel is statistically close to the distribution generated by a\nreal execution between\nS and A\nn\n. (Recall that Sim already\ngenerates\n\nr\nin the same way as an honest\nS, and therefore\nso does Sim .) It remains to show that the distribution generated\nby Sim is computationally indistinguishable to that\ngenerated by Sim.\nThe only difference between Sim and Sim is in the generation\nof\n\n1-r\n: simulator Sim generates it \"honestly\", whereas\nSim chooses it uniformly. As mentioned above, intuitively,\nindistinguishability follows from the fact that at least one\ns\n1-r\nj\nj\nmasks the value of\ns\n1-r\n. Formally, we show that if\nthis \"fake\"\n\n1-r\ncan be distinguished from a real one, then\nwe can construct a defensible receiver ~\nA\nn\nthat can break the\noblivious transfer protocol\n.\nThat is, we show that if the output generated by Sim and\nSim can be distinguished with non-negligible probability,\nthen it is possible for a defensible adversary ~\nA\nn\nto succeed\nin the experiment of Definition 2.2 with non-negligible advantage\n, with respect to the subprotocol\n. Assume by contradiction\nthat there exists a distinguisher\nD, a polynomial\np() and infinitely many n's such that\n|Pr[D(output\nSim\n) = 1]\n- Pr[D(output\nSim\n) = 1]\n| 1\np(n) .\nWithout loss of generality, assume that\nPr[\nD(output\nSim\n) = 1]\n- Pr[D(output\nSim\n) = 1]\n1\np(n) . (1)\nWe now use the above to construct a defensible adversary\n~\nA = { ~\nA\nn\n}. Adversary ~\nA\nn\nbegins its attack by starting\nthe simulation of Protocol 4.1, according to Sim's strategy.\nSpecifically, ~\nA\nn\nchooses\ns\n0\n, s\n1\n\nR\n{0, 1} and runs the simulation\nstrategy of Sim with\nA\nn\nup until the point where\n\n0\nand\n\n1\nare sent.\nThe simulation is the same as Sim,\nexcept for the following difference:\n~\nA\nn\nbegins by choosing\nj\nR\n{1, . . . , 2n} and internally invokes A\nn\n, simulating an\nexecution of Protocol 4.1. Then, all of the oblivious transfers\nsubexecutions of\n, except for the j\nth\none, are run internally\nwith ~\nA\nn\nplaying the honest sender ( ~\nA\nn\nalso chooses the\ns\n0\ni\nand\ns\n1\ni\nvalues as\nS would); in contrast, the messages of the\nj\nth\nexecution of the oblivious transfer protocol\nare forwarded\nbetween ~\nA\nn\n's external sender and the internal\nA\nn\nplaying the receiver. Following the oblivious transfer executions\n, ~\nA\nn\nruns the honest sender in the coin-tossing protocol\nto generate\nq and thus Q as required. If j / Q, then ~\nA\nn\noutputs fail and halts. Otherwise, ~\nA\nn\nreceives back the defenses\n; since\nj Q, the j\nth\ndefense is included. If (\nr\nj\n,\nj\nr\n) is\nnot a good defense, then ~\nA\nn\noutputs fail and halts. Otherwise\n, it stores (\nr\nj\n,\nj\nr\n) and continues like Sim by rewinding\nA\nn\nand generating a new\nq and Q . If j Q , then once\nagain ~\nA\nn\noutputs fail and halts.\nOtherwise, it continues\nlike Sim (using the\nj chosen above for which it is given that\nj Q and j / Q ). 
~\nA\nn\ncontinues in the same way that Sim\ndoes up until (but not including) the point at which (\n\n0\n,\n1\n)\nmust be sent. Now, ~\nA\nn\ncomputes (\n\n0\n,\n1\n) as follows. First,\nnote that ~\nA\nn\nknows the values (\ns\n0\n, s\n1\n) and\ns\n0\ni\n, s\n1\ni\nfor all\ni = j (because it chose them). However, the values s\n0\nj\nand\ns\n1\nj\nare not known to ~\nA\nn\nbecause these are the values used\nby the external sender with whom it interacts. Nevertheless,\nthe (good) defense provided by\nA\nn\nis enough to obtain the\nvalue\ns\nr\nj\nj\n. This holds because given the transcript of the\nj\nth\noblivious transfer execution and the input and random-tape\nof the receiver, it is possible to derive\ns\nr\nj\nj\n. The only value\nunknown to ~\nA\nn\nis therefore\ns\n1-r\nj\nj\n. Therefore, ~\nA\nn\nis able to\ncompute\n\nr\nlike the honest sender. In contrast, it cannot\nhonestly compute\n\n1-r\n. Rather, ~\nA\nn\nguesses the value of\ns\n1-r\nj\nj\n\nR\n{0, 1} randomly, and then computes\n1-r\nusing\ns\n1-r\n, all of the\ns\ni\nvalues that it knows (i.e., all apart from\ns\n1-r\nj\nj\n), and the uniformly chosen\ns\n1-r\nj\nj\n. In order to determine\nits output, ~\nA\nn\nobtains the output of\nA\nn\nand runs the\ndistinguisher\nD (from Eq. (1)) on this output; let b be the\nbit output by\nD. Then, ~\nA\nn\nsets\n= s\n1-r\nj\nj\nb. (Recall that\nis ~\nA\nn\n's guess for the \"not-received\" bit used by the honest\nsender. The motivation for this guess is that by Eq. (1),\nD outputs 1 with higher probability on Sim (when the bit\nis random) than on Sim (when the bit is correct). Thus,\nwhen\nD outputs 1, we flip ~\nA\nn\n's guess for\ns\n1-r\nj\nj\n.) Finally,\n~\nA\nn\noutputs the defense (\nr\nj\n,\nj\nr\n) from above and the bit\n.\nWe proceed to analyze the probability that ~\nA\nn\nsucceeds\nin Expt\nrec\n\n. First, note that unless ~\nA\nn\noutputs fail, the view\nof\nA\nn\nwhen interacting with ~\nA\nn\nabove is identical to its\nview in the simulation by Sim. This is due to the fact that\n~\nA\nn\nfollows Sim's strategy, except for two differences. The\nfirst difference is that in the\nj\nth\nexecution of the oblivious\ntransfer protocol\nis run externally. However, since Sim\nplays the role of an honest receiver in all of the executions,\nthis makes no difference to\nA\nn\n's view. The second difference\nis in how\n\n1-r\nis computed: Sim chooses it uniformly,\nwhereas ~\nA\nn\ncomputes it as described above. Clearly, the\ndistribution generated is the same because ~\nA\nn\nuses a uniformly\ndistributed\ns\n1-r\nj\nj\n, and thus\n\n1-r\nis also uniformly\ndistributed.\nNow, denote the inputs of the honest sender that ~\nA\nn\ninteracts\nwith by (~\ns\n0\n, ~s\n1\n). Using the facts that (a) ~\nA\nn\ngenerates\nthe exact same distribution as Sim, (b) ~\nA\nn\nsets\n= s\n1-r\nj\nj\nb\n(where\nb is D's output bit), and (c) ~\nA\nn\npresents a good defense\nevery time that it does not output fail, we have that\nPr Expt\nrec\n\n( ~\nA\nn\n) = 1\n| output\n~\nA\nn\n= fail\n(2)\n= Pr\nD(output\nSim\n)\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n.\n106\n(Recall that Expt\nrec\n\n( ~\nA\nn\n) = 1 if ~\nA\nn\npresents a good defense\nand\n= ~s\n1-r\nj\n.)\nIn contrast to the above, conditioned on the event that\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n(i.e., the event that ~\nA\nn\nguessed correctly), the\nresult is an execution that is distributed exactly according\nto Sim . (Recall that the only difference between Sim and\nSim is with respect to the computation of\n\n1-r\n.) 
That is,\nPr\nD(output\nSim\n)\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n= Pr\nD(output\nSim\n)\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n= Pr [\nD(output\nSim\n) = 0]\nwhere the last equality is just due to the fact that\ns\n1-r\nj\nj\n=\n~\ns\n1-r\nj\n. Now, recalling that\ns\n1-r\nj\nj\nis chosen uniformly by ~\nA\nn\n(and so equals ~\ns\n1-r\nj\nwith probability exactly 1\n/2), we have:\nPr\nD(output\nSim\n)\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n=\n1\n2 Pr D(output\nSim\n)\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n+ 1\n2 Pr D(output\nSim\n)\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n=\n1\n2 Pr [D(output\nSim\n) = 0]\n+ 1\n2 Pr D(output\nSim\n) = 1\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n=\n1\n2 (1 - Pr [D(output\nSim\n) = 1])\n+ 1\n2 Pr D(output\nSim\n) = 1\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n=\n1\n2 +\n1\n2 Pr D(output\nSim\n) = 1\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n- 12 Pr[D(output\nSim\n) = 1]\n.\nRecalling again that when\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\nthe output of Sim\nis the same as Sim , we have that\n1\n2 +\n1\n2 Pr D(output\nSim\n) = 1\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n- 12 Pr[D(output\nSim\n) = 1]\n=\n1\n2 +\n1\n2 Pr D(output\nSim\n) = 1\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n+ 1\n2 Pr D(output\nSim\n) = 1\n| s\n1-r\nj\nj\n= ~\ns\n1-r\nj\n- Pr [D(output\nSim\n) = 1]\n=\n1\n2 + Pr [D(output\nSim\n) = 1]\n- Pr [D(output\nSim\n) = 1]\n.\nCombining the above with Equations (1) and (2), we have\nthat for infinitely many\nn's\nPr Expt\nrec\n\n( ~\nA\nn\n) = 1\n| output\n~\nA\nn\n= fail\n= Pr\nD(output\nSim\n)\ns\n1-r\nj\nj\n= ~\ns\n1-r\nj\n12 + 1\np(n) .\nRecall now that ~\nA\nn\noutputs fail if\nA\nn\ndoes not output a good\ndefense, if\nj / Q, or if j Q . We first claim that A\nn\nmust\noutput a good defense with non-negligible probability. This\nfollows simply from the fact that when\nA\nn\ndoes not output\na good defense, the execution is truncated and the distributions\ngenerated by Sim and Sim are identical. Therefore,\nEq. (1) implies that for infinitely many\nn's, A\nn\noutputs a\ngood defense with probability at least 1\n/p(n). Next, recall\nthat ~\nA\nn\nchooses the sets\nQ and Q randomly (under the constraints\nprescribed in the protocol). Thus, with probability\nexactly 1\n/4, j Q and j / Q (because the probability that\na given\nj is in a specified set is exactly 1/2). We conclude\nthat with non-negligible probability, ~\nA\nn\ndoes not output fail,\nand thus Pr[Expt\nrec\n\n( ~\nA\nn\n) = 1] is non-negligible.\nIt remains to show that Sim runs in expected polynomial-time\n. Aside from the rewinding stage, all work takes a fixed\npolynomial amount of time. Regarding the rewinding stage,\nwe have the following. Let\np denote the probability that A\nn\nreplies correctly upon a random set of indices\nQ of size n,\nas specified in the protocol. Then, given that\nA\nn\nreplied\ncorrectly to the initial query set\nQ, the expected number\nof rewinding attempts with independent\nQ made by Sim\nequals 1\n/p. Since these rewinding attempts are only made if\nA\nn\nreplied correctly to the initial query set\nQ, we have that\nthe expected number of attempts overall equals\np 1/p = 1.\nThis completes the proof.\nMALICIOUS SENDERS AND DEFENSIBLE RECEIVERS\nIn this section, we reverse the oblivious transfer protocol\nof Protocol 4.1 to obtain a protocol that is secure in the\npresence of a malicious sender and private for random inputs\nin the presence of a defensible receiver. 
We use the construction of [31] for reversing Protocol 4.1. The protocol is as follows:

Protocol 5.1 (reversing oblivious transfer):

Inputs: The sender S has a pair of bits (s_0, s_1) for input and the receiver R has a bit r.

The protocol:
1. The sender and receiver run an oblivious transfer subprotocol that is secure in the presence of a malicious receiver and private in the presence of a defensible sender:
(a) The sender S, playing the receiver in this subprotocol, inputs r̃ = s_0 ⊕ s_1.
(b) The receiver R, playing the sender in this subprotocol, chooses a random bit ρ ∈_R {0, 1} and inputs s̃_0 = ρ and s̃_1 = ρ ⊕ r.
Denote S's output from the subprotocol by a.
2. S sends R the bit α = s_0 ⊕ a.
3. R computes and outputs s_r = α ⊕ ρ.

The security of Protocol 5.1 can easily be proven as an information-theoretic reduction, or when the original oblivious transfer protocol is fully secure. In contrast, it is far more subtle in the setting where only privacy in the presence of a defensible sender is assumed. Nevertheless, we do obtain the following claim:

Claim 5.2. If the underlying protocol is a non-trivial oblivious transfer protocol that is secure in the presence of a malicious receiver and private in the presence of a defensible sender, then Protocol 5.1 is a non-trivial oblivious transfer protocol that is secure in the presence of a malicious sender and private for random inputs in the presence of a defensible receiver.

FULLY-SECURE BIT OT

In this section, we use the construction of Protocol 4.1 again in order to boost the security of Protocol 5.1 so that it is secure in the presence of both a malicious sender and a malicious receiver; we call such a protocol fully secure to stress that it is secure in the face of any corruption.

By Claim 4.2, Protocol 4.1 boosts the security of any oblivious transfer protocol that is private for defensible receivers into one that is secure in the presence of malicious receivers. We can therefore use Protocol 4.1 to boost the security of Protocol 5.1 so that the result is a protocol that is secure in the presence of malicious receivers. This does not suffice, however, because we must also show that if the subprotocol used in Protocol 4.1 is secure in the presence of malicious senders, then the result remains secure in the presence of malicious senders. (Claim 4.2 considers only privacy for defensible senders.) This is actually easy to show, and is omitted here due to lack of space.

Theorem 6.1. Assume that there exists a non-trivial bit oblivious transfer protocol that is secure in the presence of malicious senders and private for random inputs in the presence of defensible receivers. Then Protocol 4.1, instantiated using this protocol, is a non-trivial bit oblivious transfer protocol that is secure in the presence of both malicious receivers and malicious senders.
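Returning briefly to Protocol 5.1: because the reversal is so short, its non-triviality can be checked exhaustively. The following Python sketch (with variable names of our own) runs the reversal as stated above for every combination of (s_0, s_1, r) and of R's random bit rho, and verifies that R always outputs s_r; it says nothing about privacy, which is the subtle part addressed by Claim 5.2.

def reversed_ot_output(s0, s1, r, rho):
    r_tilde = s0 ^ s1            # step 1(a): S's choice bit in the subprotocol
    s_tilde = (rho, rho ^ r)     # step 1(b): R's input pair in the subprotocol
    a = s_tilde[r_tilde]         # S's output from the subprotocol
    alpha = s0 ^ a               # step 2: S sends alpha to R
    return alpha ^ rho           # step 3: R outputs s_r = alpha xor rho

# Exhaustive check over all inputs and all choices of R's random bit rho.
for s0 in (0, 1):
    for s1 in (0, 1):
        for r in (0, 1):
            for rho in (0, 1):
                assert reversed_ot_output(s0, s1, r, rho) == (s1 if r else s0)
print("Protocol 5.1 always delivers s_r to the receiver")

Black-box construction of oblivious transfer.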
Noting\nthat perfectly-binding commitment schemes (as used in Protocol\n3.3) can be constructed using black-box access to homomorphic\nencryption or enhanced trapdoor permutations,\nand combining Protocols 3.1 and 3.3 with Protocol 4.1, followed\nby Protocol 5.1 and the construction in Theorem 6.1,\nwe obtain secure bit oblivious transfer with black-box access\nto a homomorphic encryption scheme or a family of\nenhanced trapdoor permutations.\nBLACK-BOX SECURE COMPUTATION\nKilian [18] showed that any function can be securely computed\ngiven black-box access to a bit oblivious transfer functionality\n.\nWe therefore have the following theorem, that\nconstitutes our main result:\nTheorem 7.1. Assume that there exist homomorphic encryption\nschemes with errorless decryption or families of\nenhanced trapdoor permutations. Then, for any probabilis-tic\npolynomial-time functionality\nf there exists a protocol\nthat uses only black-box access to a homomorphic encryption\nscheme or to a family of enhanced trapdoor permutations\n, and securely computes\nf with any number of corrupted\nparties and in the presence of a static malicious adversary.\nWe remark that as is standard for the setting of no honest\nmajority, the security guarantee achieved here is that of \"se-curity\nwith abort\"; see [13, Chapter 7] for formal definitions.\nREFERENCES\n[1] W. Aiello, Y. Ishai and O. Reingold. Priced Oblivious\nTransfer: How to Sell Digital Goods. In EUROCRYPT 2001,\nSpringer-Verlag (LNCS 2045), pages 119135, 2001.\n[2] M. Ben-Or, S. Goldwasser and A. Wigderson. Completeness\nTheorems for Non-Cryptographic Fault-Tolerant Distributed\nComputation. In 20th STOC, pages 110, 1988.\n[3] M. Blum. Coin Flipping by Phone. In IEEE Spring\nCOMPCOM, pages 133137, 1982.\n[4] R. Canetti. Security and Composition of Multiparty\nCryptographic Protocols. Journal of Cryptology,\n13(1):143202, 2000.\n[5] D. Chaum, C. Cr\nepeau and I. Damg\nard. Multi-party Uncond-itionally\nSecure Protocols. In 20th STOC, pages 1119, 1988.\n[6] I. Damg\nard and Y. Ishai. Constant-Round Multiparty\nComputation Using a Black-Box Pseudorandom Generator. In\nCRYPTO 2005, Springer-Verlag (LNCS 3621), pages 378394,\n2005.\n[7] D. Dolev, C. Dwork and M. Naor. Non-Malleable\nCryptography. SIAM Journal on Computing, 30(2):391437,\n2000.\n[8] S. Even, O. Goldreich and A. Lempel. A Randomized Protocol\nfor Signing Contracts. In Communications of the ACM,\n28(6):637647, 1985.\n[9] R. Gennaro, Y. Lindell and T. Malkin. Enhanced versus Plain\nTrapdoor Permutations for Non-Interactive Zero-Knowledge\nand Oblivious Transfer. Manuscript in preparation, 2006.\n[10] R. Gennaro and L. Trevisan. Lower Bounds on the Efficiency\nof Generic Cryptographic Constructions. In 41st FOCS, pages\n305314, 2000.\n[11] Y. Gertner, S. Kannan, T. Malkin, O. Reingold and\nM. Viswanathan. The Relationship between Public Key\nEncryption and Oblivious Transfer. In 41st FOCS, pages\n325334, 2000.\n[12] Y. Gertner, T. Malkin and O. Reingold. On the Impossibility\nof Basing Trapdoor Functions on Trapdoor Predicates. In 42nd\nFOCS, pages 126135, 2001.\n[13] O. Goldreich. Foundations of Cryptography: Volume 2\nBasic Applications. Cambridge University Press, 2004.\n[14] O. Goldreich, S. Micali and A. Wigderson. Proofs that Yield\nNothing but their Validity or All Languages in NP Have\nZero-Knowledge Proof Systems. Journal of the ACM,\n38(1):691729, 1991.\n[15] O. Goldreich, S. Micali and A. Wigderson. 
How to Play any\nMental Game A Completeness Theorem for Protocols with\nHonest Majority. In 19th STOC, pages 218229, 1987.\n[16] R. Impagliazzo and S. Rudich. Limits on the Provable\nConsequences of One-way Permutations. In CRYPTO'88,\nSpringer-Verlag (LNCS 403), pages 826, 1988.\n[17] Y.T. Kalai. Smooth Projective Hashing and Two-Message\nOblivious Transfer. In EUROCRYPT 2005, Springer-Verlag\n(LNCS 3494) pages 7895, 2005.\n[18] J. Kilian. Founding Cryptograph on Oblivious Transfer. In\n20th STOC, pages 2031, 1988.\n[19] J. Kilian. Uses of Randomness In Algorithms and Protocols.\nMIT Press, 1990.\n[20] J. Kilian. Improved Efficient Arguments. In CRYPTO'95,\nSpringer-Verlag (LNCS 963), pages 311324, 1995.\n[21] J.H. Kim, D.R. Simon and P. Tetali. Limits on the Efficiency\nof One-Way Permutation-Based Hash Functions. In 40th\nFOCS, pages 535542, 1999.\n[22] E. Kushilevitz and R. Ostrovsky. Replication Is Not Needed:\nSingle Database, Computationally-Private Information\nRetrieval. In 38th FOCS, pages 364373, 1997.\n[23] Y. Lindell. A Simpler Construction of CCA2-Secure Public-Key\nEncryption Under General Assumptions. In EUROCRYPT\n2003, Springer-Verlag (LNCS 2656), pages 241254, 2003.\n[24] T. Malkin and O. Reingold. Personal communication, 2006.\n[25] S. Micali. Computationally Sound Proofs. SIAM Journal on\nComputing, 30(4):12531298, 2000.\n[26] M. Naor and K. Nissim. Communication Preserving Protocols\nfor Secure Function Evaluation. In 33rd STOC, pages 590599,\n2001.\n[27] M. Naor and B. Pinkas. Efficient Oblivious Transfer Protocols.\nIn 12th SODA, pages 458457, 2001.\n[28] M. Rabin. How to Exchange Secrets by Oblivious Transfer.\nTech. Memo TR-81, Harvard University, 1981.\n[29] O. Reingold, L. Trevisan, and S. Vadhan. Notions of\nReducibility between Cryptographic Primitives. In 1st TCC,\npages 120, 2004.\n[30] A. Sahai. Non-Malleable Non-Interactive Zero-Knowledge and\nAdaptive Chosen-Ciphertext Security. In 40th FOCS, pages\n543553, 1999.\n[31] S. Wolf and J. Wullschleger. Oblivious Transfer Is Symmetric.\nTo appear in EUROCRYPT 2006. Appears at Cryptology\nePrint Archive, Report 2004/336, 2004.\n[32] A. Yao. How to Generate and Exchange Secrets. In 27th\nFOCS, pages 162167, 1986.\n108", "keywords": "oblivious transfer;encryption scheme;oblivious transfer protocol;secure computation;nonblack-box;malicious adversary;black-box;Theory of cryptography;cryptographic;black-box reductions;trapdoor permutation"} {"name": "45", "title": "Bluetooth Dynamic Scheduling and Interference Mitigation", "abstract": "Bluetooth is a cable replacement technology for Wireless Personal Area Networks. It is designed to support a wide variety of applications such as voice, streamed audio and video, web browsing, printing, and file sharing, each imposing a number of quality of service constraints including packet loss, latency, delay variation, and throughput. In addition to QOS support, another challenge for Bluetooth stems from having to share the 2.4 GHz ISM band with other wireless devices such as IEEE 802.11. The main goal of this paper is to investigate the use of a dynamic scheduling algorithm that guarantees QoS while reducing the impact of interference. We propose a mapping between some common QoS parameters such as latency and bit rate and the parameters used in the algorithm. 
We study the algorithm's performance and obtain simulation results for selected scenarios and configurations of interest.", "fulltext": "Introduction\nToday most radio technologies considered by Wireless Personal\nArea Network (WPAN) industry consortia and standard\ngroups including the Bluetooth Special Interest Group [1],\nHomeRF, and the IEEE 802.15, employ the 2.4 GHz ISM frequency\nband. This same frequency band is already in use by\nmicrowave ovens and the popular Wireless Local Area Network\n(WLAN) devices implementing the IEEE 802.11 standard\nspecifications [8].\nHowever, instead of competing with WLANs for spectrum\nand applications, WPANs are intented to augment many of\nthe usage scenarios and operate in conjunction with WLANs,\ni.e., come together in the same laptop, or operate in proximity\nin an office or conference room environment. For example,\nBluetooth can be used to connect a headset, or PDA to a desk-top\ncomputer, that, in turn, may be using WLAN to connect\nto an Access Point placed several meters away.\nThus, an issue of growing concern is the coexistence of\nWLAN and WPAN in the same environment. Several techniques\nand algorithms aimed at reducing the impact of interference\nhave been considered.\nThese techniques range\nfrom collaborative schemes intended for Bluetooth and IEEE\n802.11 protocols to be implemented in the same device to\nfully independent solutions that rely on interference detection\nand estimation. In particular:\nCollaborative mechanisms. Mechanisms for collaborative\nschemes have been proposed to the IEEE 802.15 Coexistence\nTask Group and are based on a Time Division Multiple\nAccess (TDMA) solution that alternates the transmission\nof Bluetooth and WLAN packets (assuming both\nprotocols are implemented in the same device and use a\ncommon transmitter) [9]. A priority of access is given to\nBluetooth for transmitting voice packets, while WLAN is\ngiven priority for transmitting data.\nNon-collaborative mechanisms. The non-collaborative\nmechanisms range from adaptive frequency hopping [11]\nto packet scheduling and traffic control [4]. They all use\nsimilar techniques for detecting the presence of other devices\nin the band such as measuring the bit or frame error\nrate, the signal strength or the signal to interference ratio\n(often implemented as the Received Signal Indicator\nStrength (RSSI)). Frequency hopping devices may be able\nto detect that some frequencies are used by other devices\nand thus modify their frequency hopping pattern. They\ncan also choose not to transmit on \"bad\" frequencies. The\nfirst technique is known as adaptive frequency hopping,\nwhile the second technique is known as MAC scheduling.\nThe main advantage of scheduling is that it does not require\nchanges to the Bluetooth specifications.\nIn this paper we present a Bluetooth Interference Aware\nScheduling (BIAS) algorithm to deal with coexistence. This\nalgorithm takes advantage of the fact that devices in the same\npiconet will not be subject to the same levels of interference\non all channels of the band. 
The basic idea is to utilize the\nBluetooth frequency hopping pattern and distribute channels\nto devices such that to maximize their throughput while ensuring\nfairness of access among users.\nIn this paper, we propose several extensions to a preliminary\ndiscussion of the algorithm [4] in order to address (1) priority\nscheduling, (2) dynamic changes in the environment,\nand (3) asymmetric scenarios where packet lengths and data\nrates are chosen differently in the upstream (slave to master\ntransmission) and downstream (master to slave transmission)\ndirections. In addition, we describe how to map commonly\nused QOS parameters, namely bit rate, and jitter and the parameters\nused in BIAS. Simulation results for scenarios and\nconfigurations of interest are presented and performance is\nmeasured in terms of packet loss and mean access delay.\nThe remainder of this paper is organized as follows. In\nsection 2, we give some general insights on the Bluetooth\ninterference environment. In section 3, we describe BIAS\nand discuss the mapping of QOS parameters. In section 4,\n22\nN. GOLMIE\nwe present simulation results and offer concluding remarks in\nsection 5.\nInterference environment\nSince Bluetooth operates in the 2.4 GHz band along with\nother wireless technologies such as 802.11, high and low rate\nWPAN (802.15.3 and 4), the resulting mutual interference\nleads to significant performance degradation.\nIn this paper, we assume that interference is caused by an\n802.11 spread spectrum network operating in proximity of the\nBluetooth piconet. This represents the worst case interference\nfor Bluetooth. Golmie et al. [6] use a detailed MAC and\nPHY simulation framework to evaluate the impact of interference\nfor a pair of WLAN devices and a pair of Bluetooth\ndevices. The results indicate that Bluetooth performance may\nbe severely impacted by interference with packet loss of 8%\nand 18% for voice and data traffic, respectively. In [6], the\nauthors investigate the effect of several factors, such as transmitted\npower, offered load, packet size, hop rate, and error\ncorrection on performance. First, they note that power control\nmay have limited benefits in an interference environment.\nIncreasing the Bluetooth transmission power even ten times is\nnot sufficient to reduce the Bluetooth packet loss. Second, using\na shorter packet size leads to less packet loss for Bluetooth\nat the cost of causing more interference on WLAN. Overall,\nthe results exhibit a strong dependence on the type and characteristics\nof the traffic distribution used.\nAdditional analytical [5,10] and experimentation [3,7] results\nconfirm these findings.\nBluetooth Interference Aware Scheduling\nIn this section, we present a Bluetooth Interference Aware\nScheduling (BIAS) algorithm that consists of several components\n, namely, (i) dynamic channel estimation, (ii) credit\ncomputation, and (iii) access priority. A preliminary discussion\nof BIAS appeared in [4].\nIn this sequel, we assume that traffic from slave S\ni\nto the\nmaster (upstream) is characterized by a data rate,\ni\nup\n, equal\nto (N\ni\npeak\nl\niup\n)/p\ni\nwhere N\ni\npeak\nis the number of packets sent\nback-to-back within a poll interval, p\ni\n, and l\niup\nis the packet\nlength (1, 3, or 5 slots depending on the packet type). Similarly\n, the data rate in the downstream (from the master to slave\nS\ni\n) is characterized by\ni\ndn\nequal to (N\ni\npeak\nl\ni\ndn\n)/p\ni\n. 
Note that\nN\ni\npeak\nand p\ni\nare the same in the upstream and downstream,\nsince every packet in the upstream corresponds to one in the\ndownstream. In addition, we assume the following transmission\nrules for the master and slave.\nMaster The master polls S\ni\nevery p\ni\nslots in order to guarantee\ni\nup\nin the upstream direction. A poll message can\nbe either a data or POLL packet. A data packet is sent if\nthere is a packet in the queue destined for S\ni\n. This packet\ncontains the ACK of the previous packet received from S\ni\n.\nIn case there is no data to transmit and the master needs\nto ACK a previous slave transmission, it sends a NULL\npacket.\nSlave S\ni\nUpon receipt of a packet from the master, the\nslave can transmit a data packet. This data packet contains\nthe ACK information of the master to slave packet\ntransmission. In case the slave does not have any data to\nsend, it sends a NULL packet in order to ACK the previous\npacket reception from the master. No ACK is required for\na NULL message from the master.\nIn a nutshell, we propose a method that allows the master\ndevice, which controls all data transmissions in the piconet,\nto avoid data transmission to a slave experiencing a \"bad\"\nfrequency. Furthermore, since a slave transmission always\nfollows a master transmission, using the same principle, the\nmaster avoids receiving data on a \"bad\" frequency, by avoiding\na transmission on a frequency preceding a \"bad\" one in\nthe hopping pattern.\nThis simple scheduling scheme illustrated in figure 1 needs\nonly be implemented in the master device and translates into\nthe following transmission rule. The master transmits in a slot\nafter it verifies that both the slave's receiving frequency, f\ns\n,\nFigure 1. Interference Aware Scheduling.\nBLUETOOTH DYNAMIC SCHEDULING AND INTERFERENCE MITIGATION\n23\nFigure 2. Master packet transmission flow diagram.\nand its own receiving frequency, f\nm\n, are \"good\". Otherwise,\nthe master skips the current transmission slot and repeats the\nprocedure over again in the next transmission opportunity.\nFigure 2 describes the master's transmission flow diagram.\nIn addition, to checking the slave's and the master's receiving\nfrequencies pair, (f\ns\n, f\nm\n), the algorithm incorporates bandwidth\nrequirements, and quality of service guarantees for each\nmaster/slave connection in the piconet. This bandwidth allocation\nis combined with the channel state information and\nmapped into transmission priorities given to each direction in\nthe master/slave communication. It is shown in the \"choose\nslave\" routine in the flow diagram. Note that the master invokes\nthe \"choose\" routine after serving the retransmission\nACK queue for packets sent by the master requiring retransmission\n.\nIn the remainder of this section, we discuss (a) a dynamic\nchannel estimation procedure, (b) a credit allocation function,\nand (c) a service priority routine that schedules packet transmissions\nto devices according to their service requirements\nand the state of the channel.\n3.1. Dynamic channel estimation\nEstimation is mainly based on measurements conducted on\neach frequency or channel in order to determine the presence\nof interference. Several methods are available ranging\nfrom BER, RSSI, packet loss rate, and negative ACKs. In this\ndiscussion, the estimation is based on negative ACKs, which\nbelongs to the class of implicit methods that do not require\nmessages to be exchanged between the master and the slave\ndevices. 
First, we define two phases in the channel estimate\nprocedure as illustrated in figure 3. During the Estimation\nWindow, EW, packets are sent on all frequencies regardless of\ntheir classification.\nNote that in case no data traffic is available for transmission\n, POLL/NULL packets could be exchanged between the\nmaster and the slave in order to probe the channel and collect\nmeasurements. This POLL/NULL exchanged is designed in\nmost implementations to keep the connection alive and check\nthe status of the slave. It comes at the expense of causing\nmore interference on other systems. EW takes place at the\nFigure 3. Implicit estimation.\nbeginning of every Estimation Interval, EI, and is followed\nby an Online phase where the master uses only \"good\" frequencies\nto selectively send data and POLL packets to slaves\nin the piconet. Next, we give a lower bound on the EW and\ndescribe how to adjust EI based on the environment's dynamics\n.\nEstimation Window.\nThe time to perform the channel estimation\ndepends on the frequency hopping rate since the methods\nused to perform the classification depend on packet loss\nmeasurements per frequency visited. A lower bound calculation\nis as follows. First, we assume a hop rate of 1600 hops/s\ngiven single slot packets. For each receiver the hopping rate\nis 1600/2 hops/s, or 800 hops/s since nodes receive on every\nother \"frequency\" or \"hop\" in the sequence. Next, we consider\nthe Bluetooth frequency hopping algorithm. In a window\nof 32 frequencies, every frequency is selected once, then\nthe window is advanced by 16 frequencies, and the process\nis repeated. Therefore, it takes 5 windows of 32 frequencies\nin order to visit each of the 79 frequencies twice. In other\nwords, 160 hops visit each frequency twice. The time to visit\neach frequency four times per receiver is 160/800\n2 = 0.4\nseconds or 400 ms. In fact, 400 ms constitutes a lower bound\nassuming full load and single-slot packets.\nIn order to avoid having to fix the EW, or compute it manu-ally\n, we propose a simple technique to dynamically adjusts the\nwindow based on the number of times, N\nf\n, each frequency in\nthe band should be visited. For example, if N\nf\nis equal to 2,\nthen each receiving frequency in the band is visited at least\ntwice, i.e., the estimation phase ends only when the last frequency\nin the band has been used twice for each device in\nthe piconet. Note that, avoiding \"bad\" frequencies can start\nbefore EW ends, or as soon as frequency status information\nbecomes available.\nEstimation Interval.\nHow often to update the channel estimation\ndepends on the application and the dynamics of the\nscenario used. We propose an adaptive procedure to adjust\nEI, which is the interval between two consecutive estimation\nwindows.\nFirst, we let , be the percentage of frequencies that change\nclassification status (from \"good\" to \"bad\" or vice versa) during\nthe previous estimation phase. More formally, let S(f, t)\nbe the status of frequency f at time t.\nS(f, t)\n= 1 if f is \"good\",\n0\notherwise.\n(1)\nUsing the exclusive bit \"OR\" operation between S(f, t) and\nS(f, t\n+1) represents the change of status of frequency f from\n24\nN. GOLMIE\ntime t to t\n+ 1. A change of status leads to a logic \"1\" while\na no change yields a logic \"0\". 
Summing S(f, t) ⊕ S(f, t + 1) over all frequencies and dividing by the number of frequencies available (79 in this case) gives δ, the fraction of frequencies that changed status:

δ_{t+1} = (1/79) Σ_{f=1}^{79} [S(f, t) ⊕ S(f, t + 1)].    (2)

Initially, EI is set to EI_min. Then, EI is updated every interval t according to the rationale that if a change has just occurred it is likely to happen again in the near future, in which case EI is reset to EI_min; otherwise, the interval is doubled (up to EI_max):

EI_{t+1} = min(2 × EI_t, EI_max)   if δ_{t+1} ≤ 0.1,
EI_{t+1} = EI_min                  otherwise.    (3)

3.2. Credit allocation

The credit system controls the bandwidth allocated to each device in order to ensure that no device gets more than its fair share of the available bandwidth. Thus, devices with a positive credit counter, c_i, are allowed to send data. Since the rate in the upstream can differ from the rate in the downstream, we define c_i^up and c_i^dn for the upstream and downstream credits, respectively. Credits are computed according to the negotiated upstream and downstream rates as follows:

c_i^up = γ_i^up × N,   c_i^dn = γ_i^dn × N,    (4)

where N is the number of slots considered in the allocation and γ_i^{up/down} = (l_i^{up/down} × N_i^peak)/p_i. Credits are decremented by the number of slots used in each data packet transmission. The transmission of POLL and NULL packets does not affect the credit count, the rationale being that no credits are required for POLL and NULL messages. An interesting question is how to compute γ, or derive it from application QOS parameters such as delay, peak bandwidth, and jitter. Let d (seconds) denote the delay and r (bits/s) the peak bandwidth; the jitter is also expressed in seconds. r is part of the L2CAP QOS parameters and, for some applications, is negotiated between the master and the slave at connection setup; it satisfies r = (N_peak × E_l × 8)/(p × 625 × 10^-6), and thus γ = (r × l × 625 × 10^-6)/(E_l × 8). Note that E_l is the number of information bytes contained in a packet of length l. Table 1 gives E_l for the various DH formats.

Table 1. Packet encapsulation rate for DH packets.
Packet type    l    E_l (bytes)
DH1            1    27
DH3            3    183
DH5            5    339

The choice of l depends on the L2CAP packet size, k. When k ≤ E_5, N_peak = 1 and l is chosen as:

l = 1 if 0 < k ≤ 27;   l = 3 if 27 < k ≤ 183;   l = 5 if 183 < k ≤ 339.    (5)

However, when k > E_5, higher layer (L2CAP) packets are segmented into N_peak baseband packets. The aim is to find

N_peak = ⌈k / E_l⌉    (6)

such as to minimize N_peak × l, the total number of slots needed. Furthermore, since master and slave transmissions alternate, the end-to-end delay of a packet accounts for the segmentation and the transmission of packets in both directions. Therefore, the choice of l_up and l_dn is loosely constrained by the delay requirement as follows:

N_peak × (l_up + l_dn) ≤ d / (625 × 10^-6),    (7)

where 625 × 10^-6 is the length of a slot in seconds. Finally, the choice of p is determined by the jitter as follows:

2 ≤ p ≤ jitter / (625 × 10^-6),    (8)

where 2 is the minimum value for the poll interval, since every other slot is dedicated to a master (or slave) transmission. In case r, d, and the jitter cannot be determined from the application QOS, γ can be set to 1 − Σ_i γ_i, the leftover bandwidth remaining after γ has been computed for all other applications with known service rates.

3.3. Service priority
The third component of the algorithm is to give an access priority to devices based on their channel conditions and their allocated credits. We let u_i be the probability that a pair of master/slave transmission slots are "good". Thus, u_i represents the spectrum available to slave S_i, and we write:

u_i = min(1 − 1/79,  P(slave i has a good receiving frequency) × P(master has a good receiving frequency)),    (9)

where

P(device i has a good receiving frequency) = (number of good channels for device i) / (total number of channels).    (10)

We use a two-tier system with high and low priorities, denoted by A and B, respectively. Priority A is used to support delay-constrained applications such as voice, MP3, and video. Priority B, on the other hand, is used to support best-effort connections such as ftp, http, print, and email. The scheduling routine services priority A devices first and priority B devices second. Also, among connections of the same tier, we choose to give devices with fewer good channels the right of way over devices that have more channels available. The access priority is determined according to a weight factor, w, that is the product of the credits and the probability of experiencing a bad frequency. w_i^up and w_i^dn are computed as follows:

w_i^up = c_i^up × (1 − u_i),   w_i^dn = c_i^dn × (1 − u_i).    (11)

The master schedules a data transmission for the slave i that maximizes the product of the weights in the upstream and downstream directions:

i = arg max_{i ∈ S_f} (w_i^up × w_i^dn).    (12)

To transmit a POLL packet, the master looks only at the weight function in the upstream:

i = arg max_{i ∈ S_f} w_i^up.    (13)

The selection of a slave is restricted to the set S_f of slaves that can receive on the master's current transmission frequency, f. Thus, any slave that experiences a "bad" channel on the current transmission frequency is not considered. Four sets of slaves are formed: A_f^data, A_f^poll, B_f^data, and B_f^poll. A_data and A_poll represent the sets of high-priority connections requiring data and POLL packet transmissions, respectively; similarly, B_data and B_poll represent low-priority connections. The algorithm first tries to schedule a data packet to high-priority slaves in group A, then a POLL packet, before it moves to group B. The credit counters and weights are updated accordingly after every master transmission. Table 2 summarizes the parameters used in the algorithm and their definitions. The algorithm's pseudocode is given in table 11.

Table 2. Definition of parameters used in the scheduling algorithm.
Parameter         Definition
γ_i^{up,dn}       Rate allocated for device i in the upstream and downstream
w_i^{up,dn}       Weight for device i
c_i^{up,dn}       Credit for device i
N                 Number of slots considered in the allocation
u_i               Available frequency usage for device i

Performance evaluation

In this section, we present simulation results to evaluate the performance of BIAS. The experiments illustrate the algorithm's responsiveness to changes in the environment and the support of QOS. The results obtained are compared with Round Robin (RR) scheduling. Our simulation environment is based on detailed MAC, PHY, and channel models for Bluetooth and IEEE 802.11 (WLAN), as described in [6]. The parameters used in the setup vary according to the experiment. The common simulation parameters are summarized in table 3. The simulations are run for 900 seconds of simulated time unless specified otherwise. We run 10 trials using a different random seed for each trial. In addition to plotting the mean value, we verify that the statistical variation around the mean values is very small (less than 1%).

The performance metrics include the packet loss, the mean access delay, and the channel estimation transient time. The
The common simulation parameters are summarized in\ntable 3. The simulations are run for 900 seconds of simulated\ntime unless specified otherwise. We run 10 trials using a different\nrandom seed for each trial. In addition, to plotting the\nmean value, we verify that that the statistical variation around\nthe mean values are very small (less than 1%).\nThe performance metrics include the packet loss, the mean\naccess delay, and the channel estimation transient time. The\nTable 3\nCommon simulation parameters.\nBluetooth parameters\nValues\nACL baseband packet encapsulation\nDH5\nTransmitted power\n1 mW\nWLAN parameters\nValues\nPacket interarrival time\n2.172 ms\nOffered load\n60% of channel capacity\nTransmitted power\n25 mW\nData rate\n11 Mbit/s\nPLCP header\n192 bits\nPacket header\n224 bits\nPayload size\n12000 bits\npacket loss is the percentage of packets dropped due to interference\nover the total number of packets received. The\naccess delay measures the time it takes to transmit a packet\nfrom the time it is passed to the MAC layer until it is suc-cessfully\nreceived at the destination. The delay is measured\nat the L2CAP layer. The estimation transient time measures\nthe time it takes a Bluetooth device to detect the presence of a\n\"bad\" frequency, i.e., from the time a packet loss occurs until\nthe frequency is classified \"bad\". This average is provided on\na per frequency basis.\n4.1. Experiment 1: base case\nThis experiment includes Bluetooth performance results for\nthe reference scenario when no interference is present. It represents\na base case since the effects of BIAS are quantified\nand compared against the reference scenario. It also covers\ndifferent levels of interference caused by WLAN systems operating\nin close proximity. Thus, we examine Bluetooth's performance\nwhen 1, 2, and 3 WLAN interfering systems are operational\nand compare that to the ideal performance when no\ninterference is present. Note that, the maximum number of\nnon-overlapping channels for WLAN systems is 3, i.e., there\ncould be up to 3 WLAN networks operating simultaneously\nusing different non-overlapping channels. In each case, results\nare obtained with BIAS and RR scheduling. The benefits\nof using BIAS are discussed in terms of packet loss and\naccess delay.\nTopology.\nWe use the topology illustrated in figure 4 that\nconsists of 3 WLAN systems (sourcesink pairs), and one\nBluetooth piconet with one master and one slave device. In a\nfirst step, we record the results of Bluetooth when no WLAN\nsystem is present. Then, we add one WLAN system at a time\nstarting with WLAN (Source/Sink) 1, followed by WLAN\n(Source/Sink) 2, and 3.\nTraffic.\nFor Bluetooth, a generic source that generates DH5\npackets is considered. The packet interarrival mean time in\nseconds, t\nB\n, is exponentially distributed and is computed according\nto\nt\nB\n= 2 l 0.000625 1 - 1 ,\n(14)\n26\nN. GOLMIE\nwhere l is the packet length in slots and is the offered\nload. We assume that WLAN is operating in the Direct Sequence\nSpread Spectrum (DSSS) mode. The WLAN source\nis transmitting data packets to the sink which is responding\nwith ACKs. The WLAN packet payload is set to 12000 bits\ntransmitted at 11 Mbit/s, while the PLCP header of 192 bits\nis transmitted at 1 Mbit/s. 
The packet interarrival time in seconds\n, t\nW\n, is exponentially distributed and its mean is computed\naccording to\nt\nW\n=\n192\n1000000 +\n12224\n11000000\n1\n.\n(15)\nResults.\nFigure 5 gives the packet loss (a) and the mean access\ndelay (b) measured at the slave for a variable Bluetooth\noffered load (580%). Observe that when no WLAN system\nis present, the packet loss is zero and the access delay remains\nflat at around 4 ms. This represents a reference measure\nfor the Bluetooth performance when there is no interference.\nEach WLAN system addition an increase of 15% in packet\nloss as shown in figure 5(a). The packet loss is around 15%,\n30% and 45% when one, two, and three WLAN systems are\npresent, respectively. Repeating the same experiments using\nBIAS, brings the packet loss down to zero for any number\nof WLAN systems. The delay trends captured in figure 5(b)\nFigure 4. Topology for experiments 1 and 2.\nare consistent with the packet loss results. Using BIAS yields\nlower delays than when RR is used. When one WLAN system\nis present, the delay curve with BIAS is flat at 5 ms (a 1 ms\nincrease compared to the reference case when no interference\nis present). When 2 WLAN systems are present, the delay\ncurve takes off at 35% with RR, while the curve remains flat\nuntil 60% with BIAS. When 3 WLAN systems are present,\nthe delay curve takes off sharply at 15% with RR, while the\nknee of the curve remains lower with BIAS (shifted to the\nright).\n4.2. Experiment 2: dynamic behavior\nIn this experiment, we focus on BIAS's responsiveness to\ntransient effects and sudden changes in the environment. We\nmeasure the channel estimation transient time per frequency\nand over the entire spectrum. We design an experiment where\nthe WLAN traffic is turned on and off several times during\neach simulation run (about 30 times).\nTopology.\nWe use the topology of figure 4 with one WLAN\nsystem (Source/Sink 1) and the Bluetooth master/slave pair.\nTraffic.\nThe traffic is based on bulk data. The offered load\nfor Bluetooth is varied between 10 and 100%, while for\nWLAN the offered load is set to 60%. For Bluetooth, both\nDH1 (1 slot) and DH5 (5 slots) packets are used in order\nto compare the difference in transient times. The time the\nWLAN connection is ON, T\nON\n, is exponentially distributed\nwith a mean equal to 10 seconds, while the time the WLAN\nconnection is OFF, T\nOFF\n, is also exponentially distributed\nwith mean equal to 20 seconds. Each simulation is run for\n900 seconds. Unless specified otherwise, we set EI\nmin\n= 2\nseconds, EI\nmax\n= 100 seconds, N\nf\n= 1.\nResults.\nFigures 6(a) and 6(b) give the packet loss and access\ndelay, respectively, measured at the Bluetooth slave de\n(a)\n(b)\nFigure 5. Experiment 1. Variable number of WLAN interfering systems. (a) Probability of packet loss. (b) Mean access delay.\nBLUETOOTH DYNAMIC SCHEDULING AND INTERFERENCE MITIGATION\n27\n(a)\n(b)\nFigure 6. Experiment 2. Variable Bluetooth offered load. (a) Probability of packet loss. (b) Mean access delay.\nvice. The packet loss obtained with BIAS is negligible (less\nthan 2%) for both DH1 and DH5 packets. On the other hand\nthe packet loss with Round Robin (RR) is close to 10%. The\naccess delay obtained with BIAS for DH1 packets is lower\nthan the delay for DH5 packets for offered loads under 70%\n(it is around 1.5 ms for DH1 packets, and 4 ms for DH5 packets\n). The knee of the curve for DH5 packets is located around\n80% of the offered load while it is at 60% for DH1 packets\n. 
Observe that BIAS gives lower access delays than RR for\nDH5 packets (between 40% and 80% offered load). However,\nthe same does not apply to DH1 packets, in which we observe\na slight increase in access delay (0.5 ms) with BIAS compared\nto RR. For short packets (DH1) retransmissions due to packet\nloss (RR), and delay in transmission due to \"bad\" frequency\navoidance (BIAS), yields comparable delays. Furthermore,\ngiven that the probability of packet loss (and retransmission)\nis small for short packets, RR gives lower access delays on average\n. Figure 7 gives the time it takes to estimate a \"bad\" frequency\nusing DH1 and DH5 packets. The use of DH5 packets\nleads to a higher round trip transmission time, and therefore\nincreases the transient time, up to 1.5 ms while it is around\n0 s for DH1 packets.\n4.3. Experiment 3: QOS support\nThis experiment highlights the support of QOS in an environment\nwhere devices experience different levels of interference\nand connections have a range of service requirements.\nTopology.\nWe use the topology illustrated in figure 8.\nSlaves 1 and 2 experience the same level of interference,\nwhile slave 3 does not experience any interference. The y-coordinate\nof the WLAN FTP server is varied along the y-axis\nin order to vary the level of interference on the Bluetooth piconet\n.\nFigure 7. Experiment 2. Variable Bluetooth offered load. Time to estimate a\n\"bad\" channel.\nFigure 8. Topology for experiment 3.\n28\nN. GOLMIE\nTraffic.\nFor Bluetooth, we consider three application profiles\n, namely, Print, Video, and Email. We use print, video,\nand email traffic between slaves 1, 2, 3 and the master, respectively\n. Note that the master is the client process in all\nthree connections. The profile parameters are given in table 4.\nThe WLAN uses the FTP profile described in table 5.\nSince the video application generates roughly around 93\nand 58 packets in the upstream and downstream directions,\nrespectively, and since it is often difficult to predict the exact\ntraffic distributions, the rate is divided evenly between both\ndirections. Thus, we set\n2\nup\n\n2\ndn\n= 0.25. The two other appli-Table\n4\nBluetooth application profile parameters.\nParameters\nDistribution\nValue\nEmail\nSend interarrival time (sec)\nExponential\n120\nSend group\nConstant\n3\nReceive interarrival time (sec)\nExponential\n60\nReceive group\nConstant\n3\nEmail size (bytes)\nExponential\n1024\nPrint\nPrint requests interarrival time (sec) Exponential\n30\nFile size\nNormal\n(30 K, 9 M)\nVideo\nFrame rate\nConstant\n1 frame/s\nFrame size (bytes)\nConstant\n17280 (128\n120 pixels)\nTable 5\nWLAN application profile parameters.\nParameters\nDistribution\nValue\nFTP\nFile interarrival time (sec)\nExponential\n5\nFile size (bytes)\nExponential\n5 M\nPercentage of get\n100%\ncations, share the leftover bandwidth (\n1,3\nup,dn\n= (1 - 0.5)/4 =\n0.125).\nResults.\nFigure 9 depicts the results when the WLAN y-coordinate\nis varied between 0 and 10 meters. In figure 9(a),\nthe packet loss with BIAS is below 0.1% for all three slaves\nand the master. With RR, slave 1 (Print) and slave 2 (Video)\nvary between 15% and 3% of packet loss between 0 and 10\nmeters, respectively. While the packet loss for the master is\nabove 20%. Slave 3 (Email) has a low packet loss with both\nBIAS and RR since it is far from the WLAN server.\nThe access delay for slave 2 (Video) in figure 9(b) is 0.3\nseconds with BIAS, while it is almost double with RR (0.6\nseconds). 
For Print, delays with BIAS are half the delays with\nRR (0.01 seconds as opposed to 0.02 seconds). The delays for\nEmail are also reduced by half with BIAS.\n4.4. Experiment 4: WLAN and multi-Bluetooth piconets\ninterference\nWhen two or more Bluetooth piconets are proximally located,\none expects few collisions when the packets happen to be\ntransmitted on the same frequency. However, the probability\nof such collisions is low as discussed in [2] since each\npiconet has a unique frequency sequence. Given that these\npacket collisions are random in nature and are already miti-gated\nby frequency hopping, we do not expect significant performance\nimprovements when BIAS is used since the packet\nloss is already very low. Furthermore, the fact that frequencies\nare eliminated due to other Bluetooth piconet interference\nmay even cause delay increases. We illustrate this particular\nissue using the following scenario.\nTopology.\nWe use the topology illustrated in figure 10 representing\na conference hall environment. It consists of one\nWLAN AP located at (0, 15) meters, and one WLAN mobile\nat (0, 0) meters. The WLAN mobile is the server device,\n(a)\n(b)\nFigure 9. Experiment 3. Variable distance. (a) Probability of packet loss. (b) Access delay.\nBLUETOOTH DYNAMIC SCHEDULING AND INTERFERENCE MITIGATION\n29\nFigure 10. Topology for experiment 4.\nTable 6\nProfile parameters.\nParameters\nDistribution\nValue\nBluetooth FTP\nPercentage of put/get\n100%\nInter-request time (sec)\nExponential\n5\nFile size (bytes)\nExponential\n250 K\nHTTP\nPage interarrival time (sec)\nExponential\n30\nNumber of objects per page\nConstant\n2\nObject 1 size (bytes)\nConstant\n1 K\nObject 2 size (bytes)\nUniform\n(2 K, 100 K)\nwhile the AP is the client. The distance between the WLAN\nAP and mobile is d\nW\n= 15 meters. There are ten Bluetooth\npiconets randomly placed, covering a disk. The center of the\ndisk is located at (0, 0) and its radius is r\n= 10 meters. We define\nd\nB\nas the distance between a Bluetooth master and slave\npair. d\nB\n= 1 meter for half of the master and slave pairs,\nwhile d\nB\n= 2 meters for the other half of the master and slave\npairs.\nTraffic.\nWe run four experiments with different combinations\nof WLAN and Bluetooth applications, namely, HTTP\nand FTP. We use the application profiles available in the OP-NET\nlibrary and configure the parameters according to table\n6. The WLAN FTP profile parameters are given in table\n5.\nResults.\nThe results for the Bluetooth packet loss and access\ndelay are given in tables 7 and 8, respectively. The results are\ngrouped by application category (FTP, HTTP), and d\nB\n, for\neach of the WLAN profiles. Overall, the packet loss results\nwith BIAS are comparable to the packet loss obtained with\nRR. In some instances, the packet loss with BIAS is slightly\nlower than with RR, however the difference remains less than\n2%. The access delays for Bluetooth is given in table 8. The\nresults with BIAS and RR are comparable. However, there\nare no significant advantages in using BIAS.\nTables 9 and 10 give the packet loss and the access delay\nrespectively for the WLAN FTP and HTTP profiles. 
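The random layout used for this topology (ten piconets covering a disk of radius 10 meters centred at the origin, with d_B = 1 m for half of the master-slave pairs and 2 m for the rest) can be generated as in the sketch below. The paper does not specify the placement routine, so uniform-over-area placement of the masters and a random orientation for each slave are our assumptions.

import math, random

def place_piconets(n=10, radius=10.0, seed=7):
    # Drop n Bluetooth masters uniformly over a disk of the given radius and put
    # each slave d_B metres away in a random direction: d_B = 1 m for the first
    # half of the pairs, 2 m for the rest.
    rng = random.Random(seed)
    piconets = []
    for i in range(n):
        r = radius * math.sqrt(rng.random())       # sqrt keeps the density uniform in area
        theta = 2 * math.pi * rng.random()
        master = (r * math.cos(theta), r * math.sin(theta))
        d_b = 1.0 if i < n // 2 else 2.0
        phi = 2 * math.pi * rng.random()
        slave = (master[0] + d_b * math.cos(phi), master[1] + d_b * math.sin(phi))
        piconets.append((master, slave))
    return piconets

print(place_piconets())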
Ob-Table\n7\nBluetooth packet loss probability for experiment 4.\nBT traffic\nWLAN traffic\nFTP\nHTTP\nBIAS\nRR\nBIAS\nRR\nFTP\nd\nB\n= 1 m\n0.0103\n0.0158\n0.0064\n0.0356\nd\nB\n= 2 m\n0.1079\n0.1210\n0.0379\n0.0393\nHTTP\nd\nB\n= 1 m\n0.0012\n0.0034\n0.0003\n0.0002\nd\nB\n= 2 m\n0.0425\n0.0614\n0.0265\n0.0071\nTable 8\nBluetooth MAC delay (sec) for experiment 4.\nBT traffic\nWLAN traffic\nFTP\nHTTP\nBIAS\nRR\nBIAS\nRR\nFTP\nd\nB\n= 1 m\n0.1805\n0.1749\n0.1912\n0.1739\nd\nB\n= 2 m\n0.3753\n0.4574\n0.2444\n0.2378\nHTTP\nd\nB\n= 1 m\n0.0840\n0.0861\n0.0836\n0.0835\nd\nB\n= 2 m\n0.0945\n0.1121\n0.0963\n0.0952\nTable 9\nWLAN probability of packet loss for experiment 4.\nBT traffic\nWLAN traffic\nFTP\nHTTP\nBIAS\nRR\nBIAS\nRR\nFTP\n0.1534\n0.303\n0.2510\n0.3481\nHTTP\n0.0192\n0.0961\n0.0721\n0.1534\nTable 10\nWLAN MAC delay (sec) for experiment 4.\nBT traffic\nWLAN traffic\nFTP\nHTTP\nBIAS\nRR\nBIAS\nRR\nFTP\n0.0017\n0.0022\n0.0010\n0.0011\nHTTP\n0.0011\n0.0018\n0.0009\n0.0012\nserve a significant reduction in packet loss with BIAS for both\nWLAN applications, in which the packet loss drops from 30%\nand 34% to 15% and 25% for the FTP and HTTP application,\nrespectively. The access delay shown in table 10 is consistent\nwith the packet loss results and shows slight improvements\nwith BIAS. In summary, the use of BIAS in a multi-Bluetooth\nand WLAN environment leads to performance improvements\nfor WLAN, while it has little benefits on the Bluetooth performance\nConcluding remarks\nIn this paper we propose a scheduling technique, BIAS, aimed\nat eliminating interference on WLAN and alleviating the impact\nof interference on the Bluetooth performance. This work\naddresses the need to adjust to changes in the environment,\nsupport asymmetric traffic in the upstream and downstream,\nin addition to the use of different scheduling priorities.\n30\nN. GOLMIE\nTable 11\nBIAS pseudocode.\n1: Every N slots\n2:\nestimate_channel();\n3:\ncompute_credits();\n4: Every even TS\nf\n// Master transmission slot\n5:\nif TS\nf\n+ l\ndn\nis clear\n// Master can receive in next slot\n6:\n{\n7:\nA\nf\ndata\n= {set of high priority slaves s.t. ((f \"good\") and (qsize > 0) and (c\ndn\n>\n0)}\n8:\nA\nf\npoll\n= {set of high priority slaves s.t. ((f \"good\") and (c\nup\n>\n0))}\n9:\nB\nf\ndata\n= {set of low priority slaves s.t. ((f \"good\") and (qsize > 0))}\n10:\nB\nf\npoll\n= {set of low priority slaves s.t. 
((f \"good\") and (c\nup\nc\ndn\n>\n0))}\n11:\n// Service high priority slaves first\n12:\nif (A\nf\ndata\n= )\n// transmit data packets\n13:\n{\n14:\ni\n= max\nA\nf\ndata\n(w\ni\nup\nw\ni\ndn\n)\n// select device i with the largest weight\n15:\ntransmit data packet of size l\ndn\nto slave i\n16:\nc\ni\ndn,up\n= c\ni\ndn,up\n- l\nidn,up\n;\n// decrement credit counter\n17:\nw\ni\ndn,up\n= (1 - u\ni\n)\nc\ni\ndn,up\n;\n// update weights\n18:\n}\n19:\nelse if (A\nf\npoll\n= )\n// transmit polls\n20:\n{\n21:\ni\n= max\nA\nf\npoll\n(w\ni\nup\n)\n// select device i with the largest weight\n22:\ntransmit poll to slave i\n23:\nc\ni\nup\n= c\ni\nup\n- l\niup\n;\n// decrement credit counter\n24:\nw\ni\nup\n= (1 - u\ni\n)\nc\ni\nup\n;\n// update weights\n25:\n}\n26:\n// Then service low priority slaves\n27:\nelse if (B\nf\ndata\n= )\n28:\n{\n29:\ni\n= max\nB\nf\ndata\n(w\ni\nup\nw\ni\ndn\n)\n// select device i with the largest weight\n30:\ntransmit data packet of size l\ndn\nto slave i\n31:\nif (c\ni\ndn\n>\n0) c\ni\ndn\n= c\ni\ndn\n- l\ni\ndn\n;\n// decrement credit counter\n32:\nelse c\ni\nup\n= c\ni\nup\n- l\ni\ndn\n;\n// decrement credit counter\n33:\nw\ni\ndn,up\n= (1 - u\ni\n)\nc\ni\ndn,up\n;\n// update weights\n34:\n}\n35:\nelse if (B\nf\npoll\n= )\n// transmit polls\n36:\n{\n37:\ni\n= max\nB\nf\npoll\n(w\ni\nup\n)\n// select device i with the largest weight\n38:\ntransmit poll to slave i\n39:\nif (c\nup\n>\n0) c\ni\nup\n= c\ni\nup\n- l\niup\n;\n// decrement credit counter\n40:\nelse c\ni\ndn\n= c\ni\ndn\n- l\niup\n;\n// decrement credit counter\n41:\nw\ni\ndn,up\n= (1 - u\ni\n)\nc\ni\ndn,up\n;\n// update weights\n42:\n}\n43:\n}\nThe performance results obtained are summarized as follows\n. First, BIAS eliminates packet loss even in the worst\ninterference case when more than 3/4 of the spectrum are occupied\nby other devices. Delay is slightly increased over the\nreference scenario (when no interference is present). This increase\nvaries between 1 to 5 ms on average. Furthermore,\nBIAS is able to rapidly adjusts to changes in the channel. The\nchannel estimation transient time can be as low as 1.5 ms\nand 250 s for DH5 and DH1 packets, respectively. In addition\n, BIAS supports QOS and maintains a low access delay\nfor delay-sensitive traffic such as video applications. Finally,\nwe observe that the use of BIAS is not as effective to mitigate\ninterference caused by other Bluetooth piconets. In this case,\nwe note no improvements in access delay and packet loss results\n, which are comparable to results obtained with Round\nRobin (RR).\nAn immediate next step for our work consists of developing\na channel estimation procedure that is able to differentiate\nbetween different types of interference, namely, WLAN and\nBluetooth interference. Our preliminary results indicate that\nthis may be helpful in a multi-Bluetooth and WLAN environment\n.\nBLUETOOTH DYNAMIC SCHEDULING AND INTERFERENCE MITIGATION\n31\nAcknowledgements\nThe author would like to thank O. Rebala and A. Tonnerre for\ntheir help in developing the simulation models and compiling\nthe results.\n\nReferences\n[1] Bluetooth Special Interest Group, Specifications of the Bluetooth System\n, Vol. 1, v. 1.0B \"Core\" and Vol. 2, v. 1.0B \"Profiles\" (December\n1999).\n[2] A. El-Hoiydi, Interference between Bluetooth networks upper bound\non the packet error rate, IEEE Communications Letters 5 (June 2001)\n245247.\n[3] D. Fumolari, Link performance of an embedded Bluetooth personal\narea network, in: Proceedings of IEEE ICC'01, Vol. 
8, Helsinki, Finland\n(June 2001) pp. 25732577.\n[4] N. Golmie, N. Chevrollier and I. Elbakkouri, Interference aware Bluetooth\npacket scheduling, in: Proceedings of GLOBECOM'01, Vol. 5,\nSan Antonio, TX (November 2001) pp. 28572863.\n[5] N. Golmie and F. Mouveaux, Interference in the 2.4 GHz ISM band:\nImpact on the Bluetooth access control performance, in: Proceedings\nof IEEE ICC'01, Vol. 8, Helsinki, Finland (June 2001) pp. 25402545.\n[6] N. Golmie, R.E. Van Dyck and A. Soltanian, Interference of Bluetooth\nand IEEE 802.11: Simulation modeling and performance evaluation\n, in: Proceedings of the Fourth ACM International Workshop on\nModeling, Analysis, and Simulation of Wireless and Mobile Systems,\nMSWIM'01, Rome, Italy (July 2001) pp. 1118. Extended version appeared\nin ACM Wireless Networks 9(3) (2003) 201211.\n[7] I. Howitt, V. Mitter and J. Gutierrez, Empirical study for IEEE 802.11\nand Bluetooth interoperability, in: Proceedings of IEEE Vehicular\nTechnology Conference (VTC), Vol. 2 (Spring 2001) pp. 11031113.\n[8] IEEE Std. 802-11, IEEE Standard for Wireless LAN Medium Access\nControl (MAC) and Physical Layer (PHY) Specification (June 1997).\n[9] J. Lansford, R. Nevo and E. Zehavi, MEHTA: A method for coexistence\nbetween co-located 802.11b and Bluetooth systems, IEEE\nP802.11 Working Group Contribution, IEEE P802.15-00/360r0 (November\n2000).\n[10] S. Shellhammer, Packet error rate of an IEEE 802.11 WLAN in the\npresence of Bluetooth, IEEE P802.15 Working Group Contribution,\nIEEE P802.15-00/133r0, Seattle, WA (May 2000).\n[11] B. Treister, A. Batra, K.C. Chen and O. Eliezer, Adapative frequency\nhopping: A non-collaborative coexistence mechanism, IEEE P802.11\nWorking Group Contribution, IEEE P802.15-01/252r0, Orlando, FL\n(May 2001).\nNada Golmie received the M.S.E. degree in computer\nengineering from Syracuse University, Syracuse\n, NY, and the Ph.D. degree in computer science\nfrom the University of Maryland, College Park, MD.\nSince 1993, she has been a member of the Advanced\nNetwork Technologies Division of the National Institute\nof Standards and Technology (NIST). Her research\nin traffic management and flow control led to\nseveral papers presented at professional conferences,\njournals and numerous contributions to international\nstandard organizations and industry led consortia. Her current work is focused\non the performance evaluation of protocols for Wireless Personal Area\nNetworks. Her research interests include modeling and performance analysis\nof network protocols, media access control, and quality of service for IP\nand wireless network technologies. She is the vice-chair of the IEEE 802.15\nCoexistence Task Group.\nE-mail: nada.golmie@nist.gov", "keywords": "WLAN;BIAS;QoS;inteference;dynamic scheduling;Bluetooth;scheduling priorities;interference;coexistence;MAC scheduling;WPANs;WPAN"} {"name": "46", "title": "Breadth-First Search Crawling Yields High-Quality Pages", "abstract": "This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. 
We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.", "fulltext": "INTRODUCTION\nAccording to a study released in October 2000, the directly\naccessible \"surface web\" consists of about 2.5 billion\npages, while the \"deep web\" (dynamically generated web\npages) consists of about 550 billion pages, 95% of which are\npublicly accessible [9].\nBy comparison, the Google index released in June 2000\ncontained 560 million full-text-indexed pages [5]. In other\nwords, Google -- which, according to a recent measurement\n[6], has the greatest coverage of all search engines -covers\nonly about 0.1% of the publicly accessible web, and\nthe other major search engines do even worse.\nIncreasing the coverage of existing search engines by three\norders of magnitude would pose a number of technical challenges\n, both with respect to their ability to discover, download\n, and index web pages, as well as their ability to serve\nqueries against an index of that size. (For query engines\nbased on inverted lists, the cost of serving a query is linear\nto the size of the index.) Therefore, search engines should\nattempt to download the best pages and include (only) them\nin their index.\nCho, Garcia-Molina, and Page [4] suggested using connectivity\n-based document quality metrics to direct a crawler towards\nhigh-quality pages. They performed a series of crawls\nover 179,000 pages in the stanford.edu domain and used\nCopyright is held by the author/owner.\nWWW10, May 1-5, 2001, Hong Kong.\nACM 1-58113-348-0/01/0005.\ndifferent ordering metrics -- breadth-first, backlink count,\nPageRank [2], and random -- to direct the different crawls.\nUnder the breath-first ordering, pages are crawled in the order\nthey are discovered. Under the backlink ordering, the\npages with the highest number of known links to them are\ncrawled first. Under the PageRank ordering, pages with the\nhighest PageRank (a page quality metric described below)\nare crawled first. Under the random ordering, the crawler\nselects the next page to download at random from the set\nof uncrawled pages. (For repeatability, these crawls were\n\"virtual\"; that is, they were performed over a cached copy\nof these 179,000 pages.) Cho et al. evaluated the effectiveness\nof each ordering metric by examining how fast it led\nthe crawler to all the \"hot\" pages. In this context, a \"hot\"\npage is a page with either a high number of links pointing\nto it, or a page with a high PageRank. They found\nthat using the PageRank metric to direct a crawler works\nextremely well. However, they also discovered that performing\nthe crawl in breadth-first order works almost as well, in\nparticular if \"hot\" pages are defined to be pages with high\nPageRank.\nThis paper extends the results of Cho et al. regarding the\neffectiveness of crawling in breadth-first search order, using\na much larger and more diverse data set. While Cho's work\nwas based on a crawl of 179,000 pages from the stanford.edu\ndomain, we performed a crawl of 328 million pages over the\nentire web, covering more than 7 million distinct hosts. We\nuse connectivity-based page quality metrics, namely Brin\nand Page's PageRank and variations of it, to measure the\nquality of downloaded pages over the life of the crawl.\nWe find that not only does breadth-first search download\nthe hot pages first, but also that the average quality of the\npages decreased over the duration of the crawl. 
We also suggest that our crawler's modifications to strict breadth-first search -- made to increase the overall download rate and to avoid overloading any given web server -- enhance its likelihood of retrieving important pages first.
The remainder of this paper is structured as follows: Section 2 reviews the PageRank metric we used to evaluate the effectiveness of crawling in breadth-first search order. Section 3 describes the tools we used to conduct our experiments. Section 4 describes the experiments we performed, and the results we obtained. Finally, section 5 offers concluding remarks.
PAGERANK
There are many conceivable metrics for judging the quality of a web page: by analyzing its content, by measuring its popularity (that is, how often it is viewed), or by examining its connectivity (that is, by determining which other pages link to this page, and vice versa). Metrics based on connectivity have the advantages that they do not require information that is not easily accessible (such as page popularity data), and that they are easy to compute, so they scale well to even very large page collections. They also require retrieving only the links on each page, not the full page contents. Storing the full page contents requires several kilobytes per page, one to two orders of magnitude more than just storing the links.
PageRank is the connectivity-based page quality measure suggested by Brin and Page [2]. It is a static measure; it is designed to rank pages in the absence of any queries. That is, PageRank computes the \"global worth\" of each page. Intuitively, the PageRank measure of a page is similar to its in-degree, which is a possible measure of the importance of a page. The PageRank of a page is high if many pages with a high PageRank contain links to it, and a page containing few outgoing links contributes more weight to the pages it links to than a page containing many outgoing links. The PageRank of a page is expressed mathematically as follows. Suppose there are T total pages on the web. We choose a parameter d (explained below) such that 0 < d < 1; a typical value of d might lie in the range 0.1 < d < 0.15. Let pages p_1, p_2, ..., p_k link to page p. Let R(p_i) be the PageRank of p_i and C(p_i) be the number of links out of p_i. Then the PageRank R(p) of page p is defined to satisfy:
R(p) = d/T + (1 - d) * sum_{i=1}^{k} R(p_i)/C(p_i)
This equation defines R(p) uniquely, modulo a constant scaling factor. If we scale R(p) so that the PageRanks of all pages sum to 1, R(p) can be thought of as a probability distribution over pages.
The PageRank distribution has a simple interpretation in terms of a random walk. Imagine a web surfer who wanders the web. If the surfer visits page p, the random walk is in state p. At each step, the web surfer either jumps to a page on the web chosen uniformly at random, or the web surfer follows a link chosen uniformly at random from those on the current page. The former occurs with probability d, the latter with probability 1 - d. The equilibrium probability that such a surfer is at page p is simply R(p). An alternative way to say this is that the average fraction of the steps that a walk spends at page p is R(p) over sufficiently long walks. This means that pages with high PageRank are more likely to be visited than pages with low PageRank.
In our experiments, we set d = 1/7, which is approximately 0.14. 
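As an illustration of this definition, the sketch below runs the power iteration with the paper's d = 1/7. It is not the authors' implementation (which operates on the CS2 link database described in the next section), and it spreads the weight of pages without outgoing links evenly over all pages, in line with the modification described just below.

def pagerank(out_links, d=1/7, iterations=100):
    # out_links maps each page to the list of pages it links to.
    # Scores are kept normalised so that they sum to 1.
    pages = list(out_links)
    T = len(pages)
    r = {p: 1.0 / T for p in pages}
    for _ in range(iterations):
        # Weight of dangling pages (no outgoing links) is spread over all pages.
        dangling = sum(r[p] for p in pages if not out_links[p])
        new = {p: d / T + (1 - d) * dangling / T for p in pages}
        for p in pages:
            for q in out_links[p]:
                new[q] += (1 - d) * r[p] / len(out_links[p])
        r = new
    return r

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": []}))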
We also modified\nPageRank slightly so that pages with no outgoing links\ncontribute their weight equally to all pages. That is, the\nrandom surfer is equally likely to jump to any page from\na page with no outgoing links. We ran experiments using\nboth the original PageRank algorithm, which does not distinguish\nbetween links to pages on the same versus different\nhosts, and a variant of PageRank which only considers links\nto different hosts.\nTOOLS\nWe used two tools in conducting this research: Mercator\nand the Connectivity Server 2, both developed at our lab.\nWe used Mercator to crawl the web, and the Connectivity\nServer 2 to provide fast access to the link information downloaded\nfrom the crawl.\nMercator is an extensible, multithreaded, high-performance\nweb crawler [7, 10]. It is written in Java and is highly\nconfigurable. Its default download strategy is to perform\na breadth-first search of the web, with the following three\nmodifications:\n1. It downloads multiple pages (typically 500) in parallel.\nThis modification allows us to download about 10 million\npages a day; without it, we would download well\nunder 100,000 pages per day.\n2. Only a single HTTP connection is opened to any given\nweb server at any given time.\nThis modification is\nnecessary due to the prevalence of relative URLs on the\nweb (about 80% of the links on an average web page\nrefer to the same host), which leads to a high degree\nof host locality in the crawler's download queue. If\nwe were to download many pages from the same host\nin parallel, we would overload or even crash that web\nserver.\n3. If it took t seconds to download a document from a\ngiven web server, then Mercator will wait for 10t seconds\nbefore contacting that web server again. This\nmodification is not strictly necessary, but it further\neases the load our crawler places on individual servers\non the web. We found that this policy reduces the rate\nof complaints we receive while crawling.\nFor the experiments described below, we configured Mercator\nto extract all the links from each downloaded page\nand save them to disk; for disk space reasons, we did not\nretain the pages themselves. We conducted a crawl that attempted\nto download 532 million pages over the course of 58\ndays (which we refer to as days 1 to 58 throughout the paper\n). Of all those download attempts, 328 million returned\nvalid, unique HTML pages; the others resulted in TCP- and\nDNS-errors, non-200 HTTP return codes, non-HTML documents\n, or duplicates. Mercator's download rate decreased\nover the course of the crawl, due to increasing access times\nto one of its disk-based data structures that keeps track of\nwhich URLs have already been seen. The median download\nday was 22; the mean download day was 24.5.\nThe extracted links data was then loaded into the Connectivity\nServer 2 (CS2) [11], a database for URLs and links.\nA build of CS2 takes a web crawl as input and creates a\ndatabase representation of the web graph induced by the\npages in the crawl. A CS2 database consists of all URLs that\nwere crawled, extended with all URLs referenced at least\nfive times by the crawled pages. (Incorporating uncrawled\nURLs with multiple links pointing to them ensured that we\ndid not ignore any popular URLs. Setting the threshold at\nfive incoming links reduced the set of uncrawled URLs by\nover 90%, which enabled us to fit the database within the 16\nGB of RAM available to us.) 
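The inclusion rule just described (every crawled URL, plus any uncrawled URL referenced at least five times) can be written down directly; the names below are illustrative and do not correspond to the CS2 code itself.

from collections import Counter

def cs2_url_set(crawled_urls, extracted_links, min_inlinks=5):
    # extracted_links is an iterable of (source_url, target_url) pairs taken
    # from the crawled pages; crawled_urls is the set of pages actually fetched.
    crawled = set(crawled_urls)
    inlinks = Counter(dst for _, dst in extracted_links if dst not in crawled)
    popular_uncrawled = {url for url, n in inlinks.items() if n >= min_inlinks}
    return crawled | popular_uncrawled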
The CS2 database also contains\nall links among those URLs and host information for\neach URL. It maps each URL to all of its outgoing and its\nincoming links. It is possible to get all the incoming links\nfor a given URL, or just the links from different hosts.\nCS2 stores links in both directions in, on average, 2.4\nbytes per link (as compared to 8 bytes per link in the original\nconnectivity server (CS1) described in [1]). Like CS1,\n115\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\n50\n55\nDay of crawl\n0\n2\n4\n6\n8\nAverage PageRank\nFigure 1: Average PageRank score by day of crawl\nCS2 is designed to give high-performance access when run\non a machine with enough RAM to store the database in\nmemory. On the 667 MHz Compaq AlphaServer ES40 with\n16 GB of RAM used in our experiments, it takes 70-80 ms\nto convert a URL into an internal id or vice versa, and 0.1\nms/link to retrieve each incoming or outgoing link as an internal\nid. The database for our crawl of 328 million pages\ncontained 351 million URLs and 6.1 billion links. Therefore,\none iteration of PageRank ran in about 15 minutes.\nAVERAGE PAGE QUALITY OVER A LONG CRAWL\nIn this section, we report on our experiments. We implemented\nPageRank and its variants over the CS2 interface,\nand ran each algorithm for 100 iterations on the 6.1 billion\nlink database. (In all our experiments, the PageRank computation\nconverged within less than 100 iterations.)\nAlthough the PageRank scores are conventionally normalized\nto sum to 1 (making it easier to think of them as a\nprobability distribution), we normalized them to sum to the\nnumber of nodes in the graph (351 million). This way, the\naverage page has a PageRank of 1, independent of the number\nof pages.\nFigure 1 shows the average PageRank of all pages downloaded\non each day of the crawl. The average score for pages\ncrawled on the first day is 7.04, more than three times the average\nscore of 2.07 for pages crawled on the second day. The\naverage score tapers from there down to 1.08 after the first\nweek, 0.84 after the second week, and 0.59 after the fourth\nweek. Clearly, we downloaded more high quality pages, i.e.,\npages with high PageRank, early in the crawl than later\non. We then decided to examine specifically when we had\ncrawled the highest ranked pages.\nWe examined the pages with the top N PageRanks, for\nincreasing values of N from 1 to 328 million (all of the pages\ndownloaded). Figure 2 graphs the average day on which we\ncrawled the pages with the highest N scores. Note that the\nhorizontal axis shows the values of N on a log scale.\nAll of the top 10 and 91 of the top 100 pages were crawled\non the first day. There are some anomalies in the graph\nbetween N equals 100 and 300, where the average day fluctuates\nbetween 2 and 3 (the second and third days of the\ncrawl). These anomalies are caused by 24 pages in the top\n300 (8%) that were downloaded after the first week. Most of\nthose pages had a lot of local links (links from pages on the\nsame host), but not many remote links. In other words, the\n1\n10\n100\n1000\n10000 100000 1e+06\n1e+07\n1e+08\ntop N\n5\n10\n15\n20\n25\nAverage day top N pages were crawled\nFigure 2: Average day on which the top N pages\nwere crawled\npages on the same host \"endorse\" each other, but few other\nhosts endorse them. We address this phenomenon later in\nthe last experiment, shown in Figure 4. 
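The per-day averages behind figure 1 and the top-N statistic behind figure 2 reduce to simple aggregations over (PageRank score, crawl day) pairs. The sketch below assumes such pairs have already been extracted from the crawl log and the PageRank run, with scores normalised so the overall mean is 1.

from collections import defaultdict

def average_pagerank_by_day(pages):
    # pages: list of (score, crawl_day) pairs; returns {day: mean score},
    # the quantity plotted in figure 1.
    totals, counts = defaultdict(float), defaultdict(int)
    for score, day in pages:
        totals[day] += score
        counts[day] += 1
    return {day: totals[day] / counts[day] for day in totals}

def average_crawl_day_of_top_n(pages, n):
    # Mean crawl day of the n highest-scoring pages, as plotted in figure 2.
    top = sorted(pages, key=lambda page: page[0], reverse=True)[:n]
    return sum(day for _, day in top) / len(top)

pages = [(7.0, 1), (2.1, 2), (0.8, 14), (0.6, 30)]
print(average_pagerank_by_day(pages), average_crawl_day_of_top_n(pages, 2))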
After N equals 400,\nthe curve steadily increases to day 24.5, the mean download\nday of the entire crawl.\nOur next experiment checks that pages with high PageRank\nare not ranked high only because they were crawled\nearly. For example, a page whose outgoing links all point\nto pages with links back to it might have an artificially high\nPageRank if all of its outgoing links have been crawled, but\nnot too many other pages. For this experiment we ran the\nPageRank algorithm on the graph induced by only the first\n28 days of the crawl. This graph contains 217 million URLs\nand 3.8 billion links between them. We then compared the\ntop ranked pages between the two data sets. We found that\nof the top 1 million scoring pages, 96% were downloaded\nduring the first 4 weeks, and 76% of them were ranked in\nthe top 1 million pages in the 28 day data set. That is, it\nwas clear that those pages were important even before the\ncrawl had finished.\nFigure 3 generalizes these statistics: for each value of N,\nwe plot the percentage of overlap between the top N scoring\npages in the 28 day crawl versus the 58 day crawl. Although\nthe top few pages are different, by the top 20 ranked pages\nthere is an 80% overlap. The overlap continues in the 60-80%\nrange through the extent of the entire 28 day data\nset. This figure suggests that breadth-first search crawling\nis fairly immune to the type of self-endorsement described\nabove: although the size of the graph induced by the full\ncrawl is about 60% larger than the graph induced by the 28\nday crawl, the longer crawl replaced only about 25% of the\n\"hot\" pages discovered during the first 28 days, irrespective\nof the size of the \"hot\" set.\nSome connectivity-based metrics, such as Kleinberg's algorithm\n[8], consider only remote links, that is, links between\npages on different hosts. We noticed that some anomalies in\nFigure 2 were due to a lot of local links, and decided to experiment\nwith a variant of the PageRank algorithm that only\npropagates weights along remote links. This modification of\nPageRank counts only links from different hosts as proper\nendorsements of a page; links from the same host are viewed\nas improper self-endorsement and therefore not counted.\nFigure 4 shows our results: the average PageRank for\npages downloaded on the first day is even higher than when\nall links are considered. The average PageRank for the first\nday is 12.1, while it's 1.8 on the second day and 1.0 on the\n116\n1\n10\n100\n1000\n10000\n100000\n1e+06\n1e+07\n1e+08\ntop N pages\n0\n20\n40\n60\n80\n100\n% overlap between data sets\nFigure 3: The percent overlap between the top N\nranked pages in the first 28 vs all 58 days of the\ncrawl\nfourth day. The average PageRank then declines gradually\ndown to 0.6 on the last day. Notice that the average PageRank\non the first day of crawling is higher than in Figure\n1, and that the curve falls more sharply. This drop indicates\nthat our crawling strategy is not biased toward self-endorsing\nhosts, as a crawler using the standard version of\nPageRank would be. We believe that this lack of bias is due\nin part to our crawler's politeness policies, which impose a\nrate limit on its accesses to any particular host.\nThere are some flaws with a metric based only on remote\nlinks. For example, http://www.yahoo.com/ has a very\nhigh PageRank score. However, it only has local outlinks,\nso its weight gets evenly distributed over all pages in the\ngraph, rather than just to the other pages in Yahoo! 
to\nwhich it points. Transitively, the pages on other hosts to\nwhich Yahoo! links do not benefit from the high score of\nhttp://www.yahoo.com/. In the future work section below,\nwe outline some ideas for remedying this problem.\nCONCLUSIONS\nThe experiments described in this paper demonstrate that\na crawler that downloads pages in breadth-first search order\ndiscovers the highest quality pages during the early stages\nof the crawl. As the crawl progresses, the quality of the\ndownloaded pages deteriorates. We speculate that breadth-first\nsearch is a good crawling strategy because the most\nimportant pages have many links to them from numerous\nhosts, and those links will be found early, regardless of on\nwhich host or page the crawl originates.\nDiscovering high-quality pages early on in a crawl is desirable\nfor public web search engines such as AltaVista or\nGoogle, given that none of these search engines is able to\ncrawl and index more than a fraction of the web.\nOur results have practical implications to search engine\ncompanies. Although breadth-first search crawling seems to\nbe a very natural crawling strategy, not all of the crawlers\nwe are familiar with employ it. For example, the Internet\nArchive crawler described in [3] does not perform a breadth-first\nsearch of the entire web; instead, it picks 64 hosts at a\ntime and crawls these hosts in parallel. Each host is crawled\nexhaustively; links that point to other hosts are saved to seed\nsubsequent crawls of those hosts. This crawling strategy has\nno bias towards high-quality pages; if the hosts to be crawled\nare picked in random order, the quality of downloaded pages\nwill be uniform throughout the crawl.\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\n50\n55\nDay of crawl\n0\n2\n4\n6\n8\n10\n12\nAverage PageRank (remote links only)\nFigure 4:\nAverage PageRank when only remote\nlinks are considered\nSimilarly, the Scooter web crawler used until recently by\nAltaVista downloaded pages in essentially random order.\n(At this point, AltaVista is using Mercator.) This approach\nmade it easier to provide politeness guarantees -- essentially,\nit spread the load imposed by the crawler evenly over all web\nservers -- but as a result, the quality of the discovered pages\nis uniform over the life of the crawl.\nWe cannot make any statements about other large-scale\nweb crawlers.\nMost search engine companies treat their\ncrawling strategy as a trade secret, and have not described\nit in the literature.\nCho et al. [4] showed that using a connectivity-based ordering\nmetric for downloads, such as PageRank, will steer\nthe crawler towards even higher-quality pages than using\nbreadth-first search. However, computing PageRank values\nfor several hundred million or more pages is an extremely\nexpensive computation. It took us over a day to compute\nthe PageRanks of our graph of 351 million pages, despite\nthe fact that we had the hardware resources to hold the entire\ngraph in memory! Using PageRank to steer a crawler\nwould require multiple such computations over larger and\nlarger graphs, in order to take newly discovered pages into\naccount, and is essentially infeasible in real time. On the\nother hand, crawling in breadth-first search order provides\na fairly good bias towards high quality pages without the\ncomputational cost. We believe that crawling in breadth-first\nsearch order provides the better tradeoff.\nFUTURE WORK\nThere are two directions in which we would like to extend\nthis work. 
One direction is to try a variant of PageRank\nwhich weighs links to pages on remote hosts differently than\nlinks to other pages on the same host. From the experiment\nthat generated Figure 4 above, we learned that remote links\nshould count more than local links, but that weights should\nbe propagated along local links as well (e.g., to distribute the\nweight of http://www.yahoo.com/ to the pages that Yahoo!\nrecommends). We suspect that some search engines already\nuse different weights for links, but there has been no formal\nstudy of how to divide the weights among the links or even\nwhether the division should be static (e.g., remote links get\n80% of the total weight) or proportional to the number of\ntotal links (e.g., each remote link gets four times the weight\nof each local link).\nThe other direction is to try different connectivity-based\n117\nmetrics. While PageRank is the only connectivity measure\nwe know aimed at ranking all of the pages on the world wide\nweb, Kleinberg's algorithm [8] is another well-known connectivity\nanalysis algorithm targeted towards computing quality\nscores for pages. The algorithm computes two scores for\neach document: a hub score and an authority score. Pages\nwith high authority scores are expected to have high-quality\ncontent; the authority scores are similar in intent to PageRanks\n. Kleinberg's algorithm is designed to rank the results\nof a query to a search engine, and only considers a small set\nof pages when it computes authority scores. However, we\nbelieve that we can extend the algorithm to consider the\nentire graph of the web.\nREFERENCES\n[1] K. Bharat, A. Broder, M. Henzinger, P. Kumar, and\nS. Venkatasubramanian. The connectivity server: Fast\naccess to linkage information on the web. In\nProceedings of the 7th International World Wide Web\nConference, pages 469477, Brisbane, Australia, April\n1998. Elsevier Science.\n[2] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual web search engine. In Proceedings of the\n7th International World Wide Web Conference, pages\n107117, Brisbane, Australia, April 1998. Elsevier\nScience.\n[3] M. Burner. Crawling towards eternity: Building an\narchive of the world wide web. Web Techniques\nMagazine, 2(5):3740, May 1997.\n[4] J. Cho, H. Garcia-Molina, and L. Page. Efficient\ncrawling through URL ordering. In Proceedings of the\n7th International World Wide Web Conference, pages\n161172, Brisbane, Australia, April 1998. Elsevier\nScience.\n[5] Google Inc. Press release: \"Google launches world's\nlargest search engine.\" June 26, 2000. Available at\nhttp://www.google.com/press/pressrel/pressrelease26.html\n[6] M. Henzinger, A. Heydon, M. Mitzenmacher, and\nM. Najork. On near-uniform URL sampling. In\nProceedings of the 9th International World Wide Web\nConference, pages 295308, Amsterdam, Netherlands,\nMay 2000. Elsevier Science.\n[7] A. Heydon and M. Najork. Mercator: A scalable,\nextensible web crawler. World Wide Web,\n2(4):219229, Dec. 1999.\n[8] J. Kleinberg. Authoritative sources in a hyperlinked\nenvironment. In Proceedings of the 9th ACM-SIAM\nSymposium on Discrete Algorithms, pages 668677,\nSan Francisco, CA, Jan. 1998.\n[9] P. Lyman, H. Varian, J. Dunn, A. Strygin, and\nK. Swearingen. How much information? School of\nInformation Management and Systems, Univ. of\nCalifornia at Berkeley, 2000. Available at\nhttp://www.sims.berkeley.edu/how-much-info\n[10] Mercator Home Page.\nhttp://www.research.digital.com/SRC/mercator\n[11] J. L. Wiener, R. Wickremesinghe, M. Burrows,\nK. 
Randall, and R. Stata. Better link compression.\nManuscript in progress. Compaq Systems Research\nCenter, 2001.\nVITAE\nMarc Najork is a senior member of\nthe research staff at Compaq Computer\nCorporation's Systems Research\nCenter.\nHis current research focuses\non high-performance web crawling and\nweb characterization.\nHe was a principal\ncontributor to Mercator, the web\ncrawler used by AltaVista. In the past,\nhe has worked on tools for web surfing\n, 3D animation, information visualization\n, algorithm animation, and visual\nprogramming languages.\nHe received\na Ph.D. in Computer Science from\nthe University of Illinois at Urbana-Champaign\nin 1994.\nJanet L. Wiener is a member of\nthe research staff at Compaq Computer\nCorporation's Systems Research\nCenter.\nShe currently focuses on developing\nalgorithms to characterize the\nweb and tools (such as the Connectivity\nServer) to support those algorithms\n.\nPrior to joining Compaq in\n1998, she was a research scientist at\nStanford University working on data\nwarehousing, heterogeneous data integration\n, and semi-structured data. She\nreceived a Ph.D. from the University of\nWisconsin-Madison in 1995, and a B.A.\nfrom Williams College in 1989.\n118", "keywords": ";high quality pages;breadth first search;crawl order;ordering metrics;Crawling;crawling;PageRank;page quality metric;breadth-first search;connectivity-based metrics"} {"name": "47", "title": "Broadcasting Information via Display Names in Instant Messaging", "abstract": "Many instant messenger (IM) clients let a person specify the identifying name that appears in another person's contact list. We have noticed that many people add extra information to this name as a way to broadcast information to their contacts. Twelve IM contact lists comprising 444 individuals were monitored over three weeks to observe how these individuals used and altered their display names. Almost half of them changed their display names at varying frequencies, where the new information fell into seventeen different categories of communication supplied to others. Three themes encompass these categories: Identification (\"who am I\"?), Information About Self (\"this is what is going on with me\") and Broadcast Message (\"I am directing information to the community\"). The design implication is that systems supporting person to person casual interaction, such as IM, should explicitly include facilities that allow people to broadcast these types of information to their community of contacts.", "fulltext": "INTRODUCTION\nMillions of people use instant messenger (IM) clients daily to\ncommunicate with friends, relatives, co-workers and even online\ndating contacts. With this explosion of use, researchers have\ntaken to studying instant messaging and its impact. Much of the\nresearch regarding IM has been focused on its primary uses:\nmaintaining awareness of a contact's presence and availability,\nhow people (usually dyads) converse via text chat, and how they\nexploit other features such as file sharing and receipt of\nnotifications. For example, studies of IM use in the workplace\nexpose how it supports collaboration, communication and\nproject activities [3, 10, 13], as well as its negative effects [15]\nsuch as disruption [4]. In more social contexts, researchers\nfound a positive relationship between the amount of IM use and\nverbal, affective and social intimacy [9]. 
IM also proves\nimportant in the life of teens, where it helps support the\nmaintenance of their social relationships [8].\nOther computer-mediated communication tools, such as MUDs\n(Multi-User Domains or Multi-User Dungeons), IRC (Internet\nRelay Chat), and broadcast messaging tools also allow\nspontaneous real-time (or synchronous) communication with\nother users. However, there are significant differences between\nthem. IM is predominately used between people who are known\nto each other outside of cyberspace, e.g., friends and associates.\nIM conversations are also private, and tend to be between pairs\nof people. They are also person centered and not group centered:\nwhile a contact list may collect one's `buddies', these lists are\nnot shared across contacts. In contrast, MUDs and IRC are\npublic channels, where any conversation is heard by all people\ncurrently in the MUD or IRC. Most tend to be used by\n`strangers', i.e., those who are unknown to each other in real\nspace, and usually involve more than two individuals. Indeed,\nthe norm is for participants to protect their anonymity by\ndisplaying a pseudonym rather than their real names. Any\npersonal messages that are posted are usually in relation to their\nvirtual identity. However, a few experimental MUD-like\nsystems do focus on teams, where they provide its members\nwith rich awareness information of one another and more power\nin their collaboration tools, e.g., Sideshow [2], Notification\nCollage [7], or Community Bar [12]. Broadcast messaging tools\n[11] sit in the middle, where real-time messages usually\ncomprising notifications and announcements (not conversations)\nare sent to large groups of people who are somehow associated\nwith one another, e.g., Tickertape [6].\nThe big `win' of IM is that it provides one's ad hoc set of\ncontacts with awareness of one's online state, which in turns\nserves as an estimate of one's availability for conversation.\nWhile not completely accurate [13], even this minimal\ninformation suffices to create opportunities for lightweight text-based\nconversations and to reduce the equivalent of `telephone\ntag'. While many research systems go far beyond IM in the rich\nawareness information they give to others [e.g., 2, 7, 12, 16],\nquestions remain about privacy implications of distributing this\ninformation.\nIM contacts are identified by the system through e-mail\naddresses. While unique, these email addresses may be cryptic\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nGROUP'05, November 69, 2005, Sanibel Island, Florida, USA.\nCopyright 2005 ACM 1-59593-223-2/05/0011...$5.00.\n\n89\nto a human viewer e.g., a person may not be able to infer that\n12gorwan@yahoo.com is really Gregor McEwan. Consequently,\nthe designers of most IM clients include a feature that lets a\nperson create and/or change their display name at any time; this\nname is shown to others instead of the email address. For\nexample, in MSN Messenger (Figure 1) a person can raise the\n`Personal Settings...' 
dialog by selecting the drop-down menu\nattached to their name (i.e., `Stephanie' in the figure), and edit\nthe `Display name' text field which we also call the display\nfield within it. All contacts immediately see this new name.\nBecause we are heavy IM users, we noticed that many of our\ncontacts change their display field to do more than simply\nidentify or label themselves. Figure 1 illustrates this, where we\nsee that various people have used this feature to publicly\nbroadcast what they are doing (e.g., `anitsirK Marking') or an\nevent in their life (e.g., `Employed'), or a personal state of mind\n(e.g., `Chasing Insanity'). Other examples we saw include using\nthe display field as a way to publicize personal status, specify\nlocation, post comments, ask questions, and even post popular\nculture references. These obviously augment IM's preset\navailability messages (i.e. away, busy, be right back) in far\nricher ways than the system was explicitly designed to support.\nWe believed that people's appropriation of the IM display name\nfeature into a public broadcast facility is a phenomenon worth\nexploring. Why was this space being appropriated for messages\nbroadcast to an entire contact list? What were users trying to\ncommunicate to others and how is this information different\nthan that in a normal IM conversation? How often do these\nmessages or alternate communications occur? To answer these\nand other questions, we conducted a three week study, where we\nmonitored changes in each person's display field within contact\nlists held by various users of MSN Messenger. We tracked how\noften contacts changed their display name, and what these\nchanges were. We also categorized these changes into\ncommunication purposes.\nAfter briefly summarizing some related work, we describe the\nmethodology used to acquire display name usage data. This is\nfollowed by our results, a discussion of the findings,\nimplications of the work, and recommendations for future work.\nRELATED WORK\nThere are a variety of articles describing how people identify\nthemselves on the Internet, usually in MUDs and IRC. Yet most\nof these stress how identity is formed through pseudo-anonymity\n[1,17], i.e., where a person creates a virtual identity to project a\ndifferent persona of who they are while protecting their real\nidentity. People's choices of names and/or avatars are usually\none part of identity creation. This work is not particularly\napplicable to IM, as people on a contact list are typically known\nto one another.\nGrinter & Palen's [8] study of teen use of IM is far more\nrelevant, and partially reflects our own interests. While their\nwork broadly considers IM as an emerging feature of teen life,\nthey do mention that teens found the preset availability\nmessages to be too impersonal. To combat feelings of exclusion\nor to avoid being rude, teens would personalize the display name\narea to include a message which explains their unavailability,\nchanges in their local environment (i.e., `Going quiet because\nMom just arrived'), and for justifying their lack of presence on\nthe system (i.e., `Out for dinner').\nThe use of IM names to broadcast messages is an everyday\nworld phenomenon, and has been anecdotally noticed by non-scientists\n. 
For example, one reporter noted in a newspaper\narticle that changes to her display name are her main form of IM\ncommunication rather than actual chat conversations [14].\nSocial scientists talk more generally about computer mediated\ncommunications and how they can be used to build\ncommunities. Etzioni and Etzioni [5] argue that in order to form\nand sustain bonds, a community of connected individuals needs\nwhat they call \"interactive broadcasting\". This is composed of\ntwo major elements:\n\n\nthe ability to broadcast messages to many people within the\ncommunity simultaneously, and\n\n\nthe ability for those addressed by the message to provide\nfeedback, not just to the message originator, but to other\nmessage recipients as well.\nIn this context, a broadcast message can be considered a request\nfor interaction from some (or all) members of a group [11]. A\nvariety of designers have implemented this broadcast capability\ninto their systems. For example, IRC, Notification Collage [7],\nCommunity Bar [12] and Tickertape [6] are all tools that\nimplement interactive broadcasting. A message (which may\ninclude multimedia information) can be posted and broadcast to\nthe group, and it is possible for everyone to view the information\nwithout directly contributing to the conversation. Those who\nwant to respond can do so, in full view of all users. All these\nsystems allow for communal feedback, i.e., where everyone sees\nthe response. Unlike IM, however, these systems include a\nstrong notion of a common group by providing a public space\nfor interaction.\nIn summary, there are discussions of how broadcasting\ninformation contributes to community building, and there are\nsystems that are based on public dissemination of information\nwithin a group. However, excepting a few discussions of this\n\nFigure 1: MSN Messenger; modified display names are\nvisible\n90\nphenomenon [8,14], there has been no real analysis of how\npeople have appropriated the display name feature of IM. Given\nthe importance and widespread of IM, we believe this analysis is\ncritical if we are to understand how we can improve IM systems.\nMETHODOLOGY\nThis study investigates how people use the display name feature\nin IM clients to broadcast information other than one's name.\nWe do this by capturing changes in each person's display field\nas they appear in contact lists over time and over everyday use,\nby asking people to explain what these changes meant, and by\ncounting, categorizing and analyzing these changes.\n3.1\n\nResearch questions\nWe wanted to identify three main behavioural patterns within\nour captured data:\n1.\n\nAt what frequency do users change the information in their\ndisplay field when using an IM client such as MSN\nMessenger?\n2.\n\nWhat are the main communication categories that represent\nthe information held by these display field changes?\n3.\n\nWhat is the frequency distribution of these categories?\nA fourth interesting but secondary question was:\n4. Are changes to the display name related to the demographics\nof age or sex?\n3.2\n\nParticipants\nWe had two classes of participants. Our primary participants\nwere those who made their contact list available to us. Our\nsecondary participants were those who comprised the people on\nthe contact lists.\nTwelve participants were recruited as primary participants, all\nComputer Science graduate students or faculty at the University\nof Calgary. They ranged in age from 23 to 50, and were regular\nusers of MSN Messenger. 
These participants provided access to\ntheir IM contact lists. They were also willing to annotate the\ncollected data. While the number of contacts on each person's\nlist varied somewhat, this variance was irrelevant to our study.\nOur secondary participants were the 444 contacts found on the\ncontact lists of the 12 primary participants. These contacts\ncovered a broad range of demographics and social relationships,\ni.e., fellow students, workmates, friends, family members and\nother relatives. While the display names used by these 444\npeople were collected as data, they were not contacted directly.\n3.3\n\nMaterials and Data Capture\nEach participant (whether primary or secondary) used their own\npre-existing and unaltered MSN Messenger client on their own\ncomputer (running Windows) for everyday purposes.\nWe wrote a logging program to collect all contact list data from\neach primary participant. It monitored every person's display\nfield as it appeared in the contact list. The software worked by\ntapping into the programming API of MSN Messenger\n(regardless of its version) to monitor activities within it.\nThis logging program was given only to the 12 primary\nparticipants. No special software was needed by the 444\nsecondary participants, as their data was captured via the\nlogging software on the primary participant's computer.\nThe 12 primary participants installed our software on whatever\ncomputers they wished. When installed, it worked in tandem\nwith MSN Messenger to collect data on everyday IM usage in\nthe field.\nThe program monitored whether the participant was logged in to\nMSN Messenger. If logged in, it recorded the initial set of\ndisplay names and any display name changes of the secondary\nparticipants on the contact list. The initial set of display names\nwere needed to notice if a change occurred since the primary\nparticipant's last login.\nAs part of our analysis, we used the standard features of\nMicrosoft Excel 2003 to sort and consolidate the data files.\nRelevant data was then transferred to Minitab v.14 to tally\ndistributions, calculate any statistics and create visual\nrepresentations of the data. Further analysis of the categories of\ncommunication used in the display field was conducted using\npaper cut-outs and post-it notes to create an affinity diagram;\nthis is detailed later.\n3.4\n\nMethod\nOnce primary participants agreed to participate in the study, we\ngave them instructions on how to install the logging program on\ntheir computer. We did not have to be present for this, so people\ncould install it on whichever computers they regularly used, be it\nat work or at home. The program then ran automatically; the\nonly indication of its operation was a small red notebook icon\nappearing in the participant's system tray. This icon allowed a\nparticipant to abort the collection process if they wished, but\nnone chose this option.\nData was collected for approximately three weeks, but did\nrequire the person to be logged onto MSN Messenger. If a\nprimary participant was not logged on, no data about their\ncontacts was recorded. This meant that some display field\nchanges of secondary participants could have been missed.\n3.5\n\nAnalysis\nAt the end of three weeks, the primary participants were\ninstructed to open the data file with Excel, and indicate the sex\nand approximate age of each listed contact member in a pre-designated\ncolumn. 
For each display name change, they were\nalso asked to categorize the type of information the contact was\ntrying to broadcast to others. We did not predefine any\ncategories. Participants created their own category labels and\ncategorized names into them however they wished. We chose\nthis approach because we felt that participants would have a far\nbetter understanding of the true meaning of a person's display\nfield changes than someone unfamiliar with the contact; we also\nfelt that as recipients of this information, their interpretation was\nimportant. We also believed that they would generate a greater\nand therefore richer breadth of categories.\nOnce the categorizations were completed, the data files were\ntransferred to the primary investigator. The investigator\nconsolidated all of the data files into one master file, and\nremoved any duplicate entries. These duplicate entries occurred\nfor two reasons.\n\n\nMore than one person had a particular contact on their list.\n\n\nEach time a participant logged in, their entire contact list\nwas recorded in the data file. If a contact had not changed\ntheir name while the participant was offline, a duplicate\nentry was created.\n91\nWhen duplicate entries occurred, all but the earliest occurrence\nof the display name change was removed.\nA category list was created for each primary participant based\non his or her individual categorizations of display name changes.\nBecause these category names could differ between participants,\nwe needed to re-categorize these names into a master category\nlist. To do this, all categories were printed on separate slips of\npaper for easy sorting. We then created an affinity diagram to\nresort these categories, where entries from all the lists were\nsorted into groups based on similarity. These groups then\nformed a master category (see Figure 2). A master category title\nwas then chosen that best represented the theme for the\ngrouping. After this master list was established, the entries in the\nconsolidated file were then re-categorized based on these new\ndivisions; this would allow us to create a distribution profile.\nWe should mention that many entries into the display field\ncontained more than one textual element, each of which could\nbe categorized differently. When this happened, we treated the\ndisplay field as holding multiple entries. An example of this is\nshown here, where\nthe contact's display\nfield contains two elements; `Johnboy' could be categorized as a\nName Variation, while `yonge&eglington' (a street junction in\nToronto) is categorized as an indicator of Location. In this case,\nthis display field entry would be split into two text fragments,\nwhere each fragment would be counted in the category that best\nfit. As we will see, these types of dual entry usually occurred\nbecause people tend to keep their names (or an identifying\nvariation thereof) visible to others in order to identify\nthemselves. Occasionally a display field would contain two\nelements where neither were identifiers. For example, the text\nshown here is categorized\nas two elements: `packing'\nis an Activity, and `sad to be leaving' is a Mood. 
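The consolidation and coding steps just described lend themselves to a small script. The sketch below is a minimal illustration only, assuming an in-memory list of logged records and hand-assigned category labels; the record layout, field names, and sample values are ours and do not describe the study's actual files.

```python
from collections import Counter
from datetime import datetime

# Illustrative records of display field changes captured by the logger:
# (contact id, time first seen, display field text).  This layout is an
# assumption made for the sketch, not the study's actual log format.
log = [
    ("c01", datetime(2005, 2, 1, 9, 0), "Johnboy yonge&eglington"),
    ("c01", datetime(2005, 2, 3, 8, 0), "Johnboy yonge&eglington"),   # re-recorded at a later login
    ("c02", datetime(2005, 2, 2, 21, 5), "packing, sad to be leaving"),
]

# Step 1: drop duplicates, keeping only the earliest occurrence of each
# (contact, display field) pair, as in the consolidated master file.
earliest = {}
for contact, first_seen, text in log:
    key = (contact, text)
    if key not in earliest or first_seen < earliest[key]:
        earliest[key] = first_seen
consolidated = [(c, t, txt) for (c, txt), t in earliest.items()]

# Step 2: each display field entry is split into text fragments, and every
# fragment carries the master category a human coder assigned to it.  The
# assignments below are invented purely to make the example run.
coded = {
    ("c01", "Johnboy yonge&eglington"): [("Johnboy", "Variations"),
                                         ("yonge&eglington", "Location")],
    ("c02", "packing, sad to be leaving"): [("packing", "Activities"),
                                            ("sad to be leaving", "Mood")],
}

# Step 3: tally how many fragments fall into each master category.
counts = Counter(category
                 for (contact, _, text) in consolidated
                 for (_, category) in coded[(contact, text)])
print(counts)   # e.g. Counter({'Variations': 1, 'Location': 1, 'Activities': 1, 'Mood': 1})
```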
Only rarely did display field entries contain more than two elements.

RESULTS

Our first research question was:

At what frequency do users change the information in their display field when using an IM client such as MSN Messenger?

Before answering this question, recall that the recording of display field changes of a secondary participant on a contact list only happened when the primary participant was logged on to MSN Messenger. If the primary participant was logged out, no display field changes to their contacts were recorded. While a single change would be noted by comparing the last recorded version of the contact's display field to the one recorded when the primary participant logs on, multiple changes during the logout period would be lost. This means we cannot calculate the exact display name change distribution across all contacts. Still, our numbers should be a good estimate of what happens. At the very least they represent a lower bound that somewhat underestimates how often display field changes occur. The data certainly suffices to indicate the range of activities and individual differences across 444 people.

Figure 3 illustrates the distribution of contacts according to how often they changed the contents of the display field. Our results show that 58% of our 444 contacts (258 people) never changed the contents of the display field during the three week period. For the remaining 42% of contacts (186 people), we counted a total of 1968 display name changes, or an average of 11 display name changes per person over the three week period, i.e., almost 4 times a week.

Figure 3: Distribution of contacts according to how often they change the display field contents (Never 58.1%, Rarely 12.2%, Weekly 5.6%, Several times a week 16.2%, Daily 3.4%, Several times a day 4.5%)

Figure 2: Example affinity diagram used to group participant categorizations into master categories.

However, this average is misleading, for we also found that people change their display names at different frequencies. We created six rate change categories. Based on a contact's data, we placed each contact into the category that best estimated that contact's change rate. Figure 3 displays this distribution of contacts among the six rate change categories. We see that the 42% of contacts who change their display name do so at different rates. About 8% (4.5% + 3.4%) of contacts change their names from once to several times a day. About 22% of them change their names less often, from once to several times a week (16.2% + 5.6%). The final 12% change it rarely, i.e., once or twice over the three week period.

The person who had the highest display field change rate is worth added discussion, as it suggests what happens with contacts who used this feature heavily. This person changed her display field early in the morning, and notified contacts when she arrived at school. Around 4 pm the changes started again, continuing until approximately 11 pm when she went to bed. Her changes would incorporate details on what was occupying her time. Changes would state particulars: when she was studying, babysitting, or watching TV, and her emotional reactions to these events. If she found something entertaining or interesting on TV, she would post quotes. If she was bored, she would put out a request for someone/anyone to call. In essence, this person used her display field as a web log, where she recorded and disseminated information to her community.
Even\nthough we had no further knowledge of this person, a sense of\nwho she was and what her life was garnered through all the\nchanges that she made to her display name.\n4.2\n\nCommunication categories\nOur second research question was:\nWhat are the main communication categories that represent\nthe information held by these display field changes?\nAfter analyzing the categories created by our primary\nparticipants through the affinity diagramming process, we\nidentified seventeen master categories. These are listed below in\nalphabetic order. A description of each category is given along\nwith illustrative examples taken from our data. Many examples\ncontain more than one textual element, usually an identifier, as\nwe present them as they appeared in the display field. To protect\nconfidentiality, name information has been changed.\nActivities include things or activities that a person has done in\nthe past, is currently involved in, or is about to participate in the\nfuture. It also includes countdowns to an upcoming event.\nExamples include:\n\n\nAmy - House hunting!\n\n\nJoe was drunk on a Tuesday...shameful.\n\n\nBraced: 60% done my portfolio!\nAdverts include advertisements or promotions for items or\nevents, and things that people have for sale.\n\n\nEaston Synergy Grip 100 Flex Iginla Blade Left (Brand\nspanking new): $225\n\n\nheadachey -- Tim Stuart Tribute and Fundraiser November\n6th @ 8PM -- ask for details\nComments are personal comments, expressions of an\nindividual's opinion and general statements on how they view\nthings in the world around them.\n\n\nJan[et] - Airlines are Evil\n\n\nBee - undocumented code should be illegal\n\n\nNancy: you don't need English to live in Vancouver\nDefault contains only the default unaltered entries to the display\nfield. After installation, the IM client displays a person's e-mail\naddress in the field. These may or may not actually contain a\nperson's name as part of the email address.\n\n\njohnsmith@hotmail.com\n\n\nJyn2l@hotmail.com\nDirections contain entries where a reader is being directed to a\nweb site or link. Examples are:\n\n\nBee-http://java.sun.com/features/1999/05/duke_gallery.html\n\n\njessie {http://littlemisskool.blogspot.com}\n\n\nCHECK THIS====> http://www.blitzkreigentt.com/....\nConstructed <====\nFun contains entries that contain puns, inside jokes, humorous\nstatements, and items placed for the amusement of others.\n\n\nMelanie. me: \"come see, its a lunar eclipse\"; kate: \"where?\"\n\n\nwhat do you call a fish with no eyes: f sh\n\n\nHuffy - Home is where you hang your @\n\n\nJoe Like\na Vermin, trapped for the very first time\n\nHandles contains those display name entries that hold a\nperson's handle. A handle is like a well known nickname: it is a\nconsistent title or name that people give themselves to represent\ntheir identity on the internet. As we will see later, IM handles\nare not used for the pseudo-anonymity purposes as found in IRC\npublic forums.\n\n\nhunnybear\n\n\nIceman\n\n\nspidermax\nLocation contains information about a person's current location\nor future destination. It can also contain travel information.\nMany times this location information is permanently attached to\nthe display name when localized at a particular computer, as in\n\"home\" or \"work\". 
This label can indicate to others the type of\ncommunications that are appropriate.\n\n\nMat Singh...going home in 10 days!\n\n\nIn the dominican republic\n\n\nDan James [Office]\n\n\nmike -> lab meeting\nMessages contain information of significance directed at an\nindividual on a person's contact list or to the group as a whole.\n\n\ndarren~thanks nate for the halo hookup\n\n\n\nSirMe - Happy Birthday, Angie!\n\n\nMelanie. Nick, ill be on the 3 30 or whatever bus at the\ncollege. <<school>>\nMood contains entries that give indications of a person's mood,\nfeelings, health or state of being.\n\n\ni give up\n\n\nAdam feels rejected\n\n\nbritney - disoriented haze\n\n\nJoe - as if shot in the head, yet still charging blindly forward.\n\n\nBee - double espresso\nwhee!!!\n\n\nMaggs - Not Feeling Well.\nName contains entries of a person's given name. This category\ncontains no nicknames, handles or variations on the name.\n\n\nRebecca\n\n\nFred Jones\n93\nNotices contains entries that give notice of a particular event,\nshare news or display announcements.\n\n\nDBosch [ We're home owners! ]\n\n\nTracey... down 24.2 lbs\n\n\nJennifer - party is cancelled\n\n\nNaKuL - new msn login\n\n\nGretchen -- Holy Cole's coming to vancouver!!\nPresence contains items which provide more detailed\ninformation about a person's online presence or availability\nbeyond the standard status indicators.\n\n\nBee - really am busy, only msg me in emergency\n\n\nMelanie. >> off for family time<<\n\n\nmike - reading at my desk/disregard (Away) status\n\n\nFlickerin: be Back at 630ish\nQuestions contain rhetorical questions and questions that are\nposed to stimulate response. This category also contains\nquestions that are requests for assistance, similar to those that\nappear in company broadcast messaging systems when a person\nis searching for an expert in a given area [6].\n\n\nLuke -- Anyone took CS322? I need some help with cilog!\n\n\nJoe - who keeps messing with my chair??\n\n\nShri- Needin' a Physics Toolkit w/Dynamics + Collisions +\nFields, any ideas?\n\n\nMelanie. Anyone have a working printer?\nQuotes contains quotations taken from movies, tv, books, plays\nor lyrics from music. It also contains references to pop culture.\n\n\nDusit - If you can dodge a wrench, you can dodge anything!\n\n\nb33qZ -- king jeremy the wicked... oh, rules his world...\n\n\nAndrea - so long and thanks for all the fish\nUnknown contains all the entries in which the meaning of the\ntext is too cryptic that it could not be categorized by either the\nprimary participant or the investigator. It is assumed that once\ndeciphered that each of these entries could be placed in one of\nthe other sixteen categories.\n\n\nb33qZ [nts:perri]\n\n\nAndy ~ Ah '\n\n\nBlack_Venom (In 432)\n\n\n\u00bb~-jd-~\u00ab-->\nSkRoNk\n<-- yeh social ppl\nVariations contain entries where the identifier is a variation on\nthe person's name. This can include an abbreviated version of\nthe full name, a nickname in which the given name is still\nidentifiable, or a variation in the way the letters of the name are\nprinted or ordered.\n\n\nDiAnNe\n\n\nkev\n\n\nMaggs\n\n\ntimbob\n\n\nEinahpets\n\n4.3\n\nCategory Distribution Frequency\nOur third research question was:\nWhat is the frequency distribution of these categories?\nFirst, the 2226 logged display fields were analyzed to reveal a\ntotal of 3603 elements (recall that some display fields could\nhave more than one information element in it). 
Second, each element was then located in a single communications category that best represented its meaning.

Figure 4 shows these category counts in two sections. The top part plots the Name, Variations and Handle categories. We separated these `Identification' categories from the other categories because the information they contain satisfies the original purpose of the display field, i.e., to hold identifying information. The frequency distribution of the remaining 14 categories is then listed.

The bars representing the counts of the number of elements within each of these categories are further distinguished into three groups. The lightest section of each bar represents the group of category elements that were the only element contained by the display field. The medium coloured section shows the number of category elements whose text coexisted with another element found in one of the three `Identification' categories in the display field. The darkest section of the bar groups category elements whose text coexisted in the display field with another element found in any category other than the three `Identification' categories.

Figure 4: Bar chart displaying category distribution

The figure shows that approximately 49%, or 1766/3603 of the categorized elements, were in one of the three `Identification' categories, i.e., Name (32.4%), Variations (10%) or Handle (6.4%). This makes sense, for meaningful self-identification is the expected use of the IM display name feature. The darkly colored regions of their bars also reveal that identification elements in total coexist with other pieces of information in the display field over 67% (1186/1766) of the time. For example, the Name was included with other elements 825/1168 (71%) of the time. Similarly, Variations and Handles were included in conjunction with other elements 205/359 (43%) and 156/239 (65%) of the time, respectively. Note that there are no medium coloured regions in these bars. This is because elements within the Name, Variations and Handle categories never co-existed with each other. They only occurred in conjunction with elements in the other 14 category types.

The other 14 categories of communication identify information unrelated to identification. Collectively, these categories comprise the other 51% of the total number of elements (1837 of 3603 total). Within these 1837 elements, we see that the most frequent categories of communication used are Mood at 19.4% (357/1837), Comments at 17.8% (327/1837), Activities at 16.6% (305/1837), Location at 12.5% (230/1837), and Messages at 8.3% (152/1837), followed by Quotes, Notices and Fun. The other categories occur less often, but still at a significant level. The modest size of the lightly coloured section of all these categories suggests that this information often appeared in tandem with other categories. Most of the time, this was one of the Name, Variations, or Handle elements, as represented by the medium-coloured section in each bar. Still, the presence of the darkly coloured bar sections showed that two non-identifier category elements may coexist in a display field.

4.4 Demographics of People Who Change Their Display Names

Our final research question was:

Are changes to the display name related to the demographics of age or sex?

The 444 contacts comprised somewhat more males than females. The primary participants reported 232 males, 189 females, and 1 male/female (the account was known to be used by a couple).
The sex of the remaining 22 contacts was not reported.

The dominant age range of the 444 contacts was 21-30 years old. Table 1 summarizes the age demographics of the 444 contacts, as reported by our 12 primary participants. Since the exact age of each contact was sometimes uncertain, we used age group categories to capture their estimated ages.

We then analyzed whether age or sex of a person was related to the number of changes that person made. First, we removed records for those contacts whose sex was not reported. We then performed a chi-square analysis on the remaining 421 contacts to determine whether there was a relationship between sex and the rate that a person changed their display field. Sex and display name change rate were found to be independent, χ²(5, N = 421) = 7.54, p = 0.183. That is, no relationship exists between the sex of a person and how often a person changes the display name.

We performed a similar chi-square analysis for age and display name change rates, where unreported people were excluded. Age groups were collapsed into three age ranges: <20, 21 to 30, and 31+. This was done for analytic reasons, since several cells in the chi-square analysis would have contained counts of less than one with the original divisions. Age range and name change rates were found to be not independent, χ²(10, N = 413) = 20.507, p = 0.025. That is, a relationship exists between the age of a person and their likelihood of changing their display name. This result will be examined further in the discussion.
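For readers who want to reproduce this style of analysis, a chi-square test of independence of the same shape can be run with SciPy as sketched below. The cell counts are invented placeholders, chosen only to have the same 6 x 2 (rate category by sex) shape and to total N = 421 for illustration; they are not the study's data, and the variable names are ours.

```python
from scipy.stats import chi2_contingency  # SciPy's chi-square test of independence

# Contingency table: rows are change-rate categories, columns are male/female.
# These counts are placeholders, not the counts observed in the study.
observed = [
    [130, 110],   # Never
    [ 28,  24],   # Rarely
    [ 38,  30],   # Several times a week
    [ 13,  11],   # Weekly
    [  9,   6],   # Daily
    [ 12,  10],   # Several times a day
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}, N = {sum(map(sum, observed))}) = {chi2:.2f}, p = {p:.3f}")

# With 6 rate categories and 2 sexes, dof = (6 - 1) * (2 - 1) = 5, matching the
# chi-squared(5, N = 421) form reported above; a p-value above 0.05 would again
# indicate that sex and change rate are independent.
```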
DISCUSSION

The most important thing revealed by our study is that a good number of people persistently used the display name feature to publicly broadcast information about themselves to their friends, and that this happened outside of individual chat sessions. They did this in spite of the fact that IM display fields are not explicitly designed to be a public broadcast system. This suggests that systems should be designed to better support this kind of broadcast activity. Details are discussed below.

5.1 Interpreting the results

People change the information in their display field. From this study we have learned that the changing of the information in an IM display field is not an oddity or something done occasionally by certain individuals. Rather, it is a popular behaviour: 42% of users in our study changed their display name, and 25% did so several times a week or more. This behaviour happens in spite of the fact that the Instant Messenger client we studied does not make changing the display name immediately accessible (e.g., through direct manipulation): people had to raise menus and dialog boxes, and fill in a text form.

People use the display field for identification, to give information about self, and to broadcast messages. People used the limited text that could be displayed in the display field in rich ways. Seventeen different categories were needed to describe the various communications placed in the display field. Stepping back, three themes encompass these categories. The first theme is Identification: "who am I"? The second theme is Information About Self: "this is what is going on with me". The third theme is Broadcast Messages: "I am directing information to the community". These are described separately in the following three sections.

Identification is fundamental. Identifying oneself to personal contacts by typing one's own name in the display field is the original purpose of this feature; the name replaces the default email address as a way to uniquely identify a person. This proved necessary because e-mail addresses are a poor substitute for a name; some email services enforce cryptic email addresses, and others are so oversubscribed that all but the rarest names are already taken.

Table 1: Age distribution of contact group

Age Group    Count    Percent
<15              7       1.69
16-20           24       5.81
21-25          179      43.34
26-30          126      30.51
31-35           36       8.72
36-40           18       4.36
40+             23       5.57

N = 413, Unreported = 31
In particular, Messages,\nNotices, Questions and Directions are categories that either\nprovide information thought to be of interest to the group or are\nposted to stimulate a response. Most of these are undirected e.g.,\n`Does anyone know...'. Occasionally, a message may be\nspecifically directed to an individual, yet this is done in a forum\npublic to the community of contacts. Clearly, people are\nadapting the IM display field into a form of public broadcast\ncommunication facility; they are thus fulfilling one element of\nthe broadcasting system described by Etzioni and Etzioni [5].\nSince each user's contact list contains a different set of names, a\nresponder (who may change their display name to respond to\nanother's broadcast message) is likely not sending that response\nto the same community of people. This hampering of responses\nsuggests that display names are less effective for creating the\nrunning dialogs common to IRC, MUDs and other public\nbroadcast systems [6, 11, 17].\nAsynchronous messaging. In MSN Messenger, the direct chat\nfacility is session based. That is, direct chat cannot be used by\none person to leave information for a currently `Offline'\nparticipant to read later. In contrast, the display name persists\nacross sessions, meaning that asynchronous communication to\noffline participants is possible. For example, consider the\nmessage `SirMe - Happy Birthday, Angie!' that was found in the\nMessages category. By including this in his display name,\nSirMe is leaving an asynchronous message that Angie (and\nothers) can see when they come on line.\nYounger users may change their display names more\nfrequently than older users; sex does not make a difference.\nThe demographics of our study suggest some demographic\ntrends, which are described below. However, we caution that,\ndue to the way we collected data, the demographic findings and\nhow they relate to display name changes are at best tentative.\nFirst, the age ranges of our secondary contacts (as being 14 65\nyears old) were likely heavily influenced by the fact that these\ncontacts were culled from the lists of only 12 primary\nparticipants (from 22 50 years old), most of whom were within\nthe 21-30 age group, weighing the data with a similar age range.\nSecond, our data is incomplete as display field change data for\nsecondary contacts was not collected when their associated\nprimary contact was off line. Third, ages of secondary\nparticipants were estimated, which affects the analysis we could\ndo. In spite of this tentative flavour, we include our results as\nthey suggest trends and future areas of study.\nWe saw a fairly balanced number of males and females in our\nsample: 55% were male, 45% were female. The chi-square\nanalysis for sex and display field change rates indicated that the\ntwo variables are independent, i.e., the sex of the participant\ndoes not suggest how often that person would change their\ndisplay name. However, the chi-square analysis for participant\nage and display field changes suggests that they are related\n1\n. We\nsubsequently examined the chi-square table data to compare the\nobserved count with the expected count for each cell of age\ngroup crossed with rate. Discrepancies between the observed\nand expected counts indicate a pattern where younger users are\nmore apt to frequently change their display name when\ncompared to older users. This trend may reflect a \"computer\ngeneration\" gap where younger users would be more apt to\nchange their display name. 
It could also reflect a culture gap,\nwhere younger users are using it for social reasons [8], while\nolder users are using it for workplace purposes [13].\n\n1\n\nWhile the chi-square test determined that the two variables are not\nindependent, it does not provide details on how the two variables are\nrelated. If true values of age and average change rates were available\ninstead of our estimated categories (a subject of a future study), other\nstatistical analyses could be used to reveal this detail.\nFigure 5: A typical display field showing how people retain\nidentity (Name), followed by other information (Activity)\n96\n5.2\n\nImplications for practitioners\nPeople persistently use the display field not only to identify\nthemselves to their community of contacts, but to reveal\npersonal information about self and to broadcast messages. They\ndo this in spite of the fact that the display field facility was\ndesigned for other purposes; the IM community co-opted this\nfeature to fill their real desires and needs.\nThe first major implication is that IM and similar facilities need\nfirst-class interface features that let people broadcast identifying\ninformation, information about self, and public messages.\nBecause some people change this information fairly often, this\ninformation should be easy to create and alter, e.g., through\ndirect manipulation.\nSome of these capabilities are only now being supplied by a few\nmajor IM vendors. For example, the new version of MSN\nMessenger (v. 7.0), released shortly after our study was\nperformed), includes a dedicated space for adding and editing a\npersonal message (Figure 6, top). A person can directly alter this\ntext by clicking within it: no menus or dialog boxes have to be\nnavigated or raised. Other people see this personal information\nas visually distinguished text, e.g., the italicized text within the\ncontact list (Figure 6, bottom). The personal information\nmessage is also proprietary to the machine, similar to the display\npicture. Thus people can set unique location labels to various\ncomputers if desired, i.e. home or work.\nThe Community Bar (CB) [12] is a multimedia groupware\nsystem being developed by collaborators in our laboratory.\nElements of its design are partially influenced by our study\nresults. People within an ad hoc group inhabit places, and all see\nthe equivalent of a contact list within a place. For example,\nFigure 7 shows a place called `IM Paper' and three participants\nwithin it. To support `Identification', each participant is\nrepresented by a `Presence item', which shows running video (or\nphoto) of them, their name. To support `Information about self',\nthe Presence item also includes optional personal information\n(which may wrap across multiple lines) that persists across login\nsessions. A person can quickly modify this personal information\nby a popup raised whenever he or she moves their mouse over\ntheir item (Figure 7, right side). To support `Broadcast\nMessages', it also lets people broadcast and respond to public\nmessages to all people in the group. This public broadcast is not\navailable in MSN Messenger 7, For example, Figure 7 (bottom)\nillustrates a public text chat dialog that lets anyone in the group\npost messages; all see its contents and all can post responses.\nNot shown is a sticky note facility, where a person can post a\npersistent message to all. Finally, certain categories of\ninformation are supported. 
For example, `Directions' are satisfied by letting people post a `web item' (not illustrated): a thumbnail of a web page of interest that others can navigate to via a single button press.

Another implication of our study is that people use many different categories of information, especially when describing self, which in turn suggests that people are trying to provide others with a rich picture of themselves. Yet most systems, even the current ones shown above, only let people set one attribute of themselves in their personal message space (although they may combine these in a text fragment). Perhaps future systems will let people construct an `avatar' of sorts with multiple attributes that distinguish these categories, so that (say) mood, location and activity are treated independently rather than compete for a small space.

While these (and likely other) systems suggest point design solutions to our implications, what is important is that our study has placed this work on a solid intellectual footing. It provides details of what people have done, and has identified the categories of information that people supply. For example, we suspect that MSN Messenger's inclusion of a personal information field arose because its designers noticed that people were moulding the technology to suit their needs, and they wanted to "fix the interface" to better fulfill these needs. In contrast, our study helps designers understand why appropriation occurred in the first place. Looking at the 17 categories of communication that are used in messages found in the display name space, we saw that most are personal, or about the self. In taking over this space, users are not `hacking' to make IM do totally different things. Rather, they are adding richness to their identity beyond their simple name label. They are expressing identity, and they own this expression by using a text field that only they can alter.

Figure 6: MSN Messenger v7.0 separates editing and display of names and personal messages.

Figure 7: Snapshot of Community Bar displaying personal message space within presence item

We also saw that there is some use of the display field for public broadcasting of messages. This suggests that there is a problem with the way we compartmentalize systems: IM systems with no real notion of groups or public broadcast, versus IRC and similar systems where public broadcasts dominate. The real solution likely amalgamates both: a system that somehow supports both public and private discussions between ad hoc (and perhaps non-overlapping) groups. To our knowledge, only very few systems (such as the Community Bar above [12]) are trying to tackle this fusion of genres.

CONCLUSION

Most studies of communication using instant messenger clients have been focused on the activities within the main chat window. In contrast, this study examined how contacts appropriate IM technology to publicly broadcast information by adding extra text to their display name. We exposed patterns of behaviour, where we saw that almost half of the contacts we monitored change their display names with varying frequencies. We established a set of seventeen communication categories for the different types of personal messages added to the display field.
We saw that people did want to identify themselves (the\nName, Variations and Handles category), and that these were\ntrue identities that contacts would recognize versus anonymous\npseudonyms not known by others within the social group. We\nalso saw that the most popular communications were those that\nadded personal information about self: a person's psycho-physiological\nstatus, one's current activities, details of their\nlocation, and expressions of personal comments and opinions.\nWe also saw that people occasionally used it to broadcast\nmessages to the group, a facility not otherwise available in IM.\nThese findings suggest that personal information and public\nbroadcast of messages, currently supported through this creative\nappropriation by users, should be provided as a first class\ninterface feature in IM design.\nThis is just the first of a set of studies that could be done. Much\nhas been discovered, although these results should be verified\nand refined further. For example, modest refinements of our\nstudy protocol would allow us to more precisely capture the\nfrequency of changes within the display field and their\ndistribution within the different communication categories.\nHowever, we suspect that the actual categories of\ncommunication will not change dramatically. We would also\nlike to consider the author's intentions of a display name change\nalong with the recipient's opinion. More importantly, we intend\nto study behaviour and communication patterns within systems\nthat provide explicit support for personal information supply\n(such as MSN v7.0) and public broadcast (such as the\nCommunity Bar).\nACKNOWLEDGMENTS\nMany thanks to all those who participated in this project and\ntook precious time to make communication category evaluations\nfor their many contacts' display name changes; their input was\ninvaluable. Special thanks to Gregor McEwan and Michael\nBoyle for their advice both on intellectual content and on our\nrecording software. This work was funded in part by the NSERC\nNectar Research Networks program.\n\nREFERENCES\n[1]\n\nBechar-Israeli, H. (1995). From "Bonehead" to\n"cLoNehEAd": Nicknames, play and identity on internet\nrelay chat. J. Computer-Mediated Communication, 1 (2).\n[2]\n\nCadiz J. J., Venolia G. D., Jancke G. & Gupta A. Designing\nand Deploying an Information Awareness Interface. Proc\nACM CSCW (2002). 314-323.\n[3]\n\nCameron, A. F. & Webster, J. (2005). Unintended\nconsequences of emerging communication technologies:\nInstant Messaging in the workplace. Computers in Human\nBehaviour, 21, 85-103.\n[4]\n\nCutrell E. B., Czerwinski M. & Horvitz E. Effects of\ninstant messaging interruptions on computing tasks. In\nProc ACM CHI Extended Abstracts (2002). 99-100.\n[5]\n\nEtzioni, A. & Etzioni, O. (1999). Face-to-face and\ncomputer-mediated communities, a comparative analysis.\nThe Information Society, 15, 241-248.\n[6]\n\nFitzpatrick, G., Parsowith, S., Segall, B., & Kaplan, S.\n(1998). Tickertape: Awareness in a single line. Proc ACM\nCHI (1998). 281-282.\n[7]\n\nGreenberg, S. & Rounding, M. (2001). The Notification\nCollage: Posting Information to Public and Personal\nDisplays. Proc ACM CHI (2001). 514-521.\n[8]\n\nGrinter, R. E. & Palen, L. (2002). Instant messaging in teen\nlife. Proc ACM CSCW (2002). 21-30.\n[9]\n\nHu, Y., Wood, J. F., Smith, V., & Westbrook, N. (2004).\nFriendships through IM: Examining the relationship\nbetween instant messaging and intimacy. 
J Computer-Mediated\nCommunication, 10 (1).\n[10]\n\nIsaacs E., Walendowski A., Whittaker S., Schiano D. J. &\nKamm C. (2002) The character, functions, and styles of\ninstant messaging in the workplace, Proc ACM CSCW\n(2002). 11-20.\n[11]\n\nJania, F. (2003). Broadcast Messaging: Messaging to the\nmasses. Queue, 1 (8), 38-43.\n[12]\n\nMcEwan, G. and Greenberg, S. Supporting Social Worlds\nwith the Community Bar. To appear in Proceedings of\nACM Group 2005, Sanibel Island, Florida, Nov 6-9.\n[13]\n\nNardi, B. A., Whittaker, S., & Bradner, E. Interaction and\nouteraction: instant messaging in action. In Proc ACM\nCSCW (2000), 79-88.\n[14]\n\nPiepmeyer, A. I've been replaced by a screen name. The\nDaily Utah Chronicle,\nOctober 31, 2003.\nwww.dailyutahchronicle.com/news/2003/10/31/Opinion/\nIve-Been.Replaced.By.A.Screen.Name-545565.shtml\n[15]\n\nRennecker J. & Godwin L. Theorizing the Unintended\nConsequences of Instant Messaging for Worker\nProductivity, Sprouts: Working Papers on Information\nEnvironments, Systems and Organizations, 3 (Summer).\nRetrieved Dec 3, 2004 //weatherhead.cwru.edu/sprouts/\n2003/030307.pdf\n[16]\n\nTang, J. C. & Begole, J. (2003). Beyond instant messaging.\nQueue, 1 (8), 28-37.\n[17]\n\nTurkle, S. (1997). Life on the Screen: Identity in the Age of\nthe Internet. New York: Simon & Schuster Inc.\n\n98", "keywords": "communication;Communication Catogories;Name Variation Handles;Identification Is Fundamental;Related IM Research;Distribution Frequency Of Various Catogories;Display Names;Instant messenger;awareness;MSN messager;Broadcast Information;Catorgorisation Of Display Names;Instant Messaging;display name"} {"name": "48", "title": "Building a Research Library for the History of the Web", "abstract": "This paper describes the building of a research library for studying the Web, especially research on how the structure and content of the Web change over time. The library is particularly aimed at supporting social scientists for whom the Web is both a fascinating social phenomenon and a mirror on society. The library is built on the collections of the Internet Archive, which has been preserving a crawl of the Web every two months since 1996. The technical challenges in organizing this data for research fall into two categories: high-performance computing to transfer and manage the very large amounts of data, and human-computer interfaces that empower research by non-computer specialists.", "fulltext": "1. BACKGROUND\n1.1 Research in the History of the Web\nThe Web is one of the most interesting artifacts of our time. For\nsocial scientists, it is a subject of study both for itself and for the\nmanner in which it illuminates contemporary social phenomena. Yet\na researcher who wishes to study the Web is faced with major\ndifficulties.\nAn obvious problem is that the Web is huge. Any study of the Web\nas a whole must be prepared to analyze billions of pages and\nhundreds of terabytes of data. Furthermore, the Web changes\ncontinually. It is never possible to repeat a study on the actual Web\nwith quite the same data. Any snapshot of the whole Web requires a\ncrawl that will take several weeks to gather data. 
Because the size and boundaries of the Web are ill defined, basic parameters are hard to come by and it is almost impossible to generate random samples for statistical purposes.

But the biggest problem that social scientists face in carrying out Web research is historical: the desire to track activities across time. The Web of today can be studied by direct Web crawling, or via tools such as the Google Web API, while Amazon has recently made its older Alexa corpus commercially available for the development of searching and related services. (The Google Web Search API allows a client to submit a limited number of search requests, using the SOAP and WSDL standards; see http://www.google.com/apis/. For the Alexa corpus made available by Amazon, see http://websearch.alexa.com/welcome.html, which also describes the relationship between Alexa Internet and the Internet Archive.) However, the only collection that can be used for more general research into the history of the Web is the Web collection of the Internet Archive (http://www.archive.org/).

1.2 The Internet Archive

Everybody with an interest in the history of the Web must be grateful to Brewster Kahle for his foresight in preserving the content of the Web for future generations, through the not-for-profit Internet Archive and through Alexa Internet, Inc., which he also founded.

The Internet Archive began to collect and preserve the Web in 1996. With a few gaps in the early years, the collection has added a full crawl of the Web every two months since then. Most but not all of this data comes from the Alexa crawls. Statistics of the sizes of the separate crawls are complicated by the fact that a single crawl may contain several variants of the same URL, but in August 2005 the total volume of data was 544 Terabytes (TB). This is the size of the compressed data. As discussed below, the overall compression ratio is about 10:1, so that the total size of the collection is approximately 5 to 6 Petabytes uncompressed. Table 1 gives estimates of the size of the individual crawls for each year.

Table 1. Estimates of crawl sizes (compressed)

Year    Web pages (TB per crawl)    Metadata (TB per crawl)
1996             1                        0.2
1997             2                        0.4
1998             3                        0.6
1999             4                        0.8
2000            10                        1.2
2001            15                        2
2002            25                        3
2003            30                        4
2004            45                        6
2005            60                       10

We are working with the Internet Archive to build a research library based on this collection. In summer 2005, we began work on the system that is being used to transfer a major subset of the data to Cornell and to organize it for researchers, with a particular emphasis on supporting social science research. This paper describes the technical design, performance testing, and progress in implementation.

The overall goals of the library and plans for its use in research are described in a separate paper [1].

1.3 User Studies

In building any library, the objective is to organize the collections and services so that they provide the greatest range of opportunities for users, both now and in the future. Inevitably the design is a trade-off between predictions of what users will find helpful and the practicalities of building and maintaining the library. This trade-off
This trade-off\nis particularly important for a library of the whole Web because of\nthe computing challenges of managing very large amounts of data.\nTherefore, the design of the library began with interviews of\npotential users to identify how the collections might be organized to\nbe most valuable to them. Two users studies were carried out, with\nsociologists and with computer science researchers.\n1.3.1 Sociology\nIn fall 2005, Cornell received support from the National Science\nFoundation's Next Generation Cybertools program for a project that\ncombines sociology research with continuing development of the\nWeb library\n4\n. In this project, the specific areas of research are\ndiffusion of ideas, including polarization of opinions and the spread\nof urban legends. Conventionally, sociologists have studied such\nphenomena by analysis of small surveys with hand-coded data. One\naim of the project is to develop a new methodology for such\nresearch built around very large-scale collections of Web data, with\nautomated tools used to extract, encode and analyze the data.\nSocial science researchers identified a number of specific studies\nthat they would like to carry out using historical Web data. Many of\nthe studies have the same general structure: (a) extract a subset of\nthe Web for detailed analysis, (b) encode selected attributes of the\npages in that subset, (c) repeat for the corresponding subsets at\nseveral different dates, (d) analyze the changes over time.\nThe criteria by which a portion of the Web is chosen for analysis are\nextremely varied. Some desirable criteria are impossible with\ntoday's computing, e.g., they require understanding of the content of\na page. However, simple criteria such as domain names provide a\ngood starting point for many purposes, particularly when combined\nwith focused Web crawling to refine the subsets for analysis. Once a\nsubset has been extracted, social science researchers want to analyze\nthe text, for which full text indexes are important. They also wish to\nanalyze the structure of links between pages for the social\nrelationship that they represent.\nSuch research requires interdisciplinary efforts by computer\nscientists and social scientists. Some of the analysis tools already\nexist, e.g., using full text indexes of Web pages to trace the\nmovement of individuals. Others tools are themselves subjects of\ncomputer science research in natural language processing and\nmachine learning, e.g., to analyze the text of Web pages for\nsentiments, opinions, and other features of interest to social\nscientists.\n1.3.2 Computer Science\nTen computer scientists who carry out research on the Web\ncontributed to the user studies. Their research areas include the\nstructure and evolution of the Web, data mining, digital libraries,\nmachine learning, and natural language processing. Most of their\ninterest focuses on the textual content of Web pages and the\nstructure of the Web as revealed by the graph of links between\npages. Several of the researchers commented that they expend\nninety percent of their effort gathering test data; even then they have\ndifficulty in determining how robust the results are across time.\nA fundamental tool for such research is the Web graph of links\nbetween pages. 
Studies of the graph are very important in\nunderstanding the structure of the Web, and the graph is the basis of\npractical tools such as PageRank [3] or Hubs and Authorities [9].\nDespite its importance, there have been few studies that have looked\nat changes in the Web graph over time. Many of the classical studies\nof the Web graph were based on early AltaVista crawls and have\nnever been repeated. Algorithmic research needs graphs of at least\none billion pages, preferably stored in the main memory of a single\ncomputer.\nFor textual research on the Web there are two additional\nrequirements. The first is snapshots that are repeated across time\n\n4\nMichael Macy (principal investigator), et al., "Very Large Semi-Structured\nDatasets for Social Science Research". NSF grant\nSES-0537606. http://www.infosci.cornell.edu/SIN/cybertools/\n96\nthat can be used for burst analysis, and other time based research.\nThe second is full text indexes of substantial numbers of Web pages.\nFocused Web crawling is of particular importance in digital libraries\nresearch. Part of the original motivation for developing this library\nwas an interest in automatic selection of library materials from the\nWeb [2, 10]. Using the actual Web for research in focused crawling\nis technically difficult and the results are often hard to interpret\nsince no experiment can ever be repeated with exactly the same\ndata.\nARCHITECTURE\nThe Internet Archive uses highly compressed file formats developed\nin conjunction with Alexa Internet. Compressed Web pages are\npacked together in large files using the ARC format [4]. The pages\nin each ARC file are in essentially random order, usually the\nsequence in which the Web crawler originally captured them. Every\nARC file has an associated DAT file, which contains metadata for\nthe pages including URL, IP address, crawl date and time, and\nhyperlinks from the page. The files are compressed with gzip. Ten\nyears ago the decision was made that ARC files should be\napproximately 100 MB, which seemed big at the time, but this size\nis now too small for efficiency and will need to be increased. The\nsizes of the DAT files depend on the number of pages in the\nassociated ARC files, but average about 15 MB. The compression\nratios also vary widely. The ratio is more than 20:1 for text files but\nclose to 1:1 for files that are already compressed efficiently, such as\nvideos. The overall ratio for ARC files is about 10:1.\n2.1.1 The Database\nThe Cornell Web library uses a relational database to store metadata\nabout the Web pages and a separate Page Store to store the actual\npages. In addition, the unprocessed ARC and DAT files received\nfrom the Internet Archive are copied to a tape archive. In choosing a\nrelational database, we considered but rejected two approaches that\nhave been successful in related applications.\nThe first option was to use a modern digital library repository with\nsupport for rich data models, such as XML and RDF, and search\nservices that support semi-structured data, such as XQuery. Such\ncapabilities are appealing, but we know of no repository system that\ncan manage tens of billions of objects. The scale of the Web\nprecludes such an approach.\nThe second option was to follow the model of production services\nfor the Web, such as Google [7] and Yahoo. They provide low cost\nprocessing and data storage by spreading their systems across very\nlarge numbers of small, commodity computers used as servers. 
This is the approach used by the Internet Archive to store its collections and for its very popular Wayback Machine (accessible at http://www.archive.org/). We rejected this architecture for a research library for two principal reasons: (a) there are many algorithmic computations on large datasets where a single large computer is intrinsically more efficient than a distributed cluster of smaller machines, and (b) even when the research can be done effectively, clusters of computers are more difficult to program by researchers who are carrying out Web-scale research. As an example, each server at the Internet Archive has an index of the files stored on it, but there is only a very limited central index. The Wayback Machine allows a user to retrieve all the pages in the entire collection that have a given URL. It relies on a protocol in which an identifier is broadcast and each server responds with a list of matches. This is very efficient for this specific purpose, but it would be extremely difficult to extract the flexible subsets required by social science researchers with this organization of data.

A relational database has many advantages for the Web library and one major disadvantage. Foremost among the advantages is scalability. Commercial relational database systems are highly optimized for storing, loading, indexing, extracting, backing-up, and restoring huge volumes of data. Usability is another important advantage. A relational schema provides a single image of the collection, expressed in a manner that is familiar to many researchers. The disadvantage is a loss of flexibility. The design and implementation of the database attempt to reconcile the expected uses that will be made of the library against scalability constraints, but it will be difficult to make major changes to the schema without rebuilding the entire database.

The actual Web pages are stored in a separate Page Store. At the Internet Archive, if two Web pages are identical they are stored twice. With the new Page Store, duplicate pages are stored only once. Rather surprisingly, there is as yet very little data about how many pages remain unchanged between crawls, but we expect that elimination of duplicates will save significant online storage, especially with large audio and video files.

The Page Store is implemented as a set of compressed files, one file for each page received in the ARC files. Since many pages on the Web do not change between crawls, the Preload subsystem checks for content duplicates using an MD5 check sum of the content. Thus, a copy of the content is stored only once however many pages have that content. In order to guarantee fast access to the stored content, each page's content is compressed individually.

The architecture of the Page Store allows decisions to be made about which pages to store online at any given time. For example, the library might decide not to store large audio and video files online. While all metadata will be online at all times, an individual Web page could be accessed from the online Page Store, the off-line tape archive, or over the Internet from the Internet Archive.

2.2 Equipment

The library is housed at the Cornell Theory Center, which is the university's high-performance computing center. The choice of equipment and the use of a relational database were closely related decisions.
The Theory Center has expertise in distributed cluster\ncomputing, but because of the very high data rates, a symmetric\nmulti-processor configuration was chosen instead. Figure 1 shows\npart of the configuration of the central computer.\nFigure 1. Configuration of the main computer system\n97\nThe system is shared with another data-intensive program of\nresearch, the analysis of data from the Arecibo radio telescope in\nPuerto Rico. Each group has use of a dedicated Unisys ES7000/430\nserver, with 16 Itanium2 processors running at 1.5 Gigahertz. The\nmemory can be shared between the servers, but in practice each\nproject has sole use of 32 GB. Each server has a RAID disk\nsubsystem attached via a dual-ported fiber-channel. The operating\nsystem is Microsoft Windows Server 2003.\nFor the Web library the disk subsystem provides an initial 45 TB of\ndisk storage. We plan to extend the capacity to 240 TB by 2007.\nThere are no technical barriers to adding additional fiber channels\nand disk capacity. In the longer-term, disk prices are falling faster\nthan the growth in the size of Web crawls, which gives confidence\nthat the library will be able to keep up with the growth of the Web.\nBy using a symmetric multi-processor configuration with a high\nperformance disk subsystem, we are able to balance processing and\ndisk access requirements. Since the data sets are local to the system\non which the database is located, the system can perform bulk-loading\ntasks without incurring any networking penalties.\nThe large real memory is an added attraction of this configuration. It\nallows researchers to carry out substantial computation entirely in\nmemory. For instance, it is possible to process a Web graph of one\nbillion pages within memory.\n2.3 The Human Interface\nThe design of the human interface is perhaps the most challenging\naspect of developing the library. The social science research groups\nthat we are working with have the technical skills to write scripts\nand simple programs. Many are experts in statistical calculations.\nBut they should not be expected to write large or complex computer\nprograms. The current design supports three categories of users.\nThe Basic Access Service provides a Web Services API that\nallows a client to access pages in the collection by any metadata\nthat is indexed in the database, e.g., by URL and date. The Retro\nBrowser, which is described below, uses this API to allow a user\nto browse the collection as it was at a certain date.\nThe Subset Extraction Service supports users who wish to\ndownload sets of partially analyzed data to their own computers\nfor further analysis. A Web form is provided to define a subset of\nthe data (e.g., by date, URL, domain, etc.), extract subsets of the\ncollection, and store them as virtual views in the database. Sets of\nanalysis tools, many of which are already under development, can\nbe applied to the subset and the results downloaded to a client\ncomputer.\nTechnically advanced users can be authorized to run their own\nprograms on the central computer.\nTo support the Basic Access Service and the Subset Extraction\nService, we provide a dedicated Web server, which is housed next\nto the main system.\nSCALABILITY EXPERIMENTS\nAlthough the library has been generously funded by the National\nScience Foundation, we do not yet have sufficient capacity to\ndownload and mount online the entire Web collection of the Internet\nArchive. 
This is the long term goal, but during the initial phase, care\nhas been taken to balance the several parts of the system: online\nstorage, network bandwidth, processing of the incoming data,\ndatabase, performance, and the need to archive, back-up, and restore\nthe data. In spring 2005, several undergraduate and masters students\ncarried out independent projects to estimate sizes and processing\nrequirements\n6\n.\nTo test database performance before large amounts of actual data\nwere available, we used the R-MAT algorithm to generate a\nsynthetic graph with properties similar to the Web graph [5]. This\ntest graph has one billion nodes with more than seven billion links,\nand domain names generated according to their distribution on the\nreal Web [11].\nBased on these benchmarks, the decision was made to install a 100\nMb/sec network connection to the Internet Archive and to load data\nat a sustained rate of 250 GB/day, beginning January 2006. This rate\nwill enable the library to acquire and mount online by the end of\n2007 a complete crawl of the Web for each year since 1996. This\nphase will require approximately 240 TB of disk storage. Note that\nthe disk requirement differs from the estimates of raw data shown in\nTable 1. The database with its indexes is less highly compressed\nthan the raw data, but savings are made in the storage of duplicate\ndata, both in the database and the Page Store.\nDuring fall 2005, first generation software was written for (a) the\ndata flow system that brings data to the library, and (b) the user API\nand tool sets. They are described in the next two sections.\n\nDATA FLOW\nFigure 2 shows the flow of data into the library. When ARC and\nDAT files are received from the Internet Archive, the first step is to\nstore them in the tape archive. The Preload system then unpacks the\nraw data, extracts metadata, and prepares batch files for loading into\nthe database and Page Store.\nFigure 2. Flow of data into the library\nFigure 2 does not show the data tracking system. This is a major\nsubsystem that manages the data transfers, monitors all errors, and\ntracks the tens of millions of files within the library.\n4.1 Networking\nInternet2 is used to transfer data from the Internet Archive in San\nFrancisco, California to Cornell University in Ithaca, New York. For\nthis purpose, a 100Mbit/sec link has been established from the\nInternet Archive to Internet2. Both Cornell and the Internet Archive\nhave internal networks with 1 Gbit/sec or greater performance.\nIn the future, the National LambdaRail and the TeraGrid are\nintriguing possibilities. These new networks have the capacity to go\n\n6\nThese student reports are available at\nhttp://www.infosci.cornell.edu/SIN/WebLib/papers.html.\nUser tools\nPreload system\nDatabase\nPage store\nTape\narchive\nInternet 2\n98\nbeyond bulk data transfer and support genuine distributed\nprocessing between the Web library and the Internet Archive. For\nexample, if large audio and video files are not stored online, an\napplication could use the TeraGrid to retrieve individual large files\nfrom the Internet Archive on demand.\nAt the end of December 2005, a series of experiments were run to\nmeasure the sustained throughput of multi-threaded FTP transfers\nover Internet2, using Transport Layer Security. These measurements\nshowed transfer rates of 280 GB per day before system tuning, or\nrather better than 30 percent of the theoretical maximum throughput\nof the link to the Internet Archive. 
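As an illustration of how such transfer experiments can be driven, the sketch below runs several FTP-over-TLS downloads in parallel, one session per thread, using Python's standard ftplib. The host name, credentials, and file names are placeholders, not the code used in the actual measurements.

```python
# Sketch of a multi-threaded FTP-over-TLS transfer driver.
# Host, credentials, and ARC file names below are hypothetical.
from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP_TLS

HOST = "ftp.example.org"   # placeholder for the Internet Archive endpoint
FILES = ["crawl/part-0001.arc.gz", "crawl/part-0002.arc.gz"]  # hypothetical files

def fetch(remote_path: str) -> int:
    """Download one file over an encrypted FTP session; return bytes received."""
    received = 0
    ftps = FTP_TLS(HOST)
    ftps.login("anonymous", "weblab@example.org")
    ftps.prot_p()                              # encrypt the data channel as well
    local_name = remote_path.rsplit("/", 1)[-1]
    with open(local_name, "wb") as out:
        def write(chunk: bytes):
            nonlocal received
            received += len(chunk)
            out.write(chunk)
        ftps.retrbinary("RETR " + remote_path, write, blocksize=1 << 20)
    ftps.quit()
    return received

# One FTP session per thread; total throughput is the sum over concurrent sessions.
with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(fetch, FILES))
print(f"received {total / 1e9:.2f} GB")
```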
This is sufficient for the planned\nrate of 250 GB per day. If greater bandwidth proves necessary, the\nlink from the Internet Archive to Internet2 can be upgraded to\n500Mbps inexpensively, while the Cornell Theory Center will soon\nhave enormous bandwidth available via the TeraGrid.\n4.2 Preload Subsystem\nThe Preload subsystem takes incoming ARC and DAT files,\nuncompresses them, parses them to extract metadata, and generates\ntwo types of output files: metadata for loading into the database and\nthe actual content of the Web pages to be stored in the Page Store.\nMetadata for loading into the database is output in the form of 40GB\ntext files, a separate file for every database table.\nTo satisfy the speed and flexibility requirements, the Preload system\nis designed to run as a set of independent single-thread processes,\navoiding all inter-process communication and locking over input or\noutput data. Likewise, each process writes its own output files. This\ndesign allows for easy configuration of the system to run a required\nnumber of processes on a given number of processors, on one or\nmore machines. To determine each process's input, input files are\npartitioned by the first k bits of the MD5 hash sum of the filename,\nwhere 2k is the total number of processes in the system. The design\nof the subsystem does not require the corresponding ARC and DAT\nfiles to be processed together.\nA series of experiments were run to test the performance of the\nPreload system, using the metadata from the synthetic Web graph.\nThe experiments used 1, 2, 4 and 8 processors, with the data\npartitioned into 16 parts, according to the first 4 bits of the hash\nsum. Separate experiments were made for ARC and DAT files.\nFigure 3 shows results for the ARC files. The x-axis shows the\nnumber of CPUs used, and the y-axis shows the throughput in\nKB/sec. The white bar shows throughput per processor and the\nshaded bar shows total throughput. Adding more processors slightly\ndecreases the throughput per processor due to contention for random\ndisk accesses. The total throughput increases steadily up to four\nprocessors. After that, disk contention becomes too high, and the\nthroughput actually declines. The results for DAT files are similar,\nwith the total throughput flattening after four processors.\nFrom these experiments, we conclude that four processors are\noptimal. The corresponding throughputs are 73 MB/sec (about 6\nTB/day) for ARC files, and 12 MB/sec (about 1 TB/day) for DAT\nfiles.\nWhen metadata from the DAT files is uncompressed its size\nincreases by a factor of about 11:1. Fortunately, much of this data is\nduplicated. For example, a given URL may occur many times in a\ncrawl and be replicated in many crawls. Therefore duplicate\nelimination has been a major consideration in refining the database\ndesign.\nFigure 3. Performance of Preload system (ARC files)\nNote that during the above experiments no other processes were\nrunning on the system. The overall performance will be lower when\nthe Preload system is run in production mode at the same time as\nother subsystems. Also, during the preliminary phase, only text and\nHTML files were fully analyzed to extract links and anchor text.\nProcessing of other file formats will be added in the future. Some of\nthese formats will be computationally intensive, e.g., PDF files.\n4.3 Database Design\nThe relational database uses Microsoft SQL Server 2000. 
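Before turning to the schema, the Preload partitioning rule described in Section 4.2 can be made concrete with a short sketch: each incoming file is routed to a worker process according to the first k bits of the MD5 hash of its file name, where 2^k is the number of processes. The file names below are hypothetical.

```python
# Sketch of the Preload partitioning rule: route a file to one of 2**k workers
# by the first k bits of the MD5 hash of its name.
import hashlib

def assign_process(filename: str, k: int) -> int:
    """Return the worker index (0 .. 2**k - 1) for this ARC/DAT file."""
    digest = hashlib.md5(filename.encode("utf-8")).digest()   # 128-bit digest
    return int.from_bytes(digest, "big") >> (128 - k)          # top k bits

# With 16 partitions (k = 4), as in the experiments described above:
for name in ["crawl-2005-part-0001.arc.gz", "crawl-2005-part-0001.dat.gz"]:
    print(name, "->", assign_process(name, k=4))
```

Because the assignment depends only on the file name, no coordination between processes is needed, and, as noted above, corresponding ARC and DAT files need not land on the same worker.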
Three\nimportant goals when designing the database schema and deciding\nhow to load the data to the database were: (a) minimize the storage\nrequirements, (b) maximize the load throughput, and (c) support\nefficient logging, backup, and restore.\nConceptually, for each crawl, the database stores metadata about\neach page (e.g., information about the content of the page, URL of\nthe page) and about the links between them (including anchor text\nand text surrounding the anchor text). However, to avoid storing\nredundant data, the schema is denormalized. The denormalized\nschema is shown below in Figure 4. Information about URL and\ndomain names is stored in a look-up table, since the same URLs and\ndomain names appear many times in a single crawl and across\ncrawls. For similar reasons, anchor text, text surrounding the anchor\ntext, and information about page content are stored in the look-up\ntables Dictionary and Page Content respectively, as shown in the\nschema in Figure 4. To make the loading of the data faster, separate\ntables for each of Page, Page Content and Link are created for each\ncrawl while the other tables (e.g., the URL table) are shared among\ncrawls.\n80\n70\n60\n50\n40\n30\n20\n10\nThroughput (MB/s)\n1\n2 4 8\nNumber of CPUs\n99\n\nFigure 4. Database design: the de-normalized schema\nThe Preload subsystem outputs separate files conforming to the\nschema described above and these files are bulk loaded into the\ndatabase. There are many parameters that affect the bulk load\nperformance. These parameters include: batch size, file size, degree\nof parallelism, and interaction with the logging, backup and\nrecovery system. The synthetic Web data was used to understand\nhow these parameters affect the loading performance and to tune\nthem. The first sets of experiments were used to determine the\noptimal file size for bulk loading and the number of CPUs used. In\nthese experiments, default, recovery and backup mechanisms were\nused [6].\nThe results of the first set of experiments indicate that it is optimal\nto load each file as a single batch; a file size of 40GB and 4 CPUs\ngave the best performance, and around 800GB could be loaded in\none day. However, the experiments to determine the file size and\ndegree of parallelism showed significant variability. Using the\ndefault logging provided by MS SQL Server 2000, checkpointing\nand nightly backups were consuming enormous resources. They\ninterfered with the bulk loading process and are a probable cause of\nthe variance seen in the performance benchmarks.\nTwo observations were made to overcome the performance penalty.\nFirst, data is append-only while being bulk loaded and is read-only\nafter the bulk load is complete; logging, recovery and backup\nmechanisms can be customized to increase the performance and\ndecrease the variance in loading times. Second, tables in the schema\ncan reside on different disks and thus can be written in parallel.\nFollowing these two observations, in the current design each table\ncan be put onto a separate disk as shown in Figure 5. Moreover,\nPage, Page Content and Links information for each crawl are put\ninto separate files. This partitioning according to crawls is easy in\nMS SQL Server as separate tables for each of Page, Page Content\nand Link are created for each crawl.\nThe database load subsystem is divided into two programs: a high-level\nprogram that organizes the processes and a low level program\nthat runs separate loads in parallel. 
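The de-normalization idea behind Figure 4 can be sketched as follows: repeated strings such as URLs, domain names, and anchor text are assigned integer keys in shared look-up tables, while the per-crawl Page and Link tables store only those keys. The class, table, and field names below are illustrative, not the actual schema.

```python
# Sketch of the de-normalization behind Figure 4: shared look-up tables assign
# integer ids to repeated strings; per-crawl tables store only the ids.
class Lookup:
    """Assigns a stable integer id to each distinct value (a shared look-up table)."""
    def __init__(self):
        self.ids = {}
    def id_for(self, value: str) -> int:
        return self.ids.setdefault(value, len(self.ids))

url_table = Lookup()        # shared across crawls
dictionary = Lookup()       # anchor text and surrounding text, shared across crawls

def link_row(crawl_id: int, src_url: str, dst_url: str, anchor: str) -> tuple:
    """One row destined for the per-crawl Link table for `crawl_id`."""
    return (crawl_id, url_table.id_for(src_url),
            url_table.id_for(dst_url), dictionary.id_for(anchor))

rows = [
    link_row(1, "http://a.example/", "http://b.example/", "next page"),
    link_row(1, "http://c.example/", "http://b.example/", "next page"),  # reuses ids
]
print(rows)   # e.g. [(1, 0, 1, 0), (1, 2, 1, 0)]
```

Because the URL and Dictionary look-up tables are shared across crawls while Page and Link rows go into per-crawl tables, a string that recurs in many crawls is stored only once.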
The workflow for loading each\ntable in each crawl consists of five major steps: (a) Get the files\nproduced by the Preload subsystem for the current table in the\ncurrent crawl and write the relevant log information to an\nadministrative database; commit the transaction writing the log\ninformation. (b) Write files of the table to the disk corresponding to\nthe current table via the low level program; files corresponding to\ndifferent tables can be written in parallel. (c) Create the necessary\nindexes. (d) Back-up the newly written data. (e) Write to the log the\nrelevant information to indicate that processing of the files for the\ncurrent table in the current crawl is complete and commit the\ntransaction writing the log information. In MS SQL Server 2000,\nbackups and index creation are all atomic.\n\nFigure 5. Database design: organization of file system\nThis new design is being implemented and tested, as of January\n2006. First indications are that the performance will comfortably\nmeet the required performance goals. Extensive benchmarking is\nrequired to tune many parameters, such as batch size, file size,\ndegree of parallelism, and the index management.\n4.4 Archiving\nArchiving and back-up are expensive operations with complex\ntrade-offs. Without care, the networking bandwidth and disk\nthroughput used in logging, back-up, and writing to the tape library\ncould have a major impact on the system throughput.\nAs described above, the database design allows the database files to\nbe backed up incrementally. This provides two options for restoring\nthe database, by reprocessing the raw data or from the back-up. The\nPage Store is not backed-up. If parts of it were ever corrupted, they\nwould have to be restored by reprocessing the raw data. A current\ndesign project is to reorganize the Page Store to permit efficient\nrestoration.\nThe library uses a robotic tape library with LTO3 tape drives. This\nis shared with other systems at the center. All unprocessed ARC and\nDAT files are copied to the tape library to be stored indefinitely.\nThis preserves another copy of the Internet Archive's data for the\nlong-term. This data is unique and could never be replaced. The\nindustry standard life of these tapes is thirty years but our\nexpectation is that six years is a more probable time before the tape\nlibrary is replaced and all the data will have to be copied onto fresh\nmedia.\nSUPPORT FOR THE USER\nFigure 6 shows the architecture of the interface that the library\noffers to users. This is a three-tier architecture. The data tier consists\nof the relational database of metadata and the Page Store; the\nmiddleware tier provides services to access the data, tools to analyze\nit, and a choice of Web Services APIs; clients interact with the\nmiddleware tools either through the APIs, or directly.\n5.1 Clients\nThe user tools system was designed to be extensible and scalable.\nSpecifically, it supports two categories of users: (a) users who\nanalyze the data remotely from their own computers, perhaps at\nanother institution or even in another country, and (b)\nPage\nPage Content\nLink\n- Destination URL\nURL Path\nDictionary\nDomain\nPage\nLink\nPage Content\nCustom\nLog\nEverything\nElse\n100\ncomputationally intensive users, who may wish to run very heavy\nanalyses of the data using the Cornell Theory Center computing\nenvironment. 
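Returning briefly to the load workflow of Section 4.3, the five steps can be summarized in a small orchestration sketch. Every function here is a hypothetical stub standing in for the admin-database logging, SQL Server bulk load, index creation, and incremental backup operations; none of it is the library's actual load program.

```python
# Orchestration sketch of the five-step per-table load workflow (Section 4.3).
# All functions are hypothetical stubs that merely print what they stand for.
def log_commit(message: str) -> None:
    print("admin log:", message)                          # (a)/(e) committed log records

def bulk_load(table: str, crawl: str, files: list) -> None:
    for f in files:                                       # (b) each table on its own disk,
        print(f"bulk loading {f} into {table}_{crawl}")   #     so tables load in parallel

def create_indexes(table: str, crawl: str) -> None:
    print(f"creating indexes on {table}_{crawl}")         # (c)

def backup(table: str, crawl: str) -> None:
    print(f"incremental backup of {table}_{crawl}")       # (d)

def load_table_for_crawl(table: str, crawl: str, files: list) -> None:
    log_commit(f"start {table}/{crawl}: {files}")         # (a)
    bulk_load(table, crawl, files)                        # (b)
    create_indexes(table, crawl)                          # (c)
    backup(table, crawl)                                  # (d)
    log_commit(f"done {table}/{crawl}")                   # (e)

for table in ("Page", "PageContent", "Link"):             # separate tables per crawl
    load_table_for_crawl(table, "crawl2006w01", [f"{table.lower()}_000.txt"])
```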
Corresponding to these two categories of users the\narchitecture supports two types of clients: Web services clients and\nclients that execute their own programs on the library's servers. The\nfirst category requires little technical sophistication from the user,\nwhile the second category trades complexity against the flexibility\nof being able to write custom programs and to use the full\nprocessing power available.\nFigure 6. Architecture of the interfaces provided for users of the\nlibrary\nWeb services clients are intended to be used remotely, with\nmoderate demands on their access to the data. Examples of these\nclients include running queries using a full text index, or fetching\nspecific pages using existing indexes. Web service clients may also\nbe used to start, control, and retrieve results from experiments run\nby high-performance clients. Web services are implemented by\nusing Microsoft's ATL server libraries. They run on a dedicated\nWeb server. The clients themselves can be implemented in any\nlanguage or environment that supports Web services standards.\nUsers of these clients do not need to know how the data is stored.\nThey are provided with forms that are automatically converted to\nSQL commands by the middleware.\nThe high-performance clients will for the most part run within the\nCornell Theory Center. They will usually have a high bandwidth\nconnection to the database. Clients of this form may carry out\nresearch that need lots of computation power, e.g., experiments that\nprocess very large subsets or analyze how the structure of the Web\nchanges over time. These clients are implemented by linking against\ndynamic link libraries (DLLs) provided by the application server\ntier.\n5.2 Access to the Database\nThe application server tier accesses the database using Microsoft\ndatabase tools. Two main areas of functionality have been\nimplemented in this tier: Basic Access Services (BAS), and the\nSubset Extraction Services (SES). Each consists of two parts: a set\nof services, implemented as a series of dynamic link libraries\n(DLLs) written in C++, and a Web Services API.\nBasic Access Services are for clients that interact directly with the\ndatabase. They allow a client to fetch pages given a combination of\nURL and date of crawl. They also allow a client to check within\nwhich crawls a given page is available. For example, a focused Web\ncrawler can retrieve pages from a specified crawl, using a simple\nclient script that interfaces to the BAS Web services API.\nSubset Extraction Services allow a client to select a part of the data\nas a subset. Once created, this subset is stored in the database as a\nview. Such subsets are useful for running experiments over a\nsmaller, perhaps random, sample of the Web, as well as selecting\nrelevant pages for a particular experiments, such as those from a\ngiven domain. For example, a researcher studying government Web\nsites might extract textual pages from the .gov domain for a selected\nrange of dates.\n5.2.1 Users Beyond Cornell\nThis digital library is intended for the use of all academic\nresearchers, not only those based at Cornell. Technically, this is\nstraightforward. The library is connected to the Internet, including\nInternet2, and will soon be available via the TeraGrid.\nWe are currently developing a code of use policies for researchers.\nMining this data has potential for abuses of privacy. 
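To illustrate the style of access that the Basic Access Services of Section 5.2 are meant to support, the sketch below fetches a stored page by URL and crawl date over HTTP. The endpoint and parameter names are hypothetical; the real service exposes a SOAP-style Web Services API, so an actual client would be generated from its published interface.

```python
# Sketch of a simple Basic Access Service client. The endpoint URL and
# parameter names are hypothetical placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen

BAS_ENDPOINT = "http://weblab.example.cornell.edu/bas/getPage"   # placeholder

def fetch_page(url: str, crawl_date: str) -> bytes:
    """Fetch the stored copy of `url` from the crawl taken on `crawl_date`."""
    query = urlencode({"url": url, "date": crawl_date})
    with urlopen(f"{BAS_ENDPOINT}?{query}") as response:
        return response.read()

# A focused crawler script could retrieve pages from a specified crawl like this:
page = fetch_page("http://www.cornell.edu/", "2005-03-15")
```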
Cornell\nresearchers need to follow the university's procedures for such\nresearch and we need to find convenient ways to extend the code of\npractice to all users of the library.\n5.3 User Tools\n5.3.1 The Retro Browser\nThe Retro Browser is an example of a Web services client that is\nalready implemented and running in a test environment [12]. To a\nuser, it appears to be a regular Web browser, except that it browses\nan historical snapshot of the Web.\nThe Retro Browser is designed with the non-technical user in mind.\nThe design assumes that the user may not be technically proficient\nand should not be expected to install new software or run special\nscripts. After the user has made an initial choice of a date in Web\nhistory, the Retro Bowser behaves like any other Web browser. The\nuser uses a standard browser to carry out all the standard Web tasks,\nsuch as download an applet, run a script, submit a forms, etc. The\nonly difference is that every URL is resolved to the record in the\nWeb library for the specified date.\nThe major component of the Retro Browser is a Web server\nconfigured to be used as a proxy server. To obtain the data from the\ndatabase, the proxy server utilizes the Basic Access Web Service\nAPI.\nThe Retro Browser client interacts with the Retro Browser proxy in\na standard HTTP client-server fashion. To fetch the appropriate\npage from the database requires a URL and the date of the crawl,\nwhich is represented by a crawl ID. The proxy server expects a\nsession cookie specifying a crawl ID with every request. If such a\ncookie is not found with the request, the user is asked to specify the\ncrawl ID. Further requests may or may not contain a cookie since\ncookies are tied to a domain. However, the Retro Browser proxy\nensures that the cookie is replicated for all domains using a series of\nredirects. In this manner, the architecture ensures that the user is\nasked to specify the crawl date for the first request only.\n5.3.2 Analysis of the Web Graph\nA set of analysis tools is under development that will provide more\ncomplex access and analysis functions. These tools are part of the\napplication server tier. They are applied to subsets of the data and\naccessed either directly or through the Subset Extraction Services\nAPI.\nData Tier\nApplication\nServer Tier\nClient Tier\nPage Store\nMetadata\nBasic Access\nServices\nSubset Extraction\nServices\nBAS Web\nService\nSES Web\nService\nClients\nHigh\nPerformance\nClients\nRetro-Browser\n101\nOne group of tools operates on the Web graph of a subset.\nHyperlinks from each Web page are stored in the database.\nRepresentation of the graph is by its adjacency matrix using a\ncompressed sparse row representation. Preliminary software has\nbeen written to read all the links from a given subset of the data and\nconstruct the adjacency matrix. The matrix is then stored in the file\nsystem in a compressed form, which allows performing the basic\noperations, such as matrix addition and multiplication. The Cuthill-McKee\nalgorithm is used to reorder the nodes to create dense blocks\nwithin the matrix to increase the compression ratio and allow in-memory\nprocessing [8].\n5.3.3 Full Text Indexes\nFull text indexes are a vital tool for many researchers. 
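Returning to the Web-graph tools of Section 5.3.2, the sketch below builds the adjacency matrix of a small, made-up subset in compressed sparse row form and applies SciPy's reverse Cuthill-McKee permutation, a standard variant of the reordering mentioned above, to cluster the non-zeros and improve compression. It is an illustration of the technique, not the project's own graph code.

```python
# Sketch: adjacency matrix of a subset in CSR form, reordered with
# reverse Cuthill-McKee to create dense blocks near the diagonal.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Hypothetical links (source page id -> destination page id) read from a subset.
edges = [(0, 3), (3, 1), (1, 4), (4, 2), (2, 0)]
n = 5
rows, cols = zip(*edges)
adjacency = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

# Permute rows and columns so that non-zeros cluster near the diagonal,
# which compresses better and is friendlier to in-memory block operations.
perm = reverse_cuthill_mckee(adjacency, symmetric_mode=False)
reordered = adjacency[perm][:, perm]

print(reordered.toarray())
```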
For instance a\nsocial science researcher may wish to track the Web pages that refer\nto a named individual or may identify trends by burst analysis of\nterms used on the Web.\nOur initial approach is to provide an indexing service for data\nsubsets, using the Nutch search engine. It is straightforward to\nextract a subset, which is represented by a database view, and create\na full text index of all textual pages in the subset. The only problem\nis the processing necessary to index a very large subset.\nWe are in discussions with Cutting, the principal developer of the\nLucene and Nutch search engines\n7\n. He has been working with the\nInternet Archive to build indexes of very large collections of Web\npages in ARC format. For this purpose, they are developing a\nmodified version of Nutch, known as Nutch WAX (Web Archive\neXtensions). Rather than duplicate this effort, we are exploring the\npossibility of providing access to these indexes through the Basic\nAccess Service.\nACKNOWLEDGEMENTS\nWe wish to thank Brewster Kahle, Tracey Jaquith, John Berry and\ntheir colleagues at the Internet Archive for their support of this\nwork.\nThe following Cornell students have contributed to the development\nof the library described in this paper: Mayank Gandhi, Nicholas\nGerner, Min-Daou Gu, Wei Guo, Parul Jain, Karthik Jeyabalan,\nJerrin Kallukalam, Serena Kohli, Ari Rabkin, Patrick Colin Reilly,\nLipi Sanghi, Shantanu Shah, Dmitriy Shtokman, Chris Sosa, Samuel\nBenzaquen Stern, Jimmy Yanbo Sun, Harsh Tiwari, Nurwati\nWidodo, Yu Richard Wang.\nThis work has been funded in part by the National Science\nFoundation, grants CNS-0403340, DUE-0127308, and SES-0537606\n, with equipment support from Unisys and by an E-Science\ngrant and a gift from Microsoft Corporation.\n\n7\nThe Lucene search engine is described at\nhttp://lucene.apache.org/. Nutch is described at\nhttp://lucene.apache.org/nutch/.\nREFERENCES\n[1] Arms, W., Aya, S., Dmitriev, P., Kot, B., Mitchell, R., Walle,\nL., A Research Library for the Web based on the Historical\nCollections of the Internet Archive. D-Lib Magazine. February\n2006. http://www.dlib.org/dlib/february06/arms/02arms.html\n[2] Bergmark, D., Collection synthesis. ACM/IEEE-CS Joint\nConference on Digital Libraries, 2002.\n[3] Brin, S., and Page. L., The anatomy of a large-scale\nhypertextual Web search engine. Seventh International World\nWide Web Conference. Brisbane, Australia, 1998.\n[4] Burner, M., and Kahle, B., Internet Archive ARC File Format,\n1996. http://archive.org/web/researcher/ArcFileFormat.php\n[5] Chakrabarti, D., Zhan, Y., and Faloutsos, C., R-MAT:\nrecursive model for graph mining. SIAM International\nConference on Data Mining, 2004.\n[6] Gerner, N., Sosa, C., Fall 2005 Semester Report for Web Lab\nDatabase Load Group. M.Eng. report, Computer Science\nDepartment, Cornell University, 2005.\nhttp://www.infosci.cornell.edu/SIN/WebLib/papers/Gerner200\n5.doc.\n[7] Ghemawat, S., Gobioff, H. and Leung, S., The Google File\nSystem. 19th ACM Symposium on Operating Systems\nPrinciples, October 2003.\n[8] Jeyabalan, K., Kallukalam, J., Representation of Web Graph\nfor in Memory Computation. M.Eng. report, Computer Science\nDepartment, Cornell University, 2005.\nhttp://www.infosci.cornell.edu/SIN/WebLib/papers/Jeyabalan\nKallukalam2005.doc.\n[9] J. Kleinberg. Authoritative sources in a hyperlinked\nenvironment. 
Ninth ACM-SIAM Symposium on Discrete\nAlgorithms, 1998.\n[10] Mitchell, S., Mooney, M., Mason, J., Paynter, G., Ruscheinski,\nJ., Kedzierski, A., Humphreys, K., iVia Open Source Virtual\nLibrary System. D-Lib Magazine, 9 (1), January 2003.\nhttp://www.dlib.org/dlib/january03/mitchell/01mitchell.html\n[11] Shah, S., Generating a web graph. M.Eng. report, Computer\nScience Department, Cornell University, 2005.\nhttp://www.infosci.cornell.edu/SIN/WebLib/papers/Shah2005a\n.doc.\n[12] Shah, S., Retro Browser. M.Eng. report, Computer Science\nDepartment, Cornell University, 2005.\nhttp://www.infosci.cornell.edu/SIN/WebLib/papers/Shah2005b\n.pdf.\n102\n", "keywords": "User Interface;Dataflow;Internet Archive. 1. BACKGROUND 1.1 Research in the History of the Web The Web is one of the most interesting artifacts of our time. For social scientists;history of the Web;basic parameters are hard to come by and it is almost impossible to generate random samples for statistical purposes. But the biggest problem that social scientists face in carrying out Web research is historical;or via tools such as the Google Web API;Database Management;it is a subject of study both for itself and for the manner in which it illuminates contemporary social phenomena. Yet a researcher who wishes to study the Web is faced with major difficulties. An obvious problem is that the Web is huge. Any study of the Web as a whole must be prepared to analyze billions of pages and hundreds of terabytes of data. Furthermore;Storage;Flexible Preload System;Internet Archive;digital libraries;Scalability;the Web changes continually. It is never possible to repeat a study on the actual Web with quite the same data. Any snapshot of the whole Web requires a crawl that will take several weeks to gather data. Because the size and boundaries of the Web are ill defined;Database Access;User Support;computational social science;Full Text Indexes;the desire to track activities across time. The Web of today can be studied by direct Web crawling"} {"name": "49", "title": "Building a Sense of History: Narratives and Pathways of Women Computing Educators", "abstract": "This working group laid the groundwork for the collection and analysis of oral histories of women computing educators. This endeavor will eventually create a body of narratives to serve as role models to attract students, in particular women, to computing; it will also serve to preserve the history of the female pioneers in computing education. Pre-conference work included administration of a survey to assess topical interest. The working group produced aids for conducting interviews, including an opening script, an outline of topics to be covered, guidelines for conducting interviews, and a set of probing questions to ensure consistency in the interviews. The group explored issues such as copyright and archival that confront the large-scale implementation of the project and suggested extensions to this research. This report includes an annotated bibliography of resources. The next steps will include training colleagues in how to conduct interviews and establishing guidelines for archival and use of the interviews.", "fulltext": "INTRODUCTION\nDuring the SIGCSE Technical Symposium held in Reno, NV in\nFebruary 2003, a significant number of events focused on under-representation\nof women in the computing curriculum and as\ncomputing educators. 
Eric Roberts' keynote talk, \"Expanding the\nAudience for Computer Science\" [21], was a moving discussion\nof inclusiveness and a lament about the consequences of non-inclusion\n. At the Friday luncheon, Jane Margolis and Allan\nFisher discussed results from their groundbreaking work,\nUnlocking the Clubhouse [15]. Several private discussions begun\nat the conference and continuing for some time afterward led to a\nNovember 2004, proposal for this Working Group.\nIn this report, we document the results from a Working Group of\ncomputer science educators at the 2005 ITiCSE conference held\nin Lisbon, Portugal. We were drawn together by our shared\nconcern about women's under-representation among computing\neducators. We wished to honor women who had persevered in the\nearly days of this field and to make their stories available as a\nresource for those following after.\nITiCSE working groups are convened for the purpose of intensive\ncollaborative work on a topic of common interest among the\nparticipants, prior to and during the conference, generally\n\n\n174\ncompleting the Working Group's task by conference end. In\ncontrast, this group was convened to lay the groundwork for a\nproject that we hope will continue for some time to come. The\nWorking Group leaders spent the preceding 18 months\nformulating the charter for the Working Group: to collect oral\nhistories from pioneering women in computing education. The\ngoal of the Working Group meetings at ITiCSE in Lisbon was to\nformulate a plan that could bring the charter to fruition.\nWe envision that the result of this project will be a large oral\nhistory collection of broad scope with potential value to\nresearchers and others engaged in a variety of different projects.\nBecause this project could result in a large quantity of data, it\ncannot be stored by one person in her private file space. The data\nmust be maintained and administered by an agency or institution\nprepared for such a task.\nWe write this report for multiple audiences:\n1.\n\nThose who want a concise account of what the group\naccomplished in Lisbon.\n2.\n\nThose whose work will proceed from and build on that done\nin Lisbon for the oral history project.\n3.\n\nThose who want insights into the evolution and dynamic of a\nworking group.\n4.\n\nThose seeking historical information about the beginnings of\nthe oral history project.\nPREPARING FOR THE PROJECT\nThis section outlines key steps and insights developed prior to the\nITiCSE conference.\n2.1\n\nBuilding a Background\nThe initial vision of this project was to collect stories, or\nnarratives, from successful computing educators, in particular\nfrom women. We were particularly interested in the various paths\nthese individuals had followed through their careers.\nWe considered resources related to women and computing\neducation, in particular factors that seemed to lead to success in\nthe field. We found that the area of inquiry known as oral history\nincludes techniques conducive to the type of data-gathering we\nvisualized. Key resources for our project include a set of oral\nhistory evaluation guidelines [20], an Oral History Association\n[17], and a tutorial for conducting oral history from the Oral\nHistory Institute at Baylor University [2]. 
We discuss these\nresources further in Section 4 and in the annotated bibliography.\n2.2\n\nProject Vision\nWhile it was clear from its inception that the primary focus for\nthis Working Group would be women computing educators, we\nrecognized that this is potentially the first phase of a longer-term\nproject. The techniques developed in this first phase could be used\nin later phases, eventually developing into a broader project\ncovering the history of computing education as a profession. This\nlonger-term project should lead to a collection of oral histories\nfrom both men and women in the field as well as other artifacts.\nWhile we expect that future investigators will analyze the\nmaterials collected during each phase of the project, analysis of\nthe materials is not the driving factor at this time. We feel it is\nvital to create an accessible repository of the data to support future\ninvestigations.\n2.3\n\nSurvey to Gather Ideas\nIn order to gather ideas about the project from a broad community\nof individuals, we designed a survey to request ideas from\ncolleagues. In recognition of the longer-range potential of this\nwork, the survey solicited information for the full field of\ncomputing education, rather than restricting responses to the\nnarrower focus of women computing educators. The full survey\ncan be found in Appendix D.\nWe targeted two on-line communities with vested interest in the\ntopic of this Working Group: Systers [23] and SIGCSE.members\n[22]. By the end of the conference, the survey had resulted in\nresponses from 24 different individuals. Respondents offered\nideas for questions, thoughts about how to recruit additional\nsubjects for the interviews, and advice for how to proceed. The\nrespondents suggested 60 educators as potential interviewees, of\nwhom 34 are women. Several respondents also indicated interest\nin becoming involved with the project as planners, interviewers,\nor subjects.\n\nFORMULATING THE PROJECT GOALS\nAt the heart of this project is the recognition that women are\nunder-represented in the computing field [12]. In particular,\nWorking Group members had a variety of ideas for how to\naddress the lack of women in computing education. Among the\nideas:\n\nproviding role models\n\ncapturing stories of women of different ages to provide a\nhistory of women in computing education\n\nexploring the history of early women computing educators to\nlearn about and honor the stories of these women, who often\nfaced difficult circumstances\n\nrecording difficulties that women educators encountered\nduring their careers, and in some cases overcame, as a source\nof inspiration and support\nConsidering the challenges faced by women in early computing\neducation also brought up questions about how they managed\nthose challenges: What internal reserves and external resources\ndid they draw on? How did they sustain their confidence in their\nown capabilities, often as the only woman in what was at times a\nhostile environment? This led the group to consider self-efficacy\nbeliefs, which Bandura ([2], p. 391) defines as \"people's\njudgments of their capabilities to organize and execute courses of\naction required to attain designated types of performances\". 
A\nperson's self-efficacy beliefs can play a significant role in her\ncapacity to manage difficulties: if she believes she can actualize\nher intentions, obstacles presented by the environment impose less\ndrag.\n3.2\n\nFocus for the Short Term\nThis section addresses a number of key points that the group must\nconsider for the near term in order to coordinate work by a\ndistributed set of volunteers.\n3.2.1\n\nProtocol for Collecting Stories\nA key task of the Working Group was to establish a protocol to be\nfollowed by all volunteers on this project. The resources related to\n\n\n175\ncollecting oral histories provided a rich source of information for\ndefining the protocol to be used over the life of the project. A\nclear protocol will ensure consistency in the quality and general\ncontent of the interviews, especially for interviewers with little\nexperience in collecting oral histories. We discuss the protocol\nfurther in Section 5.\n3.2.2\n\nIdentifying Potential Subjects\nThe primary focus for this phase of the project is women\ncomputing educators who are late in their careers. The project\nwill seek an international sample in order to ensure a more\ncomplete picture; during the conference in Lisbon, many non-U.S.\neducators showed interest in the project.\nIt is urgent to capture narratives from older and more senior\neducators while these pioneers are still able to participate in the\ninterview process. The Working Group has created a list of\npotential subjects, including ideas drawn from the results of the\nsurvey described in Section 2.3.\n3.2.3\n\nLegal Issues: Consent, Access, Ownership\nA paramount concern for the project is the set of legal issues\nassociated with this form of academic inquiry. Pressing concerns\nthat must be resolved include the following:\n\nobtaining permissions to ensure that the materials are openly\naccessible for use in future studies and analyses;\n\ndetermining who will have access to the materials collected in\nthis project;\n\ndetermining ownership of the materials; and\n\ndesigning appropriate copyright and permission forms.\n3.2.4\n\nStorage and Transcription of the Interviews\nWhen an interview is complete, the recording(s), notes, and any\nsupplemental materials must be prepared for later use and\nanalysis. As a temporary measure, a copy on CD will provide\nsecure storage of the materials until incorporated into a formal\nrepository.\nTo make the interviews more accessible to future users such as\nresearchers and historians, common practice is to develop a\ntranscript. Besides being easier to scan quickly, a good quality\ntranscript makes it easier to create notes and cross-reference parts\nof the interview. There are two main approaches to creating\ntranscripts for an interview:\n\nlistening to the taped interview and capturing the dialog\nmanually, a tedious and exacting process, or\n\nusing voice recognition software to automatically create a\ntranscript in a computer file, a process that tends to be very\nerror-prone.\nOnce the transcription is complete, careful editing can make the\nwork clearer and more accessible. However, editing requires a\ndeft touch, using pre-determined guidelines specific to the project.\nThe editing process may consider issues such as how and whether\nto correct errors (for example, should transcriber errors be fixed\nduring editing? Should errors of fact acknowledged by the\ninterview subject be set right?) 
and whether to clear out irrelevant\ninformation (for example, deleting [presumably meaningless]\nfalse starts to make the transcript's meaning clearer). Editing may\nalso introduce paragraphs and subheadings to help highlight\ntopics and make it easier for a future reader to traverse the\ntranscribed interview.\n3.3\n\nArchival of the Project Materials\nFor security and availability of the collected materials, it will be\nvital to identify a means for the long-term storage of the\ninterviews and other artifacts. In addition, the repository can be\nused to maintain a bibliography of results related to the overall\nproject. While it is premature to determine where this work might\neventually reside, an excellent example of the appropriate style of\nstorage and availability is the repository related to the history of\ncomputing maintained by the Charles Babbage Institute [4].\n\nBACKGROUND\nTo set the context for the Working Group's project, this section\nconsiders four background areas: the area of inquiry known as\noral history, resources related to the history of computing,\nresources on the history of women in computing, and work related\nto the history of computing education.\n4.1\n\nWhat is Oral History?\nOral history is a method of inquiry with a rich tradition and\nspecific guidelines. While folklore and storytelling are examples\nof oral history through the ages, modern techniques have\nimproved the reliability of the data one can gather in an oral\nhistory project. The Wikipedia article on oral history [25]\nexplains:\n\"Oral history is an account of something passed down by\nword of mouth from one generation to another. Oral\nhistory is considered by some historians to be an\nunreliable source for the study of history. However, oral\nhistory is a valid means for preserving and transmitting\nhistory.\"\nThe Oral History Association [17] has published guidelines that\naddress several aspects of conducting oral history, including\nresponsibility to subjects, the public, and the profession; interview\ncontent and conduct; storage and preservation of media and\ninterviews; and an excellent bibliography.\nThe Oral History Primer from the Department of History at\nCalifornia State University, Long Beach [6] offers an overview of\nmany of the aspects of conducting an oral history project, such as\nhow to design the study, how to conduct and process the\ninterview, and how to use the completed interview. This resource\noffers a sample outline, a sample transcript, and a sample\nagreement form.\nAs the Working Group prepared for the meetings in Lisbon, a\nnumber of oral history projects helped us formulate ideas about\nhow the materials from such a project can be planned for and\narchived. For example, the London Voices project [14] gathered\noral histories from a variety of individuals and has made these\nstories available via a Museum of London website. The Oral\nHistory Directory from Alexander Street Press [18] is an\nambitious effort to index the major oral history collections in\nEnglish throughout the world. During our working group\npresentation at the ITiCSE conference, we learned of another\nproject in Brazil, O Museu da Pessoa (Museum of the Person)\n[16], which can provide additional ideas. The annotated\nbibliography in Appendix E lists relevant projects we have\ndiscovered thus far.\n\n\n176\nOne of the Working Group members in Lisbon, William Aspray,\nis a historian of computing who has conducted over 200\ninterviews eliciting oral histories. 
The materials related to these\ninterviews are in the repositories of the Charles Babbage Institute\nfor History of Computing [4], which we discuss further in the next\nsection. Aspray's participation in the Working Group provided\nkey inputs and examples as the group developed the guidelines\nand planning reflected in this report.\n4.2\n\nHistory of Computing Resources\nInterest in the history of computing is broad-based. A variety of\nhistorical projects focus on areas as diverse as artifacts (e.g.,\npunched cards, old computers), the timeline of events and\ndevelopments in computing, and the people involved in driving\nthe field forward. This section highlights a few computing history\nprojects that seem particularly relevant in the context of this\nWorking Group's project.\nThe Charles Babbage Institute (CBI) [4] was started in 1978 and\nby 1989 became an historical archives and research center of the\nUniversity of Minnesota. CBI preserves relevant historical\ndocumentation in a variety of media, conducts and fosters\nresearch in history and archival methods, and sponsors scholarly\nmeetings and publications related to the history of computing. The\nresources on this site include a set of more than 300 oral histories,\nof which no more than 5% appear to be from women.\nThe IEEE Annals of the History of Computing [8], a quarterly\npublication started in 1979, features scholarly articles by leading\ncomputer scientists and historians, as well as firsthand accounts\nby computer pioneers. The Annals is intended to serve as a focal\npoint for collecting and disseminating information on historical\nprojects and organizations, oral history activities, and\ninternational conferences.\nThe IFIP Working Group 9.7 on the History of Computing [9],\nestablished in 1992, focuses on the history of computing and\ninformatics with a view to providing the impetus to preserve the\nrecords and artifacts of information processing inventions,\npractices, and activities throughout the world under the auspices\nof IFIP and its constituent organizations. Among the goals of this\ngroup are to encourage the development of national archives, to\nidentify pioneers worthy of appreciation and distinction, to\ndevelop publication plans for histories of Information Processing,\nand to promote the inclusion of historical modules in appropriate\ncurricula.\nThe Virtual Museum of Computing (VMoC) [24], maintained by\nJonathan Bowen of London South Bank University, is a collecting\npoint that leads to many different sites across the web. Sections\ncurrently featured on the VMoC site include corporate history and\noverviews, history of computing organizations, and general\nhistorical information.\nThe History of Computing project [7], started by Cornelis Robat\nin the late 1980s, is now supported by a non-profit foundation\nfounded in April, 2000. This project is based in the Netherlands\nand has partners from throughout the world, including the\nUkraine, Poland, and Mexico. The project seems focused on\ngathering artifacts into an enormous database to ensure that\nimportant historical information remains available.\n4.3\n\nResources on the History of Women in\nComputing\nEspecially relevant to this Working Group's efforts are projects to\ncollect oral histories of women in computing. 
Janet Abbate [1] is\nconducting a research project to develop a history of women in\ncomputing in the United States and Britain since World War II.\nHer project draws on oral history interviews with more than fifty\nwomen who were active in computer science departments and the\nsoftware industry.\nA project that apparently never came to fruition is mentioned on a\nhistory of computing site created by J.A.N. Lee [13]. This project\nwas called \"Women in (the) Computing History\" (with the\nacronym \"witch\"). The description of this project states:\n\"In keeping with the tradition of documenting women's\nhistory through oral histories, the Women in (the)\nComputing History mailing list hopes to augment\ntraditional resources of women's and histories of\ncomputing by being a repository for women's own\nstories throughout the history of computing. All in\ncomputing, too, not just those of us formally schooled in\nthe computing sciences.\"\nUnfortunately, it appears that this project has disappeared from\nview, as we have thus far been unable to establish contact with\nanyone associated with the project.\nThe IFIP Working Group 9.8 on Women and Information\nTechnology [10] was established in 2001. Aspects of this group's\ncharge include the exchange of women's experiences as scholars\nand professionals in information technology, integration of\nfeminist perspectives into computer science, and developing an\nunderstanding of the gendered aspects in design, realization, and\nimplementation of information systems. The aims that seem\nespecially relevant for this project are analyzing the role of gender\nin computing education and educational strategies to support and\nretain girls and women.\n4.4\n\nHistory of Computing Education\nResources\nConsidered separately from resources related to the History of\nComputing, few resources address the history of computing\neducation. In 1982, the Mathematical Association of America\npublished a perspective on the field of Computer Science. The\nfirst chapter is an in-depth exploration of the development of\nComputer\nScience,\nwith\nemphasis\non\nthe\neducational\nunderpinnings of this field [19].\nIn August, 2004, when the IFIP 18th World Computer Congress\nwas held in Toulouse, France, one component of the Congress\nwas a History of Computing in Education conference. A book\npublished in 2004 derives from contributions made at this\nconference. This book [11] considers two aspects: the impact of\ncomputing on education over the past forty years and as a\npedagogical tool in computing education. Various articles\nconsider how organizations have used computers to enhance\nteaching and learning, spanning experiences in elementary\neducation through university studies in several countries.\n\n\n177\n\nWORK DURING ITiCSE\nOnce the Working Group convened in Lisbon, the face-to-face\nmeeting time was spent primarily on four activities: refining the\npurpose of the project, discussing and demonstrating the relevant\ntechniques, developing a protocol to guide the process of planning\nand conducting interviews, and training members in how to use\nthe interviewing techniques and materials. Each of these aspects\nare covered below.\n5.1\n\nRefining Purpose\nDuring the Working Group meetings, we refined the purpose and\nmethods of the project. We realized the need to differentiate\nbetween the purpose of the interviews (how they are structured\nand the kind of information they elicit) and the purpose of the\nproject as a whole (how the interviews will be used). 
We also\ncame to realize that our original notion of interviewing\n\"successful\" women computing educators constrained the project\nin two ways: 1) defining what we meant by \"success\" and\n2) losing the stories and lessons of those who did not continue in\ncomputing education.\n5.2\n\nDemonstration\nDuring the two days of meetings before the ITiCSE conference\nbegan, a key aspect of the Working Group's efforts was to explore\nthe theory and techniques guiding this project. To this end, the\ngroup discussed general techniques for how to use oral histories.\nTwo of the group members, Aspray and Barker, have social\nscience backgrounds: Aspray is a historian of computing, while\nBarker is a social scientist whose work focuses on women in\ncomputing. Because most group members had little experience\nwith conducting this type of inquiry, Aspray overviewed the\npurposes of oral history and methods for conducting interviews.\nTo make the techniques tangible, Aspray conducted a\ndemonstration interview with Working Group leader Barbara\nOwens as the subject. In preparation for the interview, Aspray and\nthe remaining Working Group members formulated a set of topics\nand prompts to include in the interview. The demonstration\ninterview was recorded on several digital devices, both to test the\ndevices and to avoid the possible loss of information due to\ntechnical difficulties.\nAfter the demonstration interview was completed, Aspray and\nBarker led the group in deconstructing\n1\nthe interview. During this\nsession, the group reflected on what went well, what could be\nimproved, and what to change in the future.\n5.3\n\nThe Protocol\nA major product of our Working Group was a protocol for this\nproject. After much discussion, we concluded that having a\ncommon set of materials would be vital for achieving consistent\nresults in interview sessions conducted by a wide variety of\nvolunteers. The protocol materials that will be used to support the\ninterview process include an opening script, an outline of topics, a\nset of sample probing questions or prompts, and guidelines for\nconducting interviews. We discuss each of these items in the\nremainder of this section.\n1\nDeconstructing an interview is different than analyzing the\nresults; the former focuses on process, while the latter considers\ncontent.\n5.3.1\n\nThe Opening Script\nThe opening script is used by the interviewer to set the scene\nbefore beginning the session. For example, the interviewer should\ncaution the subject that it is common for sensitive topics to come\nup during the course of a session and that the subject should feel\nfree to ask that the recording be turned off. As the session gets\nunderway and the interviewer starts the recording device(s),\nspecific opening information should be read onto the recording in\norder to provide a full context for this session. The interviewer\ncould state, for example:\n\"This is an interview with (interview subject's name)\nfrom (name of institution), conducted by (interviewer's\nname). This interview is being recorded on (date) at\n(city, country). 
It is part of the (computing education\noral history series / formalized name yet to be\ndetermined).\n\"Did we give and pronounce your name correctly?\"\n2\n\nAfter this, the interviewer can begin giving prompts, such as \"Tell\nus about your parents, for example what they did for a living.\" In\nthis example statement, using the pronoun \"us\", rather than \"me\",\ncan help the subject remember that her story is being told for a\nwider audience than just the interviewer at hand.\n5.3.2\n\nOutline of Topics\nThe Working Group developed an outline of relevant topics to be\nused in guiding the interviews. The outline can also assist the\ninterviewer in preparing for the interview, with the goal of making\nthe face-to-face time with the subject as effective as possible. The\nOutline of Topics that the Working Group developed appears in\nAppendix A.\n5.3.3\n\nSample Probing Questions\nPrompts are follow-up questions designed to elicit more detailed\nanswers or follow up a thread introduced in an earlier answer.\nBecause an interviewer must feel free to pursue topics that emerge\nas the session progresses, the prompts set provides examples for\nhow the interview can proceed, rather than a strict step-by-step\nrecipe. The Working Group developed a list of example prompts,\nwhich appears in Appendix B.\n5.3.4\n\nGuidelines for Conducting Interviews\nThis oral history project will require many interviewers in order to\nincrease the number of stories that can be collected within a\nlimited timeframe and across a wide geographical area. Guidelines\nwill help coordinate the efforts across volunteers in order to\nachieve a level of consistency across the results. Guidelines will\nalso help the volunteers prepare for and conduct sessions. The\nguidelines can assist an interviewer in establishing the proper\nsetting, maintaining an appropriate flow, and helping the subject\nfocus on the issues at hand.\nTo prepare for the session, the interviewer should study relevant\nbackground materials such as the subject's resume or vita, their\nprofessional publications, and anything written about the subject\n\n2\nThis final sentence is relevant primarily for names that are\nunusual or difficult to pronounce.\n\n\n\n\n178\nin secondary literature. This information can help the interviewer\nplan and prioritize the specific prompts, as well as the order of\nprompts, to be used during the interview session. At the same\ntime, the interviewer should not use the outline of topics as \"tick\noff\" items. An effective interview will be interactive in nature,\nwith the specific choice and ordering of prompts based on\nprevious answers.\nBecause the duration of a session must be limited to no more than\nan hour or two, the time must be used effectively. This makes it\nessential for the interviewer to come to the session as well\nprepared as possible. The face-to-face time during the session\nshould be used to explore tacit knowledge and the reasons for\ncertain behaviors and outcomes, providing insights into the\nmotivations behind events in the subject's life. 
To use the time\nwell, the interviewer must avoid spending precious time during\nthe session pursuing information that can be gleaned from the\nsubject's vita or other readily available materials.\nThe Working Group's guidelines for conducting interviews\nappear in Appendix C.\n5.4\n\nTraining\nOn the second day of the ITiCSE Working Group meetings, the\ngroup divided into two sub-groups, each of which included three\ncomputing educators and a \"consultant\" (either Aspray or Barker).\nIn these sub-groups, the computing educators tested the tentative\nprotocol to conduct practice interviews with one another. Each\ninterview session lasted about 15 minutes, with one computing\neducator interviewing a second using the list of topic areas, while\nthe third member watched and listened from the side. During the\npractice interviews, both sub-groups explored technologies that\ncan be used to record the interviews and transcribe the audio\nrecordings, testing multiple devices and in one group using a\nheadset to capture the answers for automatic transcription. The\n\"consultants\" observed during the interviews, then helped the\ngroup deconstruct the interviews and critique the methods.\n\nREFLECTIONS AFTER ITiCSE\nWorking Group members have offered the following as the most\npositive outcomes of the time in Lisbon (given in no particular\norder):\n1. Learning techniques of oral history and observing an\nexperienced interviewer using the techniques during a\ndemonstration.\n2. Hearing diverse ideas about project goals and reaching\nconsensus.\n3. Fleshing out the protocol for conducting interviews, thus\nmaking clear what should be asked during a session. The\nprotocol includes a detailed set of guidelines for conducting\nan oral history interview (Appendix C), an opening script\n(Section 5.3.1), a topic outline (Appendix A), and sample\nprompts (Appendix B).\n4. Being trained in interview techniques, which allowed the\ngroup to experiment with the equipment and pilot the process\nof gathering histories. Several members expressed the desire\nfor additional training and for the opportunity to review\nrecorded interviews conducted by more experienced\nindividuals.\n5. Understanding that the operative dynamic in an interview/oral\nhistory differs from that in conversation, although the\nsimilarities make it tricky to balance the exchange. Members\nfelt very positive about the experience of the practice\ninterviews. One member reported that she often found herself\ncaught up in the stories the subjects were telling, leading her\nto realize that it takes effort to learn to stick to the list of\ntopics that the interviewer wants to cover.\n6. Seeing the importance of privacy considerations, as well as the\nneed to obtain permission and plan for storage and access.\n7. Getting to know the other group members and hearing\nsignificant parts some of their stories. Even this small sample\ngave group members a feeling for the wide variety of paths\ntaken and challenges overcome.\n8. Discovering that the individual paths were an interplay\nbetween a recitation of facts (dates and places) and the deeply\nfelt emotional life that often motivated a person's actions.\nThis underscored the need for a respectful and reasonably\nwell-trained approach by each interviewer.\nThe group encountered a number of difficulties with the software\nand equipment. It became clear that the equipment is the weakest\nlink in performing an interview. 
The interviewer cannot be\ncertain that the equipment is functioning as expected until he or\nshe takes a break to review the recording.\nBased on experimentation during the Working Group meetings,\nthe Working group can make the following observations:\n\nThe group used several different models of the Olympus DVR\n(Digital Voice Recorder) and was able to get each model to\nwork properly.\n\nDirect recording to the computer worked well through the\nOlympus VN-240PC digital recorder.\n\nTransferring the recordings to CD was simple and seemed like\nan excellent way to create a temporary archive.\n\nWhile the Dragon Naturally Speaking Preferred speech\nrecognition software [5] may be helpful, it will require further\nexperimentation to use it effectively.\n\nWhile the group's experiences with the i-River recording\ndevices were not successful, one member has been pleased\nwith the performance of this device in the past.\n\nIn general the digital recorders worked well. However, in\nevery session at least one of the recorders failed, generally due\nto inexperience, human error, or time limitations. A key\nconclusion is that equipment redundancy is imperative. We\ndecided it is safest to use at least three recording devices\nduring each interview in order to ensure the best possible\nquality of recording.\nGroup members were surprised at the difficulty of transcribing\nrecorded interviews. Some members had hoped there would be\nuseful tricks or slight-of-hand for doing transcription.\nUnfortunately, creating a good quality transcription is simply a\nlengthy and intense process. A group member who transcribed\nher own interview found that it took nearly five times as long as\nthe interview duration to complete the transcription of the session!\nDuring early planning for the Working Group, the co-leaders had\nhoped to \"... conduct initial analysis of pilot interview data, and\nidentify emergent themes\". In the end, the group spent no time\n\n\n179\nwith formal analysis of the practice interviews. Instead, the group\nused the time to hone interview techniques and understand how to\nmove the project forward.\nDuring the first day of the ITiCSE conference (after the Working\nGroups had each met for two full days), each Working Group\npresented their group's mission and progress for conference\nattendees. The main impressions that Working Group members\nbrought away from this presentation were very positive, with\nmany attendees showing strong interest in the project and offering\nencouragement as well as suggestions for potential subjects.\n\nWHAT COMES NEXT?\nWhile the experience during the ITiCSE conference was valuable,\nthe time in Lisbon was too short and the expectations too high for\nthe group to be able to complete everything it had hoped to\naccomplish. By the end of the conference and the completion of\nthis report, the Working Group had prepared an annotated\nbibliography, learned about oral histories, piloted hardware and\nsoftware for recording, and set the stage for ongoing collection of\nhistories, including a protocol to follow in planning for and\nconducting interviews. While the Working Group did consider\nlegal and ethical issues during their discussions, a great deal must\nbe resolved before the process of active interviewing can begin. In\nparticular, access and ownership issues must be resolved before\nwe can begin collecting interviews.\nThe Working Group has an excellent start in recruiting volunteers\nto help in carrying out all aspects of the project. 
However, the\nwork of the volunteers must be coordinated in order to produce\ncoherent results. In addition, volunteers who conduct interviews\nmust be trained in the techniques. Various Working Group\nmembers have agreed to propose workshops and other training\nopportunities at a variety of venues and events.\nA challenge will be to select the set of subjects from the many\nsuggestions we have received. For the current stage of work, we\nwill include only women computing educators who are retired or\nin the latter stages of their careers. The entire project has an\nunderlying sense of urgency because many of the pioneers are in\npoor health or have already passed away. We have seen clear\ninterest in eventually expanding the project to include the stories\nof women in earlier parts of their careers and men at any stage of\ntheir careers.\nObtaining one or more sources of funding will be essential to\nachieving the full vision of the project. Funding can support\naspects such as transcription and review, travel to conduct training\nor to meet with subjects, and setting up permanent archival\nfacilities.\nWhile finding a permanent home for the oral histories is not\nessential during the early phases of the project, it is important if\nthe collected stories are to be useful and usable. In addition to\nproviding for archival of the recordings and transcriptions, the\neventual home should allow for including contextual materials,\nsuch as course and curriculum artifacts. The archival capability\nmust include sophisticated support for indexing and searching in\norder to support future visitors in browsing the collection and\nanalyzing the interview transcriptions and other artifacts.\nUltimately, whether this project will succeed or fail depends on\nthe level of engagement we can generate for all phases of the\nproject. To start, we must involve the computing education\ncommunity in collecting stories from women computing educators\nwho have retired or are about to retire. At the same time, we must\ncreate and maintain a sense of excitement about the potential of\nthe project. If there are sufficiently many interested volunteers,\nthe full-blown project to collect stories from men and from\nwomen earlier in their careers could certainly get underway in\nparallel with the current efforts.\n\nACKNOWLEDGMENTS\nThe individuals who met in Lisbon enjoyed the unique\nopportunity to learn these techniques and plan for what we hope\nwill be a productive long-term project. The group was fortunate\nto have additional individuals involved in the pre-conference\ndiscussions, several of whom made key contributions to the\npreparations. In particular, the Working Group is grateful to\nBettina Bair for her enthusiastic support. Bettina set up the group\nwiki and provided feedback as well as many ideas for resources.\nWe are also grateful to the others who participated in the pre-conference\ndiscussions, including Anne Applin and Amardeep\nKahlon. We thank the individuals who responded to our survey\nand offered suggestions of future subjects and possible questions.\nComments from the anonymous reviewers allowed us to refine the\npurpose of the report and improve the presentation. Late\ndiscussions with Susan Gerhart provided additional ideas and\ninspiration for future work.\nREFERENCES\nThe references given here are used directly in the text. 
In\nAppendix E we provide an annotated reference list, which repeats\nseveral of these references supplemented with our annotations.\n[17]\n\nAbbate, J., Finding our Place in History: Six Decades of\nWomen in Computing, Grace Hopper Celebration of Women\nin Computing. October 6-9, 2004. Chicago, IL. online:\ngracehopper.org/Proceedings/PDF/wpp_Abbate.pdf\n; last modified 10 January 2005, accessed 17 June 2005.\n[18]\n\nBaylor University Institute for Oral History, Oral History\nWorkshop on the Web, online:\nwww.baylor.edu/Oral_History/\n, last modified 25 April\n2005, accessed 17 June 2005.\n[19]\n\nBandura, A. Social foundations of thought and action: A\nsocial cognitive theory. Englewood Cliffs, NJ: Prentice Hall,\n1986.\n[20]\n\nCharles Babbage Institute, Center for the History of\nInformation Technology, University of Minnesota. online:\nwww.cbi.umn.edu/index.html\n, accessed 17 June 2005.\n[21]\n\nDragon Naturally Speaking Preferred speech recognition\nsoftware, 1st-Dragon information page:\nwww.1st-dragon\n.com/dragnatspeak1.html\n, accessed 17 July\n2005.\n[22]\n\nGluck, S. B. An Oral History Primer, Department of History,\nCalifornia State University, Long Beach, online:\nwww.csulb.edu/depts/history/relprm/oralprimer/\nOHprimer.html\n. last updated 6 March 2001, accessed 28\nJuly 2005.\n[23]\n\nThe History of Computing project. online:\nwww.thocp.net/\n. accessed 20 July 2005.\n[24]\n\nIEEE Annals of Computing. online:\nwww.computer.org/annals/. accessed 28 July 2005.\n\n\n180\n[25]\n\nIFIP Working Group 9.7 on the History of Computing.\nonline:\nhttp://www.comphist.org/\n. last modified 12\nJuly 2005; accessed 20 July 2005.\n[26]\n\nIFIP Working Group 9.8 on Women and Information\nTechnology. online:\nwww.informatik.uni-bremen\n.de/~oechteri/IFIP/\n. last modified 24 February\n2004; accessed 20 July 2005.\n[27]\n\nImplagliazzo, J., & J. A. N. Lee, History of computing in\neducation, Boston: Kluwer, 2004.\n[28]\n\nLazowska, E. Pale and male: 19th century design in a 21st\ncentury world. Women in computing history, SIGCSE\nBulletin inroads. 34(2) (June 2002). pp. 11-12.\n[29]\n\nLee, J.A.N. History of Computing. online:\nei.cs.vt.edu/~history/\n. last updated 6 December\n2002. accessed 20 July 2005.\n[30]\n\nLondon Voices, Museum of London, online:\nwww.museumoflondon.org.uk/MOLsite/londonsvoice\ns/\n, last modified 8 July 2004, accessed 17 June 2005.\n[31]\n\nMargolis, J. & Fisher, A. Unlocking the Clubhouse: The\nCarnegie Mellon Experience, SIGCSE Bulletin inroads,\n34(2), June 2002. pp. 79-83.\n[32]\n\nMuseu da Pessoa [translation: Museum of the Person].\nonline:\nwww.museudapessoa.net/\n. accessed 20 July\n2005.\n[33]\n\nOral History Association. online:\nomega.dickinson.edu/organizations/oha/\n. accessed\n20 July 2005.\n[34]\n\nOral History Directory, Alexander Street Press, online:\nwww.alexanderstreet2.com/oralhist/\n, last modified\n18 March 2005, accessed 19 June 2005.\n[35]\n\nPollack, S. V. The Development of Computer Science. In S.\nV. Pollack (Ed.) Studies in Computer Science. MAA Studies\nin Mathematics, Volume 22. The Mathematical Association\nof America, 1982.\n[36]\n\nRitchie, D. A. Oral History Evaluation Guidelines.\nPamphlet Number 3. Oral History Association. Oral History\nAssociation. online:\nwww.dickinson.edu/oha/pub_eg.html\n, last modified 4\nMay 2004, accessed 17 June 2005.\n[37]\n\nRoberts, E. 
Expanding the audience for computer science.\nPowerPoint version of keynote talk presented at 2003\nSIGCSE Technical Symposium, Reno, Nevada.\n[38]\n\nSIGCSE.members, the members-only mailing list of the\nACM Special Interest Group for Computer Science\nEducation. Subscription information online at\nwww.sigcse.org/\n.\n[39]\n\nSysters, an online community for technical women in\ncomputing. Subscription information online at\nwww.mecca.org/\n.\n[40]\n\nVirtual Museum of Computing (VMoC), online:\nhttp://vmoc.museophile.org/\n; last modified 4 January\n2005, accessed 20 July 2005.\n[41]\n\nWikipedia, Oral History. online:\nen.wikipedia.org/wiki/Oral_history/\n. last modified\n17 June 2005, accessed 20 July 2005.", "keywords": "Oral History;Computing Education History"} {"name": "5", "title": "A Dependability Perspective on Emerging Technologies", "abstract": "Emerging technologies are set to provide further provisions for computing in times when the limits of current technology of microelectronics become an ever closer presence. A technology roadmap document lists biologically-inspired computing and quantum computing as two emerging technology vectors for novel computing architectures [43]. But the potential benefits that will come from entering the nanoelectronics era and from exploring novel nanotechnologies are foreseen to come at the cost of increased sensitivity to influences from the surrounding environment. This paper elaborates on a dependability perspective over these two emerging technology vectors from a designer's standpoint. Maintaining or increasing the dependability of unconventional computational processes is discussed in two different contexts: one of a bio-inspired computing architecture (the Embryonics project) and another of a quantum computational architecture (the QUERIST project).", "fulltext": "INTRODUCTION\nHigh-end computing has reached nearly every corner of our\npresent day life, in a variety of forms taylored to accommodate\neither general purpose or specialized applications. Computers\nmay be considerred as fine exponents of the present days'\ntechnological wave if not their finest, they certainly do count as\nsolid, indispensable support for the finest.\nFrom the very beginning of the computing advent, the main target\nwas squeezing out any additional performance. The inception\nperiod was not always trouble-free, accurate computation results\nbeing required at an ever faster pace on a road that has become\nmanifold: some applications do require computational speed as a\ntop priority; others are set for the highest possible dependability,\nwhile still delivering sufficient performance levels.\nSeveral definitions for dependability have been proposed: \"the\nability of a system to avoid service failures that are more frequent\nor more severe than is acceptable\" [2], or \"the property of a\ncomputer system such that reliance can justifiably be placed on\nthe service it delivers\" [9][45]. Dependability is therefore a\nsynthetic term specifying a qualitative system descriptor that can\ngenerally be quantified through a list of attributes including\nreliability, fault tolerance, availability, and others.\nIn real world, a dependable system would have to operate\nnormally over extended periods of time before experiencing any\nfail (reliability, availability) and to recover quickly from errors\n(fault tolerance, self-test and self-repair). 
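To make the attributes just listed concrete, the following minimal Python sketch (an illustration added here, not part of the systems discussed in this paper) shows how two of them are commonly quantified: reliability under a constant failure rate, which is the exponential model adopted later in Section 2, and steady-state availability from MTTF and MTTR. The numeric values are purely hypothetical.

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t): probability of operating without failure up to
    time t, assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * t)

def availability(mttf: float, mttr: float) -> float:
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

# Illustrative numbers only (not measurements from any system in this paper):
lam = 2e-6            # failures per hour
mttf = 1.0 / lam      # mean time to failure under the exponential model
print(f"R(10,000 h)  = {reliability(lam, 1e4):.4f}")
print(f"Availability = {availability(mttf, 4.0):.6f}  (assuming a 4 h repair time)")
```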
The term "acceptable" has an essential meaning within the definition of dependability, setting the upper limit on the damage that the system can sustain while still remaining functional or computationally accurate. A dependability analysis should take into consideration, if not quantitative figures for the acceptable damage limit, at least a qualitative representation of the parameters behind its attributes.
Dependable systems are therefore crucial for applications that prohibit or limit human intervention, such as long-term exposure to aggressive (or even hostile) environments. The best examples are long-term operating machines, as required for managing deep-underwater or nuclear activities and for outer space exploration.
There are three main concerns that should be addressed throughout a system's design in order to achieve high dependability [42]:
1. Specifying the dependability requirements: selecting the dependability requirements that have to be pursued in building the computing system, based on known or assumed goals for the part of the world that is directly affected by the computing system;
2. Designing and implementing the computing system so as to achieve the dependability required. However, this step is hard, since the required system reliability cannot be attained simply through careful design; techniques such as fault injection can be used to evaluate the design process;
3. Validating the system: gaining confidence that a certain dependability requirement/goal has been attained.
This paper will address these main concerns through an attempt to provide an in-depth view over modern computing directions and paradigms which we consider representative of the efforts involved in improving overall dependability.
1.1 Motivations
We have listed some of the applications of dependable computing systems as linked to activities that take place in special environments, such as deep underwater or outer space. At first sight, these applications would appear specific enough not to encourage a dedicated design-for-dependability approach in computing. However, evidence suggests this is hardly the case; on the contrary, it is difficult to imagine a domain left unconquered by computer systems at a time when industry, transport, financial services and others rely heavily on accurate computer operation at any given moment. Even if computer inaccuracies can be overlooked at home, professional environments cannot accept such misbehaviors.
Yet the recent history of computing provides evidence that dependability is not a sine qua non feature.
During their life\ncycle, electronic devices constantly suffer a number of influences\nthat manifest predominantly over transient regimes, which in turn\nintroduce a variety of errors unified in the literature under the\nname of transient faults, soft errors or single event upsets (SEUs).\nThe rate electronic devices are affected with is known under the\nterm of soft error rate or simply SER and is measured in fails per\nunit time. Because it relies on transient phenomena due to\nchanging states and logical values, digital electronics makes up\nfor a special category that is also affected by soft errors. No\nmatter the name they are referred under, these errors affect the\ncomputing processes and are due to electromagnetic noise and/or\nexternal radiations rather than design or manufacturing flaws [28].\nOne cause at the origin of soft fails affecting digital devices is\nknown to be due to radioactive decay processes. Radioactive\nisotopes, widely used for a range of purposes, might contaminate\nsemiconductor materials leading to soft errors; evidence is\navailable throughout the literature, both by empirical observations\nand experimental results [20]. Consequently, cosmic rays,\ncontaining a broad range of energized atomic/subatomic particles\nmay lead to the appearance of soft fails.\nComputers therefore are susceptive to soft errors, an issue that\nwill potentially become essential with the advent of emerging\ntechnologies. As acknowledged by the International Technology\nRoadmap for Semiconductors (ITRS), issued at the end of 2004\n[43], the microelectronics industry faces a challenging task in\ngoing to and beyond 45nm scale in order to address \"beyond\nCMOS\" applications. Scaling down the technology will enable an\nextremely large number of devices to be integrated onto the same\nchip. However, the great challenge will be to ensure the new\ndevices will be operational at this scale [6], since they will exhibit\na sensitive behavior to soft fails. In order to address the negative\neffects brought by technology scaling, it is to be expected that\nsignificant control resources will need to be implemented [3].\nAnother challenging aspect concerning emerging technologies is\nto match the newly developed device technologies with new\nsystem architectures, a synergistic/collaborative development of\nthe two being seen as likely to be very rewarding. The potential of\nbiologically-inspired and quantum computing architectures is\nacknowledged by the ITRS report on emerging technologies [43]\n(see Figure 1). This paper will investigate the relevance of soft\nfails and attempt to provide means of harnessing their negative\neffects on modern computing in the context of biologically-inspired\nand quantum computing architectures.\n\nFigure 1: Bio-inspired and quantum computing are\nacknowledged as architectural technology vectors in emerging\ntechnologies [43]\n1.2\n\nPaper Outline\nThis paper is structured as follows. Section 2 will address the first\nmain concern, that is, specifying and selecting dependability\nrequirements that will have to be pursued when building a\ncomputational platform. Parameters that describe and quantify\ndependability attributes, such as reliability, will be introduced,\nwith a highlight on their accepted models and their issues. 
A particular consideration will be given to the failure rate parameter, which is the basis of all reliability analyses.
Section 3 will approach some of the means for designing for dependability; it will therefore elaborate upon two emerging technology vectors, as seen by the ITRS report [43], which define two novel architectures, namely biologically-inspired (or bio-inspired) and quantum computing. We will introduce two projects and their corresponding architectures, called Embryonics (as a biologically-inspired computing platform) and QUERIST (as a quantum computing platform designed to allow and study error injection). These two architectures are representative of the coming age of nano-computing, where computational processes take place as encoded at the very inner core level of matter, be it semiconductor material (for nanoelectronics, targeted here by the Embryonics project) or atomic-scale dynamics (for quantum computing, targeted here by the QUERIST project). This section will then introduce dependability aspects within bio-inspired computing (the Embryonics project being investigated in Subsection 3.1) and within quantum computing (the QUERIST project being investigated in Subsection 3.2).
Finally, Section 4 will present the conclusions and prospects for designing emerging-technology dependable computing systems, as we see them.
DEPENDABILITY ATTRIBUTES
An important dependability attribute for any given system lies in its capacity to operate reliably over a given time interval, knowing that normal operation was delivered at the initial time [8]. Reliability functions are modelled as exponential functions of the parameter λ, the failure rate. The reliability of a system is the consequence of the reliability of all of its subsystems. The heterogeneity of a system makes a quantitative assessment of its overall reliability difficult; moreover, estimating the reliability functions is further complicated by the fact that rigorous failure data is not commercially available, much of it being kept under military mandate [44].
The failure rate for a given system can be modelled as a function of the failure rates of its individual subsystems, suggestions being present in the MIL-HDBK-217 document, which is publicly available [44]. However, this document has been strongly criticized for its failure rate estimations based on the Arrhenius model, which relates the failure rate to the operating temperature:
λ = K · exp(−E / (K_B · T))    (1)
where K is a constant, K_B is Boltzmann's constant, T is the absolute temperature and E is the "activation energy" of the process. Quantitative values for failure rates show significant differences between those predicted using MIL-HDBK-217 and those obtained from testing real devices (see Figure 2) [18]. There are two conclusions that can be drawn from this:
1. quantitative estimations of failure rate values are strongly dependent on the quality of the information used; unfortunately, reliable current information about electronic devices is known to be lacking [44];
2. despite the differences between predicted and real values, the MIL-HDBK-217 methodology can be useful for qualitative analyses in order to take decisions regarding the sub-system parts that should benefit from improved designs.
Figure 2. Predicted vs. real failure rates plotted against temperature [18]
So far the failure rate of digital devices has been considered as due to internal causes.
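The following short Python sketch illustrates the two modelling steps just mentioned: the Arrhenius temperature dependence of Equation (1) and a parts-count style combination in which the system failure rate is taken as the sum of its subsystems' rates. It is added for illustration only; the constants and failure rates are hypothetical, not values from MIL-HDBK-217 or from this paper.

```python
import math

K_B = 8.617e-5  # Boltzmann's constant in eV/K

def arrhenius_failure_rate(K: float, E: float, T: float) -> float:
    """Equation (1): lambda = K * exp(-E / (K_B * T)), with E in eV and T in kelvin."""
    return K * math.exp(-E / (K_B * T))

def system_failure_rate(subsystem_rates) -> float:
    """Parts-count style combination: sum of the subsystems' failure rates."""
    return sum(subsystem_rates)

# Hypothetical parameters, chosen only to show the strong temperature sensitivity:
K, E = 1e3, 0.7
for T in (300, 325, 350):
    print(f"T = {T} K  ->  lambda = {arrhenius_failure_rate(K, E, T):.3e}")

print("system lambda:", system_failure_rate([1e-6, 5e-7, 2e-6]))
```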
However, this is not always the case, soft\nfails being equally present due to the aggressive influences of the\nexternal environment, which also have to be modelled [22]. The\nexternal envirnment features highly dynamic changes in its\nparameters, which will eventually affect the normal operation of\ndigital devices that lack sufficient protection or ability to adapt.\nIdeally, computing devices would behave in a consistent and\naccurate manner regardless of fluctuations in environmental\nparameters. This is either a consequence of soft error mitigation\ntechniques or due to flexible hardware/software functionality that\nallow the system as a whole to adapt to environamental changes\nand tolerate induced faults.\nWhile certain soft error mitigation techniques are available, the\ntechnology scaling towards nanoelectronics affects their\nefficiency by integrating a larger number of devices per chip\n(which requires a larger amount of redundant/control logic or\nother measures), which feature, at the same time, smaller\ndimensions (which renders an electronic device much more\nsenzitive to the influence of stray energetic particles that reach it\nas part of cosmic rays). Both aspects are involved in the\ndevelopment of the two emerging technology vectors mentioned\nin SubSection 1.1, although having slightly different motivations:\nwhile the nature of the quantum environment prohibits precise\ncomputation in the absence of fault tolerance techniques, such\ntechniques are targetted by bio-inspired computing as means of\nimproving the dependability of a computing platform.\n2.1\n\nBio-Inspired Computing\nIf living beings may be considered to fulfill computational tasks,\nthen Nature is the ultimate engineer: each of the living beings\nexhibit solutions that were successfully tested and refined in such\nways human engineers will never afford. One reason is time: the\ntesting period coinciding with the very existence of life itself.\nAnother reason is variety and complexity: Nature has found and\nadapted a variety of solutions to address complex survivability\nissues in a dynamically changing environment. No matter how\nNature approached the process of evolution, engineering could\nperhaps benefit most from drawing inspiration from its\nmechanisms rather from trying to develop particular techniques.\nBio-inspired computing is not a new idea. John von Neumann was\npreoccupied to design a machine that could replicate itself and\nwas quite interested in the study of how the behavior of the\nhuman brain could be implemented by a computer [13][14]. He\nalso pioneered the field of dependable computing by studying the\npossibility of building reliable machines out of unreliable\ncomponents [15]. Unfortunately, the dream of implementing his\nself-reproducing automata could not become true until the 1990s,\nwhen massively programmable logic opened the new era of\nreconfigurable computing.\nBut when trying to adapt nature's mechanisms in digital devices,\nit becomes most evident that biological organisms are rightfully\nthe most intricate structures known to man. They continuously\ndemonstrate a highly complex behavior due to massive, parallel\ncooperation between huge numbers of relatively simple elements,\nthe cells. 
And considering uncountable variety of living beings,\nwith a life span up to several hundreds (for the animal regnum) or\neven thousands (for the vegetal regnum) of years, it seems nature\nis the closest spring of inspiration for designing dependable, fault\ntolerant systems.\nInvestigating the particularities of natural systems, a taxonomy of\nthree categories of processes can be identified [32]:\n1.\n\nPhylogenetic processes constitute the first level of\norganization of the living matter. They are concerned with the\ntemporal evolution of the genetic heritage of all individuals,\n189\ntherefore mastering the evolution of all species. The\nphylogenetic processes rely on mechanisms such as\nrecombination and mutation, which are essentially\nnondeterministic; the error rate ensures here nature's\ndiversity.\n2.\n\nOntogenetic processes represent the second level of\norganization of the living matter. They are also concerned\nwith the temporal evolution of the genetic heritage of, in this\ncase, a single, multicellular individual, therefore mastering an\nindividual's development from the stage of a single cell, the\nzygote, through succesive cellular division and specialization,\nto the adult stage. These processes rely on deterministic\nmechanisms; any error at this level results in malformations.\n3.\n\nEpigenetic processes represent the third level of organization\nof the living matter. They are concerned with the integration\nof interactions with the surrounding environment therefore\nresulting in what we call learning systems.\nThis taxonomy is important in that it provides a model called POE\n(from Phylogeny, Ontogeny and Epigenesis) that inspires the\ncombination of processes in order to create novel bio-inspired\nhardware (see Figure 3). We believe this is also important from a\ndependability engineering perspective, for the following reasons:\n1.\n\nPhylogenetic processes were assimilated by modern\ncomputing as evolutionary computation, including genetic\nalgorithms and genetic programming. The essence of any\ngenetic algorithm is the derivation of a solution space based\non recombination, crossover and mutation processes that\nspawn a population of individuals, each encoding a possible\nsolution. One may consider that each such step, with the\nexception of discovering the solution, is equivalent to a\nprocess of error injection, which in turn leads to wandering\nfrom the optimal solution (or class of solutions). However,\ngenetic algorithms prove to be successful despite this error\ninjection, the fitness function being responsible for the\nsuccessful quantification of the significance of the \"error\".\nTherefore genetic computation is intrinsicaly resilient to\nfaults and errors, largely due to the fact that they are part of\nthe very process that generates the solutions.\n2.\n\nOntogenetic processes have been implemented in digital\nhardware with modular and uniform architectures. Such an\narchitecture enables the implementation of mechanisms\nsimilar to the cellular division and cellular differentiation that\ntake place in living beings [31]. 
These mechanisms bring the\nadvantage of distributed and hierarchical fault tolerance\nstrategies: the uniformity of the architecture also makes any\nmodule to be universal, that is, to be able to take over the role\nof any other damaged module.\n3.\n\nEpigenetic processes were assimilated by modern computing\nmainly as artificial neural networks (or ANNs) as inspired by\nthe nervous system, and much less as inspired by the immune\nor endocrine systems from superior multicellular living\nbeings. ANNs are known to have a generalization capacity,\nthat is, to respond well even if the input patterns are not part\nof the patterns used during the learning phase. This means that\nANNs possess a certain ability to tolerante faults, whether\nthey manifest at the inputs or inside their intenal architecture.\nWith the advent of field programmable logic (of which the most\nsalient representative are the FPGAs) it is now possible to change\nhardware functionality through software, thus allowing\ninformation to govern matter in digital electronics. This is not\ndissimilar to what happens in nature: information coded in DNA\naffects the development of an organism. A special kind of such\ndigital devices that change dynamically their behavior are known\nas evolvable or adaptive hardware; they are bio-inspired\ncomputing systems whose behaviors may change according to\ncomputational targets, or, if harsh or unknown environments are\nto be explored, for the purpose of maximizing dependability.\n\nFigure 3. The POE model of bio-inspired systems [32]\n2.2\n\nQuantum Computing\nError detection and correction techniques are vital in quantum\ncomputing due to the destructive effect of the environment, which\ntherefore acts as an omnipresent error generator. Error detection\nand correction must provide a safe recovery process within\nquantum computing processes through keeping error propagation\nunder control. Without such dependability techniques there could\nbe no realistic prospect of an operational quantum computational\ndevice [19].\nThere are two main sources of errors: the first is due to the\nerroneous behavior of the quantum gate, producing the so-called\nprocessing errors; the second is due to the macroscopic\nenvironment that interacts with the quantum state, producing the\nstoring and transmitting errors.\nThe consistency of any quantum computation process can be\ndestroyed by innacuracies and errors if the error probability in the\nbasic components (qubits, quantum gates) excedes an accuracy\nthreshold. This is a critical aspect since the microscopic quantum\nstates are prone to frequent errors.\nThe main error source is the decoherence effect [16]. The\nenvironment is constantly attempting to measure the sensitive\nquantum superposition state, a phenomenon that cannot be\navoided technologically since it is not (yet) possible to isolate\nthem perfectly. The superposition state will decay through\nmeasuring and will therefore become a projection of the state\nvector onto a basis vector (or eigenstate). The most insidious\nerror, however, appears when decoherence affects the quantum\namplitudes without destroying them; this is similar to small\nanalog errors. Issues stated above are solved, on one hand,\nthrough intrinsic fault tolerance by technological implementation\n(topological interactions [1]) and, on the other hand, by error\ncorrecting techniques at the unitary (gate network) level. 
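As a small numerical illustration of the error types discussed above (and of the bit-flip/phase-flip correspondence that Figure 4 depicts), the following NumPy sketch applies a bit-flip (X), a phase-flip (Z) and a Hadamard basis change to a single-qubit amplitude vector. This is only an added toy example, not the simulation machinery used by the projects described later.

```python
import numpy as np

# Single-qubit state a0|0> + a1|1>, stored as an amplitude vector.
X = np.array([[0, 1], [1, 0]])                 # bit-flip error
Z = np.array([[1, 0], [0, -1]])                # phase-flip error
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard (basis change)

psi = np.array([1, 1]) / np.sqrt(2)            # equally weighted superposition

print(Z @ psi)   # phase error: the state becomes (|0> - |1>)/sqrt(2)
print(X @ psi)   # a bit flip leaves this particular state unchanged
# In the Hadamard-rotated basis, a phase error looks like a bit flip (H Z H = X):
print(np.allclose(H @ Z @ H, X))               # True
```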
We will\nfocus on the error detecting and correcting techniques, which are\ndifficult to approach due to quantum constraints: the useful state\n190\ncan neither be observed (otherwise it will decohere), nor can it be\ncloned.\n2.2.1\n\nBackground\nAs expressed in bra-ket notation [16], the qubit is a normalized\nvector in some Hilbert space\n,\n2\nH\n{\n}\n0 , 1 being the orthonormal\nbasis:\n0\n1\n0\n1\na\na\n\n=\n+\n(\nare the so-called quantum\namplitudes, representing the square root of the associated\nmeasurement probabilities for the eigenstates\n0\n1\n,\na a\nC\n0 and 1\nrespectively, with\n0\n1\n2\n2\n1\na\na\n+\n= ). Therefore, the qubit can be\naffected by 3 types of errors:\nBit flip errors are somewhat similar to classical bit flip errors. For\na single qubit things are exactly the same as in classical\ncomputation: 0\n1 , 1\n0 . For 2 or more qubits, flip errors\naffecting the state may modify it or leave it unchanged. For\ninstance, if we consider the so-called cat state\n(\n1\n00\n11\n2\nCat\n\n=\n+\n)\n[19], and the first qubit is affected by a\nbit flip error, the resulting state will be\n(\n)\n1 10 01\n2\nCat\n\n+\n.\nBut, if both qubits are affected by bit flips, there will be no\nchange in the state:\n(\n)\n1 11 00\n2\nCat\nCat\n\n\n+\n=\n.\nPhase errors affect the phase of one of the qubit's amplitudes and\nis expressed as 0\n0 , 1\n- 1 . This type of error is very\ndangerous, due to its propagation behavior but it only makes\nsense when dealing with superposition states. If we consider an\nequally weighted qubit superposition state and inject a phase\nerror, this results in\n(\n)\n(\n)\n1\n1\n0\n1\n0\n1\n2\n2\n+\n\n.\nThere is a strict correspondence between bit flip and phase error\ntypes due to the way they map onto Hilbert spaces with the same\ndimension but different basis. The bit flip is an error from the\n\nspace with basis\n2\nH\n{\n}\n0 , 1 , whereas the phase error appears in the\nsame space with basis\n(\n)\n(\n)\n1\n1\n0\n1 ,\n0\n1\n2\n2\n\n\n+\n\nor\n{\n}\n,\n+ 1\na\n.\nThe space basis conversion, in this case, is made by applying the\nHadamard transform; Figure 4 shows an example of transforming\na bit flip error into a phase error (A, and vice versa (B.\nSmall amplitude errors: amplitudes\nof the quantum bit\ncan be affected by small errors, similar to analog errors. Even if\nsuch an error does not destroy the superposition and conserves the\nvalue of the superposed states, small amplitude errors could\naccumulate over time, eventually ruining the computation. In\norder to avoid this situation, specific methodologies for digitizing\nsmall errors are used to reduce them to a non-fault or a bit-flip\n0\nand\na\n[19].\nDue to the quantum physics laws, fault tolerance techniques have\nto comply with the following computational constraints:\n\n\nThe observation destroys the state. Since observation is\nequivalent to measurement, this leads to destroying the\nuseful state superposition.\n\n\nInformation copying is impossible. Quantum physics renders\nthe cloning of a quantum state impossible, meaning that a\nquantum state cannot be copied correctly. Therefore\nquantum error correction must address the following\nproblems:\nNon-destructive measurement. Despite the first constraint it is\nnecessary to find a way to measure the encoded information\nwithout destroying it. Because the encoded state cannot be\nmeasured directly, one needs to properly prepare some scratch\n(ancilla) qubits, which can then be measured.\nFault-tolerant recovery. 
Due to the high error rate in quantum\ncomputational devices, it is likely that the error recovery itself\nwill be affected by errors. If the recovery process is not fault-tolerant\n, then any error coding becomes useless.\nPhase error backward propagation. If we consider the XOR gate\nfrom Figure 5(A, a flip error affecting the target qubit (b) will\npropagate backwards and also affect the source qubit. This is due\nto the gate network equivalence from Figure 5(B and the basis\ntransformation described by Figure 4.\n\nFigure 4. Correspondence between bit flip and phase errors\n\nFigure 5. (A The backward propagation of a phase error for\nthe XOR gate; (B Gate network equivalence\nIn order to deal with the problems described the next strategies\nhave to be followed:\nDigitizing small errors. The presence of small errors is not a\nmajor concern, as they can be digitized using a special technique\nbased on measuring auxiliary (ancilla) qubits [19].\nAncilla usage. Since qubit cloning is impossible, a majority\nvoting strategy is difficult to implement. However, by using\nancilla qubits, the eigenstate information can be duplicated inside\nthe existing superposition, resulting in the entanglement of the\nancilla with the useful data. Because any measurement performed\non the ancilla could have repercussions on the useful qubits, the\nappropriate strategy will employ special coding for both data\nqubits and ancilla (data errors only will be copied onto the\nancilla), followed by the computation of an error syndrome,\nwhich has to be obtained through measuring the ancilla (see\nFigure 6).\nAvoiding massive spreading of phase errors. As shown\npreviously, a phase error on the target qubit will propagate on all\nsource qubits. The solution is to use more ancilla qubits as targets,\nso that no ancilla qubit is used more than once.\n191\n\nFigure 6. Fault-tolerant procedure with ancilla qubits\nAncilla and syndrome accuracy. Setting the ancilla code to some\nknown quantum state could be an erroneous process. Computing\nthe syndrome is also prone to errors. Hence, on one hand, one has\nto make sure that the ancilla qubits are in the right state by\nverifying and recovering them if needed; on the other hand, in\norder to have a reliable syndrome, it must be computed\nrepeatedly.\nError recovery. As the small errors can be digitized (therefore,\nthey are either corrected or transformed into bit flip errors), the\nrecovery must deal only with bit flip and phase errors. A state that\nneeds to be recovered is described by:\n0\n1\n1\n0\n0\n1\n0\n1\n1\n0\n0\n1\nif no error occurs\n0\n1\nfor a flip error\n0\n1\n0\n1\nfor a phase error\n0\n1\nfor both flip and phase errors\nerror\na\na\na\na\na\na\na\na\na\na\n\n+\n\n+\n\n+\n\n\n\n\n.\n\n\nCorrecting a bit flip error means applying the negation unitary\ntransformation\nto the affected qubit. To\ncorrect phase and combined errors, the following unitary\noperators will have to be applied respectively:\n.\n0 1\n1 0\nN\nx\nU\n\n\n=\n=\n1\n0\n0\n,\n0\n1\n0\nZ\nY\nN\nZ\ni\nU\nU\nU U\ni\n\n\n\n=\n=\n\n=\n\n\n\n\n\n\n2.2.2\n\nQuantum Error Correcting Codes\nQuantum error coding and correcting (QECC) is performed with\nspecial coding techniques inspired from the classic Hamming\ncodes. 
The classical error coding is adapted so that it becomes\nsuitable for the quantum strategy, allowing only the ancilla qubits\nto be measured.\nThe state-of-the-art in QECC is represented by the stabilizer\nencoding, a particular case being the Steane codes (the Shor codes\nmay also be used [29]). Steane's 7-qubit code is a single error\ncorrecting code inspired from classical Hamming coding and can\nbe adapted for ancilla coding as well. Therefore it cannot recover\nfrom two identical qubit faults, but it can recover from a bit flip a\nphase flip. The Steane 7-qubit coding of 0 and 1 consists of\nan equally weighted superposition of all the valid Hamming 7-bit\nwords with an even and odd number of 1s, respectively:\n\n(\n)\n,\n0 1 2 3 0 1 2\n0 1 2 3 0 1 2\n32\n32\n1\n0\n2\n1\n0000000\n0010111\n0101110\n0111001\n2\n1001011\n1011100\n1100101\n1110010\nu u u u c c c\nS\neven\nu u u u c c c\n\n\n\n\n\n\n=\n=\n+\n+\n+\n+\n+\n+\n+\n\n(\n)\n,\n0 1 2 3 0 1 2\n0 1 2 3 0 1 2\n32\n32\n1\n1\n2\n1 1111111 1101000 1010001 1000110\n2\n0110100\n0100011\n0011010\n0001101\nu u u u c c c\nS\nodd\nu u u u c c c\n\n\n\n\n\n\n=\n=\n+\n+\n+\n+\n+\n+\n+\n\n+\n(3)\nApplying the Steane coding on an arbitrary given quantum state\n0\n1\n0\na\na\n\n=\n+\n1 transforms it into\n0\n1\n0\n1\nS\nS\na\na\n\n=\n+\nS\n. This\ncode was designed to correct bit-flip errors, but by changing the\nbasis (through a Hadamard transform) the phase error transforms\ninto a bit flip error, which can then be corrected:\n(\n)\n(\n)\n1\n0\n0\n0\n1\n2\n1\n1\n1\n0\n1\n2\nS\nS\nS\nS\nS\nS\nS\nH\nH\n=\n\n=\n+\n=\n\n=\nS\n(4)\n2.2.3\n\nFault Tolerance Methodologies\nQuantum error-correcting codes exist for r errors,\n, 1\nr\nr\n\n\nN\n.\nTherefore a non-correctable error occurs if a number of\n1\nr\n+\nerrors occur simultaneously before the recovery process.\nIf the probability of a quantum gate error or storage error in the\ntime unit is of order\n\n, then the probability of an error affecting\nthe processed data block becomes of order\n1\nr\n\n+\n, which is\nnegligible if r is sufficiently large. However, by increasing r the\nsafe recovery also becomes more complex and hence prone to\nerrors: it is possible that\n1\nr\n+ errors accumulate in the block\nbefore the recovery is performed.\nConsidering the relationship between r and the number of\ncomputational steps required for computing the syndrome is\npolynomial of the order\np\nr\n. It was proven that in order to reduce\nas much as possible the error probability r must be chosen so that\n1\n1\n~\np\nr\ne\n\n\n[7][19]. By consequence, if attempting to execute N\ncycles of error correction without any r+1 errors accumulating\nbefore the recovery ends, then\n1\n~ exp\np\nN\n\n\n\n\n\n. Therefore the\naccuracy degree will be of the form\n(\n)\n~ log\np\nN\n\n\n, which is\nbetter than the accuracy degree corresponding to the no-coding\ncase,\n1\n~ N\n\n\n. However, there exists a\nso that if\n\nthen non-correctable error becomes likely, which limits the length\nof the recovery process. 
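The scaling relations used in the preceding argument are difficult to read after text extraction; as best we can reconstruct them from the surrounding discussion and [19], they are restated below. With a code correcting r errors and a per-gate or storage error probability of order ε, the block error probability, the syndrome cost, the achievable number of error-correction cycles N for an optimized r of roughly ε^(-1/p), and the resulting accuracy requirement are approximately:

\[
P_{\text{block}} \sim \varepsilon^{\,r+1}, \qquad
\text{syndrome cost} \sim r^{\,p}, \qquad
N \sim \exp\!\big(\varepsilon^{-1/p}\big)
\;\Longleftrightarrow\;
\varepsilon \sim (\log N)^{-p},
\]

compared with \(\varepsilon \sim N^{-1}\) in the uncoded case.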
Given the extremely large number of\ngates employed by a quantum algorithm implementation,\n\nalso has to be very large; for Shor's algorithm\nmust be\nhigher than\nmax\nN\nmax\nN\nN\n>\nmax\nN\nmax\nN\n9\n3 10\n\n[30].\nAs shown in Figure 7, the required accuracy degree approaches\ntoday's technological limits (tipically 10\n-3\nfor p=4) after N=10\n5\n.\nFor a fault tolerant encoding solution for Shor algorithm\nimplementation this should have happened after N=10\n9\n [19][34].\n+\n(2)\nAdditional fault tolerance must be employed in order to preserve\nreliable quantum computation over an arbitrary number of\ncomputational steps. Concatenated coding represents one such\ntechnique, which improves the reliability by shaping the size of\n192\nthe code blocks and the number of code levels. It is also resource\ndemanding and vulnerable to correlated errors [19][37].\nAnother approach, replacing the concatenated codes, is based on\nReconfigurable Quantum Gate Arrays (RQGAs) [34][37], which\nare used for configuring ECC circuits based on stabilizer codes\n[7][33]. By using a quantum configuration register for the RQGA\n(i.e. a superposition of classical configurations), the\nreconfigurable circuit is brought to a state where it represents a\nsimultaneous superposition of distinct ECC circuits. After\nmeasuring the configuration register, only one ECC circuit is\nselected and used; if k distinct ECC circuits were superposed and\nthe gate error rate is\n, then the overall gate error probability\nbecomes\nk\n\n(see Figure 8). As a result, the accuracy threshold\nvalue for the RQGA solution clearly dominates the technological\naccuracy limit, as shown in Figure 9 [37].\n\nFigure 7. Accuracy plots: p=3 for xi\n1\n, p=4 for xi\n2\n, p=5 for xi\n3\n;\nxi\n4\nfor no-coding, ref is the reference accuracy (i.e. the\naccuracy allowed by today's state of the art technology)\n\nFigure 8. A quantum configuration register acts as a\nsuperposition of k distinct circuits sharing the same input\nstate and the same output qubits\nDEPENDABLE SYSTEM DESIGN\nIn order to model the erroneous behavior of a device of system it\nis necessary to understand the causality of phenomena concerned.\nA defect affecting a device from a physical point of view is called\na fault, or a fail. Faults may be put in evidence through logical\nmisbehavior, in which case they transform into errors. Finally,\nerrors accumulating can lead to system failure [8]. The fault-error\n-failure causal chain is essential to developping techniques\nthat reduce the risk of error occurrence, even in the presence of\nfaults, in order to minimize the probability of a system failure,\nand can be architecture specific. We will elaborate next on\ntechniques used by a bio-inspired and by a quantum computing\nplatform.\n\nFigure 9. Evolution of accuracy threshold value for RQHW\nstabilizer codes (xir); the technological accuracy limit (lim) is\nalso provided for a relevant comparison\n3.1\n\nThe Embryonics Approach\nSeveral years before his untimely death John von Neumann began\ndevelopping a theory of automata, which was to contain a\nsystematic theory of mixed mathematical and logical forms,\naimed to a better understanding of both natural systems and\ncomputers [14]. The essence of von Neumann's message appears\nto entail the formula \"genotype + ribotype = phenotype\". 
He\nprovided the foundations of a self-replicating machine (the\nphenotype), consisting of its complete description (the genotype),\nwhich is interpreted by a ribosome (the ribotype).\nEmbryonics (a contraction for embryonic electronics) is a long\nterm research project launched by the Logic Systems Laboratory\nat the Swiss Federal Institute of Technology, Lausanne,\nSwitzerland. Its aim is to explore the potential of biologically-inspired\nmechanisms by borrowing and adapting them from\nnature into digital devices for the purpose of endowing them with\nthe remarkable robustness present in biological entities [39].\nThough perhaps fuzzy at a first glance, analogies between biology\nand electronics are presented in Table 1 [12][31].\nBut if we consider that the function of a living cell is determined\nby the genome, and that a computer's functionality is determined\nby the operating program, then the two worlds may be regarded as\nsharing a certain degree of similarity. Three fundamental features\nshared by living entities are required to be targetted by\nEmbryonics in order to embody the formula \"genotype + ribotype\n= phenotype\" into digital hardware:\n\n\nmulticellular organisms are made of a finite number of cells,\nwhich in turn are made of a finite number of chemically\nbonded molecules;\n\n\neach cell (beginning with the original cell, the zygote) may\ngenerate one or several daughter cell(s) through a process\ncalled cellular division; both the parent and the daughter\ncell(s) share the same genetic information, called the genome;\n\n\ndifferent types of cells may exist due to cellular\ndifferentiation, a process through which only a part of the\ngenome is executed.\nThese fundamental features led the Embryonics project to settle\nfor an architectural hierarchy of four levels (see Figure 10). We\nwill not delve very deep inside the Embryonics'phylosophy, as\nsuch details were broadly covered by literature [12][20][23][24]\n[25][40]; we will, however, introduce each of the four levels in\n193\norder to be able to see how this bio-inspired platform fits modern\ndesign for dependability efforts.\nTable 1. Analogies present in Embryonics [12]\nBiology Electronics\nMulticellular organism\nParallel computer systems\nCell Processor\nMolecule FPGA\nElement\n\nFigure 10. Structural hierarchy in Embryonics [12]\nThe upmost level in Embryonics, bearing a certain similarity to\nwhat is found in nature, is the population level, composed of a\nnumber of organisms. One level down the hierarchy constitutes\nthe organismic level, and corresponds to individual entities in a\nvariety of functionalities and sizes. Each of the organisms may be\nfurther decomposed into smaller, simpler parts, called cells, which\nin turn may be decomposed in molecules. According to\nEmbryonics, a biological organism corresponds in the world of\ndigital systems to a complete computer, a biological cell is\nequivalent to a processor, and the smallest part in biology, the\nmolecule, may be seen as the smallest, programmable element in\ndigital electronics (see Table 1).\nAn extremely valuable consequence of the Embryonics\narchitecture is that each cell is "universal", containing a copy of\nthe whole of the organism's genetic material, the genome. This\nenables very flexible redundancy strategies, the living organisms\nbeing capable of self-repair (healing) or self-replication (cloning)\n[12]. 
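The universality of the Embryonics cell can be illustrated with a minimal Python sketch, added here only as an analogy: every cell stores the complete genome and differentiates by executing the gene selected by its coordinates, so a spare cell can take over a failed cell's role simply by recomputing its coordinates. The gene names and coordinates are invented for the example; this is not the Embryonics implementation itself.

```python
# Hypothetical genome: a mapping from cell coordinates to the gene (sub-program)
# that a cell at those coordinates should express.
GENOME = {
    (0, 0): "count_units", (0, 1): "count_tens",
    (1, 0): "display_units", (1, 1): "display_tens",
}

class Cell:
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y
        self.genome = dict(GENOME)        # each cell holds a complete copy

    def expressed_gene(self) -> str:
        # Cellular differentiation: only the part of the genome addressed
        # by the cell's coordinates is executed.
        return self.genome[(self.x, self.y)]

# Because the genome is universal, a spare cell can replace a "dead" cell
# by adopting the dead cell's logical coordinates:
spare = Cell(1, 1)
spare.x, spare.y = 0, 1                   # assume the cell at (0, 1) has failed
print(spare.expressed_gene())             # -> "count_tens"
```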
Self-replication may be of great interest in the\nnanoelectronics era, where extremely large areas of\nprogrammable logic will probably render any centralized control\nvery inefficient. Instead, the self-replication mechanism\nimplemented in Embryonics will allow the initial colonization of\nthe entire programmable array in a decentralized and distributed\nmanner. Figure 11 presents an example of such colonization. At\ninitial time the configuration bitstream (containing the genome)\nenters the bottom left corner of a programmable array and, at each\nclock cycle, the genome is pushed through and partitions the\nprogrammable space accordingly.\nFrom a dependability standpoint, the Embryonics hierarchical\narchitecture offers incentives for an also hierarchical self-repair\nstrategy. Because the target applications are those in which the\nfailure frequency must be very low to be \"acceptable\", two levels\nof self-repair are offered: at the molecular level (programmable\nlogic is susceptible to soft fail occurrences) and at the cellular\nlevel (soft fails manifest at this level as soft errors).\nLet us consider an example of a simple cell made of 3 lines and 3\ncolumns of molecules, of which one column contains spare\nmolecules. If a fault occurs inside an active cell, it can be repaired\nthrough transferring its functionality toward the appropriate spare\nmolecule, which will become active (see Figure 12).\n\nFigure 11. Space colonization in Embryonics [11]\n\nFigure 12. Self-repair at the molecular level: faulty molecule\nE is replaced by spare molecule H, which becomes active [39]\nThe self-repair process at molecular level ensures the fault\nrecovery as long as there are spare molecules left for repair.\nHowever, it is possible for a cell to experience a multiple error, in\nwhich case the self-repair mechanism at the molecular level can\nno longer reconfigure the inside of the cell successfully. If such a\nsituation arises, then a second self-repair strategy is trigerred at a\nhigher level. The cell will \"die\", therefore trigerring the self-repair\nat the cellular level, the entire column containing the faulty\ncell (cell C in this example) being deactivated, its role being taken\nby the nearest spare column to the right (see Figure 13).\nA critique that could be addressed to the current Embryonics\ndesign would be its strategy of self-repair at the higher, cellular\nlevel: in case of a faulty cell, an entire column containing that cell\nwill be deactivated, its role being transferred to the first available\ncolumn of spares to the right (see Figure 13). There are two points\nin which this strategy could benefit:\n194\n1.\n\nInstead of deactivating a whole column of cells, it would be\nmore efficient to only deactivate the faulty cell only (see\nFigure 14). The resources affected by the role transfer would\nbe greatly reduced (one cell versus an entire column),\ncoupled with the fact that particle flux generating soft fails is\nunlikely to be homogeneous and isotrope. This means\nregions assimilable more likely to cells rather than entire\ncolumn of cells would be more affected by soft fails, not to\nmention that during genetic data transfer (required by taking\nover the role of the faulty cell) there is a greater risk of\nenduring a new soft fail (moving data is much more sensitive\nto soft fails than static data) [5][10].\n2.\n\nSuch a strategy would be consistent with that used for the\nself-repair at the molecular level, which would simplify a\nthorough reliability analysis. 
Concatenated coding would\nalso seem easier to be implemented and the strategy\nconsistency would mean that concatenated coding would not\nbe limited to a two-level hierarchy [20][21].\n\nFigure 13. Molecular self-repair failure: the cell \"dies\"\n(bottom), triggering the cellular self-repair (top) [39]\nWe consider a cell of M lines and N columns, being composed of\nmodules of M lines and n+s columns (for instance, the cell\npresented in Figure 12 consists of a single such module of two\nactive columns and one spare column), of which s are spares. In\norder to meet certain reliability criteria, it is necessary to know\nwhat is the number s of spare columns of molecules that\ncorrespond to n columns of active molecules, that is, the\nhorizontal dimensions for such a module. We will not provide a\nthorough reliability analysis, as this has been done previously\n[4][17][20][21]; instead, we will analyze the influences of the\nproposed consistent self-repair strategy at both molecular and\ncellular levels through the use of logic molecules. Therefore\nEquation (5) holds:\n( )\n{\n}( )\n{\n}( )\n(\n)\n1\nk\nModRow\ni\nR\nt =Prob no fails t\nProb i fails t\nN\nk n s\n=\n+\n=\n+\n\n\n(5)\nwhere\n( )\nModRow\nR\nt represents the reliability function for a row\nwithin a module. Then, considering the failure rate for one\nmolecule\n,\n\nthe probability of all molecules (both active and\nspare) to operate normally in a module's row becomes:\n{\n}( )\n(\n)\nn s t\nProb no fails t\ne\n\n+\n=\n\n(6)\nThe probability of a row enduring i fails in the active molecules\npart is the conditional probability of having n-i active molecules\noperating normally, while a number of s-i spare molecules are\nready to activate (that is, they are not affected by errors\nthemselves):\n{\n}( )\n{\n}( )\n{\n}(\nProb i fails t\nProb i fails active t\nProb i spares ok t\n)\n=\n\n\n\n(7)\n{\n}( )\n(\n)\n(\n)\n(\n)\n1\nn i t\nn i t\nn\nProb i fails active t\ne\ne\ni\n\n\n\n\n\n\n=\n\n\n\n(8)\n{\n}( )\n(\n)\n(\n)\n1\nit\nk i t\nk\nProb i spares ok t\ne\ne\ni\n\n\n\n\n\n=\n\n\n\n(9)\nThen the reliability function for an entire cell is the cummulated\nreliability functions for the total number of modules:\n( )\n( )\nMN n s\nCell\nModRow\nR\nt\nR\nt\n+\n=\n\n\n\n\n(10)\n\nFigure 14. Proposed reconfiguration strategy at the cellular\nlevel\nA self-repair strategy that conserves the consistency between the\nmolecular and the cellular level would allow for a more\nstraightforward reliability analysis. Basically, it would be\nsufficient to substitute dimension parameters in Equations (5)\n(10) with those approapriate to the analysis of an organism\ninstead of a cell. To illustrate this, we will consider an organism\nof M\n*\nlines and N\n*\ncolumns, being composed of modules of M\n*\n\nlines and n\n*\n+s\n*\ncolumns, of which s\n*\nare spares; we will also use\nthe organism partitioning into modules, similar to the partitioning\nof cells used before. Therefore Equation (5) transforms into\nEquation (11):\n( )\n{\n}( )\n{\n}( )\n(\n)\n1\n*\n*\n*\n*\nk\n*\n*\nCellMR\ni\nR\nt =Prob no fails t\nProb i fails t\nN\nk n\ns\n=\n+\n=\n+\n\n(11)\nwhere\n( )\nCellMR\nR\nt represents the reliability function for a row of\ncells within an organism module. 
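The molecular-level model of Equations (5) to (10) can be prototyped in a few lines of Python; the sketch below is a simplified stand-in (it treats a module row as surviving while no more than its s spares' worth of molecules have failed, rather than separating failures among active and spare molecules as Equations (7) to (9) do), and the λ value is purely illustrative, since the text notes that real failure rate data is missing. The organism-level Equations (11) to (16), which the derivation continues with below, follow the same pattern with cell reliabilities in place of e^(-λt).

```python
from math import comb, exp

def r_mod_row(lam: float, t: float, n: int, s: int) -> float:
    """Reliability of one module row with n active and s spare molecules,
    approximated as 'at most s of the n+s identical molecules have failed'."""
    p_fail = 1.0 - exp(-lam * t)
    return sum(comb(n + s, i) * p_fail**i * (1.0 - p_fail)**(n + s - i)
               for i in range(s + 1))

def r_cell(lam: float, t: float, M: int, N: int, n: int, s: int) -> float:
    """Equation (10): the cell reliability is the product over its
    M * N/(n+s) module rows."""
    return r_mod_row(lam, t, n, s) ** (M * (N // (n + s)))

# Qualitative comparison only; lambda is a placeholder value.
print(r_cell(lam=1e-6, t=1e4, M=3, N=3, n=2, s=1))
```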
In this case, the significance of the terms will be as follows:

Prob{no fails}(t) = [R_Cell(t)]^{n*+s*}    (12)

While Equation (7) continues to hold under the form of Equation (13), the significance of its terms will change according to the dimensions at the cellular level:

Prob{i fails}(t) = Prob{i fails active}(t) · Prob{i spares ok}(t)    (13)

Prob{i fails active}(t) = C(n*, i) (1 - R_Cell(t))^{i} (R_Cell(t))^{n*-i}    (14)

Prob{i spares ok}(t) = C(k*, i) (R_Cell(t))^{i} (1 - R_Cell(t))^{k*-i}    (15)

Finally, similar to Equation (10), the reliability function for an entire organism is the cumulated reliability function over the total number of its modules:

R_Org(t) = [R_CellMR(t)]^{M*N*/(n*+s*)}    (16)

Equations (5) to (16) provide the basics for a thorough reliability analysis of the proposed, uniform strategy of hierarchical reconfiguration, as opposed to the analysis provided by [21], which specifically targeted the current Embryonics architecture. Despite having settled the reliability model, both analyses are incomplete, in that the failure rate parameter λ is missing, which makes a precise, quantitative dependability target difficult to meet. However, a reliability analysis is still valuable from a qualitative point of view, allowing a direct comparison of different systems.

3.2 The QUERIST Approach

In order to deal with errors induced by the constant influence of the external environment upon computational processes, the following assumptions were made: errors appear randomly, are uncorrelated (neither in space nor in time), there are no storage errors, and there are no leakage phenomena involved [19]. Classical HDL-based fault injection methodologies can be mapped to simulating quantum circuits without intervention, provided that the new error and fault models are taken into account [35]. Of course, efficiency criteria require that they be adapted to one of the available efficient simulation frameworks [36][38][41]. QUERIST (from QUantum ERror Injection Simulation Tool) is the name of such a project, fostering simulated fault injection techniques in quantum circuits [34]. Similar to classical computation, simulated fault injection is used in order to evaluate the employed FTAMs (Fault Tolerance Algorithms and Methodologies) [26][27]. An overview of the QUERIST project is presented in Figure 15.

The three cycles of initialization, simulation, and data computation are common to both classical and quantum approaches. The first cycle takes the quantum circuit HDL description as an input. Two abstract inputs are considered, the HDL model and the assumed error model; the first influences how the HDL description is presented, while the second dictates the test scenario by defining the start/stop simulation states (since qubits are equally prone to error, all the signals must be observed). HDL modeling of quantum circuits in order to attain efficient simulation is discussed in [34][35][36][38]. The outputs of the first cycle, which are also inputs for the simulation cycle, consist of a test scenario and an executable HDL model with the corresponding entanglement analysis, dictated by the bubble-bit encoded quantum states [36][38]. The output of the second cycle consists of time diagrams for all qubits, from the start to the stop states.
In the third cycle, the useful information extracted from the raw, bubble-bit-represented qubit traces is compared to the correct qubit values, the result being the probabilistic accuracy threshold value. The initialization and simulation cycles depend on specific aspects of quantum circuit simulation [35]. The data processing cycle is independent of the specific simulation framework and is aimed at determining the accuracy threshold as the main reliability measure, which also defines the feasibility of the quantum circuit implementations.

Suppose that, at simulation time t, we observe the signals {s_0, s_1, ..., s_n}. In our analysis, s_i is the state observed during non-faulty simulation, so for the same state in a faulty environment we will have the state s_i^*. For validation of the quantum FTAMs, we need to compare s_i with s_i^*. This can be done by using the operator dif(s_i, s_i^*). This means that the total number of overall state errors at simulation time t is e_t = Σ_{i=0}^{n} dif(s_i, s_i^*). The error rate on the overall observed states at moments t_0, t_1, ..., t_{m-1} will be given by ε_sim = (1/m) Σ_{j=0}^{m-1} e_{t_j}. The used FTAMs are only valid if the relationship between the experimental ε_sim and the assumed singular error rate ε is of the order ε_sim ~ ε^2 [19].

CONCLUSIONS
This paper presented arguments in favor of two novel computing architectures for the purpose of addressing the challenges raised by the forthcoming nanoelectronics era. Distributed self-testing and self-repair will probably become a must in the coming years, as centralized control logic is expected to become unable to harness the extremely large number of devices, all equally prone to errors, that will be integrated onto the same chip. Bio-inspired computing brings valuable techniques that explore the potential of massively parallel, distributed computation and fault tolerance, and that will likely provide essential help in jumpstarting new nanoelectronic architectures. As one of the representatives of bio-inspired computing, the Embryonics project presents a hierarchical architecture that achieves fault tolerance through a correspondingly hierarchical reconfiguration. A similar approach to maximizing fault tolerance is present in quantum computing with the QUERIST project; even if bio-inspired and quantum computing may seem dissimilar at first glance, they both achieve fault tolerance by adapting the same techniques from classical computing and using essentially the same error model. Nanoelectronics will potentially change the way computing systems are designed, not only because of the sheer number of devices that will coexist on the same chip, but also because of the sensitivity of these devices.

Figure 15. An overview of the QUERIST project

Therefore, if nanoelectronics is to be employed to build dependable computing machines (a certain contradiction notwithstanding), valuable expertise in design can be drawn from the natural sciences. While biology provides countless examples of successfully implemented fault tolerance strategies, physics offers theoretical foundations, both of which were found to share common ground. It is perhaps a coincidence worth exploring in digital computing.

REFERENCES
[1] Aharonov, D., Ben-Or, M. Fault Tolerant Quantum Computation with Constant Error. Proc. 
ACM 29th Ann.\nSymposium on Theory of Computing, El Paso, Texas, May\n1997, pp. 176-188.\n[2]\n\nAvizienis, A., Laprie, J.C., Randell, B., Landwehr, C. Basic\nConcepts and Taxonomy of Dependable and Secure\nComputing.\nIEEE Transactions on Dependable and Secure\nComputing, 1, 1 (Jan-Mar 2004), 11-33.\n\n[3]\n\nButts, M., DeHon, A., Golstein, S.C. Molecular Electronics:\nDevices, Systems and Tools for Gigagate, Gigabit Chips.\nProc. Intl. Conference on CAD (ICCAD'02), 2002, pp. 433-440\n.\n\n[4]\n\nCanham, R., Tyrrell, A. An Embryonic Array with Improved\nEfficiency and Fault Tolerance.\nProc. IEEE NASA/DoD\nConference on Evolvable Hardware, Chicago Il, 2003, 275-282\n.\n[5]\n\nGaisler, J. Evaluation of a 32-Bit Microprocessor with Built-In\nConcurrent Error Detection.\nProc. 27th Annual Intl.\nSymposium on Fault-Tolerant Computing (FTCS-27), 1997,\npp. 42-46.\n[6]\n\nGoldstein, S.C. The Challenges and Opportunities of\nNanoelectronics.\nProc. Government Microcircuit Applications\nand Critical Technology Conference (GOMAC Tech 04\n), Monterey, CA, March 2004.\n[7]\n\nGottesman, D. Class of quantum error-correcting codes\nsaturating the quantum Hamming bound.\nPhys. Rev. A 54,\n1996, pp. 1862-1868.\n[8]\n\nJohnson, B.W.\nDesign and Analysis of Fault-Tolerant\nDigital Systems. Addison-Wesley, 1989.\n[9]\n\nLaprie, J.-C. (Ed.). Dependability: Basic Concepts and\nTerminology.\nDependable Computing and Fault-Tolerant\nSystems Series, Vol. 5, Springer-Verlag, Vienna, 1992.\n[10]\n\nLiden, P., Dahlgren, P., Johansson, R., Karlsson, J. On\nLatching Probability of Particle Induced Transients in\nCombinational Networks.\nProc. Intl. Symposium on Fault-Tolerant\nComputing (FTCS-24), 1994, pp.340-349.\n[11]\n\nMange, D., Sipper, M., Stauffer, A., Tempesti, G. Toward\nRobust Integrated Circuits: The Embryonics Approach.\nProc.\nof the IEEE, vol. 88, No. 4, April 2000, pp. 516-541.\n[12]\n\nMange, D. and Tomassini, M. eds.\nBio-Inspired Computing\nMachines: Towards Novel Computational Architectures.\nPresses Polytechniques et Universitaires Romandes,\nLausanne, Switzerland, 1998.\n[13]\n\nVon Neumann, J.\nThe Computer and the Brain (2\nnd\nedition).\nPhysical Science, 2000.\n[14]\n\nVon Neumann, J. The Theory of Self-Reproducing\nAutomata. A. W. Burks, ed. University of Illinois Press,\nUrbana, IL, 1966.\n[15]\n\nVon Neumann, J. Probabilistic Logic and the Synthesis of\nReliable Organisms from Unreliable Components. In C.E.\nShannon, J. McCarthy (eds.)\nAutomata Studies, Annals of\nMathematical Studies 34, Princeton University Press, 1956,\n43-98.\n[16]\n\nNielsen, M.A., Chuang, I.L.\nQuantum Computation and\nQuantum Information. Cambridge University Press, 2000.\n[17]\n\nOrtega, C., Tyrrell, A. Reliability Analysis in Self-Repairing\nEmbryonic Systems.\nProc. 1st NASA/DoD Workshop on\nEvolvable Hardware, Pasadena CA, 1999, 120-128.\n[18]\n\nO'Connor, P.D.T. Practical Reliability Engineering. John\nWiley & Sons, 4\nth\nedition, 2002.\n[19]\n\nPreskill, J. Fault Tolerant Quantum Computation. In H.K.\nLo, S. Popescu and T.P. Spiller, eds.\nIntroduction to\nQuantum Computation, World Scientific Publishing Co.,\n1998.\n197\n[20]\n\nProdan, L.\nSelf-Repairing Memory Arrays Inspired by\nBiological Processes. Ph.D. Thesis, \"Politehnica\" University\nof Timisoara, Romania, October 14, 2005.\n[21]\n\nProdan, L., Udrescu, M., Vladutiu, M. Survivability Analysis\nin Embryonics: A New Perspective.\nProc. 
IEEE NASA/DoD\nConference on Evolvable Hardware, Washington DC, 2005,\n280-289.\n[22]\n\nProdan, L., Udrescu, M., Vladutiu, M. Self-Repairing\nEmbryonic Memory Arrays.\nProc. IEEE NASA/DoD\nConference on Evolvable Hardware, Seattle WA, 2004, 130-137\n.\n[23]\n\nProdan, L., Tempesti, G., Mange, D., and Stauffer, A.\nEmbryonics: Electronic Stem Cells.\nProc. Artificial Life VIII,\nThe MIT Press, Cambridge MA, 2003, 101-105.\n[24]\n\nProdan, L., Tempesti, G., Mange, D., and Stauffer, A.\nEmbryonics: Artificial Cells Driven by Artificial DNA.\nProc. 4th International Conference on Evolvable Systems\n(ICES2001), Tokyo, Japan, LNCS vol. 2210, Springer,\nBerlin, 2001, 100-111.\n[25]\n\nProdan, L., Tempesti, G., Mange, D., and Stauffer, A.\nBiology Meets Electronics: The Path to a Bio-Inspired\nFPGA. In\nProc. 3rd International Conference on Evolvable\nSystems (ICES2000), Edinburgh, Scotland, LNCS 1801,\nSpringer, Berlin, 2000, 187-196.\n[26]\n\nRimen, M., Ohlsson, J., Karlsson, J., Jenn, E., Arlat, J.\nValidation of fault tolerance by fault injection in VHDL\nsimulation models.\nRapport LAAS No.92469, December\n1992.\n[27]\n\nRimen, M., Ohlsson, J., Karlsson, J., Jenn, E., Arlat, J.\nDesign guidelines of a VHDL-based simulation tool for the\nvalidation of fault tolerance.\nRapport LAAS No93170, Esprit\nBasic Research Action No.6362, May 1993.\n[28]\n\nShivakumar, P., Kistler, M., Keckler, S.W., Burger, D.,\nAlvisi, L. Modelling the Effect of Technology Trends on the\nSoft Error Rate of Combinational Logic.\nProc. Intl.\nConference on Dependable Systems and Networks (DSN),\nJune 2002, pp. 389-398.\n[29]\n\nShor, P.\nFault-tolerant quantum computation.\narXiv.org:quant-ph/9605011, 1996.\n[30]\n\nShor, P. Algorithms for Quantum Computation: Discrete\nLogarithms and Factoring.\nProc. 35th Symp. on Foundations\nof Computer Science, 1994, pp.124-134.\n[31]\n\nSipper, M., Mange, D., Stauffer, A. Ontogenetic Hardware.\nBioSystems, 44, 3, 1997, 193-207.\n[32]\n\nSipper, M., Sanchez, E., Mange, D., Tomassini, M., Perez-Uribe\n, A., Stauffer, A. A Phylogenetic, Ontogenetic and\nEpigenetic View of Bio-Inspired Hardware Systems.\nIEEE\nTransactions on Evolutionary Computation, 1, 1, April 1997,\n83-97.\n[33]\n\nSteane, A. Multiple Particle Interference and Quantum Error\nCorrection.\nProc. Roy. Soc. Lond. A 452, 1996, pp. 2551.\n[34]\n\nUdrescu, M.\nQuantum Circuits Engineering: Efficient\nSimulation and Reconfigurable Quantum Hardware. Ph.D.\nThesis, \"Politehnica\" University of Timisoara, Romania,\nNovember 25, 2005.\n[35]\n\nUdrescu, M., Prodan, L., Vladutiu, M. Simulated Fault\nInjection in Quantum Circuits with the Bubble Bit\nTechnique.\nProc. International Conference "Adaptive and\nNatural Computing Algorithms", pp. 276-279.\n[36]\n\nUdrescu, M., Prodan, L., Vladutiu, M. The Bubble Bit\nTechnique as Improvement of HDL-Based Quantum Circuits\nSimulation.\nIEEE 38th Annual Simulation Symposium, San\nDiego CA, USA, 2005, pp. 217-224.\n[37]\n\nUdrescu, M., Prodan, L., Vladutiu, M. Improving Quantum\nCircuit Dependability with Reconfigurable Quantum Gate\nArrays.\n2nd ACM International Conference on Computing\nFrontiers, Ischia, Italy, 2005, pp. 133-144.\n[38]\n\nUdrescu, M., Prodan, L., Vladutiu, M. Using HDLs for\ndescribing quantum circuits: a framework for efficient\nquantum algorithm simulation.\nProc. 1st ACM Conference\non Computing Frontiers, Ischia, Italy, 2004, 96-110.\n[39]\n\nTempesti, G.\nA Self-Repairing Multiplexer-Based FPGA\nInspired by Biological Processes. Ph.D. Thesis No. 
1827,\nLogic Systems Laboratory, The Swiss Federal Institute of\nTechnology, Lausanne, 1998.\n[40]\n\nTempesti, G., Mange, D., Petraglio, E., Stauffer, A., Thoma\nY. Developmental Processes in Silicon: An Engineering\nPerspective.\nProc. IEEE NASA/DoD Conference on\nEvolvable Hardware, Chicago Il, 2003, 265-274.\n[41]\n\nViamontes, G., Markov, I., Hayes, J.P. High-performance\nQuIDD-based Simulation of Quantum Circuits.\nProc. Design\nAutom. and Test in Europe (DATE), Paris, France, 2004, pp.\n1354-1359.\n[42]\n\nYu, Y., Johnson, B.W. A Perspective on the State of\nResearch on Fault Injection Techniques. Technical Report\nUVA-CSCS-FIT-001, University of Virginia, May 20, 2002.\n[43]\n\n***. ITRS International Technology Roadmap for Semiconductors\n, Emerging Research Devices, 2004, http://www.\nitrs.net/Common/2004Update/2004_05_ERD.pdf\n[44]\n\n***. Society of Reliability Engineers, http://www.sre.org/\npubs/\n[45]\n\n***. http://www.dependability.org/wg10.4/\n\n\n198", "keywords": "emerging technologies;Self replication;Embryonics;Computing technology;Error detection;Fault tolerance;Digital devices;Computing architecture;environment;Soft errors;Dependable system;Computing system;System design;Correction techniques;bio-inspired digital design;Bio-inspired computing;Reliability;Dependability;Nano computing;Failure rate;Emerging technologies;Nanoelectronics;bio-inspired computing;Self repair;evolvable hardware;Computer system;quantum computing;fault-tolerance assessment;QUERIST;Bio-computing;reliability;Quantum computing"} {"name": "50", "title": "Building Sustainable Community Information Systems: Lessons from a Digital Government Project", "abstract": "This paper introduces a rationale for and approach to the study of sustainability in computerized community information systems. It begins by presenting a theoretical framework for posing questions about sustainability predicated upon assumptions from social construction of technology and adaptive structuration theories. Based in part on the literature and in part on our own experiences in developing a community information system, we introduce and consider three issues related to sustainability: stakeholder involvement, commitment from key players, and the development of critical mass.", "fulltext": "INTRODUCTION\nNew technologies make it feasible and in many cases practical for\nindividuals, groups, and organizations to collaborate in the\ndevelopment of joint information systems. In fact, over the last\nthree decades of evolution, few applications of information\ntechnology have stimulated so much interest on the part of so\nmany. Collaborative information systems are attractive to users\nbecause they make it possible to find information from diverse\nsources in an easy and efficient way. Such systems make good\nsense for information providers because it becomes possible to\nattract a larger audience than a solitary effort might otherwise be\nable to command and to pool resources to achieve certain\neconomies in scale and technology expense. 
The advantages of\ncollaborative computerized information systems have been widely\nrecognized, but this has been particularly the case for those with\nthe goal of making community information more available,\naccessible, and oriented toward community development.\nComputerized community information systems are diverse in form\nand, over time, have come to be known by many different names,\nincluding community bulletin boards, civic networks, community\nnetworks, community information networks, televillages, smart\ncommunities, and Free-Nets. They have been initiated by many\ndifferent sponsors, including government organizations at the\nfederal, state, and local levels, academic organizations, libraries,\nand ad hoc groups of citizens that may or may not later transform\ntheir enterprises into not-for-profit organizations [7]. With respect\nto longevity, these projects have come and gone, only to be\nreplaced by newer and more sophisticated manifestations of the\nsame basic information sharing capabilities.\nConsistent with the evolution of technology over the last thirty\nyears, Kubicek and Wagner [14] analyze the historical trajectory\nof community networks to understand how these applications\nhave evolved over time based upon their animating ideas, the\nzeitgeist of the time, the state of technology access, and the kinds\nof services such applications make available. Their analysis\nmakes it possible to see that there has never been a standard for\ndesign or operation when it comes to community information\nsystems. Instead, each such project has been very much a social\nexperiment, born of a cluster of varied ideas related to the general\ntheme of using technology to promote the development of vibrant\ngeographically-based communities.\nSince there has been no standard to follow, each instance of\ncomputerized community information system can be seen as an\nexperiment in accommodating the tensions between access to\nhardware/software infrastructure, design of the particular\napplication or system, user needs, and the initiating and ongoing\nresources that support these efforts. These projects can be\nresource intensive; thus, a variety of institutional actors have lent\ntheir financial support particularly over the past decade. The\nsuccessive rounds of funding for community technology projects\nby the Department of Commerce's National Telecommunications\nand Information Administration (now called the Technology\nOpportunities Program) is a case in point. The Digital\nGovernment Program of the National Science Foundation has\nCopyright held by the author\n145\nprovided support for such ventures, as have many private\nfoundations and technology corporations. From the perspective\nof funding organizations, the nature of the experiment at the heart\nof CCINs is essentially this: how to build applications that\nachieve their civic goals, that provide services perceived as\nvaluable by their users, and that can command continuing support\nfrom the community beyond the horizon of initial funding. From\na purely academic perspective, the more general question centers\non, as Venkatesh [28] has put it, the \"lifecycle\" of community\ninformation systems. More specifically, we wish to know how\nsuch systems \"originate, stabilize, and change in their\nsociohistorical context\" (p. 339).\nWe do not have extensive knowledge about the extent to which\ncommunity information systems achieve their goals, endure over\ntime, or the conditions that facilitate effectiveness and\nsustainability. 
However, based on what we do know, it is\napparent that such enterprises are fragile. Perhaps the closest we\nhave come to a standard or model is the relatively extensive set of\nexperiments in community networking in the 1990s called Free-Nets\n, which were fashioned after the public broadcasting system\nand intended to serve their localities by providing access to wide-area\ncomputer networks and information about the community.\nFounded in 1989, the National Public Telecomputing Network, an\numbrella organization for Free-Nets, went bankrupt in 1996.\nAfter successive decreases in the cost of computing equipment\nand Internet access, and the development of the World Wide Web,\nmany Free-Nets went out of business [14]. Studies of community\nnetworks funded by the federal and state governments also\nsuggest that community information systems have difficulty\nenduring beyond their initial funding [26] [21].\nIn this paper, we introduce and consider conditions that facilitate\nthe sustainability of computerized community information\nsystems. We base our discussion in part on our own efforts to\ndevelop a community information system called Connected Kids\nin Troy, New York, a project that began in a formal sense in 1999\nand continues today. We begin by presenting a theoretical\nframework for posing questions about sustainability based on the\nsocial construction of technology and adaptive structuration\ntheory. Drawing on the literature as well as our experiences, we\nintroduce and discuss three issues we believe to be critically\nrelated to sustainability: stakeholder involvement, commitment\nfrom key players, and critical mass.\nTHEORETICAL FRAMEWORK\nAll computerized community information systems are designed,\nalthough whether researchers and participants understand the\nsignificance of design and its relevance to sustainability varies\nfrom context to context. In some cases, where community\nnetworks have originated as the indigenous creation of\ntechnology-savvy citizens, it may appear to researchers that the\ndesign of the information system is a natural expression of\ncommunity development unfettered by theoretical considerations.\nHowever, in other cases, design is taken more seriously and\ntreated as an element that can be purposefully controlled in order\nto achieve particular kinds of effects.\nIn either case, we argue that the material form, functionalities,\nconceptual configuration, and impact of technology is shaped by\nthe uses, goals, interests, and ideologies of those who participate\nin its development and others who use it following development.\nIn the literature addressing the social construction of technology,\nthis argument is frequently illustrated by showing how users\nappropriate new technologies for their own purposes, which may\nbe contrary to those of designers (see e.g. [15] [25]). However, we\ntake this position one step further by suggesting that community\ninformation systems, and information and communication\ntechnologies more broadly, reflect the interests, orientations, and\nindeed the nave social theories of their designers, as well as being\nshaped ex post facto by their users [8]. On this basis, we have\nargued that academic researchers need to become involved in the\nprocess of technology design as a way of exploring how to\nimprove the design of technology and as a way to test social\ntheory, including communication, information, and democratic\ntheory. 
However, our position suggests that users of information\nsystems must also be included in their initial conceptualization\nand design in order to develop systems that reflect users' needs,\ngoals, and values. This leads quite naturally to creating\ninterdisciplinary (e.g. computer scientists, information scientists,\nsocial scientists) application design teams that provide for\nparticipation by community members; it is in such collaborative\narrangements that sustainable community information systems\nmay be designed.\nThe social construction of technology argues that technologies are\nshaped by both designers and users and suggests that information\nsystem design be undertaken collaboratively by those implicated\nin both the technical and social conceptualization of the system.\nHowever, issues of sustainability ultimately focus on reproduction\nof the system. Once designed, information systems must be\ndeployed, and once deployed, they must be re-enacted on a\nroutine basis by their users to be sustained. Adaptive structuration\ntheory is one of the most fully developed theoretical perspectives\nfor understanding how new technologies come to reproduce social\nstructures or to generate structural change in particular social\ncontexts. DeSanctis and Poole [4] base their work on structuration\nprocesses originally described by Giddens [5].\nGiddens [5] suggests that technologies in organizations either\nreproduce existing social structure or change social structure by\nvirtue of the kinds of structures that are instantiated when social\nactors use technologies. Structure consists of rules and resources\nthat actors draw upon to produce social behavior. For DeSanctis\nand Poole [4], social structures are physically incorporated in new\ntechnologies in two complementary ways. First, technologies\nembody rules and resources embedded in the form of particular\nmaterial capabilities, functionalities, and features that comprise a\nvariety of behavioral options to be used in constructing social\naction. Second, the \"spirit\" of a technology, also considered to be\na property of the technology, expresses the values and goals that\nare brought to bear upon the tasks the technology was originally\nintended to accomplish. Together the features and spirit of a\ntechnology comprise its \"structural potential,\" or the range of\npossible actions that users can draw upon to constitute or\nreproduce social structures in technology use. Orlikowski [17]\ndisputes a portion of this conceptualization, noting that, according\nto Giddens [5], structure has a \"virtual\" rather than material\nexistence, and thus can never be physically incorporated into\ntechnology. Instead, "[w]hile a technology may be seen to\nembody particular symbol and material properties, it does not\nembody structures because those are only instantiated in practice"\n([17], p. 206) and, if reproduced, are systematically repeated over\ntime.\n146\nOrlikowski's [17] point is that users may draw upon only some of\na technology's features, and may do that in ways that depart\nsubstantially from the original conceptualizations of designers. 
In\nessence, users \"enact\" technology in their collective, systematic,\nand routine use of a technology, reproducing some of the\ntechnology and some of its associated structures through practice.\nOrlikowski's [17] term \"technologies-in-practice\" references the\nidea that as users engage selectively with particular technological\nfeatures, particular structures, or sets of rules and resources\nassociated with the technology, are selectively reconstituted.\nThus, a technology-in-practice is a "repeatedly experienced,\npersonally ordered and edited version of the technological artifact,\nbeing experienced differently by different individuals and\ndifferently by the same individuals depending on the time or\ncircumstances" ([17], p. 408).\nApplied to sustainability, our questions center on the conditions\nunder which users \"appropriate\" the system. For community\ninformation systems, there are generally two kinds of users-information\nproviders and information consumers--and, of\ncourse, the same individuals may play both user roles. Thus, our\nquestions become: Under what conditions do users collectively\nand routinely draw upon and apply particular features of a\ncommunity information system? When do they reference the way\ntheir system "should" work in order to construct a shared\nperspective about community action? Through regular and routine\nenactments of technology in regular use, users reproduce the rules\nand resources or structures of community life that are instantiated\nin technology use. This is not to say that \"unfaithful\"\nappropriations, or those that are out-of-line with the spirit of the\ntechnology, cannot occur; but it is to say that it is unlikely they\nwill sustain a community information system.\nFACTORS RELATED TO SUSTAINABLE COMMUNITY INFORMATION SYSTEMS\nWe begin our discussion of factors related to sustainability by\ndistinguishing between the effectiveness and the sustainability of\ncomputerized community information systems. Community\ninformation systems are designed and advocated with many goals\nin mind, some of which focus on traditional issues of community\ndevelopment, such as decreasing unemployment, stimulating\neconomic growth, improving health and social welfare, and others\nfocus on building social capital, or enhancing interest and\nparticipation in government decision making processes. The issue\nof effectiveness addresses whether such systems are achieving the\ngoals for which they were designed. Sustainability, on the other\nhand, addresses whether the information system is able to endure\npast its initial launching phase, whether it is used and reproduced\nby its intended audience, and whether it can continue to attract\nresources beyond those obtained for initial development and\ndeployment. Clearly these two concepts are not irrelevant to each\nother, but neither are they the same. It is possible that questions\nof sustainability logically precede those of effectiveness, but there\nmay also be important relationships between effectiveness and\nsustainability.\nSustainability has long been a consideration in the development of\ninformation systems. Indeed, the failure rate of new IT\napplications in the public sector has motivated significant interest\nin addressing the issue of sustainability and speculation about the\nextent to which participation in system development is related\nultimately to system adoption and use [10] [11]. 
More\nspecifically, government services are increasingly out-sourced to\nnot-for-profit organizations that may not be experienced in\ncollaboration [3]. Information technology makes it possible for\norganizations to collaborate in providing information but whether\nor not such collaborations actually take place is more than a\ntechnical issue. The development of any information system, and\nparticularly collaborative systems, requires organizations to\nchange, in a very real way, some of their routine modes of\noperation and incorporate new behaviors. Scholl's [22] research\nfinds that stakeholder involvement and the commitment of senior\nexecutives to be highly related to the integration of e-government\nprojects into business process change for government\norganizations. Stakeholder involvement has long been\nacknowledged as a key element in the construction of community\ninformation system, although applied to this context rather than\nthat of traditional hierarchical organizations, the idea bears further\nscrutiny. We have also seen the commitment of key executives\nplaying a role in our own development work. We discuss each of\nthese two ideas at some length below, and add a third:\ndevelopment of a critical mass of users.\n3.1\n\nStakeholder Involvement\nOur work was motivated in part by Schuler's [23] invitation to\nacademic researchers to collaborate with communities in building\ncommunity networking projects. At the time, it was fair to\ncharacterize our institution's hometown, Troy, NY, as a \"digitally\ndivided\" community. Our experiences suggested that new\ntechnologies and their potential seemed to be of interest to the\nmembers of the community (see [9]). But many community and\ngovernment organizations lacked access to hardware and network\nconnections as well as the expertise needed for using this\nequipment. It seemed most likely that we would need to do more\nto generate interest in the development of a community\ninformation service in order to stimulate participation from likely\nstakeholders in such a project.\nConnected Kids was conceived in Fall 1999 in the course of\ndiscussions among Troy City Government representatives on the\ntopic of how new technologies might usefully be employed to\nprovide services to the community. At the time we learned that\nthe mayor sought to reinvigorate the City's office of youth\nservices and had speculated about whether these technologies\ncould be used to provide one of that office's primary and most\npopular functions, which was to disseminate information about\nresources and programs sponsored by not-for-profit organizations\nas well as those sponsored by Troy's own Department of\nRecreation. It seemed clear to us the World Wide Web might\nindeed be used for such purposes. Thus, Connected Kids was\nconceived as both a digital government project as well as a\ncommunity information system. We received initial assurances\nthat the City would administer the information system after it had\nbeen successfully designed and deployed.\nConnected Kids began with sensitivity to the need for stakeholder\ninvolvement, particularly that of participating organizations that\nwe hoped would be information providers. We were aware that\nthe \"best way to kill a community network,\" was to fail to involve\nthe community in system development [24]. 
We took seriously\nGygi's [6] prediction that the degree of community involvement\n147\nand the extent to which the project represented community\ninterests and participation would likely affect political and\neconomic outcomes. Thus, although our project began initially as\na collaboration between academic researchers and government\nadministrators, we moved quickly to invite community\norganizations to participate at an orientation meeting in February\n2000 and held a series of focus group discussions in October 2000\nin which we explored with representatives of participating\norganizations how such an information system might be\nconceptualized to best meet their information needs. In Fall 2001\nand Winter 2002 we undertook a series of participatory design\nsessions in which representatives of participating organizations\nwere introduced to portions of a system prototype based on their\nprevious contributions and asked to describe their experiences and\nsuggest improvements. Finally, as we designed interfaces in\nSummer and Fall 2002, we again consulted with representatives of\nparticipating organizations in user testing sessions. By Fall 2003\nand Winter 2004, we had demonstrated the system and trained\nnumerous representatives of participating organizations, who\nreportedly found our interface pages easy to use. However, these\nsame organizations were not spontaneously--or frequently-entering\ninformation about their programs or activities for youth.\nBased on contributions from our collaborating organizations, the\ndesign of Connected Kids reflected much of the best wisdom\nabout community information systems: the system could be used\nto both create and easily update data [2] [13]; we had involved\nend user groups (kids and their parents) in the design as well [27]\n[12]; and the system focused principally on information deemed\ncrucial by our participating organizations, information that we\nexpected had the capacity to be integrated into the routine lives of\nthe communities they serve [21]. Further, access to technology\nlost its urgency as an issue, since it is no longer the case that our\nparticipating organizations lack access to networking technology.\nThus, we did not attribute our problems with data entry to system\nattributes. Instead, we considered the suggestion by Scott and\nPage [18] that \"sustainable technologies are processes (authors'\nemphasis); they are not products.\" In traditional hierarchical\norganizations, lower levels of stakeholder involvement may be\nsufficient for system acceptance. However, a community\ninformation system requires that members of the community\ncontribute information and it must be seen to be in their\ncontinuing interest to do so. In Fall 2004, we have sought to\ncreate a quasi-formal governance body to administer the project, a\nConnected Kids Advisory Board, recruiting representatives from\n10 organizations (from among the most influential) to commit to\nguiding the short-term future of the project (approximately 1 year)\nas we transition to system deployment in Spring 2005. 
Our Board\nhas now met for several months, and it remains to be seen whether\nthis vehicle will foster a sufficient level of system participation,\nperhaps ownership, to sustain Connected Kids through\ndeployment and beyond.\n3.2\n\nCommitment from Key Players\nScholl [22] finds that support from key executives is critical to\nincorporating e-government projects into an organization's\nbusiness processes, and our experience underscores this finding.\nIn fact, we would expand the range of individuals likely to be\nconsidered \"key.\" Not only are senior executives important, but\nso also are others in the organization that have any significant job-related\nassociation with the information system under\ndevelopment. Application development projects take place over\npotentially long periods of time and involve many different\nindividuals in many different roles. Job occupants in the public\nsector may be comparatively stable, but they are not permanent.\nThose who champion an application development project may not\nbe around when it is time to deploy the system.\nWhat is generally not recognized when academic researchers\nundertake technology projects in organizational contexts is that\nthey may need to become the primary advocates for deployment of\nthe project. This is not a typical role for researchers, who may\nwith ample justification see their obligations confined to simply\nperforming the research or developmental work on the project.\nHowever, researchers who seek to develop sustainable products\nmay find themselves required to situate the project politically\nwithin the organization or group of organizations for which it was\noriginally intended. They may in fact be the only individuals who\ncan play this particular role.\nIn the case of Connected Kids, we secured commitment from both\nthe mayor and deputy mayor of Troy, along with, of course, that\nof our primary organizational liaison. We continued to work for\nquite some time, reporting regularly on progress to our liaison,\nwithout realizing that this individual was getting progressively\ninvolved in turf battles with another technology-oriented actor in\ncity government. As our liaison's influence within city\ngovernment eroded, so also did support for our project without\nour awareness. Once we understood what was happening, we\nacted quickly, and luckily in sufficient time, to re-establish the\nimportance of the project with the mayor and deputy mayor.\nFrom that point hence our primary liaison was the deputy mayor.\nUnfortunately, mayoral administrations come and go, and the\nadministration that was our primary government partner was\nvoted out of office in November 2003. Within six months all the\nindividuals who had any primary working relationship with our\nproject were gone, and we faced the need to re-create commitment\nwith a new mayor and deputy mayor, a process that took\nconsiderable time and that has delayed implementation by nearly a\nyear. Of course, this is not something we could have prevented.\nHowever, it is interesting to note that our new liaison with city\ngovernment is an individual who had worked in city government\nunder both administrations.\n3.3\n\nCritical Mass of Users\nUltimately, for a community information system to endure, it must\nestablish a significant number of regular users, who enact the\ntechnology for at least some of the purposes for which it was\noriginally intended, and in so doing, reproduce community\nstructures that are instantiated in the technology. 
In our case, this means bringing an audience of end users to the system who are interested in information about youth that is disseminated through it. Connected Kids is in many respects similar to an electronic \"public good\" [20], that is, a product established through the contributions and for the benefit of a set of actors that also has the effect of benefiting other users.
In our case, the system was designed by and for youth organizations, which serve as information providers. We have sought to show how these organizations may appropriate the technology and accomplish what Bannon and Griffin [1] suggest, which is to use the technology as a means to \"further their own ends\" (p. 48) rather than as an end in itself.
Further, we\nshould be able to assess relationships between the perceived\namount and quality of information in the system and end user\nsatisfaction, likelihood of returning to the system, and interest in\nbecoming more involved in system activities. Of course, we will\ncontinue to be able to comment anecdotally on what we have\nlearned about the politics of technology diffusion in public sector\norganizations.\n\nREFERENCES\n[1]\n\nBannon, L.J., and Griffin, J. New technology, communities,\nand networking: Problems and prospects for orchestrating\nchange. Telematics and Informatics, 18, (2001), 35-49.\n[2]\n\nCowan, D.D., Mayfield, C. T., Tompa, F. W., and Gasparini,\nW. New role for community networks. Communications of\nthe ACM, 41, 4, (1998), 61-63.\n[3]\n\nDawes, S.S., Bloniarz, P. A., and Kelly, K. L. Some\nAssembly Required: Building a Digital Government for the\n21st Century. Albany, NY: Center for Technology in\nGovernment, 1999.\nhttp://www.ctg.albany.edu/research/workshop/dgfinalreport.\npdf\n[4]\n\nDeSanctis, G., and Poole, M.S. Capturing the complexity in\nadvanced technology use: Adaptive structuration theory.\nOrganization Science, 5, (1994), 121-147.\n[5]\n\nGiddens, A. The Constitution of Society. University of\nCalifornia Press, Berkeley, CA, 1984.\n[6]\n\nGygi, K. Uncovering Best Practices: A Framework for\nAssessing Outcomes in Community Computer Networking,\n1996. http://www.laplaza.org/about\nlap/archives/cn96/gygi.html\n[7]\n\nHarrison, T., and Stephen, T. Researching and creating\ncommunity networks. In Doing Internet Research, S. Jones\ned. Sage, Newbury Park, CA, 1999, 221-241.\n[8]\n\nHarrison, T., and Zappen, J. Methodological and theoretical\nframeworks for the design of community information\nsystems. Journal of Computer-Mediated Communication, 8,\n3 (2003). http://www.ascus.org/jcmc/vol8/issue3\n[9]\n\nHarrison, T.M., Zappen, J.P., and Prell, C.L. Transforming\nnew communication Technologies into Community Media.\nIn Community Media in the Information Age: Perspectives\nand Prospects, N. Jankowski and O. Prehn eds., Hampton,\nCresskill, NJ, 2002, 249-269.\n[10]\n\nHeeks, R. Better information age reform: Reducing the risk\nof information systems failure. In Reinventing government in\nthe information age: International Practice in IT-Enabled\nPublic Sector Reform, R. Heeks ed., Routledge, London,\n1999, 75-109.\n[11]\n\nHeeks, R., and Bhatnager, S. Understanding success and\nfailure in information age reform. In Reinventing government\nin the information age: International Practice in IT-Enabled\nPublic Sector Reform, R. Heeks ed., Routledge, London,\n1999, 50-73.\n[12]\n\nHowley, K. Equity, access, and participation in community\nnetworks. Social Science Computer Review, 16, (1998),\n402-410.\n[13]\n\nKeenan T. P., and Trotter, D. M. The changing role of\ncommunity networks in providing citizen access to the\nInternet. Internet Research: Electronic Networking\nApplications and Policy, 9, 2, (1999) 100-108.\n[14]\n\nKubicek, H., and Wagner, R.M. Community networks in a\ngenerational perspective: The change of an electronic\nmedium within three decades. Information, Communication,\nand Society, 5, (2002), 291-319.\n[15]\n\nLievrouw, L., and Livingstone, S. The social shaping and\nconsequences of ICTs. In The Handbook of New Media, L.\nLievrouw and S. Livingstone, Eds. Sage, London, 2002, 121\n.\n149\n[16]\n\nMarkus, L. Toward a \"critical mass\" theory of interactive\nmedia. Communication Research, 14, (1987), 491511.\n[17]\n\nOrlikowski, W. 
Using technology and constituting structures:\nA practice lens for studying technology in organizations.\nOrganization Science, 11, (2000), 404-428.\n[18]\n\nPage, M. and Scott, A. Change agency and women's\nlearning. Information, Communication & Society, 4, 4\n(2001), 528-559.\n[19]\n\nPatterson, S.J., and Kavanaugh, A.L. Building a sustainable\ncommunity network: An application of critical mass theory.\nElectronic Journal of Communication, 2, (2001).\nhttp://www.cios.org/www/ejc/v11n201.htm.\n[20]\n\nRafaeli, S., and LaRose, R. Electronic bulletin boards, and\n\"public goods\" explanations of collaborative mass media.\nCommunication Research, 20, 2, (1993), 277-297.\n[21]\n\nRosenbaum, H. Web-based community networks: A study of\ninformation organization and access. ASIS '98: Access in the\nglobal information economy. In Proceedings of the 61st\nAnnual Meetings of the American Society for Information\nSociety, 35, 1998, 516-527.\n[22]\n\nScholl, H.J. Current practices in E-government-induced\nbusiness process change. Proceedings of the National\nConference on Digital Government, Digital Government\nResearch Center, 2004, 99-108.\n[23]\n\nSchuler, D. Community Computer Networks: An Opportunity\nfor Collaboration among Democratic Technology\nPractitioners and Researchers, 1997.\nhttp://www.sn.org/ip/commnet/oslo-197.text\n[24]\n\nSchuler, D. How to Kill Community Networks. Hint: We May\nHave Already Started, 1996.\nhttp://www.scn.org/ip/commnet/kill-commnets.html\n[25]\n\nSproull, L., and Kiesler, S. Connections: New Ways of\nWorking in the Networked Organization. MIT Press,\nCambridge, MA, 1991.\n[26]", "keywords": "critical mass;construction of technology;key players;community network;computerized community information system;participatory design;sustainability;skateholder involvement;Community networks"} {"name": "51", "title": "Can Machine Learning Be Secure?", "abstract": "Machine learning systems offer unparalled flexibility in dealing with evolving input in a variety of applications, such as intrusion detection systems and spam e-mail filtering. However , machine learning algorithms themselves can be a target of attack by a malicious adversary. This paper provides a framework for answering the question, \"Can machine learning be secure?\" Novel contributions of this paper include a taxonomy of different types of attacks on machine learning techniques and systems, a variety of defenses against those attacks, a discussion of ideas that are important to security for machine learning, an analytical model giving a lower bound on attacker's work function, and a list of open problems.", "fulltext": "INTRODUCTION\nMachine learning techniques are being applied to a growing\nnumber of systems and networking problems, particularly\nthose problems where the intention is to detect anomalous\nsystem behavior. For instance, network Intrusion Detection\nSystems (IDS) monitor network traffic to detect abnormal\nactivities, such as attacks against hosts or servers. Machine\nlearning techniques offer the benefit that they can detect\nnovel differences in traffic (which presumably represent attack\ntraffic) by being trained on normal (known good) and\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. 
attack (known bad) traffic. The traditional approach to designing an IDS relied on an expert codifying rules defining normal behavior and intrusions [26]. Because this approach often fails to detect novel intrusions, a variety of researchers have proposed incorporating machine learning techniques into intrusion detection systems [1, 16, 18, 24, 38, 41]. On the other hand, use of machine learning opens the possibility of an adversary who maliciously \"mis-trains\" a learning system in an IDS. A natural question arises: what techniques (in their attacks) can an adversary use to confuse a learning system?
This paper explores the larger question, as posed in the title of this paper, can machine learning be secure? Specific questions that we examine include:
Can the adversary manipulate a learning system to permit a specific attack? For example, can an attacker leverage knowledge about the machine learning system used by a spam e-mail filtering system to bypass the filtering?
Can an adversary degrade the performance of a learning system to the extent that system administrators are forced to disable the IDS? For example, could the attacker confuse the system and cause valid e-mail to be rejected?
What defenses exist against adversaries manipulating (attacking) learning systems?
More generally, what is the potential impact from a security standpoint of using machine learning on a system? Can an attacker exploit properties of the machine learning technique to disrupt the system?
The issue of machine learning security goes beyond intrusion detection systems and spam e-mail filters. Machine learning is a powerful technique and has been used in a variety of applications, including web services, online agent systems, virus detection, cluster monitoring, and a variety of applications that must deal with dynamically changing data patterns.
Novel contributions of this paper include a taxonomy of different types of attacks on machine learning techniques and systems, a variety of defenses against those attacks, a discussion of ideas that are important to security for machine learning, an analytical model giving a lower bound on attacker's work function, and a list of open problems.
The rest of this paper is organized as follows: Section 2 discusses machine learning and how it is typically used in a system, Section 3 develops a taxonomy of attacks, Section 4 introduces potential defenses against attacks and explores their potential costs, Section 5 identifies several of the ideas that are important to security for machine learning, Section 6 presents an analytical model that examines an attack to manipulate a naive learning algorithm, Section 7 discusses related work, potential research directions, and our conclusions.
REVIEW
A machine learning system attempts to find a hypothesis function f that maps events (which we call points below) into different classes. 
For example, an intrusion detection system would find a hypothesis function f that maps an event point (an instance of network behavior) into one of two results: normal or intrusion.
One kind of learning system called supervised learning works by taking a training data set together with labels identifying the class for every point in the training data set. For example, a supervised learning algorithm for an IDS would have a training set consisting of points corresponding to normal behavior and points corresponding to intrusion behavior. The learning algorithm selects the hypothesis function f that best predicts the classification of a point. More complicated learning algorithms can deal with event points that are both labeled and unlabeled and furthermore can deal with continuous streams of unlabeled points so that training is an ongoing process. In this paper, we call these algorithms online learning systems.
The remainder of this subsection presents a concise overview of concepts in statistical learning theory. The presentation below is formal and can be skipped on a first reading. For a fuller discussion with motivation, refer to [11, 31].
A predictive learning problem is defined over an input space X, an output space Y, and a loss function ℓ : Y × Y → R. The input to the problem is a training set S, specified as {(x_i, y_i) ∈ X × Y}, and the output is a hypothesis function f : X → Y. We choose f from a hypothesis space (or function class) F to minimize the prediction error given by the loss function. In many cases, researchers assume stationarity, that the distribution of data points encountered in the future will be the same as the distribution of the training set. Stationarity allows us to reduce the predictive learning problem to a minimization of the sum of the loss over the training set:

f* = argmin_{f ∈ F} Σ_{(x_i, y_i) ∈ S} ℓ(f(x_i), y_i)    (1)

Loss functions are typically defined to be non-negative over all inputs and zero when f(x_i) = y_i. A commonly used loss function is the squared-error loss ℓ_sq(f(x_i), y) = (f(x_i) - y)^2.
The hypothesis space (or function class) F can be any representation of functions from X to Y, such as linear functions, polynomials, boolean functions, or neural networks. The choice of F involves a tradeoff between expressiveness and ability to generalize. If F is too expressive, it can overfit the training data. The extreme case is a lookup table that maps x_i to y_i for each instance of the training set but will not generalize to new data. A linear function, on the other hand, will generalize what it learns on the training set to new points, though it may not be sufficiently expressive to describe intricate data sets. We typically use simple function classes, such as linear functions, to avoid overfitting.
We can describe a more general learning problem by dropping the requirement that the training examples include all the labels y_i. The case where all labels are present is referred to as supervised learning, when no labels are present the problem is unsupervised, and when some labels are present the problem is semi-supervised.
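As a concrete illustration of the supervised case in Equation (1), the short Python sketch below fits a linear hypothesis to a toy labeled training set under the squared-error loss. It is only a sketch: the data, the helper names, and the choice of a linear function class are ours and are not prescribed by the paper.

import numpy as np

# Toy training set S = {(x_i, y_i)}: feature vectors labeled 0 (normal) or 1 (intrusion).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0.0, 0.0, 1.0, 1.0])

def squared_loss(prediction, label):
    # l_sq(f(x_i), y_i) = (f(x_i) - y_i)^2
    return (prediction - label) ** 2

def empirical_risk(w, X, y):
    # Sum of the loss over the training set, as in Equation (1), for the linear hypothesis f(x) = w . x.
    return sum(squared_loss(x @ w, label) for x, label in zip(X, y))

# For linear hypotheses and squared error, the minimizer of Equation (1) is ordinary least squares.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_star, empirical_risk(w_star, X, y))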
In all these cases we can pose the learning problem as the minimization of some measure over the training set:\nf^* = \arg\min_{f \in F} \sum_{x_i \in S} L(x_i, f)    (2)\n2.2 Terminology and Running Example\nTo illustrate some of our contributions, we use a running example throughout this paper: a network Intrusion Detection System (IDS). This IDS receives network events x \in X and classifies each event x as either f(x) = normal or f(x) = intrusion. The literature describes a number of algorithms for learning f over time, but we wish to consider the impact of malicious input on the learning algorithm. This paper poses the question: can a malicious party send events to the IDS that will cause it to malfunction? Possible types of attacks on the IDS include attacks on the learning algorithm, causing the IDS to create an f that misclassifies events. As we discuss in the next section, this is only one of a number of types of attacks that an adversary can make on an IDS.\nIt is important to be careful about notation here. When we speak of attacks, we mean an attack on the learning system (e.g., the learner in an IDS). Attacks may try to make the learner mis-learn, fail because of denial of service, report information about its internal state, etc. \"Attack\" should be distinguished from \"intrusion.\" An attack targets a learning system; an intrusion targets a computer system (such as a system protected by an IDS). While many researchers use the word \"attack\" to include intrusions, in this paper we are careful to use the word \"attack\" only to mean an attack on a learner.\nWe do not want to restrict ourselves to particular learning algorithms used by intrusion detection systems to choose hypotheses. However, we allow adversaries that have deep understanding of the learning algorithms.\nSimilarly, we do not discuss mechanisms for translating network level events into a form relevant to the learner. We call each unit of data seen by the learner a data point, or simply a point. In the context of the IDS, our discussion encompasses continuous, discrete, or mixed data. We assume that X is a metric space, allowing us to freely discuss distances between points. Furthermore, we assume the set of points classified as normal by the IDS forms multiple contiguous subsets in X. The border of this set is called the decision boundary.\nTable 1: The attack model (rows: attack classes; columns: Integrity / Availability).\nCausative, Targeted: Permit a specific intrusion | Create sufficient errors to make system unusable for one person or service\nCausative, Indiscriminate: Permit at least one intrusion | Create sufficient errors to make learner unusable\nExploratory, Targeted: Find a permitted intrusion from a small set of possibilities | Find a set of points misclassified by the learner\nExploratory, Indiscriminate: Find a permitted intrusion |\nBelow, we consider a variety of scenarios and assumptions.\nATTACKS\nWe give relevant properties for analyzing attacks on machine learning systems.\nInfluence\nCausative - Causative attacks alter the training process through influence over the training data.\nExploratory - Exploratory attacks do not alter the training process but use other techniques, such as probing the learner or offline analysis, to discover information.\nSpecificity\nTargeted - The specificity of an attack is a continuous spectrum. 
At the targeted end, the focus of the attack is on a particular point or a small set of points.\nIndiscriminate - At the indiscriminate end, the adversary has a more flexible goal that involves a very general class of points, such as \"any false negative.\"\nSecurity violation\nIntegrity - An integrity attack results in intrusion points being classified as normal (false negatives).\nAvailability - An availability attack is a broader class of attack than an integrity attack. An availability attack results in so many classification errors, both false negatives and false positives, that the system becomes effectively unusable.\nThese three axes define a space of attacks; Table 1 provides a concise summary.\nIn causative attacks, the adversary has some measure of control over the training of the learner. An attack that causes the learner to misclassify intrusion points, for example an attack that fools an IDS into not flagging a known exploit as an intrusion, is a causative integrity attack. The distinction between targeted and indiscriminate causative integrity attacks is the difference between choosing one particular exploit or just finding any exploit. A causative availability attack causes the learner's performance to degrade. For example, an adversary might cause an IDS to reject many legitimate HTTP connections. A causative availability attack may be used to force the system administrator to disable the IDS. A targeted attack focuses on a particular service, while an indiscriminate attack has a wider scope.\nExploratory attacks do not attempt to influence learning; they instead attempt to discover information about the state of the learner. Exploratory integrity attacks seek to find intrusions that are not recognized by the learner.\n3.2 Online Learning\nA learner can have an explicit training phase or can be continuously trained (online learner). Online learning allows the learner to adapt to changing conditions; the assumption of stationarity is weakened to accommodate long-term changes in the distribution of data seen by the learner.\nOnline learning is more flexible, but potentially simplifies causative attacks. By definition, an online learner changes its prediction function over time, so an adversary has the opportunity to shape this change. Gradual causative attacks may be difficult to detect.\nDEFENSES\nIn this section we discuss potential defenses against attacks. This section describes speculative work, and the efficacy of these techniques in practice is a topic for future research.\n4.1 Robustness\nTo increase robustness against causative attacks we constrain the class of functions (hypotheses) that the learner considers. The constraint we consider is the statistical technique of regularization. Regularization extends the basic learning optimization in Equation (1) by adding a term J(f) that penalizes complex hypotheses:\nf^* = \arg\min_{f \in F} \left\{ \sum_{(x_i, y_i) \in S} \ell(f(x_i), y_i) + \lambda J(f) \right\}    (3)\nHere \lambda adjusts the trade-off; a brief numerical sketch of this regularized formulation is given below. 
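As a concrete but purely illustrative instance of Equation (3), the sketch below adds a sum-of-squares penalty on the parameters of the same toy linear hypothesis class used earlier; the value of \lambda (lam) and the data are assumptions made for the example, not values recommended in this paper.\n    # Toy regularized learning in the sense of Equation (3): squared-error loss\n    # plus lam * J(f), where J(f) is the sum of squares of the parameters of f.\n    S = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 2.8)]\n    lam = 0.5\n    \n    def penalty(a, b):\n        # J(f) for the line f(x) = a*x + b.\n        return a * a + b * b\n    \n    def regularized_risk(a, b):\n        data_term = sum((a * x + b - y) ** 2 for x, y in S)\n        return data_term + lam * penalty(a, b)\n    \n    candidates = [(a / 10.0, b / 10.0) for a in range(-20, 21) for b in range(-20, 21)]\n    a_star, b_star = min(candidates, key=lambda ab: regularized_risk(*ab))\n    print('regularized hypothesis: f(x) = %.1f*x + %.1f' % (a_star, b_star))\nIncreasing lam shrinks the chosen parameters toward zero, which is one way the learner can trade data fit for resistance to an adversary's influence. 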
The penalty term J(f) can be as simple as the sum of squares of the parameters of f. Regularization is used in statistics to restrict or bias the choice of hypothesis when the problem suffers from lack of data or noisy data. It can also be interpreted as encoding a prior distribution on the parameters, penalizing parameter choices that are less likely a priori. Regularization and prior distributions can both be viewed as penalty functions in Equation (3) [42].\nTable 2: Defenses against the attacks in Table 1 (rows: attack classes; columns: Integrity / Availability).\nCausative, Targeted: Regularization, Randomization | Regularization, Randomization\nCausative, Indiscriminate: Regularization | Regularization\nExploratory, Targeted: Information hiding, Randomization | Information hiding\nExploratory, Indiscriminate: Information hiding |\nThe constraint added to the learning problem by the penalty term may help our defenses in two ways. First, it has the effect of smoothing the solution, removing complexity that an adversary might exploit in attacks. Second, prior distributions can be a useful way to encode expert knowledge about a domain or use domain structure learned from a preprocessing step. In the simplest case, we might have a reasonable guess for the parameters (such as the mean) that we wish to refine; in a more complex situation, we could perform an analysis of a related dataset giving correlation information which informs a multivariate Gaussian prior on the parameters [28]. When the learner has more prior information (or constraints) on which to base the learning, there is less dependence on exact data fitting, so there is less opportunity for the adversary to exert influence over the learning process.\n4.2 Detecting Attacks\nThe learner can benefit from the ability to detect attacks even if they are not prevented. Detecting attacks can be difficult even when the adversary is not attempting to conceal them. However, we may be able to detect causative attacks by using a special test set. This test set could include several known intrusions and intrusion variants, as well as some random points that are similar to the intrusions. After the learner has been trained, misclassifying a disproportionately high number of intrusions could indicate compromises.\nTo detect naive exploratory attacks, a separate clustering algorithm could be run against data classified by the learner. The sudden appearance of a large cluster near the decision boundary could indicate systematic probing. This type of defense is akin to port scan detection, which has become an arms race between port scanners and IDS [26].\nDetecting an attack gives the learner information about the adversary's capabilities. This information may be used to reformulate defense strategies.\nAs the adversary's control over the data increases, the best strategy for the learner is to ignore potentially tainted data. Otherwise, the adversary can exploit misplaced trust. These ideas have been formalized within the context of deception games [14, 32], which typically assume all players know the extent to which other players may manipulate data. However, if the parties estimate each other's abilities, more sophisticated strategies emerge.\n4.3 Disinformation\nIn some circumstances, the learner may be able to alter the data seen by the adversary. This strategy of disinformation has the goal of confusing the adversary's estimate of the learner's state. 
In the simplest case, the adversary would\nthen be faced with a situation not unlike a learner under\nan indiscriminate causative availability attack. The goal of\nthe learner is to prevent the adversary from learning the\ndecision boundary. Please note how the roles of adversary\nand learner have been reversed.\nA more sophisticated learner could trick the adversary into\nbelieving that a particular intrusion was not included in the\ntraining set. This apparently permitted \"intrusion\" would\nact as a honeypot [27], causing the adversary to reveal itself.\nAn increase in the incidence of that particular attack would\nbe detected, revealing the existence of an adversary. In this\ncase again, roles would reverse, and the adversary would face\na situation analogous to a learner subjected to a targeted\ncausative integrity attack.\n4.4 Randomization for Targeted Attacks\nTargeted attacks hinge on the classification of one point or a\nsmall set of points. They are more sensitive to variations in\nthe decision boundary than indiscriminate attacks because\nboundary movement is more likely to change the classification\nof the relevant points.\nThis suggests randomization as a potential tool against targeted\ncausative attacks. In such an attack, the adversary\nhas to do a particular amount of work to move the decision\nboundary past the targeted point. If there is some randomization\nin the placement of the boundary and the adversary\nhas imperfect feedback from the learner, more work is required\n.\n0\n\n4.5 Cost of Countermeasures\nThe more we know about the distribution of training data,\nthe less room there is for an adversary to manipulate the\nlearner. The disadvantage, however, is that the legitimate\ndata has less influence in the learning process. A tension\nexists between expressivity and constraint: as the learner\nincludes more prior information, it loses flexibility to adapt\nto the data, but as it incorporates more information from\nthe data, it becomes more vulnerable to attack.\nEquation (3) makes this tradeoff explicit with . In the adversarial\nscenario, this tradeoff becomes more relevant because\nthe adversary may have influence over the data.\nRandomization increases the adversary's work, but it also\nwill increase the learner's base error rate. Determining the\nright amount of randomization is an open problem.\n4.6 Summary of Defenses\nTable 2 shows how our defenses discussed here relate to attack\nclasses presented in Table 1. (Information hiding is an\nadditional technique discussed in Section 5 below.)\nDISCUSSION\nA number of defenses and attacks upon machine learning\nalgorithms hinge upon the types of information available to\nthe adversary. Some of these involve information about the\ndecision boundary. Below we consider factors that influence\nthe security and secrecy of the decision boundary.\n5.2 Scale of Training\nSome machine learning systems are trained by the end user,\nwhile others are trained using data from many users or organizations\n. The choice between these two models is sometimes\ncast as a tradeoff between the amount of training data\nand the secrecy of the resulting classifier [3]. This issue also\napplies to an IDS; if an IDS is trained each time it is deployed\nthen it will have comparatively little data regarding\nnormal network traffic. 
It will also have no chance to learn\nabout novel intrusions before seeing them in the wild.\nConversely, an IDS that uses a global set of rules would\nbe able to adapt to novel intrusion attempts more quickly.\nUnfortunately, any adversary with access to a public IDS\nclassification function can test to ensure that its intrusion\npoints will be accepted by deployments of the same classification\nfunction.\nThese issues are instances of a more general problem. In\nsome cases, it seems reasonable to assume the adversary has\nlittle access to information available to the learner. However\n, unless the adversary has no prior knowledge about the\nlearning problem at hand, we cannot assume all of the information\nprovided in the training set is secret. Therefore,\nit is unclear how much is gained by attempting to keep the\ntraining set, and therefore the state of the classifier, secret.\nMany systems already attempt to achieve a balance between\nglobal and local retraining [3]. Systems that take this approach\nhave the potential to outperform systems that perform\ntraining at a single level. However, the relationships\nbetween multilevel training, the adversary's domain knowledge\n, and secrecy are not yet well understood.\n5.2.1 Adversary Observations\nEven without prior knowledge regarding a particular system\n, an adversary still may deduce the state of the learning\nalgorithm. For example, if the learning system provides\nfeedback to the adversary (e.g., \"Request denied\"), then a\nprobing attack could be used to map the space of acceptable\ninputs.\nIf the adversary has no information regarding the type of\ndecision boundary used by the learner, this process could\nrequire a number of probes proportional to the size of the\nspace. On the other hand, if the adversary knows which\nlearning algorithm is being used, a few well-chosen probes\ncould give the adversary sufficient knowledge of the learner's\nstate. As a standard security practice, we assume the learning\nalgorithm itself to be common knowledge.\nInstead of expecting the learning algorithm to be a secret,\nsome systems attempt to prevent the adversary from discovering\nthe set of features the learning algorithm uses. This\nmay be realistic in systems with a small number of deployments\n.\nIdeally, we could produce an information theoretic bound\non the amount of information an adversary could gain by\nobserving the behavior of a particular algorithm on a particular\npoint. Using these bounds, we could reason about\nthe algorithm's robustness against probing attacks. In this\nsetting, it may also be interesting to distinguish between information\ngained from normal points drawn from the data's\nunderlying distribution, intrusion points from a third party,\nand (normal or intrusion) attack points of the adversary's\nchoosing.\nAn adversary with sufficient information regarding training\ndata, classifications of data points, or the internal state of a\nlearner would be able to deduce the learner's decision boundary\n. This knowledge could simplify other types of attacks.\nFor instance, the adversary could avoid detection by choosing\nintrusion points that will be misclassified by the learner,\nor launch an availability attack by manipulating normal\npoints in a way that leads to misclassification. 
In either\ncase, by increasing the number of points that are in the region\nthat the defender incorrectly classifies, the adversary\ncould increase the error rate.\nSome algorithms classify points by translating them into an\nabstract space and performing the actual classification in\nthat space. The mapping between raw data and the abstract\nspace is often difficult to reason about. Therefore, it may be\ncomputationally difficult for an adversary to use knowledge\nof a classifier's decision boundary to generate \"interesting\"\nattack points that will be misclassified.\nOne can imagine classes of decision boundaries that are\nmeaningful, yet provably provide an adversary with no information\nregarding unclassified points. Even with complete\nknowledge of the state of a learner that uses such a decision\nboundary, it would be computationally intractable to find\n0\n\none of a few \"interesting\" points in a sufficiently large search\nspace.\nIn some cases, the decision boundary itself may contain sensitive\ninformation. For example, knowledge of the boundary\nmay allow an adversary to infer confidential information\nabout the training set. Alternatively, the way the decision\nboundary was constructed might be a secret.\n5.2.2 Security Properties\nThe performance of different algorithms will likely degrade\ndifferently as the adversary controls larger fractions of the\ntraining set. A measurement of an algorithm's ability to\ndeal with malicious training errors could help system designers\nreason about and decide between different learners.\nA simple approach would be to characterize an algorithm's\nperformance when subjected to a particular type of attack,\nbut this would lead to an arms race as adversaries devise\nclasses of attacks not well represented during the evaluation\nof the algorithm.\nDepending on the exact nature of the classification problem\n, it may be possible to make statements regarding the\nstrength of predictions. For example, after making a classification\na learning algorithm could examine the training set\nfor that classification. It could measure the effect of small\nchanges to that training set; if small changes generate large\neffects, the training set is more vulnerable to manipulation.\nTHEORETICAL RESULTS\nIn this section we present an analytic model that examines\na causative attack to manipulate a naive learning algorithm.\nThe model's simplicity yields an optimal policy for the adversary\nand a bound on the effort required to achieve the\nadversary's objective. We interpret the resulting bound and\ndiscuss possible extensions to this model to capture more\nrealistic settings.\nWe discuss an outlier detection technique. Outlier detection\nis the task of identifying anomalous data and is a widely used\nparadigm in fault detection [40], intrusion detection [23],\nand virus detection [33, 34]. We find the smallest region\nthat contains some fixed percentage of the observed data,\nwhich is called the support of the data's distribution. The\noutlier detector classifies points inside the support as normal\nand those outside as anomalous. Outlier detection is often\nused in scenarios where anomalous data is scarce or novel\nanomalies could arise.\n6.1 A Simple Model\nOne simple approach to outlier detection is to estimate the\nsupport of the normal data by a multi-dimensional hypersphere\n. As depicted in Figure 1(a) every point in the hypersphere\nis classified as normal and those outside the hypersphere\nare classified as outliers. 
The training algorithm fixes the radius of the hypersphere and centers it at the mean of the training data. The hypersphere can be fit into the learning framework presented above by a squared loss function, \ell_{sphere}(\bar{X}, x_i) = \| x_i - \bar{X} \|^2, where \bar{X} is the centroid of the data \{x_i\}. It is easy to show that the parameter that minimizes Equation (1) is the mean of the training data.\nTo make the hypersphere adaptive, the hypersphere is retrained on new data allowing for a repeated attack. To prevent arbitrary data from being introduced, we employ a conservative retraining strategy that only admits new points to the training set if they are classified as normal; we say the classifier bootstraps itself. This learning framework is not meant to represent the state of the art in learning techniques; instead, it is an illustrative technique that allows for an exact analysis.\n6.2 Attack Strategy\nThe attack we analyze involves an adversary determined to alter our detector to include a specific point G by constructing data to shift the hypersphere toward the target as the hypersphere is retrained. We assume the goal G is initially correctly classified as an anomaly by our algorithm. For instance, in the IDS domain, the adversary has an intrusion packet that our detector currently classifies as anomalous. The adversary wants to change the state of our detector to misclassify the packet as normal. This scenario is a causative targeted integrity attack. Before the attack, the hypersphere is centered at \bar{X}_0 and it has a fixed radius R. The attack is iterated over the course of T > 1 training iterations. At the i-th iteration the mean of the hypersphere is denoted by \bar{X}_i.\nWe give the adversary complete control: the adversary knows the algorithm, its feature set, and its current state, and all points are attack points. At each iteration, the bootstrapping policy retrains on all points that were classified as normal in a previous iteration. Under this policy, the adversary's optimal strategy is straightforward -- as depicted in Figure 1(b), the adversary places points at the location where the line between the mean and G intersects with the boundary. This reduces the attack to a single dimension along this line. Suppose that in the i-th iteration, the adversary strategically places \alpha_i points at the i-th optimal location, achieving optimal displacement of the mean toward the adversary's goal, G. The effort of the adversary is measured by M, defined as \sum_{i=1}^{T} \alpha_i.\nPlacing all attack points in the first iteration is not optimal. It achieves a finite shift while optimal strategies achieve unbounded gains. As we discuss below, the attack strategy must be balanced. The more points placed during an iteration, the further the hypersphere is displaced on that iteration. However, the points placed early in the attack effectively weigh down the hypersphere, making it more difficult to move. The adversary must balance current gain against future gain. Another tradeoff is the number of rounds of iteration versus the total effort.\n6.3 Optimal Attack Displacement\nWe calculate the displacement caused by a sequence \{\alpha_i\} of attack points. For T iterations and M total attack points, the function D_{R,T}(\{\alpha_i\}) denotes the relative displacement caused by the attack sequence. The relative displacement is the total displacement over the radius of the hypersphere, (\bar{X}_T - \bar{X}_0)/R; a small numerical simulation of this quantity, under illustrative parameter choices, appears below. 
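Before the formal derivation, the following one-dimensional simulation may help fix ideas. It implements the retraining policy and attack strategy described above along the line toward G; the radius, number of iterations, and per-iteration point counts are illustrative choices only, and the closed form it compares against is Equation (4) below.\n    # Toy simulation of the iterated attack on a mean-centered hypersphere,\n    # reduced to one dimension along the line toward the goal G.\n    R = 1.0\n    alphas = [1, 2, 4, 8, 16]            # alpha_i: attack points placed per iteration\n    T = len(alphas)\n    \n    points = []                          # all admitted (attack) points so far\n    center = 0.0                         # initial mean of the hypersphere\n    for alpha in alphas:\n        target = center + R              # boundary point on the line toward G\n        points.extend([target] * alpha)\n        center = sum(points) / len(points)   # retrain: new mean of the training set\n    \n    relative_displacement = (center - 0.0) / R\n    masses = [sum(alphas[:i + 1]) for i in range(T)]   # cumulative masses M_i\n    closed_form = T - sum(masses[i - 1] / masses[i] for i in range(1, T))\n    print('simulated:', round(relative_displacement, 4), 'closed form:', round(closed_form, 4))\nBoth numbers agree, which is what the derivation that follows establishes in general. 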
Let M_i be defined as \sum_{j=1}^{i} \alpha_j, the cumulative mass. Using these terms, the relative distance is\nD_{R,T}(\{M_i\}) = T - \sum_{i=2}^{T} M_{i-1} / M_i    (4)\nwhere we constrain M_1 = 1 and M_T = M [25].\nFigure 1: Depictions of the concept of hypersphere outlier detection (a) and the vulnerability of naive approaches (b). In Figure 1(a) a bounding hypersphere centered at \bar{X} of fixed radius R is used to estimate the empirical support of a distribution excluding outliers; samples from the \"normal\" distribution being modeled are shown along with three outliers. Meanwhile, Figure 1(b) depicts how an adversary with knowledge of the state of the outlier detector could shift the outlier detector toward a first goal G. It could take several iterations of attacks to shift the hypersphere further to include the second goal G'.\nBy finding an upper bound to Equation (4), we can bound the minimal effort M^* of the adversary. For a particular M, we desire an optimal sequence \{M_i\} that achieves the maximum relative displacement, D_{R,T}(M). If the adversary has no time constraint, the solution is M_i = i, which corresponds to placing a single point at each iteration. However, if the adversary expedites the attack to T < M iterations, the optimal strategy is given by M_i = M^{(i-1)/(T-1)}. This value is not always an integer, so we have:\nD_{R,T}(M) \leq T - (T - 1) M^{-1/(T-1)}    (5)\n6.4 Bounding the Adversary's Effort\nFrom these results we find a bound on the adversary's effort M. Since M \geq 1 and T > 1, Equation (5) is monotonically increasing in M. If the desired relative displacement to the goal is D_R, the bound in Equation (5) can be inverted to bound the minimal effort M^* required to achieve the goal. Since D_R < T, this bound is given by:\nM^* \geq ( (T-1) / (T - D_R) )^{T-1}    (6)\nThe bound in Equation (6) gives us a worst-case bound on the adversary's capability when the adversary has complete control of the learner's training. For large relative displacements D_R > 1, the bound decreases exponentially as the number of iterations is increased. The bound has a limiting value of M^* \geq e^{D_R - 1}. The adversary must tradeoff between using a large number of attack points or extending the attack over many iterations. A tightly-fit hypersphere with small radius will be more robust since our displacement is relative to its radius.\nAn apparent deficiency of this analysis is the weak bound of M^* \geq c, for some 0 < c \leq 1, that occurs when D_R \leq 1. This is an important range since the adversary's goal may be near the boundary. The deficiency comes directly from our assumption of complete adversarial control. The lack of initial non-adversarial data allows our adversary to ensure a first step of one radius regardless of M. Therefore, the adversary can reach the objective of D_R \leq 1 with any M \geq 1 in a single iteration.\nA more complex model could allow for initial data. 
By considering an initial N training points that support the hypersphere before the attack, we can obtain a stronger bound:\nM^* \geq N [ e^{D_R} - 1 ]    (7)\nThis stronger bound ensures that even for small D_R, the adversary's effort is a multiple of N that increases exponentially in the desired displacement [25].\nWe could extend the model by adding non-adversarial data at every training iteration, as this corresponds to scenarios where the adversary only controls part of the data.\nCONCLUSIONS\nThe earliest theoretical work we know of that approaches learning in the presence of an adversary was done by Kearns and Li [15]. They worked in the context of Valiant's Probably Approximately Correct (PAC) learning framework [35, 36], extending it to prove bounds for maliciously chosen errors in the training data. Specifically, they proved that if the learner is to perform correctly, in general the fraction of training points controlled by the adversary must be less than \epsilon/(1 + \epsilon), where \epsilon is the desired bound on classification errors by the learner [4, 6, 30].\nResults from game theory may be relevant to adversarial learning systems. In particular, deception games involve players that have partial information and influence the information seen by other players. Some of these games involve continuous variables generated by various probability distributions [5, 9, 17, 29, 32], while others apply to scenarios with discrete states [14]. This work and adversarial learning both ask many of the same questions, and they both address the same underlying issues. Integration of game theoretic concepts is a promising direction for work in this area.\nDalvi et al. examine the learn-adapt-relearn cycle from a game-theoretic point of view [8]. In their model, the learner has a cost for measuring each feature of the data and the adversary has a cost for changing each feature in attack points. If the adversary and learner have complete information about each other and we accept some other assumptions, they find an optimal strategy for the learner to defend against the adversary's adaptations.\nResearch has also begun to examine the vulnerability of learners to reverse engineering. Lowd and Meek introduce a novel learning problem for adversarial classifier reverse engineering in which an adversary conducts an attack that minimizes a cost function [21]. Under their framework, Lowd and Meek construct algorithms for reverse engineering linear classifiers. Moreover, they build an attack to reverse engineer spam filters [22].\nAlthough they are not machine learning systems, publicly verifiable digital watermarks also must deal with sensitivity (probing) attacks. An information theoretic analysis of the sensitivity attack quantifies the amount of information revealed per probe. Randomization of thresholds within the watermark verification algorithm increases the number of probes necessary to remove a digital watermark [19].\nAn interesting junction of learning and game theory has dealt with combining advice from a set of experts to predict a sequence with the goal of doing at least as well as the best expert in all possible sequences [7, 13, 37]. In this domain, adaptive weighting schemes are used to combine the experts, each assessed by how well it performs compared to the best expert for an adversarially chosen sequence. 
Amongst\nthese schemes are the Aggregating Algorithm [37] and the\nWeighted Majority Algorithm [20].\nThere has also been work on attacking statistical spam filters\n. Wittel and Wu [39] discuss the possibility of crafting\nattacks designed to take advantage of the statistical nature\nof such spam filters, and they implement a simple attack.\nJohn Graham-Cumming describes implementing an attack\nhe calls \"Bayes vs. Bayes,\" in which the adversary trains\na second statistical spam filter based on feedback from the\nfilter under attack and then uses the second filter to find\nwords that make spam messages undetectable by the original\nfilter [10].\nMethods exist to perform exact learning of a concept using\nanswers to a series of queries. These queries return a coun-terexample\nwhen a \"no\" response is generated. In many\nscenarios, it has been shown that learning is possible even\nin the worst case [2].\nControl theory has been proposed as an alternative to game\ntheory and search oriented expert-systems for military command\nand control systems [12]. The motivation behind this\nproposal is the difficulty associated with modeling (or even\npredicting) the goals of a military adversary.\n7.2 Research Directions\nCan machine learning be secure?\nDoes adding machine\nlearning to a system introduce vulnerability? This paper\nproposes a framework for understanding these questions.\nWe present a model for describing attacks against learning\nalgorithms, and we analyze a simple attack in detail.\nWe discuss potential defenses against attacks and speculate\nabout their effectiveness.\nHere we lay out the directions for research that we see as\nmost promising. To evaluate and ensure the security of machine\nlearning, these are among the most important areas\nthat must be addressed:\nInformation\nHow crucial is it to keep information secret from an\nadversary? If an adversary has full knowledge of the\nsystem, are all the exploratory attacks trivial? If the\nadversary has no knowledge about the system, which\nattacks are still possible?\nArms race\nCan we avoid arms races in online learning systems?\nArms races have occurred in spam filters. Can game\ntheory suggest a strategy for secure re-training?\nQuantitative measurement\nCan we measure the effects of attacks? Such information\nwould allow comparison of the security performance\nof learning algorithms. We could calculate\nrisk based on probability and damage assessments of\nattacks.\nSecurity proofs\nCan we bound the amount of information leaked by the\nlearner? If so, we can bound the accuracy of the adversary's\napproximation of the learner's current state.\nDetecting adversaries\nAttacks introduce potentially detectable side effects\nsuch as drift, unusual patterns in the data observed by\nthe learner, etc. These attacks are more pronounced\nin online learning. When do these side effects reveal\nthe adversary's attack?\nACKNOWLEDGMENTS\nThanks to Michael I. Jordan, Peter Bartlett and David Mol-nar\nfor their insightful discussions and comments regarding\nthis work. We gratefully acknowledge support from the National\nScience Foundation and the Homeland Security Advanced\nResearch Projects Agency. The views expressed here\nare solely those of the authors and do not necessarily reflect\nthe views of the funding agencies or any agency of the U.S.\ngovernment.\nREFERENCES\n[1] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos,\nG. Paliouras, and C. D. Spyropolous. An evaluation of\nnaive Bayesian anti-spam filtering. 
Proceedings of the\nWorkshop on Machine Learning in the New\nInformation Age, pages 917, 2000.\n[2] D. Angluin. Queries and concept learning. Machine\nLearning, 2(4):319342, Apr. 1988.\n[3] Apache, http://spamassassin.apache.org/.\nSpamAssassin.\n[4] P. Auer. Learning nested differences in the presence of\nmalicious noise. Theoretical Computer Science,\n185(1):159175, 1997.\n[5] V. J. Baston and F. Bostock. Deception games.\nInternational Journal of Game Theory, 17(2):129134,\n1988.\n[6] N. H. Bshouty, N. Eiron, and E. Kushilevitz. PAC\nlearning with nasty noise. Theoretical Computer\nScience, 288(2):255275, 2002.\n[7] N. Cesa-Bianchi, Y. Freund, D. P. Helmbold,\nD. Haussler, R. E. Schapire, and M. K. Warmuth.\nHow to use expert advice. Journal of the ACM,\n44(3):427485, May 1997.\n[8] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and\nD. Verma. Adversarial classification. In Proceedings of\nthe Tenth ACM SIGKDD International Conference on\nKnowledge Discovery and Data Mining, pages 99108,\nSeattle, WA, 2004. ACM Press.\n[9] B. Fristedt. The deceptive number changing game in\nthe absence of symmetry. International Journal of\nGame Theory, 26:183191, 1997.\n[10] J. Graham-Cumming. How to beat an adaptive spam\nfilter. Presentation at the MIT Spam Conference, Jan.\n2004.\n[11] T. Hastie, R. Tibshirani, and J. Friedman. The\nElements of Statistical Learning: Data Mining,\nInference and Prediction. Springer, 2003.\n[12] S. A. Heise and H. S. Morse. The DARPA JFACC\nprogram: Modeling and control of military operations.\nIn Proceedings of the 39th IEEE Conference on\nDecision and Control, pages 25512555. IEEE, 2000.\n[13] M. Herbster and M. K. Warmuth. Tracking the best\nexpert. Machine Learning, 32(2):151178, Aug. 1998.\n[14] J. P. Hespanha, Y. S. Ateskan, and H. H. Kizilocak.\nDeception in non-cooperative games with partial\ninformation. In Proceedings of the 2nd\nDARPA-JFACC Symposium on Advances in\nEnterprise Control, 2000.\n[15] M. Kearns and M. Li. Learning in the presence of\nmalicious errors. SIAM Journal on Computing,\n22:807837, 1993.\n[16] A. Lazarevic, L. Ert\noz, V. Kumar, A. Ozgur, and\nJ. Srivastava. A comparative study of anomaly\ndetection schemes in network intrusion detection. In\nD. Barbar\na and C. Kamath, editors, Proceedings of\nthe Third SIAM International Conference on Data\nMining, May 2003.\n[17] K.-T. Lee. On a deception game with three boxes.\nInternational Journal of Game Theory, 22:8995,\n1993.\n[18] Y. Liao and V. R. Vemuri. Using text categorization\ntechniques for intrusion detection. In Proceedings of\nthe 11th USENIX Security Symposium, pages 5159,\nAug. 2002.\n[19] J.-P. M. Linnartz and M. van Dijk. Analysis of the\nsensitivity attack against electronic watermarks in\nimages. In D. Aucsmith, editor, Information Hiding\n'98, pages 258272. Springer-Verlag, 1998.\n[20] N. Littlestone and M. K. Warmuth. The weighted\nmajority algorithm. Information and Computation,\n108(2):212261, 1994.\n[21] D. Lowd and C. Meek. Adversarial learning. In\nProceedings of the Eleventh ACM SIGKDD\nInternational Conference on Knowledge Discovery and\nData Mining, pages 641647, 2005.\n[22] D. Lowd and C. Meek. Good word attacks on\nstatistical spam filters. In Proceedings of the Second\nConference on Email and Anti-Spam (CEAS), 2005.\n[23] M. V. Mahoney and P. K. Chan. Learning\nnonstationary models of normal network traffic for\ndetecting novel attacks. 
In Proceedings of the Eighth\nACM SIGKDD International Conference on\nKnowledge Discovery and Data Mining, pages\n376385, 2002.\n[24] S. Mukkamala, G. Janoski, and A. Sung. Intrusion\ndetection using neural networks and support vector\nmachines. In Proceedings of the International Joint\nConference on Neural Networks (IJCNN'02), pages\n17021707, 2002.\n[25] B. Nelson. Designing, Implementing, and Analyzing a\nSystem for Virus Detection. Master's thesis,\nUniversity of California at Berkeley, Dec. 2005.\n[26] V. Paxson. Bro: A system for detecting network\nintruders in real-time. Computer Networks,\n31(23):24352463, Dec. 1999.\n[27] N. Provos. A virtual honeypot framework. In\nProceedings of the 13th USENIX Security Symposium,\n2004.\n\n\n[28] R. Raina, A. Y. Ng, and D. Koller. Transfer learning\nby constructing informative priors. In Neural\nInformation Processing Systems Workshop on\nInductive Transfer: 10 Years Later, 2005.\n[29] M. Sakaguchi. Effect of correlation in a simple\ndeception game. Mathematica Japonica,\n35(3):527536, 1990.\n[30] R. A. Servedio. Smooth boosting and learning with\nmalicious noise. Journal of Machine Learning\nResearch (JMLR), 4:633648, Sept. 2003.\n[31] J. Shawe-Taylor and N. Cristianini. Kernel Methods\nfor Pattern Analysis. Cambridge University Press,\n2004.\n[32] J. Spencer. A deception game. American Math\nMonthly, 80:416417, 1973.\n[33] S. J. Stolfo, S. Hershkop, K. Wang, O. Nimeskern, and\nC. W. Hu. A behavior-based approach to secure email\nsystems. In Mathematical Methods, Models and\nArchitectures for Computer Networks Security, 2003.\n[34] S. J. Stolfo, W. J. Li, S. Hershkop, K. Wang, C. W.\nHu, and O. Nimeskern. Detecting viral propagations\nusing email behavior profiles. In ACM Transactions\non Internet Technology, 2004.\n[35] L. G. Valiant. A theory of the learnable.\nCommunications of the ACM, 27(11):11341142, Nov.\n1984.\n[36] L. G. Valiant. Learning disjunctions of conjunctions.\nIn Proceedings of the 9th International Joint\nConference on Artificial Intelligence, pages 560566,\n1985.\n[37] V. Vovk. Aggregating strategies. In M. Fulk and\nJ. Case, editors, Proceedings of the 7th Annual\nWorkshop on Computational Learning Theory, pages\n371383, San Mateo, CA, 1990. Morgan-Kaufmann.\n[38] L. Wehenkel. Machine learning approaches to power\nsystem security assessment. IEEE Intelligent Systems\nand Their Applications, 12(5):6072, Sept.Oct. 1997.\n[39] G. L. Wittel and S. F. Wu. On attacking statistical\nspam filters. In Proceedings of the First Conference on\nEmail and Anti-Spam (CEAS), 2004.\n[40] W. Xu, P. Bodik, and D. Patterson. A flexible\narchitecture for statistical learning and data mining\nfrom system log streams. In Temporal Data Mining:\nAlgorithms, Theory and Applications, Brighton, UK,\nNov. 2004. The Fourth IEEE International Conference\non Data Mining.\n[41] D.-Y. Yeung and C. Chow. Parzen-window network\nintrusion detectors. In Proceedings of the Sixteenth\nInternational Conference on Pattern Recognition,\npages 385388, Aug. 2002.\n[42] K. Yu and V. Tresp. Learning to learn and\ncollaborative filtering. 
In Neural Information\nProcessing Systems Workshop on Inductive Transfer:\n10 Years Later, 2005.\n", "keywords": "Indiscriminate attack;Targeted attack;Statistical Learning;Exploratory attack;Machine learning;Security Metrics;Integrity;Availability;Spam Filters;Causative attack;Intrusion Detection;Machine Learning;Intrusion detection system;Computer Security;Game Theory;Adversarial Learning;Learning algorithms;Computer Networks;Security"} {"name": "52", "title": "Catenaccio: Interactive Information Retrieval System through Drawing", "abstract": "The Catenaccio system integrates information retrieval with sketch manipulations. The system is designed especially for pen-based computing and allows users to retrieve information by simple pen manipulations such as drawing a picture. When a user draws a circle and writes a keyword, information nodes related to the keyword are collected automatically inside the circle. In addition, the user can create a Venn diagram by repeatedly drawing circles and keywords to form more complex queries. Thus, the user can retrieve information both interactively and visually without complex manipulations. Moreover, the sketch interaction is so simple that it is possible to combine it with other types of data such as images and real-world information for information retrieval. In this paper, we describe our Catenaccio system and how it can be effectively applied.", "fulltext": "INTRODUCTION\nPen-based computers, such as personal digital assistants (PDA)\nand tablet PCs, have been developed. These computers are\ncharacterized by simple sketch interfaces similar to drawing a\npicture on paper in the real world. This drawing manipulation is\nnot especially useful for communicating details, but is effective\nfor general use. It is especially useful for creative activities, so\nthere have been a number of research reports on improving sketch\nmanipulation [1, 2, 3].\nIn addition, some game devices (e.g., Nintendo DS [4]) support\nsuch kinds of interactions and provide many types of game\ncontent. In these systems, a user can use the entire system window\nas a workspace and create 3D CG from 2D drawings. However, as\nthe original applications may not support information retrieval,\nthe user has to use conventional retrieval applications along with\npen-based input styles.\nConsiderable research has been done to support the use of\ninformation visualization for retrieving information [5]. Technical\nvisualization methods such as zooming and scaling can be used to\neffectively display huge amounts of data [6, 7, 8]. However,\nexisting visualization systems focus on mouse manipulation (e.g.,\nclick and drag), so they are not effectively designed for pen-based\ninteractions such as a drawing.\nThe most popular method of retrieving information is no doubt\nkeyword searching. Search engines via the Web (e.g., Google and\nYahoo) have been\n\ngenerally used for keyword searching [12, 13],\nand people feel that they cannot live without such search engines.\nGenerally, keyword searching requires users to input one or more\nkeywords. In these systems, users can retrieve information related\nto the keywords with Boolean operations (e.g., AND, OR and\nNOT). However, the systems are based on conventional input\nmethods. Users of pen-based computers have to write a query into\na fixed dialog box with a stylus or pen.\nTherefore, we have been developing an information retrieval\nsystem based on simple sketch manipulations. 
Our goal is to\ndevise an effective and simple information retrieval system that\nworks on pen-based computers, so we integrated a keyword\nsearching that is one of the most usual methods with sketch\nmanipulation that is one of the simple interactions. In our system,\nusers retrieve information by drawing a Venn diagram instead of\ninputting keywords to a dialog box. Because the Venn diagram\ncan be used to display Boolean operations (e.g., AND, OR, and\nNOT) visually and create some relationships at the same time,\nusers can recognize the relationships at a glance. Moreover, the\nsystem allows users to use other types of data as elements in a\nVenn diagram (Fig. 1).\n\nIn this paper, we describe our Catenaccio system that integrates\ninformation retrieval with sketch manipulations, and explain how\nit can be effectively applied for information retrieval.\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nAVI '06, May 2326, 2006, Venezia, Italy.\nCopyright 2006 ACM 1-59593-353-0/06/0005...$5.00.\n\n79\n\n\nFigure 1: Venn diagram: Venn diagram can be used to display\nBoolean operations and some relationships at the same time\n(top). The user can create an original Venn diagram (bottom).\n\nFigure 2. Basic manipulation: A user of the Catenaccio system\ndraws a circle and writes a keyword inside that circle.\nInformation nodes related to the keyword are then collected\nwithin the circled area.\n\nRELATED WORK\nA wide variety of information visualization systems are used for\ninformation retrieval [5]. Treating information as visualized nodes\n(e.g., images and simple shapes) allows users to interact with the\ninformation space visually. Moreover, several techniques (e.g.,\nscaling, zooming, focus and context) are used to display a huge\namount of information more effectively [6, 7, 8, 19]. Especially,\nspring model [11] provides useful ways to recognize the\nrelationships between nodes. In these systems, related nodes move\nwhen the user clicks and drags a node. That is, node positions are\ndynamically changed through the user manipulations, so users\nretrieve information interactively. InfoCrystal [9] is also a visual\ntool focused on information retrieval. The system uses Venn\ndiagrams to treat huge amounts of information effectively.\nHowever, conventional systems are designed for mouse\ninteractions (e.g., click and drag) and their layouts are predefined,\nso they are not suitable for Pen-based computing, especially\ndrawing or writing by hand.\nThere are also several sketch interfaces focusing on pen-based\ncomputing [1, 2, 3]. Most of them enhance drawing manipulations\nand focus on 3D creations performed with 2D manipulation.\nCharacteristically, the manipulations required for these systems\nare simple and are similar to drawing a stroke on a piece of paper\nwith a pen. Sketch [1] users can draw 3D curves by performing\n2D manipulations. This system calculates a 3D curve by\ncombining a 2D stroke and a shadow stroke. Users of Harold [2]\nand Tolba [3] can create flat models in a 3D space by using\nsketch-based manipulation, effectively creating a 2.5D scene in a\n3D space.\n\nFigure 3. 
Drawing a Venn diagram: By repeatedly drawing\ncircles and keywords, users create Venn diagrams, and can\nthen retrieve information by forming complex queries.\nInformation nodes related with both \"CG\" and \"3D\" are\ncollected.\n\nSYSTEM OVERVIEW\nThe Catenaccio system is focused on pen-based computing and\nprovides an interactive and visual information retrieval\nenvironment of using drawing manipulations.\n3.1\nDrawing Circles and Writing Keywords\n\nA user of the Catenaccio system draws a circle and writes a\nkeyword inside that circle. The system automatically recognizes\nboth the circle area and the keyword. Information nodes related to\nthe keyword are then collected within the circled area. By making\na continuous series of simple drawings, the user can create a Venn\ndiagram to form a more complex query. Since the entire window\nis both a search area and a drawing canvas, the user can use the\nworkspace freely.\nSince all the manipulations required for information retrieval are\nbased on sketch manipulations, users can design an original Venn\ndiagram related their interests. Thus, users can freely exploit the\nwhole application window as both an input and a search area and\nretrieve information without complex GUIs.\nThe circle provides an area where information nodes related to the\nkeyword will be collected, and the keyword provides a query to\nsearch for related information nodes from a database. By\ncontinuing to use simple drawings, users can form more complex\nqueries. The related information nodes are moved with a force\nthat depends on the distance between the node position and the\ncenter of the circle (Fig. 2).\nFigure 3 shows an example of creating a Venn diagram by\ncontinuing to draw circles and keywords. In the example, when a\nuser retrieves information that has two keywords \"CG\" and \"3D\",\nthe user first draws a circle and writes \"CG\" (Fig. 3 (1)), and then\ndraws another circle and writes \"3D\" (Fig. 3 (2)). Information\nnodes related to both keywords appear in the shared area of the\nVenn diagram (Fig. 3 (3 and 4)). Moreover, in the Venn diagram,\nthe user can view four areas at a glance (Fig. 3 (3)).\n80\n\nFigure 4: Venn diagram of a drawing and an image, and Venn\ndiagram of a drawing and an image that contains character\ninformation.\n\nFigure 5: Combination with real-world information:\nCapturing real-world information as a picture (1, 2) and\ndrawing a circle around the keyword brings up related\ninformation nodes (3, 4).\n3.2 Combination with Other Types of Data\nUsers can now easily take pictures using digital cameras and cell\nphones that contain CCD cameras. As a result, they may have a\nhuge amount of original image data in their computers. These data\ninclude some information such as name, time, or place, so we\nconsidered using them for information retrieval.\nWe have developed prototype applications to explore the potential\nof Catenaccio. A Venn diagram is basically constructed by\ncombining keywords and areas, so it is possible to combine that\ndiagram with other types of data such as images and real-world\ninformation. Images contain name, time, or place information, and\nthat information becomes a good trigger for retrieving other\ninformation and it can be used for queries. In addition, the image\ndata has a rectangular shape that is useful for setting an area by\ncontrolling its position and size.\nThe example in Figure 4 shows how users can use images to\ncreate Venn diagrams by combining drawings with image data. 
In\nthe example, a file named \"Mr. Tobita\" becomes a query for the\nVenn diagram, so information related to \"VR\" AND \"Mr.\nTobita\" is collected. In this case, even if users have forgotten\nsomeone's name, they can still retrieve related information\nthrough image contents.\nFigure 5 shows an example a Venn diagram with real-world\ninformation. Generally, using captured data for an interaction\ntrigger is a common technique in AR systems [20]. We also use it\nfor elements of Venn diagrams. Users first capture real-world\ninformation through digital cameras attached to their computers\n(Fig. 5 (1, 2)), and then draw circles around keywords on the\ncaptured data and another circle to collect information nodes (Fig.\n5 (3)). As the system recognizes the keyword inside the first circle,\nrelated information nodes appear inside the second curve (Fig. 5\n(4)).\n\nFigure 6: Recognition of user drawings: The system labels the\nuser drawing area (1, 2). The system recognizes written\nkeywords by using an OCR library (3)\nPreviously, we have proposed a similar information retrieval\nsystem [10]. That system provides natural interactions, however,\nit completely depends on real-world objects. However,\nCatenaccio provides not only real-world information, but also\ndrawing manipulations. Thus, the user can retrieve information\neven if there are not enough real-world objects.\n\nIMPLEMENTATION\nThe entire workspace is bitmapped as in conventional 2D paint\nsystems, and user drawing manipulations are reflected in the\nbitmap. The system supports two types of drawing, writing\nkeywords and area drawings. The keyword writing is displayed as\ngreen, and the area drawing is displayed as blue. To set an area,\nthe system labels the inside of a green area and knows the size\nand position of the area (Fig. 6 (1)). Then, the system divides the\narea into four layers for node animations (Fig. 6(2)). For keyword\nwriting, the system sends the result to an OCR library to search\nfor its meaning (Fig. 6 (3)).\nCatenaccio is a prototype system now, and the relationship\nbetween keywords and information nodes are predefined in a\ntemporary database. The database contains three types of data:\nnode names, keywords, and relationship levels. After the image\nrecognition processes, nodes related to the keyword are selected\nand start moving until they are in the area appropriate to their\nrelationship. The force is calculated by spring model [11]. For\nexample, the node with the strongest relationship with the\nkeyword receives a force that takes it to the deepest area.\n\nDISSCUSSION\nWe have had some opportunities to demonstrate our system. Here,\nwe discuss user interactions with Catenaccio based on comments\nmade by visitors to our demonstrations. Also we consider the\nlimitations of the system and our plans for future work.\nFrom our demonstrations, the visitors quickly understood the\nconcepts of our system that integrates information retrieval with\n81\ndrawing manipulations. Using Venn diagrams makes recognizing\nthe relationships between information nodes and keywords easy.\nMost visitors could create simple Venn diagrams and set related\nnodes into the diagrams after watching a simple demonstration.\nWe observed that some users drew interesting Venn diagrams that\nresembled pictures. 
The system facilitates creative activities, so\nwe expect users will be able to create more original, and\nincreasingly effective drawings for information retrieval.\nEspecially, we received good reactions from users regarding the\ncombination of drawing and an image to create a Venn diagram.\nBy exploiting such combinations, the system augments keyword\nsearching, and it is different from conventional search engines [12,\n13]. Moreover, combining keywords and user-drawn pictures to\ncreate Venn diagrams is possible.\nOur system focuses on information retrieval for pen-based input.\nHowever, information retrieval using Venn diagrams is quite\nrough. We plan to combine our system with other types of sketch-based\nsystems such as VelvetPath [16] to support more detailed\ninteraction. With such a combination, users can use Catenaccio\nfor general retrieval of information and then use VelvetPath to\nexamine the information in more detail. In this case, all the\nmanipulations would still be based on drawing or handwriting, so\na user can handle a large amount of data in a natural way.\nDrawing manipulation is also useful for finger gestures.\nMany AR systems support the use of finger gestures as an input\nmethod [17, 18]. As the system recognizes user finger gestures,\nusers can create Venn diagrams by manipulating real-world\nobjects, drawing circles, and writing keywords.\nCONCLUSION\nWe described the Catenaccio system that is focused on Pen-based\ncomputing and allows users to retrieve information by drawing\nVenn diagrams. The system recognizes user writing and drawings\n(keywords and circles) and places information related to the\nkeywords inside the circles. Using this input, the system provides\nan interactive and visual information retrieval method. We\ndescribed some examples of retrieving information through\nsimple drawings. We also provided several examples of unique\nVenn diagrams created by combining drawings with images and\nreal-world information.\n\nACKNOWLEDGMENTS\nWe thank Tetsuji Takada and Sinji Daigo for the visible and\nuseful suggestions on this work.\n\nREFERENCES\n[1]\nR. C. Zeleznik, K. P. Herndon, and J. F. Hughes. An\nInterface for Sketching 3D Curves. In Proceedings of\nACM SIGGRAPH '96, pp. 163-170, 1996.\n[2]\nJ. M. Cohen, J. F. Hughes, and R. C. Zeleznik. Harold:\nA World Made of Drawings. In Proceedings of\nNPAR2000 (Symposium on Non-Photorealistic\nAnimation and Rendering), pp. 83-90, 2000.\n[3]\nO. Tolba, J. Doresey, and L. McMillan. Sketching with\nProjective 2D Strokes. In Proceedings of ACM\nUIST '99, pp. 149-157, 1999.\n[4]\nNintendo DS: http://www.nintendo.co.jp/ds/\n[5]\nS. K. Card, J. D. MacKinlay, and B. Shneiderman.\nReadings in Information Visualization: Using Vision\nto Think. Morgan Kaufmann, 1999.\n[6]\nH. Koike. Fractal views: a fractal-based method for\ncontrolling information display. In Proceedings of\nACM Transactions on Information Systems, Vol. 13,\nNo. 3, pp. 305-323, July 1995.\n[7]\nG. W. Furnas. Generalized fisheye views. In\nProceedings of the ACM Transactions on Computer-Human\nInteraction, Vol. 1, No. 2, pp. 126-160, 1994.\n[8]\nB. B. Bederson, J. D. Hollan, K. Perlin, J. Meyer, D.\nBacon, and G. Furnas. Pad++: A Zoomable Graphical\nSketchpad for Exploring Alternate Interface Physics.\nJournal of Visual Languages and Computing, Vol. 7,\nNo. 1, pp. 3-31, 1996.\n[9]\nA. Spoerri. Visual tools for information retrieval. In\nProceedings of VL'93, pp. 160-168, 1993.\n\n[10]\nH. Koike, Y. Sato, Y. Kobayashi, H. Tobita and M.\nKobayashi. 
Interactive Textbook and Interactive Venn\nDiagram. In Proceedings of ACM CHI2000, pp. 121-128\n, 2000.\n[11]\nR. Davidson and D. Harel. Drawing Graphics Nicely\nUsing Simulated Annealing. In Proceedings of ACM\nTransactions on Graphics, Vol. 15, No. 4, pp. 301-331,\n1996.\n[12]\nGoogle: http://www.google.com\n[13]\nYahoo: http://www.yahoo.com\n[14]\nT. Calishain and R. Dornfest. GoogleHack:\n100\nIndustrial-Strength Tips & Tricks\n. O'RELLY, 2003.\n[15]\nP. Bausch. AmazonHack: 100 Industrial-Strength Tips\n& Tools. O'RELLY, 2003.\n[16]\nH. Tobita. VelvetPath: Layout Design System with\nSketch and Paint Manipulations, In Proceedings of\nEUROGRAPHICS2003 Short Presentations, pp. 137-144\n, 2003.\n[17]\nJ. Rekimoto. SmartSkin: An Infrastructure for\nFreehand Manipulation on Interactive Surfaces, In\nProceedings of ACM CHI2002, 113-120, 2002.\n\n[18] X. Chen, H. Koike, Y. Nakanishi, K. Oka, and Y. Sato. Two-handed\ndrawing on augmented desk system, In Proceedings\nof AVI 2002, 2002.\n[19] E. Orimo and H. Koike. ZASH: A browsing system for\nmulti-dimensional data. In Proceedings of IEEE VL '99, pp.\n266-286, 1999.\n[20] J. Rekimoto and K. Nagao. The world through the computer:\nComputer augmented interaction with real world\nenvironments. In Proceedings of ACM UIST'95, pp. 29-36,\n1995.\n\n82", "keywords": "Sketch manipulation;Information node;Venn diagram;Visual information retrieval;Pen-based computing;Image data;Information retrieval;keyword searching;2D system;sketch manipulations;interactive system;Catnaccio system;Interactive information retrieval system"} {"name": "53", "title": "Compression of Inverted Indexes For Fast Query Evaluation", "abstract": "Compression reduces both the size of indexes and the time needed to evaluate queries. In this paper, we revisit the compression of inverted lists of document postings that store the position and frequency of indexed terms, considering two approaches to improving retrieval efficiency:better implementation and better choice of integer compression schemes. First, we propose several simple optimisations to well-known integer compression schemes, and show experimentally that these lead to significant reductions in time. Second, we explore the impact of choice of compression scheme on retrieval efficiency. In experiments on large collections of data, we show two surprising results:use of simple byte-aligned codes halves the query evaluation time compared to the most compact Golomb-Rice bitwise compression schemes; and, even when an index fits entirely in memory, byte-aligned codes result in faster query evaluation than does an uncompressed index, emphasising that the cost of transferring data from memory to the CPU cache is less for an appropriately compressed index than for an uncompressed index. Moreover, byte-aligned schemes have only a modest space overhead:the most compact schemes result in indexes that are around 10% of the size of the collection, while a byte-aligned scheme is around 13%. We conclude that fast byte-aligned codes should be used to store integers in inverted lists.", "fulltext": "INTRODUCTION\nSearch engines have demanding performance requirements.\nUsers expect fast answers to queries, many queries must be\nprocessed per second, and the quantity of data that must\nbe searched in response to each query is staggering. The\ndemands continue to grow:the Google search engine, for\nexample, indexed around one billion documents a year ago\nand now manages more than double that figure\n1\n. 
Moreover,\nthe increasing availability and affordability of large storage\ndevices suggests that the amount of data stored online will\ncontinue to grow.\nInverted indexes are used to evaluate queries in all practical\nsearch engines [14]. Compression of these indexes has\nthree major benefits for performance. First, a compressed\nindex requires less storage space. Second, compressed data\nmakes better use of the available communication bandwidth;\nmore information can be transfered per second than when\nthe data is uncompressed. For fast decompression schemes,\nthe total time cost of transfering compressed data and sub-sequently\ndecompressing is potentially much less than the\ncost of transferring uncompressed data. Third, compression\nincreases the likelihood that the part of the index required\nto evaluate a query is already cached in memory, thus entirely\navoiding a disk access. Thus index compression can\nreduce costs in retrieval systems.\nWe have found that an uncompressed inverted index that\nstores the location of the indexed words in web documents\ntypically consumes more than 30% of the space required\nto store the uncompressed collection of documents. (Web\ndocuments often include a great deal of information that is\nnot indexed, such as HTML tags; in the TREC web data,\nwhich we use in our experiments, on average around half\nof each document is indexable text.) When the index is\ncompressed, the index size is reduced to between 10%15%\nof that required to store the uncompressed collection; this\nsize includes document numbers, in-document frequencies,\nand word positions within documents. If the index is too\nlarge to fit entirely within main memory, then querying the\nuncompressed index is slower:as we show later, it is up to\ntwice as slow as the fastest compressed scheme.\nIn this paper, we revisit compression schemes for the in-1\nSee http://www.google.com/\n222\nverted list component of inverted indexes. We also propose\na new method for decoding lists. There have been a great\nmany reports of experiments on compression of indexes with\nbitwise compression schemes [6, 8, 12, 14, 15], which use an\nintegral number of bits to represent each integer, usually\nwith no restriction on the alignment of the integers to byte\nor machine-word boundaries. We consider several aspects\nof these schemes:how to decode bitwise representations of\nintegers efficiently; how to minimise the operations required\nfor the most compact scheme, Golomb coding; and the relative\nperformance of Elias gamma coding, Elias delta coding,\nGolomb coding, and Rice coding for storing indexes.\nWe question whether bitwise compression schemes are the\nbest choice for storing lists of integers. As an alternative,\nwe consider bytewise integer compression schemes, which\nrequire that each integer is stored in an integral number of\nblocks, where each block is eight bits. The length of each\nstored integer can therefore be measured in an exact number\nof bytes. An additional restriction is to require that these\neight-bit blocks must align to machine-word or byte boundaries\n.\nWe propose and experimentally investigate several\nvariations of bytewise schemes.\nWe investigate the performance of different index compression\nschemes through experiments on large query sets\nand collections of Web documents. 
We report two surprising\nresults.\nFor a 20 gigabyte collection, where the index is several\ntimes larger than main memory, optimised bytewise\nschemes more than halve the average decoding time\ncompared to the fastest bitwise approach.\nFor a much smaller collection, where the index fits in\nmain memory, a bytewise compressed index can still\nbe processed faster than an uncompressed index.\nThese results show that effective use of communication bandwidths\nis important for not only disk-to-memory transfers\nbut also memory-to-cache transfers. The only disadvantage\nof bytewise compressed indexes is that they are up to 30%\nlarger than bitwise compressed indexes; the smallest bitwise\nindex is around 10% of the uncompressed collection size,\nwhile the bytewise index is around 13%.\nINVERTED INDEXES\nAn inverted index consists of two major components:the\nvocabulary of terms--for example the words--from the collection\n, and inverted lists, which are vectors that contain\ninformation about the occurrence of the terms [14].\nIn a basic implementation, for each term t there is an inverted\nlist that contains postings < f\nd,t\n, d > where f\nd,t\nis\nthe frequency f of term t in the ordinal document d. One\nposting is stored in the list for each document that contains\nthe term t. Inverted lists of this form--along with additional\nstatistics such as the document length l\nd\n, and f\nt\n, the number\nof documents that contain the term t--are sufficient to\nsupport ranked and Boolean query modes.\nTo support phrase querying or proximity querying, additional\ninformation must be kept in the inverted lists. Thus\ninverted list postings should be of the form\n< f\nd,t\n, d, [o\n0,d,t\n. . . o\nf\nd,t\n,d,t\n] >\nThe additional information is the list of offsets o; one offset\nis stored for each of the f\nd,t\noccurrences of term t in\ndocument d. Postings in inverted lists are usually ordered\nby increasing d, and the offsets likewise ordered within the\npostings by increasing o. This has the benefit that differences\nbetween values--rather than the raw values--can be\nstored, improving the compressibility of the lists.\nOther arrangements of the postings in lists are useful when\nlists are not necessarily completely processed in response to\na query. For example, in frequency-sorted indexes [9, 10]\npostings are ordered by f\nd,t\n, and in impact-ordered indexes\nthe postings are ordered by quantised weights [1]. These approaches\nalso rely on compression to help achieve efficiency\ngains, and the improvements to compression performance\nwe describe in this paper are as applicable to these methods\nas they are to the simple index representations we use as a\ntestbed for our compression methods.\nConsider an example inverted list with offsets for the term\n\"Matthew\":\n< 3, 7, [6, 51, 117] >< 1, 44, [12] >< 2, 117, [14, 1077] >\nIn this index, the terms are words, the offsets are word positions\nwithin the documents, and the lists are ordered by d.\nThis inverted list states that the term \"Matthew\" occurs 3\ntimes in document 7, at offsets 6, 51, and 117. It also occurs\nonce in document 44 at offset 12, and twice in document 117,\nat offsets 14 and 1077.\nRanked queries can be answered using the inverted index\nas follows. First, the terms in the user's query are located\nin the inverted index vocabulary. Second, the corresponding\ninverted lists for each term are retrieved from disk, and\nthen processed by decreasing f\nt\n. 
Third, for each posting in\neach inverted list, an accumulator weight A\nd\nis increased;\nthe magnitude of the increase is dependent on the similarity\nmeasure used, and can consider the weight w\nq,t\nof term t\nin the query q, the weight w\nd,t\nof the term t in the document\nd, and other factors. Fourth, after processing part [1,\n6] or all of the lists, the accumulator scores are partially\nsorted to identify the most similar documents. Last, for a\ntypical search engine, document summaries of the top ten\ndocuments are generated or retrieved and shown to the user.\nThe offsets stored in each inverted list posting are not used\nin ranked query processing.\nPhrase queries require offsets and that a given sequence of\nwords be contiguous in a matching document. For example,\nconsider a combined ranked and phrase query:\n\"Matthew Richardson\" Richmond\nTo evaluate such a query, the same first two steps as for\nranked querying are applied. Then, instead of accumulating\nweights, it is necessary to construct a temporary inverted list\nfor the phrase, by fetching the inverted list of each of the\nindividual terms and combining them. If the inverted list for\n\"Matthew\" is as above and the inverted list for \"Richardson\"\nis\n< 1, 7, [52] > < 2, 12, [1, 4] > < 1, 44, [83] >\nthen both words occur in document 7 and as an ordered\npair. Only the word \"Richardson\" is in document 12, both\nwords occur in document 44 but not as a pair, and only\n\"Matthew\" occurs in document 117. The list for \"Matthew\nRichardson\" is therefore\n< 1, 7, [51] >\n223\nAfter this, the ranking process is continued from the third\nstep, where the list for the term \"Richmond\" and the newly\ncreated list are used to adjust accumulator weights. Phrase\nqueries can involve more than two words.\nCOMPRESSING INVERTED INDEXES\nSpecial-purpose integer compression schemes offer both\nfast decoding and compact storage of inverted lists [13, 14].\nIn this section, we consider how inverted lists are compressed\nand stored on disk. We limit our discussions here to the\nspecial-purpose integer compression techniques that have\npreviously been shown to be suitable for index compression,\nand focus on their use in increasing the speed of retrieval\nsystems.\nWithout compression, the time cost of retrieving inverted\nlists is the sum of the time taken to seek for and then retrieve\nthe inverted lists from disk into memory, and the time taken\nto transfer the lists from memory into the CPU cache before\nthey are processed. The speed of access to compressed\ninverted lists is determined by two factors:first, the com-putational\nrequirements for decoding the compressed data\nand, second, the time required to seek for and retrieve the\ncompressed data from disk and to transfer it to the CPU\ncache before it is decoded. For a compression scheme to\nallow faster access to inverted lists, the total retrieval time\nand CPU processing costs should be less than the retrieval\ntime of the uncompressed representation. However, a third\nfactor makes compression attractive even if CPU processing\ncosts exceed the saving in disk transfer time:compressing\ninverted lists increases the number of lists that can be\ncached in memory between queries, so that in the context of\na stream of queries use of compression reduces the number\nof disk accesses. 
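To make the postings representation and the phrase-merge step described above concrete, the following sketch in Python merges the example lists for "Matthew" and "Richardson". It is an illustration only, not the implementation used in our experiments, and the function and variable names are ours.

def phrase_merge(list_a, list_b):
    # Each posting is (f_dt, d, offsets), with postings ordered by d and
    # offsets ordered increasingly, as in the inverted lists shown above.
    # A phrase match requires the same document d and an occurrence of the
    # second word at the position immediately after the first word.
    offsets_b_by_doc = {d: set(offsets) for (_, d, offsets) in list_b}
    result = []
    for (_, d, offsets_a) in list_a:
        offsets_b = offsets_b_by_doc.get(d)
        if offsets_b is None:
            continue
        phrase_offsets = [o for o in offsets_a if o + 1 in offsets_b]
        if phrase_offsets:
            result.append((len(phrase_offsets), d, phrase_offsets))
    return result

matthew = [(3, 7, [6, 51, 117]), (1, 44, [12]), (2, 117, [14, 1077])]
richardson = [(1, 7, [52]), (2, 12, [1, 4]), (1, 44, [83])]
print(phrase_merge(matthew, richardson))  # [(1, 7, [51])]

The resulting temporary list is then used, as described above, in place of an ordinary inverted list for the remainder of query evaluation.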
It is therefore important that a compression\nscheme be efficient in both decompression CPU costs\nand space requirements.\nThere are two general classes of compression scheme that\nare appropriate for storing inverted lists.\nVariable-bit or\nbitwise schemes store integers in an integral number of bits.\nWell-known bitwise schemes include Elias gamma and delta\ncoding [3] and Golomb-Rice coding [4]. Bytewise schemes\nstore an integer in an integral number of blocks, where a\nblock is eight bits in size; we distinguish between blocks and\nbytes here, since there is no implied restriction that a block\nmust align to a physical byte-boundary. A simple bytewise\nscheme is variable-byte coding [2, 13]; uncompressed integers\nare also stored in an integral number of blocks, but we\ndo not define them as bytewise schemes since, on most architectures\n, an integer has a fixed-size representation of four\nbytes. In detail, these schemes are as follows.\nElias coding [3] is a non-parameterised bitwise method of\ncoding integers. (Non-parameterised methods use static or\nfixed codes to store integers.) The Elias gamma code represents\na positive integer k by 1 + log\n2\nk stored as a unary\ncode, followed by the binary representation of k without its\nmost significant bit. Using Elias gamma coding, small integers\nare compactly represented; in particular, the integer 1\nis represented as a single 1-bit. Gamma coding is relatively\ninefficient for storing integers larger than 15 [13].\nElias delta codes are suited to coding larger integers, but\nare inefficient for small values. For an integer k, a delta\ncode stores the gamma code representation of 1 + log\n2\nk ,\nand then the binary representation of k without its most\nsignificant bit.\nGolomb-Rice bitwise coding [4] has been shown to offer\nmore compact storage of integers and faster retrieval than\nthe Elias codes [13]; indeed, it is bitwise optimal under the\nassumption that the set of documents with a given term is\nrandom. The codes are adapted to per-term likelihoods via\na parameter that is used to determine the code emitted for\nan integer. In many cases, this parameter must be stored\nseparately using, for example, an Elias code. For coding of\ninverted lists, a single parameter is used for all document\nnumbers in a postings list, but each posting requires a parameter\nfor its offsets. The parameters can be calculated as\nthe lists are decoded using statistics stored in memory and\nin the lists, as we discuss later.\nCoding of an integer k using Golomb codes with respect\nto a parameter b is as follows. The code that is emitted is in\ntwo parts:first, the unary code of a quotient q is emitted,\nwhere q = (k - 1)/b + 1; second, a binary code is emitted\nfor the remainder r, where r = k - q b - 1. The number\nof bits required to store the remainder r is either log\n2\nb or\nlog\n2\nb . To retrieve the remainder, the value of the \"toggle\npoint\" t = 1 ((log\n2\nk)+1))-b is required, where\nindicates\na left-shift operation. After retrieving\nlog\n2\nb bits of the\nremainder r, the remainder is compared to t. If r > t, then\none additional bit of the remainder must be retrieved. It\nis generally thought that caching calculated values of log\n2\nb\nis necessary for fast decoding, with a main-memory penalty\nof having to store the values. 
However, as we show later,\nwhen the standard log library function is replaced with a\nfast bit-shifting version, caching is unnecessary.\nRice coding is a variant of Golomb coding where the value\nof b is restricted to be a power of 2. The advantage of this\nrestriction is that there is no \"toggle point\" calculation required\n, that is, the remainder is always stored in exactly\nlog\n2\nb bits. The disadvantage of this scheme is that the\nchoice of value for b is restricted and, therefore, the compression\nis slightly less effective than that of Golomb coding.\nFor compression of inverted lists, a value of b is required.\nWitten et al. [14] report that for cases where the probability\nof any particular integer value occurring is small--which is\nthe usual case for document numbers d and offsets o--then\nb can be calculated as:\nb = 0.69 mean(k)\nFor each inverted list, the mean value of document numbers\nd can be approximated as k = N/f\nt\nwhere N is the number\nof documents in the collection and f\nt\nis the number of postings\nin the inverted list for term t [14]. This approach can\nalso be extended to offsets:the mean value of offsets o for\nan inverted list posting can be approximated as k = l\nd\n/f\nd,t\nwhere l\nd\nis the length of document d and f\nd,t\nis the number\nof offsets of term t within that document. As the statistics\nN, f\nt\n, and l are often available in memory, or in a simple\nauxiliary structure on disk, storage of b values is not required\nfor decoding; approximate values of l can be stored in memory\nfor compactness [7], but use of approximate values has\nlittle effect on compression effectiveness as it leads to only\nsmall relative errors in computation of b.\nIn bytewise coding an integer is stored in an integral number\nof eight-bit blocks. For variable-byte codes, seven bits\nin each block are used to store a binary representation of\nthe integer k. The remaining bit is used to indicate whether\nthe current block is the final block for k, or whether an additional\nblock follows. Consider an example of an integer k\n224\nin the range of 2\n7\n= 128 to 2\n14\n= 16, 384. Two blocks are\nrequired to represent this integer:the first block contains\nthe seven least-significant bits of the integer and the eighth\nbit is used to flag that another block follows; the second\nblock contains the remaining most-significant bits and the\neighth bit flags that no further blocks follow. We use the\nconvention that the flag bit is set to 1 in the final block and\n0 otherwise.\nCompressing an inverted index, then, involves choosing\ncompression schemes for the three kinds of data that are\nstored in a posting:a document number d, an in-document\nfrequency f\nd,t\n, and a sequence of offsets o. A standard choice\nis to use Golomb codes for document numbers, gamma codes\nfor frequencies, and delta codes for offsets [14]. (We explore\nthe properties of this choice later.) In this paper, we describe\nsuch a choice as a GolD-GamF-DelO index.\n3.1\nFast Decoding\nWe experiment with compression of inverted lists of postings\nthat contain frequencies f\nd,t\n, documents numbers d, and\noffsets o. For fast decompression of these postings, there are\ntwo important considerations:first, the choice of compression\nscheme for each component of the posting; and, second,\nmodifications to each compression scheme so that it is both\nfast and compatible with the schemes used for the other\ncomponents. In this section, we outline the optimisations\nwe use for fast decompression. 
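Before turning to the specific optimisations, the basic variable-byte scheme described above can be sketched as follows. This is a minimal Python illustration, not the code used in our experiments; the choice of placing the least-significant seven bits in the first block is ours, while the flag-bit convention follows the description above.

def vbyte_encode(k):
    # Seven data bits per eight-bit block, least-significant bits first
    # (an assumption of this sketch), with the flag bit -- here the top bit
    # of each block -- set to 1 only in the final block.
    blocks = bytearray()
    while True:
        low7 = k & 0x7F
        k >>= 7
        if k == 0:
            blocks.append(low7 | 0x80)   # final block: flag bit set
            return bytes(blocks)
        blocks.append(low7)              # another block follows: flag clear

def vbyte_decode(data, pos=0):
    # Decode one integer starting at byte offset pos; return (value, new_pos).
    value, shift = 0, 0
    while True:
        block = data[pos]
        pos += 1
        value |= (block & 0x7F) << shift
        shift += 7
        if block & 0x80:                 # flag bit marks the last block
            return value, pos

assert vbyte_decode(vbyte_encode(130)) == (130, 2)  # two blocks, since 130 >= 2**7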
Our code is publically available\nand distributed under the GNU public licence.\n2\nBitwise Compression\nWe have experimented with a range of variations of bitwise\ndecompression schemes. Williams and Zobel [13] reported\nresults for several efficient schemes, where vectors that contain\ncompressed integers are retrieved from disk and subse-quently\ndecoded.\n3\nIn their approach, vector decoding uses\nbitwise shift operations, bit masks, multiplication, subtraction\n, and function calls to retrieve sequences of bits that\nspan byte boundaries. In our experiments on Intel Pentium-based\nservers running the Linux operating system, we have\nfound that bitwise shift operations are usually faster than\nbit masks, and that the function calls are slow. By opti-mising\nour code to use bitwise shifts and to remove nested\nfunction calls, we have found that the overall time to decode\nvectors--regardless of the compression scheme used--is on\naverage around 60% of that using the code of Williams and\nZobel.\nOther optimisations that are specific to Golomb-Rice coding\nare also of value. Golomb-Rice decoding requires that\nlog\n2\nb is calculated to determine the number of remainder\nbits to be retrieved.\nIt is practicable to explicitly cache\nvalues of log\n2\nb in a hash table as they are calculated, or\nto pre-calculate all likely-to-be-used values as the retrieval\nquery engine is initialised. This saves recalculation of logarithms\nwhen a value of b is reused in later processing, with\nthe penalty of additional memory requirements for storing\nthe lookup table.\nWe measured the performance of Golomb coding with and\n2\nThe\nsearch\nengine\nused\nin\nthese\nexperiments\nand\nour\ninteger\ncompression\ncode\nis\navailable\nfrom\nhttp://www.seg.rmit.edu.au/\n3\nThe code used by Williams and Zobel in their experiments\nis available from http://www.cs.rmit.edu.au/\n~hugh/software/\nwithout caching. Timings are average elapsed query evaluation\ncost to process index information for 25,000 queries on a\n9.75 gigabyte (Gb) collection of Web data [5] using our prototype\nretrieval engine on a GolD-GamF-GolO index (that\nis, Golomb document numbers, gamma frequencies, Golomb\noffsets); we discuss collection statistics and experimental design\nfurther in Section 4. The cache lookup table size is\nunrestricted.\nWe found that, without caching of log\n2\nb values, the average\nquery evaluation time is 0.961 seconds. Caching of\nlog\n2\nb values as they are calculated during query processing\nroughly halves the average query evaluation time, to 0.494\nseconds. Pre-calculating and storing the values offers almost\nno benefit over caching during query processing, reducing\nthe time to 0.491 seconds; this reflects that only limited\nb values are required during query evaluation. Caching of\ntoggle points yields 0.492 seconds. As toggle points are calculated\nusing bitwise shifts, addition, and subtraction, this\nis further evidence that bitwise shifts are inexpensive on our\nhardware.\nAn alternative approach to managing log computations\nis to replace the standard library log function with a loop\nthat determines\nlog\n2\nb using bitwise shifts and equality\ntests; the logarithm value can be determined by locating\nthe position of the most-significant 1-bit in b. 
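A sketch of this approach is given below, using a shift-based integer logarithm together with a standard formulation of Golomb coding (a unary quotient followed by a truncated-binary remainder controlled by the toggle point). This is our illustration working on lists of bits, not the released code, and the exact bit-level layout is an assumption.

def floor_log2(b):
    # Position of the most-significant 1-bit of b (b >= 1), found with
    # shifts and equality tests only, as described above.
    log = 0
    while (b >> (log + 1)) != 0:
        log += 1
    return log

def golomb_encode(k, b):
    # Encode k >= 1 with parameter b as a list of bits.
    q, r = divmod(k - 1, b)
    bits = [1] * q + [0]                 # unary quotient
    low = floor_log2(b)
    t = (1 << (low + 1)) - b             # toggle point
    if r < t:
        width = low                      # short remainder
    else:
        r += t                           # long remainder needs one extra bit
        width = low + 1
    bits += [(r >> i) & 1 for i in range(width - 1, -1, -1)]
    return bits

def golomb_decode(bits, b):
    # Decode one integer from an iterator of bits, mirroring golomb_encode.
    q = 0
    while next(bits) == 1:
        q += 1
    low = floor_log2(b)
    t = (1 << (low + 1)) - b
    r = 0
    for _ in range(low):
        r = (r << 1) | next(bits)
    if r >= t:                           # one extra remainder bit is needed
        r = ((r << 1) | next(bits)) - t
    return q * b + r + 1

for k in range(1, 200):
    for b in (1, 3, 4, 6, 16):
        assert golomb_decode(iter(golomb_encode(k, b)), b) == k

When b is restricted to a power of two, as in Rice coding, the remainder never exceeds the toggle point, so the extra-bit test in the decoder is never triggered.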
We found that this led to slight additional improvements in the speed of decoding Golomb codes, outperforming explicit caching. All Golomb-Rice coding results reported in this paper are computed in this way.

Bytewise Compression
We have experimented with improvements to variable-byte coding. Unlike in bitwise coding, we have found that masking and shifting are equally as fast because of the large number of shifts required. We use shifts in our experiments.

Perhaps the most obvious way to increase the speed of variable-byte decoding is to align the eight-bit blocks to byte boundaries. Alignment with byte boundaries limits the decoding to only one option: the flag bit indicating if this is the last byte in the integer is always the most significant bit, and the remaining seven bits contain the value. Without byte alignment, additional conditional tests and operations are required to extract the flag bit, and the seven-bit value can span byte boundaries. We would expect that byte alignment would improve the speed of decoding variable-byte integers.

Figure 1 shows the effect of byte alignment of variable-byte integers. In this experiment, variable-byte coding is used to store the offsets o in each inverted list posting. The optimised Golomb coding scheme described in the previous section is used to code document numbers d and Elias gamma coding is used to store the frequencies f_d,t. We refer to this as a GolD-GamF-VbyO index.

The graph at the left of Figure 1 shows total index size as a percentage of the uncompressed collection being indexed. The first bar shows that, without byte alignment, the GolD-GamF-VbyO index requires almost 13% of the space required by the collection. The second bar shows that padding to byte alignment after storing the Gamma-coded f_d,t values increases the space requirement to just over 13.5% of the collection size. We discuss the other schemes in this figure later in this section.

[Figure 1 (left: index size as a percentage of the collection; right: average query time in seconds): Variable-byte schemes for compressing offsets in inverted lists in a GolD-GamF-VbyO index. Four different compression schemes are shown and, for each, both original and scanning decoding are shown. Scanning decoding can be used when offsets are not needed for query resolution.]

The graph at the right of Figure 1 shows elapsed query evaluation times using different index designs. Timings are the average elapsed query evaluation cost to process the inverted lists for 25,000 queries on a 20 Gb collection of Web [5] data using our prototype retrieval engine. Queries are processed as conjunctive Boolean queries. The first bar shows that the average time is around 0.7 seconds for the GolD-GamF-VbyO index without byte alignment. The second bar shows that the effect of byte alignment is a 25% reduction in average query time. Therefore, despite the small additional space requirement, byte-alignment is beneficial when storing variable-byte integers.

A second optimisation to variable-byte coding is to consider the query mode when processing the index.
For querying\nthat does not use offsets--such as ranked and Boolean\nquerying--decoding of the offsets in each posting is unnecessary\n. Rather, all that is required are the document numbers\nd and document frequencies f\nd,t\n. An optimisation is therefore\nto only examine the flag bit of each block and to ignore\nthe remaining seven bits that contain the value. The value\nof f\nd,t\nindicates the number of offsets o stored in the posting\n. By examining flag bits until f\nd,t\n1-bits are processed,\nit is possible to bypass the offsets with minimal processing.\nWe call this approach scanning.\nScanning can also be used in query modes that do require\noffset decoding. As we discussed earlier, phrase querying\nrequires that all terms are present in a matching document.\nAfter processing the inverted list for the first term that is\nevaluated in a phrase query, a temporary inverted list of\npostings is created. This temporary list has a set D of documents\nthat contain the first term. When processing the\nsecond term in the query, a second set of document numbers\nD are processed. Offsets for the posting associated\nwith document d D can be scanned, that is, passed over\nwithout decoding, if d is not a member of D. (At the same\ntime, document numbers in D that are not in D are discarded\n.)\nWe show the performance of scanning in Figure 1. The\nfifth and sixth bars show how scanning affects query evaluation\ntime for variable-bytes that are either unaligned and\naligned to byte boundaries in the GolD-GamF-VbyO index.\nScanning removes the processing of seven-bit values. This\nreduces the cost of retrieving unaligned variable-bytes to less\nthan that of the aligned variable-byte schemes; the small\nspeed advantage is due to the retrieval of smaller lists in the\nunaligned version. Scanning has little effect on byte-aligned\nvariable bytes, reflecting that the processing of seven-bit values\nusing shift operations has a low cost. Overall, however,\nbyte-alignment is preferred since the decoding cost of offsets\nis expensive in an unaligned scheme.\nA third optimisation is an approach we call signature\nblocks, which are a variant of skipping. Skipping is the approach\nof storing additional integers in inverted lists that\nindicate how much data can be skipped without any processing\n[14]. Skipping has the disadvantage of an additional\nstorage space requirement, but has been shown to offer substantial\nspeed improvements [14]. A signature block is an\neight-bit block that stores the flag bits of up to eight blocks\nthat follow. For example, a signature block with the bit-string\n11100101 represents that five integers are stored in\nthe eight following eight-bit blocks:the string 111 represents\nthat the first three blocks store one integer each; the\nstring 001 represents that the fourth integer is stored over\nthree blocks; and, the string 01 represents that the final integer\nis stored over two blocks. As all flag bits are stored\nin the signature block, the following blocks use all eight bits\nto store values, rather the seven-bit scheme in the standard\nvariable-byte integer representation.\nThe primary use of signature blocks is skipping. To skip\noffsets, f\nd,t\noffset values must be retrieved but not processed.\nBy counting the number of 1-bits in a signature block, the\nnumber of integers stored in the next eight blocks can be\ndetermined. If the value of f\nd,t\nexceeds this, then a second\nor subsequent signature block is processed until f\nd,t\noffsets\nhave been skipped. 
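The skipping step just described can be sketched as follows. This is our illustration only; the order in which flag bits are packed into the signature block is an assumption, as is the byte-aligned layout.

def skip_offsets(data, pos, f_dt):
    # data: the byte-aligned inverted list; pos: position of the first
    # signature block for this posting's offsets; f_dt: number of offsets.
    # Each signature block holds the flag bits of up to eight value blocks
    # that follow it, with a 1-bit marking the final block of an integer.
    remaining = f_dt
    while remaining > 0:
        signature = data[pos]
        pos += 1
        used = 0                       # value blocks covered so far
        for bit in range(7, -1, -1):   # assumed: flags packed high bit first
            used += 1
            if (signature >> bit) & 1:
                remaining -= 1
                if remaining == 0:
                    break
        pos += used
    return pos                         # first byte after the skipped offsets

Counting the 1-bits of a signature block gives the number of complete integers covered by the following blocks, which is what allows the value blocks to be bypassed without decoding them.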
The last signature block is, on average,\nhalf full. We have found that bitwise shifts are faster than\na lookup table for processing of signature blocks.\nThe speed and space requirements are also shown in Figure\n1. Not surprisingly, the signature block scheme requires\nmore space than the previous variable-byte schemes. This\nspace requirement is further increased if byte alignment of\nblocks is enforced. In terms of speed, the third and fourth\nbars in the right-hand histogram show that signature blocks\nare slower than the original variable-byte schemes when offsets\nare processed in the GolD-GamF-VbyO index. These\nresults are not surprising:signature blocks are slow to process\nwhen they are unaligned, and the byte-aligned version\nis slow because processing costs are no less than the original\nvariable-byte schemes and longer disk reads are required.\nAs shown by the seventh bar, when offsets are skipped the\nunaligned signature block scheme is slower than the original\n226\nvariable-byte scheme. The savings of skipping with signature\nblocks are negated by more complex processing when\nblocks are not byte-aligned. In contrast, the right-most bar\nshows that the byte-aligned signature block scheme with\nskipping is slightly faster on average than all other schemes.\nHowever, we conclude--given the compactness of the index\nand good overall performance--that the best all-round\nscheme is the original variable-byte scheme with byte alignment\n. Therefore, all variable-byte results reported in the\nSection 4 use the original byte-aligned variable-byte scheme\nwith scanning.\nCustomised Compression\nCombinations of bitwise and bytewise compression schemes\nare also possible. The aim of such approaches is to combine\nthe fast decoding of bytewise schemes with the compact\nstorage of bitwise schemes. For example, a simple and\nefficient custom scheme is to store a single bit that indicates\nwhich of two compression schemes is used, and then to store\nthe integer using the designated compression scheme. We\nhave experimented with several approaches for storing offsets\n. The simplest and most efficient approach we tested\nis as follows:when f\nd,t\n= 1, we store a single bit indicating\nwhether the following offset is stored as a bitwise Elias\ndelta code or as a bytewise eight-bit binary representation.\nWhen storing values, we use Elias delta coding if the value\nis greater than 256 and the binary scheme otherwise. This\nscheme has the potential to reduce space because in the median\nposting f\nd,t\nis 1 and the average offset is around 200.\nSelective use of a fixed-width representation can save storage\nof the 6-bit prefix used to indicate magnitude in the\ncorresponding delta code.\nWe report the results with this scheme, which we call custom\n, in the next section. This was the fastest custom scheme\nwe tested. Other approaches we tried included switching between\nvariable-byte and bitwise schemes, using the custom\nscheme when f\nd,t\nis either 1 or 2, and other simple variations\n. We omit results for these less successful approaches.\nRESULTS\nAll experiments described in this paper are carried out on\nan Intel Pentium III based machine with 512 Mb of main-memory\nrunning the Linux operating system. Other processes\nand disk activity was minimised during timing experiments\n, that is, the machine was under light-load.\nA theme throughout these experiments and greatly impacting\non the results is the importance of caching. On a\nmodern machine, caching takes place at two levels. 
One level is the caching of recently-accessed disk blocks in memory, a process that is managed by the operating system. When the size of the index significantly exceeds memory capacity, to make space to fetch a new inverted list, the blocks containing material that has not been accessed for a while must be discarded. One of the main benefits of compression is that a much greater volume of index information can be cached in memory. For this reason, we test our compression schemes with streams of 10,000 or 25,000 queries extracted from a query log [11], where the frequency distribution of query terms leads to beneficial use of caching. Again, queries are processed as conjunctive Boolean queries.

The other level at which caching takes place is the retention in the CPU cache of small blocks of data, typically of 128 bytes, recently accessed from memory. CPU caching is managed in hardware. In current desktop computers, as many as 150 instruction cycles are required to fetch a single machine-word into the CPU. At a coarser level, compression of postings lists means that the number of fetches from memory to cache during decompression is halved.

[Figure 2 (two bar charts: index size as a percentage of the collection, and average query time in seconds x 10^-2, for eleven compressed index types including the custom scheme, plus an uncompressed index): Performance of integer compression schemes for offsets in inverted lists, in an index with Golomb document numbers and gamma frequencies. In this experiment, the index fits in main memory. A 500 Mb collection is used, and results are averaged over 10,000 queries.]

Small collection
Figure 2 shows the relative performance of the integer compression schemes we have described for storing offsets, on a 500 Mb collection of 94,802 Web documents drawn from the TREC Web track data [5]; timing results are averaged over 10,000 queries drawn from an Excite search engine query log [11]. The index contains 703,518 terms.

These results show the effect of varying the coding scheme used for document numbers d, frequencies f_d,t, and offsets o. In all cases where both bitwise and variable-byte codes are used, the bitwise codes are padded to a byte boundary before a variable-byte code is emitted; thus, for example, in a GolD-GamF-VbyO index, there is padding between the gamma frequency and the sequence of variable-byte offsets. Not all code combinations are shown; for example, given that the speed advantage of using variable-byte document numbers is small, we have not reported results for index types such as VbyD-GamF-RicD, and due to the use of padding a choice such as VbyD-GamF-VbyD. Given the highly skew distribution of f_d,t values, Golomb or Rice are not suitable coding methods, so these have not been tried.

In the "no compression" case, fixed-width fields are used to store postings. Document numbers are stored in 32 bits, frequencies in 16 bits, and offsets in 24 bits; these were the smallest multiples of bytes that would not overflow for reasonable assumptions about data properties.

The relative performance of Elias delta and gamma, Rice, and Golomb coding is as expected.
The non-parameterised Elias coding schemes result in larger indexes than the parameterised Golomb-Rice schemes and, in turn, in slower query evaluation. The average difference between offsets is greater than 15, making Elias delta coding more appropriate overall than gamma coding; the latter is both slower and less space-efficient.

On the lower graph in Figure 2, comparing the fourth and fifth columns and comparing the fifth and eighth columns, it can be seen that the choice of Golomb or Rice codes for either offsets or document numbers has virtually no impact on index size. Comparing the fifth and eighth columns on the upper graph, the schemes yield similar decoding times for document numbers. However, Rice codes are markedly faster for decoding offsets, because no toggle point calculation is required. Among the bitwise schemes, we conclude that Rice coding should be used in preference to other schemes for coding document numbers and offsets.

The most surprising result is the effect of using the optimised byte-boundary variable-byte scheme for coding offsets. Despite the variable-byte index being 26% larger than the corresponding Rice-coded index, the overall query evaluation time is 62% less. Further speed gains are given by coding all values in variable-byte codes. Indeed, variable-byte decoding is faster even than processing uncompressed lists. This result is remarkable: the cost of transferring variable-byte coded lists from memory to the CPU cache and then decoding the lists is less than the cost of transferring uncompressed lists. To our knowledge, this is the first practical illustration that compression improves the efficiency of an in-memory retrieval system. We conclude from this that variable-byte coding should be used to store offsets to reduce both disk retrieval and memory retrieval costs.

In experiments with integers, Williams and Zobel found that variable-byte coding is faster than the bitwise schemes for storing large integers of the magnitude stored in inverted lists [13]. Our result confirms this observation for retrieval systems, while also showing that the effect extends to fast retrieval from memory and that improvements to variable-byte coding can considerably increase decoding speed.

The custom scheme uses both Elias delta and a binary bytewise scheme, reducing query evaluation to around 58% of the time for the Elias delta scheme. However, the custom scheme is almost twice as slow as the variable-byte scheme and, therefore, has little benefit in practice.

Large collection
Figure 3 shows the results of a larger experiment with an index that does not fit within the main-memory of our machine.

[Figure 3 (two bar charts: index size as a percentage of the collection, and average query time in seconds, for eight compressed index types plus an uncompressed index): The performance of integer compression schemes for compressing offsets in inverted lists, with Golomb-coded document numbers and gamma-coded offsets. In this experiment, the index is several times larger than main memory. A 20 Gb collection is used, and results are averaged over 25,000 queries.]

Exactly the same index types are tried as for the experiment above.
A 20 Gb collection of 4,014,894 Web documents\ndrawn from the TREC Web track data [5] is used\nand timing results are averaged over 25,000 Boolean queries\ndrawn from an Excite search engine query log [11].\nThe\nindex contains 9,574,703 terms. We include only selected\nschemes in our results.\nWe again note that we have not used heuristics to reduce\nquery evaluation costs such as frequency-ordering or early\ntermination. Indeed, we have not even used stopping; with\nstopwords removed, query times are greatly impoved. Our\naim in this research is to measure the impact on index decoding\ntime of different choices of compression method, not\nto establish new benchmarks for query evaluation time. Our\nimprovements to compression techniques could, however, be\nused in conjunction with the other heuristics, in all likelihood\nfurther reducing query evaluation time compared to\nthe best times reported previously.\n228\nThe relative speeds of the bitwise Golomb, Elias delta, and\nvariable-byte coded offset schemes are similar to that of our\nexperiments with the 500 Mb collection. Again, variable-byte\ncoding results in the fastest query evaluation. Perhaps\nunsurprisingly given the results described above, an uncompressed\nindex that does not fit in main-memory is relatively\nmuch slower than the variable-byte scheme; the disk transfer\ncosts are a larger fraction of the overall query cost when\nthe index does not fit in memory, and less use can be made\nof the memory cache. Indexes with variable-byte offsets are\ntwice as fast as indexes with Golomb, delta, or gamma offsets\n, and one-and-a-half times as fast as indexes with Rice\noffsets. VbyD-VbyF-VbyO indexes are twice as fast as any\nindex type with non-variable-byte offsets.\nIn separate experiments we have observed that the gains\ndemonstrated by compression continue to increase with collection\nsize, as the proportion of the index that can be held\nin memory declines. Despite the loss in compression with\nvariable-byte coding, indexes are still less than one-seventh\nof the size of the indexed data, and the efficiency gains are\nhuge.\nCONCLUSIONS\nCompression of inverted lists can significantly improve the\nperformance of retrieval systems. We have shown that an efficiently\nimplemented variable-byte bytewise scheme results\nin query evaluation that is twice as fast as more compact\nbitwise schemes. Moreover, we have demonstrated that the\ncost of transferring data from memory to the CPU cache\ncan also be reduced by compression:when an index fits in\nmain memory, the transfer of compressed data from memory\nto the cache and subsequent decoding is less than that\nof transferring uncompressed data. Using byte-aligned coding\n, we have shown that queries can be run more than twice\nas fast as with bitwise codes, at a small loss of compression\nefficiency. These are dramatic gains.\nModern computer architectures create opportunities for\ncompression to yield performance advantages.\nOnce, the\nmain benefits of compression were to save scarce disk space\nand computer-to-computer transmission costs. An equally\nimportant benefit now is to make use of the fact that the\nCPU is largely idle. Fetching a single byte from memory involves\na delay of 12 to 150 CPU cycles; a fetch from disk involves\na delay of 10,000,000 cycles. Compression can greatly\nreduce the number of such accesses, while CPU time that\nwould otherwise be unused can be spent on decoding. 
With\nfast decoding, overall costs are much reduced, greatly increasing\nquery evaluation speed. In current computers such\narchitecture considerations are increasingly important to development\nof new algorithms for query processing.\nPoor\ncaching has been a crucial shortcoming of existing algorithms\ninvestigated in this research.\nThere are several possible extensions to this work. We\nplan to investigate nibble-coding, a variant of variable-byte\ncoding where two flag bits are used in each variable-byte\nblock. It is likely that this approach may improve the performance\nof signature blocks. We will also experiment with\nphrase querying in practice and to explore the average query\nevaluation speed when partial scanning is possible.\nREFERENCES\n[1] V. Anh, O. de Kretser, and A. Moffat. Vector-Space\nranking with effective early termination. In W. Croft,\nD. Harper, D. Kraft, and J. Zobel, editors, Proc.\nACM-SIGIR International Conference on Research\nand Development in Information Retrieval, pages\n3542, New York, Sept. 2001.\n[2] E. de Moura, G. Navarro, N. Ziviani, and\nR. Baeza-Yates. Fast and flexible word searching on\ncompressed text. ACM Transactions on Information\nSystems, 18(2):113139, 2000.\n[3] P. Elias. Universal codeword sets and representations\nof the integers. IEEE Transactions on Information\nTheory, IT-21(2):194203, Mar. 1975.\n[4] S. Golomb. Run-length encodings. IEEE Transactions\non Information Theory, IT12(3):399401, July 1966.\n[5] D. Hawking, N. Creswell, and P. Thistlewaite.\nOverview of TREC-7 very large collection track. In\nE. Voorhees and D. Harman, editors, Proc. Text\nRetrieval Conference (TREC), pages 91104,\nWashington, 1999. National Institute of Standards\nand Technology Special Publication 500-242.\n[6] A. Moffat and J. Zobel. Self-indexing inverted files for\nfast text retrieval. ACM Transactions on Information\nSystems, 14(4):349379, Oct. 1996.\n[7] A. Moffat, J. Zobel, and R. Sacks-Davis.\nMemory-efficient ranking. Information Processing &\nManagement, 30(6):733744, 1994.\n[8] G. Navarro, E. de Moura, M. Neubert, N. Ziviani, and\nR. Baeza-Yates. Adding compression to block\naddressing inverted indexes. Information Retrieval,\n3(1):4977, 2000.\n[9] M. Persin. Document filtering for fast ranking. In\nW. Croft and C. van Rijsbergen, editors, Proc.\nACM-SIGIR International Conference on Research\nand Development in Information Retrieval, pages\n339348, Dublin, Ireland, 1994.\n[10] M. Persin, J. Zobel, and R. Sacks-Davis. Filtered\ndocument retrieval with frequency-sorted indexes.\nJournal of the American Society for Information\nScience, 47(10):749764, 1996.\n[11] A. Spink, D. Wolfram, B. J. Jansen, and T. Saracevic.\nSearching the web:The public and their queries.\nJournal of the American Society for Information\nScience, 52(3):226234, 2001.\n[12] A. Vo and A. Moffat. Compressed inverted files with\nreduced decoding overheads. In R. Wilkinson,\nB. Croft, K. van Rijsbergen, A. Moffat, and J. Zobel,\neditors, Proc. ACM-SIGIR International Conference\non Research and Development in Information\nRetrieval, pages 290297, Melbourne, Australia, July\n1998.\n[13] H. Williams and J. Zobel. Compressing integers for\nfast file access. Computer Journal, 42(3):193201,\n1999.\n[14] I. Witten, A. Moffat, and T. Bell. Managing\nGigabytes: Compressing and Indexing Documents and\nImages. Morgan Kaufmann Publishers, Los Altos, CA\n94022, USA, second edition, 1999.\n[15] N. Ziviani, E. de Moura, G. Navarro, and\nR. Baeza-Yates. 
Compression:A key for\nnext-generation text retrieval systems. IEEE\nComputer, 33(11):3744, Nov. 2000.\n229\n", "keywords": "Variable byte;Decoding;Efficiency;integer coding;Bytewise compression;Search engine;retrieval efficiency;Integer Compression;Inverted indexes;index compression;Optimisation;Compression;Inverted index;Document retrieval"} {"name": "54", "title": "Computing Consistent Query Answers using Conflict Hypergraphs", "abstract": "A consistent query answer in a possibly inconsistent database is an answer which is true in every (minimal) repair of the database. We present here a practical framework for computing consistent query answers for large, possibly inconsistent relational databases. We consider relational algebra queries without projection , and denial constraints. Because our framework handles union queries, we can effectively (and efficiently) extract indefinite disjunctive information from an inconsistent database. We describe a number of novel optimization techniques applicable in this context and summarize experimental results that validate our approach.", "fulltext": "INTRODUCTION\nTraditionally, the main role of integrity constraints in\ndatabases was to enforce consistency.\nThe occurrence of\nintegrity violations was prevented by DBMS software. However\n, while integrity constraints continue to express important\nsemantic properties of data, enforcing the constraints\nhas become problematic in current database applications.\nFor example, in data integration systems integrity violations\nmay be due to the presence of multiple autonomous\ndata sources. The sources may separately satisfy the constraints\n, but when they are integrated the constraints may\nnot hold. Moreover, because the sources are autonomous,\nthe violations cannot be simply fixed by removing the data\ninvolved in the violations.\nExample\n1. Let Student be a relation schema with the\nattributes Name and Address and the key functional dependency\nN ame Address. Consider the following instance\nof Student:\nThe first two tuples may come from different data sources,\nso it may be impossible or impractical to resolve the inconsistency\nbetween them. However, there is clearly a difference\nbetween the first two tuples and the third one. We don't\nknow whether Jeremy Burford lives in Los Angeles or New\nYork, but we do know that Linda Kenner lives in Chicago.\nAn approach to query answering that ignores inconsistencies\nwill be unable to make this distinction the distinction between\nreliable and unreliable data. On the other hand, any\napproach that simply masks out inconsistent data (the first\ntwo tuples in this example) will lose indefinite information\npresent in inconsistent databases. In this example, we know\nthat there is a student named Jeremy Burford (existential\ninformation) and that Jeremy Burford lives in Los Angeles\nor New York (disjunctive information).\nThe above example illustrates the need to modify the standard\nnotion of query answer in the context of inconsistent\ndatabases. We need to be able to talk about query answers\nthat are unaffected by integrity violations. In [2], the notion\nof consistent query answer was proposed to achieve that ob jective\n. [2] introduced the notion of repair: a database that\nsatisfies the integrity constraints and is minimally different\nfrom the original database. A consistent answer to a query,\nin this framework, is an answer present in the result of the\nquery in every repair.\nExample\n2. 
In Example 1, there are two repairs corresponding\nto two different ways of restoring the consistency:\neither the first or the second tuple is deleted. If a query asks\nfor all the information about students, only the tuple (Linda\nKenner,Chicago) is returned as a consistent answer because\nit is the only tuple that is present in both repairs. On the\nother hand, if a query asks for the names of students living\nin Los Angeles or New York, then Jeremy Burford is a\nconsistent answer.\nThe framework of [2] has served as a foundation for most\nof the subsequent work in the area of querying inconsistent\ndatabases [3, 5, 11, 12, 13, 15, 17, 19, 23] (see [7] for a survey\nand an in-depth discussion). The work presented here\naddresses the issue of computing consistent query answers\nfor projection-free queries and denial integrity constraints.\nIt is shown in [13] that this task can be done in polynomial\ntime, using the notion of conflict hypergraph that succinctly\n417\nrepresents all the integrity violations in a given database.\nThis line research is pursued further in the present paper.\nThe main contributions of this paper are as follows:\nA complete, scalable framework for computing consistent\nanswers to projection-free relational algebra\nqueries in the presence of denial constraints. Our approach\nuses a relational DBMS as a backend and scales\nup to large databases.\nNovel optimization techniques to eliminate redundant\nDBMS queries.\nEncouraging experimental results that compare our\napproach with an approach based on query rewriting\nand estimate the overhead of computing consistent\nquery answers. No comprehensive results of this kind\nexist in the literature.\nBecause our query language includes union, our approach\ncan extract indefinite disjunctive information present in an\ninconsistent database (see Example 1). Moreover, consistent\nquery answers are computed in polynomial time. Other existing\napproaches are either unable to handle disjunction in\nqueries [2, 12, 17] or cannot guarantee polynomial time com-putability\nof consistent query answers [3, 5, 11, 15, 19, 23].\nThe latter is due to the fact that those approaches rely on\nthe computation of answers sets of logic programs with disjunction\nand negation a\np\n2\n-complete problem. Only the\napproach of [2, 12] (which uses query rewriting) and the approach\npresented here scale up to large databases. Related\nresearch is further discussed in Section 6.\nThe plan of the paper is as follows. In Section 2, we introduce\nbasic concepts. In Section 3, we present our approach\nto computing consistent answers to projection-free queries\nand describe its implementation in a system called Hippo.\nIn Section 4, we describe several techniques for eliminating\nredundant DBMS queries, that we have implemented in\nHippo. In Section 5, we discuss a number of experiments we\nhave conducted with Hippo and query rewriting. In Section\n6, we briefly discuss related work. Section 7 contains conclusions\nand a discussion of possible future research directions.\nBASIC NOTIONS AND FACTS\nIn this paper we work in the relational model of data. We\nrecall that a database schema\nS is a set of relation names\nwith attribute names and types. An instance of a database\nis a function that assigns a finite set of tuples to each relation\nname. For the purposes of this paper we consider only two\nfixed database domains\nN (natural numbers) and D (unin-terpreted\nconstants). 
We also use the natural interpretation over N of the binary relational symbols =, ≠, <, >, and we assume that two constants are equal only if they have the same name. We also view I as a structure for the first-order language over the vocabulary consisting of the symbols of S and the standard built-in predicates over N (=, ≠, <, >).

In this article, we use projection-free (π-free) relational algebra expressions, defined using the following grammar:

E ::= R | σ_φ(E) | E × E | E ∪ E | E \ E.

|R| is the arity of the relation symbol R and (unless specified otherwise) for the sake of simplicity we assume that attribute names are consecutive natural numbers. We extend this to expressions, i.e. |E| is the arity of the expression, and E.i is the reference to the i-th column resulting from the expression E (used in conditions for subexpressions). Moreover, t[i] is the value on the i-th position of t, t[i, j] is an abbreviation for the tuple (t[i], . . . , t[j]), and with |t| we denote the length of the tuple t. We say that a tuple t is compatible with an expression E if the length of the tuple is equal to the arity of the expression, i.e. |t| = |E|.

For a given expression E, QA_E(I) is the result of evaluating E in the database instance I. In this paper we use only the set semantics of relational algebra expressions.

We also use relational calculus queries consisting of quantifier-free first-order formulas which may be open (having free variables) or ground. In fact, our approach can handle relational algebra queries that require projection, as long as they can be translated to quantifier-free relational calculus queries. That's why we can deal with the relational algebra query corresponding to the query

Student(X, 'Los Angeles') ∨ Student(X, 'New York')

in Example 1. We also occasionally use SQL.

2.2 Repairs and consistent query answers
An integrity constraint is a consistent closed first-order formula. In this paper we consider only the class of denial integrity constraints, of the form:

∀x̄_1, . . . , x̄_k. ¬[R_{i_1}(x̄_1) ∧ . . . ∧ R_{i_k}(x̄_k) ∧ φ(x̄_1, . . . , x̄_k)],   (1)

where φ is a boolean expression consisting of atomic formulas referring to built-in predicates. The number k is called the arity of a constraint.

Note that, for example, functional dependencies and exclusion constraints are of the above form. Below we give another example.

Example 3. Consider the relation Emp with attributes Name, Salary, and Manager, with Name being the primary key. The constraint that no employee can have a salary greater than that of her manager is a denial constraint:

∀n, s, m, s', m'. ¬[Emp(n, s, m) ∧ Emp(m, s', m') ∧ s > s'].

Definition 1 (Consistent database). A database instance I is consistent with a set of integrity constraints C if I |= C (i.e., C is true in I); inconsistent otherwise.

Definition 2. For a given database instance I of schema S, its set of facts Σ(I) is the set of all positive facts that hold in this database:

Σ(I) = {R(t) | R ∈ S ∧ t ∈ I(R)}.

Definition 3 (Database distance). Given two instances I_1 and I_2 of the same database, the distance between those instances Δ(I_1, I_2) is the symmetric difference between the sets of facts of those instances:

Δ(I_1, I_2) = (Σ(I_1) \ Σ(I_2)) ∪ (Σ(I_2) \ Σ(I_1)).

Definition 4 (Proximity relation). Given three instances I, I_1, I_2, the instance I_1 is closer to I than the instance I_2 if the distance between I_1 and I is contained in the distance between I_2 and I, i.e.

I_1 ≤_I I_2 ⇔ Δ(I, I_1) ⊆ Δ(I, I_2).
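To make Definitions 2-4 concrete, here is a small Python sketch (ours, purely illustrative) applied to the Student instance described in Example 1; instances are represented as dictionaries mapping relation names to sets of tuples.

# Facts are (relation_name, tuple) pairs, so sigma(I) corresponds to the
# set of facts of Definition 2.
def sigma(instance):
    return {(name, tup) for name, tuples in instance.items() for tup in tuples}

# delta(I1, I2): symmetric difference of the two sets of facts (Definition 3).
def delta(i1, i2):
    return sigma(i1) ^ sigma(i2)

# I1 <=_I I2: delta(I, I1) is contained in delta(I, I2) (Definition 4).
def closer_or_equal(i, i1, i2):
    return delta(i, i1) <= delta(i, i2)

# The Student instance described in Example 1 and one of its repairs.
student = {"Student": {("Jeremy Burford", "Los Angeles"),
                       ("Jeremy Burford", "New York"),
                       ("Linda Kenner", "Chicago")}}
repair1 = {"Student": {("Jeremy Burford", "Los Angeles"),
                       ("Linda Kenner", "Chicago")}}
drop_both = {"Student": {("Linda Kenner", "Chicago")}}

assert closer_or_equal(student, repair1, drop_both)
assert not closer_or_equal(student, drop_both, repair1)

The two asserts show that keeping one of the conflicting tuples yields an instance strictly closer to the original than dropping both, which, anticipating the definition of repair that follows, is why the latter instance, although consistent, is not a repair.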
For a given database instance I of schema\nS, its set of facts (I) is the set of all positive facts that hold\nin this database:\n(I) = {R(t)|R S t I(R)}.\nDefinition 3\n(Database distance).\nGiven two instances\nI\n1\nand I\n2\nof the same database, the distance between\nthose instances (I\n1\n, I\n2\n) is the symmetric difference between\nsets of facts of those instances:\n(I\n1\n, I\n2\n) = ((I\n1\n)\n\\ (I\n2\n))\n((I\n2\n)\n\\ (I\n1\n)).\nDefinition 4\n(Proximity relation).\nGiven\nthree\ninstances I, I\n1\n, I\n2\n, the instance I\n1\nis closer to I than the\ninstance I\n2\nif the distance between I\n1\nand I is contained in\nthe distance between I\n2\nand I, i.e.\nI\n1\n\nI\nI\n2\n(I, I\n1\n)\n(I, I\n2\n).\n418\nDefinition 5\n(Database repair).\nFor a given instance\nI and set of integrity constraints C, I is a repair\nof I w.r.t. C if I is the closest instance to I, which is consistent\nwith C , i.e. I |= C and I is\nI\n-minimal among\nthe instances that satisfy C.\nBy Rep\nC\n(I) we denote the set of all repairs of I with respect\nto C.\nThe following fact captures an important property of repairs\nof denial constraints: each repair is a maximal consistent\nsubset of the database.\nFact\n1. If C consists only of denial constraints, then:\nI Rep\nC\n(I) (I ) (I).\nDefinition 6\n(Core instance).\nFor a given instance\nI, its core w.r.t a set of integrity constraints C is an instance\nCore\nI\nC\nsuch that:\nCore\nI\nC\n(R) =\nI Rep\nC\n(I)\nI (R).\nFor any relation R and set of integrity constraints C, if there\nexists a relational algebra expression\nR\nC\nsuch that that for\nany instance I:\nQA\n\nR\nC\n(I) = Core\nI\nC\n(R),\nwe call\nR\nC\na core expression of the relation R w.r.t the set\nof integrity constraints C.\nFact\n2. If C is a set of denial integrity constraints, then\nfor any R S there exists a core expression\nR\nC\nof R w.r.t\nC.\nExample\n4. Suppose we have a table P (A, B) with a\nfunctional dependency A B. The core expression for P\nin SQL is:\nSELECT * FROM P P1 WHERE NOT EXISTS (\nSELECT * FROM P P2\nWHERE P1.A = P2.A AND P1.B <> P2.B);\nHaving defined repairs, we can define consistent answers to\nqueries. In general, the intuition is that the consistent query\nanswer is an answer to the query in every repair. In this paper\nwe consider consistent answers for two classes of queries.\nDefinition 7\n(CQA for ground queries).\nGiven a\ndatabase instance I and a set of denial integrity constraints\nC, we say that true (resp. false) is the consistent answer to\na ground query w.r.t. C in I , and we write I |=\nC\n, if\nin every repair I Rep\nC\n(I), I |= (resp. I |= ).\nDefinition 8\n(CQA for relational algebra).\nGiven a database instance I and a set of denial integrity\nconstraints C, the set of consistent answers to a query E\nw.r.t. C in I is defined as follows:\nCQA\nE\nC\n(I) =\nI Rep\nC\n(I)\nQA\nE\n(I ).\n2.3\nConflict hypergraphs\nThe conflict hypergraph [13] constitutes a compact, space-efficient\nrepresentation of all repairs of a given database instance\n. Note that this representation is specifically geared\ntoward denial constraints.\nDefinition 9\n(Conflict).\nFor a given integrity constraint\nc of form (1), a set of facts {R\ni\n1\n(t\n1\n), . . . , R\ni\nk\n(t\nk\n)\n},\nwhere t\nj\nI(R\ni\nj\n), is a conflict in a database instance I\nif (t\n1\n, . . . , t\nk\n). 
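To illustrate Definition 9 on the constraint of Example 3: a conflict there is a pair of Emp facts in which an employee earns more than her own manager, and all such pairs can be retrieved with a query of the following form (a hedged sketch, not from the paper, analogous in spirit to Example 5 below):
-- Each result row pairs the two Emp facts forming a conflict w.r.t. the
-- denial constraint of Example 3: E1 is the employee, E2 her manager.
SELECT E1.*, E2.*
FROM Emp E1, Emp E2
WHERE E1.Manager = E2.Name AND E1.Salary > E2.Salary;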
By\nE\nc,I\nwe denote the set of all conflicts\ngenerated by the integrity constraint c in I.\nDefinition 10\n(Conflict hypergraph).\nFor\na\ngiven set of integrity constraints C and a database instance\nI, a conflict hypergraph G\nC,I\nis a hypergraph with the set\nof vertices being the set of facts from the instance I, and\nthe set of hyperedges consisting of all conflicts generated by\nconstraints from C in I, i.e.\nG\nC,I\n= (\nV\nI\n, E\nC,I\n), where V\nI\n= (I), and E\nC,I\n=\ncC\nE\nc,I\n.\nDefinition 11\n(Maximal independent set).\nFor a\nhypergraph\nG = (V, E), the set of vertices is a maximal independent\nset if it is a maximal set that contains no hyperedge\nfrom\nE.\nFact\n3. Let I be a database instance, and C a set of\ndenial constraints, then for any repair I Rep\nC\n(I), (I )\nis a maximal independent set M in G\nC,I\n, and vice versa.\nAs shown in the following example in case of denial constraints\nthe set of conflicts can be defined using a simple\nquery.\nExample\n5. Suppose we have a table P (A, B) with a\nfunctional dependency A B. The SQL expression for selecting\nall conflicts from P generated by the functional constraint\nis:\nSELECT * FROM P P1, P P2\nWHERE P1.A = P2.A AND P1.B <> P2.B;\nDefinition 12\n(Data complexity).\nThe data complexity\nof consistent answers to ground first-order queries\nis the complexity of determining the membership in the set\nD\nC,\n=\n{I|I |=\nC\n}, where is a fixed ground first-order\nquery, and C is a fixed finite set of integrity constraints.\nWe note that for a fixed set of integrity constraints, the\nconflict hypergraph is of polynomial size (in the number of\ntuples in the database instance).\nIMPLEMENTATION\nWe review here the algorithm [13] for checking the consistency\nof ground queries in the presence of denial constraints,\nand then show how to use it to answer -free queries relational\nalgebra queries, which which correspond to open\nquantifier-free relational calculus queries.\nWe assume here that we work with a set of integrity constraints\nconsisting only of denial constraints. The input to\nthe algorithm consists of a ground quantifier-free formula\n, a set of integrity constraints C, and a database instance\nI. We want the algorithm to answer the question whether\nI |=\nC\n.\nTheorem\n1. [13] The data complexity of consistent answers\nto quantifier-free ground queries w.r.t a set of denial\nconstraints is in P .\n419\nThe proof of this theorem can be found in [13] together with\nthe corresponding algorithm that we call HProver.\nThis\nalgorithm takes the query in CNF, and a conflict hypergraph\nG\nC,I\nthat corresponds to the database instance I in\nthe presence of integrity constraints C. The first step of\nInput: =\n1\n. . .\nk\nground input formula in CNF,\nG\nC,I\n= (\nV\nI\n, E\nC,I\n) conflict hypergraph of\nI w.r.t. C.\n1\nfor i {1, . . . , k} do\n2\nlet\ni\nR\ni\n1\n(\nt\n1\n)\n. . . R\ni\np\n(\nt\np\n)\n\nR\ni\np+1\n(\nt\np+1\n)\n. . . R\ni\nm\n(\nt\nm\n).\n3\nfor j {p + 1, . . . , m} do\n4\nif t\nj\nI(R\ni\nj\n) then\n5\nnext i;\n6\nB {R\ni\np+1\n(\nt\np+1\n)\n, . . . , R\ni\nm\n(\nt\nm\n)\n}\n7\nfor j {1, . . . 
, p} do\n8\nif t\nj\nI(R\ni\nj\n) then\n9\nchoose e\nj\n{e E\nC,I\n|R\ni\nj\n(\nt\nj\n)\ne} nondeterm.\n10\nB B (e\nj\n\\ {R\ni\nj\n(\nt\nj\n)\n}).\n11\nif B is independent in G\nC,I\nthen\n12\nreturn false;\n13 return true;\nFigure 1: Algorithm\nHProver\nthe algorithm reduces the task of determining whether true\nis the consistent answer to the query to answering the\nsame question for every conjunct\ni\n. Then each formula\ni\nis negated and the rest of the algorithm attempts to find a\nrepair I in which\ni\nis true, i.e., in which\n1. t\nj\nI (R\ni\nj\n) for (j = p + 1, . . . , m)\n2. t\nj\nI (R\ni\nj\n) for (j = 1, . . . , p)\nSuch a repair corresponds to a maximal independent set M\nin the conflict hypergraph such that:\n1 . every of R\ni\np+1\n(t\np+1\n), . . . , R\ni\nm\n(t\nm\n) is an element of M ,\n2 . none of R\ni\n1\n(t\n1\n), . . . , R\ni\np\n(t\np\n) is an element of M .\nIf the algorithm succeeds in building an independent set\nsatisfying the properties 1 and 2 , such a set can be extended\nto a maximal one which also satisfies those properties. That\nmeans that there is a repair in which\n\ni\n, and thus also\n, is true. If the algorithm does not succeed for any i,\ni = 1, . . . , k, then true is the consistent answer to .\nThe condition 1 is satisfied by simply including the appropriate\nfacts in M . The condition 2 is satisfied by excluding\nthe appropriate facts from M . A fact can be excluded if it is\nnot in (I) or if it belongs to a hyperedge whose remaining\nelements are already in M .\n3.2\nFinding an envelope\nAny relational algebra expression E can be translated to a\ncorresponding first-order formula\nE\n(\nx) in a standard way.\nSince we consider only -free algebra expressions, the formula\nE\n(\nx) is quantifier-free. To be able to use HProver, we\nhave to ground this formula, i.e., find an appropriate set of\nbindings for the variables in the formula. This will be done\nby evaluating an envelope query over the database. An envelope\nquery should satisfy two properties: (1) it should return\na superset of the set of consistent query answers for every\ndatabase instance, and (2) it should be easily constructible\nfrom the original query. The result of evaluating an envelope\nquery over a given database will be called an envelope.\nSuppose K\nE\nis an envelope query for a query E. We have\nthat\nCQA\nE\nC\n(I) = {\nt QA\nK\nE\n(I) | I |=\nC\n\nE\n(\nt)}.\nIf an expression E does not use the difference operator (and\nthus is a monotonic expression), E itself is an envelope\nquery, as stated by the following lemma:\nLemma\n1. For any monotonic relational expression E,\nthe following holds:\nCQA\nE\nC\n(I) QA\nE\n(I).\nHowever when E is not monotonic, then the set of consistent\nquery answers may contain tuples not contained in QA\nE\n(I).\nThat kind of a situation is shown in the example below.\nExample\n6. Suppose we have two relations R(A, B) and\nS(A, B, C, D), and we have functional dependency over R :\nA B. In case when I(R) = {(1, 2), (1, 3)}, and I(S) =\n{(1, 2, 1, 3)}, the set of answers to the query\nE = S \\ (R(A\n1\n, B\n1\n)\n\nB\n1\n=B\n2\nR(A\n2\n, B\n2\n))\nis\n, while the set of consistent query answers is {(1, 2, 1, 3)}.\nTo obtain the expression for an envelope, we define two\noperators F and G by mutual recursion. The operator F\ndefines the envelope by overestimating the set of consistent\nanswers. The auxiliary operator G underestimates the set\nof consistent answers.\nDefinition\n13. 
We define the operators F and G recursively\n:\nF (R) = R,\nF (E\n1\nE\n2\n) = F (E\n1\n)\nF (E\n2\n),\nF (E\n1\n\\ E\n2\n) = F (E\n1\n)\n\\ G(E\n2\n),\nF (E\n1\nE\n2\n) = F (E\n1\n)\nF (E\n2\n),\nF (\n\n(E)) =\n\n(F (E)),\nG(R) =\nR\nC\n,\nG(E\n1\nE\n2\n) = G(E\n1\n)\nG(E\n2\n),\nG(E\n1\n\\ E\n2\n) = G(E\n1\n)\n\\ F (E\n2\n),\nG(E\n1\nE\n2\n) = G(E\n1\n)\nG(E\n2\n),\nG(\n\n(E)) =\n\n(G(E)).\nBecause C consist only of denial constraints, Fact 2 guarantees\nthat the expression\nR\nC\nexists, and therefore the operators\nare well defined. The pair of operators (F, G) has the\nfollowing properties:\nLemma\n2. For any -free relational algebra expression E:\nQA\nG(E)\n(I) QA\nE\n(I) QA\nF (E)\n(I), and\nCQA\nG(E)\nC\n(I) CQA\nE\nC\n(I) CQA\nF (E)\nC\n(I).\nLemma\n3. For any -free relational algebra expression E:\nI Rep\nC\n(I). QA\nG(E)\n(I) QA\nE\n(I ) QA\nF (E)\n(I)\nWith those two lemmas we can prove the following theorem.\nTheorem\n2. If C contains only denial constraints, then\nfor any -free relational algebra expression E the following\nholds for every database instance I:\nQA\nG(E)\n(I) CQA\nE\nC\n(I) QA\nF (E)\n(I).\n420\n3.3\nThe system Hippo\nWe have implemented a system called Hippo for finding\nconsistent answers to -free relational algebra queries. The\ndata is stored in an RDBMS (in our case, PostgreSQL).\nThe flow of data in Hippo is shown in Figure 2. The only\nE : , , \\,\nEstimating\nF (E) : , , , \\\nEvaluation\nConflict Detection\nDB\nTranslation\n\nE\n:\n, ,\nEnvelope\nConflict Hypergraph\nGrounding\nHProver\nAnswer Set\nIC\nFigure 2: Data flow in Hippo\noutput of this system is the Answer Set consisting of the\nconsistent answers to the input query E with respect to\na set of integrity constraints IC in the database instance\nDB. Before processing any input query, the system performs\nConflict Detection, and creates the Conflict Hypergraph. We\nassume that the number of conflicts is small enough to allow\nus to store the hypergraph in main memory. We keep in\nmain memory only the set of hyperedges corresponding to\nconflicts in database. The set of all the vertices represents\nthe entire contents of the database and thus may be too big\nto fit in main memory. In this way, we guarantee that our\napproach is scalable.\nThe processing of a query E consists of Estimating it to an\nenvelope query F (E) that after Evaluation b y an RDBMS\ngives us the Envelope. Also, the system performs Translation\nof the input query E to a corresponding first-order logic\nformula\nE\n. Now, for every tuple from the Envelope we\nperform Grounding of\nE\n. Having now a first-order ground\nquery we can check if true is the consistent answer to this\nquery using HProver. Depending on the result of this check\nwe return the tuple or not. It's important to notice here that\nbecause the hypergraph is stored in main memory, HProver\ndoesn't need any immediate knowledge of the integrity constraints\n(no arrow from IC to HProver). This is because\nin HProver the independence of constructed sets B is being\nchecked only for sets of vertices that are contained in the\ndatabase, and if such vertices are in any conflict, it is registered\nin the hypergraph. HProver makes, however, database\naccesses to check tuple membership in database relations.\nOPTIMIZATIONS\nThe previous section showed how to build a system for\ncomputing consistent query answers. 
But even though we\nhave decided to store the conflict hypergraph in main memory\n, we still have to perform tuple membership checks (steps\n4 and 8 in the HProver algorithm). To check if a tuple is\npresent in a given table, we execute a simple membership\nquery. For every tuple from the envelope we have to perform\nseveral tuple checks (depending on the complexity of\nthe query). Executing any query is usually a costly operation\nin the database context. Therefore tuple membership\nchecks are a significant factor in the algorithm execution\ntime.\nIn this section we address the problem of eliminating tuple\nmembership checks. We propose two improvements:\n1. The first infers information about the tuples present\nin the database from the current envelope tuple. That\nmakes it possible to answer some tuple checks without\ninterrogating the database.\n2. The second supplements the first by extending the envelope\nexpression so that we can find the results of all\nrelevant tuple checks without executing any membership\nquery.\n4.1\nKnowledge gathering\nIn this section we address the problem of answering tuple\nchecks.\nDefinition 14\n(Relevant facts).\nFor a given -free\nexpression E and a tuple t compatible with E, the set\nTC(E, t) of relevant facts is defined recursively:\nTC(R, t) = {R(t)},\nTC(E\n1\nE\n2\n, t) = TC(E\n1\n, t) TC(E\n2\n, t),\nTC(E\n1\n\\ E\n2\n, t) = TC(E\n1\n, t) TC(E\n2\n, t),\nTC(E\n1\nE\n2\n, (t\n1\n, t\n2\n)) = TC(E\n1\n, t\n1\n)\nTC(E\n2\n, t\n2\n),\nTC(\n\n(E), t) = TC(E, t).\nThe set of facts TC(E, t) consists of all facts that HProver\nmay need when working with the query\nE\n(t) (we conjecture\nthat the same set of facts will be needed by any practical\nchecker of consistent query answers for quantifier-free\nqueries). In the following example we show that the tuple t\nitself may carry information that can be used to derive some\nrelevant facts.\nExample\n7. Recall that relation attributes are named by\nnatural numbers. Assume that we have two tables R(1, 2),\nP (1, 2) and a query E = F (E) =\n1=a\n(R(RP )). Suppose\nthat a tuple t = (a, b, c, d) is the only result of the evaluation\nof F (E) in a database instance I. The set of relevant facts\nis TC(E, t) = {R(a, b), R(c, d), P (c, d)}. A natural consequence\nof the semantics of relational algebra expressions is\nthat t QA\n\n1=a\n(R(RP ))\n(I) implies (a, b) I(R). We can\nuse this information to avoid performing some membership\nqueries. At the same time the tuple t itself doesn't carry\nenough information to decide whether (c, d) belongs to either\nI(R), I(P ), or both of them.\nWe call the process of inferring the information from result\nof the evaluation of a query knowledge gathering. Formally,\nwe define the set of derived facts in the following way:\nDefinition 15\n(Knowledge gathering).\nFor\na\ngiven -free expression E and a tuple t compatible with E\n421\nwe define the set KG recursively:\nKG(R, t) = {R(t)},\nKG(E\n1\nE\n2\n, t) = KG(E\n1\n, t) KG(E\n2\n, t),\nKG(E\n1\n\\ E\n2\n, t) = KG(E\n1\n, t),\nKG(\n\n(E), t) = KG(E, t),\nKG(E\n1\nE\n2\n, (t\n1\n, t\n2\n)) = KG(E\n1\n, t\n1\n)\nKG(E\n2\n, t\n2\n).\nWe note here that the cardinality of the set of facts inferred\nwith KG is linear in the size of the query and doesn't depend\non the value of the tuple t. 
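To make the saving concrete: a relevant fact of Example 7 that is not derivable from the envelope tuple, such as R(c, d), would otherwise require a membership query against the backend, roughly of the following form (a hedged sketch; the column names A1, A2 are assumptions, since the paper names attributes by consecutive natural numbers):
-- Tuple check executed by HProver (steps 4 and 8); the two parameters are
-- bound to the constants of the fact being checked, e.g. c and d for the
-- relevant fact R(c,d) of Example 7. A non-empty result means the fact
-- holds in the database.
SELECT 1 FROM R WHERE A1 = ? AND A2 = ?;
Facts that are already derivable by knowledge gathering, such as R(a, b) in Example 7, never give rise to such a query.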
Now we state the main property\nof KG.\nTheorem 3\n(Soundness of KG).\nGiven a database\ninstance I and a -free expression E\nt QA\nF (E)\n(I).R(t ) TC(E, t).\nR(t )\nKG(E, t) I |= R(t ).\nKnowledge gathering is also complete in the case of\n{, }-expressions\n, i.e. it derives all relevant facts that hold in the\ndatabase I.\nTheorem 4\n(Completeness of KG for\n{, }).\nGiven a database I and any {, }-query E.\nt QA\nF (E)\n(I).R(t ) TC(E, t).\nI |= R(t )\nR(t ) KG(E, t).\n4.2\nExtended knowledge gathering\nIn general, when the expression translates to a disjunctive\nquery we need to extend the query so that the resulting tuple\ncarries some additional information allowing us to derive all\nrelevant facts. The extended approach described in detail\nbelow is illustrated first by the following example.\nExample\n8. For the previously considered expression\nE =\n1=a\n(R (R P )) the extended approach constructs\nthe expression\n1=a\n(R (R P ))\n3,4\n-- R\n3,4\n-- P , where\nis the left outer join operator\n1\n. Suppose now, I(R) =\n{(a, b), (e, f)} and I(P ) = {(c, d), (e, f)}. Then the evaluation\nof the extended envelope expression yields the following:\n\n1=a\n(R (R P ))\n3,4\n-- R\n3,4\n-- P\na\nb\na\nb\na\nb\n\na\nb\nc\nd\nc\nd\na\nb\ne\nf\ne\nf\ne\nf\nNow, consider the tuple (a, b, c, d, , , c, d). We can decompose\nit into two parts (a, b, c, d) and (, , c, d). The\nfirst part is simply the tuple from the envelope F (E), and\nit can be used to infer the fact R(a, b). The second part allows\nus to make two other important inferences. Namely,\n(c, d) I(R) and (c, d) I(P ).\nOur goal is to minimally extend the expression so that we\ncan derive all relevant facts. In order to find what information\nis not guaranteed to be gathered from evaluation of the\nenvelope expression, we generalize the definitions of KG and\nTC to non-ground tuples consisting of distinct variables.\n1\nFor clarity we simplify the notion of the outer join condition\n. When writing S\n3,4\n-- T we mean S\nS.3=T.1S.4=T.2\n----------- T ,\nand we assume the left join operator is left associative\nDefinition 16\n(Complementary set).\nFor a given\n-free expression E, the complementary set (E) is defined\nas follows:\n(E) = TC(E,\nx) \\ KG(E,\nx),\nwhere\nx = (x\n1\n, . . . , x\n|E|\n).\nExample\n9. Taking again under consideration the expression\nE =\n1=a\n(R (R P )) and\nx = (x\n1\n, . . . , x\n4\n) we\nhave:\nTC(E,\nx) = {R(x\n1\n, x\n2\n), P (x\n3\n, x\n4\n), R(x\n3\n, x\n4\n)\n},\nKG(E,\nx) = {R(x\n1\n, x\n2\n)\n}.\nR(x\n1\n, x\n2\n)\nTC(E) means that for any tuple (t\n1\n, t\n2\n, t\n3\n, t\n4\n)\nfrom the evaluation of the envelope expression for E,\nHProver may perform the tuple check R(t\n1\n, t\n2\n). We have\nalso R(x\n1\n, x\n2\n)\nKG(E) and therefore we are able to answer\nthis check using knowledge gathering. On the other\nhand R(x\n3\n, x\n4\n)\nTC(E) means that for HProver may perform\na tuple check R(t\n3\n, t\n4\n).\nSince we don't have that\nR(x\n3\n, x\n4\n)\nKG(E) we cannot guarantee that we can answer\ntuple checks R(t\n3\n, t\n4\n) without executing a membership query\non the database, even though we are able to answer tuple\nchecks R(t\n1\n, t\n2\n). The complementary set for the discussed\nexpression is:\n(E) = {R(x\n3\n, x\n4\n), P (x\n3\n, x\n4\n)\n}.\nAnalogous examples can be used to show that the simple\nknowledge gathering is not sufficient to avoid membership\nchecks when processing expressions with the difference operator\n. 
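Before turning to the formal definition, the extended envelope built in Example 8 can be pictured directly in SQL. The following is a hedged sketch, not from the paper: it assumes column names A1, A2 for both R and P, and reads the expression of Examples 7-9 as σ1=a(R × (R ∪ P)), which is consistent with the relevant facts and with the evaluation table of Example 8; the two outer joins correspond to the two elements of the complementary set computed in Example 9.
-- Envelope F(E) for E of Examples 7-9, extended with one left outer join
-- per element of the complementary set {R(x3,x4), P(x3,x4)}.
SELECT F.A1, F.A2, F.A3, F.A4,
       R.A1 AS R1, R.A2 AS R2, P.A1 AS P1, P.A2 AS P2
FROM (SELECT R1.A1, R1.A2, U.A1 AS A3, U.A2 AS A4
      FROM R R1,
           (SELECT A1, A2 FROM R UNION SELECT A1, A2 FROM P) U
      WHERE R1.A1 = 'a') F
     LEFT OUTER JOIN R ON F.A3 = R.A1 AND F.A4 = R.A2
     LEFT OUTER JOIN P ON F.A3 = P.A1 AND F.A4 = P.A2;
-- On the instance of Example 8 the envelope tuple (a,b,c,d) comes back as
-- (a,b,c,d,NULL,NULL,c,d): the nulls show that (c,d) is not in R, while the
-- trailing (c,d) shows that it is in P, so no membership query is needed.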
Next, we extend the envelope expression so that\nit evaluation provides us with all information sufficient to\nanswer the tuple checks.\nDefinition 17\n(Extended envelope expression).\nFor a given -free expression E the extended envelope\nexpression is defined as follows:\nH(E) = F (E)\n|R|\nj=1\nE.(i+j-1)=R.j\n---------------R\n(x\ni\n,...,x\ni+|R|-1\n)(E)\nR.\nThe notation means that we have as many outer joins as\nthere are elements in (E). They can appear in any order.\nWe also define the following auxiliary expression:\nS(E) = E\n\nR(x\ni\n,...,x\ni+|R|-1\n)(E)\nR.\nFor both H(E) and S(E) the elements of (E) need to be\nconsidered in the same order.\nUsing outer joins results in a natural one-to-one correspondence\nbetween the tuples from the evaluation of the extended\nenvelope expression and the tuples from the original\nenvelope.\nFact\n4. For a given database instance I and -free expression\nE, the map t t[1, |E|] is a one-to-one map of\nQA\nH(E)\n(I) onto QA\nF (E)\n(I).\nExtending knowledge gathering to null tuples KG(R, (\n, . . . , )) = allows us to state that using the extended\nenvelope expression we can determine correctly all relevant\nfacts without querying the database.\n422\nTheorem 5\n(Soundness, completeness of ext. KG).\nFor any database instance I and a -free expression E the\nfollowing holds:\nt QA\nH(E)\n(I).R(t ) TC(E, t[1, |E|]).\nR(t )\nKG(S(E), t) I |= R(t ).\nWe note that in the case of\n{, }-expressions this approach\ndoesn't unnecessarily extend the expression.\n4.3\nOther possibilities of optimizations\n4.3.1\nNegative knowledge gathering\nKnowledge gathering KG (as defined in Section 4.1) is\ncomplete only for queries that translate to a conjunction of\npositive literals. However, it is possible to come up with a\nconstruction that will be complete for queries that translate\nto a conjunction of positive as well as negative literals. The\nfollowing example presents this idea.\nExample\n10. Suppose we have tables R(1, 2) and P (1, 2)\nand a set of constraints C. For the query E = R\\P , we have\nF (E) = R \\\nP\nC\n. Take any tuple t QA\nF (E)\n(I) for some\ninstance I. We can easily conclude that t I(R). Also,\nwe can say that t QA\n\nP\nC\n(I). Having this and hypergraph\nG\nC,I\n= (\nV\nI\n, E\nC,I\n) we can easily find if t I(P ). Namely, if\nthere exists an edge e E\nC,I\nthat P (t) e, then t I(P ).\nAnd if the vertex P (t) is not involved in any conflict in E\nthen t I(P ).\nReasoning of that sort cannot be applied to a query E =\nR R \\ P P . Given a tuple t = (t\n1\n, t\n2\n) from the envelope\nwe know that t\n1\n, t\n2\nR, but the fact (t\n1\n, t\n2\n)\nQA\n\nP\nC\n\nP\nC\n(I)\ndoesn't imply that t\n1\nQA\n\nP\nC\n(I) or t\n2\nQA\n\nP\nC\n(I). And\ntherefore we are not able to find if t\n1\nI(P ) or t\n2\nI(P ).\nThis mechanism hasn't been included in the tested implementation\nyet. Implementing only positive knowledge gathering\nallows us to better observe the benefits of extending\nthe envelope expression.\nWe notice here that the query rewriting approach to computing\nconsistent query answers described in [2] works also\nonly for queries that are conjunctions of literals. 
However,\nas shown below, our approach leads to faster computation\nof consistent answers than query rewriting.\n4.3.2\nIntersection\nAnother possible venue of optimization comes from directly\nimplementing derived operators of relational algebra.\nFor example, for intersection the appropriate extensions of\nthe operators F and G are very simple:\nF (E\n1\nE\n2\n) = F (E\n1\n)\nF (E\n2\n), G(E\n1\nE\n2\n) = G(E\n1\n)\nG(E\n2\n).\nNow R P is equivalent to R \\(R \\P ) b ut F (R P ) = R P\nis not equivalent to F (R \\ (R \\ P )) = R \\ (\nR\nC\n\\P ). Thus the\nenvelope constructed by the operator F becomes sensitive\nto the way the original query is formulated.\nEXPERIMENTAL RESULTS\nAmong available methods for computing consistent query\nanswers, only the query rewriting technique [2] seems to be\nfeasible for large databases. This is why in this work we\ncompare the following engines:\nSQL An engine that executes the given query on the underlying\nRDBMS, and returns the query result. This\nmethod doesn't return consistent query answers, but\nprovides a baseline to observe the overhead of computing\nconsistent query answers using the proposed\nmethods.\nQR Using the SQL engine, we execute the rewritten query\nconstructed as decribed in [2]. More details on this\napproach can be found in Section 6.\nKG This method constructs the basic envelope expression\nand uses knowledge gathering, as described in Section\n4.1.\nExtKG This engine constructs the extended envelope expression\n(Section 4.2) and uses extended knowledge\ngathering.\n5.1.1\nGenerating test data\nEvery test was performed with the database containing\ntwo tables P and Q, both having three attributes X, Y, Z.\nFor the constraints, we took a functional dependency X\nZ in each table. The test databases had the following parameters\n:\nn : the number of base tuples in each table,\nm : the number of additional conflicting tuples,\nand had both tables constructed in the following way:\n1. Insert n different base tuples with X and Z being equal\nand taking subsequent values 0, . . . , n - 1, and Y being\nrandomly drawn from the set\n{0, 1}.\n2. Insert m different conflicting tuples with X taking subsequent\nvalues\n{0, n/m , 2 n/m , . . . , (m - 1)\nn/m }, Z = X + 1, and Y being randomly drawn\nfrom the set\n{0, 1}.\nIn addition, we define auxiliary tables (P\ncore\nand Q\ncore\n)\ncontaining only non-conflicting tuples from the base tables\n(resp. P and Q). Those table were used as materialized\nviews of the core expressions (\nP\nC\nand\nQ\nC\n).\nExample\n11. We show how a table P with n = 4 and\nm = 2 can be generated:\n1. First\nwe\ninsert\nthe\nbase\ntuples\n(0, 1, 0), (1, 0, 1), (2, 0, 2), (3, 1, 3) into P\n2. Then\nwe\ninsert\nthe\nfollowing\nconflicting\ntuples\n(0, 1, 1), (2, 0, 3) into P\n3. P\ncore\nwill hold the following tuples (1, 0, 1), (3, 1, 3).\nIn every table constructed in such a way the number of tuples\nis n + m, and the number of conflicts is m.\n5.1.2\nThe environment\nThe implementation is done in Java2, using PostgreSQL\n(version 7.3.3) as the relational backend. All test have been\nperformed on a PC with a 1.4GHz AMD Athlon processor\nunder SuSE Linux 8.2 (kernel ver. 2.4.20) using Sun JVM\n1.4.1.\n423\n5.2\nTest results\nTesting a query with a given engine consisted of computing\nthe consistent\n2\nanswers to the query and then iterating\nover the results. 
Iteration over the result is necessary, as\nthe subsequent elements of the consistent query answer set\nare computed by Hippo in a lazy manner (this allows us\nto process results bigger than the available main memory).\nEvery test has been repeated three times and the median\ntaken. Finally, we note that the cost of computing the conflict\nhypergraph, which is incurred only once per session, is\nignored while estimating the time of the query evaluation.\nWe take a closer look at the time required for hypergraph\nconstruction in Section 5.2.3.\n5.2.1\nSimple queries\nWe first compared performance of different engines on\nsimple queries: join, union, and difference.\nBecause we\nperformed the tests for large databases, we added a range\nselection to the given query to obtain small query results,\nfactoring out the time necessary to write the outputs. As\nparameters in the experiments, we considered the database\nsize, the conflict percentage, and the estimated result size.\nFigure 3 shows the execution time for join as a function\nof the size of the database. In the case of\n{, }-expressions\n(thus also joins), the execution times of KG, ExtKG and\nSQL are essentially identical. Since no membership queries\nhave to be performed, it means that for simple queries the\nwork done by HProver for all tuples is practically negligible.\n\nX<200\n(P\nX\nQ)\nTime\n(sec.)\nDatabase size (increases in 1k)\n0\n100\n200\n300\n0\n3\n6\n9\n12\nSQL\nQR\nKG\nExtKG\nConflicts: 2%\nFigure 3: Execution time for join.\nFigure 4 contains the results for union.\nIt shows that\nbasic knowledge gathering KG is not sufficient to efficiently\nhandle union. The cost of performing membership queries\nfor all tuples is very large. Note that query rewriting is not\napplicable to union queries.\nFigure 5 contains the results for set difference (the execution\ntime for KG was relatively much larger than values\nof other solutions and in order to increase readability it has\nnot been included on this figure). Here the execution time is\na function of the percentage of conflicts. We note that ExtKG\nperforms as well as QR and both are approximately\ntwice slower than SQL.\n2\nExcept when using SQL engine.\n\nX<200\n(P Q)\nTime\n(sec.)\nDatabase size (increases in 1k)\n0\n20\n40\n60\n80\n100\n0\n3\n6\n9\n12\n15\nSQL\nKG\nExtKG\nConflicts: 2%\nFigure 4: Execution time for union.\n\nX<200\n(P \\ Q);\nTime\n(sec.)\nConflicts (%)\n0\n1\n2\n3\n4\n5\n0\n1\n2\n3\nSQL\nQR\nExtKG\nDB size: 100k\nFigure 5: Execution time for difference.\n5.2.2\nComplex queries\nIn order to estimate the cost of extending the envelope we\nconsidered a complex union query\n\nX<d\n(P\nX\nQ\nX\nP\nX\nQ Q\nX\nP\nX\nQ\nX\nP ),\nwith d being a parameter that will allow us to control the\nnumber of tuples processed by each engine. To assure no\nmembership queries will be performed, we have to add 8\nouter joins. The main goal was to compare two versions\nof knowledge gathering: KG and ExtKG. We have also\nincluded the results for SQL. (It should be noted here that\nthis query has common subexpressions and RDBMS might\nuse this to optimize the query evaluation plan. PostgreSQL,\nhowever, does not perform this optimization.)\nAs we can see in Figure 6, KG outperforms ExtKG only\nin the case when the number of processed tuples is small.\nAs the result size increases, the execution time of ExtKG\ngrows significantly slower than that of KG. 
We notice also\nthat ExtKG needs 23 times more time than SQL but the\nexecution times of both grow in a similar fashion.\n5.2.3\nHypergraph computation\nThe time of constructing the hypergraph is presented on\nFigure 7). It depends on the total number of conflicts and\nthe size of the database.\nIt should be noticed here that the time of hypergraph construction\nconsists mainly of the execution time of conflict\ndetection queries. Therefore the time of hypergraph computation\ndepends also on the number of integrity constraints\nand their arity.\n424\nX<d(P X Q X P X Q Q X P X Q X P )\nTime\n(sec.)\nResult size estimation (d tuples)\n0\n30\n60\n90\n120\n150\n0\n10\n20\n30\nSQL\nKG\nExtKG\nDB size: 100k\nConflicts: 2%\nFigure 6: Impact of the result size.\nTime\n(sec.)\nDatabase size (increments in 1k)\n0\n50\n100\n150\n200\n250\n0\n3\n6\n9\n12\nConflicts:\n0%\n2%\n4%\nFigure 7: Hypergraph computation time\nRELATED WORK\nThe discussion of related work here is very brief and focuses\nmainly on the most recent research. For a comprehensive\ndiscussion, please see [7].\nBry [10] was the first to note that the standard notion of\nquery answer needs to be modified in the context of inconsistent\ndatabases and to propose the notion of a consistent\nquery answer. Bry's definition of consistent query answer is\nbased on provability in minimal logic and expresses the intuition\nthat the part of the database instance involved in an\nintegrity violation should not be involved in the derivation\nof consistent query answers. This is not quite satisfactory, as\none would like to have a semantic, model-theoretic notion of\nconsistent query answer that parallels that of the standard\nnotion of query answer in relational databases. Moreover,\nthe data involved in an integrity violation is not entirely\nuseless and reliable indefinite information can often be extracted\nfrom it, as seen in Example 1.\nQuery rewriting [2, 12] rewrites the original query Q to\nanother query Q with the property that the set of all the\nanswers to Q in the original database is equal to the set\nof consistent answers to Q in that database. When applicable\n, this approach provides an easy way to compute\nconsistent query answers, as the rewritten query Q can\ntypically be evaluated using the same query engine as the\nquery Q. Because the query Q is rewritten independently\nof the database, the existence of a rewriting shows that requesting\nconsistent query answers instead of the regular ones\ndoes not increase data complexity. However, query rewriting\nhas been found to apply only to restricted classes of\nqueries: the\n{, , \\}-subset [2] or the {, }-subset [13] of\nthe relational algebra. No method is presently known to\nrewrite queries with projection considered together with the\nbinary operators, or union. Also, the class of constraints\nis limited to binary universal constraints [2] or single functional\ndependencies [13]. The line of research from [2] is con-tinued\nin [17] where a class of tractable conjunctive queries,\nbased on generalized perfect matching, is identified. It is\nproved that the consistent answers to queries in this class\ncannot be obtained by query rewriting. We note here that\nthe nonexistence of query rewriting for conjunctive queries\nfollows also from the fact that computing consistent query\nanswers for such queries is a co-NP-complete problem [4,\n13]. 
This is because the rewritten query is first-order and\nthus can be evaluated in AC\n0\n, while known NP-complete\nproblems like SAT are not in AC\n0\n.\nSeveral different approaches have been developed to specify\nall repairs of a database as a logic program with disjunction\nand classical negation [3, 6, 15, 18, 19, 22]. Such a\nprogram can then be evaluated using an existing system like\ndlv [14]. These approaches have the advantage of generality,\nas typically arbitrary first-order queries and universal constraints\n(or even some referential integrity constraints [3])\ncan be handled. However, the generality comes at a price:\nThe classes of logic programs used are\np\n2\n-complete. Therefore\n, the approaches based on logic programming are unlikely\nto work for large databases. The paper [15] proposes\nseveral optimizations that are applicable to logic programming\napproaches. One is localization of conflict resolution,\nanother - encoding tuple membership in individual repairs\nusing bit-vectors, which makes possible efficient computation\nof consistent query answers using bitwise operators.\nHowever, it is known that even in the presence of one functional\ndependency there may be exponentially many repairs\n[4]. With only 80 tuples involved in conflicts, the number\nof repairs may exceed 10\n12\n! It is clearly impractical to efficiently\nmanipulate bit-vectors of that size.\n[11] describes several possible definitions of repair, including\nDefinition 5, and analyzes the complexity of computing\nconsistent query answers under those definitions. Key and\ninclusion dependencies are considered. The computational\napproaches proposed are based on combinations of repair\nenumeration and chase computation [1]. New tractability\nresults are obtained for classes of databases that satisfy key\nconstraints but may violate inclusion dependencies.\nPresently, our approach requires that the integrated\ndatabase be materialized at a single site. It remains to be\nseen if it can be generalized to a scenario where data is pulled\nfrom different sites during the evaluation of queries rewritten\nusing, for example, the LAV approach [20]. This problem\nhas been considered in the context of a logic-program-based\napproach to the computation of consistent query answers [8,\n9] but, as explained earlier, such an approach does not scale\nup to large databases.\nA new scenario for data integration, data exchange, has\nbeen recently proposed [16].\nIn this scenario, a target\ndatabase is materialized on the basis of a source database using\nsource-to-target dependencies. In the presence of target\nintegrity constraints, a suitable consistent target database\nmay not exist. This is a natural context for the application\nof the concepts of repair and consistent query answer. However\n, [16] does not consider the issue of the inconsistency of\ntarget databases. [11] addresses the problem of consistent\nquery answering in a restricted data exchange setting.\n425\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have presented a practical, scalable\nframework for computing consistent query answers for large\ndatabases. We have also described a number of novel optimization\ntechniques applicable in this context and summa-rized\nexperimental results that validate our approach.\nThe approach, however, has a number of limitations. Only\nprojection-free relational algebra queries and denial integrity\nconstraints are currently supported. 
Adding projection to\nthe query language is a difficult issue because the complexity\nof computing consistent query answers becomes in that case\nco-NP-complete [4, 13]. So, unless P=NP, we cannot hope\nfor computing consistent query answers efficiently for arbitrary\nconjunctive queries and arbitrary database instances.\nHowever, the evaluation of queries with projection can make\nuse of the conflict hypergraph representation of all repairs,\nand of the operators F and G introduced in Section 3. Moreover\n, we expect to be able to compute consistent answers to\nqueries with projection in polynomial time if conflict hypergraphs\nare suitably restricted. We hope that such restrictions\ncan be translated into corresponding restrictions on\ndatabase instances and integrity constraints.\nIn [4], we have studied scalar aggregation queries in the\npresence of functional dependencies, also making use of conflict\ngraphs. It remains to be seen whether the techniques\ndeveloped in [4] can be combined with those of the present\npaper.\nGoing beyond denial constraints appears challenging, too.\nEssentially, integrity violations of denial constraints are due\nto the presence of some facts in the database, and thus can\nbe compactly represented using the conflict hypergraph. If\narbitrary universal constraints, for example tuple-generating\ndependencies [1, 21], are allowed, constraint violations may\nbe due to the simultaneous presence and absence of certain\ntuples in the database. It is not clear how to construct in\nthis case a compact representation of all repairs that can\nbe used for the computation of consistent query answers.\nAlso, repairs are no longer guaranteed to be subsets of the\noriginal database but can contain additional tuples. If referential\nintegrity is to be captured, constraints have to contain\nexistentially quantified variables, which leads to the unde-cidability\nof consistent query answers [11]. Only in very restricted\ncases this problem has been shown to be tractable\n[11, 13].\nAnother avenue of further research involves using preferences\nto reduce the number of repairs and consequently make\nthe computation of consistent query answers more efficient.\nFor example, in data integration, we may have a preference\nfor certain sources or for more recent information.\nThe issue of benchmarking systems that compute consistent\nquery answers requires more work.\nIt would be\ndesirable to design mechanisms that generate inconsistent\ndatabases in a systematic way and to perform more extensive\nexperimental comparisons between implemented systems\nREFERENCES\n[1] S. Abiteboul, R. Hull, and V. Vianu. Foundations of\nDatabases. Addison-Wesley, 1995.\n[2] M. Arenas, L. Bertossi, and J. Chomicki. Consistent Query\nAnswers in Inconsistent Databases. In ACM Symposium on\nPrinciples of Database Systems (PODS), pages 6879, 1999.\n[3] M. Arenas, L. Bertossi, and J. Chomicki. Answer Sets for\nConsistent Query Answering in Inconsistent Databases. Theory\nand Practice of Logic Programming, 3(45):393424, 2003.\n[4] M. Arenas, L. Bertossi, J. Chomicki, X. He, V. Raghavan, and\nJ. Spinrad. Scalar Aggregation in Inconsistent Databases.\nTheoretical Computer Science, 296(3):405434, 2003.\n[5] M. Arenas, L. Bertossi, and M. Kifer. Applications of\nAnnotated Predicate Calculus to Querying Inconsistent\nDatabases. In International Conference on Computational\nLogic, pages 926941. Springer-Verlag, LNCS 1861, 2000.\n[6] P. Barcelo and L. Bertossi. Logic Programs for Querying\nInconsistent Databases. 
In International Symposium on\nPractical Aspects of Declarative Languages (PADL), pages\n208222. Springer-Verlag, LNCS 2562, 2003.\n[7] L. Bertossi and J. Chomicki. Query Answering in Inconsistent\nDatabases. In J. Chomicki, R. van der Meyden, and G. Snake,\neditors, Logics for Emerging Applications of Databases, pages\n4383. Springer-Verlag, 2003.\n[8] L. Bertossi, J. Chomicki, A. Cortes, and C. Gutierrez.\nConsistent Answers from Integrated Data Sources. In\nInternational Conference on Flexible Query Answering\nSystems (FQAS), pages 7185, Copenhagen, Denmark,\nOctober 2002. Springer-Verlag.\n[9] L. Bravo and L. Bertossi. Logic Programs for Consistently\nQuerying Data Integration Systems. In International Joint\nConference on Artificial Intelligence (IJCAI), pages 1015,\n2003.\n[10] F. Bry. Query Answering in Information Systems with\nIntegrity Constraints. In IFIP WG 11.5 Working Conference\non Integrity and Control in Information Systems, pages\n113130. Chapman &Hall, 1997.\n[11] A. Cali, D. Lembo, and R. Rosati. On the Decidability and\nComplexity of Query Answering over Inconsistent and\nIncomplete Databases. In ACM Symposium on Principles of\nDatabase Systems (PODS), pages 260271, 2003.\n[12] A. Celle and L. Bertossi. Querying Inconsistent Databases:\nAlgorithms and Implementation. In International Conference\non Computational Logic, pages 942956. Springer-Verlag,\nLNCS 1861, 2000.\n[13] J. Chomicki and J. Marcinkowski. Minimal-Change Integrity\nMaintenance Using Tuple Deletions. Information and\nComputation, 2004. To appear. Earlier version: Technical\nReport cs.DB/0212004, arXiv.org e-Print archive.\n[14] T. Eiter, W. Faber, N. Leone, and G. Pfeifer. Declarative\nProblem-Solving in DLV. In J. Minker, editor, Logic-Based\nArtificial Intelligence, pages 79103. Kluwer, 2000.\n[15] T. Eiter, M. Fink, G. Greco, and D. Lembo. Efficient\nEvaluation of Logic Programs for Querying Data Integration\nSystems. In International Conference on Logic Programming\n(ICLP), pages 163177, 2003.\n[16] R. Fagin, P. G. Kolaitis, R. J. Miller, and L. Popa. Data\nExchange: Semantics and Query Answering. In International\nConference on Database Theory (ICDT), pages 207224.\nSpringer-Verlag, LNCS 2572, 2003.\n[17] A. Fuxman and R. Miller. Towards Inconsistency Management\nin Data Integration Systems. In IJCAI-03 Workshop on\nInformation Integration on the Web (IIWeb-03), 2003.\n[18] G. Greco, S. Greco, and E. Zumpano. A Logic Programming\nApproach to the Integration, Repairing and Querying of\nInconsistent Databases. In International Conference on Logic\nProgramming (ICLP), pages 348364. Springer-Verlag, LNCS\n2237, 2001.\n[19] G. Greco, S. Greco, and E. Zumpano. A Logical Framework for\nQuerying and Repairing Inconsistent Databases. IEEE\nTransactions on Knowledge and Data Engineering,\n15(6):13891408, 2003.\n[20] A. Y. Halevy. Answering Queries Using Views: A Survey.\nVLDB Journal, 10(4):270294, 2001.\n[21] P. C. Kanellakis. Elements of Relational Database Theory. In\nJan van Leeuwen, editor, Handbook of Theoretical Computer\nScience, volume B, chapter 17, pages 10731158. Elsevier/MIT\nPress, 1990.\n[22] D. Van Nieuwenborgh and D. Vermeir. Preferred Answer Sets\nfor Ordered Logic Programs. In European Conference on\nLogics for Artificial Intelligence (JELIA), pages 432443.\nSpringer-Verlag, LNAI 2424, 2002.\n[23] J. Wijsen. Condensed Representation of Database Repairs for\nConsistent Query Answering. In International Conference on\nDatabase Theory (ICDT), pages 378393. 
Springer-Verlag,\nLNCS 2572, 2003.\n426\n", "keywords": "Knowledge gathering;Conflict hypergraph;inconsistency;Denial constraints;Inconsistent database;Optimization;Relational algebra;Polynomial time;Consistent Query answer;Disjunctive query;integrity constraints;query processing;Repair"} {"name": "55", "title": "Consistent Query Answering under Key and Exclusion Dependencies: Algorithms and Experiments", "abstract": "Research in consistent query answering studies the definition and computation of \"meaningful\" answers to queries posed to inconsistent databases, i.e., databases whose data do not satisfy the integrity constraints (ICs) declared on their schema. Computing consistent answers to conjunctive queries is generally coNP-hard in data complexity, even in the presence of very restricted forms of ICs (single, unary keys). Recent studies on consistent query answering for database schemas containing only key dependencies have an-alyzed the possibility of identifying classes of queries whose consistent answers can be obtained by a first-order rewriting of the query, which in turn can be easily formulated in SQL and directly evaluated through any relational DBMS. In this paper we study consistent query answering in the presence of key dependencies and exclusion dependencies. We first prove that even in the presence of only exclusion dependencies the problem is coNP-hard in data complexity , and define a general method for consistent answering of conjunctive queries under key and exclusion dependencies, based on the rewriting of the query in Datalog with negation . Then, we identify a subclass of conjunctive queries that can be first-order rewritten in the presence of key and exclusion dependencies, and define an algorithm for computing the first-order rewriting of a query belonging to such a class of queries. Finally, we compare the relative efficiency of the two methods for processing queries in the subclass above mentioned. Experimental results, conducted on a real and large database of the computer science engineering degrees of the University of Rome \"La Sapienza\", clearly show the computational advantage of the first-order based technique.", "fulltext": "INTRODUCTION\nSuppose to have a database whose data violate the integrity\nconstraints (ICs) declared on its schema. What are\nthe answers that have to be returned to queries posed to\nsuch a database?\nThe standard approach to this problem\nis through data cleaning, i.e., by explicitly modifying\nthe data in order to eliminate violation of ICs: only when\ndata are \"repaired\", i.e., are consistent with the ICs, queries\ncan be answered. However, in many situations it would be\nmuch more desirable to derive significant information from\nthe database even in the presence of data inconsistent with\nthe ICs. Indeed, in many application scenarios, the explicit\nrepair of data is not convenient, or even not possible.\nThis happens, for instance, in data integration applications,\nwhich provide a unified, virtual view of a set of autonomous\ninformation sources [5].\nThis alternative approach is the one followed by research\nin consistent query answering, which studies the definition\n(and computation) of \"meaningful\" answers to queries posed\nto databases whose data do not satisfy the ICs declared on\nthe database schema [1, 14, 4]. All these approaches are\nbased on the following principle: schema is stronger than\ndata. 
In other words, the database schema (i.e., the set of integrity\nconstraints) is considered as the actually reliable information\n(strong knowledge), while data are considered as\ninformation to be revised (weak knowledge). Therefore, the\nproblem amounts to deciding how to \"repair\" (i.e., change)\ndata in order to reconcile them with the information expressed\nin the schema. Therefore, the intuitive semantics of\nconsistent query answering can be expressed as follows: a\ntuple t is a consistent answer to a query q in an inconsistent\ndatabase D if t is an answer to q in all the repairs of D,\ni.e., in all the possible databases obtained by (minimally)\nmodifying the data in D to eliminate violations of ICs.\nExample 1. Let D = {r(a, b)} be a database whose\nschema contains the declaration of a key dependency on the\nfirst attribute of r. Since the database instance does not violate\nthe key dependency on r, the only repair of the database\n792\nis D itself. Hence, the following query q(X, Y ) : r(X, Y )\nhas the consistent answer t = a, b . Now, let D be the\ndatabase instance obtained by adding the fact r(a, c) to D.\nD is inconsistent with the key dependency, and has two possible\nrepairs: {r(a, b)} and {r(a, c)}. Since there is no tuple\nwhich is an answer to q in both repairs, it follows that there\nare no consistent answers to the query q in D . In contrast,\nobserve that the query q (X) : r(X, Y ) has the answer a\nboth in D and in D , which can be therefore considered consistent\n.\nRecent studies in this area have established declarative semantic\ncharacterizations of consistent query answering over\nrelational databases, decidability and complexity results for\nconsistent query answering, as well as techniques for query\nprocessing [1, 6, 14, 4, 3, 5]. In particular, it has been shown\nthat computing consistent answers of conjunctive queries\n(CQs) is coNP-hard in data complexity, i.e., in the size of\nthe database instance, even in the presence of very restricted\nforms of ICs (single, unary keys).\nFrom the algorithmic viewpoint, the approach mainly\nfollowed is query answering via query rewriting: (i) First,\nthe query that must be processed (usually a conjunctive\nquery) is reformulated in terms of another, more complex\nquery. Such a reformulation is purely intensional, i.e., the\nrewritten query is independent of the database instance; (ii)\nThen, the reformulated query is evaluated over the database\ninstance. Due to the semantic nature and the inherent complexity\nof consistent query answering, Answer Set Programming\n(ASP) is usually adopted in the above reformulation\nstep [14, 3, 5], and stable model engines like DLV [15] can\nbe used for query processing.\nAn orthogonal approach to consistent query answering is\nthe one followed by recent theoretical works [1, 6, 13], whose\naim is to identify subclasses of CQs whose consistent answers\ncan be obtained by rewriting the query in terms of a first-order\n(FOL) query. The advantage of such an approach is\ntwofold: first, this technique allows for computing consistent\nanswers in time polynomial in data complexity (i.e., for such\nsubclasses of queries, consistent query answering is compu-tationally\nsimpler than for the whole class of CQs); second,\nconsistent query answering in these cases can be performed\nthrough standard database technology, since the FOL query\nsynthesized can be easily translated into SQL and then evaluated\nby any relational DBMS. 
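For the queries of Example 1, for instance, such a SQL translation is immediate (a hedged sketch, not taken from the paper, assuming r is stored as a table with columns A1, A2 and the key dependency key(r) = {1}):
-- Consistent answers to q'(X) :- r(X,Y): every repair keeps at least one
-- tuple per key value, so the projection on the key is already consistent.
SELECT DISTINCT A1 FROM r;
-- Consistent answers to q(X,Y) :- r(X,Y): a tuple survives in every repair
-- only if no other tuple agrees with it on the key and differs elsewhere.
SELECT r1.A1, r1.A2
FROM r r1
WHERE NOT EXISTS (SELECT * FROM r r2
                  WHERE r2.A1 = r1.A1 AND r2.A2 <> r1.A2);
On the instance D' of Example 1, the first query returns a and the second returns no tuple, matching the consistent answers discussed above.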
On the other hand, this approach\nis only limited to polynomial subclasses of the problem\n. In particular, Fuxman and Miller in [13] have studied\ndatabases with key dependencies, and have identified a\nbroad subclass of CQs that can be treated according to the\nabove strategy.\nIn this paper we study consistent query answering in the\npresence of key dependencies and exclusion dependencies, a\nwell-known class of ICs. Notice that exclusion dependencies\nare not only typical of relational database schemas, but are\nalso relevant and very common in languages for conceptual\nmodeling, e.g., ontology languages [2]: indeed such dependencies\nallow for modeling partitioning/disjointness of entities\n. This makes the study of exclusion dependencies particularly\nimportant for the broad applicability of consistent\nquery answering.\nOur contribution can be summarized as follows:\n1. We prove that consistent answering of conjunctive\nqueries for databases with only exclusion dependencies\nis coNP-hard in data complexity, i.e., the problem\npresents the same complexity lower bound already\nknown for databases with only key dependencies [6, 4].\n2. We define a method for consistent query answering\nunder key dependencies and exclusion dependencies\nbased on the rewriting of the query in Datalog\n\n[10], a\nwell-known extension of Datalog that allows for using\nnegation in the body of program rules. The rewriting\nextends the one defined in [5] to the presence of\nexclusion dependencies. The rewriting is used by INFOMIX\n,\n1\na system for the integration of inconsistent\ndata, based on the use of the DLV system.\n3. We extend the work of [13] to the presence of exclusion\ndependencies in the database schema. In particular\n, we identify the class of KE-simple queries (a\nsubclass of CQs) that can be first-order rewritten in\nthe presence of both key dependencies and exclusion\ndependencies, and define an algorithm for computing\nthe first-order rewriting of a query belonging to such\na class of queries. We point out that our algorithm,\nthough inspired by the one of [13], in the presence of\nonly key dependencies applies to a broader class of\nqueries than the class considered first-order rewritable\nin [13]. Therefore, the technique of the present paper\nis relevant also for consistent query answering under\nonly key dependencies.\n4. We compare the relative efficiency of these two methods\nfor processing KE-simple queries. To this aim,\nwe have realized a software module that implements\nthe above two rewriting methods.\nThen, we have\ncompared query answering based on the rewriting in\nDatalog\n\nand evaluation in the DLV system [15] with\nthe method based on first-order rewriting and query\nevaluation on MySQL DBMS. We have conducted our\nexperiments on a real and large database of the computer\nscience engineering degrees of the University of\nRome \"La Sapienza\".\nOur experimental results clearly show, for KE-simple\nqueries, the computational advantage of the specialized first-order\nbased technique over the more general one based on\nDatalog\n\n. In particular, the results indicate that the advantage\nof the first-order based technique grows with the number\nof database tuples that violate the ICs. Such results thus\nprovide, in a general sense, an experimental validation of the\nfirst-order based approach: its computational advantage is\nnot only theoretical, but also can be effectively measured\nwhen applied to practical, realistic scenarios. 
However, it\nturns out that the general method based on Datalog\n\n, although\nnot specifically tailored for KE-simple queries, proves\nparticularly efficient in the presence of few data inconsistencies\n.\nIn the next section, we briefly introduce the formal framework\nof consistent query answering. In Section 3, we prove\ncoNP-hardness of consistent query answering under only exclusion\ndependencies, and present our Datalog\n\nrewriting\nand our algorithm for first-order rewriting in the presence\nof key and exclusion dependencies. In Section 4, we present\nour experimental results, and in Section 5 we address related\nwork and conclude the paper.\n1\nhttp://sv.mat.unical.it/infomix.\n793\nINCONSISTENT DATABASES AND CONSISTENT ANSWERS\nSyntax. A database schema S is a triple A, K, E , where:\nA is a relational signature.\nK is a set of key dependencies over A. A key dependency\n(KD) over A is an expression of the form\nkey(r) = {i\n1\n, . . . , i\nk\n}, where r is a relation of A, and,\nif n is the arity of r, 1 i\nj\nn for each j such that\n1 j k. We assume that at most one KD is specified\nover a relation r.\nE is a set of exclusion dependencies over A. An exclusion\ndependency (ED) over A is an expression of the\nform r\n1\n[i\n1\n, . . . , i\nk\n] r\n2\n[j\n1\n, . . . , j\nk\n] = , where r\n1\n, r\n2\nare\nrelations of A, and, if n\n1\nand n\n2\nare the arities of r\n1\nand r\n2\nrespectively, for each\nsuch that 1\nk,\n1 i n\n1\nand 1 j n\n2\n.\nA term is either a variable or a constant symbol. An\natom is an expression of the form p(t\n1\n, . . . , t\nn\n) where p is a\nrelation symbol of arity n and t\n1\n, . . . , t\nn\nis a sequence of n\nterms (either variables or constants). An atom is called fact\nif all the terms occurring in it are constants. A database\ninstance D for S is a set of facts over A. We denote as r\nD\nthe set {t | r(t) D}.\nA conjunctive query of arity n is an expression of the form\nh(x\n1\n, . . . , x\nn\n) : a\n1\n, . . . , a\nm\n, where the atom h(x\n1\n, . . . , x\nn\n),\nis called the head of the query (denoted by head(q)), and\na\n1\n, . . . , a\nm\n, called the body of the query (and denoted by\nbody(q)), is a set of atoms, such that all the variables occurring\nin the query head also occur in the query body. In a\nconjunctive query q, we say that a variable is a head variable\nif it occurs in the query head, while we say that a\nvariable is existential if it only occurs in the query body.\nMoreover, we call an existential variable shared if it occurs\nat least twice in the query body (otherwise we say that it\nis non-shared). A FOL query of arity n is an expression\nof the form {x\n1\n, . . . , x\nn\n| (x\n1\n, . . . , x\nn\n)}, where x\n1\n, . . . , x\nn\nare variable symbols and is a first-order formula with free\nvariables x\n1\n, . . . , x\nn\n.\nSemantics. First, we briefly recall the standard evaluation\nof queries over a database instance. Let q be the CQ\nh(x\n1\n, . . . , x\nn\n) : a\n1\n, . . . , a\nm\nand let t = c\n1\n, . . . , c\nn\nbe a tuple\nof constants. A set of facts I is an image of t w.r.t. q\nif there exists a substitution of the variables occurring in\nq such that (head(q)) = h(t) and (body(q)) = I. Given a\ndatabase instance D, we denote by q\nD\nthe evaluation of q\nover D, i.e., q\nD\nis the set of tuples t such that there exists\nan image I of t w.r.t. q such that I D.\nGiven a FOL query q and a database instance D, we denote\nby q\nD\nthe evaluation of q over D, i.e., q\nD\n= {t\n1\n, . . . , t\nn\n|\nD |= (t\n1\n, . . . 
, t\nn\n)}, where each t\ni\nis a constant symbol and\n(t\n1\n, . . . , t\nn\n) is the first-order sentence obtained from by\nreplacing each free variable x\ni\nwith the constant t\ni\n.\nThen, we define the semantics of queries over inconsistent\ndatabases.\nA database instance D violates the\nKD key(r) = {i\n1\n, . . . , i\nk\n} iff there exist two distinct facts\nr(c\n1\n, . . . , c\nn\n), r(d\n1\n, . . . , d\nn\n) in D such that c\ni\nj\n= d\ni\nj\nfor\neach j such that 1 j k. Moreover, D violates the\nED r\n1\n[i\n1\n, . . . , i\nk\n] r\n2\n[j\n1\n, . . . , j\nk\n] = iff there exist two facts\nr\n1\n(c\n1\n, . . . , c\nn\n), r\n2\n(d\n1\n, . . . , d\nm\n) in D such that c\ni\n= d\nj\nfor\neach\nsuch that 1 k.\nLet S = A, K, E be a database schema. A database\ninstance D is legal for S if D does not violate any KD in K\nand does not violate any ED in E.\nA set of ground atoms D is a repair of D under S iff: (i)\nD D; (ii) D is legal for S; (iii) for each D such that\nD D D, D is not legal for S. In words, a repair for\nD under S is a maximal subset of D that is legal for S.\nLet q be a CQ. A tuple t is a consistent answer to q in D\nunder S iff, for each repair D of D under S, t q\nD\n.\nExample 2. Consider\nthe\ndatabase\nschema\nS\n=\nA, K, E ,\nwhere A comprises the relations\nJournal (title, editor),\nConfPr (title, editor)\nand\nEditor (name, country),\nK\ncomprises\nthe\ndependencies\nkey(Journal)\n=\n{1},\nkey(ConfPr )\n=\n{1},\nkey(Editor )\n=\n{1},\nE\ncomprises the dependency\nJournal [1] ConfPr [1] = .\nConsider the database\ninstance D described below\n{Journal(TODS, ACM), Journal(TODS, IEEE),\nEditor (ACM, USA), ConfPr (PODS05, ACM),\nConfPr (PODS05, SV), Editor (IEEE, USA)}.\nIt is easy to see that D is not consistent with the KDs on\nJournal and ConfPr of S. Then, the repairs of D under S\nare:\n{Journal(TODS, ACM), ConfPr (PODS05, ACM),\nEditor (ACM, USA), Editor (IEEE, USA)}\n{Journal(TODS, ACM), ConfPr (PODS05, SV),\nEditor (ACM, USA), Editor (IEEE, USA)}\n{Journal(TODS, IEEE), ConfPr (PODS05, ACM),\nEditor (ACM, USA), Editor (IEEE, USA)}\n{Journal(TODS, IEEE), ConfPr (PODS05, SV),\nEditor (ACM, USA), Editor (IEEE, USA)}.\nLet q(x, z)\n:\nJournal (x, y), Editor (y, z) be a user\nquery.\nThe consistent answers to q in D under S are\n{ TODS, USA }.\nQUERY ANSWERING\nComputational Complexity. The problem of computing\nconsistent answers to conjunctive queries over inconsistent\ndatabases in the presence of KDs (under the repair\nsemantics introduced in Section 2) is coNP-hard in data\ncomplexity [4, 6]. In the following, we prove that such a\nproblem is coNP-hard in data complexity also for schemas\nin which only EDs occur\n2\n.\nTheorem 3. Let S = A, , E be a database schema containing\nonly EDs, D a database instance for S, q a CQ of\narity n over S, and t an n-tuple of constants. The problem\nof establishing whether t is a consistent answer to q in D\nunder S is coNP-hard with respect to data complexity.\nProof (sketch). We prove coNP-hardness by reducing the\n3-colorability problem to the complement of our problem.\nConsider a graph G = V, E with a set of vertices V and\nedges E. We define a relational schema S = A, , E where\nA consists of the relation edge of arity 2, and the relation col\nof arity 5, and E contains the dependencies col[3]col[4] = ,\ncol[3] col[5] = , col[4] col[5] = . 
The instance D is\ndefined as follows:\nD = {col(n, 1, n,\n\n,\n\n), col(n, 2,\n\n, n,\n\n), col(n, 3,\n\n,\n\n, n)|\nn V } {edge(x, y)| x, y E}.\n2\nWe consider the decision problem associated to query answering\n(see e.g., [6])\n794\nWhere each occurrence of the meta-symbol\ndenotes\na different\nconstant not occurring elsewhere in the database. Intuitively\n, to represent the fact that vertex n V is assigned\nwith color i {1, 2, 3}, D assigns to col a tuple in which i\noccurs as second component and n occurs as first and also\nas 2 + i-th component. The EDs of S impose that consistent\ninstances assign no more than one color to each node.\nFinally, we define the query\nq\n\nedge(x, y), col(x, z, w\n1\n, w\n2\n, w\n3\n), col(y, z, w\n4\n, w\n5\n, w\n6\n).\nOn the basis of the above construction it is possible to show\nthat G is 3-colorable (i.e., for each pair of adjacent vertices,\nthe vertices are associated with different colors) if and only if\nthe empty tuple\nis not a consistent answer to q in D under\nS (i.e., the boolean query q has a negative answer).\nDatalog\n\nRewriting. We now provide a sound and\ncomplete query rewriting technique for consistent query answering\nin the presence of key and exclusion dependencies.\nTo this aim, we make use of Datalog\n\n, i.e., Datalog enriched\nwith (unstratified) negation, under stable model semantics\n[10]. From a computational point of view, Datalog\n\nis coNP-complete\nwith respect to data complexity, and therefore is\nwell suited for dealing with the high computational complexity\nof our problem.\nThe rewriting that we present in the following extends the\none proposed in [4] for CQs specified over database schemas\nwith KDs, in order to properly handle the presence of EDs.\nThe rewriting is employed in the system INFOMIX. Anal-ogously\nto other proposals that solve consistent query answering\nvia query rewriting (although for different classes\nof constraints and query languages, see, e.g., [14, 3]), the\nbasic idea of the technique is to encode the constraints of\nthe relational schema into a Datalog\n\nprogram, such that\nthe stable models of the program yield the repairs of the\ndatabase instance D.\nDefinition 4. Given a CQ\n3\nq and a schema S, the\nDatalog\n\nprogram (q, S) is defined as the following set of\nrules\n4\n:\n1. the rule corresponding to the definition of q;\n2. for each relation r S, the rules\nr(~\nx, ~\ny)\n:\nr\nD\n(~\nx, ~\ny) , not r(~\nx, ~\ny)\nr(~\nx, ~\ny)\n:\nr\nD\n(~\nx, ~\ny) , r(~\nx, ~\nz) , y\n1\n= z\n1\n\nr(~\nx, ~\ny)\n:\nr\nD\n(~\nx, ~\ny) , r(~\nx, ~\nz) , y\nm\n= z\nm\nwhere: in r(~\nx, ~\ny) the variables in ~\nx correspond to the\nattributes constituting the key of the relation r; ~\ny =\ny\n1\n, . . . , y\nm\nand ~\nz = z\n1\n, . . . , z\nm\n.\n3. for\neach\nexclusion\ndependency\n(r[i\n1\n, . . . , i\nk\n] s[j\n1\n, . . . , j\nk\n]) = in E, with r = s, the\nrules:\nr(~\nx, ~\ny)\n:\nr\nD\n(~\nx, ~\ny) , s(~\nx, ~\nz)\ns(~\nx, ~\ny)\n:\ns\nD\n(~\nx, ~\ny) , r(~\nx, ~\nz)\n3\nThe present rewriting is not actually restricted to CQs,\nsince it can be immediately extended to general Datalog\n\nqueries.\n4\nWithout loss of generality, we assume that the attributes\nin the key precede all other attributes in r, that i\n1\n= j\n1\n=\n1, . . . , i\nk\n= j\nk\n= k,\n1\n= 1, . . . ,\nh\n= h, and m\n1\n= h +\n1, . . . , m\nh\n= h + h.\nwhere ~\nx = x\n1\n, . . . , x\nk\n, i.e., the variables in ~\nx correspond\nto the sequence of attributes of r and s involved\nin the ED.\n4. 
for\neach\nexclusion\ndependency\nr[\n1\n, . . . ,\nh\n]\nr[m\n1\n, . . . , m\nh\n] = in E, the rules:\nr(~\nx, ~\ny, ~\nz)\n:\nr\nD\n(~\nx, ~\ny, ~\nz) , r(~\ny, ~\nw\n1\n, ~\nw\n2\n) ,\nr(~\nx, ~\ny, ~\nz)\n:\nr\nD\n(~\nx, ~\ny, ~\nz) , r( ~\nw\n1\n, ~\nx, ~\nw\n2\n) ,\nr(~\nx, ~\nx, ~\nz)\n:\nr\nD\n(~\nx, ~\nx, ~\nz).\nFurthermore, we denote with (D) the database instance\nobtained from D by replacing each predicate symbol r with\nr\nD\n.\nInformally, for each relation r, (q, S) contains (i) a relation\nr\nD\nthat represents r\nD\n; (ii) a relation r that represents\na subset of r\nD\nthat is consistent with the KD for r and the\nEDs that involve r; (iii) an auxiliary relation r that represents\nthe \"complement\" of r, i.e., the subset of r\nD\nthat\ntogether with r results inconsistent with the EDs and KDs\non the schema. Notice that the extension of r depends on\nthe choice made for r (and vice-versa), and that such choices\nare made in a non-deterministic way (enforced by the use of\nthe unstratified negation). The above rules force each stable\nmodel M of (q, S) (D) to be such that r\nM\nis a maximal\nsubset of tuples from r\nD\nthat are consistent with both the\nKD for r and the EDs in E that involve r.\nExample 2.(contd.) The Datalog\n\nrewriting (q, S) of the\nquery q(x, z) : Journal(x, y), Editor (y, z) is the following\nprogram:\nq(x, z)\n:\nJournal(x, y), Editor (y, z)\nJournal(x, y)\n:\nJournal\nD\n(x, y) , not Journal(x, y)\nEditor (x, y)\n:\nEditor\nD\n(x, y) , not Editor (x, y)\nConfPr (x, y)\n:\nConfPr\nD\n(x, y) , not ConfPr (x, y)\nJournal(x, y)\n: Journal\nD\n(x, y) , Journal(x, z) , z = y\nEditor (x, y)\n: Editor\nD\n(x, y) , Editor (x, z) , z = y\nConfPr (x, y)\n: ConfPr\nD\n(x, y) , ConfPr (x, z) , z = y\nJournal(x, y)\n: Journal\nD\n(x, y) , ConfPr (x, z)\nConfPr (x, y)\n: ConfPr\nD\n(x, y) , Journal(x, z)\nThe first rule of the rewriting encodes the query. The second\n, third and fourth rule establish the relationship between\neach relation and the corresponding complementary predicate\n. The fifth, sixth, and seventh rule encode the KDs of\nS, whereas the last two rules encode the ED.\nWe now state correctness of our encoding with respect to\nthe semantics of consistent query answering.\nTheorem 5. let S = A, K, E be a database schema, D\nbe a database instance for S, and q be a CQ over S. A tuple\nt is a consistent answer to q in D under S iff t q\nM\nfor\neach stable model M of (q, S) (D).\nFrom the above theorem and Theorem 3 it follows that the\nconsistent query answering problem under KDs and EDs is\ncoNP-complete in data complexity.\nFOL Rewriting. Let us now consider a different approach\nto consistent query answering, which aims at identifying\nsubclasses of queries for which the problem is tractable.\nThis is the line followed in [1, 6, 13]. In particular, in [13]\nthe authors define a subclass of CQs, called C\ntree\n, for which\n795\nthey prove tractability of consistent query answering in the\npresence of KDs, and provide a FOL rewriting technique.\nThe class C\ntree\nis based on the notion of join graph: a join\ngraph of a query q is the graph that contains (i) a node N\ni\nfor every atom in the query body, (ii) an arc from N\ni\nto N\nj\niff an existential shared variable occurs in a non-key position\nin N\ni\nand occurs also in N\nj\n, (iii) an arc from N\ni\nto N\ni\niff an\nexistential shared variable occurs at least twice in N\ni\n, and\none occurrence is in a non-key position. 
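The join graph is mechanical to compute. The following Python sketch (function and parameter names are ours and merely illustrate the definition just recalled, they are not taken from [13]) builds the graph from the list of body atoms and tests it for cycles; key positions are 1-based, as in the key dependencies of Section 2, and every term not listed among the constants is treated as a variable.

def join_graph(body, keys, head_vars, constants=frozenset()):
    """Build the join graph of a conjunctive query.

    body      : list of atoms, each a pair (relation_name, tuple_of_terms)
    keys      : dict mapping each relation name to the set of its key positions,
                1-based as in key(r) = {i1, ..., ik}
    head_vars : set of variables occurring in the query head
    constants : terms listed here are constants; every other term is a variable
    Returns (nodes, arcs): node i stands for the i-th body atom, arcs is a set
    of directed edges (i, j), possibly with i = j (self-loops).
    """
    occurrences = {}
    for _, args in body:
        for t in args:
            if t not in constants and t not in head_vars:
                occurrences[t] = occurrences.get(t, 0) + 1
    # existential shared variables occur at least twice in the body
    shared = {v for v, n in occurrences.items() if n >= 2}

    arcs = set()
    for i, (r_i, args_i) in enumerate(body):
        key_i = keys.get(r_i, set())
        nonkey_terms_i = {args_i[p - 1] for p in range(1, len(args_i) + 1)
                          if p not in key_i}
        for v in shared & nonkey_terms_i:
            if list(args_i).count(v) >= 2:            # condition (iii): self-loop
                arcs.add((i, i))
            for j, (_, args_j) in enumerate(body):    # condition (ii): arc to every
                if j != i and v in args_j:            # other atom also using v
                    arcs.add((i, j))
    return list(range(len(body))), arcs


def is_acyclic(nodes, arcs):
    """Cycle test by depth-first search; acyclicity is condition (c) of C_tree and C+_tree."""
    succ = {n: [j for (i, j) in arcs if i == n] for n in nodes}
    colour = {n: "white" for n in nodes}

    def visit(n):
        colour[n] = "grey"
        for m in succ[n]:
            if colour[m] == "grey" or (colour[m] == "white" and not visit(m)):
                return False
        colour[n] = "black"
        return True

    return all(colour[n] != "white" or visit(n) for n in nodes)

For the query q(x, z) :- Journal(x, y), Editor(y, z) of Example 2, with key position 1 on both relations, the call join_graph([("Journal", ("x", "y")), ("Editor", ("y", "z"))], {"Journal": {1}, "Editor": {1}}, {"x", "z"}) yields a single arc from the Journal atom to the Editor atom, and is_acyclic confirms condition (c), in line with the join graph displayed for this query further below.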
According to [13],\nC\ntree\nis the class of conjunctive queries (a) without repeated\nrelation symbols, (b) in which every join condition involves\nthe entire key of at least one relation and (c) whose join\ngraph is acyclic. As pointed out in [13], this class of queries\nis very common, since cycles are rarely present in queries\nused in practice. However, no repeated symbols may occur\nin the queries, and queries must have joins from non-key\nattributes of a relation to the entire key of another one.\nWe now extend the work of [13] as follows:\nWe refine the class C\ntree\nby allowing join conditions\nin which not necessarily the entire key of one relation\nhas to be involved, but it is sufficient that, for each\npair of attributes, at least one attribute must belong\nto a key (i.e., we allow for joins involving portions of\nkey). In such a way, we obtain a new class, called C\n+\ntree\n,\nlarger than C\ntree\n, for which consistent query answering\nis polynomial in the presence of KDs. In other words,\nC\n+\ntree\nis the class of conjunctive queries for which only\ncondition (a) and (c) above hold.\nWe refine the class C\n+\ntree\nin order to obtain a class of\nqueries, called KE-simple, for which consistent query\nanswering is polynomial in the presence of both KDs\nand also EDs.\nWe provide a new algorithm for computing the FOL\nrewriting for KE-simple queries. In the algorithm, we\nexploit the notion of join graph of [13], but we enrich\nthe structure of the graph by associating to each node\nan adornment which specifies the different nature of\nterms in the atoms (see below), in order to deal with\nKE-simple queries.\nLet us describe in detail our technique. Henceforth, given a\nCQ q, we denote by R\nq\nthe set of relation symbols occurring\nin body(q). Given a database schema S = A, K, E and a CQ\nq, we denote by O\nE\n(q) the set of relation symbols O\nE\n(q) =\n{s | r[j\n1\n, . . . , j\nk\n] s[\n1\n, . . . ,\nk\n] = E and r R\nq\n}. In\nwords, O\nE\n(q) contains each relation symbol s A such that\nthere exists an exclusion dependency between s and r in E,\nwhere r is a relation symbol occurring in body(q).\nDefinition 6. Let S = A, K, E be a database schema.\nA conjunctive query q is KE-simple if q C\n+\ntree\n, and\nthere exists no pair of relation symbols r, s in O\nE\n(q)\nsuch that there exists an exclusion dependency between\nr and s in E,\nthere exists no relation symbol r in O\nE\n(q) such that\nthere exists r[i\n1\n, . . . , i\nk\n] s[j\n1\n, . . . , j\nk\n] = in E, and\neither key(r)\n{i\n1\n, . . . , i\nk\n} or key(s)\n{j\n1\n, . . . , j\nk\n},\nwhere s is a relation symbol in R\nq\n.\nIn words, a query q is KE-simple if it belongs to the class\nC\n+\ntree\n, and if both there are no EDs between relations that\nare in O\nE\n(q), and each ED between a relation r R\nq\nand\na relation s O\nE\n(q) does not involve non-key attributes of\nr or s. Notice that this last condition does not limit the\napplicability of our approach in many practical cases. For\nexample, in relational databases obtained from ER-schemas,\nEDs are typically specified between keys.\nFor KE-simple CQs, we present in the following a query\nrewriting algorithm which, given a query q, produces a FOL\nrewriting, whose evaluation over any database instance D\nfor the database schema S returns the consistent answers to\nq in D under S. 
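Before turning to this algorithm, note that membership in the class is a purely syntactic test on the query and the schema. The Python sketch below spells out our reading of Definition 6 (all names are ours): O_E(q) is collected from the exclusion dependencies, no ED may connect two relations of O_E(q), and an ED linking a query relation to a relation of O_E(q) may only mention key attributes on both sides, following the verbal gloss that such EDs do not involve non-key attributes of r or s. The C+_tree premise is passed in as a Boolean; it can be obtained from the join-graph sketch given earlier together with the check for repeated relation symbols.

def o_e(query_relations, eds):
    """O_E(q): relations connected by some ED to a relation of the query body.
    Each ED is given as (r, r_positions, s, s_positions), standing for
    r[r_positions] and s[s_positions] being disjoint; positions are 1-based."""
    out = set()
    for r, _, s, _ in eds:
        if r in query_relations:
            out.add(s)
        if s in query_relations:
            out.add(r)
    return out


def is_ke_simple(query_relations, eds, keys, in_c_plus_tree):
    """Syntactic membership test following our reading of Definition 6."""
    if not in_c_plus_tree:
        return False
    oe = o_e(query_relations, eds)
    for r, r_pos, s, s_pos in eds:
        # first condition: no ED between two relations of O_E(q)
        if r in oe and s in oe:
            return False
        # second condition: an ED between a query relation and a relation of
        # O_E(q) must involve only key attributes of both relations
        touches_query = (r in query_relations and s in oe) or \
                        (s in query_relations and r in oe)
        if touches_query and not (set(r_pos) <= keys.get(r, set())
                                  and set(s_pos) <= keys.get(s, set())):
            return False
    return True

For the schema of Example 2, is_ke_simple({"Journal", "Editor"}, [("Journal", (1,), "ConfPr", (1,))], {"Journal": {1}, "ConfPr": {1}, "Editor": {1}}, True) returns True, in line with the claim made later that this query is KE-simple.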
The basic idea of the algorithm is to specify\na set of conditions, expressible in FOL, that, if verified over\na database instance D, for a given tuple t, guarantee that\nin any repair of D there is an image of t w.r.t q, i.e., t is\na consistent answer to q in D. We point out that, for non-KE\n-simple CQs, such conditions cannot be specified in FOL.\nObserve that, in our approach, the FOL rewriting is then in\nturn translated into SQL, and query evaluation is performed\nby means of standard DBMS query answering techniques.\nThis further encoding does not present particular difficulties,\nand due to space limit we omit such transformation.\nIn order to construct our join graph we need the following\ndefinition.\nDefinition 7. Let S = A, K, E be a database schema,\nq be a CQ, and a = r(x\n1\n, . . . , x\nn\n) be an atom (of arity n)\noccurring in R\nq\n. Then, let key(r) = {i\n1\n, . . . , i\nk\n} belong to\nK, and let 1 i n. The type of the i-th argument of a in\nq, denoted by type(a, i, q) is defined as follows:\n1. If i\n1\ni i\nk\n, then:\nif x\ni\nis a head variable of q, a constant, or an existential\nshared variable, then type(a, i, q) = KB;\nif x\ni\nis an existential non-shared variable of q,\nthen type(a, i, q) = KU.\n2. Otherwise (i /\n{i\n1\n, . . . , i\nk\n}):\nif x\ni\nis a head variable of q or a constant, then\ntype(a, i, q) = B;\nif x\ni\nis an existential shared variable of q, then\ntype(a, i, q) = S;\nif x\ni\nis an existential non-shared variable of q,\nthen type(a, i, q) = U.\nTerms typed by KB or B are called bound terms, otherwise\nthey are called unbound. We call the typing of a in q\nthe expression of the form r(x\n1\n/t\n1\n, . . . , x\nn\n/t\nn\n), where each\nt\ni\nis the type of the argument x\ni\nin q.\nThe following algorithm KEFolRewrite computes the FOL\nrewriting to a KE-simple conjunctive query q. In the algorithm\n, JG(q) denotes the join graph of q, in which each node\nN\ni\nis labelled with the typing of the corresponding atom a\ni\nin q. Furthermore, roots(JG(q)) denotes the set of nodes\nthat are roots in JG(q) (notice that for KE-simple queries\nthe join graph is a forest, since it is acyclic).\nAlgorithm KEFolRewrite(q, S)\nInput: KE-simple CQ q (whose head variables are x\n1\n, . . . , x\nn\n);\nschema S = A, K, E\nOutput: FOL query (representing the rewriting of q)\nbegin\n796\nAlgorithm FolTree(N ,E)\nInput: node N of JG(q); set of EDs E\nOutput: FOL formula\nbegin\nlet a = r(x\n1\n/t\n1\n, . . . , x\nn\n/t\nn\n) be the label of N ;\nfor i := 1 to n do\nif t\ni\n{KB, B} then v\ni\n:= x\ni\nelse v\ni\n:= y\ni\n, where y\ni\nis a new variable\nif each argument of a is of type B or KB then f\n1\n:= r(x\n1\n, . . . , x\nn\n)\nelse begin\nlet i\n1\n, . . . , i\nm\nbe the positions of the arguments of a of type S, U, KU;\nf\n1\n:= y\ni\n1\n, . . . , y\ni\nm\n. r(v\n1\n, . . . , v\nn\n)\nend;\nfor each ED r[j\n1\n, . . . , j\nk\n] s[\n1\n, . . . ,\nk\n] = E do\nbegin\nlet m be the arity of s;\nfor i := 1 to m do\nif i {\n1\n, . . . ,\nk\n} then if i =\nc\nthen z\ni\n= v\nj\nc\nelse z\ni\n= y\ni\nwhere y\ni\nis a new variable;\nlet y\ni\n1\n, . . . , y\ni\nk\nbe the new variables above introduced;\nf\n1\n= f\n1\ny\ni\n1\n, . . . , y\ni\nk\n. s(z\n1\n, . . . , z\nm\n)\nend\nif there exists no argument in a of type B or S then return f\n1\nelse begin\nlet p\n1\n, . . . , p\nc\nbe the positions of the arguments of a of type U, S or B;\nlet\n1\n, . . . 
,\nh\nbe the positions of the arguments of a of type B;\nfor i := 1 to c do\nif t\np\ni\n= S then z\np\ni\n:= x\np\ni\nelse z\np\ni\n:= y\ni\n, where y\ni\nis a new variable\nfor i := 1 to n do\nif t\ni\n{KB, KU} then w\ni\n:= v\ni\nelse w\ni\n:= z\ni\n;\nf\n2\n:= z\np\n1\n, . . . , z\np\nc\n. r(w\n1\n, . . . , w\nn\n)\n\n\nN jgsucc(N )\nFolTree(N )\n\n\ni{\n1\n,...,\nh\n}\nw\ni\n= x\ni\nreturn f\n1\nf\n2\nend\nend\nFigure 1: The algorithm FolTree\ncompute JG(q);\nreturn {x\n1\n, . . . , x\nn\n|\nN roots(JG(q))\nFolTree(N, E)}\nend\nBasically, the algorithm builds the join graph of q and\nthen builds the first-order query by invoking the algorithm\nFolTree on all the nodes that are roots of the join graph.\nThe algorithm FolTree is defined in Figure 1. Roughly\nspeaking, the algorithm FolTree(N, E) returns a first-order\nformula that constitutes the encoding of the whole subtree\nof the join graph of the query whose root is the node N .\nTo do that, the algorithm computes two subformulas f\n1\nand\nf\n2\n. The formula f\n1\ncontains an atom whose predicate is\nthe predicate r labelling the node N , in which the unbound\nvariables of r are renamed with new existentially quantified\nvariables. Furthermore, f\n1\ncontains an atom of the form\ny\ni\n1\n, . . . , y\ni\nk\n. s(z\n1\n, . . . , z\nm\n) for each ED that involves r and\na relation s. Intuitively, when evaluated over a database instance\nD, each such atom checks that there are no facts of\nthe form s(t\ns\n) D that violate the ED together with a fact\nof the form r(t\nr\n) D, which is in an image I of a tuple\nt w.r.t. the input query q, i.e., the atom guarantees that I\nis not contradicted w.r.t. the ED. The formula f\n2\nis empty\nonly when all non-key arguments of the atom r are existential\nnon-shared variables (i.e., of type U ). Otherwise, the\nformula f\n2\nis a universally quantified implication. In such\nan implication, the antecedent is an atom whose predicate\nis r, and the consequent is a conjunction of equality conditions\nand other subformulas: more precisely, there is an\nequality condition for each non-key argument in r of type\nB, and a subformula for each successor N of N in the join\ngraph of q, computed by recursively invoking FolTree on N .\nIntuitively, f\n2\nenforces the joins between r and each atom\nlabelling the successors of r in the join graph of q. At the\nsame time f\n2\nensures that, when evaluated over a database\ninstance D, if there exists a fact of the form r(t\nr\n) D that\nviolates the KD specified on r together with a fact of the\nform r(t\nr\n) D, which is in the image of a tuple t w.r.t. q,\nr(t\nr\n) belongs to another image of t w.r.t. q. In other words,\nthe atom guarantees that in any repair there exists an image\nof t (w.r.t. the KD on r). Such a check is iterated for other\nKDs by recursively invoking FolTree. The following example\nillustrates the way the algorithm works.\nExample 2.(contd.) It is easy to verify that the query\nq(x, z) : Journal(x, y), Editor (y, z) is KE-simple.\nJournal(x/KB, y/S) (N 1) - (N 2) Editor (y/KB, z/B)\nNow, by applying the algorithm KEFolRewrite and FolTree\nwe obtain:\nKEFolRewrite(q)\n=\n{x, z | FolTree(N 1)}\nFolTree(N 1)\n=\ny\n2\n. Journal(x, y\n2\n) y\n2\n. ConfPr (x, y\n2\n)\ny. Journal(x, y) (FolTree(N 2))\nFolTree(N 2)\n=\nEditor (y, z) y\n2\n. 
Editor (y, y\n2\n) y\n2\n= z.\n797\nrelations\nintegrity constraints\nfaculty/3\nexam plan/10\nkey(f aculty) = {1, 2}\nkey(plan status) = {1}\ncourse assignment/3\ndegree/5\nkey(exam plan) = {1}\nkey(positioning) = {1}\npositioning/2\ncourse/4\nkey(university) = {1}\nkey(prof data) = {1}\nplan status/2\nkey(exam type) = {1}\nkey(degree) = {1}\nprof data/3\nkey(course) = {1}\nkey(exam) = {2}\nuniversity/3\nkey(master exam) = {1}\nbachelor exam/2\nkey(bachelor exam) = {1}\nmaster exam/2\ncourse assignment[2] professor [1] =\nexam type/2\nmaster exam[1] bachelor exam[1] =\nexam/4\ncourse[3, 4] bachelor exam[1, 2] =\nFigure 2: A portion of the test database schema\nBy evaluating the rewriting over D we get { TODS, USA },\ni.e., the set of consistent answers to q in D under S.\nNext, we state soundness and completeness of the algorithm.\nTheorem 8. Let S = A, K, E be a database schema,\nq be a KE-simple conjunctive query over S, and q\nr\nbe the\nFOL rewriting returned by KEFolRewrite(q). Then, for every\ndatabase instance D for S, a tuple t is a consistent answer\nto q in D under S iff t q\nD\nr\n.\nAs a corollary, consistent query answering for KE-simple\nconjunctive queries over database schemas with KDs and\nEDs is polynomial in data complexity.\nEXPERIMENTS\nWe now present some experimental results comparing the\nFOL and the Datalog\n\nrewriting previously described. To\nperform the experiments, we implemented a rewriting module\nthat translates CQs issued over the database schema into\nboth FOL queries and Datalog\n\nqueries. FOL queries are\nin turn translated by the module into SQL queries. Then,\nwe ran the SQL queries on a MySQL 4.1.10 instance of the\ntest database, while we executed Datalog\n\nqueries on DLV\n[15]. The experiments were conducted on a double processor\nmachine, with 3 GHz Pentium IV Xeon CPU and 2 GB of\nmain memory, running the Linux operating system.\nThe test database holds information about the computer\nscience engineering degrees of the university of Rome \"La\nSapienza\" and contains 27 tables with an overall size of over\n200.000 tuples. In Figure 2, we present the portion of the\ntest database schema that is relevant for the queries (in the\nfigure, \"r/n\" indicates that relation r is of arity n).\nDue to space limits, we only report details about three of\nthe queries we tested:\nQ\n0\n=\nq(C) : -f aculty(C, U, INGEGNERIA ).\nQ\n2\n=\nq(S, D, P ) : -positioning(P S, P ), plan status(ST, DE),\nexam plan(C, S, P S, DT, ST, 1 , U 1, U 2, U 3, U 4).\nQ\n3\n=\nq(N, D, N P, CP ) : -master exam(C, N, T, 5 ),\nexam type(T, D),\nThe queries have been posed on various instances of the\ntest database with an increasing number of pairs of tuples\nviolating some ICs. Figure 3, shows experimental results.\nIn the charts 3(a), 3(b) and 3(c), the execution time of the\nSQL encoding and of the Datalog\n\nprogram are compared\nfor queries Q\n0\n, Q\n2\n, and Q\n3\n. As expected, from a certain\ninconsistency level on, the execution time of the Datalog\n\nencoding has an exponential blow-up; in contrast, the execution\ntime for the SQL encoding is constant on the average,\nand for Q\n3\n(Figure 3(b)) it decreases: although this might\nbe surprising, it turns out that some inconsistency allows the\nSQL engine to prune the search space for query answering.\nMoreover, the chart presented in Figure 3(d) compares, on\na logarithmic scale, the execution time of all queries at the\nhighest inconsistency level. 
It shows that the SQL encoding\nis always more efficient when the degree of data inconsistency\ngrows; however, it turns out that the method based\non Datalog\n\nand DLV proves particularly efficient in the\npresence of few data inconsistencies.\nCONCLUSIONS\nThe present work provides a general experimental validation\nof the first-order rewriting approach to the optimization\nof consistent query answering. Of course, the applicability\nof our technique is limited to the class of KE-simple queries.\nFor general CQs, the use of a more expressive, and compu-tationally\nharder, query language like Datalog\n\nis necessary.\nVery recently, the first prototype implementations of consistent\nquery answering have appeared, and the first efforts\ntowards optimization of query processing are emerging.\nWithin INFOMIX, several optimizations are currently under\ndevelopment to improve consistent query answering for\nmore expressive classes of queries [9, 8]. In this respect,\nbinding propagation techniques based on magic sets might\nsignificantly reduce execution time for Datalog\n\nprograms\non DLV [11], even if the coNP structure of the Datalog\n\nencoding suggests that the efficiency of the SQL rewriting\ncan be hardly reached (especially for a large number of inconsistencies\n).\nThe ConQuer system [12] implements an extension of the\ntechnique of [13] which allows to rewrite in SQL queries\nbelonging to the class C\ntree\nenriched with aggregates. Experiments\nshow that the overhead of evaluating rewritten\nqueries is not onerous if compared with evaluation of the\noriginal query over the inconsistent database. Therefore,\n[12] focuses on comparing standard query answering and\nconsistent query answering, while our experiments compare\ntwo different query answering techniques. In this respect,\nwe point out that optimization of our SQL rewriting was\noutside the scope of the present paper.\nFinally, Hippo [7] is a system for consistent answering of\nunion of conjunctive queries without existential variables in\nthe presence of denial constraints. Hence, this approach\nis different from our in terms of both query language and\nintegrity constraints allowed. Moreover, Hippo techniques\nare not based on rewritings.\nAs future work, we aim at extending our approach to other\nforms of ICs (e.g., foreign keys) and at optimizing the SQL\nrewriting produced by KEFolRewrite.\n798\n(a) Q\n0\nexecution time\n(b) Q\n3\nexecution time\n(c) Q\n2\nexecution time\n(d) SQL vs. Datalog\nFigure 3: Experimental Results\nACKNOWLEDGMENTS\nThis research has been partially supported by the Project\nINFOMIX (IST-2001-33570) funded by the EU.\n\nREFERENCES\n[1] Marcelo Arenas, Leopoldo E. Bertossi, and Jan\nChomicki. Consistent query answers in inconsistent\ndatabases. In Proc. of PODS'99, pages 6879, 1999.\n[2] Franz Baader, Diego Calvanese, Deborah McGuinness,\nDaniele Nardi, and Peter F. Patel-Schneider, editors.\nThe Description Logic Handbook: Theory,\nImplementation and Applications. Cambridge\nUniversity Press, 2003.\n[3] Loreto Bravo and Leopoldo Bertossi. Logic\nprogramming for consistently querying data\nintegration systems. In Proc. of IJCAI 2003, pages\n1015, 2003.\n[4] Andrea Cal`i, Domenico Lembo, and Riccardo Rosati.\nOn the decidability and complexity of query\nanswering over inconsistent and incomplete databases.\nIn Proc. of PODS 2003, pages 260271, 2003.\n[5] Andrea Cal`i, Domenico Lembo, and Riccardo Rosati.\nQuery rewriting and answering under constraints in\ndata integration systems. In Proc. 
of IJCAI 2003,\npages 1621, 2003.\n[6] Jan Chomicki and Jerzy Marcinkowski. On the\ncomputational complexity of minimal-change integrity\nmaintenance in relational databases. In Inconsistency\nTolerance, pages 119150, 2005.\n[7] Jan Chomicki, Jerzy Marcinkowski, and Slawomir\nStaworko. Computing consistent query answers using\nconflict hypergraphs. In Proc. of CIKM 2004, pages\n417426, 2004.\n[8] Chiara Cumbo, Wolfgang Faber, Gianluigi Greco, and\nNicola Leone. Enhancing the magic-set method for\ndisjunctive datalog programs. In Proc. ICLP 2004),\npages 371385, 2004.\n[9] Thomas Eiter, Michael Fink, Gianluigi Greco, and\nDomenico Lembo. Efficient evaluation of logic\nprograms for querying data integration systems. In\nProc. of ICLP'03, pages 163177, 2003.\n[10] Thomas Eiter, Georg Gottlob, and Heikki Mannilla.\nDisjunctive Datalog. ACM Trans. on Database\nSystems, 22(3):364418, 1997.\n[11] Wolfgang Faber, Gianluigi Greco, and Nicola Leone.\nMagic sets and their application to data integration.\nIn Proc. of ICDT 2005, pages 306320, 2005.\n[12] Ariel Fuxman, Elham Fazli, and Renee J. Miller.\nConquer: Efficient management of inconsistent\ndatabases. In Proc. of SIGMOD 2005, pages 155166,\n2005.\n[13] Ariel Fuxman and Renee J. Miller. First-order query\nrewriting for inconsistent databases. In Proc. of\nICDT 2005, pages 337351, 2005.\n[14] Gianluigi Greco, Sergio Greco, and Ester Zumpano. A\nlogical framework for querying and repairing\ninconsistent databases. IEEE Trans. on Knowledge\nand Data Engineering, 15(6):13891408, 2003.\n[15] Nicola Leone, Gerald Pfeifer, Wolfgang Faber,\nThomas Eiter, Georg Gottlob, Simona Perri, and\nFrancesco Scarcello. The DLV system for knowledge\nrepresentation and reasoning. ACM Trans. on\nComputational Logic, 2005. To appear.\n799\n", "keywords": "relational database;Query Rewriting;integrity constraints;query rewriting;consistent query answering;Computational Complexity;conjunctive queries;inconsistent database;Inconsistency;database schemas"} {"name": "56", "title": "Context-Aware Web Information Systems", "abstract": "Apart from completeness usability, performance and maintainability are the key quality aspects for Web information systems. Considering usability as key implies taking usage processes into account right from the beginning of systems development. Context-awareness appears as a promising idea for increasing usability of Web Information Systems. In the present paper we propose an approach to context-awareness of Web Information Systems that systematically distinguishes among the various important kinds of context. We show how parts of this context can be operationalized for increasing customers' usage comfort. Our approach permits designing Web information systems such that they meet high quality expectations concerning usability, performance and maintainability. We demonstrate the validity of our approach by discussing the part of a banking Web Information System dedicated to online home-loan application.", "fulltext": "Introduction\n1.1\nGenerations of Web Services\nUnderstanding Web Information Systems (WIS) as\nmonolithic and presentation-oriented query-answer\nsystems would be too simplistic. Implementing the\nindividual services of a WIS only on the basis of XML\nor (D)HTML suites suffices for the interface accessible\nby a particular customer. The quality of service\nprovided by a WIS both expected and implemented,\nhowever, evolved over the last decade and has evolved\nbeyond mere completeness. 
Extending the classification\nin (Berger 2003, p.146) we distinguish between\ndifferent generations of WIS.\nFirst generation (1G): \"build it, and they will come\"\nFirst develop a WIS, then customers will come,\nbecause they believe that it is useful. Many of the\n1G-WIS were informational, i.e., they weren't interactive\n.\nSecond generation (2G): \"advertise online sales, and\nthey will come\"\nDevelop a WIS and market it. Customers will\nCopyright c 2004, Australian Computer Society, Inc. This paper\nappeared at First Asia-Pacific Conference on Conceptual\nModelling (APCCM 2004), Dunedin, New Zealand. Conferences\nin Research and Practice in Information Technology, Vol.\n31. Sven Hartmann, John Roddick, Ed. Reproduction for academic\n, not-for profit purposes permitted provided this text is\nincluded.\ncome, because the advertisement convinced them\nabout the WIS's usability.\nThe WIS may be\ntransactional, i.e., contain interactive interfaces\nto company products and services. A standard\ninterface is provided but hard to learn. No particular\ncustomer usage aid is offered.\nThird generation (3G): \"realize a pleasant use of high\nquality services, and they will come\"\nCustomers will find using the WIS helpful. They\nwill do the marketing. 3G-WIS's typical characteristics\nare:\nhigh value and up-to-date content,\nhigh performance,\nbrand value of the provider, and\npleasant and easy use for casual as well as\nfor frequent customers.\nMany WIS including several banking WIS are still\n2G. However, impressive and well-developed WIS,\ne.g., the Amazon web-site, demonstrate the feasibil-ity\nof 3G-WIS. The success of such WIS is based on\ndeep understanding of the application area, the customers\nneeds, abilities and habits. Adaptation to customers\n-- if provided -- is based on allocating the\nmost suited subspace of the WIS application space to\nthe customer.\nWIS can be classified into e-business, e-learning,\nedutainment, community, information and personality\nWIS. In the e-business class the B2B systems\nhave been more successful than B2C systems. This\nsuccess results from well-understood usage scenarios\nbuilt into the WIS. We observe that usage scenarios\nare better understood for B2B-WIS than for B2C-WIS\n.\nStoryboarding is a design approach focusing on usage\nscenarios. However, so far it is mainly used employing\npinboard approaches, see e.g. (Siegel 1998,\nVan Duyne et al. 2003). Pinboard approaches map a\nnumber of scenarios observed in the application onto\ntree-structured web sites. Storyboarding in the movie\nbusiness is used to design much more complex scenarios\n. To overcome this limitation the storyboard\nspecification language SiteLang has been introduced\nin (Thalheim and D\nusterh\noft 2001). Until now it has\nbeen applied in more that two dozen WIS projects of\nthe Cottbus InfoTeam since 1999.\nOur development experience implies that implementing\n3G-WIS requires sophisticated database support\n, see (Thalheim 2000a). Our approach to guarantee\nfor this support is based on the theory of media\ntypes, which generalize database views (see e.g.\n(Schewe and Thalheim 2001)). Another finding from\nour practical experiences is that customer behavior\nhas changed. They are no more patiently waiting until\ntheir needs are met. They require personal interfaces\n. Customization of system interfaces to users is\n37\nknown for quite a while. However, WIS are targeting\nnew and casual customers. 
These customers are not\ncapable or willing to arrange for system adaptation.\nInternet service providers report customers frequently\ncomplaining about insufficient user-friendliness and\nunsophisticated WIS.\n1.2\nProblems of Complex Applications\nModern applications, in particular WIS often appear\nto be relatively simple, if only their interface is considered\n. Their point-and-click operating mode is de-liberately\nset up in a way that causes the impression\nof simplicity. Internally, however, things may be quite\ndifferent. A client-server multi-tier architecture with\nHTML-server, database server and application server\nmight be used. This implies some non-trivial development\ntasks done such as database design and development\nof an application programmer interface or\nsimilar.\nIn addition, several customer types may be known\nto the application system. A WIS may appear very\ndifferent to customers of different type. The functionality\nthey access, however, is still the basic functionality\nas implemented by the servers mentioned before\n. Consequently as many function schemas and\ndata schemas need to be developed as there are anticipated\ncustomer types. These schemas need to be integrated\nto develop a consistent view of the key application\nfunctionalities. Views that are based on these\nschemas need to be generated allowing the individual\ncustomers to operate with the application in the way\nthat is most natural for them. The development of\nWIS thus can be a quite complex process.\nAmong others the complexity of this development\nprocess depends on the degree of use of an underlying\ndatabase, from which dynamic web pages are created.\nFurthermore, the complexity of this process depends\non the number of versions of usage processes of the\nWIS that need to be anticipated. Since different usage\nprocesses may lead to different data and functionality\naccessible to customers. Additional complexity\ncomes in -- e.g. in the case of modern retail banking\n-- when a requirement is set in place that various\naccess channels -- e.g. channels needed for cell-phone\n- or PDA-access -- should be made available to\ncustomers. Apart from the purely technical problem\narising from discretionary access channels the problem\nof layout for these channels has to be solved.\n1.3\nAn Application Example\nAn example of a WIS in retail banking showing a relatively\nhigh diversity of the usage process is online\nloan application, if considered in full generality as we\ndo it here. For an introduction to lending in general\n, of which the loan business is just a part, see e.g.\n(Valentine 1999). Not all banks offer online home-loan\napplication facilities. Those that provide such\nfacilities do not necessarily allow customers to deal\nwith them completely online. Banks that offer effective\nonline loan application are the Swiss UBS AG\nand the New Zealand and Australia based ASB Bank.\nThe acceptance of a longer interruption of service at\nthe ASB site indicated that at least for this bank online\nhome-loan application is not yet considered a major\npart of their business.\nComplexity in home-loan applications results from\nthe fact that the applicant not necessarily is exactly\none natural person. 
For each of the applicants properties\nand debts need to be identified and valuated.\nOften banks would accept only home-loan applications\nof at most two people, in general the couple that\nis going to live in the home financed with the loan.\nComplexity is further increased by the loan not necessarily\nbeing a fresh one but being already granted\nto someone who due to his or her financial conditions\nhas chosen to move the loan to a different bank.\nFurthermore, the properties offered for securing\nthe loan may belong to a variety of types. Some of\nthese types, e.g.\nreal estate property may require\nphysical inspection to determine the value they can\ncover. Other properties such as financial instruments,\ni.e. shares, options or accounts, may only need an inquiry\nto the respective depot or account. If cash is\na security, then it might even be impossible to finish\nthe process electronically, as the cash needs to be\nbrought to the bank branch, counted and deposited.\nIt is similar with debts. On real estate properties\nthere might be liabilities that require an assessment\nof the actual value. Of course there is quite a number\nof so-called loan structures (see (Valentine 1999,\np.226f.)) distinguishing between loans. For instance,\nthey may differ from each other in their term, frequency\nof repayments, borrower's authorization to increase\nthe debt (e.g. overdraft facility), the minimum\nsecurity ratio or the repayment structure. The latter\naddresses the schema of how capital and interests\nare paid for by the customer. Independently of the\nloan structure a customer might chose among several\nloan options that specify how the interests develop\nover time, i.e. they may be fixed for a particular time\nperiod or they may float like general interest rates in\nbanking.\nApart from these principal choices in a home-loan\nthere are a number of tools available for customer's\nuse throughout online home loan applications such as\na borrowing power calculator, a repayment schedule\ncalculator, etc. Additionally, dictionaries of banking\nterms, act excerpts and comments as well as descriptions\nof the applying financial instruments need to be\naccessible to customers.\nAll the possible options in the financial instruments\nand the respective variations of the WIS usage\nwill only be considered by a small number of customers\n. In more technical terms we have to deal with\na generic process type. Most of its instances realize\nonly a part of the possible variations. At present online\nhome-loan application systems are not typical retail\nbanking applications. Automated clearing house\n(ACH), i.e. direct deposit of payments, withdrawing\nmonthly mortgage payments, etc. are more typical.\nAccording to (Berger 2003, p.150f.) its use in the\nUS is steeply increasing and has after starting problems\neven increased productivity. However, ACH is\na back-office activity, whereas online home-loan application\nis a customer-home or front-office activity.\nAccording to (Berger 2003, p.149) internet-only banks\nperformed more poorly than conventional banks did.\nIf this finding implies that online home-loan applications\nare less productive than conventional home-loan\napplication processing then we believe that this is\nonly a temporary phenomenon. We believe that cultural\nobstacles concerning internet-banking will disappear\nwhen 3G-WIS have become more popular.\nAccording to (Berger 2003) there is empirical evidence\nfor an increasing market share of electronic payment\n. 
According to studies reported in (Berger 2003,\np.162) there is even empirical evidence for increased\nproductivity due to investment in IT labor, while\nthere is no empirical evidence for IT investments increasing\nefficiency in general. This is consistent with\nthe basic insight that not the mere use of IT but the\nkind and quality of this use can increase productivity\n. Our paper shall help making 3G-WIS more popular\nand thus contributes to internet banking more\ncompletely covering the business at a higher level of\nquality.\n38\n1.4\nRelated Work\nA lot of related work has been done on the development\nof web information systems.\nThe work in\n(Atzeni et al. 1998) emphasizes the design of content\nleading to databases, navigation leading to hypertext\n, and presentation leading to the pages layout.\nOther authors (see for example (Baresi et al. 2000),\n(Bonifati et al. 2000), (G\nadtke and Turowski 1999)\nand (Rossi et al. 1999)) follow the same lines of\nthought or concentrate on the \"add-on\" to database\ndesign, emphasizing mainly the hypertext design\ndealing with navigation structures (see (Garzotto et\nal. 1993) and (Schwabe and Rossi 1998)). The work in\n(Feyer et al. 1998) presents the forerunner of the theory\nof media types (see (Schewe and Thalheim 2001)).\nMedia types provide a theoretically sound way to integrate\ndatabases, external views, navigation structures\n, operations, and even support adaptivity to different\nusers, environments and channels. The adaptivity\nfeature distinguishes them from the dialogue\ntypes that are used to integrate database systems with\ntheir user interfaces (see (Schewe and Schewe 2000)).\nThe work in (Schewe and Thalheim 2001) already\nemphasizes that conceptual abstraction from content,\nfunctionality, and presentation of an intended site\nis not sufficient for the adequate conceptual modelling\nof web-based systems, even if complex media\ntypes are taken into consideration. Some of the approaches\nmentioned before (see (Atzeni et al. 1998),\n(Baresi et al. 2000), (Bonifati et al. 2000), (G\nadtke\nand Turowski 1999), (Rossi et al. 1999), (Garzotto\net al. 1993) and (Schwabe and Rossi 1998)) miss out\non the important aspect of story boarding, which is\nneeded to capture the business content of the system.\nStory boarding in a process-oriented holistic manner\nfocusses on user intentions. In more recent work\nsome of the authors (Kaschek et al. 2003a) started\nto investigate this idea more thoroughly.\nConceptual\nmodelling traditionally considered more ontolog-ical\naspects than epistemological ones. Since web information\nsystems in two respects considerably differ\nfrom non-web information systems epistemological\naspects, however, need to be taken more seriously:\nWeb information systems are open in the sense that\nactual users virtually may be just anyone. In non-web\nsystem there was traditionally a much stricter\naccess control preventing non-staff from using the system\n. The business idea, however, has changed and\ncustomers need to be attracted and pre-selected by\na web information system. Furthermore, web information\nsystems are open in the sense that it is very\neasy to use them for accessing other web systems.\nThis introduces more competition among those who\noffer services on the web. Quality of web information\nsystems in the sense of fitness for users' use thus\ntends to be more important than it was for non-web\nsystems. 
Web information systems partly substitute\nstaff-customer interaction by customer-computer interaction\n.\nConsequently, web information systems\nmust focus on aiding customers in doing the business\nthe system provider is engaged in. Clearly this\nonly can be done on the basis of a customer model.\nUser profiling together with story boarding is a holistic\nmanner for this.\nIn (Schewe and Thalheim 2001) it is suggested\nthat story boarding be supported through directed\ngraphs called scenarios, in which the nodes represent\nthe scenes and the edges correspond either to navigation\nor to actions issued by the user. This extends\nthe work in (Feyer et al. 1998), where simply partially\nordered sets have been used. In addition, user profiling\nis approached by using user dimensions capturing\nvarious aspects of how to characterise users. This has\nbeen extended in (Srinivasa 2001) to a formal description\nof interactive systems.\nThe work in (D\nusterh\noft and Thalheim 2001)\npresents a formalised language SiteLang to support\nthe specification of story boards. the work also indicates\nideas how to exploit word fields for designing\ndialogue steps in story boards. In (Schewe et al. 1995)\nand (Schewe 1996) refinement primitives for dialogues\nhave been discussed. Due to the connection between\ndialogues and scenarios, this approach to refinement\nis also useful for story boarding. The work in (Schewe\net al. 2002) applies story boarding and user profiling\nto the area of on-line loan systems.\n1.5\nOutline\nIn section 2 we discuss WIS specification, in particular\nstory spaces and scenarios, we further discuss media\nobjects, dialogue-step specification and context. In\nthe following section 3 we discuss database design for\nWIS, utilization of context for WIS and a stepwise\nWIS generation approach called \"onion generation\".\nFinally, in section 4 we continue the discussion of our\nexample and show how our approach can be applied to\nmodelling of WIS. Due to space restrictions, however,\nwe can only discuss the storyboarding part.\nWIS Specification\n2.1\nStory Spaces and Scenario\nModelling usage processes right from the beginning of\nsystems development requires using a sufficiently expressive\nhigh level semantic model as a respective conceptual\nframework. Storyboarding uses the metaphor\n\"story\" to conceptualize usage processes.\nWe presuppose\nthat a story (for the source of the interrogatives\nused here refer to (Zachman 1987, Sowa and\nZachman 1992)) tells what happened, why and where,\nas well as who did it how and when. The story of\ncustomer-WIS interaction thus is the intrigue or plot\nof a narrative work or an account of events.\nWithin a story one can distinguish threads of activity\n, so-called scenarios, i.e., paths of scenes that\nare connected by transitions. See figure 2.1 for an example\nscenario. We do not intend to model branching\nstories. These require managing a number of activities\nat the same time, i.e., in parallel. A capability\nthat -as we believe- many casual customers won't\nhave. With the term story space we mean the integration\nof all scenarios in a story.\nWe define the story space\nW\nof a WIS W as\nthe 7-tuple (S\nW\n, T\nW\n, E\nW\n, G\nW\n, A\nW\n,\nW\n,\nW\n) where\nS\nW\n, T\nW\n, E\nW\n, G\nW\nand A\nW\nare the set of scenes created\nby W , the set of scene transitions and events that\ncan occur, the set of guards and the set of actions that\nare relevant for W , respectively. Thus, T\nW\nis a subset\nof S\nW\nS\nW\n. 
Furthermore\nW\n: S\nW\nSceneSpec is\na function associating a scene specification with each\nscene in S\nW\n, and\nW\n: T\nW\nE\nW\nG\nW\nA\nW\n,\nt (e, g, a) is a function associating with each scene\ntransition t occurring in W the event e that triggers\ntransition t, the guard g, i.e. a logical condition blocking\nthe transition if it evaluates to false on occurrence\nof e, and the action a that is performed while the\ntransition takes place. The language SiteLang, see\n(Thalheim and D\nusterh\noft 2001), offers concepts and\nnotation for specification of story spaces, scene and\nscenarios in them. Scenes and their specifications are\ndiscussed in subsection 2.2.\n2.2\nScenes\nWe consider scenes as the conceptual locations at\nwhich the customer-WIS interaction, i.e., dialogue\n39\nsc\n1\n- sc\n2\n- sc\n3\n- sc\n4\n- sc\n5\n- ...\ny\n6\n\n?\n9\nside story\nFigure 2.1: Scenario with a loop representing a side\nstory\ntakes place.\nDialogues can be specified using so-called\ndialogue-step expressions. Scenes can be distinguished\nfrom each other by means of their identifier\n: Scene-ID. With each scene there is associated a\nmedia object and the set of actors that are involved\nin it. Furthermore, with each scene a representation\nspecification is associated as well as a context. Scenes\ntherefore can be specified using the following frame:\nScene = ( Scene-ID\nDialogueStepExpression\nMediaObject\nActors\nActorID\nRight\nTasks\nAssigned\nRoles\nRepresentation (styles, defaults, emphasis, ...)\nContext (equipment, channel, particular)\nDialogue-step expressions consist of dialogues and\noperators applied to them. Dialogue steps are discussed\nin subsection 2.4 below. The provided operators\nare based on the basic dialogue step algebra introduced\nin (Thalheim and D\nusterh\noft 2001):\nBasic control commands are sequence ; (execution\nof steps in sequence), parallel split\n|\n\n| (execute\nsteps in parallel), exclusive choice\n|\n\n| (choose one\nexecution path from many alternatives), synchronization\n|\nsync\n| (synchronize two parallel threads of\nexecution by an synchronization condition\nsync\n,\nand simple merge + (merge two alternative execution\npaths). 
The exclusive choice is considered\nto be the default parallel operation and is denoted\nby\n||.\nStructural control commands are arbitrary cycles\n\n(execute steps w/out any structural restriction\non loops), arbitrary cycles\n+\n(execute steps\nw/out any structural restriction on loops but at\nleast once), optional execution [ ] (execute the\nstep zero times or once), implicit termination\n\n(terminate if there is nothing to be done), entry\nstep in the scene\nand termination step in the\nscene\n.\nAdvanced branching and synchronization control\ncommands are multiple choice\n|\n(\nm,n)\n| (choose between\nm and n execution paths from several alternatives\n), multiple merge (merge many execution\npaths without synchronizing), discriminator\n(merge many execution paths without synchronizing\n, execute the subsequent steps only\nonce) n-out-of-m join (merge many execution\npaths, perform partial synchronization and execute\nsubsequent step only once), and synchronizing\njoin (merge many execution paths, synchronize\nif many paths are taken, simple merge\nif only one execution path is taken).\nWe also may define control commands on multiple\nobjects (CMO) such as CMO with a priori\nknown design time knowledge (generate many instances\nof one step when a number of instances\nis known at the design time), CMO with a priori\nknown runtime knowledge (generate many instances\nof one step when a number of instances\ncan be determined at some point during the runtime\n(as in FOR loops)), CMO with no a priori\nruntime knowledge (generate many instances of\none step when a number of instances cannot be\ndetermined (as in a while loop)), and CMO requiring\nsynchronization (synchronization edges)\n(generate many instances of one activity and synchronize\nafterwards).\nState-based control commands are deferred choice\n(execute one of the two alternative threads, the\nchoice which tread is to be executed should be\nimplicit), interleaved parallel executing (execute\ntwo activities in random order, but not in parallel\n), and milestone (enable an activity until a\nmilestone has been reached).\nFinally, cancellation control commands are used,\ne.g. cancel step (cancel (disable) an enabled step)\nand cancel case (cancel (disable) the step).\nThese control composition operators are generalizations\nof workflow patterns ( see, e.g. (Workflow Management\nCoalition 1999, Jablonski 1996)) and follow\napproaches developed for Petri net algebras.\nA graphical representation of a login scene is given\nin figure 2.2. We are interested in well-formed dialogues\nand do not allow specifications which lead to\nand-split or or-split common in workflow specifications\n. 
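Such composed expressions lend themselves to mechanical analysis, for instance when checking well-formedness of a scene specification. A minimal Python sketch follows; the class and function names are ours and only mirror the control commands listed above, they are not part of SiteLang. The traversal at the end illustrates the kind of inspection a story board tool might perform.

from dataclasses import dataclass

@dataclass(frozen=True)
class Step:            # an elementary dialogue step, referred to by its name
    name: str

@dataclass(frozen=True)
class Seq:             # sequence ;
    parts: tuple

@dataclass(frozen=True)
class Choice:          # exclusive choice, the default composition written ||
    alternatives: tuple

@dataclass(frozen=True)
class Par:             # parallel split, optionally with a synchronisation condition
    branches: tuple
    sync: str = ""

@dataclass(frozen=True)
class Opt:             # optional execution [ ]
    body: object

@dataclass(frozen=True)
class Iter:            # arbitrary cycle *
    body: object


def steps_of(expr):
    """Collect the elementary dialogue steps an expression may invoke."""
    if isinstance(expr, Step):
        return {expr.name}
    if isinstance(expr, Seq):
        children = expr.parts
    elif isinstance(expr, Choice):
        children = expr.alternatives
    elif isinstance(expr, Par):
        children = expr.branches
    else:                       # Opt, Iter
        children = (expr.body,)
    return set().union(*(steps_of(c) for c in children))

The login scene of figure 2.2, whose defining expression is given next, nests exactly such constructors: sequences of steps, optional steps in brackets, and choices between the customer branch and the anonymous branch.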
This scene is specified by the dialogue step\nexpression\nEnter login ;\n( Customer login ; [ Change profile ; ]\n( Service kind selection ; Service selection ;\nService customization)\n|| Join cooperating group\n|| Join bank club\n|| Join bank programs\n|| General customer information )\n|\n\n| ( Anonymous Login ; [Extend adding identity ; ]\n( Program selection ; Module selection ;\nUnit selection) )\nEnter\nLogin\n:\nU\nChange\nprofile\nY\nj\nGeneral\ncustomer\ninformation\nAnonymous\nlogin\nj\n:\nExtend\nby adding\nidentity\nK\nService\nkind\nselection\nj\nService\nseeking\nselection\nU\nService\ncustomization\nCustomer\nlogin\nK\ny\nj\nU\n:\nJoin\ncooperating\ngroup\nJoin\nbank\nprogram\nJoin\nbank\nclub\nLogin Scene With Adaptation of System Facilities\nFigure 2.2: Scene for Login Into a Bank WIS\n2.3\nMedia Objects\nA scene is supported by media objects following the\ncodesign approach. Media objects are instances of\nmedia types.\n40\nBank\nService\nCustomer\nService\nRole\nCustomer\nLogin\nCustomer\nProfile\nProfile\nType\nTask\nPortfolio\nType\nCustomer\nPortfolio\nWeb\nAddress\nAccount\nLogin\nHistory\nFigure 2.3: Cutout of the profiling schema\nThe core of a media type is defined by a view on\nsome underlying database schema, i.e. it consists of\na view schema and a defining query. However, this\nquery must be able to create identifiers in order to create\nlinks between the various media objects. This core\nof a media type -- called raw media type in (Schewe\nand Thalheim 2001) -- is extended in three directions\n:\nAs a first extension operations are added to the\nview in the same way as d-operations were added\nto dialogue objects in (Schewe and Schewe 2000).\nBasically, the use of operations just adds dynamics\nto the media objects. So, if a media object\nis associated with a scene, the operations of the\nmedia object define the available dynamic functionality\n.\nThe second extension provides adaptivity and\nhierarchies. Adaptivity to the user deals with\nneeds arising from different users.\nAdaptivity\nto the technical environment copes with technical\nrestrictions of end-devices. Adaptivity to\nthe communication channel deals with adaptation\nto needs arising from various communication\nchannels. For all three forms of adaptivity media\ntypes provide mechanisms for a controlled form\nof information loss, which is coupled with algorithms\nfor the splitting of information content.\nThe hierarchies are adopted from dimension hierarchies\nin OLAP.\nThe third extension simply covers ordering and\nother presentation options.\nThus, roughly speaking media objects consist of\nabstract containers, supported DBMS processes and\ndatabase manipulations requests.\nBasic media objects\n(Schewe and Thalheim 2000) are characterized\nby syntactic expressions, have a semantical meaning\nand are used within a certain pragmatical framework.\nMedia objects can be parameterized. Typical parameters\nare the representation style, the actor frame, and\nthe context frame. Therefore we distinguish between\nmedia objects and runtime media objects in which all\nparameters are instantiated.\nDuring runtime, the media object is extended by\nspecific escort information (Thalheim 2000). This escort\ninformation is represented for user support. It\nallows the user to see the history of steps performed\nbefore being in the current state. Escort information\nis further generated from the story space. 
In this case\na user is informed on alternative paths which could\nbe used to reach the given scene and which might be\nused for backtracking from the current scene.\nFor the generation of media objects and their\ncomposition on the basis of information units we\nextend the classical SQL frame to the frame\ngenerate Mapping : Vars\nStructure\nfrom Views\nwhere Selection condition\nrepresent using Style guide\n& Abstraction\nbrowsing definition Condition\n& Navigation\nThe views and therefore the media object may have\nhidden parameters (for instance, EventID) which are\nnot visible to the actor. They can be parameterized\nby variables (for instance, @Today). For media objects\nwe reuse ideas developed for OLAP technology\n(Thalheim 2000):\nviews on ER schemata (abstraction on schemata\n(aggregation, scoping, ...), versions),\nvariations of generation functions,\ndisplay with canonical functionality (drill-down,\nroll-up, rotate, pivoting, push, pull, dimension,\naggregation),\nusing generic evaluation functions and models,\nimplicit incorporation of hierarchies and\nimplicit incorporation of time, space, ....\nFurthermore, involved actors are specified in dependence\non their profiles, tasks assigned to them,\ntheir access and manipulation rights, and their roles\nto be taken while visiting the scene. This specification\nis based on (Altus 2000) and similar to profiles\nof actors in information systems.\nIt is our aim to specify generic scenes. Thus, we\nadd the representation styles which can be applied to\nthe media object of the scene. Representation depends\non the equipment of the actor.\nIn the city\nsite projects, we have gained experience with different\nrepresentation styles: internet display with high-speed\nchannel, internet-display with medium speed\ndisplay (default style), videotext and WAP display.\nFor instance, for videotext any graphical information\nis cut out or replaced by textual information.\nFinally, the context of access is specified. Access\ndetermines the display facilities. Channels can be of\nhigh or low speed. The particular usage of a scene by\nan actor depends on the scenario history.\nThe login scene in Figure 2.2 is based on the\nschema in Figure 2.3.\nThe corresponding media object specification has\nthe following structure:\nMediaObject(\n@Customer ID) =\ngenerate (ID, profile, portfolio, context)\nfrom Customer\n1 Login Account History\n1 Customer Profile\n1 Customer Portfolio 1 ...\nwhere Customer.ID = @Customer ID ...\nrepresent using\n41\nXSL style.Ident =\nProfile Type.Preference.StyleIdent\n&\ncreateVarsFor(profile, portfolio, context)\nbrowsing definition Customer\nportfolio\n...\n&\nNavigation\nnone\nThe representation styles determine the order and\nthe tailoring of the elements of the media object.\n2.4\nDialogue Steps\nWe conceptualize the customer-WIS interaction as a\ndialogue between these two. Therefore the customer-WIS\ninteraction unfolds in a sequences of dialogue\nsteps, i.e., elementary communication acts.\nThe\nbasic WIS-state transformations triggered by actors\ncan thus be understood as caused by dialogue steps.\nThese may access the media object that is associated\nto the scene within which the dialogue step occurs.\nComparable to (Goldin et al. 
2000) we use the following\nframe for specifying the control of dialogue steps:\non precond if event and guard\ndo action result in postcond\nConsequently dialogue steps may be specified by\nthe following frame:\nDialogueStep(\nIdentification ) =\n( sub-unit = view on media object of the scene\nenabled processes =\nsubset of supplied processes,\nmanipulation requests\nactor =\nsubset of enabled actors in a given context\ncontrol =\n( precondition, enabling event,\nguard, postcondition) )\nDialogue step specifications can be represented\ngraphically as shown in figure 2.4. The figure for the\nscene 'anonymous login' represents the specification\nof dialogue step 'login'.\nAnonymous\nlogin)\n\n\nBankSurveyView\nServiceOfferView\nSelectModule,\nSelectCommunication\nNewSession\nAddToLog\nLogChannelData\nLogUserEngineData\nAnonymous User, Visitor\n6\n?\n(ClickAnonymous,ServicesAvailable,ServiceSelected\nCustomerStyleSelected\nClickOnOneOption)\nFigure 2.4: Dialogue Step for Anonymous Login\nBased on the properties of the actions we conclude,\nfor instance, that after withdrawal a previous member\nof a cooperating group cannot participate in the discussions\nin the community. A task property frame is\ndefined by a task name, reasons for task involvement,\nan aim, a postcondition (enabled next activities), the\ninformation from the database, the information for\nthe database, the resources (actor, resources, partner\n), and a starting situation (precondition, activity,\npriority, frequency, repetition rate).\nWe use graphical representations of scene specifications\nas indicated by figure 2.5. Scenes are represented\nby frameboxes and dialogue steps by ellipses.\nThe transitions among dialogue steps are represented\nby arrows between these. We use the graphical notation\ndeveloped for state charts, e.g., the default start\nstep of a scene is denoted by a solid circle, the end\nstate by a solid circle surrounded by an empty circle,\nthe history entry into a scene is denoted by an `H'\nsurrounded by an empty circle. Furthermore, we can\nadopt refinement and clustering, concurrency, delays\nand time-outs, transient states, event priorities and\nparameterized states. For more detail on state charts\nsee, e.g. (Harel and Naamad 1996) and for their application\n(Rumbaugh et al. 1991).\ndialogue\nstep\nnext\ndialogue\nstep\n\n\nsub\n-unit\nenabled\nprocess\nmanipulation\nsub-request\nenabled actor\n6\n?\ncontrol\n:\nU\ndialogue scene expression\ntransition according to\nscene\ninvolved\nactors\nstory scene\nsequence\nmedia\nobject\nrepresentation\nstyle\ncontext,\ntask\n6\n6\n6\n6\n6\n6\nFigure 2.5: Representation of scene specifications\n2.5\nContext\nContext has been usually defined within the object\nsets of the database (Bell 2001, Connolly 2001).\nThere only very few trials to consider context of the\nscenarios or stories (Whitsey 2003).\nIn (Thalheim\n2000a) context has been defined for media types. For\ndealing more complete and justifiable with context we\nstart with a dictionary definition of context of something\nas that what one needs to understand the something\n. This implies our understanding of context as\na three place predicate C(S, H, A) which if true says\nthat actor A needs helper H to act reasonably on\nS. If the actor is an individual then we stay with\nthe focus on understanding. For non-human actors,\nhowever, we focus on acting according to predefined\nquality aspects and behavior rules. The something we\nconsider here as relevant are WIS-parts. 
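Read operationally, the control frame given above, "on precond if event and guard do action result in postcond", says that a dialogue step fires only if its precondition and guard hold when the enabling event occurs, and that its postcondition enables the next activities. The encoding below is an illustrative assumption (the class and field names are ours, not part of the cited frameworks), loosely instantiated with names from the 'Anonymous login' step of Figure 2.4:

from dataclasses import dataclass
from typing import Callable, Dict, Any

State = Dict[str, Any]

@dataclass
class DialogueStep:
    name: str
    precondition: Callable[[State], bool]
    event: str                      # enabling event, e.g. "ClickAnonymous"
    guard: Callable[[State], bool]
    action: Callable[[State], None]
    postcondition: Callable[[State], bool]

    def fire(self, state: State, event: str) -> bool:
        """Execute the step if the control conditions are met; report success."""
        if event != self.event:
            return False
        if not (self.precondition(state) and self.guard(state)):
            return False
        self.action(state)
        return self.postcondition(state)

# Hypothetical instance resembling the 'Anonymous login' dialogue step.
anonymous_login = DialogueStep(
    name="Anonymous login",
    precondition=lambda s: s.get("services_available", False),
    event="ClickAnonymous",
    guard=lambda s: not s.get("authenticated", False),
    action=lambda s: s.update(session="new", log=["channel_data", "user_engine_data"]),
    postcondition=lambda s: s.get("session") == "new",
)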
The helpers\nwe here take into account are the various data that\nare relevant for the WIS-parts in question. The actors\nwe consider here are the WIS and the individuals\noccupying the roles: customer, vendor and developer\nwith respect to the WIS at hand. We thus distinguish\nthe following contexts:\nCustomer's scenario context , i.e., that what the customer\nneeds to understand for efficiently and ef-fectively\nsolve his/her business problem.\nVendor's WIS-context , i.e., that what the vendor\nneeds to understand how to run the WIS economically\n. Data that typically is part of this context\nare:\nthe intention of the provider,\nthe theme of the web site,\nthe mission or corporate identity of the site,\nand\nthe occasion and purpose of the visits of actors\n.\nDeveloper's WIS-context , i.e., that what the developer\nneeds to understand for being capable of implementing\nthe WIS. Data that typically is part\nof this context are:\nthe potential environment, e.g. hard- and\nsoftware, channels,\nthe information system, especially the associated\ndatabases,\n42\nContext\nDialogue\nExpression\nAcceptCond\nID\nID\nDo\nCondition\nEvent\nParticular\nDefault\nObligat\nUsage\nEmphasis\nProfile\nGroup\n(1,1)\n(1,n)\n(1,1)\n\n?\nenabled\n6\n?\ninvolved\n:\nActor\nDialogue\nStep\n\nuses\n6\n?\nused\nRight\n:\nRight\nCategory\nMedia\nObject\nTask\nRole\nCategory\nTask\nAssignment\n?\nRepresentation\nStyle\n\n6\nbasedOn\nScene\nActivity\nSequence\n?\n?\nStory\n\nin\nFigure 3.1: The Structure of the Web Site Database\nthe story space, scenes, dialogue steps, roles,\nand rights,\nthe tasks to be performed within the story,\nand\nthe roles in the scenario.\nWIS's scene context , i.e., that what the WIS needs\nto be capable of making solving certain business\nproblems easy and pleasant for customers. Data\nthat typically is part of this context are:\nHistory and current usage allow context\nadaptation to scenarios which are played at\npresent by the current user.\nAdaptation to the current environment is defined\nas context adaptation to the current\nchannel, to the client infrastructure and to\nthe server load.\nUsers are grouped to actors. Therefore, we\ncan define the current user by instantiation\nof the actor.\nGoals and particular, policy (exceptions, social\n, organizational) define a specialization\nof the content, structuring and functionality\nof a web page.\nA WIS is supported by media objects that belong\nto media types.\nThe collection of all media types\nis called suite. Our framework offers four hooks for\ndealing with the context we need to consider:\n1. Specialization of media type suite for\nusage\nand\nuser adaptation:\nThe database types may have subtypes specializing\nthe database types. Media types are defined\non the basis of views. Therefore, we can follow\nthe approach discussed in (Thalheim 2000a) for\nspecialization of types. Specialization is defin-able\nthrough specialization of types, instantiation\nof parameters. extension of types, and restriction\nand constraint application.\n2. Application of rules towards\ngeneration\nof\nextended\nsuites:\nSuites can be extended by providing view rules\ndefining views on top of the media types. This\napproach supports portfolio extension and container\nextension.\n3. Instantiation of explicit context parameters can be\nused for adaptation of web sites to the current\nprofile, to the current environment or to the current\nworkload.\n4. 
Storage of utilization profile similar to login track\nsupports to use history of previous utilization of\nthe web site by the user. We extend the web site\ndatabase by explicit utilization logs for used media\nobjects, preferences of usage, users workspace\nor work rooms, and variations of users media objects\nDeveloping the Database Used to Generate the Web Site\n3.1\nDatabase Modelling for WIS\nWeb site management becomes a nightmare whenever\na web site has been developed in a handicraft\napproach. For this reason, generation of web\nsites is currently based on web site content management\n. Content management systems currently support\nthe representation of web pages on a take-and-place\nmetaphor: Select or compile the content objects\nof a page, compile the navigation structure and place\nthe content objects using page frames. Our web site\nteam also used this approach. This approach is entirely\nsatisfying the needs as long as the general structure\nof a web site is stable and no adaptation to the\nuser is required.\nIn order to dynamically generate the web site we\ndecided to store the web site stories in a database.\nThe structure of this database is displayed in figure\n3.1.\nWe specify the web site story space based on the\nour web SiteLang. This specification is inserted into\nthe database by the SiteLang editor. We can now extract\nthe page under consideration by an instantiated\nquery from this database. Context may be infused\ndirectly depending on the query result.\nSimilar to the context infusion, users of a web site\nhave their own profile, their own portfolio and their\nhistory. This information is used for adapting the\ncontent of the web site to the current usage.\n3.2\nContext Infusion in Scenarios\nTypical business processes have a very large number\nof variants.\nClassically, workflow approaches have\n43\nbeen used for specification of such varieties. Since\nthe complexity of variants might be much higher the\nworkflow approach did not succeed in providing a\nsound basis for the specification of all variants. We\nobserve, however, that in practice these varieties are\ninternally structured. They may be composed, extended\nor filtered by smaller scenarios.\ne-banking\nchallenges storyboarding by its orthogonality and variety\n.\nInstead of specifying all possible variants we prefer\nto model the generation mechanism of the very\nlarge variety of scenarios. This generation supports\nruntime adaptation to the current scenario, the context\nand other parameters. At the same time, banking\nsites are threatened to expose customers to the\n\"lost in hyperspace syndrome\". Therefore, customers\nshould be supported in tracking back onto the right\npath.\nOur solution to this challenge is based on generic\nparameters that are instantiated depending on the\ncustomer, the history, the context etc. Each set of\nmedia objects is specified by a context-free expression\nwith a set of parameters. These parameters are\ninstantiated depending on\nthe customer profile,\nthe customer task portfolio,\nthe customer computational environment,\nthe presentation environment, and\nthe available and accessible media objects.\nInstead of providing a full generation rule set we illustrate\nour approach on the basis of an example. A\ncustomer of a bank provides his/her identity e\n1\n, inserts\nsome data e\n2\n,1\nand e\n2\n,2\nin any order or signs that\nthe bank may request these data from somewhere else\ne\n2\n,3\n. Then the customer seeks a loan and fills the corresponding\nforms e\n3\n. 
The customer gives bail data in\ndifferent variants (e\n5\n,1\n|| (e\n5\n,2\n; e\n5\n,3\n)). The scenario is\nsupported by the eight media objects. Now we can\ninject the context into the media object expression of\nthe scenario. For instance, we may have the following\nstepwise refinements:\nMedia objects of a scenario:\ne\n1\n; ((e\n2\n,1\n||e\n2\n,2\n)\n|\n| e\n2\n,3\n) ; e\n3\n; (e\n5\n,1\n|| (e\n5\n,2\n; e\n5\n,3\n))\nExtending by objects syntactic verbal context\nand meta-information:\ne\n16\n; [ e\n21\n; ] e\n1\n; ((e\n2\n,1\n||e\n2\n,2\n)\n|\n| e\n2\n,3\n) ; e\n9\n; e\n3\n;\n(e\n10\n||e\n11\n) ; (e\n5\n,1\n|| (e\n5\n,2\n; e\n5\n,3\n))\nExtending by story space associations, e.g.,\nside paths,\n,\nfiltering\nagainst\navailability\nand\ncompiling\nagainst\nthe\ncustomer profile\ne\n16\n; [ e\n21\n; ] e\n1\n; ((e\n2\n,1\n||(\nSB\ne\n2\n,2\n||\nCB\ne\n2\n,2\n))\n|\n\n| e\n2\n,3\n) ; [(\ne\n17\n; e\n18\n; )] e\n9\n;\nGr\ne\n3\n,1\n;\nAn\ne\n3\n,2\n;\nInf\ne\n3\n,3\n;\nF orm\ne\n3\n;\n(e\n10\n||e\n11\n) ; (e\n5\n,1\n|| (e\n5\n,2\n; e\n5\n,3\n))\nFiltering\nwith\nor\nextending\nby\nthe\nweb site context: e\n16\n; [ e\n21\n; ] e\n1\n; (\n\nSB\ne\n2\n,2\n|\n| e\n2\n,3\n) ; [(\ne\n17\n; e\n18\n; )]\ne\n9\n;\nGr\ne\n3\n,1\n;\nAn\ne\n3\n,2\n;\nInf\ne\n3\n,3\n;\nF orm\ne\n3\n; (e\n10\n||e\n11\n) ; (\ne\n5\n,2\n; e\n5\n,3\n)\nCoping customer's history - already finished\ndialogue steps and repeating dialogue steps:\ne\nRepe\n1\n; [(\ne\n17\n; e\n18\n; )] e\nRepe\n9\n;\nGr\ne\n3\n,1\n;\nAn\ne\n3\n,2\n;\nInf\ne\n3\n,3\n;\nF orm\ne\n3\n; (e\n10\n||e\n11\n) ; (e\n5\n,2\n; e\n5\n,3\n)\nCoping\nwith\ncustomers\nhistory\nnegotiation\nsteps and pragmatical elements:\ne\nRepe\n1\n; e\n25\n; [(\ne\n17\n; e\n18\n; )] e\nRepe\n9\n;\nGr\ne\n3\n,1\n;\nAn\ne\n3\n,2\n;\nInf\ne\n3\n,3\n;\nF orm\ne\n3\n; (e\n10\n||e\n11\n) ; (e\n5\n,2\n;\ne\nP rak\n5\n,2\n; e\n5\n,3\n)\n3.3\nThe Onion Generation\nXML documents provide a universal structuring\nmechanism. XSL rules allow to generate XML documents\nfrom XML suites. This opportunity supports\na multi-layer generation of web information systems.\nThus we use the multi-layer onion generation presented\nin Figure 3.2.\npresentation engine\nactor profile adaptation, equipment adaptation,\nchannel adaptation, decomposer, style extension\ncontainer engine\nservices packages, wrapping functions,\ndialogue scene and scenario functions\nunits engine\nsurvey, landmark, indexing, I/O,\nnavigation, integration etc. functions\nview handler\nvirtual\nmaterialized views\nupdate views\nDBS\n...\nDBMS\nFigure 3.2: The Onion Approach to Stepwise WIS-Generation\nThe onion generation approach is based on the layered\nstructure of the WIS arising from the use of SiteLang\nand media objects. On the outermost shell the\npresentation facilities are introduced. This shell deals\nwith style presentation functions. Containers used in\nthe next inner shell are used to ship information from\nthe web-server to the user. Thus, this shell deals with\nthe adaptation to the user and his/her environment.\nThe next inner shell handles the information units,\ni.e. the core media objects. Inside this shell we find\nfurther shells dealing with views on the underlying\ndatabase, and innermost we find the database itself.\nThe onion approach fits nicely into a translational\napproach, which generates consistent sets of XML\ndocuments. In our projects we used the XML extender\nof the database system DB2 to generate XML\ndocuments. 
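A minimal sketch of what the two inner shells of the onion might do is given below: the result of a view (a media object) is wrapped into a unit and then into a container document. The element names and the use of xml.etree are illustrative assumptions, not the DTD or XSLT pipeline used in the projects:

# Sketch of the inner onion shells: a media object (view result) is wrapped
# into an XML unit, then into a container, in the spirit of Figure 3.2.
import xml.etree.ElementTree as ET

def media_object_to_xml(media_object, style="default"):
    unit = ET.Element("unit", id=str(media_object["id"]))       # units engine
    for key, value in media_object["content"].items():
        ET.SubElement(unit, key).text = str(value)
    container = ET.Element("container", style=style)            # container engine
    container.append(unit)
    return ET.tostring(container, encoding="unicode")

mo = {"id": 42, "content": {"profile": "standard", "portfolio": "home-loan"}}
print(media_object_to_xml(mo, style="wap"))

The presentation shell would then apply a style (high-speed internet, videotext, WAP) to the container produced this way.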
Thus, the layering approach to the generation\nof XML displayed in Figure 3.2 allows to use\nanother strategy to generate XML documents. This\nfacility is displayed in Figure 3.3.\nThis transformation approach has been success-fully\nused in two of our e-learning projects and our\ncommunity services projects. These project require\nsophisticated context adaptation. The approach implements\nan XML suite on top of the relational DBMS\nDB2. The extended ER model (Thalheim 2000) provides\na better approach to XML suite generation than\nrelational models or the classical ER model for a number\nof reasons:\nStructures can be defined already in complex\nnested formats.\nTypes of higher order are supported.\nThe model uses cardinality constraints with participation\nsemantics.\n44\nconceptual\nrepresentation\nabstract XML\nrepresentation\nXML implementation\non top of DB2\ndynamic scene\nobject\nXML scene\nonion\nreflective\nadaptations\ncontainer\nmedia\nobject\nmeta\nfunctions\nviews\nfunctions\n=\n\n=\n\n=\n\n\n\n\n\n\n\n\n\n\n\ndatabase\nschema (HERM)\nfunctors\nfor XSLT\nfunctors\nfor XSLT\ncontainer\nonion\nmedia object\nonion\nXML\nsuite\nDTD\nfunctors\nfor XSLT\nfunctors\nfor XSLT\nenriched\nXML suite\nenriched\nXML suite\nenriched\nXML suite\nXML\ndocuments\nDAC\nfor DB2 access\n?\n?\n?\n?\n\n\n\n?\n?\n?\n?\nj\nj\nj\n?\n?\n?\n?\nFigure 3.3: The General Procedure for Translation from SiteLang to XML\nAn Advanced e-Banking Application\n4.1\nBanking and Mortgages\nAccording to (Wierichs and Smets 2001) a bank is an\n\". . . institution that as part of an economy offers financial\nservices. The economical function of banks is\nto create a liquidity equalization in the cash flow that\nis reverse to the product and service flow. The focal\npoints of the bank operational activity are conducting\npayments, acceptance of money for investment, and\ngranting credits.\" Furthermore the particular liquidity\nequalization that is chosen out of the set of possible\nsuch equalizations is a preferable one. The respective\npreference structure is worked out by banks\non base of an assessment involving financing cost and\ninterests, see (Matthews et al. 2003).\nA loan according to (Wierichs and Smets 2001) is\nthe \"relinquishment of money or other fungible (...)\nproperties connected with the obligation of the debtor\nto give back the relinquished in equal kind, quality\nand quantity.\" An enhanced version of the model of\nthe loan process is represented in figure 4.1 as a UML\nsequence diagram. It shows the roles involved in the\nloan process as the labels inside the rectangular boxes\non the top of the diagram. It further indicates the\nconcurrency that may be utilized in this process. It\nachieves this by means of showing the communication\nbetween the roles. This communication is represented\nby the arrows starting at the dashed lines\nthat represent logical time, i.e., life lines of the roles.\nThe labels attached to the arrows indicate the content\nof the message associated with the respective arrow.\nThe bottom level rectangle containing the messages\n'Payback()' and 'CheckPayback()' signifies that these\nmessages are to be repeatedly sent until the stop condition\nsignified by the asterisk and displayed below\nthe rectangle 'debit position balanced' becomes true.\n4.2\nMortgages and variants\nThe figure 4.1 from a bank technical point of view\nschematizes the process. This process clearly is not\nfully suited as the only base of application development\n. 
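The repetition at the bottom of Figure 4.1, Payback() and CheckPayback() exchanged until the debit position is balanced, can be sketched as a simple loop. The balance bookkeeping below is an assumption for illustration only, not the bank's actual settlement logic:

def payback_loop(debit_position, installment):
    """Repeat Payback()/CheckPayback() until the stop condition
    'debit position balanced' holds."""
    messages = []
    while debit_position > 0:            # stop condition not yet reached
        debit_position -= installment    # Payback() by the customer
        messages.append("Payback")
        messages.append("CheckPayback")  # CheckPayback() by the monitoring role
    return messages, debit_position

msgs, rest = payback_loop(debit_position=1000, installment=250)
print(len(msgs) // 2, "payback rounds, remaining debit:", rest)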
For aiding development more information is\nneeded about how customers are anticipated to interact\nwith the system under construction. We use here\nthe function\nW\nof the story space\nW\nof a WIS W\nto show how the customer interaction with the WIS\nchanges the appearance of it for the customer Story\nboarding is a useful technique to obtain the required\ninformation.\nOur respective starting point is the investigation\nof the Web site of the Australian and New Zealand\nbased ASB Bank. From earlier work, see (Kaschek\net al. 2003) we knew that it offered an online loan\napplication facility.\nWe investigated this Web site\nmore closely and found that this site at each of its\npages essentially offered customers data that can be\ntyped as follows:\nadvertisement, i.e., information about ASB\nBank including a welcome and a logo.\ndisclaimer, i.e., a statement limiting the legal\nresponsibility of ASB Bank with respect to the\ndata displayed and the implications customers\nmight draw from it.\nsearch, i.e., a facility taking an unlimited customer\ninput and returning those ASB Bank\npages that best met this search expression.\nhighlights, i.e., the main contents that ASB\nBank wants to be displayed at each particular\nof its Web pages.\npath, i.e., a redundancy eliminated sequence of\nASB Bank Web pages visited so far by the customer\ninteracting with the site and supposed to\nbe used as a navigation aid.\nreference, i.e., a couple of links the target of\nwhich offer more information about the page actually\nvisited by the customer.\nbusiness branch selector, i.e., a navigation\nbar that breaks down the information space of\nthe site into subspaces according to the business\nbranches of ASB Bank.\nsubspace selector, i.e., a navigation bar that\nfor each subspace that corresponds to a business\nbranch breaks down the subspace into 2nd. level\nsubspaces.\nsubspace navigator, i.e., for each 2nd. level\nsubspace a navigation bar breaking down the\nsubspace in a number of information space locations\n.\n45\nMarketer\nCustomer\nProductAd()\nInquiry()\nRevisedProductAd()\nAnalyst\nApplication()\nLender\nApplicationApproval()\nNotification()\nService\nDocumentation()\nDocumentation()\nSignedContract()\nSignedContract()\nAccounts\nAdvanceFunds()\nPositionsGenerated()\nPayback()\nUseLoan()\nMoniter\nStart()\nCheckPayback()\n*[debit\nposition\nbalanced]\nFigure 4.1: UML diagram representing the loan process\nWe have then represented the navigation structure\noffered by the ASB Web site as a state chart the\nstates of which represent scenes. The state transitions\nare presupposed to be triggered by customers clicking\nlinks, i.e., navigation events. The labels attached\nto the state transitions are a string representing the\nnavigation event and an action carried out throughout\nthe transition. This action is prefixed by a slash,\ni.e., by \"/\". The semantics of the action is specified\nin form of a programming language like assignment\nand assigns new values to variables holding the data\nlisted above. In this way one can specify what data\nand functionality is accessible to a customer at a particular\nscene. Explanations and tables or the like are\npresupposed to be just text. All other transition labels\nused as values in assignments are presupposed\nto be links. If a variable is supposed to hold several\nlinks then these are connected by a plus sign, i.e., by\n\"+\". 
If more than one action has to take place at a\ntransition then all these actions are connected by a\n&-sign.\nThe variables used in figure 4.2 are D, S, H, A,\nR and P respectively representing values of type disclaimer\n, search facility, highlight, advertisement, references\nand path. Those of them being displayed at\na particular page are represented as non delimited\nstring, i.e., if all of them occur the string DSHARP is\nattached as label to the state representing the page.\nFurthermore the variables BS, SS and SN are used to\nrespectively represent values of type business branch\nselector, subspace selector and subspace navigator.\nThe initial state in the figure is reached after moving\nonto the home page of ASB Bank and clicking\nBS.Personal which signifies the business branch of retail\nbanking. The other 1st. level subspaces of the\napplication's information space are \"All\", \"Business\",\n\"Institutional\" and \"Rural\" in the obvious meaning\n. The subspace selector of \"BS.Personal\" allows\nto chose from 18 different 2nd. level subspaces. One\nof them is Home loans. Clicking it, i.e., \"BS.Personal-SS\n.Home loans\" leads to the initial state of figure 4.2.\nIf required we could add further navigation detail\nincluding the impact of navigation on the variables\noccurring in the figure. Furthermore if it would be\nrequired we could add further variables to represent\ndata of types here not dealt with.\n4.3\nAdaptation to customers, context and\nspecific case\nAdaptation to customers is a must if optimal customer\nsupport is aimed at. ASB Bank realizes a limited\ncustomer adaptation in that it offers in the subspace\nselector of BS.Personal second level subspaces\nboth for kids and for young folks. ASB Bank concerning\nthe home loan subspace of its information space\ndoes not offer much adaptation to customers. It only\noffers a bank technical terms dictionary and specif-ically\naddresses first home buyers.\nNeither are all\nNew Zealand official language versions of the Web\nsite available nor can it be tuned to meet any kind of\ndisabilities such as weak eyesight or color blindness.\nThe approach to adaptation to customers taken by\nsites like the one under investigation consists in identifying\nthe subspace of the information space they\ncreate that most likely will fit best the needs of a\nparticular customer. The match between customer\nand subspace is then done such that the customer\nis asked to give some characteristics of him or her\ninto the system and based on that the respective subspace\nis chosen. ASB Bank does so concerning kids\nand young folks. Other banks have additionally the\ncustomer type student or wealthy individual. This\nstrategy is suggested by the fact that the site vendor\nin general does not know much about the individuals\naccessing its site. A technique to consider knowledge\nabout customers to the design process is the creation\nand use of personas, i.e., archetypical customers and\ndesign the navigation structure as well as the page\nlayout such that it fits optimally to the personas used.\nConcerning more detail about personas in particular\ntheir construction see, e.g. (Wodtke 2003, pp. 159)\nAdaptation of the business case at hand of course\ncan only be achieved in response to the customer-site\ninteraction. In the navigation structure diagram\nin figure 4.2 we have used variable of data types that\nwere chosen with respect to the site at hand, i.e., ASB\nBank's Web site. 
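The transition labels in Figure 4.2 follow the pattern 'event / assignments', where the assignments set typed page variables such as H (highlights), R (references) or SN (subspace navigator). A minimal sketch of firing such labelled transitions is given below; the two sample labels are paraphrased from Figure 4.2, while the dictionary encoding and the assumption that both start from the 'Home loans' scene are ours:

# Each transition is triggered by a navigation event and performs assignments
# to typed page variables, mirroring the '/ H := ...' part of the labels.
transitions = {
    ("Home loans", "SN.Interest rates"): (
        "Interest rates",
        {"H": ["latest rates table"]},
    ),
    ("Home loans", "SN.Review your loan"): (
        "Review your loan",
        {"H": ["explanation"],
         "R": ["Credit cards", "Omni cards", "Moneymaker account"]},
    ),
}

def navigate(scene, event, page_vars):
    """Fire the transition defined for (scene, event), if any."""
    target, assignments = transitions.get((scene, event), (scene, {}))
    page_vars.update(assignments)
    return target

state = {"D": True, "S": True, "A": True, "P": []}
scene = navigate("Home loans", "SN.Interest rates", state)
print(scene, state["H"])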
We expect that this adaptation can\nalways be achieved the way we have proposed here.\nOnce the analysis has shown what data and functionality\nshall be accessible to customers data and functionality\ncan be typed and variables of the respective\ntype can be used to describe how the site adapts to\nthe actual use. A type level adaptation that can be\ncarried out while customers are interacting with a site\nis semi automatic reconsideration of the type of customer\n: Customers in this respect are presupposed to\nbe characterized by a value for each of a number of\n46\nLending Calculators\n/ H:=Affordability Calculator +\nAmount Requested Calculator\n+ Home Loan Options\nCalculator + Arrange your loan\nHome loans\nDSHARP\nBS, SS,SN\nArranging a loan\nDSHARP, BS, SS,SN\nAffordability\nCalculator\nAmount Required\nCalculator\nHome Loan\nOptions Calculator\nH.Affordability\nCalculator\n/ H:=\nAmount\nRequired\nCalculator +\nHome Loan\nOptions\nCalculator\nSN.Arranging\na home loan\nH.Home\nLoan\nOptions\nCalculator\n/ H:=\nAmount\nRequired\nCalculator +\nAffordability\nCalculator\nH.Amount Required Calculator\n/ H:= Apply by\nphone + Apply\nonline + We come\n2 u + U come 2 us\n+ Send inquiry &\nrefine SN\ncalculator\ninput form\ncalculator\nnput form\ncalculator\ninput form\n/ H:= Affordability Calculator + Home Loan Options Calculator\nOnline home loan\napplication\nH.Apply\nonline\nSN.Home\nLoan\nCalculator\nSS.Home loans.Introduction\n/ H:= Home loan rates + Buying your first home? + Loan top up needed? + Fixed\nrate loan expiring & SN := Introducyion + Loans at a glance + Interest rate options +\nInterest rates + Home loan calculators + Home buyers guide + Review your home\nloan + Move your loan to us + Fixed rate expiry + Arranging a home loan + Mobile\nlending service\napplication form\nSN.Loans at a glance\n/ H:= Loan types +\nloan options &\nrefine SN (types,\noptions)\nInterest rate options\nSN.Interest rate\noptions\n/ H:=\nvariable\nrates + fixed\nrates +\nexplanation\nDSHARP\nBS, SS,SN\nInterest rates\nDSHARP\nBS, SS,SN\nSN.Interest\nrates\n/ H:=\nlatest\nrates\ntable\nHome buyers guide\nSN.Home buyers guide\n/ H:= Houses 4 sale + Home\nbuyers inspection list + Home\nvaluation + Property information\n+ Priority checklist + Glossary of\nterms & refine SN\nDSHARP\nBS, SS,SN\nReview your\nloan\nDSHARP\nBS, SS,SN\nSN.Review your loan\n/ H:= explanation\n& R:= Credit\ncards + Omni\ncards +\nMoneymaker\naccount\nMove your loan to us\nSN.Move your\noan to us\n/ H:= explanation\nDSHARP\nBS, SS,SN\nLoans at a glance\nDSHARP\nBS, SS,SN\nSN.Fixed rate expiry\nFixed rate expiry\nDSHARP\nBS, SS,SN\n/ H:=\nexplanation\n& refine SN\nDSHARP\nBS, SS,SN\nMobile lending service\nDSHARP\nBS, SS,SN\nSN.Mobile lending service\n/ H:=\nexplanation +\ncontact phone\nnumbers\nFigure 4.2: Navigation structure of a part of ASB Bank's Web site\ndimensions. The customer type according to (Schewe\nand Thalheim 2001) can be defined as a convex region\nin the multi dimensional space create as cartesian\nproduct of the scales associated to the customer\ndimensions. Based on an automatic customer assessment\nthat in response to his or her site-interaction\nupdates the scores in each of the dimension throughout\ncustomer-site interaction one can then track how\na customer's trace moves through this space and detect\nwhen a modified type would better fit the customer's\nbehavior than the actual type does. 
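A small sketch of such semi-automatic re-typing, under the simplifying assumptions that the convex regions are axis-aligned boxes and that the dimension names are invented for illustration, could look as follows:

# Customer scores per dimension are updated during the session; the customer
# type is the region that currently contains the score vector.
TYPES = {
    "kid":     {"experience": (0.0, 0.3), "wealth": (0.0, 0.2)},
    "young":   {"experience": (0.2, 0.6), "wealth": (0.1, 0.5)},
    "wealthy": {"experience": (0.4, 1.0), "wealth": (0.6, 1.0)},
}

def classify(scores):
    for name, box in TYPES.items():
        if all(lo <= scores.get(dim, 0.0) <= hi for dim, (lo, hi) in box.items()):
            return name
    return "default"

def update(scores, dim, delta):
    scores[dim] = min(1.0, max(0.0, scores.get(dim, 0.0) + delta))
    return classify(scores)

trace = {"experience": 0.45, "wealth": 0.20}
print(classify(trace))                 # -> 'young'
print(update(trace, "wealth", 0.45))   # -> 'wealthy': the trace drifted into another region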
Clearly\nsuch update should only be done with customer permission\nConclusion\nBanking services such as online home-loan application\nrequire a very sophisticated and well-adapted internet\ninterface. Customers want to focus on solving\ntheir business problem, i.e., the goal they want to\nachieve by means of interacting with the WIS. They\nconsider WIS as tools that shall be easy to handle,\ncompletely cover the business and do not add technical\ncomplexities to it. Customers want to have a\npleasant usage experience, in particular they do not\nwant to be treated like everybody else. They want\nWIS remember and exploit their usage peculiarities\nin authenticated and where adequate in anonymous\nsessions. This paper shows how this can be achieved.\nAs a guiding principle we introduce considering context\nand using it to simplify WIS-handling. We have\nshown how WIS's scene context can be injected into\nthe WIS and how XML suites can be generated using\nthe story board of the site and available customer\ndata.\nAcknowledgements\nWe would like to thank Hans-J\nurgen Engelbrecht\nfrom the Department of Applied & International Economics\nat Massey University for pointing to us work\non the the economic effects of technical progress in\nthe banking industry.\nReferences\nAltus, M.\nDecision support for conceptual database\ndesign based on the evidence theory - An intelligent\ndialogue interface for conceptual database\ndesign.\nPhD thesis, Faculty of Mathematics,\nNatural Sciences and Computer Science of BTU\nCottbus, Cottbus, 2000.\nAtzeni, P., Gupta, A., and Sarawagi, S.\nDesign\nand maintenance of data-intensive web-sites.\nIn Proceeding EDBT'98, vol. 1377 of LNCS.\nSpringer-Verlag, Berlin, 1998, pp. 436450.\nBaresi, L., Garzotto, F., and Paolini, P.\nFrom\nweb sites to web applications: New issues for\nconceptual modeling. In ER Workshops 2000,\nvol. 1921 of LNCS. Springer-Verlag, Berlin, 2000,\npp. 89100.\nBell,\nJ.\nPragmatic reasoning:\nInferring contexts\n. In Proc. Context'1999 (1999), LNAI 1688,\nSpringer, pp. 4253.\nBerger, A. N.\nThe Economic Effects of Technolog-ical\nProgress: Evidence from the Banking Industry\n. Journal of Money, Credit, and Banking 35,\n2 (2003), 141 176.\nBonifati,\nA.,\nCeri,\nS.,\nFraternali,\nP.,\nand\nMaurino,\nA.\nBuilding multi-device, content-centric\napplications using WebML and the W3I3\ntool suite. In ER Workshops 2000, vol. 1921 of\nLNCS. Springer-Verlag, Berlin, 2000, pp. 6475.\nWorkflow\nManagement\nCoalition\n, Ed.\nThe\nWorkflow Management Coalition specification:\nWorkflow Management Coalition terminology\n& glossary.\nWorkflow Management Coalition,\n47\nWinchester, United Kingdom, 1999. Document\nStatus Issue 3.0.\nConnolly, J. H.\nContext in the study of human languages\nand computer programming languages: A\ncomparison. In Proc. Context'2001 (2001), LNAI\n2116, Springer, pp. 116128.\nD\nusterh\noft,\nA.,\nand\nThalheim,\nB.\nSiteLang:\nConceptual modeling of internet sites. In Conceptual\nModeling ER 2001, H. S. K. et al., Ed.,\nvol. 2224 of LNCS. Springer-Verlag, Berlin, 2001,\npp. 179192.\nFeyer,\nT.,\nSchewe,\nK.-D.,\nand Thalheim,\nB.\nConceptual modelling and development of information\nservices.\nIn Conceptual Modeling\nER'98, T. Ling and S. Ram, Eds., vol. 1507 of\nLNCS. Springer-Verlag, Berlin, 1998, pp. 720.\nG\nadke, M., and Turowski, K.\nGeneric web-based\nfederation of business application systems for e-commerce\napplications.\nIn EFIS 1999. 1999,\npp. 
2542.\nGarzotto,\nF.,\nPaolini,\nP.,\nand\nSchwabe,\nD.\nHDM - a model-based approach to hypertext application\ndesign. ACM ToIS 11, 1 (1993), 126.\nGoldin, D., Srinivasa, S., and Thalheim, B.\nIs\n= dbs + interaction - towards principles of information\nsystems. In Proc. ER'2000 (2000), LNCS\n1920, Springer, pp. 140153.\nHarel, D., and Naamad, A.\nThe STATEMATE\nSemantics of Statecharts. ACM Transactions on\nSoftware Engineering and Methodology 5, 4 (Ok-tober\n1996), 293333.\nJablonski,\nS.\nWorkflow-Management-Systeme:\nModellierung und Architektur. Thomson's Ak-tuelle\nTutorien. International Thomson Publishing\n, Bonn, Germay et al., 1996.\nKaschek, R., Matthews, C., and Wallace, C.\ne-Mortgages: NZ State of the Art and Perspectives\n. In Proceedings of SCI 2003 (2003).\nKaschek,\nR.,\nSchewe,\nK.-D.,\nThalheim,\nB.,\nZhang, L.\nModelling contexts in web information\nsystems. Proc. WES 2003.\nMatthews,\nC.\nD.,\nKaschek,\nR.\nH.,\nWallace\n,\nC.\nM.,\nand\nSchewe,\nK.\nD.\nIST\nin\nLending:\nUnlimited\npotential\nbut\nlimited\npractice.\nAvailable\nfrom:\nhttp://cbs.dk/staff/lars.heide/ISTOS/program.htm,\n2003. Paper presented at ISTOS Workshop in\nBarcelona, Spain, 28-30 March 2003.\nRossi, G., Schwabe, D., and Lyardet, F.\nWeb\napplication models are more than conceptual\nmodels.\nIn Advances in Conceptual Modeling,\nP. C. et al., Ed., vol. 1727 of LNCS. Springer-Verlag\n, Berlin, 1999, pp. 239252.\nRumbaugh,\nJ.,\nBlaha,\nM.,\nPremerlani,\nW.,\nEddy, F., and Lorensen, W.\nObject-Oriented\nModeling and Design. Prentice-Hall, Inc., Engle-wood\nCliffs, New Jersey, 1991.\nSchewe,\nB.\nKooperative Softwareentwicklung.\nDeutscher Universit\natsverlag, Wiesbaden, Germany\n, 1996.\nSchewe,\nB.,\nSchewe,\nK.-D.,\nand\nThalheim,\nB.\nObjektorientierter Datenbankentwurf in der\nEntwicklung betrieblicher Informationssysteme.\nInformatik Forschung und Entwicklung 10\n(1995), 115127.\nSchewe,\nK.-D.,\nKaschek,\nR.,\nMatthews,\nC.,\nand Wallace, C.\nModelling web-based banking\nsystems: Story boarding and user profiling.\nIn Proceedings of the Workshop on Conceptual\nModelling Approaches to E-Commerce, H. Mayr\nand W.-J. Van den Heuvel, Eds. Springer-Verlag,\n2002.\nSchewe,\nK.-D.,\nand\nSchewe,\nB.\nIntegrating\ndatabase and dialogue design. Knowledge and\nInformation Systems 2, 1 (2000), 132.\nSchewe, K.-D., and Thalheim, B.\nModeling interaction\nand media objects. In Proc. NLDB'\n2000 (2000), LNCS 1959, Springer, pp. 313324.\nSchewe, K.-D., and Thalheim, B.\nModeling interaction\nand media objects.\nIn Advances in\nConceptual Modeling, E. M\netais, Ed., vol. 1959\nof LNCS. Springer-Verlag, Berlin, 2001, pp. 313\n324.\nSchwabe, D., and Rossi, G.\nAn object oriented approach\nto web-based application design. TAPOS\n4, 4 (1998), 207225.\nSiegel, D.\nThe secrets of successful web sites. Markt\nund Technik, M\nunchen, 1998.\nSowa, J. F., and Zachman, J. A.\nExtending and\nformalizing the framework for information systems\narchitecture. IBM Systems Journal 31, 3\n(1992), 590 616.\nSrinivasa, S.\nA Calculus of Fixed-Points for Char-acterising\nInteractive Behaviour of Information\nSystems. PhD thesis, BTU Cottbus, Fachbereich\nInformatik, Cottbus, 2001.\nThalheim,\nB.\nEntity-relationship modeling\nFoundations of database technology.\nSpringer,\n2000.\nSee also http://www.informatik.tu-cottbus\n.de/\nthalheim/HERM.htm.\nThalheim, B.\nReadings in fundamentals of interaction\nin information systems. Reprint BTU Cottbus\n, 2000.\nCollection of papers by C. Binder, W.\nClau, A. D\nusterh\noft, T. 
Feyer, T. Gutacker, B. Heinze,\nJ. Lewerenz, M. Roll, B. Schewe, K.-D. Schewe, K. Seelig,\nS. Srinivasa, B. Thalheim. Accessible through\nhttp://www.informatik.tu-cottbus.de/\nthalheim\n.\nThalheim,\nB.,", "keywords": "scenarios;Web Information Systems;web site;Web services;usability;story boarding;context-awareness;context-aware information systems;web information system;media objects;scenes;media type;SiteLang"} {"name": "57", "title": "Contour-based Partial Object Recognition using Symmetry in Image Databases", "abstract": "This paper discusses the problem of partial object recognition in image databases. We propose the method to reconstruct and estimate partially occluded shapes and regions of objects in images from overlapping and cutting. We present the robust method for recognizing partially occluded objects based on symmetry properties, which is based on the contours of objects. Our method provides simple techniques to reconstruct occluded regions via a region copy using the symmetry axis within an object. Based on the estimated parameters for partially occluded objects, we perform object recognition on the classification tree. Since our method relies on reconstruction of the object based on the symmetry rather than statistical estimates, it has proven to be remarkably robust in recognizing partially occluded objects in the presence of scale changes, rotation, and viewpoint changes.", "fulltext": "INTRODUCTION\nMost existing methods for object recognition are based on full objects.\nHowever, many images in electronic catalogs contain multiple objects\nwith occluded shapes and regions. Due to the occlusion of objects,\nimage retrieval can provide incomplete, uncertain, and inaccurate\nresults. To resolve this problem, we propose new method to\nreconstruct objects using symmetry properties since most objects in a\ngiven image database are represented by symmetrical figures.\nEven though there have been several efforts in object recognition with\nocclusion, currents methods have been highly sensitive to object pose,\nrotation, scaling, and visible portion of occluded objects [12] [9] [17]\n[3] [15]. In addition, many appearance-based and model-based object\nrecognition methods assumed that they have known occluded regions\nof objects or images through extensive training processes with\nstatistical approach. However, our approach is not limited to\nrecognizing occluded objects by pose and scale changes, and does not\nneed extensive training processes.\nUnlike existing methods, our method finds shapes and regions to\nreconstruct occluded shapes and regions within objects. Our approach\ncan handle object rotation and scaling for dealing with occlusion, and\ndoes not require extensive training processes. The main advantage of\nour approach is that it becomes simple to reconstruct objects from\nocclusions. We present the robust method, which is based on the\ncontours of objects, for recognizing partially occluded objects based\non symmetry properties. The contour-based approach finds a\nsymmetry axis using the maximum diameter from the occluded object.\nIn experiments, we demonstrate how our method reconstruct and\nrecognize occluded shapes and regions using symmetry. Experiments\nuse rotated and scaled objects for dealing with occlusion. We also\nevaluate the recognition rate of the reconstructed objects using\nsymmetry and the visible portion of the occluded objects for\nrecognition.\nThe rest of this paper is organized as follows. 
In Section 2, we briefly\nreview work related to this study. In Section 3, we describe a method\nto recognize partial objects from given classes. In Section 4, we\ndescribe experimental results for partial object recognition. Finally,\nwe summarize this paper in Section 5.\n\nRELATED WORK\nThere have been several research efforts in object recognition for\ndealing with occlusion. Krumm [13] proposed a new algorithm for\ndetecting objects in images which uses models based on training\nimages of the object, with each model representing one pose.\nWilliams [23] proposed a method for the reconstruction of solid-shape\nfrom image contour using the Huffman labeling scheme. For\nobject recognition, Chang and Krumm [3] used the color\ncooccurrence histogram based on pairs of pixels. Schiele et al. [20]\nproposed a method to perform partial object recognition using\nstatistical methods, which are based on multidimensional receptive\nfield histograms. In addition, Rajpal et al. [17] introduced a method\nfor partial object recognition using neural network based indexing.\nIn appearance-based object recognition, Edwards and Murase [6]\naddressed the occlusion problem inherent in appearance-based\nmethods using a mask to block out part of the basic eigenimages and\nthe input image. Leonardis and Bischof [14] handled occlusion,\nscaling, and translation by randomly selecting image points from the\nscene and their corresponding points in the basis eigenvectors. Rao\n[18] applied the adaptive learning of eigenspace basis vectors in\nappearance-based methods. Ohba and Ikeuchi [16] were able to\nhandle translation and occlusion of an object using eigenwindows.\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise,\nor republish, to post on servers or to redistribute to lists, requires prior\nspecific permission and/or a fee.\nSAC'05, March 13-17, 2005, Santa Fe, New Mexico, USA.\nCopyright 2005 ACM 1-58113-964-0/05/0003...$5.00.\n\n1190\n2005 ACM Symposium on Applied Computing\nCurrent methods for dealing with occlusion have been based on\ntemplate matching, statistical approaches using localized invariants,\nand recognition of occluded regions based on local features. In\naddition, there are many efforts in ellipse construction and detection\n[7][9][22]. In this paper, we propose unique methodologies in object\nrecognition for dealing with occlusion based on symmetry properties\nthrough the ellipse reconstruction.\nEven though there have been several efforts in object recognition with\nocclusion, current methods have been highly sensitive to object pose\nand scaling. In addition, many appearance-based and model-based\nobject recognition methods assumed that they have known occluded\nregions of objects or images through extensive training processes.\nHowever, our method is not limited to recognizing occluded objects\nby pose and scale changes, and do not require extensive training\nprocesses.\nTHE PROPOSED METHOD\nWe discuss the object reconstruction and the parameter estimation\nmethod to find the best matching class of input objects using the\nclassification tree method [4]. 
We extracted shape parameters from\nreconstructed objects using RLC lines, such as roundness, aspect ratio,\nform factor, surface regularity [5].\nThe approach tries to find occluded shapes within partially occluded\nobjects. The basic assumption is that most objects are represented by\nsymmetrical figures. When a symmetric object is partially occluded,\nwe use the symmetry measure to evaluate the symmetric shape. We\nestimate the most similar parameters of occluded shape and region of\nobjects, and we retrieve objects that have the estimated parameters of\noccluded objects.\nA basic idea of reconstruction and estimation of occluded objects is to\nuse symmetry properties within objects and use to the contour of\nobjects. Fortunately, most products in electronic catalogs have\nsymmetry in their shapes and they are represented by symmetrical\nfigures. Symmetrical descriptions of shape or detection of\nsymmetrical features of objects can be useful for shape matching,\nmodel-based object matching, and object recognition [2] [1].\nIn the given database, we have elliptical and roughly-rounded objects\nsuch as plates, cups, pans, and pots, depending on their poses and\nshapes. First, we consider elliptical objects in which the occlusion\nchanges values of measurements and parameters related to diameters.\nWe assume that we can get diameters from elliptical objects, which\nare partially occluded.\n\nFigure 3.1 Three-Spoke from the Triangle.\nHowever, the elliptical objects are limited to the shape of objects.\nTherefore, it may not be applied to other types of shape such as\nirregular shapes. In this case, since we cannot easily detect the\nsymmetry axes, we introduce the three-spoke type symmetry method\nas shown in Figure 3.1. We apply this approach to roughly-rounded\nobjects such as cups.\nFor roughly-rounded objects, we use the three-spoke type method,\nwhich is derived from the triangle. The triangle is a basic model to\nrepresent figures such as circle, rectangle, and polygon. We use\nextended lines of the triangle to make axes as shown in Figure 3.1.\nThe three-spoke type symmetry axes, which are equally assigned by\n120 degrees, provide the possibility to detect proper symmetry axes\non roughly-rounded objects. Therefore, this method can detect\nsymmetry axes in roughly-rounded objects.\nIn order to perform the following procedures, we assume that objects\nare represented by symmetrical figures.\n1.\n\nWe have an occluded elliptical object in Figure 3.2 and roughly-rounded\nobject in Figure 3.6, we can get cutting points of the\nocclusion (x,y)' and (x,y)'', that are given by overlapping or\ncutting.\n\n\nFigure 3.2 The Occlusion Area\nEstimation using Symmetry: Get\ncutting points (x,y)' and (x,y)''\nand get a distance l'.\nFigure 3.3 The Occlusion Area\nEstimation using Symmetry:\nGet the maximum diameter\nand the symmetry axis.\n\n\nFigure 3.4 The Occlusion\nArea Estimation using\nSymmetry: Get the estimated\nregion a' using a line l' and\nthe symmetry axis.\n\nFigure 3.5 The Occlusion Area\nEstimation using Symmetry:\nAdd region a' to occluded shape\nand region and re-captured the\nestimated shape of an object.\n\n2.\n\nCompute a distance between two cutting points from (x,y)' and\n(x,y)'', which is called a line l' as in Figure 3.2 and 3.6.\n3.\n\nBased on a line l', make a connection between two points, fill the\nconcave region and re-captured the shape. 
It is important to\ncompute a centroid in an object.\n4.\n\nGet the maximum diameter from re-captured shape using\nextremal points as shown in Figure 3.4 and 3.7. Two extremal\npoints (r, l) and (r, l)' from re-captured shape as in Figure 3.7.\nThe distance between two extreme boundary points are\nrepresented by the maximum diameter.\n5.\n\nIn elliptical objects, one of the maximum and minimum\ndiameters can be a symmetry axis. In roughly-rounded objects,\nwe use the three-spoke type symmetry, one spoke can be a\n1191\nsymmetry axis to find occluded region within an object.\n6.\n\nCentroid Detection: In case of elliptical objects, we find a\ncentroid based on the maximum diameter and a line\nperpendicular to the maximum diameter, which is located in the\ncenter of the length of the maximum diameter. We select\nsymmetry axes based on one of these lines as in Figure 3.3. In\nroughly-rounded objects, we get a centroid, based on whole\nregion of an object. Equation 2 is adapted from Russ [19]. If the\ncentroid is calculated by equation 1 using the boundary pixels\nonly, the results may not be correct. The calculated points will be\nbiased toward whichever part of the boundary is most complex\nand contains the most pixels. The correct centroid location uses\nthe pairs of coordinates\ni\nx\n,\ni\ny\nfor each point in the shape\nboundary. The centroid of an irregular shape is calculated\ncorrectly using all of the pixels in an object.\n\nArea\ny\nC\nArea\nx\nC\nk\ni\ni\ny\nk\ni\ni\nx\n\n\n=\n=\n=\n=\n0\n0\n,\n\n\n(1)\nArea\nx\nx\ny\ny\nC\nArea\ny\ny\nx\nx\nC\nk\ni\ni\ni\ni\ni\ny\nk\ni\ni\ni\ni\ni\nx\n\n\n\n\n\n\n\n\n+\n=\n+\n=\n0\n2\n1\n2\n1\n0\n2\n1\n2\n1\n)\n(\n)\n(\n,\n)\n(\n)\n(\n(2)\n7.\n\nIn roughly-rounded objects, a centroid is put at the same position\nat the center of the three-spoke type symmetry axes.\n\nFigure 3.6 The occlusion of a\ncup: Get a centroid after re-captured\na shape.\nFigure 3.7 Get extremal points\n(r,l), (r,l)' and (r,l)'',(r,l)''' and\nthe maximum diameter of an\nobject.\n\nFigure 3.8 Use the three spoke\ntype symmetry: Match a center of\nthe spoke to a centroid and\nparallel one of axes to the\nmaximum diameter.\nFigure 3.9 Extend axes and\nmake symmetry axes.\n\nFigure 3.10 Select a symmetry\naxis based on two regions,\nwhich are A and B.\nFigure 3.11 Find a region a'\nof occluded shape using a\nsymmetry axis and add to a\noccluded shape.\n\n8.\n\nAxis Detection: The midpoint of the major axis is called the\ncenter of the ellipse. The minor axis is the line segment\nperpendicular to the major axis which also goes through the\ncenter and touches the ellipse at two points. In elliptical objects,\nwe detect a symmetry axis based on the maximum diameter or\nthe minimum diameter. To find a symmetry axis in roughly-rounded\nobjects, one of axes of the three-spoke type symmetry\naxes is in parallel with the maximum diameter of an object as\nshown in Figure 3.8.\nBased on occluded shape and region, we select a symmetry axis\nto estimate this region within an object. Figures 3.9 and 3.10\nshow how to select a symmetry axis. When we select an axis in\nroughly-rounded objects, we consider conditions as follows:\n\nSelect axes, which don't intersect the occluded region.\n\n3.9 and 3.10 show how to select a symmetry axis. 
Select\naxes, which have a region with the maximum diameter l'.\n\nArea and perimeter are invariants as in equation 3, compare\nthe proportion of region A and B.\nB\nA\nArea\nPerimeter\nArea\nPerimeter\n\n\n\n\n\n\n\n\n\n\n(3)\n9.\n\nUsing mirror symmetry, we can get points across an axis. We\nfind points on the contour across an axis which have the same\nlength l' and the same angle corresponding to the axis that is\nperpendicular to a symmetry axis, but the distance between axis\nand points may or may not be the same.\n10.\n\nCapture a region a', move the captured region to the occluded\nshape using the mirror symmetry, and add to these regions as\nshown in Figure 3.4, 3.5, and 3.11.\n11.\n\nRe-compute shape measurements such as area, diameters, and\nperimeter using RLC lines from re-captured shape of an object.\nThen, re-compute shape parameters based on measurements.\n12.\n\nApply to a classifier.\nFrom the above discussions, we described how to reconstruct and\nestimate the partially occluded shape and region of an object and how\nto find the best matching class of partially occluded objects after the\nestimation.\n1192\nEXPERIMENTAL RESULTS\nIn the sections, we evaluate and describe the results of partial object\nrecognition by our proposed a method. We have selected 190 partially\noccluded objects of images from electronic catalogs on the Internet as\nwell as manipulated images. We assume that occluded objects have\nmore than 50% visibility of objects, and images of catalogs contain\npartially occluded objects. The objects are categorized by semantic\nmeanings such as cup and plate. In addition, our approaches and\nexperiments are limited to cups and plates since we use roughly-rounded\nor elliptical objects. More precisely, the database contains 32\nobjects from different viewpoints and images of 97 objects\ncomprising image plane rotations and scale changes.\nIn sample images, we have extracted image features of partially\noccluded objects such as shape and texture. We experimented with\nshape reconstruction based on the contour of objects using symmetry\nproperties. We assumed that inputs are not correctly classified and\nhave occlusion.\nWe experimented with samples such as plates and cups to reconstruct\nthe occluded shape of objects as shown in Figure 4.1 and 4.2. In\nFigure 4.2, it is correctly classified after the reconstruction with an\nocclusion about 30%. On the other hand, Figure 4.1 is not correctly\nclassified after the reconstruction since the width of plate is too\nnarrow. This experiment shows that our method heavily relies on\nshape of objects.\n\nFigure 4.1 Example of the occlusion with a Plate.\n\n\nFigure 4.2 Example of the manipulated occlusion with a Cup.\nFinally, we performed an experiment for the relationships between\nvisible portion of objects and recognition rates. In order to evaluate\nthe visibility of objects, we used manipulated images of cups and\nplates. Figure 4.3 shows the pattern of object recognition in the\npresence of partial occlusion of objects and the results obtained by the\nsymmetric recognition. A visible portion of approximately 67% is\nsufficient for the recognition of objects based on the contour.\n\nFigure 4.3 Object recognition in the presence of the occlusion of\nobjects based on the contour.\n\nThere are many efforts in object recognition for dealing with\nocclusion. The visible portion of objects required to recognize\noccluded objects are shown in Table 4.1. 
Table 4.1 shows a simple\ncomparison between our method and other existing methods. The\nprobabilistic method based on local measurements requires small\nportions of objects to recognize the whole objects, but it required\nextensive training processes to recognize occluded objects [21] [20].\nOur method shows good visibility of partial object recognition and do\nnot need extensive training processes.\n\nTable 4.1 The visibility of object recognition in the presence of\npartial occlusion.\nMethods Visibility\nTraining\nprocesses\nAppearance matching techniques using\nadaptive masks\n90% not\nrequired\nProbabilistic technique using Chi-square\n72%\nrequired\nProbabilistic technique using local\nmeasurements\n34% required\nContour-based approach using symmetry\n67%\nnot required\nIn order to measure the influence of occlusion and compare its impact\non the recognition performance of the different methods, we\nperformed an experiment as follows.\nFigure 4.4 summarizes the recognition results for different visible\nobject portions. For each test object, we varied the visible object\nportion from 20% to 100% and recorded the recognition results using\nChi-square divergence and our method.\n\nFigure 4.4 Experimental results with occlusion.\n1193\nThe results show that our method clearly obtains better results than\nChi-square divergence. Using only 60% of the object area, almost\n80% of the objects are still recognized. This confirms that our method\nis capable of reliable recognition in the presence of occlusion.\n\nTable 4.2 Summary of Object Recognition Methods for dealing\nwith Occlusion.\nMethods\nOcclusion\nScale changes\nObject Pose\nRotation\nBischof et al. [1]\nYes\nYes\nNo\nNo\nEdwards et al. [6]\nYes\nYes\nNo\nYes(limited)\nOhba et al. [16]\nYes\nNo\nYes\nNo\nRao [18]\nYes\nNo\nYes\nNo\nJacob et al. [11]\nYes\nNo\nYes\nNo\nKrumm [13]\nYes\nNo\nNo\nNO\nContour-based\nusing symmetry\nYes Yes\nYes(limited)\nYes\nTable 4.2 summarizes the various object recognition methods. The\ntable indicates whether the methods can handle occlusion, rotation,\npose, and changes in the size of objects in the database. Unlike the\nother methods, our method can handle scale change, object pose, and\nrotated objects with occlusion, even though our method has minor\nlimitations of object poses.\nCONCLUSION\nIn this paper, we have discussed how to estimate parameters and to\nreconstruct the occluded shape of partial objects in image databases.\nIn order to reconstruct occluded shapes, we used symmetry, which\nprovides powerful method for the partial object recognition. Unlike\nthe existing methods, our method tried to reconstruct occluded shapes\nand regions within objects, since most objects in our domain have\nsymmetrical figures. However, we have limitations in the shape of\nobjects and the occluded region of objects. For example, if a pan has\nan occlusion in handle, it cannot correctly reconstruct and be\nrecognized.\nAnother minor limitation of our method is that a method is sensitive\nto the pose of an object. For example, if we cannot see an ellipse due\nto the object's pose, we cannot recognize the object. 
After estimation,\nwe have applied inputs, which include estimated parameters, to the\nexisting classification trees, to get to the best matching class.\nAll experiments are performed based on the classifier in earlier work.\nIn experiments, the results show that the recognition of the occluded\nobject is properly reconstructed, estimated, and classified, even\nthough we have limited to the size of samples. In addition, we have\nexperienced the power of the symmetry through experiments.\n\nREFERENCES\n[1]\n\nH. Bischof and A. Leonardis. Robust recognition of scaled\neigenimages through a hierachical approach. In IEEE Conference\non Computer Vision and Pattern Recognition, 1998.\n[2]\n\nH. Blum and R.N. Nagel. Shape description using weighted\nsymmetric axis features. Pattern Recognition, 1978.\n[3]\n\nP. Chang and J. Krumm. Object Recognition with Color\nCooccurrence Histograms. In IEEE Conference on Computer\nVision and Pattern Recognition, 1999.\n[4]\n\nJ. Cho and N. Adam. Efficient Splitting Rules based on the\nProbabilities of Pre-Assigned Intervals. In IEEE Conference on\nData Mining, 2001.\n[5]\n\nJ. Cho, A. Gangopadhyay and N. Adam. Feature Extraction for\nContent-based Image search in Electronic Commerce. In MIS/OA\nInternational Conference, 2000.\n[6]\n\nJ. Edwards and H. Murase. Appearance matching of occluded\nobjects using coarse-to-fine adaptive masks. In IEEE Conference\non Computer Vision and Pattern Recognition, 1997.\n[7]\n\nA. W. Fitzgibbon, M. Pilu, and R. B. Fisher. Direct least squares\nfitting of ellipses. In International Conference on Pattern\nRecognition, 1996.\n[8]\n\nM. Fleck. Local Rotational Symmetries. In IEEE Conference on\nComputer Vision and Pattern Recognition, 1986.\n[9]\n\nC. Ho and L. Chan. A fast ellipse/circle detector using geometric\nsymmetry. Pattern Recognition, 1995.\n[10]\n\nJoachim Hornegger, Heinrich Niemann, and Robert Risack.\nAppearance-based object recognition using optimal feature\ntransforms. Pattern Recognition, 2000.\n[11]\n\nDavid W. Jacobs and Ronen Basri. 3D to 2D recognition with\nregions. In IEEE Conference on Computer Vision and Pattern\nRecognition, 1997.\n[12]\n\nGrinnell Jones and Bir Bhanu. Recognition of articulated and\noccluded objects. IEEE Transaction on Pattern Analysis and\nMachine Intelligence, 1999.\n[13]\n\nJohn Krumm. Object detection with vector quantized binary\nfeatures. In IEEE Conference on Computer Vision and Pattern\nRecognition, 1997.\n[14]\n\nAles Leonardis and Horst Bishof. Dealing with Occlusions in the\nEigenspace Approach. In IEEE Conference on Computer Vision\nand Pattern Recognition, 1996.\n[15]\n\nDavid G. Lowe. Object Recognition from Local Scale-Invariant\nFeatures. In International Conference on Computer Vision, 1999.\n[16]\n\nK. Ohba and K. Ikeuchi. Detectability, uniqueness, and\nreliability of eigen windows for stable verification of partially\noccluded objects. IEEE Trans. Pattern Anal. Mach, 1997.\n[17]\n\nN. Rajpal, S. Chaudhury, and S. Banerjee. Recognition of\npartially occluded objects using neural network based indexing.\nPattern Recognition, 1999.\n[18]\n\nR. Rao. Dynamic appearance-based recognition. In IEEE\nConference on Computer Vision and Pattern Recognition, 1997.\n[19]\n\nJohn C. Russ. The Image Processing Handbook. CRC Press, 3rd\nedition, 1998.\n[20]\n\nBernt Schiele and Alex Pentland. Probabilistic Object\nRecognition and Localization. In International Conference on\nComputer Vision, 1999.\n[21]\n\nH. Schneiderman and T. Kanade. 
Probabilistic modeling of local\nappearance and spatial relationships for object recognition. In\nIEEE Conference on Computer Vision and Pattern Recognition,\n1998\n[22]\n\nW. Wu and M. J. Wang. Elliptical object detection by using its\ngeometrical properties. Pattern Recognition 1993.\n[23]\n\nLance R. Williams. Topological reconstruction of a smooth\nmanifold-solid from its occluding contour. Journal of Computer\nVision, 1997.\n\n1194\n", "keywords": "object recognition;reconstruction;Object;contour;Recognition;Symmetry;Image;Contour;occlusion;estimation;symmetry"} {"name": "58", "title": "COOLCAT: An entropy-based algorithm for categorical clustering", "abstract": "In this paper we explore the connection between clustering categorical data and entropy: clusters of similar poi lower entropy than those of dissimilar ones. We use this connection to design an incremental heuristic algorithm, COOLCAT , which is capable of efficiently clustering large data sets of records with categorical attributes, and data streams. In contrast with other categorical clustering algorithms published in the past, COOLCAT's clustering results are very stable for different sample sizes and parameter settings. Also, the criteria for clustering is a very intuitive one, since it is deeply rooted on the well-known notion of entropy. Most importantly, COOLCAT is well equipped to deal with clustering of data streams (continuously arriving streams of data point) since it is an incremental algorithm capable of clustering new points without having to look at every point that has been clustered so far. We demonstrate the efficiency and scalability of COOLCAT by a series of experiments on real and synthetic data sets.", "fulltext": "INTRODUCTION\nClustering is a widely used technique in which data points\nare partitioned into groups, in such a way that points in the\nsame group, or cluster, are more similar among themselves\nthan to those in other clusters. Clustering of categorical\nattributes (i.e., attributes whose domain is not numeric) is\na difficult, yet important task: many fields, from statistics\nto psychology deal with categorical data.\nIn spite of its\nimportance, the task of categorical clustering has received\nscant attention in the KDD community as of late, with only\na handful of publications addressing the problem ([18, 14,\n12]).\nMuch of the published algorithms to cluster categorical\ndata rely on the usage of a distance metric that captures\nthe separation between two vectors of categorical attributes,\nsuch as the Jaccard coefficient. In this paper, we present\nCOOLCAT (the name comes from the fact that we reduce\nthe entropy of the clusters, thereby \"cooling\" them), a novel\nmethod which uses the notion of entropy to group records.\nWe argue that a classical notion such as entropy is a more\nnatural and intuitive way of relating records, and more importantly\ndoes not rely in arbitrary distance metrics. COOLCAT\nis an incremental algorithm that aims to minimize\nthe expected entropy of the clusters. Given a set of clusters\n, COOLCAT will place the next point in the cluster\nwhere it minimizes the overall expected entropy. COOLCAT\nacts incrementally, and it is capable to cluster every\nnew point without having to re-process the entire set. Therefore\n, COOLCAT is suited to cluster data streams (contin-uosly\nincoming data points) [2].\nThis makes COOLCAT\napplicable in a large variety of emerging applications such\nas intrusion detection, and e-commerce data.\nThis paper is set up as follows. 
Section 2 offers the background on the relationship between entropy and clustering, and formulates the problem. Section 3 reviews the related work. Section 4 describes COOLCAT, our algorithm. Section 5 presents the experimental evidence that demonstrates the advantages of COOLCAT. Finally, Section 6 presents conclusions and future work.

BACKGROUND AND PROBLEM FORMULATION
In this section, we present the background of entropy and clustering and formulate the problem.

2.1 Entropy and Clustering
Entropy is the measure of information and uncertainty of a random variable [28]. Formally, if X is a random variable, S(X) the set of values that X can take, and p(x) the probability function of X, the entropy E(X) is defined as shown in Equation 1.

E(X) = -\sum_{x \in S(X)} p(x) \log(p(x))   (1)

The entropy of a multivariate vector \hat{x} = \{X_1, \dots, X_n\} can be computed as shown in Equation 2, where p(\hat{x}) = p(x_1, \dots, x_n) is the multivariate probability distribution.

E(\hat{x}) = -\sum_{x_1 \in S(X_1)} \cdots \sum_{x_n \in S(X_n)} p(\hat{x}) \log p(\hat{x})   (2)

Entropy is sometimes referred to as a measure of the amount of "disorder" in a system. A room with socks strewn all over the floor has more entropy than a room in which socks are paired up, neatly folded, and placed in one side of your sock and underwear drawer.

2.2 Problem formulation
The problem we are trying to solve can be formulated as follows. Given a data set D of N points \hat{p}_1, \dots, \hat{p}_N, where each point is a multidimensional vector of d categorical attributes, i.e., \hat{p}_j = (p_j^1, \dots, p_j^d), and given an integer k, we would like to separate the points into k groups C_1, \dots, C_k, or clusters, in such a way that we minimize the entropy of the whole arrangement. Unfortunately, this problem is NP-Complete and, moreover, difficult to approximate [13]. In fact, the problem is NP-Complete for any distance function d(x, y) defined over pairs of points x, y such that the function maps pairs of points to real numbers (and hence our entropy function qualifies); therefore we need to resort to heuristics to solve it.

We first have to resolve the issue of what we mean by the "whole entropy of the system." In other words, we have to make our objective function clear. We aim to minimize the expected entropy, whose expression is shown in Equation 3, where E(C_1), \dots, E(C_k) represent the entropies of each cluster, C_i denotes the points assigned to cluster i, C_i \subseteq D, with the property that C_i \cap C_j = \emptyset for all i, j = 1, .., k, i \neq j. The symbol \bar{C} = \{C_1, \dots, C_k\} represents the clustering.

\bar{E}(\bar{C}) = \sum_{k} \frac{|C_k|}{|D|} E(C_k)   (3)

This function, as we will see later, allows us to implement an incremental algorithm that can effectively deal with large datasets, since we do not need to look at the entire set of points to decide about the entropy of an arrangement. Rather, we will be able to decide, for each point, how it would affect the entropy of each of the existing clusters if placed in each one of them. The solution we propose in this paper (and present in Section 4) is a heuristic based on finding a set of initial clusters (using the entropic criterion), and then incrementally (greedily) adding points to the clusters according to a criterion that minimizes Equation 3.
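Operationally, Equations 1-3 amount to frequency counts per cluster. The following is a minimal sketch (not the authors' code; all names are illustrative) that computes the entropy of a cluster from the empirical distribution of its whole records and then the size-weighted expected entropy of a clustering; base-2 logarithms are assumed, which matches the worked example later in this section.

```python
import math
from collections import Counter

def entropy(values):
    """Equation 1: entropy of a discrete variable, base-2 logarithm."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def cluster_entropy(cluster):
    """Equation 2 on the empirical distribution of a cluster:
    each whole record (tuple of categorical values) is one outcome."""
    return entropy([tuple(rec) for rec in cluster])

def expected_entropy(clusters):
    """Equation 3: cluster entropies weighted by relative cluster size."""
    total = sum(len(c) for c in clusters)
    return sum((len(c) / total) * cluster_entropy(c) for c in clusters if c)
```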
Furthermore, we make a simplification in the computation of the entropy of a set of records. We assume independence of the attributes of the record, transforming Equation 2 into Equation 5. In other words, the joint probability of the combined attribute values becomes the product of the probabilities of each attribute, and hence the entropy can be calculated as the sum of the entropies of the attributes.

E(\hat{x}) = -\sum_{x_1 \in S(X_1)} \cdots \sum_{x_n \in S(X_n)} \prod_i p(x_i) \log\Big(\prod_i p(x_i)\Big)   (4)
           = E(X_1) + E(X_2) + \cdots + E(X_n)   (5)

Assume that we have a set of three records, v_1 = {"red", "heavy"}, v_2 = {"blue", "light"}, and v_3 = {"red", "medium"}, and we want to form two clusters with them. Figure 1 shows all the possible arrangements, with the entropy of each cluster and the expected entropy of each arrangement. As we can see, the minimum expected entropy is that of arrangement 1, which obviously is the correct way of clustering the records (using two clusters); a small sketch reproducing these numbers appears at the end of this subsection.

Figure 1: Three different clusterings for the set v_1, v_2, v_3. Clustering 1 minimizes the expected entropy of the two clusters.
Clustering 1: Cluster0 = {"red","heavy"}, {"red","medium"} (E = 1.0); Cluster1 = {"blue","light"} (E = 0); expected entropy 0.66.
Clustering 2: Cluster0 = {"red","heavy"}, {"blue","light"} (E = 2.0); Cluster1 = {"red","medium"} (E = 0); expected entropy 1.33.
Clustering 3: Cluster0 = {"red","heavy"} (E = 0); Cluster1 = {"red","medium"}, {"blue","light"} (E = 2.0); expected entropy 1.33.

Even though the assumption of attribute independence is not true in every data set, it proves to work very well in practice (as shall be shown in the experimental section of this paper). Moreover, in the cases where we can demonstrate that there is a correlation between two or more attributes of the data set, we can always change the data points by creating attributes that reflect these correlations and then apply Equation 5 to compute the joint entropy. For instance, if the data set is composed of records with attributes A, B, C, D, E, F and we know that (A, B), (A, C) and (E, F) are correlated, we can convert the data set into one having records with attributes AB, AC, D, EF and compute the entropy assuming that these new attributes are independent. Notice that for the grouped attributes, we are in effect computing their joint probabilities. The correlations between attributes can be easily found by techniques such as the Chi-square and likelihood ratio tests. In our experimental experience, the gains obtained by doing this are small enough to justify the usage of the independence assumption.
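A self-contained sketch (illustrative only, not taken from the paper) that applies the independence simplification of Equation 5 to the three records above and reproduces the expected entropies reported in Figure 1:

```python
import math
from collections import Counter

def entropy(values):                      # Equation 1, base-2 logarithm
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def cluster_entropy_indep(cluster):       # Equation 5: sum of per-attribute entropies
    return sum(entropy([rec[a] for rec in cluster]) for a in range(len(cluster[0])))

v1, v2, v3 = ("red", "heavy"), ("blue", "light"), ("red", "medium")
clusterings = [[[v1, v3], [v2]], [[v1, v2], [v3]], [[v1], [v3, v2]]]
for label, clusters in enumerate(clusterings, start=1):
    exp_ent = sum(len(c) / 3 * cluster_entropy_indep(c) for c in clusters)
    print(label, round(exp_ent, 2))       # 1: 0.67, 2: 1.33, 3: 1.33 (Figure 1: 0.66, 1.33, 1.33)
```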
2.3 Expected entropy and the Minimum Description Length principle
The Minimum Description Length principle (MDL) [26, 27] recommends choosing the model that minimizes the sum of the model's algorithmic complexity and the description of the data with respect to that model. This principle is widely used to compare classifiers (see [23]) but it has not been used much to deal with clustering. Formally, the complexity of the model can be stated as shown in Equation 6, where K() indicates the complexity, h is the model, and D denotes the data set. The term K(h) denotes the complexity of the model, or model encoding, while K(D using h) is the complexity of the data encoding with respect to the chosen model.

K(h, D) = K(h) + K(D using h)   (6)

Consider first the term K(h). To encode the model, we need to encode for each cluster the probability distribution of the attribute values. This can be done by encoding the number of times each attribute value appears in the cluster, and the number of points in each cluster. Assume that there are d attributes in the data and that attribute A_j can assume v_j different values; as usual, k represents the number of clusters in the model. K(h) can then be written as shown in Equation 7. In each cluster i, we need to encode c_i = \sum_{j=0}^{d-1} v_j values, so the total number of values we need to encode is \sum_{i=0}^{k-1} c_i = \alpha k, where \alpha is a constant. We also need to encode the number of points in each cluster, or k values. The number of bits needed to encode the number of times each attribute value occurs in the cluster, or the number of points in a cluster, is equal to \log(|D|), since the maximum for these values is the size of the entire data set. Therefore K(h) is a linear function of k, with a constant \alpha that represents all the contributions described above.

K(h) = \alpha k \log(|D|)   (7)

On the other hand, the encoding of the data given the model can be stated as shown in Equation 8. Once the probabilities of occurrence of each attribute value in each cluster are known, an optimal code (Huffman) can be chosen to represent each attribute value in the cluster. Each point is simply represented by the encoding of its attributes' values. The optimal code is achieved by giving each value a number of bits proportional to -\log(P_{ijl}), where P_{ijl} is the probability that the l-th value of attribute j occurs in cluster i. The second term in the equation simply accounts for the membership of all the points, needing \log(k) bits to encode each individual membership.

K(D using h) = -\sum_{i=0}^{k-1} \frac{|C_i|}{|D|} \sum_{j=0}^{d-1} \sum_{l=0}^{v_j-1} P_{ijl} \log(P_{ijl}) + |D| \log(k)   (8)

Noticing that the first term of Equation 8 is simply the expected entropy of the clustering, we can write K(h, D) as shown in Equation 9. Notice that for a fixed k, the MDL principle indicates that the best model can be found by minimizing the expected entropy of the clustering, which is precisely our goal.

K(h, D) = \alpha k \log(|D|) + |D| \log(k) + \bar{E}(\bar{C})   (9)
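Since the first two terms of Equation 9 depend only on k and |D|, comparing clusterings at a fixed k under MDL reduces to comparing their expected entropies. A tiny illustrative helper (not from the paper; alpha stands for the unspecified constant of Equation 7):

```python
import math

def description_length(expected_ent, n_points, k, alpha=1.0):
    """Equation 9: model cost + membership cost + expected entropy of the clustering.
    alpha is a placeholder for the constant absorbing the per-cluster encoding terms."""
    return alpha * k * math.log2(n_points) + n_points * math.log2(k) + expected_ent
```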
2.4 Evaluating clustering results
A frequent problem one encounters when applying clustering algorithms in practice is the difficulty of evaluating the solutions. Different clustering algorithms (and sometimes multiple applications of the same algorithm using slight variations of initial conditions or parameters) result in very different solutions, all of them looking plausible. This stems from the fact that there is no unifying criterion to define clusters, and more often than not, the final clusters found by the algorithm are in fact the ones that correspond to the criterion used to drive the algorithm. Methods to evaluate whether or not the structure found is a property of the data set, and not one imposed by the algorithm, are needed. Authors have pondered good ways to validate clusters found by algorithms (e.g., see [21, 1]). Two widely used methods are the following:

Significance Test on External Variables. This technique calls for the usage of significance tests that compare the clusters on variables not used to generate them. One way of doing this is to compute the entropy of the solution using a variable that did not participate in the clustering (a class attribute). The entropy of an attribute C in a cluster C_k is computed as shown in Equation 10, where V_j denotes one of the possible values that C can take. The evaluation is performed by computing the expected entropy (taking the cluster sizes into consideration). The smaller the value of E(C_k), the better the clustering fares.

E(C_k) = -\sum_{j} P(C = V_j) \log P(C = V_j)   (10)

The category utility function. The category utility (CU) function [15] attempts to maximize both the probability that two objects in the same cluster have attribute values in common and the probability that objects from different clusters have different attributes. The expression to calculate the expected value of the CU function is shown in Equation 11, where P(A_i = V_{ij} | C_k) is the conditional probability that attribute i has the value V_{ij} given the cluster C_k, and P(A_i = V_{ij}) is the overall probability of attribute i having the value V_{ij} (in the entire set). The function aims to measure whether the clustering improves the likelihood of similar values falling in the same cluster. Obviously, the higher the value of CU, the better the clustering fares.

CU = \sum_{k} \frac{|C_k|}{|D|} \sum_{i} \sum_{j} \big[ P(A_i = V_{ij} | C_k)^2 - P(A_i = V_{ij})^2 \big]   (11)

We have used both techniques in validating our results, as shall be seen in the experimental section.

2.5 Number of clusters
The issue of choosing the number of clusters is one common to all clustering methods, and our technique is no exception. Many methods have been proposed for determining the right number of clusters (e.g., [4, 9]). Unfortunately, many of these methods (e.g., [4]) assume that it is possible to compute a centroid for each cluster, which for categorical data is not easy. We consider this issue outside the scope of this paper, since we plan to examine good ways of selecting the optimal number of clusters in the context of our metric.

RELATED WORK
Clustering is an extensively researched area, not only by data mining and database researchers [31, 11, 17, 18, 3] but also by people in other disciplines [10]. Among the numerical clustering algorithms, ENCLUS [6] uses entropy as a criterion to drive the algorithm. However, ENCLUS follows a completely different approach from ours, dividing the hyperspace recursively. For each subspace, ENCLUS estimates its density and entropy and determines if it satisfies the goodness criterion: its entropy has to be lower than a threshold. However, it is not possible to translate either the algorithm or the relationships to the area of categorical clustering, since the notion of density has no intuitive meaning when the attributes are categorical. In a recent paper [16], the authors use Renyi's definition of entropy [25] to define a clustering evaluation function that measures the distance between clusters as the information potential [24] between them. Using this function, they describe an algorithm that, starting with a random placement of points in clusters, perturbs the placement until the improvement in the information potential is no longer appreciable. This algorithm, however, cannot scale to large data sets since it requires all points to perform the calculation of the distance.

In the area of clustering categorical records, a few recent publications are worth mentioning. In [19], the authors address the problem of clustering transactions in a market basket database by representing frequent item sets as hyperedges in a weighted hypergraph. 
The weight of the graph is\ncomputed as the average of the confidences for all possible\nassociation rules that can be generated from the item set.\nThen, a hypergraph partitioning algorithm is employed to\npartition the items, minimizing the weight of the cut hyper-edges\n. The algorithm does not produce a clustering of the\ntransactions and it is not obvious how to obtain one from\nthe item clusters. A related paper by Gibson et al [14] also\ntreats categorical clustering as hypergraph partitioning, but\nuses a less combinatorial approach to solving it, based on\nnon-linear dynamical systems.\nCACTUS [12], is an agglomerative algorithm that uses the\nauthor's definitions of support, strong connection and similarity\nto cluster categorical data. Support for an attribute\nvalue pair (a\ni\n, a\nj\n), where a\ni\nis in the domain of attribute\nA\ni\nand a\nj\nin the domain of attribute A\nj\nis defined as the\nnumber of tuples that have these two values. The two attributes\na\ni\n, a\nj\nare strongly connected if their support exceeds\nthe value expected under the attribute-independence.\nThis concept is then extended to sets of attributes. A cluster\nis defined as a region of attributes that are pairwise\nstrongly connected, no sub-region has the property, and its\nsupport exceeds the expected support under the attribute-independence\nassumption.\nROCK [18] computes distances between records using the\nJaccard coefficient.\nUsing a threshold, it determines, for\neach record, who are its neighbors. For a given point p, a\npoint q is a neighbor of p if the Jaccard coefficient J(p, q)\nexceeds the threshold. Then, it computes the values of a\nmatrix LIN K, in which the entries link(p, q) are the number\nof common neighbors between p and q. The algorithm\nthen proceeds to cluster the records in an agglomerative way,\ntrying to maximize for the k clusters (k is a predefined integer\n) the function\nk\ni=1\nn\ni\np,qC\ni\nlink(p,q)\nn\n1 + 2f ()\ni\n, where is\nthe threshold, and f () is a function selected by the user.\nThe choice of f () is critical in defining the fitness of the\nclusters formed the the ROCK algorithm, and, as the authors\npoint out, the function is dependent on the data set\nas well as on the kind of cluster the user is interested in. We\nfeel that choosing the function is a delicate and difficult task\nfor users that may be a roadblock to using ROCK efficiently.\nSnob [29, 30] is an unsupervised learning algorithm based\non the notion of Minimum Message Length (MML). MML\nis an information theoretic criterion for parameter estimation\nand model selection. Although MML is similar to the\nMDL criterion of Rissanen, MML is a Bayesian criterion\nand therefore uses an a-priori distribution of parameter values\n. Snob is in the category of mixture model algorithms\n[22]. Snob is iterative in nature and therefore does not scale\nwith large data sets. Moreover, contrary to COOLCAT, it\nis difficult to envision how Snob can be used to cluster data\nstreams. AUTOCLASS [5] also uses mixture models and\nBayesian criteria to cluster data sets. Again, AUTOCLASS\ndoes not scale well with large data sets.\nOUR ALGORITHM\nOur entropy-based algorithm, COOLCAT, consists of two\nsteps: initialization and incremental step.\n4.1\nInitialization\nThe initialization step \"bootstraps\" the algorithm, finding\na suitable set of clusters out of a sample S, taken from the\ndata set (\n|S| << N), where N is the size of the entire data\nset. 
We first find the k most "dissimilar" records in the sample set by maximizing the minimum pairwise entropy of the chosen points. We start by finding the two points ps_1, ps_2 that maximize E(ps_1, ps_2) and placing them in two separate clusters (C_1, C_2), marking the records (this takes O(|S|^2)). From there we proceed incrementally, i.e., to find the record we will put in the j-th cluster, we choose an unmarked point ps_j that maximizes min_{i=1,..,j-1} E(ps_i, ps_j). The remaining unmarked sample points (|S| - k), as well as the points outside the sample, are placed in the clusters using the incremental step.

We are interested in determining the size of the sample that guarantees, with high probability, the existence in the sample of at least one member of each cluster, given the number of clusters. In [17], the authors address the same problem and use Chernoff bounds [7] to bound the size of the sample, given an estimate of the size of the smallest cluster with respect to the average size (|D|/k) and the confidence level for the probability of finding at least one member of each cluster. The estimate of the size of the smallest cluster with respect to the average size is given in the form of a parameter \rho = \frac{|D|}{k\,m}, where m is the size of the smallest cluster; \rho is then a number greater than 1. Writing \delta for the allowed failure probability, the bound on the size of the sample is given by Equation 12.

s = \rho k + \rho k \log\!\left(\tfrac{1}{\delta}\right) + \rho k \sqrt{\left(\log\tfrac{1}{\delta}\right)^2 + 2 \log\tfrac{1}{\delta}}   (12)

It is important to remark that Equation 12 does not depend on the size of the data set, which makes the bound very favorable for larger sets (and unfavorable for small ones, but this is not a problem since for small sets we can simply use the entire set as a sample).
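The following sketch illustrates both the bootstrap just described and the placement rule of the incremental step presented in Section 4.2 below (Figure 2); it is a simplified rendering of the heuristic as described, not the authors' implementation, and the helper names are ours.

```python
import math
from collections import Counter

def entropy(values):                       # Equation 1, base-2 logarithm
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def set_entropy(records):                  # Equation 5: sum of per-attribute entropies
    return sum(entropy([r[a] for r in records]) for a in range(len(records[0])))

def init_clusters(sample, k):
    """Pick k maximally dissimilar seeds by maximizing the minimum pairwise entropy."""
    seeds = list(max(((p, q) for p in sample for q in sample if p != q),
                     key=lambda pq: set_entropy(list(pq))))
    while len(seeds) < k:
        seeds.append(max((p for p in sample if p not in seeds),
                         key=lambda p: min(set_entropy([p, s]) for s in seeds)))
    return [[s] for s in seeds]

def place(point, clusters):
    """Incremental step: put the point where it minimizes the expected entropy (Eq. 3)."""
    def exp_ent(cs):
        total = sum(len(c) for c in cs)
        return sum(len(c) / total * set_entropy(c) for c in cs)
    best = min(range(len(clusters)),
               key=lambda i: exp_ent(clusters[:i] + [clusters[i] + [point]] + clusters[i + 1:]))
    clusters[best].append(point)
```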
That is, at\nthe end of the batch, we know the values of q\nij\n, for each\nrecord i in the batch and each attribute j, where q\nij\nrepresent\nthe number of times that the value V\nij\nappears in the\ncluster where i was placed. We convert these numbers into\nprobabilities by dividing q\nij\nby the cluster size (i.e.,\nC\nl\n,\nwhere C\nl\nis the cluster where i was placed). Let us call these\nnumbers p\nij\n. For each record, we can compute a fitting probability\np\ni\n=\nj\n(p\nij\n). Notice that the lower the p\ni\nis, the\nworst fit the record is in that cluster (we can say that the\nglobal combination of attributes is not very common in the\ncluster). We then sort records according to p\ni\nand select the\nm records in the batch with lowest p\ni\nas the records to be\nreprocessed. Each re-processed record is placed in the cluster\nthat minimizes the expected entropy (as done originally\nin the incremental step).\nEXPERIMENTAL RESULTS\nOur experiments were run in a DELL server equipped\nwith a Pentium III running at 800 MHz, and 1 Gigabyte of\nmain memory, running Red Hat Linux 2.2.14. We used two\nkinds of data sets: real data sets (for evaluating the quality\nof our algorithm) and synthetic data sets (for the evaluation\nof scalability). The experiments were conducted using the\nfollowing datasets (plus a synthetically generated data set\nto test the scalability of the algorithm).\nArchaeological data set\nOur first data set is a hypothetical collection of human\ntombs and artifacts from an archaeological site. Although\nthe data set is not \"real,\" it is realistic enough\nand so we include it in this section. It has also the\nproperty of being small, so brute force can be used\nto find the optimal clustering. The data set is taken\nfrom [1] The first attribute (not used for clustering\nbut for verification) indicates the sex (M for male, F\nfor female) of the individuals buried. The other eight\nattributes are binary (1 present, 0 non-present), and\nrepresent artifacts types (e.g., ceramics, bracelets, arrow\npoints) that were found (or not found) in the tomb.\nCongressional votes This data set was obtained from\nthe UCI KDD Archive ([20]) and contains the United\nStates Congressional Voting Records for the year 1984.\nEach record contains a Congressman's votes on 16 issues\n. All the attributes are boolean (\"yes\" or \"no\"),\nwith a few of the votes containing missing values. We\ndecided to treat missing values as another domain value\nfor the attribute. A classification field with the labels\n\"Democrat,\" or \"Republican\" is provided for each\nrecord, which are not used for clustering, but can be\nloosely used for quality measuring. (Some congress-men\n\"crossed\" parties to vote.) There are 435 records\nin the set (267 Democrats and 168 Republicans).\nKDD Cup 1999 data This data set can be obtained\nfrom the UCI Archive [20], and was used for the the\nThird International Knowledge Discovery and Data\nMining Tools Competition. This database contains a\nstandard set of network audit data, which includes a\nwide variety of simulated intrusions. 
Each record, corresponding\nto a connection, contains 42 features, some\nof them categorical, and the rest continuous variables.\nWe transformed the continuous variables in categorical\nby a simple process of discretization: we computed\nthe median of each attribute, and assigned any value\nbelow and including the median a label \"0,\" while the\nrest of the values were assigned a label \"1.\" There are\nmany intrusion data sets in the repository, some of\nthem to be used as training sets and some as test sets.\nWe utilized the set that corresponds to 10% of the\ntraining data. In this set, records have an extra attribute\n(class), labeled with a \"1\" if the connection is\npart of an attack, or a \"0\" if it is not. We use this\nattribute for the evaluation of external entropy (not in\nthe clustering process).\n5.1\nArchaeological Data\nFigure 3 show the results of using COOLCAT in the archaeological\ndata set.\nWe performed experiments with 2\nclusters, since the attribute with which we evaluate the external\nentropy (not used in the clustering) is Sex (and the\n586\nAlg.\nm\nCU\nExt\nE.\nExpected\n(sex)\nentropy\nCOOLCAT 0%\n0.7626\n0\n4.8599\n10%\n0.7626\n0\n4.8599\n20%\n0.7626\n0\n4.8599\nBrute\nForce\n0\n.7626\n0\n4.8599\nROCK\n0\n.3312\n0.9622 n/a\nFigure 3: Results for COOLCAT, ROCK and brute\nforce in the Archaeological data set.\ndata set is small, so we believed that the clustering could effectively\nseparate the two sexes). We conducted experiments\nwith the original data set (which we label \"independent\"),\nand a modified data set in which we grouped attributes in\nthe following way: (1), (24), (26), (34), (35), (46), (78), to reflect\nthe correlations found among the attributes of the set\n(found by using a Likelihood ratio test). However, we only\nreport the results for independent data, since the correlated\nset results are essentially the same. (The same phenomena\nwas observed in the other experiments.) We also conducted\n\"brute force\" experiments, in which we found the optimum\nclustering, i.e., that for which the expected entropy was the\nminimum.\nWe did this to compare how well our heuristic\n(COOLCAT) performed. We also report in the table\nthe best results found by ROCK (which have to be found\nby varying the parameter over a range of values). The\nresults shown in Figure 3 show that the expected entropy\nfunction does an excellent job in clustering this data. The\nresults obtained by COOLCAT (in terms of CU , and external\nentropy with respect to the variable sex, which is\nnot used in the clustering), and expected entropy are the\nsame obtained by the brute force (optimal) approach. In\nall cases, both the CU function and the external entropy\nof the COOLCAT solutions are better than those found for\nthe best ROCK solution. Particularly encouraging is the\nfact that the external entropy for the variable SEX (which\nthe authors of the data set indicated as the one being more\ncorrelated with the clusters), is 0 in all the COOLCAT solutions\n, so a perfect separation is achieved. (ROCK's solution\ndoes not achieve this, resulting in a high external entropy.)\nIn this data set, the re-processing step does not have any\neffect, as seen by the fact that the results are the same for\nall the values of m. This is attributed to the size of the data\nset (only 20 records). 
Both COOLCAT and ROCK took\n0.01 seconds to find a solution for this data set.\n5.2\nCongressional Voting results\nFigure 4 summarizes the results obtained by COOLCAT\nin the Congressional Voting records (no grouping of attributes\nwas performed), for three values of m. The results obtained\nfor various sample sizes are extremely stable. The CU values\nfor the clusterings obtained with COOLCAT are, in\nall the cases superior to the one obtained by ROCK. The\nvalues show no fluctuations on our results as m changes,\nwhile the value for CU is 11% better than ROCK's value.\nThe external entropy for the COOLCAT solutions is slightly\nbetter than the value in ROCK's solution. The buffer size\n(batch) in this experiment was 100 records, making the num-Alg\n.\nm\nCU\nExt.Ent.\nExpected\nRunning\n(pol.\naffl.)\nentropy\ntime\n(sec.)\nCOOL\n0%\n2.9350\n0.4975\n13.8222\n0.16\nCAT\n10%\n2.9350\n0.4975\n13.8222\n0.26\n20%\n2.9350\n0.4975\n13.8222\n0.28\nROCK\n2\n.6282\n0.4993\nN/A\n0.51\nFigure 4: Results for COOLCAT and ROCK in the\nCongressional Voting data set\nber of re-processed points 0,10, and 20 (m = 0%, 10%, 20%).\n(Again, these numbers correspond to the means of 500 runs.)\nThe running time of COOLCAT is significantly better than\nthe one for ROCK (a decrease of 45% in the slowest case,\nm = 20%, of COOLCAT).\n5.3\nKDD Cup 1999 data set\nSince we did not have explicit knowledge of how many\nclusters we could find in this data set, we decided to find\nclusterings for many k values, and report, in each case, the\nexpected entropy, external entropy (with respect to the attribute\nthat denotes whether the record is an attack or not),\nand CU . The results are shown in the form of a graph in\nFigure 5.\nIn the figure, the left hand side scale is used\nfor expected entropy and CU , while the right hand side is\nused for external entropy (the values of external entropy are\nbetween 0 and 1, while the other parameters have larger\nranges). The figure shows that all the parameters tend to\nan asymptotic limit as k grows. The saturation starts to\noccur in the value k = 10, which exhibits an external entropy\nof 0.09, which indicates that most of the clusters are\n\"inhabited\" by either attack records or attack-free records.\nIn other words, the clustering achieves a good separation of\nthe points. The experiments were conducted using a sample\nsize of 1,000 points, which guarantees a level of confidence\nof 95% (for = 10).\n5.4\nSynthetic data set\nWe used a synthetic data generator ([8]) to generate data\nsets with different number of records and attributes. We\nused these data sets to test the scalability of COOLCAT.\nThe results are shown in the graph of Figure 7, where the\ny-axis shows the execution time of COOLCAT in seconds,\nand the x-axis the number of records (in multiples of 10\n3\n),\nfor four different number of attributes (A = 5, 10, 20, 40).\nIn all the cases, COOLCAT behaves linearly with respect to\nthe number of records, due to the incremental nature of the\nalgorithm (it processes each record in the data set at most\ntwice: those that are selected for re-processing are clustered\ntwice, the rest only once; moreover, points are brought from\ndisk to memory only once). We used for these experiments\nan m equal to 20%, and a buffer size of 300 records. Notice\nthat in this experiment, we do not report running times for\nROCK. The reason for this is that ROCK is designed to\nbe a main memory algorithm. 
In [18], the authors make it\nexplicit that ROCK deals with large data sets by using random\nsampling (not by looking at the entire set). Therefore,\nit would have been unfair to compare COOLCAT's running\ntimes with those of ROCK (over samples of the sets).\nWe performed another experiment with synthetic data\n587\nFigure 5: Expected entropy, external entropy and\nCU vs.\nNumber of Clusters (k) in the KDD Cup\n1999 data set.\nThe left scale (y-axis) is used for\nxpected entropy and CU , while the right one is used\nfor external entropy.\nsets generated by [8]. In this experiment, each synthetic\nset contained 8,124 records of 23 attributes each. Twenty\ntwo of the attributes are used for clustering and one (Indx)\nfor the evaluation of external entropy. Each data set was\ngenerated using 21 different types of rules. A rule involves\n12 attributes. An example of the rules used is: A = c&C =\na&D = b&K = c&N = a&O = c&Q = b&R = c&S =\nc&T = b&U = b\nIndx = r\n1\n. This rule says that when\nthe 11 attributes on the left hand side take the values shown,\nthe attribute Indx takes the value r\n1\n. Every record obeys\none of the rules. Two sets were generated, using different\nprobability distributions for the rules. In the first one (uniform\n), every rule is used in the same number of records in\nthe data set. (In other words the number of records that\nobey a particular rule is equal to the size of the data set\ndivided by 21.) In the normal distribution, the populations\nare distributed following a Gaussian distribution (some rules\nreceive more records than others). The 23rd attribute takes\nthe value of the rule number (rule index).\nThe external\nentropy is calculated using this attribute (which does not\nparticipate in the clustering). Figure 6 shows the evaluation\nof clusters obtained by COOLCAT over different synthetic\ndata sets.\nThe table shows also the results obtained by\nusing ROCK. As we can see, COOLCAT results are significantly\nbetter than those obtained by ROCK for both data\nsets. Particularly significant is the fact that the external\nentropy for the COOLCAT solutions in the Uniform case\nwith m = 10%, 20% are 0, indicating a perfect separation\nof rules. The values for other cases are extremely close to\n0 as well. As expected, re-processing (increasing m) helps\nin finding a better clustering. However, the impact is more\nmarked when going from no re-processing (m = 0) to re-processing\n10% of the points, leveling out from then on.\nThe running times of COOLCAT are more than one order\nof magnitude smaller than those of ROCK.\nCONCLUSIONS\nIn this paper we have introduced a new categorical clustering\nalgorithm, COOLCAT, based in the notion of entropy\n. The algorithm groups points in the data set trying\nto minimize the expected entropy of the clusters. 
The ex-Dist\n.\nm\nCU\nExt.Ent.\nExpected\nRunning\n(rule index\n)\nentropy\ntime\n(sec.)\nCOOLCAT\nUniform 0%\n6.9187\n0.00816\n17.4302\n6.73\n10%\n6.9268\n0.00000\n17.2958\n11.85\n20%\n6.9268\n0.00000\n17.3958\n12.95\nNormal\n0%\n6.8893\n0.02933\n17.4969\n6.88\n10%\n6.8996\n0.00813\n17.4458\n11.99\n20%\n6.9008\n0.00742\n17.4328\n13.07\nROCK\nUniform 6\n.6899\n0.09861\nn/a\n207.37\nNormal\n6\n.2749\n0.34871\nn/a\n223.49\nFigure 6: Results for COOLCAT and ROCK in the\nsynthetic data sets\n0\n2000\n4000\n6000\n8000\n10000\n12000\n14000\n16000\n18000\n20000\n22000\n0\n1000\n2000\n3000\n4000\n5000\n6000\n7000\n8000\n9000\n10000\nexecution time\nNumber of records x 1000\nA = 5\nA = 10\nA = 20\nA = 40\nFigure 7:\nCOOLCAT's performance for the synthetic\ndata sets:\nresponse time (in seconds) vs.\nthe number of records in the data set (in multiples\nof 10\n3\n), for different number of attributes\n(A = 5, 10, 20, 40).\nperimental evaluation supports our claim that COOLCAT\nis an efficient algorithm, whose solutions are stable for different\nsamples (and sample sizes) and it is scalable for large\ndata sets (since it incrementally adds points to the initial\nclusters). We have evaluated our results using category utility\nfunction, and the external entropy which determines if\nthe clusters have significance with respect to external variables\n(i.e., variables not used in the clustering process). In\nour comparisons with ROCK, COOLCAT always shows a\nsmall advantage in terms of the quality measures (CU and\nexternal entropy). However, the real advantage of COOLCAT\nresides in the fact that ROCK is extremely difficult to\ntune (finding the right ), while COOLCAT's behavior to\nits only parameter (m) is extremely stable: small values of\nm are sufficient to obtain a good result. In the largest data\nset for which we compared both techniques (Mushrooms),\nCOOLCAT had a significantly better running time.\nThe incremental nature of COOLCAT makes it possible\nto apply the algorithm to data streams, and as the results in\nscalability show, the algorithm can cope with large volumes\nof data. We are currently doing research in tracking evolving\nclusters using COOLCAT.\n588\nACKNOWLEDGMENTS\nWe like to thank Vipin Kumar and Eui-Hong (Sam) Han\nfor lending us their implementation of ROCK.\n\nREFERENCES\n[1] M.S. Aldenderfer and R.K. Blashfield. Cluster\nAnalysis. Sage Publications, (Sage University Paper\nseries on Quantitative Applications in the Social\nSciences, No. 44), 1984.\n[2] D. Barbar\na. Requirements for clustering data streams.\nSIGKDD Explorations (Special Issue on Online,\nInteractive, and Anytime Data Mining), 3(2), 2002.\n[3] D. Barbar\na and P. Chen. Using the fractal dimension\nto cluster datasets. In Proceedings of the ACM\nSIGKDD International Conference on Knowledge\nDiscovery and Data Mining, Boston, MA, August\n2000.\n[4] R.B. Calinski and J. Harabasz. A dendrite method for\ncluster analysis. Communications in Statistics, pages\n127, 1974.\n[5] P. Cheeseman and J. Stutz. Bayesian classification\n(AUTOCLASS): Theory and Results. In U.M. Fayyad,\nG. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy,\neditors, Advances in Knowledge Discovery and Data\nMining. AAAI Press, Menlo Park, 1995.\n[6] C. CHen, A.W. Fu, and Y. Zhang. Entropy-based\nSubspace Clustering for Mining Numerical Data. In\nProceedings of ACM SIGKDD International\nConference on Knowledge Discovery and Data Mining,\nSan Diego, CA, August 1999.\n[7] H. Chernoff. 
A Measure of Asymptotic Efficiency for\nTests of a Hypothesis Based on the Sum of\nObservations. Annals of Mathematical Statistics, pages\n493509, 1952.\n[8] DataGen. Data Generator: Perfect data for an\nimperfect world. http://www.datasetgenerator.com/.\n[9] R.C. Dubes and A.K. Jain. Validity studies in\nclustering methodologies. Pattern Recognition, pages\n235254, 1979.\n[10] R.O. Duda and P.E. Hart. Pattern Classification and\nScene Analysis. Wiley-Interscience, New York, 1973.\n[11] M. Ester, H.P. Kriegel, and X. Wu. A density-based\nalgorithm for discovering clusters in large spatial\ndatabase with noise. In Proceedings of the\nInternational Conference on Knowledge Discovery and\nData Mining, Portland, Oregon, August 1996.\n[12] V. Ganti, J. Gehrke, and R. Ramakrishnan.\nCACTUS-Clustering Categorical Data Using\nSummaries. In Proceedings of the ACM-SIGKDD\nInternational Conference on Knowledge Discovery and\nData Mining, San Diego, CA, 1999.\n[13] M. Garey and D. Johnson. Computers and\nIntractability: A Guide to the Theory of\nNP-Completeness. W.H. Freeman, 1979.\n[14] D. Gibson, J. Kleinberg, and P. Raghavan. Clustering\nCategorical Data: An Approach Based on Dynamical\nSystems. In Proceedings of the International\nConference on Very Large Databases (VLDB), New\nYork, NY, September 1998.\n[15] A. Gluck and J. Corter. Information, uncertainty, and\nthe utility of categories. In Proceedings of the Seventh\nAnnual Conference of the Cognitive Science Society,\n1985.\n[16] E. Gokcay and J.C. Principe. Information Theoretic\nClustering. IEEE Transactions on Pattern Analysis\nand Machine Intelligence, 24(2), February 2002.\n[17] S. Guha, R. Rastogi, and K. Shim. CURE: A\nclustering algorithm for large databases. In\nProceedings of the ACM SIGMOD Conference on\nManagement of Data, Seattle, WA, May 1998.\n[18] S. Guha, R. Rastogi, and K. Shim. ROCK: A Robust\nClustering Algorithm for Categorical Attributes. In\nProceedings of the 15th International Conference on\nData Engineering, Sydney, Australia, April 1999.\n[19] E.H. Han, G. Karypis, V. Kumar, and B. Mobasher.\nClustering based on association rule hypergraphs. In\nProceedings of the SIGMOD Workshop on Research\nIssues on Data Mining and Knowledge Discovery,\nJune 1997.\n[20] S. Hettich(librarian). UCI KDD Archive.\nhttp://kdd.ics.uci.edu/.\n[21] A.K. Jain and R.C. Dubes. Algorithms for clustering\ndata. Prentice Hall, 1988.\n[22] G.J McLachlan and K.E. Basford. Mixture Models.\nMarcel Dekker, New York, 1988.\n[23] T.M. Mitchell. Machine Learning. McGraw-Hill, 1997.\n[24] J.C. Pincipe, D. Xu, and J. Fisher. Information\ntheoretic learning. In S. Haykin, editor, Unsupervised\nAdaptive Filtering. John Wiley & Sons, 2000.\n[25] A. Renyi. On Measures of Entropy and Information.\nIn Proc. of the Fourth Berkeley Symp. Math.,\nStatistics, and Probability, 1960.\n[26] J. Rissanen. A universal prior for integers and\nestimation by minimum description length. The\nAnnals of Statistics, 1983.\n[27] J. Rissanen. Stochastic complexity in statistical\ninquiry. World Scientific Pub., 1989.\n[28] C.E. Shannon. A mathematical theory of\ncommunication. Bell System Techical Journal, pages\n379423, 1948.\n[29] C.S. Wallace and D.M. Boulton. An information\nmeasure for classification. The Computer Journal,\n11(2), 1968.\n[30] C.S. Wallace and D.L. Dowe. Intrinsic classification by\nMML, the Snob program. In Proceedings of the 7th\nAustralian Joint Conference on Artificial Intelligence,\n1994.\n[31] R. Zhang, R. Ramakrishnan, and M.Livny. 
Birch: An\nefficient data clustering method for very large\ndatabases. In Proceedings of the ACM SIGMOD\nConference on Data Management, Montreal, Canada,\nJune 1996.\n589\n", "keywords": "data streams;incremental algorithm;COOLCAT;categorical clustering;data stream;entropy;clustering"} {"name": "59", "title": "Coupling and Cohesion Measures for Evaluation of Component Reusability", "abstract": "This paper provides an account of new measures of coupling and cohesion developed to assess the reusability of Java components retrieved from the internet by a search engine. These measures differ from the majority of established metrics in two respects: they reflect the degree to which entities are coupled or resemble each other, and they take account of indirect couplings or similarities. An empirical comparison of the new measures with eight established metrics shows the new measures are consistently superior at ranking components according to their reusability.", "fulltext": "INTRODUCTION\nThe work reported in this paper arose as part of a project that\nretrieves Java components from the internet [1]. However,\ncomponents retrieved from the internet are notoriously variable in\nquality. It seems highly desirable that the search engine should\nalso provide an indication of both how reliable the component is\nand how readily it may be adapted in a larger software system.\nA well designed component, in which the functionality has been\nappropriately distributed to its various subcomponents, is more\nlikely to be fault free and easier to adapt. Appropriate distribution\nof function underlies two key concepts: coupling and cohesion.\nCoupling is the extent to which the various subcomponents\ninteract. If they are highly interdependent then changes to one are\nlikely to have significant effects on others. Hence loose coupling\nis desirable. Cohesion is the extent to which the functions\nperformed by a subsystem are related. If a subcomponent is\nresponsible for a number of unrelated functions then the\nfunctionality has been poorly distributed to subcomponents.\nHence high cohesion is a characteristic of a well designed\nsubcomponent.\nWe decided that the component search engine should provide the\nquality rankings of retrieved components based on measures of\ntheir coupling and cohesion. There is a substantial literature on\ncoupling and cohesion metrics which is surveyed in the next\nsection. We then describe in detail the metrics we have developed\nwhich attempt to address some of the limitations of existing\nmetrics. In particular, we consider both the strength and\ntransitivity of dependencies. The following section describes an\nempirical comparison of our proposed metrics and several popular\nalternatives as predictors of reusability. Section 5 presents an\nanalysis of the results which demonstrate that our proposed\nmetrics consistently outperform the others. The paper concludes\nwith a discussion of the implications of the research.\n\nCOUPLING AND COHESION METRICS\nCohesion is a measure of the extent to which the various functions\nperformed by an entity are related to one another. Most metrics\nassess this by considering whether the methods of a class access\nsimilar sets of instance variables. Coupling is the degree of\ninteraction between classes. Many researches have been done on\nsoftware metrics [8], the most important ones are selected used in\nour comparative study. Table 1 and Table 2 summarize the\ncharacteristics of these cohesion and coupling metrics.\nTable 1. 
Coupling metrics
Name | Definition
CBO [4][5][11] | Classes are coupled if methods or instance variables in one class are used by the other. CBO for a class is the number of other classes coupled with it.
RFC [4][5] | Count of all methods in the class plus all methods called in other classes.
CF [3][6] | Classes are coupled if methods or instance variables in one class are used by the other. CF for a software system is the number of coupled class pairs divided by the total number of class pairs.
DAC [9] | The number of attributes having other classes as their types.

Table 2. Cohesion metrics
Name | Definition
LCOM [5] | Number of non-similar method pairs in a class.
LCOM3 [7][9] | Number of connected components in the graph whose vertices are methods and whose edges link similar methods.
RLCOM [10] | Ratio of the number of non-similar method pairs to the total number of method pairs in the class.
TCC [2] | Ratio of the number of similar method pairs to the total number of method pairs in the class.

All of these measures have two important features in common. First, they treat the relationship between a pair of classes or methods as a binary quantity; second, they treat coupling and cohesion as intransitive relations, that is, no account is taken of indirect coupling or similarity, although two of the cohesion metrics (LCOM3 [7][9] and TCC [2]) have suggested extensions to incorporate indirect relationships between methods. Among the cohesion metrics, it should be noted that three of them (LCOM, LCOM3 and RLCOM) are in fact measures of lack of cohesion. TCC [2], in contrast to the other three metrics, measures cohesion rather than its absence. In other respects it is similar to RLCOM, being the number of similar method pairs divided by the total number of method pairs.
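To make the binary, intransitive character of these pairwise measures concrete, here is a small illustrative sketch (ours, not code from the cited papers) of RLCOM and TCC computed from the sets of instance variables each method accesses; a pair of methods counts as similar exactly when those sets intersect, regardless of how large the overlap is.

```python
from itertools import combinations

def rlcom_and_tcc(method_vars):
    """method_vars: one set per method, holding the instance variables it accesses.
    Returns (RLCOM, TCC): fractions of non-similar and similar method pairs."""
    pairs = list(combinations(method_vars, 2))
    if not pairs:
        return 0.0, 0.0
    similar = sum(1 for a, b in pairs if a & b)   # binary test: any shared variable
    return (len(pairs) - similar) / len(pairs), similar / len(pairs)

# Example: m1 and m2 share a variable, m3 shares nothing with either.
print(rlcom_and_tcc([{"x", "y"}, {"y"}, {"z"}]))   # (0.666..., 0.333...)
```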
PROPOSED NEW METRICS
The study suggested that none of these measures was very effective in ranking the reusability of Java components. We therefore decided to develop alternative coupling and cohesion metrics in the hope of achieving superior performance. One obvious step was to develop measures that reflect the extent to which a pair of classes is coupled or a pair of methods resemble each other. Because none of the existing measures treats coupling or similarity as a transitive relation, we decided that such indirect dependencies should also be incorporated into our metrics.

3.1 Cohesion
We develop a cohesion metric that takes account of both the degree of cohesion and the transitive (i.e., indirect) cohesion between methods. Methods are said to be similar if the sets of instance variables that they access overlap. We adopt a graph-theoretical approach in which the methods of the class are the vertices. Suppose a class has a set of method members M ≡ {M_1, M_2, ..., M_m}, and let V_j ≡ {V_{j,1}, V_{j,2}, ..., V_{j,n}} be the instance variables accessed by method M_j. Then the edge from M_j to M_i exists if and only if V_j \cap V_i is not null. Thus an edge of the graph reflects the similarity of the methods in that they have at least one instance variable in common. The similarity graph is undirected because intersection is a symmetric relation. The next step is to associate a number with each edge that reflects the extent to which the two methods have instance variables in common. We therefore define SimD(i,j), our measure of the direct similarity of two methods M_i and M_j, as

SimD(i,j) = \frac{|V_i \cap V_j|}{|V_i \cup V_j|}

where i \neq j (SimD(j,j) is defined to be zero). Note that 1 \geq SimD(i,j) \geq 0.

The extension of the measure to include indirect similarity proceeds along the same lines as the treatment of indirect coupling in Section 3.2. The strength of similarity provided by a path between two methods is the product of the SimD values of the edges that make up the path. Thus we define SimT(i,j,\pi), the transitive similarity between methods M_i and M_j due to a specific path \pi, as

SimT(i,j,\pi) = \prod_{e_{s,t} \in \pi} SimD(s,t) = \prod_{e_{s,t} \in \pi} \frac{|V_s \cap V_t|}{|V_s \cup V_t|}

where e_{s,t} denotes the edge between vertices s and t. As in the case of coupling, the path with the highest SimT value is selected to define the similarity of the two methods, Sim(i,j):

Sim(i,j) = SimT(i,j,\pi_{max})

where \pi_{max} = \arg\max_{\pi \in \Pi} SimT(i,j,\pi) and \Pi is the set of all paths from M_i to M_j. This measure is used to provide a measure of the cohesion of the class, ClassCoh, by summing the similarities of all method pairs and dividing by the total number of such pairs:

ClassCoh = \frac{\sum_{i,j=1,\, i \neq j}^{m} Sim(i,j)}{m^2 - m}

where m is the number of methods in the class. Finally, the weighted transitive cohesion of the complete software system, WTCoh, is defined as the mean cohesion of all the classes of which it is comprised:

WTCoh = \frac{\sum_{j=1}^{n} ClassCoh_j}{n}

where n is the number of classes in the system.
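Computationally, the only non-trivial step is finding, for each pair of methods, the path whose SimD values have the largest product. Because every weight lies in [0,1], a Floyd-Warshall-style max-product closure suffices (adding edges to a path can never increase the product). The sketch below is an illustration of the definitions above, not the authors' tool: it computes Sim, ClassCoh and WTCoh from per-method variable-access sets; the coupling measures of Section 3.2 below follow the same pattern on a directed graph weighted by CoupD.

```python
def max_product_closure(w):
    """Max-product path weights for a graph given as an n x n weight matrix."""
    n = len(w)
    best = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via = best[i][k] * best[k][j]
                if via > best[i][j]:
                    best[i][j] = via
    return best

def class_cohesion(method_vars):
    """ClassCoh for one class; method_vars[i] is the set of variables method i accesses."""
    m = len(method_vars)
    if m < 2:
        return 0.0
    simd = [[(len(a & b) / len(a | b)) if i != j and (a | b) else 0.0
             for j, b in enumerate(method_vars)]
            for i, a in enumerate(method_vars)]
    sim = max_product_closure(simd)        # Sim(i, j): strongest direct or indirect similarity
    return sum(sim[i][j] for i in range(m) for j in range(m) if i != j) / (m * m - m)

def wtcoh(classes):
    """Weighted transitive cohesion: mean ClassCoh over all classes of the system."""
    return sum(class_cohesion(c) for c in classes) / len(classes)

# Example: two loosely related halves joined only through one bridging method.
print(class_cohesion([{"a", "b"}, {"b", "c"}, {"c", "d"}]))
```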
This notion is readily generalised. A\ncoupling between two classes exists if there is a path from one to\nthe other made up edges whose CoupD values are all non-zero.\nThus we define CoupT(i,j,\n\n), the transitive coupling between\nclasses C\ni\nand C\nj\ndue to a specific path\n\n, as\n( )\n\n\n\n\n+\n=\n=\n\n\n\nt\ns\nt\ns\ne\ns\ns\nt\ns\ne\nM\nR\nR\nt\ns\nCoupD\nj\ni\nCoupT\n,\n,\n,\n,\n)\n,\n,\n(\n\ne\ns\n,\nt\ndenotes the edge between vertices s and t. Note first that\nCoupT includes the direct coupling, which corresponds to path of\nlength one, and second that, because the CoupD values are\nnecessarily less than one, transitive couplings due to longer paths\nwill typically have lower values.\nIn general there may be more than one path having a non-zero\nCoupT value between any two classes. We simply select the path\nwith largest CoupT value and hence define Coup(i,j), the strength\nof coupling between the two classes, C\ni\nand C\nj\nto be:\n)\n,\n,\n(\n)\n,\n(\nmax\n\nj\ni\nCoupT\nj\ni\nCoup\n=\n\nwhere\n)\n,\n,\n(\nmax\narg\n)\n,\n(\nmax\n\n\n\nj\ni\nCPT\nj\ni\n\n\n=\nand\nis the\nset of all paths from C\ni\nto C\nj\n. The final step is to use measure\nbetween each pair of classes as a basis for a measure of the total\ncoupling of a software system. The weighted transitive coupling\n(WTCoup) of a system is thus defined\nm\nm\nj\ni\nCoup\nWTCoup\nm\nj\ni\n=\n\n=\n2\n1\n,\n)\n,\n(\n\nwhere m is the number of classes in the system.\nAN EXPERIMENTAL COMPARISON\nIn our study, the metrics are used for a specific purpose:\npredicting how much effort would be required to reuse a\ncomponent within a larger system. We therefore chose to measure\nreusability as simply the number of lines of code that were added,\nmodified or deleted (NLOC) in order to extend its functionality in\na prescribed way. The more lines required, the lower the\nreusability. This appears to us to be a crude but reasonable\nmeasure of the effort that would be required to adapt a component\nfor use within a larger system. Three case studies were carried\nout: Case 1 HTML Parser: The original components analysed\nHTML documents, eliminated tags and comments and output the\ntext. The required extension was to count and output the number\nof tags found during parsing.\nCase 2 Lexical Tokenizer: The original components tokenized a\ntext document using user supplied token rules and output the\ntokens on a web interface. The required extension was to count\nand output the number of tokens retrieved.\nCase 3 Barcode: The original components accepted a sequence of\nalphanumeric characters and generated the corresponding\nbarcode. The required extension was to count the number of\nletters.\nFor each case, 20 Java components were retrieved from a\nrepository of about 10,000 Java components retrieved form the\ninternet. The requisite extensions were then implemented by a\nvery experienced Java programmer and NLOC counted. Despite\nthe relative simplicity of the extensions, there was considerable\nvariation in the quantity of extra code required. We then\nproceeded to investigate how successful the various measures of\ncoupling and cohesion are in predicting this quantity. Our\nproposed metrics are compared with all the metrics reviewed in\nsection 2. In order to present the results on the same graph, those\nmeasures that do not produce values in the range (0,1) (i.e. 
AN EXPERIMENTAL COMPARISON
In our study, the metrics are used for a specific purpose: predicting how much effort would be required to reuse a component within a larger system. We therefore chose to measure reusability as simply the number of lines of code that were added, modified or deleted (NLOC) in order to extend its functionality in a prescribed way. The more lines required, the lower the reusability. This appears to us to be a crude but reasonable measure of the effort that would be required to adapt a component for use within a larger system. Three case studies were carried out:
Case 1 HTML Parser: The original components analysed HTML documents, eliminated tags and comments and output the text. The required extension was to count and output the number of tags found during parsing.
Case 2 Lexical Tokenizer: The original components tokenized a text document using user-supplied token rules and output the tokens on a web interface. The required extension was to count and output the number of tokens retrieved.
Case 3 Barcode: The original components accepted a sequence of alphanumeric characters and generated the corresponding barcode. The required extension was to count the number of letters.
For each case, 20 Java components were retrieved from a repository of about 10,000 Java components retrieved from the internet. The requisite extensions were then implemented by a very experienced Java programmer and NLOC counted. Despite the relative simplicity of the extensions, there was considerable variation in the quantity of extra code required. We then proceeded to investigate how successful the various measures of coupling and cohesion are in predicting this quantity. Our proposed metrics are compared with all the metrics reviewed in section 2. In order to present the results on the same graph, those measures that do not produce values in the range (0,1) (i.e. CBO, RFC, DAC, LCOM and LCOM3) were divided by 100.

RESULTS
Two approaches were used to evaluate the performance of the various measures in predicting reusability: linear regression and rank correlation.
5.1 Linear Regression
The regression lines obtained for the five cohesion measures when applied to the HTML parser components are shown in Figure 1. The results for the other two sets of components were similar. It is clear that some measures provide much more consistent predictors than others. There are no obvious systematic departures from linearity, so the use of simple regression appears reasonable. The regression lines obtained for coupling measures demonstrate the same situation.
The coefficient of determination, R², provides a measure of how much of the variation in NLOC is accounted for by the measures. Table 3 and Table 4 display the values of R² obtained for each of the coupling and cohesion measures on all three sets of components. In each case, our proposed new measures, WTCoup and WTCoh, gave the largest value of R², indicating that they were the best linear predictors of reusability. The remaining measures produced at least one R² value so low as to indicate that the correlation was not significantly above chance at the 5% level.

Figure 1. Regression of cohesion measures against reusability

Table 3. R² values for coupling measure regression lines.
Cases            WTCoup   CF     CBO    RFC    DAC
HTML Parser      .846     .621   .259   .793   .254
Lexical Token.   .836     .098   .004   .729   .738
Barcode Gen.     .958     .693   .121   .534   .507

Table 4. R² values for cohesion measure regression lines.
Cases        WTCoh   RLCOM   LCOM3   LCOM   TCC
H. Parser    .847    .319    .259    .564   .178
L. Token.    .838    .783    .002    .709   .646
B. Gen.      .892    .702    .177    .101   .785

5.2 Spearman Rank Correlation
Although these results provide a strong indication that the proposed new measures are better predictors of reusability than the alternatives, our primary purpose is simply to rank a set of components retrieved from the repository. We therefore also computed the Spearman rank correlation coefficients between the rankings determined by NLOC and those produced by the various coupling and cohesion measures (Tables 5 and 6).

Table 5. Rank correlation values for coupling measures.
Cases            WTCoup   CF     CBO    RFC    DAC
HTML Parser      .975     .882   .465   .896   .507
Lexical Token.   .952     .291   .117   .822   .817
Barcode Gen.     .974     .758   .485   .656   .800

Table 6. Rank correlation values for cohesion measures.
Cases        WTCoh   RLCOM   LCOM3   LCOM   TCC
H. Parser    -.993   .522    .218    .564   -.343
L. Token.    .838    .783    .002    .709   .646
Bar. Gen.    .892    .702    .177    .101   .785

The relative performance of the various measures is consistent with the regression studies. In all cases, the two proposed measures, WTCoup and WTCoh, produced the highest rank correlations. They are in fact extremely high; no value was lower than 0.95.
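For completeness, figures of the kind reported in Tables 3-6 can be reproduced with standard tools. The short sketch below is ours and uses made-up data rather than the study's measurements; it computes a coefficient of determination by simple linear regression and a Spearman rank correlation with scipy.

    import numpy as np
    from scipy import stats

    # Hypothetical measurements for a set of retrieved components:
    # a metric value for each component and the NLOC needed to extend it.
    metric = np.array([0.62, 0.48, 0.55, 0.70, 0.33, 0.41])   # e.g. WTCoh values
    nloc   = np.array([14,   25,   18,   9,    40,   31])     # lines added/changed

    # Simple linear regression of NLOC on the metric; r**2 is the coefficient
    # of determination of the kind shown in Tables 3 and 4.
    slope, intercept, r, p, stderr = stats.linregress(metric, nloc)
    print('R^2 =', r**2)

    # Spearman rank correlation between the two rankings (Tables 5 and 6).
    rho, p_rho = stats.spearmanr(metric, nloc)
    print('Spearman rho =', rho)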
DISCUSSION
These results clearly demonstrate that our proposed metrics for coupling and cohesion are very good predictors of the number of lines of code required to make simple modifications to Java components retrieved from the internet, and are superior to other measures. The majority of coupling and cohesion metrics treat coupling and similarity as simple binary quantities and ignore the transitive relationship. Both our proposed measures address these issues. First, they are weighted; that is, they use a numeric measure of the degree of coupling or similarity between entities rather than a binary quantity. Second, they are transitive; that is, they include indirect coupling or similarity mediated by intervening entities. It is reasonable to enquire whether both these characteristics are necessary to achieve good prediction performance. In fact our investigations suggest that both contribute to the performance.
Although both WTCoup and WTCoh are good predictors, it is worth considering whether a linear combination might not produce even better results. Multiple regression for the Lexical Tokenizer components produced an R² of 0.981; the ranking produced using the regression coefficients to weight the terms had a Spearman correlation of 0.986. These are superior to the results produced by each metric alone, but not by a great margin, simply because the original results leave only modest scope for improvement. Developing such a composite quality measure would also entail assuming that the relative weighting of the two metrics should be the same for all types of component.
This work arose from, and is intended primarily as a contribution to, search engine technology. Nevertheless, we believe it may be of interest to a wider body of researchers: in particular, those involved in developing and evaluating software metrics.

ACKNOWLEDGMENTS
We are grateful to the four UK higher education funding bodies (for England, Scotland, Wales and Northern Ireland) for an Overseas Research Studentship (ORS/2002015010) awarded to G. Gui.

REFERENCES
[1] Gui, G. and Scott, P. D. Vector Space Based on Hierarchical Weighting: A Component Ranking Approach to Component Retrieval. In Proceedings of the 6th International Workshop on Advanced Parallel Processing Technologies (APPT'05).
[2] Bieman, J. M. and Kang, B-Y. Cohesion and Reuse in an Object-Oriented System. In Proc. ACM Symposium on Software Reusability (SSR'95) (April 1995), 259-262.
[3] Briand, L., Devanbu, P. and Melo, W. An Investigation into Coupling Measures for C++. In Proceedings of ICSE 1997.
[4] Brito e Abreu, F. and Melo, W. Evaluating the Impact of OO Design on Software Quality. In Proc. Third International Software Metrics Symposium (Berlin, 1996).
[5] Chidamber, S. R. and Kemerer, C. K. A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering, Vol. 20 (June 1994), 476-493.
[6] Harrison, R., Counsell, S. J. and Nithi, R. V. An Evaluation of the MOOD Set of Object-Oriented Software Metrics. IEEE Transactions on Software Engineering, Vol. 24 (June 1998), 491-496.
[7] Hitz, M. and Montazeri, B. Measuring Coupling and Cohesion in Object-Oriented Systems. In Proceedings of the International Symposium on Applied Corporate Computing (Monterrey, Mexico, 1995).
[8] Kanmani, S., Uthariraj, R., Sankaranarayanan, V. and Thambidurai, P. Investigation into the Exploitation of Object-Oriented Features. ACM SIGSOFT Software Engineering Notes, Vol. 29 (March 2004).
[9] Li, W. and Henry, S. Object-Oriented Metrics that Predict Maintainability. Journal of Systems and Software, 23(2) (1993), 111-122.
[10] Li, X., Liu, Z., Pan, B. and Xing, B. A Measurement Tool for Object Oriented Software and Measurement Experiments with It. In Proc. IWSM 2000, 44-54.
[11] Subramanyam, R. and Krishnan, M. S.
Empirical Analysis of CK Metrics for Object-Oriented Design Complexity: Implications for Software Defects. IEEE Transactions on Software Engineering, Vol. 29 (April 2003), 297-310.
", "keywords": "Binary Quantity;Experimental Comparison;Component search engine;Search Engine Technology;Spearman Rank Correlation;Intransitive Relation;Reusability;Coupling;Cohesion Metric;Linear Regression;Cohesion;Java components"} {"name": "6", "title": "A Distributed 3D Graphics Library", "abstract": "We present Repo-3D, a general-purpose, object-oriented library for developing distributed, interactive 3D graphics applications across a range of heterogeneous workstations. Repo-3D is designed to make it easy for programmers to rapidly build prototypes using a familiar multi-threaded, object-oriented programming paradigm. All data sharing of both graphical and non-graphical data is done via general-purpose remote and replicated objects, presenting the illusion of a single distributed shared memory. Graphical objects are directly distributed, circumventing the \"duplicate database\" problem and allowing programmers to focus on the application details. Repo-3D is embedded in Repo, an interpreted, lexically-scoped, distributed programming language, allowing entire applications to be rapidly prototyped. We discuss Repo-3D's design, and introduce the notion of local variations to the graphical objects, which allow local changes to be applied to shared graphical structures. Local variations are needed to support transient local changes, such as highlighting, and responsive local editing operations. Finally, we discuss how our approach could be applied using other programming languages, such as Java.", "fulltext": "INTRODUCTION
Traditionally, distributed graphics has referred to the architecture of a single graphical application whose components are distributed over multiple machines [14, 15, 19, 27] (Figure 1a). By taking advantage of the combined power of multiple machines, and the particular features of individual machines, otherwise impractical applications became feasible. However, as machines have grown more powerful and application domains such as Computer-Supported Cooperative Work (CSCW) and Distributed Virtual Environments (DVEs) have been making the transition from research labs to commercial products, the term distributed graphics is increasingly used to refer to systems for distributing the shared graphical state of multi-display/multi-person, distributed, interactive applications (Figure 1b). This is the definition that we use here.
1. {bm,feiner}@cs.columbia.edu, http://www.cs.columbia.edu/graphics
While many excellent, high-level programming libraries are available for building stand-alone 3D applications (e.g. Inventor [35], Performer [29], Java 3D [33]), there are no similarly powerful and general libraries for building distributed 3D graphics applications. All CSCW and DVE systems with which we are familiar (e.g., [1, 7, 11, 12, 16, 28, 30, 31, 32, 34, 37, 41]) use the following approach: A mechanism is provided for distributing application state (either a custom solution or one based on a general-purpose distributed programming environment, such as ISIS [4] or Obliq [8]), and the state of the graphical display is maintained separately in the local graphics library. Keeping these \"dual databases\" synchronized is a complex, tedious, and error-prone endeavor.
In contrast\n, some non-distributed libraries, such as Inventor [35], allow\nprogrammers to avoid this problem by using the graphical scene\ndescription to encode application state. Extending this \"single database\"\nmodel to a distributed 3D graphics library is the goal of our\nwork on Repo-3D.\nRepo-3D is an object-oriented, high-level graphics package,\nderived from Obliq-3D [25]. Its 3D graphics facilities are similar to\nthose of other modern high-level graphics libraries. However, the\nobjects used to create the graphical scenes are directly distribut-able\n--from the programmer's viewpoint, the objects reside in one\nlarge distributed shared memory (DSM) instead of in a single\nprocess. The underlying system replicates any of the fine-grained\nobjects across as many processes as needed, with no additional\neffort on the part of the programmer. Updates to objects are\nautomatically reflected in all replicas, with any required objects\nautomatically distributed as needed. By integrating the replicated\nobjects into the programming languages we use, distributed\napplications may be built using Repo-3D with little more difficulty\nthan building applications in a single process.\nFigure 1:\nTwo meanings of distributed graphics: (a) a single logical\ngraphics system with distributed components, and (b) multiple distributed\nlogical graphics systems. We use the second definition here.\nNo matter how simple the construction of a distributed application\nmay be, a number of differences between distributed and\nmonolithic applications must be addressed. These include:\n\nDistributed control. In a monolithic application, a single component\ncan oversee the application and coordinate activities\namong the separate components by notifying them of changes\nto the application state. This is not possible in a non-trivial distributed\napplication. Therefore, we must provide mechanisms\nfor different components to be notified of changes to the\ndistributed state.\n\nInteractivity. Updates to distributed state will be slower than\nupdates to local state, and the amount of data that can be\ndistributed is limited by network bandwidth. If we do not want\nto sacrifice interactive speed, we must be able to perform some\noperations locally. For example, an object could be dragged\nlocally with the mouse, with only a subset of the changes\napplied to the replicated state.\n\nLocal variations. There are times when a shared graphical\nscene may need to be modified locally. For example, a\nprogrammer may want to highlight the object under one user's\nmouse pointer without affecting the scene graph viewed by\nother users.\nRepo-3D addresses these problems in two ways. First, a\nprogrammer can associate a notification object with any replicated\nobject. The notification object's methods will be invoked when the\nreplicated object is updated. This allows reactive programs to be\nbuilt in a straightforward manner. To deal with the second and third\nproblems, we introduce the notion of local variations to graphical\nobjects. That is, we allow the properties of a graphical object to be\nmodified locally, and parts of the scene graph to be locally added,\nremoved, or replaced.\nIn Section 2 we describe how we arrived at the solution presented\nhere. 
Section 3 discusses related work, and Section 4 offers a\ndetailed description of the underlying infrastructure that was used.\nThe design of Repo-3D is presented in Section 5, followed by\nsome examples and concluding remarks in Sections 6 and 7.\nBACKGROUND\nRepo-3D was created as part of a project to support rapid prototyping\nof distributed, interactive 3D graphical applications, with a\nparticular focus on DVEs. Our fundamental belief is that by\nproviding uniform high-level support for distributed programming\nin the languages and toolkits we use, prototyping and experimenting\nwith distributed interactive applications can be (almost) as\nsimple as multi-threaded programming in a single process. While\ncare must be taken to deal with network delays and bandwidth\nlimitations at some stage of the program design (the languages and\ntoolkits ought to facilitate this), it should be possible to ignore such\nissues until they become a problem. Our view can be summarized\nby a quote attributed to Alan Kay, \"Simple things should be\nsimple; complex things should be possible.\"\nThis is especially true during the exploration and prototyping\nphase of application programming. If programmers are forced to\nexpend significant effort building the data-distribution components\nof the application at an early stage, not only will less time be spent\nexploring different prototypes, but radical changes in direction will\nbecome difficult, and thus unlikely. For example, the implementation\neffort could cause programs to get locked into using a communication\nscheme that may eventually prove less than ideal, or even\ndetrimental, to the program's final design.\nSince we are using object-oriented languages, we also believe\nthat data distribution should be tightly integrated with the\nlanguage's general-purpose objects. This lets the language's type\nsystem and programming constructs reduce or eliminate errors in\nthe use of the data-distribution system. Language-level integration\nalso allows the system to exhibit a high degree of network data\ntransparency, or the ability for the programmer to use remote and\nlocal data in a uniform manner. Without pervasive, structured,\nhigh-level data-distribution support integrated into our programming\nlanguages and libraries, there are applications that will never\nbe built or explored, either because there is too much programming\noverhead to justify trying simple things (\"simple things are not\nsimple\"), or because the added complexity of using relatively\nprimitive tools causes the application to become intractable (\"com-plex\nthings are not possible\").\nOf the tools available for integrating distributed objects into\nprogramming languages, client-server data sharing is by far the\nmost common approach, as exemplified by CORBA [26],\nModula-3 Network Objects [5], and Java RMI [39]. Unfortunately,\ninteractive graphical applications, such as virtual reality, require\nthat the data used to refresh the display be local to the process\ndoing the rendering or acceptable frame refresh rates will not be\nachieved. Therefore, pure client-server approaches are inappropriate\nbecause at least some of the shared data must be replicated.\nFurthermore, since the time delay of synchronous remote method\ncalls is unsuitable for rapidly changing graphical applications,\nshared data should be updated asynchronously. 
Finally, when data\nis replicated, local access must still be fast.\nThe most widely used protocols for replicated data consistency,\nand thus many of the toolkits (e.g., ISIS [4] and Visual-Obliq [3]),\nallow data updates to proceed unimpeded, but block threads reading\nlocal data until necessary updates arrive. The same reason we\nneed replicated data in the first place--fast local read access to the\ndata--makes these protocols unsuitable for direct replication of the\ngraphical data. Of course, these protocols are fine for replicating\napplication state that will then be synchronized with a parallel\ngraphical scene description, but that is what we are explicitly trying\nto avoid. Fortunately, there are replicated data systems (e.g.,\nOrca [2] or COTERIE [24]) that provide replicated objects that are\nwell suited to interactive applications, and it is upon the second of\nthese systems that Repo-3D is built.\nRELATED WORK\nThere has been a significant amount of work that falls under the\nfirst, older definition of distributed graphics. A large number of\nsystems, ranging from established commercial products (e.g., IBM\nVisualization Data Explorer [21]) to research systems (e.g.,\nPARADISE [19] and ATLAS [14]), have been created to distribute\ninteractive graphical applications over a set of machines. However,\nthe goal of these systems is to facilitate sharing of application data\nbetween processes, with one process doing the rendering. While\nsome of these systems can be used to display graphics on more\nthan one display, they were not designed to support high-level\nsharing of graphical scenes.\nMost high-level graphics libraries, such as UGA [40], Inventor\n[35] and Java 3D [33], do not provide any support for distribution.\nOthers, such as Performer [29], provide support for distributing\ncomponents of the 3D graphics rendering system across multiple\nprocessors, but do not support distribution across multiple\nmachines. One notable exception is TBAG [13], a high-level\nconstraint-based, declarative 3D graphics framework. Scenes in\nTBAG are defined using constrained relationships between time-varying\nfunctions. TBAG allows a set of processes to share a\nsingle, replicated constraint graph. When any process asserts or\nretracts a constraint, it is asserted or retracted in all processes.\nHowever, this means that all processes share the same scene, and\nthat the system's scalability is limited because all processes have a\ncopy of (and must evaluate) all constraints, whether or not they are\ninterested in them. There is also no support for local variations of\nthe scene in different processes.\nMachiraju [22] investigated an approach similar in flavor to ours,\nbut it was not aimed at the same fine-grained level of interactivity\nand was ultimately limited by the constraints of the implementation\nplatform (CORBA and C++). For example, CORBA objects\nare heavyweight and do not support replication, so much of their\neffort was spent developing techniques to support object migration\nand \"fine-grained\" object sharing. However, their fine-grained\nobjects are coarser than ours, and, more importantly, they do not\nsupport the kind of lightweight, transparent replication we desire.\nA programmer must explicitly choose whether to replicate, move,\nor copy an object between processes when the action is to occur (as\nopposed to at object creation time). 
Replicated objects are independent\nnew copies that can be modified and used to replace the original\n--simultaneous editing of objects, or real-time distribution of\nchanges as they are made is not supported.\nOf greater significance is the growing interest for this sort of system\nin the Java and VRML communities. Java, like Modula-3, is\nmuch more suitable as an implementation language than C or C++\nbecause of its cross-platform compatibility and support for threads\nand garbage collection: Without the latter two language features,\nimplementing complex, large-scale distributed applications is\nextremely difficult. Most of the current effort has been focused on\nusing Java as a mechanism to facilitate multi-user VRML worlds\n(e.g., Open Communities [38]). Unfortunately, these efforts\nconcentrate on the particulars of implementing shared virtual\nenvironments and fall short of providing a general-purpose shared\ngraphics library. For example, the Open Communities work is\nbeing done on top of SPLINE [1], which supports only a single\ntop-level world in the local scene database.\nMost DVEs [11, 12, 16, 31, 32] provide support for creating\nshared virtual environments, not general purpose interactive 3D\ngraphics applications. They implement a higher level of abstraction\n, providing support for rooms, objects, avatars, collision detection\n, and other things needed in single, shared, immersive virtual\nenvironments. These systems provide neither general-purpose\nprogramming facilities nor the ability to work with 3D scenes at a\nlevel provided by libraries such as Obliq-3D or Inventor. Some use\ncommunication schemes that prevent them from scaling beyond a\nrelatively small number of distributed processes, but for most the\nfocus is explicitly on efficient communication. SIMNET [7], and\nthe later NPSNet [41], are perhaps the best known large-scale\ndistributed virtual-environment systems. They use a fixed, well-defined\ncommunication protocol designed to support a single,\nlarge-scale, shared, military virtual environment.\nThe techniques for object sharing implemented in recent CSCW\ntoolkits [28, 30, 34, 37] provide some of the features we need,\nparticularly automatic replication of data to ease construction of\ndistributed applications. However, none of these toolkits has\nintegrated the distribution of data into its programming language's\nobject model as tightly as we desire. As a result, they do not provide\na high enough level of network data transparency or suffi-ciently\nstrong consistency guarantees. In groupware applications,\ninconsistencies tend to arise when multiple users attempt to perform\nconflicting actions: the results are usually obvious to the\nusers and can be corrected using social protocols. This is not an\nacceptable solution for a general-purpose, distributed 3D graphics\ntoolkit. Furthermore, none of these CSCW systems provides any\nsupport for asynchronous update notification, or is designed to\nsupport the kind of large-scale distribution we have in mind.\nFinally, while distributed games, such as Quake, have become\nvery popular, they only distribute the minimum amount of application\nstate necessary. They do not use (or provide) an abstract, high-level\ndistributed 3D graphics system.\nUNDERLYING INFRASTRUCTURE\nOur work was done in the Modula-3 programming language [18].\nWe decided to use Modula-3 because of the language itself and the\navailability of a set of packages that provide a solid foundation for\nour infrastructure. 
Modula-3 is a descendant of Pascal that corrects many of its deficiencies, and heavily influenced the design of Java. In particular, Modula-3 retains strong type safety, while adding facilities for exception handling, concurrency, object-oriented programming, and automatic garbage collection.2 One of its most important features for our work is that it gives us uniform access to these facilities across all architectures.
2. The Modula-3 compiler we used is available from Critical Mass, Inc. as part of the Reactor programming environment. The compiler, and thus our system, runs on all the operating systems we have available (plus others): Solaris, IRIX, HP-UX, Linux, and Windows NT and 95.
Repo-3D relies on a number of Modula-3 libraries, as illustrated in Figure 2. Distributed data sharing is provided by two packages, the Network Object client-server object package [5], and the Replicated Object shared object package [24] (see Section 4.1). DistAnim-3D is derived from Anim-3D [25], a powerful, non-distributed, general-purpose 3D library originally designed for 3D algorithm animation (see Section 4.2). Finally, Repo itself is a direct descendant of Obliq [8], and uses the Replicated Object package to add replicated data to Obliq (see Section 4.3).
Figure 2: The architecture of Repo-3D. Aside from native graphics libraries (X, Win32, OpenGL, Renderware) the Modula-3 runtime shields most of the application from the OS. The Replicated Object package uses an Event communication package and the Network Object package. DistAnim-3D is implemented on top of a variety of native graphics libraries and Replicated Objects. Repo exposes most of the useful Modula-3 packages, as well as using Network Objects and Replicated Objects to present a distributed shared memory model to the programmer.
4.1 Distributed Shared Memory
Repo-3D's data sharing mechanism is based on the Shared Data-Object Model of Distributed Shared Memory (DSM) [20]. DSM allows a network of computers to be programmed much like a multiprocessor, since the programmer is presented with the familiar paradigm of a common shared memory. The Shared Data-Object Model of DSM is particularly well suited to our needs since it is a high-level approach that can be implemented efficiently at the application level. In this model, shared data is encapsulated in user-defined objects and can only be accessed through those objects' method calls. The DSM address space is partitioned implicitly by the application programmer, with an object being the smallest unit of sharing. All shared data is fully network transparent because it is encapsulated within the programming language objects.
Distribution of new objects between the processes is as simple as passing them back and forth as parameters to, or return values from, method calls--the underlying systems take care of the rest.3 Objects are only distributed to new processes as necessary, and (in our system) are removed by the garbage collector when they are no longer referenced. Furthermore, distributed garbage collection is supported, so objects that are no longer referenced in any process are removed completely.
There are three kinds of distributed object semantics in our DSM:
Simple objects correspond to normal data objects, and have no special distributed semantics.
When a simple object is copied\nbetween processes, a new copy is created in the destination\nprocess that has no implied relationship to the object in the\nsource process.\n\nRemote objects have client-server distribution semantics. When\na remote object is copied between processes, all processes\nexcept the one in which the object was created end up with a\nproxy object that forwards method invocations across the\nnetwork to the original object.\n\nReplicated objects have replicated distribution semantics.\nWhen a replicated object is passed between processes, a new\nreplica is created in the destination process. If any replica is\nchanged, the change is reflected in all replicas.\nThe Network Object package provides support for remote\nobjects. It implements distributed garbage collection, exception\npropagation back to the calling site, and automatic marshalling and\nunmarshalling of method arguments and return values of virtually\nany data type between heterogeneous machine architectures. The\npackage is similar to other remote method invocation (RMI) packages\ndeveloped later, such as the Java RMI library [39]. All method\ninvocations are forwarded to the original object, where they are\nexecuted in the order they are received.\nThe Replicated Object package supports replicated objects. Each\nprocess can call any method of an object it shares, just as it can\nwith a simple or remote object. We will describe the Replicated\nObject package in more detail, as Repo-3D relies heavily on its\ndesign, and the design of a replicated object system is less straightforward\nthan a remote one. The model supported by the Replicated\nObject package follows two principles:\n\nAll operations on an instance of an object are atomic and\nserializable. All operations are performed in the same order on\nall copies of the object. If two methods are invoked simultaneously\n, the order of invocation is nondeterministic, just as if\ntwo threads attempted to access the same memory location\nsimultaneously in a single process.\n\nThe above principle applies to operations on single objects.\nMaking sequences of operations atomic is up to the programmer\n.\nThe implementation of the Replicated Object package is based\non the approach used in the Orca distributed programming\nlanguage [2]. A full replication scheme is used, where a single\nobject is either fully replicated in a process or not present at all.\nAvoiding partial replication significantly simplifies the implementation\nand the object model, and satisfies the primary rationale for\nreplication: fast read-access to shared data. To maintain replication\nconsistency an update scheme is used, where updates to the object\nare applied to all copies.\nThe method of deciding what is and is not an update is what\nmakes the Orca approach particularly interesting and easy to\nimplement. All methods are marked as either read or update methods\nby the programmer who creates the object type. Read methods\nare assumed to not change the state of the object and are therefore\napplied immediately to the local object without violating consistency\n. Update methods are assumed to change the state. To distribute\nupdates, arguments to the update method are marshalled into a\nmessage and sent to all replicas. 
To ensure all updates are applied in the same order, the current implementation of the Replicated Object package designates a sequencer process for each object. There may be more than one sequencer in the system to avoid overloading one process with all the objects (in this case, each object has its updates managed by exactly one of the sequencers). The sequencer is responsible for assigning a sequence number to each message before it is sent to all object replicas. The replicas then execute the incoming update messages in sequence. The process that initiated the update does not execute the update until it receives a message back from the sequencer and all updates with earlier sequence numbers have been executed.
There are three very important reasons for choosing this approach. First, it is easy to implement on top of virtually any object-oriented language, using automatically generated object subtypes and method wrappers that communicate with a simple runtime system. We do this in our Modula-3 implementation, and it would be equally applicable to an implementation in C++ or Java. For example, the JSDT [36] data-sharing package in Java uses a similar approach.
Second, the Replicated Object package does not pay attention to (or even care) when the internal data fields of an object change. This allows the programmer great flexibility in deciding exactly what constitutes an update or not, and what constitutes the shared state.4 For example, objects could have a combination of global and local state, and the methods that change the local state could be classified as read methods since they do not modify the global state. Alternatively, read methods could do some work locally and then call an update method to propagate the results, allowing time-consuming computation to be done once and the result distributed in a clean way. We took advantage of both of these techniques in implementing Repo-3D.
Finally, the immediate distribution of update methods ensures that changes are distributed in a timely fashion, and suggests a straightforward solution to the asynchronous notification problem. The Replicated Object package generates a Notification Object type for each Replicated Object type. These new objects have methods corresponding to the update methods of their associated Replicated Object. The arguments to these methods are the same as the corresponding Replicated Object methods, plus an extra argument to hold the Replicated Object instance. These notifiers can be used by a programmer to receive notification of changes to a Replicated Object in a structured fashion. To react to updates to a Replicated Object instance, a programmer simply overrides the methods of the corresponding Notification Object with methods that react appropriately to those updates, and associates an instance of it with the Replicated Object instance. Each time an update method of the Replicated Object is invoked, the corresponding method of the Notifier Object is also invoked. Notification Objects eliminate the need for object polling and enable a \"data-driven\" flow of control.
3. An important detail is how the communication is bootstrapped. In the case of the Network and Replicated Object packages, to pass a first object between processes, one of them exports the object to a special network object demon under some known name on some known machine. The second process then retrieves the object.
4. Of course, it falls squarely on the shoulders of the programmer to ensure that the methods provided always leave the object in a consistent state. This is not significantly different than what needs to be done when building a complex object that is simultaneously accessed by multiple threads in a non-distributed system. For example, if a programmer reads an array of numbers from inside the object and then uses an update method to write a computed average back into the object, the internal array may have changed before the average is written, resulting in a classic inconsistency problem. In general, methods that perform computations based on internal state (rather than on the method arguments) are potentially problematic and need to be considered carefully.
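The following Python sketch is a schematic, single-process illustration of the model just described; it is not the Modula-3 package, and every class and method name in it is invented. Methods marked as updates are routed through a sequencer that applies them to every replica in the same order, read methods run locally at once, and a notification object's like-named method is invoked on each update.

    class Sequencer:
        # Assigns a global order to update invocations and applies them to every replica.
        def __init__(self):
            self.replicas = []
            self.next_seq = 0

        def submit(self, method_name, *args):
            # In the real system this is a network round trip; here it is a direct call.
            self.next_seq += 1
            for replica in self.replicas:
                replica.apply_update(self.next_seq, method_name, *args)

    class ReplicatedCounter:
        # A toy replicated object: value() is a read method, add() is an update method.
        def __init__(self, sequencer, notifier=None):
            self.sequencer = sequencer
            self.notifier = notifier        # plays the role of a Notification Object
            self.count = 0
            sequencer.replicas.append(self)

        def value(self):                    # read method: executed locally, immediately
            return self.count

        def add(self, n):                   # update method: only the stub runs here
            self.sequencer.submit('add', n)

        def apply_update(self, seq, method_name, *args):
            # Applied, in sequence-number order, on every replica (including the caller's).
            if method_name == 'add':
                (n,) = args
                self.count += n
            handler = getattr(self.notifier, method_name, None) if self.notifier else None
            if handler:                     # data-driven reaction to the update
                handler(self, *args)

    class PrintNotifier:
        # Same method name as the update, with the replicated instance prepended.
        def add(self, obj, n):
            print('counter changed by', n, '-> now', obj.value())

    seq = Sequencer()
    a = ReplicatedCounter(seq, PrintNotifier())
    b = ReplicatedCounter(seq)              # a second replica of the same logical object
    a.add(5)
    b.add(2)
    print(a.value(), b.value())             # both replicas read 7

The point of the sketch is only the division of labour: read methods touch local state directly, while updates are serialized by the sequencer so that every replica sees them in one agreed order.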
4.2 Obliq-3D
Obliq-3D is composed of Anim-3D, a 3D animation package written in Modula-3, and a set of wrappers that expose Anim-3D to the Obliq programming language (see Section 4.3). Anim-3D is based on three simple and powerful concepts: graphical objects for building graphical scenes, properties for specifying the behavior of the graphical objects, and input event callbacks to support interactive behavior. Anim-3D uses the damage-repair model: whenever a graphical object or property changes (is damaged), the image is repaired without programmer intervention.
Graphical objects (GOs) represent all the logical entities in the graphical scene: geometry (e.g., lines, polygons, spheres, polygon sets, and text), lights and cameras of various sorts, and groups of other GOs. One special type of group, the RootGO, represents a window into which graphics are rendered. GOs can be grouped together in any valid directed acyclic graph (DAG). The GO class hierarchy is shown in Figure 3.
A property is defined by a name and a value. The name determines which attribute is affected by the property, such as \"Texture Mode\" or \"Box Corner1\". The value specifies how it is affected and is determined by its behavior, a time-variant function that takes the current animation time and returns a value. Properties, property values, and behaviors are all objects, and their relationships are shown in Figure 4. When a property is created, its name and value are fixed. However, values are mutable and their behavior may be changed at any time. There are four kinds of behaviors for each type of property: constant (do not vary over time), synchronous (follow a programmed set of requests, such as \"move from A to B starting at time t=1 and taking 2 seconds\"), asynchronous (execute an arbitrary time-dependent function to compute the value) and dependent (asynchronous properties that depend on other properties). Synchronous properties are linked to animation handles and do not start satisfying their requests until the animation handle is signalled. By linking multiple properties to the same handle, a set of property value changes can be synchronized.
Associated with each GO g is a partial mapping of property names to values determined by the properties that have been associated with g. A property associated with g affects not only g but all the descendants of g that do not override the property. A single property may be associated with any number of GOs. It is perfectly legal to associate a property with a GO that is not affected by it; for example, attaching a \"Surface Color\" property to a GroupGO does not affect the group node itself, but could potentially affect the surface color of any GO contained in that group. A RootGO sets an initial default value for each named property.
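The property/value/behavior split can be illustrated with a small sketch (schematic Python, not the Anim-3D API; all names here are ours): behaviors are immutable time-to-value functions, and a property is changed by installing a new behavior in its value container.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConstantBehavior:
        # Behaviors are treated as immutable values: changing a property means
        # installing a whole new behavior, never mutating an existing one.
        value: float
        def eval(self, t):
            return self.value

    @dataclass(frozen=True)
    class SynchronousBehavior:
        # A time-varying behavior: linear interpolation from a to b over
        # [start, start + duration], loosely modelled on a 'move from A to B' request.
        start: float
        duration: float
        a: float
        b: float
        def eval(self, t):
            u = min(max((t - self.start) / self.duration, 0.0), 1.0)
            return self.a + u * (self.b - self.a)

    class PropertyValue:
        # The container for a behavior; in Repo-3D terms this container is the
        # replicated part, and installing a new behavior would be its update method.
        def __init__(self, behavior):
            self.behavior = behavior
        def set_behavior(self, behavior):
            self.behavior = behavior
        def current(self, t):
            return self.behavior.eval(t)

    # A transform-like property being animated, then pinned to a constant.
    x = PropertyValue(SynchronousBehavior(start=0.0, duration=2.0, a=0.0, b=10.0))
    print(x.current(1.0))                   # 5.0, halfway through the move
    x.set_behavior(ConstantBehavior(10.0))  # move an object by installing a new behavior
    print(x.current(3.0))                   # 10.0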
There are three types of input event callbacks in Anim-3D, corresponding to the three kinds of interactive events they handle: mouse callbacks (triggered by mouse button events), motion callbacks (triggered by mouse motion events) and keyboard callbacks (triggered by key press events). Each object has three callback stacks, and the interactive behavior of an object can be redefined by pushing a new callback onto the appropriate stack. Any event that occurs within a root window associated with a RootGO r will be delivered to the top handler on r's callback stack. The handler could delegate the event to one of r's children, or it may handle it itself, perhaps changing the graphical scene in some way.
DistAnim-3D is a direct descendant of Anim-3D. In addition to the objects being distributed, it has many additional facilities that are needed for general-purpose 3D graphical applications, such as texture mapping, indexed line and polygon sets, choice groups, projection and transformation callbacks, and picking. Since DistAnim-3D is embedded in Repo instead of Obliq (see Section 4.3), the resulting library is called Repo-3D.
4.3 Obliq and Repo
Obliq [8] is a lexically-scoped, untyped, interpreted language for distributed object-oriented computation. It is implemented in, and tightly integrated with, Modula-3. An Obliq computation may involve multiple threads of control within an address space, multiple address spaces on a machine, heterogeneous machines over a local network, and multiple networks over the Internet. Obliq uses, and supports, the Modula-3 thread, exception, and garbage-collection facilities. Its distributed-computation mechanism is based on Network Objects, allowing transparent support for multiple processes on heterogeneous machines. Objects are local to a site, while computations can roam over the network. Repo [23] is a descendant of Obliq that extends the Obliq object model to include replicated objects. Therefore, Repo objects have state that may be local to a site (as in Obliq) or replicated across multiple sites.
DESIGN OF REPO-3D
Repo-3D's design has two logical parts: the basic design and local variations. The basic design encompasses the changes to Obliq-3D to carry it into a distributed context, and additional enhancements that are not particular to distributed graphics (and are therefore not discussed here). Local variations are introduced to handle two issues mentioned in Section 1: transient local changes and responsive local editing.
Figure 3: The Repo-3D GO class hierarchy. Most of the classes are also in Obliq-3D; the italicized ones were added to Repo-3D.
Figure 4: The relationship between properties, names, values, and behaviors. Each oval represents an object and arrows show containment.
5.1 Basic Repo-3D Design
The Anim-3D scene-graph model is well suited for adaptation to a distributed environment.
First, in Anim-3D, properties are attached\nto nodes, not inserted into the graph, and the property and child\nlists are unordered (i.e., the order in which properties are assigned\nto a node, or children are added to a group, does not affect the final\nresult). In libraries that insert properties and nodes in the graph and\nexecute the graph in a well-defined order (such as Inventor), the\nsiblings of a node (or subtree) can affect the attributes of that node\n(or subtree). In Anim-3D, and similar libraries (such as Java 3D),\nproperties are only inherited down the graph, so a node's properties\nare a function of the node itself and its ancestors--its siblings do\nnot affect it. Therefore, subtrees can be added to different scene\ngraphs, perhaps in different processes, with predictable results.\nSecond, the interface (both compiled Anim-3D and interpreted\nObliq-3D) is programmatical and declarative. There is no \"graphi-cal\nscene\" file format per se: graphical scenes are created as the\nside effect of executing programs that explicitly create objects and\nmanipulate them via the object methods. Thus, all graphical\nobjects are stored as the Repo-3D programs that are executed to\ncreate them. This is significant, because by using the Replicated\nObject library described in Section 4.1 to make the graphical\nobjects distributed, the \"file format\" (i.e., a Repo-3D program) is\nupdated for free.\nConverting Anim-3D objects to Replicated Objects involved\nthree choices: what objects to replicate, what methods update the\nobject state, and what the global, replicated state of each object is.\nSince replicated objects have more overhead (e.g., method execution\ntime, memory usage, and latency when passed between\nprocesses), not every category of object in Repo-3D is replicated.\nWe will consider each of the object categories described in\nFigure 4.2 in turn: graphical objects (GOs), properties (values,\nnames, behaviors, animation handles) and callbacks. For each of\nthese objects, the obvious methods are designated as update methods\n, and, as discussed in Section 4.1, the global state of each object\nis implicitly determined by those update methods. Therefore, we\nwill not go into excessive detail about either the methods or the\nstate. Finally, Repo-3D's support for change notification will be\ndiscussed.\n5.1.1 Graphical Objects\nGOs are the most straightforward. There are currently twenty-one\ndifferent types of GOs, and all but the RootGOs are replicated.\nSince RootGOs are associated with an onscreen window, they are\nnot replicated--window creation remains an active decision of the\nlocal process. Furthermore, if replicated windows are needed, the\ngeneral-purpose programming facilities of Repo can be used to\nsupport this in a relatively straightforward manner, outside the\nscope of Repo-3D. A GO's state is comprised of the properties\nattached to the object, its name, and some other non-inherited\nproperty attributes.\n5\nThe methods that modify the property list are\nupdate methods. Group GOs also contain a set of child nodes, and\nhave update methods that modify that set.\n5.1.2 Properties\nProperties are more complex. There are far more properties in a\ngraphical scene than there are graphical objects, they change much\nmore rapidly, and each property is constructed from a set of\nModula-3 objects. There are currently 101 different properties of\nseventeen different types in Repo-3D, and any of them can be\nattached to any GO. 
A typical GO would have anywhere from two\nor three (e.g., a BoxGO would have at least two properties to\ndefine its corners) to a dozen or more. And, each of these properties\ncould be complex: in the example in Section 6, a single\nsynchronous property for a long animation could have hundreds of\nrequests enqueued within it.\nConsider again the object structure illustrated in Figure 4. A\nproperty is defined by a name and a value, with the value being a\ncontainer for a behavior. Only one of the Modula-3 objects is\nreplicated, the property value. Property values serve as the replicated\ncontainers for property behaviors. To change a property, a\nnew behavior is assigned to its value. The state of the value is the\ncurrent behavior.\nAnimation handles are also replicated. They tie groups of related\nsynchronous properties together, and are the basis for the interaction\nin the example in Section 6. In Anim-3D, handles have one\nanimate\nmethod, which starts an animation and blocks until it\nfinishes. Since update methods are executed everywhere, and block\naccess to the object while they are being executed, they should not\ntake an extended period of time. In creating Repo-3D, the\nanimate\nmethod was changed to call two new methods: an update\nmethod that starts the animation, and a non-update method that\nwaits for the animation to finish. We also added methods to pause\nand resume an animation, to retrieve and change the current relative\ntime of an animation handle, and to stop an animation early.\nThe state of an Animation handle is a boolean value that says if it is\nactive or not, plus the start, end, and current time (if the handle is\npaused).\nMost of the Modula-3 objects that comprise a property are not\nreplicated, for a variety of reasons:\n\nProperties represent a permanent binding between a property\nvalue and a name. Since they are immutable, they have no synchronization\nrequirements and can simply be copied between\nprocesses.\n\nNames represent simple constant identifiers, and are therefore\nnot replicated either.\n\nBehaviors and requests are not replicated. While they can be\nmodified after being created, they are treated as immutable\ndata types for two reasons. First, the vast majority of behaviors,\neven complex synchronous ones, are not changed once they\nhave been created and initialized. Thus, there is some justification\nfor classifying the method calls that modify them as part\nof their initialization process. The second reason is practical\nand much more significant. Once a scene has been created and\nis being \"used\" by the application, the bulk of the time-critical\nchanges to it tend to be assignments of new behaviors to the\nexisting property values. For example, an object is moved by\nassigning a new (often constant) behavior to its\nGO_Transform\nproperty value. Therefore, the overall performance\nof the system depends heavily on the performance of\nproperty value behavior changes. By treating behaviors as\nimmutable objects, they can simply be copied between\nprocesses without incurring the overhead of the replicated\nobject system.\n5.1.3 Input Callbacks\nIn Repo-3D, input event callbacks are not replicated. As discussed\nin Section 4.2, input events are delivered to the callback stacks of a\nRootGO. Callbacks attached to any other object receive input\nevents only if they are delivered to that object by the programmer,\nperhaps recursively from another input event callback (such as the\none attached to the RootGO). 
Therefore, the interactive behavior of a root window is defined not only by the callbacks attached to its RootGO, but also by the set of callbacks associated with the graph rooted at that RootGO. Since the RootGOs are not replicated, the callbacks that they delegate event handling to are not replicated either. If a programmer wants to associate callbacks with objects as they travel between processes, Repo's general-purpose programming facilities can be used to accomplish this in a straightforward manner.
5. Some attributes of a GO, such as the arrays of Point3D properties that define the vertices of a polygon set, are not attached to the object, but are manipulated through method calls.
5.1.4 Change Notification
The final component of the basic design is support for notification of changes to distributed objects. For example, when an object's position changes or a new child is added to a group, some of the processes containing replicas may wish to react in some way. Fortunately, as discussed in Section 4.1, the Replicated Object package automatically generates Notification Object types for all replicated object types, which provide exactly the required behavior. The Notification Objects for property values allow a programmer to be notified of changes to the behavior of a property, and the Notification Objects for the various GOs likewise allow notification of updates to them.
5.2 Local Variations
Repo-3D's local variations solve a set of problems particular to the distributed context in which Repo-3D lives: maintaining interactivity and supporting local modifications to the shared scene graph. If the graphical objects and their properties were always strictly replicated, programmers would have to create local variations by copying the objects to be modified, creating a set of Notification Objects on the original objects, the copies of those objects, and all their properties (to be notified when either change), and reflecting the appropriate changes between the instances. Unfortunately, while this process could be automated somewhat, it would still be extremely tedious and error prone. More seriously, the overhead of creating this vast array of objects and links between them would make this approach impractical for short transient changes, such as highlighting an object under the mouse.
The existence of local state is possible because, as\ndiscussed in Section 4.1, the shared state of a replicated object is\nimplicitly defined by the methods that update it\n6\n. Therefore, the\nnew methods that manipulate the local variations are added to the\nGOs as non-update methods. Repo-3D combines both the global\nand local state when creating the graphical scene using the underlying\ngraphics package.\nAs mentioned above, local variations come in two flavors:\n\nProperty variations. There are three methods to set, unset, and\nget the global property list attached to a GO. We added the\nfollowing methods to manipulate local variations: add or\nremove local properties (overriding the value normally used for\nthe object), hide or reveal properties (causing the property\nvalue of the parent node to be inherited), and flush the set of\nlocal variations (removing them in one step) or atomically\napply them to the global state of the object.\n\nChild variations. There are five methods to add, remove,\nreplace, retrieve, and flush the set of children contained in a\ngroup node. We added the following ones: add a local node,\nremove a global node locally, replace a global node with some\nother node locally, remove each of these local variations, flush\nthe local variations (remove them all in one step), and atomically\napply the local variations to the global state.\nThis set of local operations supports the problems local variations\nwere designed to solve, although some possible enhancements are\ndiscussed in Section 7.\nEXAMPLE AN ANIMATION EXAMINER\nAs an example of the ease of prototyping distributed applications\nwith Repo-3D, we created a distributed animation examiner for the\nCATHI [6] animation generation system. CATHI generates short\ninformational animation clips to explain the operation of technical\ndevices. It generates full-featured animation scripts, including\ncamera and object motion, color and opacity effects, and lighting\nsetup.\nIt was reasonably straightforward to modify CATHI to generate\nRepo-3D program files, in addition to the GeomView and Render-Man\nscript files it already generated. The resulting output is a\nRepo-3D program that creates two scene DAGs: a camera graph\nand a scene graph. The objects in these DAGs have synchronous\nbehaviors specified for their surface and transformation properties.\nAn entire animation is enqueued in the requests of these behaviors,\nlasting anywhere from a few seconds to a few minutes.\nWe built a distributed, multi-user examiner over the course of a\nweekend. The examiner allows multiple users to view the same\nanimation while discussing it (e.g., via electronic chat or on the\nphone). Figure 5 shows images of the examiner running on four\nmachines, each with a different view of the scene. The first step\nwas to build a simple \"loader\" that reads the animation file, creates\na window, adds the animation scene and camera to it, and exports\nthe animation on the network, requiring less than a dozen lines of\nRepo-3D code. A \"network\" version, that imports the animation\nfrom the network instead of reading it from disk, replaced the lines\nof code to read and export the animation with a single line to\nimport it. Figure 5(a) shows an animation being viewed by one of\nthese clients.\nThe examiner program is loaded by both these simple clients, and\nis about 450 lines long. The examiner supports:\n\nPausing and continuing the animation, and changing the\ncurrent animation time using the mouse. 
EXAMPLE: AN ANIMATION EXAMINER
As an example of the ease of prototyping distributed applications with Repo-3D, we created a distributed animation examiner for the CATHI [6] animation generation system. CATHI generates short informational animation clips to explain the operation of technical devices. It generates full-featured animation scripts, including camera and object motion, color and opacity effects, and lighting setup.
It was reasonably straightforward to modify CATHI to generate Repo-3D program files, in addition to the GeomView and RenderMan script files it already generated. The resulting output is a Repo-3D program that creates two scene DAGs: a camera graph and a scene graph. The objects in these DAGs have synchronous behaviors specified for their surface and transformation properties. An entire animation is enqueued in the requests of these behaviors, lasting anywhere from a few seconds to a few minutes.
We built a distributed, multi-user examiner over the course of a weekend. The examiner allows multiple users to view the same animation while discussing it (e.g., via electronic chat or on the phone). Figure 5 shows images of the examiner running on four machines, each with a different view of the scene.
Figure 5: Simultaneous images from a session with the distributed CATHI animation viewer, running on four machines, showing an animation of an engine. (a) Plain animation viewer, running on Windows NT. (b) Overview window, running on Windows 95. (c) Animation viewer with local animation meter, running on IRIX. (d) Animation viewer with local transparency to expose hidden parts, running on Solaris.
The first step was to build a simple \"loader\" that reads the animation file, creates a window, adds the animation scene and camera to it, and exports the animation on the network, requiring less than a dozen lines of Repo-3D code. A \"network\" version, that imports the animation from the network instead of reading it from disk, replaced the lines of code to read and export the animation with a single line to import it. Figure 5(a) shows an animation being viewed by one of these clients.
The examiner program is loaded by both these simple clients, and is about 450 lines long. The examiner supports:
Pausing and continuing the animation, and changing the current animation time using the mouse. Since this is done by operating on the shared animation handle, changes performed by any viewer are seen by all. Because of the consistency guarantees, all users can freely attempt to change the time, and the system will maintain all views consistently.
A second \"overview\" window (Figure 5(b)), where a new camera watches the animation scene and camera from a distant view. A local graphical child (representing a portion of the animation camera's frustum) was added to the shared animation camera group to let the attributes of the animation camera be seen in the overview window.
A local animation meter (bottom of Figure 5(c)), that can be added to any window by pressing a key, and which shows the current time offset into the animation both graphically and numerically. It was added in front of the camera in the animation viewer window, as a local child of a GO in the camera graph, so that it would be fixed to the screen in the animation viewer.
Local editing (Figure 5(d)), so that users can select objects and make them transparent (to better see what was happening in the animation) or hide them completely (useful on slow machines, to speed up rendering). Assorted local feedback (highlighting the object under the mouse and flashing the selected object) was done with local property changes to the shared GOs in the scene graph.
Given the attention paid to the design of Repo-3D, it was not necessary to be overly concerned with the distributed behavior of the application (we spent no more than an hour or so). Most of that time was spent deciding if a given operation should be global or a local variation. The bulk of programming and debugging time was spent implementing application code. For example, in the overview window, the representation of the camera moves dynamically, based on the bounding values of the animation's scene and camera graphs. In editing mode, the property that flashes the selected node bases its local color on the current global color (allowing a user who is editing while an animation is in progress to see any color changes to the selected node.)
CONCLUSIONS AND FUTURE WORK
We have presented the rationale for, and design of, Repo-3D, a general-purpose, object-oriented library for developing distributed, interactive 3D graphics applications across a range of heterogeneous workstations. By presenting the programmer with the illusion of a large shared memory, using the Shared Data-Object model of DSM, Repo-3D makes it easy for programmers to rapidly prototype distributed 3D graphics applications using a familiar object-oriented programming paradigm. Both graphical and general-purpose, non-graphical data can be shared, since Repo-3D is embedded in Repo, a general-purpose, lexically-scoped, distributed programming language.
Repo-3D is designed to directly support the distribution of graphical objects, circumventing the \"duplicate database\" problem and allowing programmers to concentrate on the application functionality of a system, rather than its communication or synchronization components.
We have introduced a number of issues that must be\nconsidered when building a distributed 3D graphics library, especially\nconcerning efficient and clean support for data distribution\nand local variations of shared graphical scenes, and discussed how\nRepo-3D addresses them.\nThere are a number of ways in which Repo-3D could be\nimproved. The most important is the way the library deals with\ntime. By default, the library assumes all machines are running a\ntime-synchronization protocol, such as NTP, and uses an internal\nanimation time offset\n7\n(instead of the system-specific time offset)\nbecause different OSs (e.g., NT vs. UNIX) start counting time at\ndifferent dates. Hooks have been provided to allow a programmer\nto specify their own function to compute the \"current\" animation\ntime offset within a process. Using this facility, it is possible to\nbuild inter-process time synchronization protocols (which we do),\nbut this approach is not entirely satisfactory given our stated goal\nof relieving the programmer of such tedious chores. Future\nsystems should integrate more advanced solutions, such as adjusting\ntime values as they travel between machines, so that users of\ncomputers with unsynchronized clocks can collaborate\n8\n. This will\nbecome more important as mobile computers increase in popularity\n, as it may not be practical to keep their clocks synchronized.\nThe specification of local variations in Repo-3D could benefit\nfrom adopting the notion of paths (as used in Java 3D and Inventor,\nfor example). A path is an array of objects leading from the root of\nthe graph to an object; when an object occurs in multiple places in\none or more scene graphs, paths allow these instances to be differ-entiated\n. By specifying local variations using paths, nodes in the\nshared scene graphs could have variations within a process as well\nas between processes. One other limitation of Repo-3D, arising\nfrom our use of the Replicated Object package, is that there is no\nway to be notified when local variations are applied to an object.\nRecall that the methods of an automatically generated Notification\nObject correspond to the update methods of the corresponding\nReplicated Object. Since the methods that manipulate the local\nvariations are non-update methods (i.e., they do not modify the\nreplicated state), there are no corresponding methods for them in\nthe Notification Objects. Of course, it would be relatively straightforward\nto modify the Replicated Object package to support this,\nbut we have not yet found a need for these notifiers.\nA more advanced replicated object system would also improve\nthe library. Most importantly, support for different consistency\nsemantics would be extremely useful. If we could specify\nsemantics such as \"all updates completely define the state of an\nobject, and only the last update is of interest,\" the efficiency of the\ndistribution of property values would improve significantly; in this\ncase, updates could be applied (or discarded) when they arrive,\nwithout waiting for all previous updates to be applied, and could be\napplied locally without waiting for the round trip to the sequencer.\nThere are also times when it would be useful to have support for\nconsistency across multiple objects, either using causal ordering\n(as provided by systems such as ISIS and Visual-Obliq), or some\nkind of transaction protocol to allow large groups of changes to be\napplied either as a unit, or not at all. 
It is not clear how one would\nprovide these features with a replicated object system such as the\none used here.\nWhile a library such as Repo-3D could be built using a variety of\nunderlying platforms, the most likely one for future work is Java.\nJava shares many of the advantages of Modula-3 (e.g., threads and\ngarbage collection are common across all architectures) and the\npackages needed to create a Repo-3D-like toolkit are beginning to\nappear. While Java does not yet have a replicated object system as\npowerful as the Replicated Object package, a package such as\nJSDT [36] (which focuses more on data communication than high-level\nobject semantics) may be a good starting point. Work is also\nbeing done on interpreted, distributed programming languages on\ntop of Java (e.g., Ambit [9]). Finally, Java 3D is very similar to\nAnim-3D, even though its design leans toward efficiency instead of\ngenerality when there are trade-offs to be made. For example, the\ndesigners chose to forgo Anim-3D's general property inheritance\nmechanism because it imposes computational overhead. By combining\npackages such as Java 3D, JSDT, and Ambit, it should be\npossible to build a distributed graphics library such as Repo-3D in\nJava.\nAcknowledgments\nWe would like to thank the reviewers for their helpful comments,\nas well as the many other people who have contributed to this\nproject. Andreas Butz ported CATHI to use Repo-3D and helped\nwith the examples and the video. Clifford Beshers participated in\nmany lively discussions about the gamut of issues dealing with\nlanguage-level support for 3D graphics. Tobias Hllerer and\nSteven Dossick took part in many other lively discussions. Xinshi\nSha implemented many of the extensions to Obliq-3D that went\ninto Repo-3D. Luca Cardelli and Marc Najork of DEC SRC\ncreated Obliq and Obliq-3D, and provided ongoing help and\nencouragement over the years that Repo and Repo-3D have been\nevolving.\nThis research was funded in part by the Office of Naval Research\nunder Contract N00014-97-1-0838 and the National Tele-Immersion\nInitiative, and by gifts of software from Critical Mass and\nMicrosoft.\nReferences\n[1]\nD. B. Anderson, J. W. Barrus, J. H. Howard, C. Rich, C. Shen, and\nR. C. Waters. Building Multi-User Interactive Multimedia Environments\nat MERL. Technical Report Research Report TR95-17, Mit-subishi\nElectric Research Laboratory, November 1995.\n[2]\nH. Bal, M. Kaashoek, and A. Tanenbaum. Orca: A Language for\nParallel Programming of Distributed Systems. IEEE Transactions on\nSoftware Engineering\n, 18(3):190205, March 1992.\n[3]\nK. Bharat and L. Cardelli. Migratory Applications. In ACM UIST '95,\npages 133-142, November 1995.\n[4]\nK. P. Birman. The Process Group Approach to Reliable Distributed\nComputing. CACM, 36(12):3653, Dec 1993.\n[5]\nA. Birrell, G. Nelson, S. Owicki, and E. Wobber. Network Objects.\nIn Proc. 14th ACM Symp. on Operating Systems Principles, 1993.\n[6]\nA Butz, Animation with CATHI, In Proceedings of AAAI/IAAI '97,\npages 957962, 1997.\n[7] J. Calvin, A. Dickens, B. Gaines, P. Metzger, D. Miller, and\nD. Owen. The SIMNET Virtual World Architecture. In Proc. IEEE\nVRAIS '93\n, pages 450455, Sept 1993.\n[8]\nL. Cardelli. A Language with Distributed Scope. Computing Systems\n, 8(1):2759, Jan 1995.\n[9]\nL. Cardelli and A. Gordon. Mobile Ambients. In Foundations of\nSoftware Science and Computational Structures\n, Maurice Nivat\n(Ed.), LNCE 1378, Springer, 140155. 1998.\n[10]\nR. Carey and G. Bell. 
The Annotated VRML 2.0 Reference Manual.\nAddison-Wesley, Reading, MA, 1997.\n[11] C. Carlsson and O. Hagsand. DIVE--A Multi-User Virtual Reality\nSystem. In Proc. IEEE VRAIS '93, pages 394400, Sept 1993.\n[12] C. F. Codella, R. Jalili, L. Koved, and J. B. Lewis. A Toolkit for\nDeveloping Multi-User, Distributed Virtual Environments. In Proc.\nIEEE VRAIS '93\n, pages 401407, Sept 1993.\n7. Computed as an offset from January 1, 1997.\n8. Implementation details of the combination of Network and Replicated\nObjects made it difficult for us to adopt a more advanced solution.\n[13]\nC. Elliott, G. Schechter, R. Yeung and S. Abi-Ezzi. TBAG: A High\nLevel Framework for Interactive, Animated 3D Graphics\nApplications, In Proc. ACM SIGGRAPH 94, pages 421434, August,\n1994.\n[14]\nM. Fairen and A. Vinacua, ATLAS, A Platform for Distributed\nGraphics Applications, In Proc. VI Eurographics Workshop on Programming\nParadigms in Graphics, pages 91102, September, 1997.\n[15] S. Feiner, B. MacIntyre, M. Haupt, and E. Solomon. Windows on the\nWorld: 2D Windows for 3D Augmented Reality. In Proc. ACM UIST\n'93, pages 145155, 1993.\n[16] T. A. Funkhouser. RING: A Client-Server System for Multi-User\nVirtual Environments. In Proc. 1995 ACM Symp. on Interactive 3D\nGraphics, pages 8592, March 1995.\n[17] G. Grimsdale. dVS--Distributed Virtual Environment System. In\nProc. Computer Graphics '91 Conference, 1991.\n[18] S. P. Harbison. Modula-3. Prentice-Hall, 1992.\n[19]\nH.W. Holbrook, S.K. Singhal and D.R. Cheriton, Log-Based\nReceiver-Reliable Multicast for Distributed Interactive Simulation,\nProc. ACM SIGCOMM '95, pages 328341, 1995.\n[20] W. Levelt, M. Kaashoek, H. Bal, and A. Tanenbaum. A Comparison\nof Two Paradigms for Distributed Shared Memory. Software\nPractice and Experience, 22(11):9851010, Nov 1992.\n[21]\nB. Lucas. A Scientific Visualization Renderer. In Proc. IEEE\nVisualization '92, pp. 227-233, October 1992.\n[22]\nV. Machiraju, A Framework for Migrating Objects in Distributed\nGraphics Applications, Masters Thesis, University of Utah, Department\nof Computer Science, Salt Lake City, UT, June, 1997.\n[23]\nB. MacIntyre. Repo: Obliq with Replicated Objects. Programmers\nGuide and Reference Manual. Columbia University Computer\nScience Department Research Report CUCS-023-97, 1997.}\n[24]\nB. MacIntyre, and S. Feiner. Language-level Support for Exploratory\nProgramming of Distributed Virtual Environments. In Proc. ACM\nUIST '96, pages 8394, Seattle, WA, November 68, 1996.\n[25] M. A. Najork and M. H. Brown. Obliq-3D: A High-level, Fast-turnaround\n3D Animation System. IEEE Transactions on Visualization\nand Computer Graphics, 1(2):175145, June 1995.\n[26]\nR. Ben-Natan. CORBA: A Guide to the Common Object Request\nBroker Architecture, McGraw Hill, 1995.\n[27]\nD. Phillips, M. Pique, C. Moler, J. Torborg, D. Greenberg. Distributed\nGraphics: Where to Draw the Lines? Panel Transcript,\nSIGGRAPH 89, available at:\nhttp://www.siggraph.org:443/publications/panels/siggraphi89/\n[28] A. Prakash and H. S. Shim. DistView: Support for Building Efficient\nCollaborative Applications Using Replicated Objects. In Proc. ACM\nCSCW '94, pages 153162, October 1994.\n[29]\nJ. Rohlf and J. Helman, IRIS Performer: A High Performance\nMultiprocessing Toolkit for Real-Time {3D} Graphics, In Proc.\nACM SIGGRAPH 94, pages 381394, 1994.\n[30] M. Roseman and S. Greenberg. Building Real-Time Groupware with\nGroupKit, a Groupware Toolkit. ACM Transactions on Computer-Human\nInteraction, 3(1):66106, March 1996.\n[31] C. 
Shaw and M. Green. The MR Toolkit Peers Package and\nExperiment. In Proc. IEEE VRAIS '93, pages 1822, Sept 1993.\n[32] G. Singh, L. Serra, W. Png, A. Wong, and H. Ng. BrickNet: Sharing\nObject Behaviors on the Net. In Proc. IEEE VRAIS '95, pages 1925,\n1995.\n[33]\nH. Sowizral, K. Rushforth, and M. Deering. The Java 3D API\nSpecification, Addison-Wesley, Reading, MA, 1998.\n[34] M. Stefik, G. Foster, D. G. Bobrow, K. Kahn, S. Lanning, and\nL. Suchman. Beyond The Chalkboard: Computer Support for\nCollaboration and Problem Solving in Meetings. CACM, 30(1):32\n47, January 1987.\n[35]\nP. S. Strauss and R. Carey, An Object-Oriented 3D Graphics Toolkit,\nIn Computer Graphics (Proc. ACM SIGGRAPH 92), pages 341349,\nAug, 1992.\n[36]\nSun Microsystems, Inc. The Java Shared Data Toolkit, 1998.\nUnsupported software, available at:\nhttp://developer.javasoft.com/developer/earlyAccess/jsdt/\n[37] I. Tou, S. Berson, G. Estrin, Y. Eterovic, and E. Wu. Prototyping\nSynchronous Group Applications. IEEE Computer, 27(5):4856,\nMay 1994.\n[38]\nR. Waters and D. Anderson. The Java Open Community Version 0.9\nApplication Program Interface. Feb, 1997. Available online at:\nhttp://www.merl.com/opencom/opencom-java-api.html\n[39]\nA. Wollrath, R. Riggs, and J. Waldo. A Distributed Object Model for\nthe Java System, In Proc. USENIX COOTS '96, pages 219231, July\n1996.\n[40]\nR. Zeleznik, D. Conner, M. Wloka, D. Aliaga, N. Huang,\nP. Hubbard, B. Knep, H. Kaufman, J. Hughes, and A. van Dam. An\nObject-oriented Framework for the Integration of Interactive\nAnimation Techniques. In Computer Graphics (SIGGRAPH '91\nProceedings), pages 105112, July, 1991.\n[41] M. J. Zyda, D. R. Pratt, J. G. Monahan, and K. P. Wilson. NPSNET:\nConstructing a 3D Virtual World. In Proc. 1992 ACM Symp. on\nInteractive 3D Graphics, pages 147156, Mar. 1992.", "keywords": "Data sharing;programming language;Distributed graphics;Data structures;Interactive graphical application;3D graphics library;Change notification;Library;Replicated object;Object representation;distributed virtual environments;Shared memory;Syncronisation;data distribution;object-oriented graphics;Java;Programming;duplicate database;local variations;multi-threaded programming;Heterogeneous workstation;Multi-user interaction;3D graphics application;Repo-3D;3D graphics;Callbacks;Prototyping;Distributed systems;Client-server approach;object-oriented library;Programming language;shared-data object model;prototype;Graphical objects;Client-Server;Object-oriented;Local variation;3D Graphics;Object oriented;Distributed applications;distributed shared memory;Distributed processes;Properties"} {"name": "60", "title": "Coupling Feature Selection and Machine Learning Methods for Navigational Query Identification", "abstract": "It is important yet hard to identify navigational queries in Web search due to a lack of sufficient information in Web queries, which are typically very short. In this paper we study several machine learning methods, including naive Bayes model, maximum entropy model, support vector machine (SVM), and stochastic gradient boosting tree (SGBT), for navigational query identification in Web search. To boost the performance of these machine techniques, we exploit several feature selection methods and propose coupling feature selection with classification approaches to achieve the best performance. 
Different from most prior work that uses a small number of features, in this paper, we study the problem of identifying navigational queries with thousands of available features, extracted from major commercial search engine results, Web search user click data, query log, and the whole Web's relational content. A multi-level feature extraction system is constructed. Our results on real search data show that 1) Among all the features we tested, user click distribution features are the most important set of features for identifying navigational queries. 2) In order to achieve good performance, machine learning approaches have to be coupled with good feature selection methods. We find that gradient boosting tree, coupled with linear SVM feature selection is most effective. 3) With carefully coupled feature selection and classification approaches, navigational queries can be accurately identified with 88.1% F1 score, which is 33% error rate reduction compared to the best uncoupled system, and 40% error rate reduction compared to a well tuned system without feature selection.", "fulltext": "INTRODUCTION\nNowadays, Web search has become the main method for\ninformation seeking. Users may have a variety of intents\nwhile performing a search. For example, some users may\nalready have in mind the site they want to visit when they\ntype a query; they may not know the URL of the site or\nmay not want to type in the full URL and may rely on the\nsearch engine to bring up the right site. Yet others may have\nno idea of what sites to visit before seeing the results. The\ninformation they are seeking normally exists on more than\none page.\nKnowing the different intents associated with a query may\ndramatically improve search quality. For example, if a query\nis known to be navigational, we can improve search results\nby developing a special ranking function for navigational\nqueries. The presentation of the search results or the user-perceived\nrelevance can also be improved by only showing\nthe top results and reserving the rest of space for other purposes\nsince users only care about the top result of a navigational\nquery. According to our statistics, about 18% of\nqueries in Web search are navigational (see Section 6). Thus,\ncorrectly identifying navigational queries has a great potential\nto improve search performance.\nNavigational query identification is not trivial due to a\nlack of sufficient information in Web queries, which are normally\nshort. Recently, navigational query identification, or\nmore broadly query classification, is drawing significant attention\n. Many machine learning approaches that have been\nused in general classification framework, including naive Bayes\nclassifier, maximum entropy models, support vector machines\n, and gradient boosting tree, can be directly applied\nhere. However, each of these approaches has its own advantages\nthat suit certain problems. Due to the characteristics\nof navigational query identification (more to be addressed\nin Section 2 ), it is not clear which one is the best for the\ntask of navigational query identification. Our first contribution\nin this paper is to evaluate the effectiveness of these\nmachine learning approaches in the context of navigational\nquery identification. To our knowledge, this paper is the\nvery first attempt in this regard.\n682\nMachine learning models often suffer from the curse of\nfeature dimensionality. Feature selection plays a key role\nin many tasks, such as text categorization [18]. 
In this paper\n, our second contribution is to evaluate several feature\nselection methods and propose coupling feature selection\nwith classification approaches to achieve the best performance\n: ranking features by using one algorithm before another\nmethod is used to train the classifier. This approach is\nespecially useful when redundant low quality heterogeneous\nfeatures are encountered.\nMost previous studies in query identification are based on\na small number of features that are obtained from limited\nresources [12]. In this paper, our third contribution is to\nexplore thousands of available features, extracted from major\ncommercial search engine results, user Web search click\ndata, query log, and the whole Web's relational content. To\nobtain most useful features, we present a three level system\nthat integrates feature generation, feature integration, and\nfeature selection in a pipe line.\nThe system, after coupling features selected by SVM with\na linear kernel and stochastic gradient boosting tree as classification\ntraining method, is able to achieve an average performance\nof 88.1% F1 score in a five fold cross-validation.\nThe rest of this paper is organized as follows. In the next\nsection, we will define the problem in more detail and describe\nthe architecture of our system. We then present a\nmulti-level feature extraction system in Section 3. We describe\nfour classification approaches in Section 4 and three\nfeature selection methods in Section 5. We then conduct\nextensive experiments on real search data in Section 6. We\npresent detailed discussions in Section 7. We discuss some\nrelated work in Section 8. Finally, we conclude the paper in\nSection 9.\nPROBLEM DEFINITION\nWe divide queries into two categories: navigational and\ninformational. According to the canonical definition [3, 14],\na query is navigational if a user already has a Web-site in\nmind and the goal is simply to reach that particular site.\nFor example, if a user issues query \"amazon\", he/she mainly\nwants to visit \"amazon.com\". This definition, however, is\nrather subjective and not easy to formalize. In this paper,\nwe extend the definition of navigational query to a more\ngeneral case: a query is navigational if it has one and only\none perfect site in the result set corresponding to this query.\nA site is considered as perfect if it contains complete information\nabout the query and lacks nothing essential.\nIn our definition, navigational query must have a corresponding\nresult page that conveys perfectness, uniqueness,\nand authority.\nUnlike Broder's definition, our definition\ndoes not require the user to have a site in mind. This makes\ndata labeling more objective and practical. For example,\nwhen a user issues a query \"Fulton, NY\", it is not clear\nif the user knows the Web-site \"www.fultoncountyny.org\".\nHowever, this Web-site has an unique authority and perfect\ncontent for this query and therefore the query \"Fulton,\nNY\" is labeled as a navigational query. All non-navigational\nqueries are considered informational. For an informational\nquery, typically there exist multiple excellent Web-sites corresponding\nto the query that users are willing to explore.\nTo give another example, in our dataset, query \"national\nearth science teachers association\" has only one perfect corresponding\nURL \"http://www.nestanet.org/\" and therefore\nis labeled as navigational query.\nQuery \"Canadian gold\nmaple leaf\" has several excellent corresponding URL's, including\n\"http://www. 
goldfingercoin.com/ catalog gold/ canadian\nmaple leaf.htm\", \"http://coins.about.com/ library/weekly/\naa091802a.htm\" and \"http://www.onlygold.com/Coins/ Cana-dianMapleLeafsFullScreen\n.asp\".\nTherefore, query \"Canadian\ngold maple leaf\" is labeled as non-navigational query.\nFigure 1 illustrates the architecture of our navigational\nquery identification system. A search engine takes in a query\nand returns a set of URLs. The query and returned URLs\nare sent into a multi-level feature extraction system that\ngenerates and selects useful features; details are presented\nin the next section. Selected features are then input into a\nmachine learning tool to learn a classification model.\nMULTI-LEVEL FEATURE EXTRACTION\nThe multiple level feature system is one of the unique\nfeatures of our system. Unlike prior work with a limited\nnumber of features or in a simulated environment [11, 12],\nour work is based on real search data, a major search en-gine's\nuser click information and a query log. In order to\nhandle large amount of heteorgeneous features in an efficient\nway, we propose a multi-level feature system. The first\nlevel is the feature generation level that calculates statistics\nand induces features from three resources: a click engine,\na Web-map and a query log. The second level is responsible\nfor integrating query-URL pair-wise features into query\nfeatures by applying various functions. The third level is\na feature selection module, which ranks features by using\ndifferent methods. Below we present the details of the first\ntwo levels. The third level will be presented separately in\nSection 5 since those feature selection methods are standard.\n3.1\nFeature Generation\nQueries are usually too short and lack sufficient context\nto be classified. Therefore, we have to generate more features\nfrom other resources. We use three resources to generate\nfeatures: a click engine, a Web-map, and query logs.\nThe click engine is a device to record and analyze user click\nbehavior. It is able to generate hundreds of features automatically\nbased on user click through distributions [16]. A\nWeb-map can be considered as a relational database that\nstores hundreds of induced features on page content, anchor\ntext, hyperlink structure of webpages, including the\ninbound, outbound URLs, and etc. Query logs are able to\nprovide bag-of-words features and various language model\nbased features based on all the queries issued by users over\na period of time.\nInput to feature generation module is a query-URL pair.\nFor each query, the top 100 ULRs are recorded and 100\nquery-URLs are generated. Thus for each query-URL pair,\nwe record a total of 197 features generated from the following\nfour categories:\nClick features: Click features record the click information\nabout a URL. We generate a total number of 29\nclick features for each query-URL pair. An example of\na click feature is the click ratio (CR). 
Let $n_i^K$ denote the number of clicks on URL $K$ for query $i$, and let the total number of clicks for query $i$ be $n_i = \sum_K n_i^K$.

Figure 1: Diagram of the Result Set Based Navigational Query Identification System (a search engine answers the query; features are generated from the Webmap, the click engine, and the query log; query-URL features are integrated into query features; the feature selection module ranks them by information gain, SVM feature ranking, or boosting feature selection; and the classification module trains naive Bayes, MaxEnt, SVM, or SGBT classifiers).

The click ratio is the ratio of the number of clicks on a particular URL $K$ for query $i$ to the total number of clicks for this query, which has the form
$CR(i, K) = n_i^K / n_i$.

URL features: URL features measure the characteristics of the URL itself. There are 24 URL-based features in total. One such feature is a URL match feature, named urlmr, which is defined as
$urlmr = l(p) / l(u)$
where $l(p)$ is the length of the longest substring $p$ of the query that is present in the URL and $l(u)$ is the length of the URL $u$. This feature is based on the observation that Web-sites tend to use their names in their URLs. These distributions convey uniqueness and authority.

Anchor text features: Anchor text is the visible text in a hyperlink, which also provides useful information for navigational query identification. For example, one anchor text feature is the entropy of the anchor link distribution [12]. This distribution is basically the histogram of inbound anchor text of the destination URL. If a URL is pointed to by the same anchor texts, the URL is likely to contain perfect content. There are many other anchor text features that are calculated by considering many factors, such as the edit distance between query and anchor texts, the diversity of the hosts, etc. In total, there are 63 features derived from anchor text.

Since we record the top 100 results for each query and each query-URL pair has 197 features, in total there are 19,700 features available for each query. Feature reduction becomes necessary due to the curse of dimensionality [5]. Before applying feature selection, we conduct a feature integration procedure that merges redundant features.

3.2 Feature Integration
We design a feature integration operator, named the normalized ratio $r_k$ of rank $k$, as follows:
$r_k(f_j) = \frac{\max(f_j) - f_{jk}}{\max(f_j) - \min(f_j)}$, for $k = 2, 5, 10, 20$.   (1)

The design of this operator is motivated by the observation that the values of query-URL features for navigational queries and for informational queries decrease at different rates. Taking the urlmr feature as an example and considering a navigational query "Walmart" and an informational query "Canadian gold maple leaf", we plot the feature values of the top 100 URLs for both queries, as shown in Figure 2. As we can see, the feature value for the navigational query drops quickly to a stable point, while for the informational query it does not stabilize. As we will see in the experiment section, this operator is the most effective one for feature reduction.

Figure 2: urlmr query-URL feature for a navigational query (upper) and an informational query (lower).

Besides this operator, we use other statistics for feature integration, including the mean, median, maximum, minimum, entropy, standard deviation, and the values in the top five positions of the result-set query-URL pair features. In total, we now have 15 measurements instead of 100 for the top 100 URLs for each query.
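To make the integration step concrete, the following minimal Python sketch (our illustration, not code from the paper; the function names and the assumption that the 100 nonnegative per-URL values of one feature arrive in rank order are ours) computes the normalized ratios r_k and the other aggregation statistics for a single query-URL feature.

import math
from statistics import mean, median, pstdev

def normalized_ratio(values, k):
    # values: feature values of the top-100 URLs for one query, in rank order
    hi, lo = max(values), min(values)
    return 0.0 if hi == lo else (hi - values[k - 1]) / (hi - lo)

def entropy(values):
    # simple entropy of the (nonnegative) value distribution over ranks
    total = sum(values)
    if total == 0:
        return 0.0
    probs = [v / total for v in values if v > 0]
    return -sum(p * math.log(p) for p in probs)

def integrate(values):
    # collapse 100 per-URL values into the 15 per-query measurements
    feats = {f"r{k}": normalized_ratio(values, k) for k in (2, 5, 10, 20)}
    feats.update({f"top{i + 1}": values[i] for i in range(5)})
    feats.update(mean=mean(values), median=median(values),
                 maximum=max(values), minimum=min(values),
                 entropy=entropy(values), std=pstdev(values))
    return feats  # 4 ratios + 5 top positions + 6 statistics = 15 measurements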
With these integration operators, the dimension of the feature vector for each query is m = 15 x 197 = 2955, which is much smaller than 19,700.

CLASSIFICATION METHODS
We apply the most popular generative (naive Bayes), descriptive (maximum entropy), and discriminative (support vector machine and stochastic gradient boosting tree) learning methods [19] to attack the problem.

4.1 Naive Bayes Classifier
A simple yet effective learning algorithm for classification is based on a simple application of Bayes' rule:
$P(y|q) = \frac{P(y)\,P(q|y)}{P(q)}$   (2)
In query classification, a query $q$ is represented by a vector of $K$ attributes $q = (v_1, v_2, \ldots, v_K)$. Computing $P(q|y)$ in this case is not trivial, since the space of possible attribute vectors $q = (v_1, v_2, \ldots, v_K)$ is vast. To simplify this computation, the naive Bayes model introduces the additional assumption that all of the attribute values $v_j$ are independent given the category label $y$. That is, for $i \neq j$, $v_i$ and $v_j$ are conditionally independent given $y$. This assumption greatly simplifies the computation by reducing Eq. (2) to
$P(y|q) = \frac{P(y)\prod_{j=1}^{K} P(v_j|y)}{P(q)}$   (3)
Based on Eq. (3), a maximum a posteriori (MAP) classifier can be constructed by seeking the optimal category which maximizes the posterior $P(y|q)$:
$y^* = \arg\max_{y \in Y} \Big\{ P(y) \prod_{j=1}^{K} P(v_j|y) \Big\}$   (4)
$\phantom{y^*} = \arg\max_{y \in Y} \Big\{ \prod_{j=1}^{K} P(v_j|y) \Big\}$   (5)
Eq. (5) is called the maximum likelihood naive Bayes classifier, obtained by assuming a uniform prior over categories. To cope with features that remain unobserved during training, the estimate of $P(v_j|y)$ is usually adjusted by Laplace smoothing
$P(v_j|y) = \frac{N_j^y + a_j}{N^y + a}$   (6)
where $N_j^y$ is the frequency of attribute $j$ in $D_y$, $N^y = \sum_j N_j^y$, and $a = \sum_j a_j$. A special case of Laplace smoothing is add-one smoothing, obtained by setting $a_j = 1$. We use add-one smoothing in our experiments below.
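As an illustration only (the class and variable names below are ours, and the continuous query features are assumed to have been discretized beforehand), a MAP naive Bayes classifier with add-one smoothing can be sketched in a few lines of Python:

import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, X, y):
        # X: list of discretized feature vectors, y: list of labels ("nav" / "info")
        self.priors = Counter(y)
        self.counts = defaultdict(Counter)   # counts[label][(j, value)]
        self.values = defaultdict(set)       # observed values per attribute j
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[yi][(j, v)] += 1
                self.values[j].add(v)
        return self

    def predict(self, x):
        def log_posterior(label):
            lp = math.log(self.priors[label] / sum(self.priors.values()))
            for j, v in enumerate(x):
                n_jv = self.counts[label][(j, v)]
                n_j = sum(self.counts[label][(j, u)] for u in self.values[j])
                # add-one smoothing, i.e. Eq. (6) with a_j = 1
                lp += math.log((n_jv + 1) / (n_j + len(self.values[j])))
            return lp
        return max(self.priors, key=log_posterior)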
4.2 Maximum Entropy Classifier
Maximum entropy is a general technique for estimating probability distributions from data and has been successfully applied in many natural language processing tasks. The overriding principle in maximum entropy is that when nothing is known, the distribution should be as uniform as possible, that is, have maximal entropy [9]. Labeled training data are used to derive a set of constraints for the model that characterize the class-specific expectations for the distribution. Constraints are represented as expected values of features. The improved iterative scaling algorithm finds the maximum entropy distribution that is consistent with the given constraints. In the query classification scenario, maximum entropy estimates the conditional distribution of the class label given a query. A query is represented by a set of features. The labeled training data are used to estimate the expected value of these features on a class-by-class basis. Improved iterative scaling finds a classifier of an exponential form that is consistent with the constraints from the labeled data.
It can be shown that the maximum entropy distribution is always of the exponential form [4]:
$P(y|q) = \frac{1}{Z(q)} \exp\Big(\sum_i \lambda_i f_i(q, y)\Big)$
where each $f_i(q, y)$ is a feature, $\lambda_i$ is a parameter to be estimated, and $Z(q)$ is simply the normalizing factor that ensures a proper probability: $Z(q) = \sum_y \exp(\sum_i \lambda_i f_i(q, y))$. Learning of the parameters can be done using generalized iterative scaling (GIS), improved iterative scaling (IIS), or a quasi-Newton gradient climber [13].

4.3 Support Vector Machine
The support vector machine (SVM) is one of the most successful discriminative learning methods. It seeks a hyperplane to separate a set of positively and negatively labeled training data. The hyperplane is defined by $w^T x + b = 0$, where the parameter $w \in R^m$ is a vector orthogonal to the hyperplane and $b \in R$ is the bias. The decision function is the hyperplane classifier
$H(x) = \mathrm{sign}(w^T x + b)$.
The hyperplane is designed such that $y_i(w^T x_i + b) \geq 1 - \xi_i$, $i = 1, \ldots, N$, where $x_i \in R^m$ is a training data point and $y_i \in \{+1, -1\}$ denotes the class of the vector $x_i$. The margin is defined by the distance between the two parallel hyperplanes $w^T x + b = 1$ and $w^T x + b = -1$, i.e., $2/\|w\|_2$. The margin is related to the generalization of the classifier [17]. The SVM training problem is defined as follows:
minimize $\;\tfrac{1}{2} w^T w + \gamma\, \mathbf{1}^T \xi$
subject to $\;y_i(w^T x_i + b) \geq 1 - \xi_i$, $i = 1, \ldots, N$, and $\xi \geq 0$   (7)
where the scalar $\gamma$ is called the regularization parameter, and is usually empirically selected to reduce the testing error rate.
The basic SVM formulation can be extended to the nonlinear case by using nonlinear kernels. Interestingly, the complexity of an SVM classifier representation does not depend on the number of features, but rather on the number of support vectors (the training examples closest to the hyperplane). This property makes SVMs suitable for high dimensional classification problems [10]. In our experiments, we use a linear SVM and an SVM with a radial basis kernel.
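For reference, here is a minimal sketch of how the two SVM variants could be trained on the integrated query features, assuming scikit-learn is available; this is our illustration rather than the authors' implementation, and the synthetic data and regularization values are placeholders.

import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the (n_queries, 2955) integrated feature matrix and labels
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2955))
y_train = rng.choice([-1, 1], size=200)   # +1 navigational, -1 otherwise

linear_svm = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
rbf_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

linear_svm.fit(X_train, y_train)
rbf_svm.fit(X_train, y_train)

# The linear model's normal vector w is also usable as a feature ranking signal
w = linear_svm.named_steps["linearsvc"].coef_.ravel()

The weight vector extracted in the last line anticipates the feature-ranking use of the linear SVM described in Section 5.2.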
4.4 Gradient Boosting Tree
Like the SVM, the gradient boosting tree model also seeks a parameterized classifier. It iteratively fits an additive model [8]
$f_T(x) = T_0(x; \Theta_0) + \eta \sum_{t=1}^{T} \beta_t T_t(x; \Theta_t)$,
such that a certain loss function $\sum_i L(y_i, f_T(x_i))$ is minimized, where $T_t(x; \Theta_t)$ is the tree induced at iteration $t$ with a finite number of parameters $\Theta_t$, weighted by the parameter $\beta_t$, and $\eta$ is the learning rate. At iteration $t$, the tree $T_t(x; \Theta)$ is induced to fit the negative gradient by least squares. That is,
$\hat{\Theta} := \arg\min_{\Theta} \sum_{i}^{N} \big(-G_{it} - \beta_t T_t(x_i; \Theta)\big)^2$,
where $G_{it}$ is the gradient over the current prediction function,
$G_{it} = \left.\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\right|_{f = f_{t-1}}$.
The optimal weights of the trees, $\beta_t$, are determined by
$\beta_t = \arg\min_{\beta} \sum_{i}^{N} L\big(y_i, f_{t-1}(x_i) + \beta T(x_i, \Theta)\big)$.
If the L-2 loss function $[y_i - f(x_i)]^2 / 2$ is used, the gradient is $G(x_i) = -y_i + f(x_i)$. In this paper, the Bernoulli loss function
$-2 \sum_i \big(y_i f(x_i) - \log(1 + \exp(f(x_i)))\big)$
is used, and the gradient has the form
$G(x_i) = y_i - \frac{1}{1 + \exp(-f(x_i))}$.
During each iteration of gradient boosting, the feature space is further partitioned. This kind of rectangular partition does not require any data preprocessing and the resulting classifier can be very robust. However, it may suffer from the dead zone phenomenon, where the prediction cannot change with the features, due to its discrete partition of the feature space. Friedman (2002) found that it helps performance to sample uniformly without replacement from the dataset before estimating the next gradient step [6]. This method is called stochastic gradient boosting.

FEATURE SELECTION
Many methods have been used in feature selection for text classification, including information gain, mutual information, document frequency thresholding, and Chi-square statistics. Yang and Pedersen [18] give a good comparison of these methods. Information gain is one of the most effective methods in the context of text categorization. In addition to information gain, we also use feature selection methods based on the SVM's feature coefficients and the stochastic gradient boosting tree's variable importance.

5.1 Information Gain
Information gain is frequently used as a measure of feature goodness in text classification [18]. It measures the number of bits of information obtained for category prediction by knowing the presence or absence of a feature. Let $\{y_i : i = 1, \ldots, m\}$ be the set of categories; the information gain of a feature $f$ is defined as
$IG(f) = -\sum_{i=1}^{m} P(y_i)\log P(y_i) + P(f)\sum_{i=1}^{m} P(y_i|f)\log P(y_i|f) + P(\bar{f})\sum_{i=1}^{m} P(y_i|\bar{f})\log P(y_i|\bar{f})$
where $\bar{f}$ indicates that $f$ is not present. We compute the information gain for each unique feature and select the top-ranked features.

5.2 Linear SVM Feature Ranking
The linear SVM (7) produces a hyperplane as well as a normal vector $w$. The normal vector $w$ serves as the slope of the hyperplane classifier and measures the relative importance that each feature contributes to the classifier. An extreme case is that when there is only one feature correlated to the sample labels, the optimal classifier hyperplane must be perpendicular to this feature axis.
The L-2 norm of $w$ in the objective denotes the inverse margin. It can also be viewed as a Gaussian prior on the random variable $w$. Sparse results may be achieved by assuming a Laplace prior and using the L-1 norm [2].
Unlike the previous information gain method, the linear SVM normal vector $w$ is not determined by the whole body of training samples. Instead, it is determined by an optimally determined subset, the support vectors, that are critical to classification. Another obvious difference is that the normal vector $w$ is solved jointly over all features instead of one feature at a time.
Our results show that the linear SVM is able to provide reasonably good results in feature ranking for our navigational query identification problem even when the corresponding classifier is weak.
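A small self-contained sketch of this SVM-based ranking step (ours, not the paper's code; synthetic data stands in for the 2955-dimensional query features) keeps the 50 features with the largest absolute weights of the normal vector w:

import numpy as np
from sklearn.svm import LinearSVC

def svm_feature_ranking(X, y, top_k=50):
    # Rank features by the magnitude of the linear SVM's normal vector w
    model = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
    w = np.abs(model.coef_).ravel()
    return np.argsort(w)[::-1][:top_k]   # indices of the top_k features

# Example with synthetic data standing in for the integrated query features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2955))
y = rng.choice([0, 1], size=200)
selected = svm_feature_ranking(X, y)
X_reduced = X[:, selected]

In the coupled setup evaluated in Section 6, a reduced matrix like X_reduced would then be passed to the stochastic gradient boosting tree classifier.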
5.3 Stochastic Gradient Boosting Tree
Boosting methods construct weak classifiers using subsets of features and combine them by considering their prediction errors. This yields a natural feature ranking procedure: each feature is ranked by its related classification errors.
Tree-based boosting methods approximate the relative influence of a feature $x_j$ as
$J_j^2 = \sum_{\text{splits on } x_j} I_k^2$
where $I_k^2$ is the empirical improvement of the $k$-th split on $x_j$ at that point.
Unlike the information gain model, which considers one feature at a time, or the SVM method, which considers all features at one time, the boosting tree model considers a set of features at a time and combines them according to their empirical errors.
Let $R(X)$ be a feature ranking function based on a data set $X$. Information gain feature ranking depends on the whole training set: $R_{\mathrm{Info}}(X) = R_{\mathrm{Info}}(X_{tr})$. The linear SVM ranks features based on an optimally determined subset of the data, that is, $R_{\mathrm{SVM}}(X) = R_{\mathrm{SVM}}(X_{SV})$, where $X_{SV}$ is the set of support vectors. The stochastic gradient boosting tree (SGBT) uses multiple randomly sampled data sets to induce trees and ranks features by their linear combination. Its ranking function can be written as $R_{\mathrm{SGBT}}(X) = \sum_{t=1}^{T} \beta_t R_{\mathrm{SGBT}}^{t}(X_t)$, where $X_t$ is the training set randomly sampled at iteration $t$.

EXPERIMENTS
A total of 2102 queries were uniformly sampled from a query log over a four-month period. The queries were sent to four major search engines, including Yahoo, Google, MSN, and Ask. The top 5 URLs returned by each search engine were recorded and sent to trained editors for labeling (the number 5 is just an arbitrary number we found good enough to measure the quality of retrieval). If there exists one and only one perfect URL among all returned URLs for a query, the query is labeled as navigational; otherwise, it is labeled as non-navigational.
Out of the 2102 queries, 384 were labeled as navigational. Since they were uniformly sampled from a query log, we estimate that about 18% of queries are navigational. The data set was divided into five folds for the purpose of cross-validation. All results presented in this section are average test results over five-fold cross-validation.

6.2 Evaluation
Classification performance is evaluated using three metrics: precision, recall, and F1 score. In each test, let $n_{++}$ denote the number of positive samples that are correctly classified (true positives); $n_{-+}$ the number of negative samples that are classified as positive (false positives); $n_{+-}$ the number of positive samples that are classified as negative (false negatives); and $n_{--}$ the number of negative samples that are correctly classified (true negatives). Recall is the ratio of the number of true positives to the total number of positive samples in the testing set, namely
$\mathrm{recall} = \frac{n_{++}}{n_{++} + n_{+-}}$.
Precision is the ratio of the number of true positive samples to the number of samples that are classified as positive, namely
$\mathrm{precision} = \frac{n_{++}}{n_{++} + n_{-+}}$.
F1 is a single score that combines precision and recall, defined as follows:
$F1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$.
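The three metrics follow directly from the confusion counts; a short Python helper (ours, for illustration; the example counts are toy numbers, not data from the paper):

def precision_recall_f1(n_pp, n_np, n_pn):
    # n_pp: true positives, n_np: false positives, n_pn: false negatives
    precision = n_pp / (n_pp + n_np) if (n_pp + n_np) else 0.0
    recall = n_pp / (n_pp + n_pn) if (n_pp + n_pn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example counts
print(precision_recall_f1(n_pp=70, n_np=13, n_pn=27))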
6.3 Results
6.3.1 Feature Selection Results
Table 1 shows the distributions of the top 50 features selected by the different methods. All methods agree that click features are the most important. In particular, linear SVM and boosting tree select more click features than information gain does. On the other hand, information gain selects many features from anchor text and from other metrics such as spam scores.

Table 1: Distributions of the Selected Top 50 Features According to Feature Categories
  Feature Set     Info. Gain   Linear SVM   Boosting
  Click           52%          84%          74%
  URL             4%           2%           6%
  Anchor Text     18%          2%           12%
  Other metrics   26%          12%          8%

Table 2 shows the distribution of the selected features according to the feature integration operators, i.e., which operators applied to the result-set query-URL pairwise features are most useful. We group the 15 operators into 5 types: vector, normalized ratios ($r_k$, $k = 2, 5, 10, 20$), min/max, entropy/standard deviation, and median/mean. The vector group includes all query-URL pair features in the top 5 positions; the normalized ratios are defined in (1). As we can see from the table, all feature integration operators are useful.

Table 2: Distributions of the Selected Top 50 Features According to Integration Operators
  Operators           Info. Gain   Linear SVM   Boosting
  vector              40%          22%          28%
  normalized ratios   8%           38%          22%
  min/max             6%           20%          16%
  entropy/std         20%          16%          18%
  mean/median         26%          4%           16%

The number of selected features directly influences the classification performance. Figure 3 shows the relationship between the boosting tree classification performance and the number of selected features. Performance increases with cleaner selected features; however, if the number of selected features is too small, performance decreases. A size of 50 works best in our experiments.

Figure 3: Classification performance (F1) of the boosting tree classifier against the number of features selected by the boosting tree: 25, 50, 100, 200, 400, 800, and 2955 (all features).

6.3.2 Classification Results
We first apply the four classification methods (naive Bayes, the maximum entropy method, support vector machines, and the stochastic gradient boosting tree model) to all available features. The results are reported in Table 3. As we can see, the stochastic gradient boosting tree has the best performance, with an F1 score of 0.78.

Table 3: Results of Various Classification Methods over All Features
  Method               Recall   Precision   F1
  Naive Bayes          0.242    0.706       0.360
  SVM (Linear Kernel)  0.189    1.000       0.318
  Maximum Entropy      0.743    0.682       0.711
  SVM (RBF Kernel)     0.589    0.485       0.528
  Boosting Trees       0.724    0.845       0.780

We then apply these methods to machine-selected features. We test four different feature sets, each with 50 features, selected by information gain, linear SVM, and boosting tree. The combined set consists of the top 30 features selected by linear SVM and the top 29 features selected by boosting tree; note that the total number of features is still 50, since linear SVM and boosting tree selected 9 identical features in their top 30 sets.
Table 4 presents the results of the coupled feature selection and classification methods. The performance of each method is improved by applying it to machine-selected clean features, except for the naive Bayes classifier. Surprisingly, the features selected by the linear SVM are the best set of features. The results show that even if the underlying problem is not linearly separable, the linear coefficients of the large-margin linear classifier still convey important feature information.
When the stochastic gradient boosting\ntree is applied over this set of features, we get the best\nperformance with 0.881 F1 score among all cross-methods\nevaluations. Without feature ablation, SGBT is only able\nto achieve 0.738 F1 score. That is, feature selection has\nan effect of error reduction rate 40%. Without introducing\nlinear SVM in feature ablation, if SGBT works on the feature\nset selected by its own variable importance ranking, it\nachieves 0.848 F1 score. That is to say, a cross methods\ncoupling of feature selection and classification causes a 33%\nerror reduction.\nDISCUSSION\nAn interesting result from Table 1 is the features selected\nfor navigational query identification.\nThose features are\nmostly induced from user click information. This is intu-itively\nunderstandable because if a query is navigational,\nthe navigational URL is the most clicked one. On the other\nhand, it might be risky to completely rely on click information\n. The reasons might be 1) user click features may\nbe easier to be spammed, and 2) clicks are often biased by\nvarious presentation situation such as quality of auto abstraction\n, etc.\nFrom Table 4, we observe that linear SVM and boosting\ntree have better feature selection power than information\ngain. The reason that information gain performs inferior to\nlinear SVM and boosting tree is probably due to the fact\nthat information gain considers each feature independently\nwhile linear SVM considers all features jointly and boosting\ntree composites feature rank by sum over all used features.\nThe results show that URL, anchor text and other metrics\nare helpful only when they are considered jointly with click\nfeatures.\nThe most important result is that the stochastic gradient\nboosting tree coupled with linear SVM feature selection\nmethod achieves much better results than any other combination\n. In this application, the data has very high dimension\nconsidering the small sample size. The boosting tree method\nneeds to partition an ultra-high dimensional feature space\nfor feature selection. However, the stochastic step does not\nhave enough data to sample from [6]. Therefore, the boosted\nresult might be biased by earlier sampling and trapped in\na local optimum. Support vector machine, however, is able\nto find an optimally determined subset of training samples,\nnamely support vectors, and ranks features based on those\nvectors. Therefore, the SVM feature selection step makes\nup the disadvantage of the stochastic boosting tree in its\ninitial sampling and learning stages that may lead to a local\noptimum.\nAs expected, naive Bayes classifier hardly works for the\nnavigational query identification problem. It is also the only\nclassifier that performs worse with feature selection. Naive\nBayes classifiers work well when the selected features are\nmostly orthogonal. However, in this problem, all features\nare highly correlated.\nOn the other hand, classification\nmethods such as boosting tree, maximum entropy model\nand SVM do not require orthogonal features.\nRELATED WORK\nOur work is closely related to query classification, a task of\nassigning a query to one or more categories. However, general\nquery classification and navigational query identification\nare different in the problems themselves. Query classification\nfocuses on content classification, thus the classes are\nmainly topic based, such as shopping and products. 
While in navigational query identification, the two classes are intent based.
With regard to classification approaches, our work is related to Gravano et al. [7], who applied various classification methods, including linear and nonlinear SVMs, decision trees, and log-linear regression, to classify query locality based on result-set features in 2003. Their work, however, lacked carefully designed feature engineering and therefore only achieved an F1 score of 0.52 with a linear SVM. Beitzel et al. [1] realized the limitation of a single classification method in their query classification problem and proposed a semi-supervised learning method. Their idea is to compose the final classifier by combining the classification results of multiple classification methods. Shen et al. [15] also trained a linear combination of two classifiers. Differently, instead of combining two classifiers for prediction, we couple feature selection and classification.
In the feature extraction aspect, our work is related to Kang and Kim 2003 [11], who extracted heterogeneous features to classify user queries into three categories: the topic relevance task, the homepage finding task, and the service finding task. They combined those features, for example URL features and content features, by several empirical linear functions. Each function was applied to a different binary classification problem. Their idea was to emphasize features for different classification purposes. However, the important features were not selected automatically and therefore their work is not applicable in applications with thousands of features.

Table 4: F1 Scores of Systems with Coupled Feature Selection and Classification Methods
  Method               Info. Gain   Linear SVM   Boosting   Combined Set
  SVM (Linear Kernel)  0.124        0.733        0.712      0.738
  Naive Bayes          0.226        0.182        0.088      0.154
  Maximum Entropy      0.427        0.777        0.828      0.784
  SVM (RBF Kernel)     0.467        0.753        0.728      0.736
  Boosting Tree        0.627        0.881        0.848      0.834

CONCLUSION
We have made three contributions in this paper. First, we evaluate the effectiveness of four machine learning approaches in the context of navigational query identification and find that boosting trees are the most effective. Second, we evaluate three feature selection methods and propose coupling feature selection with classification approaches. Third, we propose a multi-level feature extraction system to exploit more information for navigational query identification.
The underlying classification problem has been satisfactorily solved with an 88.1% F1 score. In addition to the successful classification, we identified the key features for recognizing navigational queries: the user click features. Other features, such as URL and anchor text features, are also important when coupled with user click features.
In future research, it is of interest to conduct cross-method co-training for the query classification problem to utilize unlabeled data, as there is enough evidence that different training methods may benefit each other.
REFERENCES
[1] S. Beitzel, E. Jensen, D. Lewis, A. Chowdhury, A. Kolcz, and O. Frieder. Improving Automatic Query Classification via Semi-supervised Learning. In The Fifth IEEE International Conference on Data Mining, pages 2730, New Orleans, Louisiana, November 2005.
[2] C. Bhattacharyya, L. R. Grate, M. I. Jordan, L. El Ghaoui, and I. S. Mian.
Robust Sparse Hyperplane\nClassifiers: Application to Uncertain Molecular\nProfiling Data. Journal of Computational Biology,\n11(6):10731089, 2004.\n[3] A. Broder. A Taxonomy of Web Search. In ACM\nSIGIR Forum, pages 310, 2002.\n[4] S. della Pietra, V. della Pietra, and J. Lafferty.\nInducing Features of Random Fields. IEEE\nTransactions on Pattern Analysis and Machine\nIntelligence, 19(4), 1995.\n[5] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern\nClassification. John Wiley, New York, NY, 2nd\nedition, 2000.\n[6] J. H. Friedman. Stochastic Gradient Boosting.\nComputational Statistics and Data Analysis,\n38(4):367378, 2002.\n[7] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein.\nCategorizing Web Queries According to Geographical\nLocality. In ACM 12th Conference on Information\nand Knowledge Management (CIKM), pages 2730,\nNew Orleans, Louisiana, November 2003.\n[8] T. Hastie, R. Tibshirani, and J. Friedman. The\nElements of Statistical Learning: Data Mining,\nInference, and Predication. Springer Verlag, New\nYork, 2001.\n[9] E. T. Jaynes. Papers on Probability, Statistics, and\nStatistical Physics. D. Reidel, Dordrecht, Holland and\nBoston and Hingham, MA, 1983.\n[10] T. Joachims. Text Categorization with Support Vector\nMachines: Learning with Many Relevant Features. In\nProceedings of the 10th European Conference on\nMachine Learning (ECML), pages 137142, Chemnitz,\nGermany, 1998.\n[11] I.-H. Kang and G. Kim. Query Type Classification for\nWeb Document Retrieval. In Proceedings of the 26th\nannual international ACM SIGIR conference on\nResearch and development in informaion retrieval,\npages 64 71, Toronto Canada, July 2003.\n[12] U. Lee, Z. Liu, and J. Cho. Automatic Identification\nof User Goals in Web Search. In Proceedings of the\n14th International World Wide Web Conference\n(WWW), Chiba, Japan, 2005.\n[13] R. Malouf. A Comparison of Algorithms for Maximum\nEntropy Parameter Estimation. In Proceedings of the\nSixth Conference on Natural Language Learning\n(CoNLL), Taipei, China, 2002.\n[14] D. E. Rose and D. Levinson. Understanding User\nGoals in Web Search. In Proceedings of The 13th\nInternational World Wide Web Conference (WWW),\n2004.\n[15] D. Shen, R. Pan, J.-T. Sun, J. J. Pan, K. Wu, J. Yin,\nand Q. Yang. Q2C at UST: Our Winning Solution to\nQuery Classification in KDDCUP 2005. SIGKDD\nExplorations, 7(2):100110, 2005.\n[16] L. Sherman and J. Deighton. Banner advertising:\nMeasuring effectiveness and optimizing placement.\nJournal of Interactive Marketing, 15(2):6064, 2001.\n[17] V. Vapnik. The Nature of Statistical Learning Theory.\nSpringer Verlag, New York, 1995.\n[18] Y. Yang and J. Pedersen. An Comparison Study on\nFeature Selection in Text Categorization. In\nProceedings of the 20th annual international ACM\nSIGIR conference on Research and development in\ninformaion retrieval, Philadelphia, PA, USA, 1997.\n[19] S.C. Zhu. Statistical modeling and conceptualization\nof visual patterns. 
IEEE Transactions on Pattern\nAnalysis and Machine Intelligence, 25(6):619712,\n2003.\n689\n", "keywords": "Stochastic Gradient Boosting Tree;Linear SVM Feature Ranking;Gradient Boosting Tree;Information Gain;Naive Bayes Classifier;Support Vector Machine;Experiments Results;Machine Learning;Maximum Entropy Classifier;Navigational Query Classification;Navigational and Informational query;Multiple Level feature system"} {"name": "61", "title": "Coverage Directed Test Generation for Functional Verification using Bayesian Networks", "abstract": "Functional verification is widely acknowledged as the bottleneck in the hardware design cycle. This paper addresses one of the main challenges of simulation based verification (or dynamic verification ), by providing a new approach for Coverage Directed Test Generation (CDG). This approach is based on Bayesian networks and computer learning techniques. It provides an efficient way for closing a feedback loop from the coverage domain back to a generator that produces new stimuli to the tested design. In this paper, we show how to apply Bayesian networks to the CDG problem. Applying Bayesian networks to the CDG framework has been tested in several experiments, exhibiting encouraging results and indicating that the suggested approach can be used to achieve CDG goals.", "fulltext": "INTRODUCTION\nFunctional verification is widely acknowledged as the bottleneck\nin the hardware design cycle [1]. To date, up to 70% of the design\ndevelopment time and resources are spent on functional verification\n. The increasing complexity of hardware designs raises the\nneed for the development of new techniques and methodologies\nthat can provide the verification team with the means to achieve its\ngoals quickly and with limited resources.\nThe current practice for functional verification of complex designs\nstarts with a definition of a test plan, comprised of a large\nset of events that the verification team would like to observe during\nthe verification process. The test plan is usually implemented\nusing random test generators that produce a large number of test-cases\n, and coverage tools that detect the occurrence of events in\nthe test plan, and provide information related to the progress of the\ntest plan. Analysis of the coverage reports allows the verification\nteam to modify the directives for the test generators and to better\nhit areas or specific tasks in the design that are not covered well [5].\nThe analysis of coverage reports, and their translation to a set\nof test generator directives to guide and enhance the implementation\nof the test plan, result in major manual bottlenecks in the otherwise\nhighly automated verification process. Considerable effort\nis invested in finding ways to close the loop of coverage analysis\nand test generation. Coverage directed test generation (CDG) is\na technique to automate the feedback from coverage analysis to\ntest generation. The main goals of CDG are to improve the coverage\nprogress rate, to help reaching uncovered tasks, and to provide\nmany different ways to reach a given coverage task. Achieving\nthese goals should increase the efficiency and quality of the verification\nprocess and reduce the time and effort needed to implement\na test plan.\nIn this paper, we propose a new approach for coverage directed\ntest generation. Our approach is to cast CDG in a statistical inference\nframework, and apply computer learning techniques to achieve\nthe CDG goals. 
Specifically, our approach is based on modeling the relationship between the coverage information and the directives to the test generator using Bayesian networks [9]. A Bayesian network is a directed graph whose nodes are random variables and whose edges represent direct dependency between their sink and source nodes. Each node in the Bayesian network is associated with a set of parameters specifying its conditional probability given the state of its parents.
Simply stated, the CDG process is performed in two main steps. In the first step, a training set is used to learn the parameters of a Bayesian network that models the relationship between the coverage information and the test directives. In the second step, the Bayesian network is used to provide the most probable directives that would lead to a given coverage task (or set of tasks).
Bayesian networks are well suited to the kind of modeling required for CDG, because they offer a natural and compact representation of the rather complex relationship between the CDG ingredients, together with the ability to encode essential domain knowledge. Moreover, adaptive tuning of the Bayesian network parameters provides a means to focus on the rare coverage cases.
We describe two experiments in which we tested the ability of Bayesian networks to handle aspects of the CDG problem in various settings. The goals of the experiments were to increase the hitting rates in hard-to-reach coverage cases; design directives aimed at reaching uncovered tasks; and provide many different directives for a given coverage task. We used two settings for our experiments. In the first setting, we used a Bayesian network to generate instruction streams to an abstract model of the pipeline of an advanced super-scalar PowerPC processor. In the second setting, we used a Bayesian network to generate directives to an existing test generator of a storage control unit of a mainframe, with the goal of covering all possible transactions from the CPUs connected to this unit. In both experiments we reached our goals. The encouraging results suggest that Bayesian networks may well be used to achieve the primary goals of CDG.
Figure 1: Verification process with automatic test generation (the test plan is translated into directives for the random test generator; generated test-cases drive the simulator and the DUT; the coverage analysis tool produces reports that feed back into the directives)
The remainder of this paper is organized as follows. In Section 2, we briefly present the CDG framework and review related work. In Section 3, we describe Bayesian networks and their application to CDG. Sections 4 and 5 provide detailed descriptions of the experiments. We conclude with a few remarks and suggestions for future study.
COVERAGE DIRECTED TEST GENERATION (CDG)
In current industry practice, verification by simulation, or dynamic verification, is the leading technique for functional verification. Coverage is used to ensure that the verification of the design is thorough, and the definition of coverage events or testing requirements is a major part in the definition of the verification plan of the design. Often, a family of coverage events that share common properties are grouped together to form a coverage model [7]. Members of the coverage model are called coverage tasks and are considered part of the test plan. Cross-product coverage models [7] are of special interest.
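Before the formal definition in the next paragraph, the following small sketch gives a concrete taste of a cross-product model; the attribute names, value sets, and legality rule here are hypothetical, chosen only to illustrate how the list of coverage tasks is spanned (Python):
from itertools import product

# Hypothetical attributes and value sets for a cross-product coverage model.
attributes = {
    "cmd":  ["read", "write", "RMW"],
    "resp": ["ACK", "NACK", "retry"],
    "core": [0, 1],
    "pipe": [0, 1],
}

def is_legal(task):
    # Invented restriction, standing in for the design's real legality rules.
    return not (task["cmd"] == "RMW" and task["resp"] == "retry")

names = list(attributes)
tasks = [dict(zip(names, combo)) for combo in product(*attributes.values())]
legal_tasks = [t for t in tasks if is_legal(t)]
print(len(tasks), "tasks in the cross product,", len(legal_tasks), "of them legal")
In practice the attributes and their values come from the test plan and the coverage analysis tool, and the legal subset is often much smaller than the full cross product, as in the experiments below.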
These models are defined by a basic event and a set of\nparameters or attributes, where the list of coverage tasks comprises\nall possible combinations of values for the attributes.\nFigure 1 illustrates the verification process with an automatic\nrandom test generation. A test plan is translated by the verification\nteam to a set of directives for the random test generator. Based\non these directives and embedded domain knowledge, the test generator\nproduces many test-cases. The design under test (DUT) is\nthen simulated using the generated test-cases, and its behavior is\nmonitored to make sure that it meets its specification. In addition,\ncoverage tools are used to detect the occurrence of coverage tasks\nduring simulation. Analysis of the reports provided by the coverage\ntools allows the verification team to modify the directives to\nthe test generator to overcome weaknesses in the implementation\nof the test plan. This process is repeated until the exit criteria in the\ntest plan are met.\nThe use of automatic test generators can dramatically reduce the\namount of manual labor required to implement the test plan. Even\nso, the manual work needed for analyzing the coverage reports and\ntranslating them to directives for the test generator, can constitute a\nbottleneck in the verification process. Therefore, considerable effort\nis spent on finding ways to automate this procedure, and close\nthe loop of coverage analysis and test generation. This automated\nfeedback from coverage analysis to test generation, known as Coverage\nDirected test Generation (CDG), can reduce the manual work\nin the verification process and increase its efficiency.\nIn general, the goal of CDG is to automatically provide directives\nthat are based on coverage analysis to the test generator. This can\nbe further divided into two sub-goals: First, to provide directives to\nthe test generator that help in reaching hard cases, namely uncovered\nor rarely covered tasks. Achieving this sub-goal can shorten\nthe time needed to fulfill the test plan and reduce the number of\nmanually written directives. Second, to provide directives that allow\neasier reach for any coverage task, using a different set of directives\nwhen possible. Achieving this sub-goal makes the verification\nprocess more robust, because it increases the number of times a task\nhas been covered during verification. Moreover, if a coverage task\nis reached via different directions, the chances to discover hidden\nbugs related to this task are increased [8].\nIn the past, two general approaches for CDG have been pro-posed\n: feedback-based CDG and CDG by construction. Feedback-based\nCDG relies on feedback from the coverage analysis to automatically\nmodify the directives to the test generator. For example,\nin [2], a genetic algorithm is used to select and modify test-cases to\nincrease coverage. In [13], coverage analysis data is used to modify\nthe parameters of a Markov Chain that represents the DUT. The\nMarkov Chain is then used to generate test-cases for the design.\nIn [11], the coverage analysis results trigger a set of generation\nrules that modify the testing directives. In contrast, in CDG by\nconstruction, an external model of the DUT is used to generate test\ndirectives designed to accurately hit the coverage tasks. 
For example\n, in [14] an FSM model of pipelines is used to generate tests that\ncover instruction interdependencies in the pipes.\nCOVERAGE DIRECTED TEST GENERATION USING BAYESIAN NETWORKS\nThe random nature of automatic test-case generators imposes a\nconsiderable amount of uncertainty in the relationship between test\ndirectives and coverage tasks, e.g., the same set of directives can\nbe used to generate many different test-cases, each leading to different\ncoverage tasks. This inherent uncertainty suggests to cast\nthe CDG setup in a statistical inference framework. To this end,\nBayesian networks offer an efficient modeling scheme by providing\na compact representation of the complex (possibly stochastic)\nrelationships among the CDG ingredients, together with the possibility\nto encode essential domain knowledge. It should be noted\nthat we do not suggest modeling the behavior of the design, typi-cally\na large and complicated (deterministic) finite state machine.\nRather, we model the CDG process itself, namely the trial-and-error\nprocedure governed by the verification team, which controls\nthe test generation at one end and traces the progress of covering\nthe test plan at the other.\n3.1\nA Brief Introduction to Bayesian Networks\nA Bayesian network is a graphical representation of the joint\nprobability distribution for a set of variables. This representation\nwas originally designed to encode the uncertain knowledge of an\nexpert and can be dated back to the geneticist Sewall Wright [15].\nTheir initial development in the late 1970s was motivated by the\nneed to model the top-down (semantic) and bottom-up (perceptual)\ncombinations of evidence (observations/findings). Their capability\nfor bidirectional inferences, combined with a rigorous probabilistic\nfoundation, led to the rapid emergence of Bayesian networks as the\nmethod of choice for uncertain reasoning in AI and expert systems,\nreplacing ad hoc rule-based schemes. Bayesian networks also play\na crucial role in diagnosis and decision support systems [10].\nObviously, there's a computational problem in dealing with many\nsources of uncertainty, i.e. the ability to perform probabilistic ma-nipulations\nin high dimensions (the \"curse of dimensionality\"). The\nmain breakthrough emerged in the late 1980s and can be attributed\nto Judea Pearl [12], who introduced 'modularity', thus enabling\n287\nlarge and complex models and theirs associated calculations, to be\nsplit up into small manageable pieces. The best way to do this is\nvia the imposition of meaningfully simplified conditional independence\nassumptions. These, in turn, can be expressed by means of a\npowerful and appealing graphical representation.\nA Bayesian network consists of two components. The first is a\ndirected acyclic graph in which each vertex corresponds to a random\nvariable. This graph represents a set of conditional independence\nproperties of the represented distribution: each variable is\nprobabilistically independent of its non-descendants in the graph\ngiven the state of its parents. The graph captures the qualitative\nstructure of the probability distribution, and is exploited for efficient\ninference and decision making. The second component is a\ncollection of local interaction models that describe the conditional\nprobability p\n(X\ni\n|Pa\ni\n) of each variable X\ni\ngiven its parents Pa\ni\n. Together\n, these two components represent a unique joint probability\ndistribution over the complete set of variables X [12]. 
The joint probability distribution is given by the following equation:
p(X) = ∏_{i=1}^{n} p(X_i | Pa_i)    (1)
It can be shown that this equation actually implies the conditional independence semantics of the graphical structure given earlier. Eq. 1 shows that the joint distribution specified by a Bayesian network has a factored representation as the product of individual local interaction models. Thus, while Bayesian networks can represent arbitrary probability distributions, they provide a computational advantage for those distributions that can be represented with a simple structure.
The characterization given by Eq. 1 is a purely formal characterization in terms of probabilities and conditional independence. An informal connection can be made between this characterization and the intuitive notion of direct causal influence. It has been noted that if the edges in the network structure correspond to causal relationships, where a variable's parents represent the direct causal influences on that variable, then the resulting networks are often very concise and accurate descriptions of the domain. Thus it appears that in many practical situations, a Bayesian network provides a natural way to encode causal information. Nonetheless, it is often difficult and time consuming to construct Bayesian networks from expert knowledge alone, particularly because of the need to provide numerical parameters. This observation, together with the fact that data is becoming increasingly available and cheaper to acquire, has led to a growing interest in using data to learn both the structure and probabilities of a Bayesian network (cf. [3, 9, 12]).
Typical types of queries that can be efficiently answered by the Bayesian network model are derived from applying the Bayes rule to yield posterior probabilities for the values of a node (or set of nodes), X, given some evidence, E, i.e. an assignment of specific values to other nodes:
p(X|E) = p(E|X) p(X) / p(E)
Thus, a statistical inference can be made in the form of either selecting the Maximal A Posteriori (MAP) probability, max p(X|E), or obtaining the Most Probable Explanation (MPE), arg max p(X|E). The sophisticated yet efficient methods that have been developed for using Bayesian networks provide the means for predictive and diagnostic inference. (This is in contrast to standard regression and classification methods, e.g., feed-forward neural networks and decision trees, that encode only the probability distribution of a target variable given several input variables.) A diagnostic query is such that the evidence nodes E represent a cause, while the queried nodes, X, represent an effect. The reversed direction, i.e. evidence on the effect nodes which serves to determine the possible cause, is called abductive. These methods also allow Bayesian networks to reason efficiently with missing values, by computing the marginal probability of the query given the observed values.
Figure 2: Bayesian Network of CDG (test generator directives such as cp_cmd_type and cp_core_enable on the left, a hidden operation-mode node, and the coverage attributes Cmd, Resp, and Core on the right; the directives are weighted value lists, e.g. cp_cmd_type = {{read, 20}, {write, 20}, {RMW, 5}, ...} and cp_core_enable = {{Core 0, 10}, {Core 1, 10}, {Both, 100}})
There are two important extensions of Bayesian networks: Dynamic Bayesian networks and influence diagrams.
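Before turning to these extensions, a minimal sketch may help make the factored joint distribution of Eq. 1 and the MAP/MPE queries concrete. The three-node chain and all probability values below are hypothetical (loosely inspired by Fig. 2), and the brute-force enumeration only illustrates the semantics, not the inference algorithms actually used:
from itertools import product

values = {
    "CmdType": ["read", "write"],
    "Cmd":     ["load", "store"],
    "Resp":    ["ACK", "NACK"],
}

p_cmdtype = {"read": 0.6, "write": 0.4}                   # p(CmdType)
p_cmd = {("load", "read"): 0.9, ("store", "read"): 0.1,   # p(Cmd | CmdType)
         ("load", "write"): 0.2, ("store", "write"): 0.8}
p_resp = {("ACK", "load"): 0.7, ("NACK", "load"): 0.3,    # p(Resp | Cmd)
          ("ACK", "store"): 0.5, ("NACK", "store"): 0.5}

def joint(a):
    # Eq. 1: the joint is the product of the local interaction models.
    return (p_cmdtype[a["CmdType"]]
            * p_cmd[(a["Cmd"], a["CmdType"])]
            * p_resp[(a["Resp"], a["Cmd"])])

def assignments(evidence):
    names = list(values)
    for combo in product(*(values[n] for n in names)):
        a = dict(zip(names, combo))
        if all(a[k] == v for k, v in evidence.items()):
            yield a

def posterior(var, evidence):
    # p(var | evidence) by summing the joint over all consistent assignments.
    totals = {v: 0.0 for v in values[var]}
    for a in assignments(evidence):
        totals[a[var]] += joint(a)
    z = sum(totals.values())
    return {v: t / z for v, t in totals.items()}

def mpe(evidence):
    # Most Probable Explanation: the single most likely complete assignment.
    return max(assignments(evidence), key=joint)

print(posterior("CmdType", {"Resp": "ACK"}))  # which directive value makes ACK likely
print(mpe({"Resp": "ACK"}))
The networks used in the experiments below are learned from simulation data and are far larger, but they are queried in exactly this way: coverage attributes are clamped as evidence and the directive nodes are read off from the posterior or the MPE assignment.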
The first extension (see [6]) enables the incorporation of time, thus modeling temporal dependencies in a stochastic process. The second extension (see [3]) enriches the Bayesian network paradigm with decision making and utility considerations, which create a powerful mechanism for dealing with decisions under uncertainty constraints.
3.2 A Bayesian Network for CDG
The CDG process begins with the construction of a Bayesian network model that describes the relations between the test directives and the coverage space. Figure 2 illustrates a simple, yet typical, Bayesian network, which models a small excerpt of the CDG setup. The network describes the relationship between the directives that influence the type of command that is generated (cp_cmd_type) and the active cores inside a CPU (cp_core_enable), and the coverage attributes of a generated command (cmd), its response (resp), and the core that generated it (core). The network is comprised of input nodes (the white circles on the left) that relate to the test directives that appear to their left, and coverage nodes (the white squares on the right) that define the coverage space. In addition to these nodes, for which we have physical observations, the network may also contain hidden nodes, namely variables for which we don't have any physical evidence (observations) for their interactions. These variables are represented as shaded ovals in the figure. Hidden nodes are added to the Bayesian network structure primarily to reflect expert domain knowledge regarding hidden causes and functionalities which impose some structure on the interaction between the interface (observed) nodes. (Introducing hidden nodes to the network structure also has the secondary impact of reducing the computational complexity through dimensionality reduction, and serves as a means for capturing non-trivial, higher-order correlations between observed events.)
The Bayesian network in Fig. 2 describes the causal relationships from the test generation directives (causes) to the coverage model space (effects). For example, it encodes the expert knowledge that indicates that there is an internal mode of operation for which we do not have any direct physical observation, yet it is determined by the combined values of the test generation attributes. On the other hand, the (hidden) mode of operation directly influences the choice of the resulting command and core, which are attributes of the coverage model. Note the absence of a direct link between the requested core (via the directive cp_core_enable) and the observed one (at Core), which captures our understanding that there is no direct influence between the directives and the coverage attribute. Another assumption encoded in the CDG Bayesian network structure of Fig. 2 is that the only information that governs the response for the command is the generated command itself, and this is encoded via the direct link from Cmd to Resp.
In a nutshell, the design of the Bayesian network starts with identifying the ingredients (attributes) that will constitute the directives to the test generator on one hand, and the coverage model on the other. These attributes are dictated by the interface to the simulation environment, to the coverage analysis tool, and by the specification of the coverage model in the test plan. These ingredients are used as the first guess about the nodes in the graph structure.
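Written down as plain data, such a first guess for the small network of Fig. 2 might look as follows; the node names follow the figure and the description above, but treating this exact edge list as the one used in the work is an assumption:
# Interface (observed) nodes, taken from the directives and the coverage model.
directive_nodes = ["CmdType", "CoreEnable"]
coverage_nodes  = ["Cmd", "Resp", "Core"]

# Hidden node reflecting the internal mode of operation described above.
hidden_nodes = ["OpMode"]

# Expert-specified edges: directives determine the hidden mode, the mode drives
# the generated command and core, and the response depends only on the command.
edges = [
    ("CmdType",    "OpMode"),
    ("CoreEnable", "OpMode"),
    ("OpMode",     "Cmd"),
    ("OpMode",     "Core"),
    ("Cmd",        "Resp"),
]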
Connecting these nodes with edges is our technique for expert knowledge encoding, as demonstrated in Fig. 2. Obviously, using a fully connected graph, i.e. with an edge between every pair of nodes, represents absolutely no knowledge about the possible dependencies and functionalities within the model. Hence, as the graph structure becomes sparser, it represents deeper domain knowledge. We discovered that a good practice in specifying a dependency graph is to remove edges for which we have a strong belief that the detached nodes are not directly influencing one another. At this point, hidden nodes can be added to the structure, either to represent hidden causes, which contribute to a better description of the functionalities of the model, or to take on a role from the complexity standpoint, by breaking the largest cliques in the graph (see [4]).
After the Bayesian network structure is specified, it is trained using a sample of directives and the respective coverage tasks. To this end, we activate the simulation environment and construct a training set out of the directives used and the resulting coverage tasks. We then use one of the many known learning algorithms (cf. [3]) to estimate the Bayesian network's parameters (i.e. the set of conditional probability distributions). This completes the design and training of the Bayesian network model.
In the evaluation phase, the trained Bayesian network can be used to determine directives for a desired coverage task, via posterior probabilities, MAP and MPE queries, which use the coverage task attributes as evidence. For example, in a model for which the directives are weights of possible outcomes for internal draws in the test generator (e.g. the directive cp_cmd_type in Fig. 2 specifies a preference for read commands, write commands, etc.), we can specify a desired coverage task assignment (evidence) for the coverage nodes (e.g. Resp = ACK) and calculate the posterior probability distribution for the directive nodes (e.g. p(Cmd Type | Resp = ACK)), which directly translates to the set of weights to be written in the test generator's parameter file. Note that, as the example demonstrates, we can specify partial evidence and/or determine a partial set of directives.
INSTRUCTION STREAM GENERATION USING A DYNAMIC NETWORK
To evaluate the feasibility of the suggested modeling approach to the CDG problem, we designed a controlled study that acts in a simple domain (small state space), where we have a deep understanding of the DUT's logic, direct control of the input, and a `ground truth' reference to evaluate performance.
We conducted the experiment on a model of the pipeline of NorthStar, an advanced PowerPC processor. The pipeline of NorthStar contains four execution units and a dispatch unit that dispatches instructions to the execution units. Figure 3 illustrates the general structure of the NorthStar pipeline.
Figure 3: The structure of the NorthStar pipeline (a dispatch unit feeding the branch (B), simple arithmetic (S), complex arithmetic (C), and load/store (L) pipes; the arithmetic pipes show the data fetch, execute, and write back stages S1-S3 and C1-C3)
For reasons of simplicity, our\nmodel contains only the simple arithmetic unit that executes simple\narithmetic instructions such as add, and the complex arithmetic unit\nthat can execute both simple and complex arithmetic instructions.\nEach execution unit consists of three pipeline stages: (1) Data fetch\nstage, in which the data of the instruction is fetched; (2) Execute\nstage, in which the instruction is executed; (3) Write back stage,\nwhere the result is written back to the target register. The flow of\ninstructions in the pipeline is governed by a simple set of rules.\nFor example, in-order dispatching of instructions to the execution\nunits, and rules for stalling because of data dependency. Note, the\ncomplete set of rules is omitted to simplify the description.\nWe developed a simple abstract model of the dispatch unit and\ntwo pipelines and used it to simulate the behavior of the pipeline.\nThe input to our NorthStar model is a simplified subset of the PowerPC\ninstruction set. Each instruction is modeled by four input\nvariables. The first variable indicates the type of the instruction.\nThere are five possible types: S - simple arithmetic; C1, C2, C3\n- complex arithmetic; and NOP - instructions that are executed in\nother execution units. The second and third input variables constitute\nthe source and target register of the instructions. For simplicity\nand in order to increase the possibility of register interdependency,\nwe used only eight registers instead of the 32 registers available in\nPowerPC. The last input variable indicates whether the instruction\nuses the condition register. Due to restrictions on the legal combinations\nof the input variables (e.g., NOP instruction is not using\nregisters), there are 449 possible instructions.\nWe used a coverage model that examines the state of the two\npipelines, and properties of the instructions in them. The coverage\nmodel consists of five attributes, the type of instruction at stage 1 of\nthe simple and complex arithmetic pipelines (S1Type and C1Type,\nresp.), flags indicating whether stage 2 of the pipelines are occupied\n(S2Valid and C2Valid, resp.), and a flag indicating whether\nthe instruction at stage 2 of the simple arithmetic pipeline uses the\ncondition register (S2CR). The total number of legal coverage tasks\nin the model is 54 (out of 80 possible cases).\nThe goal of the experiment was to generate instruction streams\nthat cover the coverage model described above. Specifically, we\nconcentrated on the ability to reach the desired coverage cases with\nmany, yet relatively short, instruction sequences.\nWe modeled the temporal dependencies between the instructions\nand coverage tasks and among the instructions using a two-slice\nDynamic Bayesian Network (DBN) [6]. Rather than an accurate\nmapping of the specific state machine structure, the DBN encoded\nthe general knowledge of an expert on the modus operandi of this\ntype of DUT. Using an expert's domain knowledge proved to be vital\nin this setup because it provided essential information needed\nfor the generation of instruction streams. Moreover, it enabled\nthe use of hidden nodes, which effectively reduced the complexity\nthrough dimensionality reduction. 
The resulting DBN has 19 nodes per slice, 13 of which are observed, 15 intra (within a slice) edges, and 37 inter (between slices) edges (see Fig. 4).
Figure 4: Two-slice DBN for the NorthStar experiment (input, hidden, and coverage nodes for time slices t and t+1)
The training set is composed of 1000 sequences of random instructions. The length of each sequence is 10 cycles. Note that the model we used for the Bayesian network made it easier to measure length in terms of cycles instead of instructions. The training set contained 385 different instructions. During its simulation, 49 (out of 54) coverage cases were observed. The average number of instructions per sequence in the training set was 9.7 out of the 20 possible dispatches in 10 cycles (i.e., more than half of the dispatch slots in the sequence are empty).
After training the Bayesian network, we tried to generate instruction sequences for all 54 coverage tasks in the coverage model. Each sequence was generated using the DBN, by solving the Most Probable Explanation (MPE) problem for the requested coverage task. All 49 coverage cases of the training set, plus three additional uncovered cases, were reached using instruction sequences designed by the DBN. In addition, we generated many different instruction sequences for each coverage task that was covered by the Bayesian network. The average number of cycles in a generated sequence dropped to 2.9, while the average number of instructions in a sequence was reduced to 3.7. This reflects the fact that the generated instruction sequences cause fewer stall states en route to reaching the desired coverage cases. Table 1 illustrates the details of reaching two difficult coverage cases: the rarest coverage task, which was seen only once in the training set, and an uncovered task. The table shows the number of cycles and instructions required to reach these tasks in the training set, in the instruction sequences generated by the trained DBN, and in the `text book' solution, that is, the best possible sequence. The table indicates that the instruction sequences generated by the DBN are shorter, both in instructions and cycles, than the sequences in the training set. Overall, the results indicate that the trained DBN is able to generate many compact instruction sequences that are not far from the best possible solution.
Table 1: NorthStar experiment results
              Rare task                Uncovered task
              Instructions  Cycles     Instructions  Cycles
Training Set  6             7          -             -
DBN           4             5          4             5
Text Book     3             4          3             4
Figure 5: The structure of the SCE simulation environment (eight CPUs, CP0 to CP7, each with two cores issuing commands and receiving responses; the Storage Control Element handles them in two pipelines, Pipe 0 and Pipe 1, in front of the memory subsystem)
STORAGE CONTROL EXPERIMENT USING A STATIC NETWORK
The second experiment was conducted in a real-life setting. The design under test in the experiment is the Storage Control Element (SCE) of an IBM z-series system. Figure 5 shows the structure of the SCE and its simulation environment. The SCE handles commands from eight CPUs (CP0-CP7). Each CPU consists of two cores that generate commands to the SCE independently. The SCE handles incoming commands using two internal pipelines.
When the SCE finishes handling a command, it sends a response to the commanding CPU.
The simulation environment for the SCE contains, in addition to the SCE itself, behavioral models for the eight CPUs that it services, and a behavioral model for the memory subsystem. The behavioral models of the CPUs generate commands to the SCE based on their internal state and a directive file provided by the user. The directive file contains a set of parameters that affect the behavior of the system. Some of these parameters control the entire system, while others are specific to certain components of the system, such as a specific CPU. Figure 2 shows an example of some parameters that are used in the simulation environment of the SCE. Each parameter contains a set of possible values that the parameter can receive. Each value has a weight associated with it. When the value of a parameter is needed, it is randomly chosen from the set of possible values according to the weights of these values. For example, when a CPU generates a new command, it first uses the cp_cmd_type parameter to determine the type of command to generate, and then a specific parameter for that command type to determine the exact command to be used.
In the experiment, we tried to cover all the possible transactions between the CPUs and the SCE. The coverage model contained five attributes: the CPU (8 possible values) and the core (2 values) in it that initiated the command, the command itself (31 values), its response (14 values), and the pipeline in the SCE that handled it (2 values). Overall, the cross product contains 13,888 cases and the coverage model contains 1,968 legal coverage tasks.
This experiment added many new challenges over the controlled experiment described in the previous section. First, our knowledge about the DUT in this experiment was very limited compared to the full understanding of the design in the first experiment. In addition, we were less able to observe and control the input and output nodes of the Bayesian network. For the test parameters, we could only specify the distribution of each parameter; we could not observe the values that were actually used, only their distribution. Moreover, in some cases the behavioral models ignored the parameters and generated commands based on their internal state. Thus, the actual distribution used was not exactly the provided distribution of the parameters. This type of observation (a distribution instead of a specific value) is known as soft evidence. The coverage data that we got out of the simulation environment was a summary of all the coverage tasks that occurred during the simulation of a test-case. Therefore, it was hard to correlate between the observed coverage tasks and the parameter values that caused them, and between the different observed coverage tasks.
Figure 6: Coverage progress of the CDG process (covered tasks as a function of the number of test-cases, for the CDG directives and for the baseline directives)
Because we had limited knowledge about the DUT and the correlation between the parameters in the test directives and the coverage tasks, the first Bayesian network we constructed contained arcs between each of the coverage variables and each of the test parameters. We trained this network with 160 test-cases (each taking more than 30 minutes to execute).
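As an aside on the mechanics implied by the directive files described above: a posterior computed from the trained network, such as p(Cmd Type | Resp = ACK), can be rounded into exactly the kind of weighted value list shown in Figure 2, and the generator's draw over such a list is a simple weighted choice. The sketch below is only an illustration under invented numbers and a simplified rendering of the parameter format; it is not the actual interface of the test generator:
import random

# Hypothetical posterior for a directive node, e.g. p(CmdType | Resp = ACK).
posterior = {"read": 0.55, "write": 0.35, "RMW": 0.10}

def to_parameter_block(name, dist, scale=100):
    # Render the posterior as a weighted directive, cf. cp_cmd_type in Figure 2.
    lines = [name + " = {// val weight"]
    lines += ["  {%s, %d}," % (value, max(1, round(p * scale))) for value, p in dist.items()]
    lines.append("};")
    return "\n".join(lines)

def weighted_draw(dist):
    # The generator-side draw: a value is chosen in proportion to its weight.
    values, weights = zip(*dist.items())
    return random.choices(values, weights=weights, k=1)[0]

print(to_parameter_block("cp_cmd_type", posterior))
print(weighted_draw(posterior))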
After the initial training, we\nanalyzed the Bayesian network and found out that most of the test\nparameters were strongly correlated either to the command and response\ncoverage variables or the pipe and core variables, but only\na single variable was strongly correlated to all coverage variables.\nTherefore, we partitioned the Bayesian network into two networks,\none for command and response and the other for core and pipe.\nThe result of the inference on the common parameter from the first\nnetwork was used as input for the second one. We trained the second\nnetwork with the same training set of 160 test-cases. During\nthe training, 1745 out of the 1968 tasks in the model were covered,\nwhile 223 remained uncovered.\nWe checked the performance of the trained network and its ability\nto increase the coverage rate for the uncovered tasks in the training\nset. The baseline for comparison was the progress achieved by\nthe best test directive file created by an expert user.\nWe tried to maximize the coverage progress rate using a large\nnumber of test directive files aimed at specific sets of uncovered\ntasks. This approach is not realistic for a human user due the effort\nneeded to create each set of directives. However, it is useful\nfor the automatic creation of directives, because the inference time\nfrom the trained network is negligible. Our method to maximize\nthe coverage progress rate was to randomly partition the uncovered\ntasks, use the trained network to create a test directive file\nfor each partition, and simulate a single test-case for each directive\nfile. This process was repeated until all the tasks were covered.\nThe CDG process was able to cover all uncovered tasks after 250\ntest-cases, while the baseline case of the user defined test directives\nfile covered only two thirds of them after over 400 test-cases (see\nFigure 6).\nCONCLUSIONS AND FUTURE WORK\nIn this paper we demonstrated how Bayesian networks can be\nused to close the loop between coverage data and directives to test\ngenerators. The experiments described in the paper show that this\nmodeling technique can be efficiently used to achieve the CDG\ngoals of easier reach for hard coverage cases, diverse reach for average\ncases, and improved coverage progress rate. It should be\nnoted that the suggested CDG method is not limited to the types\nof simulation environments handled in this paper (i.e., parameters-based\ntest generation and direct stimuli generation). It can be used\nin other types of environments, such as test generators in which the\ncontrol on the stimuli is embedded in the generator itself.\nOur future work has two distinct aspects: enhancing the learning\ncapabilities and effectively applying the suggested framework to\nthe verification process. From the learning perspective, we plan\nto explore other techniques that may increase our capabilities. For\nexample, incremental structure learning as a means for encoding\nricher domain knowledge, and the efficient construction of good\nqueries to boost targeting rare cases using selective sampling. To\neffectively deploy the CDG framework, we need to gain a better\nunderstanding of the type of knowledge that should be encoded in\nthe model, and to identify in which areas the suggested approach\nmay prove most beneficial to the verification process.\nREFERENCES\n[1] J. Bergeron. Writing Testbenches: Functional Verification of HDL\nModels. Kluwer Academic Publishers, January 2000.\n[2] M. Bose, J. Shin, E. M. Rudnick, T. Dukes, and M. Abadir. 
A\ngenetic approach to automatic bias generation for biased random\ninstruction generation. In Proceedings of the 2001 Congress on\nEvolutionary Computation CEC2001, pages 442448, May 2001.\n[3] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter.\nProbabilistic Networks and Expert Systems. Springer-Verlag, 1999.\n[4] G. Elidan, N. Lotner, N. Friedman, and D. Koller. Discovering\nhidden variables: A structure-based approach. In Proceedings of the\n13th Annual Conference on Neural Information Processing Systems,\npages 479485, 2000.\n[5] L. Fournier, Y. Arbetman, and M. Levinger. Functional verification\nmethodology for microprocessors using the Genesys test-program\ngenerator. In Proceedings of the 1999 Design, Automation and Test\nin Europe Conference (DATE), pages 434441, March 1999.\n[6] Z. Ghahramani. Learning dynamic Bayesian networks. In Adaptive\nProcessing of Sequences and Data Structures, Lecture Notes in\nArtificial Intelligence, pages 168197. Springer-Verlag, 1998.\n[7] R. Grinwald, E. Harel, M. Orgad, S. Ur, and A. Ziv. User defined\ncoverage - a tool supported methodology for design verification. In\nProceedings of the 35th Design Automation Conference, pages\n158165, June 1998.\n[8] A. Hartman, S. Ur, and A. Ziv. Short vs long size does make a\ndifference. In Proceedings of the High-Level Design Validation and\nTest Workshop, pages 2328, November 1999.\n[9] D. Heckerman. A tutorial on learning with Bayesian networks.\nTechnical report, Microsoft Research, 1996.\n[10] D. Heckerman, A. Mamdani, and M. Wellman. Real-world\napplications of Bayesian networks. Communications of the ACM,\n38(3):2430, 1995.\n[11] G. Nativ, S. Mittermaier, S. Ur, and A. Ziv. Cost evaluation of\ncoverage directed test generation for the IBM mainframe. In\nProceedings of the 2001 International Test Conference, pages\n793802, October 2001.\n[12] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Network of\nPlausible Inference. Morgan Kaufmann, 1988.\n[13] S. Tasiran, F. Fallah, D. G. Chinnery, S. J. Weber, and K. Keutzer. A\nfunctional validation technique: biased-random simulation guided\nby observability-based coverage. In Proceedings of the International\nConference on Computer Design, pages 8288, September 2001.\n[14] S. Ur and Y. Yadin. Micro-architecture coverage directed generation\nof test programs. In Proceedings of the 36th Design Automation\nConference, pages 175180, June 1999.\n[15] S. Wright. Correlation and causation. Journal of Agricultural\nResearch, 1921.\n291\n", "keywords": "Coverage directed test generation;conditional probability;Functional Verification;Bayesian Networks;bidirectional inferences;Maximal A Posteriori;Dynamic Bayesian Network;design under test;coverage model;Coverage Analysis;Most Probable Explanation;Markov Chain"} {"name": "62", "title": "Creating a Massive Master Index for HTML and Print", "abstract": "An index connects readers with information. Creating an index for a single book is a time-honored craft. Creating an index for a massive library of HTML topics is a modern craft that has largely been discarded in favor of robust search engines. The authors show how they optimized a single-sourced index for collections of HTML topics, printed books, and PDF books. 
With examples from a recent index of 24,000 entries for 7,000 distinct HTML topics also published as 40 different PDF books, the authors discuss the connections between modern technology and traditional information retrieval methods that made the index possible, usable, and efficient to create and maintain.", "fulltext": "THE PROBLEM\nA project with a large library of existing documentation of\nabout 40 books (several with multiple volumes) required a high-quality\nmaster index. The material was written using a proprietary\nSGML authoring tool and converted to both HTML for browser-based\ninformation and PDF. The library was being converted from\na set of books into a task-oriented, topic-based set of\ndocumentation, so very little indexing time was available for the\napproximately 20 authors in two sites fully engaged in rewriting\nthe documentation to conform to new guidelines for online topics,\nto incorporate new content, and to change content for the next\nrelease of their product. The four project editors responsible for\nthe complete library were likewise busy assisting the structural\nconversion of the library. Customers were asking for more direct\nways to find information. At a user conference, 52 percent of users\nsaid the former book indexes were their primary entry point into\nproduct information.\n\nGiven these imposing (but sadly typical) resource constraints,\nthe information architect for the project and an editor with\nextensive indexing experience worked together to develop an\napproach that would use technology to maximize the available\nefforts of both authors and editors. This paper describes the\napproach from the perspectives of the authors, editors, and, most\nimportantly, the users of this set of documentation. The approach\nis described in enough detail to enable other projects to adapt the\napproach to the constraints of their situation. The paper also\nindicates the future directions that the project will explore.\nThe challenges to producing a high-quality master index were\nnot just posed by available resources or by available technology,\nbut also by the writing culture of the project. The project had\nhistorically been heavily oriented towards writing and producing\nbooks--the HTML documentation set had been simply a\ncollection of books converted to HTML. As a result, navigation\nthrough the HTML documentation was difficult; there were almost\nno links between books, and each book was organized under its\nown table of contents. Full-text search of the HTML books had\nbeen the only method of finding information across the complete\nlibrary, if a user didn't know which book to consult directly.\nThe product market share was expanding, and new users had\nto learn the product quickly. However, cultural attitudes towards\nwriting reinforced the problem of books as separate silos of\ninformation: authors were responsible for, and took justifiable\npride in, producing their individual books, and while consistency\nin style and ease of access across the library was encouraged, it\nwas much less important to the writers' satisfaction and explicit\ngoals than completing their self-contained sets of information well.\nEarlier, the product had offered a PostScript master index as\na printed book and a file, in response to customer feedback about\nfinding information across the growing library. 
There was never time to improve it, so it was eventually dropped, but at the same time, the need for better retrievability of information was increasing and search did not adequately meet that need. Users demanded both an interactive information finder and an easily scanned printable index.
The previous version of the library produced a master index in both PDF and PostScript formats to assist users on platforms on which search did not work. However, the PDF index was generated from the individual indexes of each PDF book after the content of the books had been frozen for translation, so there was no attempt or opportunity to enforce consistency in indexing style across books, even when editors were able to help writers improve individual book indexes. Any effort in directly editing the master index would have been lost on the library as a whole because the index entries were part of the source books. Even so, the process of creating the master index took days and required the cooperation of dozens of authors.
The PDF master index was only a limited replacement for search, since neither the PDF nor the PostScript version could provide direct links to the indexed information. The PDF master index forced users to find and open the corresponding PDF or printed book, then find the referenced page themselves.
As part of the effort to address the problems of navigating through the HTML documentation for the next release of the product, the information architect and the editors decided to produce a high-quality master index in HTML, even though the full-text search capability would be supported across all platforms for the next release.
WHY SEARCH IS NOT ENOUGH
A frequent question early in the project was "Why bother with an online index if you have a good search engine?" The problem with search is that typical search solutions are limited to the terms or keywords that are found within the content matter, making search results a happy coincidence between the user's terminology and the content's terminology.
If synonyms or preferred terms are addressed at all in typical search solutions, they are implemented as meta keywords. This turns an explicit advantage of the index (the capability of training the user in a product's vocabulary using see or see also references) into an implicit method to improve search results. The terms used in an index reflect the indexers' knowledge of the subject matter and the users of that set of documentation.
While search solutions\ntypically provide the user with the title, a ranking, and\noccasionally keywords in context, an index's primary, secondary,\nand tertiary entries give users more data and context with which to\nselect the right piece of information to meet their needs.\n\n\nPROBLEMS WITH THE PREVIOUS PDF MASTER INDEX\nA previous version of the product shipped a PDF master\nindex that was generated simply by collecting and sorting the\nauthor-created, book-specific indexes from each of the 40 books.\nAn analysis of the resulting master index revealed that the\nindexing approach varied greatly across writers. There were many\ncases of inconsistent terminology. Singular versus plural nouns,\npreferences for verbs or nouns, and capitalization differences were\nthe easiest problems to spot. Some authors indexed every\noccurrence of a given term, other authors indexed only the most\nsignificant usage of a given term, and others indexed almost\nnothing at all. Some authors indexed every possible synonym for a\ngiven term, and others indexed only the term itself. Some authors\nrelied heavily on secondary and tertiary entries to lend structure to\ntheir index, while others relied almost exclusively on primary\nindex entries.\nMany writers clearly didn't understand the significance of\nsuch decisions on the customers who would ultimately use the\nindexes. The master index was over 350 pages, and seemingly\nsmall differences in primary entries sometimes resulted in users\nhaving to look through several pages to find a specific entry.\nA complicating cultural problem was that, for many authors,\nindexing was an activity to be performed only when the content\nhad been written and checked for technical accuracy. In many\ncases, this meant that the index for an individual book received\nvery little attention, and, in most cases, the master index received\nno attention at all.\n\nBARRIERS\nIt was clear that there were indexing problems throughout the\nlibrary. However, without painstakingly analyzing each index in\nconjunction with the master index, it was impossible to determine\nwhat significant material might not have been indexed or which\nbooks had inconsistent capitalization and use of plurals. In the\nwake of the first master index in PDF, the editors initiated an\nattempt to address some of the most glaring inconsistencies by\nsetting indexing guidelines, educating authors in the art of\nindexing, and encouraging authors to collaborate with other\nauthors in the project to define standard indexing terminology for\nnew or existing problem areas (for example, using specific labels\nfor program entities so they made sense in the context of other\nentities from across the library, such as \"SQL data type\" versus\n\"data type\").\nThe effort met with some success, but the process of\nstandardizing terminology across the library was unwieldy: index\nentries were maintained within the source files for each book, and\nthe average book had between 50 and 200 source files. So, making\na single terminology change that affected 10 percent of the source\nfiles in half of the books in the library would require opening,\nediting, and saving approximately 200 source files that are kept\nwithin a version control system. At five minutes per source file,\nthat's almost 17 hours of work! 
In addition, the source files were often accessible only to the author, so there was a potential bottleneck and a conflict between demands on the author's time to write new content or revise existing index entries.
INDEX ENTRIES AS METADATA
Recognizing these problems along with others related to the shift to a topic-based architecture, we proposed a solution that required both technical and cultural change. The new approach to indexing was a shift from viewing index entries as content owned by each author and created solely during the writing process to treating index entries as metadata to be created and edited during a separate process.
The new topic-writing process involves adding all topics in the library to a relational database and storing index entries in a table related to the table that stores the topics. The database is affectionately known as Dobalina. An index is by its very nature a database, because it consists of records that are compiled into a readable, searchable format (Wright, 2001), or into several formats for single-sourced information. The database maximized our ability to generate flexible outputs for different formats.
The initial indexing pass took advantage of the index entries in the legacy documentation: a Perl script stripped the index entries from the source files, replacing them with an SGML text entity unique to each file, and inserting the index entries into the database. Once the index entries were in the database, both authors and editors could then use a Web interface to the database to maintain the index entries. To build a PDF version of their books, the authors download a set of auxiliary SGML files from the database that define the SGML text entities as their corresponding index entries. The following example demonstrates how the index entries are dynamically generated from the database and incorporated into the SGML source.
<!-- Start of topic source file -->
&index1; <!-- Index text entity -->
<p>Some topic content.</p>
<!-- End of topic source file -->

<!-- Index entity definitions generated by the database -->
<!ENTITY index1 "<index>
<primary>SQL statements</primary>
<secondary>data definition</secondary>
<primary>data definition</primary>
<secondary>SQL statements</secondary>
</index>">
5.1 Better living through automation
Storing the index entries in a database gives the team the ability to quickly generate the HTML master index, freeing authors from the requirement to painstakingly transform each of their books to create the individual indexes that composed the PDF master index. The process of creating the master index now takes approximately 15 minutes instead of days. Dynamic access to the entire set of index entries enables the team to easily isolate certain consistency problems, such as plural nouns or incorrect capitalization; to immediately identify topics without index entries; to help ensure library-wide consistency for new index terms by providing drop-down selection of terms; and to effect social change by eliminating the need to access source files to add, edit, or delete index entries.
Spell-checking the 24,000-odd master index entries takes about two hours. The changes to the index terms are made in the database, so the corrections are automatically reflected in the book indexes.
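That round trip works because the auxiliary entity definitions are regenerated from the database on every build rather than edited by hand. A minimal sketch of such a generation step follows; the file names, table layout, and use of SQLite are assumptions for illustration, not the project's actual schema or tooling (Python):
import sqlite3

db = sqlite3.connect("dobalina.db")  # hypothetical database file
rows = db.execute(
    "SELECT entity, primary_entry, secondary_entry "
    "FROM index_entries ORDER BY entity, primary_entry, secondary_entry"
)

# Group the entries by text entity and emit one <!ENTITY ...> definition per
# topic, in the same shape as the &index1; example above.
bodies = {}
for entity, primary, secondary in rows:
    body = bodies.setdefault(entity, [])
    body.append("<primary>%s</primary>" % primary)
    if secondary:
        body.append("<secondary>%s</secondary>" % secondary)

with open("index-entities.ent", "w") as out:
    for entity, body in bodies.items():
        out.write('<!ENTITY %s "<index>\n' % entity)
        out.write("\n".join(body))
        out.write('\n</index>">\n')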
The previous approach would have required\ncommunicating each correction to the writer, and the master index\nwould still have been subject to the errors if a writer did not have\ntime to integrate the changes throughout the affected source files.\n\n5.2 Cultural and process changes\nThe role of editors changed significantly; rather than\nproviding guidance to authors, the editors collectively edit the\nmaster index by directly manipulating index entries for the entire\nlibrary through a Web interface. More extensive or complicated\nchanges to index entries are accomplished through Perl scripts that\nmanipulate the database records. Maintaining index entries in a\ndatabase enables new methods of collaboration; for example, a\nwriting team might designate their most skilled indexer to index\nthe new content for a release.\nTHE AUTHOR EXPERIENCE\nIndexing training provided detailed guidelines for the authors\nwith a hands-on experience in group indexing. The detailed\nguidelines are included in the Appendix. They consist of two\ntables--one listing general indexing guidelines and a content-oriented\ntable that lists specific consistency rules for the library.\nThey also include some guidelines to help writers improve the\nquality of their PDF indexes.\nAfter the index entries are in the database, authors can\ndynamically generate Web-based reports about the index entries\nfor their book. One report simply lists the number of index entries\nper topic; as each topic should have at least one index entry, this is\nan easy way for writers to ensure that all topics are indexed and to\nsee how extensively. This also helps writers meet the guideline\nthat each PDF book should have no more than two locators for any\nindex entry (primary-secondary-tertiary combination), to ensure\nthat entries are at the best level of detail for users.\nAnother report, shown in Figure 1, flags specific index\nentries that might contravene the indexing guidelines. For\nexample, the report checks for index entries that differ only by\ncase and primary index entries that contain a comma, as well as\nother conditions. A primary index entry that contains a comma\ntypically suggests that the entry should be split into primary and\nsecondary entries so it will nest with other secondary entries.\n\n\n\nFigure 1. Report on inconsistent index entries\n\nTo ease the authors' job of verifying or addressing\ndeficiencies noted in the reports, the reports provide links to both\nthe content of the source file (View column) and the authors'\nindexing interface (Title column). The authors' screens are\nimplemented as a Web-based form with two modes of work:\nupdating existing index entries, and adding index entries.\n\n6.1 Authors' update index screen\nThe update screen, shown in Figure 2, is a set of text fields\nthat display the current primary, secondary, and tertiary entries\n(shown as i1, i2, and i3, respectively).\n188\n\n\n\n\n\nFigure 2. Authors' updating screen\nWhen authors update an entry in a text field, the status of that\nentry automatically switches from No Change to Edit. When\nauthors complete their changes, they click Submit to commit their\nchanges for any flagged entries to the topic database. In Figure 2,\nfor example, the writer can easily see that the third entry isn't done\nthe same way as the first two, and that the third entry under names\nhas an unnecessary for. 
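Returning briefly to the report in Figure 1: the guideline checks it applies, such as primary entries that differ only by case or that contain a comma, reduce to a few passes over the entry table. A rough sketch, with a hypothetical in-memory view of that table (Python):
# Hypothetical rows from the entry table: topic plus i1/i2 entries.
entries = [
    {"topic": "t0001", "i1": "SQL statements", "i2": "data definition"},
    {"topic": "t0002", "i1": "sql statements", "i2": ""},
    {"topic": "t0003", "i1": "statements, SQL", "i2": ""},
]

# Primary entries that differ only by case.
by_folded = {}
for e in entries:
    by_folded.setdefault(e["i1"].lower(), set()).add(e["i1"])
case_conflicts = {k: forms for k, forms in by_folded.items() if len(forms) > 1}

# Primary entries containing a comma, candidates for splitting into i1 and i2.
comma_primaries = [e["i1"] for e in entries if "," in e["i1"]]

print(case_conflicts)
print(comma_primaries)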
Authors also have the option of deleting\nindex entries by selecting the Delete radio button for any entries.\nAfter they submit their changes, they see a refreshed table with any\ndeletions at the top followed by a list of changes.\n6.2 Authors' add entry screen\nThe Add Index Entries screen, shown in Figure 3, enables\nauthors to add index entries easily in a way that maintains\nconsistency with the entries already in the database. To add a new\nindex entry for a topic, the author begins by clicking the initial\ncharacter for the new index entry from a drop-down box.\n\n\nFigure 3. Selecting the initial character of a new index entry\n\nThe author then selects the primary entry (the i1 entry) from\nthe drop-down box that has been dynamically added to the form.\nThis box lists all the primary entries that start with the initial\ncharacter selected.\nWhen an author picks a primary entry, as in Figure 4, it is\nautomatically copied into the i1 entry text box. The author then\ndrills down through the available secondary and tertiary entries,\npicking the relevant entries. At any time in the process of adding a\nnew index entry, authors can manually enter or change the\nprimary, secondary, or tertiary entries in the text boxes if the\nexisting index entries in the database do not suit their needs. This\napproach provides a flexible way to limit, but not rigidly control,\nauthors' indexing vocabulary. The next builds of the individual\nbook or deliverable, the HTML master index, and the PDF master\nindex reflect any new, deleted, or updated index entries.\n\n\nFigure 4. Selecting the primary entry from the existing list\nTHE EDITOR EXPERIENCE\nWhile the author interface focuses on a book-by-book (or\ndeliverable-by-deliverable) view of the index, the role of the\neditors in the indexing effort is to improve the quality of the index\nacross the entire library.\nTheir primary focus is on the master index in HTML. During\nthe editing phase, a new master index in HTML was generated\nevery two hours to enable the editors to quickly view the results of\ntheir work. The editors' indexing interface reflects their needs and\napproach to the task of editing the master index. Editors begin by\nfinding a problem index entry in the master index using a separate\nbrowser window. Then, they use a string or substring of the\nprimary index entry, as shown in Figure 5, to bring up a listing of\nall primary index entries beginning with that string. The search is\nnot case-sensitive, so any variations in capitalization are included\nin the search results.\n\n189\n\n\n\nFigure 5. Editors' screen for finding a primary index entry\nThe primary entry search yields matching results from across\nthe entire library, so the editor can quickly address inconsistencies\nbetween two primary entries and rationalize secondary and tertiary\nentries. The list of search results appears in a screen similar to the\nauthors' Show/Update/Add Index Entries screen, as shown in\nFigure 6.\n\n\nFigure 6. Editors' indexing screen for updating entries\nWhen an editor submits a change, the editors' indexing\ninterface refreshes with the results, making it easy to confirm that\nthe expected change was made.\nThe next builds of the affected individual books or\ndeliverables, the HTML master index, and the PDF master index\nreflect any new, deleted, or changed index entries.\n\n\nUSER EXPERIENCE HTML\nSeveral alternative presentation formats were considered for\nthe HTML version of the master index. 
These included the format used by the individual book indexes in the previous version of the documentation, third-party formats, and an internally developed format.

8.1 Previous presentation format
Previous versions of the HTML indexes generated by the project's standard tools were created for individual books. These indexes were presented as nested, unordered lists of index terms. Links to the HTML pages were displayed as arbitrary four-digit integers, as shown in Figure 7. If an index entry pointed to more than one location in the HTML book, the links were displayed as arbitrary integers separated by commas.

Figure 7. Previous index presentation format

One of the disadvantages of this presentation format is that users have no criteria by which to choose among the links that might be presented for a single index entry. The arbitrary integers might even give users the impression of being some sort of relevance ranking. Another disadvantage is that, for a master index composed of approximately 40 books, the large number of primary, secondary, and tertiary index entries makes the index very difficult to scan in an online medium.

8.2 Third-party formats
Some existing systems that support indexes, such as Sun JavaHelp(TM), limit index entries to a one-to-one mapping between each unique index entry and a corresponding locator. This was not an acceptable limitation for the project's large documentation set; each unique index entry needed to be able to map to multiple locators. Other systems, such as Microsoft HTML Help, do support multiple locators for a single index entry through the use of a pop-up window but do not support the operating systems supported by the project.

Oracle Help for Java and Oracle Help for the Web (http://otn.oracle.com/tech/java/help/content.html) can display multiple topics per index entry using the topic titles, but they are currently limited to two levels of index entries. Oracle Help for the Web enables the user to type the first few characters of a primary entry to jump ahead to that section of the index, but requires the user to click on the index entry to display any links to topics.

We decided to display the index in the content pane of a help system partly to enable portability. If we later decide to deliver our information in a different help system, such as JavaHelp, the Eclipse help system, or Windows Help, we can avoid any limitations in the help system's built-in support for indexes. Trying to drop 24,000 index entries into a navigation pane simply will not deliver acceptable performance. Assuming that we continue to deliver HTML-based documentation in future versions of the product, this approach will provide users of future versions of the product with a consistent means of accessing the master index.

8.3 Internally developed format
The solution for this project was to provide an expanding, collapsing master index, as shown in Figure 8. To assist in scanning, the HTML master index presents only the primary entries at first, but enables users to drill down to secondary and tertiary entries to find exactly the information they need. An index entry that maps to multiple topics displays the locators as a further nested list of links. To provide users with criteria by which they can judge which of several locators meets their needs, each locator is shown as the title of the corresponding topic, as illustrated below.

Figure 8. 
Current index presentation format\nNielsen's 1997 article \"The Need for Speed\" suggests that\n34K is the maximum size of an optimal Web page for a ten-second\nresponse time for a dial-up connection to the Web. However,\ninternal surveys of our users indicated that most users of our\nproduct access the documentation either from a local workstation\nor from a web server within their company's intranet. We wanted\nto give users the additional context provided by full topic titles, so\nwe divided the content into multiple files. Given our typical user\nscenarios, we decided that we could allow somewhat larger files\nthan recommended by Nielsen and divided the index by initial\nletter into 27 separate files, including one file for all index entries\nbeginning with non-alphabetic characters.\nThe resulting average size of the set of index files is 100K,\nthe median size of the set of index files is 60K, and the largest\nindex file is 400K. The average index file loads and displays in\nless than one second from an intranet or local workstation, while\nthe largest index file takes just over three seconds to load and\ndisplay. These times are within Nielsen's maximum threshold for\nweb usability. While the longest delays fall outside the threshold\nof optimal usability response times of less than one second, we\nfeel that the slightly increased initial load time is balanced by the\nease of scanning an index file that contains all of the entries for an\ninitial character.\nWhen we collected the complete set of 24,000 index entries\nin a single HTML file, the file size was over two megabytes and\nsome browsers were incapable of displaying the file. The current\nindex presentation format uses only two images, each smaller than\none kilobyte, which are downloaded by the browser only once.\nApproximately five percent of the total size of the HTML index\nrepresents the JavaScript and CSS code that makes the index\naccessible by keyboard navigation. The majority of the content is\ndue to the topic titles, which average 29 characters per title. Index\nentries average 13 characters per entry.\nInitial usability sessions on the current index presentation\nformat indicate that users understand how to work with the\nexpanding and collapsing lists, prefer this format to our previous\nindex format, and find the performance of the index from an\nintranet or from a locally workstation acceptable. However, as\nthese sessions drew on the experiences of only four users, we\nrecognize the need to do further research to confirm the validity of\nthis presentation format.\n\n\nUSER EXPERIENCE PDF\nIn PDF, the individual book indexes look like regular book\nindexes; that is, they display the index entries and one or more\npage numbers for those index entries. The decision to associate\nindex entries with entire topics has the detrimental effect of forcing\nall index entries for a given topic to point to the beginning of that\ntopic. For topics that are longer than a page, users might have to\nbrowse through subsequent pages to find the information\nrepresented by the index entry in which they are interested.\nHowever, length should not be an issue for most topics, as authors\nare encouraged to keep topics short to ensure good online display.\nThe exception is some reference topics: a topic covering the syntax\nof a single command might be over 20 pages, and a user might\nlegitimately be interested in just one of the optional arguments for\nthat command. 
In future iterations of the topic database we hope to enable a more granular indexing strategy to address this shortcoming.

We decided to optimize our indexing effort for the master index (both in HTML and PDF), rather than for individual book indexes, so we had to relax some traditional indexing guidelines. One decision we made was to allow individual book indexes to contain primary entries with only a single secondary entry. Rather than concatenating such entries with a comma, we decided, for the sake of the master index, to keep these entries as primary and secondary entries. The following example demonstrates how an individual book index is normally edited to concatenate the primary and secondary entries:

SQL statements, overview   453

However, to optimize such entries for the master index, the individual book index must contain distinct primary entries with a lone secondary entry, as in the following example:

SQL statements
   overview   453

We felt that this decline in usability of the individual book indexes would be recouped by reducing the number of comma-spliced entries in the master index, as shown in the following example from the PDF version (the identifiers in front of the page numbers are short forms of the book names):

SQL statements, issuing   ADGv1: 238
SQL statements, overview   ADMINv1: 453

The master index instead presents the optimal arrangement of nested primary and secondary entries:

SQL statements
   issuing   ADGv1: 238
   overview   ADMINv1: 453

One indexing decision driven by the requirements of PDF that posed a potential compromise to the quality of the HTML master index instead improved the master index. It was necessary to artificially subdivide extremely long index entries into separate primary and secondary entries in PDF because the books display the index in a three-column format. A limitation of the PDF transform technology used by the authoring tool is that extremely long index entries that contain no spaces, such as the API keyword SQL_DESC_DATETIME_INTERVAL_PRECISION, bleed over column boundaries rather than wrap cleanly. In those cases where many similar entries also start with the same prefix, the prefix was turned into a primary entry and the remainder was indexed as a secondary entry. This indexing decision reduces the number of primary entries in the master index, making it much easier for the user to scan the primary entries in the collapsed HTML master index. So the preceding example becomes one of many index entries with the primary entry \"SQL_DESC\":

SQL_DESC_
   BIND_OFFSET_PRT
   DATETIME_INTERVAL_PRECISION
   DISPLAY_SIZE

TECHNOLOGY
The HTML master index is implemented in HTML Version 4.0 using Cascading Style Sheets level 2 (CSS2 at http://www.w3.org/Style/CSS/), Document Object Model (DOM at http://www.w3.org/DOM/), and ECMAScript 262 (http://www.ecma.ch/ecma1/stand/ecma-262.htm) to enable the expanding and collapsing behavior. Microsoft Internet Explorer 5.0 and higher and Netscape 6 and higher support the expanding and collapsing behavior. Other browsers degrade gracefully to display nested unordered lists of the index terms and associated topics.

The topic database, known as \"Dobalina,\" is implemented with a blend of open-source and proprietary products. 
The relational database is IBM DB2 Version 7.2 (http://www.software.ibm.com/data/db2/udb), running on a reasonably powerful server, but it could fairly easily be replaced by an open-source database such as MySQL or PostgreSQL. The source format for documentation is SGML (produced with a tool built on ArborText using a proprietary IBMIDDOC DTD), which enables us to dynamically generate index entries as text entities. XML or some sort of manipulated HTML would also fit easily into this model.

The project uses the Apache Web server (http://httpd.apache.org) to display the Web scripting front end implemented in PHP (PHP: Hypertext Preprocessor, see http://php.apache.org). PHP also creates the auxiliary source files used to generate the PDF books.

The HTML master index is generated by a Perl script (http://www.perl.org), which connects to the database through the Perl database interface module (http://dbi.perl.org). Most of the initial processing of the documentation source files was also performed with Perl scripts.

FUTURE DIRECTIONS
The index improvement process for such a large information set is planned over several phases, as shown in Table 1. In this project, we are now planning Phase 3.

Table 1. Phases of the indexing effort

Phase 1: Recognize the problem and build internal support
  - Reassess master index quality problems
  - Analyze information retrieval needs
  - Experiment with scripts
  - Build indexing requirements
  - Develop indexing guidelines
  - Collect user feedback

Phase 2: Create first version for writers and customers
  - Create the topic database
  - Develop indexing interfaces
  - Design the presentation of the master index and do initial user testing
  - Establish flexible controlled vocabulary process and start adding see references
  - Train authors in indexing and guidelines
  - Edit master index (focus on primary entries)
  - Conduct initial user testing of index presentation format

Phase 3: Continue refining the master index
  - Provide multiple entry points to the index in the product
  - Implement See also references
  - Review consistency of primary entries
  - Edit master index (focus on subentries)
  - Add more syntax and reference entries
  - Implement and apply further reporting features
  - Promote internal use and gather feedback on index content to refine user orientation of entries and to \"fill holes\"
  - Do user testing
  - Reuse user-centered design scenarios

Phase 4: Ensure completeness of contributing content areas
  - Continue to develop reporting features to improve overall consistency
  - Ensure completeness of concept and task entries
  - Edit PDF indexes to help ensure that content is adequately indexed
  - Work with each small writing team to improve index coverage
  - Analyze problems with PDF and master indexes in other languages

Phase 5: Maintain and continue improving index
  - Establish iterative maintenance process
  - Continue to improve presentation, technology, and content
  - Build consensus to improve quality of localized indexes

This table shows our actual process and not necessarily the best possible task flow. For example, ideally, See and See also references would be fully implemented in phase 2.

One area for future development is the creation of additional reports and scripts. For example, a regular report could identify all the new entries that were not part of the last published index for special editing attention.

We are currently augmenting our lists of see references for synonyms, competitive terms, and obsolete terms. 
We plan to take\nfull advantage of the capabilities of our relational database to\ncreate mappings between deprecated or less acceptable terms and\nacceptable terms, so that any PDF book with an index entry for a\ndeprecated term automatically includes cross-references that lead\nto the acceptable term.\nA new indexing screen for authors, with some of the function\nof the editors' indexing screen, will facilitate improvements of the\nPDF indexes.\nWe've put a lot of effort into making what's already in the\nindex more consistent, usable, and predictable. But one of the\nbiggest problems is to fill the remaining content holes, that is,\nwhat's not yet indexed. Customer, developer, and service feedback\nindicates the need to improve the granularity of indexing reference\ntopics that document syntax.\nWe will also ensure that the index provides easy access to\ninformation needed to implement business and customer scenarios\ndeveloped by the user-centered design and development teams,\nand continue to develop the usability of the index interfaces in the\nproduct. In Phase 4, we plan to work with each small writing team\non specific ways to improve the retrievability of their information.\n\nREFERENCES\n[1]\nNielsen, Jakob, \"The Need for Speed,\" 1997; available at\nhttp://www.useit.com/alertbox/9703a.html\n\n[2]\nWright, Jan C., \"Single-Source Indexing,\" Proceedings of the\n19\nth\nAnnual International Conference on Systems\nDocumentation, October 2001; available at\nhttp://www.portal.acm.org\n\n\n193\n", "keywords": "book indexes;Information retrieval methods;Massive Master;drop-down selection of terms;Indexing;SQL data type;Search;primary index entry;Internally developed format;automation;indexing problems;Human factors;HTML master index;Online information;Navigation"} {"name": "63", "title": "Data Mining in Metric Space: An Empirical Analysis of Supervised Learning Performance Criteria", "abstract": "Many criteria can be used to evaluate the performance of supervised learning. Different criteria are appropriate in different settings, and it is not always clear which criteria to use. A further complication is that learning methods that perform well on one criterion may not perform well on other criteria. For example, SVMs and boosting are designed to optimize accuracy, whereas neural nets typically optimize squared error or cross entropy. We conducted an empirical study using a variety of learning methods (SVMs, neural nets, k-nearest neighbor, bagged and boosted trees, and boosted stumps) to compare nine boolean classification performance metrics: Accuracy, Lift, F-Score, Area under the ROC Curve, Average Precision, Precision/Recall Break-Even Point, Squared Error, Cross Entropy, and Probability Calibration. Multidimensional scaling (MDS) shows that these metrics span a low dimensional manifold. The three metrics that are appropriate when predictions are interpreted as probabilities: squared error, cross entropy, and calibration, lay in one part of metric space far away from metrics that depend on the relative order of the predicted values: ROC area, average precision, break-even point, and lift. In between them fall two metrics that depend on comparing predictions to a threshold: accuracy and F-score. As expected, maximum margin methods such as SVMs and boosted trees have excellent performance on metrics like accuracy , but perform poorly on probability metrics such as squared error. 
What was not expected was that the margin methods have excellent performance on ordering metrics such as ROC area and average precision. We introduce a new metric, SAR, that combines squared error, accuracy, and ROC area into one metric. MDS and correlation analysis show that SAR is centrally located and correlates well with other metrics, suggesting that it is a good general purpose metric to use when more specific criteria are not known.", "fulltext": "INTRODUCTION
In supervised learning, finding a model that could predict the true underlying probability for each test case would be optimal. We refer to such an ideal model as the One True Model. Any reasonable performance metric should be optimized (in expectation, at least) by the one true model, and no other model should yield performance better than it.

Unfortunately, we usually do not know how to train models to predict the true underlying probabilities. The one true model is not easy to learn. Either the correct parametric model type for the domain is not known, or the training sample is too small for the model parameters to be estimated accurately, or there is noise in the data. Typically, all of these problems occur together to varying degrees.

Even if magically the one true model were given to us, we would have difficulty selecting it from other less true models. We do not have performance metrics that will reliably assign best performance to the probabilistically true model given finite validation data.

In practice, we train models to minimize loss measured via a specific performance metric. Since we don't have metrics that could reliably select the one true model, we must accept the fact that the model(s) we select will necessarily be suboptimal. There may be only one true model, but there are many suboptimal models.

There are different ways that suboptimal models can differ from the one true model: tradeoffs can be made between different kinds of deviation from the one true model. Different performance metrics reflect these different tradeoffs. For example, ordering metrics such as area under the ROC curve and average precision do not care if the predicted values are near the true probabilities, but depend only on the relative size of the values. Dividing all predictions by ten does not change the ROC curve, and metrics based on the ROC curve are insensitive to this kind of deviation from truth. Metrics such as squared error and cross entropy, however, are greatly affected by scaling the predicted values, but are less affected by small changes in predicted values that might alter the relative ordering but not significantly change the deviation from the target values. Squared error and cross entropy reflect very different tradeoffs than metrics based on the ROC curve. Similarly, metrics such as accuracy depend on how the predicted values fall relative to a threshold. If predicted values are rescaled, accuracy will be unaffected if the threshold also is rescaled. But if small changes to predicted values are made for cases near the threshold, this can have a large impact on accuracy. Accuracy reflects yet another tradeoff in how deviation from truth is measured.
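To make the scaling behavior concrete, the short Python sketch below (my illustration, not code from the paper; numpy and scikit-learn are assumed, and the toy labels and predictions are invented) divides all predictions by ten and checks that ROC area is unchanged while squared error and accuracy at a fixed 0.5 threshold both change.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy binary labels and predicted probabilities (invented for illustration).
y = np.array([0, 0, 1, 1, 0, 1, 1, 0])
p = np.array([0.2, 0.4, 0.7, 0.9, 0.3, 0.6, 0.8, 0.5])

def rms(y, p):
    # Root-mean-squared error between predictions and 0/1 targets.
    return np.sqrt(np.mean((p - y) ** 2))

def acc(y, p, threshold=0.5):
    # Accuracy with a fixed decision threshold.
    return np.mean((p >= threshold) == (y == 1))

for scale in (1.0, 0.1):          # rescale all predictions by 1 or by 1/10
    q = p * scale
    print(scale, roc_auc_score(y, q), rms(y, q), acc(y, q))
# AUC is identical for both scales (it depends only on the ordering),
# while RMS and accuracy at the fixed 0.5 threshold change.
```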
The one true model, if available, would have (in expectation) the best accuracy, the best ROC curve, and the best cross entropy, and the different tradeoffs made by these metrics would not be important. But once we accept that we will not be able to find the one true model, and must therefore accept suboptimal models, the different tradeoffs made by different performance metrics become interesting and important. Unfortunately, little is known about how different performance metrics compare to each other.

In this paper we present results from an empirical analysis of nine widely used performance metrics. We perform this empirical comparison using models trained with seven learning algorithms: SVMs, neural nets, k-nearest neighbor, bagged and boosted trees, and boosted stumps. We use multidimensional scaling (MDS) and correlation analysis to interpret the results. We also examine which learning methods perform best on the different metrics. Finally, we introduce a new metric, SAR, that combines squared error, accuracy, and ROC area into a single, robust metric.

THE PERFORMANCE METRICS
We experiment with nine performance metrics for boolean classification: Accuracy (ACC), Lift (LFT), F-Score (FSC), Area under the ROC Curve (AUC), Average Precision (APR), the Precision/Recall Break-Even Point (BEP), Root Mean Squared Error (RMS), Mean Cross Entropy (MXE), and Probability Calibration (CAL). Definitions for each of the metrics can be found in Appendix A.

Figure 1 shows level curves for six of the ten performance metrics for a model with only two parameters (W1 and W2) trained on a simple synthetic binary problem. Peak performance in the first two plots occurs along a ridge in weight space. In the other four plots peak performance is indicated by solid dots. Peak performance for some metrics nearly coincides: RMS and MXE peak at nearly the same model weights. But other metrics peak in different places: CAL has a local optimum near the optima for RMS and MXE, but its global optimum is in a different place. Also, the ridges for optimal ACC and optimal AUC do not align, and the ridges do not cross the optima for the other four metrics. Optimizing to each of these metrics yields different models, each representing different tradeoffs in the kinds of errors the models make. Which of these tradeoffs is best depends on the problem, the learning algorithm, and how the model predictions ultimately will be used.

Figure 1: Level curves for six error metrics: ACC, AUC, RMS, MXE, CAL, SAR for a simple problem. (Panels: Max ACC, Max AUC, Min RMS, Min MXE, Min CAL, Max SAR; axes: W1 and W2.)

We originally divided the nine metrics into three groups: threshold metrics, ordering/rank metrics, and probability metrics. The three threshold metrics are accuracy (ACC), F-score (FSC) and lift (LFT). F-score is the harmonic mean of precision and recall at some threshold. 
Lift measures the true positive rate in the fraction of cases that fall above threshold. (See Appendix A for a definition of lift, and [3] for a description of Lift Curves. Lift is the same as precision at some threshold, but scaled so that it can be larger than 1.) Usually ACC and FSC use a fixed threshold. In this paper we use 0.5. With lift, often the threshold is adjusted so that a fixed percent, p, of cases are predicted as positive, the rest falling below threshold. Usually p depends on the problem. For example, in marketing one might want to send fliers to 10% of customers. Here we somewhat arbitrarily set p = 25% for all problems. Note that for all threshold metrics it is not important how close a prediction is to the threshold, only whether the predicted value is above or below the threshold.

The ordering/rank metrics look at predictions differently from the threshold metrics. If cases are ordered by predicted value, the ordering/rank metrics measure how well the ordering ranks positive cases above negative cases. The rank metrics can be viewed as a summary of the performance of a model across all possible thresholds. The rank metrics we use are area under the ROC curve (AUC), average precision (APR), and precision/recall break even point (BEP). See [10] for a discussion of ROC curves from a machine learning perspective. Rank metrics depend only on the ordering of the predictions, not the actual predicted values. If the ordering is preserved it makes no difference if the predicted values range between 0 and 1 or between 0.29 and 0.31.

Although we group Lift with the threshold metrics, and BEP with the ordering metrics, BEP and Lift are similar to each other in some respects. Lift is directly proportional to BEP if Lift is calculated at p equal to the proportion of positives in the data set. This threshold also is the break-even point where precision equals recall. BEP and Lift are similar to the ordering metrics because the threshold depends implicitly on the ordering, but also are similar to the threshold metrics because neither is sensitive to the orderings on either side of the threshold once that threshold has been defined. Results presented later suggest that both Lift and BEP are more similar to the ordering metrics than to the threshold metrics.

The three probability metrics depend on the predicted values, not on how the values fall relative to a threshold or relative to each other. The probability metrics are uniquely minimized (in expectation) when the predicted value for each case coincides with the true probability of that case being positive. The probability metrics we consider are squared error (RMS), cross entropy (MXE) and calibration (CAL). CAL measures the calibration of a model: if a model predicts 0.85 for a large number of cases, about 85% of those cases should prove to be positive if the model is well calibrated. See Appendix A for details of how CAL is calculated.

We also experiment with a new performance metric, SAR, that combines squared error, accuracy, and ROC area into one measure: SAR = (ACC + AUC + (1 - RMS))/3. SAR behaves somewhat differently from ACC, AUC, and RMS alone, and is a robust metric to use when the correct metric is unknown. SAR is discussed further in Section 8.
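Read as code, the basic ingredients of SAR might look like the following sketch (Python/numpy; an illustration under the 0.5 accuracy threshold stated above, not the authors' implementation; the toy labels and predictions are invented):

```python
import numpy as np

def auc(y, p):
    # Area under the ROC curve via the pair-counting (Mann-Whitney) statistic:
    # the fraction of (positive, negative) pairs ordered correctly,
    # counting ties as one half.
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def sar(y, p, threshold=0.5):
    acc = np.mean((p >= threshold) == (y == 1))
    rms = np.sqrt(np.mean((p - y) ** 2))
    return (acc + auc(y, p) + (1.0 - rms)) / 3.0

y = np.array([0, 1, 1, 0, 1])
p = np.array([0.1, 0.8, 0.6, 0.4, 0.9])
print(sar(y, p))
```

Computing AUC through the pair-counting statistic makes explicit that it depends only on the ordering of the predictions, while the ACC and RMS terms bring in the threshold and probability views respectively.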
NORMALIZING THE SCORES
Performance metrics such as accuracy or squared error have range [0, 1], while others (lift, cross entropy) range from 0 to q where q depends on the data set. For some metrics lower values indicate better performance. For others higher values are better. Metrics such as ROC area have baseline rates that are independent of the data, while others such as accuracy have baseline rates that depend on the data. If baseline accuracy is 0.98, an accuracy of 0.981 probably is not good performance, yet on another problem, if the Bayes optimal rate is 0.60, achieving an accuracy of 0.59 might be excellent performance.

In order to compare performance metrics in a meaningful way, all the metrics need to be placed on a similar scale. One way to do this is to scale the performances for each problem and metric from 0 to 1, where 0 is poor performance, and 1 is good performance. For example, we might place baseline performance at 0, and the Bayes optimal performance at 1. Unfortunately, we cannot estimate the Bayes optimal rate on real problems. Instead, we can use the performance of the best observed model as a proxy for the Bayes optimal performance. We calculate the baseline rate as follows: predict p for every case, where p is the percent of positives in the test set. We normalize performances to the range [0, 1], where 0 is baseline and 1 represents best performance. If a model performs worse than baseline, its normalized score will be negative. See Table 1 for an example of normalized scores. The disadvantage of normalized scores is that recovering the raw performances requires knowing the performances that define the top and bottom of the scale, and as new best models are found the top of the scale changes.

Table 1: Accuracy on ADULT problem

model      acc      norm score
bst-stmp   0.8556   1.0000
bag-dt     0.8534   0.9795
dt         0.8503   0.9494
svm        0.8480   0.9267
bst-dt     0.8464   0.9113
ann        0.8449   0.8974
knn        0.8320   0.7731
baseline   0.7518   0.0000

CAL, the metric we use to measure probability calibration, is unusual in that the baseline model that predicts p for all cases, where p is the percent of positives in the test set, has excellent calibration. (Because of this, measures like CAL typically are not used alone, but are used in conjunction with other measures such as AUC to insure that only models with good discrimination and good calibration are selected. See Figure 1 for a picture of how unusual CAL's error surface is compared with other metrics.) This creates a problem when normalizing CAL scores because the baseline model and the Bayes optimal model have similar CAL scores. This does not mean CAL is a poor metric: it is effective at distinguishing poorly calibrated models from well calibrated models. We address this problem later in the paper.
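A minimal sketch of this normalization (my own illustration; the higher_is_better flag and the example numbers taken from Table 1 are the only assumed inputs):

```python
import numpy as np

def normalized_score(raw, baseline, best, higher_is_better=True):
    # 0 = baseline performance, 1 = best observed performance on this
    # problem and metric; worse-than-baseline models come out negative.
    if not higher_is_better:                 # e.g. RMS, MXE: lower is better
        raw, baseline, best = -raw, -baseline, -best
    return (raw - baseline) / (best - baseline)

# Example with the ADULT accuracies from Table 1
# (baseline 0.7518, best observed 0.8556):
print(normalized_score(0.8534, baseline=0.7518, best=0.8556))   # roughly 0.98
```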
EXPERIMENTAL DESIGN
The goal of this work is to analyze how the ten metrics compare to each other. To do this we train many different kinds of models on seven test problems, and calculate for each test problem the performance of every model on the ten metrics.

We train models using seven learning algorithms: Neural Nets (ANN), SVMs, Bagged Decision Trees (BAG-DT), Boosted Decision Trees (BST-DT), Boosted Decision Stumps (BST-STMP), single Decision Trees (DT) and Memory Based Learning (KNN). For each algorithm we train many variants and many parameter settings. For example, we train ten styles of decision trees, neural nets of different sizes, SVMs using many different kernels, etc. A total of 2000 models are trained and tested on each problem. See Appendix B for a description of the parameter settings we use for each learning method. While this strategy won't create every possible model, and won't create a uniform sample of the space of possible models, we feel that this is an adequate sample of the models that often will be trained in practice.

For each problem, the 2000 models are trained on the same train set of 4000 points. The performance of each model is measured on the same large test set for each of the ten performance metrics. In order to put the performances on the same scale across different metrics and different problems, we transform the raw performance to normalized scores as explained in Section 3. In total, across the seven problems, we have 2000 x 7 = 14,000 models, and for each model we have its score on each of the 10 performance metrics.

DATA SETS
We compare the algorithms on seven binary classification problems. ADULT, COVER TYPE and LETTER are from the UCI Repository [1]. ADULT is the only problem that has nominal attributes. For ANNs, SVMs and KNNs we transform nominal attributes to boolean. Each DT, BAG-DT, BST-DT and BST-STMP model is trained twice, once with the transformed attributes and once with the original attributes. COVER TYPE has been converted to a binary problem by treating the largest class as the positive and the rest as negative. We converted LETTER to boolean in two ways. LETTER.p1 treats the letter \"O\" as positive and the remaining 25 letters as negative, yielding a very unbalanced binary problem. LETTER.p2 uses letters A-M as positives and the rest as negatives, yielding a well balanced problem. HYPER SPECT is the IndianPine92 data set [4] where the difficult class Soybean-mintill is the positive class. SLAC is a problem from collaborators at the Stanford Linear Accelerator and MEDIS is a medical data set. The characteristics of these data sets are summarized in Table 2.

Table 2: Description of problems

problem       #attr    train size   test size   % pos.
adult         14/104   4000         35222       25%
cover type    54       4000         25000       36%
letter.p1     16       4000         14000       3%
letter.p2     16       4000         14000       53%
medis         63       4000         8199        11%
slac          59       4000         25000       50%
hyper spect   200      4000         4366        24%

MDS IN METRIC SPACE
Training 2000 models on each problem using seven learning algorithms gives us 14,000 models, each of which is evaluated on ten performance metrics. This gives us 14,000 sample points to compare for each performance metric. We build a 10 x 14,000 table where lines represent the performance metrics, columns represent the models, and each entry in the table is the score of the model on that metric. For MDS, we treat each row in the table as the coordinate of a point in a 14,000 dimension space. The distance between two metrics is calculated as the Euclidean distance between the two corresponding points in this space. Because the coordinates are strongly correlated, there is no curse-of-dimensionality problem with Euclidean distance in this 14,000 dimensional space.

We are more interested in how the metrics compare to each other when models have good performance than when models have poor performance. Because of this, we delete columns representing poorer performing models in order to focus on the \"interesting\" part of the space where models that have good performance lie. For the analyses reported in this paper we delete models that perform below baseline on any metric (except CAL).
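A sketch of the distance and MDS computation used in this section (illustration only; it relies on numpy and scikit-learn's SMACOF-based metric MDS, and the score matrix is filled with random placeholder values rather than the real model scores):

```python
import numpy as np
from sklearn.manifold import MDS

# scores: one row per metric, one column per surviving model,
# e.g. shape (10, n_models) after dropping below-baseline models.
rng = np.random.default_rng(0)
scores = rng.random((10, 500))                  # placeholder data for illustration

# Pairwise Euclidean distances between the metric rows.
diff = scores[:, None, :] - scores[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))        # shape (10, 10)

# Metric MDS on the precomputed distance matrix.
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(dist)                # one 2-D point per metric
print(coords.shape, mds.stress_)
```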
Ten metrics permit 10 x 9/2 = 45 pairwise comparisons. We calculate the Euclidean distance between each pair of metrics in the sample space, and then perform multidimensional scaling on these pairwise distances between metrics.

MDS is sensitive to how the performance metrics are scaled. The normalized scores described in Section 3 yield well-scaled performances suitable for MDS analysis for most metrics. Unfortunately, as discussed in Section 3, normalized scores do not work well with CAL. Because of this, we perform MDS two ways. In the first, we use normalized scores, but exclude the CAL metric. In the second, we include CAL, but scale performances to mean 0.0 and standard deviation 1.0 instead of using normalized scores. Scaling by standard deviation resolves the problem with CAL for MDS, but is somewhat less intuitive because scores scaled by standard deviation depend on the full distribution of models instead of just the performances that fall at the top and bottom of each scale.

Figure 2 shows the MDS stress as a function of the number of dimensions in the MDS (when CAL is included). The ten metrics appear to span an MDS space of about 3 to 5 dimensions. In this section we examine the 2-D MDS plots in some detail.

Figure 2: MDS stress vs. number of dimensions. (Axes: MDS stress, roughly 0 to 0.4, against 1 to 8 MDS dimensions.)

Figure 3 shows two MDS plots for the metrics that result when dimensionality is reduced to two dimensions. The plot on the left is MDS using normalized scores when CAL is excluded. The plot on the right is MDS using standard deviation scaled scores when CAL is included.

Both MDS plots show a similar pattern. The metrics appear to form 4-5 somewhat distinct groups. In the upper right hand corner is a group that includes AUC, APR, BEP, LFT, and SAR. The other groups are RMS and MXE, ACC (by itself, or possibly with FSC), FSC (by itself, or possibly with ACC), and CAL (by itself). It is not surprising that squared error and cross entropy form a cluster. Also, presumably because squared error tends to be better behaved than cross entropy, RMS is closer to the other measures than MXE. We are somewhat surprised that RMS is so centrally located in the MDS plots. Perhaps this partially explains why squared error has proved so useful in many applications.

It is somewhat surprising that accuracy does not appear to correlate strongly with any of the other metrics, except possibly with FSC. ACC does not fall very close to other metrics that use thresholds such as Lift and F-Score, even though F-Score uses the same 0.5 threshold as accuracy in our experiments. (The threshold for Lift is adjusted dynamically so that 25% of the cases are predicted as positive.) Accuracy is surprisingly close to RMS, and closer to RMS than to MXE, again suggesting that part of the reason why RMS has been so useful is because of its close relationship to a metric such as ACC that has been so widely used.
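The stress-versus-dimensionality curve summarized in Figure 2 above corresponds to a loop like the following sketch (again an illustration with placeholder data; scipy is used for the pairwise distances):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
scores = rng.random((10, 500))              # placeholder metric-by-model scores
dist = squareform(pdist(scores))            # 10 x 10 Euclidean distance matrix

for k in range(1, 9):
    m = MDS(n_components=k, dissimilarity='precomputed', random_state=0)
    m.fit(dist)
    print(k, m.stress_)   # stress drops quickly up to about 3-5 dimensions
```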
The most surprising pattern in the MDS plot that includes CAL is that CAL is distant from most other metrics. There appears to be an axis running from CAL at one end to the ordering metrics such as AUC and APR at the other end that forms the largest dimension in the space. This is surprising because one way to achieve excellent ordering is to accurately predict true probabilities, which is measured by the calibration metric. However, one can achieve excellent AUC and APR using predicted values that have extremely poor calibration, yet accurately predict the relative ordering of the cases. The MDS plot suggests that many models which achieve excellent ordering do so without achieving good probabilistic calibration. Closer examination shows that some models such as boosted decision trees yield remarkably good ordering, yet have extremely poor calibration. We believe maximum margin methods such as boosting trade off reduced calibration for better margin. See Section 9 for further discussion of this issue. One also can achieve good calibration, yet have poor AUC and APR. For example, decision trees with few leaves may be well calibrated, but the coarse set of values they predict does not provide a basis for good ordering.

Figure 4 shows 2-D MDS plots for six of the seven test problems. The seventh plot is similar and is omitted to save space. (The omitted plot is one of the two LETTER problems.) Although there are variations between the plots, the 2-D MDS plots for the seven problems are remarkably consistent given that these are different test problems. The consistency between the seven MDS plots suggests that we have an adequate sample size of models to reliably detect relationships between the metrics. Metrics such as ACC, FSC, and LFT seem to move around with respect to each other in these plots. This may be because they have different sensitivities to the ratio of positives to negatives in the data sets. For example, BEP is proportional to LFT (and thus behaves similarly) when the percentage of positives in the dataset equals the fraction predicted above threshold (25% in this paper). Other than this, we have not been able to correlate differences we see in the individual plots with characteristics of the problems that might explain those differences, and currently believe that the MDS plots that combine all seven problems in Figure 3 represent an accurate summary of the relationships between metrics. Note that this does not mean that the performance of the different learning algorithms exhibits the same pattern on these test problems (in fact they are very different), only that the relationships between the ten metrics appear to be similar across the test problems when all the learning algorithms are considered at one time.

Figure 3: 2D MDS plot using normalized scores (left) and standard deviation scaling (right). (Metrics plotted: acc, fsc, lft, auc, apr, bep, rms, mxe, sar on the left, plus cal on the right; axes: Dim 1 and Dim 2.)

CORRELATION ANALYSIS
As with the MDS analysis in the previous section, we used each of the ten performance metrics to measure the performance of the 2000 models trained with the different learning methods on each of the seven test problems. 
In this section we use correlation analysis on these models to compare metrics instead of MDS.

Again, to make the correlation analysis easier to interpret, we first scale performances to the range [0, 1] so that the best performance we observed with that metric on each problem with any of the learning methods is performance 1, and baseline performance with that metric and data set is performance 0. This eliminates the inverse correlation between measures such as accuracy and squared error, and normalizes the scale of each metric.

Ten metrics permit 10 x 9/2 = 45 pairwise correlations. We do these comparisons using both linear correlation (excluding CAL) and rank correlation. The results from the linear and rank correlation analyses are qualitatively similar. We present the results for non-parametric rank correlation because rank correlation makes fewer assumptions about the relationships between the metrics, and because rank correlation is insensitive to how CAL is scaled.

Table 3 shows the rank correlation between all pairs of metrics. Each entry in the table is the average rank correlation across the seven test problems. The table is symmetric and contains only 45 unique pairwise comparisons. We present the full matrix because this makes it easier to scan some comparisons. The final column is the mean of the rank correlations for each metric. This gives a rough idea how correlated each metric is on average to all other metrics.

Metrics with pairwise rank correlations near one behave more similarly than those with smaller rank correlations. Ignoring the SAR metric, which is discussed in the next section, seven metric pairs have rank correlations above 0.90:

0.96: Lift to ROC Area
0.95: ROC Area to Average Precision
0.93: Accuracy to Break-even Point
0.92: RMS to Cross-Entropy
0.92: Break-Even Point to ROC Area
0.92: Break-Even Point to Average Precision
0.91: Average Precision to Lift

We expected AUC and average precision to behave very similarly and thus have high rank correlation. But we are surprised to see that Lift has such high correlation to AUC. Note that because Lift has high correlation to AUC, and AUC has high correlation to average precision, it is not surprising that Lift also has high correlation to average precision. As expected, break-even point is highly correlated with the other two ordering metrics, AUC and average precision. But the high correlation between accuracy and break-even point is somewhat surprising and we currently do not know how to explain this.

Figure 4: 2-D MDS plots for six of the seven test problems. The seventh problem yields a similar plot and is omitted only to save space. The missing plot is for one of the LETTER problems. (Panels: COVER_TYPE, ADULT, LETTER.P1, HYPER_SPECT, MEDIS, SLAC, each plotting the same set of metrics.)
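The per-problem rank-correlation matrices behind Table 3 below could be computed as in this sketch (illustration only; scipy's Spearman correlation is assumed, and the score matrix is a random placeholder; Table 3 then averages these matrices across the seven problems):

```python
import numpy as np
from scipy.stats import spearmanr

# scores: one row per metric, one column per model, for a single problem,
# scaled to [0, 1] as described above.  Placeholder data for illustration.
rng = np.random.default_rng(0)
scores = rng.random((10, 500))

# Spearman rank correlation between every pair of metric rows.
rho, _ = spearmanr(scores, axis=1)      # rho has shape (10, 10)
mean_rho = rho.mean(axis=1)             # rough analogue of the final mean column
print(np.round(rho, 2))
```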
Table 3: Average rank correlations between metrics

        acc    fsc    lft    auc    apr    bep    rms    mxe    cal    sar    mean
acc     1.00   0.87   0.85   0.88   0.89   0.93   0.87   0.75   0.56   0.92   0.852
fsc     0.87   1.00   0.77   0.81   0.82   0.87   0.79   0.69   0.50   0.84   0.796
lft     0.85   0.77   1.00   0.96   0.91   0.89   0.82   0.73   0.47   0.92   0.832
auc     0.88   0.81   0.96   1.00   0.95   0.92   0.85   0.77   0.51   0.96   0.861
apr     0.89   0.82   0.91   0.95   1.00   0.92   0.86   0.75   0.50   0.93   0.853
bep     0.93   0.87   0.89   0.92   0.92   1.00   0.87   0.75   0.52   0.93   0.860
rms     0.87   0.79   0.82   0.85   0.86   0.87   1.00   0.92   0.79   0.95   0.872
mxe     0.75   0.69   0.73   0.77   0.75   0.75   0.92   1.00   0.81   0.86   0.803
cal     0.56   0.50   0.47   0.51   0.50   0.52   0.79   0.81   1.00   0.65   0.631
sar     0.92   0.84   0.92   0.96   0.93   0.93   0.95   0.86   0.65   1.00   0.896

The weakest correlations are all between the calibration metric (CAL) and the other metrics. On average, CAL correlates with the other metrics only about 0.63. We are surprised how low the correlation is between probability calibration and other metrics, and are currently looking at other measures of calibration to see if this is true for all of them.

Figure 5 shows an MDS plot for the metrics when the distance between metrics is calculated as 1 - rank correlation, making MDS insensitive to how the metrics are scaled. (Distances based on 1 - rank correlation do not respect the triangle inequality, so this is not a proper metric space.) The overall pattern is similar to that observed in the MDS plots in Figure 3. CAL is at one end of the space far from the other metrics. Cross-entropy is closest to RMS, though not as close as in the other plots. Cross-entropy and RMS have high rank correlation, but because cross-entropy has lower rank correlation to most other metrics than RMS, it is pushed far from RMS, which is close to other metrics in the MDS plot. APR and AUC are at the other end of the space farthest from CAL. FSC is in the upper left side of the space. ACC and RMS are near the center of the space.

Figure 5: MDS using rank correlation. (Metrics plotted: acc, fsc, lft, auc, apr, bep, rms, mxe, cal, sar; axes: Dim 1 and Dim 2.)

SAR - A GENERAL PURPOSE METRIC
When applying supervised learning to data, a decision must be made about what metric to train to and what metric to use for model selection. Often the learning algorithm dictates what metrics can be used for training, e.g. it is difficult to train a neural net for metrics other than RMS or MXE. But there usually is much more freedom when selecting the metric to use for model selection, i.e. the metric used to pick the best learning algorithm and the best parameters for that algorithm.

If the correct metric for the problem is known, model selection probably should be done using that metric even if the learning algorithm cannot be trained to it. What should be done when the correct metric is not known? The MDS plots and correlation analysis suggest that RMS is remarkably well correlated with the other measures, and thus might serve as a good general purpose metric to use when a more specific optimization criterion is not known.
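Figure 5 above replaces Euclidean distance with 1 - rank correlation as the dissimilarity; a sketch of that variant (illustration with placeholder data; as the text notes, this dissimilarity is not a proper metric, but MDS can still be run on it):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
scores = rng.random((10, 500))            # placeholder metric-by-model scores

rho, _ = spearmanr(scores, axis=1)        # 10 x 10 rank-correlation matrix
dissim = 1.0 - rho                        # zero diagonal, symmetric
coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(dissim)
```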
We wondered if we could devise a new metric more centrally located than RMS and with better correlation to the other metrics. Rather than devise a completely new metric, we tried averaging several of the well behaved metrics into a new metric that might be more robust than each one individually. SAR combines Squared error, Accuracy, and ROC area into one measure: SAR = (ACC + AUC + (1 - RMS))/3. We chose these metrics for SAR for three reasons:

1. we wanted to select one metric from each metric group: the threshold metrics, the ordering metrics, and the probability metrics
2. ACC, AUC, and RMS seemed to be the most popular metric from each of these groups, respectively
3. these three metrics are well correlated to the other metrics in their groups, and in the MDS plots lie closest to the other metrics in their groups

As can be seen from the MDS plots and in the tables, SAR behaves differently from ACC, AUC, and RMS alone. In Table 3 SAR has higher mean rank correlation to other metrics than any other metric. In the MDS plots, SAR tends to be more consistently centrally located than other metrics. And in Table 4 it is the metric that best reflects the ordering by mean performance of the seven learning methods.

These results suggest that of the ten metrics we examined, SAR is the metric that on average is most correlated with the other metrics, both separately and in groups. SAR is even more representative than RMS (though RMS also is very good). In an experiment where SAR was used for model selection, SAR outperformed eight of the nine metrics in selecting the models with the best overall performance, and tied with RMS. We believe our results suggest that SAR is a robust combination of three popular metrics that may be appropriate when the correct metric to use is not known, though the benefit of SAR over RMS is modest at best. Attempts to make SAR better by optimizing the weights given to ACC, AUC, and RMS in the SAR average did not significantly improve SAR compared to equal weights for the three metrics. We are very impressed at how well behaved RMS alone is and are currently working to devise a better SAR-like metric that yields more improvement over RMS alone.

PERFORMANCES BY METRIC
Table 4 shows the normalized performance of each learning algorithm on the nine metrics. (CAL is scaled so that the minimum observed CAL score is 0.0 and the maximum observed CAL score is 1.0.) For each test problem we find the best parameter settings for each learning algorithm and compute its normalized score. Each entry in the table averages these scores across the seven problems. The last two columns are the mean normalized scores over the nine metrics, and the SAR performance. Higher scores indicate better performance.

Table 4: Normalized scores for each learning algorithm by metric (average over seven problems)

model      acc      fsc      lft      auc      apr      bep      rms      mxe      cal      mean     sar
ann        0.9399   0.9486   0.9623   0.9722   0.9538   0.9632   0.9043   0.9009   0.9963   0.9491   0.9516
svm        0.9010   0.9515   0.9642   0.9688   0.9523   0.9635   0.9024   0.9041   0.9881   0.9440   0.9524
bag-dt     0.8796   0.8986   0.9450   0.9765   0.9577   0.9464   0.8763   0.9087   0.9800   0.9299   0.9470
bst-dt     0.9506   0.9443   0.9843   0.9866   0.9779   0.9858   0.6400   0.6427   0.9399   0.8947   0.9171
knn        0.8127   0.9042   0.9248   0.9481   0.9052   0.9252   0.7954   0.7754   0.9871   0.8865   0.9012
dt         0.6737   0.8621   0.8393   0.8897   0.8169   0.8403   0.6292   0.6748   0.9731   0.7999   0.8160
bst-stmp   0.7929   0.8265   0.8721   0.9291   0.8799   0.8724   0.3181   0.3013   0.9477   0.7489   0.6966
The models in the table are ordered by mean overall performance. We have written a separate paper to compare the performance of the learning methods to each other on these metrics, but there are a few interesting relationships between learning algorithms and metrics that are worth discussing in the context of this paper.

Overall, the best performing models are neural nets, SVMs, and bagged trees. Surprisingly, neural nets outperform all other model types if one averages over the nine metrics. ANNs appear to be excellent general purpose learning methods. This is not to say that ANNs are the best learning algorithm: they only win on RMS and CAL, but because they rarely perform poorly on any problem or metric, they have excellent overall performance.

The SVMs perform almost as well as ANNs. Note that SVM predictions on [-∞, +∞] are not suitable for measures like cross entropy, calibration, and squared error. SVMs do well on these metrics because we use Platt's method [8] to transform SVM predictions to calibrated probabilities. Like neural nets, SVMs appear to be a safe, general purpose, high performance learning method once their predictions have been calibrated by a method such as Platt scaling.

Although single decision trees perform poorly, bagged trees perform nearly as well as neural nets and SVMs. Bagging improves decision tree performance on all metrics, and yields particularly large improvements on the probability metrics. Like neural nets and SVMs, bagged trees appear to be a safe, general purpose, high performance learning method.

Boosted trees outperform all other learning methods on ACC, LFT, ROC, APR, and BEP. Boosting wins 2 of 3 threshold metrics and 3 of 3 rank metrics, but performs poorly on the probability metrics: squared error, cross entropy, and calibration. Maximum margin methods such as boosted trees yield poorly calibrated probabilities. (SVMs perform well on these because Platt scaling \"undoes\" the maximum margin.) Overall, boosting wins 5 of the 6 metrics for which it is well suited, and would easily be the top performing learning method if we consider only the 6 threshold and ordering metrics.

The KNN methods were not competitive with the better algorithms, but might have done better with larger train sets. Single decision trees also did not perform as well as most other methods, probably because recursive partitioning runs out of data quickly with 4k train sets, and because small trees are not good at predicting probabilities [9]. We tested many different kinds of decision trees, including smoothed unpruned trees, and then picked the best, so the poor performance of trees here is not due to any one tree type being inferior, but because all of the many tree types we tested did not perform as well as other methods.

Interestingly, boosting stump models does not perform as well as boosting full decision trees. Boosted stumps do outperform single trees on 5 of the 6 threshold and rank metrics. Their last-place ranking below decision trees is due to their extremely poor performance on the three probability measures.

RELATED WORK
There is not a large literature comparing performance metrics. The closest work to ours is by Flach [7]. In this work Flach uses the ROC space to understand and compare different metrics. 
He analyzes accuracy, precision, weighted relative accuracy and several decision tree splitting criteria.

The STATLOG project [6] performed a large scale empirical evaluation of a number of learning algorithms in 1995. STATLOG compared the performance of the different algorithms, and also did an analysis of how the predictions made by the algorithms compared to each other. STATLOG, however, did not compare performance using different metrics.

DISCUSSION AND CONCLUSIONS
Our analysis allows us to draw a variety of conclusions which we summarize here. If the goal is to maximize accuracy, but the model needs a continuous performance metric (e.g. using backpropagation to train a neural net), it probably is better to train the model using squared error instead of cross entropy because squared error sits closer to accuracy in metric space. This result is surprising since cross entropy is the theoretically preferred loss function for binary classification. We suspect cross entropy is not as robust as squared error on real data sets because real data sometimes contains class noise that cross entropy is very sensitive to.

Squared error is a remarkably robust performance metric that has higher average correlation to the other metrics than any other metric except SAR. Squared error appears to be an excellent general purpose metric.

Many models achieve excellent performance on the ordering metrics AUC, APR, and BEP without making predictions that yield good probabilities. For example, the k-nearest neighbor models with the best ROC performance use values of K that are so large that most of the predictions are close to p, the fraction of positives in the data. This yields predictions that are poor when viewed as probabilities, yet small differences between these predicted values are sufficient to provide for good ordering.

As expected, maximum margin methods such as boosting and SVMs yield excellent performance on metrics such as accuracy for which they are designed. Surprisingly, however, the maximum margin methods also yield excellent performance on the ordering metrics. We had not expected that maximizing distances to decision boundaries would provide a good basis for ordering cases that fall far from those boundaries.

Although boosted trees perform well on accuracy and ROC, they perform poorly on probability metrics such as squared error and cross entropy. This poor performance on probability metrics is a consequence of boosting being a maximum margin method. SVMs do not exhibit this problem because we scale SVM predictions with Platt's method; linearly scaling SVM predictions to [0, 1] does not work well.

Neural nets trained with backpropagation have excellent overall performance because, unlike boosting, they perform well on all metrics including the probability metrics RMS, MXE, and CAL. We believe part of the reason why the neural nets perform so well is that they were trained with backpropagation on squared error, and as we have seen squared error is an excellent metric.

The three ordering metrics, AUC, APR, and BEP, cluster close in metric space and exhibit strong pairwise correlations. These metrics clearly are similar to each other and somewhat interchangeable. We originally grouped LFT with the threshold metrics ACC and FSC, but the results suggest that LFT behaves more like BEP, an ordering metric. 
The metric space for the ten metrics has three or more significant dimensions. The ten metrics do not all measure the same thing. Different performance metrics yield different tradeoffs that are appropriate in different settings. No one metric does it all, and the metric that is optimized or used for model selection does matter. The SAR metric that combines accuracy, ROC area, and squared error appears to be a good, general purpose metric, but RMS is so good that SAR may not provide much benefit over using RMS alone. We hope that additional research in this area will enable us to design better metrics, and will shed more light on which metrics are most appropriate to use in different settings.

ACKNOWLEDGMENTS
Thanks to Geoff Crew and Alex Ksikes for help running some of the experiments. Thanks to the creators of XGVIS and XGOBI for the interactive MDS software used to generate the MDS plots. Thanks to collaborators at Stanford Linear Accelerator for the SLAC data, and to Tony Gualtieri at NASA Goddard for help with the Indian Pines data.

REFERENCES
[1] C. Blake and C. Merz. UCI repository of machine learning databases, 1998.
[2] M. DeGroot and S. Fienberg. The comparison and evaluation of forecasters. Statistician, 32(1):12-22, 1982.
[3] P. Giudici. Applied Data Mining. John Wiley and Sons, New York, 2003.
[4] A. Gualtieri, S. R. Chettri, R. Cromp, and L. Johnson. Support vector machine classifiers as applied to AVIRIS data. In Proc. Eighth JPL Airborne Geoscience Workshop, 1999.
[5] T. Joachims. Making large-scale SVM learning practical. In Advances in Kernel Methods, 1999.
[6] R. King, C. Feng, and A. Sutherland. Statlog: comparison of classification algorithms on large real-world problems. Applied Artificial Intelligence, 9(3):259-287, May/June 1995.
[7] P. A. Flach. The geometry of ROC space: understanding machine learning metrics through ROC isometrics. In Proc. 20th International Conference on Machine Learning (ICML'03), pages 194-201. AAAI Press, January 2003.
[8] J. Platt. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In A. Smola, P. Bartlett, B. Schoelkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61-74, 1999.
[9] F. Provost and P. Domingos. Tree induction for probability-based rankings. Machine Learning, 52(3), 2003.
[10] F. J. Provost and T. Fawcett. Analysis and visualization of classifier performance: Comparison under imprecise class and cost distributions. In Knowledge Discovery and Data Mining, pages 43-48, 1997.

APPENDIX
A. PERFORMANCE METRICS
accuracy: probably the most widely used performance metric in Machine Learning. It is defined as the proportion of correct predictions the classifier makes relative to the size of the dataset.
If a classifier has continuous outputs (e.g. neural nets), a threshold is set and everything above this threshold is predicted to be a positive.

root-mean-squared-error (RMSE): widely used in regression, it measures how much predictions deviate from the true targets. (Root-mean-squared error is applicable to binary classification settings where the classifier outputs predictions on [0, 1] that are compared with the true target labels on {0, 1}.) RMSE is defined as:

RMSE = sqrt( (1/N) * Σ (Pred(C) - True(C))^2 )    (1)

mean cross entropy (MXE): is used in the probabilistic setting when we are interested in predicting the probability that an example is positive (1). It can be proven that in this setting minimizing the cross entropy gives the maximum likelihood hypothesis. Mean cross entropy is defined as:

MXE = -(1/N) * Σ [ True(C) * ln(Pred(C)) + (1 - True(C)) * ln(1 - Pred(C)) ]    (2)

(The assumptions are that Pred(C) is in [0, 1] and True(C) is in {0, 1}.)

receiver operating characteristic (ROC): has its roots in WWII in the early days of radar, where it was difficult to distinguish between true positives and false positives. ROC is a plot of sensitivity vs. (1 - specificity) for all possible thresholds. Sensitivity is defined as P(Pred = positive | True = positive) and is approximated by the fraction of true positives that are predicted as positive (this is the same as recall). Specificity is P(Pred = negative | True = negative). It is approximated by the fraction of true negatives predicted as negatives. AUC, the area under the ROC curve, is used as a summary statistic. ROC has a number of nice properties that make it more principled than similar measures such as average precision. AUC is widely used in fields such as medicine, and recently has become more popular in the Machine Learning community.

lift: often used in marketing analysis, lift measures how much better a classifier is at predicting positives than a baseline classifier that randomly predicts positives (at the same rate observed for positives in the data). The definition is:

LIFT = (% of true positives above the threshold) / (% of dataset above the threshold)    (3)

Usually the threshold is set so that a fixed percentage of the dataset is classified as positive. For example, suppose a marketing agent wants to send advertising to potential clients, but can only afford to send ads to 10% of the population. A classifier is trained to predict how likely a client is to respond to the advertisement, and the ads are sent to the 10% of the population predicted most likely to respond. A classifier with optimal lift will get as many clients as possible that will respond to the advertisement in this set.

precision and recall: these measures are widely used in Information Retrieval. Precision is the fraction of examples predicted as positive that are actually positive. Recall is the fraction of the true positives that are predicted as positives. These measures are trivially maximized by not predicting anything, or predicting everything, respectively, as positive. Because of this these measures often are used together.
There are different ways to combine these measures, as described by the next 4 metrics.

precision-recall F-score: for a given threshold, the F-score is the harmonic mean of the precision and recall at that threshold.

precision at a recall level: as the name suggests, set the threshold such that you have a given recall, and the precision for this threshold is computed.

precision-recall break-even point: is defined as the precision at the point (threshold value) where precision and recall are equal.

average precision: usually is computed as the average of the precisions at eleven evenly spaced recall levels.

CAL is based on reliability diagrams [2]. It is calculated as follows: order all cases by their predicted value, and put cases 1-100 in the same bin. Calculate the percentage of these cases that are true positives. This approximates the true probability that these cases are positive. Then calculate the mean prediction for these cases. The absolute value of the difference between the observed frequency and the mean prediction is the calibration error for this bin. Now take cases 2-101, 3-102, ..., and compute the errors in the same way for each of these bins. CAL is the mean of these binned calibration errors.

B. PARAMETER SETTINGS
We use the following parameter settings and algorithm variations for the seven learning methods:

KNN: we use 26 values of K ranging from K = 1 to K = |trainset|. We use KNN with Euclidean distance and Euclidean distance weighted by gain ratio. We also use distance weighted KNN, and locally weighted averaging. The kernel widths for locally weighted averaging vary from 2^0 to 2^10 times the minimum distance between any two points in the train set.

ANN: we train nets with gradient descent backprop and vary the number of hidden units {1, 2, 4, 8, 32, 128} and the momentum {0, 0.2, 0.5, 0.9}. We don't use validation sets to do weight decay or early stopping. Instead, for each performance metric, we examine the nets at many different epochs.

DT: we vary the splitting criterion, pruning options, and smoothing (Laplacian or Bayesian smoothing). We use all of the tree models in Buntine's IND package: Bayes, ID3, CART, CART0, C4, MML, and SMML. We also generate trees of type C44 (C4 with no pruning), C44BS (C44 with Bayesian smoothing), and MMLLS (MML with Laplacian smoothing). See [9] for a description of C44.

BAG-DT: we bag at least 25 trees of each type. With BST-DT we boost each tree type. Boosting can overfit, so we consider boosted DTs after {2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048} steps of boosting. With BST-STMP we use stumps (single level decision trees) with 5 different splitting criteria, each boosted {2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192} steps.

SVMs: we use most kernels in SVMLight [5] {linear, polynomial degree 2 & 3, radial with width {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 2}} and vary the regularization parameter C by factors of ten from 10^-7 to 10^3. The output range of SVMs is [-∞, +∞] instead of [0, 1]. To make the SVM predictions compatible with other models, we use Platt's method to convert SVM outputs to probabilities by fitting them to a sigmoid [8].
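Because Platt scaling is what makes the SVM outputs comparable to the other models on the probability metrics, a minimal sketch of the idea follows (our illustration, not the exact procedure of [8]: Platt's method additionally uses held-out decision values, slightly smoothed targets, and a more careful optimizer than the plain gradient steps shown here; all function and variable names are ours).

```python
import numpy as np

def fit_sigmoid(scores, labels, lr=0.05, steps=20000):
    """Fit p(y=1 | f) = sigmoid(a*f + b) on a calibration set by minimizing
    cross entropy with plain gradient descent (equivalent, up to a sign
    convention, to the sigmoid form used by Platt scaling)."""
    a, b = 0.0, 0.0
    y = labels.astype(float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        a -= lr * np.mean((p - y) * scores)   # gradient of mean cross entropy w.r.t. a
        b -= lr * np.mean(p - y)              # gradient w.r.t. b
    return a, b

def calibrate(scores, a, b):
    """Map raw SVM decision values on [-inf, +inf] to probabilities on [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(a * scores + b)))

# Toy usage: decision values roughly centered at -1 for negatives, +1 for positives.
rng = np.random.default_rng(1)
y = (rng.random(2000) < 0.5).astype(int)
f = 2.0 * y - 1.0 + rng.normal(0.0, 1.0, size=2000)
a, b = fit_sigmoid(f, y)
probs = calibrate(f, a, b)
```

The calibrated probabilities, not the raw margins, are what RMS, MXE, and CAL are computed on.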
Without scaling, SVMs would have\npoor RMS and it would not be possible to calculate MXE\nand CAL.\n78\nResearch Track Paper", "keywords": "Lift;Precision;performance metric;ROC;Supervised Learning;supervised learning;squared error;SVMs;pairwise;Recall;algorithmns;Cross Entropy;ordering metric;Euclidean distance;Performance Evaluation;standard deviation;backpropagation;Metrics"} {"name": "64", "title": "Database Security Curriculum in InfoSec Program", "abstract": "Database Security course is an important part of the InfoSec curriculum. In many institutions this is not taught as an independent course. Parts of the contents presented in this paper are usually incorporated in other courses such as Network Security. The importance of database security concepts stems from the fact that a compromise of data at rest could expose an organization to a greater security threat than otherwise. Database vulnerabilities exposed recently in several high profile incidents would be a good reason to dedicate a full course to this important topic. In this paper we present key topics such as technologies for database protection, access control, multilevel security, database vulnerabilities and defenses, privacy and legal issues, impact of policies and some well known secure database models.", "fulltext": "INTRODUCTION\nInformation Security curriculum is receiving greater attention\nfrom many institutions, thanks to the standardization efforts by\nthe Committee on National Security Systems (CNSS). The\nCNSS members come from the National Security Agency,\nDepartment of Defense, and the Department of Homeland\nSecurity, among others. The CNSS standardization efforts are\nbased on the Presidential Decision Directive [24] issued in\n1998 for training professionals to protect the nation's critical\ninfrastructure. To achieve this goal, CNSS has developed five\nmajor standards known as the National Security\nTelecommunications Information Systems Security Instruction\n(NSTISSI). The NSTISSI standards are numbered 4011, 4012,\n4013, 4014 and 4015 [8]. Additional standards under this\nsequence are in the offing as well. The relevance of these\nstandards is that they include a vast number of topics that cover\nthe entire gamut of information assurance and database security\ntopics are included in many of these standards. First, we will\nbriefly outline the main content of each of these standards and\nthen move onto the main content of this paper.\n\nThe 4011 standard covers the information security foundation\ntopics such as wired and wireless communications basics,\noperations security, transmission security, information security\nfrom a policy perspective, cryptography, key management, legal\naspects of security, contingency planning and disaster recovery,\nrisk management, trust, auditing, and monitoring. At present,\ncoverage of topics mentioned in this standard is considered\nessential by CNSS in every InfoSec curriculum. The 4012\nstandard is primarily aimed at training Designated Approving\nAuthority personnel. A quick look at the following topics would\nshow the relationship of these standards vis--vis database\nsecurity. The primary topics of this standard include: liabilities,\nlegal issues, security policy, sensitive data access policy, threats,\nvulnerabilities, incident response, life cycle management,\nconfiguration management, and contingency management. The\npurpose of 4013 standard is to provide a minimum set of topics\nnecessary for certifying Systems Administrators in Information\nSystems Security. 
Some of the topics in this category include: development and maintenance of security policies and procedures; education, training and awareness of such policies; and development of countermeasures for known attacks as well as development of safeguards. Also, configuration management is an important part of the 4013 standard. The standard for training Information Systems Security Officers is 4014. This standard covers topics such as facilities planning, business continuity, password management, access control policies, laws and regulations related to information security, privacy, encryption standards, intrusion detection, audit tools, and security reviews. The last standard currently in place in this series is numbered 4015. This standard is for training System Certifiers. Among the main topics here are: defining roles and responsibilities for personnel, certification of systems, identifying process boundaries, integration, security engineering, and applications security. These five standards have been in place since 1994 and are constantly getting updated by CNSS.

INFOSEC FOUNDATION COURSES
Traditionally, the following courses are considered as a set of foundation courses: Network Security, Information Security, and Cryptography. Usually these courses are augmented by additional courses such as Operating System Security, Database Security, Secure E-commerce, and Security Management. In our curriculum at the University of Louisville we are offering the three foundation courses listed above and the Database Security course. The main purpose of this paper is to identify several topics that could be included in a Database Security course. In the last quarter of 2004 and the first quarter of 2005, several incidents of theft or loss of data from databases of large organizations have brought to light the vulnerabilities in managing database systems. Every organization depends heavily on databases, both large and small, in order to manage inventory, human resources, and business functions on a day to day basis. Therefore, in order to mitigate risk, every organization must take adequate steps to protect the data that it holds. Issues related to technology as well as policies are important for the protection of data.
Such topics form the core\nof this Database Security course, which we will discuss in\ngreater detail in the remaining sections.\n\n\nINFOSEC AT U OF LOUISVILLE\nAt the University of Louisville (U of L), InfoSec courses are\noffered in two departments. The Computer Information Systems\n(CIS) department in the College of Business offers an\nundergraduate concentration in InfoSec [36]. The Computer\nScience department in the college of Engineering offers graduate\ncourses in InfoSec at the masters and doctoral levels. Database\nsecurity course is offered as the second course in database, the\nfirst course being the standard database design and management\ncourse. Students taking the database security course are either\njuniors or seniors and are expected to have experience with one\nof the mainframe commercial databases such as Oracle or SQL\nServer 2000. The major course objectives were for students to:\n\n\nLearn the fundamental concepts in database security\n\n\nUnderstand how access controls work in a database\n\n\nLearn to develop and manage secure database\narchitectures\n\n\nBe familiar with the laws governing computer privacy\n\n\nUnderstand the alternatives to encrypting data at rest\n\n\nUnderstand the security implementations and\nvulnerabilities in commercial database systems such as\nOracle and SQL Server 2000\n\n\nLearn security audit methods\n\n\nLearn about multi-level database security\nThe course content was covered using material from many\nsources, primarily research papers. The Database Security book\nby Castano, et al is an out of print book as it was originally\ndeveloped in 1994. The Database Security and Auditing book\nby Afyouni was printed in April 2005 and so was not available\nwhen the semester started. In the course we used two SQL\nServer Security books which were available in print and one\nOracle Security book that was available in electronic form\nthrough Safari books. These books contributed to reinforcing\nconcepts discussed by testing several attack methods. Another\nspecial feature of teaching the Database Security course was the\navailability of a dedicated InfoSec Lab. We will discuss the\ncontribution of the InfoSec Lab later in this paper.\n\nThe initial emphasis in the course was on incorporating database\nsecurity concepts during the design phase. The primary reason\nfor this emphasis was on the need for integration of various\ncomponents of an information system. Since database security\nis heavily dependent on network security, operating system\nsecurity, physical security and other applications security such\nan integrated approach is essential for an appreciation of design\ndecisions. The course content was arranged in such a way that\nboth technology and policy aspects were equally emphasized.\nThis emphasis was motivated by the fact that there are several\nlegal requirements to be met and people's privacy must be\nprotected. A compromised database endangers the privacy of\nindividuals by the release of personal information such as social\nsecurity number, date of birth, credit card numbers, and health\nhistory.\n\nAn important part of database security is access control. There\nare several types of access controls that are available for the\ndatabase administrator to work with. More importantly,\nchoosing the proper type of access control enables the allocation\nand revocation of privileges to individuals for various\ncomponents of the database. 
The three types of access controls\ndiscussed related to Mandatory Access Control (MAC),\nDiscretionary Access Control (DAC) and Role-based Access\nControl (RAC). A simple example of MAC would be that of\nusing a suitable password for database access. However,\npractical uses of databases always require overriding a default\naccess privilege for a specific event. In such instances one uses\nDiscretionary Access Control. Since database privileges\nsometimes have the inheritance property it becomes essential to\nunderstand how the particular commercial system would handle\nDAC. The most important of access controls is Role-based\nAccess Control. Discussion of this topic showed the various\nnuances involved in assigning access privileges in a timely\nmanner without hindering productivity and at the same time\nproviding security. All necessary database accesses could be\nassociated with a specific role that an individual performs in an\norganization. Since roles change often and consequently access\nneeds change as well, it is much easier to manage access control\nby associating privileges with roles. It is worth noting that these\nthree types of access controls are not mutually exclusive but\nwork in combinations that suit the organizational needs.\n\nAnother important aspect of database security is authentication.\nSince databases provide many views of the data, suitable\nprivileges for the ability to drill down data requires appropriate\nauthentication. The authentication aspect of database access\nsupports the confidentiality in the CIA (Confidentiality-Integrity\n-Availability) triangle that is basic to information\nsecurity. Authentication models discussed include single-factor,\ntwo-factor, and three-factor authentication and the attendant\nperformance issues.\n\nAmong the many topics covered in this course, one of the\nimportant ones relates to Multi-Level Secure (MLS) databases\n[12, 14]. Commercial databases such as Oracle or SQL Server\ndo not handle the MLS aspects in their software. However, it is\nan important aspect to be aware of. For example, in an\n\n80\norganization not every one has the access rights to view\nconfidential information. Database queries are designed to pull\nall data that satisfy the query condition. In the MLS\nenvironment, the query condition would not necessarily reflect\nthe security level of the person making the query. Since the\nsecurity level authorization of the individual running the query\nis known from the login time, the MLS database is supposed to\nshow all data that is cleared for that level of user. Usually the\nsecurity levels could be unclassified, confidential, secret, or top\nsecret. Not all fields of data in a record would need to be\ncarrying a classification. Those sensitive data that have an\nassociated security classification should be able to serve the\nneeds of users with the appropriate security clearance and hide\nthe data from others. A major problem to overcome in this\ncontext is known as polyinstantiation [13]. This concept refers\nto the fact that if persons with a lower clearance level have\nreason to suspect the existence of a record with hidden values,\nthen they would be able to infer. Polyinstantiation could be\naddressed to a large extent by allowing certain redundancies in a\ndatabase.\n\nAnother common problem with MLS databases is the presence\nof inference channel. Inference channel leaks information about\ndata classified at a higher level to users with lower level\nclearances [19]. 
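As a toy illustration of the MLS query behavior just described (our sketch, not taken from any of the cited systems; the relation, classification labels, and function names are invented), the filter below returns only rows whose classification is dominated by the querying user's clearance. The insert also hints at why polyinstantiation arises: refusing a colliding insert from a low-cleared user would reveal that a hidden higher-level row exists, so the system may keep two instances of the key, one per level.

```python
# Security lattice for the toy example: a higher number dominates a lower one.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

# A tiny multilevel relation: each row carries its own classification.
flights = [
    {"id": 1, "cargo": "food",    "level": "unclassified"},
    {"id": 2, "cargo": "weapons", "level": "secret"},
]

def select(rows, clearance):
    """Return only rows whose classification is dominated by the user's clearance."""
    return [r for r in rows if LEVELS[r["level"]] <= LEVELS[clearance]]

def insert(rows, new_row, clearance):
    """Insert at the user's level; a key collision with a hidden higher-level row
    is allowed to stand (polyinstantiation) rather than revealing that row."""
    hidden_clash = any(r["id"] == new_row["id"] and
                       LEVELS[r["level"]] > LEVELS[clearance] for r in rows)
    rows.append(dict(new_row, level=clearance))
    return "polyinstantiated" if hidden_clash else "inserted"

print(select(flights, "confidential"))                     # the secret row stays hidden
print(insert(flights, {"id": 2, "cargo": "mail"}, "unclassified"))
```

The redundancy created by the second id=2 row is exactly the kind of controlled duplication the text refers to when it says polyinstantiation can be addressed by allowing certain redundancies in the database.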
Security policies also play an important role in\nprotecting against inference channel leaks. A related approach\nto this problem is to develop classification constraints on data.\nThese data classifications are then used at query time and then\nthe appropriate level of the constraint is applied to the resulting\ndata before it is presented to the user.\n\nIn this context we discussed the security architecture for\ndatabases. This was broadly classified as those systems that use\na Trusted Computing Base (TCB) that is external to the DBMS\nand those systems that manage access to data through the DBMS\n[22]. In the TCB architecture, the access controls were usually\nhandled by the operating system or the network. In the DBMS\ncontrol architecture, security design involved multi-factor\nauthentication as well as security clearance and role-based\naccess. As part of the secure architecture topic, we studied the\nBell-LaPadula Model and the Biba Model [5]. Then we took a\ndetailed look at the Seaview Model [17]. This is the first paper\nthat studied in detail the security needs for database systems that\ncontained data with various levels of security clearances. The\nmajor contribution of this paper was the application-independent\nnature of data integrity with particular reference to entity\nintegrity, referential integrity and polyinstantiation integrity.\nWe studied additional secure architecture topics with particular\nreference to commercial database systems. These topics include\ninput validation, credential handling and encryption.\n\nEncryption is a major topic in itself in the security context.\nUsually encryption is an important tool for data in transit.\nHowever, the recent spate of incidents involving lost or stolen\ndata [37] shows the need for protecting data at rest from falling\ninto the wrong hands. One useful tool in this regard is\nencryption. We studied the impact of encrypted data with\nrespect to performance. Usually, encryption of sensitive data at\nrest is a desirable feature provided the access to such data is not\nfrequent. On the other hand, for data that is frequently used the\nbetter alternative to encryption would be to partially secure\nstorage [31, 33] whereby the data management is handled by an\nindependent system that works outside the operating system\ncontrol. This technique protects the data from hackers as the\naccess control is under an independent system that is not\nmanipulated by the operating system, where most of the\nvulnerabilities are exploited. In this context we studied the\nFARSITE model that discusses the reliable storage aspects in an\nincompletely trusted environment such as the Internet [2]. This\nresearch, performed at Microsoft, shows how \"to harness the\ncollective resources of loosely coupled, insecure, and unreliable\nmachines to provide logically centralized, secure, and reliable\nfile-storage service.\"\n\nThe next major topic covered was security audit for a database.\nThe sources used for this topic were Jajodia [15], Andrews [4],\nand material from the Congressional Hearing reference provided\nin the References section. Audit involves different components\nsuch as login, access and storage. Commercial database systems\nsuch as Oracle and SQL Server facilitate login auditing in a\nsimple way. For example, in SQL Server the user could set the\nlogin audit level to any one of four levels. 
Level 0 does not log\nany information about the logins, level 1 logs information about\nsuccessful logins only, level 2 logs information about\nunsuccessful logins only and level 3 logs information about all\nattempted logins. This aspect of setting the appropriate level is\nrelated to the security policy of the organization. An\norganization might feel that they need to know only those people\nwho attempted a login and failed as the ones who successfully\nlogged in are considered authorized users. This is not a good\nassumption when it comes to computer forensics where one is\ntrying to reconstruct an event that happened in the past.\nConsequently, organizations must consider the impact of their\npolicies when it comes to information security. Auditing is also\nmandated by certain accreditation bodies. In order to satisfy\ncertain data security requirements, some organizations might\nhave to secure C2 level security rating from the National\nComputer Security Center (NCSC). The NCSC certification is\nmeasured according to the Department of Defense Trusted\nComputer System Evaluation Criteria [4]. We concluded the\ncourse with an analysis of database protection, copyright and\nprivacy aspects both from a policy and legal perspective. First,\nwe discussed the Congressional hearing on \"Database and\nCollections of Information Misappropriation Act of 2003.\"\nThis hearing showed the limitations of Copyright laws and how\nU.S. courts have interpreted the laws that protect privacy. We\nthen studied the future of the database protection in U.S. and the\nlaws to help in this regard. U.S. court rulings, including that of\nthe Supreme Court, have shown that \"sweat of the brow\"\nargument does not offer protection for databases, rather the\ndemonstration of some form of \"originality\" in data collection\nand dissemination is essential for database ownership. A court\nruling in 2001 in United Kingdom in the case of the British\nHorseracing Board (BHB) has once again brought into focus the\nsweat of the brow argument. The U.K. court upheld the BHB's\nclaim of ownership of data pertaining to horses and jockeys\n[10]. It remains to be seen how the U.S. courts would consider\nchallenges to the sweat of the brow argument when it comes to\nprotecting large databases from competitors.\nEVALUATION TOOLS\nIn this course we used several different types of evaluation tools.\nStudents were required to write three individual research reports\non topics provided in class. The topics were:\n\n81\n1.\n\nBuffer overflows\n2.\n\nSecurity audit\n3.\n\nSarbanes Oxley Act and its impact on Database\nSecurity\nOn the testing side, we used a closed book, closed notes,\nmidterm and final examinations. All questions were essay type.\nThe students had access to a dedicated InfoSec lab where they\ncould perform several different types of hands-on testing for\nvulnerabilities [32]. The InfoSec Lab has 16 workstations on a\nLAN connected to a Windows 2000 server. First the SQL\nServer 2000 was installed on the server. Two stand-alone\ncomputers that were not connected to the network were also\nprovided to the students for testing. The first assignment\nprovided a chance for the students to install SQL Server 2000\nand choose appropriate security settings for various components\nof the database. The students then created new SQL Server\naccounts on the stand-alone computers and granted suitable\nprivileges first and then tested the DENY and REVOKE features\nas well. 
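The statements exercised in that assignment can be scripted. The fragment below is a hedged sketch of the kind of GRANT/DENY/REVOKE tests involved, not the actual lab scripts: the server, database, account, and table names are hypothetical, the ODBC driver string would need to match the local environment, and on SQL Server 2000 the accounts themselves would have been created with sp_addlogin and sp_grantdbaccess rather than assumed to exist.

```python
# Minimal sketch (assumptions: a reachable SQL Server instance, an existing
# database "labdb", an already-created account "labuser", and a table
# dbo.grades -- all hypothetical names).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=labdb;UID=sa;PWD=StrongPassword!1"
)
cur = conn.cursor()

# Grant a privilege, explicitly deny another, then take the grant back.
cur.execute("GRANT SELECT ON dbo.grades TO labuser")
cur.execute("DENY INSERT ON dbo.grades TO labuser")    # DENY overrides any grant
cur.execute("REVOKE SELECT ON dbo.grades FROM labuser")
conn.commit()
conn.close()
```

Running such statements and then attempting the corresponding queries as labuser is one way to observe, as the students did, how DENY takes precedence over GRANT and how REVOKE simply removes a previously granted privilege.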
The students had to install the latest SQL Server\npatches on the stand-alone computers and test for vulnerabilities.\n\nThe dedicated lab environment provided an excellent facility for\nus to allow students to understand how a hacker would gain\nroutine information about the database system. First the SQL\nServer 2000 was left unpatched and the students used the SQL\nPing2 utility to gather information about the database system.\nThis showed the port 1433 in use. Then the SQL Server 2000\nwas patched with version 3a and the students tried the same\nSQL Ping2 utility, this time finding a different type of\ninformation about the SQL Server. Next, the SQL Server was\nput in hide mode and the students found out this piece of\ninformation by noticing that the listening port had changed to\n2433. We were able to accomplish this testing by making\nchanges to the SQL Server every two days giving a short time\nbetween changes for testing. This was done as assignment 2.\nThe third assignment involved testing Bulk Copy / Bulk Insert\nfeatures of SQL Server. The fourth assignment involved a\nbuffer overflow attack. A sample code was given to the students\nto try the buffer overflow attack on the patched server. The\npatched server foiled the attack. The students were then asked\nto test the same buffer overflow attack on the stand-alone\ncomputers where patches were not applied. The last assignment\ninvolved SQL Injection attack. The students were given a series\nof codes for the SQL Injection attack testing. The first part\ninvolved logging into a SQL Server database system knowing\nthe userid of the user but not the password. The second part\ninvolved not knowing the userid or the password. The third part\ninvolved creating a new user and then exploiting the system.\nThe fourth part involved finding the password of the sa account.\nThe fifth part involved dropping the SQL Server from the server\nand shutting down the SQL Server via SQL Injection attack.\nThe students were given the challenge in the fourth part of the\nSQL Injection attack testing to find out the strong password used\non the server, which had all the latest patches both for the SQL\nServer part and the operating system part. This required more\nwork beyond the SQL knowledge. One of the students\nsucceeded in finding out the server password, not just the sa\npassword, which was much easier to get using SQL Injection.\n\n\nCONCLUSION\nOverall, the students enjoyed the content of the course that\ninvolved learning many database security concepts and the\nability to test many aspects of SQL Server installation, suitable\nsettings, detect vulnerabilities, develop simple countermeasures\nand have the ability to use the logs to detect intrusion.\n\nACKNOWLEDGEMENTS\nThis research was supported in part by the NSF grant DUE-0416900\nand the Kentucky Council on Postsecondary Education\ngrant GB040955.\nREFERENCES\n[1]\nAbrams, M. D., Jajodia, S., Podell, H. J. 1995.\nInformation Security: An integrated collection of essays,\nIEEE Computer Society Press, CA.\n[2]\nAdya, A., Bolosky, W.J., Castro, M., Cermak, G.,\n\nChaiken, R., Douceur, J., Howell, J., Lorch, J.R.,\n\nTheimer, M. and Wattenhofer, R.P., 2002. \"FARSITE:\nFederated,\nAvailable, and Reliable Storage for an\n\nIncompletely Trusted Environment,\" Proceedings of the\n\n5\nth\nSymposium on Operating Systems Design and\n\nImplementation, Boston, MA, December, 1 14.\n[3]\nAfyouni, H. A. 2006. 
Database Security and Auditing,\n\nCourse Technology, MA.\n[4]\nAndrews, C., Litchfield, D., Grindlay, B. 2003. SQL\nServer\nSecurity\nFundamentals, McGraw-Hill/Osborne,\nNY.\n\n[5]\nCastano, S., Fugini, M., Martella, G., Samarati, P. 1994.\nDatabase Security, ACM Press Books (Diane Publishing\nCo.), NY.\n[6]\nCerrudo, C. \"Manipulating Microsoft SQL Server Using\nSQL Injection\"\nhttp://database.ittoolbox.com/browse.asp?c=DBPeerPubl\nishing&r=%2Fpub%2FSG090202%2Epdf,\nAccessed on 07/25/2005\n[7] CERT\nhttp://www.cert.org, Accessed on 05/20/2005\n[8]\nCNSS Stds. \"National IA Education Standards,\"\nhttp://www.nsa.gov/ia/academia/cnsstesstandards.cfm\n\n[9]\nCongressional Hearing, 2003. \"Database and Collections\n\nof Information Misappropriation Act of 2003,\"\nSeptember.\nhttp://www.copyright.gov/docs/regstat092303.html,\nAccessed on 04/10/2005\n[10] Duke University, 2001. \"The Future of Database\nProtection in U.S. Copyright Law\"\nhttp://www.law.duke.edu/journals/dltr/articles/2001dltr0\n017.html, Accessed on 04/15/2005\n[11] Hinke, T., 1995. \"Multilevel Secure Database\nManagement Prototypes,\" in Information Security: An\n\nIntegrated Collection of Essays, 1 edition\nst\n, Edited by\n\nAbrams, M.D., Jajodia, S.G., Podell, H.J., Essay 23,\n\nIEEE Computer Society Press, CA, 542-569.\n\n82\n[12] Jajodia, S. and Sandhu, R., 1995. \"Toward a Multilevel\nSecure Relational Model,\" in Information Security: An\n\nIntegrated Collection of Essays, 1 edition\nst\n, Edited by\n\nAbrams, M.D., Jajodia, S.G., Podell, H.J., Essay 20.\n\nIEEE Computer Society Press, CA, 460-492.\n[13] Jajodia, S., Sandhu, R. and Blaustein, B.T., 1995.\n\"Solutions to the Polyinstantiation Problem\" in\n\nInformation Security: An Integrated Collection of\nEssays,\n1 edition\nst\n, Edited by Abrams, M.D., Jajodia,\n\nS.G., Podell, H.J., Essay 21. IEEE Computer Society\n\nPress, CA, 493-529.\n[14] Jajodia, S. and Meadows, C. 1995. \"Inference problems\nin multilevel secure database management systems,\" in\n\nInformation Security: An Integrated Collection of\nEssays,\n1 edition\nst\n, Edited by Abrams, M.D., Jajodia,\n\nS.G., Podell, H.J., Essay 24. IEEE Computer Society\n\nPress, CA, 570-584.\n[15] Jajodia, S., Gadia, S.K., and Bhargava, G., 1995.\n\"Logical Design of Audit Information in Relational\n\nDatabases\" in Information Security: An Integrated\n\nCollection of Essays, 1 edition\nst\n, Edited by Abrams,\n\nM.D., Jajodia, S.G., Podell, H.J., Essay 25. IEEE\n\nComputer Society Press, CA, 585-595.\n[16] Lewis, M. 2004. \"SQL Server Security Distilled,\" 2\nnd\n\nedition,\nApress,\nCA.\n[17] Lunt, T., Denning, D. E., Schell, R. R., Heckman, M.\nand Shockley, W. R. 1990. \"The Seaview Security\nModel,\" IEEE Transactions on Software Engineering, 16\n(#6), June, 593 607.\n[18] Mao, W. 2004. \"Modern Cryptography,\" Prentice-Hall,\nNJ.\n\n[19] Meadows, C. and Jajodia, S., 1995. \"Integrity in\nMultilevel Secure Database Management Systems,\" in\n\nInformation Security: An Integrated Collection of\nEssays,\n1 edition\nst\n, Edited by Abrams, M.D., Jajodia,\n\nS.G., Podell, H.J., Essay 22. IEEE Computer Society\n\nPress, CA, 530-541.\n[20] Nevins, S.C., 2003. \"Database security breaches on the\nrise\"\nhttp://www.snwonline.com/evaluate/database_\nsecurity_03-31-03.asp?article_id=224,\nAccessed on 04/15/2005.\n[21] Nessus\nhttp://www.nessus.org. 
Accessed on 05/19/2005.\n[22] Notargiacomo,\nL.\n\"Architectures for MLS Database\n\nManagement Systems\" in Information Security: An\n\nIntegrated Collection of Essays, 1 edition\nst\n, Edited by\n\nAbrams, M.D., Jajodia, S.G., Podell, H.J., Essay 19.\n\nIEEE Computer Society Press, CA.\n[23] O'Reilly Publishers. Developing a Database Security\nPlan\n\n\nhttp://www.oreilly.com/catalog/orasec/chapter/ch07.html\n\nAccessed on 03/10/2005.\n[24] PDD63,\n1998.\nhttp://www.fas.org/irp/offdocs/pdd/pdd63.htm,\n\nAccessed on 05/22/2005.\n[25] Pernul, Gunther, 1994. \"Database Security\" chapter in\n`Advances in Computers,' Edited by M.C.Yovits,\n\nvol. 38, Academic Press, NY.\n[26] Rob, P. and Coronel, C. 2004. \"Design, Implementation\nand\nManagement,\" 6\nth\nEdn., Course Technology, MA.\n[27] Sandhu, R. and Samarati, P., 1994. \"Access Control:\nPrinciples and Practice,\" IEEE Communications\nMagazine, vol. 32, September, 40-48.\n[28] Sandhu, R., Coyne, E.J., Feinstein, H. L. and Youman,\nC.E., 1996. \"Role-based Access Control Models,\" IEEE\n\nComputer, vol. 29, February, 38-47.\n[29] SANS http://www.sans.org, Accessed on 05/19/2005.\n\n[30] Solworth, J. A. 2004. \"Integrating Discretionary and\nMandatory Access Controls\"\nhttp://parsys.cs.uic.edu/~solworth/integratingMacDac.pd\nf. Accessed on 04/15/2005.\n[31] Son, S. H., Chaney, C., and Thomlinson, N. P., \"Partial\nSecurity Policies to Support Timeliness in Secure Real\ntime Databases,\" 1998. Proceedings of the IEEE\n\nSymposium on Security and Privacy, May 3-6,\n\n136 147.\n[32] Srinivasan, S. 2005. \"Design and Development of an\nInformation Security Laboratory,\" Proceedings of the 9\nth\n\n\nAnnual Colloquium on Information System Security\n\nEducation, Atlanta, GA, June 6-9.\n[33] Strunk, J.D., Goodson, G.R., Scheinholtz, M.L., Soules,\nC.A.N. and Ganger, G.R., 2003. \"Self-Securing Storage:\n\nProtecting Data in Compromised Systems,\" Foundations\n\nof Intrusion Tolerant Systems, 195 209.\n[34] Theriault, M. and Heney, W. 1998. \"Oracle Security,\"\nO'Reilly Publishers, IN.\n[35] Tomson, B., 2004. \"SQL Server 2000 Security Best\nPractices\"\n\nhttp://wp.bitpipe.com/resource/org_1078177630_947/SQ\nLserver2000.pdf. Accessed on 03/20/2005.\n[36] UofL InfoSec, 2005. \"InfoSec Program website,\"\nhttp://www.louisville.edu/infosec\n\n[37] Wall Street Journal, 2005. \"ChoicePoint struggles to\ngauge how much information fell into wrong hands,\"\n\nMay 3, Page 1.\n\n\n83", "keywords": "inference channel;access control;buffer overflows;CIA;privacy;polyinstantiation;database;inference;Database;encryption;multilevel security;authentication;policy;security"} {"name": "65", "title": "dBBlue: Low Diameter and Self-routing Bluetooth Scatternet", "abstract": "This paper addresses the problem of scatternet formation for single-hop Bluetooth based ad hoc networks, with minimal communication overhead. We adopt the well-known structure de Bruijn graph to form the backbone of Bluetooth scatternet, hereafter called dBBlue, such that every master node has at most seven slaves, every slave node is in at most two piconets, and no node assumes both master and slave roles. Our structure dBBlue also enjoys a nice routing property: the diameter of the graph is O(log n) and we can find a path with at most O(log n) hops for every pair of nodes without any routing table . Moreover, the congestion of every node is at most O(log n/n), assuming that a unit of total traffic demand is equally distributed among all pair of nodes. 
We discuss in detail a vigorous method to locally update the structure dBBlue using at most O(log n) communications when a node joins or leaves the network. In most cases, the cost of updating the scatternet is actually O(1) since a node can join or leave without affecting the remaining scatternet. The number of nodes affected when a node joins or leaves the network is always bounded from above by a constant. To facilitate self-routing and easy updating, we design a scalable MAC assigning mechanism for piconet, which guarantees the packet delivery during scatternet updating. The dBBlue scatternet can be constructed incrementally when the nodes join the network one by one. Previously no method can guarantee all these properties although some methods can achieve some of the properties.", "fulltext": "INTRODUCTION\nBluetooth [8] is a promising new wireless technology, which enables\nportable devices to form short-range wireless ad hoc networks\nbased on a frequency hopping physical layer. Bluetooth ad-hoc\nnetworking presents some technical challenges, such as scheduling\n, network forming and routing. User mobility poses additional\nchallenges for connection rerouting and QoS services. It has been\nwidely predicted that Bluetooth will be the major technology for\nshort range wireless networks and wireless personal area networks.\nThis paper deals with the problem of building ad hoc networks using\nBluetooth technology.\nAccording to the Bluetooth standard, when two Bluetooth devices\ncome into each other's communication range, one of them\nassumes the role of master of the communication and the other becomes\nthe slave. This simple one hop network is called a piconet,\nand may include more slaves. The network topology resulted by the\nconnection of piconets is called a scatternet. There is no limit on\nthe maximum number of slaves connected to one master, although\nthe number of active slaves at one time cannot exceed . If a master\nnode has more than\nslaves, some slaves must be parked. To\ncommunicate with a parked slave, a master has to unpark it, thus\npossibly parking another active slave instead. The standard also\nallows multiple roles for the same device. A node can be master\nin one piconet and a slave in one or more other piconets. However,\none node can be active only in one piconet. To operate as a member\nof another piconet, a node has to switch to the hopping frequency\nsequence of the other piconet. Since each switch causes delay (e.g.,\nscheduling and synchronization time), an efficient scatternet formation\nprotocol can be one that minimizes the roles assigned to the\nnodes, without losing network connectivity.\nWhile several solutions and commercial products have been in-troduced\nfor one-hop Bluetooth communication, the Bluetooth specification\ndoes not indicate any method for scatternet formation. The\nproblem of scatternet formation has not been dealt with until very\nrecently. The solutions proposed in literature can be divided into\nsingle-hop and multi-hop solutions. Several criteria could be set\nas the objectives in forming scatternet. First of all, the protocol\nshould create degree limited scatternets, to avoid parking any node.\nSecondly, the number of piconets should be minimized to provide\nfaster routing. Thirdly, the formation and maintenance of scatternet\nshould have small communication overhead. Fourthly, the diameter\nof the scatternet should be small, i.e., the maximum number of hops\nbetween any two devices must be small. 
In this paper, we focus on\nscatternet formation for single-hop ad hoc networks. In a single-hop\nad hoc network, all wireless devices are in the radio vicinity\nof each other, e.g., electronic devices in a laboratory, or laptops in\na conference room. A single-hop network can be modeled by a\ncomplete graph.\n22\nPrevious literature on scatternet formation assumed that devices\nare not able to communicate unless they have previously discovered\neach other by synchronizing their frequency hopping patterns.\nThus, even if all nodes are within direct communication range of\neach other, only those nodes, which are synchronized with the transmitter\n, can hear the transmission. Synchronizing the frequency\nhopping patterns is apparently a time consuming and pseudo-random\nprocess [13]. In this paper we assume that the problem of discovering\nall neighbors within transmission radius of a device is resolved\nby separate Bluetooth protocol. One such protocol for discovering\nall one hop networks is described in [13, 3], while a protocol that\nprovides two-hop information to every node is described in [12].\nThese protocols are applicable as the pre-phase of our scheme.\nThis paper addresses the problem of scatternet formation for\nsingle-hop Bluetooth based ad hoc networks, with minimal communication\noverhead. We adopt the well-known structure de Bruijn\ngraph to form the backbone of Bluetooth scatternet, hereafter called\ndBBlue, such that every master node has at most seven slaves, every\nslave node is in at most two piconets, and no node assumes\nboth master and slave roles. Our structure dBBlue also enjoys a\nnice routing property: the diameter of the graph is\n\nand\nwe can find a path with at most\n\nhops between every pair\nof nodes without any routing table. Moreover, the congestion of\nevery node is at most\n\n, assuming that a unit of total\ntraffic demand is evenly distributed among all pair of nodes. We\ndiscuss in detail a vigorous method to locally update the structure\ndBBlue using at most\n\ncommunications when a node joins\nor leaves the network. In most cases, the cost of updating the scatternet\nis actually\n\nsince a node can join or leave without affecting\nthe remaining scatternet. The number of nodes affected when\na node joins or leaves the network is always bounded from above\nby a constant. To facilitate self-routing and easy updating, we design\na scalable MAC assigning mechanism for piconet, which can\nguarantee the packet delivery even during updating. Our method\ncan construct the structure dBBlue incrementally when the nodes\njoin the network one by one. In addition, the structure formed by\nour method can sustain the faults of\n\nnodes and the network is\nstill guaranteed to be connected. If a node detects a fault of some\nneighboring master node or bridge slave node, it can dynamically\nre-route the packets and the path traveled by the packet is still at\nmost\n\n. Previously no method can guarantee all these properties\nalthough some methods can achieve some of the properties.\nThe rest of the paper is organized as follows. Section 2 presents\nour new Bluetooth formation algorithms for single-hop ad hoc networks\n. We describe how to build a static scatternet of\n\nnodes based\non de Bruijn graph and assign roles and labels to them. Section 3\nproposes a vigorous method to locally and dynamically update the\nscatternet topology when node joins or leaves the network. 
Section\n4 describes the routing method for our de Bruijn based scatternet\nwhich efficiently finds the next node need to go without any routing\ntable. The related works is discussed in section 5. We conclude our\npaper in Section 6 by pointing out some possible future research\ndirections.\nDBBLUE SCATTERNET CONSTRUCTION\nOur dBBlue scatternet first builds a backbone based on the well-known\nde Bruijn graph [5]. The de Bruijn graph, denoted by\n\n,\nis a directed graph with\nnodes. Assume that each node is assigned\na unique label of length\non the alphabet\n\n.\nThere is an edge in\n\nfrom a node with label\n\nto\nany node with label\n\n, where\n\n. Figure\n1 illustrates\n\n. It is well-known that the de Bruijn graph\nenables self-routing intrinsically. The self-routing path from the\nsource with label\n\nto the target with label\n\nis\n\n. Observe that, we could find a shorter route by looking\nfor the longest sequence that is both a suffix of\n\nand a\nprefix of\n\n. Suppose that\n\nis\nsuch longest sequence. The shortest path between the source and\nthe target is\n\n. Clearly, the\nroute between any two nodes is at most\nhops, i.e.,\n\nhas\ndiameter\n\n, where\nis the number of nodes of the\ngraph.\n111\n001\n011\n010\n100\n110\n000\n101\nFigure 1: The de Bruijn graph\n\n.\nThe classical de Bruijn graph is balanced in the sense that the\nlabels of all nodes have the same length. The de Bruijn graph can\nbe generalized to any set of vertices whose labels form a universal\nprefix set. In [7], Fraigniaud and Gauron proposed a novel\nmethod to construct an efficient topology for P2P network based\non the generalized de Bruijn graph defined on a universal prefix\nset. \"A universal prefix set is a set\n\nof labels on an alphabet\n\nsuch that, for any infinite word\n\n, there is a unique\nword in\n\n, which is a prefix of\n\n. The empty set is also a universal\nprefix set.\"[7] For instance,\n\nis\na universal prefix set on alphabet\n\n, but\n\nand\nare not. There is a directed\nedge from node\n\nto another node\n\nin the generalized\nde Bruijn graph if\n\nis the prefix of the label of node\n\n. A generalized de Bruijn graph is pseudo-balanced if the lengths\nof the labels are different by at most one. For simplicity, we still\ndenote a pseudo-balanced de Bruijn graph on alphabet\n\nby\nif the node labels have length at least\nbits and at most\n\nbits. We also say that a node from\n\nis at level\nif its\nlabel has\nbits.\nIn this paper, we only consider the balanced or pseudo-balanced\nbinary de Bruin graph\n\n. Node labels in a pseudo-balanced\nde Bruijn graph correspond to all the leaf nodes in a full binary tree,\nin which the depth difference between any two leaf nodes is at most\none and each internal node has two children, Figure 2 illustrates the\ncorrespondence between them. In the figure, the pseudo-balanced\nde Bruijn graph is defined on the leaf nodes and directed edges.\nIn a pseudo-balanced de Bruijn graph\n\n, each node has at\nmost\nout-neighbors and\n\nin-neighbors. To route a packet from\na node\n\nwith label\n\nto another node\n\nwith label\n\n, where\n\n. Node\n\nwill forward\nthe packet to its neighbor node with label\n\n\n\n\n\n\n\n\n\n, or\n\n\n\n\n\n\n\n\n\n\n\n, or\n\n\n\n\n\n\n\n\n\n\n\n\n\n. Notice that since the labels\nof the nodes are a universal prefix set, we know that exactly\none of these three labels does exist. The following nodes keep forwarding\nthe packet similarly until it reaches node\n\n. 
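Because the whole scatternet routing rests on this label-shifting property, a small sketch of the balanced case may help (our illustration of the classical de Bruijn routing described above, not the authors' pseudo-balanced implementation; the function name is ours). Each hop drops the most significant bit of the current label and appends the next bit of the target, after first skipping the longest suffix of the source that is already a prefix of the target.

```python
def debruijn_route(source: str, target: str) -> list[str]:
    """Self-route between two equal-length binary labels of the balanced
    de Bruijn graph: skip the longest suffix of `source` that is already a
    prefix of `target`, then shift in the remaining target bits one per hop."""
    assert len(source) == len(target) and set(source + target) <= {"0", "1"}
    k = len(source)
    overlap = 0
    for length in range(k, 0, -1):                 # longest suffix/prefix match
        if source[k - length:] == target[:length]:
            overlap = length
            break
    path, current = [source], source
    for bit in target[overlap:]:
        current = current[1:] + bit                # one de Bruijn edge per hop
        path.append(current)
    return path

# In the 3-bit balanced graph of Figure 1, any route needs at most 3 hops:
print(debruijn_route("000", "111"))   # ['000', '001', '011', '111']
print(debruijn_route("101", "011"))   # suffix/prefix overlap '01' -> one hop
```

The same shifting idea, applied to labels whose lengths differ by at most one bit, is what drives the table-free routing in the pseudo-balanced backbone used by dBBlue.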
Consequently,\nthe diameter of pseudo-balanced de Bruijn graph is still\n\n\n\n.\nIn this paper, we propose a scalable scatternet structure based on\npseudo-balanced de Bruijn graph\n\n\n.\n23\nroot\n0000 0001 0010 0011\n001\n000\n011\n010\n100 101 110 111\n10\n11\n01\n00\n1\n0\nFigure 2: The correspondence between full binary tree and\npseudo-balanced de Bruijn graph.\nIn a pseudo-balanced de Bruijn graph\n\n\n, two nodes are\ncalled critical pair if they only differ in the least significant bit of\ntheir labels. Let\n\n\n\n\n\n\n\n\n\nbe the sequence of nodes visited\nby a traversal of all leaf nodes in the corresponding binary tree of\n\n\n. A node\n\nis called the successor of another node\n\n\nand\n\n\nis called the predecessor of another node\n\n. Here\n\n\ntakes value\n\n\n\n\n\n\n. For example, in Figure 2, nodes\n\nand\n\nis a critical pair; node\n\nis the successor of the\nnode\n\n.\n2.2\nMAC Address Assignment for Piconet\nOur method will construct a balanced (or pseudo-balanced) de\nBruijn graph\n\n\nas the backbone of the network. Here the\nchoosing of the integer\n\nis discussed later. We will ignore the\ndirection of the edges in the de Bruijn graph\n\n\n. Thus, every\nnode will have at most\n(or\nfor pseudo-balanced de Bruijn graph\n\n\n) edges incident.\nEvery node in the backbone of dBBlue scatternet will be assigned\na master role. We will add a bridge slave node for every\npair of master nodes that are connected in the backbone. Thus, every\nmaster node will have at most six bridge slave nodes so far. We\nthen add some free slave nodes to each master node, and call them\npure slave nodes.\nBefore we discuss in detail our scatternet construction methods,\nwe present our novel rule of assigning the MAC address in a piconet\n. In our dBBlue scatternet, when we route a packet to a\ndestination node\n\n, we only know the piconet ID of node\n\n, say\n\n\n\n\n\n\n\n\n, which is same as the label of its master node, and the\nMAC address, say\n\n\n\n\n\n\n, of this node in that piconet. The detail\nrouting mechanism will be discussed in Section 4. When some\nnode joins or leaves the scatternet, we often have to reorganize\nsome piconets and thus re-assign the MACs of some nodes. Our\nmethod of assigning MAC addresses in a piconet and reorganizing\nthe piconets guarantees that the new piconet (even the new MAC\naddress) can be found by a simple appending or deleting the least\nsignificant bit, which keeps the label prefix of updating nodes un-changed\nso that even the delivery of the packets on the way to those\nupdating nodes will not be interrupted.\nIn a piconet, MAC\n\nis always reserved by the master node.\nFor simplicity, we omit the MAC address of a master node hereafter\nwhile representing its label, i.e., the master node with label\n\n\n\n\n\n\n\n\n\n\n\nactually has a label\n\n\n\n\n\n\n\n\n\n\n\n\nif\nconsistent labels with slave nodes are needed. Remember that, in a\npseudo-balanced de Bruijn graph, any node has\n\nin-neighbors (except\n\n\nand\n\n\n) and at most\nout-neighbors, so MAC addresses\n\nand\n\nare always reserved for the two bridge slaves to in-neighbors\n, MAC\n\n,\n\n,\n\nand\n\nare reserved for bridge\nslaves to out-neighbors if they exist, and\n\nis reserved for the\nth slave (it must be a pure slave) if it exists. 
Figure 3 illustrates\nall four possibilities for the piconet MAC address assignment according\nto the number of out-neighbors in scatternet backbone.\nIn the figure, for simplicity, we use\n\n\n\n\n\n\n\n\n\n\n\n\n\nto denote\na node with label\n\n\n\n\n\n\n\n\n\n\n\nor\n\n\n\n\n\n\n\n\n\n\n\n\n,\nwhichever exists in the network. Notice that a master node in\nthe constructed scatternet based on a pseudo-balanced de Bruijn\ngraph\n\n\nalways has two incoming neighbors. For example,\na master node\n\n\n\n\n\n\n\n\n\nin level\n\ncan have incoming neighbor\n\n\n\n\n\n\n\n\n\nor\n\n\n\n\n\n\n\n\n\n, but not both since the de\nBruijn graph is built upon a universal prefix set; similarly another\nincoming neighbor is\n\n\n\n\n\n\n\n\n\n\n\n\n. Analogously, a master\nnode\n\n\n\n\n\n\n\n\n\n\n\nin level\n\n\n\nhas incoming neighbors\n\n\n\n\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\n\n\n\n\n. On the other\nhand, the number of out-neighbors of a node in the pseudo-balanced\nde Bruijn graph\n\n\ncould be\n\n\n\n. Only the node at level\n\ncould have\n\nor\nout-neighbors and only the node at level\n\n\n\ncould have\n\nout-neighbor (except nodes\n\n\nand\n\n\nif they exist).\n... x\nm-1\n1 x\n1\n0\n... x\nm-1\nx\n1\nm\n(x )\nm\n(x )\n001\n010\n100\n101\n110\n011\n111\n... x\nm+1\n... x\nx\nm+1\n2\nx\n1\nx\n2\n(a) One out-neighbor\nm\nm\n(x )\n(x )\n1\n...\nx\n1\nx\n2\nx\nm\n...\n...\nx\n1\nx\nm-1\n...\nx\n1\nx\nm-1\n001\n010\n100\n101\n110\n011\n111\n1\nx\n0\n2\nx\nm\n0\n...\nx\n2\nx\nm\n(b) Two out-neighbors\n(x )\nm\nm\n(x )\nx\nm\n...\n...\nx\n1\nx\nm-1\n...\nx\n1\nx\nm-1\nx\n2\nx\nm\n0 0\n...\nx\n2\nx\nm\n0\n...\n1\n001\n010\n100\n101\n110\n011\n111\n1\nx\n0\n2\nx\nm\n1\n...\nx\n1\nx\n2\n(c) Three out-neighbors\nm\n(x )\n(x )\nm\nx\nm-1\n...\nx\n1\nx\nm-1\nx\n2\nx\nm\n0 0\n...\nx\n2\nx\nm\n0\n...\n1\nx\n2\nx\nm\n...\n1 0\nx\n2\nx\nm\n...\n1\n1\n001\n010\n100\n101\n110\n011\n111\n1\nx\n0\n1\nx\n2\nx\nm\n...\n...\nx\n1\n(d) Four out-neighbors\nFigure 3: MAC address assignment for a piconet.\n24\nTable 1 summarizes the rule of assigning the MAC address to the\nbridge slave nodes in a piconet. Their MAC addresses can be decided\nuniquely according to the label bit difference between current\npiconet and neighbor piconet IDs. For example, if the master\n\nis\nlabeled\n\n\n\n\n\n\n\n\n\nand its out-neighbor\n\nis labeled\n\n\n\n\n\n\n\n\n\n\n\n,\nthen the MAC addresses of their bridge slave is\n\n\n\n\n\n\nassigned by\n\n, and\n\n\n\nassigned by\n\n. Remember that every bridge slave has\none MAC address in each of the two piconets it resides.\nTable 1: The rule to assign MAC address to bridge slave nodes.\nIn-Neighbor\nOut-Neighbor\nNode\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nNotice that, in bluetooth scatternet, the bridge slave nodes have\ntwo independent piconet IDs and MAC addresses in two piconets\nrespectively. However, since the routing mechanism in de Bruijn\nis directional, only their piconet ID and MAC address assigned by\ntheir in-master is public and meaningful for routing, saying label\nin the remaining paper, and the other one is only used for inter-communication\nin a piconet. Figure 4 illustrates one piconet in\nthe scatternet. Here nodes\n\n,\n\n\n,\n\n\n,\n\n\nand\n\n\nassume master\nrole and form the backbone for scatternet. These master nodes are\nconnected in the de Bruijn graph by bridge slaves\n\n\n,\n\n,\n\n\nand\n\nrespectively. 
Assume that node\n\nhas label\n\n\n\n\n\n\n\n\n\n\n\n.\nNodes\n\n\n,\n\n\ndenote the two incoming neighbors of node\n\n, which\nhas label\n\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\n\nrespectively. Nodes\n\n\n,\n\n\ndenote the two outgoing neighbors of node\n\n, which has label\n\n\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\n\n\nrespectively. Nodes\n\n\n,\n\n, and\n\nare the pure slave nodes of\n\nin the scatternet. The label\nof node\n\n(\n\n\n\n) is\n\n\n\n\n\n\n\n\n\n\n\n\nwhere is\nthe MAC address of node\n\nin this piconet, and\n\n\nand\n\nhas public\nlabel\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n,\nrespectively, which is consistent with the prefix of\n\n\nand\n\n\nrespectively\n. Notice that the MACs of\n\n\nand\n\nin the piconet mastered\nby node\n\nare\n\nand\nrespectively, which are used only by nodes\nin this piconet and not broadcasted to the network.\n2\nv\n5\nv\n1\nv\n6\nv\n4\nv\n3\nv\n7\nu\nI\n1\nI\n2\n2\nO\nO\nv\n1\nFigure 4: An example of a static piconet (with nodes inside the\nshaded region) formed by our method. Here a master node is\ndenoted by a square, a pure slave is denoted by a circle, and a\nbridge slave is denoted by a triangle.\nAs will see later, our labeling rule makes the updating of the\nscatternet topology and nodes' labels much easier when new nodes\njoin the network or some existing nodes leave the network. For incremental\nupdating of the scatternet, there are two scenarios when\na new node joins the network. The first case is that there is a master\nnode who has free slot for a pure slave. We then directly assign\nthe newly joined node as the pure slave of that master node. The\nsecond case is that no master node has free slot for a pure slave. We\nthen have to split some piconet and in turn create some free slots\nfor pure slaves. The splitting of a piconet is performed such that\nthe resulting backbone (formed by master nodes and bridge slaves)\nis still a pseudo-balanced de Bruijn graph. When a piconet is aplit-ted\n, the labels of some nodes have to be updated. While updating\nthe topology, it is possible that some packets are already on their\nway to the destinations (via or toward this splitting piconet). Our\nlabeling rule makes sure that the packets can still be routed without\nany interruption, only the local nodes are assigned new labels, and\nthe re-labeling are also conducted locally.\n2.3\nStatic Scatternet Construction\nGiven\n\nnodes currently distributed in the network, the section\ngives an efficient algorithm to construct our de Bruijn based scatternet\ndBBlue, which has low diameter and bounded node degree\nproperty. In other words, we first study the construction of the scatternet\nfor a static\n\n-nodes network, which will serve the base for\nour dynamic construction.\nOur method will construct a balanced de Bruijn graph\n\n\nas the initial backbone of the network. We will choose integer\n\nsuch that\n\n\n\n\n\n. The choosing of\n\nguarantees that\nthere are enough bridge slave nodes, which implies that no master\nnode serves as bridge slave.\nOur method does not consider the detail of the neighbor discovering\nprocess. We assume that every node already knows the existence\nof the other nodes.\nA\nLGORITHM\n1.\nStatic DeBruijn-Based Scatternet\n1. Assume that there is a leader already among these\n\nnodes\n\n. The leader could be the node with smallest ID. We give\nthe token to the leader and call it token node. 
Token node\nrandomly selects\n\n\nnodes (including itself) into the master\nset\n\nwhich assumes the master role in final scatternet\ntopology, where\n\n\n\n\n\nand\n\nis the number of\nnodes in\n\n. Let\n\n\n\n\n\n\n\n\n, which is the total number\nof nodes that can be assigned as pure slaves.\n2. Token node assigns itself with label\n\n\n, and each node in\n\nwith a unique\n\nbits label in the range from\n\n\n\n\n\nto\n\n\n\n\n\n. The set of nodes\n\nforms a de Bruijn graph\n\n\nas the scatternet backbone.\n3. Token node, with label\n\n\n\n\n\n\n\n, selects\n\nnodes\n1\nfrom\nthe remaining as its bridge slaves, and assigns them labels\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\nrespectively. Here\n\n,\n\nwill also serve as the Medium Access Code (MAC) for\nthese two slaves in the piconet mastered by this token node.\nToken node uses its bridge slave node\n\n\n\n\n\n\n\n\nto\nconnect with its out-neighbor\n\n\n\n\n\n\n\n\n\n\nand the bridge\nslave node\n\n\n\n\n\n\n\n\nto connect the out-neighbor node\n\n\n\n\n\n\n\n\n\n\n.\n4. Assume that the current token node has label with value .\nThe token node selects\n\n\n\n\n\n\nnodes\n2\nfrom the\nremaining as its slaves and assigns them with labels\n\n\n\n\n\n\n\n\n,\n1\nThere are two special nodes\n\n\nand\n\n\n, which only have 1 out-neighbor\n, we then just use one bridge slave node to connect with its\nout-neighbor.\n2\nNode\n\n\nand\n\n\nmay choose\nnodes as its pure slaves since they\nonly have one in-neighbor and one out-neighbor.\n25\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\nin the order if they exist\n. Let\n\n\n\n\n\n.\nThen the token is passed to its successor.\n5. Repeat the above steps (3) and (4) until all nodes in\n\nare\nprocessed. After all nodes have been processed, the current\ntoken node passes the token back to node\n\n\nagain.\nOnce the initial topology construction is finished, the token node\n\nwill be responsible for the following node joining and leaving\nissues. Master nodes form the backbone of bluetooth scatternet,\nand a piconet works like a node in de Bruijn graph.\n111\n001\n011\n010\n100\n110\n000\n101\nFigure 5: dBBlue Bluetooth Scatternet.\nFigure 5 illustrates a dBBlue scatternet containing\nnodes based\non\n\n\ngraph.\nT\nHEOREM\n1. In dBBlue scatternet, each master has no more\nthan\nslaves and each slave assumes as bridge for at most\n\npiconets\n. And the number of piconets is at most\n\n\nand at least\n\n.\nMoreover, the computation cost is\n\n\nfor static construction.\nP\nROOF\n. From the topology construction, each master carries at\nmost\nsame prefix slaves, and at most\n\ndifferent prefix slaves from\nits in-neighbors since each node in\n\n\ngraph has at most\n\nin-neighbors\n, so each master has no more than\nslaves. And, each\nslave exists as a free slave or as the bridge between its same prefix\nmaster\n\nand one of\n\n's out-neighbors, so the degree of a slave node\nis at most 2.\nLet\n\n\n, where\n\n\nand\n\n\nis the number of\nmasters. Then\n\n\n\n\n\nimplies\n\n\n\n\n\n\n.\nThus,\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n.\nConsequently,\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which implies\n\n\n\n\n\n.\nIt is obvious that the total computation cost of constructing static\ndBBlue scatternet is\n\n\n.\nIn this paper we always assume a bluetooth piconet consists of at\nmost\nslaves and\n\nmaster. If future bluetooth technology allows\na master to bring more slaves, say\n\n, our scatternet construction\nmethod can adapt easily as follows. The scatternet backbone will be\nstill based on\n\n\nde Bruijn graph. 
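Independent of how many slaves each master carries, the backbone itself is the plain binary de Bruijn structure built by Algorithm 1. The following short Python sketch, an illustration under the stated assumptions rather than the authors' code, assigns each master a unique m-bit label and computes its de Bruijn out-neighbours by dropping the most significant bit and appending 0 or 1; each such backbone edge is the one realised by a bridge slave, and the all-zero and all-one labels are the two special nodes with a single out-neighbour.

def out_neighbours(label):
    # De Bruijn out-neighbours of x1..xm: drop the most significant bit and
    # append 0 or 1 (self-loops at 00..0 and 11..1 are discarded).
    shifted = label[1:]
    return [n for n in (shifted + "0", shifted + "1") if n != label]

def build_backbone(m):
    # Adjacency list of the balanced binary de Bruijn backbone on 2**m masters;
    # every listed edge is realised by one dedicated bridge slave.
    masters = [format(i, "0%db" % m) for i in range(2 ** m)]
    return {label: out_neighbours(label) for label in masters}

backbone = build_backbone(3)
print(backbone["000"])   # ['001']        -- a special node with one out-neighbour
print(backbone["101"])   # ['010', '011'] -- a typical node with two out-neighbours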
However,\n\nis chosen such\nthat\n\n\n\n\n\n\n. In other words, every master node will\ncarry\n\n\npure slaves and\nbridge slaves to connect to its two out-neighbors\nand two in-neighbors in the de Bruijn graph\n\n\n.\nIt is not difficult to show that using de Bruijn graph\n\n\nwill\ncreate a scatternet with less piconets than using\n\n\n\n\nfor\n\nsince each master node will carry less pure slaves in the later case.\nOn the other hand, the scatternet based on\n\n\n\n\nfor\n\ndoes\nprovide a better fault tolerance since the degree of each master node\nis increased to\n\n.\nDYNAMIC SCATTERNET UPDATING\nIn this section we describe a vigorous method to locally update\nthe scatternet topology dynamically when node joins or leaves the\nnetwork. Considering each piconet as an abstract node in the de\nBruijn graph, our goal is to maintain a scalable pseudo-balanced de\nBruijn graph.\n3.1\nToken Based Updating\nFirst consider the case when a node wants to join the network.\nWe have to assign a role for this newly joined node. There are several\npossible scenarios about the existing scatternet. (1) the existing\nscatternet has a master node that has free slave slots, then we can\nsimply assign this newly joined node as the pure slave of this master\nnode. (2) all master nodes in the existing scatternet already have\nslaves, we then have to expand the backbone of the scatternet to\nincorporate this newly joined node. In other words, we have to split\nsome piconet to two such that the two new piconets will have some\nfree pure slave slots to hold this newly joined node.\nSeveral methods can be used to implement the above scheme. To\nmake the updating efficient, we should be able to quickly find the\nmaster node with empty slot for pure slave if there is any. One approach\nis to keep the current scatternet compact and assign a special\nnode the token in a way such that all master nodes with label less\nthan the token node do not have empty slot, and all master nodes\nwith label larger than the token node do have empty slot. When\na new node joins the network, we can simply assign it the empty\npure slave slot and then update the token node if necessary. This approach\nis efficient for node joining but suffers more cost for node\nleaving. When a node leaves the network, we have to update the\nscatternet to keep the scatternet compact. Thus, we possibly have\nto move some nodes to fill the slot emptied by this left node.\nThe other approach is not to compact the scatternet. When a\nnode leaves, we do nothing if the backbone of the scatternet is untouched\n. However, this approach suffers a large cost when node\njoins the network since we have to find where to put the newly\njoined node. One method is to use the broadcast method to travel\nthe whole scatternet to find the master node with free pure slave\nslot. This may perform better if only a few of the existing piconets\nhave free slots. The other method is to randomly select a master\nnode and check if it has free slot. If it does not, we then select\nanother random master node until one such master node is found.\nThis approach performs better if the majority of the piconets have\nfree slots. We omit the detail of performance analysis here, which\nwill be presented in the full version of the paper.\nIn this paper, we will adopt the compact approach. 
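To make the compact, token-based bookkeeping concrete, here is a toy Python simulation; it is illustrative only, the capacities, labels and number of masters are made-up values, and node leaving and the expanding/shrinking statuses are not modelled. Masters are kept in label order, every master before the token holder is full and every master at or after it still has free pure-slave slots, so a join is settled at the token holder in constant time.

class Master:
    def __init__(self, label, capacity=5):   # pure-slave capacity: illustrative value
        self.label = label
        self.capacity = capacity
        self.pure_slaves = []

    def is_full(self):
        return len(self.pure_slaves) >= self.capacity

masters = [Master(format(i, "03b")) for i in range(8)]   # masters kept in label order
token = 0                                                # index of the current token holder

def join(new_node):
    # Under the compact discipline the token holder is the first master with a
    # free pure-slave slot, so the new node is simply attached to it.
    global token
    masters[token].pure_slaves.append(new_node)
    if masters[token].is_full() and token + 1 < len(masters):
        token += 1   # the token moves to the successor once this piconet is full
    # If every master is full, the real protocol switches the token status to
    # "expanding" and splits a piconet (Method 1) instead of overfilling it.

for i in range(12):
    join("v%d" % i)
print(masters[token].label, [len(mstr.pure_slaves) for mstr in masters[:3]])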
Before we\npresent the detail of our methods of updating the scatternet, we first\nstudy the possible status of the scatternet, which will be recorded\nin the token node.\nWhen a new node requests joining the network, there are three\npossible scenarios to be discussed.\n1. Current backbone is a balanced de Bruijn graph. Figure 6\nillustrates an example. The token is held by the master node\nwith the smallest label among all master nodes that have less\nthan\nsame-prefix slaves. In this status, the master node with\nthe token has some free slot for newly joined node and so do\nall master nodes with larger labels.\n2. Current backbone is pseudo-balanced de Bruijn graph\n\n\nunder expanding status, i.e., many nodes join the scatternet.\nFigure 7 illustrates an example. The token is held by the\nfirst master node with less than\nsame-prefix slaves in level\n\n\n\nif it exists, otherwise the first master node in level\n\nholds the token. In this status, all master nodes in level\n\n26\ntoken\ni-1\ni\ni+1\nFigure 6: Token in balanced de Bruijn graph.\nand\n\n\n\ndo not have free slots except the last two master\nnodes in level\n\n\n\n. In other words, at most two master\nnodes have free slots.\nlevel m\ntoken\ni\ni-1\ni+1\nlevel m+1\nFigure 7: Token in pseudo-balanced de Bruijn graph under expanding\nstatus.\n3. Current backbone is a pseudo-balanced de Bruijn graph\n\n\nunder shrinking status, i.e., many nodes leave the scatternet.\nFigure 8 illustrates an example. The token is held by the\nmaster node in level\n\nwith the smallest label. In this status,\neach master node in level\n\n\n\nand level\n\nhas\nand\n\nsame-prefix slave nodes respectively.\nlevel m+1\ni+1\ni-1\ni\ntoken\nlevel m\nFigure 8: Token in pseudo-balanced de Bruijn graph under\nshrinking status.\nThose statuses balanced, expanding, shrinking will be recorded\nin the token data structure.\n3.2\nNode Joining\nWhen a new node joins the network, there are three cases.\n1. Token status is balanced, that is to say, current backbone is a\nbalanced de Bruijn graph. See Figure 6 for an illustration.\n(a) The token node\n\n\n\n\nhas less than\nslaves. Then\nit simply adds the joining node into its slave set and\nassigns it a label\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, where\n\n\n\n\n\n\nis\none of the un-assigned MAC address in\n\n\n\n.\nIf the token node now has\nslaves, then it passes the token\nto its successor.\n(b) The token node is fully occupied by slaves. This could\nhappen only when all master nodes in the scatternet\nhave\nslaves. Then the token is passed back to node\n\n\nif it is not at node\n\n\n. Change the token status\nto expanding and call Method 1 to split the current piconet\nmastered by the token node into two parts and\nadd the joining node as a new pure slave with label\n\n\n\n\n\n\n\n\n\n.\n2. Token status is expanding, that is to say, current backbone is\na pseudo-balanced de Bruijn graph under expanding status.\nSee Figure 7 for an illustration.\n(a) If the token node is in level\n\n\n\n, i.e., with\n\n\n\nbits\nlabel\n\n\n\n\n, the it must has less than\nslaves.\nIt simply adds the joining node into its slave set and\nassigns it a label\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, where\n\n\n\n\n\n\nis one of the un-assigned labels in\n\n\n\n. If\nthe token node now has\nslaves, then passes the token\nto its successor.\n(b) If the token node is in level\n\n, i.e., with\n\n-bits label\n\n\n\n\n. This could happen only when all master\nnodes in the scatternet has been fully occupied by\nslaves. 
Call Method 1 to split the current piconet mastered\nby this token node into two piconets, and add the\njoining node as a new slave with label\n\n\n\n\n\n\n\n\n\n.\n3. Token status is shrinking, that is to say, current backbone\nis a pseudo-balanced de Bruijn graph under shrinking status\n. See Figure 8 for an illustration. In this case, token node\nsurely has exactly four slaves (see node leaving for more details\n). We first add the joining node as the slave of the token\nnode and assign it one of the un-assigned MAC addresses\nin\n\n\n\n. Call Method 1 to split current piconet\ninto two piconets, and pass token to the successor in level\n\n. If the current token node is\n\n\n, then set token status to\nbalanced and pass the token to master node\n\n\n. In other\nwords, we basically undo the updating (piconets merging)\ncaused by the previous node leaving event.\nWe then present our algorithm that split one piconet mastered by\nnode\n\n\n\n\n\n\n\nto two new piconets mastered by nodes\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\nrespectively.\nM\nETHOD\n1.\nPiconet split due to node joining\n1. Token node\n\n\n\n\n\n\n\n\npromotes its slave node\n\n\n\n\n\n\n\n\n\nas the master for a new piconet. We change\nthe label\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nof a pure slave node or a out-neighbor\nbridge slave node by simply appending\n\n\nin the\nMAC address, i.e., the new label is\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.\nTwo new piconets have master node with labels\n\n\n\n\n\n\n\n\nand\n\n\n\n\n\n\n\n\nrespectively. The detail of labeling and role\nupdating is as follows:\n(a)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which assumes\nmaster role in first piconet.\n(b)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which assumes\na bridge slave role in first piconet.\n(c)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which assumes\na bridge slave role in first piconet.\n(d)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which assumes\nmaster role in second piconet.\n(e)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which assumes\na bridge slave role in second piconet.\n(f)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which assumes\na bridge slave role in second piconet.\n27\nNotice this label extension still preserves their prefix. Thus,\nafter the piconet splitting, the message delivery will not be\ninterrupted at all because old addresses are still reachable\nsince the new label has same prefix. In addition, the nodes\nwith new labels with the corresponding MAC addresses will\nserve the bridge slave role in the two newly created piconets.\nFigure 9 illustrates the change while piconet splitting.\nm\n...\n0,101\nx\n1\nx\nm\n...\n1,010\nx\n1\nx\nm\n...\n0,010\nx\n1\nx\nm\n...\n1,101\nx\n1\nx\nm\n...\n0,001\nx\n1\nx\nm\n...\n0\nx\n1\nx\nm\n...\n1\nx\n1\nx\nm\n...\nx\n1\nx\nm\n...\n,001\nx\n1\nx\nm\n...\n,010\nx\n1\nx\nm\n...\n,100\nx\n1\nx\nm\n...\n,101\nx\n1\nx\nm\n...\n,110\nJoining\nv\nv\nu\nx\nu\n1\nx\nFigure 9: Piconet splits due to node joining.\n2. Then, both\n\nand\n\nneed reselect the bridge slaves to connect\nwith its in-neighbors and out-neighbors if needed. Simultaneously\n, both\n\nand\n\n's neighbors need reselect its same-prefix\nbridge slaves to connect with\n\nand\n\n. The selection\nstill follows the rule described in Section 2.2, Figure 3 illustrates\nall possible scenarios. Since the master nodes in\nthe new piconets are in level\n\n\n\n, each of them has at\nmost\n\nout-neighbors in the pseudo-balanced de Bruijn graph\n\n\n. Thus, we have enough bridge slave nodes for each\nnew piconet. 
The in-neighbor master nodes\n\n\n\n\n\n\n\n\n\n\n\n\n\n,\nwhere\n\n\nor\n\n, of node\n\nand\n\nin the de Bruijn graph\nhave to change one of its pure slave to bridge slave to connect\nwith node\n\nor\n\n. Notice this update is only restricted to\nlocal regions, so the update is totally localized.\n3. Finally, the token is still kept by the master node\n\n\n\n\n\n,\nwhose previous label is\n\n\n\n\n.\n3.3\nNode Leaving\nIf a node leaves elegantly, it should first notify the token node\nbefore leaving. If a master/slave node leaves because unexpected\nreason such as power off, all of its neighborhood will detect it soon\nand notify the token node. Our method does not consider the detail\nof the exception detection process, we assume the token node can\ndetect the node leaving in short time.\nWhen the token node detects the node leaving, then there are\nthree cases to be addressed again:\n1. Token status is balanced, that is to say, current backbone is a\nbalanced de Bruijn graph. Here two cases need be discussed:\n(a) If the token node does have pure slave node, then the\ntoken node requests one pure slave to replace the position\nof the leaving node, including the label;\n(b) If the token node\n\nhas no pure slave nodes, then it\npasses the token to its predecessor, say node\n\n. There\nare two scenarios also, which as discussed as follows.\ni. If node\n\nhas pure slaves, then it requests one pure\nslave to replace the position of the leaving node.\nii. If node\n\nalso has no pure slaves. This could happen\nonly when\n\n\n\n, and all master nodes have\nonly\n\nslaves serving bridge slave role. Token node\n\nchanges the token status to shrinking, and call\nMethod 2 to merge its corresponding critical pair,\nthen ask one pure slave to replace the position of\nthe leaving node.\n2. Token status is expanding, that is to say, current backbone is\na pseudo-balanced de Bruijn graph under expanding status.\n(a) If the token node is in level\n\n, i.e., with\n\n-bits label\n\n\n\n\n. This could happen only when all master\nnodes in the scatternet has been fully occupied by\nslaves. The token need be passed the predecessor,\nwhich will ask one pure slave node to replace the position\nof the leaving node.\n(b) If the token node is in level\n\n\n\n, i.e., with\n\n\n\nbits\nlabel\n\n\n\n\n. If the token node does have pure\nslave node, then the token node requests one pure slave\nto replace the position of the leaving node, otherwise\ntwo cases need be discussed here:\ni. The least significant bit of the token node's label\nis\n\n. The token will be passed to be passed the\npredecessor, which will ask one pure slave node to\nreplace the position of the leaving node.\nii. The least significant bit of the token node's label\nis\n\n. It first merges its corresponding critical pair\nby calling Method 2, then requests one pure slave\nto replace the position of the leaving node. Now\nif the current token node is\n\n\n\n\n\n, then it changes\nthe token status to balanced and passes the token\nto its predecessor\n\n\n\n\n\n.\n3. 
Token status is shrinking, that is to say, current backbone is\na pseudo-balanced de Bruijn graph under shrinking status.\n(a) If the token node is not\n\n\n\n\n\n, then it passes the token\nto its second predecessor with least significant bit\n\nin\nlevel\n\n\n\n, which will call Method 2 to merge its\ncritical pair piconet and ask one pure slave to replace\nthe position of the leaving node.\n(b) If the token node is\n\n\n\n\n\n, then it changes the token\nstatus to balanced and passes the token to node\n\n\n\n\n\n,\nwhich will ask one pure slave to replace the position of\nleaving node.\nOne special case is that token node leaves. In this case, the token\nnode will promote one of its pure slaves to replace it, i.e., be the\nmaster node and the new token node. If no new pure slave exits,\nsimilarly, we have to ask some pure slave node from its predecessor\nto replace its role. When the token node did not leave elegantly, it\nis more complicated and we need fault tolerance about the token\nnode, which is out of the scope of this paper.\nWe then describe our method to merge two piconets that are mastered\nby a critical pair.\nM\nETHOD\n2.\nPiconet merge due to node leaving\n1. Assume that token node\n\n\n\n\n\n\n\n\n\nrequests merging\nwith its sibling master node\n\n\n\n\n\n\n\n\n\n. The new piconet\nhas master node with label\n\n\n\n\n\n\n\n. Notice that node\n\nand node\n\neach has at most\n\nout-neighbors in the de\nBruijn graph. The label change will be achieved by simply\ndeleting the least significant bit as follows:\n(a)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which is the\nmaster node in the new piconet.\n28\n(b)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which is a pure\nslave node or the bridge slave node to connect master\nnode\n\n\n\n\n\n\n\n\nif it exists.\n(c)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which is the\nbridge slave node to connect master node\n\n\n\n\n\n\n\n\n,\nwhichever exists.\n(d)\n\n\n\n\n\n\n\n\n\nmoves to replace the leaving node\nposition.\n(e)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which is the\nbridge slave node to connect master node\n\n\n\n\n\n\n\n\n,\nwhichever exists.\n(f)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n, which is a pure\nslave node or the bridge slave node to connect master\nnode\n\n\n\n\n\n\n\n\nif it exists.\nNotice this label shrink still preserves the label prefix. Thus,\nafter the piconets merging, the message delivery will not\nbe affected at all because de Bruijn graph uses prefix based\nrouting, old addresses are still reachable by the same prefix\n. The piconets mergence will not cause any routing problem\nalthough the node label shrink is not acknowledged by\nother nodes. At the same time, the sibling master node\n\n\n\n\n\n\n\n\n\nleaves to replace the position of leaving node. To\ncontinue the message delivery for node\n\n, the new master\nnode\n\nwill keep the new label of\n\nfor a period of time and\nforwards the message targeted to\n\naccordingly. More detail\nis discussed in Section 4. Figure 10 illustrates the change of\nlabels by merging piconets.\nm\n...\nx\n1\nx\nm\n...\n,001\nx\n1\nx\nm\n...\n,101\nx\n1\nx\nm\n...\n,110\nx\n1\nx\nm\n...\n0,101\nx\n1\nx\nm\n...\n1,010\nx\n1\nx\nm\n...\n0,010\nx\n1\nx\nm\n...\n1,101\nx\n1\nx\nm\n...\n0\nx\n1\nx\nm\n...\n1\nx\n1\nx\nm\n...\n,010\nx\n1\nx\nm\n...\n1\nu\nv\nu\nv\nx\nreplace leaving node\n1\nx\nFigure 10: Piconets merge due to node leaving.\n2. Then, node\n\nneed reselect the bridge slaves to connect with\nin-neighbors and out-neighbors if needed. 
Simultaneously,\nthe neighboring master nodes of\n\nand\n\nneed reselect their\nsame-prefix bridge slaves to connect with\n\n. The selection\nstill follows the same rule described in Section 2.2, please see\nFigure 3 for an illustration for all possible scenarios. Notice\nthis update is totally localized.\n3. The token is now kept by the master node\n\n\n\n\n\n\n\n.\nIt is not difficult to prove the following theorem.\nT\nHEOREM\n2. Our method locally updates the dBBlue scatternet\nusing at most\n\n\n\ncommunications when a node joins or\nleaves the network. In most cases, the cost of updating the scatternet\nis actually\n\n\nsince the node can leave and join without\naffecting the remaining scatternet. The number of nodes affected\nwhen a node leaves or joins the network is always bounded from\nabove by a constant. Our method can construct the structure incrementally\nwhen the nodes join the network one by one.\n3.4\nBounded Network Size\nThe method described so far can incrementally construct the scatternet\nwhen the nodes join the network one by one and can update\nthe scatternet structure efficiently when nodes leave or join the network\neven frequently without affecting the worst case properties of\nthe scatternet. This method is efficient in most cases, however, it\ncould generate lots of merging and splitting of piconets in the worst\ncase: a node joins the scatternet which causes the splitting of a piconet\n, then a node leaves which in turn causes the merging of two\npiconets, and repeat joining, leaving.\nIn most applications, the size of the bluetooth network is often\nstable, for example, within\n\n\n\nfor a small constant\n\n. If\nthis is the case, we can apply the following approach to build the\nscatternet. First, we use Algorithm 1 to build a scatternet with\n\nnodes. When a new node joins the network, we first tries to find\nan empty pure slave slot for this node from the current token node.\nIf no empty slot, we then pass the token to the successor of the\ncurrent token node. When all master nodes in the scatternet have\nslaves, we will start to create another piconet to connect to the\ncurrent backbone. In other words, instead of having\n\npure slave\nnodes, a master node from the scatternet backbone will replace the\npure slave nodes by\n\npiconets (at maximum). We call such piconets\nassociated with the master node of the backbone. Clearly,\na backbone based on a balanced de Bruijn graph\n\n\ncould\nsupport from\n\n\n\n\nnodes to\n\n\n\nnodes without associating piconets\n. By associating piconets to the master nodes of backbone,\nthe number of nodes it can support is increased to\n\n\n\n\nsince we\ncan replace each pure slave node by a piconet of\nnodes.\nOne disadvantage of associating piconets to master nodes is that\nevery master node in the backbone will have to forward more messages\nthan the scatternet created by the method described previously\n. The other disadvantage is that when the network size goes\nbeyond its supported scope, the updating of the scatternet is more\ncostly than before. See the full version of the paper for more detail.\nROUTING IN SCATTERNET\nWe first describe the routing in the dBBlue scatternet with balanced\nbackbone. If both source and target nodes are masters, we\nassume the source master node\n\nhas label\n\n\n\n\n\n\n\n\n\nand the\ntarget master node\n\nhas label\n\n\n\n\n\n\n\n. 
According to the routing\nmechanism described in Section 2.1, node\n\nsimply forwards the\nmessage to its neighbor master node\n\n\n\n\n\n\n\n\n\n\n\n, relayed\nby their common bridge node\n\n\n\n\n\n\n\n\nif\n\n\n\nor by\n\n\n\n\n\n\n\n\nif\n\n\n\n. Then\n\n\nforwards the message again\nto its neighbor master node accordingly. Clearly, the message is\nguaranteed to reach the target in at most\n\nsteps. If the source\nnode is a slave, it first sends the messages to its master node. Notice\nthat pure slave node has only one master node and the bridge\nslave node has two master nodes. Then bridge slave node just randomly\npicks one master node. Similarly if the target node is a slave,\nthe message will be first forwarded to its master node. The procedure\nof routing message between these two master nodes is same as\nthe previous description. Clearly, the routing path from one master\nnode to another master node is at most\n\nhops. The longest\npath between two nodes happens from a slave node to another slave\nnode, which is at most\n\n\n\nhops. From\n\n\n\n, we have\n\n\n\n\n\n. Thus, the diameter of the de Bruijn-based scatternet\nis\n\n\n\n\n.\nT\nHEOREM\n3. For any two nodes in dBBlue scatternet, there is\na path with at most\n\n\n\n\nhops and such path can be found\nlocally based only on the labels of the source and target.\n29\nNotice that, two assumptions are made in our routing scheme\ndescribed above: (1) the source node knows the label of the target\nnode, and (2) the backbone of the scatternet is based on a balanced\nde Bruijn graph. We will not try to resolve the first assumption in\nthis paper, but discuss it briefly here. The labels of a node can be\nbroadcasted to the whole network if the nodes leaving and joining\nis not frequent, i.e., the labels of nodes do not change frequently.\nOr we can adopt a mechanism similar to the Domain Name Service\n(DNS): the labels are stored in a hierarchical manner and a node\ncan query the label servers to get the labels of the target nodes and\nthen cache them locally. Here, we discuss briefly how to perform\nbroadcast in de Bruijn graph such that it guarantees to reach each\nnode exactly once. We initiate the broadcast from node\n\n\n. Each\nnode with label\n\n\n\n\n\n\n\ncontinues forwarding the message\nto its out-neighbors. The nodes whose most significant bit is\n\nwill not forward the message. The broadcast basically works same\nas the breadth first search (BFS) in a binary tree. Clearly, a node\nwill only forward the message to nodes with larger labels. Thus, a\nnode receives the message exactly once. The communication cost\nof such broadcasting is exactly\n\nmessages.\nWe then discuss in detail how to route the packets when the scatternet\nbackbone is pseudo-balanced. Assume the source master\nnode\n\nhas label\n\n\n\n\n\n\n\n\n\n\n\nand the target master node\n\nhas label\n\n\n\n\n\n\n\n\n\n\n\n, where\n\n\n\n\n\n. Node\n\nwill\nforward the packet to its out-neighbor master node\n\nwith label\n\n\n\n\n\n\n\n\n\n, or\n\n\n\n\n\n\n\n\n\n\n\n, or\n\n\n\n\n\n\n\n\n\n\n\n\n\n. Notice\nthat since the labels of all nodes are a universal prefix set, we know\nthat exactly one of these three labels does exist. Consequently, the\ndiameter of pseudo-balanced de Bruijn graph is still\n\n\n\n. 
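The self-routing rule just described can be stated in a few lines: skip the bits the source label already shares with the target (the longest suffix of the source that is a prefix of the target), then repeatedly shift left and append the next target bit, each backbone hop being relayed by the bridge slave shared with that out-neighbour. The Python sketch below covers the balanced backbone and is an illustrative reconstruction rather than the authors' code; in the pseudo-balanced case the same shift-and-append step applies, except that exactly one of the three candidate out-neighbour labels exists because the labels form a universal prefix set.

def overlap(src, dst):
    # Length of the longest suffix of src that is also a prefix of dst.
    for length in range(min(len(src), len(dst)), -1, -1):
        if length == 0 or src[-length:] == dst[:length]:
            return length

def route(src, dst):
    # Master-to-master path in the balanced de Bruijn backbone: skip the shared
    # bits, then shift left and append one target bit per hop.
    path, current = [src], src
    for bit in dst[overlap(src, dst):]:
        current = current[1:] + bit   # this hop is relayed by the bridge slave
        path.append(current)          # shared with that out-neighbour master
    return path

print(route("000", "101"))   # ['000', '001', '010', '101'] -- three hops
print(route("110", "100"))   # ['110', '100'] -- the shared "10" saves two hops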
The\nbridge slave node from\n\nto\n\nhas MAC (1)\n\nif a master node\nwith label\n\n\n\n\n\n\n\n\n\nexists; or (2)\n\n\n\n\n\n\nif a master node with\nlabel\n\n\n\n\n\n\n\n\n\n\n\nexists; or (3)\n\n\n\n\n\n\nif a master node with\nlabel\n\n\n\n\n\n\n\n\n\n\n\n\n\nexists. Review Section 2.2 for more detail\nabout the rules of labeling nodes and assigning MAC addresses in\na piconet. A shorter route is obtained by looking for the longest\nsequence that is suffix of\n\n\n\n\n\n\n\n\n\nand prefix of\n\n\n\n\n\n\n\n\n\n.\nFor the purpose of illustration, let's see how we route packets\nfrom master node\n\n\n\n\n\n\n\n\nto master node\n\n\n\n\n\n\n\n\nin the scatternet based on the de Bruijn graph illustrated in\nFigure 2. First, the master node\n\nchecks the labels of all\nout-neighbor master nodes and finds that master node with label\n\n\n\n\n\n\nexists. Then it forwards the packet to master node\n\nvia the bridge slave node with MAC\n\n. Similarly, master\nnode\n\n\n\n\n\n\nforwards the packet to master node with\nlabel\n\n\n\n\n\n\nvia the bridge slave with MAC\n\n. Finally\n, the master node\n\n\n\n\n\n\nforwards the packet to node\n\n\n\n\n\n\n\n\nvia the bridge slave with MAC\n\n. Notice that\nthe last step it takes a shorter path other than via another master\nnode\n\n\n\n\n\n\n\n\n.\nAt last, we discuss how to route the messages while the scatternet\nis on updating due to nodes leaving or joining the network. When a\nnode joins the network, the piconet mastered by the token node may\nbe split into two piconets. Clearly, the message still can be routed\nsince the labels of the two newly created piconets are the children\nof this token node. Similarly, when two piconets are merged to create\na new piconet, the label-based routing still successfully route\nthe packets. The remaining case is that when a node leaves, we\nmay need find a pure slave node\n\nfrom the current token node\n\nto\nfill the space emptied by this left node. When a message targeted\nto node\n\nreaches the piconet mastered by the token node\n\n, node\n\nhas already been moved. To remedy this, we apply a mechanism\nsimilar to the mail-forwarding service provided by the post-office:\nthe master node\n\nwill keep a record of the nodes moved to other\npiconets and its new label within a time window. When a message\ntargeted for\n\nreaches, the master node forwards the message to the\nnew destination and also acknowledges the source node of the new\nlabel of\n\n. The source node will then cache the label of node\n\nif\nit is frequently used. To decrease messages forwarding, every master\nnode could record the frequency that a slave node receives messages\nfrom other node. When a pure slave node is visited frequently\nby other nodes, then we switch its role with one of the bridge slaves\nwith same prefix and broadcast the new labels of these two nodes\nto the network. When we have to move a pure slave node to other\npiconet to make the scatternet compact, the pure slave node is the\nleast frequently visited nodes among the current piconet.\nRELATED WORK\nZaruba, Basagni and Chlamtac [15] proposed two protocols for\nforming connected scatternet. In both cases, the resulting topology\nis termed a bluetree. The number of roles each node can assume\nis limited to two or three. The first protocol is initiated by a single\nnode, called the blueroot, which will be the root of the bluetree. A\nrooted spanning tree is built as follows. The root will be assigned\nthe role of master. 
Every one hop neighbor of the root will be its\nslave. The children of the root will be now assigned an additional\nmaster role, and all their neighbors that are not assigned any roles\nyet will become slaves of these newly created masters. This procedure\nis repeated recursively till all nodes are assigned. Each node is\nslave for only one master, the one that paged it first. Each internal\nnode of the tree is a master on one piconet, and slave of another\nmaster (its parent in the initial tree). In order to limit the number of\nslaves, they [15] observed that if a node in unit disk graph has more\nthan five neighbors, then at least two of them must be connected.\nThis observation is used to re-configure the tree so that each master\nnode has no more than\nslaves. If a master node has more\nthan\nslaves, it selects its two slaves\n\n\nand\n\n\nthat are connected\nand instructs\n\n\nto be master of\n\n\n, and then disconnects\n\n\nfrom\nitself. Such branch reorganization is carried throughout the network\n. However, whether this approach will terminate is not proved\nin [15]. Tan et al. [14] proposed a similar method for single-hop\nnetwork. In the second protocol [15], several roots are initially selected\n. Each of them then creates its own scatternet as in the first\nprotocol. In the second phase, sub-tree scatternets are connected\ninto one scatternet spanning the entire network. Notice that the tree\ntopology suffers from a major drawback: the root is a communication\nbottleneck as it will be overloaded by communications between\nthe different parts of the tree. Obviously, the root node in the\ntree-based scatternet is the bottleneck of the network and its congestion\nis\n\n\n, assuming that total traffic demand is a unit and is\nuniformly distributed. In addition, dynamic updating that preserves\ncorrect routing is not discussed in these protocols.\nLaw, Mehta and Siu [9] described an algorithm that creates connected\ndegree bounded scatternet in single-hop networks. The final\nstructure is a tree like scatternet, which limits efficiency and robust-ness\n. A single-hop Bluetooth scatternet formation scheme based on\n1-factors is described in [1]. However, piconets are not degree limited\nin that scheme.\nSalonidis et al. [13] proposed another topology construction algorithm\nrecently. It first collects neighborhood information using\nan inquiry procedure, where senders search for receivers on randomly\nchosen frequencies, and the detected receivers reply after\nrandom backoff delay. Leader is elected in the process, one for\neach connected component. Leader then collects the information\nabout the whole network, decides the roles for each node, and distributes\nback the roles. In other words, basically, it is a centralized\napproach. Thus, the solution is not scalable, and not localized.\n30\nMoreover, how to assign the roles is not elaborated in [13]. They\nalso assume up to\n\nnodes in the network. Another centralized solution\nfor single-hop networks, where the traffic between any pair\nof nodes is known a priori, is described in [10].\nSun, Chang and Lai [11] described a self-routing topology for\nsingle-hop Bluetooth networks. Nodes are organized and maintained\nin a search tree structure, with Bluetooth ID's as keys (these\nkeys are also used for routing). It relies on a sophisticated scatternet\nmerge procedure with significant communication overhead for\ncreation and maintenance. Bluerings as scatternets are proposed in\n[4]. 
Ring structure for Bluetooth has simplicity and easy creation as\nadvantage, but it suffers large diameter (i.e., the maximum number\nof hops between any two devices) and large number of piconets.\nThe works are most related to our dBBlue scatternet construction\nmethod is [2] and [7].\nBarriere, Fraigniaud, Narajanan, and Opatrny [2] described a\nconnected degree limited and distributed scatternet formation solution\nbased on projective geometry for single-hop networks. They\nassume that only slave nodes can act as bridges. They described\nprocedures for adding and deleting nodes from the networks and\nclaimed that its communication cost is\n\n\n\n\n\n\nand\nthe computation cost is\n\n\n\n\n\n\n\n\n, where\n\nis the number\nof nodes in the network. The degree of the scatternet can be\nfixed to any\n\n\n\n, where\n\nis a power of a prime number. However,\nin their method, every node need hold information of the projective\nplane and the master node who has the \"token\" needs to know\nthe information of the projective scatternet (which label should be\nused for the new coming master and which existing nodes need to\nbe connected to it). However, the authors did not discuss in detail\nhow to compute the labels for the new master and its slaves, and\nwhat will happen when the number of nodes reaches the number of\nnodes of a complete projective scatternets.\nNotice that our dBBlue scatternet can be easily transformed to\nsupport a Bluetooth network in which a piconet has any number\n\nof slaves, while the method in [2] can only support the piconet\nwith\n\n\nslaves where\n\nis a power of a prime number. Moreover\n, the dynamic updating cost of dBBlue is at most\n\n\n\n.\nThe construction of dBBlue scatternet is inspired by the method\nproposed by Fraigniaud and Gauron [7] for constructing a network\ntopology for P2P environment based on de Bruijn graph. When a\nnode\n\njoins the P2P network, it [7] randomly selects a node\n\nin the\nde Bruijn graph and then creates two children nodes of\n\n: one for\n\nand one for\n\n. This random selection of node\n\ncannot be applied\nto Bluetooth scatternet since it may create a de Bruijn graph with\nnode whose degree is large than . It is not difficult to show that for\nBluetooth scatternet, we can only afford the de Bruijn graph whose\nnode label lengths differ by at most\n\n. In this paper, we proposed\na novel method for assigning MAC addresses to nodes such that a\nself-routing is still possible during the updating procedures when\nnode leaves or joins the network. The de Bruijn graph is used as\nbackbone of the scatternet in our dBBlue structure.\nCONCLUSION\nIn this paper, we addressed the problem of scatternet formation\nfor single-hop Bluetooth based ad hoc networks, with minimal\ncommunication overhead. We adopted the well-known structure de\nBruijn graph to form the backbone of the dBBlue scatternet. The diameter\nof the scatternet dBBlue is\n\n\n\nand we can find a path\nwith at most\n\n\n\nhops between every pair of nodes without\nusing any routing table. Moreover, the congestion of every node is\nat most\n\n\n\n\n. We discussed in detail the method to locally\nupdate the structure dBBlue using at most\n\n\n\ncommunications\nwhen a node joins or leaves the network. In most cases, the\ncost of updating the scatternet is actually\n\n\n. Our method can\nconstruct the structure dBBlue incrementally when the nodes join\nthe network one by one. 
No previous method can guarantee all of these properties simultaneously, although some methods achieve a subset of them. The dBBlue scatternet has a lower dynamic updating cost than the structure proposed in [2].
Notice that, instead of having three statuses for the token, we could require that the scatternet is always in the expanding status. The scenarios for updating the scatternet when nodes join or leave the network then become simpler, but at a possibly higher updating cost: more merging and splitting of piconets will occur. We are currently investigating the tradeoffs of the three approaches described in this paper by conducting simulations on different models of nodes joining and leaving the network. We are also investigating scatternets formed on the butterfly structure [6] and will compare their performance with the one described here. Notice that the butterfly structure has bounded node degree, which maps exactly to the degree requirement of a bluetooth piconet.
REFERENCES
[1] S. Baatz, S. Bieschke, M. Frank, P. Martini, C. Scholz, and C. Kuhl. Building efficient bluetooth scatternet topologies from 1-factors. In Proc. IASTED Wireless and Optical Communications (WOC), 2002.
[2] L. Barriere, P. Fraigniaud, L. Narajanan, and J. Opatrny. Dynamic construction of bluetooth scatternets of fixed degree and low diameter. In 14th ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 781-790, 2003.
[3] S. Basagni, R. Bruno, and C. Petrioli. Device discovery in bluetooth networks: A scatternet perspective. In Proc. IFIP-TC6 Networking Conference, Networking 2002, 2002.
[4] F. Chun-Choong and C. Kee-Chaing. Bluerings - bluetooth scatternets with ring structure. In Proc. IASTED Wireless and Optical Communications (WOC), 2002.
[5] N. de Bruijn. A combinatorial problem. In Koninklijke Nederlandse Academie van Wetenschappen, 49, pages 758-764, 1946.
[6] D. Malkhi, M. Naor, and D. Ratajczak. Viceroy: a scalable and dynamic lookup network. In Proceedings of the 21st ACM Symposium on Principles of Distributed Computing (PODC), 2002.
[7] P. Fraigniaud and P. Gauron. The content-addressable network d2b. Technical Report TR-LRI-1349 (also appeared in 22nd ACM Symp. on Principles of Distributed Computing (PODC)), 2003.
[8] J. C. Haartsen. The bluetooth radio system. IEEE Personal Communications, 7:28-36, 2000.
[9] C. Law, A.K. Mehta, and K.Y. Siu. Performance of a new bluetooth scatternet formation protocol. In Proc. ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), pages 183-192, 2001.
[10] D. Miorandi and A. Zanella. On the optimal topology of bluetooth piconets: Roles swapping algorithms. In Proc. Mediterranean Conference on Ad Hoc Networks (MedHoc), 2002.
[11] M.T. Sun, C.K. Chang, and T.H. Lai. A self-routing topology for bluetooth scatternets. In 2002 International Symposium on Parallel Architectures, Algorithms and Networks (ISPAN '02), 2002.
[12] C. Petrioli and S. Basagni. Degree-constrained multihop scatternet formation for bluetooth networks. In Proc. IEEE GLOBECOM, 2002.
[13] T. Salonidis, P. Bhagwat, L. Tassiulas, and R. LaMaire. Distributed topology construction of bluetooth personal area networks. In Proc. IEEE INFOCOM, 2001.
[14] G. Tan, A. Miu, J. Guttag, and H. Balakrishnan. Forming scatternets from bluetooth personal area networks. Technical Report MIT-LCS-TR-826, MIT, 2001.
[15] G.V. Zaruba, S. Basagni, and I. Chlamtac. Bluetrees - scatternet formation to enable bluetooth based ad hoc networks. In Proc.
IEEE\nInternational Conference on Communications(ICC), 2001.\n31\n", "keywords": "scalable MAC assignment;scatternet formation;Low Diameter;ad hoc networks;self-routing;const updating cost;de Bruijn graph;Bluetooth;Network topology;Self-routing Scatternet;Bluetooth networks;Bruijn graph;equal traffic;easy updating;low diameter;single-hop"} {"name": "66", "title": "Development of E-commerce Statistics and the Implications", "abstract": "This text has analyzed the development of E-commerce in some developed countries such as Canada, U.S.A., Japan, etc and put forward several suggestions on how to set up the system of E-commerce in our country taking the national conditions of our country into account.", "fulltext": "INTRODUCTION\nSince the 1990s, the rapid development of e-commerce has\nbrought extensive, enormous and far-reaching influence on the\neconomy of the countries all over the world. E-commerce has\nalready become the contemporary chief trend of economic and\nsocial development. As representatives of advanced productivity\nof new economic period, the level of its development has\nalready become important signs of measuring the modernization\nlevel and comprehensive strength of countries and cities; it has\nbecome important means to make changeover in the economic\nsystem and reform the style of economic, promote the upgrading\nof the industrial structure, promote the modernized level of the\ncity and strengthen international competitiveness. So, the\ngovernments all over the world have paid close attention to the\ndevelopment of E-commerce Statistics.\nThough the development of informationization in our country is\nvery quick, it still has great disparity with the developed\ncountries for relatively late start. Our country is still in the\ninterim facing the double task of informationization and\nindustrialization at present. So, in order to carry on an instant,\naccurate Statistics to the development level of E-commerce and\nset up perfect E-commerce Statistics system, we must\nunderstand, absorb and bring in the theories and methods of\nE-commerce Statistics from the main foreign countries to make\nE-commerce Statistics become effective guarantee in leading\nand promoting e-commerce in a healthy way, combining social\nsource and promoting national power.\n\nDEVELOOPMENT STATES OF E-COMMERCE STATISTICS IN THE WORLD\nWe have chosen some representative countries in the world and\nanalyzed the development of E-commerce Statistics in these\ncountries.\n2.1\n\nDefinitions of e-commerce in main\nDeveloped countries.\nThe definition of e-commerce is the standard of carrying on\nE-commerce Statistics, but there are various kinds of definition\nof E-commerce in the world because of visual angles are\ndifferent. So, it is necessary for each country to make a distinct,\nstandard, practical, wide meaningful and measurable definition\nwhich should be suitable for each field and can be amenable to\ntime.\n2.1.1\n\nDefinition of e-commerce in OECD\n(Organization for Economic Cooperation and\nDevelopment)\nThere are broadly-defined e-commerce and the\nnarrowly-defined e-commerce. The broadly-defined\ne-commerce means the activity of electronic transaction on item\nand service, no matter the transaction occurred between\nenterprise, family, government and other public or individual\norganizations. It uses network as a intermediary. 
Goods and\nservice should be ordered on the network but the payment and\ngoods service don't need to carry on the net; the\nnarrowly-defined e-commerce is only referred to trade activity\ncarrying on through Internet.\nIt is worth pointing out that the source of OECD definition of\ne-commerce is the Canada official Statistical department.\n2.1.2\n\nDefinition of e-commerce in Canada\nThe e-commerce definition of Canada official Statistical\ndepartment is: E-commerce is the transaction which based on\nthe computer network, including the transformation of\nownership, transformation of tangible and intangible assets right\nto use. It consists of B2B (business to business), B2C (business\nto consumer), B2G (business to government) and G2C\n(government to consumer). But the transaction taking place\ninside enterprises will not be included in the e-commerce\nStatistics.\n2.1.3\n\nDefinition of e-commerce in the U.S.A.\nDefinition of e-commerce in the U.S.A. is defined by the U.S.A.\ngeneral survey bureau who divides e-commerce into three parts\nfrom angle of the overall situation: e-commerce infrastructure;\nelectronic affairs; e-commerce.\nE-commerce infrastructure is the economic facility or\nequipment which is used for supporting electronic affairs or\nelectronic transaction activities.\nElectronic affairs include the affairs managed by computer\nnetwork in company, government and other non- profit\norganization.\nE-commerce refers to goods or service transaction activity\ncompleted on computer network.\n70\n2.2\n\nOverview and Characteristic of Main\nCountry's e-commerce Statistics\n2.2.1\n\nOverviews of e-commerce Statistical Surveys\n2.2.1.1\n\nOverviews of Canadian e-commerce\nStatistical Survey\nThe e-commerce Statistics in Canada is an official activity that\nwas presiding over by government and implemented concretely\nby State Statistics Bureau of Canada. Up till now, Canada has\nimplemented four pieces of different e-commerce Statistics.\na) \"Net-banking operation and bank service Statistics survey\non internet and e-commerce application in financial department",\nthis investigation is an irregular survey; its respondents are\nenterprises of the financial field and its nature is a separate\ninvestigation;\nb) \"Annually Statistical survey on internet application in family\n", is a fixed annual Statistical survey. It is a supplemented\ninvestigation and its respondents are families.\nc) \"The Statistics survey on communication technology and e-\ncommerce", is an irregular Statistical survey; it is a supplement\ninvestigation and the respondents are enterprises in "the\nstandard industry of North America classifies\"\nd) \"Annually Statistical survey on e-commerce and relevant\ntechnology ", is a fixed annual Statistical survey; it is a\nsupplemental investigation and the respondents are enterprises\nin \"the standard industry of North America classifies\"\n2.2.1.2\n\nOverviews of e-commerce Statistical survey\nin U.S.A.\nU.S.A. is one of the countries that e-commerce and e-commerce\nStatistical survey launched earliest in the world. The U.S.A.\ngeneral survey bureau is the principal organ responsible for\ne-commerce Statistical survey.\nThe annually Statistical survey adopted by U.S.A. general\nsurvey bureau is consisted of annual sample investigation of\ncommerce, annual sample investigation of manufacturing\nindustry, annual sample investigation of retailing business and\nannual sample investigation of service trade. 
The method taken\nin these investigations is dividing layer and sampling. The\nconcrete method in e-commerce Statistical survey is joining the\nquestions of e-commerce into the existing questionnaire except\nthe annually sample investigation of manufacturing industry\nwhich is joining the supplementary questionnaire. These\nrespondents investigating are enterprises and the enterprise\ne-commerce activity, business procedure and sales amount are\ninvestigated on the foundation of existing investigation.\n2.2.1.3\n\nOverview of e-commerce Statistical survey\nin other countries\n2.2.1.3.1\n\nOverview of e-commerce Statistical\nsurvey in Japan\nIn Japan, the departments in chare of e-commerce Statistical\nsurvey is Statistics Bureau, Ministry of Internal Affairs and\nCommunication, Japan, but other departments participate in the\ne-commerce Statistical survey such as Statistics Bureau of\nCabinet, Statistics Bureau of Ministry of Economics and\nIndustry, etc. So, there are more than forty kinds of official\ninvestigation on e-commerce which involve every aspect of\ne-commerce but have great differences in purpose, frequency\nand content. These investigations launch around three\ndepartments including enterprises, governments and families.\n2.2.1.3.2\n\nOverview of e-commerce Statistical\nSurvey in S.Korean\nStatistics bureau of the S.Korean began the official e-commerce\nStatistical survey since April of 2000. The investigation mainly\nconcentrates on B2C (business to consumers) and B2B\n(business to business). The investigation on B2G (business to\ngovernment) lags behind slightly, which began since the first\nquarter of 2001.\n2.2.2\n\nCharacteristics of the e-commerce Statistical\nSurvey in each country.\na)\n\nThe organizers carrying on e-commerce Statistical survey in\nthe above-mentioned countries are all official departments, or\nimplemented by cooperating with other relevant government\ndepartments (as Japan). The Statistical survey presided over by\nthe government can not only strengthen its Fairness and\ndependability, but also give the survey authoritativeness.\nb)\n\nThe investigations almost are not specially but\nsupplementary. The main reasons are high cost of special\ninvestigations and not perfect e-commerce Statistical systems of\neach country which have not reach the level of special\ninvestigation.\nc)\n\nThe above-mentioned countries confirm the content of\ninvestigation not only consulting the content that OECD\nrecommends, but also considering the development level and\ncharacteristics of the national e-commerce. It is worth pointing\nout those indexes of Statistical survey in Singapore, Canada and\nU.S.A. are comprehensive and have involved the preferential\ninvestigation content that OECD recommends.\nd)\n\nMost investigations take the annually survey as the core, but\nthere also are monthly, quarter, general survey and irregular\nsurveys. The industries included in monthly and quarterly\ninvestigation are not more than on generally, such as "monthly\ntrade sample investigation of retail business" in U.S.A. and "the\ninvestigation on family consumption trend", etc.\ne)\n\nMost countries adopt the sample investigations, but other\nmethod as census and census combine with sample investigation\nare also adopted. 
The method of sampling is mainly used and\nthe following two kinds are used less.\n\nIMPLICATIONS OF E-COMMERCE STATISTICAL SURVEY IN OUR COUNTRY\nThe e-commerce in our country is still in the elementary stage\nand the e-commerce Statistical survey is just start too. There are\njust some semi-official or unofficial departments and\norganization trying to carry on e-commerce Statistical survey\nbut not a formal, overall, official survey on e-commerce in our\ncountry. For instance: "Statistical Reports on the Internet\nDevelopment in China", "CII research and calculating on\ne-commerce total index system in China", "Statistical survey on\nintranet and e-commerce development level", "investigation on\ne-commerce developing in enterprises ", etc.\nMost of above mentioned investigations are irregular, even once\nonly, lack unified consideration and can't form a system except\n"Statistical Reports on the Internet Development in China"\nwhich hold regularly and establishes its own system to a certain\n71\nextent. Meanwhile, the unofficial survey is very apt to the\nsystemic deviation and utility nature for it is not mandatory;\neven affect the fairness, accuracy and representativeness of the\ninvestigation result.\n3.2\nImplications of e-commerce Statistical\nsurvey in China\nAccording to the experience of some foreign countries that\ncarrying on e-commerce Statistical and the development of\ne-commerce Statistical in our country, we consider that if we\nwant to set up a comparatively perfect e-commerce Statistical\nsurvey system, we should accomplish the following several\npoints at least:\n3.2.1\n\nAttach importance to the definition of\ne-commerce\nThe kind, range and respondents are all fixed according to the\ndefinition of e-commerce which is prerequisite of e-commerce\nStatistical survey. There is not an authoritative definition of\ne-commerce in our country so that the key problem we met is\nhow to define e-commerce when we carrying on the\ne-commerce Statistical survey.\nWe consider that open principle should be followed when\ndefining e-commerce according to its characteristic of appearing\nlate and excessive growth, in order to perfect it constantly with\nthe development of e-commerce.\n3.2.2\n\nThe government should take charge of\ne-commerce Statistical survey\nWe could understand the development of e-commerce prompt\nand accurate, find the questions existing in e-commerce and\npredict the development trend according to the e-commerce\nStatistical. It is obvious that e-commerce Statistical is important\nto the sound development of e-commerce. E-commerce could be\npromoted by just and accurate Statistical survey but the\nunilateral and utilitarian Statistical survey will mislead even\nhamper it.\nHowever, the e-commerce Statistical survey of our country\nlacks the authoritativeness and mandatory at present even\naffected the fairness and accuracy of Statistical survey. So, the\nStatistical survey of e-commerce in our country should be\nincluded in the official Statistical development plan as early as\npossible and we should set up the official survey system of\ne-commerce in order to make the e-commerce Statistical survey\nauthoritative and promote the development of it.\n3.2.3\n\nAccelerate the research of the e-commerce\nStatistical theory\nThe problem which should be considered first in research on\nStatistical theory of e-commerce is to keep the continuity with\ntraditional Statistical. 
E-commerce statistics were not, after all, created out of nothing; they are the extension of traditional statistics onto the network, so the basic theories of traditional statistics remain suitable for the e-commerce statistical survey.\nSecondly, we should carry out further research on the statistical methods, statistical definitions and statistical scope of e-commerce, and then establish an index system for e-commerce statistics as soon as possible.\nMoreover, e-commerce crosses regional boundaries. We should do our best to keep our research on e-commerce statistical theory in harmony with the rest of the world, because a comprehensive and complete system of e-commerce statistics requires the joint efforts of countries all over the world.\n3.2.4\n\nThe service provided by the e-commerce statistical survey should be both comprehensive and targeted\nThe e-commerce statistical survey should serve not only the macro-level strategic policies of countries but also the micro-level operations of enterprises. Meanwhile, there should be different surveys suited to the different respondents, in order to offer a personalised statistical service. Only in that way can we provide a good environment for the development of e-commerce and realise the value of the e-commerce statistical survey.\n\nREFERENCES\n[1]\nSeminar on "Research on e-commerce statistical survey and application". "Statistical Surveys and Application of E-Commerce in Canada" [J]. China Statistics, 2003, No. 3 and No. 4.\n[2]\nSeminar on "Research on e-commerce statistical survey and application". "Survey of e-commerce development in South Korea" [J]. China Statistics, 2003, No. 5.\n[3]\nSeminar on "Research on e-commerce statistical survey and application". "Statistical Surveys and Application of E-Commerce in Japan" [J]. China Statistics, 2003, No. 6.\n[4]\nSeminar on "Research on e-commerce statistical survey and application". "Statistical Surveys and Application of E-Commerce in the U.S.A." [J]. China Statistics, 2003, No. 7.\n[5]\nUnited Nations Trade and Development Board. "Research paper of e-commerce development all over the world" [M]. Translated by Juanying Zhu and Bingzhi Yang. 2003.\n[6]\nFeng Cui. "The application of IT in statistics" [M]. Lixin Accounting Publishing House, 2003.\n", "keywords": "Development stage;Definition of E-commerce;Survey;Authority;Statistics;Statistical methods;Measurement;Implications;E-commerce Statistics;Statistical Survey;China;E-commerce"} {"name": "67", "title": "Development through communicative action and information system design: a case study from South Africa", "abstract": "Many authors have recognised the importance of structure in shaping information system (IS) design and use. Structuration theory has been used in IS research and design to assist with the identification and understanding of the structures in which the IS is situated. From a critical theoretical perspective, focusing on Habermas' theory of communicative action, a community-based child health information system was designed and implemented in a municipality in rural South Africa. The structures which shaped and influenced the design of this IS (the restructured health services and social tradition) are explored and discussed. 
From this case study the implications of using IS design as a developmental tool are raised: namely the development of a shared understanding, the participation of key players and the agreement on joint action.", "fulltext": "INTRODUCTION\nMany authors [Walsham and Sahay 1996; Walsham and Han 1991; Jones 1997; Rose 1999; Orlikowski 1992;\nOrlikowski and Baroudi 1991; Orlikowski and Robey 1991] have recognised the importance of structure in shaping\ninformation system (IS) design and use. Structuration theory has been used in IS research and design to assist\nwith the identification of the structures in which they are situated. Using this meta-analysis tool, information\nsystems have been used to redefine and/or reinforce some of these structures. The IS design process is particularly\nimportant, not just in shaping the structures, but also in terms of understanding what structures exist and how\nthey were formed.\nCritical approaches to IS examine those structures with the perspective of questioning and changing some of\nthem. Critical social researchers seek to emancipate people by finding alternatives to existing social conditions\nas well as challenging taken-for-granted conditions. In particular, Habermas [1987] examines communication and\nhow through striving for an ideal speech situation these structures can be challenged. In the process of IS design\ncommunication is especially important, as is who participates, and how.\nIn this paper the author explores the existing structures which have contributed to the accessibility, or as the\ncase may be inaccessibility, of the health services in the Okhahlamba municipality, KwaZulu-Natal, South Africa.\nThrough the design of the community-based child health information system these structures were explored and\naddressed throughout the design process. Communication and participation were integral to the process, as well\nas the recognition of the importance of the context in which the system is designed.\nThe rest of this paper is structured in the following manner. The following section looks at what is meant\nby structure, the process of structuration and its application to IS design. The third section looks at critical\nsocial theory in IS design, in particular Habermas' notion of communicative action. The fourth section outlines\nthe existing structures in a community in KwaZulu-Natal that were important in shaping the IS design process.\nThe fifth section explores how the process of IS design acknowledged and challenged these structures and the\nlast section discusses the implications for IS design as a developmental tool.\nAuthor Addresses:\nElaine Byrne, School of Public Health, University of the Western Cape, PBag X17, Bellville, 7535, South Africa,\nelainebyrne@telkomsa.net\nPermission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that\nthe copies are not made or distributed for profit or commercial advantage, that the copies bear this notice and the full citation on the\nfirst page. Copyrights for components of this work owned by others than SAICSIT or the ACM must be honoured. Abstracting with\ncredit is permitted. 
To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission\nand/or a fee.\nc 2003 SAICSIT\nProceedings of SAICSIT 2003, Pages 8392\n84\n\nElaine Byrne\nFigure 1.\nDimensions of duality of structure.\n\nIS DESIGN AND STRUCTURATION\nIn this paper structure is regarded as 'Rules and resources, recursively implicated in the reproduction of social\nsystems. Structure exists only as memory traces, the organic basis of human knowledgeability, and as instantiated\nin action' [Giddens 1993] p377. That is, through action, based on rules and resources in peoples' minds, structures\nin society are produced and reproduced. The rules and resources drawn upon in the production and reproduction\nof action are simultaneously the means of system reproduction (this is what Giddens refers to as the 'duality\nof structure'). The rules can be viewed as generalised procedures of action and human agents are aware and\nknowledgeable of these rules, but may not know what the outcome of that action will be because action can have\nboth intended and unintended consequences. The resources are both authoritative (coordination of the activity\nof human agents) and allocative (control of material aspects of the natural world), so both human and material\nresources are included.\nThe process of structuration involves knowledgeable actions of human agents discursively and recursively\nforming the sets of rules, practices and routines which, over time and space constitute structure. Thus agents\nand structures are not seen as independent, but as a duality whereby structure is relied upon in human actions,\nand in so doing structures are produced or reproduced. Over time these social practices becomes reasonably\nstable and routines develop. Giddens [1993] p29 breaks down social structure and human interaction into three\ndimensions which are interlinked by three modalities as illustrated in Figure 1.\nWhen human actors communicate, they draw on interpretative schemes to help make sense of interactions.\nAt the same time those interactions reproduce and modify those interpretative schemes which are embedded in\nsocial structure as meaning or signification. Similarly the human actors allocate resources through use of power,\nand produce and reproduce social structures of domination. Moral codes or norms help determine what human\nagents can sanction and thus produce and reproduce social structures of legitimation. It is useful to separate\nstructure and interaction into these three dimensions for analysis of structure, but the dimensions are interlinked.\n[Rose 1999]\nThe design and use of information systems are shaped by the very structures within which they are situated,\nbut IS can also be used to help define and redefine these structures. By exploring each of the above dimensions\nin the process of IS design, IS design can be used as a tool for development by refining the structures to include\nthe views and values of those currently disadvantaged by the existing structures. Through a participative and\nreflective process in IS design, cultural and traditional norms which influences human action can be explained,\nunderstood and addressed. The design process, and the IS itself, can improve communication and encourage\nreflection and change interpretative schemes. Through the process of IS design and reflecting on the situation\nthe excluded can be empowered, which redefines the power and resource structures. 
In summary IS design can\ndefine and refine structures by understanding and incorporating all the dimensions of the duality of structure in\nthe design process.\nStructuration theory has been used quite widely in IS. Rose [1998] conceptualises the use of the theory in IS\nfor three different purposes: analyse; theorise and operationalise. Walsham and Han [1991] analyse literature\nunder topics of operational studies, meta-theory and specific concepts used, as well as outlining structuration\nProceedings of SAICSIT 2003\nDevelopment through communicative action and information system design\n\n85\ntheory. Jones [1997] analyses the use of structuration theory in an attempt to reconstruct theory to accommodate\ntechnology. He further explores the application of the theory as an analytical tool, the use of theory as a meta-theory\n, and use of concepts from the theory.\nIn an attempt to theorise aspects of the IS field using structuration theory Orlikowski and Robey [1991], apply\nthe fundamentals of structuration theory to help understand the relationship between information technology\nand organisations. In a later article Orlikowski [1992] developed her structurational model of technology to\nunderstand the relationship between information technology and institutions. She recognises that technology\ncannot determine social practices, but can condition them and that technology in conditioning social practices\nis both facilitating and constraining.\nIn terms of empirical studies Walsham [1993] provides a number of case study analysis which cover issues of\nIS strategy, development, implementation and evaluation in three different organisations. Walsham and Sahay\n[1996] use structuration theory, with actor-network theory, to investigate problems in developing Geographical\nInformation Systems in an Indian government department. In a similar manner this paper, from a critical social\nperspective, uses structuration theory to highlight two key aspects of existing structure which were addressed in\nand affected the process of designing the IS. The meaning of a critical social perspective is provided in the next\nsection before section 4 decribes the key structural aspects of the case study.\nCRITICAL SOCIAL THEORY AND IS DESIGN\nCritical social researchers by their very presence influence and are influenced by the social and technological\nsystems they are studying. 'For critical social theorists, the responsibility of a researcher in a social situation\ndoes not end with the development of sound explanations and understandings of it, but must extend to a critique\nof unjust and inequitable conditions of the situation from which people require emancipation'[Ngwenyama and\nLee 1997]p151. Critical social theorists seek to emancipate people; they are concerned with finding alternatives\nto existing social conditions as well as challenging taken-for-granted conditions. Critical social theorists view\npeople, not as passive receptacles of whatever data or information that is transported to them, but as intelligent\nactors who assess the truthfulness, completeness, sincerity, and contextuality of the messages they receive.\nAdopting a critical social theoretical perspective to IS design is not new. In relation to IS research Ngwenyama\n[1991] gives an in-depth treatment of critical social theory. 
Ngwenyama and Lee [1997] approach research on\ncommunication richness in computer mediated communication from a critical social theoretical perspective.\nHirschheim and Klein [1994] deal with a critical approach to qualitative research.\nHabermas [1987] suggests that critical social theorists should initiate a process of self-reflection among human\nactors, but it is only participants in the community that can select the appropriate political action. His theory\nof communicative action notes that all social action assumes a basic set of norms. These norms allow all actors\nto express themselves fully and openly. They also imply that all actors accept the outcome of open rational\nargument. According to the theory of communicative action, breakdowns in communication occurs when actors\nfail to adhere to these norms. There have been numerous studies which refer in particular to the theory of\nHabermas. Lyytinen [1992] has explored the theory of Habermas to analyse systems development. Hirschheim\net al. [1996]using Habermas' theory of communicative action propose a framework for the intellectual trends in\nIS development research.\nIn this study Habermas' theory of communicative action and the notion of 'the ideal speech situation' is used to\nexplore how effective striving for its attainment is as a transformation strategy. My study uses aspects of critical\nsocial theory to examine how community action can be strengthened or changed by exploring the structures which\nenable or constrain that action. Communication, power and norms are key in trying to grasp an understanding\nof that action. Fundamental to this exploration is the belief that as intelligent and knowledgeable agents, human\nactors can, within limits, choose to act in accordance with or against societal norms.\n\nSITUATION IN OKHAHLAMBA MUNICIPALITY, UTHUKELA DISTRICT, KWAZULU-NATAL, SOUTH AFRICA\nThe existing district health information system in South Africa excludes children and adults that cannot, and/or\ndo not, access the services at the health facilities (clinics, community centres, mobiles and hospitals). Those\nwho are most vulnerable and socially excluded, and need the health support systems the greatest, are the very\nones not accessing the health services. Policies are formed and resources allocated to the community based\non the information they recieve. Since the vulnerable are excluded from the formal IS they are further and\nsystematically excluded from these policy and resource decisions.\nWith the impact of HIV/AIDS children have increasingly become an excluded and more vulnerable group.\nThis exclusion and vulnerability of children can be tackled on two interconnected levels. The first is through\nthe creation of awareness of the situation of children and the second through the commitment and action of\ngovernment and society to address this situation. The first can be supported by designing an information system\nProceedings of SAICSIT 2003\n86\n\nElaine Byrne\nfor action - an information system that can be used for advocating and influencing decisions and policies for the\nrights of these children. So IS design can be used as a developmental tool.\nSince protecting and improving the health of the children of the entire district is the aim of the district health\nsystem, research was conducted on how to develop a community-based health information system that could\nsupport a comprehensive district health information system. 
The research was conducted in Okhahlamba as a\ncomponent of the child health programme of the uThukela district child survival project and the department of\nhealth. Okhahlamba is a municipality of the uThukela district lying in KwaZulu Natal on the eastern coast of\nSouth Africa. The primary objective of developing a community-based information system is to assist community\nmembers in their decision-making regarding the health of their children. On a secondary level it aims to establish\ninterfaces with the formal health facility information system to enable district managers to use information from\nthe whole district to make informed decisions and policy changes.\nAfter a review of the district's health information system and a community meeting on monitoring and evaluation\ncommunity members, as well as district government staff, recognised their need for a community-based\nchild health information system. To understand what the information needs were, who should be involved in\nthe information system and the format the information should be communicated in a total of 10 interviews, 16\nfocus group discussions and 1 meeting took place between July and September 2002. From the field work there\nwas a greater understanding around the meaning of 'well-being' and 'at-risk' for a child, what factors/practices\ncontribute to these situations, how the situations can be measured and, based on what action can be taken,\nwho the information should go to. Consequently a community-based child health information system has been\nintegrated into the district health information system.\nIn this section two key aspects of structures which address, or have contributed to, the exclusion of children\nare outlined, namely restructuring of health services and status of child health, and social traditions. The first\naspect provided an opportunity for change and reflection on the current role and function of the IS whilst also\nproviding an understanding of the exclusion of segments of the population. The second aspect again provides an\nunderstanding of the position of women and children in society which impacts on IS design as well as presents\nsome challenges in the design process. [For more details of the child health programme and the research see\n[uThukela District Child Survival Project 2002; 2000a; 2000b; 1999a]]\n4.1\nRestructuring of health services and status of child health\nAfter 1994 the national Health Plan for South Africa and the Reconstruction and Development Programme\noutlined that a Primary Health Care (PHC) approach is the underlying philosophy for the restructuring of\nthe health system. Crucial to this is the role of the community in the development of a district health system\nemphasising the movement from a traditionally vertical curative based health system to a newer client centred and\npreventive based health system. In addition, more recently, there has been the move towards the decentralisation\nof health service delivery (along with other basic social services) to local authorities from the department of health.\nThe newly established structures, such as the community health committees and community health forums, have\nmeant a renegotiation of roles and responsibilities at the district level. This requires active communication\nbetween the parties involved to ensure consent on the new roles and responsibilities of all local government staff\n[uThukela District Child Survival Project 2000b; 2000a].\nSince 1994 children have benefited from the move to PHC. 
However the free health care policy is not without\nits fair share of problems. Due to the emphasis on PHC, there has been a 30% increase in clinic attendance and\na 2% increase in hospital attendance in the province of KwaZulu-Natal. The additional drugs and personnel\nneeded for the increased attendance at the clinics was not budgeted correctly. As a result the quality of services\nin terms of shortages of personnel and drugs has been compromised as well as putting severe strain on the budget.\nClinics in particular have struggled to accommodate the increased number of clients. Clients also complain that\nhospital-based health workers are often unsympathetic to their needs. [uThukela District Child Survival Project\n1999b]\nPoorer children living in rural areas have poorer access to PHC facilities than children living in the wealthier\nmore urbanised areas. They have greater distances to walk and fewer health personnel to cater for them.\nKwaZulu-Natal is one of two provinces with especially poor client-to-clinic ratios (23,000 clients per clinic) and\nin 1995 only 54.3% of households in KwaZulu-Natal were within 5 kms of medical care, the second lowest in the\ncountry.[Crisp and Ntuli 1999]\nChild health indicators point to the lingering effects of apartheid's racial, geographic and socio-economic\npolicies. Just over half of all children aged 12-23 months in KwaZulu-Natal are not immunised, though 62.2%\nhave their road to health cards. This indicates at least one contact with the health services, but this contact was\nnot sustained as the immunisation schedule has not been completed. The infant mortality rate for KwaZulu-Natal\nhas been estimated at 52.1/1000 and the under-five mortality rate at 74.5/1000.[Crisp and Ntuli 1999]\nProceedings of SAICSIT 2003\nDevelopment through communicative action and information system design\n\n87\nThis situation is exacerbated by disparities in access to basic infrastructure. Access to potable (drinkable) water\nand sanitation are often critical to improving child health outcomes. The government has however committed\nitself to increasing access to water and sanitation. In spite of two major dams and several springs in the area,\na serious shortage of water for agriculture and clean drinking water has impacted nearly every household, and\ninfluenced the health status of the area. The cholera epidemic in 2001 is evident of this poor access. A situational\nanalysis for the Okhahlamba municipality completed in July 1998 estimates that only 25% of the population\nlive within 15 minutes walking distance of safe water, and only 25% have adequate sanitary facilities. Transport\nremains poor, particularly during rains when rivers become impassable. [uThukela District Child Survival Project\n1999b]\n4.2\nSocial traditions\nStrong Zulu cultural and traditional values exist in the Okhahlamba municipality. Traditional leaders are highly\nrespected, though there is some controversy over the roles and powers being eroded with the formation of the new\nlocal government structures. Grandmothers and traditional healers are often the first persons to be consulted in\ntimes of illness and many locally available remedies and treatments are used and practiced.\nGrandmothers can have quite a powerful decision-making influence at household level. 
However, women in\ngeneral tend to be dependent on males for income and have very little access to independent means of livelihood.\nHousehold responsibilities also make women subject to 'time poverty' that is, it is not uncommon for most women\nin this rural area to work ten hours a day, making it a hardship to travel to seek health care for themselves\nor their children. Much of each day involves several hours of strenuous manual labour, hauling water and\nfirewood, and performing agricultural work. Women, including mothers, grandmothers and older 'girl children',\nare predominantly responsible for childcare [uThukela District Child Survival Project 1999b]. However if the\nhealth-seeking or care decision involves any financial decisions the head of the household, which is usually a\nman, will need to be consulted in order to make the final decision. This process often causes a delay in a child\nattending a clinic as money for transport and alternative child care for the siblings would need to be sourced.\nThrough the existing patriarchal social system women are particularly at risk from HIV/AIDS. These factors\ninclude: sexual subservience to men, higher risk of transmission with the migrant labour of partners to cities;\ndifferential access to information and resources for prevention, and; women often remain with spouses who are\nHIV positive rather than vice-versa. Women in their twenties have the highest rate of HIV infection nationally,\nbut between 1997 and 1998 the HIV prevalence among teens attending antenatal clinics jumped over 65%, from\n12.7% to 21%. With high teenage fertility rates this picture is unlikely to change in the near future. In 1998,\nthe provincial fertility rate was 3.3%, and the provincial teenage pregnancy rate was 13.8%. In Okhahlamba/\nMtshezi municipalities the average teenage pregnancy rate for young women delivering in facilities in 1999 was\n22.9%, significantly higher than the provincial rate [uThukela District Child Survival Project 1999b]. Children\nare particularly susceptible to the ravages of the HIV/AIDS epidemic through high rates of mother to child\ntransmission and an increasing number of AIDS orphans and consequent child headed households.\nASPECTS OF THE PROCESS OF DESIGNING A COMMUNITY-BASED INFORMATION SYSTEM IN OKHAHLAMBA, KWAZULU-NATAL, SOUTH AFRICA\nOne of the fundamental steps that needed to be addressed before addressing the situation of children, and how\nthis was reflected in information systems, was a paradigm shift. It required a shift from the older focus on\ncurative centre based service delivery to the newer health services approach which focuses on prevention, clients\nand quality. To support this paradigm shift the project adapted a new approach of transformational thinking, or\nfuture focussed approach, developed in the business sector, but which is also being integrated in health systems.\nThe approach focuses on working towards holistic well-being for all, rather than just solving health associated\nproblems. Through community meetings and discussions the community determined a vision for their children:\n'To achieve optimal health, growth, development and well-being of children within the family and community in\nthe uThukela Health district'.\nThe implications of the paradigm shift for IS was that though it was important to measure children's physical\ncondition, it was also important to measure how far towards our vision we are. 
So instead of saying 80% of our\nchildren are immunised, we would say that we still need to immunise 20% of our children. This approach reflects\nwhat we still need to do to attain our vision and thus, hopefully, stimulate action. Adopting a forward looking\nperspective also stresses the importance of the context we are presently in and the importance of measuring\nchanges in that context. Monitoring the context and acting based on that information, should lead to a situation\nwhere most children in the future would find themselves in a state of 'well-being'.\nProceedings of SAICSIT 2003\n88\n\nElaine Byrne\n5.2\nSharing of information with key actors\nIf people are to act or reflect on information received that information needs to be relevant and communicated in\na culturally sensitive and appropriate manner. In terms of a community-based information system for children\nan important step in the process of the IS design was who should participate in the process. The main role\nplayers and duty bearers\n1\nneed to be included as it is them who are in the best position to change or influence\nthe context in which the child is placed. In the case study these key people were: the community health\nworkers, parents, family members, early childhood and creche teachers, home based carers, caretakers, social\nworkers, health facility staff, clinic health committees, councillors, government officials and staff from external\norganisations. This indicates that a multi-leveled and multi-sectoral group affects the situation of children at\ncommunity level.\nWhat was also important was a common understanding by all parties on what was meant by 'well-being' and\n'at-risk' as the monitoring of these situations and conditions would be important if we were to measure whether\nwe were on the right track to attaining our vision. Meanings of 'well-being' and 'at-risk' were gathered through\nfocus-group discussions, interviews and meetings with all the role players and duty bearers. This common\nunderstanding was translated into common data definitions in the community-based, as well as in the health\nfacility, information system.\nA review of the existing data sources and flows was conducted based on the assumption that information flows\nare a key element of dialogue between providers and consumers of health services. One important conclusion\nfrom this review was that some of the data collected through the current district health information system is\nvalid and useful, but is not getting to the people who can act upon or use it. As one project leader mentioned we\nneed to look at how data is flowing and the possibility of establishing 'feedback pathways' for this data. There\nare many of these pathways at different levels, but the one between community based workers and community\nforums is core for a community-based health information system. This level of feedback was entirely absent from\nthe district health information system in Okhahlamba.\nIt is also interesting to note what was absent from existing data sets, yet what key role players and duty\nbearers felt were important in monitoring the situation of their children. Data items relating to the context in\nwhich a child is being reared are mostly excluded. Many of the current indicators focus on the condition the\nchild is currently in, such as having immunisation or not, and not the context that caused the child to be in that\nsituation, such as no caregiver to take the child to the clinic. 
But exclusion is a process and to prevent the child\nbecoming excluded requires analysing the situation of the child throughout that process. Measures for context,\nsuch as happiness, playfulness and communication are more intangible and therefore difficult to develop as data\nitems. However through the new observation tools developed by and for the community health worker these\nmeasures are now included. So data items on the presence of a caregiver, drug and alcohol abuse, cleanliness of\nthe household for example, are now included as indicators of 'at-risk'. This observation tool is used as part of\nthe dialogue between the health worker, who is a trusted and respected family advisor, and the household. The\nresults from the aggregated monthly data is shared through role play, song, dance, drawings and histograms in\nthe community quarterly meetings. The act of sharing information establishes networks of people at community\nlevel who are responsible for the care of the children. These networks form the basis for communication.\n5.3\nThe communication loop\nIn terms of capacity to act, or to make decisions, most respondents, in the research undertaken, felt that they\ncould act if given appropriate information and if key role players were included in the communication loop with\none another. The visioning exercise started a communication process, but this needed to be developed into more\nformal communication structures. Communication was needed with other levels of government. Building on\nthe recent development of clinic health committees and the governments' appointment of the community health\nworkers in the KwaZulu-Natal province, communication loops were developed. These loops are described below\nat three levels: household, community, and district.\n--Household level: Following on from a discussion on how to measure the more intangible measures a standardised\nobservation checklist was developed. The checklist is used as a communication tool with household members.\nBased on the community health worker's assessment a number of choices or options to solve any of the problems\nidentified is given to the household. The community health worker could facilitate the choices, such as contact\nwith certain services, if requested to do so, but the final decision lies with the household. The assessment is\nused as an empowering tool, rather than as a means of inspection. These visits assist the child caregivers in\n1\nRole players have a role to play in children's lifes, but duty bearers are those people who are responsible and obligated to fulfill\nchildrens' rights\nProceedings of SAICSIT 2003\nDevelopment through communicative action and information system design\n\n89\nterms of their knowledge of child care and health seeking behaviour within their household. The visits also\nprovide the mother or caregiver with a mediator between them and health facilities as well as a mediator\nbetween them and their family. Therefore issues of access to basic social services could be addressed.\n--Community level: The community health workers, with the assistance of their supervisors (community health\nfacilitators), conduct village health days for discussion of broader issues affecting the community served by the\nclinic. Bar graphs, role-plays, song, poetry and dance are used as these methods seem to work very well. 
These\nmeetings form the quarterly community health meetings, that were suggested in the course of the field work.\nMembers of the community and the clinic health committee, health facility staff, community health worker,\nschool children and other key people attend the meetings. More people have now access to the information\nthey requested and in a format that is easy to understand. The village health days also provide a forum for\nreflection and discussion.\n--District level: Communication and information flows between community and district involves combining data\nfrom various sources to provide a comprehensive database for the district. Important for the collation of this\ndata is the use of the same data definitions in the different data sources. This collation is done through the\ndistrict information officer as her office already receives this data from the different sectors. A summary of\nthe district data is distributed every quarter. The content of the summary sheet is regularly determined in\nconsultation with the clinic health committees and through feedback on the village health days. Existing\nlocal government structures, community and clinic committees, have already established clear communication\nchannels with higher levels of local government. The feedback from these meetings would be sent through\nthese structures when needed. Thus a comprehensive picture of child health in the district is achieved.\nIn summary, with the restructuring of the health services there was the need for a paradigm shift, before\naddressing the review and design of a community-based child health information system. This shift was from the\nolder more curative health service approach to the newer client and service focused approach of primary health\ncare. With the newly established local clinic and community health committees this offered an opportunity of\nnew people coming into the health services with a new vision and who were also willing to be involved in the IS\ndesign process. Furthermore the newly formed local government structures have established clear communication\nchannels with higher levels of local government. The feedback from quarterly community health meetings could\nbe sent through these channels and forms part of the health information flow and communication loop.\nChallenges around the position of women in society impacted on decisions regarding participation. However as\nwomen are the main carers of children they were involved in the process without any question. Furthermore as\nsome of the key positions in the community are occupied by men it was also felt that they needed to participate\nin the design process as their positions were influential in terms of the situation of children in the community.\nThe dialogue initiated in the design process continues through the community health quarterly meetings which\nprovide an opportunity for dialogue to take place at community level. At the household level the community\nhealth workers role is to empower the household in its health seeking and caring practices. This is done through\nhousehold visits and providing the appropriate education at the appropriate time, for example if a child has\ndiarrhoea the conversation would be around what to do for the child with diarhoea. The community health worker\nalso plays the role of mediator - mediator between households and the community forum and also between the\ncaregiver and the rest of the family. 
Through the supportive role of the community health worker the position\nof women and children will not change in society, but their views on health and the care of children will be\nsupported and heard.\nThe process of IS design in the case study supported and questioned two key aspects of structure, namely the\nrestructuring of health services and the status of child health and social traditions and their implications on the\nprocess of the design. The next section explores what implications the use of such an approach has for IS design.\n\nDISCUSSION IMPLICATIONS FOR IS DESIGN\nThe implications for IS design have been categorised into three main areas: the need for a shared understanding,\nthe need for participation of key people and the need for agreement on joint action.\n6.1\nShared understanding\nIf health IS design is to be used in a developmental context their needs to be agreement reached between health\ncare deliverers and those who receive the services on the design and the purpose of the health service. From\nour case study the importance of having a common vision for the health services was seen as an important\nfirst step in this direction, especially given the restructuring of the health services and the adoption of a PHC\napproach. Creation of this vision and shared understanding necessitates communication between the designers of\nthe system, the users of the IS as well as the users of the health system. The process of IS design is important for\nProceedings of SAICSIT 2003\n90\n\nElaine Byrne\nestablishing the relationship between the users and providers of health care, as reaching agreement on subsequent\naction that needs to take place involves both parties working together.\nThe objective of communicative action is to achieve mutual understanding. In this case study mutual understanding\non a vision for the children, how to measure our progress to this vision and who needs to be involved in\nthat process was made. IS design should be ' . . . concerned with achieving and maintaining mutual understanding\n. . . among all those who are involved in a coordinated organizational situation . . . Organizational actors involved\nin communicative action depend on a common language and a shared understanding of the organizational context\nin order to enact meaning from each other's communicative actions.' [Ngwenyama and Lee 1997]p158/9\nIS use in developmental contexts can go beyond communicative action and be an enabler of discursive action.\nDiscursive action is intended to achieve or restore agreement for collective action. It is ' oriented toward achieving\nor restoring agreement and redeeming validity claims. Discursive action is initiated when organizational actors\nneed to achieve agreement for joint action. In such a situation, the individuals would generally engage each\nother in a debate of the issues until they agree on a course of action' [Ngwenyama and Lee 1997]p155. However\nthere needs to be a common medium of communication, agreement on roles and responsibilities and terms and\nconditions set for means of discourse. A common understanding was needed in this case on what was meant by\n'at-risk' and 'well-being' children and how to measure the situation of the child.\nThe process of IS design can create an environment where people can express themselves, where understanding\non various roles can be agreed to, where responsibility can be taken and where action using available information\noccurs. 
However unless we explore and change the structures in which a person operates, e.g. the position of\nchildren and women in society it is difficult for an actor to be able to engage either in reflection or in discursive\naction.\n6.2\nParticipation of key players\nReaching a common understanding between the users and providers of the health services is impossible without\ntheir joint participation. Participation of the excluded increases transparency and opens officials and other\nresponsible parties to dialogue and wider scrutiny by the citizens they serve. Underlying power differences\nbetween different actors influences the interaction and negotiation between them (both within the community\nand between the community and outside groups) and this can influence whose 'interests' are explored and served\nin information systems. The social dynamics and power relationships that underlie and constitute the actual\npractice of the information system needs to be explicit.\nIn this research the unequal nature of social relationships and positions between different actors and also\ninstitutions was recognised from the outset. Forums were established that suited the needs of the various groups.\nDiscussions were also facilitated from people who were familiar with the area and who also had an understanding\nof the norms and values of that society. In the initial stages because of these differentials in status and roles\nwithin the community, groups comprising, for example, mothers, councillors, facility staff, met separately to\ndiscuss what they wanted for their children. These meetings were held in the local language and near the\nhomes of the individuals. The community health worker formed the essential mediation role between the service\nproviders and the clients. At a later stage representatives from the various groups met jointly to share the\nfindings from the research and to discuss the way forward.\nEven with community participation communication does not always work smoothly, or in favour of children.\nCommunication provides the means for exploring, affirming or denying norms, debating policies and practices,\nand discussing old experiences and new ideas. The situation of children will change only when action to improve\nthat situation is taken. So the next step was to explore what will happen once the information has been shared.\n6.3\nAgreement on joint action:a multi-leveled and multi-sectoral approach\nOnce the vision is formulated then the necessary action to attain that vision needs to be agreed to. Often this\ninvolves a multi-leveled and multi-sectoral approach. It also needs all key role players to be in communication\nwith one another. It is not easy to challenge or change the institutions and systems established that support the\nstatus quo.\nIn Okhahlamba the key role players that could act to change the situation of children were all identified. The\nmost difficult task was achieving agreement by these role players on their action. Most of the confusion was\nover formal roles and responsibilities which had changed with the recent moves to decentralization of basic social\nservices to local authorities, rather than an unwillingness to support one another. With this move the community\nhealth workers had also recently moved from a local non-governmental organization to the Department of Health\nand were confused over their reporting structures. 
The district department of health needs to hand over delivery\nof health services to local authorities, but the local authority does not have the human nor financial capacity\nto carry out this function. The volunteer clinic health committees are enthusiastic to support initiatives that\nProceedings of SAICSIT 2003\nDevelopment through communicative action and information system design\n\n91\nwill improve the situation of their children, but have only been formed recently. It was only after groups met\none another and agreement was reached on their roles and responsibilities that agreement on the action, and\nwho was responsible for that action, took place. The recent changes have provided an opportunity for inclusion\nof children on the agenda as many of the structures and systems are not, or have only recently, been formed.\nWhat was encouraging from the field work was that most people felt that they had the capability to act if they\nreceived the information.\nCONCLUSION\nIt is increasingly recognised that globalisation also produces marginalisation. Castells [2000b; 2000a] argues\nthat processes of globalisation are extremely selective, and various parts of the globe in both the developing\nand developed world run the potential of being excluded from this process. He uses the term 'fourth world' to\ndescribe this segment of society. Conditions of history and geography shape the access that groups and societies\nhave to new information and communication technologies. Lack of such access can be exclusionary. Castells\ndescribes these processes to be systematic and can lead to further marginalisation and exclusion of societies.\nIn information systems, and not just health information systems, the voices of communities - in particular\nwomen, children and youth - are not often heard, both within communities, between communities and between\nthe other levels of society. When, where and how do they get the opportunity to express their needs and\naspirations? How do they have the chance to identify and develop the skills and resources they need to address\ntheir problems? Where do they get the opportunity to express themselves or to exchange ideas and experience?\nIn a sub-district in KwaZulu-Natal these questions have been addressed through a holistic approach to health\ninformation systems development.\nSome of the challenges for IS design is the need to focus both on the output, as well as the process. Attaining a\ncommon vision is fundamental if the system is to be used, but this involves the participation of different sectors\nand different levels of actors from the outset. It also offers some opportunities. Clarification over roles and\nresponsibilities allows recognition and acceptance by duty bearers of the tasks they need to perform. This is\na first step towards action. There was great enthusiasm by these key role players in the design process and a\ndesire to work together. The community monitoring system in Okhahlamba was based on an understanding that\npeople are intelligent and know what affects their children's and their own development. There is the need to\nco-design systems, processes and tools in IS design and obtain clarity on what we need to measure. IS design\nshould be about facilitating a journey of development, rather than measuring the destination.\nCommunication and participation, as well as the capacity to do so, are needed to strive towards Habermas'\nideal speech situation. 
This is no easy task, as exclusion is built upon a system of norms, interpretative schema\nand facilities that systematically excludes segments of the population and country from the network society.\nCommunication will not simply be improved by introducing a new or improved health information system. Even\nso, a process from visioning, developing skills and capacity and constructing a conducive environment can mean\nthat IS design can be viewed as a development tool, as striving towards this 'ideal speech situation', even if this\nsituation is not attained.\n\nACKNOWLEDGMENTS\nI wish to thank all the people from uThukela district who assisted with the research. In particular thanks must\ngo to the staff from the uThukela District Child Survival Project and the Department of Health, who assisted\nwith the carrying out of the field research, the data analysis and the implementation of the information system.\nFor support on the formatting and editing of this paper I am grateful for the assistance of Bob Jolliffe. I have\nalso benefitted from the insightful comments from Sundeep Sahay on the various drafts of this paper. Financial\nsupport for the research was provided by a World Vision/USAID grant to uThukela District Child Survival\nProject.\n\nREFERENCES\nCastells, M.\n2000a. The Information Age: Economy, Society and Culture: The End of the Millenium, 2 ed. Vol. 2. Blackwell\nPublishers.\nCastells, M.\n2000b. The Information Age: Economy, Society and Culture: The Network Society, 2 ed. Vol. 1. Blackwell Publishers.\nCrisp, N. and Ntuli, A.\n, Eds. 1999. South African Health Review. Health Systems Trust, South Africa.\nGiddens, A.\n1993. The constitution of society. Outline of the theory of structuration. Polity Press, Oxford.\nHabermas, J.\n1987. The Theory of Communicative Action. MIT Press.\nHirschheim, R. and Klein, H.\n1994. Realizing emancipatory principles in information systems development: The case for ethics.\nMIS Quarterly 18,\n1, 83109.\nProceedings of SAICSIT 2003\n92\n\nElaine Byrne\nHirschheim, R.\n, Klein, H. K., and Lyytinen, K. 1996. Exploring the intellectual structures of information systems development:\nA social action theoretical analysis. Accounting, Management and Information Technology 6, 1/2.\nJones, M.\n1997. Re-Thinking Management Information Systems. Oxford University Press, Chapter structuration and IS.\nLyytinen, K.\n1992. Critical Management Studies. Sage Publications, London, Chapter Information systems and critical theory,\n159180.\nNgwenyama, O.\n1991.\nInformation Systems Research: Contemporary Approaches and Emergent Traditions\n. North Holland,\nAmsterdam, Chapter The Critical Social Theory Approach to Information Systems: Problems and Challenges.\nNgwenyama, O. K. and Lee, A. S.\n1997. Communciation richness in electronic mail: Critical social theory and the contextuality\nof meaning. MIS Quarterly 21, 2 (June), 145167.\nOrlikowski, W.\n1992. The duality of technology: rethinking the concept of technology in organisations. Organisation Science 3, 3\n(August).\nOrlikowski, W. and Baroudi, J. J.\n1991. Studying information technology in organisations: Research approaches and assumptions.\nInformation Systems Research 2\n, 128.\nOrlikowski, W. and Robey, D.\n1991. It and the structuring of organisations. Information Systems Research 2, 2, 143169.\nRose, J.\n1998. Evaluating the contribution of structuration theory to the is discipline. In Proceedings of the European Conference\non Information Systems\n.\nRose, J.\n1999. 
Towards a structurational theory of is - theory development and case study illustrations. In Proceedings of the 7th\nEuropean Conference on Information Systems\n. Copenhagen.\nuThukela District Child Survival Project\n. 1999a. Final evaluation report. uThukela District, KwaZulu-Natal, South Africa,\nunpublished.\nuThukela District Child Survival Project\n. 1999b. Knowledge, practice and coverage survey. UThukela District Child Survival\nProject, KwaZulu-Natal, South Africa.\nuThukela District Child Survival Project\n. 2000a. Cs xv detailed implementation plan. uThukela District, KwaZulu-Natal,\nSouth Africa, unpublished.\nuThukela District Child Survival Project\n. 2000b. Integrated managment of childhood illness situational analysis. uThukela\nDistrict, KwaZulu-Natal, South Africa, unpublished.\nuThukela District Child Survival Project\n. 2002. Mid term evaluation report. uThukela District, KwaZulu-Natal, South Africa,\nunpublished.\nWalsham, G.\n1993. Interpreting Information Systems in Organisations. Chichester, John Wiley.\nWalsham, G. and Han, C. K.\n1991. Structuration theory and information systems research. Journal of Applied Systems Analysis 17,\n7785.\nWalsham, G. and Sahay, S.\n1996. Gis for district-level administration in india: Problems and opportunities. International Journal\nof Geographical Informaiton Systems 10\n, 385404.\nProceedings of SAICSIT 2003\n", "keywords": "communicative action;critical social theory;moral codes or norms;community information systems;information system design;Structuration theory;interpretative schemes;critical social theory in IS design;conducive environment;community monitoring system;marginalisation;health information systems;duality of structure;structuration theory;the ideal speech situation"} {"name": "68", "title": "Diagnosis of TCP Overlay Connection Failures using Bayesian Networks", "abstract": "When failures occur in Internet overlay connections today, it is difficult for users to determine the root cause of failure. An overlay connection may require TCP connections between a series of overlay nodes to succeed, but accurately determining which of these connections has failed is difficult for users without access to the internal workings of the overlay. Diagnosis using active probing is costly and may be inaccurate if probe packets are filtered or blocked. To address this problem, we develop a passive diagnosis approach that infers the most likely cause of failure using a Bayesian network modeling the conditional probability of TCP failures given the IP addresses of the hosts along the overlay path. We collect TCP failure data for 28.3 million TCP connections using data from the new Planetseer overlay monitoring system and train a Bayesian network for the diagnosis of overlay connection failures . We evaluate the accuracy of diagnosis using this Bayesian network on a set of overlay connections generated from observations of CoDeeN traffic patterns and find that our approach can accurately diagnose failures.", "fulltext": "INTRODUCTION\nWhen failures occur in Internet overlay connections today, it is\ndifficult for users to determine the root cause of failure. The proliferation\nof TCP overlays such as content distribution networks\nand HTTP proxies means that frequently network communication\nrequires a series of TCP connections between overlay nodes to succeed\n. 
For example, an HTTP request using the CoDeeN[9] content\ndistribution network first requires a TCP connection to a CoDeeN\nnode and then a connection from a CoDeeN node to a server or another\nCoDeeN node. A failure in any one of the TCP connections\nalong the overlay path causes the user's HTTP request to fail. If\nthe user knows which TCP connection failed, then they can take\nappropriate action to repair or circumvent the failure. For instance,\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nSIGCOMM'06 Workshops September 11-15, 2006, Pisa, Italy.\nCopyright 2006 ACM 1-59593-417-0/06/0009 ...\n$\n5.00.\nif they know that the connection from the proxy to the server failed,\nthen they complain to the web server administrator. On the other\nhand, if the user/proxy connection fails, perhaps they can try connecting\nto the proxy using a different ISP. If multiple overlay paths\nexist between the source and destination, nodes and applications\nmay also use this type of diagnostic information to automatically\nrecover or route around failures[1].\nUnfortunately, accurately determining which TCP connection in\nan overlay connection has failed is difficult for end users, who typically\ndo not have access to the internal workings of the overlay.\nCommercial overlay networks such as Akamai typically do not reveal\ndetails of connection failures to users, and the diagnostic tools\navailable to users today are frequently inadequate. Active probing\ntechniques such as tulip[7] and Planetseer[11] frequently cannot\nprovide accurate information due to firewalls and packet filtering.\nFurthermore, active probing can be costly both in terms of network\nresources and time, and cannot diagnose the many transient TCP\nfailures that begin and end before one can complete a probe[11].\nAdditionally, one must take care when using active probing for diagnosis\nbecause they may concentrate network traffic at points of\nfailure and trigger intrusion detection systems.\nInstead, in our research we consider a passive approach to diagnosis\nin which intelligent diagnostic agents use probabilistic inference\nto determine the root cause of failure. The reliability of IP\nlinks in the Internet varies widely and hence we expect the probability\nof TCP failure to differ between different sets of hosts. Diagnostic\nagents in the Internet learn the probability of such failures\nfor different regions in the Internet based on observations of TCP\ntraffic. When users or network administrators detect network failures\n, they request diagnosis from such diagnostic agents. Agents\nthen use information about the relative probability of failure of the\nTCP connections that make up an overlay connection to identify\nthe most likely cause of failure when an overlay connection occurs\nwithout conducting any additional probes. 
In addition, diagnostic\nagents can also use this Bayesian network to predict the probability\nof overlay and TCP connection failure given information about the\npath of an overlay connection.\nWe collect data on TCP failure probabilities in order to determine\nwhether this data enables diagnostic agents data to accurately\ndiagnose overlay failures in the Internet. To learn the probability\nof failure for TCP connections between different points in the network\n, we observe TCP traffic on the content distribution network\nCoDeeN using an updated version of Planetseer[11]. Next we construct\na Bayesian network for diagnosis using these probabilities.\nWe then use Bayesian inference to infer the most probable cause of\nfailure for TCP-based applications.\nTo evaluate the effectiveness of this approach, we test this Bayesian\nnetwork on an artificial set of overlay connections based on the\n305\ntraffic observed on CoDeeN. We find that when a failure occurs,\nknowing only the AS numbers of the source, proxy, and destination\n, we can determine which TCP connection has failed with over\n80% probability. In addition, the probability of failure between\nASes stays relatively constant over time, and data learned can be\naccurately used for diagnosis for many hours into the future. This\nsuggests that the TCP failure probabilities we learn may be useful\nin the diagnosis of future failures as well.\nThe contribution of this research is to show how inter-AS TCP\nfailure probabilities can be used for probabilistic diagnosis of failures\nin overlay networks such as CoDeeN using Bayesian inference\n. We also demonstrate a variety of clustering methods to address\nthe problem of dataset sparsity for learning TCP failure probabilities\n. In this paper we evaluate our system on CoDeeN overlay\nconnections, but our Bayesian model generalizes to the diagnosis\nof other TCP-based applications as well.\nRELATED WORK\nThere has been previous work in passive diagnosis of failures\nin the Internet. Padmanabhan, Ramabhadran, and Padhye developed\nNetprofiler, which collects network measurements from a set\nof end hosts and attempts to identify cause of failure by examining\nthe shared dependencies among hosts that experience failures[8].\nThey show that this approach can provide information useful for\ndiagnosis, but their paper only provides some preliminary results\nand do not provide details of how their system might diagnose real-world\nfailures in practice.\nShrink probabilistically diagnoses IP link failures based on the\nobserved status of IP links that share resources[4]. Similarly in our\nwork we diagnose failures in overlay connections where an overlay\ndepends on several underlying TCP connections which may share\nIP hops. Shrink assumes that one can accurately determine the status\nof all IP links at any point in time. This allows one to identify\nthe shared cause of failure of the failed IP links. Theoretically, we\ncan also use this approach to diagnose overlay failures. That is, we\ncan identify the TCP connections that share common IP hops and\nobserve which overlay connections have failed at any point in time\nto identify the failed TCP connections.\nUnfortunately, in real-world diagnosis of TCP connections many\nof the assumptions made by systems such as Shrink do not hold for\nthe following reasons.\n1. The status of overlay connections may change rapidly, making\nit difficult to correlate failures in different overlay connections\nover time.\n2. 
In order to construct a Bayesian network that accurately models\nthe IP hops shared among different TCP connections we\nneed an accurate IP level map of the Internet. As the Skitter\n1\nproject demonstrates, accurately constructing such a map is\ndifficult because routes may change and frequently tools such\nas traceroute do not provide accurate information.\n3. Determining the status of an inactive overlay connection or a\nTCP connection is costly and takes time because it requires\nan active probe such as a ping, traceroute, or HTTP connection\n. Furthermore such probes are frequently inaccurate because\nof the prevalence of packet filtering, network address\ntranslation (NAT), and firewalls in the Internet[3].\n4. TCP and IP failures are frequently so transient that by the\ntime one can test the status of a link, the failure no longer\nexists [11].\n1\nhttp://www.caida.org/tools/measurement/skitter/\nTherefore in this paper we present an alternative passive diagnosis\napproach that does not require simultaneously knowing the\nstatus of all overlay connections. Instead, we cluster TCP failures\nbased on the Internet autonomous systems (ASes) of their endpoints\nand use information about the distribution of TCP failures\nto infer the cause of failure. An agent first learns a probabilistic\nmodel of failures based on a training set of observed TCP connections\n, and then it uses this model to diagnose future failures when\nit does not know the connection status.\nOther researchers have developed methods for diagnosing specific\nTCP-based applications. Ward, et al. infer the presence of\nTCP performance failures based on the rate of requests processed at\nan HTTP proxy server and TCP connection state [10]. Unlike such\nspecialized diagnostic systems, our Bayesian approach to diagnosis\ncan generalize to other applications that rely on TCP connections.\nMost previous research in probabilistic diagnosis of Internet failures\nevaluate their work on simulated failures. Steinder and Sethi\nmodel network faults using a bipartite causality graph in which the\nfailure of individual links cause the failure of end-to-end connec-tivity\n, and then perform fault localization using a belief network[6].\nIn contrast, in our research we evaluate our approach on real-world\nTCP failures using actual data collected on the Internet.\nDIAGNOSING OVERLAY CONNECTION FAILURES\nIn this paper we consider the diagnosis of overlay networks in\nwhich an overlay network connection requires a series of TCP connections\nbetween overlay nodes between the source and destination\nhosts. For example, Akamai is a content distribution network\nin which retrieving a resource from a web server may requ ire\ncommunication among multiple Akamai nodes along multiple TCP\nconnections. Another example is the content distribution network\nCoDeeN on Planetlab, in which overlay nodes act as HTTP proxies\n. An request on CoDeeN[9] first requires a TCP connection to a\nCoDeeN node and then a connection from a CoDeeN node to server\nor another CoDeeN node. A failure in any one of these TCP connections\ncauses the user's HTTP connection to fail. The challenge\nis to determine which of these TCP connections has failed.\nSometimes users can determine whether a failure has occurred\nalong the first TCP connection along the overlay path using information\nprovided by their local TCP stack, but if a failure occurs\nbeyond the first connection users cannot tell where a failure occurs\nwithout cooperation from the overlay. 
Depending on the type of\noverlay, users may have different amounts of information about the\noverlay path. For example, in an HTTP proxy connection, users\nknow that the proxy is the first hop along the path and that if the\nconnection is not cached, the web server is the last hop along the\npath.\nAs a first step, in our research we examine a special case of diagnosis\nin order to gain insight into how well our approach might\ngeneralize to other types of diagnosis. The question we wish to answer\nis, if a two hop overlay connection fails due to a TCP failure,\nwhich TCP connection failed? In this paper we define a TCP failure\nas three consecutive TCP retransmits without a response. We\nassume that the diagnostic agent only knows that the overlay connection\nhas failed and does not know which of the TCP connections\nhas failed. We want to answer this question knowing only the IP addresses\nof the source, IP address of the first hop overlay node, and\nthe IP address of the ultimate overlay destination host. Our model\nfor probabilistic diagnosis generalizes to overlay connections with\nany number of hops, but as a starting point in this paper we only\nconsider overlay connections with two hops.\n306\nTCP Conn.\nB\n\nC\nHour\nDst\nAS\nSrc\nAS\nOverlay Conn.\nA\n\nC\nTCP Conn.\nA\n\nB\nHour\nDst\nAS\nSrc\nAS\n0\nFailed\nOK\n1\nOK\nOK\n0\nOK\nFailed\nFailed\nB\n\nC\n0\nFailed\nP(Status\n=OK)\nA\n\nB\n...\n1\n1\nHour\n...\n...\n...\n0.87\n2\n1\n1\nDst\nAS\n0.99\n1\nP(Status\n=OK)\nSrc\nAS\nFigure 1: A Bayesian network for TCP overlay path diagnosis\n3.1\nProbabilistic Diagnosis\nThe reliability of IP links in the Internet varies widely and hence\nwe expect the probability of TCP failure to differ between different\nsets of hosts. Thus if we have knowledge of the relative probability\nof failure of the TCP connections that make up an overlay connection\n, we can then infer the most likely cause of failure when\nan overlay connection occurs without conducting any additional\nprobes. In this paper we show we can use Bayesian networks both\nto learn a model of TCP failures and to perform diagnosis.\nBayesian networks compactly represent the conditional probability\nof related events and enable efficient inference based on available\nevidence[5]. A Bayesian network is a directed acyclic graph\nin which nodes represent variables, and edges from parent nodes to\nchildren nodes represent dependence relations. Each node X has a\nconditional probability table (CPT) P\n(X|parents(X)) that encodes\nthe conditional probability of X given evidence about its parents.\nBayesian networks have several important features that make\nthem especially suitable for reasoning about failures in the Internet\n. Firstly, Bayesian networks can model both deterministic and\nprobabilistic dependencies among many types of Internet components\nand diagnostic tests. For example, an HTTP proxy connection\nfunctions if and only if the user/proxy TCP connection functions\nand the proxy/provider TCP connection functions. The probability\nthat a TCP connection functions depends on the source and\ndestination IP addresses and the time of the connection. To improve\naccuracy, we cluster IP addresses by AS and connection time\nby hour (see section 3.2). Figure 1 illustrates a Bayesian network\nthat encodes the conditional probabilities for diagnosing an overlay\nconnection from A to B to C. 
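A minimal pure-Python sketch of the inference implied by Figure 1 for a two-hop path A-B-C may help make the model concrete. The CPT entries below are illustrative placeholders rather than learned values, and the sketch folds the deterministic overlay node of Figure 1 directly into the arithmetic instead of using a Bayesian network toolkit; the AS numbers and hour play the role of the evidence variables.

```python
# Minimal sketch of two-hop overlay diagnosis with the Figure 1 model.
# The CPT entries below are illustrative placeholders, not measured values.

# P(TCP hop functions | src cluster, dst cluster, hour)
CPT = {
    (1, 2, 0): 0.99,   # e.g. AS 1 -> AS 2 during hour 0
    (2, 3, 0): 0.87,   # e.g. AS 2 -> AS 3 during hour 0
}

def p_ok(src_as, dst_as, hour, default=0.95):
    """Probability that a single TCP connection functions."""
    return CPT.get((src_as, dst_as, hour), default)

def diagnose(src_as, proxy_as, dst_as, hour):
    """Posterior over the status of the two TCP hops, given that the
    overlay connection A -> C failed (i.e. at least one hop failed)."""
    p1 = p_ok(src_as, proxy_as, hour)   # user/proxy hop
    p2 = p_ok(proxy_as, dst_as, hour)   # proxy/provider hop
    # Joint probabilities of the hop-status combinations consistent
    # with an overlay failure (the overlay works only if both hops work).
    joint = {
        ("Failed", "OK"):     (1 - p1) * p2,
        ("OK", "Failed"):     p1 * (1 - p2),
        ("Failed", "Failed"): (1 - p1) * (1 - p2),
    }
    z = sum(joint.values())             # P(overlay connection failed)
    return {statuses: p / z for statuses, p in joint.items()}

posterior = diagnose(src_as=1, proxy_as=2, dst_as=3, hour=0)
print(max(posterior, key=posterior.get), posterior)
```

The hop-status combination with the largest posterior mass is taken as the diagnosis.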
To diagnose an overlay connection\nfailure from A to C, one can use this Bayesian network to infer the\nmost probable status of the underlying TCP connections from A to\nB and B to C given information about the AS numbers and hour the\nconnections were made.\nThe variables in the Bayesian network represent the functional\nstatus of TCP connections and overlay connections. A node in\nthis Bayesian network represents the functional status of a connection\n: OK if functioning, Failed if malfunctioning. Malfunctioning\nmeans that a connection failure occurs along the path, functioning\nmeans that no connection failure occurs. Edges in the Bayesian network\nrepresent dependencies among connections. The CPT for an\noverlay connection node represents the probability that it is functioning\ngiven the status of its underlying TCP paths. The CPT for\na TCP path represents the probability that the TCP path functions\ngiven information about the path. In our Bayesian network we assume\nthat the conditional probability of a TCP connection failure\ndepends only on the source and destination IP addresses and the\ntime of failure for each hop of the overlay, and not on which hop\nof the overlay connection it is (user/proxy or proxy/server). We\nrepresent this by using parameter tying in this Bayesian network\nso that both TCP paths share the same CPT. We also assume that a\ndiagnostic agent can identify the intermediate hops in the overlay\nconnection, either through active probing or because it has knowledge\nof the overlay topology.\nAn advantage of modeling network components in terms of Bayesian\nnetworks is that a Bayesian network provides an abstract high-level\nrepresentation for diagnostic data suitable for reasoning. Representing\ndiagnostic data in terms of variables, evidence, and dependencies\nrather than passing around low-level measurements such as\npacket traces allows an agent to reason about the causes and consequences\nof failures without any deep knowledge of the behavior\nand characteristics of components and diagnostic tests. In addition,\nthe conditional independence assumptions of Bayesian inference\nreduce the amount of data a diagnostic agent needs to consider for\ndiagnosis.\n3.2\nClustering\nTo perform diagnosis using this Bayesian network, we need to\nlearn the conditional probability of failure of a TCP connection\ngiven the properties of a connection. Learning the conditional probability\nof failure for each pair of IP addresses is impractical because\nit is infeasible to store the probability of failure for the 2\n64\ncombinations\nof source and destination IP addresses. More importantly,\nfor each pair of IP addresses we only have a limited amount of data\nwith which to train the Bayesian network. For more effective diagnosis\n, diagnostic agents need a way to diagnose failures involving\nIP addresses it has not previously observed.\nTherefore to reduce the size of the conditional probability tables\nand to improve the accuracy of the learned probabilities, we\ncluster together IP addresses in a way that facilitates learning and\ndiagnosis. Our hypothesis is that TCP connections that share many\nIP links with one another will have similar probabilities of failure\n. Thus two TCP connections with topologically nearby sources\nand nearby destinations will likely have similar failure probabilities\n. 
Therefore we clustered source and destination IP addresses in\nthree ways: by the first eight bits of the IP address, the AS number,\nand by country.\nWe also cluster TCP connections based on time. We hypothesize\nthat the probability of failure changes over multiple time scales.\nFor instance, if an IP routing change occurs, the probability of failure\nfor affected TCP connections may change from low to high and\nback to low within a few minutes. On the other hand, the average\nrate of routing failure over several days may remain relatively\nconstant. We show how different methods for clustering affect the\naccuracy of diagnosis in section 5.\nCOLLECTING TCP FAILURE DATA\nIt is difficult to obtain accurate information about the distribution\nof TCP failures in the Internet because failed connections make\nup only a small percentage of overall TCP traffic and the number\n307\nof possible source and destination IP addresses is enormous. To\ncollect accurate failure probabilities, we need a way to observe the\nstatus of large quantities of TCP connections from many different\nsource and destination hosts.\nIn order to obtain such data, we used an updated version of Planetseer\nto collect data on TCP connection failures. The new Planetseer\nmonitors TCP connections in the CoDeeN content distribution\nnetwork and provides notifications when TCP sessions begin, end,\nand when TCP failures occur. Planetseer runs on over 320 Planetlab\n[2] nodes distributed around the world. We used Planetseer to\nmonitor all the TCP connections made by 196 CoDeeN nodes. We\nobserved 28.3 million TCP connections and 249,000 TCP failures\nover a ten hour period. We observed TCP connections to approximately\n17,000 distinct IP addresses per hour on average. In our\ndataset, we observed TCP connections to hosts in 2116 unique Internet\nautonomous systems.\nCoDeeN overlay nodes act as HTTP proxies and establish TCP\nconnections with web clients, web servers, and other CoDeeN nodes.\nIn a typical CoDeeN session, a user initiates a TCP connection with\nthe CoDeeN proxy, the proxy connects to a web server and retrieves\nthe requested resource, and finally the proxy sends the requested\ndata back to the user. Note that many requests are cached, and so\nthe destination of the second hop in the overlay is a CoDeeN node\nand not the web server specified in the HTTP request. We found\nthat 0.28% of user/proxy connections and 0.65% of proxy/server\nconnections experienced TCP failures. Since Planetseer monitors\nTCP connections from the vantage point of the proxy, we cannot\ndetect those TCP failures in which a user is unable to establish a\nTCP connection to the proxy. Therefore the lower percentage of\nuser/proxy failures may be partly explained by the fact that all failures\nbetween the proxy and user occur after the user successfully\nestablishes a TCP connection to the proxy.\nWe believe that the failure probabilities learned through Planetseer\nare representative of typical TCP connections in the Internet\n. CoDeeN nodes operate as HTTP proxies, so the pattern of\nTCP connections resembles typical web traffic. Though caching at\nCoDeeN nodes reduces the number of connections to web servers\nwe observe, we believe that the average failure probability to web\nservers we observe using Planetseer reflects typical failure rates for\nHTTP related TCP connections. 
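Before connection records such as these can be used for training, each one is reduced to a (source cluster, destination cluster, hour) key using one of the clustering methods of Section 3.2. The sketch below illustrates that step under the assumption of simple record fields; the AS and country lookups are stubs standing in for external data sources such as a BGP table or the hostip.info database.

```python
from datetime import datetime

def ip_prefix8(ip):
    """Cluster by the first eight bits of the IP address."""
    return ip.split(".")[0]

def cluster_key(conn, method="prefix8", as_lookup=None, country_lookup=None):
    """Map a connection record to (src cluster, dst cluster, hour).
    `as_lookup` / `country_lookup` are assumed external mappings,
    e.g. built from a BGP table or a geolocation database."""
    if method == "prefix8":
        src, dst = ip_prefix8(conn["src_ip"]), ip_prefix8(conn["dst_ip"])
    elif method == "as":
        src, dst = as_lookup[conn["src_ip"]], as_lookup[conn["dst_ip"]]
    elif method == "country":
        src, dst = country_lookup[conn["src_ip"]], country_lookup[conn["dst_ip"]]
    else:
        raise ValueError(method)
    hour = datetime.utcfromtimestamp(conn["time"]).hour
    return (src, dst, hour)

# Example record with illustrative values:
conn = {"src_ip": "128.10.2.1", "dst_ip": "202.5.7.9", "time": 1136073600}
print(cluster_key(conn))   # ('128', '202', 0) under the prefix8 method
```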
We are currently examining other\ntypes of overlay connections to determine how well this TCP data\ngeneralizes for the diagnosis of other overlays.\nWe learn the conditional probability table for TCP connection\nfailure using the data collected from Planetseer. We cluster source\nand destination IP addresses by AS using the Oregon Route Views\nBGP tables\n2\n.\nEVALUATION\nOur hypothesis is that Bayesian inference using the conditional\nprobability of failure for TCP connections given the AS numbers of\nthe source and destination can accurately diagnose failures in overlay\nconnections. In order to test this hypothesis, we constructed a\nBayesian network using the probabilities learned from Planetseer\nand used it to diagnose failures in CoDeeN connections.\nWe wanted to answer the following questions in our experiments:\n1. Which clustering method produces the most accurate diagnosis\n: AS, IP/8 prefix, or country? We expect that clustering\nbased on AS will produce the most accurate results since it\nis most closely correlated with the Internet routing topology.\n2\nhttp://www.routeviews.org/\n2. How does diagnostic accuracy change as we increase the\ntime interval over which we cluster TCP connections? We\nexpect that as the clustering interval increases, accuracy will\nincrease at first, but then decrease as the learned probabilities\nless accurately reflect the probabilities of new failures.\n3. How does the age of the training set affect diagnostic accuracy\n? We expect that as the distribution of TCP failures in\nthe Internet changes over time, diagnostic accuracy will also\ndecrease.\n5.1\nExperimental Setup\nWe train a Bayesian network using the Bayes Net Toolbox (BNT)\nfor Matlab\n3\n. In order to diagnose TCP connections between regions\nwe did not observe in the training set, we initialize the prior probabilities\nof failure according to a uniform Dirichlet distribution,\nwhich is equivalent to adding an additional element to the training\nset for each combination of source cluster, destination cluster,\nand connection status. We test this Bayesian network on an artificial\ndataset generated based on the distribution of TCP connections\nobserved on Planetseer. Since Planetseer does not provide information\nabout which TCP connections are associated with each\nCoDeeN request, we construct a dataset based on the TCP connections\nwe observed. First we identify user/proxy, proxy/proxy,\nand proxy/server connections based on IP address and port number\n. Then for each proxy, we count the number of TCP connections\nto each server and to each proxy. We assume that the number\nof cached requests equals the number of user/proxy connections\nminus the number of proxy/server and proxy/proxy connections\n. We assign each user/proxy TCP connection a corresponding\nproxy/provider connection, where the provider may either be a web\nserver (if the resource is not cached), another proxy (if the resource\nis cached at another proxy), or the same proxy (if the resource is\ncached locally). We make these provider assignments according to\nthe observed distribution of proxy/server and proxy/proxy connections\n. Of the 19,700 failures in this dataset, approximately 82%\nof requests are cached locally, 7.9% are cached at other CoDeeN\nnodes, and 10.6% are uncached.\nFor each CoDeeN request failure our Bayesian network makes\ntwo diagnoses: one for the status of the user/proxy connection, and\none for the status of the proxy/provider connection. 
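The uniform Dirichlet initialization described above is equivalent to add-one smoothing of the observed counts. The following sketch shows that training step outside of BNT, with illustrative record fields; it estimates P(status = OK | source cluster, destination cluster, hour) directly from clustered connection records.

```python
from collections import defaultdict

def learn_cpt(training_records):
    """Estimate P(status = OK | src cluster, dst cluster, hour) from
    observed connections, with one pseudo-observation per status value
    (OK / Failed) so that sparsely observed combinations still get a
    usable probability."""
    ok = defaultdict(int)
    total = defaultdict(int)
    for rec in training_records:
        key = (rec["src"], rec["dst"], rec["hour"])
        total[key] += 1
        if rec["status"] == "OK":
            ok[key] += 1
    # Add-one smoothing over the two status values.
    return {key: (ok[key] + 1) / (total[key] + 2) for key in total}

records = [
    {"src": 1, "dst": 2, "hour": 0, "status": "OK"},
    {"src": 1, "dst": 2, "hour": 0, "status": "Failed"},
    {"src": 2, "dst": 3, "hour": 0, "status": "OK"},
]
cpt = learn_cpt(records)
print(cpt[(1, 2, 0)])   # (1 + 1) / (2 + 2) = 0.5
```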
We measure\naccuracy in terms of the fraction of correct diagnoses. To evaluate\nthe accuracy of diagnosis, we compute the most probable explanation\nfor a TCP failure given evidence that the overlay connection\nhas failed and the AS numbers of the source, proxy, and destination\n, and then compare this diagnosis with the actual status of the\nsource/proxy and proxy/provider connections. In our experiments\nwe perform diagnosis without evidence about whether a resource is\ncached at a proxy.\nOf the CoDeeN requests that failed in the first hour of our dataset,\nwe found that 62% failed at the user/proxy connection, 31% failed\nat the proxy/server connection, and 7% failed at a the proxy/proxy\nconnection. Therefore knowing only the overall distribution of\nTCP failures between users and servers, without using information\nabout the IP addresses of the user, proxy, and server, one could diagnose\nfailures with 62% accuracy by diagnosing every failure as\na user/proxy failure. In our experiments we wish to determine if\nour Bayesian approach to diagnosis can achieve significantly better\naccuracy.\nIn order to properly compute the accuracy of diagnosis, we sepa-rated\nthe set of TCP connections with which we trained the Bayesian\nnetwork from the set of TCP connections associated with the failed\n3\nhttp://www.cs.ubc.ca/ murphyk/Software/BNT\n308\n0%\n10%\n20%\n30%\n40%\n50%\n60%\n70%\n80%\n90%\n100%\nAS\nCountry\nIP\nBaseline\nAccuracy\nFigure 2: Clustering Method Comparison\n75%\n76%\n77%\n78%\n79%\n80%\n81%\n82%\n83%\n1\n2\n3\n4\n5\n6\n7\n8\n9\nTraining Interval Length (hours)\nAccuracy\nFigure 3: Accuracy vs. Training Interval Length\noverlay connections under diagnosis. We collected ten hours of\nTCP connection data from Planetseer. In our initial experiments\nwe choose to learn the average probability of failure over one hour\nbecause we find that clustering over shorter time scales does not\nprovide enough data for accurate diagnosis.\n5.2\nExperimental Results\nFirst we compare the accuracy of three IP clustering methods:\nby Internet autonomous system number (AS), by the first eight bits\nof the IP address (IP), and by the country in which a host resides\n(Country). We determine the country of a host using the hostip.info\ndatabase\n4\n, which maps the first 24 bits of an IP address to a country\nusing location information contributed by Internet users. We\ntrain three Bayesian networks corresponding to the three clustering\nmethods using data from hour 1. Then we test these Bayesian\nnetworks on the proxy connection failures constructed using data\nfrom hours 210 and averaged the results. We use a junction tree\ninference engine to compute the most likely status for each TCP\nconnection and compare the inferred status with the actual status\nfrom the data. Since the Bayesian network we use for inference\nhas no cycles, we can perform Bayesian learning and junction tree\ninference rapidly; in our experiments, inference for a single connection\nrequires approximately 5 ms.\n4\nhttp://www.hostip.info/\n0%\n10%\n20%\n30%\n40%\n50%\n60%\n70%\n80%\n90%\n100%\n1\n2\n3\n4\n5\n6\n7\n8\n9\nTraining Set Age (hours)\nAccuracy\nFigure 4: Accuracy vs. Training Set Age\nFigure 2 compares the diagnostic accuracy of these three clustering\napproaches. We define accuracy as the fraction of correct\nstatus inferences. As a baseline, we also plot the accuracy of simply\nguessing that every failure is due to a user/proxy connection\nfailure. 
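Concretely, the accuracy and the false positive and false negative rates reported below can be computed from the per-connection pairs of inferred and actual status; the sketch below is one way to do so, with the record layout chosen purely for illustration.

```python
def error_rates(results):
    """`results` is a list of (inferred, actual) status pairs for
    individual TCP connections, each status being 'OK' or 'Failed'."""
    failed = [(inf, act) for inf, act in results if act == "Failed"]
    functioning = [(inf, act) for inf, act in results if act == "OK"]
    return {
        "accuracy": sum(inf == act for inf, act in results) / len(results),
        # functioning connections incorrectly diagnosed as having failed
        "false_positive": sum(inf == "Failed" for inf, _ in functioning)
                          / max(len(functioning), 1),
        # failed connections incorrectly diagnosed as functioning
        "false_negative": sum(inf == "OK" for inf, _ in failed)
                          / max(len(failed), 1),
    }

print(error_rates([("Failed", "Failed"), ("OK", "OK"),
                   ("Failed", "OK"), ("OK", "OK")]))
# {'accuracy': 0.75, 'false_positive': 0.333..., 'false_negative': 0.0}
```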
Figure 2 shows that all three clustering methods provide\nsimilar degrees of accuracy. Our hypothesis was that clustering\nbased on AS would produce the most accurate results, but our experiments\nshow that clustering based on the first 8 bits of the IP\naddress yields higher accuracy for short time intervals. This may\nbe because one hour is not enough time to accurately learn inter-AS\nTCP failure probabilities, or due to inaccuracies in the Route Views\nBGP table.\nNext we computed the accuracy of diagnosis as we increase the\ntime interval over which we cluster TCP connections. If the interval\nover which we train is too short, then we will not have enough data\nto accurately learn failure probabilities. If the interval is too long,\nthen it may not accurately reflect changing network conditions. We\ntrain a Bayesian network using AS clustering on x hours before\nhour 10 for values of x from 1 to 9. We then test each Bayesian\nnetwork on the data from hour 10. Figure 3 shows how accuracy\nchanges as the training time interval changes. This plot shows that\naccuracy increases as the clustering time interval increases, suggesting\nthat the training value of incorporating additional data outweighs\nthe inaccuracy introduced by using older data.\nFinally, we compute the accuracy of diagnosis as we increase the\nage of the data on which we trained the Bayesian network. We train\na Bayesian network using AS clustering on data from hour 1 and\ntest it on overlay failures observed during each of the hours from\n2 to 10. Figure 4 plots the accuracy of diagnosis over time. Average\naccuracy changes over time because the distribution of failures\nwe observe using Planetseer varies from hour to hour, but overall\ndiagnostic accuracy diminishes only slightly after nine hours, suggesting\nthat the distribution of TCP failure probabilities remains\nrelatively stationary over time.\nWe also compare the false positive and false negative rates for\neach clustering method. The false positive rate is the fraction of\nfunctioning connections that are incorrectly diagnosed as having\nfailed, while the false negative rate is the fraction of failed connections\nthat are incorrectly diagnosed as functioning. Table 1 lists the\nfalse positive and false negative rates for each clustering method.\n5.3\nAnalysis\nThese experiments show that we can diagnose overlay connection\nfailures knowing only the AS numbers of its TCP endpoints.\n309\nAS\nCountry\nIP\nBaseline\nuser/proxy false pos.\n0.174\n0.358\n0.426\n1.000\nuser/proxy false neg.\n0.219\n0.050\n0.060\n0.000\nproxy/server false pos.\n0.219\n0.101\n0.265\n0.000\nproxy/server false neg.\n0.171\n0.128\n0.100\n1.000\nTable 1: Diagnosis error rates by type\nOne reason our approach to diagnosis works is due to the heavy-tailed\ndistribution of TCP connection failure probability. The majority\nof TCP failures occur among a small number of AS pairs.\nTherefore most CoDeeN connection failures involve one TCP connection\nwith low failure probability and another TCP connection\nwith high failure probability, so probabilistic inference produces\nthe correct diagnosis. For example, we find that TCP connections\nfrom hosts in China to hosts in the USA tend to have a much\nhigher probability of failure than connections within the USA. If\nan CoDeeN user in China accesses a proxy in the USA to retrieve\ncontent from a web server in the USA and experiences a failure,\nthen it is very likely that the failure occurred on the connection between\nthe user and the CoDeeN node. 
If the probability of failure\nfor every pair of ASes were equal, then our probabilistic approach\nto diagnosis would not work as well.\nAnother interesting result is that the accuracy of diagnosis diminishes\nrelatively slowly over time, implying that the distribution\nof TCP failures in the Internet stays relatively stationary over time.\nThis suggests that diagnostic agents can perform accurate diagnosis\nusing inter-AS TCP failure probabilities without having to con-stantly\ncollect the latest TCP failure data.\nCONCLUSION AND FUTURE WORK\nOur initial experimental results indicate that our passive probabilistic\napproach to diagnosing TCP overlay connection failures\ncan provide useful diagnostic information. In this paper we show\nthat Bayesian inference provides a useful framework for diagnosing\ntwo hop overlay connection failures on CoDeeN, but our approach\ncan generalize to the diagnosis of other overlay connection\nfailures as well. We view our approach to diagnosing TCP overlay\nconnection failures as just one example of a more general probabilistic\napproach for Internet fault diagnosis. In this paper we show\nhow to use inter-AS TCP failure probabilities to diagnose failures\nin overlay networks, but the technique we used to diagnose failures\nin CoDeeN can be extended to the diagnosis of other overlays as\nwell. We can apply the knowledge we learned from Planetseer to\ndiagnose other classes of network components and applications by\nadding new nodes and edges to the Bayesian network we use for\ndiagnosis.\nIn this paper we only considered diagnosis without using any additional\nevidence about a failure. Typically, however, when failures\noccur users may already know the status of certain network components\nand can perform diagnostic probes to collect additional evidence\nfor diagnosing failures. We can improve the accuracy of our\napproach by adding variables and edges to the Bayesian network to\ntake into account this information. For instance, if we know the IP\npaths that TCP connections traverse, we can incorporate evidence\nof IP link failures into the Bayesian network. We intend to explore\nhow agents can incorporate such additional evidence into a\nBayesian network to improve diagnostic accuracy.\nIn future work we will also examine more accurate models for\nInternet fault diagnosis that take into account failures at both short\nand long time scales. In this paper we only evaluated our algorithm\non ten hours of data from Planetseer; we would like to conduct additional\nexperiments to more accurately determine the effectiveness\nof diagnosis using data from other time periods as well. In addition\nwe would like to explore other clustering methods, including dy-namically\nchoosing the prefix length on which to cluster based on\nhow much data an agent has about TCP connections to a particular\nIP range.\nFinally, though our paper describes a centralized diagnosis approach\n, this approach can easily be adapted for distributed diagnosis\n. Knowledge of the overlay topology and the conditional probabilities\nin the CPTs can be distributed among multiple agents in\nthe Internet, allowing different agents to collect failure data from\ndifferent points in the network. We are currently developing such a\ndistributed system for the diagnosis of TCP application failures in\nthe Internet.\nREFERENCES\n[1] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and\nR. Morris. Resilient overlay networks. 
In Proceedings of the\n18th ACM Symposium on Operating System Principles\n(SOSP), 2001.\n[2] B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson,\nM. Wawrzoniak, and M. Bowman. Planetlab: an overlay\ntestbed for broad-coverage services. SIGCOMM Comput.\nCommun. Rev., 33(3):312, 2003.\n[3] S. Guha and P. Francis. Characterization and measurement of\ntcp traversal through nats and firewalls. In Internet\nMeasurement Conference 2005 (IMC '05), 2005.\n[4] S. Kandula, D. Katabi, and J.-P. Vasseur. Shrink: A Tool for\nFailure Diagnosis in IP Networks. In ACM SIGCOMM\nWorkshop on mining network data (MineNet-05),\nPhiladelphia, PA, August 2005.\n[5] U. Lerner, R. Parr, D. Koller, and G. Biswas. Bayesian fault\ndetection and diagnosis in dynamic systems. In Proceedings\nof the Seventeenth National Conference on Artificial\nIntelligence (AAAI-00), pages 531537, Austin, Texas,\nAugust 2000.\n[6] A. S. M Steinder. Increasing robustness of fault localization\nthrough analysis of lost, spurious, and positive symptoms. In\nProceedings of INFOCOM, 2002.\n[7] R. Mahajan, N. Spring, D. Wetherall, and T. Anderson.\nUser-level internet path diagnosis. In Proceedings of ACM\nSOSP, 2003.\n[8] V. N. Padmanabhan, S. Ramabhadran, and J. Padhye.\nNetprofiler: Profiling wide-area networks using peer\ncooperation. In Proceedings of the Fourth International\nWorkshop on Peer-to-Peer Systems (IPTPS), February 2005.\n[9] L. Wang, K. Park, R. Pang, V. Pai, and L. Peterson.\nReliability and security in the codeen content distribution\nnetwork. In Proceedings of the USENIX 2004 Annual\nTechnical Conference, 2004.\n[10] A. Ward, P. Glynn, and K. Richardson. Internet service\nperformance failure detection. SIGMETRICS Perform. Eval.\nRev., 26(3):3843, 1998.\n[11] M. Zhang, C. Zhang, V. Pai, L. Peterson, and R. Wang.\nPlanetseer: Internet path failure monitoring and\ncharacterization in wide-area services. In Proceedings of\nSixth Symposium on Operating Systems Design and\nImplementation (OSDI '04), 2004.\n310\n", "keywords": "fault diagnosis;passive diagnosis;NAT;Bayesian networks;Planetseer overlay monitoring system;active probing for diagnosis;inter-AS TCP failure probabilities;TCP overlay connections;Bayesian networks modelling;CoDeeN traffic patterns;TCP overlay path diagnosis;Planetseer;clustering;network address translation"} {"name": "69", "title": "Digital Asset Management Using A Native XML Database Implementation", "abstract": "Digital Asset Management (DAM), the management of digital content so that it can be cataloged, searched and re-purposed, is extremely challenging for organizations that rely on image handling and expect to gain business value from these assets. Metadata plays a crucial role in their management, and XML, with its inherent support for structural representation, is an ideal technology for this. This paper analyzes the capabilities of a native XML database solution via the development of a \"proof of concept\" and describes implementation requirements, strategy, and advantages and disadvantages of this solution.", "fulltext": "INTRODUCTION\nDigital asset creation and management evolved in the late 1990s.\nCompanies have created massive digital assets in the form of\nimages, video and audio files, streaming media, Power Point\ntemplates, web pages, and PDF files containing engineering specs,\nlegal documents, internal memos and more. The World Wide Web\nhas drastically increased the need for digital information and its\nexchange. 
Ille [8] identifies a digital asset as a strategic asset like\nany other form of working capital, and states that its efficient\nmanagement is being increasingly recognized as a competitive\nlever for companies all over the world.\nDevelopment of the model for storing any form of digital object\ninto a structured format requires a deft combination of asset\nanalysis, strategic thinking, business planning and appropriate\ntechnology. Companies can achieve early strategic advantage by\nimplementing management systems for digital assets that can be\nrepurposed or customized, providing process efficiencies in\ncollaborative work. Digital Asset Management (DAM) can deliver\ncompetitive advantage for advertising agencies, technical or\nengineering documentation departments, designers, producers and\nothers by reducing time spent locating creative assets.\nEnterprises often require reusing or sharing their digital assets. It\nis indispensable to store content in an organized way to reuse or\nprocess it for future needs. Global enterprises are facing the\ndaunting challenge of figuring out how best to address the\ngrowing complexity of creating digital assets and managing the\nflow of assets through a single infrastructure [11]. Exacerbating\nthe challenge is the fact that companies are creating a massive\nvolume of digital assets but are rarely examining their organized\nstorage and retrieval methods.\n\nSIGNIFICANCE OF THE PROBLEM\nDAM systems are still relatively new, but organizations have\nstarted realizing the importance and need for digital asset\nmanagement. The Gartner Group affirms that only a limited\nnumber of technically advanced commercial content providers use\nDAM systems today to digitally construct, store and distribute\nrich media content in single medium forms [7]. The systems also\nhave limited corporate use in advertising firms and marketing\ndepartments. Gartner predicts that by 2005 more than 25% of all\nthe enterprises with commercial internet operations will use DAM\nsystems. By 2010, more than 45% of all the enterprises with\ncommercial internet operations will use DAM systems.\nRecently reported cases provide evidence that companies have\nstarted investing in technology for DAM. For example, the Coca\nCola company has bought technology from IBM for its digital\nadvertisement archives, which contain 9,000 graphical images,\n7,000 scanned documents and more than 25,000 corporate videos\nand television advertisements [13]. The technology includes\nsearch tools for retrieving, updating, managing and disseminating\nhistorical records online, including the company's famous\nmarketing and advertising icons.\nAnother case is that of the Shoah Foundation. Steven Spielberg\nestablished the Shoah Foundation with the mission to videotape\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nCITC4'03, October 1618, 2003, Lafayette, Indiana, USA.\nCopyright 2003 ACM 1-58113-770-2/03/0010...$5.00.\n237\n\n\nand preserve the testimonies of the Holocaust survivors and\nwitnesses. The foundation has collected more than 50,000\neyewitness testimonies in 57 countries and 32 languages [12]. 
The\nchallenge now is to manage, catalog and disseminate the video\nand digital collection of testimonies of those survivors. These\ndigital assets of the Shoah Foundation are cataloged with lists of\nkeywords, text summaries describing the survivors, related\ndocumentaries focusing on topics such as the ghettos or labor\ncamps they lived in, and past and present photos of them and their\nfamilies.\nThe following summarizes the major challenges that are faced\nduring any wide adoption of DAM.\n\nStorage: One of the fundamental problems is physical\ndeterioration of storage devices that store digital information.\nMagnetic media are particularly subject to data loss and\nerrors. There is also the important question of hardware and\nsoftware obsolescence and the future availability of today's\ntechnologies.\n\nProcedural Issues: Technical problems are further complicated\nwhen resources to be preserved need to be transformed into\ndigital form. Digitization of paper analogs for access and\npreservation is a time consuming and labor intensive task.\nBeyond this technical problem, there is a host of financial,\nlegal, and policy problems.\n\nSecurity: Securing digital assets against misuse, theft, or\ndamage is an ongoing concern of organizations.\n\nCopyright: One of the major legal issues associated with\nelectronic assets is the actual scanning or digitizing of\nmaterials. Copyright holders have the exclusive right to\nreproduce, distribute, and display their work. Ferullo [3]\npoints out that digitizing and posting infringes on these\nexclusive rights.\n\nDistribution: Digital assets will be utilized to the fullest only\nwhen they can be distributed via proper communication\nchannels to the intended users.\n\nInfrastructure: DAM requires a robust IT infrastructure to\nsupport creation and distribution.\n\nHuman Factors: While the purpose of DAM is to provide\ngreater efficiency, getting users to adapt to the new\nenvironment can be a challenge. This is an important issue\nbecause many DAM solutions require a change in work\nprocesses before the users see any benefits [6].\nThe requirement to manage and deliver digital assets is becoming\ncritical to almost all content-related applications, due to the\nevolution of the Internet and growth of digital assets. Enterprises\nneed plans to benefit from the new world of rich, valuable digital\nassets that will affect everything from their internal processes and\ncustomer relations to their web site and telecommunications\ninfrastructures.\n\nAN XML SOLUTION\nWidespread use of rich media has spurred the growth of the DAM\nmarket. Frost and Sullivan, a market analysis firm, claims the\naverage user in a media enterprise looks for a media file 83 times\na week and fails to find it 35% of the time [5]. Canto Software\nresearch predicts that DAM solutions will drop that figure to 5%\n[9]. With the growth of internet publication, document metadata is\nincreasingly important because it provides contextual information\nabout digital assets necessary for customization of the content. In\nresponse, Adobe Systems plans to unveil new metadata\ntechnology designed to ease the process of applying content to\nmultiple types of media. The XMP (Extensible Metadata\nPlatform) provides a framework for standardizing the creation and\nprocessing of document metadata across publishing workflows,\naccording to Adobe officials [10].\nA review by Doering [2] provides the insight into methods for\ndigital asset management. 
According to Doering, a DAM solution\nmust include the following critical features:\n\nIndexing: As content is generated and stored, it is indexed\naccording to various possible criteria. Metadata does not\nmerely describe by whom, when, and how a piece of content\nwas created; it can also be used to interpret and catalog the\ncontent itself.\n\nRights Management: Includes handling rights to the content or\nrestricting the use of the content by the purchaser/end-user.\nThis might occur, for example, with corporate information or\nlicensed images from a third party incorporated into the\ncontent.\n\nReuse: With a viable DAM system in place, the internal\ncontent developer can research and select appropriate content\nfor reuse in new content. This represents a significant savings\npotential for the companies.\n\nReview: A final benefit of an online catalog with a DAM\nsystem is the ability to review older content more easily.\n3.2 The XML/Metadata Approach\nBy incorporating a DAM system, a company gains both the\nsavings from reusing content as well as revenue from continued\nsales of the same elements. According to Fontana [4], XML\ndatabases serve in a complementary role to traditional databases\nespecially as XML becomes prevalent. Nearly 85% of large\ncorporations are expected to move all their web-based content to\nXML over the next three years.\nFundamentally, two high-level approaches may be adopted for\nimplementing XML databases.\n1) XML-enabled database: In an XML-enabled database, the\ndocuments are stored in constituent fragments. Here the XML\ndata is stored in object relational form and one can use an XML\nSQL Utility (XSU) or SQL functions and packages to generate the\nwhole document from its object relational instances.\n2) Native XML database: In a native XML database approach,\nXML is not fragmented but rather is stored as a whole in the\nnative XML database. This means that documents are stored,\nindexed, and retrieved in their original format, with all their\ncontent, tags, attributes, entity references, and ordering preserved.\nIn this technique, the XML Database is designed to handle XML\nin its native format, thereby speeding access to data by eliminating\nthe need to transform it into the rows and columns of a standard\nrelational database.\n238\n\n\nBiggs [1] suggests there are three principal reasons to implement\na native XML database: 1) enterprises today use a mix of data,\nsuch as the data housed in object-oriented databases and\nunstructured data that needs to be exchanged with partners (native\nXML databases can leverage all of these disparate sources); 2)\nXML databases can boost processing performance due to XML-based\ntransactions; and 3) digital assets described in the XML\nformat can be used by other applications.\n3.3 A Technique Using Native XML\nWhen developing a native XML solution, certain steps should be\nfollowed:\n1) Identify the need for DAM\n2) Know your assets: Identify various assets, define and\nunderstand their use in the organization\n3) Define search needs and key attributes of your assets\n4) Capture the objects (digitized) and data about the objects\n(metadata)\n5) Process: Generate and store XML files associated with each\nobject\nThe basis of this technique lies in creation and usage of semi-structured\nmetadata stored in an XML format to manage digital\nassets efficiently. 
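The difference between the two storage approaches described above can be illustrated with a small sketch that is independent of any particular vendor's product: an XML-enabled store shreds an asset's metadata document into flat rows, whereas a native store keeps the document whole and queries it in place with an XPath-style expression. Python's standard xml.etree module stands in for the database engine here, and the element names are illustrative.

```python
import xml.etree.ElementTree as ET

doc = """<asset>
  <fileName>part_012.avi</fileName>
  <type>Video</type>
  <author>PLM Lab</author>
  <keywords><keyword>assembly</keyword><keyword>conveyor</keyword></keywords>
</asset>"""

# XML-enabled style: shred the document into flat (column, value) rows;
# multi-valued elements such as <keyword> must be split out separately.
rows = [(el.tag, el.text) for el in ET.fromstring(doc).iter()
        if el.text and el.text.strip()]
print(rows)

# Native style: keep the document whole, structure and ordering preserved,
# and query it in place (ElementTree supports a subset of XPath).
stored = doc
keywords = [k.text for k in ET.fromstring(stored).findall(".//keyword")]
print(keywords)
```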
When the structure of the data is irregular and\nimplicitly given by the document itself, not by a global scheme,\nthey are often referred to as semi-structured data. The ability of an\nXML to handle dynamic data provides leeway for the applications\nto use semi-structured data to describe their digital assets.\nXML is becoming the de facto data exchange format for the Web.\nThe XML enclosing metadata can be either stored in the database\nfor future retrieval or be easily transferred over the Internet to any\nother systems.\nThe Oracle 9i DBMS (used for the Proof of Concept) provides an\n\"XMLType\" data type that can be used to store and query XML\ndata in the database. The XMLType has member functions one\ncan use to access, extract, and query the XML data using XPath\nexpressions, a standard developed by the World Wide Web\nConsortium to traverse XML documents.\n3.4 Proof of Concept\nThis study analyzed existing approaches for non-text based digital\nasset management and implemented a DAM solution by applying\na native XML Database technique. Meta-data of digital assets is\noften semi-structured and the digital files are of varied types.\nXML databases are most appropriate for managing diverse types\nof semi-structured digital assets.\nFor this project, the Proof of Concept (POC) was developed\nusing facilities and resources of the University's Digital\nEnterprise Center (DEC) at the School of Technology. The\nProduct Lifecycle Management (PLM) lab at DEC creates,\nsimulates and shares digital information related to a company's\nproducts, processes and resources. These digital assets encompass\ngraphical images, presentations, and video/audio clips of\nmanufacturing processes representing manufacturing models. A\ndigital asset produced in the PLM lab is the intellectual property\nof the DEC and needs to be managed for future use or research.\nThough the demonstration focused on the manufacturing process\nmodels in the PLM lab, the XML Database technique can be\napplied to any form of digital asset.\nSince the potential file population is very vast and is of varied\ntypes, the POC restricted the sampling to the following categories\nof digital assets: Audio files, Video files, Images, and Text based\nfiles (presentation slides, Word documents, Acrobat files). The\nPOC confined the sample to a limited number of files describing\nassembly line parts available from the PLM lab.\nMetadata was stored for each of the digital objects of the assembly\nline. The global parameters and sub-parameters used to describe\nthe digital files included the following: File Name, Type (Image,\nAudio, Video, Word etc), Author, Creation Date, General\nDescription, Keywords, and a Comment. A keyword search\ncapability was added for searching within these and other\nparameters.\nThe Proof of Concept application was developed to provide a\nstorage, search and retrieval facility to manage the digital assets of\nthe PLM lab using the School of Technology's software and\nhardware resources. The application provides a web-based\ninterface for convenient management of the digital models and has\nan \"n-tiered\" architecture. The backend database is Oracle 9i with\nnative XML support. The web interface was developed using JSP\nand the middle tier was incorporated using Java Servlets. 
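A minimal sketch of the store-and-search idea behind the POC is given below. It uses Python's standard library in place of the Oracle 9i / JSP / Servlet stack actually used, generates each asset's metadata as a whole XML document containing the global parameters listed above, keeps the documents intact in a simple in-memory store, and answers the keyword search with an XPath-style query; all asset values shown are illustrative.

```python
import xml.etree.ElementTree as ET

def make_metadata(file_name, file_type, author, created,
                  description, keywords, comment=""):
    """Generate the metadata XML document for one digital asset."""
    root = ET.Element("asset")
    for tag, value in [("fileName", file_name), ("type", file_type),
                       ("author", author), ("creationDate", created),
                       ("description", description), ("comment", comment)]:
        ET.SubElement(root, tag).text = value
    kw = ET.SubElement(root, "keywords")
    for word in keywords:
        ET.SubElement(kw, "keyword").text = word
    return ET.tostring(root, encoding="unicode")   # stored whole, not shredded

# A toy document store: asset id -> intact XML metadata document.
store = {
    "A1": make_metadata("line_3.avi", "Video", "DEC", "2003-02-10",
                        "Conveyor assembly clip", ["assembly", "conveyor"]),
    "A2": make_metadata("part_spec.doc", "Word", "DEC", "2003-03-01",
                        "Part specification", ["assembly", "spec"]),
}

def search_by_keyword(word):
    """Return ids of assets whose <keyword> elements match `word`."""
    return [aid for aid, xml in store.items()
            if any(k.text == word
                   for k in ET.fromstring(xml).findall(".//keyword"))]

print(search_by_keyword("conveyor"))   # ['A1']
```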
Analysis\nwas conducted with the following steps:\n\nVarious types of data (heterogeneous data) were collected for\ndemonstration purposes.\n\nThe validity of metadata was checked before entering it into\nthe system.\n\nUpon validation, data was entered in the system (Oracle 9i\ndatabase) through the web-interface.\n\nWith the appropriate search parameters, the data entered could\nbe searched for and retrieved. Depending on the requirement,\nthis retrieved data could be viewed, reused or repurposed.\n\nProviding different search criteria tested consistency and\nreliability in the retrieval of the asset.\nThe following technologies were used to develop the POC:\nApache Webserver 1.3; Apache JServ 1.1; JDK 1.1; Oracle 9i\n(9.0.0.2); XML; Java, JSP, Servlets. Tools used for development\npurposes included: ERwin database design software; Borland\nJbuilder; Microsoft Front Page; and Oracle Enterprise Manager.\nThe POC was developed with a three-tier architecture as shown in\nFigure 1. The first tier of the architecture presents an interface to\nthe user to facilitate access to digital files. The interface is web-enabled\nand was developed using Java Server Pages. This tier\nprovides a user-friendly navigation through the site for managing\ndigital files, including screens for inserting, deleting and\nsearching on file data.\n239\n\n\nXML\nGeneration\nDatabase\nXML Files\n(Metadata\nfor Digital\nAssets)\nXML Files\n(Metadata\nfor Digital\nAssets)\nUser Interface\nWeb &\nApplication\nServer\nDigital\nAsset Storage\nXML\nValidation\nMetadata\nfor\nDigital\nAssets\nMetadata\nfor\nDigital\nAssets\nDigital\nAsset\nDigital\nAsset\nTier 3\nTier 2\nTier 1\nThe POC Architecture\nFigure 1: 3-Tier Architecture for POC\nThe middle tier of the architecture consists of Servlets and Java\nhelper classes. Servlets direct the flow of the control and provide\ndatabase connectivity for the application. In addition, the business\nlogic is implemented in Servlets. Helper classes are used by\nServlets to create and manage user-defined Java objects. Servlets\nand JSPs are deployed on an Apache Web server and Apache\nJserv.\n\nThe third tier of the architecture is the database layer itself. As\nnoted previously, the POC uses the XML capabilities of the\nOracle 9i database management system.\nOBSERVATIONS AND EVALUATION\nThe XML database technique stores attributes of digital assets in\nthe database in the form of an XML file. While the XML resides\nin the database system, digital assets might be files in a file system\nor a BLOB in a relational database. This demonstration of the\ntechnique stores digital assets in a file system.\n\nXML databases provide some features that are analogous to\nconventional text databases. Common to all native XML\ndatabases are indexes, which allow the query engine to jump to a\nparticular point in an XML document. This gives such databases a\ntremendous speed advantage when retrieving entire documents or\ndocument fragments. This is because the database can perform a\nsingle index lookup and retrieve the entire document or fragment\nin a single read.\n\nNormalizing data for an XML database is largely the same as\nnormalizing it for a relational database. The programmer needs to\ndesign the documents so that no data is repeated. One difference\nbetween native XML databases and relational databases is that\nXML supports multi-valued properties while (most) relational\ndatabases do not. 
This makes it possible to \"normalize\" data in a\nnative XML database in a way that is not possible in a relational\ndatabase.\n4.2 Native XML vs. XML-Enabled\nA native XML database defines a (logical) model for an XML\ndocument -- as opposed to the data in that document -- and stores\nand retrieves documents according to that model. A native XML\ndatabase has an XML document as its fundamental unit of\n(logical) storage, just as a relational database has a row in a table\nas its fundamental unit of (logical) storage. It is not required to\nhave any particular underlying physical storage model. For\nexample, it can be built on a relational, hierarchical, or object-oriented\ndatabase, or use a proprietary storage format such as\nindexed, compressed files.\n\nAn XML-enabled database has an added XML mapping layer\nprovided either by the database vendor or a third party. This\nmapping layer manages the storage and retrieval of XML data.\nData that is mapped into the database is mapped into application\nspecific formats and the original XML meta-data and structure\nmay be lost. Data retrieved as XML is NOT guaranteed to have\noriginated in XML form. Data manipulation may occur via either\nXML specific technologies (e.g. XPath) or other database\ntechnologies (e.g. SQL). The fundamental unit of storage in an\nXML-enabled database is implementation dependent. The XML\nsolutions from Oracle and Microsoft, as well as many third party\ntools, fall into this category.\n4.2.1 Advantages of Native XML\nNative XML databases have several advantages over relational\ndatabases. Since native XML databases do not have database\nschemas, one can store similar documents with more than one\nschema in the database at the same time. While one may still need\nto redesign queries and convert the existing documents -- a non-trivial\nprocess -- this may ease the transition process.\n\nXML databases make search within the XML documents very\nefficient. If the data is parsed into Document Object Model\n(DOM) format, XPATH can be used. XML database solutions\nusually add a text indexing system so that query performance is\nimproved.\n4.2.2 Disadvantages of Native XML\nCurrently, only a few native XML databases enforce referential\nintegrity. The reason for this is that most native XML databases\ndo not currently support linking mechanisms, so there are no\nreferences for integrity checking. Therefore, applications that rely\non referential integrity mechanisms of databases must enforce\nthese constraints themselves for XML databases. In the future,\nmany native XML databases will probably support linking\nmechanisms and referential integrity.\n\nAnother disadvantage of XML databases is that while query\nperformance is better, update performance suffers. The reason is\nthat the index entries for a document must be deleted and new\nentries created whenever a document is inserted or updated.\n\nIn general, an XML Database approach is better because it\nsupports the full power of XML. However, a major drawback is\nperformance degradation, as data must be constantly reparsed into\na DOM tree, wasting cycles and memory. Additionally, update\ncapabilities are weak, and finally, automated enforcement of\n240\n\n\nintegrity constraints needlessly places unreasonable burden upon\napplication programmers, increasing risks and costs.\nCONCLUSION\nDigital Asset Management (DAM) is an evolving field with a\ngreat potential. 
With the evolution of computers and the Internet,\ncompanies have been creating an enormous volume of digital\ncontent in various formats. Managing this content, so that it can\nbe cataloged, searched and re-purposed, is extremely challenging\nfor organizations.\n\nXML is a commonly used standard in internet applications.\nHence, representing the metadata of a digital content in an XML\nformat is, on the surface, a good design decision. XML, by its\nvery nature, provides for a complex, well-defined and yet\nextensible structural representation of the metadata and\ninteroperability between various applications dealing with digital\nassets. A major advantage of having XML natively in the database\nis that one can perform the relational manipulation operations\nsuch as insert, update, and delete on the whole or partial XML\nand can also perform XML specific operations like XPATH\nsearch and node modification using the power of SQL. This, and\nother advantages give native XML databases an edge over\nsystems that don't use XML or that manage XML externally.\n\nREFERENCES\n[1] Biggs, M. (2001, December 3). Two ways to bring XML to\nyour databases. InfoWorld, Framingham, 23 (49), 20.\n[2] Doering, D. (2001, August). Defining the DAM thing: how\ndigital asset management works. Emedia Magazine, Wilton,\n14(8), 28-32. Retrieved February 27, 2002, from\nhttp://proquest.umi.com/pqdweb?Did=000000077222944&F\nmt=4&Deli=1&Mtd=1&Idx=5&Sid=12&RQT=309\n[3] Ferullo, D. L. (2002). The challenge of e-reserves. Net\nConnect, 48 (8), 33-35.\n[4] Fontana, J. (2001, November 5). Upgraded database makes\nthe most of XML. Network World, Framingham, 18 (45), 29.\nRetrieved February 27, 2002, from\nhttp://proquest.umi.com/pqdweb?Did=000000088274369&F\nmt=4&Deli=1&Mtd=1&Idx=7&Sid=3&RQT=309\n[5] Frost & Sullivan.s (n.d.). U.S. Digital Asset Management\nMarkets. Retrieved March 23, 2002, from\nhttp://www.frost.com/prod/servlet/fcom?ActionName=Displa\nyReport&id=A192-01-00-00-00&ed=1&fcmseq=1043213084248\n[6] Garcia, K. (2001, November). Broadcasters starting to ease\ninto digital workflow. TVB Europe, 10(11), 22-24.\n[7] Gilbert, M., Landers, G., Latham, L. & Lundy, J. (2001,\nNovember 26). What's cool, what's hot: content technology\nhype cycle. Gartner Advisory. Retrieved February 9, 2002,\nfrom\nhttp://gartner.adpc.purdue.edu/rasdquest/RESEARCH/RAS/\n102700/102760/102760.html\n[8] Ille, C. (2001, February 26). Market definitions: digital\ndocument imaging. Gartner Group. Retrieved February 27,\n2002, from\nhttp://proquest.umi.com/pqdweb?Did=000000093522513&F\nmt=3&Deli=1&Mtd=1&Idx=4&Sid=3&RQT=309\n[9] Martin, N. (2001, September). DAM right! Artesia\ntechnologies focuses on digital asset management. EContent,\nWilton, 24(7), 60-62.\n[10] Moore, C. (2001, September 24). Adobe touts images.\nInfoWorld, Framingham, 23(39), 36. Retrieved February 27,\n2002, from\nhttp://proquest.umi.com/pqdweb?Did=000000081956628&F\nmt=3&Deli=1&Mtd=1&Idx=2&Sid=12&RQT=309\n[11] Moore, C. (2001, September 24). Content management\nplays role in cross-media publishing. Computer World.\nRetrieved February 9, 2002, from\nhttp://www.computerworld.com/storyba/0,4125,NAV47_ST\nO64339,00.html\n[12] Solomon, M. (2002, January 14). Managing the memories.\nComputer World. Retrieved April 16, 2002, from\nhttp://www.computerworld.com/storyba/0,4125,NAV47_ST\nO67304,00.html\n[13] Weiss, T. (2001, December 10). Coca-Cola ad legacy gets\nhelp from IBM. Computer World. 
Retrieved February 9,\n2002, from\nhttp://www.computerworld.com/storyba/0,4125,NAV47_ST\nO66495,00.html\n\n\n\n241", "keywords": "keyword search;metadata;multimedia;digital asset management;semi structured data;database;digital images;Digital Asset Management;XML database;storage and retrieval;native XML;DAM;heterogenous data;proof of concept"} {"name": "7", "title": "A Fair and Traffic Dependent Scheduling Algorithm for Bluetooth Scatternets", "abstract": "mechanisms and algorithms necessary to set up and maintain them. The operation of a scatternet requires some Bluetooth units to be inter-piconet units (gateways), which need to time-division multiplex their presence among their piconets. This requires a scatternet-scheduling algorithm that can schedule the presence of these units in an efficient manner. In this paper, we propose a distributed scatternet-scheduling scheme that is implemented using the HOLD mode of Bluetooth and adapts to non-uniform and changing traffic. Another attribute of the scheme is that it results in fair allocation of bandwidth to each Bluetooth unit. This scheme provides an integrated solution for both intra-and inter-piconet scheduling, i.e., for polling of slaves and scheduling of gateways.", "fulltext": "Introduction\nThe Bluetooth [10] technology was developed as a replacement\nof cables between electronic devices and this is perhaps\nits most obvious use. But, it is the ability of Bluetooth devices\nto form small networks called piconets that opens up a\nwhole new arena for applications where information may be\nexchanged seamlessly among the devices in the piconet. Typ-ically\n, such a network, referred to as a PAN (Personal Area\nNetwork), consists of a mobile phone, laptop, palmtop, headset\n, and other electronic devices that a person carries around\nin his every day life. The PAN may, from time to time, also\ninclude devices that are not carried along with the user, e.g.,\nan access point for Internet access or sensors located in a\nroom. Moreover, devices from other PANs can also be interconnected\nto enable sharing of information.\nThe networking capabilities of Bluetooth can be further\nenhanced by interconnecting piconets to form scatternets.\nThis requires that some units be present in more than one piconet\n. These units, called gateways, need to time-division\ntheir presence among the piconets. An important issue with\nthe gateways is that their presence in different piconets needs\nto be scheduled in an efficient manner. Moreover, since the\ngateway cannot receive information from more than one piconet\nat a time, there is a need to co-ordinate the presence of\nmasters and gateways.\nSome previous work has looked at scheduling in a piconet\n[2,5] and also in a scatternet. In [4], the authors define a\nRendezvous-Point based architecture for scheduling in a scatternet\n, which results in the gateway spending a fixed fraction\nof its time in each piconet. Such a fixed time-division of the\ngateway may clearly be inefficient since traffic is dynamic.\nIn [9], the authors propose the Pseudo-Random Coordinated\nScatternet Scheduling (PCSS) scheme in which Bluetooth\nnodes assign meeting points with their peers. The sequence\nof meeting points follows a pseudo-random process that leads\nto unique meeting points for different peers of a node. The\nintensity of these meeting points may be increased or decreased\naccording to the traffic intensity. This work presents\nperformance results for various cases. 
In [11], a scatternet-scheduling\nalgorithm based on the concept of a switch table,\nwhich can be dynamically adjusted based on traffic load, is\npresented. In [1], the authors present a credit-based scheduling\nscheme based on the SNIFF mode of Bluetooth, where\ncredits may be reallocated to cater to changing traffic.\nOur scheduling scheme addresses the issues of fairness and\nutilization of bandwidth. Since Bluetooth is a low-bandwidth\nenvironment, it is important that bandwidth should be effi-ciently\nutilized. Also, since a low bandwidth can easily lead\nto starvation of flows, another metric we focus on is fairness.\nWe propose a distributed scatternet-scheduling algorithm that\nis implemented using the HOLD mode [10] of Bluetooth\nand adapts to non-uniform and changing traffic. This algorithm\nprovides an integrated solution for both intra- and inter-piconet\nscheduling, i.e., for polling of slaves and scheduling\nof gateways. The algorithm leads to a high bandwidth utilization\nand results in a fair division of (a) the piconet bandwidth\nbetween the slaves of a piconet and (b) the gateway presence\namong different piconets.\nIn section 2, we discuss the Bluetooth technology. In\nsection 3, we present a definition of fairness in the context\nof Bluetooth scatternets, which takes into account intra- and\ninter-piconet max-min fairness. Section 4 describes the algorithm\nand proves its fairness property. Section 5 presents\nsimulation results and section 6 presents the conclusions.\nBluetooth technology\nThe Bluetooth system [3] operates in the worldwide unlicensed\n2.4 GHz IndustrialScientificMedical (ISM) frequency\nband. To make the link robust to interference, it uses\na Frequency Hopping (FH) technique with 79 radio carriers.\nIt allows a raw data transmission rate of 1 Mbit/s.\nTwo or more Bluetooth units sharing the same channel\nform a piconet. Each piconet consists of a master unit and\nup to seven active slave units. The master unit polls the slave\nunits according to a polling algorithm and a slave is only allowed\nto transmit after the master has polled it. The piconet\ncapacity is thus, shared among the slave units according to the\npolling algorithm.\nFurthermore, two or more piconets can be interconnected,\nforming a scatternet. This requires a unit, called an inter-piconet\nunit (gateway), to be a part of more than one piconet.\nSuch a unit can simultaneously be a slave member of multiple\npiconets, but a master in only one, and can transmit and\nreceive data in only one piconet at a time; so participation in\nmultiple piconets has to be on a time-division multiplex basis\n. The time of the gateway is, thus, also shared among the\npiconets it belongs to. In this work, we assume that the gateway\ncan only be a slave in its piconets. If a gateway were\nto be a master in a piconet, it would lead to the stoppage of\nall transmission in the piconet when the gateway visits some\nother piconet. Thus, we believe that the use of the gateway as\na slave is the most efficient method of scatternetting.\nFair allocation of bandwidth\nAs introduced in the previous section, units belonging to a\npiconet share the piconet capacity according to the polling\nalgorithm used by the master. In an analogous manner, gateways\nin a scatternet divide their time among their different\npiconets, according to the \"master-listening\" algorithm they\nuse. It can be noted that there is a duality in this architecture\n. 
On the one hand, a master divides its capacity among\nthe units of its piconet by using a polling algorithm. On the\nother hand, a gateway shares its capacity among the piconets\nit belongs to, on the basis of a scheduling algorithm it uses\nfor listening to the masters. The gateway, can, then be viewed\nas a \"virtual master\" and its masters can be viewed as \"virtual\nslaves\" forming a \"virtual piconet\", in which the polling cycle\nis, actually, the \"listening cycle\" of the gateway. A graphical\ninterpretation of this duality is given in figure 1, in which the\nsolid line shows the actual piconets, and the dotted line shows\nthe virtual piconet.\nDue to this duality, we design our scheduling scheme such\nthat the same scheduling algorithm is used for fair sharing of\nboth (a) the piconet capacity among slaves and (b) the gateway\ntime among piconets.\nWe now give a definition of max-min fairness [7]. We then\ngo on to define max-min fairness in the context of Bluetooth\nscatternets, by considering (a) intra-piconet fairness, i.e., fairness\nin division of piconet bandwidth among slaves (both\ngateway and non-gateway) of a piconet and (b) inter-piconet\nfairness, i.e., fairness in division of the gateway's presence\namong its piconets. We first define a `feasible' rate distribution\nsince this is used in the definition of max-min fairness.\nDefinition 1 (Feasible). A rate distribution is feasible if rates\nare non-negative, the aggregate rate is not greater than one,\nand no unit receives a higher rate than required.\nDefinition 2 (Max-min fairness). An allocation of rates\n1\n,\n\n2\n, . . . ,\ns\namong s units is max-min fair if it is feasible, and\nfor each unit i,\ni\ncannot be increased (while maintaining fea-sibility\n) without decreasing\nj\nfor some other unit j for which\n\nj\n\ni\n.\nThe distribution of max-min fair rates depends upon the\nset of rate demands (traffic generated) of the units. In the\nfollowing subsections, we discuss factors that determine the\nmax-min \"fair share\" of a slave (gateway or non-gateway).\nWe call these factors the Piconet Presence Fraction and the\nScatternet Presence Fraction and show how they may be used\nto calculate the \"fair share\" for a slave in a scatternet.\n3.1. Piconet presence fraction\nConsider a piconet consisting of gateway and non-gateway\nslaves in which the master has complete knowledge of the rate\ndemands of all slaves (an ideal master). Using this knowledge\n, the master polls the slaves in a max-min fair manner\nsuch that each slave gets its \"fair share\" of the master's\npolling. We refer to the \"fair share\" received by a slave as the\n\"piconet presence fraction\" (PPF) of the slave. The gateway\nhas a PPF for each piconet it belongs to.\nConsider the piconets shown in figures 2(a) and 2(b), each\nconsisting of one gateway and two slaves, with the traffic rates\nof each slave as shown. In figure 2(a) (Piconet I), the PPF of\neach non-gateway slave is 0.2, while the PPF of the gateway\nis 0.6. In figure 2(b) (Piconet II), the PPFs of the slaves are\n0.2 and 0.4, while the PPF of the gateway is 0.4.\nA FAIR AND TRAFFIC DEPENDENT SCHEDULING\n11\nFigure 2. Piconets with traffic rates between master and each slave shownn.\n3.2. Scatternet presence fraction\nA gateway will, in general, be a slave in multiple piconets and\nmay have different amounts of traffic to exchange with each\npiconet. Consider an ideal gateway that has complete knowledge\nof the rate demands of all its masters. 
The gateway can then divide its presence among its piconets in a max-min fair manner, giving each piconet a "fair share" of its presence. We call this fair share the "scatternet presence fraction" (SPF) of the gateway for the piconet. The importance of the SPF is that a fair division of the gateway's presence among its piconets can be achieved based on the SPF.

Consider the piconets of figure 2 again, but the gateway of each of the piconets now connects them to form a scatternet, as shown in figure 3. The traffic requirements are the same as shown in figure 2. The SPF of the gateway is 0.5 in Piconet I and 0.5 in Piconet II.

3.3. Fair share

We see that for a gateway to be fair, there are two kinds of fairness it has to achieve: that dictated by the PPFs, which achieves fairness between the gateway and the other slaves of a piconet, and that of the SPFs, which distributes the presence of the gateway between its piconets in a fair manner. Both these kinds of fairness may not always be completely achievable and this can lead to a change in the values of PPF and SPF, as we now discuss.

We observe that an ideal master (as in section 3.1) does not give a gateway more than the PPF of its polling. Thus, if the SPF of a gateway is greater than its PPF for a piconet, the gateway spends a fraction of its time equal to the PPF in the piconet. The gateway cannot stay for a fraction equal to its SPF in the piconet since it is limited by its PPF. Thus, the extra scatternet presence fraction (the difference of the SPF and the PPF) is redistributed in a fair manner among the gateway's other piconets for which the SPF is less than the PPF. This may increase the SPF of the gateway in the other piconets. In other words, the gateway behaves as if its SPF in a particular piconet is reduced to the PPF and thus, its SPF in the other piconets increases. We refer to this changed SPF as the "updated SPF" of the gateway in a piconet.

Similarly, an ideal gateway does not stay a fraction of time more than the SPF in a piconet. Thus, if the PPF of the gateway in the piconet is greater than the SPF, the gateway spends a fraction of time equal to the SPF in the piconet. The remaining PPF of the gateway (the difference of the PPF and the SPF) is redistributed in a fair manner among the other slaves of the piconet (if this other slave is a gateway, it is redistributed to it if its SPF is greater than its PPF in the piconet). This may increase the PPF of these slaves. We refer to this changed PPF as the "updated PPF" of the slave in the piconet. In case there is no such redistribution, the updated PPF is equal to the PPF and the updated SPF is equal to the SPF.

The fair share can now be calculated from the "updated PPF" and the "updated SPF" as the minimum of these two quantities. Note that all these quantities (PPF, SPF, updated PPF, updated SPF and fair share) are dependent on the traffic. Any change in traffic demand of a unit may lead to a change in some of these quantities.

Figure 3. Gateway shared between two piconets; traffic rates between slaves and the master are shown.
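The rule just described can be summarised in a short sketch. The Python fragment below is ours and is not part of the paper: it models only the gateway's side of the redistribution, splits the excess SPF equally among the piconets that can absorb it (the paper redistributes it in a max-min fair manner), and takes the PPFs as already updated. The function name gateway_fair_share is our own.

# Minimal sketch (not from the paper) of the fair-share rule for a single gateway.
# ppf[p], spf[p]: the gateway's piconet/scatternet presence fractions in piconet p.
def gateway_fair_share(ppf, spf):
    updated_spf = dict(spf)
    capped = [p for p in ppf if spf[p] > ppf[p]]   # SPF limited by the PPF
    needy = [p for p in ppf if spf[p] < ppf[p]]    # piconets that can absorb the excess
    excess = sum(spf[p] - ppf[p] for p in capped)
    share = excess / len(needy) if needy else 0.0
    for p in capped:
        updated_spf[p] = ppf[p]                    # the gateway cannot exceed its PPF
    for p in needy:
        updated_spf[p] = spf[p] + share            # simplified equal redistribution
    # Fair share in each piconet is the minimum of the (updated) PPF and updated SPF.
    return {p: min(ppf[p], updated_spf[p]) for p in ppf}

# The two-piconet example of figure 3: PPFs 0.6/0.4, SPFs 0.5/0.5.
print(gateway_fair_share({"I": 0.6, "II": 0.4}, {"I": 0.5, "II": 0.5}))
# -> {'I': 0.6, 'II': 0.4}, which matches the gateway row of table 1 below.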
We explain the calculation of the fair share using some examples.

An example is given in table 1, which shows the actual traffic rate, PPF, SPF, Updated PPF, Updated SPF and fair share of the gateway in the two piconets of figure 3. In Piconet II, the gateway has a PPF of 0.4, which is less than the SPF. In Piconet I, the gateway has a PPF of 0.6 and an SPF of 0.5. Thus, the extra scatternet presence fraction of the gateway in Piconet II (the difference between the SPF and the PPF) is given to Piconet I, which has a higher traffic rate than may be allowed by the SPF. This is reflected in the "updated SPF" values. Thus, the "fair share" of the gateway in Piconet I is 0.6 and in Piconet II is 0.4. The fair shares of the non-gateway slaves are equal to their PPF.

Table 1
Calculation of fair share of the gateway in the two piconets of figure 3.
                      Piconet I   Piconet II
Actual traffic rate   0.7         0.6
PPF                   0.6         0.4
SPF                   0.5         0.5
Updated PPF           0.6         0.4
Updated SPF           0.6         0.4
Fair share            0.6         0.4

As another example, consider the scatternet consisting of 5 piconets with the traffic rates shown as in figure 4. As shown in table 2, gateway G2 has a PPF of 0.5 and an SPF of 0.4 in Piconet B. Thus, the "updated PPF" of G2 in Piconet B is 0.4. The extra PPF (= PPF - SPF) is added to the PPF of gateway G1 in Piconet B. The "updated PPF" of G1 in Piconet B is, thus, 0.6.

Also, gateway G1 has a PPF of 0.25 and an SPF of 0.4 in Piconet A. Thus, the "updated SPF" of G1 in Piconet A is 0.25. The extra SPF (= SPF - PPF) is added to the SPF of G1 in Piconet B. The "updated SPF" of G1 in Piconet B is, thus, equal to 0.65. The fair shares can now be easily calculated.

Figure 4. Scatternet with two gateways.

Table 2
Calculation of fair share of the gateways G1 and G2 in the scatternet of figure 4.
Gateway G1            Piconet A   Piconet B   Piconet C
Actual traffic rate   0.4         0.6         0.1
PPF                   0.25        0.5         0.1
SPF                   0.4         0.5         0.1
Updated PPF           0.25        0.6         0.1
Updated SPF           0.25        0.65        0.1
Fair share            0.25        0.6         0.1

Gateway G2            Piconet B   Piconet D   Piconet E
Actual traffic rate   0.7         0.2         0.4
PPF                   0.5         0.2         0.4
SPF                   0.4         0.2         0.4
Updated PPF           0.4         0.2         0.4
Updated SPF           0.4         0.2         0.4
Fair share            0.4         0.2         0.4

A division of the master's polling and the gateway's presence based on PPF and SPF as described in this section takes into account the traffic demands of the slaves and the gateways and leads to fairness in the scatternet. In the next section, we introduce and describe an algorithm that aims to achieve such a fair distribution of bandwidth.

Description of algorithm

We first explain how the algorithm works in the case of a single piconet with no gateway. We then extend the algorithm to the case of a scatternet and explain how the coordination between the master and the gateways is achieved. We then prove the fairness of the algorithm.

4.1. Single piconet with no gateways

The polling algorithm is based on the master estimating the traffic rate between each slave and itself. This traffic rate is the sum of the traffic rates from the master to a slave and in the reverse direction. We assume, in order to simplify the explanation of the algorithm, that traffic flows only from slaves to master; masters generate no traffic to slaves. The same algorithm also applies with little change when traffic flows in both directions (explained later).

The master uses a Round Robin polling scheme, with the modification that a slave is skipped if it does not belong to the "active list" of the master. The slaves are moved in and out of the active list on the basis of two variables that the master maintains for each slave.
These two variables are:
r: an estimate of the rate of traffic generated by the slave;
N: an estimate of the queue length of the slave.

When a slave is polled, the master-slave pair gets a chance to exchange a maximum amount of data in each direction, denoted by M. After each such polling phase, the master updates the values of N and r in the following manner:

For the slave just polled:
N = N + r·τ - x,        (1)
r = αr + (1 - α)x/T,         if x < M,
r = αr + (1 - α)x/T + δ,     if x = M.        (2)

For other slaves:
N = N + r·τ,        (3)

where τ is the time elapsed since the last update, x is the amount of data exchanged during the poll phase, T is the total time elapsed since the last poll of the same slave, α is a parameter used to smooth the rate estimation and δ is a parameter used to probe for more bandwidth. Note that x is the actual amount of data exchanged, which may be less than or equal to M, depending upon the number of packets in the slave's queue. Since N is an estimate of the slave's queue length and r is an estimate of the rate at which traffic is generated, N is increased at the rate of r (as in equations (1) and (3)). Also, when a slave is polled, N is decreased by the amount of data exchanged (equation (1)).

After updating these values, the master determines the changes to be made to the active list. A slave is added or deleted from the active list depending upon whether its value of N is greater or smaller than a "threshold". The value of this threshold is the minimum amount of data that the master would like the slave to have in order to poll it. We choose a value equal to a multiple of a DH5 packet for the threshold since this packet incurs least overhead (the selection of the value of the threshold is discussed further in the next subsection). Thus, a slave is present in the active list if the master's estimate of the value of N for the slave is greater than the threshold. This makes the simple Round Robin polling strategy adaptive to traffic and enables it to utilize bandwidth efficiently, even when slaves have different rates of traffic. The maximum amount of data that can be exchanged at each poll, M, is also set equal to the threshold. Note that if the amount of data, x, in the slave's queue is less than the threshold, the polling of the slave ends after this data has been exchanged.

If the value of N is less than the threshold for all the slaves, then the slave whose value of N is estimated to take the smallest time to reach the threshold is polled, i.e., the slave for which the value of (Threshold - N)/r is the smallest.

The master now goes to the next slave according to the Round Robin ordering of slaves. If the slave is present in the active list, it is polled. Else, the procedure is repeated for the next slave in the Round Robin ordering.

Also, note that if the amount of data sent by the slave x is equal to M, r is increased by a small amount, δ. This is basically an attempt by the slave to probe for more bandwidth if it is able to send data at the present rate. The usefulness of this increase is evident in the proof of fairness in the next section.
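As an illustration, the per-slave bookkeeping of equations (1)-(3) and the active-list test can be written down directly. The sketch below is ours, not the authors' implementation; the units of N, x and the threshold are left abstract, the assignment of 0.65 to α and 0.15 to δ follows our reading of the text, and it assumes each update is called with a strictly later timestamp than the previous poll.

# Sketch (ours) of the master's per-slave state and the updates of equations (1)-(3).
ALPHA = 0.65      # smoothing parameter (alpha in equation (2))
DELTA = 0.15      # probing increment added when a poll was completely filled

class SlaveState:
    def __init__(self, threshold, now=0.0):
        self.N = threshold    # queue-length estimate, initialised to the threshold
        self.r = 0.25         # rate estimate, initialised to 0.25
        self.last_update = now
        self.last_poll = now

def update_polled(s, x, M, now):
    """Equations (1) and (2) for the slave that was just polled (assumes now > s.last_poll)."""
    tau = now - s.last_update                    # time since the last update
    T = now - s.last_poll                        # time since this slave was last polled
    s.N += s.r * tau - x                         # equation (1)
    s.r = ALPHA * s.r + (1 - ALPHA) * x / T      # equation (2), case x < M
    if x == M:
        s.r += DELTA                             # case x = M: probe for more bandwidth
    s.last_update = s.last_poll = now

def update_others(s, now):
    """Equation (3) for every slave that was not polled."""
    s.N += s.r * (now - s.last_update)
    s.last_update = now

def in_active_list(s, threshold):
    """The Round Robin cycle only polls a slave while this holds."""
    return s.N > threshold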
The value of chosen is 0.15 and that of is 0.65.\nWe also discuss the rationale behind choosing these values in\nthe proof of fairness.\nIf traffic flows in both directions, i.e., from the slaves to\nthe master and in the reverse direction, x is the average of\nthe amount of data exchanged in the two directions, r refers\nto the average of the rate-estimations of the two directions\nand N refers to the average of the queue length estimates of\nthe two directions. Also, if the number of packets in either\ndirection is less than the threshold, the polling of the slave\ncontinues till in both directions, (a) there is no more data to\nsend or (b) amount of data equal to the threshold has been\nexchanged.\nThe initial value of N is set to the threshold (to ensure that\nslaves get polled at the beginning) and that of r is set to 0.25\n(as a reasonable value). Note that the algorithm converges to\nthe fair share, but a careful selection of initial values makes\nthe initial convergence faster.\nAnother advantage of such a scheme is that it may allow\nthe master to go into a power-saving mode if it realizes that no\nslave has sufficient packets to send, i.e., if N is smaller than\nthe threshold for all slaves. Though we do not explore this\noption in this paper, it may be useful since Bluetooth devices\nare expected to work in power-constrained environments.\nTo improve the algorithm, we add a heuristic to it. The\nmaximum number of polling cycles that a slave is not polled\nis bounded. If a slave generates a large burst of data occasionally\nand then does not generate any data for a long time,\nthe value of r for the slave may be very low. This may cause\nthe value of N for the slave to be lower than the threshold\nfor a long time. By limiting the maximum number of cycles\nmissed by the slave, we make sure that such a behavior of the\nslave does not lead to its starvation. In the experiments, this\nvalue is taken to be equal to 5 cycles. We now explain how\nthe above algorithm works in a scatternet.\n4.2. Scatternet\nScheduling of gateways using Rendezvous Points.\nBefore\ndescribing how the algorithm works in a scatternet, we briefly\ndiscuss the notion of Rendezvous Points (RPs) described\nin [4]. A RP is a slot at which a master and a gateway have\nagreed to meet, i.e., at this slot, the master will poll the gateway\nand the gateway will listen to the master. In [4], RPs are\nimplemented using the SNIFF mode of Bluetooth, but we implement\nRPs using the HOLD mode [10]. In the HOLD mode,\nthe slave does not have to listen to the master for a certain time\nperiod and may use this time to visit other piconets. Prior to\nentering the HOLD mode, the master and the slave agree on\nthe time duration the slave remains in the HOLD mode. We\nimplement our algorithm using RPs as described below.\nThe working of the algorithm in a scatternet is very similar\nto its operation in a piconet. The master continues to poll the\nnon-gateway slaves in the same manner as described in the\nprevious section with the modification that a gateway is polled\nat a Rendezvous Point. Each RP is a slot at which a particular\ngateway is polled and a master has different RPs for each of its\ngateways. These RPs are always unique (i.e., a master cannot\nhave the same RP with more than one gateway). Since the\ngateway must be polled at the RP, this has implications in the\npolling of the other slaves (discussed later). 
Once a gateway has been polled, the master continues with the polling of the other slaves in the same manner as described in the previous section, i.e., it checks its active list to see if the next slave in the polling cycle is to be polled and so on.

In order to divide its time among different piconets in a fair manner, the gateway performs similar calculations as described in the earlier section for the master. The gateway maintains values of N and r for each piconet it belongs to and these values are updated each time a gateway is polled (i.e., at each RP). Thus, the calculations performed by a gateway at each RP are:

For the piconet in which the gateway was just polled:
N = N + r·τ - x,        (4)
r = αr + (1 - α)x/T,         if x < M,
r = αr + (1 - α)x/T + δ,     if x = M.        (5)

For other piconets:
N = N + r·τ,        (6)

where τ is the time elapsed since the last update, x is the amount of data exchanged during the poll phase, T is the total time elapsed since the gateway was polled in the same piconet, and α and δ are as defined earlier.

Moreover, at each RP, the gateway and the master negotiate the next RP between them. The assignment of this next RP takes into account the fairness between (a) the gateway and other slaves in a piconet and (b) the presence of the gateway in different piconets. Also, we again employ a heuristic that improves the algorithm. When the next RP is being negotiated, we keep a bound on the maximum value this can take. This prevents a piconet from not being visited by a gateway for a long time. The maximum value of this next RP used in our experiments is 400 slots.

We now see how the master and the gateway use the information that they have to achieve fairness in the scatternet. When a gateway is polled at a RP, the gateway and the master do the following.

(i) Gateway. The gateway calculates the number of slots, N_thresh, after which N for the piconet will become greater than the threshold; N_thresh = (threshold - N)/r, where threshold is as explained in the previous section, and N and r are values maintained by the gateway for the piconet. The gateway makes use of this value and does not visit a piconet till its estimate of N for the piconet becomes greater than the threshold. This is similar to the algorithm used by the master in which a slave is not polled till the master's estimate of N for the slave becomes greater than the threshold. Thus, the gateway tries to divide its time between the piconets in a fair manner, i.e., according to the SPFs. Note that N_thresh may be negative if N is greater than the threshold. Also, N_thresh is allowed to have a maximum value of 400.

Moreover, each time a gateway visits a piconet, it knows the RPs for the other piconets it belongs to (except right at the beginning or when the gateway is added to another piconet).

(ii) Master. The master calculates the number of slots after which the gateway can be polled such that the fairness with other slaves is maintained. It adopts the following procedure to achieve this:

It maintains a counter, num_slots (which is initialized to 0), and checks the value of N for each slave, in a cyclic order, starting from the slave after the current gateway in the cyclic order to the slave before the current gateway. The master checks if the value of N for the slave will be greater than the threshold after num_slots slots. If this condition is true, num_slots is incremented by twice the value of the threshold.
After incrementing num_slots, the\nmaster also checks to see if it has a RP with any gateway\nwhose value is equal to num_slots and increments\nnum_slots by twice the value of the threshold if this is\ntrue. This ensures that the master has a unique RP for\neach of its gateways. Note that num_slots is incremented\nby twice the value of the threshold since the master expects\nto exchange threshold slots of data with a slave in\neach direction.\nThe master uses the above procedure to estimate the number\nof slaves who will have their value of N greater than\nthe threshold when the master polls the slaves in their\ncyclic order starting from the gateway just polled. The\nvalue of num_slots determines the number of slots which\nthe master expects to use in polling the other slaves in\none cycle before polling the gateway again and is thus,\nused by the master to maintain fairness between the gateway\nand the other slaves in the piconet. Again, note that\nnum_slots is allowed to have a maximum value of 400.\nThe master and the gateway now exchange the information\nthey have to calculate their next RP. This exchange takes\nplace using the LMP_hold_req PDU of the LMP (Link Manager\nProtocol) layer. This PDU carries a hold instant and a\nhold time, which are used to specify the instant at which the\nhold will become effective and the hold time, respectively.\nWhen the master is sending a packet to a gateway, the value\nof num_slots can be sent after hold instant and hold time in\nthe packet. The master also sends the values of its RPs with\nits other gateways in the packet. Similarly, the gateway sends\nthe master the values of its RPs with other piconets and the\nvalue of N\nthresh\nalso in an LMP_hold_req PDU. The master\nnow knows all the RPs of the gateway; similarly, the gateway\nknows all the RPs of the master.\nNote that the above information exchange requires a minimal\nchange in the Bluetooth specifications that the contents\nof the LMP_hold_req PDU need to be enhanced. This PDU is\n1-slot in length; thus, some bandwidth of the master is wasted\nin sending these PDUs. This wasted bandwidth can be reduced\nby increasing the value of threshold, i.e., the maximum\ndata that a slave and a master may exchange in each direction\nduring one poll of the slave. On the other hand, a large\nvalue of the threshold will lead to larger delays for packets.\nThus, we have a tradeoff here. We choose a threshold value\nequal to three times a DH5 packet. The effect of this wasted\nbandwidth can be seen in the experiments section where the\npiconet capacity used is slightly less than 1. Note that we\npay a small price here to get perfect coordination between the\nmaster and the gateway and also to get a high degree of fairness\nin the system, as the experiments later demonstrate.\nNow, the master and the gateway both have complete information\n. So, each of them calculates the next RP in the\nfollowing manner:\nThey take the maximum value out of num_slots and N\nthresh\nand as long as this value is the same as one of the RPs (note\nthat all relevant RPs are known to both the master and the\ngateway), the value is incremented by 2\nthreshold. The value\nat the end of this small procedure is the next RP between the\ngateway and the master. 
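Putting the pieces together, the negotiation just described can be sketched as follows. This is our own simplified reading, not the paper's code: the per-slave and per-piconet state objects are assumed to carry the same N and r fields as in the earlier sketch, the 400-slot bounds are applied before collision resolution, and the exchange over the LMP_hold_req PDU is ignored.

# Sketch (our reading) of how the master and the gateway agree on the next Rendezvous Point.
def n_thresh(N, r, threshold, cap=400):
    # Gateway side: slots until its estimate N for this piconet exceeds the threshold;
    # may be negative if N is already above it.
    return min((threshold - N) / r, cap)

def num_slots_estimate(slaves_after_gateway, master_rps, threshold, cap=400):
    # Master side: slots it expects to spend on the other slaves of one polling cycle.
    num_slots = 0
    for s in slaves_after_gateway:                  # cyclic order, slave after the gateway first
        if s.N + s.r * num_slots > threshold:       # slave expected to be active by then
            num_slots += 2 * threshold
            while num_slots in master_rps:          # keep RPs with other gateways unique
                num_slots += 2 * threshold
    return min(num_slots, cap)

def next_rendezvous_point(num_slots, n_thresh_value, all_known_rps, threshold):
    # Run by both sides after exchanging num_slots, N_thresh and their known RPs.
    candidate = max(num_slots, n_thresh_value)
    while candidate in all_known_rps:
        candidate += 2 * threshold
    return candidate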
Since this value takes into account\nboth N\nthresh\nand num_slots, it incorporates both the fairness\nof the master's polling and the gateway's presence.\nNote that the value of num_slots calculated by the master is\njust an estimate (the master assumes that each slave included\nin the calculation of num_slots will exchange threshold slots\nof data with the master in each direction, but this may not be\ntrue). Thus, the master may have polled all the slaves that had\nto be polled before the RP of the gateway (according to the\nestimate in the calculation of num_slots) and still be left with\nsome slots before the RP. In this case, the master just continues\npolling the slaves in their cyclic order and polls the gateway\nwhen the time for the RP arrives. Note that this means\nthat the master may have to force a slave to send a packet\nsmaller than a certain length. For example, if two slots are\nleft for the RP, then the master will send a 1-slot packet and\nask the slave being polled to do the same. Note that the Bluetooth\nheader has 4 bits to represent the packet type and these\ncan represent 16 packet types. For ACL links, 10 (7 data,\n3 control packets) of the packet types are defined. We use 2\nof the remaining bit sequences to send packets that force the\nslave to send packets smaller than or equal to a certain length.\nThis is shown in table 3.\nFrom table 3, we see that this procedure is adopted if the\nnumber of slots left for the RP is less than 10 (if the number\nof slots left for the RP is greater than or equal to 10, then the\nA FAIR AND TRAFFIC DEPENDENT SCHEDULING\n15\nTable 3\nProcedure adopted by the master if slots left for the RP is less than 10.\nSlots left for RP\nMaximum\nMaximum\nlength of packet\nlength of packet\nsent by master\nsent by slave\n2\n1\n1\n4\n1\n1\n6\n3\n3\n8\n3\n3\nslave's packet length does not have to be restricted). Thus,\nif the slots left for the RP is 2, the master can send a packet\nof maximum length\n= 1 and the gateway can send a packet\nof maximum length\n= 1 and so on. Note that for reasons of\nfairness, the maximum packet length for the master and the\ngateway is the same. Since the master needs to restrict the\nmaximum length of the gateway's packet to either 1 or 3 (as\nshown in table 3), we need 2 packet types to achieve this. This\nprocedure effectively suspends the polling of a slave to honor\na RP with a gateway. The polling of the slave continues after\nthe gateway has been polled.\nIn addition, a gateway may lose a slot in switching from\none piconet to another. This loss is unavoidable since piconets\nare in general, not synchronized in time. In the experiments in\nthe paper, we set the value of the threshold to three times the\npayload of a DH5 packet, which can give a switching loss of\nabout 3% at heavy loads (every 2\nthreshold slots, the gateway\nloses about one slot in switching). At light loads, this switching\nloss does not lead to inefficiency since the sum of the fair\nshares of the gateway in all its piconets is less than 1 and even\nafter the switching loss, the gateway is able to obtain its fair\nshare. The simulations in the next section do not take this\nswitching loss into account and thus, the bandwidth received\nby the gateway under heavy loads will be a little smaller than\nthe one shown in the results.\n4.3. Proof of fairness\nWe now prove that the above algorithm leads to a max-min\nfair distribution of the bandwidth of a scatternet among units.\nWe start by proving this in the case of a piconet. 
In the next step, we will extend the proof to the general case of a scatternet.

4.3.1. Fairness in a piconet

Let us introduce the following notation:
S: number of slave units in the piconet;
g_i: rate-demand of the ith unit;
γ_i: rate achieved by the ith unit;
r_i: rate-estimation of the ith unit (as defined in equation (2)),
where γ_i and r_i are average values.

Slave unit i is referred to as "satisfied" if it achieves its rate demand, i.e., γ_i = g_i; else, the slave unit is referred to as "unsatisfied". Also, in the proof that follows, "slot" refers to "Bluetooth slot"; "unit" and "slave unit" may be used interchangeably.

If there is one slave unit in a piconet, then it will always get polled and hence, the algorithm is fair. We prove the fairness when there are two or more slave units.

We first make the following observations:

(a) If a unit has a rate-estimation r ≥ 0.25, it will never achieve a lesser rate than any other unit.

r is an estimation of the average number of slots of traffic that a master-slave pair will generate per slot in each direction. Thus, a rate of 0.25 means that a master-slave pair generates, on the average, "threshold" slots of traffic in each direction in every 4 × threshold slots. Suppose a piconet has two slaves, and the first has a rate-estimation r ≥ 0.25; then the first slave will be polled at least once in every 4 × threshold slots, i.e., will get on the average at least threshold polling slots out of every 2 × threshold, regardless of the r of the other slave (since N increases at the rate of r, N will increase by at least 0.25 × 4 × threshold = threshold; thus, the slave will enter into the "active list" in 4 × threshold slots). Thus, it will never achieve a lesser rate than another unit. It is easy to see that this property would be true if there were more than two slaves (two slaves is the worst case).

(b) For δ ≥ 0.1 and α ≥ 0.6, an unsatisfied slave will tend to a rate-estimation of at least 0.25.

For an unsatisfied slave, the second part of equation (2) (when x = M) is always used for updating the rate. Thus, if r_i is the ith rate-estimation:
r_{n+1} = αr_n + (1 - α)M/T_n + δ.
This leads to (as n becomes very large):
r = (1 - α)M Σ_{k=0..n} α^{n-k}/T_k + δ/(1 - α).
Thus, for δ ≥ 0.1 and α ≥ 0.6, for any value of T, the value of r tends to at least 0.25.

(c) As long as there is an unsatisfied unit, the utilization of the system capacity is 1 (for δ ≥ 0.15 and α ≥ 0.65).

Consider a piconet consisting of seven slave units, in which the first unit, unit_1, is unsatisfied. From (a) and (b), unit_1 will never achieve a lesser rate than any other unit; this means that it will be polled at least once for each time the other slaves are polled. The value of T (as in equation (2)) for unit_1 is thus, at most, 14 × threshold. For this value of T and for δ = 0.15 and α = 0.65, r for unit_1 tends to at least 0.5. A value of r = 0.5 for a slave unit means that it can be polled all the time (since N increases at the rate of r, N will increase by at least 0.5 × 2 × threshold = threshold; thus, the slave will enter into the "active list" in 2 × threshold slots, which is also the time of its polling). Thus, the system capacity is totally utilized. If there were less than 7 slave units, the value of T would be smaller (than 14 × threshold), and r would tend to a higher value (than 0.5).
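Properties (b) and (c) can also be checked numerically. The few lines below are ours, not part of the paper; they iterate the unsatisfied-slave recursion with a constant inter-poll time T and confirm that it settles at M/T + δ/(1 - α).

# Quick numeric check (ours) of properties (b) and (c): iterating
# r <- alpha*r + (1 - alpha)*M/T + delta converges to M/T + delta/(1 - alpha).
def steady_rate(alpha, delta, M, T, steps=500, r=0.25):
    for _ in range(steps):
        r = alpha * r + (1 - alpha) * M / T + delta
    return r

print(steady_rate(0.6, 0.1, 1, 10**6))   # ~0.25, the bound of property (b) for large T
print(steady_rate(0.65, 0.15, 1, 14))    # ~0.5, property (c) with M = threshold, T = 14*threshold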
We choose values of δ and α to satisfy the above properties, i.e., δ = 0.15 and α = 0.65.

The following statements hold.

(i) Units with the same rate-demand achieve the same average rate: g_i = g_j ⇒ γ_i = γ_j.

We prove this by contradiction. Suppose there are two units, unit_1 and unit_2, with rate demands g_1 and g_2, respectively, such that g_1 = g_2. Also, suppose one unit achieves a higher average rate than the other, γ_1 > γ_2.

Now, unit_2 does not achieve its rate-demand (since γ_1 > γ_2). Unit_1 may or may not achieve its rate demand. From property (b), unit_2 will always tend to a rate-estimation at least equal to 0.25, since it is an unsatisfied slave. Using property (a), this implies that γ_2 cannot be less than γ_1. This is a contradiction.

(ii) Units with a higher rate-demand achieve an average rate at least equal to that achieved by units with a lower rate-demand: g_i > g_j ⇒ γ_i ≥ γ_j.

This can be proved by contradiction in the same manner as in part (i).

Now, without loss of generality, let us partition the slave units into two sets, S1 and S2, in such a way that units in S1 are satisfied, while units in S2 are not.

If the set S2 is empty, then all the units achieve their rate-demand and the system is fair.

If the set S2 is not empty, then using statements (i) and (ii), all units share the bandwidth in a fair manner. Moreover, since S2 contains at least one unit, the total system capacity is utilized. Hence, it is not possible to increase the rate of a unit in S2 without decreasing the rate of some other unit.

4.3.2. Fairness in a scatternet

The proof of fairness for a scatternet follows trivially from that for a piconet. We make the following two observations:

(1) The gateway visits a piconet only after the estimation of N for the piconet becomes greater than the threshold (it calculates N_thresh while determining the next RP). In other words, the "virtual master" (gateway) does not poll (visit) its "virtual slave" (master) till the estimate of N becomes greater than the threshold. This is similar to the algorithm used by the master to poll the slaves, in which a slave is not polled till its estimate of N becomes greater than the threshold. Thus, the gateway divides its presence among its piconets in a fair manner, i.e., according to the SPF. Note that if the PPF for a gateway in a piconet is less than its SPF, the master does not poll the gateway for more than the PPF. Thus, the apparent rate demand and SPF for the gateway in the piconet are reduced. This may increase the SPF of the gateway in other piconets. In this case, the gateway divides its presence according to the updated SPFs.

(2) While calculating the next RP for a gateway, the master calculates the num_slots value which estimates the number of slaves in one polling cycle (starting from the slave after the gateway in the polling cycle) who will have their values of N greater than the threshold at the estimated time of their poll. This achieves fairness between the gateway and the non-gateway slaves. Also, the master continues to use the same algorithm for polling non-gateway slaves in a scatternet as described for a piconet in section 4.1. This maintains fairness between non-gateway slaves, i.e., the division is done according to the PPFs (or the updated PPFs).

4.4.
Overhead/limitations of the algorithm\nThe rate calculations will lead to a higher load on the system.\nAlso, the algorithm does not take into account SCO links. We\nbelieve (and as has been shown in [6]) that ACL links are\ncapable of carrying voice with small delays. The controlled\nchannel access in Bluetooth can ensure good support of voice\nusing ACL links. Also, scheduling in a scatternet where SCO\nlinks are allowed may not be feasible. Since SCO links require\na periodic reservation of two slots every two, four or\nsix slots, meeting the demands of such a link with a gateway\nmay be impossible when the gateway is visiting some other\npiconet.\nExperiments and results\nIn this section, we present simulation results, which show that\nthe algorithm satisfies the fairness criteria described earlier.\nWe start with simple topologies that illustrate the behavior of\nthe algorithm and then show that it also works well in more\ncomplex topologies. There are three topologies that the experiments\nfocus on and these demonstrate the behavior of the algorithm\na topology with (a) a gateway belonging to two piconets\n, (b) a gateway belonging to three piconets and (c) a piconet\nhaving two gateways. The experiments also show the\nadaptivity of the algorithm, i.e., how quickly the algorithm\nadapts to changing traffic demands of slaves.\nIn the experiments, we specify the \"rate of a slave\", which\nrefers to the sum of the rates at which a slave generates data\nfor a master (i.e., the rate demand of a slave) and the master\ngenerates data for the slave. Moreover, unless mentioned\notherwise, we assume that the traffic rate from a slave to a\nmaster is equal to that from the master to the slave. Thus, a\nslave having a rate of 0.4 means that the slave generates data\nat the rate of 0.2 Bluetooth slots per slot and the master also\nhas a rate demand of 0.2 towards the slave. As we show in the\nsection on asymmetric traffic, the algorithm works well even\nif these two rates are not the same.\nThe simulation environment used in our experiments is\nNS-2 [8]. We have augmented NS-2 with the Bluetooth\nmodel. The simulator models the Bluetooth baseband, LMP\nand L2CAP layers and enables the creation of piconets and\nscatternets. The model contains most of the standard features\nof Bluetooth like Frequency Hopping, Multi-Slot Packets,\nFast ARQ (Automatic Retransmission Query). Note that as\nmentioned earlier, in our simulator, the switching loss asso-ciated\nwith the gateway moving from one piconet to another\nA FAIR AND TRAFFIC DEPENDENT SCHEDULING\n17\nFigure 5. Example scatternet.\nis not taken into account. This effect can lead to the gateway\nlosing up to 3% of slots at heavy loads. The experiment\nresults are thus, a slight overestimate.\nIn the experiments, all traffic generated is CBR. Each experiment\nis run for a system time of 32 sec. In the experiments\n, the term \"slave\" refers to a non-gateway slave; a gateway\nslave is referred to as \"gateway\". Also, in experiments\nwhere the PPF and the SPF values (and not the updated PPF\nand the updated SPF) are shown, the PPF and the updated PPF\nare equal and the SPF and the updated SPF are also equal. In\nthe graphs, \"BW\" in the index stands for bandwidth, \"GW\"\nstands for gateway.\n5.1. Single gateway in two piconets\nWe first consider the simple topology shown in figure 5,\nwhich consists of two piconets, numbered I and II, connected\nby a single gateway. 
We consider various cases by changing\nthe traffic and the number of slaves in the piconets.\nExperiment 1. Adaptation between gateway and\nnon-gateway slave traffic\nEach piconet has one non-gateway slave that generates very\nhigh traffic, with rate equal to 1, to the master. The gateway\nhas equal traffic to both masters. We vary the gateway traffic\nto show the fair sharing of the piconet bandwidth between the\ngateway and the slave. We show the results for one piconet\nsince the two piconets are exactly symmetric.\nFigure 6(a) shows the sharing of bandwidth between the\ngateway and slave for different values of gateway traffic. It\nalso shows the fair share of the slave and the total fraction\nof the bandwidth obtained by the gateway and the slave in\nthe piconet. It can be seen that the slave obtains a bandwidth\nequal to its fair share for different values of gateway traffic.\nMoreover, the sum of the bandwidths obtained by the slave\nand the gateway is nearly equal to 1. The reason for this to\nbe slightly less than 1 is that some of the piconet capacity is\nused in sending LMP_hold_req PDUs of the LMP layer.\nIn figure 6(b), the comparison of the fraction of the bandwidth\nobtained by the gateway to its SPF (PPF and SPF are\nequal) is shown. Figure 6(b) shows that the gateway gets almost\nequal to its fair share of the bandwidth for all values\nof traffic. Again, the reason that the gateway obtains slightly\nless than its fair share is because some of the slots are used\nfor LMP PDUs. This also explains why the gateway obtains\nslightly less than the slave in figure 6(a).\nFigure 6. (a) Sharing of bandwidth between gateway and slave. (b) Comparison\nof fraction of bandwidth obtained to SPF for the gateway.\nExperiment 2. Different traffic to piconets\nThe same topology as in the previous case, but each slave has\na traffic rate of 0.3 to the master. The gateway has a fixed traffic\nrate of 0.2 to the master of Piconet I and variable traffic to\nthe other master. The PPF and SPF of the gateway in the first\npiconet are, thus, both equal to 0.2. The traffic in Piconet I\ndoes not change and the gateway and the slave get a constant\nfraction of 0.2 and 0.3 of the piconet bandwidth, respectively.\nFigure 7(a) shows the sharing of bandwidth between the\ngateway and slave for different values of gateway traffic,\nwhile figure 7(b) shows the comparison of the fraction of the\nbandwidth obtained by the gateway in Piconet II to its SPF\nand PPF. From the graphs, we can see that when the gateway\nhas different traffic to piconets, it divides its presence\namong the piconets according to the traffic offered and in a\nfair manner (again, the gateway obtains slightly less than its\nfair share due to the LMP PDUs). Also, the gateway makes\nuse of the lower traffic offered by the slave in Piconet II to\nobtain a higher share of the bandwidth in Piconet II.\nExperiment 3. Different number of slaves\nPiconet I has 3 slaves, while the number of slaves in Piconet II\nis variable. Each slave generates traffic to the master at the\nrate of 0.2. The gateway has a traffic rate of 0.3 to Piconet I\nand 0.8 to Piconet II. The PPF and SPF of the gateway in\nPiconet I are, thus, 0.2 and 0.3, respectively. In Piconet II, the\nvalue of PPF changes depending upon the number of slaves.\nIn Piconet I, the slaves get a bandwidth fraction of 0.2\nand the gateway gets 0.3. Figure 8(a) shows the sharing\n18\nR. KAPOOR ET AL.\nFigure 7. 
(a) Sharing of bandwidth between gateway and slave in Piconet II.\n(b) Comparison of fraction of bandwidth obtained by the gateway to SPF and\nPPF in Piconet II.\nof bandwidth between the gateway and each slave in Piconet\nII. Figure 8(b) shows the comparison of the fraction\nof the bandwidth obtained by the gateway in Piconet II to the\nSPF and PPF. The gateway receives a fraction of the bandwidth\nalmost equal to its fair share. Also, as the number of\nslaves increases, the fraction of the bandwidth received by\nthe gateway (and each slave) reduces in a fair manner.\nExperiment 4. Asymmetric traffic\nWe now consider a case where the traffic rates from Master\nto Slave and Slave to Master are different (asymmetric traffic\n). We consider the same topology as in experiment 2 of the\ncurrent section, with the non-gateway slaves having the same\nrate as in experiment 2. The gateway has a fixed traffic rate of\n0.2 to the master of Piconet I and variable traffic to the other\nmaster. The variable traffic is such that traffic from Master\nto Slave has a rate of 0.1 and traffic from Slave to Master\nvaries.\nFigure 9 shows the comparison of bandwidth fraction obtained\nby the gateway in this experiment versus that obtained\nby the gateway in experiment 2 in Piconet II for different values\nof gateway traffic (which is the sum of master to gateway\nand gateway to master traffic rates). We see that the fraction\nis slightly lower than the fraction obtained in experiment 2.\nAsymmetric traffic leads to wastage of slots, since an empty\nslot is returned in one direction where there is no data to send.\nIt can be seen though, that the gateway still behaves in an ap-Figure\n8. (a) Sharing of bandwidth between gateway and slave in Piconet II.\n(b) Comparison of fraction of bandwidth obtained by the gateway to SPF and\nPPF in Piconet II.\nFigure 9. Comparison of fraction of bandwidth obtained by gateway in this\nexperiment with that in experiment 2 in Piconet II.\nproximately fair manner. All other bandwidth fractions for\nslaves and the gateway are the same as in experiment 2.\n5.2. Single gateway shared between three piconets\nWe now consider a topology, where a gateway is shared between\n3 piconets, numbered I, II and III. Piconet I has 5, Piconet\nII has 1 and Piconet III has 4 slaves. Each slave has\na traffic rate of 0.2. The gateway has a traffic rate of 0.2 to\nPiconet I, 0.3 to Piconet III and a variable rate to Piconet II.\nAll traffic is symmetric (same from master to slave and from\nslave to master).\nA FAIR AND TRAFFIC DEPENDENT SCHEDULING\n19\nFigure 10. Bandwidth fraction received by gateway in the three piconets.\nFigure 11. Example scatternet topology.\nFigure 10 shows the fraction of bandwidth obtained by the\ngateway in each piconet with increasing gateway traffic rate to\nPiconet II. It also shows the PPF and the Updated SPF of the\ngateway in Piconet II. We do not show the fair shares of the\ngateway in Piconet I and III since they are constant (0.16 and\n0.2, respectively). It can be seen that the gateway manages\nto get close to its fair share in the 3 piconets. The slaves in\nPiconet I get a bandwidth fraction of 0.16 and the slaves in\nPiconet II and III get a bandwidth fraction of 0.2 (all these are\nequal to their fair shares).\n5.3. Piconet with two gateways\nWe now show the working of the algorithm in a piconet having\n2 gateways, as shown in figure 11. Piconets I, II and III\nhave 6, 2 and 4 non-gateway slaves, respectively. 
There are\ntwo gateways, GW 1 between Piconets I and II; and GW 2 between\nPiconets II and III. All slaves have a traffic rate of 0.2.\nGW 1 has a traffic rate of 0.2 in Piconet I and 0.5 in Piconet II.\nGW 2 has a traffic rate of 0.2 in Piconet III. We vary the traffic\nrate of GW 2 in Piconet II and show the fair sharing of\nbandwidth.\nFigure 12 shows the fraction of bandwidth obtained by\nGW 1 and GW 2 in Piconet II compared to their fair shares.\nThe x-axis denotes GW 2 traffic in Piconet II. It can be seen\nthat the bandwidth fractions obtained are very close to the\nfair value. The non-gateway slaves of Piconet II receive a\nbandwidth fraction of 0.2, which is equal to their fair share\n(not shown in the figure). The bandwidth fraction received by\nslaves in Piconets I and III does not change for different values\nof GW2 traffic in Piconet II. The fair share of each slave\n(including the gateway) in Piconet I is 0.14 and in Piconet III\nFigure 12. Fraction of bandwidth and fair share of GW1 and GW2 in Piconet\nII.\nFigure 13. Actual rate estimation of the gateway and its ideal value.\nis 0.2; the bandwidth fraction received by each slave is very\nclose to these fair shares.\n5.4. Adaptivity to changing traffic demands\nWe now show how quickly the algorithm is able to adapt to\nchanging traffic. We again consider the scenario of experiment\n1 of section 5.1, consisting of two piconets, each having\na non-gateway slave, connected by a single gateway. The\nnon-gateway slaves have a traffic rate of 1; the gateway has\nequal traffic to both the masters. We vary the traffic rate of\nthe gateway as time progresses: for the first 2.5 seconds, the\ngateway's rate is 0.1, for the next 2.5 seconds, it is 0.5 and for\nthe remaining time, it is 0.3.\nFigure 13 shows the actual rate estimation of the gateway\n(and its ideal value) versus time. It can be seen that the rate\nestimation adapts very quickly to the new rate. For example,\nwhen the rate changes from 0.1 to 0.5, the rate estimation\nreaches a value of 0.45 in about half a second after 2.5 sec.\nThus, the algorithm adapts to quickly changing traffic.\nConclusions\nThis paper proposed a distributed scatternet-scheduling algorithm\nthat adapts to non-uniform and changing traffic. This\n20\nR. KAPOOR ET AL.\nalgorithm provides an integrated solution for both intra- and\ninter-piconet scheduling and can be implemented using the\nHOLD mode of Bluetooth. Through analysis and simulations,\nwe showed that the algorithm is traffic-adaptive and results in\na fair allocation of bandwidth to units. We explained earlier\nthat the algorithm may allow a unit to go into a power-saving\nmode.\nIn future, we would like to explore this option, which also\nassumes importance since Bluetooth devices will most likely\noperate in a power-constrained environment. As future work,\nwe would also like to evaluate the performance of TCP and\nother kinds of traffic on our algorithm. We are also working\ntowards interfacing the algorithm with requirements of higher\nlayers. In this respect, we are working towards providing QoS\nsupport using the algorithm.\nReferences\n[1] S. Baatz, M. Frank, C. Kehl, P. Martini and C. Scholz, Adaptive scatternet\nsupport for Bluetooth using sniff mode, in: Proc. of IEEE LCN\n(2001).\n[2] A. Das, A. Ghose, A. Razdan, H. Saran and R. Shorey, Enhancing performance\nof asynchronous data traffic over the Bluetooth wireless ad-hoc\nnetwork, in: Proc. of IEEE INFOCOM'2001 (2001).\n[3] J. 
Haartsen, BLUETOOTH the universal radio interface for ad hoc\nwireless connectivity, Ericsson Review 3 (1998) 110117.\n[4] P. Johansson, M. Kazantzidis, R. Kapoor and M. Gerla, Bluetooth an\nenabler for personal area networking, IEEE Network Magazine, Wireless\nPersonal Area Network (September 2001).\n[5] M. Kalia, D. Bansal and R. Shorey, MAC scheduling and SAR policies\nfor Bluetooth: A master driven TDD pico-cellular wireless system, in:\nProc. of 6th IEEE International Workshop on Mobile Multimedia Communications\n(MOMUC) (1999).\n[6] R. Kapoor, L. Chen, Y. Lee and M. Gerla, Bluetooth: carrying voice\nover ACL links, in: Proc. of MWCN (2002).\n[7] A. Mayer, Y. Ofek and M. Yung, Approximating max-min fair rates via\ndistributed local scheduling with partial information, in: Proc. of IEEE\nINFOCOM (1996).\n[8] NS-2 simulator, http://www.isi.edu/nsnam/ns/\n[9] A. Racz, G. Miklos, F. Kubinszky and A. Valko, A pseudo-random\ncoordinated scheduling algorithm for Bluetooth scatternets, in: Proc.\nof MobiHoc (2001).\n[10] Specifications of the Bluetooth System core, Vol. 1, v. 1.1, www.\nBluetooth.com\n[11] W. Zhang and G. Cao, A flexible scatternet-wide scheduling algorithm\nfor Bluetooth networks, in: Proc. of IEEE IPCCC (2002).\nRohit Kapoor received his Bachelor degree in computer\nscience in 1999 from the University of Roor-kee\n, India. He is currently a Ph.D. candidate at the\nUniversity of California, Los Angeles (UCLA). His\nresearch focuses on Bluetooth-based personal area\nnetworks. He is a member of the Network Research\nLab at UCLA.\nE-mail: rohitk@cs.ucla.edu\nAndrea Zanella received the Ph.D. degree in telecommunication\nengineering from the University of\nPadova, Italy, in 2002.\nPrior to that he received\nthe Dr. Ing. degree (comparable to Master degree)\nin computer engineering in 1998, still from the University\nof Padova. He spent nine months, in 2001, as\npost-doc researcher at the Department of Computer\nScience of the University of California, Los Angeles\n(UCLA), where he was engaged in research on\nWireless Networks and Wireless Access to Internet\nunder the supervision of Prof. Mario Gerla. Currently, he is a research fellow\nin the Department of Information Engineering of the University of Padova,\nItaly. His research interests are mainly focused on topics related to wireless\nand mobile networking. In particular, in the last period, he has been working\non the performance aspects of wireless personal area networks based on the\nBluetooth standard.\nE-mail: zanella@dei.unipd.it\nMario Gerla is a professor in the Computer Science\nDepartment at UCLA. He received his graduate degree\nin engineering from the Politecnico di Milano\nin 1966, and his M.S. and Ph.D. degrees in engineering\nfrom UCLA in 1970 and 1973, respectively.\nHe joined the faculty of the UCLA Computer Science\nDepartment in 1977. His current research is\nin the area of analysis, design and control of communication\nnetworks. 
Ongoing projects include the\ndesign and evaluation of QoS routing and multicast\nalgorithms for IP domains, the design and evaluation of all-optical network\ntopologies and access protocols, the design of wireless mobile, multimedia\nnetworks for mobile computing applications, and the development of measurement\nmethods and tools for evaluating the performance of high-speed\nnetworks and applications.\nE-mail: gerla@cs.ucla.edu", "keywords": "scheduling scheme;Round Robin;Distributed algorithm;scheduling;traffic adaptive;Scatternet presence fraction;Fairness;traffic rate;scatternets;Scatternet;Bluetooth;Rendezvous Points;Scheduling algorithm;Information exchange;heuristic;Gateway;Scheduling of gateways;Slaves;Non-uniform traffic;changing traffic;Efficiency;fair share;virtual slave;Rendezvous Point;Blueooth;Piconet presence fraction;HOLD mode;Slave unit;polling algorithm;Gateway slave traffic;Scatternets;Master unit;Allocation of bandwidth;Rendezvous point;fairness;Piconet;piconet;slave;Traffic Dependent Scheduling;Time-division multiplex;scatternet;Fair share;Bluetooth technology;Non-gateway slave traffic;bandwidth utilization;allocation of bandwidth;gateway;master;Round Robin polling"} {"name": "70", "title": "DirectoryRank: Ordering Pages in Web Directories", "abstract": "ABSTRACT Web Directories are repositories of Web pages organized in a hierarchy of topics and sub-topics. In this paper, we present DirectoryRank , a ranking framework that orders the pages within a given topic according to how informative they are about the topic. Our method works in three steps: first, it processes Web pages within a topic in order to extract structures that are called lexical chains, which are then used for measuring how informative a page is for a particular topic. Then, it measures the relative semantic similarity of the pages within a topic. Finally, the two metrics are combined for ranking all the pages within a topic before presenting them to the users.", "fulltext": "INTRODUCTION\nA Web Directory is a repository of Web pages that are organized in\na topic hierarchy. Typically, Directory users locate the information\nsought simply by browsing through the topic hierarchy, identifying\nthe relevant topics and finally examining the pages listed under the\nrelevant topics. Given the current size and the high growth rate of\nthe Web [10], a comprehensive Web Directory may contain thousands\nof pages within a particular category. In such a case, it might\nbe impossible for a user to look through all the relevant pages\nwithin a particular topic in order to identify the ones that best represent\nthe current topic. Practically, it would be more time-efficient\nfor a user to view the Web pages in order of importance for a particular\ntopic, rather than go through a large list of pages.\nOne way to alleviate this problem is to use a ranking function which\nwill order the pages according to how \"informative\" they are of the\ntopic that they belong to. Currently, the Open Directory Project [3]\nlists the pages within a category alphabetically, while the Google\nDirectory [1] orders the pages within a category according to their\nPageRank [11] value on the Web. 
While these rankings can work well in some cases, they do not directly capture the closeness of the pages to the topic that they belong to.
In this paper, we present DirectoryRank, a new ranking framework that we have developed in order to alleviate the problem of ranking the pages within a topic based on how "informative" these pages are to the topic. DirectoryRank is based on the intuition that the quality (or informativeness) of a Web page with respect to a particular topic is determined by the amount of information that the page communicates about the given topic, relative to the other pages that are categorized in the same topic. Our method takes as input a collection of Web pages that we would like to rank along with a Web Directory's topic hierarchy that we would like to use. At a high level, our method proceeds as follows: first, we identify the most important words inside every page and we link them together, creating "lexical chains". We then use the topic hierarchy and the pages' lexical chains to compute the "relatedness" (or importance) of the pages to each of their corresponding topics. Having determined the pages' topic importance, we measure the relative semantic similarity among the pages that relate to the same topic. The semantic similarity indicates the amount of content that important pages in some topic share with each other. Finally, we employ our DirectoryRank algorithm, which uses the topic importance scores in conjunction with the semantic similarities of the pages in order to compute the ranking order of the pages within a Directory topic.
In order to study the effectiveness of DirectoryRank in identifying the most informative pages within a particular topic, we applied our method to the ranking of 318,296 Web pages listed in 156 topics in the Google Directory. We have compared the rankings induced by DirectoryRank to the rankings induced by PageRank for the pages listed in those 156 topics. Our comparison reveals that the two rankings have different merits and thus they are useful in different tasks. To delve into the two rankings' effectiveness and investigate which is more useful for ordering pages in Directories' topics, we conducted a user study, where we asked a group of individuals to compare the rankings delivered by PageRank to the rankings delivered by DirectoryRank, and indicate which of the two is deemed as more useful. Our results show that, in most cases, the users perceived DirectoryRank to be more topic-informative than PageRank.
The rest of the paper is organized as follows: We start our discussion in Section 2 with a brief introduction to PageRank, which is currently employed by the Google Directory in order to rank pages. In Section 3, we briefly present the topic hierarchy that we use in our study as well as the process we follow for representing Web pages into lexical chains. We also show how we explore the topic hierarchy and the pages' lexical chains for measuring the pages' topic-importance and semantic similarity values. Finally, we present how our DirectoryRank metric employs the above values for measuring how informative Web pages are with respect to some topics and for ranking them accordingly. In Section 4, we experimentally study the effectiveness of DirectoryRank by comparing its performance to PageRank. We review related work in Section 5 and we conclude the paper in Section 6.
OVERVIEW OF PAGERANK
In this section, we briefly explain the main intuition of PageRank, a metric that was primarily invented for ranking pages within the Google Search Engine and that is currently used within the Google Directory for ordering Web pages. For a more elaborate overview of PageRank, we refer the reader to the work of [11]. The intuition of the PageRank metric is that a page on the Web is important if there are a lot of other important pages pointing to it. That is, if a page p has many links from other important pages, we may conclude that this page is interesting to many people and that it should be considered as being "important" or of "good" quality. Similarly, if an important page has links to other pages, we expect that part of its quality is transferred to the pages it points to, which in turn become of increased significance/quality. Roughly, PageRank PR(p) defines the importance of page p to be the sum of the importance of the pages that endorse p. At a high level, PageRank is calculating the probability that a "random surfer" is looking at a given page at a given point of time. The "random surfer" is a mathematical model that emulates the behavior of a user that, given a page, follows an outgoing link from that page at random. Formally, given a page p_i that has incoming links from the pages p_1, ..., p_n, and letting c_j be the number of out-links from p_j, the PageRank of p_i is given by:

PR(p_i) = d + (1 - d) * (PR(p_1)/c_1 + ... + PR(p_n)/c_n)

where d corresponds to the probability that the random surfer will get bored and his next visit will be a completely random page, and 1 - d corresponds to the probability that the page the random surfer will pick for his next visit is an outgoing link of the current page.
DIRECTORY RANK
The ranking of a Web page within a particular topic intuitively depends on two criteria: (i) the importance of the page for the underlying topic. This criterion helps us identify the most important pages out of the several ones that may lie within a particular topic. (ii) the semantic correlation of a page relative to other important pages in the same topic. This criterion helps us rank pages relative to each other within a topic. For measuring the importance of a Web page in some topic, we explore a subject hierarchy that we have built in the course of an earlier study [13] and we use the lexical chaining technique for identifying the most important words inside the page.
We start our discussion with a presentation of the topic hierarchy that we use in our work (Section 3.1) and we describe the process we follow for representing Web pages into lexical chains (Section 3.2).
We also explain how we utilize the topic hierarchy and the\npages' lexical chains for measuring the pages' importance to the\nhierarchy's topics (Section 3.2.1). The contribution of our work lies\nin the exploitation of the topic hierarchy and the lexical chains that\nwe generate for representing Web pages in order to compute the\nsemantic similarities between the pages that are important in some\ntopics. Moreover, we have developed a novel framework, which\nemploys the pages' topic importance and semantic similarity measures\nfor ranking pages inside Directory topics.\n3.1 The Topic Hierarchy\nThe main intuition in our DirectoryRank metric is that topic relevance\nestimation of a Web page relies on the page's lexical coherence\n, i.e. having a substantial portion of words associated with the\nsame topic. To capture this property, we adopt the lexical chaining\napproach: for every Web page we generate a sequence of semantically\nrelated terms, known as lexical chain. In our approach of representing\nWeb pages into lexical chains, we adopt the method reported\nin [6], which uses WordNet [5] as the knowledge base for\nproviding information about the semantic relations that exist between\nthe words in a text. A detailed description of the lexical\nchains' generation process is given in Section 3.2. Before that, we\npresent the topic hierarchy that we use for determining the topics\nthat are associated with the contents (i.e. words) of Web pages.\nSince we are mainly interested in measuring the Web pages' importance\nin the context of Web Directories, we decided to demonstrate\nthe usefulness of our DirectoryRank metric in ordering Web pages\nin the topics currently used in a real Web Directory. To that end, we\napplied DirectoryRank to the main topics used in the Google Directory\n. Google Directory provides a hierarchical listing of Web pages\ncategorized by topic and reuses the data maintained by the Open\nDirectory Project. Moreover, since DirectoryRank relies on the\nWeb pages' lexical chains rather than their entire contents for\nmeasuring the pages' importance to particular topics and since lexical\nchain generation is dependent on WordNet, we decided to enrich\nthe top level (main) topics of the Google Directory with their\nrespective WordNet lexical hierarchies.\nThe first step we took for enriching the Google topics with WordNet\ndata was to examine the compatibility between these topics and\nthe topics used to annotate WordNet's concepts with domain information\n. Note that the topic information that exists in the labels of\nWordNet's contents is taken from the freely available Suggested\nUpper Merged Ontology (SUMO) [4] and the MultiWordNet Domains\n(MWND) [2]. Due to space limitations, here we present a\nsummary of our approach into enriching the Google topics with\nWordNet hierarchies. A detailed description of the process we followed\nfor appending to the Google top level topics their corresponding\nWordNet hierarchies is given in [13]. In brief, we located\nthe Google's top level topics among the topics used in either\nSUMO or MWND for annotating WordNet concepts. Out of the 17\nGoogle topics, 13 topics (shown in Table 1) are used for labeling\nWordNet concepts with topic information. To each of those 13\ntopics, we integrated their corresponding sub-topics that we ac-quired\nfrom either SUMO or MWND. The sub-topic integration\nwas performed automatically, simply by following WordNet's hyper/hyponymy\nlinks. 
At the end of this process, we came down to a hierarchy of 489 sub-topics, which are organized into the 13 top-level topics that we used from Google Directory.

Table 1. The Hierarchy's First Level Topics
Arts, News, Sports, Society, Games, Computers, Home, Reference, Shopping, Recreation, Business, Science, Health

In Section 3.4, we will demonstrate how to use our topic hierarchy for automating the task of ranking pages within topical categories.
3.2 Measuring Web Pages' Topic Importance
The computational model that we adopted for generating lexical chains is presented in the work of Barzilay [6] and it generates lexical chains in a three-step approach: (i) it selects a set of candidate terms from a page (as candidate terms, we use nouns and proper names because they convey the vast majority of conceptual information in texts), (ii) for each candidate term, it finds an appropriate chain relying on a relatedness criterion among members of the chains, and (iii) if such a chain is found, it inserts the term in the chain. The relatedness factor in the second step is determined by the type of WordNet links that connect the candidate term to the terms stored in existing lexical chains. Figure 1 illustrates an example of the lexical chain generated for a text containing the candidate terms: system, network, sensor, weapon, missile, surface and net. The subscript si denotes the id of the word's sense within WordNet.

Lexical chain:
  system_s6 - network_s4
  system_s6 - sensor_s1
  system_s6 - weapon_s2 - missile_s1
  system_s6 - surface_s1 - net_s2
Figure 1. An example of a lexical chain.

Having generated lexical chains, we disambiguate the sense of the words inside every chain by employing the scoring function f introduced in [12], which indicates the probability that a word relation is a correct one.
Given two words, w_1 and w_2, their scoring function f via a relation r depends on the words' association score, their depth in WordNet and their respective relation weight. The association score (Assoc) of the word pair (w_1, w_2) is determined by the words' co-occurrence frequency in a corpus that has been previously collected. In practice, the greater the association score between a word pair w_1 and w_2 is, the greater the likelihood that w_1 and w_2 refer to the same topic. Formally, the Assoc score of the word pair (w_1, w_2) is given by:

Assoc(w_1, w_2) = log(p(w_1, w_2) + 1) / (N_s(w_1) * N_s(w_2))

where p(w_1, w_2) is the corpus co-occurrence probability of the word pair (w_1, w_2) and N_s(w) is a normalization factor, which indicates the number of WordNet senses that a word w has. Given a word pair (w_1, w_2), their DepthScore expresses the words' position in the WordNet hierarchy and is defined as:

DepthScore(w_1, w_2) = Depth(w_1)^2 * Depth(w_2)^2

where Depth(w) is the depth of word w in WordNet. Semantic relation weights (RelationWeight) have been experimentally fixed to 1 for reiteration, 0.2 for synonymy and hyper/hyponymy, 0.3 for antonymy, 0.4 for mero/holonymy and 0.005 for siblings. The scoring function f of w_1 and w_2 is defined as:

f_s(w_1, w_2, r) = Assoc(w_1, w_2) * DepthScore(w_1, w_2) * RelationWeight(r)

The value of the function f represents the probability that the relation type r is the correct one between words w_1 and w_2.
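To make the scoring function concrete, the following is a minimal Python sketch of Assoc, DepthScore, and f_s for a single word pair. It assumes the corpus co-occurrence probabilities, WordNet sense counts, and WordNet depths are supplied by the caller as plain dictionaries (the names cooc_prob, num_senses and depth are ours, not part of the paper); the relation weights are the values fixed in the text.

```python
import math

# Relation weights as fixed experimentally in the text: reiteration 1.0,
# synonymy and hyper/hyponymy 0.2, antonymy 0.3, mero/holonymy 0.4, siblings 0.005.
RELATION_WEIGHT = {
    "reiteration": 1.0, "synonymy": 0.2, "hypernymy": 0.2, "hyponymy": 0.2,
    "antonymy": 0.3, "meronymy": 0.4, "holonymy": 0.4, "sibling": 0.005,
}

def assoc(w1, w2, cooc_prob, num_senses):
    """Assoc(w1, w2) = log(p(w1, w2) + 1) / (N_s(w1) * N_s(w2))."""
    return math.log(cooc_prob[(w1, w2)] + 1) / (num_senses[w1] * num_senses[w2])

def depth_score(w1, w2, depth):
    """DepthScore(w1, w2) = Depth(w1)^2 * Depth(w2)^2."""
    return depth[w1] ** 2 * depth[w2] ** 2

def f_score(w1, w2, relation, cooc_prob, num_senses, depth):
    """f_s(w1, w2, r) = Assoc * DepthScore * RelationWeight(r)."""
    return (assoc(w1, w2, cooc_prob, num_senses)
            * depth_score(w1, w2, depth)
            * RELATION_WEIGHT[relation])
```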
In order to disambiguate the senses of the words within a lexical chain C_i, we calculate its score by summing up the f_s scores of all the word pairs w_j1, w_j2 (where w_j1 and w_j2 are successive words) within the chain C_i. Formally, the score of lexical chain C_i is expressed as the sum of the scores of the relations r_j in C_i:

Score(C_i) = sum over r_j in C_i of f_s(w_j1, w_j2, r_j)

Eventually, in order to disambiguate, we pick the relations and senses that maximize Score(C_i) for that particular chain. In estimating the importance of a Web page p_i in some Directory topic T_k, our first step is to identify which node within the hierarchy (see Section 3.1) corresponds to the topic T_k of the page.
3.2.1 Topic-Importance Scoring
Once the topic of a page is located among the hierarchy's topics, we map the words in the page's lexical chain to the WordNet nodes under that particular topic. Recall that upon lexical chain generation, words are disambiguated, ensuring that every word inside a page is mapped to a single word within the WordNet hierarchy. We then determine the importance of a page p_i to topic T_k by counting the number of words in the lexical chain of p_i that are subsumed by T_k in the hierarchy's graph. The topic importance of a page is given by a Relatedness Score (RScore), which indicates how relevant a page is for a given topic. Formally, the relatedness score of a page p_i (represented by the lexical chain C_i) to the hierarchy's topic T_k is defined as the product of the page's chain Score(C_i) and the fraction of words in the page's chain that are descendants of T_k. Formally, the RScore is given by:

RScore(C_i, T_k) = Score(C_i) * |elements common to C_i and T_k| / |elements of C_i|

The denominator is used to remove any effect the length of a lexical chain might have on RScore and ensures that the final score is normalized so that all values are between 0 and 1, with 0 corresponding to no relatedness at all and 1 indicating a page that is highly expressive of its topic. The RScore of a page to a specific topic captures the importance of the page in the given topic.
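The following is a minimal sketch of the chain scoring and topic-importance computation described above; the data structures (a chain as a list of disambiguated words plus its scored relations, and a topic given by the set of hierarchy terms it subsumes) are our own simplifications, not the paper's implementation.

```python
def chain_score(chain_relations, f_scores):
    """Score(C_i): sum of the f_s scores of the successive word-pair relations
    of the chain. `chain_relations` is a list of (w_j1, w_j2, relation) triples
    and `f_scores` maps each triple to its f_s value (e.g. from f_score above)."""
    return sum(f_scores[triple] for triple in chain_relations)

def rscore(chain_words, chain_score_value, topic_descendants):
    """RScore(C_i, T_k) = Score(C_i) * |words of C_i under T_k| / |C_i|.
    `topic_descendants` is the set of hierarchy terms subsumed by topic T_k."""
    if not chain_words:
        return 0.0
    common = sum(1 for w in chain_words if w in topic_descendants)
    return chain_score_value * common / len(chain_words)
```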
3.3 Semantic Similarity Scoring
The relatedness score metric that we have just presented can serve as a good indicator for identifying the most important pages within a topic. However, the RScore metric does not capture the amount of common content that is shared between the Web pages in a topic. This is important in the cases where our topic-importance scoring gives a low score to some pages but, at the same time, these pages are very similar to other pages with high topic-importance scores. In order to accommodate this scenario, we now show how to compute the semantic similarities among the pages that are listed in the same Directory topic. Semantic similarity is indicative of the pages' semantic correlation and helps in determining the ordering of the pages that are deemed important in some topic. Our DirectoryRank metric employs the Web pages' topic-importance scores and their semantic similarities to determine their ranking order inside Directory topics and is presented in the next section.
In order to estimate the Web pages' semantic similarity, we compare the elements in a page's lexical chain to the elements in the lexical chains of the other pages in a Directory topic. We assume that if the chains of two Web pages have a large number of elements in common, then the pages are correlated to each other. To compute similarities between pages p_i and p_j that are categorized in the same topic, we first need to identify the common elements between their lexical chains, represented as PC_i and PC_j, respectively. First, we use WordNet to augment the elements of the lexical chains PC_i and PC_j with their synonyms. Chain augmentation ensures that pages of comparable content are not regarded as unrelated if their lexical chains contain distinct, but semantically equivalent, elements. The augmented elements of PC_i and PC_j are defined as:

AugElements(PC_i) = C_i ∪ Synonyms(C_i)
AugElements(PC_j) = C_j ∪ Synonyms(C_j)

where Synonyms(C_i) denotes the set of the hierarchy's concepts that are synonyms to any of the elements in C_i, and Synonyms(C_j) denotes the set of the hierarchy's concepts that are synonyms to any of the elements in C_j. The common elements between the augmented lexical chains PC_i and PC_j are determined as:

ComElements(PC_i, PC_j) = AugElements(PC_i) ∩ AugElements(PC_j)

We formally define the problem of computing pages' semantic similarities as follows: if the lexical chains of pages p_i and p_j share elements in common, we produce the correlation look-up table with tuples of the form <AugElements(PC_i), AugElements(PC_j), ComElements>. The similarity measurement between the lexical chains PC_i, PC_j of the pages p_i and p_j is given by:

σ_s(PC_i, PC_j) = 2 * |ComElements(PC_i, PC_j)| / (|AugElements(PC_i)| + |AugElements(PC_j)|)

where the degree of semantic similarity is normalized so that all values are between zero and one, with 0 indicating that the two pages are totally different and 1 indicating that the two pages talk about the same thing.
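The semantic similarity σ_s can be computed directly from the two chains once a synonym lookup is available. The sketch below assumes a precomputed synonyms dictionary (our assumption) standing in for the WordNet-based augmentation described above.

```python
def semantic_similarity(chain_i, chain_j, synonyms):
    """sigma_s(PC_i, PC_j): Dice-style overlap of the synonym-augmented chains.
    `chain_i`/`chain_j` are lists of chain elements; `synonyms` maps an element
    to the set of hierarchy concepts that are synonymous with it."""
    aug_i = set(chain_i).union(*(synonyms.get(w, set()) for w in chain_i)) if chain_i else set()
    aug_j = set(chain_j).union(*(synonyms.get(w, set()) for w in chain_j)) if chain_j else set()
    if not aug_i and not aug_j:
        return 0.0
    common = aug_i & aug_j
    return 2 * len(common) / (len(aug_i) + len(aug_j))
```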
3.4 DirectoryRank Scoring
Pages are sorted in Directory topics on the basis of a DirectoryRank metric, which defines the importance of the pages with respect to the particular topics in the Directory. DirectoryRank (DR) measures the quality of a page in some topic by the degree to which the page correlates to other informative/qualitative pages in the given topic. Intuitively, an informative page in a topic is a page that has a high relatedness score to the Directory's topic and that is semantically close (similar) to many other pages in that topic. DR defines the quality of a page to be the sum of its topic relatedness score and its overall similarity to the fraction of pages with which it correlates in the given topic. This way, if a page is highly related to topic D and also correlates highly with many informative pages in D, its DR score will be high. Formally, consider that page p_i is indexed in Directory topic T_k with some RScore(p_i, T_k), and let p_1, p_2, ..., p_n be the pages in T_k with which p_i semantically correlates, with scores of σ_s(PC_1, PC_i), σ_s(PC_2, PC_i), ..., σ_s(PC_n, PC_i), respectively. Then, the DirectoryRank (DR) of p_i is given by:

DR(p_i, T_k) = RScore(p_i, T_k) + [σ_s(PC_1, PC_i) + σ_s(PC_2, PC_i) + ... + σ_s(PC_n, PC_i)] / n

where n corresponds to the total number of pages in topic T_k with which p_i semantically correlates.
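Putting the two scores together, the following is a small sketch of the DR computation and of ordering the pages of one topic by decreasing DR; the input layout (a dict from page id to its RScore and its list of σ_s values) is our own simplification.

```python
def directory_rank(rscore_value, similarities):
    """DR(p_i, T_k) = RScore(p_i, T_k) + (sum of sigma_s to the other pages
    of the topic) / n, where `similarities` holds sigma_s(PC_j, PC_i) for the
    n pages of T_k that p_i semantically correlates with."""
    n = len(similarities)
    if n == 0:
        return rscore_value
    return rscore_value + sum(similarities) / n

def rank_topic(pages):
    """Order the pages of one topic by decreasing DR.
    `pages` maps a page id to a (rscore, [sigma_s, ...]) pair."""
    dr = {p: directory_rank(r, sims) for p, (r, sims) in pages.items()}
    return sorted(dr, key=dr.get, reverse=True)
```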
EXPERIMENTAL SETUP
To measure the potential of our DirectoryRank metric in delivering topic-informative rankings, we conducted an experiment where we studied the effectiveness of DR in prioritizing the most informative pages in some Directory topics. To obtain perceptible evidence of DirectoryRank's efficiency in a practical setting, we applied our DR metric to a set of Web pages listed in a number of topics in Google Directory and we compared the rankings induced by DirectoryRank to the rankings that Google Directory delivers for the same set of pages and topics. In Section 4.1 we explain how we selected the pages for our study, while in Section 4.2 we present the similarity measure that we used for comparing the rankings induced by DirectoryRank to the rankings delivered by PageRank, and we give the obtained results. Moreover, to delve into the behavior of DirectoryRank, we carried out a user study, presented in Section 4.3.
4.1 Experimental Dataset
In selecting our experimental data, we picked pages that are categorized in those topics in Google Directory which are also present in our hierarchy. Recall that Google Directory is a replica of the Dmoz Directory, from which we borrowed our hierarchy's 13 top-level topics. Out of all the sub-topics organized in those 13 top-level topics in Google Directory, 156 were represented in our hierarchy. Having determined the topics whose sets of ranked pages would be compared, we downloaded a total number of 318,296 pages, categorized in one of the 156 selected topics, which in turn are organized into the 13 top-level topics. Table 2 shows the distribution of the experimental pages in the top-level topics in Google Directory.

Table 2. Statistics on the experimental data
Category      # of documents   # of sub-topics
Arts          28,342           18
Sports        20,662           26
Games         11,062           6
Home          6,262            7
Shopping      52,342           15
Business      60,982           7
Health        23,222           7
News          9,462            4
Society       28,662           14
Computers     35,382           13
Reference     13,712           10
Recreation    8,182            20
Science       20,022           9
Total         318,296          156

Since we were interested in comparing DirectoryRank with PageRank in the context of ranking Web pages in Directory topics, we recorded for the downloaded Web pages their relative ranking order in Google Directory in each of the 156 selected topics. We then stored the downloaded pages in a secondary index, maintaining their relative PageRank rankings. To compute the DR values for every experimental page, we initially processed the downloaded pages in order to generate and score their lexical chains. For every page, we first computed its RScore to the topic in which it is assigned in Google Directory, and then we computed the semantic similarity (σ_s) for every pair of pages listed in each topic. Lastly, using the above two scores (i.e. semantic similarity and topic relatedness), we computed for every Web page its DirectoryRank (DR) value and we sorted the pages listed within each of the topics, so that pages with higher DR scores in some topic are prioritized among the set of topic-related pages. Using the above data, we evaluated the effectiveness of our DirectoryRank metric in ordering Web pages inside the Directory's topics.
4.2 Overlap of DirectoryRank and PageRank
To investigate whether there is any similarity between the rankings induced by DirectoryRank and the rankings delivered by PageRank for our experimental pages in the 156 topics in Google Directory, we used the OSim measure, reported in the work of [9], which indicates the degree of overlap between the top n URLs of the two rankings. Formally, the overlap of two ranked lists A and B (each of size n) is given by:

OSim(DR, PR) = |A ∩ B| / n

Using the above formula, we computed for each of the 156 topics the overlap between the pages ranked in the top n=10 positions for that topic by DR and PR, respectively.
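For completeness, the OSim overlap used in this comparison reduces to a few lines; ranking_a and ranking_b are assumed to be the ranked URL lists produced by the two ranking schemes.

```python
def osim(ranking_a, ranking_b, n=10):
    """OSim: fraction of URLs shared by the top-n entries of the two rankings."""
    top_a, top_b = set(ranking_a[:n]), set(ranking_b[:n])
    return len(top_a & top_b) / n
```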
Afterwards, we first computed the average similarity between the two induced rankings for each of the 156 selected topics, and then the average similarity between the two induced rankings for each of the 13 top-level topics. To compute the average similarity between DR and PR for a top-level topic T, we summed the average similarity of all sub-topics in T and divided by the number of sub-topics that T has. Table 3 gives the average similarity scores between DR and PR for each of the 13 top-level topics examined in our experiment.

Table 3. Average similarity of rankings for the top level topics
Category      OSim
Arts          0.038
Sports        0.019
Games         0.030
Home          0.057
Shopping      0.013
Business      0.028
Health        0.057
News          0.100
Society       0.043
Computers     0.046
Reference     0.020
Recreation    0.025
Science       0.044

The obtained results demonstrate that there is little average overlap between the top 10 results of the two rankings. Note that for some topics we compared the overlap between DR and PR for a larger set of pages (e.g. n=20 and n=30) and we found that the OSim score of the two rankings increases, albeit slightly, as the size of n grows. For example, in the topic Sports, the OSim between DR and PR for n=10 is 0.019, whereas for n=20 the OSim score is 0.023 and for n=30 OSim is 0.028. Our results show that even the pairs with the greatest similarity among all pairs examined (e.g. the rankings delivered for the News topic) have, according to the OSim measure, little in common. Despite the usefulness of the OSim measure for making rough estimations about the ability of the two ranking schemes to identify the same top pages with respect to some topics, it cannot directly capture which ranking is more useful for ordering pages in Directory topics. This is because OSim does not indicate the degree to which the relative orderings of the top n pages of two rankings are in agreement. Having established that PageRank and DirectoryRank order Web pages substantially differently, we proceed to investigate which of these rankings is better for ordering Web pages in Directory topics. To that end, we carried out a user study, reported next.
4.3 DirectoryRank Performance
To determine which of the two ranking measures, namely DR and PR, is perceived as more useful by Web users for organizing pages in Web Directories, we carried out a user study. From our sample data, we picked the top 10 pages listed in 7 randomly selected topics (out of the 156 topics examined) and we recruited 15 postgraduate volunteers from our school. Table 4 lists the 7 topics selected.

Table 4. Experimental Topics
T1  Crime
T2  Photography
T3  Water Sports
T4  Radiology
T5  Mechanics
T6  Econometrics
T7  Collecting

For each topic, the volunteer was shown 2 result rankings: one consisted of the top 10 pages for the topic ranked with DR, and the other consisted of the top 10 pages for the topic ranked with PR. For each topic, the volunteer was asked to read the pages in both lists and indicate which of the two rankings, in their opinion, is more "useful" overall for communicating information about the topic. Volunteers were not told anything about how either of the rankings was generated. In order to avoid misinterpretations while analyzing the users' selection preferences, we asked the users to indicate their descriptive selections directly. More specifically, we presented to our participants the following choices and we asked them to indicate for which of the following reasons they selected one ranking over the other for each of the topics examined.
Reason R1: "I prefer this ranking because I obtained significant information about the topic from most of the pages". In our analysis, we interpret the ranking preferences established on this reason as "topic-informative" rankings.
Reason R2: "I prefer this ranking because I have seen most of the pages before and I liked them". We interpret the ranking preferences established on this reason as "popular" rankings.
We then compared the participants' descriptive selections for every topic with the final DR/PR choices. This way we ensure that users' preferences are accurately evaluated even if two volunteers had exactly the same descriptive selection but ended up casting that selection into different DR/PR rankings. As a final note, we also asked our volunteers to indicate their familiarity with the experimental topics, by characterizing each of the topics examined as "familiar" or "unfamiliar". In our evaluation, we considered that one ranking was better than the other if at least 50% of the users selected it as more "useful". Table 5 shows the rankings selected by our subjects as more useful for each of the 7 examined topics. Every row corresponds to a separate user. The columns marked as Ti show what the preference of the user was for the particular topic. Under the Ti columns, the keyword DR means that the user considered DirectoryRank as more useful for that topic, while PR means that the user deemed PageRank as more useful. The column marked as R on the right of a Ti column indicates the reason for which the user voted for the specified ranking. Table 6 summarizes the rankings preferred by the majority of the users for each of the topics.
Table 5. Rankings selected as more useful for each topic
User   T1   R   T2   R   T3   R   T4   R   T5   R   T6   R   T7   R
#1     DR   1   DR   1   DR   1   DR   1   PR   2   DR   1   PR   2
#2     PR   2   DR   2   PR   2   DR   1   DR   1   DR   1   PR   2
#3     DR   1   DR   1   DR   1   DR   1   DR   2   DR   1   PR   2
#4     PR   1   PR   1   PR   2   DR   2   PR   2   PR   2   PR   1
#5     DR   1   PR   1   PR   2   PR   2   PR   2   DR   2   DR   1
#6     PR   2   DR   1   PR   2   DR   1   DR   1   DR   2   DR   1
#7     DR   2   PR   2   PR   1   DR   1   PR   2   DR   1   DR   1
#8     DR   1   DR   2   DR   1   DR   1   PR   1   DR   1   PR   2
#9     PR   2   DR   1   PR   2   PR   2   PR   2   DR   1   DR   2
#10    DR   1   DR   1   DR   1   DR   1   DR   1   DR   2   DR   2
#11    DR   1   DR   1   DR   1   DR   2   PR   2   PR   2   PR   2
#12    DR   1   DR   1   DR   1   PR   1   PR   2   DR   1   DR   1
#13    DR   2   PR   2   PR   1   DR   1   PR   2   DR   1   DR   1
#14    PR   2   DR   1   PR   2   DR   1   DR   1   DR   1   PR   2
#15    DR   1   DR   2   DR   1   DR   1   PR   1   DR   1   DR   1

Our survey results demonstrate that the majority of the users perceived DirectoryRank overall as more useful in comparison to PageRank for ordering Web pages in the Directory's topics. This is attested by the fact that for most of the topics examined (5 out of the 7 topics), the majority of our subjects preferred DR over PR. A closer look at the obtained results indicates that the reason on which our participants based most of their DR selections is Reason 1, which implies that the rankings delivered by DR are perceived as more topic-informative. Conversely, most of the users who liked the rankings induced by PR better established their selection on Reason 2. This suggests that the usefulness of PR is not implied mainly by how informative a page is about a topic, but rather that it is substantially influenced by the page's popularity.

Table 6. Rankings preferred by the majority of users
Topic                 Preferred by majority
T1  Crime             DirectoryRank
T2  Photography       DirectoryRank
T3  Water Sports      PageRank
T4  Radiology         DirectoryRank
T5  Mechanics         PageRank
T6  Econometrics      DirectoryRank
T7  Collecting        DirectoryRank

Moreover, although not reported here due to space limits, our survey results show that our participants' answers were not generally influenced by their familiarity (or not) with the underlying topics. This implies that our survey does not entail "topic bias", since both rankings compared are applied to pages listed in the same topic.
RELATED WORK
There have been a number of studies trying to identify the best ranking order of the Web pages that are deemed to be relevant to a given query/topic. The most successful of these studies [8, 11] suggest the exploitation of the pages' link connectivity on the Web graph for measuring the pages' importance and ranking them accordingly. The most widely known ranking metric that explores the pages' link structure for measuring their importance on the Web is PageRank. Currently, PageRank and its variations are used by most major Web Search Engines to rank the results that they return to Web users in response to their search requests. Despite PageRank's usefulness for ordering pages in the context of Search Engines, it is designed to measure the global importance of the pages on the Web, independent of any particular topics. However, the overall importance of the pages may not be a sufficient measure for ordering the pages inside Directories' topics, essentially because pages that are important in some topics may not be important in others, regardless of the number and structure of the links that may appear in those pages.
To alleviate some of the inherent limitations of PageRank\n, a number of researchers designed new ranking metrics,\nwhich mainly rely on modifications of PageRank and are tailored\nfor specific tasks. For example, [9] studies personalization of the\nPageRank metric by giving different weights to pages, [14] examine\nthe local and the inter-site link structure in order to compute a\nglobal PageRank for Web pages, [7] introduce Hilltop, an algorithm\nwhich generates query-specific authority scores for improving rankings\nfor popular queries. While most of these works mainly focus\non improving the rankings delivered to Web users by measuring the\nWeb pages' overall importance, in this paper we are more concerned\nabout the topic importance of Web pages by measuring the\npages' informativeness with respect to particular topics. In this\nscope, we perceive our work to be complementary to previous studies\non personalized rankings [9]. Moreover, there exists prior work\nthat explores the lexical chaining technique as a means for representing\ndocuments' contents [6, 12]. Recently, we employed the\nlexical chaining technique for the automatic classification of Web\ndocuments in topic hierarchies [13]. Our findings indicated the\npotential of lexical chains in successfully capturing the thematic\ncontent of Web pages. This motivated our work to use the lexical\nchains generated for a number of Web pages as a means for ordering\npages within Directory topics. In the future we plan to investigate\nhow our approach could benefit from other linguistic approaches\n, besides lexical chains.\nCONCLUDING REMARKS\nIn this paper, we introduced DirectoryRank, a practical metric for\ndetermining how informative Web pages are for particular topics\nand ranking them accordingly. To evaluate the potential of DirectoryRank\nin ordering Web pages inside Directory topics, we conducted\nan experiment where we applied our DirectoryRank metric\nto order a set of pages listed within 156 topics in Google Directory\nand we compared the rankings induced by DirectoryRank to the\nrankings that PageRank delivers in Google Directory for the same\nset of pages and topics. In our study, we relied on the judgments\nmade by 15 users to determine which ranking is perceived as more\nuseful for Web Directories' users. Obtained results indicate that in\noverall users preferred DirectoryRank over PageRank for ordering\nWeb pages inside the Directory's topics. Although it would probably\nrequire additional studies in order to evaluate the applicability\nof our method to Web Directories other than Google and assess\nDirectoryRank's usefulness to a larger user and categories base, we\nbelieve that our work can serve as the first step towards a topic-informative\nranking metric within directories.\nREFERENCES\n[1] Google Directory http://dir.google.com/.\n[2] MultiWordNet Domains http://wndomains.itc.it/.\n[3] Open Directory Project http://dmoz.com/.\n[4] Sumo Ontology http://ontology.teknowledge.com/.\n[5] WordNet 2.0 http://www.cogsci.princeton.edu/~wn/.\n[6] Barzilay R Lexical chains for text summarization. Master's\nThesis, Ben-Gurion University, 1997.\n[7] Bharat K and Mihaila G. Hilltop: a search engine based on\nexpert documents:\nhttp://www.cs.toronto.edu/~georgem/ hilltop\n/.\n[8] Kleinberg J. Authoritative sources in a hyperlinked environment\n. In Journal of the ACM, 46(5), 1999, 604-632.\n[9] Haveliwala T. Topic sensitive PageRank. In Proceedings of the\n11\nth\nWWW Conference, 2002, 517-526.\n[10] Ntoulas A., Cho J. 
and Olston Ch. What's new on the web?\nThe evolution of the web from a search engine perspective. In\nProceedings of the 13\nth\nWWW Conference, 2004, 1-12.\n[11] Page L., Brin S., Motwani R. and Winograd T. The PageRank\ncitation ranking: Bringing order to the web. Available at\nhttp://dbpubs.stanford.edu:8090/pub/1999-66.\n[12] Song Y.I., Han K.S. and Rim H.C. A term weighting method\nbased on lexical chain for automatic summarization. In Proceedings\nof the 5\nth\nCICLing Conference, 2004, 636-639.\n[13] Stamou S., Krikos V., Kokosis P., Ntoulas A. and Christodoulakis\nD. Web directory construction using lexical chains. In\nProceedings of the 10\nth\nNLDB Conference 2005, 138-149.\n[14] Wang Y. and DeWitt D. Computing PageRank in a distributed\ninternet search system. In Proc. of the 30\nth\nVLDB Conf., 2004.\n\n22", "keywords": "topic hierachy;semantic similarity;ranking metric;scoring;web directory;ranking;lexical chains;DirectoryRank;topic importance;PageRank;information retrieval;Web Directory;semantic similarities"} {"name": "71", "title": "Discovering and Ranking Web Services with BASIL: A Personalized Approach with Biased Focus", "abstract": "In this paper we present a personalized web service discovery and ranking technique for discovering and ranking relevant data-intensive web services. Our first prototype called BASIL supports a personalized view of data-intensive web services through source-biased focus. BASIL provides service discovery and ranking through source-biased probing and source-biased relevance metrics. Concretely, the BASIL approach has three unique features: (1) It is able to determine in very few interactions whether a target service is relevant to the given source service by probing the target with very precise probes; (2) It can evaluate and rank the relevant services discovered based on a set of source-biased relevance metrics; and (3) It can identify interesting types of relationships for each source service with respect to other discovered services, which can be used as value-added metadata for each service. We also introduce a performance optimization technique called source-biased probing with focal terms to further improve the effectiveness of the basic source-biased service discovery algorithm. The paper concludes with a set of initial experiments showing the effectiveness of the BASIL system.", "fulltext": "INTRODUCTION\nMost web services today are web-enabled applications that\ncan be accessed and invoked using a messaging system, typically\nrelying on standards such as XML, WSDL, and SOAP [29].\nMany companies have latched onto the web services mantra, including\nmajor software developers, business exchanges, eCom-merce\nsites, and search engines [15, 9, 2, 1, 7]. A large and\ngrowing portion of the web services today can be categorized\nas data-intensive web services.\n\nThis research is partially supported by NSF CNS CCR, NSF ITR, DoE\nSciDAC, DARPA, CERCS Research Grant, IBM Faculty Award, IBM\nSUR grant, HP Equipment Grant, and LLNL LDRD.\nPermission to make digital or hard copies of all or part of this work for personal\nor classroom use is granted without fee provided that copies are not made or\ndistributed for profit or commercial advantage and that copies bear this notice\nand the full citation on the first page. 
To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ICSOC'04, November 15-19, 2004, New York, New York, USA.
Copyright 2004 ACM 1-58113-871-7/04/0011...$5.00.
Data-intensive web services provide access to huge and growing data stores and support tools for searching, manipulating, and analyzing those data stores. For example, both Amazon [1] and Google [7] now provide XML- and SOAP-based web service interfaces to their underlying data repositories with support for advanced search operators over, collectively, billions of items. In the life sciences domain, many bioinformatics services are transitioning from human-in-the-loop web interfaces to the web services model [9], providing direct access to unprecedented amounts of raw data and specialized research tools to provide high-level analysis and search over these data services.
With the increasing visibility of web services and the Service-Oriented Computing paradigm [18], there is a growing need for efficient mechanisms for discovering and ranking services. Effective mechanisms for web service discovery and ranking are critical for organizations to take advantage of the tremendous opportunities offered by web services, to engage in business collaborations and service compositions, to identify potential service partners, and to understand service competitors and increase the competitive edge of their service offerings.
Current web service discovery techniques can be classified into two types: categorization-based discovery and personalized relevance-based discovery. The former discovers web services by clustering and categorizing a collection of web services into different groups based on certain common properties of the services. Most of the existing UDDI [28] registry-based service discovery methods are of this type. They typically discover relevant services by querying metadata maintained in the common registries (like the ones offered by Microsoft [16] and IBM [10]). A typical question is "Which bioinformatics web services offer BLAST capability" or "Which commercial services offer on-line auctions". The second type of discovery mechanism uses personalized relevance reasoning and supports questions such as "Which services offer the same type of content as NCBI", and "Find the top-ten web services that offer more coverage than the BLAST services at NCBI". These two types of service discovery techniques offer different focus and complementary capabilities. Consider the following examples:
A bioinformatics researcher may be interested in finding all services similar to NCBI's BLAST service for searching DNA and protein sequence libraries [17]. Current service registries may provide pointers to other BLAST services, but they do not describe how these other sites relate specifically to NCBI's BLAST service. Which services provide the most similar coverage with respect to NCBI (e.g. of similar proteins or organisms)? Which services are complementary in their coverage (e.g. of other sequence libraries)? How best should the BLAST services be ranked relative to the NCBI service?
A health science researcher familiar with the PubMed medical literature service may be interested in discovering other related medical digital library services. Given his prior knowledge of PubMed, he may want to ask certain personalized (source-biased) discovery requests, which are not supported by the conventional service-registry-based discovery model. Examples include: Find and rank all PubMed-related medical literature sites. Which services have more general coverage than PubMed? Which medical literature services are more specialized than PubMed?
These examples highlight two fundamental differences between the categorization-based and the personalization-based discovery model: (1) Categorization of web services based on general descriptions maintained at the service registries is insufficient and inadequate when a user is interested in discovery based on a particular service (or a set of services) with which she has prior experience or knowledge (e.g. NCBI BLAST or PubMed); and (2) There is a need for service ranking metrics that capture the relative scope and coverage of the data offered with respect to a previously known service. Although these two types of discovery mechanisms are complementary, most existing proposals on web service discovery fall into the first type. Surprisingly, there are, to our knowledge, no effective means to provide such personalized and biased discovery and ranking support without relying on significant human intervention.
In this paper we present algorithms for discovering and ranking relevant data-intensive web services. Our first prototype, called BASIL (BiAsed Service dIscovery aLgorithm), supports a personalized view of web services through source-biased probing and source-biased relevance detection and ranking metrics. Concretely, our approach is capable of discovering and ranking web services by focusing on the nature and degree of the data relevance of the source service to others. Given a service like NCBI's BLAST, called the source, the BASIL source-biased probing technique leverages the summary information of the source to generate a series of biased probes to other services, called the targets. This source-biased probing allows us to determine whether a target service is relevant to the source by probing the target with very few focused probes. We introduce the biased focus metric to discover and rank highly relevant data services and measure relevance between services. Our initial results on both simulation and web experiments show that the BASIL system supports efficient discovery and ranking of data-intensive web services.
MODEL AND PROBLEM STATEMENT
We consider a universe of discourse W consisting of D data-intensive web services: W = {S_1, S_2, ..., S_D}, where each service produces one or more XML documents in response to a particular service request. Hence, we describe each web service S_i as a set of M_i documents: S_i = {doc_1, doc_2, ..., doc_M_i}. For example, the documents corresponding to the NCBI BLAST service would consist of genetic sequences and documentation generated in response to service requests.
Similarly, the documents corresponding to PubMed would consist of the set of medical journal articles in the PubMed data repository.
There are N terms (t_1, t_2, ..., t_N) in the universe of discourse W, including both the tags and content of the XML documents, where common stopwords (like 'a', 'the', and so on) have been eliminated. Optionally, the set of N terms may be further refined by stemming [19] to remove prefixes and suffixes.
Adopting a vector-space model [22, 23] of the service data repository, we describe each service S_i as a vector consisting of the terms in the service along with a corresponding weight:

Summary(S_i) = {(t_1, w_i1), (t_2, w_i2), ..., (t_N, w_iN)}

A term that does not occur in any documents served by a service S_i will have weight 0. Typically, for any particular service S_i, only a fraction of the N terms will have non-zero weight. We refer to the number of non-zero weighted terms in S_i as N_i.
We call the vector Summary(S_i) a service summary for the data-intensive web service S_i. A service summary is a single aggregate vector that summarizes the overall distribution of terms in the set of documents produced by the service. In this first prototype of BASIL, we rely on a bag-of-words model that is indifferent to the structure inherent in the XML documents. As we demonstrate in the experiments section, this bag-of-words approach is quite powerful without the added burden of structural comparisons. We anticipate augmenting future versions of BASIL to incorporate structural components (to support schema matching, leverage existing ontologies, etc.).
To find Summary(S_i), we must first represent each document doc_j (1 ≤ j ≤ M) as a vector of terms and the frequency of each term in the document:

doc_j = {(t_1, freq_j1), (t_2, freq_j2), ..., (t_N, freq_jN)}

where freq_jk is the frequency of occurrence of term t_k in document j. The initial weight for each term may be based on the raw frequency of the term in the document and it can be refined using alternative occurrence-based metrics like the normalized frequency of the term and the term-frequency inverse document-frequency (TFIDF) weight. TFIDF weights the terms in each document vector based on the characteristics of all documents in the set of documents.
Given a particular encoding for each document, we may generate the overall service summary in a number of ways. Initially, the weight for each term in the service summary may be based on the overall frequency of the term across all the documents in the service (called the service frequency, or servFreq): w_ik = servFreq_ik = sum over j = 1..M of freq_jk. Alternatively, we can also define the weight for each term based on the number of documents in which each term occurs (called the document count frequency, or docCount).
Once we have chosen our service model, to effectively compare two data-intensive web services and determine the relevance of one service to another, we need two technical components: (1) a technique for generating a service summary; and (2) a metric for measuring the relevance between the two.
2.1 Estimating Service Summaries
Ideally, we would have access to the complete set of documents belonging to a data-intensive web service. We call a service summary for S_i built on these documents an actual service summary, or ASummary(S_i).
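As an illustration of the two weighting options, here is a minimal sketch that builds a service summary from a service's documents using either the servFreq or the docCount weights; the tokenized-document input format is our assumption, not part of the paper.

```python
from collections import Counter

def service_summary(documents, use_doc_count=False):
    """Build Summary(S_i) as a term -> weight mapping from the bag-of-words of
    a service's documents. With use_doc_count=False the weight is the service
    frequency servFreq (total occurrences across all documents); with
    use_doc_count=True it is docCount (number of documents containing the term).
    `documents` is a list of token lists, already stopword-filtered (and
    optionally stemmed)."""
    summary = Counter()
    for tokens in documents:
        if use_doc_count:
            summary.update(set(tokens))   # each document counts a term at most once
        else:
            summary.update(tokens)        # raw occurrences: servFreq
    return dict(summary)
```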
However, the enormous size of the underlying repositories for many data-intensive web services, coupled with the non-trivial costs of collecting documents (through repeated service requests and individual document transfers), makes it unreasonable to generate an actual service summary for every service available. As a result, previous researchers in the context of distributed databases have introduced several probing techniques for generating representative summaries based on small samples of a document-based collection [3, 4]. We call such a representative summary an estimated service summary, or ESummary(S_i):

ESummary(S_i) = {(t_1, w_i1), (t_2, w_i2), ..., (t_N, w_iN)}

The number of occurring terms (i.e. those terms that have non-zero weight) in the estimated summary is denoted by N'_i. Typically, N'_i will be much less than the number of non-zero weighted terms N_i in the actual service summary, since only a fraction of the total documents in a service will be examined. The goal of a prober is typically to find ESummary(S_i) such that the relative distribution of terms closely matches the distribution of terms in ASummary(S_i), even though only a fraction of the total service documents will be examined.
Current probing techniques for estimating service summaries aim at estimating the overall summary of the data served by a web service. We classify them into two categories: random sampling and query-based sampling.
Random Sampling - No Bias
If we had unfettered access to a data-intensive web service, we could randomly select terms from the service to generate the estimated service summary ESummary(S_i). Barring that, we could randomly select documents with which to base the estimated service summary. We will call such a random selection mechanism an unbiased prober since all terms (or documents) are equally likely to be selected. In practice, an unbiased prober is unrealistic since most services only provide a request-response mechanism for extracting documents.
Query-based Sampling - Query Bias
As a good approximation to unbiased probing, Callan et al. [3, 4] have introduced a query-based sampling technique for generating accurate estimates of document-based collections by examining only a fraction of the total documents. The Callan technique has been shown to provide accurate estimates using very few documents (e.g. several hundred). Adapting the Callan technique to the web services context requires repeatedly requesting documents from a service using a limited set of service requests. Since the documents extracted are not chosen randomly, but are biased by the service request mechanism through the ranking of returned documents and by providing incomplete access to the entire data service repository, we say that the Callan technique displays query bias. There are several ways to define the limited set of queries, including random selection from a general dictionary and random selection augmented by terms drawn from the extracted documents of the service. In the rest of the paper, when we refer to an estimated service summary ESummary(S_i), we mean one that has been produced by a query-biased prober.
2.2 Comparing Service Summaries
In order to determine the relevance of one service S_i to another service S_j and to assess the nature of their relationship, we require an appropriate relevance metric.
There are a number of possible relevance metrics to compare two service summaries. A fairly simple and straightforward approach is based on a count of the number of common terms in the two services S_i and S_j:

rel(S_i, S_j) = |ESummary(S_i) ∩ ESummary(S_j)| / max(|ESummary(S_j)|, |ESummary(S_i)|)

Two services with exactly the same terms represented in their estimated summaries will have rel(S_i, S_j) = 1, indicating the highest possible degree of relevance. Conversely, two services with no terms in common will have rel(S_i, S_j) = 0, indicating the lowest possible degree of relevance.

We now use an example to illustrate why the existing service summary estimation techniques are inadequate for effectively discovering relevant services, especially in terms of the data coverage of one (target) in the context of the other (source).

Example: We collected fifty documents from the Google web service, the PubMed web service, and ESPN's search site, respectively, using a query-based sampling technique for service summary estimation. Using the service summaries constructed, we find that rel(Google, PubMed) = 0.05 and rel(ESPN, PubMed) = 0.06. In both cases the service summaries share very few terms in common, and hence both Google and ESPN appear to be irrelevant with respect to PubMed, even though Google provides considerable health-related content. Based on these figures, we could incorrectly conclude that: (1) Google is irrelevant to PubMed; and (2) relatively speaking, ESPN is more relevant to PubMed than Google.

This example underlines two critical problems with current techniques for probing and comparing service summaries. First, current service summary estimation techniques are concerned with generating overall (or global) summaries of the underlying data repositories; the goal is to generate an essentially unbiased estimate of the actual service summary. Second, the current relevance comparison metric fails to serve as a valuable ranking metric or as an indicator of interesting relationships between target services in terms of the data coverage of a target service with respect to the source.

THE BASIL SYSTEM

Bearing these issues in mind, we now introduce BASIL, an efficient web service discovery and ranking prototype that relies on a biased perspective of services rather than on a single global perspective. BASIL relies on three fundamental steps: (1) source-biased probing for web service discovery; (2) evaluation and ranking of discovered services with the biased focus metric; and (3) leveraging the biased perspective of service sources and targets to discover interesting relationships.

3.1 Source-Biased Probing

Given a data-intensive web service (the source), the source-biased probing technique leverages the summary information of the source to generate a series of biased probes for analyzing another service (the target). This source-biased probing allows us to determine in very few interactions whether a target service is relevant to the source by probing the target with focused probes.

To help differentiate the source-biased approach from the others discussed in Section 2, in this section we use σ to denote the source service and τ to denote the target service instead of S_i and S_j. Given two services σ and τ, the output of the source-biased probing is a subjective service summary for τ that is biased towards σ.
We define the source-biased summary of the target service τ, denoted by ESummary_σ(τ), as follows:

ESummary_σ(τ) = {(t_1, wσ_1), (t_2, wσ_2), ..., (t_N, wσ_N)}

N is the total number of terms used in analyzing the set of data-intensive web services. wσ_i (1 ≤ i ≤ N) is the weight of term t_i, defined using one of the weight functions introduced in Section 2. To distinguish an unbiased term weight w_j from the corresponding term weight in the biased target summary, we denote the biased weight by wσ_j. It is important to note that typically the inequality w_j ≠ wσ_j does hold.

Concretely, the source-biased probing algorithm generates a source-biased summary for a target as follows: it uses the estimated service summary of the source σ, denoted by ESummary(σ), as a dictionary of candidate probe terms and sends a series of query requests, parameterized by probe terms selected from ESummary(σ), to the target service τ; for each probe term, it retrieves the top-m matched documents from τ, generates summary terms and updates ESummary_σ(τ). This process repeats until a stopping condition is met. Figure 1 illustrates the source-biased probing process. Note that in this first prototype of BASIL the service requests are constrained to keyword-based probes. Note also that the source-biased approach can be applied to UDDI-directory-based discovery by restricting the source summary to be generated from the meta-data description maintained at the registries rather than performing the source-biased probing directly. However, the quality of the discovery results will be lower due to the lack of richness in the metadata maintained at the service registries for many services.

SourceBiasedProbing(Source σ, Target τ)
    For target service τ, initialize ESummary_σ(τ) = ∅.
    repeat
        Invoke the probe term selection algorithm to select a one-term
        query probe q from the source of bias ESummary(σ).
        Send the query q to the target service τ.
        Retrieve the top-m documents from τ.
        Update ESummary_σ(τ) with the terms and frequencies from the
        top-m documents.
    until Stop probing condition is met.
    return ESummary_σ(τ)
Figure 1: Source-Biased Probing Algorithm

Now we use a simple example to illustrate the power of source-biased probing. For presentation brevity, we consider a simplistic world of only very few terms per service summary; in reality, each service summary would consist of orders of magnitude more terms.

Example: Suppose that our goal is to understand the relevance of Google to PubMed. Suppose ESummary(PubMed) = {arthritis, bacteria, cancer} (where for simplicity we have dropped the term weights from the summary). Again for simplicity, suppose that Google provides access to only three types of information (health, animals, and cars): ASummary(Google) = {arthritis, bacteria, cancer, dog, elephant, frog, garage, helmet, indycar}. An unbiased prober could result in ESummary(Google) = {arthritis, frog, helmet}, whereas a source-biased prober could result in ESummary_PubMed(Google) = {arthritis, bacteria, cancer}.
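To make the loop of Figure 1 concrete, the following minimal Python sketch shows one possible realization; it is an illustration only, not the BASIL prototype (which is implemented in Java). The function query_service is a hypothetical stand-in for the target's request/response interface, probe selection is the weight-based variant and the stop condition is the simple documents-returned rule (both options are discussed below); doc_vector is the bag-of-words helper from the earlier sketch.

def source_biased_probing(source_summary, query_service, top_m=5, max_docs=100):
    # source_summary: ESummary(sigma) as a dict {term: weight}, used as the probe dictionary
    # query_service:  hypothetical callable standing in for the target service tau;
    #                 query_service(term, top_m) returns a list of raw document strings
    biased_summary = {}                       # ESummary_sigma(tau), initially empty
    # weight-based probe selection: probe with the highest-weighted source terms first
    probes = sorted(source_summary, key=source_summary.get, reverse=True)
    docs_seen = 0
    for q in probes:                          # one-term keyword probes
        for doc in query_service(q, top_m):   # top-m matched documents from tau
            for term, freq in doc_vector(doc).items():
                biased_summary[term] = biased_summary.get(term, 0) + freq
            docs_seen += 1
        if docs_seen >= max_docs:             # "documents returned" stop condition
            break
    return biased_summary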
This simple example illustrates the essence\nof the source-biased probing and how it accentuates the commonality\nbetween the two services.\nThe performance and effectiveness of the source-biased probing\nalgorithm depends upon a number of factors, including the\nselection criterion used for choosing source-specific candidate\nprobe terms, and the type of stop condition used to terminate\nthe probing process.\nMechanisms to Select Probe Terms\nThere are several possible ways to select the probes based on\nthe statistics stored with each service summary, including uniform\nrandom selection and selection based on top-weighted\nterms.\nIn general, the selection criterion will recommend a\nquery term drawn from the set N\n\nof all non-zero weighted\nterms in the unbiased source summary\nESummary().\nUniform Random Selection: In this simplest of selection techniques\n, each term that occurs in\nESummary() has an equal\nprobability of being selected, i.e. P rob(selecting term j) =\n1\nN\n\n.\nWeight-Based Selection: Rather than randomly selecting query\nterms, we could instead rely on a ranking of the terms by one of\nthe statistics that are recorded with each service summary. For\nexample, all terms in\nESummary() could be ranked according\nto the weight of each term. Terms would then be selected in\ndescending order of weight. Depending on the type of weight\ncataloged (e.g. servF req, docCount, etc.), several flavors of\nweight-based selection may be considered.\nDifferent Types of Stop Probing Conditions\nThe stop probing condition is the second critical component in\nthe source-biased probing algorithm. We consider four different\ntypes of conditions that might be used in practice:\nNumber of Queries: After some fixed number of query probes\n(M axP robes), end the probing. This condition is agnostic to\nthe number of documents that are examined for each service.\nDocuments Returned: In contrast to the first technique, the\nsecond condition considers not the number of queries, but the\ntotal number of documents (M axDocs) returned by the service\n. Since some queries may return no documents, this stopping\ncondition will require more query probes than the first\nalternative when M axP robes = M axDocs.\nDocument Thresholding: Rather than treating each document\nthe same, this third alternative applies a threshold value to\neach document to determine if it should be counted toward\nM axDocs.\nFor each document, we may calculate the relevance\nof the document to the source of bias\nESummary(). If\nthe document relevance is greater than some threshold value,\nthen the document is counted. Otherwise, the document is\ndiscarded.\nSteady-State: Rather than relying on a count of queries or documents\n, this final stopping condition alternative instead relies\non the estimated summary reaching a steady-state. After each\nprobe, we calculate the difference between the new value of\nESummary\n\n( ) and the old value.\nIf the difference (which\nmay be calculated in a number of ways) is less than some small\nvalue\n, then we consider the summary stable and stop the\nprobing.\nDue to the space limitation, we refer readers to our technical\nreport [5] for detailed experiments on the impact of these two\nparameters.\n3.2\nEvaluating and Ranking Services\nGiven a source and a target service, once we generate the\nsource-biased summary for the target service, we need an efficient\nmechanism to evaluate the source-biased relevance of a\ntarget service with respect to the source. 
Once a set of target services has been evaluated with the source-biased relevance metric, we can rank the target services with respect to the source of bias. We begin by discussing the necessary components of the source-biased metric.

Let σ denote a source service modeled by an estimated summary and τ denote a target service with a σ-biased summary, and let focus_σ(τ) denote the source-biased focus measure. We define focus_σ(τ) to be a measure of the topical focus of the target service τ with respect to the source of bias σ. The focus metric ranges from 0 to 1, with lower values indicating less focus and higher values indicating more focus.

In general, focus is not a symmetric relation. We may describe any two data-intensive web services σ and τ with the focus in terms of σ by focus_σ(τ) or in terms of τ by focus_τ(σ). We propose to use the well-known cosine similarity (or normalized inner product) to approximate the source-biased focus measure. We define the cosine-based focus as follows:

Cosine focus_σ(τ) = ( Σ_{k=1..N} w_k * wσ_k ) / ( sqrt(Σ_{k=1..N} (w_k)^2) * sqrt(Σ_{k=1..N} (wσ_k)^2) )

where w_k is the weight for term k in ESummary(σ) and wσ_k is the σ-biased weight for term k in ESummary_σ(τ). The cosine ranges from 0 to 1, with higher scores indicating a higher degree of similarity, while the cosine between orthogonal vectors is 0, indicating that they are completely dissimilar. The cosine measures the angle between two vectors, regardless of the length of each vector. Intuitively, the cosine-based biased focus is appealing since it reasonably captures the relevance between two data-intensive web services.

Ranking Relevant Services

Given the biased focus measure, we may probe a group of target services to identify the most relevant services to the source of bias. For a single source of bias S_1 from our universe of discourse W, we may evaluate multiple target services S_2, S_3, ..., S_d. For each target service, we may evaluate the appropriate focus measure for each source-target pair (i.e. focus_{S_1}(S_2), focus_{S_1}(S_3), etc.). We may then rank the target services in descending order of their source-biased focus with respect to S_1.

As we will show in our experiments section, source-biased probing results in the identification of relevant services that existing approaches may overlook. We also show that source-biased probing can generate source-biased summaries of good quality using far fewer documents than existing approaches, placing significantly less burden on the target services.

3.3 Identifying Interesting Relationships

The critical third component of the BASIL system consists of the techniques for exploiting and understanding interesting relationships between services using a source-biased lens. By analyzing the nature of the relationships between data-intensive web services, we can provide support for understanding the relative scope and coverage of one service with respect to another. The source-biased probing framework and the biased focus measure provide flexible building blocks for the automated identification of interesting relationships between services, especially since the framework promotes an asymmetric source-biased view for any two services. Our relationship discovery module creates a flexible organization of services, where each service is annotated with a list of relationship sets.
The two typical relationship types we have identified are similarity-based and hierarchy-based.

Similarity-Based Relationships

Given the universe of discourse W = {S_1, S_2, ..., S_D}, we identify three similarity-based relationship sets for a particular service S_i. These relationship sets are defined in terms of threshold values λ_high and λ_low, where 0 ≤ λ_low ≤ λ_high < 1.

λ-equivalent: The first relationship says that if both focus_{S_i}(S_j) > λ_high and focus_{S_j}(S_i) > λ_high hold, then we may conclude that S_i is sufficiently focused on S_j and S_j is sufficiently focused on S_i. Hence, the two services are approximately the same in terms of their data coverage. We call this approximate equality λ-equivalence. It indicates that the equivalence is not absolute but is a function of the parameter λ_high. Formally, λ-equivalent(S_i) = {S_j ∈ W | focus_{S_i}(S_j) > λ_high ∧ focus_{S_j}(S_i) > λ_high}.

λ-complement: If both focus_{S_i}(S_j) < λ_low and focus_{S_j}(S_i) < λ_low hold, then we can conclude that S_i and S_j are sufficiently concerned with different topics, since neither one is very focused on the other. We annotate this approximate complementary nature with the λ prefix. Formally, λ-complement(S_i) = {S_j ∈ W | focus_{S_i}(S_j) < λ_low ∧ focus_{S_j}(S_i) < λ_low}.

λ-overlap: When two services S_i and S_j are neither λ-equivalent nor λ-complementary, we say that the two services λ-overlap. Formally, λ-overlap(S_i) = {S_j ∈ W | S_j ∉ λ-complement(S_i) ∧ S_j ∉ λ-equivalent(S_i)}.

Hierarchical Relationships

In addition to similarity-based relationship sets, we also define hierarchical relationship sets by measuring the relative coverage of target services in W with respect to a particular text service S_i (the source). These hierarchical relationship sets are defined in terms of a parameter λ_diff, where 0 ≤ λ_diff ≤ 1.

λ-superset: If focus_{S_i}(S_j) − focus_{S_j}(S_i) > λ_diff, then a relatively significant portion of S_i is contained in S_j, indicating that S_j has a λ-superset relationship with S_i. We use the λ prefix to indicate that S_j is not a strict superset of S_i, but rather that the relationship is parameterized by λ_diff. Formally, λ-superset(S_i) = {S_j ∈ W | focus_{S_i}(S_j) − focus_{S_j}(S_i) > λ_diff}.

λ-subset: Conversely, if focus_{S_j}(S_i) − focus_{S_i}(S_j) > λ_diff, then a relatively significant portion of S_j is contained in S_i, indicating that S_j has a λ-subset relationship with S_i. Similarly, S_j is not a strict subset of S_i, but rather the relationship is parameterized by λ_diff. Formally, λ-subset(S_i) = {S_j ∈ W | focus_{S_j}(S_i) − focus_{S_i}(S_j) > λ_diff}.

We note that the determination of appropriate λ-values is critical for the correct assignment of services to each relationship set. In our experiments section, we illustrate how these relationship sets may be created, but for now we leave the optimization of λ-values as future work.

Using Relationship Sets

Both similarity-based and hierarchy-based inter-service relationships can be generated automatically and used as metadata annotations for each of the services. These source-biased relevance data provide a flexible foundation for relationship analysis among services.
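As an illustration of how the biased focus and these relationship sets could be computed, the sketch below pairs a cosine-based focus function with a classifier for the λ-relationships. It is a simplified Python illustration (summaries are plain {term: weight} dicts, and the names lam_high, lam_low and lam_diff mirror λ_high, λ_low and λ_diff); it is not BASIL's Java implementation.

import math

def cosine_focus(source_summary, biased_target_summary):
    # focus_sigma(tau): normalized inner product of ESummary(sigma) and ESummary_sigma(tau)
    terms = set(source_summary) | set(biased_target_summary)
    dot = sum(source_summary.get(t, 0.0) * biased_target_summary.get(t, 0.0) for t in terms)
    norm_s = math.sqrt(sum(w * w for w in source_summary.values()))
    norm_t = math.sqrt(sum(w * w for w in biased_target_summary.values()))
    return dot / (norm_s * norm_t) if norm_s and norm_t else 0.0

def relationship_sets(focus_ij, focus_ji, lam_high, lam_low, lam_diff):
    # focus_ij = focus of S_j with respect to S_i, focus_ji = focus of S_i with respect to S_j
    labels = []
    if focus_ij > lam_high and focus_ji > lam_high:
        labels.append("lambda-equivalent")
    elif focus_ij < lam_low and focus_ji < lam_low:
        labels.append("lambda-complement")
    else:
        labels.append("lambda-overlap")
    if focus_ij - focus_ji > lam_diff:
        labels.append("lambda-superset")    # S_j has a lambda-superset relationship with S_i
    elif focus_ji - focus_ij > lam_diff:
        labels.append("lambda-subset")      # S_j has a lambda-subset relationship with S_i
    return labels

Ranking a pool of targets then amounts to sorting them by cosine_focus in descending order before consulting the relationship labels.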
For any service\nS\ni\n, we need only consult the appropriate relationship set. The\nthree similarity-based relationship sets provide the basis for\nanswering queries of the form: \"What other services are most\nlike X? Somewhat like X? Or complementary to X?\". The two\nhierarchical-based sets provide the basis for answering queries\nof the form: \"What other services are more general than X?\nOr more specialized than X?\".\nIn addition, these relationship sets are useful for routing\nservice requests to the appropriate services. For example, a\nuser interested in BLAST data may choose to use both NCBI's\nBLAST service and all of the services that have a -equivalence\nrelationship with NCBI BLAST. Alternatively, a user interested\nin maximizing coverage of multiple topically-distinct services\n, may choose to query both the source service she knows\nabout and any members in the complementary set of the source\nservice. The hierarchical relationship sets are particularly helpful\nin cases where a user may refine a service request to more\nspecialized services, or alternatively, may choose to generalize\nthe scope of the service request by considering services further\nup the hierarchy to get more matching answers.\nFOCAL TERM PROBING\nOne of the critical parameters to the success of BASIL's\nsource-biased probing is the choice of probe terms from the\nsource of bias . We have discussed several selection techniques\nas well as different ways to define stop-probing conditions. In\nthis section we introduce a refinement over these simple selection\ntechniques whereby the source summary is segmented\ninto k groups of co-occurring terms. The main idea is to it-eratively\nselect one term from each of the k groups to probe\nthe target. We call these terms the focal terms of the corresponding\ngroup. When used in conjunction with the general\nsource-biased probing algorithm, we have an enhanced version\ncalled source-biased probing with focal terms. A unique advantage\nof using focal terms is that the biased summaries of target\nservices can be generated in fewer queries with higher quality.\n4.1\nFocal Terms and Focal Term Groups\nLet denote a source service with its unbiased service summary\nESummary\n\n. We denote the set of terms with non-zero\nweight in\nESummary\n\n(i.e. the terms that actually occur in the\nservice ) as T erms(), where T erms() consists of n terms\nt\n1\n, t\n2\n, ..., t\nn\n.\nA focal term group is a subset of terms in the set T erms()\nthat co-occur in the documents of . We denote a focal term\n157\nTable 1: Example Focal Terms for PubMed\n1\ncare, education, family, management, ...\n2\nbrain, gene, protein, nucleotide, ...\n3\nclinical, noteworthy, taxonomy, ...\n4\nexperimental, molecular, therapy, ...\n5\naids, evidence, research, winter, ...\ngroup i as F T erms\ni\n. The main idea behind source-biased probing\nwith focal terms is to partition the set T erms() into k disjoint\nterm groups such that the terms within each term group\nco-occur in documents of more frequently than they do with\nterms from other term groups.\nFormally, we need an algorithm that can find a partition of\nT erms() into k focal term groups: T erms() =\n{F T erms\n1\n,\n. . . , F T erms\ni\n, . . . , F T erms\nk\n|\nk\ni\n=1\nF T erms\ni\n=\n{t\n1\n, ..., t\nn\n} and\nF T erms\ni\nF T erms\nj\n=\n}\nIn Table 1, we show an example of five focal term groups for\na collection of 100 PubMed documents. 
Note that k is intended to be very small since the focal term groups are meant to be very coarse. Given k focal term groups, by selecting a focal term from each term group FTerms_i as a probing query, we hope to retrieve documents that also contain many of the other words in that focal term group. For example, suppose we are using a frequency-based measure for query probe selection from PubMed. The top four query terms may be "brain", "gene", "protein", and "nucleotide". Suppose these four terms tend to co-occur with each other as indicated in Table 1. By sending the first query "brain" to a target service, we could reasonably expect to find the other three terms, since our analysis of the source indicates that these four terms tend to co-occur. A naive source-biased prober would ignore this co-occurrence information and, instead, send the other three queries "gene", "protein", and "nucleotide", even though we might reasonably expect those queries to generate documents similar to those generated by the first query "brain". In essence, we will have used four queries when a single query would have sufficed to adequately explore the term space of the target.

It is important to note that, unlike previous research in grouping terms for query expansion [31, 21] or finding similar terms [24], our goal is not to find close semantic relationships between terms, but rather to find very coarse co-occurrence associations among terms to support a more efficient and effective biased service summary estimation. For example, though we may discover that "brain" and "protein" tend to co-occur, we do not claim that there is a close semantic relationship between the two terms.

4.2 Finding Focal Terms

In this section, we discuss how we may adapt a popular clustering technique to the problem of focal term discovery. Recall that in Section 2, we view a service S_i as a set of documents, each of which is described by a vector of terms and weights. We now invert our view of a service using the same set of information: we consider a service S_i as a collection of terms, each of which is described by a vector of the documents in which the term occurs and a weight describing the occurrence frequency of the term in the corresponding document. Hence, we have Terms(S_i) = {term_1, term_2, ..., term_N}, and for the N terms in the service, each term_j (1 ≤ j ≤ N) is a vector of documents and weights:

term_j = {(doc_1, w_j1), (doc_2, w_j2), ..., (doc_M, w_jM)}

We can define a segmentation technique for finding focal term groups by clustering the set Terms(S_i) into k clusters. Given the term vectors and the similarity function, a number of clustering algorithms can be applied to partition the set Terms(S_i) of N terms into k clusters. We choose Simple K-Means since it is conceptually simple and computationally efficient. The algorithm starts by generating k random cluster centers. Each term is assigned to the cluster with the most similar (or least distant) center. The similarity is computed based on the closeness of the term and each of the cluster centers. Then the algorithm refines the k cluster centers based on the centroid of each cluster. Terms are then re-assigned to the cluster with the most similar center. The cycle of calculating centroids and assigning terms in Terms(S_i) to k clusters repeats until the cluster centroids stabilize. Let C denote a cluster in the form of a set of terms in the cluster. The centroid of cluster C is:

centroid_C = { (doc_1, (1/|C|) * Σ_{j in C} w_j1),
               (doc_2, (1/|C|) * Σ_{j in C} w_j2),
               ...,
               (doc_M, (1/|C|) * Σ_{j in C} w_jM) }

where w_jl is the weight of term j in document l, and the formula (1/|C|) * Σ_{j in C} w_jl denotes the average weight of document l over the terms in cluster C. A sketch of the K-Means term clustering based on the term vectors of a service is provided in Figure 2.

FocalTerms(Num Clusters k, Input Vectors D)
    Let D = {d_1, ..., d_n} denote the set of n term vectors
    Let M denote the total number of documents in D
    Let d_j = <(doc_1, w_j1), ..., (doc_M, w_jM)> denote a term vector of M
        elements, where w_jl is the TFIDF weight of doc_l in term j (l = 1, ..., M)
    Let C = {C_1, ..., C_k} denote a clustering of D into k clusters
    Let mu_i denote the center of cluster C_i
    foreach cluster C_i
        Randomly pick a term vector, say d_j, from D
        Initialize the cluster center mu_i = d_j
    repeat
        foreach input term vector d_j in D
            foreach cluster C_i in C (i = 1, ..., k)
                compute delta_i = sim(d_j, mu_i)
            Assign d_j to the cluster C_h whose center mu_h is most similar
                to d_j (delta_h is the largest among delta_1, ..., delta_k)
        // refine cluster centers using centroids
        foreach cluster C_i in C
            foreach document doc_l (l = 1, ..., M)
                cw_il <- (1/|C_i|) * Σ_{d_j in C_i} w_jl
            mu_i <- <(doc_1, cw_i1), ..., (doc_M, cw_iM)>
    until cluster centers no longer change
    return C
Figure 2: Focal Term Clustering Algorithm

The similarity function used in Figure 2 can be defined using a number of functions. In this paper, we use the cosine similarity function. Given a set of N terms and a set of M documents, where w_jl denotes the weight of term j in document l (1 ≤ j ≤ N, 1 ≤ l ≤ M), the cosine function prescribes:

sim(term_i, term_j) = ( Σ_{l=1..M} w_il * w_jl ) / ( sqrt(Σ_{l=1..M} (w_il)^2) * sqrt(Σ_{l=1..M} (w_jl)^2) )

In Section 5 we report initial experiments on the effectiveness of using focal terms to optimize the source-biased probing algorithm, showing that the source-biased algorithm with focal terms results in more efficient probing for varying numbers of focal term groups.

4.3 Selecting Focal-Based Probes

Once the k focal term groups have been constructed for a source, the remaining problem is how to select the best terms for probing a target service. We propose a simple round-robin selection technique whereby a single term is selected from each focal term group in turn. Once a single term has been selected from each group, the cycle repeats by selecting a second term from each group, a third term, and so on.

Given this basic strategy, we may use a number of techniques for determining the order by which to select terms from the k groups and for selecting probe terms from each focal term group. One way to determine the order of focal term groups is based on the size of each group: we begin with the group with the most terms and end each cycle with the group that has the smallest number of terms. For each focal term group, we may decide which term to select in each cycle by using one of the selection criteria discussed in Section 3.

EXPERIMENTS

In this section, we describe four sets of experiments designed to evaluate the benefits and costs of BASIL.
The first set intends\nto show the effectiveness of our source-biased probing algorithm\nand compare its performance with query-biased probing\nand unbiased probing. The second set evaluates the biased\nfocus measure as an effective tool for ranking services.\nThe third set shows the efficiency of the biased focus measure\nin identifying interesting inter-service relationships. The\nfourth set evaluates the efficacy of source-biased probing with\nfocal terms by comparing the basic source-biased probing versus\nsource-biased probing with varying number of groups of\nfocal terms. Our experiments show that focal term probing\ncan achieve about ten percent performance improvement over\nthe basic algorithm for source-biased probing.\nSince there are no large data-intensive web service collections\nfor experimentation, we rely on: (1) a large collection of\nnewsgroups designed to emulate the diversity and scope of real-world\ndata-intensive web services; and (2) a modest collection\nof real-world web sources. Since the services in the web collection\nchange frequently and are beyond our control, and in an\neffort not to overload any one site, we relied on the newsgroup\ndataset for rigorous experimental validation.\nNewsgroup Collection: We collected articles from 1,000 randomly\nselected usenet newsgroups over the period June to July\n2003. We eliminated overly small newsgroups containing fewer\nthan 100 articles, heavily spammed newsgroups, and newsgroups\nwith primarily binary data. After filtering out these\ngroups, we were left with 590 single topic newsgroups, ranging\nin size from 100 to 16,000 articles. In an effort to match\nthe heterogeneity and scope inherent in many real-world services\n, we constructed 135 additional groups of mixed topics by\nrandomly selecting articles from anywhere from 4 to 80 single\ntopic newsgroups, and 55 aggregate topic newsgroups by\ncombining articles from related newsgroups (e.g. by selecting\nrandom documents from all the subgroups in comp.unix.* into\na single aggregate group). In total, the newsgroup collection\nconsists of over 2.5GB worth of articles in 780 groups.\nWeb Collection: For the second collection, we randomly selected\n50 sites from the ProFusion [20] directory of web sites\nthat support queries, in addition to Google and PubMed. We\nqueried each site with a randomized set of single-word probes\ndrawn from the standard Unix dictionary, and collected a maximum\nof 50 documents per site.\nProbing Framework: We built a probing engine in Java\n1.4 for use in all of our experiments. For each group in both\n0.00\n0.10\n0.20\n0.30\n0.40\n0\n20\n40\n60\n80\n100\nDocuments Examined\nAverage Source Similarity\nSource Bias\nQuery Bias 2\nQuery Bias 1\nNo Bias\nFigure 3: Probing Efficiency for 100 Pairs\ndatasets, we constructed the estimated service summary based\non the overall term frequency of each term (servF req). We\neliminated a set of common stopwords (e.g. \"a\", \"the\", and\nso on) as well as collection-specific stopwords (e.g. \"wrote\",\n\"said\", and so on for the newsgroup collection).\n5.1\nEffectiveness of Source-Biased Probing\nThe goal of our first set of experiments is to compare source-biased\nprobing with existing probing techniques and to evaluate\nthe efficiency and quality of source-biased probing. 
The source-biased\nprobing show significant gain in terms of the percentage\nof documents probed that are similar to the source.\nWe first evaluate the efficiency of source-biased probing in\nterms of the number of documents required to be extracted\nfrom each target and the percentage of the documents extracted\nthat are similar to the source. The higher percentage of documents\nsimilar (relevant) to the source, the more effective a\nprobing algorithm is.\nWe selected 100 random source-target pairs from the newsgroup\ncollection.\nFor each pair, we evaluated four probing\ntechniques a source-biased prober (Source Bias) that selects\nprobe terms from the source summary in decreasing order of\nservF req; a query-biased prober (Query Bias 1 ) that randomly\nselects probes from the standard Unix dictionary of English\nterms; a query-biased prober (Query Bias 2 ) that selects its\ninitial probe from the Unix dictionary, but once the first document\nhas been retrieved from the target, all subsequent probes\nare selected based on the estimated servF req of the target's\nservice summary; and an unbiased prober (No Bias) that selects\ndocuments at random from each target. For each pair, we\nevaluated each of the four probing techniques for up to 100 total\ndocuments extracted from each target, collecting a maximum\nof 5 documents per probe query from each target.\nIn Figure 3, we show the average percentage of documents\nsimilar (relevant) to the source (Cosine f ocus\n\n( )) over all 100\nsource-target pairs as a function of the number of documents\nexamined in each target.\nThe percentage of the documents\nextracted that are similar to the source (biased f ocus measure)\nindicates the quality of document being extracted from each\ntarget. We see that the source-biased probing outperforms the\nNo Bias prober and the Query Bias 1 prober by about 10% and\noutperforms the Query Bias 2 prober by about 15%. Clearly,\nthe higher focus value means the higher success for a probing\nalgorithm.\nFigure 4 shows another experiment where we also identified,\nin our set of 100 source-target pairs, all of those pairs that were\na priori similar (e.g. mac.apps and mac.system) or dissimilar\n(e.g. textiles.sewing and perl.misc). We show the relative\nperformance of the Source Bias, Query Bias 1, and No Bias\n159\n0.0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1.0\n0\n20\n40\n60\n80\n100\nDocuments Examined\nAverage Source Similarity\nSource Bias\nQuery Bias\nNo Bias\nSimilar\nDissimilar\nFigure 4: Probing Efficiency Breakdown\n0.00\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\n0.70\n0.80\n0\n20\n40\n60\n80\n100\nDocuments Examined\n% Docs Similar to Source\nSource Bias\nQuery Bias 1\nQuery Bias 2\nNo Bias\nFigure 5: Average Document Quality for 100 Pairs\nprobers against these similar and dissimilar pairs. The source-biased\nprober requires fewer documents to achieve the same\nrelevance level as the other probers for all 100 source-target\npairs and for the similar and dissimilar pairs. For example, for\nthe similar source-target pairs in Figure 4, the source-biased\nprober identifies target documents with 0.8 focus after extracting\nfewer than 30 documents. In contrast, the other probers\nrequire between two and three times as many documents to\nachieve the same quality.\nThe third experiment is shown in Figure 5. 
Here we want to show how quickly a source-biased prober can home in on the most source-relevant documents in a target by plotting the percentage of the documents extracted that are similar (relevant) to the source for each of the four probers. As shown in Figure 5, the source-biased prober performs nearly two times better than the other probers: over 70% of the first 10 documents extracted from a target are source-relevant, whereas the other probers identify between 25% and 45% source-relevant documents. As more documents are examined for each target, the source-biased prober continues to maintain an advantage over the other probers.

5.2 Ranking Effectiveness with Biased Focus

The second set of experiments evaluates how well source-biased probing compares with the alternative techniques when it comes to evaluating and ranking a collection of target services. We use PubMed as the source and examine all 50 web sites as targets. We computed the biased focus score using Cosine focus_σ(τ) and then ranked all targets relative to PubMed using the biased focus measure. Since the web sites do not support random document selection, we are unable to evaluate an unbiased prober, so this experiment only compares the source-biased prober with query-biased prober 1. Table 2 shows the top-10 ranked sites relative to PubMed. In the Source Bias column we also list in parentheses the rank of each site assigned by the Query Bias prober.

Table 2: Identifying Web Sources Relevant to PubMed
Rank   Query Bias              Source Bias
1      AMA                     Open Directory (13)
2      WebMD                   Google (27)
3      Linux Journal           About (11)
4      HealthAtoZ              WebMD (2)
5      DevGuru                 AMA (1)
6      FamilyTree Magazine     HealthAtoZ (4)
7      Mayo Clinic             Monster (22)
8      Novell Support          Mayo Clinic (7)
9      Random House            Random House (9)
10     January Magazine        BBC News (12)

[Figure 6: Precision for 10 Source Newsgroups. Plot of relevance precision for the No Bias, Query Bias and Source Bias probers over the sources comp.sys.mac.system, comp.unix.misc, gnu.emacs.help, rec.aviation.owning, rec.games.chess.misc, rec.org.sca, rec.pets.cats.misc, sci.physics.research, soc.culture.hawaii and talk.religion.misc.]

The query-biased prober identifies several health-related sites in the web collection, but it mistakenly lists Linux Journal ahead of HealthAtoZ, as well as listing a web development site (DevGuru) and a genealogical magazine (FamilyTree) ahead of the health-related Mayo Clinic. Overall, only four of the top-ten sites could be considered topically relevant to PubMed. In contrast, the source-biased prober's top-eight sites are all relevant to PubMed. In addition to the health-related sites, the source-biased prober also identifies three general sites that offer access to medical literature (Open Directory, Google, and About) that are ranked significantly lower by the query-biased prober. Interestingly, the source-biased prober identifies a fair number of scientific and bioinformatics-related job descriptions in the Monster jobs site, resulting in its high relevance (similarity) score to PubMed (a high biased focus value).

To validate the quality of source-biased service evaluation, we next randomly selected 10 sources from the newsgroup collection to evaluate against the entire set of 780 newsgroups. We compared the three probers Source Bias, Query Bias 1, and No Bias.
For each of the 10 sources, we measured relevance\n(similarity) precision as the percentage of the top-10 ranked\ntarget services that are considered relevant to the source using\nCosine f ocus\n\n( ). Relevance judgments were determined by\nthe consensus opinion of three volunteers. Figure 6 shows the\nprecision for the three probers after extracting 40 documents\nper target service. Source Bias results in the highest precision\nin nine of ten cases, tying with the next best prober in only\ntwo cases. For the lone failure, Source Bias does succeed after\nextracting 80 documents, indicating that the mistake may\nbe attributable to the error inherent in probing very few documents\n. In general, the average precision of the source-biased\nprober is nearly double that of the next best prober.\nIn Figure 7 we show the average precision for the ten sources\nwhen increasingly more documents are extracted per target.\nThe source-biased approach displays higher precision than both\n160\n0.00\n0.10\n0.20\n0.30\n0.40\n0.50\n20\n40\n60\n80\n100\nDocuments Examined\nRelevance Precision\nNo Bias\nQuery Bias\nSource Bias\nFigure 7: Average Relevance Precision\nthe query-biased and unbiased probers in all cases considered,\nespecially when based on very few documents.\n5.3\nIdentifying Interesting Relationships\nThe third set of experiments is designed to evaluate the effectiveness\nof using the source-biased framework to support the\nidentification of interesting inter-service relationships that the\nalternative schemes do not. Unlike the query-biased and unbiased\nprobers, the asymmetric nature of source-biased probing\nallows us to characterize the nature of the relationship beyond\nthe single relevance ranking using biased focus measure.\nWe first illustrate relationship sets for PubMed over the web\ncollection. In Table 3 we show four classes of relationship sets\nfor\nhigh\n= 0.15,\nlow\n= 0.05, and\ndif f\n= 0.10 using the\nsource-biased prober described above. Again we note, that our\ninterest here is to illustrate the power of the -formalation;\nwe leave the optimization of -values to future work. In contrast\nto the simple relevance ranking in Table 2, we see how\nthe source-biased framework can differentiate between the very\nsimilar services (the -equivalent sites) and the more general\nservices (the -superset sites) relative to PubMed. In addition,\nwe can identify sites with some common data (the -overlap\nsites) and sites concerned with significantly different topics (the\n-complement sites).\nSimilarly, we show in Table 4 several interesting relationships\nderived from the newsgroup collection for\nhigh\n= 0.70,\nlow\n=\n0.40, and\ndif f\n= 0.30 using the Source Bias prober discussed\nbefore. Again, by relying on BASIL's source-biased analysis we\nmay characterize relationships sets for each source.\nAs an example, we identify sci.physics.particle as a member\nof the -subset relationship set of the mixed topic newsgroup\nmixed11, which consists of 25% physics-related articles\nin addition to articles on backgammon, juggling, and telecommunications\n. 
Interestingly, we can see that there are several\noverlapping relationships between newsgroups in related but\nslightly different fields (e.g.\nvolleyball and cricket).\nFinally\n, we also identify several unrelated newsgroups, including\ncomp.sys.mac.system relative to misc.immigration.usa.\n5.4\nProbing with Focal Terms\nIn our final set of experiments, we consider the impact of focal\nterm probing on the success rate of source-biased probing. We\nevaluate four flavors of focal term probing with 2, 3, 5, and 10\nfocal term groups from which to draw source-biased probes. In\nour initial experiments with focal term probing, we discovered\nthat there was little impact on either the efficiency of probing\nor the quality of target service evaluation when considering\nsources from the single-topic newsgroup collection.\n[Due to\nspace limitations, we omit these results here].\n0.2\n0.3\n0.4\n0.5\n0.6\n0\n20\n40\n60\n80\n100\nDocuments Examined\nAverage Source Similarity\nOriginal\nFocal - 2\nFocal - 3\nFocal - 5\nFocal - 10\nFigure 8: Impact of Focal Term Probing\nIn contrast, we discovered that focal term probing had a\nsignificant impact when used on mixed topic newsgroups, in\nwhich there are documents from several unrelated single topic\nnewsgroups. In Figure 8, we show the probing efficiency for\nthe four focal term source-biased probers relative to the best\nbasic source-biased prober for 10 source-target pairs from the\nnewsgroup collection. In each case, the sources were drawn\nexclusively from the mixed topic newsgroups.\nAll of the focal term techniques resulted in more efficient\nprobing versus basic source-biased probing and only minor differences\nin ranking precision and relationship set generation\nquality, indicating that focal term probing can be advantageous\nin certain circumstances. Our intuition is that identifying focal\nterms is considerably more important in cases in which there\nare clear distinctions in term distributions as would be reflected\nin the mixed topic newsgroups in which several groups of documents\nare concerned with different topics.\nRELATED WORK\nResearchers have previously explored different aspects of the\nservice discovery problem, ranging from discovery in a federated\nenvironment [25], to identifying services that meet certain\nquality-of-service guarantees [13], to evaluating services based\non a distributed reputation metric [26], to other quality metrics\nlike in [32]. In contrast, we focus on the data relationships\nbetween services to power efficient discovery and ranking.\nOther researchers have previously studied the problem of re-peatedly\nquerying an unknown database in an effort to generate\na summary of the database internals [11, 3, 30, 4, 8, 27, 14, 6].\nThe main purpose of these techniques is to generate a representative\ncontent summary of the underlying database. Querying\nmethods suggested include the use of random queries, queries\nlearned from a classifier, and queries based on a feedback cycle\nbetween the query and the response.\nMore recently, Gravano et al. [12] have introduced an extension\nto the Callan-style probing technique that relies on a\nlearned set of queries for database classification. Their probing\nmethod is effective for classifying web sites into a pre-determined\nYahoo!-style hierarchy, but requires the potentially\nburdensome and inflexible task of labelling training data for\nlearning the classifier probes in the first place. 
Additionally, if\nnew categories are added or old categories removed from the hierarchy\n, new probes must be learned and each source re-probed.\nPrevious research on grouping terms (as in our source-biased\nprobing with focal terms) has focussed on finding terms that are\neffective for query-expansion [31, 21] or finding similar terms\n[24]. Our focal term formulation is similar to that used in [21],\nthough their goal is to find close semantic relationships between\nterms, unlike our coarse-grained groupings.\n161\nTable 3: Source-Biased Analysis: Identifying Relationships Relative to PubMed\nService (S)\nURL\nDescription\nfocus\nP M\n(\nS)\nfocus\nS\n(\nP M)\nRelationship\nWebMD\nwww.webmd.com\nHealth/Medical\n0.23\n0.18\n-equivalent\nAMA\nwww.ama-assn.org\nHealth/Medical\n0.19\n0.16\n-equivalent\nHealthAtoZ\nwww.healthatoz.com\nHealth/Medical\n0.18\n0.16\n-equivalent\nOpen Directory\ndmoz.org\nWeb Directory\n0.44\n0.08\n-superset\nGoogle\nwww.google.com\nWeb Search Engine\n0.37\n0.10\n-superset\nAbout\nwww.about.com\nWeb Channels\n0.25\n0.08\n-superset\nMonster\nwww.monster.com\nJobs\n0.14\n0.08\n-overlap\nMayo Clinic\nwww.mayoclinic.com\nHealth/Medical\n0.12\n0.11\n-overlap\nSilicon Investor\nwww.siliconinvestor.com\nFinance\n0.03\n0.04\n-complement\nUsenet Recipes\nrecipes2.alastra.com\nRecipes\n0.02\n0.03\n-complement\nTable 4: Source-Biased Analysis: Identifying Relationships in the Newsgroup Collection\nA\nB\nfocus\nA\n(\nB)\nfocus\nB\n(\nA)\nRelationship\ncomp.sys.mac.apps\ncomp.sys.mac.system\n0.86\n0.76\n-equivalent\ncomp.sys.mac.system\ncomp.sys.mac.advocacy\n0.79\n0.74\n-equivalent\nsci.physics.particle\nsci.physics\n0.86\n0.80\n-equivalent\nsci.physics.particle\nmixed45\n0.86\n0.62\n-subset/superset\ncomp.unix.misc\nmixed120\n0.91\n0.56\n-subset/superset\nrec.sport.volleyball\nrec.sport.cricket\n0.47\n0.46\n-overlap\nrec.games.go\nrec.games.chess.misc\n0.50\n0.53\n-overlap\nrec.crafts.textiles.sewing\ncomp.lang.perl.misc\n0.35\n0.32\n-complement\ncomp.sys.mac.system\nmisc.immigration.usa\n0.23\n0.36\n-complement\nCONCLUSIONS\nIn this paper, we have presented a novel web service discovery\nand ranking prototype called BASIL that supports a personalized\nview of data-intensive web services through source-biased\nfocus. BASIL supports personalized discovery requests\nand relevance reasoning through efficient source-biased probing\nand source-biased relevance metrics. Concretely, we have\nshown that BASIL allows us to determine in very few interactions\nwhether a target service is relevant to the source service\nby probing the target with very precise probes. The biased\nfocus measure allows us to evaluate and rank the services discovered\nand to identify interesting types of source-biased relationships\nfor a collection of services. Additionally, we have\nintroduced source-biased probing with focal terms as a performance\noptimization to further improve the effectiveness of the\nbasic source-biased algorithm.\nREFERENCES\n[1] Amazon.com. Amazon.com Web Services.\nhttp://www.amazon.com/gp/aws/landing.html, 2004.\n[2] Ariba. http://www.ariba.com, 2003.\n[3] J. Callan, M. Connell, and A. Du. Automatic discovery\nof language models for text databases. In SIGMOD '99.\n[4] J. P. Callan and M. E. Connell. Query-based sampling of\ntext databases. Information Systems, 19(2):97130, 2001.\n[5] J. Caverlee, L. Liu, and D. Rocco. Discovering and\nranking web services with BASIL: A personalized\napproach with biased focus. Technical report, GIT, 2004.\n[6] W. W. Cohen and Y. Singer. 
Learning to query the web.\nIn AAAI Workshop on Internet-Based Information\nSystems. 1996.\n[7] Google. Google Web APIs FAQ.\nhttp://www.google.com/apis/, 2003.\n[8] D. Hawking and P. Thistlewaite. Methods for\ninformation server selection. ACM Transactions on\nInformation Systems, 17(1):4076, 1999.\n[9] IBM. Web Services for Life Sciences.\nhttp://www.alphaworks.ibm.com/tech/ws4LS/, 2003.\n[10] IBM. IBM UDDI Business Registry.\nwww.ibm.com/services/uddi/, 2004.\n[11] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe,\ncount, and classify: Categorizing hidden-web databases.\nIn SIGMOD '01.\n[12] P. G. Ipeirotis, L. Gravano, and M. Sahami. QProber: A\nsystem for automatic classification of hidden-web\ndatabases. ACM TOIS, 21(1):141, 2003.\n[13] Y. Liu, A. H. Ngu, and L. Zeng. QoScomputation and\npolicing in dynamic web service selection. In WWW '04.\n[14] W. Meng, C. T. Yu, and K.-L. Liu. Detection of\nheterogeneities in a multiple text database environment.\nIn CoopIS '99.\n[15] Microsoft. .NET. http://www.microsoft.com/net/, 2003.\n[16] Microsoft. Microsoft UDDI Business Registry Node.\nhttp://uddi.microsoft.com/, 2004.\n[17] National Center for Biotechnology Information. NCBI\nBLAST. http://www.ncbi.nih.gov/BLAST/, 2004.\n[18] M. P. Papazoglou. Service-oriented computing: Concepts,\ncharacteristics and directions. In WISE '03.\n[19] M. F. Porter. An algorithm for suffix stripping. Program,\n14(3):130137, 1980.\n[20] ProFusion. http://www.profusion.com/, 2004.\n[21] Y. Qiu and H.-P. Frei. Concept-based query expansion.\nIn SIGIR '93, pages 160169, Pittsburgh, US.\n[22] G. Salton and C. Buckley. Term-weighting approaches in\nautomatic text retrieval. In Readings in Information\nRetrieval. Morgan Kauffman, San Francisco, CA, 1997.\n[23] G. Salton, A. Wong, and C. Yang. A vector space model\nfor automatic indexing. CACM, 18(11):613620, 1971.\n[24] H. Schutze and J. O. Pedersen. A cooccurrence-based\nthesaurus and two applications to information retrieval.\nInformation Processing and Management, 33(3), 1997.\n[25] K. Sivashanmugam, K. Verma, and A. Sheth. Discovery\nof web services in a federated registry environment. In\nICWS '04.\n[26] R. M. Sreenath and M. P. Singh. Agent-based service\nselection. Journal on Web Semantics (JWS), 2003.\n[27] A. Sugiura and O. Etzioni. Query routing for web search\nengines: Architecture and experiments. In WWW '00.\n[28] UDDI. http://www.uddi.org/, 2004.\n[29] W3C Working Group. Web Services Architecture.\nhttp://www.w3.org/TR/2004/NOTE-ws-arch-20040211/,\nFebruary 2004.\n[30] W. Wang, W. Meng, and C. Yu. Concept hierarchy based\ntext database categorization in a metasearch engine\nenvironment. In WISE '00.\n[31] J. Xu and W. B. Croft. Query expansion using local and\nglobal document analysis. In SIGIR '96, pages 411.\n[32] L. Zeng, , B. Benatallah, M. Dumas, J. Kalagnanam, and\nQ. Z. Sheng. Quality driven web services composition. In\nWWW '03.\n162\n", "keywords": "focal terms;biased discovery;ranking;data-intensive web services;data-intensive services;query-biased probing;web service discovery;source-biased probing"} {"name": "72", "title": "Distance Measures for MPEG-7-based Retrieval", "abstract": "In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. 
Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7-standard are among the best but that other measures perform even better.", "fulltext": "INTRODUCTION\nThe MPEG-7 standard defines among others a set of\ndescriptors for visual media. Each descriptor consists of a feature\nextraction mechanism, a description (in binary and XML format)\nand guidelines that define how to apply the descriptor on different\nkinds of media (e.g. on temporal media). The MPEG-7 descriptors\nhave been carefully designed to meet partially complementary\nrequirements of different application domains: archival, browsing,\nretrieval, etc. [9]. In the following, we will exclusively deal with\nthe visual MPEG-7 descriptors in the context of media retrieval.\nThe visual MPEG-7 descriptors fall in five groups: colour,\ntexture, shape, motion and others (e.g. face description) and sum\nup to 16 basic descriptors. For retrieval applications, a rule for\neach descriptor is mandatory that defines how to measure the\nsimilarity of two descriptions. Common rules are distance\nfunctions, like the Euclidean distance and the Mahalanobis\ndistance. Unfortunately, the MPEG-7 standard does not include\ndistance measures in the normative part, because it was not\ndesigned to be (and should not exclusively understood to be)\nretrieval-specific. However, the MPEG-7 authors give\nrecommendations, which distance measure to use on a particular\ndescriptor. These recommendations are based on accurate\nknowledge of the descriptors' behaviour and the description\nstructures.\nIn the present study a large number of successful distance\nmeasures from different areas (statistics, psychology, medicine,\nsocial and economic sciences, etc.) were implemented and applied\non MPEG-7 data vectors to verify whether or not the\nrecommended MPEG-7 distance measures are really the best for\nany reasonable class of media objects. From the MPEG-7 tests\nand the recommendations it does not become clear, how many and\nwhich distance measures have been tested on the visual\ndescriptors and the MPEG-7 test datasets. The hypothesis is that\nanalytically derived distance measures may be good in general but\nonly a quantitative analysis is capable to identify the best distance\nmeasure for a specific feature extraction method.\nThe paper is organised as follows. Section 2 gives a minimum of\nbackground information on the MPEG-7 descriptors and distance\nmeasurement in visual information retrieval (VIR, see [3], [16]).\nSection 3 gives an overview over the implemented distance\nmeasures. Section 4 describes the test setup, including the test\ndata and the implemented evaluation methods. Finally, Section 5\npresents the results per descriptor and over all descriptors.\n\nBACKGROUND\nThe visual part of the MPEG-7 standard defines several\ndescriptors. Not all of them are really descriptors in the sense that\nthey extract properties from visual media. Some of them are just\nstructures for descriptor aggregation or localisation. 
The basic\ndescriptors are Color Layout, Color Structure, Dominant Color,\nScalable Color, Edge Histogram, Homogeneous Texture, Texture\nBrowsing, Region-based Shape, Contour-based Shape, Camera\nMotion, Parametric Motion and Motion Activity.\nOther descriptors are based on low-level descriptors or semantic\ninformation: Group-of-Frames/Group-of-Pictures Color (based on\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nMIR'03, November 7, 2003, Berkeley, California, USA.\nCopyright 2003 ACM 1-58113-778-8/03/00011...$5.00.\n130\nScalable Color), Shape 3D (based on 3D mesh information),\nMotion Trajectory (based on object segmentation) and Face\nRecognition (based on face extraction).\nDescriptors for spatiotemporal aggregation and localisation are:\nSpatial 2D Coordinates, Grid Layout, Region Locator (spatial),\nTime Series, Temporal Interpolation (temporal) and\nSpatioTemporal Locator (combined). Finally, other structures\nexist for colour spaces, colour quantisation and multiple 2D views\nof 3D objects.\nThese additional structures allow combining the basic descriptors\nin multiple ways and on different levels. But they do not change\nthe characteristics of the extracted information. Consequently,\nstructures for aggregation and localisation were not considered in\nthe work described in this paper.\n2.2 Similarity measurement on visual data\nGenerally, similarity measurement on visual information aims at\nimitating human visual similarity perception. Unfortunately,\nhuman perception is much more complex than any of the existing\nsimilarity models (it includes perception, recognition and\nsubjectivity).\nThe common approach in visual information retrieval is\nmeasuring dis-similarity as distance. Both, query object and\ncandidate object are represented by their corresponding feature\nvectors. The distance between these objects is measured by\ncomputing the distance between the two vectors. Consequently,\nthe process is independent of the employed querying paradigm\n(e.g. query by example). The query object may be natural (e.g. a\nreal object) or artificial (e.g. properties of a group of objects).\nGoal of the measurement process is to express a relationship\nbetween the two objects by their distance. Iteration for multiple\ncandidates allows then to define a partial order over the\ncandidates and to address those in a (to be defined)\nneighbourhood being similar to the query object. At this point, it\nhas to be mentioned that in a multi-descriptor environment\nespecially in MPEG-7 we are only half way towards a statement\non similarity. If multiple descriptors are used (e.g. a descriptor\nscheme), a rule has to be defined how to combine all distances to\na global value for each object. Still, distance measurement is the\nmost important first step in similarity measurement.\nObviously, the main task of good distance measures is to\nreorganise descriptor space in a way that media objects with the\nhighest similarity are nearest to the query object. 
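To make this candidate-ordering process concrete, the following small Python sketch ranks candidate media objects by their distance to a query descriptor. It is purely illustrative: the Euclidean distance is used only as a familiar example of a quantitative measure, not as the measure recommended for any particular MPEG-7 descriptor, and the descriptor vectors are assumed to be plain lists of numbers.

import math

def euclidean(x, y):
    # a familiar quantitative (geometric) distance between two descriptor vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def rank_candidates(query_desc, candidate_descs, dist=euclidean):
    # returns candidate indices ordered by increasing distance to the query;
    # the query object itself sits at distance 0, and similar candidates
    # should appear at the top of this partial order
    return sorted(range(len(candidate_descs)),
                  key=lambda i: dist(query_desc, candidate_descs[i]))

Iterating the distance function over all candidates yields exactly the partial order described above.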
If distance is\ndefined minimal, the query object is always in the origin of\ndistance space and similar candidates should form clusters around\nthe origin that are as large as possible. Consequently, many well\nknown distance measures are based on geometric assumptions of\ndescriptor space (e.g. Euclidean distance is based on the metric\naxioms). Unfortunately, these measures do not fit ideally with\nhuman similarity perception (e.g. due to human subjectivity). To\novercome this shortage, researchers from different areas have\ndeveloped alternative models that are mostly predicate-based\n(descriptors are assumed to contain just binary elements, e.g.\nTversky's Feature Contrast Model [17]) and fit better with human\nperception. In the following distance measures of both groups of\napproaches will be considered.\n\nDISTANCE MEASURES\nThe distance measures used in this work have been collected from\nvarious areas (Subsection 3.1). Because they work on differently\nquantised data, Subsection 3.2 sketches a model for unification on\nthe basis of quantitative descriptions. Finally, Subsection 3.3\nintroduces the distance measures as well as their origin and the\nidea they implement.\n3.1 Sources\nDistance measurement is used in many research areas such as\npsychology, sociology (e.g. comparing test results), medicine (e.g.\ncomparing parameters of test persons), economics (e.g. comparing\nbalance sheet ratios), etc. Naturally, the character of data available\nin these areas differs significantly. Essentially, there are two\nextreme cases of data vectors (and distance measures): predicate-based\n(all vector elements are binary, e.g. {0, 1}) and quantitative\n(all vector elements are continuous, e.g. [0, 1]).\nPredicates express the existence of properties and represent high-level\ninformation while quantitative values can be used to measure\nand mostly represent low-level information. Predicates are often\nemployed in psychology, sociology and other human-related\nsciences and most predicate-based distance measures were\ntherefore developed in these areas. Descriptions in visual\ninformation retrieval are nearly ever (if they do not integrate\nsemantic information) quantitative. Consequently, mostly\nquantitative distance measures are used in visual information\nretrieval.\nThe goal of this work is to compare the MPEG-7 distance\nmeasures with the most powerful distance measures developed in\nother areas. Since MPEG-7 descriptions are purely quantitative\nbut some of the most sophisticated distance measures are defined\nexclusively on predicates, a model is mandatory that allows the\napplication of predicate-based distance measures on quantitative\ndata. The model developed for this purpose is presented in the\nnext section.\n3.2 Quantisation model\nThe goal of the quantisation model is to redefine the set operators\nthat are usually used in predicate-based distance measures on\ncontinuous data. The first in visual information retrieval to follow\nthis approach were Santini and Jain, who tried to apply Tversky's\nFeature Contrast Model [17] to content-based image retrieval\n[12], [13]. They interpreted continuous data as fuzzy predicates\nand used fuzzy set operators. Unfortunately, their model suffered\nfrom several shortcomings they described in [12], [13] (for\nexample, the quantitative model worked only for one specific\nversion of the original predicate-based measure).\nThe main idea of the presented quantisation model is that set\noperators are replaced by statistical functions. 
In [5] the authors could show that this interpretation of set operators is reasonable. The model offers a solution for the descriptors considered in the evaluation. It is not specific to one distance measure, but can be applied to any predicate-based measure. Below, it will be shown that the model does not only work for predicate data but for quantitative data as well. Each measure implementing the model can be used as a substitute for the original predicate-based measure.
Generally, binary properties of two objects (e.g. media objects) can exist in both objects (denoted as a), in just one (b, c) or in none of them (d). The operators needed for these relationships are UNION, MINUS and NOT. In the quantisation model they are replaced as follows (see [5] for further details):

a(X_i, X_j) = Σ_k s_k with s_k = (x_ik + x_jk)/2 if (x_ik + x_jk)/2 >= M - ε1, else s_k = 0
b(X_i, X_j) = Σ_k s_k with s_k = x_ik - x_jk if x_ik - x_jk >= M - ε2, else s_k = 0
c(X_i, X_j) = Σ_k s_k with s_k = x_jk - x_ik if x_jk - x_ik >= M - ε2, else s_k = 0
d(X_i, X_j) = Σ_k s_k with s_k = M - (x_ik + x_jk)/2 if M - (x_ik + x_jk)/2 >= M - ε1, else s_k = 0

with X_i = (x_ik), x_ik in [x_min, x_max], M = x_max - x_min, p in R+ \ {0}, and thresholds ε1, ε2 in [0, M] that grow monotonically with p and are derived from the mean and standard deviation over all elements of descriptor space (see [5] for their exact definition).

a selects properties that are present in both data vectors (X_i, X_j representing media objects), b and c select properties that are present in just one of them and d selects properties that are present in neither of the two data vectors. Every property is selected by the extent to which it is present (a and d: mean, b and c: difference) and only if the amount to which it is present exceeds a certain threshold (depending on the mean and standard deviation over all elements of descriptor space).
The implementation of these operators is based on one assumption. It is assumed that vector elements measure on an interval scale. That means, each element expresses that the measured property is "more or less" present ("0": not at all, "M": fully present). This is true for most visual descriptors and all MPEG-7 descriptors. A natural origin as it is assumed here ("0") is not needed.
Introducing p (called the discriminance-defining parameter) for the thresholds ε1, ε2 has the positive consequence that a, b, c, d can then be controlled through a single parameter. p is an additional criterion for the behaviour of a distance measure and determines the thresholds used in the operators. It expresses how accurately data items are present (quantisation) and consequently, how accurately they should be investigated. p can be set by the user or automatically. Interesting are the limits:
1. p -> infinity, hence ε1, ε2 -> M
In this case, all elements (= properties) are assumed to be continuous (high quantisation). In consequence, all properties of a descriptor are used by the operators. Then, the distance measure is not discriminant for properties.
2. p -> 0, hence ε1, ε2 -> 0
In this case, all properties are assumed to be predicates. In consequence, only binary elements (= predicates) are used by the operators (1-bit quantisation). The distance measure is then highly discriminant for properties.
Between these limits, a distance measure that uses the quantisation model is, depending on p, more or less discriminant for properties. This means, it selects a subset of all available description vector elements for distance measurement.
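To make the operator definitions above concrete, the following is a minimal Python sketch (not taken from [5]). It assumes the elements are already normalised to [0, M] and takes the thresholds ε1, ε2 directly as arguments instead of deriving them from p and the spread of descriptor space; the helper name quantisation_abcd is purely illustrative.

def quantisation_abcd(xi, xj, M=1.0, eps1=1.0, eps2=1.0):
    """Statistical replacements for the set operators: a (present in both),
    b/c (present in only one vector), d (present in neither)."""
    a = b = c = d = 0.0
    for xik, xjk in zip(xi, xj):
        mean = (xik + xjk) / 2.0
        diff = xik - xjk
        if mean >= M - eps1:         # present in both (selected by its mean)
            a += mean
        if diff >= M - eps2:         # present in Xi only (selected by the difference)
            b += diff
        if -diff >= M - eps2:        # present in Xj only
            c += -diff
        if M - mean >= M - eps1:     # present in neither
            d += M - mean
    return a, b, c, d

# With eps1 = eps2 = 0 only full presence/absence counts (predicate behaviour):
print(quantisation_abcd([1, 1, 0, 0], [1, 0, 1, 0], eps1=0.0, eps2=0.0))
# -> (1.0, 1.0, 1.0, 1.0): one co-occurrence, one Xi-only, one Xj-only, one joint absence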
For both predicate data and quantitative data it can be shown that the quantisation model is reasonable. If description vectors consist of binary elements only, p should be used as follows (for example, p can easily be set automatically): p is chosen so small that ε1 = ε2 = 0 (e.g. p -> 0).
In this case, a, b, c, d measure like the set operators they replace. For example, Table 1 shows their behaviour for two one-dimensional feature vectors X_i and X_j. As can be seen, the statistical measures work like set operators. Actually, the quantisation model works accurately on predicate data for any p.
To show that the model is reasonable for quantitative data the following fact is used. It is easy to show that for predicate data some quantitative distance measures degenerate to predicate-based measures. For example, the L1 metric (Manhattan metric) degenerates to the Hamming distance (from [9], without weights):

L1 = Σ_k |x_ik - x_jk| = b + c = Hamming distance

If it can be shown that the quantisation model is able to reconstruct the quantitative measure from the degenerated predicate-based measure, the model is obviously able to extend predicate-based measures to the quantitative domain. This is easy to illustrate. For purely quantitative feature vectors, p should be used as follows (again, p can easily be set automatically): p is chosen so large that ε1 = ε2 = M.
Then, a and d become continuous functions, because the threshold conditions are now fulfilled for every k:

a = Σ_k (x_ik + x_jk)/2 and d = Σ_k (M - (x_ik + x_jk)/2)

b and c can be made continuous for the following expressions:

b = Σ_k s_k with s_k = x_ik - x_jk if x_ik >= x_jk, else s_k = 0
c = Σ_k s_k with s_k = x_jk - x_ik if x_jk >= x_ik, else s_k = 0
b + c = Σ_k |x_ik - x_jk|

Table 1. Quantisation model on predicate vectors.
X_i | X_j | a | b | c | d
(1) | (1) | 1 | 0 | 0 | 0
(1) | (0) | 0 | 1 | 0 | 0
(0) | (1) | 0 | 0 | 1 | 0
(0) | (0) | 0 | 0 | 0 | 1

b - c = Σ_k (x_ik - x_jk) and c - b = Σ_k (x_jk - x_ik)

This means, for sufficiently high p every predicate-based distance measure that is either not using b and c, or using them only as b+c, b-c or c-b, can be transformed into a continuous quantitative distance measure. For example, the Hamming distance (again, without weights):

b + c = Σ_k s_k with s_k = |x_ik - x_jk| = L1

The quantisation model successfully reconstructs the L1 metric and no distance measure-specific modification has to be made to the model. This demonstrates that the model is reasonable.
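The degeneration and reconstruction argument can be checked numerically with the hypothetical quantisation_abcd sketch from Section 3.2 above (assumed to be in scope); the example vectors are made up purely for illustration.

# Reuses the quantisation_abcd sketch from Section 3.2.
def l1(xi, xj):
    return sum(abs(p - q) for p, q in zip(xi, xj))

# Fully "continuous" thresholds (eps1 = eps2 = M): b + c reconstructs the L1 metric.
xi, xj = [0.2, 0.9, 0.5], [0.4, 0.1, 0.5]
_, b, c, _ = quantisation_abcd(xi, xj, M=1.0, eps1=1.0, eps2=1.0)
assert abs((b + c) - l1(xi, xj)) < 1e-9

# Predicate vectors with eps1 = eps2 = 0: b + c equals the Hamming distance (here 2).
pi, pj = [1, 0, 1, 0], [1, 1, 0, 0]
_, b, c, _ = quantisation_abcd(pi, pj, M=1.0, eps1=0.0, eps2=0.0)
assert b + c == 2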
In the following it will be used to extend successful predicate-based distance measures to the quantitative domain.
The major advantages of the quantisation model are: (1) it is application domain independent, (2) the implementation is straightforward, (3) the model is easy to use and finally, (4) the new parameter p allows controlling the similarity measurement process in a new way (discriminance on property level).
3.3 Implemented measures
For the evaluation described in this work, next to predicate-based (based on the quantisation model) and quantitative measures, the distance measures recommended in the MPEG-7 standard were implemented (all together 38 different distance measures).
Table 2 summarises those predicate-based measures that performed best in the evaluation (in sum 20 predicate-based measures were investigated). For these measures, K is the number of predicates in the data vectors X_i and X_j. In P1, the sum is used for Tversky's f() (as Tversky himself does in [17]) and α, β are weights for elements b and c. In [5] the authors investigated Tversky's Feature Contrast Model and found α = 1, β = 0 to be the optimum parameters.
Some of the predicate-based measures are very simple (e.g. P2, P4) but have been heavily exploited in psychological research. Pattern difference (P6), a very powerful measure, is used in the statistics package SPSS for cluster analysis. P7 is a correlation coefficient for predicates developed by Pearson.
Table 3 shows the best quantitative distance measures that were used. Q1 and Q2 are metric-based and were implemented as representatives for the entire group of Minkowski distances. The w_k are weights. In Q5, μ_i and σ_i are the mean and standard deviation of the elements of descriptor X_i. In Q6, m is M/2 (= 0.5). Q3, the Canberra metric, is a normalised form of Q1. Similarly, Q4, Clark's divergence coefficient, is a normalised version of Q2. Q6 is a further-developed correlation coefficient that is invariant against sign changes. This measure is used even though its particular properties are of minor importance for this application domain. Finally, Q8 is a measure that takes the differences between adjacent vector elements into account. This makes it structurally different from all other measures.
Obviously, one important distance measure is missing. The Mahalanobis distance was not considered, because different descriptors would require different covariance matrices and for some descriptors it is simply impossible to define a covariance matrix. If the identity matrix was used in this case, the Mahalanobis distance would degenerate to a Minkowski distance.
Additionally, the recommended MPEG-7 distances were implemented with the following parameters: In the distance measure of the Color Layout descriptor all weights were set to "1" (as in all other implemented measures). In the distance measure of the Dominant Color descriptor the following parameters were used (as recommended): w1 = 0.7, w2 = 0.3, T = 1, d = 20. In the Homogeneous Texture descriptor's distance all weights α(k) were set to "1" and matching was done rotation- and scale-invariant.
Important! Some of the measures presented in this section are distance measures while others are similarity measures.
For the tests, it is important to notice that all similarity measures were inverted to distance measures.
TEST SETUP
Subsection 4.1 describes the descriptors (including parameters) and the collections (including ground truth information) that were used in the evaluation. Subsection 4.2 discusses the evaluation method that was implemented and Subsection 4.3 sketches the test environment used for the evaluation process.
4.1 Test data
For the evaluation eight MPEG-7 descriptors were used. All colour descriptors: Color Layout, Color Structure, Dominant Color, Scalable Color; all texture descriptors: Edge Histogram, Homogeneous Texture, Texture Browsing; and one shape descriptor: Region-based Shape. Texture Browsing was used even though the MPEG-7 standard suggests that it is not suitable for retrieval. The other basic shape descriptor, Contour-based Shape, was not used, because it produces structurally different descriptions that cannot be transformed to data vectors with elements measuring on interval scales. The motion descriptors were not used, because they integrate the temporal dimension of visual media and would only be comparable if the basic colour, texture and shape descriptors were aggregated over time. This was not done. Finally, no high-level descriptors were used (Localisation, Face Recognition, etc., see Subsection 2.1), because in the author's opinion the behaviour of the basic descriptors on elementary media objects should be evaluated before conclusions on aggregated structures can be drawn.
Table 2. Predicate-based distance measures.
P1: a - α·b - β·c (Feature Contrast Model, Tversky 1977 [17])
P2: a (No. of co-occurrences)
P3: b + c (Hamming distance)
P4: a / K (Russel 1940 [14])
P5: a / (b + c) (Kulczynski 1927 [14])
P6: b·c / K^2 (Pattern difference [14])
P7: (a·d - b·c) / sqrt((a+b)·(a+c)·(b+d)·(c+d)) (Pearson 1926 [11])
The Texture Browsing descriptions had to be transformed from a five-bin to an eight-bin representation in order that all elements of the descriptor measure on an interval scale. A Manhattan metric was used to measure proximity (see [6] for details).
Descriptor extraction was performed using the MPEG-7 reference implementation. In the extraction process each descriptor was applied on the entire content of each media object and the following extraction parameters were used. Colour in Color Structure was quantised to 32 bins. For Dominant Color the colour space was set to YCrCb, 5-bit default quantisation was used and the default value for spatial coherency was used. Homogeneous Texture was quantised to 32 components. Scalable Color values were quantised to sizeof(int)-3 bits and 64 bins were used. Finally, Texture Browsing was used with five components.
These descriptors were applied on three media collections with image content: the Brodatz dataset (112 images, 512x512 pixel), a subset of the Corel dataset (260 images, 460x300 pixel, portrait and landscape) and a dataset with coats-of-arms images (426 images, 200x200 pixel). Figure 1 shows examples from the three collections.
Designing appropriate test sets for a visual evaluation is a highly difficult task (for example, see the TREC video 2002 report [15]). Of course, for identifying the best distance measure for a descriptor, it should be tested on an infinite number of media objects. But this is not the aim of this study. It is just evaluated whether, for "likely" image collections, better proximity measures than those suggested by the MPEG-7 group can be found.
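As an illustration only, the Table 2 measures can be written as thin wrappers around the quantities a, b, c, d produced by the quantisation model. The sketch below reuses the hypothetical quantisation_abcd helper from Section 3.2; the small constant guarding divisions by zero is an assumption, not something the paper specifies. Note that P1, P2, P4, P5 and P7 are similarity measures and, as stated above, were inverted to distance measures for the tests.

import math

def predicate_measures(xi, xj, M=1.0, eps1=0.0, eps2=0.0, alpha=1.0, beta=0.0):
    # a, b, c, d from the quantisation model (Section 3.2 sketch, assumed in scope).
    a, b, c, d = quantisation_abcd(xi, xj, M, eps1, eps2)
    K = len(xi)
    tiny = 1e-12  # guards against division by zero (assumption)
    return {
        "P1 Feature Contrast Model": a - alpha * b - beta * c,
        "P2 co-occurrences": a,
        "P3 Hamming distance": b + c,
        "P4 Russel": a / K,
        "P5 Kulczynski": a / (b + c + tiny),
        "P6 pattern difference": (b * c) / (K * K),
        "P7 Pearson": (a * d - b * c)
                      / (math.sqrt((a + b) * (a + c) * (b + d) * (c + d)) + tiny),
    }

print(predicate_measures([1, 1, 0, 0], [1, 0, 1, 0]))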
Collections of this relatively small size were used in the evaluation, because the applied evaluation methods are, above a certain minimum size, invariant against collection size, and for smaller collections it is easier to define a high-quality ground truth. Still, the average ratio of ground truth size to collection size is at least 1:7. Especially, no collection from the MPEG-7 dataset was used in the evaluation, because the evaluations should show how well the descriptors and the recommended distance measures perform on "unknown" material.
When the descriptor extraction was finished, the resulting XML descriptions were transformed into a data matrix with 798 lines (media objects) and 314 columns (descriptor elements). To be usable with distance measures that do not integrate domain knowledge, the elements of this data matrix were normalised to [0, 1].
For the distance evaluation, next to the normalised data matrix, human similarity judgement is needed. In this work, the ground truth is built of twelve groups of similar images (four for each dataset). Group membership was rated by humans based on semantic criteria. Table 4 summarises the twelve groups and the underlying descriptions. It has to be noticed that some of these groups (especially 5, 7 and 10) are much harder to find with low-level descriptors than others.
4.2 Evaluation method
Usually, retrieval evaluation is performed based on a ground truth with recall and precision (see, for example, [3], [16]). In multi-descriptor environments this leads to a problem, because the resulting recall and precision values are strongly influenced by the method used to merge the distance values for one media object. Even though it is nearly impossible to say how big the influence of a single distance measure was on the resulting recall and precision values, this problem has been almost ignored so far.
In Subsection 2.2 it was stated that the major task of a distance measure is to bring the relevant media objects as close to the origin (where the query object lies) as possible. Even in a multi-descriptor environment it is then simple to identify the similar objects in a large distance space. Consequently, it was decided to use indicators measuring the distribution in distance space of candidates similar to the query object for this evaluation instead of recall and precision.
Table 3. Quantitative distance measures.
Q1: Σ_k w_k·|x_ik - x_jk| (City block distance, L1)
Q2: sqrt(Σ_k w_k·(x_ik - x_jk)^2) (Euclidean distance, L2)
Q3: Σ_k |x_ik - x_jk| / (x_ik + x_jk) (Canberra metric, Lance, Williams 1967 [8])
Q4: sqrt((1/K)·Σ_k ((x_ik - x_jk)/(x_ik + x_jk))^2) (Divergence coefficient, Clark 1952 [1])
Q5: Σ_k (x_ik - μ_i)·(x_jk - μ_j) / sqrt(Σ_k (x_ik - μ_i)^2 · Σ_k (x_jk - μ_j)^2) (Correlation coefficient)
Q6: 2·Σ_k (x_ik - m)·(x_jk - m) / (Σ_k (x_ik - m)^2 + Σ_k (x_jk - m)^2) (Cohen 1969 [2])
Q7: Σ_k x_ik·x_jk / sqrt(Σ_k x_ik^2 · Σ_k x_jk^2) (Angular distance, Gower 1967 [7])
Q8: Σ_{k=1..K-1} ((x_ik - x_jk) - (x_i,k+1 - x_j,k+1))^2 (Meehl Index [10])
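A sketch of some of the Table 3 measures follows (unweighted; Q5 and Q6 are omitted here for brevity). The small constants guarding the normalised denominators are an implementation assumption, since the paper does not state how zero-valued element pairs are handled.

import math

def q1_city_block(xi, xj):
    return sum(abs(p - q) for p, q in zip(xi, xj))

def q2_euclidean(xi, xj):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(xi, xj)))

def q3_canberra(xi, xj, tiny=1e-12):
    return sum(abs(p - q) / (p + q + tiny) for p, q in zip(xi, xj))

def q4_clark_divergence(xi, xj, tiny=1e-12):
    K = len(xi)
    return math.sqrt(sum(((p - q) / (p + q + tiny)) ** 2 for p, q in zip(xi, xj)) / K)

def q7_angular(xi, xj, tiny=1e-12):
    # A similarity (cosine); inverted to a distance for the tests, as noted in Section 3.3.
    num = sum(p * q for p, q in zip(xi, xj))
    den = math.sqrt(sum(p * p for p in xi) * sum(q * q for q in xj)) + tiny
    return num / den

def q8_meehl(xi, xj):
    # Uses the differences between adjacent vector elements.
    return sum(((xi[k] - xj[k]) - (xi[k + 1] - xj[k + 1])) ** 2
               for k in range(len(xi) - 1))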
Table 4. Ground truth information.
Coll. | No. | Images | Description
Brodatz | 1 | 19 | Regular, chequered patterns
Brodatz | 2 | 38 | Dark white noise
Brodatz | 3 | 33 | Moon-like surfaces
Brodatz | 4 | 35 | Water-like surfaces
Corel | 5 | 73 | Humans in nature (difficult)
Corel | 6 | 17 | Images with snow (mountains, skiing)
Corel | 7 | 76 | Animals in nature (difficult)
Corel | 8 | 27 | Large coloured flowers
Arms | 9 | 12 | Bavarian communal arms
Arms | 10 | 10 | All Bavarian arms (difficult)
Arms | 11 | 18 | Dark objects / light unsegmented shield
Arms | 12 | 14 | Major charges on blue or red shield
Identifying clusters of similar objects (based on the given ground truth) is relatively easy, because the resulting distance space for one descriptor and any distance measure is always one-dimensional. Clusters are found by searching from the origin of distance space to the first similar object, grouping all following similar objects in the cluster, breaking off the cluster with the first un-similar object and so forth.
For the evaluation two indicators were defined. The first measures the average distance of all cluster means to the origin:

μ_d = (Σ_{i=1..no_clusters} (Σ_{j=1..cluster_size_i} distance_ij) / cluster_size_i) / (no_clusters · avg_distance)

where distance_ij is the distance value of the j-th element in the i-th cluster,

avg_distance = Σ_{i=1..no_clusters} Σ_{j=1..cluster_size_i} distance_ij / Σ_{i=1..no_clusters} cluster_size_i,

no_clusters is the number of found clusters and cluster_size_i is the size of the i-th cluster. The resulting indicator is normalised by the distribution characteristics of the distance measure (avg_distance). Additionally, the standard deviation is used. In the evaluation process this measure turned out to produce valuable results and to be relatively robust against parameter p of the quantisation model.
In Subsection 3.2 we noted that p affects the discriminance of a predicate-based distance measure: the smaller p is set, the larger are the resulting clusters, because the quantisation model is then more discriminant against properties and fewer elements of the data matrix are used. This causes a side-effect that is measured by the second indicator: more and more un-similar objects come out with exactly the same distance value as similar objects (a problem that does not exist for large p's) and become indiscernible from similar objects. Consequently, they are (false) cluster members. This phenomenon (conceptually similar to the "false negatives" indicator) was named "cluster pollution" and the indicator measures the average cluster pollution over all clusters:

cp = Σ_{i=1..no_clusters} Σ_{j=1..cluster_size_i} no_doubles_ij / no_clusters

where no_doubles_ij is the number of indiscernible un-similar objects associated with the j-th element of cluster i.
Remark: Even though there is a certain influence, it could be proven in [5] that no significant correlation exists between parameter p of the quantisation model and cluster pollution.
4.3 Test environment
As pointed out above, to generate the descriptors, the MPEG-7 reference implementation in version 5.6 was used (provided by TU Munich). Image processing was done with Adobe Photoshop and normalisation and all evaluations were done with Perl.
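A sketch of how the two indicators of Subsection 4.2 could be computed is given below. It assumes the candidates are already sorted by distance to the query object and labelled as similar or un-similar by the ground truth; cluster pollution is simplified here to counting un-similar objects whose distance coincides exactly with that of some cluster member, rather than book-keeping no_doubles_ij per cluster element.

def find_clusters(ranked):
    """Greedy clustering from the origin of distance space: a cluster collects
    consecutive similar objects and is broken off at the first un-similar one."""
    clusters, current = [], []
    for dist, similar in ranked:
        if similar:
            current.append(dist)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return clusters

def first_indicator(ranked):
    """Average of the cluster means, normalised by avg_distance and the cluster count."""
    clusters = find_clusters(ranked)
    if not clusters:
        return float("inf")
    all_d = [d for cluster in clusters for d in cluster]
    avg_distance = sum(all_d) / len(all_d)
    cluster_means = [sum(cluster) / len(cluster) for cluster in clusters]
    return sum(cluster_means) / (len(clusters) * avg_distance)

def cluster_pollution(ranked):
    """Average number of indiscernible un-similar objects per cluster (simplified)."""
    clusters = find_clusters(ranked)
    if not clusters:
        return 0.0
    member_distances = {d for cluster in clusters for d in cluster}
    doubles = sum(1 for dist, similar in ranked
                  if not similar and dist in member_distances)
    return doubles / len(clusters)

# Example: (distance, is_similar) pairs sorted by distance to the query object.
ranked = [(0.1, True), (0.15, True), (0.2, False), (0.25, True), (0.4, False)]
print(first_indicator(ranked), cluster_pollution(ranked))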
The querying process was performed in the following steps: (1) random selection of a ground truth group, (2) random selection of a query object from this group, (3) distance comparison for all other objects in the dataset, (4) clustering of the resulting distance space based on the ground truth and finally, (5) evaluation.
For each combination of dataset and distance measure 250 queries were issued and evaluations were aggregated over all datasets and descriptors. The next section shows the partially surprising results.
RESULTS
In the results presented below the first indicator from Subsection 4.2 was used to evaluate distance measures. In a first step parameter p had to be set in a way that all measures are equally discriminant. Distance measurement is fair if the following condition holds true for any predicate-based measure d_P and any continuous measure d_C:

cp(d_P, p) <= cp(d_C)

Then, it is guaranteed that predicate-based measures do not create larger clusters (with a higher number of similar objects) for the price of higher cluster pollution. In more than 1000 test queries the optimum value was found to be p = 1.
Figure 1. Test datasets. Left: Brodatz dataset, middle: Corel dataset, right: coats-of-arms dataset.
Results are organised as follows: Subsection 5.1 summarises the best distance measures per descriptor, Section 5.2 shows the best overall distance measures and Section 5.3 points out other interesting results (for example, distance measures that work particularly well on specific ground truth groups).
5.1 Best measure per descriptor
Figure 2 shows the evaluation results for the first indicator. For each descriptor the best measure and the performance of the MPEG-7 recommendation are shown. The results are aggregated over the tested datasets.
On first sight, it becomes clear that the MPEG-7 recommendations are mostly relatively good but never the best. For Color Layout the difference between MP7 and the best measure, the Meehl index (Q8), is just 4% and the MPEG-7 measure has a smaller standard deviation. The reason why the Meehl index is better may be that this descriptor generates descriptions with elements that have very similar variance. Statistical analysis confirmed that (see [6]).
For Color Structure, Edge Histogram, Homogeneous Texture, Region-based Shape and Scalable Color by far the best measure is pattern difference (P6). Psychological research on human visual perception has revealed that in many situations differences between the query object and a candidate weigh much more strongly than common properties. The pattern difference measure implements this insight in the most consistent way. In the author's opinion, the reason why pattern difference performs so extremely well on many descriptors is due to this fact. Additional advantages of pattern difference are that it usually has a very low variance and, because it is a predicate-based measure, its discriminance (and cluster structure) can be tuned with parameter p.
The best measure for Dominant Color turned out to be Clark's divergence coefficient (Q4). This is a similar measure to pattern difference on the continuous domain. The Texture Browsing descriptor is a special problem. In the MPEG-7 standard it is recommended to use it exclusively for browsing. After testing it for retrieval on various distance measures the author supports this opinion. It is very difficult to find a good distance measure for Texture Browsing.
The proposed Manhattan metric, for example, performs very badly. The best measure is predicate-based (P7). It works on common properties (a, d) but produces clusters with very high cluster pollution. For this descriptor the second indicator is up to eight times higher than for predicate-based measures on other descriptors.
5.2 Best overall measures
Figure 3 summarises the results over all descriptors and media collections. The diagram should give an indication of the general potential of the investigated distance measures for visual information retrieval.
It can be seen that the best overall measure is a predicate-based one. The top performance of pattern difference (P6) proves that the quantisation model is a reasonable method to extend predicate-based distance measures to the continuous domain. The second best group of measures are the MPEG-7 recommendations, which have a slightly higher mean but a lower standard deviation than pattern difference. The third best measure is the Meehl index (Q8), a measure developed for psychological applications but because of its characteristic properties tailor-made for certain (homogeneous) descriptors.
Minkowski metrics are also among the best measures: the average mean and variance of the Manhattan metric (Q1) and the Euclidean metric (Q2) are in the range of Q8. Of course, these measures do not perform particularly well for any of the descriptors. Remarkably for a predicate-based measure, Tversky's Feature Contrast Model (P1) is also in the group of very good measures (even though it is not among the best) that ends with Q5, the correlation coefficient. The other measures either have a significantly higher mean or a very large standard deviation.
5.3 Other interesting results
Distance measures that perform on average worse than others may in certain situations (e.g. on specific content) still perform better. For Color Layout, for example, Q7 is a very good measure on colour photos. It performs as well as Q8 and has a lower standard deviation. For artificial images the pattern difference and the Hamming distance produce comparable results as well.
If colour information is available in media objects, pattern difference performs well on Dominant Color (just 20% worse than Q4) and in case of difficult ground truth (groups 5, 7, 10) the Meehl index is as strong as P6.
Figure 2. Results per measure and descriptor (best measure vs. MPEG-7 recommendation: Color Layout Q8 vs. MP7, Color Structure P6 vs. MP7, Dominant Color Q4 vs. MP7, Edge Histogram P6 vs. MP7, Homogeneous Texture P6 vs. MP7, Region-based Shape P6 vs. MP7, Scalable Color P6 vs. MP7, Texture Browsing P7 vs. Q2). The horizontal axis shows the best measure and the performance of the MPEG-7 recommendation for each descriptor. The vertical axis shows the values for the first indicator (smaller value = better cluster structure). Shades have the following meaning: black (good cases), black + dark grey (average) and black + dark grey + light grey (bad).
CONCLUSION
The evaluation presented in this paper aims at testing the recommended distance measures and finding better ones for the basic visual MPEG-7 descriptors. Eight descriptors were selected, 38 distance measures were implemented, media collections were created and assessed, performance indicators were defined and more than 22500 tests were performed.
To be able to use\npredicate-based distance measures next to quantitative measures a\nquantisation model was defined that allows the application of\npredicate-based measures on continuous data.\nIn the evaluation the best overall distance measures for visual\ncontent as extracted by the visual MPEG-7 descriptors turned\nout to be the pattern difference measure and the Meehl index (for\nhomogeneous descriptions). Since these two measures perform\nsignificantly better than the MPEG-7 recommendations they\nshould be further tested on large collections of image and video\ncontent (e.g. from [15]).\n\nThe choice of the right distance function for similarity\nmeasurement depends on the descriptor, the queried media\ncollection and the semantic level of the user's idea of similarity.\nThis work offers suitable distance measures for various situations.\nIn consequence, the distance measures identified as the best will\nbe implemented in the open MPEG-7 based visual information\nretrieval framework VizIR [4].\n\nACKNOWLEDGEMENTS\nThe author would like to thank Christian Breiteneder for his\nvaluable comments and suggestions for improvement. The work\npresented in this paper is part of the VizIR project funded by the\nAustrian Scientific Research Fund FWF under grant no. P16111.\n\nREFERENCES\n[1] Clark, P.S. An extension of the coefficient of divergence for\nuse with multiple characters. Copeia, 2 (1952), 61-64.\n[2] Cohen, J. A profile similarity coefficient invariant over\nvariable reflection. Psychological Bulletin, 71 (1969), 281-284\n.\n[3] Del Bimbo, A. Visual information retrieval. Morgan\nKaufmann Publishers, San Francisco CA, 1999.\n[4] Eidenberger, H., and Breiteneder, C. A framework for visual\ninformation retrieval. In Proceedings Visual Information\nSystems Conference (HSinChu Taiwan, March 2002), LNCS\n2314, Springer Verlag, 105-116.\n[5] Eidenberger, H., and Breiteneder, C. Visual similarity\nmeasurement with the Feature Contrast Model. In\nProceedings SPIE Storage and Retrieval for Media Databases\nConference (Santa Clara CA, January 2003), SPIE Vol.\n5021, 64-76.\n[6] Eidenberger, H., How good are the visual MPEG-7 features?\nIn Proceedings SPIE Visual Communications and Image\nProcessing Conference (Lugano Switzerland, July 2003),\nSPIE Vol. 5150, 476-488.\n[7] Gower, J.G. Multivariate analysis and multidimensional\ngeometry. The Statistician, 17 (1967),13-25.\n[8] Lance, G.N., and Williams, W.T. Mixed data classificatory\nprograms. Agglomerative Systems Australian Comp. Journal,\n9 (1967), 373-380.\n[9] Manjunath, B.S., Ohm, J.R., Vasudevan, V.V., and Yamada,\nA. Color and texture descriptors. In Special Issue on MPEG-7\n. IEEE Transactions on Circuits and Systems for Video\nTechnology, 11/6 (June 2001), 703-715.\n[10] Meehl, P. E. The problem is epistemology, not statistics:\nReplace significance tests by confidence intervals and\nquantify accuracy of risky numerical predictions. In Harlow,\nL.L., Mulaik, S.A., and Steiger, J.H. (Eds.). What if there\nwere no significance tests? Erlbaum, Mahwah NJ, 393-425.\n[11] Pearson, K. On the coefficients of racial likeness. Biometrica,\n18 (1926), 105-117.\n[12] Santini, S., and Jain, R. Similarity is a geometer. Multimedia\nTools and Application, 5/3 (1997), 277-306.\n[13] Santini, S., and Jain, R. Similarity measures. IEEE\nTransactions on Pattern Analysis and Machine Intelligence,\n21/9 (September 1999), 871-883.\n[14] Sint, P.P. 
Similarity structures and similarity measures. Austrian Academy of Sciences Press, Vienna Austria, 1975 (in German).
[15] Smeaton, A.F., and Over, P. The TREC-2002 video track report. NIST Special Publication SP 500-251 (March 2003), available from: http://trec.nist.gov/pubs/trec11/papers/VIDEO.OVER.pdf (last visited: 2003-07-29)
[16] Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., and Jain, R. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22/12 (December 2000), 1349-1380.
[17] Tversky, A. Features of similarity. Psychological Review, 84/4 (July 1977), 327-351.

Figure 3. Overall results (ordered by the first indicator, best first): P6, MP7, Q8, Q1, Q4, Q2, P2, P4, Q6, Q3, Q7, P1, Q5, P3, P5, P7. The vertical axis shows the values for the first indicator (smaller value = better cluster structure). Shades have the same meaning as in Figure 2.", "keywords": "Similarity Perception;MPEG-7 descriptors;Distance Measurement;Content-based Image Retrieval;MPEG-7;distance measure;quantisation;Content-based Video Retrieval;Similarity Measurement;visual information retrieval;Visual Information Retrieval;human similarity perception"} {"name": "73", "title": "Dogs or Robots: Why do Children see them as Robotic Pets rather than Canine Machines?", "abstract": "In the not too distant future Intelligent Creatures (robots, smart devices, smart vehicles, smart buildings, etc) will share the everyday living environment of human beings. It is important then to analyze the attitudes humans are to adopt for interaction with morphologically different devices, based on their appearance and behavior. In particular, these devices will become multi-modal interfaces, with computers or networks of computers, for a large and complex universe of applications. Our results show that children are quickly attached to the word `dog' reflecting a conceptualization that robots that look like dogs (in particular SONY Aibo) are closer to living dogs than they are to other devices. By contrast, adults perceive Aibo as having stronger similarities to machines than to dogs (reflected by definitions of robot). Illustration of the characteristics structured in the definition of robot are insufficient to convince children Aibo is closer to a machine than to a dog.", "fulltext": "Introduction
The play R. U. R. (Rossum's Universal Robots), written by the Czech author Karel Capek, was produced in London in 1923. The term robot entered the English language (in Czech the word `robota' means `heavy labor'). The robot concept remained science fiction until 1961 when Unimation Inc. installed the world's first industrial robot in the US. Unimation Inc. made Australia's first robot, installed in 1974. The emergence of legged autonomous robots and their commercial release (as in Honda's Asimo and Sony's Aibo) contribute to support the hypothesis that mobile robots will soon become common in our everyday environments. The commercial release of the Personal Computer (PC) occurred just a generation ago, yet now it is a common household item. This forecast has prompted some studies into the acceptabil-
ity and attitudes these artifacts generate among human beings (Fong, Nourbakhsh & Dautenhahn 2003). (This work was supported by a Griffith University Research Grant as part of the project "Robots to guide the blind: the application of physical collaborative autonomous intelligent agents".)
For example, Kahn et al have recently embarked on a series of studies with children (Kahn Jr., Friedman, Freier & Severson 2003) and adults (Kahn Jr., Friedman & Hagman 2002) investigating matters such as humans attributing social and ethical stance to robots like Sony's Aibo. Earlier, Bumby and Dautenhahn (Bumby & Dautenhahn 1999) explored reactions of children as they interacted with a robot. Reports have recently appeared in the media of cases where humans also build emotional attachments to robots that do not look like animals, similar to those they have for home pets. An example is the robotic vacuum cleaners in (Kahney 2003).
While some experts in childhood development argue that the use of robot-dolls (machines with realism) in children's environments `cuts off open-ended imaginative play', there are some studies showing that intelligent children will still explore beyond the limitations of the machine. Similar concerns about other technologies have also been the subject of debate. For example, the concerns about children reducing their play with other children, or increasingly aggressive behavior because of computer games, can be attributed more to exposure to the type of game than to the technology of computer games themselves (Lawry, Upitis, Klawe, Anderson, Inkpen, Ndunda, Hsu, Leroux & Sedighian 1995). Researchers have found that children (in particular boys) will entertain and seek more challenging and interesting computer games (not necessarily violent games) and that there is no observable increase in violent behavior or deterioration in social behavior (Lawry et al. 1995).
Recent studies have focused on the attitudes generated by Sony's Aibo in humans; we propose here to explore the differences between Dog (as in living animal) and Robot (as in lifeless machine assembled from parts) in the concepts and language formations in children. Naturally, the smaller the difference, the more it is likely that humans will attribute animal characteristics (as high up as rights) to a robot. The question is, `what makes a small or a large difference?' If the difference is very small, perhaps humans will interact with autonomous robots as they do with animals. We suggest that in today's children's world, the issue is not confusion between reality and fantasy (Aylett 2002). To a child, Sony's Aibo is not a fantasy but reality.
Identification of what makes children perceive a robot as a dog (an animal) or as a robot is important, especially if one wants to design robots stressing the difference or diluting it. Our research reveals that the look and feel of Sony's Aibo and its body shape go a long way into its acceptability as a dog. Its play-
Later, illustration of its robotic features\nare repeatedly insufficient to fully convince the children\nthat this is an artifact (and not a being with\nfeelings).\nUnless Aibo does something unacceptable for a\ndog (like speak with a human voice), it remains essentially\na dog. Our findings that human speech in Aibo\nreduces its dog-ness and increases its robot-ness may\nbe attributed to the `uncanny valley' (Scheeff, Pinto,\nRahardja, Snibbe & Tow 2000). Although we are\nnot measuring emotional response, we have observed\ndissatisfaction with Aibo as a dog, since clearly it is\nonly humans that talk (although children accept talking\ntoys and talking animals in fantasy or animated\nmovies).\nThe rest of this paper is organized as follows. Section\n2 will describe our research methods. Section 3\nwill elaborate on the findings. Section 4 will present\nconclusions and final remarks. Our aim is to explore\nand contrast the properties currently accepted in the\ndefinition of `mobile autonomous robot'. The International\nFederation of Robotics (IFR) and the Australian\nRobot Association follow the ISO standard\nvocabulary (ISO 8373) to describe `manipulating industrial\nrobots operated in a manufacturing environ-ment'\n. A robot has three essential characteristics:\n1. It possesses some form of mobility (formally, a\nrobot must possess at least three programmable\naxes of motion).\n2. It can be programmed to accomplish a large variety\nof tasks.\n3. After being programmed, it operates automati-cally\n.\nMobile robots can move away from a fixed location\nand come in two varieties: tethered and autonomous;\na tethered robot may have its power supply or its\ncontrol unit overboard, possibly relying on a desktop\ncomputer and a wall outlet and a long cord. Autonomous\nmobile robots bring everything along (control\nunit, body, actuators, sensors and power supply).\nThe control unit is typically a computer and software.\nOur research attempts to find out if children do indeed\nnotice these properties or fixate more on the\nform and behavior of the artifact.\nThe methods\nThis research was performed by a series of demonstrations\nof Aibo and other robots, toys and models\n. in particular using Lego Mindstorms\n1\nconstructions\n(Knudsen 1999), remote control toy cars and\nautonomous battery toys.\nThe demonstrations were conducted with preschool\nchildren as well as those from the first 4 years\nof primary school across 3 schools, two childcare centers\n, and a museum in the urban area of Brisbane,\nAustralia. Table 1 summarizes the presentations and\nthe age groups of children. Whenever consent was\ngiven, a video/or audio of the session was recorded.\nAlternatively, a secretary took notes of the statement\nmade by children. Sessions lasted between 30 minutes\nand approximately one hour.\nThe sessions consisted of five stages.\n1. Establishing the language. In the first minutes of\nthe session, children are asked to describe what\nthey see. The goal of this stage is to obtain a\nvocabulary from the children to discuss the artifacts\nthat will be presented later.\n1\nA trademark of the LEGO group.\n2. Demonstration of Aibo. In the next 5 to 7 minutes\n, a demonstration of Aibo is performed with\none black model without a memory stick, so the\ndefault simple `dog-like' behaviors are in place.\n3. Demonstration of the concept of robot. This stage\nillustrates the main features that are common in\naccepted definitions of a robot. 
It also ensures\nthat the children observe that Sony's Aibo shares\nthese properties with robots.\n4. Investigate animal attributes on Aibo. This stage\nquestions the children for the existence of animal\nproperties on Sony's Aibo and the justification\nfor their belief.\n5. Challenge Aibo's animal attributes with the other\nartifacts used in the session. Children are asked\nto confirm or justify that Aibo is a robot. Attempts\nare made to convince them of the artificial\nnature of Aibo by showing the same property\nin an artifact accepted as lifeless and to compel\nthem to decide on one side of the Dog vs Robot\ndebate (or generate a new term to describe it).\nThe initial phase starts with the projection of\na video from RoboCup-2002 legged league (Veloso,\nUther, Fujita, Asada & Kitano 1998) (the video is the\nmatch between the University of Newcastle and Team\nSweden). Presentations at the Powerhouse Museum\nin Brisbane consisted of a live match with 8 dogs programmed\nas the Mi-PAL Team Griffith participation\nin RoboCup-2003. After two minutes the children are\nasked to describe what they see in the video. In the\nvideo, human manipulators are visible, which contributes\nto the realization that these are real things\nand not an animation film. Children are requested to\nindicate what the `dogs' are doing (if they suggested\nthis word, but if they suggest `puppies' then `puppies'\nis used). That is, we use the same words the children\nthemselves have introduced to describe what they see.\nChildren are then asked to justify why it is a game\nof `soccer/rugby' (or whatever word was the most\ncommon or immediate reply).\nThe phase finishes by bringing an Aibo that is\nswitched off, placing it on the ground, turning it on,\nand waiting. Since Aibo requires some seconds to\n`boot' we observe children's reactions and comments.\nThis phase is obviously different for blind children. It\nis replaced by allowing children to explore the robot\nwith their hands. Blind children still find and recognize\nlegs, paws, ears, head and tail because of shape,\ntexture, malleability and movement.\nWe then proceed to phase two where we illustrate\nthe default behavior of the Aibo, which includes the\nfollowing interactions.\nA couple of fast strokes on its back starts it on\na walk while it makes the sounds of a marching\ntune.\nHard strokes on its head produce sounds and a\nred light on its head LEDs.\nSoft strokes on its head produce sounds and a\ngreen light on its LEDs.\nScratching under its chin produces another set of\nsounds and lights.\nPlacing it on the floor on its side produces some\nsounds, and then Aibo performs a routine to\nstand back on its four legs. 
After getting up,\nAibo shakes, wagging his tail and making other\nsounds.\nPresenting a pink ball produces sounds and the\nLED on its tail to go pink.\n8\nSchool\nLevel\nChildren's age\nGroup size\nBoronia Childcare Center\npre-school\n4-5\n10\nCarole Park State School\npre-school\n4-5\n17\nHolland Park State School\n3rd year primary\n7-8\n25\nHolland Park State School\n1st year primary\n5-6\n24\nCamp Hill State School\n1st year primary\n5-6\n10\nCamp Hill State School\n1st year primary\n5-6\n12\nCamp Hill State School\n1st year primary\n5-6\n11\nCarole Park State School\n1st year primary\n5-6\n22\nCarole Park State School\n2nd year primary\n6-7\n24\nCarole Park State School\n2nd year primary\n6-7\n28\nCarole Park State School\n3rd year primary\n7-8\n26\nCarole Park State School\n4th year primary\n8-9\n20\nPowerhouse Museum\npre-school\n4-5\n10\nNarbethong State Special School\npre-school (blind)\n4-5\n3\nTable 1: Presentations conducted and age groups.\nFigure 1: A 4-legged walking toy with visible battery,\nmotor and gears. A flexible tail resembles a dog tail.\nOther behaviors that Aibo produces are not directly\ntriggered by the manipulator. These include\nAibo sitting down, Aibo lying on his stomach\nand waving all fours as a synchronized dance,\nAibo waiving one leg, Aibo moving his head from\nside to side and flapping his ears.\nChildren are then invited to interact with Aibo directly\n. In particular, to show the pink ball, to produce\nthe green lights or invite him to walk. They are also\ninvited to explain what Aibo is doing in their own\nwords. There are a series of questions that the presenter\ngoes through as the illustration of behaviors is\nperformed. These questions are as follows:\nWhat is Aibo doing now?\nIs he happy or is he sad?\nDoes he like to be touched like this?\nDo you think he can get up by himself?\nAt the completion of this phase, Aibo is turned off\nand focus is transferred to other examples of robotics.\nBecause the commonly accepted definitions of mobile\nrobots includes that they have their own control unit,\nphase three consists of the following:\nA presentation of a 4-legged toy with a tail (made\nof a spring) that has a visible battery, motor and\ngears (Figure 1). It is illustrated that this toy\nneeds a battery to operate it and that it has\nan on-off-reverse button. Children are asked to\ncarry out the task of setting it off or stopping\nit by taking the battery out. This illustrates\nthat mobile autonomous robots require power\nand carry their source of power.\nFigure 2: A model of a humanoid robot.\nFigure 3: Remote control car to contrast with the\nnotion of autonomous control.\nA presentation of a model of a humanoid robot\n(Figure 2). Although it looks like a robot, it can\nbe seen that it has no motors, no batteries and\nessentially does nothing.\nA presentation of a remote control car (radio con-trolled\n) (Figure 3). This car is also shown to\nhave batteries on board but all behavior is de-rived\nfrom the actions on the two-lever remote\ncontrol. The first lever produces forward or reverse\nmotion on the back wheels and the second\nlever produces left/right turns on the front\nwheels. This illustrates the notion of control (remote\nand human).\nA Lego Mindstorm construction extremely similar\nto `Hank the Bumper Tank' (Knudsen 1999)\n(Figure 4). This robot is shown to have a behavior\nthat allows it to move around and steer away\nfrom obstacles it bumps into (the program is very\nsimilar to the one suggested in (Knudsen 1999,\nChapter 2)). 
As part of the interactive nature\nof the presentation, the children are asked to act\n9\nFigure 4: A Mindstorm construction with touch and\nlight sensor.\nFigure 5: A Mindstorm construction with touch and\nlight sensor, that acts on its environment with a mechanical\narm, and plays sounds.\nas obstacles for `adapted' Hank. The presenter\npoints out the sensors behind the robots bumper,\nand illustrates that disconnecting these sensors\nmakes it `unaware' of obstacles.\nWe also added to Hank a program that used a\nlight sensor to monitor the color of the ground\nbeneath it.\nThis program was similar to the\nobstacle avoidance previously mentioned, but\nrather than avoid objects it would move away\nfrom dark areas. We presented this behavior as\n`being afraid of the dark'. By switching between\nthese modes, we illustrate that the behavior of\nthe robot changes with the chosen program.\nUsing Lego's ROBO-Lab (a graphical programming\napplication) to build a very simple program\nthat makes Hank spin in a circle for four seconds,\nwe show the children its programmable nature.\nThe children are taken through the process of\nbuilding the program, transferring it onto Hank\nvia an infrared interface and finally running it.\nWhen the program is running, the children are\nencouraged to count along, thus verifying that\nthe program is indeed the one just built. This is\nrepeated for at least two other timings (around\n10 to 20 seconds).\nA Lego construction extremely similar to Minerva\n(Knudsen 1999, Chapter 6) was presented\nnext (Figure 5). The components were shown\nto be the same as Hank's and the Lego RCX\nis labeled as the `computer control'. A program\nsimilar to the one suggested (Knudsen 1999) produces\nthe behavior illustrated to the children.\nMinerva moves around a white floor until it finds\na black object, uses a robotic arm to pick it up,\nthen turns around and brings it to another position\nclose to where it started. It then releases\nthe object and plays a tune. The presenter en-sured\nthat the children observed that Minerva\nperceives its environment and can act to change\nit (thus the notion of actuator is illustrated).\nA series of pictures (or videos) of autonomous\nrobots were shown to the children. These images\ndemonstrate that robots come in all sorts\nof shapes and sizes. Among these are pictures of\nmore Aibos, Honda ASIMO, the Sony humanoid\nSDX, MINERVA (Thrun, Bennewitz, Burgard,\nCremers, Dellaert, Fox, Hahnel, Rosenberg, Roy,\nSchulte & Schulz 1999) and Kismet (Brooks\n2002). It was pointed out that robots can produce\nsmiles, walk, and be as small as a cockroach.\nPictures of experimental robots were shown to\ndisplay the wires, gears and components inside\ntheir packaging.\nAibo was then brought back as the presenter repeated\nthe main concepts, namely:\nAibo requires power and carries a battery. Aibo is\nturned on and off. Also, it is shown that Aibo's\nbehaviors are interrupted and stopped if the battery\nis removed.\nAibo has motors. The gears on Aibo's joints, and\nwires near its neck are pointed out to the children\n.\nAibo has sensors. The strokes on head and sensitivity\nto the pink ball are illustrated once more.\nUsing another pink object (perhaps a piece of\nclothing, or the memory stick of Aibo), we show\nthat the behavior is triggered by the color being\nnoticed by a sensor and not Aibo understanding\nthe concept of a ball.\nAibo has actuators and a control program. We install\na memory stick and re-start Aibo. 
With the\nnew program it kicks a ball as in the RoboCup\nvideo.\nWe illustrate Aibo's behavior changes\nwith different memory sticks.\nOnce this is completed the next phase commences\nby the presenter asking one of the following questions.\n1. Does Aibo have feelings?\n2. Where does Aibo get energy from?\n3. Will Aibo have babies?\nResponses of several children were collected. We expected\nthat this question would have distinct answers\ndepending on whether we were referring to a robot or\na `living' dog.\nWe then passed to the final stage. Each time a\nchild made a response that seemed to indicate animal\nessence or animal agency in the Aibo, we chal-lenged\nthe response. For example, if a child indicated\nthat Aibo had feelings, we next asked what the child\nthinks happens to the feelings when Aibo is turned\noff. We found children would continue to support\ntheir point of view. Following the previous example,\nmany children followed the path that Aibo was just\nasleep when turned off. The challenge continued as\nthe presenter requested children to explain what sort\nof feelings Aibo has or if the feelings fade when the\nbattery runs out. Also, the presenter checked if the\nother artifacts, shown before, have feelings and asked\nthe children to explain why the others do or do not\nhave those or other feelings.\nThe sequence of challenges for the 3 questions\nabove were as follows.\n1. Does Aibo have feelings?\nWhat happens to Aibo's feelings when he is\nturned off?\n10\nFigure 6: The demonstrator with a class of grade 2\nchildren and 3 of the objects: Aibo, Hank and the\n4-legged walking toy.\nWhat happens to his feelings when the battery\nruns out?\nWhat happens to his feelings if we re-move/change/replace\nthe memory stick and\nAibo`s personality changes?\nWhat happens to his feelings if Aibo is broken\n? Will the feelings come back if we glue\nhim?\nWhat feelings do you know Aibo has? How\ndo you know he is happy/sad?\nIs it possible to pretend to be happy but not\nbe happy? Do you think Aibo is happy or\njust pretending?\n2. Where does Aibo get his energy from?\nWhere do you get your energy from?\nWhere do the other artifacts get their energy\nfrom?\nDoes Aibo work without a battery? Do the\nothers work without a battery?\nWhat do you think Aibo eats/drinks? Do\nyou think he needs to visit the toilet?\n3. Will Aibo have babies?\nIs Aibo a baby dog? How do you know?\nHow will Aibo look after (care for) the babies\n?\nDoes Aibo need to charge/replace the battery\nof the babies?\nThese paths of questioning were not all developed\nahead of the first presentation. They evolved from\none presentation to the next. Their length reflects\nthe resistance of the children to change their opinion,\neven if all other artifacts have opposite responses to\nthese questions. That is, for each question, before we\nprogressed to the next, we confirmed that the children\nsustained the notion that a difference remains\nbetween Aibo and the other artifacts. For example,\nin the last question sequence, children would start by\nconfirming that none of the other artifacts can have\nbabies while Aibo can. When progressing to `Is Aibo\na baby?' and contrasting this with `Is Hank a baby?',\nmost children realized that Aibo is really like Hank\nand cannot have babies.\nThe findings\nAfter an analysis of the transcript of our videos and\nnotes we summarize the following findings. It is remarkable\nthat when we queried the children for their\nfirst impressions of the video we obtained the following\nresults. 
To the question `What do you see?' all\nsessions had children responding that they saw `dogs'.\nThis is surprising for two main reasons: firstly the\nvideo shows the robots playing soccer, a behavior not\ncommonly attributed to dogs. And secondly, the children\nwould have anticipated seeing robots through\nprior conversations with parents and teachers. This\nmay explain why a few children claimed that the\nadults in the video where robots.\nAs the age of the pupils involved in the study in-creased\n, we noticed that the tendency to regard the\nrobots as `dogs' decreased. The more mature respondents\nwere more likely to label the robots as `robots'\nor `robotic dogs'.\nOne pupil also gave the more\ngeneric answer of `animals', and another thought that\nthey were `cats'. Interestingly, on a few occasions\nchildren referred to the humans in the video as the\nrobots. We believe this is due to their anticipation to\nsee robots and perhaps the media culture of humanoid\nrobots.\nTo the question, `What are they doing?' most children\nidentified the activity as a game of `soccer'. This\nis surprising, since the RoboCup has barriers around\nthe pitch that make the game more similar to ice-hockey\n, and although the robots are legged, they do\nnot kick the ball with one leg. All robots in this\nvideo kick the ball with two legs simultaneously or\nhead-butt the ball. The ball is also bright orange,\nclearly not a soccer ball. Another point is that although\nplayed in Australia, it is not the most commonly\nplayed sport.\nOther suggestions included, `they're fighting',\n`playing hockey', and one child thought he was watching\na game of tennis.\nJustifications for `why is it a game of soc-cer/football\n?' included a rounded ball on the ground,\ngoals, two teams, referees and goalies.\nWhen initially presenting the Aibo to the children,\nrather than give it the label `it', we found that they\nwould usually use `him' or `her'. Once again this was\nmore pronounced with younger subjects. As the presenter\nwent on to explain the attributes of the Aibo\nand show its operation, the children while probing\nwith questions would begin to lose the gender label.\nThe children generally were of the opinion that the\nAibo did have emotions, with a couple of them claiming\nthat this was so because it had a `mind'. This\nopinion was seemingly an accepted one with many\nchildren declaring at certain stages of the proceedings\nthat it was either happy or sad.\nUpon the exhibition of other robots and robot-like\nartifacts, the general consensus was that the Aibo did\nin fact meet the criteria for being a robot. However,\nthe most common term used to describe it was that\nof `robotic dog', where dog is the noun. This emphasis\non the dog nature of the robot suggests that the\nsubjects were still willing to consider it animal-like.\nThe youngest group, however, needed the most\nconvincing. They insisted that the Aibo was a dog,\neven after repeated demonstrations of its robotic nature\n, with the presenter even stating in no uncertain\nterms that it was a robot. They did come around\neventually, with one child using the `robotic dog' description\n, and his peers following suit.\nWe briefly describe some reaction to the other objects\n. Although initially enthusiastic, children were\nquickly disappointed by the model of the humanoid;\nmainly its inaction made it uninteresting. One child\nsaid, \"it's just a toy, not a robot\". 
The 4-legged walking\ntoy caused some laughs because it bumps into\nthings, but children realized rapidly that it did not\noffer any `interesting' interaction beside turning it on\nand off (potentially reversing the direction). The remote\ncontrol car was appealing and children wanted\nto play with it even after the presentation. It was\nclear to them they were controlling it.\nHank did\n11\ncause surprise and children wanted to continue playing\nwith it, or asked about how to program it. Children\nwanted to interact with it and explored different\nobstacles for its obstacle-backing behavior. On two\noccasions we witnessed children convinced that Hank\nalso had feelings because it was \"afraid of the dark\".\nThe mechanical-arm robot caused amazement. We\nbelieve this greater surprise was because children familiar\nwith Lego do not expect the action of a mechanical\narm lifting an object.\nWe also performed a variation in our initial approach\nto confirm some of these findings. We approached\na different grade 6 class (12 year-olds) that\nhad been already working with Mindstorm robots and\nhad done some research assignments on the Internet\nand in the library on topics such as `What is a robot?'.\nWe did a presentation in which the objects were not\nnecessarily the focus, but the properties of a robot\nwere the focus. We also demonstrated different applications\nof robotics, like using Miranda to assist a blind\nperson to read a WEB page. The method for collecting\nthe children's attitudes was a questionnaire of 25\nquestions asking children to choose between two positions\nand to give their reasons for such decisions. We\ninvited them to reflect on their responses, so they were\nasked to answer the questions over a day at school and\nat home. The results of 23 answered questionnaires\nconfirmed that a dog-looking robot rapidly acquires\nanimalistic properties and values in the minds of children\n. In particular, 75% of the children confirmed\nthat Miranda should be called a `robotic dog' rather\nthan a `dog-looking robot'. Note that the preferred\nnoun is dog over robot. The reasons provided in the\nquestionnaire are illustrative of their thinking: \"It has\nmore dog features than robot features\", \"Miranda has\ncharacteristics a dog has\", \"Kinda looks like a real\ndog\" \"It is an automatic dog\" and \"Just doesn't look\nlike a dog, she has a dog personality\". And on the\nquestion \"Does Miranda have feelings?\" again 75%\nresponded positively. Some of the reasons were \"She\njust isn't a robot. She's almost a real dog\", \"She can\nbe happy, unhappy\", If you hit her hard, she would\nmake a noise, but she felt it\". Note that in this presentation\nwe actually changed programs several times,\nradically changing the behaviors and personality of\nMiranda. Also, real dogs do not talk, but our programs\nhad a female voice for instructions to kick a\nball and a male voice reading Web pages. Only one\nchild classified Miranda as a robot because dogs do\nnot sing.\nDiscussion and Final Remarks\nThe blurring of the concept of robotic pet or canine\nmachine is of interest to us because of the direct applications\nof autonomous mobile robots in helping people\n. In particular, we foresee that people with disabilities\n, the elderly and other groups in need of assistance\n, are the first humans that will benefit from\nautonomous mobile robots. Naturally, the attitudes,\nacceptability and adequate expectations are to match\nan effective human-computer interaction. 
If the person\nexpects smarter behavior of the robot (things like\ngesture/voice recognition) and the technology does\nnot deliver, then rather than assisting, we will frustrate\nthe person. It is also important that anyone who\nencounters a person assisted by a robot approaches\nwith attitudes and gestures that allow the robotic\nassistant to facilitate the approach. The main motivation\nbehind this research is a related project on\nusing Aibo to assist blind people. While it may seem\nstraightforward that a robotic assistant for the vision\nimpaired person should be shaped as a dog, this is\nnot so. Even with guide dogs, other humans find it\ndifficult to approach and assist a blind person. Humans\nexpect a strong bond and loyalty of the animal\nto its owner, fearing that dogs may misinterpret help\nas interfering with the bond, causing then to react\nviolently.\nOur findings agree with those of others (Kahn Jr.\net al. 2002) in that there is a progression of attributes\nthat humans ascribe to robots like Aibo. This progression\nstarts from Essence, and advances to Agency,\nSocial Standing and Moral Standing. Our findings\nare that Aibo fulfills biological animistic underpinnings\n(children refer to its tail, legs, ears and behaviors\nin the same way as for living dogs). It also fulfills\nAgency properties (children attribute intentions, feelings\n, emotional states, wishes, desires, goals).\nWe left aside social standing in our methodology,\nbut strongly suspect that children attribute an emotional\nconnection and companionship to Aibo. We\nobserved a clear preference among children for `Do\nyou want to pat the dog/puppy?' over `Do you want\nto touch the robot?'. Many children made unsolicited\ncomments about how similar it was playing with Aibo\nto playing with their dog at home. Similarly, we refrained\nfrom exploring children's attribution of moral\nstanding to Aibo (for example, should Aibo be punished\nfor doing something wrong). Nevertheless, we\nreceived unsolicited suggestions that `leaving Aibo\nalone or not playing with him would make him sad'\nand that `batteries should always be charged, which\nmay mean more responsibility than for a living dog'.\nThese types of comments do attribute some rights to\nAibo and a sense that it also deserves some respect.\nOur observations indicate that Essence and\nAgency are maintained in the child's beliefs even in\nthe presence and practical illustration of other machines\nfor which they will not typically attach such\nbiological or animistic properties, nor psychological\ncharacteristics (although Bob the Builder's cars and\nmachines talk). In fact, we witnessed arguments and\ndebates among the children which turned the balance\nthe other way around, some managing to convince\nothers that Hank had feelings like `being afraid of the\ndark, because afraid is a feeling'.\nAlso, we found observations that concur with the\nwriting of anthropologist S. Turkle (Turkle 1999). In\nparticular, although we did not intend to observe\nadults, we witnessed parents and teachers attempting\nto convince the children that Aibo was a machine\nand not a dog. Some child-carers seem to interpret\nour experiment as a lecture on the living versus\nthe non-living. We believe this reflected some of\nTurkle's conclusions about the `thinking about alive-ness'\nwith older people interpreting machines and\ncomputers through mechanistic/physical interpretations\nwhile the newer generation interprets beings in\ncomputer games and robots as `sort of alive'. 
Our\nbest example of this was witnessing a parent selecting\na particular physical argument to convince her\n5-year old of `the clear difference' why Aibo is not a\ndog. This also pointed out a difference between Aibo\nand dogs that we had not observed but that the adult\nbelieved made \"the difference\". We attempt to illustrate\nit with Figure 7. Aibo has one less joint in the\nback leg than a dog (the dog, as shown in Figure 7(a),\nhas hip (1), knee (2), ankle (3) and toes (4)). This\nis one degree of freedom less and also the toes bend\nback in the dog, while they do not on Aibo. Note\nthat if we were to choose a physical argument it is\nperhaps more obvious that Aibo does not have two\neyes or does not have a wet nose. The point is that\na basic minimum of physical structure is enough to\nengage children in a psychological/conceptual interpretation\nthat then is hard to remove on the basis of\nphysical evidence.\nWe believe our results indicate that children are\n12\n(a)\n(b)\nFigure 7: A dog (a) has one more degree of freedom\nper leg than Aibo (b) and has more movement in the\ntoes than Aibo.\nquickly attached to the notion that `robotic dogs' are\ncloser to living dogs. Although we would not go as far\nas S. Turkle to suggest that `living' has a new meaning\nfor this generation of children, we suggest that\nthey will see them as robotic pets more than canine\nmachines. We expect, therefore, that in the future,\nhumans will adopt more of them as an interface for\nhuman-computer interaction.\nProf. B. Shneiderman is probably the world's leading\nauthority in Human-Computer Interaction. He\nhas repeatedly been outspoken about reducing `ma-chine\nintelligence' and `software agents' for building\ncomputers that are more effective tools (Beardsley\n1999). At first, our research seems to contradict some\nof his ideas; but, interaction with a robot is interaction\nwith a computer and we agree that it allows\nfor direct manipulation, even more realistic and perhaps\nmore meaningful than on the computer screen.\nAlso, it is now clear that domestic robots will soon\nbe around us and computers will not be restricted to\noutput devices like monitors, nor will computers be\nconfined to fixed locations. Third, we argue that studies\nsuch as ours advance the possibilities of having a\n`controllable, consistent and predictable interaction'\nwith a robot. Thus, we share the vision of interaction\nfacilitated by proper design. Finally, our aim is\ninteraction with people who are blind. In such case,\nvisualization (the coloring of pixels in a monitor) for\n`insight' cannot be used. Shneiderman also agrees on\nthis point. We argue that properly designed robots\nwill offer a multi-modal interface where insight is com-municated\nby embodiment and movement as well as\nsound.\nOther papers in the literature confirm that people\nmay develop strong attachments, and even affectionate\nrelationships with artificial information systems.\nThose studies involve human adults on one side and\nrather simple emulations of human intelligence in the\nother. In such cases, the interface has been rather\nsimple (or at least not multi-modal), like through a\nphone conversation. It is interesting that this may\nhave both positive and negative outcomes. For example\n, as reported in the case of a `Health Behavior Ad-visor\nSystem' (Kaplan, Farzanfar & Friedman 1999),\nsome patients felt motivated to follow a healthier life\nstyle, while others found it inflicted a sense of guilt\nthat did not motivate healthier habits. 
We believe\nthat understanding people's expectations for robots\nis important since these expectations will define the\ncontext for the interactions that may result in effective\nuse of robotic technology. An example is the potential\nattribution of moral standing to robots. This\ncould eventually regard the robot (and not its manufacturer\n) as responsible for its actions. Certainly, this\nwould have many implications for our society.\nAcknowledgments\nThe authors wish to thank the anonymous referees for\nthe constructive feedback provided in their reviews.\nThis work was supported by a Griffith University Research\nGrant as part of the project \"Robots to guide\nthe blind - the application of physical collaborative\nautonomous intelligent agents\".\nReferences\nAylett, B. (2002), ROBOTS -- Bringing intelligent\nmachines to life?, ABC Books, Sydney NSW,\nAustralia.\nBeardsley, T. (1999), `Humans unite!', Scientific\nAmerican March, 3536. Profile Column.\nBrooks, R. (2002), `Humanoid robots', Communications\nof the ACM 45(3), 3338.\nBumby, K. & Dautenhahn, K. (1999), Investigating\nchildren's attitudes towards robots: A case\nstudy, in `Proceedings of the Third Cognitive\nTechnology Conference, CT'99', M.I.N.D. Lab,\nMichigan State University, East Lansing, MI.,\npp. 391410.\nFong, T., Nourbakhsh, I. & Dautenhahn, K. (2003),\n`A survey of socially interactive robots', Robotics\nand Autonomous Systems 42, 235243.\nKahn Jr., P., Friedman, B. & Hagman, J. (2002), I\ncare about him as a pal: Conceptions of robotic\npets in online Aibo discussion forum, in `Proceedings\nof CHI, Interactive Poster: Fun changing\nthe world, changing ourselves', pp. 632633.\nKahn Jr., P. J., Friedman, B., Freier, N. & Severson,\nR. (2003), Coding manual for children's interactions\nwith Aibo, the robotic dog -- the preschool\nstudy, Technical Report UW CSE 03-04-03, Department\nof Computer Science and Engineering,\nUniversity of Washington, Seattle, US.\nKahney,\nL.\n(2003),\n`The\nnew\npet\ncraze:\nRobovacs',\nWired\nMagazine.\nJune,\n16th;\nvisited\nSeptenber\n10th,\n2003,\nwww.wired.com/news/technology/0,1282,59249,00.html.\nKaplan,\nB.,\nFarzanfar,\nR.\n& Friedman,\nR.\n(1999),\nEthnographic\ninterviews\nto\nelicit\npatients,\nreactions to an intelligent interactive\ntelephone\nhealth\nbehavior\nadvisor\nsystem,\nin\nM.\nN.M.\nLorenzi,\nBethesda,\ned.,\n`Proceedings:\nAMIA\nSymposiu',\nAmerican\nMedical\nInformatics\nAssociation,\nwww.amia.org/pubs/symposia/D005604.PDF.\nKnudsen, J. (1999), The Unofficial Guide to LEGO\nMINDSTORM Robots, O'Reilly, Sebastopol,\nCA.\n13\nLawry, J., Upitis, R., Klawe, M., Anderson, A.,\nInkpen, K., Ndunda, M., Hsu, D., Leroux, S.\n& Sedighian, K. (1995), `Exploring common conceptions\nabout boys and electronic games', Journal\nof Computer in Math and Science Teaching\n14\n(4), 439459.\nScheeff, M., Pinto, J., Rahardja, K., Snibbe, S. &\nTow, R. (2000), Experiences with Sparky: A social\nrobot, in `Proceedings of the Workshop on\nInteractive Robot Entertainment'.\nThrun, S., Bennewitz, M., Burgard, W., Cremers, A.,\nDellaert, F., Fox, D., Hahnel, D., Rosenberg, C.,\nRoy, N., Schulte, J. & Schulz, D. (1999), MINERVA\n: A tour-guide robot that learns, in `KI Kunstliche\nIntelligenz', pp. 1426.\nTurkle, S. (1999), What are we thinking about when\nwe are thinking about computers?, in M. Biagi-oli\n, ed., `The Science Studies Reader', Routledge,\nNew York.\nVeloso, M., Uther, W., Fujita, M., Asada, M. &\nKitano, H. 
(1998), Playing soccer with legged\nrobots, in `In Proceedings of IROS-98, Intelligent\nRobots and Systems Conference', Victoria,\nCanada.\n14", "keywords": "intelligent creatures;human attitudes;language;essence;agency;robots;perceived attitude;behavioral science;zoo-morphological autonomous mobile robots;robot attributes;multi-modal interfaces;interaction;feelings;hci"} {"name": "74", "title": "Downloading Textual Hidden Web Content Through Keyword Queries", "abstract": "An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only \"entry point\" to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration . We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents ) after issuing fewer than 100 queries.", "fulltext": "INTRODUCTION\nRecent studies show that a significant fraction of Web content\ncannot be reached by following links [7, 12]. In particular, a large\npart of the Web is \"hidden\" behind search forms and is reachable\nonly when users type in a set of keywords, or queries, to the forms.\nThese pages are often referred to as the Hidden Web [17] or the\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nJCDL'05, June 711, 2005, Denver, Colorado, USA.\nCopyright 2005 ACM 1-58113-876-8/05/0006 ...\n$\n5.00.\nDeep Web [7], because search engines typically cannot index the\npages and do not return them in their results (thus, the pages are\nessentially \"hidden\" from a typical Web user).\nAccording to many studies, the size of the Hidden Web increases\nrapidly as more organizations put their valuable content online\nthrough an easy-to-use Web interface [7]. In [12], Chang et al.\nestimate that well over 100,000 Hidden-Web sites currently exist\non the Web. Moreover, the content provided by many Hidden-Web\nsites is often of very high quality and can be extremely valuable\nto many users [7]. 
For example, PubMed hosts many high-quality\npapers on medical research that were selected from careful peer-review\nprocesses, while the site of the US Patent and Trademarks\nOffice\n1\nmakes existing patent documents available, helping potential\ninventors examine \"prior art.\"\nIn this paper, we study how we can build a Hidden-Web crawler\n2\nthat can automatically download pages from the Hidden Web, so\nthat search engines can index them. Conventional crawlers rely\non the hyperlinks on the Web to discover pages, so current search\nengines cannot index the Hidden-Web pages (due to the lack of\nlinks). We believe that an effective Hidden-Web crawler can have\na tremendous impact on how users search information on the Web:\nTapping into unexplored information:\nThe Hidden-Web\ncrawler will allow an average Web user to easily explore the\nvast amount of information that is mostly \"hidden\" at present.\nSince a majority of Web users rely on search engines to discover\npages, when pages are not indexed by search engines, they are\nunlikely to be viewed by many Web users. Unless users go directly\nto Hidden-Web sites and issue queries there, they cannot\naccess the pages at the sites.\nImproving user experience: Even if a user is aware of a number\nof Hidden-Web sites, the user still has to waste a significant\namount of time and effort, visiting all of the potentially relevant\nsites, querying each of them and exploring the result. By making\nthe Hidden-Web pages searchable at a central location, we can\nsignificantly reduce the user's wasted time and effort in searching\nthe Hidden Web.\nReducing potential bias: Due to the heavy reliance of many Web\nusers on search engines for locating information, search engines\ninfluence how the users perceive the Web [28]. Users do not\nnecessarily perceive what actually exists on the Web, but what\nis indexed by search engines [28]. According to a recent article\n[5], several organizations have recognized the importance of\nbringing information of their Hidden Web sites onto the surface,\nand committed considerable resources towards this effort. Our\n1\nUS Patent Office: http://www.uspto.gov\n2\nCrawlers are the programs that traverse the Web automatically and\ndownload pages for search engines.\n100\nFigure 1: A single-attribute search interface\nHidden-Web crawler attempts to automate this process for Hidden\nWeb sites with textual content, thus minimizing the associated\ncosts and effort required.\nGiven that the only \"entry\" to Hidden Web pages is through\nquerying a search form, there are two core challenges to implementing\nan effective Hidden Web crawler: (a) The crawler has to\nbe able to understand and model a query interface, and (b) The\ncrawler has to come up with meaningful queries to issue to the\nquery interface. The first challenge was addressed by Raghavan\nand Garcia-Molina in [29], where a method for learning search interfaces\nwas presented. Here, we present a solution to the second\nchallenge, i.e. how a crawler can automatically generate queries so\nthat it can discover and download the Hidden Web pages.\nClearly, when the search forms list all possible values for a query\n(e.g., through a drop-down list), the solution is straightforward. We\nexhaustively issue all possible queries, one query at a time. When\nthe query forms have a \"free text\" input, however, an infinite number\nof queries are possible, so we cannot exhaustively issue all possible\nqueries. In this case, what queries should we pick? 
Can the\ncrawler automatically come up with meaningful queries without\nunderstanding the semantics of the search form?\nIn this paper, we provide a theoretical framework to investigate\nthe Hidden-Web crawling problem and propose effective ways of\ngenerating queries automatically. We also evaluate our proposed\nsolutions through experiments conducted on real Hidden-Web sites.\nIn summary, this paper makes the following contributions:\nWe present a formal framework to study the problem of Hidden-Web\ncrawling. (Section 2).\nWe investigate a number of crawling policies for the Hidden\nWeb, including the optimal policy that can potentially download\nthe maximum number of pages through the minimum number of\ninteractions. Unfortunately, we show that the optimal policy is\nNP-hard and cannot be implemented in practice (Section 2.2).\nWe propose a new adaptive policy that approximates the optimal\npolicy. Our adaptive policy examines the pages returned from\nprevious queries and adapts its query-selection policy automatically\nbased on them (Section 3).\nWe evaluate various crawling policies through experiments on\nreal Web sites. Our experiments will show the relative advantages\nof various crawling policies and demonstrate their potential\n. The results from our experiments are very promising. In\none experiment, for example, our adaptive policy downloaded\nmore than 90% of the pages within PubMed (that contains 14\nmillion documents) after it issued fewer than 100 queries.\nFRAMEWORK\nIn this section, we present a formal framework for the study of\nthe Hidden-Web crawling problem. In Section 2.1, we describe our\nassumptions on Hidden-Web sites and explain how users interact\nwith the sites. Based on this interaction model, we present a high-level\nalgorithm for a Hidden-Web crawler in Section 2.2. Finally in\nSection 2.3, we formalize the Hidden-Web crawling problem.\n2.1\nHidden-Web database model\nThere exists a variety of Hidden Web sources that provide information\non a multitude of topics. Depending on the type of information\n, we may categorize a Hidden-Web site either as a textual\ndatabase or a structured database. A textual database is a site that\nFigure 2: A multi-attribute search interface\nmainly contains plain-text documents, such as PubMed and Lexis-Nexis\n(an online database of legal documents [1]). Since plain-text\ndocuments do not usually have well-defined structure, most\ntextual databases provide a simple search interface where users\ntype a list of keywords in a single search box (Figure 1). In contrast\n, a structured database often contains multi-attribute relational\ndata (e.g., a book on the Amazon Web site may have the fields\ntitle=`Harry Potter'\n, author=`J.K. Rowling' and\nisbn=`0590353403'\n) and supports multi-attribute search interfaces\n(Figure 2). In this paper, we will mainly focus on textual\ndatabases that support single-attribute keyword queries. We\ndiscuss how we can extend our ideas for the textual databases to\nmulti-attribute structured databases in Section 6.1.\nTypically, the users need to take the following steps in order to\naccess pages in a Hidden-Web database:\n1. Step 1. First, the user issues a query, say \"liver,\" through the\nsearch interface provided by the Web site (such as the one shown\nin Figure 1).\n2. Step 2. Shortly after the user issues the query, she is presented\nwith a result index page. That is, the Web site returns a list of\nlinks to potentially relevant Web pages, as shown in Figure 3(a).\n3. Step 3. 
From the list in the result index page, the user identifies the pages that look "interesting" and follows the links. Clicking on a link leads the user to the actual Web page, such as the one shown in Figure 3(b), that the user wants to look at.
2.2 A generic Hidden Web crawling algorithm
Given that the only "entry" to the pages in a Hidden-Web site is its search form, a Hidden-Web crawler should follow the three steps described in the previous section. That is, the crawler has to generate a query, issue it to the Web site, download the result index page, and follow the links to download the actual pages. In most cases, a crawler has limited time and network resources, so the crawler repeats these steps until it uses up its resources.
In Figure 4 we show the generic algorithm for a Hidden-Web crawler. For simplicity, we assume that the Hidden-Web crawler issues single-term queries only.^3 The crawler first decides which query term it is going to use (Step (2)), issues the query, and retrieves the result index page (Step (3)). Finally, based on the links found on the result index page, it downloads the Hidden Web pages from the site (Step (4)). This same process is repeated until all the available resources are used up (Step (1)).
Given this algorithm, we can see that the most critical decision that a crawler has to make is what query to issue next. If the crawler can issue successful queries that will return many matching pages, the crawler can finish its crawling early on using minimum resources. In contrast, if the crawler issues completely irrelevant queries that do not return any matching pages, it may waste all of its resources simply issuing queries without ever retrieving actual pages. Therefore, how the crawler selects the next query can greatly affect its effectiveness. In the next section, we formalize this query selection problem.
^3 For most Web sites that assume "AND" for multi-keyword queries, single-term queries return the maximum number of results. Extending our work to multi-keyword queries is straightforward.
Figure 3: Pages from the PubMed Web site. (a) List of matching pages for query "liver". (b) The first matching page for "liver".
ALGORITHM 2.1. Crawling a Hidden Web site
Procedure
(1) while ( there are available resources ) do
      // select a term to send to the site
(2)   q_i = SelectTerm()
      // send query and acquire result index page
(3)   R(q_i) = QueryWebSite( q_i )
      // download the pages of interest
(4)   Download( R(q_i) )
(5) done
Figure 4: Algorithm for crawling a Hidden Web site.
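To make the loop of Figure 4 concrete, the following is a minimal Python sketch of Algorithm 2.1. The helpers select_term, query_web_site and download, as well as the resource budget, are hypothetical placeholders for a term-selection policy, an HTTP client and a storage layer; they are not part of the paper.

def crawl_hidden_web_site(select_term, query_web_site, download, budget):
    """Generic Hidden-Web crawling loop in the spirit of Algorithm 2.1.

    select_term()      -> next keyword to issue (Step 2)
    query_web_site(q)  -> list of result links for query q (Step 3)
    download(links)    -> fetches the actual pages, returns the resources spent (Step 4)
    budget             -> total download resources available (Step 1)
    """
    used = 0
    while used < budget:                 # (1) while there are available resources
        q = select_term()                # (2) select a term to send to the site
        if q is None:
            break                        # no promising candidate term is left
        result_links = query_web_site(q) # (3) send query, acquire result index page
        used += download(result_links)   # (4) download the pages of interest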
2.3 Problem formalization
Theoretically, the problem of query selection can be formalized as follows: We assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5). We represent each Web page in S as a point (dots in Figure 5). Every potential query q_i that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue q_i to the site. Each subset is associated with a weight that represents the cost of issuing the query. Under this formalization, our goal is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost). This problem is equivalent to the set-covering problem in graph theory [16].
Figure 5: A set-formalization of the optimal query selection problem.
There are two main difficulties that we need to address in this formalization. First, in a practical situation, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance. Without knowing these subsets the crawler cannot decide which queries to pick to maximize the coverage. Second, the set-covering problem is known to be NP-Hard [16], so an efficient algorithm to solve this problem optimally in polynomial time has yet to be found.
In this paper, we will present an approximation algorithm that can find a near-optimal solution at a reasonable computational cost. Our algorithm leverages the observation that although we do not know which pages will be returned by each query q_i that we issue, we can predict how many pages will be returned. Based on this information our query selection algorithm can then select the "best" queries that cover the content of the Web site. We present our prediction method and our query selection algorithm in Section 3.
2.3.1 Performance Metric
Before we present our ideas for the query selection problem, we briefly discuss some of our notation and the cost/performance metrics.
Given a query q_i, we use P(q_i) to denote the fraction of pages that we will get back if we issue query q_i to the site. For example, if a Web site has 10,000 pages in total, and if 3,000 pages are returned for the query q_i = "medicine", then P(q_i) = 0.3. We use P(q_1 ∧ q_2) to represent the fraction of pages that are returned from both q_1 and q_2 (i.e., the intersection of P(q_1) and P(q_2)). Similarly, we use P(q_1 ∨ q_2) to represent the fraction of pages that are returned from either q_1 or q_2 (i.e., the union of P(q_1) and P(q_2)).
We also use Cost(q_i) to represent the cost of issuing the query q_i. Depending on the scenario, the cost can be measured either in time, network bandwidth, the number of interactions with the site, or it can be a function of all of these. As we will see later, our proposed algorithms are independent of the exact cost function.
In the most common case, the query cost consists of a number of factors, including the cost for submitting the query to the site, retrieving the result index page (Figure 3(a)) and downloading the actual pages (Figure 3(b)). We assume that submitting a query incurs a fixed cost of c_q. The cost for downloading the result index page is proportional to the number of matching documents to the query, while the cost c_d for downloading a matching document is also fixed. Then the overall cost of query q_i is
  Cost(q_i) = c_q + c_r P(q_i) + c_d P(q_i).   (1)
In certain cases, some of the documents from q_i may have already been downloaded from previous queries. In this case, the crawler may skip downloading these documents and the cost of q_i can be
  Cost(q_i) = c_q + c_r P(q_i) + c_d P_new(q_i).   (2)
Here, we use P_new(q_i) to represent the fraction of the new documents from q_i that have not been retrieved from previous queries. Later in Section 3.1 we will study how we can estimate P(q_i) and P_new(q_i) to estimate the cost of q_i.
Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(q_i) in this paper. When we need a concrete cost function, however, we will use Equation 2.
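As a concrete reading of Equations 1 and 2, a minimal Python sketch is given below; the constants c_q, c_r and c_d are placeholder values of our own choosing, not costs reported in the paper.

def query_cost(p_qi, p_new=None, c_q=1.0, c_r=0.01, c_d=0.1):
    # Equation 1 when p_new is None (every matching page is downloaded);
    # Equation 2 otherwise (only the not-yet-seen fraction p_new is fetched).
    if p_new is None:
        p_new = p_qi
    return c_q + c_r * p_qi + c_d * p_new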
When\nwe need a concrete cost function, however, we will use Equation 2.\nGiven the notation, we can formalize the goal of a Hidden-Web\ncrawler as follows:\n102\nP\nROBLEM\n1. Find the set of queries\nq\n1\n, . . . , q\nn\nthat maximizes\nP (q\n1\nq\nn\n)\nunder the constraint\nn\ni=1\nCost\n(q\ni\n) t.\nHere,\nt is the maximum download resource that the crawler has.\nKEYWORD SELECTION\nHow should a crawler select the queries to issue? Given that the\ngoal is to download the maximum number of unique documents\nfrom a textual database, we may consider one of the following options\n:\nRandom: We select random keywords from, say, an English dictionary\nand issue them to the database. The hope is that a random\nquery will return a reasonable number of matching documents.\nGeneric-frequency: We analyze a generic document corpus collected\nelsewhere (say, from the Web) and obtain the generic frequency\ndistribution of each keyword. Based on this generic distribution\n, we start with the most frequent keyword, issue it to the\nHidden-Web database and retrieve the result. We then continue\nto the second-most frequent keyword and repeat this process until\nwe exhaust all download resources. The hope is that the frequent\nkeywords in a generic corpus will also be frequent in the\nHidden-Web database, returning many matching documents.\nAdaptive: We analyze the documents returned from the previous\nqueries issued to the Hidden-Web database and estimate which\nkeyword is most likely to return the most documents. Based on\nthis analysis, we issue the most \"promising\" query, and repeat\nthe process.\nAmong these three general policies, we may consider the random\npolicy as the base comparison point since it is expected to\nperform the worst. Between the generic-frequency and the adaptive\npolicies, both policies may show similar performance if the\ncrawled database has a generic document collection without a specialized\ntopic. The adaptive policy, however, may perform significantly\nbetter than the generic-frequency policy if the database has a\nvery specialized collection that is different from the generic corpus.\nWe will experimentally compare these three policies in Section 4.\nWhile the first two policies (random and generic-frequency policies\n) are easy to implement, we need to understand how we can analyze\nthe downloaded pages to identify the most \"promising\" query\nin order to implement the adaptive policy. We address this issue in\nthe rest of this section.\n3.1\nEstimating the number of matching pages\nIn order to identify the most promising query, we need to estimate\nhow many new documents we will download if we issue the\nquery\nq\ni\nas the next query. That is, assuming that we have issued\nqueries\nq\n1\n, . . . , q\ni-1\nwe need to estimate\nP (q\n1\nq\ni-1\nq\ni\n), for\nevery potential next query\nq\ni\nand compare this value. In estimating\nthis number, we note that we can rewrite\nP (q\n1\nq\ni-1\nq\ni\n)\nas:\nP ((q\n1\nq\ni-1\n) q\ni\n)\n= P (q\n1\nq\ni-1\n) + P (q\ni\n) - P ((q\n1\nq\ni-1\n) q\ni\n)\n= P (q\n1\nq\ni-1\n) + P (q\ni\n)\n- P (q\n1\nq\ni-1\n)P (q\ni\n|q\n1\nq\ni-1\n)\n(3)\nIn the above formula, note that we can precisely measure\nP (q\n1\n\nq\ni-1\n) and P (q\ni\n| q\n1\nq\ni-1\n) by analyzing previously-downloaded\npages: We know\nP (q\n1\nq\ni-1\n), the union of\nall pages downloaded from\nq\n1\n, . . . , q\ni-1\n, since we have already issued\nq\n1\n, . . . 
In Equation 3, note that we can precisely measure P(q_1 ∨ ... ∨ q_{i-1}) and P(q_i | q_1 ∨ ... ∨ q_{i-1}) by analyzing previously-downloaded pages: We know P(q_1 ∨ ... ∨ q_{i-1}), the union of all pages downloaded from q_1, ..., q_{i-1}, since we have already issued q_1, ..., q_{i-1} and downloaded the matching pages.^4 We can also measure P(q_i | q_1 ∨ ... ∨ q_{i-1}), the probability that q_i appears in the pages from q_1, ..., q_{i-1}, by counting how many times q_i appears in those pages. Therefore, we only need to estimate P(q_i) to evaluate P(q_1 ∨ ... ∨ q_i). We may consider a number of different ways to estimate P(q_i), including the following:
1. Independence estimator: We assume that the appearance of the term q_i is independent of the terms q_1, ..., q_{i-1}. That is, we assume that P(q_i) = P(q_i | q_1 ∨ ... ∨ q_{i-1}).
2. Zipf estimator: In [19], Ipeirotis et al. proposed a method to estimate how many times a particular term occurs in the entire corpus based on a subset of documents from the corpus. Their method exploits the fact that the frequency of terms inside text collections follows a power law distribution [30, 25]. That is, if we rank all terms based on their occurrence frequency (with the most frequent term having a rank of 1, the second most frequent a rank of 2, etc.), then the frequency f of a term inside the text collection is given by:
  f = α (r + β)^{−γ}   (4)
where r is the rank of the term and α, β, and γ are constants that depend on the text collection.
Their main idea is (1) to estimate the three parameters α, β and γ based on the subset of documents that we have downloaded from previous queries, and (2) to use the estimated parameters to predict f given the ranking r of a term within the subset. For a more detailed description of how we can use this method to estimate P(q_i), we refer the reader to the extended version of this paper [27].
After we estimate the P(q_i) and P(q_i | q_1 ∨ ... ∨ q_{i-1}) values, we can calculate P(q_1 ∨ ... ∨ q_i). In Section 3.3, we explain how we can efficiently compute P(q_i | q_1 ∨ ... ∨ q_{i-1}) by maintaining a succinct summary table. In the next section, we first examine how we can use this value to decide which query we should issue next to the Hidden Web site.
3.2 Query selection algorithm
The goal of the Hidden-Web crawler is to download the maximum number of unique documents from a database using its limited download resources. Given this goal, the Hidden-Web crawler has to take two factors into account: (1) the number of new documents that can be obtained from the query q_i and (2) the cost of issuing the query q_i. For example, if two queries, q_i and q_j, incur the same cost, but q_i returns more new pages than q_j, then q_i is more desirable than q_j. Similarly, if q_i and q_j return the same number of new documents, but q_i incurs less cost than q_j, q_i is more desirable. Based on this observation, the Hidden-Web crawler may use the following efficiency metric to quantify the desirability of the query q_i:
  Efficiency(q_i) = P_new(q_i) / Cost(q_i)
Here, P_new(q_i) represents the amount of new documents returned for q_i (the pages that have not been returned for previous queries), and Cost(q_i) represents the cost of issuing the query q_i.
Intuitively, the efficiency of q_i measures how many new documents are retrieved per unit cost, and can be used as an indicator of how well our resources are spent when issuing q_i.
^4 For exact estimation, we need to know the total number of pages in the site. However, in order to compare only relative values among queries, this information is not actually needed.
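The paper defers the details of the Zipf-based estimator to [19] and the extended version [27], so the following is only a rough illustration of what fitting Equation 4 can look like: a grid search over β combined with a log-log least-squares fit for α and γ. The fitting procedure, the grid range and the function names are our own assumptions, not the method of [19].

import numpy as np

def fit_zipf(sample_frequencies):
    # Fit f = alpha * (r + beta) ** (-gamma) (Equation 4) to the strictly
    # positive term frequencies observed in the downloaded sample.
    f = np.sort(np.asarray(sample_frequencies, dtype=float))[::-1]
    r = np.arange(1, len(f) + 1, dtype=float)
    best = None
    for beta in np.linspace(0.0, 50.0, 101):        # coarse grid over beta
        x, y = np.log(r + beta), np.log(f)
        slope, intercept = np.polyfit(x, y, 1)       # least squares in log-log space
        err = np.sum((y - (slope * x + intercept)) ** 2)
        if best is None or err < best[0]:
            best = (err, np.exp(intercept), beta, -slope)
    _, alpha, beta, gamma = best
    return alpha, beta, gamma

def predicted_frequency(alpha, beta, gamma, rank):
    # Predicted frequency of the term at the given rank (Equation 4).
    return alpha * (rank + beta) ** (-gamma)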
Thus, the Hidden-Web crawler can estimate the efficiency of every candidate q_i, and select the one with the highest value. By using its resources more efficiently, the crawler may eventually download the maximum number of unique documents. In Figure 6, we show the query selection function that uses the concept of efficiency. In principle, this algorithm takes a greedy approach and tries to maximize the "potential gain" in every step.
ALGORITHM 3.1. Greedy SelectTerm()
Parameters:
  T: The list of potential query keywords
Procedure
(1) Foreach t_k in T do
(2)   Estimate Efficiency(t_k) = P_new(t_k) / Cost(t_k)
(3) done
(4) return t_k with maximum Efficiency(t_k)
Figure 6: Algorithm for selecting the next query term.
We can estimate the efficiency of every query using the estimation method described in Section 3.1. That is, the size of the new documents from the query q_i, P_new(q_i), is
  P_new(q_i) = P(q_1 ∨ ... ∨ q_{i-1} ∨ q_i) − P(q_1 ∨ ... ∨ q_{i-1})
             = P(q_i) − P(q_1 ∨ ... ∨ q_{i-1}) P(q_i | q_1 ∨ ... ∨ q_{i-1})
from Equation 3, where P(q_i) can be estimated using one of the methods described in Section 3.1. We can also estimate Cost(q_i) similarly. For example, if Cost(q_i) is
  Cost(q_i) = c_q + c_r P(q_i) + c_d P_new(q_i)
(Equation 2), we can estimate Cost(q_i) by estimating P(q_i) and P_new(q_i).
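Putting the pieces together, the sketch below mirrors Algorithm 3.1: it scores every candidate term by its estimated efficiency and returns the best one. The callables cond_prob and estimate_p, and the cost constants, are hypothetical stand-ins for the query statistics table of Section 3.3 and the estimators of Section 3.1.

def select_term(candidates, prev_coverage, cond_prob, estimate_p,
                c_q=1.0, c_r=0.01, c_d=0.1):
    """Greedy selection in the spirit of Algorithm 3.1 (Figure 6).

    cond_prob(t)  -> P(t | q1 v ... v qi-1), measured on downloaded pages
    estimate_p(t) -> estimate of P(t), e.g., independence or Zipf estimator
    """
    best_term, best_eff = None, -1.0
    for t in candidates:
        p_t = estimate_p(t)
        p_new = p_t - prev_coverage * cond_prob(t)   # Equation 3 rearranged
        cost = c_q + c_r * p_t + c_d * p_new         # Equation 2
        eff = p_new / cost if cost > 0 else 0.0
        if eff > best_eff:
            best_term, best_eff = t, eff
    return best_term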
3.3 Efficient calculation of query statistics
In estimating the efficiency of queries, we found that we need to measure P(q_i | q_1 ∨ ... ∨ q_{i-1}) for every potential query q_i. This calculation can be very time-consuming if we repeat it from scratch for every query q_i in every iteration of our algorithm. In this section, we explain how we can compute P(q_i | q_1 ∨ ... ∨ q_{i-1}) efficiently by maintaining a small table that we call a query statistics table.
The main idea for the query statistics table is that P(q_i | q_1 ∨ ... ∨ q_{i-1}) can be measured by counting how many times the keyword q_i appears within the documents downloaded from q_1, ..., q_{i-1}. We record these counts in a table, as shown in Figure 7(a). The left column of the table contains all potential query terms and the right column contains the number of previously-downloaded documents containing the respective term. For example, the table in Figure 7(a) shows that we have downloaded 50 documents so far, and the term model appears in 10 of these documents. Given this number, we can compute that P(model | q_1 ∨ ... ∨ q_{i-1}) = 10/50 = 0.2.
We note that the query statistics table needs to be updated whenever we issue a new query q_i and download more documents. This update can be done efficiently as we illustrate in the following example.
EXAMPLE 1. After examining the query statistics table of Figure 7(a), we have decided to use the term "computer" as our next query q_i. From the new query q_i = "computer," we downloaded 20 more new pages. Out of these, 12 contain the keyword "model" and 18 the keyword "disk." The table in Figure 7(b) shows the frequency of each term in the newly-downloaded pages.
We can update the old table (Figure 7(a)) to include this new information by simply adding the corresponding entries in Figures 7(a) and (b). The result is shown in Figure 7(c). For example, keyword "model" exists in 10 + 12 = 22 pages within the pages retrieved from q_1, ..., q_i. According to this new table, P(model | q_1 ∨ ... ∨ q_i) is now 22/70 ≈ 0.3.
Figure 7: Updating the query statistics table.
(a) After q_1, ..., q_{i-1} (total pages: 50): model 10, computer 38, digital 50
(b) New from q_i = computer (new pages: 20): model 12, computer 20, disk 18
(c) After q_1, ..., q_i (total pages: 50 + 20 = 70): model 10 + 12 = 22, computer 38 + 20 = 58, disk 0 + 18 = 18, digital 50 + 0 = 50
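The query statistics table lends itself to a very small implementation. The sketch below is our own illustration, assuming each downloaded document is represented as an iterable of its terms; it is not code from the paper.

from collections import Counter

class QueryStatistics:
    """Minimal sketch of the query statistics table of Section 3.3."""
    def __init__(self):
        self.doc_count = 0             # total number of downloaded documents
        self.term_counts = Counter()   # term -> number of downloaded docs containing it

    def update(self, new_documents):
        # Add the documents returned by the latest query (the update of Example 1).
        for terms in new_documents:
            self.doc_count += 1
            self.term_counts.update(set(terms))

    def cond_prob(self, term):
        # P(term | q1 v ... v qi): fraction of downloaded docs containing the term.
        if self.doc_count == 0:
            return 0.0
        return self.term_counts[term] / self.doc_count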
3.4 Crawling sites that limit the number of results
In certain cases, when a query matches a large number of pages, the Hidden Web site returns only a portion of those pages. For example, the Open Directory Project [2] allows the users to see only up to 10,000 results after they issue a query. Obviously, this kind of limitation has an immediate effect on our Hidden Web crawler. First, since we can only retrieve up to a specific number of pages per query, our crawler will need to issue more queries (and potentially will use up more resources) in order to download all the pages. Second, the query selection method that we presented in Section 3.2 assumes that for every potential query q_i we can find P(q_i | q_1 ∨ ... ∨ q_{i-1}). That is, for every query q_i we can find the fraction of documents in the whole text database that contain q_i together with at least one of q_1, ..., q_{i-1}. However, if the text database returned only a portion of the results for any of the q_1, ..., q_{i-1}, then the value P(q_i | q_1 ∨ ... ∨ q_{i-1}) is not accurate and may affect our decision for the next query q_i, and potentially the performance of our crawler. Since we cannot retrieve more results per query than the maximum number the Web site allows, our crawler has no other choice besides submitting more queries. However, there is a way to estimate the correct value for P(q_i | q_1 ∨ ... ∨ q_{i-1}) in the case where the Web site returns only a portion of the results.
Figure 8: A Web site that does not return all the results.
Again, assume that the Hidden Web site we are currently crawling is represented as the rectangle in Figure 8 and its pages as points in the figure. Assume that we have already issued queries q_1, ..., q_{i-1}, which returned a number of results less than the maximum number that the site allows, and therefore we have downloaded all the pages for these queries (big circle in Figure 8). That is, at this point, our estimation for P(q_i | q_1 ∨ ... ∨ q_{i-1}) is accurate. Now assume that we submit query q_i to the Web site, but due to a limitation in the number of results that we get back, we retrieve the set q_i' (small circle in Figure 8) instead of the set q_i (dashed circle in Figure 8). Now we need to update our query statistics table so that it has accurate information for the next step. That is, although we got the set q_i' back, for every potential query q_{i+1} we need to find P(q_{i+1} | q_1 ∨ ... ∨ q_i):
  P(q_{i+1} | q_1 ∨ ... ∨ q_i)
    = [1 / P(q_1 ∨ ... ∨ q_i)] [ P(q_{i+1} ∧ (q_1 ∨ ... ∨ q_{i-1})) + P(q_{i+1} ∧ q_i) − P(q_{i+1} ∧ q_i ∧ (q_1 ∨ ... ∨ q_{i-1})) ]   (5)
In the previous equation, we can find P(q_1 ∨ ... ∨ q_i) by estimating P(q_i) with the method shown in Section 3. Additionally, we can calculate P(q_{i+1} ∧ (q_1 ∨ ... ∨ q_{i-1})) and P(q_{i+1} ∧ q_i ∧ (q_1 ∨ ... ∨ q_{i-1})) by directly examining the documents that we have downloaded from queries q_1, ..., q_{i-1}. The term P(q_{i+1} ∧ q_i), however, is unknown and we need to estimate it. Assuming that q_i' is a random sample of q_i, then:
  P(q_{i+1} ∧ q_i') / P(q_{i+1} ∧ q_i) = P(q_i') / P(q_i)   (6)
From Equation 6 we can calculate P(q_{i+1} ∧ q_i), and after we substitute this value into Equation 5 we can find P(q_{i+1} | q_1 ∨ ... ∨ q_i).
EXPERIMENTAL EVALUATION
In this section we experimentally evaluate the performance of the various algorithms for Hidden Web crawling presented in this paper. Our goal is to validate our theoretical analysis through real-world experiments, by crawling popular Hidden Web sites of textual databases. Since the number of documents that are discovered and downloaded from a textual database depends on the selection of the words that will be issued as queries^5 to the search interface of each site, we compare the various selection policies that were described in Section 3, namely the random, generic-frequency, and adaptive algorithms.
The adaptive algorithm learns new keywords and terms from the documents that it downloads, and its selection process is driven by a cost model as described in Section 3.2. To keep our experiment and its analysis simple at this point, we will assume that the cost for every query is constant. That is, our goal is to maximize the number of downloaded pages by issuing the least number of queries. Later, in Section 4.4, we will present a comparison of our policies based on a more elaborate cost model. In addition, we use the independence estimator (Section 3.1) to estimate P(q_i) from downloaded pages. Although the independence estimator is a simple estimator, our experiments will show that it can work very well in practice.^6
^5 Throughout our experiments, once an algorithm has submitted a query to a database, we exclude the query from subsequent submissions to the same database from the same algorithm.
^6 We defer the reporting of results based on the Zipf estimation to a future work.
For the generic-frequency policy, we compute the frequency distribution of words that appear in a 5.5-million-Web-page corpus downloaded from 154 Web sites of various topics [26]. Keywords are selected based on the decreasing frequency with which they appear in this document set, with the most frequent one being selected first, followed by the second-most frequent keyword, etc.^7 Regarding the random policy, we use the same set of words collected from the Web corpus, but in this case, instead of selecting keywords based on their relative frequency, we choose them randomly (uniform distribution).
In order to further investigate how\nthe quality of the potential query-term list affects the random-based\nalgorithm, we construct two sets: one with the\n16, 000 most frequent\nwords of the term collection used in the generic-frequency\npolicy (hereafter, the random policy with the set of 16,000 words\nwill be referred to as random-16K), and another set with the\n1 million\nmost frequent words of the same collection as above (hereafter,\nreferred to as random-1M). The former set has frequent words that\nappear in a large number of documents (at least\n10, 000 in our collection\n), and therefore can be considered of \"high-quality\" terms.\nThe latter set though contains a much larger collection of words,\namong which some might be bogus, and meaningless.\nThe experiments were conducted by employing each one of the\naforementioned algorithms (adaptive, generic-frequency, random-16K\n, and random-1M) to crawl and download contents from three\nHidden Web sites: The PubMed Medical Library,\n8\nAmazon,\n9\nand\nthe Open Directory Project[2]. According to the information on\nPubMed's Web site, its collection contains approximately\n14 million\nabstracts of biomedical articles. We consider these abstracts\nas the \"documents\" in the site, and in each iteration of the adaptive\npolicy, we use these abstracts as input to the algorithm. Thus our\ngoal is to \"discover\" as many unique abstracts as possible by repeat-edly\nquerying the Web query interface provided by PubMed. The\nHidden Web crawling on the PubMed Web site can be considered\nas topic-specific, due to the fact that all abstracts within PubMed\nare related to the fields of medicine and biology.\nIn the case of the Amazon Web site, we are interested in downloading\nall the hidden pages that contain information on books.\nThe querying to Amazon is performed through the Software De-veloper's\nKit that Amazon provides for interfacing to its Web site,\nand which returns results in XML form. The generic \"keyword\"\nfield is used for searching, and as input to the adaptive policy we\nextract the product description and the text of customer reviews\nwhen present in the XML reply. Since Amazon does not provide\nany information on how many books it has in its catalogue, we use\nrandom sampling on the 10-digit ISBN number of the books to estimate\nthe size of the collection. Out of the\n10, 000 random ISBN\nnumbers queried,\n46 are found in the Amazon catalogue, therefore\nthe size of its book collection is estimated to be\n46\n10000\n10\n10\n= 4.6\nmillion books. It's also worth noting here that Amazon poses an\nupper limit on the number of results (books in our case) returned\nby each query, which is set to\n32, 000.\nAs for the third Hidden Web site, the Open Directory Project\n(hereafter also referred to as dmoz), the site maintains the links to\n3.8 million sites together with a brief summary of each listed site.\nThe links are searchable through a keyword-search interface. We\nconsider each indexed link together with its brief summary as the\ndocument of the dmoz site, and we provide the short summaries\nto the adaptive algorithm to drive the selection of new keywords\nfor querying. On the dmoz Web site, we perform two Hidden Web\ncrawls: the first is on its generic collection of\n3.8-million indexed\n7\nWe did not manually exclude stop words (e.g., the, is, of, etc.)\nfrom the keyword list. 
As it turns out, all Web sites except PubMed return matching documents for the stop words, such as "the."
^8 PubMed Medical Library: http://www.pubmed.org
^9 Amazon Inc.: http://www.amazon.com
Figure 9: Coverage of policies for PubMed.
Figure 10: Coverage of policies for Amazon.
sites, regardless of the category that they fall into. The other crawl is performed specifically on the Arts section of dmoz (http://dmoz.org/Arts), which comprises approximately 429,000 indexed sites that are relevant to Arts, making this crawl topic-specific, as in PubMed. Like Amazon, dmoz also enforces an upper limit on the number of returned results, which is 10,000 links with their summaries.
4.1 Comparison of policies
The first question that we seek to answer is the evolution of the coverage metric as we submit queries to the sites. That is, what fraction of the collection of documents stored in the Hidden Web site can we download as we continuously query for new words selected using the policies described above? More formally, we are interested in the value of P(q_1 ∨ ... ∨ q_{i-1} ∨ q_i) after we submit q_1, ..., q_i queries, and as i increases.
In Figures 9, 10, 11, and 12 we present the coverage metric for each policy, as a function of the query number, for the Web sites of PubMed, Amazon, general dmoz and the art-specific dmoz, respectively. On the y-axis the fraction of the total documents downloaded from the website is plotted, while the x-axis represents the query number. A first observation from these graphs is that, in general, the generic-frequency and the adaptive policies perform much better than the random-based algorithms. In all of the figures, the graphs for random-1M and random-16K are significantly below those of the other policies.
Figure 11: Coverage of policies for general dmoz.
Figure 12: Coverage of policies for the Arts section of dmoz.
Between the generic-frequency and the adaptive policies, we can see that the latter outperforms the former when the site is topic-specific. For example, for the PubMed site (Figure 9), the adaptive algorithm issues only 83 queries to download almost 80% of the documents stored in PubMed, but the generic-frequency algorithm requires 106 queries for the same coverage.
For the dmoz/Arts crawl (Figure 12), the difference is even more substantial: the adaptive policy is able to download 99.98% of the total sites indexed in the Directory by issuing 471 queries, while the frequency-based algorithm is much less effective using the same number of queries, and discovers only 72% of the total number of indexed sites. The adaptive algorithm, by examining the contents of the pages that it downloads at each iteration, is able to identify the topic of the site as expressed by the words that appear most frequently in the result set. Consequently, it is able to select words for subsequent queries that are more relevant to the site than those preferred by the generic-frequency policy, which are drawn from a large, generic collection. Table 1 shows a sample of 10 keywords out of 211 chosen and submitted to the PubMed Web site by the adaptive algorithm, but not by the other policies. For each keyword, we present the number of the iteration, along with the number of results that it returned. As one can see from the table, these keywords are highly relevant to the topics of medicine and biology of the Public Medical Library, and match against numerous articles stored in its Web site.

Table 1: Sample of keywords queried to PubMed exclusively by the adaptive policy
Iteration   Keyword      Number of Results
23          department   2,719,031
34          patients     1,934,428
53          clinical     1,198,322
67          treatment    4,034,565
69          medical      1,368,200
70          hospital     503,307
146         disease      1,520,908
172         protein      2,620,938

In both cases examined in Figures 9 and 12, the random-based policies perform much worse than the adaptive algorithm and the generic-frequency policy. It is worth noting, however, that the random-based policy with the small, carefully selected set of 16,000 "quality" words manages to download a considerable fraction of 42.5% from the PubMed Web site after 200 queries, while the coverage for the Arts section of dmoz reaches 22.7% after 471 queried keywords. On the other hand, the random-based approach that makes use of the vast collection of 1 million words, among which a large number are bogus keywords, fails to download even a mere 1% of the total collection after submitting the same number of query words.
For the generic collections of Amazon and the dmoz sites, shown in Figures 10 and 11 respectively, we get mixed results: the generic-frequency policy shows slightly better performance than the adaptive policy for the Amazon site (Figure 10), and the adaptive method clearly outperforms the generic-frequency policy for the general dmoz site (Figure 11). A closer look at the log files of the two Hidden Web crawlers reveals the main reason: Amazon was functioning in a very flaky way when the adaptive crawler visited it, resulting in a large number of lost results. Thus, we suspect that the slightly poor performance of the adaptive policy is due to this experimental variance. We are currently running another experiment to verify whether this is indeed the case.
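The adaptive policy's core step described above — counting how often each term occurs in the documents downloaded so far and promoting a frequent, not-yet-issued term as the next query — can be sketched roughly as follows. This is a simplification under the constant-cost model and is not the authors' implementation; in particular, it omits the estimation of how many new documents a candidate term is expected to return.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

/** Sketch of frequency-driven query selection from the documents downloaded so far. */
public class AdaptiveKeywordSelector {

    private final Map<String, Integer> termCounts = new HashMap<>();
    private final Set<String> issuedQueries = new HashSet<>();

    /** Update term statistics with the text of a newly downloaded document. */
    public void addDocument(String text) {
        for (String token : text.toLowerCase().split("[^a-z0-9]+")) {
            if (!token.isEmpty()) {
                termCounts.merge(token, 1, Integer::sum);
            }
        }
    }

    /** Pick the most frequent term that has not been issued as a query yet. */
    public Optional<String> nextQuery() {
        Optional<String> best = termCounts.entrySet().stream()
                .filter(e -> !issuedQueries.contains(e.getKey()))
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
        best.ifPresent(issuedQueries::add);
        return best;
    }
}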
Aside from this experimental variance, the Amazon result indicates that if the collection and the words that a Hidden Web site contains are generic enough, then the generic-frequency approach may be a good candidate algorithm for effective crawling.
As in the case of topic-specific Hidden Web sites, the random-based policies also exhibit poor performance compared to the other two algorithms when crawling generic sites: for the Amazon Web site, random-16K succeeds in downloading almost 36.7% after issuing 775 queries, while for the generic collection of dmoz, the fraction of the collection of links downloaded is 13.5% after the 770th query. Finally, as expected, random-1M is even worse than random-16K, downloading only 14.5% of Amazon and 0.3% of the generic dmoz.
In summary, the adaptive algorithm performs remarkably well in all cases: it is able to discover and download most of the documents stored in Hidden Web sites by issuing the least number of queries. When the collection refers to a specific topic, it is able to identify the keywords most relevant to the topic of the site and consequently ask for terms that are most likely to return a large number of results. On the other hand, the generic-frequency policy proves to be quite effective too, though less so than the adaptive policy: it is able to retrieve a large portion of the collection relatively fast, and when the site is not topic-specific, its effectiveness can reach that of the adaptive policy (e.g., Amazon). Finally, the random policy performs poorly in general, and should not be preferred.
4.2 Impact of the initial query
An interesting issue that deserves further examination is whether the initial choice of the keyword used as the first query issued by the adaptive algorithm affects its effectiveness in subsequent iterations. The choice of this keyword is not made by the adaptive algorithm itself and has to be set manually, since its query statistics tables have not been populated yet. Thus, the selection is generally arbitrary, so for purposes of fully automating the whole process, some additional investigation seems necessary.
[Figure 13: Convergence of the adaptive algorithm using different initial queries for crawling the PubMed Web site — fraction of documents vs. query number for the initial queries "pubmed", "data", "information", and "return"]
For this reason, we initiated three adaptive Hidden Web crawlers targeting the PubMed Web site with different seed words: the word "data", which returns 1,344,999 results; the word "information", which reports 308,474 documents; and the word "return", which retrieves 29,707 pages, out of 14 million. These keywords represent varying degrees of term popularity in PubMed, with the first one being of high popularity, the second of medium, and the third of low. We also show results for the keyword "pubmed", used in the experiments for coverage of Section 4.1, and which returns 695 articles. As we can see from Figure 13, after a small number of queries, all four crawlers download roughly the same fraction of the collection, regardless of their starting point: their coverages are roughly equivalent from the 25th query. Eventually, all four crawlers use the same set of terms for their queries, regardless of the initial query.
In the specific experiment, from the 36th query onward, all four crawlers use the same terms for their queries in each iteration, or the same terms are used off by one or two query numbers. Our result confirms the observation of [11] that the choice of the initial query has minimal effect on the final performance. We can explain this intuitively as follows: our algorithm approximates the optimal set of queries to use for a particular Web site. Once the algorithm has issued a significant number of queries, it has an accurate estimation of the content of the Web site, regardless of the initial query. Since this estimation is similar for all runs of the algorithm, the crawlers will use roughly the same queries.
4.3 Impact of the limit in the number of results
While the Amazon and dmoz sites have the respective limit of 32,000 and 10,000 in their result sizes, these limits may be larger than those imposed by other Hidden Web sites. In order to investigate how a "tighter" limit in the result size affects the performance of our algorithms, we performed two additional crawls to the generic-dmoz site: we ran the generic-frequency and adaptive policies but we retrieved only up to the top 1,000 results for every query. In Figure 14 we plot the coverage for the two policies as a function of the number of queries. As one might expect, by comparing the new result in Figure 14 to that of Figure 11, where the result limit was 10,000, we conclude that the tighter limit requires a higher number of queries to achieve the same coverage. For example, when the result limit was 10,000, the adaptive policy could download 70% of the site after issuing 630 queries, while it had to issue 2,600 queries to download 70% of the site when the limit was 1,000. On the other hand, our new result shows that even with a tight result limit, it is still possible to download most of a Hidden Web site after issuing a reasonable number of queries. The adaptive policy could download more than 85% of the site after issuing 3,500 queries when the limit was 1,000. Finally, our result shows that our adaptive policy consistently outperforms the generic-frequency policy regardless of the result limit. In both Figure 14 and Figure 11, our adaptive policy shows significantly larger coverage than the generic-frequency policy for the same number of queries.
[Figure 14: Coverage of general dmoz after limiting the number of results to 1,000 — cumulative fraction of unique pages downloaded per query vs. query number for the adaptive and generic-frequency policies]
4.4 Incorporating the document download cost
For brevity of presentation, the performance evaluation results provided so far assumed a simplified cost-model where every query involved a constant cost. In this section we present results regarding the performance of the adaptive and generic-frequency algorithms using Equation 2 to drive our query selection process. As we discussed in Section 2.3.1, this query cost model includes the cost for submitting the query to the site, retrieving the result index page, and also downloading the actual pages.
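Equation 2 itself is not reproduced in this excerpt; assuming it has the additive shape the description above suggests — a fixed charge per submitted query plus per-result and per-document charges — the cost of one query could be computed roughly as below. The class and parameter names are our own, not the authors'.

/** Sketch of an additive per-query cost, assuming Equation 2 has roughly this shape. */
public class QueryCostModel {
    private final double cq;  // cost of submitting one query
    private final double cr;  // cost of retrieving one entry from the result index page
    private final double cd;  // cost of downloading one actual document

    public QueryCostModel(double cq, double cr, double cd) {
        this.cq = cq;
        this.cr = cr;
        this.cd = cd;
    }

    /** Cost of a query returning `results` index entries, of which `downloads` documents are fetched. */
    public double cost(long results, long downloads) {
        return cq + cr * results + cd * downloads;
    }
}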
For these costs, we examined\nthe size of every result in the index page and the sizes of the\ndocuments, and we chose\nc\nq\n= 100, c\nr\n= 100, and c\nd\n= 10000,\nas values for the parameters of Equation 2, and for the particular\nexperiment that we ran on the PubMed website. The values that\nwe selected imply that the cost for issuing one query and retrieving\none result from the result index page are roughly the same, while\nthe cost for downloading an actual page is 100 times larger. We\nbelieve that these values are reasonable for the PubMed Web site.\nFigure 15 shows the coverage of the adaptive and generic-frequency\nalgorithms as a function of the resource units used during\nthe download process. The horizontal axis is the amount of\nresources used, and the vertical axis is the coverage. As it is evident\nfrom the graph, the adaptive policy makes more efficient use of\nthe available resources, as it is able to download more articles than\nthe generic-frequency, using the same amount of resource units.\nHowever, the difference in coverage is less dramatic in this case,\ncompared to the graph of Figure 9. The smaller difference is due\nto the fact that under the current cost metric, the download cost of\ndocuments constitutes a significant portion of the cost. Therefore,\nwhen both policies downloaded the same number of documents,\nthe saving of the adaptive policy is not as dramatic as before. That\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\n0\n5000\n10000\n15000\n20000\n25000\n30000\nFraction of Unique Pages\nTotal Cost (c\nq\n=100, c\nr\n=100, c\nd\n=10000)\nCumulative fraction of unique pages downloaded per cost unit - PubMed Web site\nadaptive\nfrequency\nFigure 15: Coverage of PubMed after incorporating the document\ndownload cost\nis, the savings in the query cost and the result index download cost\nis only a relatively small portion of the overall cost. Still, we observe\nnoticeable savings from the adaptive policy. At the total cost\nof 8000, for example, the coverage of the adaptive policy is roughly\n0.5 while the coverage of the frequency policy is only 0.3.\nRELATED WORK\nIn a recent study, Raghavan and Garcia-Molina [29] present an\narchitectural model for a Hidden Web crawler. The main focus of\nthis work is to learn Hidden-Web query interfaces, not to generate\nqueries automatically. The potential queries are either provided\nmanually by users or collected from the query interfaces. In contrast\n, our main focus is to generate queries automatically without\nany human intervention.\nThe idea of automatically issuing queries to a database and examining\nthe results has been previously used in different contexts.\nFor example, in [10, 11], Callan and Connel try to acquire an accurate\nlanguage model by collecting a uniform random sample from\nthe database. In [22] Lawrence and Giles issue random queries to\na number of Web Search Engines in order to estimate the fraction\nof the Web that has been indexed by each of them. In a similar\nfashion, Bharat and Broder [8] issue random queries to a set of\nSearch Engines in order to estimate the relative size and overlap of\ntheir indexes. In [6], Barbosa and Freire experimentally evaluate\nmethods for building multi-keyword queries that can return a large\nfraction of a document collection. Our work differs from the previous\nstudies in two ways. 
First, it provides a theoretical framework\nfor analyzing the process of generating queries for a database and\nexamining the results, which can help us better understand the effectiveness\nof the methods presented in the previous work. Second,\nwe apply our framework to the problem of Hidden Web crawling\nand demonstrate the efficiency of our algorithms.\nCope et al. [15] propose a method to automatically detect whether\na particular Web page contains a search form. This work is complementary\nto ours; once we detect search interfaces on the Web\nusing the method in [15], we may use our proposed algorithms to\ndownload pages automatically from those Web sites.\nReference [4] reports methods to estimate what fraction of a\ntext database can be eventually acquired by issuing queries to the\ndatabase. In [3] the authors study query-based techniques that can\nextract relational data from large text databases. Again, these works\nstudy orthogonal issues and are complementary to our work.\nIn order to make documents in multiple textual databases searchable\nat a central place, a number of \"harvesting\" approaches have\n108\nbeen proposed (e.g., OAI [21], DP9 [24]). These approaches essentially\nassume cooperative document databases that willingly share\nsome of their metadata and/or documents to help a third-party search\nengine to index the documents. Our approach assumes uncoop-erative\ndatabases that do not share their data publicly and whose\ndocuments are accessible only through search interfaces.\nThere exists a large body of work studying how to identify the\nmost relevant database given a user query [20, 19, 14, 23, 18]. This\nbody of work is often referred to as meta-searching or database\nselection problem over the Hidden Web. For example, [19] suggests\nthe use of focused probing to classify databases into a topical\ncategory, so that given a query, a relevant database can be selected\nbased on its topical category. Our vision is different from this body\nof work in that we intend to download and index the Hidden pages\nat a central location in advance, so that users can access all the\ninformation at their convenience from one single location.\nCONCLUSION AND FUTURE WORK\nTraditional crawlers normally follow links on the Web to discover\nand download pages. Therefore they cannot get to the Hidden\nWeb pages which are only accessible through query interfaces. In\nthis paper, we studied how we can build a Hidden Web crawler that\ncan automatically query a Hidden Web site and download pages\nfrom it. We proposed three different query generation policies for\nthe Hidden Web: a policy that picks queries at random from a list\nof keywords, a policy that picks queries based on their frequency\nin a generic text collection, and a policy which adaptively picks a\ngood query based on the content of the pages downloaded from the\nHidden Web site. Experimental evaluation on 4 real Hidden Web\nsites shows that our policies have a great potential. In particular, in\ncertain cases the adaptive policy can download more than\n90% of\na Hidden Web site after issuing approximately\n100 queries. Given\nthese results, we believe that our work provides a potential mechanism\nto improve the search-engine coverage of the Web and the\nuser experience of Web search.\n6.1\nFuture Work\nWe briefly discuss some future-research avenues.\nMulti-attribute Databases\nWe are currently investigating how\nto extend our ideas to structured multi-attribute databases. 
While\ngenerating queries for multi-attribute databases is clearly a more\ndifficult problem, we may exploit the following observation to address\nthis problem: When a site supports multi-attribute queries,\nthe site often returns pages that contain values for each of the query\nattributes. For example, when an online bookstore supports queries\non title, author and isbn, the pages returned from a query\ntypically contain the title, author and ISBN of corresponding books.\nThus, if we can analyze the returned pages and extract the values\nfor each field (e.g, title = `Harry Potter', author =\n`J.K. Rowling'\n, etc), we can apply the same idea that we\nused for the textual database: estimate the frequency of each attribute\nvalue and pick the most promising one. The main challenge\nis to automatically segment the returned pages so that we can identify\nthe sections of the pages that present the values corresponding\nto each attribute. Since many Web sites follow limited formatting\nstyles in presenting multiple attributes -- for example, most book\ntitles are preceded by the label \"Title:\" -- we believe we may learn\npage-segmentation rules automatically from a small set of training\nexamples.\nOther Practical Issues\nIn addition to the automatic query generation\nproblem, there are many practical issues to be addressed\nto build a fully automatic Hidden-Web crawler. For example, in\nthis paper we assumed that the crawler already knows all query interfaces\nfor Hidden-Web sites. But how can the crawler discover\nthe query interfaces? The method proposed in [15] may be a good\nstarting point. In addition, some Hidden-Web sites return their results\nin batches of, say, 20 pages, so the user has to click on a\n\"next\" button in order to see more results. In this case, a fully automatic\nHidden-Web crawler should know that the first result index\npage contains only a partial result and \"press\" the next button automatically\n. Finally, some Hidden Web sites may contain an infinite\nnumber of Hidden Web pages which do not contribute much significant\ncontent (e.g. a calendar with links for every day). In this\ncase the Hidden-Web crawler should be able to detect that the site\ndoes not have much more new content and stop downloading pages\nfrom the site. Page similarity detection algorithms may be useful\nfor this purpose [9, 13].\nREFERENCES\n[1] Lexisnexis http://www.lexisnexis.com.\n[2] The Open Directory Project, http://www.dmoz.org.\n[3] E. Agichtein and L. Gravano. Querying text databases for efficient information\nextraction. In ICDE, 2003.\n[4] E. Agichtein, P. Ipeirotis, and L. Gravano. Modeling query-based access to text\ndatabases. In WebDB, 2003.\n[5] Article on New York Times. Old Search Engine, the Library, Tries to Fit Into a\nGoogle World. Available at: http:\n//www.nytimes.com/2004/06/21/technology/21LIBR.html\n,\nJune 2004.\n[6] L. Barbosa and J. Freire. Siphoning hidden-web data through keyword-based\ninterfaces. In SBBD, 2004.\n[7] M. K. Bergman. The deep web: Surfacing hidden value,http:\n//www.press.umich.edu/jep/07-01/bergman.html\n.\n[8] K. Bharat and A. Broder. A technique for measuring the relative size and\noverlap of public web search engines. In WWW, 1998.\n[9] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic\nclustering of the web. In WWW, 1997.\n[10] J. Callan, M. Connell, and A. Du. Automatic discovery of language models for\ntext databases. In SIGMOD, 1999.\n[11] J. P. Callan and M. E. Connell. 
Query-based sampling of text databases.\nInformation Systems, 19(2):97130, 2001.\n[12] K. C.-C. Chang, B. He, C. Li, and Z. Zhang. Structured databases on the web:\nObservations and implications. Technical report, UIUC.\n[13] J. Cho, N. Shivakumar, and H. Garcia-Molina. Finding replicated web\ncollections. In SIGMOD, 2000.\n[14] W. Cohen and Y. Singer. Learning to query the web. In AAAI Workshop on\nInternet-Based Information Systems, 1996.\n[15] J. Cope, N. Craswell, and D. Hawking. Automated discovery of search\ninterfaces on the web. In 14th Australasian conference on Database\ntechnologies, 2003.\n[16] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms,\n2nd Edition. MIT Press/McGraw Hill, 2001.\n[17] D. Florescu, A. Y. Levy, and A. O. Mendelzon. Database techniques for the\nworld-wide web: A survey. SIGMOD Record, 27(3):5974, 1998.\n[18] B. He and K. C.-C. Chang. Statistical schema matching across web query\ninterfaces. In SIGMOD Conference, 2003.\n[19] P. Ipeirotis and L. Gravano. Distributed search over the hidden web:\nHierarchical database sampling and selection. In VLDB, 2002.\n[20] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe, count, and classify:\nCategorizing hidden web databases. In SIGMOD, 2001.\n[21] C. Lagoze and H. V. Sompel. The Open Archives Initiative: Building a\nlow-barrier interoperability framework In JCDL, 2001.\n[22] S. Lawrence and C. L. Giles. Searching the World Wide Web. Science,\n280(5360):98--100, 1998.\n[23] V. Z. Liu, J. C. Richard C. Luo and, and W. W. Chu. Dpro: A probabilistic\napproach for hidden web database selection using dynamic probing. In ICDE,\n2004.\n[24] X. Liu, K. Maly, M. Zubair and M. L. Nelson. DP9-An OAI Gateway Service\nfor Web Crawlers. In JCDL, 2002.\n[25] B. B. Mandelbrot. Fractal Geometry of Nature. W. H. Freeman & Co.\n[26] A. Ntoulas, J. Cho, and C. Olston. What's new on the web? the evolution of the\nweb from a search engine perspective. In WWW, 2004.\n[27] A. Ntoulas, P. Zerfos, and J. Cho. Downloading hidden web content. Technical\nreport, UCLA, 2004.\n[28] S. Olsen. Does search engine's power threaten web's independence?\nhttp://news.com.com/2009-1023-963618.html\n.\n[29] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In VLDB, 2001.\n[30] G. K. Zipf. Human Behavior and the Principle of Least-Effort.\nAddison-Wesley, Cambridge, MA, 1949.\n109\n", "keywords": "crawler;deep web;hidden web;Hidden Web crawling;query selection;efficiency;Deep Web crawler;coverage;keyword selection;adaptive algorithm;potential bias;adaptive algorithmn;accurate language model;keyword query;keyword queries"} {"name": "75", "title": "Easy Language Extension with Meta-AspectJ", "abstract": "Domain-specific languages hold the potential of automating the software development process. Nevertheless, the adop-tion of a domain-specific language is hindered by the difficulty of transitioning to different language syntax and employing a separate translator in the software build process. We present a methodology that simplifies the development and deployment of small language extensions, in the context of Java. The main language design principle is that of language extension through unobtrusive annotations. The main language implementation idea is to express the language as a generator of customized AspectJ aspects, using our Meta-AspectJ tool. The advantages of the approach are twofold. First, the tool integrates into an existing software application much as a regular API or library, instead of as a language extension. 
This means that the programmer can remove the language extension at any point and choose to implement the required functionality by hand without needing to rewrite the client code. Second, a mature language implementation is easy to achieve with little effort since AspectJ takes care of the low-level issues of interfacing with the base Java language", "fulltext": "INTRODUCTION AND MOTIVATION\nThe idea of extensible languages has fascinated programmers\nfor many decades, as evident by the extensibility fea\nThis material is based upon work supported by the National\nScience Foundation under Grants No. CCR-0220248\nand CCR-0238289.\nCopyright is held by the author/owner.\nICSE'06, May 2028, 2006, Shanghai, China.\nACM 1-59593-085-X/06/0005.\ntures in languages as old as Lisp.\nFrom a Software Engineering\nstandpoint, the main advantages of expressing a\nconcept as a language feature, as opposed to a library API,\nare in terms of conciseness, safety, and performance.\nA\nlanguage feature allows expressing the programmer's intent\nmuch more concisely--in contrast, libraries are limited to\na function- or method-call syntax. A language feature also\nallow better static error checking--a library can only check\nthe static types of arguments of a function call against the\ndeclared types of the formals. Finally, a language feature\ncan take advantage of context information and employ an\noptimized implementation, while a library routine cannot be\ncustomized according to its uses.\nDespite these advantages, there are excellent reasons why\nfull language extensibility is undesirable. Changing the syntax\nand semantics of a programming language is confusing\nand can lead to incomprehensible code. Furthermore, programming\nlanguages are complex entities, designed to provide\na small number of features but allow them to be combined\nas generally as possible.\nA new feature can either\nincrease the complexity of the language implementation significantly\n(because of interactions with all existing features),\nor will need to be limited in its interactions, which is a bad\nlanguage design principle that leads to single-use features\nand design bloat.\nIn past work [3], we have advocated expressing small\nlanguage extensions purely through unobtrusive annotations\n. Indeed, the introduction of user-defined annotations\nin mainstream programming languages, such as C# and\nJava, has allowed specialized language extensions (e.g., for\ndistributed computing, persistence, or real-time programming\n) to be added without changing the base syntax.\nWe believe that the approach of limited language extension\nthrough annotations meshes very well with an implementation\ntechnique that uses our Meta-AspectJ (MAJ)\ntool [4] to express the semantics of the language extension.\nSpecifically, MAJ is a language that allows writing programs\nthat generate aspects in the AspectJ language [1]. 
The programmer\ncan easily express an extension to the Java language\nas a program that: a) reads annotations and type\ninformation from an existing program using Java reflection;\nb) outputs a customized AspectJ aspect that transforms the\noriginal program according to the information in the annotation\n; c) executes the generated aspect by using the standard\nAspectJ compiler.\nIn other words, our approach uses the AspectJ language\nas a compiler back-end.\nAspectJ code is not written by\nthe application programmer but generated by the language\n865\nextension, for the sole purpose of expressing program transformations\neasily and generally. This is appropriate, as AspectJ\nembodies the Aspect-Oriented Programming [2] philosophy\nof expressing program enhancements orthogonally\nand independently of the original source code.\nOur approach has the advantage of simplifying the implementation\nof the language extension significantly, without\nencouraging undisciplined language extension (since the\nonly extensions allowed are through annotations). Specifically\n, the approach leverages the engineering sophistication\nof the AspectJ compiler implementation and its provisions\nfor dealing correctly with different Java language features.\nIf a programmer were to replicate the same effort by hand,\nshe would likely need to reproduce much of the AspectJ\ncompiler complexity.\nThe purpose of this paper is to support the idea of implementing\nsmall language extensions as programs that produce\naspects. We have recently implemented a number of\nsuch small extensions to Java and they all exhibit a striking\nsimplicity. Specifically, we did not have to implement (or\nextend) a Java parser, we did not need to deal with syntax\ntree pattern matching and transformation, and we did not\nneed to provide special handling for many Java complexities.\nThe combined annotations-MAJ approach ensured that our\nsmall language extensions were implementable in a few hundreds\nof lines of code, without sacrificing generality in their\nconditions of use. We discuss two such extensions in detail,\nafter first introducing the MAJ language.\nBACKGROUND MAJ SYNTAX\nMAJ is an extension of Java that allows writing programs\nthat generate AspectJ source code.\nMAJ offers\ntwo operators for creating AspectJ code fragments: `[...]\n(\"quote\") and #[...] (\"unquote\").\nThe quote operator\ncreates representations of AspectJ code fragments. Parts\nof these representations can be variable and are desig-nated\nby the unquote operator (instances of unquote can\nonly occur inside a quoted code fragment).\nFor example\n, the value of the MAJ expression `[call(* *(..))] is\na data structure that represents the abstract syntax tree\nfor the fragment of AspectJ code call(* *(..)).\nSimilarly\n, the MAJ expression `[!within(#className)] is a\nquoted pattern with an unquoted part. Its value depends\non the value of the variable className.\nIf, for instance,\nclassName holds the identifier \"SomeClass\", the value of\n`[!within(#className)] is the abstract syntax tree for the\nexpression !within(SomeClass).\nMAJ also introduces a new keyword infer that can be\nused in place of a type name when a new variable is being\ndeclared and initialized to a quoted expression. For example,\nwe can write:\ninfer pct1 = `[call(* *(..))];\nThis declares a variable pct1 that can be used just like any\nother program variable. 
For instance, we can unquote it:\ninfer adv1 = `[void around() : #pct1 { }];\nThis creates the abstract syntax tree for a piece of AspectJ\ncode defining (empty) advice for a pointcut. Of course, since\nAspectJ is an extension of Java, any regular Java program\nfragment can be generated using MAJ.\nWe can now see a full MAJ method that generates a trivial\nbut complete AspectJ file:\nvoid generateTrivialLogging(String classNm) {\ninfer aspectCode =\n`[\npackage MyPackage;\naspect #[classNm + "Aspect"] {\nbefore : call(* #classNm.*(..))\n{ System.out.println("Method called"); }\n}\n];\nSystem.out.println(aspectCode.unparse());\n}\nThe generated aspect causes a message to be printed before\nevery call of a method in a class. The name of the affected\nclass is a parameter passed to the MAJ routine. This code\nalso shows the unparse method that MAJ supports for creating\na text representation of their code.\nEXAMPLE 1 FILLING INTERFACE METHODS\nOur first language extension is simple but a good example\nto our approach, since it can be defined very quickly and it\nis hard to implement with alternate means.\nThe Java language ensures that a class cannot declare\nto \"implement\" an interface unless it provides implementations\nfor all of its methods. Nevertheless, this often results\nin very tedious code. For instance, it is very common in\ncode dealing with the Swing graphics library to implement\nan event-listener interface with many methods, yet provide\nempty implementations for most of them because the application\ndoes not care about the corresponding events. The\nexample code below is representative:\nprivate class SomeListener\nimplements MouseListener, MouseMotionListener {\npublic void mousePressed (MouseEvent event) {\n... // do something\n}\npublic void mouseDragged (MouseEvent event) {\n... // do something\n}\n// the rest are not needed. Provide empty bodies.\npublic void mouseReleased (MouseEvent event) {}\npublic void mouseEntered (MouseEvent event) {}\npublic void mouseExited (MouseEvent event) {}\npublic void mouseMoved (MouseEvent event) {}\n}\nOf course, the programmer could avoid providing the\nempty method bodies on a per-interface basis, by associating\neach interface with a class that by default provides empty\nimplementations of all interface methods. Then a client class\ncan inherit the empty implementations and only provide implementations\nfor the methods it needs. This pattern is indeed\nsupported in Swing code (through library classes called\nadapters), but it is usually not possible to employ since the\nlistener class may already have another superclass. Instead,\nit would be nice to provide a simple Java language extension\nimplemented as an annotation. The implementation of\nthe extension would be responsible for finding the unimple-mented\nmethods and supplying empty implementations by\ndefault (or implementations that just return a default primitive\nor null value, in the case of methods that have a return\n866\ntype). In this case, the above class could be written more\nsimply as:\n@Implements ({"MouseListener","MouseMotionListener"})\npublic class SomeListener {\npublic void mousePressed (MouseEvent event) {\n... // do something\n}\npublic void mouseDragged (MouseEvent event) {\n... 
// do something\n}\n}\nOf course, this extension should be used carefully since\nit weakens the tests of interface conformance performed by\nthe Java compiler.\nWe implemented the above Java extension using MAJ.\nThe code for the implementation was less than 200 lines\nlong, with most of the complexity in the traversal of Java\nclasses, interfaces, and their methods using reflection. The\ncode processes a set of given Java classes and retrieves the\nones with an Implements annotation. It then finds all methods\nthat are in any of the interfaces passed as arguments to\nthe Implements annotation and are not implemented by the\ncurrent class. For each such method, code is generated in an\nAspectJ aspect to add an appropriate method implementation\nto the class. For instance, the code to add the method\nto the class in the case of a void return type is:\ninfer newMethod =\n`[ public void #methodName (#formals) {} ];\naspectMembers.add(newMethod);\nFinally, the class needs to be declared to implement the\ninterfaces specified by the annotation. This is easily done\nby emitting the appropriate AspectJ code:\ninfer dec = `[declare parents:\n#[c.getName()] implements #[iface.getName()]; ];\nThe final aspect (slightly simplified for formatting reasons\n) generated for our earlier listener class example is:\npublic aspect SomeListenerImplementsAspect1 {\nvoid SomeListener.mouseEntered(MouseEvent e) {}\nvoid SomeListener.mouseExited(MouseEvent e) {}\nvoid SomeListener.mouseMoved(MouseEvent e) {}\nvoid SomeListener.mouseReleased(MouseEvent e) {}\ndeclare parents:\nSomeListener implements MouseListener;\ndeclare parents:\nSomeListener implements MouseMotionListener;\n}\nThis aspect performs exactly the modifications required\nto the original class so that it correctly implements the\nMouseListener and MouseMotionListener interfaces.\nWe invite the reader to consider how else this language\nextension might be implemented. Our approach of using\nannotations in combination with MAJ yielded a very simple\nimplementation by letting AspectJ deal with most of the\ncomplexities of Java. Specifically, we did not have to deal\nwith the low-level complexities of either Java source syntax\nor Java bytecode. For instance, we did not have to do any\ncode parsing to find the class body or declaration that needs\nto be modified.\nDealing with Java syntactic sugar, such\nas inner classes, was automatic. We did not need to do a\nprogram transformation to add the implements clauses or\nthe extra methods to the class. Similarly, we did not need\nto worry about the valid syntax for adding an implemented\ninterface if the class already implements another.\nEXAMPLE 2 LANGUAGE SUPPORT FOR OBJECT POOLING\nOur second example language extension addresses a common\nprogramming need, especially in server-side programming\n. Software applications often find the need for pooling\nfrequently-used objects with high instantiation costs. We\nuse the following database connection class as a running example\n:\npublic class DBConnection {\npublic DBConnection(String dbURI,\nString userName,\nString password ) { ... }\npublic void open() { ... }\npublic void close() { ... }\n}\nThe cost of an open() call is very high for a database connection\n. 
In applications concerned with performance, such as\nhigh-volume websites with lots of database requests, one often\nfinds the need to pool database connections and keep\nthem open, instead of repeatedly creating new ones and\nopening them.\nMaking a class such as DBConnection into a \"pooled\"\nclass involves at the very least creating a pooling manager\nclass that knows how to manage instances of the class being\npooled.\nA different pooling manager class is needed\nfor each class being pooled, since the manager needs to\nhave class-specific information such as how to instantiate\na new instance of the class when the pool is running low\n(e.g., DBConnection objects are created by a constructor\ncall, followed by an open() call), and how to uniquely identify\nobjects of the same class that belong to different pools\n(e.g., DBConnection objects of different dbURI, userName,\nand password combinations need to be in different pools,\nand the pooling manager needs to understand which pool to\nfetch objects from when a request arrives).\nWe expressed the pooling concept as a language feature\nthat can used transparently with any Java class, as long\nas some broad properties hold regarding its construction\nand instantiation interface. The rest of the application will\nbe completely oblivious to the change. This facilitates the\napplication of pooling after a large code base which uses\nthe class in its non-pooled form has been developed. Using\nour extension, converting a class to a pooled class involves\nonly 4 annotations: @pooled, @constructor, @request, and\n@release. For example, to convert the DBConnection class\ninto a \"pooled\" class, and to adapt an existing code base to\nusing the pooled functionality, the user only has the add the\nfollowing annotations to the code:\n@pooled(mgr=pooled.PoolTypes.BASIC, max=10, min=2)\npublic class DBConnection {\n@constructor\npublic DBConnection(String dbURI,String userName,\nString password ) { ... }\n867\n@request public void open() { ... }\n@release public void close() { ... }\n}\nThe @pooled annotation indicates that class DBConnection\nshould be pooled. It accepts parameters that can be used to\ncustomize the pooling policy. @constructor annotates the\nconstructor call whose parameters serve as unique identifiers\nfor different kinds of DBConnection objects. In this example,\nDBConnection objects with different dbURI, userName, and\npassword combinations should be maintained separately.\n@request annotates the method that signals for the request\nof a pooled object, and @release annotates the method call\nthat signals for the return of the pooled object back to the\npooling manager.\nThe implementation of this extension using MAJ is less\nthan 400 lines of code.\nThe MAJ program searches for\nclasses annotated with @pooled, and generates two Java\nclasses and one aspect to facilitate converting this class to be\npooled. We next describe the generated code in more detail.\nThe reader may want to consider in parallel how the same\ntask could be accomplished through other means. 
Neither\nconventional Java facilities (i.e., classes and generics) nor\nAspectJ alone would be sufficient for expressing the functionality\nwe describe below in a general way, so that it can\nbe applied with little effort to arbitrary unomdified classes.\nFor instance, none of these facilities can be used to create a\nproxy class with methods with identical signatures as those\nof an arbitrary Java class.\nFirst, a pooling manager class, PoolMgrForDBConnection,\nis generated for DBConnection.\nThe pooling manager\nclass contains methods for requesting and releasing pooled\nDBConnection objects, as well as code to manage the expansion\nof the pool based on the min and max parameters.\nIn order to retrofit an existing code base to use\nDBConnection as a pooled class, we need to introduce proxy\nobjects that will be used wherever an object of the original\nclass would exist in the application code. This is necessary\nas different objects from the perspective of the client code\nwill correspond to the same pooled object. We generate a\nproxy class as a subclass of the pooled class. In our example\n: DBConnection_Proxy extends DBConnection. All\ninstances of the proxy class share a static reference to\nan instance of PoolMgrForDBConnection.\nEach proxy instance\nholds (non-static) references to the parameters to\nthe @constructor constructor call, and the DBConnection\nobject obtained from the pooling manager. The proxy class\nrewrites the @request and @release methods: the @request\nmethod is rewritten to obtain an object of DBConnection\ntype from the pooling manager, using the unique identifiers\nkept from the constructor call, and the @release method\nreturns the DBConnection method back to the pool, while\nsetting the reference to this object to null. The MAJ code in\nthe proxy takes care to exactly replicate the signature of the\noriginal methods, including modifiers and throws clauses.\nFor instance, the @release method in the proxy is generated\nas:\ninfer meth =\n`[ #mods #ret #[m.getName()] (#formals) #throwStmt\n{\nm_poolMgr.release(m_uniqueId, m_proxiedObj);\nm_proxiedObj = null;\n}];\nAll other methods simply delegate to the same method in\nthe superclass.\nThe idea is to have variables declared to hold a\nDBConnection object, now hold a DBConnection_Proxy object\n.\nTherefore, to complete the \"proxy\" pattern, we\nneed to change all the calls of new DBConnection(...) to\nnew DBConnection_Proxy(...). This is the role of our generated\naspect: tedious recoding effort is easily replaced by\nan aspect: the aspect intercepts all the constructor calls of\nDBConnection, and returns an object instantiated by calling\nnew DBConnection_Proxy(...).\nIn summary, a user can easily turn a class into a pooled\nclass, and retrofit any existing code base to use this class in\nits new, pooled form. The client code does not need to be\nhand-edited at all, other than with the introduction of our\n4 annotations.\nFUTURE WORK\nWe believe that the years to come will see the emergence\nof a healthy ecology of small language extensions\nbased on the annotation features of Java and C#. There\nare already major examples of such extensions, especially\nwith distribution- and persistence-related annotations, implemented\nin the context of J2EE Application Servers. Such\nextensions can be implemented with heavyweight support-e\n.g., parsing files, or recognizing annotations in a class loader\nand performing bytecode manipulation. 
In fact, the JBoss\nAOP mechanism (in whose early design and implementation\nwe have played an active role) is the foremost example\nof infrastructure used to implement annotation-based language\nextensions. Nevertheless, experience from compilers\nin general-purpose languages has shown that it is beneficial\nto develop a mature back-end language and implement high-level\nfeatures by translating to that back-end. Our approach\nproposes that AspectJ is well-suited as such a back-end language\nfor small Java language extensions and that generating\nAspectJ code offers significant simplicity benefits. In\nthe future we plan to support this claim with more examples\nand perform a thorough comparison with competing\nmechanisms.\nREFERENCES\n[1] G. Kiczales, E. Hilsdale, J. Hugunin, M. Kersten, J. Palm, and\nW. G. Griswold. An overview of AspectJ. In ECOOP '01:\nProceedings of the 15th European Conference on\nObject-Oriented Programming, pages 327353, London, UK,\n2001. Springer-Verlag.\n[2] G. Kiczales, J. Lamping, A. Menhdhekar, C. Maeda, C. Lopes,\nJ.-M. Loingtier, and J. Irwin. Aspect-oriented programming. In\nM. Ak\nsit and S. Matsuoka, editors, Proceedings European\nConference on Object-Oriented Programming, volume 1241,\npages 220242. Springer-Verlag, Berlin, Heidelberg, and New\nYork, 1997.\n[3] Y. Smaragdakis. A personal outlook on generator research. In\nC. Lengauer, D. Batory, C. Consel, and M. Odersky, editors,\nDomain-Specific Program Generation. Springer-Verlag, 2004.\nLNCS 3016.\n[4] D. Zook, S. S. Huang, and Y. Smaragdakis. Generating\nAspectJ programs with meta-AspectJ. In Generative\nProgramming and Component Engineering (GPCE), pages\n118. Springer-Verlag, October 2004.\n868\n", "keywords": "language extensions;annotation;domain-specific language;language extension;Meta-AspectJ;Java;simplicity;domain-specific languages"} {"name": "76", "title": "Hourly Analysis of a Very Large Topically Categorized Web Query Log", "abstract": "We review a query log of hundreds of millions of queries that constitute the total query traffic for an entire week of a general-purpose commercial web search service. Previously, query logs have been studied from a single, cumulative view. In contrast, our analysis shows changes in popularity and uniqueness of topically categorized queries across the hours of the day. We examine query traffic on an hourly basis by matching it against lists of queries that have been topically pre-categorized by human editors. This represents 13% of the query traffic. We show that query traffic from particular topical categories differs both from the query stream as a whole and from other categories. This analysis provides valuable insight for improving retrieval effectiveness and efficiency. It is also relevant to the development of enhanced query disambiguation, routing, and caching algorithms.", "fulltext": "INTRODUCTION\nUnderstanding how queries change over time is critical to\ndeveloping effective, efficient search services. We are unaware of\nany log analysis that studies differences in the query stream over\nthe hours in a day; much less how those differences are\nmanifested within topical categories. 
We focus on Circadian\nchanges in popularity and uniqueness of topical categories.\nEmphasis on changing query stream characteristics over this\nlongitudinal (time) aspect of query logs distinguishes this work\nfrom prior static log analysis, surveyed in [7].\nWe began with the hypothesis that there are very different\ncharacteristics during peak hours and off-peak hours during a day.\nAfter reviewing a week's worth of data hundreds of millions of\nqueries - we have found, not surprisingly, that:\n\nThe number of queries issued is substantially lower during\nnon-peak hours than peak hours.\nHowever, we knew little about how often queries are repeated\nfrom one hour of the day to the next. After examining the\nbehavior of millions of queries from one hour of the day to the\nnext we have found the less obvious result:\n\n\nThe average number of query repetitions in an hour does not\nchange significantly on an hourly basis throughout the day.\n\nMost queries appear no more than several times per hour.\nThese queries consistently account for a large portion of total\nquery volume throughout the course of the day.\n\nThe queries received during peak hours are more similar to\neach other than their non-peak hour counterparts.\nWe also analyze the queries representing different topics using a\ntopical categorization of our query stream. These cover\napproximately 13% of the total query volume. We hypothesized\nthat traffic behavior for some categories would change over time\nand that others would remain stable. For 16 different categories,\nwe examined their traffic characteristics:\n\n\nSome topical categories vary substantially more in\npopularity than others as we move through an average day.\nSome topics are more popular during particular times of the\nday, while others have a more constant level of interest over\ntime.\n\nThe query sets for different categories have differing\nsimilarity over time. The level of similarity between the\nactual query sets received within topical categories varies\ndifferently according to category.\nThis leads us to believe that predictive algorithms that are able to\nestimate the likelihood of a query being repeated may well be\npossible. This could have a significant impact on future cache\nmanagement and load-balancing algorithms. Such algorithms\ncould improve retrieval effectiveness by assisting in query\ndisambiguation, making it easier to determine what information\nneed is being expressed by a query at a given time. They could\nalso assist research in search efficiency that takes into account\nquery arrival-rates [3].\nOur analysis covers the entirety of the tens of millions of queries\neach day in the search log from America Online\n\nover a\ncomplete week in December. This represents a population of tens\nof millions of users searching for a wide variety of topics. Section\n2 reviews the prior work in query log analysis. Section 3\ndescribes our analysis of overall query traffic. Section 4 describes\nour analysis of trends in categorized queries. Finally, in Section 5\nwe present our conclusions and directions for future work.\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies\nare not made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. 
To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nSIGIR'04, July 2529, 2004, Sheffield, South Yorkshire, UK.\nCopyright 2004 ACM 1-58113-881-4/04/0007...$5.00.\n\n\n321\n\n\nPRIOR WORK\nExaminations of search engine evaluation indicate that\nperformance likely varies over time due to differences in query\nsets and collections [6]. Although the change in collections over\ntime has been studied (e.g., the growth of the web) [10], analysis\nof users' queries has been primarily limited to the investigation of\na small set of available query logs that provide a snapshot of their\nquery stream over a fixed period of time. Prior work can be\npartitioned into static query log analysis and some recent\ndisclosures by web search engines.\n\nQuery log analysis can be partitioned into large-scale log analysis,\nsmall-scale log analysis and some other applications of log\nanalysis such as categorization and query clustering. Jansen and\nPooch provide a framework for static log analysis, but do not\naddress analysis of changes in a query stream over time [7].\nGiven that most search engines receive on the order of between\ntens and hundreds of millions of queries a day [22], current and\nfuture log analysis efforts should use increasingly larger query sets\nto ensure that prior assumptions still hold.\nPrevious studies measured overall aspects of users' queries from\nstatic web query logs. In the only large-scale study (all others\ninvolve only a few million queries), Silverstein concludes that\nusers typically view only the top ten search results and that they\ngenerally enter short queries from a static analysis of an AltaVista\nquery log from six weeks in 1998 consisting of 575 million non-empty\nqueries [16]. He also found that only 13.6% of queries\nappear more than three times, the top 25 queries represent 1.5% of\nthe total query volume, and in 75% of sessions users do not revise\ntheir queries. Additionally, co-occurrence analysis of the most\nfrequent 10,000 queries showed that the most correlated terms are\noften constituents of phrases. No time-based or topic-based\nanalysis of this query load was reported; it does not provide\ninsight into how or when any usage or topical interest changes\noccur. Other studies examine the effect of advanced query\noperators on the search service coverage of Google, MSN, and\nAOL, finding that in general, they had little effect [4]. These\noverall statistics do not provide any insight into temporal changes\nin the query log, but do provide some insight into how people use\nsearch services.\nJansen, et. al, also provide analysis of query frequency [7][19].\nTheir findings indicate that the majority (57%) of query terms\nfrom the Excite log of more than 51,000 queries are used only\nonce, and a large majority (78%) occur three times or less. These\nstudies show that neither queries nor their component terms\nfollow a Zipfian distribution, as the number of rare, infrequently\nrepeated queries and terms is disproportionately large. Other\nstudies have focused on user behavior at the query session level\nand found varying results, with some estimating reformulated\nqueries constituting 40-52% of queries in a log [18][21]. Wang,\net. al examined a log of more than 500,000 queries to a university\nsearch engine from 1997-2001 [23]. They find trends in the\nnumber of queries received by season, month, and day. 
We\nextend upon this work by examining the larger community of\ngeneral web searchers and analyzing trends corresponding to hour\nof day.\nSeveral studies examine query categories in small, static logs.\nSpink, et. al analyzed logs totaling more than one million queries\nsubmitted to the Excite web search engine during single days in\n1997, 1999, and 2001 [18][19][20]. They classified\napproximately 2,500 queries from each log into 11 topical\ncategories and found that although search topics have changed\nover the years, users' behaviors have not. Ross and Wolfram\ncategorized the top 1,000 term pairs from the one million query\nExcite log into 30 subject areas to show commonalities of terms in\ncategories [13]. Jansen, et. al used lists of terms to identify image,\naudio, and video queries and measure their presence in the one\nmillion query Excite log [9]. In order to examine the differences\nin queries from users in different countries, Spink, et. al, examined\na 500,000 query log from the FAST web search engine during\n2001, believed to be used largely by Europeans at that time,\nclassifying 2,500 queries from it into the same topical categories.\nThey found differences between FAST and Excite in the topics\nsearched for [17].\nOther work manually grouped queries by task. Broder defines\nqueries as informational, navigational or transactional and\npresents a study of AltaVista users via a popup survey and manual\ncategorization of 200 queries from a log [2]. Beitzel, et. al\nimplicitly categorized queries from a search log as navigational by\nmatching them to edited titles in web directories to automatically\nevaluate navigational web search [1]. Xie and Wolfram\nautomatically categorized query terms by using results from web\nsearch engines to assign the terms to broad subject categories [25].\nSeveral studies of query caching examine query frequency\ndistributions from a static log, focusing on the average likelihood\nof an arbitrary query being repeated over the entire, fixed-length\nlog. Lempel and Moran evaluated the performance of caching\nstrategies over a log of seven million queries to AltaVista in 2001\nand found that the frequencies of queries in their log followed a\npower law [11]. Eiron and McCurley compared query vocabulary\nfrom a log of nearly 1.3 million queries posed to a corporate\nintranet to the vocabulary of web page anchor text and found that\nthe frequency of queries and query terms follows a tail-heavy\npower law [5]. Xie and O'Hallaron studied query logs from the\nVivisimo meta-search engine of 110,881 queries over one month\nin 2001 in comparison to the Excite log of 1.9 million over one\nday in 1999 and found that although as in other studies over half\nof the queries are never repeated, the frequencies of queries that\nare repeated do follow a Zipfian distribution [26]. Saraiva, et. al\nevaluated a two-level caching scheme on a log of over 100,000\nqueries to a Brazilian search engine and found that query\nfrequencies follow a Zipf-like distribution [15]. Markatos\nsimulated the effect of several types of query caches on an Excite\nquery log of approximately one million queries and found that\ntraditional caching methods provide significant improvements in\nefficiency [12]. 
Although traditional MRU-style caches obviously\nenhance throughput by exploiting temporal locality at the minute-to\n-minute level, these studies do not examine changes in the query\nstream according to the hour of the day that may be leveraged in\nenhanced cache design.\nIt is well known that different users represent the same\ninformation need with different query terms, making query\nclustering attractive when examining groups of related queries.\nHowever, as Raghavan and Sever have shown, traditional\nsimilarity measures are unsuitable for finding query-to-query\nsimilarity [13]. Wen, et. al, incorporated click-through to cluster\nusers' queries [23]. In evaluating their system, they analyzed a\nrandom subset of 20,000 queries from a single month of their\napproximately 1-million queries-per-week traffic. They found\n322\n\nthat the most popular 22.5% queries represent only 400 clusters of\nqueries using differing sets of query terms.\nMany web search services have begun to offer views of the most\npopular and/or changing (becoming drastically more or less\npopular) queries: AOL Member Trends, Yahoo - Buzz Index,\nLycos - The Lycos 50 with Aaron Schatz, Google Zeitgeist,\nAltaVista - Top Queries, Ask Jeeves, Fast (AllTheWeb). These\nviews necessarily incorporate a temporal aspect, often showing\npopular queries for the current time period and those that are\nconsistently popular. Some also break down popularity by topical\ncategories. Systems seeking to display changing queries must\naddress the issue of relative versus absolute change in a query's\nfrequency to find queries whose change is \"interesting\", not\nsimply a query that went from frequency one to two (a 200%\njump), or one that went from 10,000 to 11,000 (a 1000 absolute\nchange).\nOVERALL QUERY TRAFFIC\nWe examine a search log consisting of hundreds of millions of\nqueries from a major commercial search service over the seven-day\nperiod from 12/26/03 through 1/1/04. This log represents\nqueries from approximately 50 million users. We preprocess the\nqueries to normalize the query strings by removing any case\ndifferences, replacing any punctuation with white space (stripping\nadvanced search operators from the approximately 2% of queries\ncontaining them), and compressing white space to single spaces.\nThe average query length is 1.7 terms for popular queries and 2.2\nterms over all queries. On average, users view only one page of\nresults 81% of the time, two pages 18% and three or more 1% of\nthe time. First, we examine trends in the query stream as a whole,\nand then focus on trends related to queries manually categorized\ninto topical categories.\nWe begin our analysis of the overall stream by examining how the\nvolume of query traffic changes as we move from peak to non-peak\nhours. We show the percentage of the day's total and\ndistinct number of queries for each hour in the day on average\nover our seven-day period in Figure 1 (all times in our query log\nare Eastern Standard Time). Only 0.75% of the day's total queries\nappear from 5-6AM, whereas 6.7% of the day's queries appear\nfrom 9-10PM. 
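The query-string normalization described earlier in this section — lowercasing, mapping punctuation to white space (after stripping advanced operators), and collapsing runs of white space — can be expressed compactly. The method below is our own illustration of that preprocessing, not the authors' code:

/** Sketch of the query-string normalization described above. */
public class QueryNormalizer {

    public static String normalize(String rawQuery) {
        String q = rawQuery.toLowerCase();            // remove case differences
        q = q.replaceAll("\\p{Punct}", " ");          // replace punctuation with white space
        q = q.replaceAll("\\s+", " ").trim();         // compress white space to single spaces
        return q;
    }

    public static void main(String[] args) {
        System.out.println(normalize("  New York  CITY, restaurants!! "));
        // prints: new york city restaurants
    }
}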
Perhaps more interestingly, the ratio of distinct to total queries in a given hour is nearly constant throughout the day. This shows that the average number of times a query is repeated is virtually constant over the hours in a day, remaining near 2.14 with only a 0.12 standard deviation.

Although the average repetition of queries remains nearly constant, we can examine this in greater detail by measuring the frequency distribution of queries at various hours in the day, as seen in Figure 2. From this analysis it is clear that the vast majority of queries in an hour appear only one to five times and that these rare queries consistently account for large portions of the total query volume throughout the course of the day.

[Figure 1: Percentage of average daily query traffic at each hour of the day (average total queries and average distinct queries)]

Although we have shown that the query distribution does not change substantially over the course of a day, this does not provide insight into how the sets of queries vary from one hour to the next. To examine this, we measure the overlap between the sets of queries entered during those hours. We use traditional set and bag overlap measures as given in Equation 1 and Equation 2, respectively. Distinct overlap measures the similarity between the sets of unique queries from each hour, while overall (bag) overlap measures the similarity of their frequency distributions by incorporating the number of times each query appears in an hour, $C(q_i; A)$. While these measures examine the similarity of the sets of queries received in an hour and the number of times they are entered, they do not incorporate the relative popularity or ranking of queries within the query sets. To examine this, we also measure the Pearson correlation of the queries' frequencies. As can be seen from Equation 3 (where $\overline{C(q;A)}$ is the mean number of query repetitions in period A and $s_{C(q;A)}$ is the standard deviation of all the query frequencies in period A), this measures the degree of linear correlation between the frequencies of the queries in each hour, so two hours that had exactly the same queries with exactly the same frequencies would have a correlation of one.
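To make these three measures concrete (Equations 1-3, displayed below), here is a small Python sketch assuming each hour is represented as a dict mapping query strings to their frequencies; treating queries absent from one hour as having zero frequency in the correlation is an assumption of this sketch, not a detail reported in the study:

```python
from math import sqrt

def distinct_overlap(A, B):
    """Equation 1: |A intersect B| / |A union B| over the sets of unique queries."""
    a, b = set(A), set(B)
    return len(a & b) / len(a | b)

def overall_overlap(A, B):
    """Equation 2: bag overlap using per-query counts C(q; A) and C(q; B)."""
    common = sum(min(A[q], B[q]) for q in A.keys() & B.keys())
    return common / (sum(A.values()) + sum(B.values()) - common)

def pearson(A, B):
    """Equation 3: Pearson correlation of query frequencies (absent queries count as 0)."""
    qs = list(A.keys() | B.keys())
    xs = [A.get(q, 0) for q in qs]
    ys = [B.get(q, 0) for q in qs]
    n = len(qs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

hour_a = {"weather": 40, "paris hilton": 25, "maps": 10}
hour_b = {"weather": 35, "paris hilton": 30, "lyrics": 5}
print(distinct_overlap(hour_a, hour_b), overall_overlap(hour_a, hour_b), pearson(hour_a, hour_b))
```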
Note that the Pearson correlation normalizes for the effect of differing query volume, i.e., the correlation of two hours with exactly the same underlying query distributions simply scaled by a constant would also have a correlation of one.

[Figure 2: Frequency distribution of selected hours from 12/26/03 (12AM-1AM, 6AM-7AM, 12PM-1PM, 6PM-7PM); the x-axis gives frequency ranges and the y-axis the percentage of total queries]

$$\mathit{overlap}_{dist}(A,B) = \frac{|A \cap B|}{|A \cup B|}$$

Equation 1: Distinct Overlap of Query Sets from Hours A and B

$$\mathit{overlap}(A,B) = \frac{\sum_{q_i \in A \cap B} \min\bigl(C(q_i;A),\, C(q_i;B)\bigr)}{\sum_{q_i \in A} C(q_i;A) + \sum_{q_i \in B} C(q_i;B) - \sum_{q_i \in A \cap B} \min\bigl(C(q_i;A),\, C(q_i;B)\bigr)}$$

Equation 2: Overall Overlap of Query Sets from Hours A and B

$$r_{A,B} = \frac{1}{n-1} \sum_{i=1}^{n} \frac{\bigl(C(q_i;A) - \overline{C(q;A)}\bigr)\bigl(C(q_i;B) - \overline{C(q;B)}\bigr)}{s_{C(q;A)}\, s_{C(q;B)}}$$

Equation 3: Pearson Correlation of Query Frequencies from Hours A and B

[Figure 3: Average overlap characteristics of matching queries from 1/2/04 at each hour of the day (overlap, distinct overlap, and Pearson correlation)]

In Figure 3 we examine the average level of overlap and correlation between the query sets received during the same hour for each day over our week. As measuring overlap over the set of all queries appearing in our week would be computationally expensive, we use the set of all the tens of millions of queries in the day after our seven-day period as an independent sample and measure overlap at each hour in our week of the queries matching those in that sample. Although we previously saw that the frequency distribution of queries does not substantially change across hours of the day, Figure 3 shows that the similarity between the actual queries that are received during each hour does in fact change. This trend seems to follow query volume, which is apparent if we sort the same overlap data by query volume as is done in Figure 4. Clearly, as query volume increases the queries that compose that traffic are more likely to be similar across samples of those peak time periods.

This finding is consistent with prior analyses of web query caches showing they significantly improve performance under heavy load. The more redundancy they are able to detect, the more caching algorithms are able to enhance throughput. Although the prior work primarily measures the effect of this redundancy in cache performance, it is obvious that redundancy must exist and be detected for caching to succeed. By examining the overall query stream by hour we are able to infer the effectiveness of general caching algorithms at those times.

[Figure 4: Sorted average overlap characteristics from 1/2/04 that matched each hour (the same overlap data sorted by query volume)]

QUERY CATEGORIES
In Section 3 we analyzed the entire query log. However, this blanket view of the query traffic does not provide insight into the characteristics of particular categories of queries that might be exploited for enhanced efficiency or effectiveness.
For example, a search provider who returns specialized results for entertainment queries cannot determine from general query traffic alone whether a given query is more likely to be referring to entertainment related content or how to best process and cache that query.

The remainder of our analysis focuses on trends relating to topical category of queries. Our query set is categorized simply by exactly matching queries to one of the lists corresponding to each category. These lists are manually constructed by editors who categorize real users' queries, generate likely queries, and import lists of phrases likely to be queries in a category (e.g., cities in the US for the US Sites category). Queries that match at least one category list comprise 13% of the total query traffic on average. This represents millions of queries per day.

[Figure 5: Sampled categorized query stream breakdown: Personal Finance 3%, Computing 9%, Research & Learn 9%, Entertainment 13%, Games 5%, Holidays 1%, Home 5%, US Sites 3%, Porn 10%, Shopping 13%, Sports 3%, Travel 5%, Health 5%, Other 16%]

To verify that our defined category lists sufficiently cover the topics in the query stream, we manually classified a random sample of queries, assigning them to "Other" if they did not intuitively fit into an existing category, as can be seen in Figure 5. To determine the number of queries required to achieve a representative sample, we calculate the necessary sample size in queries, $ss = z^2\sigma^2/\varepsilon^2$, where $z$ is the confidence level value, $\sigma$ is the sample standard deviation, and $\varepsilon$ is the error rate. By setting our confidence level to 99% and error rate to 5%, we require a sample of 600 queries. The relative percentages for each category of the approximately 13% of query volume that match any category list over our week (see Figure 9) are within the error rate of those from our manually categorized sample. This shows that our lists are a reasonable representation of these topical categories. We focus on a subset of these categories and examine music and movies independent of other entertainment queries. The relative size of each category list we used is given in Figure 6. Obviously, not all queries listed actually match those entered by users, especially when the category contains large imported lists of phrases.

[Figure 6: Relative percentage of categorized queries falling into each category list]

Although we have shown that our lists are a fair representation of the topics in the query stream, this does not indicate what portion of the frequency distribution of that stream they represent. To determine this, we measured the average proportion of queries matching any category list that appear at various frequencies each hour and compared them to the average overall hourly frequency distribution of the query stream (see Figure 7).
Unsurprisingly, this comparison shows that queries in the category lists represent more popular, repeated queries than average, although the general shape of the distributions is similar.

[Figure 7: Hourly frequency distribution of matching queries vs. all queries, averaged over 7 days and 16 categories (percentage of queries falling into each frequency range; series: average matching queries, average queries)]

4.1 Trends in Category Popularity
We begin our temporal analysis of topical categories by measuring their relative popularity over the hours in a day. First, we examine the percent of total query volume matching a selected group of category lists, as can be seen in Figure 8. It is clear that different topical categories are more and less popular at different times of the day. Personal finance, for example, becomes more popular from 7-10AM, while music queries become less popular. Although it is difficult to compare the relative level of popularity shift from one category to another due to the differences in scale of each of their percentages of the query stream, it is clear that some categories' popularity changes more drastically throughout the day than others.

[Figure 8: Categorical percent of query volume at each hour of the day for Entertainment, Games, Health, Personal Finance, Shopping, Music, US Sites, and Porn]

In order to quantify this, we calculated the KL-divergence (Equation 4) between the likelihood of receiving any query at a particular time and the likelihood of receiving a query in a particular category, as can be seen in Figure 9. This reveals that the top three categories in terms of popularity are pornography, entertainment, and music.

$$D\bigl(p(q\,|\,t)\,\big\|\,p(q\,|\,c,t)\bigr) = \sum_{q} p(q\,|\,t)\,\log\frac{p(q\,|\,t)}{p(q\,|\,c,t)}$$

Equation 4: KL-Divergence of Query Occurrence Likelihood for Category c and Total Stream at Time t

[Figure 9: Category percentage of the entire query stream and divergence from the likelihood of any query at each hour (KL-divergence, % of query stream, and distinct % of query stream for each category)]

Comparing these divergences to the proportion of categorized queries in each category in Figure 6 quickly illustrates that divergence is not correlated with the number of queries categorized in each category. Also shown in Figure 9 is the average percentage of the entire query volume and distinct queries that match each category. Although the categories that cover the largest portions of the query stream also have the most relative popularity fluctuation, this correlation does not continue throughout all categories.
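As an illustration, the divergence of Equation 4 could be computed for one category and one hour from smoothed query counts as in the following Python sketch; the data layout and the add-one smoothing are assumptions made for the example, not details reported in the study:

```python
import math
from collections import Counter

def kl_category_vs_stream(stream_counts: Counter, category_counts: Counter) -> float:
    """Equation 4: D( p(q|t) || p(q|c,t) ) for one hour t and one category c.
    stream_counts: query -> frequency over the whole stream in hour t
    category_counts: query -> frequency restricted to category c in hour t."""
    vocab = set(stream_counts) | set(category_counts)
    n_stream = sum(stream_counts.values()) + len(vocab)   # add-one smoothing (assumed)
    n_cat = sum(category_counts.values()) + len(vocab)
    div = 0.0
    for q in vocab:
        p_qt = (stream_counts[q] + 1) / n_stream           # p(q | t)
        p_qct = (category_counts[q] + 1) / n_cat           # p(q | c, t)
        div += p_qt * math.log(p_qt / p_qct)
    return div

hour = Counter({"britney spears": 120, "weather": 300, "lyrics": 80})
music = Counter({"britney spears": 120, "lyrics": 80})
print(kl_category_vs_stream(hour, music))
```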
We drilled down into the highly fluctuating categories and examined the behavior of the queries with the most highly fluctuating frequencies in each category. From this we hoped to gain some insight into the reasons why certain categories fluctuate, and the effect of terms and queries with very high flux on those categories. For example, the three most changing queries for the entertainment category on average over our week were:

Table 1: Top Three Fluctuating Entertainment Queries
gwyneth paltrow
paris hilton
orlando bloom

All three of these queries are specifically related to recent events in US popular culture; the actress Gwyneth Paltrow recently married in secret, and the news of her nuptials broke during the week we analyzed. Hilton Hotel heiress Paris Hilton has been a popular topic recently; she starred in a prime time reality TV show entitled "The Simple Life". Also popular is Orlando Bloom, the actor who portrays a popular character in the "Lord of the Rings" trilogy. As the final installment of the series was released in US theatres during the week prior to our query log, it is no surprise to see his name as a top-changing query.

Drilling down further, we pinpointed some of the specific instances where these popular queries jumped the most. For example, in the afternoon of Friday, December 27th, the popularity of the query "gwyneth paltrow" skyrocketed. From 3-4PM it occurred once, from 4-5PM it occurred 67 times, and from 5-6PM it occurred 11,855 times. The top changing (on average) twenty-five queries, after normalization, in the Entertainment and Music categories are shown in Table 2.

Table 2: Top 25 Fluctuating Queries from Music and Entertainment
Music: lyrics, music, britney spears, furniture, love, hilary duff, good charlotte, sloppy seconds, jessica simpson, b2k, eminem, christina aguilera, simple plan, justin timberlake, free music, linkin park, michael jackson, beyonce, jennifer lopez, 50 cent, kinky, napster, chic, tupac, blink 182
Entertainment: gwyneth paltrow, paris hilton, orlando bloom, espn, disney, johnny depp, much music, disney channel, hgtv, disneychannel com, www disneychanel com, katie holmes pictures, pamela anderson, cartoon network, hilary duff, fake, chad michael murray, vivica a fox, disneychannel, care bears, sailor moon, www cartoonnetwork com, days of our lives, charmed, tom welling

We also looked at some of the most frequently changing terms to see how they relate to the change of entire queries containing those terms. Some excellent examples of this behavior in the Entertainment category include the terms "pictures" (the tenth-most changing term) and "duff" (the 17th-most changing term). We looked at the popularity change (i.e., change in frequency) for queries containing these terms and found that several of them also exhibited large changes over time.
For example, on the afternoon of December 28th from noon to 5PM EST, the query "hilary duff" changed from an initial frequency of 27 from 12-1PM to a peak of 131 from 3-4PM, and then stabilized around 70 for the rest of the evening; similar spikes in frequency for this query occurred at similar times during other days in our period of study.

4.2 Trends in Uniqueness of Queries Within Categories
Although we have shown that different categories have differing trends of popularity over the hours of a day, this does not provide insight into how the sets of queries within those categories change throughout the day. In order to examine this, we return to the overlap measures used in Section 3. Overlap, distinct overlap, and the Pearson correlation of query frequencies for Personal Finance and Music are shown in Figure 10 and Figure 11.

[Figure 10: Personal Finance overlap by hour of day (overlap, distinct overlap, and Pearson correlation)]

Although the uniqueness of queries in categories in general appears to be correlated with that of the entire query stream (Figure 3), that of particular categories appears to be substantially different from one to the next. For example, if we compare the overlap characteristics of personal finance with those of music, we see they are quite different. Not only does personal finance have generally higher overlap, but it has a much higher overall overlap than distinct overlap, whereas the two are nearly equal for music. Other categories with generally high overlap and distinct overlap are shopping, computing, and travel. Also, the correlation of frequencies of personal finance queries is very high all day, indicating searchers are entering the same queries roughly the same relative number of times; this is clearly not true for music. Some categories have a high Pearson correlation, indicating that a significant portion of the queries in these categories is often ranked similarly by frequency. These categories are pornography, travel, research and learning, and computing, and their Pearson correlations are illustrated in Figure 12.

[Figure 11: Music overlap by hour of day (overlap, distinct overlap, and Pearson correlation)]

It is clear that some categories have very similarly ranked queries by frequency throughout the day, while others vary dramatically according to query volume. Referring back to Figure 6 and Figure 9, uniqueness of queries in particular categories does not appear to be correlated with the number of queries in their respective category lists, the proportion of the query stream they represent, or the number of distinct queries they match.

[Figure 12: Pearson correlations of query frequencies by hour for selected categories (Personal Finance, Music, Movies, Porn, Computing, Games, Entertainment, Government)]

This type of data is potentially of great use to query caching algorithms. For example, if it is known a priori that queries for certain categories are similarly ranked throughout the day, they can be given higher priority in a query-caching scheme. Similarly, queries in categories whose rankings change vastly over time might be given low caching priority.

CONCLUSIONS AND FUTURE WORK
This study focuses on investigating the nature of changes in the query stream of a very large search service over time. Understanding how users' queries change over time is critical to developing effective, efficient search systems and to engineering representative test sets and evaluations that drive this development. In this study we find trends over time that are stable despite continuing fluctuation in query volume.
Although the\naverage query is repeated only twice during any given hour of the\nday, the total query traffic varies both in magnitude from one hour\nto the next, and also in degree of overlap and correlation in\npopularity of the queries that are received. In addition, we also\nfind that the frequency distribution of an hour's worth of queries\nremains constant throughout the day. Also, at the most general\nlevel, we find that query volume is highest and query sets are most\nstable during peak hours of the day.\nThis study further investigates changes in the query stream over\ntime by examining the nature of changes in popularity of\nparticular topical categories. For this we use a set of topical\ncategories created by human editors that represents approximately\n13% of the average query traffic. We show that popularity of\nsome of these categories fluctuates considerably while other\ncategories remain relatively stable over the hours in a day.\nAdditionally, we show that the overlap and correlation in\npopularity of the queries within each topical category varies quite\ndifferently over the course of the day.\nExtending this analysis to investigate changes in the very rare\nqueries not often matched by our category lists would provide\ninsight into whether those are changing similarly to more popular\nqueries. One method for approaching this might be to incorporate\nautomatic query classification methods to extend our basic lists\nThis study is the gateway to a large and diverse body of future\nwork. Integrating this knowledge of Circadian changes in the\nquery stream by category will likely yield improved query\ndisambiguation, query caching, and load balancing algorithms.\n\n\nBIBLIOGRAPHY\n[1]\nBeitzel, S., Jensen, E., Chowdhury, A., and Grossman, D.\nUsing Titles and Category Names from Editor-driven\nTaxonomies for Automatic Evaluation. In Proceedings of\nCIKM'03 (New Orleans, LA, November, 2003), ACM Press.\n[2]\nBroder, A. A Taxonomy of Web Search. SIGIR Forum\n36(2) (Fall, 2002).\n[3]\nChowdhury, A., G. Pass. \"Operational Requirements for\nScalable Search Systems\", In Proceedings of CIKM'03 (New\nOrleans, LA, November 2003), ACM Press.\n[4]\nEastman, C., B. Jansen, \"Coverage, Relevance, and Ranking:\nThe Impact of Query Operators on Web Search Engine\nResults\", ACM Transactions on Information Systems, Vol.\n21, No. 4, October 2003, Pages 383411.\n[5]\nEiron, N., K. McCurley. \"Analysis of Anchor Text for Web\nSearch\", In Proceedings of SIGIR'03 (Toronto, Canada, July\n2003), ACM Press.\n[6]\nHawking, D., Craswell, N., and Griffiths, K. Which Search\nEngine is Best at Finding Online Services? In Proceedings\nof WWW10 (Hong Kong, May 2001), Posters. Actual poster\navailable as\nhttp://pigfish.vic.cmis.csiro.au/~nickc/pubs/www10actualpos\nter.pdf\n[7]\nJansen, B. and Pooch, U. A review of Web searching studies\nand a framework for future research.\nJournal of the American Society for Information Science and\nTechnology 52(3), 235-246, 2001.\n[8]\nJansen, B., Spink, A., and Saracevic, T. Real life, real users,\nand real needs: a study and analysis of user queries on the\nweb. Information Processing and Management, 36(2)\n(2000), 207-227.\n[9]\nJansen, B.J., Goodrum, A., Spink, A. Searching for\nmultimedia: video, audio, and image Web queries. World\nWide Web 3(4), 2000.\n[10]\nLawrence, S. and Giles, C.L. Searching the World Wide\nWeb. 
Science 280(5360), 98-100, 1998.
[11] Lempel, R. and Moran, S. Predictive caching and prefetching of query results in search engines. In Proceedings of WWW12 (Budapest, May 2003).
[12] Markatos, E.P. On Caching Search Engine Query Results. In Proceedings of the 5th International Web Caching and Content Delivery Workshop, May 2000.
[13] Raghavan, V. and Sever, H. On the Reuse of Past Optimal Queries. In Proceedings of the 1995 SIGIR Conference, 344-350, Seattle, WA, July 1995.
[14] Ross, N. and Wolfram, D. End user searching on the Internet: An analysis of term pair topics submitted to the Excite search engine. Journal of the American Society for Information Science 51(10), 949-958, 2000.
[15] Saraiva, P., Moura, E., Ziviani, N., Meira, W., Fonseca, R., and Ribeiro-Neto, B. Rank-preserving two-level caching for scalable search engines. In Proceedings of the 24th SIGIR Conference, 51-58, New Orleans, LA, September 2001.
[16] Silverstein, C., Henzinger, M., Marais, H., and Moricz, M. Analysis of a very large web search engine query log. SIGIR Forum 33(1) (Fall 1999), 6-12.
[17] Spink, A., Ozmutlu, S., Ozmutlu, H.C., and Jansen, B.J. U.S. versus European web searching trends. SIGIR Forum 36(2), 32-38, 2002.
[18] Spink, A., Jansen, B.J., Wolfram, D., and Saracevic, T. From E-sex to e-commerce: Web search changes. IEEE Computer 35(3), 107-109, 2002.
[19] Spink, A., Wolfram, D., Jansen, B.J., and Saracevic, T. Searching the Web: The Public and Their Queries. Journal of the American Society for Information Science 53(2), 226-234, 2001.
[20] Spink, A., Jansen, B.J., and Saracevic, T. Vox populi: The public searching of the web. Journal of the American Society for Information Science 52(12), 1073-1074, 2001.
[21] Spink, A., Jansen, B.J., and Ozmutlu, H.C. Use of query reformulation and relevance feedback by Excite users. Internet Research: Electronic Networking Applications and Policy 10(4), 2000.
[22] Sullivan, D. Searches Per Day. Search Engine Watch, February 2003. http://searchenginewatch.com/reports/article.php/2156461
[23] Wang, P., Berry, M., and Yang, Y. Mining longitudinal web queries: Trends and patterns. Journal of the American Society for Information Science and Technology 54(8), 743-758, June 2003.
[24] Wen, J., Nie, J., and Zhang, H. Query Clustering Using User Logs. ACM Transactions on Information Systems 20(1), 59-81, January 2002.
[25] Wolfram, D. and Xie, H. Subject categorization of query terms for exploring Web users' search interests. Journal of the American Society for Information Science 53(8), 617-630, June 2002.
[26] Xie, Y. and O'Hallaron, D. Locality in Search Engine Queries and Its Implications for Caching.
Infocom 2002.\n\n328", "keywords": "query traffic;query stream;frequency distribution;topical categories;log analysis;query log;Query Log Analysis;Web Search"} {"name": "77", "title": "Efficient Multi-way Text Categorization via Generalized Discriminant Analysis", "abstract": "Text categorization is an important research area and has been receiving much attention due to the growth of the on-line information and of Internet. Automated text categorization is generally cast as a multi-class classification problem. Much of previous work focused on binary document classification problems. Support vector machines (SVMs) excel in binary classification, but the elegant theory behind large-margin hyperplane cannot be easily extended to multi-class text classification. In addition, the training time and scaling are also important concerns. On the other hand, other techniques naturally extensible to handle multi-class classification are generally not as accurate as SVM. This paper presents a simple and efficient solution to multi-class text categorization. Classification problems are first formulated as optimization via discriminant analysis . Text categorization is then cast as the problem of finding coordinate transformations that reflects the inherent similarity from the data. While most of the previous approaches decompose a multiclass classification problem into multiple independent binary classification tasks, the proposed approach enables direct multi-class classification. By using Generalized Singular Value Decomposition (GSVD), a coordinate transformation that reflects the inherent class structure indicated by the generalized singular values is identified. Extensive experiments demonstrate the efficiency and effectiveness of the proposed approach.", "fulltext": "INTRODUCTION\nWith the ever-increasing growth of the on-line information and\nthe permeation of Internet into daily life, methods that assist users\nin organizing large volumes of documents are in huge demand.\nIn particular, automatic text categorization has been extensively\nstudied recently. This categorization problem is usually viewed\nas supervised learning, where the gaol is to assign predefined category\nlabels to unlabeled documents based on the likelihood in-ferred\nfrom the training set of labeled documents. Numerous approaches\nhave been applied, including Bayesian probabilistic approaches\n[20, 31], nearest neighbor [22, 19], neural networks [33],\ndecision trees [2], inductive rule learning [4, 9], support vector machines\n[18, 14], Maximum Entropy [26], boosting [28], and linear\ndiscriminate projection [3] (see [34] for comparative studies of text\ncategorization methods).\nAlthough document collections are likely to contain many different\ncategories, most of the previous work was focused on binary\ndocument classification. One of the most effective binary classification\ntechniques is the support vector machines (SVMs) [32]. It\nhas been demonstrated that the method performs superbly in binary\ndiscriminative text classification [18, 34]. SVMs are accurate and\nrobust, and can quickly adapt to test instances. However, the elegant\ntheory behind the use of large-margin hyperplanes cannot be\neasily extended to multi-class text categorization problems. 
A number\nof techniques for reducing multi-class problems to binary problems\nhave been proposed, including one-versus-the-rest method,\npairwise comparison [16] and error-correcting output coding [8, 1].\nIn these approaches, the original problems are decomposed into a\ncollection of binary problems, where the assertions of the binary\nclassifiers are integrated to produce the final output. In practice,\nwhich reduction method is best suited is problem-dependent, so it\nis a non-trivial task to select the decomposition method. Indeed,\neach reduction method has its own merits and limitations [1]. In\naddition, regardless of specific details, these reduction techniques\ndo not appear to be well suited for text categorization tasks with\na large number of categories, because training of a single, binary\nSVM requires O\n\nn\n\n\ntime for 1 7\n\n2 1 where n is the number\nof training data [17]. Thus, having to train many classifiers has\na significant impact on the overall training time. Also, the use of\nmultiple classifiers slows down prediction. Thus, despite its elegance\nand superiority, the use of SVM may not be best suited for\nmulti-class document classification. However, there do not appear\nto exist many alternatives, since many other techniques that can\nbe naturally extended to handle multi-class classification problems,\n317\nsuch as neural networks and decision trees, are not so accurate as\nSVMs [34, 35].\nIn statistics pattern recognition literature, discriminant analysis\napproaches are well known to be able to learn discriminative\nfeature transformations (see, e.g., [12]). For example, Fisher discriminant\nanalysis [10] finds a discriminative feature transformation\nas eigenvectors associated with the largest eigenvalues of matrix\nT\n^\n\n\n1\nw\n^\n\nb\n, where ^\n\nw\nis the intra-class covariance matrix and\n^\n\nb\nis the inter-class covariance matrix\n1\n. Intuitively, T captures\nnot only compactness of individual classes but separations among\nthem. Thus, eigenvectors corresponding to the largest eigenvalues\nof T are likely to constitute a discriminative feature transform.\nHowever, for text categorization, ^\n\nw\nis usually singular owing to\nthe large number of terms. Simply removing the null space of ^\n\nw\nwould eliminate important discriminant information when the projections\nof ^\n\nb\nalong those directions are not zeros [12]. This issue\nhas stymied attempts to use traditional discriminant approaches in\ndocument analysis.\nIn this paper we resolve this problem. We extend discriminant\nanalysis and present a simple, efficient, but effective solution to\ntext categorization. We propose a new optimization criterion for\nclassification and cast text categorization as the problem of finding\ntransformations to reflect the inherent similarity from the data. In\nthis framework, given a document of unknown class membership,\nwe compare the distance of the new document to the centroid of\neach category in the transformed space and assign it to the class\nhaving the smallest distance to it. We call this method Generalized\nDiscriminant Analysis (GDA), since it uses generalized singular\nvalue decomposition to optimize transformation. We show that the\ntransformation derived using\nGDA\nis equivalent to optimization\nvia the trace or determinant ratios.\nGDA\nhas several favorable properties: First, it is simple and can\nbe programed in a few lines in MATLAB. Second, it is efficient.\n(Most of our experiments only took several seconds.) 
Third, the\nalgorithm does not involve parameter tuning. Finally, and probably\nthe most importantly, it is very accurate. We have conducted extensive\nexperiments on various datasets to evaluate its performance.\nThe rest of the paper is organized as follows: Section 2 reviews the\nrelated work on text categorization. Section 3 introduces our new\ncriterion for discriminant analysis. Section 4 introduces the basics\nof generalized singular value decomposition and gives the solution\nof the optimization problem. Section 5 shows that the transformation\nderived using\nGDA\ncan also be obtained by optimizing the\ntrace or determinant ratios. Section 6 presents some illustrating examples\n. Section 7 shows experimental results. Finally, Section 8\nprovides conclusions and discussions.\nRELATED WORK\nText categorization algorithms can be roughly classified into two\ntypes: those algorithms that can be naturally extended to handle\nmulti-class cases and those require decomposition into binary classification\nproblems. The first consists of such algorithms as Naive\nBayes [22, 19], neural networks [25, 33], K-Nearest Neighbors [22,\n19], Maximum Entropy [26] and decision trees. Naive Bayes uses\nthe joint distributions of words and categorizes to estimate the probabilities\nthat an input document belongs to each document class and\n1\nThis is equivalent to using eigenvectors associated with the smallest\neigenvalues of matrix T\n^\n\n\n1\nb\n^\n\nw\n. It indicates that traditional\ndiscriminant analysis requires the non-singularity of at least one covariance\nmatrix. Since the rank of ^\n\nw\nis usually greater than that of\n^\n\nb\n, we will base our discussion on the eigenvalue-decomposition\nof T\n^\n\n\n1\nw\n^\n\nb\n.\nthen selects the most probable class. K-Nearest Neighbor finds the\nk nearest neighbors among training documents and uses the categories\nof the k neighbors to determine the category of the test document\n. The underlying principle of maximum entropy is that without\nexternal knowledge, uniform distribution should be preferred.\nBased on this principle, it estimate the conditional distribution of\nthe class label given a document.\nThe reduction techniques that are used by the second group include\none-versus-the-rest method [29], error-correcting output coding\n[8], pairwise comparison [16], and multi-class objective functions\n, where the first two have been applied to text categorization\n[34, 13].\nIn the one-versus-the-rest method a classifier separating between\nfrom a class and the rest is trained for each class. Multi-class classification\nis carried out by integrating prediction of these individual\nclassifiers with a strategy for resolving conflicts. The method is\nsometimes criticizes for solving asymmetric problems in a symmetrical\nmanner and for not considering correlations between classes.\nError-correcting output coding (ECOC) [8] partitions the original\nset of classes into two sets in many different ways. A binary\nclassifier is trained for each partition. The partitions are carefully\nchosen so that the outputs of these classifiers assign a unique binary\ncodeword for each class (with a large Hamming distance between\nany pair of them). 
The class of an input with unknown class membership\nis chosen by computing the outputs of the classifiers on\nthat input and then finding the class with the codeword closest to\nthe output codeword.\nAlthough SVMs are considered to be very effective in binary\nclassification, its large training costs may make it unsuitable for\nmulti-class classification with a large number of classes if the above\ndecomposition techniques are applied. Also, the lack of a clear\nwinner among the above techniques makes the reduction task complicated\n. Our\nGDA\ndirectly deals with multi-class classification\nand does not require reduction to binary classification problems.\nOther techniques for text categorization exist. Godbole et al.\n[14] propose a new multi-class classification technique that exploits\nthe accuracy of SVMs and the speed of Naive Bayes. It uses a\nNaive Bayes classifier to compute a confusion matrix quickly. Then\nit uses this matrix to reduce both the number and the complexity\nof binary SVMs to be built. Chakrabarti et al. [3] propose a fast\ntext classification technique that uses multiple linear projections. It\nfirst projects training instances to low-dimensional space and then\nbuilds decision tree classifiers on the projected spaces. Fragoudis\net al. [11] propose a new algorithm that targets both feature and\ninstance selection for text categorization.\nIn summary, as pointed out in [34, 26], there is no obvious winner\nin multi-class classification techniques. For practical problems,\nthe choice of approach will have to be made depending on the constraints\n, e.g., the desired accuracy level, the time available, and the\nnature of the problem.\nNEW CRITERION FOR DISCRIMINANT ANALYSIS\nSuppose the dataset D has m instances, d\n1\nd\nm\n, having p features\neach. Then D can be viewed as a subset of R\np\nas well as\na member of R\nm\n\np\n. Suppose D has L classes, D\n1\nD\nL\nhaving\nm\n1\nm\nL\ninstances, respectively, where m\n\nL\ni 1\nm\ni\n. For each i,\n1\ni\nL, let J\ni\nbe the set of all j, 1\nj\nm, such that the j-th\ninstance belongs to the i-th class, and let c\n\ni\n\nbe the centroid of the\ni-th class, i.e., the component-wise average of the m\ni\nvectors in the\n318\nclass. Let c be the centroid of the entire dataset. The intra-class\nscatter matrix of D, ^\n\nw\n, is defined by\n^\n\nw\nL\n\ni 1\n\nj\n\nJ\ni\n\nd\nj\n\nc\n\ni\n\n\nT\n\nd\nj\n\nc\n\ni\n\n\nand its inter-class scatter matrix, ^\n\nb\n, is defined by\n^\n\nb\nL\n\ni 1\n\nj\n\nJ\ni\n\nd\nj\n\nc\n\nT\n\nd\nj\n\nc\n\nLet A\nw\nbe the m\n\np matrix constructed by stacking D\n1\n\n\ne\n\n1\n\n\nT\nc\n\n1\n\n,\n, D\nL\n\n\ne\n\nL\n\n\nT\nc\n\nL\n\n\nand let A\nb\nbe the p\n\nm\nmatrix whose columns are, from left to right,\n\nm\n1\n\nc\n\n1\n\n\nc\n\nT\n\n\n\n\nm\nL\n\nc\n\nL\n\n\nc\n\nT\n. Then\n^\n\nw\nA\nw\nA\nT\nw\nand ^\n\nb\nA\nb\nA\nT\nb\nAlthough there are ways (such as Kernel tricks [24]) for utilizing\nnon-linear transformation, we will focus on linear transformation\n. Given a linear transformation\n\n, the covariance matrices in\nthe transformed space are\n\nA\nb\n\n\nT\n\nA\nb\n\n\n\nT\nA\nT\nb\nA\nb\n\nT\n^\n\nb\n\nand\n\nA\nw\n\n\nT\n\nA\nw\n\n\n\nT\nA\nT\nw\nA\nw\n\nT\n^\n\nw\n\nFisher's linear discriminant analysis discriminates inter-class distance\nand intra-class distance by using their corresponding covariance\nmatrices. 
The optimal projection can be obtained by solving\nthe generalized eigenvalue problem:\n^\n\nb\n\n^\n\nw\n\n(1)\nIf ^\n\nw\nis nonsingular,\n\nis given by the eigenvectors of matrix\n^\n\n\n1\nw\n^\n\nb\n. As we already pointed out, the approach fails if ^\n\nw\nis singular\nwhich is often the case in document classification\n2\n. Usually,\nthis problem is overcome by using a nonsingular intermediate space\nof ^\n\nw\nobtained by removing the null space of ^\n\nw\nand then computing\neigenvectors. However, the removal of the null space of ^\n\nw\npossibly eliminates some useful information because some of the\nmost discriminant dimensions may be lost by the removal. In fact,\nthe null space of ^\n\nw\nis guaranteed to contain useful discriminant\ninformation when the projections of ^\n\nb\nare not zeros along those\ndirections. Thus, simple removal of the null space of ^\n\nw\nis not an\neffective resolution [12].\nOnce the transformation\n\nhas been determined, classification\nis performed in the transformed space based on a distance metrics,\nsuch as Euclidean distance\nd\n\nx y\n\n\n\ni\n\nx\ni\n\ny\ni\n\n2\nand cosine measure\nd\n\nx y\n\n1\n\n\ni\nx\ni\ny\ni\n\n\ni\nx\n2\ni\n\n\ni\ny\n2\ni\nA new instance, z, it is classified to\nargmin\nk\nd\n\nz\n\nx\nk\n\n\n(2)\nwhere x\nk\nis the centroid of k-th class.\n2\nIn fact, ^\n\nw\nis nonsingular only if there are p\n\nL samples. This is\nusually impractical.\n3.2\nThe New Criterion\nWe propose the use of the following criterion for discriminating\ninter-class and intra-class distances by inter-class and intra-class\ncovariance matrices:\nmin\n\nA\nb\n\n\nI\nn 2\nF\n\nA\nw\n\n2\nF\n(3)\nwhere X\nF\nis the Frobenius norm of the matrix X , i.e.,\n\n\ni j\nx\n2\ni j\n.\nThe criterion does not involve the inverse of the intra-class matrix\nand is similar to Tikhonov regularization of least squares problems.\nIntuitively, the first term of (3) is used to minimize the difference\nbetween the projection of x\ni\n\nx in a new space and the i-th unit\nvector of the new space. The second term is used to minimize the\nintra-class covariance.\nThe equation (3) can be rewritten as\nmin\n\n\n\n\n\nA\nw\nA\nb\n\n\n0\nI\nn\n\n\n\n\n2\nF\n(4)\nand this is a least squares problem with the solution\n\nA\nT\nw\nA\nw\n\nA\nT\nb\nA\nb\n\n\nA\nT\nb\n(5)\nGENERALIZED SINGULAR VALUE DECOMPOSITION\nHere we will show how to use GSVD to compute efficiently the\nsolution to the optimization problem formulated in Section 3 and\nshow that the solution thus obtained is stable.\n4.1\nThe Basics of GSVD\nSingular value decomposition (SVD) is a process of decomposing\na rectangular matrix into three other matrices of a very special\nform. It can be viewed as a technique for deriving a set of uncor-related\nindexing variables or factors [6]. A Generalized Singular\nValue Decomposition (GSVD) is an SVD of a sequence of matrices\n. GSVD has played a significant role in signal processing and\nin signal identification and has been widely used in such problems\nas source separation, stochastic realization and generalized Gauss-Markov\nestimation.\nThe diagonal form of GSVD, shown below, was first introduced\nin [21].\nT\nHEOREM\n1. 
(GSVD Diagonal Form [21]) If $A \in R^{m \times p}$, $B \in R^{n \times p}$, and $\mathrm{rank}\begin{pmatrix} A^T & B^T \end{pmatrix} = k$, then there exist two orthogonal matrices, $U \in R^{m \times m}$ and $V \in R^{n \times n}$, and a non-singular matrix $X \in R^{p \times p}$, such that

$$\begin{pmatrix} U^T & 0 \\ 0 & V^T \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} X = \begin{pmatrix} C \\ S \end{pmatrix} \begin{pmatrix} I_k & 0 \end{pmatrix}, \qquad (6)$$

where $C$ and $S$ are nonnegative diagonal and of dimension $m \times k$ and $n \times k$, respectively, $1 \ge S_{11} \ge \cdots \ge S_{\min(n,k),\min(n,k)} \ge 0$, and $C^T C + S^T S = I_k$.

The generalized singular values are defined to be the component-wise ratios of the diagonal entries of the two diagonal matrices. In signal processing, A is often the signal matrix and B is the noise matrix, in which case the generalized singular values are referred to as signal-noise ratios.

4.2 Stable Solution
By plugging the GSVD matrices of $A_w$ and $A_b$ into (5), we have

$$\Phi = X \begin{pmatrix} I_k \\ 0 \end{pmatrix} S^T V^T.$$

Since $V$ is orthogonal, we can drop it without
In particular, computation of the cross products\nA\nT\nb\nA\nb\nand A\nT\nw\nA\nw\n, which causes roundoff errors, is not required.\n4.3\nThe GDA Algorithm\nThe pseudo codes of the training and prediction procedures are\ndescribed as follows:\nAlgorithm 1 Training procedure\n\n= Train (x's)\nInput: the training data x\ni\n's\nOutput: the transformation\n\n;\nbegin\n1.\nConstruct the matrices A\nw\nand A\nb\n;\n2.\nPerform GSVD on the matrix pair;\n3.\nObtain\n\nas described in equation 7.\n4.\nReturn\n\n;\nend\nAlgorithm 2 Prediction Procedure T\n= Predict (\n\n, x)\nInput: the transformation\n\ngenerated by the training procedure;\nand a new instance x;\nOutput: the label T of the new instance;\nbegin\n1.\nPerform Prediction as in equation 2;\n2.\nReturn T\n;\nend\nCONNECTIONS\nHere we show that the above transformation derived using our\nnew criterion can also be obtained by optimizing the trace or determinant\nratios.\n5.1\nOptimizing the determinant ratio\nFisher's criterion is to maximize the ratio of the determinant of\nthe inter-class scatter matrix of the projected samples to the determinant\nof the intra-class scatter matrix of the projected samples:\nJ\n\n\n\n\nT\n^\n\nb\n\n\nT\n^\n\nw\n\n(8)\nOne way to overcome the requirements of non-singularity of\nFisher's criterion is looking for solutions that simultaneously maximize\n\nT\n^\n\nb\n\nminimize\n\nT\n^\n\nw\n\n. Using GSVD, A\nb\nand A\nw\nare decomposed as A\nw\nUC I\nk\n0 X\n\n1\nand A\nb\nV S I\nk\n0 X\n\n1\n.\nTo maximize\nJ\n\n\n\n,\n\nT\n^\n\nb\n\nshould be increased while decreasing\n\nT\n^\n\nw\n\n. Let C\n\nC I\nk\n0\nand S\n\nS I\nk\n0\n. Then we have\n^\n\nb\nA\nT\nb\nA\nb\nX S\n\n2\nX\n\n1\nand ^\n\nw\nA\nT\nw\nA\nw\nXC\n\n2\nX\n\n1\n. This implies\n\nT\n^\n\nb\n\n\nT\nX S\n\n2\nX\n\n1\n\n\nS\n\nX\n\n1\n\n\n2\nand\n\nT\n^\n\nw\n\n\nT\nXC\n\n2\nX\n\n1\n\n\nC\n\nX\n\n1\n\n\n2\nThus, the matrix\n\nsatisfying X\n\n1\n\nI\nk\n0\nwould simultaneously\nmaximize\n\nT\n^\n\nb\n\nand minimize\n\nT\n^\n\nw\n\n(since the diagonal\nof S is decreasing). So, we have\n\nX\nI\nk\n0\n. In the case\nwhere we must weight the transformation with the generalized singular\n,\n\nX\nI\nk\n0\nS\nT\nis the transformation we want.\n5.2\nOptimizing the trace ratio\nThe same transformation can also be obtained by optimizing the\ntrace ratio. Using GSVD, we have\ntrace\n\n\nT\n^\n\nb\n\n\ntrace\n\nS\n\nS\n\nT\nX\n\n1\n\nT\nX\n\nT\n\ntrace\n\nS\n\nS\n\nT\nGG\nT\n\nk\n\ni 1\nS\n2\nii\ng\nii\nand\ntrace\n\n\nT\n^\n\nw\n\n\ntrace\n\nC\n\nC\n\nT\nX\n\n1\n\nT\nX\n\nT\n\ntrace\n\nC\n\nC\n\nT\nGG\nT\n\nk\n\ni 1\nC\n2\nii\ng\nii\nwhere G\nX\n\n1\n\nand g\nii\nis the ii-th term of G. Since C\nT\nC\n\nS\nT\nS\nI\nk\n, we have\ntrace\n\n\nT\n^\n\nb\n\n\n\ntrace\n\n\nT\n^\n\nw\n\n\nk\n\ni 1\nS\n2\nii\ng\nii\n\nk\n\ni 1\nC\n2\nii\ng\nii\nk\n\ni 1\ng\nii\nIf we force that trace\n\n\nT\n^\n\nb\n\n\n1, the optimization is formulated\nas minimization of trace\n\n\nT\n^\n\nw\n\n\n\nk\ni 1\ng\nii\n\n1. Here g\nii\n's\nare diagonal elements of a positive semi-definite matrix, so they\nare nonnegative.\nAlso, for all i, g\nii\n0 implies that for all j\n320\ng\ni j\ng\nji\n0.\nNote that GG\nT\nis a p\n\np matrix.\nSince only\nthe first k diagonal entries,\ng\nii ki 1\n, appear in the formula for\ntrace\n\n\nT\n^\n\nw\n\n\n\nk\ni 1\ng\nii\n\n1, the quantities of other m\n\nk diagonal\nentries do not affect the optimization. Thus, we may set all\nof these to 0, thereby obtaining\n\nX\nI\nk\n0\n. 
In the case when\nwe want to weight the transformation with the generalized singular\nvalues, we obtain\n\nX\nI\nk\n0\nS\nT\n.\nTEXT CLASSIFICATION VIA GDA EX-AMPLES\nA well-known transformation method in information retrieval is\nLatent Semantic Indexing (LSI) [6], which applies Singular Value\nDecomposition (SVD) to the document-term matrix and computes\neigenvectors having largest eigenvalues as the directions related to\nthe dominant combinations of the terms occurring in the dataset\n(latent semantics). A transformation matrix constructed from these\neigenvectors projects a document onto the latent semantic space.\nAlthough LSI has been proven extremely useful in information retrieval\n, it is not optimal for text categorization because LSI is com-pletely\nunsupervised. In other words, LSI deals with the data without\npaying any particular attention to the underlying class structure\n. It only aims at optimally transforming the original data into\na lower dimensional space with respect to the mean squared error\n, which has nothing to do with the discrimination of the different\nclasses. Our\nGDA\napproach possesses advantages of both\ndiscriminant analysis and of latent semantic analysis. By explic-itly\ntaking the intra-class and inter-class covariance matrices into\nthe optimization criterion,\nGDA\ndeals directly with discrimination\nbetween classes. Furthermore, by employing GSVD to solve the\noptimization problem,\nGDA\ntries to identify the latent concepts\nindicated by the generalized singular values.\nTo illustrate how well\nGDA\ncan perform, we present here two\nexamples. In the first example, we compare\nGDA\nagainst LDA and\nLSI. Figure 1 shows a small dataset consisting of nine phrases in\nthree topics: user interaction, graph theory, and distributed systems.\nNo.\nClass\nPhrase\n1\n1\nHuman interface for user response\n2\n1\nA survey of user opinion of computer\nsystem response time\n3\n1\nRelation of user-perceived response\ntime to error measurement\n4\n2\nThe generation of random, binary,\nunordered trees\n5\n2\nThe intersection graph of paths in trees\n6\n2\nGraph Minors IV: Widths of trees and\nwell-quasi-ordering\n7\n3\nA survey of distributed shared memory system\n8\n3\nRADAR: A multi-user distributed system\n9\n3\nManagement interface tools for\ndistributed computer system\nFigure 1: Nine example sentences\nAfter removing words (terms) that occurs only once, we have the\ndocument-term matrix as shown in Figure 2.\nThe first and second samples in each class are used for training\n.\nGDA\n, LDA, and LSI are run on the training data to obtain\ntransformation matrices.\nFigure 3 shows the plot of the\nword\n\\\nNo.\n1\n2\n3\n4\n5\n6\n7\n8\n9\na\n1\n1\n1\ncomputer\n1\n1\ndistributed\n1\n1\n1\nfor\n1\n1\ngraph\n1\n1\ninterface\n1\n1\nof\n2\n1\n1\n1\n1\n1\nresponse\n1\n1\n1\nsurvey\n1\n1\nsystem\n1\n1\n1\n1\nthe\n1\n1\ntime\n1\n1\ntrees\n1\n1\n1\nuser\n1\n1\n1\n1\nFigure 2: Document-term Matrix\ndistances/similarities between document pairs in the transformed\nspace using each of the three methods.\n(a)\nGDA\n(b) LDA\n(c) LSI\nFigure 3: Pairwise document similarity via\nGDA\n, LDA, and\nLSI. The darker the close is the more similar the documents\nare.\nGDA\nis a clear winner.\nThe second example illustrates differences between\nGDA\nand\nLSI. Distinction among three newsgroups in 20NG are attempted\nby selecting from each newsgroup twenty training and twenty for\ntesting. 
Figure 4 shows plots of the the sixty testing articles using\nthe two dominant directions as the axes.\nGDA\nhas clear separation\nwhile the LSI plot shows an L-shaped concentration of the\ndata points. The confusion matrices of these methods are shown in\nTable 1.\nGDA\nclearly performed better than LSI.\nprediction\nprediction\nactual\n1\n2\n3\nactual\n1\n2\n3\n1\n20\n0\n0\n1\n20\n0\n0\n2\n0\n19\n1\n2\n0\n3\n17\n3\n0\n0\n0\n3\n7\n5\n8\nTable 1: The confusion matrices. Left:\nGDA\n. Right: LSI.\nEXPERIMENTS\nFor our experiments we used a variety of datasets, most of which\nare frequently used in the information retrieval research. The range\nof the number of classes is from four to 105 and the range of the\nnumber of documents is from 476 to 20,000, which seem varied\n321\n-1.5\n-1\n-0.5\n0\n0.5\n1\n-1\n-0.8\n-0.6\n-0.4\n-0.2\n0\n0.2\n0.4\n0.6\n0.8\n1\nGroup 1\nGroup 2\nGroup 3\n(a)\nGDA\n-0.07\n-0.06\n-0.05\n-0.04\n-0.03\n-0.02\n-0.01\n0\n0.01\n0.02\n0.03\n-0.05\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\nGroup 1\nGroup 2\nGroup 3\n(b) LSI\nFigure 4: Document plots. The three groups are separated sig-nificantly\nbetter with\nGDA\nthan with LSI.\nenough to obtain good insights as to how\nGDA\nperforms. Table 2\nsummarizes the characteristics of the datasets.\n20Newsgroups\nThe 20Newsgroups (20NG) dataset contains\napproximately 20,000 articles evenly divided among 20 Usenet\nnewsgroups. The raw text size is 26MB. All words were stemmed\nusing a porter stemming program, all HTML tags were skipped,\nand all header fields except subject and organization of the posted\narticle were ignored.\nWebKB\nThe WebKB dataset\n3\ncontains Web pages collected\nfrom university computer science departments. There are approximately\n8,300 documents in the set and they are divided into seven\ncategories: student, faculty, staff, course, project, department, and\nother. The raw text size of the dataset is 27MB. Among the seven\ncategories, student, faculty, course, and project are the four most\npopulous. The subset consisting only of these categories is also\nused here, which is called WebKB4. In neither of the datasets, we\nused stemming or stop lists.\nIndustry\nSector\nThe\nIndustry\nSection\ndataset\n4\nis\nbased on the data made available by Market Guide, Inc.\n(www.marketguide.com). The set consists of company homepages\nthat are categorized in a hierarchy of industry sectors, but we\ndisregarded the hierarchy. There were 9,637 documents in the\ndataset, which were divided into 105 classes. We tokened the\ndocuments by skipping all MIME and HTML headers and using a\nstandard stop list. We did not perform stemming.\nReuters\nThe Reuters-21578 Text Categorization Test Collection\ncontains documents collected from the Reuters newswire in\n1987. It is a standard text categorization benchmark and contains\n135 categories. We used its subsets: one consisting of the ten most\nfrequent categories, which we call Reuters-Top10, and the other\nconsisting of documents associated with a single topic, which we\ncall Reuters-2. Reuters-2 had approximately 9,000 documents and\n50 categories.\nTDT2\nTDT2 is the NIST Topic Detection and Tracking text\ncorpus version 3.2 released in December 6, 1999 [30]. This corpus\ncontains news data collected daily from nine news sources in\ntwo languages (American English and Mandarin Chinese), over a\nperiod of six months (JanuaryJune in 1998). 
We used only the\nEnglish news texts, which were collected from New York Times\nNewswire Service, Associated Press Worldstream Service, Cable\nNews Network, Voice of America, American Broadcasting Company\n, and Public Radio International. The documents were manu-ally\nannotated using 96 target topics. We selected the documents\nhaving annotated topics and removed the brief texts. The resulting\n3\nBoth\n20NG\nand\nWebKB\nare\navailable\nat\nhttp://www-2\n.cs.cmu.edu/afs/cs/project/theo-11/www/wwkb.\n4\nAvailable at http://www.cs.cmu.edu/ TextLearning/datasets.html\ndataset contained 7,980 documents.\nK-dataset\nThis dataset was obtained from the WebACE\nproject [15]. It contained 2,340 documents consisting of news articles\nfrom Reuters News Service made available on the Web in October\n1997. These documents were divided into 20 classes. They\nwere processed by eliminating stop words and HTML tags, stemming\nthe remaining words using Porter's suffix-stripping algorithm.\nCSTR\nThis is the dataset of the abstracts of technical reports\npublished in the Department of Computer Science at the University\nof Rochester between 1991 and 2002\n5\n. The dataset contained 476\nabstracts, which were divided into four research areas: Symbolic-AI\n, Spatial-AI, Systems, and Theory. We processed the abstracts\nby removing stop words and applying stemming operations on the\nremaining words.\nDatasets\n# documents\n# class\n20NG\n20,000\n20\nWebKB4\n4,199\n4\nWebKB\n8,280\n7\nIndustry Sector\n9,637\n105\nReuters-Top10\n2,900\n10\nReuters-2\n9,000\n50\nCSTR\n476\n4\nK-dataset\n2,340\n20\nTDT2\n7,980\n96\nTable 2: Data Sets Descriptions\n7.2\nData Preprocessing\nIn all experiments, we randomly chose 70% of the documents for\ntraining and assigned the rest for testing. It is suggested in [35] that\ninformation gain is effective for term removal and it can remove up\nto 90% or more of the unique terms without performance degrade.\nSo, we first selected the top 1,000 words by information gain with\nclass labels. The feature selection is done with the Rainbow package\n[23].\nHere we use classification accuracy for evaluation. Different\nmeasures, such as precision-recall graphs and F\n1\nmeasure [34],\nhave been used in the literature. However, since the datasets used\nin our experiments are relatively balanced and single-labeled, and\nour goal in text categorization is to achieve low misclassification\nrates and high separation between different classes on a test set,\nwe thought that accuracy is the best measure of performance. All\nof our experiments were carried out on a P4 2GHz machine with\n512M memory running Linux 2.4.9-31.\n7.3\nExperimental Results\nNow we present and discuss the experimental results. Here we\ncompare\nGDA\nagainst Naive Bayes (NB for short), K-Nearest\nNeighbor (KNN for short), Maximum Entropy (ME for short),\nLDA, and SVM on the same datasets with the same training and\ntesting data. Recall that the first three of the methods we compare\nagainst are commonly-used direct methods for multi-class classification\n(in the sense that they do not require reduction to binary\nclassification problems). For experiments involving SVM we used\nSVMTorch [5]\n6\n, which uses the one-versus-the-rest decomposition.\nTable 3 and Figure 5 show performance comparisons.\nGDA\noutperformed all the other five methods on 20NG, WebKB4, WebKB\nand Industry Sector. 
SVM performed the best on Reuters-2, K-dataset, and TDT2. GDA outperformed LDA in all the experiments, and the improvement was significant (more than 10%) when the sample size was relatively small (in the case of CSTR, Reuters-Top10, and K-dataset).
5 The TRs are available at http://www.cs.rochester.edu/trs.
6 Downloadable at http://old-www.idiap.ch/learning/SVMTorch.html.
On 20NG, the performance of GDA is 95.03%, which is approximately 10% higher than that of NB, 6% higher than that of ME, and 4% higher than that of SVM. On the WebKB4 dataset, GDA beats NB by approximately 5%, and both ME and SVM by approximately 2%. On the WebKB dataset, GDA beats NB by approximately 16% and ME by 6%. On Industry Sector, the performance of GDA is about 8% higher than that of NB and about 6% higher than that of ME. The results with GDA and with SVM are almost the same on WebKB, Industry Sector, Reuters-Top10, and CSTR. On Reuters-2, K-dataset, and TDT2, SVM performs slightly better than GDA, by 3%. ME achieves the best results on the CSTR dataset, while NB is the winner on Reuters-Top10 in terms of performance. On CSTR, the performance of GDA is 2% lower than that of NB and 4% lower than that of ME. On Reuters-Top10, GDA is beaten by NB by approximately 1%. In total, GDA is always either the winner or very close to the winner: it is ranked first four times, second three times, and third in the remaining two. Although there is no single winner over all datasets, GDA seems to outperform the rest on most counts. We can say that GDA is a viable, competitive algorithm in text categorization.

Datasets         GDA    NB     KNN    ME     LDA    SVM
20NG             95.03  85.60  50.70  89.06  93.90  91.07
WebKB4           94.01  85.13  37.29  91.93  90.72  92.04
WebKB            79.02  61.01  44.81  71.30  77.35  78.89
Industry Sector  66.57  56.32  39.48  58.84  66.49  65.96
Reuters-Top10    81.98  83.33  74.07  81.65  71.46  81.13
Reuters-2        89.82  87.88  73.22  88.56  88.65  92.43
CSTR             88.50  90.85  82.53  92.39  68.29  88.71
K-dataset        88.44  86.14  58.26  86.19  77.69  91.90
TDT2             90.54  91.59  86.63  89.18  88.41  93.85
Table 3: Performance comparisons (classification accuracy, %). For KNN we set k to 30.

Figure 5: Performance Comparison (accuracy, 0-100, per dataset for GDA, NB, KNN, ME, LDA, and SVM).

GDA is also very efficient, and most experiments are done in several seconds. Table 4 summarizes the running time for all the experiments of GDA and SVM. Figure 6 and Figure 7 present the comparisons of training and prediction time, respectively. The time saving of GDA is very obvious.
In summary, these experiments\nhave shown that\nGDA\nprovides an alternate choice for fast and\nefficient text categorization.\nGDA\nGDA\nSVM\nSVM\nDatasets\nTraining\nPrediction\nTraining\nPrediction\n20NG\n171.80\n6.86\n270.20\n64.28\nWebKB4\n63.4\n0.20\n114.67\n54.72\nWebKB\n94.64\n0.43\n1108.17\n103.03\nIndustry Sector\n88.23\n6.45\n423.54\n79.82\nReuters-Top10\n61.23\n0.15\n94.28\n18.65\nReuters-2\n96.19\n1.13\n566.53\n85.10\nCSTR\n3.65\n0.02\n7.50\n2.77\nK-dataset\n62.88\n0.18\n84.56\n47.70\nTDT2\n21.69\n5.14\n89.91\n26.76\nTable 4: Time Table in seconds.\n0\n200\n400\n600\n800\n1000\n1200\n20N\newsg\nrou\nps\nWe\nbKB\n4\nWe\nbKB\nInd\nustry\nSect\nor\nReu\nters\n-top\n10\nReu\nters\n-2\nCST\nR\nK-d\nata\nset\nTDT\n2\nTraining\nTime\nGDA\nSVM\nFigure 6: Training Time Comparisons\n0\n20\n40\n60\n80\n100\n120\n20N\newsg\nrou\nps\nWe\nbKB\n4\nWe\nbKB\nInd\nustry\nSect\nor\nReu\nters\n-top\n10\nReu\nters\n-2\nCST\nR\nK-d\nata\nset\nTDT\n2\nPrediction\nTime\nGDA\nSVM\nFigure 7: Prediction Time Comparisons\nDISCUSSIONS AND CONCLUSIONS\nIn this paper, we presented\nGDA\n, a simple, efficient, and yet accurate\n, direct approach to multi-class text categorization.\nGDA\nutilizes\nGSVD to transform the original data into a new space, which\ncould reflect the inherent similarities between classes based on a\nnew optimization criterion. Extensive experiments clearly demonstrate\nits efficiency and effectiveness.\nInterestingly enough, although traditional discriminant approaches\nhave been successfully applied in pattern recognition, little\nwork has been reported on document analysis. As we mentioned\nearlier, this is partly because the intra-class covariance matrix is\nusually singular for document-term data and hence restrict the usage\nof discriminant. Our new criterion avoids the problem while\nstill preserving the discriminative power of the covariance matrix.\n323\nAnother big barrier to application of discriminant analysis in document\nclassification is its large computation cost. As we know,\ntraditional discriminant analysis requires a large amount of computation\non matrix inversion, SVD, and eigenvalue-analysis. The\ncosts of these operations are extremely large in document analysis\nbecause the matrices have thousands of dimension. Our approach\nmakes use of effective feature selection via information gain, with\nwhich we can remove up to 90% or more of the unique terms without\nsignificant performance degrade [35]. One of our future plans\nis to explore how the performance correlates with different feature\nselection methods and the number of words selected. There are also\nother possible extensions such as using random projection to reduce\nthe dimensionality before applying discriminant analysis [27].\nAcknowledgments\nThis work is supported in part by NSF grants EIA-0080124, DUE-9980943\n, and EIA-0205061, and NIH grant P30-AG18254.\nREFERENCES\n[1] Allwein, E. L., Schapire, R. E., & Singer, Y. (2000). Reducing\nmulticlass to binary: A unifying approach for margin\nclassifiers. ICML-00 (pp. 916).\n[2] Apte, C., Damerau, F., & Weiss, S. (1998). Text mining with\ndecision rules and decision trees. Proceedings of the Workshop\nwith Conference on Automated Learning and Discovery:\nLearning from text and the Web.\n[3] Chakrabarti, S., Roy, S., & Soundalgekar, M. V. (2002). Fast\nand accurate text classification via multiple linear discriminant\nprojections. Proceedings of the 28th International Conference\non Very Large Databases (pp. 658669).\n[4] Cohen, W. W., & Singer, Y. (1996). 
Context-sensitive learning\nmethods for text categorization. Proceedings of the 19th Annual\nInternational ACM SIGIR Conference on Research and\nDevelopment in Information (pp. 307315).\n[5] Collobert, R., & Bengio, S. (2001). SVMTorch: Support\nvector machines for large-scale regression problems. Journal of\nMachine Learning Research, 1, 143160.\n[6] Deerwester, S. C., Dumais, S. T., Landauer, T. K., Furnas,\nG. W., & Harshman, R. A. (1990). Indexing by latent semantic\nanalysis. Journal of the American Society of Information\nScience, 41, 391407.\n[7] Demmel, J., & Veselic, K. (1992). Jacobi's method is more\naccurate than QP. SIAM Journal on Matrix Analysis and\nApplications, 13, 1019.\n[8] Dietterich, T. G., & Bakiri, G. (1995). Solving multiclass\nlearning problems via error-correcting output codes. Journal of\nArtificial Intelligence Research, 2, 263286.\n[9] Dumais, S., Platt, J., Heckerman, D., & Sahami, M. (1998).\nInductive learning algorithms and representations for text\ncategorization. CIKM-98 (pp. 148155).\n[10] Fisher, R. (1936). The use of multiple measurements in\ntaxonomic problems. Annals of Eugenics, 7, 179188.\n[11] Fragoudis, D., Meretakis, D., & Likothanassis, S. (2002).\nIntegrating feature and instance selection for text classification.\nSIGKDD-02 (pp. 501506).\n[12] Fukunaga, K. (1990). Introduction to statistical pattern\nrecognition. Academic Press.\n[13] Ghani, R. (2000). Using error-correcting codes for text\nclassification. ICML-00 (pp. 303310).\n[14] Godbole, S., Sarawagi, S., & Chakrabarti, S. (2002). Scaling\nmulti-class support vector machine using inter-class confusion.\nSIGKDD-02 (pp. 513518).\n[15] Han, E.-H., Boley, D., Gini, M., Gross, R., Hastings, K.,\nKarypis, G., Kumar, V., Mobasher, B., & Moore, J. (1998).\nWebACE: A web agent for document categorization and\nexploration. Agents-98 (pp. 408415).\n[16] Hastie, T., & Tibshirani, R. (1998). Classification by\npairwise coupling. Advances in Neural Information Processing\nSystems. The MIT Press.\n[17] Joachims, T. (1998). Making large-scale support vector\nmachine learning practical. In Advances in kernel methods:\nSupport vector machines.\n[18] Joachims, T. (2001). A statistical learning model of text\nclassification with support vector machines. SIGIR-01 (pp.\n128136).\n[19] Lam, W., & Ho., C. (1998). Using a generalized instance set\nfor automatic text categorization. SIGIR-98 (pp. 8189).\n[20] Lewis, D. D. (1998). Naive (Bayes) at forty: The\nindependence assumption in information retrieval. ECML-98.\n[21] Loan, C. V. (1976). Generalizing the singular value\ndecomposition. SIAM J. Num. Anal., 13, 7683.\n[22] Masand, B., Linoff, G., & Waltz., D. (1992). Classifying\nnews stories using memory based reasoning. SIGIR-92 (pp.\n5964).\n[23] McCallum, A. K. (1996). Bow: A toolkit for statistical\nlanguage modeling, text retrieval, classification and clustering.\nhttp://www.cs.cmu.edu/ mccallum/bow.\n[24] Mika, S., Ratsch, G., Weston, J., Scholkopf, B., & Muller,\nK.-R. (1999). Fisher discriminant analysis with kernels. Neural\nNetworks for Signal Processing IX (pp. 4148). IEEE.\n[25] Ng, H. T., Goh, W. B., & Low, K. L. (1997). Feature\nselection, perceptron learning, and a usability case study for\ntext categorization. Proceedings of the 20th Annual\nInternational ACM SIGIR Conference on Research and\nDevelopment in Information (pp. 6773).\n[26] Nigam, K., Lafferty, J., & McCallum, A. (1999). Using\nmaximum entropy for text classification. 
In IJCAI-99 Workshop\non Machine Learning for Information Filtering (pp. 6167).\n[27] Papadimitriou, C. H., Tamaki, H., Raghavan, P., & Vempala,\nS. (1998). Latent semantic indexing: A probabilistic analysis.\nProceedings of the Symposium on Principles of Database\nSystems (pp. 159168).\n[28] Schapire, R. E., & Singer, Y. (2000). Boostexter: A\nboosting-based system for text categorization. Machine\nLearning, 39, 135168.\n[29] Scholkopf, B., & J.Smola, A. (2002). Learning with kernels.\nMIT Press.\n[30] TDT2 (1998). Nist topic detection and tracking corpus.\nhttp://www.nist.gove/speech/tests/tdt/tdt98/index.htm.\n[31] Tzeras, K., & Hartmann, S. (1993). Automatic indexing\nbased on Bayesian inference networks. SIGIR-93 (pp. 2234).\n[32] Vapnik, V. N. (1998). Statistical learning theory. Wiley, New\nYork.\n[33] Wiener, E. D., Pedersen, J. O., & Weigend, A. S. (1995). A\nneural network approach to topic spotting. 4th Annual\nSymposium on Document Analysis and Information Retrieval\n(pp. 317332).\n[34] Yang, Y., & Liu, X. (1999). A re-examination of text\ncategorization methods. SIGIR-99 (pp. 4249).\n[35] Yang, Y., & Pederson, J. O. (1997). A comparative study on\nfeature selection in text categorization. ICML-97 (pp. 412420).\n324\n", "keywords": "multi-class classification;text categorization;GSVD;Discriminant Analysis;Multi-class Text Categorization;SVMs;GDA;discriminant analysis"} {"name": "78", "title": "Efficient Phrase Querying with an Auxiliary Index", "abstract": "Search engines need to evaluate queries extremely fast, a challenging task given the vast quantities of data being indexed. A significant proportion of the queries posed to search engines involve phrases. In this paper we consider how phrase queries can be efficiently supported with low disk overheads. Previous research has shown that phrase queries can be rapidly evaluated using nextword indexes, but these indexes are twice as large as conventional inverted files. We propose a combination of nextword indexes with inverted files as a solution to this problem. Our experiments show that combined use of an auxiliary nextword index and a conventional inverted file allow evaluation of phrase queries in half the time required to evaluate such queries with an inverted file alone, and the space overhead is only 10% of the size of the inverted file. Further time savings are available with only slight increases in disk requirements. Categories and Subject Descriptors", "fulltext": "INTRODUCTION\nSearch engines are used to find data in response to ad hoc\nqueries. On the Web, most queries consist of simple lists\nof words. However, a significant fraction of the queries include\nphrases, where the user has indicated that some of the\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nSIGIR'02, August 11-15, 2002, Tampere, Finland.\nCopyright 2002 ACM 1-58113-561-0/02/0008 ...\n$\n5.00.\nquery terms must be adjacent, typically by enclosing them\nin quotation marks. 
Phrases have the advantage of being\nunambiguous concept markers and are therefore viewed as\na valuable retrieval technique [5, 6, 7, 10]\nIn this paper, we explore new techniques for efficient evaluation\nof phrase queries.\nA standard way to evaluate phrase queries is to use an\ninverted index, in which for each index term there is a list\nof postings, and each posting includes a document identifier,\nan in-document frequency, and a list of offsets. These offsets\nare the ordinal word positions at which the term occurs in\nthe document. Given such a word-level inverted index and a\nphrase query, it is straightforward to combine the postings\nlists for the query terms to identify matching documents.\nThis does not mean, however, that the process is fast. Even\nwith an efficient representation of postings [16], the list for\na common term can require several megabytes for each gigabyte\nof indexed text. Worse, heuristics such as frequency-ordering\n[13] or impact-ordering [1] are not of value, as the\nfrequency of a word in a document does not determine its\nfrequency of participation in a particular phrase.\nA crude solution is to use stopping, as is done by some\nwidely-used web search engines (the Google search engine,\nfor example, neglects common words in queries), but this\napproach means that a small number of queries cannot be\nevaluated, while many more evaluate incorrectly [12]. Another\nsolution is to index phrases directly, but the set of\nword pairs in a text collection is large and an index on such\nphrases difficult to manage.\nIn recent work, we proposed nextword indexes as a way\nof supporting phrase queries and phrase browsing [2, 3, 15].\nIn a nextword index, for each index term or firstword there\nis a list of the words or nextwords that follow that term,\ntogether with the documents and word positions at which\nthe firstword and nextword occur as a pair. The disadvantage\nof a nextword index is its size, typically around half\nthat of the indexed collection. Also, as described originally,\nnextword index processing is not particularly efficient, as the\nnextwords must be processed linearly and (compared to an\nstandard inverted index) for rare firstwords the overhead of\nthe additional layer of structure may outweigh the benefits.\nIn this paper we propose that phrase queries be evaluated\nthrough a combination of an inverted index on rare\nwords and a form of nextword index on common words. We\nexplore the properties of phrase queries and show experimentally\nthat query evaluation time can be halved if just\nthe three most common firstwords are supported through\na nextword index. While phrase browsing is not possible\n215\nwith such an arrangement, the disk overheads of the partial\nnextword index are small and the benefits are substantial.\nWe have observed that many ordinary queries -- those\nwithout quotation marks -- nonetheless resolve successfully\nif processed as a phrase query, a phenomenon that search\nengine users are familiar with, as the most popular engines\nhighly rank matches in which the query terms are adjacent.\nThis suggests that phrase querying is a potential method for\na fast \"first cut\" evaluation method, as it allows more rapid\nidentification of documents in which the terms occur as a\nphrase.\nPROPERTIES OF QUERIES\nWith large web search engines being used daily by millions\nof users, it has become straightforward to gather large\nnumbers of queries and see how users are choosing to express\ntheir information needs. 
Some search engine companies have\nmade extracts of their query logs freely available. In our research\n, we have made extensive use of query logs provided\nby Excite dating to 1997 and 1999, as well as more recent\nlogs from other sources.\nThese logs have similar properties\n(with regard to our purposes), and we report primarily\non the Excite logs in this work. In the Excite log, after\nsanitizing to remove obscenity there are 1,583,922 queries\n(including duplicates). Of these, 132,276 or 8.3% are explicit\nphrase queries, that is, they include a sequence of two\nor more words enclosed in quotes. Amongst the rest of the\nqueries--those without a phrase-- about 5% contain a word\nthat does not occur at all in the 21.9 gigabytes (Gb) of\ndata we use. However, almost exactly 41% of the remaining\nnon-phrase queries actually match a phrase in the 21.9 Gb\ndataset we use in our experiments.\nA surprising proportion of the phrases include a common\nterm. Amongst the explicit phrase queries, 11,103 or 8.4%\ninclude one of the three words that are commonest in our\ndataset, \"the\", \"to\", and \"of\". 14.4% of the phrase queries\ninclude one of the 20 commonest terms. In some of these\nqueries the common word has a structural function only, as\nin tower of london, and can arguably be safely neglected\nduring query evaluation. In other queries, however, common\nwords play an important role, as in the movie title end\nof days or the band name the who, and evaluation of these\nqueries is difficult with the common words removed, especially\nwhen both \"the\" and \"who\" happen to be common\nterms [12].\nTaken together, these observations suggest that stopping\nof common words will have an unpredictable effect. Stopping\nmay yield efficiency gains, but means that a significant\nnumber of queries cannot be correctly evaluated. We experimented\nwith a set of 122,438 phrase queries that between\nthem match 309\n10\n6\ndocuments.\nStopping of common\nwords means that a query such as tower of london must\nbe evaluated as tower -- london: the query evaluation engine\nknows that the two remaining query terms must appear\nwith a single term between them. If the commonest three\nwords are stopped, there are 390\n10\n6\ntotal matches for\nall queries extracted from the log. However, these are distributed\nextremely unevenly amongst the queries: for some\nqueries the great majority of matches are incorrect. The\nfigure rises to 490\n10\n6\nfor the commonest 20 words, and\n1693\n10\n6\nfor the commonest 254 words, while a significant\nnumber of queries, containing only stopped words, cannot\nbe evaluated at all.\nIt can be argued that stopwords are often insignificant,\nand that even a document that is technically a mismatch-due\nto the wrong stopword being present--may be just as\nlikely to be relevant as a document where the match is correct\n. However, it is worth emphasising that there are many\nqueries in which stopwords do play an important role. The\nwords \"to\" and \"from\" are often stopped, for example, but\nmismatches to the query flights to london are likely to\nbe incorrect. Another instance is that the word \"the\" often\nforms part of a description, thus the moon should not match\nwebsites about a moon of Jupiter, Keith Moon, or a book\npublisher.\nAmongst the phrase queries, the median number of words\nin a phrase is 2, and the average is almost 2.5. About 34%\nof the queries have three words or more, and 1.3% have six\nwords or more. 
A few queries are much longer, such as titles:\nthe architect of desire beauty and danger in\nthe stanford white family by suzannah lessard.\nAnother point of interest is where in a phrase the common\nwords occur.\nIn English, the common words rarely\nterminate a phrase query. Only 0.4% of phrase queries with\n\"the\", \"to\", or \"of\" have these words at the end. Almost all\nof these queries are short: virtually no queries of four words\nor more terminate with one of the commonest terms. In\nthe short queries ending in a common term, the other query\nterms are themselves usually common. We take advantage\nof these trends in the methods for phrase query evaluation\nproposed in this paper.\nINVERTED INDEXES\nInverted indexes are the standard method for supporting\nqueries on large text databases; there are no practical alternatives\nto inverted indexes that provide sufficiently fast\nranked query evaluation. An inverted index is a two-level\nstructure. The upper level is all the index terms for the collection\n. For text databases, the index terms are usually the\nwords occurring in the text, and all words are included. The\nlower level is a set of postings lists, one per index term. Following\nthe notation of Zobel and Moffat [17], each posting\nis a triple of the form:\nd, f\nd,t\n, [o\n1\n, . . . , o\nf\nd,t\n]\nwhere d is the identifier of a document containing term t, the\nfrequency of t in d is f\nd,t\n, and the o values are the positions\nin d at which t is observed. An example inverted file is shown\nin Figure 1. In this example, there is a vocabulary of five\nwords, each of which has a postings list.\nIt is straightforward to use an inverted index to evaluate\na phrase query. Consider the query magdalene sue\nprentiss. Of these terms, \"magdalene\" is the rarest, and\nits inverted list is fetched first. The postings are decoded\nand a temporary structure is created, recording which documents\ncontain this word and the ordinal word positions in\neach document at which it occurs. The term \"prentiss\" is\nthe next rarest, and is processed next. For each document\nidentifier and word offset in the temporary structure created\nearlier, a posting is sought to see whether \"prentiss\"\nis in the document two words later. If the search fails, that\nword position is discarded from the temporary structure, as\nis the document identifier if no word positions for that document\nremain. As both the structure and the postings are\nsorted, this process is a linear merge. Then the postings list\n216\nOn Disk\nVectors\n1,(<9,2,[4,1001]>)\n53,(<9,3,[3,8,90] ...)\n23,(<4,2,[5,34]>, ...)\n243,(<5,1,[45]>,<9,1,[7]> ...)\nIn Memory\nVocabulary\nnew\nin\nhistoric\nhampshire\nrailroads\n15,(<1,1,[100]>,<9,1,[6]> ...)\nFigure 1: An inverted file for a collection with a vocabulary of five words.\nfor \"sue\" is fetched and decoded, and used to further delete\nentries from the temporary structure. The remaining entries\nare documents and word positions at which the phrase\noccurs.\nSummarizing, phrase queries are evaluated as follows.\n1. Sort the query terms from rarest to commonest, keeping\nnote of their original position in the phrase.\n2. Fetch the postings list for the first (rarest) query term.\nDecode this list into a temporary structure of document\nidentifiers and word offset positions.\n3. 
For each remaining query term, decode its postings\nlist, merging it with the temporary data; this merge\nprocess discards from the temporary structure all document\nidentifiers and word offsets that do not match\nany entry in the postings list.\nIn this query evaluation model, processing of the first\nquery term establishes a superset of the possible locations\nof the complete phrase, which are maintained in a temporary\nstructure; as the subsequent query terms are evaluated,\nthis structure is pruned, never added to. It is thus essential\nto begin processing with the rarest query term, to avoid\ncreation of an excessively large temporary structure (or of\nhaving to process the inverted lists in stages to stay within\na memory limit).\nA simple heuristic to address this problem is to directly\nmerge the inverted lists rather than decode them in turn. On\nthe one hand, merging has the disadvantage that techniques\nsuch as skipping [11] cannot be as easily used to reduce processing\ncosts (although as we discuss later skipping does not\nnecessarily yield significant benefits). On the other hand,\nmerging of at least some of the inverted lists is probably the\nonly viable option when all the query terms are moderately\ncommon.\nWhether the lists are merged or processed in turn, the\nwhole of each list needs to be fetched (unless query processing\nterminates early due to lack of matches). For ranked\nquery processing it is possible to predict which postings\nin each inverted list are most likely to be of value, and\nmove these to the front of the inverted list; techniques for\nsuch list modification include frequency-ordering [13] and\nimpact-ordering [1]. With these techniques, only the first of\nthe inverted lists need be fetched during evaluation of most\nqueries, greatly reducing costs.\nIn contrast, for phrase querying it is not simple to predict\nwhich occurrences of the term will be in a query phrase, and\nthus such reordering is unlikely to be effective. Offsets only\nTable 1: Size of inverted index (Mb) after stopping\nof common words.\nNumber of\nIndex size\nwords stopped\n(Mb)\n0\n2350\n3\n2259\n6\n2195\n10\n2135\n20\n2089\n254\n1708\nhave to be decoded when there is a document match, but\nthey still have to be retrieved.\nOther techniques do have the potential to reduce query\nevaluation time, in particular skipping [11], in which additional\ninformation is placed in inverted lists to reduce the\ndecoding required in regions in the list that cannot contain\npostings that will match documents that have been identified\nas potential matches. On older machines, on which CPU\ncycles were relatively scarce, skipping could yield substantial\ngains. On current machines, however, disk access costs\nare the more important factor, and in other experiments we\nhave observed that the increase in length of lists required\nby skipping outweighs the reduction in decoding time. We\ntherefore do not use skipping in our experiments.\nWe have implemented a phrase query evaluator based on\ninverted lists, using compression techniques similar to those\nemployed in MG [16] to reduce costs, and have used it to\ntest the efficiency of phrase query evaluation. Our test data\nis 21.9 Gb of HTML containing about 8.3 Gb of text (drawn\nfrom the TREC large web track [9]).\nTable 1 shows the size of the index with a range of levels\nof stopping. As can be seen, the three commonest words\naccount for around 4% of the index size, and only small space\nsavings are yielded by stopping. 
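The three-step evaluation just outlined can be sketched as follows. This is a simplified, in-memory illustration rather than the compressed, disk-based implementation used in the experiments; the data layout (term -> document -> sorted offsets) and the function name are assumptions, and the list-merging alternative discussed above is not shown.

```python
def phrase_query(index, terms):
    """Evaluate a phrase query over a word-level inverted index.

    index: term -> {doc_id: sorted list of word offsets}
    terms: the query phrase as a list of words
    Returns {doc_id: [offsets]} giving the positions at which the phrase starts.
    """
    # Step 1: order the terms from rarest to commonest (here, by the
    # total length of their postings), keeping each term's position
    # within the phrase.
    order = sorted(range(len(terms)),
                   key=lambda i: sum(len(o) for o in index.get(terms[i], {}).values()))

    # Step 2: decode the rarest term's postings into a temporary
    # structure of candidate phrase-start positions per document.
    first = order[0]
    candidates = {doc: {off - first for off in offs}
                  for doc, offs in index.get(terms[first], {}).items()}

    # Step 3: merge in each remaining postings list, pruning candidates
    # that the current term fails to confirm.
    for i in order[1:]:
        postings = index.get(terms[i], {})
        pruned = {}
        for doc, starts in candidates.items():
            offs = set(postings.get(doc, ()))
            kept = {s for s in starts if s + i in offs}
            if kept:
                pruned[doc] = kept
        candidates = pruned
        if not candidates:
            break
    return {doc: sorted(starts) for doc, starts in candidates.items()}
```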
However, as Table 2 shows,\nthe impact of stopping on query evaluation time is dramatic.\nJust removing the three commonest words reduces average\ntime by about 60%, and by a factor of 3 for longer queries.\nFor these longer queries, the savings continue to increase as\nmore common words are stopped. It is the scale of these\nsavings that make stopping attractive, despite the fact that\nthey are at the cost of inaccurate query results.\n217\nTable 2: Times for phrase query evaluation (seconds\n) on an inverted index after stopping of common\nwords.\nResults are shown for all queries; 2-word\nqueries only; and 5-word queries only.\nNumber of\nOverall\n2-word\n5-word\nwords stopped\ntime (sec)\nqueries\nqueries\n0\n1.56\n0.49\n6.41\n3\n0.66\n0.30\n1.94\n6\n0.45\n0.29\n1.07\n10\n0.40\n0.28\n0.81\n20\n0.37\n0.28\n0.70\n254\n0.18\n0.16\n0.26\nNEXTWORD INDEXES\nInverted indexes allow evaluation of phrase queries, but\nfaster evaluation is possible with phrase-oriented indexes.\nOne possibility is to use a conventional inverted index in\nwhich the terms are word pairs. Another way to support\nphrase based query modes is to index and store phrases\ndirectly [8] or simply by using an inverted index and approximating\nphrases through a ranked query technique [5,\n10]. Greater efficiency, with no additional in-memory space\noverheads, is possible with a special-purpose structure, the\nnextword index [15], where search structures are used to accelerate\nprocessing of word pairs. The nextword index takes\nthe middle ground by indexing pairs of words and, therefore,\nis particularly good at resolving phrase queries containing\ntwo or more words. As noted above and observed elsewhere,\nthe commonest number of words in a phrase is two [14].\nA nextword index is a three-level structure. The highest\nlevel is of the distinct index terms in the collection, which we\ncall firstwords. At the middle level, for each firstword there\nis a data structure (such as a front-coded list, or for fast\naccess a structure such as a tree) of nextwords, which are\nthe words observed to follow that firstword in the indexed\ntext. For example, for the firstword \"artificial\", nextwords\ninclude \"intelligence\", \"insemination\", and \"hip\". At the\nlowest level, for each nextword there is a postings list of the\npositions at which that firstword-nextword pair occur.\nAn example nextword index is shown in Figure 2. In this\nexample, there are two firstwords, \"in\" and \"new\". Some\nof the nextwords for \"in\" are \"all\", \"new\", and \"the\". For\neach firstword-nextword pair, there is a postings list. (A\nnextword index is of course a form of inverted index, but for\nconsistency with other work we use \"inverted index\" solely\nto refer to a standard word-level inverted file.)\nIn nextword indexes, the postings lists are typically short,\nbecause most pairs only occur infrequently. For example,\nthe postings list for the firstword-nextword pair \"the\"\n\"who\"\nis orders of magnitude smaller than the postings lists for\nthese words in an inverted file. It follows that phrase query\nevaluation can be extremely fast.\nNextword indexes also have the benefit of allowing phrase\nbrowsing or phrase querying [4, 15]; given a sequence of\nwords, the index can be used to identify which words follow\nthe sequence, thus providing an alternative mechanism\nfor searching text collections. 
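The three-level structure described above can be sketched as a small in-memory data structure. This is an illustration only -- the real index is compressed and disk-resident -- and the class and method names are invented for the example.

```python
from collections import defaultdict

class NextwordIndex:
    """Minimal sketch of the three-level nextword index:
    firstword -> nextword -> {doc_id: offsets of the word pair}."""

    def __init__(self):
        self._index = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))

    def add_document(self, doc_id, words):
        # Record every adjacent (firstword, nextword) pair and the
        # position at which the pair occurs in the document.
        for pos, (w1, w2) in enumerate(zip(words, words[1:])):
            self._index[w1][w2][doc_id].append(pos)

    def nextwords(self, firstword):
        # The middle level: all words observed to follow `firstword`;
        # this level is what supports phrase browsing.
        return list(self._index.get(firstword, {}))

    def pair_postings(self, firstword, nextword):
        # The lowest level: the postings list for one firstword-nextword pair.
        return self._index.get(firstword, {}).get(nextword, {})
```

With such a structure, a two-word query such as the who is answered from the single, typically short, postings list for that pair rather than from the long inverted lists of the two individual words.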
However, we do not consider\nphrase browsing further in this paper.\nFor phrase queries of more than two words, multiple postings\nlists must be fetched from the nextword index to resolve\nthe query. Selection of which listings to fetch requires a little\ncare. For example, with the query\nboulder municipal employees credit union\nthe query can be resolved by fetching the postings lists for\nthe firstword-nextword pairs \"boulder\"\n\"municipal\", \"employees\"\n\"credit\", and \"credit\"\"union\". Alternatively, it\nwould be possible to get the lists for \"boulder\"\n\"municipal\",\n\"municipal\"\n\"employees\", and \"credit\"\"union\". Which is\nmost efficient depends on which is shorter: the list for \"employees\"\n\"credit\" or the list for for \"municipal\"\"employees\".\nUnfortunately, establishing which is shorter requires two\ndisk accesses, to retrieve the nextwords for \"employees\" and\n\"municipal\". However, we have observed that the frequency\nof a firstword closely correlates to the lengths of its nextword\nlists.\nThus in the query\nhistoric railroads in new hampshire\nwe can with confidence choose \"railroads\"\n\"in\" in preference\nto \"in\"\n\"new\", because \"railroads\" is much less common\nthan \"in\". We have considered algorithms for choosing\norder of evaluation elsewhere [3]. An efficient algorithm for\nevaluating phrase queries with a nextword index is as follows\n.\n1. If the number of query terms n is even, the query\ncan consist of n/2 disjoint firstword-nextword pairs. If\nthe number of query terms n is odd, n/2 firstword-nextword\npairs must be chosen. However, in both cases\nit is more efficient to choose more than the minimum\nnumber of pairs, if doing so avoids choice of a common\nword as a firstword.\n2. The method we use is to choose all n - 1 firstword-nextword\npairs; then sort them by increasing firstword\nfrequency; then discard from the list the pairs that\nare completely covered by preceding selections. This\napproach can lead to processing of more than n/2\npairs, but experimentally was shown to reduces costs\noverall.\n3. The selected word pairs are sorted by increasing frequency\nof the firstwords, then their postings lists are\nprocessed as for inverted file phrase query processing.\nThe nextword index for our Web collection is 4487 Mb in\nsize, almost exactly twice that of an inverted file. For phrase\nqueries, the savings in query evaluation time are dramatic.\nAverage query evaluation time is reduced to 0.06 seconds,\nfaster than inverted files by a factor of 25. For two-word\nqueries, the time falls to 0.01 seconds, which is faster by a\nfactor of 50. The time for 5-word queries is 0.32.\nAn interesting possibility suggested by these results is\nthat--given space for a nextword index--all queries be evaluated\nas if they were phrases. We observed above that a\nsignificant fraction of all queries successfully evaluate, and\nindeed on browsing the query logs it is obvious that many\nof the queries without quotation marks are nonetheless intended\nto be phrases. Spink et al. [14] suggest that most\ntwo-word queries should be treated as a phrase query even\nif they were entered as a ranked query. 
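The pair-selection step of this algorithm can be sketched as follows; freq is assumed to hold the collection frequency of each word, which, as noted above, is used as a proxy for the length of its nextword lists.

```python
def choose_pairs(terms, freq):
    """Select firstword-nextword pairs for a phrase query of three or
    more words, following the heuristic described above: form all n-1
    adjacent pairs, sort them by increasing firstword frequency, and
    discard a pair only if both of its word positions are already
    covered by previously selected pairs."""
    pairs = [(i, terms[i], terms[i + 1]) for i in range(len(terms) - 1)]
    pairs.sort(key=lambda p: freq.get(p[1], 0))

    covered, selected = set(), []
    for i, w1, w2 in pairs:
        if not {i, i + 1} <= covered:
            selected.append((w1, w2))
            covered.update((i, i + 1))
    return selected   # already in increasing order of firstword frequency
```

For historic railroads in new hampshire, and assuming "in" and "new" are by far the most frequent of the five words, this selects historic-railroads, railroads-in, and new-hampshire, preferring railroads-in over in-new as discussed above.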
Given that search\nengines return as highest matches the pages in which the\n218\nIn Memory\nVocabulary\nOn Disk\nNextword Lists\nin\nnew\n...\nage\nhampshire\nhouse\n...\nthe\nall\nnew\nOn Disk Inverted Vectors\n15,(<1,15,[100]>,<65,1,[1]>,<74,7,[23,43,54,62,68,114,181,203]>, ...)\n1,(<9,1,[6]>)\n3,(<1,1,[12]>,<34,3,[23,34,111]>,<77,1,[29]>)\n305,(<9,2,[7,199]>,<532,1,[256]>, ...)\n2,(<9,1,[423]>,<19,1,[4]>)\n2,(<31,3,[21,41,91]>,<44,1,[34)]>)\nFigure 2: A nextword index with two firstwords.\nquery words appear in sequence, use of a nextword index\nprovides a rapid mechanism for finding these pages.\nMuch of the speed improvement for phrase queries yielded\nby nextword indexes is for queries involving a non-rare word.\nIndeed, for queries of rare words there may be little gain, as\nquery processing with nextword indexes involves more complex\nstructures than does processing with inverted indexes.\nAs the two approaches to phrase query processing appear,\nthen, to have complementary advantages, it is attractive to\ntry to combine their strengths.\nCOMBINED QUERY EVALUATION\nWe have observed above that inverted indexes are the\nleast efficient for phrases involving common words, the case\nwhere nextword indexes yield the greatest speed advantage.\nWe therefore propose that common words only be used as\nfirstwords in a stripped-down nextword index, and that this\nnew index be used where possible in evaluation of phrase\nqueries. We call this a top frequency based scheme, since\nonly the most frequent words are indexed in the nextword\nindex. We have explored other schemes based on the frequency\nof words in the indexed collection, or based on the\nfrequency of words in the query log. None of the investi-gated\nschemes offered a better space and time trade-off, so\nwe report only results from the top frequency scheme.\nAn example of a top frequency combined index is shown\nin Figure 3. At the left there is a vocabulary of five words.\nEach word has an inverted list, together constituting a complete\ninverted file for these words. In addition, the common\nwords \"in\" and \"new\" have a nextword index.\nWith a combined index, processing involves postings lists\nfrom both the inverted index and the nextword index. Consider\nagain the query:\nhistoric railroads in new hampshire\nNeither \"historic\" nor \"railroads\" is a common word, so\nestablishing that these terms occur in the phrase involves\nfetching their postings lists from the inverted index and processing\nin the usual way. However, \"in\" and \"new\" are both\ncommon. The posting list for the firstword-nextword pair\n\"in\"\n\"new\" from the nextword index must be fetched and\nprocessed. Then there is a choice. On the one hand, the\nnextword index postings list for \"new\"\n\"hampshire\" cannot\nbe longer than the inverted index postings list for \"hampshire\"\nand in all likelihood is a great deal shorter.\nOn\nthe other hand, compared to the inverted index, an extra\ndisk access is required to fetch a postings list from the\nnextword index. In our implementation, we process using\nthe nextword index if possible, and resort to the inverted\nindex only for terms that are not in an indexed firstword-nextword\npair.\nIn summary, we use the following process:\n1. Identify all pairs in the list in which the first term is\nan indexed firstword. Sort these terms, and prune the\nlist as for standard evaluation of phrase queries via a\nnextword index.\n2. For all terms not in a firstword-nextword pair, sort.\n3. 
Process the postings lists in increasing order of firstword\nfrequency, so that processing of nextword index\nlists and of inverted file lists is interleaved.\nIn this model, a common word need only be evaluated via its\npostings list in the inverted file if it occurs as the last word\nin a query, which in the Excite query log is a rare event.\nWe have tested other query resolution methods that involved\nterm sorting based on nextword frequency (or NWF,\nthe number of nextwords for a firstword), inverted document\nfrequency (or IDF, the number of documents in which\na word occurs), or both. We also experimented with resolving\nnextword entries of a given query always first, or always\nlast. We found overall that these different resolution methods\ndid not significantly vary in query speed and behaved\nalmost identically to sorting by IDF only. We therefore sort\ninverted index terms and nextword terms based on IDF since\nwe do not need to keepanother statistical value per index\nterm and sorting is straightforward.\nEXPERIMENTAL RESULTS\nAll experiments were run on an Intel 700 MHz Pentium\nIII-based server with 2 Gb of memory, running the Linux\noperating system under light load. In Table 3 we show sizes\nof nextword indexes in which only the commonest terms are\nallowed as firstwords. The table shows that a nextword index\nthat contains only the three commonest terms consumes\n254 Mb, that is, just over 10% of the space of the inverted\nindex or around 1% of the size of the original HTML collection\n.\nQuery evaluation time with a combined index is shown\nin Table 4. (The \"0\" line is repeated from Table 2.) As\ncan be seen, use of a nextword index allows evaluation of all\nphrase queries, and much more rapidly than was previously\npossible. Use of a partial nextword index of 1% of the HTML\ncollection halves query evaluation time; a partial nextword\n219\nin\nhampshire\nhistoric\nnew\nrailroads\nIn Memory\nVocabulary\nOn Disk\nNextword Lists\n...\nthe\nall\nnew\n15,(<15,1,[100]>,<65,1,[1]>,<74,7,[23,43,54,62,68,114,181]> ...)\n251,(<5,1,[45]>,<9,1,[6]> ...)\n1,(<9,1,[7]>)\n23,(<9,3,[4,8,245]> ...)\n2,(<1,1,[53]>,<9,2,[4,1001>])\n23,(<1,2,[65,98]>,<9,4,[7,54,64,69]> ...)\nage\nhampshire\nhouse\n...\n15,(<2,1,[100]>,<6,1,[1]>,<9,8,[1,5,54,62,68,114,181,203]> ...)\nOn Disk Inverted Vectors\n3,(<1,1,[12]>,<34,3,[23,34,111]>,<77,1,[29]>)\n2,(<31,3,[21,41,91]>,<44,1,[34]>)\n305,(<9,2,[7,54]>,<532,1,[256]> ...)\n2,(<9,1,[423]>,<19,1,[4]>)\nFigure 3: A combined inverted file and nextword index.\nTable 3: Size of nextword index (Mb) containing\nonly common firstwords.\nNumber of\nIndex size\ncommon words\n(Mb)\n3\n254\n6\n427\n10\n520\n20\n657\n254\n1366\nindex of less than 3% of the size of the collection cuts time\nto a third.\nThese are substantial savings at low cost. Phrase query\nprocessing time with a nextword index is only slightly greater\nthan with a stopped inverted file, and no answers are lost.\nSuch combined processing can be integrated with other\nheuristics for phrase query evaluation. For example, a strategy\nthat is likely to be successful in the context of a web\nsearch engine is to maintain indexes (perhaps for a limited\ntime only) on phrases, or word pairs from phrases,\nthat are commonly posed as queries. Amongst our 132,276\nqueries, 72,184 are distinct. The commonest phrase query\n(thumbnail post) occurs 205 times and involves no common\nwords. The queries themselves contain 92,846 distinct\nword pairs; the commonest pair occurs 683 times. 
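The combined processing summarised in steps 1-3 above can be sketched as a planning step that decides which postings lists to fetch from which index. This is only a sketch: common_firstwords and freq are assumed inputs (the set of nextword-indexed terms and a word-frequency table), and the function only produces an evaluation order -- actual list processing would proceed as in the inverted-file and nextword sketches shown earlier.

```python
def plan_combined_evaluation(terms, common_firstwords, freq):
    """Plan phrase-query evaluation over the combined index: pairs whose
    first word is a common (nextword-indexed) term are resolved through
    the auxiliary nextword index, every remaining term through the
    inverted file, and all postings lists are processed in increasing
    order of frequency so that the two kinds of list are interleaved."""
    plan, covered = [], set()

    # Pairs led by a nextword-indexed firstword, pruned as for the
    # pure nextword index.
    pairs = [(i, terms[i], terms[i + 1])
             for i in range(len(terms) - 1) if terms[i] in common_firstwords]
    pairs.sort(key=lambda p: freq.get(p[1], 0))
    for i, w1, w2 in pairs:
        if not {i, i + 1} <= covered:
            plan.append(("nextword", (w1, w2), freq.get(w1, 0)))
            covered.update((i, i + 1))

    # Terms not covered by any selected pair go through the inverted file.
    for i, t in enumerate(terms):
        if i not in covered:
            plan.append(("inverted", t, freq.get(t, 0)))

    # Process everything in increasing order of (first)word frequency.
    plan.sort(key=lambda entry: entry[-1])
    return [(kind, what) for kind, what, _ in plan]
```

For historic railroads in new hampshire with "in" and "new" as the nextword-indexed firstwords, the plan resolves in-new and new-hampshire through the auxiliary index and historic and railroads through the inverted file, matching the walkthrough earlier in this section.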
Indexing\nof common query pairs has the potential to yield significant\nfurther savings.\nCONCLUSIONS\nWe have proposed that phrase queries on large text collections\nbe supported by use of a small auxiliary index. In\nthis approach, all words in the text are indexed via an inverted\nfile; in addition, the commonest words are indexed\nvia an auxiliary nextword index, which stores postings lists\nfor firstword-nextword pairs. We have shown that the cost\nof evaluating phrase indexes can be cut by a factor of three,\nwith an auxiliary index that is only 3% of the size of the\nTable 4: Times for phrase query evaluation (seconds\n) on a combined index, with different numbers\nof common words used in the nextword index. Results\nare shown for all queries; 2-word queries only;\nand 5-word queries only.\nNumber of\nOverall\n2-word\n5-word\ncommon words\ntime (sec)\nqueries\nqueries\n0\n1.56\n0.49\n6.41\n3\n0.76\n0.31\n2.99\n6\n0.57\n0.31\n2.28\n10\n0.53\n0.30\n2.10\n20\n0.50\n0.30\n1.98\n254\n0.46\n0.27\n1.83\nindexed data.\nThese results show that there is no need to use stopping\nin phrases. Indeed, the statistics on the number of matches\nindicate that such stopping leads to significant error rates.\nWhile it can be argued that mistakes in matching due to\nstopping of common words are in many cases unimportant,\nthere are numerous queries where the stopwords are significant\n; moreover, we have demonstrated that there is no reason\nto make such mistakes at all.\nOur schemes have scope for improvement. In particular,\nchoosing of pairs during query evaluation requires further\nexploration, and we are further investigating structures for\nrepresenting nextword lists. However, our results show that\nevaluation of phrase queries can be dramatically accelerated\nwith only a small additional index, and that stopping of\nphrases leads to errors and is not necessary for efficiency.\n220\nAcknowledgements\nThis research was supported by the Australian Research\nCouncil. We thank Amanda Spink, Doug Cutting, Jack Xu,\nand Excite Inc. for providing the query log.\n\nREFERENCES\n[1] V. N. Anh, O. Kretser, and A. Moffat. Vector-Space\nranking with effective early termination. In W. B.\nCroft, D. J. Harp er, D. H. Kraft, and J. Zobel,\neditors, Proc. ACM-SIGIRInternational Conference\non Research and Development in Information\nRetrieval, pages 3542, New York, Sept. 2001.\n[2] D. Bahle, H.E. Williams, and J. Zobel. Compaction\ntechniques for nextword indexes. In Proc. 8th\nInternational Symposium on String Processing and\nInformation Retrieval (SPIRE2001), pages 3345, San\nRafael, Chile, 2001.\n[3] D. Bahle, H. E. Williams, and J. Zobel. Optimised\nphrase querying and browsing in text databases. In\nM. Oudshoorn, editor, Proc. Australasian Computer\nScience Conference, pages 1119, Gold Coast,\nAustralia, Jan. 2001.\n[4] P. Bruza, R. McArthur, and S. Dennis. Interactive\ninternet search: keyword, directory and query\nreformulation mechanisms compared. In N. J. Belkin,\nP. Ingwersen, and M.-K. Leong, editors, Proc.\nACM-SIGIRInternational Conference on Research\nand Development in Information Retrieval, pages\n280287, Athens, 2000.\n[5] C. L. Clarke, G. V. Cormack, and E. A. Tudhope.\nRelevance ranking for one- to three-term queries. In\nProc. of RIAO-97, 5th International Conference\n\"Recherche d'Information Assistee par Ordinateur\",\npages 388400, Montreal, CA, 1997.\n[6] W. B. Croft, H. R. Turtle, and D. D. Lewis. The use\nof phrases and structured queries in information\nretrieval. In A. Bookstein, Y. 
Chiaramella, G. Salton,\nand V. V. Raghavan, editors, Proc. ACM-SIGIR\nInternational Conference on Research and\nDevelopment in Information Retrieval, pages 3245,\nChicago, 1991.\n[7] E. F. de Lima and J. O. Pedersen. Phrase recognition\nand expansion for short, precision-biased queries based\non a query log. In Proc. ACM-SIGIRInternational\nConference on Research and Development in\nInformation Retrieval, pages 145152, Berkeley, 1999.\n[8] C. Gutwin, G. Paynter, I. Witten, C. NevillManning,\nand E. Frank. Improving browsing in digital libraries\nwith keyphrase indexes. Decision Support Systems,\n27(1/2):81104, 1998.\n[9] D. Hawking, N. Craswell, P. Thistlewaite, and\nD. Harman. Results and challenges in web search\nevaluation. In Proc. of the Eighth International\nWorld-Wide Web Conference, volume 31, pages\n13211330, May 1999.\n[10] D. D. Lewis and W. B. Croft. Term clustering of\nsyntactic phrases. In J.-L. Vidick, editor, Proc.\nACM-SIGIRInternational Conference on Research\nand Development in Information Retrieval, pages\n385404, 1990.\n[11] A. Moffat and J. Zobel. Self-indexing inverted files for\nfast text retrieval. ACM Transactions on Information\nSystems, 14(4):349379, 1996.\n[12] G. W. Paynter, I. H. Witten, S. J. Cunningham, and\nG. Buchanan. Scalable browsing for large collections:\nA case study. In Proc. of the 5th ACM International\nConference on Digital Libraries, pages 215223, San\nAntonio, 2000.\n[13] M. Persin, J. Zobel, and R. Sacks-Davis. Filtered\ndocument retrieval with frequency-sorted indexes.\nJournal of the American Society for Information\nScience, 47(10):749764, 1996.\n[14] A. Spink, D. Wolfram, B. J. Jansen, and T. Saracevic.\nSearching the web: The public and their queries.\nJournal of the American Society for Information\nScience, 52(3):226234, 2001.\n[15] H. Williams, J. Zobel, and P. Anderson. What's next?\nindex structures for efficient phrase querying. In\nM. Orlowska, editor, Proc. Australasian Database\nConference, pages 141152, Auckland, New Zealand,\n1999.\n[16] I. H. Witten, A. Moffat, and T. C. Bell. Managing\nGigabytes: Compressing and Indexing Documents and\nImages. Morgan Kaufmann, San Francisco, California,\nsecond edition, 1999.\n[17] J. Zobel and A. Moffat. Exploring the similarity\nspace. SIGIRForum, 32(1):1834, 1998.\n221\n", "keywords": "common words;evaluation efficiency;stopping;Indexing;nextword index;index representation;phrase query evaluation;query evaluation;phrase query;inverted index"} {"name": "79", "title": "Efficient retrieval of similar shapes", "abstract": "We propose an indexing technique for the fast retrieval of objects in 2D images basedon similarity between their boundary shapes. Our technique is robust in the presence of noise andsupports several important notions of similarity including optimal matches irrespective of variations in orientation and/or position. Our method can also handle size-invariant matches using a normalization technique, although optimality is not guaranteedhere. We implementedour method and performed experiments on real (hand-written digits) data. Our experimental results showedthe superiority of our method comparedto search basedon sequential scanning, which is the only obvious competitor. The performance gain of our method increases with any increase in the number or the size of shapes.", "fulltext": "Introduction\nThere is an increasing interest in storing andretrieving non-textual\nobjects in databases. 
For example, this kind of data can\nbe storedin the form of extenders in DB2, DataBlades in In-formix\n, and cartridges in Oracle. Non-textual objects are frequently\nin the form of images or shapes. In cases where the key\ninformation for description or classification of an object can be\nfound in its boundary, it is natural to store only the boundary\nanddo retrieval basedon that. Among the areas of applications\nfor boundary shape matching are industrial inspection,\nobject recognition in satellite images, character recognition,\nclassification of chromosomes, andtarget recognition.\nFor example, consider the following query:\nQuery 1\nFindall shapes similar to a given shape.\nA basic question here is how we judge whether two shapes\n(for example the two shown in Fig. 1) are similar. There is a\nlarge body of work in the area of pattern recognition and computer\nvision on extracting boundary features of a shape and\ndoing shape matching based on those features. The boundary\nof an object can be described in terms of simple descriptors\nsuch as length, diameter, and curvature ([MM86]), chain\nFig. 1.\nTwo shape boundaries both representing character `9'\ncodes ([BG80,Bri81]), Fourier descriptors ([PF77,ZR72]) or\nmoments ([BSA91]). Among these features, we use Fourier\ndescriptors as our shape features. Theoretical andexperimen-tal\nevidence in favor of Fourier descriptors can be found in the\nliterature [PF77,KSP95].\nSimilar shapes often have differences in size and orientation\n. For example, consider the two shapes shown in Fig. 1. The\nEuclidean distance between their Fourier descriptors is 22.88.\nIf we rotate the shape on the right by\n30\n\nin the clockwise\n(cw) direction, the Euclidean distance between their Fourier\ndescriptors drops to zero. A simple approach to remove differences\ndue to shifting, scaling, and rotation is to normalize\nFourier descriptors before storing them in a database. However\n, there are still two problems with normalization. First,\nnormalization is not guaranteedto minimize the distance between\ntwo arbitrary shapes. Second, normalization is not always\ndesirable; for example, the shapes `9' and`6' shouldnot\nbe treatedas similar if we are doing character recognition. A\nsolution is to rewrite the query as follows:\nQuery 2\nFindall shapes that become similar to a given shape\nafter being rotatedby\n[-30\n\n, 30\n\n].\nIf our shape collection includes, for example, shapes of airplanes\n, we may write our query insteadas follows:\nQuery 3\nFindall shapes similar to a given shape irrespective\nof rotation.\nIn this paper, we study the issue of efficiently processing\nthese queries. We show how to organize Fourier descriptors in\na multidimensional index, and how to efficiently use the index\n18\nD. Rafiei, A.O. Mendelzon: Efficient retrieval of similar shapes\nin processing a broadrange of similarity queries. Our goal is\nto develop an access method that can handle shapes of various\nsizes andorientations, is much faster than sequential scanning,\nand does not miss any qualifying data objects in the answers\n(false positives are acceptable if they can be eliminatedin a\npost-processing step without much performance degradation).\nThe organization of the rest of the paper is as follows.\nSection 2 provides some backgroundmaterial on relatedwork,\nshape representation using Fourier descriptors and shape matching\n. In Sect. 3, we propose our technique for indexing shapes\nandprocessing similarity queries. Section 4 presents experimental\nresults. 
We conclude in Sect. 5.\nBackground\n2.1 Related work\nThe following relevant methods for multidimensional indexing\nandsearch have been proposed:\nJagadish [Jag91] proposes a technique for storing and retrieving\nshape descriptions in a multidimensional index. He\nmaps shapes into their constituent rectangles, keeps a few\nlarger rectangles in a multidimensional index, and uses the\narea difference between the constituent rectangles of shapes\nas a measure of similarity. Due to a normalization process, the\nshape description is invariant under translation and scaling. A\nproblem with this approach is that a shape can be normally\ncoveredby multiple sets of rectangles. This can leadto ambiguity\nor storing multiple representations of the same shape.\nFurthermore, it is not possible to do matching in the presence\nof rotation; for example, two identical shapes may not match\nif one is rotatedby\n45\n\n.\nMehrotra andGary [MG93] decompose a shape into several\ncomponents anduse fixed-sizedsegments of each component\nas the shape features. Basedon a normalization process,\nthe shape description is made invariant under translation, scaling\n, androtation. A problem with this approach is that since\na shape is broken down into pieces, the overall shape of the\nboundary is lost. In addition, each shape is described in terms\nof multiple feature vectors, andthis introduces extra overhead\nduring insertions and retrievals.\nBerchtoldet al. [BKK97] study the issue of storing polygons\nso that they can be retrievedbasedon partial similarity\nmatches. They extract almost all possible boundary segments\nof polygons, transform each segment into a sequence of slope\nchanges, andmap the resulting sequences into their first few\nFourier coefficients. Thus, each polygon is representedusing\na set of feature points, andthe minimum bounding rectangle\nof these points for each polygon is storedin a multidimensional\nindex. Due to a normalization, the shape representation\nis invariant to translation, scaling, androtation, but it is not invariant\nto the starting point. This problem is handled by storing\nmultiple descriptions of a polygon, each associated to a\nstarting point. Again, representing a polygon in terms of multiple\npoints introduces extra overhead during insertions and\nretrievals.\nThe QBIC (Query By Image Content) system [FBF\n+\n94]\ncontains a component for approximate shape matching. The\nsystem keeps a 20-D feature vector to describe the shape of\nx x\ny , y\n1\n1\ny\nimaginary axis\nx\nreal axis\n0\n0\nFig. 2.\nA boundary and its representation as a complex sequence\nevery object identified in an image. Features, for example, include\nthe area and the circularity, i.e., whether the object is circular\nor not. To allow fast retrieval, it is suggestedto transform\nfeature vectors using the Karhunen Loeve (KL) transform and\nkeep a few important features (those associatedwith the few\nlargest eigenvalues) in a multidimensional index. However, the\nchoice of proper features andtheir weighting for each application\nis not an easy task. Some features are abstract quantities\nwhich may not easily fit in a distance function computation.\nIn addition, the use of the KL transform makes the multidimensional\nindex rather static.\nThe aforementionedmethods are less general than ours because\nthe notion of similarity is fixedbefore query evaluation;\nthis notion cannot be changedunless a new index structure\nis created. 
Our method, instead, provides a set of transformations to express the notion of similarity in a query; yet the resulting queries are evaluated using the same index, without prior knowledge of the specific transformations used. Therefore we have not compared the performance of our method with theirs, but with sequential scanning instead.
Related work on time series data includes the work of Agrawal et al. [AFS93] on using the discrete Fourier transform for retrieving similar time series, and extensions and improvements over this approach [GK95,RM97,RM00]. Similar to our framework, Goldin and Kanellakis [GK95] show that the similarity retrieval will be roughly invariant to simple translations and scales if sequences are normalized before being stored in the index. The authors store in the index both the translation and the scale factors, in addition to normalized sequences, and also allow those factors to be queried using range predicates (see Goldin's Ph.D. thesis [Gol97] for implementation details).
A general framework for composing similarity queries is proposed by Jagadish, Mendelzon, and Milo [JMM95]. Our work here can be seen as a special case of this framework over shapes. Our shape matching can also be incorporated within a multimedia query language such as MOQL [LOSO97], where multiple features of images are simultaneously queried.
2.2 Shape representation using Fourier descriptors
Given the figure of an object in the complex plane, its boundary can be traced, producing a 1-D complex function b_t of time. For example, a point moving along the boundary shown in Fig. 2 generates the complex function b_t = x_t + j y_t for t = 0, ..., N-1, which is periodic with period N. That is, the x-axis of the figure is treated as the real axis and the y-axis as the imaginary axis of a sequence of complex numbers. Further information on tracing the boundary of a shape and possible alternatives in representing it can be found in any image processing textbook such as Gonzalez and Woods [GW92].
It should be noted that the description is solely based on the shape of the boundary; objects can still have holes in them, but this is not reflected in the description. Given a boundary function b_t, its Fourier transform can be written as
B_f = \frac{1}{N} \sum_{t=0}^{N-1} b_t \, e^{-j 2\pi t f / N}    (1)
where f \in \{-(N-1)/2, \ldots, 0, \ldots, (N-1)/2\} and j = \sqrt{-1} is the imaginary unit. The coefficients B_0, B_1, \ldots, called Fourier descriptors, describe the shape of the object in the frequency domain. The transformation is loss-less, since the energy in the frequency domain is the same as the energy in the spatial domain (due to Parseval's theorem), and also the inverse Fourier transform gives the original boundary function.
2.3 Shape matching using Fourier descriptors
Consider two boundary functions b_t = x_t + j y_t and b'_t = x'_t + j y'_t (for t = 0, ..., N-1). A typical measure of similarity between the two boundaries is the Euclidean distance, which corresponds to mean-square error and which is also directly related to the cross-correlation [Raf98]:
D^2(b, b') = \sum_{t=0}^{N-1} |b_t - b'_t|^2    (2)
However, the distance computation becomes ambiguous if the two boundaries have different numbers of samples.
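A small numerical sketch of Eq. (1) may be helpful. It uses numpy's FFT and a made-up closed boundary (not one of the shapes in Fig. 1), and it illustrates the point made in the introduction: rotating a shape changes the phases of its Fourier descriptors, so its descriptor-space distance to the unrotated shape is nonzero, while the descriptor magnitudes are untouched -- the observation behind the magnitude-only fingerprint defined later in Sect. 3.1.

```python
import numpy as np

def fourier_descriptors(boundary):
    """Eq. (1): Fourier descriptors of a boundary traced as the complex
    sequence b_t = x_t + j*y_t, including the 1/N factor. np.fft.fft
    returns frequencies 0..N-1; indices above N/2 correspond to the
    negative frequencies."""
    b = np.asarray(boundary, dtype=complex)
    return np.fft.fft(b) / len(b)

def euclidean_distance(u, v):
    """Euclidean distance between two equal-length complex sequences,
    the form of Eq. (2); applied here to descriptor vectors."""
    return float(np.sqrt(np.sum(np.abs(np.asarray(u) - np.asarray(v)) ** 2)))

# Toy boundary: a smooth closed curve sampled at 64 points (illustrative only).
t = np.arange(64)
boundary = np.exp(2j * np.pi * t / 64) + 0.1 * np.exp(-6j * np.pi * t / 64)
rotated = boundary * np.exp(1j * np.pi / 6)          # rotate by 30 degrees

B, B_rot = fourier_descriptors(boundary), fourier_descriptors(rotated)
print(euclidean_distance(B, B_rot))                  # nonzero: phases changed
print(np.allclose(np.abs(B), np.abs(B_rot)))         # True: magnitudes did not
```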
A solution to avoid this problem is to find the Fourier descriptors B and B', respectively, for b and b' and use a fixed number of lower frequency descriptors (say, 2M+1) to compute the Euclidean distance, i.e.,
D^2(B, B') = \sum_{f=-M}^{M} |B_f - B'_f|^2.    (3)
Our proposal
The general overview of the proposed method is as follows:
1. Obtain the Fourier descriptors of every shape boundary in the database.
2. Compute a fingerprint for every shape, as discussed in Sect. 3.1, and build a multidimensional index using the fingerprints. Each fingerprint is stored as a point in the multidimensional index.
3. For basic similarity queries (proximity, nearest neighbours and all-pairs), use the index to retrieve candidate shapes. The qualifying shapes are identified after retrieving their full database records and examining them.
4. For queries that use transformations in their expressions of similarities, if necessary, apply the transformations to the index, as discussed in Sect. 3.4, and retrieve candidate shapes. The full database record of every candidate shape is examined to find out if it qualifies.
We use Fourier descriptors as our shape features. Given a set of shape boundaries, for each boundary b we find its Fourier transform and retain only a fixed number of lower frequency descriptors. This number, which we denote by 2M+1, can be chosen, for example, to be the average length of a boundary in the spatial domain. If the number of Fourier descriptors happens to be less than 2M+1, we store zero for the higher frequency descriptors.
3.1 Computing a fingerprint
To aid in the retrievals that we intend to perform, we apply a few transformations to the descriptors, rather than storing them directly. First, we set B_0 to 0. B_0 is the only descriptor that carries information about the shape location. This setting minimizes the distance function (Eq. 3) with respect to translation. Next, the scale normalization is achieved by dividing every coefficient B_f by the amplitude of B_1, often called the fundamental frequency. |B_1| turns out to be the largest amplitude when the boundary is traced in the counter-clockwise (ccw) direction and the boundary does not cross itself [WW80]. After the normalization, B_0 is 0, so we do not need to store it. Instead, we store the original value of B_0 before the normalization. It should be noted that the real and the imaginary parts of the initial value of B_0 represent the shift factors, respectively, along the X and the Y coordinates; the amplitude of the initial value of B_1 represents the scale factor. To totally get rid of B_1, which already has an amplitude of 1 for all shapes, we do an additional normalization: we shift the starting point such that the phase of B_1 becomes zero.
Definition 3.1. Given the Fourier descriptors B_{-M}, ..., B_M of a shape, denote the real part of B_0 by sh_x, the imaginary part of B_0 by sh_y, the amplitude of B_1 by sc, and the phase of B_1 by p. The shape description is defined as the sequence
(sh_x, sh_y, sc, S_{-1}, S_2, S_{-2}, S_3, S_{-3}, ..., S_M, S_{-M}),    (4)
where S_i = ((B_i - (sh_x + sh_y j)) / sc) \cdot e^{-ipj} (a complex number) for i = -1, 2, 3, .
..\nThe Euclidean distance between two shape descriptions, irrespective\nof variations in location andsize, can be computedas\nfollows:\nD\n2\n(S, S ) =\nM\nf=-M,f=0,1\n|S\nf\n- S\nf\n|\n2\n.\n(5)\nSuch a description is still sensitive to changes in orientation\nandstarting point of the tracing. We can assume that every\ndata or query shape has a fixed starting point, if we encode its\nboundary using the same tracing algorithm and perform the\nsame normalization. For example, a tracing algorithm may\nalways start from the top right corner of a shape andtrace it\nin the ccw direction. In this way, the starting point for two\nidentical shapes will always be the same. Two similar shapes\n20\nD. Rafiei, A.O. Mendelzon: Efficient retrieval of similar shapes\nmay still have small variations in their starting points, but those\nvariations can be easily resolvedby allowing some variations\nin starting points. This is discussed in Sect. 3.4.3.\nThere are sophisticatedtechniques to do phase normalization\n[PF77,WW80]. For example, Wallace et al. [WW80]\nsuggest making the phases of the two coefficients of largest\namplitude equal to zero. This is believed to shift the starting\npoint over the axis of symmetry andalso rotate the axis of\nsymmetry such that it coincides with the real axis. However,\nit shouldbe notedthat none of these techniques are perfect in\nthe sense that a shape can have two or more different phase\nnormalizations, each as goodas the others; or equivalently,\ntwo fairly similar shapes may have descriptors which are far\nfrom each other.\nFor the purpose of indexing, important features of the description\nneedto be identifiedandplacedin the fingerprint.\nFirst, changing the orientation or the starting point of a boundary\nonly affects the phases of descriptors. To insulate the index\nfrom such changes, the information about the phases of\ndescriptors is not stored in a fingerprint. Second, as is shown\nin Fig. 3, the lower frequency descriptors contain information\nabout the general shape, andthe higher frequency descriptors\ncontain information about smaller details. There are strong\nreasons to believe that for a large class of boundary functions,\nthe lower frequency descriptors contain most of the energy.\nFor example, for continuous piece-wise smooth functions, the\namplitude spectrum\n|S\nf\n| decreases at a rate proportional to\nf\n-2\n[RH74, Page 373]. Thus, we can define a fingerprint of a\nshape as follows:\nDefinition 3.2. Given a shape description\n(sh\nx\n, sh\ny\n, sc, S\n-1\n,\nS\n2\n, S\n-2\n, . . . , S\nM\n, S\n-M\n), the fingerprint of the shape is defined\nas\n(sh\nx\n, sh\ny\n, sc, |S\n-1\n|, |S\n2\n|, |S\n-2\n|, . . . , |S\nk\n|, |S\n-k\n|) where\nk ( M) is the cut-off frequency.\nNext we show the completeness of the feature extraction.\n3.2 Using fingerprints for indexing\nThe completeness of the indexing methodis basedon the following\nlemma:\nLemma 3.3. 
The use of a fingerprint, in place of a full shape description, for shape matching always returns a superset of the answer set.
Proof: For every pair of boundaries S and S' of length 2M+1 and for every k \le M, we have
\sum_{f=-M, f \ne 0,1}^{M} |S_f - S'_f|^2 \;\ge\; \sum_{f=-k, f \ne 0,1}^{k} \big| |S_f| - |S'_f| \big|^2    (6)
This is due to the fact that for every term ||S_f| - |S'_f|| on the right side of the inequality, there is a term |S_f - S'_f| on the left side, and |S_f - S'_f| \ge ||S_f| - |S'_f||.
Thus, storing the fingerprints of shapes in the index does not affect the correctness, since the index returns a superset of the answer set. Furthermore, the distance function on the right side of Eq. 6 is invariant to changes in the starting point of the boundary and to rotation.
However, the index will not be effective if the choice of k results in a large number of false hits or in high index dimensionality (the curse of dimensionality). Our experiments in Sect. 4.2 show that the value of k can be chosen as low as 2, which results in storing 5 Fourier amplitudes in the index.
There are a large number of multidimensional data structures which can be used for indexing (see the survey by Gaede and Gunther [GG98] for details). We use the R*-tree, as it is expected to work well for up to 20 dimensions and the length of a fingerprint is expected to be less than 20.
3.3 Basic similarity queries
Within this section, we assume that the shapes being compared have the correct sizes, positions, and orientations. Such a match can also be useful, for example before insertions, to prevent storing two replicas of the same image. We consider three basic similarity queries over a shape database: (a) proximity query (1); (b) all-pairs query; and (c) nearest-neighbours query.
In a proximity query, we are given a query shape and a threshold ε, and we would like to find all database shapes that are within distance ε of the query shape. To perform a proximity query, both the shape description and its fingerprint are computed as described in Sect. 3.1, in the same way as for each data shape. The fingerprint is then used as a search key into the shape index, to retrieve all data shapes that are located in its proximity. Note that the index retrieves a superset of the answer set, since it only keeps the fingerprints of shape descriptions. The actual result is obtained in an additional step where the Euclidean distance between the full database record of every matching shape and the query shape is computed.
In an all-pairs query, we are given two data sets and a threshold ε, and we want to find all pairs of shapes such that one shape is within distance ε of the other. To perform an all-pairs query, we do a spatial join between the corresponding indices of the two data sets. This is followed by an additional step where the Euclidean distances between the full database records of matching shapes are computed.
In a nearest-neighbours query, we are given a query shape, and we wish to find the data shapes which are closest to the query shape in distance. To perform a nearest-neighbours query, both the shape description and its fingerprint are computed (as discussed in Sect. 3.1), and the fingerprint is used as a search key over the index. Since the index employs the distance between fingerprints for its pruning, and this distance is an underestimate of the real distance between descriptions, a nearest neighbour identified through searching the index may not be the real nearest neighbour.
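As a hedged illustration of Definitions 3.1 and 3.2 and of the lower bound of Lemma 3.3 (not the authors' code; the descriptor container and helper names are assumptions of this sketch), the normalization and the fingerprint test can be written as:

import numpy as np

def shape_description(B, M):
    # B: dict mapping frequency f in [-M, M] to the complex descriptor B_f
    sh_x, sh_y = B[0].real, B[0].imag                 # shift factors carried by B_0
    sc = abs(B[1])                                    # scale factor: amplitude of B_1
    p = np.angle(B[1])                                # phase of B_1
    S = {f: (B[f] - (sh_x + 1j * sh_y)) / sc * np.exp(-1j * f * p)   # Def. 3.1, as written
         for f in range(-M, M + 1) if f not in (0, 1)}
    return sh_x, sh_y, sc, S

def fingerprint(sh_x, sh_y, sc, S, k):
    # Def. 3.2: drop the phases, keep only the low-frequency amplitudes up to the cut-off k
    order = [-1] + [g for f in range(2, k + 1) for g in (f, -f)]
    return (sh_x, sh_y, sc) + tuple(abs(S[f]) for f in order)

def fingerprint_never_misses(S, S2, k):
    # Lemma 3.3: the truncated amplitude distance is a lower bound on the full distance,
    # so filtering with fingerprints returns a superset of the true answers.
    full = sum(abs(S[f] - S2[f]) ** 2 for f in S)
    fp = sum((abs(S[f]) - abs(S2[f])) ** 2
             for f in S if abs(f) <= k and f not in (0, 1))
    return fp <= full

With this lower bound in place, the next paragraph illustrates why the fingerprint underestimate matters for nearest-neighbour search.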
For example, of the two\nshapes a and b, a couldbe the closest to the query shape based\non the distance between full descriptions, but the index will\nreturn b if b is the closest basedon the distance between fingerprints\n.\nTo fix the problem, we pick the nearest neighbour(s) iden-tifiedthrough\nthe index andcompute the distances between\nfull descriptions of the retrievedshapes andthe query shape.\nIf we denote the minimum distance over all retrieved shapes\nwith , the distance from the real nearest neighbours cannot\n1\nThis is often referredto as a range query as well [AFS93,LJF94].\nD. Rafiei, A.O. Mendelzon: Efficient retrieval of similar shapes\n21\norginal, N=34\n4 descriptors are used\n6 descriptors are used\n8 descriptors are used\n10 descriptors are used\n12 descriptors are used\nFig. 3.\nExample of reconstructions from Fourier descriptors\nbe greater than ; otherwise the shapes identified through the\nindex are the nearest neighbours. The full algorithm is as follows\n:\nAlgorithm 1:\n1. Using a nearest-neighbours search algorithm (such as\n[RKV95]), retrieve the nearest neighbour(s) from the index\n.\n2. For every candidate returned in step 1, retrieve its full\ndatabase recordandcompute its real distance from the\nquery shape. Let NN be the set of all data shapes at the\nminimum real distances from the query shape; let be this\nminimum distance.\n3. Using\nas an initial threshold, pose an incremental proximity\nquery to the index (results are returned one at a time\nandthe threshold can be tightenedduring the process).\n4. Get the next data shape within distance\nof the query\nshape. If the distance between the data shape and the query\nshape is less than , then set NN to be the new data shape\nand\nto be the new distance; if the distance between the\nnew data shape and the query shape is , then add the new\ndata shape to NN. Repeat this step until there are no more\nqualifying data shapes.\nAlgorithm 1 is a refinement of the nearest-neighbours algorithm\ngiven by Korn et al. [KSF\n+\n96]. The refinement is in\nthe form of tightening the proximity query thresholdin Step 4\nas more data shapes are retrieved. There is another incremental\nrefinement of the same algorithm, proposedby Seidl and\nKriegel [SK98], which can also be used.\n3.4 Queries with transformations\nA natural way of doing shape matching is to remove certain\ndifferences before running a comparison. We can think of this\nprocess as applying certain transformations to images before\ndoing a matching. We consider the following four kinds of\ntransformations:\n1. Shifting andscaling.\n2. Rotation.\n3. Change of starting point.\n4. Smoothing.\nIn this section, we center our discussion on proximity queries,\nbut the same techniques are applicable to nearest-neighbours\nandall-pairs queries.\nTransformations 1 to 3 can be supportedin a multidimensional\nindex by providing a function that computes the distance\nbetween a data shape and a query shape; transformations can\nbe applied to shape descriptions inside the function. Transformation\n4 can be supported by registering an additional function\nthat checks if an index entry overlaps with a query entry. The\ntransformation can then be appliedto either the index entry or\nthe query entry (or both) before checking for an overlap. Most\nmultidimensional index structures allow users to define such\na function.\nThe next four subsections respectively discuss the eval-uations\nof queries that use individual transformations 1 to 4\nin their expressions of similarities. 
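Before turning to the individual transformations, the filter-and-refine idea behind Algorithm 1 can be summarized in a short sketch (illustrative only; a linear scan over (fingerprint, description) pairs stands in for the R*-tree search of Step 1 and the incremental proximity query of Step 3):

def dist_sq(a, b):
    # squared Euclidean distance between two equal-length vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbours(query_fp, query_desc, db):
    # db: list of (fingerprint, full_description) pairs
    # Step 1: approximate nearest neighbour by fingerprint distance.
    i0 = min(range(len(db)), key=lambda i: dist_sq(query_fp, db[i][0]))
    # Step 2: its real distance initializes the threshold eps.
    eps = dist_sq(query_desc, db[i0][1])
    nn = [i0]
    # Steps 3-4: proximity query with threshold eps, tightened as closer shapes appear.
    for i, (fp, desc) in enumerate(db):
        if dist_sq(query_fp, fp) > eps:      # safe pruning, by Lemma 3.3
            continue
        d = dist_sq(query_desc, desc)
        if d < eps:
            nn, eps = [i], d
        elif d == eps and i not in nn:
            nn.append(i)
    return nn, eps

The refinement over [KSF+96] is precisely this tightening of eps inside the loop as qualifying shapes are retrieved.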
More details on evaluating queries that use a combination of transformations in their expressions of similarities can be found elsewhere [RM00].
3.4.1 Match with shifting or scaling
In many cases we do not care about the position of a shape within a coordinate system, or about its size, for matching purposes.
To match with shifting or scaling, a fingerprint is computed for the query shape, as described in Sect. 3.1, and this fingerprint is used as a search key for the index. If we are interested in a match invariant under shifting, we simply discard the shift factor of the query point and permit any value for the shift factor. Similarly, for scaling-invariant matching, we discard the scale factor of the query point and permit any value for the scale factor.
3.4.2 Match with rotation
We often wish to match shapes irrespective of small variations in orientation. For example, the two shapes shown in Fig. 1 make a perfect match if one shape is rotated by 30°. To achieve this, we state in our query the range of the rotation we wish to perform before doing a shape matching. Query 2, for instance, retrieves all database shapes that match a given query shape after one shape is rotated by θ ∈ [-30°, 30°].
Sometimes, we would like to do matching totally invariant to rotation. For example, we may not care about the orientation at all if we are doing airplane recognition. This can be accomplished by simply allowing a rotation of θ ∈ [-180°, 180°] before matching.
To perform a match with rotation, a fingerprint is computed for the query shape and is used as a search key to the index. The search key is used to retrieve all candidates from the index. These candidates include all data points that match the query point irrespective of the rotation factor. They also include false positives, i.e., data points that are not in the proximity of the query point for any rotation factor. To discard false positives, we need to retrieve the full database record of every candidate and check whether it actually falls in the proximity (say within distance ε) of the query shape after being rotated by some θ ∈ [θ_1, θ_2]. On the other hand, rotating a shape boundary by θ is equivalent to multiplying every descriptor S_f by e^{jθ}. We can thus rewrite Eq. 5 to make it reflect the rotation:
D^2(S, S') = \sum_{f=-M, f \ne 0,1}^{M} |S_f - e^{j\theta} S'_f|^2    (7)
Lemma 3.4. The minimum and the maximum of Eq. 7 take place at θ = arctan(-X/Y) + cπ, where c is an integer, X = \sum_f \alpha_f \sin\phi_f, Y = \sum_f \alpha_f \cos\phi_f, and S^*_f \cdot S'_f = \alpha_f e^{j\phi_f} (* denotes complex conjugation (2)).
Since we are interested in the minimum of Eq. 7 when θ ∈ [θ_1, θ_2] with -π ≤ θ_1, θ_2 ≤ π, the minimum must take place either at an endpoint (i.e., θ_1 or θ_2) or at any point θ ∈ {arctan(-X/Y) - π, arctan(-X/Y) + π, arctan(-X/Y)} which is inside the region. It is straightforward to compute the distance function for these values and find the optimal rotation factor that results in the minimum distance; a short sketch of this computation is given below.
3.4.3 Match with changing starting point
When we compare two boundaries, we do not care about their starting points. If we use the same tracing algorithm for every boundary, there cannot be large variations in the starting point (though small variations are still possible).
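Returning to the rotation match for a moment, the candidate evaluation suggested by Lemma 3.4 can be sketched as follows (an illustration under the reconstruction above, not the authors' code; descriptors are assumed to be aligned complex vectors over the frequencies f ≠ 0, 1, and angles are in radians):

import numpy as np

def best_rotation(S, S2, theta1, theta2):
    # S, S2: complex descriptor vectors of equal length (frequencies f != 0, 1)
    prod = np.conj(S) * S2                    # S*_f . S'_f = alpha_f e^{j phi_f}
    X = float(np.sum(prod.imag))              # sum_f alpha_f sin(phi_f)
    Y = float(np.sum(prod.real))              # sum_f alpha_f cos(phi_f)
    base = np.arctan2(-X, Y)                  # arctan(-X/Y), quadrant-safe
    # candidates: both endpoints plus the stationary points that fall inside the range
    cands = [theta1, theta2] + [base + c * np.pi for c in (-1, 0, 1)
                                if theta1 <= base + c * np.pi <= theta2]
    def dist_sq(theta):                       # Eq. 7
        return float(np.sum(np.abs(S - np.exp(1j * theta) * S2) ** 2))
    theta_opt = min(cands, key=dist_sq)
    return theta_opt, dist_sq(theta_opt)

The same pattern, evaluating the distance at a few analytically derived candidates, carries over to starting-point shifts, discussed next.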
However, we may\nnot have much control over the tracing algorithm, andas a\nresult two similar shapes may have different starting points;\nor even if we use the same tracing algorithm for all boundaries,\nwe may want to remove small variations (if any) before doing\na comparison.\nShifting the starting point of a boundary by\n\n3\nis equivalent\nto multiplying every descriptor\nS\nf\nby\ne\njf\n. This operation\n, similar to rotation, only affects the phases of Fourier\n2\nThe complex conjugate of\nz = x+yj is defined as z\n\n= x-yj.\n3\nFor example,\n= 2s\n0\n/N for a boundary of length N means\nshifting its starting point by\ns\n0\npoints in ccw direction.\ndescriptors. Thus, we can still use the index to retrieve all\ncandidates. To discard false positives, we need to retrieve the\nfull database record of every candidate and check whether it\nstill qualifies after the starting point is being shiftedby some\n[\n1\n,\n2\n]. We can again rewrite Eq. 5 to make it reflect the\nshift in starting point.\nD\n2\n(S, S ) =\nM\nf=-M,f=0,1\n|S\nf\n- e\njf\n.S\nf\n|\n2\n(8)\nThe optimal value for\ncan be obtainedby equating the derivative\nof the above equation to zero andfinding the roots. This\ncan be done using numerical techniques up to the machine\nprecision [PTVF92].\n3.4.4 Match with smoothing\nOccasionally, we wish to do matching based on overall shape,\nirrespective of small variations in details and noise. In such\ncases, we wouldlike to smooth out sharp edges andsmall variations\nbefore doing the comparison. To achieve this, we can\napply a moving average transformation to shape boundaries.\nWhen an l-point moving average is appliedto a boundary,\nevery point is replacedwith the average of its l surrounding\npoints. On the other hand, applying a moving average to a\nboundary in the spatial domain corresponds to a vector multiplication\nin the frequency domain. For example, to apply a\n2-point moving average to a boundary with 10 points, we can\nequivalently multiply its Fourier descriptors by the Fourier\ntransform of vector\nm\n2\n= (\n1\n2\n,\n1\n2\n, 0, 0, 0, 0, 0, 0, 0, 0). This\ngives us the Fourier descriptors of the smoothed boundary.\nA distinguishing feature of smoothing, compared to other\ntransformations discussed in this paper, is that its effect on\na shape depends on the characteristics of the shape. This is\nunlike rotation, for instance, where the effect of rotating a\ndata shape by\nbefore a comparison is the same as that of\nrotating the query shape by\n-.\nGiven a query shape anda desiredmoving average for\nsmoothing, the matching can be performedas follows:\n1. Findthe Fourier transform of the desiredmoving average\n(as demonstrated for 2-point moving average); let us\ndenote this by\nM.\n2. Transforming the query shape: Apply the transformation to\nthe query shape description\n(sh\nx\n, sh\ny\n, sc, Q) by replacing\nQ with Q where Q\ni\n= Q\ni\nM\ni\nfor\ni = -1, -2, 2, . . . ,\n-k, k.\n3. Construct a search key by computing the fingerprint of the\nnew shape description.\n4. Transforming the index: Apply\nM to data entries stored in\nthe index before checking for an overlap between a data\nentry and the search key; this is done inside the function\nthat checks if a data entry from the index overlaps the\nsearch key.\n5. For every candidate, retrieve its full database record, apply\nM to it andcheck if the resulting shape falls in the\nproximity of\nQ .\nThe transformation can be appliedto the index on the fly as\nthe index is being traversed. 
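A small sketch of how the last two transformations act purely on the stored descriptors (again illustrative; descriptors are assumed to be kept as a complex vector S with a matching integer frequency array freqs):

import numpy as np

def shift_starting_point(S, freqs, theta):
    # Shifting the starting point by theta multiplies S_f by e^{j f theta} (cf. Eq. 8),
    # which changes only the phases and therefore never the amplitudes in a fingerprint.
    return S * np.exp(1j * np.asarray(freqs) * theta)

def smooth(S, freqs, l, N):
    # l-point moving average: multiply the descriptors by the Fourier transform of the
    # averaging kernel m_l = (1/l, ..., 1/l, 0, ..., 0) of length N, as in the 2-point example.
    m = np.zeros(N)
    m[:l] = 1.0 / l
    M = np.fft.fft(m)                       # kernel spectrum over bins 0 .. N-1
    return S * M[np.asarray(freqs) % N]     # negative frequencies map to the upper bins

Because the effect of smoothing depends on the data shape itself, the data entries (or the index entries) have to be transformed as well, which is exactly why the transformation is applied on the fly while the index is traversed.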
The issue of on-the-fly applying\nsingle or multiple transformations to an index is studied\nin the domain of time series data [RM97,RM00]. The same\ntechniques can be appliedto the domain of shapes.\nD. Rafiei, A.O. Mendelzon: Efficient retrieval of similar shapes\n23\na\nD=0\nb\nD=0.05\nc\nD=0.10\nd\nD=0.15\ne\nD=0.20\nf\nD=0.25\ng\nD=0.30\ni\nD=0.10\nh\nD=0.40\nj\nD=0.20\nFig. 4.\nQuery shapes, shown in the top two rows, andtheir nearest neighbours, shown in the bottom two rows\nExperiments\nTo determine the effectiveness of our proposed technique, we\nimplementedour methodandran experiments on a dataset\nof 11,000 real hand-written digits. The data was obtained\nfrom the CEDAR CDROM dataset, which was gathered from\nscannedZIP codes at the Buffalo Post Office [Hul94]. For\nevery digit, the dataset held 1,100 images. Each image was\noriginally in the form of a 16\n16 gray-scale image which\nwas convertedinto a binary image (by thresholding) andwas\ntraced to identify a shape boundary. Then, the boundary was\nencoded using 30 lower Fourier descriptors. For boundaries\nwith length less than 30, zero was padded at the end. For each\nshape, both its description and its fingerprint are computed,\nas outlinedin Sect. 3.1, andusedfor the purpose of indexing.\nAs our index, we used Norbert Beckmann's implementation\nof the R*-tree [BKSS90]. For the nearest-neighbours search,\nwe implementedthe algorithm developedby Roussopoulos et\nal. [RKV95] as part of Algorithm 1 over R*-tree. We stored\n10,000 shapes (1,000 samples of each digit) in the index and\nusedthe 1,000 remaining samples as queries. We ran each\nquery 10 times andaveragedthe execution times from these\nruns. All our experiments were conducted on a 168 MHz Ul-trasparc\nstation.\nWe investigatedthe following questions:\nHow effective andpractical is our technique in classifying\nshapes in a real data domain?\nHow many Fourier coefficients shouldwe store in the index\n? Storing larger number of coefficients reduces the\nnumber of false positives but increases the index dimensionality\n, andas a result the search time.\nHow does our technique compare to sequential scanning?\n4.1 Shape classification\nTo verify the effectiveness of our proposedtechnique in classifying\nshapes, we triedto classify all 1,000 query shapes by\nassigning every query shape to the class of its nearest neighbours\n. When there was more than one nearest neighbours for\na shape, we pickedone randomly. The result was interesting:\n96.4% of shapes were classifiedcorrectly. Some of those query\nshapes are shown, in their gray scale andbinary representation\n, in the two top rows of Fig. 4 along with their nearest\nneighbours shown in the two bottom rows of the same figure.\nTable 1.\nVarious ranges of rotations andtheir effects in correctly\nclassifying the shapes of hand-written digits\nRotation factor\nFraction of query shapes\n\nclassifiedcorrectly (%)\n[0, 0]\n96.4%\n[-10, 10]\n96.5%\n[-20, 20]\n96.4%\n[-30, 30]\n96.4%\n[-40, 40]\n96.3%\n[-50, 50]\n96.3%\nAs is shown, query shapes shown in Figs. 4a to 4h are classified\ncorrectly with their Euclidean distances from their nearest\nneighbours varying from 0 to 0.40. The query shape shown in\nFig. 4i is not classifiedcorrectly, but its binary representation\nlooks quite similar to that of its nearest neighbour. The\nquery shape shown in Fig. 
4j looks different from its nearest\nneighbour, though their boundaries still look similar.\nIn another experiment, we usedQuery 2 andtriedto identify\nfor each query shape its nearest neighbour irrespective\nof a rotation factor\n[-30\n\n, 30\n\n]. This did not change the\noverall classification rate, i.e., only 96.4% of shapes were classifiedcorrectly\n. However, allowing a rotation factor in general\ndid retrieve better matches. Figure 5 shows six query shapes\n(in the top two rows), their original nearest neighbours (in\nthe middle two rows) and their optimal nearest neighbours (in\nthe bottom two rows) when the rotation factor variedfrom\n-30\n\nto\n30\n\n. As is shown, for example rotating the data shape\nshown at the bottom of Fig. 5a by\n11\n\nin the ccw direction\nreduces its Euclidean distance from the query shape to 0.30;\nthis is less than the Euclidean distance between the same query\nshape andits original nearest neighbour. Table 1 summarizes\nthe effect of various rotations in correctly classifying shapes.\nAs is shown, applying a small rotation (\n[-10, 10]) to d ata\nshapes before matches slightly improves the classification rate\nof hand-written digits; larger rotations, on the other hand, either\nhave no effect or deteriorate the rate of correct classifications\n. This is because the digit data is generally sensitive\nto orientations andallowing larger rotations can potentially\nretrieve more non-identical digits.\nWe later picked1,000 shapes among those storedin the\ndatabase, applied to each shape a random rotation in the range\n[-, ] andusedit as a query shape. We only specifiedthe\n24\nD. Rafiei, A.O. Mendelzon: Efficient retrieval of similar shapes\na\nD=0.34\nR=11,D=0.30\nb\nD=0.25\nR=-11,D=0.22\nc\nD=0.36\nR=9,D=0.35\nd\nD=0.60\nR=15,D=0.57\ne\nD=0.45\nR=27,D=0.35\nf\nD=0.48\nR=-12,D=0.41\nFig. 5.\nQuery shapes (the top two rows), their original nearest neighbours\n(the middle two rows) and their optimal nearest neighbours\n(the bottom two rows) varying the rotation factor in\n[-30\n\n, 30\n\n]\n2\n4\n6\n8\n10\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\nNumber of Fourier amplitudes\nFraction of shapes classified correctly\nFig. 6.\nThe fraction of query shapes classifiedcorrectly, varying the\nnumber of Fourier amplitudes used for classification\nrotation interval in our query. As expected, for each query\nshape, only the shape itself was retrievedfrom the database.\n4.2 Varying the cut-off frequency\nThe effectiveness of the index mainly depends on the concentration\nof the key shape information within a few descriptors\nof fingerprints. To measure this effectiveness, we ran some\nexperiments varying the number of Fourier descriptors stored\nin a fingerprint. Figure 6 shows the ratio of query shapes that\nare classifiedcorrectly (according to the criteria outlinedin\nSect. 4.1) to all query shapes varying the number of Fourier\namplitudes used for classification. As the number of amplitudes\nincreases up to 6, the ratio of shapes that are classified\ncorrectly increases accordingly up to 0.778. This ratio remains\nthe same despite increasing the number of Fourier amplitudes\nfrom 6 to 10. 
Comparedto a full shape description which consists\nof both the amplitudes and the phases of 30 lower Fourier\ncoefficients, classifying 96.4% of the shapes correctly, a fingerprint\ndoes a pretty good job using only 6 amplitudes which\nmake up only 10% of a full shape description and still classifying\n0.778% of the shapes correctly.\nFigure 7a shows the average execution time of Algorithm 1\nfor 1,000 nearest-neighbours queries, broken into: (1) search\ntime in Step 1 to identify the initial approximate nearest neighbours\n; and(2) search time in Step 3 to findthe real nearest\nneighbours. Figure 7b shows the fraction of index nodes accessed\n, averaged over 1,000 nearest-neighbours queries, again\nbroken into the fractions accessedin Step 1 andStep 3.\nAs the number of Fourier amplitudes increases, the index\nselectivity improves, i.e., the index gives fewer false hits. The\nnumber of false hits, as is depicted in Fig. 8 for a proximity\nquery, mainly depends on the number of Fourier amplitudes\nusedin fingerprints andthe output size of the query. Due to\nthe high similarity between different shapes of the same digit,\na large fraction of our false hits (for example, 62% when the\noutput size was 11 andthe number of Fourier amplitudes was\n6) were other shapes of the same digit depicted by the query\nshape which were not within the specifieddistance of the query\nshape.\nThe reduction in false hits reduces the search time since\nless time is needed to remove those false hits. However, increasing\nthe number of Fourier amplitudes after some point,\noften calledthe cut-off frequency, either does not reduce the\nnumber of false hits or reduces it only slightly. This is because\nhigher frequency amplitudes carry less of the energy\nthan lower frequency ones. On the other hand, the search time\nincreases with the index dimensionality, because the tree becomes\ndeeper. Furthermore, the pruning becomes harder, as\nis shown in Fig. 7 with the ratio of index nodes that are accessed\n, because the probability of an arbitrary data bounding\nrectangle being close to the query point increases with the\ndimensionality.\nGiven the trade-off between the tree search time and the\ntime spent for removing false hits, it is natural to expect that\nthere is an optimal cut-off frequency. Basedon our experiments\n, as illustratedin Figs. 6 and7, the optimal cut-off frequency\noccurs for as few as 6 Fourier amplitudes.\n4.3 Comparison to sequential scanning\nFigure 9 shows the average execution time of our proposed\nmethodcomparedto sequential scanning for 1,000 nearest-neighbours\nqueries. To get its best performance, we used\nbufferedinput for sequential scanning, in a system with buffer\nsize of 8,192 bytes. For the experiment shown in Fig. 9a, the\nborder length was fixed to 30 while the database size variedfrom\n10,000 to 30,000 shapes. Since the size of dataset\nwas limited, we doubled or tripled the size by adding one or\ntwo randomly rotated copies of each shape to the database.\nThis doubling did not affect the performance of sequential\nscanning, which was linear in the input size, but we expected\nthe doubling to deteriorate the performance of our method\nsince high similarity among database shapes would increase\nthe number of false hits. For the experiment shown in Fig. 9b,\nthe number of shapes was fixedat 10,000 while the number of\nFourier descriptors used to represent a boundary varied from\nD. Rafiei, A.O. 
Mendelzon: Efficient retrieval of similar shapes
20 to 50. As shown in the figure, increasing either the number of shapes or the border length increases the relative advantage of our method, making it more attractive for large databases.
Fig. 7. Breakup of (a) the execution time and (b) the fraction of index nodes accessed, for nearest-neighbours queries, varying the number of Fourier amplitudes
Fig. 8. The average number of false positives for every qualifying shape, (a) varying the number of Fourier amplitudes and fixing the average output size of the query to 11, and (b) varying the average output size and fixing the number of Fourier amplitudes to 6
Fig. 9. (a) Time per query varying the number of shapes, for nearest-neighbours queries. (b) Time per query varying the border length, for nearest-neighbours queries
Conclusions
We have proposed an indexing technique that can efficiently retrieve images of objects based on similarity between their boundary shapes. We have used Fourier descriptors as our shape features and have developed an index organization such that similar shapes can be easily retrieved irrespective of their differences in size, position and orientation. The highlight of our contribution is an index structure that helps find optimal matches between shapes irrespective of various differences between them. Our technique has the following desirable properties:
It uses a shape matching mechanism which is well-studied in the area of pattern recognition.
It exploits the fact that important features of a large class of shapes are concentrated within only a few Fourier descriptors.
It can handle shapes of various sizes.
It guarantees efficient retrieval of all qualifying shapes.
Furthermore, we have presented a refinement of an earlier nearest-neighbours search algorithm for feature vectors that are truncated, due to the significance of some features over others, before being stored in an R-tree index.
Acknowledgement. We thank the Natural Sciences and Engineering Research Council of Canada and the Institute for Robotics and Intelligent Systems for their support of this work.
References
[AFS93] Agrawal R, Faloutsos C, Swami A (1993) Efficient similarity search in sequence databases. In: Proc. 4th International Conference on Foundations of Data Organizations and Algorithms (FODO '93), pp 68-84, Chicago
[BG80] Bribiesca E, Guzman A (1980) How to describe pure form and how to measure differences in shape using shape numbers. Pattern Recognition 12(2):101-112
[BKK97] Berchtold S, Keim DA, Kriegel HP (1997) Using extended feature objects for partial similarity retrieval.
VLDB J\n6(4):333348\n[BKSS90] Beckmann N, Kriegel HP, Schneider R, Seeger B (1990)\nThe R* tree: an efficient androbust index methodfor points\nandrectangles. In: Proc. ACM SIGMOD International\nConference on Management of Data, pp 322331, Atlantic\nCity\n[Bri81]\nBribiesca E (1981) Arithmetic operations among shapes\nusing shape numbers. Pattern Recognition 13(2):123138\n[BSA91] Belkasim SO, Shridhar M, Ahmadi M (1991) Pattern\nrecognition with invariants: a comprehensive study and\nnew results. Pattern Recognition 24:11171138\n[FBF\n+\n94] Faloutsos C, Barber R, Flickner M, Niblack W, Petkovic\nD, Equitz W (1994) Efficient andeffective querying by\nimage content. J Intell Inf Syst 3(3/4):231262\n[GG98]\nGaede V, Gunther O (1998) Multidimensional access\nmethods. ACM Comput Surv 30(2):170231\n[GK95]\nGoldin DQ, Kanellakis PC (1995) On similarity queries\nfor time-series data: constraint specification and implementation\n. In: 1st Int. Conference on the Principles and\nPractice of Constraint Programming, Lecture Notes in\nComputer Science, vol. 976. Springer, Berlin Heidelberg\nNew York, pp. 137153\n[Gol97]\nGoldin\nDQ\n(1997)\nConstraint\nquery\nalgebras\n.\nPhD\nthesis,\nBrown\nUniversity,\nwww.cs.brown.edu/people/dgk/Papers/thesis.ps\n[GW92]\nGonzalez RC, Woods RE (1992) Digital image processing.\nAddison-Wesley, Reading, Mass., USA\n[Hul94]\nHull JJ (1994) A database for handwritten text recognition\nresearch. IEEE Trans Pattern Anal Mache Intell\n16(5):550554\n[Jag91]\nJagadish HV (1991) A retrieval technique for similar\nshapes. In: Proc. ACM SIGMOD International Conference\non Management of Data, pp 208217, Denver, Colo.,\nUSA\n[JMM95] Jagadish HV, Mendelzon AO, Milo T (1995) Similarity-basedqueries\n. In: Proc. 14th ACM SIGACT-SIGMOD-SIGART\nSymposium on Principles of Database Systems,\npp 3645, San Jose, Calif., USA\n[KSF\n+\n96] Korn F, Sidiropoulos N, Faloutsos C, Siegel E, Protopa-pas\nZ (1996) Fast nearest neighbor search in medical image\ndatabases. In: Proc. 22nd International Conference on\nVery Large Data Bases, pp 215226, Mumbai, India\n[KSP95] Kauppinen H, Seppanen T, Pietikainen M (1995) An\nexperimental comparison of autoregressive andFourier-baseddescriptors\nin 2D shape classification. IEEE Trans\nPattern Anal Mach Intell 17(2):201207\n[LJF94]\nLin KI, Jagadish HV, Faloutsos C (1994) The TV-tree\n- an index structure for high-dimensional data. VLDB J\n3(4):517542\n[L\nOSO97] Li JZ,\nOzsu MT, Szafron D, Oria V (1997) MOQL: a\nmultimedia object query language. In: Proc. 3rd International\nWorkshop on Multimedia Information Systems, pp\n1928\n[MG93]\nMehrotra R, Gary JE (1993) Feature-basedretrieval of\nsimilar shapes. In: Proc. 9th International Conference on\nData Engineering, pp 108115, Vienna, Austria\n[MM86] Mokhtarian F, MackworthA (1986) A scale-baseddescrip-tion\nandrecognition of planar curves andtwo dimensional\nshapes. IEEE Trans Pattern Anal Mach Intell 8(1):3443\n[PF77]\nPersoon E, Fu KS (1977)\nShape discrimination using\nFourier descriptors. IEEE Trans Syst Man Cybern\n7(2):170179\n[PTVF92] Press WH, Teukolsky SA, Vetterling WT, Flannery BP\n(1992) Numerical recipes in C. Cambridge University,\nCambridge, UK\n[Raf98]\nRafiei D (1998) Fourier-transform basedtechniques in\nefficient retrieval of similar time sequences. PhD thesis,\nUniversity of Toronto\n[RH74]\nRichardCW, Hemami H (1974) Identification of three-dimensional\nobjects using Fourier descriptors of the\nboundary curve. 
IEEE Trans Syst Man Cybern 4:371378\n[RKV95] Roussopoulos N, Kelley S, Vincent F (1995) Nearest\nneighbor queries. In: Proc. ACM SIGMOD International\nConference on Management of Data, pp 7179, San Jose,\nCalif., USA\n[RM97]\nRafiei D, Mendelzon AO (1997) Similarity-based queries\nfor time series data. In: Proc. ACM SIGMOD International\nConference on Management of Data, pp 1324, Tucson,\nAriz., USA\nD. Rafiei, A.O. Mendelzon: Efficient retrieval of similar shapes\n27\n[RM00]\nRafiei D, Mendelzon AO (2000) Querying time series\ndata based on similarity. IEEE Trans Knowl Data Eng\n12(5):675693\n[SK98]\nSeidl T, Kriegel HP (1998) Optimal multi-step nearest\nneighbour search. In: Proc. ACM SIGMOD International\nConference on Management of Data, pp 154165, Seattle,\nWash., USA\n[WW80] Wallace TP, Wintz PA (1980)\nAn efficient three-dimensional\naircraft recognition algorithm using normal-izedfourier\ndescriptors. Comput Graph Image Process\n13:99126\n[ZR72]\nZahn CT, Roskies RZ (1972) Fourier descriptors for plane\nclosedcurves. IEEE Trans Comput 21(3):269281", "keywords": "fingerprint;Shape retrieval Similarity retrieval Fourier descriptors;non textual objects;efficiency;database;handwriting recognition;Fourier descriptors;Image databases;search;queries;shape classification;indexing techniques;Similarity queries"} {"name": "8", "title": "A Flexible 3D Slicer for Voxelization Using Graphics Hardware", "abstract": "In this paper we present a simple but general 3D slicer for voxelizing polygonal model. Instead of voxelizing a model by projecting and rasterizing triangles with clipping planes, the distance field is used for more accurate and stable voxelization. Distance transform is used with triangles on voxels of each slice. A voxel is marked with opacity only when the shortest distance between it and triangles is judged as intersection. With advanced programmable graphics hardware assistance, surface and solid voxelization are feasible and more efficient than on a CPU.", "fulltext": "Introduction\nObject representation is a broad topic in research. In computer\ngraphics, polygons play a dominant role in 3D graphics because\nthey approximate arbitrary surfaces by meshes. In games and animations\n, surface representation is the main technique used in rendering\nand visualization. However, volumetric representation, an\nalternative method to traditional geometric representation, is well\nknown since the 1980s. It provides a simple and uniform description\nto measure and model a volumetric objects and establish the\nresearch field of volumetric graphics. Voxelization is a process of\nconstructing a volumetric representation of an object. Voxelizing a\npolygonal object is not only a shift of the representation, it gives\nan opportunity to manipulate mesh object with volumetric operations\nsuch as morphological operation and solid modeling. Many\napplications, for example, CSG modeling, virtual medicine, haptic\nrendering, visualization of geometric model, collision detection, 3D\nspatial analysis, and model fixing, work on volumetric representation\nor use it as the inter-medium.\nIn this paper, we calculate the accurate distance field on GPU to\ncompute coverage of each voxel in voxelization for polygonal models\n. Our method works for arbitrary triangulated models without\nany preprocessing for models, except organizing meshes slab by\nslab in order to prune the unnecessary computation while voxelizing\ncomplex models. 
By using the power of GPU, Hausdorff distance\nis guaranteed between each sampled voxel and the polygonal\nmodel. Surface voxelization with distance field on a GPU works\nwell and runs faster than on a PC with a CPU. Our method is a reliable\nand accurate solution for the polygonal model under a given\ndistribution of sampling voxels and a specific opacity criterion. Besides\n, error tolerance in voxelization is easy to manipulate by adjusting\nthe threshold of opacity criterion for voxelization, which\nalso dominates the smoothness of features and the thickness of surface\nvoxelization.\n\ne-mail:wktai@mail.ndhu.edu.tw\nThe rest of paper is organized as follows. Some related works are\nsurveyed in the section 2. In section 3, we present the computation\nof Hausdorff distance and our framework. The experimental results\nare illustrated in section 4. Finally, we conclude the proposed approach\nand point out some future works.\nRelated Works\nVolume construction approaches are often referred to as scan-conversion\nor voxelization methods. Researchers mainly focused\non modeling aspects such as robustness and accuracy. Wang and\nKaufman [Wang and Kaufman 1993] used a method that samples\nand filters the voxels in 3D space to produce alias-free 3D volume\nmodels. They used filters to produce final density from the support\nof the region that polygons lie inside. Schroeder and Lorensen\n[Schroeder et al. 1994] created a volumetric model by finding clos-est\npolygon from distance map and classify the opacity of voxels.\nHuang et al. [Huang et al. 1998] described separability and min-imality\nas two desirable features of a discrete surface representation\nand extended 2D scan line algorithm to perform voxelization.\nDachille and Kaufman [Dachille IX and Kaufman 2000] presented\nan incremental method for voxelizing a triangle with pre-filtering\nto generate multivalued voxelization. Widjaya et al. [Widjaya et al.\n2003] presented the voxelization in common sampling lattices as\ngeneral 2D lattices including hexagonal lattices and 3D body-center\ncubic lattices. Ju [Ju 2004] constructed an octree grid for recording\nintersected edges with the model and expanded nodes to scan-convert\na polygon on an octree, and then generate signs from the\nboundary edges and faces of the octree.\nIn recent years, attention on performance of voxelization raises.\nMore and more studies try to explore the benefits of graphics hardware\nfor more efficient rendering. Chen and Feng [Chen and Fang\n1999] presented a slicing-based voxelization algorithm to generate\nslices of the underlying model in the frame buffer by setting appropriate\nclipping planes and extracting each slice of the model, which\nis extended and published in later [Chen and Fang 2000]. Karabassi\nand Theoharis [Karabassi and Theoharis 1999] projected an object\nto six faces of its bounding box through standard graphics system\nfor the outermost parts and read back the information from depth\nbuffer. However it works well only on convex objects. Dong et\nal. [Dong et al. 2004] proposed a real-time voxelization method using\nGPU acceleration to rasterize and texelize an object into three\ndirectional textures and then synthesize textures back to the final\nvolume.\n285\nHausdorff Distance Computation and Voxelization\nIn this section, we first discuss the computation of Hausdorff distance\nbetween a given triangle and a point. 
Then we explain how we use the GPU to compute the distance field of triangles and how we modify the rendering pipeline.
3.1 Distance Field Computation
For a given 3D point P(x, y, z) and a triangle T(V_0, V_1, V_2), the Hausdorff distance is the shortest distance between the point P and any point v on the triangle. A point on triangle T can be parametrically defined by two linearly independent vectors with two weights (s, t) by
T(s,t) = B + s e_0 + t e_1,    (1)
where (s,t) \in D = \{(s,t) : s \in [0,1],\, t \in [0,1],\, s + t \le 1\}, and B = V_0, e_0 = V_1 - V_0 and e_1 = V_2 - V_0.
For any point on triangle T, the distance from T to P is
\| T(s,t) - P \|,    (2)
or we can use the squared-distance function instead,
Q(s,t) = \| T(s,t) - P \|^2,    (3)
where a point p = (\bar{s}, \bar{t}) exists which makes Q(\bar{s}, \bar{t}) minimum. Therefore, the computation of the distance can be reduced to a minimization problem. For an efficient computation, we can expand Q(s,t) as
Q(s,t) = a s^2 + 2 b s t + c t^2 + 2 d s + 2 e t + f,    (4)
where
a = e_0 \cdot e_0, \quad b = e_0 \cdot e_1, \quad c = e_1 \cdot e_1, \quad d = e_0 \cdot (B - P), \quad e = e_1 \cdot (B - P), \quad f = (B - P) \cdot (B - P).    (5)
From analyzing the gradient of Q(s,t), the minimum (\bar{s}, \bar{t}) occurs only where \nabla Q is zero, i.e.,
\bar{s} = \frac{be - cd}{ac - b^2}, \qquad \bar{t} = \frac{bd - ae}{ac - b^2}.    (6)
If (\bar{s}, \bar{t}) \in D, the minimum distance is the distance between p and P; otherwise, according to the signs of \bar{s} and \bar{t}, there are six possible regions in which the shortest-distance point p may lie on triangle T, as shown in Figure 1. Efficient solutions are well addressed in the book [Schneider and Eberly 2003] by CPU computation with simple calculation and logic classification.
Figure 1: Six regions in the (s,t) coordinate. Space is partitioned by the range of the parameters s and t for efficient classification and calculation of the shortest position.
Figure 2: Rendering pipeline for generating the distance field of a triangle. A quad is rendered instead of a triangle. Five channels of each vertex of the quad (position, normal, and 4 texture coordinates) are filled with the position of the quad, the position of the voxel, and the information of a triangle: v_0, e_0, e_1, and the normal N, respectively.
However, on the GPU there is no efficient dynamic flow control to determine the shortest point on a triangle. Therefore, instead of directly computing the point of shortest distance on the triangle, we compute the distance from the 3D point to four possible points, which may be inside the triangle or on its three boundaries, and take the minimum as the shortest distance. These four points are
(s_0, t_0) = \left( \frac{be - cd}{ac - b^2}, \frac{bd - ae}{ac - b^2} \right), \quad (s_1, t_1) = \left( 0, -\frac{e}{c} \right), \quad (s_2, t_2) = \left( -\frac{d}{a}, 0 \right), \quad (s_3, t_3) = \left( \frac{c + e - b - d}{a - 2b + c}, \frac{a + d - b - e}{a - 2b + c} \right),    (7)
where position (s_0, t_0) assumes point p is inside the triangle, and positions (s_1, t_1), (s_2, t_2) and (s_3, t_3) assume point p is on the boundaries s = 0, t = 0, and s + t = 1, respectively. All calculated points are truncated to the range [0, 1], so that the three end vertices of the triangle T are also taken into consideration, and this guarantees these points are on the triangle for distance computation. A CPU-side sketch of this evaluation is given below.
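The four-candidate evaluation maps to nearly branch-free code; the following Python sketch mirrors Eqs. 4-7 (an illustration only; the exact clamping rule is one plausible reading of the truncation described above, and degenerate triangles are not handled):

import numpy as np

def point_triangle_distance(P, V0, V1, V2):
    B, e0, e1 = V0, V1 - V0, V2 - V0
    a, b, c = e0 @ e0, e0 @ e1, e1 @ e1
    d, e, f = e0 @ (B - P), e1 @ (B - P), (B - P) @ (B - P)
    det = a * c - b * b
    # the four candidate (s, t) pairs of Eq. 7: interior stationary point and the
    # minimizers on the boundaries s = 0, t = 0 and s + t = 1
    cands = [((b * e - c * d) / det, (b * d - a * e) / det),
             (0.0, -e / c),
             (-d / a, 0.0),
             ((c + e - b - d) / (a - 2 * b + c), (a + d - b - e) / (a - 2 * b + c))]
    best = np.inf
    for s, t in cands:
        s = min(max(s, 0.0), 1.0)            # truncate to [0, 1]
        t = min(max(t, 0.0), 1.0 - s)        # keep s + t <= 1 so the sample stays on T
        q = a*s*s + 2*b*s*t + c*t*t + 2*d*s + 2*e*t + f      # Eq. 4
        best = min(best, q)
    return float(np.sqrt(max(best, 0.0)))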
Therefore, the minimum distance is\nthe shortest distance from the point P to the triangle T .\n3.2\nGeometry Rendering for Voxelization\nVoxelization by projection and rasterization faces the difficulty of\nnon-uniform sampling of polygons because polygons with arbitrary\norientations are not parallel to projection plane for maximum projection\narea. Even classifying polygons and projecting them to individual\nbest-fit plane, there still have no guarantee on valid rasterization\n. However, distance field is omni-directional, i.e., insensitive\nto the projection plane, and has no assumption on input geometry\nand therefore no extra preprocessing is required.\nOur approach is a slice-based approach for distance field generation\nand voxelization. Figure 2 shows the rendering process\nfor generating the distance field for a triangle. For each triangle\nT\ni\n= {T\ni\n(s,t)|v\n0\n+ se\n0\n+te\n1\n, s 0,t 0, s +t 1}, a full-filled quad\nQ\ni\n= {q\ni\n0\n, q\ni\n1\n, q\ni\n2\n, q\ni\n3\n} is rendered and rasterized to generate the distance\nfield from voxels on a slice to the triangle. Triangle data and\nvoxel positions are associated with the rendering quads. Voxel positions\nare stored in the channel of vertex normal, the triangle data\n(base vertex B, vectors e\n0\nand te\n1\n, and the normal N) are separately\nstored in channels of texture coordinates and transmitted to GPU.\nVoxel positions are linearly interpolated in rasterization of rendering\npipeline and pairs of triangle data and voxel positions are sent\nto pixel processors for Hausdorff distance computation. After distance\ncomputation, the shortest distance between a triangle and a\n286\nvoxel is stored in the pixel depth and the pixel color is assigned\nfor identification depending on applications. For example, binary\nsurface voxelization uses color information to identify whether a\nvoxel intersects a geometry such as 0 for empty and 1 for opacity;\ndistance visualization uses color information to display the distance\nfrom geometries, etc.\nThe distance field of polygonal objects is constructed incrementally\nby rendering quads of triangles. Each pixel of depth buffer keeps\nthe shortest distance from its voxel to triangles rendered. Depth\nbuffer updates when a triangle is rendered. Unless distance is recal-culated\non different slice or rendered objects deform, quads which\nhave been rendered have no need to be re-rendered again even new\ngeometry are added in the scene. Depth buffer of the viewport is\ninitialized as infinitude.\nThe rendering pseudo code is abstracted as follows:\nfor each triangle t on slice i {\nCreate a quad Q for the triangle t\nfor k = 0 to 3 {\n// assign a full-filled quad\n// q is end vertices of quad\nQ.q[k].position = ScreenBoundary.q[k];\n// assign voxel position, and triangle data\nQ.q[k].normal = Slice[i].q[k];\nQ.q[k].tex0 = t.B;\nQ.q[k].tex1 = t.e0;\nQ.q[k].tex2 = t.e1;\n}\nRenderQuad(Q);\n}\n3.3\nSurface Voxelization\nWe use local distance field to reduce work load of GPU because\nthe distance field far away from a triangle is meaningless for surface\nvoxelization. For each triangle, we extend its parallel projected\nbounding rectangle by a small scalar for effective rasterization especially\nfor triangles perpendicular to the projection plane. Due to\ncoherence and precision in interpolating voxel positions, triangles\nare rendered with extended bounding rectangles. 
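Before describing the per-pixel test, the overall slab-and-slice organization can be summarized in a CPU-side sketch (illustrative only; the nested per-voxel loop stands in for quad rasterization plus the pixel-shader distance evaluation, and point_triangle_distance is the routine sketched after Sect. 3.1):

import numpy as np

def voxelize_surface(triangles, res, threshold):
    # triangles: list of (V0, V1, V2) NumPy arrays with coordinates in [0, 1]^3
    vol = np.zeros((res, res, res), dtype=bool)
    step = 1.0 / res
    # slab partitioning along Z: an active-triangle list per slice prunes rendering work
    slabs = [[] for _ in range(res)]
    for tri in triangles:
        zs = [v[2] for v in tri]
        lo = max(int((min(zs) - threshold) / step), 0)
        hi = min(int((max(zs) + threshold) / step), res - 1)
        for k in range(lo, hi + 1):
            slabs[k].append(tri)
    for k in range(res):                                  # process one slice at a time
        z = (k + 0.5) * step
        for tri in slabs[k]:
            # local distance field: only voxels near the triangle's extended projection
            xs = [v[0] for v in tri]
            ys = [v[1] for v in tri]
            i_lo = max(int((min(xs) - threshold) / step), 0)
            i_hi = min(int((max(xs) + threshold) / step), res - 1)
            j_lo = max(int((min(ys) - threshold) / step), 0)
            j_hi = min(int((max(ys) + threshold) / step), res - 1)
            for i in range(i_lo, i_hi + 1):
                for j in range(j_lo, j_hi + 1):
                    center = np.array([(i + 0.5) * step, (j + 0.5) * step, z])
                    if point_triangle_distance(center, *tri) <= threshold:
                        vol[i, j, k] = True               # mark the voxel as opaque
    return vol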
While a pixel is\nrasterized by a quad, Hausdorff distance is calculated according to\nthe interpolated voxels, i.e., centers of voxels, and the triangle data.\nOnly if the distance is less than the given threshold, e.g., distance\nfrom a uniform voxel center to its corner, the pixel is marked as\nopacity. Using local distance field could guarantee a small region\nof Hausdorff distance but greatly improve the performance of surface\nvoxelization.\nFor more efficient voxelization process on GPU, triangles can be\nculled early by space partitioning. We construct an active triangle\nlist for each slice. Currently we define slabs along Z-axis. According\nto partitioning planes, triangles are filtered and rearranged\nslab by slab. Many triangles can be pruned while rendering a slice.\nIt is significantly helpful while voxelizing very complex models.\nBecause distance field is insensitive to projecting directions of triangles\n, selection of partitioning plane has no influence on effective-ness\nof voxelization.\nExperimental Results\nWe implement our fragment program using HLSL on a Pentium 4\n3.0 MHz PC with 1G RAM and a nVidia Geforce FX5800 graphics\ncard running on Windows XP with DirectX 9.0c. We use Vertex\nShader 1.1 and Pixel Shader 2.0 to implement fragment program\nin scattering pairs of voxel positions and triangle data and in distance\ncalculation and visualization. Table 1 shows the performance\nFigure 4: Rendering from the results of voxelization (512\n3\n): dragon\nof 870K faces in 512\n3\nvoxels.\nof surface voxelization on different models and in different voxel\nresolutions. Figure 3 and Figure 4 demonstrate quality of voxelization\nresults. In the experiment, opacity threshold is set to the distance\nfrom voxel center to its corner. That means if the shortest\ndistance between a voxel and a triangle is less than the threshold,\nthe voxel will be marked as opacity. Note that voxels are normal-ized\nto cubes in rendering so the scale of output may differ from the\noriginal polygonal model.\nIn Table 1, we list average time on surface voxelization per slice, per\nvoxel, and per triangle. In the same resolution, voxelization time is\nproportional to the complexity of polygonal model. For each voxel,\nprocess time is always less than 0.1 ms. Even when voxel resolution\nincrease, GPU still could handle voxelization for complex object in\nstable throughput which may be increased much more for CPU.\nDue to speed up by using local distance field and culling for unre-lated\ngeometry, voxelization by distance field can be displayed slice\nby slice interactively under 128\n3\nvoxel resolution. When voxel resolution\nis low, voxelization time highly depends on complexity of\nmodel. However, when the voxel resolution increases higher, even\nfor a low complexity model, it still need more time to voxelize a\nmodel than in lower voxel resolution. On average, resolution of\n256\n3\ncould provide a benefit of reliable voxelization both in quality\nand time cost.\nRendering cost is stable for a triangle even when resolution of volume\nincreases while it is linear on a CPU. Voxelization with proposed\nmethod is still slower than methods using traditional projection\nand rasterization by graphic hardware. However, our method\nis stable, correct and flexible because the opacity of each voxel is\ndetermined by thresholding the distance field of objects.\nConclusion\nIn this paper, we propose a GPU-based 3D slicer approach for voxelizing\npolygonal models. 
We calculate minimum distance between\npairs of sampled voxels and triangles of arbitrary models with guarantee\nof Hausdorff distance. With programmable hardware ver-tex/pixel\nprocessors, efficient surface voxelization, solid voxelization\n, and visualization of the distance field all are feasible on the\nproposed 3D slicer.\nHowever, in current implementation, performance of pixel shader\nis the bottleneck in overall processing speed. Area of rasterization\nalso has a significant influence on the loading of pixel shader.\n287\nModel\nFaces\nRes.\nTime(s)\nTime\nSlices\nTime\nVoxels\nTime\nTri\n.\nRes.\nTime(s)\nTime\nSlices\nTime\nVoxels\nTime\nTri\n.\nBeethoven\n5027\n128\n13.94\n0.11\n6.65\n2.77\n256\n85.86\n0.34\n5.12\n17.08\nTeapot\n6320\n128\n14.31\n0.11\n6.82\n2.26\n256\n87.62\n0.34\n5.22\n13.86\nCup\n7494\n128\n15.24\n0.12\n7.27\n2.03\n256\n93.65\n0.37\n5.58\n12.50\nBunny\n10000\n128\n16.24\n0.13\n7.75\n1.62\n256\n93.98\n0.37\n5.60\n9.40\nBunny\n69451\n128\n43.21\n0.34\n20.60\n0.62\n256\n231.85\n0.91\n13.82\n3.34\nDragon\n871414\n128\n84.21\n0.66\n40.15\n0.10\n256\n325.87\n1.27\n19.42\n0.37\nBuddha\n1087716\n128\n170.44\n1.33\n81.27\n0.16\n256\n347.65\n1.36\n20.72\n0.32\nDragon\n871414\n512\n1748.11\n3.41\n13.02\n2.01\n* The time unit in Time/Voxels is s\nBuddha\n1087716\n512\n1825.47\n3.57\n13.60\n1.68\n* The time unit in Time/Tri. is ms\nTable 1: Surface voxelization on different models and in different voxel resolutions.\n(a)\n(b)\n(c)\n(d)\n(e)\n(f)\n(g)\n(h)\nFigure 3: Rendering from the results of voxelization (256\n3\n): (a) Beethoven in 256\n3\nvoxels, (b) Teapot in 256\n3\nvoxels, (c) Cup in 256\n3\nvoxels,\n(d) Bunny of 10000 faces in 256\n3\nvoxels, (e) dragon of 10000 faces in 256\n3\nvoxels, (f) Bunny of 69451 faces in 256\n3\nvoxels, (g) dragon of\n870K faces in 256\n3\nvoxels, and (h) Buddha of 1M faces in 256\n3\nvoxels.\nTherefore, in the near feature, searching a better computational\nmethodology for GPU is one direction to improve performance of\ndistance field computation. In addition, a sophisticated culling for\nerror-free distance computation will be a technique in demand. To\nimprove the quality of voxelization, adaptive dense voxelization\nand a mechanism for quality measurement and guide on GPU is\nanother interesting topic.\nReferences\nC\nHEN\n, H.,\nAND\nF\nANG\n, S. 1999. Fast voxelization of 3D synthetic\nobjects. ACM Journal of Graphics Tools 3, 4, 3345.\nC\nHEN\n, H.,\nAND\nF\nANG\n, S. 2000. Hardware accelerated voxelization\n. Computers and Graphics 24, 3, 433442.\nD\nACHILLE\nIX, F.,\nAND\nK\nAUFMAN\n, A. E. 2000. Incremental\ntriangle voxelization. In Graphics Interface, 205212.\nD\nONG\n, Z., C\nHEN\n, W., B\nAO\n, H., Z\nHANG\n, H.,\nAND\nP\nENG\n, Q.\n2004. Real-time voxelization for complex polygonal models. In\nProceedings of Pacific Graphics '04\n, 4350.\nH\nUANG\n, J., Y\nAGEL\n, R., F\nILIPPOV\n, V.,\nAND\nK\nURZION\n, Y. 1998.\nAn accurate method for voxelizing polygon meshes. In Proceedings\nof IEEE symposium on Volume visualization\n, 119126.\nJ\nU\n, T. 2004. Robust repair of polygonal models. ACM Transactions\non Graphics 23\n, 3, 888895.\nK\nARABASSI\n, G. P. E.-A.,\nAND\nT\nHEOHARIS\n, T. 1999. A fast\ndepth-buffer-based voxelization algorithm.\nACM Journal of\nGraphics Tools 4\n, 4, 510.\nS\nCHNEIDER\n, P.,\nAND\nE\nBERLY\n, D. H. 2003. Geometry Tools for\nComputer Graphics\n. Morgan Kaufmann.\nS\nCHROEDER\n, W. J., L\nORENSEN\n, W. E.,\nAND\nL\nINTHICUM\n, S.\n1994. Implicit modeling of swept surfaces and volumes. 
In Proceedings\nof IEEE Visualization\n, 4045.\nW\nANG\n, S. W.,\nAND\nK\nAUFMAN\n, A. E. 1993. Volume sampled\nvoxelization of geometric primitives. In Proceedings of IEEE\nVisualization\n, 7884.\nW\nIDJAYA\n, H., M\nUELLER\n, T.,\nAND\nE\nNTEZARI\n., A. 2003. Voxelization\nin common sampling lattices. In Proceedings of Pacific\nGraphics '03\n, 497501.\n288", "keywords": "Graphics hardware;Hausdorff distance;Voxelization;Distance field;voxelization;local distance field;Object representation;rasterization;Polygonal object;GPU-based 3D slicer approach;GPU;3D slicer;slice-based approach;Rendering;rendering;adaptive dense voxelization;Volumetric representation;pixel shader;opacity;Surface voxelization;polygonal model;Surface representation;Rendering cost;GPU computation;Hausforff distance;object representation;polygonal objects;volumetric representation;triangles;Rendering pipeline;distance transform;volume construction;Modeling;Computational Geometry;geometric representation;hausdorff distance;distance field;Computer Graphics;Polygonal model;3D modelling;infinitude"} {"name": "80", "title": "\u03c1-Queries: Enabling Querying for Semantic Associations on the Semantic Web", "abstract": "This paper presents the notion of Semantic Associations as complex relationships between resource entities. These relationships capture both a connectivity of entities as well as similarity of entities based on a specific notion of similarity called -isomorphism. It formalizes these notions for the RDF data model, by introducing a notion of a Property Sequence as a type. In the context of a graph model such as that for RDF, Semantic Associations amount to specific certain graph signatures. Specifically, they refer to sequences (i.e. directed paths) here called Property Sequences, between entities, networks of Property Sequences (i.e. undirected paths), or subgraphs of \u03c1-isomorphic Property Sequences. The ability to query about the existence of such relationships is fundamental to tasks in analytical domains such as national security and business intelligence, where tasks often focus on finding complex yet meaningful and obscured relationships between entities. However, support for such queries is lacking in contemporary query systems, including those for RDF. This paper discusses how querying for Semantic Associations might be enabled on the Semantic Web, through the use of an operator \u03c1. It also discusses two approaches for processing \u03c1-queries on available persistent RDF stores and memory resident RDF data graphs, thereby building on current RDF query languages.", "fulltext": "INTRODUCTION\nThe Semantic Web [13] proposes to explicate the meaning of\nWeb resources by annotating them with metadata that have been\ndescribed in an ontology. This will enable machines to\n\"understand\" the meaning of resources on the Web, thereby\nunleashing the potential for software agents to perform tasks on\nbehalf of humans. Consequently, significant effort in the\nSemantic Web research community is devoted to the development\nof machine processible ontology representation formalisms.\nSome success has been realized in this area in the form of W3C\nstandards such as the eXtensible Markup Language (XML) [16]\nwhich is a standard for data representation and exchange on the\nWeb, and the Resource Description Framework (RDF) [42], along\nwith its companion specification, RDF Schema (RDFS) [17],\nwhich together provide a uniform format for the description and\nexchange of the semantics of web content. 
Other noteworthy\nefforts include OWL [25], Topic Maps [53], DAML+OIL [31].\nThere are also related efforts in both the academic and\ncommercial communities, which are making available tools for\nsemi-automatic [30] and automatic [49][29] semantic (ontology-driven\nand/or domain-specific) metadata extraction and\nannotation.\nWith the progress towards realizing the Semantic Web, the\ndevelopment of semantic query capabilities has become a\npertinent research problem. Semantic querying techniques will\nexploit the semantics of web content to provide superior results\nthan present-day techniques which rely mostly on lexical (e.g.\nsearch engines) and structural properties (e.g. XQuery [24]) of a\ndocument. There are now a number of proposals for querying\nRDF data including RQL [40], SquishQL [45], TRIPLE [49],\nRDQL [48]. These languages offer most of the essential features\nfor semantic querying such as the ability to query using\nontological concepts, inferencing as part of query answering, and\nsome allow the ability to specify incomplete queries through the\nuse of path expressions. One key advantage of this last feature is\nthat users do not need to have in-depth knowledge of schema and\nare not required to specify the exact paths that qualify the desired\nresource entities. However, even with such expressive\ncapabilities, many of these languages do not adequately support a\nquery paradigm that enables the discovery of complex\nrelationships between resources. The pervasive querying\nparadigm offered by these languages is one in which queries are\nof the form: \"Get all entities that are related to resource A via a\nrelationship R\" where R is typically specified as possibly a join\ncondition or path expression, etc. In this approach, a query is a\nCopyright is held by the author/owner(s).\nWWW2003, May 20-24, 2003, Budapest, Hungary.\nACM 1-58113-680-3/03/0005.\n690\nspecification of which entities (i.e. resources) should be returned\nin the result. Sometimes the specification describes a relationship\nthat the qualifying entities should have with other entities, e.g. a\njoin expression or a path expression indicating a structural\nrelationship. However, the requirement that such a relationship be\nspecified as part of the query is prohibitive in domains with\nanalytical or investigative tasks such as national/homeland\nsecurity [11] and business intelligence, where the focus is on\ntrying to uncover obscured relationships or associations between\nentities and very limited information about the existence and\nnature of any such relationship is known to the user. In fact, in\nthis scenario the relationship between entities is the subject of the\nuser's query and should being returned as the result of the query\nas opposed to be specified as part of the query. That is, queries\nwould be of the form \"How is Resource A related to Resource\nB?\". For example, a security agent may want to find any\nrelationship between a terrorist act and a terrorist organization or\na country known to support such activities.\nOne major challenge in dealing with queries of this nature is that\nit is often not clear exactly what notion of a relationship is\nrequired in the query. For example, in the context of assessing\nflight security, the fact that two passengers on the same flight are\nnationals of a country with known terrorist groups and that they\nhave both recently acquired some flight training, may indicate an\nassociation due to a similarity. 
On the other hand, the fact that a\npassenger placed a phone call to someone in another country that\nis known to have links to terrorist organizations and activities\nmay indicate another type of association characterized by\nconnectivity. Therefore, various notions of \"relatedness\" should\nbe supported.\nThis paper intends to make two main contributions. First, we\nformalize a set of complex relationships for the RDF data model,\nwhich we call Semantic Associations. Second, we outline\ntwo possible approaches for processing queries about Semantic\nAssociations through the use of an operator\n(-Queries). One of\nthe two approaches is based on processing\n-queries on persistent\nRDF data systems such as RDFSuite [8], while the other is based\non processing these queries on a main memory based\nrepresentation of an RDF model such as JENA [56].\nThe rest of the paper is organized as follows: Section 2 discusses\nsome background and motivates our work with the help of an\nexample. Section 3 presents the formal framework for Semantic\nAssociations; section 4 discusses implementation strategies for\nthe\noperator, section 5 reviews some related work, and section 6\nconcludes the paper.\nBACKGROUND & MOTIVATION\nAlthough there are various knowledge modeling languages that\nmay be used on the Semantic Web such as Topic Maps [55],\nUML [47], DAML+OIL [31], OWL [25], etc., in this paper we\nhave chosen to formalize Semantic Associations for the RDF data\nmodel. It should be clear that we are not suggesting that the\nnotion of Semantic Associations only applies to RDF. On the\ncontrary, the notion is very general and is applicable to any data\nmodel that can be represented as a graph. The choice of RDF for\nformalization does not confer serious problems however. In the\nfirst place, some of these other models e.g. DAML+OIL build\nupon RDF. Secondly, there is work on mappings from other\nformalisms to RDF [20][41].\nNext, we will briefly summarize the RDF data model and then\nmotivate our work with an example.\n2.1 RDF\nRDF [42] is a standard for describing and exchanging semantics\nof web resources. It provides a simple data model for describing\nrelationships between resources in terms of named properties and\ntheir values. The rationale for the model is that by describing\nwhat relationships an entity has with other entities, we somehow\ncapture the meaning of the entity. Relationships in RDF, or\nProperties as they are called, are binary relationships\nbetween two resources, or between a resource and a literal value.\nAn RDF Statement, which is a triple of the form (Subject,\nProperty, Object), asserts that a resource, the Subject, has\na Property whose value is the Object (which can be either\nanother resource or a literal). This model can be represented as a\nlabeled directed graph, where nodes represent the resources\n(ovals) or literals (rectangles) and arcs representing properties\nwhose source is the subject and target is the object, and are\nlabeled with the name of the property. For example, in the bottom\npart of Figure 1, we can see a node &r1 connected by a paints\narc to the node &r2, which reflects the fact that &r1 (a painter\nwith first name Pablo, and last name Picasso) painted another\nresource &r2 (a painting). The meaning of the nodes and arcs is\nderived from the connection of these nodes and arcs to a\nvocabulary (the top part of the figure). The vocabulary contains\ndescribes types of entities i.e. classes (e.g. Museum) and types of\nproperties (e.g. 
creates) for the domain. The vocabulary\ndescription and is done using the companion specification to RDF\ncalled the RDF Schema specification [17]. For example in Figure\n1, classes like Painter, Museum and properties such as\nPaints, are defined. Resources are connected to classes using\nan rdf:typeof property indicating an instantiation relationship.\n2.2 MOTIVATING EXAMPLE\nAlthough the focus of our current evaluations involves scenarios\nin the National Security domain, for brevity and pedagogical\nreasons, for this paper we have chosen to use a modified version\nof the example from [40]. We will now illustrate Semantic\nAssociations by way of a simple example shown in Figure 1. The\nfigure shows an RDF model base containing information to be\nused in the development of a cultural portal, given from two\nperspectives, reflected in two different schemas (the top part of\nthe figure). The top left section of the picture is a schema that\nreflects a museum specialist's perspective of the domains using\nconcepts like Museum, Artist, Artifact, etc. The top right\nsection is a schema that reflects a Portal administrator's\nperspective of the domains using administrative metadata\nconcepts like file-size, mime-type, etc. to describe\nresources. The lower part of the figure is the model base (or\ndescription base in the linguo of [40]), that has\ndescriptions about some Web resources, e.g., museum websites\n(&r3, &r8), images of artifacts (&r2, &r5, &r7) and for\nresources that are not directly present on the Web, e.g., people,\nnodes representing electronic surrogates are created (&r1, &r4,\n&r6 for the artists Pablo Picasso, Rembrandt, and Rodin August\nrespectively).\n691\n&r3\n&r5\n\"Reina Sofia\nMuseun\"\n&r7\n\"oil on\ncanvas\"\n&r2\n2000-02-01\n\"oil on\ncanvas\"\n&r8\n\"Rodin\nMuseum\"\n\"image/jpeg\"\n2000-6-09\nExt. Resource\nString\nDate\nInteger\nString\ntitle\nfile_siz\ne\nlast_modified\nm\ni\nm\ne\nt\ny\np\ne\nArtist\nSculptor\nArtifact\nSculpture\nMuseum\nString\nString\nString\nfname\nlname\ncreates\nexhibited\nsculpts\nString\nPainting\nPainter\npaints\ntechnique\nmaterial\ntypeOf(instance)\nsubClassOf(isA)\nsubPropertyOf\nmime-type\nexhibited\ntechnique\nexhibited\ntitle\nlast_modified\nlast_modified\ntitle\ntechnique\nexhibited\n\"Rodin\"\n\"August\"\n&r6\n&r1\nfname\nlname\nfname\nlname\npaints\npaints\ncreates\n&r4\n\"Rembrandt\"\n\"Pablo\"\n\"Picasso\"\nfname\n\nFigure 1: Cultural Portal Information in RDF\nTypically, a query language allows you to find all entities that are\nrelated by a specific relationship. For example, we may ask a\nquery to retrieve all resources related to resource &r1 via a\npaints relationship, or via a paints.exhibited\nrelationship, and get &r2 as a result for the first query and &r3 as\nthe answer for the second query. However, we are unable to ask\nqueries such as \"How are resources &r1 and &r3 related? Such a\nquery should return for example that \"&r1 paints &r2 which is\nexhibited in &r3\", indicating a path connecting the two\nentities. With a query such as this one, the user is trying to\ndetermine if there is a relationship between entities, and what\nthe nature of the relationship(s) is(are). It should be possible to\nask such a query without any type of specification as to the nature\nof the relationship, such as using a path expression to give\ninformation about the structure of the relationship. 
For example,\nthe following example RQL query\nselect * from\n{;Artist}@P{X}.{;Sculpture}@Q{Y}.@R{Z}\nfinds all data paths that traverse the class hierarchies Artist and\nSculpture, containing three schema properties, one for each\nproperty variable (@variable). However, we notice that the query\nrequires that a property variable be added for every edge in the\nrequired path. That is, the user is required to have some idea of at\nleast the structure e.g. length, of the relationship. One approach\nthat some of these systems offer to alleviate this problem is that\nthey provide mechanisms for browsing or querying schemas to\nallow users to get the information they need. While this may be a\nreasonable requirement when querying specific domains with a\nfew schemas involved, on the Semantic Web, many schemas may\nbe involved in a query, and requiring a user to browse them all\nwould be a daunting task for the user. In fact, in some cases, such\ninformation may not be available to all users (e.g., classified\ninformation) even though the data may be used indirectly to\nanswer queries. Furthermore, browsing schemas do not always\ngive the complete picture, especially in the case of RDFS\nschemas, because, entities may belong to different schemas,\ncreating links between entities that are not obvious from just\nlooking at the schemas. For example in Figure 1, the relationship\npaints.exhibited.title connecting &r1 to \"Reina Soifa\nMuseum\", is not apparent by just looking at either schema.\nSo far, we have talked about relationships in terms of a directed\npath connecting two entities. However, there are some other\ninteresting types of relationships. Let us take for example,\nresources &r4 and &r6. Both resources could be said to be related\nbecause they have both created artifacts (&r5, and &r7) that are\nexhibited at the same museum (&r8). In this case, having some\nrelationship to the same museum associates both resources. This\nkind of connectivity is an undirected path between the entities.\nAnother closely related kind of association is class membership.\nFor example, &r1 and &r6 are both Artists, even though of a\ndifferent kind, and therefore are somewhat associated. Also, &r1\nand &r6 could be said to be associated because they both have\ncreations (&r2, and &r7) that are exhibited by a Museum (&r3\nand &r8 respectively). In this case, the association is that of a\nsimilarity. So, in the first three associations the relationships\ncapture some kind of connectivity between entities, while the last\nassociation captures a similarity between entities. Note that the\nnotion of similarity used here is not just a structural similarity, but\na semantic similarity of paths (nodes and edges) that the entities\nare involved in. Nodes are considered similar, if they have a\ncommon ancestor class. For example in the relationship involving\n&r1 and &r6, although one case involves a painting and the other\na sculpture, we consider them similar because sculptures and\npaintings are kinds of Artifacts and sculpting and painting are\nboth kinds of creative activities (the notion of similarity is\nextended to properties as well).\nThe Semantic Associations shown in this example are fairly\nsimple involving only short paths and are useful only for the\npurpose of illustration. However, in environments that support\ninformation analytics and knowledge discovery involve longer\npaths, especially undirected paths, which are not easily detectable\nby users in fast-paced environments. 
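To make the motivating query "How are resources &r1 and &r3 related?" concrete, the sketch below encodes a handful of the statements from Figure 1 as (subject, property, object) triples and recovers the connecting property sequence with a breadth-first search. This is plain illustrative Python, not part of the paper; the triple list is a partial, approximate reading of Figure 1, and searching the undirected view of the same graph would similarly connect &r4 and &r6 through the museum &r8.

```python
from collections import deque

# A few statements from the description base of Figure 1 (partial reading):
# artists, their works, and where the works are exhibited.
triples = [
    ("&r1", "paints",    "&r2"),
    ("&r2", "exhibited", "&r3"),
    ("&r3", "title",     "Reina Sofia Museum"),
    ("&r4", "paints",    "&r5"),
    ("&r5", "exhibited", "&r8"),
    ("&r6", "creates",   "&r7"),
    ("&r7", "exhibited", "&r8"),
]

def property_path(statements, source, target):
    """Return one directed property sequence linking source to target."""
    adj = {}
    for s, p, o in statements:
        adj.setdefault(s, []).append((p, o))
    queue, seen = deque([(source, [])]), {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for prop, nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [prop]))
    return None

print(property_path(triples, "&r1", "&r3"))   # ['paints', 'exhibited']
```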
For example at airport\nsecurity portals, agents may want to quickly determine if a\npassenger has any kind of link to terrorist organizations or\nactivities.\nFRAMEWORK\nThe framework described in this section provides a formal basis\nfor Semantic Associations. It builds on the formalization for the\nRDF data model given in [40], by including a notion of a\nProperty Sequence. A Property Sequence allows us to\ncapture paths in the RDF model and forms the basis for\nformalizing Semantic Associations as binary relations on Property\nSequences. Secondly, we some complex queries called\n-queries\nfor querying about Semantic Associations.\n3.1 Formal Data Model\nIn section 2.1, we describe the RDF data model informally as a\nlabeled directed graph. To recap, the RDF Schema specification\n[17] provides a special vocabulary for describing classes and\nproperties in a domain. A Property is defined by specifying its\ndomain (the set of classes that it applies to), and its range\n(either a Literal type e.g. String, Integer, etc, or the classes whose\nentities it may take as values). Classes are defined in terms of\ntheir relationship to other classes using the rdfs:sublassOf\nproperty to place them at the appropriate location in a class\nhierarchy, as well as other user specified properties that may\ninclude them in their range or domain thereby linking them to\nother classes. Properties may also be organized in a hierarchy\nusing the rdfs:subPropertyOf property.\n692\nThe formalization in [40] defines a graph data model along with a\ntype system that connects the RDF Model & Syntax specification\nwith the RDFS schema specification using an interpretation\nmechanism. It forms the basis for a typed RDF query language\ncalled RQL [40]. RQL is fairly expressive and supports a broad\nrange of queries. Its type system T is the set of all possible types\nthat can be constructed from the following types:\n=\nC\n|\n\nP\n|\n\nM\n|\n\nU\n|\n\nL\n| {\n} | [1:\n1\n, 2:\n\n2\n, ..., n:\n\nn\n] | (1:\n\n1\n+ 2:\n\n2\n+\n... + n:\n\nn\n)\nwhere\n\nC\nindicates a class type,\n\nP\na property type,\n\nM\na\nmetaclass type,\n\nL\na literal type in the set L of literal type names\n(string, integer, etc.),\n\nand\n\nU\nis the type for resource URIs. For the\nRDF multi-valued types we have\n{.}\nas the Bag type,\n[.]\nis the\nSequence type, and\n(.)\nis the Alternative type. The set of values\nthat can be constructed using the resource URIs, literals and class\nproperty names is called V. Then, the interpretation of types in T\nis given naturally by the interpretation function [[ ]], which is a\nmapping from\nto the set of values in V. For example, a class C\nis interpreted as unary relation of type {\n\nU\n}, which is the set of\nresources (i.e. of type\n\nU\n) that have an rdf:typeOf property\nwith range C, and includes the interpretations of the subclasses of\nC. For a property p, [[p]] is given by\n{[v\n1\n, v\n2\n] | v\n1\n[[ p.domain ]], v\n2\n[[ p.range ]] }\n{ [[ p' ]] | p' is a subPropertyOf p}\nIt defines an RDF schema as a 5-tuple RS = (V\nS\n, E\nS\n,\n, , H)\nwhere: V\nS\nis the set of nodes and E\nS\nis the set of edges.\nis an\nincidence function\n: E\nS\n\nV\nS\nV\nS\n, and\nis a labeling function\nthat maps class and property names to one of the types in T, i.e.\n:\nV\nS\nE\nS\n\nT. H = (N, <), where N = C P, C and P are the set\nof class and property names in RS, respectively. 
H is a well-formed\nhierarchy, i.e., < is a smallest partial ordering such that: if\np\n1\n, p\n2\n\nP and p\n1\n< p\n2\n, then p\n1\n.domain\np\n2\n.domain and p\n1\n.range\n\np\n2\n.range. It also formalizes an instance of an RDFS schema called\na description base which contains all the asserted\ninstances of classes and properties of an RDF schema.\nWe generalize these definitions to sets of RDF schemas and\ndescription bases as basic notion of context for a\n-query.\n3.1.1 Definition 1\nThe RDFS schema set of RDFS Schemas RSS = {RS\ni\n: 1\ni n}.\nLet\nC = C\nS1\n\nC\nS2\n\n... C\nS2\nwhere C\nSi\nis the set of class names\nin schema RS\ni\nand\nP\n= P\nS1\n\nP\nS2\n\n... P\nSn\n, where P\nSi\nis the set\nof property names in RS\ni\nthen N = C\nP\n.\n[40] defines a description base RD which is an instance of an\nRDFS schema RS containing all the asserted instances of the\nclasses and properties in RS. We generalize that definition here to\nthe union of instances of the set of schemas in an RDFS schema\nset.\n3.1.2 Definition 2\nAn instance of an RDF schema set RSS = {RS\n1\n, RS\n2\n, .. RS\nn\n}, is a\ndescription base RDS defined as a 5-tuple = (V\nDS\n, E\nDS\n,\n,\n\n,\n),\nwhere V\nDS\n= V\nD1\n\nV\nD2\n\n... V\nDn\nand V\nDi\nis the set of nodes in\nthe description base of the schema RS\ni\n, and E\nDS\nis defined\nsimilarly.\nis the incidence function : E\nDS\n\nV\nDS\nV\nDS\n,\n\nis a\nvalue function that maps the nodes to the values in V i.e.\n\n: V\nDS\n\nV, is a labeling function that maps each node either to one of\nthe container type names (Seq, Bag, Alt) or to a set of class\nnames from the schema set RSS whose interpretations contain the\nvalue of the node, and each edge e = [v\n1\n, v\n2\n] to a property name p\nin RSS, where the interpretation of p contains the pair\n[\n\n(v\n1\n),\n\n(v\n2\n)], i.e., the values of v\n1\nand v\n2\n. Formally,\n: V\nD\n\nE\nD\n\n2\nN\n\n{Bag, Seq, Alt} in the following manner:\ni.\nFor a node n in RDS,\n(n) = {c | c C and\n\n(n) [[c]]}\nii.\nFor an edge e from node n\n1\nto n\n2\n,\n(e) = p\nP and\nthe\nvalues of n\n1\nto n\n2\nbelong in the interpretation of p: [\n\n(n\n1\n),\n\n(n\n2\n)]\n[[p]].\nIn order capture paths in the RDF graph model, we define a\nnotion of a Property Sequence, represented in the graph as a\nsequence of edges (i.e. properties). There is a choice to be made\nin the method for realizing such a notion in a query language such\nas RQL. One option is to add paths or property sequences as types\nin a query language making them first class citizens. The second\noption is to realize them as complex operations such as Cartesian\nproduct, on property types. We choose the later approach because\nattempting to make paths as first class citizens brings up\nadditional issues such as defining path subsumption and so on.\nWe will now define the notion of a Property Sequence.\n3.1.3 Definition 3 (Property Sequence)\nA Property Sequence PS is a finite sequence of properties\n[P\n1\n, P\n2\n, P\n3\n, ... P\nn\n] where P\ni\nis a property defined in an RDF\nSchema RS\nj\nof a schema set RSS. The interpretation of PS is\ngiven by:\n[[PS]]\n\n\ni=1\n\nn\n[[P\ni\n]] where for ps\n[[PS]], called an instance\nof PS, ps[i]\n[[P\ni\n]] for 1\ni n and ps[i][1] = ps[i+1][0]).\nps[i][1] refers to the second element of the i\nth\nordered pair and\nps[i+1][0] refers to the first element of the i+1\nth\nordered pair. 
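Definition 3 reads naturally as a chained join of the individual property interpretations. The following sketch is hypothetical Python, with each [[Pi]] supplied as a set of (subject, object) pairs; it enumerates the instances ps of a Property Sequence by requiring ps[i][1] = ps[i+1][0]. The sample data mirrors the creates.exhibited.title sequence over Figure 1, with [[creates]] containing the pairs of its subproperty paints, as the interpretation of a property prescribes.

```python
def sequence_instances(interpretations):
    """Enumerate the instances ps of a Property Sequence PS = [P1, ..., Pn].

    `interpretations` holds [[P1]], ..., [[Pn]], each a set of
    (subject, object) pairs; an instance is a chain of one pair per
    property with ps[i][1] == ps[i+1][0].
    """
    instances = [[pair] for pair in interpretations[0]]
    for prop_pairs in interpretations[1:]:
        instances = [ps + [pair]
                     for ps in instances
                     for pair in prop_pairs
                     if ps[-1][1] == pair[0]]
    return instances

# creates.exhibited.title over the data of Figure 1.  [[creates]] also
# contains the pairs of its subproperty paints.
creates   = {("&r1", "&r2"), ("&r6", "&r7")}
exhibited = {("&r2", "&r3"), ("&r7", "&r8")}
title     = {("&r3", "Reina Sofia Museum"), ("&r8", "Rodin Museum")}

for ps in sequence_instances([creates, exhibited, title]):
    print([ps[0][0]] + [pair[1] for pair in ps])   # chain of nodes visited
```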
We\ndefine a function NodesOfPS()which returns the set of nodes\nof a Property Sequence PS, i.e.\nPS.NodesOfPS()= {C\n1\n, C\n2\n, C\n3\n, ... C\nk\n} where C\ni\nis a class in\neither the domain or range of some Property P\nj\nin PS, 1\nj n.\nFor example in Figure 1, for PS =\nc\nreates.exhibited.title,\nPS\n.NodesOfPS () = {Artist,\nArtifact, Museum, Ext. Resource, String}.\nLet PS = [P\n1\n, P\n2\n, P\n3\n, ... P\nn\n], a description base RDS is said to\nsatisfy\nor be a\nmodel\nof PS (RDS |= PS) if there exists a\nsequence of edges e\n1\n, e\n2\n, e\n3\n, ... e\nn\nin the description base RDS\nsuch that for all i,\n(e\ni\n) = P\ni\n,\n(e\ni\n) = (v\ni\n, v\ni+1\n) and\n\ni=1\n\nn\n(v\ni\n, v\ni+1\n) =\nps for some ps\n[[PS]].\nWe define a function\nPSNodesSequence\n() on Property\nSequence instances that returns its sequence of nodes, i.e.\nps.PSNodesSequence()= [v\n1\n, v\n2\n, v\n3\n, ... v\nn+1\n]. The node v\n1\n\nis called the origin of the sequence and v\nn+1\nis called the\nterminus.\nNext, we define a set of binary relations on Property Sequences.\n693\n3.1.4 Definition 4 (\n\n\nJoined Property Sequences)\nPS\n1\n\n\nPS\n2\nis true if:\nNodesOfPS(PS\n1\n)\nNodesOfPS(PS\n2\n)\n0.\nThe Property Sequences PS\n1\nand PS\n2\nare called joined, and for\nC\n(NodesOfPS(PS\n1\n)\nNodesOfPS(PS\n2\n)), C is called a\njoin node. For example, in Figure 2, the sequences\ncreates.exhibited. and paints.exhibited are joined\nbecause they have a join node Museum.\n&r3\n&r5\n&r7\n\"oil on\ncanvas\"\n&r2\n\"oil on\ncanvas\"\n&r8\nArtist\nSculptor\nArtifact\nSculpture\nMuseum\nString\nString\nfname\nlname\ncreates\nexhibited\nsculpts\nString\nString\nPainting\nPainter\npaints\ntechnique\nmaterial\ntypeOf(instance)\nsubClassOf(isA)\nsubPropertyOf\nexhibited\ntechnique\nexhibited\ntechnique\nexhibited\n\"Rodin\"\n\"August\"\n&r6\n&r1\nfname\nlname\nfname\nlname\npaints\npaints\ncreates\n&r4\n\"Rembrandt\"\n\"Pablo\"\n\"Picasso\"\nfname\n\nFigure 2 : Isomorphic Property Sequences\n3.1.5 Definition 5 (\n\n\n\n\n-Isomorphic Property\nSequences)\nTwo property sequences PS\n1\n= P\n1\n, P\n2\n, P\n3\n, ... P\nm\n, and PS\n2\n= Q\n1\n, Q\n2\n,\nQ\n3\n, ... Q\nm\n, are called\n\n-isomorphic\n(PS\n1\n\n\nPS\n2\n), if\nfor all i, 1\ni m: P\ni\n= Q\ni\nor P\ni\n\nQ\ni\nor Q\ni\n\nP\ni\n(\nmeans\nsubPropertyOf )\nFor example in Figure 2, the sequences paints.exhibited\nand creates.exhibited are isomorphic\nbecause\npaints\nis considered to be similar to creates, since\npaints\n\nis a subproperty of\ncreates.\nNote that the example that we use\nhere is somewhat misleading because the example shown for\nJoined Property Sequences also happens to be\n-Isomorphic.\nHowever, the two notions are quite different because Joined\nProperty Sequences are not required to be similar.\n3.1.6 Definition 6 (Length)\nThe\nlength\nof a Property Sequence is equal to the number of\nproperties in the Property Sequence. In the case of a Joined\nProperty Sequence its length is the sum of all the properties in its\nconstituent Property Sequences, i.e. the length of the\nundirected path from origin of one Property Sequence to the\norigin of the other Property Sequence. For example, in Figure 2,\nthe Joined Property Sequences [creates.exhibited,\npaints.exhibited] has a length of 4.\n3.2 Semantic Associations\nWe can now define some binary relations on the domain of\nentities i.e. 
resources, based on the different types of Property\nSequences.\n3.2.1 Definition 7 (\n\n-pathAssociated)\n-pathAssociated (x, y) is true if there exists a Property\nSequence with ps\n[[PS]] and, either x and y are the origin and\nterminus of ps respectively, or vice versa, i.e. y is origin and x is\nterminus. Then ps is said to satisfy\n-pathAssociated (x,\ny) written as ps |= -pathAssociated (x, y).\n3.2.2 Definition 8 (\n\n-joinAssociated)\nLet PS\n1\nand PS\n2\nbe two Property Sequences such that PS\n1\n\nPS\n2\n\nwith a join node C, and there exists ps\n1\nand ps\n2\nsuch that ps\n1\n\n[[\nPS\n1\n]] and ps\n2\n\n[[ PS\n2\n]] and, n\nps1.PSNodesSequence()\nps2.PSNodesSequence(), then -joinAssociated (x,\ny) is true if either of the conditions are satisfied.\n1) x is the origin of ps\n1\nand y is the origin of ps\n2\nor\n2) x is the terminus of ps\n1\nand y is the terminus of ps\n2\n.\nThis means that either ps\n1\n.PSNodesSequence = [ x, b, c ... n,\n.,., r ] and ps\n2\n.PSNodesSequence = [ y, , , . . n, , ], or\nps\n1\n.PSNodesSequence = [ a, b, c ... n, .,., r ,x] ] and\nps\n2\n.PSNodesSequence = [ , , , . . n, , y] and n [[ C ]].\nWe say that (ps\n1\n, ps\n2\n) |= -joinAssociated (x, y).\n3.2.3 Definition 9 (\n\n-cpAssociated)\nThis is a special case of Definition 5 that captures an inclusion or\nsibling relationship (i.e. common parent) between resources.\n-cpAssociated (x, y) is true if there exists two Property\nSequences PS\n1\nand PS\n2\nsuch that PS\n1\n\nPS\n2\nwhich satisfy\njoinAssociated\n(x, y) and, both PS\n1\nand PS\n2\nare of the\nform: rdf.typeOf.(rdfs:subClassOf)*. This relation\nis used to capture the notion that entities are related if they either\nbelong to the same class or to sibling classes. For example, &r1\nand &r6 are related because they are both Artists. We say that\n(ps\n1\n, ps\n2\n) |= -cpAssociated (x, y). However, in order\nto reduce the possibility of meaningless associations e.g. both x\nand y belong to rdfs:Resource, we make further restrictions.\nWe say that\n-cpAssociated (x, y) is strong if\n1) For the join node j of the Joined Property Sequence (inducing\nthe association (i.e. the common parent of x and y), j ,\nwhere\ncalled the ceiling, refers to the most general\nclass in the hierarchy that is to be considered, which is usually\nuser-specified.\n2) the length of the Joined Property Sequence inducing the\nassociation is minimal. By minimal we mean that it is less\nthan a specific value indicated by context or specified by user.\nThe first restriction ensures that we do go to far up the hierarchy\nlooking for a common parent, while the second ensures that the\nrelationship is not too distant to be meaningful in the user's\ncontext.\n3.2.4 Definition 10 (\n\n-IsoAssociated)\n-IsoAssociated (x, y) is true if there exists two\nproperty sequences PS\n1\nand PS\n2\nsuch that PS\n1\n\n\nPS\n2\n, and there\nexists ps\n1\nand ps\n2\nsuch that ps\n1\n\n[[PS\n1\n]] and ps\n2\n\n[[PS\n2\n]] such\nthat, x is the origin of ps\n1\nand y is the origin of ps\n2\n. We say that\n(ps\n1\n, ps\n2\n) |= -IsoAssociated (x, y).\n694\nWe say that x and y are semantically associated if either\npathAssociated\n(x, y),\n-cpAssociated(x, y), -IsoAssociated(x, y),\nor\n-joinAssociated(x, y).\n3.3\n-Queries\nA\n\n-Query Q is defined as a set of operations that map from a\npair of keys (e.g. 
resource URIs) to the set of Property Sequences\nPS\nin the following manner:\n1.\n\n:\nU (2)\n2\nPS\n\n2.\n\n\n:\nU (2)\n2\nPS(2)\n\n3.\n\n\n:\nU (2)\n2\nPS(2)\n\n\n\nU (2)\n= { {x, y} : x, y\n\nU\nand x\ny }. Similarly,\nPS\n(2)\nis the set\nof pairs of Property Sequences. In 1., we map from a pair of keys\nx and y to a set of Property Sequences that induces a pathAssociation\nof x and y. In 2., we map from (x, y) to a set of\nbinary tuples of Property Sequences that induces either a\njoinAssociation\nor a strong\n-cpAssociation of x and y and in 3.,\nwe map from (x, y) to a set of binary tuples of Property\nSequences that induces a\n-isoAssociation.\nSTRATEGIES FOR PROCESSING -QUERIES\nOur strategy for implementation involves investigating alternative\napproaches to implementing the\n-operator, and evaluate their\nmerits and demerits. We consider two major categories. The first\ncategory, which we have developed a partial implementation for,\ninvolves leveraging existing RDF persistent data storage\ntechnologies. Here, a\n-query processing layer is developed above\nthe RDF data storage layer, which performs some of the\ncomputation and, relegates a specific portion of the computation\nto the data store\n\nlayer. In the second approach, the\nimplementation involves the use of a memory resident graph\nrepresentation of the RDF model, along with\n\nthe use of efficient\ngraph traversal algorithms. We will outline how query processing\nis done using both approaches.\n\n4.1 Exploiting RDF Data Management\nSystems\nIn this approach, we leverage existing RDF data storage\ntechnologies such as RDFSuite [8] and SESAME [18] and\ndevelop a\n-query processing layer which performs some of the\ncomputation and, relegates a specific portion of the computation\nto the data store layer. Figure 3 gives an illustration of this\napproach (although, this is somewhat of an oversimplification, it\nadequate for the purposes of this discussion). Here the processing\nof a\n-query is broken down to 4 phases. Phases 2 and 4 occur at\nthe data store layer and phases 1 and 3 occur at the\n-query\nprocessing layer.\nPhase 1 captures the query, i.e. the resources and context (i.e.\nschema set). In the second stage, the resources are classified i.e.,\nthe classes that entities belong to, within the given context, are\nidentified. This involves a query to the data store layer, which\nexploits the rdf:typeOf statements to answer the query. Much of\nthe processing is done in the third phase where potential paths\ninvolving the entities in the query are discussed by querying a\nPathGuide (a combination of index structures that stores\ninformation about paths that exist between resources classes).\nThere are two kinds of paths that are kept in the PathGuide. The\nfirst kind of path is that which is obvious from the schema. The\nsecond kind is those paths that exist at the data level but are not\nevident at the schema level. This is because of the fact that the\nRDF data model allows multiple classifications of entities.\nConsequently, every instance of a multiple classification induces\na connection between two classes that does not exist at the\nschema level, and thus is not general to all entities of those\nclasses. Therefore, a query to the PathGuide yields potential\nproperty sequences paths between entities, since some of the\npaths are not general to entire classes but specific to the entities\nthat are multiply classified. 
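One way to picture the PathGuide is as an index from (origin class, terminus class) pairs to the candidate property sequences that connect them, built from the schema-level property definitions plus the extra continuations induced by multiply classified resources. The sketch below is a simplified, hypothetical rendering of that idea: the class-pair key, the support threshold, the bounded path length, and the omission of class-hierarchy reasoning are assumptions of the sketch rather than the paper's data structures, and the candidate sequences it returns would still have to be validated against the store in phase 4.

```python
from collections import defaultdict
from itertools import product

def build_pathguide(properties, induced_links, support,
                    min_support=0.05, max_len=3):
    """Index candidate property sequences by (origin class, terminus class).

    properties    : (domain class, property name, range class) triples taken
                    from the schema set
    induced_links : (class a, class b) pairs created by resources that are
                    instances of both classes (multiple classification)
    support       : fraction of resources inducing each such pair; pairs
                    below min_support are not indexed (Section 4.1.1)
    """
    edges = defaultdict(list)                 # class -> [(property, class)]
    for dom, prop, rng in properties:
        edges[dom].append((prop, rng))
    for a, b in induced_links:
        if support.get((a, b), 0.0) >= min_support:
            ea, eb = list(edges[a]), list(edges[b])
            edges[a].extend(eb)               # sequences reaching a may
            edges[b].extend(ea)               # continue with b's properties
    guide = defaultdict(set)                  # (origin, terminus) -> paths

    def walk(origin, cls, path):
        if path:
            guide[(origin, cls)].add(tuple(path))
        if len(path) < max_len:
            for prop, nxt in edges[cls]:
                walk(origin, nxt, path + [prop])

    for cls in list(edges):
        walk(cls, cls, [])
    return guide

def candidate_sequences(guide, classes_of_x, classes_of_y):
    """Phase 3: potential sequences between the classes of the two entities."""
    out = set()
    for cx, cy in product(classes_of_x, classes_of_y):
        out |= guide.get((cx, cy), set())
    return out
```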
For example in Figure 1, the\npaints.exhibited.title sequence is not a sequence in\neither the left or right schema, but is present in the description\nbase (i.e. between &r1 and the literal node \"Reina Sofia\nMuseum\"). The reason for this is &r3`s membership in both the\nMuseum and the Ext.Resource classes, this can be seen as\nhaving created an intermediate class node that collapses Museum\nand the Ext.Resource classes, and consequently links the\npaints.exhibited sequence to the title property.\nThe fourth stage of query processing involves the validation of\nthe paths found in the PathGuide for the specific entities in the\nquery, by executing queries on the underlying data store. The\noutput of this stage selects only those paths whose queries do not\nreturn an empty result.\nr1\nr1 = http://www.xxx.com/yyy\nr2\nr2 = http://www.zzz.net/\nA\nB\nE\nC\nD\nr1\nr1 = http://www.xxx.com/yyy\nr2\nr2 = http://www.zzz.net/\nA\nB\nE\nC\nD\nr1\nr1 = http://www.xxx.com/yyy\nr2\nr2 = http://www.zzz.net/\nA\nB\nE\nC\nD\nr1\nr1 = http://www.xxx.com/yyy\nr2\nr2 = http://www.zzz.net/\n1. Query Entities\n2. Classification of Entities\n3. Identification of Candidate Paths\n4. Pruning of Invalid Paths\n\nFigure 3: Illustration of\n-Query Processing\n4.1.1 Issues\nTwo challenges arise from storing all potential paths between\nclasses in the PathGuide indexes. The first is that it causes the size\nof indexes to be quite large. Second, the potential paths found in\nthe PathGuide in response to a query, could generate a large\nnumber of RQL queries that need to be validated at the data store\nlayer, which slows down processing time significantly. However,\nheuristics could be employed to minimize these problems. For\nexample, to reduce the size of the indices, we could choose to\navoid adding every single potential path between classes in the\nindex, but include only those whose support value is at least as\nlarge as a user supplied threshold value, where the support value\nrepresents the percentage of resources that are involved in\n695\nmultiple classification for any given pair of classes. This means\nthat if very few resources induce a connection between two\notherwise unconnected schema classes because of a multiple\nclassification, then we do not include in the indexes, those\nadditional paths created due to the multiple classification, thereby\nreducing the size of the indices. The rationale for this is that the\nprobability of those paths being involved in the result of a query\nis low, therefore the additional cost of storing the paths in the\nindices may not be worth it. A second heuristic is to try to prune\nthe number of paths that need to be validated at the data storage\nlayer. This could be done by assigning weights to Semantic\nAssociations based on the contextual relevance and then\nvalidating only those associations with a high relevance weight.\nOur work in this area is still in progress.\nAn additional problem with processing\n-queries on existing RDF\nstorage systems is that some of these systems represent each\nproperty as a separate relation in a relational database model.\nTherefore, the computation of a Property Sequence results in a\nlarge number of joins which has a negative impact of the speed of\nquery processing. 
Currently, we do not see any easy solution to\nthis problem.\n4.2 Using Graph Algorithms\nThis approach involves the computation of Semantic Associations\non a memory-resident graph representation of the RDF model\nsuch as that provided by JENA [56], or the memory\nrepresentation of the schema set as in SESAME [18], to which\ngraph traversals algorithms can be applied. In the case of\npathAssociation\nwe can search for paths between entities, and in\nthe case of a\n-joinAssociation we check if the two entities belong\nin the same connected component. One issue with this approach is\nthat that trying to find all paths between entities could easily lead\nto an exponential time algorithm. However, [52] provides\npromising fast algorithms for solving path problems which may\nbe employed for such computations. In particular, it offers near-linear\ntime algorithms for computing a path expression\nrepresenting the set of all paths between nodes in a graph. Such a\nrepresentation may then be processed using contextual\ninformation to prune paths that are not very relevant in the given\ncontext. In addition, other heuristics may be added. For example,\na user may be asked to characterize the desired result set, e.g.\nshortest paths or longest paths, which will also help to reduce the\nresult set. Similar heuristics to those discussed in the first\napproach that use context to prune paths based on degree of\nrelevance can also be used here. In that case, the complexity of\nthe problem can be analyzed along the number of semantic paths\nretrieved\nComplexity =\n\n(n-1)\n\n(l=1)\n\n(# paths of length l) (probability of keeping path of length l).\nAnother issue is the complexity of graph isomorphism problem\nwhich is known to be NP-complete. However, certain classes of\ngraphs have properties making them amenable to efficient\nmanipulation. For example, [12] describes a polynomial time\nalgorithm for detecting isomorphism in rooted directed path\ngraphs, which includes the exact class of graphs that are required\nfor checking\n-isomorphism. We are currently working on a\nprototype implementation of this approach.\nRELATED WORK\nThere is some relationship between our work and that on querying\nobject-oriented and semi-structured data using path expressions\n[2][3][19][22][23][24][34]. Although, these systems provide\npowerful and expressive capability, allowing users to query for\ndata without having in-depth schema knowledge, most of them\nwork on the premise that the goal of a query is to find data entities\nbut not complex relationships such as Semantic Associations.\nSome of these systems [19][22] support paths as first class entities\nand allow for path variables to be used outside of the FROM\nclause, i.e. to be returned as a result of a query which suggests\nthat queries for\n-pathAssociations could be supported. However,\nthey typically assume a simpler data model which is a rooted\ndirected graph without the nuances of RDF such as multiple\nclassification and property hierarchies. 
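Returning to the graph-based strategy of Section 4.2: two of the checks it relies on, enumerating the paths between two entities under a length bound (a stand-in for the path-expression algorithms of [52]) and testing whether two property sequences are ρ-isomorphic under the property hierarchy (Definition 5), are sketched below. This is standalone illustrative Python over an adjacency list, not code against the JENA or SESAME APIs; the ρ-joinAssociation test, which asks whether the two entities fall in the same connected component, would be a similarly small traversal of the undirected view of the graph.

```python
def all_paths_up_to(triples, x, y, max_len):
    """Enumerate simple directed paths from x to y, as lists of property
    names, up to a length bound.  Unbounded enumeration is exponential, so
    a bound (or context-based pruning) keeps the search manageable."""
    adj = {}
    for s, p, o in triples:
        adj.setdefault(s, []).append((p, o))
    results = []

    def dfs(node, path, visited):
        if node == y and path:
            results.append(list(path))
            return
        if len(path) == max_len:
            return
        for prop, nxt in adj.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                path.append(prop)
                dfs(nxt, path, visited)
                path.pop()
                visited.remove(nxt)

    dfs(x, [], {x})
    return results

def rho_isomorphic(ps1, ps2, super_props):
    """Definition 5: two sequences are rho-isomorphic when, position by
    position, the properties are equal or one is a subPropertyOf the other.
    `super_props` maps a property to its (transitive) super-properties."""
    return len(ps1) == len(ps2) and all(
        p == q or q in super_props.get(p, ()) or p in super_props.get(q, ())
        for p, q in zip(ps1, ps2))

# rho_isomorphic(["paints", "exhibited"], ["creates", "exhibited"],
#                {"paints": {"creates"}, "sculpts": {"creates"}})  -> True
```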
Furthermore, the more\ncomplex Semantic Associations such as the\n-joinAssociation and\n-Isomorphism are not supported, even in systems like [22] which\nprovide some functions that range over path variables, e.g., the\ndifference function which returns the difference in the set of paths\nthat originate from two nodes.\nWith respect to RDF, the current generation of RDF query\nlanguages RQL [40], SquishQL [45], RDQL [48], do not support\npath variables as first class entities and so cannot even be used for\nquerying for path relationships. In the case of the logic-based\nRDF query languages such as TRIPLE [51], the inference rules\nrequired to reason about the full range of the Semantic\nAssociations discussed here, would require functionality beyond\nFOL.\nThe DISCOVER system [38] provides a keyword proximity\nsearch capability over relational databases, which return\nassociations called Joining Sequences. Joining Sequences\nrepresent the paths connecting keywords in the query, obtained by\ntraversing foreign key links. However, the semantics associated\nwith these associations is not explicit, but is implicit in the\ndatabase schema. Thus, the interpretation of the meaning and\nusefulness of the associations must be done by users.\nFurthermore, other more complex Semantic Associations such as\nthe\n-Isomorphism are not captured.\nThere is a common intuition underlying our work and some of the\ntasks related to data mining, in that they both involve discovering\nrelationships. However, there are significant differences in the\ngoals, methods and results produced by the both kinds of systems.\nThe first difference is articulated in a statement made in [32],\nwhere data mining is said to be opportunistic while information\naccess techniques (such as ours) are goal-driven. Traditional data\nmining [21][26] focuses on discovering patterns and relationships\nin data that can be used to develop models of the data. In\nassociation analysis [7], rules that associate attribute-value pairs\nare learned from patterns of co-occurrences of attribute values in\ndata, which capture co-occurrence relationships between\nattributes. On the contrary, we do not try to learn patterns from\ndata rather, we provide specific rules for inferring relationships\nbetween entities by looking at property value dependencies, and\nfocus on providing methods for verifying whether these kinds of\nassociations exist between entities. That is, we identify\nmeaningful sequences of binary predicates while data mining\nassociation rules involve sets of attribute value pairs. Therefore,\nwe view data mining as a complimentary technology. For\nexample, the association rules learnt from patterns in data can\n696\nprovide knowledge that can be used to guide the search for\nSemantic Associations or to rank resulting Semantic Associations\nbased on how close the follow the patterns.\nAn initial discussion on Semantic Associations is made in [10].\n\nCONCLUSION & FUTURE WORK\nMost RDF query systems do not provide adequate querying\nparadigms to support querying for complex relationships such as\nSemantic Associations. Support for such querying capabilities is\nhighly desirable in many domains. We have presented a formal\nframework for these Semantic Associations for the RDF data\nmodel, and reviewed some implementation strategies for\ncomputing them. There are many open research issues that we\nplan to focus on in the near future. 
First, it may be necessary to\ndevelop data organization techniques for data that will adequately\nsupport the kinds of queries discussed here. Such techniques\nshould eliminate the need for an excessive number of expensive\ncomputations such as joins during query processing. Secondly, we\nplan to develop techniques for dealing with the space complexity\nproblem of the indices used in the PathGuide. For example we\nmay use encoding schemes that compress path information, or\nheuristics for managing the size of the indices. Another top\npriority is the development of context-sensitive ranking\nalgorithms that assign higher weights to results that are most\nrelevant in the query context. Finally, we will perform a\ncomparative study of the two implementation strategies discussed\nin section 4 over a testbed consisting of large amount of\nautomatically extracted metadata generated using the SCORE\nsystem [41].\n\nACKNOWLEDGMENTS\nOur thanks to Drs. Bob Robinson, John Miller, Krys Kochut, and\nBudak Arpinar for the illuminating discussions and insightful\ncontributions, and to Boanerges Aleman-Meza on his revision\ncomments. We are also indebted to Dr. Vassilis Christophides\nwhose comments and suggestions were invaluable in preparing\nthe final version of the paper.\nThis work is funded by NSF-ITR-IDM Award # 0219649 titled\n\"\nSemantic Association Identification and Knowledge Discovery\nfor National Security Applications\n.\"\n\n\nREFERENCES\n[1]\nS. Abiteboul, P. Buneman, and D. Suciu. Data on the Web:\nFrom Relations to Semistructured Data and XML. Morgan\nKaufmann, 1999.\n[2]\nS. Abiteboul. Querying Semi-Structured data. In Proc. of\nICDT, Jan 1997.\nhttp://citeseer.nj.nec.com/abiteboul97querying.html\n[3]\nS. Abiteboul, D. Quass, J. McHugh, J. Widom, and J.\nWiener. The Lorel Query Language for Semistructured Data.\nInternational Journal on Digital Libraries, 1(1):68--88, April\n1997.\n[4]\nS. Abiteboul, R. Hull, and V. Vianu. Foundations of\nDatabases. Addison-Wesley, 1995.\n[5]\nR. Agrawal. Alpha: An Extension of Relational Algebra to\nExpress a Class of Recursive Queries. IEEE Transactions on\nSoftware Engineering. 14(7):879-- 885, July 1988.\n[6]\nR. Agrawal, A. Borgida, and H.V. Jagadish. Efficient\nManagement of Transitive Relationships in Large Data\nBases. In SIGMOD'89, pages 253--262, Portland, Oregon,\nUSA, 1989.\n[7]\nR. Agrawal, T. Imielienski and A. Swami. Mining\nAssocation Rules between Sets of Items in Large Databases.\nProc. Conf. On Management of Data. Washington, DC,\nUSA, 207--216. ACM Press, New York, NY USA 1993.\n[8]\nS. Alexaki, G. Karvounarakis, V. Christophides, D.\nPlexousakis, and K. Tolle. The ICS-FORTH RDFSuite:\nManaging Voluminous RDF Description Bases. In 2nd\nInternational Workshop on the Semantic Web, pages 1--13,\nHong Kong, 2001.\n[9]\nS. Alexaki, G. Karvounarakis, V. Christophides, D.\nPlexousakis, and K. Tolle. On Storing Voluminous RDF\ndescriptions: The case of Web Portal Catalogs. In 4th\nInternational Workshop on the Web and Databases\n(WebDB), Santa Barbara, CA, 2001. Available at\nhttp://139.91.183.30:9090/RDF/publications/webdb2001.pdf\n[10]\nK. Anyanwu, A. Sheth,\nThe\nOperator: Discovering and\nRanking Associations on the Semantic Web.\nSIGMOD\nRecord (Special issue on Amicalola Workshop), December\n2002.\n[11]\nD. Avant, M. Baum, C. Bertram, M. Fisher, A. Sheth, Y.\nWarke, \"Semantic Technology Applications for Homeland\nSecurity,\" Proc. of the 11\nth\nIntl Conf. 
on Information and\nKnowledge Management (CIKM 2002), McLean, VA,\nNovember 4-9, 2002, pp. 611--613.\n[12]\nL. Babel, I. Ponomarenko, G. Tinhofer. The Isomorphism\nProblem for Directed Paths and For Rooted Directed Path\nGraphs. Journal of Algorithms, 21:542--564, 1996.\n[13]\nT. Berners-Lee, J. Hendler, and O. Lassila. The Semantic\nWeb. Scientific American, May 2001.\n[14]\nB. Berendt, A. Hotho, G. Stumme. Towards Semantic Web\nMining. In Proceedings of the International Semantic Web\nConference, pp. 264--278, Sardinia, Italy. June 2002.\n[15]\nA. Branstadt, V. B. Le, J. P. Spinrad. Graph Classes: A\nSurvey. SIAM 1999.\n[16]\nT. Bray, J. Paoli, and C.M. Sperberg-McQueen. Extensible\nMarkup Language (XML) 1.0. W3C Recommendation,\nFebruary 1998.\n[17]\nD. Brickley and R.V. Guha. Resource Description\nFramework (RDF) Schema Specification 1.0, W3C\nCandidate Recommendation. 2000.\n[18]\nJ. Broekstra, A. Kampman, and F. van Harmelen. SESAME:\nAn Architecture for Storing and Querying RDF Data and\nSchema Information. In D. Fensel, J. Hendler, H. Lieberman,\nand W. Wahlster, editors, Semantics for the WWW. MIT\nPress, 2001.\n697\n[19]\nP. Buneman, M. Fernandez, D. Suciu. UnQL: A Query\nLanguage and Algebra for Semistructured Data Based on\nStructural Recursion. VLDB Journal, 9(1):76--110, 2000.\n[20]\nW. W. Chang, A Discussion of the Relationship Between\nRDF-Schema and UML. A W3C Note, NOTE-rdf-uml-19980804\n.\n[21]\nM. Chen, J. Han and P. Yu. Data Mining: An Overview from\nthe Database Perspective. IEEE Trans. On Knowledge and\nData Engineering. Vol. 8. No. 6. December 1996.\n[22]\nV. Christophides, S. Abiteboul, S. Cluet, and M. Scholl.\nFrom Structured Documents to Novel Query Facilities. In\nProc. of ACM SIGMOD Conf. on Management of Data, pp.\n313--324, Minneapolis, Minnesota, May 1994.\n[23]\nV. Christophides, S. Cluet, and G. Moerkotte. Evaluating\nQueries with Generalized Path Expressions. In Proc. of ACM\nSIGMOD, pp. 413--422, 1996.\n[24]\nD. Chamberlin, D. Florescu, J. Robie, J. Simeon, and M.\nStefanescu. XQuery: A Query Language for XML. Working\ndraft, World Wide Web Consortium, June 2001.\nhttp://www.w3.org/TR/xquery/\n[25]\nM. Dean, D. Connolly, F. Harmelen, J. Hendler, I. Horrocks,\nD. McGuinness, P. F. Patel-Schneider, and L. Stein. OWL\nWeb Ontology Language 1.0 Reference, W3C Working\nDraft 29 July 2002. http://www.w3.org/TR/owl-ref/.\n[26]\nU. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth and R.\nUthurusany. Advances in Knowledge Discovery and Data\nMining.. AAAI/MIT Press 1996.\n[27]\nR. Fikes. DAML+OIL query language proposal, August\n2001. http://www.daml.org/listarchive/joint-committee/0572\n.html.\n[28]\nR. H. Guting. GraphDB: Modeling and querying graphs in\ndatabases. In Proceedings of the International Conference on\nVery Large Data Bases, pp. 297--308, 1994.\n[29]\nB. Hammond, A. Sheth, and K. Kochut, Semantic\nEnhancement Engine: A Modular Document Enhancement\nPlatform for Semantic Applications over Heterogeneous\nContent, in Real World Semantic Web Applications, V.\nKashyap and L. Shklar, Eds., IOS Press, December 2002, pp.\n29--49.\n[30]\nS. Handschuh and S. Staab. Authoring and annotation of web\npages in CREAM. In The Eleventh International World\nWide Web Conference (WWW2002), Honolulu, Hawaii,\nUSA, 7-11, May, 2002\n\n\n[31]\nF. Harmelen, P. F. Patel-Schneider, I. Horrocks, eds.\nReference Description of the DAML+OIL (March 2001)\nontology markup language.\n[32]\nM. Hearst. Distinguishing between Web Data Mining and\nInformation Access. 
Position statement for Web Data Mining\nKDD 97.\n[33]\nY. E. Ioannidis, R. Ramakrishnan, L. Winger: Transitive\nClosure Algorithms Based on Graph Traversal. TODS 18(3):\n512--576 (1993).\n[34]\nY. E. Ioannidis, Y. Lashkari. Incomplete Path Expressions\nand their Disambiguation, In Proc. of the 1994 ACM\nSIGMOD, International Conference on Management of Data.\np.138-149, May 24-27, 1994, Minneapolis, Minnesota,\nUnited States.\n[35]\nP. Hayes. RDF Model Theory. W3C Working Draft,\nSeptember 2001.\n[36]\nI. Horrocks, S. Tessaris. The Semantics of DQL.\nhttp://www.cs.man.ac.uk/~horrocks/Private/DAML/DQL-semantics\n.pdf\n[37]\nI. Horrocks and S. Tessaris. A Conjunctive Query Language\nfor Description Logic Aboxes. In Proc. of AAAI-00, 2000.\n[38]\nV. Hristidis and Y. Papakonstanti-nou. DISCOVER:\nKeyword search in relational databases. In Procs. VLDB,\nAug. 2002.\n[39]\nICS-FORTH. The ICS-FORTH RDFSuite web site.\nAvailable at http://139.91.183.30:9090/RDF, March 2002.\n[40]\nG. Karvounarakis, S. Alexaki, V. Christophides, D.\nPlexousakis, M. Scholl, RQL: A Declarative Query\nLanguage for RDF, WWW2002, May 7-11, 2002, Honolulu,\nHawaii, USA.\n[41]\nM. S. Lacher and S. Decker. On the Integration of Topic\nMaps and RDF Data. In Proc. of Semantic Web Working\nSymposium. Palo Alto. California. August 2001.\n[42]\nO. Lassila and R. Swick. Resource Description Framework\n(RDF) Model and Syntax Specification, W3C\nRecommendation. 1999.\n[43]\nM. Mannino, L. Shapiro, L. Extensions to Query Languages\nfor Graph Traversal Problems. TKDE 2(3): 353--363,1990.\n[44]\nA. O. Mendelzon and P. T. Wood. Finding Regular Simple\nPaths in Graph Databases. SIAM J. Comput., 24(6):1235-1258\n, 1995.\n[45]\nL. Miller, A. Seaborne, A. Reggiori. Three Implementations\nof SquishQL, a Simple RDF Query Language. In Proc. of 1\nst\n\nInternational Semantic Web Conference. ISWC2002. June 9-12\n, 2002, Sardinia, Italy.\n[46]\nA. Nanopoulos. Y. Manolopoulos. \"Mining Patterns from\nGraph Traversals\", Data and Knowledge Engineering,\nVol.37, No.3, pp.243-266, 2001.\n[47]\nJ. Rumbaugh, I. Jacobson, and G. Booch. The Unified\nModeling Language Reference Manual. Addison-Wesley,\n1999.\n[48]\nA. Seaborne. RDQL: A Data Oriented Query Language for\nRDF Models. 2001.\nhttp://www.hpl.hp.com/semweb/rdql-grammar\n.html\n\n[49]\nA. Sheth, C. Bertram, D. Avant, B. Hammond, K. Kochut,\nY. Warke. Semantic Content Management for Enterprises\nand the Web, IEEE Internet Computing, July/August 2002,\npp. 80--87.\n[50]\nA. Sheth, S. Thacker and S. Patel.\nComplex Relationship and\nKnowledge Discovery Support in the InfoQuilt System\n.\nVLDB Journal. September 25, 2002.\n[51]\nM. Sintek and S. Decker. TRIPLE---A Query, Inference,\nand Transformation Language for the Semantic Web.\n698\nInternational Semantic Web Conference (ISWC), Sardinia,\nJune 2002. http://www.dfki.uni-kl.de/frodo/triple/\n[52]\nTarjan, R. Fast Algorithms for Solving Path Problems. J.\nACM Vol. 28, No. 3, July 1891, pp. 594--614.\n[53]\nDQL: DAML Query Language.\nhttp://www.daml.org/2002/08/dql/\n[54]\nInkling: RDF query using SquishQL, 2001.\nhttp://swordfish.rdfweb.org/rdfquery/.\n[55]\nISO/IEC 13250: 2000 Topic Maps, Jan, 2000.\nhttp://www.topicmaps.org/\n[56]\nJENA\nA Java API for RDF\n.\n[57]\nWhitepaper on National Security and Intelligence, Semagix\nInc. 
2002.\nhttp://www.semagix.com/pdf/national_security.pdf\n\n\n\n699", "keywords": "AI;analysis;isomorphism;Complex Data Relationships;RDF;Rooted Directed Path;Semantic Associations;automation;graph traversals;semantic association;Semantic Web Querying;relationship;semantic web;query processing;Property Sequence"} {"name": "81", "title": "Energy Management Schemes for Memory-Resident Database Systems", "abstract": "With the tremendous growth of system memories, memory-resident databases are increasingly becoming important in various domains. Newer memories provide a structured way of storing data in multiple chips, with each chip having a bank of memory modules. Current memory-resident databases are yet to take full advantage of the banked storage system, which offers a lot of room for performance and energy optimizations. In this paper, we identify the implications of a banked memory environment in supporting memory-resident databases, and propose hardware (memory-directed) and software (query-directed) schemes to reduce the energy consumption of queries executed on these databases. Our results show that high-level query-directed schemes (hosted in the query optimizer) better utilize the low-power modes in reducing the energy consumption than the respective hardware schemes (hosted in the memory controller), due to their complete knowledge of query access patterns. We extend this further and propose a query restructuring scheme and a multi-query optimization . Queries are restructured and regrouped based on their table access patterns to maximize the likelihood that data accesses are clustered. This helps increase the inter-access idle times of memory modules, which in turn enables a more effective control of their energy behavior. This heuristic is eventually integrated with our hardware optimizations to achieve maximum savings. Our experimental results show that the memory energy reduces by 90% if query restructuring method is applied along with basic energy optimizations over the unoptimized version. The system-wide performance impact of each scheme is also studied simultaneously.", "fulltext": "INTRODUCTION\nMemory-resident databases (also called in-memory databases\n[6]) are emerging to be more significant due to the current era of\nmemory-intensive computing. These databases are used in a wide\nrange of systems ranging from real-time trading applications to IP\nrouting. With the growing complexities of embedded systems (like\nreal-time constraints), use of a commercially developed structured\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nCIKM'04, November 813, 2004, Washington, DC, USA.\nCopyright 2004 ACM 1-58113-874-1/04/0011 ...\n$\n5.00.\nmemory database is becoming very critical [5].\nConsequently,\ndevice developers are turning to commercial databases, but existing\nembedded DBMS software has not provided the ideal fit.\nEmbedded databases emerged well over a decade ago to support\nbusiness systems, with features including complex caching logic\nand abnormal termination recovery. 
But on a device, within a set-top box or a next-generation fax machine, for example, these abilities are often unnecessary and cause the application to exceed the available memory and CPU resources. In addition, current in-memory database support does not consider embedded-system-specific issues such as energy consumption.
Memory technology has grown tremendously over the years, providing larger data storage space at a cheaper cost. Recent memory designs have more structured and partitioned layouts in the form of multiple chips, each having memory banks [30]. Banked memories are energy efficient by design, as per-access energy consumption decreases with decreasing memory size (and a memory bank is typically much smaller than a large monolithic memory). In addition, these memory systems provide low-power operating modes, which can be used for reducing the energy consumption of a bank when it is not being used. An important question regarding the use of these low-power modes is when to transition to one once idleness is detected. Another important question is whether the application can be modified to take better advantage of these low-power modes. While these questions are slowly being addressed in the architecture, compiler, and OS communities, to our knowledge, there has been no prior work that examines the energy and performance behavior of databases under a banked memory architecture. Considering the increasingly widespread use of banked memories, such a study can provide us with valuable information regarding the behavior of databases under these memories and potential modifications to DBMSs for energy efficiency. Since such banked systems are also being employed in high-end server systems, banked-memory-friendly database strategies can also be useful in high-end environments to help reduce energy consumption.
Our detailed energy characterization of a banked memory architecture that runs a memory-resident DBMS showed that nearly 59% of the overall energy (excluding input/output devices) in a typical query execution is spent in the memory, making this component an important target for optimization (see Figure 1). Moreover, for any system, memory power and energy consumption have become critical design parameters besides cost and performance. Based on these observations, this paper evaluates the potential energy benefits that memory-resident database queries can achieve by making use of banked memory architectures supported with low-power operating modes. Since each memory bank is capable of operating independently, this opens up abundant avenues for energy and performance optimizations.
[Figure 1: Breakup of the energy consumption for various system components (Memory 59%, Cache 16%, ALU 14%, Bus 1%, Others 10%). The results are based on the average energy consumption of TPC-H benchmarks [35] executed on a memory-resident DBMS.]
In this paper, we focus on a banked memory architecture and study the potential energy benefits when database queries are executed. Specifically, we focus on two important aspects of the problem:
Characterizing the energy benefits of banked memories using hardware and software techniques: To see whether query execution can make use of the available low-power modes, we study both hardware and software techniques. The hardware techniques detect the idleness of memory banks and switch the inactive (idle) banks (during query execution) to low-power operating modes.
We also present a query-based memory energy optimization strategy, wherein the query plan is augmented with explicit bank turn-off/on instructions that transition memory banks into appropriate operating modes during the course of execution, based on the query access pattern. We experimentally evaluate all the proposed schemes and obtain energy consumptions using an energy simulator. Our experiments using TPC-H queries [35] and a set of queries suitable for handheld devices clearly indicate that both hardware-based and query-directed strategies save significant memory energy.
Query restructuring for memory energy savings: We propose a query restructuring scheme and a multi-query optimization strategy to further increase the energy benefits coming from using low-power operating modes. The idea behind these schemes is to increase bank inter-access times so that more aggressive low-power modes can be employed and a memory bank can stay in a low-power mode longer once it is transitioned. Our experimental evaluation indicates that this query restructuring strategy not only reduces energy consumption, but also helps improve overall performance (execution cycles).
Apart from providing useful input for database designers, our results can also be used by hardware designers to tune the behavior of low-power modes so that they handle query access patterns better. Similar to the observation that creating a lightweight version of a disk-based database will not serve as a suitable in-memory database, our belief is that taking an in-memory database system and using it on a banked architecture without any modification may not generate the desired results. Therefore, the results presented in this work also shed light on how database design and memory architecture design interact with each other.
The remainder of this paper is organized as follows. Section 2 presents related work. Section 3 elaborates on the memory database that we built and also on the memory banking scheme that we employ for our experiments. Section 4 presents in detail the proposed hardware and query-directed energy optimization techniques. The results of our energy evaluation of these schemes are discussed in Section 5. Our experiments also account for the performance overhead incurred in supporting our schemes. Section 6 presents our query restructuring and regrouping scheme, and Section 7 discusses its energy/performance benefits within the context of our banked memory architecture. Finally, Section 8 summarizes the results.
RELATED WORK
In the past, memory has been redesigned, tuned, or optimized to suit emerging fields. The need for customized memory structures and allocation strategies forms the foundation for such studies. Copeland et al. proposed SafeRAM [11], a modified DRAM model for safely supporting memory-resident databases in the way disk-based systems are supported, and for achieving good performance. In PicoDBMS [27], Pucheral et al. present techniques for scaling down a database to a smart card. This work also investigates some of the constraints involved in mapping a database to an embedded system, especially memory constraints and the need for a structured data layout. Anciaux et al. [3] explicitly model the lower bound of the memory space that is needed for query execution. Their work focuses on lightweight devices like personal organizers, sensor networks, and mobile computers. Boncz et al. show how memory accesses form a major bottleneck during database accesses [7].
In their work, they also suggest a few remedies to alleviate the memory bottleneck. An et al. analyze the energy behavior of mobile devices when spatial access methods are used for retrieving memory-resident data [2]. They use a cycle-accurate simulator to identify the pros and cons of various indexing schemes. In [1], Alonso et al. investigate the possibility of increasing the effective battery life of mobile computers by selecting energy-efficient query plans through the optimizer. Although the ultimate goal seems the same, their cost plan and optimization criterion are entirely different from our scheme. Specifically, their emphasis is on a client-server model optimizing the network throughput and overall energy consumption. Gruenwald et al. propose an energy-efficient transaction management system for real-time mobile databases in ad-hoc networks [16]. They consider an environment of mobile hosts. In [22], Madden et al. propose TinyDB, an acquisitional query processor for sensor networks. They provide SQL-like extensions to sensor networks, and also propose acquisitional techniques that reduce the power consumption of these networks. It should be noted that the queries in such a mobile ad-hoc network or a sensor environment are different from those in a typical DBMS. This has been shown by Imielinski et al. in [19]. In our model, we base our techniques on a generic banked memory environment and support complex, memory-intensive, typical database operations. There are more opportunities for energy optimizations in generic memory databases, which have not yet been studied completely. The approach proposed in this paper is different from prior energy-aware database studies, as we focus on a banked memory architecture, and use low-power operating modes to save energy.
Gassner et al. review some of the key query optimization techniques required by industrial-strength commercial query optimizers, using the DB2 family of relational database products as examples [15]. This paper provides insight into the design of query cost plans and optimization using various approaches. In [23], Manegold studies the performance bottlenecks at the memory hierarchy level and proposes a detailed cost plan for memory-resident databases. Our cost plan and optimizer mimic the PostgreSQL model [12, 14]. We chose it due to its simple cost models and open source availability.
A query restructuring algorithm is proposed by Hellerstein in [18]. This algorithm uses predicate migration to optimize expensive data retrievals. In [10], Chaudhuri et al. extend this approach to study user-defined predicates and also guarantee an optimal solution for the migration process. Sarawagi et al. present a query restructuring algorithm that reduces the access times of data retrieval from tertiary databases [32]. Monma et al. develop the series-parallel algorithm for reordering primitive database operations [24]. This algorithm optimizes an arbitrarily constrained stream of primitive operations by isolating independent modules. This work forms the basic motivation for our query restructuring algorithm. However, our paper is different from all of the above work in the sense that we reorder queries for reducing energy consumption.
Moreover, our database is memory-resident, with the presence of banked memory that gives more freedom for optimizations.
SYSTEM ARCHITECTURE
For our work, we modified the PostgreSQL DBMS to work with memory-resident data sets as its workload. The block diagram for our setup is shown in Figure 2. [Figure 2: DBMS architecture.] The core components are derived from PostgreSQL. The flow of our model is similar to PostgreSQL except that the database is memory resident. A query is parsed for syntax and then sent to the rewrite system. The rewrite system uses the system catalog to generate the query tree, which is then sent to the optimizer. The query optimizer derives the cost of the query in multiple ways using the query tree and issues the best-suited plan to the query execution engine. We incorporate our software-based techniques at the optimizer stage of the DBMS. These optimizations are based on the cost that is derived for each of the query plans (the discussion pertaining to the modified cost model is deferred until Section 4). Based on the final query execution plan, the execution engine executes the query by using the database. The database is entirely memory resident, and the memory is organized in a banked format (elaborated in the following section). The executor recursively iterates over the query plan and uses a per-tuple based strategy (pipelined execution, not bulk processing) to project the output results. The proposed hardware optimizations are at the computer architecture level of the system. Since the base DBMS model is similar to PostgreSQL, we do not elaborate on each component in detail ([26] provides an elaborate discussion). Instead, we highlight our contributions and modifications to the DBMS (shown in blue in Figure 2) in the following sections. Overall, our strategies require modifications to the query optimizer, the memory hardware, and system software components.
3.2 Memory Model
We use a memory system that contains a memory array organized as banks (rows) and modules (columns), as shown pictorially in Figure 3 for a 4 × 4 memory module array. Such banked systems are already being used in high-end server systems [30] as well as low-end embedded systems [31]. The proposed optimizations will, however, apply to most bank-organized memory systems. Accessing a word of data would require activating the corresponding modules of the shown architecture. Such an organization allows one to put the unused banks into a low-power operating mode. To keep the issue tractable, this paper bases the experimental results on a sequential database environment and does not consider a multiprocessing environment (like transaction processing, which requires highly complex properties to be satisfied). We assume in our experiments that there is just one module in a bank; hence, in the rest of our discussion, we use the terms "bank" and "module" interchangeably.
[Figure 3: Banked memory architecture. Memory modules are organized into banks; a memory controller containing configuration registers and a self-monitoring/prediction hardware block interfaces the modules to the memory bus and the CPU.]
3.3 Operating Modes
We assume the existence of five operating modes for a memory module: active, standby, nap, power-down, and disabled (see footnote 1). Each mode is characterized by its energy consumption and the time that it takes to transition back to the active mode (termed resynchronization time or resynchronization cost). Typically, the lower the energy consumption, the higher the resynchronization time [30].
Figure 4 shows the possible transitions between the various low-power modes (the dynamic energy consumed in a cycle is given for each node; see footnote 2) in our model. The resynchronization times in cycles (based on a cycle time of 3.3 ns) are shown along the arrows (we assume a negligible cost for transitioning to a lower power mode). The energy and resynchronization values shown in this figure have been obtained from the RDRAM memory data sheet (512 MB, 2.5 V, 3.3 ns cycle time, 8 MB modules) [30]. When a module in standby, nap, or power-down mode is requested to perform a memory transaction, it first goes to the active mode, and then performs the requested transaction. While one could employ all possible transitions given in Figure 4 (and maybe more), our query-directed approach only utilizes the transitions shown by solid arrows. The runtime (hardware-based) approaches, on the other hand, can exploit two additional transitions: from standby to nap, and from nap to power-down.
[Figure 4: Available operating modes and their resynchronization costs. Per-cycle dynamic energy: full power 2.063 nJ, standby 0.743 nJ, nap 0.035 nJ, power-down 0.025 nJ, disabled 0 nJ. Resynchronization times: 1, 16, and 9000 cycles for standby, nap, and power-down, respectively.]
Footnote 1: Current DRAMs [30] support up to six energy modes of operation, with a few of them supporting only two modes. One may choose to vary the number of modes based on the target memory.
Footnote 2: We exclusively concentrate on dynamic power consumption that arises due to bit switching, and do not consider the static (leakage) power consumption [28] in this paper.
3.4 System Support for Power Mode Setting
Typically, several of the memory modules (shown in Figure 3) are controlled by a memory controller which interfaces with the memory bus. For example, the operating mode setting could be done by programming a specific control register in each memory module (as in RDRAM [30]). Next is the issue of how the memory controller can be told to transition the operating modes of the individual modules. This is explored in two ways in this paper: a hardware-directed approach and a software-directed (query-directed) approach.
In the hardware-directed approach, there is a Self-Monitoring and Prediction Hardware block (shown in Figure 3), which monitors all ongoing memory transactions. It contains some prediction hardware (based on the hardware scheme) to estimate the time until the next access to a memory bank, and circuitry to ask the memory controller to initiate mode transitions (a limited amount of such self-monitored power-down is already present in current memory controllers, for example, the Intel 82443BX and Intel 820 chip sets).
In the query-directed approach, the DBMS explicitly requests the memory controller to issue the control signals for a specific module's mode transitions. We assume the availability of a set of configuration registers in the memory controller (see Figure 3) that are mapped into the address space of the CPU (similar to the registers in the memory controller in [20]). These registers are then made available to the user space (so that the DBMS application can have control) through operating system calls.
Regardless of which strategy is used, the main objective of employing such strategies is to reduce the energy consumption of a query when some memory banks are idle during the query's execution. That is, a typical query only accesses a small set of tables, which corresponds to a small number of banks.
The remaining memory banks can be placed into a low-power operating mode to save memory energy. However, it is also important to select the low-power mode to use carefully (when bank idleness is detected), as switching to a wrong mode either incurs significant performance penalties (due to large resynchronization costs) or prevents us from obtaining the maximum potential energy benefits.
Note that energy optimization in our context can be performed from two angles. First, suitable use of low-power operating modes can reduce the energy consumption of a given query execution. Second, the query plan can be changed (if it is possible to do so) to further increase energy benefits. In this work, we explore both these angles.
POWER MANAGEMENT SCHEMES
In a banked architecture, the memory can be managed through either of the following two approaches: (1) a runtime approach wherein the hardware is in full control of operating mode transitions; and (2) a query-directed scheme wherein explicit bank turn-on/off instructions are inserted in the query execution plan to invoke mode transitions. One also has the option of using both approaches simultaneously (which we illustrate in later sections).
4.1 Hardware-Directed Schemes
We explore two hardware-directed approaches that allow the memory system to automatically transition the idle banks to an energy-conserving state. The problem then is to detect/predict bank idleness and transition idle banks into appropriate low-power modes.
4.1.1 Static Standby Scheme
The first approach is a per-access optimization. Most of the recent DRAMs allow the chips to be put to standby mode immediately after each reference [30]. After a read/write access, the memory module that gets accessed can be placed into the standby mode in the following cycle. We refer to this scheme as the static standby mode in the rest of our discussion. Note that, while this scheme is not very difficult to implement, it may lead to frequent resynchronizations, which can be very harmful as far as execution cycles are concerned.
4.1.2 Dynamic Threshold Scheme
Our second hardware-guided approach is based on the runtime dynamics of the memory subsystem. The rationale behind this approach is that if a memory module has not been accessed in a while, then it is not likely to be needed in the near future (that is, inter-access times are predicted to be long). A threshold is used to determine the idleness of a module, after which it is transitioned to a low-power mode. More specifically, we propose a scheme where each memory module is put into a low-power state with its idle cycles as the threshold for transition.
The schematic of our dynamic threshold scheme is depicted in Figure 5. After idle_standby cycles of idleness, the corresponding module is put in the standby mode. Subsequently, if the module is not referenced for another idle_nap cycles, it is transitioned to the nap mode. Finally, if the module is not referenced for a further idle_down cycles, it is placed into the power-down mode. Whenever the module is referenced, it is brought back into the active mode, incurring the corresponding resynchronization costs (based on what low-power mode it was in). It should be noted that even if a single bank experiences a resynchronization cost, the other banks will also incur the corresponding delay (to ensure correct execution). Implementing the dynamic mechanism requires a set of counters (one for each bank) that are decremented at each cycle, and set to a threshold value whenever they expire or the module is accessed. A zero detector for a counter initiates the memory controller to transmit the instructions for mode transition to the memory modules.
[Figure 5: Dynamic threshold scheme. A module moves from full power to standby, nap, and power-down after idle_standby, idle_nap, and idle_down cycles of idleness, respectively, and resynchronizes back to full power on an access.]
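To make this mechanism concrete, the following is a small Python sketch of the per-bank threshold logic described above. It is our own illustration, not the simulator code: the threshold defaults are the ones later used in Section 5.1.3, the resynchronization latencies are taken from Figure 4, and the sketch counts idle cycles upward for readability, whereas the hardware uses decrementing counters with a zero detector.

# Sketch of the dynamic threshold scheme of Section 4.1.2 (illustrative only).
RESYNC_CYCLES = {"active": 0, "standby": 1, "nap": 16, "power-down": 9000}

class BankState:
    def __init__(self, idle_standby=10, idle_nap=100, idle_down=10000):
        self.mode = "active"
        self.idle = 0  # cycles since the last access to this bank
        # cumulative idleness thresholds at which the mode is lowered
        self.thresholds = [("standby", idle_standby),
                           ("nap", idle_standby + idle_nap),
                           ("power-down", idle_standby + idle_nap + idle_down)]

    def tick(self):
        """Advance one cycle in which this bank is not accessed."""
        self.idle += 1
        for mode, limit in self.thresholds:
            if self.idle == limit:
                self.mode = mode  # memory controller issues the transition

    def access(self):
        """Reference the bank; returns the resynchronization penalty in cycles."""
        penalty = RESYNC_CYCLES[self.mode]
        self.mode = "active"
        self.idle = 0
        return penalty

One such state object per bank, updated every cycle, mirrors the per-bank counters and zero detectors described above.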
4.2 Software-Directed Scheme
It is to be noted that a hardware-directed scheme works well independently of the DBMS and the query optimizer used. This is because the idleness predictors are attached to the memory banks and monitor idleness from the perspective of the banks. In contrast, a query-directed scheme gives the task of enforcing mode transitions to the query. This is possible because the query optimizer, once it generates the execution plan, has complete information about the query access patterns (i.e., which tables will be accessed and in what order, etc.). Consequently, if the optimizer also knows the table-to-bank mappings, it can have a very good idea about the bank access patterns. Then, using this information, it can proactively transition memory banks to different modes. In this section, we elaborate on each step in the particular query-directed approach that we implemented, which includes customized bank allocation, query analysis, and insertion of bank turn-on/off instructions (for explicit power mode control).
4.2.1 Bank Allocation
In the case of the software-directed scheme, the table allocation is handled by the DBMS. Specifically, the DBMS allocates the newly-created tables to the banks, and keeps track of the table-to-bank mappings. When a "create table" operation is issued, the DBMS first checks for free space. If there is sufficient free space available in a single bank, the table is allocated from that bank. If a bank is not able to accommodate the entire table, the table is split across multiple banks. Also, while creating a new table, the DBMS tries to reuse the already occupied banks to the highest extent possible; that is, it does not activate a new bank unless it is necessary. Note that the unactivated (unused) banks, i.e., the banks that do not hold any data, can remain in the disabled mode throughout the execution. However, the DBMS also tries not to split tables excessively. In more detail, when it considers an already occupied bank for a new table allocation, the table boundaries are checked first using the available space in that bank. If a bank is more than two-thirds full with table data, the rest of the bank is padded with empty bits and the new table is created using pages from a new bank. Otherwise, the table is created beginning in the same bank. Irrespective of whether the table is created on a new bank or not, the DBMS creates a new table-to-bank mapping entry after each table creation.
In hardware-directed schemes, we avoid these complexities involved in bank allocation, as we assume that there is absolutely no software control. Consequently, in the hardware-directed schemes, we use the sequential first-touch placement policy. This policy allocates new pages sequentially in a single bank until it gets completely filled, before moving on to the next bank. Also, the table-to-bank mapping is not stored within the DBMS, since the mode control mechanism is handled by the hardware.
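As a rough illustration of the allocation policy of Section 4.2.1, the sketch below places a newly created table either in the most recently used, partially filled bank or in a fresh bank, applying the two-thirds occupancy check, and records the table-to-bank mapping. The class, its data structures, and the capacity constant are our own simplifications (free-space bookkeeping and error handling are omitted); this is not the actual DBMS code.

# Sketch of the DBMS-side bank allocation of Section 4.2.1 (illustrative only).
BANK_SIZE = 8 * 1024 * 1024  # 8 MB banks, as in our default configuration

class BankAllocator:
    def __init__(self, num_banks):
        self.used = [0] * num_banks  # bytes occupied in each bank
        self.table_to_banks = {}     # table name -> list of bank ids

    def create_table(self, name, size):
        banks, start = [], None
        partially_filled = [b for b, u in enumerate(self.used) if 0 < u < BANK_SIZE]
        if partially_filled:
            b = partially_filled[-1]
            if self.used[b] <= (2 * BANK_SIZE) // 3:
                start = b                      # begin the table in the same bank
            else:
                self.used[b] = BANK_SIZE       # pad the rest with empty bits
        if start is None:
            start = self.used.index(0)         # activate a fresh (disabled) bank
        remaining, b = size, start
        while remaining > 0:                   # split across banks if needed
            take = min(BANK_SIZE - self.used[b], remaining)
            self.used[b] += take
            remaining -= take
            banks.append(b)
            if remaining > 0:
                b = self.used.index(0, b + 1)  # move on to the next unused bank
        self.table_to_banks[name] = banks
        return banks

The hardware-directed schemes instead use the sequential first-touch policy mentioned above, which corresponds to always continuing in the current bank until it is completely filled.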
4.2.2 Estimating Idleness and Selecting the Appropriate Low-Power Mode
It should be emphasized that the main objective of our query-directed scheme is to identify bank idleness. As explained above, in order to achieve this, it needs the table-to-bank mapping. However, this is not sufficient, as it also needs to know when each table will be accessed and how long an access will take (i.e., the query access pattern). To estimate this, we need to estimate the duration of accesses to each table, which means estimating the time taken by the database operations. Fortunately, current DBMSs already maintain such estimates for query optimization purposes [12, 15, 29, 33, 34]. More specifically, given a query, the optimizer looks at the query access pattern using the generated query plan. The inter-access times are calculated using the query plan. A query plan elucidates the operations within a query and also the order in which these operations access the various tables in the database. Even in current databases, the query plan generator estimates access costs using query plans [12]. We use the same access cost estimation methodology. These access costs are measured in terms of page (block) fetches. In our memory-resident database case, a page is basically the block that is brought from memory to the cache. For instance, the cost of a sequential scan is defined as follows (taken from [12]):
Cost_seq_scan = N_blocks + CPU · N_tuples
Here, N_blocks is the number of data blocks retrieved, N_tuples is the number of output tuples, and CPU is the fudge factor that adjusts the system tuple-read speed with the actual memory hierarchy data-retrieval speed. Usually, optimizers use the above cost metric to choose between multiple query plan options before issuing a query. We attach a cost to each page (block) read/write operation to obtain an estimate of the access cost (time) in terms of execution cycles. For instance, the above scan operation is modified as follows:
Cost_block_fetch = T cycles
Cost_seq_scan = N_blocks · T + CPU · N_tuples · (block/tuples) · T
In these expressions, T is the delay in cycles to fetch a block from the memory. Thus, our cost plan is projected in terms of access cycles. We extend this to other database operations like JOIN and AGGREGATE based on the cost models defined in [14, 12].
Given a query, we break down each operation within the plan (including sub-plans) and estimate the access cost (in cycles) for each primitive operation. Our objective in estimating the per-operation time in cycles is to eventually identify the inter-access times of operations in the query (and hence, to put the banks that hold unused tables into low-power modes). There are table accesses associated with each operation, and bank inter-access times depend on the table inter-access times. A query has information about the tables that it accesses. Thus, knowing the inter-access time for each operation leads to the inter-access times for each table as well. A table is mapped to certain banks, and the table-to-bank mapping is available in the query optimizer.
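The following small sketch shows how such per-operation estimates (here, only the sequential scan cost) can be projected into cycles and then turned into per-table access windows, from which the table (and hence bank) inter-access times follow. The constants and helper names are purely illustrative and are not taken from the PostgreSQL source.

# Sketch of the cycle-based cost projection of Section 4.2.2 (illustrative only).
T = 300                  # assumed block (page) fetch delay in cycles
CPU = 0.01               # assumed per-tuple fudge factor
TUPLES_PER_BLOCK = 100   # assumed tuples per block, i.e., 1/(block/tuples)

def seq_scan_cost(n_blocks, n_tuples):
    """Sequential scan cost in cycles (cf. the Cost_seq_scan formula above)."""
    return n_blocks * T + CPU * n_tuples * (1.0 / TUPLES_PER_BLOCK) * T

def table_access_windows(plan):
    """plan: list of (operation, table, cost_in_cycles) in execution order.
    Returns {table: [(start_cycle, end_cycle), ...]}; the gaps between successive
    windows of a table are its inter-access times."""
    clock, windows = 0, {}
    for _operation, table, cost in plan:
        windows.setdefault(table, []).append((clock, clock + cost))
        clock += cost
    return windows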
Consequently, if the table inter-access time is T, and the resynchronization time is T_p (assumed to be less than T), then the optimizer can transition the associated modules into a low-power mode (with a per-cycle energy of E_p) for the initial T - T_p period (which would consume a total of (T - T_p)·E_p energy), and activate the module to bring it back to the active mode at the end of this period, following which the module will resynchronize before it is accessed again (consuming T_p·E_a energy during the transition, assuming that E_a is the per-cycle energy for the active mode as well as during the transition period). As a result, the total energy consumption with this transitioning is (T - T_p)·E_p + T_p·E_a without any resynchronization overheads, while the consumption would have been T·E_a if there had been no transitioning (note that this calculation considers only the idle period). The DBMS optimizer evaluates all possible choices (low-power modes) based on the corresponding per-cycle energy costs, resynchronization times, and table inter-access time to pick the best choice. Note that the DBMS can select different low-power modes for different idle periods of the same module, depending on the duration of each idle period. Specifically, we use the most energy-saving low-power mode that does not increase the original query execution time (i.e., when the original idleness is over, the module should be up in the active mode, ready for the operation).
4.2.3 Inserting Bank-On/Off Instructions
The last part of the software-directed scheme is to insert explicit (operating) mode transitioning instructions into the final query execution plan. For this, we introduce place-markers (mapped to system calls) which are interpreted at the low level (later, by our memory controller, which actually sets the corresponding low-power modes). This is done so that the query execution engine can issue the query without much performance overhead, and with the same transparency.
As an example, consider the following. Let tables A and B each have 1000 records, each record being 64 bytes. Consider the query plan depicted in Figure 6(i), taken from PostgreSQL. The query plan reads from bottom to top (P2 follows P1). A scan of table A is done first, followed by a scan of table B. The results of these operations are then used by an aggregate operation. Another (independent) scan operation on table A follows the aggregate operation. The per-step access costs are also shown. From the generated query plan, it is evident that table A is not accessed between point P1 and point P2. Once the results are extracted after the scan at point P1, the banks that hold table A can be put into a low-power mode, and the banks that hold table B can be activated for data extraction. This is illustrated in Figure 6(ii) using place-markers for tables A and B. Banks holding table A are reactivated at point P2 (banks of table B remain off).
[Figure 6: Example application of the query-directed scheme. (i) The original execution plan; (ii) the augmented execution plan. Both plans read from bottom to top.]
(i) Original plan:
-> scan A (9000 cycles)    <- P2
-> aggregate (20 cycles)
-> scan B (9000 cycles)
-> scan A (9000 cycles)    <- P1
(ii) Augmented plan:
-> scan A
-> Put A=ON
-> aggregate
-> Put B=OFF
-> scan B
-> Put B=ON
-> Put A=OFF
-> scan A
-> Put A=ON (B is already OFF)
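Putting Sections 4.2.2 and 4.2.3 together, the sketch below chooses, for an estimated idle gap of a table, the most energy-saving mode whose resynchronization still fits inside the gap, and emits the corresponding place-markers in the style of Figure 6(ii). The per-cycle energies and resynchronization times are the RDRAM-based values of Figure 4; the function and marker-string names are our own and only approximate the optimizer's bookkeeping.

# Sketch: mode selection for an idle gap plus marker emission (illustrative only).
# mode -> (per-cycle energy in nJ, resynchronization time in cycles), from Figure 4
MODES = {
    "active":     (2.063, 0),
    "standby":    (0.743, 1),
    "nap":        (0.035, 16),
    "power-down": (0.025, 9000),
}
E_ACTIVE = MODES["active"][0]

def best_mode(idle_cycles):
    """Most energy-saving mode that is back in active mode when the gap ends."""
    best, best_energy = "active", idle_cycles * E_ACTIVE
    for mode, (e_p, t_p) in MODES.items():
        if t_p <= idle_cycles:
            # (T - Tp)*Ep while in the low-power mode + Tp*Ea while resynchronizing
            energy = (idle_cycles - t_p) * e_p + t_p * E_ACTIVE
            if energy < best_energy:
                best, best_energy = mode, energy
    return best, best_energy

def markers_for_gap(table, idle_cycles):
    """Place-markers bracketing an idle gap of the given table."""
    mode, _ = best_mode(idle_cycles)
    if mode == "active":
        return []
    return ["Put %s=OFF (%s)" % (table, mode), "Put %s=ON" % table]

For instance, markers_for_gap("A", 25) selects the standby mode, matching the worked example given with the default parameters in Section 5.1.3; with these values, deeper modes pay off only for much longer gaps, since their resynchronization cycles are charged at the full-power energy.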
EXPERIMENTAL EVALUATION OF HARDWARE-DIRECTED AND QUERY-DIRECTED SCHEMES
In this section, we study the potential energy benefits of our hardware- and software-directed schemes. We first explain the experimental setup that we used in our simulations. Then, the set of queries that we used to study our schemes is introduced. After that, we present the energy consumption results. While we discuss the energy benefits of using our schemes, we also elaborate on the overheads associated with supporting each of our schemes.
5.1 Setup
5.1.1 Simulation Environment
As mentioned before, the query-directed schemes are implemented in the query optimizer of the memory database model elaborated in Section 3.1. We interface this DBMS to an enhanced version of the SimpleScalar/Arm simulator [4] to form a complete database system. The intermediate interface (invoked by the DBMS) provides a set of operating system calls (on Linux kernel 2.4.25), which in turn invoke the SimpleScalar simulator. The SimpleScalar simulator models a modern microprocessor with a five-stage pipeline: fetch, decode, issue, write-back, and commit. We implemented our hardware techniques within the framework of the sim-outorder tool from the SimpleScalar suite, extended with ARM-ISA support [4]. Specifically, we modeled a processor architecture similar to that of the Intel StrongARM SA-1100. The modeled architecture has a 16 KB direct-mapped instruction cache and an 8 KB direct-mapped data cache (each with 32-byte lines). We also model a 32-entry fully-associative TLB with a 30-cycle miss latency. The off-chip bus is 32 bits wide. For estimating the power consumption (and hence, the energy consumption), we use the Wattch simulator from Princeton University [8].
Our banked memory model is based on [13, 21], as shown in Figure 4. We use the values from Figure 4 for modeling the delay (transition cycles) in the activation and resynchronization of the various power states. Our simulations account for all performance and energy overheads incurred by our schemes. In particular, the energy numbers we present include the energy spent in maintaining the idleness predictors (in the hardware-directed scheme), the energy spent in maintaining the table-to-bank mappings (in the query-directed scheme), and the energy spent in fetching and executing the bank turn-on/off instructions (in the query-directed scheme). The predictors were implemented using decrementing counters (equal in number to the banks) and a zero detector, based on the discussion in Section 4.1. The predictors are synchronized with the system cycles to maintain consistency of operation and to minimize the overheads. The query optimizer maintains the table-to-bank mappings, which are modeled as an array list for instant access. The bank turn-on/off instructions are executed by setting hardware registers, and hence, these instructions are modeled as register operations using the existing instruction set architecture. We present two important statistics in our experimental results. Energy consumption corresponds to the energy consumed in the memory system (including the above-mentioned overheads). We also present statistics about the performance overhead (i.e., increase in execution cycles) for each of our schemes. This overhead includes the cycles spent in resynchronization (penalty cycles are modeled based on the values in Figure 4) as well as the cycles spent (in the CPU datapath) in fetching and executing the turn-on/off instructions (in the query-directed scheme).
5.1.2 Queries
To evaluate our schemes for memory-resident databases, we considered two classes of queries. The first class is a subset of queries from the Transaction Processing Performance Council's TPC-H benchmark [35]. TPC-H involves complex queries with a large amount of data accesses.
Operations in decision support benchmarks (TPC-D, which evolved into TPC-H) have good spatial locality, with abundant data-intensive operations [9]. This helps us perform a rigorous test of our schemes. The top part of Table 1 gives details of the TPC-H queries we used and the corresponding database parameters. The selected operations represent a good mix and could be used to build a variety of complicated queries.
Table 1: The two classes of queries considered for our experiments.
TPC-H queries (PART, CUSTOMER, ORDERS, and LINEITEM tables generated using DBGEN with scale 1.0):
  Q6  - Simple query
  Q3  - Complex query involving JOIN
  Q4  - Complex query involving NEST
  Q17 - Complex query involving JOIN and NEST
Queries targeting a simple organizer (ADDRESSBOOK populated with 1.3 million entries; FRIENDS is a 50% subset and COLLEAGUES a 25% subset):
  P1  - Simple name and address lookup
  P2  - Lookup in directory of friends
  P3  - Lookup in directory of colleagues and friends
Memory-resident databases run queries that are different from the typical database queries seen in TPC-H. The second set of queries that we consider is representative of applications that execute on handheld devices. The typical operations that are performed on an organizer were imitated on our setup (we name these queries P1, P2, and P3). The first query involves a simple address lookup using a 'NAME' as input. The SQL for query P1 is shown in Table 2. Recent organizers [17, 25] provide an ordered view of the underlying addressbook database. For instance, organizers provide the creation of folders. A "friends" folder can be a collection of personnel with a tag set as "friend" in the addressbook. We defined a folder as a restrained/customized view of the same database (address book). Intuitively, query P2 strives to do a lookup of friends living in a particular city. The "friends" view, and hence the query P2, is also defined in Table 2. Query P3 combines views (folders). For this, we defined a new folder called "colleagues". P3 aims to find friends and/or colleagues whose names start with an 'a', living in a particular 'CITY'. Since P3 is very similar to P2 with some extra fields, we do not present the SQL for P3. The intermediate tables and results during query execution are also stored in the memory.
Table 2: SQL for the organizer queries.
Query P1:
  SELECT a_name, a_address, a_city, a_office_phone, a_home_phone, a_mobile_phone
  FROM addressbook
  WHERE a_name = '[NAME]';
View used by P2:
  CREATE VIEW friends AS
  SELECT a_name, a_address, a_city, a_home_phone, a_mobile_phone, a_email, a_web, a_specialnotes
  FROM addressbook
  WHERE a_tag = '[FRIEND]'
  GROUP BY a_name;
Query P2:
  SELECT a_address, a_home_phone, a_mobile_phone
  FROM friends
  WHERE a_city = '[CITY]'
  GROUP BY a_name;
5.1.3 Default Parameters
For our experiments, we populate our database using the DBGEN software from the TPC-H benchmark suite with a scale factor of 1.0. Our organizer database is populated with 1.3 million records.
For the dynamic threshold scheme, we use 10, 100, and 10,000 cycles as idle_standby, idle_nap, and idle_down, respectively. For all schemes, the banks are in power-down mode before their first access. On/Off instructions are inserted based on the inter-access times of tables. We use the same idle_standby, idle_nap, and idle_down cycle values for inserting instructions. As an example, consider a table whose inter-access time (T) is 25 cycles, which lies between 10 (idle_standby) and 100 (idle_nap) cycles. We insert an On/Off instruction at the beginning of T to put the table into standby mode for 24 cycles, taking into consideration the resynchronization period of 1 cycle as well. A similar technique is applied for inter-access times that fall in between the other power mode thresholds.
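As a quick check of this choice using the per-cycle energies of Figure 4 (and considering only the 25-cycle idle period itself): staying active would cost about 25 x 2.063 = 51.6 nJ, whereas spending 24 cycles in standby and 1 cycle resynchronizing at the active energy costs about 24 x 0.743 + 1 x 2.063 = 19.9 nJ, a saving of roughly 60% without stretching the gap. The nap mode is not attractive for such a short gap, since 16 of the 25 cycles would be spent resynchronizing at full-power energy.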
A single page transfer time is needed for the access cost calculation in the software-directed scheme. We derive this by executing the TPC-H queries on the SimpleScalar simulator (with the SA-1100 model) and by studying the cycle times for transferring a data block from memory to the cache. For all experiments, the default configuration is the 512 MB RDRAM memory with 8 MB banks. In the following section, we study the energy implications of our hardware and software schemes using this setup. We then present the performance overheads.
5.2 Query Energy Evaluation
Figure 7 shows the normalized memory energy consumption for our hardware-directed schemes. While presenting our results, we normalize all values with respect to the base case, which is the version with no query optimizations. "Static Standby" in Figure 7 indicates the static standby scheme. We see that, by simply putting the modules into standby mode after each access, this scheme is able to achieve an average 55% reduction in the memory energy consumption of the TPC-H queries when compared to the unoptimized case. The energy improvements are less pronounced in the case of the handheld queries (a 37% reduction on average). This is mainly because of the different number of tables manipulated by these two types of queries. In the TPC-H case, multiple tables are scattered across various banks and hence, there is greater potential for placing more memory banks into low-power modes. In the case of the handheld queries, there is just one table scattered across multiple banks, which makes putting modules into a low-power mode more difficult, as the modules are tightly connected as far as query access patterns are concerned. We also observe from Figure 7 that the dynamic threshold scheme further extends these improvements through its ability to put a bank into any of the possible low-power modes. On average, there is a 60% (43%) energy improvement for the TPC-H (handheld) queries.
[Figure 7: Energy consumption of the hardware- and software-directed schemes (Static Standby, Dynamic Threshold, On/Off Instr) for queries Q6, Q3, Q4, Q17, P1, P2, and P3. The values shown are normalized to the version with no energy optimizations.]
Figure 7 also shows the normalized energy behavior of our query-directed scheme (denoted On/Off Instr). It is evident that this scheme outperforms the best hardware-directed scheme (by an average of 10%) in saving memory energy. This is because of two main reasons. First, when bank idleness is estimated, the query-directed scheme has a very good idea about its length (duration). Therefore, it has the potential of choosing the most appropriate low-power mode for a given idleness. Second, based on its idleness estimate, it can also preactivate the bank. This
Consequently, the average memory energy\nconsumption of the query-directed scheme is just 32% of the unoptimized\nversion for TPC-H queries, and 44% in case of organizer\n(handheld) queries [i.e., an additional 8% (13%) improvement over\nthe hardware schemes for TPC-H (handheld) queries].\n5.3\nPerformance Overhead Analysis\nOur techniques are very effective in reducing the memory energy\nconsumption. As mentioned earlier, transitions from the low-power\nmodes to the active mode come with an overhead of resynchronization\n(in terms of both performance and energy). The energy values\nreported in previous section take into consideration the extra energy\nneeded to activate the modules as well. In this part, we quantify\nthe basic performance overheads that are faced in supporting our\nschemes.\nFigure 8 shows the performance overheads for both the hardware\nand software-directed schemes. The static standby scheme has the\nmaximum overhead, which is expected. This is especially the case\nwhen queries generate frequent memory accesses. The memory is\nbrought down to the standby mode after each access, and is resyn-chronized\nin another access that follows immediately. As a result,\nthe performance worsens as bad as 28% for the static standby case.\nOn the other hand, for the dynamic threshold scheme, the performance\noverhead is slightly better since the banks are not blindly\nput to a low-power mode after each access. This verifies our prediction\nthat when a module goes to low-power mode, it would either\nremain for a while in that mode or may even be transitioned\ninto a lower power mode. The query-directed scheme has the least\noverhead (<2%). The main reason for this is the ability of pre-activating\na bank before it is actually accessed. Therefore, considering\nboth performance and energy results, one may conclude\nthat the query-directed scheme is better than the hardware-directed\nschemes. However, it is also to be noted that the query-directed\nscheme requires access to the query optimizer. In comparison,\nthe hardware-based schemes can work with any query optimizer.\nTherefore, they might be better candidates when it is not possi-ble/profitable\nto modify the query plan.\nQUERY RESTRUCTURING\nThe approaches presented above mainly try to optimize energy\nconsumption without modifying the queries themselves (except\nmaybe for the query-directed scheme where we insert turn on/off\ninstructions in the query plan). In this section, we go one step\nfurther, and demonstrate that even larger energy savings are possible\nif one has the flexibility of reorganizing query operations. We\nshow how this can be achieved in the context of both individual\nqueries and multiple queries (optimized simultaneously). Our main\nobjective in restructuring queries is to increase memory bank inter-access\ntimes. Note that when bank inter-access time is increased,\nwe can either remain in a given low-power operating mode longer,\nthereby feeling the potential impact of resynchronization less (i.e.,\namortizing the cost of resynchronization); or we can switch to\na more energy saving mode (as we now have a longer idleness),\nwhich means more energy savings. We present different query\nrestructuring strategies for achieving this.\nWhen considering a single query, the bank inter-access times can\nbe increased by re-ordering query operations. 
On the other hand, the primary goal of the heuristic that targets multiple queries is to cluster the usage of tables from multiple queries together, so that the overall table accesses are localized. That is, assuming that we have multiple queries to optimize, our objective is to interleave these query executions in such a way that the reuse of individual tables (or of table portions) is maximized. In other words, when a table is accessed, we want to execute all other query operations (potentially coming from different queries) on that table (one after another), before we move to the next table. This also tends to cluster accesses to the same bank, and tends to increase the bank inter-access times (which is very important from an energy perspective, as explained above). In the following, we first study intra-query restructuring and then inter-query restructuring. After these two steps, bank turn-on/off instructions are inserted at the relevant points, depending on the bank access patterns.
Step 1 (intra-query optimization): A query is first examined to see if there are any potential reuse regions. If there are any reusable regions, their accesses are grouped together.
We achieve this by examining the query execution plan. The query plan is studied to see if there are any advantages in rearranging the operations (primitives) in a query based on its table usage. Operations that require the same (set of) table(s) are then grouped together (i.e., they are scheduled to be executed one after another). The detailed procedure is shown in Figure 9. Each operation in the query plan is first scanned and placed into a table group based on the table(s) that it accesses. Then, the operations are rearranged in the query plan (taking into account the dependencies between them) based on their corresponding table groups. For this, we look at the query plan tree. The path from each leaf node to the root, called a stream, is investigated. The ultimate goal is to schedule operations (nodes in the plan tree) based on their table groups. We try to schedule operations within one table group (which is currently active) before scheduling the operations from another table group (which is not active), in an attempt to increase the bank inter-access times. That is, a stream is traversed from bottom to top, and each node within the stream is put into the schedule queue (as it is encountered) based on its table group. It should be emphasized that we preserve the original semantics of the operations (constraints) in the algorithm. This procedure is repeated for each stream in the tree, until all streams have the most energy-efficient schedule based on their table accesses. At the end of this step, an energy-aware schedule queue is generated for the considered query (saved in schedule_list).
Step 2 (inter-query optimization): Tables are examined to optimize multiple queries simultaneously. For each table that is accessed, all accesses arising from multiple queries to that particular table are grouped together.
In this step, the schedule_lists from multiple queries are grouped together.
Each list is scanned to identify nodes that access a given table. The nodes that access the same table are then scheduled to execute together (without disturbing the dependency constraints). In fact, the nodes from multiple queries are just grouped (combined), not reordered. Thus, in this step, the constraint flow for each schedule_list (taken care of in Step 1) is automatically maintained. Additional conditional flow checks could be reinforced at this stage if desired. Figure 10 shows the regrouping procedure. final_schedule_list stores the final consolidated schedule of operations from all the queries.
Step 3 (energy optimizations): Include energy optimizations by inserting On/Off instructions into the final schedule list.
In this step, the access costs are calculated for each operation in the final_schedule_list as shown in Section 4.2.2. Each operation is attached with an access cost, and the turn-on/off instructions are inserted based on the table inter-access times. The methodology used for adding these instructions to the final_schedule_list is the same as in Section 4.2.2, and the on/off markers are placed as elaborated in Section 4.2.3.

table_group is a table-to-operations mapping list.
schedule_list stores the final schedule of operations.

/* identify the group to which an operation belongs */
operation_rearrangement() {
  for (each operation in query i) {
    identify the table(s) in i;
    for (each table j in i) {
      add operation to table_group[j];
    }
  }
  schedule_operations();
}

/* schedule operations */
schedule_operations() {
  schedule_list = empty;
  do {
    for (each stream in query plan tree) {
      start from leaf node;
      for (each node in stream) {
        identify its constraint nodes that follow;
        /* the rest are independent nodes */
        group(constraint nodes);
        group(independent nodes);
        check for new violations;
        add new constraints if necessary;
        save the schedule_list;
        move up a node in the stream;
      }
      move to the next stream;
    }
  } until no more changes
}

/* group nodes */
group(node_list) {
  if (node_list is constraint node list) {
    for (each node in node_list) {
      lookup table_group of node;
      add node to schedule_list based on table_group;
      /* preserve the dependency order */
      preserve flow of node_list in schedule_list;
    }
  } else { /* set of independent nodes */
    add node to schedule_list based on table_group;
    /* no need to preserve constraint flow */
    regroup to put all table_group nodes together;
  }
}

Figure 9: Reorganizing operations within a query to optimize for energy (Step 1). The query tree is investigated from bottom to top for grouping operations based on their table accesses.

group_multiple_queries() {
  for (each schedule_list) {
    do {
      pick an unscheduled node i in schedule_list;
      /* i.e., pick a node without a "complete" tag */
      for (other schedule_lists) {
        if (node j has same table_group as node i) {
          schedule node j after node i;
          mark node j as "complete";
          /* with respect to the multi-query schedule */
        }
      }
    } until all nodes in schedule_list are "complete"
  }
}
Towards the end of the procedure, final_schedule_list stores the entire "complete" schedule.

Figure 10: Grouping the schedule_lists from multiple queries (Step 2). Operations from multiple queries are grouped based on their table accesses using their corresponding schedule_lists.
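The pseudocode of Figures 9 and 10 can be condensed into the following executable Python sketch, which groups a query's operations by the table they touch (Step 1) and then merges the per-query schedules so that operations on the same table run back to back (Step 2). It is a deliberate simplification of the figures: each operation is assumed to touch a single table, dependence handling is reduced to preserving each query's original per-table operation order, and all names are ours.

# Sketch of Steps 1 and 2 of the query restructuring heuristic (illustrative only).
# An operation is a (name, table) pair; a query is an ordered list of operations.

def intra_query_schedule(operations):
    """Step 1: cluster a query's operations by table, keeping the original
    relative order inside each table group."""
    table_groups = {}
    for name, table in operations:
        table_groups.setdefault(table, []).append((name, table))
    schedule = []
    for table in table_groups:  # tables in first-touch order
        schedule.extend(table_groups[table])
    return schedule

def group_multiple_queries(schedules):
    """Step 2: merge per-query schedules so that all operations touching a
    given table (from any query) are scheduled together."""
    final_schedule, done = [], set()
    for qi, sched in enumerate(schedules):
        for oi, (_name, table) in enumerate(sched):
            if (qi, oi) in done:
                continue
            # schedule every not-yet-scheduled operation of this table group,
            # drawing from all queries, before moving to the next table
            for qj, other in enumerate(schedules):
                for oj, (name2, table2) in enumerate(other):
                    if table2 == table and (qj, oj) not in done:
                        final_schedule.append((name2, table2, "Q%d" % (qj + 1)))
                        done.add((qj, oj))
    return final_schedule

Applied to two queries that each scan tables A and B, this clusters the A operations of both queries together and then the B operations, in the spirit of Figure 11(iii); Step 3 then walks the merged schedule and inserts the Put ON/OFF markers around the resulting table clusters.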
As an example, consider two queries, Q1 and Q2. Their original query plans are shown in Figure 11(i). Q1 is revised, as its table accesses are optimizable. Figure 11(ii) shows the result after applying Step 1. Step 2 results in the output depicted in Figure 11(iii). Finally, in Step 3, we insert on/off instructions in the appropriate places (see Figure 11(iv)).
[Figure 11: Example of query restructuring and regrouping based on energy behavior. All plans read from bottom to top.]
(i) Original queries:
Q1: -> function(A) (9000 cycles)
    -> hash join
    -> scan B (9000 cycles)
    -> scan A (9000 cycles)
Q2: -> aggregate (20 cycles)
    -> scan B (9000 cycles)
    -> scan A (9000 cycles)
(ii) After applying Step 1:
Q1: -> hash join
    -> scan B (9000 cycles)
    -> function(A) (9000 cycles)
    -> scan A (9000 cycles)
Q2: -> aggregate (20 cycles)
    -> scan B (9000 cycles)
    -> scan A (9000 cycles)
(iii) After applying Step 2 (Q1 + Q2):
-> aggregate (from Q2)
-> hash join (from Q1)
-> scan B (from Q2)
-> scan B (from Q1)
-> function(A) (from Q1)
-> scan A (from Q2)
-> scan A (from Q1)
(iv) After applying Step 3 (Q1 + Q2):
-> aggregate (from Q2)
-> Put B=OFF (A is already OFF)
-> hash join (from Q1)
-> scan B (from Q2)
-> scan B (from Q1)
-> Put B=ON
-> Put A=OFF
-> function(A) (from Q1)
-> scan A (from Q2)
-> scan A (from Q1)
-> Put A=ON (B is already OFF)
EXPERIMENTAL EVALUATION OF QUERY RESTRUCTURING
In this section, we evaluate our query restructuring approach by extending the database and queries discussed in Section 5.1.2. As before, our focus is on memory energy consumption. We also study the impact of the technique on the overall performance. Towards the end, other alternative options are also elaborated.
7.1 Multi-Query Setup
Since simultaneous processing of multiple queries is needed to validate our approach, we considered combinations of queries, which we term scenarios in the rest of this paper. Among the queries considered in Section 5.1.2, there can be multiple combinations of queries that arrive sequentially and that are optimizable using our technique. The various combinations (scenarios) of organizer queries and their naming scheme are shown in Table 3. For instance, P12 indicates that P1 is processed sequentially along with P2. The combination scenarios for the TPC-H queries are shown in Table 4. The combinations shown in these tables are the prominent ones; the behavior of other combinations is very similar to these, and hence they are not included in this paper.
Table 3: Scenarios for organizer queries.
  Two-query combinations: P11 (P1 + P1), P12 (P1 + P2), P23 (P2 + P3)
  Three-query combination: P123 (P1 + P2 + P3)
Table 4: Scenarios for TPC-H queries.
  Two-query combinations: S11 (Q6 + Q6), S12 (Q6 + Q3), S13 (Q6 + Q4), S14 (Q6 + Q17), S23 (Q3 + Q4), S24 (Q3 + Q17), S34 (Q4 + Q17)
  Three-query combinations: S222 (Q3 + Q3 + Q3), S123 (Q6 + Q3 + Q4)
  Four-query combinations: S1111 (Q6 + Q6 + Q6 + Q6), S1234 (Q6 + Q3 + Q4 + Q17)
7.2 Query Energy Evaluation
In this section, we evaluate the query energy of the various scenarios presented in the previous section. We first study the improvements obtained from our query restructuring heuristic, and then extend our study to combine query restructuring with the various hardware- and software-directed schemes (of Section 4) meant to improve the energy consumption.
Figure 12 shows the sole contribution of the query restructuring scheme to improving the energy consumption. The energy consumption reduces by an average of 55% from the unoptimized version when our query restructuring scheme is used. By just grouping similar accesses (to ensure data reuse), query restructuring can achieve a significant reduction in the energy consumption of multiple queries.
In order to identify the benefits coming solely from Step 1 (intra-query optimization) in our query restructuring scheme, we also combined Step 1 and Step 3, and compared it with our query-directed scheme (studied in Section 4.2 -- which is simply Step 3 of our query restructuring). Figure 13 shows the results. There is up to a 19% improvement in energy when operations are shuffled within a query based on their table usage.
When the query restructuring scheme is combined with the hardware-directed schemes, there is a further improvement in energy savings (Figure 14).
The static standby scheme works only for small queries that have a uniform access pattern; when complex queries are encountered, the dynamic runtime scheme outperforms the static standby scheme due to its good prediction of the application behavior. This can be seen in Figure 14, where the dynamic threshold scheme performs better for the TPC-H scenarios than for the handheld query scenarios. The savings obtained by putting a module into multiple low-power modes for longer periods are greater than the savings obtained by periodically putting a module into just the standby mode.
The software-directed schemes perform similarly to the dynamic runtime threshold strategy when combined with the query restructuring algorithm. In Figure 14, the insertion of explicit turn-on/off instructions improves the energy by an average of 78% when compared to the unoptimized version. This result is comparable to the improvements obtained using the dynamic threshold scheme. In fact, the dynamic threshold scheme performs slightly better for some TPC-H queries (e.g., S12, S13, and S14). This situation occurs due to the following factor. When multiple queries are combined using query restructuring, it becomes difficult to predict the inter-access times, since each query has a varying access pattern, and combining random access patterns complicates the job of the predictor (and requires a more sophisticated predictor). The runtime schemes work at the hardware instruction level without any knowledge of the DBMS application. Still, this illustrates how a simple software technique implemented at the query optimizer (by just analyzing the high-level query structure) is able to achieve improvements as good as an equivalent but expensive hardware technique.
As mentioned earlier in the paper, when queries are restructured and grouped, the memory access pattern changes. The bank turn-on/off instructions can be inserted only in prominent "hot" and "cold" access regions, respectively. There are also a few modules whose behavior is beyond the control of the software. For instance, we insert turn-on/off instructions based on tables. A given table could be scattered across many modules. Our predictor estimates the inter-access time for which the entire table needs to be put into a low-power mode. However, even during a table access, there are regions (modules) that are hardly used.
The dynamic runtime scheme is extremely good at handling such under-used modules, because it can put an individual module into a low-power state based on just that module's accesses. This implies that a combination of hardware and software schemes forms the best strategy when query restructuring is deployed.

Figure 14 also shows the case when both the dynamic runtime scheme and the turn-on/off instructions are used in tandem after query restructuring. The benefits obtained from such a hardware-software interaction are prominent. There is an average 90% reduction in the memory energy consumption across the applications. In some cases, there is up to 95% improvement in the energy consumption. These results clearly show that query restructuring combined with the use of low-power operating modes can lead to significant energy savings.

[Figure 12: Contribution of query restructuring towards energy improvements. The energy values shown are normalized to the version with no optimizations.]

7.3 Performance Overhead Analysis

Query restructuring combined with the use of low-power modes has an impact on the performance. In Figure 15, we present the normalized system-wide performance of our query restructuring scheme. It is evident that the performance improves by an average of 48% when multiple queries are restructured and grouped. The improvement in performance is mainly due to the improved locality utilization in the memory hierarchy. That is, the data brought into the cache by one query is reused by other queries (as a result of restructuring). We do not present detailed cache behavior statistics here due to lack of space.

Figure 16 shows the normalized performance for the combination schemes as well. When the static standby scheme is used with query restructuring, the performance improvements obtained from query restructuring are negated by the resynchronization overhead incurred by the standby mode on each access.

[Figure 13: Benefits obtained by restructuring operations within a query (contribution of Step 1).]

[Figure 14: Energy consumption reduces significantly when low-power modes are utilized along with the query restructuring scheme. Values shown are normalized to the unoptimized version. The best energy savings come from a hybrid hardware-software scheme.]
Thus, the performance worsens in some cases by as much as 65% for complicated queries. However, overall, there is still a 10% performance improvement across all applications on average. The turn-on/off instructions have the least performance overhead, and hence preserve the performance improvements obtained from query restructuring. From Figure 16, this combination shows a 47% improvement in performance (negating the improvements obtained from basic query restructuring by a mere 1%). The dynamic runtime threshold, on the other hand, negates the performance improvements from query restructuring by an average of 6%. Combining turn-on/off instructions with the dynamic runtime threshold shows an average performance improvement of 45% for the applications, which implies a 3% overhead added by the low-power schemes on top of query restructuring. Thus, it is clear that query restructuring with both turn-on/off instructions and the runtime threshold forms the best alternative from both the energy consumption and performance perspectives.

[Figure 15: Performance improvement obtained from basic query restructuring over the unoptimized version.]

[Figure 16: Overall performance after applying energy optimizations along with query restructuring. Values shown are normalized to the unoptimized version.]

CONCLUDING REMARKS

This paper is an attempt to study the potential of employing low-power operating modes to save memory energy during query execution. We propose hardware-directed and software-directed (query-directed) schemes that periodically transition the memory to low-power modes in order to reduce the energy consumption of memory-resident databases. Our experimental evaluations using two sets of queries clearly demonstrate that query-directed schemes perform better than hardware-directed schemes, since the query optimizer knows the query access pattern prior to query execution and can make use of this information in selecting the most suitable mode to use when idleness is detected. This scheme brings about a 68% reduction in energy consumption. In addition, the query-directed scheme can also preactivate memory banks before they are actually needed to reduce the potential performance penalty.

Our query restructuring scheme based on memory bank accesses provides another scope for optimization. One can reorder operations within a query to increase bank inter-access times. It is also possible to go beyond this and consider the access patterns of multiple queries at the same time. Multiple queries are optimized based on their table accesses, i.e., all accesses to a table are clustered as much as possible. This scheme is able to put memory banks into low-power operating modes for longer periods of time due to fewer table activations. There is up to 90% improvement in energy and 45% improvement in performance when queries are restructured and regrouped based on their table accesses.
Overall, we can conclude\nthat a suitable combination of query restructuring and low-power\nmode management can bring large energy benefits without hurting\nperformance.\nREFERENCES\n[1] R. Alonso and S. Ganguly. Query optimization for energy efficiency in mobile\nenvironments. In Proc. of the Fifth Workshop on Foundations of Models and\nLanguages for Data and Objects, 1993.\n[2] N. An, S. Gurumurthi, A. Sivasubramaniam, N. Vijaykrishnan, M. Kandemir,\nand M.J. Irwin. Energy-performance trade-offs for spatial access methods on\nmemory-resident data. The VLDB Journal, 11(3):179197, 2002.\n[3] N. Anciaux, L. Bouganim, and P. Pucheral. On finding a memory lower bound\nfor query evaluation in lightweight devices. Technical report, PRiSM Laboratoire\nde recherche en informatique, 2003.\n[4] T. M. Austin. The simplescalar/arm toolset. SimpleScalar LLC.\nhttp://www.simplescalar.com/.\n[5] Birdstep Technology. Database Management In Real-time and Embedded\nSystems - Technical White Paper, 2003. http://www.birdstep.com/collaterals/.\n[6] Bloor Research Ltd. Main Memory Databases, November 1999.\n[7] P.A. Boncz, S. Manegold, and M.L. Kersten. Database architecture optimized\nfor the new bottleneck: Memory access. In The VLDB Journal, pages 5465,\n1999.\n[8] D. Brooks, V. Tiwari, and M. Martonosi. Wattch: a framework for\narchitectural-level power analysis and optimizations. In Proc. International\nSymposium on Computer Architecture, 2000.\n[9] Q. Cao, P. Trancoso, J.-L Larriba-Pey, J. Torrellas, R. Knighten, and Y. Won.\nDetailed characterization of a quad pentium pro server running tpc-d. In Proc.\nof the International Conference on Computer Design, 1999.\n[10] S. Chaudhuri and K. Shim. Optimization of queries with user-defined\npredicates. ACM Transactions on Database Systems, 24(2):177228, 1999.\n[11] G.P. Copeland, T. Keller, R. Krishnamurthy, and M. Smith. The case for safe\nram. In Proc. of the Fifteenth International Conference on Very Large Data\nBases, pages 327335, 1989.\n[12] Database Management System, The PostgreSQL Global Development Group.\nPostgreSQL 7.2, 2001. http://www.postgresql.org/.\n[13] V. Delaluz, M. Kandemir, N. Vijaykrishnan, A. Sivasubramaniam, and M.J.\nIrwin. Dram energy management using software and hardware directed power\nmode control. In Proc. of the International Symposium on High-Performance\nComputer Architecture, 2001.\n[14] Z. Fong. The design and implementation of the postgres query optimizer.\nTechnical report, University of California, Berkeley.\nhttp://s2k-ftp.cs.berkeley.edu:8000/postgres/papers/.\n[15] P. Gassner, G.M. Lohman, K.B. Schiefer, and Y. Wang. Query optimization in\nthe ibm db2 family. Data Engineering Bulletin, 16(4):418, 1993.\n[16] Le Gruenwald and S.M. Banik. Energy-efficient transaction management for\nreal-time mobile databases in ad-hoc network environments. In Proc. of the\nSecond International Conference on Mobile Data Management, 2001.\n[17] Handspring. Handspring Organizers, 2004.\nhttp://www.handspring.com/products/.\n[18] J.M. Hellerstein. Optimization techniques for queries with expensive methods.\nACM Transactions on Database Systems, 23(2):113157, 1998.\n[19] T. Imielinski, S. Viswanathan, and B.R. Badrinath. Energy efficient indexing on\nair. In Proc. of ACM SIGMOD Conference, 1994.\n[20] Intel Corporation. Intel 440BX AGPset: 82443BX Host Bridge/Controller Data\nSheet, April 1998.\n[21] A.R. Lebeck, X. Fan, H. Zeng, and C.S. Ellis. Power aware page allocation. In\nProc. 
of the International Conference on Architectural Support for\nProgramming Languages and Operating Systems, 2000.\n[22] S. Madden, M.J. Franklin, J.M. Hellerstein, and W. Hong. The design of an\nacquisitional query processor for sensor networks. In Proc. of the ACM\nSIGMOD International Conference on Management of Data, pages 491502.\nACM Press, 2003.\n[23] S. Manegold. Understanding, Modeling, and Improving Main-Memory\nDatabase Performance. Ph.d. thesis, Universiteit van Amsterdam, Amsterdam,\nThe Netherlands, December 2002.\n[24] C.L. Monma and J.B. Sidney. Sequencing with series-parallel precedence\nconstraints. Mathematics of Operations Research, 4:215224, 1979.\n[25] Palm Inc. Palm Handhelds, 2004. http://www.palm.com/products/.\n[26] The PostgreSQL Global Development Group. PostgreSQL 7.2 Developers\nGuide, 2002. http://www.postgresql.org/docs/.\n[27] P. Pucheral, L. Bouganim, P. Valduriez, and C. Bobineau. Picodbms: Scaling\ndown database techniques for the smartcard. The VLDB Journal,\n12(1):120132, 2001.\n[28] J.M. Rabaey, A. Chandrakasan, and B. Nikolic. Digital Integrated Circuits.\nPrentice Hall, second edition, 2002.\n[29] R. Ramakrishnan and J. Gehrke. Database Management Systems. McGraw-Hill\npublishers, third edition, 2002.\n[30] Rambus Inc. Rambus RDRAM 512MB Datasheet, 2003.\n[31] Samsung Microelectronics. Mobile 512MB DRAM Chip Series.\nhttp://www.samsung.com/Products/Semiconductor/.\n[32] S. Sarawagi and M. Stonebraker. Reordering query execution in tertiary\nmemory databases. In The VLDB Journal, pages 156167, 1996.\n[33] A. Silberschatz, H.F. Korth, and S. Sudarshan. Database System Concepts.\nMcGraw-Hill, fourth edition, 2001.\n[34] Sleepycat Software. Berkeley DB V4.2, 2004.\nhttp://www.sleepycat.com/docs/index.html.\n[35] Transaction Processing Performance Council. TPC-H Benchmark Revision\n2.0.0, 2003.\n227", "keywords": "hardware energy scheme;query-directed energy management;power consumption;memory-resident databases;database;energy;low-power modes;query-directed scheme;banked memory;multi-query optimization;DRAM;energy optimization;query restructuring"} {"name": "82", "title": "Enforcing Security and Safety Models with an Information Flow Analysis Tool", "abstract": "Existing security models require that information of a given security level be prevented from \"leaking\" into lower-security information. High-security applications must be demonstrably free of such leaks, but such demonstration may require substantial manual analysis. Other authors have argued that the natural way to enforce these models automatically is with information-flow analysis, but have not shown this to be practicable for general purpose programming languages in current use. Modern safety-critical systems can contain software components with differing safety integrity levels, potentially operating in the same address space. This case poses problems similar to systems with differing security levels; failure to show separation of data may require the entire system to be validated at the higher integrity level. In this paper we show how the information flow model enforced by the SPARK Examiner provides support for enforcing these security and safety models. 
We describe an extension to the SPARK variable annotations which allows the specification of a security or safety level for each state variable, and an extension to the SPARK analysis which automatically enforces a given information flow policy on a SPARK program.", "fulltext": "INTRODUCTION\nSoftware is often used to develop security-critical applications\n. Some of these applications are required to manage information\nof different classification levels, ensuring that each\nuser may only access the data for which he or she has adequate\nauthorisation. This requirement has two components;\nthe developer must produce a design and implementation\nwhich supports this model and its constraints, and the implementation\nmust support verification that the constraints\nare never violated. It is not enough for the implementation\nto be correct; for it to be secure, it must be seen to be\ncorrect.\nIn this paper we examine the problem of developing and\nverifying a security-critical application containing this security\nstructure (often termed multi-level security). We will\nanalyse recent work on information flow and security and\nshow how the information flow analysis of the SPARK Examiner\ntool is appropriate for solving this problem. We will\nshow how it is also important for the efficient analysis of\nsafety-critical systems. We will then describe modifications\nto the current definition of the SPARK Ada language[2]\nand the Examiner which permit complete validation of Ada\nprograms against defined security models such as the Bell-LaPadula\nmodel[3].\nWe intend that the additional flow analysis features described\nhere will appear in a future commercial Examiner\nrelease. Although we anticipate possible minor changes to\nthe syntax and analysis performed, we expect the released\nversion to implement the analysis described here in all substantial\naspects.\nEXISTING WORK\nIn this section we identify typical standards which will inform\nthe development and verification of a security-critical\nor safety-critical application. We then analyse a survey paper\non information flow and security, and compare its conclusions\nagainst the information flow model of the SPARK\nannotated subset of Ada95.\n2.1 Standards\nThe Common Criteria for IT Security [5] specify the development\nand verification activities that are suitable for\napplications at differing levels of security. These Evaluation\nAssurance Levels (EALs) range from EAL-1 (lowest) to\nEAL-7 (highest). As the EAL number rises, the required\nactivities become more rigorous; at the higher levels, formal\nspecification and analysis of the software becomes required.\n39\nFor safety-related applications there are similar concepts\nfor the safety criticality of software; RTCA DO-178B[10]\nfor civil avionics software defines criticality levels E (lowest\n) through to A (most critical). As one would expect,\nthe required development and verification activities become\nincreasingly more onerous (and expensive) with increasing\ncriticality level. As a result, if an avionics system were to\ncontain a \"core\" of critical functionality at Level A and a\nlarger body of utility code at Level D then either the entire\nsystem would have to be developed and verified at Level A\nor a rigorous argument would have to be applied that the\nLevel D code could not in any way affect the integrity of the\nLevel A code.\nAnother notation for safety criticality is the Safety Integrity\nLevel (SIL), described in IEC 61508[8]. 
SIL-1 is the\nlowest level of safety integrity, and SIL-4 the highest; DO-178B\nlevel A approximates to SIL-3/SIL-4 when comparing\nthe development and verification activities required.\nIn this paper we assume that we are attempting to validate\nsystems at EAL-5 or greater (for security) and RTCA\nLevel B or greater (for safety). We are therefore required to\nprovide a rigorous and comprehensive justification for any\nstatements which we make about the separation of data.\nTherefore we now look at how such statements may be expressed\n.\n2.2 Information Flow\n\"Information flow\" as it applies to conventional imperative\ncomputer programs has a range of definitions, and is\noften confused with \"data flow\"; we shall take the definition\nas expressed in Barnes[2] p.13:\ndata flow analysis is concerned with the direction of\ndata flow; whereas\ninformation flow analysis also considers the coupling\nbetween variables.\nAn example of the difference between data flow and information\nflow comes from the following (Ada) code:\nwhile (X < 4) loop\nA := A + 2 * B;\nX := X * 2;\nend loop;\nHere, data flow analysis would state that data flows from\nX to X and from A and B to A. Information flow analysis\nwould note additionally that the final value of A is affected\nby the initial value of X, and hence that there is information\nflow from X to A.\nSabelfeld and Myers[11] recently surveyed the use of information\nflow analysis in software. They viewed the fundamental\nproblem of maintaining security as one of tracking\nthe flow of information in computing systems. They examined\nthe various ways that secure information could leak into\nless secure information, overtly and covertly. They identify\nin particular the concept of implicit information flows, an\nexample of which is given in the while-loop example above.\nTheir survey characterised language-based information flow\nas requiring:\n1. semantics-based security, so that rigorous argument\ncould be made about a variation of a high-security\nvalue not affecting a low-security value; and\n2. a security type system, so that a program or expression\ncan be assigned security values (types) and the type\nchanges of the program or expression can be characterised\n.\nThey concluded that \"standard\" security practices do not\nand cannot enforce the end-to-end confidentiality required\nby common security models. They characterised the existing\nwork in security and information flow analysis, but notably\ndid not address the work of Bergeretti and Carre[4].\n2.3 Security models\nThe Bell-LaPadula (BLP) model of computer security[3]\nenforces two properties:\n1. no process may read data from a higher security level;\nand\n2. 
no process may write data to a lower security level.\nSuch multi-level security has a number of problems; Anderson\n[1] provides a list of them including:\n\"blind write-up\": the inability to inform low-security\ndata whether a write to high-security data has happened\ncorrectly;\n\"downgrading\": moving information from a high security\nlevel to a lower level is sometimes desirable; and\n\"TCB bloat\": a large subset of the operating system\nmay end up in the Trusted Computing Base (TCB).\nThe Dolev-Yao security model[6] makes secret information\nindivisible; it cannot be leaked in part but only in total.\n2.4 Information Flow in Ada\nBergeretti and Carre wrote a seminal paper[4] describing\na practical implementation of information flow analysis\nin the SPADE Pascal language (although the principles\nwere applicable to most conventional imperative programming\nlanguages). This was managed by the composition\nof matrices representing information flow dependencies between\nvariable imports and exports. Notably, conditional\nand infinite loops were permissible and analysable within\nthe framework of such a language; these were managed by\ncomputing the transitive closure of the information flow matrix\ncorresponding to one execution of the loop.\nThis information flow model was implemented in the SPARK\nannotated Ada subset[2]. The subset requires each subprogram\nto be annotated with the required information flow\n(\"derives\" annotations) if information flow analysis is required\n. The SPARK subset is enforced by the SPARK Examiner\ntool which checks the required information flow of\neach subprogram against the actual information flow. The\nannotations (and other SPARK rules such as the ban on circular\ndependency) are necessary to make this information\nflow analysis tractable.\nThe result of this is that it is possible, in a fully-analysed\nSPARK program, to be certain that a given exported variable\nis independent of a given imported variable. This is a\nkey step towards supporting multi-level security in SPARK\nAda, and makes it easier to write demonstrably secure and\ncorrect code, but is not a complete solution. In Section 3\nwe will describe how SPARK Examiner analysis may be extended\nbetter to implement these checks.\nWe now examine case studies of the use of SPARK, and\nthe utility of information flow in real applications.\n40\n2.5 Security and Safety Applications\nAn example of a high-security application which was partly\ndeveloped in SPARK is the MULTOS CA[7]. This was developed\nand verified at a high level of integrity, approximating\nto the Common Criteria EAL-5. The delivered system\nhad a defect rate of 0.04 errors per thousand lines of\ncode (KLOC). This showed that SPARK was a practical\nlanguage for implementing high-security applications. Part\nof the analysis required during development and verification\nof the CA was to show that secret data could not leak out\ndirectly or indirectly in unclassified outputs.\nAn example of a safety-critical application with mixed integrity\nlevels where information flow analysis was helpful\nwas the SHOLIS information system described by King et\nal.[9]. This application mixed SIL-4 and non-critical code,\nand was written in the SPARK subset of Ada83. Static\nverification was used, including information flow analysis\nand partial program proof, to verify significant properties\nof SHOLIS. 
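The matrix-based information flow analysis of Bergeretti and Carré that both of these applications rely on can be sketched briefly. The following Python fragment is an illustrative reconstruction, not the SPADE or Examiner implementation: it represents each statement's import-to-export dependencies as a relation, composes relations in sequence, and closes a loop body transitively, which is enough to ask whether a given export can depend on a given import.

# Illustrative reconstruction of information-flow relations (not the Examiner).
# A relation maps each exported variable to the set of imports it may depend on.

def compose(first, then):
    """Flow of 'first; then': exports of 'then' traced back through 'first'."""
    result = {}
    for export, deps in then.items():
        traced = set()
        for d in deps:
            traced |= first.get(d, {d})   # substitute d's own dependencies
        result[export] = traced
    # Exports untouched by 'then' keep the flow established by 'first'.
    for export, deps in first.items():
        result.setdefault(export, set(deps))
    return result

def loop_closure(body):
    """Fixed point for repeating 'body' zero or more times (while-loop rule)."""
    closure = {v: {v} for v in body}      # zero iterations: identity
    while True:
        step = compose(closure, body)
        merged = {v: closure.get(v, {v}) | step.get(v, set())
                  for v in set(closure) | set(step)}
        if merged == closure:
            return closure
        closure = merged

# The while-loop of Section 2.2: the assignment to A executes under a
# condition on X, so A's dependencies include A, B and (implicitly) X.
body = {"A": {"A", "B", "X"},
        "X": {"X"}}
flows = loop_closure(body)
print(sorted(flows["A"]))        # ['A', 'B', 'X']: X influences the final A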
There was a successful argument based partly on information flow analysis that the high-SIL code was not compromised by the low-SIL code; however, this argument had to be made manually, based on the validated information flow of each subprogram, as the Examiner did not provide such tracing facilities at the program level.

IMPROVING SPARK ANALYSIS

The outstanding need in SPARK is to be able to mark state variables in packages with a (numerical) level of security and/or safety. This has been implemented by allowing an optional aggregate after a package ("own") variable declaration.

3.1 Marking security levels

Suppose that a package Classify was defined as follows to create various secrecy levels:

package Classify is
  -- Security levels
  UNCLASSIFIED : constant := 0;
  RESTRICTED   : constant := 1;
  CONFIDENTIAL : constant := 2;
  SECRET       : constant := 3;
  TOPSECRET    : constant := 4;
end Classify;

Then we define a package KeyStore to store and manage a symmetric encryption key SymmetricKey, designed to mutate after each encryption according to a rotation parameter RotorValue. The key is clearly a high-security data item, with the rotor value requiring less security. KeyStore marks its state variables with the field Integrity (a name chosen to make sense for both security and safety applications) thus:

--# inherit Classify;
package KeyStore
--# own SymmetricKey(Integrity => Classify.SECRET);
--#     RotorValue(Integrity => Classify.RESTRICTED);
is
  procedure Rotate;
  --# global in     RotorValue;
  --#        in out SymmetricKey;
  --# derives SymmetricKey from
  --#         SymmetricKey, RotorValue;

  procedure Encrypt(C : in MessageBlock;
                    E : out MessageBlock);
  --# global in SymmetricKey;
  --# derives E from C, SymmetricKey;
  ...
end KeyStore;

Any security analysis must show, in the case of this code, that SymmetricKey data cannot leak into RotorValue or other lower-integrity data; i.e. there must be no subprogram (or main program) whose information flow annotation shows SymmetricKey as an import to RotorValue.

Note that in this case package Classify will need to be inherited by all relevant packages, but will never be withed and so never compiled.

3.2 Implementation

We now define how the integrity levels are marked in the SPARK language, and how they are enforced by the SPARK Examiner.

The extra information flow checking is invoked using the /policy=X command line switch to the Examiner. Current valid policy values are security and safety.

3.2.1 Variable declaration

The above Integrity property on package own variables is a static property; at any point in static analysis of the code, the actual and required integrity level of an own variable is known. Own variables without Integrity levels are taken to have a default integrity level of Natural'Last (i.e. very highly classified) if imported under a security policy, and Natural'First (i.e. unclassified) if exported under a security policy.
This gives the most paranoid checking so that data may not leak from an input to an output through an intermediate variable with unspecified integrity. If the analysis policy is safety then the default integrity values for unspecified own variables are reversed.

3.2.2 Integrity checks

Information flow is declared explicitly in derives annotations for subprograms; an example from our cryptographic KeyStore is the Rotate subprogram which changes the key based on the value of RotorValue:

procedure Rotate;
--# global in     RotorValue;
--#        in out SymmetricKey;
--# derives SymmetricKey from
--#         SymmetricKey, RotorValue;

Using policy security the Examiner will then check that the integrity level of the export SymmetricKey (SECRET) is no less than the integrity levels of any import (SECRET and RESTRICTED). With policy safety the checks would be that the integrity of the export is no more than the integrity levels of any import, i.e. that high-safety data exports cannot be contaminated by low-safety data imports.

Information flow is also checked at each procedure call by substituting actual parameters (which may be own variables) for formal parameters and rechecking the procedure's known information flow for integrity violations. As examples we give the procedures which get and set the rotor and key values:

procedure GetKey(This_Key : out Key);
--# global in SymmetricKey;
--# derives This_Key from SymmetricKey;

procedure GetRotor(This_Rotor : out Rotor);
--# global in RotorValue;
--# derives This_Rotor from RotorValue;

procedure SetKey(New_Key : in Key);
--# global out SymmetricKey;
--# derives SymmetricKey from New_Key;

procedure SetRotor(New_Rotor : in Rotor);
--# global out RotorValue;
--# derives RotorValue from New_Rotor;

Because RotorValue is RESTRICTED and SymmetricKey is SECRET, the analysis requires a check at each invocation of these subprograms that the actual parameters mapped to formal parameters do not violate the integrity flow checks.

3.2.3 Analysis techniques not adopted

One analysis technique which we considered (but rejected) was to track the integrity flows within each subprogram body and raise warnings at each individual statement where a violation occurs. We now explain how this would have worked and why we rejected it.

In the following code, the programmer generates a key and rotor, encrypts a message with it, then tries to create a new rotor based on the old key.

-- Key and rotor generation
R1 := KeyStore.MakeRotor(34,56,22,55);
KeyStore.SetRotor(R1);
K1 := KeyStore.MakeKey(66,11,2,4);
KeyStore.SetKey(K1);
-- Encrypt a message
KeyStore.Encrypt(C => Clear, E => Encrypted);
-- Get a copy of the (changed) key and break it down into data
KeyStore.GetKey(K1);
KeyStore.DecomposeKey(K1,I1,I2,I3,I4);
-- Build a new rotor
R1 := KeyStore.MakeRotor(I1,I2,I3,I4);
KeyStore.SetRotor(R1);

The statement-by-statement information flow would proceed as shown in Table 1, where C denotes Clear, RV denotes RotorValue, SK denotes SymmetricKey, and E denotes Encrypted.

The final step causes RotorValue to exceed its assigned integrity level, and would generate a static integrity flow error.

The problem with this technique arises from the need to track the integrity levels of local variables.
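A small Python sketch makes the rejected statement-level technique concrete. It is only an illustration of the bookkeeping, not the Examiner's analysis: each assignment propagates the maximum integrity of its inputs to its output, reading a declared own variable yields data of at least its declared level, and a violation is reported when a declared variable would receive higher-integrity data. The variable names follow the KeyStore example; the numeric levels are those of package Classify, with Clear treated as unclassified for the illustration.

# Illustrative statement-level integrity tracking (the approach rejected in 3.2.3).
RESTRICTED, SECRET = 1, 3

declared = {"SymmetricKey": SECRET, "RotorValue": RESTRICTED, "Encrypted": SECRET}
level = dict(declared)          # current data level; locals appear once assigned
level["Clear"] = 0              # assumption: Clear carries unclassified data

def assign(target, sources):
    """target := f(sources): propagate the maximum integrity of the inputs."""
    new_level = max((max(level.get(s, 0), declared.get(s, 0)) for s in sources),
                    default=0)
    if target in declared and new_level > declared[target]:
        raise ValueError(f"integrity {new_level} data would flow into {target} "
                         f"(declared {declared[target]})")
    level[target] = max(new_level, declared.get(target, 0))

# The statement sequence of Section 3.2.3:
assign("R1", [])                                  # R1 := MakeRotor(34,56,22,55)
assign("RotorValue", ["R1"])                      # SetRotor(R1)
assign("K1", [])                                  # K1 := MakeKey(66,11,2,4)
assign("SymmetricKey", ["K1"])                    # SetKey(K1)
assign("Encrypted", ["Clear", "SymmetricKey"])    # Encrypt(Clear, Encrypted)
assign("K1", ["SymmetricKey"])                    # GetKey(K1)
assign("I1", ["K1"])                              # DecomposeKey(K1, I1, ...)
assign("R1", ["I1"])                              # R1 := MakeRotor(I1, ...)
try:
    assign("RotorValue", ["R1"])                  # SetRotor(R1)
except ValueError as err:
    print("integrity flow error:", err)           # the outcome shown in Table 1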
It quickly becomes\nclear that for practical analysis reasons each local\nvariable involved needs to be assigned an integrity level.\nThis is possible, and is done by declaring them as own variables\nwith integrity levels in a package embedded in the\nsubprogram in question, but is cumbersome. It also requires\nsubstantial rework of any existing code which we may want\nto retro-analyse.\nR1\nK1\nSK\nRV\nI1\nC\nE\nInstruction\n0\n0\n\n\n1\n\n0\n0\n\n\n1\nR1\n:=\n0\n0\n1\n1\nSetRotor\n0\n0\n1\n1\nK1\n:=\n0\n0\n3\n1\n1\nSetKey\n0\n0\n3\n1\n1\n3\nEncrypt\n0\n3\n3\n1\n1\n3\nGetKey\n0\n3\n3\n1\n3\n1\n3\nDecompose\n3\n3\n3\n1\n3\n1\n3\nR1 :=\n3\n3\n3\n3\n3\n1\n3\nSetRotor\nTable 1: Information flow for example\n3.2.4 Example of checking\nWe placed the code described above into procedure Operate\nin a package Crypto and annotated the declaration with the\ncorrect derives annotation thus:\n--# inherit KeyStore,Classify,BitString;\npackage Crypto\n--# own Clear,\n--#\nEncrypted(Integrity => Classify.SECRET);\nis\nprocedure Operate;\n--# global out KeyStore.RotorValue,Encrypted;\n--#\nin out KeyStore.SymmetricKey;\n--#\nin Clear;\n--# derives\n--#\nKeyStore.SymmetricKey,\n--#\nKeyStore.RotorValue\n--#\nfrom\n--#\nKeyStore.SymmetricKey\n--#\n&\n--#\nEncrypted\n--#\nfrom\n--#\nClear,\n--#\nKeyStore.SymmetricKey\n--#\n;\nend Crypto;\nAnalysis using /policy=security gives the following static\nsemantic error:\nUnit name:\nCrypto\nUnit type:\npackage specification\nUnit has been analysed, any errors are listed below\n1 error(s) or warning(s)\nLine\n30\n--#\nKeyStore.SymmetricKey\n^1\n*** (\n1) Semantic Error\n:175: Information flow\nfrom KeyStore.SymmetricKey to\nKeyStore.RotorValue violates the\nselected information flow policy..\nwhich correctly identifies the potential leak.\n3.3 Case study\nSHOLIS is the Royal Navy's Ship Helicopter Operating\nLimits Information System [9] designed to assist landing of\n42\nhelicopters on Royal Navy Type 23 frigates. Failure of this\nsystem could result in the death of helicopter pilots and passengers\n, loss of a helicopter and damage to the ship. This\nis intolerable for normal operation, hence SIL-4 reliability is\nrequired to give sufficient confidence that such an accident\nwill not happen during the in-service lifetime of the system.\nSHOLIS is an on-demand system rather than a continuously\noperating system, and so has a required probability of failure\nto function on demand of approximately 10\n-4\n; a more\nprecise probability would be specified in the system safety\ncase.\nHowever, the bulk of the SHOLIS code does not relate\nto critical system functionality. 
The code specific to SIL-4\nmust be analysed at a deep level; the rest of the code can\nbe analysed less deeply as long as it can be shown not to\naffect the SIL-4 data adversely.\n3.3.1 Original analysis\nThe original SHOLIS code is Ada 83 and consists of 75\nsource files and shadow files amenable to SPARK analysis;\n26.5KLoC of non-blank non-comment non-annotation code.\nIt is a substantial program and therefore a suitable test to\nsee if integrity level checking scales.\nWithout any own variables annotated, a full SPARK analysis\nby an Examiner with policy=safety generated no integrity\nerrors, as we would expect.\n3.3.2 Identifying critical outputs\nTo enable easy marking of variable safety criticality we\nadded a single package Safety:\npackage Safety is\nNSC : constant := 0;\nSC\n: constant := 1;\nend Safety;\nWe took as an example the safety-critical output Alarm2Z\nin a package RMR which represents an alarm signal on the\nfront panel. This was annotated:\n--# own Alarm2Z : BasicTypes.OkFailT\n--#\n(Integrity => Safety.SC);\nA re-analysis of package RMR raised no integrity errors.\nThe other package that used Alarm2Z was EVENT which was\nthe main event handler. A SPARK of this package raised\nintegrity flow errors in subprogram Sync, where Alarm2Z depended\non a large range of other inputs, which had not yet\nbeen marked as safety-critical. This was what we would expect\nso far and confirmed that basic safety integrity analysis\nwas working.\n3.3.3 Extending analysis\nThe next phase of work started in the SENSOR package\nwhich was near the middle of the package hierarchy. We set\nthe package variables representing current speed, heading,\nroll, pitch and wind velocity state to be safety-critical and\nthen ran a trial SPARK analysis. This indicated many own\nvariables in this and other packages which caused integrity\nflow errors since they had no explicit integrity level.\nFor each of these packages in turn, we:\n1. marked all of the package own variables as non-critical\n(Safety.NSC);\n2. re-analysed the package specification;\n3. analysed the package body to ensure that there were\nno integrity flow errors at subprogram call points; and\n4. if necessary, transformed NSC variables to SC status and\nre-ran.\nEventually we converged on a stable SC/NSC partition of\nthe variables.\n3.3.4 Declassification\nThe DisplayBuffers state variable in I/O package sio5\nwas a point where safety-critical and non-safety critical data\nmerged. It was necessary during the actual project to produce\na manual justification that the buffer would never be\nfilled with non-safety critical data when there was safety-critical\ndata to be added and displayed to a user. We therefore\nset its integrity to NSC and ignored all integrity errors\nrelating to flows from SIO5.DisplayBuffers.\n3.3.5 Results\nThere were 1282 integrity flow errors, but every single\none of these referred to a flow from SIO5.DisplayBuffers\nto Fault.Faults, as expected. Therefore only one manual\nargument is needed to validate the separation of SC and NSC\ndata at this level.\nOf the 233 package specification variables which were given\nintegrity levels, 110 were NSC and 123 were SC.\n3.3.6 Lessons learned\nSHOLIS was developed using a now out-of-date version\nof the SPARK Examiner which did not support proof work\ninvolving abstract state; as a consequence there were many\nmore public own variables than you would expect in a well-designed\nmodern SPARK program. 
This made the conversion\nwork slower; at the top level, as noted above, it in-creased\nthe time required beyond what was available for the\nstudy.\nThe \"TCB bloat\" problem noted in Section 2.3 did not\nseem to be a problem. While working up the calling hierarchy\nthere was a small amount of returning to lower levels to\nmake NSC variables into SC, but this did not spread out of\ncontrol.\nThere is a clear need for a declassification mechanism, as\ndiscussed in more detail in Section 4.3. Being able to suppress\nthe integrity flow errors from DisplayBuffers would\nhave made the transformation process easier.\n3.4 Possible tactical extensions\nGiven the preceding work, it is relatively simple to extend\nthe own variable annotation to allow other fields in the\naggregate. Within the security domain, there are considerations\nwhich mean that security cannot be considered on a\nlinear scale.\nAn example is a set of documents on an international military\naviation development where markings might include\nNATO RESTRICTED, UK RESTRICTED and ICELAND\nRESTRICTED. They are all at the same level of security,\nbut apply to different nationalities in different ways. A UK\ncitizen could receive the first two, a German could receive\nthe first one only, and a Russian could not receive any. The\nnationality information could be represented by an additional\nfield, which might be an array of booleans mapping\neach country code to an Allowed/Forbidden code.\n43\n3.5 Security policies\nSo far we have considered enforcing the Bell-LaPadula security\npolicy. However, there are other policies which may\nbe desirable for enforcement. One example is a policy where\ninformation at security level N may only flow into other information\nat security level N; there is no concept of ordering\non these security levels, they may simply not be mixed.\nThere is further work to be done on investigating whether\nother information flow policies are desirable and useful for\nsecurity-critical or safety-critical code.\nIn principle they\nshould not be complicated to enforce. The /policy=X command\nline switch provides a hook to specify different policies\nin future.\nISSUES FOR FUTURE RESEARCH\nIn this section we discuss the limitations of the existing\nwork and examine how the analysis techniques may be extended\nin the future.\n4.1 Difficulties with analysis\nThe concept of \"label creep\" as identified by Sabelfeld and\nMyers refers to the tendency of data to increase in security\nlevel as program execution progresses; assigning a\nSECRET\nvalue to one field of a large\nCONFIDENTIAL\nrecord will\nmake the entire record\nSECRET\n. It remains to be seen\nhow SPARK security programs should be designed in order\nto minimise label creep.\nConcurrency is more complex because there is the possibility\nthat security information may leak from the observable\nscheduling order of the processes. This is addressed to some\nextent because SPARK analysis of Ravenscar programs does\ninformation flow across tasks.\n4.2 Wider analysis\nOne extension suggestion that has come from a software\ndevelopment project within the company is the idea of subprogram\nintegrity level. The motivation is similar to that\nfor the SHOLIS analysis; that only part of a given program\nis safety-critical, and that verification activities (proof, unit\ntesting levels, coverage analysis etc.) may be better focused\non the safety-critical parts. 
Subprograms are a more useful\nunit of division for these activities than package state.\nThe algorithm for identifying a subprogram's actual integrity\nlevel is to examine its exports. If it only exports own\nvariables then the subprogram integrity level is the maximum\nof all exported own variable integrity levels. If some\nexported variables are formal parameters then each invocation\nof the subprogram must be examined for own variables\nthat may be actual parameters, and the maximum integrity\nlevel taken over all possible exported own variables. Functions\nare taken to be equivalent to a procedure with a single\nexported parameter.\nThere are two clear choices for implementing this strategy:\n1. whole-program analysis, determining subprogram integrity\nlevel once all invocations are known; or\n2. partial-program analysis, annotating each critical subprogram\ndeclaration with its integrity level and checking\nat declaration and each invocation that the maximum\nintegrity level is not violated.\nThe first choice is minimal-impact, but does not admit\nanalysis until the whole program is written which is likely to\nprove troublesome; the integrity level of many subprograms\nwill not be known until very late in the development process,\nby which time testing should already be ramped up.\nThe second choice is higher impact; it requires an extra\nannotation to be added to the SPARK language and checked\nby the Examiner, and requires developers to add the annotation\nto each potentially critical subprogram as it is specified\n. However, the benefits of the partial program analysis\nare likely to outweigh these drawbacks.\n4.3 Subverting the analysis\nDeclassification is occasionally necessary in security programs\n; this is when the assigned security level of information\nis deliberately reduced. An example would be a security filter\nwhich took\nSECRET\ninformation, stripped out sensitive\nfields and output information which was no more than\nCONFIDENTIAL\nlevel. This can be done in SPARK by hiding\nthe body of a declassifying subprogram from the Examiner,\nbut this is clearly not an optimal solution. A better solution\nshould be found.\n4.4 Considerations for certification\nIn very high security applications it may be necessary to\ncertify the object code as well as the source code. It remains\nto be seen whether and how the information known from the\nSPARK source code analysis can be carried over to inform\nan object code analysis.\nThere are other ways by which secure information can be\nobserved, such as covert or timing channels. A full implementation\nof multi-level security information analysis should\nbe followed by an analysis of how much information could\nbe leaked this way.\nCONCLUSIONS\nIn this paper we have described recent work on applying\ninformation flow analysis techniques to enforcing multi-level\nsecurity in a single software application. We have shown\nhow the requirements listed by Sabelfeld and Myers[11] are\npartially satisfied by the information flow analysis possible\nwith SPARK Ada and the SPARK Examiner. We have further\nshown that the existing SPARK language and analysis\nmay be extended to enforce the Bell-LaPadula security\nmodel with relatively little change.\nSPARK Ada has already proven itself in high-security and\nsafety-critical application development. It now appears to\nbe an effective choice of language to partition data of differing\ncriticality, and provide a low-cost but robust argument\nof safety or security for an application. 
SPARK already\nprovides the semantics-based security required by Sabelfeld\nand Myers; the extensions to own variable annotations now\nprovide the complementary security type system.\nFor end-to-end confidentiality in a secure system, we believe\nthat SPARK Ada's extended information flow analysis\nprovides a hard-to-refute justification of data security.\n5.1 Acknowledgements\nThe authors are grateful to Peter Amey, Neil White and\nWill Ward from Praxis Critical Systems for their feedback\non this paper and the prototype integrity checking facilities\nof the Examiner.\n44\nREFERENCES\n[1] R. J. Anderson. Security engineering: a guide to\nbuilding dependable distributed systems. Wiley\nComputer Publishing, 2001. ISBN 0-471-38922-6.\n[2] J. Barnes. High Integrity Software: The SPARK\nApproach to Safety And Security. Addison Wesley,\nApril 2003.\n[3] D. E. Bell and L. LaPadula. Secure computer systems.\nTechnical Report ESR-TR-73-278, Mitre Corporation,\nNovember 1973. v. I and II.\n[4] J.-F. Bergeretti and B. A. Carre. Information-flow and\ndata-flow analysis of while-programs. ACM\nTransactions on Programming Languages and\nSystems, 7(1):3761, January 1985.\n[5] Common Criteria. Common Criteria for Information\nTechnology Security Evaluation, August 1999.\n[6] D. Dolev and A. Yao. On the security of public-key\nprotocols. IEEE Transactions on Information Theory,\n2(29):198208, August 1983.\n[7] A. Hall and R. Chapman. Correctness by\nconstruction: Developing a commercial secure system.\nIEEE Software, pages 1825, Jan/Feb 2002.\n[8] International Electrotechnical Commission. IEC\nStandard 61508, Functional Safety of Electrical /\nElectronic / Programmable Electronic Safety-Related\nSystems, March 2000.\n[9] S. King, J. Hammond, R. Chapman, and A. Pryor. Is\nproof more cost effective than testing? IEEE\nTransactions on Software Engineering, 26(8):675686,\nAugust 2000.\n[10] RTCA / EUROCAE. RTCA DO-178B / EUROCAE\nED-12: Software Considerations in Airborne Systems\nand Equipment Certification, December 1992.\n[11] A. Sabelfeld and A. C. Myers. Language-based\ninformation-flow security. IEEE Journal on Selected\nAreas in Communications, 21(1), January 2003.\n45\n", "keywords": "security level;SPARK Ada;integrity;Information flow;Dolev-Yao;subprogram;SPARK;information flow;safety;security;Bell-LaPadula"} {"name": "83", "title": "Entropy and Self-Organization in Multi-Agent Systems", "abstract": "Emergent self-organization in multi-agent systems appears to contradict the second law of thermodynamics. This paradox has been explained in terms of a coupling between the macro level that hosts self-organization (and an apparent reduction in entropy), and the micro level (where random processes greatly increase entropy). Metaphorically, the micro level serves as an entropy \"sink,\" permitting overall system entropy to increase while sequestering this increase from the interactions where self-organization is desired. We make this metaphor precise by constructing a simple example of pheromone-based coordination, defining a way to measure the Shannon entropy at the macro (agent) and micro (pheromone) levels, and exhibiting an entropy-based view of the coordination.", "fulltext": "INTRODUCTION\nResearchers who construct multi-agent systems must cope with\nthe world's natural tendency to disorder. 
Many applications require a set of agents that are individually autonomous (in the sense that each agent determines its actions based on its own state and the state of the environment, without explicit external command), but corporately structured. We want individual local decisions to yield coherent global behavior.

Self-organization in natural systems (e.g., human culture, insect colonies) is an existence proof that individual autonomy is not incompatible with global order. However, widespread human experience warns us that building systems that exhibit both individual autonomy and global order is not trivial.

Not only agent researchers, but humans in general, seek to impose structure and organization on the world around us. It is a universal experience that the structure we desire can be achieved only through hard work, and that it tends to fall apart if not tended. This experience is sometimes summarized informally as "Murphy's Law," the observation that anything that can go wrong, will go wrong and at the worst possible moment. At the root of the ubiquity of disorganizing tendencies is the Second Law of Thermodynamics, that "energy spontaneously tends to flow only from being concentrated in one place to becoming diffused and spread out." [9]

Adding energy to a system can overcome the Second Law's "spontaneous tendency" and lead to increasing structure. However, the way in which energy is added is critical. Gasoline in the engines of construction equipment can construct a building out of raw steel and concrete, while the same gasoline in a bomb can reduce a building to a mass of raw steel and concrete.

Agents are not immune to Murphy. The natural tendency of a group of autonomous processes is to disorder, not to organization. Adding information to a collection of agents can lead to increased organization, but only if it is added in the right way. We will be successful in engineering agent-based systems just to the degree that we understand the interplay between disorder and order.

The fundamental claim of this paper is that the relation between self-organization in multi-agent systems and thermodynamic concepts such as the second law is not just a loose metaphor, but can provide quantitative, analytical guidelines for designing and operating agent systems. We explain the link between these concepts, and demonstrate by way of a simple example how they can be applied in measuring the behavior of multi-agent systems.

Our inspiration is a model for self-organization proposed by Kugler and Turvey [7], which suggests that the key to reducing disorder in a multi-agent system is coupling that system to another in which disorder increases. Section 2 reviews this model and relates it to the problem of agent coordination. Section 3 describes a test scenario that we have devised, inspired by self-organization in pheromone systems, and outlines a method for measuring entropy in this scenario. Section 4 reports our experimental results. Section 5 summarizes our conclusions.

AN ENTROPY MODEL FOR SELF-ORGANIZATION

In the context of biomechanical systems, Kugler and Turvey [7] suggest that self-organization can be reconciled with second-law tendencies if a system includes multiple coupled levels of dynamic activity. Purposeful, self-organizing behavior occurs at the macro level. By itself, such behavior would be contrary to the second law. However, the system includes a micro level whose dynamics generate increasing disorder.
Thus the system as a\nwhole is increasingly disordered over time. Crucially, the\nbehavior of elements at the macro level is coupled to the micro\nlevel dynamics. To understand this model, we begin with an\nexample, then abstract out the underlying dynamics, and finally\ncomment on the legitimacy of identifying processes at this level\nwith principles from thermodynamics.\n\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nAGENTS'01, May 28-June 1, 2001, Montral, Quebec, Canada.\nCopyright 2001 ACM 1-58113-326-X/01/0005...$5.00.\n\n124\n2.1 An Example: Pheromones\nThe parade example of such a system is the self-organization of an\ninsect colony (such as the construction of minimal spanning tree\nnetworks among nests and food sources by ants, or the erection of\nmulti-storied structures with regularly spaced pillars and floors by\ntropical termites), through pheromone-based coordination [1, 11].\nPheromones are scent markers that insects use in two ways. First,\nthey deposit pheromones in the environment to record their state.\nFor example, a foraging ant just emerging from the nest in search\nof food might deposit nest pheromone, while an ant that has found\nfood and is carrying it will deposit food pheromone. ([15]\ndocuments use of multiple pheromones by insects.) Second, they\norient their movements to the gradient of the pheromone field. In\nthe example of foraging ants, those seeking food climb the\ngradient of the food pheromone, while those carrying food climb\nthe gradient of the nest pheromone. The most realistic models of\nthe ants' pheromone-climbing behavior incorporates a stochastic\nelement in their movement. That is, they do not follow the\ngradient deterministically, but use its strength to weight a roulette\nwheel from which they determine their movement.\nThe environment in which pheromones are deposited plays a\ncritical role in such a system. It is not passive, but active, and\nperforms three information-processing functions with the\npheromones.\n1. It\naggregates deposits of the same flavor of pheromone from\ndifferent ants, thus providing a form of data fusion across\nmultiple agents at different times, based on their traversal of\na common location.\n2. It\nevaporates pheromones over time, thus forgetting obsolete\ninformation. This dynamic is usefully viewed as a novel\napproach to truth maintenance. Conventional knowledge\nbases remember every assertion unless there is cause to\nretract it, and execute truth maintenance processes to detect\nand resolve the conflicts that result when inconsistent\nassertions coexist. Insect systems forget every assertion\nunless it is regularly reinforced.\n3. Evaporation provides a third function, that of disseminating\ninformation from the location at which it was deposited to\nnearby locations. 
An ant does not have to stumble across the\nexact location at which pheromone was deposited in order to\naccess the information it represents, but can sense the\ndirection from its current location to the pheromone deposit\nin the form of the gradient of evaporated pheromone\nmolecules.\n2.2 The Model in Detail\nIn the Kugler-Turvey model, ants and their movements constitute\nthe macro level of the system, while pheromone molecules\nconstitute the micro level. The purposeful movement of ants,\nconstructing minimal paths among their nests and food sources,\nachieve a reduction in disorder at the macro level, made possible\nbecause the agents at this level are coupled to the micro level,\nwhere the evaporation of pheromone molecules under Brownian\nmotion results in an overwhelming growth in disorder. As a result,\nthe disorder of the overall system increases, in keeping with the\nSecond Law, in spite of the emergence of useful order at the\nmacro level.\nFigure 1 illustrates the interplay among these processes, and how\nthis model of agent coordination differs from more classical\nviews. Classically, agents are considered to perceive one another\ndirectly, reason about this perception, and then take rational\naction. The Kugler-Turvey model views coordination as being\nmediated by an environment that agents change by their actions\n(e.g., depositing pheromones), a process known as \"stigmergy\"\n[4]. Processes in the environment generate structures that the\nagents perceive, thus permitting ordered behavior at the agent\nlevel. At the same time, these processes increase disorder at the\nmicro level, so that the system as a whole becomes less ordered\nover time.\nResearch in synthetic pheromones [2, 12, 13] draws directly on\nthis model of coordination, but the model is of far broader\napplicability. In a multi-commodity market, individual agents\nfollow economic fields generated by myriad individual\ntransactions, and self-organization in the demand and supply of a\nparticular commodity is supported by an environment that\ndistributes resources based on the other transactions in the system.\nThe movement of currency in such a system provides similar\nfunctions to those of pheromones in insect systems. More broadly,\nwe hypothesize that a coupling of ordered and disordered systems\nis ubiquitous in robust self-organizing systems, and that the lack\nof such a coupling correlates with architectures that do not meet\ntheir designers' expectations for emergent cohesiveness.\n2.3 A Caveat\nAt this point, readers with a background in physics and chemistry\nmay be uneasy. 
These disciplines formulated the Second Law within a strict context of processes that result in energy changes. The fundamental physical measures associated with the second law are temperature T, heat Q, and (thermodynamic) entropy S, related by the definition

Equation 1: dS = dQ / T

Statistical mechanics identifies this macroscopic measure with the number Ω of microscopically defined states accessible to the system by the relation

Equation 2: S = k ln Ω

where k is Boltzmann's constant, 1.4E-16 erg/deg.

[Figure 1. Comparison of Conventional and Pheromone-Based Models of Coordination. Labels in the figure: a macro level (non-Newtonian flow field, "negentropy"; perception and rational action by Agent 1 and Agent 2, with entropy decreasing) coupled to a micro level (Newtonian force field; pheromone dynamics and flow, with entropy increasing), alongside a key showing the traditional agent dynamics of direct perception and rational action between the two agents.]

Thus defined, thermodynamic entropy has strong formal similarities [10] to information entropy [14]

Equation 3: S = -Σ_i p_i log p_i

where i ranges over the possible states of the system and p_i is the probability of finding the system in state i. These formal similarities have led to a widespread appropriation of the notion of "entropy" as a measure of macro-level disorder, and of the Second Law as describing a tendency of systems to become more chaotic. Our approach participates in this appropriation.

It has been objected [8] that such appropriation completely ignores the role of energy intrinsic to both thermodynamic definitions (via T and dQ in the macro definition and k in the micro definition). Such an objection implicitly assumes that energy is logically prior to the definition, and that unless information processes are defined in terms of energy changes, it is illegitimate to identify their changes in entropy with those of thermodynamics. An alternative approach to the question would argue that in fact the prior concept is not ergs but bits, the universe is nothing but a very large cellular automaton with very small cells [3, 6], and physics and chemistry can in principle be redefined in terms of information-theoretic concepts. Our approach is sympathetic with this view. While we are not prepared at this point to define the precise correspondence between ergs and bits, we believe that physical models are an under-exploited resource for understanding computational systems in general and multi-agent systems in particular. The fact that the thermodynamic and information approaches work in different fundamental units (ergs vs. bits) is not a reason to separate them, but a pole star to guide research that may ultimately bring them together.

EXPERIMENTAL SETUP

We experiment with these concepts using a simple model of pheromone-based behavior.
In this section we describe the experiment and how one measures entropy over it.

3.1 The Coordination Problem

Consider two agents, one fixed and one mobile, who desire to be together. Neither knows the location of the other. The mobile agent, or walker, could travel to the destination of the stationary one, if it only knew where to go. The stationary agent, or target, deposits pheromone molecules at its location. As the pheromone molecules diffuse through the environment, they create a gradient that the walker can follow.

Initially, the walker is at (30,30) and the target is at (50,50) in a 100x100 field. Every time step, the target deposits one molecule at (50,50). Both the walker and the pheromone molecules move by computing an angle θ ∈ [0, 2π] relative to their current heading and taking a step of constant length (1 for the walker, 2 for the pheromone molecule) in the resulting direction. Thus both molecules and walkers can be located at any real-valued coordinates in the field. Molecules move every cycle of the simulation and the walker every five cycles, so altogether the molecules move ten times as fast as the walker. Molecules fall off of the field when they reach the edge, while the walker bounces off the edges.

Molecules choose the heading for their next step from a uniform random distribution, and so execute an unbiased random walk. The walker computes its heading from two inputs.

1. It generates a gradient vector G from its current location to each molecule within a specified radius ρ, with magnitude

Equation 4
|G| = \sum_{r_i < \rho} g / r_i^2

where r_i is the distance between the walker and the ith molecule and g is a "gravitational constant" (currently 1).

2. It generates a random vector R with random heading and length equal to a temperature parameter T.

The vector sum G + R, normalized to the walker's step length (1 in these experiments), defines the walker's next step. Including R in the computation permits us to explore the effectiveness of different degrees of stochasticity in the walker's movement, following the example of natural pheromone systems.

The state of the walker defines the macro state of the system, while the states of the molecules define the micro state. This model can easily be enhanced in a number of directions, including adding multiple walkers and multiple targets, and permitting walkers and targets to deposit pheromone molecules of various flavors. The simple configuration is sufficient to demonstrate our techniques and their potential for understanding how the walker finds the target.

3.2 Measuring Entropy

Computing the Shannon or Information Entropy defined in Equation 3 requires that we measure

1. the set of states accessible to the system and
2. the probability of finding the system in each of those states.

3.2.1 Measuring the Number of System States

In most computational systems, the discreteness of digital computation makes counting system states straightforward (though the number of possible states is extremely high). We have purposely defined the movement of our walker and molecules in continuous space to highlight the challenge of counting discrete system states in an application embedded in the physical world (such as a robotic application). Our approach is to superimpose a grid on the field, and define a state on the basis of the populations of the cells of the grid.

We can define state, and thus entropy, in terms either of location or direction.
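Before turning to the two gridding techniques, a minimal Python sketch of the walker update rule of Section 3.1 may make the model concrete. It is our own illustration, not code from the original simulation: molecule positions are assumed to be (x, y) tuples, one plausible reading of Equation 4 is used (each visible molecule pulls toward itself with weight g / r_i^2), and names such as walker_step are hypothetical.

```python
import math
import random

def walker_step(walker, molecules, rho=20.0, T=0.0, g=1.0, step_len=1.0):
    """One movement step of the walker (sketch of the model in Section 3.1).

    walker    -- (x, y) current walker position
    molecules -- list of (x, y) pheromone molecule positions
    rho       -- sensing radius; only molecules with r_i < rho contribute
    T         -- temperature: length of the random vector R
    g         -- "gravitational constant" in Equation 4
    """
    wx, wy = walker
    gx = gy = 0.0
    for mx, my in molecules:
        r = math.hypot(mx - wx, my - wy)
        if 0.0 < r < rho:
            # Each visible molecule contributes magnitude g / r^2 (Equation 4),
            # directed from the walker toward the molecule.
            w = g / r**2
            gx += w * (mx - wx) / r
            gy += w * (my - wy) / r

    # Random vector R with uniform random heading and length T.
    theta = random.uniform(0.0, 2.0 * math.pi)
    rx, ry = T * math.cos(theta), T * math.sin(theta)

    # Vector sum G + R, normalized to the walker's step length.
    sx, sy = gx + rx, gy + ry
    norm = math.hypot(sx, sy)
    if norm == 0.0:
        # No visible molecules and T = 0: fall back to an unbiased random step.
        phi = random.uniform(0.0, 2.0 * math.pi)
        sx, sy, norm = math.cos(phi), math.sin(phi), 1.0
    return (wx + step_len * sx / norm, wy + step_len * sy / norm)
```

The original model additionally bounces the walker off the field edges and moves it only every fifth simulation cycle; those bookkeeping details are omitted from the sketch.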
Location-based state is based on a single snapshot of the system, while direction-based state is based on how the system has changed between successive snapshots. Each approach has an associated gridding technique.

For location-based entropy, we divide the field with a grid. Figure 2 shows a 2x2 grid with four cells, one spanning each quarter of the field. The state of this system is a four-element vector reporting the number of molecules in each cell (in the example, reading row-wise from upper left, <1,1,3,2>). The number of possible states in an nxn grid with m particles is n^(2m). The parameters in location-based gridding are the number of divisions in each direction, their orientation, and the origin of the grid. Rectangular grids are easiest to manage computationally, but one could also tile the plane with hexagons.

For direction-based entropy, we center a star on the previous location of each particle and record the sector of the star in which the particle is found at the current step. Figure 3 shows a four-rayed star with two particles. The state of the system is a vector with one element for each particle in some canonical order. Counting sectors clockwise from the upper left, the state of this example is <2,3>. The number of possible states with an n-pointed star and m particles is n^m. The parameters in direction-based gridding are the number of rays in the star and the rotation of the star about its center.

In both techniques, the analysis depends critically on the resolution of the grid (the parameter n) and its origin and orientation (for location) or rotation (for direction).

To understand the dependency on n, consider two extremes. If n is very large, the chance of two distributions of particles on the field having the same state is vanishingly small. For N distributions, each will be a distinct state, each state will have equal probability 1/N, and the entropy will be log(N). This state of affairs is clearly not informative. At the other extreme, n = 1, all distributions represent the same state, which therefore occurs with probability 1, yielding entropy 0, again not informative. We choose the gridding resolution empirically by observing the length scales active in the system as it operates.

To understand the dependency on origin/orientation or rotation, consider two particles in the same cell. After they move, will they still be in the same cell (keeping entropy the same) or in different cells (increasing entropy)? Exactly the same movements of the two particles could yield either result, depending on how the grid is registered with the field. We follow Gutowitz's technique [5] of measuring the entropy with several different origins and taking the minimum, thus minimizing entropy contributions resulting from the discrete nature of the grid.

3.2.2 Measuring the Probabilities

In principle, one could compute the probability of different system states analytically. This approach would be arduous even for our simple system, and completely impractical for a more complex system. We take a Monte Carlo approach instead. We run the system repeatedly. At each step in time, we estimate the probability of each observed state by counting the number of replications in which that state was observed. The results reported here are based on 30 replications.

Shannon entropy has a maximum value of log(N) for N different states, achieved when each state is equally probable.
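The state-extraction and probability-estimation procedure just described can be sketched compactly. The following Python fragment is our own simplified illustration (function names and the choice of grid offsets are ours): it forms the location-based state of one replication as a grid-cell occupancy vector, estimates state probabilities across replications as described in Section 3.2.2, applies the normalization by log(N) discussed immediately below, and follows Gutowitz's technique of minimizing over several grid origins.

```python
import math
from collections import Counter

def location_state(particles, n, origin=(0.0, 0.0), field=100.0):
    """Occupancy-count state vector for an n x n grid over the field."""
    ox, oy = origin
    cell = field / n
    counts = [0] * (n * n)
    for x, y in particles:
        col = min(int(((x - ox) % field) / cell), n - 1)
        row = min(int(((y - oy) % field) / cell), n - 1)
        counts[row * n + col] += 1
    return tuple(counts)

def normalized_entropy(states):
    """Shannon entropy (Equation 3) of the observed states, divided by log(N)."""
    N = len(states)
    if N <= 1:
        return 0.0
    freq = Counter(states)
    H = -sum((c / N) * math.log(c / N) for c in freq.values())
    return H / math.log(N)

def min_entropy_over_origins(replications, n, offsets):
    """Gutowitz-style minimum over several grid origins to suppress grid artifacts."""
    return min(
        normalized_entropy([location_state(p, n, origin=o) for p in replications])
        for o in offsets
    )

# Example usage: 'replications' holds 30 lists of (x, y) molecule positions,
# all taken at the same time step of independent runs.
# offsets = [(0.0, 0.0), (6.7, 6.7), (13.3, 13.3)]
# H = min_entropy_over_origins(replications, n=5, offsets=offsets)
```

The choice of base for the logarithm cancels in the normalization, as noted next.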
To eliminate this dependence on N, we normalize the entropies we report by dividing by log(N) (in our case, log(30)), incidentally making the choice of base of logarithms irrelevant.

EXPERIMENTAL RESULTS

We report the behavior of entropy first in the micro system, then in the unguided and guided macro system, and finally in the complete system.

4.1 Entropy in the Micro System

Figure 4 shows locational entropy in the micro system (the pheromone molecules), computed from a 5x5 grid. Entropy increases with time until it saturates at 1. The more molecules enter the system and the more they disperse throughout the field, the higher the entropy grows. Increasing the grid resolution has no effect on the shape of this increase, but reduces the time to saturation, because the molecules must spread out from a single location, and the finer the grid, the sooner they can generate a large number of different states.

Figure 2. Location-based gridding.
Figure 3. Direction-based gridding.
Figure 4. Micro Entropy x Time (5x5 Grid).

Directional entropy also increases with time to saturation. This result (not plotted) can be derived analytically. The molecule population increases linearly with time until molecules start reaching the edge. Then the growth slows, and eventually reaches 0. Let M be the population of the field at equilibrium, and consider all M molecules being located at (50,50) through the entire run. Initially, all are stationary, and each time step one additional molecule is activated. Then the total number of possible system states for a 4-star is 4^M, but the number actually sampled during the period of linear population growth is 4^t, since the stationary molecules do not generate any additional states. Thus the normalized entropy during the linear phase is log(4^t)/log(4^M) = t/M. As the growth becomes sublinear, the entropy asymptotically approaches 1, as with locational entropy.

4.2 Entropy in the Unguided Macro System

Figure 5 shows the path of a walker uncoupled to the micro system (when the target is emitting no pheromone molecules). With no coupling to the micro field, the walker is just a single molecule executing a random walk. Figure 6 shows that locational entropy for this walker increases over time, reflecting the increased number of cells accessible to the walker as its random walk takes it farther from its base. The grid size (15 divisions in each direction) is chosen on the basis of observations of the guided walker, discussed in the next section.

The directional entropy (not plotted) is constant at 1, since the walker chooses randomly at each step from all available directions.

4.3 Entropy in Guided Macro System

Now we provide the walker with a micro field by emitting pheromone molecules from the target. Figure 7 shows the path followed by a typical walker with radius ρ = 20 and T = 0. This path has three distinct parts.

Initially, the walker wanders randomly around its origin at (30,30), until the wavefront of molecules diffusing from (50,50) encounters its radius.
In this region, the walker has no guidance, because no molecules are visible.

Once the walker begins to sense molecules, it moves rather directly from the vicinity of (30,30) to (50,50), following the pheromone gradient.

When it arrives at (50,50), it again receives no guidance from the molecules, because they are distributed equally in all directions. So it again meanders.

The clouds of wandering near the start and finish have diameters in the range of 5 to 10, suggesting a natural grid between 20x20 and 10x10. We report here experiments with a 15x15 grid. Because of their initial random walk around their origin, walkers in different runs will be at different locations when they start to move, and will follow slightly different paths to the target (Figure 8).

Figure 5. Unguided Walker Path. Axes are location in the (100x100) field.
Figure 6. Unguided Walker Locational Entropy (15x15 Grid).
Figure 7. Guided Walker Path (ρ = 20, T = 0).
Figure 8. Ensemble of Guided Walkers (ρ = 20, T = 0).

The dots in Figure 9 and Figure 10 show the directional and locational entropies across this ensemble of guided walkers as a function of time. The solid line in each case plots the normalized median distance from the walkers to the target (actual maximum 28), while the dashed line plots the normalized median number of molecules visible to the walkers (actual maximum 151). The lines show how changes in entropy and reduction in distance to the target are correlated with the number of molecules that the walker senses at any given moment.

At the beginning and end of the run, when the walkers are wandering without guidance, directional entropy is 1, corresponding to a random walk. During the middle portion of the run, when the walker is receiving useful guidance from the micro level, the entropy drops dramatically. As the temperature parameter T is increased in the range 50 to 100, the bottom of the entropy well rises, but the overall shape remains the same (plot not shown).

The locational entropy presents a different story. The minimization method for avoiding discreteness artifacts has the effect of selecting at each time step the offset that best centers the cells on the walkers. At the beginning of the run and again at the end, most walkers are close together and fall within the same cell (because we chose a cell size comparable to these clouds). Walkers leave the starting cloud at different times, since those closer to the target sense the pheromones sooner, and follow different paths, depending on where they were when the pheromone reached them. Thus they spread out during this movement phase, and cluster together again once they reach the target. The effect of raising T to 100 on locational entropy is that the right end of the curve rises until the curve assumes a shape similar (plot not shown) to Figure 6.

Comparison of Figure 6 and Figure 10 shows that though the directed portion of the walker's movement has higher entropy than the undirected portions, coupling the walker to the micro level does reduce the walker's overall entropy.
Even at its maximum, the entropy of the guided walker is much lower than that of the random one, demonstrating the basic dynamics of the Kugler-Turvey model.

The different behavior of locational and directional entropy is instructive. Which is more orderly: a randomly moving walker, or one guided by pheromones? The expected location of a random walker is stationary (though with a non-zero variance), while that of a guided walker is non-stationary. In terms of location, the random walker is thus more regular, and the locational entropy reflects this. However, the movement of the guided walker is more orderly than that of the random walker, and this difference is reflected in the directional entropy. This difference highlights the importance of paying attention to dynamical aspects of agent behavior. Our intuition that the guided walker is more orderly than the random one is really an intuition about the movement of this walker, not its location.

4.4 Entropy in the Overall System

Central to the Kugler-Turvey model is the assertion that entropy increase at the micro level is sufficient to ensure entropy increase in the overall system, even in the presence of self-organization and concomitant entropy reduction at the macro level. Our experiment illustrates this dynamic. As illustrated in Figure 4, by time 60, normalized entropy in the micro system has reached the maximum level of 1, indicating that each of the 30 replications of the experiment results in a distinct state. If each replication is already distinct on the basis of the locations of the pheromone molecules alone, adding additional state elements (such as the location of the walker) cannot cause two replications to become the same. Thus by time 60 the normalized entropy of the entire system must also be at a maximum. In particular, decreases in macro entropy, such as the decrease in locational entropy from time 80 on seen in Figure 10, do not reduce the entropy of the overall system.

One may ask whether the reduction in macro (walker) entropy is causally related to the increase in micro entropy, or just coincidental. After all, a static gradient of pheromone molecules would guide the walker to the target just as effectively, but would be identical in every run, and so exhibit zero entropy. This argument neglects whatever process generates the static gradient in the first place. An intelligent observer could produce the gradient, but then the behavior of the system would hardly be "self-organizing." In our scenario, the gradient emerges as a natural consequence of a completely random process, the random walk of the pheromone molecules emerging from the target. The gradient can then reduce the entropy of a walker at the macro level, but the price paid for this entropy reduction is the increase in entropy generated by the random process that produces and maintains the gradient.

Figure 9. Guided walker: dots = directional entropy (4 star), solid line = median distance to target (max 28), dashed line = median visible molecules (max 151).
Figure 10. Guided walker: dots = locational entropy (15x15 grid), solid line = median distance to target (max 28), dashed line = median visible molecules (max 151).

One may also ask whether our hypothesis requires a quantitative relation between entropy loss at the macro level and entropy gain at the micro level. A strict entropy balance is not required; the micro level might generate more entropy than the macro level loses. In operational terms, the system may have a greater capacity for coordination than a particular instantiation exploits. What is required is that the entropy increase at the micro level be sufficient to cover the decrease at the macro level, and this we have shown.

SUMMARY

To be effective, multi-agent systems must yield coordinated behavior from individually autonomous actions. Concepts from thermodynamics (in particular, the Second Law and entropy) have been invoked metaphorically to explain the conditions under which coordination can emerge. Our work makes this metaphor more concrete and derives several important insights from it.

This metaphor can be made quantitative, through simple state partitioning methods and Monte Carlo simulation.

These methods show how coordination can arise through coupling the macro level (in which we desire agent self-organization with a concomitant decrease in entropy) to an entropy-increasing process at a micro level (e.g., pheromone evaporation). Our demonstration focuses on synthetic pheromones for the sake of expositional simplicity, but we believe that the same approach would be fruitful for understanding self-organization with other mechanisms of agent coordination, such as market systems.

This confirmation of the Kugler-Turvey model encourages us as agent designers to think explicitly in terms of macro and micro levels, with agent behaviors at the macro level coupled in both directions (causally and perceptually) to entropy-increasing processes at the micro level.

Some form of pheromone or currency is a convenient mechanism for creating such an entropy-increasing process.

Researchers must distinguish between static and dynamic order in a multi-agent system. We have exhibited a system that appears intuitively to be self-organizing, and shown that the measure of order underlying this intuition is dynamic rather than static.

ACKNOWLEDGMENTS

This work is supported in part by the DARPA JFACC program under contract F30602-99-C-0202 to ERIM CEC. The views and conclusions in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.

REFERENCES

[1] E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm Intelligence: From Natural to Artificial Systems. New York, Oxford University Press, 1999.

[2] S. Brueckner. Return from the Ant: Synthetic Ecosystems for Manufacturing Control. Thesis at Humboldt University Berlin, Department of Computer Science, 2000.

[3] E. Fredkin. Finite Nature. In Proceedings of The XXVIIth Rencontre de Moriond, 1992.

[4] P.-P. Grassé. La Reconstruction du nid et les Coordinations Inter-Individuelles chez Bellicositermes Natalensis et Cubitermes sp. La théorie de la Stigmergie: Essai d'interprétation du Comportement des Termites Constructeurs. Insectes Sociaux, 6:41-80, 1959.

[5] H. A. Gutowitz. Complexity-Seeking Ants. In Proceedings of Third European Conference on Artificial Life, 1993.
[6] B. Hayes. Computational Creationism. American Scientist, 87(5):392-396, 1999.

[7] P. N. Kugler and M. T. Turvey. Information, Natural Law, and the Self-Assembly of Rhythmic Movement. Lawrence Erlbaum, 1987.

[8] F. L. Lambert. Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense! Journal of Chemical Education, 76:1385, 1999.

[9] F. L. Lambert. The Second Law of Thermodynamics. 2000. Web Page, http://www.secondlaw.com/.

[10] J. Lukkarinen. Re: continuing on Entropy. 2000. Email Archive, http://necsi.org:8100/Lists/complex-science/Message/2236.html.

[11] V. D. Parunak. 'Go to the Ant': Engineering Principles from Natural Agent Systems. Annals of Operations Research, 75:69-101, 1997.

[12] V. D. Parunak and S. Brueckner. Ant-Like Missionaries and Cannibals: Synthetic Pheromones for Distributed Motion Control. In Proceedings of Fourth International Conference on Autonomous Agents (Agents 2000), pages 467-474, 2000.

[13] Peeters, P. Valckenaers, J. Wyns, and S. Brueckner. Manufacturing Control Algorithm and Architecture. In Proceedings of Second International Workshop on Intelligent Manufacturing Systems, pages 877-888, K.U. Leuven, 1999.

[14] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication. Urbana, IL, University of Illinois, 1949.

[15] Smithsonian Institution. Encyclopedia Smithsonian: Pheromones in Insects. 1999. Web Page, http://www.si.edu/resource/faq/nmnh/buginfo/pheromones.htm.", "keywords": "thermodynamic;Pheromones;entropy;Entropy;coordination;autonomy;pheromones;multi-agent system;self-organization;Self-Organization"} {"name": "84", "title": "Entropy-based Sensor Selection Heuristic for Target Localization", "abstract": "We propose an entropy-based sensor selection heuristic for localization. Given 1) a prior probability distribution of the target location, and 2) the locations and the sensing models of a set of candidate sensors for selection, the heuristic selects an informative sensor such that the fusion of the selected sensor observation with the prior target location distribution would yield on average the greatest or nearly the greatest reduction in the entropy of the target location distribution. The heuristic greedily selects one sensor in each step without retrieving any actual sensor observations. The heuristic is also computationally much simpler than the mutual-information-based approaches. The effectiveness of the heuristic is evaluated using localization simulations in which Gaussian sensing models are assumed for simplicity. The heuristic is more effective when the optimal candidate sensor is more informative.", "fulltext": "INTRODUCTION
The recent convergence of micro-electro-mechanical systems (MEMS) technology, wireless communication and networking technology, and low-cost low-power miniature digital hardware design technology has made the concept of wireless sensor networks viable and a new frontier of research [2, 1]. The limited on-board energy storage and the limited wireless channel capacity are the major constraints of wireless sensor networks. In order to save precious resources, a sensing task should not involve more sensors than necessary. From the information-theoretic point of view, sensors are tasked to observe the target in order to increase the information (or to reduce the uncertainty) about the target state.
The information gain attributable to one sensor may be very different from that attributable to another when sensors have different observation perspectives and sensing uncertainties. Selective use of informative sensors reduces the number of sensors needed to obtain information about the target state and therefore prolongs the system lifetime. In the scenario of localization or tracking using wireless sensor networks, the belief state of the target location can be gradually improved by repeatedly selecting the most informative unused sensor until the required accuracy (or uncertainty) level of the target state is achieved.

There have been several investigations into information-theoretic approaches to sensor fusion and management. The idea of using information theory in sensor management was first proposed in [8]. Sensor selection based on expected information gain was introduced for decentralized sensing systems in [12]. The mutual information between the predicted sensor observation and the current target location distribution was proposed to evaluate the expected information gain about the target location attributable to a sensor in [11, 6]. On the other hand, without using information theory, Yao et al. [16] found that the overall localization accuracy depends on not only the accuracy of individual sensors but also the sensor locations relative to the target location during the development of localization algorithms. We propose a novel entropy-based heuristic for sensor selection based on our experiences with target localization. It is computationally more efficient than the mutual-information-based methods proposed in [11, 6].

We use the following notation throughout this paper:

1. S is the set of candidate sensors for selection, i ∈ S is the sensor index;
2. x is the realization of the random vector that denotes the target location;
3. x_t is the actual target location;
4. x̂ is the maximum likelihood estimate of the target location;
5. x_i is the deterministic location of sensor i;
6. z_i is the realization of the random variable that denotes the observation of sensor i about the target location;
7. z_ti is the actual observation of sensor i about the target location;
8. z_i^v is the realization of the random variable that denotes the view of sensor i about the target location.

The rest of this paper is organized as follows. Section 2 describes the heuristic in detail. Section 3 evaluates the heuristic using simulations. Section 4 discusses the discrepancy between the heuristic and the mutual-information-based approaches. Section 5 outlines future work. Section 6 concludes the paper. Section 7 acknowledges the sponsors.

SENSOR SELECTION HEURISTIC

This Sect. formulates the sensor selection problem in localization, presents the details of the entropy-based sensor selection heuristic, and discusses the relation between the entropy difference proposed in this paper and the mutual information used in previous work about sensor selection.

2.1 Sensor Selection Problem in Localization

There are several information measures. In this paper, we use Shannon entropy [14] to quantify the information gain (or uncertainty reduction) about the target location due to sensor observation. We adopt the greedy sensor selection strategy used in mutual-information-based approaches [11, 6].
The greedy strategy gradually reduces the uncertainty of the target location distribution by repeatedly selecting the currently unused sensor with the maximal expected information gain. The observation of the selected sensor is incorporated into the target location distribution using sequential Bayesian filtering [3, 7]. The greedy sensor selection and the sequential information fusion continue until the uncertainty of the target location distribution is less than or equal to the required level. The core problem of the greedy sensor selection approach is how to efficiently evaluate the expected information gain attributable to each candidate sensor without actually retrieving sensor data.

The sensor selection problem is formulated as follows. Given

1. the prior target location distribution: p(x),
2. the locations of candidate sensors for selection: x_i, i ∈ S,
3. the sensing models of candidate sensors for selection: p(z_i|x), i ∈ S,

the objective is to find the sensor î whose observation z_î minimizes the expected conditional entropy of the posterior target location distribution,

î = arg min_{i ∈ S} H(x | z_i) .  (1)

Equivalently, the observation of sensor î maximizes the expected target location entropy reduction,

î = arg max_{i ∈ S} (H(x) - H(x | z_i)) .  (2)

H(x) - H(x | z_i) is one expression of I(x; z_i), the mutual information between the target location x and the predicted sensor observation z_i,

I(x; z_i) = \int\int p(x, z_i) \log [ p(x, z_i) / (p(x) p(z_i)) ] dx dz_i ,  (3)

where p(x, z_i) = p(z_i|x) p(x) and p(z_i) = \int p(x, z_i) dx. Thus, the observation of sensor î maximizes the mutual information I(x; z_i),

î = arg max_{i ∈ S} I(x; z_i) .  (4)

Sensor selection based on (4) is the maximal mutual information criterion proposed in [11, 6]. The target location x could have up to three dimensions. The sensor observation z_i (e.g. the direction to a target in a three-dimensional space) could have up to two dimensions. Therefore I(x; z_i) is a complex integral over the joint state space (x, z_i) of up to five dimensions. The complexity of computing I(x; z_i) could be more than what low-end sensor nodes are capable of. If the observation z_i is related to the target location x only through the sufficient statistics z(x), then

I(x; z_i) = I(z(x); z_i) .  (5)

If z(x) has fewer dimensions than x, then I(z(x); z_i) is less complex to compute than I(x; z_i). In the above special scenario, I(z(x); z_i) has been proposed to replace I(x; z_i) to reduce the complexity of computing mutual information in [11]. In this paper, we propose an alternative entropy-based sensor selection heuristic. In general, the entropy-based sensor selection heuristic is computationally much simpler than the mutual-information-based approaches.
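For concreteness, a small discretized sketch of the mutual-information criterion in (3) and (4) is given below. It is our own illustration of the baseline, not code from the papers cited: the prior p(x) is assumed to be a 2-D array over a location grid and each sensing model p(z_i|x) a 3-D array indexed by (observation bin, grid row, grid column), with all distributions treated as probability masses.

```python
import numpy as np

def mutual_information(prior, sensing_model):
    """I(x; z_i) for one candidate sensor, computed on a discrete grid (Equation 3).

    prior         -- p(x), shape (nx, ny), sums to 1
    sensing_model -- p(z_i | x), shape (nz, nx, ny); sums to 1 over axis 0
    """
    joint = sensing_model * prior[None, :, :]        # p(x, z_i) = p(z_i|x) p(x)
    pz = joint.sum(axis=(1, 2), keepdims=True)       # p(z_i)
    px = prior[None, :, :]
    mask = joint > 0
    denom = (px * pz + 1e-300)[mask]                 # guard against log(0)
    return float(np.sum(joint[mask] * np.log(joint[mask] / denom)))

def select_sensor_mi(prior, sensing_models):
    """Maximal mutual information criterion (Equation 4) over candidate sensors."""
    scores = {i: mutual_information(prior, m) for i, m in sensing_models.items()}
    return max(scores, key=scores.get), scores
```

The values returned here are in nats; dividing by log 2 gives bits, the unit used later in the paper. The triple loop implicit in the array operations is the O(n^3) cost that the entropy-based heuristic avoids.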
However, the observation of the sensor selected by the heuristic would still yield on average the greatest or nearly the greatest entropy reduction of the target location distribution.

2.2 Entropy-based Sensor Selection Heuristic

During the development of wireless sensor networks for localization, we have observed that the localization uncertainty reduction attributable to a sensor is greatly affected by the difference of two quantities, namely, the entropy of the distribution of that sensor's view about the target location, and the entropy of that sensor's sensing model for the actual target location.

A sensor's view about the target location is the geometric projection of the target location onto that sensor's observation perspective. For example, a direction-of-arrival (DOA) sensor's view of the target location is the direction from the sensor to the target. The view of sensor i about the target location is denoted as z_i^v, which is a function of the target location x and the sensor location x_i,

z_i^v = f(x, x_i) .  (6)

z_i^v usually has fewer dimensions than x. The probability distribution of the view of sensor i about the target location, p(z_i^v), is the projection of the target location distribution p(x) onto the observation perspective of sensor i,

p(z_i^v) dz_i^v = \int_{z_i^v \le f(x, x_i) \le z_i^v + dz_i^v} p(x) dx .  (7)

Alternatively, p(z_i^v) can be regarded as the 'noise free' prediction of the sensor observation distribution p(z_i) based on the target location distribution p(x).

Figure 1: A DOA sensor's view about the target location. The state space of the target location is gridded in 1x1 cells. The image depicts the probability distribution of the target location. The actual target location is (200, 200), denoted by marker +. From the perspective of the DOA sensor denoted by the square, only the direction to the target is observable. The view of the DOA sensor about the target is in the interval [36°, 38°] if and only if the target is inside the sector delimited by the 36° line and the 38° line.

In practice, the state space of the target location and the sensor view can be discretized by gridding for numerical analysis. The discrete representation of p(z_i^v) can be computed as follows.

1. Let X be the grid set of the target location x;
2. Let Z be the grid set of the sensor view z_i^v;
3. For each grid point z_i^v ∈ Z, initialize p(z_i^v) to zero;
4. For each grid point x ∈ X, determine the corresponding grid point z_i^v ∈ Z using equation (6), and update its probability as p(z_i^v) = p(z_i^v) + p(x);
5. Normalize p(z_i^v) to make the total probability of the sensor view be 1.

The numerical computation of p(z_i^v) for a DOA sensor is illustrated in Fig. 1 and Fig. 2.

The entropy of the probability distribution of the view of sensor i, H_i^v, is

H_i^v = - \int p(z_i^v) \log p(z_i^v) dz_i^v .  (8)

Given the discrete representation of p(z_i^v) with a grid size of Δz_i^v, H_i^v can be numerically computed as

H_i^v = - \sum p(z_i^v) \log p(z_i^v) Δz_i^v .  (9)

The sensing model of sensor i for the actual target location x_t is p(z_i|x_t), which describes the probability distribution of the observation of sensor i given that the target is at x_t.
The sensing model incorporates observation uncertainty from all sources, including the noise corruption to the signal, the signal modeling error of the sensor estimation algorithm, and the inaccuracy of the sensor hardware. For a single-modal target location distribution p(x), we can use the maximum likelihood estimate x̂ of the target location to approximate the actual target location x_t. Thus the entropy of the sensing model of sensor i for the actual target location x_t is approximated as

H_i^s = - \int p(z_i|x̂) \log p(z_i|x̂) dz_i .  (10)

For a multi-modal target location distribution p(x) with M peaks x̂^(m), where m = 1, ..., M, the entropy of the sensing model of sensor i for the actual target location x_t can be approximated as a weighted average of the entropy of the sensing model over all modes,

H_i^s = - \sum_{m=1}^{M} p(x̂^(m)) \int p(z_i|x̂^(m)) \log p(z_i|x̂^(m)) dz_i .  (11)

Given a target location distribution p(x), the target location with maximum likelihood or local maximum likelihood can be found using standard search algorithms.

Figure 2: The discrete probability distribution of a DOA sensor's view. The state space of the DOA sensor view is gridded in 2° intervals. The target location distribution and the DOA sensor location are illustrated in Fig. 1. Marker X denotes the probability of the DOA view interval [36°, 38°], which is the summation of the probability of all target locations inside the sector delimited by the 36° line and the 38° line in Fig. 1. Please note that the sensor view distribution does not depend on the sensing uncertainty characteristics at all.

We have repeatedly observed that the incorporation of the observation of sensor i with larger entropy difference H_i^v - H_i^s yields on average a larger reduction in the uncertainty of the posterior target location distribution p(x|z_i). Therefore, given a prior target location distribution and the location and the sensing uncertainty model of a set of candidate sensors for selection, the entropy difference H_i^v - H_i^s can sort candidate sensors into nearly the same order as mutual information I(x; z_i) does. Specifically, the sensor with the maximal entropy difference H_i^v - H_i^s also has the maximal or nearly the maximal mutual information I(x; z_i). Hence we propose to use the entropy difference H_i^v - H_i^s as an alternative to mutual information I(x; z_i) for selecting the most informative sensor. The entropy-based heuristic is to compute H_i^v - H_i^s for every candidate sensor i ∈ S and then to select sensor î such that

î = arg max_{i ∈ S} (H_i^v - H_i^s) .  (12)

In Sect. 3, the validity of the heuristic is evaluated using simulations and the complexity of the heuristic is analyzed for two-dimensional localization. The entropy-based sensor selection heuristic works nearly as well as the mutual-information-based approaches. In addition, the heuristic is computationally much simpler than mutual information.

2.3 Relation of Entropy Difference and Mutual Information

A brief analysis of the relation between the entropy difference H_i^v - H_i^s and mutual information I(x; z_i) helps to reveal fundamental properties of our sensor selection heuristic. Mutual information I(x; z_i) has another expression, namely, H(z_i) - H(z_i|x).
The entropy difference H_i^v - H_i^s is closely related to H(z_i) - H(z_i|x).

H(z_i) is the entropy of the predicted sensor observation distribution p(z_i),

H(z_i) = - \int p(z_i) \log p(z_i) dz_i .  (13)

The predicted sensor observation distribution p(z_i) becomes the sensor's view distribution p(z_i^v) when the sensing model p(z_i|x) is deterministic without uncertainty. The uncertainty in the sensing model p(z_i|x) makes H(z_i) larger than the sensor's view entropy H_i^v defined in (8). H_i^v closely approximates H(z_i) when the entropy of the sensing model p(z_i|x) is small relative to H_i^v.

H(z_i|x) is actually the expected entropy of the sensing model p(z_i|x) averaged over all possible target locations,

H(z_i|x) = - \int\int p(x, z_i) \log p(z_i|x) dx dz_i = \int p(x) { - \int p(z_i|x) \log p(z_i|x) dz_i } dx .  (14)

When p(x) is a single-modal distribution, H_i^s is defined in (10), which is the entropy of the sensing model for the most likely target location estimate x̂. When p(x) is a multi-modal distribution, H_i^s is defined in (11), which is the average entropy of the sensing model for all target locations with local maximal likelihood. When the entropy of the sensing model, - \int p(z_i|x) \log p(z_i|x) dz_i, changes gradually with x, H_i^s can reasonably approximate H(z_i|x).

The entropy difference H_i^v - H_i^s reasonably approximates the mutual information H(z_i) - H(z_i|x) when H_i^s is small relative to H_i^v and the entropy of the sensing model changes gradually with x. However, selection of the most informative sensor does not require an exact evaluation of sensor information utility. Instead, an order of sensors in terms of information utility is needed. H_i^v - H_i^s could sort sensors into approximately the same order as mutual information does. Therefore, a sensor with the maximal entropy difference H_i^v - H_i^s also has the maximal or nearly the maximal mutual information. The correlation between the entropy difference H_i^v - H_i^s and mutual information I(x; z_i) is analyzed using simulations in Sect. 3. Section 4 discusses the discrepancy between the heuristic and the mutual-information-based approaches.

HEURISTIC EVALUATION

This Sect. presents the evaluation of the entropy-based sensor selection heuristic using simulations. The computational complexity of the heuristic is also analyzed. The Gaussian noise model has been widely assumed for sensor observations in many localization and tracking algorithms, e.g. the Kalman filter [9]. Successes of these algorithms indicate that the Gaussian sensing model is a reasonable first-order approximation of reality. As a starting point, we assume Gaussian sensing models in the evaluative simulations for simplicity. The simple Gaussian sensing models assumed here are not accurate especially when sensors are very close to the target. To avoid the problem of over-simplified sensing models in the simulations, we only analyze sensors within some middle distance range to the target. The heuristic will be evaluated further under more realistic sensing models in the future. Four scenarios of sensor selection for localization have been studied. Three of them involve DOA sensors, range sensors, or time-difference-of-arrival (TDOA) sensors respectively. One of them involves all of the above sensors mixed together.
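Before presenting the simulation results, the following sketch shows, under our own assumptions, the quantities being compared in these scenarios: H_i^v is obtained by projecting the gridded prior onto the sensor's view through f(x, x_i) (the five-step procedure of Sect. 2.2), H_i^s uses the closed-form differential entropy of a 1-D Gaussian sensing model, which matches (10) and is independent of x̂, and the sensor maximizing H_i^v - H_i^s is selected as in (12). The function names and the example view function are illustrative, not taken from the paper.

```python
import math
import numpy as np

def view_distribution(prior, xs, ys, view_fn, bins):
    """Discrete p(z_i^v): project the gridded prior p(x) onto the sensor view."""
    hist = np.zeros(len(bins) - 1)
    for r, x in enumerate(xs):
        for c, y in enumerate(ys):
            z = view_fn(x, y)                                    # z_i^v = f(x, x_i), Eq. (6)
            k = np.searchsorted(bins, z, side="right") - 1
            hist[min(max(k, 0), len(hist) - 1)] += prior[r, c]   # accumulate p(x)
    return hist / hist.sum()                                     # step 5: normalize

def view_entropy(pzv, bin_width):
    """H_i^v from Eq. (9), treating pzv / bin_width as a density over the view."""
    dens = pzv[pzv > 0] / bin_width
    return float(-np.sum(dens * np.log(dens)) * bin_width)

def gaussian_sensing_entropy(sigma):
    """H_i^s for a 1-D Gaussian sensing model (closed form of Eq. (10))."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)

def select_sensor_heuristic(prior, xs, ys, sensors):
    """Pick argmax of H_i^v - H_i^s (Eq. (12)); sensors: id -> (view_fn, sigma, bins)."""
    scores = {}
    for i, (view_fn, sigma, bins) in sensors.items():
        pzv = view_distribution(prior, xs, ys, view_fn, bins)
        scores[i] = view_entropy(pzv, bins[1] - bins[0]) - gaussian_sensing_entropy(sigma)
    return max(scores, key=scores.get), scores
```

For a DOA sensor at (sx, sy), view_fn could be, for example, lambda x, y: math.atan2(y - sy, x - sx) with uniform bins spanning [-π, π]; a range sensor would instead use the distance to (sx, sy). The per-sensor cost is dominated by the grid scan in view_distribution, matching the O(n^2) analysis of Subsect. 3.4.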
In every sensor selection scenario, both the entropy difference H_i^v - H_i^s and mutual information I(x; z_i) are evaluated and compared for all candidate sensors. In all sensor selection scenarios, the entropy difference H_i^v - H_i^s can sort all candidate sensors into nearly the same order as mutual information I(x; z_i) does. Therefore, the sensor with the maximal entropy difference H_i^v - H_i^s selected by the heuristic always has the maximal or nearly the maximal mutual information I(x; z_i). The larger the entropy difference H_i^v - H_i^s and mutual information I(x; z_i) are, the more consistent their sensor selection decisions are.

3.1 Selection of DOA Sensors

Consider now entropy-based sensor selection when all candidate sensors are DOA sensors, as depicted in Fig. 3. The prior probability distribution p(x) of the target location x is non-zero in a limited area. We assume unbiased Gaussian sensing models for DOA sensors in some middle distance range to the target. Specifically, given a target location such that 10 ≤ ||x - x_i|| ≤ 600, the probability distribution of the DOA observation z_i is assumed to be

p(z_i|x) = (1 / (\sqrt{2π} σ)) e^{-(z_i - z_i^v)^2 / (2σ^2)} ,  (15)

where z_i^v = f(x, x_i) is the direction from sensor i to the target location x. For many DOA estimation algorithms like the approximate maximum likelihood (AML) algorithm [4], DOA estimation usually becomes much more uncertain when the candidate sensor is either very near or very far from the target. In this scenario, we exclude sensors that are either outside the study area or within a distance of 10 to the area of non-zero p(x).

The entropy difference H_i^v - H_i^s and mutual information I(x; z_i) of DOA sensors are evaluated and compared in five cases. In each case, Gaussian sensing models of the same standard deviation σ are assumed for all 100 candidate sensors. However, the standard deviation σ varies with the case. As shown in Fig. 4, mutual information I(x; z_i) vs the entropy difference H_i^v - H_i^s is plotted for all candidate sensors in all cases. Mutual information I(x; z_i) increases nearly monotonically with the entropy difference H_i^v - H_i^s. The larger the entropy difference H_i^v - H_i^s and mutual information I(x; z_i) are, the more correlated they are. Therefore, the entropy difference H_i^v - H_i^s sorts DOA sensors into nearly the same order as mutual information I(x; z_i) does, especially when the entropy difference H_i^v - H_i^s is large. The candidate DOA sensor selected by the proposed heuristic has the maximal entropy difference H_i^v - H_i^s, and also has the maximal mutual information I(x; z_i).

Figure 3: Scenario of sensor selection for localization using DOA sensors exclusively. The image depicts the prior probability distribution p(x) of the target location x. p(x) is zero outside the solid rectangle. The actual target location is (200, 200), denoted by marker +. The squares denote candidate DOA sensors for selection. 100 DOA sensors are uniformly randomly placed outside the dotted rectangle. The gap between the solid rectangle and the dotted rectangle is 10.

3.2 Selection of Range Sensors and TDOA Sensors

This Subsect. evaluates the entropy-based sensor selection heuristic for range sensors and TDOA sensors respectively.
Fig. 5 shows the sensor selection scenario in which all candidate sensors can only measure the range to the target. The prior probability distribution p(x) of the target location x is non-zero in a limited area. We assume the unbiased Gaussian sensing models p(z_i|x) for range sensors used in [13]. When the actual range is small relative to the standard deviation of the Gaussian sensing model, p(z_i|x) is significantly greater than zero even for negative values of the range observation z_i. Because a range of negative value has no physical meaning, the above Gaussian sensing model is not valid for short ranges. To avoid the above difficulty of the Gaussian sensing model, we only consider candidate sensors in some middle distance range to the target. Specifically, in this range sensor selection scenario, we exclude sensors that are either outside the study area or within a distance of 32 to the area of non-zero p(x).

Fig. 6 shows the sensor selection scenario in which only TDOA sensors are used. The prior probability distribution p(x) of the target location x is non-zero in a limited area. As in [15], the signal arrival time difference observed by every TDOA sensor is relative to a common reference sensor. We also assume unbiased Gaussian sensing models p(z_i|x) for TDOA sensors. In order to be comparable with the scenarios of DOA sensors and range sensors, we only consider TDOA sensors in middle range distance to the target. Specifically, we exclude TDOA sensors that are either outside the study area or within a distance of 10 to the area of non-zero p(x).

Figure 4: Mutual information I(x; z_i) vs entropy difference H_i^v - H_i^s of DOA sensors (σ = 2, 4, 8, 16, 32). Each symbol denotes the (H_i^v - H_i^s, I(x; z_i)) pair evaluated for one candidate sensor. The prior target location distribution and the candidate sensor placements are shown in Fig. 3. Five cases with different standard deviation σ of the Gaussian sensing models are studied. In each case, all candidate sensors are assumed to have the same σ value.

Following the same approach as the heuristic evaluation for DOA sensors, the entropy difference H_i^v - H_i^s and mutual information I(x; z_i) of every candidate sensor are evaluated and compared for the range sensor selection scenario in Fig. 5 and for the TDOA sensor selection scenario in Fig. 6 respectively. Mutual information I(x; z_i) vs the entropy difference H_i^v - H_i^s is plotted in Fig. 7 for all range sensors and in Fig. 8 for all TDOA sensors. In both scenarios, mutual information I(x; z_i) increases nearly monotonically with the entropy difference H_i^v - H_i^s. The larger the entropy difference H_i^v - H_i^s and mutual information I(x; z_i) are, the more correlated they are. Using the proposed heuristic, both the selected range sensor and the selected TDOA sensor have the maximal entropy difference H_i^v - H_i^s, and also have nearly the maximal mutual information I(x; z_i).

3.3 Selection of Mixed Sensors

In order to evaluate the entropy-based sensor selection heuristic across different sensing modalities, this Subsect. is devoted to the sensor selection scenario in which candidate sensors are a mixture of DOA sensors, range sensors and TDOA sensors.

Fig. 9 shows the sensor selection scenario for mixed candidate sensors.
Each candidate sensor is randomly assigned one of three sensing modalities, namely, DOA, range, and TDOA. Gaussian sensing models are assumed for all candidate sensors with middle range distance to the target. Each candidate sensor is also randomly assigned one of five values of the standard deviation of the sensing model, namely, 2, 4, 8, 16, and 32. 100 candidate sensors are uniformly randomly placed in the vicinity of the prior target location estimation. In order to avoid the difficulties of Gaussian sensing models for DOA sensors and range sensors close to the target, we exclude sensors either outside the study area or within a distance of 32 to the non-zero area of the prior target location distribution p(x).

Figure 5: Scenario of sensor selection for localization using range sensors. The image depicts the prior probability distribution p(x) of the target location x. p(x) is zero outside the solid rectangle. The actual target location is (200, 200), denoted by marker +. The circles denote candidate range sensors for selection. 100 range sensors are uniformly randomly placed outside the dotted rectangle. The gap between the solid rectangle and the dotted rectangle is 32.

Figure 6: Scenario of sensor selection for localization using TDOA sensors. The image depicts the prior probability distribution p(x) of the target location x. p(x) is zero outside the solid rectangle. The actual target location is (200, 200), denoted by marker +. The triangles denote candidate TDOA sensors for selection. Every TDOA observation is relative to a common reference sensor denoted by a distinct marker. 100 TDOA sensors are uniformly randomly placed outside the dotted rectangle. The gap between the solid rectangle and the dotted rectangle is 10.

The entropy difference H_i^v - H_i^s and mutual information I(x; z_i) of every candidate sensor are evaluated and plotted in Fig. 10. The correlation between H_i^v - H_i^s and I(x; z_i) of mixed sensors is very similar to the correlation between H_i^v - H_i^s and I(x; z_i) of sensors with a single modality. Across various sensing modalities, mutual information I(x; z_i) increases nearly monotonically with the entropy difference H_i^v - H_i^s. Therefore, across various sensing modalities, the candidate sensor with the maximal entropy difference H_i^v - H_i^s, selected by the proposed heuristic, has the maximal mutual information I(x; z_i).

3.4 Computational Complexity

Computational complexity analysis is an important part of the evaluation of the heuristic. We will analyze the complexity of the heuristic and compare it to the complexity of the mutual-information-based approaches.

For two-dimensional localization, the target location x is two-dimensional. The sensor's view z_i^v of the target location x is one-dimensional. The sensor observation z_i is one-dimensional. We assume that all random variables are gridded for numerical computation. Specifically, the area with non-trivial p(x) is gridded into n × n cells. The interval with non-trivial p(z_i) or p(z_i^v) is also gridded into n bins. We assume there are K candidate sensors for selection.
K is usually a small number.

The proposed heuristic evaluates the entropy difference H_i^v - H_i^s of all sensors and then selects the one with the maximal H_i^v - H_i^s. As shown in (7), p(z_i^v) can be computed from p(x) with cost O(n^2). As shown in (8), H_i^v can be computed from p(z_i^v) with cost O(n). As shown in (10) and (11), H_i^s can be computed from p(z_i|x) with cost O(n). The cost to compute H_i^v - H_i^s for one candidate sensor is O(n^2). Therefore, the total cost for the heuristic to select one out of K candidate sensors is O(n^2).

The mutual-information-based approaches evaluate the mutual information I(x; z_i) of all sensors and then select the one with the maximal I(x; z_i). As shown in (3), I(x; z_i) can be directly computed from p(x) and p(z_i|x) with a cost of O(n^3). Therefore, the total cost to select one out of K candidate sensors is O(n^3). As we mentioned earlier in Subsect. 2.1, the computational cost of mutual information I(x; z_i) could be reduced in some special scenarios. In general, however, the heuristic is computationally much simpler than the mutual-information-based approaches.

DISCREPANCY BETWEEN HEURISTIC AND MUTUAL INFORMATION

As shown in Sect. 3, when the mutual information I(x; z_i) is close to 0 bit, the entropy difference H_i^v - H_i^s might not sort candidate sensors into exactly the same order as the mutual information does. Such discrepancy is caused by the dispersion of the correlation between the entropy difference H_i^v - H_i^s and the mutual information I(x; z_i) when the mutual information is small. In this Sect., we examine such correlation dispersion and evaluate its impact on the discrepancy of the sensor selection decisions of the entropy-based heuristic and the mutual-information-based approaches.

Figure 7: Mutual information I(x; z_i) vs entropy difference H_i^v - H_i^s of range sensors (σ = 2, 4, 8, 16, 32). Each symbol denotes the (H_i^v - H_i^s, I(x; z_i)) pair evaluated for one candidate sensor. The prior target location distribution and the candidate sensor placements are shown in Fig. 5. Five cases with different standard deviation σ of the Gaussian sensing models are studied. In each case, all candidate sensors are assumed to have the same σ value.

4.1 Dispersion

In this Subsect., we describe the dispersion of the correlation between the entropy difference H_i^v - H_i^s and the mutual information I(x; z_i) when the mutual information is small. We also examine possible sources of such correlation dispersion.

Close examination of the convex part of the mutual information vs. entropy difference curves in Fig. 7 and Fig. 8 reveals that the correlation between the mutual information I(x; z_i) and the entropy difference H_i^v - H_i^s is not strictly monotonic. Instead, there is obvious dispersion of the correlation. The convex part corresponds to the situation in which candidate sensors are not very informative because the mutual information between the target location distribution and the sensor observation is close to 0 bit.
In other words, when candidate sensors are not very informative, the entropy difference H_i^v - H_i^s might not sort candidate sensors into the same order as the mutual information I(x; z_i) does. Given a set of candidate sensors whose observations could reduce only a small amount of the uncertainty of the target location distribution, the sensor selected on the basis of the maximum entropy difference H_i^v - H_i^s might not have the maximum mutual information I(x; z_i). Thus, there might be a discrepancy between the sensor selection decision of the entropy-based heuristic and that of the mutual-information-based approaches if no candidate sensor is very informative.

Figure 8: Mutual information I(x; z_i) vs entropy difference H_i^v - H_i^s of TDOA sensors (σ = 2, 4, 8, 16, 32). Each symbol denotes the (H_i^v - H_i^s, I(x; z_i)) pair evaluated for one candidate sensor. The prior target location distribution and the candidate sensor placements are shown in Fig. 6. Five cases with different standard deviation σ of the Gaussian sensing models are studied. In each case, all candidate sensors are assumed to have the same σ value.

There might be multiple causes of such correlation dispersion between the entropy difference H_i^v - H_i^s and the mutual information I(x; z_i). As pointed out in Subsect. 2.3, the entropy difference H_i^v - H_i^s can be viewed as an approximation of the mutual information I(x; z_i). Thus, the order of sensors sorted by the entropy difference H_i^v - H_i^s is intrinsically an approximation of that by the mutual information I(x; z_i). In practice, the discretization of the state space of the target location random variable and the sensor view random variable might also introduce inaccuracy into the evaluation of H_i^v. Besides, as shown in (10) and (11), the maximum likelihood estimate of the target location is used to approximate the actual target location when evaluating the entropy of the sensing model for the actual target location.

4.2 Impact

In this Subsect., we examine the impact of the dispersion of the correlation between the entropy difference H_i^v - H_i^s and the mutual information I(x; z_i) when the mutual information is small. The analysis shows that such correlation dispersion causes very little degradation to the quality of the sensor selection decision of the entropy-based heuristic.

As shown by the convex part of the mutual information vs. entropy difference curves in Fig. 7 and Fig. 8, there is dispersion of the correlation between the entropy difference H_i^v - H_i^s and the mutual information I(x; z_i) when candidate sensors are not very informative. We model such dispersion using a uniform distribution bounded by a parallelogram, illustrated in Fig. 11. A candidate sensor could assume any position (H_i^v - H_i^s, I(x; z_i)) within the parallelogram with uniform probability. As shown in Fig. 11, the geometry of the parallelogram is defined by parameters a, b and c.

Figure 9: Scenario of sensor selection for localization using sensors with various modalities. The image depicts the prior probability distribution p(x) of the target location x. p(x) is zero outside the solid rectangle.
The actual target location is (200, 200),\ndenoted by marker +. The squares, the circles, and\nthe triangles denote DOA sensors, range sensors and\nTDOA sensors respectively. Every TDOA observation\nis relative to a common reference sensor denoted\nby marker\n. Each sensor is randomly chosen\nto bea DOA se\nnsor, a rangese\nnsor, or a TDOA\nsensor. Each sensor is also randomly assigned one\nof five values of the standard deviation of Gaussian\nsensing models, namely, 2, 4, 8, 16, and 32. The\nsizeof a symbol indicates themagnitudeof . 100\nsensors of various sensing modalities and values\nareuniformly randomly place\nd outsidethedotte\nd\nrectangle. The gap between the solid rectangle and\nthe dotted rectangle is 32.\nis the variation scope of entropy difference H\nv\ni\n- H\nsi\namong\nthe set of candidate sensors. c indicates the variation scope\nof the mutual information I(x; z\ni\n) among the set of candidate\nsensors. b describes the magnitude of dispersion of the\ncorrelation between the entropy difference H\nv\ni\n- H\nsi\nand the\nmutual information I(x; z\ni\n). Although the bounded uniform\ndistribution is not accurate, it captures the major features\nof the correlation dispersion revealed by simulations in Sect.\n3. We choose this dispersion model for simplicity. As the\nfirst order approximation, the simple dispersion model does\nhelp to reveal some major characteristics of the impact of\nthe correlation dispersion on the heuristics-based sensor selection\n.\nA typical dispersion scenario is illustrated in Fig.\n11.\nThe mutual information I(x; z\ni\n) of candidate sensors varies\nfrom 0 bit to 1 bit. Correspondingly, the entropy difference\nH\nv\ni\n- H\nsi\nof candidate sensors changes from\n-2 bit to 0 bit.\nFor any value of the entropy difference H\nv\ni\n-H\nsi\n, the disperse\nof the mutual information I(x; z\ni\n) is 0.1 bit. Given the above\nscenario, we run 10, 000 simulations. In each simulation, 8\ncandidate sensors randomly assume their (H\nv\ni\n-H\nsi\n, I(x; z\ni\n))\npairs within the specified dispersion range. In each simulation\n, we identify both the sensor with the maximum entropy\n-2\n0\n2\n4\n6\n0\n1\n2\n3\n4\n5\n6\nEntropy difference (bit)\nMutual information (bit)\nTDOA sensor\nDOA sensor\nrange sensor\nFigure10: Mutual information I(x; z\ni\n) vs entropy\ndifference H\nv\ni\n- H\nsi\nof mixed senors.\nEach symbol\ndenotes (H\nv\ni\n- H\nsi\n, I(x; z\ni\n)) pair evaluated for one candidate\nsensor. The prior target location distribution\nand the candidate sensor placements are shown in\nFig. 9.\ndifference H\nv\ni\n- H\nsi\nand the sensor with the maximum mutual\ninformation I(x; z\ni\n). With 87.8% chance, the sensor\nselected by the entropy-based heuristic also has the maximum\nmutual information. Even when the heuristic fails to\nselect the sensor of the maximum mutual information, the\nmutual information of the selected sensor is on average only\nabout 0.026 bit less than the maximum mutual information.\nOverall, the mutual information of the sensor selected by the\nentropy-based heuristic is about 0.026(1-87.8%) = 0.0032\nbit less than the maximum mutual information. Therefore,\nmost of the time, the correlation dispersion does not cause\ndiscrepancy of the sensor selection decisions between the\nentropy-based heuristic and the mutual information based\napproaches. 
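To make the dispersion model above concrete, the following Monte Carlo sketch (in Python) mimics the experiment just described: in each trial, 8 candidate sensors are dropped into a parallelogram with a = 2 bit, b = 0.1 bit and c = 1 bit, and the sensor with the maximal entropy difference is compared against the sensor with the maximal mutual information. The linear relation between the two quantities (slope c/a plus a uniform dispersion of width b) and all names below are our own modeling assumptions, so the numbers it produces should only be expected to fall in the same range as those reported above, not to match them exactly.

import numpy as np

rng = np.random.default_rng(0)
a, b, c = 2.0, 0.1, 1.0          # assumed parallelogram geometry (bit)
n_sensors, n_trials = 8, 10000

n_success, losses = 0, []
for _ in range(n_trials):
    # entropy differences H_vi - H_si, spread over an interval of width a
    d = rng.uniform(-a, 0.0, n_sensors)
    # mutual information: linear in d (slope c/a) plus uniform dispersion of width b
    mi = (d + a) * (c / a) + rng.uniform(0.0, b, n_sensors)
    heuristic_pick = int(np.argmax(d))    # entropy-difference heuristic
    best_pick = int(np.argmax(mi))        # mutual-information criterion
    if heuristic_pick == best_pick:
        n_success += 1
    else:
        losses.append(mi[best_pick] - mi[heuristic_pick])

print("chance of success: %.1f%%" % (100.0 * n_success / n_trials))
print("degradation per failure: %.3f bit" % np.mean(losses))
print("overall degradation: %.4f bit" % (np.sum(losses) / n_trials))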
Overall, the entropy-based heuristic introduces very little degradation to the quality of the sensor selection decision even when candidate sensors are not very informative.

Figure 11: Discrepancy between the entropy-based sensor selection heuristic and the mutual-information-based approaches when candidate sensors are not very informative. The dispersion of the correlation between the entropy difference H_{v_i} - H_{s_i} and the mutual information I(x; z_i) is modeled by a uniform distribution bounded by a parallelogram. The geometry of the parallelogram is defined by parameters a, b and c. Candidate sensors are denoted by markers whose coordinates are (H_{v_i} - H_{s_i}, I(x; z_i)). The entropy-based heuristic selects the rightmost sensor, which has the maximum entropy difference H_{v_i} - H_{s_i} and is enclosed by a square marker. The mutual-information-based approaches select the top sensor, which has the maximum mutual information I(x; z_i) and is enclosed by a diamond-shaped marker. The above two selected sensors might not be the same. In the scenario of this figure, a = 2 bits, b = 0.1 bit, c = 1 bit, and 8 candidate sensors are available for selection.

We have analyzed the impact of the correlation dispersion for different configurations of a, b, c, and the number of candidate sensors. In Table 1, a = 2 bit, b = 0.1 bit and c = 1 bit are fixed; we only change the number of candidate sensors. The chance for the heuristic to successfully select the sensor with the maximum mutual information decreases as the number of candidate sensors increases. When the heuristic fails to select the sensor with the maximum mutual information, the degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information does not change with the number of candidate sensors. Thus, the overall degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information also increases as the number of candidate sensors increases.

In Table 2, a = 2 bit and c = 1 bit are fixed, and the number of candidate sensors is fixed to 8; we only change the dispersion width b. The chance for the heuristic to successfully select the sensor with the maximum mutual information decreases as the dispersion width b increases. When the heuristic fails to select the sensor with the maximum mutual information, the degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information increases as the dispersion width b increases. Thus, the overall degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information also increases as the dispersion width b increases.

In Table 3, a = 2 bit and b = 0.1 bit are fixed, and the number of candidate sensors is fixed to 8; we only change the mutual information variation scope c. The chance for the heuristic to successfully select the sensor with the maximum mutual information increases as the mutual information variation scope c increases.
When the heuristic fails to select the sensor with the maximum mutual information, the degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information does not change much with the mutual information variation scope c. Thus, the overall degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information decreases as the mutual information variation scope c increases.

Table 1: Impact Change with Number of Sensors
  Number of Candidate Sensors      4        8        16
  Chance of Success (%)            93.6     87.8     78.2
  Degradation per Failure (bit)    0.026    0.026    0.026
  Overall Degradation (bit)        0.0016   0.0032   0.0058

Table 2: Impact Change with Dispersion Width
  Dispersion Width b (bit)         0.05     0.1      0.2
  Chance of Success (%)            93.6     87.8     78.1
  Degradation per Failure (bit)    0.013    0.026    0.054
  Overall Degradation (bit)        0.0008   0.0032   0.012

Table 3: Impact Change with Mutual Info. Scope
  Mutual Info. Scope c (bit)       0.5      1        2
  Chance of Success (%)            78.2     87.8     93.6
  Degradation per Failure (bit)    0.027    0.026    0.025
  Overall Degradation (bit)        0.0058   0.0032   0.0016

In Table 4, b = 0.1 bit is fixed and the number of candidate sensors is fixed to 8. We proportionally change the entropy difference variation scope a and the mutual information variation scope c so that c/a = 1/2 is fixed. The chance for the heuristic to successfully select the sensor with the maximum mutual information increases as the entropy difference variation scope a and the mutual information variation scope c proportionally increase. When the heuristic fails to select the sensor with the maximum mutual information, the degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information does not change. Thus, the overall degradation of the sensor selection decision based on the heuristic compared to that based on the mutual information decreases as the entropy difference variation scope a and the mutual information variation scope c proportionally increase.

Table 4: Impact Change with Entropy Diff. Scope a and Mutual Info. Scope c in Proportion
  Entropy Diff. Scope a (bit)      1        2        4
  Mutual Info. Scope c (bit)       0.5      1        2
  Chance of Success (%)            78.2     87.8     93.6
  Degradation per Failure (bit)    0.026    0.026    0.026
  Overall Degradation (bit)        0.0058   0.0032   0.0016

FUTURE WORK
When sensors are selected for tracking a temporally continuous source, the prior target location distribution at time t + 1 can be obtained from the posterior target location distribution at time t by using the target dynamic model, as described in [11]. However, when the sensor selection heuristic is applied to locating a temporally discontinuous source such as a bird call, it is not straightforward to obtain the prior target location distribution used in the sequential Bayesian fusion. One possible solution to this problem could be as follows. First, all sensors buffer the signal once an event such as a bird call is detected. Then, all triggered sensors elect a leader that received the strongest signal intensity, using a protocol similar to that described in [10]. Finally, the leader can pick a few sensors to generate an initial prior target location distribution assuming a certain sensing model. With the initial prior target location distribution, we can apply the sensor selection heuristic to incrementally reduce the uncertainty of the target location distribution.
We plan to implement and test the above mechanism in the future.

5.2 Discretization of State Space
There is a trade-off between computational efficiency and numerical accuracy in the discretization of the state space of random variables such as the target location and the sensor view. The bigger the grid size is, the fewer grid cells are involved in the computation. However, a bigger grid size also introduces more inaccuracy into the evaluation of the entropy difference heuristic. In the future, we must study the trade-off in more detail in order to choose a proper grid size.

5.3 Sensing Uncertainty Model
We have assumed Gaussian sensing models in the simulations as a first step to evaluate the heuristic. Inaccuracy of sensing models diminishes the effectiveness of any sensor selection criterion. We plan to construct a more realistic sensing model for the AML-based DOA estimation. We have implemented the AML algorithm for real-time DOA estimation on a wireless sensor network testbed [5]. We will first analyze the sensing uncertainty characteristics of the AML algorithm, and then experimentally validate and refine them using the testbed. We will also evaluate the effectiveness of the entropy-based sensor selection heuristic using realistic sensing models and implement the heuristic on the real-time wireless sensor network testbed for localization.

CONCLUSION
We have proposed an entropy-based sensor selection heuristic for localization. The effectiveness of the heuristic has been evaluated using simulations in which Gaussian sensing models are assumed for simplicity. Simulations have shown that the heuristic selects the sensor with nearly the maximal mutual information between the target location and the sensor observation. Given the prior target location distribution, the sensor locations, and the sensing models, on average, the sensor selected by the heuristic would yield nearly the greatest reduction in the entropy of the posterior target location distribution. The heuristic is more effective when the optimal candidate sensor is more informative. As mutual-information-based sensor selection approaches [11, 6] do, the heuristic greedily selects one sensor in each step without retrieving any actual sensor observations. In addition, in general, our heuristic is computationally much simpler than the mutual-information-based approaches.

ACKNOWLEDGMENTS
This material is based upon work partially supported by the National Science Foundation (NSF) under Cooperative Agreement #CCR-0121778, and the DARPA SensIT program under contract AFRL/IFG 315 330-1865 and AROD-MURI PSU 50126.

REFERENCES
[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci. Wireless sensor networks: a survey. Computer Networks, 38(4):393-442, March 2002.
[2] G. Asada, M. Dong, T. Lin, F. Newberg, G. Pottie, W. Kaiser, and H. Marcy. Wireless integrated network sensors: low power systems on a chip. In Proc. of the European Solid State Circuits Conference, The Hague, Netherlands, 1998.
[3] J. M. Bernardo and A. F. M. Smith. Bayesian theory. Wiley, New York, 1996.
[4] J. Chen, R. Hudson, and K. Yao. Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field. IEEE T. Signal Proces., 50(8):1843-1854, August 2002.
[5] J. Chen, L. Yip, J. Elson, H. Wang, D. Maniezzo, R. Hudson, K. Yao, and D. Estrin. Coherent acoustic array processing and localization on wireless sensor networks. Proc.
the IEEE, 91(8):11541162, August\n2003.\n[6] E. Ertin, J. Fisher, and L. Potter. Maximum mutual\ninformation principle for dynamic sensor query\nproblems. In Proc. IPSN'03, Palo Alto, CA, April\n2003.\n[7] S. Haykin. Adpative filter theory. Prentice Hall, New\nJersey, USA, 1996.\n[8] K. Hintz and E. McVey. A measure of the information\ngain attributable to cueing. IEEE T. Syst. Man Cyb.,\n21(2):434442, 1991.\n[9] R. E. Kalman. A new approach to linear filtering and\nprediction problems. Trans. of the ASMEJournal of\nBasic Engineering, 82(Series D):3545, 1960.\n[10] J. Liu, J. Liu, J. Reich, P. Cheung, and F. Zhao.\nDistributed group management for track initiation\nand maintenance in target localization applications. In\nProc. International Workshop on Informaiton\nProcessing in Sensor Networks (IPSN), Palo Alto,\nCA, April 2003.\n[11] J. Liu, J. Reich, and F. Zhao. Collaborative\nin-network processing for target tracking. EURASIP\nJASP: Special Issues on Sensor Networks,\n2003(4):378391, March 2003.\n[12] J. Manyika and H. Durrant-Whyte. Data fusion and\nsensor management: a decentralized\ninformation-theoretic approach. Ellis Horwood, New\nYork, 1994.\n[13] A. Savvides, W. Garber, S. Adlakha, R. Moses, and\nM. B. Srivastava. On the error characteristics of\nmultihop node localization in ad-hoc sensor networks.\nIn Proc. IPSN'03, Palo Alto, CA, USA, April 2003.\n[14] C. E. Shannon. A mathematical theory of\ncommunication. Bell Systems Technical Journal,\n27(6):379423 and 623656, 1948.\n[15] T. Tung, K. Yao, C. Reed, R. Hudson, D. Chen, and\nJ. Chen. Source localization and time delay estimation\nusing constrained least squares and best path\nsmoothing. In Proc. SPIE'99, volume 3807, pages\n220223, July 1999.\n[16] K. Yao, R. Hudson, C. Reed, D. Chen, and\nF. Lorenzelli. Blind beamforming source localization\non a sensor array system. In AWAIRS project\npresentation at UCLA, USA, December 1997.\n45\n", "keywords": "Shannon entropy;entropy;target localization;localization;target tracking;wireless sensor networks;mutual information;information-directed resource management;sensor selection;heuristic;information fusion"} {"name": "85", "title": "Estimating the Global PageRank of Web Communities", "abstract": "Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources that include computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates.", "fulltext": "INTRODUCTION\nLocalized search engines are small-scale search engines\nthat index only a single community of the web. 
Such communities\ncan be site-specific domains, such as pages within\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for prot or commercial advantage and that copies\nbear this notice and the full citation on the rst page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specic\npermission and/or a fee.\nKDD'06, August 2023, 2006, Philadelphia, Pennsylvania, USA.\nCopyright 2006 ACM 1-59593-339-5/06/0008 ...\n$\n5.00.\nthe cs.utexas.edu domain, or topic-related communities-for\nexample, political websites. Compared to the web graph\ncrawled and indexed by large-scale search engines, the size\nof such local communities is typically orders of magnitude\nsmaller. Consequently, the computational resources needed\nto build such a search engine are also similarly lighter. By\nrestricting themselves to smaller, more manageable sections\nof the web, localized search engines can also provide more\nprecise and complete search capabilities over their respective\ndomains.\nOne drawback of localized indexes is the lack of global\ninformation needed to compute link-based rankings. The\nPageRank algorithm [3], has proven to be an effective such\nmeasure. In general, the PageRank of a given page is dependent\non pages throughout the entire web graph. In the\ncontext of a localized search engine, if the PageRanks are\ncomputed using only the local subgraph, then we would expect\nthe resulting PageRanks to reflect the perceived popularity\nwithin the local community and not of the web as a\nwhole. For example, consider a localized search engine that\nindexes political pages with conservative views. A person\nwishing to research the opinions on global warming within\nthe conservative political community may encounter numerous\nsuch opinions across various websites. If only local PageRank\nvalues are available, then the search results will reflect\nonly strongly held beliefs within the community. However, if\nglobal PageRanks are also available, then the results can ad-ditionally\nreflect outsiders' views of the conservative community\n(those documents that liberals most often access within\nthe conservative community).\nThus, for many localized search engines, incorporating\nglobal PageRanks can improve the quality of search results.\nHowever, the number of pages a local search engine indexes\nis typically orders of magnitude smaller than the number of\npages indexed by their large-scale counterparts. Localized\nsearch engines do not have the bandwidth, storage capacity,\nor computational power to crawl, download, and compute\nthe global PageRanks of the entire web. In this work, we\npresent a method of approximating the global PageRanks of\na local domain while only using resources of the same order\nas those needed to compute the PageRanks of the local\nsubgraph.\nOur proposed method looks for a supergraph of our local\nsubgraph such that the local PageRanks within this supergraph\nare close to the true global PageRanks. We construct\nthis supergraph by iteratively crawling global pages on the\ncurrent web frontier--i.e., global pages with inlinks from\npages that have already been crawled. 
In order to provide\n116\nResearch Track Paper\na good approximation to the global PageRanks, care must\nbe taken when choosing which pages to crawl next; in this\npaper, we present a well-motivated page selection algorithm\nthat also performs well empirically. This algorithm is derived\nfrom a well-defined problem objective and has a running\ntime linear in the number of local nodes.\nWe experiment across several types of local subgraphs,\nincluding four topic related communities and several site-specific\ndomains. To evaluate performance, we measure the\ndifference between the current global PageRank estimate\nand the global PageRank, as a function of the number of\npages crawled. We compare our algorithm against several\nheuristics and also against a baseline algorithm that chooses\npages at random, and we show that our method outperforms\nthese other methods. Finally, we empirically demonstrate\nthat, given a local domain of size n, we can provide good\napproximations to the global PageRank values by crawling\nat most n or 2n additional pages.\nThe paper is organized as follows.\nSection 2 gives an\noverview of localized search engines and outlines their advantages\nover global search. Section 3 provides background\non the PageRank algorithm. Section 4 formally defines our\nproblem, and section 5 presents our page selection criteria\nand derives our algorithms. Section 6 provides experimental\nresults, section 7 gives an overview of related work, and,\nfinally, conclusions are given in section 8.\nLOCALIZED SEARCH ENGINES\nLocalized search engines index a single community of the\nweb, typically either a site-specific community, or a topic-specific\ncommunity. Localized search engines enjoy three\nmajor advantages over their large-scale counterparts: they\nare relatively inexpensive to build, they can offer more precise\nsearch capability over their local domain, and they can\nprovide a more complete index.\nThe resources needed to build a global search engine are\nenormous. A 2003 study by Lyman et al. [13] found that\nthe `surface web' (publicly available static sites) consists of\n8.9 billion pages, and that the average size of these pages is\napproximately 18.7 kilobytes. To download a crawl of this\nsize, approximately 167 terabytes of space is needed. For a\nresearcher who wishes to build a search engine with access\nto a couple of workstations or a small server, storage of this\nmagnitude is simply not available. However, building a localized\nsearch engine over a web community of a hundred\nthousand pages would only require a few gigabytes of storage\n. The computational burden required to support search\nqueries over a database this size is more manageable as well.\nWe note that, for topic-specific search engines, the relevant\ncommunity can be efficiently identified and downloaded by\nusing a focused crawler [21, 4].\nFor site-specific domains, the local domain is readily available\non their own web server. This obviates the need for\ncrawling or spidering, and a complete and up-to-date index\nof the domain can thus be guaranteed. This is in contrast\nto their large-scale counterparts, which suffer from several\nshortcomings. First, crawling dynamically generated\npages--pages in the `hidden web'--has been the subject of\nresearch [20] and is a non-trivial task for an external crawler.\nSecond, site-specific domains can enable the robots exclusion\npolicy. 
This prohibits external search engines' crawlers from downloading content from the domain, and an external search engine must instead rely on outside links and anchor text to index these restricted pages.

By restricting itself to only a specific domain of the internet, a localized search engine can provide more precise search results. Consider the canonical ambiguous search query, `jaguar', which can refer to either the car manufacturer or the animal. A scientist trying to research the habitat and evolutionary history of a jaguar may have better success using a finely tuned zoology-specific search engine than querying Google with multiple keyword searches and wading through irrelevant results. A method to learn better ranking functions for retrieval was recently proposed by Radlinski and Joachims [19] and has been applied to various local domains, including Cornell University's website [8].

PAGERANK OVERVIEW
The PageRank algorithm defines the importance of web pages by analyzing the underlying hyperlink structure of a web graph. The algorithm works by building a Markov chain from the link structure of the web graph and computing its stationary distribution. One way to compute the stationary distribution of a Markov chain is to find the limiting distribution of a random walk over the chain. Thus, the PageRank algorithm uses what is sometimes referred to as the `random surfer' model. In each step of the random walk, the `surfer' either follows an outlink from the current page (i.e. the current node in the chain), or jumps to a random page on the web.

We now precisely define the PageRank problem. Let U be an m x m adjacency matrix for a given web graph such that U_{ji} = 1 if page i links to page j and U_{ji} = 0 otherwise. We define the PageRank matrix P_U to be:

    P_U = α U D_U^{-1} + (1 - α) v e^T,    (1)

where D_U is the (unique) diagonal matrix such that U D_U^{-1} is column stochastic, α is a given scalar such that 0 ≤ α ≤ 1, e is the vector of all ones, and v is a non-negative, L_1-normalized vector, sometimes called the `random surfer' vector. Note that the matrix D_U^{-1} is well-defined only if each column of U has at least one non-zero entry--i.e., each page in the webgraph has at least one outlink. In the presence of such `dangling nodes' that have no outlinks, one commonly used solution, proposed by Brin et al. [3], is to replace each zero column of U by a non-negative, L_1-normalized vector.

The PageRank vector r is the dominant eigenvector of the PageRank matrix, r = P_U r. We will assume, without loss of generality, that r has an L_1-norm of one. Computationally, r can be computed using the power method. This method first chooses a random starting vector r^(0), and iteratively multiplies the current vector by the PageRank matrix P_U; see Algorithm 1. In general, each iteration of the power method can take O(m^2) operations when P_U is a dense matrix. However, in practice, the number of links in a web graph will be of the order of the number of pages. By exploiting the sparsity of the PageRank matrix, the work per iteration can be reduced to O(km), where k is the average number of links per web page. It has also been shown that the total number of iterations needed for convergence is proportional to α and does not depend on the size of the web graph [11, 7].
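For concreteness, a minimal Python/SciPy sketch of this sparse power iteration is given below. It assumes the uniform random-surfer vector and simply guards against dangling columns rather than replacing them as in [3]; the function and variable names are ours, and this is only an illustration of equation (1), not the authors' implementation.

import numpy as np
from scipy.sparse import csc_matrix

def compute_pagerank(U, alpha=0.85, tol=1e-6):
    # U[j, i] = 1 if page i links to page j, stored column-wise so that
    # one iteration costs O(km) for an average of k links per page.
    m = U.shape[0]
    out_deg = np.asarray(U.sum(axis=0)).ravel()   # outlink count per page
    inv_deg = 1.0 / np.maximum(out_deg, 1.0)      # crude dangling-node guard
    v = np.full(m, 1.0 / m)                        # uniform random-surfer vector
    r = np.full(m, 1.0 / m)                        # L1-normalized starting vector
    while True:
        r_next = alpha * (U @ (inv_deg * r)) + (1.0 - alpha) * v
        if np.abs(r_next - r).sum() < tol:         # L1 convergence test
            return r_next
        r = r_next

# toy usage: a 3-page cycle 0 -> 1 -> 2 -> 0 gives the uniform PageRank
U = csc_matrix(([1, 1, 1], ([1, 2, 0], [0, 1, 2])), shape=(3, 3))
print(compute_pagerank(U))

The greedy framework of the next section repeatedly applies a routine of exactly this kind (ComputePR) to the growing local subgraph F.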
Finally, the total space needed is also O(km), mainly to store the matrix U.

Algorithm 1: A linear time (per iteration) algorithm for computing PageRank.
ComputePR(U)
Input: U: Adjacency matrix.
Output: r: PageRank vector.
  Choose (randomly) an initial non-negative vector r^(0) such that ||r^(0)||_1 = 1.
  i ← 0
  repeat
    i ← i + 1
    r^(i) ← α U D_U^{-1} r^(i-1)    {α is the random surfing probability}
    r^(i) ← r^(i) + (1 - α) v       {v is the random surfer vector}
  until ||r^(i) - r^(i-1)|| < ε      {ε is the convergence threshold}
  r ← r^(i)

PROBLEM DEFINITION
Given a local domain L, let G be an N x N adjacency matrix for the entire connected component of the web that contains L, such that G_{ji} = 1 if page i links to page j and G_{ji} = 0 otherwise. Without loss of generality, we will partition G as:

    G = [ L       G_out
          L_out   G_within ],    (2)

where L is the n x n local subgraph corresponding to links inside the local domain, L_out is the subgraph that corresponds to links from the local domain pointing out to the global domain, G_out is the subgraph containing links from the global domain into the local domain, and G_within contains links within the global domain. We assume that when building a localized search engine, only pages inside the local domain are crawled, and the links between these pages are represented by the subgraph L. The links in L_out are also known, as these point from crawled pages in the local domain to uncrawled pages in the global domain.

As defined in equation (1), P_G is the PageRank matrix formed from the global graph G, and we define the global PageRank vector of this graph to be g. Let the n-length vector p* be the L_1-normalized vector corresponding to the global PageRank of the pages in the local domain L:

    p* = E_L g / ||E_L g||_1,

where E_L = [ I | 0 ] is the restriction matrix that selects the components from g corresponding to nodes in L. Let p denote the PageRank vector constructed from the local domain subgraph L. In practice, the observed local PageRank p and the global PageRank p* will be quite different. One would expect that as the size of the local matrix L approaches the size of the global matrix G, the global PageRank and the observed local PageRank will become more similar. Thus, one approach to estimating the global PageRank is to crawl the entire global domain, compute its PageRank, and extract the PageRanks of the local domain.

Typically, however, n ≪ N, i.e., the number of global pages is much larger than the number of local pages. Therefore, crawling all global pages will quickly exhaust all local resources (computational, storage, and bandwidth) available to create the local search engine. We instead seek a supergraph ^F of our local subgraph L with size O(n).
Our goal is to find such a supergraph ^F with PageRank ^f, so that ^f when restricted to L is close to p*. Formally, we seek to minimize

    GlobalDiff(^f) = || E_L ^f / ||E_L ^f||_1  -  p* ||_1.    (3)

We choose the L_1 norm for measuring the error as it does not place excessive weight on outliers (as the L_2 norm does, for example), and also because it is the most commonly used distance measure in the literature for comparing PageRank vectors, as well as for detecting convergence of the algorithm [3].

Algorithm 2: The FindGlobalPR algorithm.
FindGlobalPR(L, L_out, T, k)
Input: L: zero-one adjacency matrix for the local domain, L_out: zero-one outlink matrix from L to the global subgraph as in (2), T: number of iterations, k: number of pages to crawl per iteration.
Output: ^p: an improved estimate of the global PageRank of L.
  F ← L
  F_out ← L_out
  f ← ComputePR(F)
  for (i = 1 to T)
    {Determine which pages to crawl next}
    pages ← SelectNodes(F, F_out, f, k)
    Crawl pages, augment F and modify F_out
    {Update PageRanks for new local domain}
    f ← ComputePR(F)
  end
  {Extract PageRanks of original local domain & normalize}
  ^p ← E_L f / ||E_L f||_1

We propose a greedy framework, given in Algorithm 2, for constructing ^F. Initially, F is set to the local subgraph L, and the PageRank f of this graph is computed. The algorithm then proceeds as follows. First, the SelectNodes algorithm (which we discuss in the next section) is called, and it returns a set of k nodes to crawl next from the set of nodes in the current crawl frontier, F_out. These selected nodes are then crawled to expand the local subgraph, F, and the PageRanks of this expanded graph are then recomputed. These steps are repeated for each of T iterations. Finally, the PageRank vector ^p, which is restricted to pages within the original local domain, is returned. Given our computation, bandwidth, and memory restrictions, we will assume that the algorithm will crawl at most O(n) pages. Since the PageRanks are computed in each iteration of the algorithm, which is an O(n) operation, we will also assume that the number of iterations T is a constant. Of course, the main challenge here is in selecting which set of k nodes to crawl next. In the next section, we formally define the problem and give efficient algorithms.

NODE SELECTION
In this section, we present node selection algorithms that operate within the greedy framework presented in the previous section. We first give a well-defined criterion for the page selection problem and provide experimental evidence that this criterion can effectively identify pages that optimize our problem objective (3). We then present our main
In practice, self-links are often removed, as\nthey only serve to inflate a given page's PageRank.\nObserve that the inlinks into F from node j are not known\nuntil after node j is crawled. Therefore, we estimate this\ninlink vector as the expectation over inlink counts among\nthe set of already crawled pages,\ns =\nF\nT\ne\nF\nT\ne\n1\n.\n(5)\nIn practice, for any given page, this estimate may not reflect\nthe true inlinks from that page. Furthermore, this expectation\nis sampled from the set of links within the crawled\ndomain, whereas a better estimate would also use links from\nthe global domain. However, the latter distribution is not\nknown to a localized search engine, and we contend that the\nabove estimate will, on average, be a better estimate than\nthe uniform distribution, for example.\nLet the PageRank of F be f . We express the PageRank\nf\n+\nj\nof the expanded local graph F\nj\nas\nf\n+\nj\n=\n(1\n- x\nj\n)f\nj\nx\nj\n,\n(6)\nwhere x\nj\nis the PageRank of the candidate global node j,\nand f\nj\nis the L\n1\n-normalized PageRank vector restricted to\nthe pages in F .\nSince directly optimizing our problem goal requires knowing\nthe global PageRank p\n\n, we instead propose to crawl\nthose nodes that will have the greatest influence on the PageRanks\nof pages in the original local domain L:\ninfluence(j)\n=\nkL\n|f\nj\n[k]\n- f[k]|\n(7)\n=\nE\nL\n(f\nj\n- f )\n1\n.\nExperimentally, the influence score is a very good predictor\nof our problem objective (3). For each candidate global node\nj, figure 1(a) shows the objective function value Global Diff(f\nj\n)\nas a function of the influence of page j. The local domain\nused here is a crawl of conservative political pages (we will\nprovide more details about this dataset in section 6); we\nobserved similar results in other domains. The correlation\nis quite strong, implying that the influence criteria can effectively\nidentify pages that improve the global PageRank\nestimate. As a baseline, figure 1(b) compares our objective\nwith an alternative criteria, outlink count. The outlink\ncount is defined as the number of outlinks from the local\ndomain to page j. The correlation here is much weaker.\n.00001\n.0001\n.001\n.01\n0.26\n0.262\n0.264\n0.266\nInfluence\nObjective\n1\n10\n100\n1000\n0.266\n0.264\n0.262\n0.26\nOutlink Count\nObjective\n(a)\n(b)\nFigure 1: (a) The correlation between our\ninfluence\npage selection criteria (7) and the actual objective\nfunction (3) value is quite strong. (b) This is in contrast\nto other criteria, such as outlink count, which\nexhibit a much weaker correlation.\n5.2\nComputation\nAs described, for each candidate global page j, the influence\nscore (7) must be computed.\nIf f\nj\nis computed\nexactly for each global page j, then the PageRank algorithm\nwould need to be run for each of the O(n) such global\npages j we consider, resulting in an O(n\n2\n) computational\ncost for the node selection method. Thus, computing the\nexact value of f\nj\nwill lead to a quadratic algorithm, and we\nmust instead turn to methods of approximating this vector.\nThe algorithm we present works by performing one power\nmethod iteration used by the PageRank algorithm (Algorithm\n1). The convergence rate for the PageRank algorithm\nhas been shown to equal the random surfer probability [7,\n11]. Given a starting vector x\n(0)\n, if k PageRank iterations\nare performed, the current PageRank solution x\n(k)\nsatisfies:\nx\n(k)\n- x\n1\n= O(\nk\nx\n(0)\n- x\n1\n),\n(8)\nwhere x\n\nis the desired PageRank vector. 
Therefore, if only\none iteration is performed, choosing a good starting vector\nis necessary to achieve an accurate approximation.\nWe partition the PageRank matrix P\nF\nj\n, corresponding to\nthe\nsubgraph F\nj\nas:\nP\nF\nj\n=\n~\nF\n~\ns\n~\nu\nT\nj\nw\n,\n(9)\nwhere\n~\nF\n=\nF (D\nF\n+ diag(u\nj\n))\n-1\n+ (1\n- ) e\n+ 1 e\nT\n,\n~\ns = s + (1 - ) e\n+ 1 ,\n~\nu\nj\n=\n(D\nF\n+ diag(u\nj\n))\n-1\nu\nj\n+ (1\n- ) e\n+ 1 ,\nw\n=\n1\n-\n+ 1 ,\nand diag(u\nj\n) is the diagonal matrix with the (i, i)\nth\nentry\nequal to one if the i\nth\nelement of u\nj\nequals one, and is zero\notherwise. We have assumed here that the random surfer\nvector is the uniform vector, and that L has no `dangling\nlinks'. These assumptions are not necessary and serve only\nto simplify discussion and analysis.\nA simple approach for estimating f\nj\nis the following. First,\nestimate the PageRank f\n+\nj\nof F\nj\nby computing one PageRank\niteration over the matrix P\nF\nj\n, using the starting vector\n=\nf\n0\n. Then, estimate f\nj\nby removing the last\n119\nResearch Track Paper\ncomponent from our estimate of f\n+\nj\n(i.e., the component\ncorresponding to the added node j), and renormalizing.\nThe problem with this approach is in the starting vector.\nRecall from (6) that x\nj\nis the PageRank of the added node\nj. The difference between the actual PageRank f\n+\nj\nof P\nF\nj\nand the starting vector is\n- f\n+\nj\n1\n=\nx\nj\n+ f\n- (1 - x\nj\n)f\nj 1\nx\nj\n+\n| f\n1\n- (1 - x\nj\n) f\nj 1\n|\n=\nx\nj\n+\n|x\nj\n|\n=\n2x\nj\n.\nThus, by (8), after one PageRank iteration, we expect our\nestimate of f\n+\nj\nto still have an error of about 2x\nj\n. In particular\n, for candidate nodes j with relatively high PageRank\nx\nj\n, this method will yield more inaccurate results. We will\nnext present a method that eliminates this bias and runs in\nO(n) time.\n5.2.1\nStochastic Complementation\nSince f\n+\nj\n, as given in (6) is the PageRank of the matrix\nP\nF\nj\n, we have:\nf\nj\n(1\n- x\nj\n)\nx\nj\n=\n~\nF\n~\ns\n~\nu\nT\nj\nw\nf\nj\n(1\n- x\nj\n)\nx\nj\n=\n~\nF f\nj\n(1\n- x\nj\n) + ~\nsx\nj\n~\nu\nT\nj\nf\nj\n(1\n- x\nj\n) + wx\nj\n.\nSolving the above system for f\nj\ncan be shown to yield\nf\nj\n= ( ~\nF + (1 - w)\n-1\n~\ns ~\nu\nT\nj\n)f\nj\n.\n(10)\nThe matrix S = ~\nF +(1-w)\n-1\n~\ns ~\nu\nT\nj\nis known as the stochastic\ncomplement of the column stochastic matrix P\nF\nj\nwith respect\nto the sub matrix ~\nF . The theory of stochastic complementation\nis well studied, and it can be shown the stochastic\ncomplement of an irreducible matrix (such as the PageRank\nmatrix) is unique. Furthermore, the stochastic complement\nis also irreducible and therefore has a unique stationary distribution\nas well. For an extensive study, see [15].\nIt can be easily shown that the sub-dominant eigenvalue\nof S is at most\n+1\n, where\nis the size of F . For sufficiently\nlarge , this value will be very close to . This is important,\nas other properties of the PageRank algorithm, notably the\nalgorithm's sensitivity, are dependent on this value [11].\nIn this method, we estimate the length\nvector f\nj\nby\ncomputing one PageRank iteration over the\nstochastic\ncomplement S, starting at the vector f :\nf\nj\nSf.\n(11)\nThis is in contrast to the simple method outlined in the previous\nsection, which first iterates over the ( + 1)\n( + 1)\nmatrix P\nF\nj\nto estimate f\n+\nj\n, and then removes the last component\nfrom the estimate and renormalizes to approximate\nf\nj\n. 
The problem with the latter method is in the choice\nof the ( + 1) length starting vector, . Consequently, the\nPageRank estimate given by the simple method differs from\nthe true PageRank by at least 2x\nj\n, where x\nj\nis the PageRank\nof page j. By using the stochastic complement, we\ncan establish a tight lower bound of zero for this difference.\nTo see this, consider the case in which a node k is added\nto F to form the augmented local subgraph F\nk\n, and that\nthe PageRank of this new graph is\n(1\n- x\nk\n)f\nx\nk\n. Specifi-cally\n, the addition of page k does not change the PageRanks\nof the pages in F , and thus f\nk\n= f . By construction of\nthe stochastic complement, f\nk\n= Sf\nk\n, so the approximation\ngiven in equation (11) will yield the exact solution.\nNext, we present the computational details needed to efficiently\ncompute the quantity f\nj\n-f\n1\nover all known global\npages j. We begin by expanding the difference f\nj\n-f , where\nthe vector f\nj\nis estimated as in (11),\nf\nj\n- f Sf - f\n=\nF (D\nF\n+ diag(u\nj\n))\n-1\nf + (1 - ) e\n+ 1 e\nT\nf\n+(1\n- w)\n-1\n( ~\nu\nT\nj\nf )~\ns - f.\n(12)\nNote that the matrix (D\nF\n+diag(u\nj\n))\n-1\nis diagonal. Letting\no[k] be the outlink count for page k in F , we can express\nthe k\nth\ndiagonal element as:\n(D\nF\n+ diag(u\nj\n))\n-1\n[k, k] =\n1\no[k]+1\nif u\nj\n[k] = 1\n1\no[k]\nif u\nj\n[k] = 0\nNoting that (o[k] + 1)\n-1\n= o[k]\n-1\n- (o[k](o[k] + 1))\n-1\nand\nrewriting this in matrix form yields\n(D\nF\n+diag(u\nj\n))\n-1\n= D\n-1\nF\n-D\n-1\nF\n(D\nF\n+diag(u\nj\n))\n-1\ndiag(u\nj\n).\n(13)\nWe use the same identity to express\ne\n+ 1 =\ne - e\n( + 1) .\n(14)\nRecall that, by definition, we have P\nF\n= F D\n-1\nF\n+(1\n-)\ne\n.\nSubstituting (13) and (14) in (12) yields\nf\nj\n- f (P\nF\nf - f)\n-F D\n-1\nF\n(D\nF\n+ diag(u\nj\n))\n-1\ndiag(u\nj\n)f\n-(1 - )\ne\n( + 1) + (1 - w)\n-1\n( ~\nu\nT\nj\nf )~\ns\n=\nx + y + ( ~\nu\nT\nj\nf )z,\n(15)\nnoting that by definition, f = P\nF\nf , and defining the vectors\nx, y, and z to be\nx = -F D\n-1\nF\n(D\nF\n+ diag(u\nj\n))\n-1\ndiag(u\nj\n)f\n(16)\ny = -(1 - )\ne\n( + 1)\n(17)\nz = (1 - w)\n-1\n~\ns.\n(18)\nThe first term x is a sparse vector, and takes non-zero values\nonly for local pages k that are siblings of the global page\nj. We define (i, j)\nF if and only if F [j, i] = 1 (equiva-lently\n, page i links to page j) and express the value of the\ncomponent x[k ] as:\nx[k ] =\nk\n:(k,k )F ,u\nj\n[k]=1\nf [k]\no[k](o[k] + 1) ,\n(19)\nwhere o[k], as before, is the number of outlinks from page k\nin the local domain. Note that the last two terms, y and z\nare not dependent on the current global node j. Given the\nfunction h\nj\n(f ) =\ny + ( ~\nu\nT\nj\nf )z\n1\n, the quantity\nf\nj\n- f\n1\n120\nResearch Track Paper\ncan be expressed as\nf\nj\n- f\n1\n=\nk\nx[k] + y[k] + ( ~\nu\nT\nj\nf )z[k]\n=\nk:x[k]=0\ny[k] + ( ~\nu\nT\nj\nf )z[k]\n+\nk:x[k]=0\nx[k] + y[k] + ( ~\nu\nT\nj\nf )z[k]\n=\nh\nj\n(f )\nk\n:x[k]=0\ny[k] + ( ~\nu\nT\nj\nf )z[k]\n+\nk:x[k]=0\nx[k] + y[k] + ( ~\nu\nT\nj\nf )z[k] .(20)\nIf we can compute the function h\nj\nin linear time, then we\ncan compute each value of\nf\nj\n- f\n1\nusing an additional\namount of time that is proportional to the number of non-zero\ncomponents in x. These optimizations are carried out\nin Algorithm 3. 
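Before turning to the details of those optimizations, it may help to see the basic estimate (11) written out densely for a single candidate page j. The Python sketch below builds the expanded PageRank matrix under the uniform random-surfer assumption, forms the stochastic complement S explicitly, and returns the influence score (7); it assumes that the pages of the original local domain L occupy the first n_local coordinates of F (so that E_L = [I | 0]), and the function name sc_influence is ours rather than the paper's. Algorithm 3 computes the same quantity without ever forming S.

import numpy as np

def sc_influence(F, f, u_j, s, n_local, alpha=0.85):
    # F: (l x l) zero-one matrix of the crawled subgraph, F[b, a] = 1 if a links to b
    # f: PageRank of F;  u_j: outlinks from F to candidate j;  s: estimated inlinks, eq. (5)
    l = F.shape[0]
    A = np.zeros((l + 1, l + 1))
    A[:l, :l] = F
    A[l, :l] = u_j                              # links from F into page j
    A[:l, l] = s                                # estimated links from page j back into F
    col_sums = np.maximum(A.sum(axis=0), 1.0)   # the paper assumes no dangling links
    P = alpha * A / col_sums + (1.0 - alpha) / (l + 1)   # PageRank matrix of F_j

    F_t, s_t = P[:l, :l], P[:l, l]              # blocks ~F and ~s of (9)
    u_t, w = P[l, :l], P[l, l]                  # blocks ~u_j^T and w of (9)
    S = F_t + np.outer(s_t, u_t) / (1.0 - w)    # stochastic complement of P_Fj w.r.t. ~F

    f_j = S @ f                                 # one power iteration started at f, as in (11)
    return np.abs(f_j[:n_local] - f[:n_local]).sum()     # influence (7), restricted to L

Forming S explicitly costs O(l^2) per candidate, which is exactly the quadratic cost that the decomposition into x, y and z and Algorithm 3 are designed to avoid.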
Note that (20) computes the difference between\nall components of f and f\nj\n, whereas our node selection\ncriteria, given in (7), is restricted to the components\ncorresponding to nodes in the original local domain L.\nLet us examine Algorithm 3 in more detail. First, the\nalgorithm computes the outlink counts for each page in the\nlocal domain. The algorithm then computes the quantity\n~\nu\nT\nj\nf for each known global page j. This inner product can\nbe written as\n(1\n- ) 1\n+ 1 +\nk:(k,j)F\nout\nf [k]\no[k] + 1 ,\nwhere the second term sums over the set of local pages that\nlink to page j. Since the total number of edges in F\nout\nwas\nassumed to have size O( ) (recall that\nis the number of\npages in F ), the running time of this step is also O( ).\nThe algorithm then computes the vectors y and z, as\ngiven in (17) and (18), respectively.\nThe L\n1\nNormDiff\nmethod is called on the components of these vectors which\ncorrespond to the pages in L, and it estimates the value of\nE\nL\n(y + ( ~\nu\nT\nj\nf )z)\n1\nfor each page j. The estimation works\nas follows. First, the values of ~\nu\nT\nj\nf are discretized uniformly\ninto c values\n{a\n1\n, ..., a\nc\n}. The quantity E\nL\n(y + a\ni\nz)\n1\nis\nthen computed for each discretized value of a\ni\nand stored in\na table. To evaluate\nE\nL\n(y + az)\n1\nfor some a\n[a\n1\n, a\nc\n],\nthe closest discretized value a\ni\nis determined, and the corresponding\nentry in the table is used. The total running time\nfor this method is linear in\nand the discretization parameter\nc (which we take to be a constant). We note that if exact\nvalues are desired, we have also developed an algorithm that\nruns in O( log ) time that is not described here.\nIn the main loop, we compute the vector x, as defined\nin equation (16). The nested loops iterate over the set of\npages in F that are siblings of page j. Typically, the size\nof this set is bounded by a constant. Finally, for each page\nj, the scores vector is updated over the set of non-zero\ncomponents k of the vector x with k\nL. This set has\nsize equal to the number of local siblings of page j, and is\na subset of the total number of siblings of page j. Thus,\neach iteration of the main loop takes constant time, and the\ntotal running time of the main loop is O( ). 
Since we have\nassumed that the size of F will not grow larger than O(n),\nthe total running time for the algorithm is O(n).\nAlgorithm 3:\nNode Selection via Stochastic\nComplementation.\nSC-Select\n(F , F\nout\n, f , k)\nInput:\nF : zero-one adjacency matrix of size corresponding\nto the current local subgraph, F\nout\n: zero-one\noutlink matrix from F to global subgraph, f : PageRank\nof F , k: number of pages to return\nOutput: pages: set of k pages to crawl next\n{Compute outlink sums for local subgraph}\nforeach (page j\nF )\no[j]\n\nk:(j,k)F\nF [j, k]\nend\n{Compute scalar ~u\nT\nj\nf for each global node j }\nforeach (page j\nF\nout\n)\ng[j]\n(1 - )\n1\n+1\nforeach (page k : (k, j)\nF\nout\n)\ng[j]\ng[j] +\nf[k]\no[k]+1\nend\nend\n{Compute vectors y and z as in (17) and (18) }\ny -(1 - )\ne\n( +1)\nz (1 - w)\n-1\n~\ns\n{Approximate y + g[j] z\n1\nfor all values g[j]\n}\nnorm diffs\nL\n1\nNormDiffs\n(g, E\nL\ny, E\nL\nz)\nforeach (page j\nF\nout\n)\n{Compute sparse vector x as in (19)}\nx 0\nforeach (page k : (k, j)\nF\nout\n)\nforeach (page k : (k, k )\nF ))\nx[k ]\nx[k ] f\n[k]\no[k](o[k]+1)\nend\nend\nx x\nscores[j]\nnorm diffs[j]\nforeach (k : x[k] > 0 and page k\nL)\nscores[j]\nscores[j] - |y[k] + g[j] z[k]|\n+\n|x[k]+y[k]+g[j]z[k])|\nend\nend\nReturn k pages with highest scores\n5.2.2\nPageRank Flows\nWe now present an intuitive analysis of the stochastic\ncomplementation method by decomposing the change in PageRank\nin terms of `leaks' and `flows'. This analysis is motivated\nby the decomposition given in (15). PageRank `flow' is\nthe increase in the local PageRanks originating from global\npage j. The flows are represented by the non-negative vector\n( ~\nu\nT\nj\nf )z (equations (15) and (18)). The scalar ~\nu\nT\nj\nf can be\nthought of as the total amount of PageRank flow that page\nj has available to distribute. The vector z dictates how the\nflow is allocated to the local domain; the flow that local\npage k receives is proportional to (within a constant factor\ndue to the random surfer vector) the expected number of its\ninlinks.\nThe PageRank `leaks' represent the decrease in PageRank\nresulting from the addition of page j.\nThe leakage can\nbe quantified in terms of the non-positive vectors x and\ny (equations (16) and (17)). For vector x, we can see from\nequation (19) that the amount of PageRank leaked by a\nlocal page is proportional to the weighted sum of the Page-121\nResearch Track Paper\nRanks of its siblings. Thus, pages that have siblings with\nhigher PageRanks (and low outlink counts) will experience\nmore leakage. The leakage caused by y is an artifact of the\nrandom surfer vector.\nWe will next show that if only the `flow' term, ( ~\nu\nT\nj\nf )z,\nis considered, then the resulting method is very similar to\na heuristic proposed by Cho et al. [6] that has been widely\nused for the \"Crawling Through URL Ordering\" problem.\nThis heuristic is computationally cheaper, but as we will see\nlater, not as effective as the Stochastic Complementation\nmethod.\nOur node selection strategy chooses global nodes that\nhave the largest influence (equation (7)). 
If this influence is\napproximated using only `flows', the optimal node j\n\nis:\nj\n\n=\nargmax\nj\nE\nL\n~\nu\nT\nj\nf z\n1\n=\nargmax\nj\n~\nu\nT\nj\nf E\nL\nz\n1\n=\nargmax\nj\n~\nu\nT\nj\nf\n=\nargmax\nj\n(D\nF\n+ diag(u\nj\n))\n-1\nu\nj\n+ (1\n- ) e\n+ 1 , f\n=\nargmax\nj\nf\nT\n(D\nF\n+ diag(u\nj\n))\n-1\nu\nj\n.\nThe resulting page selection score can be expressed as a sum\nof the PageRanks of each local page k that links to j, where\neach PageRank value is normalized by o[k]+1. Interestingly,\nthe normalization that arises in our method differs from the\nheuristic given in [6], which normalizes by o[k].\nThe algorithm\nPF-Select, which is omitted due to lack of space,\nfirst computes the quantity f\nT\n(D\nF\n+diag(u\nj\n))\n-1\nu\nj\nfor each\nglobal page j, and then returns the pages with the k largest\nscores. To see that the running time for this algorithm is\nO(n), note that the computation involved in this method is\na subset of that needed for the SC-Select method (Algorithm\n3), which was shown to have a running time of O(n).\nEXPERIMENTS\nIn this section, we provide experimental evidence to verify\nthe effectiveness of our algorithms. We first outline our\nexperimental methodology and then provide results across\na variety of local domains.\n6.1\nMethodology\nGiven the limited resources available at an academic institution\n, crawling a section of the web that is of the same\nmagnitude as that indexed by Google or Yahoo! is clearly\ninfeasible. Thus, for a given local domain, we approximate\nthe global graph by crawling a local neighborhood around\nthe domain that is several orders of magnitude larger than\nthe local subgraph. Even though such a graph is still orders\nof magnitude smaller than the `true' global graph, we contend\nthat, even if there exist some highly influential pages\nthat are very far away from our local domain, it is unrealis-tic\nfor any local node selection algorithm to find them. Such\npages also tend to be highly unrelated to pages within the\nlocal domain.\nWhen explaining our node selection strategies in section\n5, we made the simplifying assumption that our local graph\ncontained no dangling nodes.\nThis assumption was only\nmade to ease our analysis. Our implementation efficiently\nhandles dangling links by replacing each zero column of our\nadjacency matrix with the uniform vector. We evaluate the\nalgorithm using the two node selection strategies given in\nSection 5.2, and also against the following baseline methods:\nRandom: Nodes are chosen uniformly at random among\nthe known global nodes.\nOutlinkCount: Global nodes with the highest number\nof outlinks from the local domain are chosen.\nAt each iteration of the FindGlobalPR algorithm, we evaluate\nperformance by computing the difference between the\ncurrent PageRank estimate of the local domain,\nE\nL\nf\nE\nL\nf\n1\n, and\nthe global PageRank of the local domain\nE\nL\ng\nE\nL\ng\n1\n. All PageRank\ncalculations were performed using the uniform random\nsurfer vector. Across all experiments, we set the random\nsurfer parameter , to be .85, and used a convergence\nthreshold of 10\n-6\n. We evaluate the difference between the\nlocal and global PageRank vectors using three different metrics\n: the L\n1\nand L\n\nnorms, and Kendall's tau. The L\n1\nnorm\nmeasures the sum of the absolute value of the differences between\nthe two vectors, and the L\n\nnorm measures the absolute\nvalue of the largest difference. 
Kendall's tau metric is\na popular rank correlation measure used to compare PageRanks\n[2, 11]. This metric can be computed by counting\nthe number of pairs of pairs that agree in ranking, and subtracting\nfrom that the number of pairs of pairs that disagree\nin ranking. The final value is then normalized by the total\nnumber of\nn\n2\nsuch pairs, resulting in a [\n-1, 1] range, where\na negative score signifies anti-correlation among rankings,\nand values near one correspond to strong rank correlation.\n6.2\nResults\nOur experiments are based on two large web crawls and\nwere downloaded using the web crawler that is part of the\nNutch open source search engine project [18]. All crawls\nwere restricted to only `http' pages, and to limit the number\nof dynamically generated pages that we crawl, we ig-nored\nall pages with urls containing any of the characters\n`?', `*', `@', or `='. The first crawl, which we will refer to\nas the `edu' dataset, was seeded by homepages of the top\n100 graduate computer science departments in the USA, as\nrated by the US News and World Report [16], and also by\nthe home pages of their respective institutions. A crawl of\ndepth 5 was performed, restricted to pages within the `.edu'\ndomain, resulting in a graph with approximately 4.7 million\npages and 22.9 million links. The second crawl was seeded\nby the set of pages under the `politics' hierarchy in the dmoz\nopen directory project[17]. We crawled all pages up to four\nlinks away, which yielded a graph with 4.4 million pages and\n17.3 million links.\nWithin the `edu' crawl, we identified the five site-specific\ndomains corresponding to the websites of the top five graduate\ncomputer science departments, as ranked by the US\nNews and World Report. This yielded local domains of various\nsizes, from 10,626 (UIUC) to 59,895 (Berkeley). For each\nof these site-specific domains with size n, we performed 50\niterations of the FindGlobalPR algorithm to crawl a total\nof 2n additional nodes. Figure 2(a) gives the (L\n1\n) difference\nfrom the PageRank estimate at each iteration to the global\nPageRank, for the Berkeley local domain.\nThe performance of this dataset was representative of the\ntypical performance across the five computer science site-specific\nlocal domains. Initially, the L\n1\ndifference between\nthe global and local PageRanks ranged from .0469 (Stanford\n) to .149 (MIT). For the first several iterations, the\n122\nResearch Track Paper\n0\n5\n10\n15\n20\n25\n30\n35\n40\n45\n50\n0.015\n0.02\n0.025\n0.03\n0.035\n0.04\n0.045\n0.05\n0.055\nNumber of Iterations\nGlobal and Local PageRank Difference (L1)\nStochastic Complement\nPageRank Flow\nOutlink Count\nRandom\n0\n10\n20\n30\n40\n50\n0\n0.05\n0.1\n0.15\n0.2\n0.25\n0.3\n0.35\nNumber of Iterations\nGlobal and Local PageRank Difference (L1)\nStochastic Complement\nPageRank Flow\nOutlink Count\nRandom\n0\n5\n10\n15\n20\n25\n0.16\n0.18\n0.2\n0.22\n0.24\n0.26\n0.28\n0.3\n0.32\n0.34\nNumber of Iterations\nGlobal and Local PageRank Difference (L1)\nStochastic Complement\nPageRank Flow\nOutlink Count\nRandom\n(a) www.cs.berkeley.edu\n(b) www.enterstageright.com\n(c) Politics\nFigure 2: L\n1\ndifference between the estimated and true global PageRanks for (a) Berkeley's computer science\nwebsite, (b) the site-specific domain, www.enterstageright.com, and (c) the `politics' topic-specific domain. 
The\nstochastic complement method outperforms all other methods across various domains.\nthree link-based methods all outperform the random selection\nheuristic.\nAfter these initial iterations, the random\nheuristic tended to be more competitive with (or even outperform\n, as in the Berkeley local domain) the outlink count\nand PageRank flow heuristics. In all tests, the stochastic\ncomplementation method either outperformed, or was competitive\nwith, the other methods. Table 1 gives the average\ndifference between the final estimated global PageRanks and\nthe true global PageRanks for various distance measures.\nAlgorithm\nL\n1\nL\n\nKendall\nStoch. Comp.\n.0384\n.00154\n.9257\nPR Flow\n.0470\n.00272\n.8946\nOutlink\n.0419\n.00196\n.9053\nRandom\n.0407\n.00204\n.9086\nTable 1: Average final performance of various node\nselection strategies for the five site-specific computer\nscience local domains.\nNote that Kendall's\nTau measures similarity, while the other metrics are\ndissimilarity measures.\nStochastic Complementation\nclearly outperforms the other methods in all\nmetrics.\nWithin the `politics' dataset, we also performed two site-specific\ntests for the largest websites in the crawl: www.adam-smith\n.org, the website for the London based Adam Smith\nInstitute, and www.enterstageright.com, an online conservative\njournal. As with the `edu' local domains, we ran our\nalgorithm for 50 iterations, crawling a total of 2n nodes. Figure\n2 (b) plots the results for the www.enterstageright.com\ndomain. In contrast to the `edu' local domains, the Random\nand OutlinkCount methods were not competitive with either\nthe SC-Select or the PF-Select methods. Among all\ndatasets and all node selection methods, the stochastic complementation\nmethod was most impressive in this dataset,\nrealizing a final estimate that differed only .0279 from the\nglobal PageRank, a ten-fold improvement over the initial local\nPageRank difference of .299. For the Adam Smith local\ndomain, the initial difference between the local and global\nPageRanks was .148, and the final estimates given by the\nSC-Select\n, PF-Select, OutlinkCount, and Random\nmethods were .0208, .0193, .0222, and .0356, respectively.\nWithin the `politics' dataset, we constructed four topic-specific\nlocal domains.\nThe first domain consisted of all\npages in the dmoz politics category, and also all pages within\neach of these sites up to two links away. This yielded a local\ndomain of 90,811 pages, and the results are given in figure 2\n(c). Because of the larger size of the topic-specific domains,\nwe ran our algorithm for only 25 iterations to crawl a total\nof n nodes.\nWe also created topic-specific domains from three political\nsub-topics: liberalism, conservatism, and socialism. The\npages in these domains were identified by their corresponding\ndmoz categories. For each sub-topic, we set the local\ndomain to be all pages within three links from the corresponding\ndmoz category pages.\nTable 2 summarizes the\nperformance of these three topic-specific domains, and also\nthe larger political domain.\nTo quantify a global page j's effect on the global PageRank\nvalues of pages in the local domain, we define page\nj's impact to be its PageRank value, g[j], normalized by the\nfraction of its outlinks pointing to the local domain:\nimpact(j) = o\nL\n[j]\no[j] g[j],\nwhere, o\nL\n[j] is the number of outlinks from page j to pages\nin the local domain L, and o[j] is the total number of j's\noutlinks. 
In terms of the random surfer model, the impact of page j is the probability that the random surfer (1) is currently at global page j in her random walk and (2) takes an outlink to a local page, given that she has already decided not to jump to a random page.

For the politics local domain, we found that many of the pages with high impact were in fact political pages that should have been included in the dmoz politics topic, but were not. For example, the two most influential global pages were the political search engine www.askhenry.com, and the home page of the online political magazine, www.policy-review.com. Among non-political pages, the home page of the journal "Education Next" was most influential. The journal is freely available online and contains articles regarding various aspects of K-12 education in America. To provide some anecdotal evidence for the effectiveness of our page selection methods, we note that the SC-Select method chose 11 pages within the www.educationnext.org domain, the PF-Select method discovered 7 such pages, while the OutlinkCount and Random methods found only 6 pages each.

Table 2: Final performance among node selection strategies for the four political topic-specific crawls. Note that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures.

All Politics:
Algorithm      L1       L2        Kendall
Stoch. Comp.   .1253    .000700   .8671
PR Flow        .1446    .000710   .8518
Outlink        .1470    .00225    .8642
Random         .2055    .00203    .8271

Conservatism:
Algorithm      L1       L2        Kendall
Stoch. Comp.   .0496    .000990   .9158
PR Flow        .0554    .000939   .9028
Outlink        .0602    .00527    .9144
Random         .1197    .00102    .8843

Liberalism:
Algorithm      L1       L2        Kendall
Stoch. Comp.   .0622    .001360   .8848
PR Flow        .0799    .001378   .8669
Outlink        .0763    .001379   .8844
Random         .1127    .001899   .8372

Socialism:
Algorithm      L1       L2        Kendall
Stoch. Comp.   .04318   .00439    .9604
PR Flow        .0450    .004251   .9559
Outlink        .04282   .00344    .9591
Random         .0631    .005123   .9350

For the conservative political local domain, the socialist website www.ornery.org had a very high impact score. This was largely due to a link from the front page of this site to an article regarding global warming published by the National Center for Public Policy Research, a conservative research group in Washington, DC. Not surprisingly, the global PageRank of this article (which happens to be on the home page of the NCCPR, www.nationalresearch.com) was approximately .002, whereas the local PageRank of this page was only .00158. The SC-Select method yielded a global PageRank estimate of approximately .00182, the PF-Select method estimated a value of .00167, and the Random and OutlinkCount methods yielded values of .01522 and .00171, respectively.

RELATED WORK
The node selection framework we have proposed is similar to the url ordering for crawling problem proposed by Cho et al. in [6]. Whereas our framework seeks to minimize the difference between the global and local PageRank, the objective used in [6] is to crawl the most highly (globally) ranked pages first. They propose several node selection algorithms, including the outlink count heuristic, as well as a variant of our PF-Select algorithm which they refer to as the `PageRank ordering metric'. They found this method to be most effective in optimizing their objective, as did a recent survey of these methods by Baeza-Yates et al. [1]. Boldi et al.
also\nexperiment within a similar crawling framework in [2], but\nquantify their results by comparing Kendall's rank correlation\nbetween the PageRanks of the current set of crawled\npages and those of the entire global graph. They found that\nnode selection strategies that crawled pages with the highest\nglobal PageRank first actually performed worse (with\nrespect to Kendall's Tau correlation between the local and\nglobal PageRanks) than basic depth first or breadth first\nstrategies. However, their experiments differ from our work\nin that our node selection algorithms do not use (or have\naccess to) global PageRank values.\nMany algorithmic improvements for computing exact PageRank\nvalues have been proposed [9, 10, 14]. If such algorithms\nare used to compute the global PageRanks of our\nlocal domain, they would all require O(N ) computation,\nstorage, and bandwidth, where N is the size of the global\ndomain. This is in contrast to our method, which approximates\nthe global PageRank and scales linearly with the size\nof the local domain.\nWang and Dewitt [22] propose a system where the set of\nweb servers that comprise the global domain communicate\nwith each other to compute their respective global PageRanks\n. For a given web server hosting n pages, the computational\n, bandwidth, and storage requirements are also\nlinear in n. One drawback of this system is that the number\nof distinct web servers that comprise the global domain\ncan be very large. For example, our `edu' dataset contains\nwebsites from over 3,200 different universities; coordinating\nsuch a system among a large number of sites can be very\ndifficult.\nGan, Chen, and Suel propose a method for estimating the\nPageRank of a single page [5] which uses only constant bandwidth\n, computation, and space. Their approach relies on the\navailability of a remote connectivity server that can supply\nthe set of inlinks to a given page, an assumption not used in\nour framework. They experimentally show that a reasonable\nestimate of the node's PageRank can be obtained by visiting\nat most a few hundred nodes. Using their algorithm for our\nproblem would require that either the entire global domain\nfirst be downloaded or a connectivity server be used, both\nof which would lead to very large web graphs.\nCONCLUSIONS AND FUTURE WORK\nThe internet is growing exponentially, and in order to navigate\nsuch a large repository as the web, global search engines\nhave established themselves as a necessity. Along with\nthe ubiquity of these large-scale search engines comes an increase\nin search users' expectations. By providing complete\nand isolated coverage of a particular web domain, localized\nsearch engines are an effective outlet to quickly locate content\nthat could otherwise be difficult to find. In this work,\nwe contend that the use of global PageRank in a localized\nsearch engine can improve performance.\nTo estimate the global PageRank, we have proposed an\niterative node selection framework where we select which\npages from the global frontier to crawl next. Our primary\ncontribution is our stochastic complementation page selection\nalgorithm. This method crawls nodes that will most\nsignificantly impact the local domain and has running time\nlinear in the number of nodes in the local domain. Experimentally\n, we validate these methods across a diverse set of\nlocal domains, including seven site-specific domains and four\ntopic-specific domains. 
We conclude that by crawling an additional\nn or 2n pages, our methods find an estimate of the\nglobal PageRanks that is up to ten times better than just\nusing the local PageRanks. Furthermore, we demonstrate\nthat our algorithm consistently outperforms other existing\nheuristics.\n124\nResearch Track Paper\nOften times, topic-specific domains are discovered using\na focused web crawler which considers a page's content in\nconjunction with link anchor text to decide which pages to\ncrawl next [4]. Although such crawlers have proven to be\nquite effective in discovering topic-related content, many irrelevant\npages are also crawled in the process. Typically,\nthese pages are deleted and not indexed by the localized\nsearch engine. These pages can of course provide valuable\ninformation regarding the global PageRank of the local domain\n. One way to integrate these pages into our framework\nis to start the FindGlobalPR algorithm with the current\nsubgraph F equal to the set of pages that were crawled by\nthe focused crawler.\nThe global PageRank estimation framework, along with\nthe node selection algorithms presented, all require O(n)\ncomputation per iteration and bandwidth proportional to\nthe number of pages crawled, T k. If the number of iterations\nT is relatively small compared to the number of pages\ncrawled per iteration, k, then the bottleneck of the algorithm\nwill be the crawling phase. However, as the number of iterations\nincreases (relative to k), the bottleneck will reside in\nthe node selection computation. In this case, our algorithms\nwould benefit from constant factor optimizations. Recall\nthat the FindGlobalPR algorithm (Algorithm 2) requires\nthat the PageRanks of the current expanded local domain be\nrecomputed in each iteration. Recent work by Langville and\nMeyer [12] gives an algorithm to quickly recompute PageRanks\nof a given webgraph if a small number of nodes are\nadded. This algorithm was shown to give speedup of five to\nten times on some datasets. We plan to investigate this and\nother such optimizations as future work.\nIn this paper, we have objectively evaluated our methods\nby measuring how close our global PageRank estimates are\nto the actual global PageRanks. To determine the benefit\nof using global PageRanks in a localized search engine,\nwe suggest a user study in which users are asked to rate\nthe quality of search results for various search queries. For\nsome queries, only the local PageRanks are used in ranking\n, and for the remaining queries, local PageRanks and the\napproximate global PageRanks, as computed by our algorithms\n, are used. The results of such a study can then be\nanalyzed to determine the added benefit of using the global\nPageRanks computed by our methods, over just using the\nlocal PageRanks.\nAcknowledgements. This research was supported by NSF\ngrant CCF-0431257, NSF Career Award ACI-0093404, and\na grant from Sabre, Inc.\nREFERENCES\n[1] R. Baeza-Yates, M. Marin, C. Castillo, and\nA. Rodriguez. Crawling a country: better strategies\nthan breadth-first for web page ordering. World-Wide\nWeb Conference, 2005.\n[2] P. Boldi, M. Santini, and S. Vigna. Do your worst to\nmake the best: paradoxical effects in pagerank\nincremental computations. Workshop on Web Graphs,\n3243:168180, 2004.\n[3] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual web search engine. Computer Networks\nand ISDN Systems, 33(17):107117, 1998.\n[4] S. Chakrabarti, M. van den Berg, and B. 
Dom.\nFocused crawling: a new approach to topic-specific\nweb resource discovery. World-Wide Web Conference,\n1999.\n[5] Y. Chen, Q. Gan, and T. Suel. Local methods for\nestimating pagerank values. Conference on\nInformation and Knowledge Management, 2004.\n[6] J. Cho, H. Garcia-Molina, and L. Page. Efficient\ncrawling through url ordering. World-Wide Web\nConference, 1998.\n[7] T. H. Haveliwala and S. D. Kamvar. The second\neigenvalue of the Google matrix. Technical report,\nStanford University, 2003.\n[8] T. Joachims, F. Radlinski, L. Granka, A. Cheng,\nC. Tillekeratne, and A. Patel. Learning retrieval\nfunctions from implicit feedback.\nhttp://www.cs.cornell.edu/People/tj/career.\n[9] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and\nG. H. Golub. Exploiting the block structure of the\nweb for computing pagerank. World-Wide Web\nConference, 2003.\n[10] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and\nG. H. Golub. Extrapolation methods for accelerating\npagerank computation. World-Wide Web Conference,\n2003.\n[11] A. N. Langville and C. D. Meyer. Deeper inside\npagerank. Internet Mathematics, 2004.\n[12] A. N. Langville and C. D. Meyer. Updating the\nstationary vector of an irreducible markov chain with\nan eye on Google's pagerank. SIAM Journal on\nMatrix Analysis, 2005.\n[13] P. Lyman, H. R. Varian, K. Swearingen, P. Charles,\nN. Good, L. L. Jordan, and J. Pal. How much\ninformation 2003? School of Information Management\nand System, University of California at Berkely, 2003.\n[14] F. McSherry. A uniform approach to accelerated\npagerank computation. World-Wide Web Conference,\n2005.\n[15] C. D. Meyer. Stochastic complementation, uncoupling\nmarkov chains, and the theory of nearly reducible\nsystems. SIAM Review, 31:240272, 1989.\n[16] US News and World Report. http://www.usnews.com.\n[17] Dmoz open directory project. http://www.dmoz.org.\n[18] Nutch open source search engine.\nhttp://www.nutch.org.\n[19] F. Radlinski and T. Joachims. Query chains: learning\nto rank from implicit feedback. ACM SIGKDD\nInternational Conference on Knowledge Discovery and\nData Mining, 2005.\n[20] S. Raghavan and H. Garcia-Molina. Crawling the\nhidden web. In Proceedings of the Twenty-seventh\nInternational Conference on Very Large Databases,\n2001.\n[21] T. Tin Tang, D. Hawking, N. Craswell, and\nK. Griffiths. Focused crawling for both topical\nrelevance and quality of medical information.\nConference on Information and Knowledge\nManagement, 2005.\n[22] Y. Wang and D. J. DeWitt. Computing pagerank in a\ndistributed internet search system. Proceedings of the\n30th VLDB Conference, 2004.\n125\nResearch Track Paper\n", "keywords": "node selection;Experimentation;global PageRank;Algorithms;crawling;site specific domain;localized search engines"} {"name": "86", "title": "Evaluating Similarity Measures: A Large-Scale Study in the Orkut Social Network", "abstract": "Online information services have grown too large for users to navigate without the help of automated tools such as collaborative filtering, which makes recommendations to users based on their collective past behavior. While many similarity measures have been proposed and individually evaluated, they have not been evaluated relative to each other in a large real-world environment. We present an extensive empirical comparison of six distinct measures of similarity for recommending online communities to members of the Orkut social network. 
We determine the usefulness of the different recommendations by actually measuring users' propensity to visit and join recommended communities. We also examine how the ordering of recommendations influenced user selection, as well as interesting social issues that arise in recommending communities within a real social network.", "fulltext": "INTRODUCTION
The amount of information available online grows far faster than an individual's ability to assimilate it. For example, consider "communities" (user-created discussion groups) within Orkut, a social-networking website (http://www.orkut.com) affiliated with Google. The original mechanisms for users to find communities were labor-intensive, including searching for keywords in community titles and descriptions or browsing other users' memberships. Four months after its January 2004 debut, Orkut had over 50,000 communities, providing the necessity and opportunity for data-mining for automated recommendations. There are now (May 2005) over 1,500,000 communities.

While there are many forms of recommender systems [3], we chose a collaborative filtering approach [13] based on overlapping membership of pairs of communities. We did not make use of semantic information, such as the description of or messages in a community (although this may be an area of future work). Our recommendations were on a per-community, rather than a per-user basis; that is, all members of a given community would see the same recommendations when visiting that community's page. We chose this approach out of the belief, which was confirmed, that community memberships were rich enough to make very useful recommendations without having to perform more computationally intensive operations, such as clustering of users or communities or computing nearest neighbor relations among users. Indeed, Sarwar et al. have found such item-based algorithms to be both more efficient and successful than user-based algorithms [13]. By measuring user acceptance of recommendations, we were able to evaluate the absolute and relative utility of six different similarity measures on a large volume of data.

MEASURES OF SIMILARITY
The input data came from the membership relation M = {(u, c) | u ∈ U, c ∈ C}, where C is the set of communities with at least 20 members and U the set of users belonging to at least one such community. When we began our experiment in May 2004, |C| = 19,792, |U| = 181,160, and |M| = 2,144,435. Table 1 summarizes the distribution.

Table 1: Distribution of community memberships
                       min   max    median
Users per community    20    9077   50       230.5
Communities per user   1     4173   6        28.0

All of our measures of community similarity involve the overlap between two communities, i.e., the number of common users.
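To make the measure definitions that follow concrete, it helps to fix a representation of the membership relation M. The scaffolding below is our own, not the authors' code, and is assumed by the similarity-measure sketch given after the measure definitions.

```python
from collections import defaultdict

def build_membership_sets(membership_pairs):
    """Turn the relation M = {(u, c)} into per-community member sets.

    membership_pairs is an iterable of (user_id, community_id) tuples.
    Returns a dict mapping each community to the set of its members."""
    members = defaultdict(set)
    for user, community in membership_pairs:
        members[community].add(user)
    return members

# Toy example; B and R are the member sets of two communities.
M = [("u1", "wine"), ("u2", "wine"), ("u2", "cheese"), ("u3", "cheese")]
members = build_membership_sets(M)
B, R = members["wine"], members["cheese"]
print(len(B & R))  # overlap |B ∩ R| = 1
```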
If a base community b and a (potentially) related community r are considered as sets of users, the overlap is |B ∩ R|, where we use capital letters to represent the set containing a community's members. Note that overlap cannot be the sole factor in relatedness, as the size of communities varies greatly. If we only considered overlap, practically every community would be considered related to the "Linux" community, which was the most popular, with 9,077 members. The similarity measures in the next section normalize the overlap in different ways.

2.1 Similarity Measure Functions
Each similarity measure we consider is presented as a (possibly asymmetric) function of b and r indicating how appropriate the related community r is as a recommendation for the base community b. We do not use the result of the function as an absolute measure of similarity, only to rank recommendations for a given base community.

2.1.1 L1-Norm
If we consider the base and related communities to be vectors \vec{b} and \vec{r}, where the i-th element of a vector is 1 if user i is a member and 0 if not, we can measure the overlap as the dot product normalized by the product of their L1-norms:

    L1(\vec{b}, \vec{r}) = \frac{\vec{b} \cdot \vec{r}}{\|\vec{b}\|_1 \, \|\vec{r}\|_1}

This quantity can also be expressed in set notation, where we use a capital letter to represent the set containing a community's members:

    L1(B, R) = \frac{|B \cap R|}{|B| \cdot |R|}

Note that this evaluates to the overlap between the two groups divided by the product of their sizes. When the base community is held constant (as when we determine the base community's recommendations), this evaluates to the overlap divided by the size of the related community, favoring small communities. Kitts et al. [9] reported this to be a successful measure of similarity in their recommender system.

2.1.2 L2-Norm
Similarly, we can measure the overlap with the product of the L2-norms ("cosine distance" [3, 6, 12]) of \vec{b} and \vec{r}:

    L2(\vec{b}, \vec{r}) = \frac{\vec{b} \cdot \vec{r}}{\|\vec{b}\|_2 \, \|\vec{r}\|_2}

In set notation:

    L2(B, R) = \frac{|B \cap R|}{\sqrt{|B| \cdot |R|}}

Note that the square root in the denominator causes L2 to penalize large communities less severely than L1. Observe that the L2-norm presented here is equivalent to the widely used cosine coefficient applied to binary data. Moreover, while Pearson correlation has been used previously in recommender systems where ranking data is available, we did not use this measure here since it is generally considered inappropriate for binary data.

2.1.3 Pointwise Mutual-Information: positive correlations (MI1)
Information theory motivates other measures of correlation, such as "mutual information" [2]. We chose pointwise mutual information where we only count "positive" correlations (membership in both B and R). Such a formulation essentially focuses on how membership in one group is predictive of membership in another (without considering how base non-membership in a group affects membership in another group), yielding:

    MI1(b, r) = P(r, b) \lg \frac{P(r, b)}{P(r) \, P(b)}

2.1.4 Pointwise Mutual-Information: positive and negative correlations (MI2)
Similarly, we can compute the pointwise mutual information with both positive and negative correlations (e.g., membership in both B and R, or non-membership in both groups).
Again, we don't compute the full expected mutual information, since we believe cross-correlations (e.g., how membership in B affects non-membership in R) tend to be distortive with the recommendation task, since such cross-correlations are plentiful but not very informative. This yields:

    MI2(b, r) = P(r, b) \lg \frac{P(r, b)}{P(r) \, P(b)} + P(\bar{r}, \bar{b}) \lg \frac{P(\bar{r}, \bar{b})}{P(\bar{r}) \, P(\bar{b})}

2.1.5 Salton (IDF)
Salton proposed a measure of similarity based on inverse document frequency scaling (tf-idf) [12]:

    IDF(b, r) = P(r|b) \, (-\lg P(r))

    IDF(B, R) = \frac{|B \cap R|}{|B|} \left(-\lg \frac{|R|}{|U|}\right)

2.1.6 Log-Odds
We first considered the standard log-odds function, which measures the relative likelihood that presence or absence in a base community predicts membership in a related community:

    LogOdds0(b, r) = \lg \frac{P(r|b)}{P(r|\bar{b})}

Empirically, we found this generated the exact same rankings as using the L1-Norm, which makes sense because:
1. Logarithm is monotonic and, while affecting scores, does not affect rankings.
2. Constant factors, such as |B|, do not affect rankings.
3. For |B| ≪ |U|, P(r|\bar{b}) ≈ P(r).

We formulated a different log-odds metric, which measures whether membership in the base community is likelier to predict membership or absence in the related community:

    LogOdds(b, r) = \lg \frac{P(r|b)}{P(\bar{r}|b)}

Table 2: Average size of top-ranked community for each measure
measure    rank 1   rank 2   rank 3
L1         332      482      571
L2         460      618      694
MI1        903      931      998
MI2        966      1003     1077
IDF        923      985      1074
LogOdds    357      513      598

Table 3: Agreement in top-ranked results between measures. For example, MI1 and IDF rank the same related community first for 98% of base communities. Correlations greater than 85% are in bold.
           L1     L2     MI1    MI2    IDF
L2         .70
MI1        .41    .60
MI2        .39    .57    .96
IDF        .41    .59    .98    .97
LogOdds    .88    .79    .46    .44    .46

2.2 Discussion
For a given measure, we refer to the related community yielding the highest value as the top-ranked related community relative to a base community. The average size of top-ranked communities for each measure, which varies greatly, is shown in Table 2. Table 3 shows how often two functions yield the same top-ranking result. Table 4 shows the top recommendations for the "I love wine" community. Note that MI1, MI2, and IDF favor very large communities, while L1 and LogOdds favor small communities.

Note that in addition to the obvious correlations between the two mutual-information functions (96%), there is a very strong correlation between IDF and the mutual-information functions (97-98%). Manipulation of the formulas for MI1 and IDF shows:

    MI1(b, r) = P(r, b) \lg \frac{P(r, b)}{P(r) \, P(b)}
              = P(r|b) P(b) \lg P(r|b) - P(r|b) P(b) \lg P(r)
              = P(r|b) P(b) \lg P(r|b) - P(r|b) [1 - P(\bar{b})] \lg P(r)
              = P(r|b) [P(b) \lg P(r|b) + P(\bar{b}) \lg P(r)] - P(r|b) \lg P(r)

Substituting IDF(b, r) = -P(r|b) \lg P(r), we get:

    MI1(b, r) = P(r|b) [P(b) \lg P(r|b) + P(\bar{b}) \lg P(r)] + IDF(b, r)

Since for virtually all communities b, P(b) ≪ P(\bar{b}), we can approximate:

    MI1(b, r) ≈ IDF(b, r) + P(r|b) P(\bar{b}) \lg P(r)

Thus, MI1 yields a ranking that can be thought of as starting with the ranking of IDF and perturbing the score of each element in the ranking by P(r|b) P(\bar{b}) \lg P(r), which generally is not great enough to change the relative ranking of the top scores, leading to MI1 and IDF often giving the same ranking to top-scoring communities. (Note that this perturbation quantity is given only to explain the high correlation between MI1 and IDF. Statistically, it is meaningless, since b and \bar{b} cannot simultaneously hold.)
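All six measures reduce to a handful of set and probability operations over member sets like those built in the earlier sketch. The implementation below is our own illustration, not the authors' code; names and edge-case handling are ours, probabilities are maximum-likelihood estimates over the user population U, and lg is log base 2 as in the text.

```python
from math import log2, sqrt

def similarity_scores(B, R, num_users):
    """Six overlap-based scores for base community B and related community R
    (sets of user ids) in a population of num_users users.
    Assumes B and R are non-empty and B is not the whole population."""
    U = num_users
    overlap = len(B & R)
    p_b, p_r = len(B) / U, len(R) / U
    p_rb = overlap / U                     # P(r, b)
    p_r_given_b = overlap / len(B)         # P(r | b)
    p_r_given_not_b = len(R - B) / (U - len(B))
    p_not_r_not_b = (U - len(B | R)) / U   # P(not r, not b)

    lg = lambda x: log2(x) if x > 0 else float("-inf")
    scores = {
        "L1":  overlap / (len(B) * len(R)),
        "L2":  overlap / sqrt(len(B) * len(R)),
        "MI1": p_rb * lg(p_rb / (p_r * p_b)) if overlap else 0.0,
        "IDF": p_r_given_b * -lg(p_r),
        "LogOdds0": lg(p_r_given_b / p_r_given_not_b)
                    if p_r_given_not_b else float("inf"),
        "LogOdds":  lg(p_r_given_b / (1 - p_r_given_b))
                    if p_r_given_b < 1 else float("inf"),
    }
    scores["MI2"] = scores["MI1"] + (
        p_not_r_not_b * lg(p_not_r_not_b / ((1 - p_r) * (1 - p_b)))
        if p_not_r_not_b else 0.0)
    return scores
```

To rank recommendations for a fixed base community B, these scores are computed for every candidate R and the candidates are sorted by the chosen measure.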
EXPERIMENT DESIGN
We designed an experiment to determine the relative value of the recommendations produced by each similarity measure. This involved interleaving different pairs of recommendations and tracking user clicks. Specifically, we measured the efficacy of different similarity measures using pairwise binomial sign tests on click-through data rather than using traditional supervised learning measures such as precision/recall or accuracy, since there is no "true" labeled data for this task (i.e., we do not know what are the correct communities that should be recommended to a user). Rather, we focused on the task of determining which of the similarity measures performs best on a relative performance scale with regard to acceptance by users.

3.1 Combination
When a user viewed a community page, we hashed the combined user and community identifiers to one of 30 values, specifying an ordered pair of similarity measures to compare. Let S and T be the ordered lists of recommendations for the two measures, where S = (s_1, s_2, ..., s_{|S|}) and T = (t_1, t_2, ..., t_{|T|}) and |S| = |T|. The recommendations of each measure are combined by Joachims' "Combined Ranking" algorithm [7], restated in Figure 1. The resulting list is guaranteed to contain the top k_S and k_T recommendations for each measure, where k_T ≤ k_S ≤ k_T + 1 [7, Theorem 1].

3.2 Measurements
Whenever a user visited a community, two measures were chosen and their recommendations interleaved, as discussed above. This was done in a deterministic manner so that a given user always saw the same recommendations for a given community. To minimize feedback effects, we did not regenerate recommendations after the experiment began.

A user who views a base community (e.g., "I love wine") is either a member (denoted by "M") or non-member (denoted by "n"). (We capitalize "M" but not "n" to make them easier to visually distinguish.) In either case, recommendations are shown. When a user clicks on a recommendation, there are three possibilities: (1) the user is already a member of the recommended community ("M"), (2) the user joins the recommended community ("j"), or (3) the user visits but does not join the recommended community ("n"). Base and related community membership statuses can thus be combined in six different ways. For example, "Mj" denotes a click where a member of the base community clicks on a recommendation to another community to which she does not belong and joins that community. Traditionally, analyses of recommender systems focus on "Mj", also known informally as "if you like this, you'll like that" or formally as "similarity" or "conversion". "Mn" recommendations are considered distracters, having negative utility, since they waste a user's time with an item not of interest. Before running the experiment, we decided that the measures should be judged on their "Mj" performance.
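The two mechanisms of Section 3.1--deterministically assigning one of the 30 ordered measure pairs to each (user, community) view, and interleaving the two ranked lists--can be sketched as follows. This is an illustrative reconstruction, not the production code: the choice of MD5 as the hash is our assumption (the paper only says the identifiers were hashed), and the iterative interleave is intended to be equivalent in effect to the recursive procedure restated in Figure 1 below.

```python
import hashlib
from itertools import permutations

MEASURES = ["L1", "L2", "MI1", "MI2", "IDF", "LogOdds"]
# All ordered pairs of distinct measures: 6 * 5 = 30 experimental conditions.
CONDITIONS = list(permutations(MEASURES, 2))

def condition_for(user_id, community_id):
    """Deterministically map a (user, community) view to one of the 30
    ordered measure pairs, so a given user always sees the same condition
    for a given community."""
    digest = hashlib.md5(f"{user_id}:{community_id}".encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

def interleave(S, T):
    """Merge two equal-length ranked lists, alternately taking the next
    unseen item from each list, so that any prefix of the result contains
    the tops of both lists with ranks differing by at most one."""
    D, seen = [], set()
    for s, t in zip(S, T):
        for item in (s, t):
            if item not in seen:
                seen.add(item)
                D.append(item)
    return D

print(interleave(["a", "b", "c"], ["b", "d", "c"]))  # ['a', 'b', 'd', 'c']
```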
Other interpretations are possible: "Mn" links could be considered to have positive utility for any of the following reasons:
1. As the user found the link sufficiently interesting to click on, it was of more utility than a link not eliciting a click.
2. The user is genuinely interested in the related community but does not want to proclaim her interest, as membership information is public and some communities focus on taboo or embarrassing topics. For example, a recommendation given for the popular "Chocolate" community is "PMS". Note that this effect is specific to social networks and not, for example, Usenet groups, where the user's list of communities is not revealed to other users.

Similarly, it is unclear how to value clicks from a base community that the user does not belong to. Does an "nj" click indicate failure, since the base community was not joined by the user, but the recommended community was, indicating a degree of dissimilarity? Or is it of positive utility, since it helped a user find a community of interest? For these reasons, we tracked all clicks, recording the user's membership status in the base and recommended communities for later analysis. (We did not track whether users returned to communities in the future because of the logging overhead that would be required.)

Table 4: Top recommendations for each measure for the "I love wine" community, with each recommended community's overlap with the base community and size. The size of "I love wine" is 2400.
Measure   Rank 1                                  Rank 2                            Rank 3
L1        Ice Wine (Eiswein) (33/51)              California Pinot Noir (26/41)     Winery Visitor Worldwide (44/74)
L2        Red Wine (208/690)                      Cheeses of the World (200/675)    I love red wine! (170/510)
MI1       Japanese Food/Sushi Lovers (370/3206)   Red Wine (208/690)                Cheeses of the World (200/675)
MI2       Japanese Food/Sushi Lovers (370/3206)   Red Wine (208/690)                Cheeses of the World (200/675)
IDF       Japanese Food/Sushi Lovers (370/3206)   Photography (319/4679)            Red Wine (208/690)
LogOdds   Japanese Food/Sushi Lovers (370/3206)   Photography (319/4679)            Linux (299/9077)

Figure 1: Joachims' "Combine Rankings" algorithm [7]
Input:   ordered recommendation lists S = (s_1, s_2, ..., s_|S|) and T = (t_1, t_2, ..., t_|T|), where |S| = |T|
Call:    combine(S, T, 0, 0, ∅)
Output:  combined ordered recommendation list D
combine(S, T, k_s, k_t, D) {
    if (k_s < |S| ∧ k_t < |T|) {
        if (k_s = k_t) {
            if (S[k_s + 1] ∉ D) { D := D + S[k_s + 1]; }
            combine(S, T, k_s + 1, k_t, D);
        } else {
            if (T[k_t + 1] ∉ D) { D := D + T[k_t + 1]; }
            combine(S, T, k_s, k_t + 1, D);
        }
    }
}

Table 5: Clicks on recommendations, by membership status in the base and recommended communities, as counts and as percentages of total clicks. The last column shows the conversion rate, defined as the percentage of non-members clicking on a related community who then joined it (j / (n + j)).
Membership in base community                Membership in recommended community
                                            M (member)   n (non-member)   j (join)   total     conversion rate
M (member)        number of clicks          36353        184214           212982     433549    54%
                  percent of total clicks   4%           20%              24%        48%
n (non-member)    number of clicks          8771         381241           77905      467917    17%
                  percent of total clicks   1%           42%              9%         52%
total             number of clicks          45124        565455           290887     901466    34%
                  percent of total clicks   5%           63%              32%        100%

3.3 User Interface
On community pages, our recommendations were provided in a table, each cell of which contained a recommended community's name, optional picture, and link (Figure 2). Recommendations were shown by decreasing rank from left to right, top to bottom, in up to 4 rows of 3.
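As an aside on Table 5 above, the cell counts and conversion rates are straightforward to recompute from a raw click log. A small sketch with our own field names, not the authors' schema:

```python
from collections import Counter

def summarize_clicks(clicks):
    """Tally clicks by (base, recommended) membership status.

    Each click is a pair (base_status, rec_status) with base_status in
    {'M', 'n'} and rec_status in {'M', 'n', 'j'} as defined in Section 3.2.
    Returns per-cell counts and the conversion rate j / (n + j) per row."""
    counts = Counter(clicks)
    summary = {}
    for base in ("M", "n"):
        row = {rec: counts[(base, rec)] for rec in ("M", "n", "j")}
        row["total"] = sum(row.values())
        clicked_nonmembers = row["n"] + row["j"]
        row["conversion"] = (row["j"] / clicked_nonmembers
                             if clicked_nonmembers else 0.0)
        summary[base] = row
    return summary

# Tiny synthetic log.
log = [("M", "j"), ("M", "n"), ("M", "j"), ("n", "n"), ("n", "j"), ("n", "M")]
print(summarize_clicks(log)["M"]["conversion"])  # 2/3
```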
For aesthetic\nreasons, we only showed entire rows; thus, no recommendations\nwere displayed if there were fewer than 3. We also\nprovided a control that allowed users to send us comments\non the recommendations.\nRESULTS\nWe analyzed all accesses between July 1, 2004, to July 18,\n2004, of users who joined Orkut during that period. The system\nserved 4,106,050 community pages with recommendations\n, which provides a lower bound on the number of views.\n(Unfortunately, we could not determine the total number of\nviews due to browser caching.) There were 901,466 clicks on\nrecommendations, 48% by members of the base community,\n52% by non-members (Table 5). Clicks to related communities\nto which the user already belonged were rare, accounting\nfor only 5% of clicks. The most common case was for a non-member\nof a base community to click through and not join\na related community (42%).\nWe defined conversion rate (also called precision) as the\npercentage of non-members who clicked through to a community\nwho then joined it. The conversion rate was three\ntimes as high (54%) when the member belonged to the base\ncommunity (from which the recommendation came) than\nnot (17%).\n4.1\nRelative performance of different measures\nWe compared each measure pairwise against every other\nmeasure by analyzing clicks of their merged recommendations\n. If the click was on a recommendation ranked higher\nby measure L2 than measure L1, for example, we considered\nit a \"win\" for L2 and a loss for L1. If both measures\nranked it equally, the result was considered to be a tie. Table\n6 shows the outcomes of all clicks, with conversions by\nmembers (\"Mj\") and non-members of the base community\n(\"nj\") broken out separately.\nWe say that a measure dominates another if, in their pairwise\ncomparison, the former has more \"wins\". For example,\nL2 dominates L1. This definition, combined with the data\nin Table 6, yielded a total order (to our surprise) among\nthe measures: L2, MI1, MI2, IDF, L1, LogOdds. The same\ntotal order occurred if only \"nj\" clicks were considered.\nThe order was different if all clicks were considered: L2, L1,\nMI1, MI2, IDF, LogOdds.\n4.2\nConversion rates\nThere was great variance in conversion rate by recommended\ncommunity.\nWe examined the 93 recommended\ncommunities that were clicked through to more than 1000\ntimes. Unsurprisingly, the ten with the lowest conversion\nrate all were about sex (e.g., Amateur Porn). Note that\nmembers of the base community were far more willing than\nnon-members to join, perhaps because they had already\nshown their willingness to join a sex-related community. At\nthe other extreme, none of the ten with the highest conversion\nrate were sexual (e.g., Flamenco). Table 7 provides\nselected data by each membership combination. Unsurprisingly\n, for all 93 base communities, members were more likely\nthan non-members to join the recommended community.\n4.3\nUser comments\nUsers were also able to submit feedback on related communities\n. Most of the feedback was from users who wanted\nrecommendations added or removed. Some complained about\ninappropriate recommendations of sexual or political\ncommunities, especially if they found the displayed image\noffensive. A few objected to our generating related community\nrecommendations at all, instead of allowing community\ncreators to specify them. 
In one case, poor recommendations destroyed a community: The creator of a feminist sexuality community disbanded it both because of the prurient recommendations that appeared on her page and the disruptive new members who joined as a result of recommendations from such communities. We agreed with her that the recommendations were problematic and offered to remove them. While anecdotal, this example illustrates how a recommendation can have unanticipated consequences that cannot be captured in simple statistical measures. (An informal discussion of users' behavior when we allowed them to choose related communities can be found elsewhere [14].)

POSITIONAL EFFECTS
During the above experiment, we became curious how the relative placement of recommendations affected users' selections and performed a second experiment.

5.1 Design
After determining that L2 was the best measure of similarity, we recomputed the recommendations and studied the effect of position on click-through. While in our original experiment we displayed up to 12 recommendations in decreasing rank, for this experiment we displayed up to 9 recommendations in random order, again ensuring that each user always saw the same ordering of recommendations for a given community.

Table 6: The relative performance of each measure in pairwise combination on clicks leading to joins, divided by base community membership status, and on all clicks. Except where numbers appear in italics, the superiority of one measure over another was statistically significant (p < .01) using a binomial sign test [10].
                      M -> j                   n -> j                   all clicks
measures              win    equal   loss      win    equal   loss      win     equal   loss
L2   vs MI1           6899   2977    4993      2600   1073    1853      30664   12277   20332
L2   vs MI2           6940   2743    5008      2636   1078    1872      31134   11260   19832
L2   vs IDF           6929   2697    5064      2610   1064    1865      30710   11271   20107
L2   vs L1            7039   2539    4834      2547   941     1983      28506   13081   23998
L2   vs LogOdds       8186   1638    4442      2852   564     1655      34954   6664    18631
MI1  vs MI2           3339   9372    1855      1223   3401    683       14812   37632   7529
MI1  vs IDF           3431   8854    1891      1139   3288    629       14671   37049   7758
MI1  vs LogOdds       7099   3546    3341      2514   1213    1193      29837   13869   13921
MI1  vs L1            6915   1005    6059      2547   407     2338      27786   4308    29418
MI2  vs IDF           1564   11575   1031      533    4266    359       6003    47885   4490
MI2  vs LogOdds       6920   3959    3177      2484   1418    598       2881    15308   13188
MI2  vs L1            6830   950     6419      2383   362     2333      26865   3872    29864
IDF  vs L1            6799   1006    6304      2467   392     2352      27042   4069    29755
IDF  vs LogOdds       6691   3804    3096      2452   1378    1085      28224   15013   13330
L1   vs LogOdds       6730   518     5975      2521   108     2059      31903   2097    24431

Table 7: Conversion rates by status of membership in base community, for communities to which more than 1000 clicks on recommendations occurred.
                                       member of base community          non-member of base community
Related community                      MM      Mn      Mj      conv.     nM     nn       nj      conv.
10 communities with highest
conversion rates                       583     2273    6984    75%       198    3454     2017    37%
10 communities with lowest
conversion rates                       326     1984    826     29%       68     26287    472     1.8%
all 93 communities                     13524   54415   52614   46%       3488   127819   19007   17%
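The significance test cited in the caption of Table 6 is a binomial sign test over the win/loss clicks, with ties discarded. A minimal sketch using SciPy (1.7 or later), shown only to make the procedure concrete; the function name is ours:

```python
from scipy.stats import binomtest

def sign_test(wins, losses):
    """Two-sided binomial sign test for a pairwise measure comparison.

    Ties ('equal' clicks) are discarded, as is usual for a sign test; the
    remaining clicks are treated as Bernoulli trials with p = 0.5 under the
    null hypothesis that neither measure ranks clicked items higher."""
    n = wins + losses
    return binomtest(wins, n, p=0.5).pvalue

# Example: the L2 vs MI1 row of Table 6, "M -> j" clicks.
print(sign_test(6899, 4993))  # far below .01, i.e. significant
```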
By randomizing the position of recommendations\n, we sought to measure ordering primacy effects\nin the recommendations as opposed to their ranked quality.\n5.2\nResults\nWe measured all 1,279,226 clicks on related community\nrecommendations from September 22, 2004, through October\n21, 2004.\nTable 8 shows the relative likelihood of\nclicks on each position. When there was only a single row,\nthe middle recommendation was clicked most, followed by\nthe leftmost, then rightmost recommendations, although the\ndifferences were not statistically significant.\nWhen there\nwere two or three rows, the differences were very significant\n(p < .001), with preferences for higher rows. P-values were\ncomputed using a Chi-Squared test comparing the observed\nclick-through rates with a uniform distribution over all positions\n[10].\nCONCLUSION AND FUTURE PLANS\nOrkut's large number of community memberships and users\nallowed us to evaluate the relative performance of six\ndifferent measures of similarity in a large-scale real-world\nstudy. We are not aware of any comparable published large-scale\nexperiments.\nWe were surprised that a total order\nemerged among the similarity measures and that L2 vector\nnormalization showed the best empirical results despite\nother measures, such as log-odds and pointwise mutual information\n, which we found more intuitive. For future work,\nwe would like to see how recommendations handpicked by\ncommunity owners compare.\nJust as we can estimate communities' similarity through\ncommon users, we can estimate users' similarity through\ncommon community memberships: i.e., user A might be\nsimilar to user B because they belong to n of the same communities\n. It will be interesting to see whether L2 also proves\nsuperior in such a domain. We could also take advantage\nof Orkut's being a social network [8], i.e., containing information\non social connections between pairs of users. In\naddition to considering common community memberships,\nwe could consider distance between users in the \"friendship\ngraph\". Users close to each other (e.g., friends or friends-of\n-friends) might be judged more likely to be similar than\ndistant strangers, although some users might prefer the latter\ntype of link, since it would introduce them to someone\nthey would be unlikely to meet otherwise, perhaps from a\ndifferent country or culture.\nSimilarly, friendship graph information can be taken into\naccount when making community recommendations, which\nwould require that recommendations be computed on a per-user\n(or per-clique), rather than per-community, basis. In\nsuch a setting, we could make community recommendations\nbased on weighted community overlap vectors where weights\nare determined based on the graph distances of other community\nmembers to a given user. 
This is a fertile area for\nfuture work and yet another example of how the interaction\n683\nResearch Track Poster\nFigure 2: Displays of recommendations for three different communities\nTable 8: The relative likelihood of clicks on link by position when there are (a) one, (b) two, or (c) three\nrows of three recommendations.\n(a) n=28108, p=.12\n(b) n=24459, p<.001\n(c) n=1226659, p<.001\n1.00\n1.01\n.98\n1.04\n1.05\n1.08\n1.11\n1.06\n1.04\n.97\n.94\n.92\n1.01\n.97\n.99\n1.01\n.94\n.87\nof data mining and social networks is becoming an exciting\nnew research area [4] [11].\nACKNOWLEDGMENTS\nThis work was performed while Ellen Spertus was on sabbatical\nfrom Mills College and a visiting scientist at Google.\nWe are grateful to Patrick Barry, Alex Drobychev, Jen Fitz-patrick\n, Julie Jalalpour, Dave Jeske, Katherine Lew, Marissa\nMayer, Tom Nielsen, Seva Petrov, Greg Reshko, Catherine\nRondeau, Adam Sawyer, Eric Sachs, and Lauren Simpson\nfor their help on the Orkut project and to Corey Anderson,\nAlex Drobychev, Oren Etzioni, Alan Eustace, Keith Golden,\nCarrie Grimes, Pedram Keyani, John Lamping, Tom Nielsen,\nPeter Norvig, Kerry Rodden, Gavin Tachibana, and Yonatan\nZunger for their help on this research or its exposition.\nREFERENCES\n[1] Breese, J.; Heckerman, D.; Kadie, C. Empirical Analysis of\nPredictive Algorithms for Collaborative Filtering. In\nProceedings of the Fourteenth Conference on Uncertainty\nin Artificial Intelligence (Madison, Wisconsin, 1998).\nMorgan Kaufmann.\n[2] Cover, T.M., and Thomas, J.A. Elements of Information\nTheory. Wiley, New York, 1991.\n[3] Deshpande, M., and Karypis, G. Item-Based Top-N\nRecommendation Algorithms. ACM Transactions on\nInformation Systems 22(1) (January 2004), 143-177.\n[4] Domingos, P. Prospects and Challenges for\nMulti-Relational Data Mining. ACM SIGKDD Exploration\nNewsletter 5(1) (July 2003).\n[5] Dumais, S.; Joachims, T.; Bharat, K.; Weigend, A. SIGIR\n2003 Workshop Report: Implicit Measures of User Interests\nand Preferences. SIGIR Forum 37(2) (Fall 2003).\n[6] Harman, D. Ranking Algorithms. In W. B. Frakes and R.\nBaeza-Yates (ed.), Information Retrieval: Data Structures\n& Algorithms (chapter 14). Upper Saddle River, NJ, USA:\nPrentice Hall, 1992.\n[7] Joachims, T. Evaluating Retrieval Performance Using\nClickthrough Data. In Proceedings of the SIGIR Workshop\non Mathematical/Formal Methods in Information\nRetrieval (2002). ACM Press, New York, NY.\n[8] Kautz, H.; Selman, Bart; Shah, M. Referral Web:\nCombining Social Networks and Collaborative Filtering.\nCommunications of the ACM 45(8) (March 1997).\n[9] Kitts, B.; Freed, D.; Vrieze, M. Cross-Sell: A Fast\nPromotion-Tunable Customer-Item Recommendation\nMethod based on Conditionally Independent Probabilities.\nIn Proceedings of the Sixth ACM SIGKDD International\nConference on Knowledge Discovery and Data Mining\n(Boston, 2000). ACM Press, New York, NY, 437-446.\n[10] Lehmann, E.L. Testing Statistical Hypotheses (second\nedition). Springer-Verlag, 1986.\n[11] Raghavan, P. Social Networks and the Web (Invited Talk).\nIn Advances in Web Intelligence: Proceedings of the\nSecond International Atlantic Web Intelligence\nConference, May 2004. Springer-Verlag, Heidelberg.\n[12] Salton, G. Automatic Text Processing: The\nTransformation, Analysis, and Retrieval of Information by\nComputer. Addison Wesley, Reading, MA, 1989.\n[13] Sarwar, B.; Karypis, G.; Konstan, J.; Reidl, J. Item-Based\nCollaborative Filtering Recommendation Algorithms. 
In\nProceedings of the Tenth International Conference on the\nWorld Wide Web (WWW10) (Hong Kong, 2001). ACM\nPress, New York, NY, 285-295.\n[14] Spertus, Ellen. Too Much Information. Orkut Media\nSelections, January 19, 2005. Available online at\n\"http://media.orkut.com/articles/0078.html\".\n684\nResearch Track Poster\n", "keywords": "collaborative filtering;online communities;community;recommender system;social network;social networks;similarity measure;Data mining"} {"name": "87", "title": "Evaluation and Evolution of a Browse and Search Interface: Relation Browser++", "abstract": "We present in this paper the design and an evaluation of a novel interface called the Relation Browser++ (RB++) for searching and browsing large information collections. RB++ provides visualized category overviews of an information space and allows dynamic filtering and exploration of the result set by tightly coupling the browsing and searching functions. A user study was conducted to compare the effectiveness, efficiency and user satisfaction of completing various types of searching and browsing using the RB++ interface and a traditional form-fillin interface for a video library. An exploration set of tasks was also included to examine the effectiveness of and user satisfaction with the RB++ when applied to a large federal statistics website. The comparison study strongly supported that RB++ was more effective, efficient, and satisfying for completing data exploration tasks. Based on the results, efforts to automatically populate the underlying database using machine learning techniques are underway. Preliminary implementations for two large-scale federal statistical websites have been installed on government servers for internal evaluation.", "fulltext": "INTRODUCTION\nThe size and breadth of large government websites and digital\nlibraries makes it difficult for people to quickly grasp what\ncontent is and is not available. Dynamic overviews and previews\nof collections can help people decide if it is worthwhile to look\nfurther [8]. As they do look further, it is helpful for their\nsearching and browsing to quickly see partitions of the\ncollection and how many items are available in different\npartitions. We believe that government website users will be\nwell-served by highly interactive user interfaces that support\nalternative views of the collections, partitions, and results sets.\nThis paper describes a user interface that aims to provide agile\ncontrol for browse and search, reports results from a user study\ncomparing this interface to a typical WWW search interface, and\ndescribes the ongoing evolution of the interface, including\nefforts to automate discovery of topical categories and\nassignment of webpages to those categories.\nFaceted category structure is one way to help people understand\nthe composition of an information collection. A faceted\napproach provides different ways to slice and dice the\ninformation space, which allows people to look at the\ninformation space from different perspectives. Allowing people\nto explore the relationships among different facets may further\ndeepen their understanding and support new insights. The\nrelation browser (RB) is an interface which provides an\noverview of the collection by displaying different categories and\nenables people to explore the relationships among these\ncategories [13]. 
The different facet values also serve as\nselectable objects that may be employed as query widgets for a\nsearch so that the entire space can quickly be partitioned with\nsimple mouse moves and with consequent immediate display of\nthe resulting partition in the results panel. Figure 1 shows the\nmock-up interface of an early version of the relation browser in\nthe domain of U.S. federal statistics websites. The web pages in\nthe site were sliced into four different facets: by topic, data type,\nregion, and date. The numbers beside the bars indicate the\nnumber of websites associated with the attributes. By mousing\nover any of the topics, the distribution of the specific topics in\nother facets are visualized as graphic bars. The underlying data\nfor this instance of the interface was manually extracted from a\nsmall set of 200 webpages contained in more than 70 federal\nstatistical agency websites.\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies\nare not made or distributed for profit or commercial advantage and that\ncopies bear this notice and the full citation on the first page. To copy\notherwise, or republish, to post on servers or to redistribute to lists,\nrequires prior specific permission and/or a fee.\nConference'04, Month 12, 2004, City, State, Country.\nCopyright 2004 ACM 1-58113-000-0/00/0004...$5.00.\nCopyright held by the author\n179\n\n\nFigure 1. Relation Browser (RB)\nThis early version of the RB has been redesigned based on user\nstudies and experience applying the interface to more than a\ndozen different database instances [13]. The new version is\ncalled RB++, which improves the RB significantly in several\nways (see Figure 2) [23,24]. First, RB++ displays multiple\nfacets (categories) visually and on the same screen rather than\nonly two facets with tab options to others. The multiple facets\nprovide an overview of the information space. The facet values\nare visually represented by graphic bars with different lengths,\nwhich indicate the number of items associated with them.\nSecond, RB++ allows more flexibility to explore relationships.\nOne of the features of RB++ is that you can restrict the\ninformation items (partition the information space) by mousing\nover any bars and other bars are proportionally highlighted to\nshow the conditional distribution across all the facets. Note that\nthe previous RB was limited to visualizing pairwise\nrelationships with one main facet. Third, the RB++ added a\ndynamic filtering function for the result set (see Figure 3). Once\nthe search results are displayed in the table, further filtering can\nbe done by typing in keywords (string patterns) in the boxes\nlocated immediately above the result fields. The filtering is\ndynamic, which means that with each character typed in or\nremoved from the boxes, RB++ matches the string patterns in\nthe boxes with the corresponding field of the results. Only the\nmatched results are then displayed immediately in the results\npanel and the matched string in the results is highlighted. This\ndynamic feature gives users instant and constant feedback about\nthe filtered results and how many items they will get with\ndifferent keywords, which allows users to try out different\nfiltering keywords very easily and efficiently. Fourth, the RB++\nprovides an overview of the results set and tightly couples the\noverview and results set panels. 
The overview panel is\ndynamically updated to give users a contextualized overview of\nthe updated result set. These new features give users more\npower to understand and explore the information collection and\ngive them a flexible and rapid way to find the information they\nwant. A linguistic model of BNF grammar to model the user\ninteraction with the interface is provided in section 2.3 to help\nreveal the dynamic nature of the RB++.\nIn the paper, we argue that the RB++ interface will bring users\nadded values beyond simple searching and browsing by in fact\ncombining these search strategies seamlessly. In the next\nsection, the methodology of a user study is described. The\nresults of the user study are then presented and discussed.\nLimitations of the interface and current efforts to deal with data\nclassification are then described.\n\nFigure 2. Initial display of RB++ with visualized category\noverview on the top\n\nFigure 3. RB++ with dynamic filtering of the results (note\nthe changes in the overview and updated results)\n\nMETHODOLOGY\nThe purpose of the user study was two-fold: first, we wanted to\ncompare the effectiveness, efficiency and user satisfaction\nassociated with completing certain tasks using RB++ against\nthat obtained by the traditional form-fillin search interface\n(baseline interface). Second, we wanted to explore if the RB++\ninterface would lead to new interaction patterns with the\ninterface and if so, to determine what these new interaction\npatterns might be.\n180\n\nSeventeen undergraduate and graduate students were recruited\nfrom the UNC-Chapel Hill campus for this study. They came\nfrom various schools and departments. There were 10 females\nand 7 males with an age range from 19 to 44, (15 were in their\n20s, and all were familiar with www browsers. The participants\nwere given $15 for their participation. The data from the first\ntwo participants was used as a pilot test; based on which the\nexperimental protocol and instruments were revised. The data\nfrom the other 15 participants was used for the data analysis.\nThe study proceeded in two phases, a within subjects\ncomparison across two different interfaces for the same\ndatabase, and an exploratory investigation with a single\ngovernment statistical website instance of RB++.\n2.1 Phase One: RB++ to Baseline\nComparison for a Film Database\nThe first phase was a comparison study in which participants\nused both the RB++ interface and the baseline interface. The\norder of using these interfaces was counter balanced. The\ndomain of the information items in both interfaces was the video\ncollection in the UNC-CH library\n(http://www.lib.unc.edu/house/mrc/index.html?page=filmograph\ny) that contains about 10000 films. The library online video\nsearch interface (Filmfinder) was used as our baseline interface.\nFilmFinder is a fairly typical www form-fillin search interface\n(see Figure 4 and Figure 5), where users can specify queries\nwithin fields such as title, release year, director, description,\ngenre, origin, and format.\n\nFigure 4. Filmfinder with Form-fillin Interface\n\nFigure 5. Results Page of Filmfinder\nAll participants were run individually in sessions ranging from\n60-90 minutes and all sessions were video taped. The protocol\nfor the first phase was as follows: First, a demographic pre-test\nquestionnaire was completed. Second, the participant was\ntrained for the first interface assigned in their condition. 
The\ntraining consisted of: an introduction to the features of the\ninterface, a demo of each type of task with the interface, and\nparticipant practice using the interface until s/he was\ncomfortable with it. Third, the participant used the interface to\ncomplete 10 search tasks. Tasks were assigned to participants\none by one by handing them pieces of paper for each task. A\ntimer was used to count time used to complete each task except\nfor task 10 (see description of task 10 below). After each task, a\nshort satisfaction questionnaire was completed by the\nparticipant. Fourth, a usability questionnaire was filled out after\nthe participant finished using the first interface. Next, the\nparticipant was trained for the second interface and the same\nprocedures were used to complete 10 more search tasks.\nFinally, an open-ended questionnaire about perceived\ndifferences and preferences for the two interfaces was\ncompleted.\nThe tasks were classified into three different types: 1. Simple\nlook up task. Tasks 1 to 3 in each task set were of this type. For\nexample, \"Check if the movie titled \"The Matrix\" is in the\nlibrary movie collection.\" 2. Data exploration and analysis tasks.\nTasks 4 to 9 in each task set were of this type. This kind of task\nrequires users to understand and make sense of the information\ncollection, which could be a starting point for them to further\ntheir searching or browsing. Two examples of this type are: \"In\nwhich decade did \"Steven Spielberg\" direct the most movies?\";\nand \"How many movie titles does the library hold that were\nreleased in the year 2000?\" 3. Task 10 was a free exploration\ntask, which asked participants to find five favorite videos\nwithout any time constraints. The tasks assigned for the two\ninterfaces were different but comparable. For example, the\ncomparable tasks for two interfaces simply substituted different\nvideo titles or directors.\n2.2 Phase Two: Explore RB++ for EIA\nWebsite\nThe second phase was an exploratory study of the RB++ applied\nto roughly 10,000 pages in the Energy Information\nAdministration (EIA) website. Based on intensive manual\n181\n\ninspection of the EIA website, four facets were identified with\nassociated facet values: fuel type (with the facet values:\nalternatives, coal, electricity, natural gas, nuclear, petroleum,\nand renewable); geography (state level, regional level, national\nlevel, and international level); sector (commercial, electric\nutility, industrial, and residential); and process (delivery,\nimport/export, price/cost, production, resources/reserves, and\nusage). All the facets were displayed on the overview panel (see\nFigure 6). The results panel displayed the title, page size, and\ndescription of the web pages.\n\nFigure 6. RB++ interface applied to EIA website\nThe protocol of the second phase was as follows: First, the\nRB++ EIA application was introduced to the participant.\nSecond, the participant practiced using the interface until s/he\nwas comfortable with it. Third, the participant used the interface\nto complete four tasks\n1\n. The process was recorded and a short\nsatisfaction questionnaire was filled out after finishing each task.\nFourth, an open-ended questionnaire was completed after\nfinishing all the tasks. Lastly, the participant was briefly\ninterviewed.\nData collected included both quantitative and qualitative data\nfrom the two phases of the study. 
Data collected for the first\nphase included performance data (time spent finishing tasks),\nerror rates of tasks, ratings on the satisfaction questionnaire after\nfinishing each task, ratings on the usability questionnaire after\nfinishing each interface, and comments on the open\nquestionnaire about perceived differences and preferences for\nthe two interfaces. Data collected for the second phase included\nratings on the satisfaction questionnaire after finishing each task,\ncomments on the post-session questionnaire and the verbal\ncomments made in the interview.\n\n1\nTasks for the second phase study:\n\n1. I want to learn the current status of Chinese nuclear energy.\n2. Find the most recent weekly data on petroleum prices in the\nUSA.\n3. Find the statistical data on coal production across different\nstates in the year 2001.\n4. What kinds of information can I and can I not find from the\nwebsite?\n\n2.3 Modeling User Interaction\nTo help us form hypotheses and analyze and make sense of the\nexperimental data, we employed a linguistic model, called BNF\ngrammar, to model the user's interaction with the interface. BNF\ngrammar was originally used by Reisner to describe the dialog\ngrammar of an interactive graphics system [15], where the user's\ninteraction with a system was seen as an action language and\nBNF grammar was used to formally describe the language. The\nBNF grammar consists of a set of rules which define higher\nlevel user behaviors in terms of lower level ones. Each rule can\nbe composed of terminals, non-terminals, and a set of symbols.\nTerminals usually represent the lowest level of user behavior,\nsuch as pressing a key or clicking a mouse button and can not be\nfurther defined. Non-terminals represent a high level abstraction\nand can be defined in terms of other non-terminals and\nterminals. Terminals are written with upper case letters and non-terminals\nare written with lower case letters. The \"::=\" symbol is\nread as \" is defined as\". 
The \"+\", \"|\" and \"-\" symbols are used at\nthe right hand side of rules to connect, respectively, sequence of\nuser behavior, set of options, and concurrent user behaviors.\nWith the BNF grammar, we can describe the user's interaction\nwith the RB++ as follows:\nA1 information seeking ::= explore collection(A3) | (formulate\nquery(A2) + CLICK SEARCH BUTTON + navigate\nresults(A5))\nA2 formulate query ::= (explore collection(A3) + form\nquery(A4)) | form query(A4)\nA3 explore collection ::= (CLICK VISUAL BAR-OBSERVE\nVISUAL BAR + explore collection(A3)) | (MOUSE OVER\nVISUAL BAR-OBSERVE VISUAL BAR + explore\ncollection(A3))\nA4 form query ::= (CLICK VISUAL BAR + form query(A4)) |\n(TYPE IN KEYWORD + form query(A4))\nA5 navigate results ::= (browse results(A6) + navigate\nresults(A5)) | (CLICK RESTART BUTTON + information\nseeking(A1))\nA6 browse results ::= (show results(A7)-OBSERVE RESULTS\n+ browse results(A6)) | (CLICK RESULT ITEM + browse\nresults(A6)) | (CLICK SORTING BUTTON + browse\nresults(A6))| (explore results(A8) + browse results(A6))\nA7 show results ::= CLICK SIDEBAR\nA8 explore results ::= (observe system state(A9) + explore\nresults(A8)) | (filter results(A10) + explore results(A8))\nA9 observe system state ::= (OBSERVE VISUAL BAR +\nobserve system state(A9)) | (OBSERVE NUMBER + observe\nsystem state (A9))\nA10 filter results ::= CLICK VISUAL BAR | MOUSE OVER\nVISUAL BAR | TYPE IN KEYWORD\nThe interaction with baseline interface can be described as:\nB1 information seeking ::= formulate query(B2) + CLICK\nSEARCH BUTTON + navigate results(B4)\nB2 formulate query ::= (TYPE IN KEYWORD + formulate\nquery(B2)) | (select item(B3) + formulate query(B2))\n182\n\nB3 select item ::= CLICK PULL DOWN MENU + CLICK\nITEM\nB4 navigate results ::= (browse results(B5) + navigate\nresults(B4)) | (CLICK NEW SEARCH LINK + information\nseeking(B1))\nB5 browse results ::= (show results(B6)-OBSERVE RESULTS\n+ browse results(B5)) | (show results(B6)-COUNT RESULTS +\nbrowse results(B5)) | (CLICK ITEM + browse results(B5)) |\n(CLICK SORTING LINK + browse results(B5))\nB6 show results ::= CLICK SIDEBAR | (CLICK SIDEBAR +\nCLICK NEXT PAGE LINK)\nThe number of rules and options within rules reflects the\ninteractive nature and number of alternative choices provided by\nthese two interfaces. Note that we used the terminals such as\nCLICK SEARCH BUTTON and CLICK VISUAL BAR which\nstrictly speaking are not the lowest level of user behaviors,\nhowever, using higher level abstraction as terminals is suitable\nfor interactive display-based systems [4] and ensures later data\nanalysis. Many rules are defined recursively and consist of\nseveral options, which essentially reflect the interactivity of the\ngraphical user interface (GUI). For example, a fairly interactive\nuser behavior in RB++, \"browse results (A6)\", consists of either\n`OBSERVE RESULTS', `CLICK RESULT ITEM', `CLICK\nSORTING BUTTON', explore results, or any combination of\nthe above.\nFrom the BNF definition, we can see that RB++ is a more\ninteractive interface than the baseline because it involves more\nrules and recursive definitions. However, it is not necessarily a\ncomplicated interface, since the rules for the RB++ interface are\nlargely composed of a set of options instead of a sequence of\nuser behaviors, which means that many rules are not executed\nfor some types of tasks. 
Based on the BNF grammars, we\nhypothesize that for the simple search tasks, the RB++ interface\nwill not necessarily be significantly different from the baseline\ninterface, but for complicated searching and browsing tasks, that\nrequire more interaction or collection exploration, the RB++ will\nbe significantly more effective, efficient, and satisfying than the\nbaseline. For simple look up type tasks, both interfaces involve\nthe sequence of user actions: formulate query, CLICK SEARCH\nBUTTON, and navigate results (see rule A1 and B1).\nNavigation of results is simple for this type of task in that it only\ninvolves the judgment of zero or non-zero results, which is\ntrivial in both interfaces. Formulation of the query in this case\ninvolves typing in keywords and/or selecting the items from the\ninterfaces (see rule A2, A4 and B2, B3). Even though item\nselection in the baseline interface involves two clicks (see rule\nB3) which means a slightly longer time to execute than in\nRB++, which only needs one click on the visual bar for item\nselection (see rule A4), we expected no significant difference.\nFor type 2 tasks that involve data exploration and analysis,\ninteraction with the visual bars of the RB++ interface provides\nan effective and efficient interaction style. Two typical\nsequences of user behaviors to complete type 2 tasks are:\nexplore the collection by clicking (or mousing over) and\nobserving the visual bars (see rule A1 and A3), or formulate a\nquery and then explore the results by observing the visual bars\n(see rule A1, A5, A6, A8 and A9). With the traditional interface\nto finish type 2 tasks, users have to formulate a query and then\nliterally scan and count all the results (see rule B1, B4, B5, and\nB6), which is time consuming.\nWe also hypothesized that users would exhibit rich interaction\nduring their navigation of the results with RB++ (see rule A5 to\nA10). Actions of typing in keywords and clicking visual bars to\nfilter results (rule A10) would be used frequently and\ninterchangeably by the users to finish complex search tasks,\nespecially when large numbers of results are returned.\nRESULTS\nTable 1 lists the average time (in seconds) across all the\nparticipants to finish tasks 1 to 9 using the two different\ninterfaces. Notice that we allowed the participants to stop the\ntask if they felt that the task was too hard or too time-consuming\nto finish. It turned out that there were five participants who\nstopped task 5 and eight participants stopped task 6 before\ncompletion when they used the FilmFinder. Performance data of\nthese participants were discarded for the unfinished tasks.\nTable 1. Performance data (in seconds)\nTask\n1 (.879) 2 (.522) 3 (.026) 4 (.000) 5 (.000)\nRB++ 14.4 16.1 17.0 18.9 15.7\nFilmFinder\n14.7 14.4 29.7 40.7 204.0\nTask\n6 (.000) 7 (.000) 8 (.000) 9 (.000)\n10\nRB++ 12.7 13.5 27.1 20.6 N/A\nFilmFinder 328.0 87.2 101.3 112.8 N/A\n\nPaired sample t tests on the performance data were computed\nand the p values are shown in the parenthesis for each task. We\ncan see that except for the first two tasks (which were type 1\ntasks), the performance differences between the two interfaces\nwere all statistically significant at the .05 level. Clearly, RB++\nsupported superior performance for type 2 tasks.\nWe also counted error rates for tasks 1 to 9, which are listed in\nTable 2. The error rate was calculated as the number of\nparticipants who gave the wrong answer to the task divided by\nthe total number of participants. 
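Both the error rates and the per-task paired t-tests reported above reduce to a few lines of analysis code. The sketch below is illustrative only: the numeric lists are placeholders standing in for the per-participant measurements actually recorded in the study, and scipy is assumed to be available.

from scipy.stats import ttest_rel

def error_rate(answers, correct):
    # fraction of participants whose answer to the task was wrong
    return sum(a != correct for a in answers) / len(answers)

# Placeholder per-participant completion times (seconds) for one task;
# the real analysis used the 15 recorded values per task and interface.
rb_times   = [17.0, 15.2, 18.1, 16.4, 17.8]
film_times = [29.7, 31.0, 27.5, 30.2, 28.9]

t_stat, p_value = ttest_rel(rb_times, film_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # compared against the .05 level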
We can see that except for the\n8th task, no participants got wrong answers for any of the tasks\nusing the RB++ interface. The error rates of the baseline\ninterface were much higher than that of the RB++ interface,\nespecially for tasks 5, 6, and 7. Notice that we did not consider\nthose participants who gave up the task 5 or 6 using the\nFilmfinder, so the actual denominators used for calculating the\nerror rates for these tasks were smaller than the total number of\nparticipants.\nTable 2. Error rates\nTask 1 2 3 4 5\nRB++ 0/15 0/15 0/15 0/15 0/15\nFilmFinder 0/15 0/15 0/15 2/15 5/10\nTask 6 7 8 9\n10\n183\n\nRB++ 0/15 0/15 1/15 0/15 N/A\nFilmFinder 4/7 13/15\n2/15\n5/15\nN/A\n\nWe also did paired sample t tests on the three satisfaction\nquestions\n2\nwhich were completed after each task. Each response\nwas given on a 5 point scale from strongly agree (5) to strongly\ndisagree (1). For the first three tasks (simple lookups), there\nwere no statistically significant differences between the two\ninterfaces on any of the 3 questions. On the exploratory tasks\n(4-9), statistically significant differences favoring the RB++\nwere found on all three of the satisfaction questions.\nWe also compared the results for the seven overall usability\nquestions on each interface asked after participants had done the\ntasks with each interface. Their responses were also given on a\nfive point scale from strongly agree to strongly disagree. In each\nof the seven ratings, statistically significant differences were\nfound favoring the RB+ interface. Clearly, satisfaction with the\nRB++ was greater than that with the Filmfinder.\nThere were also open-ended questions that the participants\nanswered after finishing both interfaces. All of the participants\nconsidered the RB++ interface to be easier to use, especially for\nthe complex searches. They commented on the easy use of the\nvisual display with the multiple categories, which made it easy\nto combine the search criteria and narrow down the data, and\nthey also thought it was good to be able to manipulate the search\nresults in multiple ways. Thirteen out of 15 participants\nindicated that the RB++ interface gave them more confidence to\ncomplete the tasks. It was easy to go back and forth and to verify\nthe results and the informative overview panel gave the\nparticipants more confidence to finish tasks. There was one\nparticipant who thought that both interfaces gave equal\nconfidence and there was one participant who thought that the\nFilmfinder interface gave more confidence since he was more\nfamiliar with the Filmfinder and he felt somewhat confused by\nthe dynamic feature of the RB++, but he acknowledged the\nusefulness of the dynamic feature in narrowing the results in the\nresults panel.\nWhen asked which interface better helped them gain an\nunderstanding of the library movie collection, the RB++\ninterface was chosen by all the participants. Again the visual\ndisplay of the multiple categories and the cross reference of\nthese categories was considered to be useful features for them to\nunderstand the whole collection. In addition, 10 out of the 15\nparticipants indicated that they were more likely to use the\nRB++ interface if both were available. 
Three participants chose\nboth interfaces, depending on the type of tasks, and two\nparticipants chose the FilmFinder because of its familiarity and\naesthetic appeal.\nFor the question on the best thing about the RB++, participants\npointed out the visual display of the multiple categories, its cross\nreference ability, the dynamic matching ability of the searching\nboxes and the one screen display of the results as opposed to the\nmultiple page display of results in Filmfinder. As the worst thing\nabout the RB++, participants indicated that it was not as\n\n2\nIt is easy to use the interface\nI feel satisfied with the results I got\nI feel confident with the results I got\n\naesthetically appealing as the Filmfinder and not quite as\nintuitive to use as the Filmfinder. Two participants specifically\nmentioned that the constant changing and updating of the\ninterface made it a bit confusing.\n3.2 Phase Two Results\nDuring the second phase we also asked participants to fill out\nthe satisfaction questionnaires after finishing each task and these\nratings were predictably high (all means above 3.5) More\nimportantly, participants were also required to answer a set of\nopen-ended questions after finishing the second phase. For the\nfirst question: \"What is your overall impression of this interface\nfor finding the statistical data?\" the overall impression was\npositive. Participants used phrases such as \"fairly easy to use\",\n\"very helpful in finding the information\", \"good for quick\nsearching\". There were also a couple of negative comments such\nas: \"interface still came up with many results after filtering\",\n\"title of the results are not descriptive enough\". Only one\nparticipant said that he did not like the interface, because of the\npoor categorization of information items under some categories\nwhich made him frustrated.\nWhen answering the second question: \"Was it helpful to\nunderstanding what is available at EIA?\" all the participants\nthought the interface was helpful in that regard, which was\nlargely attributed to the visual display of the categories, which\ngave them a sense of what the website covered. One participant\nwished that there were more categories displayed. The\nquestionnaire also asked if the search boxes were helpful in\ncompleting the tasks. Participants gave high praise to this\nfeature with comments such as \"it's great to be taken directly to\nthe page but not to have your results lost\", \"I like the way it\nnarrows the focus and sort of guides a person to the info\nsought\", \"I didn't have to be concerned with performing a\ncomplex search that may return a null set-the results reflected\nmy search string instantly\". Two participants also commented\nthat the feature was somewhat limited in use since relevant\ninformation may not appear in the title, or description.\n\nDISCUSSION\nThe results strongly support that the RB++ interface was more\neffective and efficient in completing type 2 tasks than the\nbaseline interface and that users felt more confident and satisfied\nwith the RB++ in completing type 2 tasks. The higher\neffectiveness, efficiency and satisfaction gained in the RB++\nresulted mainly from two aspects: the visual display of the\nstatistical summary of the information items and the dynamic\nkeyword searching capability in the results panel. The\nvisualization bars helped the users understand relative\nproportions of items at a glance and use the posting numbers\ndirectly, which is much faster than literally counting. 
If we look\nat the BNF grammar, completion of type 2 tasks in RB++ only\nrequired participants to explore the collection (see first option of\nrule A1) without submitting queries to the database and then\nobserving and counting returned results, which are necessary\nsteps for the baseline interface to complete the same tasks (see\nrule B1).\nThe dynamic search boxes allow users do further filtering based\non certain criteria and give users feedback on the filtered results\ninstantly and continuously, which not only encourages the users\nto use this function, but also improves their efficiency. Another\n184\n\ninterface feature: displaying all the results on one screen might\nalso help improve the efficiency and satisfaction, as several\nusers mentioned.\nSeveral components were tightly coupled in the interface with\ndisplayed search results. The search boxes are tightly coupled\nwith the results, which means that any input in the search boxes\nwill invoke instant filtering on the results. The visual bars are\ntightly coupled with the results and as such they support two\nfunctions. One is that any operations on the visual bars such as\nmouse over and selection, invoke the instant filtering of the\nresults. The other is that any update of the results also updates\nthe summary statistics in the visualization on the bars. Coupling\nprovides users more ways to interact with the system and make\nthe interaction more natural and smooth (see rule A8, A9, A10),\nwhich suggests a different interaction style for finding\ninformation than traditional search interfaces which tend to\nrequire discrete, well-defined turn-taking between the user and\nsystem. Traditionally, when users get to the results page, all they\ncan do is browse the results. If they want to refine the results,\nthey have to go back to the search interface, type in the refined\nkeywords, click the search, and browse the new results, which\nnot only interrupts the normal results browsing interaction, but\nalso loses the current result set. RB++ encourages users to get an\ninitial manageable result set and then refine it using one\ninterface window without the need to go back and forth. Instead\nof displaying a set of static results, RB++ offers an effective and\nefficient means for users to understand the results by displaying\nsummary statistics bars which give both visual and numeric data\n(see rule A9), and to explore results by providing ways to\ndynamically and continuously filter (see rule A10). The result\nset can be as large as displaying the whole collection, or as small\nas only one item, which depends on the initial query on the\ncollection. In the second phase study, most of the participants\ncompleted their search tasks without doing a second query on\nthe initial interface. The study showed that participants could\nutilize the initial interface to get an initial result set by selecting\nrelevant categories and then narrow down results and find\nrelevant web pages by exploring the results set. Typing in\nkeywords (or string patterns) in search boxes was found to be\nthe most frequently used means to explore and filter result set.\nThese features were highly appreciated by the participants as\nseen from their comments.\n\nRELATED WORK\nMany information access interfaces try to provide a starting\npoint for users by presenting overviews of the collection [9].\nOverviews can help users understand the whole collection and\nselect or eliminate sources from consideration. 
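The kind of overview RB++ maintains is, at bottom, a set of facet-value counts recomputed over whatever subset of the collection currently matches the user's selections and search strings. A minimal sketch of that coupling follows; the record fields and sample items are assumptions made for illustration, not taken from the actual film or EIA data.

from collections import Counter

collection = [
    {"title": "Coal production by state, 2001",  "fuel": "coal",      "geography": "state"},
    {"title": "Weekly petroleum prices",          "fuel": "petroleum", "geography": "national"},
    {"title": "Chinese nuclear energy overview",  "fuel": "nuclear",   "geography": "international"},
]

def filter_items(items, query):
    # the dynamic search box: substring match anywhere in the displayed field
    return [r for r in items if query.lower() in r["title"].lower()]

def overview(items, facet):
    # the counts that drive the visualization bars for one facet
    return Counter(r[facet] for r in items)

current = filter_items(collection, "pr")   # simulate typing "pr"
print(overview(current, "fuel"))           # bars update with the filtered set
print(overview(current, "geography"))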
Overviews can\ndirect users to subcollections quickly, where they can explore\nthe details. Usually two types of overviews are employed:\ncategory overview and graphic overview. The category approach\nof Yahoo is a good example for the category overview. The\nHiBrowse interface for viewing category labels hierarchically\nbased on the facets is another example [14]. A more recent\ninformation access interface using the category overview by\npresenting faceted metadata is the Flamenco interface [21]. The\nlast two interfaces not only present the category labels to the\nusers but also inform the users of the number of documents\nunder each category. However, these interfaces do not allow\nusers to employ simple mouse moves to quickly explore the\nrelationship between different categories (or facets). The\nFlamenco interface could do this as part of its browsing and\nsearching efforts, but it requires many commitments from users\nsuch as clicking the category and waiting. The previous version\nof the relation browser [13] presented various categories and\nallowed users to explore the relations by mouse over operation,\nbut the interface only allowed the users to mouse over the main\ncategory.\nThe graphic overview is another type of overview, which\nusually employs various information visualization techniques.\nLin [12] used the Kohenen map to visually present a topical\noverview of the collection. Each block on the map represents a\nsubcollection with similar topics which are labeled by one or\ntwo salient words extracted from the subcollection. The\nadjacency of blocks indicates the topic similarity between\nsubcollections. Wise, et al. [19] developed a three dimensional\ninterface to visually present various topics. Zhang, et al. [25]\nexacted the key concepts from a collection and visually\npresented the concepts in a spring-embedded graph. Similar\nconcepts were clustered together and usually represented as\nsubtopics. The graphical overview is visually appealing, but the\nusability of this kind of interface has yet to be explored. 3-D\ninterfaces are more problematic than 2-D interface in terms of\nease of use and learnability. It seems that textual labels of\ncategory structure are more understandable than graphical\nrepresentation.\nSome research has been conducted on how to present the\nretrieved results in context. Hearst [10] used clustering\ntechniques to cluster retrieval results on the fly and presented\ndifferent clusters with labeled words to the users to help them\nunderstand of the results. Chen, and Dumais [3] employed\nclassification techniques to categorize retrieved results based on\nthe existing category structure and displayed them in\nhierarchical categories. Zamir and Etzioni [22] developed an\ninterface that used on-the-fly clustering of metasearch results.\nThese interfaces cluster or categorize the retrieval results on the\nfly, so scaling is problematic. The RB++ categorizes the\ncollection offline and uses a uniform category structure to\npresent overviews of the collection and the retrieval results.\nConsequently, RB++ can be scaled up easily. However, because\nRB++ depends on the metadata to reside on the client side to\nachieve its dynamics, it also suffers a different kind of\nscalability limitation. 
To date, we have had good success and\nresponse with data sets with tens of thousands of records and a\ndozen or so facets, however data sets with millions of records\nand scores of facets are problematic.\n3\n.\nThere also has been some work on fast location of specific\ninformation items. Sorting is a prevalent means to help users\nlocate a specific item. However, users still need to visually go\nthrough a list of items. The Alphaslider [1] is a visual\ncomponent to help users quickly locate a known string of items,\nbut it's not very easy to use, especially for novice users. Besides,\nThe Alphaslider can only locate the information items based on\nthe first letter alphabetically. RB++ provides an easy and\nflexible way to locate the information items by typing in string\npatterns and the patterns can be matched anywhere in the\ninformation items. A similar technique is actually used in some\napplications such as the address box of Internet Explorer\n\n3\nVarious RB++ examples are available at\nhttp://idl.ils.unc.edu/rave/examples.html\n185\n\nbrowser, but the patterns are limited to matching from the\nbeginning of the query string.\nDynamic query was a new type of interface [16] that inspired\nthe original relation browser work. The interface visually\ndisplays the information items and provides the visual\ncontrolling components to explore the information items by\ntightly coupling search and visual display of results. RB++ uses\nthis design concept, but instead of providing a visual interface,\nRB++ employs a more understandable (especially for topical\noverview) category structure for the information items.\nMoreover, the search box is a very effective and efficient\ncomponent for the non categorized attributes of the items, while\nthe visual controlling components such as sliders or check boxes\ncan only be used for controlling categorical attributes of the\nitems.\nQuery preview [18], attribute explorer [17] and other interfaces\n[11], and [20] provide similar ways to explore the relationships\nbetween different facets of the classification. These interfaces\nworked for structured information such as that found in\ndatabases. Of course, all these types of interfaces depend on\ngood underlying categorization of data. Our long-term goal is to\nmake the interface work for unstructured textual information.\nThe search boxes provided are a first step in this direction,\nalthough they are currently limited to search within the fields\nspecified in the results display.\n\nLIMITATIONS OF RB++ AND ONGOING WORK\nOne constraint in RB++ is the limited number of categories that\ncan be displayed, which is affected by two factors. One factor is\nscreen real estate. We can partially alleviate the issue of screen\nreal estate by utilizing a Zoomable User Interface (ZUI) to\ndisplay the categories. We have experimented with integrating\nthe Jazz toolkit [2] into the interface and this provides\naproximately a ten-fold increase in the number of facet values\nthat can be supported within each facet, although at the expense\nof some of the mouseover dynamics since the mouse must now\nbe used for zooming as well as normal hovering. Another factor\nis size of the memory to hold the client-side distribution counts\ndata, the number of which increases exponentially with\nincreased number of displayed categories. One way to solve the\nissue is to only calculate part of the distribution counts data,\nwhich hopefully are most frequently used during the user's\ninteraction with the interface. 
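The memory issue mentioned above comes from the cross-tabulation that makes the mouse-over dynamics possible: one preloaded count is needed for every combination of facet values, so the number of cells grows multiplicatively with each added facet. A back-of-the-envelope sketch, with facet sizes that are only illustrative (roughly those of the EIA instance):

from math import prod

facet_sizes = {"fuel": 7, "geography": 4, "sector": 4, "process": 6}
print(prod(facet_sizes.values()))   # 672 cells to preload for four facets

facet_sizes["topic"] = 40           # add one large, automatically derived facet
print(prod(facet_sizes.values()))   # 26880 cells; every further facet multiplies again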
Other approaches, such as\nemploying novel data structures were also suggested by Doan, et\nal. [5]. However, all these solutions have to sacrifice the\ninteractivity of the interface that depends on client-side metadata\nto support rapid mouse activities. For example, preloading\npartial distribution counts data for large numbers of categories\nmake some distributional data and visualization unavailable\nwhen users try to re-partition the information space by mouse\nmoves. At present, the best development path seems to be\nhierarchical partitioning of very large information spaces with\nmultiple RB++ instances that require a new download for each\nof the cascading subsets.\nAnother constraint of RB++ is the limited matching function of\nthe search boxes. The interface currently matches input string\npatterns to the corresponding result fields on the lexical level.\nMatching in this level is sufficient in many cases such as\nmatching with fields with numbers or short textual strings such\nas titles, but for the fields with more semantic bearing strings\nsuch as descriptions of web pages, a more sophisticated match\nfunction based on semantics might be needed--perhaps a kind\nof full-text engine in each text field, although the close coupling\nwith the facet panel will then be in doubt.\nCurrently, the interface provides a uniform category structure\nfor both the entire collection and the retrieved results set. This is\ngood for its consistency. However, for the retrieved result set, a\nmore fine-grained category structure might be better for users to\nunderstand it and conduct string searches.\nOverall, the RB++ represents an example of a highly interactive\nuser interface that offers improved performance and satisfaction\nfor users of large websites and digital libraries. It can find\napplication as the entry point for a large website or as a way to\nwork with large results sets returned from search engines as long\nas the data is structured in advance.\n\nWORK ON AUTOMATIC CLUSTERING AND CLASSIFICATION\nOver the past few years we have created more than two dozen\ninstances of data sets using RB and RB++, demonstrating its\napplicability as an interface to many different types of data. If\nthe data resides in a database, it is possible to map the scheme to\nthe underlying RB++ scheme (see [24] for details on the system\narchitecture) and simply import the data automatically. For\nmany WWW applications, this is not possible so we have been\ndeveloping ways to automate facet discovery and webpage\nassignment to those facets. The basic approach is to crawl the\nwebsite(s), create term-document representations and then use\nmachine learning techniques to cluster the webpages and extract\ncandidate labels, use human judgment to select the best labels,\nand then classify the webpages into those categories using the\nstatistical model produced in the process. See Efron et al [6,7]\nfor details of the techniques we have applied to date.\nTo illustrate the current state of development, consider the EIA\nexample used in the study reported here. The categories\ndisplayed on the EIA RB++ instance were originally created\nmanually, which certainly did not scale well to many other large\ninformation collections such as various other government\nstatistical web sites. Figure 7 is a screen shot of the RB++\ninstance for the Bureau of Labor Statistics web site that uses\ntopic facets and webpage assignment that were automatically\ndetermined. 
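The crawl-cluster-label-classify pipeline sketched above is described in detail in Efron et al. [6, 7]. The fragment below is only a generic stand-in using hard k-means over tf-idf vectors (the actual system uses soft clustering), meant to show the shape of the process from page text to candidate facet labels; the sample page texts are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

pages = [                              # stand-ins for the crawled page texts
    "coal production by state 2001",
    "coal reserves and resources",
    "weekly petroleum prices in the united states",
    "petroleum imports and exports",
    "nuclear power plant capacity",
    "status of nuclear energy programs",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(pages)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for c in range(km.n_clusters):
    top = km.cluster_centers_[c].argsort()[::-1][:2]
    print(c, [terms[i] for i in top])  # candidate labels, to be vetted by a human
print(km.labels_)                      # page-to-cluster assignment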
About 13,000 HTML web pages were crawled from\nweb site and a soft clustering method was then applied to those\npages. For each webpage, the statistical model yielded a\nprobability of belonging to every cluster. Thus, every page\n`belongs' to every cluster at some level of probability. The first\ntopic column contains all the pages with highest probability of\nbelonging to those facet values. The second column contains all\nthe pages with the second highest probability values for those\nfacet values. The third and fourth columns are the months and\nyears of page update extracted from web pages themselves-facets\nthat are know to be problematic but used for illustration\nsince our primary emphasis was on topic discovery. The display\nof two topical columns reflects the underlying characteristic of\nsoft clustering where items can appear in multiple clusters.\nHowever, our discussions with BLS staff and demonstrations to\nother potential users show that this two-column display is\nconfusing. Therefore, only the first topical column was kept in a\nlater version of the RB++ BLS instance (see figure 8). An\n186\n\nadditional facet: geographical coverage, which is an important\nfacet of government statistical web sites, was added in this\nversion. The assignments were made using a rule based\nclassification method to classify the web pages into categories of\nvarious geographical coverages.\n\nFigure 7. First version of RB instance for BLS web site\n\nFigure 8. Latest version of RB instance for BLS web site.\nIn addition to the BLS instance, we have used these techniques\nto create RB++ instances for the FedStats website and are\nworking to create instances for other federal statistical agencies.\nAt present, the FedStats and BLS instances are installed on\nFedStats servers and are being tested by federal statistical\nagency personnel. We also hope to address the important facet\nof time coverage of data itself in future work.\nOverall, the RB++ user interface has evolved to a state where it\ncan be easily applied to many kinds of well-structured data. The\nuser testing reported here demonstrates the efficacy of the\ninterface for search and browse tasks that would be very difficult\nto execute with SQL syntax or form fillin interfaces. Our\ncurrent efforts are to develop techniques for automatically\npopulating the RB++ database with unstructured data from the\nWWW. To date, these efforts have led to promising prototypes\nfor several federal statistical websites.\nACKNOWLEDGMENTS\nThis work was supported by NSF Grant EIA 0131824. The\nauthors wish to thank Paul Solomon and other anonymous\nreviewers for their valuable comments on the paper and Tim\nShearer for designing the underlying database structure of the\ninterface. We also thank Jonathan Elsas and Miles Efron for\ndeveloping the clustering software and techniques.\n\nREFERENCES\n[1] Ahlberg, C., and Shneiderman, B. The alphaslider: a\ncompact and rapid selector. In Proceedings of the SIGCHI\nconference on human factors in computing systems. Boston,\nMassachusetts. 1994.\n[2] Bederson, B., Meyer, J., and Good, L. Jazz:An Extensible\nzoomable user interface graphics toolkit in java. In ACM\nUIST2000, 171-180.\n[3] Chen, H, and Dumais, S. Bringing order to the web:\nAutomatically categorizing searching results. In Proceedings of\nthe SIGCHI conference on human factors in computing systems.\nThe Hague, Amsterdam. 2000\n[4] Dix, A., Finlay, J. Abowd, G. Beale, R, and Finley, J.\nHuman-Computer Interaction (2\nnd\nEd.). 
Prentice Hall, Hillsdale,\nNJ, 1998\n[5] Doan, K., Plaisant, C., Shneiderman, B., and Bruns, T.\nInterface and Data Architecture for Query Preview in\nNetworked Information Systems. ACM Transactions on\nInformation Systems, July 1999, Vol. 17, No. 3, 320-341.\n[6] Efron, M., Marchionini, G. and Zhang, J. Implications of the\nrecursive representation problem for automatic concept\nidentification in on-line governmental information. ASIST SIG-CR\nWorkshop, Long Beach, CA, 2003.\n[7] Efron, M., Elsas, J., Marchionini, G., and Zhang J.. Machine\nlearning for information architecture in a large governmental\nwebsite. Joint Conference on Digital Libraries 2004 (Tuscon,\nAZ, June 7-11, 2004)\n[8] Greene, S., Marchionini, G., Plaisant, C., & Shneiderman, B.\n(2000). Previews and overviews in digital libraries: Designing\nsurrogates to support visual information seeking. Journal of the\nAmerican Society for Information Science, 51(4), 380-393.\n[9] Hearst, M. User interfaces and visualization. In Modern\ninformation retrieval. Ed. by Baeza-Yates, R., and Ribeiro-Neto,\nB. Chapter 10, ACM Press, New York, NY, 1999 257-324.\n[10] Hearst, M. and Pedersen, P. Reexamining the cluster\nhypothesis: Scatter/Gather on retrieval results, Proceedings of\n19th Annual International ACM/SIGIR Conference, Zurich,\n1996\n[11] Lanning, T., Wittenburg, K., Heinrichs, M., Fyock, C., and\nLi, G. Multidimensional information visualization through\nsliding rods. AVI'02 Palermo, Italy.\n[12] Lin, X. Map displays for information retrieval. Journal of\nthe American society for information science. 1997 48(1), 40-54.\n187\n\n[13] Marchionini, G., and Brunk, B. Toward a general relation\nbrowser: A GUI for information architects. Journal of Digital\nInformation. Article No. 179, 2003-04-09 2003 4(1).\nhttp://jodi.ecs.soton.ac.uk/Articles/v04/i01/Marchionini/\n[14] Pollitt A. S., Ellis G. P., and Smith M. P. HIBROWSE for\nBibliographic Databases Journal of Information Science, 1994\n20 (6), 413-426.\n[15] Reisner, P., Formal Grammar and human factor design of\nan interactive graphics system. IEEE Trans. on Software\nEngineering, 7(2), 229-240, 1981\n[16] Shneiderman, B., Dynamic queries for visual information\nseeking, IEEE Software 11, 6 (1994), 70-77.\n[17] Spence, R, and Tweedie, L. The attribute explore:\ninformation synthesis via exploration. Interacting with\nComputers. 1998 11, 137-146.\n[18] Tanin, E., Lotem, A., Haddadin, I., Shneiderman, B.,\nPlaisant, C., and Slaughter, L. Facilitating data exploration with\nquery previews: a study of user performance and preference.\nBehaviour & information technology. 2000 19(6). 393-403.\n[19] Wise, J., Thomas, J., Pennock, K., Lantrip, D., Pottier, M.\nand Schur, A. Visualizing the non-visual: spatial analysis and\ninteraction with information from text documents. In Proc. of\nthe Information visualization Symposium 95, pages 51-58. 
IEEE\nComputer Society Press, 1995", "keywords": "searching;efficiency;user satisfaction;user study;information search;Information Storage and Retrieval;interaction patterns with interface;Interface design;visualization;Relation Browser++;browsing;Browse and Search Interface;RB++;interactive system;Faceted category structure;information browsing;effectiveness;search;dynamic query;facets;browse;Human Factor;visual display;Modeling User Interaction;satisfaction;User interface;category overview;user interface"} {"name": "88", "title": "Event Threading within News Topics", "abstract": "With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effec-tively identify the events and capture dependencies among them.", "fulltext": "INTRODUCTION\nNews forms a major portion of information disseminated in the\nworld everyday. Common people and news analysts alike are very\ninterested in keeping abreast of new things that happen in the news,\nbut it is becoming very difficult to cope with the huge volumes\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nCIKM'04, November 813, 2004, Washington,DC,USA.\nCopyright 2004 ACM 1-58113-874-1/04/0011 ...\n$\n5.00.\nof information that arrives each day. Hence there is an increasing\nneed for automatic techniques to organize news stories in a way that\nhelps users interpret and analyze them quickly. This problem is addressed\nby a research program called Topic Detection and Tracking\n(TDT) [3] that runs an open annual competition on standardized\ntasks of news organization.\nOne of the shortcomings of current TDT evaluation is its view of\nnews topics as flat collection of stories. For example, the detection\ntask of TDT is to arrange a collection of news stories into clusters\nof topics. However, a topic in news is more than a mere collection\nof stories: it is characterized by a definite structure of inter-related\nevents. 
This is indeed recognized by TDT which defines a topic as\n`a set of news stories that are strongly related by some seminal real-world\nevent' where an event is defined as `something that happens\nat a specific time and location' [3]. For example, when a bomb\nexplodes in a building, that is the seminal event that triggers the\ntopic. Other events in the topic may include the rescue attempts,\nthe search for perpetrators, arrests and trials and so on. We see\nthat there is a pattern of dependencies between pairs of events in\nthe topic. In the above example, the event of rescue attempts is\n`influenced' by the event of bombing and so is the event of search\nfor perpetrators.\nIn this work we investigate methods for modeling the structure\nof a topic in terms of its events. By structure, we mean not only\nidentifying the events that make up a topic, but also establishing\ndependencies--generally causal--among them. We call the process\nof recognizing events and identifying dependencies among\nthem event threading, an analogy to email threading that shows\nconnections between related email messages. We refer to the resulting\ninterconnected structure of events as the event model of the\ntopic. Although this paper focuses on threading events within an\nexisting news topic, we expect that such event based dependency\nstructure more accurately reflects the structure of news than strictly\nbounded topics do. From a user's perspective, we believe that our\nview of a news topic as a set of interconnected events helps him/her\nget a quick overview of the topic and also allows him/her navigate\nthrough the topic faster.\nThe rest of the paper is organized as follows. In section 2, we\ndiscuss related work. In section 3, we define the problem and use\nan example to illustrate threading of events within a news topic. In\nsection 4, we describe how we built the corpus for our problem.\nSection 5 presents our evaluation techniques while section 6 describes\nthe techniques we use for modeling event structure. In section\n7 we present our experiments and results. Section 8 concludes\nthe paper with a few observations on our results and comments on\nfuture work.\n446\nRELATED WORK\nThe process of threading events together is related to threading\nof electronic mail only by name for the most part. Email usually\nincorporates a strong structure of referenced messages and consistently\nformatted subject headings--though information retrieval\ntechniques are useful when the structure breaks down [7]. Email\nthreading captures reference dependencies between messages and\ndoes not attempt to reflect any underlying real-world structure of\nthe matter under discussion.\nAnother area of research that looks at the structure within a topic\nis hierarchical text classification of topics [9, 6]. The hierarchy\nwithin a topic does impose a structure on the topic, but we do not\nknow of an effort to explore the extent to which that structure reflects\nthe underlying event relationships.\nBarzilay and Lee [5] proposed a content structure modeling\ntechnique where topics within text are learnt using unsupervised\nmethods, and a linear order of these topics is modeled using hidden\nMarkov models. Our work differs from theirs in that we do not constrain\nthe dependency to be linear. 
Also their algorithms are tuned\nto work on specific genres of topics such as earthquakes, accidents,\netc., while we expect our algorithms to generalize over any topic.\nIn TDT, researchers have traditionally considered topics as flat-clusters\n[1]. However, in TDT-2003, a hierarchical structure of\ntopic detection has been proposed and [2] made useful attempts\nto adopt the new structure. However this structure still did not ex-plicitly\nmodel any dependencies between events.\nIn a work closest to ours, Makkonen [8] suggested modeling\nnews topics in terms of its evolving events. However, the paper\nstopped short of proposing any models to the problem. Other related\nwork that dealt with analysis within a news topic includes\ntemporal summarization of news topics [4].\nPROBLEM DEFINITION AND NOTATION\nIn this work, we have adhered to the definition of event and topic\nas defined in TDT. We present some definitions (in italics) and our\ninterpretations (regular-faced) below for clarity.\n1. Story: A story is a news article delivering some information\nto users. In TDT, a story is assumed to refer to only a single\ntopic. In this work, we also assume that each story discusses\na single event. In other words, a story is the smallest atomic\nunit in the hierarchy (topic\nevent\nstory). Clearly, both\nthe assumptions are not necessarily true in reality, but we\naccept them for simplicity in modeling.\n2. Event: An event is something that happens at some specific\ntime and place [10]. In our work, we represent an event by\na set of stories that discuss it. Following the assumption of\natomicity of a story, this means that any set of distinct events\ncan be represented by a set of non-overlapping clusters of\nnews stories.\n3. Topic: A set of news stories strongly connected by a seminal\nevent. We expand on this definition and interpret a topic as\na series of related events. Thus a topic can be represented\nby clusters of stories each representing an event and a set of\n(directed or undirected) edges between pairs of these clusters\nrepresenting the dependencies between these events. We will\ndescribe this representation of a topic in more detail in the\nnext section.\n4. Topic detection and tracking (TDT) :Topic detection detects\nclusters of stories that discuss the same topic; Topic\ntracking detects stories that discuss a previously known topic [3].\nThus TDT concerns itself mainly with clustering stories into\ntopics that discuss them.\n5. Event threading: Event threading detects events within in a\ntopic, and also captures the dependencies among the events.\nThus the main difference between event threading and TDT\nis that we focus our modeling effort on microscopic events\nrather than larger topics. Additionally event threading models\nthe relatedness or dependencies between pairs of events\nin a topic while TDT models topics as unrelated clusters of\nstories.\nWe first define our problem and representation of our model\nformally and then illustrate with the help of an example. We are\ngiven a set of\n\nnews stories\n\n\n\n\n\n\n\n\non a given topic\n\nand their time of publication. We define a set of events\n\n\n\n\n\nwith the following constraints:\n\n\n\n(1)\n\n\n(2)\n\n\n\n\n\n\n(3)\nWhile the first constraint says that each event is an element in the\npower set of S, the second constraint ensures that each story can\nbelong to at most one event. The last constraint tells us that every\nstory belongs to one of the events in\n. 
In fact this allows us to\ndefine a mapping function\nfrom stories to events as follows:\n\n\niff\n\n\n(4)\nFurther, we also define a set of directed edges\n\n\nwhich denote dependencies between events. It is important to explain\nwhat we mean by this directional dependency: While the existence\nof an edge itself represents relatedness of two events, the\ndirection could imply causality or temporal-ordering. By causal\ndependency we mean that the occurrence of event B is related to\nand is a consequence of the occurrence of event A. By temporal ordering\n, we mean that event B happened after event A and is related\nto A but is not necessarily a consequence of A. For example, consider\nthe following two events: `plane crash' (event A) and `subse-quent\ninvestigations' (event B) in a topic on a plane crash incident.\nClearly, the investigations are a result of the crash. Hence an arrow\nfrom A to B falls under the category of causal dependency.\nNow consider the pair of events `Pope arrives in Cuba'(event A)\nand `Pope meets Castro'(event B) in a topic that discusses Pope's\nvisit to Cuba. Now events A and B are closely related through their\nassociation with the Pope and Cuba but event B is not necessarily\na consequence of the occurrence of event A. An arrow in such scenario\ncaptures what we call time ordering. In this work, we do not\nmake an attempt to distinguish between these two kinds of dependencies\nand our models treats them as identical. A simpler (and\nhence less controversial) choice would be to ignore direction in the\ndependencies altogether and consider only undirected edges. This\nchoice definitely makes sense as a first step but we chose the former\nsince we believe directional edges make more sense to the user as\nthey provide a more illustrative flow-chart perspective to the topic.\nTo make the idea of event threading more concrete, consider the\nexample of TDT3 topic 30005, titled `Osama bin Laden's Indict-ment'\n(in the 1998 news). This topic has 23 stories which form 5\nevents. An event model of this topic can be represented as in figure\n1. Each box in the figure indicates an event in the topic of Osama's\nindictment. The occurrence of event 2, namely `Trial and Indictment\nof Osama' is dependent on the event of `evidence gathered\nby CIA', i.e., event 1. Similarly, event 2 influences the occurrences\nof events 3, 4 and 5, namely `Threats from Militants', `Reactions\n447\nfrom Muslim World' and `announcement of reward'. Thus all the\ndependencies in the example are causal.\nExtending our notation further, we call an event A a parent of B\nand B the child of A, if\n\n\n\n. We define an event model\n\n\n\nto be a tuple of the set of events and set of dependencies\n.\nTrial and\n(5)\n(3)\n(4)\nCIA announces reward\nMuslim world\nReactions from\nIslamic militants\nThreats from\n(2)\n(1)\nOsama\nIndictment of\nCIA\ngathered by\nEvidence\nFigure 1: An event model of TDT topic `Osama bin Laden's\nindictment'.\nEvent threading is strongly related to topic detection and tracking\n, but also different from it significantly. It goes beyond topics,\nand models the relationships between events. Thus, event threading\ncan be considered as a further extension of topic detection and\ntracking and is more challenging due to at least the following difficulties\n.\n1. The number of events is unknown.\n2. The granularity of events is hard to define.\n3. The dependencies among events are hard to model.\n4. 
Since it is a brand new research area, no standard evaluation\nmetrics and benchmark data is available.\nIn the next few sections, we will describe our attempts to tackle\nthese problems.\nLABELED DATA\nWe picked 28 topics from the TDT2 corpus and 25 topics from\nthe TDT3 corpus. The criterion we used for selecting a topic is that\nit should contain at least 15 on-topic stories from CNN headline\nnews. If the topic contained more than 30 CNN stories, we picked\nonly the first 30 stories to keep the topic short enough for annota-tors\n. The reason for choosing only CNN as the source is that the\nstories from this source tend to be short and precise and do not tend\nto digress or drift too far away from the central theme. We believe\nmodeling such stories would be a useful first step before dealing\nwith more complex data sets.\nWe hired an annotator to create truth data. Annotation includes\ndefining the event membership for each story and also the dependencies\n. We supervised the annotator on a set of three topics that\nwe did our own annotations on and then asked her to annotate the\n28 topics from TDT2 and 25 topics from TDT3.\nIn identifying events in a topic, the annotator was asked to broadly\nfollow the TDT definition of an event, i.e., `something that happens\nat a specific time and location'. The annotator was encouraged to\nmerge two events A and B into a single event C if any of the stories\ndiscusses both A and B. This is to satisfy our assumption that\neach story corresponds to a unique event. The annotator was also\nencouraged to avoid singleton events, events that contain a single\nnews story, if possible. We realized from our own experience that\npeople differ in their perception of an event especially when the\nnumber of stories in that event is small. As part of the guidelines,\nwe instructed the annotator to assign titles to all the events in each\ntopic. We believe that this would help make her understanding of\nthe events more concrete. We however, do not use or model these\ntitles in our algorithms.\nIn defining dependencies between events, we imposed no restrictions\non the graph structure. Each event could have single, multiple\nor no parents. Further, the graph could have cycles or orphan-nodes\n. The annotator was however instructed to assign a dependency\nfrom event A to event B if and only if the occurrence of B\nis `either causally influenced by A or is closely related to A and\nfollows A in time'.\nFrom the annotated topics, we created a training set of 26 topics\nand a test set of 27 topics by merging the 28 topics from TDT2 and\n25 from TDT3 and splitting them randomly. Table 1 shows that the\ntraining and test sets have fairly similar statistics.\nFeature\nTraining set\nTest set\nNum. topics\n26\n27\nAvg. Num. Stories/Topic\n28.69\n26.74\nAvg. Doc. Len.\n64.60\n64.04\nAvg. Num. Stories/Event\n5.65\n6.22\nAvg. Num. Events/Topic\n5.07\n4.29\nAvg. Num. Dependencies/Topic\n3.07\n2.92\nAvg. Num. Dependencies/Event\n0.61\n0.68\nAvg. Num. Days/Topic\n30.65\n34.48\nTable 1: Statistics of annotated data\nEVALUATION\nA system can generate some event model\n\n\n\n\n\n\nusing\ncertain algorithms, which is usually different from the truth model\n\n\n\n(we assume the annotator did not make any mistake\n). Comparing a system event model\n\n\nwith the true model\n\nrequires comparing the entire event models including their dependency\nstructure. And different event granularities may bring\nhuge discrepancy between\n\n\nand\n\n. 
This is certainly non-trivial, as even testing whether two graphs are isomorphic has no known polynomial time solution. Hence, instead of comparing the actual structure, we examine a pair of stories at a time and verify whether the system and true labels agree on their event-memberships and dependencies. Specifically, we compare two kinds of story pairs:

Cluster pairs ($C(M)$): These are the complete set of unordered pairs $(s_i, s_j)$ of stories $s_i$ and $s_j$ that fall within the same event given a model $M$. Formally,

$C(M) = \{(s_i, s_j) \mid f(s_i) = f(s_j),\ i \neq j\}$   (5)

where $f$ is the function in $M$ that maps stories to events as defined in equation 4.

Dependency pairs ($D(M)$): These are the set of all ordered pairs of stories $(s_i, s_j)$ such that there is a dependency from the event of $s_i$ to the event of $s_j$ in the model $M$:

$D(M) = \{(s_i, s_j) \mid (f(s_i), f(s_j)) \in \mathcal{D}\}$   (6)

Note that the story pair is ordered here, so $(s_i, s_j)$ is not equivalent to $(s_j, s_i)$. In our evaluation, a correct pair with wrong direction will be considered a mistake. As we mentioned earlier in section 3, ignoring the direction may make the problem simpler, but we would lose the expressiveness of our representation.

Figure 2: Evaluation measures. (In the example, the true event model has events {A,B}, {C}, {D,E} with dependencies {A,B} to {C} and {A,B} to {D,E}; the system event model has events {A,C}, {B}, {D,E} with dependencies {A,C} to {B} and {B} to {D,E}. This yields cluster precision 1/2, cluster recall 1/2, dependency precision 2/4 and dependency recall 2/6.)

Given these two sets of story pairs corresponding to the true event model $M$ and the system event model $M'$, we define recall and precision for each category as follows.

Cluster Precision (CP): the probability that two randomly selected stories $s_i$ and $s_j$ are in the same true event given that they are in the same system event:

$CP = P(f(s_i) = f(s_j) \mid f'(s_i) = f'(s_j)) = \frac{|C(M) \cap C(M')|}{|C(M')|}$   (7)

where $f'$ is the story-event mapping function corresponding to the model $M'$.

Cluster Recall (CR): the probability that two randomly selected stories $s_i$ and $s_j$ are in the same system event given that they are in the same true event:

$CR = P(f'(s_i) = f'(s_j) \mid f(s_i) = f(s_j)) = \frac{|C(M) \cap C(M')|}{|C(M)|}$   (8)

Dependency Precision (DP): the probability that there is a dependency between the events of two randomly selected stories $s_i$ and $s_j$ in the true model $M$ given that they have a dependency in the system model $M'$. Note that the direction of the dependency matters in the comparison:

$DP = \frac{|D(M) \cap D(M')|}{|D(M')|}$   (9)

Dependency Recall (DR): the probability that there is a dependency between the events of two randomly selected stories $s_i$ and $s_j$ in the system model $M'$ given that they have a dependency in the true model $M$. Again, the direction of the dependency is taken into consideration:

$DR = \frac{|D(M) \cap D(M')|}{|D(M)|}$   (10)

The measures are illustrated by an example in figure 2. We also combine these measures using the well known F1-measure commonly used in text classification and other research areas, as shown below:

$CF = \frac{2 \cdot CP \cdot CR}{CP + CR}, \quad DF = \frac{2 \cdot DP \cdot DR}{DP + DR}, \quad JF = \frac{2 \cdot CF \cdot DF}{CF + DF}$   (11)

where $CF$ and $DF$ are the cluster and dependency F1-measures respectively and $JF$ is the joint F1-measure that we use to measure the overall performance.
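A small sketch of these pair-based measures follows, assuming both the true and the system model are given as a story-to-event mapping plus a set of directed event-id pairs (the choice of event ids is arbitrary). The example data reproduces Figure 2.

from itertools import combinations

def cluster_pairs(story_to_event):
    return {frozenset(p) for p in combinations(story_to_event, 2)
            if story_to_event[p[0]] == story_to_event[p[1]]}

def dependency_pairs(story_to_event, event_edges):
    return {(a, b) for a in story_to_event for b in story_to_event
            if (story_to_event[a], story_to_event[b]) in event_edges}

def precision_recall_f1(true_pairs, system_pairs):
    hit = len(true_pairs & system_pairs)
    p = hit / len(system_pairs) if system_pairs else 0.0
    r = hit / len(true_pairs) if true_pairs else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# the example of Figure 2
true_map,  true_deps = {"A": 1, "B": 1, "C": 2, "D": 3, "E": 3}, {(1, 2), (1, 3)}
sys_map,   sys_deps  = {"A": 1, "C": 1, "B": 2, "D": 3, "E": 3}, {(1, 2), (2, 3)}

CP, CR, CF = precision_recall_f1(cluster_pairs(true_map), cluster_pairs(sys_map))
DP, DR, DF = precision_recall_f1(dependency_pairs(true_map, true_deps),
                                 dependency_pairs(sys_map, sys_deps))
JF = 2 * CF * DF / (CF + DF) if CF + DF else 0.0
print(CP, CR, DP, DR, JF)   # 0.5, 0.5, 0.5, 0.333..., and the joint F1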
TECHNIQUES
The task of event modeling can be split into two parts: clustering the stories into unique events in the topic and constructing dependencies among them. In the following subsections, we describe the techniques we developed for each of these sub-tasks.

6.1 Clustering
Each topic is composed of multiple events, so stories must be clustered into events before we can model the dependencies among them. For simplicity, all stories in the same topic are assumed to be available at one time, rather than arriving in a text stream. This task is similar to traditional clustering, but features other than word distributions may also be critical in our application.

In many text clustering systems, the similarity between two stories is the inner product of their tf-idf vectors, hence we use it as one of our features. Stories in the same event tend to follow temporal locality, so the time stamp of each story can be a useful feature. Additionally, named entities such as person and location names are another obvious feature when forming events: stories in the same event tend to be related to the same person(s) and location(s).

In this subsection, we present an agglomerative clustering algorithm that combines all these features. In our experiments, however, we study the effect of each feature on the performance separately using modified versions of this algorithm.

6.1.1 Agglomerative clustering with time decay (ACDT)
We initialize our events to singleton events (clusters), i.e., each cluster contains exactly one story. So the similarity between two events, to start with, is exactly the similarity between the corresponding stories. The similarity $sim(s_1, s_2)$ between two stories $s_1$ and $s_2$ is given by the following formula:

$sim(s_1, s_2) = \alpha \cos(s_1, s_2) + \beta\, Loc(s_1, s_2) + \gamma\, Per(s_1, s_2)$   (12)

Here $\alpha$, $\beta$ and $\gamma$ are the weights on the different features. In this work, we determined them empirically, but in the future one can consider more sophisticated learning techniques to determine them. $\cos(s_1, s_2)$ is the cosine similarity of the term vectors. $Loc(s_1, s_2)$ is 1 if there is some location that appears in both stories, otherwise it is 0. $Per(s_1, s_2)$ is similarly defined for person names.

We use time decay when calculating the similarity of story pairs, i.e., the larger the time difference between two stories, the smaller their similarity. The time period of each topic differs a lot, from a few days to a few months, so we normalize the time difference using the whole duration of the topic. The time-decay-adjusted similarity $sim'(s_1, s_2)$ is given by

$sim'(s_1, s_2) = sim(s_1, s_2)\, e^{-\lambda \frac{|t_1 - t_2|}{T}}$   (13)

where $t_1$ and $t_2$ are the time stamps of story 1 and story 2 respectively, $T$ is the time difference between the earliest and the latest story in the given topic, and $\lambda$ is the time decay factor.

In each iteration, we find the most similar event pair and merge them. We have three different ways to compute the similarity between two events $E_u$ and $E_v$:

Average link: the similarity is the average of the similarities of all pairs of stories between $E_u$ and $E_v$:

$sim(E_u, E_v) = \frac{1}{|E_u||E_v|} \sum_{s_i \in E_u} \sum_{s_j \in E_v} sim'(s_i, s_j)$   (14)

Complete link: the similarity between two events is given by the smallest of the pair-wise similarities:

$sim(E_u, E_v) = \min_{s_i \in E_u,\, s_j \in E_v} sim'(s_i, s_j)$   (15)

Single link: the similarity is given by the best similarity over all pairs of stories:

$sim(E_u, E_v) = \max_{s_i \in E_u,\, s_j \in E_v} sim'(s_i, s_j)$   (16)

This process continues until the maximum similarity falls below the threshold or the number of clusters is smaller than a given number.

6.2 Dependency modeling
Capturing dependencies is an extremely hard problem because it may require a `deeper understanding' of the events in question. A human annotator decides on dependencies not just based on the information in the events but also based on his/her vast repertoire of domain knowledge and general understanding of how things operate in the world. For example, in Figure 1 a human knows `Trial and indictment of Osama' is influenced by `Evidence gathered by CIA' because he/she understands the process of law in general. We believe a robust model should incorporate such domain knowledge in capturing dependencies, but in this work, as a first step, we rely on surface features such as the time-ordering of news stories and word distributions to model them. Our experiments in later sections demonstrate that such features are indeed useful in capturing dependencies to a large extent.

In this subsection, we describe the models we considered for capturing dependencies. In the rest of the discussion in this subsection, we assume that we are already given the mapping $f$ and we focus only on modeling the edges $\mathcal{D}$. First we define a couple of features that the following models will employ.

We first define a 1-1 time-ordering function $t : S \to \{1, \ldots, n\}$ that sorts stories in ascending order by their time of publication. The event-time-ordering function $t_e : \mathcal{E} \to \{1, \ldots, |\mathcal{E}|\}$ is then defined as follows:

$t_e(E_u) < t_e(E_v) \iff \min_{s_i \in E_u} t(s_i) < \min_{s_j \in E_v} t(s_j)$   (17)

In other words, $t_e$ time-orders events based on the time-ordering of their respective first stories.

We will also use the average cosine similarity between two events as a feature, defined as follows:

$\cos(E_u, E_v) = \frac{1}{|E_u||E_v|} \sum_{s_i \in E_u} \sum_{s_j \in E_v} \cos(s_i, s_j)$   (18)

6.2.1 Complete-Link model
In this model, we assume that there are dependencies between all pairs of events. The direction of a dependency is determined by the time-ordering of the first stories in the respective events. Formally, the system edges are defined as follows:

$\mathcal{D}' = \{(E_u, E_v) \mid t_e(E_u) < t_e(E_v)\}$   (19)

where $t_e$ is the event-time-ordering function. In other words, the dependency edge is directed from event $E_u$ to event $E_v$ if the first story in event $E_u$ is earlier than the first story in event $E_v$. We point out that this is not to be confused with the complete-link algorithm in clustering. Although we use the same name, it will be clear from the context which one we refer to.

6.2.2 Simple Thresholding
This model is an extension of the complete-link model with the additional constraint that there is a dependency between two events $E_u$ and $E_v$ only if the average cosine similarity between event $E_u$ and event $E_v$ is greater than a threshold $\tau$.
Formally,\n\n\n\n\n\n\n\n\u0474\n\n\n\n\n\n\n\n\n\n\n\n\n(20)\n6.2.3\nNearest Parent Model\nIn this model, we assume that each event can have at most one\nparent. We define the set of dependencies as follows.\n\n\n\n\n\n\n\n\u0474\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n(21)\nThus, for each event\n\n, the nearest parent model considers only\nthe event preceding it as defined by\n\nas a potential candidate. The\ncandidate is assigned as the parent only if the average similarity\nexceeds a pre-defined threshold\n\n.\n6.2.4\nBest Similarity Model\nThis model also assumes that each event can have at most one\nparent. An event\n\nis assigned a parent\n\nif and only if\n\nis\nthe most similar earlier event to\n\nand the similarity exceeds a\nthreshold\n\n. Mathematically, this can be expressed as:\n\n\n\n\n\n\n\n\u0474\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\u0474\n\n\n\n(22)\n6.2.5\nMaximum Spanning Tree model\nIn this model, we first build a maximum spanning tree (MST) using\na greedy algorithm on the following fully connected weighted,\nundirected graph whose vertices are the events and whose edges\nare defined as follows:\n\n\n\n\n\n\n\n\n\n\n\n\u0474\n\n\n\n(23)\nLet\n\n\n\n\n\nbe the set of edges in the maximum spanning tree of\n\n. Now our directed dependency edges\nare defined as follows.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\u0474\n\n\n\n\n(24)\n450\nThus in this model, we assign dependencies between the most similar\nevents in the topic.\nEXPERIMENTS\nOur experiments consists of three parts. First we modeled only\nthe event clustering part (defining the mapping function\n\n) using\nclustering algorithms described in section 6.1. Then we modeled\nonly the dependencies by providing to the system the true clusters\nand running only the dependency algorithms of section 6.2. Finally,\nwe experimented with combinations of clustering and dependency\nalgorithms to produce the complete event model. This way of experimentation\nallows us to compare the performance of our algorithms\nin isolation and in association with other components. The\nfollowing subsections present the three parts of our experimentation\n.\n7.1\nClustering\nWe have tried several variations of the\n\nalgorithm to study\nthe effects of various features on the clustering performance. All\nthe parameters are learned by tuning on the training set. We also\ntested the algorithms on the test set with parameters fixed at their\noptimal values learned from training. We used agglomerative clus-Model\nbest T\nCP\nCR\nCF\nP-value\ncos+1-lnk\n0.15\n0.41\n0.56\n0.43\ncos+all\n-lnk\n0.00\n0.40\n0.62\n0.45\ncos+Loc+avg\n-lnk\n0.07\n0.37\n0.74\n0.45\ncos+Per+avg\n-lnk\n0.07\n0.39\n0.70\n0.46\ncos+TD+avg\n-lnk\n0.04\n0.45\n0.70\n0.53\n2.9e-4*\ncos+N(T)+avg-lnk\n0\n.41\n0.62\n0.48\n7.5e-2\ncos+N(T)+T+avg-lnk\n0.03\n0.42\n0.62\n0.49\n2.4e-2*\ncos+TD+N(T)+avg-lnk\n0\n.44\n0.66\n0.52\n7.0e-3*\ncos+TD+N(T)+T+avg-lnk\n0.03\n0.47\n0.64\n0.53\n1.1e-3*\nBaseline(cos+avg-lnk)\n0.05\n0.39\n0.67\n0.46\nTable\n2: Comparison of agglomerative clustering algorithms\n(training set)\ntering based on only cosine similarity as our clustering baseline.\nThe results on the training and test sets are in Table 2 and 3 respectively\n. 
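For later reference in the experiments, the nearest-parent and best-similarity dependency models of section 6.2 can be sketched compactly as follows. This is our own illustration, not the released implementation: events are assumed to be lists of story dicts carrying a time stamp, sim is any story-level similarity (for instance the time-decayed one of equation 13), and theta is the dependency threshold.

def avg_similarity(e1, e2, sim):
    # equation 18: average pairwise story similarity between two events
    return sum(sim(s, t) for s in e1 for t in e2) / (len(e1) * len(e2))

def time_order(events):
    # equation 17: order events by the publication time of their earliest story
    return sorted(events, key=lambda e: min(s["time"] for s in e))

def nearest_parent_edges(events, sim, theta):
    # equation 21: the only candidate parent of an event is its immediate
    # predecessor in time; it is accepted if the average similarity exceeds theta
    ordered = time_order(events)
    return [(prev, cur) for prev, cur in zip(ordered, ordered[1:])
            if avg_similarity(prev, cur, sim) > theta]

def best_similarity_edges(events, sim, theta):
    # equation 22: the parent is the most similar earlier event, if similar enough
    ordered = time_order(events)
    edges = []
    for i, cur in enumerate(ordered):
        if i == 0:
            continue
        parent = max(ordered[:i], key=lambda e: avg_similarity(e, cur, sim))
        if avg_similarity(parent, cur, sim) > theta:
            edges.append((parent, cur))
    return edges

In both sketches a dependency edge is returned as an (earlier event, later event) pair, matching the time-ordering convention of the complete-link model.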
We use the Cluster F1-measure (CF) averaged over all topics\nas our evaluation criterion.\nModel\nCP\nCR\nCF\nP-value\ncos+1-lnk\n0.43\n0.49\n0.39\ncos+all\n-lnk\n0.43\n0.62\n0.47\ncos+Loc+avg\n-lnk\n0.37\n0.73\n0.45\ncos+Per+avg\n-lnk\n0.44\n0.62\n0.45\ncos+TD+avg\n-lnk\n0.48\n0.70\n0.54\n0.014*\ncos+N(T)+avg-lnk\n0.41\n0.71\n0.51\n0.31\ncos+N(T)+T+avg-lnk\n0.43\n0.69*\n0.52\n0.14\ncos+TD+N(T)+avg-lnk\n0.43\n0.76\n0.54\n0.025*\ncos+TD+N(T)+T+avg-lnk\n0.47\n0.69\n0.54\n0.0095*\nBaseline(cos+avg-lnk)\n0.44\n0.67\n0.50\nTable\n3: Comparison of agglomerative clustering algorithms\n(test set)\nP-value marked with a\n\nmeans that it is a statistically significant\nimprovement over the baseline (95% confidence level, one tailed\nT-test). The methods shown in table 2 and 3 are:\n\nBaseline: tf-idf vector weight, cosine similarity, average link\nin clustering. In equation 12,\n\n\n,\n\n\n\n. And\n\n\nin equation 13. This F-value is the maximum obtained\nby tuning the threshold.\n\ncos+1-lnk: Single link comparison (see equation 16) is used\nwhere similarity of two clusters is the maximum of all story\npairs, other configurations are the same as the baseline run.\n\ncos+all-lnk: Complete link algorithm of equation 15 is used.\nSimilar to single link but it takes the minimum similarity of\nall story pairs.\n\ncos+Loc+avg-lnk: Location names are used when calculating\nsimilarity.\n\n\n\nin equation 12. All algorithms\nstarting from this one use average link (equation 14), since\nsingle link and complete link do not show any improvement\nof performance.\n\ncos+Per+avg-lnk:\n\n\n\nin equation 12, i.e., we put\nsome weight on person names in the similarity.\n\ncos+TD+avg-lnk: Time Decay coefficient\n\n\nin equation\n13, which means the similarity between two stories will be\ndecayed to\n\nif they are at different ends of the topic.\n\ncos+N(T)+avg-lnk: Use the number of true events to control\nthe agglomerative clustering algorithm. When the number\nof clusters is fewer than that of truth events, stop merging\nclusters.\n\ncos+N(T)+T+avg-lnk: similar to N(T) but also stop agglomeration\nif the maximal similarity is below the threshold\n\n.\n\ncos+TD:+N(T)+avg-lnk: similar to N(T) but the similarities\nare decayed,\n\n\nin equation 13.\n\ncos+TD+N(T)+T+avg-lnk: similar to TD+N(Truth) but calculation\nhalts when the maximal similarity is smaller than\nthe threshold\n\n.\nOur experiments demonstrate that single link and complete link\nsimilarities perform worse than average link, which is reasonable\nsince average link is less sensitive to one or two story pairs. We\nhad expected locations and person names to improve the result, but\nit is not the case. Analysis of topics shows that many on-topic\nstories share the same locations or persons irrespective of the event\nthey belong to, so these features may be more useful in identifying\ntopics rather than events. Time decay is successful because events\nare temporally localized, i.e., stories discussing the same event tend\nto be adjacent to each other in terms of time. Also we noticed\nthat providing the number of true events improves the performance\nsince it guides the clustering algorithm to get correct granularity.\nHowever, for most applications, it is not available. We used it only\nas a \"cheat\" experiment for comparison with other algorithms. On\nthe whole, time decay proved to the most powerful feature besides\ncosine similarity on both training and test sets.\n7.2\nDependencies\nIn this subsection, our goal is to model only dependencies. 
We\nuse the true mapping function\nand by implication the true events\n\n. We build our dependency structure\n\nusing all the five models\ndescribed in section 6.2. We first train our models on the 26\ntraining topics. Training involves learning the best threshold\n\nfor each of the models. We then test the performances of all the\ntrained models on the 27 test topics. We evaluate our performance\n451\nusing the average values of Dependency Precision (DP), Dependency\nRecall (DR) and Dependency F-measure (DF). We consider\nthe complete-link model to be our baseline since for each event, it\ntrivially considers all earlier events to be parents.\nTable 4 lists the results on the training set. We see that while all\nthe algorithms except MST outperform the baseline complete-link\nalgorithm , the nearest Parent algorithm is statistically significant\nfrom the baseline in terms of its DF-value using a one-tailed paired\nT-test at 95% confidence level.\nModel\nbest\n\nDP\nDR\nDF\nP-value\nNearest Parent\n0.025\n0.55\n0.62\n0.56\n0.04*\nBest Similarity\n0.02\n0.51\n0.62\n0.53\n0.24\nMST\n0.0\n0.46\n0.58\n0.48\nSimple\nThresh.\n0.045\n0.45\n0.76\n0.52\n0.14\nComplete-link\n0\n.36\n0.93\n0.48\nTable\n4: Results on the training set: Best\n\nis the optimal value\nof the threshold\n\n. * indicates the corresponding model is statistically\nsignificant compared to the baseline using a one-tailed,\npaired T-test at 95% confidence level.\nIn table 5 we present the comparison of the models on the test\nset. Here, we do not use any tuning but set the threshold to the\ncorresponding optimal values learned from the training set. The results\nthrow some surprises: The nearest parent model, which was\nsignificantly better than the baseline on training set, turns out to be\nworse than the baseline on the test set. However all the other models\nare better than the baseline including the best similarity which\nis statistically significant. Notice that all the models that perform\nbetter than the baseline in terms of DF, actually sacrifice their recall\nperformance compared to the baseline, but improve on their\nprecision substantially thereby improving their performance on the\nDF-measure.\nWe notice that both simple-thresholding and best similarity are\nbetter than the baseline on both training and test sets although the\nimprovement is not significant. On the whole, we observe that the\nsurface-level features we used capture the dependencies to a reasonable\nlevel achieving a best value of 0.72 DF on the test set.\nAlthough there is a lot of room for improvement, we believe this is\na good first step.\nModel\nDP\nDR\nDF\nP-value\nNearest Parent\n0.61\n0.60\n0.60\nBest\nSimilarity\n0.71\n0.74\n0.72\n0.04*\nMST\n0.70\n0.68\n0.69\n0.22\nSimple Thresh.\n0.57\n0.75\n0.64\n0.24\nBaseline (Complete-link)\n0.50\n0.94\n0.63\nTable\n5: Results on the test set\n7.3\nCombining Clustering and Dependencies\nNow that we have studied the clustering and dependency algorithms\nin isolation, we combine the best performing algorithms and\nbuild the entire event model. Since none of the dependency algorithms\nhas been shown to be consistently and significantly better\nthan the others, we use all of them in our experimentation. 
From\nthe clustering techniques, we choose the best performing Cos+TD.\nAs a baseline, we use a combination of the baselines in each components\n, i.e., cos for clustering and complete-link for dependencies.\nNote that we need to retrain all the algorithms on the training\nset because our objective function to optimize is now JF, the joint\nF-measure. For each algorithm, we need to optimize both the clustering\nthreshold and the dependency threshold. We did this empirically\non the training set and the optimal values are listed in table\n6.\nThe results on the training set, also presented in table 6, indicate\nthat cos+TD+Simple-Thresholding is significantly better than the\nbaseline in terms of the joint F-value JF, using a one-tailed paired T-test\nat 95% confidence level. On the whole, we notice that while the\nclustering performance is comparable to the experiments in section\n7.1, the overall performance is undermined by the low dependency\nperformance. Unlike our experiments in section 7.2 where we had\nprovided the true clusters to the system, in this case, the system\nhas to deal with deterioration in the cluster quality. Hence the performance\nof the dependency algorithms has suffered substantially\nthereby lowering the overall performance.\nThe results on the test set present a very similar story as shown\nin table 7. We also notice a fair amount of consistency in the performance\nof the combination algorithms. cos+TD+Simple-Thresholding\noutperforms the baseline significantly. The test set results also point\nto the fact that the clustering component remains a bottleneck in\nachieving an overall good performance.\nDISCUSSION AND CONCLUSIONS\nIn this paper, we have presented a new perspective of modeling\nnews topics. Contrary to the TDT view of topics as flat collection\nof news stories, we view a news topic as a relational structure\nof events interconnected by dependencies. In this paper, we also\nproposed a few approaches for both clustering stories into events\nand constructing dependencies among them. We developed a time-decay\nbased clustering approach that takes advantage of temporal-localization\nof news stories on the same event and showed that it\nperforms significantly better than the baseline approach based on\ncosine similarity. Our experiments also show that we can do fairly\nwell on dependencies using only surface-features such as cosine-similarity\nand time-stamps of news stories as long as true events\nare provided to the system. However, the performance deteriorates\nrapidly if the system has to discover the events by itself. Despite\nthat discouraging result, we have shown that our combined algorithms\nperform significantly better than the baselines.\nOur results indicate modeling dependencies can be a very hard\nproblem especially when the clustering performance is below ideal\nlevel. Errors in clustering have a magnifying effect on errors in dependencies\nas we have seen in our experiments. Hence, we should\nfocus not only on improving dependencies but also on clustering at\nthe same time.\nAs part of our future work, we plan to investigate further into\nthe data and discover new features that influence clustering as well\nas dependencies. And for modeling dependencies, a probabilistic\nframework should be a better choice since there is no definite answer\nof yes/no for the causal relations among some events. 
We also\nhope to devise an iterative algorithm which can improve clustering\nand dependency performance alternately as suggested by one of\nthe reviewers. We also hope to expand our labeled corpus further\nto include more diverse news sources and larger and more complex\nevent structures.\nAcknowledgments\nWe would like to thank the three anonymous reviewers for their\nvaluable comments. This work was supported in part by the Center\n452\nModel\nCluster T\nDep. T\nCP\nCR\nCF\nDP\nDR\nDF\nJF\nP-value\ncos+TD+Nearest-Parent\n0.055\n0.02\n0.51\n0.53\n0.49\n0.21\n0.19\n0.19\n0.27\ncos+TD+Best\n-Similarity\n0.04\n0.02\n0.45\n0.70\n0.53\n0.21\n0.33\n0.23\n0.32\ncos+TD+MST\n0.04\n0.00\n0.45\n0.70\n0.53\n0.22\n0.35\n0.25\n0.33\ncos+TD+Simple\n-Thresholding\n0.065\n0.02\n0.56\n0.47\n0.48\n0.23\n0.61\n0.32\n0.38\n0.0004*\nBaseline (cos+Complete-link)\n0.10\n0\n.58\n0.31\n0.38\n0.20\n0.67\n0.30\n0.33\nTable\n6: Combined results on the training set\nModel\nCP\nCR\nCF\nDP\nDR\nDF\nJF\nP-value\ncos+TD+Nearest Parent\n0.57\n0.50\n0.50\n0.27\n0.19\n0.21\n0.30\ncos+TD+Best\nSimilarity\n0.48\n0.70\n0.54\n0.31\n0.27\n0.26\n0.35\ncos+TD+MST\n0.48\n0.70\n0.54\n0.31\n0.30\n0.28\n0.37\ncos+TD+Simple\nThresholding\n0.60\n0.39\n0.44\n0.32\n0.66\n0.42\n0.43\n0.0081*\nBaseline (cos+Complete-link)\n0.66\n0.27\n0.36\n0.30\n0.72\n0.43\n0.39\nTable\n7: Combined results on the test set\nfor Intelligent Information Retrieval and in part by SPAWARSYSCEN-SD\ngrant number N66001-02-1-8903. Any opinions, findings and\nconclusions or recommendations expressed in this material are the\nauthors' and do not necessarily reflect those of the sponsor.\nREFERENCES\n[1] J. Allan, J. Carbonell, G. Doddington, J. Yamron, and\nY. Yang. Topic detection and tracking pilot study: Final\nreport. In Proceedings of the DARPA Broadcast News\nTranscription and Understanding Workshop, pages 194218,\n1998.\n[2] J. Allan, A. Feng, and A. Bolivar. Flexible intrinsic\nevaluation of hierarchical clustering for tdt. volume In the\nProc. of the ACM Twelfth International Conference on\nInformation and Knowledge Management, pages 263270,\nNov 2003.\n[3] James Allan, editor. Topic Detection and Tracking:Event\nbased Information Organization. Kluwer Academic\nPublishers, 2000.\n[4] James Allan, Rahul Gupta, and Vikas Khandelwal. Temporal\nsummaries of new topics. In Proceedings of the 24th annual\ninternational ACM SIGIR conference on Research and\ndevelopment in information retrieval, pages 1018. ACM\nPress, 2001.\n[5] Regina Barzilay and Lillian Lee. Catching the drift:\nProbabilistic content models, with applications to generation\nand summarization. In Proceedings of Human Language\nTechnology Conference and North American Chapter of the\nAssociation for Computational Linguistics(HLT-NAACL),\npages 113120, 2004.\n[6] D. Lawrie and W. B. Croft. Discovering and comparing topic\nhierarchies. In Proceedings of RIAO 2000 Conference, pages\n314330, 1999.\n[7] David D. Lewis and Kimberly A. Knowles. Threading\nelectronic mail: a preliminary study. Inf. Process. Manage.,\n33(2):209217, 1997.\n[8] Juha Makkonen. Investigations on event evolution in tdt. In\nProceedings of HLT-NAACL 2003 Student Workshop, pages\n4348, 2004.\n[9] Aixin Sun and Ee-Peng Lim. Hierarchical text classification\nand evaluation. In Proceedings of the 2001 IEEE\nInternational Conference on Data Mining, pages 521528.\nIEEE Computer Society, 2001.\n[10] Yiming Yang, Jaime Carbonell, Ralf Brown, Thomas Pierce,\nBrian T. Archibald, and Xin Liu. 
Learning approaches for\ndetecting and tracking news events. In IEEE Intelligent\nSystems Special Issue on Applications of Intelligent\nInformation Retrieval, volume 14 (4), pages 3243, 1999.\n453\n", "keywords": "Complete-Link model;Event;Intelligent Information Retrieval;Event Threading;threading;meaningful and efficient analysis and presentation of news;Information browsing and organization;Nearest Parent Model;information searching;Dependency modeling;Agglomerative clustering with time decay;dependency;News topic modeling;Topic detection and tracking;clustering;temporal localization of news stories"} {"name": "89", "title": "Evolutionary Learning with Kernels: A Generic Solution for Large Margin Problems", "abstract": "In this paper we embed evolutionary computation into statistical learning theory. First, we outline the connection between large margin optimization and statistical learning and see why this paradigm is successful for many pattern recognition problems. We then embed evolutionary computation into the most prominent representative of this class of learning methods, namely into Support Vector Machines (SVM). In contrast to former applications of evolutionary algorithms to SVMs we do not only optimize the method or kernel parameters . We rather use both evolution strategies and particle swarm optimization in order to directly solve the posed constrained optimization problem. Transforming the problem into the Wolfe dual reduces the total runtime and allows the usage of kernel functions. Exploiting the knowledge about this optimization problem leads to a hybrid mutation which further decreases convergence time while classification accuracy is preserved. We will show that evolutionary SVMs are at least as accurate as their quadratic programming counterparts on six real-world benchmark data sets. The evolutionary SVM variants frequently outperform their quadratic programming competitors. Additionally, the proposed algorithm is more generic than existing traditional solutions since it will also work for non-positive semidefinite kernel functions and for several, possibly competing, performance criteria.", "fulltext": "INTRODUCTION\nIn this paper we will discuss how evolutionary algorithms\ncan be used to solve large margin optimization problems.\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nGECCO'06, July 812, 2006, Seattle, Washington, USA.\nCopyright 2006 ACM 1-59593-186-4/06/0007 ...\n$\n5.00.\nWe explore the intersection of three highly active research\nareas, namely machine learning, statistical learning theory,\nand evolutionary algorithms. While the connection between\nstatistical learning and machine learning was analyzed before\n, embedding evolutionary algorithms into this connection\nwill lead to a more generic algorithm which can deal\nwith problems today's learning schemes cannot cope with.\nSupervised machine learning is often about classification\nproblems. 
A set of data points is divided into several classes\nand the machine learning method should learn a decision\nfunction in order to decide into which class an unseen data\npoint should be classified.\nThe maximization of a margin between data points of different\nclasses, i. e. the distance between a decision hyperplane\nand the nearest data points, interferes with the ideas\nof statistical learning theory. This allows the definition of an\nerror bound for the generalization error. Furthermore, the\nusage of kernel functions allows the learning of non-linear\ndecision functions. We focus on Support Vector Machines\n(SVM) as they are the most prominent representatives for\nlarge margin problems. Since SVMs guarantee an optimal\nsolution for the given data set they are currently one of the\nmostly used learning methods. Furthermore, many other\noptimization problems can also be formulated as large margin\nproblem [26]. The relevance of large margin methods\ncan be measured by the number of submissions to the main\nmachine learning conferences over the past years\n1\n.\nUsually, the optimization problem posed by SVMs is solved\nwith quadratic programming. However, there are some drawbacks\n.\nFirst, for kernel functions which are not positive\nsemidefinite no unique global optimum exists. In these cases\nquadratic programming is not able to find satisfying solutions\nat all. Moreover, most implementations do not even\nterminate [8]. There exist several useful non-positive kernels\n[15], among them the sigmoid kernel which simulates a\nneural network [3, 23]. A more generic optimization scheme\nshould allow such non-positive kernels without the need for\nomitting the more efficient dual optimization problem [17].\nSecond, SVMs should be able to optimize several performance\nmeasures at the same time. Traditional SVMs try\nto maximize the prediction accuracy alone. However, depending\non the application area other specific performance\ncriteria should be optimized instead of or additionally to\nprediction accuracy. Although first attempts were made to\nincorporate multivariate performance measures into SVMs\n[13], the problem is not generally solved and no solution exist\n1\nMore than 30% of all accepted papers for ICML 2005 dealt\nwith SVMs and other large margin methods.\n1553\nfor competing criteria. This problem as well as the general\ntrade-off between training error and capacity could be easily\nsolved by an (multi-objective) evolutionary optimization\napproach.\nFormer applications of evolutionary algorithms to SVMs\ninclude the optimization of method and kernel parameters\n[6, 19], the selection of optimal feature subsets [7], and the\ncreation of new kernel functions by means of genetic programming\n[10]. The latter is particularly interesting since\nit cannot be guaranteed that the resulting kernel functions\nare again positive semi-definite.\nReplacing the traditional optimization techniques by evolution\nstrategies or particle swarm optimization can tackle\nboth problems mentioned above. We will extract as much\ninformation as possible from the optimization problem at\nhand and develop and compare different search point operations\n. We will show that the proposed implementation\nleads to as good results as traditional SVMs on all real-world\nbenchmark data sets. 
Additionally, the optimization\nis more generic since it also allows non-positive semi-definite\nkernel functions and the simultaneous optimization of different\n, maybe competing, criteria.\n1.1\nOutline\nIn Section 2 we give a short introduction into the concept\nof structural risk minimization and the ideas of statistical\nlearning theory. We will also discuss an upper bound for\nthe generalization error. This allows us to formalize the optimization\nproblem of large margin methods in Section 3.\nWe will introduce SVMs for the classification of given data\npoints in Section 3.1 and extend the separation problem to\nnon-separable datasets (see Section 3.2) with non-linear hyperplanes\n(see Section 3.3). This leads to a constrained optimization\nproblem for which we utilize evolution strategies\nand particle swarm optimization in Section 4. We discuss\nseveral enhancements and a new type of mutation before\nwe evaluate the proposed methods on real-world benchmark\ndatasets in Section 5.\nSTRUCTURAL RISK MINIMIZATION\nIn this section we discuss the idea of structural risk minimization\n. Machine learning methods following this paradigm\nhave a solid theoretical foundation and it is possible to define\nbounds for prediction errors.\nLet X IR\nm\nbe a real-valued vector of random variables.\nLet Y IR be another random variable. X and Y obey a\nfixed but unknown probability distribution P (X, Y ). Machine\nLearning tries to find a function f(x, ) which predict\nthe value of Y for a given input x X. The function class\nf depends on a vector of parameters , e. g. if f is the\nclass of all polynomials, might be the degree. We define\na loss function L(Y, f(X, )) in order to penalize errors\nduring prediction [9]. Every convex function with arity 2,\npositive range, and L(x, x) = 0 can be used as loss function\n[22]. This leads to a possible criterion for the selection of a\nfunction f, the expected risk:\nR() =\nZ\nL(y, f(x, ))dP (x, y).\n(1)\nSince the underlying distribution is not known we are not\nable to calculate the expected risk.\nHowever, instead of\nestimating the probability distribution in order to allow this\ncalculation, we directly estimate the expected risk by using\na set of known data points T = {(x\n1\n, y\n1\n) , . . . , (x\nn\n, y\nn\n)\n}\nX Y . T is usually called training data. Using this set of\ndata points we can calculate the empirical risk :\nR\nemp\n() = 1\nn\nn\nX\ni\n=1\nL (y\ni\n, f (x\ni\n, )) .\n(2)\nIf training data is sampled according to P (X, Y ), the empirical\nrisk approximates the expected risk if the number of\nsamples grows:\nlim\nn\n\nR\nemp\n() = R().\n(3)\nIt is, however, a well known problem that for a finite number\nof samples the minimization of R\nemp\n() alone does not\nlead to a good prediction model [27]. For each loss function\nL, each candidate , and each set of tuples T X\nY with T T\n=\nexists another parameter vector\nso that L(y, f(x, )) = L(y, f(x, )) for all x T and\nL(y, f(x, )) > L(y, f(x, )) for all x T . Therefore, the\nminimization of R\nemp\n() alone does not guarantee the optimal\nselection of a parameter vector for other samples\naccording to the distribution P (X, Y ). This problem is often\nreferred to as overfitting.\nAt this point we use one of the main ideas of statistical\nlearning theory. Think of two different functions perfectly\napproximating a given set of training points. The first function\nis a linear function, i. e. a simple hyperplane in the considered\nspace IR\nm\n. 
The second function also hits all training\npoints but is strongly wriggling in between. Naturally, if we\nhad to choose between these two approximation functions,\nwe tend to select the more simple one, i. e. the linear hyperplane\nin this example. This derives from the observation\nthat more simple functions behave better on unseen examples\nthan very complicated functions. Since the mere minimization\nof the empirical risk according to the training data\nis not appropriate to find a good generalization, we incorporate\nthe capacity\n2\nof the used function into the optimization\nproblem (see Figure 1). This leads to the minimization of\nthe structural risk\nR\nstruct\n() = R\nemp\n() + ().\n(4)\nis a function which measures the capacity of the function\nclass f depending on the parameter vector . Since the\nempirical risk is usually a monotonically decreasing function\nof , we use to manage the trade-off between training error\nand capacity. Methods minimizing this type of risk function\nare known as shrinkage estimators [11].\n2.1\nBound on the generalization performance\nFor certain functions the structural risk is an upper\nbound for the empirical risk.\nThe capacity of the function\nf for a given can for example be measured with help\nof the Vapnik-Chervonenkis dimension (VC dimension) [27,\n28]. The VC dimension is defined as the cardinality of the\nbiggest set of tuples which can separated with help of f in all\npossible ways. For example, the VC dimension of linear hyperplanes\nin an m-dimensional space is m+1. Using the VC\ndimension as a measure for capacity leads to a probabilistic\nbound for the structural risk [27]. Let f be a function class\nwith finite VC dimension h and f() the best solution for the\n2\nAlthough not the same, the capacity of a function resembles\na measurement of the function complexity. In our example\nwe measure the ability to \"wriggle\". More details in [27].\n1554\nX\nY\nFigure 1: The simultaneous minimization of empirical\nrisk and model complexity gives a hint which\nfunction should be used in order to generalize the\ngiven data points.\nempirical risk minimization for T with |T | = n. Now choose\nsome such that 0 1. Then for losses smaller than\nsome number B, the following bound holds with probability\n1\n- :\nR() R\nemp\n() + B\ns\nh `log\n2l\nh\n+ 1\n- log\n4\nl\n.\n(5)\nSurprisingly, this bound is independent of P (X, Y ). It only\nassumes that both the seen and the unseen data points are\nindependently sampled according to some P (X, Y ). Please\nnote that this bound also no longer contains a weighting\nfactor or any other trade-off at all. The existence of a\nguaranteed error bound is the reason for the great success of\nstructural risk minimization in a wide range of applications.\nLARGE MARGIN METHODS\nAs discussed in the previous section we need to use a class\nof functions whose capacity can be controlled. In this section\nwe will discuss a special form of structural risk minimization\n, namely large margin approaches. All large margin\nmethods have one thing in common: they embed structural\nrisk minimization by maximizing a margin between a linear\nfunction and the nearest data points. The most prominent\nlarge margin method for classification tasks is the Support\nVector Machine (SVM).\n3.1\nSupport Vector Machines\nWe constrain the number of possible values of Y to 2,\nwithout loss of generality these values should be\n-1 and\n+1. 
In this case, finding a function f in order to decide\nwhich of both predictions is correct for an unseen data point\nis referred to as classification learning for the classes\n-1\nand +1. We start with the simplest case: learning a linear\nfunction from perfectly separable data. As we shall see in\nSection 3.2 and 3.3, the general case - non-linear functions\nderived from non-separable data - leads to a very similar\nproblem.\nIf the data points are linearly separable, a linear hyperplane\nmust exist in the input space IR\nm\nwhich separates\nboth classes. This hyperplane is defined as\nH = {x| w, x + b = 0} ,\n(6)\nH\nw\nMargin\nOrigin\n-b\n|w|\n+1\n-1\nFigure 2: A simple binary classification problem for\ntwo classes\n-1 (empty bullets) and +1 (filled bullets).\nThe separating hyperplane is defined by the vector\nw and the offset b. The distance between the nearest\ndata point(s) and the hyperplane is called\nmargin.\nwhere w is normal to the hyperplane, |b|/||w|| is the perpendicular\ndistance of the hyperplane to the origin (offset\nor bias), and\n||w|| is the Euclidean norm of w. The vector w\nand the offset b define the position and orientation of the hyperplane\nin the input space. These parameters correspond\nto the function parameters . After the optimal parameters\nw and b were found, the prediction of new data points can\nbe calculated as\nf(x, w, b) = sgn ( w, x + b) ,\n(7)\nwhich is one of the reasons why we constrained the classes\nto\n-1 and +1.\nFigure 2 shows some data points and a separating hyperplane\n. If all given data points are correctly classified by the\nhyperplane at hand the following must hold:\ni : y\ni\n( w, x\ni\n+ b) 0.\n(8)\nOf course, an infinite number of different hyperplanes exist\nwhich perfectly separate the given data points. However,\none would intuitively choose the hyperplane which has the\nbiggest amount of safety margin to both sides of the data\npoints.\nNormalizing w and b in a way that the point(s)\nclosest to the hyperplane satisfy\n| w, x\ni\n+ b| = 1 we can\ntransform equation 8 into\ni : y\ni\n( w, x\ni\n+ b) 1.\n(9)\nWe can now define the margin as the perpendicular distance\nof the nearest point(s) to the hyperplane.\nConsider two\npoints x\n1\nand x\n2\non opposite sides of the margin. That is\nw, x\n1\n+b = +1 and w, x\n2\n+b = -1 and w, (x\n1\n-x\n2\n) = 2.\nThe margin is then given by 1/||w||.\nIt can be shown, that the capacity of the class of separating\nhyperplanes decreases with increasing margin [21].\nMaximizing the margin of a hyperplane therefore formalizes\nthe structural risk minimization discussed in the previous\nsection. Instead of maximizing 1/||w|| we could also minimize\n1\n2\n||w||\n2\nwhich will result into more simple equations\nlater. This leads to the optimization problem\nminimize\n1\n2\n||w||\n2\n(10)\nsubject to\ni : y\ni\n( w, x\ni\n+ b) 1.\n(11)\n1555\nFunction 10 is the objective function and the constraints\nfrom equation 11 are called inequality constraints.\nThey\nform a constrained optimization problem. We will use a Lagrangian\nformulation of the problem. This allows us to replace\nthe inequality constraints by constraints on the Lagrange\nmultipliers which are easier to handle. The second\nreason is that after the transformation of the optimization\nproblem, the training data will only appear in dot products.\nThis will allow us to generalize the optimization to the non-linear\ncase (see Section 3.3). We will now introduce positive\nLagrange multipliers\ni\n, i = 1, . . . 
, n, one for each of the\ninequality constraints. The Lagrangian has the form\nL\nP\n(w, b, ) = 12||w||\n2\nn\nX\ni\n=1\n\ni\ny\ni\n( w, x\ni\n+ b) .\n(12)\nFinding a minimum of this function requires that the derivatives\nL\nP\n(w,b,)\nw\n= w n\nP\ni\n=1\n\ni\ny\ni\nx\ni\n(13)\nL\nP\n(w,b,)\nb\n=\nn\nP\ni\n=1\n\ni\ny\ni\n(14)\nare zero, i. e.\nw =\nn\nP\ni\n=1\n\ni\ny\ni\nx\ni\n(15)\n0 =\nn\nP\ni\n=1\n\ni\ny\ni\n.\n(16)\nThe Wolfe dual, which has to be maximized, results from\nthe Lagrangian by substituting 15 and 16 into 12, thus\nL\nD\n(w, b, ) =\nn\nX\ni\n=1\n\ni\n- 12\nn\nX\ni\n=1\nn\nX\nj\n=1\ny\ni\ny\nj\n\ni\n\nj\nx\ni\n, x\nj\n.\n(17)\nThis leads to the dual optimization problem which must\nbe solved in order to find a separating maximum margin\nhyperplane for given set of data points:\nmaximize\nn\nP\ni\n=1\n\ni\n1\n2\nn\nP\ni\n=1\nn\nP\nj\n=1\ny\ni\ny\nj\n\ni\n\nj\nx\ni\n, x\nj\n(18)\nsubject to\ni\n0 for all i = 1, . . . , n\n(19)\nand\nn\nP\ni\n=1\n\ni\ny\ni\n= 0.\n(20)\nFrom an optimal vector\n\nwe can calculate the optimal\nnormal vector w\n\nusing equation 15. The optimal offset can\nbe calculated with help of equation 11. Please note, that w\nis a linear combination of those data points x\ni\nwith\ni\n= 0.\nThese data points are called support vectors, hence the name\nsupport vector machine. Only support vectors determine the\nposition and orientation of the separating hyperplane, other\ndata points might as well be omitted during learning. In\nFigure 2 the support vectors are marked with circles. The\nnumber of support vectors is usually much smaller than the\ntotal number of data points.\n3.2\nNon-separable data\nWe now consider the case that the given set of data points\nis not linearly separable.\nThe optimization problem discussed\nin the previous section would not have a solution\nsince in this case constraint 11 could not be fulfilled for all\ni. We relax this constraint by introducing positive slack\nvariables\ni\n, i = 1, . . . , n. Constraint 11 becomes\ni : y\ni\n( w, x\ni\n+ b) 1 i\n.\n(21)\nIn order to minimize the number of wrong classifications\nwe introduce a correction term C P\nn\ni\n=1\n\ni\ninto the objective\nfunction. The optimization problems then becomes\nminimize\n1\n2\n||w||\n2\n+ C\nn\nP\ni\n=1\n\ni\n(22)\nsubject to\ni : y\ni\n( w, x\ni\n+ b) 1 i\n.\n(23)\nThe factor C determines the weight of wrong predictions as\npart of the objective function. As in the previous section\nwe create the dual form of the Lagrangian. The slacking\nvariables\ni\nvanish and we get the optimization problem\nmaximize\nn\nP\ni\n=1\n\ni\n1\n2\nn\nP\ni\n=1\nn\nP\nj\n=1\ny\ni\ny\nj\n\ni\n\nj\nx\ni\n, x\nj\n(24)\nsubject to 0\n\ni\nC for all i = 1, . . . , n\n(25)\nand\nn\nP\ni\n=1\n\ni\ny\ni\n= 0.\n(26)\nIt can easily be seen that the only difference to the separable\ncase is the additional upper bound C for all\ni\n.\n3.3\nNon-linear learning with kernels\nThe optimization problem described with equations 24,\n25, and 26 will deliver a linear separating hyperplane for\narbitrary datasets. The result is optimal in a sense that no\nother linear function is expected to provide a better classification\nfunction on unseen data according to P (X, Y ). However\n, if the data is not linearly separable at all the question\narises how the described optimization problem can be gener-alized\nto non-linear decision functions. 
Please note that the\ndata points only appear in the form of dot products x\ni\n, x\nj\n.\nA possible interpretation of this dot product is the similarity\nof these data points in the input space IR\nm\n. Now consider a\nmapping : IR\nm\nH into some other Euclidean space H\n(called feature space) which might be performed before the\ndot product is calculated. The optimization would depend\non dot products in this new space H, i. e. on functions of\nthe form (x\ni\n) , (x\nj\n) . A function k : IR\nm\nIR\nm\nIR\nwith the characteristic\nk (x\ni\n, x\nj\n) = (x\ni\n) , (x\nj\n)\n(27)\nis called kernel function or kernel. Figure 3 gives a rough\nidea how transforming the data points can help to solve\nnon-linear problems with the optimization in a (higher dimensional\n) space where the points can be linearly separated.\nA fascinating property of kernels is that for some mappings\na kernel k exists which can be calculated without\nactually performing . Since often the dimension of H is\ngreater than the dimension m of the input space and H\nsometimes is even infinite dimensional, the usage of such\nkernels is a very efficient way to introduce non-linear decision\nfunctions into large margin approaches. Prominent examples\nfor such efficient non-linear kernels are polynomial\nkernels with degree d\nk (x\ni\n, x\nj\n) = ( x\ni\n, x\nj\n+ )\nd\n,\n(28)\nradial basis function kernels (RBF kernels)\nk (x\ni\n, x\nj\n) = e\n||xi\n-xj||2\n22\n(29)\n1556\nH\nR\nm\nFigure 3: After the transformation of all data points\ninto the feature space H the non-linear separation\nproblem can be solved with a linear separation algorithm\n. In this case a transformation in the space\nof polynomials with degree 2 was chosen.\nfor a > 0, and the sigmoid kernel\nk (x\ni\n, x\nj\n) = tanh ( x\ni\n, x\nj\n- )\n(30)\nwhich can be used to simulate a neural network. and\nare scaling and shifting parameters. Since the RBF kernel\nis easy interpretable and often yields good prediction performance\n, it is used in a wide range of applications. We will\nalso use the RBF kernel for our experiments described in\nsection 5 in order to demonstrate the learning ability of the\nproposed SVM.\nWe replace the dot product in the objective function by\nkernel functions and achieve the final optimization problem\nfor finding a non-linear separation for non-separable data\npoints\nmaximize\nn\nP\ni\n=1\n\ni\n1\n2\nn\nP\ni\n=1\nn\nP\nj\n=1\ny\ni\ny\nj\n\ni\n\nj\nk (x\ni\n, x\nj\n)\n(31)\nsubject to 0\n\ni\nC for all i = 1, . . . , n\n(32)\nand\nn\nP\ni\n=1\n\ni\ny\ni\n= 0.\n(33)\nIt can be shown that if the kernel k, i. e. it's kernel matrix\n, is positive definite, the objective function is concave\n[2]. The optimization problem therefore has a global unique\nmaximum. However, in some cases a specialized kernel function\nmust be used to measure the similarity between data\npoints which is not positive definite, sometimes not even\npositive semidefinite [21]. In these cases the usual quadratic\nprogramming approaches might not be able to find a global\nmaximum in feasible time.\nEVOLUTIONARY COMPUTATION FOR LARGE MARGIN OPTIMIZATION\nSince traditional SVMs are not able to optimize for non-positive\nsemidefinite kernel function, it is a very appealing\nidea to replace the usual quadratic programming approaches\nby an evolution strategies (ES) approach [1] or by particle\nswarm optimization (PSO) [14]. 
In this section we will describe\nboth a straightforward application of these techniques\nand how we can exploit some information about our optimization\nproblem and incorporate that information into our\nsearch operators.\n4.1\nSolving the dual problem and other sim-plifications\nThe used optimization problem is the dual problem for\nnon-linear separation of non-separable data developed in the\nlast sections (equations 31, 32, and 33). Of course it would\nalso be possible to directly optimize the original form of\nour optimization problem depicted in equations 22 and 23.\nThat is, we could directly optimize the weight vectors and\nthe offset. As mentioned before, there are two drawbacks:\nfirst, the costs of calculating the fitness function would be\nmuch higher for the original optimization problem since the\nfulfillment of all n constraints must be recalculated for each\nnew hyperplane. It is a lot easier to check if all 0\n\ni\n\nC apply. Second, it would not be possible to allow non-linear\nlearning with efficient kernel functions in the original\nformulation of the problem. Furthermore, the kernel matrix\nK with K\nij\n= k (x\ni\n, x\nj\n) can be calculated beforehand and\nthe training data is never used during optimization again.\nThis further reduces the needed runtime for optimization\nsince the kernel matrix calculation is done only once.\nThis is a nice example for a case, where transforming the\nobjective function beforehand is both more efficient and allows\nenhancements which would not have been possible before\n. Transformations of the fitness functions became a very\ninteresting topic recently [25].\nAnother efficiency improvement can be achieved by formulating\nthe problem with b = 0. All solution hyperplanes\nmust then contain the origin and the constraint 33 will vanish\n. This is a mild restriction for high-dimensional spaces\nsince the number of degrees of freedom is only decreased by\none. However, during optimization we do not have to cope\nwith this equality constraint which would take an additional\nruntime of O(n).\n4.2\nEvoSVM and PsoSVM\nWe developed a support vector machine based on evolution\nstrategies optimization (EvoSVM). We utilized three\ndifferent types of mutation which will be described in this\nsection.\nFurthermore, we developed another SVM based\non particle swarm optimization (PsoSVM) which is also described\n.\nThe first approach (EvoSVM-G, G for Gaussian mutation\n) merely utilizes a standard ES optimization. Individuals\nare the real-valued vectors and mutation is performed\nby adding a Gaussian distributed random variable with standard\ndeviation C/10. In addition, a variance adaptation is\nconducted during optimization (1/5 rule [18]). Crossover\nprobability is high (0.9). We use tournament selection with\na tournament size of 0.25 multiplied by the population size.\nThe initial individuals are random vectors with 0\n\ni\nC.\nThe maximum number of generations is 1000 and the optimization\nis terminated if no improvement occurred during\nthe last 5 generations. The population size is 10.\nThe second version is called EvoSVM-S (S for switching\nmutation). 
Here we utilize the fact that only a small amount\nof input data points will become support vectors (sparsity ).\nOn the other hand, one can often observe that non-zero\nalpha values are equal to the upper bound C and only a very\nsmall amount of support vectors exists with 0 <\ni\n< C.\nTherefore, we just use the well known mutation of genetic\nalgorithms and switch between 0 and C with probability\n1/n for each\ni\n. The other parameters are equal to those\ndescribed for the EvoSVM-G.\n1557\nfor i = 1 to n do {\nif (random(0, 1) < 1/n) do {\nif (alpha_i > 0) do {\nalpha_i = 0;\n} else do {\nalpha_i = random(0, C);\n}\n}\n}\nFigure 4: A simple hybrid mutation which should\nspeed-up the search for sparser solutions.\nIt contains\nelements from standard mutations from both\ngenetic algorithms and evolution strategies.\nUsing this switching mutation inspired by genetic algorithms\nonly allow\ni\n= 0 or\ni\n= C. Instead of a complete\nswitch between 0 and C or a smooth change of all values\ni\nlike the Gaussian mutation does, we developed a hybrid\nmutation combining both elements. That means that we\ncheck for each\ni\nwith probability 1/n if the value should be\nmutated at all. If the current value\ni\nis greater than 0,\ni\nis\nset to 0. If\ni\nis equal to 0,\ni\nis set to a random value with\n0\n\ni\nC. Figure 4 gives an overview over this hybrid\nmutation. The function random(a, b) returns an uniformly\ndistributed random number between a and b. The other parameters\nare the same as described for the EvoSVM-G. We\ncall this version EvoSVM-H (H for hybrid).\nAs was mentioned before, the optimization problem usually\nis concave and the risk for local extrema is small. Therefore\n, we also applied a PSO technique. It should be inves-tigated\nif PSO, which is similar to the usual quadratic programming\napproaches for SVMs in a sense that the gradient\ninformation is exploited, is able to find a global optimum in\nshorter time. We call this last version PsoSVM and use a\nstandard PSO with inertia weight 0.1, local best weight 1.0,\nand global best weight 1.0. The inertia weight is dynami-cally\nadapted during optimization [14].\nEXPERIMENTS AND RESULTS\nIn this section we try to evaluate the proposed evolutionary\noptimization SVMs. We compare our implementation to\nthe quadratic programming approaches usually applied to\nlarge margin problems. The experiments demonstrate the\ncompetitiveness in terms of classification error minimization,\nruntime, and robustness.\nWe apply the discussed EvoSVM variants as well as the\nPsoSVM on six real-world benchmark datasets. We selected\nthese datasets from the UCI machine learning repository\n[16] and the StatLib dataset library [24], because they already\ndefine a binary classification task, consist of real-valued\nnumbers only and do not contain missing values.\nTherefore, we did not need to perform additional prepro-cessing\nsteps which might introduce some bias. The properties\nof all datasets are summarized in Table 1. The default\nerror corresponds to the error a lazy default classifier would\nmake by always predicting the major class. Classifiers must\nproduce lower error rates in order to learn at all instead of\njust guessing.\nIn order to compare the evolutionary SVMs described\nin this paper with standard implementations we also applied\ntwo other SVMs on all datasets. 
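Before turning to the comparison itself, the hybrid mutation of Figure 4 can be written out in runnable form. The short Python sketch below is ours, with random.random and random.uniform standing in for the unspecified random(a, b) helper of the figure.

import random

def hybrid_mutation(alpha, C):
    # each alpha_i is selected for mutation with probability 1/n; a selected
    # non-zero value is reset to 0, while a selected zero value jumps to a
    # uniform value in [0, C] (cf. Figure 4)
    n = len(alpha)
    child = list(alpha)
    for i in range(n):
        if random.random() < 1.0 / n:
            child[i] = 0.0 if child[i] > 0 else random.uniform(0.0, C)
    return child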
Both SVMs use a\nDataset\nn\nm\nSource\n\nDefault\nLiver\n346\n6\nUCI\n0.010\n42.03\nIonosphere\n351\n34\nUCI\n1.000\n35.90\nSonar\n208\n60\nUCI\n1.000\n46.62\nLawsuit\n264\n4\nStatLib\n0.010\n7.17\nLupus\n87\n3\nStatLib\n0.001\n40.00\nCrabs\n200\n7\nStatLib\n0.100\n50.00\nTable 1: The evaluation datasets.\nn is the number\nof data points, m is the dimension of the input\nspace.\nThe kernel parameter was optimized for\nthe comparison SVM learner\nmySVM. The last column\ncontains the default error, i. e. the error for\nalways predicting the major class.\nslightly different optimization technique based on quadratic\nprogramming. The used implementations were mySVM [20]\nand LibSVM [4]. The latter is an adaptation of the widely\nused SV M\nlight\n[12].\nWe use a RBF kernel for all SVMs and determine the\nbest parameter value for with a grid search parameter optimization\nfor mySVM. This ensures a fair comparison since\nthe parameter is not optimized for one of the evolutionary\nSVMs. Possible parameters were 0.001, 0.01, 0.1, 1 and 10.\nThe optimal value for each dataset is also given in Table 1.\nIn order to determine the performance of all methods we\nperform a k-fold cross validation. That means that the\ndataset T is divided into k disjoint subsets T\ni\n. For each\ni {1, . . . , k} we use T \\T\ni\nas training set and the remaining\nsubset T\ni\nas test set. If F\ni\nis the number of wrong predictions\non test set T\ni\nwe calculate the average classification\nerror\nE = 1\nk\nk\nX\ni\n=1\nF\ni\n|T\ni\n|\n(34)\nover all test sets in order to measure the classification performance\n. In our experiments we choose k = 20, i. e. for\neach evolutionary method the average and standard deviation\nof 20 runs is reported. All experiments were performed\nwith the machine learning environment\nYale [5].\nTable 2 summarizes the results for different values of C.\nIt can be seen that the EvoSVM variants frequently yield\nsmaller classification errors than the quadratic programming\ncounterparts (mySVM and LibSVM). For C = 1, a statistical\nsignificant better result was achieved by using LibSVM\nonly for the Liver data set. For all other datasets the evolutionary\noptimization outperforms the quadratic programming\napproaches. The same applies for C = 0.1. For rather\nsmall values of C most learning schemes were not able to\nproduce better predictions than the default classifier. For\nC = 0.01, however, PsoSVM at least provides a similar\naccuracy to LibSVM. The reason for higher errors of the\nquadratic programming approaches is probably a too aggressive\ntermination criterion. Although this termination\nbehavior further reduces runtime for mySVM and LibSVM,\nthe classification error is often increased.\nIt turns out that the standard ES approach EvoSVM-G\nusing a mutation adding a Gaussian distributed random\nvariable often outperforms the other SVMs. 
However, the\n1558\nC = 1\nLiver\nIonosphere\nSonar\nLawsuit\nLupus\nCrabs\nError\nT\nError\nT\nError\nT\nError\nT\nError\nT\nError\nT\nEvoSVM-G\n34.718.60\n68\n10.815.71\n80\n14.034.52\n26\n2.051.87\n52\n25.2011.77\n8\n2.253.72\n25\nEvoSVM-S\n35.376.39\n4\n8.493.80\n9\n17.456.64\n6\n2.401.91\n10\n30.9212.42\n<1\n4.054.63\n2\nEvoSVM-H\n34.977.32\n7\n6.833.87\n22\n15.416.39\n10\n2.011.87\n14\n24.0313.68\n1\n3.954.31\n7\nPsoSVM\n34.784.95\n8\n9.904.38\n9\n16.945.61\n7\n3.022.83\n3\n25.22 7.67\n<1\n3.403.70\n2\nmySVM\n33.624.31\n2\n8.564.25\n4\n15.815.59\n2\n1.892.51\n1\n25.28 8.58\n1\n3.003.32\n1\nLibSVM\n32.725.41\n2\n7.703.63\n3\n14.604.96\n3\n2.412.64\n1\n24.1412.33\n1\n3.004.58\n1\nF Test\n3.20 (0.01)\n9.78 (0.00)\n6.19 (0.00)\n1.51 (0.19)\n11.94 (0.00)\n2.25 (0.05)\nC = 0.1\nLiver\nIonosphere\nSonar\nLawsuit\nLupus\nCrabs\nError\nT\nError\nT\nError\nT\nError\nT\nError\nT\nError\nT\nEvoSVM-G\n33.904.19\n74\n9.406.14\n89\n21.726.63\n35\n2.351.92\n50\n24.9010.51\n7\n7.204.36\n27\nEvoSVM-S\n35.573.55\n4\n7.123.54\n18\n24.906.62\n4\n4.472.31\n13\n25.9812.56\n<1\n7.955.68\n2\nEvoSVM-H\n34.764.70\n5\n6.554.61\n23\n24.406.09\n11\n4.163.14\n19\n26.5113.03\n1\n6.505.02\n2\nPsoSVM\n36.815.04\n3\n13.967.56\n10\n24.186.11\n3\n3.032.83\n3\n29.8612.84\n1\n8.156.02\n1\nmySVM\n42.031.46\n2\n35.901.35\n2\n46.621.62\n2\n7.172.55\n1\n41.256.92\n1\n6.504.50\n1\nLibSVM\n33.0810.63\n2\n11.406.52\n3\n22.406.45\n3\n4.553.25\n1\n25.2916.95\n1\n21.0012.41\n1\nF Test\n34.46 (0.00)\n492.88 (0.00)\n323.83 (0.00)\n20.64 (0.00)\n64.83 (0.00)\n100.92 (0.00)\nC = 0.01\nLiver\nIonosphere\nSonar\nLawsuit\nLupus\nCrabs\nError\nT\nError\nT\nError\nT\nError\nT\nError\nT\nError\nT\nEvoSVM-G\n42.031.46\n75\n35.901.35\n86\n45.332.20\n39\n7.172.55\n55\n40.006.33\n7\n26.2012.66\n27\nEvoSVM-S\n42.031.46\n3\n35.901.35\n9\n46.621.62\n4\n7.172.55\n3\n40.006.33\n<1\n8.584.35\n1\nEvoSVM-H\n42.031.46\n3\n35.901.35\n20\n46.271.42\n12\n7.172.55\n3\n40.006.33\n1\n7.004.00\n2\nPsoSVM\n41.398.59\n3\n35.901.35\n4\n27.906.28\n3\n7.172.55\n2\n31.9412.70\n1\n10.057.26\n1\nmySVM\n42.031.46\n2\n35.901.35\n2\n46.621.62\n2\n7.172.55\n1\n40.006.33\n1\n6.504.50\n1\nLibSVM\n42.031.46\n2\n35.901.35\n3\n28.4610.44\n2\n7.172.55\n1\n26.1116.44\n1\n50.000.00\n1\nF Test\n0.52 (0.77)\n0.00 (1.00)\n442.46 (0.00)\n0.00 (1.00)\n78.27 (0.00)\n1095.94 (0.00)\nTable 2: Classification error, standard deviation, and runtime of all SVMs on the evaluation datasets for\nparameters C = 1, C = 0.1, and C = 0.01. The runtime T is given in seconds. The last line for each table\ndepicts the F test value and the probability that the results are not statistical significant.\nruntime for this approach is far to big to be feasible in\npractical situations. The mere GA based selection mutation\nswitching between 0 and C converges much faster but\nis often less accurate. The remaining runtime differences\nbetween EvoSVM-S and the quadratic programming counterparts\ncan surely be reduced by code optimization. The\nused SVM implementations are matured and have been optimized\nover the years whereas the implementations of the\nevolutionary approaches follow standard recipes without any\ncode optimization.\nThe hybrid version EvoSVM-H combines the best elements\nof both worlds. It converges nearly as fast as the\nEvoSVM-S and is often nearly as accurate as the EvoSVM-G\n. In some cases (Ionosphere, Lupus) it even outperforms\nall other SVMs.\nPsoSVM on the other hand does not provide the best\nperformance in terms of classification error. 
Compared to\nthe other evolutionary approaches, however, it converged\nmuch earlier than the other competitors.\nPlease note that the standard deviations of the errors\nachieved with the evolutionary SVMs are similar to the standard\ndeviations achieved with mySVM or LibSVM. We can\ntherefore conclude that the evolutionary optimization is as\nrobust as the quadratic programming approaches and differences\nmainly derives from different subsets for training and\ntesting due to cross validation instead of the used random-ized\nheuristics.\nTherefore, evolutionary SVMs provide an interesting alternative\nto more traditional SVM implementations. Beside\nthe similar results EvoSVM is also able to cope with non-positive\ndefinite kernel functions and multivariate optimization\nCONCLUSION\nIn this paper we connected evolutionary computation with\nstatistical learning theory. The idea of large margin methods\nwas very successful in many applications from machine\nlearning and data mining.\nWe used the most prominent\nrepresentative of this paradigm, namely Support Vector Machines\n, and employed evolution strategies and particle swarm\noptimization in order to solve the constrained optimization\nproblem at hand. We developed a hybrid mutation which\ndecreases convergence time while the classification accuracy\nis preserved.\nAn interesting property of large margin methods is that\nthe runtime for fitness evaluation is reduced by transforming\nthe problem into the dual problem. In our case, the algorithm\nis both faster and provides space for other improvements\nlike incorporating a kernel function for non-linear\nclassification tasks. This is a nice example how a transformation\ninto the dual optimization problem can be exploited\nby evolutionary algorithms.\nWe have seen that evolutionary SVMs are at least as accurate\nas their quadratic programming counterparts. For\npractical values of C the evolutionary SVM variants frequently\noutperformed their competitors. We can conclude\nthat evolutionary algorithms proved as reliable as other optimization\nschemes for this type of problems. In addition,\nbeside the inherent advantages of evolutionary algorithms\n(e. g. parallelization, multi-objective optimization of train-1559\ning error and capacity) it is now also possible to employ\nnon positive semidefinite kernel functions which would lead\nto unsolvable problems for other optimization techniques.\nIn our future work we plan to make experiments with such\nnon positive semidefinite kernel functions. This also applies\nfor multi-objective optimization of both the margin and the\ntraining error.\nIt turns out that the hybrid mutation delivers results\nnearly as accurate as the Gaussian mutation and has a similar\nconvergence behavior compared to the switching mutation\nknown from GAs. Future improvements could start\nwith a switching mutation and can post-optimize with a\nGaussian mutation after a first convergence. Values always\nremaining 0 or C during the first run could be omitted in\nthe post-optimization step. It is possible that this mutation\nis even faster and more accurate then EvoSVM-H.\nACKNOWLEDGMENTS\nThis work was supported by the Deutsche Forschungsge-meinschaft\n(DFG) within the Collaborative Research Center\n\"Reduction of Complexity for Multivariate Data Structures\".\nREFERENCES\n[1] H.-G. Beyer and H.-P. Schwefel. Evolution strategies:\nA comprehensive introduction. Journal Natural\nComputing, 1(1):252, 2002.\n[2] C. Burges. 
A tutorial on support vector machines for\npattern recognition. Data Mining and Knowledge\nDiscovery, 2(2):121167, 1998.\n[3] G. Camps-Valls, J. Martin-Guerrero, J. Rojo-Alvarez,\nand E. Soria-Olivas. Fuzzy sigmoid kernel for support\nvector classifiers. Neurocomputing, 62:501506, 2004.\n[4] C.-C. Chang and C.-J. Lin. LIBSVM: a library for\nsupport vector machines, 2001.\n[5] S. Fischer, R. Klinkenberg, I. Mierswa, and\nO. Ritthoff. Yale: Yet Another Learning Environment\nTutorial. Technical Report CI-136/02, Collaborative\nResearch Center 531, University of Dortmund,\nDortmund, Germany, 2002.\n[6] F. Friedrichs and C. Igel. Evolutionary tuning of\nmultiple svm parameters. In Proc. of the 12th\nEuropean Symposium on Artificial Neural Networks\n(ESANN 2004), pages 519524, 2004.\n[7] H. Fr\nphlich, O. Chapelle, and B. Sch\nolkopf. Feature\nselection for support vector machines using genetic\nalgorithms. International Journal on Artificial\nIntelligence Tools, 13(4):791800, 2004.\n[8] B. Haasdonk. Feature space interpretation of svms\nwith indefinite kernels. IEEE Transactions on Pattern\nAnalysis and Machine Intelligence, 27(4):482492,\n2005.\n[9] T. Hastie, R. Tibshirani, and J. Friedman. The\nElements of Statistical Learning: Data Mining,\nInference, and Prediction. Springer Series in Statistics.\nSpringer, 2001.\n[10] T. Howley and M. Madden. The genetic kernel\nsupport vector machine: Description and evaluation.\nArtificial Intelligence Review, 2005.\n[11] W. James and C. Stein. Estimation with quadratic\nloss. In Proceedings of the Fourth Berkeley Symposium\non Mathematics, Statistics and Probability,\npages 361380, 1960.\n[12] T. Joachims. Making large-scale SVM learning\npractical. In B. Sch\nolkopf, C. Burges, and A. Smola,\neditors, Advances in Kernel Methods - Support Vector\nLearning, chapter 11. MIT Press, Cambridge, MA,\n1999.\n[13] T. Joachims. A support vector method for\nmultivariate performance measures. In Proc. of the\nInternational Conference on Machine Learning\n(ICML), pages 377384, 2005.\n[14] J. Kennedy and R. C. Eberhart. Particle swarm\noptimization. In Proc. of the International Conference\non Neural Networks, pages 19421948, 1995.\n[15] H.-T. Lin and C.-J. Lin. A study on sigmoid kernels\nfor svm and the training of non-psd kernels by\nsmo-type methods, March 2003.\n[16] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI\nrepository of machine learning databases, 1998.\nhttp://www.ics.uci.edu/\nmlearn/MLRepository.html.\n[17] C. Ong, X. Mary, S. Canu, and A. J. Smola. Learning\nwith non-positive kernels. In Proc. of the 21st\nInternational Conference on Machine Learning\n(ICML), pages 639646, 2004.\n[18] I. Rechenberg. Evolutionsstrategie: Optimierung\ntechnischer Systeme nach Prinzipien der biologischen\nEvolution. Frommann-Holzboog, 1973.\n[19] T. Runarsson and S. Sigurdsson. Asynchronous\nparallel evolutionary model selection for support\nvector machines. Neural Information Processing,\n3(3):5967, 2004.\n[20] S. R\nuping. mySVM Manual. Universit\nat Dortmund,\nLehrstuhl Informatik VIII, 2000. http://www-ai\n.cs.uni-dortmund.de/SOFTWARE/MYSVM/.\n[21] B. Sch\nolkopf and A. J. Smola. Learning with Kernels\nSupport Vector Machines, Regularization,\nOptimization, and Beyond. MIT Press, 2002.\n[22] A. Smola, B. Sch\nolkopf, and K.-R. M\nuller. General\ncost functions for support vector regression. In\nProceedings of the 8th International Conference on\nArtificial Neural Networks, pages 7983, 1998.\n[23] A. J. Smola, Z. L. Ovari, and R. C. 
Williamson.\nRegularization with dot-product kernels. In Proc. of\nthe Neural Information Processing Systems (NIPS),\npages 308314, 2000.\n[24] Statlib datasets archive.\nhttp://lib.stat.cmu.edu/datasets/.\n[25] T. Storch. On the impact of objective function\ntransformations on evolutionary and black-box\nalgorithms. In Proc. of the Genetic and Evolutionary\nComputation Conference (GECCO), pages 833840,\n2005.\n[26] B. Taskar, V. Chatalbashev, D. Koller, and\nC. Guestrin. Learning structured prediction models: A\nlarge margin approach. In Proc. of the International\nConference on Machine Learning (ICML), 2005.\n[27] V. Vapnik. Statistical Learning Theory. Wiley, New\nYork, 1998.\n[28] V. Vapnik and A. Chervonenkis. The necessary and\nsufficient conditions for consistency in the empirical\nrisk minimization method. Pattern Recognition and\nImage Analysis, 1(3):283305, 1991.\n1560", "keywords": "Support vector machines;statistical learning theory;kernel methods;SVM;evolution strategies;large margin;particle swarms;machine learning;hybrid mutation;evolutionary computation;kernels"} {"name": "9", "title": "A Flexible and Extensible Object Middleware: CORBA and Beyond", "abstract": "This paper presents a CORBA-compliant middleware architecture that is more flexible and extensible compared to standard CORBA. The portable design of this architecture is easily integrated in any standard CORBA middleware; for this purpose, mainly the handling of object references (IORs) has to be changed. To encapsulate those changes, we introduce the concept of a generic reference manager with portable profile managers. Profile managers are pluggable and in extreme can be downloaded on demand. To illustrate the use of this approach, we present a profile manager implementation for fragmented objects and another one for bridging CORBA to the Jini world. The first profile manager supports truly distributed objects, which allow seamless integration of partitioning , scalability, fault tolerance, end-to-end quality of service, and many more implementation aspects into a distributed object without losing distribution and location transparency. The second profile manager illustrates how our architecture enables fully transparent access from CORBA applications to services on non-CORBA platforms", "fulltext": "INTRODUCTION\nMiddleware systems are heavily used for the implementation of\ncomplex distributed applications. Current developments like mobile\nenvironments and ubiquitous computing lead to new requirements\nthat future middleware systems will have to meet. Examples\nfor such requirements are the support for self-adaptation and self-optimisation\nas well as scalability, fault-tolerance, and end-to-end\nquality of service in the context of high dynamics. Heterogeneity in\nterms of various established middleware platforms calls for cross-platform\ninteroperability. In addition, not all future requirements\ncan be predicted today. A proper middleware design should be well-prepared\nfor such future extensions.\nCORBA is a well-known standard providing an architecture for object\n-based middleware systems [5]. CORBA-based applications are\nbuilt from distributed objects that can transparently interact with\neach other, even if they reside on different nodes in a distributed environment\n. CORBA objects can be implemented in different programming\nlanguages. Their interface has to be defined in a single,\nlanguage-independent interface description language (IDL). 
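As a point of reference for the extension mechanisms discussed in the remainder of this paper, the following sketch shows how a client in the standard Java language mapping binds to and invokes such an object. Hello and HelloHelper stand for the stub and helper classes an IDL compiler would generate from a hypothetical IDL interface Hello with a single string-returning operation sayHello(); only the ORB calls themselves are standard CORBA API.

import org.omg.CORBA.ORB;

public class HelloClient {
    public static void main(String[] args) {
        // Initialise the ORB (standard CORBA Java language mapping).
        ORB orb = ORB.init(args, null);

        // Bind to the remote object via a stringified reference passed as an argument;
        // a naming-service lookup would work just as well.
        org.omg.CORBA.Object obj = orb.string_to_object(args[0]);

        // Narrow the untyped reference to the IDL-defined interface.
        // Hello and HelloHelper are hypothetical, IDL-compiler-generated types.
        Hello hello = HelloHelper.narrow(obj);

        // The invocation looks like a local call; the generated stub handles the remoting.
        System.out.println(hello.sayHello());
    }
}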
Problem\n-specific extensions allow to add additional features to the underlying\nbase architecture.\nThis paper discusses existing approaches towards a more flexible\nmiddleware infrastructure and proposes a novel modularisation pattern\nthat leads to a flexible and extensible object middleware. Our\ndesign separates the handling of remote references from the object\nrequest broker (ORB) core and introduces the concept of ORB-independent\nportable profile managers, which are managed by a generic\nreference manager. The profile managers encapsulate all\ntasks related to reference handling, i.e., reference creation, reference\nmarshalling and unmarshalling, external representation of references\nas strings, and type casting of representatives of remote objects\n. The profile managers are independent from a specific ORB,\nand may even be loaded dynamically into the ORB. Only small\nmodifications to existing CORBA implementations are necessary\nto support such a design.\nOur architecture enables the integration of a fragmented object\nmodel into CORBA middleware platforms, which allows transparent\nsupport of many implementation aspects of complex distributed\nsystems, like partitioning, scalability, fault-tolerance, and end-to-end\nquality-of-service guarantees. It also provides a simple mechanism\nfor the integration of cross-platform interoperability, e.g., the\nintegration with services running on non-CORBA middleware platforms\n, like Jini or .NET remoting. Our design was named AspectIX\nand implemented as an extension to the open-source CORBA implementation\nJacORB, but is easily ported to other systems.\nThis paper is organised as follows: The following section discusses\nthe monolithic design of most current middleware systems in more\ndetail. It addresses the extension features of CORBA and discusses\nPermission to make digial or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistributed to lists, requires prior\nspecific permission and/or a fee.\nSEM 2005, September 2005, Lisbon, Portugal\nCopyright 2005 ACM, 1-59593-204-4/05/09...$5.00.\n69\n2\ntheir lack of flexibility. Section 3 explains our novel approach to\nmiddleware extensibility based on a generic reference manager\nwith pluggable profile managers. In Section 4, two CORBA extensions\nand the corresponding profile managers are presented: One\nfor integrating the powerful fragmented-object model into the system\n, and one for transparently accessing Jini services from CORBA\napplications. Section 5 evaluates the implementation effort and run-time\noverhead of our approach, and Section 6 presents some concluding\nremarks.\nMIDDLEWARE ARCHITECTURE AND EXTENSION POINTS\nCORBA uses a monolithic object model: CORBA objects have to\nreside on a specific node and are transparently accessed by client-side\nproxies called stubs. The stubs use an RPC-based communication\nprotocol to contact the actual object, to pass parameters, and to\nreceive results from object invocations. A CORBA-based middleware\nimplementation is free to choose the actual protocol, but has\nto support the Internet Inter-ORB Protocol (IIOP) for interoperability\n.\nCORBA uses interoperable object references (IORs) to address remote\nobjects. 
The IOR is a data structure composed of a set of profiles\n. According to the standard, each profile may specify contact\ninformation of the remote object for one specific interaction protocol\n; for interoperability between ORBs of different vendors, an\nIIOP profile needs to be present. In addition to protocol profiles, the\nIOR may contain a set of tagged components. Each tagged component\nis a name-value pair with a unique tag registered with the OMG\nand arbitrary associated data; these components define protocol-independent\ninformation, like a unique object ID.\nIn standard CORBA, IORs are created internally at the server ORB.\nA server application creates a servant instance and registers the\nservant with the ORB (or, to be more specific, with an object adapter\nof the ORB). Usually this IOR contains an automatically created\nIIOP profile that contains hostname, port, object adapter name, and\nobject ID for accessing the object via IIOP. Additionally, it may\ncontain other profiles representing alternative ways to access the\nobject.\nAn IOR can be passed to remote clients, either implicitly or explicitly\n. If a reference to a remote object is passed as a parameter or return\nvalue, the IOR data structure will be implicitly serialised and\ntransferred. Upon deserialisation, the receiving client ORB automatically\ninstantiates a local stub for accessing the remote object,\ninitialised with the information available in the IOR. If multiple\nprofiles exist in the IOR, the ORB may use a vendor-specific strategy\nto select a single profile that is understood by the ORB. The\nIIOP profile should be understood by all ORBs. Beside implicit\ntransfer, an explicit transfer is possible. The server application may\ncall a\nobject_to_string\nmethod at the ORB, which serial-ises\nthe IOR and transforms it to a string, an IOR-URL. This string\ncan be transferred to a client; the client application may call the local\nORB method\nstring_to_object\nto create the stub for\nthe remote object referenced by the IOR.\n2.2 Status Quo of Extensible Middleware\nMany practical tasks--e.g., authorisation, security, load balancing,\nfault tolerance, or special communication protocols--require extensions\nto the basic CORBA model. For example, fault-tolerant\nreplication requires that multiple communication addresses (i.e., of\nall replicas) are known to a client. Typically this means that these\naddresses have to be encoded into the remote reference; binding to\nand invoking methods at such a remote object require a more complex\nhandling at the client side compared to the simple stub-service\ndesign. As a second example, a peer-to-peer-like interaction between\nusers of a service might sometimes be desired. In this case,\nthe client-server structure needs to be completely abandoned.\nA good example for the lack of extensibility of standard CORBA is\nthe fault-tolerant CORBA standard (FT-CORBA) [5, Chapter 23].\nIt was not possible to define this standard in a way that all platform\nimplementations are portable across different ORBs. Instead, each\nFT-CORBA-compliant middleware has its vendor-specific implementation\ninside of a single ORB. A system design such as we envision\nallows to implement a generic \"FT-CORBA plugin\" that\nmakes any ORB, independent of its vendor, aware of fault-tolerance\nmechanisms.\nExisting concepts for interception, custom object adapters, and\nsmart proxies provide some mechanisms for such extensions. 
In\npart, they are included in the CORBA standard; in part, they are\nonly available in non-standardised middleware implementations.\nThe portable interceptor specification supports interception within\nthe official CORBA standard. The specification defines request interceptors\nand IOR interceptors as standardised way to extend the\nmiddleware functionality. Using request interceptors, several hooks\nmay be inserted both at client and at server side to intercept remote\nmethod calls. These hooks allow to redirect the call (e.g., for load\nbalancing), abort it with an exception (e.g., for access restriction),\nextract and modify context information embedded in the request,\nand perform monitoring tasks. A direct manipulation of the request\nis not permitted by the specification. Multiple request interceptors\nmay be used simultaneously. Interceptors add additional overhead\non each remote method invocation; furthermore, they do not allow\nto modify the remote invocations completely. IOR interceptors, on\nthe other hand, are called when a POA needs to create an IOR for a\nservice. This allows to insert additional data into the IOR (e.g., a\ntagged component for context information that is later used in a request\ninterceptor).\nCORBA allows to define custom POA implementations. The POA\nis responsible for forwarding incoming invocation requests to a\nservant implementation. This extension point allows to integrate\nserver-side mechanisms like access control, persistence, and life-cycle\nmanagement. It has, however, no influence on the interaction\nof clients with a remote service.\nBeside the standardised IIOP protocol, any CORBA implementation\nmay support custom invocation protocols. Additional IOR profiles\nmay be used for this purpose. However, establishing such extensions\nis not standardised and every vendor may include his own\nproprietary variant, which limits the interoperability of this approach\n.\nSmart proxies are a concepts for extending ORB flexibility, which\nis not yet standardised by the OMG, but which is implemented by\nsome ORB vendors, e.g., in the ACE ORB/Tao [11]. They allow to\nreplace the default CORBA stub by a custom proxy that may implement\nsome extended functionality. As such, they allow to implement\nparts of the object's functionality at the client side.\nClosely related to our research is the work on OpenORB at Lancaster\nUniversity [1]. This middleware project uses reflection to define\ndynamic (re)configuration of componentised middleware services.\nThe main difference to our design is that it completely restructures\nthe middleware architecture, whereas our concept with a reference\nmanager and pluggable profile managers is integrated in any existing\nCORBA implementation with only minimal modifications. It\nnevertheless provides equal flexibility forreconfigurations. Component\ntechnology could be used in the internal design of complex\nprofile managers.\n70\n3\nPolyORB [10] is a generic middleware system that aims at providing\na uniform solution to build distributed applications. It supports\nseveral personalities both at the application-programming interface\n(API) level and the network-protocol level. The personalities are\ncompliant to several existing standards. This way, it provides middleware\n-to-middleware interoperability. Implementations of personalities\nare specific to PolyORB, unlike our profile managers,\nwhich are intended to be portable between different ORB implementations\n. 
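As an illustration of the request interceptors described above, the following sketch aborts selected client invocations with an exception, which is one of the access-restriction uses mentioned. The policy check isAllowed() is a hypothetical placeholder; in a running ORB the interceptor would additionally have to be registered through an org.omg.PortableInterceptor.ORBInitializer named in the ORB properties.

import org.omg.CORBA.LocalObject;
import org.omg.CORBA.NO_PERMISSION;
import org.omg.PortableInterceptor.ClientRequestInfo;
import org.omg.PortableInterceptor.ClientRequestInterceptor;
import org.omg.PortableInterceptor.ForwardRequest;

// Client request interceptor that rejects invocations of operations the caller
// may not use; the redirect and monitoring uses mentioned above follow the same pattern.
public class AccessCheckInterceptor extends LocalObject implements ClientRequestInterceptor {

    public String name() { return "AccessCheckInterceptor"; }
    public void destroy() { }

    public void send_request(ClientRequestInfo ri) throws ForwardRequest {
        if (!isAllowed(ri.operation())) {
            // Aborting the call with a system exception is permitted; modifying the
            // request itself is not (see above).
            throw new NO_PERMISSION("operation " + ri.operation() + " not permitted");
        }
    }

    public void send_poll(ClientRequestInfo ri) { }
    public void receive_reply(ClientRequestInfo ri) { }
    public void receive_exception(ClientRequestInfo ri) throws ForwardRequest { }
    public void receive_other(ClientRequestInfo ri) throws ForwardRequest { }

    // Hypothetical application-level policy; a real check might inspect service contexts.
    private boolean isAllowed(String operation) {
        return !operation.startsWith("admin_");
    }
}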
We do not address the issue of genericity at the API level\nDESIGNING A GENERIC REFERENCE MANAGER\nThe fundamental extension point of an object middleware is the\ncentral handling of remote references. It is the task of any object\nmiddleware to provide mechanisms to create remote references, to\npass them across host boundaries, and to use them for remote invocations\n. As explained above, merely providing extension points at\nthe invocation level is insufficient for several complex tasks. The\nessential point of our work is to provide a very early extension point\nby completely separating the reference handling from the middleware\ncore.\nThe impact of such a design is not only that a single middleware implementation\ngets more flexible. It is also highly desirable to provide\nthis extensibility in an vendor-independent way. That is, an extension\nmodule should be portable across different middleware implementations\n. Furthermore, these extensions should be dynamically\npluggable and, in the extreme, be loaded on demand by the\nmiddleware ORB.\nOur design provides such a middleware architecture. It is currently\ndesigned as an extension to standard CORBA, and maintains interoperability\nwith any legacy CORBA system. Our design represents,\nhowever, a generic design pattern that easily applies to any other\nobject middleware.\nThe only prerequisite made is that remote references are represented\nby a sufficiently extensible data structure. In CORBA, the Inter-operable\nObject Reference (IOR) provides such a data structure.\nEach profile of the IOR represents an alternative way to contact the\nobject. Each profile type has its own data-type definition, described\nin CORBA IDL. Hence, at the IOR level, CORBA is open to arbitrary\nextensions. The IOR handling, however, is typically encapsulated\nin the internals of a CORBA-compliant ORB implementation.\nCurrently, if a vendor uses the power of IORs for custom extensions\n, these will only be implemented internally in the respective\nORB. The extension will not be portable.\n3.1 Overview of our Design\nOur approach introduces a generic interface for plugging in portable\nextension modules for all tasks related to IOR profile handling. This\nmakes it easy to support extended features like fault-tolerant replication\n, a fragmented object model, or transparent interaction with\nother middleware systems. This improves the flexibility of a\nCORBA middleware. We factored out the IOR handling of the\nORB and put it into pluggable modules. This way, custom handlers\nfor IOR profiles may be added to the ORB without modifying the\nORB itself. Dynamically downloading and installing such handlers\nat run-time further contributes to the richness of this approach.\nFactoring out the basic remote-reference handling of the ORB core\ninto a pluggable module affects five core functions of the middleware\n: first, the creation of new object references; second, the marshalling\nprocess, which converts object references into an external-ly\nmeaningful representation (i.e., a serialised IOR); third, the unmarshalling\nprocess, which has to convert such representation into\na local representation (e.g., a stub, a smart proxy, or a fragment);\nand fourth, the explicit binding operation, which turns some symbolic\nreference (e.g., a stringified IOR) into a local representation\nand vice versa. A fifth function is not as obvious as the others: The\ntype of a remote object reference can be changed as long as the remote\nobject supports the new type. 
In CORBA this is realised by a\nspecial narrow operation. As in some cases the narrow operation\nneeds to create a new local representation for a remote object, this\noperation has to be considered too.\nAn extension to CORBA will have to change all five functions for\nits specific needs. Thus, we collect those function in a module that\nwe call a profile manager. A profile manager is usually responsible\nfor a single type of IOR profile, but there may be reasons to allow\nprofile managers to manage multiple profile types. Profile managers\nare pluggable modules. A part of the ORB called reference manager\nmanages all available profile managers and allows for registration\nof new profile-manager modules.\nThe basic design can be found as a UML class diagram in Fig. Figure\n. Sometimes an application needs to access the reference manager\ndirectly. By calling\nresolve_initial_references()\n, a\ngeneric operation for resolving references to system-dependent objects\n, it can retrieve a reference to the reference manager pseudo object\nfrom the ORB. At the reference manager, profile managers can\nbe registered. As profile managers are responsible for a single or for\nmultiple IOR profile types, the registration requires a parameter\nidentifying those profile types. For identification a unique profile\ntag is used. Those tags are registered with the OMG to ensure their\nuniqueness. With the registration at the reference manager, it is exactly\nknown which profile managers can handle what profile types.\nSeveral tasks of reference handling are invoked at the reference\nmanager and forwarded to the appropriate profile manager. The architecture\nresembles the chain-of-responsibility pattern introduced\nby Gamma et al. [2].\n3.2 Refactoring the Handling of References\nIn the following, we describe the handling of the five core functions\nof reference handling in our architecture:\nCreating object references.\nIn traditional CORBA, new object\nreferences are created by registering a servant at a POA of the server\n. The POA usually maintains a socket for accepting incoming invocation\nrequests, e.g., in form of IIOP messages. The POA encodes\nthe contact address of the socket into an IIOP profile and creates\nan appropriate IOR. The details of the POA implementation\nProfileManager\n+insertProfile: void\n+profileToObject:\nCORBA::Object\n+objectToIor: IOR\n+iriToIor: IOR\n+iorToIri: string\n+narrow: CORBA::Object\nReferenceManager\n+setObjID: void\n+getObjID: string\n+createNewIor: IOR\n+registerProfileManager: void\n+getProfileManager: ProfileManager\n+getProfileManagers: ProfileManager[]\n+iorToObject: CORBA::Object\n+objectToIor: IOR\n+objectToIri: string\n+iorToString: string\n+stringToIor: IOR\n+narrow: CORBA::Object\nORB\n1\nn\nFigure 1: UML class diagram of the CORBA extension\n71\n4\nvaries from ORB to ORB. In general, the registration at the POA\ncreates an IOR and an internal data structure containing all necessary\ninformation for invocation handling.\nTo be as flexible as possible our extension completely separates the\ncreation of the IOR from the handling in a POA. The creation of an\nIOR requires the invocation of\ncreateNewIor()\nat the reference\nmanager. As standard CORBA is not able to clearly identify\nobject references referring to the same object, we added some operations\nto the reference manager that allow for integrating a universally\nunique identifier (UUID) into the IOR. 
The UUID is stored as\na tagged component and in principle can be used by any profile\nmanager.\nFor filling the IOR with profiles, an appropriate profile manager\nmust be identified. Operations at the reference manager allow the\nretrieval of profile managers being able to handle a specific profile\ntype. A profile manager has to provide an operation\ninsertProfile\n()\nthat adds a profile of a specific type into a given IOR. This\noperation has manager-dependent parameters so that each manager\nis able to create its specific profile. Instead of creating the IOR itself\n, the POA of our extended ORB has to create the IOR by asking\nthe IIOP profile manger for adding the appropriate information\n(host, port, POA name and object ID) to a newly created IOR. As\nan object may be accessible by multiple profiles, an object adapter\ncan ask multiple profile managers for inserting profiles into an IOR.\nMarshalling of object references\n. A CORBA object passed as a\nmethod parameter in a remote invocation needs to be serialised as\nan IOR. In classic CORBA, this is done ,,magically\" by the ORB by\naccessing internal data structures. In the Java language mapping, for\nexample, a CORBA object reference is represented as a stub that\ndelegates to a sub-type of\norg.omg.CORBA.Delegate\n. Instances\nof this type store the IOR.\nIn our architecture, we cannot assume any specific implementation\nof an object reference as each profile manager may need a different\none. Thus, there is no generic way to retrieve the IOR to be serialised\n. Instead, the reference manager is asked to convert the object\nreference into an IOR that in the end can be serialised. Therefore,\nthe reference manager provides an operation called\nobjectToIor\n()\n. The reference manager, in turn, will ask all known profile\nmanagers to do the job. Profile managers will usually check\nwhether the object reference is an implementation of their middleware\nextension. If so, the manager will know how to retrieve the\nIOR. If not, the profile manager will return a null reference, and the\nreference manager will turn to the next profile manager.\nUnmarshalling of object references.\nIf a standard ORB receives\na serialised IOR as a method parameter or return result, it implicitly\nconverts it into a local representation and passes this representation\n(typically a stub) to the application. This creation of the stub needs\nto be factored out of the marshalling system of the ORB to handle\narbitrary reference types.\nOur design delegates the object creation within unmarshalling to the\nreference manager by calling\niorToObject()\n. The reference\nmanager maintains an ordered list of profile types and corresponding\nprofile managers. According to this list, each profile manager is\nasked to convert the IOR in a local representation by calling\nprofileToObject\n()\n. This way the reference manager already analyses\nthe contents of the IOR and only asks those managers that are\nlikely to be able to convert the IOR into an object reference. A profile\nmanager checks the profile and the tagged components, and\ntries to create a local representation of the object reference. For example\n, an IIOP profile manager will analyse the IIOP profile. A\nCORBA-compliant stub is created and initialised with the IOR. The\nstub is returned to the reference manager, which returns it to the application\n. If a profile manager is not able to convert the profile or\nnot able to contact the object for arbitrary reasons, it will throw an\nexception. 
In this case the reference manager will follow its list and\nask the next profile manager. If none of the managers can deal with\nthe IOR, an exception is thrown to the caller. This is compatible\nwith standard CORBA for the case that no profile is understood by\nthe ORB.\nThe order of profile types and managers defines the ORB-dependent\nstrategy of referencing objects. As the first matching profile\ntype and manager wins, generic managers (e.g., for IIOP) should be\nat the end of the list whereas more specific managers should be at\nthe beginning.\nExplicit binding to remote references.\nA user application may\nexplicitly call the ORB method\nstring_to_object\n, passing\nsome kind of stringified representation of the references. Usually,\nthis may either be a string representation of the marshalled IOR, or\na corbaloc or corbaname URI.\nThis ORB operation can be split into two steps: First, the string is\nparsed and converted into an IOR object. As the stringified IOR can\nhave an extension-specific IRI format--an IRI is the international-ised\nversion of an URI--each profile manager is asked for conver-Figure\n2: Sequence chart for ORB::object_to_string\nobjectToIor\nobjectToIor\nobjectToIor\nobject_to_string\n:ProfileManager2\n:ProfileManager1\n:ORB\n:ReferenceManager\nreturn IOR string\nreturn IOR string\nreturn IOR\nreturn null\n72\n5\nsion by using\niriToIor()\n. The generic IOR format will be handled\nby the reference manager itself. Second, this IOR object is converted\ninto a local representation using the same process as used\nwith unmarshalling.\nCalling\nobject_to_string()\nis handled in a similar way.\nFirst, the reference passed as parameter is converted into an IOR\nobject in the same way as for marshalling. Second, the IOR object\nis converted into a string. The IOR is encoded as a hex string representing\nan URI in the IOR schema. The complete interaction is\nshown as sequence chart in Fig. 2.\nAs sometimes an application may want to convert an object reference\ninto a more-readable IRI, our extension also provides an operation\ncalled\nobjectToIri()\n. In a first step, the object reference\nis once again converted into an IOR. The second step, the conversion\ninto an IRI, is done with the help of the profile managers by invoking\niorToIri()\n. Those may provide profile-specific URL\nschemes that may not be compatible to standard CORBA. As the\nconversion of an IRI into an object reference can be handled in the\nprofile manager this is not a problem.\nNarrow operation.\nThe narrow operation is difficult to implement\n. In the Java language mapping the operation is located in a\nhelper class of the appropriate type, expecting an object reference\nof arbitrary other type. The implementation in a standard ORB assumes\nan instance compatible to the basic stub class, which knows\na delegate to handle the actual invocations. After successfully\nchecking the type conformance, the helper class will create a new\nstub instance of the appropriate type and connect it to the same delegate\n. With any CORBA extension it cannot be assumed that object\nreferences conform to the basic stub class.\nIn our extension the helper class invokes the operation\nnarrow()\nat the reference manager. Beside the existing object reference a\nqualified type name of the new type is passed to the operation as a\nstring. The reference manager once again will call every profile\nmanager for the narrow operation. A profile manager can check\nwhether the object reference belongs to its CORBA extension. 
If\nyes, the manager will take care of the narrow operation. If not, a null\nresult is returned and the reference manager will turn to the next\nprofile manager.\nAs a profile manager has to create a type-specific instance, it can\nuse the passed type name to create that instance. In languages that\nprovide a reflection API (e.g., Java) this is not difficult to realise. In\nother languages a generic implementation in a profile manager may\nbe impossible. Another drawback is that reflection is not very efficient\n. An alternative implementation is the placement of profile-specific\ncode into the helper class (or in other classes and functions\nof other language mappings). Our own IDL compiler called\nIDLflex [8] can easily be adapted to generate slightly different helper\nclasses. As a compromise helper classes may have profile-specific\ncode for the most likely profiles, but if other profiles are used, the\nabove mentioned control flow through reference and profile managers\nis used. Thus, most object references can be narrowed very fast\nand the ORB is still open to object-reference implementations of\nprofile managers that may have even be downloaded on demand.\nEXTENSIONS TO CORBA BASED ON PROFILE MANAGERS\nThis section illustrates two applications of our design. The first example\npresents the AspectIX profile manager, which integrates\nfragmented object into a CORBA middleware. The second examples\nprovides a transparent gateway from CORBA to Jini.\n4.1 AspectIX Profile Manager\nThe AspectIX middleware supports a fragmented object model\n[4,7]. Unlike the traditional RPC-based client-server model, the\nfragmented object model does no longer distinguish between client\nstubs and the server object. From an abstract point of view, a fragmented\nobject is an entity with unique identity, interface, behaviour\n, and state, as in classic object-oriented design. The implementation\n, however, is not bound to a certain location, but may be distributed\narbitrarily over various fragments. Any client that wants to\naccess the fragmented object needs a local fragment, which provides\nan interface identical to that of a traditional stub. This local\nfragment may be specific for this object and this client. Two objects\nwith the same interface may lead to completely different local fragments\n.\nThis internal structure gives a high degree of freedom on where\nstate and functionality of the object is located and how the interaction\nbetween fragments is done. The internal distribution and interaction\nis not only transparent on the external interface, but may\neven change dynamically at run-time. Fragmented objects can easily\nsimulate the traditional client-server structure by using the same\nfragment type at all client locations that works as a simple stub.\nSimilarly, the fragmented object model allows a simple implementation\nof smart proxies by using the smart proxy as fragment type\nfor all clients. Moreover, this object model allows arbitrary internal\nconfigurations that partition the object, migrate it dynamically, or\nreplicate it for fault-tolerance reasons. Finally, the communication\nbetween fragments may be arbitrarily adjusted, e.g., to ensure quality\n-of-service properties or use available special-purpose communication\nmechanisms. All of these mechanisms are fully encapsulated\nin the fragmented objects and are not directly visible on the outer\ninterface that all client application use.\nSupporting a fragmented object model is clearly an extension to the\nCORBA object model. 
With our architecture it is very easy to integrate\nthe new model. Just a new profile manager has to be developed\nand plugged into our ORB. When a client binds to a fragmented\nobject, a more complex task than simply loading a local stub is\nneeded. In our system, the local fragment is internally composed of\nthree components, as shown in Fig. 3: the View, the Fragment Interface\n(FIfc), and the Fragment Implementation (FImpl). The fragment\nimplementation is the actual code that provides the fragment\nbehaviour. The fragment interface is the interface that the client uses\n. Due to type casts, a client may have more than one interface instance\nfor the same fragmented object. Interfaces are instantiated in\nCORBA\nnarrow\noperations; all existing interfaces delegate invocations\nto the same implementation. The View is responsible for all\nmanagement tasks; it stores the object ID and IOR, keeps track of\nall existing interfaces, and manages dynamical reconfigurations\nthat exchange the local FImpl. The management needs to update all\nFragment\nClient\nFragment\nInterface\nView\nFragment\nImplementation\nFigure 3: Internal structure of a fragment\n73\n6\nreferences from FIfcs to the FImpl and has to coordinate method invocations\nat the object that run concurrently to reconfigurations.\nTo integrate such a model into a profile-manager-aware CORBA\nsystem, it is necessary to create IORs with a special profile for fragmented\nobjects (APX profile), and to instantiate the local fragment\n(View-FImpl-FIfc) when a client implicitly or explicitly binds to\nsuch an IOR.\nThe IOR creation is highly application specific, thus it is not fully\nautomatic as in traditional CORBA. Instead, the developer of the\nfragmented object may explicitly define, which information needs\nto be present in the object. The reference manager creates an empty\nIOR for a specified IDL type, and subsequently the APX profile\nmanager can be used to add a APX profile to this IOR. This profile\nusually consists of information about the initial FImpl type that a\nclient needs to load and contact information on how to communicate\nwith other fragments of the fragmented object. The initial FImpl\ntype may be specified as a simple Java class name in a Java-only\nenvironment, or as a DLS name (dynamic loading service, [3]) in a\nheterogeneous environment, to dynamically load the object-specific\nlocal fragment implementation. The contact information may,\ne.g., indicate a unique ID, which is used to retrieve contact addresses\nfrom a location service) or a multicast address for a fragmented\nobject that uses network multicast for internal communication.\nWhen an ORB binds to an IOR with an APX profile, the corresponding\nprofile manager first checks, if a fragment of the specific\nobject already exists; if so, a reference to the existing local fragment\nis returned. Otherwise, a new default view is created and connected\nto a newly instantiated FImpl. Profile information specifies how\nthis FImpl is loaded (direct Java class name for Java-only environments\n, a code factory reference, or a unique ID for lookup to the\nglobal dynamic loading service (DLS)). Finally, a default interface\nis built and returned to the client application.\n4.2 Jini Profile Manager\nJini is a Java-based open software architecture for network-centric\nsolutions that are highly adaptive to change [9]. 
It extends the Java\nprogramming model with support for code mobility in the networks\n; leasing techniques enables self-healing and self-configuration\nof the network. The Jini architecture defines a way for clients\nand servers to find each other on the network. Service providers\nsupply clients with portable Java objects that implement the remote\naccess to the service. This interaction can use any kind of technology\nsuch as Java RMI, SOAP, or CORBA.\nThe goal of this CORBA extension is to seamlessly integrate Jini\nservices into CORBA. Jini services should be accessible to CORBA\nclients like any other CORBA object. Jini services offer a Java interface\n. This interface can be converted into an IDL interface using\nthe Java-to-IDL mapping from the OMG [6]. Our CORBA extension\nprovides CORBA-compatible representatives, special proxy\nobjects that appear as CORBA object references, but forward invocations\nto a Jini service. Such references to Jini services can be registered\nin a CORBA naming service and can be passed as parameters\nto any other CORBA object. CORBA clients do not have to\nknow that those references refer to Jini services.\nThe reference to a Jini service is represented as a CORBA IOR. The\nJini profile manager offers operations to create a special Jini profile\nthat refers to a Jini service. It provides operations to marshal references\nto such services by retrieving the original IOR from the spe-cialised\nproxy, to unmarshal IORs to a newly created proxy, and to\ntype cast a proxy to another IDL type.\nThe Jini profile stores a Jini service ID, and optionally a group name\nand the network address of a Jini lookup service. The profile manager\nuses automatic multicast-based discovery to find a set of\nlookup services where Jini services usually have to register their\nproxy objects. Those lookup services are asked for the proxy of the\nservice identified by the unique service ID. If an address of a lookup\nservice is given in the profile, the profile manager will only ask this\nservice for a proxy. The retrieved proxy is encapsulated in a wrapper\nobject that on the outside looks like a CORBA object reference.\nInside, it maps parameters from their IDL types to the corresponding\nJava types and forwards the invocation to the original Jini\nproxy.\nJini services may provide a lease for service usage. In the IOR, a\nmethod can be named that is supposed to retrieve a lease for the\nservice. This lease will be locally managed by the profile manager\nand automatically extended if it is due to expire.\nThis extension shows that it is possible to encapsulate access to other\nmiddleware platforms inside of profile managers. In case of the\nJini profile manager our implementation is rather simple. So, return\nparameters referring to other Jini services are not (yet) converted to\nCORBA object references. We also did not yet implement Jini IOR\nprofiles that encapsulate abstract queries: In this case not a specific\nservice ID is stored in the profile, but query parameters for the\nlookup at the lookup services. This way, it will be possible to create\nIORs with abstract meaning, e.g., encapsulating a reference to the\nnearest colour printer service (assumed that such services are registered\nas Jini services at a lookup service).\nEVALUATION\nTwo aspects need to be discussed to evaluate our design: First, the\neffort that is needed to integrate our concept into an existing ORB;\nsecond, the run-time overhead that this approach introduces. 
As we\nused JacORB as basis for our implementation, we compare our implementation\nwith the standard JacORB middleware.\n5.1 Implementation Cost\nThe integration of a generic reference manager into JacORB version\n2.2 affected two classes:\norg.jacorb.orb.ORB\nand\norg.jacorb.orb.CDROutputStream\n. In\nORB\n, the\nmethods\nobject_to_string\n,\nstring_to_object\n,\nand\n_getObject\n(which is used for demarshalling) need to be\nreplaced. In addition, the reference manager is automatically loaded\nat ORB initialisation and made available as initial reference. In\nCDROutputStream\n, the method\nwrite_Object\nneeds to\nbe re-implemented to access the reference manager. These changes\namount to less than 100 lines of code (LOC). The generic reference\nmanager consists of about 500 LOC; the IIOP profile manager contains\n150 LOC in addition to the IIOP implementation reused from\nJacORB.\nThese figures show that our design easily integrates into an existing\nCORBA ORB. Moreover, the generic reference manager and profile\nmanagers may be implemented fully independent of ORB internals\n, making them portable across ORBs of different vendors-with\nthe obvious restrictions to the same implementation programming\nlanguage.\n5.2 Run-time Measurements of our Implementation\nWe performed two experiments to evaluate the run-time cost of our\napproach; all tests were done on Intel PC 2.66 GHz with Linux\n2.4.27 operating system and Java 1.5.0, connected with a 100 MBit/\ns LAN. In all cases, the generic reference manager in our ORB was\nconnected with three profile managers (IIOP, APX, Jini)\nThe first experiment examines the binding cost. For this purpose, a\nstringified IOR of a simple CORBA servant with one IIOP profile\nis generated. Then, this reference is repeatedly passed to the ORB\n74\n7\nmethod\nstring_to_object\n, which parses the IOR and loads\na local client stub. Table 1 shows the average time per invocation of\n100,000 iterations.\nThe second test analyses the marshalling cost of remote references.\nAn empty remote method with one reference parameter is invoked\n100,000 times. These operation involve first a serialisation of the\nreference at client-side and afterwards a deserialisation and binding\nat the servant side; all operations are delegated to the reference\nmanager in our ORB. Table 2 shows the results of this test.\nBoth experiments show, that the increase in flexibility and extensibility\nis paid for with a slight decrease in performance. It is to be\nnoted that our reference implementation has not yet been optimised\nfor performance, so further improvement might be possible.\nCONCLUSION\nWe have presented a novel, CORBA-compliant middleware architecture\nthat is more flexible and extensible than standard CORBA.\nIt defines a portable reference manager that uses dynamically load-able\nprofile managers for different protocol profiles.\nThe concept is more flexible than traditional approaches. Unlike\nsmart proxies, it does not only modify the client-side behaviour, but\nallows to modify the complete system structure. In contrast to vendor\n-specific transport protocols, it provides a general extension to\nCORBA that allows to implement portable profile managers, which\nin the extreme may even be dynamically loaded as plug-in at run-time\n. 
Different from CORBA portable interceptors, it gives the\nservice developer full control over the IOR creation process, has\nless overhead than interceptors, and allows arbitrary modification\nof client requests.\nThe concept itself is not limit to CORBA. In fact, it defines a generic\ndesign pattern for any existing and future middleware platforms.\nExtracting all tasks related to the handling of remote references into\nan extensible module allows to create middleware platforms that are\neasily extended to meet even unanticipated future requirements.\nModern developments like ubiquitous computing, increased scalability\nand reliability demands and so on make it likely that such extensions\nwill be demanded. Our design principle allows to implement\nfuture systems with best efficiency and least implementation\neffort.\nWe have presented in some detail two applications of our architecture\n. Besides traditional IIOP for compliance with CORBA, we\nhave implemented a profile manager for fragmented objects and\none for accessing Jini services transparently as CORBA objects.\nCurrently, we are working on profile managers to handle fault-tolerant\nCORBA and a bridge to Java RMI.\nREFERENCES\n[1] G. Coulson, G. Blar, M. Clarke, N. Parlavantzas: The design of\na configurable and reconfigurable middleware platform.\nDistributed Computing 15(2): 2002, pp 109-126\n[2] E. Gamma, R. Helm, R. Johnson, J. Vlissides: Design\npatterns. Elements of reusable object-oriented software.\nAddison-Wesley, 1995.\n[3] R. Kapitza, F. Hauck: DLS: a CORBA service for dynamic\nloading of code. Proc. of the OTM'03 Conferences; Springer,\nLNCS 2888, 2003\n[4] M. Makpangou, Y. Gourhand, K.-P. Le Narzul, M. Shapiro:\nFragmented objects for distributed abstractions. Readings in\nDistr. Computing Systems, IEEE Comp. Society Press, 1994,\npp. 170-186\n[5] Object Management Group: Common object request broker\narchitecture: core specification, version 3.0.3; OMG\nspecification formal/04-03-12, 2004\n[6] Object Management Group: Java language mapping to OMG\nIDL, version 1.3; OMG specification formal/2003-09-04,\n2003\n[7] H. Reiser, F. Hauck, R. Kapitza, A. Schmied: Integrating\nfragmented objects into a CORBA environment. Proc. of the\nNet.ObjectDays, 2003\n[8] H. Reiser, M. Steckermeier, F. Hauck: IDLflex: A flexible and\ngeneric compiler for CORBA IDL. Proc. of the Net.Object\nDays, Erfurt, 2001, pp 151-160\n[9] Sun microsystems: Jini technology architectural overview.\nWhite paper, Jan 1999\n[10] T. Vergnaud, J. Hugues, L. Pautet, F. Kordon: PolyORB: a\nschizophrenic middleware to build versatile reliable\ndistributed applications. Proc. of the 9th Int. Conf. on Reliable\nSoftware Technologies Ada-Europe 2004 (RST'04),;\nSpringer, LNCS 3063, 2004, pp 106-119\n[11] N. Wang, K. Parameswaran, D. Schmidt: The design and\nperformance of meta-programming mechanisms for object\nrequest broker middleware. Proc. 
of the 6th USENIX\nConference on Object-Oriented Techology and Systems\n(COOTS'01), 2001\nTable 1: Execution time of ORB::string_to_object invocations\nStandard JacORB 2.2.1\n0.22 ms\nAspectIX ORB with reference manager\n0.28 ms\nOverhead\n0.06 ms (27%)\nTable 2: Complete remote invocation time with one reference\nparameter\nStandard JacORB 2.2.1\n0.64 ms\nAspectIX ORB with reference manager\n0.72 ms\nOverhead\n0.08 ms (12.5%)\n75\n", "keywords": "integration;Flexible and extensible object middleware;IIOP;Software architecture for middleware;CORBA;object oriented;Extensions;Extensibility;extensibility;Middleware architecture;Distiributed applications;Ubiquitous computing;Object references;Middleware;extensible and reconfigurable middleware;Interoperability;distributed objects;implementation;Fault tolerant CORBA;Reference manager;profile manager;middleware interoperability;middleware architecture;Profile manager;Remote object;encapsulation;Flexibility;IOR;Middleware platform;Middleware systems"} {"name": "90", "title": "Fan-out Measuring Human Control of Multiple Robots", "abstract": "A goal of human-robot interaction is to allow one user to operate multiple robots simultaneously. In such a scenario the robots provide leverage to the user's attention. The number of such robots that can be operated is called the fan-out of a human-robot team. Robots that have high neglect tolerance and lower interaction time will achieve higher fan-out. We define an equation that relates fan-out to a robot's activity time and its interaction time. We describe how to measure activity time and fan-out. We then use the fan-out equation to compute interaction effort. We can use this interaction effort as a measure of the effectiveness of a human-robot interaction design. We describe experiments that validate the fan-out equation and its use as a metric for improving human-robot interaction.", "fulltext": "INTRODUCTION\nAs computing becomes smaller, faster and cheaper the\nopportunity arises to embed computing in robots that\nperform a variety of \"dull, dirty and dangerous\" tasks that\nhumans would rather not perform themselves. For the\nforeseeable future robots will not be fully autonomous, but\nwill be directed by humans. This gives rise to the field of\nhuman-robot interaction (HRI). Human-robot interaction\ndiffers from traditional desktop GUI-based direct\nmanipulation interfaces in two key ways. First, robots must\noperate in a physical world that is not completely under\nsoftware control. The physical world imposes its own\nforces, timing and unexpected events that must be handled\nby HRI. Secondly, robots are expected to operate\nindependently for extended periods of time. The ability for\nhumans to provide commands that extend over time and can\naccommodate unexpected circumstances complicates the\nHRI significantly. This problem of developing interfaces\nthat control autonomous behavior while adapting to the\nunexpected is an interesting new area of research.\nWe are very interested in developing and validating metrics\nthat guide our understanding of how humans interact with\nsemiautonomous robots. We believe that such laws and\nmetrics can focus future HRI development. What we are\nfocused on are not detailed cognitive or ergonomic models\nbut rather measures for comparing competing human-robot\ninterfaces that have some validity. In this paper we look at a\nparticular aspect of HRI, which is the ability for an\nindividual to control multiple robots simultaneously. 
We\nrefer to this as the fan-out of a human-robot team. We\nhypothesize that the following fan-out equation holds,\nIT\nAT\nFO\n=\n\nwhere\n\nFO=fan-out or the number of robots a human can\ncontrol simultaneously,\n\nAT=activity time or the time that a robot is actively\neffective after receiving commands from a user,\n\nIT=interaction time or the time that it takes for a\nhuman to interact with a given robot.\nIn this paper we develop the rationale for the fan-out\nequation and report several experiments validating this\nequation. We show that the equation does describe many\nphenomena surrounding HRI but that the relationships are\nmore complex than this simple statement of fan-out implies.\nWe also describe the experimental methodologies\ndeveloped in trying to understand fan-out. We present them\nas tools for evaluating and measuring design progress in\nHRI systems.\nThe robotic task domain that we have focused on is search\nand rescue where robots must cover an indoor, urban or\nterrain environment in search of victims, threats, problems,\nor targets. Although we have restricted our work to this\ndomain, we are hopeful that our methods and metrics will\nextend to other HRI domains.\n\nPRIOR WORK\nOthers have done work on human-robot interaction.\nSheridan has outlined 5 levels of robot control by users\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise,\nor republish, to post on servers or to redistribute to lists, requires prior\nspecific permission and/or a fee.\nCHI 2004, April 2429, 2004, Vienna,\n\nAustria.\n\nCopyright 2004 ACM 1-58113-702-8/04/0004...$5.00.\nCHI 2004\n\nPaper\n24-29 April\n\nVienna, Austria\nVolume 6, Number 1\n\n\n\n\n\n231\n\n\n[14]. The levels range from teleoperation, where the user is\ndirectly engaged with the actuators of robot, through\nvarious levels of computer intervention between the user,\nthe sensors and the actuators, to full autonomy with users\nmerely setting very high-level goals. Fong and Thorpe [8,\n9] demonstrated collaborative control where the human and\nthe robot share the initiative, with the robot seeking\nguidance when needed. These with a variety of other\napproaches are characterized by their system architecture.\nAlthough human-robot interfaces are provided, there is little\nstudy of the nature of that interface nor on how to evaluate\nthe quality of the interface.\nThere have been a number of proposals for new modalities\nfor controlling robots including haptics, gestures, PDAS\n[7]. Others have looked at the visualization and context\nmemory problems that arise when driving robots. The\nEgosphere is one such solution [6].\nThere is also a great deal of work on using multiple robots\non a task. There are fully autonomous swarming approaches\nsuch as Bruemmer, et al [3]. These have very little human\nintervention because the desired task is preprogrammed.\nOther autonomous robot teams have done janitorial tasks,\nbox pushing and earth moving [12, 13]. All of these teams\nhave used very little human intervention. Other multi-robot\nsystems have robots operating in formations [2, 4, 16] or\naccording to predefined deployment behaviors [15] These\napproaches allow users to direct the work of a number of\nrobots simultaneously. Fong et. al. 
[10] point out the problems with dividing human attention among multiple robots and propose a collaborative control model for driving. In essence, their proposals increase the neglect and activity time of the robots to achieve higher fan-out. Others have used a "select and command" model for controlling multiple robots [11].

However, none of these have been carefully evaluated as to the advantages or decrease in effort afforded by the various user interface designs. In most cases the control architecture is intertwined with the human-robot interface, making it hard to distinguish which part of the solution is contributing to progress. In this paper we describe a model for isolating and measuring the human-robot interface for teams of robots.

SAMPLE ROBOT WORLD
To explain our fan-out ideas, we pose the example robot world shown in figure 1. In this world there are robots, targets, and obstacles (trees and rocks). The task is for all targets to be touched by robots as soon as possible. This is an abstraction of a search task.

We can assume a simple-minded robot that accepts a direction and a distance from its user and will move in that direction until it either travels the indicated distance or encounters an obstacle, in which case it stops. In figure 1 the robot has three legs to its journey, each characterized by a different user command. However, the robot's guidance may not be perfect. It may drift to the left on the first leg, run into the trees, and stop early. Its odometry may be faulty and it may overrun the end of leg one, necessitating additional commands from the user to extricate it from the dead-end of rocks and trees.

Figure 1 Simple Robot World

This example illustrates two measures that are important to our model of fan-out. The first is neglect time: the time the robot can run while being ignored by its user. Neglect time is a measure of a robot's autonomy. This is very similar to Crandall's neglect tolerance [5]. Unlike Crandall's work, we are interested in multiple robots rather than efficient interfaces to a single robot. The second measure is activity time, which is the time the robot will make effective progress before it either gets a new command from the user, stops making effective progress, or completes the command the operator gave it. Neglect time and activity time are not the same. For example, if the user does not trust the odometry, he may watch the robot to make certain it does not overshoot the end of leg 1. The robot is independently active, but is not being neglected. This difference has an important impact on multiplexing human attention among multiple robots.

The relationship between activity time (AT) and neglect time (NT) is determined by the amount of overlap (O) between robot activity and interaction time (IT). Overlap is the percentage of the interaction time where the robot is also active:

AT = O * IT + NT

This relationship is illustrated by driving a car. The interaction time and the activity time of a car are almost completely overlapped (O = 1.0). A car is almost always moving when the driver is steering it. In addition, the neglect time for a car is very small, therefore AT is not much larger than IT. Plugging this into the fan-out equation, we see that a person cannot drive more than one car at once. In the case of a manufacturing robot, the robot is not at all active during setup (O = 0.0), but the robot will run for days or months after a day of setup.
Thus AT is many times\nlarger than IT and the fan-out is quite high. The\nexperimental models that we finally used are based on AT\nand IT. The relationship between O, NT and AT does not\nimpact our comparisons of various human-robot interfaces.\nCHI 2004\n\nPaper\n24-29 April\n\nVienna, Austria\nVolume 6, Number 1\n\n\n\n\n\n232\n\n\nIn our simple robot world we can give the robot more\nintelligence. If it encounters an obstacle it can bounce off\nand continue trying to make progress towards its next check\npoint. Thus the robot will operate longer without\nintervention (increased AT) and the user can trust it more\n(increased NT). Adding some local vision and planning, the\nrobot might also detect the cul-de-sac of trees and rocks and\nnot enter there without more explicit instructions. Again the\nrobot can be trusted more and NT can increase. Increasing a\nrobot's trusted intelligence can increase its neglect time and\nthus increase fan-out.\n\nRATIONALE FOR FAN-OUT\nThe primary reason for our interest in fan-out is that it is a\nmeasure of how much leverage of human attention is being\nprovided by a robot's automated capabilities. Autonomy is\nnot an end unto itself. We may study it in that way, but in\nreality we want automation because it expands human\ncapacity. Fan-out is a measure of the leverage of human\nattention.\nThe ability for a human to control multiple robots depends\nupon how long that robot can be neglected. If a robot makes\nno progress while being neglected, the human will have no\nattention to devote to other robots. However, as will be\nshown, it is difficult to measure neglect time. Instead we\nmeasure activity time, which is an average amount of time\nthat a robot functions between instructions from the user. If\nwe divide the average activity time by the amount of time a\nuser must interact with each robot, then we get the fan-out\nequation.\nIT\nAT\nFO\n=\n\nHowever, the relationships are not a simple as this analysis\nmight indicate. We will discuss these interrelationships\nalong with the experimental data. The key point, we\nbelieve, in understanding these complexities is that IT is not\nmonolithic. Our current hypothesis is that there are at least\n4 components to interaction time. They are:\n1. Robot Monitoring and Selection reviewing the\nstate of all robots and deciding which robot needs\nthe user's attention next.\n2. Context Switching when switching attention\nbetween robots the user must spend time\nreaquiring the goals and problems of the new\nrobot.\n3. Problem Solving having reaquired the situation\nthe user must analyze the problem and plan a\ncourse of action for the robot.\n4. Command Expression the user must manipulate\ninput devices tell the robot what to do.\nTraditional direct-manipulation/desktop interfaces generally\nexhibit only components 3 and 4. The experiments that we\nhave performed show the effects of some of these\ncomponents in the ways that the data deviates from the\npredictions of the fan-out equation.\nHaving broken down IT, it seems that IT should increase\nwith FO. This is because the more robots there are to\ncontrol, the greater the monitoring and robot selection time.\nAlso the more diverse situations the robots find themselves\nin the greater the context-switching time. As we will see in\nthe data smarter robots can offload some of the problem\nsolving time from the user and thus reduce IT.\n\nMEASURING HRI\nOur hypothesis is that the fan-out equation provides a\nmodel for how humans interact with multiple robots. 
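Before turning to measurement, a minimal numeric sketch of the two relations introduced above (FO = AT / IT and AT = O * IT + NT) may be helpful; the numbers below are illustrative assumptions echoing the car and manufacturing-robot examples, not measurements from our experiments.

def activity_time(overlap, it, nt):
    # AT = O * IT + NT: how long a robot is effective per interaction cycle
    return overlap * it + nt

def fan_out(at, it):
    # FO = AT / IT: how many such robots one user could keep busy
    return at / it

# Car: interaction and activity almost fully overlapped, negligible neglect time
car_at = activity_time(overlap=1.0, it=1.0, nt=0.05)       # arbitrary time units
print(fan_out(car_at, it=1.0))      # about 1: a person cannot drive two cars

# Manufacturing robot: no overlap, very long neglect time after a day of setup
factory_at = activity_time(overlap=0.0, it=8.0, nt=30 * 24.0)   # hours
print(fan_out(factory_at, it=8.0))  # 90: one person can tend many such machines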
The\nchallenge in validating the fan-out equation is that\ninteraction time (consisting of planning, monitoring and\nsolving) occurs mostly in the user's mind and is therefore\ndifficult to measure directly.\nMeasuring Neglect Time(NT) and Activity Time(AT)\nThe properties of NT and AT are characteristics of a robot's\nability to function in a world of given complexity. These\ntimes are functions of robot ability, task complexity and the\nuser's understanding of the robot's ability.\nIn measuring either NT or AT we must control for the\ncomplexity of the task being posed. If the task posed in\nfigure 1 had half as many trees, or no rocks, the task itself\nwould be simpler and the robot could be safely neglected\nfor a longer time. In essence, the more challenges a robot\nmust face, for which it does not have sufficient autonomy,\nthe lower NT and AT will be. The nature of the challenges\nwill also have an impact. Therefore any measurements of\nNT or AT must control for the nature of the tasks being\nposed. We term this task complexity.\nOur first approach to measuring NT ignored the role of the\nuser in determining the robot's activity. We assumed that\nthere was some measurement of NT in the context of a\ngiven task complexity. To measure NT we would randomly\nplace a robot at some location in the world, give it a random\ngoal and then measure the average time that the robot\nwould operate before reaching the goal or failing to make\nprogress.\nHowever, this approach failed to produce data that was\nconsistent with the fan-out equation. After reviewing the\nvideotapes of actual usage we found that this a priori\nmeasurement consistently overestimated NT. We identified\nthree problems with this approach. The first is demonstrated\non leg 1 of the robot route in figure 1. The robot could\nfeasibly be neglected until it ran into the cul-de-sac of trees\nand rocks. However, users regularly saw such situations and\nredirected the robot early to avoid them. The second reason\nwas that users frequently did not trust the robot to work out\nlow-level problems. Users would regularly give the robots\nshorter-term goals that the user believed were possible.\nThirdly, we did not have a good measure for how much a\nrobot's activity overlapped the user's attention to the robot.\nCHI 2004\n\nPaper\n24-29 April\n\nVienna, Austria\nVolume 6, Number 1\n\n\n\n\n\n233\n\n\nAll of these failing led to NT predictions that were much\nlarger than actual usage.\nA simpler and more accurate measure we found was\nactivity time (AT). We measure the time from a user's\ncommand of a robot until that robot either stops making\nprogress or another use command is issued to it. We\naverage these times across all robots in an experiment. This\nmeasure of activity time fit better with the fan-out equation\nand was much easier to measure.\nThis activity time measure is dependent on determining\nwhen a robot is making effective progress and when it is\nnot. In our simple robot world, robots stop when they reach\nan obstacle. Thus a robot is active when it is moving.\nBecause of the nature of our test user interface robots are\nalways getting closer to the goal if they are moving.\nTherefore our simplistic measure of effective progress\nworks in this case. For more intelligent robots this is more\ncomplicated. An intelligent robot must balance goal\nprogress with obstacle or threat avoidance. This can lead to\ninteresting feedback and deadlock problems, which cannot\nalways be detected. 
These issues form the basis for many of the conundrums of Isaac Asimov's robopsychology [1]. In many situations, however, we can detect lack of progress and thus the end of an activity.

Measuring FO
Our next challenge is to measure fan-out (FO). This, of course, is the measure that we want to maximize because it is an estimate of the leverage provided by our human-robot teams. Our first approach to fan-out measurement was the fan-out plateau. For a given task, we must have a measure of task effectiveness. In our simple robot world, this might be the total time required to locate all N targets. In other scenarios it might be the total amount of terrain surveyed or the total number of purple hummingbirds sighted. Parker has identified a variety of task-effectiveness metrics for use with robot teams [13]. The fan-out plateau is shown in figure 2. As we give a user more robots to work with, the task effectiveness should rise until it reaches a point where the user's attention is completely saturated. At this point adding more robots will not increase task effectiveness and may in some situations cause a decrease if superfluous robots still demand user attention.

The attractiveness of the fan-out plateau measure is that it directly measures the benefits of more robots on actual task accomplishment. The disadvantage is that it is very expensive to measure. We might hypothesize that for a given HRI team, the fan-out plateau would be between 4 and 12 robots. We then must take 8 experimental runs to find the plateau (if our hypothesis was correct). Individual differences in both users and tasks require that we take many runs at each of the 8 possibilities in order to develop a statistically significant estimate of the fan-out plateau. Since realistic tasks take 20 minutes to several hours to accomplish, this measurement approach rapidly consumes an unrealistic amount of time.

Figure 2 Fan-out Plateau (total task effectiveness as a function of the number of robots)

An alternative measure of FO comes from our ability to determine which robots are active and which are not. If we have knowledge of activity as needed by our AT measurement, then we can sample all of the robots at regular time intervals and compute the average number of active robots. What we do to measure FO is to give the user many more robots than we think is reasonable and then measure the average number of active robots across the task. This gives us a strong estimate of actual fan-out that is relatively easy to measure. Note that this is only an estimate of fan-out because the large number of robots introduces its own cognitive load. We believe, however, that ignoring unneeded robots will not impact the value of the metrics for comparisons among competing HRI solutions.

Task saturation
A key problem that we have discovered in measuring fan-out is the concept of task saturation. This is where the task does not warrant as many robots as the user could effectively control. A simple example is in figure 1. If we add another robot to the task, the effectiveness will not go up because one robot can reach the target just as fast as two or three. The problem is that the task does not justify more workers. We will see this effect in the experiments.

Measuring IT
To improve human-robot interaction (HRI) what we really want is a measure of the interaction time (IT).
IT is the measure that will tell us whether the user interface is getting better, or just the robotic automation. Our problem, however, is that we do not have a way to directly measure IT. There are so many things that can happen in a user's mind that we cannot tap into. Measuring Fitts'-law effects or keystroke effects will only capture the command-expression component of the interface. Our experience is that in a multi-robot scenario, command expression is a minor part of the interaction time.

Solving the fan-out equation for IT can give us a method for its measurement:

IT = AT / FO

However, this measure of IT is only valid if the fan-out equation is valid and the FO and AT measures are true measures. As has been shown in the preceding discussion, we have good estimates for both AT and FO, but it would be too strong a claim to say that we had accurately measured either value. Our approach, then, is to replace IT with what we call interaction effort (IE). Interaction effort is a unitless value that is defined as:

IE = AT / FO

This is obviously derived from the fan-out equation, but it makes no claims of being an exact time. It is a measure of how much effort is required to interact as part of a human-robot team. Unlike interaction time, interaction effort does not give us an absolute scale for measuring interaction time. The interaction effort measure does give us a way to compare the interactive efficiency of two HRI designs. A comparison tool is sufficient as a measure of progress in HRI design.

Validating the fan-out equation
Our model of IE depends upon the validity of the fan-out equation, which is difficult to prove without measuring IT or IE directly.

Our approach to validating the fan-out equation is as follows. If we have 1) a set of robots that have varying abilities and thus varying neglect times, 2) all robots have identical user interfaces, and 3) we use the various types of robots on similar tasks of the same task complexity, then, if the fan-out equation is valid, the measure of IE should be constant across all such trials. This should be true because the user interface is constant and IE should be determined by the user interface. The experiments described in the remainder of this paper will show where this does and does not hold.

ROBOT SIMULATIONS
As a means of validating the fan-out equation we chose robot simulations rather than actual robots. We did this for several reasons. The first is that it is much easier to control the task conditions. When trying to validate the fan-out equation we need careful controls. These are hard to achieve with real robots in real situations. Secondly, we are trying to discover laws that model how humans interact with multiple independent robot agents. The physical characteristics of those agents should not change the laws. Third, we want to test robots with a variety of levels of intelligence. Changing a simulated robot's sensory and reasoning capacity is much simpler than building the corresponding robots. To perform the experiments that we did, we would have needed a fleet of 15 robots (5 each of 3 types), with identical interfaces.

There is one way in which the real world differs sharply from our simulated world. In the real world, robots crash into obstacles, fall into holes, and run into each other. Safety is a real issue and lack of safety reduces the user's trust.
As discussed earlier reduced trust leads to reduced\nactivity times. In our simulations\n,\nrobots never crash or fail\ntherefore trust is higher than reality. However, we believe\nthat this will be reflected in different activity times and\nshould not affect the validity of the fan-out equation.\nThe task\nFor our fan-out experiments we chose a maze-searching\ntask. We built a random maze generator that can\nautomatically generate tasks of a given complexity. We\ndefined task complexity as the dimensions of the maze,\ndensity of obstacles and number of targets. Using our\nrandom maze generator we were able to create a variety of\ntasks of a given complexity. After random placement of\nobstacles and targets the maze was automatically checked\nto make certain that all targets were reachable. Our measure\nof task effectiveness was the time required for all targets to\nbe touched by a robot.\nAll robots had the same user interface as shown in figure 3.\nThe user controls are quite simple and the same for all\nexperiments. Each robot has a goal represented by a small\nsquare that the user can drag around. The robot will attempt\nto reach the goal. The variation in robots is in how they deal\nwith obstacles. For less intelligent robots the user can set a\nseries of very short-term goals with no obstacles. For more\nintelligent robots more distant goals can be used with the\nrobot working out the intervening obstacles.\n\nFigure 3 Dragging Robots\nA major variation of this user interface, that we used in\nmost of our tests, obscures all regions of the maze that have\nnot been visited by robots, as in figure 4. The idea is that\nuntil a robot reaches an area and broadcasts what it finds,\nthe terrain is unknown.\nCHI 2004\n\nPaper\n24-29 April\n\nVienna, Austria\nVolume 6, Number 1\n\n\n\n\n\n235\n\n\n\nFigure 4 Obscured World\nThree types of robots\nTo test our fan-out theory of constant IE for a given user-interface\nwe developed three types of simulated robots. The\nfirst type (simple) heads directly towards its current goal\nuntil it reaches the goal or runs into an obstacle. This is a\nrelatively simple robot with little intelligence.\nThe second type (bounce) bounces off obstacles and\nattempts to get closer to the goal even if there is no direct\npath. It never backs up and thus gets trapped in cul-de-sacs.\nThe bouncing technique solves many simple obstacle\navoidance problems but none that require any global\nknowledge. This robot stops whenever it cannot find a local\nmovement that would get it closer to the goal than its\ncurrent position.\nThe third type of robot (plan) has a \"sensor radius\". It\nassumes that the robot can \"see\" all obstacles within the\nsensor radius. It then uses a shortest path algorithm to plan\na way to reach the point on its sensor perimeter that was\nclosest to the goal. This planning is performed after every\nmovement. This robot stops whenever its current position\nwas closer to the goal than any reachable point in its sensor\nperimeter. This robot can avoid local dead-ends, but not\nlarger structures where the problems are larger than its\nsensor radius.\nWe measured average neglect time for each of the types of\nrobots using the random placement/task method. As robot\nintelligence increased, neglect time increased also. This\ngave us three types of simulated robots with identical tasks\nand user interfaces.\nVALIDATING THE FAN-OUT EQUATION\nTo validate the fan-out equation we performed a number of\nexperiments using our simulated robot world. 
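Concretely, the measures described above can be computed from a time-stamped log of robot events. The following sketch assumes a hypothetical log format (a 'command' or 'stop' record per robot) and is not the instrumentation actually used in our experiments; it simply mirrors the definitions of AT, FO, and IE given earlier.

def measure(events, duration, dt=1.0):
    # events: list of (time, robot_id, kind), kind in {'command', 'stop'}
    # AT: average span from each command until the robot stops or is re-commanded
    spans, open_cmd = [], {}
    for t, robot, kind in sorted(events):
        if robot in open_cmd:                  # previous activity span ends now
            spans.append((open_cmd.pop(robot), t))
        if kind == 'command':
            open_cmd[robot] = t
    spans += [(t0, duration) for t0 in open_cmd.values()]   # still active at end of run
    at = sum(t1 - t0 for t0, t1 in spans) / len(spans)

    # FO: average number of simultaneously active robots over regular samples
    ticks = [k * dt for k in range(int(duration / dt))]
    fo = sum(sum(1 for t0, t1 in spans if t0 <= t < t1) for t in ticks) / len(ticks)

    return at, fo, at / fo                     # IE = AT / FO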
Our\nexperimental runs were of two types. In our initial runs\nindividual university students were solicited and\ncompensated to serve as test drivers. They were each given\nabout 30 minutes of training and practice time with each\ntype of robot and then given a series of mazes to solve\nusing various types of robots.\nTask Saturation\nTask saturation showed up in the early tests. In our first\ntests we started all robots in the upper left hand corner of\nthe world. Since this is a search task there is an expanding\n\"frontier\" of unsearched territory. This frontier limits the\nnumber of robots that can effectively work in an area. Fan-out\nwas low because the problem space was too crowded to\nget many robots started and once lead robots were out of\nthe way users tended to focus on the lead robots rather than\nbring in others from behind as the frontier expanded.\nBecause none of our users worked for more than 2 hours on\nthe robots, there was no time to teach or develop higher-level\nstrategies such as how to marshal additional workers.\nWe resolved the frontier problem by evenly distributing\nrobots around the periphery of the world. This is less\nrealistic for a search scenario, but eliminated the frontier\nproblem.\nWe originally posited two interface styles, one with the\nworld entirely visible (light worlds) and the other with areas\nobscured when not yet visited by a robot (dark worlds). We\nthought of this as two UI variants. Instead it was two task\nvariants. In the dark worlds the task is really to survey the\nworld. Once surveyed, touching the targets is trivial. In the\nlight worlds the problem was path planning to the targets.\nSince reaching known targets is a smaller problem than\nsearching a world, task saturation occurred much earlier.\nBecause of this all of our races were run with dark worlds\n(Figure 4).\nFigure 5 shows the relationship between fan-out and\nremaining targets. The dark thin line is the number of\nremaining targets not yet touched and the lighter jagged line\nis the average number of active robots. This graph is the\naverage of 18 runs from 8 subjects using planning type\nrobots. Other experiments showed similar graphs. In the\nvery early part of the run it takes time to get many robots\nmoving. Then as targets are located, the problem becomes\nsmaller and the fan-out reduces along with it. The crossover\noccurs because in a dark world the fact that one or two\ntargets remain is not helpful because any of the unsearched\nareas could contain those targets.\nType 3 Robot\n-2.00\n0.00\n2.00\n4.00\n6.00\n8.00\n10.00\n12.00\n0\n100\n200\n300\n400\n500\n600\n700\nTim e\nAvg. Targets\nAvg. Robots\n\nFigure 5 Task Saturation\nTest Data\nThese individual tests gave us good feedback and helped us\nrefine our ideas about fan-out and interaction time.\nHowever, unmotivated subjects distorted the results. We\nhad a number of subjects who just spent the time and\nCHI 2004\n\nPaper\n24-29 April\n\nVienna, Austria\nVolume 6, Number 1\n\n\n\n\n\n236\n\n\ncollected their money without seriously trying to perform\nthe tasks quickly. It was clear that attempting to supervise\nmultiple robots is more mentally demanding than only one.\nIn many cases fan-out was not high even though from\nviewing the videotapes, the subjects were easily capable of\ndoing better. To resolve this issue we held a series of \"robo-races\"\n. Groups of 8 people were assembled in a room each\nwith identical workstations and problem sets. 
Each trial was\nconducted as a race with monetary prizes to first, second\nand third place in each trial. The motivation of subjects was\nbetter and the fan-out results were much higher and more\nuniform.\nIn our first race there were 8 participants all running 8 races\nusing the dark worlds. The density of obstacles was 35%\nwith 18 robots available and 10 targets to find. We ran 2\nraces with simple robots and 3 races each for the bounce\nand plan robots for a total of 64 trial runs. The measured\nfan-out and activity time along with the computed\ninteraction time is shown in figure 6. Analysis of variance\nshows that there is no statistical difference in the interaction\ntimes across the three robot types. This supports our fan-out\nequation hypothesis.\nRobot Type Mean\nFan-out\nMean\nActivity Time\nComputed\nInteraction Effort\nSimple 1.46 4.36\n3.06\nBounce 2.94\n7.82\n2.77\nPlan 5.11 14.42\n2.88\nFigure 6 Test 1 - 18 robots, 10 targets, 35% obstacles\nTo evaluate our hypothesis that activity time and thus fan-out\nis determined by task complexity we ran a second\nidentical competition except that the obstacle density was\n22%. The data is shown in figure 7. Activity time clearly\nincreases with a reduction in task complexity along with\nfan-out, as we predicted. The interaction time computations\nare not statistically different as we hypothesized.\nRobot Type Mean\nFan-out\nMean\nActivity Time\nComputed\nInteraction Effort\nSimple 1.84 4.99\n2.88\nBounce 3.36 11.36\n3.38\nPlan 9.09 24.18\n2.69\nFigure 7 Test 2 - 18 robots, 10 targets, 22% obstacles\nOne of our goals in this work was to develop a measure of\ninteraction effort that could serve as a measure of the\neffectiveness of a human-robot interface. To test this we ran\na third competition of 8 subjects in 8 races. Test 3 was the\nsame as test 1 except that we reduced the resolution of the\ndisplay from 1600x1200 to 800x600. This meant that the\nmazes would not fit on the screen and more scrolling would\nbe required. This is obviously an inferior interface to the\none used in test 1. Figure 8 compares the fan-out and the\ninteraction effort of tests 1 and 3.\nMean Interaction Effort\nRobot Type\nno scroll\nscrolled\ndiff\nSimple 3.06 4.48\n46%\nBounce 2.77 3.47\n25%\nPlan 2.88 3.63\n26%\nFigure 8 Compare scrolled and unscrolled interfaces\nTest 3 shows that inferior interfaces produce higher\ninteraction effort, which is consistent with our desire to use\ninteraction effort as a measure of the quality of a human-robot\ninterface.\nHowever, figure 8 also shows a non-uniform interaction\neffort across robot types for the scrolling condition. This is\nnot consistent with our fan-out equation hypothesis. Since\nall three robots had the same user interface they should\nexhibit similar interaction effort measures. Analysis of\nvariance shows that the bounce and plan robots have\nidentical interaction effort but that the simple robot is\ndifferent from both of them.\nWe explain the anomaly in the simple robots by the fact\nthat interaction effort masks many different components as\ndescribed earlier and fan-out partially determins those\ncomponents. Figure 9 shows the fan-out measures for test 3.\nThe fan-out for the simple robots is barely above 1\nindicating that the user is heavily engaged with a single\nrobot. We watched the user behavior and noticed that the\ninteraction is dominated by expressing commands to the\nrobot with very little planning. 
With the bounce and plan\nrobots the fan-out is much higher and users spend more\ntime planning for the robot and less time trying to input\ncommands to them.\nRobot Type\nMean Fan-out\nSimple 1.12\nBounce 2.47\nPlan 3.97\nFigure 9 Fan-out for Test 3 (scrolled world)\nIt appears that when fan-out drops very low the nature of\nthe human-robot interaction changes and the interaction\neffort changes also. To understand this effect better we ran\na fourth competition where we varied the speed of the\nrobots. Varying the speed of the robot will change its\nneglect time without changing either the robot's\nintelligence or the user interface. A slower robot will take\nlonger to run into an obstacle and therefore can be\nneglected longer. We used the same worlds and interface as\nin test 1, but we varied speeds across each run with only\ntwo robot types. The results are shown in figure 10.\n\nCHI 2004\n\nPaper\n24-29 April\n\nVienna, Austria\nVolume 6, Number 1\n\n\n\n\n\n237\n\n\nRobot\nType\nRobot\nSpeed\nMean\nFan-out\nMean\nActivity\nTime\nComputed\nInteraction\nEffort\nSimple 3 2.54 7.21\n3.05\nSimple 6 1.21 3.51\n3.09\nSimple 9 0.89 3.26\n3.67\n\n\n\n\n\nBounce 3 4.44 13.54\n3.51\nBounce 6 3.11 9.60\n3.10\nBounce 9 1.97 5.76\n2.94\nBounce 12 1.82 4.42\n2.51\nBounce 15 1.62 4.04\n2.53\nFigure 10 Test 4 - Varying Robot Speed\nFor each class of robot, increasing the robot's speed\ndecreases activity time and correspondingly reduces fan-out\n. Again with the fastest simple robots, the fan-out drops\nvery low (the user cannot even keep one robot going all the\ntime) and the interaction effort is quite different from the\nslower robots. This confirms the change in interaction when\nfan-out drops. This indicates that the fan-out equation does\nnot completely capture all of the relationships between fan-out\nand interaction. This is also confirmed when we look at\nthe interaction effort for the bounce robots. As speed\nincreases, fan-out drops, as we would expect. However,\ninteraction effort also drops steadily by a small amount.\nThis would confirm the robot monitoring and context\nswitching effort that we hypothesized. As fan-out is\nreduced, these two components of interaction effort should\ncorrespondingly reduce while other interactive costs remain\nconstant. This would explain the trend in the data from test\n4.\nCONCLUSIONS\nIt is clear from the test that the fan-out equation does model\nmany of the effects of human interaction with multiple\nrobots. The experiments also indicate that interaction effort,\nas computed from activity time and fan-out can be used to\ncompare the quality of different HRI designs. This gives us\na mechanism for evaluating the user interface part of\nhuman-robot interaction. However, it is also clear that fan-out\nhas more underlying complexity that the equation\nwould indicate. This is particularly true with very low fan-out\nREFERENCES\n1. Asimov,\nI,\nI, Robot, Gnome Press, 1950.\n2. Balch, T. and Arkin, R.C.. \"Behavior-based Formation\nControlfor Multi-robot Teams.\" IEEE Transactions on\nRobotics and Automation, 14(6), 1998.\n3. Bruemmer, D. J., Dudenhoeffer, D. D., and McKay, M.\nD. \"A Robotic Swarm for Spill Finding and Perimeter\nFormation,\" Spectrum: International Conference on\nNuclear and Hazardous Waste Management, 2002.\n4. Chen, Q. and Luh, J.Y.S.. \"Coordination and Control\nof a Group of Small Mobile Robots.\" In Proceedings of\nthe IEEE International Conference on Robotics and\nAutomation, pp. 2315-2320, San Diego CA, 1994.\n5. Crandall, J.W. 
and Goodrich, M. A., \"Characterizing\nEfficiency of Human-Robot Interaction: A Case Study\nof Shared-Control Teleoperation.\" Proceedings of the\n2002 IEEE/RSJ International Conference Intelligent\nRobotics and Systems, 2002.\n6. Kawamura, K., Peters, R. A., Johnson, C., Nilas, P.,\nand Thongchai, S. \"Supervisory Control of Mobile\nRobots using Sensory Egosphere\" IEEE International\nSymposium on Computational Intelligence in Robotics\nand Automation, 2001.\n7. Fong, T, Conti, F., Grange, S., and Baur, C. \"Novel\nInterfaces for Remote Driving: Gesture, Haptic and\nPDA,\" SPIE Telemanipulator and Telepresence\nTechnolgies VII , 2000.\n8. Fong, T., Thorpe, C., and Baur, C., \"Collaboration\nDialogue, and Human-Robot Interaction,\" Proceedings\nof the 10th International Symposium of Robotics\nResearch, 2001.\n9. Fong, T., and Thorpe, C. \"Robot as Partner: Vehicle\nTeleoperation with Collaborative Control,\" Workshop\non Multi-Robot Systems, Naval Research Laboratory,\nWashington, D.C, 2002.\n10. Fong, T., Grange, S., Thorp, C., and Baur, C., \"Multi-Robot\nRemote Driving with Collaborative Control\"\nIEEE International Workshop on Robot-Human\nInteractive Communication, 2001.\n11. Jones, C., and Mataric, M.J., \"Sequential Task\nExecution in a Minimalist Distributed Robotic System\"\nProceedings of the Simulation of Adaptive Behavior,\n2002.\n12. Parker, C. and Zhang, H., \"Robot Collective\nConstruction by Blind Bulldozing,\" IEEE Conference\non Systems, Cybernetics and Man, 2002.\n13. Parker, L. E., \"Evaluating Success in Autonomous\nMulti-Robot Teams: Experiences from ALLIANCE\nArchitecture,\" Implementations Journal of Theoretical\nand Experimental Artificial Intelligence, 2001.\n14. Sheridan, T.B., Telerobotics, Automation and Human\nSupervisory Control MIT Press, 1992\n15. Simmons, R., Apefelbaum, D., Fox, D., Goldman, R.\nP., Haigh, K. Z., Musliner, D. J., Pelican, M., and\nThrun, S., \"Coordinated Deployment of Multiple,\nHeterogeneous Robots,\" Proceedings of the Conference\non Intelligent Robots and Systems, 2000.\n16. Wang, P.K.C., \"Navigation Strategies for Multiple\nAutonomous Robots Moving in Formation.\" Journal of\nRobotic Systems, 8(2), pp. 177-195, 1991.\n\nCHI 2004\n\nPaper\n24-29 April\n\nVienna, Austria\nVolume 6, Number 1\n\n\n\n\n\n238", "keywords": "multiple robots;Human-robot interaction;interaction time;fan-out;interaction effort;human-robot interaction;neglect time;user interface;fan-out equation;activity time"} {"name": "91", "title": "Fast String Sorting Using Order-Preserving Compression", "abstract": "We give experimental evidence for the benefits of order-preserving compression in sorting algorithms . While, in general, any algorithm might benefit from compressed data because of reduced paging requirements, we identified two natural candidates that would further benefit from order-preserving compression, namely string-oriented sorting algorithms and word-RAM algorithms for keys of bounded length. The word-RAM model has some of the fastest known sorting algorithms in practice. These algorithms are designed for keys of bounded length, usually 32 or 64 bits, which limits their direct applicability for strings. One possibility is to use an order-preserving compression scheme, so that a bounded-key-length algorithm can be applied. For the case of standard algorithms, we took what is considered to be the among the fastest nonword RAM string sorting algorithms, Fast MKQSort, and measured its performance on compressed data. 
The Fast MKQSort algorithm of Bentley and Sedgewick is optimized to handle text strings. Our experiments show that order-compression techniques results in savings of approximately 15% over the same algorithm on noncompressed data. For the word-RAM, we modified Andersson's sorting algorithm to handle variable-length keys. The resulting algorithm is faster than the standard Unix sort by a factor of 1.5X . Last, we used an order-preserving scheme that is within a constant additive term of the optimal HuTucker, but requires linear time rather than O(m log m), where m = | | is the size of the alphabet.", "fulltext": "INTRODUCTION\nIn recent years, the size of corporate data collections has grown rapidly. For\nexample, in the mid-1980s, a large text collection was in the order of 500 MB.\nToday, large text collections are over a thousand times larger. At the same\ntime, archival legacy data that used to sit in tape vaults is now held on-line in\nlarge data warehouses and regularly accessed. Data-storage companies, such\nas EMC, have emerged to serve this need for data storage, with market capitalizations\nthat presently rival that of all but the largest PC manufacturers.\nDevising algorithms for these massive data collections requires novel techniques\n. Because of this, over the last 10 years there has been renewed interest\nin research on indexing techniques, string-matching algorithms, and very large\ndatabase-management systems among others.\nConsider, for example, a corporate setting, such as a bank, with a large collection\nof archival data, say, a copy of every bank transaction ever made. Data\nis stored in a data-warehouse facility and periodically accessed, albeit perhaps\nsomewhat unfrequently. Storing the data requires a representation that is succinct\n, amenable to arbitrary searches, and supports efficient random access.\nAside from savings in storage, a no less important advantage of a succinct\nrepresentation of archival data is a resulting improvement in performance of\nsorting and searching operations. This improvement is twofold: First, in general\n, almost any sorting and searching algorithm benefits from operating on\nsmaller keys, as this leads to a reduced number of page faults as observed by\nMoura et al. [1997]. Second, benefits can be derived from using a word-RAM\n(unit-cost RAM) sorting algorithm, such as that of [Andersson 1994; Andersson\nand Nilsson 1998]. This algorithm sorts n w-bits keys on a unit-cost RAM with\nword size w in time O(n log n). As one can expect, in general, this algorithm\ncannot be applied to strings as the key length is substantially larger than the\nword size. However, if all or most of the keys can be compressed to below the\nword size, then this algorithm can be applied--with ensuing gains in performance\n.\nThere are many well-known techniques for compressing data, however, most\nof them are not order preserving and do not support random access to the data\nas the decoding process is inherently sequential [Bell et al. 1990; Knuth 1997].\nHence, it is important that the compression technique be static as well as order\npreserving. This rules out many of the most powerful compression techniques,\nsuch as those based on the ZivLempel method [Ziv and Lempel 1978], which\nare more attuned to compression of long text passages in any event (we note,\nhowever, that it is possible to implement certain search operations on Lempel\nZiv encoded text as shown by Farach and Thorup [1998], and K arkk ainen and\nUkknonen [1996]). 
The need to preserve order also eliminates many dictionary techniques, such as Huffman codes [Knuth 1997; Antoshenkov 1997].

Hence, we consider a special type of static code, namely, order-preserving compression schemes, which are similar to Huffman codes. More precisely, we are given an alphabet Σ with an associated frequency p_i for each symbol s_i. The compression scheme E : Σ → {0, 1}* maps each symbol into an element of a set of prefix-free strings over {0, 1}*. The goal is to minimize the average code length Σ_{s_i ∈ Σ} p_i |E(s_i)|, with the added condition that if s_i < s_j then E(s_i) < E(s_j) in the lexicographic ordering of {0, 1}*. This is in contrast to Huffman codes, which, in general, do not preserve the order of the initial alphabet. Such a scheme is known as order preserving.

Optimal order-preserving compression schemes were introduced by Gilbert and Moore [1959], who gave an O(n^3) algorithm for computing the optimal code. This was later improved by Hu and Tucker, who gave an O(n log n) algorithm, which is optimal [Hu 1973]. In this paper we use a linear-time encoding algorithm that approximates the optimal order-preserving compression scheme to within a constant additive term to produce a compressed form of the data. The savings are of significance when dealing with large alphabets, which arise in applications such as DNA databases and compression on words. We test the actual quality of the compression scheme on real string data and obtain that the compressed image produced by the linear algorithm is within 0.4 and 5.2% of the optimal in the worst case. The experiments suggest that the compression ratio is, in practice, much better than what is predicted by theory.

Then, using the compressed form, we test the performance of the sorting algorithm against the standard Unix sort in the Sun Solaris OS. Using data from a 1-GB world wide web crawl, we first study the feasibility of compressing the keys to obtain 64-bit word-length keys. In this case we obtain that only 2% of the keys cannot be resolved within the first 64 significant bits. This small number of keys is flagged and resolved in a secondary stage. For this modified version of Andersson's algorithm, we report a factor of 1.5X improvement over the timings reported by Unix sort.

As noted above, an important advantage of order-preserving compression schemes is that they support random access. As such, we consider as a likely application scenario that the data is stored in compressed format. As an example, consider a database-management system (DBMS). Such systems often use sorting as an intermediate step in the process of computing a join statement. The DBMS would benefit from an order-preserving compression scheme, first, by allowing faster initial loading (copying) of the data into memory and, second, by executing a sorting algorithm tuned for compressed data, which reduces both processing time and the amount of paging required by the algorithm itself if the data does not fit in main memory. The contribution of each of these aspects is highlighted later in Tables V and VI.

The paper is laid out as follows. In Section 2, we introduce the linear-time algorithm for encoding and observe that its approximation term follows from a theorem of Bayer [1975]. In Section 3, we compare the compression ratio empirically for string data using the Calgary corpus. In Section 4, we compare the performance of sorting algorithms aided by order-preserving data compression.
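To make the definition of an order-preserving scheme concrete before moving on, the following sketch checks the two required properties of a code E (prefix-freeness, and order preservation under the lexicographic order of {0, 1}*) on a small hand-made example; the toy code below is our own illustration, not one produced by the algorithms discussed in this paper.

def is_prefix_free(codes):
    # no codeword may be a prefix of another; checking adjacent strings in
    # sorted order suffices, since a prefix sorts immediately before the
    # block of its extensions
    cs = sorted(codes.values())
    return not any(b.startswith(a) for a, b in zip(cs, cs[1:]))

def is_order_preserving(codes):
    # if s_i < s_j then E(s_i) < E(s_j) lexicographically
    symbols = sorted(codes)
    return all(codes[a] < codes[b] for a, b in zip(symbols, symbols[1:]))

E = {'a': '00', 'b': '01', 'c': '100', 'd': '101', 'e': '11'}
assert is_prefix_free(E) and is_order_preserving(E)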
In Section 3, we compare the compression ratio\nempirically for string data using the Calgary corpus. In Section 4, we compare\nthe performance of sorting algorithms aided by order-preserving data\ncompression.\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.\n4\n\nA. L opez-Ortiz et al.\nORDER-PRESERVING COMPRESSION\nWe consider the problem of determining code-words (encoded forms) such that\nthe compression ratio is as high as possible. Code-words may or may not have\nthe prefix property. In the prefix property case, the problem reduces to finding\nan optimum alphabetic binary tree.\n2.1 Problem Definition\nFormally, the problem of finding optimal alphabetic binary trees can be stated\nas follows: Given a sequence of n positive weights w\n1\n, w\n2\n,\n, w\nn\n, find a binary\ntree in which all weights appear in the leaves such that\nr The weights on the leaves occur in order when traversing the tree from left\nto right. Such a tree is called an alphabetic tree.\nr The sum\n1\nin\nw\ni\nl\ni\nis minimized, where l\ni\nis the depth (distance from root)\nof the ith leave from left. If so, this is an optimal alphabetic tree.\nIf we drop the first condition, the problem becomes the well-known problem of\nbuilding Huffman trees, which is known to have the same complexity as sorting.\n2.2 Previous Work\nMumey introduced the idea of finding optimal alphabetic binary tree in linear\ntime for some special classes of inputs in Mumey [1992]. One example is a simple\ncase solvable in O(n) time when the values of the weights in the initial sequence\nare all within a term of two. Mumey showed that the region-based method,\ndescribed in Mumey [1992], exhibits linear time performance for a significant\nvariety of inputs. Linear time solutions were discovered for the following special\ncases: when the input sequence of nodes is sorted sequence, bitonic sequence,\nweights exponentially separated, and weights within a constant factor [see\nLarmore and Przytycka 1998].\nMoura et al. [1997] considered the benefits of constructing a suffix tree\nover compressed text using an order-preserving code. In their paper, they observe\ndramatic savings in the construction of a suffix tree over the compressed\ntext.\n2.3 A Simple Linear-Approximation Algorithm\nHere, we present an algorithm that creates a compression dictionary in linear\ntime on the size of the alphabet and whose compression ratio compares very\nfavorably to that of optimal algorithms which have\n(n log n) running time,\nwhere n is the number of symbols or tokens for the compression scheme. For the\npurposes of presentation, we refer to the symbols or tokens as characters in an\nalphabet\n. In practice these \"characters\" might well correspond to, for example,\nentire English words or commonly occurring three-or four-letter combinations.\nIn this case, the \"alphabet\" can have tens of thousands of tokens, and, hence,\nthe importance of linear-time algorithms for creating the compression scheme.\nThe idea of the proposed algorithm is to divide the set of weights into two\nalmost equal size subsets and solve the problem recursively for them. As we\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.\nFast String Sorting Using Order-Preserving Compression\n\n5\nshow in Section 2.4, this algorithm finds a compression scheme within an additive\nterm of 2 bits of the average code length found by Huffman or HuTucker\nalgorithms.\n2.4 Algorithm\nLet w\n1\n, w\n2\n,\n. . . 
, w_n be the weights of the alphabet characters or word codes to be compressed, in alphabetical order. The procedure Make(i, j) described below finds a tree in which the tokens with weights w_i, w_{i+1}, . . . , w_j are in the leaves:

Procedure Make(i, j)
1. if (i == j) return a tree with one node containing w_i.
2. Find k such that |(w_i + w_{i+1} + · · · + w_k) − (w_{k+1} + · · · + w_j)| is minimum.
3. Let T_1 = Make(i, k) and T_2 = Make(k + 1, j).
4. Return the tree T with left subtree T_1 and right subtree T_2.

In the next two subsections we study (a) the time complexity of the proposed algorithm and (b) bounds on the quality of the approximation obtained.

2.5 Time Complexity
First observe that, aside from line 2, all other operations take constant time. Hence, so long as line 2 of the algorithm can be performed in logarithmic time, the running time T(n) of the algorithm is given by the recursion T(n) = T(k) + T(n − k) + O(log k), and, thus, T(n) = O(n). Therefore, the critical part of the algorithm is line 2: how to divide the set of weights into two subsets of almost the same size in logarithmic time.

Suppose that for every k ≥ i, the value a_k = b_i + w_i + w_{i+1} + · · · + w_k is given, where b_i is a given integer to be specified later. Notice that the a_k's form an increasing sequence, a_i < a_{i+1} < · · · < a_j. Now the expression in line 2 of the algorithm can be rewritten as follows:

|(w_i + w_{i+1} + · · · + w_k) − (w_{k+1} + · · · + w_j)| = |a_j − 2a_k + b_i|

Thus, given the values a_k for all k, one can easily find the index u for which |a_j − 2a_u + b_i| is minimum using a variation of one-sided binary search known as galloping. Define a_k := w_1 + w_2 + · · · + w_k and, hence, b_i = a_{i−1} = w_1 + · · · + w_{i−1}, and modify the algorithm as follows:

Procedure Make(i, j, b)
1. If (i == j) return a tree with one node containing w_i.
2. Let k = Minimize(i, j, b, 1).
3. Let T_1 = Make(i, k, b) and T_2 = Make(k + 1, j, a_k).
4. Return the tree T with left subtree T_1 and right subtree T_2.

where the Minimize procedure is a one-sided binary search for the element closest to zero in the sequence {a_j − 2a_k − 2b}_k. More precisely:

Procedure Minimize(i, j, b, g)
1. if (j − i ≤ 1) return min{|a_j − 2a_i − 2b|, |a_j + 2b|}.
2. let ℓ = i + g, u = j − g.
3. if (a_j − 2a_ℓ − 2b) > 0 and (a_j − 2a_u − 2b) < 0 then return Minimize(i, j, b, 2g).
4. if (a_j − 2a_ℓ − 2b) < 0 then return Minimize(ℓ − g/2, ℓ, b, 1).
5. if (a_j − 2a_u − 2b) > 0 then return Minimize(u, u + g/2, b, 1).

The total time taken by all calls to Minimize is given by the recursion T(n) = T(k) + T(n − k) + log k if k ≤ n/2, and T(n) = T(k) + T(n − k) + log(n − k) otherwise, where n is the number of elements in the entire range and k is the position of the element found in the first call to Make. This recursion has solution T(n) ≤ 2n − log n − 1, as can easily be verified:

T(n) = T(k) + T(n − k) + log k ≤ 2n − log(n − k) − 2 ≤ 2n − log n − 1

when k ≤ n/2. The case k > n/2 is analogous.

To compute the total time, note that, at initialization, the algorithm calculates a_k for all k and then makes a call to Make(1, n, 0). The total cost of line 2 is linear and, hence, the entire algorithm takes time O(n).
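A compact, runnable sketch of this construction is given below. It follows the recursive splitting of Make but, for brevity, replaces the galloping Minimize search with an ordinary binary search over the precomputed prefix sums (so its splitting cost is O(n log n) rather than linear); all identifiers are our own.

from bisect import bisect_left

def order_preserving_code(weights):
    # weights are given in the symbols' alphabetical order; returns one
    # codeword per symbol, prefix-free and lexicographically increasing
    prefix = [0]                        # prefix[k] = w_1 + ... + w_k
    for w in weights:
        prefix.append(prefix[-1] + w)
    codes = [''] * len(weights)

    def make(i, j, bits):               # 1-based, inclusive range of symbols
        if i == j:
            codes[i - 1] = bits or '0'
            return
        # pick k in [i, j-1] bringing 2*prefix[k] closest to prefix[i-1]+prefix[j],
        # i.e. minimizing |(w_i+...+w_k) - (w_{k+1}+...+w_j)|
        target = (prefix[i - 1] + prefix[j]) / 2.0
        k = bisect_left(prefix, target, i, j)
        if k == j or (k > i and target - prefix[k - 1] < prefix[k] - target):
            k -= 1
        make(i, k, bits + '0')
        make(k + 1, j, bits + '1')

    make(1, len(weights), '')
    return codes

For example, order_preserving_code([1, 8, 1, 1, 1]) yields ['00', '01', '100', '101', '11']: the codewords increase lexicographically with the symbols, while the heavy second symbol stays near the root.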
2.6 Approximation Bounds
Recall that a binary tree T can be interpreted as representing a coding for the symbols corresponding to its leaves by assigning 0/1 labels to left/right branches, respectively. Given a tree T and a set of weights associated to its leaves, we denote by E(T) the expected number of bits needed to represent each of these symbols using the codes represented by the tree. More precisely, if T has n leaves with weights w_1, . . . , w_n and depths l_1, . . . , l_n, then

E(T) = (Σ_{i=1}^{n} w_i l_i) / W(T)

where W(T) is defined as the total sum of the weights, Σ_{i=1}^{n} w_i. Note that W(T) = 1 for the case of probability frequency distributions.

THEOREM 2.1. Let T be the tree generated by our algorithm, and let T_OPT be the optimal static binary order-preserving code. Then

E(T) ≤ E(T_OPT) + 2

This fact can be proved directly by careful study of the partition mechanism, depending on how large the central weight w_k is. However, we observe that a much more elegant proof can be derived from a rarely cited work by Paul Bayer [1975]. Consider a set of keys k_1, . . . , k_n, with probabilities p_1, . . . , p_n for successful searches and q_0, q_1, . . . , q_n for unsuccessful searches. Let

H = Σ_{i=1}^{n} −p_i lg p_i + Σ_{i=0}^{n} −q_i lg q_i

denote the entropy of the associated probability distribution. Observe that from Shannon's source-coding theorem [Shannon 1948], we know that H ≤ E(T) for any tree T.

Definition 2.2. A weight-balanced tree is an alphabetic binary search tree constructed recursively from the root by minimizing the difference between the weights of the left and right subtrees.

That is, a weight-balanced tree minimizes |W(L) − W(R)|, in a similar fashion to procedure Make above.

THEOREM 2.3 [BAYER 1975]. Let S_OPT denote the optimal alphabetic binary search tree, with keys in internal nodes and unsuccessful searches in external nodes. Let S denote a weight-balanced tree on the same keys. Then

E(S) ≤ H + 2 ≤ E(S_OPT) + 2

With this theorem at hand, we can now proceed with the original proof of Theorem 2.1.

PROOF OF THEOREM 2.1. We are given a set of symbols s_i and weights w_i with 1 ≤ i ≤ n. Consider the weight-balanced alphabetic binary search tree on n − 1 keys with successful search probabilities p_i = 0 and unsuccessful search probabilities given by the symbol weights w_1, . . . , w_n. Observe that there is a one-to-one mapping between alphabetic search trees for this problem and order-preserving codes. Moreover, the costs of the corresponding trees coincide. It is not hard to see that the tree constructed by Make corresponds to the weight-balanced tree, and that the optimal alphabetic binary search tree S_OPT and the optimum Hu-Tucker code tree T_OPT also correspond to each other.
Hence from Bayer's theorem we have\nE(T )\nH + 2 E(S\nOPT\n)\n+ 2 = E(T\nOPT\n)\n+ 2\nas required.\nThis shows that in theory the algorithm proposed is fast and has only a small\nperformance penalty in terms of compression over both the optimal encoding\nmethod and the information theoretical lower bound given by the entropy.\nEXPERIMENTS ON COMPRESSION RATIO\nIn this section we experimentally compare the performance of the algorithm in\nterms of compression against other static-compression codes. We compare three\nalgorithms: Huffman, HuTucker, and our algorithm on a number of random\nfrequency distributions. We compared alphabets of size n, for variable n. In\nthe case of English, this corresponds to compression on words, rather than on\nsingle characters. Each character was given a random weight between 0 and\n100, 000, which is later normalized. The worst-case behavior of our algorithm\nin comparison with HuTucker and Huffman algorithms is shown in Table I.\nFor each sample, we calculated the expected number of bits required by each\nalgorithm on that frequency distribution and reported the ratio least favorable\namong those reported.\nWe also compared the performance of the proposed linear-time algorithm\nwith Huffman and HuTucker compression using the Calgary corpus, a common\nbenchmark in the field of data compression. This is shown in Table II.\nWe report both the compression ratio of each of the solutions as well as the\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.\n8\n\nA. L opez-Ortiz et al.\nTable I. Comparison of the Three Algorithms\nAlphabet size n\nLinear / Huffman\nHuTucker /Huffman\nLinear/HuTucker\nn\n= 26 (10000 tests)\n1.1028\n1.0857\n1.0519\nn\n= 256 (10000 tests)\n1.0277\n1.0198\n1.0117\nn\n= 1000 (3100 tests)\n1.0171\n1.0117\n1.0065\nn\n= 2000 (1600 tests)\n1.0147\n1.0100\n1.0053\nn\n= 3000 (608 tests)\n1.0120\n1.0089\n1.0038\nTable II. Comparison Using the Calgary Corpus\nFile\nSize (in bits)\nHuff. (%)\nLin. (%)\nH-T (%)\nLin./Huff.\nLin./H-T\nH-T/Huff.\nbib.txt\n890088\n65\n68\n67\n1.0487\n1.0140\n1.0342\nbook1.txt\n6150168\n57\n61\n59\n1.0727\n1.0199\n1.0518\nbook2.txt\n4886848\n60\n63\n62\n1.0475\n1.0159\n1.0310\npaper1.txt\n425288\n62\n65\n64\n1.0378\n1.0075\n1.0301\npaper2.txt\n657592\n57\n60\n60\n1.0520\n1.0098\n1.0418\npaper3.txt\n372208\n58\n61\n60\n1.0421\n1.0099\n1.0318\npaper4.txt\n106288\n59\n63\n61\n1.0656\n1.0321\n1.0324\npaper5.txt\n95632\n62\n65\n64\n1.0518\n1.0151\n1.0361\npaper6.txt\n304840\n63\n66\n64\n1.0495\n1.0198\n1.0290\nprogc.txt\n316888\n65\n68\n66\n1.0463\n1.0315\n1.0143\nprogl.txt\n573168\n59\n63\n61\n1.0637\n1.0324\n1.0302\nprogp.txt\n395032\n61\n64\n63\n1.0583\n1.0133\n1.0443\ntrans.txt\n749560\n69\n72\n70\n1.0436\n1.0243\n1.0187\nnews.txt\n3016872\n65\n67\n67\n1.0403\n1.0103\n1.0296\ngeo\n819200\n70\n72\n71\n1.0173\n1.0098\n1.0074\nobj1\n172032\n74\n76\n75\n1.0220\n1.0149\n1.0070\nobj2\n1974512\n78\n80\n80\n1.0280\n1.0103\n1.0175\npic\n4105728\n20\n21\n21\n1.0362\n1.0116\n1.0242\ncomparative performance of the linear-time solution with the other two well-known\nstatic methods. 
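The ratios reported in Tables I and II are ratios of expected code lengths. A small helper of the following form is enough to reproduce such a comparison for any pair of codes, given the symbol frequencies and the codewords produced by each method (the inputs here are placeholders, not the corpus data used above).

def expected_code_length(freqs, codes):
    # E(T) = (sum_i w_i * l_i) / W(T): average bits per symbol under the code
    total = float(sum(freqs))
    return sum(f * len(c) for f, c in zip(freqs, codes)) / total

def ratio(freqs, codes_a, codes_b):
    # e.g. linear-time code versus Huffman or Hu-Tucker, as in Tables I and II
    return expected_code_length(freqs, codes_a) / expected_code_length(freqs, codes_b)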
As we can see, the penalty on the compression factor\nof the linear-time algorithm over Huffman, which is not order-preserving, or\nHuTucker, which takes time O(n log n), is minimal.\nIt is important to observe that for the data set tested, the difference between\nthe optimal HuTucker and the linear compression code was, in all cases, below\n0.2 bits, which is much less than the worst-case additive term of 2 predicted by\nBayer's theorem.\nSTRING SORTING USING A WORD-RAM ALGORITHM\nIn this section, we compare the performance of sorting on the compressed text\nagainst the uncompressed form [as in Moura et al. 1997], including Bentley\nand Sedgewicks's FastSort [Bentley and Sedgewick 1997], as well as Andersson's\nword-RAM sort [Andersson 1994]. Traditionally, word-RAM algorithms\noperate on unbounded length keys, such as strings, by using radix/bucket-sort\nvariants which iteratively examine the keys [Andersson and Nilsson 1994]. In\ncontrast, our method uses order-preserving compression to first reduce the size\nof the keys, then sort by using fixed-key size word-RAM algorithms [Andersson\net al. 1995], sorting keys into buckets. We observed experimentally that this\nsuffices to sort the vast majority of the strings when sorting 100 MB files of\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.\nFast String Sorting Using Order-Preserving Compression\n\n9\nTable III. Percentage of Buckets Multiply Occupied (Web Crawl)\nSize in\n% Words Sharing a Bucket\nFile\nTokens\nUncompressed\nHuTucker\nLinear\nAlphanumeric\n515277\n16\n2\n2\nAlpha only\n373977\n21\n3\n3\nweb-crawled text data. In this case, each of the buckets contained very few elements\n, in practice. We also tested the algorithms on the Calgary corpus, which\nis a standard benchmark in the field of text compression.\nWe consider the use of a word-RAM sorting algorithm to create a dictionary\nof the words appearing in a given text. The standard word-RAM algorithms\nhave as a requirement that the keys fit within the word size of the RAM machine\nbeing used. Modern computers have word sizes of 32 or 64 bits. In this\nparticular example, we tested Andersson's 32 bit implementation [Andersson\nand Nilsson 1998] of the O(n log log n) algorithm by Andersson et al. [1995]. We\nalso tested the performance of Bentley and Sedgewick's [1997] MKQSort running\non compressed data using order-preserving compression. The algorithm is\na straightforward implementation of the code in Bentley and Sedgewick [1997].\nObserve that one can use a word-RAM algorithm on keys longer than w bits\nby initially sorting on the first w bits of the key and then identifying \"buckets\"\nwhere two or more strings are \"tied,\" i.e., share the first w bits. The algorithm\nproceeds recursively on each of these buckets until there are no further ties.\nThis method is particularly effective if the number of ties is not too large.\nTo study this effect, we consider two word dictionaries and a text source. One\nis collected from a 1-GB crawl of the world wide web, the second from all the\nunique words appearing in the Calgary corpus; the last one is a more recent\n3.3-GB crawl of the world wide web. This is motivated by an indexing application\nfor which sorting a large number of words was required. In principle, the\nresult is equally applicable to other settings, such as sorting of alphanumeric\nfields in a database.\nIn the case of the web crawl we considered two alternative tokenization\nschemes. 
The first one tokenizes on alphanumeric characters while the second\nignores numbers in favor of words only. Table III shows the number of buckets\nthat have more than one element after the first pass in the uncompressed\nand the compressed form of the text. We report both the figure for the proposed\nlinear algorithm and for the optimal HuTucker scheme. Observe the\ndramatic reduction in the number of buckets that require further processing in\nthe compressed data.\nIn fact the numbers of ties in the compressed case is sufficiently small that\naborting the recursion after the first pass and using a simpler sorting algorithm\non the buckets is a realistic alternative. In comparison, in the uncompressed\ncase the recursion reaches depth three in the worst case before other types of\nsorting become a realistic possibility.\nIn the case of the Calgary corpus, the number of buckets with ties in the\nuncompressed form of the text ranged from 3 to 8%. After compressing the text,\nthe number of ties, in all cases, rounded to 0\n.0%. The specific figures for a subset\nof the Calgary corpus are shown in Table IV.\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.\n10\n\nA. L opez-Ortiz et al.\nTable IV. Percentage of Buckets Multiply Occupied\non the Text Subset of the Calgary Corpus\nPercentage Words Tied\nFile\nUncompressed\nHuTucker\nLinear\npaper1\n5\n0\n0\npaper2\n4\n0\n0\npaper3\n6\n0\n0\npaper4\n4\n0\n0\npaper5\n3\n0\n0\npaper6\n4\n0\n0\nbook1\n3\n0\n0\nbook2\n8\n0\n0\nbib\n7\n0\n0\nNotice that in all cases the performance of the optimal HuTucker algorithm\nand the linear algorithm is comparable. We should also emphasize that while\nthe tests in this paper used compression on alphanumeric characters only, the\ncompression scheme can be applied to entire words [see, for example, Mumey\n1992]. In this case, the size of the code dictionary can range in the thousands\nof symbols, which makes the savings of a linear-time algorithm particularly\nrelevant.\nLast, we considered a series of ten web crawls from Google, each of approximately\n100 MB in size (3.3 GB in total). In this case, we operate under the\nassumption that the data is stored in the suitable format to the corresponding\nalgorithm. We posit that it is desirable to store data in compressed format, as\nthis results also in storage savings while not sacrificing searchability because of\nthe order-preserving nature of the compression. We tokenized and sorted each of\nthese files to create a dictionary, a common preprocessing step for some indexing\nalgorithms. The tokenization was performed on nonalphanumeric characters.\nFor this test, we removed tokens larger than 32 bits from the tokenized file. In\npractice, these tokens would be sorted using a second pass, as explained earlier.\nWe first studied the benefits of order-preserving compression alone by comparing\nthe time taken to sort the uncompressed and compressed forms of the text.\nThe tokens were sorted using the Unix\nsort routine, Unix qsort, Fast MKQSort,\nand Andersson's sort algorithm [Andersson and Nilsson 1998]. Table V shows a\ncomparison of CPU times among the different sorting algorithms, while Table\nVI shows the comparison in performance including I/O time for copying the\ndata into memory. 
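Before turning to the timing results, here is a minimal sketch, not the code used in these experiments, of the two-pass scheme measured in Tables III and IV: keys are bucketed on their first w = 32 bits, multiply-occupied buckets are resolved by a fallback comparison sort, and the fraction of tied words is reported. The `prefix_key` stub stands in for "apply the order-preserving compressor, then truncate"; substituting a real order-preserving code shortens the effective keys and shrinks the tie fraction, which is exactly the effect the tables show.

```python
from collections import defaultdict

W_BYTES = 4  # keys must fit a 32-bit word-RAM word

def prefix_key(token: bytes) -> bytes:
    # Stand-in for "order-preserving-compress, then keep the first w bits".
    return token[:W_BYTES]

def word_ram_style_sort(tokens):
    """Sort on the first w bits, then resolve ties inside each bucket.

    The first pass models the fixed-key word-RAM sort; the per-bucket
    fallback models the recursion on multiply-occupied buckets."""
    buckets = defaultdict(list)
    for t in tokens:
        buckets[prefix_key(t)].append(t)
    ordered, tied = [], 0
    for key in sorted(buckets):        # fixed-size keys: word-RAM sortable
        bucket = buckets[key]
        if len(bucket) > 1:
            tied += len(bucket)
            bucket.sort()              # comparison sort on the ties only
        ordered.extend(bucket)
    return ordered, tied / len(tokens)

words = ["sorting", "sorted", "sorter", "string", "strings", "word", "words", "ram"]
ordered, tie_fraction = word_ram_style_sort([w.encode() for w in words])
print(f"{100 * tie_fraction:.0f}% of words shared a first-word bucket")
```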
Note that there are observed gains both in CPU time alone\nand in CPU plus I/O timings as a result of using the data-compressed form.\nWe report timings in individual form for the first three crawls as well as\nthe average sorting time across all 10 files together with the variance. On the\ndata provided, Fast MKQSort is the best possible choice with the compressed\nvariant, being 20% faster than the uncompressed form. These are substantial\nsavings for such a highly optimized algorithm.\nWhile, in this case, we focused on key lengths below w bits, the savings from\ncompression can be realized by most other sorting, searching, or indexing mechanisms\n, both by the reduction of the key length field and by the reduced demands\nin terms of space. To emphasize, there are two aspects of order-preserving\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.\nFast String Sorting Using Order-Preserving Compression\n\n11\nTable V. CPU Time (in Seconds) as Reported by the Unix\nTime Utility\nAlgorithm\nData 1\nData 2\nData 3\nAverage\nVariance\nQSort\n2.41\n2.29\n2.33\n2.33\n0.04\nQSort on compressed\n2.17\n2.18\n2.20\n2.20\n0.05\nAndersson\n0.80\n0.82\n0.81\n0.81\n0.01\nFast Sort\n0.88\n0.89\n0.86\n0.88\n0.02\nFast Sort on compressed\n0.73\n0.75\n0.75\n0.74\n0.02\nBinary Search\n1.27\n1.29\n1.26\n1.28\n0.02\nBinary Search on compressed\n1.08\n1.09\n1.09\n1.09\n0.02\nTable VI. Total System Time as Reported by the Unix\ntime Utility\nAlgorithm\nData 1\nData 2\nData 3\nAverage\nVariance\nUnix Sort\n5.53\n5.40\n5.30\n5.41\n0.07\nUnix Sort compressed\n5.43\n5.43\n5.53\n5.42\n0.06\nQSort\n2.78\n2.64\n2.68\n2.69\n0.05\nQSort on compressed\n2.45\n2.47\n2.48\n2.48\n0.05\nAndersson\n3.63\n3.60\n3.67\n3.61\n0.04\nFast Sort\n1.24\n1.25\n1.22\n1.24\n0.02\nFast Sort on compressed\n1.00\n1.04\n1.02\n1.03\n0.02\nBinary Search\n1.63\n1.64\n1.62\n1.65\n0.02\nBinary Search on compressed\n1.36\n1.38\n1.37\n1.38\n0.02\ncompression, which have a positive impact on performance. The first is that\nwhen comparing two keys byte-per-byte, we are now, in fact, comparing more\nthan key at once, since compressed characters fit at a rate of more than 1 per\nbyte. Second, the orginal data size is reduced. This leads to a decrease in the\namount of paging to external memory, which is often the principal bottleneck\nfor algorithms on large data collections.\nCONCLUSIONS\nIn this work, we studied the benefits of order-preserving compression for\nsorting strings in the word-RAM model. First, we propose a simple linear-approximation\nalgorithm for optimal order-preserving compression, which acts\nreasonably well in comparison with optimum algorithms, both in theory and\nin practice. The approximation is within a constant additive term of both the\noptimum scheme and the information theoretical ideal, i.e., the entropy of the\nprobabilistic distribution associated to the character frequency. We then test\nthe benefits of this algorithm using the sorting algorithm of Andersson for the\nword-RAM, as well as Bentley and Sedgewick's fast MKQSort. We present experimental\ndata based on a 1-GB web crawl, showing that Fast MKQSort and\nAndersson are more efficient for compressed data.\nACKNOWLEDGEMENTS\nWe wish to thank Ian Munro for helpful discussions on this topic, as well\nas anonymous referees of an earlier version of this paper for their helpful\ncomments.\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.\n12\n\nA. L opez-Ortiz et al.\n\nREFERENCES\nA\nNDERSSON\n, A. 1994. 
Faster deterministic sorting and searching in linear space. In Proceedings of\nthe 37th Annual IEEE Symposium on Foundations of Computer Science (FOCS 1996). 135141.\nA\nNDERSSON\n, A.\nAND\nN\nILSSON\n, S. 1994.\nA new efficient radix sort. In FOCS: IEEE Symposium on\nFoundations of Computer Science (FOCS).\nA\nNDERSSON\n, A.\nAND\nN\nILSSON\n, S. 1998.\nImplementing radixsort. ACM Journal of Experimental\nAlgorithms 3, 7.\nA\nNDERSSON\n, A., H\nAGERUP\n, T., N\nILSSON\n, S.,\nAND\nR\nAMAN\n, R. 1995.\nSorting in linear time? In STOC:\nACM Symposium on Theory of Computing (STOC).\nA\nNTOSHENKOV\n, G. 1997.\nDictionary-based order-preserving string compression. VLDB Journal:\nVery Large Data Bases 6, 1 (Jan.), 2639. (Electronic edition.)\nB\nAYER\n, P. J. 1975.\nImproved bounds on the costs of optimal and balanced binary search trees.\nMaster's thesis. Massachussets Institute of Technology (MIT), Cambridge, MA.\nB\nELL\n, T. C., C\nLEARY\n, J. G.,\nAND\nW\nITTEN\n, I. H. 1990.\nText compression. Prentice Hall, Englewood\nCliffs, NJ.\nB\nENTLEY\n, J. L.\nAND\nS\nEDGEWICK\n, R. 1997.\nFast algorithms for sorting and searching strings. In\nProceedings of 8th ACM-SIAM Symposium on Discrete Algorithms (SODA '97). 360369.\nF\nARACH\n, M.\nAND\nT\nHORUP\n, M. 1998.\nString matching in lempel-ziv compressed strings. Algorith-mica\n20, 4, 388404.\nG\nILBERT\n, E. N.\nAND\nM\nOORE\n, E. F. 1959.\nVariable-length binary encoding. Bell Systems Technical\nJournal 38, 933968.\nH\nU\n, T. C. 1973.\nA new proof of the T -C algorithm. SIAM Journal on Applied Mathematics 25, 1\n(July), 8394.\nK\nARKK\n\nAINEN\n, J.\nAND\nU\nKKNONEN\n, E. 1996.\nLempel-ziv parsing and sublinear-size index structures\nfor string matching. In Proceedings of the 3rd South American Workshop on String Processing\n(WSP '96). 141155.\nK\nNUTH\n, D. E. 1997.\nThe art of computer programming: Fundamental algorithms, 3rd ed, vol. 1.\nAddisonWesley, Reading, MA.\nL\nARMORE\n, L. L.\nAND\nP\nRZYTYCKA\n, T. M. 1998. The optimal alphabetic tree problem revisited. Journal\nof Algorithms 28, 1 (July), 120.\nM\nOURA\n, E., N\nAVARRO\n, G.,\nAND\nZ\nIVIANI\n, N. 1997.\nIndexing compressed text. In Proceedings of the 4th\nSouth American Workshop on String Processing (WSP '97). Carleton University Press, Ottawa,\nOntario. 95111.\nM\nUMEY\n, B. M. 1992.\nSome new results on constructing optimal alphabetic binary trees. Master's\nthesis, University of British Columbia, Vancouver, British Columbia .\nS\nHANNON\n, C. E. 1948.\nA mathematical theory of communication. Bell Syst. Technical Journal 27,\n379423, 623656.\nZ\nIV\n, J.\nAND\nL\nEMPEL\n, A. 1978.\nCompression of individual sequences via variable-rate coding. IEEE\nTrans. Inform. Theory, Vol.IT-24 5.\nReceived June 2004; revised January 2006; accepted October 2004 and January 2006\nACM Journal of Experimental Algorithmics, Vol. 10, Article No. 1.4, 2006.", "keywords": "Order-preserving compression;String sorting;random access;Order-preserving compression scheme;linear time algorithm;Sorting algorithms;word-RAM sorting algorithm;RAM;sorting;unit-cost;compression scheme;compression ratio;word-RAM;data collection;Keys of bounded length"} {"name": "92", "title": "FBRAM: A new Form of Memory Optimized for 3D Graphics", "abstract": "FBRAM, a new form of dynamic random access memory that greatly accelerates the rendering of Z-buffered primitives, is presented . Two key concepts make this acceleration possible. 
The first is to convert the read-modify-write Z-buffer compare and RGB\u03b1 blend into a single write only operation. The second is to support two levels of rectangularly shaped pixel caches internal to the memory chip. The result is a 10 megabit part that, for 3D graphics, performs read-modify-write cycles ten times faster than conventional 60 ns VRAMs. A four-way interleaved 100 MHz FBRAM frame buffer can Z-buffer up to 400 million pixels per second. Working FBRAM prototypes have been fabricated.", "fulltext": "INTRODUCTION\nOne of the traditional bottlenecks of 3D graphics hardware has been\nthe rate at which pixels can be rendered into a frame buffer. Modern\ninteractive 3D graphics applications require rendering platforms\nthat can support 30 Hz animation of richly detailed 3D scenes. But\nexisting memory technologies cannot deliver the desired rendering\nperformance at desktop price points.\nThe performance of hidden surface elimination algorithms has been\nlimited by the pixel fill rate of 2D projections of 3D primitives.\nWhile a number of exotic architectures have been proposed to improve\nrendering speed beyond that achievable with conventional\nDRAM or VRAM, to date all commercially available workstation\n3D accelerators have been based on these types of memory chips.\nThis paper describes a new form of specialized memory, Frame\nBuffer RAM (FBRAM). FBRAM increases the speed of Z-buffer\noperations by an order of magnitude, and at a lower system cost\nthan conventional VRAM. This speedup is achieved through two\narchitectural changes: moving the Z compare and RGB\nblend operations\ninside the memory chip, and using two levels of appropri-ately\nshaped and interleaved on-chip pixel caches.\nPREVIOUS WORK\nAfter the Z-buffer algorithm was invented [3], the first Z-buffered\nhardware systems were built in the 1970's from conventional\nDRAM memory chips. Over time, the density of DRAMs increased\nexponentially, but without corresponding increases in I/O bandwidth\n. Eventually, video output bandwidth requirements alone exceeded\nthe total DRAM I/O bandwidth.\nIntroduced in the early 1980's, VRAM [18][20] solved the video\noutput bandwidth problem by adding a separate video port to a\nDRAM. This allowed graphics frame buffers to continue to benefit\nfrom improving bit densities, but did nothing directly to speed rendering\noperations. More recently, rendering architectures have\nbumped up against a new memory chip bandwidth limitation: faster\nrendering engines have surpassed VRAM's input bandwidth. As a\nresult, recent generations of VRAM have been forced to increase\nthe width of their I/O busses just to keep up. For the last five years,\nthe pixel fill (i.e. write) rates of minimum chip count VRAM frame\nbuffers have increased by less than 30%.\nPerformance gains have mainly been achieved in commercially\navailable systems by brute force. Contemporary mid-range systems\nhave employed 10-way and 20-way interleaved VRAM designs\n[1][14]. Recent high-end architectures have abandoned VRAM altogether\nin favor of massively interleaved DRAM: as much as 120-way\ninterleaved DRAM frame buffers [2]. But such approaches do\nnot scale to cost effective machines.\nMore radical approaches to the problem of pixel fill have been ex-plored\nby a number of researchers. The most notable of these is the\npixel-planes architecture [9][16], others include [7][8][11][4]. [12]\nand [10] contain a good summary of these architectures. 
What these\narchitectures have in common is the avoidance of making the rendering\nof every pixel an explicit event on external pins. In the limit,\nonly the geometry to be rendered need enter the chip(s), and the final\npixels for video output exit.\nThese research architectures excel at extremely fast Z-buffered fill of\nlarge areas. They achieve this at the expense of high cost, out-of-order\nrendering semantics, and various overflow exception cases. Many of\nthese architectures ([16][11][4]) require screen space pre-sorting of\nprimitives before rendering commences. As a consequence, intermediate\ngeometry must be sorted and stored in large batches.\nFBRAM: A new Form of Memory\nOptimized for 3D Graphics\nMichael F Deering, Stephen A Schlapp, Michael G Lavelle\nSun Microsystems Computer Corporation\n\nUnfortunately, the benefits from the fast filling of large polygons\nare rapidly diminishing with today's very finely tessellated objects.\nThat is, the triangles are getting smaller [6]. The number of pixels\nfilled per scene is not going up anywhere near as quickly as the total\nnumber of polygons. As 3D hardware rendering systems are finally\napproaching motion fusion rates (real time), additional improvements\nin polygon rates are employed to add more fine detail, rather\nthan further increases in frame rates or depth complexity.\nZ-BUFFERING AND OTHER PIXEL PROCESSING OPERATIONS\nFundamental to the Z-buffer hidden surface removal algorithm are\nthe steps of reading the Z-buffer's old Z value for the current pixel\nbeing rendered, numerically comparing this value with the new one\njust generated, and then, as an outcome of this compare operation,\neither leaving the old Z (and RGB) frame buffer pixel values alone,\nor replacing the old Z (and RGB) value with the new.\nWith conventional memory chips, the Z data must traverse the data\npins twice: once to read out the old Z value, and then a second time to\nwrite the new Z value if it wins the comparison. Additional time must\nbe allowed for the data pins to electrically \"turn around\" between\nreading and writing. Thus the read-modify-write Z-buffer transaction\nimplemented using a straightforward read-turn-write-turn operation\nis four times longer than a pure write transaction. Batching of reads\nand writes (n reads, turn, n writes, turn) would reduce the read-modify\n-write cost to twice that of a pure write transaction for very large n,\nbut finely tessellated objects have very small values of n, and still suffer\na 3-4\npenalty.\nThis is the first problem solved by FBRAM. Starting with a data\nwidth of 32 bits per memory chip, FBRAM now makes it possible\nfor the Z comparison to be performed entirely inside the memory\nchip. Only if the internal 32 bit numeric comparison succeeds does\nthe new Z value actually replace the old value. Thus the fundamental\nread-modify-write operation is converted to a pure write operation\nat the data pins.\nBecause more than 32-bits are needed to represent a double buffered\nRGBZ pixel, some way of transmitting the results of the Z\ncomparison across multiple chips is required. The Z comparison result\nis communicated on a single external output signal pin of the\nFBRAM containing the Z planes, instructing FBRAM chips containing\nother planes of the frame buffer whether or not to write a\nnew value.\nThe Z-buffer operation is the most important of the general class of\nread-modify-write operations used in rendering. 
Other important\nconditional writes which must be communicated between\nFBRAMs include window ID compare [1] and stenciling.\nCompositing functions, rendering of transparent objects, and antialiased\nlines require a blending operation, which adds a specified\nfraction of the pixel RGB value just generated to a fraction of the\npixel RGB value already in the frame buffer. FBRAM provides four\n8-bit 100 MHz multiplier-adders to convert the read-modify-write\nblending operation into a pure write at the pins. These internal\nblend operations can proceed in parallel with the Z and window ID\ncompare operations, supported by two 32-bit comparators. One of\nthe comparators supports magnitude tests (\n>, , <, , =, ), the other\nsupports match tests (\n=, ). Also, traditional boolean bit-operations\n(for RasterOp) are supported inside the FBRAM. This collection of\nprocessing units is referred to as the pixel ALU.\nConverting read-modify-write operations into pure write operations\nat the data pins permits FBRAM to accept data at a 100 MHz\nrate. To match this rate, the pixel ALU design is heavily pipelined,\nand can process pixels at the rate of 100 million pixels per second.\nThus in a typical four-way interleaved frame buffer design the maximum\ntheoretical Z-buffered pixel fill rate of an FBRAM based system\nis 400 mega pixels per second. By contrast, comparable frame\nbuffers constructed with VRAM achieve peak rates of 33-66 mega\npixels per second [5][14].\nNow that pixels are arriving and being processed on-chip at\n100 MHz, we next consider the details of storing data.\nDRAM FUNDAMENTALS\nDynamic memory chips achieve their impressive densities (and\nlower costs) by employing only a single transistor per bit of storage.\nThese storage cells are organized into pages; typically there are several\nthousand cells per page. Typical DRAM arrays have hundreds\nor thousands of pages. Per bit sense amplifiers are provided which\ncan access an entire page of the array within 120 ns. These sense\namplifiers retain the last data accessed; thus they function as a several\nthousand bit page buffer. The limited number of external I/O\npins can perform either a read or a write on a small subset of the\npage buffer at a higher rate, typically every 40 ns.\nFBRAM starts with these standard DRAM components, and adds a\nmultiported high speed SRAM and pixel ALU. All of this is organized\nwithin a caching hierarchy, optimized for graphics access patterns\n, to address the bandwidth mismatch between the high speed\npins and the slow DRAM cells.\nPIXEL CACHING\nThe cache system design goal for FBRAM is to match the 100 MHz\nread-modify-write rate of the pixel ALU with the 8 MHz rate of the\nDRAM cells. Figure 1 illustrates this cache design challenge.\nCaches have long been used with general purpose processors; even\na small cache can be very effective [17]. But caches have been\nmuch less used with graphics rendering systems.\nThe data reference patterns of general purpose processors exhibit\nboth temporal and spatial locality of reference. Temporal locality is\nexhibited when multiple references are made to the same data within\na short period of time. Spatial locality is exhibited when multiple\nreferences within a small address range are made within a short period\nof time. 
Caches also reduce the overall load on the memory bus\nby grouping several memory accesses into a single, more efficient\nblock access.\nGraphics hardware rendering does not exhibit much temporal locality\n, but does exhibit spatial locality with a vengeance. Raster rendering\nalgorithms for polygons and vectors are a rich source of spatial\nlocality.\nAlthough the bandwidth available inside a dynamic memory chip is\norders of magnitude greater than that available at the pins, this in-Dynamic\nMemory\nALU\n?\n32-bits @ 100MHz\n2\n\n32-bits @ 100MHz\n10,240-bits @ 8MHz\nFigure 1. Bandwidth mismatch between pixel ALU and DRAM.\nFBRAM\nternal bandwidth is out of reach for architectures in which the pixel\ncache is external to the memory chips. Others have recognized the\npotential of applying caching to Z-buffered rendering [13], but they\nwere constrained to building their caches off chip. Such architectures\ncan at best approach the rendering rate constrained by the\nmemory pin bandwidth. As a result, these caching systems offer little\nor no performance gain over SIMD or MIMD interleaved pixel\nrendering.\nWith FBRAM, by contrast, the pixel caches are internal to the individual\nmemory chips. Indeed, as will be seen, two levels of internal\ncaches are employed to manage the data flow. The miss rates are\nminimized by using rectangular shaped caches. The miss costs are\nreduced by using wide and fast internal busses, augmented by an\naggressive predictive pre-fetch algorithm.\nEach successive stage from the pins to the DRAM cells has slower\nbus rates, but FBRAM compensates for this with wider busses. Because\nthe bus width increases faster than the bus rate decreases,\ntheir product (bus bandwidth) increases, making caching a practical\nsolution.\nFBRAM INTERNAL ARCHITECTURE\nModern semiconductor production facilities are optimized for a\ncertain silicon die area and fabrication process for a given generation\nof technology. FBRAM consists of 10 megabits of DRAM, a\nvideo buffer, a small cache, and a graphics processor, all implemented\nin standard DRAM process technology. The result is a die\nsize similar to a 16 megabit DRAM. A 10 megabit FBRAM is\n320\n102432 in size; four FBRAMs exactly form a standard\n1280\n102432 frame buffer.\nFigure\n2 is an internal block diagram of a single FBRAM [15]. The\nDRAM storage is broken up into four banks, referred to as banks\nA,B,C, and D. Each bank contains 256 pages of 320 words (32 bits\nper word). Each bank is accessed through a sense amplifier page\nbuffer capable of holding an entire 320 word page (10,240 bits).\nBanks can be accessed at a read-modify-write cycle time of 120 ns.\nVideo output pixels can be copied from the page buffer to one of\ntwo ping-pong video buffers, and shifted out to the display.\nFBRAM has a fast triple-ported SRAM register file. This register\nfile is organized as eight blocks of eight 32-bit words. Capable of\ncycling at 100 MHz, two of the ports (one read, one write) of the\nregister file allow 10 ns throughput for pipelined 32-bit read-modi-DRAM\nBank\nB\nVideo Buffer\nVideo Buffer\nDRAM Bank\nA\nDRAM Bank\nC\nDRAM Bank\nD\nSRAM\n2Kb\nALU\n256\n640\n640\n640\n640\n16\n32\nVideo\nData\nGlobal Bus\n32\n32\nRender\nData\nPage Buffer\nPage Buffer\nPage Buffer\nPage Buffer\n2.5Mb\n10Kb\n10Kb\n10Kb\n10Kb\n2.5Mb\n2.5Mb\n2.5Mb\nFBRAM\nFigure 2. Internal block diagram of a single FBRAM.\nfy-write ALU operations: Z-buffer compare, RGB\nblend, or boolean\n-operations. 
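To make the pixel ALU semantics concrete, here is a small functional model of how a read-modify-write Z-buffered, blended update becomes a pure write at the pins: the comparison and blend act on the locally cached old pixel, and only a single pass/fail bit needs to leave the chip. The names `Pixel` and `alu_write` are invented for illustration, and only one of the several supported compare senses is shown; this is a behavioral sketch, not the silicon.

```python
from dataclasses import dataclass

@dataclass
class Pixel:
    r: int
    g: int
    b: int
    z: int          # 8-bit color channels plus a 32-bit Z value

def alu_write(old, new, alpha=1.0, blend=False):
    """Functional model of the internal pixel ALU for one write.

    Returns (stored pixel, z_pass), where z_pass models the single external
    signal pin that tells the chips holding the other planes whether to
    commit their part of the pixel."""
    z_pass = new.z < old.z                          # one of the supported magnitude tests
    if not z_pass:
        return old, False                           # a losing write is simply dropped
    if blend:                                       # a*new + (1-a)*old, per 8-bit channel
        mix = lambda n, o: min(255, int(alpha * n + (1 - alpha) * o))
        return Pixel(mix(new.r, old.r), mix(new.g, old.g), mix(new.b, old.b), new.z), True
    return new, True

# The controller only ever issues writes; no pixel data is read back over the pins.
stored, ok = alu_write(Pixel(0, 0, 0, 2**31 - 1), Pixel(200, 30, 30, 1000), alpha=0.5, blend=True)
```

The boolean returned here plays the role of the pass/fail pin described earlier: the chips holding the RGB planes commit or drop their data according to the bit produced by the chip holding Z.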
The third port allows parallel transfer of an entire\nblock (8 words) to or from a page buffer at a 20 ns cycle time via a\n256-bit \"Global Bus\".\nFBRAM has two independent sets of control and address lines: one\nfor the two ALU ports of the SRAM register file; the other for operations\ninvolving a DRAM bank. This allows DRAM operations\nto proceed in parallel with SRAM operations. The cache control\nlogic was intentionally left off-chip, to permit maximum flexibility\nand also to keep multiple chips in lock step.\nFBRAM AS CACHE\nInternally, the SRAM register file is a level one pixel cache (L1$),\ncontaining eight blocks. Each block is a 2 wide by 4 high rectangle\nof (32-bit) pixels. The cache set associativity is determined external\nto the FBRAM, permitting fully associative mapping. The L1$ uses\na write back policy; multiple data writes to each L1$ block are ac-cumulated\nfor later transfer to the L2$.\nTaken together, the four sense amplifier page buffers constitute a\nlevel two pixel cache (L2$). The L2$ is direct mapped; each page\nbuffer is mapped to one of the pages of its corresponding DRAM\nbank. Each L2$ entry contains one page of 320 32-bit words shaped\nas a 20 wide by 16 high rectangle of pixels. The L2$ uses a write\nthrough policy; data written into a L2$ entry goes immediately into\nits DRAM bank as well.\nThe Global Bus connects the L1$ to the L2$. A 2\n4 pixel block can\nbe transferred between the L1$ and L2$ in 20 ns.\nFour parallel \"sense amplifier buses\" connect the four L2$ entries\nto the four DRAM banks. A new 20\n16 pixel DRAM page can be\nread into a given L2$ entry from its DRAM bank as often as every\n120 ns. Reads to different L2$ entries can be launched every 40 ns.\nFOUR WAY INTERLEAVED FBRAM FRAME BUFFER\nThe previous sections described a single FBRAM chip. But to fully\nappreciate FBRAM's organization, it is best viewed in one of its\nnatural environments: a four way horizontally interleaved three\nchip deep 1280\n102496-bit double buffered RGB Z frame buffer.\nFigure 3 shows the chip organization of such a frame buffer, with\ntwo support blocks (render controller and video output). Figure 4 is\na logical block diagram considering all 12 chips as one system. The\nrgb\nA\nrgb\nB\nZ\nrgb\nA\nrgb\nB\nZ\nrgb\nA\nrgb\nB\nZ\nrgb\nA\nrgb\nB\nZ\nRendering Controller\nVideo Output\nFigure 3. A four-way interleaved frame buffer system composed\nof 12 FBRAMs (1280\n1024, double buffered 32-bit RGB plus\n32-bit Z).\ndiscussions of the operations of FBRAM to follow are all based on\nconsidering all 12 memory chips as one memory system.\nHorizontally interleaving four FBRAMs quadruples the number of\ndata pins; now four RGBZ pixels can be Z-buffered, blended, and\nwritten simultaneously. This interleaving also quadruples the size\nof the caches and busses in the horizontal dimension. Thus the L1$\ncan now be thought of as eight cache blocks, each 8 pixels wide by\n4 pixels high. Taken together, the individual Global Buses in the 12\nchips can transfer an 8\n4 pixel block between the L1$ and L2$. The\nfour L2$ entries are now 80 pixels wide by 16 pixels high (see Figure\n4).\nAll three levels of this memory hierarchy operate concurrently.\nWhen the addressed pixels are present in the L1$, the four way interleaved\nFBRAMs can process 4 pixels every 10 ns. 
On occasion,\nthe L1$ will not contain the desired pixels (an \"L1$ miss\"), incurring\na 40 ns penalty (\"L1$ miss cost\"): 20 ns to fetch the missing\nblock from the L2$ for rendering, 20 ns to write the block back to\nthe L2$ upon completion. Even less often, the L2$ will not contain\nthe block of pixels needed by the L1$ (an \"L2$ miss\"), incurring a\n40-120 ns penalty (\"L2$ miss cost\") depending upon the scheduling\nstatus of the DRAM bank.\nThis example four way interleaved frame buffer will be assumed\nfor the remainder of this paper.\nRECTANGULAR CACHES REDUCE MISS RATE\nThe organization so far shows pixels moving between fast, narrow\ndata paths to slow, wide ones. As can be seen in Figure 4, there is\nsufficient bandwidth between all stages to, in theory, keep up with\nthe incoming rendered pixels, so long as the right blocks and pages\nare flowing. We endeavor to achieve this through aggressive prefetching\nof rectangular pixel regions.\nLevel 2 Cache\nDRAM\nBank\nLevel 1 Cache\nALU(4)\nread-modify-write\n4\n1 pixels@10 ns =\n800 Mpixels/second\nread or write\n8\n4 pixels@20 ns =\n1600 Mpixels/second\nwrite-modify-read\n80\n16 pixels@120 ns =\n10600 Mpixels/second\nper bank;\nCan overlap\nbanks @40 ns =\n32 Gpixels/second\nGlobal Bus\nFigure 4. A logical representation of a four-way horizontally\ninterleaved frame buffer composed of 12 FBRAMs.\nwrite (or read)\n4\n1 pixels@10 ns =\n400 Mpixels/second\n8 wide\n\n4 high\n8 block\n80 wide\n\n16 high\n1 page per bank\nLocality of reference in graphics rendering systems tends to be to\nneighboring pixels in 2D. Because of this, graphics architects have\nlong desired fast access to square regions [19]. Unfortunately, the\nstandard VRAM page and video shift register dimensions result in\nefficient access only to long narrow horizontal regions. FBRAM\nsolves this problem by making both caches as square as possible.\nBecause the L1$ blocks are 8\n4 pixels, thin line rendering algorithms\ntend to pass through four to eight pixels per L1$ block, resulting\nin a L1$ miss every fourth to eighth pixel (a \"miss rate\" of\n1/4 to 1/8). Parallel area rendering algorithms can aim to utilize all\n32 pixels in a block, approaching a miss rate of 1/32.\nSimilarly, because the L2$ blocks are 80\n16 pixels, L2$ miss rates\nare on the order of 1/16 to 1/80 for thin lines, and asymptotically approach\n1/1280 for large areas.\nThese simplistic miss rate approximations ignore fragmentation effects\n: lines may end part way through a block or page, polygon edges\nusually cover only a fraction of a block or page. In addition, fragmentation\nreduces the effective pin bandwidth, as not all four horizontally\ninterleaved pixels (\"quads\") can be used every cycle.\nFBRAM's block and page dimensions were selected to minimize\nthe effects of fragmentation. Table 1 displays the average number\nof L1$ blocks (B), and L2$ pages (P) touched when rendering various\nsizes of thin lines and right isosceles triangles (averaged over\nall orientations and positions), for a range of alternative cache aspect\nratios. The white columns indicate FBRAM's dimensions.\nNote that smaller primitives consume more blocks and pages per\nrendered pixel, due to fragmentation effects. 
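A minimal sketch of the counting summarized in Table 1, using the four-way-interleaved dimensions given in the text (8x4 L1$ blocks, 80x16 L2$ pages). The table's entries are averages over many positions and orientations; this sketch evaluates a single fixed primitive.

```python
def footprint(pixels, block=(8, 4), page=(80, 16)):
    """Distinct L1$ blocks and L2$ pages touched by a set of (x, y) pixels."""
    blocks = {(x // block[0], y // block[1]) for x, y in pixels}
    pages = {(x // page[0], y // page[1]) for x, y in pixels}
    return len(blocks), len(pages)

# A 20-pixel vertical thin line starting at (1, 10).
line = [(1, 10 + i) for i in range(20)]
touched_blocks, touched_pages = footprint(line)
print(f"{len(line)} pixels touch {touched_blocks} blocks and {touched_pages} pages")
```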
Although the table implies\nthat a page size of 40\n32 is better than 8016, practical limitations\nof video output overhead (ignored in this table, and to be discussed\nin section 13), dictated choosing 80\n16.\nOPERATING THE FRAME BUFFER\nFor non-cached rendering architectures, theoretical maximum performance\nrates can be derived from statistics similar to Table 1.\nThis is pessimistic for cached architectures such as FBRAM. Because\nof spatial locality, later primitives (neighboring triangles of a\nstrip) will often \"re-touch\" a block or page before it is evicted from\nthe cache, requiring fewer block and page transfers. Additional\nsimulations were performed to obtain the quad, page, and block\ntransfer rates. The left half of Table 2 shows the results for\nFBRAM's chosen dimensions.\nEquation 1 can be used to determine the upper bound on the number\nof primitives rendered per second using FBRAM. The performance\nAverage Pages/Prim\nAverage Blocks/Prim\n320\n\n4\n160\n\n8\n80\n\n16\n40\n\n32\n32\n\n1\n16\n\n2\n8\n\n4\n10 Pix Vec\n2.61\n1.84\n1.48\n1.36\n7.57\n4.58\n3.38\n20 Pix Vec\n4.21\n2.68\n1.97\n1.71\n14.1\n8.15\n5.76\n50 Pix Vec\n9.02\n5.20\n3.42\n2.78\n33.8\n18.9\n12.9\n100 Pix Vec\n17.1\n9.42\n5.85\n4.57\n66.6\n36.8\n24.9\n25 Pix Tri\n2.96\n2.02\n1.60\n1.46\n9.75\n6.12\n4.68\n50 Pix Tri\n3.80\n2.45\n1.89\n1.67\n13.8\n8.72\n6.67\n100 Pix Tri\n4.97\n3.05\n2.24\n1.94\n20.0\n12.8\n9.89\n1000 Pix Tri\n14.2\n8.05\n5.41\n4.49\n82.5\n59.6\n50.5\nTable\n1 Average number of Pages or Blocks touched\nper primitive\nis set by the slowest of the three data paths (quads at the pins and\nALU, blocks on the global bus, pages to DRAM):\n(1)\nwhere the denominators Q, B, and P are obtained from the left half\nof Table 2, and the numerators R\nQ\n, R\nB\n, R\nP\nare the bus rates for\nquads, blocks and pages. Referring again to Figure 4, R\nQ\nis 100 million\nquads/sec through the ALU (4 pixels/quad), R\nB\nis 25 million\nblocks/sec (40 ns per block, one 20 ns prefetch read plus one 20 ns\nwriteback) and R\nP\nis 8.3 million pages/sec (120 ns per page).\nThe right half of Table 2 gives the three terms of Equation 1. The\nwhite columns indicate the performance limit (the minimum of the\nthree for each case).\nEquation 1 assumes that whenever the L1$ is about to miss, the rendering\ncontroller has already brought the proper block in from the L2$ into\nthe L1$. Similarly, whenever the L2$ is about to miss, the rendering\ncontroller has already brought the proper page in from the DRAM bank\ninto the L2$. To achieve such clairvoyance, the controller must know\nwhich pages and blocks to prefetch or write back. The FBRAM philosophy\nassumes that the rendering controller queues up the pixel operations\nexternal to the FBRAMs, and snoops this write queue to predict\nwhich pages and blocks will be needed soon. These needed pages and\nblocks are prefetched using the DRAM operation pins, while the\nSRAM operation pins are used to render pixels into the L1$ at the same\ntime. Cycle accurate simulation of such architectures have shown this\ntechnique to be quite effective.\nAlthough pages can only be fetched to one L2$ entry every 120 ns,\nit is possible to fetch pages to different L2$ entries every 40 ns. To\nreduce the prefetching latency, banks A, B, C and D are interleaved\nin display space horizontally and vertically, as shown in Figure 5,\nensuring that no two pages from the same bank are adjacent horizontally\n, vertically, or diagonally. 
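Equation 1 above is straightforward to evaluate directly; the sketch below plugs in the bus rates of Figure 4 and the Table 2 miss counts for a 50-pixel triangle, reproducing the quad-limited bound of 4.95 million triangles per second shown in the table.

```python
# Bus rates per second for the four-way interleaved frame buffer of Figure 4.
R_Q = 100e6    # quads through the pins and pixel ALU (10 ns each)
R_B = 25e6     # global-bus blocks (20 ns prefetch + 20 ns writeback)
R_P = 8.33e6   # DRAM pages (120 ns per page access)

def primitive_rate(quads_per_prim, blocks_per_prim, pages_per_prim):
    """Upper bound of Equation (1): the slowest of the three data paths wins."""
    return min(R_Q / quads_per_prim, R_B / blocks_per_prim, R_P / pages_per_prim)

# Average misses per primitive for a 50-pixel Z-buffered triangle (Table 2).
rate = primitive_rate(quads_per_prim=20.2, blocks_per_prim=3.04, pages_per_prim=0.422)
print(f"{rate / 1e6:.2f} million 50-pixel triangles per second")
```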
This enables pre-fetching any\nneighboring page while rendering into the current page.\nAs an example, while pixels of vector b in Figure 5 are being rendered\ninto page 0 of bank A, the pre-fetch of page 0 of bank C can\nbe in progress. Usually the pre-fetch from C can be started early\nenough to avoid idle cycles between the last pixel in page 0 of bank\nA and the first pixel in page 0 of bank C.\nThe key idea is that even for vertical vectors, such as vector d, we\ncan pre-fetch pages of pixels ahead of the rendering as fast as the\nrendering can cross a page. Even though vector c rapidly crosses\nthree pages, they can still be fetched at a 40ns rate because they are\nAverage\nMisses/Prim\nMillion Prim/sec\nQ\nuad\nB\nlock\nP\nage\nQ\nuad\nPerf\nB\nlock\nPerf\nP\nage\nPerf\n10 Pix Vec\n8.75\n2.35\n0.478\n11.4\n10.6\n17.4\n20 Pix Vec\n16.4\n4.71\n0.955\n6.10\n5.31\n8.72\n50 Pix Vec\n38.9\n11.8\n2.40\n2.57\n2.12\n3.47\n100 Pix Vec\n76.7\n23.4\n4.83\n1.30\n1.07\n1.72\n25 Pix Tri\n11.6\n1.70\n0.308\n8.62\n14.7\n27.0\n50 Pix Tri\n20.2\n3.04\n0.422\n4.95\n8.22\n19.7\n100 Pix Tri\n36.1\n6.54\n0.605\n2.77\n3.82\n13.8\n1000 Pix Tri\n286.\n46.7\n4.37\n0.350\n0.535\n1.91\nTable\n2 FBRAM Performance Limits\nprimitives/sec\nmin R\nQ\nQ\n------- R\nB\nB\n------- R\nP\nP\n\n,\n,\n(\n)\n=\nfrom three different banks. Appendix A gives a detailed cycle by\ncycle example of rendering a 10 pixel vector.\nWhen vectors are chained (vector e), the last pixel of one segment\nand the first pixel of the next segment will almost always be in the\nsame bank and page. Even when segments are isolated, the probability\nis 75% that the last pixel of one segment and first pixel of next\nsegment will be in different banks, thus enabling overlapping of\nDRAM bank fetches to L2$.\nPIXEL RECTANGLE FILL OPERATINGS\nAs fast as the FBRAM pixel write rate is, it is still valuable to provide\noptimizations for the special case of large rectangle fill. These\nspecifically include clearing to a constant value or to a repeating\npattern. Fast clearing of the RGBZ planes is required to achieve\nhigh frame rates during double buffered animation.\nFBRAM provides two levels of acceleration for rectangle filling of\nconstant data. Both are obtained by bypassing the bandwidth bottlenecks\nshown in Figure 4.\nIn the first method, once an 8\n4 L1$ block has been initialized to a\nconstant color or pattern, the entire block can be copied repeatedly\nto different blocks within the L2$ at global bus transfer rates. This\nfeature taps the full bandwidth of the global bus, bypassing the pin/\nALU bandwidth bottleneck. Thus regions can be filled at a 4\nhigher\nrate (1.6 billion pixels per second, for a four-way interleaved\nframe buffer).\nThe second method bypasses both the pin/ALU and the Global Bus\nbottlenecks, effectively writing 1,280 pixels in one DRAM page cycle\n. First, the method described in the previous paragraph is used to\ninitialize all four pages of the L2$, then these page buffers are rapidly\ncopied to their DRAM banks at a rate of 40 ns per page. 
Thus\nfor large areas, clearing to a constant color or screen aligned pattern\ncan proceed at a peak rate of 32 billion pixels per second (0.25 ter-abytes/sec\n), assuming a four-way interleaved design.\nWINDOW SYSTEM SUPPORT\nThe most important feature of FBRAM for window system support\nis simply its high bandwidth; however two window system specific\noptimizations are also included.\nFull read-modify-write cycles require two Global Bus transactions:\na prefetching read from the L2$, and copyback write to the L2$.\nMost window system operations do not require read-modify-write\ncycles when rendering text and simple 2D graphics. For such write-0\n16\n32\n48\n64\n0\n80\nA:0\nx\ny\n160\n80\n96\nC:0\nA:8\nC:8\nB:0\nD:0\nB:8\nD:8\na\nb\nc\nd\ne\nf\ng\nA:16\nC:16\nA:24\nC:24\nB:16\nD:16\nB:24\nD:24\n112\nh\nA:1\nC:1\nA:9\nC:9\nA:17\nC:17\nA:25\nC:25\nFigure 5.\nThe upper left corner of the frame buffer, showing\npages 0-255, and example primitives a-h.\nvertically and horizontally interleaved banks A-D,\nonly operations, the number of Global Bus transactions can be cut\nin half, improving performance. This is accomplished by skipping\nthe pre-fetching read of a new block from the L2$ to L1$.\nVertical scrolling is another frequent window system operation accelerated\nby FBRAM. This operation is accelerated by performing\nthe copy internal to the FBRAM. This results in a pixel scroll rate\nof up to 400 million pixels per second.\nVIDEO OUTPUT\nVRAM solved the display refresh bandwidth problem by adding a\nsecond port, but at significant cost in die area. FBRAM also provides\na second port for video (see Figure 2), but at a smaller area penalty.\nLike VRAM, FBRAM has a pair of ping-pong video buffers, but\nunlike VRAM, they are much smaller in size: 80 pixels each for a\nfour-way interleaved FBRAM frame buffer vs. 1,280 pixels each\nfor a five-way interleaved VRAM frame buffer. These smaller buffers\nsave silicon and enable a rectangular mapping of pages to the\ndisplay, but at the price of more frequent video buffer loads.\nThe FBRAM video buffers are loaded directly from the DRAM\nbank page buffers (L2$, 80\n16 pixels), selecting one of the 16 scan\nlines in the page buffer. The cost of loading a video buffer in both\nFBRAM and VRAM is typically 120-200 ns.\nTo estimate an upper bound for FBRAM video refresh overhead for\na 1280\n1024 76Hz non-interlaced video display, assume that all\nrendering operations cease during the 200 ns video buffer load interval\n. During each frame, a grand total of 3.28 ms\n(200 ns\n12801024 pixels / 80 pixels) of video buffer loads are\nneeded for video refresh. Thus 76 Hz video refresh overhead could\ntheoretically take away as much as 25% of rendering performance.\nThe actual video overhead is only 5-10% for several reasons. First,\nthe pixel ALU can still access its side of the L1$ during video refresh\n, because video transfers access the L2$. Second, although one\nof the four banks is affected by video refresh, global bus transfers\nto the other three banks can still take place. Finally, it is usually possible\nto schedule video transfers so that they do not conflict with\nrendering, reducing the buffer load cost from 200 to 120 ns.\nFor high frame rate displays, the raster pattern of FBRAM video\noutput refresh automatically accomplishes DRAM cell refresh, imposing\nno additional DRAM refresh tax.\nFBRAM PERFORMANCE\nThe model developed in section 10 gave theoretical upper bounds\non the performance of a four-way interleaved FBRAM system. 
But\nto quantify the performance obtainable by any real system built\nwith FBRAM, a number of other factors must be considered.\nFirst, a 10% derating of the section 10 model should be applied to\naccount for the additional overhead due to video and content refresh\ndescribed in section 13.\nThe sophistication of the cache prediction and scheduling algorithm\nimplemented will also affect performance. Equation 1 assumed that\nthe cache controller achieves complete overlap between the three\ndata paths; this is not always possible. More detailed simulations\nshow that aggressive controllers can achieve 75% (before video\ntax) of the performance results in table 2.\nTaking all of these effects into account, simulations of buildable\nfour-way interleaved FBRAM systems show sustained rates of 3.3\nmillion 50 pixel Z-buffered triangles per second, and 7 million 10\npixel non-antialiased Z-buffered vectors per second. FBRAM systems\nwith higher external interleave factor can sustain performances\nin the tens of millions of small triangles per second range.\nAll of our simulations assume that the rest of the graphics system\ncan keep up with the FBRAM, delivering four RGB\nZ pixels every\n10 ns. While this is a formidable challenge, pixel interpolation and\nvertex floating point processing ASICs are on a rapidly improving\nperformance curve, and should be able to sustain the desired rates.\nFBRAM performance can be appreciated by comparing it with the\npixel fill rate of the next generation Pixel Planes rasterizing chips\n[16], although FBRAM does not directly perform the triangle ras-terization\nfunction. The pixel fill rate for a single FBRAM chip is\nonly a factor of four less than the peak (256 pixel rectangle) fill rate\nof a single Pixel Planes chip, but has 400 times more storage capacity\n.\nNext let us contrast the read-modify-write performance of FBRAM\nto a 60 ns VRAM. Assuming no batching, VRAM page mode requires\nin excess of 125 ns to do what FBRAM does in 10 ns; a\n12.5\nspeed difference.\nBatching VRAM reads and writes to minimize bus-turns, as described\nin section 3, does not help as much as one might think. Typical\nVRAM configurations have very few scan lines per page,\nwhich causes fragmentation of primitives, limiting batch sizes. Table\n1 shows that for a 320\n4 page shape, a 50 pixel triangle touches\n3.8 pages, averaging 13 pixels per page. For a five way interleaved\nframe buffer, an average of only 2.6 pixels can be batched per chip.\nOTHER DRAM FFSHOOTS\nA veritable alphabet soup of new forms of DRAM are at various stages\nof development by several manufactures: CDRAM, DRAM, FBRAM,\nRAMBUS, SDRAM, SGRAM, SVRAM, VRAM, and WRAM. For\n3D graphics, FBRAM is distinguished as the only technology to directly\nsupport Z-buffering, alpha blending, and ROPs. Only FBRAM converts\nread-modify-write operations into pure write operations; this\nalone accounts for a 3-4\nperformance advantage at similar clock rates.\nOther than CDRAM, only FBRAM has two levels of cache, and efficient\nsupport of rectangular cache blocks. It is beyond the scope of this\npaper to derive precise comparative 3D rendering performance for all\nthese RAMs, but FBRAM appears to be several times faster than any\nof these alternatives.\nFUTURES\nThe demand for faster polygon rendering rates shows no sign of\nabating for some time to come. However, as was observed at the\nend of section 2, the number of pixels filled per scene is not going\nup anywhere near as rapidly. 
Future increases in pixel resolution,\nframe rate, and/or depth complexity are likely to be modest.\nFuture predictions of where technology is going are at best approximations\n, and their use should be limited to understanding trends.\nWith these caveats in mind, Figure 6 explores trends in polygon\nrendering rate demand vs. memory technologies over the next several\nyears. The figure shows the projected pixel fill rate (including\nfragmentation effects) demanded as the polygon rate increases over\ntime (from the data in [6]). It also displays the expected delivered\npixel fill rates of minimum chip count frame buffers implemented\nusing FBRAM and VRAM technologies (extrapolating from Equation\n1 and from the systems described in [14][5]). The demand\ncurve is above that achievable inexpensively with conventional\nVRAM or DRAM, but well within the range of a minimum chip\ncount FBRAM system.\nThe trend curve for FBRAM has a steeper slope because, unlike\nVRAM, FBRAM effectively decouples pixel rendering rates from\nthe inherently slower DRAM single transistor access rates. This\nwill allow future versions of FBRAM to follow the more rapidly increasing\nSRAM performance trends. FBRAM still benefits from\nthe inherently lower cost per bit of DRAM technology.\nThe \"excess\" pixel fill rate shown for FBRAM in Figure 6 combined\nwith FBRAM's high bit density will permit cost-effective,\none pass, full scene antialiasing using super-sampled frame buffers.\nCONCLUSIONS\nIn the past, the bandwidth demands of video output led to the creation\nof VRAM to overcome DRAM's limitations. In recent years,\nthe demands of faster and faster rendering have exceeded VRAM's\nbandwidth. This led to the creation of FBRAM, a new form of random\naccess memory optimized for Z-buffer based 3D graphics rendering\nand window system support. A ten fold increase in Z-buffered\nrendering performance for minimum chip count systems is\nachieved over conventional VRAM and DRAM. Given statistics on\nthe pixel fill requirements of the next two generations of 3D graphics\naccelerators, FBRAM may remove the pixel fill bottleneck from\n3D accelerator architectures for the rest of this century.\nACKNOWLEDGEMENTS\nFBRAM is a joint development between SMCC and Mitsubishi\nElectric Corporation. The authors would like to acknowledge the\nefforts of the entire Mitsubishi team, and in particular K. Inoue, H.\nNakamura, K. Ishihara, Charles Hart, Julie Lin, and Mark Perry.\nOn the Sun side, the authors would like to thank Mary Whitton, Scott\nNelson, Dave Kehlet, and Ralph Nichols, as well as all the other engineers\nwho reviewed drafts of this paper.\nREFERENCES\n1.\nAkeley, Kurt and T. Jermoluk. High-Performance Polygon\nRendering, Proceedings of SIGGRAPH '88 (Atlanta, GA, Aug\n1-5, 1988). In Computer Graphics 22, 4 (July 1988), 239-246.\n2.\nAkeley, Kurt. Reality Engine Graphics. Proceedings of SIGGRAPH\n`93 (Anaheim, California, August 1-6, 1993). In\nComputer Graphics, Annual Conference Series, 1993, 109-116\n.\n3.\nCatmull, E. A Subdivision Algorithm for Computer Display of\nCurved Surfaces, Ph.D. Thesis, Report UTEC-CSc-74-133,\nComputer Science Dept., University of Utah, Salt Lake City,\nUT, Dec. 1974.\nFigure 6. Pixel fill rate needed to match anticipated triangle fill\nrate demand compared with anticipated delivered pixel fill rate\ndelivered by minimum chip count FBRAM and VRAM systems.\n1993\n2001\nYear\n10B\n1B\n100M\n10M\n10M\n100M\n1B\n1M\nTriangles/sec\nVRAM\nDemand\nFBRAM\nPixels/sec\n4.\nDeering, Michael, S. 
Winner, B. Schediwy, C. Duffy and N.\nHunt. The Triangle Processor and Normal Vector Shader: A\nVLSI system for High Performance Graphics. Proceedings of\nSIGGRAPH '88 (Atlanta, GA, Aug 1-5, 1988). In Computer\nGraphics 22, 4 (July 1988), 21-30.\n5.\nDeering, Michael, and S. Nelson. Leo: A System for Cost Effective\nShaded 3D Graphics. Proceedings of SIGGRAPH `93\n(Anaheim, California, August 1-6, 1993). In Computer\nGraphics, Annual Conference Series, 1993, 101-108.\n6.\nDeering, Michael. Data Complexity for Virtual Reality:\nWhere do all the Triangles Go? Proceedings of IEEE VRAIS\n`93 (Seattle, WA, Sept. 18-22, 1993). 357-363.\n7.\nDemetrescu, S. A VLSI-Based Real-Time Hidden-Surface\nElimination Display System, Master's Thesis, Dept. of Computer\nScience, California Institute of Technology, Pasadena\nCA, May 1980.\n8.\nDemetrescu, S. High Speed Image Rasterization Using Scan\nLine Access Memories. Proceedings of 1985 Chapel Hill Conference\non VLSI, pages 221-243. Computer Science Press,\n1985.\n9.\nFuchs, Henry, and J. Poulton. Pixel Planes: A VLSI-Oriented\nDesign for a Raster Graphics Engine. In VLSI Design, 2,3\n(3rd quarter 1981), 20-28.\n10. Foley, James, A. van Dam, S. Feiner and J Hughes. Computer\nGraphics: Principles and Practice, 2nd ed., Addison-Wesley\n, 1990.\n11. Gharachorloo, Nader, S. Gupta, E. Hokenek, P. Bala-subramanina\n, B. Bogholtz, C. Mathieu, and C. Zoulas.\nSubnanosecond Rendering with Million Transistor Chips.\nProceedings of SIGGRAPH '88 (Boston, MA, July 31, Aug 4,\n1989). In Computer Graphics 22, 4 (Aug. 1988), 41-49.\n12. Gharachorloo, Nader, S. Gupta, R. Sproull, and I. Sutherland\n. A Characterization of Ten Rasterization Techniques.\nProceedings of SIGGRAPH '89 (Boston, MA, July 31, Aug 4,\n1989). In Computer Graphics 23, 3 (July 1989), 355-368.\n13. Goris, A., B. Fredrickson, and H. Baeverstad. A Config-urable\nPixel Cache for Fast Image Generation. In IEEE CG&A\n7,3 (March 1987), pages 24-32, 1987.\n14. Harrell, Chandlee, and F. Fouladi. Graphics Rendering Architecture\nfor a High Performance Desktop Workstation. Proceedings\nof SIGGRAPH `93 (Anaheim, California, August 16\n, 1993). In Computer Graphics, Annual Conference Series,\n1993, 93-100.\n15. M5M410092 FBRAM Specification. Mitsubishi Electric,\n1994.\n16. Molnar, Steven, J. Eyles, J. Poulton. PixelFlow: High-Speed\nRendering Using Image Composition. Proceedings of SIGGRAPH\n'92 (Chicago, IL, July 26-31, 1992). In Computer\nGraphics 26, 2 (July 1992), 231-240.\n17. Patterson, David, and J. Hennessy. Computer Architecture:\na Quantitative Approach, Morgan Kaufmann Publishers, Inc.,\n1990.\n18. Pinkham, R., M. Novak, and K. Guttag. Video RAM Excels\nat Fast Graphics. In Electronic Design 31,17, Aug. 18, 1983,\n161-182.\n19. Sproull, Robert, I. Sutherland, and S. Gupta. The 8 by 8\nDisplay. In ACM Transactions on Graphics 2, 1 (Jan 1983),\n35-56.\n20. Whitton, Mary. Memory Design for Raster Graphics Displays\n. In IEEE CG&A 4,3 (March 1984), 48-65, 1984.\nAPPENDIX A: Rendering a 10 pixel Vector\nThis appendix demonstrates the detailed steps involved in scheduling\nFBRAM rendering, using the 10 pixel long, one pixel wide vertical\nZ-buffered vector shown in Figure 7. 
This figure shows the\nmemory hierarchy elements touched by the vector at three levels of\ndetail: the coarsest (left most) shows banks (A..D) and pages\n(0..255), the intermediate detail (middle) shows blocks in the L2$,\nand the finest (right most) shows pixel quads.\nThe example vertical vector starts at x=1, y=10, and ends at y=19.\nTable 3 gives the bank, page, L2$ block, and quad for each pixel in\nthe vector. Note the spatial locality of pixels.\nTable 4 below shows the schedule of commands and data issued to\nthe FBRAM, and the resulting internal activities. Note that independent\ncontrols are available, and permit parallel L1$ and L2$ activities\n. The following abbreviations are used in Table 4:\nL1$[n]: Block n of the L1$.\nL2$[n]: Block n of the L2$.\nACP: Access page (DRAM to L2$ transfer).\nA:17\nC:17\nA:25\nC:25\nA:33\n0\n4\n8\n12\n16\n20\n24\n0\n4\n8\n12\n0\n1\n2\n3\n4\n5\n6\n7\n0\n1\n4\n5\nA:0\nFigure 7. A 10 pixel vector near the upper\n0\n2\n4\n6\n0\n2\n4\n6\n0\n2\n4\n6\n0\n2\n4\n6\n1\n3\n5\n7\n1\n3\n5\n7\n1\n3\n5\n7\n1\n3\n5\n7\nC:0\nB:0\nD:0\nA:8\nC:8\nB:8\nD:8\nA:16\nC:16\nB:16\nD:16\nA:24\nC:24\nB:24\nD:24\nA:32B:32\nA:1\nC:1\nA:9\nC:9\nBank:Page\nBlock\nQuad\n0\n4\n8\n12\n16\n20\n0\n80\n160\n0\n16\n32\n48\n64\n80\n96\nleft corner of the screen, at 3 levels of detail.\nRDB: Read block (L2$\n\nL1$ transfer).\nMWB: Masked write block (L1$\n\nL2$ transfer).\nPRE: Precharge bank (free L2$ entry).\nread x: Read pixel x from L1$ to ALU.\nwrite x: Write pixel x from ALU to L1$.\nWe follow the first pixel at (1, 10) through the cache hierarchy. The\npixel's page (page 0 of bank A) is transferred to the L2$ entry A in\ncycles 1 to 4 (notice that the next 5 pixels are transferred too). The\npixel's block is then transferred from L2$ entry A to L1$[0] in cycles\n5 and 6 (the next pixel is transferred too). The pixel is read from\nthe L1$[0] to the pipelined ALU in cycle 7. The old and new pixels\nare merged (Z-buffered, blended) during cycles 8 to 11. The resulting\npixel is written back to the L1$[0] in cycle 12. The pixel's block\nis transferred from the L1$[0] back to the L2$ entry A (and DRAM\npage 0 of bank A) in cycles 14 and 15.\nThe second pixel at (1,11) hits in both L1$ and L2$, and can follow\none cycle behind the first pixel, arriving back in the L1$ in cycle 13.\nThe pixel at (1,12) misses in the L1$, but hits in the L2$, requiring\nan RDB from L2$ entry A to L1$[1]. The pixel at (1,16) misses in\nboth caches, requiring a L2$ access of bank C, and followed by a\ntransfer from L2$ entry C to L1$[2]. All the other pixels hit in both\ncaches, and are scheduled like the second pixel.\nX\nY\nBank\nPage\nL2$\nBlock\nQuad\nL2$\nL1$\n1\n10\nA\n0\n2\n4\nmiss\nmiss\n1\n11\n6\nhit\nhit\n1\n12\n3\n0\nhit\nmiss\n1\n13\n2\nhit\nhit\n1\n14\n4\nhit\nhit\n1\n15\n6\nhit\nhit\n1\n16\nC\n0\n0\n0\nmiss\nmiss\n1\n17\n2\nhit\nhit\n1\n18\n4\nhit\nhit\n1\n19\n6\nhit\nhit\nTable\n3 Bank, Page, L2$ Block, and Quad for each\npixel in the vector\nTable 4. 
Schedule of operations for rendering a 10 pixel vector\nMerge data with Quad 4 of L1$[0]\nMerge data with Quad 6 of L1$[0]\nMerge data with Quad 0 of L1$[1]\nMerge data with Quad 2 of L1$[1]\nMerge data with Quad 4 of L1$[1]\nMerge data with Quad 6 of L1$[1]\nMerge data with Quad 0 of L1$[2]\nMerge data with Quad 2 of L1$[2]\nMerge data with Quad 4 of L1$[2]\nMerge data with Quad 6 of L1$[2]\nAccess Page 0 of Bank A\nAccess Page 0 of Bank C\nL2$[2] of Bank A\nL1$[0]\nL2$[0] of Bank C\nL1$[2]\nL2$[3] of Bank A\nL1$[1]\nL2$[2] of Bank A\nL1$[0]\nL2$[3] of Bank A\nL1$[1]\nL2$[0] of Bank C\nL1$[2]\nPrecharge Bank A\n23\n22\n21\n20\n19\n18\n17\n16\n15\n14\n13\n12\n11\n10\n9\n8\n7\n6\n5\n4\n3\n2\n1\nACP\nACP\nPRE\nRDB\nRDB\nRDB\nMWB\nMWB\nMWB\nRDB\nRDB\nRDB\nMWB\nMWB\nMWB\nread 0\nread 2\nread 4\nread 6\nread 4\nread 6\nread 0\nread 2\nread 4\nread 6\nwrite 4\nwrite 6\nwrite 4\nwrite 6\nwrite 4\nwrite 6\nwrite 0\nwrite 2\nwrite 0\nwrite 2\nL1$ Command and Data\nL2$ Command\nL2$ Activities\nL1$ Activities\nA\nC\nB\nD\n3 4 5 6 7\n0\n1\n2\nInternal Activities\nCommands and Data to FBRAM", "keywords": "FBRAM;Video output bandwidth;Dynamic random access memory;RGBa blend;dynamic memory;DRAM;Rendering rate;Z buffer;rendering;graphics;caching;memory;Dynamic memory chips;Pixel processing;3D graphics hardware;Acceleration;3D graphics;Z-buffer;Optimisation;Z-compare;pixel caching;VRAM;FBRam;Z-buffering;parallel graphics algorithms;Video buffers;pixel processing;Pixel Cache;Frame buffer;SRAM;Caches"} {"name": "94", "title": "Focused Named Entity Recognition Using Machine Learning", "abstract": "In this paper we study the problem of finding most topical named entities among all entities in a document, which we refer to as focused named entity recognition. We show that these focused named entities are useful for many natural language processing applications, such as document summarization , search result ranking, and entity detection and tracking. We propose a statistical model for focused named entity recognition by converting it into a classification problem . We then study the impact of various linguistic features and compare a number of classification algorithms. From experiments on an annotated Chinese news corpus, we demonstrate that the proposed method can achieve near human-level accuracy.", "fulltext": "INTRODUCTION\nWith the rapid growth of online electronic documents,\nmany technologies have been developed to deal with the\nenormous amount of information, such as automatic summarization\n, topic detection and tracking, and information\nretrieval. Among these technologies, a key component is to\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nSIGIR'04, July 2529, 2004, Sheffield, South Yorkshire, UK.\nCopyright 2004 ACM 1-58113-881-4/04/0007 ...\n$\n5.00.\nidentify the main topics of a document, where topics can be\nrepresented by words, sentences, concepts, and named entities\n. A number of techniques for this purpose have been\nproposed in the literature, including methods based on position\n[3], cue phrases [3], word frequency, lexical chains[1]\nand discourse segmentation [13]. 
Although word frequency is the easiest way to represent the topics of a document, it was reported in [12] that position methods produce better results than word counting based methods.
Important sentence extraction is the most popular method studied in the literature. A recent trend in topic sentence extraction is to employ machine learning methods. For example, trainable classifiers have been used in [8, 21, 5, 11] to select sentences based on features such as cue phrase, location, sentence length, word frequency and title, etc.
All of the above methods share the same goal of extracting important sentences from documents. However, for topic representation, sentence-level document summaries may still contain redundant information. For this reason, other representations have also been suggested. For example, in [17], the authors used structural features of technical papers to identify important concepts rather than sentences. The authors of [9] presented an efficient algorithm to choose topic terms for hierarchical summarization according to a probabilistic language model. Another hybrid system, presented in [7], generated summarizations with the help of named entity foci of an article. These named entities include people, organizations, places, and untyped names.
In this paper, we study the problem of finding important named entities from news articles, which we call focused named entity recognition. A news article often reports an event that can be effectively summarized by the five W (who, what, when, where, and why) approach. Many of the five W's can be associated with appropriate named entities in the article. Our definition of focused named entities is mainly concerned with Who and What. Therefore it is almost self-evident that the concept of focused named entity is important for document understanding and automatic information extraction. In fact, a number of recent studies have already suggested that named entities are useful for text summarization [15, 4, 7, 16]. Moreover, we shall illustrate that focused named entities can be used in other text processing tasks as well. For example, we can rank search results by giving more weights to focused named entities.
We define focused named entities as named entities that are most relevant to the main topic of a news article. Our task is to automatically select these focused named entities from the set of all entities in a document. Since focused named entity recognition is a newly proposed machine learning task, we need to determine whether it is well-posed, that is, whether there exists a sufficient level of agreement on focused named entities among human reviewers. A detailed study on this matter will be reported in Section 5.2. The conclusion of our study is that there is indeed a sufficient level of agreement. Encouraged by this study, we further investigated the machine learning approach to this problem, which is the focus of the paper. We discuss various issues encountered in the process of building a machine learning based system, and show that our method can achieve near human-level performance.
The remainder of this paper is organized as follows. In Section 2 we introduce the problem of focused named entity recognition and illustrate its applications. Section 3 describes a general machine learning approach to this problem. In Section 4, we present features used in our system.
Section\n5 presents a study of human-level agreement on focused\nnamed entities, and various experiments which illustrate the\nimportance of different features. Some final conclusions will\nbe given in section 6.\nTHE PROBLEM\nFigure 1 is an example document.\n1\nThis article reports\nthat Boeing Company would work with its new Research\nand Technology Center to develop a new style of electric\nairplane. On the upper half of the page, we list all named\nentities appearing in the article and mark the focused entities\n. Among the twelve named entities, \"Boeing Company\"\nand its \"Research and Technology Center\" are most relevant\nto the main topic. Here we call \"Boeing Company\" and \"Research\nand Technology Center\" the focuses. Clearly, focused\nnamed entities are important for representing the main topic\nof the content. In the following, we show that the concept\nof focused named entity is useful for many natural language\nprocessing applications, such as summarization, search ranking\nand topic detection and tracking.\n2.1\nUsing Focused Named Entity for\nSummarization\nWe consider the task of automatic summarization of the\nsample document in Figure 1. A traditional method is to select\nsentences with highest weights, where sentence weights\nare calculated by averaging term frequencies of words it contains\n.\nThe resulting summarization is given in Figure 2.\nUsing focused named entities, we consider two methods to\nrefine the above summarization. The first method is to increase\nthe weight of the focused named entity \"Boeing\" in\nthe sentences, leading to the summary in Figure 3. The\nother method simply picks sentences containing the focused\nnamed entity \"Boeing\" as in Figure 4. From this example,\nwe can see that summarization using focused named entities\ngives more indicative description of an article.\n2.2\nUsing Focused Named Entity for Ranking\nSearch Results\nSuppose we want to find news about World Cup football\nmatch from a collection of news articles. 
First we search\n1\nThe\noriginal\narticle\ncan\nbe\naccessed\nat\nhttp://www.boeing.com/news/releases/2001/q4/nr 011127a.html.\nFigure 1: Sample document with focused named entities\nmarked\nBoeing To Explore Electric Airplane\nFuel cells and electric motors will not replace\njet engines on commercial transports, but they could\none day replace gas turbine auxiliary power units.\nUnlike a battery, which needs to be recharged,\nfuel cells keep working as long as the fuel lasts.\n\"Fuel cells show the promise of one day providing\nefficient, essentially pollution-free electrical power\nfor commercial airplane primary electrical power\nneeds,\" Daggett said.\nFigure 2: Summary using term frequency weighting\nBoeing To Explore Electric Airplane\nBoeing\nCommercial\nAirplanes\nwill\ndevelop\nand\ntest an electrically powered demonstrator airplane as\npart of a study to evaluate environmentally friendly\nfuel cell technology for future Boeing products.\nFuel cells and electric motors will not replace\njet engines on commercial transports, but they could\none day replace gas turbine auxiliary power units.\n\"By adapting this technology for aviation, Boeing\nintends to demonstrate its leadership in the\npursuit of delivering environmentally preferred products\n.\"\nFigure 3: Summary weighted by focused named entities\n282\nBoeing To Explore Electric Airplane\nBoeing\nCommercial\nAirplanes\nwill\ndevelop\nand\ntest an electrically powered demonstrator airplane as\npart of a study to evaluate environmentally friendly\nfuel cell technology for future Boeing products.\nThe airplane manufacturer is working with Boe-ing's\nnew Research and Technology Center in Madrid,\nSpain, to modify a small, single-engine airplane by\nreplacing its engine with fuel cells and an electric\nmotor that will turn a conventional propeller.\nBoeing Madrid will design and integrate the experimental\nairplane's control system.\n\"By adapting this technology for aviation, Boeing\nintends to demonstrate its leadership in the\npursuit of delivering environmentally preferred products\n.\"\nFigure 4: Summary using sentences containing focused\nnamed entities\nfor documents containing the key phrase \"World Cup\". The\nranking function, which determines which document is more\nrelevant to the query, is very important to the search quality.\nSince our query is a single phrase, the ranked search results\n, displayed in Table 1, are based on the term frequency\nof the phrase \"World Cup\". It is clear that without deeper\ntext understanding, term frequency is a quite reasonable\nmeasure of relevancy. However, although some articles may\ncontain more \"World Cup\" than others, they may actually\nfocus less on the World Cup event which we are interested\n. Therefore a better indication of document relevancy\nis whether a document focuses on the entity we are interested\nin. A simple method is to re-order the search results\nfirst by whether the query entity is focused or not, and then\nby its term-frequency. It is quite clear that this method\ngives higher quality ranking.\nIn this example, we use Chinese corpus for demonstration,\nso the original searching results are in Chinese, which we\nhave translated into English for reading convenience.\n2.3\nOther Uses of Focused Named Entity\nWe believe that focused named entities are also helpful in\ntext clustering and categorization tasks such as topic detection\nand tracking. 
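As a concrete illustration of the two uses just described, the following is a minimal Haskell sketch of the sentence-weighting refinement of Section 2.1 and the re-ordering rule of Section 2.2. All function and type names are ours, term frequencies are assumed to be given, and multi-word entities are treated as single tokens for simplicity.
import Data.List (sortBy)
import Data.Ord  (comparing, Down (..))

type Entity   = String
type Sentence = [String]

-- Section 2.1: average term frequency of a sentence, counting occurrences
-- of focused named entities with an extra weight (boost > 1).
scoreSentence :: (String -> Double) -> [Entity] -> Double -> Sentence -> Double
scoreSentence tf focused boost ws =
  sum [ (if w `elem` focused then boost else 1) * tf w | w <- ws ]
    / fromIntegral (max 1 (length ws))

-- Section 2.2: order search results first by whether the query entity is a
-- focus of the document, and then by its term frequency in that document.
rankResults :: [(a, Bool, Int)] -> [(a, Bool, Int)]
rankResults = sortBy (comparing (\(_, isFocus, tf) -> (Down isFocus, Down tf)))
Selecting whole sentences that contain a focused entity, the other refinement mentioned in Section 2.1, is then simply a filter over the sentence list.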
This is because if focused named entities\nare automatically recognized, then the event for each document\ncan be described more precisely. Since focused named\nentities characterize what an article talks about, it is natural\nto organize articles based on them. Therefore by giving\nmore weights to focused named entities, we believe that we\ncan potentially obtain better quality clustering and more\naccurate topic detection and tracking.\nOur study of the focused named entity recognition problem\nis motivated by its potential applications as illustrated\nabove. Experiments in section 5.2 indicate that there is a\nsufficient agreement on focused named entities among human\nreviewers. Therefore our goal is to build a system that\ncan automatically detect focused named entities among all\nnamed entities in a document. We shall mention that although\nthis paper only studies named entities, the basic idea\ncan be extended to tasks such as finding important words,\nnoun-phrases in a document.\nLEARNING BASED FOCUSED NAMED ENTITY RECOGNITION\nFocused named entity recognition can be regarded as a\nbinary classification problem. Consider the set of all named\nentities in a document extracted by a named entity recognition\nsystem. Each entity in this set can be labeled yes if it\nis a focused entity, or no if it is not. We formally define a\ntwo-class categorization problem as one to determine a label\ny\n{-1, 1} associated with a vector x of input variables.\nHowever, in order to build a successful focused named entity\nextractor, a number of issues have to be studied. First,\nnamed entities that refer to the same person or organization\nneed to be grouped together; secondly what features are\nuseful; and thirdly, how well different learning algorithms\nperform on this task. These issues will be carefully studied.\n3.1\nCoreference Resolution\nCoreference is a common phenomenon in natural language.\nIt means that an entity can be referred to in different ways\nand in different locations of the text. Therefore for focused\nnamed entity recognition, it is useful to apply a coreference\nresolution algorithm to merge entities with the same referents\nin a given document. There are different kinds of\ncoreference according to the basic coreference types, such\nas pronominal coreference, proper name coreference, apposition\n, predicate nominal, etc. Here in our system, we only\nconsider proper name coreference, which is to identify all\nvariations of a named entity in the text.\nAlthough it is possible to use machine learning methods\nfor coreference resolution (see [20] as an example), we shall\nuse a simpler scheme, which works reasonably well. Our\ncoreference resolution method can be described as follows.\n1. Partitioning: The set of named entities is divided into\nsub-sets according to named entity types, because coreference\nonly occurs among entities with the same types.\n2. Pair-wise comparison: Within each sub-set, pair-wise\ncomparison is performed to detect whether each entity-pair\nis an instance of coreference. In this study, we use\na simple algorithm which is based on string-matching\nonly. Since we work with Chinese data, we split each\nentity into single Chinese characters. We study two\ndifferent schemes here: using either exact string matching\nor partial string matching to decide coreference.\nIn the case of exact string matching, two entities are\nconsidered to be a coreference pair only when they\nare identical. 
In the case of partial string matching, if\ncharacters in the shorter entity form a (non-consecutive)\nsub-string of the longer entity, then the two entities are\nconsidered to be a coreference pair.\n3. Clustering: Merge all coreference pairs created in the\nsecond step into the same coreference chains. This step\ncan also be done differently. For example, by using a\nsequential clustering method.\nAlthough the coreference resolution algorithm described\nabove is not perfect, it is not crucial since the results will\n283\nTable 1: Search result of \"World Cup\"\nfocus/not\ntf\ntitle\nfocus\n20\nUncover the Mystery of World Cup Draws\nfocus\n11\nBrazil and Germany Qualified, Iran Kicked out\nfocus\n9\nPreparing for World Cup, China Football Federation and Milutinovic Snatch the Time\nfocus\n6\nSun Wen Understands the Pressure Milutinovic and China Team Faced\nfocus\n5\nKorea Leaves More Tickets to China Fans\nfocus\n4\nParaguay Qualified, but Head Coach Dismissed\nno\n4\nLiXiang: Special Relationships between Milutinovic and I\nno\n3\nThree Stars on Golden Eagle Festival\nfocus\n3\nAdidas Fevernova, the Official 2002 FIFA World Cup Ball, Appears Before the Public in Beijing\nno\n2\nChina's World Top 10 Start to Vote\nfocus\n2\nQualified vs. Kicked out: McCarthy Stays on, Blazevic Demits\nfocus\n2\nChina Attends Group Match in Korea, But not in the Same Group With Korea\nno\n2\nDon't Scare Peoples with Entering WTO\nno\n1\nKelon Tops China's Home Appliance Industry in CCTV Ads Bidding\nno\n1\nLou Lan: Great Secrets Behind\nfocus\n1\nAustralia Beats Uruguay by One Goal\nno\n1\nChang Hong's \"King of Precision Display\": Good Friends of Football Fans\nbe passed to a machine learning algorithm in a later stage,\nwhich can offset the mistakes made in the coreference stage.\nOur experiment shows that by using coreference resolution,\nthe overall system performance can be improved appreciably\n.\n3.2\nClassification Methods\nIn this paper, we compare three methods: a decision tree\nbased rule induction system, a naive Bayes classifier, and a\nregularized linear classification method based on robust risk\nminimization.\n3.2.1\nDecision Tree\nIn text-mining application, model interpretability is an\nimportant characteristic to be considered in addition to the\naccuracy achieved and the computational cost. The requirement\nof interpretability can be satisfied by using a rule-based\nsystem, such as rules obtained from a decision tree. Rule-based\nsystems are particularly appealing since a person can\nexamine the rules and modify them. It is also much easier\nto understand what a system does by examining its rules.\nWe shall thus include a decision tree based classifier in this\nstudy. In a typical decision tree training algorithm, there are\nusually two stages. The first stage is tree growing where a\ntree is built by greedily splitting each tree node based on\na certain figure of merit. However after the first stage, the\ntree can overfit the training data, therefore a second stage\ninvolving tree pruning is invoked. In this stage, one removes\noverfitted branches of the tree so that the remaining portion\nhas better predictive power. In our decision tree package,\nthe splitting criteria during tree growth is similar to that\nof the standard C4.5 program [18], and the tree pruning is\ndone using a Bayesian model combination approach. See [6]\nfor detailed description.\n3.2.2\nNaive Bayes\nAnother very popular binary classification method is naive\nBayes. 
In spite of its simplicity, it often achieves reasonable performance in practical applications. It can be regarded as a linear classification method, where we seek a weight vector w and a threshold θ such that w^T x < θ if its label y = -1 and w^T x ≥ θ if its label y = 1. A score of value w^T x can be assigned to each data point as a surrogate for the likelihood of x to be in class.
In this work, we adopt the multinomial model described in [14]. Let {(x_1, y_1), ..., (x_n, y_n)} be the set of training data. The linear weight w is given by w = w_1 - w_{-1}, and θ = θ_1 - θ_{-1}. Denote by x_{i,j} the j-th component of the data vector x_i; then the j-th component w^c_j of w^c (c = ±1) is given by
w^c_j = \log \frac{\lambda + \sum_{i : y_i = c} x_{i,j}}{\lambda d + \sum_{j=1}^{d} \sum_{i : y_i = c} x_{i,j}},
and θ_c (c = ±1) is given by
\theta_c = -\log \frac{|\{i : y_i = c\}|}{n}.
The parameter λ > 0 in the above formulation is a smoothing (regularization) parameter. [14] fixed λ to be 1, which corresponds to the Laplacian smoothing.
3.2.3 Robust Risk Minimization Method
Similar to naive Bayes, this method is also a linear prediction method. Given a linear model p(x) = w^T x + b, we consider the following prediction rule: predict y = 1 if p(x) ≥ 0, and predict y = -1 otherwise. The classification error (we shall ignore the point p(x) = 0, which is assumed to occur rarely) is
I(p(x), y) = \begin{cases} 1 & \text{if } p(x)y \leq 0, \\ 0 & \text{if } p(x)y > 0. \end{cases}
A very natural way to compute a linear classifier is by finding a weight (\hat{w}, \hat{b}) that minimizes the average classification error in the training set:
(\hat{w}, \hat{b}) = \arg\min_{w,b} \frac{1}{n} \sum_{i=1}^{n} I(w^T x_i + b, y_i).
Unfortunately this problem is typically NP-hard computationally. It is thus desirable to replace the classification error loss I(p, y) with another formulation that is computationally more desirable. Large margin methods such as SVM employ modified loss functions that are convex. Many loss functions work well for related classification problems such as text categorization [24, 10]. The specific loss function considered here is
h(p, y) = \begin{cases} -2py & py < -1, \\ \frac{1}{2}(py - 1)^2 & py \in [-1, 1], \\ 0 & py > 1. \end{cases}
That is, our linear weights are computed by minimizing the following average loss on the training data:
(\hat{w}, \hat{b}) = \arg\min_{w} \frac{1}{n} \sum_{i=1}^{n} h(w^T x_i + b, y_i).
This method, which we refer to as RRM (robust risk minimization), has been applied to linguistic processing [23] and text categorization [2] with good results. The detailed algorithm was introduced in [22].
FEATURES
We assume that named entities are extracted by a named entity recognition system. Many named entity recognition techniques have been reported in the literature; most of them use a machine learning approach. An overview of these methods can be found in [19]. In our system, for the purpose of simplicity, we use human annotated named entities in the experiments. In the learning phase, each named entity is considered as an independent learning instance. Features must reflect properties of an individual named entity, such as its type and frequency, and various global statistical measures either at the document scale or at the corpus scale. This section describes features we have considered in our system, our motivations, and how their values are encoded.
4.1 Entity Type
Four entity types are defined: person, organization, place, and proper nouns. The type of a named entity is a very useful feature.
For example, person and organization are\nmore likely to be the focus than a place. Each entity type\ncorresponds to a binary feature-component in the feature\nvector, taking a value of either one or zero. For example,\na person type is encoded as [1 0 0 0], and an organization\ntype is encoded as [0 1 0 0].\n4.2\nIn Title or Not\nWhether a named entity appears in the title or not is an\nimportant indicator of whether it is a focused entity. This\nis because title is a concise summary of what an article is\nabout. The value of this feature is binary (0 or 1).\n4.3\nEntity Frequency\nThis feature is the number of times that the named entity\noccurs in the document. Generally speaking, the more frequent\nit occurs, the more important it is. The value of this\nfeature is just the frequency of the named entity.\n4.4\nEntity Distribution\nThis feature is somewhat complicated. The motivation is\nthat if a named entity occurs in many different parts of a\ndocument, then it is more likely to be an important entity.\nTherefore we use the entropy of the probability distribution\nthat measures how evenly an entity is distributed in a document\n.\nConsider a document which is divided into m sections.\nSuppose that each named entity's probability distribution is\ngiven by\n{p\n1\n, ..., p\ni\n, ..., p\nm\n}, where p\ni\n=\noccurrence in ith section\ntotal occurrence in the doc\n.\nThe entropy of the named entity distribution is computed\nby entropy =\nm\ni=1\np\ni\nlog p\ni\n. In our experiments, we select\nm = 10. This feature contributes a real valued feature-component\nto the feature vector.\n4.5\nEntity Neighbor\nThe context in which a certain named entity appears is\nquite useful. In this study, we only consider a simple feature\nwhich counts its left and right neighboring entity types.\nIf several named entities of the same type are listed side by\nside, then it is likely that the purpose is for enumeration, and\nthe listed named entities are not important. Each neighboring\nside has five possible types -- four named entity types\nplus a normal-word (not a named entity) type. For example,\nconsider a person mentioned three times in the document.\nAmong the three mentions, the left neighbors are two person\nnames and one common word, and the right neighbors are\none place name and two common words. Then the entity\nneighbor feature components are [2 0 0 0 1 0 0 1 0 2].\n4.6\nFirst Sentence Occurrence\nThis feature is inspired by the position method [3, 12] in\nsentence extraction. Its value is the occurrences of the entity\nappearing in the beginning sentence of a paragraph.\n4.7\nDocument Has Entity in Title or Not\nThis feature indicates whether any entity exists in the title\nof the document, and thus takes binary value of 0 or 1.\n4.8\nTotal Entity Count\nThis feature is the total number of entities in the document\n, which takes integer value. The feature reflects the\nrelative importance of an entity in the entity set.\n4.9\nDocument Frequency in the Corpus\nThis is a corpus level feature. If a named entity has a low\nfrequency in the document collection, but relatively high\nfrequency in the current document, then it is likely to be a\nfocused entity. 
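The document-level features of Sections 4.1 through 4.8 can be pictured as one numeric vector per entity. The Haskell sketch below is ours (record and field names are illustrative, not from the paper); it writes the entropy of Section 4.4 with the usual minus sign, entropy = -sum p_i log p_i, and leaves out the corpus-level feature of Section 4.9, which is picked up again just below.
data EntityInfo = EntityInfo
  { entityType        :: Int      -- 4.1: 0 = person, 1 = organization, 2 = place, 3 = proper noun
  , inTitle           :: Bool     -- 4.2
  , frequency         :: Int      -- 4.3
  , sectionDist       :: [Double] -- 4.4: occurrence distribution over m = 10 sections
  , neighbourCounts   :: [Int]    -- 4.5: 5 left-neighbour + 5 right-neighbour type counts
  , firstSentenceOcc  :: Int      -- 4.6
  , docHasTitleEntity :: Bool     -- 4.7
  , totalEntities     :: Int      -- 4.8
  }

-- Entropy of the section distribution (terms with p = 0 contribute nothing).
distEntropy :: [Double] -> Double
distEntropy ps = negate (sum [ p * log p | p <- ps, p > 0 ])

featureVector :: EntityInfo -> [Double]
featureVector e =
     [ if entityType e == t then 1 else 0 | t <- [0 .. 3] ]   -- 4.1: one-hot type
  ++ [ indicator (inTitle e)                                  -- 4.2
     , fromIntegral (frequency e)                             -- 4.3
     , distEntropy (sectionDist e) ]                          -- 4.4
  ++ map fromIntegral (neighbourCounts e)                     -- 4.5
  ++ [ fromIntegral (firstSentenceOcc e)                      -- 4.6
     , indicator (docHasTitleEntity e)                        -- 4.7
     , fromIntegral (totalEntities e) ]                       -- 4.8
  where indicator b = if b then 1 else 0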
When this feature is used, the term frequency\nfeature in section 4.3 will be computed using (tf /docsize)\n\nlog(N/df ), where df is the number of documents that a\nnamed entity occurs in.\nEXPERIMENTS\nIn this section, we study the following issues: corpus annotation\n, human-level agreement on focused named entities,\nperformance of machine learning methods compared with a\nbaseline, influence of different features, and the impact of\ncoreference module to the overall performance.\n5.1\nCorpus Annotation\nWe select fifteen days of Beijing Youth Daily news in\nNovember 2001 as our testing corpus, which contains 1,325\narticles. The text, downloaded from http://bjyouth.ynet.com,\nis in Chinese. The articles belong to a variety of categories,\nincluding politics, economy, laws, education, science, entertainments\n, and sports.\nSince different people may have different opinions on the\nfocused named entities, a common set of rules should be\nagreed upon before the whole corpus is to be annotated.\nWe use the following method to come up with a general\nguideline for annotating focused named entities.\n285\nFirst, the named entities in each document were annotated\nby human. Then, we selected twenty documents from\nthe corpus and invited twelve people to mark focused named\nentities. Nine of the twelve people are experts in natural\nlanguage processing, so their opinions are very valuable to\ndefine focused named entities. Based on the survey result,\nentities marked by more than half of the survey participants\nwere defined as focused named entities. We obtained fifty\nfocused named entities for the twenty articles. By studying\nthe focused named entities in those articles, we were able to\ndesign specifications for focused named entity annotation.\nThe whole corpus was then marked according to the specifications\n.\n5.2\nHuman Agreement Statistics\nIn our survey, fifty entities were identified as focused entities\nfrom the total number of 341 entities in the 20 documents\n. Table 2 shows, among the 50 focused entities, 5\nentities are agreed as focus by all 12 persons, and 7 entities\nare agreed by 11 persons, etc.\nTable 2: Human agreement statistics\nnum of focused named entities\n5\n7\n5\n8\n7\n10\n8\nnum of person agreeing\n12\n11\n10\n9\n8\n7\n6\nLet N\nk\ndenotes the number of person with agreement on\nfocused named entity k, then the human agreement level\nAgree\nk\non the k-th focused named entity is Agree\nk\n=\nN\nk\n12\n.\nThe average agreement on the 50 focused named entities\nis Average Agree =\n50\nk=1\nAgree\nk\n50\n= 72.17%, with variance\n2.65%. We also computed the precision and the recall for the\nsurvey participants with respect to the fifty focused named\nentities.\nTable 3 shows that the best human annotator\nachieves an F\n1\nmeasure of 81.32%.\nSome of the participants\nmarked either too many or too few named entities,\nand thus had much lower performance numbers. 
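For reference, the agreement measure used above and the F_1 values reported in Table 3 (and in the later tables) can be written out as follows; F_1 is taken here to be the usual harmonic mean of precision P and recall R, which the text does not spell out.
Agree_k = \frac{N_k}{12}, \qquad
Average\_Agree = \frac{1}{50} \sum_{k=1}^{50} Agree_k = 72.17\%, \qquad
F_1 = \frac{2PR}{P + R}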
This problem\nwas fixed when the whole corpus was annotated using\nspecifications induced from this small-scale experiment.\nTable 3: Human annotation performance\nuser id\nprecision\nrecall\nF\n1\n1\n90.24\n74.00\n81.32\n2\n86.05\n74.00\n79.57\n3\n83.33\n70.00\n76.09\n4\n84.21\n64.00\n72.73\n5\n96.55\n56.00\n70.89\n6\n90.63\n58.00\n70.73\n7\n71.74\n66.00\n68.75\n8\n73.81\n62.00\n67.39\n9\n57.14\n80.00\n66.67\n10\n48.19\n80.00\n60.15\n11\n38.60\n88.00\n53.66\n12\n33.33\n94.00\n49.21\n5.3\nCorpus Named Entity Statistics\nWe consider two data sets in our experiments: one is the\nwhole corpus of 1,325 articles, and the other is a subset of\n726 articles with named entities in their titles. Table 4 shows\nthere are totally 3,001 focused entities among 18,371 entities\nin the whole corpus, which means that 16.34 percent of the\nentities are marked as focused. On average, there are 2.26\nfocused named entities for each article, which is consistent\nwith the small-scale survey result.\nTable 4: Corpus statistics on named entities\nset\ndocnum\nentities\nfocuses\nfocus percent\nfocus/doc\n1\n1,325\n18,371\n3,001\n16.34%\n2.26\n2\n726\n10,697\n1,669\n15.60%\n2.30\n5.4\nBaseline Results\nSince named entities in title or with high frequency are\nmore likely to be the focal entities, we consider three baseline\nmethods. The first method marks entities in titles to\nbe the foci; the second method marks most frequent entities\nin each article to be the focal entities; the third method is\na combination of the above two, which selects those entities\neither in title or occurring most frequently. We use partial\nstring matching for coreference resolution in the three\nbaseline experiments.\nNamed entities occurring in the title are more likely to be\nthe focus of the document, but they only represent a small\nportion of all focal entities. Baseline experiment 1 shows the\nprecision of this method is quite high but the recall is very\nlow.\nBaseline experiment 2 implies that most of the top 1\nnamed entities are focused entities, but again the recall is\nvery low. However, if more named entities are selected, the\nprecision is decreased significantly, so that the F\n1\nmeasure\ndoes not improve. The top-3 performance is the worst, with\nan F\n1\nmeasure of only 50.47%. Note that several named entities\nmay have the same occurrence frequency in one document\n, which introduces uncertainty into the method.\nBy combining named entities from the title and with high\nfrequency, we obtain better results than either of the two\nbasic baseline methods. The best performance is achieved\nby combining the in-title and top 1 named entities, which\nachieves F\n1\nmeasures of 66.68% for data set 1, and 70.51%\nfor data set 2.\n5.5\nMachine Learning Results\nSince in our implementation, decision tree and naive Bayes\nmethods only take integer features, we encode the floating\nfeatures to integer values using a simple equal interval\nbinning method. If a feature x is observed to have values\nbounded by x\nmin\nand x\nmax\n, then the bin width is computed\nby =\nx\nmax\n-x\nmin\nk\nand the bin boundaries are at x\nmin\n+ i\nwhere i = 1, ..., k\n- 1. The method is applied to each continuous\nfeature independently and k is set to 10. Although\nmore sophisticated discretization methods exist, the equal\ninterval binning method performs quite well in practice.\nMachine learning results are obtained from five-fold cross-validation\n. Coreference resolution is done with partial string-matching\n. 
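A small sketch of the equal interval binning just described (function names are ours):
-- k - 1 bin boundaries at xmin + i*delta, with delta = (xmax - xmin) / k.
binBoundaries :: Double -> Double -> Int -> [Double]
binBoundaries xmin xmax k = [ xmin + fromIntegral i * delta | i <- [1 .. k - 1] ]
  where delta = (xmax - xmin) / fromIntegral k

-- A continuous feature value is replaced by the number of boundaries it exceeds,
-- giving an integer bin in [0, k-1]; e.g. with xmin = 0, xmax = 5, k = 10 the
-- boundaries are 0.5, 1.0, ..., 4.5 and the value 2.3 falls into bin 4.
discretise :: Double -> Double -> Int -> Double -> Int
discretise xmin xmax k x = length (filter (<= x) (binBoundaries xmin xmax k))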
The test results are reported in Table 6.\nThis experiment shows that good performance can be\nachieved by using machine learning techniques. The RRM\nperformance on both data sets are significantly better than\nthe base line results, and comparable to that of the best human\nannotator we observed from our small-scale experiment\nin Section 5.2.\n286\nTable 5: Baseline results\nCorpus\nMethod\nFocuses\nfocus/doc\nPrecision\nRecall\nF\n1\n726docs\ntitle\n992\n1.36\n83.47\n49.61\n62.23\n1,325docs\ntop1\n1,580\n1.19\n88.54\n46.62\n61.08\ntop2\n4,194\n3.17\n54.48\n76.14\n63.52\ntop3\n7,658\n5.78\n35.13\n89.64\n50.47\n726docs\ntitle+top1\n1,247\n1.72\n82.44\n61.59\n70.51\ntitle+top2\n2,338\n3.22\n56.93\n79.75\n66.43\ntitle+top3\n4,165\n5.74\n36.06\n89.99\n51.49\n1,325docs\ntitle+top1\n2,011\n1.52\n83.09\n55.68\n66.68\ntitle+top2\n4,388\n3.31\n53.78\n78.64\n63.88\ntitle+top3\n7,738\n5.84\n34.94\n90.10\n50.36\nTable 6: Machine learning results\nDataset\nRRM\nDecision Tree\nNaive Bayes\nP\nR\nF\n1\nP\nR\nF\n1\nP\nR\nF\n1\n726 docs\n88.51\n80.54\n84.27\n87.29\n78.02\n82.37\n69.32\n90.28\n78.37\n1,325 docs\n84.70\n78.23\n81.32\n83.83\n74.61\n78.89\n69.14\n89.08\n77.82\n5.6\nInfluence of Features\nThe goal of this section is to study the impact of different\nfeatures with different algorithms. Results are reported in\nTable 7. Feature id corresponds to the feature subsection\nnumber in section 4.\nExperiment A uses frequency-based features only. It is\nquite similar to the bag-of-word document model for text\ncategorization, with the entity-frequency and in-title information\n. By adding more sophisticated document-level features\n, the performance can be significantly improved. For\nthe RRM method, F\n1\nfinally reaches 81.32%. It is interesting\nto observe that the corpus-level feature (experiment F\nversus G) has different impacts on the three algorithms. It\nis a good feature for naive Bayes, but not for the RRM and\ndecision tree. Whether corpus-level features can appreciably\nenhance the classification performance requires more careful\ninvestigation.\nThe experiments also indicate that the three learning algorithms\ndo not perform equally well. RRM appears to have\nthe best overall performance. The naive Bayes method requires\nall features to be independent, which is a quite unreal-istic\nassumption in practice. The main problem for decision\ntree is that it easily fragments the data, so that the probability\nestimate at the leaf-nodes become unreliable. This is\nalso the reason why voted decision trees (using procedures\nlike boosting or bagging) perform better.\nThe decision tree can find rules readable by a human. For\nexample, one such rule reads as: if a named entities appears\nat least twice, its left and right neighbors are normal words,\nits discrete distribution entropy is greater than 2, and the\nentity appears in the title, then the probability of it being a\nfocused entity is 0.87.\n5.7\nCoreference Resolution\nIn order to understand the impact of coreference resolution\non the performance of focused named entity recognition,\nwe did the same set of experiments as in section 5.5, but with\nexact string matching only for coreference resolution in the\nfeature extraction process. Table 8 reports the five-fold cross\nvalidation results. On average the performance is decreased\nby about 3 to 5 percent. This means coreference resolution\nplays an important role in the task. 
The reason is that it\nmaps variations of a named entity into a single group, so\nthat features such as occurrence frequency and entity distribution\ncan be estimated more reliably. We believe that with\nmore sophisticated analysis such as pronominal coreference\nresolution, the classification performance can be further improved\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we studied the problem of focused named\nentity recognition. We gave examples to illustrate that focused\nnamed entities are useful for many natural language\nprocessing applications.\nThe task can be converted into\na binary classification problem. We focused on designing\nlinguistic features, and compared the performance of three\nmachine learning algorithms. Our results show that the machine\nlearning approach can achieve near human-level accuracy\n. Because our system is trainable and features we use\nare language independent, it is easy for us to build a similar\nclassification model for other languages. Our method can\nalso be generalized to related tasks such as finding important\nwords and noun-phrases in a document.\nIn the future, we will integrate focused named entity recognition\ninto real applications, such as information retrieval,\nautomatic summarization, and topic detection and tracking,\nso that we can further study and evaluate its influences to\nthese systems.\nACKNOWLEDGMENTS\nWe thank Honglei Guo for providing the original corpus\nwith named entity annotation. Jianmin Jiang helped us to\nset up named entity annotation server, which made it much\neasier to view and annotate the corpus. We thank Zhaoming\nQiu, Shixia Liu, Zhili Guo for their valuable comments on\nthe experiments. We are also grateful to our colleagues who\nspent their time to participate in the focused named entity\nsurvey.\n\nREFERENCES\n[1] R. Barzilay and M. Elhadad. Using lexical chains for\ntext summarization. In Proceedings of the ACL\n287\nTable 7: Performance of different features with different algorithms\nID\nFeatures\nRRM\nDecision Tree\nNaive Bayes\nP\nR\nF\n1\nP\nR\nF\n1\nP\nR\nF\n1\nA\n2+3+7\n79.11\n20.86\n32.96\n77.48\n61.81\n68.70\n96.39\n33.47\n49.67\nB\n1+2+3\n71.95\n82.08\n76.60\n71.06\n72.31\n71.23\n93.29\n42.91\n58.76\nC\n1+2+3+7\n73.32\n0.8143\n76.87\n70.90\n78.63\n74.54\n92.58\n48.65\n63.74\nD\n1+2+3+7+5\n70.60\n84.99\n76.98\n74.42\n75.85\n75.09\n85.44\n61.96\n71.71\nE\n1+2+3+7+5+8\n86.15\n75.89\n80.68\n74.42\n75.85\n75.09\n66.56\n86.14\n75.07\nF\n1+2+\n+7+8\n85.98\n77.37\n81.44\n79.62\n78.30\n78.92\n66.40\n89.44\n76.19\nG\n1+2+\n+8+9\n84.70\n78.23\n81.32\n83.83\n74.61\n78.89\n69.14\n89.08\n77.82\nTable 8: Machine learning test result with exact string-matching for coreference resolution\ndata set\nRRM\nDecision Tree\nNaive Bayes\nP\nR\nF\n1\nP\nR\nF\n1\nP\nR\nF\n1\n726 docs\n84.43\n75.21\n79.49\n83.13\n73.68\n78.10\n67.85\n85.64\n75.64\n1,325 docs\n81.67\n72.60\n76.74\n79.60\n70.45\n74.69\n66.77\n83.56\n74.20\nIntelligent Scalable Text Summarization Workshop\n(ISTS'97), pages 1017, 1997.\n[2] F. J. Damerau, T. Zhang, S. M. Weiss, and\nN. Indurkhya. Text categorization for a comprehensive\ntime-dependent benchmark. Information Processing &\nManagement, 40(2):209221, 2004.\n[3] H. P. Edmundson. New methods in automatic\nabstracting. Journal of The Association for\nComputing Machinery, 16(2):264285, 1969.\n[4] J. Y. Ge, X. J. Huang, and L. Wu. Approaches to\nevent-focused summarization based on named entities\nand query words. In DUC 2003 Workshop on Text\nSummarization, 2003.\n[5] E. Hovy and C.-Y. Lin. 
Automated text summarization in SUMMARIST. In I. Mani and M. Maybury, editors, Advances in Automated Text Summarization, pages 81-94. MIT Press, 1999.
[6] D. E. Johnson, F. J. Oles, T. Zhang, and T. Goetz. A decision-tree-based symbolic rule induction system for text categorization. IBM Systems Journal, 41:428-437, 2002.
[7] M.-Y. Kan and K. R. McKeown. Information extraction and summarization: domain independence through focus types. Columbia University Computer Science Technical Report CUCS-030-99.
[8] J. M. Kupiec, J. Pedersen, and F. Chen. A trainable document summarizer. In SIGIR '95, pages 68-73, 1995.
[9] D. Lawrie, W. B. Croft, and A. Rosenberg. Finding topic words for hierarchical summarization. In SIGIR '01, pages 349-357, 2001.
[10] F. Li and Y. Yang. A loss function analysis for classification methods in text categorization. In ICML '03, pages 472-479, 2003.
[11] C.-Y. Lin. Training a selection function for extraction. In CIKM '99, pages 18, 1999.
[12] C.-Y. Lin and E. Hovy. Identifying topics by position. In Proceedings of the Applied Natural Language Processing Conference (ANLP-97), pages 283-290, 1997.
[13] D. Marcu. From discourse structures to text summaries. In Proceedings of the ACL'97/EACL'97 Workshop on Intelligent Scalable Text Summarization, pages 82-88. ACL, 1997.
[14] A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. In AAAI/ICML-98 Workshop on Learning for Text Categorization, pages 41-48, 1998.
[15] J. L. Neto, A. Santos, C. Kaestner, A. Freitas, and J. Nievola. A trainable algorithm for summarizing news stories. In Proceedings of PKDD'2000 Workshop on Machine Learning and Textual Information Access, September 2000.
[16] C. Nobata, S. Sekine, H. Isahara, and R. Grishman. Summarization system integrated with named entity tagging and IE pattern discovery. In Proceedings of Third International Conference on Language Resources and Evaluation (LREC 2002), 2002.
[17] C. D. Paice and P. A. Jones. The identification of important concepts in highly structured technical papers. In SIGIR '93, pages 69-78. ACM, 1993.
[18] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
[19] E. F. T. K. Sang and F. D. Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2003, pages 142-147, 2003.
[20] W.-M. Soon, H.-T. Ng, and C.-Y. Lim. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544, 2001.
[21] S. Teufel and M. Moens. Sentence extraction as a classification task. In ACL/EACL-97 Workshop on Intelligent and Scalable Text Summarization, 1997.
[22] T. Zhang. On the dual formulation of regularized linear systems. Machine Learning, 46:91-129, 2002.
[23] T. Zhang, F. Damerau, and D. E. Johnson. Text chunking based on a generalization of Winnow. Journal of Machine Learning Research, 2:615-637, 2002.
[24] T. Zhang and F. J. Oles. Text categorization based on regularized linear classification methods.
Information\nRetrieval, 4:531, 2001.\n288\n", "keywords": "named entities;Information retrieval;naive Bayes;topic identification;classification model;sentence extraction;Summarization;entity recognition;Natural language processing applications;ranking;information retrieval;Linguistic features;automatic summarization;Classification methods;robust risk minimization;decision tree;electronic documents;machine learning;Focused named entity recognition;Statistical model;Features;Machine learning approach;text summarization;Main topics;natural language processing"} {"name": "95", "title": "Formally Deriving an STG Machine", "abstract": "Starting from P. Sestoft semantics for lazy evaluation, we define a new semantics in which normal forms consist of variables pointing to lambdas or constructions. This is in accordance with the more recent changes in the Spineless Tagless G-machine (STG) machine, where constructions only appear in closures (lambdas only appeared in closures already in previous versions). We prove the equivalence between the new semantics and Sestoft's. Then, a sequence of STG machines are derived, formally proving the correctness of each derivation. The last machine consists of a few imperative instructions and its distance to a conventional language is minimal. The paper also discusses the differences between the final machine and the actual STG machine implemented in the Glasgow Haskell Compiler.", "fulltext": "INTRODUCTION\nThe Spineless Tagless G-machine (STG) [6] is at the heart\nof the Glasgow Haskell Compiler (GHC) [7] which is perhaps\nthe Haskell compiler generating the most efficient code. For\na description of Haskell language see [8]. Part of the secret\nfor that is the set of analysis and transformations carried\nout at the intermediate representation level. Another part\nof the explanation is the efficient design and implementation\nof the STG machine.\nA high level description of the STG can be found in [6].\nIf the reader is interested in a more detailed view, then the\nonly available information is the Haskell code of GHC (about\n80.000 lines, 12.000 of which are devoted to the implementation\nof the STG machine) and the C code of its different\nruntime systems (more than 40.000 lines)[1].\nIn this paper we provide a step-by-step derivation of the\nSTG machine, starting from a description higher-level than\nthat of [6] and arriving at a description lower-level than that.\nOur starting point is a commonly accepted operational\nsemantics for lazy evaluation provided by Peter Sestoft in\n[10] as an improvement of John Launchbury's well-known\ndefinition in [4]. Then, we present the following refinements:\n1. A new operational semantics, which we call semantics\nS3 --acknowledging that semantics 1 and 2 were defined\nby Mountjoy in a previous attempt [5]--, where\nnormal forms may appear only in bindings.\n2. A first machine, called STG-1, derived from S3 in\nwhich explicit replacement of pointers for variables is\ndone in expressions.\n3. A second machine STG-2 introducing environments in\nclosures, case alternatives, and in the control expression\n.\n4. A third machine, called ISTG (I stands for imperative)\nwith a very small set of elementary instructions, each\none very easy to be implemented in a conventional\nlanguage such as C.\n5. 
A translation from the language of STG-2 to the language\nof ISTG in which the data structures of STG-2\nare represented (or implemented) by the ISTG data\nstructures.\n102\ne x\n-- variable\n|\nx.e\n-- lambda abstraction\n|\ne x\n-- application\n|\nletrec x\ni\n= e\ni\nin e\n-- recursive let\n|\nC x\ni\n-- constructor application\n|\ncase e of C\ni\nx\nij\ne\ni\n-- case expression\nFigure 1: Launchbury's normalized -calculus\nAt each refinement, a formal proof of the soundness and\ncompleteness of the lower level with respect to the upper\none is carried out\n1\n. In the end, the final implementation is\nshown correct with respect to Sestoft's operational semantics\n.\nThe main contribution of the work is showing that an efficient\nmachine such as STG can be presented, understood,\nand formally reasoned about at different levels of abstraction\n. Also, there are some differences between the machine\nwe arrive at and the actual STG machine implemented in\nthe Glasgow Haskell Compiler. We argue that some design\ndecisions in the actual STG machine are not properly justified\n.\nThe plan of the paper is as follows: after this introduction\n, in Section 2, a new language called FUN is introduced\nand the semantics S3 for this language is defined. Two theorems\nrelating Launchbury's original language and semantics\nto the new ones are presented. Section 3 defines the\ntwo machines STG-1 and STG-2. Some propositions show\nthe consistency between both machines and the correctness\nand completeness of STG-1 with respect to S3, eventhough\nthe latter creates more closures in the heap and produces\ndifferent (but equivalent) normal forms. Section 4 defines\nmachine ISTG and Section 5 defines the translation from\nSTG-2 expressions to ISTG instructions. Two invariants are\nproved which show the correctness of the translation. Section\n6 discusses the differences between our translation and\nthe actual implementation done by GHC. Finally, Section 7\nconcludes.\nA NEW SEMANTICS FOR LAZY EVAL-UATION\nWe begin by reviewing the language and semantics given\nby Sestoft as an improvement to Launchbury's semantics.\nBoth share the language given in Figure 1 where A\ni\ndenotes\na vector A\n1\n, . . . , A\nn\nof subscripted entities. It is a normalized\n-calculus, extended with recursive let, constructor\napplications and case expressions. Sestoft's normalization\nprocess forces constructor applications to be saturated and\nall applications to only have variables as arguments. Weak\nhead normal forms are either lambda abstractions or constructions\n. Throughout this section, w will denote (weak\nhead) normal forms.\nSestoft's semantic rules are given in Figure 2. There, a\njudgement : e\nA\n: w denotes that expression e, with\nits free variables bound in heap , reduces to normal form\nw and produces the final heap . When fresh pointers are\ncreated, freshness is understood w.r.t. (dom ) A, where\nA contains the addresses of the closures under evaluation\n1\nThe\ndetails\nof\nthe\nproofs\ncan\nbe\nfound\nin\na\ntechnical\nreport\nat\none\nof\nthe\nauthor's\npage\nhttp://dalila.sip.ucm.es/~albertoe.\n: x.e\nA\n: x.e\nLam\n: C p\ni\n\nA\n: C p\ni\nCons\n: e\nA\n: x.e\n: e [p/x]\nA\n: w\n: e p\nA\n: w\nApp\n: e\nA{p}\n: w\n[p e] : p\nA\n[p w] : w\nVar\n[p\ni\n^\ne\ni\n] : ^\ne\nA\n: w\n: letrec x\ni\n= e\ni\nin e\nA\n: w where p\ni\nfresh Letrec\n: e\nA\n: C\nk\np\nj\n: e\nk\n[p\nj\n/x\nkj\n]\nA\n: w\n: case e of C\ni\nx\nij\ne\ni\n\nA\n: w\nCase\nFigure 2: Sestoft's natural semantics\n(see rule Var). 
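Spelled out, the update rule just referred to reads as follows, as it can be read from Figure 2 (Γ and Δ range over heaps, p over pointers, and A over the set of addresses under evaluation):
\frac{\Gamma : e \Downarrow_{A \cup \{p\}} \Delta : w}
     {\Gamma[p \mapsto e] : p \Downarrow_{A} \Delta[p \mapsto w] : w} \quad (Var)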
The notation ^\ne in rule Letrec means the replacement\nof the variables x\ni\nby the fresh pointers p\ni\n. This\nis the only rule where new closures are created and added\nto the heap. We use the term pointers to refer to dynam-ically\ncreated free variables, bounded to expressions in the\nheap, and the term variables to refer to (lambda-bound, let-bound\nor case-bound) program variables. We consistently\nuse p, p\ni\n, . . . to denote free variables and x, y, . . . to denote\nprogram variables.\nJ. Mountjoy's [5] had the idea of changing Launchbury-Sestoft's\nlanguage and semantics in order to get closer to the\nSTG language, and then to derive the STG machine from\nthe new semantics.\nHe developed two different semantics: In the first one,\nwhich we call semantics S1, the main change was that normal\nforms were either constructions (as they were in Sestoft's\nsemantics) or variables pointing to closures containing\n-abstractions, instead of just -abstractions. The reason\nfor this was to forbid a -abstraction in the control expression\nas it happens in the STG machine. Another change\nwas to force applications to have the form x x\n1\n, i.e. consisting\nof a variable in the functional part. This is also what\nthe STG language requires. These changes forced Mountjoy\nto modify the source language and to define a normalization\nfrom Launchbury's language to the new one. Mountjoy\nproved that the normalization did not change the normal\nforms arrived at by both semantics.\nThe second semantics, which we call semantics S2, forced\napplications to be done at once to n arguments instead of\ndoing it one by one. Correspondingly, -abstractions were\nallowed to have several arguments. This is exactly what\nthe STG machine requires. Semantics S2 was informally\nderived and contained some mistakes. In particular, (cf. [5,\npag. 171]) rule App\nM\nmakes a -abstraction to appear in\nthe control expression, in contradiction with the desire of\nhaving -abstractions only in the heap.\nCompleting and correcting Mountjoy's work we have defined\na new semantics S3 in which the main changes in the\nsource language w.r.t. Mountjoy's are the following:\n1. We force constructor applications to appear only in\nbindings, i.e. in heap closures. Correspondingly, normal\nforms are variables pointing to either -abstractions\nor constructions. We will use the term lambda forms to\nrefer to -abstractions or constructions alike. The motivation\nfor this decision is to generate more efficient\ncode as it will be seen in Section 5.1.\n103\ne\ne x\nin\n-- n > 0, application\n|\nx\n-- variable\n|\nletrec x\ni\n= lf\ni\nin e -- recursive let\n|\ncase e of alt\ni\n-- case expression\nalt\nC x\nj\ne\n-- case alternative\nlf\nx\nin\n.e\n-- n > 0, lambda abstraction\n|\nC x\nin\n-- constructor application\n|\ne\n-- expression\nFigure 3: Language FUN\n2. We relax applications to have the form e x\nin\n, where e is\nan arbitrary expression (excluding, of course, lambda\nforms). The initial motivation for this was not to introduce\nunjustified restrictions. In the conclusions we\ndiscuss that the generated code is also more efficient\nthan the one produced by restricting applications to\nbe of the form x x\nin\n.\nAdditionally, our starting point is Sestoft's semantics instead\nof Launchbury's. 
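As a data-structure sketch, the FUN syntax of Figure 3 can be transcribed into Haskell roughly as follows; the constructor names are ours, and the arity side conditions (n > 0) are kept only as comments.
type Var = String

data Expr                              -- e
  = App Expr [Var]                     -- e x1 .. xn, n > 0
  | V Var                              -- variable
  | LetRec [(Var, LambdaForm)] Expr    -- letrec xi = lfi in e
  | Case Expr [Alt]                    -- case e of alti

data Alt = Alt String [Var] Expr       -- C x1 .. xj -> e

data LambdaForm                        -- lf
  = Lam [Var] Expr                     -- \x1 .. xn . e, n > 0
  | ConApp String [Var]                -- C x1 .. xn
  | Plain Expr                         -- any other expression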
The main difference is that Sestoft\nsubstitutes fresh pointers for program variables in rule Letrec\nwhile Launchbury substitutes fresh variables for all bound\nvariables in rule Var instead.\nThe syntax of the language, called FUN, is shown in Figure\n3. Notice that applications are done to n arguments\nat once, being e x\nin\nan abbreviation of (. . . (e x\n1\n) . . .) x\nn\n,\nand that consequently -abstractions may have several arguments\n. Its operational semantics, called S3 is given in\nFigure 4. For simplicity, we have eliminated the set A of\npending updates appearing in Sestoft's semantics. This set\nis not strictly needed provided that the fresh name generator\nfor pointers does not repeat previously generated names\nand provided that pointer's names are always distinguish-able\nfrom program variables. In rule Case\nS3\n, expression e\nk\nis the righthand side expression of the alternative alt\nk\n. The\nnotation [p e] highlights the fact that (p e) ,\nand [p e] means the disjoint union of and (p e).\nPlease notice that this notation may not coincide with other\nnotations in which and [p e] denote different heaps.\nFinally notice that, besides rule Letrec\nS3\n, also rule App\nS3\ncreates closures in the heap.\nLanguage FUN is at least as expressive as Launchbury's\n-calculus. The following normalization function transforms\nany Launchbury's expression into a semantically equivalent\nFUN expression.\nDefinition 1. We define the normalization function N :\nLaunch FUN:\nN x\ndef\n= x\nN (e x)\ndef\n= (N e) x\nN ( x.e)\ndef\n= letrec y = N ( x.e) in y\n, y fresh\nN (C x\ni\n)\ndef\n= letrec y = C x\ni\nin y\n, y fresh\nN (letrec x\ni\n= e\nin\nin e)\ndef\n= letrec x\ni\n= N e\ni\nn\nin N e\nN (case e of C\ni\ny\nij\ne\ni\n)\ndef\n= case N e of C\ni\ny\nij\nN e\ni\nN , N : Launch FUN:\nN (C x\nin\n)\ndef\n=\nC x\nin\nN e\ndef\n=\nN\ne\n, e = C x\nin\nN\n( x.e)\ndef\n=\nx.N\ne\nN\ne\ndef\n=\nN e\n, e = x.e\nThe following proposition prove that the normalization\nfunctions are well defined.\nProposition 1.\n1. Let e Launch then N e FUN\n2. Let e lf then N e lf\n3. Let e = x.e and e = C x\ni\nthen, N e = N e\n4. (N e)[p\ni\n/x\ni\n] = N (e[p\ni\n/x\ni\n])\n5. (N e)[p\ni\n/x\ni\n] = N (e[p\ni\n/x\ni\n])\nProof.\n1. By structural induction on e.\n2. Trivial.\n3. Trivial.\n4. By definition of N and of substitutions.\n5. By definition of N and of substitutions.\nTo see that both semantics reduce an expression to equivalent\nnormal forms, first we prove that the normalization\ndoes not change the meaning of an expression within Sestoft's\nsemantics. Then, we prove that both semantics reduce\nany FUN expression to equivalent normal forms, provided\nthat such normal forms exist.\n2.1\nSoundness and completeness\nThe following two propositions prove that the normalization\ndoes not change the meaning of an expression. We use\nthe following notation: denotes a one-to-one renaming of\npointers, and\n\nmeans that . This is needed\nto express the equivalence between the heaps of both semantics\nup to some renaming . As S3 generates more closures\nthan Sestoft's, it is not possible to guarantee that the fresh\npointers are exactly the same in the two heaps.\nProposition 2. (Sestoft Sestoft\n\n) For all e Launch\nwe have:\n{ } : e : w\n\n\n\n{ } : N e\n\n: w\n\n.\nw\n\n= N ( w)\nN\n\n\n\nProof. By induction on the number of reductions of\nLaunchbury expressions.\nProposition 3. 
(Sestoft\n\nSestoft) For all e FUN\nwe have:\n{ } : e : w\ne Launch.\n\nN e = e\n\n\n\n{ } : e : w\n.\nN ( w ) = w\nN\n\n\n\n\n104\n[p x\nin\n.e] : p : p\nLam\nS3\n[p C p\ni\n] : p : p\nCons\nS3\n: e [p x\nin\n.y\nim\n.e ] : p\n: e p\nin\n[q y\nim\n.e [p\ni\n/x\ni\nn\n]] : q m, n > 0, q fresh\nApp\nS3\n: e [p x\nim\n.e ] : p : e [p\ni\n/x\ni\nm\n] p\nm+1\n. . . p\nn\n[q w] : q\n: e p\nin\n: q\nn m\nApp\nS3\n: e [q w] : q\n[p e] : p [p w] : q\nVar\nS3\n[p\ni\n^\nlf\ni\n] : ^\ne [p w] : p\n: letrec x\ni\n= lf\ni\nin e : p p\ni\nfresh\nLetrec\nS3\n: e [p C\nk\np\nj\n] : p\n: e\nk\n[p\nj\n/y\nkj\n] [q w] : q\n: case e of C\ni\ny\nij\ne\ni\n: q\nCase\nS3\nFigure 4: Semantics S3\nProof. By induction on the number of reductions of\nLaunchbury expressions.\nNow we prove the equivalence between the two semantics\n. We consider only FUN expressions because it has been\nproved that the normalization does not change the meaning\nof an expression.\nProposition 4. (Sestoft S3, completeness of S3) For\nall e FUN we have:\n{ } : e : w\n\n\n\n{ } : e [p w ] : p\n.\nw = w\n\n\nProof. By induction on the number of reductions of\nFUN expressions.\nProposition 5. (S3 Sestoft, soundness of S3) For all\ne FUN we have:\n{ } : e [p w] : p\n\n\n\n{ } : e : w\n.\nw = w\n\n\nProof. By induction on the number of reductions of\nFUN expressions.\nOnce adapted the source language to the STG language,\nwe are ready to derive an STG-like machine from semantics\nS3.\nA VERY SIMPLE STG MACHINE\nFollowing a similar approach to Sestoft MARK-1 machine\n[10], we first introduce a very simple STG machine, which\nwe will call STG-1, in which explicit variable substitutions\nare done. A configuration in this machine is a triple (, e, S)\nwhere represents the heap, e is the control expression and\nS is the stack. The heap binds pointers to lambda forms\nwhich, in turn, may reference other pointers. The stack\nstores three kinds of objects: arguments p\ni\nof pending applications\n, case alternatives alts of pending pattern matchings,\nand marks #p of pending updates.\nIn Figure 5, the transitions of the machine are shown.\nThey look very close to the lazy semantics S3 presented in\nSection 2. For instance, the single rule for letrec in Figure\n5 is a literal transcription of the Letrec\nS3\nrule of Figure\n4. The semantic rules for case and applications are split\neach one into two rules in the machine. The semantic rule\nfor variable is also split into two in order to take care of\nupdating the closure. So, in principle, an execution of the\nSTG-1 machine could be regarded as the linearization of the\nsemantic derivation tree by introducing an auxiliary stack.\nBut sometimes appearances are misleading. The theorem\nbelow shows that in fact STG-1 builds less closures in the\nheap than the semantics and it may arrive to different (but\nsemantically equivalent) normal forms. In order to prove\nthe soundness and completeness of STG-1, we first enrich\nthe semantics with a stack parameter S in the rules. The\nnew rules for\nS\n(only those which modify S) are shown in\nFigure 6. It is trivial to show that the rules are equivalent to\nthe ones in Figure 4 as the stack is just an observation of the\nderivations. It may not influence them. The following theorem\nestablishes the correspondence between the (enriched)\nsemantics and the machine.\nProposition 6. Given , e and S, then : e\nS\n[p\nw] : p iff (, e, S)\n\n( , p , S ), where\n1.\n2. if [p C p\nin\n] then S = S , p = p and [p\nC p\nin\n]\n3. 
if [p x\nin\n.e ] then there exists m 0 s.t. [p\ny\nim\n.x\nin\n.e ] and S = q\nim\n: S and e = e [q\ni\n/y\ni\nm\n]\n105\nHeap\nControl\nStack\nrule\n\nletrec {x\ni\n= lf\ni\n} in e\nS\nletrec (\n1\n)\n=\n[p\ni\nlf\ni\n[p\nj\n/x\nj\n]]\ne[p\ni\n/x\ni\n]\nS\n\ncase e of alts\nS\ncase1\n=\n\ne\nalts : S\n[p C\nk\np\ni\n]\np\nC\nj\ny\nji\ne\nj\n: S\ncase2\n=\n\ne\nk\n[p\ni\n/y\nki\n]\nS\n\ne p\nin\nS\napp1\n=\n\ne\np\nin\n: S\n[p x\nin\n.e]\np\np\nin\n: S\napp2\n=\n\ne[p\ni\n/x\ni\nn\n]\nS\n[p e ]\np\nS\nvar1\n=\n\ne\n#p : S\n[p x\nik\n.y\nim\n.e]\np\np\nik\n: #q : S\nvar2\n=\n[q p p\nik\n]\np\np\nik\n: S\n[p C\nk\np\ni\n]\np\n#q : S\nvar3\n=\n[q C\nk\np\ni\n]\np\nS\n(\n1\n)\np\ni\nare distinct and fresh w.r.t. , letrec {x\ni\n= lf\ni\n} in e, and S\nFigure 5: The STG-1 Machine\nProof. By induction on the number of reductions of\nFUN expressions.\nThe proposition shows that the semantic rule App\nS3\nof\nFigure 4 is not literally transcribed in the machine. The\nmachine does not create intermediate lambdas in the heap\nunless an update is needed. Rule app2 in Figure 5 applies a\nlambda always to all its arguments provided that they are in\nthe stack. For this reason, a lambda with more parameters\nthan that of the semantics may be arrived at as normal\nform of a functional expression. Also for this reason, an\nupdate mark may be interspersed with arguments in the\nstack when a lambda is reached (see rule var2 in Figure 5).\nA final implication is that the machine may stop with m\npending arguments in the stack and a variable in the control\nexpression pointing to a lambda with n > m parameters.\nThe semantics always ends a derivation with a variable as\nnormal form and an empty stack.\nAgain, following Sestoft and his MARK-2 machine, once\nwe have proved the soundness and completeness of STG-1,\nwe introduce STG-2 having environments instead of explicit\nvariable substitutions. Also, we add trimmers to this machine\nso that environments kept in closures and in case alternatives\nonly reference the free variables of the expression\ninstead of all variables in scope. A configuration of STG-2\nis a quadruple (, e, E, S) where E is the environment of e,\nthe alternatives are pairs (alts, E), and a closure is a pair\n(lf , E). Now expressions and lambda forms keep their original\nvariables and the associated environment maps them to\npointers in the heap. The notation E |\nt\nmeans the trimming\nof environment E to the trimmer t. A trimmer is just a collection\nof variable names. The resulting machine is shown\nin Figure 7.\nProposition 7.\nGiven a closed expression e\n0\n.\n({ }, e\n0\n, [ ])\nSTG-1\n- (, q, p\nin\n) where either:\n[q C q\nim\n] n = 0\nor [q x\nim\n.e] m > n 0\nif and only if ({ }, e\n0\n, { }, [ ])\nSTG-2\n- (, x, E[x q], p\nin\n)) and\neither:\n[q (C x\nim\n, {x\ni\nq\nim\n})] n = 0\nor [q (x\nim\n.e , E )] m > n 0 e = E e .\nProof. By induction on the number of reductions.\nAN IMPERATIVE STG MACHINE\nIn this Section we `invent' an imperative STG machine,\ncalled ISTG, by defining a set of machine instructions and\nan operational semantics for them in terms of the state transition\nthat each instruction produces. In fact, this machine\ntries to provide an intermediate level of reasoning between\nthe STG-2 machine and the final C implementation. In\nthe actual GHC implementation, `below' the operational description\nof [6] we find only a translation to C. By looking\nat the compiler and at the runtime system listings, one can\ngrasp some details, but many others are lost. 
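Before the imperative level is developed further, the machine states described so far can be summarised in Haskell, reusing the Var, Expr, Alt and LambdaForm types of the earlier FUN sketch (the names below are ours):
type Ptr = Int

-- STG-1: configurations (Gamma, e, S); the heap binds pointers to lambda forms,
-- and the stack holds argument pointers, case alternatives, and update marks #p.
data StackObj1 = Arg1 Ptr | Alts1 [Alt] | Upd1 Ptr
type Heap1     = [(Ptr, LambdaForm)]
type Conf1     = (Heap1, Expr, [StackObj1])

-- STG-2: configurations (Gamma, e, E, S); closures and pushed alternatives now
-- carry an environment mapping program variables to pointers (trimmed to the
-- free variables of the corresponding expression).
type Env       = [(Var, Ptr)]
data StackObj2 = Arg2 Ptr | Alts2 [Alt] Env | Upd2 Ptr
type Heap2     = [(Ptr, (LambdaForm, Env))]
type Conf2     = (Heap2, Expr, Env, [StackObj2])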
We think that\nthe gap to be bridged is too high. Moreover, it is not possible\nto reason about the correctness of the implementation\nwhen so many details are introduced at once. The ISTG architecture\nhas been inspired by the actual implementation of\nthe STG machine done by GHC, and the ISTG instructions\nhave been derived from the STG-2 machine by analyzing the\nelementary steps involved in every machine transition.\nAn ISTG machine configuration consists of a 5-tuple (is, S,\nnode, , cs), where is is a machine instruction sequence ended\nwith the instruction ENTER or RETURNCON, S is the\nstack, node is a heap pointer pointing to the closure under\nexecution (the one to which is belongs to), is the heap and\ncs is a code store where the instruction sequences resulting\nfrom compiling the program expressions are kept.\nWe will use the following notation: a for pointers to closures\nin , as and ws for lists of such pointers, and p for\n106\n: e\np\nin\n:S\n[p x\nin\n.y\nim\n.e ] : p\n: e p\nin\n\nS\n[q y\nim\n.e [p\ni\n/x\ni\nn\n]] : q m, n > 0, q fresh\nApp\nS3\n: e\np\nin\n:S\n[p x\nim\n.e ] : p : e [p\ni\n/x\ni\nm\n] p\nm+1\n. . . p\nn\n\nS\n[q w] : q\n: e p\nin\n\nS\n: q\nn m\nApp\nS3\n: e\n#p:S\n[q w] : q\n[p e] : p\nS\n[p w] : q\nVar\nS3\n: e\nalts:S\n[p C\nk\np\nj\n] : p\n: e\nk\n[p\nj\n/y\nkj\n]\nS\n[q w] : q\n: case e of alts\nS\n: q\nCase\nS3\nFigure 6: The enriched semantics\npointers to code fragments in cs. By cs[p is] we denote\nthat the code store cs maps pointer p to the instruction\nsequence is and, by cs[p is\ni\nn\n], that cs maps p to a\nvectored set of instruction sequences is\n1\n, . . . , is\nn\n, each one\ncorresponding to an alternative of a case expression with\nn constructors C\n1\n, . . . , C\nn\n. Also, S ! i will denote the i-th\nelement of the stack S counting from the top and starting\nat 0. Likewise, node\n\n! i will denote the i-th free variable of\nthe closure pointed to by node in , this time starting at 1.\nStack S may store pointers a to closures in , pointers p\nto code sequences and code alternatives in cs, and update\nmarks #a indicating that closure pointed to by a must be\nupdated. A closure is a pair (p, ws) where p is a pointer\nto an instruction sequence is in cs, and ws is the closure\nenvironment, having a heap pointer for every free variable\nin the expression whose translation is is.\nThese representation decisions are very near to the GHC\nimplementation. In its runtime system all these elements\n(stack, heap, node register and code) are present [9]. Our\nclosures are also a small simplification of theirs.\nIn Figure 8, the ISTG machine instructions and its operational\nsemantics are shown. The machine instructions\nBUILDENV, PUSHALTS and UPDTMARK roughly correspond\nto the three possible pushing actions of machine\nSTG-2. The SLIDE instruction has no clear correspondence\nin the STG-2. As we will see in Section 5, it will be used\nto change the current environment when a new closure is\nentered. Instructions ALLOC and BUILDCLS will implement\nheap closure creation in the letrec rule of STG-2. Both\nBUILDENV and BUILDCLS make use of a list of pairs, each\npair indicating whether the source variable is located in the\nstack or in the current closure. Of course, it is not intended\nthis test to be done at runtime. 
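To see why no run-time test is needed, here is a minimal sketch (ours, not the paper's or GHC's) of how such a pair list can be resolved when the instruction is translated: every (STACK, i) or (NODE, i) pair becomes a fixed copy statement in the emitted C-like code. The names Sp, Node and tmp are purely illustrative and do not reflect GHC's actual stack or closure layout.

    def compile_buildenv(pairs):
        # pairs is the list attached to a BUILDENV instruction; each element is
        # ('STACK', i) or ('NODE', i).  The tag is inspected here, at translation
        # time, so the emitted code contains only plain copies.
        # 'Sp', 'Node' and 'tmp' are illustrative names, not GHC's real layout.
        code = []
        for k, (where, i) in enumerate(pairs):
            src = f"Sp[{i}]" if where == "STACK" else f"Node[{i}]"
            code.append(f"tmp[{k}] = {src};")          # read all sources first ...
        for k in range(len(pairs)):
            code.append(f"Sp[-{k + 1}] = tmp[{k}];")   # ... then push them onto the stack
        code.append(f"Sp -= {len(pairs)};")
        return code

    # e.g. compile_buildenv([("STACK", 0), ("NODE", 2)]) yields four copy/adjust statements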
An efficient translation of\nthese `machine' instructions to an imperative language will\ngenerate the appropriate copy statement for each pair.\nInstructions ENTER and RETURNCON are typical of\nthe actual STG machine as described in [6]. It is interesting\nto note that it has been possible to describe our previous\nSTG machines without any reference to them. In our view,\nthey belong to ISTG, i.e. to a lower level of abstraction. Finally\n, instruction ARGCHECK, which implements updates\nwith lambda normal forms, is here at the same level of abstraction\nas RETURNCON, which implements updates with\nconstructions normal forms. Predefined code is stored in cs\nfor updating with a partial application and for blackholing\na closure under evaluation. The corresponding code pointers\nare respectively called p\nn+1\npap\nand p\nbh\nin Figure 8. The\nassociated code is the following:\np\nbh\n= [ ]\np\nn+1\npap\n= [BUILDENV [(NODE , 1), . . . , (NODE , n + 1) ],\nENTER ]\nThe code of a blackhole just blocks the machine as there\nis no instruction to execute. There is predefined code for\npartial applications with different values for n. The code\njust copies the closure into the stack and jumps to the first\npointer that is assumed to be pointing to a -abstraction\nclosure.\nThe translation to C of the 9 instructions of the ISTG\nshould appear straightforward for the reader. For instance,\nBUILDCLS and BUILDENV can be implemented by a sequence\nof assignments, copying values to/from the stack\nan the heap; PUSHALTS, UPDTMARK and ENTER do\nstraightforward stack manipulation; SLIDE is more involved\nbut can be easily translated to a sequence of loops moving\ninformation within the stack to collapse a number of stack\nfragments. The more complex ones are RETURNCON and\nARGCHECK. Both contains a loop which updates the heap\nwith normal forms (respectively, constructions and partial\napplications) as long as they encounter update marks in the\nstack. Finally, the installation of a new instruction sequence\nin the control made by ENTER and RETURNCON are implemented\nby a simple jump.\nFORMAL TRANSLATION FROM STG-2 TO ISTG\nIn this Section, we provide first the translation schemes for\nthe FUN expressions and lambda forms and then prove that\nthis translation correctly implements the STG-2 machine on\ntop of the ISTG machine. Before embarking into the details,\nwe give some hints to intuitively understand the translation:\nThe ISTG stack will represent not only the STG-2\nstack, but also (part of) the current environment E\nand all the environments associated to pending case\nalternatives. So, care must be taken to distinguish between\nenvironments and other objects in the stack.\nThe rest of the current environment E is kept in the\ncurrent closure. 
The translation knows where each free variable is located by maintaining two compile-time environments: a stack environment and a closure environment (Definitions 2 and 3 below). The first one corresponds to the part of the environment kept in the stack, while the second one corresponds to the free variables accessed through the node pointer.

Figure 7: The STG-2 machine. (The tabular transition rules for letrec, case, application and variable cannot be recovered from this extraction. Side conditions: (1) the p_i are distinct and fresh with respect to the heap, the letrec expression and the stack, and E' = E ∪ {x_i ↦ p_i}; (2) expression e_k corresponds to the alternative C_k y_ki → e_k in alts; (3) E' = {x ↦ p, x_i ↦ p_ik}.)

The stack can be considered as divided into big blocks separated by code pointers p pointing to case alternatives. Each big block topped with such a pointer corresponds to the environment of the associated alternatives.

In turn, each big block can be considered as divided into small blocks, each one topped with a set of arguments of pending applications. The compile-time stack environment is likewise divided into big and small blocks, reflecting the stack structure.

When a variable is reached in the current instruction sequence, an ENTER instruction is executed. This finishes the current sequence and starts a new one. The upper big block of the stack must be deleted (corresponding to changing the current environment), but the arguments of pending applications must be kept. This stack restructuring is accomplished by a SLIDE instruction with an appropriate argument.

Definition 2. A stack environment is a list [(sigma_k, m_k, n_k), ..., (sigma_1, m_1, n_1)] of blocks. It describes the variables in the stack starting from the top. In a block (sigma, m, n), sigma is an environment mapping exactly m − |n| program variables to disjoint numbers in the range 1 .. m − |n|. The empty stack environment is the list [({}, 0, 0)].

A block (sigma, m, n) corresponds to a small block in the above explanation. Blocks with n = −1 are topped with a code pointer pointing to alternatives, so they provide the separation between big blocks. The upper big block consists of all the small blocks up to (and excluding) the first small block with n = −1. Blocks with n > 0 have m − n free variables and are topped with n arguments of pending applications. The upper block is the only one with n = 0, meaning that it is not yet closed and can still be extended.

Definition 3. A closure environment with n variables is a mapping from these variables to disjoint numbers in the range 1..n.

Definition 4. The offset of a variable x from the top of the stack in a stack environment [(sigma_k, m_k, n_k), ..., (sigma_1, m_1, n_1)] is given by

    offset(x) = (m_l + m_{l+1} + ... + m_k) − sigma_l(x),   where (sigma_l, m_l, n_l) is the block with x ∈ dom sigma_l

If the initial closed expression to be translated has different names for bound variables, then the two compile-time environments will never have duplicate names.
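As a small executable rendering of Definition 4, with the block representation of Definition 2 taken as a list of (mapping, m, n) triples given from the top of the stack downwards, the offset can be computed as sketched below; the function name is ours, not the paper's.

    def stack_offset(stack_env, x):
        # stack_env: list of blocks (mapping, m, n) from the top of the stack down,
        # where 'mapping' sends each variable of the block to a position 1 .. m - |n|.
        total = 0
        for mapping, m, n in stack_env:   # walk from the top block downwards
            total += m
            if x in mapping:
                return total - mapping[x]
        raise KeyError(x)                 # x is not bound in the stack environment

For example, with the two-block environment [({'x': 1}, 2, 0), ({'y': 1, 'z': 2}, 3, -1)] the offset of y is 2 + 3 - 1 = 4, matching the formula of Definition 4.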
It will be\nproved below that every free variable of an expression being\ncompiled will necessarily be either in or in , and never in\nboth. This allows us to introduce the notation (, ) x to\nmean\n(, ) x\ndef\n=\n(STACK , x)\nif x dom\n(NODE , x)\nif x dom\nThe stack environment may suffer a number of operations:\nclosing the current small block with a set of arguments, enlarging\nthe current small block with new bindings, and closing\nthe current big block with a pointer to case alternatives.\nThese are formally defined as follows.\nDefinition 5. The following operations with stack environments\nare defined:\n1. ((, m, 0) : ) + n\ndef\n= ({}, 0, 0) : (, m + n, n) :\n108\nInstructions\nStack\nNode\nHeap\nCode\ncontrol\n[ENTER]\na : S\nnode\n[a (p, ws)]\ncs[p is]\n=\nis\nS\na\n\ncs\n[RETURNCON C\nm\nk\n]\np : S\nnode\n\ncs[p is\ni\nn\n]\n=\nis\nk\nS\nnode\n\ncs\n[RETURNCON C\nm\nk\n]\n#a : S\nnode\n[a (p\nbh\n, as),\nnode (p, ws)]\ncs\n=\n[RETURNCON C\nm\nk\n]\nS\nnode\n[a (p, ws)]\ncs\nARGCHECK m : is\na\nim\n: S\nnode\n\ncs\n=\nis\na\nim\n: S\nnode\n\ncs\nARGCHECK m : is\na\nin\n: #a : S\nnode\n[a (p\nbh\n, ws)]\ncs\nn < m\n=\nARGCHECK m : is\na\nin\n: S\nnode\n[a (p\nn+1\npap\n, node : a\nin\n)]\ncs\nheap\nALLOC m : is\nS\nnode\n\ncs\n(\n1\n)\n=\nis\na\nm\n: S\nnode\n\ncs\nBUILDCLS i p z\nin\n: is\nS\nnode\n\ncs\n(\n2\n)\n=\nis\nS\nnode\n[S!i (p, a\nin\n)]\ncs\nstack\nBUILDENV z\nin\n: is\nS\nnode\n\ncs\n(\n2\n)\n=\nis\na\nin\n: S\nnode\n\ncs\nPUSHALTS p : is\nS\nnode\n\ncs\n=\nis\np : S\nnode\n\ncs\nUPDTMARK : is\nS\nnode\n[node (p, ws)]\ncs\n=\nis\n#node : S\nnode\n[node (p\nbh\n, ws)]\ncs\nSLIDE (n\nk\n, m\nk\n)\nl\n: is\na\nkj n\nk\n: b\nkj\nm\nk\nl\n: S\nnode\n\ncs\n=\nis\na\nkj n\nk\nl\n: S\nnode\n\ncs\n(\n1\n)\na\nm\nis a pointer to a new closure with space for m free variables, and is the resulting\nheap after the allocation\n(\n2\n)\na\ni\n=\nS!i\nif z\ni\n= (STACK , i)\nnode\n\n!i\nif z\ni\n= (NODE , i)\nFigure 8: The ISTG machine\n2. ((, m, 0) : )+({x\ni\nj\ni\nn\n}, n)\ndef\n= ({x\ni\nm + j\ni\nn\n},\nm+n, 0) :\n3. ((, m, 0) : )\n+\n+\ndef\n= ({}, 0, 0) : (, m + 1, -1) :\n5.1\nTranslation schemes\nFunctions trE and trA respectively translate a FUN expression\nand a case alternative to a sequence of ISTG machine\ninstructions; function trAs translates a set of alternatives\nto a pointer to a vectored set of machine instruction\nsequences in the code store; and function trB translates a\nlambda form to a pointer to a machine instruction sequence\nin the code store. The translation schemes are shown in\nFigure 9.\nThe notation . . . & cs[p . . .] means that the corresponding\ntranslation scheme has a side effect which consists\nof creating a code sequence in the code store cs and pointing\nit by the code pointer p.\nProposition 8. (static invariant) Given a closed expression\ne\n0\nwith different bound variables and an initial call\ntrE e\n0\n\n\n{}, in all internal calls of the form trE e :\n1. The stack environment has the form = (, m, 0) : .\nMoreover, there is no other block ( , m , n) in with\nn = 0. Consequently, all environment operations in\nthe above translation are well defined.\n2. All free variables of e are defined either in or in .\nMoreover, dom dom = .\n3. The last instruction generated for e is ENTER. Consequently\n, the main instruction sequence and all sequences\ncorresponding to case alternatives and to non-constructor\nclosures, end in an ENTER.\nProof. 
(1) and (2) are proved by induction on the tree\nstructure of calls to trE ; (3) is proved by structural induction\non FUN expressions.\nIn order to prove the correctness of the translation, we\nonly need to consider ISTG machine configurations of the\nform (is, S\nI\n, node,\nI\n, cs) in which is is generated by a call\nto trE for some expression e and environments , . We call\nthese stable configurations. We enrich then these configurations\nwith three additional components: the environments\n109\ntrE (e x\nin\n)\n= [BUILDENV (, ) x\ni\nn\n] ++\ntrE e ( + n)\ntrE (case e of alts |\nx\nin\n)\n= [BUILDENV zs, PUSHALTS p] ++ trE e\n+\n+\n( - xs)\nwhere p\n= trAs alts\n\n= + ({xs\nj\nm - j + 1\nm\n}, m)\nxs\n= [x | x x\nin\nx dom ]\nzs\n= [(node, x) | x xs]\nm\n= | xs |\ntrE (letrec x\ni\n= lf\ni\n|\ny\nij mi\nn\nin e) = [ALLOC m\nn\n, . . . , ALLOC m\n1\n] ++\n[BUILDCLS (i - 1) p\ni\nzs\ni\nn\n] ++\ntrE e\nwhere\n= + ({x\ni\nn - i + 1\nn\n}, n)\np\ni\n= trB (lf\ni\n|\ny\nij mi\n),\ni {1..n}\nzs\ni\n= ( , ) y\nij\nm\ni\n,\ni {1..n}\ntrE x\n= [BUILDENV [(, ) x],\nSLIDE ((1, 0) : ms),\nENTER]\nwhere ms\n= map (\\( , m, n) (n, m - n)) (takeWhile nn )\nnn ( , m, -1) = False\nnn\n= True\ntrAs (alt\ni\nn\n)\n= p & cs[p trA alt\ni\n\nn\n]\ntrA (C x\nin\ne)\n= trE e {x\ni\ni\nn\n}\ntrB (C\nn\nk\nx\nin\n|\nx\nin\n)\n= p & cs[p [RETURNCON C\nn\nk\n]]\ntrB (x\nil\n.e |\ny\nj n\n)\n= p & cs[p [ARGCHECK l] ++ trE e ]\nwhere\n= [({x\ni\nl - i + 1\nl\n}, l, 0)]\n\n= {y\nj\nj\nn\n}\ntrB (e |\ny\nj n\n)\n= p & cs[p [UPDTMARK ] ++ trE e\n\n]\nwhere\n= {y\nj\nj\nn\n}\nFigure 9: Translation schemes from STG-2 to ISTG\nand used to generate is, and an environment stack S\nenv\ncontaining a sequence of stack environments. The environments\nin S\nenv\nare in one to one correspondence with case\npointers stored in S\nI\n. Initially S\nenv\nis empty. Each time\nan instruction PUSHALTS is executed (see trE definition\nfor case), the environment the corresponding alternatives\nare compiled with, is pushed onto stack S\nenv\n. Each\ntime a RETURNCON pops a case pointer, stack S\nenv\nis\nalso pop-ed. So, enriched ISTG configurations have the form\n(is, , , S\nI\n, S\nenv\n, node,\nI\n, cs).\nDefinition 6. A STG-2 environment E is equivalent to an\nISTG environment defined by , , S\nI\n,\nI\nand node, denoted\nE (, S\nI\n, ,\nI\n, node) if dom E dom dom and\nx dom E\nE x = S\nI\n! ( x)\nif x dom\nE x = node\n\nI\n! ( x)\nif x dom\nDefinition 7. A STG-2 stack S is equivalent to a triple\n(, S\nI\n, S\nenv\n) of an ISTG enriched configuration, denoted S\n(, S\nI\n, S\nenv\n), if\n1. Whenever = (, m, 0) : , then S\nI\n= a\nim\n: S\nI\nand\nS ( , S\nI\n, S\nenv\n)\n2. Whenever = (, m, n) : , n > 0, then S = a\nin\n: S ,\nS\nI\n= a\nin\n: b\nj\nm-n\n: S\nI\nand S ( , S\nI\n, S\nenv\n)\n3. Whenever = (, m, -1) : , then S = (alts, E) : S ,\nS\nI\n= p\nalts\n: S\nI\n, S\nenv\n=\nalts\n: S\nenv\n, p\nalts\n= trAs alts\nalts\n,\nE (\nalts\n, S\nI\n, , , ) and S (\nalts\n, S\nI\n, S\nenv\n)\n4. Whenever S = #a : S and S\nI\n= #a : S\nI\n, then S\n(, S\nI\n, S\nenv\n)\n5. Additionally, [ ] ({}, [ ], [ ])\n110\nDefinition 8. A STG-2 heap is equivalent to an ISTG\npair (\nI\n, cs), denoted (\nI\n, cs), if for all p we have [p\n(lf |\nx\nin\n, E)] if and only if\nI\n[p (q, ws)], cs[q is], is =\ntrB (lf |\nx\nin\n) and ws = E x\ni\nn\n.\nDefinition 9. A STG-2 configuration is equivalent to an\nISTG enriched stable configuration, denoted (, e, E, S)\n(is, , , S\nI\n, S\nenv\n, node,\nI\n, cs) if\n1. (\nI\n, cs)\n2. is = trE e\n3. 
E (, S\nI\n, ,\nI\n, node)\n4. S (, S\nI\n, S\nenv\n)\nProposition 9. (dynamic invariant) Given a closed expression\ne\n0\nwith different bound variables and initial STG-2\nand ISTG configurations, respectively ({}, e\n0\n, {}, [ ]) and\n(trE e\n0\n\n\n{},\n\n, {}, [ ], [ ], , {}, cs), where cs is the code\nstore generated by the whole translation of e\n0\n, then both machines\nevolve through equivalent configurations.\nProof. By induction on the number of transitions of\nboth machines. Only transitions between ISTG stable configurations\nare considered.\nCorollary 10. The translation given in Section 5.1 is\ncorrect.\nDIFFERENCES WITH THE ACTUAL STG MACHINE\nThere are some differences between the machine translation\npresented in Section 5 and the actual code generated\nby GHC. Some are just omissions, other are non-substantial\ndifferences and some other are deeper ones.\nIn the first group it is the treatment of basic values, very\nelaborated in GHC (see for example [3]) and completely ig-nored\nhere. We have preferred to concentrate our study in\nthe functional kernel of the machine but, of course, a formal\nreasoning about this aspect is a clear continuation of our\nwork.\nIn the second group it is the optimization of update implementation\n. In GHC, updates can be done either by indirection\nor by closure creation, depending on whether there\nis enough space or not in the old closure to do update in\nplace. This implies to keep closure size information somewhere\n. GHC keeps it in the so called info table, a static part\nshared by all closures created from the same bind. This\ntable forces an additional indirection to access the closure\ncode. Our model has simplified these aspects. We understand\nalso that stack restructuring, as the one performed by\nour SLIDE instruction, is not implemented in this way by\nGHC. Apparently, stubbing of non used stack positions is\ndone instead. An efficiency study could show which implementation\nis better. The cost of our SLIDE instruction is\nin O(n), being n the number of arguments to be preserved\nin the stack when the current environment is discarded.\nPerhaps the deeper difference between our derived machine\nand the actual STG is our insistence in that FUN applications\nshould have the form e x\nin\ninstead of x x\nin\nas it is\nthe case in the STG language. This decision is not justified\nin the GHC papers and perhaps could have a noticeable negative\nimpact in performance. In a lazy language, the functional\npart of an application should be eagerly evaluated,\nbut GHC does it lazily. This implies constructing a number\nof closures that will be immediately entered (and perhaps\nupdated afterwards), with a corresponding additional cost\nboth in space and time. Our translation avoids creating and\nentering these closures. If the counter-argument were having\nthe possibility of sharing functional expressions, this is\nalways available in FUN since a variable is a particular case\nof an expression. What we claim is that the normalization\nprocess in the Core-to-STG translation should not introduce\nunneeded sharing.\nCONCLUSIONS\nWe have presented a stepwise derivation of a (well known)\nabstract machine starting from Sestoft's operational semantics\n, going through several intermediate machines and arriving\nat an imperative machine very close to a conventional imperative\nlanguage. 
This strategy of adding a small amount\nof detail in each step has allowed us both to provide insight\non fundamental decisions underlying the STG design\nand, perhaps more importantly, to be able to show the correctness\nof each refinement with respect to the previous one\nand, consequently, the correctness of the whole derivation.\nTo our knowledge, this is the first time that formal translation\nschemes and a formal proof of correctness of the STG\nto C translation has been done.\nOur previous work [2] followed a different path: it showed\nthe soundness and completeness of a STG-like machine called\nSTG-1S (laying somewhere between machines STG-2 and\nISTG of this paper) with respect to Sestoft's semantics.\nThe technique used was also different: a bisimulation between\nthe STG-1S machine and Sestoft's MARK-2 machine\nwas proved. We got the inspiration for the strategy followed\nhere from Mountjoy [5] and in Section 2 we have explained\nthe differences between his and our work. The previous machines\nof all these works, including STG-1 and STG-2 of this\npaper, are very abstract in the sense that they deal directly\nwith functional expressions. The new machine ISTG introduced\nhere is a really low level machine dealing with raw\nimperative instructions and pointers. Two contributions of\nthis paper have been to bridge this big gap by means of\nthe translations schemes and the proof of correctness of this\ntranslation.\nOur experience is that formal reasoning about even well\nknown products always reveals new details, give new insight,\nmakes good decisions more solid and provides trust in the\nbehavior of our programs.\nREFERENCES\n[1] A. at URL: http://www.haskell.org/ghc/.\n[2] A. Encina and R. Pe~\nna. Proving the Correctness of\nthe STG Machine. In Implementation of Functional\nLanguages, IFL'01. Selected Papers. LNCS 2312,\npages 88104. Springer-Verlag, 2002.\n[3] S. P. Jones and J. Launchbury. Unboxed values as first\nclass citizens in a non-strict functional language.\nConference on Functional Programming Languages\nand Computer Architecture FPCA'91, LNCS 523,\nSeptember 1991.\n[4] J. Launchbury. A Natural Semantics for Lazy\nEvaluation. In Proc. Conference on Principles of\nProgramming Languages, POPL'93. ACM, 1993.\n[5] J. Mountjoy. The Spineless Tagless G-machine,\nNaturally. In Third International Conference on\nFunctional Programming, ICFP'98, Baltimore. ACM\nPress, 1998.\n111\n[6] S. L. Peyton Jones. Implementing Lazy Functional\nLanguages on Stock Hardware: the Spineless Tagless\nG-machine, Version 2.5. Journal of Functional\nProgramming, 2(2):127202, April 1992.\n[7] S. L. Peyton Jones, C. V. Hall, K. Hammond, W. D.\nPartain, and P. L. Wadler. The Glasgow Haskell\nCompiler: A Technical Overview. In Joint Framework\nfor Inf. Technology, Keele, pages 249257, 1993.\n[8] S. L. Peyton Jones and J. Hughes, editors. Report on\nthe Programming Language Haskell 98. URL\nhttp://www.haskell.org, February 1999.\n[9] S. L. Peyton Jones, S. Marlow, and A. Reid. The STG\nRuntime System (revised).\nhttp://www.haskell.org/ghc/docs, 1999.\n[10] P. Sestoft. Deriving a Lazy Abstract Machine. 
Journal\nof Functional Programming, 7(3):231264, May 1997.\n112\n", "keywords": "Lazy evaluation;operational semantics;Functional programming;STG machine;compiler verification;Closures;Translation scheme;Abstract machine;Stepwise derivation;Operational semantics;abstract machines;Haskell compiler"} {"name": "96", "title": "Building Bridges for Web Query Classification", "abstract": "Web query classification (QC) aims to classify Web users' queries, which are often short and ambiguous, into a set of target categories. QC has many applications including page ranking in Web search, targeted advertisement in response to queries, and personalization. In this paper, we present a novel approach for QC that outperforms the winning solution of the ACM KDDCUP 2005 competition, whose objective is to classify 800,000 real user queries. In our approach, we first build a bridging classifier on an intermediate taxonomy in an offline mode. This classifier is then used in an online mode to map user queries to the target categories via the above intermediate taxonomy. A major innovation is that by leveraging the similarity distribution over the intermediate taxonomy, we do not need to retrain a new classifier for each new set of target categories, and therefore the bridging classifier needs to be trained only once. In addition, we introduce category selection as a new method for narrowing down the scope of the intermediate taxonomy based on which we classify the queries. Category selection can improve both efficiency and effectiveness of the online classification. By combining our algorithm with the winning solution of KDDCUP 2005, we made an improvement by 9.7% and 3.8% in terms of precision and F1 respectively compared with the best results of KDDCUP 2005.", "fulltext": "INTRODUCTION\nWith exponentially increasing information becoming available\non the Internet, Web search has become an indispensable\ntool for Web users to gain desired information. Typi-cally\n, Web users submit a short Web query consisting of a\nfew words to search engines. Because these queries are short\nand ambiguous, how to interpret the queries in terms of a\nset of target categories has become a major research issue.\nIn this paper, we call the problem of generating a ranked list\nof target categories from user queries the query classification\nproblem, or QC for short.\nThe importance of QC is underscored by many services\nprovided by Web search. A direct application is to provide\nbetter search result pages for users with interests of different\ncategories. For example, the users issuing a Web query\n\"apple\" might expect to see Web pages related to the fruit\napple, or they may prefer to see products or news related to\nthe computer company. Online advertisement services can\nrely on the QC results to promote different products more\naccurately. Search result pages can be grouped according\nto the categories predicted by a QC algorithm. However,\nthe computation of QC is non-trivial, since the queries are\nusually short in length, ambiguous and noisy (e.g., wrong\nspelling). Direct matching between queries and target categories\noften produces no result. In addition, the target categories\ncan often change, depending on the new Web contents\nas the Web evolves, and as the intended services change as\nwell.\nKDDCUP 2005 ( http://www.acm.org/sigkdd/kddcup )\nhighlighted the interests in QC, where 800,000 real Web\nqueries are to be classified into 67 target categories. Each\nquery can belong to more than one target category. 
For this\ntask, there is no training data provided. As an example of\na QC task, given the query \"apple\", it should be classified\ninto \"Computers\n\\Hardware; Living\\Food&Cooking\".\nThe winning solution in the KDDCUP 2005 competition,\nwhich won on all three evaluation metrics (precision, F1 and\ncreativity), relied on an innovative method to map queries\nto target categories.\nBy this method, an input query is\nfirst mapped to an intermediate category, and then a second\nmapping is applied to map the query from the intermediate\ncategory to the target category. However, we note that this\nmethod suffers from two potential problems. First, the classifier\nfor the second mapping function needs to be trained\nwhenever the target category structure changes. Since in\nreal applications, the target categories can change depending\non the needs of the service providers, as well as the\ndistribution of the Web contents, this solution is not flexible\n131\nenough. What would be better is to train the classifiers once\nand then use them in future QC tasks, even when the target\ncategories are different. Second, the winners used the Open\nDirectory Project (ODP) taxonomy as the intermediate taxonomy\n. Since the ODP contains more than 590,000 different\ncategories, it is costly to handle all mapping functions. It is\nbetter to select a portion of the most relevant parts of the\nintermediate categories.\nIn this paper, we introduce a novel QC algorithm that\nsolves the above two problems. In particular, we first build\na bridging classifier on an intermediate taxonomy in an offline\nmode. This classifier is then used in online mode to map\nusers' queries to the target categories via the above intermediate\ntaxonomy. Therefore, we do not have to build the\nclassifier each time the target categories change. In addition,\nwe propose a category-selection method to select the categories\nin the intermediate taxonomy so that the effectiveness\nand efficiency of the online classification can be improved.\nThe KDDCUP 2005 winning solution included two kinds\nof base classifiers and two ensemble classifiers of them. By\ncomparing our new method with any base classifier in the\nwinner's solution for the KDDCUP 2005 competition, we\nfound that our new method can improve the performance\nby more than 10.4% and 7.1% in terms of precision and\nF1 respectively, while our method does not require the extra\nresource such as WordNet [8]. The proposed method can\neven achieve a similar performance to the winner's ensemble\nclassifiers that achieved the best performance in the KDDCUP\n2005 competition. Furthermore, by combining the our\nmethod with the base classifiers in the winner's solution,\nwe can improve the classification results by 9.7% in terms\nof precision and 3.8% in terms of F1 as compared to the\nwinner's results.\nThis rest of the paper is organized as follows. We define\nthe query classification problem in Section 2. Section 3\npresents the methods of enriching queries and target categories\n. In Section 4, we briefly introduce the previous methods\nand put forward a new method. In Section 5, we compare\nthe approaches empirically on the tasks of KDDCUP\n2005 competition. We list some related works in Section 6.\nSection 7 gives the conclusion of the paper and some possible\nfuture research issues.\nPROBLEM DEFINITION\nThe query classification problem is not as well-formed as\nother classification problems such as text classification. 
The difficulties include short and ambiguous queries and the lack of training data. In this section, inspired by KDDCUP 2005, we give a stringent definition of the QC problem.

Query Classification:

* The aim of query classification is to classify a user query Q_i into a ranked list of n categories C_i1, C_i2, . . ., C_in, among a set of N categories {C_1, C_2, . . ., C_N}. Among the output, C_i1 is ranked higher than C_i2, C_i2 is ranked higher than C_i3, and so on.

* The queries are collected from real search engines and were submitted by Web users. The meaning and intention of the queries are subjective.

* The target categories form a tree, with each node representing a category. The semantic meaning of each category is defined by the labels along the path from the root to the corresponding node.

In addition, the training data must be found online because, in general, labeled training data for query classification are very difficult to obtain.

Figure 1 illustrates the target taxonomy of the KDDCUP 2005 competition. Because no data are provided to define the content and the semantics of a category, as in conventional classification problems, a new solution needs to be found. As mentioned above, an added difficulty is that the target taxonomy may change frequently. The queries in this problem are from the MSN search engine (http://search.msn.com). Several examples of the queries are shown in Table 1. Since a query usually contains very few words, the sparseness of queries becomes a serious problem compared to other text classification problems.

Table 1: Examples of queries.
  1967 shelby mustang
  actress hildegarde
  a & r management" property management Maryland
  netconfig.exe

Figure 1: An Example of the Target Taxonomy (a fragment showing top-level nodes such as Computers, Living and Sports, with children such as Computers\Hardware, Computers\Software and Living\Tools & Hardware).

QUERY AND CATEGORY ENRICHMENT
In this section, we discuss the approaches for enriching queries and categories, which are critical for the query classification task.

3.1 Enrichment through Search Engines
Since queries and categories usually contain only a few words in the QC problem, we need to expand them to obtain richer representations. One straightforward method is to submit them to search engines to get the related pages (for categories, we can take their labels as the queries and submit them to search engines, such as "Computers\Hardware" in Figure 1). The returned Web pages from search engines provide the context of the queries and the target categories, which can help determine the meanings/semantics of the queries and categories.

Given the search results for a query or category, we need to decide what features should be extracted from the pages to construct the representation. Three kinds of features are considered in this paper: the title of a page, the snippet generated by the search engine, and the full plain text of a page. The snippet is in fact a short query-based summary of a Web page in which the query words occur frequently. The full plain text is all the text in a page with the HTML tags removed. Since the title of a page is usually very short (5.2 words on average for our data set), we combine it with the other kinds of features. These features are studied in our experiments.

Besides the above textual features, we can also obtain the category information of a Web page through the directory information from search engines.
For example, Google's "Directory Search" can provide the labels of the returned Web pages. Such labels will be leveraged to classify a query, as stated in Section 4.1.

3.2 Word Matching Between Categories
The query classification problem can be converted to a traditional text classification problem by finding some training data online for each category in the target taxonomy. Our method of collecting the training data is by finding documents in certain intermediate taxonomies that are found online. To do so, we need to construct mapping functions between the intermediate categories and the target categories. Given a certain category in an intermediate taxonomy, we say that it is directly mapped to a target category if and only if the following condition is satisfied: one or more terms in each node along the path in the target category appear along the path corresponding to the matched intermediate category. For example, the intermediate category "Computers\Hardware\Storage" is directly mapped to the target category "Computers\Hardware" since the words "Computers" and "Hardware" both appear along the path Computers → Hardware → Storage, as shown in Figure 2. We call this matching method direct matching.

After constructing the above mapping functions by exact word matching, we may still miss a large number of mappings. To obtain a more complete mapping function, we expand the words in the labels of the target taxonomy through a thesaurus such as WordNet [8]. For example, the keyword "Hardware" is extended to "Hardware & Devices & Equipments". Then an intermediate category such as "Computers\Devices" can now be mapped to "Computers\Hardware". This matching method is called extended matching in this paper.

Figure 2: Illustration of the matching between taxonomies ((1) the intermediate-taxonomy path Computers → Hardware → Storage matched against (2) the target category Computers\Hardware).

CLASSIFICATION APPROACHES
In this section, we first describe the state-of-the-art query classification methods. Then we describe our new bridging classifier to address the disadvantages of the existing methods.

4.1 Classification by Exact Matching
As described in Section 3.1, a query can be expanded through search engines, which results in a list of related Web pages together with their categories from an intermediate taxonomy. A straightforward approach to QC is to leverage the categories by exact matching. We denote the categories in the intermediate taxonomy and the target taxonomy as C^I and C^T respectively. For each category in C^I, we can detect whether it is mapped to any category in C^T according to the matching approaches given in Section 3.2. After that, the most frequent target categories to which the returned intermediate categories have been successfully mapped are regarded as the classification result. That is:

    c* = arg max_{C^T_j}  Σ_{i=1}^{n} I(C^I(i) is mapped to C^T_j)          (1)

In Equation (1), I(·) is the indicator function whose value is 1 when its parameter is true and 0 otherwise. C^I(i) is the category in the intermediate taxonomy for the i-th page returned by the search engine. n result pages are used for query classification, and the parameter n is studied in our experiments.

It is not hard to imagine that the exact matching approach tends to produce classification results with high precision but low recall.
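As a concrete illustration of Equation (1) and of the snippet-based enrichment of Section 3.1, the sketch below builds a bag-of-words query representation from already-retrieved search results and then lets each returned page vote via the category mapping; the data structures (a list of per-page snippets, a list of per-page directory labels, and a mapping dictionary) are assumptions of ours, since the paper does not fix an API.

    from collections import Counter

    def enrich_query(snippets):
        # Bag of words built from the titles and snippets of the top-n result
        # pages returned for the query (the "snippet" representation of Section 3.1).
        bag = Counter()
        for text in snippets:
            bag.update(text.lower().split())
        return bag

    def exact_match_classify(page_labels, mapping, top_k=5):
        # Equation (1): each returned page votes for every target category that its
        # intermediate-taxonomy label is mapped to (direct or extended matching).
        # page_labels: the intermediate category of each returned page, or None
        # mapping:     {intermediate category: set of target categories}
        votes = Counter()
        for c_i in page_labels:
            if c_i is None:
                continue                      # page carries no directory label
            for c_t in mapping.get(c_i, ()):
                votes[c_t] += 1
        return [c for c, _ in votes.most_common(top_k)]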
It produces high precision because this\napproach relies on the Web pages which are associated with\nthe manually annotated category information. It produces\nlow recall because many search result pages have no intermediate\ncategories. Moreover, the exact matching approach\ncannot find all the mappings from the existing intermediate\ntaxonomy to the target taxonomy which also results in low\nrecall.\n4.2\nClassification by SVM\nTo alleviate the low-recall problem of the exact matching\nmethod, some statistical classifiers can be used for QC. In\nthe KDDCUP 2005 winning solution, Support Vector Machine\n(SVM) was used as a base classifier. Query classification\nwith SVM consists of the following steps: 1) construct\nthe training data for the target categories based on mapping\nfunctions between categories, as discussed in Section\n3.2. If an intermediate category C\nI\nis mapped to a target\ncategory C\nT\n, then the Web pages in C\nI\nare mapped into\nC\nT\n; 2) train SVM classifiers for the target categories; 3) for\neach Web query to be classified, use search engines to get its\nenriched features as discussed in Section 3.1 and classify the\nquery using the SVM classifiers. The advantage of this QC\nmethod is that it can improve the recall of the classification\nresult. For example, assume two intermediate categories, C\nI\n1\nand C\nI\n2\n, are semantically related with a target category C\nT\n1\n.\nC\nI\n1\ncan be matched with C\nT\n1\nthrough word matching but C\nI\n2\ncannot. For a query to be classified, if a search engine only\nreturns pages of C\nI\n2\n, this query cannot be classified into the\ntarget category if the exact matching classification method\nis used. However, if the query is classified by a statistical\nclassifier, it can also be assigned the target category C\nT\n1\n, as\nthe classifier is trained using pages of C\nI\n1\n, which may also\ncontain terms of C\nI\n2\nbecause the two intermediate categories\nare similar in topic.\nAlthough statistical classifiers can help increase the recall\nof the exact matching approach, they still need the exact\nmatching for collecting the training data. What is more, if\nthe target taxonomy changes, we need to collect the training\ndata by exact matching and train statistical classifiers again.\nIn the following sections, we develop a new method to solve\nthe abov e problems.\n133\n4.3\nOur New Method: Classifiers by Bridges\n4.3.1\nTaxonomy-Bridging Algorithm\nWe now describe our new QC approach called taxonomy-bridging\nclassifier, or bridging classifier in short, by which\nwe connect the target taxonomy and queries by taking an\nintermediate taxonomy as a bridge. The idea is illustrated\nin Figure 3, where two vertical lines separate the space into\nthree parts. The square in the left part denotes the queries\nto be classified; the tree in the right part represents the\ntarget taxonomy; the tree in the middle part is an existing\nintermediate taxonomy. The thickness of the dotted lines\nreflects the similarly relationship between two nodes. For\nexample, we can see that the relationship between C\nT\ni\nand\nC\nI\nj\nis much stronger than that between C\nT\ni\nand C\nI\nk\n. 
Given a category C^T_i in the target taxonomy and a query to be classified q_k, we can judge the similarity between them by the distributions of their relationship to the categories in the intermediate taxonomy. By defining the relationship and similarity under the probabilistic framework, the above idea can be explained by Equation (2).

Figure 3: Illustration of the Bridging Classifier (the queries Q on the left, the target taxonomy C^T on the right, and the intermediate taxonomy C^I in the middle acting as the bridge).

    p(C^T_i | q) = Σ_{C^I_j} p(C^T_i, C^I_j | q)
                 = Σ_{C^I_j} p(C^T_i | C^I_j, q) · p(C^I_j | q)
                 ≈ Σ_{C^I_j} p(C^T_i | C^I_j) · p(C^I_j | q)
                 = Σ_{C^I_j} p(C^T_i | C^I_j) · p(q | C^I_j) · p(C^I_j) / p(q)
                 ∝ Σ_{C^I_j} p(C^T_i | C^I_j) · p(q | C^I_j) · p(C^I_j)          (2)

In Equation (2), p(C^T_i | q) denotes the conditional probability of C^T_i given q. Similarly, p(C^T_i | C^I_j) and p(q | C^I_j) denote the probabilities of C^T_i and q given C^I_j respectively. p(C^I_j) is the prior probability of C^I_j, which can be estimated from the Web pages in C^I. If C^T_i is represented by a set of words (w_1, w_2, . . . , w_n) where each word w_k appears n_k times, p(C^T_i | C^I_j) can be calculated through Equation (3):

    p(C^T_i | C^I_j) = Π_{k=1}^{n} p(w_k | C^I_j)^{n_k}          (3)

where p(w_k | C^I_j) stands for the probability that the word w_k occurs in class C^I_j, which can be estimated by the principle of maximum likelihood. p(q | C^I_j) can be calculated in the same way as p(C^T_i | C^I_j).

A query q can be classified according to Equation (4):

    c* = arg max_{C^T_i} p(C^T_i | q)          (4)

To make our bridging classifier easier to understand, we can explain it in another way by rewriting Equation (2) as Equation (5):

    p(C^T_i | q) = Σ_{C^I_j} p(C^T_i, C^I_j | q)
                 = Σ_{C^I_j} p(C^T_i | C^I_j, q) · p(C^I_j | q)
                 ≈ Σ_{C^I_j} p(C^T_i | C^I_j) · p(C^I_j | q)
                 = Σ_{C^I_j} [ p(C^I_j | C^T_i) · p(C^T_i) / p(C^I_j) ] · p(C^I_j | q)
                 = p(C^T_i) · Σ_{C^I_j} p(C^I_j | C^T_i) · p(C^I_j | q) / p(C^I_j)          (5)

Let us consider the numerator on the right-hand side of Equation (5). Given a query q and a category C^T_i, p(C^I_j | C^T_i) and p(C^I_j | q) are fixed, and Σ_{C^I_j} p(C^I_j | C^T_i) = 1 and Σ_{C^I_j} p(C^I_j | q) = 1. p(C^I_j | C^T_i) and p(C^I_j | q) represent the probabilities that C^T_i and q belong to C^I_j. It is easy to see that p(C^T_i | q) tends to be larger when q and C^T_i tend to belong to the same categories in the intermediate taxonomy. The denominator p(C^I_j) reflects the size of category C^I_j and acts as a weighting factor. It guarantees that the higher the probability that q and C^T_i belong to the same small category (where size refers to the number of nodes underneath the category in the tree) of the intermediate taxonomy, the higher the probability that q belongs to C^T_i. Such an observation agrees with our intuition, since a larger category tends to contain more sub-topics while a smaller category contains fewer sub-topics. Thus we can say with higher confidence that q and C^T_i are related to the same sub-topic when they belong to the same smaller category.

4.3.2 Category Selection
The intermediate taxonomy may contain an enormous number of categories, some of which are irrelevant to the query classification task defined by the given target taxonomy.
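Before describing the selection step, here is a minimal executable sketch of the bridging computation of Equations (2)-(4), with the unigram likelihoods of Equation (3) kept in log space for numerical stability; the container formats, the smoothing floor and the function names are our assumptions rather than the paper's specification.

    import math

    def _log_lik(bag, word_probs, floor=1e-9):
        # log of the unigram product of Equation (3): sum_k n_k * log p(w_k | C^I_j)
        return sum(n * math.log(word_probs.get(w, floor)) for w, n in bag.items())

    def _logsumexp(xs):
        m = max(xs)
        return m + math.log(sum(math.exp(x - m) for x in xs))

    def bridging_rank(query_bag, target_bags, inter_models, inter_prior, top_k=5):
        # query_bag    : {word: count} for the enriched query q
        # target_bags  : {target category: {word: count}} for the enriched C^T_i
        # inter_models : {intermediate category: {word: p(word | C^I_j)}} (smoothed)
        # inter_prior  : {intermediate category: p(C^I_j)}
        scores = {}
        for ct, ct_bag in target_bags.items():
            # log of  sum_j p(C^T_i | C^I_j) * p(q | C^I_j) * p(C^I_j)   (Equation (2), up to p(q))
            terms = [_log_lik(ct_bag, m) + _log_lik(query_bag, m) + math.log(inter_prior[cj])
                     for cj, m in inter_models.items()]
            scores[ct] = _logsumexp(terms)
        # Equation (4): rank target categories by the (unnormalized) posterior
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

Note that nothing in this computation depends on the particular target taxonomy: the per-category word models over the intermediate taxonomy are built once and reused, which is exactly the point made in Section 4.3.3.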
Therefore, to reduce the computation complexity, we should perform "Category Selection" in a similar sense to "Feature Selection" in text classification [15]. Two approaches are employed in this paper to evaluate the goodness of a category in the intermediate taxonomy. After sorting the categories according to the scores calculated by the following two approaches, category selection can be fulfilled by selecting the top n categories.

Total Probability (TP): this method gives a score to each category in the intermediate taxonomy according to its probability of generating the categories in the target taxonomy, as shown in Equation (6).

    Score(C^I_j) = Σ_{C^T_i} P(C^T_i | C^I_j)          (6)

Mutual Information (MI): MI is a criterion commonly used in statistical language modeling of word associations and other related applications [15]. Given a word t and a category c, the mutual information between t and c is defined as:

    MI(t, c) = log [ P(t ∧ c) / (P(t) · P(c)) ]          (7)

By considering the two-way contingency table for t and c, where A is the number of times t and c co-occur, B is the number of times that t occurs without c, C is the number of times c occurs without t, and N is the total number of documents, the mutual information between t and c can be estimated using:

    MI(t, c) ≈ log [ A · N / ((A + C) · (A + B)) ]          (8)

Since the name of a category in the target taxonomy usually contains more than one term, we define the "mutual information" between a category in the intermediate taxonomy C^I_j and a category in the target taxonomy C^T_i as:

    MI(C^T_i, C^I_j) = (1 / |C^T_i|) Σ_{t ∈ C^T_i} MI(t, C^I_j)          (9)

where |C^T_i| is the number of terms in the name of C^T_i.

To measure the goodness of C^I_j in a global category selection, we combine the category-specific scores of C^I_j by:

    MI_avg(C^I_j) = Σ_{C^T_i} MI(C^T_i, C^I_j)          (10)

4.3.3 Discussions
As we can see, in the bridging classifier, we do not need to train a classifier function between an intermediate taxonomy and the target taxonomy. We only need to build the classifiers on the intermediate taxonomy once, and it can be applied to any target taxonomy. The framework can be extended in two directions. One is to include some training data for each target category. With the training data, we do not have to treat the labels of the target categories as queries and retrieve related Web pages through search engines to represent the categories. We can extract features from the training data directly. The second extension is to use other sophisticated models such as the n-gram model [9] or SVM [10] for computing p(C^T_i | C^I_j) and p(q | C^I_j).

EXPERIMENTS
In this section, we first introduce the data set and the evaluation metrics. Then we present the experiment results and give some discussions.

5.1 Data Set and Evaluation Metrics
5.1.1 Data sets
In this paper, we use the data sets from the KDDCUP 2005 competition, which are available on the Web. One of the data sets contains 111 sample queries together with the category information. These samples are used to exemplify the format of the queries by the organizer. However, since the category information of these queries is truthful, they can serve as the validation data. Another data set contains 800 queries with category information labeled by three human labelers.
In fact, the organizers provided 800,000 queries in\n1\nhttp://www.acm.org/sigs/sigkdd/kdd2005/kddcup.html\ntotal which are selected from the MSN search logs for testing\nthe submitted solutions. Since manually labeling all the\n800,000 queries is too expensive and time consuming, the\norganizers randomly selected 800 queries for evaluation.\nWe denote the three human query-labelers (and sometimes\nthe dataset labeled by them if no confusion is caused)\nas L1, L2 and L3, respectively. Each query has at most\nfive labels in ranked order. Table 2 shows the average precision\nand F1 score values of each labeler when evaluated\nagainst the other two labelers. The average values among\nthe three labelers are around 0.50 which indicates that the\nquery classification problem is not an easy task even for human\nlabelers. In this paper, all the experiments use only\nthe 800 queries, except in the ensemble classifiers, where we\nuse the 111 sample queries to tune the weight of each single\nclassifier.\nTable 2: The Average Scores of Each Labeler When\nEvaluated Against the Other Two Labelers\nL1\nL2\nL3\nAverage\nF1\n0.538\n0.477\n0.512\n0.509\nPre\n0.501\n0.613\n0.463\n0.526\nThe existing intermediate taxonomy used in the paper\nis from Open Directory Project (ODP, http://dmoz.org/).\nWe crawled 1,546,441 Web pages from ODP which spanned\nover 172,565 categories. The categories have a hierarchical\nstructure as shown in Figure 2(1).\nWe can consider the\nhierarchy at different levels. Table 3 shows the number of\ncategories on different levels. The first row counts all the\ncategories while the second row counts only the categories\ncontaining more than 10 Web pages. Table 4 summarizes the\nstatistics of Web page numbers in the categories with more\nthan 10 documents on different levels. As we can see, when\nwe move down to the lower levels along the hierarchy, more\ncategories appear while each category contains fewer Web\npages. In order to remove noise, we consider the categories\nwith more than 10 pages in this paper.\nTable 3: Number of Categories on Different Levels\nTop 2\nTop 3\nTop 4\nTop 5\nTop All\n#doc > 0\n435\n5,300\n24,315\n56,228\n172,565\n#doc > 10\n399\n4,011\n13,541\n23,989\n39,250\nTable 4: Statistics of the Numbers of Documents in\nthe Categories on Different Levels\nTop 2\nTop 3\nTop 4\nTop 5\nTop All\nLargest\n211,192\n153,382\n84,455\n25,053\n920\nSmallest\n11\n11\n11\n11\n11\nMean\n4,044.0\n400.8\n115.6\n61.6\n29.1\n5.1.2\nEvaluation Measurements\nIn KDDCUP 2005, precision, performance and creativity\nare the three measures to evaluate the submitted solutions.\n\"creativity\" refers to the novelty of the solutions judged by\nexperts. The other two measures are defined according to\nthe standard measures to evaluate the performance of classification\n, that is, precision, recall and F1-measure [12]. Pre-135\ncision (P) is the proportion of actual positive class members\nreturned by the system among all predicted positive class\nmembers returned by the system. Recall (R) is the proportion\nof predicted positive members among all actual positive\nclass members in the data. 
F1 is the harmonic mean of precision\nand recall as shown below:\nF 1 = 2\nP R/(P + R)\n(11)\n\"performance\" adopted by KDDCUP 2005 is in fact F1.\nTherefore, we denote it by F1 instead of \"performance\" for\nsimplicity.\nAs 3 labelers were asked to label the queries, the results\nreported are averaged over the values evaluated on each of\nthem.\n5.2\nResults and Analysis\n5.2.1\nPerformance of Exact matching and SVM\nIn this section, we study the performance of the two methods\nwhich tightly depend on word matching: exact matching\nand SVM, as well as the effect of query and category\nexpansion. Table 5 shows the results of the category expansion\nthrough intermediate taxonomy by word matching,\nthat is the results of collecting training data for the target\ntaxonomy. Each element in the table represents the number\nof documents collected for the target categories. The first\nrow contains the results by direct matching while the second\nrow contains the results after expanding the category\nnames through extended matching. We can see that after\nextending the names of the target categories, the number\nof documents collected for the target categories increases.\nWe expect that the expansion with the help of WordNet\nshould provide more documents to reflect the semantics of\nthe target categories which is verified by Table 6.\nTable 5: Number of Pages Collected for Training\nunder Different Category Expansion Methods\nMin\nMax\nMedian\nMean\nDirect Matching\n4\n126,397\n2,389\n14,646\nExtended Matching\n22\n227,690\n6,815\n21,295\nTable 6 presents the result comparisons of the exact matching\nmethod and SVM. We enrich the query by retrieving the\nrelevant pages through Google (http://www.google.com). The\ntop n returned pages are used to represent the query where\nn varies from 20 to 80, with the step size of 20.\nTwo\napproaches are used to extract features from the returned\npages. One is to extract the snippet of the returned pages\nand the other is to extract all the text in the Web pages except\nthe HTML tags. The Web pages' titles will be added to\nboth of these two kinds of features. The column \"0\" means\nthat we use only the terms in the query without enrichment.\nIn our experiments, we expand the target categories through\nthe ODP taxonomy; that is, we collect the training data\nfor the target categories from ODP. When constructing the\nmapping relationship as shown in Section 3.2, if we use direct\nmatching, we denote SVM and the exact matching method\nwith \"SVM-D\" and \"Extact-D\" respectively. Otherwise,if\nwe use the extended matching method, we denote SVM and\nthe exact matching method with \"SVM-E\" and \"Extact-E\"\nrespectively. The exact matching method needs the category\nlist of the retrieved Web pages for each query. 
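As a brief aside on how the reported numbers are computed, the per-query precision, recall and F1 of Section 5.1.2 (Equation (11)) can be sketched as follows; the official KDDCUP scoring aggregates counts over all 800 queries and all three labelers, so this per-query helper is only a simplification of ours.

    def precision_recall_f1(predicted, actual):
        # Per-query precision, recall and F1 (Equation (11)) over category sets.
        predicted, actual = set(predicted), set(actual)
        if not predicted or not actual:
            return 0.0, 0.0, 0.0
        tp = len(predicted & actual)
        p = tp / len(predicted)
        r = tp / len(actual)
        f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
        return p, r, f1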
The\nTable 6: Performance of Exact Matching and SVM\n(1)Measured by F1\nn\n0\n20\n40\n60\n80\nExact-D\nNull\n0.251\n0.249\n0.247\n0.246\nExact-E\nNull\n0.385\n0.396\n0.386\n0.384\nSVM-D\nsnippet\n0.205\n0.288\n0.292\n0.291\n0.289\nfull text\n0.254\n0.276\n0.267\n0.273\nSVM-E\nsnippet\n0.256\n0.378\n0.383\n0.379\n0.379\nfull text\n0.316\n0.340\n0.327\n0.336\n(2) Measured by Precision\nn\n0\n20\n40\n60\n80\nExact-D\nNull\n0.300\n0.279\n0.272\n0.268\nExact-E\nNull\n0.403\n0.405\n0.389\n0.383\nSVM-D\nsnippet\n0.178\n0.248\n0.248\n0.244\n0.246\nfull text\n0.227\n0.234\n0.242\n0.240\nSVM-E\nsnippet\n0.212\n0.335\n0.321\n0.312\n0.311\nfull text\n0.288\n0.309\n0.305\n0.296\ncategory information is obtained through Google's \"Direc-tory\nSearch\" service (http://www.google.com/dirhp).\nFrom Table 6 we can see that \"Exact-E\" is much better\nthan \"Exact-D\", and \"SVM-E\" is much better than \"SVM-D\"\n. This indicates that the extended matching with the\nhelp of WordNet can achieve a more proper representation\nof the target category. We can also observe that \"Exact-E\"\nperforms better than \"SVM-E\". Another observation\nis that the \"snippet\" representation outperforms \"full text\"\nconsistently. The reason is that the \"snippet\" provides a\nmore concise context of the query than the \"full text\" which\ntends to introduce noise. We can also see that most of the\nclassifiers achieve the highest performance when the queries\nare represented by the top 40 search result pages. Therefore,\nin the later experiments, we use snippets of the top 40 pages\nto represent queries.\n5.2.2\nPerformance of the Bridging Classifier\nAs we can see in the above experiments, the thesaurus\nWordNet plays an important role in both the exact matching\nmethod and SVM since it can help expand the words in\nthe labels of the target categories, which can further improve\nthe mapping functions. However, the effect of a thesaurus\nmay be limited due to the following reasons: 1) there may\nbe no thesaurus in some fields; 2) it is hard to determine the\nprecise expansion of the words even with a high-quality thesaurus\n, especially with the rapidly changing usage of words\non the Web. Therefore, we put forward the bridging classifier\nwhich only relies on the intermediate taxonomies.\nIn order to expand a target category, we can treat its name\nas a query and submit it to search engines. We use the snippet\nof the top n returned pages to represent a category since\nwe learned from the query expansion that snippet performs\nbetter than \"full text\". The parameter n varies from 20 to\n100. Table 7 shows the results when \"top all\" categories in\nthe ODP taxonomy are used for bridging the queries and the\ntarget taxonomy. The effect of different levels of the intermediate\ntaxonomy will be studied later. From Table 7, we can\n136\nsee that the bridging classifier achieves the best performance\nwhen n equals 60. The best F1 and precision achieved by\nthe bridging classifier is higher than those achieved either by\nthe exact matching method or SVM. The relative improvement\nis more than 10.4% and 7.1% in terms of precision\nand F1 respectively. 
The main reason for the improvement\nis that the bridging classifier can make thorough use of the\nfiner grained intermediate taxonomy in a probabilistic way.\nWhile the previous methods including the exact matching\nmethod and SVM exploit the intermediate taxonomy in a\nhard way when constructing the mapping function as shown\nin Section 3.2.\nTable 7:\nPerformances of the Bridging Classifier\nwith Different Representations of Target Categories\nn\n20\n40\n60\n80\n100\nF1\n0.414\n0.420\n0.424\n0.421\n0.416\nPrecision\n0.437\n0.443\n0.447\n0.444\n0.439\nTable 8:\nPerformances of the Bridging Classifier\nwith Different Granularity\nTop 2\nTop 3\nTop 4\nTop 5\nTop All\nF1\n0.267\n0.285\n0.312\n0.352\n0.424\nPrecision\n0.270\n0.291\n0.339\n0.368\n0.447\nTable 8 shows the performance of the bridging classifier\nwhen we change the granularity of the categories in the intermediate\ntaxonomy. To change the granularity of the categories\n, we use the categories on the top L level by varying L.\nIt is clear that the categories have larger granularity when\nL is smaller. From Table 8, we can see that the performance\nof the bridging classifier improves steadily by reducing the\ngranularity of categories. The reason is that categories with\nlarge granularity may be a mixture of several target categories\nwhich prohibit distinguishing the target categories.\n0.25\n0.30\n0.35\n0.40\n0.45\n0.50\n4000\n11000\n18000\n25000\n32000\n39250\nMI-F1\nMI-Pre\nTP-F1\nTP-Pre\nFigure 4: Effect of category selection.\nHowever, reducing the granularity of categories in the intermediate\ntaxonomy will certainly increase the number of\nthe intermediate categories which will thus increase the computation\ncost. One way to solve this problem is to do category\nselection. Figure 4 shows the performance of the bridging\nclassifier when we select the categories from all the ODP\ntaxonomy through the two category selection approaches\nproposed in Section 4.3.2. We can see that when the category\nnumber is around 18,000, the performance of the bridging\nclassifier is comparable to, if not better than, the previous\napproaches, including the exact matching method and\nSVM. MI works better than TP in that MI can not only\nmeasure the relevance between the categories in the target\ntaxonomy and those in the intermediate taxonomy, but also\nfavors the categories which are more powerful to distinguish\nthe categories in the target taxonomy. However, TP only\ncares about the merit of relevance.\n5.2.3\nEnsemble of Classifiers\nThe winner of the KDDCUP 2005 competition found that\nthe best result was achieved by combining the exact matching\nmethod and SVM. In the winning solution, besides the\nexact matching method on Google's directory search, two\nother exact matching methods are developed using LookS-mart\n(http://www.looksmart.com) and a search engine based\non Lemur (http://www.lemurproject.org) and their crawled\nWeb pages from ODP [11]. Two classifier-combination strategies\nare used, with one aiming at higher precision (denoted\nby EV, where 111 samples are used as the validation data to\ntune the weight of each base classifier) and the other aiming\nat higher F1 (denoted by EN in which the validation\ndata set is ignored). EV assigns a weight to a classifier proportional\nto the classifier's precision while EN gives equal\nweights to all classifiers. We follow the same strategy to\ncombine our new method with the winner's methods, which\nis denoted as \"Exact-E\"+\"SVM-E\"+Bridging as shown in\nTable 9. 
The numbers in the parentheses are the relative improvements. Note that the bridging classifier alone achieves a similar F1 measure to the KDDCUP 2005 winning solution ("Exact-E"+"SVM-E" with the EV combination strategy) but improves the precision by 5.4%. From Table 9 we can also see that the combination of the bridging classifier and the KDDCUP 2005 winning solution improves the performance by 9.7% and 3.8% in terms of precision and F1, respectively, compared with the winning solution. This indicates that the bridging classifier works in a different way from the exact matching method and SVM, and that they are complementary to each other.

Table 9: Performances of Ensemble Classifiers
                      "Exact-E"+"SVM-E"   "Exact-E"+"SVM-E"+Bridging
  EV   F1             0.426               0.429 (+0.007)
       Precision      0.424               0.465 (+0.097)
  EN   F1             0.444               0.461 (+0.038)
       Precision      0.414               0.430 (+0.039)

RELATED WORK

Though not much work has been done on topical query classification, some work has been conducted on other kinds of query classification problems. Gravano et al. classified Web queries by geographical locality [3], while Kang et al. proposed to classify queries according to their functional types [4].

Beitzel et al. studied in [2] the same problem as we pursued in this paper, with the goal of classifying queries according to their topic(s). They used two primary data sets containing queries from the AOL web search service. These queries were manually classified into a set of 18 categories. The main difference between our problem and that of [2] is that we do not have training data as given input. In fact, it is a very difficult and time-consuming task to provide enough training examples, especially when the target taxonomy is complicated. Another potential problem related to the training data, as pointed out in [2], is caused by the ongoing changes in the query stream, which make it hard to systematically cover the space of queries. In this paper, we rely only on the structure and category names of the target taxonomy, without training data, which is consistent with the task of KDDCUP 2005.

KDDCUP 2005 provides a test bed for the Web query classification problem. There were a total of 37 solutions from 32 teams attending the competition. As summarized by the organizers [6], most solutions expanded the queries through search engines or WordNet and expanded the categories by mapping some pre-defined or existing taxonomy onto the target taxonomy. Some solutions require human intervention in the mapping process [5, 13].

Besides classifying the queries into a target taxonomy, we can also cluster the queries to discover hidden taxonomies through unsupervised methods. Both Beeferman [1] and Wen [14] used search engines' clickthrough data to cluster queries. The former makes no use of the actual content of the queries and URLs, but only of how they co-occur within the clickthrough data, while the latter also exploits the content.
Although the work in [1]\nand [14] proved the effectiveness of the clickthrough data\nfor query clustering, we did not utilize them in our solution\ndue to the following two reasons: 1) the clickthorugh data\ncan be quite noisy and is search engine dependent; 2) it is\ndifficult to obtain the clickthrough data due to privacy and\nlegal issues.\nCONCLUSION AND FUTURE WORK\nThis paper presented a novel solution for classifying Web\nqueries into a set of target categories, where the queries are\nvery short and there are no training data. In our solution,\nan intermediate taxonomy is used to train classifiers bridging\nthe queries and target categories so that there is no need\nto collect the training data. Experiments on the KDDCUP\n2005 data set show that the bridging classifier approach is\npromising. By combining the bridging classifier with the\nwinning solution of KDDCUP 2005, we made a further improvement\nby 9.7% and 3.8% in terms of precision and F1\nrespectively compared with the best results of KDDCUP\n2005. In the future, we plan to extend the bridging classifier\nidea to other types of query processing tasks, including\nquery clustering. We will also conduct research on how to\nleverage a group of intermediate taxonomies for query classification\nACKNOWLEDGMENTS\nDou Shen and Qiang Yang are supported by a grant from\nNEC (NECLC05/06.EG01). We thank the anonymous reviewers\nfor their useful comments.\nREFERENCES\n[1] D. Beeferman and A. Berger. Agglomerative clustering\nof a search engine query log. In KDD '00: Proceedings\nof the sixth ACM SIGKDD international conference\non Knowledge discovery and data mining, pages\n407416, 2000.\n[2] S. M. Beitzel, E. C. Jensen, O. Frieder, D. Grossman,\nD. D. Lewis, A. Chowdhury, and A. Kolcz. Automatic\nweb query classification using labeled and unlabeled\ntraining data. In SIGIR '05: Proceedings of the 28th\nannual international ACM SIGIR conference on\nResearch and development in information retrieval,\npages 581582, 2005.\n[3] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein.\nCategorizing web queries according to geographical\nlocality. In CIKM '03: Proceedings of the twelfth\ninternational conference on Information and\nknowledge management, pages 325333, 2003.\n[4] I.-H. Kang and G. Kim. Query type classification for\nweb document retrieval. In SIGIR '03: Proceedings of\nthe 26th annual international ACM SIGIR conference\non Research and development in informaion retrieval,\npages 6471, 2003.\n[5] Z. T. Kardkov\nacs, D. Tikk, and Z. B\nans\naghi. The\nferrety algorithm for the kdd cup 2005 problem.\nSIGKDD Explor. Newsl., 7(2):111116, 2005.\n[6] Y. Li, Z. Zheng, and H. K. Dai. Kdd cup-2005 report:\nfacing a great challenge. SIGKDD Explor. Newsl.,\n7(2):9199, 2005.\n[7] A. McCallum and K. Nigam. A comparison of event\nmodels for naive bayes text classication. In AAAI-98\nWorkshop on Learning for Text Categorization, 1998.\n[8] G. Miller, R. Beckwith, C. Fellbaum, D. Gross, and\nK. Miller. Introduction to wordnet: an on-line lexical\ndatabase. International Journal of Lexicography,\n3(4):23244, 1990.\n[9] F. Peng, D. Schuurmans, and S. Wang. Augmenting\nnaive bayes classifiers with statistical language\nmodels. Inf. Retr., 7(3-4):317345, 2004.\n[10] J. Platt. Probabilistic outputs for support vector\nmachines and comparisons to regularized likelihood\nmethods. In A. Smola, P. Bartlett, B. Scholkopf, and\nD. Schuurmans, editors, Advances in Large Margin\nClassifiers. MIT Press, 1999.\n[11] D. Shen, R. Pan, J.-T. Sun, J. J. Pan, K. 
Wu, J. Yin,\nand Q. Yang. Q2c@ust: our winning solution to query\nclassification in kddcup 2005. SIGKDD Explor.\nNewsl., 7(2):100110, 2005.\n[12] R. C. van. Information Retrieval. Butterworths,\nLondon, second edition edition, 1979.\n[13] D. Vogel, S. Bickel, P. Haider, R. Schimpfky,\nP. Siemen, S. Bridges, and T. Scheffer. Classifying\nsearch engine queries using the web as background\nknowledge. SIGKDD Explor. Newsl., 7(2):117122,\n2005.\n[14] J.-R. Wen, J.-Y. Nie, and H.-J. Zhang. Query\nclustering using content words and user feedback. In\nSIGIR '01: Proceedings of the 24th annual\ninternational ACM SIGIR conference on Research and\ndevelopment in information retrieval, pages 442443,\n2001.\n[15] Y. Yang and J. O. Pedersen. A comparative study on\nfeature selection in text categorization. In ICML '97:\nProceedings of the Fourteenth International Conference\non Machine Learning, pages 412420, 1997.\n138\n", "keywords": "Bridging classifier;Category Selection;Ensemble classifier;Bridging Classifier;Target categories;Similarity distribution;Mapping functions;Search engine;Matching approaches;Intermediate categories;Taxonomy;Query enrichment;KDDCUP 2005;Query classification;Category selection;Web Query Classification"} {"name": "97", "title": "Geographically Focused Collaborative Crawling", "abstract": "A collaborative crawler is a group of crawling nodes, in which each crawling node is responsible for a specific portion of the web. We study the problem of collecting geographically -aware pages using collaborative crawling strategies. We first propose several collaborative crawling strategies for the geographically focused crawling, whose goal is to collect web pages about specified geographic locations, by considering features like URL address of page, content of page, extended anchor text of link, and others. Later, we propose various evaluation criteria to qualify the performance of such crawling strategies. Finally, we experimentally study our crawling strategies by crawling the real web data showing that some of our crawling strategies greatly outperform the simple URL-hash based partition collaborative crawling, in which the crawling assignments are determined according to the hash-value computation over URLs. More precisely, features like URL address of page and extended anchor text of link are shown to yield the best overall performance for the geographically focused crawling.", "fulltext": "INTRODUCTION\nWhile most of the current search engines are effective for\npure keyword-oriented searches, these search engines are not\nfully effective for geographic-oriented keyword searches. For\ninstance, queries like \"restaurants in New York, NY\" or\n\"good plumbers near 100 milam street, Houston, TX\" or\n\"romantic hotels in Las Vegas, NV\" are not properly man-aged\nby traditional web search engines. Therefore, in recent\nThis work was done while the author was visiting Genieknows\n.com\nCopyright is held by the International World Wide Web Conference Committee\n(IW3C2). Distribution of these papers is limited to classroom use,\nand personal use by others.\nWWW 2006, May 2326, 2006, Edinburgh, Scotland.\nACM 1-59593-323-9/06/0005.\nyears, there has been surge of interest within the search\nindustry on the search localization (e.g., Google Local\n1\n, Yahoo\nLocal\n2\n). 
The main aim of such search localization is\nto allow the user to perform the search according his/her\nkeyword input as well as the geographic location of his/her\ninterest.\nDue to the current size of the Web and its dynamical nature\n, building a large scale search engine is challenging and\nit is still active area of research. For instance, the design\nof efficient crawling strategies and policies have been exten-sively\nstudied in recent years (see [9] for the overview of the\nfield). While it is possible to build geographically sensitive\nsearch engines using the full web data collected through a\nstandard web crawling, it would rather be more attractive\nto build such search engines over a more focused web data\ncollection which are only relevant to the targeted geographic\nlocations. Focusing on the collection of web pages which are\nrelevant to the targeted geographic location would leverage\nthe overall processing time and efforts for building such\nsearch engines. For instance, if we want to build a search\nengine targeting those users in New York, NY, then we can\nbuild it using the web collection, only relevant to the city\nof New York, NY. Therefore, given intended geographic regions\nfor crawling, we refer the task of collecting web pages,\nrelevant to the intended geographic regions as geographically\nfocused crawling.\nThe idea of focusing on a particular portion of the web\nfor crawling is not novel. For instance, the design of efficient\ntopic-oriented or domain-oriented crawling strategies\nhas been previously studied [8, 23, 24]. However, there has\nbeen little previous work on incorporating the geographical\ndimension of web pages to the crawling. In this paper,\nwe study various aspects of crawling when the geographical\ndimension is considered.\nWhile the basic idea behind the standard crawling is straightforward\n, the collaborative crawling or parallel crawling is often\nused due to the performance and scalability issues that\nmight arise during the real crawling of the web [12, 19].\nIn a collaborative or parallel crawler, the multiple crawling\nnodes are run in parallel on a multiprocessor or in a distributed\nmanner to maximize the download speed and to further\nimprove the overall performance especially for the scalability\nof crawling. Therefore, we study the geographically\nfocused crawling under the collaborative setting, in which\nthe targeted geographic regions are divided and then assigned\nto each participating crawling node. More precisely,\n1\nhttp://local.google.com\n2\nhttp://local.yahoo.com\n287\nin a geographically focused collaborative crawler, there will\nbe a set of geographically focused crawling nodes in which\neach node is only responsible for collecting those web pages,\nrelevant to its assigned geographic regions. Furthermore,\nthere will be additional set of general crawling nodes which\naim to support other geographically focused crawling nodes\nthrough the general crawling (download of pages which are\nnot geographically-aware). The main contributions of our\npaper are follows:\n1. We propose several geographically focused collaborative\ncrawling strategies whose goal is to collect web\npages about the specified geographic regions.\n2. We propose several evaluation criteria for measuring\nthe performance of a geographically focused crawling\nstrategy.\n3. We empirically study our proposed crawling strategies\nby crawling the real web. 
More specifically, we collect\nweb pages pertinent to the top 100 US cities for each\ncrawling strategy.\n4. We empirically study geographic locality.\nThat is, pages\nwhich are geographically related are more likely to be\nlinked compared to those which are not.\nThe rest of the paper is organized as follows. In Section\n2, we introduce some of the previous works related to\nour geographically focused collaborative crawling. In Section\n3, we describe the problem of geographically focused\ncollaborative crawling and then we propose several crawling\npolicies to deal with this type of crawling. In Section 4,\nwe present evaluation models to measure the performance\nof a geographically focused collaborative crawling strategy.\nIn Section 5, we present results of our experiments with the\nreal web data. Finally, in Section 6, we present final remarks\nabout our work.\nRELATED WORKS\nA focused crawler is designed to only collect web pages\non a specified topic while transversing the web. The basic\nidea of a focused crawler is to optimize the priority of the\nunvisited URLs on the crawler frontier so that pages concerning\na particular topic are retrieved earlier. Bra et al. [4]\npropose a focused web crawling method in the context of a\nclient-based real-time search engine. Its crawling strategy is\nbased on the intuition that relevant pages on the topic likely\ncontain links to other pages on the same topic. Thus, the\ncrawler follows more links from relevant pages which are estimated\nby a binary classifier that uses keyword and regular\nexpression matchings. In spite of its reasonably acceptable\nperformance, it has an important drawback as a relevant\npage on the topic might be hardly reachable when this page\nis not pointed by pages relevant to the topic.\nCho et al. [11] propose several strategies for prioritiz-ing\nunvisited URLs based on the pages downloaded so far.\nIn contrast to other focused crawlers in which a supervised\ntopic classifier is used to control the way that crawler handles\nthe priority of pages to be be downloaded, their strategies\nare based on considering some simple properties such as\nlinkage or keyword information to define the priority of pages\nto be downloaded. They conclude that determining the priority\nof pages to be downloaded based on their PageRank\nvalue yield the best overall crawling performance.\nChakrabarti et al. [8] propose another type of focused\ncrawler architecture which is composed of three components,\nnamely classifier, distiller and crawler. The classifier makes\nthe decision on the page relevancy to determine its future\nlink expansion. The distiller identifies those hub pages, as\ndefined in [20], pointing to many topic related pages to determine\nthe priority of pages to be visited. Finally, the crawling\nmodule fetches pages using the list of pages provided by the\ndistiller. In the subsequent work, Chakrabarti et al. [7]\nsuggest that only a fraction of URLs extracted from a page\nare worth following. They claim that a crawler can avoid\nirrelevant links if the relevancy of links can be determined\nby the local text surrounding it. They propose alternative\nfocused crawler architecture where documents are modeled\nas tag trees using DOM (Document Object Model). In their\ncrawler, two classifiers are used, namely the \"baseline\" and\nthe \"apprentice\". The baseline classifier refers to the module\nthat navigates through the web to obtain the enriching\ntraining data for the apprentice classifier. 
The apprentice\nclassifier, on the other hand, is trained over the data collected\nthrough the baseline classifier and eventually guides\nthe overall crawling by determining the relevancy of links\nusing the contextual information around them.\nDiligenti et al. [14] use the context graph to improve\nthe baseline best-first focused crawling method. In their\napproach, there is a classifier which is trained through the\nfeatures extracted from the paths that lead to the relevant\npages. They claim that there is some chance that some off-topic\npages might potentially lead to highly relevant pages.\nTherefore, in order to mediate the hardness of identifying\napparently off-topic pages, they propose the usage of context\ngraph to guide the crawling. More precisely, first a\ncontext graph for seed pages is built using links to the pages\nreturned from a search engine. Next, the context graph is\nused to train a set of classifiers to assign documents to different\ncategories using their estimated distance, based on\nthe number of links, to relevant pages on different categories\n. Their experimental results reveal that the context\ngraph based focused crawler has a better performance and\nachieves higher relevancy compared to an ordinary best-first\ncrawler.\nCho et al. [10] attempt to map and explore a full design\nspace for parallel and distributed crawlers. Their work\naddresses issues of communication bandwidth, page quality\nand the division of work between local crawlers. Later,\nChung et al. [12] study parallel or distributed crawling in\nthe context of topic-oriented crawling. Basically, in their\ntopic-oriented collaborative crawler, each crawling node is\nresponsible for a particular set of topics and the page is\nassigned to the crawling node which is responsible for the\ntopic which the page is relevant to. To determine the topic of\npage, a simple Naive-Bayes classifier is employed. Recently,\nExposto et al. [17] study distributed crawling by means of\nthe geographical partition of the web considering the multi-level\npartitioning of the reduced IP web link graph. Note\nthat our IP-based collaborative crawling strategy is similar\nto their approach in spirit as we consider the IP-addresses\nrelated to the given web pages to distribute them among\nparticipating crawling nodes.\nGravano and his collaborators study the geographically-aware\nsearch problem in various works [15, 18, 5]. Particularly\n, in [15], how to compute the geographical scope of web\nresources is discussed. In their work, linkage and seman-288\ntic information are used to assess the geographical scope of\nweb resources. Their basic idea is as follows. If a reasonable\nnumber of links pertinent to one particular geographic location\npoint to a web resource and these links are smoothly\ndistributed across the location, then this location is treated\nas one of the geographic scopes of the corresponding web\nresource. Similarly, if a reasonable number of location references\nis found within a web resource, and the location references\nare smoothly distributed across the location, then this\nlocation is treated as one of the geographical scopes of the\nweb resource. They also propose how to solve aliasing and\nambiguity. Recently,\nMarkowetz et al. [22] propose the design\nand the initial implementation of a geographic search\nengine prototype for Germany. Their prototype extracts\nvarious geographic features from the crawled web dataset\nconsisting of pages whose domain name contains \"de\". 
A\ngeographic footprint, a set of relevant locations for page, is\nassigned to each page. Subsequently, the resulting footprint\nis integrated into the query processor of the search engine.\nCRAWLING\nEven though, in theory, the targeted geographic locations\nof a geographically focused crawling can be any valid geographic\nlocation, in our paper, a geographic location refers\nto a city-state pair for the sake of simplicity. Therefore,\ngiven a list of city-state pairs, the goal of our geographically\nfocused crawling is to collect web pages which are \"relevant\"\nto the targeted city-state pairs. Thus, after splitting and\ndistributing the targeted city-state pairs to the participating\ncrawling nodes, each participating crawling node would\nbe responsible for the crawling of web pages relevant to its\nassigned city-state pairs.\nExample 1. Given {(New York, NY), (Houston, TX)}\nas the targeted city-state pairs and 3 crawling nodes {Cn\n1\n,\nCn\n2\n, Cn\n3\n}, one possible design of geographically focused collaborative\ncrawler is to assign (New York, NY) to Cn\n1\nand\n(Houston, TX) to Cn\n2\n.\nParticularly, for our experiments, we perform the geographically\nfocused crawling of pages targeting the top 100\nUS cities, which will be explained later in Section 5. We\nuse some general notations to denote the targeted city-state\npairs and crawling nodes as follows. Let T C = {(c\n1\n, s\n1\n),\n. . . , (c\nn\n, s\nn\n)} denote the set of targeted city-state pairs for\nour crawling where each (c\ni\n, s\ni\n) is a city-state pair. When\nit is clear in the context, we will simply denote (c\ni\n, s\ni\n) as c\ni\n.\nLet CR = {Cn\n1\n, . . . , Cn\nm\n} denote the set of participating\ncrawling nodes for our crawling. The main challenges that\nhave to be dealt by a geographically focused collaborative\ncrawler are the following:\nHow to split and then distribute T C = {c\n1\n, . . . , c\nn\n}\namong the participating CR = {Cn\n1\n, . . . , Cn\nm\n}\nGiven a retrieved page p, based on what criteria we\nassign the extracted URLs from p to the participating\ncrawling nodes.\nl number of URLs\nextracted\nq extracted\np\nq\ntransferred\nAll URLs\na) All l URLs extracted from q are transferred\nto another crawling node (the worst scenario\nfor policy A\n)\nl number of URLs\nextracted\nl number of URLs\nextracted\nl number of URLs\nextracted\nq extracted\np\nq\nq\nq\n.............\nm number of crawling nodes\nb) Page q is transferred to the m number of crawling\nnodes, but all URLs extracted from each q of the\ncrawling nodes are not transferred to other crawling\nnodes (the best scenario for policy B)\nFigure 1: Exchange of the extracted URLs\n3.2\nAssignment of the extracted URLs\nWhen a crawling node extracts the URLs from a given\npage, it has to decide whether to keep the URLs for itself\nor transfer them to other participating crawling nodes for\nfurther fetching of the URLs. Once the URL is assigned to\na particular crawling node, it may be added to the node's\npending queue. Given a retrieved page p, let pr(c\ni\n|p) be\nthe probability that page p is about the city-state pair c\ni\n.\nSuppose that the targeted city-state pairs are given and they\nare distributed over the participating crawling nodes. There\nare mainly two possible policies for the exchange of URLs\nbetween crawling nodes.\nPolicy A: Given the retrieved page p, let c\ni\nbe the\nmost probable city-state pair about p, i.e. arg max\nc\ni\n\nT C\npr(c\ni\n|p). 
We assign each extracted URL from page p to the crawling node Cn_j responsible for c_i.

Policy B: Given the retrieved page p, let {c_p1, . . . , c_pk} ⊆ TC be the set of city-state pairs with pr(c_pi|p) ≠ 0. We assign each extracted URL from page p to EACH crawling node Cn_j responsible for some c_pi, 1 ≤ i ≤ k.

Lemma 2. Let b be the bandwidth cost and let c be the inter-communication cost between crawling nodes. If b > c, then Policy A is more cost effective than Policy B.

Proof: Given an extracted URL q from page p, let m be the number of crawling nodes used by Policy B (crawling nodes which are assigned to download q). Since the costs of Policy A and Policy B are equal when m = 1, we suppose m ≥ 2. Let l be the total number of URLs extracted from q. Let C(A) and C(B) be the sum of the total inter-communication cost plus the bandwidth cost for Policy A and Policy B, respectively. One can easily verify that the cost of downloading q and all URLs extracted from q satisfies C(A) ≤ b + l(c + b), as shown in Figure 1a), and C(B) ≥ mb + lmb, as shown in Figure 1b). Therefore, it follows that C(A) ≤ C(B) since m ≥ 2 and b > c.

The assignment of extracted URLs for each retrieved page in all of the crawling collaboration strategies that we consider next is based on Policy A.

3.3 Hash Based Collaboration

We consider the hash based collaboration, which is the approach taken by most collaborative crawlers, so that this basic approach can be compared against our geographically focused collaboration strategies. The goal of hash based collaboration is to implement a distributed partition of the web over the crawling nodes by computing hash functions over URLs. When a crawling node extracts a URL from the retrieved page, a hash function is computed over the URL, and the URL is assigned to the participating crawling node responsible for the corresponding hash value. Since we use a uniform hash function for our experiments, we will have a considerable data exchange between crawling nodes, because the uniform hash function maps most URLs extracted from a retrieved page to remote crawling nodes.

3.4 Geographically Focused Collaborations

We first divide up CR, the set of participating crawling nodes, into geographically sensitive nodes and general nodes. Even though any combination of geographically sensitive and general crawling nodes is allowed, the architecture of our crawler consists of five geographically sensitive nodes and one general crawling node for our experiments. A geographically sensitive crawling node is responsible for the download of pages pertinent to a subset of the targeted city-state pairs, while a general crawling node is responsible for the download of pages which are not geographically-aware, supporting the other geographically sensitive nodes.

Each collaboration policy considers a particular set of features for the assessment of the geographical scope of a page (whether a page is pertinent to a particular city-state pair or not). Based on this assessment, each extracted URL from the page is assigned to the crawling node that is responsible for the download of pages pertinent to the corresponding city-state pair.

3.4.1 URL Based

The intuition behind the URL based collaboration is that pages containing a targeted city-state pair in their URL address might potentially guide the crawler toward other pages about the city-state pair. More specifically, for each extracted URL from the retrieved page p, we verify whether the city-state pair c_i is found somewhere in the URL address of the extracted URL. If the city-state pair c_i is found, then we assign the corresponding URL to the crawling node which is responsible for the download of pages about c_i. A minimal sketch of this assignment rule is given below.
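The sketch below illustrates the URL based assignment under several assumptions that are not part of the paper: the mapping from city-state pairs to crawling nodes, the simple string normalization, and the fallback of sending URLs with no targeted city to the general node are all hypothetical, and the aliasing and ambiguity handling of Section 3.5 is omitted.

```python
# A minimal sketch of the URL based assignment described above. The node
# mapping, normalization, and fallback to the general node are assumptions
# made for illustration; city-name aliasing/ambiguity (Section 3.5) is ignored.
import re

RESPONSIBLE_NODE = {          # hypothetical partition of targeted city-state pairs
    ("new york", "ny"): "Cn1",
    ("los angeles", "ca"): "Cn2",
    ("houston", "tx"): "Cn4",
}

def normalize(url):
    # Lowercase and turn separators into spaces so "los-angeles" or "los_angeles" match.
    return re.sub(r"[^a-z0-9]+", " ", url.lower())

def assign_url(extracted_url, general_node="Cn0"):
    text = normalize(extracted_url)
    for (city, state), node in RESPONSIBLE_NODE.items():
        if city in text:
            return node
    return general_node   # no targeted city found: leave it to the general crawling node

print(assign_url("http://www.los-angeles-restaurants.com/downtown"))  # -> Cn2
print(assign_url("http://example.org/weather"))                       # -> Cn0
```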
3.4.2 Extended Anchor Text Based

Given link text l, an extended anchor text of l is defined as the set of prefix and suffix tokens of l up to a certain size. It is known that extended anchor text provides valuable information for characterizing the nature of the page pointed to by the link text. Therefore, for the extended anchor text based collaboration, our assumption is that pages associated with an extended anchor text in which a targeted city-state pair c_i is found will lead the crawler toward pages about c_i. More precisely, given a retrieved page p and an extended anchor text l found somewhere in p, we verify whether a city-state pair c_i ∈ TC is found as part of the extended anchor text l. When multiple city-state pairs are found, we choose the city-state pair that is closest to the link text. Finally, we assign the URL associated with l to the crawling node that is responsible for the download of pages about c_i.

3.4.3 Full Content Based

In [15], location references are used to assess the geographical scope of a page. Therefore, for the full content based collaboration, we perform a content analysis of the retrieved page to guide the crawler in its future link expansion. Let pr((c_i, s_i)|p) be the probability that page p is about the city-state pair (c_i, s_i). Given TC and page p, we compute pr((c_i, s_i)|p) for (c_i, s_i) ∈ TC as follows:

    pr((c_i, s_i)|p) = α · #((c_i, s_i), p) + (1 - α) · pr(s_i|c_i) · #(c_i, p)    (1)

where #((c_i, s_i), p) denotes the number of times that the city-state pair (c_i, s_i) is found as part of the content of p, #(c_i, p) denotes the number of times (independent of #((c_i, s_i), p)) that the city reference c_i is found as part of the content of p, and α denotes a weighting factor. For our experiments, α = 0.7 was used.

The probability pr(s_i|c_i) is calculated under two simplifying assumptions: (1) pr(s_i|c_i) depends on the real population size of (c_i, s_i) (e.g., the population of Kansas City, Kansas is 500,000); we obtain the population size for each city from city-data.com (http://www.city-data.com). (2) pr(s_i|c_i) depends on the number of times that the state reference is found (independent of #((c_i, s_i), p)) as part of the content of p. In other words, our assumption for pr(s_i|c_i) can be written as

    pr(s_i|c_i) ≈ β · S(s_i|c_i) + (1 - β) · S~(s_i|p)    (2)

where S(s_i|c_i) is the normalized form of the population size of (c_i, s_i), S~(s_i|p) is the normalized form of the number of appearances of the state reference s_i (independent of #((c_i, s_i), p)) within the content of p, and β denotes a weighting factor. For our experiments, β = 0.5 was used. Therefore, pr((c_i, s_i)|p) is computed as

    pr((c_i, s_i)|p) = α · #((c_i, s_i), p) + (1 - α) · (β · S(s_i|c_i) + (1 - β) · S~(s_i|p)) · #(c_i, p)    (3)

Finally, given a retrieved page p, we assign all extracted URLs from p to the crawling node which is responsible for pages relevant to arg max_{(c_i, s_i) ∈ TC} pr((c_i, s_i)|p). A minimal sketch of this scoring is given below.
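The sketch below implements the score of Eqs. (1)-(3) with the weights α = 0.7 and β = 0.5 stated above. It assumes that the reference counts and the normalized population sizes have already been computed elsewhere; the counting, normalization, and the example values are simplified placeholders rather than the actual pipeline.

```python
# A minimal sketch of the full content based score of Eqs. (1)-(3), with
# alpha = 0.7 and beta = 0.5 as in the text. Counts and normalized population
# sizes are passed in precomputed; the example numbers are illustrative only.
ALPHA, BETA = 0.7, 0.5

def full_content_best_pair(counts_pair, counts_city, counts_state, population, total_state_refs):
    """counts_pair[(c, s)] : #((c, s), p), city-state pair occurrences in page p
    counts_city[c]         : #(c, p), city occurrences independent of the pair count
    counts_state[s]        : state occurrences independent of the pair count
    population[(c, s)]     : normalized population size S(s|c) in [0, 1]
    total_state_refs       : used to normalize counts_state into S~(s|p)"""
    scores = {}
    for (c, s), pair_cnt in counts_pair.items():
        s_tilde = counts_state.get(s, 0) / max(total_state_refs, 1)
        pr_state_given_city = BETA * population[(c, s)] + (1 - BETA) * s_tilde          # Eq. (2)
        scores[(c, s)] = ALPHA * pair_cnt + (1 - ALPHA) * pr_state_given_city * counts_city.get(c, 0)  # Eq. (3)
    return max(scores, key=scores.get) if scores else None

# Hypothetical counts extracted from one page mentioning "Glendale".
best = full_content_best_pair(
    counts_pair={("glendale", "az"): 2, ("glendale", "ca"): 1},
    counts_city={"glendale": 5},
    counts_state={"az": 3, "ca": 1},
    population={("glendale", "az"): 0.4, ("glendale", "ca"): 0.35},
    total_state_refs=4,
)
print(best)   # all URLs extracted from the page go to the node responsible for this pair
```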
3.4.4 Classification Based

Chung et al. [12] show that the classification based collaboration yields good performance for topic-oriented collaborative crawling. Our classification based collaboration for the geographically focused crawling is motivated by their work. In this type of collaboration, the classes for the classifier are the partitions of the targeted city-state pairs. We train our classifier to determine pr(c_i|p), the probability that the retrieved page p is pertinent to the city-state pair c_i. Among the various possible classification methods, we chose the Naive-Bayes classifier [25] due to its simplicity. To obtain training data, pages from the Open Directory Project (ODP, http://www.dmoz.org) were used. For each targeted city-state pair, we download all pages under the corresponding city-state category which, in turn, is a child category of the "REGIONAL" category in the ODP. The number of pages downloaded for each city-state pair varied from 500 to 2000. We also download a set of randomly chosen pages which are not part of any city-state category in the ODP; we download 2000 pages for this purpose. Then, we train our Naive-Bayes classifier using these training data. Our classifier determines whether a page p is pertinent to one of the targeted city-state pairs or is not relevant to any city-state pair at all. Given the retrieved page p, we assign all extracted URLs from p to the crawling node which is responsible for the download of pages pertinent to arg max_{c_i ∈ TC} pr(c_i|p).

3.4.5 IP-Address Based

The IP-address of a web service indicates the geographic location at which the web service is hosted. The IP-address based collaboration exploits this information to control the behavior of the crawler for further downloads. Given a retrieved page p, we first determine the IP-address of the web service from which the crawler downloaded p. With this IP-address, we use an IP-address mapping tool to obtain the corresponding city-state pair, and then we assign all extracted URLs of page p to the crawling node which is responsible for the computed city-state pair. As the IP-address mapping tool, the freely available hostip.info API (http://www.hostip.info) is employed.

3.5 Normalization and Disambiguation of City Names

As indicated in [2, 15], problems of aliasing and ambiguity arise when one wants to map a possible city-state reference candidate to an unambiguous city-state pair. In this section, we describe how we handle these issues.

Aliasing: Different names or abbreviations are often used for the same city. For example, Los Angeles can also be referred to as LA or L.A. Similar to [15], we used the web database of the United States Postal Service (USPS, http://www.usps.gov) to deal with aliasing. The service returns a list of variations of the corresponding city name given the zip code.
Thus, we first\nobtained the list of representative zip codes for each\ncity in the list using the US Zip Code Database product\n, purchased from ZIPWISE\n7\n, and then we obtain\nthe list of possible names and abbreviations for each\ncity from the USPS.\nAmbiguity: When we deal with city names, we have\nto deal with the ambiguity of the city name reference.\nFirst, we can not guarantee whether the possible city\nname reference actually refers to the city name. For\ninstance, New York might refer to New York as city\nname or New York as part of the brand name \"New\nYork Fries\" or New York as state name. Second, a\ncity name can refer to cities in different states. For\nexample, four states, New York, Georgia, Oregon and\n4\nhttp://www.dmoz.org\n5\nhttp://www.hostip.info\n6\nhttp://www.usps.gov\n7\nhttp://www.zipwise.com\nCalifornia, have a city called Albany. For both cases,\nunless we fully analyze the context in which the reference\nwas made, the city name reference might be\ninherently ambiguous. Note that for the full content\nbased collaboration, the issue of ambiguity is already\nhandled through the term pr(s\ni\n|c\ni\n) of the Eq. 2. For\nthe extended anchor text based and the URL based\ncollaborations, we always treat the possible city name\nreference as the city that has the largest population\nsize. For instance, Glendale found in either the URL\naddress of page or the extended anchor text of page\nwould be treated as the city name reference for Glendale\n, AZ.\n8\n.\nEVALUATION MODELS\nTo assess the performance of each crawling collaboration\nstrategy, it is imperative to determine how much geographically\n-aware pages were downloaded for each strategy and\nwhether the downloaded pages are actually pertinent to the\ntargeted geographic locations. Note that while some previous\nworks [2, 15, 18, 5] attempt to define precisely what a\ngeographically-aware page is, determining whether a page is\ngeographically-aware or not remains as an open problem [2,\n18]. For our particular application, we define the notion of\ngeographical awareness of page through geographic entities\n[21]. We refer the address description of a physical organization\nor a person as geographic entity. Since the targeted\ngeographical city-state pairs for our experiments are the top\n100 US cities, a geographic entity in the context of our experiments\nare further simplified as an address information,\nfollowing the standard US address format, for any of the\ntop 100 US cities. In other words, a geographic entity in\nour context is a sequence of Street Number, Street Name,\nCity Name and State Name, found as part of the content\nof page. Next, we present various evaluation measures for\nour crawling strategies based on geographic entities. Ad-ditionally\n, we present traditional measures to quantify the\nperformance of any collaborative crawling. Note that our\nevaluation measures are later used in our experiments.\nGeo-coverage: When a page contain at least one geographic\nentity (i.e. address information), then the\npage is clearly a geographically aware page. Therefore\n, we define the geo-coverage of retrieved pages as\nthe number of retrieved pages with at least one geographic\nentity, pertinent to the targeted geographical\nlocations (e.g., the top US 100 cities) over the total\nnumber of retrieved pages.\nGeo-focus: Each crawling node of the geographically\nfocused collaborative crawler is responsible for a subset\nof the targeted geographic locations. 
For instance, suppose we have two geographically sensitive crawling nodes, Cn_1 and Cn_2, and the targeted city-state pairs {(New York, NY), (Los Angeles, CA)}. Suppose Cn_1 is responsible for crawling pages pertinent to (New York, NY) while Cn_2 is responsible for crawling pages pertinent to (Los Angeles, CA). Therefore, if Cn_1 has downloaded a page about Los Angeles, CA, then this would clearly be a failure of the collaborative crawling approach. To formalize this notion, we define the geo-focus of a crawling node as the number of retrieved pages that contain at least one geographic entity of the assigned city-state pairs of the crawling node.

(Footnote 8: Note that this simple approach does only minimal harm to the overall crawling. For instance, in many cases, even the incorrect assessment of the state name reference New York instead of the correct city name reference New York would result in the assignment of all extracted URLs to the correct crawling node.)

Figure 2: An example of geo-centrality measure. (The figure shows a small link graph of eight pages rooted at page 1, with each page marked as either containing or not containing a geo-entity.)

Geo-centrality: One of the most frequently used and fundamental measures for the analysis of network structures is the centrality measure, which addresses the question of how central a node is with respect to other nodes in the network. The most commonly used ones are degree centrality, eigenvector centrality, closeness centrality and betweenness centrality [3]. Motivated by closeness centrality and betweenness centrality, Lee et al. [21] define novel centrality measures to assess how central a node is with respect to geographically-aware nodes (pages with geographic entities). A geodesic path is the shortest path, in terms of the number of edges traversed, between a specified pair of nodes. Geo-centrality measures are based on the geodesic paths from an arbitrary node to a geographically aware node.

Given two arbitrary nodes p_i, p_j, let GD(p_i, p_j) be the geodesic-path based distance between p_i and p_j (the length of the geodesic path). Let w_GD(p_i,p_j) = 1/m^GD(p_i,p_j) for some m, and define φ(p_i, p_j) as

    φ(p_i, p_j) = w_GD(p_i,p_j) if p_j is a geographically aware node, and 0 otherwise.

For any node p_i, let Λ_k(p_i) = {p_j | GD(p_i, p_j) < k} be the set of nodes whose geodesic distance from p_i is less than k. Given p_i, let GCt_k(p_i) be defined as

    GCt_k(p_i) = Σ_{p_j ∈ Λ_k(p_i)} φ(p_i, p_j)

Intuitively, the geo-centrality measure captures how many links have to be followed by a user who starts navigating from page p_i in order to reach geographically-aware pages; moreover, w_GD(p_i,p_j) is used to penalize each link that has to be followed.

Example 3. Consider the graph structure of Figure 2. Suppose that the weights are given as w_0 = 1, w_1 = 0.1, w_2 = 0.01, i.e. each time a user navigates a link, we penalize it by a factor of 0.1. Given the root node 1 containing at least one geo-entity, we have Λ_2(node 1) = {1, . . . , 8}. Therefore, we have w_GD(node 1,node 1) = 1, w_GD(node 1,node 2) = 0.1, w_GD(node 1,node 3) = 0.1, w_GD(node 1,node 4) = 0.1, w_GD(node 1,node 5) = 0.01, w_GD(node 1,node 6) = 0.01, w_GD(node 1,node 7) = 0.01, w_GD(node 1,node 8) = 0.01. Finally, GCt_k(node 1) = 1.34. A minimal sketch of this computation is given below.
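The sketch below computes GCt_k by breadth-first search over a link graph, using the weights w_d = 1/10^d of Example 3 (m = 10). The small graph mirrors the structure implied by Figure 2, with k chosen so that pages up to two links away are included, and, to match the stated sum of 1.34, all eight pages are treated as geographically aware; the graph itself is an assumed reconstruction, not taken from the figure.

```python
# A minimal sketch of GCt_k(p) via breadth-first search. Weights 1/m**d penalize
# each link followed; only geographically aware pages contribute. The graph and
# the geo-aware set below are illustrative assumptions that reproduce Example 3.
from collections import deque

def geo_centrality(graph, geo_aware, start, k, m=10):
    """graph: node -> list of linked nodes; geo_aware: set of nodes with geo-entities."""
    dist = {start: 0}
    queue = deque([start])
    total = 0.0
    while queue:
        node = queue.popleft()
        d = dist[node]
        if node in geo_aware:
            total += 1.0 / (m ** d)          # w_GD(start, node) = 1/m**d
        if d + 1 < k:                        # only expand while geodesic distance stays below k
            for nxt in graph.get(node, []):
                if nxt not in dist:
                    dist[nxt] = d + 1
                    queue.append(nxt)
    return total

graph = {1: [2, 3, 4], 2: [5, 6], 3: [7], 4: [8]}   # root 1, three children, four grandchildren
print(round(geo_centrality(graph, geo_aware=set(range(1, 9)), start=1, k=3), 2))  # 1 + 3*0.1 + 4*0.01 = 1.34
```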
In the collaborative crawling, it is possible that\ndifferent crawling nodes download the same page multiple\ntimes. Multiple downloads of the same page are\nclearly undesirable. Therefore, the overlap of retrieved\npages is defined as\nN -I\nN\nwhere N denotes the total\nnumber of downloaded pages by the overall crawler\nand I denotes the number of unique downloaded pages\nby the overall crawler. Note that the hash based collaboration\napproach does not have any overlap.\nDiversity: In a crawling, it is possible that the crawling\nis biased toward a certain domain name. For instance\n, a crawler might find a crawler trap which is\nan infinite loop within the web that dynamically produces\nnew pages trapping the crawler within this loop\n[6]. To formalize this notion, we define the diversity\nas\nS\nN\nwhere S denotes the number of unique domain\nnames of downloaded pages by the overall crawler and\nN denotes the total number of downloaded pages by\nthe overall crawler.\nCommunication overhead: In a collaborative crawling\n, the participating crawling nodes need to exchange\nURLs to coordinate the overall crawling work.\nTo\nquantify how much communication is required for this\nexchange, the communication overhead is defined in\nterms of the exchanged URLs per downloaded page\n[10].\nCASE STUDY\nIn this section, we present the results of experiments that\nwe conducted to study various aspects of the proposed geographically\nfocused collaborative crawling strategies.\n5.1\nExperiment Description\nWe built an geographically focused collaborative crawler\nthat consists of one general crawling node, Cn\n0\nand five\ngeographically sensitive crawling nodes, {Cn\n1\n, . . . , Cn\n5\n}, as\ndescribed in Section 3.4. The targeted city-state pairs were\nthe top 100 US cities by the population size, whose list was\nobtained from the city-data.com\n9\n.\nWe partition the targeted city-state pairs according to\ntheir time zone to assign these to the geographically sensitive\ncrawling nodes as shown in Table 1. In other words, we\nhave the following architecture design as illustrated in Figure\n3. Cn\n0\nis general crawler targeting pages which are not\ngeographically-aware. Cn\n1\ntargets the Eastern time zone\nwith 33 cities. Cn\n2\ntargets the Pacific time zone with 22\ncities. Cn\n3\ntargets the Mountain time zone with 10 cities.\n9\nwww.city-data.com\n292\nTime Zone\nState Name\nCities\nCentral\nAL\nBirmingham,Montgomery, Mobile\nAlaska\nAK\nAnchorage\nMountain\nAR\nPhoenix, Tucson, Mesa,\nGlendale, Scottsdale\nPacific\nCA\nLos Angeles , San Diego , San Jose\nSan Francisco, Long Beach, Fresno\nOakland, Santa Ana, Anaheim\nBakersfield, Stockton, Fremont\nGlendale,Riverside , Modesto\nSacramento, Huntington Beach\nMountain\nCO\nDenver, Colorado Springs, Aurora\nEastern\nDC\nWashington\nEastern\nFL\nHialeah\nEastern\nGA\nAtlanta, Augusta-Richmond County\nHawaii\nHI\nHonolulu\nMountain\nID\nBoise\nCentral\nIL\nChicago\nCentral\nIN\nIndianapolis,Fort Wayne\nCentral\nIA\nDes Moines\nCentral\nKA\nWichita\nEastern\nKE\nLexington-Fayette, Louisville\nCentral\nLO\nNew Orleans, Baton Rouge\nShreveport\nEastern\nMD\nBaltimore\nEastern\nMA\nBoston\nEastern\nMI\nDetroit, Grand Rapids\nCentral\nMN\nMinneapolis, St. Paul\nCentral\nMO\nKansas City , St. 
Louis\nCentral\nNE\nOmaha , Lincoln\nPacific\nNV\nLas Vegas\nEastern\nNJ\nNewark , Jersey City\nMountain\nNM\nAlbuquerque\nEastern\nNY\nNew York, Buffalo,Rochester,Yonkers\nEastern\nNC\nCharlotte, Raleigh,Greensboro\nDurham , Winston-Salem\nEastern\nOH\nColumbus , Cleveland\nCincinnati , Toledo , Akron\nCentral\nOK\nOklahoma City, Tulsa\nPacific\nOR\nPortland\nEastern\nPA\nPhiladelphia,Pittsburgh\nCentral\nTX\nHouston,Dallas,San Antonio,Austin\nEl Paso,Fort Worth\nArlington, Corpus Christi\nPlano , Garland ,Lubbock , Irving\nEastern\nVI\nVirginia Beach , Norfolk\nChesapeake, Richmond , Arlington\nPacific\nWA\nSeattle , Spokane , Tacoma\nCentral\nWI\nMilwaukee , Madison\nTable 1: Top 100 US cities and their time zone\nWEB\nCn5: Hawaii & Alaska\nCn0: General\nCn1: Eastern (33 cities)\nCn2: Pacific (22 cities)\nCn4: Central (33 cities)\nCn3: Mountain (10 cities)\nFigure 3: Architecture of our crawler\nCn\n4\ntargets the Central time zone with 33 cities. Finally,\nCn\n5\ntargets the Hawaii-Aleutian and Alaska time zones with\ntwo cities.\nWe developed our collaborative crawler by extending the\nopen source crawler, larbin\n10\nwritten in C++. Each crawling\nnode was to dig each domain name up to the five levels\nof depth. The crawling nodes were deployed over 2 servers,\neach of them with 3.2 GHz dual P4 processors, 1 GB of\nRAM, and 600 GB of disk space. We ran our crawler for the\nperiod of approximately 2 weeks to download approximately\n12.8 million pages for each crawling strategy as shown in\nTable 2. For each crawling process, the usable bandwidth\nwas limited to 3.2 mbps, so the total maximum bandwidth\nused by our crawler was 19.2 mbps. For each crawling, we\nused the category \"Top: Regional: North America: United\nStates\" of the ODP as the seed page of crawling. The IP\nmapping tool used in our experiments did not return the\n10\nhttp://larbin.sourceforge.net/index-eng.html\nType of collaboration\nDownload size\nHash Based\n12.872 m\nURL Based\n12.872 m\nExtended Anchor Text Based\n12.820 m\nSimple Content Analysis Based\n12.878 m\nClassification Based\n12.874 m\nIP Address Based\n12.874 m\nTable 2: Number of downloaded pages\ncorresponding city-state pairs for Alaska and Hawaii, so we\nignored Alaska and Hawaii for our IP-address based collaborative\ncrawling.\n5.2\nDiscussion\n5.2.1\nQuality Issue\nAs the first step toward the performance evaluation of our\ncrawling strategies, we built an extractor for the extraction\nof geographic entities (addresses) from downloaded pages.\nOur extractor, being a gazetteer based, extracted those geographic\nentities using a dictionary of all possible city name\nreferences for the top 100 US cities augmented by a list of all\npossible street abbreviations (e.g., street, avenue, av., blvd)\nand other pattern matching heuristics. Each extracted geographic\nentity candidate was further matched against the\ndatabase of possible street names for each city that we built\nfrom the 2004 TIGER/Line files\n11\n. Our extractor was shown\nto yield 96% of accuracy out of 500 randomly chosen geographic\nentities.\nWe first analyze the geo-coverage of each crawling strategy\nas shown in Table 3. 
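Before turning to the numbers, the kind of gazetteer-based matching just described can be pictured with a rough sketch. The code below is not the authors' extractor: the tiny gazetteer, the street-type abbreviations, the address pattern, and the street lookup are hypothetical placeholders standing in for the USPS city-name variants and the 2004 TIGER/Line street database used in the paper.

```python
# A rough sketch of a gazetteer-style geo-entity (address) matcher: it looks for
# "<number> <street name> <street type>, <city>, <state>" and keeps a candidate
# only if the street name is known for that city. All dictionaries and patterns
# here are illustrative placeholders, not the actual implementation.
import re

CITY_GAZETTEER = {"houston": ("houston", "tx"), "new york": ("new york", "ny")}
STREET_TYPES = r"(?:street|st\.?|avenue|ave\.?|av\.?|blvd\.?|drive|dr\.?)"
KNOWN_STREETS = {("houston", "tx"): {"milam"}, ("new york", "ny"): {"broadway"}}

ADDRESS_RE = re.compile(
    r"(\d+)\s+([a-z][a-z ]+?)\s+" + STREET_TYPES + r"\s*,?\s*([a-z ]+?)\s*,\s*([a-z]{2})\b",
    re.IGNORECASE,
)

def extract_geo_entities(text):
    entities = []
    for number, street, city, state in ADDRESS_RE.findall(text):
        key = CITY_GAZETTEER.get(city.strip().lower())
        if key and key[1] == state.lower() and street.strip().lower() in KNOWN_STREETS.get(key, set()):
            entities.append((number, street.strip(), key[0], key[1]))
    return entities

print(extract_geo_entities("Visit us at 100 Milam Street, Houston, TX 77002."))
```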
The top performers for the geo-coverage\nare the URL based and extended anchor text based\ncollaborative strategies whose portion of pages downloaded\nwith geographic entities was 7.25% and 7.88%, respectively,\nstrongly suggesting that URL address of page and extended\nanchor text of link are important features to be considered\nfor the discovery of geographically-aware pages. The next\nbest performer with respect to geo-coverage was the full content\nbased collaborative strategy achieving geo-coverage of\n4.89%. Finally, the worst performers in the group of geographically\nfocused collaborative policies were the classification\nbased and the IP-address based strategies. The poor\nperformance of the IP-address based collaborative policy\nshows that the actual physical location of web service is not\nnecessarily associated with the geographical scopes of pages\nserved by web service. The extremely poor performance of\nthe classification based crawler is surprising since this kind\nof collaboration strategy shows to achieve good performance\nfor the topic-oriented crawling [12]. Finally, the worst performance\nis observed with the URL-hash based collaborative\npolicy as expected whose portion of pages with geographical\nentities out of all retrieved pages was less than 1%. In\nconclusion, the usage of even simple but intuitively sounding\ngeographically focused collaborative policies can improve\nthe performance of standard collaborative crawling by a factor\nof 3 to 8 for the task of collecting geographically-aware\npages.\nTo check whether each geographically sensitive crawling\nnode is actually downloading pages corresponding to their\nassigned city-state pairs, we used the geo-focus as shown in\n11\nhttp://www.census.gov/geo/www/tiger/tiger2004se/\ntgr2004se.html\n293\nType of collaboration\nCn0\nCn1\nCn2\nCn3\nCn4\nCn5\nAverage\nAverage\n(without Cn0)\nURL-Hash Based\n1.15%\n0.80%\n0.77%\n0.75%\n0.82%\n0.86%\n0.86%\n0.86%\nURL Based\n3.04%\n7.39%\n9.89%\n9.37%\n7.30%\n13.10%\n7.25%\n8.63%\nExtended Anchor Text Based\n5.29%\n6.73%\n9.78%\n9.99%\n6.01%\n12.24%\n7.88%\n8.58%\nFull Content Based\n1.11%\n3.92%\n5.79%\n6.87%\n3.24%\n8.51%\n4.89%\n5.71%\nClassification Based\n0.49%\n1.23%\n1.20%\n1.27%\n1.22%\n1.10%\n1.09%\n1.21%\nIP-Address Based\n0.81%\n2.02%\n1.43%\n2.59%\n2.74%\n0.00%\n1.71%\n2.20%\nTable 3: Geo-coverage of crawling strategies\nType of collaboration\nCn1\nCn2\nCn3\nCn4\nCn5\nAverage\nURL based\n91.7%\n89.0%\n82.8%\n94.3%\n97.6%\n91.1%\nExtended anchor\n82.0%\n90.5%\n79.6%\n76.8%\n92.3%\n84.2%\ntext based\nFull content based\n75.2%\n77.4%\n75.1%\n63.5%\n84.9%\n75.2%\nClassification based\n43.5%\n32.6%\n5.5%\n25.8%\n2.9%\n22.1%\nIP-Address based\n59.6%\n63.6%\n55.6%\n80.0%\n0.0%\n51.8%\nTable 4: Geo-focus of crawling strategies\nType of collaboration\nCn0\nCn1\nCn2\nCn3\nCn4\nCn5\nAverage\nURL-hash based\n0.45\n0.47\n0.46\n0.49\n0.49\n0.49\n0.35\nURL based\n0.39\n0.2\n0.18\n0.16\n0.24\n0.07\n0.18\nExtended anchor\n0.39\n0.31\n0.22\n0.13\n0.32\n0.05\n0.16\ntext based\nFull content based\n0.49\n0.35\n0.31\n0.29\n0.39\n0.14\n0.19\nClassification based\n0.52\n0.45\n0.45\n0.46\n0.46\n0.45\n0.26\nIP-Address based\n0.46\n0.25\n0.31\n0.19\n0.32\n0.00\n0.27\nTable 5: Number of unique geographic entities over\nthe total number of geographic entities\nTable 4. Once again, the URL-based and the extended anchor\ntext based strategies show to perform well with respect\nto this particular measure achieving in average above 85%\nof geo-focus. 
Once again, their relatively high performance\nstrongly suggest that the city name reference within a URL\naddress of page or an extended anchor text is a good feature\nto be considered for the determination of geographical\nscope of page. The geo-focus value of 75.2% for the content\nbased collaborative strategy also suggests that the locality\nphenomena which occurs with the topic of page also occurs\nwithin the geographical dimension as well. It is reported,\n[13], that pages tend to reference (point to) other pages on\nthe same general topic. The relatively high geo-focus value\nfor the content based collaborative strategy indicates that\npages on the similar geographical scope tend to reference\neach other. The IP-address based policy achieves 51.7% of\ngeo-focus while the classification based policy only achieves\n22.7% of geo-focus. The extremely poor geo-focus of the\nclassification based policy seems to be due to the failure of\nthe classifier for the determination of the correct geographical\nscope of page.\nIn the geographically focused crawling, it is possible that\npages are biased toward a certain geographic locations. For\ninstance, when we download pages on Las Vegas, NV, it is\npossible that we have downloaded a large number of pages\nwhich are focused on a few number of casino hotels in Las\nVegas, NV which are highly referenced to each other. In\nthis case, quality of the downloaded pages would not be that\ngood since most of pages would contain a large number of\nvery similar geographic entities. To formalize the notion, we\ndepict the ratio between the number of unique geographic\nentities and the total number of geographic entities from\nthe retrieved pages as shown in Table 5. This ratio verifies\nwhether each crawling policy is covering sufficient number of\npages whose geographical scope is different. It is interesting\nType of collaboration\nGeo-centrality\nHash based\n0.0222\nURL based\n0.1754\nExtended anchor text based\n0.1519\nFull content based\n0.0994\nClassification based\n0.0273\nIP-address based\n0.0380\nTable 6: Geo-centrality of crawling strategies\nType of collaboration\nOverlap\nHash Based\nNone\nURL Based\nNone\nExtended Anchor Text Based\n0.08461\nFull Content Based\n0.173239\nClassification Based\n0.34599\nIP-address based\nNone\nTable 7: Overlap of crawling strategies\nto note that those geographically focused collaborative policies\n, which show to have good performance relative to the\nprevious measures, such as the URL based, the extended anchor\ntext based and the full content based strategies tend to\ndiscover pages with less diverse geographical scope. On the\nother hand, the less performed crawling strategies such as\nthe IP-based, the classification based, the URL-hash based\nstrategies are shown to collect pages with more diverse geographical\nscope.\nWe finally study each crawling strategy in terms of the\ngeo-centrality measure as shown in Table 6. One may observe\nfrom Table 6 that the geo-centrality value provides an\naccurate view on the quality of the downloaded geo graphically-aware\npages for each crawling strategy since the geo-centrality\nvalue for each crawling strategy follows what we have obtained\nwith respect to geo-coverage and geo-precision. 
URL\nbased and extended anchor text based strategies show to\nhave the best geo-centrality values with 0.1754 and 0.1519\nrespectively, followed by the full content based strategy with\n0.0994, followed by the IP based strategy with 0.0380, and\nfinally the hash based strategy and the classification based\nstrategy show to have similarly low geo-centrality values.\n5.2.2\nPerformance Issue\nIn Table 7, we first show the overlap measure which reflects\nthe number of duplicated pages out of the downloaded\npages. Note that the hash based policy does not have any\nduplicated page since its page assignment is completely independent\nof other page assignment. For the same reason,\nthe overlap for the URL based and the IP based strategies\nare none. The overlap of the extended anchor text\n294\nType of collaboration\nDiversity\nHash Based\n0.0814\nURL Based\n0.0405\nExtended Anchor Text Based\n0.0674\nFull Content Based\n0.0688\nClassification Based\n0.0564\nIP-address based\n0.3887\nTable 8: Diversity of crawling strategies\nbased is 0.08461 indicating that the extended anchor text of\npage computes the geographically scope of the corresponding\nURL in an almost unique manner.\nIn other words,\nthere is low probability that two completely different city\nname references are found within a URL address. Therefore\n, this would be another reason why the extended anchor\ntext would be a good feature to be used for the partition of\nthe web within the geographical context. The overlap of the\nfull content based and the classification based strategies are\nrelatively high with 0.173239 and 0.34599 respectively.\nIn Table 8, we present the diversity of the downloaded\npages. The diversity values of geographically focused collaborative\ncrawling strategies suggest that most of the geographically\nfocused collaborative crawling strategies tend to\nfavor those pages which are found grouped under the same\ndomain names because of their crawling method. Especially,\nthe relatively low diversity value of the URL based strongly\nemphasizes this tendency. Certainly, this matches with the\nintuition since a page like \"http://www.houston-guide.com\"\nwill eventually lead toward the download of its child page\n\"http://www.houston-guide.com/guide/arts/framearts.html\"\nwhich shares the same domain.\nIn Table 9, we present the communication-overhead of\neach crawling strategy.\nCho and Garcia-Molina [10] report\nthat the communication overhead of the Hash-Based\nwith two processors is well above five. The communication-overhead\nof the Hash-based policy that we have follows with\nwhat they have obtained.\nThe communication overhead\nof geographically focused collaborative policies is relatively\nhigh due to the intensive exchange of URLs between crawling\nnodes.\nIn Table 10, we summarize the relative merits of the proposed\ngeographically focused collaborative crawling strategies\n. In the Table, \"Good\" means that the strategy is expected\nto perform relatively well for the measure, \"Not Bad\"\nmeans that the strategy is expected to perform relatively acceptable\nfor that particular measure, and \"Bad\" means that\nit may perform worse compared to most of other collaboration\nstrategies.\n5.3\nGeographic Locality\nMany of the potential benefits of topic-oriented collaborative\ncrawling derive from the assumption of topic locality,\nthat pages tend to reference pages on the same topic [12,\n13]. 
For instance, a classifier is used to determine whether the child page is on the same topic as the parent page and then to guide the overall crawling [12]. Similarly, for geographically focused collaborative crawling strategies we make the assumption of geographic locality: pages tend to reference pages on the same geographic location. Therefore, the performance of a geographically focused collaborative crawling strategy is highly dependent on how well it exploits geographic locality, that is, whether the corresponding strategy is based on adequate features for determining the geographical similarity of two pages which are possibly linked.

Table 9: Communication-overhead
Type of collaboration          Communication overhead
URL-hash based                 13.89
URL based                      25.72
Extended anchor text based     61.87
Full content text based        46.69
Classification based           58.38
IP-address based               0.15

We empirically study to what extent the idea of geographic locality holds. Recall that given the list of city-state pairs G = {~c_1, . . . , ~c_k} and a geographically focused crawling collaboration strategy (e.g., URL based collaboration), Pr(~c_i | p_j) is the probability that page p_j is pertinent to city-state pair ~c_i according to that particular strategy. Let gs(p, q), the geographic similarity between pages p and q, be

gs(p, q) = 1 if argmax_{~c_i ∈ G} Pr(~c_i | p) = argmax_{~c_j ∈ G} Pr(~c_j | q), and gs(p, q) = 0 otherwise.

In other words, our geographical similarity determines whether two pages are pertinent to the same city-state pair. Given σ, the set of pages retrieved by the considered crawling strategy, let δ(σ) and ~δ(σ) be

δ(σ) = |{(p_i, p_j) | p_i, p_j linked and gs(p_i, p_j) = 1}| / |{(p_i, p_j) | p_i, p_j linked}|

~δ(σ) = |{(p_i, p_j) | p_i, p_j not linked and gs(p_i, p_j) = 1}| / |{(p_i, p_j) | p_i, p_j not linked}|

Note that δ(σ) corresponds to the probability that a pair of linked pages, chosen uniformly at random, is pertinent to the same city-state pair under the considered collaboration strategy, while ~δ(σ) corresponds to the same probability for a pair of unlinked pages. Therefore, if geographic locality occurs, we would expect a high δ(σ) value compared to that of ~δ(σ). We selected the URL based, the classification based, and the full content based collaboration strategies, and calculated both δ(σ) and ~δ(σ) for each of them. Table 11 shows the results of this computation. One may observe from Table 11 that pages that share the same city-state pair in their URL address have a high likelihood of being linked, pages that share the same city-state pair in their content have some likelihood of being linked, and, finally, pages which are classified as sharing the same city-state pair are less likely to be linked. We may conclude the following:
The geographical similarity of two web pages affects the likelihood of their being referenced.
In other words, geographic\nlocality, that pages tend to reference pages\non the same geographic location, clearly occurs on the\nweb.\nA geographically focused collaboration crawling strategy\nwhich properly explores the adequate features for\ndetermining the likelihood of two pages being in the\nsame geographical scope would expect to perform well\nfor the geographically focused crawling.\n295\nType of collaboration\nGeo-coverage\nGeo-Focus\nGeo-Connectivity\nOverlap\nDiversity\nCommunication\nURL-Hash Based\nBad\nBad\nBad\nGood\nGood\nGood\nURL Based\nGood\nGood\nGood\nGood\nBad\nBad\nExtended Anchor\nGood\nGood\nGood\nGood\nNot Bad\nBad\nText Based\nFull Content Based\nNot Bad\nNot Bad\nNot Bad\nNot Bad\nNot Bad\nBad\nClassification Based\nBad\nBad\nBad\nBad\nNot Bad\nBad\nIP-Address\nBad\nBad\nBad\nGood\nBad\nGood\nTable 10: Comparison of geographically focused collaborative crawling strategies\nType of collaboration\n()\n~\n()\nURL based\n0.41559\n0.02582\nclassification based\n0.044495\n0.008923\nfull content based\n0.26325\n0.01157\nTable 11: Geographic Locality\nCONCLUSION\nIn this paper, we studied the problem of geographically\nfocused collaborative crawling by proposing several collaborative\ncrawling strategies for this particular type of crawling.\nWe also proposed various evaluation criteria to measure the\nrelative merits of each crawling strategy while empirically\nstudying the proposed crawling strategies with the download\nof real web data. We conclude that the URL based and\nthe extended anchor text based crawling strategies have the\nbest overall performance. Finally, we empirically showed\ngeographic locality, that pages tend to reference pages on\nthe same geographical scope. For the future research, it\nwould be interesting to incorporate more sophisticated features\n(e.g., based on DOM structures) to the proposed crawling\nstrategies.\nACKNOWLEDGMENT\nWe would like to thank Genieknows.com for allowing us\nto access to its hardware, storage, and bandwidth resources\nfor our experimental studies.\n\nREFERENCES\n[1] C. C. Aggarwal, F. Al-Garawi, and P. S. Yu.\nIntelligent crawling on the world wide web with\narbitrary predicates. In WWW, pages 96105, 2001.\n[2] E. Amitay, N. Har'El, R. Sivan, and A. Soffer.\nWeb-a-where: geotagging web content. In SIGIR,\npages 273280, 2004.\n[3] S. Borgatti. Centrality and network flow. Social\nNetworks, 27(1):5571, 2005.\n[4] P. D. Bra, Y. K. Geert-Jan Houben, and R. Post.\nInformation retrieval in distributed hypertexts. In\nRIAO, pages 481491, 1994.\n[5] O. Buyukkokten, J. Cho, H. Garcia-Molina,\nL. Gravano, and N. Shivakumar. Exploiting\ngeographical location information of web pages. In\nWebDB (Informal Proceedings), pages 9196, 1999.\n[6] S. Chakrabarti. Mining the Web. Morgan Kaufmann\nPublishers, 2003.\n[7] S. Chakrabarti, K. Punera, and M. Subramanyam.\nAccelerated focused crawling through online relevance\nfeedback. In WWW, pages 148159, 2002.\n[8] S. Chakrabarti, M. van den Berg, and B. Dom.\nFocused crawling: A new approach to topic-specific\nweb resource discovery. Computer Networks,\n31(11-16):16231640, 1999.\n[9] J. Cho. Crawling the Web: Discovery and\nMaintenance of Large-Scale Web Data. PhD thesis,\nStanford, 2001.\n[10] J. Cho and H. Garcia-Molina. Parallel crawlers. In\nWWW, pages 124135, 2002.\n[11] J. Cho, H. Garcia-Molina, and L. Page. Efficient\ncrawling through url ordering. Computer Networks,\n30(1-7):161172, 1998.\n[12] C. Chung and C. L. A. Clarke. Topic-oriented\ncollaborative crawling. 
In CIKM, pages 3442, 2002.\n[13] B. D. Davison. Topical locality in the web. In SIGIR,\npages 272279, 2000.\n[14] M. Diligenti, F. Coetzee, S. Lawrence, C. L. Giles, and\nM. Gori. Focused crawling using context graphs. In\nVLDB, pages 527534, 2000.\n[15] J. Ding, L. Gravano, and N. Shivakumar. Computing\ngeographical scopes of web resources. In VLDB, pages\n545556, 2000.\n[16] J. Edwards, K. S. McCurley, and J. A. Tomlin. An\nadaptive model for optimizing performance of an\nincremental web crawler. In WWW, pages 106113,\n2001.\n[17] J. Exposto, J. Macedo, A. Pina, A. Alves, and\nJ. Rufino. Geographical partition for distributed web\ncrawling. In GIR, pages 5560, 2005.\n[18] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein.\nCategorizing web queries according to geographical\nlocality. In CIKM, pages 325333, 2003.\n[19] A. Heydon and M. Najork. Mercator: A scalable,\nextensible web crawler. World Wide Web,\n2(4):219229, 1999.\n[20] J. M. Kleinberg. Authoritative sources in a\nhyperlinked environment. J. ACM, 46(5):604632,\n1999.\n[21] H. C. Lee and R. Miller. Bringing geographical order\nto the web. private communication, 2005.\n[22] A. Markowetz, Y.-Y. Chen, T. Suel, X. Long, and\nB. Seeger. Design and implementation of a geographic\nsearch engine. In WebDB, pages 1924, 2005.\n[23] A. McCallum, K. Nigam, J. Rennie, and K. Seymore.\nA machine learning approach to building\ndomain-specific search engines. In IJCAI, pages\n662667, 1999.\n[24] F. Menczer, G. Pant, P. Srinivasan, and M. E. Ruiz.\nEvaluating topic-driven web crawlers. In SIGIR, pages\n241249, 2001.\n[25] T. Mitchell. Machine Learning. McGraw Hill, 1997.\n296\n", "keywords": "geographical nodes;crawling strategies;Collaborative crawler;Evaluation criteria;URL based;Geographic Locality;Normalization and Disambiguation of City Names;Search engine;Geographically focused crawling;Scalability;Geographic locality;Collaborative crawling;Anchor text;collaborative crawling;Problems of aliasing and ambiguity;Search localization;IP address based;Focused crawler;Geo-focus;geographic entities;Full Content based;Extracted URL;pattern matching;Crawling strategies;Geo-coverage;Hash based collaboration;geographically focused crawling;Quality issue"} {"name": "98", "title": "GraalBench: A 3D Graphics Benchmark Suite for Mobile Phones", "abstract": "In this paper we consider implementations of embedded 3D graphics and provide evidence indicating that 3D benchmarks employed for desktop computers are not suitable for mobile environments. Consequently, we present GraalBench, a set of 3D graphics workloads representative for contemporary and emerging mobile devices . In addition, we present detailed simulation results for a typical rasterization pipeline. The results show that the proposed benchmarks use only a part of the resources offered by current 3D graphics libraries. For instance, while each benchmark uses the texturing unit for more than 70% of the generated fragments, the alpha unit is employed for less than 13% of the fragments. The Fog unit was used for 84% of the fragments by one benchmark, but the other benchmarks did not use it at all. Our experiments on the proposed suite suggest that the texturing, depth and blending units should be implemented in hardware, while, for instance, the dithering unit may be omitted from a hardware implementation. 
Finally, we discuss the architectural implications of the obtained results for hardware implementations.", "fulltext": "INTRODUCTION\nIn recent years, mobile computing devices have been used for a\nbroader spectrum of applications than mobile telephony or personal\ndigital assistance. Several companies expect that 3D graphics applications\nwill become an important workload of wireless devices.\nFor example, according to [10], the number of users of interactive\n3D graphics applications (in particular games) is expected to increase\ndrastically in the future: it is predicted that the global wireless\ngames market will grow to 4 billion dollars in 2006. Because\ncurrent wireless devices do not have sufficient computational power\nto support 3D graphics in real time and because present accelerators\nconsume too much power, several companies and universities have\nstarted to develop a low-power 3D graphics accelerator. However,\nto the best of our knowledge, there is no publicly available benchmark\nsuite that can be used to guide the architectural exploration of\nsuch devices.\nThis paper presents GraalBench, a 3D graphics benchmark suite\nsuitable for 3D graphics on low-power, mobile systems, in particular\nmobile phones. These benchmarks were collected to facilitate\nour studies on low-power 3D graphics accelerators in the Graal\n(GRAphics AcceLerator) project [5]. It includes several games as\nwell as virtual reality applications such as 3D museum guides. Applications\nwere selected on the basis of several criteria. For example\n, CAD/CAM applications, such as contained in the Viewperf\npackage [18], were excluded because it is unlikely that they will be\noffered on mobile devices. Other characteristics we considered are\nresolution and polygon count.\nA second goal of this paper is to provide a detailed quantitative\nworkload characterization of the collected benchmarks. For each\nrasterization unit, we determine if it is used by the benchmark, and\ncollect several statistics such as the number of fragments that bypass\nthe unit, fragments that are processed by the unit and pass the\ntest, and fragments that are processed but fail the test. Such statistics\ncan be used to guide the development of mobile 3D graphics\n1\narchitectures. For example, a unit that is rarely used might not be\nsupported by a low-power accelerator or it might be implemented\nusing less resources. Furthermore, if many fragments are discarded\nbefore the final tests, the pixel pipeline of the last stages might be\nnarrower than the width of earlier stages.\nThis paper is organized as follows. Previous work on 3D graphics\nbenchmarking is described in Section 2. In this section we also\ngive reasons why current 3D graphics benchmarks are not appropriate\nfor mobile environments. Section 3 first explains how the\nbenchmarks were obtained, describes our tracing environment, the\nsimulator we used to collect the statistics and, after that, describes\nthe components of the proposed benchmark suite and presents some\ngeneral characteristics of the workloads. Section 4 provides a workload\ncharacterization of the benchmarks and discusses architectural\nimplications. Conclusions and directions for future work are given\nin Section 5.\nRELATED WORK\nTo the best of our knowledge, 3D graphics benchmarks specifi-cally\ntargeted at low-power architectures have not been proposed.\nFurthermore, existing benchmarks cannot be considered to be suited\nfor embedded 3D graphics architectures. 
For example, consider\nSPEC's Viewperf [18], a well-known benchmark suite used to evaluate\n3D graphics accelerator cards employed in desktop computers.\nThese benchmarks are unsuitable for low-power graphics because\nof the following:\nThe Viewperf benchmarks are designed for high-resolution\noutput devices, but the displays of current wireless systems\nhave a limited resolution. Specifically, by default the Viewperf\npackage is running at resolutions above SVGA (\n800\n600 pixels), while common display resolutions for mobile\nphones are QCIF (\n176 144) and QVGA (320 240).\nThe Viewperf benchmarks use a large number of polygons in\norder to obtain high picture quality (most benchmarks have\nmore than 20,000 triangles per frame [11]). Translated to\na mobile platform, most rendered polygons will be smaller\nthan one pixel so their contribution to the generated images\nwill be small or even invisible. Specifically, the polygon\ncount of the Viewperf benchmarks DRV, DX, ProCDRS, and\nMedMCAD is too high for mobile devices.\nSome benchmarks of Viewperf are CAD/CAM applications\nand use wire-frame rendering modes. It is unlikely that such\napplications will be offered on mobile platforms.\nExcept Viewperf, there are no publicly-available, portable 3D graphics\nbenchmark suites. Although there are several benchmarking\nsuites [3, 4] based on the DirectX API, they are not suitable for our\nstudy since DirectX implementations are available only on Windows\nsystems.\nThere have been several studies related to 3D graphics workload\ncharacterization (e.g., [11, 6]). Most related to our investigation is\nthe study of Mitra and Chiueh [11], since they also considered dynamic\n, polygonal 3D graphics workloads. Dynamic means that the\nworkloads consist of several consecutive image frames rather than\nindividual images, which allows to study techniques that exploit the\ncoherence between consecutive frames. Polygonal means that the\nbasic primitives are polygons, which are supported by all existing\n3D chips. The main differences between that study and our workload\ncharacterization are that Mitra and Chiueh considered high-end\napplications (Viewperf, among others) and measured different\nstatistics.\nRecently, a number of mobile 3D graphics accelerators [1, 16]\nhave been presented. In both works particular benchmarks were\nemployed to evaluate the accelerators. However, little information\nis provided about the benchmarks and they have not been made\npublicly available.\nAnother reason for the limited availability of mobile 3D graphics\nbenchmarks is that until recently there was no generally accepted\nAPI for 3D graphics on mobile phones. Recently, due to\nhigh interest in embedded 3D graphics, APIs suitable for mobile\n3D graphics such as OpenGL ES [8], Java mobile 3D Graphics\nAPI (JSR-184) [7], and Mobile GL [20] have appeared. Currently,\nhowever, there are no 3D benchmarks written using these APIs. So,\nwe have used OpenGL applications. Furthermore, our benchmarks\nuse only a part of the OpenGL functionality which is also supported\nby OpenGL ES.\nTHE GraalBench BENCHMARK SET\nIn this section we describe the environment we used to create the\nbenchmarks, the components of our benchmark set and also some\ngeneral characteristics of the workloads.\n3.1\nTracing Environment\nDue to their interactive nature, 3D games are generally not repeatable\n. In order to obtain a set of repeatable workloads, we traced\nexisting applications logging all OpenGL calls. 
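The general pattern behind this kind of call tracing -- intercept every call issued by the application, log the call together with its arguments, and then forward it unchanged to the real library so that the application still runs normally -- can be sketched as follows. This is only a hypothetical Python illustration of the intercept-log-forward idea; the actual tracer described next wraps the C OpenGL library.

import functools
import json

def traced(real_call, log_file):
    """Wrap one API entry point: log the call and its arguments, then
    forward it to the real implementation."""
    @functools.wraps(real_call)
    def wrapper(*args, **kwargs):
        log_file.write(json.dumps({"call": real_call.__name__,
                                   "args": [repr(a) for a in args]}) + "\n")
        return real_call(*args, **kwargs)  # the application behaves as usual
    return wrapper

# Hypothetical usage: replace an entry point by its traced version, e.g.
#   gl.glClear = traced(gl.glClear, open("trace.log", "w"))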
Our tracing environment\nconsists of two components: a tracer and a trace player.\nOur tracer is based on GLtrace from Hawksoft [15]. It intercepts\nand logs OpenGL calls made by a running application, and then\ncalls the OpenGL function invoked by the application. No source\ncode is required provided the application links dynamically with\nthe OpenGL library, meaning that the executable only holds links\nto the required functions which are bounded to the corresponding\nfunctions at run-time. Statically linked applications, in which case\nthe required libraries are encapsulated in the executable image, cannot\nbe traced using this mechanism when the source code is not\navailable.\nWe improved GLtrace in two ways. First, GLtrace does not log\ncompletely reproducible OpenGL calls (for example, textures are\nnot logged). We modified the GLtrace library so that all OpenGL\ncalls are completely reproducible. Second, the trace produced by\nGLtrace is a text trace, which is rather slow. We improved its performance\nby adding a binary logging mode that significantly reduces\nthe tracing overhead.\nIn addition, we developed a trace player that plays the obtained\ntraces. It can play recorded frames as fast as the OpenGL implementation\nallows. It does not skip any frame so the workload generated\nis always the same. The workload statistics were collected\nusing our own OpenGL simulator based on Mesa [14], which is a\npublic-domain implementation of OpenGL.\n3.2\nThe Benchmarks\nThe proposed benchmark suite consists of the following components\n:\nQ3L and Q3H\nQuake III [9] or Q3, for short, is a popular interactive\n3D game belonging to the shooter games category. A\nscreenshot of this game is depicted in Figure 1(a). Even\nthough it can be considered outdated for contemporary PC-class\ngraphics accelerators, it is an appropriate and demanding\napplication for low-power devices. Q3 has a flexible design\nand permits many settings to be changed such as image\nsize and depth, texture quality, geometry detail, types of texture\nfiltering, etc. We used two profiles for this workload in\n2\n(a) Q3\n(b) Tux\n(c) AW\n(d) ANL\n(e) GRA\n(f) DIN\nFigure 1: Screenshots of the GraalBench workloads\norder to determine the implications of different image sizes\nand object complexity. The first profile, which will be re-ferred\nto as Q3H, uses a relatively high image resolution and\nobjects detail. The second profile, Q3L, employs a low resolution\nand objects detail. Q3 makes extensive use of blending\noperations in order to implement multiple texture passes.\nTux Racer\n(Tux) [19] This is a freely available game that runs on\nLinux. The goal of this game is to drive a penguin down\na mountain terrain as quickly as possible, while collecting\nherring. The image quality is higher than that of Q3. Tux\nmakes extensive use of automatic texture coordinate generation\nfunctions. A screenshot can be seen in Figure 1(b).\nAWadvs-04\n(AW) [18] This test is part of the Viewperf 6.1.2 package\n. In this test a fully textured human model is viewed from\ndifferent angles and distances. As remarked before, the other\ntest in the Viewperf package are not suitable for low-power\naccelerators, because they represent high-end applications or\nare from an application domain not likely to be offered on\nmobile platforms. A screenshot of AW is depicted in Figure\n1(c).\nANL, GRA, and DIN\nThese three VRML scenes were chosen based\non their diversity and complexity. 
ANL is a virtual model of\nAustrian National Library and consists of 10292 polygons,\nGRA is a model of Graz University of Technology, Austria\nand consists of 8859 polygons, and Dino (DIN) is a model\nof a dinosaur consisting of 4300 polygons. In order to obtain\na workload similar to one that might be generated by a typical\nuser, we created \"fly-by\" scenes. Initially, we used VR-Web\n[13] to navigate through the scenes, but we found that\nthe VRMLView [12] navigator produces less texture traffic\nbecause it uses the\nglBindTexture\nmechanism. Screenshots\nof ANL, GRA, and DIN are depicted in Figure 1(d),\n(e), and (f), respectively.\nGraalBench is the result of extensive searching on the World\nWide Web. The applications were selected on the basis of several\ncriteria. First, since the display resolution of contemporary mobile\nphones is at most\n320 240, we excluded applications with\nsubstantially higher resolution. Specifically, we used a maximum\nresolution of\n640 480. Second, the applications should be relevant\nfor a mobile phone, i.e., it should be likely that it will be\noffered on a mobile phone. CAD/CAM applications were excluded\nfor this reason. Third, the level of details of the applications should\nnot be too high, because otherwise, most rendered polygons will be\nsmaller than one pixel on the display of a mobile phone. Fourth,\nand finally, the benchmarks should have different characteristics.\nFor example, several links to 3D games have recently been provided\non Mesa's website (\nwww.mesa3d.org\n). However, these\ngames such as Doom, Heretic, and Quake II belong to the same\ncategory as Quake III and Tux Racer, and therefore do not represent\nbenchmarks with substantially different characteristics. We,\ntherefore, decided not to include them.\nApplications using the latest technologies (Vertex and Pixel Sha-ders\n) available on desktop 3D graphics accelerators were also not\nincluded since these technologies are not supported by the embedded\n3D graphics APIs mentioned in Section 2. We expect that more\n3D graphics applications for low-power mobile devices will appear\nwhen accelerators for these platforms will be introduced.\n3\n3.3\nGeneral Characteristics\nTable 1 present some general statistics of the workloads. The\ncharacteristics and statistics presented in this table are:\nImage resolution\nCurrently, low-power accelerator should be able\nto handle scenes with a typical resolution of\n320240 pixels.\nSince in the near future the typical resolution is expected to\nincrease we decided to use a resolution of\n640 480. The\nQ3L benchmark uses a lower resolution (\n320240) in order\nto study the impact of changing the resolution.\nFrames\nThe total number of frames in each test.\nAvg. triangles\nThe average number of triangles sent to the rasterizer\nper frame.\nAvg. processed triangles\nThe average number of triangles per frame\nthat remained after backface culling, i.e., the triangles that\nremained after eliminating the triangles that are invisible because\nthey are facing backwards from the observer's viewpoint\n.\nAvg. area\nThe average number, per frame, of fragments/pixels after\nscan conversion.\nTexture size\nThe total size of all textures per workload. This quantity\ngives an indication of the amount of texture memory required\n.\nMaximum triangles\nThe maximum number of triangles that were\nsent for one frame. 
Because most 3D graphics accelerators\nimplement only rasterization, this statistic is an approximation\nof the bandwidth required for geometry information,\nsince triangles need to be transferred from the CPU to the\naccelerator via a system bus. We assume that triangles are\nrepresented individually. Sharing vertices between adjacent\ntriangles allows to reduce the bus bandwidth required. This\nquantity also determines the throughput required in order to\nachieve real-time frame rates. We remark that the maximum\nnumber rather than the average number of triangles per frame\ndetermines the required bandwidth and throughput.\nMaximum processed triangles per frame\nThe maximum number\nof triangles that remained after backface culling over all frames.\nMaximum area per frame\nThe maximum number of fragments\nafter scan conversion, over all frames.\nSeveral observations can be drawn from Table 1. First, it can be\nobserved from the columns labeled \"Received triangles\" that the\nscenes generated by Tux and Dino have a relatively low complexity\n, that Q3, ANL, and Graz consist of medium complexity scenes,\nand that AW produces the most complex scenes by far. Second,\nbackface culling is effective in eliminating invisible triangles. It\neliminates approximately 30% of all triangles in the Q3 benchmarks\n, 24% in Graz, and more than half (55%) of all triangles in\nAW. Backface culling is not enabled in the ANL and Dino workloads\n. If we consider the largest number of triangles remaining\nafter backface culling (14236 for ANL) and assume that each triangle\nis represented individually and requires 28 bytes (xyz coordinates\n, 4 bytes each, rgb for color and alpha for transparency, 1 byte\neach, and uvw texture coordinates, 4 bytes each) for each of its vertices\n, the required bus bandwidth is approximately 1.2MB/frame\nor 35.9MB/s to render 30 frames per second. Finally, we remark\nthat the largest amount of texture memory is required by the Q3\nand Tux benchmarks, and that the other benchmarks require a relatively\nsmall amount of texture memory.\nTable 2: Stress variation and stress strength on various stages\nof the 3D graphics pipeline\nBench.\nT&L\nRasterization\nVar.\nStr.\nVar.\nStr.\nQ3L\nmed\nmed\nmed\nmed\nQ3H\nmed\nmed\nmed\nhigh\nTux\nvar\nlow\nlow\nmed\nAW\nlow\nhigh\nhigh\nlow\nANL\nhigh\nmed\nmed\nmed\nGRA\nhigh\nmed\nmed\nlow\nDIN\nlow\nmed\nmed\nlow\nWORKLOAD CHARACTERIZATION\nThis section provides the detailed analysis of results we obtained\nby running the proposed benchmark set. For each unit of a typical\nrasterization pipeline we present the relevant characteristics followed\nby the architectural implications.\n4.1\nDetailed Workload Statistics\nOne important aspect for 3D graphics benchmarking is to determine\npossible bottlenecks in a 3D graphics environment since\nthe 3D graphics environment has a pipeline structure and different\nparts of the pipeline can be implemented on separate computing\nresources such as general purpose processors or graphics accelerators\n. Balancing the load on the resources is an important decision.\nBottlenecks in the transform & lighting (T&L) part of the pipeline\ncan be generated by applications that have a large number of primitives\n, i.e. substantial geometry computation load, where each primitive\nhas a small size, i.e. 
reduced impact on the rasterization part of the pipeline, while bottlenecks in the rasterization part are usually generated by fill-intensive applications that use a small number of primitives, each of which covers a substantial part of the scene. An easy way to determine whether an application is, for instance, rasterization intensive is to remove the rasterization part from the graphics pipeline and measure the resulting speedup.

The components of GraalBench were also chosen to stress various parts of the pipeline. For instance, the AW and DIN components generate an almost constant number of primitives while the generated area varies substantially across frames; in these scenarios the T&L part of the pipeline has a virtually constant load while the rasterization part has a variable load. This behavior, depicted in Figure 4 (c, d, g, h), emphasizes the role of the rasterization part of the pipeline. The number of triangles received gives an indication of the triangles that have to be transformed and lit, while the number of triangles processed gives an indication of the triangles that were sent to the rasterization stage after clipping and culling. Other components, e.g. Tux, generate a variable number of triangles while the generated area is almost constant, so they can be used to profile bottlenecks in the T&L part of the graphics pipeline.

Besides the variation of the workload for a certain pipeline stage, another important aspect is the stress strength of each workload. For convenience, Table 2 also presents a rough view of the stress variation and stress strength along the 3D graphics pipeline. The stress variation represents how much a workload varies from one frame to another, while the stress strength represents the load generated by each workload.

Table 1: General statistics of the benchmarks
Bench.  Resolution  Frames  Textures (MB)  Received tri. (avg / max)  Processed tri. (avg / max)  Area (avg / max)
Q3L     320 x 240   1,379   12.84          4.5k / 9.7k                3.25k / 6.8k                422k / 1,327k
Q3H     640 x 480   1,379   12.84          4.6k / 9.8k                3.36k / 6.97k               1,678k / 5,284k
Tux     640 x 480   1,363   11.71          3k / 4.8k                  1.8k / 2.97k                760k / 1,224k
AW      640 x 480   603     3.25           23k / 25.7k                10.55k / 13.94k             63k / 307k
ANL     640 x 480   600     1.8            4.45k / 14.2k              4.45k / 14.2k               776k / 1,242k
GRA     640 x 480   599     2.1            4.9k / 10.8k               3.6k / 6.9k                 245k / 325k
DIN     640 x 480   600     1.7            4.15k / 4.3k               4.15k / 4.3k                153k / 259k

Figure 2: Graphics pipeline for rasterization (primitives sent from the Transform & Lighting stage pass through Triangle Setup, Edge Walk, and Span Interpolation, followed by the per-fragment units: Texturing, Color Sum, Fog, Pixel Ownership, Scissor Test, Alpha Test, Stencil Test, Depth Test, Blending, Dithering, and LogicOp, plus the Clear unit; the pipeline accesses the texture memory and the color/depth/stencil buffers).

On the proposed benchmarks we determined that, for a software implementation, the most computationally intensive part of the graphics pipeline is the rasterization part. This is the reason why only the rasterizer stage was studied further. The units for which we gathered results are depicted in Figure 2 and described in the following.

Triangle Setup, EdgeWalk, and Span Interpolation. These units convert primitives to fragments. We used the same algorithm as employed in the Mesa library rasterizer.
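As an illustration of what these three units do, the sketch below shows the general structure of edge walking (producing one span per scanline) and span interpolation (producing one fragment per covered pixel). It is a simplified Python sketch that assumes already-projected 2D vertices and omits attribute interpolation; it is not the Mesa algorithm used for the measurements.

def rasterize_triangle(v0, v1, v2):
    """Count the spans and fragments generated for one triangle.
    Vertices are (x, y) tuples in screen space."""
    vtop, vmid, vbot = sorted((v0, v1, v2), key=lambda v: v[1])

    def edge_x(va, vb, y):
        # x coordinate of edge (va, vb) at scanline y (linear interpolation)
        (xa, ya), (xb, yb) = va, vb
        return xa if yb == ya else xa + (xb - xa) * (y - ya) / (yb - ya)

    spans = fragments = 0
    for y in range(int(vtop[1]), int(vbot[1])):          # edge walk
        x_long = edge_x(vtop, vbot, y)                    # long edge
        x_short = edge_x(vtop, vmid, y) if y < vmid[1] else edge_x(vmid, vbot, y)
        x_start, x_end = sorted((x_long, x_short))
        spans += 1
        for x in range(int(x_start), int(x_end)):         # span interpolation
            fragments += 1  # a real rasterizer would interpolate z, color, u/v here
    return spans, fragments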
We remark that the\nnumber of processed triangles in Table 3 is smaller than the number\nof processed triangles in Table 1 because some triangles were\nsmall and discarded using supplementary tests, such as a small triangle\nfilter. The average number of processed triangles is the lowest\nfor Tux, medium for Q3 and VRML components, and substantially\nhigher for AW considering that the number of triangles for the AW\nand VRML components were generated in approximatively half as\nmany frames as the number of frames in Q3 and Tux. The numbers\nof generated spans and fragments also give an indication of\nthe processing power required at the EdgeWalk and Span Interpolation\nunits. The AW benchmark generates on average only 4 spans\nper triangle and approximately 2 fragments per span. These results\nshow that the benchmark that could create a pipeline bottleneck in\nthese units is the AW benchmark since it has small triangles (small\nimpact on the rest of the pipeline) and it has the largest number of\ntriangles (that are processed at the Triangle Setup).\nClear Unit.\nThe clear unit is used to fill the Depth buffer and/or\nthe Color buffer with a default value. As can be seen in Table 4,\nthe Q3 benchmark uses only depth buffer clearing, except for one\ninitial color buffer clear. Q3 exploits the observation that all pixels\nfrom a scene are going to be filled at least once so there is no need\nto clear the color buffer. The other benchmarks have an equal number\nof depth and color buffer clears. Although the clear function\nis called a relatively small number of times, the number of cleared\npixels can be as high as 20% of the pixels generated by the rasterizer\n. This implies that this unit should be optimized for long write\nburst to the graphics memory.\nTexture Unit.\nWhen enabled, this unit combines the color of the\nincoming fragment with a texture sample color. Depending on the\ntexturing filter chosen, the texture color is obtained by a direct look-up\nin a texture map or by a linear interpolation between several\ncolors (up to 8 colors in the case of trilinear interpolation).\nThe results obtained for the texture unit are depicted in Figure 3.\nThe Q3 and VV benchmarks used the texture unit for all fragments,\nwhile the Tux and AW benchmarks used texturing for 75% and 90%\nof the fragments respectively.\nThis unit is the most computationally intensive unit of a rasterizer\nand can easily become a pipeline bottleneck, thus it should be\nhighly optimized for speed. Beside requiring high computational\npower, this unit also requires a large number of accesses to the texture\nmemory . However, due to high spatial locality, even by adding\na small texture cache, the traffic from the off-chip texture memory\nto the texture unit can be substantially reduced[2].\nFog Unit.\nThe Fog unit is to used modify the fragment color in\norder to simulate atmospheric effects such as haze, mist, or smoke.\nOnly the Tux benchmark uses the fog unit and it was enabled for\n84% of the fragments. From the three types of fog (linear, exponential\n, and squared exponential) only linear was used. 
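For reference, linear fog blends the fragment color with a constant fog color using a factor that falls linearly with eye-space distance; the sketch below follows the standard OpenGL definition of linear fog, with illustrative parameter names.

def linear_fog(frag_color, fog_color, z_eye, start, end):
    """OpenGL-style linear fog: f = (end - z) / (end - start), clamped to
    [0, 1]; the output color is f * fragment + (1 - f) * fog."""
    f = max(0.0, min(1.0, (end - z_eye) / (end - start)))
    return tuple(f * c + (1.0 - f) * cf for c, cf in zip(frag_color, fog_color))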
The results\nsuggest that for these low-end applications the fog unit is seldomly\nused and that it might be implemented using slower components or\nthat this unit can be implemented off the critical path.\n5\nTable 3: Triangle Setup, Edge Walk, and Span Interpolation units statistics\nQ3L\nQ3H\nTux\nAW\nANL\nGRA\nDIN\nTriangles processed\n4,147k\n4,430k\n2,425k\n5,537k\n2,528k\n1,992k\n2,487k\nGenerated spans\n58,837k\n117,253k\n27,928k\n20,288k\n66,806k\n14,419k\n23,901k\nGenerated fragments\n(frags.)\n581,887k\n2,306,487k\n1,037,222k\n38,044k\n466,344k\n146,604k\n91,824k\nTable 4: Clear unit statistics\nQ3L\nQ3H\nTux\nAW\nANL\nGRA\nDIN\nClear depth calls\n5,470\n5,470\n1,363\n603\n601\n600\n602\nClear color calls\n1\n1\n1,363\n604\n601\n600\n602\nClear depth pixels\n105,821k\n423,284k\n418,714k\n185,242k\n184,320k\n184,013k\n189,934k\nClear color pixels\n76,800\n307,200\n418,714k\n185,549k\n184,320k\n184,013k\n189,934k\nScissor Unit.\nThis unit is used to discard fragments that are\noutside of a specified rectangular area. Only Q3 employed scissoring\n. All incoming fragments were processed and passed the test,\nso even in this case the test is redundant since no fragments were\nrejected. Normally, the scissor unit is used to restrict the drawing\nprocess to a certain rectangular region of the image space. In\nQ3 this unit, besides being always enabled for the whole size of\nthe image space (to clip primitives outside it), in some cases it is\nalso used to clear the depth component of a specific region of the\nimage so that certain objects (interface objects) will always be in\nfront of other objects (normal scene). Even though it might be used\nintensively by some applications, this unit performs simple computations\n, such as comparisons, and performs no memory accesses so\nit does not require substantial computational power.\nAlpha Unit.\nThis unit discards fragments based on their alpha\ncolor component. This unit was used only in Q3 and Tux. Furthermore\n, Q3 used the alpha unit only for a very small number\nof fragments (0.03%). The only comparison function used was\n\"Greater or Equal\". However, this is not a significant property\nsince the other comparison functions (modes) do not require a substantial\namount of extra hardware to be implemented. The number\nof passed fragments could not be determined since the texturing\nunit of our graphics simulator is not yet complete, and the alpha test\ndepends on the alpha component that can be modified by the texture\nprocessing unit. However, this unit is used significantly only\nfor the Tux benchmark so this is the only benchmark that could\nhave produced different results. Furthermore, the propagated error\nfor the results we obtained can be at most 7.8% since 92.2% of the\nfragments generated by Tux bypassed this unit. We, therefore, assumed\nthat all fragments passed the alpha test. This corresponds\nto the worst case. Since this unit is seldomly used, it could be\nimplemented using a more conservative strategy toward allocated\nresources.\nDepth Unit.\nThis unit discards a fragment based on a comparison\nbetween its depth value and the depth value stored in the depth\nbuffer in the fragment's corresponding position. This unit was used\nintensively by all benchmarks as can be seen in Table 4.1. 
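A minimal sketch of the per-fragment depth test is given below, assuming a GL_LESS-style comparison and a plain 2D array as the depth buffer; the distinction between fragments that pass the test and fragments whose depth value is also written back corresponds to the "passed" and "written" rows reported for this unit.

def depth_test(depth_buffer, x, y, frag_z, depth_write=True):
    """Return True if the fragment survives the depth test; optionally
    update the stored depth value."""
    if frag_z < depth_buffer[y][x]:       # closer than the stored value: pass
        if depth_write:                   # writes can be disabled, e.g. during
            depth_buffer[y][x] = frag_z   # additional texture passes
        return True
    return False                          # fragment is discarded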
While the\nTux, Aw, and VRML benchmarks write almost all fragments that\npassed the depth test to the depth buffer, the Q3 benchmark writes\nto the depth buffer only 36% of the fragments that passed the test.\nThis is expected since Q3 uses multiple steps to apply textures to\nprimitives and so it does not need to write to the depth buffer at each\nstep. This unit should definitely be implemented in an aggressive\nmanner with respect to throughput (processing power) and latency,\nsince for instance the depth buffer read/write operations used at this\nunit are quite expensive.\nBlending Unit.\nThis unit combines the color of the incoming\nfragment with the color stored at the corresponding position in the\nframebuffer. As depicted in Figure 3, this unit is used only by the\nQ3 and Tux benchmarks. The AW and VRML benchmarks do not\nuse this unit since they use only single textured primitives and all\nblending operations are performed at the texturing stage. Q3, on the\nother hand, uses a variety of blending modes, while Tux employs\nonly a very common blending mode (source = incoming pixel alpha\nand dest= 1 - incoming pixel alpha). An explanation why Tux\nmanages to use only this mode is that Tux uses the alpha test instead\nof multiple blending modes. Alpha tests are supposed to be\nless computationally intensive than blending operations since there\nis only one comparison per fragment, while the blending unit performs\nup to 8 multiplications and 4 additions per fragment. Based\non its usage and computational power required, the implementation\nof this unit should be tuned toward performance.\nUnused Units.\nThe LogicOp, Stencil and Color Sum units are\nnot used by any benchmark. The dithering unit is used only by the\nAW benchmark (for all fragments that passed the blending stage).\nSince these units are expected to be hardly used their implementation\ncould be tuned toward low-power efficiency.\n4.2\nArchitectural Implications Based on Unit\nUsage\nIn this section the usage of each unit for the selected benchmarks\nis presented. The statistics are gathered separately for each benchmark\n. Figure 3 breaks down the number of fragments received by\neach unit into fragments that bypassed the unit, fragments that were\nprocessed by the unit and passed the test, and fragments that failed\nthe test. All values are normalized to the number of fragments generated\nby the Span Interpolation unit.\nFrom Figure 3 it can be seen that the Q3 benchmark is quite scalable\nand the results obtained for the low resolution profile (Q3L)\nare similar with the results obtained for the high resolution profile\n(Q3H). The Q3 benchmark can be characterized as an application\nthat uses textures for most of its primitives. The Tux component is\nalso using textures for more than 70% of its primitives, and it also\nuses the fog unit. The AW component does not use the scissor test\n6\nTable 5: Depth unit statistics\nQ3L\nQ3H\nTux\nAW\nANL\nGRA\nDIN\nIncoming frags.\n581,887k\n2,306,487k\n1,037,222k\n38,044k\n466,344k\n146,604k\n91,824k\nProcessed frags.\n578,345k\n2,292,357k\n512,618k\n38,044k\n466,344k\n146,604k\n91,824k\nPassed frags.\n461,045k\n1,822,735k\n473,738k\n35,037k\n281,684k\n137,109k\n73,268k\nFrags. 
written to the depth buffer    166,624k    666,633k    462,520k    35,037k    281,684k    137,109k    73,268k

Figure 3: Rasterization pipeline units usage (for each benchmark -- Q3L, Q3H, Tux, AW, ANL, GRA, DIN -- the percentage of fragments that passed, failed, or bypassed the Texture, Fog, Scissor, Alpha, Depth, Blending, and Dithering units).

as the others do, and it has no pixels rejected at the depth test. Another difference from the previous components is that AW also uses the dithering mechanism in order to improve the image quality on displays with a low color depth. Some architectural implications based on the unit usage are the following. Some of the units, such as Color Sum, LogicOp, and Stencil, were not used, so they might not be implemented in hardware. Units such as Fog and Alpha were used less and can also be implemented outside the critical path. The Depth and Blending units should be hard-wired and tuned toward performance. The texture unit should definitely be the focus of a high-performance implementation since, due to the processing power required, it can easily become a bottleneck for the graphics pipeline.

CONCLUSIONS AND FUTURE WORK
Although high-end 3D graphics benchmarks have been available for some time, there are no benchmark suites dedicated to embedded 3D graphics accelerators. In this paper we have described a set of relevant applications for the performance evaluation of embedded 3D graphics accelerators. One of the objectives of this paper was also to determine which features of 3D graphics implementations are used in relevant 3D graphics applications. We have identified a number of units of the 3D graphics pipeline which are used intensively, such as the texture and depth units, while, for instance, the stencil, fog, and dithering units are rarely used.

The OpenGL applications that were used to create the benchmarks and the GLtrace tracer are accessible via the first author's website (http://ce.et.tudelft.nl/~tkg/). The benchmarks (i.e. the traces) cannot be made public currently, because they are of no use without the trace player and the trace player is confidential at the moment. However, the Quake III (demo version) and the AWadvs-04 components do not require the use of the trace player in order to generate repeatable workloads. We hope to be able to make the benchmark suite publicly available in the future. As future work, we intend to extend the number of components of this benchmark suite, and we also intend to extend the statistics to include results from embedded graphics architectures that use a tile-based rendering mechanism.

Figure 4: Triangle and area statistics for the GraalBench components (per-frame plots of triangles received/processed and of the generated area for (a)-(b) Q3L, (c)-(d) Tux, (e)-(f) DIN, (g)-(h) AW, (i)-(j) GRA, and (k)-(l) ANL).

REFERENCES
[1] T. Akenine-Moller and J. Strom, "Graphics for the Masses: A Hardware Rasterization Architecture for Mobile Phones", ACM Trans. on Graph., vol. 22, no. 3, 2003, pp. 801-808.
[2] I. Antochi, B.H.H. Juurlink, A.G.M. Cilio, and P. Liuha, "Trading Efficiency for Energy in a Texture Cache Architecture", Proc. Euromicro Conf. on Massively-Parallel Computing Systems (MPCS'02), Ischia, Italy, 2002, pp. 189-196.
[3] Futuremark Corporation, "3DMark01SE", Available at http://www.futuremark.com/products/3dmark2001/
[4] Futuremark Corporation, "3DMark03", Available at http://www.futuremark.com/products/3dmark03/
[5] D. Crisu, S.D. Cotofana, S. Vassiliadis, and P. Liuha, "GRAAL -- A Development Framework for Embedded Graphics Accelerators", Proc. Design, Automation and Test in Europe (DATE 04), Paris, France, February 2004.
[6] J.C. Dunwoody and M.A. Linton, "Tracing Interactive 3D Graphics Programs", Proc. ACM Symp. on Interactive 3D Graphics, 1990.
[7] JSR-184 Expert Group, "Mobile 3D Graphics API for Java 2 Micro Edition", Available at http://jcp.org/aboutJava/communityprocess/final/jsr184/index.html
[8] The Khronos Group, "OpenGL ES Overview", Available at http://www.khronos.org/opengles/index.html
[9] Id Software Inc., "Quake III", Available at http://www.idsoftware.com
[10] ARM Ltd., "ARM 3D Graphics Solutions", Available at http://www.arm.com/miscPDFs/1643.pdf
[11] T. Mitra and T. Chiueh, "Dynamic 3D Graphics Workload Characterization and the Architectural Implications", Proc. 32nd ACM/IEEE Int. Symp. on Microarchitecture (MICRO), 1999, pp. 62-71.
[12] Systems in Motion, "VRMLView", Available at http://www.sim.no
[13] M. Pichler, G. Orasche, K. Andrews, E. Grossman, and M. McCahill, "VRweb: a Multi-System VRML Viewer", Proc. First Symp. on Virtual Reality Modeling Language, San Diego, California, United States, 1995, pp. 77-85.
[14] The Mesa Project, "The Mesa 3D Graphics Library", Available at http://www.mesa3d.org
[15] Hawk Software, "GLTrace Programming Utility", Available at http://www.hawksoft.com/gltrace/
[16] J. Sohn, R. Woo, and H.J. Yoo, "Optimization of Portable System Architecture for Real-Time 3D Graphics", Proc. IEEE Int. Symp.
on Circuits and Systems (ISCAS 2002),\nVolume: 1 , 26-29 May 2002 pp. I-769 - I-772 vol.1.\n[17] SourceForge, \"spyGLass: an OpenGL Call Tracer and\nDebugging Tool\", Available at\nhttp://spyglass.sourceforge.net/\n[18] SPEC, \"SPECviewperf 6.1.2\", Available at\nhttp://www.specbench.org/gpc/opc.static/opcview.htm\n[19] Sunspire Studios, \"Tux Racer\", Available at\nhttp://tuxracer.sourceforge.net/\n[20] Portable 3D Research Group at Korea Advanced Institute\nof Science and Technology, \"MobileGL - The Standard\nfor Embedded 3D Graphics\", Available at http://ssl.kaist.\nac.kr/projects/portable3D.html/main mgl defnition.htm\n[21] Stanford University, \"GLSim & GLTrace\", Available at\nhttp:\n//graphics.stanford.edu/courses/cs448a-01-fall/glsim.html\n[22] Yonsei University, 3D Graphics Accelerator Group,\nhttp://msl.yonsei.ac.kr/3d/\n9\n", "keywords": "mechanism;workload characterization;API;Mobile devices;embedded 3D graphics;accelerators;3D graphics benchmarking;real-time;bottlenecks;rasterization;Graalbench;architecture;embedded systems;workload;benchmark;3D graphics;pipeline;Mobile environments;3D graphics applications;mobile phones;triangles;openGL;unit;GraalBench;performance;measurement;statistics;OpenGL;transform and lighting;embedded 3D graphics architectures;3D graphics benchmarks"} {"name": "99", "title": "Handoff Trigger Table for Integrated 3G/WLAN Networks", "abstract": "Vertical handoff is a switching process between heterogeneous wireless networks in a hybrid 3G/WLAN network. Vertical handoffs fromWLAN to 3G network often fail due to the abrupt degrade of the WLAN signal strength in the transition areas. In this paper, a Handoff Trigger Table is introduced to improve the performance of vertical handoff. Based on this table, a proactive handoff scheme is proposed. Simulation results show that with the proposed scheme, the vertical handoff decisions will be more efficient so that dropping probability can be decreased dramatically.", "fulltext": "INTRODUCTION\nWith the emergence of different wireless technologies, which\nare developed for different purposes, the integration of these\nwireless networks has attracted much attention from both\nacademia and industry. Among them, the integration of 3G\ncellular networks and wireless local access networks (WLAN)\nhas become a very active area in the development toward the\nPermission to make digital or hard copies of all or part of this work for\npersonal or classroom use is granted without fee provided that copies are\nnot made or distributed for profit or commercial advantage and that copies\nbear this notice and the full citation on the first page. To copy otherwise, to\nrepublish, to post on servers or to redistribute to lists, requires prior specific\npermission and/or a fee.\nIWCMC'06, July 36, 2006, Vancouver, British Columbia, Canada.\nCopyright 2006 ACM 1-59593-306-9/06/0007 ...\n$\n5.00.\nnext generation wireless networks. WLAN access technology\ncan offer high-speed data connections in a small coverage\nwith relatively low cost. On the other hand, cellular\nnetworks can offer connectivity over several square kilometers\nbut with relatively low data rate. Taking advantages\nof both networks will bring great benefits to both service\nproviders and users.\nOne of the desired features of such a heterogeneous wireless\nnetwork is to support seamless global roaming or vertical\nhandoff. 
Traditionally, handoff is performed within the same wireless system, which is called horizontal handoff. In contrast, a vertical handoff takes place between different wireless networks [2]. In an integrated 3G/WLAN network, there are two directions of vertical handoff: one is from WLANs to 3G networks and the other is from 3G networks to WLANs. In the first direction, the objective of handoff is to maintain connectivity, i.e., switching to the cellular network before the WLAN link breaks while trying to stay in the WLAN as long as possible because of its relatively high bandwidth and low cost. Since a WLAN has smaller coverage and is usually covered by a 3G network, when the mobile terminal (MT) steps out of the WLAN area, the decay of the signal from the WLAN should be accurately detected. A timely handoff decision should be made properly, and the MT should switch the connection to the appropriate 3G network successfully. In the second direction, the objective of handoff is usually to improve QoS and acquire higher bandwidth at lower cost. In this paper, we will focus on the first direction, which is the handoff from the WLAN to the 3G network.

For a WLAN, signal power is limited, which causes the signal strength to be quite easily influenced by physical obstructions and blocks. For example, if the MT passes some blocks or moves into an elevator, there will be an abrupt drop in its received WLAN signal strength. In this case, the MT may not have enough time to finish the WLAN-to-3G-network vertical handoff procedure before the link to the WLAN breaks. Therefore, how to effectively detect the signal decay to trigger the handoff becomes a very important issue.

In this paper, we propose to maintain a Handoff Trigger Table (HTT) at the Access Point (AP) to record some location information on transition areas in which vertical handoffs occur. With the information in the HTT, a proactive handoff scheme is proposed to enable the MTs to start the vertical handoff procedure early enough to finish it, so that the handoff call dropping probability can be decreased dramatically.

The rest of this paper is organized as follows. In Section 2, some related work is discussed. In Section 3, we propose a Handoff Trigger Table to assist in making handoff decisions. The handoff scheme based on this table is also presented. Section 4 gives simulation results to demonstrate the better performance of the proposed scheme in comparison with the traditional one. Section 5 concludes the paper.

RELATED WORK
In traditional handoff schemes, the received signal strength (RSS) has been used as an indicator for making handoff decisions. Some of the traditional approaches are as follows [5]:

RSS: handoff takes place if the RSS of a candidate point of attachment is larger than the RSS of the current point of attachment (RSS_new > RSS_current);

RSS plus threshold: handoff is made if the RSS of a candidate point of attachment is larger than that of the current point of attachment and the latter is less than a certain pre-defined threshold T (RSS_new > RSS_current and RSS_current < T);

RSS plus hysteresis: a handoff takes place if the RSS of the candidate point of attachment is larger than the RSS of the current one by a pre-defined hysteresis margin H (RSS_new > RSS_current + H);

Algorithm plus dwell timer: sometimes a dwell timer can be added to the above algorithms.
This timer is\nstarted when one of the above conditions happens, and\nthe handoff takes place if the condition is met for the\nentire dwell timer interval.\nFor the vertical handoff process, it may not be very reliable\nto make handoff decisions based only on the RSS of the point\nof attachment (e.g., AP of the WLAN) and the candidate\npoint of attachment (e.g., base station of the 3G network)\nbecause of the asymmetric nature of the handoff problem\n[6].\nAs mentioned before, the handoff from WLAN to 3G network\nis expected to be efficient and effective. Some methods\nhave been proposed to achieve this goal. One of them\nis the Fast Fourier Transform(FFT)-based decay detection\n[1]. This approach tries to estimate the signal decay, and\nwill trigger the handoff after the signal is confirmed to be\ndecreased to a certain threshold. However, this approach\nhas high calculation complexity with the need of frequent\nsampling, and suffers from estimation errors. In [2], handoff\ntriggering nodes are used to notify the mobile terminal to\nstart the handoff. These special nodes are data stations installed\nin WLAN/cellular transition regions where vertical\nhandoffs occur. When an MT moves close to it, the handoff\ntriggering node will send a handoff trigger command to trigger\nthe link layer handoff. Using handoff trigger node can\nbe good at triggering the handoffs, but if they are needed in\nmany places within a WLAN, it will be costly to set up many\ntrigger nodes. In addition, if there are new blocks appearing\nin the WLAN, it is hard to determine where the additional\ntrigger nodes should be installed. Therefore, this approach\nis not very flexible. The Fuzzy logic based handoff algorithm\n[3][4] is proposed to assist making handoff decisions.\nThis algorithmdecreases handoff delay and the number of\nunnecessary handoffs by changing the RSS average window\naccording to the MT speed. It is worth mentioning that\nsome fuzzy logic based algorithms are complex and may not\nbe easy to be implemented in practical systems.\nHANDOFF TRIGGER TABLE FOR 3G/ WLAN NETWORKS\nRecently, WLAN has been expected to provide user location\ninformation [6]-[10], which is helpful for making vertical\nhandoffs. We propose to use a Handoff Trigger Table (HTT)\nto store such location information at the AP and utilize it to\ntrigger the WLAN-to-3G vertical handoff explicitly. Based\non this table, a proactive handoff scheme is proposed to assist\nMTs to handoff in the right places at the right time.\nWith this scheme, handoff decisions can be more efficient\nand handoffs are more likely to succeed compared with the\ntraditional schemes which trigger the handoff mainly based\non the received signal strength at the MTs.\n3.1\nHandoff Trigger Table\nA typical integrated 3G/WLAN network is shown in Fig. 1.\nThe HTT is normally implemented at the APs of WLAN\nand used to record the user location information which will\nbe helpful to make handoff decisions as explained later. An\nexample of the HTT is given in Table 1.\nTable 1: An example of HandoffTrigger Table\nX\n1\n+ D > x > X\n1\n- D\nY\n1\n+ D > y > Y\n1\n- D\nX\n2\n+ D > x > X\n2\n- D\nY\n2\n+ D > y > Y\n2\n- D\n\n\nX\nn\n+ D > x > X\nn\n- D Y\nn\n+ D > y > Y\nn\n- D\nIn the HTT, the information of the locations where an\nMT needs to handoff is given. We defined Black Holes (BH)\nthe small areas in which the received signal strength at an\nMT decreases abruptly and the link to the AP breaks in a\nvery short time. 
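To make the structure of Table 1 concrete, the following Python sketch stores the HTT as a list of black-hole centers and answers whether a reported MT position lies inside any proactive area of half-width D (the role of D is explained in the text that follows). The class and method names are illustrative, not taken from an actual AP implementation.

class HandoffTriggerTable:
    """Each entry is a square proactive area of half-width D centered on a
    black hole at (X_i, Y_i), as in Table 1."""
    def __init__(self, half_width):
        self.D = half_width
        self.centers = []                 # [(X_i, Y_i), ...]

    def add_black_hole(self, x, y):
        self.centers.append((x, y))

    def in_proactive_area(self, x, y):
        # True if X_i - D < x < X_i + D and Y_i - D < y < Y_i + D for some i
        return any(abs(x - cx) < self.D and abs(y - cy) < self.D
                   for cx, cy in self.centers)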
The HTT is initialized when the AP is installed in the WLAN and is then dynamically updated. In the initial stage, MTs hand off in the traditional way, i.e., when the received signal strength decreases to a threshold. In addition, the MT sends a handoff notification to the AP, and the AP records the coordinates of the handoff location in the HTT. Usually a BH is not a single point but a small area; therefore, for each BH, the HTT will store many coordinates of points where vertical handoffs occurred, and the AP will try to merge these coordinates into a corresponding proactive area and put it into the HTT. After the initialization stage, the HTT contains only the descriptions of proactive areas instead of individual BH coordinates. Meanwhile, the AP regularly checks the HTT to decide whether any MT has entered a proactive area. When a new BH appears, the AP will be able to record its proactive area in the HTT after some vertical handoffs take place near this BH, in the same way as in the initial stage. When a BH disappears for some reason (e.g., restructuring in the WLAN), no more vertical handoffs will occur in the corresponding proactive area; as a result, the AP removes the entry of this proactive area from the HTT after some predefined time, which can be decided by the system administrator. With the above methods, the HTT is maintained dynamically and can adapt to changes in the environment.

Figure 1: An example of a WLAN with Black Holes

3.2 Proactive Handoff Scheme

Based on the Handoff Trigger Table, we propose a proactive handoff scheme to achieve better handoff performance. In the traditional scheme, the handoff decision mechanism is that a handoff is triggered when the current RSS is lower than a threshold (THm) or the RSS from the candidate network is higher than a threshold (THw). The procedure is illustrated in Fig. 2.

Figure 2: Traditional handoff scheme for hybrid networks

In the proposed scheme, we divide the handoff into two stages: a proactive handoff stage and a handoff stage. In the proactive handoff stage, when an MT moves into a proactive area, the AP sends it a proactive handoff message. Following that, the MT sends out a binding update message to the AP and starts the network-layer handoff. The procedure is given in Fig. 3.

Figure 3: Proactive handoff scheme for hybrid networks

When an MT enters the WLAN, the RSS is measured in sampling intervals and its averages are computed. At the same time, the AP checks whether the MT is in any proactive area. If it is, the AP sends a control message, Pre-Handoff-CMD, to the MT. After receiving this message, the MT starts to send the cellular network a message to request a connection. When the signal decreases to a certain threshold R_t, the link-layer handoff starts and the MT switches its connection to the cellular network. If the received signal strength decreases to R_t but the MT has not received a Pre-Handoff-CMD, it sends a message to inform the AP to add this location coordinate to the HTT. The threshold R_t is a design parameter that can be set by the administrator to best enable the vertical handoff procedure under the WLAN-specific physical situation.
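The two stages just described can also be outlined in code. The sketch below builds on the HandoffTriggerTable/ProactiveArea sketch given earlier and is only a schematic reading of Fig. 3: the message names (Pre-Handoff-CMD, binding update, handoff notification) follow the text, while the method names, the polling step and the crude area-learning rule are our assumptions.

# Schematic AP-side and MT-side behaviour for the proactive handoff scheme.
# Reuses ProactiveArea / HandoffTriggerTable from the earlier sketch; all
# method names and the learning rule are illustrative assumptions.

class AccessPoint:
    def __init__(self, htt):
        self.htt = htt

    def poll_terminal(self, mt):
        # Proactive stage: if the MT's reported location is inside a proactive
        # area, notify it with a Pre-Handoff-CMD regardless of its current RSS.
        if self.htt.in_proactive_area(mt.x, mt.y):
            mt.receive_pre_handoff_cmd()

    def record_handoff_location(self, x, y, d=0.5):
        # An MT handed off without a Pre-Handoff-CMD: learn the new BH by
        # adding a proactive area around the reported coordinates (a real AP
        # would merge many reports before creating one entry).
        self.htt.add_area(ProactiveArea(x, y, d))

class MobileTerminal:
    def __init__(self, x, y, r_t=-85.0):
        self.x, self.y = x, y
        self.r_t = r_t                    # link-layer handoff threshold R_t (dBm)
        self.pre_handoff_started = False

    def receive_pre_handoff_cmd(self):
        # Start the network-layer handoff (binding update / connection request
        # to the cellular network) before the WLAN link actually degrades.
        self.pre_handoff_started = True

    def on_rss_average(self, rss, ap):
        # Handoff stage: when the averaged RSS drops below R_t, perform the
        # link-layer switch; report the location if the HTT missed this BH.
        if rss < self.r_t:
            if not self.pre_handoff_started:
                ap.record_handoff_location(self.x, self.y)
            return "switch to cellular"
        return "stay in WLAN"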
4. PERFORMANCE EVALUATION

In this section, we compare by simulation the performance of the proposed vertical handoff scheme with the HTT and the traditional handoff scheme. Consider the simplified network model for 3G/WLAN networks shown in Fig. 1. We assume that the 3G network covers the WLAN area. The RSS of the WLAN signal at an MT is modeled as a function of the distance d between the MT and the AP [11]:

RSS(d) = PT - L - 10 n log(d) + f(μ, σ) dBm,

where PT is the transmitted power, L is a constant signal power loss, n is the path loss exponent, and f(μ, σ) represents shadow fading, modeled as a Gaussian random variable with mean μ (zero in our simulations) and standard deviation σ. In the WLAN there are a number of BHs, in which the RSS decreases to almost zero immediately and the MT has to hand off to the 3G network.

The popular random waypoint mobility model [12] is used to simulate the mobility of the MTs in the WLAN. At the beginning, the MTs' positions in the WLAN are uniformly distributed. Each MT moves at a random speed V, uniformly distributed in [Vmin, Vmax], and in a random direction. After a random time Tm, the MT stops and stays for a random time Ts, with both Tm and Ts uniformly distributed in [0, 2 s]; the MT then continues its movement as described above. Any MT that moves out of the WLAN is eliminated, and a new MT is generated in the WLAN at a randomly chosen location. Other parameters used in the simulations are given in Table 2.

Table 2: Parameters for the numerical examples
Parameter        Value
PT               100 mW
R_t              -85 dBm
n                3.3
σ                7 dB
Handoff time     2 s
Velocity range   0 - 2 m/s
Number of BHs    3
WLAN area        100 m x 100 m

We define the transition time as the time from when an MT starts the handoff to the moment it moves into a BH, and the handoff time as the time from when an MT sends out the binding update message to the moment it receives the first packet from the base station. A call is dropped only when the transition time is less than the handoff time.
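A minimal sketch of this simulation model is given below: the log-distance RSS model with shadowing and the dropping condition. Parameter values follow Table 2 where stated; the conversion of PT = 100 mW to dBm and the value of the constant loss L are our own assumptions, since L is not specified in the text.

# Minimal sketch of the simulation model: log-distance path loss with
# log-normal shadowing and the call-dropping condition. Values follow Table 2
# where given; L and the mW-to-dBm conversion are illustrative assumptions.
import math
import random

PT_DBM = 10 * math.log10(100.0)   # 100 mW transmit power expressed in dBm (= 20 dBm)
L = 40.0                          # constant signal power loss in dB (assumed value)
N_EXP = 3.3                       # path-loss exponent n
SIGMA = 7.0                       # shadow-fading standard deviation in dB (zero mean)
HANDOFF_TIME = 2.0                # seconds, from binding update to first packet from the BS

def rss(d):
    # RSS(d) = PT - L - 10 n log(d) + f(mu, sigma), in dBm
    return PT_DBM - L - 10 * N_EXP * math.log10(d) + random.gauss(0.0, SIGMA)

def call_dropped(transition_time, handoff_time=HANDOFF_TIME):
    # A call is dropped only if the MT reaches the BH before the handoff completes.
    return transition_time < handoff_time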
In Figs. 4, 5 and 6, we present the dropping probabilities of the proposed scheme and of the traditional handoff scheme for different proactive distances D and user mobility rates V.

Figure 4: Effect of user mobility

Fig. 4 illustrates the effect of user mobility on the connection dropping probability. In this figure, the dropping probabilities for proactive distances D = 0.4 and D = 0.5 are given; the handoff time is 2 s. From the figure, we can see that the dropping probability is reduced by the explicit handoff trigger in the proposed scheme, because mobile terminals have more time to execute the handoff. When the proactive distance increases, the dropping probability decreases; this is because a larger proactive distance gives the MT more time to execute the handoff. We can also see that when the MTs move faster, the dropping probability increases, because less time is available for the MT to execute the vertical handoff for the same D. Compared with the HTT scheme, the traditional scheme is more sensitive to the user mobility rate, and its dropping probability increases rapidly as the mobility increases.

Figure 5: Effect of proactive distance

Fig. 5 shows that as the proactive distance increases, the dropping probability of the HTT scheme decreases very quickly; it is quite sensitive to the increase in distance. When the user mobility rate increases, the dropping probability also increases, which conforms with the results in Fig. 4. Fig. 6 shows the impact of the handoff time (within a practical value range) on the dropping probability. For a given transition time, if the handoff time increases, the dropping probability is higher, as expected.

Figure 6: Effect of vertical handoff time

Fig. 7 shows the impact of the distance between the AP and a BH on the dropping probability. The signal strength decays with the distance to the AP in WLANs. In the proposed scheme, MTs start the network-layer handoff in advance according to the parameter D and regardless of the RSS, so the performance of the proposed scheme is independent of the distance d between the BH and the AP. However, for a given RSS threshold, the performance of the traditional handoff scheme relies heavily on d, and it performs better for BHs that are far from the AP. This is because the signal strength is relatively high near the AP, and the MT will normally find that its RSS is above the threshold; as the MT enters the BH, the signal strength degrades so abruptly that the MT does not have enough time to complete the handoff, which leads to a high dropping probability. In contrast, the RSS of an MT moving around a BH that is far from the AP will be relatively low and may be close to the threshold, so a handoff decision may easily be triggered before the MT enters the BH. In this case, the MT has a longer time to conduct the handoff procedure, and a relatively lower dropping probability can be achieved.

Figure 7: Effect of distance between AP and BH

From Fig. 7, we can also see that for the proposed scheme the proactive distance D should be set properly.
A similar requirement applies to the setting of the threshold in the traditional scheme. If D or the threshold is set too high, there will be many unnecessary handoffs, although the dropping probability will decrease. On the other hand, if D or the threshold is set too low, the dropping probability will increase. However, the dependence of the traditional handoff scheme on d cannot be eliminated by simply adjusting the threshold, which makes the selection of a proper threshold in the traditional scheme even more difficult.

We further study the performance of the two schemes in the WLAN with different numbers of BHs. The locations of the BHs are set to be uniformly distributed within the WLAN. From Fig. 8, it can be seen that for both schemes the dropping probability is not sensitive to the number of BHs when their locations are uniformly distributed. The case of BHs with a non-uniform distribution is left as future work.

Figure 8: Effect of number of BHs

5. CONCLUSIONS

In this paper, vertical handoff from the WLAN to the 3G cellular network has been investigated. To support proper handoff decisions, a Handoff Trigger Table (HTT) has been proposed, implemented at the AP of the WLAN to record the location information of BHs. Based on this table, a proactive handoff scheme has been proposed. Simulation results have shown that, with the information in the HTT, vertical handoff decisions can be made more efficiently and the dropping probability can be decreased significantly. Possible future work includes making the proactive distance D a variable for different environments and further investigating unnecessary handoff events in different pre-handoff schemes. The HTT scheme can also be used in system discovery to help reduce MT power consumption.

ACKNOWLEDGEMENTS

This work has been supported jointly by the Natural Sciences and Engineering Research Council (NSERC) of Canada under Strategic Grant No. 257682 and Research In Motion (RIM).

REFERENCES

[1] Q. Zhang, C. Guo, Z. Guo and W. Zhu, "Efficient mobility management for vertical handoff between WWAN and WLAN", IEEE Communications Magazine, Vol. 41, No. 11, Nov. 2003, pp. 102-108.
[2] W.-T. Chen, J.-C. Liu and H.-K. Huang, "An adaptive scheme for vertical handoff in wireless overlay networks", Proc. International Conference on Parallel and Distributed Systems 2004, 7-9 July 2004, pp. 541-548.
[3] P. Khadivi, T.D. Todd and D. Zhao, "Handoff trigger nodes for hybrid IEEE 802.11 WLAN/cellular networks", Proc. QSHINE 2004, pp. 164-170.
[4] A. Majlesi and B.H. Khalaj, "An adaptive fuzzy logic based handoff algorithm for interworking between WLANs and mobile networks", Proc. PIMRC'02, Sept. 2002, pp. 2446-2451.
[5] K. Pahlavan et al., "Handoff in hybrid mobile data networks", IEEE Personal Communications, Vol. 7, No. 2, April 2000, pp. 34-47.
[6] J.-Z. Sun, J. Sauvola and D. Howie, "Features in future: 4G visions from a technical perspective", Proc. GLOBECOM'01, Nov. 2001, pp. 3533-3537.
[7] P. Bahl and V. Padmanabhan, "RADAR: An in-building RF-based user location and tracking system", Proc. IEEE INFOCOM'00, Tel-Aviv, Israel, 2000, pp. 775-784.
[8] P. Krishnan, A.S. Krishnakumar, W.-H. Ju, C. Mallows and S. Ganu, "A system for LEASE: location estimation assisted by stationary emitters for indoor RF wireless networks", Proc. IEEE INFOCOM'04, Hong Kong, 2004, pp. 1001-1011.
[9] K.-I. Itoh, S. Watanabe, J.-S. Shih and T. Sato, "Performance of handoff algorithm based on distance and RSSI measurements", IEEE Trans. Vehic. Technol., Vol. 51, No. 6, Nov. 2002, pp. 1460-1468.
[10] J. Makela, M. Ylianttila and K. Pahlavan, "Handoff decision in multi-service networks", Proc. PIMRC'00, Sept. 2000, pp. 655-659.
[11] A.H. Zahran and B. Liang, "Performance evaluation framework for vertical handoff algorithms in heterogeneous networks", Proc. ICC 2005, May 2005, pp. 173-178.
[12] C. Bettstetter, H. Hartenstein and X. Perez-Costa, "Stochastic properties of the random waypoint mobility model", Wireless Networks, Vol. 10, No. 5, Sept. 2004, pp. 555-567.

Keywords: WLAN; integrated networks; vertical handoff; cellular network; 3G; wireless communications; handoff trigger table; wireless networks